Q_Id (int64: 337 to 49.3M) | CreationDate (stringlengths: 23 to 23) | Users Score (int64: -42 to 1.15k) | Other (int64: 0 to 1) | Python Basics and Environment (int64: 0 to 1) | System Administration and DevOps (int64: 0 to 1) | Tags (stringlengths: 6 to 105) | A_Id (int64: 518 to 72.5M) | AnswerCount (int64: 1 to 64) | is_accepted (bool: 2 classes) | Web Development (int64: 0 to 1) | GUI and Desktop Applications (int64: 0 to 1) | Answer (stringlengths: 6 to 11.6k) | Available Count (int64: 1 to 31) | Q_Score (int64: 0 to 6.79k) | Data Science and Machine Learning (int64: 0 to 1) | Question (stringlengths: 15 to 29k) | Title (stringlengths: 11 to 150) | Score (float64: -1 to 1.2) | Database and SQL (int64: 0 to 1) | Networking and APIs (int64: 0 to 1) | ViewCount (int64: 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
38,561,207 | 2016-07-25T06:42:00.000 | 0 | 0 | 0 | 0 | python,django,liclipse | 38,581,034 | 1 | false | 1 | 0 | Instead of adding the django package as an external library, add the folder that contains django. For example, if the folder hierarchy is something like /site-packages/django, then add site-packages as the external library, not django. | 1 | 0 | 0 | I am creating my first Django project from docs.djangoproject.com. After completing tutorial 4, I tried to import my project into LiClipse, but LiClipse shows an Unresolved Import error even though my project works perfectly fine.
I have added django as an external library.
Please help me with this issue.
LiClipse shows errors only with Django libraries, not with any other Python library. | Django library Unresolved Import LiClipse | 0 | 0 | 0 | 66 |
38,561,304 | 2016-07-25T06:48:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,mxnet | 45,135,108 | 2 | false | 0 | 0 | You can probably feed the data.
You will need to use MXNet iterators to get the data out of the records, and then cast each record to something that TensorFlow understands. | 1 | 1 | 1 | I have created MXNet .rec data through im2rec. I would like to feed this into TensorFlow. Is it possible, and how would I do that? Any ideas? | Feed Mxnet Rec to Tensorflow | 0.099668 | 0 | 0 | 365 |
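A hedged sketch of that approach, assuming the records hold images and a TensorFlow 1.x-style graph with placeholders; the file name, shapes, and batch size are assumptions:

```python
import mxnet as mx
import tensorflow as tf

# Read the .rec file with an MXNet iterator (path and shapes are assumptions)
train_iter = mx.io.ImageRecordIter(path_imgrec='data.rec',
                                   data_shape=(3, 224, 224),
                                   batch_size=32)

images = tf.placeholder(tf.float32, [None, 3, 224, 224])
mean_pixel = tf.reduce_mean(images)  # stand-in for a real TensorFlow graph

with tf.Session() as sess:
    for batch in train_iter:
        # Cast the MXNet NDArray to a NumPy array TensorFlow accepts
        x = batch.data[0].asnumpy()
        print(sess.run(mean_pixel, feed_dict={images: x}))
```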
38,561,479 | 2016-07-25T06:59:00.000 | 0 | 0 | 0 | 0 | java,python,maven,deployment,ant | 38,612,101 | 2 | false | 1 | 0 | Everyone suggests setting the classpath to wasanttask.jar or com.ibm.websphere.v61_6.1.100.ws_runtime.jar
to get the details,
but no jar with either of those names is available in WAS 8.5. | 1 | 0 | 0 | I'm new to Jython and Python scripting.
My requirement is to deploy a WAR file from a Windows client to a Windows server using scripts.
I have done this with Ant in a local environment, but despite some research I did not find a solution for remote deployment.
That's why I moved to Jython scripting; local deployment works there too,
but remote deployment is still not working.
Can you please share any ideas on how to deploy the WAR file from my environment to a remote location? | war/ear file deployment using jython/python scripting from a remote location in WebSphere | 0 | 0 | 0 | 1,027 |
38,562,310 | 2016-07-25T07:50:00.000 | 1 | 1 | 1 | 0 | python | 38,562,772 | 1 | false | 0 | 0 | There is no need for sudo if you want to install packages locally. Generally, you should always use a virtualenv; once that is activated, all packages install within that virtualenv only, with no need for admin privileges. | 1 | 0 | 0 | I want to install some python packages on a remote server where I can actually log in and work on some existing python packages. Sometimes I need new python packages like easydict, then I have to install it. However, since I don't have access to the root (I mean I cannot sudo). How to solve this problem? Is it impossible to debug on someone else's computer where you cannot even "sudo"? | Install python packages on a remote server without access to root | 0.197375 | 0 | 0 | 176 |
38,566,435 | 2016-07-25T11:22:00.000 | 0 | 0 | 0 | 0 | python,apache-superset | 38,626,662 | 1 | true | 0 | 0 | I found the answer:
In the Time Filter, you can just type "1 day", "2 days", or a date string in the Since and Until fields. | 1 | 1 | 0 | Recently I started using Caravel to build a log analytics dashboard, and I find the Caravel time filter awkward to use. For example:
I can't display all data without a time filter.
The time filter is too coarse, so I can't get data between two specific datetimes.
Is there any way to resolve these problems? Thank you very much! | How to use Airbnb Caravel Time Filter | 1.2 | 0 | 0 | 176 |
38,570,535 | 2016-07-25T14:31:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework | 39,084,030 | 3 | false | 1 | 0 | I'm usually a huge proponent for DRF. It's simple to implement an easy use case, yet INCREDIBLY powerful for more complex uses.
However, if you are not using Django models for all your data, I think JsonResponse might be easier. Running queries and manual manipulation (especially if it is only a single endpoint) might be the way to go.
Sorry for not weighing in on the other part of the question. | 2 | 0 | 0 | I'm currently working on improving a Django project that is used internally at my company. The project is growing quickly, so I'm trying to make some design choices now, before it becomes unmanageable to refactor. Right now the project has two really important models; the rest of the data that supports each application in the project is added to the database through various separate ETL processes. Because of this, the majority of the data used in the application is queried in each view via SQLAlchemy using a plain multiline SQL string, with the results passed to the view via the context param when rendering, rather than using the Django ORM.
Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs. using SQLAlchemy and query strings?
I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I'm unsure of how to structure the API. I've read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I've also read that others do this same view-based API but simply include an api.py file in each application in their Django project. Others use the Django REST Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a user's interaction does anything but GET data from the database, and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:
Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs. using the Django REST Framework?
Thank you in advance for any resources or advice anyone has regarding these questions. Please let me know if I can add any additional context. | Advice on structuring a growing Django project (Models & API) | 0 | 0 | 0 | 429 |
38,570,535 | 2016-07-25T14:31:00.000 | 2 | 0 | 0 | 0 | python,django,django-rest-framework | 38,571,117 | 3 | false | 1 | 0 | Would there be a distinct advantage in building models for all the
tables that are populated via ETL processes so I can start using the
Django ORM vs using SQLAlchemy and query strings?
Yes, a centralized, consistent way of accessing the data, and of course, one less dependency on the project.
Which of these API structures is the most typical, and what do I
gain/lose by implementing JsonResponse views as an API vs using the
Django REST Framework?
In general terms, JSON is used for data, and REST for APIs. You mentioned that Django REST Framework is already in use, so if there's any tangible benefit from having a REST API, I'd go with it. | 2 | 0 | 0 | I'm currently working on improving a Django project that is used internally at my company. The project is growing quickly, so I'm trying to make some design choices now, before it becomes unmanageable to refactor. Right now the project has two really important models; the rest of the data that supports each application in the project is added to the database through various separate ETL processes. Because of this, the majority of the data used in the application is queried in each view via SQLAlchemy using a plain multiline SQL string, with the results passed to the view via the context param when rendering, rather than using the Django ORM.
Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs. using SQLAlchemy and query strings?
I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I'm unsure of how to structure the API. I've read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I've also read that others do this same view-based API but simply include an api.py file in each application in their Django project. Others use the Django REST Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a user's interaction does anything but GET data from the database, and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:
Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs. using the Django REST Framework?
Thank you in advance for any resources or advice anyone has regarding these questions. Please let me know if I can add any additional context. | Advice on structuring a growing Django project (Models & API) | 0.132549 | 0 | 0 | 429 |
38,572,860 | 2016-07-25T16:19:00.000 | 0 | 1 | 0 | 0 | python-2.7,gnome-terminal,festival | 41,790,004 | 4 | false | 0 | 0 | Consider using the Festival utility text2wave to write the audio as a file, then play the file using sox with the speed and pitch effects. To slow the audio down you will need a speed value less than one, and compensate for the effect on pitch with a positive value in pitch. | 1 | 3 | 0 | I want festival tts to read a bit slower, can anyone help me with that?
I use python 2.7 and I run the code in gnome-terminal. | Can festival tts's speed of speech be changed? | 0 | 0 | 0 | 3,733 |
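A small sketch of that pipeline driven from Python via subprocess; the file names and the speed/pitch values are arbitrary assumptions:

```python
import subprocess

# Render the text to a WAV file with Festival's text2wave utility
subprocess.call('echo "hello world" | text2wave -o speech.wav', shell=True)

# speed < 1 slows playback (and lowers pitch); a positive pitch value
# (in cents) compensates for that drop -- 0.8 and 300 are arbitrary
subprocess.call(['sox', 'speech.wav', 'slow.wav',
                 'speed', '0.8', 'pitch', '300'])

subprocess.call(['play', 'slow.wav'])  # 'play' ships with sox
```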
38,575,369 | 2016-07-25T18:50:00.000 | 0 | 0 | 0 | 0 | python,json,r,api,web-scraping | 39,146,729 | 1 | false | 0 | 0 | It is possible to add more than one flight to the request; see the Google developer tutorial for QPX. I am not sure, though, how many flights fit in one request. | 1 | 0 | 0 | I'm looking to use this API for Google Flights to gather some flight data for a project I hope to complete. I have one question, though: can anyone see a way to request multiple dates for the same route in just one call, or does it have to be multiple requests?
Thanks so much. I have seen it suggested that this is possible but haven't found any evidence. :) | QPX. How many returns per query? | 0 | 0 | 1 | 54 |
38,576,350 | 2016-07-25T19:57:00.000 | 3 | 0 | 1 | 0 | django,python-3.x | 38,576,546 | 2 | true | 1 | 0 | You can simply put mypythonfile.py in the same directory as your views.py file, then do from mypythonfile import mystuff in your views.py. | 1 | 4 | 0 | I'm currently using Django and putting Python code in my views.py to run it on my web pages. There is an excerpt of code that requires a certain class from a Python file; it works sort of like a package, but it is just a Python file I was given in order to execute a certain piece of code. How would I be able to reference the class from my Python file in the Django views.py file? I have tried putting the Python file in the site-packages folder in my Anaconda3 folder and have tried just using from [name of python file] import [class name] in the views.py file, but it does not seem to recognize that the file exists in the site-packages folder. I also tried putting the Python file in the Django personal folder and using from personal import [name of file], but that doesn't work either. | How to import a class from a python file in django? | 1.2 | 0 | 0 | 9,514 |
38,577,126 | 2016-07-25T20:45:00.000 | 16 | 0 | 1 | 0 | python,pandas | 48,574,315 | 3 | false | 0 | 0 | You can simply use df.columns = df.columns.map(str)
DSM's first answer, df.columns = df.columns.astype(str), didn't work for my DataFrame (I got TypeError: Setting dtype to anything other than float64 or object is not supported). | 1 | 42 | 1 | I have a pandas DataFrame with mixed column names:
1, 2, 3, 4, 5, 'Class'
When I save this DataFrame to an HDF5 file, it warns that performance will be affected due to the mixed types. How do I convert the integer column names to strings in pandas? | Convert Column Name from int to string in pandas | 1 | 0 | 0 | 62,888 |
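A minimal, self-contained illustration of the two approaches from the answers above:

```python
import pandas as pd

df = pd.DataFrame([[0.1, 0.2, 'a']], columns=[1, 2, 'Class'])

df.columns = df.columns.map(str)       # robust for mixed-type column names
# df.columns = df.columns.astype(str)  # alternative; raises on some versions

print(df.columns.tolist())  # ['1', '2', 'Class']
```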
38,580,135 | 2016-07-26T02:24:00.000 | 0 | 0 | 1 | 0 | python-3.x,pip | 38,612,536 | 2 | true | 0 | 0 | Open command prompt by typing Win + R and then type in cmd
then type
py -m pip --version or type python -m pip --version
This will output something like pip 8.1.2 from {Install Directory}\lib\site-packages (python 3.5).
A note of warning, though: typing --v won't work, because it is ambiguous between --verbose and --version. | 1 | 0 | 0 | I installed Python 3.5.1 on Windows 7. During the install I selected the option to install pip; however, I have no idea how to verify that pip was actually installed. I searched the site for answers but was unsuccessful. I apologize in advance if this question was already asked; if it was and you can point me to the specific post, that would be great. Just looking for some help. | Verifying pip version for python 3.5.1 on Windows 7 | 1.2 | 0 | 0 | 98 |
38,580,135 | 2016-07-26T02:24:00.000 | 0 | 0 | 1 | 0 | python-3.x,pip | 38,628,997 | 2 | false | 0 | 0 | Try pip --version
It should work. | 2 | 0 | 0 | I installed Python 3.5.1 on Windows 7. During the install I selected the option to install pip; however, I have no idea how to verify that pip was actually installed. I searched the site for answers but was unsuccessful. I apologize in advance if this question was already asked; if it was and you can point me to the specific post, that would be great. Just looking for some help. | Verifying pip version for python 3.5.1 on Windows 7 | 0 | 0 | 0 | 98 |
38,580,816 | 2016-07-26T04:00:00.000 | 0 | 0 | 0 | 0 | python,http,web.py | 38,581,743 | 1 | false | 1 | 0 | I think the redirect is in an infinite loop. Maybe your variable isn't saved in the session after web.seeother is called. | 1 | 0 | 0 | I'm making a web.py app. I'm using an unloadhook function to check, on each call, whether a certain var is in the session.
I need to redirect (to index) if it's not there. However, Firefox gives me the message that the redirect will never terminate when I call web.seeother in the unloadhook function. I can correctly detect both cases in the unloadhook and handle the case where the var is in the session, but not the other one.
def xCheck():
    if 'x' in session:
        print >> sys.stderr, "x in"
        print >> sys.stderr, str(dict(session))
        return
    else:
        print >> sys.stderr, "x out"
        return web.seeother('/')

app.add_processor(web.unloadhook(xCheck)) | redirect in loadhook in web.py | 0 | 0 | 0 | 96 |
38,585,719 | 2016-07-26T09:13:00.000 | 1 | 0 | 0 | 0 | python,django,database,postgresql,saas | 38,587,539 | 1 | true | 1 | 0 | Store one at a time until you absolutely cannot anymore, then design something else around your specific problem.
SQL is a declarative language, meaning "give me all records matching X" doesn't tell the db server how to do this. Consequently, you have a lot of ways to help the db server do this quickly even when you have hundreds of millions of records. Additionally RDBMSs are optimized for this problem over a lot of years of experience so to a certain point, you will not beat a system like PostgreSQL.
So as they say, premature optimization is the root of all evil.
So let's look at two ways PostgreSQL might go through a table to give you the results.
The first is a sequential scan, where it iterates over a series of pages, scans each page for the values and returns the records to you. This works better than any other method for very small tables. It is slow on large tables. Complexity is O(n) where n is the size of the table, for any number of records.
So a second approach might be an index scan. Here PostgreSQL traverses a series of pages in a b-tree index to find the records. Complexity is O(log(n)) to find each record.
Internally PostgreSQL stores the rows in batches with fixed sizes, as pages. It already solves this problem for you. If you try to do the same, then you have batches of records inside batches of records, which is usually a recipe for bad things. | 1 | 1 | 0 | I am writing a Django application that will have entries entered by users of the site. Now suppose that everything goes well, and I get the expected number of visitors (unlikely, but I'm planning for the future). This would result in hundreds of millions of entries in a single PostgreSQL database.
As iterating through such a large number of entries and checking their values is not a good idea, I am considering ways of grouping entries together.
Is grouping entries in to sets of (let's say) 100 a better idea for storing this many entries? Or is there a better way that I could optimize this? | Storing entries in a very large database | 1.2 | 1 | 0 | 66 |
38,586,396 | 2016-07-26T09:43:00.000 | 0 | 0 | 1 | 0 | javascript,python,json,csv,geojson | 38,633,221 | 2 | false | 0 | 0 | I was able to write a conversion script, and it's working now, thanks! | 1 | 0 | 0 | I am currently working on a project that involves the Google Maps API. To display data on the map, the file needs to be in GeoJSON format. So far, to accomplish this, I have been using two programs: one in JavaScript that converts a .json to a CSV, and another that converts a CSV to a GeoJSON file, which can then be dropped on the map. However, I need to make both processes seamless, so I am trying to write a Python script that checks the format of the file, converts it using the above programs, and outputs the result. I tried many JavaScript-to-Python converters, and even though the files were converted, for the past week I kept getting multiple errors showing that the converted program does not work at all, and I have not been able to find a way around it. I have only seen articles that discuss how to call a single JavaScript function from within a Python script, which I understand, but this program has a lot of functions, so I was wondering how to call the entire JavaScript program from within Python and pass it the filename to achieve the end result. Any help is greatly appreciated. | How to execute an entire Javascript program from within a Python script | 0 | 0 | 1 | 196 |
38,586,767 | 2016-07-26T09:59:00.000 | 1 | 0 | 0 | 1 | python,celery | 38,587,766 | 1 | true | 1 | 0 | You can get it from the _cache attribute of the AsyncResult after you have accessed res.result,
for example:
res._cache['date_done'] | 1 | 1 | 0 | I need to trace the status of the tasks. I can get the 'state' and 'info' attributes from the AsyncResult object; however, it looks like there's no way to get the 'date_done'. I use MySQL as the result backend, so I can find the date_done column in the taskmeta table, but how can I get the task completion date directly from the AsyncResult object? Thanks. | Celery: How to get the task completed time from AsyncResult | 1.2 | 0 | 0 | 696 |
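A short, hedged sketch of that: _cache is a private AsyncResult attribute, so this relies on Celery internals and may change between versions (task_id and app are assumed to exist already):

```python
from celery.result import AsyncResult

res = AsyncResult(task_id, app=app)  # task_id and app are assumptions
value = res.result                   # backend lookup fills res._cache
print(res._cache['date_done'])       # completion timestamp from the backend
```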
38,587,600 | 2016-07-26T10:38:00.000 | 2 | 1 | 0 | 0 | python,naming-conventions,naming | 38,587,644 | 1 | true | 0 | 0 | There is no such word as "autoreply", so you should name it send_auto_reply. | 1 | 0 | 0 | I am unsure whether to name a method send_auto_reply() or send_autoreply().
What guidelines can be applied here?
It's Python, but AFAIK this should not matter. | Naming method: send_auto_reply() vs send_autoreply() | 1.2 | 0 | 0 | 29 |
38,588,000 | 2016-07-26T10:57:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,boto,amazon-swf | 38,601,627 | 1 | false | 1 | 0 | There is no delay option when scheduling an activity. The solution is to schedule a timer with delay based on activity execution count and when the timer fires schedule an activity execution. | 1 | 0 | 0 | I am using python boto library to implement SWF.
We are simulating a workflow where we want to execute the same task 10 times. After the 10th execution, the workflow will be marked complete.
The problem is, we want to specify an interval for execution which varies based on the execution count. For example: 5 minutes for 1st execution, 10 minutes for 2nd execution, and so on.
How do I schedule a task by specifying the time at which it should execute? | Amazon SWF to schedule task | 0.197375 | 0 | 0 | 218 |
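A hedged sketch of that decider logic with boto's SWF layer; the delay math, IDs, and activity names are assumptions, and the Layer1Decisions calls should be checked against your boto version:

```python
from boto.swf.layer1_decisions import Layer1Decisions

def decide(execution_count):
    """Decider step: start a timer, then schedule the activity when it fires."""
    d = Layer1Decisions()
    if execution_count < 10:
        delay = 300 * (execution_count + 1)  # 5 min, 10 min, ... (assumption)
        d.start_timer(start_to_fire_timeout=str(delay),
                      timer_id='run-%d' % execution_count)
        # On the next decision task, after the TimerFired event, schedule:
        # d.schedule_activity_task('task-%d' % execution_count,
        #                          'MyActivity', '1.0')
    else:
        d.complete_workflow_execution()
    return d
```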
38,588,970 | 2016-07-26T11:42:00.000 | 13 | 0 | 1 | 0 | python,python-3.x,pycharm | 38,589,030 | 1 | true | 0 | 0 | Settings -> Editor -> Inspections -> Python -> Code compatibility inspection
Either disable it entirely or unselect Python 2.X | 1 | 7 | 0 | I would like to use python3 syntax in PyCharm 2016.2 and have configured the interpreter to use python3.5. The code runs fine when I execute it but PyCharm complains about python3 syntax that isn't supported in python2.
How can I convince PyCharm that the python3 syntax is ok? | Python3 syntax in PyCharm | 1.2 | 0 | 0 | 2,561 |
38,589,963 | 2016-07-26T12:30:00.000 | 0 | 0 | 0 | 0 | python,sqlite | 38,715,072 | 1 | false | 0 | 0 | Peewee will use either the standard library sqlite3 module or, if you did not compile Python with SQLite, Peewee will look for pysqlite2.
The problem is most definitely not with Peewee on this one, as Peewee requires a SQLite driver to use the SqliteDatabase class... If that driver does not exist, then you need to install it. | 1 | 1 | 0 | I have python 2.7.12 installed on my server. I'm using PuTTY to connect to my server. When running my python script I get the following.
File "home/myuser/python/lib/python2.7/site-packages/peewee.py", line 3657, in _connect
raise ImproperlyConfigured('pysqlite or sqlite3 must be installed.')
peewee.ImproperlyConfigured: pysqlite or sqlite3 must be installed.
I thought sqlite was installed with python 2.7.12, so I'm assuming the issue is something else. Haven't managed to find any posts on here yet that have been helpful.
Am I missing something?
Thanks in advance | Python - pysqlite or sqlite3 must be installed | 0 | 1 | 0 | 955 |
38,590,391 | 2016-07-26T12:50:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,csv,beautifulsoup,export | 38,590,656 | 1 | false | 1 | 0 | Here are a few hints:
When you have a string that you want to split on a given character, use str.split to get a list; you can then get the first value with lst[0].
Then, take a look at the csv module to do your export. | 1 | 0 | 0 | I scraped six different values with Python 3.5 using BeautifulSoup. Now I have the following six variables with values:
project_titles
project_href
project_desc
project_per
project_mon
project_loc
The data for e.g. "project_titles" looks like this:
['Formula Pi - Self-driving robot racing with the Raspberry Pi', 'The Superbook: Turn your smartphone into a laptop for $99'] --> separated by commas.
Now I want to export this data to a csv.
The Headlines should be in A1 (project_titles), B1 (project_href) and so on.
And in A2 I need the first value of "project_titles". In B2 the first value of "project_href".
I think I need a loop for this, but I couldn't get it to work. Please help me... | Export from python 3.5 to csv | 0 | 0 | 0 | 104 |
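A sketch of that export for Python 3.5, assuming the six lists from the question are already populated and equally long:

```python
import csv

headers = ['project_titles', 'project_href', 'project_desc',
           'project_per', 'project_mon', 'project_loc']
columns = [project_titles, project_href, project_desc,
           project_per, project_mon, project_loc]

with open('projects.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(headers)          # A1, B1, ... header cells
    writer.writerows(zip(*columns))   # row i holds the i-th value of each list
```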
38,590,876 | 2016-07-26T13:12:00.000 | 1 | 0 | 1 | 0 | revit-api,revitpythonshell | 38,599,242 | 2 | false | 0 | 0 | This is a great question Arnaud, in the past Ive done the following:
Create a text project parameter, and populate it with XML (yes you can have line breaks in a text parameter). This is similar to what Ideate BIM Link does (check the project parameters of any project that has used BIM Link). This is a long-winded method for keeping data persistent between commands.
The second part (saving the walls' IDs) is more difficult, I think; as I understand it, the IDs are reassigned every time you open a project. You could test this to see if it's the case.
Another method could involve using an external command that lingers after you have finished selecting walls. Could you give a little more info about what you're trying to achieve? | 1 | 1 | 0 | I'm currently using Revit Python Shell 2017 and I'd like, let's say, to make different canned commands "communicate".
For instance, let's say I load a house model, and I create some additional walls on it, via a canned command that I would have previously created. While creating these walls, I could store all these new walls IDs in a variable, as a list.
Now, if I want to delete exactly these walls afterwards, I'd like to identify them using their IDs that I stored in the list, then delete them.
If I was in an interactive Python Shell session, well the "IDs list" variable would still be accessible (as long as I don't close the shell), and I could just retrieve the IDs from it, then delete the walls.
But what if I'm using canned commands? The first command would be "create the walls", and the second would be "erase these walls". But that "IDs list" variable doesn't exist in the second canned command's environment, so I can't use it to erase the walls.
So, what would be the approach? Of course in this example I could identify the walls in the second command using a different methodology, such as asking the user to select them etc etc.. But the idea I'm going for would be the store that list from the first command "somewhere in Revit", and retrieve it when calling the second command.
I could write the list to an external text file, and read the file in the second command... but is there a cleaner way?
I'm sorry for the beginner's language used here, and hope that my question is clear enough! And that somebody can help ;)
Best,
Arnaud. | Is it possible to keep a variable active with Revit Python Shell, using canned commands? | 0.099668 | 0 | 0 | 141 |
38,591,547 | 2016-07-26T13:41:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,pip | 38,591,837 | 1 | false | 0 | 0 | when i try running the application the window will just open for brief second and then closes
Sounds like you are trying to open the pip.exe file and expect an interactive interface of some kind.
Unfortunately, that's not how you use pip. Open up cmd and type your pip commands there. If there is a problem, the command prompt will print an error and not close. | 1 | 0 | 0 | Using Python version 3.5 on Windows 10 64-bit,
I'm unable to run the pip command. When I try running the application, the window just opens for a brief second and then closes.
I already tried adding the directory to the PATH environment variable and rebooting the system - didn't work. | Can't open pip in Python version 3.5 | 0.197375 | 0 | 0 | 87 |
38,593,488 | 2016-07-26T15:03:00.000 | 0 | 0 | 0 | 1 | python,ubuntu | 38,618,691 | 1 | false | 0 | 0 | Update: I managed to solve my problem. I needed to make sure that all directory paths were correct, as I found that HTCondor was looking within its own files for the resources my submission program used. I therefore needed to define a variable in the .py file that contains the directory of the resource. | 1 | 0 | 0 | I am attempting to run a Python 2.7 program on HTCondor; however, after submitting the job and using 'condor_q' to check the job status, I see that the job is put in the 'held' state.
After querying using 'condor_q -analyse jobNo.' the error message is "Hold reason: Error from Ubuntu: Failed to execute '/var/lib/condor/execute/dir_12033/condor_exec.exe': (errno=8: 'Exec format error').
I am unsure how to resolve this error; any help would be much appreciated. As I am relatively new to HTCondor and Ubuntu, could any guidance be step-by-step and easy to follow?
I am running Ubuntu 16.04 and the latest release of HTCondor | Unable to submit python files to HTCondor- placed in 'held' | 0 | 1 | 0 | 197 |
38,593,744 | 2016-07-26T15:14:00.000 | 0 | 0 | 0 | 1 | python,sockets,python-3.x,getaddrinfo | 38,660,201 | 1 | false | 0 | 0 | socket.SOCK_STREAM should be passed in the type field. Using it in the proto field probably has a very random effect, which is what you're seeing. Proto only takes the IPPROTO constants. For a raw socket, you should use type = socket.SOCK_RAW. I'm not sure getaddrinfo supports that though, it's mostly for TCP and UDP.
It's probably better to have some actual code in your questions. It's much easier to see what's going on then. | 1 | 0 | 0 | I'm busy trying to use socket.getaddrinfo() to resolve a domain name. When I pass in:
host = 'www.google.com', port = 80, family = socket.AF_INET, type = 0, proto = 0, flags = 0
I get a pair of socket infos like you'd expect, one with SocketKind.SOCK_DGRAM (for UDP) and and the other with SocketKind.SOCK_STREAM (TCP).
When I set proto to socket.IPPROTO_TCP I narrow it to only TCP as expected.
However, when I use proto = socket.SOCK_STREAM (which shouldn't work) I get back a SocketKind.SOCK_RAW.
Also, Python won't let me use proto = socket.IPPROTO_RAW - I get 'Bad hints'.
Any thoughts on what's going on here? | Unexpected socket.getaddrinfo behavior in Python using SOCK_STREAM | 0 | 0 | 1 | 175 |
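To make the accepted point concrete, here is the call with each argument in its intended slot:

```python
import socket

# family, type, proto -- SOCK_STREAM belongs in the *type* slot
infos = socket.getaddrinfo('www.google.com', 80,
                           socket.AF_INET,      # family
                           socket.SOCK_STREAM,  # type: narrows to TCP
                           socket.IPPROTO_TCP)  # proto (optional here)

for family, sock_type, proto, canonname, sockaddr in infos:
    print(family, sock_type, proto, sockaddr)
```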
38,594,429 | 2016-07-26T15:44:00.000 | 0 | 0 | 0 | 0 | python,smartsheet-api | 38,619,885 | 3 | false | 0 | 0 | It's possible for the value of any cell to be null if the cell has never had any value set (e.g. if you add a value to a cell in a new row, all the other cells will be null). I'm guessing that's what you're seeing.
If you check one of those checkbox cells, save, then uncheck it, you should see its value as false in the API.
For the purposes of your program logic, you can treat any null checkbox cells as unchecked. | 1 | 2 | 0 | Whenever I retrieve a SmartSheet row and loop through the cells within it, all cells of type CHECKBOX always have a displayValue or value of Null, regardless of the status of the checkbox on the sheet. Has anyone else experienced this? (I am using their python sdk) | Smartsheet CHECKBOX Cells Always Returned as Empty | 0 | 0 | 0 | 644 |
38,596,793 | 2016-07-26T17:56:00.000 | 1 | 0 | 0 | 0 | python,google-spreadsheet-api | 38,600,670 | 1 | true | 1 | 0 | If you want to do this by only manipulating your Python program, you would have to keep it running all day, which would waste CPU resources.
It's better to use cron on your Unix system to schedule a command for you every 2 hours; in this case, that command would run your Python program. | 1 | 0 | 0 | I'm trying to read from a Google Sheet, say, every 2 hours. I have looked at both the Google Sheets API and Google Apps Script.
I'm using Python/Flask, and what I'm specifically confused about is how to add the time trigger. I can use the Google Sheets API to read from the actual file, but I'm unsure how to run this process every x hours. From my understanding, Google Apps Script is for adding triggers to docs, sheets, etc., which is not really what I want to do.
I'm pretty sure I'm looking in the wrong area for this x-hour read. Should I be looking into the sched module or Advanced Python Scheduler? Any advice on how to proceed would be very appreciated. | Reading From Google Sheets Periodically | 1.2 | 0 | 0 | 164 |
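A minimal sketch of the in-process option, where read_sheet() is a placeholder for your existing Sheets API call; the cron route from the answer is usually preferable because nothing has to stay running:

```python
import time

INTERVAL = 2 * 60 * 60  # two hours, in seconds

def read_sheet():
    pass  # placeholder for your Google Sheets API call

while True:
    read_sheet()
    time.sleep(INTERVAL)

# cron alternative (runs at minute 0 of every 2nd hour):
# 0 */2 * * * /usr/bin/python /path/to/read_sheet_script.py
```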
38,597,286 | 2016-07-26T18:23:00.000 | 0 | 1 | 0 | 0 | python,lidar,slam | 51,737,241 | 1 | false | 0 | 0 | You will need a real-time camera looking at the road for SLAM! The LIDAR basemap can provide the static layer, but you still need a dynamic layer (in this case a camera or a GPS/IMU). | 1 | 2 | 0 | I have a .pcap file collected using a Velodyne VLP16 LIDAR unit. I've been looking through the different implementations of SLAM, and most use camera or stereo camera inputs. I was wondering if there is an implementation that supports pcap LIDAR data. | Which SLAM implementations support pcap input? | 0 | 0 | 0 | 393 |
38,598,880 | 2016-07-26T20:00:00.000 | 1 | 0 | 1 | 0 | python,module,qpython,qpython3 | 39,323,863 | 3 | false | 0 | 1 | Extract the zip file to the site-packages folder.
Find the qpyplus folder, and inside it, Lib/python3.2/site-packages; extract there, and that's it. Now you can use your module directly from the REPL terminal by importing it. | 1 | 0 | 0 | I found this great module on within and downloaded it as a zip file. Once I extracted the zip file, I put the two modules inside it (setup.py and the main one) into the module folder, including an extra readme file I needed to run. I tried installing the setup file, but I couldn't, because the console couldn't find it. So I did some research and tried using pip to install it as well, but that didn't work. So I was wondering if any of you could give me the steps to install it manually and with pip (keep in mind that the setup.py file needs to be installed in order for the main module to work).
Thanks! | How do I install modules on qpython3 (Android port of python) | 0.066568 | 0 | 0 | 51,897 |
38,599,404 | 2016-07-26T20:35:00.000 | 1 | 0 | 0 | 0 | python,parameters,automation,sas | 38,734,883 | 2 | false | 0 | 0 | You can pass a string to SAS using the -sysparm command line option, and this string will be available in SAS as the &sysparm automatic variable.
E.g. (from the command line:)
sas myprogram.sas -sysparm myparam
If you need to parse the string inside SAS, the %scan() macro function will probably be useful. | 1 | 0 | 0 | I have multiple SAS program files, each of which has various macro variables that need to be set dynamically through an Excel file. I was wondering if I can pass the values from the Excel file to the SAS programs through a Python script (or shell script). I wish to automate the process of setting parameters for each SAS program instead of doing it manually.
Please suggest. | How to set SAS program macro variables through a python script | 0.099668 | 0 | 0 | 1,027 |
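A hedged sketch of driving that from Python: the Excel layout (program/param1/param2 columns) is entirely hypothetical, and pandas.read_excel needs an Excel reader installed:

```python
import subprocess
import pandas as pd

params = pd.read_excel('params.xlsx')  # hypothetical: one row per SAS program

for _, row in params.iterrows():
    # Pass the macro values through -sysparm; parse them in SAS with %scan()
    sysparm = '%s %s' % (row['param1'], row['param2'])
    subprocess.call(['sas', row['program'], '-sysparm', sysparm])
```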
38,600,625 | 2016-07-26T22:04:00.000 | 0 | 0 | 0 | 0 | python-3.x,tkinter | 38,600,746 | 2 | false | 0 | 1 | I will admit that I have no experience in this, but maybe a sort of watchdog timer could work? A timer would count up to your desired time, but any time an element is activated it would reset the counter. This concept is used a lot in microcontrollers, but I'm not sure how you would apply it in Python. | 1 | 0 | 0 | What is the general method for 'doing something' after a period of user inactivity in tkinter? In my case, the 'do something' will be to go to the start screen (tk.Frame) that is already instantiated. | do something after a period of gui user inactivity tkinter | 0 | 0 | 0 | 844 |
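A small, self-contained sketch of that watchdog idea in tkinter, using after() as the timer and any key press or mouse motion as the reset; the timeout and the 'go to start screen' action are placeholders:

```python
import tkinter as tk

TIMEOUT_MS = 60 * 1000  # one minute of inactivity (arbitrary)

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self._timer = None
        # Any key press or mouse motion counts as user activity
        self.bind_all('<Any-KeyPress>', self.reset_timer)
        self.bind_all('<Motion>', self.reset_timer)
        self.reset_timer()

    def reset_timer(self, event=None):
        if self._timer is not None:
            self.after_cancel(self._timer)
        self._timer = self.after(TIMEOUT_MS, self.on_inactive)

    def on_inactive(self):
        print('inactive')  # e.g. start_frame.tkraise() in your app

App().mainloop()
```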
38,600,982 | 2016-07-26T22:38:00.000 | 0 | 0 | 0 | 0 | java,android,python | 39,757,493 | 1 | false | 1 | 0 | The short answer is no.
Google Deep Dream is an IPython notebook with dependencies on Caffe, which itself has several dependencies.
There is, however, no reason why someone couldn't develop a similar tool for Android. There is an app called Dreamscope, available for Android, for producing these kinds of images, but I would presume they do all of their image computation in the cloud. | 1 | 0 | 0 | How can I run Google Deep Dream on Android? Can I execute the Python script, or do I need to port it to Java for performance reasons? | How can I run Deep Dream on Android | 0 | 0 | 0 | 108 |
38,601,376 | 2016-07-26T23:19:00.000 | 0 | 1 | 0 | 0 | python,sleep,beagleboneblack | 59,011,900 | 2 | false | 0 | 0 | For the beaglebone black, I was able to do this using the rtcwake function. There are several different modes.
For instance, if you want to put the BBB in sleep mode for 10 seconds and then wake back up, you would enter the following command:
sudo rtcwake -s 10 -m standby
Run sudo rtcwake --help to see all options. | 1 | 0 | 0 | I'm a student and a beginner with BeagleBones. In my project we have a BeagleBone Black connected to a battery and solar panels.
It will work autonomously, and the Beagle will send data over the 3G network through a 3G USB dongle.
What I want to do is save as much energy as possible. My first thought was to switch the BeagleBone into hibernation or sleep mode,
and then wake it up every x seconds, minutes, or whatever interval.
So I want to know if that's possible, and whether there is an OS better adapted to that use.
I have already managed to disable the USB chipset and then reactivate it several minutes later.
Thank you if you can help me! | Hibernate a BeagleBone Black | 0 | 0 | 0 | 2,087 |
38,601,730 | 2016-07-27T00:03:00.000 | 0 | 0 | 1 | 0 | python,apache-spark,pyspark | 45,575,576 | 1 | false | 0 | 0 | I'm in a similar situation. We've done most of our development in Python (primarily Pandas) and now we're moving into Spark as our environment has matured to the point that we can use it.
The biggest disadvantage I see to PySpark is when we have to perform operations across an entire DataFrame that PySpark doesn't directly support via a library or method. For example, when trying to use the Lifetimes library, which is not supported by PySpark, we either have to convert the PySpark DataFrame to a pandas DataFrame (which takes a lot of time and loses the advantage of the cluster) or convert the code to something PySpark can consume and parallelize across the PySpark DataFrame. | 1 | 1 | 1 | Recently I've been working a lot with PySpark, so I've been getting used to its syntax, the different APIs, and the HiveContext functions. Many times when I start working on a project I'm not fully aware of what its scope will be, or the size of the input data, so sometimes I end up requiring the full power of distributed computing, while on others I end up with scripts that run just fine on my local machine.
My question is: is there a disadvantage to coding with PySpark as my main language as compared to regular Python/pandas, even for just some exploratory analysis? I ask mainly because of the cognitive work of switching between languages, and the hassle of moving my code from Python to PySpark if I do end up needing to distribute the work.
In terms of libraries I know Python would have more capabilities, but my current projects so far don't use any library not covered by Spark, so I'm mostly concerned about speed, memory, and any other possible disadvantage; which would perform better on my local machine? | Programming on PySpark (local) vs. Python on Jupyter Notebook | 0 | 0 | 0 | 1,310 |
38,603,480 | 2016-07-27T03:59:00.000 | 2 | 0 | 1 | 0 | python,import,py2exe,pyinstaller,os.system | 38,665,906 | 1 | true | 0 | 0 | After a couple of days of tests, I was able to figure out how to work around this problem. Instead of os.system, I am using subprocess.call("script.py arg1 arg2 ...", shell=True) for each script I need to run. Also, I used chmod +x (on Linux) before transferring the scripts to Windows to ensure they're executable (someone can hopefully tell me if this was really necessary). Then, without having to install Python, a colleague was able to run the program after I compiled it as a single file with PyInstaller. I was also able to do the same thing with BLAST executables (where the user did not have to install BLAST locally, as the exe accompanied the distribution of the script). This avoided having to call Biopython's NcbiblastnCommandline and the install. | 1 | 2 | 0 | I'm using tkinter and PyInstaller/py2exe (either one would be fine) to create an executable as a single file from my Python script. I'm running third-party Python scripts within my code with os.system(), and can simply place these scripts in the 'dist' dir after it is created in order for it to work. The command has several parameters (input file, output file, number of threads, etc.), so I'm unsure how to add this into my code using import. Unfortunately, this is on Windows, so that colleagues can use the GUI, and I would like to have a single executable to distribute.
**EDIT:** I can get it to bundle into a single executable and provide the scripts along with the exe. The issue, however, is still with os.system("python script.py -1 inputfile -n numbthreads -o outputfile..") when running the third-party scripts within my code. I had a colleague test the executable with the scripts provided alongside it; however, at this point they need to have Python installed, which is unacceptable since there will be multiple users. | PyInstaller/Py2exe - include os.system call with third party scripts in single file compilation | 1.2 | 0 | 0 | 1,346 |
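To make that workaround concrete, a hedged sketch (the script name and its arguments are placeholders, and shell=True on Windows relies on the .py file association):

```python
import subprocess

# Each third-party script is shipped next to the frozen exe
ret = subprocess.call('script.py input.txt output.txt 4', shell=True)
if ret != 0:
    raise RuntimeError('script.py exited with code %d' % ret)
```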
38,604,598 | 2016-07-27T05:43:00.000 | 2 | 0 | 1 | 0 | python,r,machine-learning,simulation | 38,605,047 | 2 | false | 0 | 0 | How about writing the scripts' output to files, and constructing a web interface that consumes those files and displays them in read-only mode?
For example, in R you can use
sink()
to route the output messages to a file; you then build a web interface that simply displays this file. | 1 | 0 | 0 | I have a client who would like to examine the results of a script I have written. I don't want the client to see the inner workings of the script, or I lose my value to them, but I want them to be able to run it as many times as they want and observe the results.
I am not sure if there is a general solution to this, or whether it is specific to a language. If the latter applies, I have scripts in Python and R.
Thanks | black box script execution? | 0.197375 | 0 | 0 | 93 |
38,605,752 | 2016-07-27T06:54:00.000 | 1 | 0 | 1 | 0 | python,multithreading,thread-safety,python-multithreading | 38,606,759 | 1 | false | 0 | 0 | Reading the file serially is your best option, since (hardware-wise) it gives you the best read throughput.
Usually the slow part is not in reading the data but in processing it... | 1 | 0 | 0 | I have to read a file in chunks of 2KB and do some operation on those chunks. Where I'm actually stuck is when the data needs to be thread-safe. From what I've seen in online tutorials and StackOverflow answers, we define a worker thread and override its run method. The run method uses data from a queue, which we pass as an argument and which contains the actual data. But to load that queue with data, I'll have to go through the file serially, which eliminates parallelism. I want multiple threads to read the file in parallel. So I'll have to cover the read part in the run function only, but I'm not sure how to go about that. Help needed. | Read a file multi-threaded in python in chunks of 2KB. | 0.197375 | 0 | 0 | 189 |
38,605,787 | 2016-07-27T06:56:00.000 | 0 | 0 | 1 | 0 | python-3.x,pandas,data-science | 45,953,438 | 2 | false | 0 | 0 | Install Anaconda and all your problems will be gone; with it you can install 720+ packages related to Python, e.g.:
conda install pandas
conda install numpy
conda install ...... | 1 | 0 | 0 | I am new to Python and want to learn data analysis with Python 3.5. While installing pandas through cmd, it shows warnings. | How to install pandas for python 3.5 on windows 10 | 0 | 0 | 0 | 7,107 |
38,609,561 | 2016-07-27T09:52:00.000 | 0 | 0 | 1 | 1 | python,ubuntu,path,anaconda | 38,609,999 | 1 | false | 0 | 0 | Fixed the issue with a simple restart. Still not sure where the original PATH without the 3 in it came from as I hadn't restarted before. | 1 | 0 | 0 | ISSUE FIXED: Restart fixed the issue.
I've just finished installing Anaconda3 on Ubuntu 16.04. It automatically added the following line to .bashrc.
export PATH="/home/username/anaconda3/bin:$PATH"
However, when I run python I still get the default Python 2 version.
printenv PATH
gives me
/home/username/anaconda/bin:/home/username/bin: ...
What is causing the 3 to be dropped from the path? | Anaconda3 install PATH issue | 0 | 0 | 0 | 3,692 |
38,610,955 | 2016-07-27T10:53:00.000 | 4 | 0 | 0 | 0 | python,pandas,scikit-learn,sparse-matrix,bigdata | 38,612,762 | 1 | true | 0 | 0 | I would suggest you give CloudxLab a try.
Though it is not free, it is quite affordable ($25 per month). It provides a complete environment to experiment with various tools such as HDFS, MapReduce, Hive, Pig, Kafka, Spark, Scala, Sqoop, Oozie, Mahout, MLlib, ZooKeeper, and R. Many of the popular trainers are using CloudxLab. | 1 | 3 | 1 | I am currently working on my thesis, which involves dealing with quite a sizable dataset: ~4 million observations and ~260 thousand features. It is a dataset of chess games, where most of the features are player dummies (130k for each colour).
As for the hardware and the software, I have around 12GB of RAM on this computer. I am doing all my work in Python 3.5 and use mainly pandas and scikit-learn packages.
My problem is that obviously I can't load this amount of data to my RAM. What I would love to do is to generate the dummy variables, then slice the database into like a thousand or so chunks, apply the Random Forest and aggregate the results again.
However, to do that I would need to be able to first create the dummy variables, which I am not able to do due to memory error, even if I use sparse matrices. Theoretically, I could just slice up the database first, then create the dummy variables. However, the effect of that will be that I will have different features for different slices, so I'm not sure how to aggregate such results.
My questions:
1. How would you guys approach this problem? Is there a way to "merge" the results of my estimation despite having different features in different "chunks" of data?
2. Perhaps it is possible to avoid this problem altogether by renting a server. Are there any trial versions of such services? I'm not sure exactly how much CPU/RAM I would need to complete this task.
Thanks for your help, any kind of tips will be appreciated :) | Dealing with big data to perform random forest classification | 1.2 | 0 | 0 | 392 |
38,611,999 | 2016-07-27T11:43:00.000 | 0 | 0 | 1 | 0 | python,anaconda,theano | 43,754,593 | 1 | false | 0 | 0 | Just found the temperary solution , rename configparser.py to config_parser or any other name that are not confilct .
and change the name of each module that imports it to config_parser. | 1 | 0 | 1 | I downloaded Theano from GitHub and installed it.
But when I try to import theano in IPython, I get this error:
In [1]: import theano
ImportError Traceback (most recent call last)
<ipython-input-1-3397704bd624> in <module>()
----> 1 import theano
C:\Anaconda3\lib\site-packages\theano\__init__.py in <module>()
40 from theano.version import version as version
41
---> 42 from theano.configdefaults import config
43
44 # This is the api version for ops that generate C code. External ops
C:\Anaconda3\lib\site-packages\theano\configdefaults.py in <module>()
14
15 import theano
---> 16 from theano.configparser import (AddConfigVar, BoolParam, ConfigParam, EnumStr,
17 FloatParam, IntParam, StrParam,
18 TheanoConfigParser, THEANO_FLAGS_DICT)
C:\Anaconda3\lib\site-packages\theano\configparser.py in <module>()
13
14 import theano
---> 15 from theano.compat import configparser as ConfigParser
16 from six import string_types
17
C:\Anaconda3\lib\site-packages\theano\compat\__init__.py in <module>()
4 # Python 3.x compatibility
5 from six import PY3, b, BytesIO, next
----> 6 from six.moves import configparser
7 from six.moves import reload_module as reload
8 import collections
C:\Anaconda3\lib\site-packages\six.py in __get__(self, obj, tp)
90
91 def __get__(self, obj, tp):
---> 92 result = self._resolve()
93 setattr(obj, self.name, result) # Invokes __set__.
94 try:
C:\Anaconda3\lib\site-packages\six.py in _resolve(self)
113
114 def _resolve(self):
--> 115 return _import_module(self.mod)
116
117 def __getattr__(self, attr):
C:\Anaconda3\lib\site-packages\six.py in _import_module(name)
80 def _import_module(name):
81 """Import module, returning the module after the last dot."""
---> 82 __import__(name)
83 return sys.modules[name]
84
C:\Anaconda3\Lib\site-packages\theano\configparser.py in <module>()
13
14 import theano
---> 15 from theano.compat import configparser as ConfigParser
16 from six import string_types
17
When I look into the files, I indeed cannot find configparser.py in that directory, but the original source does not have it either.
ImportError: cannot import name 'configparser' | Import Theano on Anaconda on Windows 10 | 0 | 0 | 0 | 437 |
38,612,509 | 2016-07-27T12:07:00.000 | 0 | 0 | 1 | 1 | python,windows | 62,245,111 | 4 | false | 0 | 0 | The simple answer is to copy "LazyLibrarian.py" to "LazyLibrarian.pyw" and create a shortcut on the Desktop. Then put the shortcut in your startup folder. | 1 | 4 | 0 | Is there any way to run a Python script without a command shell momentarily appearing?
Naming my files with the ".pyw" extension doesn't work. | How to run a Python script without Windows console appearing | 0 | 0 | 0 | 15,470 |
38,612,836 | 2016-07-27T12:21:00.000 | 2 | 0 | 0 | 0 | python,performance,model-view-controller,pyramid | 38,612,965 | 1 | false | 1 | 0 | Why don't you use an AJAX function: post the data to the server, and when the server-side processing is done, display the result on the HTML page. | 1 | 0 | 0 | I am trying to speed up my website. At the moment, the controller fetches data from the database, does a calculation on the data, and displays it in the view.
What I plan to do is have a controller/action fetch half the data and display it in the view, then come back to a different controller/action that does the calculation on the data and displays it on screen.
But what I want to know is: once I fetch the data and display it on screen, how do I go back to the controller automatically (without any click by the user) to do the calculations on the same data? | Suggestions to make website fast by breaking a request in two parts | 0.379949 | 0 | 0 | 51 |
38,615,088 | 2016-07-27T13:58:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,vectorization,tf-idf,text-analysis | 38,615,418 | 1 | true | 0 | 0 | You seem to be misunderstanding what the TF-IDF vectorization is doing. For each word (or N-gram), it assigns a weight to the word which is a function of both the frequency of the term (TF) and of its inverse frequency of the other terms in the document (IDF). It makes sense to use it for words (e.g. knowing how often the word "pizza" comes up) or for N-grams (e.g. "Cheese pizza" for a 2-gram)
Now, if you do it on lines, what will happen? Unless you happen to have a corpus in which lines are repeated exactly (e.g. "I need help in Python"), your TF-IDF transformation will be garbage, as each sentence will appear exactly once in the document. And if your sentences are indeed always identical up to the punctuation marks, then for all intents and purposes they are not sentences in your corpus, but words. This is why there is no option to do TF-IDF with sentences: it makes zero practical or theoretical sense. | 1 | 2 | 1 | I'm trying to analyze a text which is given as lines, and I wish to vectorize the lines using the scikit-learn package's TF-IDF vectorization in Python.
The problem is that the vectorization can be done either by words or by n-grams, but I want it done for lines, and I have already ruled out a workaround that just vectorizes each line as a single word (since that way the words and their meanings won't be considered).
Looking through the documentation, I didn't find how to do that, so is there any such option? | Tf-Idf vectorizer analyze vectors from lines instead of words | 1.2 | 0 | 0 | 791 |
38,615,740 | 2016-07-27T14:23:00.000 | 1 | 0 | 1 | 0 | python,regex,thai | 69,601,309 | 5 | false | 0 | 0 | In Java you can match a combination of Thai en English with:
^[\\p{L}\\p{javaUnicodeIdentifierPart}\\p{Blank}\\p{P}]*$
Breakdown:
\\p{L} is a 'normal' letter
\\p{javaUnicodeIdentifierPart} matches a Thai letter
\\p{Blank} matches a space character
\\p{P} matches punctuation.
I'm not an expert in the Thai language (other than that I recognize it), but without the punctuation-match the string does not match. | 1 | 9 | 0 | I need to vectorize text documents in Thai (e.g Bag of Words, doc2vec).
First I want to go over each document, omitting everything except the Thai characters and English words (e.g. no punctuation, no numbers, no other special characters except apostrophe).
For English documents, I use this regular expression:
[^a-zA-Z' ]|^'|'$|''
For Thai documents, I cannot find the right regular expression to use. I know that the Unicode block for Thai is u0E00–u0E7F.
I tried [^ก-๛a-zA-Z' ]|^'|'$|'' and many other combinations, but they didn't succeed.
For example:
I want
"ทรูวิชั่นส์ ประกาศถ่ายทอดสดศึกฟุตบอล พรีเมียร์ ลีก อังกฤษ ครบทุกนัดเป็นเวลา 3 ปี ตั้งแต่ฤดูกาล 2016/2017 - 2018/2019 พร้อมด้วยอีก 5 ลีกดัง อาทิ ลา ลีกา สเปน, กัลโช เซเรีย เอ อิตาลี และลีกเอิง ฝรั่งเศส ภายใต้แพ็กเกจสุดคุ้ม ทั้งผ่านมือถือ และโทรทัศน์ some, English words here! abc123"
to be:
"ทรูวิชั่นส์ ประกาศถ่ายทอดสดศึกฟุตบอล พรีเมียร์ ลีก อังกฤษ ครบทุกนัดเป็นเวลา ปี ตั้งแต่ฤดูกาล พร้อมด้วยอีก ลีกดัง อาทิ ลา ลีกา สเปน, กัลโช เซเรีย เอ อิตาลี และลีกเอิง ฝรั่งเศส ภายใต้แพ็กเกจสุดคุ้ม ทั้งผ่านมือถือ และโทรทัศน์ some English words here abc" | Regular Expression to accept all Thai characters and English letters in python | 0.039979 | 0 | 0 | 25,026 |
38,616,200 | 2016-07-27T14:42:00.000 | 0 | 1 | 0 | 1 | python,cx-freeze | 38,618,295 | 1 | false | 0 | 0 | Executing the exe from the console works out fine. Thanks. | 1 | 0 | 0 | So I made a Python program using several modules, including os, zipfile, time, datetime, shutil, shelve, and ftplib. I froze it with cx_Freeze, but it won't run on the target machine (it runs on mine). I'm super new to cx_Freeze, but I've poked around a bit and I suspect it's a module-not-found error. Trouble is, when I execute the exe on the target machine, the window doesn't stay open long enough for me to catch the error message, so I can't even narrow down the issue to try and solve it. Any idea how I could deal with it? | cx_freeze frozen python program doesn't run - no time to see the error message on executing | 0 | 0 | 0 | 205 |
38,616,773 | 2016-07-27T15:04:00.000 | 1 | 1 | 0 | 0 | python-3.x,raspberry-pi3 | 38,617,501 | 1 | false | 0 | 0 | You just have to detect whenever the state of the button is toggled. When it's pushed down, store the current time with pressedTime = time.time(). When it's released, to get how long the button has been pushed down, you just do: howLong = time.time() - pressedTime | 1 | 0 | 0 | I'm trying to write a little code that detects how long a button connected to my Raspberry Pi is pushed down, not just whether it's pushed. Is there an easy way to do this with Python? Thanks! | Detect time button is pushed | 0.197375 | 0 | 0 | 36 |
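A minimal sketch of that with RPi.GPIO edge detection; the pin number is an assumption, and a real button usually needs debouncing on top of this:

```python
import time
import RPi.GPIO as GPIO

PIN = 17  # hypothetical BCM pin wired to the button

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # pressed pulls pin low

try:
    while True:
        GPIO.wait_for_edge(PIN, GPIO.FALLING)  # button pressed
        pressed_time = time.time()
        GPIO.wait_for_edge(PIN, GPIO.RISING)   # button released
        print('held for %.3f seconds' % (time.time() - pressed_time))
finally:
    GPIO.cleanup()
```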
38,617,090 | 2016-07-27T15:17:00.000 | 1 | 0 | 1 | 0 | python,attributes,metaclass,python-internals,getattribute | 38,625,734 | 1 | false | 0 | 0 | This is because the attribute lookup searches all bases of type(a)(type(a).__mro__), rather than all types of type(a)(type(type(a))).
Also, type(self) is not called repeatedly, so the lookup chain looks like this:
a.__dict__['x']
type(a).__dict__['x']
b.__dict__[x] for b in type(a).__mro__
raise AttributeError
As @jsbueno wisely pointed out in the comment, the second step is actually included in the third one. This is because for any class, let's say class C, C itself is exactly the first item in C.__mro__. | 1 | 1 | 0 | According to Python 2.7.12 documentation, 3.4.2.3. Invoking Descriptors¶:
The default behavior for attribute access is to get, set, or delete
the attribute from an object’s dictionary. For instance, a.x has a
lookup chain starting with a.__dict__['x'], then
type(a).__dict__['x'], and continuing through the base classes of
type(a) excluding metaclasses.
But why metaclasses are excluded?
If you continuously call type(self), no matter what self is, an instance object or a type object, you'll eventually get <type 'type'>. So I can't understand why metaclasses enjoy this "privilege".
By the way, I'm a little confused by this quotation: for instance objects, object.__getattribute__ is used, so I think the lookup chain should look like this:
a.__dict__['x']
type(a).__dict__['x']
b.__dict__[x] for b in type(a).__mro__
type(b).__dict__[x] for b in type(a).__mro__
c.__dict__[x] for c in type(b).__mro__
......
Am I right? | Why aren't metaclass's attributes searched in instance attribute lookups? | 0.197375 | 0 | 0 | 147 |
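A tiny demonstration of the accepted lookup chain (Python 3 spelling; Python 2 would set __metaclass__ instead):

```python
class Meta(type):
    x = 'defined on the metaclass'

class A(metaclass=Meta):
    pass

print(A.x)  # found: A is an *instance* of Meta, so type(A).__dict__['x'] hits

a = A()
try:
    a.x  # lookup walks type(a).__mro__ == (A, object); Meta is never searched
except AttributeError:
    print('metaclass attributes are invisible to instances')
```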
38,619,166 | 2016-07-27T17:03:00.000 | 0 | 0 | 0 | 0 | python,django,database,heroku,web-applications | 38,627,826 | 1 | false | 1 | 0 | I've been running my django app in heroku for about 6 months now and I've never experienced db gets reset when ever I updated/deploy/push to heroku
Note that I'm using Heroku Postgres for the DB. | 1 | 0 | 0 | I have a Django application on Heroku where some data is added in the admin settings. It is linked to my GitHub. One of the things you add is a picture, but it doesn't show up on the site after it's uploaded. What could be the cause, and what is the solution? | How can I upload a picture to my django app on heroku and get it to be displayed? | 0 | 0 | 0 | 58 |
38,620,899 | 2016-07-27T18:46:00.000 | 2 | 0 | 1 | 1 | python,command-line-arguments | 38,621,007 | 1 | true | 0 | 0 | As for the how/why, sys is only imported once (when python starts up). When sys is imported, its argv member gets populated with the command-line arguments. Subsequent import statements return the same sys module object, so no matter where you import sys from, you'll always get the same object, and therefore sys.argv will always be the same list no matter where you reference it in your application.
Whether you should be doing command-line parsing in more than one place is a different question. Generally, my answer would be "NO" unless you are only hacking together a script to work for the next 2 or 3 days. Anything that you expect to last should do all its parsing up front (probably with a robust argument parser like argparse) and pass the data necessary for the various functions/classes to them from its entry point. | 1 | 0 | 0 | I recently discovered (much to my surprise) that you can access command line args in files other than the one that is explicitly called when you enter it.
So, you can run python file1.py abc on the command line, and use sys.argv[1] to get the string 'abc' from within file2.py or file3.py.
I still feel like this shouldn't work, but I'm glad it does, since it saved me a lot of trouble.
But now I'd really appreciate an answer as to why/how this works. I had assumed that sys.argv[1] would be local to each file. | Using command line args from different files in python | 1.2 | 0 | 0 | 275 |
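A two-file illustration of that answer (the file names are invented); run it as python file1.py abc:

```python
# --- file2.py ---
import sys

def first_arg():
    # This is the very same sys module object file1.py imported,
    # so sys.argv is one shared list, not a per-file copy.
    return sys.argv[1]

# --- file1.py ---
import sys
import file2

print(sys.argv[1])        # 'abc'
print(file2.first_arg())  # also 'abc' -- same underlying list
```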
38,620,969 | 2016-07-27T18:50:00.000 | 4 | 0 | 0 | 0 | python,django,celery,channels | 42,395,545 | 6 | false | 1 | 0 | Django channels gives Django the ability to handle more than just plain HTTP requests, including Websockets and HTTP2. Think of it as two-way duplex communication that happens asynchronously.
No browser refreshing. Multiple clients can send and receive data via websocket, and django channels orchestrates this intercommunication: for example, a group chat with multiple clients accessing it at the same time. You can achieve background processing of long-running code similar to celery to a certain extent, but the application of channels is different from that of celery.
Celery is an asynchronous task queue/job queue based on distributed message passing, as well as scheduling. In layman's terms: I want to fire and run a task in the background, or I want to have a periodic task that fires and runs in the background on a set interval. You can also fire a task in a synchronous way: fire, wait until it completes, and continue.
So the key difference is in the use cases they serve and the objectives of the frameworks. | 2 | 32 | 0 | Recently I came to know about Django channels.
Can somebody tell me the difference between channels and celery, as well as where to use celery and channels. | How are Django channels different than celery? | 0.132549 | 0 | 0 | 11,029 |
38,620,969 | 2016-07-27T18:50:00.000 | 3 | 0 | 0 | 0 | python,django,celery,channels | 50,080,329 | 6 | false | 1 | 0 | Other answers greatly explained the difference, but in fact Channels and Celery can both do asynchronous pooled tasks.
Channels and Celery both use a backend for messages and worker daemon(s). So the same kind of thing could be implemented with both.
But keep in mind that Celery is primarily made for task pooling and handles most of its concerns (retries, result backends, etc.), which Channels is absolutely not made for. | 2 | 32 | 0 | Recently I came to know about Django channels.
Can somebody tell me the difference between channels and celery, as well as where to use celery and channels. | How are Django channels different than celery? | 0.099668 | 0 | 0 | 11,029 |
38,622,523 | 2016-07-27T20:22:00.000 | 1 | 0 | 0 | 1 | java,python,shell,maven | 38,622,730 | 1 | false | 1 | 0 | The maven path for the artifacts is not the same as what gets generated when you run or export the project. You can check this by exporting the project as a Jar/War/Ear file and viewing it via WinRAR or any other tool.
The resources should be in the jar, parallel to the com directory, if it's a jar project, but you can double-check it. | 1 | 1 | 0 | I am building my Java project with Maven and I have a script file that ends up in the target/classes/resources folder. While I can access the file itself via this.getClass.getResource("/lookUpScript.py").getPath(), I cannot execute a shell command with "." + this.getClass.getResource("/lookUpScript.py").getPath(); this ultimately ends up being ./lookUpScript.py. To execute the shell command I am using a method that is part of my company's code that I can get to work fine with any command not involving a file. Is there a standard way of accessing files located in the resources area of a Maven build that may fix this? | Maven build with Java: How to execute script located in resources? | 0.197375 | 0 | 0 | 1,406
38,623,138 | 2016-07-27T21:03:00.000 | 2 | 0 | 1 | 0 | python,visual-studio-code | 38,623,229 | 7 | false | 0 | 0 | You can set the current working directory for the debugged program using the cwd argument in launch.json. | 1 | 193 | 0 | I'm starting to use vscode for Python. I have a simple test program. I want to run it under debug and I need to set the working directory for the run.
How/where do I do that? | VSCode -- how to set working directory for debug | 0.057081 | 0 | 0 | 203,282 |
38,625,448 | 2016-07-28T00:53:00.000 | 4 | 0 | 1 | 0 | python,numpy | 38,625,477 | 1 | true | 0 | 0 | Well, that's a simplification.
The float type in Python is double-precision.
The int type has integer precision, but only limited by memory. Large numbers can have far more significant digits than a double float.
When using NumPy, you may choose the precision you want. | 1 | 0 | 0 | I know MATLAB does everything in double; I heard something similar about python but am not quite sure. Can anyone confirm it with a reference? Thanks! | Does python run all computations in double precision? | 1.2 | 0 | 0 | 37
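A quick check of all three claims in that answer:

```python
import sys

print(sys.float_info.dig)   # ~15 significant decimal digits: floats are C doubles
print(0.1 + 0.2)            # 0.30000000000000004 -- classic double rounding

print(2 ** 200)             # ints are arbitrary precision, limited only by memory

import numpy as np
print(np.float32(0.1), np.float64(0.1))  # with NumPy you pick the precision
```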
38,626,409 | 2016-07-28T03:00:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pip | 38,627,143 | 3 | false | 0 | 0 | Try navigating to ~/Python[version]/Scripts in cmd, then use pip[version] [command] [module] (i.e. pip3 install themodulename or pip2 install themodulename). | 2 | 0 | 0 | I keep trying to install Pip using get-pip.py and only get the wheel file in the scripts folder. When I try running "pip" in the command prompt, it just comes back with an error. Running windows 8, in case you need to know.
Edit: the error is 'pip' is not recognized as an internal or external command... | Python pip installation not working how to do? | 0 | 0 | 0 | 17,163
38,626,409 | 2016-07-28T03:00:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,pip | 38,627,346 | 3 | false | 0 | 0 | If you are using the latest version of Python:
In computer properties, go to Advanced System Settings -> Advanced tab -> Environment Variables
In the System variables section, there is a variable called PATH. Append c:\Python27\Scripts (note: append, not replace).
Then open a new command prompt and try "pip". | 2 | 0 | 0 | I keep trying to install Pip using get-pip.py and only get the wheel file in the scripts folder. When I try running "pip" in the command prompt, it just comes back with an error. Running windows 8, in case you need to know.
Edit: the error is 'pip' is not recognized as an internal or external command... | Python pip installation not working how to do? | 0.066568 | 0 | 0 | 17,163
38,627,504 | 2016-07-28T05:07:00.000 | 0 | 0 | 1 | 1 | python,python-idle | 38,867,924 | 4 | false | 0 | 0 | The same thing is happening with my shell. In the older versions, this did not happen. I've also noticed that when I press Python 3.5.2 Module Docs, my Internet browser opens up and I see my directory displayed onscreen. It looks like:
C:\Users\mycomputername\AppData\Local\Programs\Python\Python35-32\DLLs.
Is that supposed to happen? Is that secure? I don't know.
I've also found that this prints out whenever I "import" something. So if I use an import command and put it right before the line with my random name, it will print out that "RESTART" thing. It's always at the beginning, or what it reads as the beginning. | 4 | 1 | 0 | I have a simple script I wrote, and when trying to run it (F5), I get this msg:
================== RESTART: C:\Users\***\Desktop\tst.py ==================
I restarted the shell, reopened the script, but still, the same msg appears.
I use python 3.5.1 and I tried to simplify the script as much as possible, but I still get this result. Now my script is only one line with a simple print(1) command and I still get this msg.
Was there something wrong with the shell installation? | What does "== RESTART ==" in the IDLE Shell mean? | 0 | 0 | 0 | 17,105 |
38,627,504 | 2016-07-28T05:07:00.000 | 1 | 0 | 1 | 1 | python,python-idle | 38,627,619 | 4 | true | 0 | 0 | I have a simple script I wrote, and when trying to run it (F5)
That's the hotkey for IDLE to run a file. It is not an order to do anything. It's a log statement explicitly declaring that your namespace is being cleared and the file is going to be run fresh again.
no, I didn't tell it to restart
But you did... You pressed F5 | 4 | 1 | 0 | I have a simple script I wrote, and when trying to run it (F5), I get this msg:
================== RESTART: C:\Users\***\Desktop\tst.py ==================
I restarted the shell, reopened the script, but still, the same msg appears.
I use python 3.5.1 and I tried to simplify the script as much as possible, but I still get this result. Now my script is only one line with a simple print(1) command and I still get this msg.
Was there something wrong with the shell installation? | What does "== RESTART ==" in the IDLE Shell mean? | 1.2 | 0 | 0 | 17,105 |
38,627,504 | 2016-07-28T05:07:00.000 | 0 | 0 | 1 | 1 | python,python-idle | 57,694,797 | 4 | false | 0 | 0 | CIsForCookies, my guess is that you don't actually have a complete script; maybe you have just a function definition and you haven't included a line to run that function. (I had this problem and then remembered to call the function I defined; the problem went away.) | 4 | 1 | 0 | I have a simple script I wrote, and when trying to run it (F5) , I get this msg:
================== RESTART: C:\Users\***\Desktop\tst.py ==================
I restarted the shell, reopened the script, but still, the same msg appears.
I use python 3.5.1 and I tried to simplify the script as much as possible, but I still get this result. Now my script is only one line with a simple print(1) command and I still get this msg.
Was there something wrong with the shell installation? | What does "== RESTART ==" in the IDLE Shell mean? | 0 | 0 | 0 | 17,105 |
38,627,504 | 2016-07-28T05:07:00.000 | 0 | 0 | 1 | 1 | python,python-idle | 64,823,524 | 4 | false | 0 | 0 | You may have made the same mistake as me and ran a program and then you wonder why RESTART is all that shows up. My program was working perfectly I just did not print anything or ask for any input so it ran and was done with the program and nothing showed up. | 4 | 1 | 0 | I have a simple script I wrote, and when trying to run it (F5) , I get this msg:
================== RESTART: C:\Users\***\Desktop\tst.py ==================
I restarted the shell, reopened the script, but still, the same msg appears.
I use python 3.5.1 and I tried to simplify the script as much as possible, but I still get this result. Now my script is only one line with a simple print(1) command and I still get this msg.
Was there something wrong with the shell installation? | What does "== RESTART ==" in the IDLE Shell mean? | 0 | 0 | 0 | 17,105 |
38,631,493 | 2016-07-28T08:53:00.000 | 0 | 0 | 1 | 0 | python,import,module | 38,631,601 | 2 | false | 0 | 0 | You can do this:
pip3.5 install pyperclip
The package is pyperclip, not paperclip. | 2 | 0 | 0 | I'm currently learning a bit of python and I want to import the pyperclip third-party module into my python file.
Yes, I already installed the pyperclip module with
pip install pyperclip.
If I create a file on my desktop, I get an error which says
Traceback (most recent call last):
File "test.py", line 1, in <module>
import pyperclip
ImportError: No module named pyperclip
However, if I put the test.py in my python folder, it runs.
The question now is: is there a way to make all my installed modules available in a global scope? I just want to have my file e.g. on my Desktop and run it without having import issues.
Thank you.
Greetings
Edit: I'm working on a Mac; maybe this is causing the problem. | Python third party Module global import | 0 | 0 | 0 | 736
38,631,493 | 2016-07-28T08:53:00.000 | 1 | 0 | 1 | 0 | python,import,module | 38,632,431 | 2 | true | 0 | 0 | Found the problem.
The pip install automatically used pip3.5 install,
whereas python test.py didn't use python3.5 test.py.
Thank you @Bakurìu.
Is there a way I can define python3.5 as python? | 2 | 0 | 0 | I'm currently learning a bit of python and I want to import the pyperclip third-party module into my python file.
Yes, i already installed the pyperclip module with
pip install pyperclip.
if i create a file on my desktop, i get an error which says
Traceback (most recent call last):
File "test.py", line 1, in <module>
import pyperclip
ImportError: No module named pyperclip
However if i put the test.py in my python folder, it runs.
The question now is, is there a way to make all my installed modules available on a global scope ? I just want to have my file e.g. on my Desktop and run it without having import issues.
Thank you.
Greetings
Edit: I'm working on a Mac; maybe this is causing the problem. | Python third party Module global import | 1.2 | 0 | 0 | 736
38,636,905 | 2016-07-28T12:50:00.000 | 0 | 1 | 0 | 0 | python,pycharm | 38,686,877 | 1 | false | 0 | 0 | All IntelliJ-based IDEs support shell integration, scripting, plugins, etc. I bet you can insert your notifications into a build script.
Run menu -> Edit Configurations
Actually, CLion shows a notification on a failed/successful build, and I did make something like this with PhpStorm.
You're a programmer; all the bells and whistles are there, just use them. | 1 | 4 | 0 | I have been looking for a feature where PyCharm notifies me that a script has finished running in the console after each run. At present, I have to add print('done') after each script.
I got smarter, so I defined d = 'done' once, and after each run I simply add a d so it prints out 'done', which I thought was more of a time saver.
Now I am even lazier: whenever I press F10 (my run command button), I want PyCharm to automatically run a small script with d = 'done' in it right after the main script finishes.
Is there a way to do this? | Automatically run another script after i run main code in PyCharm | 0 | 0 | 0 | 2,375 |
38,636,967 | 2016-07-28T12:52:00.000 | 5 | 0 | 1 | 0 | python,regex,python-2.7,nltk | 38,637,504 | 1 | true | 0 | 0 | You can take a couple of approaches. If there are lots of possibilities, as you say, you can treat this as a machine learning problem and use approach 1. Otherwise, if the possibilities are limited (say, around 5), you can use the second approach.
Approach 1:
Consider it a machine learning problem. Classify each sentence in the text as 0 or 1 depending on whether it contains the years of experience. This can be done by labeling some data manually: against each training example, you assign a label. For example:
Job Experience: 3 years (Label 1)
Studying for two years (Label 0)
Working hard for years (Label 0)
Two years of experience (Label 1)
Experience: 2010-2014 (Label 1)
Once you have a lot of examples, you can use scikit-learn or a similar package to train a model.
Approach 2:
1- Search for years. It could be either the exact word (year or years) or a four-digit number (e.g., 2014).
2- If 1 passes, search for the word experience (or something like that) in a close proximity.
If both 1 and 2 pass, then you have years of experience. Then, depending on what you want, you can extract further. | 1 | 1 | 0 | I have extracted the mail id and phone number by using regular expressions.
I have extracted the name by using the Core NLP server.
I have extracted skills by providing a set and comparing the words.
But I don't have any idea how to extract the years of experience using python.
Can anyone please give an idea regarding it?
Examples:
2 years of experience
Two years of experience
2010-2014
Like this, there are so many possibilities. | How to extract the experience from resume using python? | 1.2 | 0 | 0 | 6,584 |
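A rough sketch of approach 2 using regular expressions (the patterns are illustrative and will not cover every phrasing found in real resumes):

```python
import re

NUM_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
             "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def extract_experience(line):
    text = line.lower()
    # Case 1: "2 years" or "two years", with "experience" somewhere nearby
    m = re.search(r'(\d+|%s)\s*\+?\s*years?' % "|".join(NUM_WORDS), text)
    if m and "experience" in text:
        token = m.group(1)
        return int(token) if token.isdigit() else NUM_WORDS[token]
    # Case 2: a date range such as "2010-2014"
    m = re.search(r'\b((?:19|20)\d{2})\s*-\s*((?:19|20)\d{2})\b', text)
    if m:
        return int(m.group(2)) - int(m.group(1))
    return None

for s in ["2 years of experience", "Two years of experience", "2010-2014"]:
    print(s, "->", extract_experience(s))
```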
38,639,948 | 2016-07-28T14:55:00.000 | 0 | 0 | 0 | 0 | python,sqlalchemy | 38,652,751 | 2 | false | 0 | 0 | Unfortunately it is most likely not possible... | 1 | 0 | 0 | Is there any way in SQLAlchemy, by reflection or any other means, to get the name that a column has in the corresponding model? For example, I have the person table with a column group_id. In my Person class this attribute is referred to as 'group'; is there a way of dynamically and generically getting this without importing or calling the Person class? | SQLAlchemy get attribute name from table and column name | 0 | 1 | 0 | 937
38,647,353 | 2016-07-28T21:52:00.000 | -3 | 0 | 0 | 0 | python,numpy,tensorflow | 54,919,826 | 3 | false | 0 | 0 | .numpy() will convert a tensor to an array. | 1 | 8 | 1 | How can you convert a tensor into a Numpy ndarray, without using eval or sess.run()?
I need to pass a tensor into a feed dictionary and I already have a session running. | Tensorflow: Convert Tensor to numpy array WITHOUT .eval() or sess.run() | -0.197375 | 0 | 0 | 13,763 |
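For context, a minimal sketch: under eager execution (the default in TensorFlow 2.x) .numpy() works directly; in TF1 graph mode with a live session, there is no general way around actually evaluating the tensor:

```python
import tensorflow as tf   # assumes TF 2.x, where eager execution is on by default
import numpy as np

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
arr = t.numpy()           # EagerTensor -> numpy.ndarray, no session involved

print(type(arr))          # <class 'numpy.ndarray'>
print(np.mean(arr))       # from here on it's plain NumPy
```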
38,650,665 | 2016-07-29T04:43:00.000 | 2 | 1 | 0 | 0 | javascript,python,email,bokeh,smtplib | 38,650,801 | 1 | false | 1 | 0 | Sorry, but you'll not be able to send an email with JavaScript embedded. That is a security risk. If you're lucky, an email provider will strip it before rendering; if you're unlucky, you'll be sent directly to spam and the provider will distrust your domain.
You're better off sending an email with a link to the chart. | 1 | 0 | 0 | In the first place, I could not come up with the correct search terms for this.
Secondly, I couldn't quite make it work with the standard smtplib or email package in python.
The question is: I have a normal html page (basically it contains output generated from the bokeh package in python; all bokeh does is generate an html page whose embedded javascript renders a nice zoomable plot when viewed in a browser).
My aim is to send that report (the html basically) over to recipients in a mail. | sending dynamic html email containing javascript via a python script | 0.379949 | 0 | 1 | 516 |
38,653,450 | 2016-07-29T07:56:00.000 | 3 | 0 | 1 | 0 | python,list | 38,653,520 | 7 | true | 0 | 0 | [:] is equivalent to copy.
A[:][0] is the first row of a copy of A.
A[0][:] is a copy of the first row of A.
The two are the same.
To get the first column: [a[0] for a in A]
Or use numpy and np.array(A)[:,0] | 1 | 2 | 0 | Suppose A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Then A[0][:] prints [1, 2, 3]
But why does A[:][0] print [1, 2, 3] again?
It should print the column [1, 4, 7], shouldn't it? | Printing a column of a 2-D List in Python | 1.2 | 0 | 0 | 107 |
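Putting the row and column idioms from that answer side by side:

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

print(A[0])                    # first row: [1, 2, 3]
print([row[0] for row in A])   # first column: [1, 4, 7]

# NumPy gives real 2-D indexing, so the column is a one-liner:
import numpy as np
print(np.array(A)[:, 0])       # [1 4 7]
```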
38,653,450 | 2016-07-29T07:56:00.000 | 2 | 0 | 1 | 0 | python,list | 38,653,516 | 7 | false | 0 | 0 | [:] matches the entire list.
So A[:] is the same as A, which means A[:][0] is the same as A[0].
And A[0][:] is the same as A[0]. | 2 | 2 | 0 | Suppose A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Then A[0][:] prints [1, 2, 3]
But why does A[:][0] print [1, 2, 3] again?
It should print the column [1, 4, 7], shouldn't it? | Printing a column of a 2-D List in Python | 0.057081 | 0 | 0 | 107 |
38,655,061 | 2016-07-29T09:19:00.000 | 1 | 0 | 1 | 0 | python,binary,serial-port | 38,655,170 | 3 | false | 0 | 0 | There is:
First convert it to an int and from there to a binary literal, like so
bin(int("01101101", base=2)) | 1 | 0 | 0 | In Python I have learnt that creating ints like 0b01101101 will create a binary literal.
Say I have a string "01101101"; is there a way to convert this to a binary literal?
The example usage is that I am going to create a data packet, slowly building up the byte with relevant pieces of data (setting bits according to variables). Once I have the string, I'll need to write raw binary over a serial connection.
Is it possible to convert the string "01101101" to 0b01101101 so it is a binary literal?
Another example of my goal for this, if it helps: I want to dynamically create the binary data on the fly without having to do massive bitwise operations; I find it simpler to just build up a string of 1's and 0's as I collate data and then convert it to a binary literal. Of course, if there is a better way to go about it, improvements would be gladly accepted. | Is there a way of converting a string of 1's and 0's to its binary counterpart, i.e. not ASCII | 0.066568 | 0 | 0 | 1,098
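A short sketch of the round trip, including packing the parsed value into a raw byte the way a serial write would need it (struct is used so it behaves the same on Python 2 and 3; no actual port is opened here):

```python
import struct

s = "01101101"
n = int(s, 2)               # 109, i.e. 0b01101101

raw = struct.pack("B", n)   # one raw byte, ready for something like port.write(raw)
print(repr(raw))            # b'm'

print(format(n, "08b"))     # back to '01101101', zero-padded to 8 bits
```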
38,657,054 | 2016-07-29T10:55:00.000 | 3 | 0 | 0 | 0 | python-2.7,pdf,jupyter-notebook | 41,493,134 | 2 | false | 1 | 0 | When I want to save a Jupyter Notebook I right click the mouse, select print, then change Destination to Save as PDF. This does not save the analysis outputs though. So if I want to save a regression output, for example, I highlight the output in Jupyter Notebook, right click, print, Save as PDF. This process creates fairly nice looking documents with code, interpretation and graphics all-in-one. There are programs that allow you to save more directly but I haven't been able to get them to work. | 1 | 6 | 1 | I am doing some data science analysis on jupyter and I wonder how to get all the output of my cell saved into a pdf file ?
thanks | how to save jupyter output into a pdf file | 0.291313 | 0 | 0 | 19,046 |
38,659,721 | 2016-07-29T13:07:00.000 | 2 | 0 | 1 | 0 | python,ipython,keyboard-shortcuts,xterm | 39,015,985 | 3 | false | 0 | 0 | The ctrl+j or ctrl+m keyboard shortcuts are validating the entry. | 1 | 21 | 0 | The new release of IPython does not depend any more on readline but uses the pure Python library prompt-toolkit, solving maintenance problems on Apple's and Windows' systems.
A new feature is the ability to edit a multi-line code block, using the cursor keys to move freely in the code block — with this power comes, at least for me, a problem: because a ret inserts a new line in your code, to pass the whole block to the interpreter you have to use the shortcut alt+ret or possibly the less convenient key sequence esc followed by ret.
I say this is a problem because my terminal emulator of choice is the XTerm and, on many Linux distributions, the shortcut alt+ret is not passed to the application but is directly used by the XTerm in which IPython is running, to toggle the screen-fullness of the said terminal (@ThomasDickey, xterm's maintainer and co-author, pointed out that, by default, xterm doesn't care to send to the application the modifier bit on Enter even when one unbinds the Fullscreen action).
For this reason I'd like to modify at least this specific IPython key binding.
I've found instructions (sort of) for the previous, readline-based versions of IPython that do not apply to the new 5.0 version.
What I would need are instructions that lead me to find, in IPython's user documentation, the names of the possible actions that I can bind, the names of the shortcuts to bind with the actions and the procedure to follow to configure a new key binding.
Failing to have this type of canonical answer, I may be happy with a recipe to accomplish this specific keybinding, with the condition that the recipe still works in IPython 6.0 | IPython 5.0 and key bindings in console | 0.132549 | 0 | 0 | 1,224 |
38,661,144 | 2016-07-29T14:20:00.000 | 6 | 0 | 0 | 0 | python,database,numpy,web-scraping | 38,661,572 | 1 | false | 0 | 0 | Use a layered approach: downloading, parsing, storage, analysis.
Separate the layers. Most importantly, don't just download data and then store it in the final parsed format. You will inevitably realise you missed something and need to scrape it all over again. Use something like requests + requests_cache (I found that extending requests_cache.backends.BaseCache and storing the cache on the filesystem is more convenient for examining scraped html than the default sqlite storage backend).
For parsing you're already using beautiful soup which works fine.
For storage & analysis use a database. Avoid the temptation to go with NoSQL -- as soon as you need to run aggregate queries you'll regret it. | 1 | 1 | 0 | I am scraping data of football player statistics from the web using python and Beautiful Soup. I will be scraping from multiple sources, and each source will have a variety of variables about each player which include strings, integers, and booleans. For example player name, position drafted, pro bowl pick (y/n).
Eventually I would like to put this data into a data mining tool or an analysis tool in order to find trends. This will need to be searchable and I will need to be able to add data to a player's info when I am scraping from a new source in a different order.
What techniques should I use to store the data so that I will best be able to add to it and then analyze it later? | Best way to store scraped data in Python for analysis | 1 | 0 | 1 | 2,197
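A bare-bones sketch of the layering from that answer (the URL, cache name, and parsing stub are placeholders):

```python
import requests
import requests_cache
from bs4 import BeautifulSoup

# Layer 1: downloading, with every response cached so a re-parse
# never forces a re-download
requests_cache.install_cache("scrape_cache")

def download(url):
    return requests.get(url).text

# Layer 2: parsing, kept separate from downloading and storage
def parse_players(html):
    soup = BeautifulSoup(html, "html.parser")
    # ... extract name, position drafted, pro-bowl flag, etc.
    return []

# Layers 3/4: insert the parsed rows into a relational database
# and run the aggregate queries there.
```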
38,661,210 | 2016-07-29T14:22:00.000 | 0 | 0 | 1 | 0 | ipython,knitr,jupyter | 38,668,010 | 5 | false | 0 | 0 | Jupyter Notebooks are stored as json. If you are comfortable reading the raw JSON, simply open the notebook file in your favorite text editor. | 1 | 2 | 0 | I would like to use Jupyter/IPython notebooks for writing reports, but I would prefer to avoid the browser interface. Instead, I would like to be able to write the notebook in some text-based format, e.g. markdown, and export the notebook afterwards. Essentially I would like to use Jupyter in a Knitr-esque workflow. Is this currently possible?
Thanks in advance. | How can I edit Jupyter/IPython notebooks as text files? | 0 | 0 | 0 | 1,659 |
38,664,788 | 2016-07-29T17:54:00.000 | 0 | 0 | 0 | 1 | python,multithreading,google-app-engine | 38,665,116 | 1 | false | 1 | 0 | Multithreading is not related to billing in any way - you still pay for one instance even if 10 threads are running in parallel on that instance. | 1 | 1 | 0 | I have an appserver that is getting painfully complicated in that it has to buffer data from incoming requests then push those buffers out, via pubsub, after enough has been received. The buffering isn't the problem, but efficient locking is... hairy, and I'm concerned that it's slowing down my service. I'm considering dropping thread safety in order to remove all the locking, but I'm worried that my app instance count will have to double (or more) to handle the same user load.
My understanding is that a threadsafe app is one where each thread is a billed app instance. In other words, I get billed for two instances by allowing multiple threads to run in a process, with the only advantage being that the threads can share memory and therefore, have a smaller overall footprint.
So to rephrase, does a multithreaded app instance handle multiple simultaneous connections, or is each billed app instance a separate thread - only capable of handling one request at a time? If I remove thread safety, am I going to need to run a larger pool of app instances? | What's the advantage of making your appengine app thread safe? | 0 | 0 | 0 | 171 |
38,666,274 | 2016-07-29T19:33:00.000 | 0 | 0 | 1 | 0 | python,setuptools | 38,666,614 | 1 | false | 0 | 0 | The user of your .exe does not need to have python installed; that's beauty of creating binaries. All of the instructions that the client's computer needs to run the program are already in the .exe | 1 | 0 | 0 | I wanted to package up my python installer so it would be easier to integrate into our WIX installer or other forms of product distribution. I was able to successfully build an exe (python setup.py bdist_wininst) and the msi (python setup.py bdist_msi) using setuptools, but what about the case where a user doesn't have python installed? Is there a way to add python itself as a dependency or otherwise have the msi/exe from setuptools install python if it is missing? | How to install python if it doesn't exist from setuptools msi | 0 | 0 | 0 | 144 |
38,670,168 | 2016-07-30T03:48:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 38,679,469 | 1 | true | 0 | 1 | Refactor your code such that what you are drawing in your EVT_PAINT handler can be called passing it the wx.DC to be drawn upon, and then call that from your paint handler with the wx.PaintDC or whatever you are currently using. When you want to save it to an image call the same code passing a wx.MemoryDC with a wx.Bitmap selected into it. When it's done the bitmap will have the same contents as the window, and you can then save it to a file or whatever you need to do with it. | 1 | 0 | 0 | I've been trying this for a bit and haven't found a solution that works for me
I have a wx.scrolledcanvas that I'm trying to save to an image; however, when I use the answers I've found, they all save only the visible portion of the canvas, and not the full canvas. Is there any way to save the entirety of the scrolled canvas to a file?
Thanks | Saving wxPython scrolledcanvas contents to image | 1.2 | 0 | 0 | 224 |
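A hedged sketch of the refactoring that answer describes (draw_content and the sizes are invented for illustration; exact constructors vary slightly between wxPython versions):

```python
import wx

def draw_content(dc, width, height):
    # Every bit of drawing goes through this one function,
    # whichever DC it is handed
    dc.SetBackground(wx.Brush("white"))
    dc.Clear()
    dc.DrawLine(0, 0, width, height)

def save_canvas_to_png(width, height, path):
    bmp = wx.Bitmap(width, height)     # wx.EmptyBitmap(...) on classic wxPython
    dc = wx.MemoryDC()
    dc.SelectObject(bmp)
    draw_content(dc, width, height)    # same code the EVT_PAINT handler calls
    dc.SelectObject(wx.NullBitmap)     # detach so the bitmap is finalized
    bmp.SaveFile(path, wx.BITMAP_TYPE_PNG)

if __name__ == "__main__":
    app = wx.App(False)                # a wx.App must exist before drawing objects
    save_canvas_to_png(800, 2000, "canvas.png")
```

Passing the canvas's full virtual size (e.g. from GetVirtualSize()) rather than the client size is what captures the whole scrolled area instead of just the visible portion.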
38,670,372 | 2016-07-30T04:36:00.000 | 24 | 0 | 0 | 0 | python,amazon-web-services,boto3 | 38,707,084 | 2 | false | 0 | 0 | Resources are just a resource-based abstraction over the clients. They can't do anything the clients can't do, but in many cases they are nicer to use. They actually have an embedded client that they use to make requests. The downside is that they don't always support 100% of the features of a service. | 1 | 50 | 0 | Boto3 Mavens,
What is the functional difference, if any, between Clients and Resources?
Are they functionally equivalent?
Under what conditions would you elect to invoke a Boto3 Resource vs. a Client (and vice-versa)?
Although I've endeavored to answer this question by RTM...regrets, understanding the functional difference between the two eludes me.
Your thoughts?
Many, many thanks!
Plane Wryter | Are Boto3 Resources and Clients Equivalent? When to Use One or the Other? | 1 | 0 | 1 | 6,208
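A side-by-side illustration of the two with S3 (the bucket name is a placeholder):

```python
import boto3

# Client: a thin, 1:1 mapping onto the service API
s3_client = boto3.client("s3")
resp = s3_client.list_objects_v2(Bucket="my-example-bucket")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# Resource: an object-oriented layer over that same embedded client
s3 = boto3.resource("s3")
bucket = s3.Bucket("my-example-bucket")
for obj in bucket.objects.all():   # pagination handled for you
    print(obj.key)
```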
38,671,739 | 2016-07-30T08:00:00.000 | 1 | 1 | 0 | 0 | linux,python-3.x,pyglet,nodebox | 42,557,054 | 1 | true | 0 | 1 | I had the same error. Placing all the .py files except (and this is important) the __init__.py file in the main libraries folder fixed it for me. The final path should look like ~/lib/python3.5/site-packages/bezier.py | 1 | 0 | 0 | I'm trying to import everything from nodebox.graphics into my python 3.5 code but I get errors:
ImportError: No module named 'bezier'
To mention, this module exists in nodebox/graphics. From what I found in the python documentation, I have to add the nodebox and pyglet folders to the directory of my code, but that did not work.
I also didn't succeed in adding them to system directories.
How can I solve the problem and run my code properly?
P.S. I'm currently using ubuntu 16.04 if it matters. | use nodebox as a module for python 3.5 | 1.2 | 0 | 0 | 730 |
38,672,768 | 2016-07-30T10:15:00.000 | 2 | 0 | 1 | 0 | ipython,anaconda,seaborn,conda | 38,698,060 | 1 | true | 0 | 0 | conda is a command line tool, not a Python function. You should be typing these commands in a bash (or tcsh, etc.) shell, not in the IPython interpreter. | 1 | 1 | 0 | I've recently tried to install seaborn for IPython, the latter of which was installed using anaconda. However, when I ran conda install seaborn, I got a syntax error. I tried again with conda install -c anaconda seaborn=0.7.0, but a syntax error was returned again. Apologies for my limited programming knowledge, but could anyone provide advice on how to resolve this issue? | Syntax error while installing seaborn using conda | 1.2 | 0 | 0 | 494
38,673,550 | 2016-07-30T11:42:00.000 | 2 | 0 | 0 | 1 | python,tornado,wsgi,falcon,hendrix | 38,676,247 | 1 | false | 1 | 0 | So I found the solution: I created a python file according to hendrix's docs and imported my app's wsgi callable there. | 1 | 2 | 0 | Hendrix is a WSGI-compatible server written in Tornado. I was wondering if it can be used to run an app written in Falcon? | Can I use Hendrix to run Falcon app? | 0.379949 | 0 | 0 | 52
38,676,315 | 2016-07-30T16:50:00.000 | 2 | 0 | 1 | 0 | python,django,virtualenv | 38,676,435 | 4 | false | 1 | 0 | Personally I do.
Virtualenvs help you keep the dependencies required for a project organised and manageable. If you have a django 1.7 project, it will require django1.7 and thus install it in your virtualenv. Without a virtualenv, you might decide to take on a project that requires django1.10. This means your django1.7 project might break. To avoid such a scenario use a virtual environment. | 1 | 0 | 0 | Do you create a new virtualenv every time you start a new project?
I'm going through some tutorials on the web and they create a virtualenv first, then pip install django in the virtualenv. But there's one tutorial I saw saying that you wouldn't create a project within the virtualenv and it's only used for dependencies. | python django: create a new virtualenv for each django project? | 0.099668 | 0 | 0 | 1,824
38,676,595 | 2016-07-30T17:25:00.000 | 0 | 0 | 1 | 0 | python,debugging,ide,pycharm | 51,597,555 | 1 | false | 0 | 0 | I too have experienced the same problem quite a few times, but with every new version released, new problems pop up. It's not the first time PyCharm has given someone a hard time. In a previous version the IDE would just stop working or not debug at all.
The best way to solve this is by writing to JetBrains so they can find and solve the issue and release a new update. | 1 | 8 | 0 | I just started trying out PyCharm, and while it is very nice, I found the interactive console in the debugger (that can be activated with "Show Python Prompt" on a breakpoint) is unusably slow. If I keep pressing enter, for example, after 2-3 tries I have to wait several seconds for the next prompt to show up.
Is this a common experience? I'm running Pycharm with a pretty fast machine (with i7-3770 CPU) so I was wondering if something is wrong. | Pycharms debugger interactive console very slow | 0 | 0 | 0 | 599 |
38,680,485 | 2016-07-31T03:29:00.000 | 2 | 0 | 1 | 0 | python,multithreading,cython,pickle,gil | 38,712,208 | 1 | false | 0 | 0 | I solved my issue.
The solution was:
Making my object really simple. In my case, I converted my object to an array of simple stringified dictionaries.
I used file.write(stringified_dictionaries) directly instead of using pickle. This reduced the time for serializing the python object to a string.
Since disk I/O does not require the GIL in python, the only moment the main thread blocked was the moment of converting my object, which was really short. | 1 | 2 | 0 | Currently I'm using python 3.4.3 and developing a PyQt5 application.
In my app, there's a QThread, and some large object(100MB) is being (pickle) dumped by the thread.
However, dumping that object takes 1~2 seconds, and it blocks the main thread for about 1~2 seconds because of the GIL.
How can I solve this issue (i.e. not block the main thread)?
I think that serializing my object to a string takes time and requires the GIL, which eventually blocks the main thread. (As far as I know, writing to a file does not require the GIL.)
I'm thinking about using Cython, but since I'm a beginner in cython, I'm not sure whether or not using Cython will solve this issue.
Is there any way to work around this issue?
Edit: I tried the multiprocessing module, but the intercommunication time (passing shared memory variables across processes) also takes about 1~2 seconds, which eventually gives no advantage. | pickle.dump blocks the main thread in multithreaded python application because of GIL | 0.379949 | 0 | 0 | 791
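A sketch of the workaround the asker describes (json is used here as one way to "stringify" simple dictionaries; the names are invented):

```python
import json

def dump_fast(records, path):
    # json.dumps on small plain dicts is quick, and the file.write
    # itself releases the GIL, so the GUI thread stays responsive
    text = "\n".join(json.dumps(r) for r in records)
    with open(path, "w") as f:
        f.write(text)
```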
38,685,275 | 2016-07-31T15:03:00.000 | 0 | 0 | 1 | 0 | python-3.x,visualization,python-ggplot | 38,685,621 | 1 | true | 0 | 0 | Found the issue: some files have not been fully converted for python3.
Open file /usr/local/lib/python3.5/dist-packages/ggplot/ggplot.py and change
import StringIO
to this
from io import StringIO
I am not getting any errors now, but there could be some other files where python2 code needs to be converted to work in python3. | 1 | 0 | 0 | I have Ubuntu Gnome 16.04 with both python 2.7 and python 3.5 installed.
I have installed ggplot for python 3.5 but am not able to import it.
I am getting ImportError: No module named 'StringIO'.
Am I missing something? As far as I know, the StringIO module has been merged into the io module in python 3. | Not able to import ggplot in python 3.5 | 1.2 | 0 | 0 | 339
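A version-agnostic variant of that fix, if you would rather not hand-edit for one specific Python (a common compatibility idiom, not something ggplot ships):

```python
try:
    from StringIO import StringIO   # Python 2
except ImportError:
    from io import StringIO         # Python 3: StringIO lives in io
```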
38,686,493 | 2016-07-31T17:14:00.000 | 0 | 0 | 1 | 1 | python,installation | 38,686,576 | 3 | false | 0 | 0 | You may try using which python to find out the location of python on your computer. If that doesn't work and you cannot find your installation directory, you may need to reinstall; make sure you remember the installation directory and add it as an environment variable in Windows. | 2 | 0 | 0 | I have installed Python 3.5.2 on my windows 8 computer. I tried python --version in cmd and it gave me that stupid error:
"python is not recognized as an internal or external command..."
I also have no files named python on my computer anywhere. I used the search feature in file explorer and I've searched manually. I even looked through the hidden files. I have tried to install Python 3 times and the same thing keeps happening. Any help appreciated. | Where is python folder? Python install problems | 0 | 0 | 0 | 2,715
38,686,493 | 2016-07-31T17:14:00.000 | 4 | 0 | 1 | 1 | python,installation | 38,686,608 | 3 | true | 0 | 0 | In Windows 8 for Python 3.* I believe it is:
C:\Users\yourusername\AppData\Local\Programs\Python\
To use that from the Windows command line you will need to add it to your path.
Control Panel -> Advanced System Settings -> Environment Variables -> System Variables -> Path
Add in the Python path at the end after a semi-colon; do not delete the others. | 2 | 0 | 0 | I have installed Python 3.5.2 on my windows 8 computer. I tried python --version in cmd and it gave me that stupid error:
"python is not recognized as an internal or external command..."
I also have no files named python on my computer anywhere. I used the search feature in file explorer and I've searched manually. I even looked through the hidden files. I have tried to install Python 3 times and the same thing keeps happening. Any help appreciated. | Where is python folder? Python install problems | 1.2 | 0 | 0 | 2,715
38,686,528 | 2016-07-31T17:17:00.000 | 2 | 0 | 0 | 0 | pythonanywhere | 38,704,698 | 1 | false | 0 | 0 | Your code running on PythonAnywhere could be on a whole bunch of IPs that could change at any time. You could try to add all the IPs, but that might not be the best or most sustainable approach. | 1 | 2 | 0 | I have a Flask webapp running on PythonAnywhere. I've recently been having a look at using Google Cloud's MySQL service. It requires a list of IP addresses to be whitelisted for access.
How can I find this? I've tried 50.19.109.98, which is the IP address for PythonAnywhere, but unless there is a secondary issue, that's not it.
Thanks,
Ben | Pythonanywhere: getting the IP address for database access whitelist | 0.379949 | 1 | 0 | 1,260 |
38,688,707 | 2016-07-31T21:54:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,sublimetext3,virtualenv | 38,689,463 | 1 | false | 0 | 0 | Solved. In ST3, use Virtualenv: Add Directory instead of Virtualenv: New. The latter creates a new virtualenv (hence the new Scripts folder). | 1 | 0 | 0 | I'm trying to run Python scripts inside a virtualenv from Sublime Text 3. When I activate the virtualenv in ST3 and choose the .py, ST3 creates a Scripts folder inside the preexisting Scripts folder (for a new .py). What is causing this problem and how do I stop this from happening?
Following are the detailed steps I follow:
Create virtualenv Somevenv from CMD
Navigate to Somevenv\Scripts
activate
pip install somePackage
Select Virtualenv:New (Virtualenv: Activate does nothing)
Paste \path\to\Somevenv\Scripts under Virtualenv Path
Select c:\Python27
ST3 does its thing and produces this message:
New python executable in C:\Users\Gandalf\Documents\Python_Virtual_Env\Legolas\Scripts\Scripts\python.exe
Installing setuptools, pip, wheel...done.
As you see, ST3 creates a Scripts folder inside the previous Scripts folder. As a result, the packages installed in step 4 are not used. I want to stop the creation of this second Scripts folder. | Sublime Text3 creates Scripts inside Scripts folder inside virtualenv | 0 | 0 | 0 | 36
38,689,268 | 2016-07-31T23:31:00.000 | 0 | 0 | 1 | 0 | python-2.7,geolocation,map-projections | 38,689,379 | 1 | true | 0 | 0 | For a small scale like that, the map projection does not really matter. You can first make the latitude and longitude relative to the map by subtracting from them the latitude and longitude of the top-left-hand corner pixel of the image.
To convert the resulting angles to distances, convert them to radians and then multiply them by the average radius of the Earth (about 6371000 m), or if you want to be more precise, multiply them by the radius of the Earth in the area (ranging from 6356752 m at the poles to 6378137 m at the Equator).
To convert these distance offsets to points on the map image in pixel coordinates, simply divide them by 250. | 1 | 1 | 0 | I'm making a weather program for myself in Python using images of the local rain radar (png) which have been modified to a custom size of (496, 480) pixels. I need advice on drawing my location (from latitude and longitude) on the image, given that I know each pixel represents 250m, and I know both the image coordinate and the corresponding real-world coordinate of a given point p. | Convert geolocation Lat and Long to pixel coordinates on custom image | 1.2 | 0 | 0 | 1,367
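A small sketch of that computation (the function and variable names are made up; it assumes a north-up image whose top-left corner lat/long is known, and adds the usual cos(latitude) correction for longitude):

```python
import math

EARTH_RADIUS_M = 6371000.0   # average radius; refine per latitude if needed
METERS_PER_PIXEL = 250.0

def latlon_to_pixel(lat, lon, origin_lat, origin_lon):
    # offsets in meters, measured from the image's top-left corner
    dy = math.radians(origin_lat - lat) * EARTH_RADIUS_M              # +y points south
    dx = (math.radians(lon - origin_lon) * EARTH_RADIUS_M
          * math.cos(math.radians(origin_lat)))                       # shrink with latitude
    return int(round(dx / METERS_PER_PIXEL)), int(round(dy / METERS_PER_PIXEL))
```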
38,696,394 | 2016-08-01T10:21:00.000 | 2 | 1 | 1 | 0 | python,callstack,rascal | 38,777,882 | 1 | true | 0 | 0 | When you have a representation of your Python source code as a tree (parse tree or abstract syntax tree), you can convert this to a Rascal data type and use Rascal for further processing. This can be achieved by connecting an existing Python parser to generate the Rascal representation of your Python program, for example by simply dumping the parse tree in a format that can be read by Rascal.
Why this complex solution? Because the built-in parser generator of Rascal is not (yet) well equipped to parse indentation-sensitive languages like Python. | 1 | 1 | 0 | I would like to scan all the project files in a python project, identify all instantiations of objects that are subclasses of a certain type, and then:
1. Add the "yield" keyword to the object instantiation
2. Identify all call stacks for that object creation, and add a decorator to all functions in those call stacks.
Is that doable using Rascal? | python source file analysis and transformation using Rascal | 1.2 | 0 | 0 | 149
38,696,575 | 2016-08-01T10:30:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 38,696,887 | 4 | false | 0 | 0 | Python itself does not include any facilities to allow the programmer direct access to memory. This means that sadly (or happily, depending on your outlook) the answer to your question is "no". | 2 | 8 | 0 | I know python is a high-level language and manages memory allocation etc. on its own; the user/developer doesn't need to worry much, unlike in other languages like C or C++.
But is there a way through which we can write some value to a particular memory address in python?
I know about id(), which if cast to hex gives a hexadecimal location, but how can we write to a location? | Writing to particular address in memory in python | 0.049958 | 0 | 0 | 5,817
38,696,575 | 2016-08-01T10:30:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 51,573,628 | 4 | false | 0 | 0 | I can't advise how but I do know (one) why. Direct writing to registers allows one to set up a particular microcontroller. For example, configuring ports or peripherals. This is normally done in C (closer to hardware) but it would be a nice feature if you wanted to use Python for other reasons. | 2 | 8 | 0 | I know python is a high level language and manages the memory allocation etc. on its own user/developer doesn't need to worry much unlike other languages like c, c++ etc.
but is there a way through will we can write some value in a particular memory address in python
i know about id() which if hex type casted can give hexadecimal location but how can we write at a location. | Writing to particular address in memory in python | 0 | 0 | 0 | 5,817 |
38,699,035 | 2016-08-01T12:33:00.000 | 0 | 0 | 0 | 0 | python | 38,837,806 | 1 | false | 1 | 0 | Just wanted to close out on this in case someone in the future is looking at this as well.
I was able to capture the password used to log in by adding the following to my db.py:
def on_ldap_connect(form):
    username = request.vars.username
    password = request.vars.password
You can save the user/password to some session variable or secure file to use for authenticating to other services.
auth.settings.login_onaccept.append(on_ldap_connect) | 1 | 0 | 0 | I'm just starting to use Web2PY.
My basic one-page app authenticates users against an AD-based LDAP service.
I need to collect other data via rest api calls on behalf of the user from the server side of the app.
I'd like to cache the username and password of the user for a session so the user doesn't have to be prompted for credentials multiple times.
Is there an easy way to do this? | Web2PY caching password | 0 | 0 | 1 | 71
38,699,927 | 2016-08-01T13:15:00.000 | 0 | 0 | 0 | 1 | python,django,amazon-web-services,amazon-s3 | 38,701,216 | 1 | false | 1 | 0 | No magic solution here. You have to manage states on your model, especially when working with celery tasks. You might need another field called state with the states: NONE (no action is being done), PROCESSING (the task was sent to celery to process) and DONE (the image was rotated).
NONE is the default state. You should set the PROCESSING state before calling the celery task (and not inside the celery task; I've already had bugs because of that), and finally the celery task should set the status to DONE when finished.
When the task is fast the user will not see any difference, but when it takes some time you might want to add a message like "image is being processed, please try again" or something like that.
At least that's how I do it... Hope this helps | 1 | 0 | 0 | So I have a Django model which has a FileField. This FileField generally contains an image. After I receive the picture from a request, I need to run some picture analysis processes.
The problem is that sometimes I need to rotate the picture before running the analysis (which runs in celery, loading the model again and getting the instance by its id). So I get the picture, rotate it and save it with:
storage.save(image_name, new_image_file), where storage is the django default storage (using AWS S3)
The problem is that in some minor cases (let's say 1 in 1000), the picture is not yet rotated when the analysis process runs in celery, even though the rotation process was already executed; but afterwards, if I open the image, it is already rotated. So it seems the save method is taking some time to update the file in the storage (asynchronously)...
Has anyone had a similar issue? Is there a way to check if the file was already updated, like with a callback or a kind of handler?
Many thanks! | Django: Know when a file is already saved after using storage.save() | 0 | 0 | 0 | 49
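A rough sketch of the state pattern from that answer (the model, field, and task names are illustrative, not the asker's code; analyze_task stands in for a celery task defined elsewhere):

```python
from django.db import models

class Picture(models.Model):
    STATE_NONE, STATE_PROCESSING, STATE_DONE = "NONE", "PROCESSING", "DONE"
    image = models.FileField()
    state = models.CharField(max_length=10, default=STATE_NONE)

def start_analysis(picture):
    # Set PROCESSING here, before dispatching -- never inside the task
    picture.state = Picture.STATE_PROCESSING
    picture.save(update_fields=["state"])
    analyze_task.delay(picture.pk)   # the celery task sets STATE_DONE at the end
```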
38,703,892 | 2016-08-01T16:37:00.000 | 0 | 0 | 0 | 0 | python,sql,google-forms | 39,074,108 | 1 | false | 1 | 0 | You can add a script in the Google spreadsheet with an onsubmit trigger. Then you can do whatever you want with the submitted data. | 1 | 0 | 0 | I am creating a web project where I take in Form data and write to a SQL database.
The forms will be a questionnaire with logic branching. Due to the nature of the form, and the fact that this is an MVP project, I've opted to use an existing form service (e.g Google Forms/Typeform).
I was wondering if it's feasible to have form data submitted to multiple different tables (e.g. CustomerInfo, FormDataA, FormDataB, etc.). While this might be possible with a custom form application, I do not think it's possible with Google Forms and/or Typeform.
Does anyone have any suggestions on how to parse user submitted Form data into multiple tables when using Google Forms or Typeform? | Using Google Forms to write to multiple tables? | 0 | 1 | 0 | 638 |
38,705,921 | 2016-08-01T18:42:00.000 | 0 | 0 | 1 | 0 | python-2.7,pip,anaconda,canopy | 38,705,976 | 2 | false | 0 | 0 | Maybe you can try the following.
Find where both pips reside (whereis pip; I have mine in ~/anaconda2/bin), then cd to the pip directory of the python version you want, and execute it from there. | 1 | 0 | 0 | Using pip with different Python versions is a common problem, as I see when I search the Internet. There are a lot of answers around, also in this forum. However, nobody seems to encounter the same problem that I have:
I use Canopy python the most, and it was installed first. Later I installed Anaconda. Now when I try to install a program with pip, it always installs it in Canopy (or refuses to install it because it is already installed in Canopy).
for example:
$ pip install ipython
gives:
Requirement already satisfied...
but there is no ipython in my Anaconda folder; it is in the /Enthought/Canopy_64bit/... folder
How can I overcome this problem?
Both versions are 2.7, and even though one is 2.7.11 and the other 2.7.12, it did not work to distinguish between the two by this. | New point of view: pip dealing with multiple Python versions, Canopy, Anaconda on Linux | 0 | 0 | 0 | 599
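One way to guarantee which installation receives the package is to run pip through the interpreter itself (the canonical python -m pip pattern), for example via sys.executable:

```python
import subprocess
import sys

# Installs into whatever Python is running this script -- so invoking it
# with the Anaconda python puts the package in Anaconda, not Canopy.
subprocess.check_call([sys.executable, "-m", "pip", "install", "ipython"])
```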
38,706,050 | 2016-08-01T18:50:00.000 | 1 | 0 | 0 | 0 | python,profiling,igraph | 38,720,955 | 1 | true | 0 | 0 | If you don't have vertex or edge attributes, your best bet is a simple edge list, i.e. Graph.Read_Edgelist(). The disadvantage is that it assumes that vertex IDs are in the range [0; |V|-1], so you'll need to have an additional file next to it where line i contains the name of the vertex with ID=i. | 1 | 0 | 1 | I have a very large network structure which I am working with in igraph. There are many different file formats which igraph Graph objects can write to and then be loaded from. I ran into memory problems when using g.write_picklez, and Graph.Read_Lgl() takes about 5 minutes to finish. I was wondering if anyone had already profiled the numerous file format choices for write and load speed as well as memory footprint. FYI this network has ~5.7m nodes and ~130m edges. | fastest format to load saved graph structure into python-igraph | 1.2 | 0 | 0 | 484 |