Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
42,110,564 | 2017-02-08T10:27:00.000 | 0 | 0 | 1 | 0 | python,asynchronous,tornado,sleep,pytest | 49,313,320 | 2 | false | 0 | 0 | You may want to check whether your request method is POST or PUT: I got the 599 response code when I used the POST method without posting any data, and it went away once I added data.
error:
res = self.fetch('/', method='POST')
fixed:
res = self.fetch('/', method='POST', data={}) | 1 | 0 | 0 | I have a problem with one of my async tests: tornado seems not to be up and running (I get 599s), even though it is started as a fixture. I would like to verify that it is indeed running, by doing the following:
Start the test
Before the requests are actually sent to tornado, I would like my code to "sleep" for a long time so that I can manually check with the browser that tornado is indeed running.
How can I tell python to sleep without blocking the async loop? If I do time.sleep, the whole process (and thus the loop) is not responsive for the duration of the sleep. | Sleep in async application, without blocking event loop | 0 | 0 | 0 | 964 |
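For the non-blocking sleep asked about above, a minimal sketch (assuming Tornado 4.1+, where tornado.gen.sleep exists; the coroutine name and the 300-second duration are illustrative):

```python
from tornado import gen

@gen.coroutine
def pause_for_manual_check():
    # Unlike time.sleep(300), this yields control back to the IOLoop,
    # so the server keeps answering browser requests while we wait.
    yield gen.sleep(300)
```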
42,114,174 | 2017-02-08T13:13:00.000 | 0 | 1 | 0 | 0 | node.js,python-2.7,stream,sensors,raspberry-pi3 | 42,114,793 | 1 | false | 1 | 0 | Depending on the amount of data and the complexity/simplicity that you want to achieve, you can e.g.
hit an HTTP endpoint of your Node server from the Python program every time there is new data (see the sketch after this record)
connect with WebSocket and send new data as messages
connect with TCP once and send new data as new lines
connect with TCP every time there is new data
send a UDP packet for every new piece of data
if the Node and Python programs are running on the same system then you can use IPC, named pipes etc.
there are more ways to do it
All of those can be done with Node and Python. | 1 | 0 | 0 | I have an idea for a small project where I will try to transfer real-time sensor data, captured and converted to a digital signal using an MCP3008, to the Node.js server that is installed on a Raspberry Pi.
My question is: what is the most efficient and/or fastest way to transfer data from the Python program to the Node.js server so that it can be displayed in a web page?
Thanks for your advice. | Data transfer between Python and NodeJS in Raspberry Pi | 0 | 0 | 0 | 648
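A minimal sketch of the HTTP option from the answer above (the port, route, and payload shape are assumptions, not taken from the original post):

```python
import requests  # third-party package, assumed to be installed

def push_reading(value):
    # hypothetical /sensor route exposed by the Node.js server on the Pi
    requests.post("http://localhost:3000/sensor", json={"value": value}, timeout=2)
```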
42,116,597 | 2017-02-08T15:01:00.000 | 2 | 0 | 0 | 1 | python,macos,docker,machine-learning,tensorflow | 44,015,407 | 3 | false | 0 | 0 | Let's assume you have a script my_script.py located at /Users/awesome_user/python_scripts/ on your Mac.
By default, the TensorFlow image's bash session starts you in /notebooks.
Run this command in your terminal: docker run --rm -it -v /Users/awesome_user/python_scripts/:/notebooks gcr.io/tensorflow/tensorflow bash
This will map your local Mac folder /Users/awesome_user/python_scripts/ to the container's local folder /notebooks
Then just run python my_script.py from that bash session; running ls should also reveal your folder contents. | 1 | 1 | 0 | I just got a Mac after switching from Windows and installed TensorFlow using Docker, and everything is working fine, but I want to run a Python script that I have from before. Is there any way to run a Python script in Docker on a Mac, using the terminal? | How to run Python Scripts on Mac Terminal using Docker with Tensorflow? | 0.132549 | 0 | 0 | 2,693
42,117,777 | 2017-02-08T15:54:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,theano,keras | 42,120,658 | 1 | false | 0 | 0 | In Keras < 1.0 (I believe), one would pass the show_accuracy argument to model.fit in order to display the accuracy during training.
This method has been replaced by metrics, as you can now define custom metrics to help you during training. One of these metrics is, of course, accuracy. The changes to your code to keep the same behavior are minimal (a short sketch follows this record):
Remove show_accuracy from the model.fit call.
Add metrics = ["accuracy"] to the model.compile call.
And that's it. | 1 | 1 | 1 | I'm new to Keras in Python; I got this warning message after executing my code. I tried searching on Google but still didn't manage to solve this problem. Thank you in advance.
UserWarning: The "show_accuracy" argument is deprecated, instead you
should pass the "accuracy" metric to the model at compile time:
model.compile(optimizer, loss, metrics=["accuracy"])
warnings.warn('The "show_accuracy" argument is deprecated, ' | Keras with Theano BackEnd | 0.197375 | 0 | 0 | 223 |
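A minimal sketch of the change described in the answer above (the optimizer, loss, and training data names are placeholders, not taken from the original code):

```python
# Before (Keras < 1.0 style):
#   model.fit(x_train, y_train, show_accuracy=True)
# After:
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])   # accuracy is requested at compile time
model.fit(x_train, y_train)           # no show_accuracy argument any more
```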
42,118,850 | 2017-02-08T16:42:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,neural-network,generator,keras | 46,009,804 | 2 | false | 0 | 0 | I think the only option here is to NOT shuffle the files. I have been wondering this myself and this is the only thing I could find in the docs. Seems odd and not correct... | 1 | 7 | 1 | If I don't shuffle my files, I can get the file names with generator.filenames. But when the generator shuffles the images, filenames isn't shuffled, so I don't know how to get the file names back. | How to retrieve the filename of an image with keras flow_from_directory shuffled method? | 0 | 0 | 0 | 1,974 |
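A minimal sketch of the workaround suggested in the answer above: disable shuffling so that the filenames attribute lines up with the order batches are yielded in (the directory path and generator settings are assumptions):

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)
gen = datagen.flow_from_directory("data/validation", shuffle=False)
ordered_names = gen.filenames   # matches the order of the unshuffled batches
```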
42,120,541 | 2017-02-08T18:09:00.000 | 0 | 0 | 0 | 1 | python,macos,google-app-engine,pycharm | 42,124,236 | 1 | false | 1 | 0 | I did a clean install recently and am not using AppEngineLauncher anymore; I'm not sure it even ships with the newer SDK.
My GAE is located here:
/usr/local/google-cloud-sdk/platform/google_appengine
Looks like you might be using an older version of AppEngine SDK | 1 | 0 | 0 | My colleague and I both have Macs, and we both have PyCharm Professional, same version (2016.3.2) and build (December 28, 2016). We use a repository to keep our project directories in sync, and they are currently identical. Under Preferences, we both have "Enable Google App Engine support" checked, and we both have the same directory shown as "SDK directory", with the same files in that directory.
When I choose menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel appears at the bottom of my PyCharm window. The first line is:
/usr/bin/python
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/appcfg.py
update .
and the last line is:
appcfg.py >
I can also run that update command from a Terminal window.
Meanwhile, my colleague can also run the update command from a Terminal window. But when he runs menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel only shows:
appcfg.py >
We've researched this extensively and made many attempts at repair, no luck so far. Any help will be most appreciated. | PyCharm PRO for Mac GAE upload not working | 0 | 0 | 0 | 40 |
42,120,805 | 2017-02-08T18:23:00.000 | 0 | 0 | 1 | 0 | python,spss | 42,143,686 | 2 | false | 0 | 0 | Once you have created the syntax file, which can have any extension, you run it by executing an INSERT command pointing to the file.
You can also execute it directly from a Python script without creating a file by using the spss.Submit API. | 2 | 0 | 0 | I have a Python script that generates text files containing SPSS syntax. Currently I have to copy the syntax from the text file and paste it into an empty .sps file in order to run the syntax. I need to automatically generate .sps files, either using a Python script or any other automated method, because I have thousands of analyses to run.
I tried to generate a single large SPSS syntax file via my Python script and then paste that into a single SPSS syntax file, but that doesn't solve the issue because the resulting syntax file is too large and crashes SPSS, so I need to be able to create multiple separate files.
How do you create SPSS syntax files (.sps) outside of the SPSS GUI? | How do you create SPSS syntax files (.sps) using a system other than SPSS? | 0 | 0 | 0 | 349
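Since an .sps file is just plain text, a hedged sketch of generating one straight from the Python script (the file name and syntax string are placeholders):

```python
generated_syntax = "FREQUENCIES VARIABLES=age.\n"   # placeholder SPSS syntax

with open("analysis_001.sps", "w") as f:
    f.write(generated_syntax)
```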
42,120,805 | 2017-02-08T18:23:00.000 | 0 | 0 | 1 | 0 | python,spss | 42,340,766 | 2 | false | 0 | 0 | Please clarify your problem. As the comments indicate, SPSS syntax files are just plain text files, which can be any size. You mention thousands of analyses. Can it be that the output generated is too large and that is what crashes your SPSS? | 2 | 0 | 0 | I have a Python script that generates text files containing SPSS syntax. Currently I have to copy the syntax from the text file and paste it into an empty .sps file in order to run the syntax. I need to automatically generate .sps files, either using a Python script or any other automated method, because I have thousands of analyses to run.
I tried to generate a single large SPSS syntax file via my Python script and then paste that into a single SPSS syntax file, but that doesn't solve the issue because the resulting syntax file is too large and crashes SPSS, so I need to be able to create multiple separate files.
How do you create SPSS syntax files (.sps) outside of the SPSS GUI? | How do you create SPSS syntax files (.sps) using a system other than SPSS? | 0 | 0 | 0 | 349
42,121,508 | 2017-02-08T19:01:00.000 | -3 | 0 | 0 | 0 | python,video,moviepy | 42,135,938 | 4 | false | 0 | 0 | clip1=VideoFileClip('path')
c=clip1.duration
print(c) | 1 | 3 | 0 | I have been using MoviePy to combine several shorter video files into hour-long files. Some small files are "broken": they contain video but were not completed correctly (i.e. they play with VLC but there is no duration and you cannot skip around in the video).
I noticed this issue when I try to create a clip using the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code it seems to be an issue when checking the duration of the file in ffmpeg_reader.py where it looks for the duration parameter in the video file. However, since the file never finished recording properly this information is missing. I'm not very familiar with the way video files are structured so I am unsure of how to proceed from here. | Moviepy unable to read duration of file | -0.148885 | 0 | 0 | 5,652 |
42,121,512 | 2017-02-08T19:02:00.000 | 0 | 0 | 1 | 0 | python-2.7,url,cgi,query-string | 42,122,277 | 1 | false | 1 | 0 | Thanks for all the help on what was actually not too complicated a question. What I was looking for was a router/dispatcher that is usually handled by a framework fairly simply through an @route decorator or something similar. Opting for a more efficient approach, all I had to do was import os and then look at os.environ.get('PATH_INFO', '') for all the data I could possibly need. For anyone else following the path I was, that is how I found my way. | 1 | 0 | 0 | I know I am using the wrong search terms and that's why I haven't been able to suss out the answer myself. However, I cannot seem to figure out how to use the CGI module to pull what I think counts as a query string from the URL.
given a url www.mysite.com/~usr/html/cgi.py/desired/path/info how would one get the desired/path/info out of the url? I understand GET and POST requests and know I can use CGI's FieldStorage class to get at that data to fill my Jinja2 templates out and such. But now I want to start routing from a landing page with different templates to select before proceeding deeper into the site. I'm hoping the context is enough to see what I'm asking because I am lost in a sea of terms that I don't know.
Even if it's just the right search term, I need something to help me out here. | Getting what I think is a part of the query string using python 2.7/CGI | 0 | 0 | 0 | 33 |
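A minimal sketch of the approach described in the answer above, reading the extra path segment inside the CGI script (variable names are illustrative):

```python
import os

# For www.mysite.com/~usr/html/cgi.py/desired/path/info this yields '/desired/path/info'
path_info = os.environ.get('PATH_INFO', '')
segments = [s for s in path_info.split('/') if s]   # ['desired', 'path', 'info']
```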
42,125,195 | 2017-02-09T22:54:00.000 | 2 | 0 | 0 | 0 | timestamp,ironpython,spotfire | 42,451,598 | 1 | false | 1 | 0 | To achieve this, you need to add an extra column to one Information Link you are using, or have a separate Information Link with only this column element, say "Schedule Refresh Time".
Initially, do not be concerned about which DB table or column you are using; just pull any column with data type DateTime.
Now add this column to your Information Link and edit the SQL. Find the column you added in the SQL query, and change it from something like
T1."Any Column" as "Schedule Refresh Time"
To something like
SYSDATE as "Schedule Refresh Time".
Done.
From now onwards, whenever the scheduled update pulls data from the Information Link, this column will capture the current server time, which is the time of the scheduled refresh. | 1 | 1 | 0 | I have a Spotfire template that runs every night using the scheduler, and I want to show its last refresh time on the template. Is there any way I can do it with native Spotfire functionality or using IronPython? | How to get a file's timestamp in spotfire? | 0.197375 | 0 | 0 | 1,053
42,127,593 | 2017-02-09T03:12:00.000 | 8 | 0 | 1 | 0 | python,naming-conventions,filenames,camelcasing | 61,443,173 | 4 | false | 0 | 0 | There is a difference between the naming convention for the class name and for the file that contains this class. This misunderstanding might come from languages like Java where it is common to have one file per class.
In Python you can have several classes per module (a simple .py file). The classes in this module/file should be named according to the class naming convention: class names should normally use the CapWords convention.
The file containing these classes should follow the module naming convention: modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability.
=> CamelCase should live in the file camelcase.py (or camel_case.py if necessary) | 1 | 99 | 0 | I know that classes in Python are typically cased using camelCase.
Is it also the normal convention to have the file that contains the class also be camelCase'd especially if the file only contains the class?
For example, should class className also be stored in className.py instead of class_name.py? | Should Python class filenames also be camelCased? | 1 | 0 | 0 | 86,953 |
42,127,593 | 2017-02-09T03:12:00.000 | 20 | 0 | 1 | 0 | python,naming-conventions,filenames,camelcasing | 52,767,753 | 4 | false | 0 | 0 | The official convention is to use all lower case for file names (as others have already stated). The reason, however, has not been mentioned...
Python works cross-platform (and it is common to use it in that manner), but file systems vary in their use of casing, so it is better to just eliminate alternate cases. In Linux, for instance, it is possible to have MyClass.py and myclass.py in the same directory. That is not so in Windows!
On a related note, if you have MyClass.py and myclass.py in a git repo, or even just change the casing of the same file, git can act funky when you push/pull between Linux and Windows.
And, while barely on topic, but in the same vein, SQL has these same issues where different standards and configurations may or may not allow UpperCases on table names.
I, personally, find it more pleasant to read TitleCasing / camelCasing even on filenames, but when you do anything that can work cross platform it's safest not to. | 2 | 99 | 0 | I know that classes in Python are typically cased using camelCase.
Is it also the normal convention to have the file that contains the class also be camelCase'd especially if the file only contains the class?
For example, should class className also be stored in className.py instead of class_name.py? | Should Python class filenames also be camelCased? | 1 | 0 | 0 | 86,953 |
42,127,684 | 2017-02-09T03:26:00.000 | 4 | 0 | 0 | 0 | python,django,views,models | 42,127,741 | 2 | false | 1 | 0 | You want the first_name and last_name attributes: request.user.first_name and request.user.last_name | 1 | 8 | 0 | I need to get the first name and last name of a user for a function in my views. How do I do that? What is the syntax for it? I used request.user.email for the email, but I don't see an option for a first name or last name. How do I go about doing this?
Should I import the model into the views and then use it? | How do I get the first name and last name of a logged in user in Django? | 0.379949 | 0 | 0 | 17,555 |
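A minimal sketch of a view using the attributes named in the answer above (assumes the default django.contrib.auth User model and a logged-in user; the view name is illustrative):

```python
from django.http import HttpResponse

def whoami(request):
    # first_name / last_name come from the authenticated User instance
    full_name = "{} {}".format(request.user.first_name, request.user.last_name)
    return HttpResponse(full_name)
```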
42,128,821 | 2017-02-09T05:20:00.000 | 0 | 0 | 1 | 0 | python,nested | 42,128,898 | 1 | true | 0 | 0 | print tempdict.get('a', {}).get('a', [None])[0] | 1 | 0 | 0 | I am using json to load content(json) of a file into a dictionary. Certain elements of this dictionary have a nested dictionary structure. However, this nested dictionary may have certain elements depending on certain criteria. For example:
tempdict = {'a':{'a':[0,1,2,3], 'b':2}, 'b':{'a':1, 'b':2}}
As you can see, in this case tempdict.get('a').get('a')[0] will return 0, but there will be times when the outer element 'a' is missing, and then the expression raises TypeError: 'NoneType' object has no attribute '__getitem__'
I don't know a priori if 'a' will be present or not. So in this scenario is there some form of optional chaining that I can perform?
Appreciate your time and suggestion. | Python: Retrieve elements from a variable nested dictionary | 1.2 | 0 | 0 | 31 |
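A short sketch of why the accepted one-liner avoids the TypeError: each .get supplies a neutral default, so a missing outer key falls through cleanly.

```python
tempdict = {'b': {'a': 1, 'b': 2}}                 # outer 'a' deliberately missing
value = tempdict.get('a', {}).get('a', [None])[0]
print(value)                                       # prints None instead of raising
```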
42,130,491 | 2017-02-09T07:18:00.000 | 5 | 0 | 0 | 0 | python,tensorflow,deep-learning,lstm,recurrent-neural-network | 46,544,816 | 2 | false | 0 | 0 | There is no difference in what the model learns.
At timestep t, RNNs need results from t-1, therefore we need to compute things time-major. If time_major=False, TensorFlow transposes batch of sequences from (batch_size, max_sequence_length) to (max_sequence_length, batch_size)*. It processes the transposed batch one row at a time: at t=0, the first element of each sequence is processed, hidden states and outputs calculated; at t=max_sequence_length, the last element of each sequence is processed.
So if your data is already time-major, use time_major=True, which avoids a transpose. But there isn't much point in manually transposing your data before feeding it to TensorFlow.
*If you have multidimensional inputs (e.g. sequences of word embeddings: (batch_size, max_sequence_length, embedding_size)), axes 0 and 1 are transposed, leading to (max_sequence_length, batch_size, embedding_size) | 1 | 8 | 1 | Do RNNs learn different dependency patterns when the input is batch-major as opposed to time-major? | Batch-major vs time-major LSTM | 0.462117 | 0 | 0 | 4,154 |
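A minimal sketch of the axis swap the answer describes, in case you do want to feed time-major data yourself (x_batch_major is a placeholder tensor, not from the original post):

```python
import tensorflow as tf

# (batch_size, max_sequence_length, embedding_size)
#   -> (max_sequence_length, batch_size, embedding_size)
x_time_major = tf.transpose(x_batch_major, perm=[1, 0, 2])
```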
42,131,205 | 2017-02-09T08:06:00.000 | 2 | 0 | 1 | 0 | python,json,django,pandas | 42,133,075 | 2 | true | 1 | 0 | You can also use pd.DataFrame.from_records() when you have JSON or a dictionary:
df = pd.DataFrame.from_records([ json ]) OR df = pd.DataFrame.from_records([ dict. ])
or
you need to provide iterables for pandas dataframe:
e.g. df = pd.DataFrame({'column_1':[ values ],'column_2':[ values ]}) | 1 | 2 | 1 | I have a simple JSON payload in Django. I catch it with data = request.body and I want to convert it to a pandas DataFrame.
JSON:
{ "username":"John", "subject":"i'm good boy", "country":"UK","age":25}
I already tried the pandas read_json method and json.loads from the json library, but they didn't work. | Django JSON file to Pandas Dataframe | 1.2 | 0 | 0 | 820
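A minimal sketch combining the answer with the question's payload (the literal byte string stands in for request.body inside the Django view):

```python
import json
import pandas as pd

body = b'{"username": "John", "subject": "i\'m good boy", "country": "UK", "age": 25}'
record = json.loads(body.decode("utf-8"))   # in the view this would be request.body
df = pd.DataFrame.from_records([record])
```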
42,132,797 | 2017-02-09T09:28:00.000 | 0 | 0 | 1 | 1 | python,path,sublimetext3 | 42,132,998 | 2 | false | 0 | 0 | You have to add Python to the Environmental Variables. | 1 | 0 | 0 | I have installed Anaconda on Sublime Text 3 and this error pops up each time
whenever I try to build a file:
'python' is not recognized as an internal or external command,
operable program or batch file.
If it involves adding Python to the PATH (on Windows), could you please explain exactly what I need to add.
I use Windows 7 Ultimate, Python 3.6.0 | How do I use Python with Sublime Text 3? | 0 | 0 | 0 | 414
42,135,962 | 2017-02-09T11:49:00.000 | 0 | 0 | 0 | 0 | python,django,widget | 42,137,163 | 1 | true | 1 | 0 | A widget makes no sense in the absence of a field. The field is responsible for accepting the input and validating it; so it is the field that has to determine the name attribute, otherwise it would have no way of knowing what value to use in the data.
But you should never need to render the widget directly by calling its render method. Again, that is the job of the field. | 1 | 0 | 0 | This is a question I've had for some time, as I just don't understand why this decision was made this way.
When we render a widget (e.g., because of using a form), its render function has a name argument. Why, if an HTML tag's name is an attribute, can it not be specified as part of the attrs dict passed to that function? It would make more sense to use name only when you don't specify a name attribute.
For understanding, if I set an attrs {"name": "no_one_knows[]"}, when I render the widget its name should be "no_one_knows[]", not the one passed by arg. That way I could have a HTML tag that can be parsed directly as a list (getlist(..)) in the server side (for example). | About Django's widget design principles | 1.2 | 0 | 0 | 29 |
42,139,307 | 2017-02-09T14:29:00.000 | 1 | 0 | 0 | 1 | google-app-engine,google-cloud-datastore,google-app-engine-python | 42,145,088 | 1 | false | 1 | 0 | In my tests, I found that the database files created by AppEngine Dev and Datastore Emulator are compatible. I was able to copy the local_db.bin from app-engine database to replace the same file in Datastore Emulator's data directory and was able to access the data. | 1 | 0 | 0 | We have two app engine apps, which read/save to the same datastore (that is, same project).
Datastore is actually the way they "transfer data" to each other.
One of the apps is running on standard environment, and the other is running in the flexible environment.
In the flexible environment, to run local tests on my machine without using Google Datastore servers, I have to use the Datastore Emulator, which is already configured.
What I would like now is to find a simple way to export data saved in the standard environment app (created using dev_appserver.py) and import it into the Datastore Emulator.
I would NOT like to push the data to Google's servers and export it from there, if that can be avoided; instead I want to export from the database that ran on my local machine.
Is there a feature/library which might help me with this task? | Exporting data from local standard environment and importing it in Datastore Emulator | 0.197375 | 0 | 0 | 344 |
42,142,261 | 2017-02-09T16:44:00.000 | 0 | 1 | 1 | 0 | python,c,winapi,python-stackless | 42,142,752 | 2 | true | 0 | 0 | Turns out I had to set PYTHONPATH before, then load the dll with a delay. The python library I have seems to be non-standard / modified. | 1 | 0 | 0 | I have a really weird problem with embedding python. If I don't specify PYTHONPATH, Py_Initialize fails with ImportError: No module named site.
If I set PYTHONPATH in cmd and then run my program, it works!
If I set PYTHONPATH programmatically (_putenv_s / SetEnvironmentVariable) it fails with ImportError again.
I've checked that the value is set with system("echo %PYTHONPATH%");, I've made sure multiple times that it is the correct path. I have no idea why it's failing... any ideas appreciated.
Setup: win10 x64, stackless python 2.7 x86 embedded in a C program. | Embedded python not picking up PYTHONPATH | 1.2 | 0 | 0 | 2,090 |
42,142,533 | 2017-02-09T16:57:00.000 | -1 | 0 | 0 | 0 | java,python,mysql,database,database-connection | 42,142,621 | 2 | false | 1 | 0 | Yes you can share the DB, you'll have to install the corresponding dependencies for connecting python to the DB as well as for Java. I've done this with postgresql, mysql and mssql | 1 | 0 | 0 | I created python script that sends notifications when result declared but I want to make website that takes data of student email id and store in database.
The problem here is that I don't know the Django framework, so it takes time to make a website.
Java, database connections, data insertion,
and servlet calls are all easy for me.
I want to know a way for Java/HTML/CSS to take input from the user and store it in a database, and then have the Python program retrieve that data.
Hope you understand my question. | Can we set one database between Java and python? | -0.099668 | 0 | 0 | 212 |
42,143,261 | 2017-02-09T17:33:00.000 | 9 | 1 | 1 | 0 | java,python,interface,mixins | 42,143,613 | 1 | true | 1 | 0 | Well, the 'abstract methods' part is quite important.
Java is strongly typed. By specifying the interfaces in the type definition, you use them to construct the signature of the new type. After the type definition, you have promised that this new type (or some sub-class) will eventually implement all the functions that were defined in the various interfaces you specified.
Therefore, an interface DOES NOT really add any methods to a class, since it doesn't provide a method implementation. It just adds to the signature/promise of the class.
Python, however, is not strongly typed. The 'signature' of the type doesn't really matter, since it simply checks at run time whether the method you wish to call is actually present.
Therefore, in Python the mixin is indeed about adding methods and functionality to a class. It is not at all concerned with the type signature.
In summary:
Java Interfaces -> Functions are NOT added, signature IS extended.
Python mixins -> Functions ARE added, signature doesn't matter. | 1 | 2 | 0 | I have been reading about Python-Mixin and come to know that it adds some features (methods) to class. Similarly, Java-Interfaces also provide methods to class.
The only difference I could see is that Java interfaces contain abstract methods while Python mixins carry an implementation.
Any other differences ? | Difference between Java Interfaces and Python Mixin? | 1.2 | 0 | 0 | 699 |
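A small sketch illustrating the "functions ARE added" point for Python mixins (the class names are made up for illustration):

```python
import json

class JSONMixin(object):
    def to_json(self):                    # concrete behaviour, not just a promise
        return json.dumps(self.__dict__)

class User(JSONMixin):
    def __init__(self, name):
        self.name = name

print(User("ada").to_json())              # the mixin's method is really there
```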
42,144,698 | 2017-02-09T18:53:00.000 | 0 | 0 | 1 | 0 | python,.net,database,orm | 42,144,831 | 1 | false | 1 | 0 | I love questions like that.
Here is what you have to consider, your web site has to be fast, and the bottleneck of most web sites is a database. The answer to your question would be - make it easy for .NET to work with SQL. That will require little more work with python, like specifying names of the table, maybe row names. I think Django and SQLAlchemy are both good for that.
Another solution could be to have a bridge between database with gathered data and database to display data. On a background you can have a task/job to migrate collected data to your main database. That is also an option and will make your job easier, at least all database-specific and strange code will go to the third component.
I've been working with .NET for quite a long time before I switched to python, and what you should know is that whatever strategy you chose it will be possible to work with data in both languages and ORMs. Do the hardest part of the job in the language your know better. If you are a Python developer - pick python to mess with the right names of tables and rows. | 1 | 0 | 0 | I am making a database with data in it. That database has two customers: 1) a .NET webserver that makes the data visible to users somehow someway. 2) a python dataminer that creates the data and populates the tables.
I have several options. I can use the .NET Entity Framework to create the database, then reverse engineer it on the python side. I can vice versa that. I can just write raw SQL statements in one or the other systems, or both. What are possible pitfalls of doing this one way or the other? I'm worried, for example, that if I use the python ORM to create the tables, then I'm going to have a hard time in the .NET space... | Sharing an ORM between languages | 0 | 1 | 0 | 79 |
42,144,827 | 2017-02-09T19:01:00.000 | 0 | 0 | 1 | 0 | python,windows,python-3.x,tkinter | 42,145,121 | 2 | false | 0 | 1 | The answer to this question is yes.
To link two python files, use:
If you are in python 3, use exec(open(r"example").read())
If you are in python 2, use execfile(r"example")
-
Note: the python 3 form, exec(open(r"example").read()), also works in python 2
-
They do not need to be in the same file, just simply use their location.
e.g. if I had a program on my desktop, I would use
exec(open(r"C:/Users/MyName/Desktop/program").read()) | 1 | 0 | 0 | Just a quick question. I have been using Tkinter in Python in order to create Windows. My code is a bit all over the place when it is one file...
Is it possible to call a window that will be located in a different file?
For example,
Window1.py opens a window, there is a button in that window that should initiate window 2, which is located in Window2.py. Does the code physically have to be in the same file for it to work together? | Can you link multiple Python files? | 0 | 0 | 0 | 3,316 |
42,145,097 | 2017-02-09T19:17:00.000 | 0 | 0 | 0 | 0 | python,bokeh | 42,145,160 | 2 | false | 0 | 0 | Importing individual names from a library isn't really "contamination". What you want to avoid is doing from somelibrary import *. This is different because you don't know which names will be imported, so you can't be sure there won't be a name clash.
In contrast, doing from numpy import linspace just creates one name linspace. It's no different from doing creating an ordinary variable like linspace = 2 or defining your own function with def linspace. There's no danger of unexpected name clashes because you know exactly which names you're creating in your local namespace. | 1 | 3 | 1 | As I was checking out the Bokeh package I noticed that the tutorials use explicit import statements like from bokeh.plotting import figure and from numpy import linspace. I usually try to avoid these in favor of, e.g., import numpy as np, import matplotlib.pyplot as plt. I thought this is considered good practice as it helps to avoid namespace contamination.
Is there any reason why Bokeh deviates from this practice, and/or are there common aliases to use for Bokeh imports (e.g. import bokeh.plotting as bp)? | Why do bokeh tutorials use explicit imports rather than aliases? | 0 | 0 | 0 | 545 |
42,148,101 | 2017-02-09T22:30:00.000 | 2 | 0 | 0 | 0 | python,scipy,cython,shared-memory,python-multithreading | 42,148,230 | 2 | true | 0 | 0 | Not safe. If CPython could safely run that kind of code without the GIL, we wouldn't have the GIL in the first place. | 1 | 2 | 1 | This is sort of a general question related to a specific implementation I have in mind, about whether it's safe to use python routines designed for use inside the GIL in a shared memory environment. Specifically what I'd like to do is use scipy.optimize.curve_fit on a large array inside a cython function.
The data can be expressed as a 2d numpy array (say, of floats) with the axis to be fit along and the other the serialized axis to be parallelized over. Then I'd just like to release the GIL and start looping through the data with a cython.parallel.prange (the idea being then that I can have all my cores working on fitting at once).
The main issue I can foresee is that curve_fit does not operate "in place"; it returns the fit values of the parameters (and optionally their covariance matrix) and so has to allocate that memory at some point. (Of course I also have no idea about any intermediate memory allocation the routine performs.) I'm worried about how this will operate outside the GIL with many threads working concurrently.
I realize that the answer could just be "it should work fine go try it," but I'm hoping to get some idea of what to look out for. I also realize that this question is similar to others about parallelizing scipy/numpy routines, but I think this one is worded differently in that falls within the cython scope of a C environment for python.
Thanks for any help/suggestions. | Using scipy routines outside of the GIL | 1.2 | 0 | 0 | 312 |
42,148,704 | 2017-02-09T23:14:00.000 | 0 | 0 | 0 | 0 | python,django | 42,149,106 | 1 | false | 1 | 0 | Why would you like to have base64-encoded ids? It is a pretty bad choice for a URL string, as it can contain characters that aren't URL friendly.
You should extend your object with an extra field that contains, for instance, a randomly generated slug or UUID, and have it as the parameter in the URL instead of the id;
then you would query by that field in your view. | 1 | 0 | 0 | I'm working on a Django application that takes an ID-based dynamic URL, but rather than having the URL be straight up the ID, as it is right now:
url(r'^history/(?P<id>[0-9]+)/$', views.history)
I would like the URL to be the base64 encoded version of the object's ID, and i couldn't find a lot about encoding Django URLs. | Django ID based dynamic URL with base64 | 0 | 0 | 0 | 724 |
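A hedged sketch of the suggestion in the answer above: expose a separate non-guessable field in the URL instead of the integer id (the model and field names are assumptions):

```python
import uuid
from django.db import models

class HistoryEntry(models.Model):
    # looked up in the view via HistoryEntry.objects.get(public_id=...)
    public_id = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)
```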
42,148,878 | 2017-02-09T23:28:00.000 | 0 | 0 | 0 | 0 | python,django,file,static,virtualenv | 42,149,002 | 1 | false | 1 | 0 | virtualenv is a tool to create isolated Python environments.
It doesn't have anything to do with your code and static_url settings, only thing different is what packages you have there and what django core version you are using. | 1 | 0 | 0 | I've installed Django 1.8 in the virtualenv and I'm trying to use the static files from it.
For example, if I want to edit the header color of the admin base.html, it keeps using the global Django file (1.7) even though I'm working with my virtualenv active.
Doesn't my STATIC_URL = '/static/' use the Django that is currently running in my virtualenv?
Sorry for the bad English | Python - Using the django that is installed in the virtualenv | 0 | 0 | 0 | 68 |
42,149,777 | 2017-02-10T00:59:00.000 | 0 | 0 | 0 | 0 | ipython,spyder | 50,688,040 | 1 | false | 0 | 0 | One way that I have figured out is to define a dictionary and then record the results you want individually. Apparently, this is not the most efficient way, but it works. | 1 | 2 | 1 | What I meant by the title is that I have two different programs and I want to plot data on one figure. In Matlab there is this definition for figure handle which eventually points to a specific plot. Let's say if I call figure(1) the first time, I get a figure named ''1'' created. The second I call figure(1), instead of creating a new one, Matlab simply just plot on the previous figure named ''1''. I wondered how I can go about and do that in Spyder.
I am using Matplotlib in Spyder. I would imagine this could be easily achieved, but I simply don't know enough about this package to figure my problem out. :(
Any suggestions are appreciated! | How to plot data from different runs on one figure in Spyder | 0 | 0 | 0 | 54 |
42,150,448 | 2017-02-10T02:21:00.000 | 0 | 0 | 1 | 1 | macos,python-3.x,anaconda,pyomo | 42,165,867 | 1 | true | 0 | 0 | Pyomo is a Python package, so the way you use it is by importing it in Python scripts and executing those scripts with the Python interpreter that you installed Pyomo into.
If you want to use the pyomo command to solve a model file (rather than creating a Pyomo solver object in your Python script and running it directly), you will have to add the bin location of your Anaconda installation to your PATH. I do this on my Mac by adding a line like the following to ~/.bash_profile:
export PATH=/Users/gabe/<Anaconda-installation-directory>/bin:$PATH
This will add the location to the beginning of your PATH, causing the Anaconda Python to be executed by default from your terminal (rather than the default system Python). This is also the location that pip will install Pyomo related executables into (assuming you used the pip installed with Anaconda and not the pip associated with some other Python installation). | 1 | 0 | 0 | I am new to Mac, having been well-versed in PCs for over 20 years. Unfortunately, the ease at which I can get "under the hood" with a PC is nigh on impossible for me to intuitively sort out in a Mac (ironic isn't it?). In any case, here is my situation:
I am looking to install a number of open-source analyst-centric tools on my new Mac, to include Python, R, and Pyomo. I am doing some home-testing to explore the viability of these tools for an enterprise solution on a work network. As such, I am looking at Anaconda Navigator as a potential one-stop shop for managing a variety of tools.
I have successfully installed Anaconda 4.3 with a Python 3.6 environment on the Mac, but I am running into trouble installing (or rather finding) Pyomo.
I attempted to do a "conda" install of Pyomo via the terminal shell, but got an error. I then attempted a "pip" install which apparently worked.
Unfortunately, I have no idea how to invoke Pyomo, either from the OS X interface or from Anaconda. This is partially due to my inexperience with the OS X system and how to navigate the file and/or PATH structure.
As I am attempting to evaluate Anaconda, how can I set up Pyomo through the Anaconda Navigator shell? I have attempted importing a new environment, but cannot find a specification file, again due to my inability to navigate the OS X file system.
All installations have been completing using default settings. | Installing Pyomo on a Mac with Anaconda installed | 1.2 | 0 | 0 | 726 |
42,150,750 | 2017-02-10T02:59:00.000 | 0 | 0 | 1 | 0 | arrays,python-2.7,for-loop,generator,index-error | 42,150,751 | 1 | false | 0 | 0 | While I do not know what exactly is causing this issue, python was working just fine earlier (the same code). So it could possibly be something internal. For me, restarting did not work, so my next step would be to reinstall python.
The most likely cause for this is an error with the python core. | 1 | 0 | 0 | Python 2.7 win32 crashing on 'for' loop header, IndexError
All for loops are refusing to break (Python 2.7.13) on win32
- only on execution of '.py' file
an example would be:
for x in range(5): #Line 1 of main.py
print x #Line 2 of main.py
and the resulting error would be:
File "pathToFile\main.py", line 1, in <module>
for x in range(5):
IndexError: array index out of range
I also have 32-bit python 3.6 installed, but the default for opening '.py' files is python 2.7
This error happens on custom made generators too, but never in the interactive shell. | Why is this happening? What is the recommended fix? Python 'for' loop iteration failure | 0 | 0 | 0 | 14 |
42,151,236 | 2017-02-10T03:54:00.000 | 1 | 0 | 1 | 0 | python | 42,151,338 | 2 | false | 0 | 0 | No. It wouldn't be possible unless written somewhere. Simple reason is that once the python process ends, GC cleans up everything. | 1 | 0 | 0 | This may be a dumb question, but I want to add a line at the very start of the code like
print 'previous runtime' time.time()-tic
Is there a way to do it? Or can I somehow get the previous runtime other than keeping a logfile? | Is there a way to write over the python code after the interpretation? | 0.099668 | 0 | 0 | 37 |
42,151,653 | 2017-02-10T04:40:00.000 | -1 | 0 | 1 | 0 | javascript,python,node.js,multithreading,asynchronous | 42,151,868 | 1 | false | 1 | 0 | You could use a messaging queue such as RabbitMQ. | 1 | 0 | 0 | Node JS seems to be a perfect fit for serving fast lightweight requests asynchronously. However, i'm not convinced that it is a good fit for intensive background work - despite the ability to deploy Node JS in a clustered fashion.
I am considering using Node JS to interact with my template rendering engine (Express) and serve requests by building up a range of lightweight micro-services in Node. Further, I am then considering having Node JS pass off intensive work to Python (perhaps via some kind of in-memory technology such as Redis or a dedicated Task Queue). I am familiar with Python and in particular, multi-threading.
For example, on a 4 core machine, I might have two cores dedicated to running load balanced background tasks and 2 cores dedicated to a Node JS cluster. Would this be a vaguely sensible approach in comparison to trying to "Javascript all the things"? | Combining Node JS and Python for CPU and IO intensive web applications | -0.197375 | 0 | 0 | 182 |
42,153,732 | 2017-02-10T07:24:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql | 42,154,977 | 1 | false | 1 | 0 | Your problem is not really about Django. You are better off moving the data (not necessary, but it could help) to the server you want to insert it into, and then creating a simple Python program or something similar to do the inserting.
Avoid inserting data of this size through an HTTP server. | 1 | 1 | 0 | I am trying to insert about 1 million records into PostgreSQL. Since I create the table dynamically, I don't have any models associated with it, so I can't perform Django's bulk_insert.
Is there any method of inserting the data in an efficient manner?
I am trying to use single INSERT statements, but this is very time-consuming and too slow. | Insert bulk data django using raw query | -0.197375 | 1 | 0 | 311
42,156,136 | 2017-02-10T09:43:00.000 | 1 | 1 | 0 | 0 | python,ghostscript | 42,156,701 | 1 | false | 0 | 0 | This depends on whether you want to handle PostScript or PDF as an input.
In either event you will need to write an interpreter for the language, and a rendering library. It's believed that an interpreter and rendering library for PostScript is around 5 man-years of work. Although PDF is at first sight simpler, because it is a description language, not a programming language, the more complex graphics model (transparency for example) and complications like annotations, optional content etc. will likely make this a task of similar or greater magnitude.
luser droog, who has written an at least reasonably complete PostScript interpreter, can possibly provide more detailed estimates of the effort involved.
So, once you can interpret the input, and render it, then you can count the number of pixels of each colour in your rendered output. That will give you the ink coverage. That part is very simple....
Of course, even more simply, just use Ghostscript and take advantage of something like 100 man-years of development that's already been done. | 1 | 0 | 0 | I am trying to write a script in Python which delivers the same ink-coverage functionality as Ghostscript. I don't know where to start. Can someone please guide me? | Ghostscript ink coverage function in python | 0.197375 | 0 | 0 | 92
42,156,850 | 2017-02-10T10:18:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,amazon-elastic-transcoder | 42,161,950 | 1 | false | 1 | 0 | You have to copy the files to S3. | 1 | 0 | 0 | I am planning to use AWS in my project to encode mp4 into a streaming video format, but my videos are not saved in an Amazon S3 bucket. When I tried to create a pipeline I noticed that they ask for an S3 bucket name. Is it possible to use the AWS encoder in this scenario without moving those videos from the other CMS into an S3 bucket? | How to use aws transcoder on videos saved in other cms apart from s3 bucket? | 0 | 0 | 0 | 37
42,156,957 | 2017-02-10T10:23:00.000 | -1 | 0 | 0 | 0 | python,tensorflow,gradient | 56,578,297 | 6 | false | 0 | 0 | You can use PyTorch instead of TensorFlow, as it allows the user to accumulate gradients during training. | 1 | 18 | 1 | I'm using TensorFlow to build a deep learning model, and I am new to TensorFlow.
For certain reasons, my model is limited to a small batch size, and this limited batch size makes the model have high variance.
So, I want to use some trick to make the effective batch size larger. My idea is to store the gradients of each mini-batch, for example 64 mini-batches, then sum the gradients together and use the mean gradient of these 64 mini-batches of training data to update the model's parameters.
This means that for the first 63 mini-batches I do not update the parameters, and after the 64th mini-batch I update the model's parameters only once.
But as TensorFlow is graph based, does anyone know how to implement this desired feature?
Thanks very much. | How to update model parameters with accumulated gradients? | -0.033321 | 0 | 0 | 8,795 |
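One common workaround in TensorFlow 1.x-era code was to accumulate gradients manually. A hedged sketch under the assumption that loss is an existing tensor and that the 64-step session loop (run accum_ops 64 times, then apply_op, then zero_ops) is written elsewhere:

```python
import tensorflow as tf

opt = tf.train.AdamOptimizer(1e-3)
grads_and_vars = opt.compute_gradients(loss)
accum = [tf.Variable(tf.zeros_like(v.initialized_value()), trainable=False)
         for _, v in grads_and_vars]
zero_ops = [a.assign(tf.zeros_like(a)) for a in accum]            # reset per big batch
accum_ops = [a.assign_add(g) for a, (g, _) in zip(accum, grads_and_vars)]  # run 64 times
apply_op = opt.apply_gradients(
    [(a / 64.0, v) for a, (_, v) in zip(accum, grads_and_vars)])  # one parameter update
```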
42,161,928 | 2017-02-10T14:36:00.000 | -1 | 0 | 0 | 1 | python,airflow | 42,405,900 | 3 | false | 0 | 0 | This has been an issue with the current version. What I usually do is duplicate the DAG and change its name so it shows up in the web server. As soon as I finish developing, I keep the last renamed copy and delete the old ones. | 1 | 5 | 0 | I'm learning to use Airflow to schedule some Python ETL processes. Each time I update my Python code I have to restart the webserver and also rename the DAG before code changes are picked up by Airflow. Is there any way around this, especially so I don't have to rename my DAG each time I make changes? | How to update python functions in airflow without the need to restart airflow webserver | -0.066568 | 0 | 0 | 6,293
42,169,854 | 2017-02-10T22:45:00.000 | 2 | 0 | 0 | 0 | python,django,postgresql,security,encryption | 42,170,097 | 2 | true | 1 | 0 | There’s no such thing as a safe design when it comes to storing passwords/secrets. There’s only, how much security overhead trade-off you are willing to live with. Here is what I would consider the minimum that you should do:
HTTPS-only (all passwords should be encrypted in transit)
If possible keep passwords encrypted in memory when working with them except when you need to access them to access the service.
Encryption in the data store. All passwords should be strongly encrypted in the data store.
[Optional, but strongly recommended] Customer keying; the customer should hold the key to unlock their data, not you. This will mean that your communications with the third party services can only happen when the customer is interacting with your application. The key should expire after a set amount of time. This protects you from the rogue DBA or your DB being compromised.
And this is the hard one, auditing. All accesses of any of the customer's information should be logged and the customer should be able to view the log to verify / review the activity. Some go so far as to have this logging enabled at the database level as well so all row access at the DB level are logged. | 1 | 3 | 0 | I am developing a web app which depends on data from one or more third party websites. The websites do not provide any kind of authentication API, and so I am using unofficial APIs to retrieve the data from the third party sites.
I plan to ask users for their credentials to the third party websites. I understand this requires users to trust me and my tool, and I intend to respect that trust by storing the credentials as safely as possible as well as make clear the risks of sharing their credentials.
I know there are popular tools that address this problem today. Mint.com, for example, requires users' credentials to their financial accounts so that it may periodically retrieve transaction information. LinkedIn asks for users' e-mail credentials so that it can harvest their contacts.
What would be a safe design to store users' credentials? In particular, I am writing a Django application and will likely build on top of a PostgreSQL backend, but I am open to other ideas.
For what it's worth, the data being accessed from these third party sites is nowhere near the level of financial accounts, e-mail accounts, or social networking profiles/accounts. That said, I intend to treat this access with the utmost respect, and that is why I am asking for assistance here first. | How to safely store users' credentials to third party websites when no authentication API exists? | 1.2 | 0 | 1 | 1,085 |
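For the "encryption in the data store" point in the answer above, a minimal sketch using symmetric encryption (the cryptography package and the key handling shown are assumptions; real key management, ideally per customer, is the hard part):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, held/derived per customer, not stored with the data
f = Fernet(key)
token = f.encrypt(b"third-party password")   # store only the token
original = f.decrypt(token)                  # decrypt just-in-time when calling the service
```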
42,170,796 | 2017-02-11T00:28:00.000 | 0 | 0 | 1 | 1 | python | 42,170,966 | 1 | false | 0 | 0 | Bash command is actually a small script or program, it's supposed to provide end to end operations.
For programming, you'll use more basic unit operations (open, close, read, write, rewind, seek) than a set of operations (copy, delete). It's true both for Python, C, etc.
If you go deeper into operating system coding, you will probably need operations to manipulate file representation details such as file handles and the storage system.
It's a different level of abstraction for handling problems of different granularity. | 1 | 1 | 0 | What is the logic behind python requiring that a file be opened for write or append before being written to? Is there an advantage of doing this compared with something like bash which can just directly write to the file with something like: print 'Hello World' >> output.txt? | Purpose of opening and closing files python | 0 | 0 | 0 | 70
42,171,188 | 2017-02-11T01:28:00.000 | 1 | 0 | 1 | 0 | python,heroku | 42,213,846 | 1 | true | 1 | 0 | Have to agree with @KlausD, doing what you are suggesting is actually a bit more complex trying to work with a filesystem that won't change and tracking state information (last selected) that you may need to persist. Even if you were able to store the last item in some environmental variable, a restart of the server would lose that information.
Adding a db, and connecting it to python would literally take minutes on Heroku. There are plenty of well documented libraries and ORMs available to create a simple model for you to store your list and your cursor. I normally recommend against storing pointers to information in preference to making the correct item obvious due to the architecture, but that may not be possible in your case. | 1 | 0 | 0 | I have deployed a small application to Heroku. The slug contains, among other things, a list in a textfile. I've set a scheduled job to, once an hour, run a python script that select an item from that list, and does something with that item.
The trouble is that I don't want to select the same item twice in sequence. So I need to be able to store the last-selected item somewhere. It turns out that Heroku apparently has a read-only filesystem, so I can't save this information to a temporary or permanent file.
How can I solve this problem? Can I use os.environ in python to set a configuration variable that stores the last-selected element from the list? | Heroku: how to store a variable that mutates? | 1.2 | 0 | 0 | 93 |
42,171,499 | 2017-02-11T02:24:00.000 | 183 | 0 | 0 | 0 | python,scala,dataframe,apache-spark,apache-spark-sql | 42,171,552 | 5 | true | 0 | 0 | You need to call getNumPartitions() on the DataFrame's underlying RDD, e.g., df.rdd.getNumPartitions(). In the case of Scala, this is a parameterless method: df.rdd.getNumPartitions. | 1 | 90 | 1 | Is there any way to get the current number of partitions of a DataFrame?
I checked the DataFrame javadoc (Spark 1.6) and didn't find a method for that, or did I just miss it?
(In case of JavaRDD there's a getNumPartitions() method.) | Get current number of partitions of a DataFrame | 1.2 | 0 | 0 | 144,311 |
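A tiny PySpark sketch of the call in the answer above (assumes an existing SparkSession named spark, i.e. Spark 2.x+):

```python
df = spark.range(0, 1000)
print(df.rdd.getNumPartitions())   # current number of partitions of the DataFrame
```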
42,174,484 | 2017-02-11T09:50:00.000 | 1 | 0 | 1 | 0 | python | 42,174,534 | 2 | false | 0 | 0 | The Python is operator is used to test whether two variables point to the same object.
From the documentation:
The operators is and is not test for object identity: x is y is true if and only if x and y are the same object.
For eg.
a = 0.0
If you do b = a and then follow it up with b is a, it will return True.
Now if you do a = 0.0 and b = 0.0 and then try b is a it will return False, because now a and b are two variables pointing to two different objects. | 1 | 0 | 0 | Could someone answer below (Output is from IDLE or check on python shell - python 2.7). For 1),2) and 3),4) I am doing exactly same operation but getting different results.
1) >>> a=0
2) >>> a is 0
True
3) >>> a=0.0
4) >>> a is 0.0
False
5) >>> 0.0 is 0.0
True
why 4) is False? | identity comparison - comparing same object returns false | 0.099668 | 0 | 0 | 92 |
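A short illustration of the point in the answer above: whether is returns True for equal floats depends on object caching, so == is the comparison to rely on.

```python
a = 0.0
print(a == 0.0)   # True  - value equality
print(a is 0.0)   # implementation-dependent - identity, not value
```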
42,177,953 | 2017-02-11T15:57:00.000 | 0 | 0 | 0 | 0 | python,web.py | 42,183,038 | 2 | false | 1 | 0 | Not a web.py issue. It cannot be done from the server side by any Python or non-Python framework; it must be done in the client.
From the client, you can set target="_blank" in the HTML, or use JavaScript with something like window.open(url). JavaScript will allow you to set the size and position of the second window. | 1 | 1 | 0 | I tried web.seeother("link"), but this does not open it in a new tab. Now I can generate a link with a _blank tag, but then the user has to click on the link separately; that is, one button to generate the link and another button to follow it. I want to perform both with a single click. A server-side method to do this would be best.
I am using the web.py framework. | How to open dynamic links in new tab with web py framework? | 0 | 0 | 0 | 486 |
42,177,953 | 2017-02-11T15:57:00.000 | 0 | 0 | 0 | 0 | python,web.py | 42,178,882 | 2 | false | 1 | 0 | As the documentation says, web.seeother() is used for redirecting a user to another page, so a clearer way of asking your question is: "how to make web.seeother() open a link in a new tab"?
From what I have observed in the docs, there is no way to do that on the server side. | 2 | 1 | 0 | I tried web.seeother("link"), but this does not open it in a new tab. Now I can generate a link with a _blank tag, but then the user has to click on the link separately; that is, one button to generate the link and another button to follow it. I want to perform both with a single click. A server-side method to do this would be best.
I am using the web.py framework. | How to open dynamic links in new tab with web py framework? | 0 | 0 | 0 | 486 |
42,179,424 | 2017-02-11T18:11:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,touch | 42,180,008 | 1 | false | 0 | 1 | For a GUI you could always take a look at Tkinter.
You could test the GUI without having the actual Raspberry Pi.
Switching to LEDs would require an LED matrix, which is more demanding in terms of electrical engineering. The Raspberry Pi would be my recommendation. | 1 | 0 | 0 | I am new to Python and took on a small project for the firehouse.
I am looking to make a "Calls YTD" Sign.
The initial thought was a Raspberry Pi connected to a touch screen.
After some playing around and learning how to use python a little I realized one very important fact... I am way over my head.
Looking for some direction.
In order for this to display on the touch screen I will need to build it into a GUI. Should I stop right there and instead get a 12x12 LED and keep it more simple?
Otherwise the goal would be to display the current call number, "61" for example, with an up and a down arrow to simply advance or retract the number.
Adding the ability to display last year's call volume would be cool but not necessary.
What I am looking for ultimately, is some direction if python and raspberry pi is the way to go or should I head in another direction.
Thank you in advance. | Counter Display Design | 0 | 0 | 0 | 67 |
42,182,243 | 2017-02-11T23:00:00.000 | 0 | 1 | 0 | 0 | python,discord | 69,640,480 | 5 | false | 0 | 0 | If you're trying to delete the last sent message, e.g. if a user is calling a command and you want to remove their message and then run the command:
Use await ctx.message.delete() at the top of your command; it will find the message that invoked the command and delete it. | 1 | 10 | 0 | Is there any way to delete a message sent by anyone other than the bot itself? The documentation seems to indicate that it is possible:
Your own messages could be deleted without any proper permissions. However to delete other people’s messages, you need the proper permissions to do so.
But I can't find a way to target the message to do so in an on_message event trigger, am I missing something or is it just not possible? | Deleting User Messages in Discord.py | 0 | 0 | 1 | 87,622 |
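A hedged sketch of deleting another user's message from an on_message handler (discord.py 1.x-style API; client is assumed to be an existing discord.Client instance, the "!remove" trigger is made up, and the bot needs the Manage Messages permission):

```python
@client.event
async def on_message(message):
    if message.content.startswith("!remove"):
        await message.delete()   # deletes the triggering message
```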
42,182,706 | 2017-02-11T23:59:00.000 | 2 | 0 | 1 | 1 | python,macos,anaconda,uninstallation | 49,156,868 | 12 | false | 0 | 0 | This is one more place that anaconda had an entry that was breaking my python install after removing Anaconda. Hoping this helps someone else.
If you are using yarn, I found this entry in my .yarn.rc file in ~/"username"
python "/Users/someone/anaconda3/bin/python3"
removing this line fixed one last place needed for complete removal. I am not sure how that entry was added but it helped | 4 | 188 | 0 | How can I completely uninstall Anaconda from MacOS Sierra and revert back to the original Python? I have tried using conda-clean -yes but that doesn't work. I also remove the stuff in ~/.bash_profile but it still uses the Anaconda python and I can still run the conda command. | How to uninstall Anaconda completely from macOS | 0.033321 | 0 | 0 | 380,553 |
42,182,706 | 2017-02-11T23:59:00.000 | 1 | 0 | 1 | 1 | python,macos,anaconda,uninstallation | 71,133,391 | 12 | false | 0 | 0 | None of these solutions worked for me. Turns out I had to remove all the hidden files that you can reveal with ls -a. My .zshrc file had some anaconda references in it that needed to be deleted | 4 | 188 | 0 | How can I completely uninstall Anaconda from MacOS Sierra and revert back to the original Python? I have tried using conda-clean -yes but that doesn't work. I also remove the stuff in ~/.bash_profile but it still uses the Anaconda python and I can still run the conda command. | How to uninstall Anaconda completely from macOS | 0.016665 | 0 | 0 | 380,553
42,182,706 | 2017-02-11T23:59:00.000 | 0 | 0 | 1 | 1 | python,macos,anaconda,uninstallation | 50,745,801 | 12 | false | 0 | 0 | Adding export PATH="/Users/<username>/anaconda/bin:$PATH" (or export PATH="/Users/<username>/anaconda3/bin:$PATH" if you have anaconda 3)
to my ~/.bash_profile file, fixed this issue for me. | 4 | 188 | 0 | How can I completely uninstall Anaconda from MacOS Sierra and revert back to the original Python? I have tried using conda-clean -yes but that doesn't work. I also remove the stuff in ~/.bash_profile but it still uses the Anaconda python and I can still run the conda command. | How to uninstall Anaconda completely from macOS | 0 | 0 | 0 | 380,553 |
42,182,706 | 2017-02-11T23:59:00.000 | 2 | 0 | 1 | 1 | python,macos,anaconda,uninstallation | 51,944,028 | 12 | false | 0 | 0 | After performing the very helpful suggestions from both spicyramen & jkysam without immediate success, a simple restart of my Mac was needed to make the system recognize the changes. Hope this helps someone! | 4 | 188 | 0 | How can I completely uninstall Anaconda from MacOS Sierra and revert back to the original Python? I have tried using conda-clean -yes but that doesn't work. I also remove the stuff in ~/.bash_profile but it still uses the Anaconda python and I can still run the conda command. | How to uninstall Anaconda completely from macOS | 0.033321 | 0 | 0 | 380,553 |
42,183,014 | 2017-02-12T00:47:00.000 | 10 | 0 | 0 | 0 | python,web-applications,kivy | 42,185,476 | 1 | false | 1 | 1 | Kivy does not currently support working in a browser.
There are some experiments to do it, but the result is very slow to open and to use, and it doesn't work in all browsers; more work is needed, and it's not a priority for us. If you want a web app, use a web technology. | 1 | 5 | 0 | I have developed a Kivy application and was wondering if it was possible to deploy it as a WebApp. I've tried using flask but it is running into some problems. I run the Kivy Application by calling the App builder class while flask does something similar. So can anyone direct me to any tutorials or other information about deploying a Kivy Application in a web browser?
I just need the GUI to display in a web browser so I believe the html doesn't need to be too extravagant.
Thank you! | How do I deploy a Kivy GUI Application as a WebApp in a Web Browser? | 1 | 0 | 0 | 7,754 |
42,184,792 | 2017-02-12T06:18:00.000 | 1 | 0 | 1 | 1 | python-2.7,python-3.x,ubuntu,pip | 42,346,675 | 3 | false | 0 | 0 | Upgrade your setuptools:
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python3
Generally, sudo combined with pip is considered harmful; avoid it unless your system is already broken. | 2 | 2 | 0 | Running a command along with pip gives the following error. Even the command pip -V produces the following error.
I read that the error is due to setuptools version 31.0.0 and it should be lower than 28.0.0. But the version of my setuptools is 26.1.1 and it still gives the same error.
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in
from pip import main
File "/usr/local/lib/python3.5/dist-packages/pip/__init__.py", line 26, in
from pip.utils import get_installed_distributions, get_prog
File "/usr/local/lib/python3.5/dist-packages/pip/utils/__init__.py", line 27, in
from pip._vendor import pkg_resources
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3018, in
@_call_aside
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3046, in _initialize_master_working_set
dist.activate(replace=False)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2578, in activate
declare_namespace(pkg)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2152, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2092, in _handle_ns
_rebuild_mod_path(path, packageName, module)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2121, in _rebuild_mod_path
orig_path.sort(key=position_in_sys_path)
AttributeError: '_NamespacePath' object has no attribute 'sort' | Pip does not work after upgrade to ubuntu-16.10 | 0.066568 | 0 | 0 | 1,721 |
42,184,792 | 2017-02-12T06:18:00.000 | 2 | 0 | 1 | 1 | python-2.7,python-3.x,ubuntu,pip | 43,832,299 | 3 | true | 0 | 0 | The only solution I could find is reinstalling pip. Run these commands on your terminal
wget https://bootstrap.pypa.io/get-pip.py
sudo -H python get-pip.py --prefix=/usr/local/
However, this works only for pip, not pip3! | 2 | 2 | 0 | Running a command along with pip gives the following error. Even the command pip -V produces the following error.
I read that the error is due to setuptools version 31.0.0 and it should be lower than 28.0.0. But the version of my setuptools is 26.1.1 and it still gives the same error.
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in
from pip import main
File "/usr/local/lib/python3.5/dist-packages/pip/__init__.py", line 26, in
from pip.utils import get_installed_distributions, get_prog
File "/usr/local/lib/python3.5/dist-packages/pip/utils/__init__.py", line 27, in
from pip._vendor import pkg_resources
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3018, in
@_call_aside
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3046, in _initialize_master_working_set
dist.activate(replace=False)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2578, in activate
declare_namespace(pkg)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2152, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2092, in _handle_ns
_rebuild_mod_path(path, packageName, module)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2121, in _rebuild_mod_path
orig_path.sort(key=position_in_sys_path)
AttributeError: '_NamespacePath' object has no attribute 'sort' | Pip does not work after upgrade to ubuntu-16.10 | 1.2 | 0 | 0 | 1,721 |
42,193,707 | 2017-02-12T21:52:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,user-interface,input | 42,193,735 | 4 | true | 0 | 1 | The best module for me is PyQt. You have all kinds of stuff, from regular GUI to 3D OpenGL and graphs and designer with all that. If not that try with tkinter, I had better experience with PyQt but you can try both and see which is better for your needs ! | 1 | 0 | 0 | I am creating a small program, and I would like to know the simplest way to make a GUI. I tried wxPython, however, it only supports 2.6 and 2.7. Are there any good, simple ones for Windows, python 3.X? | Creating a GUI in Python | 1.2 | 0 | 0 | 629 |
42,193,779 | 2017-02-12T22:00:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 42,195,147 | 3 | false | 0 | 0 | I'm using tf.nn.seq2seq.sequence_loss_by_example - they've moved a lot of stuff from tf.contrib to main packages. This is because they updated their code, but not their examples - if you open github - you'll see a lot of requests to fix issues related to that! | 1 | 0 | 1 | I'm using tensorflow 0.12.1 on Python 3.5.2 on a Windows 10 64bit computer. For some reason, whenever I try to import legacy_seq2seq from tensorflow.contrib, it always occurs the error: ImportError: cannot import name 'legacy_seq2seq'.
What causes the problem and how can I fix it? | Can't import legacy_seq2seq from tensorflow.contrib | 0 | 0 | 0 | 2,843 |
42,195,983 | 2017-02-13T02:57:00.000 | 2 | 1 | 0 | 0 | python,django,heroku,heroku-toolbelt | 42,214,216 | 1 | true | 1 | 0 | I would start over.
Destroy the heroku app
heroku apps:destroy --app YOURAPPNAME
Remove the whole repo (I would even remove the directory)
Create new directory, copy files over (do NOT copy old git repo artifacts that may be left over, anything starting with .git)
Initialize your git repo, add files, and commit, then push upstream to your remote (like github, if you're using one) git init && git add . && git commit -m 'initial commit' and optionally git push origin master
Then perform the heroku create
That should remove the conflict. | 1 | 0 | 0 | So I've run a heroku create command on my django repo, and currently it is living on Heroku. What I didn't do prior was create my own local git repo. I ran git init, created a .gitignore to filter out my pycharm ide files, all the fun stuff.
I go to run git add . to add everything to the initial commit. Odd...it returns:
[1] 4270 killed git add.
So i run git add . again and get back this:
fatal: Unable to create '/Users/thefromanguard/thefromanguard/app/.git/index.lock': File exists.
"Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue."
So I go and destroy the file, run it again; same error. Removed the whole repo, repeated the process, still got the same message.
Is Heroku running in the background where ps can't see it? | Is Heroku blocking my local git commands? | 1.2 | 0 | 0 | 110 |
42,196,378 | 2017-02-13T03:45:00.000 | 0 | 1 | 1 | 0 | python-2.7,import,centos,importerror,pycrypto | 42,263,629 | 1 | false | 0 | 0 | That module is available as an RPM package from the EPEL repository. Uninstall what you have with pip first, then run yum install python-crypto. | 1 | 1 | 0 | I have installed Crypto , using pip install pycrypto.
It got installed perfectly in CentOS. Able to see all module files under Crypto folder. /usr/lib64/python2.7/site-packages/Crypto.
in terminal, when importing Crypto. Able to do it.
But getting error for importing Ciper from Crypto with below
from Crypto.Ciper import AES
Says below error:
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named Ciper
But no import error for other modules in Crypto
from Crypto import Hash
from Crypto import Signature
from Crypto import Util
from Crypto import Ciper
Traceback (most recent call last):
File "", line 1, in
ImportError: cannot import name Ciper
See for detailed imports in my terminal
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import os
import Crypto
print Crypto.__file__
/usr/lib64/python2.7/site-packages/Crypto/__init__.pyc
print dir(Crypto)
['__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__revision__', '__version__', 'version_info']
print os.listdir(os.path.dirname(Crypto.__file__))
['Protocol', 'Util', 'pct_warnings.py', '__init__.pyc', '__init__.py', 'Signature', 'PublicKey', 'Cipher', 'Hash', 'SelfTest', 'pct_warnings.pyc', 'Random']
Any ideas how to resolve this issue ? | Only Ciper is not importing , importerror but not for other modules like Random in Crypto | 0 | 0 | 0 | 78 |
42,196,936 | 2017-02-13T05:00:00.000 | 2 | 0 | 1 | 0 | python,pylint,pylintrc | 44,530,157 | 1 | false | 0 | 0 | As pylint --help say:
--evaluation=
Python expression which should return a note less than 10 (10 is the highest note). You have access to the variables errors warning, statement which respectively contain the number of errors / warnings messages and the total number of statements analyzed. This is used by the global evaluation report (RP0004). [current: 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)] | 1 | 0 | 0 | What do the various variables in the PyLint evaluation configuration setting (error, warning, refactor, convention, statement) represent? Are there any other variables available in the calculation? | PyLint evaluation Configuration | 0.379949 | 0 | 0 | 627 |
42,201,388 | 2017-02-13T10:06:00.000 | 0 | 0 | 0 | 0 | python,audio,pydub | 42,465,388 | 1 | true | 0 | 0 | We came up with an answer that may seem like a kludge, but honestly is working pretty well for us. It seems that that audio format specification does not allow for storing time code for the start and end of the session. So, instead, we encoded the beginning time stamp, with millisecond resolution, the moment the record button was pressed, as a string in the filename ("2017-02-13_10-04-27-943") and recorded the audio session. Then when recording stopped, we grabbed another time stamp, calculated the time difference in milliseconds, and then appended the duration as a string in the filename, just after closing the file ("Dur123456"). Thus, the time start and duration are referenced to the RTC (Real Time Clock) on the Android phone. We are then able to remap the WAV/PCM timebase to the true duration. As it turns out, "16KHz" is not actually 16,000Hz. We are finding errors on the order of a seconds for 10 minutes of audio recording. It may not seem like much, but for multi-hour recordings it adds up. Thanks. | 1 | 2 | 0 | I'm interested in using an audio file as a record of events taking place in time. That is I will have multiple data streams that need to be aligned in time and I would like to use the audio file as a reference. So, I'm wondering if it is possible to get the actual time-base for an audio stream, as referenced relative to a real-time-clock?
I appreciate that one can determine the duration of an audio clip from the sample count and the sampling frequency (say, 16KHz). For short clips, this is probably a good estimate, but for long (multi-hour recordings) how accurate will this estimate be? I would like to maintain sub-second accuracy over multiple hours.
Put another way, does the audio file store the actual start and stop time of the audio recording, as referenced to the RTC (real-time-clock). This would allow one to generate a time-base for every sample in the audio file. If so, can I get this data from a python audio library?
I'm using MP4/AAC for encoding on an Android platform and pydub for post-processing.
Thanks. | How to get an accurate audio time-base | 1.2 | 0 | 0 | 327 |
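A rough sketch of the filename-based bookkeeping described in the accepted answer, using only the standard library; the exact filename layout and the parsing below are my assumptions, not the original code.
from datetime import datetime, timedelta

def sample_time(filename, sample_index, sample_count):
    # e.g. "2017-02-13_10-04-27-943_Dur600123.wav": RTC start stamp plus
    # measured duration in milliseconds appended when recording stopped.
    stamp, dur = filename.rsplit(".", 1)[0].split("_Dur")
    start = datetime.strptime(stamp, "%Y-%m-%d_%H-%M-%S-%f")
    true_duration_ms = int(dur)
    # Remap the nominal sample position onto the measured wall-clock duration.
    fraction = float(sample_index) / sample_count
    return start + timedelta(milliseconds=fraction * true_duration_ms)

print(sample_time("2017-02-13_10-04-27-943_Dur600123.wav", 8000, 16000 * 600))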
42,204,582 | 2017-02-13T12:54:00.000 | 0 | 0 | 0 | 0 | python,hadoop,mapreduce,hadoop-streaming | 42,225,141 | 1 | false | 0 | 0 | So, your have a table with 200 columns(say T), a separate list of entries(say L) to be picked from T and with the last 24-hours(from the timestamp in T).
MapReduce's mapper gives you entries from T sequentially. Before your mapper gets into map(), i.e. in setup(), put the block of code to read from L and make it handy (use a feasible data structure to hold the list of data). Now, your code should hold two checks/conditions: 1) whether the entry from T contains/matches something in L; if so, then check 2) whether the data is within the 24-hours range.
Done. Your output is what you have expected. No reducer is required here, at least to do this much.
Happy Mapreducing. | 1 | 0 | 1 | I have a table containing 200 columns out of which I need around 50 column mentioned in a list,
and rows of last 24 months according to column 'timestamp'.
I'm confused what comes under mapper and what under reducer?
As it is just transformation, will it only have mapper phase, or filtering of rows to last 24 months will come under reducer? I'm not sure if this exactly utilises
what map-reduce was made for.
I'm using python with hadoop streaming. | How to divide map-reduce tasks? | 0 | 0 | 0 | 165 |
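A minimal Hadoop-streaming mapper along the lines of the answer above, as a map-only job: the column indices, delimiter, timestamp format, and the 24-month cutoff from the question are illustrative assumptions.
#!/usr/bin/env python
# mapper.py: project the wanted columns and keep rows from the last 24 months.
import sys
from datetime import datetime, timedelta

WANTED = [0, 3, 7, 12]            # indices of the ~50 columns you need (assumed)
TS_COL = 1                        # index of the 'timestamp' column (assumed)
CUTOFF = datetime.now() - timedelta(days=730)

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    try:
        ts = datetime.strptime(fields[TS_COL], "%Y-%m-%d %H:%M:%S")
    except (IndexError, ValueError):
        continue                  # skip malformed rows
    if ts >= CUTOFF:
        print("\t".join(fields[i] for i in WANTED))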
42,207,211 | 2017-02-13T15:06:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,pandas,pdf,matplotlib | 60,818,333 | 4 | false | 0 | 0 | What I do sometimes is generate a HTML file with my tables as I want and after I convert to PDF file. I know this is a little harder but I can control any element on my documents. Logically, this is not a good solution if you want write many files. Another good solution is make PDF from Jupyter Notebook. | 2 | 11 | 1 | I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given datafile . Is there a simple way to achieve this (for example by somehow inserting the name of the data file) in python 3.5? | A simple way to insert a table of contents in a multiple page pdf generated using PdfPages | 0 | 0 | 0 | 1,445 |
42,207,211 | 2017-02-13T15:06:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,pandas,pdf,matplotlib | 51,373,915 | 4 | false | 0 | 0 | It sounds like you want to generate fig{1, 2, ..., N}.pdf and then generate a LaTeX source file which mentions an \includegraphics for each of them, and produces a ToC. If you do scratch this particular itch, consider packaging it up for others to use, as it is a pretty generic use case. | 2 | 11 | 1 | I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given datafile . Is there a simple way to achieve this (for example by somehow inserting the name of the data file) in python 3.5? | A simple way to insert a table of contents in a multiple page pdf generated using PdfPages | 0 | 0 | 0 | 1,445 |
42,211,463 | 2017-02-13T18:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,mysql-workbench | 42,220,939 | 1 | false | 0 | 0 | Editing a table means to be able to write back data in a way that reliably addresses the records that have changed. In MySQL Workbench there are certain conditions which must be met to make this possible. A result set:
must have a primary key
must not have any aggregates or unions
must not contain subselects
When you do updates in a script you have usually more freedom by writing a WHERE clause that limits changes to a concrete record. | 1 | 0 | 0 | I have created a table in MySQL command line and I'm able to interact with it using python really well. However, I wanted to be able to change values in the table more easily so I installed MySQL workbench to do so.
I have been able to connect to my server but when I try to change any values after selecting a table, it doesn't let me edit it. I tried making a new table within MySQL Workbench and I could edit it then.
So, I started to use that table. However, trying to edit the table python stopped working, so I made another table with command line again and it works!
Does anyone know how to fix either of these problems? It seems MySQL Workbench can only edit tables that have been created with Workbench, and not with Command Line. There must be a configuration option somewhere that is limiting this.
Thanks in advance! | MySQL Workbench can't edit a table that was created using Command Line | 0 | 1 | 0 | 94 |
42,213,820 | 2017-02-13T21:23:00.000 | 1 | 0 | 0 | 1 | python,linux,bash,export,subprocess | 42,213,924 | 3 | false | 0 | 0 | This isn't a thing you can do. Your subprocess call creates a subshell and sets the env var there, but doesn't affect the current process, let alone the calling shell. | 2 | 1 | 0 | I'm trying to generate an encryption key for a file and then save it for use next time the script runs. I know that's not very secure, but it's just an interim solution for keeping a password out of a git repo.
subprocess.call('export KEY="password"', shell=True) returns 0 and does nothing.
Running export KEY="password" manually in my bash prompt works fine on Ubuntu. | How to set a 'system level' environment variable? | 0.066568 | 0 | 0 | 834 |
42,213,820 | 2017-02-13T21:23:00.000 | 2 | 0 | 0 | 1 | python,linux,bash,export,subprocess | 42,213,930 | 3 | false | 0 | 0 | subprocess.call('export KEY="password"', shell=True)
creates a shell, sets your KEY and exits: accomplishes nothing.
Environment variables do not propagate to parent process, only to child processes. When you set the variable in your bash prompt, it is effective for all the subprocesses (but not outside the bash prompt, for a quick parallel)
The only way to make it using python would be to set the password using a master python script (using os.putenv("KEY","password") or os.environ["KEY"]="password") which calls sub-modules or processes. | 2 | 1 | 0 | I'm trying to generate an encryption key for a file and then save it for use next time the script runs. I know that's not very secure, but it's just an interim solution for keeping a password out of a git repo.
subprocess.call('export KEY="password"', shell=True) returns 0 and does nothing.
Running export KEY="password" manually in my bash prompt works fine on Ubuntu. | How to set a 'system level' environment variable? | 0.132549 | 0 | 0 | 834 |
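A short sketch of the pattern described in the second answer: the parent Python process sets the variable in its own os.environ, and every child it spawns inherits it; the variable name is just an example.
import os
import subprocess

os.environ["KEY"] = "password"          # affects this process and its children
subprocess.call('echo "child sees: $KEY"', shell=True)
# The calling shell that started this script is NOT modified.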
42,214,900 | 2017-02-13T22:43:00.000 | -2 | 0 | 1 | 1 | python,jupyter-notebook | 63,427,616 | 2 | false | 0 | 0 | Open command promt,
pip
If you use pip, you can install it with:
pip install notebook
Congratulations, you have installed Jupyter Notebook! To run the notebook, run the following command at the Terminal (Mac/Linux) or Command Prompt (Windows):
jupyter notebook | 1 | 4 | 0 | When I run my jupyter notebook, the home folder displayed in jupyter is always my home directory even if I start my notebook from a different directory. Also not all folders in my home directory are displayed. I tried to change the access of the unshown folders by using chmod -R 0755 foldername, however the folders do not show when I run jupyter.
I want all the folders in my home directory to show.
I am using ubuntu. | jupyter does not show all folders in my home directory | -0.197375 | 0 | 0 | 6,607 |
42,216,640 | 2017-02-14T02:02:00.000 | 0 | 1 | 0 | 1 | python,node.js,meteor,meteor-galaxy | 42,284,125 | 1 | false | 1 | 0 | It really depends on how horrible you want to be :)
No matter what, you'll need a well-specified requirements.txt or setup.py. Once you can confirm your scripts can run on something other than a development machine, perhaps by using a virtualenv, you have a few options:
I would recommend hosting your Python scripts as their own independent app. This sounds horrible, but in reality, with Flask, you can basically make them executable over the Internet with very, very little IT. Indeed, Flask is supported as a first-class citizen in Google App Engine.
Alternatively, you can poke at what version of Linux the Meteor containers are running and ship a binary built with PyInstaller in your private directory. | 1 | 1 | 0 | I have a meteor project that includes python scripts in our private folder of our project. We can easily run them from meteor using exec, we just don't know how to install python modules on our galaxy server that is hosting our app. It works fine running the scripts on our localhost since the modules are installed on our computers, but it appears galaxy doesn't offer a command line or anything to install these modules. We tried creating our own command line by calling exec commands on the meteor server, but it was unable to find any modules. For example when we tried to install pip, the server logged "Unable to find pip".
Basically we can run the python scripts, but since they rely on modules, galaxy throws errors and we aren't sure how to install those modules. Any ideas?
Thanks! | Installing python modules in production meteor app hosted with galaxy | 0 | 0 | 0 | 192 |
42,217,843 | 2017-02-14T04:23:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 42,263,254 | 1 | true | 0 | 0 | The thing I was looking for is multi-threading, thanks, Julien Bernu! | 1 | 0 | 0 | I have a script, say script.run()
which is constantly running, aka nothing after that line is getting processed. Is there a way for me to read off my console while I have it running and process it with the same .py file? | Input text while python is running | 1.2 | 0 | 0 | 48 |
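One way to do the multi-threading the answerer settled on, sketched with the standard library; script.run() stands in for whatever long-running call you have.
import threading

def console_reader():
    while True:
        line = input("> ")          # raw_input on Python 2
        print("got: " + line)       # hand the line off to your own handler here

reader = threading.Thread(target=console_reader)
reader.daemon = True                # don't keep the interpreter alive on exit
reader.start()

# The long-running work stays on the main thread, e.g.:
# script.run()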
42,218,056 | 2017-02-14T04:47:00.000 | 4 | 0 | 0 | 0 | python,mqtt,iot,broker,publisher | 42,220,926 | 2 | true | 0 | 0 | There is no concept of removing a topic.
If the publisher stops publishing data on a topic the subscribers will stop receiving data on that topic, but there is nothing to remove. A subscriber can subscribe to a topic that no messages have ever been published on and that is fine; the broker will send them any messages that may be sent in the future.
Pub/sub messaging topics are not like message queues that need to be defined up front | 1 | 2 | 0 | I am new to MQTT protocol. As I read through the document, I couldn't see any function to remove the published topic. My purpose is to allow the publisher to remove the published topic. Did I miss something in the mqtt document? Any suggestion? Thanks ! | How to remove a published topic | 1.2 | 0 | 0 | 9,815 |
42,218,537 | 2017-02-14T05:28:00.000 | 0 | 1 | 0 | 0 | python-2.7 | 42,813,821 | 2 | false | 0 | 0 | iperf vs iperf3 from wikipedia
A rewrite of iperf from scratch, with the goal of a smaller, simpler
code base and a library version of the functionality that can be used
in other programs, called iperf3, was started in 2009. The first
iperf3 release was made in January 2014. The website states: "iperf3
is not backwards compatible with iperf2.x".
Alternatives are Netcat, Bandwidth Test Controller (BWCTL), ps-performance toolkit, iXChariot, jperf, Lanbench, NetIO-GUI, Netstress. | 1 | 0 | 0 | Can anyone tell the difference between iPerf and iPerf3? While using it with a client-server python script, what are the dependencies? And what are the alternatives to iPerf? | Iperf3 commands with python script | 0 | 0 | 1 | 2,323
42,218,932 | 2017-02-14T06:00:00.000 | 2 | 0 | 0 | 0 | python,google-apps-script,google-sheets,google-spreadsheet-api | 42,327,384 | 2 | false | 0 | 0 | you will need several changes. first you need to move the script to the cloud (see google compute engine) and be able to access your databases from there.
Then, from Apps Script, look at the onOpen trigger. From there you can use UrlFetchApp to call your python server to start the work.
You could also add a custom "refresh" menu to the sheet to call your server, which is nicer than having to reload the sheet.
Note that onOpen runs server-side on Google, thus it's impossible for it to access your local machine files. | 2 | 0 | 0 | I have a python script (on my local machine) that queries a Postgres database and updates a Google sheet via the Sheets API. I want the python script to run on opening the sheet. I am aware of Google Apps Script, but not quite sure how I can use it to achieve what I want.
Thanks | Running python script from Google Apps script | 0.197375 | 1 | 0 | 6,619 |
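The Python half of the setup described above could be a tiny HTTP endpoint that the sheet's onOpen trigger hits via UrlFetchApp; Flask, the route name, and the placeholder helper below are my assumptions, not part of the answer.
from flask import Flask

app = Flask(__name__)

def update_sheet_from_postgres():
    # Placeholder: query Postgres and push the results to the sheet
    # via the Sheets API, exactly as the existing local script does.
    pass

@app.route("/refresh", methods=["GET", "POST"])
def refresh():
    update_sheet_from_postgres()
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)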
42,224,303 | 2017-02-14T10:54:00.000 | 1 | 0 | 1 | 0 | python,anaconda | 49,497,975 | 2 | false | 0 | 0 | You have two python installations, your original one and Anaconda's. They are independent and whatever module you add in one it is not available to the other. You have to do the pip install in your Anaconda python for it to have its own pytrends. | 1 | 2 | 0 | Windows 10; installed Python 3.6.0, installed pytrends, then installed Anaconda 4.3.0 (Python 3.6, 64-bit).
Anaconda does not see my pytrends package: error "No module named 'pytrends'".
Pytrends is installed - Python Shells accepts "from pytrends.request import TrendReq". | Anaconda can't see pytrends | 0.099668 | 0 | 0 | 5,070 |
42,224,655 | 2017-02-14T11:11:00.000 | 0 | 0 | 0 | 0 | python-2.7,import,tensorflow,protocol-buffers,importerror | 42,253,328 | 1 | true | 0 | 0 | I had a similar problem. Make sure that pip and python have the same path when typing which pip and which python. If they differ, change your ~/.bash_profile so that the python path matches the pip path, and use source ~/.bash_profile.
If that doesn't work, I would try to reinstall pip and tensorflow.
I installed pip using this command:
wget https://bootstrap.pypa.io/get-pip.py
sudo python2.7 get-pip.py | 1 | 2 | 1 | I am using tensorflow with python 2.7. However, after updating python 2.7.10 to 2.7.13, I get an import error with tensorflow
File "", line 1, in
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/__init__.py", line 24, in
from tensorflow.python import *
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/python/__init__.py", line 63, in
from tensorflow.core.framework.graph_pb2 import *
File "/Users/usrname/Library/Python/2.7/lib/python/site-
packages/tensorflow/core/framework/graph_pb2.py", line 6, in
from google.protobuf import descriptor as _descriptor
ImportError: No module named google.protobuf
Output from pip install protobuf
Requirement already satisfied: protobuf in /usr/local/lib/python2.7/site-packages
Requirement already satisfied: setuptools in /Users/usrname/Library/Python/2.7/lib/
python/site-packages (from protobuf)
Requirement already satisfied: six>=1.9 in /Library/Python/2.7/site-packages/
six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied: appdirs>=1.4.0 in /usr/local/lib/python2.7/site-packages
(from setuptools->protobuf)
Requirement already satisfied: packaging>=16.8 in /usr/local/lib/python2.7/site-packages
(from setuptools->protobuf)
Requirement already satisfied: pyparsing in /usr/local/lib/python2.7/site-packages
(from packaging>=16.8->setuptools->protobuf)
Output from which python:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I believe this path changed after the python update, but not sure. A solution could possibly be to downgrade python, but this seems like a bad solution? As I work in a team, I would like to avoid reinstalling Tensorflow due to end up with different versions, but this is perhaps the way to go? Any advice?
Update: I tried to install tensorflow all over, but the same error keeps popping up. Maybe the problem is the environment variables, as which pip returns /usr/local/bin/pip (which is different from which python)? | Tensorflow import error after python update | 1.2 | 0 | 0 | 781
42,227,567 | 2017-02-14T13:34:00.000 | 0 | 0 | 0 | 0 | python,database,sqlite | 42,227,865 | 3 | false | 0 | 0 | You'd need to be more precise on how you intend to compare the tables' content and what is the expected outcome. Sqlite3 itself is a good tool for comparison and you can easily query the comparison results you wish to get.
If these tables, however, are located in different databases, you can dump them into a temporary db using python's built-in sqlite3 module.
You can also dump the query results into a data collection such as list and then perform your comparison but then again we can't help you if we don't know the expected outcome. | 2 | 0 | 0 | Now the question is a little tricky.... I have 2 tables that i want to compare them for their content. The tables have same no. of columns and same column names and same ordering of columns(if there is such thing).
Now i want to compare their contents but the trick is the ordering of their rows can be different ,i.e., row no. 1 in table 1 can be present in row no. 1000 in table 2. I want to compare their contents such that the ordering of the rows don't matter. And also remember that their is no such thing as primary key.
Now i can use and design Data structures or i can use an existing library to do the job. I want to use some existing APIs (if any). So can any1 point me in the right direction?? | Comparing two sqlite3 tables using python | 0 | 1 | 0 | 1,752 |
42,227,567 | 2017-02-14T13:34:00.000 | 0 | 0 | 0 | 0 | python,database,sqlite | 42,228,061 | 3 | false | 0 | 0 | You say "there is no PRIMARY KEY". Does this mean there is truly no way to establish the identity of the item represented by each row? If that is true, your problem is insoluble since you can never determine which row in one table to compare with each row in the other table.
If there is a set of columns that establish identity, then you would read each row from table 1, read the row with the same identity from table 2, and compare the non-identity columns. If you find all the table 1 rows in table 2, and the non-identity columns are identical, then you finish up with a check for table 2 rows with identities that are not in table 1.
If there is no identity and if you don't care about identity, but just whether the two tables would appear identical, then you would read the records from each table sorted in some particular order. Compare row 1 to row 1, row 2 to row 2, etc. When you hit a row that's different, you know the tables are not the same.
As a shortcut, you could just use SQLite to dump the data into two text files (again, ordered the same way for both tables) and compare the file contents.
You may need to include all the columns in your ORDER BY clause if there is not a subset of columns that guarantee a unique sort order. (If there is such a subset of columns, then those columns would constitute the identity for the rows and you would use the above algorithm). | 2 | 0 | 0 | Now the question is a little tricky.... I have 2 tables that i want to compare them for their content. The tables have same no. of columns and same column names and same ordering of columns(if there is such thing).
Now i want to compare their contents but the trick is the ordering of their rows can be different ,i.e., row no. 1 in table 1 can be present in row no. 1000 in table 2. I want to compare their contents such that the ordering of the rows don't matter. And also remember that their is no such thing as primary key.
Now i can use and design Data structures or i can use an existing library to do the job. I want to use some existing APIs (if any). So can any1 point me in the right direction?? | Comparing two sqlite3 tables using python | 0 | 1 | 0 | 1,752 |
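A sketch of the order-independent comparison using plain SQL from Python's sqlite3 module, assuming both tables live in (or are attached to) the same database file and share a schema; the file and table names are placeholders.
import sqlite3

conn = sqlite3.connect("data.db")      # assumed file name

def tables_match(a, b):
    # Rows present in one table but not the other, in both directions.
    # Note: EXCEPT ignores duplicate multiplicity, which is fine when rows are unique.
    only_in_a = conn.execute("SELECT * FROM %s EXCEPT SELECT * FROM %s" % (a, b)).fetchall()
    only_in_b = conn.execute("SELECT * FROM %s EXCEPT SELECT * FROM %s" % (b, a)).fetchall()
    return not only_in_a and not only_in_b

print(tables_match("table1", "table2"))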
42,231,337 | 2017-02-14T16:30:00.000 | 1 | 1 | 0 | 0 | python-3.x,docker,raspberry-pi3 | 42,259,134 | 1 | false | 0 | 0 | Looks like the output buffering was in fact the issue. It was working but never outputting anything so I couldn't tell. Using python3 -u seems to do the trick. I updated my Docker image to reflect this. | 1 | 1 | 0 | If I execute a Python 3 script inside my Raspberry Pi 3 and it uses time.sleep(wait), it only works interactively. If I background the process using &, the script doesn't seem to work at all and I don't see any output in my CSV file the script writes to. It stays at file size 0 forever.
I've tried this by running a script directly (read-sensor >/var/lib/envirophat/sensor.csv &) and the same inside a Docker container (I'm using HypriotOS).
How can I read the sensor faster than once per minute (using crontab) but not continuously without any kind of sleep? | Using Python 3 time.sleep in Raspberry Pi 3 hangs process | 0.197375 | 0 | 0 | 514 |
42,231,450 | 2017-02-14T16:35:00.000 | 2 | 0 | 1 | 0 | python,pycharm | 42,231,509 | 3 | false | 0 | 0 | Check if you have Java installed on your windows 10. | 2 | 1 | 0 | I downloaded and installed JetBrains PyCharm (Community version) on my Windows 10, but nothing happens when I try to run it. I tried everything like rebooting Windows, Run as administrator, etc. Nothing is found in Task Manager either. | PyCharm is not Launching on Windows! What's wrong with it? | 0.132549 | 0 | 0 | 13,421 |
42,231,450 | 2017-02-14T16:35:00.000 | 6 | 0 | 1 | 0 | python,pycharm | 42,232,352 | 3 | true | 0 | 0 | The shortcut pointed to pycharm.exe which will never work no matter how you invoke it for some reason. Maybe my Windows doesn't have a 32 bit java version.
I found pycharm64.exe under the same folder by chance, and that one works.
It would be nice if the installer can figure out what version should be running, or at least someone puts a reminder on the download page. | 2 | 1 | 0 | I downloaded and installed JetBrains PyCharm (Community version) on my Windows 10, but nothing happens when I try to run it. I tried everything like rebooting Windows, Run as administrator, etc. Nothing is found in Task Manager either. | PyCharm is not Launching on Windows! What's wrong with it? | 1.2 | 0 | 0 | 13,421 |
42,231,764 | 2017-02-14T16:51:00.000 | 8 | 0 | 1 | 0 | python,anaconda,conda | 68,422,861 | 8 | false | 0 | 0 | As the answer from @pkowalczyk mentioned some drawbacks: In my humble opinion, the painless and risk-free (workaround) way is following these steps instead:
Activate & Export your current environment conda env export > environment.yml
Deactivate current conda environment. Modify the environment.yml file and change the name of the environment as you desire (usually it is on the first line of the yaml file)
Create a new conda environment by executing this conda env create -f environment.yml
This process takes a couple of minutes, and now you can safely delete the old environment.
P.S. nearly 5 years and conda still does not have its "rename" functionality. | 1 | 469 | 0 | I have a conda environment named old_name, how can I change its name to new_name without breaking references? | How can I rename a conda environment? | 1 | 0 | 0 | 242,754 |
42,232,177 | 2017-02-14T17:11:00.000 | 0 | 0 | 1 | 0 | python,opencv,dll | 43,529,327 | 3 | false | 0 | 0 | use python 2.7.1.0 instead of python 3, cv2 worked and dll load error fixed after using python 2.7 | 1 | 1 | 1 | I am using Anaconda 4.3.0 (64-bit) Python 3.6.0 on windows 7. I am getting the error "ImportError: DLL load failed: The specified module could not be found." for importing the package import cv2.
I have downloaded the OpenCV package and copy paste cv2.pyd into the Anaconda site package and updated my system path to point to OpenCV bin path to get the DLL. Still I am not able resolve this issue.
I did another way to install using pip install opencv-python. Still not working.
Please need suggestions. Thank you | Import cv2: ImportError: DLL load failed: windows 7 Anaconda 4.3.0 (64-bit) Python 3.6.0 | 0 | 0 | 0 | 2,068 |
42,232,500 | 2017-02-14T17:27:00.000 | 0 | 0 | 0 | 0 | python,computer-vision,opencv3.0,robotics,pose-estimation | 42,255,630 | 1 | false | 0 | 0 | The obvious advantage of having a pose estimate is that it restricts the image region for searching your target.
Next, if your problem is occlusion, you then need to model that explicitly, rather than just try to paper it over with image processing tricks: add to your detector objective function a term that expresses what your target may look like when partially occluded. This can be either an explicit "occluded appearance" model, or implicit - e.g. using an algorithm that is able to recognize visible portions of the targets independently of the whole of it. | 1 | 0 | 1 | Suppose that I have an array of sensors that allows me to come up with an estimate of my pose relative to some fixed rectangular marker. I thus have an estimate as to what the contour of the marker will look like in the image from the camera. How might I use this to better detect contours?
The problem that I'm trying to overcome is that sometimes, the marker is occluded, perhaps by a line cutting across it. As such, I'm left with two contours that if merged, would yield the marker. I've tried opening and closing to try and fix the problem, but it isn't robust to the different types of lighting.
One approach that I'm considering is to use the predicted contour, and perform a local convolution with the gradient of the image, to find my true pose.
Any thoughts or advice? | Using external pose estimates to improve stationary marker contour tracking | 0 | 0 | 0 | 68 |
42,233,282 | 2017-02-14T18:08:00.000 | 1 | 0 | 1 | 0 | python,process,pool,multiprocess | 48,311,022 | 1 | false | 0 | 0 | The argument 'processes' in Pool means the total subprocess you want to create in this program. So If you want to use all 60 cores, here should be 60. | 1 | 1 | 0 | I am using a linux server with 128 cores, but I'm not the only one using it so I want to make sure that I use a maximum of 60 cores for my program. The program I'm writing is a series of simulations, where each simulation is parallelized itself. The number of cores of such a single simulation can be chosen, and I typically use 12.
So in theory, I can run 5 of these simulations at the same time, which would result in (5x12) 60 cores used in total. I want start the simulations from python (since that's where all the preprocessing happens), and my eye has caught the multiprocessing library, especially the Pool class.
Now my question is: should I use
pool = Pool(processes=5)
or
pool = Pool(processes=60)
The idea being: does the processes argument signify the amount of workers used (where each worker is assigned 12 cores), or the total amount of processes available? | Python multiprocess pool processes count | 0.197375 | 0 | 0 | 800 |
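For illustration, a sketch of the first option from the question (Pool(processes=5)), under the assumption that each submitted task launches its own 12-core simulation, so five workers keep 60 cores busy; ./simulate and its flags are placeholders.
from multiprocessing import Pool
import subprocess

def run_simulation(config):
    # Each worker launches one 12-core simulation and waits for it to finish.
    return subprocess.call(["./simulate", "--threads", "12", config])

if __name__ == "__main__":
    configs = ["run_%d.cfg" % i for i in range(20)]
    pool = Pool(processes=5)            # 5 workers, each driving a 12-core job
    results = pool.map(run_simulation, configs)
    pool.close()
    pool.join()
    print(results)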
42,233,903 | 2017-02-14T18:44:00.000 | 1 | 1 | 1 | 1 | python,c,linux,pip,centos6 | 42,234,395 | 1 | false | 0 | 0 | After digging a bit more, I found the problem.
The symbol was undefined in _io.so. I ran ldd on this library and learned that it was pointing to an older libpython2.7.so (which is the library that happens to define the symbol in its new version). This was because I had the old /opt/python/lib in my LD_LIBRARY_PATH:
linux-vdso.so.1 => (0x00007fffb68d5000)
libpython2.7.so.1.0 => /opt/python/lib/libpython2.7.so.1.0 (0x00007f4240492000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f424025f000)
libc.so.6 => /lib64/libc.so.6 (0x00007f423fecb000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f423fcc7000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f423fac3000)
libm.so.6 => /lib64/libm.so.6 (0x00007f423f83f000)
/lib64/ld-linux-x86-64.so.2 (0x000000337b000000)
I fixed this and it solved the problem. | 1 | 1 | 0 | I've installed python 2.7.13 from sources according to their readme file on CentOS 6.6. (just following the configure/make procedure). I run these python from the command line and seems to work fine. However, as it doesn't come with pip and setuptools, I downloaded get-pip.py and tried to run it this way:
/share/apps/Python-2.7.13/bin/python2.7 get-pip.py
Then I get the following error:
Traceback (most recent call last):
File "get-pip.py", line 28, in <module>
import tempfile
File "/share/apps/Python-2.7.13/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/share/apps/Python-2.7.13/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: /share/apps/Python-2.7.13/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyCodec_LookupTextEncoding
I tried the same with Python 2.7.12 with identical results.
However, if I run get-pip.py with a prebuilt python 2.7.12 release, it works fine.
EDIT: I checked the library /share/apps/Python-2.7.13/lib/python2.7/lib-dynload/_io.so with nm -g and the symbol seems to be there (I found U _PyCodec_LookupTextEncoding)
Any help will be greatly appreciated,
thanks in advance,
Bernabé | Undefined symbol after Running get-pip on fresh Python source installation | 0.197375 | 0 | 0 | 465 |
42,235,122 | 2017-02-14T19:59:00.000 | 3 | 0 | 1 | 0 | python,python-2.7 | 42,235,244 | 2 | false | 0 | 0 | The problem is not int, the problem is the floating point value itself. Your value would need 17 digits of precision to be represented correctly, while double precision floating point values have between 15 and 16 digits of precision. So, when you input it, it is rounded to the nearest representable float value, which is 1000000000000000.0. When int is called it cannot do a thing - the precision is already lost.
If you need to represent this kind of values exactly you can use the decimal data type, keeping in mind that performance does suffer compared to regular floats. | 1 | 2 | 0 | I have a variable containing a large floating point number, say a = 999999999999999.99
When I type int(a) in the interpreter, it returns 1000000000000000.
How do I get the output as 999999999999999 for long numbers like these? | How to convert large float values to int? | 0.291313 | 0 | 0 | 1,257 |
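A short illustration of the Decimal route mentioned in the answer; note the value has to enter the program as a string, otherwise the precision is already lost before Decimal ever sees it.
from decimal import Decimal

a = Decimal("999999999999999.99")        # keep it as a string to preserve all digits
print(int(a))                            # 999999999999999
print(int(float("999999999999999.99")))  # 1000000000000000, float already rounded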
42,236,545 | 2017-02-14T21:27:00.000 | 0 | 0 | 0 | 1 | python,macos,protocol-buffers | 42,259,034 | 1 | true | 0 | 0 | I was/am using PyCharm. The protobuf library doesn't automatically get linked to the PyCharm interpreter. If you run python script.py from the command line, there are no issues with missing modules. | 1 | 2 | 0 | I installed the package and ran all of the correct commands. I did this for 2.6.1, 2.7, and 3.2. Between each I subsequently uninstalled the previous version. Within each version I went into the python folder and ran the python installation commands.
I ran brew install protobuf (and subsequently uninstalled it).
I ran sudo pip install protobuf (and subsequently uninstalled it).
The issue I am constantly getting is that the generated .py protobuf file calls imports from google.protobuf, but I am returned an error: ImportError: No module named google.protobuf
I then copy in the google folder (which you shouldn't have to do) and it stops returning that error, but the file and examples won't work. | How do I use protobuf on a Mac with Python? | 1.2 | 0 | 0 | 1,350 |
42,237,072 | 2017-02-14T22:01:00.000 | 26 | 0 | 1 | 0 | python | 42,237,193 | 10 | true | 0 | 0 | Scan your import statements. Chances are you only import things you explicitly wanted to import, and not the dependencies.
Make a list like the one pip freeze does, then create and activate a virtualenv.
Do pip install -r your_list, and try to run your code in that virtualenv. Heed any ImportError exceptions, match them to packages, and add to your list. Repeat until your code runs without problems.
Now you have a list to feed to pip install on your deployment site.
This is extremely manual, but requires no external tools, and forces you to make sure that your code runs. (Running your test suite as a check is great but not sufficient.) | 1 | 71 | 0 | What is the most efficient way to list all dependencies required to deploy a working project elsewhere (on a different OS, say)?
Python 2.7, Windows dev environment, not using a virtualenv per project, but a global dev environment, installing libraries as needed, happily hopping from one project to the next.
I've kept track of most (not sure all) libraries I had to install for a given project. I have not kept track of any sub-dependencies that came auto-installed with them. Doing pip freeze lists both, plus all the other libraries that were ever installed.
Is there a way to list what you need to install, no more, no less, to deploy the project?
EDIT In view of the answers below, some clarification. My project consists of a bunch of modules (that I wrote), each with a bunch of imports. Should I just copy-paste all the imports from all modules into a single file, sort eliminating duplicates, and throw out all from the standard library (and how do I know they are)? Or is there a better way? That's the question. | List dependencies in Python | 1.2 | 0 | 0 | 86,884 |
42,238,406 | 2017-02-14T23:54:00.000 | 0 | 1 | 1 | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 71,913,507 | 2 | false | 0 | 0 | The accepted answer is no longer accurate due to at least two new AWS features added since the posting. There are many configuration options and costs to consider, but both would work for the OP's needs.
Lambda functions can now attach an Elastic File System (EFS) volume. Write to EFS instead of /tmp.
Lambda functions now let you configure the size of /tmp. Simply increase the size to handle your largest expected file. | 1 | 2 | 0 | I'm grabbing some zip files from an S3 bucket and then converting them to gzip. The zipped files are about 130 megs. When uncompressed they are about 2 Gigs so I'm hitting the '[Errno 28] No space left on device' error.
Is it possible to use a different scratch space? maybe an EBS volume? | Is there a way to change the 'scratch' (/tmp) space location of an AWS lambda function? | 0 | 0 | 0 | 330 |
42,239,173 | 2017-02-15T01:20:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7,virtualenv | 42,239,415 | 1 | true | 1 | 0 | The problem was not with the Django-core but with django-user-accounts app that was included with pinax. Upgrading the django-user-accounts app fixed the issue.
Thanks to @Selcuk for the solution. | 1 | 0 | 0 | I am trying to run an existing django app. The app has been built in django-1.10. I set up a new virtualenv and installed the requirements and everything. However, I get errors like the following:
from django.utils import importlib
ImportError: cannot import name importlib
Now, the above is from the following source - .virtualenvs/crowd/lib/python2.7/site-packages/account/conf.py
When I manually fix the conf.py file, I still keep getting errors to fix either deprecated or removed features from older django versions.
Any idea as to how to fix this? I thought the purpose of working in virtualenvs was to avoid such errors.
Any suggestions would be much appreciated. Thanks in advance!
This is how the question is different: Even after I fix the importlib import statement, it keeps giving me errors like that of the usage of SubFieldBase and so on. | django-1.10 still contains deprecated and removed features | 1.2 | 0 | 0 | 74 |
42,239,544 | 2017-02-15T02:11:00.000 | 1 | 0 | 0 | 0 | python,selenium,selenium-chromedriver | 42,261,282 | 2 | false | 1 | 0 | I'd look at the ajax load event listener (the code that loads more <li>s). You need to trigger whatever that listens for. (aka: does it watch for something entering the view port, or something's y-offset, or a MouseEvent, or a scroll()?)
Then you need to trigger that kind of event on the element it listens to. | 2 | 2 | 0 | There is a <ul>tag in a webpage, and so many <li> tags in the <ul> tag. The <li> tags are loaded by ajax automatically while mouse wheel scroll down continuously.
The loading of <li> tags will work well if I use mouse wheel.
I want to use selenium to get the loaded info in <li> tags, but the javascript of:
document.getElementById(/the id of ul tag/).scrollTop=200;
can not work as the new <li> can not be loaded by ajax neither in chrome console nor the selenium execute_script.
So, if there is an API of selenium to behave like mouse wheel scroll down? Or is there any other way to solve this problem? | How can python + selenium + chromedriver use mouse wheel? | 0.099668 | 0 | 1 | 418 |
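One way to poke the lazy-load listener from Python Selenium, in the spirit of the answer above: scroll the element with execute_script and, if the page listens for wheel events specifically, synthesize one. The element id, event details, and the Selenium 3-era locator call are assumptions.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("http://example.com/page-with-lazy-list")   # placeholder URL

ul = driver.find_element_by_id("the-ul-id")            # assumed element id
# Plain scrolling is often enough to fire a scroll-based loader:
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", ul)
# If the loader listens for wheel events instead, dispatch one explicitly:
driver.execute_script(
    "arguments[0].dispatchEvent(new WheelEvent('wheel', {deltaY: 200, bubbles: true}));", ul)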
42,239,544 | 2017-02-15T02:11:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-chromedriver | 42,455,875 | 2 | true | 1 | 0 | From now on, there is not any reasonable reason, so I close this question. | 2 | 2 | 0 | There is a <ul>tag in a webpage, and so many <li> tags in the <ul> tag. The <li> tags are loaded by ajax automatically while mouse wheel scroll down continuously.
The loading of <li> tags will work well if I use mouse wheel.
I want to use selenium to get the loaded info in <li> tags, but the javascript of:
document.getElementById(/the id of ul tag/).scrollTop=200;
can not work as the new <li> can not be loaded by ajax neither in chrome console nor the selenium execute_script.
So, if there is an API of selenium to behave like mouse wheel scroll down? Or is there any other way to solve this problem? | How can python + selenium + chromedriver use mouse wheel? | 1.2 | 0 | 1 | 418 |
42,240,124 | 2017-02-15T03:22:00.000 | 1 | 0 | 1 | 0 | python,path,ipython-notebook | 42,240,221 | 2 | false | 0 | 0 | It's easy question, modify \C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code to C:\\Users\\User\\Desktop\\A Student's Guide to Python for Physical Modeling by KInder and Nelson\\code. | 1 | 1 | 1 | What am I doing wrong here?
I cannot add a path to my Jupyter Notebook. I am stuck. Any of my attempts did not work at all.
home_dir="\C:\Users\User\Desktop\"
data_dir=home_dir + "\C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code"
data_set=np.loadtxt(data_dir + "HIVseries.csv", delimiter=',') | Add path in Python to a Notebook | 0.099668 | 0 | 0 | 541 |
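A small illustration of the backslash problem behind the answer above: either escape the backslashes, use a raw string, or let os.path.join build the path; the directory is the asker's, shortened, and the subfolder name is assumed.
import os
import numpy as np

# A raw string (r"...") stops "\U", "\t", etc. from being treated as escape sequences.
home_dir = r"C:\Users\User\Desktop"
data_dir = os.path.join(home_dir, "code")          # assumed subfolder
data_set = np.loadtxt(os.path.join(data_dir, "HIVseries.csv"), delimiter=",")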
42,241,038 | 2017-02-15T04:56:00.000 | 6 | 0 | 0 | 0 | python,tensorflow,deep-learning,text-recognition | 42,244,824 | 1 | true | 0 | 0 | The difficulty is that you don't know where the text is. The solution is, given an image, you need to use a sliding window to crop different part of the image, then use a classifier to decide if there are texts in the cropped area. If so, use your character/digit recognizer to tell which characters/digits they really are.
So you need to train another classifier: given a cropped image (the size of cropped images should be slightly larger than that of your text area), decide if there are texts inside.
Just construct training set (positive samples are text areas, negative samples are other areas randomly cropped from the big images) and train it~ | 1 | 7 | 1 | I am new to TensorFlow and to Deep Learning.
I am trying to recognize text in naturel scene images. I used to work with an OCR but I would like to use Deep Learning. The text has always the same format :
ABC-DEF 88:88.
What I have done is recognize every character/digit. It means that I cropped the image around every character (so each picture gives me 10 characters) to build my training and test set and they build a two conv neural networks. So my training set was a set of characters pictures and the labels were just characters/digits.
But I want to go further. What I would like to do is just to give the full pictures and output the entire text (not one character such as in my previous model).
Thank you in advance for any help. | TensorFlow - Text recognition in image | 1.2 | 0 | 0 | 13,369 |
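A bare-bones sliding-window loop in the spirit of the answer above; the window size, stride, and the two classifier callables are placeholders for models you would train yourself.
import numpy as np

def sliding_windows(image, win_h=32, win_w=32, stride=16):
    # Yield (x, y) positions and the corresponding crops of the image.
    h, w = image.shape[:2]
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            yield (x, y), image[y:y + win_h, x:x + win_w]

def find_text(image, contains_text, read_characters):
    # contains_text(crop) -> bool and read_characters(crop) -> str are the two
    # models described in the answer (both hypothetical here).
    hits = []
    for (x, y), crop in sliding_windows(image):
        if contains_text(crop):
            hits.append(((x, y), read_characters(crop)))
    return hits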
42,242,719 | 2017-02-15T07:07:00.000 | 0 | 0 | 0 | 0 | python,django,firebase-authentication | 53,589,283 | 1 | false | 1 | 0 | Firebase authentication only supports login/signup, reset password or email.
But for that you need Firebase admin credentials.
For other fields you need a local model. There is no problem with using Django, but there is also no existing integration I'm aware of, so you'd have to hook it up yourself.
If you want a Firebase-like auth system and other functionality, then you can use social-django-restframework. You can integrate all the login systems with your Django app and control users with the built-in user model. | 1 | 7 | 0 | I am evaluating Firebase authentication to see if it works well with Django/Djangae. Here comes some context:
Require email/password authentication, the ability to add additional fields like job title, and basic things like reset-password emails.
Use the Djangae framework (Django that uses Datastore as data storage) on App Engine.
It would be really good to make use of the built-in authentication tools provided by Django, like sessions, require-login, etc.
Drop-in authentication seems to be a candidate. Does it work with Django authentication features like permissions, groups, etc.?
Thanks in advance. | Firebase Authentication and Django/Djangae | 0 | 0 | 0 | 1,473
42,243,255 | 2017-02-15T07:37:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,sockets | 42,267,485 | 1 | true | 0 | 0 | Never mind i figures it out. My mistake was that i opened new socket with every thread, while should have opened that once in main() func, then do the accept in the thread. Thank you all | 1 | 0 | 0 | i have started working on python chat, using sockets.
I am now having a problem with connecting many clients to the server, because if they connect to the same port they won't be able to communicate live, because each client would wait in line until the port is free. Now my idea was to choose (on the server side) how many clients I want first, then open that range of ports using a simple for function and threads. Now my problem is that on my client side I am using try, where the "try" point is connecting to the port. At first I thought if somebody already connected to some port it would throw an error so the client would just jump to the next port, but I forgot about that waiting-in-line thing. Any ideas? | Trying to create Python chat | 1.2 | 0 | 1 | 65
42,243,330 | 2017-02-15T07:42:00.000 | 0 | 0 | 1 | 0 | python,deployment,anaconda,release,conda | 42,243,410 | 1 | false | 0 | 0 | You can use PyInstaller, cxfreeze, py2app for create standalone exe for windows and .tz for Linux and Mac it helps to distribute your code to others with or without python | 1 | 0 | 0 | I have installed anaconda and write a one file py script. I want to create a zip package that user can just unzip the file and double click a file (.bat or .py) for run.
How can I create such a zip file?
I read the docs regarding conda environments, but it seems that is still something for the development machine. | How to deploy/release a python script created by anaconda? | 0 | 0 | 0 | 678
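As a concrete illustration of the PyInstaller route mentioned in the answer above (the script name is an assumption):
pip install pyinstaller
pyinstaller --onefile my_script.py
This places a single self-contained executable in the dist folder, which can then be zipped and shipped to users who do not have Python installed.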
42,243,861 | 2017-02-15T08:12:00.000 | 0 | 0 | 0 | 0 | python,excel | 42,244,164 | 1 | false | 0 | 0 | Maybe a function from the xlrd module can help you,
where you can get the sheet contents by index like this:
worksheet = workbook.sheet_by_index(5)
and then you would write those contents out at index 0 with a write-capable library such as xlwt or openpyxl (xlrd itself is read-only, so you cannot simply assign workbook.sheet_by_index(0) = worksheet; that is not valid Python). | 1 | 0 | 0 | I am trying to move an Excel sheet, say the one at index 5, to the position of index 0. Right now I have a working solution that copies the entire sheet, writes it into a new sheet created at index 0, and then deletes the original sheet.
I was wondering if there is another method that could push a sheet at any index to the start of the workbook without all of the copying, creating, and writing. | Python - Change the sheet index in excel workbook | 0 | 1 | 0 | 2,015
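For what the question asks, one possible alternative (not part of the answer above) is openpyxl's move_sheet, available in newer openpyxl releases; the file name is an assumption:
from openpyxl import load_workbook

wb = load_workbook("book.xlsx")
ws = wb.worksheets[5]          # the sheet currently at index 5
wb.move_sheet(ws, offset=-5)   # shift it five positions toward the front
wb.save("book.xlsx")
This reorders the sheet in place, without copying cells or deleting anything.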
42,248,607 | 2017-02-15T11:51:00.000 | 1 | 0 | 1 | 1 | python,pycharm,fabric | 42,261,986 | 3 | false | 0 | 0 | I haven't used this setup on Windows, but on Linux/Mac, it's fairly straightforward:
Create a new Run Configuration in PyCharm for a Python script (when you click the "+" button, select the one labelled "Python")
The "Configuration" tab should be open.
For the "Script" field, enter the full path to fab.exe, like C:\Python27\.....\fab.exe or whatever it is.
For Script parameters, just try -l, to list the available commands. You'll tweak this later, and fill it in with whatever tasks you'd run from the command line, like "fab etc..."
For the "Working directory" field, you'll want to set that to the directory that contains your fabfile.
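For reference, a minimal Fabric 1.x fabfile that this configuration could pick up (the task name and command are assumptions):
from fabric.api import local

def hello():
    # A trivial task so there is a line to set a PyCharm breakpoint on
    local("echo hello from fabric")
With "hello" in the Script parameters field, breakpoints inside the task should be hit when you run the configuration in debug mode.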
And it's about as easy as that, at least on *nix. Sorry that I don't have a Windows setup, but do let us know if you do have any issues with the setup described above. | 1 | 2 | 0 | There are some posts on SO that tell me to use fab-script.py as the startup script for PyCharm; that's exactly what I used before. Now that I have upgraded Fabric to the latest version, fab-script has disappeared and only fab.exe is left. I have tried a lot of other ways, but still failed to launch the debugger in PyCharm. | how to debug python fabric using pycharm | 0.066568 | 0 | 0 | 1,096
42,253,981 | 2017-02-15T15:49:00.000 | 0 | 0 | 0 | 0 | python,pyspark | 45,752,337 | 2 | false | 0 | 0 | I have the same problem. The Python file stat.py does not seem to be in Spark 2.1.x but in Spark 2.2.x. So it seems that you need to upgrade Spark with its updated pyspark (but Zeppelin 0.7.x does not seem to work with Spark 2.2.x). | 1 | 4 | 1 | Python 2.7, Apache Spark 2.1.0, Ubuntu 14.04
In the pyspark shell I'm getting the following error:
>>> from pyspark.mllib.stat import Statistics
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named stat
Solution?
similarly
>>> from pyspark.mllib.linalg import SparseVector
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named linalg
I have numpy installed and
>>> sys.path
['', u'/tmp/spark-2d5ea25c-e2e7-490a-b5be-815e320cdee0/userFiles-2f177853-e261-46f9-97e5-01ac8b7c4987', '/usr/local/lib/python2.7/dist-packages/setuptools-18.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/pyspark-2.1.0+hadoop2.7-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/py4j-0.10.4-py2.7.egg', '/home/d066537/spark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip', '/home/d066537/spark/spark-2.1.0-bin-hadoop2.7/python', '/home/d066537', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client'] | unable to import pyspark statistics module | 0 | 0 | 0 | 1,550 |
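A quick way to check which PySpark build the shell is actually importing, relevant to the version point made in the answer above (note that __version__ may be missing on some older builds):
import pyspark
print(pyspark.__version__)  # the Spark version this pyspark package belongs to
print(pyspark.__file__)     # where it was imported from, useful when several installs are on sys.path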