Dataset schema (column: dtype, observed range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
43,354,382
2017-04-11T19:11:00.000
0
0
1
1
python,django,bash,macos,terminal
62,839,173
10
false
0
0
I have followed the steps below on a MacBook: open Terminal, type nano ~/.bash_profile and press Enter; add the line alias python=python3; press CTRL+O to save (it will prompt for a file name; just hit Enter), then press CTRL+X to exit. Now check the Python version with the command python --version.
2
38
0
My Mac came with Python 2.7 installed by default, but I'd like to use Python 3.6.1 instead. How can I change the Python version used in Terminal (on Mac OS)? Please explain clearly and offer no third party version manager suggestions.
How to switch Python versions in Terminal?
0
0
0
185,402
43,355,220
2017-04-11T20:02:00.000
0
0
1
0
python,git,gitpython
43,550,583
1
true
0
0
I solved it using subprocess at the end, like this: subprocess.check_output(['git', 'diff', '-U500', commit_a, commit_b, file_path], cwd=project_dir).
1
1
0
I want to get the changes to a file in a git repository using the gitpython library. I'm using repo.git.diff(commit_a, commit_b, file_path) for that. But I need to increase the context of the diff similar to the -U argument. How can I do this using the library?
How to get git diff with full context using gitpython
1.2
0
0
437
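The accepted answer's subprocess call can be sketched as a small helper. This is a minimal sketch assuming a git checkout exists at project_dir; the function names are made up, and only the construction of the argument list (mirroring the answer's check_output call) is shown as certain.

```python
import subprocess

def build_diff_cmd(commit_a, commit_b, file_path, context=500):
    # Mirror the answer: -U<context> enlarges the number of context lines.
    return ["git", "diff", f"-U{context}", commit_a, commit_b, file_path]

def git_diff_full_context(commit_a, commit_b, file_path, project_dir, context=500):
    # Run git inside the repository, as in the answer's cwd=project_dir.
    return subprocess.check_output(
        build_diff_cmd(commit_a, commit_b, file_path, context),
        cwd=project_dir, text=True,
    )
```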
43,358,798
2017-04-12T01:37:00.000
1
0
0
0
python,web,flask
43,359,234
1
true
1
0
You can define one model that contains three fields, and define two forms. The first form contains question1 and question2; the second form inherits from the first and adds question3. There are several views: a view that shows the first form to the first user to fill in; a view that saves the first form's data to the database (SQLAlchemy) and sends the second user an email containing a URL (with an id pointing to the data object) that maps to the next view; a view that accepts the id of the data object and shows the second form to the second user (in this view you can create the second form with form = Form2(obj=Data.query.get(id)) to populate it from the data object); and a view that saves the second form's data to the database (you only define one model and update the last field in this view function).
1
0
0
I'm using Flask/Python, for what it matters. I need to have one user begin filling out a form, let's say questions 1 and 2, then save that form and send it via email to another user to finish the form (question 3). The second user then saves it again and the final product is stored. Question 1 would be a name field, question 2 a status field (like 'here', 'not going', etc.), and question 3, the question for the second user, would be a yes/no to confirm the information from the first user. That description was just to outline the full workflow. If anyone can point me to some useful information and/or examples of how to send the uncompleted form to another user to finish, I'd appreciate it.
Process for one user starting a form and another finishing it
1.2
0
0
26
43,360,854
2017-04-12T05:18:00.000
0
0
0
0
python,abaqus
43,388,797
3
false
0
0
Abaqus doesn't output stiffness matrices. If you already know Abaqus and only want to do buckling, you might want to try CalculiX. I think it can output a stiffness matrix and uses Abaqus-style input files.
2
0
1
I need to export the geometric (stress or differential) stiffness matrix in a linear buckling problem from the Abaqus software, for use in a MATLAB program. I found a method to export the usual stiffness matrix, but I can't find any information about exporting the geometric stiffness matrix in an Abaqus script. Can someone help me with this? For example, when I export the stiffness matrix in a buckling problem, the result is the same as the stiffness matrix in a linear static problem.
export geometric stiffness matrix in abaqus software
0
0
0
797
43,360,854
2017-04-12T05:18:00.000
1
0
0
0
python,abaqus
43,375,546
3
false
0
0
Let's discuss a little background on geometrically nonlinear FEA. There are many different methods used for nonlinear geometric analysis of structures. The most well-known and established ones are [1]: Total Lagrangian (TL) and Updated Lagrangian (UL). TL uses the full nonlinear definition of strains; UL uses the linear definition of strain but updates the reference configuration with previous deformation solutions. Abaqus's core solver uses a very specific type of geometric nonlinear algorithm, called co-rotational, where the rotation and deformation with respect to the reference configuration are treated separately. Abaqus's co-rotational formulation is proprietary, and I do not expect them to offer its output so easily. Even if you could see the output for nonlinear geometry matrices from Abaqus, these matrices would differ from what you expect, depending on the methods you used for nonlinear geometric modeling. [1] Reddy, Junuthula Narasimha. An Introduction to Nonlinear Finite Element Analysis: with applications to heat transfer, fluid mechanics, and solid mechanics. OUP Oxford, 2014.
2
0
1
I need to export the geometric (stress or differential) stiffness matrix in a linear buckling problem from the Abaqus software, for use in a MATLAB program. I found a method to export the usual stiffness matrix, but I can't find any information about exporting the geometric stiffness matrix in an Abaqus script. Can someone help me with this? For example, when I export the stiffness matrix in a buckling problem, the result is the same as the stiffness matrix in a linear static problem.
export geometric stiffness matrix in abaqus software
0.066568
0
0
797
43,361,364
2017-04-12T05:55:00.000
2
1
0
0
python,django,amazon-web-services,boto3
43,364,021
1
true
1
0
When you run aws configure, it writes files linked to your profile: ~/.aws/config and ~/.aws/credentials. When you run your application, it looks for those files using the same logic, i.e. ~/.aws/credentials. In a shell, the interpreter translates ~ to an absolute path based on the current user ($(whoami)). TL;DR: you must run aws configure as the same user your application runs as.
1
0
0
I have configured credentials using aws configure, and aws configure list looks fine. Python/Django is able to locate the credentials in shell_plus, but unable to locate them when Django is run through gunicorn/supervisor. This is weird.
AWS credentials : unable to locate credentials in django python
1.2
0
0
1,139
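The per-user resolution of ~ that the answer above describes can be seen with a short stdlib check; nothing beyond the standard credentials location is assumed here.

```python
import os.path

# "~" expands relative to the *current* user's home directory. A service
# running as another user therefore resolves this to a different file,
# which is why credentials written by one user are invisible to it.
creds = os.path.expanduser("~/.aws/credentials")
print(creds)  # an absolute path under the current user's home
```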
43,363,636
2017-04-12T07:59:00.000
1
0
0
0
python,http,http-status-codes,valueerror
43,363,842
3
true
1
0
As there are various invalid types, you should use the most appropriate HTTP status code for each different situation, case by case. For Password doesn't match, I think 403 Forbidden is the best choice. For User doesn't exist in db, 204 No Content is the best one. For value is negative, it depends on why value is negative.
2
1
0
I am raising ValueError in my API because the input parameter of a particular function is not valid, for example: the password doesn't match, the user doesn't exist in the db, or the value is negative. The client provided a valid argument per the API norms, so I think a client-side error (400-series code) is not the case. So should I return status code 200, because the request was fully processed, or is there an HTTP status code for this?
What is the http status code when Api raises ValueError?
1.2
0
1
2,334
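The case-by-case mapping from the accepted answer above can be written down as a small lookup; the key names and helper function are hypothetical, and the 204 choice simply mirrors the answer (many APIs would use 404 for a missing user instead).

```python
# Status code per failure kind, following the answer's suggestions.
STATUS_FOR_ERROR = {
    "password_mismatch": 403,  # Forbidden
    "user_not_found": 204,     # No Content, as the answer proposes
    "negative_value": 400,     # depends on why the value is negative
}

def status_for(error_kind, default=400):
    # Fall back to a generic client error for unmapped ValueErrors.
    return STATUS_FOR_ERROR.get(error_kind, default)
```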
43,363,636
2017-04-12T07:59:00.000
1
0
0
0
python,http,http-status-codes,valueerror
43,363,778
3
false
1
0
You should send another status code. A good example of a processed request that gives a status other than 200 is the 3xx redirection family: after submitting a form through a POST request, it is recommended that the server give a 307 Temporary Redirect, even though the request was totally processed, and even succeeded. In your case, something happened (an exception was raised). Let the client know by sending an adequate status. I would recommend 403 Forbidden or 401 Unauthorized.
2
1
0
I am raising ValueError in my API because the input parameter of a particular function is not valid, for example: the password doesn't match, the user doesn't exist in the db, or the value is negative. The client provided a valid argument per the API norms, so I think a client-side error (400-series code) is not the case. So should I return status code 200, because the request was fully processed, or is there an HTTP status code for this?
What is the http status code when Api raises ValueError?
0.066568
0
1
2,334
43,367,732
2017-04-12T11:01:00.000
1
0
0
0
python,django,postgresql
43,369,451
1
false
1
0
In fact you do not need to create a special table for each customer. SQL databases are designed to keep all similar data in one table; it is much easier to work with them that way. For now, I'd recommend reading about relational databases to better understand how to store data in them. Then you'll see how to better design the application and its data storage.
1
0
0
I need a small bit of help. I am new to Postgres and Django. I am creating a project in Django where there will be n number of clients, and their data is saved into the database on a monthly basis. So my doubt is: should I go with only a single table and save all the data inside it, or do I have the option to create individual tables dynamically as each user arrives, and then save the values into those tables?
Creating dynamic tables in postgres using django
0.197375
1
0
324
43,368,500
2017-04-12T11:38:00.000
0
0
0
0
python,dialogflow-es
54,820,198
1
false
0
0
If you are using the enterprise edition you can use @sys.number-sequence.
1
4
0
I am developing a customer care chat bot to resolve basic queries of customers for my e-commerce site. My order id is 13 digits long. For queries like "Please check my order status with id 9876566765432", api.ai is unable to understand that it is an order id. I have set the entity type to @sys.number. It is able to identify smaller numbers like 343434. I have tried @sys.number-integer and @sys.number-sequence, but they do not work for long numbers. Please advise...
Unable to read very large number in api.ai as parameter
0
0
1
135
43,377,434
2017-04-12T18:50:00.000
1
0
0
0
python,pygame
43,377,665
1
true
0
1
You have to remember the facing direction (e.g. self.face_direction = RIGHT) and, on a key press, flip only if the direction is wrong. Alternatively, save the flipped image in face_flipped_right, then blit either the original image or the flipped one (flipping is nondestructive).
1
0
0
I am making a game using the Pygame development module. When a user of my game presses the left key, I would like my character to "face" left, and when the user presses the right key, I would like my character to be flipped and "face" right. The character is one I drew and imported. I am aware of the flip function in Pygame, but I think there will be errors: if the character starts off facing left and the user presses the right key, the character will be flipped and will move to the right. However, if he/she lets go of the right key and then presses it again, the character will flip and face left, but will continue to move to the right. Is there any way to solve this problem? I already know how to move the character; I am having problems with flipping it. Also, another idea I have considered is the display blitting one image when one key is pressed, and then blitting another when the other key is pressed. But I do not know how to make the original image disappear. Any thoughts on this as well? Thank you.
Trying to flip a character in Pygame
1.2
0
0
346
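The answer's idea of remembering the facing direction and flipping only when it changes can be sketched without pygame; names like Character and FACING_RIGHT are illustrative, and the flip itself is reduced to a counter so the same logic could wrap pygame.transform.flip.

```python
FACING_LEFT, FACING_RIGHT = "left", "right"

class Character:
    def __init__(self, facing=FACING_LEFT):
        self.facing = facing
        self.flips = 0  # how many times the image was actually flipped

    def face(self, direction):
        # Flip only when the requested direction differs from the current
        # one, so repeated presses of the same key never re-flip the sprite.
        if direction != self.facing:
            self.facing = direction
            self.flips += 1

c = Character()
c.face(FACING_RIGHT)  # flips once
c.face(FACING_RIGHT)  # already facing right: no flip
```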
43,379,256
2017-04-12T20:40:00.000
1
0
0
0
python,google-api,google-custom-search,google-api-python-client
43,379,411
1
false
0
0
There is a 'search outside of Google' checkbox in the dashboard; you will get the expected results after you check it. It took me a while to find it. The default setting only returns search results from within Google's own websites.
1
2
0
How would I go about using the Google Custom Search API, using the python GCS library, to only return Google Shopping results? I have the basic implementation already for standard search queries, which searches the whole web and returns related sites, but how would I only return shopping results? Thank You.
Google Custom Search Python Shopping Results Only?
0.197375
0
1
463
43,379,554
2017-04-12T20:59:00.000
1
0
0
1
python,rabbitmq,celery
43,379,719
2
false
0
0
Celery is the task management framework--the API you use to schedule jobs, the code that gets those jobs started, the management tools (e.g. Flower) you use to monitor what's going on. RabbitMQ is one of several "backends" for Celery. It's an oversimplification to say that Celery is a high-level interface to RabbitMQ. RabbitMQ is not actually required for Celery to run and do its job properly. But, in practice, they are often paired together, and Celery is a higher-level way of accomplishing some things that you could do at a lower level with just RabbitMQ (or another queue or message delivery backend).
1
6
0
Is Celery mostly just a high level interface for message queues like RabbitMQ? I am trying to set up a system with multiple scheduled workers doing concurrent http requests, but I am not sure if I would need either of them. Another question I am wondering is where do you write the actual task in code for the workers to complete, if I am using Celery or RabbitMQ?
What is the relationship between Celery and RabbitMQ?
0.099668
0
1
3,260
43,379,578
2017-04-12T21:00:00.000
5
0
1
0
python,scala
43,379,784
1
true
0
0
^A is usually used to represent the Start Of Header character (SOH). Its ASCII value is 0x01. You can create this in code with val c: Char = 1, if that is clearer to you, or if you need it in a string literal you can use the unicode notation '\u0001'.
1
2
0
In Python, "^A" is represented by chr(1). This is what I use as a separator in my files. What is the equivalent in Scala? I am reading the file using Scala, and I want to know how to represent ^A in order to split the data I read from my files.
Control A Representation in Python/Scala
1.2
0
0
237
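The answer's claim about the SOH character can be checked directly from the Python side of the question (the Scala forms val c: Char = 1 and '\u0001' correspond to these):

```python
# ^A is the Start Of Header (SOH) control character, code point 1.
soh = chr(1)
assert soh == "\u0001"   # the unicode-escape form from the answer
assert ord(soh) == 0x01  # its ASCII value is 0x01

# Splitting a ^A-delimited record, as in the question:
record = "alice\u0001bob\u0001carol"
fields = record.split(soh)  # -> ['alice', 'bob', 'carol']
```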
43,382,857
2017-04-13T03:04:00.000
2
0
0
0
python,nlp,nltk
43,383,011
3
false
0
0
My suggestion is as follows: Put each word through the same thesaurus, to get a list of synonyms. Get the size of the set of similar synonyms for the two words. That is a measure of similarity between the words. If you would like to do a more thorough analysis: Also get the antonyms for each of the two words. Get the size of the intersection of the sets of antonyms for the two words. If you would like to go further!... Put each word through the same thesaurus, to get a list of synonyms. Use the top n (=5, or whatever) words from the query result to initiate a new query. Repeat this to a depth you feel is adequate. Make a collection of synonyms from the repeated synonym queries. Get the size of the set of similar synonyms for the two words from the two collections of synonyms. That is a measure of similarity between the words.
1
5
1
I am wondering if it's possible to calculate the distance/similarity between two related words in Python (like "fraud" and "steal"). These two words are not synonymous per se but they are clearly related. Are there any concepts/algorithms in NLP that can show this relationship numerically? Maybe via NLTK? I'm not looking for the Levenshtein distance as that relates to the individual characters that make up a word. I'm looking for how the meaning relates. Would appreciate any help provided.
How to calculate the distance in meaning of two words in Python
0.132549
0
0
2,056
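The "size of the set of shared synonyms" measure suggested in the answer above can be sketched with plain sets; the two synonym lists below are made-up placeholders, not actual thesaurus output.

```python
def synonym_overlap(syns_a, syns_b):
    """Similarity as the size of the intersection of two synonym sets,
    normalized to [0, 1] (the Jaccard index)."""
    a, b = set(syns_a), set(syns_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical thesaurus output for "fraud" and "steal":
fraud_syns = {"deception", "swindle", "theft", "scam"}
steal_syns = {"theft", "swindle", "rob", "pilfer"}
score = synonym_overlap(fraud_syns, steal_syns)  # 2 shared of 6 distinct
```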
43,384,639
2017-04-13T06:04:00.000
0
0
0
0
python,user-interface,tkinter,ms-word
43,390,991
1
true
0
1
You can't do what you want. In a very, very limited sense you can embed a very few applications on linux, but for all intents and purposes you simply cannot embed a non-tkinter graphical application inside a tkinter window.
1
0
0
I've been working quite a bit with tkinter in Python. I was wondering if there is a way to combine another program into my Python GUI script. Here is specifically what I am trying to accomplish: the Python GUI opens; the left side of the GUI is custom content (buttons etc.); the right side is a parented MS Word document (when I move the root window, the MS Word document moves accordingly).
How to parent programs to tkinter windows
1.2
0
0
43
43,385,369
2017-04-13T06:48:00.000
0
0
1
0
python-2.7,wxpython
43,394,585
2
false
0
0
Solution: Step 1: open cmd. Step 2: type pip show wxpython-phoenix. That's it. If it shows your version of wxPython, then your system obviously has it installed; otherwise install it using the command pip install wxpython-phoenix. Done.
1
0
0
How to check if the installed wxPython in my machine is 32-bit or 64-bit?
How to check if the installed wxPython is 32-bit or 64-bit?
0
0
0
2,017
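One practical angle the answer above does not mention: a binary extension package such as wxPython can only be imported by an interpreter of matching bitness, so checking the interpreter itself (stdlib only, no assumptions about wxPython) answers the question for any importable install.

```python
import struct

# The size of a C pointer is 4 bytes on a 32-bit interpreter and 8 bytes
# on a 64-bit one; an importable wxPython must match this bitness.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit Python")
```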
43,394,318
2017-04-13T13:59:00.000
-2
0
0
1
python,google-app-engine,google-cloud-datastore
43,394,458
1
false
1
0
You have to specify the path to your codebase. If you are running the command from the same folder, use .: dev_appserver.py .
1
4
0
I have a project on GAE which uses Google Cloud Datastore. Of course, I have a development environment on my local machine (with a local Datastore), plus stage and production environments on Google Cloud, each with its own Datastore (stage & prod). When I run the project on my local machine, NDB connects me to my local Datastore. This is a problem because I want to connect to Google Cloud Datastore. How can I run the project on my local machine and connect it to Google Cloud Datastore (stage)? I use Python, and run the project via: dev_appserver.py app.yaml
Connect from local GAE project to Google Cloud Datastore
-0.379949
0
0
378
43,400,703
2017-04-13T19:54:00.000
3
0
1
1
linux,python-2.7,pip,installation
43,401,090
3
false
1
0
Seems there's a problem with your pip installation. I have two options for you. 1) Edit file /usr/lib/python2.7/site-packages/packaging/requirements.py and replace line MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") with MARKER_EXPR = originalTextFor(MARKER_EXPR)("marker") OR 2) Try and upgrade your pip installation with pip install -U pip setuptools
1
1
0
I am using python 2.7 and trying to install scrapy using pip but get this: Exception: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 324, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 634, in _prepare_file abstract_dist.prep_for_dist() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 129, in prep_for_dist self.req_to_install.run_egg_info() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 412, in run_egg_info self.setup_py, self.name, File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 387, in setup_py import setuptools # noqa File "/root/.local/lib/python2.7/site-packages/setuptools/init.py", line 12, in import setuptools.version File "/root/.local/lib/python2.7/site-packages/setuptools/version.py", line 1, in import pkg_resources File "/root/.local/lib/python2.7/site-packages/pkg_resources/init.py", line 72, in import packaging.requirements File "/root/.local/lib/python2.7/site-packages/packaging/requirements.py", line 59, in MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") TypeError: call() takes exactly 2 arguments (1 given)
Linux pip package installation error
0.197375
0
0
3,160
43,401,116
2017-04-13T20:24:00.000
2
1
0
0
uwsgi,pythonanywhere
43,477,460
1
true
0
0
In a dev setting you can just use uWSGI as your app and web server, or you can use one of the many Python WSGI servers.
1
3
0
What local development configuration is suggested for testing Python web apps before deploying to PythonAnywhere? Maybe some XAMPP/LAMP analogue for uWSGI exists?
Local development configuration to test web app before deploy to pythonanywhere
1.2
0
0
147
43,403,560
2017-04-14T00:17:00.000
1
0
0
0
python,keras,reshape
43,403,593
1
true
0
0
Just set the last dimension to -1, i.e. Reshape((15, -1)); Keras's Reshape takes the target shape as a single tuple, and -1 tells it to infer that dimension.
1
1
1
Is it possible in Keras to Reshape an array by specifying only one of the two dimensions, such that the last dimension fits accordingly? In my case I have (30,1,2080) and I want to reshape it to (15,).
Reshape array in keras
1.2
0
0
599
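The -1 in the accepted answer means "infer this dimension from the total element count". That inference can be sketched without Keras; the helper name is made up, and the shapes match the question's (30, 1, 2080) example.

```python
from math import prod

def infer_shape(total_elems, shape):
    """Resolve a single -1 entry in `shape` so the element counts match,
    mirroring what Reshape((15, -1)) does for a (30, 1, 2080) input."""
    known = prod(d for d in shape if d != -1)
    if -1 in shape:
        missing, rem = divmod(total_elems, known)
        if rem:
            raise ValueError("total size is not divisible by the known dims")
        return tuple(missing if d == -1 else d for d in shape)
    if known != total_elems:
        raise ValueError("shape does not match total size")
    return tuple(shape)

# 30 * 1 * 2080 = 62400 elements; (15, -1) resolves to (15, 4160).
print(infer_shape(30 * 1 * 2080, (15, -1)))
```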
43,404,175
2017-04-14T01:54:00.000
0
0
1
0
python-3.x,crash
43,421,764
1
false
0
1
Found out the problem: the basic issue was a simple one. The chimp program tries to locate its support files (two images and two sounds) in a subdirectory called /data, and it was failing to load a file using that path. After fixing that, I got it working OK. The only question that remains is why the python/idle processes totally crash with a spinning ball when the script encounters a missing-file error! Thanks, Eric
1
0
0
I have worked with python 3 for a few months. Just a few days ago I have installed pygame and have been doing early tutorials. In two separate situations when I run a python script, the mac halts with a spinning ball and I need to cmd-opt-esc to abort it. I have done googling and not found similar complaints. As one example, I'm running the Chimp game and it loads a chimp image in a new little window, then spinning ball. Restarting the mac didn't help. Sound familiar, and/or does anyone here have guidance?
How avoid Pygame crashing a new mac pro with spinning ball
0
0
0
36
43,407,803
2017-04-14T08:17:00.000
1
1
1
1
python
43,407,826
1
false
0
0
Are you running it from the same folder using the ./SleepCalc.py command? SleepCalc.py alone will not work.
1
0
0
Usually I run a Python script with python myscript.py. I want to run the script directly without typing python. I already added the shebang #!/usr/bin/env python at the top of my script, and then gave the file permission to execute with chmod +x SleepCalc.py, but it still tells me "Command not found". Is there anything I need to change in csh, or anything I did wrong?
Run python script without typing python at the front
0.197375
0
0
81
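On a POSIX system the whole fix discussed above can be demonstrated from Python itself; the script name and body are stand-ins, and sys.executable is used in the shebang so the demo does not depend on a python command being on $PATH.

```python
import os
import stat
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "hello.py")
    with open(script, "w") as f:
        # Shebang first, then the payload; the interpreter path here is
        # taken from the running Python rather than /usr/bin/env python.
        f.write(f"#!{sys.executable}\nprint('hello')\n")
    # chmod +x, as in the question.
    os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
    # Invoke by explicit path: a bare "hello.py" would only work if its
    # directory were on $PATH, which "." normally is not (in csh or elsewhere).
    out = subprocess.check_output([script], text=True).strip()
print(out)
```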
43,411,738
2017-04-14T12:42:00.000
3
0
0
0
python,tensorflow
43,416,019
1
true
0
0
Mostly you use tf.image.* for ease of use. Both crop_to_bounding_box and pad_to_bounding_box use slice and pad underneath, but also add checks and constraints to make sure you don't spend hours trying to debug your slice/pad indices and offsets.
1
1
1
I'd like to understand why the two functions tf.image.crop_to_bounding_box and tf.image.pad_to_bounding_box exist, since their behaviour can be achieved quite simply with tf.slice and tf.pad respectively. They are not much easier to understand, and their scope is narrow since they accept only 3D and 4D tensors. Furthermore, they tend to be slower in terms of execution time. Is there something I am missing here?
tf.image.pad_to_bounding_box VS tf.pad and tf.image.crop_to_bounding_box VS tf.slice
1.2
0
0
825
43,412,245
2017-04-14T13:12:00.000
0
0
1
0
python,windows,pip,anaconda
43,412,379
5
false
0
0
Solution (note: this should work for everyone!): Step 1: conda search python. Step 2: conda install python=3.5.2. Step 3: pip install tensorflow. Step 4: import tensorflow as tf.
4
8
0
I downloaded the latest stable Anaconda install off the Continuum website which turned out to be Anaconda 4.3.1 (64-bit) and comes with Python 3.6.0. I am in a Windows 10 environment. However pip3 is missing and I cannot install any Python 3.x packages via pip3. What am I doing wrong?
Installed Anaconda 4.3.1 (64-bit) which contains Python 3.6 but pip3 missing, cannot install tensorflow
0
0
0
24,838
43,412,245
2017-04-14T13:12:00.000
0
0
1
0
python,windows,pip,anaconda
43,558,079
5
false
0
0
I have Windows 10, 64-bit, Anaconda 4.3 with Python 3.6. Karthik's solution worked for me. Before that I tried everything including "conda create -n tensorflow python=3.5", but it did not work (although "python --version" gave 3.5.3).
4
8
0
I downloaded the latest stable Anaconda install off the Continuum website which turned out to be Anaconda 4.3.1 (64-bit) and comes with Python 3.6.0. I am in a Windows 10 environment. However pip3 is missing and I cannot install any Python 3.x packages via pip3. What am I doing wrong?
Installed Anaconda 4.3.1 (64-bit) which contains Python 3.6 but pip3 missing, cannot install tensorflow
0
0
0
24,838
43,412,245
2017-04-14T13:12:00.000
0
0
1
0
python,windows,pip,anaconda
50,890,356
5
false
0
0
you should be able to install tensorflow using $ conda install -c conda-forge tensorflow
4
8
0
I downloaded the latest stable Anaconda install off the Continuum website which turned out to be Anaconda 4.3.1 (64-bit) and comes with Python 3.6.0. I am in a Windows 10 environment. However pip3 is missing and I cannot install any Python 3.x packages via pip3. What am I doing wrong?
Installed Anaconda 4.3.1 (64-bit) which contains Python 3.6 but pip3 missing, cannot install tensorflow
0
0
0
24,838
43,412,245
2017-04-14T13:12:00.000
1
0
1
0
python,windows,pip,anaconda
57,546,083
5
false
0
0
easy_install pip. I used it for the problem of pip installation in Miniconda: sometimes pip is not properly installed even though it claims to be.
4
8
0
I downloaded the latest stable Anaconda install off the Continuum website which turned out to be Anaconda 4.3.1 (64-bit) and comes with Python 3.6.0. I am in a Windows 10 environment. However pip3 is missing and I cannot install any Python 3.x packages via pip3. What am I doing wrong?
Installed Anaconda 4.3.1 (64-bit) which contains Python 3.6 but pip3 missing, cannot install tensorflow
0.039979
0
0
24,838
43,412,779
2017-04-14T13:42:00.000
0
0
0
0
python,selenium,svg,webdriver
43,422,726
1
false
0
0
Try making use of the Actions or Robot class.
1
0
0
I am interacting with SVG elements in a web page. I am able to locate the SVG elements by XPath, but not able to click them; the error mentions that methods like click() and onclick() are not available. Any suggestions on how we can make them clickable? Please advise.
SVG Elements: Able to locate elements using xpath but not able to click
0
0
1
106
43,413,067
2017-04-14T13:59:00.000
6
1
0
0
python,mathematical-optimization,pyomo
43,420,032
2
false
0
0
While you can use NumPy data when creating Pyomo constraints, you cannot currently create blocks of constraints in a single NumPy-style command with Pyomo. For what it's worth, I don't believe that you can in languages like AMPL or GAMS, either. While Pyomo may eventually support users defining constraints using matrix and vector operations, it is not likely that that interface would avoid generating the individual constraints, as the solver interfaces (e.g., NL, LP, MPS files) are all "flat" representations that explicitly represent individual constraints. This is because Pyomo needs to explicitly generate representations of the algebra (i.e., the expressions) to send out to the solvers. In contrast, NumPy only has to calculate the result: it gets its efficiency by creating the data in a C/C++ backend (i.e., not in Python), relying on low-level BLAS operations to compute the results efficiently, and only bringing the result back to Python. As far as performance and scalability go, I have generated raw models with over 13e6 variables and 21e6 constraints. That said, Pyomo was designed for flexibility and extensibility over speed. Runtimes in Pyomo can be an order of magnitude slower than AMPL when using CPython (although that can shrink to within a factor of 4 or 5 using PyPy). At least historically, AMPL has been faster than GAMS, so the gap between Pyomo and GAMS should be smaller.
1
10
1
I am interested in the performance of Pyomo for generating an OR model with a huge number of constraints and variables (about 10e6). I currently use GAMS to launch optimizations, but I would like to use the various Python features and therefore use Pyomo to generate the model. I made some tests, and apparently the Python methods used to define the constraints are called each time a constraint is instantiated. Before going further in my implementation, I would like to know if there is a way to directly create a block of constraints based on NumPy array data. From my point of view, constructing constraints by block may be more efficient for large models. Do you think it is possible to obtain performance comparable to GAMS or other AML languages with Pyomo or another Python modelling library? Thanks in advance for your help!
Performance of pyomo to generate a model with a huge number of constraints
1
0
0
4,313
43,414,689
2017-04-14T15:33:00.000
0
0
0
0
python,scikit-learn,random-forest
43,507,382
1
true
0
0
I think you can split the majority class into chunks of about 10,000 samples and train the same model on each chunk plus all the points of the minority class.
1
3
1
I am applying scikit-learn's random forests to an extremely unbalanced dataset (ratio of 1:10,000). I can use the class_weight='balanced' parameter; I have read it is equivalent to undersampling. However, this method seems to apply weights to samples and does not change the actual number of samples. Because each tree of the random forest is built on a randomly drawn subsample of the training set, I am afraid the minority class will not be representative enough (or not represented at all) in each subsample. Is this true? This would lead to very biased trees. Thus, my question is: does the class_weight='balanced' parameter allow building reasonably unbiased random forest models on extremely unbalanced datasets, or should I find a way to undersample the majority class for each tree or when building the training set?
Undersampling vs class_weight in ScikitLearn Random Forests
1.2
0
0
900
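The accepted answer's scheme above (split the majority class into chunks and pair each chunk with the full minority class) can be sketched with the stdlib; the chunk size, function name, and toy data are placeholders.

```python
import random

def undersample_rounds(majority, minority, chunk_size, seed=0):
    """Yield training sets: each round pairs one chunk of the shuffled
    majority class with *all* minority samples, as the answer suggests."""
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    for i in range(0, len(shuffled), chunk_size):
        yield shuffled[i:i + chunk_size] + minority

# Toy stand-ins for an imbalanced dataset:
majority = [("maj", i) for i in range(10)]
minority = [("min", 0)]
rounds = list(undersample_rounds(majority, minority, chunk_size=5))
# Two rounds of 5 majority samples plus the single minority sample each.
```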
43,416,606
2017-04-14T17:49:00.000
0
1
1
1
python,bash,shell
43,416,717
2
false
0
0
Call the Python script with /usr/bin/time, e.g. /usr/bin/time python script.py. This allows you to track the CPU and wall-clock time of the script.
1
0
0
I have a bash shell script which internally calls a Python script. I would like to know how long the Python script takes to execute. I am not allowed to make changes to the Python script. Any leads would be helpful; thanks in advance.
Capture run time of python script executed inside shell script
0
0
0
884
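If /usr/bin/time is unavailable, the same outside-the-script measurement can be done from a Python wrapper without touching the target script; the child here is a placeholder one-liner that just sleeps.

```python
import subprocess
import sys
import time

# Time a child Python script from the outside, as the /usr/bin/time
# answer does, without modifying the script itself.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "import time; time.sleep(0.1)"], check=True)
elapsed = time.perf_counter() - start
print(f"child took {elapsed:.3f}s")  # wall-clock time, at least 0.1s here
```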
43,416,724
2017-04-14T17:56:00.000
0
1
0
0
python,bots,telegram
43,864,589
4
false
0
0
In Bot API 3.0, the method to use is deleteMessage, with parameters chat_id and message_id. It is not yet officially announced.
1
1
0
I need a Telegram bot which can delete a message in a group channel. I googled it but did not find a solution. Could you tell me the library and the method for this, if you know? Thank you in advance.
How to delete message on channel telegram?
0
0
1
14,759
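The deleteMessage method mentioned in the answer above is called over the standard Bot API HTTP endpoint; this sketch only builds the request URL (the token and ids are fake, and no request is actually sent).

```python
from urllib.parse import urlencode

def delete_message_url(token, chat_id, message_id):
    # Bot API methods live at https://api.telegram.org/bot<token>/<method>.
    query = urlencode({"chat_id": chat_id, "message_id": message_id})
    return f"https://api.telegram.org/bot{token}/deleteMessage?{query}"

url = delete_message_url("123456:FAKE-TOKEN", chat_id=-1001234, message_id=42)
```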
43,419,146
2017-04-14T20:56:00.000
12
0
1
0
python,operating-system
43,419,284
1
true
0
0
If you used os.remove and ended up deleting a file by accident, then there's no reason for the file to be in the recycle bin: it was removed from the filesystem, and there is no Python operation to get it back. However, a simple deletion just breaks the link to the file; it does not erase the bits of the file on the filesystem, so you can try file-recovery software to get it back. Note that now that the file is erased, this question is not Python-specific anymore: you'd be in the same situation if you had deleted the file by any other means. You should avoid using your system, to minimize the chances of erasing the bits of the file by writing another file at the same place on the disk. The tools you can use to recover the file are platform-specific, and the generic question "how to recover deleted files" has most certainly already been asked here, on Super User, Unix & Linux, or some other Stack Exchange community.
1
6
0
So, I deleted a file using python. I can't find it in my recycling bin. Is there a way I can undo it or something. Thanks in advance. EDIT: I used os.remove. I have tried Recuva, but it doesn't seem to find anything. I have done a deep search.
Recovering a file deleted with python
1.2
0
0
10,817
43,419,795
2017-04-14T22:01:00.000
-1
0
1
0
python,tensorflow,installation,python-wheel
53,412,662
13
false
0
0
Run conda create -n tensorflow_gpuenv tensorflow-gpu, or type the command pip install c:.*.whl (the path to the downloaded wheel) in a command prompt (cmd).
4
26
1
I installed the new version python 3.6 with the anaconda package. However i am not able to install tensorflow. Always receive the error that tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. How can I install tensorflow on anaconda (python 3.6)?
how to install tensorflow on anaconda python 3.6
-0.015383
0
0
191,454
43,419,795
2017-04-14T22:01:00.000
0
0
1
0
python,tensorflow,installation,python-wheel
53,520,347
13
false
0
0
Uninstall Python 3.7 for Windows and install only Python 3.6.0; then you will no longer get the error message: import tensorflow as tf ModuleNotFoundError: No module named 'tensorflow'
4
26
1
I installed the new version python 3.6 with the anaconda package. However i am not able to install tensorflow. Always receive the error that tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. How can I install tensorflow on anaconda (python 3.6)?
how to install tensorflow on anaconda python 3.6
0
0
0
191,454
43,419,795
2017-04-14T22:01:00.000
1
0
1
0
python,tensorflow,installation,python-wheel
59,495,885
13
false
0
0
Well, conda install tensorflow worked perfectly for me!
4
26
1
I installed the new version python 3.6 with the anaconda package. However i am not able to install tensorflow. Always receive the error that tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. How can I install tensorflow on anaconda (python 3.6)?
how to install tensorflow on anaconda python 3.6
0.015383
0
0
191,454
43,419,795
2017-04-14T22:01:00.000
-1
0
1
0
python,tensorflow,installation,python-wheel
45,116,048
13
false
0
0
For Windows 10 with Anaconda 4.4 Python 3.6: 1st step) conda create -n tensorflow python=3.6 2nd step) activate tensorflow 3rd step) pip3 install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp36-cp36m-win_amd64.whl
4
26
1
I installed the new version python 3.6 with the anaconda package. However i am not able to install tensorflow. Always receive the error that tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. How can I install tensorflow on anaconda (python 3.6)?
how to install tensorflow on anaconda python 3.6
-0.015383
0
0
191,454
43,419,989
2017-04-14T22:24:00.000
1
0
1
1
python,python-2.7
43,420,091
1
true
0
0
If you go to the Cygwin site, you can find the answers to all of your questions. Cygwin provides a collection of tools that give functionality similar to a Linux distro on Windows. Cygwin also provides substantial POSIX API functionality. When programmers launch their python scripts using Cygwin, they are using the tools provided within the Cygwin library. To avoid spoon-feeding while still answering your question, go into Cygwin and test it for yourself. What happens when you enter that command within Cygwin? Once you see the result, if you have any other questions, comment them on here.
1
0
0
I am new to Python. I should clarify that I don't fully understand the relation between Cygwin and Python. I've seen tutorials where programmers launch a Python script in Cygwin with the following line: python "file path". I think that this line makes python build and run that script. My question is: is it possible to directly write "print ("Hello World")" in Cygwin? By the way, are the three arrows (>>>) used to designate a Cygwin shell input line? Many thanks in advance! Nicola
Executing Python code lines in Cygwin
1.2
0
0
102
43,420,985
2017-04-15T00:42:00.000
0
0
0
0
python,django,amazon-web-services,django-rest-framework,pip
60,883,062
2
false
0
0
I encountered this and found the reason. There were two different terminal tabs running the server. For testing reasons I had started the server in another tab. Django doesn't warn you in the second tab, so your requests are most probably going to the other tab running the server.
1
1
0
If I set ALLOWED_HOSTS = ['*'] I am able to make a succesfull call, however this seems dangerous and counterintuitive. When I set ALLOWED_HOSTS to the recommended string, it fails. How to fix this?
DisallowedHost error not going away when adding IP address to ALLOWED_HOSTS
0
0
0
409
43,421,508
2017-04-15T02:20:00.000
0
0
0
0
python,pandas,quadtree,r-tree
43,423,664
1
false
0
0
Yes, r-trees can be stored on disk easily. (It's much harder with KD-trees and quad-trees.) That is why the index is block-oriented: the block size is meant to be chosen to match your drive. I don't use pandas, and will not give a library recommendation.
1
0
1
I have a large dataset, 11 million rows, and I loaded the data into pandas. I want to then build a spatial index, like rtree or quad tree, but when I push it into memory, it consumes a ton of RAM along with the already reading the large file. To help reduce the memory footprint, I was thinking of trying to push the index to disk. Can you store the tree in a table? Or even a dataframe and store it in hdf table? Is there a better strategy? Thanks
Hdf5 and spatial indexes
0
0
0
245
43,424,090
2017-04-15T08:50:00.000
-1
1
0
0
caching,browser,static,python-requests
43,425,459
1
true
1
0
Even when files are cached, browsers still send requests to the server to find out whether there is new content to fetch. Note that the 304 response code comes from the server, telling the browser that the contents it has cached are still valid, so it doesn't have to download them again.
1
0
0
I have seen in Firebug that my browser sends requests even for all static files. This happened when I have enabled caching for static files. I also saw the server response with 304 status code. Now, my question: Why should the browser send requests for all static files when the cache is enabled? Is there a way that the browser does not send any request for static files until the expiration of the cache?
Why does browser send requests for static files?
1.2
0
1
149
43,425,621
2017-04-15T11:43:00.000
2
0
1
1
python,ubuntu,tensorflow,deep-learning
43,425,655
2
true
0
0
I think it's easier for you to use Ubuntu if you have the option. Getting the LAPACK and BLAS libraries from source is easier on Linux (though you can get precompiled packages for Windows). I prefer native pip, but on Windows, and for getting started, Anaconda should be the choice.
1
2
1
I am a newbie to TensorFlow (and the whole deep learning as well). I have a machine with dual boot, Windows 10 and Ubuntu 16. Under which OS should I install and run TensorFlow? Windows or Ubuntu? Also, what is the recommended Python environment? Anaconda or native pip?
Is it recommended to use TensorFlow under Ubuntu or under Windows?
1.2
0
0
8,589
43,427,707
2017-04-15T15:15:00.000
0
0
0
0
python,curve-fitting,gaussian
43,436,325
1
false
0
0
I have successfully used a genetic algorithm to search error space and find initial parameter estimates for scipy's Levenberg-Marquardt nonlinear solver. Recent versions of scipy include the Differential Evolution genetic algorithm, which is what I had been using. It will take a bit of experimenting on a previously fit data set to tune things like population size or genetic crossover strategy, but then the computer can find initial parameter estimates for you. It takes a while to run, so let it go overnight, but it can be automated.
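A minimal sketch of this approach, fitting a single Gaussian to synthetic noiseless data with scipy's differential_evolution (the bounds and seed here are illustrative choices, and the same idea extends to a sum of six Gaussians by widening the parameter vector):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic data: one Gaussian peak with known parameters.
x = np.linspace(-5, 5, 200)
true_amp, true_mu, true_sigma = 2.0, 1.0, 0.8
y = true_amp * np.exp(-((x - true_mu) ** 2) / (2 * true_sigma ** 2))

def sse(params):
    # Sum of squared errors between data and the Gaussian model.
    amp, mu, sigma = params
    model = amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return np.sum((y - model) ** 2)

# Bounds define the search space for the genetic algorithm.
bounds = [(0.1, 5.0), (-5.0, 5.0), (0.1, 3.0)]
result = differential_evolution(sse, bounds, seed=0)
print(result.x)  # close to [2.0, 1.0, 0.8]
```

The recovered parameters can then seed a local least-squares refinement.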
1
0
1
I was wondering if anyone could show me a way of fitting multiple Gaussian curves to a data set containing 6 peaks (the data comes from a diffraction pattern from a copper-gold alloy crystal). The way I have at the moment involves using multiple Gaussian equations added together, meaning I have to give multiple guesses of values when curve fitting.
Curve fitting a sum of Gaussian's to 6 peaks
0
0
0
523
43,429,169
2017-04-15T17:34:00.000
1
0
0
0
python,django,facebook,user-registration
43,429,232
3
false
1
0
If you are trying to develop a web application and use Facebook login, register as a Facebook developer (it is free) and then go through their instructions. It is completely self-explanatory and quite easy to follow. Visit developers.facebook.com for a clear explanation of how to do this.
1
2
0
My python version is 3.4.3 and my django version is 1.9.6 . I tried many django facebook registrations apps but almost all the versions are outdated. What I want is to allow users to login via facebook and check if their friends are registered to my website. Your help would be very much appreciated!
How can I add a facebook user login functionality
0.066568
0
0
62
43,430,790
2017-04-15T20:11:00.000
0
0
1
1
python
43,431,194
4
false
0
0
The -c option terminates the option list; the code is passed as a string literal and everything after it becomes arguments. According to the manual: "Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command)." It means that the name of the script in sys.argv[0] is replaced by '-c'. Running python -c "import sys; print(sys.argv)" 1 2 3 results in ['-c', '1', '2', '3']. A possible approach is to use the inspect module, for example python3 -c "import sys; import inspect; inspect.getsource(sys.modules[__name__])", but it raises TypeError because the __main__ module is a built-in one.
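The argv behaviour described above can be observed directly by launching a child interpreter; a small sketch:

```python
import subprocess
import sys

# Run python -c and inspect what the child process sees in sys.argv:
# the script name slot is replaced by '-c'.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv)", "1", "2", "3"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # ['-c', '1', '2', '3']
```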
2
1
0
When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string?
Capture the value of python -c "some code"
0
0
0
87
43,430,790
2017-04-15T20:11:00.000
0
0
1
1
python
43,437,447
4
false
0
0
This works: python -c "import sys; exec(sys.argv[1])" "print 'hello'" prints hello.
2
1
0
When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string?
Capture the value of python -c "some code"
0
0
0
87
43,432,038
2017-04-15T22:51:00.000
6
0
1
1
python,conda
43,432,128
1
true
0
0
Check your start menu, it should be there. Its a link named "Anaconda Prompt", that links to %windir%\system32\cmd.exe "/K" C:\...\Anaconda3\Scripts\activate.bat C:\...\Anaconda3, it's executed in C:\Users\...\AppData\Roaming\SPB_16.6
1
2
0
I am looking for the exe file for Anaconda Prompt. I am looking in C:\Anaconda3\Scripts and don't know what it's named.
Anaconda Prompt, where is the exe file saved on windows?
1.2
0
0
8,760
43,432,439
2017-04-16T00:09:00.000
1
0
1
0
python,algorithm
43,432,668
2
true
0
0
Use shuffle. It is fast enough and adequately chaotic for any practical purpose.
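As a quick check of feasibility at the question's scale, random.shuffle (a Fisher-Yates shuffle, O(n)) handles a million-element list in well under a second on typical hardware; a small sketch:

```python
import random
import time

data = list(range(1_100_000))  # roughly the size mentioned in the question
start = time.perf_counter()
random.shuffle(data)  # in-place Fisher-Yates shuffle, O(n)
elapsed = time.perf_counter() - start
print(f"shuffled {len(data)} elements in {elapsed:.2f}s")
```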
1
2
1
Is there any algorithm which provides a chaotic permutation of a given list in a reasonable time? I'm using Python, but I am concerned that the built-in shuffle function will not provide a good solution, due to the length of 1.1 million elements in the given list. I did some googling and did not find any useful results; I would be really surprised if there is anything like this, but I would really appreciate an answer.
Chaotic permutations
1.2
0
0
522
43,433,329
2017-04-16T03:12:00.000
3
0
1
0
python,jupyter-notebook,jupyter
43,436,983
1
false
0
0
Well, I shut down the server and restarted it, and now it works. I wish I knew what happened.
1
3
0
Not sure what's up, but I just noticed my anaconda based jupyter totally fails to render latex. I don't get an error, but if I put $x$ in a markdown cell, I get back $x$. Any suggestions on how to diagnose/fix?
Jupyter not rendering latex
0.53705
0
0
2,129
43,434,028
2017-04-16T05:27:00.000
-1
0
1
0
python,tensorflow,pip,32bit-64bit,python-module
64,636,321
5
false
0
0
There's not much you can do. I also had this issue. The best thing to do is to change your python path and install the packages on the 64-bit Python.
1
12
0
I have decided to learn genetic algorithms recently and I needed to install the Tensorflow package. Tensorflow runs on 64-bit Python only, so I installed Python 3.5.0 64-bit without uninstalling the 32-bit Python, because I was afraid of losing my packages on the 32-bit Python by uninstalling it. The problem is how I can force pip to install a package on my 64-bit Python version instead of the 32-bit version.
how to pip install 64 bit packages while having both 64 bit and 32 bit versions?
-0.039979
0
0
30,428
43,435,955
2017-04-16T09:56:00.000
0
0
0
1
python,file,hadoop,pyspark,bigdata
46,250,897
1
false
0
0
"How to handle files arriving at different times?" It doesn't matter unless your data is time-sensitive. If so, then your raw data should include the timestamp at which the record was written. "Should such large files be combined or processed separately?" Large, separate files are best. Take note of the HDFS block size; this size depends on your installation. "I want this solution to be implemented in python" You're welcome to use Spark Streaming to watch a directory for files, or Oozie+Spark to just schedule regular batches, but other tools are arguably simpler. Some you can research: Apache NiFi, Streamsets Data Collector, Apache Flume. Flume will require you to install agents on those 10 external servers. Each of the listed services can read data in near-real time, so you don't explicitly need 30-minute batches.
1
0
0
I have a scenario where text delimited files arrives from different servers(around 10) to hadoop system every 30 minutes. Each file has around 2.5 million records and may not arrive at the same time, I am looking for an approach where these file can be processed every 30 minutes. My questions are: How to handle files arriving at different times? I want the data to be aggregated across 10 files. Should such large files be combined or processed separately? I want this solution to be implemented in python but solutions using any tools/techniques in hadoop would be appreciated.
Processing Multiple files in hadoop python
0
0
0
212
43,438,653
2017-04-16T15:10:00.000
2
0
0
0
python
43,438,949
2
false
0
0
You can convert a list of integers to a list of floats with a list comprehension: [float(i) for i in values]. Another option is to convert the img variable, a numpy.ndarray, to another numpy.ndarray which contains float values: img = img.astype(float). After this assignment the result will contain float values.
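A minimal sketch of the astype approach, using a tiny hand-built array as a stand-in for the cv2.imread result:

```python
import numpy as np

# Stand-in for cv2.imread('img.png'): a 1x1 uint8 image with one pixel.
img = np.array([[[4, 2, 0]]], dtype=np.uint8)

img_float = img.astype(float)  # dtype becomes float64
print(img_float[0, 0])  # [4. 2. 0.]
```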
1
1
1
How can I convert an image to an array of float numbers? img = cv2.imread('img.png'), and now convert img to float so that print(img[0,0]) gives something like [ 4.0 2.0 0.0] instead of [4 2 0]. Do you have an idea? Thank you very much!
Convert image file to float array in Python
0.197375
0
0
27,136
43,439,038
2017-04-16T15:53:00.000
2
0
0
0
python-3.x,tkinter
43,439,285
1
false
0
1
You can use the canvas's .itemconfig() method to switch your line between state=HIDDEN and state=NORMAL.
1
0
0
Pretty much the subject line says it all. I just want to be able to turn off the line, undisplay it. I don't want to delete the line or the tag reference to the line. I want to use a checkbutton and once the line is drawn, done through a database, I want to be able to turn on and off the line with the checkbutton, without having to replace the tag in the line list everytime I turn the line back on, err in that case I would have to redraw the line from scratch. How do I turn I line off? I haven't tried but I don't think the disable feature is for the this purpose.
Turn off tkinter canvas line, but not delete the line
0.379949
0
0
141
43,439,795
2017-04-16T17:09:00.000
0
0
0
0
python,google-chrome,google-trends
43,440,788
1
false
0
0
I got this error because I accessed Google Trends far more frequently than most people due to my project being centered on Pytrend's API. Therefore, Google presumably thought I was a bot or that my activity was suspicious. I simply restarted my modem, and Google Trends, and hence, Pytrends, now both work. So simply restart your modem and possibly your computer as well just to be on the safe side. And it should work right after.
1
0
0
I've been using Google Trends for my project alongside a Python API called Pytrend. and it's due in soon. Today, just a couple of hours ago, suddenly, I'm unable to use Google Trends. Every time I search a word, I get the following error Oops! There was a problem displaying this page. Please try again Hence Pytrends doesn't work either. I read that it maybe due to an ad blocker preventing access to Google Trends, but my anti-virus Norton 360 allows Google Chrome, and I don't have any Google Chrome extension ad blocker either. Can someone please help provide a solution? I need one really soon. Many thanks.
Google Trends not working on my computer
0
0
1
1,009
43,440,821
2017-04-16T18:48:00.000
1
0
1
0
python,numpy,types
72,350,771
3
false
0
0
float32 is less accurate but faster than float64, and float64 is more accurate than float32 but consumes more memory. If accuracy is more important than speed, you can use float64, and if speed is more important than accuracy, you can use float32.
2
34
1
I want to understand the actual difference between float16 and float32 in terms of result precision. For instance, Numpy allows you to choose the range of the datatype you want (np.float16, np.float32, np.float64). My concern is: if I decide to go with float16 to conserve memory and avoid possible overflow, would that create a loss in the final results compared with float32, for instance? Thank you
The real difference between float32 and float64
0.066568
0
0
53,683
43,440,821
2017-04-16T18:48:00.000
37
0
1
0
python,numpy,types
52,804,163
3
false
0
0
float32 is a 32-bit number; float64 uses 64 bits. That means that float64 values take up twice as much memory, and doing operations on them may be a lot slower on some machine architectures. However, float64 values can represent numbers much more accurately than 32-bit floats. They also allow much larger numbers to be stored. For your Python/Numpy project, I'm sure you know the input variables and their nature. To make a decision, we as programmers need to ask ourselves: What kind of precision does my output need? Is speed not an issue at all? What precision is needed, in parts per million? A naive example: if I store the weather data of my city as [12.3, 14.5, 11.1, 9.9, 12.2, 8.2], the next day's predicted output could be 11.5 or 11.5164374. Do you think storing it as float32 instead of float64 would be a problem?
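The precision gap can be inspected directly via each type's machine epsilon and by storing a simple decimal in both types; a small sketch:

```python
import numpy as np

# Machine epsilon: smallest representable relative gap around 1.0.
eps32 = np.finfo(np.float32).eps  # ~1.19e-07
eps64 = np.finfo(np.float64).eps  # ~2.22e-16
print(eps32, eps64)

# Storing 0.1 in each type shows the accuracy difference directly
# (compared against Python's native 64-bit float).
err32 = abs(float(np.float32(0.1)) - 0.1)
err64 = abs(float(np.float64(0.1)) - 0.1)
print(err32, err64)
```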
2
34
1
I want to understand the actual difference between float16 and float32 in terms of the result precision. For instance, Numpy allows you to choose the range of the datatype you want (np.float16, np.float32, np.float64). My concern is that if I decide to go with float 16 to reserve memory and avoid possible overflow, would that create a loss of the final results comparing with float32 for instance? Thank you
The real difference between float32 and float64
1
0
0
53,683
43,441,827
2017-04-16T20:30:00.000
0
0
0
0
python,numpy,tensorflow,ipython,keras
43,813,452
1
false
0
0
You can use tf.contrib.Keras to get the same behavior.
1
0
1
Keras has model.predict which generates output predictions for the input samples. I am looking for this in tensorflow but cannot seem to find it or code it up.
How can I generate output predictions in Tensorflow just like model.predict in Keras?
0
0
0
288
43,444,327
2017-04-17T02:26:00.000
-1
0
1
0
attributes,pip,main,importerror,python-3.6
45,270,809
1
false
0
0
Check the Scripts directory in your virtual environment home directory. Is there a pip.exe in Scripts? It can be in the path but not installed in the environment itself. You can just copy one from another env Scripts, but make sure it's the right version of pip. DRY
1
1
0
I have been looking all over for a solution. I found one question asking something similar to this, but they were doing it inside of python(which I also tried) and the answers didn't work. So please don't be mad; nothing seems to work! I've been having a lot of troubles with pip. I am on windows 10, python 3.6. First it wouldn't let me install scipy no matter what I did, so I uninstalled it and reinstalled. get-pip.py wouldn't work, so I reinstalled python and had it put pip in my path. Now no matter what I try I get: ImportError: module 'pip' has no attribute 'main' Pip is in my path, just like it used to be. I was going to uninstall it again, but because it has no 'main' pip uninstall pip also says it does not have 'main'. I've tried to fix the path, reinstall pip, reinstall python, use easy_install instead, and I am stuck. I have no idea how to uninstall pip at this point. Sorry if this is basic! I am so lost. I would really appreciate some help! This is a great community! Thanks!! Let me know if you need more information
python 'pip' has no attribute 'main'
-0.197375
0
0
2,131
43,444,863
2017-04-17T03:49:00.000
0
0
1
0
python,python-idle
43,451,748
1
true
0
0
In Windows, at least, the console does not have a menu at the top. When you run interactive python, there is no menu. When you run a program that does not create a menu, there is no menu. When you run an IDE such as IDLE, the IDE puts its menu at the top. The particular menu entries you refer to occur on the IDLE menu. So start IDLE from an icon or from a console. For the latter, <python> -m idlelib should work, where <python> refers to a python binary. Some details depend on the OS.
1
0
0
I've installed and re-installed Python 3.6.1. I am not given the options at the top like "file, edit, shell, debug, etc" at the top. I have no idea what to do at this point as I've searched around for a while and haven't come up with a solution. Does anyone have any idea of what I should do about this?
My Python program doesn't have file, edit, shell, debug options
1.2
0
0
2,440
43,449,023
2017-04-17T09:45:00.000
4
0
1
0
python,python-2.7,matplotlib,plt
43,449,375
3
true
0
0
You want to type %matplotlib qt into your iPython console. This changes it for the session you're in only. To change it for the future, go Tools > Preferences, select iPython Console > Graphics, then set Graphics Backend to Qt4 or Qt5. This ought to work.
1
4
0
I am trying to show some plots using plt.show (). i get the plots shown on the IPython console, but I need to see each figure in a new window. What can I do ?
plt.show () does not open a new figure window
1.2
0
0
15,399
43,449,122
2017-04-17T09:52:00.000
3
0
1
1
python,cuda,tensorflow,installation
43,450,571
2
false
0
0
First copy the samples folder from the installation folder somewhere else, for example your home directory. Then navigate to the sample you wish to run, type make, and it should create an executable file. For example, in the folder samples/1_Utilities/deviceQuery you should get an executable named deviceQuery, which you can run with ./deviceQuery. Edit: I just noticed that you are more familiar with Python than C, so you should check out PyCUDA.
1
2
0
I'm installing CUDA 8.0 on my MacBook Pro running Sierra (by way of installing TensorFlow). Very new to GPU computing; I've only ever worked in Python at a very high level (lots of data analysis using numpy). Most of the language on the CUDA website assumes knowledge I don't have. Specifically, I have no idea how to 1) run the sample programs included in the Samples file, and 2) how to "change library pathnames in my .bashrc file" (I'm fairly sure I don't have a .bashrc file, just .bash_history and .bash_profile. How to I do the above? And are there any good ground-up references online for someone very new to all this?
How do I run the Sample files included in CUDA 8.0?
0.291313
0
0
7,147
43,449,169
2017-04-17T09:55:00.000
4
0
1
0
python
43,449,238
1
false
0
0
A csv file is a simple file type with flat data, separated by commas. Unlike an excel file, for example, it cannot contain multiple sheets. If you need multiple sheets, you will have to make multiple csv files.
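The "multiple files instead of multiple sheets" idea can be sketched with the standard library's csv module (the sheet names and rows here are made-up sample data):

```python
import csv
import os
import tempfile

# One CSV file per "sheet", since the CSV format has no sheet concept.
sheets = {
    "sales": [["month", "total"], ["Jan", 100], ["Feb", 120]],
    "costs": [["month", "total"], ["Jan", 80], ["Feb", 90]],
}

outdir = tempfile.mkdtemp()
for name, rows in sheets.items():
    path = os.path.join(outdir, f"{name}.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

print(sorted(os.listdir(outdir)))  # ['costs.csv', 'sales.csv']
```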
1
3
0
I created a csv file with single sheet. I want to know how to create csv file with multiple sheets using python language.
How to create multiple sheets in csv using python
0.664037
0
0
6,564
43,456,097
2017-04-17T17:15:00.000
3
0
1
0
python,ubuntu,ide,virtualenv,spyder
43,456,751
1
true
0
0
I figured out the issue. Seems that I was somehow running it from the wrong location, just had to run Spyder3 from the v-env bin folder.
1
2
0
I just recently started learning Ubuntu (17.04) and have managed to figure out how get plain Python3.5 + Spyder3 running, have created a virtual environment, and gotten the v-env running with Spyder by changing the interpreter setting within Spyder via pointing it to the virtual environment bin. However, I saw numerous other ways of installing Spyder, primarily via a pip3 install in the environment itself, but I found nothing as to how to actually RUN the pip-installed Spyder. Running "Spyder3" always runs the default install regardless of location. Does anyone know how to actually run it? I was curious because I figured it would allow for a similar functionality that Anaconda provides where you can simply select your virtual environment and run the respective Spyder version for it.
Running a pip-installed Spyder in virtual environment on Ubuntu without Anaconda?
1.2
0
0
1,124
43,456,232
2017-04-17T17:23:00.000
1
0
1
0
python,intellij-idea
43,456,673
2
false
0
0
The answer I got from a co-worker a couple of years ago is that os was originally a third-party package; IntelliJ left it where it is for some backward-compatibility issue.
1
2
0
I'm having trouble understanding Intellij's import policy for python for import os. As far as I know, the import order is supposed to be standard library first, then third party packages, then company packages, and finally intra-package or relative imports. For the most part Intellij orders everything correctly, but keeps pushing import os into third party packages. Am I missing smth? Isn't import os a standard library package?
Intellij keeps re-ordering my `import os`
0.099668
0
1
86
43,457,337
2017-04-17T18:33:00.000
4
0
1
0
css,ipython-notebook,jupyter-notebook,jupyter
43,461,264
2
true
1
0
I'm using Jupyter 5.0. Right now I've tried to edit custom.css, and the changes are reflected immediately after reloading a page, without restarting. I'm not sure about version 4.3, but I guess it should work the same way. Which property did you change?
2
4
0
I'm using Jupyer 4.3.0. I find that when I update my ~/.jupyter/custom/custom.css, the changes are not reflected in my notebook until I kill jupyter-notebook and start it again. This is annoying, so how can I make Jupyter Notebook recognize the custom.css file changes without completely restarting the notebook?
Jupyter reload custom.css?
1.2
0
0
643
43,457,337
2017-04-17T18:33:00.000
0
0
1
0
css,ipython-notebook,jupyter-notebook,jupyter
67,893,531
2
false
1
0
The /custom/custom.css stopped working for me when I generated a config file. If anyone stumbles onto this problem too, the solution is to uncomment the line c.NotebookApp.extra_static_paths = [] in the jupyter_notebook_config.py file and add "./custom/" (or whatever path you chose for your custom css) inside the brackets. P.S.: OS is Linux Manjaro 5.12 and Jupyter Notebook version is 6.3.0.
2
4
0
I'm using Jupyer 4.3.0. I find that when I update my ~/.jupyter/custom/custom.css, the changes are not reflected in my notebook until I kill jupyter-notebook and start it again. This is annoying, so how can I make Jupyter Notebook recognize the custom.css file changes without completely restarting the notebook?
Jupyter reload custom.css?
0
0
0
643
43,459,032
2017-04-17T20:24:00.000
0
0
0
0
python,numpy,image-processing,octave
43,459,094
2
false
0
0
What you have is a 3D array in octave. Here in the x-dimension you seem to have RGB values for each pixel and Y and Z dimension are the rows and columns respectively. However when you print it you will see all the values in the array and hence it looks like a 1D array.
1
0
1
I have to translate a code from Octave to Python, among many things the program does something like this: load_image = imread('image.bmp') which as you can see its a bitmap, then if I do size(load_image) that prints (1200,1600,3) which its ok, but, when I do: load_image it prints a one dimensional array, that does not make any sense to me, my question is how in Octave are these values interpreted because I have to load the same image in opencv and I couldn't find the way. thanks.
imread octave to imread cv2
0
0
0
201
43,461,581
2017-04-18T00:15:00.000
0
0
1
0
python,eclipse,pydev
43,470,039
1
false
0
0
Yes, you can change that in the annotations (i.e., Ctrl+3 > Annotations > Occurrences (PyDev)). As a note, if you're using LiClipse (which will manage the colors for you), you have to edit the theme (i.e., Ctrl+3 > Color Theme > Edit theme > occurrenceIndication), or just select a different theme.
1
0
0
whenever I write code with the PyDev Eclipse Python editor, it highlights the similar code in yellow. Is it possible to change the color it highlights with so it's easier to read?
Eclipse PyDev highlighting text makes it hard to read
0
0
0
96
43,466,412
2017-04-18T07:40:00.000
0
0
0
0
python,python-2.7
43,467,319
3
false
0
0
I agree with @anekix that using Twisted is the best way to do it. You need to know one thing: whichever method you use to send 10,000 HTTP requests, there is an open file descriptor limit, which is typically set to around 1000 on Linux. What this means is that you can have only about 1000 concurrent TCP connections. However, you can increase this limit via /etc/security/limits.conf on Linux.
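The descriptor limit mentioned above can be read from within Python on Unix-like systems; a small sketch:

```python
import resource

# Soft/hard limits on open file descriptors for this process (Unix only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# Each concurrent TCP connection consumes one descriptor, so the
# effective concurrency ceiling is roughly the soft limit.
```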
1
0
0
This sends around 5 requests per second to the server. I need to send around 40 requests per second. The server does not limit my requests (I have run 10 instances of this Python script and it has worked) and my internet does not limit the requests. It's my code which limits my requests per second. Is it possible to make my Python script send more requests?
What is the fastest way to send 10,000 HTTP requests in Python 2.7?
0
0
1
2,702
43,466,644
2017-04-18T07:54:00.000
-2
0
1
0
python,arrays
43,467,467
4
false
0
0
If your arrays are plain Python lists, simply define an empty list at the beginning and append each item to it: foo = [], then inside the loop for i in range(14): ... do foo.append(tab).
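If the per-iteration arrays are numpy arrays, the collected list can be stacked along a new third axis to get the (4, 3, 14) shape the question asks for; a small sketch with placeholder layer data:

```python
import numpy as np

# Fourteen (4, 3) arrays, e.g. one produced per loop iteration.
layers = [np.full((4, 3), i) for i in range(14)]

# Stack along a new third axis so each 2D array becomes one "layer".
volume = np.stack(layers, axis=2)
print(volume.shape)     # (4, 3, 14)
print(volume[0, 0, 5])  # 5 (the sixth layer)
```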
1
6
1
I have a project in which there is a for loop running about 14 times. In every iteration, a 2D array is created with this shape (4,3). I would like to concatenate those 2D arrays into one 3D array (with the shape of 4,3,14) so that every 2D array would be in different "layer". How should that be implemented in Python?
how to combine 2D arrays into a 3D array in python?
-0.099668
0
0
8,835
43,467,312
2017-04-18T08:29:00.000
0
0
0
0
python,gunicorn,spacy
65,805,224
4
false
1
0
One workaround is that you can load the spaCy pipeline beforehand, pickle (or use any comfortable way of serializing) the resulting object, and store it in a DB or on the file system. Each worker can then just fetch the serialized object and simply deserialize it.
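The caching pattern itself can be sketched with pickle; a plain dict stands in for the expensive-to-build pipeline object here (real spaCy objects also offer their own to_disk/from_disk serialization, so treat this purely as an illustration of the pattern, not as the spaCy-specific recipe):

```python
import pickle

# Hypothetical stand-in for an expensive-to-build pipeline object.
expensive_object = {"vocab": ["a", "b"], "weights": [0.1, 0.2]}

blob = pickle.dumps(expensive_object)  # build and serialize once
restored = pickle.loads(blob)          # each worker deserializes this
print(restored == expensive_object)    # True
```

Note that deserializing still materializes a full copy per worker, so this saves build time rather than resident memory.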
1
24
0
I have a Flask app, served by Nginx and Gunicorn with 3 workers. My Flask app is a API microservice designed for doing NLP stuff and I am using the spaCy library for it. My problem is that they are taking huge number of RAM as loading the spaCy pipeline spacy.load('en') is very memory-intensive and since I have 3 gunicorn workers, each will take about 400MB of RAM. My question is, is there a way to load the pipeline once and share it across all my gunicorn workers?
Sharing data across my gunicorn workers
0
0
0
5,127
43,470,642
2017-04-18T11:16:00.000
0
0
0
0
python-2.7,wit.ai
43,544,729
1
false
0
0
Since the new release of Messenger, you can convert speech into text, so if you are developing for Messenger or another app with good voice-to-text, you can rely on the app instead of trying to do it yourself. In the end you will still have just text inputs, but people will be able to convert their speech into text.
1
0
0
I'm trying to build a chat bot using wit.ai, which will recognize the speech and convert into text in chat bot. Is it possible with the GUI of wit.ai to make such kind of chat bot? I actually converted the voice into text, but facing difficulty to integrate the voice input with chat bot. How to do this?
how to make speech recognition chat bot using wit.ai?
0
0
0
771
43,473,556
2017-04-18T13:25:00.000
0
0
1
0
python,memory-management,anaconda,spyder
43,482,677
1
true
0
0
What is the effect in a script called something.py and executed as python something.py? Memory is unloaded after completion of execution. Please confirm. Yes, I think memory is freed after execution. What is the effect when I run something.py in say Anaconda Spyder. The spyder memory will not be unloaded unless I disconnect from the kernel. Is that a true statement? Spyder doesn't hold memory, only the IPython kernel associated with a console does. To free that memory, you need to restart the kernel.
1
0
1
Python 101 question. A pandas dataframe is created as: df1=pandas.DataFrame(data, index=index, columns=columns) # takes up say 100 MB memory now df2=df1 # will memory usage be doubled? What is the effect in a script called something.py and executed as python something.py? Memory is unloaded after completion of execution. Please confirm. What is the effect when I run something.py in say Anaconda Spyder. The spyder memory will not be unloaded unless I disconnect from the kernel. Is that a true statement? Thank you all for being patient with a Python newbie
Python memory use for DataFrames in python script and in Anaconda Spyder
1.2
0
0
844
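A small stdlib-only check of the df2 = df1 part of the question above: plain assignment binds a second name to the same object; it does not copy, so memory is not doubled. The same holds for a pandas DataFrame (use df1.copy() when you want an independent copy):

```python
# Stand-in for a large DataFrame; assignment semantics are the same
data = {"col": list(range(1000))}
alias = data            # no copy: both names refer to one object
assert alias is data    # identical object, not a duplicate
alias["col"].append(-1)
print(len(data["col"]))  # → 1001: the change is visible through both names
```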
43,475,468
2017-04-18T14:50:00.000
0
0
0
0
windows,python-3.x,udp,broadcast,blocked
43,671,319
2
true
0
0
The strange thing about my problem was that this exact code had worked on the computer in question (and two development computers) previously, but wasn't working at the time that I posted this question. Wireshark wasn't leading me to my answer (only showing me that the UDP packets were not sent), so I decided to ping the IP via the command prompt. I received one of two errors (destination host unreachable, or request timed out). These errors led me to adding my desired target IP (169.254.255.255) to the ARP cache, with which my problem was solved. I'd like to thank you for suggesting a possible solution.
1
1
0
OS: Windows 10 I use an Ethernet switch to send UDP packets to two other systems (connected directly to that same switch) simultaneously via Python 3.4.4. The same code works on two other dev/testing PC's so I know it's not the Python code, but for some reason it doesn't work on the PC that I want the system to be utilized by. When I use Wireshark to view the UDP traffic at 169.254.255.255 (the target IP for sending the UDP packets to), nothing appears. However, sending packets to 169.X.X.1 works. On the other hand, packets sent to 169.X.X.255 are sent, but I receive time-to-live exceeded messages in return. I am restricted to that target IP, so changing the IP is not a solution. I also have it sending on port 6000 (arbitrary), I have tried changing the port number to no avail. Also won't let me send to 169.254.255.1 I have the firewalls turned off. Thanks for your help.
Windows/Python UDP Broadcast Blocked No Firewall
1.2
0
1
3,390
43,478,908
2017-04-18T17:43:00.000
1
0
1
0
python,ipython-notebook,code-organization,project-organization
43,481,217
1
false
0
0
Well, I have this problem now and then when working with a big set of data. Complexity is something I learned to live with; sometimes it's hard to keep things simple. What I think helps me a lot is putting everything in a Git repository: if you manage it well and make frequent commits with well-written messages, you can track the transformations to your data easily. Every time I make some test, I create a new branch and do my work on it. If it gets to nowhere, I just go back to my master branch and keep working from there, but the work I've done is still available for reference if I need it. If it leads to something useful, I just merge it into my master branch and keep working on new tests, making new branches as needed. I don't think this answers all of your questions, and I also don't know if you already use some sort of version control in your notebooks, but it is something that helps me a lot and I really recommend it when using jupyter-notebooks.
1
1
1
Imagine that you working with a large dataset, distributed over a bunch of CSV files. You open an IPython notebook and explore stuff, do some transformations, reorder and clean up data. Then you start doing some experiments with the data, create some more notebooks and in the end find yourself heaped up with a bunch of different notebooks which have data transformation pipelines buried in them. How to organize data exploration/transformation/learning-from-it process in such a way, that: complexity doesn't blow, raising gradually; keep your codebase managable and navigable; be able to reproduce and adjust data transformation pipelines?
How to manage complexity while using IPython notebooks?
0.197375
0
0
161
43,479,947
2017-04-18T18:45:00.000
0
0
1
0
python,pyqt5
43,481,749
1
true
0
1
Resource files (.qrc) are used to hold static files such as icons, sounds, videos, etc. These are converted into Python code with the pyrcc4 or pyrcc5 command and are loaded into RAM, so they cannot be modified at runtime. In the case of C++ they are compiled into the executable. In both cases they cannot be modified.
1
0
0
When I run my program, it displays a UI with many things on, some of those 'things' are images, that change depending on a user input. The input is recorded before the pyqt program is ran, and the images change using a different script, which also runs before the pyqt program. But for some reason the resource file doesnt care what the images look like, and only displays the images that were there when the resource file was compiled. Any tips? Just looking for some commands or something that I don't know about.
PyQt5 How to update resource files after they've been compiled?
1.2
0
0
848
43,481,052
2017-04-18T19:54:00.000
0
0
1
0
python,json,python-2.7,file
43,495,253
2
false
0
0
I will post my own solution, since it works for me: every single Python script checks (before opening and writing the data file) whether a file called data_check exists. If so, the Python script does not try to read and write the file, and dismisses the data that was supposed to be written into the file. If not, the Python script creates the file data_check and then starts to read and write the data file. After the writing process is done, the file data_check is removed.
1
0
0
I use multiple python scripts that collect data and write it into one single json data file. It is not possible to combine the scripts. The writing process is fast and it happens often that errors occur (e.g. some chars at the end duplicate), which is fatal, especially since I am using json format. Is there a way to prevent a python script from writing into a file if other scripts are currently trying to write into it? (It would be absolutely ok if the data that the python script tries to write into the file gets lost, but it is important that the file syntax does not get somehow 'injured'.) Code Snippet: This opens the file and retrieves the data: data = json.loads(open("data.json").read()) This appends a new dictionary: data.append(new_dict) And the old file is overwritten: open("data.json","w").write( json.dumps(data) ) Info: data is a list which contains dicts. Operating System: The whole process takes place on a Linux server.
How to prevent multi python scripts to overwrite same file?
0
0
0
323
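A sketch of the lock-file idea from the accepted answer above. Note that a plain exists-then-create check leaves a race window between the two steps; `os.open` with `O_CREAT | O_EXCL` makes the creation atomic (the lock name follows the answer; the rest is illustrative):

```python
import os

LOCK = "data_check"

def try_acquire(lock_path):
    """Atomically create the lock file; return False if another script holds it."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError:  # raised as FileExistsError on Python 3 when the file exists
        return False
    os.close(fd)
    return True

def release(lock_path):
    os.remove(lock_path)

if try_acquire(LOCK):
    try:
        pass  # read data.json, append the new dict, rewrite it here
    finally:
        release(LOCK)  # always remove the lock, even on error
else:
    print("another writer is active; dropping this batch")
```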
43,483,717
2017-04-18T23:14:00.000
0
0
0
0
python,sas,regression,random-forest,dummy-variable
48,334,045
2
false
0
0
You may google the SAS/STAT manual / user guide. Check out any major regression procedure there that supports the CLASS statement. Underneath CLASS it details the Reference... option. They all detail how a design matrix is fashioned. The way you fed your 100 dummies must have been obvious enough to trigger JMP to roll them back into a temporary class variable that re-engineers back to one single variable. If you want to know how exactly JMP is triggered to roll back, go to the JMP website and open a technical support track. But mechanically I am confident this is what happens.
1
1
1
I am building a regression model with about 300 features using Python Sklearn. One of the features has over 100 categories and I end up having ~100 dummy columns for this feature. Now each of the dummy columns has its own coefficient, or a feature ranking score (if using Random Forest or xgb) - which is something I don't like. However, when I create the same model in SAS JMP, it gives me one single feature score for the feature with 100 categories - it apparently handles categories automatically. Can someone tell me how SAS JMP combines the coefficients/feature importances of 100 dummy variables into one metric, and how I can achieve the same in Python.
Combining effects of dummy variables in a regression model
0
0
0
284
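One way to get the single per-feature score the question asks about is to sum the importances of all dummy columns that came from the same original categorical, since tree-based importances of one-hot columns can be aggregated that way. A hedged sketch with made-up column names and scores:

```python
from collections import defaultdict

# Importances as produced by e.g. feature_importances_, keyed by dummy
# column names like "city_London" (names and values are illustrative)
importances = {
    "age": 0.30,
    "city_London": 0.10,
    "city_Paris": 0.25,
    "city_Tokyo": 0.05,
    "income": 0.30,
}

combined = defaultdict(float)
for col, score in importances.items():
    base = col.split("_", 1)[0]  # recover the original feature name
    combined[base] += score

print(dict(combined))  # the 3 "city" dummies collapse to one score
```

The same idea works for 100 dummies: the split rule just has to recover each dummy's parent feature, e.g. from the prefix `pd.get_dummies` puts on the column names.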
43,485,072
2017-04-19T02:05:00.000
1
0
0
0
javascript,node.js,python-3.x,web-scraping
43,485,536
1
true
1
0
If you want to write a web scraper that executes javascript, node.js (with something like Phantom.js) is a great choice. Another popular choice is Selenium. You would need to simulate user actions to activate event handlers. Let's call this action "scraping". BS4 would not be appropriate because it cannot execute javascript. Once you have your data saved to disk, displaying the results in HTML tabular form (let's call this action "reporting") would require yet another solution. Flask is a suitable choice. Since the scraping and reporting are separate concerns, no conflict would arise if you wanted to use the two services simultaneously. When using Selenium or node.js as a scraper, you aren't really running a web server. So it's incorrect to think of it as two web-servers in possible conflict.
1
0
0
I'm interested in trying a web scraping project. The target sites use Javascript to dynamically load and update content. Most of the discussion online concerning web scraping such sites indicates node.js, casper.js, phantom.js, and nightmare.js are all reasonably popular tools to use when attempting such a project. Node.js seems to be used most often. If I am running a Flask server and wish to display the results of a node.js, for example, scrape in tabular format on my site, is this possible? Will I run into compatibility issues? Or should I try to stick it out with a python-based approach to scraping like BS4 for the sake of consistency? I ask because node.js is described as a server, so I assume a conflict would arise if I tried to use it and Flask simultaneously.
Does running a Flask web server preclude web scraping in Node.JS?
1.2
0
1
160
43,485,569
2017-04-19T02:59:00.000
7
0
1
0
python,python-3.x,package,installation,anaconda
43,538,799
7
true
0
0
Probably due to the fact that you have multiple Python envs installed on your computer. When you do which python you will probably get the native Python installed on your computer, that is /usr/bin/python. You want to use the Python that came when you installed Anaconda. Just add the Anaconda path to the beginning of your $PATH. (To do this you probably need to edit your ~/.bashrc file (or the equivalent file for your shell), then source ~/.bashrc.) Next time you run python and import theano, it will succeed.
4
23
0
Forgive me but I'm new to python. I've installed a package (theano) using conda install theano, and when I type conda list, the package exists However, when I enter the python interpreter by running python, and try to import it with import theano, I get an error: "no module named theano", and when I list all python modules, theano doesn't exist. What am I missing?
Installed a package with Anaconda, can't import in Python
1.2
0
0
38,116
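A quick stdlib check for the "which Python am I actually running" diagnosis in the answer above: run this in the interpreter where the import fails and compare the paths against your Anaconda install location:

```python
import sys

print(sys.executable)        # path of the interpreter actually running
print(sys.version_info[:2])  # its Python version
for p in sys.path[:3]:       # first few places it looks for modules
    print(p)
```

If `sys.executable` points at `/usr/bin/python` instead of something under your Anaconda directory, the conda-installed package is simply invisible to that interpreter.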
43,485,569
2017-04-19T02:59:00.000
2
0
1
0
python,python-3.x,package,installation,anaconda
60,058,665
7
false
0
0
I had this problem and realised that the issue was that ipython and jupyter-notebook did not have the same sys.path as python, just in case that helps anyone.
4
23
0
Forgive me but I'm new to python. I've installed a package (theano) using conda install theano, and when I type conda list, the package exists However, when I enter the python interpreter by running python, and try to import it with import theano, I get an error: "no module named theano", and when I list all python modules, theano doesn't exist. What am I missing?
Installed a package with Anaconda, can't import in Python
0.057081
0
0
38,116
43,485,569
2017-04-19T02:59:00.000
2
0
1
0
python,python-3.x,package,installation,anaconda
43,499,897
7
false
0
0
Do you have another installation of Python on your system? You can run "which python" in your terminal to determine which Python will be used.
4
23
0
Forgive me but I'm new to python. I've installed a package (theano) using conda install theano, and when I type conda list, the package exists However, when I enter the python interpreter by running python, and try to import it with import theano, I get an error: "no module named theano", and when I list all python modules, theano doesn't exist. What am I missing?
Installed a package with Anaconda, can't import in Python
0.057081
0
0
38,116
43,485,569
2017-04-19T02:59:00.000
1
0
1
0
python,python-3.x,package,installation,anaconda
56,109,594
7
false
0
0
So I also had the same problem; it turned out that I had named my own file the same as the module name (graphviz) and Python tried to import that one instead... Took me a while before I figured that one out!
4
23
0
Forgive me but I'm new to python. I've installed a package (theano) using conda install theano, and when I type conda list, the package exists However, when I enter the python interpreter by running python, and try to import it with import theano, I get an error: "no module named theano", and when I list all python modules, theano doesn't exist. What am I missing?
Installed a package with Anaconda, can't import in Python
0.028564
0
0
38,116
43,485,972
2017-04-19T03:45:00.000
2
0
0
0
python,pandas
43,486,149
1
false
0
0
There are two ways to do it. 1) if you just want to change the display format, by using the {:,.0f} format to explicitly display (rounded) floating point values with no decimal numbers: pd.options.display.float_format = '{:,.0f}'.format 2) if you want to convert it to int : df.col = df.col.astype(int)
1
0
1
I have a pandas histogram that shows the frequency that specific years show up in a dataframe. The x axis contains 2006.0, 2006.5, 2007.0, 2007.5, etc. However I want my histogram x axis to only have 2006, 2007, etc. This will make my histogram clearer, especially since in my df I only have values for particular years in their integer form, not 2006.5, 2007.4, etc. How would I go about doing this?
How to show only years on x axis of pandas histogram?
0.379949
0
0
220
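The display-format option in the answer above boils down to formatting the year floats without a decimal part; a tiny stdlib illustration of both routes it mentions:

```python
years = [2006.0, 2007.0, 2008.0]  # integer-valued floats as stored in the df

# route 1: format floats with no decimals (what '{:,.0f}' does, minus the comma)
labels = ["{:.0f}".format(y) for y in years]
print(labels)  # → ['2006', '2007', '2008']

# route 2: convert to int (the astype(int) route)
as_ints = [int(y) for y in years]
print(as_ints)  # → [2006, 2007, 2008]
```

For the histogram itself, converting the column to int before plotting is usually the cleaner fix, since it also stops matplotlib from placing ticks at half-year positions.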
43,486,077
2017-04-19T03:56:00.000
5
0
1
0
python-3.x,python-imaging-library
68,526,524
2
false
0
1
It's simply: original_image = edited_image._image The im object gets passed into ImageDraw, then gets saved into the class variable _image. That's about it.
1
12
0
I am using PIL for my project and I have ImageDraw object. I want to get the image that is drawn on ImageDraw object. How do I get the image ?
How to get image from ImageDraw in PIL?
0.462117
0
0
9,781
43,490,495
2017-04-19T08:33:00.000
28
0
1
0
python,configuration,jupyter-notebook,jupyter,jupyterhub
49,305,034
4
false
0
0
Open the command line and enter jupyter notebook --NotebookApp.iopub_data_rate_limit=1e10 This should start jupyter with the increased data rate.
1
22
0
I want to start my notebooks with jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000 arguments. Where one could set it in JupyterHub?
How to set NotebookApp.iopub_data_rate_limit and others NotebookApp settings in JupyterHub?
1
0
0
64,716
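In JupyterHub the single-user server flags go into the spawner configuration rather than the command line; a hedged fragment for jupyterhub_config.py (attribute name per the JupyterHub Spawner config system; verify against your JupyterHub version):

```python
# jupyterhub_config.py
# extra arguments passed to each user's single-user notebook server
c.Spawner.args = ['--NotebookApp.iopub_data_rate_limit=10000000000']
```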
43,490,887
2017-04-19T08:52:00.000
0
1
0
0
python,audio,sample-rate
68,753,091
4
false
0
0
!pip install pydub from pydub.utils import mediainfo info=mediainfo("abc.wav") print(info)
1
11
0
I have over a thousand audio files, and I want to check if their sample rate is 16kHz. To do it manually would take me forever. Is there a way to check the sample rate using python?
Check audio's sample rate using python
0
0
0
19,259
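If the files are .wav and you'd rather avoid the pydub/ffmpeg dependency, the stdlib wave module reads the sample rate directly; a sketch that also batch-checks a folder against 16 kHz:

```python
import os
import wave

def sample_rate(path):
    """Return the sample rate (frames per second) of a .wav file."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate()

def check_dir(folder, expected=16000):
    """Return the names of .wav files whose sample rate differs from expected."""
    bad = []
    for name in os.listdir(folder):
        if name.lower().endswith(".wav"):
            if sample_rate(os.path.join(folder, name)) != expected:
                bad.append(name)
    return bad
```

For compressed formats (mp3, ogg, ...) wave won't help and the pydub mediainfo route from the answer is the simpler option.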
43,493,600
2017-04-19T10:50:00.000
1
1
0
0
python,proxy,anonymity
43,558,215
1
false
0
0
Launch a test site on the internet. It will perform only one operation: save the received request with all its headers into a database or a file. Each request should carry your signature so you can be sure it's your original request. Connect with your Python script, via the proxy being tested, to the site. Send all the headers you want to see on the other side. Check the data received - are there headers or data that can break your anonymity?
1
1
0
How to check if the proxy have a high anonymity level or a transparent level in Python? I am writing a script to sort the good proxies but I want to filter only the high anonymity ones (elite proxies).
How to check proxy's anonymity level?
0.197375
0
1
784
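The echo-server approach in the answer above ends with inspecting the headers that arrived at your own endpoint. A hedged pure-Python classifier over a captured header dict - the exact headers a given proxy adds vary, but these are the common giveaways:

```python
def anonymity_level(headers, real_ip):
    """Classify a proxy from the headers it forwarded to our test server."""
    h = {k.lower(): v for k, v in headers.items()}
    revealing = h.get("x-forwarded-for", "") + h.get("x-real-ip", "")
    if real_ip in revealing:
        return "transparent"  # your real IP leaks through
    if any(k in h for k in ("via", "x-forwarded-for", "proxy-connection")):
        return "anonymous"    # hides your IP but admits it's a proxy
    return "elite"            # indistinguishable from a direct client

print(anonymity_level({"Via": "1.1 squid"}, "1.2.3.4"))  # → anonymous
```

For the sorting script, keep only the proxies classified as "elite".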
43,499,164
2017-04-19T14:50:00.000
1
1
1
0
python-2.7,xlrd,pypy
43,505,371
2
false
0
0
You might need to install a new pip for PyPy as it has a different space for storing modules.
1
3
0
I am using PyPy 2.2.1 on Ubuntu 14.04. I want to use the xlrd module for my program but running the programming with pypy throws me an import error. How do I fix this?
PyPy: "ImportError: No module named xlrd"
0.099668
0
0
2,701
43,501,102
2017-04-19T16:19:00.000
2
0
1
0
python
57,392,720
7
false
0
0
Just open the Anaconda Prompt and use either of the commands below to install the package. This solved my issue. conda install -c plotly chart-studio or conda install -c plotly/label/test chart-studio
2
9
1
I am getting this error when I am trying to import "matplotlib.pyplot". I cant even install matplotlib.pyplot through conda install. It shows this: import matplotlib.pyplot Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'matplotlib.pyplot'
anaconda cannot import matplotlib.pyplot
0.057081
0
0
25,113
43,501,102
2017-04-19T16:19:00.000
0
0
1
0
python
70,727,396
7
false
0
0
Check if the ...../python3.x/site-packages is listed within sys.path. If not append it with sys.path.append('.....python3.8/site-packages')
2
9
1
I am getting this error when I am trying to import "matplotlib.pyplot". I cant even install matplotlib.pyplot through conda install. It shows this: import matplotlib.pyplot Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'matplotlib.pyplot'
anaconda cannot import matplotlib.pyplot
0
0
0
25,113
43,503,589
2017-04-19T18:35:00.000
1
0
0
0
python,mysql,django
43,504,942
2
false
1
0
You can use Redis, Celery, Python RQ, or RabbitMQ as a queue for distributed tasks (the chatting tasks) in your Django app, but this will increase the complexity of your project. I would recommend developing a Python-based multi-client chat server.
1
0
0
I have a django chatbot application on a WebFaction shared host. The idea is: the chatbot application simulates customer service by chatting with the customers. Basically the conversation is exchanged through the API using GET and POST: it first POSTs the input, then a GET calls the python file that SELECTs the input from the DB and processes it, then updates the database with the retrieved output. Finally a GET is used to fetch the output and display it. So far it is working for one user at a time; what I am considering now is that I want it to chat with multiple customers at the same time while isolating each user. Do I have to use Redis just for the chatting part? If yes, how can I merge it into my project? Or are there other solutions out there? I have developed it using: python3: for the chatbot code. Django: for the website. Mysql: for the database, which holds the knowledge base for the chatbot, such as a table that maps each input to its corresponding output. Thank you,
making a Django chatbot application interact with multiple users
0.099668
0
0
2,101
43,504,316
2017-04-19T19:19:00.000
0
0
0
0
python,automation,spss
43,531,337
1
false
0
0
Can't you just create a production mode job and invoke it in your job stream?
1
0
0
I currently have a SPSS stream that pulls information from a server and outputs a report to .xlsx. In order to trigger the stream I have to manually access SPSS and click on the "Run" button so the report is created. The python script would be included in the execution board in SPSS. I have unsuccessfully tried to create a python script that will generate this trigger and repeat the function over and over. I am running into the issue that the scripts I am trying to use require a prior action in order to trigger the automated click; the "Run" function would be the original action that would be repeated indefinitely. Can someone guide me on the right way to generate this script?
How can I automatically trigger the "Run" function in SPSS
0
0
0
101
43,513,662
2017-04-20T08:04:00.000
1
0
0
0
algorithm,python-3.x
62,819,510
1
false
0
0
Your data might not warrant a larger number of clusters. Run the algorithm for smaller values of k and note the total cost at the end of each run. Once this stops decreasing meaningfully, there is no need to increase k. It's called the elbow method; you can look it up.
1
1
1
I have data containing a mixture of numeric values and categorical values. I used K-prototype to cluster them. init = 'Huang' n_clusters = 50 max_iter = 100 kproto = kprototypes.KPrototypes(n_clusters=n_clusters,init=init,n_init=5,verbose=verbose) clusters = kproto.fit_predict(data_cats_matrix,categorical=categoricals_indicies) when I run the last code I'm getting an error as follows : ValueError: Clustering algorithm could not initialize. Consider assigning the initial clusters manually.
ValueError: Clustering algorithm could not initialize. Consider assigning the initial clusters manually
0.197375
0
0
2,181
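The elbow heuristic from the answer above can be sketched without any ML library: run the clustering for increasing k, record each run's total cost, and stop where the marginal improvement stalls (the threshold and cost values here are illustrative):

```python
def elbow(costs, min_gain=0.10):
    """Return the k (1-based) after which relative cost improvement stalls."""
    for k in range(1, len(costs)):
        prev, cur = costs[k - 1], costs[k]
        gain = (prev - cur) / prev  # relative improvement from adding a cluster
        if gain < min_gain:
            return k  # cluster k+1 barely helped; keep k
    return len(costs)

# made-up total clustering costs for k = 1..7
costs = [1000, 520, 300, 250, 240, 236, 233]
print(elbow(costs))  # → 4
```

With k-prototypes the cost is the `cost_` attribute after fitting; a much smaller k than 50 may also sidestep the initialization error, since init fails when a cluster can't be seeded from the data.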
43,514,106
2017-04-20T08:26:00.000
3
0
0
1
python,python-3.x,shell,unix,formatting
43,573,926
4
false
0
0
I have the same problem while using pandas. So if this is what you are trying to solve, I fixed mine by doing pd.set_option('display.width', pd.util.terminal.get_terminal_size()[0])
2
9
0
My Python 3.5.2 output in the terminal (on a mac) is limited to a width of ca. 80px, even if I increase the size of the terminal window. This narrow width causes a bunch of line breaks when outputting long arrays which is really a hassle. How do I tell python to use the full command line window width? For the record, i am not seeing this problem in any other program, for instance my c++ output looks just fine.
Python terminal output width
0.148885
0
0
8,727
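Since Python 3.3 the stdlib exposes the terminal size directly via shutil, which avoids the pd.util.terminal helper used in the answer above (that helper was later removed from pandas); a small sketch:

```python
import shutil

# falls back to (80, 24) when stdout is not attached to a terminal
cols, rows = shutil.get_terminal_size()
print(cols, rows)

# e.g. feed it to pandas' display width, as in the answer:
# pd.set_option('display.width', cols)
```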
43,514,106
2017-04-20T08:26:00.000
7
0
0
1
python,python-3.x,shell,unix,formatting
43,605,633
4
true
0
0
For numpy, it turns out you can enable the full output by setting np.set_printoptions(suppress=True,linewidth=np.nan,threshold=np.nan).
2
9
0
My Python 3.5.2 output in the terminal (on a mac) is limited to a width of ca. 80px, even if I increase the size of the terminal window. This narrow width causes a bunch of line breaks when outputting long arrays which is really a hassle. How do I tell python to use the full command line window width? For the record, i am not seeing this problem in any other program, for instance my c++ output looks just fine.
Python terminal output width
1.2
0
0
8,727
43,520,502
2017-04-20T13:09:00.000
0
1
0
0
php,python,wordpress,xml-rpc
43,566,010
2
false
0
0
Found another way, in case anyone else stumbles on this. Dump all the posts to a csv file and use a "csv to post" widget. I'm using Ultimate CSV Importer Free. Very simple.
1
1
0
I have about 30,000 very short posts to make in Wordpress. I built them using a SQLlite database and python script, and was uploading them via the wordpress_xmlrpc library in python. But my site goes down after 100-200 posts, because the server thinks this is a DoS attack. My server is a linux box to which I have SSH access. I'm looking for a way to easily upload the posts into Wordpress more directly, say by interacting directly with its database, or using a local process that happens directly on the server. Can anyone offer any ideas?
Uploading 30,000 posts into Wordpress without causing an XML-RPC crash
0
0
0
129
43,520,574
2017-04-20T13:12:00.000
0
1
0
1
pytest,python-appium,aws-device-farm
44,378,193
1
false
1
0
I work for the AWS Device Farm team. This seems like an old thread, but I will answer so that it is helpful to everyone in the future. Device Farm parses the tests in a random order. In the case of Appium Python it will be the order received from pytest --collect-only. This order may change across executions. The only way to guarantee an order right now is to wrap all the test calls in a new test which will be the only test called. Although not the prettiest solution, this is the only way to achieve this today. We are working on bringing more parity between your local environment and Device Farm in the coming weeks.
1
1
0
I'm using Appium-Python with AWS Device farm. And I noticed that AWS run my tests in a random order. Since my tests are part dependent I need to find a way to tell AWS to run my tests in a specific order. Any ideas about how can I accomplish that? Thanks
AWS Device Farm- Appium Python - Order of tests
0
0
0
251
43,520,772
2017-04-20T13:20:00.000
0
1
0
0
python,eclipse,pydev,code-completion
43,610,458
2
false
0
0
OK, I found some examples with pypredef yesterday, started with this solution, and so far it works well. The only thing you need to care about is the indentation. It looks like when you implement your .pypredef in Eclipse, you need to use 4 spaces as indentation instead of a tab.
1
0
0
I would like to get code completion in Eclipse with PyDev for the attributes of a class which is dynamically generated. Basically, I have a class which is defined by reading an XML document. Depending on what is written in this XML document, the class has different attributes dynamically defined (the XML tags). I would like to activate code completion after calling the constructor of the class in my code. The problem I see is that I have no control over the attributes of the class, which means: before running the code, I have no idea which attributes might be available. Does anyone have an idea? I tried to add the library to the Forced Builtins without success. Regards
Eclipse PyDev Code completion of dynamic class attributes?
0
0
0
323