Dataset schema (column name: type, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M

Each record below lists its field values one per line, in this column order.
44,706,706
2017-06-22T18:15:00.000
0
0
0
0
python,sql,pandas
44,707,209
3
false
0
0
Rather than using the pandas library, make a database connection directly (using psycopg2, pymysql, pyodbc, or other connector library as appropriate) and use Python's db-api to read and write rows concurrently, either one-by-one or in whatever size chunks you can handle.
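A minimal sketch of the chunked DB-API approach this answer describes, assuming a PostgreSQL source accessed with psycopg2; the connection parameters, table name, and output path are placeholders, not taken from the question:

```python
# Stream rows from the database in bounded chunks and write them straight to CSV.
import csv
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="secret")
cur = conn.cursor(name="big_read")   # named (server-side) cursor streams rows instead of loading them all
cur.execute("SELECT * FROM big_table")

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    while True:
        rows = cur.fetchmany(10000)  # pull one bounded chunk into memory at a time
        if not rows:
            break
        writer.writerows(rows)

cur.close()
conn.close()
```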
1
3
0
I am trying to retrieve a large amount of data (more than 7 million rows) from a database and save it as a flat file. The data is being retrieved with Python code (Python calls a stored procedure). But I am having a problem: the process eats up so much memory that the Unix machine kills it automatically. I am using read_sql_query to read the data and to_csv to write the flat file. So I wanted to ask if there is a way to solve this problem, maybe by reading only a few thousand rows at a time, saving them, and moving on to the next chunk. I even used the chunksize parameter as well, but it does not seem to resolve the issue. Any help or suggestion will be greatly appreciated.
Reading and writing large volume of data in Python
0
1
0
6,717
44,708,430
2017-06-22T20:03:00.000
0
1
0
1
python,google-cloud-platform,google-cloud-pubsub
64,239,668
2
false
1
0
This is how I usually do it: 1. I create a Python client class which publishes and subscribes with the topic, project and subscription used in the emulator. Note: you need to set PUBSUB_EMULATOR_HOST=localhost:8085 as an environment variable in your Python project. 2. I spin up a pubsub-emulator as a Docker container. Note: you need to set some environment variables, mount volumes and expose port 8085. Set the following environment variables for the container: PUBSUB_EMULATOR_HOST, PUBSUB_PROJECT_ID, PUBSUB_TOPIC_ID, PUBSUB_SUBSCRIPTION_ID. Then write whatever integration tests you want, using the publisher or subscriber from the client depending on your test requirements.
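A hedged sketch of starting the emulator from a Python test helper, assuming the gcloud SDK's Pub/Sub emulator is installed locally (a Docker container, as described above, would work the same way); the port, wait time, and helper names are illustrative:

```python
# Start/stop the Pub/Sub emulator around a test run and point clients at it.
import os
import subprocess
import time

def start_emulator():
    proc = subprocess.Popen(
        ["gcloud", "beta", "emulators", "pubsub", "start", "--host-port=localhost:8085"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"  # client libraries pick this up
    time.sleep(3)  # crude wait; a real fixture should poll the port until it accepts connections
    return proc

def stop_emulator(proc):
    proc.terminate()
    proc.wait()
```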
1
1
0
I'm working on a Flask API, one of whose endpoints receives a message and publishes it to PubSub. Currently, in order to test that endpoint, I have to manually spin up a PubSub emulator from the command line and keep it running during the test. It works just fine, but it wouldn't be ideal for automated tests. I wonder if anyone knows a way to spin up a test PubSub emulator from Python? Or does anyone have a better solution for testing such an API?
How to boot up a test pubsub emulator from python for automated testing
0
0
0
768
44,709,221
2017-06-22T20:54:00.000
1
1
1
0
python
44,709,283
1
true
0
0
Try a Python linter; it will do this for you, for example flake8. Install it with pip install flake8 and run it by calling flake8 in the root folder of your project.
1
3
0
The title says it all. I am cleaning up python scripts I have written. Sometimes in the writing of scripts I have tried out one library only to replace it, or not use it later. I would like to be able to check if a library which is imported is actually used within the script later. Ideally i would do this without having to comment out the import line and run it looking for errors each time. Does anyone know of a resource or script/library which checks for this? Or what about other tips for cleaning up script to share with others? Thanks, Mike
How can I check if an imported library is 'used' in a python script?
1.2
0
0
718
44,709,252
2017-06-22T20:55:00.000
1
0
1
0
python,intellij-idea,ide
49,150,590
2
false
1
0
Open the root folder and you should find the 64-bit version in it. You can then create a shortcut for that, and you should be fine.
1
2
0
I am trying to learn Python from an online course that I purchased. The instructor uses IntelliJ IDEA as an IDE and since I am trying to follow along I am trying to use it as well. When I download IntelliJ IDEA and then try to open it I receive the following error message: "No JVM Installation found. Please install a 32 bit JDK. If you already have a JDK installed, define a JAVA_Home variable in COmputer>System Properties>System Settings>Environmental Variables" I tried googling the issue and found my way to the Oracle website where they offered JDK downloads. However there was no 32bit windows version available. What is a JDK? Why do I need a JDK for Python development? How can I download a 32 bit version of JDK? What is an easy to learn IDE for Python? I've tried Visual Studio Code and PyCharm and have had issues with the set up of both and have not found many helpful articles/videos. Thank you!
Error Launching IDEA: No JVM Installation Found
0.099668
0
0
7,803
44,711,048
2017-06-22T23:57:00.000
0
0
1
0
python,string,utf-8,byte
44,711,115
2
false
0
0
You need to decode the byte data: byte_data.decode("utf-8")
1
1
0
I'm a Python 3 user and I'm now facing a problem with byte-to-string handling. First, I get data from a server as bytes. [Byte data]: b'\xaaD\x12\x1c+\x00\x00 \x18\x08\x00\x00\x88\xb4\xa2\x07\xf8\xaf\xb6\x19\x00\x00\x00\x00\x03Q\xfa3/\x00\x00\x00\x1d\x00\x00\x00\x86=\xbd\xc9~\x98uA>\xdf#=\x9a\xd8\xdb\x18\x1c_\x9c\xc1\xe4\xb4\xfc;' This data doesn't decode with string encodings such as utf-8 or unicode-escape. Does anyone know how to handle this data?
How to escape the string "\x0a\xfd\x ....." in python?
0
0
0
1,232
44,711,255
2017-06-23T00:26:00.000
0
0
1
1
python,pip
44,713,819
2
true
0
0
If you have already installed Python (whatever version), you can skip ahead to step 4 or step 6. 1. Download and install Python; the default installation is c:\python27. 2. Create a new system variable with name PYTHON_HOME and value c:\Python27 (or whatever your installation path was). 3. Find the system variable called Path, click Edit, and add the following text to the end of the variable value: ;%PYTHON_HOME%\;%PYTHON_HOME%\Scripts\ 4. Verify a successful environment variable update by opening a new command prompt window (important!) and typing python from any location. 5. Download get-pip.py to a folder on your computer, open a command prompt window, navigate to the folder containing get-pip.py, and run python get-pip.py. This will install pip. 6. Verify your pip installation: open a command prompt and type pip freeze (without quotation marks); if it shows something like antiorm==1.1.1 enum34==1.0 requests==2.3.0 virtualenv==1.11.6 then you were successful. If the above steps fail, update the environment variable: go to Control Panel\System and Security\System, select advanced system settings, then select environment variables and add c:\python27\scripts to the Path variable; then it will be fine. I have tested this successfully on my PC.
2
0
0
I am trying to install pip for python in windows 7. I installed it and I added "C:\PythonXX\Scripts" to the windows path variables. But, when I typed "pip" in the command prompt it shows that pip is not recognized as an internal or external command. Is there any way to figure out this problem?
Error found when installing pip on Windows
1.2
0
0
6,397
44,711,255
2017-06-23T00:26:00.000
-1
0
1
1
python,pip
44,713,170
2
false
0
0
I know it's a hassle to install pip on Windows. With the latest Python you don't need to install pip; it is now bundled and you can access it with python -m pip
2
0
0
I am trying to install pip for python in windows 7. I installed it and I added "C:\PythonXX\Scripts" to the windows path variables. But, when I typed "pip" in the command prompt it shows that pip is not recognized as an internal or external command. Is there any way to figure out this problem?
Error found when installing pip on Windows
-0.099668
0
0
6,397
44,711,267
2017-06-23T00:27:00.000
3
0
0
0
python,user-interface,tkinter,widget
44,711,332
1
true
0
1
You can't mix grid and pack with widgets that share the same parent. Why? Because grid will try to lay widgets out, possibly growing or shrinking widgets according to various options. Next, pack will try to do the same according to its rules. This may require that it change widget widths or heights. grid will see that widgets have changed size so it will try to rearrange the widgets according to its rules. pack will then notice that some widgets have changed size so it will rearrange the widgets according to its rules. grid will see that widgets have changed size so it will try to rearrange the widgets according to its rules. pack will then notice that some widgets have changed size so it will rearrange the widgets according to its rules. grid will see that ...
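For reference, the usual workaround is to give pack and grid different parents by wrapping each group of widgets in its own Frame. A minimal sketch, with illustrative widget names not taken from the question:

```python
import tkinter as tk

root = tk.Tk()

top = tk.Frame(root)
bottom = tk.Frame(root)
top.pack(side="top", fill="both", expand=True)     # only pack is used directly on root's children
bottom.pack(side="bottom", fill="x")

tk.Label(top, text="Name").grid(row=0, column=0)   # grid is fine inside `top`
tk.Entry(top).grid(row=0, column=1)

tk.Button(bottom, text="OK").pack(side="right")    # pack is fine inside `bottom`

root.mainloop()
```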
1
2
0
I have a program with some Label() widgets, some Button() widgets, some Text() widgets, and a few Entry() widgets. A couple of revisions ago, I didn't have the labels, and I had less Entry() widgets, and I mixed .pack() and .grid() as was convenient and I was fine. I had to do some refactoring, and added the extra widgets in the process - all the new things added used .grid(). Nothing about the other widgets changed. Now, I get an error along the lines of "unable to use grid in .", etc. (I can post the full error message if necessary). Why, and how can I fix this? (I can post code if necessary as well.)
Using .pack() and .grid() at the same time with tkinter
1.2
0
0
1,773
44,711,726
2017-06-23T01:31:00.000
0
0
0
0
python,tensorflow,installation
61,737,353
1
false
0
0
Posting my own answer (an alternative) here in case someone overlooked the comment: I forcibly deleted the package in python/lib/site-packages/ and reinstalled tensorflow-gpu, and it seems to work well. Though I solved the problem via this workaround, I would still like to know the root cause and a long-term fix for this.
1
3
1
I previously installed tensorflow-gpu 0.12.0rc0 with Winpython-3.5.2, and when I tried to upgrade or uninstall it to install the newer version using both the Winpython Control Panel and pip, I got the following error: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'c:\\users\\moliang\\downloads\\winpython-64bit-3.5.2.3qt5\\python-3.5.2.amd64\\lib\\site-packages\\tensorflow\\contrib\\session_bundle\\testdata\\saved_model_half_plus_two\\variables\\__pycache__\\__init__.cpython-35.pyc' I installed the tensorflow-gpu 0.12.0rc0 through Winpython-3.5.2 pip, and the __init__.cpython-35.pyc does exist at the correct directory. So I don't understand how this error could happen? And it prevents me from getting the new version.
Uninstall/upgrade tensorflow failed: __init__.cpython-35.pyc not found
0
0
0
508
44,711,871
2017-06-23T01:55:00.000
1
0
0
0
python,python-3.x,intellij-idea,ide
44,711,956
1
true
0
0
It is pre-compiled in some IDEs like PyDev but not in IDEA, you can add it manually if you want it. I also recommend you to use PyCharm instead of IDEA for python.
1
0
0
I am learning Python by watching youtube videos and also through an online course that I bought. In every video I watch, the first line of each file is: _author_='dev'. For some reason when I start a new file this does not come up. What does this mean and if it is an issue how do I correct it? FYI I am using IntelliJ IDEA as an IDE. Thank you!
script first line _author_="dev" does not show up
1.2
0
0
91
44,711,944
2017-06-23T02:05:00.000
0
0
1
0
python,python-2.7,anaconda,ubuntu-16.04,python-3.6
44,711,971
2
true
0
0
Anaconda2 (for Python 2.7.x) and Anaconda3 (for 3.5/3.6) can exist side by side. Edit: to use one as the default Python, just set the path (Windows Control Panel -> System -> Environment Variables, or .bashrc) to point to the one you want to use as the default. P.S. There is no need to include both on the path, and it's probably a bad idea.
1
1
0
I have installed python 3.6 but python 2.7.13 is default also I have anaconda for python 2.7.13. But I want anaconda for python 3.6 which is not default can I download it directly?
Is it necessary that default python should be python3.6 to download anaconda for python3.6?
1.2
0
0
389
44,713,145
2017-06-23T04:36:00.000
2
0
1
0
python-3.x
44,713,186
1
false
0
0
A module (module_name) can be one of these: a file; a folder; a folder with __init__.py inside of it; the __main__ module name (the file you just ran...). You can change your module name, or insert an object into sys.modules that will appear as a module. I hope this helps ...
1
0
0
What is the difference and relationship between module names and filenames in Python 3? Must a module name always be identical to the filename?
Python 3 Modules vs. filenames
0.379949
0
0
357
44,713,742
2017-06-23T05:32:00.000
0
0
1
0
python-2.7,python-multiprocessing
56,736,434
2
false
0
0
You can pass event.src_path to your main process as a file gets modified (handle it in the on_modified event of the event handler).
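The on_modified / event.src_path terminology appears to refer to the watchdog package; a minimal sketch under that assumption, with a hypothetical watched path:

```python
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class Handler(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            # hand the changed path to the main process, e.g. via a queue
            print("changed:", event.src_path)

observer = Observer()
observer.schedule(Handler(), path="/path/to/files", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```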
1
2
0
I have a set of 100 files which are created by an application. The files are updated dynamically, not in sequential order. Using Python, I am trying to read the files. However, I don't know which file is getting updated at what time, and I don't want to loop through each and every file every time to check which files were updated at that instant. I could create multiple processes/threads to signal to the main process which file got updated. Is there any other way, such as a file-update notification to the main Python process, so that only those files are read? Thanks.
Python trigger file change events
0
0
0
6,661
44,714,345
2017-06-23T06:20:00.000
2
0
0
0
python,sql,web,flask
44,715,054
1
true
1
0
This kind of data is called time series. There are specialized database engines for time series, but with a not-extreme volume of observations - (timestamp, wave heigh, wind, tide, which break it is) tuples - a SQL database will be perfectly fine. Try to model your data as a table in Postgres or MySQL. Start by making a table and manually inserting some fake data in a GUI client for your database. When it looks right, you have your schema. The corresponding CREATE TABLE statement is your DDL. You should be able to write SELECT queries against your table that yield the data you want to show on your webapp. If these queries are awkward, it's a sign that your schema needs revision. Save your DDL. It's (sort of) part of your source code. I imagine two tables: a listing of surf breaks, and a listing of observations. Each row in the listing of observations would reference the listing of surf breaks. If you're on a Mac, Sequel Pro is a decent tool for playing around with a MySQL database, and playing around is probably the best way to learn to use one. Next, try to insert data to the table from a Python script. Starting with fake data is fine, but mold your Python script to read from your upstream source (the result of scraping) and insert into the table. What does your scraping code output? Is it a function you can call? A CSV you can read? That'll dictate how this script works. It'll help if this import script is idempotent: you can run it multiple times and it won't make a mess by inserting duplicate rows. It'll also help if this is incremental: once your dataset grows large, it will be very expensive to recompute the whole thing. Try to deal with importing a specific interval at a time. A command-line tool is fine. You can specify the interval as a command-line argument, or figure out out from the current time. The general problem here, loading data from one system into another on a regular schedule, is called ETL. You have a very simple case of it, and can use very simple tools, but if you want to read about it, that's what it's called. If instead you could get a continuous stream of observations - say, straight from the sensors - you would have a streaming ingestion problem. You can use the Linux subsystem cron to make this script run on a schedule. You'll want to know whether it ran successfully - this opens a whole other can of worms about monitoring and alerting. There are various open-source systems that will let you emit metrics from your programs, basically a "hey, this happened" tick, see these metrics plotted on graphs, and ask to be emailed/texted/paged if something is happening too frequently or too infrequently. (These systems are, incidentally, one of the main applications of time-series databases). Don't get bogged down with this upfront, but keep it in mind. Statsd, Grafana, and Prometheus are some names to get you started Googling in this direction. You could also simply have your script send an email on success or failure, but people tend to start ignoring such emails. You'll have written some functions to interact with your database engine. Extract these in a Python module. This forms the basis of your Data Access Layer. Reuse it in your Flask application. This will be easiest if you keep all this stuff in the same Git repository. You can use your chosen database engine's Python client directly, or you can use an abstraction layer like SQLAlchemy. This decision is controversial and people will have opinions, but just pick one. 
Whatever database API you pick, please learn what a SQL injection attack is and how to use user-supplied data in queries without opening yourself up to SQL injection. Your database API's documentation should cover the latter. The / page of your Flask application will be based on a SQL query like SELECT * FROM surf_breaks. Render a link to the break-specific page for each one. You'll have another page like /breaks/n where n identifies a surf break (an integer that increments as you insert surf break rows is customary). This page will be based on a query like SELECT * FROM observations WHERE surf_break_id = n. In each case, you'll call functions in your Data Access Layer for a list of rows, and then in a template, iterate through those rows and render some HTML. There are various Javascript and Python graphing libraries you can feed this list of rows into and get graphs out of (client side or server side). If you're interested in something like a week-over-week change, you should be able to express that in one SQL query and get that dataset directly from the database engine. For performance, try not to get in a situation where more than one SQL query happens during a page load. By default, you'll be doing some unnecessary work by going back to the database and recomputing the page every time someone requests it. If this becomes a problem, you can add a reverse proxy cache in front of your Flask app. In your case this is easy, since nothing users do to the app cause its content to change. Simply invalidate the cache when you import new data.
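An illustrative sketch of the two-table layout and a parameterised query described above, using the standard-library sqlite3 module for brevity (the answer suggests Postgres or MySQL); the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect("surf.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS surf_breaks (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS observations (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    surf_break_id INTEGER NOT NULL REFERENCES surf_breaks(id),
    observed_at   TIMESTAMP NOT NULL,
    wave_height_m REAL,
    wind_kts      REAL,
    tide_m        REAL
);
""")

def observations_for_break(break_id):
    # placeholder binding avoids SQL injection
    cur = conn.execute(
        "SELECT observed_at, wave_height_m, wind_kts, tide_m "
        "FROM observations WHERE surf_break_id = ?", (break_id,))
    return cur.fetchall()
```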
1
0
0
I have a basic personal project website that I am looking to learn some web dev fundamentals with and database (SQL) fundamentals as well (If SQL is even the right technology to use??). I have the basic skeleton up and running but as I am new to this, I want to make sure I am doing it in the most efficient and "correct" way possible. Currently the site has a main index (landing) page and from there the user can select one of a few subpages. For the sake of understanding, each of these sub pages represents a different surf break and they each display relevant info about that particular break i.e. wave height, wind, tide. As I have already been able to successfully scrape this data, my main questions revolve around how would I go about inserting this data into a database for future use (historical graphs, trends)? How would I ensure data is added to this database in a continuous manner (once/day)? How would I use data that was scraped from an earlier time, say at noon, to be displayed/used at 12:05 PM rather than scraping it again? Any other tips, guidance, or resources you can point me to are much appreciated.
Flask website backend structure guidance assistance?
1.2
1
0
181
44,716,368
2017-06-23T08:14:00.000
3
0
0
0
python,machine-learning,scikit-learn
44,736,370
1
true
0
0
The NaNs are produced because the eigenvalues (self.lambdas_) of the input matrix are negative which provoke the ValueError as the square root does not operate with negative values. The issue might be overcome by setting KernelPCA(remove_zero_eig=True, ...) but such action would not preserve the original dimensionality of the data. Using this parameter is a last resort as the model's results may be skewed. Actually, it has been stated negative eigenvalues indicate a model misspecification, which is obviously bad. Possible solution for evading that fact without corroding the dimensionality of the data with remove_zero_eig parameter might be reducing the quantity of the original features, which are greatly correlated. Try to build the correlation matrix and see what those values are. Then, try to omit the redundant features and fit the KernelPCA() again.
1
2
1
After applying KernelPCA to my data and passing it to a classifier (SVC) I'm getting the following error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). and this warning while performing KernelPCA: RuntimeWarning: invalid value encountered in sqrt X_transformed = self.alphas_ * np.sqrt(self.lambdas_) Looking at the transformed data I've found several nan values. It makes no difference which kernel I'm using. I tried cosine, rbf and linear. But what's interesting: My original data only contains values between 0 and 1 (no inf or nan), it's scaled with MinMaxScaler Applying standard PCA works, which I thought to be the same as KernelPCA with linear kernel. Some more facts: My data is high dimensional ( > 8000 features) and mostly sparse. I'm using the newest version of scikit-learn, 18.2 Any idea how to overcome this and what could be the reason?
KernelPCA produces NaNs
1.2
0
0
622
44,716,408
2017-06-23T08:16:00.000
0
0
1
0
python,eclipse,documentation,pydev,docstring
44,779,789
2
false
0
0
Another coworker found the answer: Project Properties -> PyDev - PYTHONPATH -> Source Folders: Add Source Folder, then choose the folder where your code is. For some reason, my coworkers' Eclipse did this automatically while mine didn't.
2
0
0
I'm working with PyDev on Eclipse and for some reason it doesn't show docstring when I'm hovering over a function. What also doesn't work, is to jump into a function when pressing F3. Both features work on the computer of my coworker. We tried it for the same functions in the same project. We compared our settings in Preferences -> PyDev -> Editor -> Hover and they look alike. Both of us are using Eclipse 4.6.2 and PyDef 5.8. I really hope someone can help me because it is driving me nuts.
PyDev doesn't show docstring
0
0
0
280
44,716,408
2017-06-23T08:16:00.000
1
0
1
0
python,eclipse,documentation,pydev,docstring
44,719,860
2
false
0
0
My guess is that the project isn't properly configured in your machine (probably a misconfiguration in the source folders). Can you attach a screenshot showing the PyDev Package explorer for your project (expanding the project settings) and the editor where F3 is not working to better diagnose it? Also, do you have some error in your error log?
2
0
0
I'm working with PyDev on Eclipse and for some reason it doesn't show docstring when I'm hovering over a function. What also doesn't work, is to jump into a function when pressing F3. Both features work on the computer of my coworker. We tried it for the same functions in the same project. We compared our settings in Preferences -> PyDev -> Editor -> Hover and they look alike. Both of us are using Eclipse 4.6.2 and PyDef 5.8. I really hope someone can help me because it is driving me nuts.
PyDev doesn't show docstring
0.099668
0
0
280
44,718,379
2017-06-23T09:50:00.000
1
0
0
0
python,sql,postgresql,psycopg2
44,718,475
1
false
0
0
You simply update the cell with the value NULL in SQL: psycopg2 will write NULL to the database when you update your column with a Python None value.
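A minimal sketch of that update, with placeholder table and column names; psycopg2 maps a Python None parameter to SQL NULL:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE results SET computed_value = %s WHERE id = %s",
        (None, 42),          # None is written as NULL
    )
# the `with conn:` block commits on success
```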
1
0
0
I have a table in a PostgreSQL database. I'm writing data to this table (using some computation with Python and psycopg2 to write results down in a specific column in that table). I need to update some existing cell of that column. Till now, I was able either to delete the complete row before writing this single cell because all other cells on the row were also written back as the same time, or delete the entire column for the same reason. Now I can't do that anymore because that would mean long computation time to rebuild either the row or the column for only a few new values to be written in some cell. I know the update command. It works well for that. But, if I had existing values in some cells, and that a new computation gives me no more result for these cells, I would like to "clear" the existing values to keep the table up-to-date with the last computation I've done. Is there a simple way to do that ? update doesn't seems to work (it seems to keep the old values). I precise again I'm using psycopg2 to write things to my table.
Update field with no-value
0.197375
1
0
878
44,719,592
2017-06-23T10:49:00.000
0
0
1
0
macos,python-3.x,anaconda,jupyter
44,719,685
1
false
0
0
The OS on your MacBook Pro may be corrupted, try completely reformatting / resetting it to factory defaults.
1
0
0
I have just updated my MacBook Air 2014 with 4gb ram to a MacBook pro that is vastly more powerful, with 4* the ram & processing speed and installed the latest version of Anaconda, yet it is painfully slow. Even the simplest oneliners of code take some 30 seconds to run, and even simple things crash the notebook! It is almost unusable. I have checked and re-updated Anaconda and all the packages. I am running the latest OS on both laptops (and the MacBook air is faster!). I have also tried it on both Chrome and Safari - same issue. Any ideas what this could be?
Jupyter Notebook - incredibly slow code execution after the PC update
0
0
0
412
44,720,075
2017-06-23T11:14:00.000
3
0
1
1
python
44,720,171
1
true
0
0
You need to install the Python development package (which contains the C header files) for your OS (on Debian-based distros it's named 'pythonX.X-dev', where 'X.X' is the Python version).
1
3
0
I am trying to install mysqlclient, but I get this error message: _mysql.c:40:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Could anyone help me to resolve this?
fatal error: Python.h: No such file or directory error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
1.2
0
0
1,400
44,721,450
2017-06-23T12:27:00.000
0
0
1
0
python,eclipse,import,pip,docx
44,775,360
2
false
0
0
thanks for the reply. The actual problem was that I was using python 3.6 where eclipse only accepts python grammar versions up to 3.5. The docx package also only works with python 2.6, 2.7, 3.3, or 3.4 so I installed python 3.4 and docx is now working!
1
0
0
So basically I used pip to import the docx python package and it installed correctly, (verified by the freeze command). However I cannot import the package in eclipse. Through some serious effort I've noticed that I can import the package using the 32 bit IDLE shell whereas I cannot when using the 64 bit IDLE shell. My PC is 64 bit and so I do not why I cannot import a 32 bit package in eclipse, a problem I've never encountered before. Does anybody have any insights as to how I can import this package properly in eclipse? I'm sure there's a very reasonable cause and hopefully solution as to why this is happening and would really appreciate if anyone could help with this issue as I need to use this package for the specific project I aim to do. side note: I'm using python 3.6 if that's of any relevance
Eclipse cannot import already installed pip package
0
0
0
582
44,722,489
2017-06-23T13:20:00.000
0
0
1
0
python,python-3.x
44,722,649
1
false
0
0
I had a similar problem in another IDE, and I solved it by marking the folder I am importing from as a source folder.
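Two common ways to make the import in the question below resolve, shown as a hedged sketch based on the layout described there:

```python
# Option 1: run the file as a module from the project root, so the `bin` package is importable:
#   python3 -m bin.server.server
#
# Option 2: add the project root to sys.path before the import (inside bin/server/server.py):
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from bin.tool.tool import tool_class  # noqa: E402
```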
1
0
0
My project is structured like this:
project/
  bin/
    __init__.py
    server/
      __init__.py
      server.py
    tool/
      __init__.py
      tool.py
In bin/server/server.py, I have this import: from bin.tool.tool import tool_class. And when I run python3 bin/server/server.py, I get this error: ModuleNotFoundError: No module named 'bin'
ModuleNotFoundError for multi-level package directory?
0
0
0
108
44,723,464
2017-06-23T14:08:00.000
0
0
0
0
python,machine-learning,deep-learning,keras
45,011,256
1
true
0
0
The low accuracy was caused by a problem in the layers. I just modified my network and obtained 0.7496 accuracy.
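The (None, 2, 2, 10) label shape most likely came from attaching the 10-way output directly to VGG16's spatial feature map. A hedged Keras sketch of adding a Flatten plus Dense head so that labels of shape (None, 10) fit; the input size, layer widths, and training details are illustrative, not the asker's exact setup:

```python
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

base = VGG16(include_top=False, input_shape=(64, 64, 3), weights=None)  # input size illustrative
x = Flatten()(base.output)           # collapse the spatial feature map
x = Dense(256, activation="relu")(x)
out = Dense(10, activation="softmax")(x)  # 10 classes -> labels of shape (None, 10)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train_one_hot, ...)  # y_train_one_hot has shape (N, 10)
```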
1
1
1
I was trying to train CIFAR10 and MNIST dataset on VGG16 network. In my first attempt, I got an error which says shape of input_2 (labels) must be (None,2,2,10). What information does this structure hold in 2x2x10 array because I expect input_2 to have shape (None, 10) (There are 10 classes in both my datasets). I tried to expand dimensions of my labels from (None,10) to (None,2,2,10). But I am sure this is not the correct way to do it since I obtain a very low accuracy (around 0.09) (I am using keras, Python3.5)
VGG16 Training new dataset: Why VGG16 needs label to have shape (None,2,2,10) and how do I train mnist dataset with this network?
1.2
0
0
212
44,724,653
2017-06-23T15:06:00.000
1
0
1
0
python,git,pip
44,724,801
1
true
0
0
Yes. Dependent packages are looked up via said setup.py (setup_requires and install_requires).
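A minimal illustrative setup.py at the repository root, showing where install_requires lives; the package name, version, and dependency are hypothetical:

```python
from setuptools import find_packages, setup

setup(
    name="mypackage",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "requests>=2.0",   # runtime dependencies pip resolves when installing from git
    ],
)
```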
1
2
0
Does anyone know if it is required that a Python repository has setup.py script in root directory in order for pip install git+[git repo] to work? How does pip find dependent packages from the source code? Thanks!
requirement for "pip install git+[git repo]"?
1.2
0
0
1,028
44,727,134
2017-06-23T17:38:00.000
1
0
1
0
python,multithreading,multiprocessing
44,729,595
1
true
0
0
Python "threads" permit independent threads of execution, but typically do not permit concurrency because of the global interpreter lock: only one thread can really be running at a time. This may be the reason why you only get a speedup with multiple processes, which do not share a global interpreter lock and thus can run concurrently.
1
1
0
I parse a big source code directory (100k files). I traverse every line in every file and do some simple regex matching. I tried threading this task to multiple threads but didn't get any speedup. Only multiprocessing managed to cut the time by 70%. I'm aware of the GIL death grip, but aren't threads supposed to help with IO bound access? If the disk access is serial, how come several processes finish the job quicker?
python: disk-bound task, thread vs process
1.2
0
0
154
44,727,232
2017-06-23T17:45:00.000
0
0
1
0
python
57,961,296
9
false
0
0
This absolutely worked for me. I am using Windows 10 Professional edition and it took me almost 6 months to find this solution. Thanks to the suggestion made above; I followed it and it worked right away and smoothly. All I did was instruct the scheduler to run python.exe with my script as an argument, just as explained below. This is what I did: suppose the script you want to run is E:\My script.py. Instead of running the script directly, instruct the Task Scheduler to run python.exe with the script as an argument. For example: C:\Python27\ArcGIS10.2\python.exe "E:\My script.py". The location of python.exe depends on your install. If you don't know where it is, you can discover its location: copy and paste the following code into a new Python script, then execute the script. The script will print the location of python.exe as well as other information about your Python environment.
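The script referred to at the end of this answer is not included above; a minimal equivalent that prints the interpreter location is:

```python
import sys
print(sys.executable)   # full path of the running interpreter, e.g. C:\Python35\python.exe
```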
2
39
0
I already tried to convert my .py file into .exe file. Unfortunately, the .exe file gives problems; I believe this is because my code is fairly complicated. So, I am trying to schedule directly my .py file with Task Scheduler but every time I do it and then run it to see if works, a window pops up and asks me how I would like to open the program?-.- Does any of you know how I can successfully schedule my .py file with Task Scheduler? Please help, thanks Windows 10 Python 3.5.2
Scheduling a .py file on Task Scheduler in Windows 10
0
0
0
80,315
44,727,232
2017-06-23T17:45:00.000
1
0
1
0
python
44,728,388
9
false
0
0
The script you execute would be the exe found in your python directory ex) C:\Python27\python.exe The "argument" would be the path to your script ex) C:\Path\To\Script.py So think of it like this: you aren't executing your script technically as a scheduled task. You are executing the root python exe for your computer with your script being fed as a parameter.
2
39
0
I already tried to convert my .py file into .exe file. Unfortunately, the .exe file gives problems; I believe this is because my code is fairly complicated. So, I am trying to schedule directly my .py file with Task Scheduler but every time I do it and then run it to see if works, a window pops up and asks me how I would like to open the program?-.- Does any of you know how I can successfully schedule my .py file with Task Scheduler? Please help, thanks Windows 10 Python 3.5.2
Scheduling a .py file on Task Scheduler in Windows 10
0.022219
0
0
80,315
44,728,566
2017-06-23T19:19:00.000
2
1
0
0
python,iphone,location,twilio
44,731,799
2
false
0
0
I solved a similar problem by creating a basic webpage which uses the HTML5 geolocation function to get lat/lng of the phone. It then submits coordinates to a php script via AJAX. My server geocodes the employees location, calculates travelling time to next job and sends the customer an SMS with ETA information using the Twilio API. You could bypass Twilio altogether and get your server to make the request direct to your webhook, or even via the AJAX call if it's all on the same domain. All depends what you are trying to achieve I guess.
1
0
0
I have a webhook that handles any sms messages sent to my Twilio number. However, this webhook only works if there is text in the message (there will be a body in the GET request). Is it possible to parse a message if it is a location message? e.g. if I send my current location to my Twilio number and it redirects this message as a GET request to the webhook, could I possibly retrieve that location? This is what my webhook receives if I send my current location on an iPhone: at=info method=GET path="/sms/?ToCountry=US&MediaContentType0=text/x-vcard&ToState=NJ&SmsMessageSid=MMde62b3369705a8f65f18abe5b7387c2b&NumMedia=1&ToCity=NEWARK&FromZip=07920&SmsSid=MMde62b3369705a8f65f18abe5b7387c2b&FromState=NJ&SmsStatus=received&FromCity=SOMERVILLE&Body=&FromCountry=US&To=%2B18627019482&ToZip=07102&NumSegments=1&MessageSid=MMde62b3369705a8f65f18abe5b7387c2b&AccountSid=ACe72df68a68db79d9a4ac6248df6e981e&From=%2B19083925806&MediaUrl0=https://api.twilio.com/2010-04-01/Accounts/ACe72df68a68db79d9a4ac6248df6e981e/Messages/MMde62b3369705a8f65f18abe5b7387c2b/Media/MEcd56717ce17f3a320b06c4ee11df2243&ApiVersion=2010-04-01" For comparison, here's a normal text message: at=info method=GET path="/sms/?ToCountry=US&ToState=NJ&SmsMessageSid=SM4767dabb915fae749c7d5b59d6f655a2&NumMedia=0&ToCity=NEWARK&FromZip=07920&SmsSid=SM4767dabb915fae749c7d5b59d6f655a2&FromState=NJ&SmsStatus=received&FromCity=SOMERVILLE&Body=Denver+E+union&FromCountry=US&To=%2B18627019482&ToZip=07102&NumSegments=1&MessageSid=SM4767dabb915fae749c7d5b59d6f655a2&AccountSid=ACe72df68a68db79d9a4ac6248df6e981e&From=%2B19083925806&ApiVersion=2010-04-01" In the normal sms message, I can parse out the Body=Denver+E+union to get the message, but I'm not sure you could do anything with the content of the location message. If I can't get the location, what are some other easy ways I could send a parseable location?
iPhone "Send My Current Location" to Twilio
0.197375
0
1
525
44,728,629
2017-06-23T19:24:00.000
0
0
1
0
python
44,738,179
2
false
0
0
Then Cython is already installed. Go to your Python interpreter and type import cython and if there is no error you're fine.
1
0
0
I want to install Cython, and when I install it with pip it returns this error: Requirement already satisfied: cython in /usr/local/lib/python3.5/dist-packages. My pip version is pip 9.0.1 from /usr/local/lib/python3.5/dist-packages (python 3.5) and my Python version is Python 3.5.2
Requirement already satisfied: cython
0
0
0
1,779
44,733,040
2017-06-24T05:19:00.000
0
0
1
1
python,windows-7,contextmenu,python-idle
44,738,281
1
false
0
0
Eventually it became clear that it was launching IDLE, but IDLE was exiting as soon as it got to the point of waiting for user input. Windows CMD scripts do that sometimes when they aren't feeling conversational. I found the relevant script at F:\Python35\Lib\idlelib\Idle.bat. If you add a "/W" switch to the command, it waits for input. Problem solved, at least so far. The entire Idle.bat file now reads as follows; the only change was inserting "/W" in the main command line:
@echo off
rem Start IDLE using the appropriate Python interpreter
set CURRDIR=%~dp0
start "IDLE" "%CURRDIR%..\..\pythonw.exe" /W "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9
1
0
0
There are a lot of threads from Python users on Windows who lose the "Edit with IDLE" option on the context menu (right click on a .py file in the File Explorer). I do have the menu item, but most of the time it appears to do nothing. Checking the running applications and processes in Task Manager reveals nothing, except I think IDLE or its launcher or something runs very briefly, so quickly it usually never shows up in the Task Manager list. Thanks to all who suggested splitting this into question and answer. My solution (for now) will be posted next
Python's "EDIT with IDLE" runs but exits immediately (Win7)
0
0
0
152
44,734,164
2017-06-24T07:46:00.000
3
0
1
1
python-3.x,installation,ubuntu-16.04
44,744,454
2
false
0
0
In your terminal, type : which python3
2
3
0
I installed Python 3.6.1 on my Ubuntu 16 server and cannot find the install location. I have looked in /usr/bin and there are reference to all other versions except 3.6.1. Where can I find the executable for this version?
Python 3.6.1 install location
0.291313
0
0
6,396
44,734,164
2017-06-24T07:46:00.000
4
0
1
1
python-3.x,installation,ubuntu-16.04
44,734,300
2
true
0
0
Use command "whereis python3.6.1"
2
3
0
I installed Python 3.6.1 on my Ubuntu 16 server and cannot find the install location. I have looked in /usr/bin and there are reference to all other versions except 3.6.1. Where can I find the executable for this version?
Python 3.6.1 install location
1.2
0
0
6,396
44,734,534
2017-06-24T08:35:00.000
3
0
1
0
python,module,semantics
44,734,667
1
true
0
0
What's in a name? But while we are at it: 'library' is not a technical/syntactical term in Python, and there is no officially specified or mandatory way to organize the files and directories in your repository in order to call your project/package/module a library (while a package is basically a directory with an __init__.py and a module is a *.py file). But the term is widely used for modules and packages which provide a collection of features, functions, classes and so on for other Python projects and applications. You would call them a library as they are not intended to be stand-alone applications. E.g. Gentoo's portage is a Python application while urllib is a library. There are Python packages where this is not so clear, like nosetests: I implement some of their classes and methods as if it were a library, but then I call nosetests from the command line as if it were a stand-alone application. As @StefanPochmann pointed out, there is a thing called 'The Python Standard Library'.
1
1
0
Is it wrong to refer to Python modules and packages as "libraries"? Is there a technical difference? The Python documentation never mentions the word "library" in the packages or modules entries, but always refers to Python's built-in code as a "library".
Confusing semantics on Python module system
1.2
0
0
51
44,736,279
2017-06-24T12:16:00.000
0
0
0
0
python,kivy,kivy-language
44,750,212
2
false
0
1
Use GridLayout but populate it with BoxLayouts instead of buttons. Orient each of these BoxLayouts vertically and populate the buttons inside.
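A minimal sketch of that structure in Python (the column and button counts are illustrative):

```python
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout

class ColumnsApp(App):
    def build(self):
        grid = GridLayout(cols=3)                 # one cell per column of buttons
        for col in range(3):
            column = BoxLayout(orientation="vertical")
            for row in range(5):
                column.add_widget(Button(text="btn %d-%d" % (col, row)))
            grid.add_widget(column)
        return grid

if __name__ == "__main__":
    ColumnsApp().run()
```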
1
1
0
I'm working on a relatively simple layout in Kivy, and want to display a series of buttons which get populated top to bottom of the screen, then when they reach the bottom, a new column is started from the top. GridLayout appears to do what I want, but it always seems to go from left to right first, rather than top to bottom. I've check the official documentation and Google and can't seem to find a solution. StackLayout does what I want with the "orientation: "tb-lr" command, however the button widths don't get fully scaled to fit the container when there's only one column which GridLayout does do and is required for this application. Thanks for any help.
Kivy: GridLayout always goes left to right then down, can you go top to bottom then left to right?
0
0
0
1,475
44,737,529
2017-06-24T14:36:00.000
0
1
0
1
python,amazon-ec2,ansible
44,737,817
1
false
1
0
Looks like the problem was a temporary file in the hosts folder. After removing it the problems went away. It looks like std ansible behaviour: Pull in ALL files in the hosts folder.
1
0
0
I've seen a few posts on this topic with odd hard to reproduce behaviours. Here's a new set of data points. Currently the following works cd ./hosts ./ec2.py --profile=dev And this fails AWS_PROFILE=dev; ansible-playbook test.yml These were both working a couple days ago. Something in my environment changed. Still investigating. Any guesses? Error message: ERROR! The file ./hosts/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this withchmod -x ./hosts/ec2.py. ERROR! Inventory script (./hosts/ec2.py) had an execution error: ERROR: "Error connecting to AWS backend. You are not authorized to perform this operation.", while: getting EC2 instances ERROR! ./hosts/ec2.py:3: Error parsing host definition ''''': No closing quotation Note that the normal credentials error is: ERROR: "Error connecting to AWS backend. You are not authorized to perform this operation.", while: getting EC2 instances ... Hmmm. Error message has shifted. AWS_PROFILE=dev; ansible-playbook test.yml ERROR! ERROR! ./hosts/tmp:2: Expected key=value host variable assignment, got: {
Ansible ec2.py runs standalone but fails in playbook
0
0
0
1,445
44,740,161
2017-06-24T19:25:00.000
2
0
0
0
python-3.x,nlp,word2vec
44,740,700
1
true
0
0
If you are splitting each entry into a list of words, that's essentially 'tokenization'. Word2Vec just learns vectors for each word, not for each text example ('record') – so there's nothing to 'preserve', no vectors for the 45,000 records are ever created. But if there are 26,000 unique words among the records (after applying min_count), you will have 26,000 vectors at the end. Gensim's Doc2Vec (the ' Paragraph Vector' algorithm) can create a vector for each text example, so you may want to try that. If you only have word-vectors, one simplistic way to create a vector for a larger text is to just add all the individual word vectors together. Further options include choosing between using the unit-normed word-vectors or raw word-vectors of many magnitudes; whether to then unit-norm the sum; and whether to otherwise weight the words by any other importance factor (such as TF/IDF). Note that unless your documents are very long, this is a quite small training set for either Word2Vec or Doc2Vec.
1
0
1
I have 45000 text records in my dataframe. I wanted to convert those 45000 records into word vectors so that I can train a classifier on the word vector. I am not tokenizing the sentences. I just split the each entry into list of words. After training word2vec model with 300 features, the shape of the model resulted in only 26000. How can I preserve all of my 45000 records ? In the classifier model, I need all of those 45000 records, so that it can match 45000 output labels.
how to preserve number of records in word2vec?
1.2
0
0
277
44,741,368
2017-06-24T22:05:00.000
0
0
0
0
python,django,amazon-web-services,nginx,amazon-ec2
44,742,316
1
false
1
0
I'm pretty sure that you need to make this change wherever you host your domain. The only way I was able to do this with my personal server was to point to port 80 instead of 8000.
1
0
0
Ok so I'm hosting a Django EC2 instance right now using ngrok http 8000 and leaving it running. It's doing fine but a lot of browsers are blocking the traffic to my site. I need to make my reserved domain (I have some on Amazon and some on 1 and 1) to my 123.4.5.67:8000 public IPv4 IP or just my public IPv4 DNS on my EC2. What I need in a nutshell is example.com to redirect to 123.4.5.67:8000 while still saying example.com in the url. So far I have heard of Apache, WSGI, and nginx. None of them have worked for me, but maybe I haven't gotten the right direction. Please help!
How do I point my reserved domain to a Django EC2 instance?
0
0
0
47
44,741,587
2017-06-24T22:39:00.000
4
0
0
0
python,arrays,pandas,vector
51,706,173
3
false
0
0
Following on from VinceP's answer, to convert a datetime Series in-place do the following: df['Column_name']=df['Column_name'].astype(str)
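A small sketch of element-wise conversion that keeps the Series structure; the column name and format string are illustrative:

```python
import pandas as pd

df = pd.DataFrame({"timestamp": pd.to_datetime(["2017-06-24 22:39:00", "2017-06-25 02:35:00"])})

df["timestamp_str"] = df["timestamp"].astype(str)              # one string per element
df["timestamp_fmt"] = df["timestamp"].dt.strftime("%Y-%m-%d")  # or a chosen format
print(df.dtypes)
```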
1
32
1
I am new to python (coming from R), and I am trying to understand how I can convert a timestamp series in a pandas dataframe (in my case this is called df['timestamp']) into what I would call a string vector in R. is this possible? How would this be done? I tried df['timestamp'].apply('str'), but this seems to simply put the entire column df['timestamp'] into one long string. I'm looking to convert each element into a string and preserve the structure, so that it's still a vector (or maybe this a called an array?)
pandas timestamp series to string?
0.26052
0
0
102,326
44,742,686
2017-06-25T02:35:00.000
0
0
0
0
python,python-2.7,google-chrome,python-requests,request-headers
44,744,069
1
false
1
0
You're using two different libraries (Chrome's internal HTTP library and requests). It's very rare for two unrelated libraries to send the same set of headers, especially when one is from a browser. You could manually set those headers in requests, but I'm not sure what you're trying to do.
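If the goal is simply to send browser-like headers, they can be set explicitly; a hedged sketch with illustrative header values (not an exact Chrome fingerprint) and a placeholder URL:

```python
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Referer": "https://example.com/",
}
resp = requests.get("https://example.com/page", headers=headers)
print(resp.request.headers)   # headers that were actually sent
```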
1
0
0
Just wondering if anyone could explain why I navigate to a webpage using Chrome and the request headers include Accept, Accept-Encoding, Accept-Language, Connection, Cookie, Host, Referer, Upgrade-Insecure-Request, and User-Agent but when I make a request via Python and print request.headers it only returns Connection, Accept-Encoding, Accept, and User-Agent even if I set the User-Agent to the same one I see in Chrome. Also I'm wondering if it's possible to return those request headers I see in Chrome rather than those I see in Python. Thank you.
Python request.headers differs from what I see in Chrome
0
0
1
498
44,750,239
2017-06-25T20:18:00.000
1
0
1
0
performance,python-2.7,list,memory,initialization
44,752,492
1
false
0
0
List comprehensions provide a concise way to create lists. Common applications are making new lists where each element is the result of some operation applied to each member of another sequence or iterable, or creating a subsequence of those elements that satisfy a certain condition. The first method is slower than the second method because a no-op is performed in each loop iteration. P.S. If you want to initialise a list of lists with a constant value (k), then the ideal and fastest way would be to use numpy: np.ones((m, n))*k.
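A quick way to measure the two initialisations yourself with timeit (Python 2 syntax to match the question's xrange; exact numbers vary by machine):

```python
import timeit

m, n = 1000, 1000
t1 = timeit.timeit("[[0 for i in xrange(n)] for j in xrange(m)]",
                   setup="from __main__ import m, n", number=10)
t2 = timeit.timeit("[[0]*n for j in xrange(m)]",
                   setup="from __main__ import m, n", number=10)
print(t1)
print(t2)
```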
1
1
0
Is there any difference in initializing a list of lists by [[0 for i in xrange(n)] for j in xrange(m)] or [[0]*n for j in xrange(m)] From the point of view of time performance the first way is 4 times faster than the second way, and I am wondering whether the first way has any computational/memory use or allocation drawback.
Best way to initialize a list of lists
0.197375
0
0
369
44,750,685
2017-06-25T21:13:00.000
1
0
1
0
python,dictionary
44,750,817
1
false
0
0
I think dict is not the tool you should use. It is used to store a mapping. The amount of data is not the issue here, 300,000-400,000 items is fair but not huge (if your data is mainly text, your dicts'size would be less than the size of a 740p movie). But if your data should in the end be structured, in order to be queried and manipulated, then specific and really well-designed tools already exists out there. Specifically these two modules, both included with the anaconda installation : sqlite3 to store the data in a database, when the data already has a fixed schema pandas and its dataframes. It can handle data less structured than sqlite3, can read and write to its databases, and has an awful lot of great utility functions to do data cleaning. As you seem to be still unsure about the final schema of your data, I would go for pandas if I were you, but this is less simple than just dict
1
0
0
I'm new to python and working on a program as a useful tool for my work. I have a messy, massive amount of data from different sources, and it would save me an enormous amount of time if I could store the data sets as I collect them. So I'm looking to put it together for personal use as quickly as possible, but to continue working on it to improve the code and open it up for use to my colleagues once I can work out effective and secure data sharing. In all likelihood, the coding isn't going to start off very efficient. The program should write and read (i.e. search for objects) a dictionary of 6 arrays. Ideally, the program will also format and write the data to a fixed-layout document that may be printed. A quick estimate is that a "complete" dictionary product would have between 300,000-400,000 items. Considering the mutability of the dictionary and its size, is the best way to store the dictionary in json? And considering that anybody using the program would, in most instances, not be using a particularly high performing computer, would this overload the client? Input string: citation value source origin stem equivalence Desired output: ORIGIN1 Stem1 Source1: citation1 value1 equivalence1, citation3 value3 equivalence3; Source2: citation2 value2 equivalence2 Stem2 etc
Handling large dicts as main purpose of program
0.197375
0
0
63
44,753,724
2017-06-26T05:49:00.000
1
0
0
0
python,mysql,qt,pyqt
44,992,670
2
true
0
0
It says the driver is available, but you need to rebuild a new MySQL driver based on the Qt source code and the MySQL library.
1
0
0
I am trying to connect to a MySQL database using PyQt5 on Python 3.6 for 64-bit Windows. When I call QSqlDatabase.addDatabase('MYSQL') and run my utility, it shows up with this error message: QSqlDatabase: QMYSQL driver not loaded QSqlDatabase: available drivers: QSQLITE QMYSQL QMYSQL3 QODBC QODBC3 QP SQL QPSQL7 This confuses me since according to the error message, the QMYSQL driver is loaded. I installed PyQt through the default installer, so the MySQL plugin should be installed. Has anyone else experienced this problem or does someone know the cause of this?
PyQt QSqlDatabase: QMYSQL driver not loaded
1.2
1
0
1,820
44,756,118
2017-06-26T08:56:00.000
1
0
0
0
python,mysql,pymysql
44,758,048
1
false
0
0
I solved the problem myself... Because the connection is not committed automatically, after each SQL statement we should commit the changes. Approach 1: add conn.commit() after the cur.execute(). Approach 2: edit the connection config and add autocommit=True.
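A sketch of both approaches with placeholder connection parameters; note that commit() lives on the connection object in PyMySQL:

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user="pymysql", passwd="pymysql",
                       db="qiushibaike", charset="utf8")
# alternatively: pymysql.connect(..., autocommit=True)

cur = conn.cursor()
cur.execute("INSERT INTO qiushi (content, content_id) VALUES (%s, %s)", ("joke text", 123))
conn.commit()     # commit on the connection, not the cursor
cur.close()
conn.close()
```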
1
0
0
When I'm using pymysql to perform operations on MySQL database, it seems that all the operations are temporary and only visible to the pymysql connection, which means I can only see the changes through cur.execute('select * from qiushi') and once I cur.close() and conn.close() and log back in using pymysql, everything seems unchanged. However, when I'm looking at the incremental id numbers, it does increased, but I can't see the rows that were inserted from pymysql connection. It seems that they were automatically deleted?! Some of my code is here: import pymysql try: conn = pymysql.connect(host='127.0.0.1',port=3306,user='pymysql',passwd='pymysql',charset='utf8') cur = conn.cursor() #cur.execute('CREATE TABLE qiushi (id INT NOT NULL AUTO_INCREMENT, content_id BIGINT(10) NOT NULL, content VARCHAR(1000), created TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY(id));') #cur.execute('DESCRIBE content') #cur.fetchall() cur.execute('USE qiushibaike') for _ in range(0,len(content_ids)): cur.execute("INSERT INTO qiushi (content,content_id) VALUES (\"%s\",%d)"%(jokes[_],int(content_ids[_]))) finally: cur.close() conn.close()
Unable to INSERT with Pymysql (incremental id changes though)
0.197375
1
0
377
44,756,937
2017-06-26T09:46:00.000
0
0
0
1
python,azure-cosmosdb
44,772,468
2
false
0
0
DocumentDB only allows documents up to 2 MB to be sent/inserted. What's the size of your document, and how complex/nested is it?
1
0
0
I'm trying to send date to azure documentdb and I have to send a huge document (100 000+ lines), however when I send it I get a Request size is too large error. I guess it should be possible to change this request size limit (which should be stored in a variable somewhere) but I can't find it, does someone know this ? Thanks ! (I'm using pydocumentdb by the way)
How to change pydocumentdb request size limit?
0
0
0
85
44,757,705
2017-06-26T10:29:00.000
2
0
1
0
python,visual-studio
44,758,246
1
true
0
0
Python usually runs it in the command prompt. Please let me know if you need any more details or help.
1
2
0
I'm using Microsoft Visual Studio Code with its Python support on Windows 7. Now I can open an .py file and run it from menu Debug > Start Debugging. Is it possible to run it in command line?
Run python in command line in Microsoft Visual Studio Code?
1.2
0
0
3,215
44,758,375
2017-06-26T11:09:00.000
2
0
0
0
python,django,django-templates,django-views
44,759,403
1
true
1
0
In Django a "view" is a callable that is responsible for handling a request and returning a response, a template tag is a piece of code that will be executed in the context of rendering a template and will either push something in the template's context or render some text or markup. Oranges and apples, really, and it should be quite clear when you want a view and when you want a template tag. And yes, using inclusion tags (or a full-blown custom tag using a template to render some data) is "an accepted Django standard" - actually that's exactly what template tags are for.
1
2
0
I'm in the process of delving into Django into a little more depth - and I now have certain blocks around my website which are recycled, but not necessarily suited to a being placed in base.html and then sprinkled with {% extends /root/to/base.html %}. So, I have a bespoke widget I have created which is utilised on certain pages but in different configurations, is it best to register and inclusion tag and reference the template you want to accompany those stored variables and arrays/lists/dictionaries etc.. For me it seems easier to define tags and then dot these around where I need them and just make edits to the template that is registered with that tag method? But is this the accepted Django standard?
Best Django practise - when to use views and when to use tags
1.2
0
0
36
44,763,274
2017-06-26T15:31:00.000
0
0
1
0
python,anaconda
44,763,421
1
false
0
0
So python looks at the environmental variable PYTHONPATH to find the location of the Python packages. Change PYTHONPATH so that it points to where you have your non-Anaconda packages installed. PYTHONPATH is searched in order, so just make sure the location is before the Anaconda package install location. You can do this by doing something like PYTHONPATH="/new/location:${PYTHONPATH}" depending on the shell you're using. Alternatively, you can set PYTHONPATH within Python, so if you don't want to make the change permanent, you can do that.
1
0
0
I have anaconda python installed and it works great. However, occasionally I'd like to use my native python. If a run a file with /usr/bin/python file.py, any imports in the file are done from the anaconda package folder. Even if I run /usr/bin/python to drop my self in the python console, and then try import packagename, the package is imported from the anaconda folder on my machine. I verified this by typing help(packagename)and looking at the FILE attribute. How do I run my script using the native non-anaconda version of python and packages?
How to import non-anaconda packages
0
0
0
111
44,763,743
2017-06-26T15:55:00.000
0
0
0
0
python,machine-learning,nlp,text-classification
44,765,840
2
false
0
0
I'm currently working on something similar. Besides what @Joonatan Samuel suggested, I would encourage you to do careful preprocessing and consideration. If you want two or more tags per document, you could train several models: one model per tag. You need to consider whether there will be enough cases for each model (tag); if you have a lot of tags, you could run into a problem with too few document-tag cases, as above. Stick to predicting the most common tags; don't try to predict all tags.
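One common supervised setup for this, sketched with scikit-learn under the one-classifier-per-tag idea mentioned above; the documents, tags, and model choice are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = ["tagged document text ...", "another tagged document ..."]
tags = [["python", "nlp"], ["machine-learning"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                      # binary indicator matrix, one column per tag

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(docs, Y)

suggested = mlb.inverse_transform(clf.predict(["an untagged document ..."]))
print(suggested)                                 # suggested tags for the new document
```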
1
0
1
I have thousands of documents with associated tag information. However i also have many documents without tags. I want to train a model on the documents WITH tags and then apply the trained classifier to the UNTAGGED documents; the classifier will then suggest the most appropriate tags for each UNTAGGED document. I have done quite a lot of research and there doesn't seem to be a SUPERVISED implementation to document tag classification. I know NLTK, gensim, word2vec and other libraries will be useful for this problem. I will be coding the project in Python. Any help would be greatly appreciated.
supervised tag suggestion for documents
0
0
0
461
44,763,758
2017-06-26T15:56:00.000
0
0
1
0
python,python-3.x,psycopg2,psycopg
44,766,725
2
true
0
0
The point is that the values were edited with pgadmin3 (incorrectly; the correct way is shift+enter to add a new line). I asked the user to use phppgadmin (easier for him, since multiline fields are edited with a textarea control) and now everything is working properly. So psycopg2 works fine; I'm sorry I thought it was the culprit. He was typing literal \n sequences in order to insert new lines.
2
2
0
I have some records with \n in the data. When I do a SELECT query using psycopg2 the result comes back with the \n escaped, like this: \\n. I want the result to have a literal \n so that I can use splitlines().
I don't want psycopg2 to escape new line character (\n) in query result
1.2
1
0
972
44,763,758
2017-06-26T15:56:00.000
-1
0
1
0
python,python-3.x,psycopg2,psycopg
44,763,994
2
false
0
0
Try this: object.replace("\\n", r"\n") Hope this helped :)
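For reference, note that r"\n" is still a backslash followed by n; if the goal is a real newline so that splitlines() works, the replacement would look more like this sketch on a plain string:
raw = "first line\\nsecond line"   # what comes back: a literal backslash followed by n
fixed = raw.replace("\\n", "\n")   # turn the two-character sequence into a real newline
print(fixed.splitlines())          # ['first line', 'second line']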
2
2
0
I have some records with \n in the data. When I do a SELECT query using psycopg2 the result comes back with the \n escaped, like this: \\n. I want the result to have a literal \n so that I can use splitlines().
I don't want psycopg2 to escape new line character (\n) in query result
-0.099668
1
0
972
44,763,957
2017-06-26T16:07:00.000
0
0
1
0
string,python-3.x,pdf,docx,text-extraction
44,766,319
1
false
0
0
This is a pretty tough ask. What you are asking is not just a text search - that part would be relatively easy. But to find the page location you would have to work back from the string to its container, which would possibly be hierarchically nested - like a set of nested html elements - and calculate the position, then add your signature file at a specified location comfortably below that. It would be much easier to dictate where the signature is put - maybe have your stationery reserve a location for the signature in the margin, where you know the co-ordinates. As for the doc option - I think it's the same issue, just in another crazy file format.
1
0
0
I am currently working on a project where I have to place a digital signature (.JPG file) on a PDF file under the string, 'Comments'. I want to be able to find the coordinates or location somehow of the String, 'Comments' and then preferably place that signature.jpg several coordinates below it. Originally, the file is a (.doc) rather than a (.pdf). Would it be easier to find the location of the string, 'Comments' on the (.doc) first and then convert it to a (.PDF)? And if so, how would I be able to find the location of 'Comments' on the (.doc) file in Python? Thanks
Best tool for finding text location from .PDF or .DOCX in Python 3.4
0
0
0
426
44,765,124
2017-06-26T17:17:00.000
1
0
0
0
python-2.7,tkinter,py2app
44,765,155
1
true
0
1
The python quit() and exit() functions are meant to be used in the REPL only. You need to use sys.exit() or raise SystemExit.
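A minimal Python 2 / Tkinter sketch of what that looks like; the button label and callback name mirror the question and are otherwise arbitrary:
import sys
import Tkinter as tk  # Python 2, as in the question

def app_exit():
    root.destroy()   # tear down the Tk window
    sys.exit(0)      # terminate the process; this also works when frozen with py2app

root = tk.Tk()
tk.Button(root, text="Exit", command=app_exit).pack()
root.mainloop()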
1
0
0
I have imported sys. I have tried using sys.quit; again, it works when I run it as a python script but not as an app. I have also tried calling root.destroy() inside the app_exit function. Context: I have a button called exit which calls an app_exit function, which just calls the quit function.
quit() using Tkinter not working while executing as an app using py2app. However it works when I run it a python script
1.2
0
0
79
44,767,704
2017-06-26T19:56:00.000
1
0
1
0
python,python-2.7,inheritance
44,768,152
2
false
0
0
I may have misunderstood the question, but from what I understand, a housing has four (or some number of) instruments inside it. You would then want a class Housing and a class Instrument. A housing holds a list of instruments while each instrument is created with a reference to its housing. If an instrument has to do something special, you can inherit from Instrument and likewise for housings.
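A rough sketch of that composition; the class names follow the question, while the attributes, commands and return values are invented for illustration:
class Instrument(object):
    def __init__(self, housing, slot):
        self.housing = housing      # reference back to the housing it lives in
        self.slot = slot

    def measure(self):
        # instrument-specific operation that talks through the housing
        return self.housing.send("MEASURE?", self.slot)

class Housing(object):
    def __init__(self, address, n_instruments=4):
        self.address = address
        self.instruments = [Instrument(self, i) for i in range(n_instruments)]

    def send(self, command, slot):
        # housing-level communication; real code would talk to self.address here
        return "%s -> slot %d: %s" % (self.address, slot, command)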
1
0
0
This is mostly a program design question. I have multiple identical instrument housings each of which contains four identical instruments that I can communicate with remotely. I want to make an instrument housing class that contains: methods to communicate with each housing, methods to do housing-specific operations, and the housing attributes (addresses) necessary to use those methods with them. I additionally would like to make subclasses for the instruments themselves. These subclasses would have methods to do instrument-specific operations that call the superclass' methods and attributes to communicate through the housing. The problem with this design is that each housing would ultimately have five instances: one for its operations and one for each of its four instruments. Is it possible to create an instance of the housing class and then have subclasses inherit from the housing instance? Or am I thinking about this design in the wrong way (I'm relatively new to python)?
Python: Subclass of a class instance?
0.099668
0
0
1,898
44,769,832
2017-06-26T22:55:00.000
0
0
1
0
python-3.x,python-sphinx,namespace-package
44,790,172
1
false
0
0
Looking at the output of "sphinx-quickstart" showed me the 3 steps to generate documentation: "sphinx-quickstart" to create an initial directory structure with conf.py and index.rst "sphinx-apidoc" to generate *.rst files, which can also be adapted further "make html" or "sphinx-build" or "python setup.py build_sphinx" or "devpi upload --with-docs" to generate HTML from the *.rst files So "sphinx-apidoc" is not implicitly called by "python setup.py build_sphinx", but both must be called one after the other.
1
0
0
sphinx-apidoc supports the option --implicit-namespaces to handle namespace packages according to PEP420. When I create the Sphinx documentation with "python setup.py build_sphinx", this does not work with namespace packages by default. Is there a relation between "python setup.py build_sphinx" and sphinx-apidoc (e. g. is sphinx-apidoc implicitly called somewhere, when "python setup.py build_sphinx" is run?)? If so, can I specifiy somehow that "python setup.py build_sphinx" shall consider the --implicit-namespaces option of sphinx-apidoc?
"python setup.py build_sphinx" and "sphinx-apidoc --implicit-namespaces"
0
0
0
875
44,771,725
2017-06-27T03:13:00.000
0
0
0
0
python,python-3.x,pyqt,pyqt4
44,775,948
1
false
0
1
This is a pretty broad question. I recommend checking out the many tutorials on Youtube.com. However, in your init method, put something like this: self.ui.charge_codes_combo.currentIndexChanged.connect(self.setup_payments) In my example, the combo box was placed on a form in Qt Designer. self.setup_payments is a method triggered by the change in the combo box. I hope this helps!
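A self-contained PyQt4 sketch of the same pattern (the question is tagged pyqt4); the widget contents and the slot name are invented:
import sys
from PyQt4 import QtGui

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.combo = QtGui.QComboBox(self)
        self.combo.addItems(["first", "second", "third"])
        # fire a method every time the selection changes
        self.combo.currentIndexChanged.connect(self.on_choice_changed)

    def on_choice_changed(self, index):
        print("combo box is now at index %d" % index)

app = QtGui.QApplication(sys.argv)
w = Window()
w.show()
sys.exit(app.exec_())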
1
0
0
Exactly how do I utilize the various event methods that widgets have? Say I have a comboBox(drop down list) and I want to initiate a function every time someone changes the choice. There is the changeEvent() method in the documentation but It would be great if someone explains to me with a piece of code.
How to use pyqt widget event() method?
0
0
0
102
44,772,007
2017-06-27T03:54:00.000
6
1
1
1
python,path,relative-path,absolute-path,pwd
44,772,227
2
false
0
0
The biggest consideration is probably portability. If you move your code to a different computer and you need to access some other file, where will that other file be? If it will be in the same location relative to your program, use a relative address. If it will be in the same absolute location, use an absolute address.
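A tiny sketch showing that both forms resolve to the same file, with the relative form depending on the PWD; the docs/stuff.txt path is just the question's example:
import os
print(os.getcwd())                                    # the PWD that relative paths resolve against
print(os.path.exists("docs/stuff.txt"))               # relative: resolved against the PWD
print(os.path.exists(os.path.join(os.getcwd(), "docs/stuff.txt")))  # same file, spelled absolutely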
1
13
0
For reference. The absolute path is the full path to some place on your computer. The relative path is the path to some file with respect to your current working directory (PWD). For example: Absolute path: C:/users/admin/docs/stuff.txt If my PWD is C:/users/admin/, then the relative path to stuff.txt would be: docs/stuff.txt Note, PWD + relative path = absolute path. Cool, awesome. Now, I wrote some scripts which check if a file exists. os.chdir("C:/users/admin/docs") os.path.exists("stuff.txt") This returns TRUE if stuff.txt exists and it works. Now, instead if I write, os.path.exists("C:/users/admin/docs/stuff.txt") This also returns TRUE. Is there a definite time when we need to use one over the other? Is there a methodology for how python looks for paths? Does it try one first then the other? Thanks!
When to use Absolute Path vs Relative Path in Python
1
0
0
29,634
44,774,814
2017-06-27T07:45:00.000
0
0
1
0
python,openerp,odoo-8,odoo-9,odoo-10
44,778,932
1
false
1
0
This depends on the language the user has chosen. Under Settings/Translations/Languages (in newer versions of Odoo you first need to activate developer mode to see this menu) choose a language and look at "Thousands Separator" and "Decimal Separator". For example, using German the situation is the other way around, because we use a comma as the decimal separator and a dot as the thousands separator. So there is nothing to program here; it's just a configuration issue.
1
0
0
I noticed that whenever you type comma in integer field, odoo (or python) automatically removes that comma and merges numbers (for example whenever you type 1,3 it will become 13). If I type 1.3 or 1;3 etc. everything is fine. Probably I could do something like @api.constrains for a field, but how I could fix this for whole system? Thank you for your time considering my question.
How to prevent user from saving comma at integer field in whole Odoo system?
0
0
0
741
44,776,765
2017-06-27T09:29:00.000
2
0
0
0
python,svn,cmd,buildbot
44,778,975
1
false
0
0
You shouldn't have anything "pop up" if you're using the correct tools for automating your process. You're probably using TortoiseProc.exe, which is advertised as not being suitable for unattended/automated usage. TortoiseProc.exe is not the replacement for svn.exe client. The command-line client svn is the correct way to automate Subversion client tasks (svn update, svn commit, etc.). Or if you're already working within a Python script, try one of the Python libraries for Subversion.
1
1
0
I created a Buildbot system and am using SVN with it and would like to do updates to my SVN without having to click the confirmations for when the update finished and committing version file. The buildbot system would just take in a command that would allow it to do all confirmation of updates and committing continuously without having said windows pop up. Is there an SVN command that will allow me to confirm and skip windows (like Confirm Update! and Committing Version to File) that pop up?
Is there an SVN command that will allow me to confirm and skip windows (like Confirm Update! and Committing Version to File) that pop up?
0.379949
0
0
160
44,776,786
2017-06-27T09:30:00.000
0
0
0
0
python,arrays,numpy,scikit-learn
44,777,825
1
true
0
0
n_values should only contain domain sizes for categorical values completely skipping out the non-categorical columns in the data matrix. Therefore if [True, False, True] format is used, the size should correspond to the number of True values in the array or if indices are used the two arrays should be of the same size. So there should be no None values but also no 0s, -1s or any other ways to encode real-valued variables in the n_values array.
1
0
1
I am passing in a hardcoded list / tuple (tried both) when initialising the OneHotEncoder and I get this error during fit_transform , not using numpy types anywhere (well except for the data matrix itself). The only thing is that some of the values in that array are None because I am also using categorical_features to specify a mask (as in some of the features are real-valued and I want them to stay real-valued. My n_values looks like [1, 2, 3, None, 5] or (1, 2, 3, None, 5) and my categorical_features looks like [0, 1, 2, 4] though I have also tried: [True, True, True, False, True]. The documentation does not present any actual examples with the mask on. EDIT: So, I tried replacing None with zeroes and this issue went away but now I get: ValueError: Shape mismatch: if n_values is an array, it has to be of shape (n_features,). Whether I wrap my mask array with np.array or not (and when I do the shape is indeed the same as (n_features,)) I get this same error (though interestingly it does not complain about it being a numpy array anymore as long as there are no None values in it.
sklearn::TypeError: Wrong type for parameter `n_values`. Expected 'auto', int or array of ints, got
1.2
0
0
1,561
44,780,195
2017-06-27T12:21:00.000
1
0
0
0
python,numpy,scipy,numpy-einsum
45,102,844
2
false
0
0
First, why do you need B to be 2-dim? Why not just np.einsum('ab , b -> a', A, B)? Now the actual question: it's not exactly what you want, but by using smart choices for A and B you can make this visible, e.g. A = [[1,10],[100,1000]] and B = [1,2], which gives np.einsum('ab , b -> a', A, B) = [21, 2100], and it's quite obvious what has happened. More general versions are a little bit more complicated (but hopefully not necessary). The idea is to use different powers of primes (2 and 5 are especially useful, as they align to easily readable numbers in the decimal system). In case you want to sum over more than one dimension you might consider taking primes (2, 3, 5, 7, etc.) and then converting the result into another number system. If you sum over two dims, use (2,3,5) and a 30-ary system; over 3 dims, use (2,3,5,7) and a 210-ary system.
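Concretely, running the example values above:
import numpy as np

A = np.array([[1, 10], [100, 1000]])
B = np.array([1, 2])
# first entry is 1*1 + 10*2 = 21, second is 100*1 + 1000*2 = 2100
print(np.einsum('ab,b->a', A, B))   # [  21 2100]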
1
1
1
I was planning to teach np.einsum to colleagues, by hoping to show how it would be reduced to multiplications and summations. So, instead of numerical data, I thought to use alphabet chars. in the arrays. Say, we have A (2X2) as [['a', 'b'], ['c', 'd']] and B (2X1) as [['e'], ['f']] We could use einsum to create a matrix C, say like: np.einsum('ab , bc -> ac', A, B). What I'd like to see is: it return the computation graph: something like: a*c + ..., etc. Ofcourse, np.einsum expects numerical data and would give an error if given the above code to run.
Generating np.einsum evaluation graph
0.099668
0
0
193
44,781,243
2017-06-27T13:13:00.000
1
0
1
0
python,regex,python-2.x
44,781,642
4
false
0
0
Regular expressions may not be the best solution. Here is one algorithm: Make a dictionary of your target word with each letter being a key and the value being the quantity of that letter in the word, e.g. for string, the key:value pair for s would be {'s': 1}. For each word you want to test, check that every letter is in the dictionary AND that the letter counts do not exceed the counts in the target word.
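A sketch of that idea using collections.Counter instead of a hand-rolled dictionary, with the word list from the question:
from collections import Counter

target = Counter(u"string")
words = [u"strings", u"string", u"str", u"ing", u"in", u"ins", u"rs", u"stress"]

def fits(word, target=target):
    counts = Counter(word)
    # every letter must exist in the target and not be used more often than there
    return all(target[ch] >= n for ch, n in counts.items())

print([w for w in words if fits(w)])   # ['string', 'str', 'ing', 'in', 'ins', 'rs']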
1
0
0
I am new to Python re, but I need help. I searched here, Google and the documentation, but nothing worked. So here is what I am trying to do. I have a word (for example) "string", then I have a word list: strings, string, str, ing, in, ins, rs, stress. And I want matches like: string, str, ing, in, ins, rs. I don't want to match: stress, strings (because there are 2x s, and in the word string there is only 1). Simply match only the words whose letters are in the word string. Sorry for the bad English and if I didn't explain well enough. Also, some letters are unicode.
Python re match only letters from word
0.049958
0
0
1,439
44,782,916
2017-06-27T14:29:00.000
1
0
0
0
python,scikit-learn
44,783,384
1
false
0
0
It is not an error message, it is simply a warning that the module cross_validation has been moved from sklearn.cross_validation to sklearn.model_selection. It is not a problem at all. If you are still eager to fix it, then you should find out which snippet of code tries to import sklearn.cross_validation and change it to sklearn.model_selection. If you check both sklearn.cross_validation and sklearn.model_selection, you will see that they contain the same methods. Again, it is not an error.
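If you just want the message gone rather than fixing the import, a rough sketch with the standard warnings module is below; it is deliberately blunt (it hides every DeprecationWarning), and depending on how the library manages its own warning filters you may need to scope it differently:
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)  # blunt: silences all DeprecationWarnings
import tensorflow as tf   # and whatever module drags in sklearn.cross_validation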
1
0
1
Every time I run a tensorflow file on terminal, this warning pops up before the file runs. I have checked my version of sklearn and it is 0.18.1. How do you make this message to not appear? Thank you. anaconda2/envs/tensorflow/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. "This module will be removed in 0.20.", DeprecationWarning)
sklearn warning message whenever I run tensorflow on terminal
0.197375
0
0
187
44,783,953
2017-06-27T15:15:00.000
2
1
0
0
python,macos,selenium,automated-tests
44,784,204
2
false
0
0
Run the tests in a virtual machine. They will appear in the window you've logged into the VM with, which you can put anywhere you like or minimize/iconify and get on with your work. (the actual solution I used at work was to hire a junior test engineer to run and expand our Selenium tests, but that doesn't always apply)
1
2
0
I'm running automated tests with Selenium and Python on Macbook and two monitors. The big issue with the tests is that the tests kept appearing wherever I was working. E.g. the test start on monitor A and I was googling or reporting bugs on monitor B. When the test teardown and setup again, but on monitor B. It's very frustrating and restricting me from doing work when the tests are running. I am looking for solutions that can command the tests to stay in one place or on one monitor.
How to command automated tests to stay at one place of the monitor
0.197375
0
1
43
44,784,860
2017-06-27T15:57:00.000
-2
0
1
0
python,multithreading,linux-kernel,python-multithreading
52,535,497
2
false
0
0
Just do the following: t = threading.Thread(name='my_thread')
1
16
0
I would like to set the title of a thread (the title seen in ps or top)in Python in order to make it visible to process tracer programs. All the threads of a process are always called python or called file name when /usr/bin/python is used and the script is called via ./script. Now, I want to change each thread's name. I have a simple script with 4 threads (incl main thread). I use threading to start the threads. Is there a way that I could achieve this without having to install third-party stuff? Any guidance is appreciated.
Is there a way to set title/name of a thread in Python?
-0.197375
0
0
19,619
44,786,571
2017-06-27T17:37:00.000
0
1
1
0
python,floating-point,boolean,precision,abstract-data-type
44,786,761
1
false
0
0
I think he meant that you should not do calculations with floats that carry a long string of digits after the comma, as they will overflow and give you some kind of rounding. If you calculate with integers instead, you won't get that, and you can shift the comma later on. As Artyr already mentioned, there is no unsigned integer in Python; that's more of a C/C++ thing. I don't really know what an object's precision should be.
1
1
0
According to my professor, it's boolean, unsigned integer, integer, float, complex, string, and object. But why? How is a float less precise than an integer (for example)? Is it to do with the operations that can be performed on a given item (i.e. the more specific the things that can be performed, the more 'precise' the type is)? I really have nothing more to add since I really have no idea, so any hints are appreciated!
order of data types from greatest to least precision in python?
0
0
0
102
44,787,713
2017-06-27T18:47:00.000
0
1
0
0
python,twilio
44,792,253
1
true
0
0
I think your best bet is to purchase a phone number that is not MMS compatible.
1
1
0
Is there a way to disable MMS on a number from Twilio? I only see articles on disabling SMS entirely, and I want to disable MMS as there are additional charges and is not being used.
Disabling MMS on Twilio
1.2
0
0
244
44,788,533
2017-06-27T19:40:00.000
0
0
1
0
python,regex,str-replace
44,788,966
1
false
0
0
A state code always contains 2 uppercase characters, so you can use this pattern to do your replacement. Match this: ([A-Z]{2}). and replace with this: $1
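In Python's re module the backreference is \1 rather than $1; a sketch over the example sentence, combined with the no./acct. replacements the question mentions:
import re

s = "Mr. J. Edgar Hoover from CA. owes us $123.45 from acct. no. 98765."
s = s.replace(" no. ", " # ").replace("acct.", "acct")
s = re.sub(r'\b([A-Z]{2})\.', r'\1', s)   # CA. -> CA, AL. -> AL, ...
print(s)   # Mr. J. Edgar Hoover from CA owes us $123.45 from acct # 98765.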
1
0
0
I have sentences with state codes followed by a . (ie. "CA.", "AL.", but also good "CA", "AL") or things like "acct." or "no." I'd like to: 1. remove those "." 2. keep other "." 3. change no. to # For example, I'd like: "Mr. J. Edgar Hoover from CA. owes us $123.45 from acct. no. 98765." to become "Mr. J. Edgar Hoover from CA owes us $123.45 from acct # 98765." Changing " no." to " #" and "acct." to "acct" is easily done with regex or replace and I could do that first to get those out of the way. (I'm open to other efficient approaches). But how do I change state code . to state code and keep the right state code? Thanks!
Python remove . after state
0
0
0
59
44,788,862
2017-06-27T19:58:00.000
0
0
1
0
python,python-2.7,docx,python-docx
44,789,853
2
false
0
0
A Word document produced by python-docx will preserve any headers and footers present in the "template" document. So the way to get those is to create a "starting" document that has the headers and footers you're after. A header or footer can contain a page number field, which is automatically updated with the current page number at display and/or print time. This is added using the Insert > Field > Page Number menu option in Word. Different Word versions do this slightly differently, but this should get you close enough to find it on your version. Otherwise a search on "Insert page number Word 2013" with your version will find you many resources.
1
2
0
I think this question is pretty self explanatory. From what I've read of the python-docx documentation, it seems that the header and footer must be exactly the same on every page, which of course makes adding page numbers difficult. Is this possible?
How can I add page numbers to each page's footer with python-docx?
0
0
0
3,202
44,789,046
2017-06-27T20:10:00.000
1
0
0
0
python,mysql,django
44,789,450
1
true
1
0
I don't think you can get deadlock just from rapid insertions. Deadlock occurs when you have two processes that are each waiting for the other one to do something before they can make the change that the other one is waiting for. If two processes are just inserting, the database will simply process them in the order that they're received, there's no dependency between them. If you're using InnoDB, it uses row-level locking. So unless two inserts both try to insert the same unique key, they shouldn't even lock each other out, they can be done concurrently.
1
1
0
I wanna migrate from sqlite3 to MySQL in Django. Now I have been working in Oracle, MS Server and I know that I can make Exception to try over and over again until it is done... However this is insert in a same table where the data must be INSERTED right away because users will not be happy for waiting their turn on INSERT on the same table. So I was wondering, will the deadlock happen on table if to many users make insert in same time and what should I do to bypass that, so that users don't sense it?
MySql: will it deadlock on to many insert - Django?
1.2
1
0
397
44,789,188
2017-06-27T20:20:00.000
0
0
1
0
c#,python,camera,integration
54,589,317
1
true
0
1
Actually, there is a Python interface available for working with and accessing the features of the Basler camera, namely pypylon.
1
1
0
My employer currently has a lot of python code currently written and he wants me to create an application that interacts with this code and controls a Basler Ace Camera. The Basler Ace Cameras only support C, C#, C++ coding languages. What is the easiest way to integrate his Python code with the camera code which I am currently writing in C# because of the .NET framework?
Basler Ace Camera Integration with Python
1.2
0
0
925
44,790,807
2017-06-27T22:16:00.000
1
1
1
0
python,unix
44,790,892
1
false
0
0
python-dev contains everything you need to build Python extensions. So, it will typically include the Python.h header file, and probably some Python shared object files to link with. If you have a compiler on the target machine, you can probably build that yourself by looking at how python-dev does it for various operating systems.
1
2
0
I couldn't find 'python-dev' package anywhere. I don't have the luxury to find it via pip or yum, since I don't have internet connection on my computer. I need to locate the 'python-dev' source, download it, and install it in my computer without internet and sudo access. Thanks.
How can I install python-dev package in a computer without internet?
0.197375
0
0
1,199
44,793,183
2017-06-28T03:30:00.000
2
0
1
0
python,ipython
44,798,696
4
true
0
0
It seems I've found the solution. You need to edit the file which starts IPython. On Linux you can open it with: sudo nano $(which ipython). Once you're inside the file, change the shebang line to whatever Python interpreter you like. And the directories that contain the Python3.4 modules need to be added to the $PYTHONPATH variable. What is a shebang line? The first line in the file; it holds the path to the python interpreter that will be used. Thanks to @code_byter.
1
6
0
Can we change the version of Python interpreter that IPython uses? I know there is IPython and IPython3, but the problem is, IPython uses Python2.7, and IPython3 uses Python3.4.2, and I see no way to change that. What if I wanted IPython to use which ever version of Python interpreter I wanted, could I make it that way? I want IPython to use the newest Python version, Python3.6. Can I make it that way?
How to set other Python interpreter to IPython
1.2
0
0
6,691
44,793,371
2017-06-28T03:53:00.000
0
0
1
0
python,multithreading,multiprocessing
69,449,636
3
false
0
0
The Global Interpreter Lock (GIL) has to be taken into account to answer your question. When a larger number (say k) of threads is created, they generally will not increase performance by k times, as the code will still run like a single-threaded application. The GIL is a global lock which locks everything out and allows only single-thread execution, utilizing only a single core. Performance does increase in places where C extensions like numpy, network or I/O work are used, where a lot of background work is done and the GIL is released. So when threading is used, there is only a single operating-system-level thread while python creates pseudo-threads which are completely managed by the threading module itself but essentially run as a single process. Preemption takes place between these pseudo-threads. If the CPU runs at maximum capacity, you may want to switch to multiprocessing. Now, for self-contained instances of execution you can opt for a pool. But in the case of overlapping data, where you may want processes to communicate, you should use multiprocessing.Process.
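As a small illustration of moving a CPU-bound job from threads to processes with a pool; the work function is just a stand-in for whatever your real computation is:
from multiprocessing import Pool

def busy(n):
    # CPU-bound stand-in: sum of squares below n
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    pool = Pool(4)                           # four OS processes, roughly one per core
    print(pool.map(busy, [10 ** 6] * 4))     # the four jobs run in parallel
    pool.close()
    pool.join()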
1
20
0
To the best of my knowledge, multiple threads can be spawned within the system concurrently, but 2 different threads can not access or modify the same resource at the same time. I have even tried many things like creating many threads and putting them in a queue etc. But I always used to hear people say multithreading is not available in Python and that instead you can use multiprocessing to take advantage of multicore CPUs. Is this true? Are Python threads only green threads, not real multithreading? Am I right about the resource locking of Python?
Is multithreading in python a myth?
0
0
0
22,104
44,794,347
2017-06-28T05:34:00.000
3
0
0
0
python-2.7,pyspark,seaborn
46,108,215
1
false
0
0
Generally, for plotting, you need to move all the data points to the master node (using functions like collect()) before you can plot. Plotting is not possible while the data is still distributed in memory.
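A sketch of that pattern, assuming seaborn is pip-installed on the driver machine; spark_df and some_column are placeholders for your actual DataFrame and column:
import seaborn as sns
import matplotlib.pyplot as plt

# spark_df is a pyspark.sql.DataFrame; bring a manageable slice back to the driver as pandas
pdf = spark_df.limit(10000).toPandas()   # keep it small enough to fit in driver memory

sns.distplot(pdf["some_column"])
plt.show()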
1
0
1
I am using Apache Pyspark with Jupyter notebook. In one of the machine learning tutorials, the instructors were using seaborn with pyspark. How can we install and use third party libraries like Seaborn on the Apache Spark (rather Pyspark)?
Installing seaborn on Pyspark
0.53705
0
0
824
44,795,589
2017-06-28T06:58:00.000
1
0
0
1
python,google-app-engine,google-cloud-platform
44,805,435
1
false
1
0
Based on my external observations of how things work, I suspect there is only a single ingress queue (per service/module) from which requests are only handed to instances which can immediately handle them. The actual parameters of this single queue (depth, waiting time, etc) would be the indicators driving the automatic/basic instance scaling logic for that service/module - starting and stopping instances. In such architecture the death of an instance has absolutely no impact on the queued requests, they would simply be dispatched to other instance(s), either already running or specifically started to handle such request. Note: this is just a theory, though.
1
0
0
I have B* instances running on App Engine (Python env) to serve user-facing requests. Sometimes I see B* instances getting terminated due to "Exceeded soft private memory limit". I understand that increasing the instance class will solve the issue, but I have a few questions about requests that are sitting in the Instance Pending Queue. Assume we have 2 instances of the B* instance class, call them I-1 and I-2. What will happen to the requests that are in I-1's Instance Request Pending Queue after the I-1 instance gets terminated for some reason? Will those requests get evicted from the instance queue because the instance was terminated? Or will the requests in I-1's Instance Pending Queue be dequeued and put into I-2's request queue by the request scheduler as soon as the scheduler finds that I-1 is shutting down? Any help understanding this will be highly appreciated!
What happens to requests in Instance Request Pending Queue when a backend instance gets terminated due to "Exceeded soft private memory limit"?
0.197375
0
0
34
44,796,047
2017-06-28T07:22:00.000
3
0
0
1
python,tftp
44,796,119
1
false
0
0
You can use TFTPy TFTPy is a pure Python implementation of the Trivial FTP protocol. TFTPy is a TFTP library for the Python programming language. It includes client and server classes, with sample implementations. Hooks are included for easy inclusion in a UI for populating progress indicators. It supports RFCs 1350, 2347, 2348 and the tsize option from RFC 2349.
1
2
0
Are there any TFTP libraries in Python to allow the PUT transfer of binary file to an IP address. Ideally I would rather use a built in library if this is not possible then calling the cmd via python would be acceptable. Usually if TFTP is installed in windows the command in command prompt would be: tftp -i xxx.xxx.xxx.xxx put example_filename.bin One thing to note is that python is 32bit and running on a 64bit machine. I've been unable to run tftp using subprocess.
Running TFTP client/library in python
0.53705
0
1
2,164
44,799,200
2017-06-28T09:53:00.000
3
0
0
0
python,tensorflow,ipython
44,799,700
1
true
0
0
Control+Z doesn't quit a process, it stops it (use fg to bring it back up). If some computation is running in a forked process, it may not stop with the main process (I'm no OS guy, this is just my intuition). In any case, properly quitting iPython (e.g. by Control+D or by running exit()) should solve the problem. If you need to interrupt a running command, first hit Control+C, then run exit().
1
0
1
I think it's some sort of bug. The problem is quite simple: launch ipython import Tensorflow and run whatever session type nvidia-smi in bash (see really high gpu memory usage, related process name, etc) control+z quit ipython type nvidia-smi in bash (still! really high GPU memory usage, and the same process name, strangely, these processes are not killed!) I guess iPython failed to clean Tensorflow variables or graphs when exiting. Is there any way I can clean the GPU memory without restart my machine? System: Ubuntu 14.04 Python: Python3.5 IPython: IPython6.0.0
When run a tensorflow session in iPython, GPU memory usage remain high when exiting iPython
1.2
0
0
449
44,804,051
2017-06-28T13:34:00.000
1
0
0
0
python,pandas,google-bigquery,google-cloud-platform,gsutil
44,814,853
1
true
0
0
Consider breaking up your data into daily tables (or partitions). Then you only need to upload the CSVs from the current day. The script you have currently defined otherwise seems reasonable. Extract your new day of CSVs from your source of timeline data. Gzip them for fast transfer. Copy them to GCS. Load the new CSVs into the current daily table/partition. This avoids the need to delete existing tables and reduces the amount of data and processing that you need to do. As a bonus, it is easier to backfill a single day if there is an error in processing.
1
0
1
I have one program that downloads time series (ts) data from a remote database and saves the data as csv files. New ts data is appended to old ts data. My local folder continues to grow and grow and grow as more data is downloaded. After downloading new ts data and saving it, I want to upload it to a Google BigQuery table. What is the best way to do this? My current work-flow is to download all of the data to csv files, then to convert the csv files to gzip files on my local machine and then to use gsutil to upload those gzip files to Google Cloud Storage. Next, I delete whatever tables are in Google BigQuery and then manually create a new table by first deleting any existing table in Google BigQuery and then creating a new one by uploading data from Google Cloud Storage. I feel like there is room for significant automation/improvement but I am a Google Cloud newbie. Edit: Just to clarify, the data that I am downloading can be thought of downloading time series data from Yahoo Finance. With each new day, there is fresh data that I download and save to my local machine. I have to uploading all of the data that I have to Google BigQUery so that I can do SQL analysis on it.
Python/Pandas/BigQuery: How to efficiently update existing tables with a lot of new time series data?
1.2
1
0
642
44,804,099
2017-06-28T13:36:00.000
1
0
1
0
python,html,database,webpage
44,804,332
1
false
1
0
You could use Apache or another web server. Set up an HTML web page with javascript. I recommend using jQuery as it makes making ajax calls so much easier. Have your javascript/ajax/jquery call your python script every x minutes/seconds. Ensure your apache server is setup to run CGI scripts and ensure they are set with read and execute permissions.
1
0
0
I have a python script that collects data on some virtual machines. I want to display this data in a webpage.The web page must be dynamic since the script will run continuously and the data must be updated every time the script runs. I possibly want this data displayed in a table but I am not sure what direction to go?
Best way to display python script data in a webpage?
0.197375
0
0
653
44,804,235
2017-06-28T13:42:00.000
1
0
0
0
python,csv,pandas,merge,delimiter
44,804,374
1
true
0
0
In short: no, you do not need similar delimiters within your files to merge pandas DataFrames. In fact, once the data has been imported (which requires setting the right delimiter for each of your files), it is placed in memory and does not keep track of the initial delimiter (you can see this by writing your imported dataframes back to csv using the .to_csv method: the delimiter will always be , by default). Now, in order to understand what is going wrong with your merge, please post more details about your data and the code you are using to perform the operation.
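For illustration, a sketch with made-up file names and delimiters; a common reason for NaN after the merge is that the ID keys don't line up exactly (different dtype or stray whitespace), which the cleanup below guards against:
import pandas as pd

df1 = pd.read_csv("a.csv", sep=",")
df2 = pd.read_csv("b.csv", sep=";")
df3 = pd.read_csv("c.csv", sep="|")
df4 = pd.read_csv("d.csv", delim_whitespace=True)

# normalise the key column so 'ID ' and 'ID', or 1 and '1', line up across frames
for df in (df1, df2, df3, df4):
    df.columns = df.columns.str.strip()
    df["ID"] = df["ID"].astype(str).str.strip()

merged = df1.merge(df2, on="ID").merge(df3, on="ID").merge(df4, on="ID")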
1
0
1
I have 4 separate CSV files that I wish to read into Pandas. I want to merge these CSV files into one dataframe. The problem is that the columns within the CSV files contain the following: , ; | and spaces. Therefore I have to use different delimiters when reading the different CSV files and do some transformations to get them in the correct format. Each CSV file contains an 'ID' column. When I merge my dataframes, it is not done correctly and I get 'NaN' in the column which has been merged. Do you have to use the same delimiter in order for the dataframes to merge properly?
Pandas: Reading CSV files with different delimiters - merge error
1.2
0
0
891
44,804,965
2017-06-28T14:11:00.000
1
0
1
0
python,numpy
44,805,286
2
false
0
0
Because you are working with a numpy array, which behaves like a C array, size refers to how big your array will be. Moreover, you can pass np.zeros(10) or np.zeros((10)). While the difference is subtle, a size passed this way will create a 1D array. You can give size=(n1, n2, ..., nn), which will create an nD array. However, because python users want multi-dimensional arrays, array.reshape allows you to get from a 1D to an nD array. So, when you call shape, you get the N-dimensional shape of the array, so you can see exactly what your array looks like. In essence, size is equal to the product of the elements of shape. EDIT: The difference in name can be attributed to 2 things: firstly, you can initialise your array with a size, but you do not yet know its shape; so size is only the total number of elements. Secondly, because of how numpy was developed, different people worked on different parts of the code, giving different names to roughly the same concept, depending on their personal vision for the code.
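To see the two conventions side by side:
import numpy as np

a = np.zeros((2, 3))                        # 'shape' argument: a 2x3 array
b = np.random.randint(0, 10, size=(2, 3))   # 'size' argument: also a 2x3 array

print(a.shape, a.size)   # (2, 3) 6
print(b.shape, b.size)   # (2, 3) 6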
1
19
1
I noticed that some numpy operations take an argument called shape, such as np.zeros, whereas some others take an argument called size, such as np.random.randint. To me, those arguments have the same function and the fact that they have different names is a bit confusing. Actually, size seems a bit off since it really specifies the .shape of the output. Is there a reason for having different names, do they convey a different meaning even though they both end up being equal to the .shape of the output?
numpy: "size" vs. "shape" in function arguments?
0.099668
0
0
17,837
44,807,014
2017-06-28T15:37:00.000
0
0
1
0
python,regex
44,807,499
3
false
0
0
I believe a regex like this is what you're looking for: \s(\d+)\s
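A slight variation with lookarounds also catches integers at the very start or end of the string and does not consume the surrounding whitespace; the sample sentence is extended for illustration:
import re

int_test = "Today is 6/28/2017 with 17.5 percent chance of rain and 42 degrees"
# digits bounded by whitespace (or the string boundary) on both sides
print(re.findall(r'(?<!\S)\d+(?!\S)', int_test))   # ['42']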
1
0
0
I am having issues capturing integers and dates correctly with regular expressions. Integers int_test: "Today is 6/28/2017 with 17.5 percent chance of rain" int_pattern = re.findall(r'\d[0-9].*', int_test) The problem I am having with this regular expression is that it captures the "6, 28, 2017, 17, and 5" from int_test. I am not able to find a way to capture integers surrounded only by whitespace. Dates date_test = "Today is 6/28/2017 or June/28/2017 or 28/June/2017 or Jun/28/2017 or 28-Jun-2017" date_pattern = re.findall(r'\d.*[- /]\d+', date_test) For this one, I have already written code to support either "/" or "-" between dates. I have successfully been able to capture digits before or after the "/" or "-", but I need a way to capture any number of characters before or after the "/" or "-" in the sentence. Any help would be greatly appreciated!
Capturing integers/dates with regular expressions
0
0
0
88
44,807,243
2017-06-28T15:47:00.000
0
0
1
0
python,spss
44,815,451
1
true
0
0
The basic idea is to use the spss.Submit API to run regular commands but wrapped in OMS commands so that you can get their output as a dataset or xml structure. You can keep this output from appearing at all in the Viewer, or you can let it appear. If it is elaborate, you might, as Andy said, want to make it an extension command, including a custom dialog, but this is not always necessary.
1
0
0
I want to create a custom dialog box in SPSS to implement a new procedure that I developed. I have written the Python code for it and am now looking into creating the custom dialog box. However, my procedure uses the regression coefficients from a data set that are derived from the built-in regression procedure. Since I want to minimize manual procedures, I want to prevent that the user first has to run regression and then has to manually input the regression coefficients in the new custom procedure. I would prefer that I could call the built-in regression procedure in my Python code. Is this possible? Alternatively, I thought inserting the code to calculate the regression coefficients withing my Python code. But I prefer not, as this code is very long and if there is a built-in procedure already, I hope to find a way to take advantage of that.
Using information from built-in procedure SPSS for custom procedure
1.2
0
0
34
44,810,259
2017-06-28T18:42:00.000
0
0
0
0
python,django,forms
44,810,611
1
false
1
0
This can only be done using JavaScript. The hard part is to have the management form sync up with the number of rows. But there are two alternatives: Semi-javascript (Mezzanine's approach): generate a ton of rows in the formset and only show one empty one. Upon a click of the "add another row" button, unhide the next one. This makes it easier to handle the management form, as the unfilled extras don't need any work. No fix needed: add as many rows as is humanly sane. In general, people don't need 40 rows; they get bored with filling out the form or worry that all that work is lost when the browser crashes. Hope this helps you along. Good luck!
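The "spare rows up front" alternative boils down to asking the formset factory for extra forms; a sketch, with the model and field names taken from the question but otherwise hypothetical:
from django.forms import inlineformset_factory
from myapp.models import Publisher, Article   # hypothetical app and models

ArticleFormSet = inlineformset_factory(
    Publisher, Article,
    fields=("title", "body"),   # assumed fields on Article
    extra=10,                   # render ten blank rows; JavaScript just un-hides the next one
)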
1
1
0
I am trying to create a model via a form that has multiple other models related to it. Say I have a model Publisher, then another model Article with a foreign key to Publisher. When creating a Publisher via a form, I want to create an Article at the same time. I know how to do this via formsets. However, I don't know how to add a button that says "add extra article" in the same view, without being redirected to a new page and losing the old data since the form was not saved. What I want is that when someone clicks "add new article", a new Article form appears and the user can fill in a new Article. Is this possible in the same view in django? If so, can someone give me an idea how to approach this? I would show code or my attempts, but I am not sure how to even approach it.
Option to add extra choices in django form
0
0
0
56
44,810,388
2017-06-28T18:50:00.000
1
0
0
0
python,social-networking,networkx
44,811,313
1
false
0
0
I found the feature I was looking for right after I posted this. It is the networkx ego_graph feature. networkx.github.io/documentation/networkx-1.10/reference/
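For reference, a minimal sketch using a built-in example graph; substitute your own graph and node id:
import networkx as nx

G = nx.karate_club_graph()                           # any graph with node ids will do
friends_of_friends = nx.ego_graph(G, 0, radius=2)    # node 0, distance degree 2
print(friends_of_friends.number_of_nodes())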
1
1
0
I want to choose any given node and build a subgraph for that node based on what distance degree I want. I need to input a node's ID, and specify the degree ( 1 = only direct friends, 2 = direct friends, and friends of friends, 3 = direct friends, friends of friends, friends of friends of friends and so on...) and generate a subgraph for that node's social network to that degree. Does anyone know a good way to do this in networkx?
Generating NetworkX subgraphs based on distance
0.197375
0
1
169
44,811,520
2017-06-28T19:58:00.000
1
0
0
0
python-3.x,qt,tkinter,tk
62,209,717
1
true
0
1
A tkinter-based GUI should be smaller on disk and in RAM, but it has fewer capabilities and may not suit your needs depending on what you need. tkinter is best for a small, simple GUI. It will have no problems running a fair-sized document.
1
7
0
BACKGROUND: It's important to consider memory usage of applications on ARM computers like the Raspberry Pi. When programming with Python, there are several GUI choices. A couple of the most popular are QT and TK. The Raspberry Pi 2 and 3 are limited by 1-Gbyte RAM, and 32-Gbyte max USB memory storage, per stick. They also have a much slower RISC (ARM) processor compared to popular desktop or laptop computers. Still, it's "enough" to run applications, even many GUI applications at a time if they use conservative programming techniques. I'm figuring that if a user stuck to TK based applications (Python-Tkinter-GUI) with the Raspberry Pi, then there wouldn't be nearly the number of difficulties. Q: Does anyone have any statistics on this ... by using Tkinter instead of PyQT for GUI program development with the intended user being on a Raspberry Pi version 2 or 3 ... Performance ratios, programming with Tkinter vs PyQT: Size of Program in storage Size of Program executed in RAM Speed of Application
Memory savings of Python Tkinter GUI vs PyQT
1.2
0
0
1,089
44,812,538
2017-06-28T21:05:00.000
7
1
0
0
python,coverage.py
44,812,727
4
true
0
0
Use --include to only include files in particular directories. It matches file paths, so it can match a subdirectory.
3
14
0
I have a directory tests that includes a lot of different tests named test_*. I tried to run coverage run tests but it doesn't work. How can I run a single command to coverage multiple files in the directory?
How to run coverage.py on a directory?
1.2
0
0
16,132
44,812,538
2017-06-28T21:05:00.000
10
1
0
0
python,coverage.py
44,812,899
4
false
0
0
You can achieve that using --source. For example: coverage run --source=tests/ <run_tests>
3
14
0
I have a directory tests that includes a lot of different tests named test_*. I tried to run coverage run tests but it doesn't work. How can I run a single command to coverage multiple files in the directory?
How to run coverage.py on a directory?
1
0
0
16,132
44,812,538
2017-06-28T21:05:00.000
15
1
0
0
python,coverage.py
55,546,164
4
false
0
0
Here is a complete example with commands from the same PWD for all phases in one place. With a worked up example, I am also including the testing and the report part for before and after coverage is run. I ran the following steps and it worked all fine on osx/mojave. Discover and run all tests in the test directory $ python -m unittest discover <directory_name> Or Discover and run all tests in "directory" with tests having file name pattern *_test.py $ python -m unittest discover -s <directory> -p '*_test.py' run coverage for all modules $ coverage run --source=./test -m unittest discover -s <directory>/ get the coverage report from the same directory - no need to cd. $ coverage report -m Notice in above examples that the test directory doesn't have to be named "test" and same goes for the test modules.
3
14
0
I have a directory tests that includes a lot of different tests named test_*. I tried to run coverage run tests but it doesn't work. How can I run a single command to coverage multiple files in the directory?
How to run coverage.py on a directory?
1
0
0
16,132
44,812,996
2017-06-28T21:41:00.000
1
0
1
0
java,python,algorithm,indentation
44,814,671
1
true
0
0
The way I see it, there are two distinct ways you can solve this question. Convert the python code to some intermediary format which resembles a Context-Free Grammar (CFG) and then perform the error checking on the CFG. Case checking. The former is a little more elegant and revolves around compiler theory, or at the very minimum, automata theory. While this process is guaranteed to work and establishes some sort of language that people can refer to, it may be extremely tedious for something that is time sensitive. It's an elaborate fix to what may seemingly be a simple task. The advantage of this technique is that a "hacky" solution, such as finding the ":" and then checking whether the next line is indented, may not work for certain inline statements in python that use ":". An example could be print("Enter your name:"), or a subprocess.Popen command. The CFG approach will ensure that errors like this are avoided. The latter, on the other hand, is extremely difficult to keep track of as a programmer and debugging it would be fairly difficult; I say this in some relativistic measure, as there would be several cases of top-level statements to check. So let's use some "good programming practices", since the keywords def, if, elif, else, class, etc. can be stored in a common location. In order to solve this, we'd declare some variable (let's call it i), read the file line by line and check the first (or zeroth) character. If the character is not a space, we read until the next whitespace and check whether that word exists in some Trie of words that define indentation blocks (you could use a hash as well, it doesn't make a difference). If it is one of the block keywords, we increment i by 4 and move on to the next line. In the subsequent line, we read up to the i-th character and ensure they are spaces. Essentially, we repeat this process until we find something that contradicts what is being looked for. Now, if something does contradict it, we may have to read the previous line. Consider long_sequence = [1,2,3,4,1,2,3,4,1,2,3,4, 5,6,7] This would technically throw an error, so we have to consider this case and several similar ones, where a command spans more than a single line and is not complete on one line. So, as previously mentioned, solution 2 would be very tedious and a debugging nightmare; solution 1 is elegant but really hard to construct. It really does depend on the timeline you have to construct it in.
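A very rough Python sketch of the case-checking idea; it deliberately ignores continuation lines, strings containing ":" and the other corner cases discussed above, so it is only an illustration of the shape of the check:
BLOCK_KEYWORDS = ("def", "class", "if", "elif", "else", "for",
                  "while", "try", "except", "finally", "with")

def first_bad_line(lines):
    expect_indent = False
    prev_indent = 0
    for number, line in enumerate(lines, start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue                      # skip blank lines and full-line comments
        indent = len(line) - len(line.lstrip(" "))
        if expect_indent and indent <= prev_indent:
            return number                 # a block header was not followed by a deeper indent
        first_word = stripped.split()[0].rstrip(":")
        expect_indent = first_word in BLOCK_KEYWORDS and stripped.endswith(":")
        prev_indent = indent
    return None

print(first_bad_line(["if x:", "print(x)"]))       # 2: the body of the if is not indented
print(first_bad_line(["if x:", "    print(x)"]))   # None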
1
1
0
I was recently interviewed by a local tech company for a Java developer role and was asked to write a Java program to check the correctness of the indentation in python code (or at least find the first indentation error). It may be easier to deal with def, while and for. However, it is tricky to handle things like if...elif...else. There are different situations, for example just if with no else or elif, or nested ifs. If it is paired, maybe I can use a stack, but you don't know whether they are paired or not. I could really use some advice here.
How to implement an algorithm to check Python'code's indentation
1.2
0
0
1,100
44,813,676
2017-06-28T22:43:00.000
0
0
0
0
python,networking,dependencies
44,813,734
1
true
0
0
Try to connect, and if it fails after n retries, consider that you don't have a connection. There are many strategies to set up retries, so look around to figure out what is best for you. Then, if you can't connect, just keep going after the failure... That's about it.
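A hedged sketch of that connect-with-retries shape; the host, port, loop length and read_sensors() are all placeholders standing in for the real sensor program:
import socket
import time

def read_sensors():
    return 42   # placeholder for the real local sensor work

def try_connect(host="127.0.0.1", port=9000, retries=3, delay=1.0):
    for _ in range(retries):
        try:
            return socket.create_connection((host, port), timeout=2)
        except (socket.error, socket.timeout):
            time.sleep(delay)
    return None   # nobody there: carry on without streaming

conn = try_connect()
for _ in range(10):                       # stand-in for the main sensor loop
    reading = read_sensors()
    if conn is not None:
        conn.sendall(str(reading).encode())
    time.sleep(1)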
1
0
0
I have a python program that handles sensor data. Most of the functionality takes place locally and requires no network connection. The one thing I want it to do with a client is send a continuous stream of data IF a client is connected, but everything else should run regardless of whether there is a connected client or not. The only setups I've managed before all required a client to connect before anything else could happen. How do I set this up so my program doesn't depend on first having a client connected before it does anything?
Python run certain functions with or without a client connection
1.2
0
1
41
44,814,719
2017-06-29T00:55:00.000
0
0
1
0
python
71,638,920
4
false
0
0
Any immutable object can be a dictionary key: strings, numbers, tuples, etc. The types that cannot be keys are mutable ones like lists and sets.
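For example:
d = {1: 10, 2: 20}       # both keys and values are ints
d[3] = d[1] + d[2]
print(d[3])              # 30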
1
22
0
Can both the values and keys of a dictionary be integers in python? Or do I need one of them to be like a string or something?
Can both the values and keys of a dictionary be integers?
0
0
0
70,880
44,819,809
2017-06-29T08:19:00.000
0
0
0
1
python,workflow,pipeline,airflow,apache-airflow
44,886,741
1
false
0
0
There is an option to set maximum number of runs per dag in the global airflow.cfg file. The parameter to set is max_active_runs_per_dag.
1
1
0
When I backfill a dag for specific dates, I want it to run sequentially, i.e. I want it to run day by day, completing all the tasks for a given day before moving on to the next day, and so on. I have used the depends_on_past argument, but it only helps me set the dependency on tasks, not on dag runs. Example: Dag_A has 4 tasks. I use backfill with depends_on_past. After executing the first task in Dag_A (first day) it triggers the first task of Dag_A (second day); I don't want that.
airflow backfilling dag run dependancy
0
0
0
522
44,823,132
2017-06-29T10:51:00.000
4
0
0
0
python,django,django-views,django-generic-views
44,823,316
3
false
1
0
These are completely different things. get_context_data() is used to generate the dict of variables that are accessible in the template. queryset is a Django ORM queryset consisting of model instances. The default implementation of get_context_data() in ListView adds the return value of get_queryset() (which simply returns self.queryset by default) to the context as the object_list variable.
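A small sketch showing where each one lives; the model, fields and template path are invented for illustration:
from django.views.generic import ListView
from myapp.models import Article   # hypothetical model

class ArticleList(ListView):
    queryset = Article.objects.filter(published=True)   # what ends up in object_list
    template_name = "myapp/article_list.html"

    def get_context_data(self, **kwargs):
        context = super(ArticleList, self).get_context_data(**kwargs)
        context["page_title"] = "Published articles"     # extra template variable
        return context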
1
4
0
What is the difference between get_context_data and queryset in Django generic views? They seem to do the same thing?
Diference between get_context_data and queryset in Django generic views?
0.26052
0
0
3,102
44,824,517
2017-06-29T11:57:00.000
-1
0
0
0
mysql,python-2.7,mariadb
54,635,754
2
false
0
0
dumpcmd = "mysqldump -h " + DB_HOST + " -u " + DB_USER + " -p" + DB_USER_PASSWORD + " " + DB_NAME + "| pv | gzip > " + pipes.quote( BACKUP_PATH) + "/" + FILE_NAME + ".sql"
1
2
0
i'm developing an app for my company, using Python2.7 and MariaDB. I have created a functions which backups our main database server to another database server. I use this command to do it:mysqldump -h localhost -P 3306 -u root -p mydb | mysql -h bckpIPsrv -P 3306 -u root -p mydb2 . I want to know if it's posible to see some kind of verbose mode or a percentage of the job and display it on screen. thank you.
is there a way to have mysqldump progress bar which shows the users the status of their backups?
-0.099668
1
0
6,336
44,825,336
2017-06-29T12:36:00.000
1
0
0
0
python,flask,eve
44,917,195
2
false
1
0
If your flask app is only for serving static files, you don't even need a flask app. You may be able to directly serve static assets via Nginx
1
1
0
I want to provide REST interface and a single page JS-application for consuming the REST-service. Is there a way to do both in one python app or should I start eve-app for providing REST and flask-app for providing HTML as two processes?
Using Eve and Flask in one application
0.099668
0
0
806
44,827,624
2017-06-29T14:14:00.000
7
0
1
0
python,virtualenv,requirements.txt
44,827,829
2
true
0
0
While it is technically possible, I don't find any good reason for that. Having both is confusing, because it is not clear which one is the "master". And you have to (or not?) worry about consistency between the installed packages and the requirements.txt file. Also, a venv and its installed packages in many cases depend on the underlying OS; they have binaries, a different layout, etc. It is generally advised to write OS-independent code. All in all, I would stick to the requirements.txt file and remove any venv folder from the project's repo.
2
5
0
I downloaded a Python project and it contains both a virtual environment and a requirements.txt file. Why would you need both? As far as I know, virtual environments already contain the required modules. Any idea when and why this combination would be useful?
Why would you create a requirements.txt file in a virtual environment in Python?
1.2
0
0
2,159
44,827,624
2017-06-29T14:14:00.000
1
0
1
0
python,virtualenv,requirements.txt
44,827,676
2
false
0
0
You can't distribute the virtualenv directory with your project because the contents may vary depending on the target operating system and the version of the operating system. Specifically, a virtualenv that includes libraries with compiled components installed on Ubuntu 14.04 will differ from the equivalent virtualenv installed on Ubuntu 16.04. Instead, you should distribute your requirements.txt file (just a convention, you could use any file name you want) so the end-user will be able to recreate a virtualenv on his machine.
2
5
0
I downloaded a Python project and it contains both a virtual environment and a requirements.txt file. Why would you need both? As far as I know, virtual environments already contain the required modules. Any idea when and why this combination would be useful?
Why would you create a requirements.txt file in a virtual environment in Python?
0.099668
0
0
2,159
44,828,905
2017-06-29T15:09:00.000
0
0
0
0
python,pandas
44,830,094
2
false
0
0
"Quick" in terms of what resource? If you want programming ease, then simply make a new frame resulting from subtracting adjacent columns. Any entry of zero or negative value is your target. If you need execution speed, do note that adjacent differences are still necessary: all you can save is the overhead of finding multiple violations in a given row. However, unless you have a particularly wide data frame, it's likely that you'll lose more in short-circuiting than you'll gain by the saved subtractions. Also note that a processor with matrix operations or other parallelism will be fast enough with the whole data frame, that the checking will cost you significant time.
1
2
1
I have a pandas dataframe with a Datetime index. The index is generally monotonically increasing, however there seem to be a few rows that don't follow this trend. Any quick way to identify these unusual rows?
find non-monotonical rows in dataframe
0
0
0
759
44,829,186
2017-06-29T15:21:00.000
0
0
0
0
python,rest,reactjs,web-applications,websocket
44,830,106
1
false
1
0
Here's a simple way I'd go about doing it: Create a backend service that exposes the python script's output via HTTP. You can use any familiar backend programming language: python, java, .net etc to host a simple http web server, and expose the data. Create a simple html page, that makes an ajax call to your backend service, pulls the data, and shows it on the page. Later add your buttons, make the page nicer, responsive on other devices. Topics to read depends on what you are missing above. My best guess based on your question is that you are not familar with front end programming. So read up on html, js, css and ajax.
1
0
0
I have a python script that continuously generates JSON object { time: "timestamp", behaviour: 'random behaviour'} and display it on stdout. Now I want to build a web app and modify the script so that I can read continous data to my web page. I donno where to start from. Well in details: I want a web API to have an on/off button and by clicking on it start reading the beahviour and on clicking off it stops. I am very new to programming so please suggest me topics and ways I need to study/ look upon to make this app. thanks
Web App to read data continously from another app
0
0
1
38