Dataset columns (name: dtype, value range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
41,041,998
2016-12-08T14:37:00.000
2
1
1
0
python,julia
45,740,595
2
false
0
0
In my experience, calling Julia through the Python pyjulia package is difficult and not a robust solution beyond HelloWorld usage. 1) pyjulia is badly neglected. There is practically no documentation aside from the source code. For example, the only (old) tutorials I've found still use julia.run(), which was replaced by julia.eval(). pyjulia is not registered on PyPI, the Python Package Index, and many old issues have few or no responses. It is also buggy, particularly under heavy memory usage, and you can run into mysterious segfaults, perhaps due to garbage collector conflicts. 2) You should limit pyjulia use to Julia functions that return simple types. Julia objects imported to Python using pyjulia are difficult to use, since pyjulia doesn't import any type constructors; for example, pyjulia seems to convert complex Julia types into plain Python matrices. 3) If you can isolate the software modules and manage the I/O, you should consider the shell facility instead, particularly in a Linux / Unix environment.
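A minimal sketch of the shell-based approach from point 3, assuming julia is on the PATH and a hypothetical optimize.jl script that reads an input CSV and writes an output CSV:

```python
import subprocess
import numpy as np

# write the input where the Julia script expects it (hypothetical file names)
np.savetxt("input.csv", np.random.rand(100, 3), delimiter=",")

# run the Julia script as an isolated process and wait for it to finish
subprocess.check_call(["julia", "optimize.jl", "input.csv", "output.csv"])

# read the result back into Python
result = np.loadtxt("output.csv", delimiter=",")
```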
1
1
0
I wrote an optimization function in Julia 0.4 and I want to call it from Python. I'm storing the function in a .jl file in the same working directory as the Python code. The data to be transferred is numeric, and I'm thinking of using NumPy arrays and Julia arrays for the calls. Is there a tutorial on how to make this work?
Calling a Julia 0.4 function from Python. How can I make it work?
0.197375
0
0
569
41,042,418
2016-12-08T14:58:00.000
0
0
1
0
python-3.5,goto
41,042,524
1
true
0
0
In languages like C, goto is typically used to simplify error handling and make code more readable. Python has try and except for this, which makes a goto statement unnecessary.
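A minimal sketch of the try/except pattern that replaces C-style goto error handling; the file-reading task is hypothetical:

```python
def read_upper(path):
    try:
        with open(path) as f:        # `with` guarantees the cleanup a C goto label would do
            return f.read().upper()
    except IOError as err:           # the error path C code would reach via `goto fail;`
        print("failed to read %s: %s" % (path, err))
        return None
```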
1
0
0
I would find it very useful in Python, and am curious as to why Python doesn't have it. Thanks!
Why is 'goto' seen as bad form?
1.2
0
0
64
41,042,739
2016-12-08T15:13:00.000
1
0
0
0
python,sumo
41,046,438
1
true
0
0
It is currently not possible to retrieve this value via TraCI. You can either parse and sum up the values in the tripinfo file, or, if you just need statistics at the end, run sumo with the additional --duration-log.statistics option, which will output an average time loss when the simulation finishes. Furthermore, you can retrieve the current value in the sumo GUI when displaying network parameters (provided you use the option above).
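A minimal sketch of the parsing route, assuming the tripinfo file produced by --tripinfo-output stores the per-vehicle delay in a timeLoss attribute (check the output schema of your SUMO version):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("tripinfo.xml")   # file written via --tripinfo-output
losses = [float(trip.get("timeLoss", 0.0))
          for trip in tree.getroot().iter("tripinfo")]
print("total delay: %.1f s, mean: %.1f s" % (sum(losses), sum(losses) / len(losses)))
```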
1
0
0
I am working on a project to optimize traffic light control for an isolated intersection using SUMO with TraCI in Python. I would like to minimize the total delay for all vehicles with respect to how they would drive if they never had to wait for other traffic or traffic lights. I saw that it's possible to output the timeloss for each vehicle once it has reached its destination by using --tripinfo-output <FILE>. Is there also a built-in method to obtain the current value of average or total delay?
How to obtain the total delay or total timeloss in SUMO and TraCI
1.2
0
0
850
41,043,800
2016-12-08T16:06:00.000
2
0
0
0
python,django,rabbitmq,celery
41,049,986
1
false
1
0
"I am wondering if there is any better way to do this?" "Better" is entirely subjective; if it meets your needs, it's fine. Something to consider, though: don't store the same information in more than one place. If you need an address, look it up from the service that provides addresses, every time. This may be a performance hit, but it eliminates the problem of replicating the data everywhere. Another option would be a more proactive approach, as suggested in the comments: instead of creating a task list for changes and processing it periodically, send a message across rabbitmq immediately when the change happens, and let every service that needs to know get a copy of the message and update its own cache of info. Just remember, though: every time you have more than one copy of the information, you reduce the "correctness" of the system as a whole. It will always be possible for the information found in one of your apps to be out of date, because it did not get an update from the official source.
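A minimal sketch of the proactive variant using pika, RabbitMQ's Python client; the exchange name and payload are hypothetical:

```python
import json
import pika

# publish a change event the moment the location is updated
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="location_updates", exchange_type="fanout")
channel.basic_publish(exchange="location_updates", routing_key="",
                      body=json.dumps({"user_id": 42, "location": "Mumbai"}))
conn.close()
```

Each consuming service would bind its own queue to the fanout exchange and refresh its cache on every message.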
1
3
0
I have written microservices for auth, location, etc. Each microservice has a different database, and a field like location exists in all of the databases for these services. When any of my projects needs a user's location, it first looks in the cache, and if it is not found there it hits the database. So far so good. Now when a location is changed in any of my databases, I need to update it in the other databases as well as update my cache. Currently I made a model (called Subscription) with a url field; whenever a location is changed in any database, an object of this Subscription is created. A periodic task checks the Subscription model, and when it finds such objects it hits the APIs of the other services, updates the location and updates the cache. I am wondering if there is any better way to do this?
microservices and multiple databases
0.379949
1
0
426
41,044,395
2016-12-08T16:35:00.000
2
0
0
0
python,json,excel,powerpoint
41,045,163
3
false
0
0
Right click the chart, choose Edit Data. If it's an embedded chart, the chart and its workbook will open in Excel. From there you can File | Save As and save your new Excel file.
1
0
0
Years ago, for a masters project, my friend took a bunch of data from an Excel sheet and used it in a PowerPoint graph. He told me he made the graph in Excel, then copied it into PowerPoint. Now, when I hover over the graph I see the points associated with where my mouse hovers. My friend lost the original Excel sheet and is asking me to help pull the data from the PowerPoint graph and put it in an Excel sheet. How would I go about doing this? If there's a way to get the points into a JSON file, I can do the rest. I just know nothing about PowerPoint graphs.
Take data points from my power point graph and put them into an excel sheet
0.132549
0
1
150
41,045,491
2016-12-08T17:35:00.000
2
0
0
1
python,azure-data-lake,u-sql
41,116,628
2
false
0
0
Assuming the libs work with the deployed Python runtime, try to upload the libraries into a location in ADLS and then use DEPLOY RESOURCE "path to lib"; in your script. I haven't tried it, but it should work.
1
2
1
Is it or will it be possible to add more Python libraries than pandas, numpy and numexpr to Azure Data Lake Analytics? Specifically, we need to use xarray, matplotlib, Basemap, pyresample and SciPy when processing NetCDF files using U-SQL.
Add more Python libraries
0.197375
0
0
360
41,046,955
2016-12-08T19:03:00.000
1
0
1
0
sql,ipython-notebook,jupyter-notebook,code-formatting
48,311,365
3
false
0
0
I found that this fixed the issue I was having: a Markdown code fence tagged `sql` produced styled code in edit mode but not when the cell was run, whereas a fence tagged `mysql` produced the correct styling.
1
16
0
I want to show some SQL queries inside a notebook. I neither need nor want them to run. I'd just like them to be well formatted. At the very least I want them to be indented properly with new lines, though keyword highlighting would be nice too. Does a solution for this exist already?
Formatting SQL Query Inside an IPython/Jupyter Notebook
0.066568
1
0
10,031
41,051,422
2016-12-09T00:54:00.000
0
0
0
0
ipython-notebook,amazon-redshift,psycopg2,jupyter-notebook
41,090,841
2
true
0
0
Adding these connection options to my connection string seems to have fixed my problem: keepalives=1&keepalives_idle=60
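For illustration, a minimal sketch assuming a SQLAlchemy-style connection URL (the same libpq keepalive parameters can also be passed to psycopg2.connect directly); host, credentials and database name are hypothetical:

```python
from sqlalchemy import create_engine

# keepalives=1 turns TCP keepalives on; keepalives_idle=60 sends one after 60 idle
# seconds, so intermediate network gear stops silently dropping the long-lived connection
engine = create_engine(
    "postgresql+psycopg2://user:password@redshift-host:5439/mydb"
    "?keepalives=1&keepalives_idle=60"
)
```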
1
1
0
I normally am able to run long queries using pyscopg2+sql magic in a Notebook, but lately my Notebooks seems to lose their connection and stall. When I look at my redshift logs, I can see that the queries completed successfully, but my Notebook never gets any data back and just keeps waiting. What might be going on?
iPython Notebook Unresponsive Over Long SQL Queries
1.2
1
0
206
41,055,536
2016-12-09T07:51:00.000
0
0
0
0
python-2.7,gtk,gtkentry
41,364,413
1
true
0
1
I created a new Entry subclass and added this line: self.connect('focus-out-event', self.onEntryActivate)
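A minimal sketch of that subclass, assuming PyGTK 2 to match the question's Python 2.7; the handler name and save logic are hypothetical:

```python
import gtk

class SavingEntry(gtk.Entry):
    def __init__(self):
        gtk.Entry.__init__(self)
        # fires when the entry loses focus, e.g. when the user clicks the save button
        self.connect('focus-out-event', self.on_entry_activate)

    def on_entry_activate(self, widget, event):
        print('saving: %s' % self.get_text())   # hypothetical save action
        return False                            # let GTK continue normal focus handling
```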
1
0
0
I want one or several gtk.Entry widgets to reply to a button created by me (gtk.Button) instead of the Enter key. I'm using Python 2.7. Is it possible?
How to simulate Enter key at gtk.entry to save info
1.2
0
0
388
41,056,503
2016-12-09T08:52:00.000
0
0
0
0
python,connection
41,058,203
1
false
0
0
To solve the problem from the discussion above, I queried the database once, kept the result as a dictionary, and then did each lookup against the dictionary. This speeds up execution, and moreover a lost connection no longer affects the processing. I would like to mention that execution time was reduced to 20 minutes. It's incredible! Thanks to the dictionary.
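A minimal sketch of that query-once, look-up-in-memory pattern; the driver, table and line format are hypothetical:

```python
import MySQLdb  # any DB-API driver works the same way

conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="mydb")  # hypothetical
cursor = conn.cursor()
cursor.execute("SELECT key_col, value_col FROM lookup_table")  # one round trip
lookup = dict(cursor.fetchall())
conn.close()  # the database is no longer needed during parsing

with open("huge.log") as log:          # hypothetical file name
    for line in log:
        key = line.split()[0]          # hypothetical line format
        value = lookup.get(key)        # in-memory lookup; a lost connection cannot hurt here
```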
1
0
0
I have a huge log file, ~4 GB. I have to parse the log file line by line; for each line I query a database and also join in data from different csv files. Execution time is nearly 2 days. Unfortunately, for any reason such as a lost connection to the MySQL server during a query, I lose all the parsing done so far and have to run the script again and again. During the last week I have executed this script several times and lost all the previous parsing each time. The script is supposed to write the final result into a csv file. I am looking for a solution to avoid this problem. What can I do? Is there any way to keep the last status of the process somewhere, so that I can re-execute from the last point rather than running from the beginning each time? Or any other solution that avoids this interruption?
Python DB connection lost in the middle of processing and parsing a huge log file
0
1
0
43
41,056,925
2016-12-09T09:19:00.000
3
1
1
0
python,git,debugging,git-bisect
41,160,672
2
false
0
0
There is a category of QA tools for "reverse dependency" CI builds, so your higher-level projects get rebuilt every time a lower-level change is made. At scale it can be resource intensive. The entire class of problem is removed if you stop dealing with repo-to-repo relationships and start following a version-release methodology for the subcomponents. Then you can track the versions of lower-level dependencies and know, when you go to upgrade, what broke. Your CI could build against several versions of the dependencies if you wanted to systematize it. Git submodules accomplish that tracking for individual commits, so you again get to decide when to incorporate changes from lower levels. (Notably, they can also be used like released versions if you only ever update to tagged release commits.)
1
8
0
Our software is modular and I have about 20 git repos in one project. If a test fails, it is sometimes hard to find the matching commit, since several developers work on these 20 repos. I know the test worked yesterday and fails reproducibly today. Sometimes I use git-bisect, but this works only for one git repo. Often changes in two git repos make a test fail. I could write a dirty script which loops over my N git repos myself, but before doing so I would like to know how experts would solve this. I use Python, Django and pytest, but AFAIK this does not matter for this question.
git-bisect, but for N repos
0.291313
0
0
670
41,059,076
2016-12-09T11:09:00.000
3
0
0
1
python,google-app-engine,pycharm,google-cloud-platform
41,059,077
2
true
1
0
The correct path to use is the platform/google_appengine/ directory within the google-cloud-sdk installation directory.
1
2
0
While configuring pycharm (professional) for Google App engine it asks for App Engine SDK path, while google now gives everything bundled in a Google cloud SDK. On choosing the cloud SDK directory, pycharm is saying it's invalid. What is the correct path for Google App engine SDK?
Google App Engine SDK path in linux for pycharm?
1.2
0
0
1,232
41,059,520
2016-12-09T11:32:00.000
2
0
0
0
python,python-3.x,sympy
41,071,087
1
false
0
0
The size of SymPy Rationals is limited only by your available memory. If you want an approximate but memory bounded number, use a Float. You can convert a Rational into a Float with evalf.
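A minimal sketch of that conversion; the fraction is illustrative:

```python
from sympy import Rational

r = Rational(10**50 + 1, 10**50 - 3)  # exact, with a huge numerator and denominator
f = r.evalf(30)                       # 30-digit Float approximation, bounded memory
print(f)                              # approximately 1.0
```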
1
3
1
I am using SymPy Rational in an algorithm and I am getting enormous fractions: numerator and denominator grow up to 10,000 digits. I would like to stop the algorithm as soon as the fractions become unevaluable. So the question is: what is the maximum magnitude I can allow for sympy.Rational?
To what extent is it precise to evaluate Sympy enormous fractions (Rational)?
0.379949
0
0
59
41,060,813
2016-12-09T12:44:00.000
0
0
0
0
python,django,git
41,061,561
1
false
1
0
If you're getting a 'bad gateway', that means you're using something like Nginx or Apache to route requests to your app. A Bad Gateway error means that Nginx (substitute Apache if appropriate, I'll just use nginx here) isn't getting a response from the app server - your runserver equivalent isn't serving the app, or the app is returning a 500 error. The app server is usually served with gunicorn, uwsgi or sometimes just plain ol' runserver (but please don't do this in production!) You can access your nginx logs at /var/log/nginx/error.log - it'll probably say connect() failed (110: Connection timed out) while connecting to upstream, with an IP address. SSH to your server, run the virtualenv if you have one. If gunicorn is running, stop it. Do which gunicorn (again, substitute uwsgi if appropriate) to get the location, and run that manually: /etc/gunicorn/gunicorn -b 127.0.0.1:8000 -w 1 myapplication.wsgi and note the error message. You can also get this by doing runserver to see what's happening - if the app loads, then go to the website and see what the error message is in your terminal console. It's probably a misconfiguration that's stopping the app from running - that's where you'll get your more detailed debug information.
1
0
0
I recently made some changes to my Python/ Django project, and this caused one of the pages to display a 'Bad Gateway' message in the browser. I was unable to resolve this, so tried doing a 'soft reset' to the last working commit I had made. However, I am now getting an Internal Server Error when trying to view the website on the live server. The error message doesn't give any more information than this... but when I view my project locally in the browser, it works as expected. I've tried committing my local version, pushing it to the server, and then pulling it on the server again, but on the master branch on my local machine and on the server, I get a message telling that there's "nothing to commit, working directory clean". How can I resolve this Internal Server Error to ensure that my site is accessible again?
Python/ Django- Internal Server Error after performing a soft reset
0
0
0
429
41,064,321
2016-12-09T15:59:00.000
0
1
0
1
python,python-3.x,rabbitmq
41,069,086
1
false
0
0
Yes, there is: wrap all messages published in response to a request in a transaction.
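A minimal sketch using AMQP transactions via pika, so the whole batch of output messages is committed atomically; queue name and payloads are hypothetical:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="results")

channel.tx_select()        # put the channel into transactional mode
try:
    for msg in ["part-1", "part-2", "part-3"]:
        channel.basic_publish(exchange="", routing_key="results", body=msg)
    channel.tx_commit()    # all three messages become visible together
except Exception:
    channel.tx_rollback()  # or none of them are delivered
    raise
```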
1
0
0
I am using a python script (3.5.2) and a RabbitMQ worker queue to process data. There is a queue that is filled with user requests from an external system. These user requests are processed by my python script, and each user request results in several output messages. I use the acknowledge functionality to ensure that an incoming message is deleted only after it has been processed; this ensures that the message will be reassigned if the worker occasionally dies. But if the worker dies while sending out messages, it is possible that some messages of the user request have already been sent to the queue while others won't be. Is there a way to send several messages atomically, i.e. send all messages or none?
Make message sending to RabbitMQ atomic
0
0
0
134
41,064,556
2016-12-09T16:14:00.000
2
0
1
0
python,pycharm
41,067,849
2
true
0
0
Turns out this is (more or less) possible by using the PyCharm console. I guess I should have realized this earlier because it seems so simple now (though I've never used a console in my life so I guess I should learn). Anyway, the console lets you run blocks of your code presuming the required variables, functions, libraries, etc... have been specified beforehand. You can actually highlight a block of your code in the PyCharm editor, right click and select "Run in console" to execute it.
1
6
0
I've been searching everywhere for an answer to this but to no avail. I want to be able to run my code and have the variables stored in memory so that I can perhaps set a "checkpoint" which I can run from in the future. The reason is that I have a fairly expensive function that takes some time to compute (as well as user input) and it would be nice if I didn't have to wait for it to finish every time I run after I change something downstream. I'm sure a feature like this exists in PyCharm but I have no idea what it's called and the documentation isn't very clear to me at my level of experience. It would save me a lot of time if someone could point me in the right direction.
PyCharm: Storing variables in memory to be able to run code from a "checkpoint"
1.2
0
0
5,710
41,067,232
2016-12-09T19:03:00.000
0
0
0
0
python,html,css,excel,calculator
41,067,457
1
true
1
0
Does your work have Office 365? You can host the Excel file online for everyone to access. It works just like normal Excel does, but through the online interface; buttons and everything should work. The math isn't hard to do with JavaScript, and you can learn it super quickly on Codecademy (it's the first thing they teach), as well as how to display those numbers, including prompting users for input if you want to go that route.
1
0
0
I am trying to convert a large excel calculator into an online website for people to freely use. I have been working on this project for a few months now, and the equations within the excel sheets that are used to derive the answers are too complicated for me to program it from the ground up. Are there any easy ways to take an excel calculator (workbook with 5 sheets) and turn it into a web app? Any assistance or guidance would be appreciated. I have also tried opening the document in python using OpenpyXL and manipulating it that way, but it doesn't work. Web design is totally out of my area of expertise, but I told my employer I would try anyway. Again, I appreciate any assistance.
Link excel calculator to online HTML app?
1.2
0
0
63
41,067,947
2016-12-09T19:59:00.000
1
0
1
0
python,interpreter,pdb
41,201,288
2
false
0
0
"I read some posts that say that pdb is implemented using sys.settrace. If nothing else works I should be able to recreate the behavior I need using this." Don't view this as a last resort; I think it's the best approach for what you want to accomplish.
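A minimal sketch of a sys.settrace-based line stepper; the traced function is hypothetical:

```python
import sys

def tracer(frame, event, arg):
    if event == "line":
        # do something with the result of each line, e.g. inspect the locals
        print("line %d in %s, locals=%r"
              % (frame.f_lineno, frame.f_code.co_name, frame.f_locals))
    return tracer        # keep tracing nested calls

def target():            # hypothetical code to step through
    x = 1
    y = x + 2
    return y

sys.settrace(tracer)
target()
sys.settrace(None)       # always switch tracing off again
```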
2
2
0
I am trying to find a way that I can have a program step through Python code line by line and do something with the results of each line. In effect a debugger that could be controlled programmatically rather than manually. pdb would be exactly what I am looking for if it returned its output after each step as a string and I could then call pdb again to pickup where I left off. However, instead it outputs to stdout and I have to manually input "step" via the keyboard. Things I have tried: I am able to redirect pdb's stdout. I could redirect it to a second Python program which would then process it. However, I cannot figure out how to have the second Python program tell pdb to step. Related to the previous one, if I could get pdb to step all the way through to the end (perhaps I could figure out something to spoof a keyboard repeatedly entering "step"?) and redirect the output to a file, I could then write another program that acted like it was stepping through the program when it was actually just reading the file line by line. I could use exec to manually run lines of Python code. However, since I would be looking at one line at a time, I would need to manually detect and handle things like conditionals, loops, and function calls which quickly gets very complicated. I read some posts that say that pdb is implemented using sys.settrace. If nothing else works I should be able to recreate the behavior I need using this. Is there any established/straight forward way to implement the behavior that I am looking for?
How to programmatically execute/step through Python code line by line
0.099668
0
0
1,492
41,067,947
2016-12-09T19:59:00.000
2
0
1
0
python,interpreter,pdb
41,069,302
2
true
0
0
sys.settrace() is the fundamental building block for stepping through Python code. pdb is implemented entirely in Python, so you can just look at the module to see how it does things. It also has various public functions/methods for stepping under program control, read the library reference for your version of Python for details.
2
2
0
I am trying to find a way that I can have a program step through Python code line by line and do something with the results of each line. In effect a debugger that could be controlled programmatically rather than manually. pdb would be exactly what I am looking for if it returned its output after each step as a string and I could then call pdb again to pickup where I left off. However, instead it outputs to stdout and I have to manually input "step" via the keyboard. Things I have tried: I am able to redirect pdb's stdout. I could redirect it to a second Python program which would then process it. However, I cannot figure out how to have the second Python program tell pdb to step. Related to the previous one, if I could get pdb to step all the way through to the end (perhaps I could figure out something to spoof a keyboard repeatedly entering "step"?) and redirect the output to a file, I could then write another program that acted like it was stepping through the program when it was actually just reading the file line by line. I could use exec to manually run lines of Python code. However, since I would be looking at one line at a time, I would need to manually detect and handle things like conditionals, loops, and function calls which quickly gets very complicated. I read some posts that say that pdb is implemented using sys.settrace. If nothing else works I should be able to recreate the behavior I need using this. Is there any established/straight forward way to implement the behavior that I am looking for?
How to programmatically execute/step through Python code line by line
1.2
0
0
1,492
41,069,755
2016-12-09T22:19:00.000
0
0
1
0
python,function
41,069,835
3
false
0
0
As far as I'm aware, no, there is not an easier way than those you describe. The way most people (including me!) learn is to do more programming, remember the frequently used functions and check the manual on those you don't know :)
1
0
0
Is there an easy way to determine/remember which functions return a new object and which act on the existing object? For example, list.append('new stuff') acts on the actual object, whereas string.rstrip() returns a new string that needs to be assigned somewhere. I'm forever having to look it up (or open the Python interpreter to check quickly) to know which functions act in place and which functions return.
Tell difference between functions that perform action and functions that return
0
0
0
67
41,071,265
2016-12-10T01:32:00.000
0
0
1
0
python-2.7
41,071,984
1
false
0
0
If it is a list of tuples, list[1][1] would return 'ScreenName=YaziAfrica'. print(list[1][ScreenName]) does not work because ScreenName is not an index into the value 'YaziAfrica'; essentially 'ScreenName=YaziAfrica' is the value, and the index is 1.
1
0
0
I have a Python (2.7) list with tuples that looks like this: [Status(ID=806829700194635776, ScreenName=YaziAfrica, Created=Thu Dec 08 11:56:05 +0000 2016, Text=u'RT @thandi25M: DST investment and journey outlined. #sfsa2016 #iamuwc DDG Tommy Makhode #dstgov https://someurl')] And I want to access the ScreenName value ("YaziAfrica") in this case. There may be several hundred of these tuples, so ideally I would loop through the response. I've tried print(list[1][ScreenName]) but that prompts a TypeError: 'type' object has no attribute '__getitem__'. Appreciate any help; I can't seem to find something that addresses this example.
Access Elements of List of Tuples Python
0
0
0
980
41,072,331
2016-12-10T04:57:00.000
0
0
1
0
process,console,python-3.5
41,072,531
1
false
0
0
The solution I would use would be to have a lockfile created in the tmp directory. The first instance would start, check for the existence of the file, create the file since it is not there, then run; the following instances will start, check for the existence of the file, then quit since it's there. The original instance would remove the lockfile as its last instruction. NOTE: If the app runs into an error and does not execute the instruction to remove the lockfile, you would need to manually remove it else the app will always see the file. I've seen on other threads that some suggest using the ps command and look for your app's name, which would work; however, if your app will ever run on Windows, you would need to use tasklist.
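A minimal sketch of the lockfile approach; the lock name and entry point are hypothetical, and the stale-lockfile caveat above still applies:

```python
import os
import sys
import tempfile

LOCK = os.path.join(tempfile.gettempdir(), "myapp.lock")   # hypothetical name

def run_application():     # hypothetical main entry point
    print("doing work...")

if os.path.exists(LOCK):
    sys.exit("Another instance is already running; exiting.")
open(LOCK, "w").close()    # claim the lock
try:
    run_application()
finally:
    os.remove(LOCK)        # last instruction: release the lock
```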
1
0
0
If I have two or more console processes of the same Python application running at the same time (executed several times by hand or in any other way), is there any method, from the Python code itself, to stop all the extra processes, close their console windows and keep only one running?
How to allow only one Python process to run when the same application is executed several times
0
0
0
78
41,073,034
2016-12-10T06:58:00.000
1
0
0
0
python,django,django-admin
41,083,650
2
true
1
0
Since you don't need to create the related model thanks to the presence of the InlineAdmin, perhaps it would be better for you to remove the post_save signal receiver rather than adding more code inside post_save to determine where the save originated from. That way you can just place a call to create_related_models in your serializer's create method, as sketched below, to achieve the same objective.
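A minimal sketch of that serializer-side call, assuming Django REST Framework and the create_related_objects helper named in the question (the import paths are hypothetical):

```python
from rest_framework import serializers
from myapp.models import Transaction                 # hypothetical app layout
from myapp.services import create_related_objects    # the helper from the question

class TransactionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Transaction
        fields = "__all__"

    def create(self, validated_data):
        transaction = super(TransactionSerializer, self).create(validated_data)
        create_related_objects(transaction)  # only API saves reach this; Admin uses the inline
        return transaction
```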
1
2
0
Is there a way to identify if the current save() or post_save happened via Django Admin? I want to do something differently for models saved via Admin, than models saved from my Django Rest Framework API. Specifically, I have a create_related_objects I call on post_save of model Transaction, to create related object CalendarInfo. But when creating Transaction via Admin, because the related CalendarInfo object is an inline, I don't need to call that create_related_objects on the Transaction model's post_save.
How to find if a model was saved from Django Admin, or elsewhere
1.2
0
0
808
41,074,688
2016-12-10T10:53:00.000
8
1
0
0
python,machine-learning,tensorflow,tensorboard
44,529,522
3
false
0
0
To finish user1501961's answer, you can then just export the list of scalars to a csv file easily with pandas: pd.DataFrame(ea.Scalars('Loss')).to_csv('Loss.csv')
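A fuller sketch, assuming the EventAccumulator import path used by recent TensorBoard releases (older TensorFlow versions shipped it under tensorflow.python.summary); the log directory and the 'Loss' tag are hypothetical:

```python
import pandas as pd
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("logs/run1")   # hypothetical logdir
ea.Reload()                          # actually read the event files from disk
print(ea.Tags()["scalars"])          # list the scalar tags that were logged
pd.DataFrame(ea.Scalars("Loss")).to_csv("Loss.csv")  # columns: wall_time, step, value
```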
1
37
0
How can you write a python script to read Tensorboard log files, extracting the loss and accuracy and other numerical data, without launching the GUI tensorboard --logdir=...?
How do you read Tensorboard files programmatically?
1
0
0
18,729
41,074,980
2016-12-10T11:24:00.000
0
0
0
0
python,macos,opencv
45,069,376
2
false
0
0
The fix that worked best for me was using matplotlib instead, since otherwise you may have to remove all previous versions of OpenCV and reinstall from source!
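A minimal sketch of the matplotlib replacement for cv2.imshow; the image path is hypothetical:

```python
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("photo.jpg")                     # hypothetical file
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # OpenCV stores BGR, matplotlib expects RGB
plt.axis("off")
plt.show()
```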
1
6
1
I installed opencv-python using pip install on macOS. Now the cv2.imshow function is giving the following error: OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage. How can I solve this issue? Why doesn't pip check opencv dependencies?
opencv-python imshow giving errors in mac
0
0
0
2,492
41,077,133
2016-12-10T15:29:00.000
0
0
1
0
python-3.x,python-3.5,ubuntu-16.04,openpyxl
41,833,582
2
false
0
0
Since no one answered this, I will share my eventual work around. I backed up my files & installed Ubuntu 16.10 operating system from a bootable USB. Used Synaptic Package Manager to install openpyxl for Python3 & it is now working. Not sure this is a bona fide solution, but it worked for my purposes.
1
2
0
I've tried everything recommended and still can't get openpyxl to work in Python 3. I've tried both the pip3 and "sudo apt-get install python3-openpyxl" installation methods and they seem to work fine, but when I open the python3 interpreter and type "import openpyxl", I still get ImportError: No module named 'openpyxl'. It works fine in the python2 interpreter, but I just can't get it installed for python3, and I need to write my programs in python3. I'm using Ubuntu 16.04 LTS Xenial Xerus and Python version 3.5.2. I've tried uninstalling and reinstalling the python3-openpyxl module but still get the error. Any help out there? Thanks
Openpyxl ImportError in Python 3.5.2
0
1
0
798
41,078,835
2016-12-10T18:26:00.000
0
0
0
0
python-2.7,error-handling,scikit-learn,svm
41,079,325
2
false
0
0
Since it cannot use sparse input with a model trained on dense data, either convert your dense data to sparse data (recommended) or your sparse data to dense data. Use SciPy to create a sparse matrix from a dense one.
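A minimal sketch of both conversions with SciPy; the array shape is illustrative:

```python
import numpy as np
from scipy import sparse

X_dense = np.random.rand(10, 5)

X_sparse = sparse.csr_matrix(X_dense)  # dense -> sparse (the recommended direction)
X_back = X_sparse.toarray()            # sparse -> dense, if memory allows
# whichever format you pick, use the same one for fit() and predict()
```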
2
0
1
I am getting this error in scikit-learn. Previously I worked with k-fold validation and never encountered this error. My data is sparse, and the training and testing sets are divided in the ratio 90:10. ValueError: cannot use sparse input in 'SVC' trained on dense data. Is there any straightforward reason and solution for this?
Title: SVC-Scikit Learn issue
0
0
0
960
41,078,835
2016-12-10T18:26:00.000
3
0
0
0
python-2.7,error-handling,scikit-learn,svm
41,079,820
2
true
0
0
This basically means that your testing set is not in the same format as your training set. A code snippet would have been great, but make sure you are using the same array format for both sets.
2
0
1
I am getting this error in scikit-learn. Previously I worked with k-fold validation and never encountered this error. My data is sparse, and the training and testing sets are divided in the ratio 90:10. ValueError: cannot use sparse input in 'SVC' trained on dense data. Is there any straightforward reason and solution for this?
Title: SVC-Scikit Learn issue
1.2
0
0
960
41,082,106
2016-12-11T01:26:00.000
1
0
0
1
python,linux,terminal
41,082,125
3
false
0
0
You will need to chmod 0755 script.py and, as the first line in the script, have something like #!/usr/bin/python
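A minimal sketch; #!/usr/bin/env python is the more portable shebang since the interpreter path varies by system:

```python
#!/usr/bin/env python
# script.py -- after `chmod 0755 script.py` this runs directly as ./script.py
print("hello from ./script.py")
```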
1
2
0
Whilst running python scripts from my linux terminal, I find myself typing in python myfile.py way too much. Is there a way on linux (or windows) to execute a python script by just entering in the name of the script, as is possible with bash/sh? like ./script.py?
Avoid typing "python" in a terminal to open a .py script?
0.066568
0
0
279
41,082,675
2016-12-11T03:25:00.000
1
0
0
0
python,pandas,dataframe
41,082,749
1
true
0
0
You can fix the resulting DataFrame using df.replace({'FieldName': {'ErrorError': ''}})
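A minimal sketch; the column name 'FieldName' follows the answer and is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"FieldName": ["ErrorError", "abcdef"]})
df = df.replace({"FieldName": {"ErrorError": ""}})  # blank out the concatenated marker
print(df)
```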
1
0
1
I have a dataframe in which one of the rows is filled with the string "Error". I am trying to add the rows of 2 different dataframes; however, since I have the string in one of the rows, it concatenates the 2 strings, so I end up with a row containing "ErrorError". I would prefer leaving this row empty rather than concatenating the strings. Any idea how to do it? Thanks, kiran
Throw an exception while doing dataframe sum in Pandas
1.2
0
0
49
41,082,947
2016-12-11T04:17:00.000
1
1
1
0
python,python-3.x
41,083,279
2
false
0
0
In addition to getting the source code online, if you have a standard Python install it should be already sitting in an easily found location on your hard-drive. In my case it is found as a file in C:\Python34\Lib. Needless to say, if you go this route you need to be careful to not modify (or, even worse, delete) the file.
1
0
0
I'm fairly new to the language and am wondering if it's possible to see the functions that are being used in a certain Python 3 module? Specifically the ones in the "ipaddress" module, thanks!
Is there any way to see source functions in Python 3 Modules?
0.099668
0
0
48
41,084,035
2016-12-11T07:44:00.000
1
0
1
0
python,multithreading,python-3.x
41,084,226
1
true
0
0
If the threads output data by using the print() function, you could interpose your own print function before starting the threads. Your print could store output in a queue, which will not slow down the code (but the queue may grow). A new single thread would read data from the queue and write it to the display.
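A minimal sketch of that interposed print; the builtin is swapped out before the worker threads start:

```python
import builtins
import queue
import threading

_q = queue.Queue()
_real_print = builtins.print

def _queued_print(*args, **kwargs):
    _q.put((args, kwargs))            # cheap: workers never block on the terminal

def _printer():
    while True:
        args, kwargs = _q.get()
        _real_print(*args, **kwargs)  # only this thread ever touches the display

threading.Thread(target=_printer, daemon=True).start()
builtins.print = _queued_print        # do this before starting the worker threads
```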
1
2
0
I have a command line tool that is running parallel threads. These threads output to the command line. The problem I'm having is command line prompts are getting jumbled together from multiple threads outputting text concurrently. Logger is also jumbling text prompts. I can imagine that all command line prompts could be outputs to a variable lock display module. But that's a significant code refactoring. Plus it will slow down the code. Is there another simple solution to this problem?
Locking Command Line Text Output While Thread Completes Without Slowing Entire Program?
1.2
0
0
64
41,084,902
2016-12-11T09:51:00.000
0
0
1
0
python,pandas,machine-learning,feature-selection,xgboost
41,089,256
2
false
0
0
You should one-hot encode your categorical values. XGBoost, and GBMs in general, will have difficulties with high-dimensional categories; in that case you can feed those categorical values to a pre-model, then to xgboost.
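A minimal sketch of the one-hot step with pandas; the data is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"age": [23, 45, 31],
                   "province": ["ON", "QC", "ON"]})  # hypothetical data
df = pd.get_dummies(df, columns=["province"])        # one 0/1 column per province class
print(df.columns.tolist())  # ['age', 'province_ON', 'province_QC']
```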
1
0
0
My data set has a continuous variable like age, from 0 to 100, and also a categorical variable like province, which has 50 classes. I do not know whether I need to bin the continuous variable, nor what the best way to process the provinces is. XGBoost cannot process string-typed variables, so should I use one-hot encoding for provinces even with so many classes?
What are the best ways to process continues variable and categorical variable in xgboost?
0
0
0
149
41,088,628
2016-12-11T16:53:00.000
0
0
1
0
python,pycharm,code-completion
41,090,989
2
false
0
0
I noticed on a few occasions the GUI going somehow off the rails, including in ways similar to the one described, and I couldn't determine a pattern in the occurrences. Just closing and re-opening the project didn't always help. What worked pretty reliably for me in the end was exiting PyCharm (giving it ample time to finish), making sure no related Java processes remained active (running on Linux, in some cases I had to manually kill such processes when it became clear they weren't going away by themselves) and then re-starting the IDE.
1
0
0
since yesterday PyCharm 2016.3 won't accept selected lines from the list of code completion: If I hit enter, a new line will be set into the editor rather than the selected line of the popup window. Is there any setting for this behaviour? Until now I couldn't find anything.
PyCharm won't accept code completion on enter
0
0
0
155
41,088,840
2016-12-11T17:18:00.000
1
0
1
0
python,csv,datetime,pandas,dataframe
41,088,981
3
true
0
0
I found that the problem was due to missing values within the column. Using coerce=True, i.e. df["Date"] = pd.to_datetime(df["Date"], coerce=True), solves the problem.
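Note that current pandas removed the coerce keyword in favour of errors="coerce"; a minimal sketch with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({"Start": ["5 Jan 2001 10:20", None],
                   "End":   ["6 Jan 2001 11:00", "bad value"]})

df["Start"] = pd.to_datetime(df["Start"], errors="coerce")  # unparseable values become NaT
df["End"] = pd.to_datetime(df["End"], errors="coerce")
df["Duration"] = df["End"] - df["Start"]                    # timedelta column
```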
1
0
1
I've got an imported csv file which has multiple columns with dates in the format "5 Jan 2001 10:20" (note the non-zero-padded day). If I do df.dtypes, it shows the columns as objects rather than strings or datetimes. I need to be able to subtract 2 column values to work out the difference, so I'm trying to get them into a state where I can do that. At the moment, if I try the test subtraction at the end, I get the error unsupported operand type(s) for -: 'str' and 'str'. I've tried multiple methods but have run into a problem every way I've tried. Any help would be appreciated. If I need to give any more information then I will.
Converting objects from CSV into datetime
1.2
0
0
3,761
41,090,260
2016-12-11T19:33:00.000
2
0
1
0
python,types
41,090,293
2
false
0
0
Quoting the python library reference: There are four distinct numeric types: plain integers, long integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of plain integers. Plain integers (also just called integers) are implemented using long in C, which gives them at least 32 bits of precision. Long integers have unlimited precision. Floating point numbers are implemented using double in C. All bets on their precision are off unless you happen to know the machine you are working with.
1
0
0
These are the main built-in data types that I know in Python: Numbers Strings Lists Tuples Dictionaries Boolean Sets My question is, are integers and float numbers considered to be the same data type? Or are they two separate built-in data types? Thanks!
Built-in data types of python
0.197375
0
0
3,364
41,091,357
2016-12-11T21:31:00.000
0
0
1
0
python,flask,dependencies,virtualenv
41,091,835
2
false
1
0
If you are using PyCharm, just switch to Python 3; there is a list of installed packages. Hit add (+) and look for flask. You can switch to Python 3 by hitting File -> Settings -> Project:ProjectName -> Project Interpreter.
1
0
0
So, I know how to actually install Flask, via pip install Flask. But I'm running a virtualenv with Python 3.4. The problem is that Flask is in fact installed, but for Python 2.7 and not for Python 3.4. I ran this command with the virtualenv activated via source bin/activate, but it seems to install it for Python 2.7 even though a virtualenv running Python 3.4 is activated. How do I fix this? I'm pulling my hair out over this. Thanks
Install Flask in Python 3.4+ NOT for python2.7
0
0
0
1,368
41,093,725
2016-12-12T03:17:00.000
0
0
0
0
python,ruby,node.js,apache,nginx
41,095,058
3
false
1
0
Any user can run any service on a VPS. Just make sure that your services are not conflicting over the available ports.
1
2
0
How do I run Python (Django, Flask), Java (Spring), PHP, Node.js (Express) and Ruby (Rails) applications on the same VPS? If it is possible on a VPS, can we do something similar on reseller hosting where we have SSH access? I have read in some other articles suggestions to use a "Virtual Host" in Apache. Also, which one is better for this, NGINX or Apache?
How to run python, java, php, nodejs, ruby applications on the same vps?
0
0
0
900
41,094,926
2016-12-12T05:56:00.000
0
0
0
0
python,django,django-admin
47,960,936
3
false
1
0
After ./manage.py sqlmigrate admin 0001, please run python manage.py migrate.
2
1
0
I get this error "ProgrammingError at /admin/ relation "django_admin_log" does not exist LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..." django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app. when i try './manage.py sqlmigrate admin 0001' or './manage.py sqlmigrate admin 0001' i get " BEGIN; -- Create model LogEntry CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL); ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED; ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED; CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id"); CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id"); COMMIT;" but i still get the same error? i use postgresql if anyone cares.
I accidentally deleted django_admin_log and now I cannot use the django admin
0
1
0
2,274
41,094,926
2016-12-12T05:56:00.000
2
0
0
0
python,django,django-admin
60,692,503
3
false
1
0
I experienced the same issue. The best way is to copy the CREATE TABLE log, log in to your database with ./manage.py dbshell and paste the content there without the last line (COMMIT). It will solve the problem and manually create the table for you.
2
1
0
I get this error "ProgrammingError at /admin/ relation "django_admin_log" does not exist LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..." django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app. when i try './manage.py sqlmigrate admin 0001' or './manage.py sqlmigrate admin 0001' i get " BEGIN; -- Create model LogEntry CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL); ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED; ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED; CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id"); CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id"); COMMIT;" but i still get the same error? i use postgresql if anyone cares.
I accidentally deleted django_admin_log and now I cannot use the django admin
0.132549
1
0
2,274
41,097,429
2016-12-12T09:11:00.000
0
0
1
0
python,methods,wrapper
41,097,657
1
false
0
0
You can't do this with a module. Use a class, with a @property.
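A minimal sketch of that class-based replacement for the config module, reusing the var1 and some_method names from the question; what some_method does is hypothetical:

```python
class Config(object):
    def __init__(self):
        self._var1 = None

    @property
    def var1(self):
        return self._var1

    @var1.setter
    def var1(self, value):
        self.some_method(value)         # assignment is routed through the method

    def some_method(self, value):       # hypothetical body
        print("setting var1 to %r" % value)
        self._var1 = value

config = Config()
config.var1 = 'some value'              # actually calls config.some_method('some value')
```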
1
0
0
I'm not sure what I'm trying to do is even possible: basically I have a config.py file that contains some variables; let's say var1 is one of them. I want that by doing config.var1 = 'some value', what actually happens is config.some_method('some value'). Is that possible?
Wrap a 1 argument method as a variable change in Python
0
0
0
27
41,101,246
2016-12-12T12:44:00.000
1
0
0
0
python,mysql
47,078,137
3
false
0
0
It's been a while since this question was asked, but I'll post the solution I found anyway. Using the official MySQL connector for Python, rows are not fetched from the server until requested; as a result, if there are remaining rows, trying to close the connection throws an exception. One option is to use a buffered cursor: a buffered cursor reads all rows from the server, so the cursor can be closed at any point. However, this method comes with a memory cost. There is a hard limit of one open cursor per connection for the mysql connector; trying to open another cursor raises an exception.
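A minimal sketch of the buffered cursor; connection parameters and the query are hypothetical:

```python
import mysql.connector

cnx = mysql.connector.connect(user="user", password="secret",
                              host="localhost", database="mydb")  # hypothetical
cur = cnx.cursor(buffered=True)  # all rows are fetched from the server up front
cur.execute("SELECT id FROM items WHERE 1 = 0")  # hypothetical query returning 0 rows
rows = cur.fetchall()            # [] -- and no unread-result state is left behind
cur.close()                      # closes cleanly even though nothing was read row by row
cnx.close()
```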
1
1
0
I am using the python mysql connector in a little script. The problem I'm facing is: when executing a select statement that returns 0 rows, I'm unable to close the cursor. When closing the cursor, "mysql.connector.errors.InternalError: Unread result found" is triggered. However, calling fetchall() results in an "mysql.connector.errors.InterfaceError: No result set to fetch from." error. So basically, I'm unable to close the cursor because of some unread data and I'm unable to read any data because there is no data to read.
How to properly handle empty data set with python mysql connector?
0.066568
1
0
2,753
41,102,210
2016-12-12T13:40:00.000
2
0
0
0
python,user-interface,pyqt,pyqt4
41,130,814
1
false
0
1
Mouse tracking needs to be enabled on the QTreeWidget to make the status tips work. For this you need to call setMouseTracking(True) on your QTreeWidget instance.
1
1
0
I was told the setToolTip() doesn't work on all platforms, therefore, I'm trying to use setStatusTip() to display a status message instead. But unfortunately, with the TreeWidgetItem the status bar is not getting updated. Nothing happens when I hover over the item.
Status message not appearing when using setStatusTip() function with a TreeWidgetItem
0.379949
0
0
334
41,103,817
2016-12-12T15:09:00.000
1
0
0
0
python,tkinter
41,110,070
2
false
0
1
On both Linux and Windows, Tcl will use the value of the LANG environment variable if set to initialize the locale. So if you set LANG=en you will get an English locale. If this is not set, then on Windows it then examines the registry to identify the locale in use and configures from that. You can find the Tcl code doing this in the msgcat.tcl file (search for registry). It will use LC_ALL, LC_MESSAGES or LANG in that order from the environment.
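A minimal sketch, assuming that setting the variable from Python before Tk starts is early enough for Tcl's msgcat to pick it up; the dialog call is just for demonstration:

```python
import os
os.environ["LANG"] = "en"        # must be set before the Tcl interpreter initializes

import Tkinter as tk             # Python 2 naming, matching the tkMessageBox in the question
import tkFileDialog

root = tk.Tk()
tkFileDialog.askopenfilename()   # dialog buttons should now come up in English
```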
1
0
0
I am currently developing a cross platform GUI application with Python, and tkinter. Although I'm German, I want all button labels to be displayed in English. By now it is a strange mix because the messages in tkMessageBoxes are in English but button labels and file dialog boxes are in German. Is there a way to force Python / tkinter to use English labels only?
Forcing tkinter to 'speak' English
0.099668
0
0
988
41,104,303
2016-12-12T15:36:00.000
1
0
0
0
python,string,tkinter
41,104,418
2
false
0
1
Use setattr: setattr(self, '_prentice', tk.Button(master)).
1
0
0
I'm trying to figure out how to turn a string into the name of a Button. For example, if I had the string self._name = 'self._prentice', I would like to make self._prentice = tk.Button(master). I need to do it this way as the string could be any name, and I need a way of storing the widget so I can later pack or destroy it. I've tried using exec, but I could only get it to work for integers and not buttons.
Changing a string into the name of a button
0.099668
0
0
44
41,105,062
2016-12-12T16:18:00.000
1
0
1
0
python,c++,visual-studio-2012,pycharm,boost-python
41,107,779
1
true
0
1
I changed my VS12 project output file extension to .pyd (Right Click on Project -> Properties -> Linker -> General -> Output File -> changed to $(OutDir)$(TargetName).pyd) and now I can load the library in Python from the command line, but still couldn't from PyCharm. After that, I added the directory where the .pyd (along with the .lib and .dll) is located to the Path variable. Then PyCharm was able to successfully load and run my custom boost-python library. UPDATE The pyd that Python understands and can load is simply the dll renamed to pyd. Therefore, an even cleaner way is to leave the VS12 project as is, generating the original $(OutDir)$(TargetName)$(TargetExt), i.e. the dll output, and simply add a Post-Build Event that copies the dll into a pyd: (Right Click on Project -> Properties -> Configuration Properties -> Build Events -> Post-Build Event -> Command Line) and add copy $(OutDir)$(TargetName)$(TargetExt) $(OutDir)$(TargetName).pyd
1
0
0
I have a VS12 project and exposed some of the classes to Python using boost-python. After some linkage issues my project finally builds correctly and generates a MySDK.lib and MySDK.dll. I called the Boost Python module the same as the library i.e. BOOST_PYTHON_MODULE(MySDK). Are these .lib and .dll all I need to use MySDK from Python? I'm using Pycharm Community but can't find a way to import the generated MySDK.lib and MySDK.dll as a Python library module. Saddly there isn't much information of what to do after the Boost Python coding exercise.
Boost-Python C++ project builds, How to use the new library from Python?
1.2
0
0
610
41,107,643
2016-12-12T18:57:00.000
0
1
0
0
python,email,outlook
41,108,481
1
false
0
0
SMTP is only for sending. To receive (read) emails, you will need to use other protocols, such as POP3, IMAP4, etc.
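A minimal sketch of polling for a reply with Python's built-in imaplib, assuming an IMAP-enabled account; server, credentials and the reply convention are hypothetical:

```python
import email
import imaplib

mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login("me@gmail.com", "app-password")  # hypothetical credentials
mail.select("inbox")

typ, data = mail.search(None, "UNSEEN")     # ids of unread messages
for num in data[0].split():
    typ, msg_data = mail.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    print(msg["From"], msg["Subject"])      # inspect the "ok" / "quit" replies here
mail.logout()
```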
1
0
0
I know how to send email through Outlook/Gmail using the Python SMTP library. However, I was wondering if it was possible to receive replys from those automated emails sent from Python. For example, if I sent an automated email from Python (Outlook/Gmail) and I wanted the user to be able to reply "ok" or "quit" to the automated email to either continue the script or kick off another job or something, how would I go about doing that in Python? Thanks
Python SMTP Send Email and Receive Reply
0
0
1
308
41,108,551
2016-12-12T19:57:00.000
1
0
0
0
python,python-requests,werkzeug
59,821,311
3
false
0
0
A PyPI package now exists for this so you can use pip install requests-flask-adapter.
1
11
0
I'm trying to make a usable tests for my package, but using Flask.test_client is so different from the requests API that I found it hard to use. I have tried to make requests.adapters.HTTPAdapter wrap the response, but it looks like werkzeug doesn't use httplib (or urllib for that matter) to build it own Response object. Any idea how it can be done? Reference to existing code will be the best (googling werkzeug + requests doesn't give any useful results) Many thanks!!
requests-like wrapper for flask's test_client
0.066568
0
1
1,360
41,109,851
2016-12-12T21:25:00.000
1
0
1
1
python,pdcurses,unicurses
49,979,004
1
false
0
0
To allow the import, pdcurses.dll needs to be located in the Python folder, for example C:\python36. To run a Python script which imports and executes unicurses modules, pdcurses.dll also needs to be located in the same folder as the script you are executing, so it needs to be located in 2 places.
1
3
0
When trying to import the UniCurses package, I receive the error "UniCurses initialization error - pdcurses.dll not found." I have downloaded the pdcurses distributions (specifically pdc34dllw.zip) and extracted the files to: *\Python\Lib\site-packages (Where unicurses.py is located) *\Python\Lib\site-packages\unicurses *\Python None of these have solved the problem.
UniCurses pdcurses.dll Error
0.197375
0
0
898
41,111,612
2016-12-12T23:53:00.000
0
1
1
0
python-3.x,unit-testing
41,235,098
1
false
0
0
The easiest way to do this is to not even evaluate the test. If you are going to just return that it passed anyway, just remove the if statement from your code, and replace it with just the return.
1
0
0
I'm thinking of ways to break unit test and can't figure out a way for a unit test to return positive even when it shouldn't all the time regardless of whether you know the tests or not.
Is there a way to make unit test return that every test passed even when it didn't? (Python 3)
0
0
0
20
41,111,623
2016-12-12T23:54:00.000
0
0
1
0
python,python-3.x,pip,python-requests,importerror
41,123,054
1
true
0
0
I resolved this issue by just reinstalling requests to the c: drive (which didn't fully solve it) and then just moving the requests folder to c:\Lib which now works fine and allows me to import it properly.
1
0
0
I am currently having trouble using requests. I use the import requests command yet I get the import error that says no module named 'requests'. To install it I first installed SetupTools, then pip and finally used the pip install requests command. This didn't work so I ended up uninstalling and reinstalling (with pip3 and pip3.5 commands) yet it still doesn't work. I am using python 3.5 which is installed directly to my c:\ drive. Thank you in advance.
Resolution for Import Error when trying to import 'Requests'
1.2
0
1
44
41,111,889
2016-12-13T00:26:00.000
2
0
1
0
python,dask
41,176,064
1
true
0
0
Update: You can do this to kill the workers started by the multiprocessing scheduler:

    from dask.context import _globals
    pool = _globals.pop('pool')  # remove the pool from globals to make dask create a new one
    pool.close()
    pool.terminate()
    pool.join()

First answer: For tasks that consume a lot of memory, I prefer to use the distributed scheduler, even on localhost. It's very straightforward. Start the scheduler in one shell:

    $ dask-scheduler
    distributed.scheduler - INFO - -----------------------------------------------
    distributed.scheduler - INFO - Scheduler at: 1.2.3.4:8786
    distributed.scheduler - INFO - http at: 1.2.3.4:9786
    distributed.bokeh.application - INFO - Web UI: http://1.2.3.4:8787/status/
    distributed.scheduler - INFO - -----------------------------------------------
    distributed.core - INFO - Connection from 1.2.3.4:56240 to Scheduler
    distributed.core - INFO - Connection from 1.2.3.4:56241 to Scheduler
    distributed.core - INFO - Connection from 1.2.3.4:56242 to Scheduler

Start the worker in another shell; you can adjust the parameters accordingly:

    $ dask-worker --nprocs 8 --nthreads 1 --memory-limit .8 1.2.3.4:8786
    distributed.nanny - INFO - Start Nanny at: 127.0.0.1:61760
    distributed.nanny - INFO - Start Nanny at: 127.0.0.1:61761
    distributed.nanny - INFO - Start Nanny at: 127.0.0.1:61762
    distributed.nanny - INFO - Start Nanny at: 127.0.0.1:61763
    distributed.worker - INFO - Start worker at: 127.0.0.1:61765
    distributed.worker - INFO - nanny at: 127.0.0.1:61760
    distributed.worker - INFO - http at: 127.0.0.1:61764
    distributed.worker - INFO - Waiting to connect to: 127.0.0.1:8786
    distributed.worker - INFO - -------------------------------------------------
    distributed.worker - INFO - Threads: 1
    distributed.nanny - INFO - Start Nanny at: 127.0.0.1:61767
    distributed.worker - INFO - Memory: 1.72 GB
    distributed.worker - INFO - Local Directory: /var/folders/55/nbg15c6j4k3cg06tjfhqypd40000gn/T/nanny-11ygswb9
    ...

Finally, use the distributed.Client class to submit your jobs:

    In [1]: from distributed import Client
    In [2]: client = Client('1.2.3.4:8786')
    In [3]: client
    <Client: scheduler="127.0.0.1:61829" processes=8 cores=8>
    In [4]: from distributed.diagnostics import progress
    In [5]: import dask.bag
    In [6]: data = dask.bag.range(10000, 8)
    In [7]: data
    dask.bag
    In [8]: future = client.compute(data.sum())
    In [9]: progress(future)
    [########################################] | 100% Completed | 0.0s
    In [10]: future.result()
    49995000

I found this way more reliable than the default scheduler. I prefer to explicitly submit the task and handle the future so I can use the progress widget, which is really nice in a notebook; you can also still do other work while waiting for the results. If you get errors due to memory issues, you can restart the workers or the scheduler (start all over again), use smaller chunks of data and try again.
1
3
0
After using the dask multiprocessing scheduler for a long period of time, I noticed that the python processes started by the multiprocessing scheduler take a lot of memory. How can I restart the worker pool?
How to terminate workers started by dask multiprocessing scheduler?
1.2
0
0
1,816
41,114,875
2016-12-13T06:28:00.000
1
0
0
0
java,python,scala,apache-spark,apache-spark-sql
53,483,109
5
false
0
0
Yes, the date_sub() function is the right one for this question. However, there's an error in the selected answer: "Return type: timestamp". The return type should be date instead; the date_sub() function trims any hh:mm:ss part of the timestamp and returns only a date.
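A minimal sketch in PySpark, keeping to the document's Python; the session setup is illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("SELECT date_sub(current_date(), 1) AS yesterday").show()
# +----------+
# | yesterday|
# +----------+
# |2016-12-12|   (a date, with no hh:mm:ss part)
# +----------+
```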
1
21
0
How to get current_date - 1 day in sparksql, same as cur_date()-1 in mysql.
How to get today -"1 day" date in sparksql?
0.039979
1
0
78,284
41,117,150
2016-12-13T09:05:00.000
0
0
0
1
python,postgresql,pandas,vps
41,165,424
1
true
0
0
After investigating countless possible solutions (creating a tunnel to forward a port from my local machine to the server so it can access the 3rd party app; modifying all my Python code to manually insert the data from my local machine to the server using psycopg2 instead of pandas to_sql; creating a Docker container for the 3rd party app that can be run on the server; and several other dead ends or convoluted, less than ideal solutions), the answer was to simply install the 3rd party app on the server using wine and then ssh into the server with the -X flag. I can therefore use the GUI on my local machine while the app runs on the server.
1
0
0
I have an issue which may have two possible approaches to a solution; I'm open to either. I use a 3rd party application to download data daily into pandas dataframes, which I then write into a local postgres database. The dataframes are large, but since the database is local I simply use df.to_sql and it completes in a matter of seconds. The problem is that I have now moved the database to a remote linux server (VPS), and the same to_sql now takes over an hour. I have tried various values for chunksize but that doesn't help much. This wouldn't be an issue if I could simply install the 3rd party app on that remote server, but the server OS does not use a GUI. Is there a way to run that 3rd party app on the server even though it requires a GUI? (Note: it is a Windows app, so I use wine to run it on my local linux machine and would presumably need to do that on the server as well.) If there is no way to run an app which requires a GUI on the VPS, then how should I go about writing these dataframes to the VPS from my local machine in a way that doesn't take over an hour? I'm hoping there's some way to write the dataframes in smaller pieces, or to use something other than to_sql more suited to this. A really clunky, inelegant solution would be to write the dataframes to csv files, upload them to the server using ftp, then run a separate python script on the server to save the data to the db. I guess that would work, but it's certainly not ideal.
Writing data to remote VPS database
1.2
1
0
107
41,118,478
2016-12-13T10:12:00.000
2
0
0
0
python,python-2.7,sockets,windows-xp
41,181,086
1
true
0
0
I made it work; maybe someone else would like to know. The solution is simple: use an older version of Python. It worked with 2.7.2, a version from 2011. Probably, because Windows XP is deprecated, some functions changed and are no longer compatible. That's why I got an error on import socket.
1
1
0
I have a problem I haven't been able to handle for a few days. I made a simple app, and it's important to install it on every machine in my company. I decided on Python, because we have some very old machines with Windows XP on them, without SP3. They are cut off from the internet, working only over LAN. This Python app works well on Windows 10, 8 and 7 (I compiled it using Py2Exe). But I also need to run it on Windows XP, sometimes almost pure, just SP2, with only some antivirus or firewall included. All I get in return is "system cannot launch this application", with no error code. I compiled it on Windows 10. I uninstalled all Python things and reinstalled, to be sure I had the 32-bit version. The app uses modules like sockets and the mysql connector module. At the start I had some problems with py2exe, because it wanted some strange system DLLs, but when I used "dll_excludes" for some of them, everything compiled well and worked on Windows 10, 8 and 7, but not on XP. I also tried to install Python on XP, and I managed it; simple calc scripts work well, but it crashes as soon as I try to "import socket": error loading module. PATH is set as it should be, I guess. What else can I do to start my app on Windows XP SP2? Please give some advice.
Can't run Python EXE on Windows XP
1.2
0
0
2,193
41,124,843
2016-12-13T15:32:00.000
0
0
0
0
python,authentication,flask,flask-security
59,997,976
1
false
1
0
Off the top of my head, there are several ways you could approach this, none of them striking a nice balance between simplicity and effectiveness: One way could be to add a last_seen field to your User. Pick some arbitrary number(s) that could serve as a heuristic to determine whether someone is "active". Any sufficiently long gap in activity could trigger a reset of the active_login_count. This obviously has many apparent loopholes, the biggest I see at the moment being that users could simply coordinate logins and potentially rack up an unlimited number of active sessions without your application being any the wiser. It's a shame humans in general tend to use similar "logical" mechanisms to run their entire lives; but I digress... You could make this approach more sophisticated by trying to track the user's active IP addresses. Add an active_ips field and populate a list of (n) IPs, perhaps with some browser information etc. to try and fingerprint users' devices, and manage it that way. Another way is to use an external service, such as a Redis instance or even a database. Create up to (n) session ids that are passed around in the HTTP headers and which are checked every time the API is hit. No active session id, or if the next session id would constitute a breach of contract, no access to the app. Then you simply clear out those session ids at regular intervals to keep them fresh. Hopefully that gives you some useful ideas.
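For the Redis route, a hedged sketch of capping active session tokens per user; it assumes a reachable Redis instance and the redis package, and the key layout is made up:

```python
import uuid
import redis

r = redis.Redis()
MAX_SESSIONS = 3

def register_login(user_id):
    # issue a token and evict the oldest sessions beyond the cap
    token = uuid.uuid4().hex
    key = "sessions:%s" % user_id
    r.lpush(key, token)                 # newest token at the head
    r.ltrim(key, 0, MAX_SESSIONS - 1)   # keep only the newest MAX_SESSIONS
    return token

def is_session_active(user_id, token):
    return token.encode() in r.lrange("sessions:%s" % user_id, 0, -1)
```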
1
3
0
I'm trying to write a basic Flask app that limits the number of active logins a user can have, a la Netflix. I'm using the following strategy for now: Using Flask_Security, store an active_login_count field for my User class. Every time a successful login request is completed, the app increases the active_login_count by 1. If doing so makes the count greater than the limit, it calls logout_user() instead. This is a bad solution, because if the user loses her session (closed the incognito mode without logging out), the app hasn't been able to decrement her login count, and so future logins are blocked. One solution is to store the sessions on the server, but I don't know how I would go about authenticating valid sessions. Flask_Sessions is something I'm looking into, but I have no idea how to limit the number of active sessions. As per my understanding, in the default configuration, Flask generates new session cookies on every request to prevent CSRF attacks. How would I go about doing this?
How to limit number of active logins in Flask
0
0
0
1,075
41,126,100
2016-12-13T16:35:00.000
0
0
1
0
python,class
41,126,396
3
false
0
0
Try initializing it with self.data = None, or store the result in an instance variable and use it whenever you need it. Writing your own caching algorithm will only make this more complex; try to solve the issue with the language's built-in features, since a hand-rolled mechanism is more likely to introduce bugs.
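A sketch of the cache-on-first-use pattern being described; fetch_data and the field names are placeholders for the real API client, not a known library:

```python
class Profile(object):
    def __init__(self, name):
        self.name = name
        self._data = None  # filled on first use, so the API is hit only once

    @property
    def data(self):
        if self._data is None:
            self._data = self.fetch_data(self.name)  # the single API request
        return self._data

    def fetch_data(self, name):
        raise NotImplementedError("wrap your API client here")

    def get_emotions(self):
        return self.data.get("emotions")  # illustrative field name
```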
1
0
0
I would like some advice on how to best design a class and its instance variables. I initialize the class with self.name. However, the main purpose of this class is to retrieve data from an API, passing self.name as a parameter, and then parsing the data accordingly. I have a class method called fetch_data(self.name) that makes the API request and returns ALL data. I want to store this data into a class instance variable, and then call other methods on that variable. For example, get_emotions(json), get_personality_traits(json), and get_handle(json) all take the same dictionary as a parameter, assign it to their own local variables, and then manipulate it accordingly. I know I can make fetch_data(self.name) return data, and then call fetch_data(self.name) within the other methods, assign the return value to a local variable, and manipulate that. The problem is then I will need to call the API 5 times rather than 1, which I can't do for time and money reasons. So, how do I make the result of fetch_data(self.name) global so that all methods within the class have access to the main data object? I know this is traditionally done in an instance variable, but in this scenario I can't initialize the data since I don't have it until after I call fetch_data(). Thank you in advance!
How do I best store API data into a Python class instance variable?
0
0
0
1,302
41,129,504
2016-12-13T19:56:00.000
4
0
1
0
python,pycharm
41,566,309
5
false
0
0
You don’t mention what operating system you’re using, and it’s relevant here. If it’s OS X or macOS, you can press Shift+Cmd+G in the file selection dialog (when you’re choosing the location of a new local interpreter) to enter a path manually. (This is a standard macOS shortcut that works in any native file selection dialog.)
1
78
0
How do I use pyenv virtualenvs with PyCharm 2016.3? In earlier versions of PyCharm, I could easily set up a local interpreter to point to anything installed on my machine. My first idea was to add a .python-version file to the root of the project. I have pyenv-virtualenv installed, so this will activate and run the project with the correct environment automatically. However, PyCharm still doesn't see the correct interpreter, causing it to throw import and syntax errors. How can I select my local pyenv in the new PyCharm 2016.3? I used to be able to set the path as a variable; now I can only browse the path using a drop-down menu, and it doesn't seem to show hidden files like the default pyenv path ~/.pyenv/versions/{project}.
PyCharm with Pyenv
0.158649
0
0
48,759
41,129,537
2016-12-13T19:58:00.000
1
0
1
0
python,tkinter,exe
68,670,515
4
false
0
1
From The PyInstaller Documentation: Using a Console Window By default the bootloader creates a command-line console (a terminal window in GNU/Linux and Mac OS, a command window in Windows). It gives this window to the Python interpreter for its standard input and output. Your script's use of print and input() are directed here. Error messages from Python and default logging output also appear in the console window. An option for Windows and Mac OS is to tell PyInstaller to not provide a console window. The bootloader starts Python with no target for standard output or input. Do this when your script has a graphical interface for user input and can properly report its own diagnostics. As noted in the CPython tutorial Appendix, for Windows a file extension of .pyw suppresses the console window that normally appears. Likewise, a console window will not be provided when using a myscript.pyw script with PyInstaller.
2
16
0
I want to make my program executable. I used TkInter to write the GUI, and I read somewhere that you have to save your file as .pyw to hide the console when the program is executed. The problem is that after making it an executable with PyInstaller, the console shows up again, even though the file converted was .pyw. How can I hide the console also in the .exe file?
Hide the console of an .exe file created with PyInstaller
0.049958
0
0
37,029
41,129,537
2016-12-13T19:58:00.000
50
0
1
0
python,tkinter,exe
41,129,758
4
true
0
1
Did you try the --windowed command-line flag?
2
16
0
I want to make my program executable. I used TkInter to write the GUI, and I read somewhere that you have to save your file as .pyw to hide the console when the program is executed. The problem is that after making it an executable with PyInstaller, the console shows up again, even though the file converted was .pyw. How can I hide the console also in the .exe file?
Hide the console of an .exe file created with PyInstaller
1.2
0
0
37,029
41,130,126
2016-12-13T20:37:00.000
0
0
1
0
python-3.x,machine-learning,signal-processing
49,290,726
1
false
0
0
I had the same problem; just run pip install tools.
1
0
1
I just installed scikits.talkbox and tried using it in my program, but I get the following error: ImportError: No module named 'tools'. How do I solve this problem?
No module named 'tools' while importing scikits.talkbox
0
0
0
1,498
41,134,378
2016-12-14T03:27:00.000
0
0
1
0
python,python-3.x,pyinstaller,cx-freeze
41,135,217
1
false
0
0
Have you tried the --console flag with pyinstaller?
1
0
0
I have a command line based program that I've turned into an executable using PyInstaller. I would like the program to launch a command prompt and run when clicked. Currently I can only get it to run from the command prompt. When I click it, a blank command prompt will open, remain open for a couple of seconds and then close. I'm stuck. Does anyone have any suggestions?
Getting pyinstaller .exe program to run from clicking file?
0
0
0
245
41,136,136
2016-12-14T06:27:00.000
10
0
1
0
python,sqlalchemy,sqldatatypes
41,136,521
1
false
0
0
From what I know: use varchar if you want to have a length constraint, and use string if you don't want to restrict the length of the text. The length field is usually required when the String type is used within a CREATE TABLE statement, as VARCHAR requires a length on most databases. Parameters: length - optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific. SQLAlchemy Docs
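A small illustration of the docs quoted above, assuming a declarative model; only the column with an explicit length maps cleanly to DDL everywhere:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))  # emits VARCHAR(50) in CREATE TABLE
    bio = Column(String)       # lengthless; some backends reject this in DDL
```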
1
8
0
I used to use Varchar for text strings of dynamic length. Recently I saw people also use String with a length to define it. What is the difference between them? Which one is better to use?
What is the difference between Varchar and String in sqlalchemy's data type?
1
1
0
29,995
41,136,853
2016-12-14T07:15:00.000
11
0
0
0
python,machine-learning
41,137,195
2
true
0
0
In fact, there is no difference in the effect of the two approaches (they are really two wordings for the same thing) on your regression. In either case, you have to make sure that one of your dummies is left out (i.e. serves as the base assumption) to avoid perfect multicollinearity among the set. For instance, if you want to take the weekday of an observation into account, you only use 6 (not 7) dummies, with the one left out serving as the base variable. When using one-hot encoding, your weekday variable is present as a categorical value in one single column, effectively having the regression use the first of its values as the base.
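In pandas terms the two wordings differ only by drop_first; a quick sketch:

```python
import pandas as pd

df = pd.DataFrame({"week": [1, 2, 3, 4, 5, 6, 7]})

one_hot = pd.get_dummies(df["week"], prefix="week")                   # 7 columns
dummies = pd.get_dummies(df["week"], prefix="week", drop_first=True)  # 6 columns; week_1 is the base
```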
1
14
1
I'm making features for a machine learning model, and I'm confused about dummy variables versus one-hot encoding. For instance, take a categorical variable 'week' ranging 1-7. With one-hot encoding, week = 1 is encoded as 1,000,000, week = 2 as 0,100,000, and so on. But I can also make a dummy variable 'week_v'; in this way I must set a hidden base variable, so feature week_v = 1 is 100,000, week_v = 2 is 010,000, ... and week_v = 7 does not appear. So what's the difference between them? I'm using a logistic model and then I'll try gbdt.
What's the difference between dummy variable and one-hot encoding?
1.2
0
0
14,355
41,139,136
2016-12-14T09:33:00.000
1
0
0
1
python,linux,bash,chromium
41,139,324
1
true
0
0
In your script that runs on startup, try DISPLAY=:0 <command> & To clarify, DISPLAY=:0 simply sets which X display your window opens on, with :0 representing the first display of the local machine.
1
0
0
I have a bash script that I've defined to run in startup, which runs a python script that waits for a command from another process, and when it gets it, it should open a chromium window with a certain URL. When I run this script manually it works fine, but when the script runs from startup, I get an error (displayed in syslog): Gtk: Can't open display I guess that's because it's running in a startup mode so it doesn't actually have a display to "lean" on... I was wondering if there's any way to get this work, anyway? Thanks in advance
Linux - Open chromium from a script that runs on startup
1.2
0
0
227
41,139,417
2016-12-14T09:47:00.000
0
1
0
0
python,parsing,pdf,pdfminer
41,146,378
1
false
0
0
The best method I found so far is to use the HTMLConverter class in the pdfminer lib. This allows you to convert the PDF to HTML format, which makes it easier to figure out tables, rows and columns. It worked in my case at least, and it may work with all kinds of tables in a PDF file.
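A hedged sketch of that conversion with pdfminer.six's high-level API (older pdfminer releases expose the same conversion through the bundled pdf2txt.py script, e.g. pdf2txt.py -t html input.pdf); the filename is a placeholder:

```python
from io import BytesIO
from pdfminer.high_level import extract_text_to_fp

with open("report.pdf", "rb") as fin:
    out = BytesIO()
    extract_text_to_fp(fin, out, output_type="html")
html = out.getvalue().decode("utf-8")
# parse `html` (e.g. with BeautifulSoup) to locate the "company" column
```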
1
1
0
I'm parsing PDFs with pdfMiner, using it as a library in my python script. In most of these PDFs there is a table, where one of the columns is named "company". Is there a way to: 1) detect the existence of that table in the PDF. 2) get all the company names (i.e. all the entries in the 2nd column of the table). Thanks for your help AC
pdfminer - accessing PDF table
0
0
0
416
41,148,068
2016-12-14T16:55:00.000
0
0
1
0
python,argparse
41,148,465
2
false
0
0
You can use nargs=7, for example, and the flag will then accept exactly 7 values (args.x is None if the flag is not entered). For example, add parser.add_argument('-x', nargs=7, help='testing') to your argument list and print args.x. Say your file was a.py: $ python a.py > None $ python a.py -x 1 > a.py: error: argument -x: expected 7 argument(s) $ python a.py -x 1 2 3 4 5 6 7 > ['1', '2', '3', '4', '5', '6', '7']
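For the variable-fruit case in the question, one hedged alternative is a single repeatable flag rather than a flag per fruit; the flag name is illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
# each --fruit takes a fruit name followed by its files, and may repeat
parser.add_argument("--fruit", nargs="+", action="append", default=[],
                    help="fruit name followed by its files; repeatable")
args = parser.parse_args()

fruits = {entry[0]: entry[1:] for entry in args.fruit}
# python a.py --fruit apple a1.txt a2.txt --fruit banana b1.txt
# -> {'apple': ['a1.txt', 'a2.txt'], 'banana': ['b1.txt']}
```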
1
0
0
This is likely beyond the scope of the argparse module, but I'll try to describe my issue using an example. I have some fruits and some files attributed to each fruit. Specifically Apple, Banana, and Orange. Apple has 10 files associated with it, Banana has 7, and Oranges has 9. I can hardcode -a, -b, -o each taking nargs='+' to handle this toy example in a Python command-line script. But say I have a variable number of fruits, or a very large number (maybe 50). It would be crazy to hardcode a flag for each type, what is the best solution here?
Variable Number of narg Arguments Python Argparse
0
0
0
1,055
41,150,975
2016-12-14T19:48:00.000
12
0
0
1
python,linux,centos
47,796,696
4
false
0
0
You could also setup a "watch" in a separate window to constantly monitor Python processes as you run a script: watch -n 1 "ps u -C python3". Particularly useful when developing with multiprocessing.
1
49
0
How to display list of running processes Python with full name and active status? I tried this command: pgrep -lf python
How to display list of running processes Python?
1
0
0
121,162
41,154,217
2016-12-14T23:51:00.000
6
0
1
0
javascript,python,html,pycharm,jetbrains-ide
41,154,218
1
true
1
0
There are two solutions: You can disable HTML / CSS / JS inspections in Preferences -> Editor -> Inspections. If your project has a decent amount of third-party libraries (jquery, bootstrap etc.), this might be the reason. Don't index directories with these libraries: right click on a folder and then choose Mark Directory As -> Excluded. Don't forget to clear the cache: File -> Invalidate Caches / Restart.
1
6
0
PyCharm becomes nearly unresponsive in 5 minutes of editing HTML / CSS / JS code – even typing a single character causes a several seconds lag. However, everything is absolutely fine when I work with Python. I don't close PyCharm for days and it works smooth. Restarting PyCharm helps only for a few minutes. I'm on PyCharm 5.0.5 Professional & OS X 10.11.
PyCharm is terribly slow when editing HTML / CSS / JS
1.2
0
0
2,026
41,154,360
2016-12-15T00:10:00.000
1
0
0
0
python,api,pdf
41,170,783
2
false
1
0
I wanted to share my solution to this, but give credit to @CoolqB for the answer. The key was opening the files in binary mode: 'rb' for reading and 'wb' for writing (the codecs module is not actually needed for binary I/O; the built-in open works). Here are the final code snippets: Client request: response = requests.get('https://www.mywebsite.com/_api_call') Server response: f = open(file_name, 'rb').read() return f Client handle: with open(file_to_write, 'wb') as f: f.write(response.content) (the with block closes the file automatically) And all is right with the world.
2
0
0
We have two servers (client-facing, and back-end database) between which we would like to transfer PDFs. Here's the data flow: User requests PDF from website. Site sends request to client-server. Client server requests PDF from back-end server (different IP). Back-end server sends PDF to client server. Client server sends PDF to website. 1-3 and 5 are all good, but #4 is the issue. We're currently using Flask requests for our API calls and can transfer text and .csv easily, but binary files such as PDF are not working. And no, I don't have any code, so take it easy on me. Just looking for a suggestion from someone who may have come across this issue.
Transfer PDF files between servers in python
0.099668
0
1
1,341
41,154,360
2016-12-15T00:10:00.000
1
0
0
0
python,api,pdf
41,154,455
2
true
1
0
As you said you have no code, that's fine, but I can only give a few suggestions. I'm not sure how you're sending your files, but I'm assuming that you're using Python's open function. Make sure you are reading the file as bytes (e.g. open('<pdf-file>', 'rb')). Cut the file up into chunks and send it as one file; this way it doesn't freeze or get stuck. Try smaller PDF files; if this works, definitely try suggestion #2. Use threads; you can multitask with them. Have a download server; this can save memory and potentially save bandwidth, and it also lets you skip sending the PDF back through Flask. Don't use PDF files if you don't have to; use a library to do it for you. Hope this helps!
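A hedged sketch of the chunking suggestion on the receiving side, streaming the response with requests; URL and filename are placeholders:

```python
import requests

resp = requests.get("https://backend.example.com/report.pdf", stream=True)
resp.raise_for_status()
with open("report.pdf", "wb") as f:  # binary mode matters for PDFs
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)
```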
2
0
0
We have two servers (client-facing, and back-end database) between which we would like to transfer PDFs. Here's the data flow: User requests PDF from website. Site sends request to client-server. Client server requests PDF from back-end server (different IP). Back-end server sends PDF to client server. Client server sends PDF to website. 1-3 and 5 are all good, but #4 is the issue. We're currently using Flask requests for our API calls and can transfer text and .csv easily, but binary files such as PDF are not working. And no, I don't have any code, so take it easy on me. Just looking for a suggestion from someone who may have come across this issue.
Transfer PDF files between servers in python
1.2
0
1
1,341
41,155,598
2016-12-15T02:47:00.000
0
0
0
0
python-2.7,sockets,actionscript-3,bit-shift,bitwise-or
41,157,447
1
false
1
0
The only way I know is to send a negative number from the server to get the same result; otherwise, I don't think you can reverse it on the server (I guess).
1
0
0
I have been tasked with creating a server for a client. I have all of the client's code, so using it, I must create a server. The client has a particular piece of code in a function called readInt(num1): return num1 << 24 | num1 << 16 | num1 << 8 | num1, which is called every time the client expects a type = int in the data received from a packet, and I just can't haggle with it. I tried what comes to mind first - just reverse it, right? If the int needs to be 3, do 3 >> 24, but no luck. My mind doesn't work well mathematically, and I'm not a good problem solver, so if I want, say, 3 on the server to read as 3 on the client, what formula would I use server-side to achieve this? The client is in Adobe ActionScript 3, and the server in Python 2.7.12.
Bitwise in a client - server scenario miscalculating
0
0
0
35
41,155,765
2016-12-15T03:05:00.000
1
0
1
0
python,arrays,list,loops,iteration
41,155,912
4
false
0
0
You can just subtract the number rolled from the distance between the player and the end of the board. If the difference is less than 0, send the player back to the start of the board and add the absolute value of the difference to the player's position.
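Equivalently, the wrap-around is a single modulo over a 40-square board (positions 0-39):

```python
BOARD_SIZE = 40

def move(position, roll):
    # wrap past the last square back to the start of the board
    return (position + roll) % BOARD_SIZE

move(35, 6)  # -> 1, matching the example in the question
```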
1
2
0
I'm currently in the process of learning how to use Python, and I'm trying to build a Monopoly simulator (for starters, I just want to simulate how one player moves about on the board). How do I iteratively go through the list of board positions, e.g. range(0, 40)? So, if the player is currently in position 35, and rolls a 6, he ends up in position 1. Hopefully you're able to help! All the best :)
Iteratively searching through a looped list
0.049958
0
0
49
41,158,067
2016-12-15T06:50:00.000
0
0
0
0
python,selenium,tkinter,wxpython,selenium-chromedriver
41,159,150
2
false
0
0
You can call the AutoIt3 framework from Python, even to open the File Open dialog, fill in the values, and press OK, or do whatever you need with the windows. AutoIt3 has a DLL that can be loaded and called using ctypes. That's what I did in one or two projects. If I understand your question correctly, wxPython or Tk won't help you: they can be used to make the windowed UI, not to control other programs.
1
0
0
With Selenium Webdriver, I have to upload some files on a website but since the pop-up window for browsing the file location is handled by the operating system and not the browser, I cannot automate that part with Selenium. So I want to know which framework or module I need to use to work with system windows of Windows OS. Can tkInter or wxPython be used for this? My script will be used on Windows 7, 8 & 10.
How do I handle the system windows with Python on Windows OS?
0
0
1
253
41,163,150
2016-12-15T11:33:00.000
-2
0
1
0
python,anaconda,python-3.5,packaging,keras
41,163,228
4
false
0
0
Navigate to the Scripts folder inside your Anaconda installation and install it with the pip command (pip install keras).
1
1
1
Python 3.5: I am trying to find the command to install the Keras deep learning package for Anaconda. The command conda install -c keras does not work; can anyone explain why?
Package installation of Keras in Anaconda?
-0.099668
0
0
6,149
41,163,207
2016-12-15T11:36:00.000
0
1
0
0
python,eclipse,pydev
41,178,965
1
true
0
0
I got it working. It turns out xlwt doesn't support Python 3.x; they have a different package, xlwt-future, for 3.x versions. The new one is working now.
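For reference, a minimal xlwt write once the right package for your Python version is installed; filenames are illustrative:

```python
import xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet("Sheet1")
ws.write(0, 0, "hello")   # row, column, value
wb.save("out.xls")        # xlwt produces the legacy .xls format
```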
1
0
0
I am using the PyDev plugin for Eclipse to create a small Python script. I need to write some data into Excel sheets using Python. Searching the internet, I found xlwt to be the best solution for this. I downloaded and unpacked the xlwt package and installed it using easy_install, but after this I am still not able to import the package into my PyDev project in Eclipse. Is there something that I am missing here? If not xlwt, is there some other way in which I can write data to Excel?
Working with XLWT in pydev
1.2
0
0
97
41,163,658
2016-12-15T11:59:00.000
1
0
0
0
python,uber-api
41,212,414
1
true
1
0
"Does uber have an API which let user download invoice pdf?" - No, that API does not exist.
1
0
0
I have been searching for a way to get my past ride invoices. Is there any API I can use to get my rides' info/invoices? I don't want to click on every ride by hand; I just want to automate the boring stuff.
Does uber have an API which let user download invoice pdf?
1.2
0
1
106
41,164,620
2016-12-15T12:50:00.000
4
0
0
0
python,c,python-c-extension
41,164,724
2
false
0
0
No. A segmentation fault means the C code has performed invalid memory access; there's no guarantee the one that failed was the only one that had gone awry. This means you can't trust anything writable within that process's virtual memory space, including the Python runtime and memory allocation structures. Furthermore, there's a decent chance the failing call wasn't the first to do the wrong thing; it likely did so because something else had already corrupted its state. This is why memory access bugs are hard to debug. Python has less errors of this nature because it relies on data types that do understand their bounds, though if you break out of that shell using e.g. ctypes it can be just as fragile and dangerous.
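Consistent with the point above (the crashed process cannot be trusted, so don't try to recover in-process), a common mitigation is to fence the fragile call off in a child process and watch its exit code; a hedged sketch where risky_call is a placeholder for the extension function:

```python
import multiprocessing as mp

def _worker(queue, arg):
    queue.put(risky_call(arg))  # placeholder for the C-extension call

def safe_call(arg, timeout=30):
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(queue, arg))
    proc.start()
    proc.join(timeout)
    if proc.exitcode != 0:  # a segfault surfaces as a negative exit code
        raise RuntimeError("extension crashed (exit code %s)" % proc.exitcode)
    return queue.get()
```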
1
4
0
I'm using a third party python library with C extensions. There is one function I call all the time. When I call this function with some special argument (that should be valid) it segfaults. Let's assume for the moment that I can't file a bug report (it would take too long to resolve). Is there any way to handle such a problem from Python? I can provide the code, but the question is so general it won't change much...
Any way to handle C extension segmentation fault from Python code?
0.379949
0
0
1,238
41,165,008
2016-12-15T13:10:00.000
2
0
0
0
python-2.7,openerp,odoo-10
41,179,576
1
true
1
0
In an onchange context, it is the old value/record as it was before the change.
1
1
0
In the Odoo sale order line there is an onchange method _onchange_product_uom_qty() where they used the condition self.product_uom_qty < self._origin.product_uom_qty. Now my question is: what is self._origin used for, and what does it return? Note: 1. You can also find _origin in the hr_appraisal module inside the onchange_colleagues method, and somewhere in purchase. 2. I do not mean the origin field ("source document").
what is _origin in odoo
1.2
0
0
2,580
41,166,406
2016-12-15T14:19:00.000
6
1
0
0
python,android-studio
41,166,600
1
true
1
1
If you only need to run the scripts and not to edit them, the easiest way to do so is to configure an external tool through Settings | Tools | External Tools. This doesn't require the Python plugin or any Python-specific configuration; you can simply specify the command line to execute. External tools appear as items in the Tools menu, and you can also assign keyboard shortcuts to them using Settings | Keymap.
1
9
0
I am running Android Studio 2.2.3 I need to run Python scripts during my development to process some data files. The final apk does not need to run Python in the device. At the moment I run the scripts from a terminal or from PyDev for Eclipse, but I was looking for a way of doing it from Android Studio. There seems to be a way to do this, as when I right-click on a .py file and select the 'Run' option, an 'Edit configuration' dialog opens where I can configure several options. The problem is that I cannot specify the Python interpreter, having to select it from a combo box of already configured Python SDKs. While there is no interpreter selected, there is an error message stating "Error: Please select a module with a valid Python SDK". I managed to create Java modules for my project, but not Python ones (I do have the Python Community Edition plugin installed).Does anybody know how to achieve this?. TIA.
Running Python scripts inside Android Studio
1.2
0
0
25,025
41,166,722
2016-12-15T14:34:00.000
0
0
1
0
python
41,167,177
2
false
0
0
Using the strftime() and strptime() functions, you should be able to convert them.
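strptime() needs a known format string, so for a mixed bag of inputs the usual escape hatch is the python-dateutil package; a hedged sketch (it copes with many, though not all, formats; genuinely ambiguous ones like '16-January-23' may be guessed wrong):

```python
from dateutil import parser

for raw in ("2016-01-21 12:36:59.124", "2016-Jan-12 21:36:12"):
    dt = parser.parse(raw)               # infers the format per input
    print(dt.strftime("%Y-%m-%d %H:%M:%S"))
# 2016-01-21 12:36:59
# 2016-01-12 21:36:12
```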
1
0
0
Is it possible to convert all possible types of datetime formats to '%Y-%m-%d %H:%M:%S' without error. For example, '2016-01-21 12:36:59.124' or '2016-Jan-12 21:36:12' or '16-January-23 23:59:32.1should be converted to %Y-%m-%d %H:%M:%S format without raising any error?
How to convert all possible types of datetime formats using python module
0
0
0
76
41,171,238
2016-12-15T18:44:00.000
2
0
0
0
php,python,download,youtube,youtube-dl
41,175,121
1
true
0
0
I found an answer myself. I don't know why, but to generate a download URL the only thing to do is add the title at the end of the URL, so appending &title=Bruno+Mars+-+24K+Magic+%5BOfficial+Video%5D to the end of the first URL solved my problem.
1
1
0
Is there a way to get the direct download URL using youtube-dl? I tried it with youtube-dl -g https://www.youtube.com/watch?v=xxx It returns a URL that looks correct at the first sight, but it leads to a blank page that shows the video player. I want to extract the direct download URL like the example below. Link to player: https://r4---sn-fpoq-cgpl.googlevideo.com/videoplayback?mime=video%2Fmp4&key=yt6&itag=22&lmt=1476010871066368&source=youtube&upn=4B17cM_dGEU&ei=cNdSWM7CKMjMigbzv62ADA&ip=151.45.98.20&requiressl=yes&initcwndbps=695000&ms=au&mt=1481824045&mv=m&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&id=o-AHTIR887C2uesvqaEJtgUJhaFssm050soDMhiXfgLQ1f&pl=16&mm=31&mn=sn-fpoq-cgpl&ipbits=0&dur=226.649&ratebypass=yes&expire=1481845712&signature=34BB16F2B7F758CA44680A778F46AC49EBCA3BE3.B0452B32B62D4AA133BA2F59E78EFD66FEA6298D Direct Link to file: https://r4---sn-cu-n1qs.googlevideo.com/videoplayback?id=o-AFc2eS8nuL2DLN608O3_QxaAQWNDCeIRl9oGTvRo-fKM&ip=81.140.223.31&sparams=dur,ei,id,initcwndbps,ip,ipbits,itag,lmt,mime,mm,mn,ms,mv,pcm2,pcm2cms,pl,ratebypass,requiressl,source,upn,expire&ei=qdVSWOP7BIK7WaTrkdgM&dur=226.649&pl=25&initcwndbps=1175000&source=youtube&ratebypass=yes&pcm2cms=yes&requiressl=yes&pcm2=yes&expire=1481845257&key=yt6&mime=video/mp4&ipbits=0&lmt=1476010871066368&itag=22&mv=m&mt=1481823436&ms=au&mn=sn-cu-n1qs&mm=31&upn=6EUZ1r48CCw&signature=9F514204B90A32936912E5134B58BD8200177AF1.5A5C8BAC42B32C62229D078F0B566890F7DA524B&&title=Bruno+Mars+-+24K+Magic+%5BOfficial+Video%5D Is there a way to do that?
youtube-dl get direct download url
1.2
0
1
3,296
41,173,467
2016-12-15T21:17:00.000
0
0
0
1
python-2.7,homebrew,macos-sierra
41,184,494
1
false
0
0
Go to this folder and delete the packages that you want to remove: C:\Python27\Scripts. Thank you.
1
0
0
I have discovered that I have some old python packages installed in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python and want to remove these. It seems impossible to remove them, however; I tried "sudo rm -rf" etc. but get permission errors. In general I have a working "homebrew" installation and need to get rid of the packages. How do I go about it?
Removing old python packages on macos
0
0
0
107
41,175,714
2016-12-16T00:52:00.000
0
0
1
0
python,python-2.7,runtime-error,python-requests,pyinstaller
41,189,383
1
false
0
0
The one thing I forgot to try last night was to try a different version of PyInstaller. I uninstalled PyInstaller 3.2 and installed PyInstaller 3.1, and the executable now runs perfectly with no problems. I will be reporting the issue to the PyInstaller folks so they can figure out the problem on their end.
1
0
0
I'm having a problem creating a standalone executable using PyInstaller. Specifically, when I run pyinstaller -F module_name.py, it creates the executable, but the executable fails with ImportError: no module named requests. The module runs fine through the REPL. I know there's a few other questions out there with ImportErrors using PyInstaller; I've researched them and still can't get it working. Here's my setup: Pycharm 2016.3 on Windows 7 64-bit Python 2.7.12 32-bit in a virtual environment PyInstaller 3.2 This is a brand-new virtual environment, with the absolute minimum that I need to run this program. Requests and PyInstaller are both freshly installed from pip, and pip confirms that they're both up-to-date. I've checked my PYTHONPATH, and the path to the virtual environment is in there and correct, and requests is in the virtual environment's site_packages directory. I've tried adding --hiddenimports=requests, no change. I hope I'm not missing something obvious, but I'm about out of ideas. One thing I have noticed: the warncheck.txt file generated by PyInstaller shows a massive number of missing imports, many of them standard libraries (like re, and functools, and datetime). I don't know if this is a symptom of something else wrong. Any help would be appreciated.
PyInstaller executable raises ImportError: no module named requests
0
0
0
1,086
41,177,567
2016-12-16T05:03:00.000
1
0
0
1
python,windows
41,180,235
3
false
0
0
Server (Python 3): python -m http.server will create an HTTP server on port 8000 serving the current directory (on Python 2 the equivalent is python -m SimpleHTTPServer). Client (Python 2): python -c "import urllib; urllib.urlretrieve('http://x.x.x.x:8000/filename', 'filename')" (on Python 3 the function is urllib.request.urlretrieve), where x.x.x.x is your server IP and filename is what you want to download.
1
2
0
I don't want to use external modules like paramiko or fabric. Is there any Python built-in module through which we can transfer files from Windows? I know Linux has the scp command; is there a similar command for Windows?
How do I copy files from windows system to any other remote server from python script?
0.066568
0
0
5,431
41,177,692
2016-12-16T05:17:00.000
1
0
0
0
linux,django,python-3.x,ubuntu,sqlite
41,196,939
1
false
1
0
I figured out that this error was caused by me changing my python path to 3.5 from the default of 2.7.
1
0
0
So the issue is that apparently Django uses the sqlite3 that is included with python, I have sqlite3 on my computer and it works fine on its own. I have tried many things to fix this and have not found a solution yet. Please let me know how I can fix this issue so that I can use Django on my computer. :~$ python Python 3.5.2 (default, Nov 6 2016, 14:10:16) [GCC 6.2.0 20161005] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.5/sqlite3/__init__.py", line 23, in <module> from sqlite3.dbapi2 import * File "/usr/local/lib/python3.5/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named '_sqlite3' >>> exit()
How do I configure the sqlite3 module to work with Django 1.10?
0.197375
1
0
197
41,179,622
2016-12-16T07:52:00.000
0
0
0
0
python,mongodb,scrapy
41,194,752
1
false
1
0
I've got an answer. For example: use -s collection_name=abc with the scrapy crawl command, then get the parameter in pipelines.py using param = settings.get('collection_name'). This is also found on Stack Overflow, but I can't remember which ticket. Hope this helps someone facing the same problem.
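Spelled out, the two halves look roughly like this (a sketch; the pipeline class name is illustrative). Run scrapy crawl myspider -s collection_name=abc, then read the setting in pipelines.py:

```python
class MongoPipeline(object):
    def __init__(self, collection_name):
        self.collection_name = collection_name

    @classmethod
    def from_crawler(cls, crawler):
        # falls back to "default" when -s collection_name=... is not given
        return cls(crawler.settings.get("collection_name", "default"))
```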
1
0
0
The spider is to crawl info on a certain B2B website, and I want it to be a webserver, where a user submits a URL and then the spider starts crawling. The URL looks like apple.b2bxxx.com, which is a minisite on a B2B website where all the products are listed. The "apple" part might be different because different companies use different names for their minisites, and duplication is not allowed. On the backend, MongoDB stores the scraped data. What I have done is that I can collect info from a given URL, but all data are stored in the same db.collection. I know I can pass parameters using "-a" when running scrapy, but how should I use it? Should I change pipelines.py or the spider python file? Any suggestions?
Use a parameter as the collection name in a Scrapy project
0
0
0
79
41,181,453
2016-12-16T09:47:00.000
4
0
1
0
python,regex,python-3.x,unicode,python-2.x
41,181,548
2
true
0
0
All strings in Python 3 are unicode by default. Just remove the u and you should be fine. In Python 2, plain strings are byte strings by default, so we use the u prefix to mark them as unicode strings.
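Applied to the snippets from the question, with one wrinkle: unlike Python 2's ur'' strings, raw strings in Python 3 do not interpret \u escapes, so the ZWNJ replacement must live in a normal string (or be typed as the literal character):

```python
import re

# `text` is the input string from the question
text = re.sub(r'"(.*?)"', r'«\1»', text)
# r'\u200c' would stay a literal backslash-u in Python 3, hence the non-raw string:
text = re.sub(r'ه\sایم([\]\.،\:»\)\s])', 'ه\u200cایم\\1', text)
```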
1
2
0
I'm trying to convert my Python 2 script to Python 3. How do we do Regex with Unicode? This is what I had in Python 2 which works It replaces quotes to « and »: text = re.sub(ur'"(.*?)"', ur'«\1»', text) I have some really complex ones which the "ur" made it so easy. But it doesn't work in Python 3: text = re.sub(ur'ه\sایم([\]\.،\:»\)\s])', ur'ه\u200cایم\1', text)
Unicode Regex in Python 3 (from Python 2 Code)
1.2
0
0
1,204
41,182,536
2016-12-16T10:40:00.000
2
1
0
0
python,unit-testing,python-green
43,283,913
1
false
0
0
Sorry for the delay! I didn't see this question come in! Coverage 4.1 changed default internal behavior which disabled the missing line output in coverage through Green. This was fixed in Green 2.5.3. So either use Coverage < 4.1 or Green >= 2.5.3 and you should get the missing lines output.
1
3
0
I am using green for my Python project, and I have a problem finding out which parts of the code are still uncovered by the unit tests. When I run green using green -vvv --run-coverage, the result only shows the percentage of covered code. How can I know which parts of the code were not covered? Is there any additional syntax I can use to show the uncovered code?
Show code coverage in python using green
0.379949
0
0
231
41,185,646
2016-12-16T13:35:00.000
0
1
0
0
python,performance,python-3.x,artificial-intelligence
41,199,608
1
true
0
0
From your question, it's a bit unclear how your approaches would be implemented. But from the alpha-beta pruning, it seems as if you want to look at a lot of different game states, and in the recursion determine a "score" for each one. One very important observation is that recursion ends once a 4-in-a-row has been found. That means that at the start of a recursion step, the game board does not have any 4-in-a-row instances. Using this, we can intuitively see that the new piece placed in said recursion step must be a part of any 4-in-a-row instance created during the recursion step. This greatly reduces the search space for solutions from a total of 69 (21 vertical, 24 horizontal, 12+12 diagonals) 4-in-a-row positions to a maximum of 13 (3 vertical, 4 horizontal, 3+3 diagonal). This should be the baseline for your second approach. It will require a maximum of 52 (13*4) checks for a naive implementation, or 25 (6+7+6+6) checks for a faster algorithm. Now it's pretty hard to beat 25 boolean checks for this win-check I'd say, but I'm guessing that your #1 approach trades some extra memory-usage to enable less calculation per recursion step. The simplest way of doing this would be to store 8 integers (single byte is fine for this application) which represent the longest chains of same-color chips that can be found in any of the 8 directions. Using this, a check for win can be reduced to 8 boolean checks and 4 additions. Simply get the chain lengths on opposite sides of the newly placed chip, check if they're the same color as the chip, and if they are, add their lengths and add 1 (for the newly placed chip). From this calculation, it seems as if your #1 approach might be the most efficient. However, it has a much larger overhead of maintaining the data structure, and uses more memory, something that should be avoided unless you can pass by reference. Also (assuming that boolean checks and additions are similar in speed) the much harder approach only wins by a factor 2 even when ignoring the overhead. I've made some simplifications, and some explanations maybe weren't crystal clear, but ask if you have any further questions.
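A hedged sketch of the reduced check described above: after dropping a chip at (row, col), scan only the four lines through that cell. Board layout and names are illustrative (a 6x7 grid indexed board[row][col], 0 meaning empty):

```python
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def wins(board, row, col, player):
    for dr, dc in DIRECTIONS:
        count = 1                            # the newly placed chip
        for sign in (1, -1):                 # walk both ways from it
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < 6 and 0 <= c < 7 and board[r][c] == player:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 4:
            return True
    return False
```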
1
0
1
I am trying to make an AI following the alpha-beta pruning method for tic-tac-toe. I need to make checking a win as fast as possible, as the AI will go through many different possible game states. Right now I have thought of 2 approaches, neither of which is very efficient. 1. Create a large tuple scoring every possible 4-in-a-row win condition, and loop through that. 2. Using for loops, check horizontally, vertically, diagonally facing left, and diagonally facing right. This seems like it would be much slower than #1. How would someone recommend doing it?
fastest Connect 4 win checking method
1.2
0
0
812
41,185,693
2016-12-16T13:38:00.000
0
0
0
0
python,selenium,scrapy,web-crawler
46,603,023
1
false
1
0
You can set DOWNLOAD_DELAY = 0.25 in settings.py; it is the amount of time the downloader should wait before downloading consecutive pages from the same website. Alternatively, you can use time.sleep() to delay the spider until Selenium has its response.
1
0
0
I have a spider that I am using to crawl a site. I only need javascript for one piece of my item. So I scrape part of the site with scrapy then open the URL in selenium. While the URL is opening scrapy continues. How do I make scrapy to wait on my selenium logic to finish? Thanks in advance.
Scrapy and Selenium: Make scrapy wait for selenium?
0
0
1
505
41,188,326
2016-12-16T16:11:00.000
2
0
0
0
python,django,git,master
41,188,434
1
true
1
0
Possible causes: (Git) You forgot to git add files in the conceptCalendar branch, and they are still lying around when you checkout master. (Python) You have stale .pyc files in your project. Remove them. (Django) You forgot to makemigrations in the conceptCalendar branch. (Django) You ran migrate on the conceptCalendar branch, your database schema has changed, but now the code on master reflects the old schema. Rebuild your database, or migrate backwards. I'm betting my money on this last point. From the error you posted, I'm thinking that maybe a Form is extending ModelForm for a Model that changed in the other branch. Check that all fields exist in the underlying model, and in the database.
1
1
0
I am working on a Python/Django project, using Git to manage my version control. I recently made some changes on a branch called conceptCalendar3, and the changes I made broke my site. I committed the changes to that branch, and then checked out master, which I had branched from in order to create the conceptCalendar3 branch. However, when I now try to view my site from the localhost, on the master branch (on which I have not made any changes since it was last working), I get a message in the browser telling me that: This site can't be reached localhost refused to connect The Python console is displaying a lot of output with error messages that I've not seen before: File "/Users/.../Documents/Dev/moonhub/moon/moon/urls.py", line 27, in url(r'^costing/', include('costing.urls', namespace="costing")), File "/Users/.../.virtualenvs/moon/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 52, in include urlconf_module = import_module(urlconf_module) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/Users/.../Documents/Dev/moonhub/moon/costing/urls.py", line 2, in from . import views File "/Users/.../Documents/Dev/moonhub/moon/costing/views.py", line 2900, in from projects.views import get_project_folder File "/Users/elgan/Documents/Dev/moonhub/moon/projects/views.py", line 38, in from .forms import * File "/Users/.../Documents/Dev/moonhub/moon/projects/forms.py", line 1207, in class PostDepMeetingForm(ValidatedForm): File "/Users/.../.virtualenvs/moon/lib/python2.7/site-packages/django/forms/models.py", line 257, in __new__ raise FieldError(message) django.core.exceptions.FieldError: Unknown field(s) (meeting_date_time) specified for Survey The 'field' that it seems to be complaining about, meeting_date_time, is one that I added on the conceptCalendar3 branch- but it doesn't exist in the code on the master branch... I have tried running git pull origin master to ensure that I have the latest version of code from the live version of the project, but this tells me that everything is up-to-date. So why can't I view a version of my site locally, and why am I getting these errors in the console? Edit: I tried checking out an old commit, and at one point was in a detached head state- could it be that I am still in this detached head state, and so some of my code is pointing to master, but some of it is pointing to conceptCalendar? If that's the case, how would I check, and how would I resolve it?
Git- made changes on a local branch, checked out master, and master is now broken
1.2
0
0
110
41,188,433
2016-12-16T16:17:00.000
0
0
1
0
python,restore
41,264,537
2
false
0
0
Since you are a new user, I am assuming you are new to code development. Therefore, you should look at some version control tools like: SVN, GitHub, GitLab. There are more, but these are the most common ones. They are used to store your code and to revert your code if you mess up. They are also used to merge code when different programmers are changing it. This will not help for the moment, but it will help in the future. For now you may look at some restore tools, but I highly doubt you will be able to recreate the file. Another possibility: if you have an IDE, look at its command history. Maybe you executed the script, and you can find the executed code in the command history.
1
0
0
The code inside of a python file was randomly deleted; is there any way to restore it? The file is 3.70 KB, but when opened and pasted into a text document there is nothing.
Code inside of a python file deleted
0
0
0
929
41,189,282
2016-12-16T17:12:00.000
1
0
1
0
python,python-idle,auto-indent
41,189,551
1
true
0
0
Your parentheses/brackets/braces are most likely unbalanced on a previous line of code (I'll call it a parenthesis). The open parenthesis that does not have a closing one is likely at column 28.
1
1
0
I have a bit over 500 lines of code-plus-comments that I've written over the past few days. It runs (see edit), and does what it's supposed to. But whenever I hit enter after any text, IDLE fills in whitespace to column 28. If I go from column 28 and type a line that should need more indentation, for example For i in range(25):, and hit enter, it still only indents to column 28. When I edit a new file, it indents the right amount. When I try extending a logical line to multiple physical lines, it still indents the right amount. But when I copy-and-paste my 500-plus lines into the new file, it goes back to filling in whitespace to column 28. When I change the number of spaces to indent to 6 (from the standard 4), it still goes to column 28: it just deletes the whitespace 6 at a time when I hit backspace instead of 4. There are a bunch of logical lines continued to multiple physical lines, scattered through my code, but they all seem to be just like the ones I've tried in a new file and had it work right. I assume that one of them is different somehow, so that IDLE file-editor thinks I still want to continue it from column 28, but I don't see any difference. I'm using Python 3.4.3 on Windows 8.1. Edit: I was thinking I was looking at a version that ran, but I wasn't. The accepted answer was correct, and there was a mismatched parenthesis on column 28 in the code I was adding.
Python IDLE file editor thinks I want more auto-indent than I do
1.2
0
0
349
41,189,951
2016-12-16T17:57:00.000
2
0
1
0
python,dll,gdal
41,190,079
2
false
0
0
First of all, never download .dll files from shady websites. The best way of repairing missing dependencies is to reinstall the software that shipped the .dll files completely.
2
0
0
I am using python and I am trying to install the GDAL library. I kept having an error telling me that many DLL files were missing so I used the software Dependency Walker and it showed me that 330 DLL files were missing... My question is: How do I get that much files without downloading them one by one on a website ?
How do I get hundreds of DLL files?
0.197375
0
0
58
41,189,951
2016-12-16T17:57:00.000
1
0
1
0
python,dll,gdal
41,190,396
2
false
0
0
By properly installing the software that GDAL depends on. Consult GDAL's documentation for build and installation instructions.
2
0
0
I am using python and I am trying to install the GDAL library. I kept having an error telling me that many DLL files were missing so I used the software Dependency Walker and it showed me that 330 DLL files were missing... My question is: How do I get that much files without downloading them one by one on a website ?
How do I get hundreds of DLL files?
0.099668
0
0
58
41,191,394
2016-12-16T19:39:00.000
2
0
0
0
python,python-3.x,openpyxl
41,207,062
5
false
0
0
No, this is not possible, because Excel files do not support concurrent access.
2
9
0
I am using openpyxl to write to a workbook. But that workbook needs to be closed in order to edit it. Is there a way to write to an open Excel sheet? I want to have a button that runs a Python code using the commandline and fills in the cells. The current process that I have built is using VBA to close the file and then Python writes it and opens it again. But that is inefficient. That is why I need a way to write to open files.
How to write to an open Excel file using Python?
0.07983
1
0
18,268
41,191,394
2016-12-16T19:39:00.000
2
0
0
0
python,python-3.x,openpyxl
41,194,215
5
false
0
0
Generally, two different processes shouldn't be writing to the same file, because it will cause synchronization issues. A better way would be to close the existing file in the parent process (aka the VBA code) and pass the location of the workbook to the Python script. The Python script will open it, write the contents into the cell, and exit.
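The Python half of that handoff is short; a sketch with openpyxl, where the VBA side closes the workbook and passes its path as a command-line argument (path and cell are illustrative):

```python
import sys
from openpyxl import load_workbook

path = sys.argv[1]             # workbook path handed over by the VBA caller
wb = load_workbook(path)
ws = wb.active
ws["A1"] = "filled by Python"  # write the cells, then save and exit
wb.save(path)
```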
2
9
0
I am using openpyxl to write to a workbook. But that workbook needs to be closed in order to edit it. Is there a way to write to an open Excel sheet? I want to have a button that runs a Python code using the commandline and fills in the cells. The current process that I have built is using VBA to close the file and then Python writes it and opens it again. But that is inefficient. That is why I need a way to write to open files.
How to write to an open Excel file using Python?
0.07983
1
0
18,268
41,198,719
2016-12-17T12:47:00.000
2
0
0
0
python,pandas,dataframe
41,200,368
3
false
0
0
If the rest of the data in your columns is numeric, then you should use df = df.apply(pd.to_numeric, errors='coerce'); pd.to_numeric itself works on a single Series, so apply runs it over each column.
1
0
1
I am using df= df.replace('No data', np.nan) on a csv file containing ‘No data’ instead of blank/null entries where there is no numeric data. Using the head() method I see that the replace method does replace the ‘No data’ entries with NaN. When I use df.info() it says that the datatypes for each of the series is an object. When I open the csv file in Excel and manually edit the data using find and replace to change ‘No data’ to blank/null entries, although the dataframes look exactly the same when I use df.head(), when I use df.info() it says that the datatypes are floats. I was wondering why this was and how can I make it so that the datatypes for my series are floats, without having to manually edit the csv files.
Pandas replace method and object datatypes
0.132549
0
0
1,782
41,199,408
2016-12-17T14:08:00.000
0
0
1
0
python,opencv,anaconda
42,058,253
1
false
0
0
I might be missing something, but I believe you are just missing setting up the environment variables. Set Environment Variables: Right-click on "My Computer" (or "This PC" on Windows 8.1) -> left-click Properties -> left-click "Advanced" tab -> left-click "Environment Variables..." button. Add a new User Variable to point to the OpenCV build (either x86 for a 32-bit system or x64 for a 64-bit system). I am currently on a 64-bit machine. | 32-bit or 64-bit machine? | Variable | Value | |---------------------------|--------------|--------------------------------------| | 32-bit | OPENCV_DIR | C:\opencv\build\x86\vc12 | | 64-bit | OPENCV_DIR | C:\opencv\build\x64\vc12 | Append %OPENCV_DIR%\bin to the User Variable PATH. For example, my PATH user variable looks like this... Before: C:\Users\Johnny\Anaconda;C:\Users\Johnny\Anaconda\Scripts After: C:\Users\Johnny\Anaconda;C:\Users\Johnny\Anaconda\Scripts;%OPENCV_DIR%\bin
1
1
1
Hello guys, I've just installed Anaconda3 on Windows 8.1, plus OpenCV 2.4.13 and 3.1.0. I've copied the file c:/..../opencv/build/python/2.7/x64/cv2.pyd and pasted it into C:\Users.....\Anaconda3\Lib\site-packages. I pasted both the OpenCV 2.4.13 build as cv2.pyd and the OpenCV 3.1.0 build as cv2(3).pyd, in order to swap them when I want to use either one. My system is 64-bit and I use jupyter notebook. When I run the command import cv2, it gives me: ImportError Traceback (most recent call last) in () ----> 1 import cv2 In Anaconda3 I use Python 3.5. ImportError: DLL load failed: The specified module could not be found.
Install opencv in anaconda3
0
0
0
1,034