Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
32,087,821 | 2015-08-19T06:18:00.000 | 0 | 1 | 0 | 0 | python,search,automation | 32,307,071 | 1 | false | 0 | 0 | In Total Commander you can pick menu Commands->Search in separate Process...
Then you can filter C/H files using e.g. this mask in the "Search for" field: *.cpp;*.c;*.cxx;*.hpp;*.h;*.hxx (add or remove whichever ones you need).
Afterwards you can enable the "find text" box and "Regex (2)", and enter your words as (word1|word2|word3).
This is also possible to do with Python if you prefer it. | 1 | 0 | 0 | I wish to search for 500 strings in 2500 .c/.h files and return the lines and files containing the strings. Something like what is built into Total Commander's search function. Is there a way I can automate the TC search and retrieve the results?
Or else can this be achieved in Python without TC? | Total Commander Automation using Python | 0 | 0 | 0 | 476 |
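(A minimal Python sketch of the "without TC" route the answer mentions; the word list, extensions, and start directory are placeholders.)

```python
import os
import re

words = ['word1', 'word2', 'word3']  # up to 500 search strings
pattern = re.compile('|'.join(re.escape(w) for w in words))
exts = ('.c', '.h', '.cpp', '.cxx', '.hpp', '.hxx')

for root, dirs, files in os.walk('.'):
    for name in files:
        if name.endswith(exts):
            path = os.path.join(root, name)
            for lineno, line in enumerate(open(path), 1):
                if pattern.search(line):
                    print('%s:%d: %s' % (path, lineno, line.strip()))
```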
32,090,306 | 2015-08-19T08:29:00.000 | 2 | 0 | 0 | 1 | python,amazon-ec2,flask,web,localhost | 32,092,809 | 2 | false | 1 | 0 | You cannot connect to localhost on a remote machine without a proxy. If you want to test it you will need to change the binding to the public IP address or 0.0.0.0.
You will then have to lock down access to your own IP address through the security settings in AWS. | 1 | 6 | 0 | Currently I am working on web app development and I am running my server on an Amazon EC2 instance. I am testing my web app (which uses Flask) by running the server on localhost:5000 as usual. However I don't have access to the GUI, hence I can't see my app and test it like I would in a browser. I have a Mac OS X computer, so my question is: how can I see the localhost of the Amazon EC2 instance from my Mac's browser? | Connecting to web app running on localhost on an Amazon EC2 from another computer | 0.197375 | 0 | 0 | 14,127
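(A minimal sketch of the binding change the answer describes; port 5000 matches the question, the rest is a placeholder app.)

```python
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # bind to all interfaces so the EC2 public IP/DNS is reachable;
    # then open port 5000 to your own IP in the instance's security group
    app.run(host='0.0.0.0', port=5000)
```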
32,092,308 | 2015-08-19T09:59:00.000 | 2 | 0 | 0 | 0 | python,python-2.7,twisted | 32,105,713 | 1 | true | 0 | 0 | No. An IPullProducer must always be able to synchronously produce data on demand; that's why the interface exists.
Perhaps you want an IPushProducer instead? | 1 | 0 | 0 | I'm implementing a IPullProducer. Therefore the caller tells me when I have to produce data.
What if I'm temporarily unable to do that (waiting for some other event perhaps)?
Is there a way to tell the consumer that I can't produce data for a while? | Twisted IPullProducer what if I don't write anything? | 1.2 | 0 | 0 | 30 |
32,094,988 | 2015-08-19T12:02:00.000 | 0 | 0 | 0 | 0 | java,android,python,django,arraylist | 32,095,252 | 1 | false | 1 | 0 | When you convert the ArrayList values to a string, the '[' will also become part of the string in the conversion. You may send a JSON-encoded object in the request parameters instead. In Django, we can use dictionaries to parse the JSON data as key-value pairs. | 1 | 0 | 0 | In my android application I have an ArrayList which is: [1, 2, 8]
I am sending this array list in a job to the backed django view, where I need to process it further.
So I am calling the toString() method to convert it to a string and then send it to the server.
Inside the view on getting the parameter from the request and on trying to print what I have received I get: [1, 2, 8].
But on trying to get the 1st element, basically calling variable[0], I am getting: [ and on calling variable[1] I am getting 1.
I just want to extract all the numbers from the variable and use them for further processing. Where am I going wrong? | ArrayList sent to server: Data retrieval | 0 | 0 | 0 | 62 |
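(Illustrating the Django-side fix: ArrayList.toString() produces "[1, 2, 8]", which happens to be valid JSON for a list of integers, so it can be parsed instead of indexed as a string. A sketch under that assumption.)

```python
import json

raw = '[1, 2, 8]'          # what ArrayList.toString() sends across
numbers = json.loads(raw)   # -> [1, 2, 8] as a real Python list
numbers[0]                  # 1, whereas raw[0] is just the character '['
```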
32,096,242 | 2015-08-19T12:59:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 66,941,512 | 1 | false | 0 | 0 | PyCharm > Preferences... > Project Interpreter > Python Interpreters -> Cog on the upper right hand side ->
Click on the green icon (use the Conda package manager)
Click the + Add button to install any package. It works for me | 1 | 1 | 0 | I'm trying to add external libraries to my project in PyCharm using the
PyCharm > Preferences... > Project Interpreter > Python Interpreters -> Cog on the upper right hand side -> Click on + icon in Project Interpreters dialog
In the resulting screen, I add paths to the external libraries that I'd like to include in the project, and they get added to the list of paths in that dialog, but when I then expand the External Libraries entry in the Project window, the paths that I added are not shown.
I tried going through the contents of the .idea folder to identify where exactly the references to External Libraries are kept using .idea folder of a colleague but couldn't figure out the related settings file.
How can I get External Libraries added to PyCharm in this case? | Cannot add external libraries with PyCharm | 0 | 0 | 0 | 522
32,097,768 | 2015-08-19T14:03:00.000 | 5 | 0 | 1 | 0 | python,python-requests | 32,097,869 | 1 | true | 1 | 0 | There is no session-level JSON parameter, so the merging rules don't apply.
In other words, the json keyword argument to the session.request() method is passed through unchanged, None values in that structure do not result in keys being removed.
The same applies to data, there is no session-level version of that parameter, no merging takes place. If data is set to a dictionary, any keys whose value is set to None are ignored. Set the value to '' if you need those keys included with an empty value.
The rule does apply when merging headers, params, hooks and proxies. | 1 | 1 | 0 | from the requests documentation :
Remove a Value From a Dict Parameter
Sometimes you’ll want to omit session-level keys from a dict parameter. To do this, you simply set that key’s value to None in the method-level parameter. It will automatically be omitted.
I need the data with key's value as None to take the Json value null instead of being removed.
Is it possible ?
Edit: this seems to happen with my request data keys. While they are not session-level, the behaviour of removing is still the same. | python requests module - set key to null | 1.2 | 0 | 1 | 4,509
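(A short demo of the distinction the answer draws, using httpbin as a stand-in endpoint.)

```python
import requests

s = requests.Session()

# json= is passed through unchanged: None serializes to JSON null
s.post('https://httpbin.org/post', json={'key': None})

# data= drops keys whose value is None; use '' to keep the key empty
s.post('https://httpbin.org/post', data={'dropped': None, 'kept': ''})
```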
32,100,003 | 2015-08-19T15:38:00.000 | 0 | 1 | 0 | 1 | python-2.7 | 32,107,551 | 1 | false | 0 | 0 | Unless you are significantly compressing before download, and decompressing the image after download, the problem is your 115,200 baud transfer rate, not the speed of reading from a file.
At the standard N/8/1 line encoding, each byte requires 10 bits to transfer, so you will be transferring 11,520 bytes per second.
In 10 minutes, you will transfer 11,520 * 60 * 10 = 6,912,000 bytes. At 3 bytes per pixel (for R, G, and B), this is 2,304,000 pixels, which happens to be the number of pixels in a 1920 by 1200 image.
The answer is to (a) increase the baud rate; and/or (b) compress your image (using something simple to decompress on the FPGA, like RLE, if it is amenable to that sort of compression). | 1 | 0 | 0 | I have an FPGA board and I wrote VHDL code that can get images (in binary) from the serial port and save them in an SDRAM on my board. Then the FPGA displays the images on a monitor via a VGA cable. My problem is that filling the SDRAM takes too long (about 10 minutes at a 115200 baud rate).
On my computer I wrote Python code to send the image (in binary) to the FPGA via the serial port. My code reads a binary file saved on my hard disk and sends it to the FPGA.
My question is: if I use a buffer to hold my images instead of a binary file, do I get a better result? If so, can you help me with how to do that, please? If not, can you suggest a solution, please?
Thanks in advance. | IS reading from buffer quicker than reading from a file in python | 0 | 0 | 0 | 96
32,100,787 | 2015-08-19T16:16:00.000 | 0 | 1 | 0 | 1 | python,c++,eclipse,pydev | 32,167,307 | 1 | false | 0 | 0 | After 'googling' around the internet, here is what appears to be working for my particular situation:
Create a C/C++ project (empty makefile project). This produces the following 3 files in my top-level local SVN check-out directory:
.settings
.cproject
.project
Note: I keep my Eclipse workspace separate from my Eclipse project.
Create a separate Python project that is outside of the local SVN check-out directory.
Note: This Eclipse Python project is in my Eclipse workspace.
This creates the following 2 files:
.pydevproject
.project
Copy the .pydevproject to the directory containing the .settings, .cproject, and .project files.
Copy the Python 'nature' elements from the Python .project file to the CDT .project file.
Restart Eclipse if it had been running while editing the dot (.) files.
Finally, get into the 'C/C++ Perspective'. In the 'Project Explorer' window, pull down the 'View Menu'.
Select 'Customize View...'.
Select the 'Content' tab.
Uncheck the 'PyDev Navigator Content' option. | 1 | 0 | 0 | Eclipse 4.5 (Mars) / Windows 7
I have an Eclipse C/C++ Makefile project that has both Python and C/C++ code. The source code is checked-out from an SVN repository. The build environment is via a MSYS shell using a project specific configuration script to create all Makefiles in the top/sub-directories and 'make', 'make install' to build.
My .project file has both the PyDev and CDT natures configured.
I can switch between the PyDev and C/C++ perspectives and browse code including right-clicking on a symbol and 'open declaration'.
The 'Debug' perspective appears to be specific to the C/C++ perspective.
Do you have experience with configuring an Eclipse project that allows you to debug both Python and C/C++ code? | Debugging Python when both PyDev and CDT natures in same Eclipse project | 0 | 0 | 0 | 702 |
32,101,737 | 2015-08-19T17:14:00.000 | 1 | 0 | 0 | 0 | python,django,database | 32,108,208 | 1 | false | 1 | 0 | The best and most reliable way to do this is with a sql trigger That would completely eliminate the worries about simultaneous inserts. But overriding the save method is also perfectly workable.
Explicitly declare a primary key field and choose integer for it. In your save method if the primary key is None that means you are saving a new record, query the database to determine what should be the new primary key, asign it and save. Wherever you call your save method you would need to have a atomic transaction and retry the save if it fails.
BTW, you are starting for 0 each year. That's obviously going to be leading to conflicts. So you will have to prefix your primary key with the year and strip it out at the time you display it. (believe me you don't want to mess with composite primary keys in django) | 1 | 1 | 0 | I have this special case, where a customer requires a specific (legacy) format of booking numbers, the first one starts with the current year:
2015-12345
So basically every year I would have to start from 0
The other one is starting with a foreign-key:
7-123
So the first document created by each user gets number 1, and so on.
Unfortunately there will be long lists starting with this booking number, so fetching all the records and calculating the booking number is not really an option. I have also thought about overriding the save() method, reading and auto-incrementing manually, but what about simultaneous inserts? | Django autoincrement IntergerField by rule | 0.197375 | 0 | 0 | 53 |
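(A minimal sketch of the save()-override approach described above, with the answer's caveats; the model and field names are hypothetical, and callers should retry on IntegrityError.)

```python
from django.db import models, transaction
from django.utils import timezone


class Booking(models.Model):
    # hypothetical model: booking_no holds values like "2015-12345"
    booking_no = models.CharField(max_length=20, unique=True, editable=False)

    def save(self, *args, **kwargs):
        if not self.booking_no:
            prefix = '%d-' % timezone.now().year
            with transaction.atomic():
                # count this year's bookings; the unique constraint turns
                # a simultaneous-insert race into an IntegrityError to retry
                n = Booking.objects.filter(booking_no__startswith=prefix).count()
                self.booking_no = '%s%d' % (prefix, n + 1)
                super(Booking, self).save(*args, **kwargs)
        else:
            super(Booking, self).save(*args, **kwargs)
```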
32,104,282 | 2015-08-19T19:41:00.000 | 2 | 1 | 1 | 1 | python,linux | 32,104,793 | 1 | true | 0 | 0 | There are a number of places where this enabled-by-default behavior could be turned off.
PYTHONDONTWRITEBYTECODE could be set in the environment
sys.dont_write_bytecode could be set through an out-of-band mechanism (ie. site-local initialization files, or a patched interpreter build).
File permissions could fail to permit it. This need not be obvious! Anything from filesystem mount flags to SELinux tags could have this result. I'd suggest using strace or a similar tool (as available for your platform) to determine whether any attempts to create these files exist.
On an embedded system, it makes much more sense to make this an explicit step rather than runtime behavior: This ensures that performance is consistent (rather than having some runs take longer than others to execute). Use py_compile or compileall to explicitly run ahead-of-time. | 1 | 0 | 0 | I have a python application running in an embedded Linux system. I have realized that the python interpreter is not saving the compiled .pyc files in the filesystem for the imported modules by default.
How can I enable the interpreter to save them? File system permissions are right. | Python is not saving .pyc files in filesystem | 1.2 | 0 | 0 | 377
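(The explicit ahead-of-time step the answer recommends, as a sketch; the path is a placeholder.)

```python
import compileall

# byte-compile the whole tree once, e.g. when building the embedded image,
# instead of relying on runtime .pyc generation
compileall.compile_dir('/path/to/app', quiet=True)
```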
32,104,943 | 2015-08-19T20:21:00.000 | 0 | 0 | 0 | 0 | python,django | 32,105,083 | 1 | false | 1 | 0 | I did this
[isinstance(node, ExtendsNode) for node in template.nodelist] | 1 | 1 | 0 | I want to check wether my django template has an extends block or not.
Is there an inbuilt function in django that can tell whether the template contains a particular tag or not ? | django template inbuilt tag search | 0 | 0 | 0 | 29 |
32,107,598 | 2015-08-20T00:09:00.000 | 0 | 0 | 1 | 0 | ipython,ipython-notebook | 32,110,082 | 1 | false | 0 | 0 | Ok I got it. I had to:
Change my python.exe under envs to python2.exe. I also changed pythonw to pythonw2.
Added the Anaconda\envs\python2 folder, which includes python2.exe and scripts, to the PATH variable
Then ran this command in Anaconda command window: python2 -m IPython kernelspec install-self
Then ipython kernelspec list to verify | 1 | 1 | 0 | I am having a really hard time adding Python 2.7 as a kernel to my IPython notebook. I have Anaconda installed with a Python environment called "python2." I can navigate to the environment folder and launch IPython (using Python 2.7) in the scripts folder.
I have tried ipython kernelspec install-self using ipython.exe, however, it seems like ipython is not even a command in that window.
I tried it again in the Anaconda command window and it just installs python3. Please help with precise steps. | Add python 2.7 to ipython notebook (default python3) | 0 | 0 | 0 | 214
32,108,235 | 2015-08-20T01:40:00.000 | 4 | 0 | 1 | 0 | python,python-3.x | 32,108,273 | 4 | false | 0 | 0 | The built in set() was based on the old sets.Set() and runs faster.
Both 'do' the same thing, though in Python 3 the 'sets' module no longer exists.
Here is the answer directly from The Python 2 Library:
The built-in set and frozenset types were designed based on lessons learned from the sets module. The key differences are:
- Set and ImmutableSet were renamed to set and frozenset.
- There is no equivalent to BaseSet. Instead, use isinstance(x, (set, frozenset)).
- The hash algorithm for the built-ins performs significantly better (fewer collisions) for most datasets.
- The built-in versions have more space efficient pickles.
- The built-in versions do not have a union_update() method. Instead, use the update() method which is equivalent.
- The built-in versions do not have a _repr(sorted=True) method. Instead, use the built-in repr() and sorted() functions: repr(sorted(s)).
- The built-in version does not have a protocol for automatic conversion to immutable. Many found this feature to be confusing and no one in the community reported having found real uses for it. | 1 | 8 | 0 | What's the difference between set("a") and sets.Set("a")? Their types are different, but they seem to do the same thing.
I can't find any resources online about it, but I've seen both used in examples. | Set vs. set python | 0.197375 | 0 | 0 | 3,697 |
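(A tiny Python 2 demo of the differences listed in the answer above; the deprecated sets module still imports in 2.x but the built-ins are preferred.)

```python
import sets  # deprecated module, shown only for comparison

old = sets.Set('ab')
new = set('ab')                      # built-in, faster hashing
frozen = frozenset('ab')             # replaces sets.ImmutableSet
isinstance(new, (set, frozenset))    # replaces the old BaseSet check
new.update(frozen)                   # replaces union_update()
repr(sorted(new))                    # replaces _repr(sorted=True)
```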
32,108,471 | 2015-08-20T02:07:00.000 | 0 | 0 | 1 | 1 | python,matlab,shell | 32,135,546 | 3 | false | 0 | 0 | You probably want to use the IPython shell (now part of the jupyeter project). In the IPython shell you can also run system commands using !, although many basic commands (like ls or cd) work without even needing to !. Unlike in MATLAB, you don't need to pass it as a string (although you can). So !ls works fine in IPython, while in MATLAB you would need to do !'ls'. Further, you can assign the results to a variable in IPython, which you can't do in MATLAB. So a = !ls works in IPython but not in MATLAB. Further, if you use !!, the result is returned in a form easily usable in Python. So !!ls returns a list of file names.
IPython still uses the _ notation for getting the previous result (except, as with Python, None is counted as "no result" and thus is not recorded). You can also get the second-to-last result with __ and the third-to-last with ___. Further, IPython puts a number next to each line in the command prompt. To get the result of a particular line, just do _n where n is the number. So to get the result of the 3rd command, which has the number 3 next to it, just do _3. This still doesn't work if the result is None, though.
It has a ton of features. You can get the previous input (as a string) with _i (and so on, following the same pattern as with the outputs). You can time code with %timeit and %%timeit. You can jump into the debugger after encountering an error. | 1 | 2 | 0 | These days, I'm transitioning from Matlab to Python after using Matlab/Octave for more than ten years. I have two quick questions:
In the Python interactive mode, is there anything corresponding to Matlab's ans?
How can I run shell commands in the Python interactive mode? Of course, I can use os.system(), but in Matlab we may run shell commands just by placing ! before the actual command. Is there anything similar in Python? | Equivalent of matlab "ans" and running shell commands | 0 | 0 | 0 | 478 |
32,109,319 | 2015-08-20T03:58:00.000 | 1 | 0 | 0 | 0 | python,numpy,machine-learning,neural-network | 62,125,080 | 9 | false | 0 | 0 | ReLU(x) also is equal to (x+abs(x))/2 | 1 | 91 | 1 | I want to make a simple neural network which uses the ReLU function. Can someone give me a clue of how can I implement the function using numpy. | How to implement the ReLU function in Numpy | 0.022219 | 0 | 0 | 165,706 |
32,110,126 | 2015-08-20T05:25:00.000 | 4 | 0 | 1 | 0 | python,version-control | 32,110,468 | 4 | false | 0 | 0 | There isn't anything bad about the file, but it's useless junk, it's there only to speed up python application execution, and it's rebuilt every time you make changes, so it will just grow over time, to fix it you might want to add __pycache__ line to your .gitignore file | 2 | 39 | 0 | I forked a GitHub project in Python. After running the project for the first time, some .pyc files appeared inside. Should I put them under version control and commit them to my fork? | Should I put pyc files under version control? | 0.197375 | 0 | 0 | 14,190 |
32,110,126 | 2015-08-20T05:25:00.000 | 3 | 0 | 1 | 0 | python,version-control | 32,132,368 | 4 | false | 0 | 0 | No. You must not put pyc under version-control
Common rule is "Never put build-artifacts into source control, because you have sources in source-control and can|have to repeat process"
PYCs are such artifacts for corresponding PY files | 2 | 39 | 0 | I forked a GitHub project in Python. After running the project for the first time, some .pyc files appeared inside. Should I put them under version control and commit them to my fork? | Should I put pyc files under version control? | 0.148885 | 0 | 0 | 14,190 |
32,110,965 | 2015-08-20T06:27:00.000 | 2 | 0 | 0 | 0 | python,httpserver,basehttpserver | 32,111,853 | 2 | false | 0 | 0 | Try running it on 0.0.0.0, this accepts connections from all interfaces. Explicitly specifying the IP is a good practice in general (load balancing, caching servers, security, internal netwrok-only micro services, etc), but judging by your story this is not a production server, but some internal LAN application. | 2 | 1 | 0 | I am hosting a http server on Python using BaseHTTPServer module.
I want to understand why it's required to specify the IP on which you are hosting the http server, like 127.0.0.1/192.168.0.1 or whatever. [might be a general http server concept, and not specific to Python]
Why can't it be like anybody who knows the IP of the machine could connect to the http server?
I face problems in case when my http server is connected to two networks at the same time, and I want to serve the http server on both the networks. And often my IP changes on-the-fly when I switch from hotspot mode on the http server machine, to connecting to another wifi router. | Why host http server needs to specify the IP on which it is hosting? | 0.197375 | 0 | 1 | 1,443 |
32,110,965 | 2015-08-20T06:27:00.000 | 2 | 0 | 0 | 0 | python,httpserver,basehttpserver | 32,112,117 | 2 | true | 0 | 0 | You must specify the IP address of the server, mainly because the underlying system calls for listening on a socket requires it. At a lower level you declare what pair (IP address, port) you want to use, listen on it and accept incoming connexions.
Another reason is that professional grade server often have multiple network interfaces and multiple IP addresses, and some services only need to listen on some interface addresses.
Hopefully, there are special addresses:
localhost or 127.0.0.1 is the loopback address, only accessible from local machine. It is currently used for tests of local services
0.0.0.0 (any) is a special address used to declare that you want to listen to all the local interfaces. I think that it is what you want here. | 2 | 1 | 0 | I am hosting a http server on Python using BaseHTTPServer module.
I want to understand why it's required to specify the IP on which you are hosting the http server, like 127.0.0.1/192.168.0.1 or whatever. [might be a general http server concept, and not specific to Python]
Why can't it be like anybody who knows the IP of the machine could connect to the http server?
I face problems in case when my http server is connected to two networks at the same time, and I want to serve the http server on both the networks. And often my IP changes on-the-fly when I switch from hotspot mode on the http server machine, to connecting to another wifi router. | Why host http server needs to specify the IP on which it is hosting? | 1.2 | 0 | 1 | 1,443 |
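(A minimal sketch of binding to 0.0.0.0 with BaseHTTPServer, as both answers suggest; the port and the bare request handler are placeholders, so swap in your own handler.)

```python
# Python 2 module names; in Python 3 this lives in http.server
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

# '0.0.0.0' binds every local interface, so the server stays reachable
# on both networks even when one of the IP addresses changes
server = HTTPServer(('0.0.0.0', 8000), BaseHTTPRequestHandler)
server.serve_forever()
```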
32,113,139 | 2015-08-20T08:23:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,ansible,orchestration | 32,162,774 | 2 | false | 0 | 0 | Edit: as of Ansible 2.2 this answer is no longer accurate.
The best answer here is to have both versions of Python installed,
possibly running Python 2 and Ansible in a virtualenv.
It's possible that Ansible will be refactored for Python 3 but it's
unlikely for now, and there is no alternative in Python 3. If you
don't want to use Ansible on Python2, you'll need to switch to another
tool like Saltstack, Chef, or Puppet. | 2 | 6 | 0 | I changed the Python 2 to Python 3. I felt the benefits of Asyncio.
Earlier in my project I used Ansible, but it is not supported with Python 3. Can you recommend to me an alternative to Ansible for Python 3? | Is there an alternative to Ansible on Python3 | 0.291313 | 0 | 0 | 2,355 |
32,113,139 | 2015-08-20T08:23:00.000 | 4 | 0 | 1 | 0 | python,python-3.x,ansible,orchestration | 41,511,498 | 2 | true | 0 | 0 | As of 2.2, Ansible works with Python 3. Yep, you may encounter some bugs but any patch related to py3 will be quickly reviewed and merged by the team. I already fixed two that I encountered myself. | 2 | 6 | 0 | I changed the Python 2 to Python 3. I felt the benefits of Asyncio.
Earlier in my project I used Ansible, but it is not supported with Python 3. Can you recommend to me an alternative to Ansible for Python 3? | Is there an alternative to Ansible on Python3 | 1.2 | 0 | 0 | 2,355 |
32,113,290 | 2015-08-19T08:30:00.000 | 0 | 0 | 0 | 0 | python-2.7,selenium | 32,117,309 | 1 | false | 1 | 0 | Use the getText method (the element's .text attribute in the Python bindings) if you want to print 0.94 | 1 | 0 | 0 | I have the following html extract
0.94
I am trying to read the href value, i.e. 0.94. I tried the following:
answer = browser.find_element_by_class_name("res")
print answer
output = answer.get_attribute('data-href')
print output
The Result is as follows:
None
I tried various other methods, using find_element_by_xpath etc., but was not able to get the desired value (i.e. 0.94 in this example).
How can I get this value in the shortest way? Thanks in advance | getting href-data using selenium | 0 | 0 | 1 | 206 |
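(A hedged sketch of the fix: if the value is the element's visible text rather than a data-href attribute, read .text; the class name comes from the question and the rest of the page is unknown.)

```python
answer = browser.find_element_by_class_name("res")
print(answer.text)                    # the visible text, e.g. 0.94
print(answer.get_attribute("href"))   # worth trying if it's a plain link
```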
32,116,528 | 2015-08-20T11:03:00.000 | 3 | 0 | 1 | 0 | python | 32,116,654 | 1 | true | 0 | 0 | It is normal that you find __file__ undefined when running by single line because:
When a module is loaded in Python, __file__ is set to its name. You
can then use that with other functions to find the directory that the
file is located in.
There is no loaded module when you run by single line. | 1 | 3 | 0 | I have a specific question concerning the python (Python 2.7) IDE Spider (2.3.5.2)
Today I noticed that there is a difference in running my script as a whole, i.e. when I press F5.
Or
when I run just a single line or selection, by pressing F9.
I noticed this difference, when running specific syntax containing __file__
When I would run the script by line (by pressing F9), I would get the error NameError: name '__file__' is not defined
Whereas if I would run the script as a whole (by pressing F5) I would receive no such error, and was able to retrieve my file name using __file__
My question is: What is the difference between running by pressing F5 and running by pressing F9?
Note: there is probably some jargon that I'm missing which would allow me to ask my question better. Please edit the question if needed. I get the feeling I'm dealing with some very basic stuff. If anyone has some good tutorials or documentation, I would love to read it. | Spyder IDE Python: difference in running by pressing F5 and F9? | 1.2 | 0 | 0 | 628
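(A small guard illustrating the answer: with F9 the lines run in a console where no module is loaded, so fall back explicitly. A sketch.)

```python
import os

try:
    here = os.path.dirname(os.path.abspath(__file__))
except NameError:           # running line-by-line (F9): no module loaded
    here = os.getcwd()      # fall back to the working directory
```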
32,123,775 | 2015-08-20T16:40:00.000 | 1 | 0 | 1 | 1 | python,shebang | 32,125,245 | 2 | false | 0 | 0 | I accepted John Schmitt's answer because it led me to the solution. However, I am posting what I actually did, because it might be useful for other Hadoopy users.
What I actually did was :
args['cmdenvs'] = ['export VIRTUAL_ENV=/n/2.7.9/ourvenv','export PYTHONPATH=/n/2.7.9/ourvenv', 'export PATH=/n/2.7.9/ourvenv/bin:$PATH']
and passed args into Hadoopy's launch function. In the executable .py files, I put the generic #!/usr/bin/env python shebang. | 1 | 0 | 0 | Here is the problem I am trying to solve. I don't have a specific question in the title because I don't even know what I need.
We have an ancient Hadoop computing cluster with a very old version of Python installed. What we have done is installed a new version (2.7.9) to a local directory (that we have perms on) visible to the entire cluster, and have a virtualenv with the packages we need. Let's call this path /n/2.7.9/venv/
We are using Hadoopy to distribute Python jobs on the cluster. Hadoopy distributes the python code (the mappers and reducers) to the cluster, which are assumed to be executable and come with a shebang, but it doesn't do anything like activate a virtualenv.
If I hardcode the shebang in the .py files to /n/2.7.9/venv/, everything works. But I want to put the .py files in a library; these files should have some generic shebang like #!/usr/bin/env python. But I tried this and it does not work, because at runtime the virtualenv is not "activated" by the script and therefore it bombs with import errors.
So if anyone has any ideas on how to solve this problem I would be grateful. Essentially I want #!/usr/bin/env python to resolve to /n/2.7.9/venv/ without /n/2.7.9/venv/ being active, or some other solution where I cannot hardcode the shebang.
Currently I am solving this problem by having a run function in the library, and putting a wrapper around this function in the main code (that calls the library) with the hardcoded shebang in it. This is less offensive because the hardcoded shebang makes sense in the main code, but it is still messy because I have to have an executable wrapper file around every function I want to run from the library. | Python: runtime shebang problems | 0.099668 | 0 | 0 | 244 |
32,124,699 | 2015-08-20T17:33:00.000 | 9 | 0 | 1 | 0 | python,regex | 32,124,774 | 2 | true | 0 | 0 | Those double-quotes in your regular expression are delimiting the string rather than part of the regular expression. If you want them to be part of the actual expression, you'll need to add more, and escape them with a backslash (r"\"\[.+\]\""). Alternatively, enclose the string in single quotes instead (r'"\[.+\]"').
re.match() only produces a match if the expression is found at the beginning of the string. Since, in your example, there is a double quote character at the beginning of the string, and the regular expression doesn't include a double quote, it does not produce a match. Try re.search() or re.findall() instead. | 1 | 5 | 0 | I use this statement result=re.match(r"\[.+\]",sentence) to match sentence="[balabala]". But I always get None. Why? I tried many times and online regex test shows it works. | How to match double quote in python regex? | 1.2 | 0 | 0 | 24,838 |
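(The two fixes from the answer combined into a runnable snippet.)

```python
import re

sentence = '"[balabala]"'

# single-quoted pattern, so the double quotes are part of the regex;
# re.search scans the string instead of anchoring at position 0 like re.match
result = re.search(r'"\[.+\]"', sentence)
result.group(0)   # '"[balabala]"'
```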
32,125,200 | 2015-08-20T17:59:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework | 33,711,333 | 1 | false | 1 | 0 | Turns out this was a local caching issue. It occurred when reloading a page with a GET request to my API, and I guess the headers weren't in sync. The error went away when I set max_age to 0, which is something I needed to do anyway. | 1 | 0 | 0 | When programmatically accessing certain data from my Django Rest Framework API, I occasionally get an error:
__new__() missing 1 required positional argument: 'argument name'
What's odd is that the error is not predictable, in that I may refresh and everything loads fine. So this leads me to believe it may be some kind of data-race type situation, but I'll be honest in saying I don't really know where this __new__ constructor is coming from.
Can someone shed some light on how Django Rest Framework might be using the __new__ constructor so I might have a better idea on where to track down the bug?
(I assume it's a DRF issue since that's what I'm using to access the data, but if it's not that then I'm really lost) | Django Rest Framework __new__ missing 1 required positional argument | 0 | 0 | 0 | 575 |
32,125,774 | 2015-08-20T18:32:00.000 | 3 | 0 | 0 | 0 | python,sockets,networking,udp | 32,126,306 | 3 | true | 0 | 0 | A UDP packet can be as large as approximately 64k. So if you want to send a file that is larger than that you can fragment yourself into packets of 64k. That is the theoretical maximum. My advice is to use fragments of smaller chunks of 500 bytes.
IP is responsible for fragmentation and reassembly of the packets if you do use 64k packets. Smaller packets of 500 bytes are not likely to be fragmented because the mtu is usually around 1500 bytes. If you use larger packets that are fragmented, IP is going to drop them if one of those fragments is lost.
You are right that using TCP is probably better for something like this, or even an existing protocol like TFTP. It implements a per-packet acking mechanism and sequence numbers, just like you did. | 1 | 4 | 0 | I am new to socket programming and recently picked up Python for it. I have a few questions in mind which I can't seem to find a definite answer for.
I am looking into sending data over UDP and have written a simple python script to do just that. Works fine sending small objects (Small pickled objects to be exact) across but how should I handle objects that are too large to be fitted in one UDP packet?
I've thought of first sizing up the object in bytes. Nothing will be done if the object is small enough to be fitted in a UDP packet, but if the object is too huge, the object will then be split up evenly (if possible) into many smaller chunks so that it can be fitted into multiple UDP packets and be sent across to the client. Once the client receive the chunks, the client will reassemble the multiple UDP packets into the original state.
I immediately hit my first brick wall when trying to implement the above.
From my research, it doesn't seem like there is any 'effective' way of getting the byte size of an object. This means I am unable to determine if an object is too large to fit in a UDP packet.
What happens if I insist on sending a large object across to the client? Will it get fragmented automatically and be reassembled on the client side, or will the packet be dropped by the client?
What is the right way to handle large object over UDP? Keeping in mind that the large object could be a file that is 1GB in size or a byte object that is 25MB in size.
Thanks in advance.
Side Notes:
I do understand that UDP packets may not always come in order and
therefore I have already implemented countermeasure for it which is
to tag a sequence number to the UDP packets sent out to the client.
I do understand that there is no assurance that the client will receive all of the UDP packets. I am not concerned about packet loss for now.
I do understand that TCP is the right candidate for what I am trying to do but I am focusing on understanding UDP and on how to handle situations where acknowledgement of packets from client is not possible for now.
I do understand the usage of pickle is insecure. Will look into it at later stage. | Python: Sending large object over UDP | 1.2 | 0 | 1 | 5,845 |
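(A minimal sender-side chunking sketch for the approach the question describes: split the pickled bytes into MTU-friendly chunks and tag each with a sequence number. The header layout and chunk size are assumptions.)

```python
import socket
import struct

MAX_PAYLOAD = 1400  # stays under a typical 1500-byte Ethernet MTU
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_blob(blob, addr):
    chunks = [blob[i:i + MAX_PAYLOAD] for i in range(0, len(blob), MAX_PAYLOAD)]
    for seq, chunk in enumerate(chunks):
        # 8-byte header: sequence number + total chunk count, so the
        # receiver can reorder the packets and detect completeness
        sock.sendto(struct.pack('!II', seq, len(chunks)) + chunk, addr)
```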
32,126,189 | 2015-08-20T18:56:00.000 | 1 | 0 | 0 | 0 | python,statistics,correlation,pyspark | 32,126,371 | 2 | false | 0 | 0 | Firstly, make sure you're applying the right formula for correlation. Remember, given vectors x and y, correlation is ((x-mean(x)) * (y - mean(y)))/(length(x)*length(y)), where * represents the dot-product and length(x) is the square root of the sum of the squares of the terms in x. (I know that's silly, but noticing a mis-typed formula is a lot easier than redoing a program.)
Do you have a strong hunch that there should be some correlation among these columns? If you don't, then those small values are reasonable. On the other hand, if you're pretty sure that there ought to be a strong correlation, then try sampling a random 100 pairs and either finding the correlation there, or plotting them for visual inspection, which can also show you if there is correlation present. | 2 | 0 | 1 | I am trying to calculate correlation amongst three columns in a dataset. The dataset is relatively large (4 GB in size). When I calculate correlation among the columns of interest, I get small values like 0.0024, -0.0067 etc. I am not sure this result makes any sense or not. Should I sample the data and then try calculating correlation?
Any thoughts/experience on this topic would be appreciated. | How to calculate correlation on large number of records? | 0.099668 | 0 | 0 | 116 |
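(The same formula written out with NumPy, which doubles as a sanity check against np.corrcoef.)

```python
import numpy as np

x = np.random.randn(10000)
y = 0.5 * x + np.random.randn(10000)

xc, yc = x - x.mean(), y - y.mean()
r = np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
# r matches np.corrcoef(x, y)[0, 1]
```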
32,126,189 | 2015-08-20T18:56:00.000 | 0 | 0 | 0 | 0 | python,statistics,correlation,pyspark | 32,127,507 | 2 | false | 0 | 0 | There is nothing special about correlation of large data sets. All you need to do is some simple aggregation.
If you want to improve your numerical precision (remember that floating point math is lossy) you can use Kahan summation and similar techniques, in particular for values close to 0.
But maybe your data just doesn't have strong correlation?
Try visualizing a sample!
Any thoughts/experience on this topic would be appreciated. | How to calculate correlation on large number of records? | 0 | 0 | 0 | 116 |
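(Kahan summation, mentioned in the answer above, as a plain-Python sketch.)

```python
def kahan_sum(values):
    """Compensated summation: carries the low-order bits
    that a naive running total would lose."""
    total = 0.0
    c = 0.0  # running compensation
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y that didn't make it into t
        total = t
    return total
```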
32,128,514 | 2015-08-20T21:14:00.000 | 0 | 0 | 1 | 0 | vim,ipython | 32,129,706 | 1 | false | 0 | 0 | From within an IPython shell you can run a script with the %run magic!
Assuming you are in the right directory:
%run script.py
Will execute the script and leave you in an interactive prompt, where you can interact with objects from the script. Hope this helps! | 1 | 0 | 0 | I can run a file with :! ipython %, but it will close when the script is finished. How can I run the current script in IPython, and make it remain open? Currently I am hitting :! ipython and then %load the current file by name, but I'd like to automatize it.
Thanks! | VIM: Run current file in IPython and remain open | 0 | 0 | 0 | 186 |
32,128,518 | 2015-08-20T21:15:00.000 | 1 | 0 | 0 | 0 | python,django,search,model,storage | 32,128,663 | 2 | false | 1 | 0 | You can definitely store relatively large bodies of text in database. If you mean by performance there are two angles:
Searching. You should not do then free form searches in database. You may use specific features of ElasticSearch-like tools.
Serving large bodies of text. Unavoidable, naturally, if you want to present it, however you can use GZIP compression that will reduce the bandwidth drastically. | 2 | 0 | 0 | This is a question about best-practice for modelling a Django app. The project is a blog which will present articles written in something similar to Markdown or RST.
I've checked out a few tutorials to give me some sort of starting point and so far they all store the body of the article in the model. This seems wrong: my understanding of modern database engines isn't the best but storing textfields of arbitrary lengths can't be good for performance.
Three alternatives present themselves:
Limit the article model to metadata and create a separate model to
store the body of an article. At least only one table is a mess!
Limit the article model to metadata and store the article body as a
static file.
Ask someone with more experience. Maybe storing the body in the model
isn't so bad after all... at least it's easily searchable!
How should I model this app? How would you make your solution searchable? | How should I model large bodies of textual content in a django app? | 0.099668 | 0 | 0 | 133 |
32,128,518 | 2015-08-20T21:15:00.000 | 1 | 0 | 0 | 0 | python,django,search,model,storage | 32,128,552 | 2 | true | 1 | 0 | Storing the body of the post in the database shouldn't be an issue. Most blog engines take this approach. It'll be faster than storing it in a separate model (if it's in a separate model, you'd have to do a JOIN to get the body), and likely faster than storing it as a file on the file system. You're not using the body as a primary key, so the length doesn't really matter. | 2 | 0 | 0 | This is a question about best-practice for modelling a Django app. The project is a blog which will present articles written in something similar to Markdown or RST.
I've checked out a few tutorials to give me some sort of starting point and so far they all store the body of the article in the model. This seems wrong: my understanding of modern database engines isn't the best but storing textfields of arbitrary lengths can't be good for performance.
Three alternatives present themselves:
Limit the article model to metadata and create a separate model to
store the body of an article. At least only one table is a mess!
Limit the article model to metadata and store the article body as a
static file.
Ask someone with more experience. Maybe storing the body in the model
isn't so bad after all... at least it's easily searchable!
How should I model this app? How would you make your solution searchable? | How should I model large bodies of textual content in a django app? | 1.2 | 0 | 0 | 133 |
32,130,000 | 2015-08-20T23:19:00.000 | 3 | 0 | 1 | 0 | python,apache-spark,rdd | 32,147,241 | 1 | true | 0 | 0 | By calling repartition(N) spark will do a shuffle to change the number of partitions (and will by default result in a HashPartitioner with that number of partitions). When you call sc.parallelize with a desired number of partitions it splits your data (more or less) equally up amongst the slices (effectively similar to a range partitioner), you can see this in ParallelCollectionRDD inside of the slice function.
That being said, it is possible that both of these, sc.parallelize(data, N) and rdd.repartition(N) (and really almost any form of reading in data), can result in RDDs with empty partitions (it's a pretty common source of errors with mapPartitions code, so I biased the RDD generator in spark-testing-base to create RDDs with empty partitions). A really simple fix for most functions is just checking if you've been passed an empty iterator and returning an empty iterator in that case.
I have 4 cores on my machine; if I sc.parallelize(data, 4) everything works fine, but when I rdd.repartition(4) and apply rdd.mapPartitions(fun), sometimes the partitions have no data and my function fails in such cases.
So, just wanted to understand what is the difference between these two ways of partitioning. | what is the difference between rdd.repartition() and partition size in sc.parallelize(data, partitions) | 1.2 | 0 | 0 | 2,919 |
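(The empty-iterator guard the answer recommends, sketched in PySpark; assumes an existing SparkContext sc and input collection data, and `process` stands in for the real per-partition logic.)

```python
def safe_mapper(partition):
    items = list(partition)        # mapPartitions hands us an iterator
    if not items:                  # empty partition: hand back an empty
        return iter([])            # iterator instead of blowing up
    return iter(process(items))    # `process` is a hypothetical function

rdd = sc.parallelize(data, 4)      # ~equal slices of the input data
result = rdd.repartition(4).mapPartitions(safe_mapper).collect()
```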
32,132,987 | 2015-08-21T05:32:00.000 | 0 | 1 | 0 | 1 | python,rsync,fabric | 35,467,047 | 1 | false | 0 | 0 | Best way would be running the script from remotemachine1 if you can. | 1 | 1 | 0 | I need to copy huge files from remotemachine1 to remotemachine2-remotemachine10.
What is the best way to do it? Doing a get on remotemachine1 and then a put to all the remaining machines isn't ideal, as the file is huge and I need to be able to send the Fabric command from my laptop. The remote machines are all on the same network. Or should I do a run('rsync /file_on_remotemachine1 RemoteMachine2:/targetpath/')?
Is there a better way to do this in Fabric ? | How do you use Fabric to copy files between remote machines? | 0 | 0 | 0 | 223 |
32,134,565 | 2015-08-21T07:22:00.000 | 1 | 0 | 0 | 0 | python,flask | 32,181,469 | 1 | true | 1 | 0 | Your problem is almost certainly that you have set up 5 different logging handlers as well as 5 different loggers. Python's built-in logging system is a hierarchical logging system (unlike the normal loggers built for NodeJS, for example). All the loggers form a tree at run time, and the log messages bubble up the tree and are handled by the handlers attached to the tree. The normal handler registration registers the handlers at the root of the tree, so each handler sees messages from every logger (which is why your messages are created five times).
The solution is to create one logger per blueprint, but not register any handler for the blueprint's logger. Instead, register one handler at the application level. | 1 | 0 | 0 | I have 5 Flask apps which are running under Blueprint. Each app has its independent logger which writes to stdout. The problem is that whenever any HTTP API is invoked, the log in that API is printed on screen 5 times, but the request is executed only once. How do I fix the logger so that each request is logged only once?
Python 2.7.10
Flask 0.10.1 | Python Blueprint duplicate Logs | 1.2 | 0 | 0 | 291 |
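(A sketch of the fix: one handler registered once at the top of the logging tree, one handler-less logger per blueprint, so each record is emitted exactly once. Logger names are placeholders.)

```python
import logging

# register the single handler once, on the root logger
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.StreamHandler())

# per-blueprint loggers get NO handlers of their own; their records
# propagate up the tree and hit the root handler exactly once
log = logging.getLogger('myapp.blueprint1')
log.info('printed once per request')
```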
32,138,575 | 2015-08-21T10:56:00.000 | 4 | 0 | 1 | 1 | python,terminal,installation,scapy | 34,408,487 | 5 | false | 0 | 0 | Change os.chmod(fname,0755) to os.chmod(fname,0o755) and re-run | 1 | 32 | 0 | I have recently taken up learning networks, and I want to install scapy.
I have downloaded the latest version (2.2.0), and have two versions of python on my computer- 2.6.1 and 3.3.2. My OS is windows 7 64 bit.
After extracting scapy and navigating to the correct folder in the terminal, I was instructed to run "python setup.py install". I get the following error-
File "setup.py", line 35
os.chmod(fname,0755)
................................^
......................invalid
token
(dots for alignment)
How do I solve this problem? | Scapy installation fails due to invalid token | 0.158649 | 0 | 0 | 16,594 |
32,138,651 | 2015-08-21T11:00:00.000 | 0 | 0 | 0 | 0 | python,algorithm,function | 32,138,958 | 4 | false | 0 | 0 | You can compute the distance of chosen points.
1) Search for the minimun distance X value (left and right).
2) Search each points corresponding with X_MIN_LEFT and X_MIN_RIGHT. At the same time you can check the distance with Y and find the minimum Y distance.
That's it. | 1 | 2 | 1 | I need to interpolate a linear function and I can't use numpy or scipy.
I'm given these data points ((0,10), (1,4), (2,3), (3,5), (4,12)) and a point at x = 2.8. The data points form a polyline (linear between each pair of coordinates).
Of course I need to use the closest x-points from the data for 2.8, which are (2,3) and (3,5), because 2.8 lies between 2 and 3.
How do I make a function to find these closest points? | Finding the points nearest to a given x? | 0 | 0 | 0 | 3,022
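(A no-numpy sketch for the question above that both finds the bracketing points and interpolates; the input list must be sorted by x.)

```python
def interpolate(points, x):
    # walk consecutive pairs until x falls between two x-values
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / float(x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError('x is outside the data range')

pts = [(0, 10), (1, 4), (2, 3), (3, 5), (4, 12)]
interpolate(pts, 2.8)   # uses (2,3) and (3,5) -> 4.6
```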
32,138,885 | 2015-08-21T11:13:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-2.4,squish | 32,174,872 | 2 | false | 0 | 0 | Even if support wasn't over, I'm not sure that this python version is a criteria for a new kit.
You can do this instead: download a version of python 2.7 and install it to your localhost.
After you did this, open Squish and go to [Edit -> Preferences - PyDev -> Interpreter - Python] and choose the python.exe from where you installed it.
After you do this, Squish will add the new libraries from 2.7.
Close Squish and re-open it and you will have Squish running with Python 2.7. | 1 | 0 | 0 | I discovered a problem: Squish works with Python 2.4, but many necessary functions and libraries that I need are only in Python 2.7.
Support is over, and I cannot ask for a version of Squish with Python 2.7.
Can somebody tell me a solution to this problem, or share a link to a Squish version with integrated Python 2.7? | Squish change python 2.4 on python 2.7 | 0.099668 | 0 | 0 | 1,060
32,139,162 | 2015-08-21T11:28:00.000 | 1 | 1 | 1 | 1 | python,python-2.7,python-3.x | 32,139,340 | 2 | false | 0 | 0 | Unless you have done something to specifically allow this, such as SSH into machine B first, you cannot do this.
That's a basic safety consideration. If any host A could execute any script on host B, it would be extremely easy to run malicious code on other machines. | 1 | 0 | 0 | Details:
I have an xxx.py file on machine B.
I am trying to execute that xxx.py file from machine A using a Python script. | How to run the python file in remote machine directory? | 0.099668 | 0 | 0 | 85
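(If SSH access to machine B is set up, as the answer requires, a one-liner from machine A could look like this; the user, host, and path are placeholders.)

```python
import subprocess

# runs xxx.py on machine B over SSH and waits for it to finish
subprocess.check_call(['ssh', 'user@machineB', 'python /path/to/xxx.py'])
```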
32,139,820 | 2015-08-21T12:02:00.000 | 1 | 0 | 1 | 0 | python,windows,pyqt,32bit-64bit | 32,139,874 | 1 | false | 0 | 1 | Yes, I would guess you're right: since Python will link to PyQt, they should be based on the same architecture! | 1 | 1 | 0 | I'm running Windows 8.1 64bit but have Python 2.7.10 32bit. Now I'm trying to install PyQt using Windows installer. What version should I download?
PyQt4-4.11.4-gpl-Py2.7-Qt4.8.7-x64.exe Windows 64 bit installer
PyQt4-4.11.4-gpl-Py2.7-Qt4.8.7-x32.exe Windows 32 bit installer
I have 64-bit Windows but 32-bit Python, and I want to make 32-bit executables.
I think that I should install the 32-bit one; is that right? | What version of PyQt should I install (32 vs 64) | 0.197375 | 0 | 0 | 1,155
32,141,887 | 2015-08-21T13:42:00.000 | 1 | 1 | 1 | 0 | python,mocking | 32,206,732 | 2 | true | 0 | 0 | Finally I managed to install "mock" offline.
Step-by-step guide follows (I use Python 2.7):
Download necessary packages provided in .tar.gz archives:
mock, setuptools, pbr, six, funcsigs
Unpack all of the archives
Install modules one by one in the following order: setuptools, pbr, six, funcsigs, mock. To install a module, chdir to the folder it was unpacked to and execute python setup.py install | 1 | 0 | 0 | I need to use python "mock" library for unit testing.
Is it possible to install the library without connecting my development machine to the Internet?
Thx in advance. | Python mock offline standalone installation | 1.2 | 0 | 0 | 343 |
32,144,495 | 2015-08-21T15:48:00.000 | 0 | 0 | 1 | 0 | variables,random,python-3.2 | 32,144,615 | 1 | true | 0 | 0 | Well first you will need to open the file
file = open('filename.txt', 'w')
Then you need to read the file you can read each line into a list by doing words = file.readlines (this can also be done with a loop or in a number of other ways)
Then you can use the random module to generate a random number and get the word from that index in the words list. Then just store that word to a variable.
There are other ways of doing this but this is one of the easiest. | 1 | 0 | 0 | I am trying to code a program which reads a file, which will contain many words (one word per line), then selects a random line (word) from the file, so I am able to store it in a variable for me to use later on.
I don't really know where to start as I am not very experienced. Any help would be appreciated. | Pick a random line from a text file and store it in a variable (Python 3) | 1.2 | 0 | 0 | 128 |
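(Putting the answer's steps together; 'words.txt' is a placeholder filename.)

```python
import random

with open('words.txt', 'r') as f:    # one word per line
    words = f.read().splitlines()

word = random.choice(words)          # the randomly picked word, stored
```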
32,145,217 | 2015-08-21T16:28:00.000 | 3 | 0 | 1 | 1 | python,windows | 32,146,376 | 2 | false | 0 | 0 | I used MicrosoftFixit.ProgramInstallUninstall and I was able to remove Python34 and then it reinstalled without any problems. | 2 | 4 | 0 | I installed Python 3.4.3 over 3.4.2 on Windows 7 and got problems with IDLE not starting.
When I use the Windows uninstaller via the control panel I get the message:
"There is a problem with this Windows Installer package a program required for this install to complete could not be run. Contact your support personnel or package vendor."
If I try to remove Python via the msi file then I get the same message.
There is no Python34 directory on my machine. I noticed that there is an entry in the registry HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.4\Modules. I didn't want to mess with my registry, but can I safely delete this entry? Is there any more to delete? | I installed Python 3.4.3 over 3.4.2 on Windows 7... and now I cannot uninstall Python | 0.291313 | 0 | 0 | 331 |
32,145,217 | 2015-08-21T16:28:00.000 | 1 | 0 | 1 | 1 | python,windows | 32,147,159 | 2 | false | 0 | 0 | Had a similar problem. This is what I did:
Restart computer (kill any running processes of Python)
Delete the main Python folder under C drive.
Using CCleaner (or a similar application), use the Tools -> Uninstall feature to remove Python (if it is still there after deleting the folder)
Then go to the Registry window in CCleaner and clean the registry. Python should now be completely gone from your computer. | 2 | 4 | 0 | I installed Python 3.4.3 over 3.4.2 on Windows 7 and got problems with IDLE not starting.
When I use the Windows uninstaller via the control panel I get the message:
"There is a problem with this Windows Installer package a program required for this install to complete could not be run. Contact your support personnel or package vendor."
If I try to remove Python via the msi file then I get the same message.
There is no Python34 directory on my machine. I noticed that there is an entry in the registry HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.4\Modules. I didn't want to mess with my registry, but can I safely delete this entry? Is there any more to delete? | I installed Python 3.4.3 over 3.4.2 on Windows 7... and now I cannot uninstall Python | 0.099668 | 0 | 0 | 331 |
32,145,958 | 2015-08-21T17:17:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,tkinter | 32,147,638 | 1 | false | 0 | 1 | You cannot do this. This is partly why interpreted languages like Python exist: you write a platform-agnostic program that can run on any platform (in Python, Tcl, Ruby, Groovy, JavaScript, etc.), then run it with a platform-specific runtime. | 1 | 3 | 0 | I have an ex.py file written in Python using tkinter. I want to create a single executable which can run on any platform, rather than creating a separate executable for each platform (Linux, Windows, Mac). | creating a platform independent GUI executable in python | 0.379949 | 0 | 0 | 670
32,146,943 | 2015-08-21T18:19:00.000 | 0 | 0 | 1 | 0 | python,printing,wxpython,receipt | 32,464,557 | 1 | true | 0 | 0 | Well i figure out some sort of solution :
receipt printing is impossible with wxPython , so, raw printing with escape sequences would be better option
os.system("echo ' some text ' | lpr -o raw" )
First, initialize the printer:
os.system("echo ' \x1B\x40' | lpr -o raw" )
For bold letters, with ESC codes:
os.system("echo ' \x1BE some text \x1BF ' | lpr -o raw" )
For double width:
os.system("echo ' \x1BW\01 some text ' | lpr -o raw" )
For underline:
os.system("echo ' \x1B\035 some text \x1B\034' | lpr -o raw" )
and many more options can be used with ESC codes | 1 | 0 | 0 | I have prepared a small program for a retail shop, and have to print out receipts (using a TVS MSP Star 240 dot matrix printer with a paper roll).
I use the wx.Printout() class for printing. The print preview is OK, but the actual printing is different and awkward:
1. I am using a paper roll and don't know how to call end printing / OnEndPrinting() / cut the paper?
2. How do I correct the text shape, or which font should I use for the actual printout?
I am new to programming.
Please help and suggest appropriate code for this.
Thanks in advance!! | Printout for receipt printer | 1.2 | 0 | 0 | 1,031
32,148,219 | 2015-08-21T19:46:00.000 | 2 | 0 | 0 | 0 | python-2.7,scikit-learn,pca | 32,236,196 | 1 | false | 0 | 0 | Truncated or partial means that you only calculate a certain number of components/singular vector-value pairs (the strongest ones).
In scikit-learn parlance, "partial" usually refers to the fact that a method is online, meaning that it can be fed with partial data. The more data you give it, the better it will converge to the expected optimum.
Both can be combined, and have been, also in sklearn: sklearn.decomposition.IncrementalPCA does this. | 1 | 1 | 1 | Can somebody tell me the difference between truncated SVD as implemented in sklearn and partial SVD as implemented in, say, fbpca?
I couldn't find a definitive answer as I haven't seen anybody use truncated SVD for principal component pursuit (PCP). | Truncated SVD vs Partial SVD | 0.379949 | 0 | 0 | 1,242 |
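(A small sketch contrasting the two in scikit-learn terms; the component count, shapes, and random data are arbitrary.)

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD, IncrementalPCA

X = np.random.rand(1000, 50)

# "truncated": only the 5 strongest components, computed in one shot
svd = TruncatedSVD(n_components=5).fit(X)

# "partial"/online in the other sense: feed the data chunk by chunk
ipca = IncrementalPCA(n_components=5)
for chunk in np.array_split(X, 10):
    ipca.partial_fit(chunk)
```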
32,148,286 | 2015-08-21T19:51:00.000 | 1 | 0 | 1 | 0 | python,pycharm,remote-access | 32,148,371 | 1 | false | 0 | 0 | Fortunately pycharm has ssh plugin that you can clone server file , editing localy then sync with server by sftp protocol
Edit
In the menu Tools -> Deployment -> Configuration, add an SFTP server and log in with your SSH (PuTTY) username and password. The next steps are obvious. | 1 | 3 | 0 | Does PyCharm provide an option of remotely modifying a Python program and running it on a different server, which itself runs a full version of PyCharm?
If I want to simulate this process without PyCharm, what I would do is: Edit the code locally or use vim+ssh, and then run ssh+python. I want to have a GUI-based and far more efficient way of doing this. Does anybody know if PyCharm is capable of that?
So what I imagine is: editing .py files locally, and when I choose to run them, they would run in the PyCharm on the server side. Additionally, it would be great to have the option that when there is a matplotlib figure, the server side passes the figure to the client side to display it. Maybe this is too much to ask but I imagine it would be very handy!
I figured out "deployment", but as far as I realised, deployment assumes that the remote server does not have PyCharm and starts installing everything again on the server so that the Python on the server side has the necessary libraries. Any clues? | How to run PyCharm remotely, using a local version of PyCharm? | 0.197375 | 0 | 0 | 1,869
32,148,604 | 2015-08-21T20:14:00.000 | 2 | 0 | 0 | 0 | python,wxpython | 32,154,356 | 2 | false | 0 | 1 | I just tried "Shift-prt scr" on my Windows 8.1 to capture a screen with a menu shown (used the wxPython demo) and it worked for me.
You can also use a screen capture utility, e.g. I often use IrfanView, to do this, with it I set a timer to capture things which go away when the window looses focus. | 1 | 1 | 0 | Is it possible to capture a screenshot of a wxpython GUI program when the program menu on the menu bar drops down?. I attempted to do this by pressing the print screen key on my keyboard but it didn't work.
Nevertheless, the print screen function key works fine when the menu on the menubar does NOT drop down.
I noticed I can take screenshots of other GUI programs on my system when their menu options are seen .
If this is possible, what codes can I place in my program to facilitate a successful screenshot when any menu is showing? | How to take a screenshot of a Wxpython GUI program when menu drops down on menubar | 0.197375 | 0 | 0 | 320 |
32,150,986 | 2015-08-22T00:11:00.000 | 1 | 0 | 0 | 0 | python,django | 32,151,156 | 1 | false | 1 | 0 | You can't really do list comprehensions inside Django templates. You should do this in your view and pass the list in your context to the template. | 1 | 0 | 0 | Is there a way to get a list of a specific attribute from a list of model objects, {{ object_list }} using the Django Template Language?
Similar to this in Python?
[o.my_attr for o in object_list] | get a list of a specific attribute from a list of model objects in a Django template | 0.197375 | 0 | 0 | 43 |
32,150,994 | 2015-08-22T00:13:00.000 | 1 | 0 | 0 | 0 | python,text,tkinter,undo-redo | 32,151,274 | 1 | true | 0 | 1 | The text widget has both an edit_undo() and edit_redo() method, which is what the built-in bindings use. You can call these methods from a button or menu item if you wish. | 1 | 0 | 0 | I know that you can set undo=True for a Text widget, and then press CTRL + Z and CTRL + Y for undo and redo.
But I was wondering if there was a function I could bind to a button or something like that. | Calling a function when undoing or redoing in a Text widget | 1.2 | 0 | 0 | 115 |
32,153,084 | 2015-08-22T06:25:00.000 | 0 | 0 | 0 | 0 | pytest,python-multithreading | 32,164,189 | 1 | true | 0 | 0 | @Bruno Oliveira is right. I try to use a clean py.test to test flickr/picasa auth, and it's able to open a web-browser. The problem may lies in other custom library that being developed.
Thanks!
PS: I will report it here if I found why webbrowser.open won't work | 1 | 0 | 0 | I am using Py.test to implement integration testing for uploading photos into Picasa. However, the authentication method from oauth2client.flow_from_clientsecrets (that should open a web-browser to authentication URL), simply stopped.
I am not sure about why it occur though, is it because from py.test we can't create/span new process? This is because oauth2client.flow_from_clientsecrets will call webbrowser.open that in turn will call subprocess.Popen | How to solve thread blocking in Py.test? | 1.2 | 0 | 1 | 567 |
32,154,052 | 2015-08-22T08:30:00.000 | 0 | 0 | 0 | 0 | python,selenium,urllib2 | 32,155,740 | 1 | false | 1 | 0 | Is there no request at all, or a GET request? I suspect there is a GET request. In that case, did you turn Persist on in Firebug's Net tab? Possibly the POST request was hidden after redirects. | 1 | 0 | 0 | I'm writing a script to download a pdf automatically.
Firstly, I open the url manually, it will redirect to a login website.
and I type my username and password, and click "submit".
Then download will start directly.
During this procedure, I check the firebug, I find there is no post while I click "submit".
I'm not familiar with this behavior, that means the pdf(300K) is saved before I submit?
If there is no post, then I must use some tool like selenium to simulate this "click"? | No post request after submitting a form when I want to download a PDF | 0 | 0 | 1 | 44 |
32,158,738 | 2015-08-22T17:12:00.000 | 3 | 0 | 1 | 0 | python,frameworks,cross-platform,desktop-application,electron | 62,152,039 | 4 | false | 1 | 0 | With electron-django app I am developing I used pyinstaller to get my django app compiled, then just spawn its child process and it works, please notice pyinstaller may not recognize all modules or dist folder. there are plenty examples online on how to get a workaround for that filling the .specs file and amending the dist folder adding the files you may need.
pyinstaller usually tells you what went wrong in the terminal. hope it helps | 1 | 54 | 0 | I am trying to write a cross-platform desktop app using web technologies (HTML5, CSS, and JS). I took a look at some frameworks and decided to use the Electron framework.
I've already done the app in Python, so I want to know if is possible to write cross-platform desktop applications using Python on the Electron framework? | Python on Electron framework | 0.148885 | 0 | 0 | 77,344 |
32,160,357 | 2015-08-22T20:15:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,debugging | 33,048,886 | 3 | false | 0 | 0 | I ran into this problem as well using PTVS with VS2013 update 4 inside a Django project.
So far the only way I have been able to get this to work is to right click on the project name in solution explorer, select properties and change the debug Launch mode to "Standard Python Launcher" and then right-clicking on the python script in solution explorer and choosing "Start with Debugging". Otherwise I get an interpreter not found error even though it is specified in my debug options in the project properties.
I will update post if I find a better solution. | 3 | 2 | 0 | I am using Visual Studios 2015 for creating python programs. Each file is a separate program and hence I wish to debug each one of them separately.
I know you can do this by going to :
Project Properties -> General -> Start Up File and type in my file name each time.
I want to know if there is a simpler way, that just runs the current .py file everytime I hit F5 Just like how it is in IDLE | Debug a single python file in Visual Studios with F5 | 0 | 0 | 0 | 1,662 |
32,160,357 | 2015-08-22T20:15:00.000 | 1 | 0 | 1 | 0 | python,visual-studio,debugging | 41,341,929 | 3 | true | 0 | 0 | Looks like this isnt possible. Right clicking on the file seems to be the only way to run a python file in visual studios. | 3 | 2 | 0 | I am using Visual Studios 2015 for creating python programs. Each file is a separate program and hence I wish to debug each one of them separately.
I know you can do this by going to :
Project Properties -> General -> Start Up File and type in my file name each time.
I want to know if there is a simpler way, that just runs the current .py file everytime I hit F5 Just like how it is in IDLE | Debug a single python file in Visual Studios with F5 | 1.2 | 0 | 0 | 1,662 |
32,160,357 | 2015-08-22T20:15:00.000 | 1 | 0 | 1 | 0 | python,visual-studio,debugging | 44,599,880 | 3 | false | 0 | 0 | Key thing is to deprive VS of a startup file and then it's going to run whatever file is in the active tab, wherever it came from.
Open project properties and delete the name of the startup file
so that the project has none
Open any .py file in VS (or drag&drop)
Make sure it's active tab (cursor is in it)
Right-Click, Start with Debugging | 3 | 2 | 0 | I am using Visual Studios 2015 for creating python programs. Each file is a separate program and hence I wish to debug each one of them separately.
I know you can do this by going to :
Project Properties -> General -> Start Up File and type in my file name each time.
I want to know if there is a simpler way, that just runs the current .py file everytime I hit F5 Just like how it is in IDLE | Debug a single python file in Visual Studios with F5 | 0.066568 | 0 | 0 | 1,662 |
32,165,061 | 2015-08-23T09:21:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-templates,django-views | 40,956,230 | 1 | false | 1 | 0 | You need to give STATIC URL AND STATIC_ROOT values in settings file of django project and then create static folder inside your App folder. Place all your css and js files inside this folder.Now add this tag in your template {% load static %} and give location of css like <link rel="stylesheet" type="text/css" href="{% static 'style.css' %}" />
Example
STATIC_URL='/static/'
STATIC_ROOT=os.path.join(BASE_DIR,"static")'
STATICFILES_DIRS=(os.path.join(BASE_DIR,'Get_Report','static'),) | 1 | 2 | 0 | I am trying to load css file in HTML template in python Django and getting this error
Resource interpreted as Stylesheet but transferred with MIME type application/x-css:
I am using Pydev , Eclipse, Django framework. | Resource interpreted as Stylesheet but transferred with MIME type application/x-css: | 0 | 0 | 0 | 2,551 |
32,168,410 | 2015-08-23T15:37:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,image-processing,video-processing | 32,974,248 | 1 | false | 0 | 0 | It is possible to display multiple videos(e.g 2-videos can be displayed for dual core processor) at a time for that you have use OpenMP. It is possible if your using opencv. And it is pretty easy Only you have to enable OpenMp in your property sheet. | 1 | 1 | 0 | I am trying some video processing exercises, and wondering if there is a way to display multiple video streams into one window a la pyplot.subplot command for the images.
I've tried using subplot syntax, but using it freezes the program, so any alternative source would be much appreciated. | How to display multiple video stream in one window | 0 | 0 | 0 | 727 |
32,170,818 | 2015-08-23T19:48:00.000 | 1 | 1 | 0 | 0 | python,ruby-on-rails,postgresql,raspberry-pi | 32,172,061 | 1 | true | 1 | 0 | There are many "easy" ways, depending on your skills.
Maybe: "Write triggers, which are sending the notify on insert/update" is the hint you need? | 1 | 2 | 0 | I have a rails application, and when there is an update to one of the rows in my database, I want to run a python script which is on a raspberry pi (example: lights up a LED when a user is created). I'm using PostgreSQL and have looked into NOTIFY/LISTEN channels, but can't quite figure that out. Is there an easy way to do this? The raspberry pi will not be on the same network as the rails application. | Trigger python script on raspberry pi from a rails application | 1.2 | 0 | 0 | 61 |
32,172,766 | 2015-08-23T23:58:00.000 | 0 | 0 | 0 | 0 | c#,ironpython,keil | 32,383,582 | 1 | false | 0 | 0 | Could be problems with cr/lf.
Helpful would be a binary diff of the parsed and new created file. You could get more help if you post a few lines of a binary diff here. | 1 | 0 | 0 | I need to change some settings in Keil uVision project. I did not find how to disable/enable project options through command line.
So I tried to do this by simple parsing .uvproj and .uvopt files with System.Xml in IronPython:
import clr
clr.AddReference('System.Xml')
xml_file = System.Xml.XmlDocument()
xml_file.Load(PATH_TO_UVPROJ_FILE)
xml_file.Save(PATH_TO_UVPROJ_FILE)
The problem is that I can't open parsed .uvproj file in uVision (get error "Cannot read project file").
If I copy all text from parsed .uvproj and past it to newly created file (New-Text Document in Windows Explorer -> rename extnsion to .uvproj -> past copied text -> save file) uVision opens it without error.
Why does this happen? | IronPython: Can't open Keil uVision .uvproj file edited with System.Xml | 0 | 0 | 0 | 1,224 |
32,173,695 | 2015-08-24T02:27:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,import,pyqt4 | 32,173,862 | 1 | true | 0 | 1 | Most likely you installed PyQt4 and pyqt4-dev-tools for Python 2.x, but not for Python 3.x.
Check if PyQt4 is in your site-packages directory for Python 3.x. For me this is under /usr/lib/python3.4/site-packages/PyQt4.
If it's not there, you need to grab the correct Python 3 version of the packages. What distro are you using? | 1 | 0 | 0 | I've got PyQt4 and pyqt4-dev-tools installed on my raspberry pi but I'm getting
ImportError: No module named PyQt4 on my Raspberry Pi
with the following includes when I run python3
from PyQt4 import QtGui
from PyQt4 import QtCore
I've got another Pi that PyQT4 is found so I'm not sure what I've done wrong on this one. Can anyone tell me what I can do to get Python to find the PyQt4 modules? | ImportError: No module named PyQt4 on my Raspberry Pi | 1.2 | 0 | 0 | 8,273 |
32,177,366 | 2015-08-24T08:10:00.000 | 0 | 0 | 0 | 0 | python,django,django-apps | 32,178,407 | 2 | false | 1 | 0 | I would consider leaving the 3rd party app and try to do customizing inside the project. If that isn't possible, and it requires lots of customizing, maybe there is an alternative to the app you are using?
Other than that, I would go with your 1st option. But your worries are there for a reason. If you decide to make your own fork, you need to take care of bugs and fixes as well. However, with the 1st option I think it will be easier to merge the original into your fork. But don't forget about separation of concerns. Otherwise it will be very hard to maintain. | 2 | 2 | 0 | Let's say I want to heavily customize a third-party Django app, such as django-postman (Add lots of new models, views as well as modifying those existing etc). What would be the best way to do this?
Options I've considered:
Fork the 3rd party repo. Clone locally outside of my django project. Do the updates, push them to the forked repo. Install my own fork into my venv (and add to my requirements.txt) for my django project.
Just clone into a vendors folder of my django project, update the 3rd party app there, and then keep it in the same git repo as the django project.
Either way, I am worried that will no longer be getting updates from the main 3rd party repo (bug fixes, new features etc), or if I merge into the fork (after changing lots) it could be a big headache.
Am I thinking about this in the best way? Is there a smarter way? What do others typically do? | Django: best practise when heavily customizing 3rd party app | 0 | 0 | 0 | 129 |
32,177,366 | 2015-08-24T08:10:00.000 | -1 | 0 | 0 | 0 | python,django,django-apps | 32,178,828 | 2 | false | 1 | 0 | If changes that you are making are not changing way how 3rd party app is functioning, but it is more like adding new features or additional ways to that app, consider contacting with autor of that app to implement your changes into it. That way you will have lot less work when updating this application. | 2 | 2 | 0 | Let's say I want to heavily customize a third-party Django app, such as django-postman (Add lots of new models, views as well as modifying those existing etc). What would be the best way to do this?
Options I've considered:
Fork the 3rd party repo. Clone locally outside of my django project. Do the updates, push them to the forked repo. Install my own fork into my venv (and add to my requirements.txt) for my django project.
Just clone into a vendors folder of my django project, update the 3rd party app there, and then keep it in the same git repo as the django project.
Either way, I am worried that will no longer be getting updates from the main 3rd party repo (bug fixes, new features etc), or if I merge into the fork (after changing lots) it could be a big headache.
Am I thinking about this in the best way? Is there a smarter way? What do others typically do? | Django: best practise when heavily customizing 3rd party app | -0.099668 | 0 | 0 | 129 |
32,178,005 | 2015-08-24T08:47:00.000 | 2 | 0 | 0 | 0 | python,c,struct,swig,return-type | 32,178,123 | 1 | true | 0 | 1 | You need to %include the header first. You need the headers for the nested structs too, in dependency order!
After you've done that, Swig should automatically wrap the struct so that a call to your function will return a proxy object with the appropriate members.
A typemap is for when you want to change Swig's default behaviors. | 1 | 0 | 0 | I have a C-function which returns a struct data type with several items in it (size_t, char*, int, unsigned and other structs). When I call this function there is no output in python. After some googling I think the problem is that I didn't declare the data type in my interface file. But this turns out to be not that easy. What is the right approach: typemaps or just a simple typedef?
Can someone help me? | Return Struct data type from C-function in Python via SWIG | 1.2 | 0 | 0 | 667 |
32,181,180 | 2015-08-24T11:26:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,python-3.4 | 32,181,317 | 1 | false | 0 | 0 | You error should happen if you are putting the file name into the code directly. Then you should fix the code and it might be good to use the r marker: r'd:\reports\2015\a.txt'. | 1 | 0 | 0 | I have a logging function which writes error messages to a file.
The cause of a particular error is a file not found, not because the file does not exist, but because of a typo error of backslashes.
For example, my application code is trying to open a file from the string 'd:\\reports\2015\\a.txt', which has a missing '\' before 2015. The except clause passes control to the logging function with the filename as an argument, but when the logging function tries to write the string containing the erroneous file name, it too crashes with a charmap codec error.
How do I write out safely anything and everything that is contained within a pair of quotation marks?
From the comments received thus far, I guess my question is not clear.
What I am asking is not about file names per se. What I trying to solve is an error logging function - writing an error message to a file - that will work no matter what the error message is. In the example above, the error message passed to the logging function contains an illegal string. | How to handle illegal strings? | 0 | 0 | 0 | 107 |
32,182,492 | 2015-08-24T12:35:00.000 | 0 | 0 | 1 | 1 | cmd,pip,python-3.4,windows-10 | 32,182,571 | 1 | false | 0 | 0 | Are you running the command line as administrator? | 1 | 0 | 0 | I have been having trouble installing pip modules. I have python 3.4 and windows 10. When i type into cmd python pip install [package], the computer comes up with an error saying "This app can't run on your pc" and cmd returns "Access is denied."
would this be a windows 10 incompatibility or is there something im missing/doing wrong?
Thanks in advance for help | Trying to install python modules on cmd with windows 10 - Access is denied | 0 | 0 | 0 | 1,078 |
32,183,164 | 2015-08-24T13:09:00.000 | 13 | 0 | 1 | 0 | python,refactoring,ipython-notebook,readability,jupyter | 34,528,556 | 4 | false | 0 | 0 | We are having the similar issue. However we are using several notebooks for prototyping the outcomes which should become also several python scripts after all.
Our approach is that we put aside the code, which seams to repeat across those notebooks. We put it into the python module, which is imported by each notebook and also used in the production. We iteratively improve this module continuously and add tests of what we find during prototyping.
Notebooks then become rather like the configuration scripts (which we just plainly copy into the end resulting python files) and several prototyping checks and validations, which we do not need in the production.
Most of all we are not afraid of the refactoring :) | 1 | 55 | 0 | Jupyter (iPython) notebook is deservedly known as a good tool for prototyping the code and doing all kinds of machine learning stuff interactively. But when I use it, I inevitably run into the following:
the notebook quickly becomes too complex and messy to be maintained and improved further as notebook, and I have to make python scripts out of it;
when it comes to production code (e.g. one that needs to be re-run every day), the notebook again is not the best format.
Suppose I've developed a whole machine learning pipeline in jupyter that includes fetching raw data from various sources, cleaning the data, feature engineering, and training models after all. Now what's the best logic to make scripts from it with efficient and readable code? I used to tackle it several ways so far:
Simply convert .ipynb to .py and, with only slight changes, hard-code all the pipeline from the notebook into one python script.
'+': quick
'-': dirty, non-flexible, not convenient to maintain
Make a single script with many functions (approximately, 1 function for each one or two cell), trying to comprise the stages of the pipeline with separate functions, and name them accordingly. Then specify all parameters and global constants via argparse.
'+': more flexible usage; more readable code (if you properly transformed the pipeline logic to functions)
'-': oftentimes, the pipeline is NOT splittable into logically completed pieces that could become functions without any quirks in the code. All these functions are typically needed to be only called once in the script rather than to be called many times inside loops, maps etc. Furthermore, each function typically takes the output of all functions called before, so one has to pass many arguments to each function.
The same thing as point (2), but now wrap all the functions inside the class. Now all the global constants, as well as outputs of each method can be stored as class attributes.
'+': you needn't to pass many arguments to each method -- all the previous outputs already stored as attributes
'-': the overall logic of a task is still not captured -- it is data and machine learning pipeline, not just class. The only goal for the class is to be created, call all the methods sequentially one-by-one and then be removed. On top of this, classes are quite long to implement.
Convert a notebook into python module with several scripts. I didn't try this out, but I suspect this is the longest way to deal with the problem.
I suppose, this overall setting is very common among data scientists, but surprisingly I cannot find any useful advice around.
Folks, please, share your ideas and experience. Have you ever encountered this issue? How have you tackled it? | Best practices for turning jupyter notebooks into python scripts | 1 | 0 | 0 | 13,358 |
32,184,638 | 2015-08-24T14:19:00.000 | 1 | 0 | 0 | 1 | python,tcp,apache-spark,pyspark,rdd | 32,192,487 | 1 | false | 0 | 0 | It seems like what your looking for might be best done with something like reduceByKey where you can remove the duplicates as you go for each sequence (assuming that the resulting amount of data for each sequence isn't too large, in your example it seems pretty small). Sorting the results can be done with the standard sortBy operator.
Saving the data out to HDFS is indeed done in parallel on the workers, forwarding the data to the Spark client app would create a bottleneck and sort of defeat the purpose (although if you do want to bring the data back locally you can use collect provided that the data is pretty small). | 1 | 0 | 1 | I am trying to 'follow-tcp-stream' in Hadoop sequence file that structured as follows:
i. Time stamp as key
ii. Raw Ethernet frame as value
The file contains a single TCP session, and because the record is very long, sequence-id of TCP frame overflows (which means that seq-id not necessarily unique and data cannot be sorted by seq-id because then it will get scrambled).
I use Apache Spark/Python/Scapy.
To create the TCP-stream I intended to:
1.) Filter out any non TCP-with-data frames
2.) Sort the RDD by TCP-sequence-ID (within each overflow cycle)
3.) Remove any duplicates of sequence-ID (within each overflow cycle)
4.) Map each element to TCP data
5.) Store the resulting RDD as testFile within HDFS
Illustration of operation on RDD:
input: [(time:100, seq:1), (time:101, seq:21), (time:102, seq:11), (time:103, seq:21), ... , (time:1234, seq=1000), (time:1235, seq:2), (time:1236, seq:30), (time:1237, seq:18)]
output:[(seq:1, time:100), (seq:11, time:102), (seq:21, time:101), ... ,(seq=1000, time:1234), (seq:2, time:1235), (seq:18, time:1237), (seq:30, time:1236)]
Steps 1 and 4 or obvious. The ways I came up for solving 2 and 3 required comparison between adjacent elements within the RDD, with the option to return any number of new elements (not necessarily 2, Without making any action of course - so the code will run in parallel). Is there any way to do this? I went over RDD class methods few times nothing came up.
Another issue the storage of the RDD (step 5). Is it done in parallel? Each node stores his part of the RDD to different Hadoop block? Or the data first forwarded to Spark client app and then it stores it? | Apache Spark RDD transformations with 2 elements as input | 0.197375 | 0 | 0 | 237 |
32,184,915 | 2015-08-24T14:32:00.000 | 1 | 0 | 0 | 0 | python,scipy,sparse-matrix,eigenvector,eigenvalue | 34,873,978 | 1 | false | 0 | 0 | I agree with @pv. If your matrix S was symmetric, you could see it as a laplacian matrix of the matrix I - S. The number of connected components of I - S is the number of zero-eigenvalues of this matrix (i.e, the dimension of the space associated to eigenvalue 1 of S). You could check the number of connected components of the graph whose similarity matrix is I - S*S' for a start, e.g. with scipy.sparse.csgraph.connected_components. | 1 | 20 | 1 | I have a very large sparse matrix which represents a transition martix in a Markov Chain, i.e. the sum of each row of the matrix equals one and I'm interested in finding the first eigenvalue and its corresponding vector which is smaller than one. I know that the eigenvalues are bounded in the section [-1, 1] and they are all real (non-complex).
I am trying to calculate the values using python's scipy.sparse.eigs function, however, one of the parameters of the functions is the number of eigenvalues/vectors to estimate and every time I've increased the number of parameters to estimate, the numbers of eigenvalues which are exactly one grew as well.
Needless to say, I am using the which parameter with the value 'LR' in order to get the k largest eigenvalues, with k being the number of values to estimate.
Does anyone have an idea how to solve this problem (finding the first eigenvalue smaller than one and its corresponding vector)? | Calculating eigen values of very large sparse matrices in python | 0.197375 | 0 | 0 | 1,334 |
32,186,447 | 2015-08-24T15:46:00.000 | 1 | 0 | 0 | 0 | python,tkinter,tkinter-canvas | 32,186,808 | 1 | false | 0 | 1 | You can put a tag on the line and then bind Enter or Button-1 to the tag with the tag_bind method of the canvas widget. | 1 | 1 | 0 | Is there any way to have an event for hovering/clicking on a drawn line on the Canvas widget (not the widget itself)? | Run event on hovering on drawn line on canvas in python's tkinter gui | 0.197375 | 0 | 0 | 59 |
32,187,398 | 2015-08-24T16:38:00.000 | 0 | 0 | 1 | 1 | ipython-notebook | 50,180,440 | 2 | false | 0 | 0 | I just got the same problem when I upgrade my python2.7 to python3 by using homebrew yesterday. Tried googled suggestions but no one really solved the problem. Then I checked the first line of my pip, pip3, ipython, ipython2, ipython3 and jupyter. Found the problem actually is that the first lines of jupyter and ipython2 still point to the old python2.7 path "/usr/local/opt/python/bin/python2.7" which is not exist anymore. So, I just changed the first line to "#!/usr/local/opt/python/bin/python3.6" for jupyter and the problem solved. | 1 | 2 | 0 | I have tried to open ipython notebook without luck and don't know why?
When i type the command "ipython notebook", the output i receive is :
-bash: /usr/local/bin/ipython: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
Any help ? | Ipython notebook - fails to open | 0 | 0 | 0 | 2,936 |
32,188,979 | 2015-08-24T18:14:00.000 | 1 | 0 | 0 | 0 | python,django,django-apps | 32,213,209 | 1 | true | 1 | 0 | My sugestion is to create a third model, called ArtEvent and make this model points to Art and Event, this way you can create an especific app to manage events and then link everything. For example, when creating a new ArtEvent you redirects the user for the Event app, to enable him to create a new event. Then redirects again to the Art app with the created event, create a new ArtEvent and links those objects.
In future lets suppose that you want to add events to another model, like User, if you follow the same strategy you can separate what is UserEvent specific, and maintain what is common between ArtEvent and UserEvent. | 1 | 0 | 0 | I am implementing a project using Django. It's a site where people can view different Art courses and register. I am having trouble implementing the app as reusable applications. I already have a standalone App which takes care of all the aspect of Arts. Now I want to create another application where an admin create various events for the Arts in the system. conceptually these two should be a standalone apps. Event scheduling is pretty general use case and I want to implement in a way where it can be used for scheduling any kind of Event.
In my case, those Events are Art related events. I don't want to put a foreign key to Art model in my Event model. how can I make it reusable so that it would work for scheduling Events related to any kind of objects. | regarding Django philosphy of implementing project as reusable applications | 1.2 | 0 | 0 | 49 |
32,189,476 | 2015-08-24T18:43:00.000 | 0 | 0 | 1 | 0 | python,pycharm,syntax-highlighting | 35,522,990 | 2 | false | 0 | 0 | Configuring Colors and Fonts
With PyCharm, you can maintain your preferable colors and fonts layout for syntax and error highlighting in the editor, search results, Debugger and consoles via font and color schemes.
PyCharm comes with a number of pre-defined color schemes. You can select one of them, or create your own one, and configure its settings to your taste. Note that pre-defined schemes are not editable. You have to create a copy of a scheme, and then change it as required.
You can view how the new scheme looks in the editor. To do that, just click Apply without closing the Settings dialog box.
To configure color and font scheme
Go to file -> Open Settings, and under the Editor node, click Colors & Fonts.
Select the desired scheme from the Scheme name drop-down list.(Darcula for black screen)
If you need to change certain settings of the selected scheme, create its copy. To do that, click Save as button, and type the new scheme name in the dialog box. | 1 | 7 | 0 | I'm not entirely sure if it's called syntax highlighting, but when I have my cursor over certain words in the editor, it will highlight other occurrences of it. The problem is, with the theme I'm using (warm neon), the highlighting is quite blinding on my screen. To be clear, I'm not talking about selecting text with the cursor, I mean when I have my cursor in the middle of certain words, it will change the background and font color of that word, and do so with other occurrences.
How do I adjust the color? I can't seem to figure it out in Preferences. | Pycharm: How to adjust color of variable/syntax highlighting? | 0 | 0 | 0 | 4,717 |
32,193,277 | 2015-08-24T23:25:00.000 | 2 | 0 | 0 | 0 | python,nginx,flask | 32,193,416 | 1 | true | 1 | 0 | Since generating the file appears to have no relation to the request / response cycle of a Flask app, don't use Flask to serve it. If it does require the Flask app to actively do something to it for every request, then do use Flask to serve it. | 1 | 0 | 0 | I understand the concept that nginx should host my static files and I should leave Flask to serving the routes that dynamically build content. I don't quite understand where one draws the line of a static vs dynamic file, though.
Specifically, I have some json files that are updated every 5 minutes by a background routine that Flask runs via @cron.interval_schedule and writes the .json to a file on the server.
Should I be building routes in flask to return this content (simply return the raw .json file) since the content changes every five minutes, or should have nginx host the json files? Can nginx handle a file that changes every five minutes with it's caching logic? | Should a json file that changes every 5 minutes be hosted by Flask or nginx? | 1.2 | 0 | 0 | 98 |
32,194,926 | 2015-08-25T03:07:00.000 | 0 | 0 | 1 | 0 | python,node.js,express,visual-studio-2013 | 32,195,626 | 1 | true | 0 | 0 | GYP_MSVS_VERSION should be set to the version, not the path. So in your case the command would be set GYP_MSVS_VERSION=2013. | 1 | 0 | 0 | I installed jsdom 3.1.2 module which has dependency on contextify. So I am now trying to install contextify but it show error in cmd:
key error: "C:\Program Files (x86)\Microsoft visual studio 2012.0\VC\bin
I already installed python 2.7 and set environment variable name is "Pythonpath" and values is "c:\python27". I also installed MS visual studio 2013 desktop and set the environment variable: variable name is "GYP_MSVS_VERSION" and variable values is "C:\Program Files (x86)\Microsoft visual studio 2012.0\VC\bin"
Can somebody tell me where is the problem? Thank you. | Error installing "Contextify" module in node.js in windows 8 | 1.2 | 0 | 0 | 33 |
32,195,042 | 2015-08-25T03:22:00.000 | 6 | 0 | 1 | 0 | python,text,sublimetext2,kivy | 60,463,882 | 2 | false | 0 | 1 | For enabling highlighting of Kivy language in Sublime Text make following steps:
Press Ctrl + Shift + P
Write IP (install package)
Chose "Install Package"
Write KV
Found KIVY install pack.
Press it.
In right bottom part of window Sublime Text you will seen "Plain Text". Press this and in this Syntax Bar choose Kivy Syntax.
Maybe it's not what you looking for, but when I looking solutions for my problem I found this topic and here was solution without method explanation. | 1 | 2 | 0 | I've tried everything to get this to work and nothing worked. I tried installing all of the binary directories for Kivy but it didn't work. Is there a very simple way to allow Sublime Text to understand Kivy language.
I am using Windows 10 with Python 2.7.10. | How do you add the Kivy directories so Sublime Text can compile using Kivy language? | 1 | 0 | 0 | 2,803 |
32,195,993 | 2015-08-25T05:15:00.000 | 0 | 0 | 0 | 1 | python,rabbitmq,celery | 32,202,757 | 1 | true | 0 | 0 | No, you must reload the workers. | 1 | 0 | 0 | Does celery detect the changes of task code even if task already is prefetched as past task code? | What happened when celery task code was changed before prefetched task executed? | 1.2 | 0 | 0 | 181 |
32,196,417 | 2015-08-25T05:50:00.000 | 0 | 0 | 0 | 0 | python,pandas | 32,196,617 | 1 | false | 0 | 0 | You can investigate the use of one of the built in or available libraries that let python actually perform the browser like operations and record the results, filter them and then use the built in csv library to output the results.
You will probably need one of the lower level libraries:
urllib/urllib2/urllib3
And you may need to override, one or more, of the methods to record the transaction data that you are looking for. | 1 | 0 | 0 | Is it possible if I have a list of url parse them in python and take this server calls key/values without need to open any browser manually and save them to a local file?
The only library I found for csv is pandas but anything for the first part. Any example will be perfect for me. | Collect calls and save them to csv | 0 | 0 | 1 | 30 |
32,200,757 | 2015-08-25T09:49:00.000 | 1 | 0 | 0 | 0 | python,pdf,pdfminer | 32,201,151 | 1 | false | 0 | 0 | PDF is a complex file format which supports many different features and ways of doing things. Your pdfminer app apparently has problems with some of those features, which causes it to misinterpret certain files. Preview on the other hand seems to correctly support everything and was able to correctly read the file into its internal presentation format. When you then re-saved the file, Preview wrote it in the way that it would write the same information. Again, lots of different ways to do the same thing means different programs will do things differently.
Preview apparently has a better, more compatible, more streamlined way to express the same content; and your pdfminer can handle it better. | 1 | 0 | 0 | When I extracted content from a pdf file with 12 pages using my program based on pdfminer, I got wrong result with only 11 pages. I tested it with other files and got right result in most cases.
By accident, I opened it with preview app in OS X Yosemite(v10.10.4), and save it without any other operations. Then the result I got from program was right. I found size of this file was changed from 2m to 300k by preview, but have no idea what it had done.
I tried searching an answer, but most topics are about using export function of preview app to compress pdf file, and seems no one come across the same problem with pdfminer neither.
1, What does preview app do with a pdf file when "save" ?
2, How can I deal with the problem ?
Thanks in advance! | What does preview app of OS X do to help extracting from pdf? | 0.197375 | 0 | 0 | 36 |
32,204,773 | 2015-08-25T13:02:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,sockets,networking,tcp | 32,205,167 | 2 | false | 0 | 0 | Basically, it isn't (shouldn't be) possible for you to connect to your friends private IP through his firewall. That's the point of firewalls :-o
Two solutions - the simplest is a port forwarding rule on his firewall, the second is as you suggest an external server that both clients connect to. | 2 | 1 | 0 | I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him?
Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him? | Connecting to a known external ip and internal ip without port forwarding | 0.099668 | 0 | 1 | 918 |
32,204,773 | 2015-08-25T13:02:00.000 | 3 | 0 | 0 | 1 | python,python-2.7,sockets,networking,tcp | 32,220,457 | 2 | false | 0 | 0 | You cannot do that because of NAT(Network Address Translation). The public ip you see by whatsmyip.com is the public ip of your router. Since different machines can connect to the same router all of them will have the same public ip( that of the router). However each of them have an individual private ip assigned by the router. Each outgoing connection from the private network has to be distinguished hence the router converts the connection(private ip, port) to a (different port) and adds it to the NAT table.
So if you really want to have a working connection, you should have to determine both the internal and external port for both ends and do the port forwarding in the router. Its a bit tricky and hence techniques like TCP hole punching are used. | 2 | 1 | 0 | I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him?
Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him? | Connecting to a known external ip and internal ip without port forwarding | 0.291313 | 0 | 1 | 918 |
32,208,073 | 2015-08-25T15:27:00.000 | 2 | 0 | 1 | 0 | python,anaconda,spyder | 32,208,131 | 1 | false | 0 | 0 | Right click on your script -> Get Info -> then 'open with' in the combobox choose 'other' then locate the binary you want. Once you have chosen your binary click on 'change all' | 1 | 2 | 0 | I am using anaconda and spyder as well as MacOS X Yosemite.
How can I make spyder the default program to open python scripts? When I just click on a script it is opened with TextEdit. When I click on open with I cannot choose spyder.
It is bothering to open spyder and then browse to the correct script. | opening Python scripts with spyder | 0.379949 | 0 | 0 | 269 |
32,209,554 | 2015-08-25T16:41:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,command-prompt,enthought | 43,220,221 | 3 | false | 0 | 0 | After editing each path and creating a new variable for each python version, be sure to rename the python.exe to a unique one. i.e. "python3x" . then you can call it in the command line as "python3x". I am assuming that the original python installed (2X) retains the python.exe of which when you call "python" in the command line, it will show the 2x version | 2 | 0 | 0 | I have uninstalled Python 2.7 and installed Python 3. But, when I type Python on my command prompt I get this :
"Enthought Canopy Python 2.7.9 ........."
How can I run Python 3 from command line or how can I make it default on my computer? I asked Enthought Canopy help and I was told that I can "have Canopy be your default Python only in a "Canopy Command Prompt". Not sure what it means.
edit : Thanks everyone. As suggested, I had to uninstall everything and install Python again. | How to make Python 3 my default Python at command prompt? | 0 | 0 | 0 | 4,199 |
32,209,554 | 2015-08-25T16:41:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,command-prompt,enthought | 61,764,881 | 3 | false | 0 | 0 | You can copy python.exe to python3.exe.
If you are using Anaconda, then you will find it in the sub directory of your environment, for intance, c:\Anaconda\envs\myenvironment. | 2 | 0 | 0 | I have uninstalled Python 2.7 and installed Python 3. But, when I type Python on my command prompt I get this :
"Enthought Canopy Python 2.7.9 ........."
How can I run Python 3 from command line or how can I make it default on my computer? I asked Enthought Canopy help and I was told that I can "have Canopy be your default Python only in a "Canopy Command Prompt". Not sure what it means.
edit : Thanks everyone. As suggested, I had to uninstall everything and install Python again. | How to make Python 3 my default Python at command prompt? | 0 | 0 | 0 | 4,199 |
32,213,796 | 2015-08-25T20:49:00.000 | 1 | 0 | 0 | 1 | python,django,sqlite,twisted,daemon | 32,235,411 | 2 | false | 1 | 0 | No there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work. | 1 | 0 | 0 | I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command.
On the other hand, for user interface, I am writing a web application on the same Django project on the same database (also using Crossbar as websockets and WAMP server). This is a second Django process accessing the same database.
I'm looking for some validation here. Is anything fundamentally wrong to this approach? I'm particularly scared of issues with database (two different processes accessing it via Django ORM). | Twisted + Django as a daemon process plus Django + Apache | 0.099668 | 1 | 0 | 143 |
32,214,747 | 2015-08-25T21:54:00.000 | 0 | 0 | 1 | 0 | python,multithreading,thread-safety,orange | 32,290,990 | 1 | false | 0 | 0 | Most probably yes. I wrote most of the C++ code and I don't think I did any non-thread safe stuff. Actually, you've chosen exactly the two classifiers that are not mine (TreeClassifier is, but SimpleTreeClassifier is not). If their authors followed the general design, they should be safe (I mean the classifiers, not the authors :).
How do you run Python in parallel - despite the global interpreter lock? When we do stuff in parallel, we usually just start separate processes. | 1 | 0 | 0 | I'm making use of the Orange Data Mining Suite in a Python program, I get a Classifier from a Learner and what I want to know is it possible to then use that Classifier in a multi-threaded environment ?
Basically I want to classify a set of results in parallel to make use of multiple CPUs I have at my disposal.
If this depends on the Classifier , the two I am most concerned about are Orange.classification.neural.NeuralNetworkClassifier and Orange.classification.tree.TreeClassifier (specifically the one returned by SimpleTreeLearner) | Are Orange Classifiers Thread-Safe? | 0 | 0 | 0 | 63 |
32,217,773 | 2015-08-26T04:01:00.000 | 0 | 0 | 0 | 0 | python,http,web-scraping,scrapy | 32,217,810 | 2 | false | 1 | 0 | It could be a rate limiter.
However a 400 error generally means that the client request was malformed and therefore rejected by the server.
You should start investigating this first. When your requests start failing, exit your program and immediately start it again. If it starts working, you know that you aren't being rate-limited and that there is in fact something wrong with how your requests are formed later on. | 1 | 0 | 0 | I'm doing a web scrape with Python (using the Scrapy framework). The scrape works successfully until it gets about an hour into the process and then every request comes back with a HTTP400 error code.
Is this just likely to be a IP based rate limiter or scrape detection tool? Any advice on how I might investigate the root cause further? | Python Web Scraping HTTP 400 | 0 | 0 | 1 | 611 |
32,228,920 | 2015-08-26T14:05:00.000 | 0 | 0 | 0 | 0 | windows,input,wxpython,tablet,on-screen-keyboard | 32,413,608 | 1 | false | 0 | 1 | You can create an onscreen keyboard with wxPython for your applications. If you want keyboard to appear when you click your app's wx.TextCtrl, you just need to bind wx.EVT_LEFT_DOWN and/or wx.EVT_LEFT_UP events to it. However, if you want keyboard to appear when any app's input is clicked then it is really hard to achieve, instead you can assign a function key to popup. | 1 | 0 | 0 | I'm developing an app using wxpython for use specifically with a Microsoft surface, which requires text input. Is there a way to automatically bring up the onscreen keyboard when an input box is selected? | Opening onscreen keyboard wxpython | 0 | 0 | 0 | 397 |
32,230,048 | 2015-08-26T14:57:00.000 | 0 | 0 | 0 | 1 | windows-10,python-3.5 | 37,099,911 | 2 | false | 0 | 0 | in python3 print is replaced by print() you can use this | 1 | 3 | 0 | I am a new python user. I need to run scripts written by (remote) coworkers.
My first install of Python is 3.5.0.rc1. It was installed on a Windows 10 machine using the python webinstaller.
On installation, I told the installer to add all Python components, and to add Python to the PATH. I authorized python for all users.
I can load and access Python through the command line. It will respond to basic instructions (>>> 1+1 2).
However, I do not get the expected response from some basic commands (eg, >>>import os followed by >>>print os.getcwd() results in a syntax error rather than in a print of the directory containing the python executable).
Further, I can not get python to execute scripts (eg. >>>python test.py). This results in a syntax error, which seems to point to various places in the script file name. I have tried a quick search of previous questions on StackOverfow, and can't seem to find discussion of what seems to be a failure on this basic of level.
Perhaps I have not loaded all the necessary python modules, or is it something else that I'm missing | Python Scripts on Windows 10 | 0 | 0 | 0 | 7,001 |
32,230,294 | 2015-08-26T15:07:00.000 | 1 | 1 | 0 | 0 | python,jira | 32,234,002 | 2 | false | 1 | 0 | Take a look at JIRA webhooks calling a small python based web server? | 1 | 1 | 0 | Let say I'm creating an issue in Jira and write the summary and the description. Is it possible to call a python script after these are written that sets the value for another field, depending on the values of the summary and the description?
I know how to create an issue and change fields from a python script using the jira-python module. But I have not find a solution for using a python script while editing/creating the issue manually in Jira. Does anyone have an idea of how I manage that? | Call python script from Jira while creating an issue | 0.099668 | 0 | 0 | 1,311 |
32,232,844 | 2015-08-26T17:19:00.000 | 1 | 0 | 1 | 0 | python,spyder | 32,232,959 | 1 | false | 0 | 0 | Click on 'Consoles'-->'Open a Python Console' in the menu bar. That should open the console for you which will let you run the code.
Alternatively, you can use the shortcut key Alt+o+p. | 1 | 0 | 0 | i am using python spyder 2.7.
i encountered a warning stating :
" no Python shell is currently selected to run eg.py
Please select or open a new python interpreter and try again" | Not able to run file in spyder | 0.197375 | 0 | 0 | 1,970 |
32,233,938 | 2015-08-26T18:19:00.000 | 3 | 0 | 0 | 0 | python,django,django-staticfiles | 32,244,567 | 2 | true | 1 | 0 | My first choice in this situation would be to fix whatever is stopping you from putting it into /static/. I can't imagine any half-decent third-party plugin would demand that the files be in the root; there must be some way to configure it to work from a subdirectory. If there isn't, I'd fork the project and add the option, then try to get them to merge it back. I realise you've probably already explored this option, but can you give us some more details about the plugin you're trying to use, and the reason it needs to go into the root? This really would be the best solution.
If you really must have the file in the root, and want to keep it as part of your django project, I'd try symlinking the files into the public root. This would mean it would be available in both locations; I can't see why that would be a problem, but you do specify "ONLY" in the root and I'm sure you have your reasons; in that case, perhaps you could configure your web server to redirect from /static/filename.js to /filename.js?
Lastly, you technically could change the settings STATIC_URL and STATIC_ROOT to point at the root directory, but that sounds like a pretty terrible idea to me. If you've got this far and still need to do it, it would be far better to take the file out of your django project altogether and just manually place it in your web root. | 1 | 1 | 0 | I've a website running on Django, Heroku.
I need to add few static JavaScript files for a third-party plugin.
My newly added files are available at domain.com/static/filename.js.
I need them to be available at domain.com/filename.js.
How to make ONLY the newly added Javascript files available at domain.com/filename.js?
If the info is not sufficient please ask which code is needed in the comments. | change static some static files location from /static/file.js to /file.js | 1.2 | 0 | 0 | 163 |
32,235,272 | 2015-08-26T19:37:00.000 | 4 | 0 | 0 | 0 | python,machine-learning,nlp,nltk | 32,235,511 | 2 | true | 0 | 0 | You might want look for TFIDF and cosine similarity.
There are challenging cases, however. Let's say you have the following three dishes:
Pulled pork
Pulled egg
Egg sandwich
Which of the two you are going to combine?
Pulled pork and pulled egg
Pulled egg and egg sandwich
Using TFIDF, you can find the most representative words. For example the word sandwich may happen to be in many dishes, hence not very representative. (Tuna sandwich, egg sandwich, cheese sandwich, etc.) Merging tuna sandwich and cheese sandwich may not be a good idea.
After you have the TFIDF vectors, you can use cosine similarity (using the TFIDF vectors) and maybe a static threshold, you can decide whether to merge them or not.
There is also another issue arises: When you match, what are you going to name them? (Pulled egg or egg sandwich?)
Update:
@alvas suggests to use clustering after having the similarity/dissimilarity values. I think that would be good idea. You can first create your nxn distance/similarity matrix using the cosine similarity with TFIDF vectors. And after you have the distance matrix, you can cluster them using a clustering algorithm. | 1 | 5 | 1 | I have a large data set of restaurant dishes (for example, "Pulled Pork", "Beef Brisket"...)
I am trying to "normalize" (wrong word) the dishes. I want "Pulled Pork" and "Pulled Pork Sandwich" and "Jumbo Pork Slider" all to map to a single dish, "Pulled Pork".
So far I have gotten started with NLTK using Python and had some fun playing around with frequency distributions and such.
Does anyone have a high-level strategy to approach this problem? Perhaps some keywords I could google?
Thanks | Normalizing a list of restaurant dishes | 1.2 | 0 | 0 | 183 |
32,238,882 | 2015-08-27T00:37:00.000 | 0 | 0 | 0 | 1 | python,macos,homebrew | 32,239,497 | 1 | false | 0 | 0 | Homebrew's Python build will only attempt to recognize brewed or system Tcl/Tk. To build against Homebrew's Tcl/Tk (and install it first if necessary), install Python with brew install python3 --with-tcl-tk. | 1 | 0 | 0 | Pundits warn against installing python in a mac usr/bin/Frameworks area.
Python self-installers write to Framework by default.
pundits advise using brew install of python to avoid the above.
Brew install python however, results in unstable state
Idle reports tclsh mismatch.
Pundits advise active state installer of correct tclsh. These are high-level python cognoscenti, and real pundits, lilies amidst the thorns.
Active-state installs to Frameworks (can you imagine?).
The said installer allows no other installation directory.
Brew installed python fails to see the active-state tclsh.
However, if one of you admonitory pundits could help me with a logical, non-idiomatic description of a process that will associate the appropriate "tclsh" in usr/bin with python3 in usr/local/bin, I would be ecstatic. | mac following brew install python warning thrown unstable state | 0 | 0 | 0 | 67 |
32,238,896 | 2015-08-27T00:40:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,gql | 32,281,354 | 1 | true | 1 | 0 | It seems like Google Cloud SQL would do what I need, but since I'm trying not to spend any money on this project and GCS doesn't have a free unlimited tier, I've resorted to querying by my filter and then sorting the results myself. | 1 | 0 | 0 | I've written a tiny app on Google App Engine that lets users upload files which have about 10 or so string and numeric fields associated with them. I store the files and these associated fields in an ndb model. I then allow users to filter and sort through these files, using arbitrary fields for sorting and arbitrary fields or collections of fields for filtering. However, whenever I run a sort/filter combination on my app that I didn't run on the dev_appserver before uploading, I get a NeedIndexError along with a suggested index, which seems to be unique for every combination of sort and filter fields. I tried running through every combination of sort/filter field on the appserver, generating a large index.yaml file, but at some point the app stopped loading altogether (I wasn't monitoring whether this was a gradual slowdown or a sudden breaking).
My questions are as follows. Is this typical behavior for the GAE datastore, and if not what parts of my code would be relevant for troubleshooting this? If this is typical behavior, is there an alternative to the datastore on GAE that would let me do what I want? | arbitrary gql filters and sorts without huge index.yaml | 1.2 | 0 | 0 | 57 |
32,245,227 | 2015-08-27T09:13:00.000 | 0 | 0 | 0 | 1 | python,websocket,tornado | 32,245,768 | 1 | false | 1 | 0 | The on_close event can only be triggered when the connection is closed.
You can send a ping and wait for an on_pong event.
Timouts are typically hard to detect since you won't even get a message that the socket is closed. | 1 | 0 | 0 | I'm running a Python Tornado server with a WebSocket handler.
We've noticed that if we abruptly disconnect the a client (disconnect a cable for example) the server has no indication the connection was broken. No on_close event is raised.
Is there a workaround?
I've read there's an option to send a ping, but didn't see anyone use it in the examples online and not sure how to use it and if it will address this issue. | Tornado websocket pings | 0 | 0 | 1 | 1,215 |
32,247,747 | 2015-08-27T11:07:00.000 | 0 | 0 | 1 | 0 | kivy,qpython | 32,610,342 | 1 | false | 0 | 1 | The newest 1.2.0 version had fixed this blank log issue. | 1 | 0 | 0 | I think my question is already asked, but I didn't find any topic about that.
When I try somme script with kivy, I have sometimes errors (such as undeclared variable, bad indentation...), but Qpython don't display them.
I lunch kivy with:
"#qpy:kivy"
and consequently, there is no console. A log is however present, but it's empty.
Is there a way to remedy this ?
Should I add a line to display error ?
Thanks
Simon
PS: The "print" command is also useful, but not working (no console). I think it's the same problem. | How to display error occurred in script? | 0 | 0 | 0 | 143 |
32,254,733 | 2015-08-27T16:20:00.000 | 0 | 0 | 0 | 0 | python,graph,openpyxl | 32,256,294 | 1 | false | 0 | 0 | At the moment it is not possible to preserve charts in existing files. With rewrite in version 2.3 of openpyxl the groundwork has been laid that will make this possible. When it happens will depend on the resources available to do the work. Pull requests gladly accepted.
In the meantime you might be able find a workaround by writing macros to create the charts for you because macros are preserved. A bit clumsy but should work.
Make sure that you are using version 2.3 or higher when working on charts as the API has changed slightly. | 1 | 0 | 0 | I made a sheet with a graph using python and openpyxl. Later on in the code I add some extra cells that I would also like to see in the graph. Is there a way that I can change the range of cell that the graph is using, or maybe there is another library that lets me do this?
Example:
my graph initially uses columns A1:B10, then I want to update it to use A1:D10
Currently I am deleting the sheet, and recreating it, writing back the values and making the graph again, the problem is that this is a big process that takes days, and there will be a point that rewriting the sheet will take some time. | Python Excel, Is it possible to update values of a created graph? | 0 | 1 | 0 | 1,178 |
32,255,872 | 2015-08-27T17:29:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 32,257,282 | 1 | false | 0 | 1 | No. Some widget must have focus. You can set focus to the root window if you don't have any other widgets that naturally accept keyboard input. | 1 | 0 | 0 | That is, without doing focus_set on some other widget?
The original post ended with the sentence above, but it did not meet quality standards. These standards demand that the problem is described completely, including what one has tried. They insist on proper grammar too.
Well, it is not really a problem, just a question. I tried to find a widget method, say, focus_unset, that would do the trick. I didn't. My grammar is proper. Maybe the robotic police is confused with terms like focus_set? | Is there a way to make s tkinter widget lose focus? | 0 | 0 | 0 | 193 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.