Columns (name: dtype, observed range), in the order they appear in each row below:
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)

Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,104,606 | 2014-08-03T12:25:00.000 | 0 | 0 | 0 | 1 | python,linux | 25,104,864 | 1 | false | 0 | 0 | There is no existing solution to my knowledge. But:
There is the inotify API, but that only gives out notifications of what just happened, i.e. you don't have any means to influence the result.
If that is an absolutely necessary requirement, intercepting operations on a filesystem level is the only universal choice, hacking either the kernel itself or using FUSE.
If you only want to monitor operations of a single process, you could use LD_PRELOAD to intercept function calls like fopen() and fwrite(). | 1 | 0 | 0 | A folder watcher works in a way that after a file comes into a folder it does something (it reacts). Is there a method such that, before a file enters the folder, a check is made, and only if the check succeeds does the file enter the folder, otherwise it does not? | a proactive folder watcher in Linux | 0 | 0 | 0 | 74 |
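To make the "reactive" option above concrete, here is a minimal sketch using the third-party pyinotify package (an assumption; any inotify binding would do), with a placeholder watch path. As the answer notes, it can only react after the file is already in the folder; it cannot veto the write.

```python
# Minimal reactive watcher sketch (pyinotify is assumed installed; the path is a placeholder).
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # Fires only AFTER the file has been fully written into the folder.
        print("New file arrived:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch('/tmp/watched_folder', pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Handler()).loop()
```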
25,106,897 | 2014-08-03T16:52:00.000 | -1 | 0 | 0 | 0 | python,openpyxl | 47,234,993 | 2 | false | 0 | 0 | As A-Palgy said you have to add
worksheet.sheet_view.rightToLeft = True
but first, you have to enable that feature in the views.py file in this path:
C:\python36\Lib\site-packages\openpyxl\worksheet\views.py
and edit the line :
rightToLeft = Bool(allow_none=True)
to
rightToLeft = Bool(allow_none=False) | 1 | 4 | 0 | I was wondering how to adjust display left to right / right to left with openpyxl or if its even possible.
I haven't really found anything in the documentation, maybe I'm blind.
thanks in advance :) | Python Openpyxl display left to right | -0.099668 | 0 | 0 | 930 |
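For reference, a minimal sketch of the right-to-left setting (note the attribute is spelled rightToLeft); in recent openpyxl releases, setting it directly is typically all that is needed. The filename is arbitrary.

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws["A1"] = "shalom"
ws.sheet_view.rightToLeft = True   # display this sheet right-to-left
wb.save("rtl_example.xlsx")
```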
25,109,746 | 2014-08-03T22:30:00.000 | 1 | 0 | 0 | 1 | python,json,google-app-engine,rest,google-cloud-datastore | 25,110,135 | 2 | false | 1 | 0 | This is a really good question, one that I've been asked in interviews, seen pop up in a lot of different situations as well. Your system essentially consists of two things:
Saving (or writing) models to the data store
Reading from the data store.
From my experience of this problem, when you view these two things separately you're able to come up with solid solutions to both. I typically use a cache, such as memcached, to keep data easily accessible for reading. At the same time, for writing, I try to have a main db and a few slave instances as well. All the writes will go to the slave instances (thereby not locking up the main db for reads that sync to the cache), and the writes to the slave dbs can be distributed in a round-robin approach, thereby ensuring that your insert statements are not skewed by any of the model's attributes having a high occurrence.
The REST API app will then adapt this data to models on the server and save them to the Datastore. Are we likely to (eventually if not immediately) encounter any performance bottlenecks here with Google App Engine, whether it's working with that many models or saving so many rows of data at a time to the datastore?
The client does a GET to get thousands of records, then a PUT with thousands of records. Is there any reason for this to take more than a few seconds, and necessitate the need for a Task queues API? | Google App Engine, Datastore and Task Queues, Performance Bottlenecks? | 0.099668 | 0 | 0 | 158 |
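A hedged sketch of the write side on the (legacy Python) App Engine NDB API: batching the incoming rows keeps the number of datastore RPCs down. The model and JSON field names are hypothetical.

```python
from google.appengine.ext import ndb

class Record(ndb.Model):              # hypothetical model for one incoming row
    name = ndb.StringProperty()
    value = ndb.FloatProperty()

def save_rows(rows, batch_size=500):
    """rows: list of dicts decoded from the request JSON."""
    for i in range(0, len(rows), batch_size):
        batch = [Record(name=r["name"], value=r["value"])
                 for r in rows[i:i + batch_size]]
        ndb.put_multi(batch)          # one batched RPC instead of one put() per row
```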
25,110,089 | 2014-08-03T23:40:00.000 | 0 | 0 | 0 | 0 | python,csv,organization,pytables | 25,122,607 | 1 | false | 0 | 0 | First of all, I am a big fan of Pytables, because it helped me manage huge data files (20GB or more per file), which I think is where Pytables plays out its strong points (fast access, built-in querying etc.). If the system is also used for archiving, the compression capabilities of HDF5 will reduce space requirements and reduce network load for transfer. I do not think that 'reproducing' your file system inside an HDF5 file has advantages (happy to be told I'm wrong on this). I would suggest a hybrid approach: keep the normal filesystem structure and put the experimental data in hdf5 containers with all the meta-data. This way you keep the flexibility of your normal filesystem (access rights, copying, etc.) and can still harness the power of pytables if you have bigger files where memory is an issue. Pulling the data from HDF5 into normal pandas or numpy is very cheap, so your 'normal' work flow shouldn't suffer. | 1 | 0 | 1 | I'm currently in the process of trying to redesign the general workflow of my lab, and am coming up against a conceptual roadblock that is largely due to my general lack of knowledge in this subject.
Our data currently is organized in a typical file system structure along the lines of:
Date\Cell #\Sweep #
where for a specific date there are generally multiple Cell folders, and within those Cell folders there are multiple Sweep files (these are relatively simple .csv files where the recording parameters are saved separated in .xml files). So within any Date folder there may be a few tens to several hundred files for recordings that day organized within multiple Cell subdirectory folders.
Our workflow typically involves opening multiple sweep files within a Cell folder, averaging them, and then averaging those with data points from other Cell folders, often across multiple days.
This is relatively straightforward to do with the Pandas and Numpy, although there is a certain “manual” feel to it when remotely accessing folders saved to the lab server. We also, on occasion, run into issues because we often have to pull in data from many of these files at once. While this isn’t usually an issue, the files can range between a few MBs to 1000s of MBs in size. In the latter case we have to take steps to not load the entire file into memory (or not load multiple files at once at the very least) to avoid memory issues.
As part of this redesign I have been reading about Pytables for data organization and for accessing data sets that may be too large to store within memory. So I guess my 2 main questions are
If the out-of-memory issues aren’t significant (i.e. that utility wouldn’t be utilized often), are there any significant advantages to using something like Pytables for data organization over simply maintaining a file system on a server (or locally)?
Is there any reason NOT to go the Pytables database route? We are redesigning our data collection as well as our storage, and one option is to collect the data directly into Pandas dataframes and save the files in the HDF5 file type. I’m currently weighing the cost/benefit of doing this over the current system of the data being stored into csv files and then loaded into Pandas for analysis later on.
My thinking is that by creating a database vs. the filesystem we current have we may 1. be able to reduce (somewhat anyway) file size on disk through the compression that hdf5 offers and 2. accessing data may overall become easier because of the ability to query based on different parameters. But my concern for 2 is that ultimately, since we’re usually just opening an entire file, that we won’t be utilizing that to functionality all that much – that we’d basically be performing the same steps that we would need to perform to open a file (or a series of files) within a file system. Which makes me wonder whether the upfront effort that this would require is worth it in terms of our overall workflow. | Benefits of Pytables / databases over file system for data organization? | 0 | 1 | 0 | 332 |
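A hedged sketch of the hybrid route suggested above: keep the Date/Cell folder layout, but store each cell's sweeps in one compressed HDF5 file through pandas (which uses PyTables underneath, so the tables package must be installed). File and key names are placeholders.

```python
import pandas as pd

# write: one key per sweep, compressed
df = pd.read_csv("sweep_001.csv")
df.to_hdf("cell_01.h5", key="sweep_001", mode="a", complevel=9, complib="blosc")

# read back later without loading the other sweeps stored in the same file
sweep = pd.read_hdf("cell_01.h5", key="sweep_001")
print(sweep.mean())
```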
25,110,635 | 2014-08-04T01:23:00.000 | 1 | 1 | 0 | 1 | python,linux,unix,cron,raspberry-pi | 25,110,706 | 2 | true | 0 | 0 | It looks like you may have a stray . in there that would likely cause an error in the command chain.
Try this:
cd usr/local/sbin/cronjobs && virtualenv/secret_ciphers/bin/activate
&& cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate
Assuming that the virtualenv directory is in the cronjobs directory.
Also, you may want to skip the activate/deactivate, and simply run the python3 interpreter right out of the virtualenv. i.e.
/usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py
Edit in response to comments from OP:
The activate call is what activates the virtualenv. Not sure what the . would do aside from cause shell command parsing issues.
Both examples involve the use of the virtualenv. You don't need to explicitly call activate. As long as you invoke the interpreter out of the virtualenv's directory, you're using the virtualenv. activate is essentially a convenience method that tweaks your PATH to make python3 and other bin files refer to the virtualenv's directory instead of the system install.
2nd Edit in response to add'l comment from OP:
You should redirect stderr, i.e.:
/usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3
/usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py >
/tmp/botlog.log 2>&1
And see if that yields any additional info.
Also, 5 asterisks in cron will run the script every minute 24/7/365. Is that really what you want?
3rd Edit in response to add'l comment from OP:
If you want it to always be running, I'm not sure you really want to use cron. Even with 5 asterisks, it will run it once per minute. That means it's not always running. It runs once per minute, and if it takes longer than a minute to run, you could get multiple copies running (which may or may not be an issue, depending on your code), and if it runs really quickly, say in a couple seconds, you'll have the rest of the minute to wait before it runs again.
It sounds like you want the script to essentially be a daemon. That is, just run the main script in a while (True) loop, and then just launch it once. Then you can quit it via <crtl>+c, else it just perpetually runs. | 1 | 0 | 0 | I am trying to run a cron script in python 3 so I had to setup a virtual environment (if there is an easier way, please let me know) and in order to run the script I need to be in the script's parent folder as it writes to text files there. So here is the long string of commands I have come up with and it works in console but does not work in cron (or I can't find the output..)
I can't type the 5 asterisks without it turning into bullet points.. but I have them in the cron tab.
cd usr/local/sbin/cronjobs && . virtualenv/secret_ciphers/bin/activate
&& cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate | how to write a multi-command cronjob on a raspberry pi or any other unix system | 1.2 | 0 | 0 | 822 |
25,112,648 | 2014-08-04T06:08:00.000 | 0 | 0 | 0 | 0 | postgresql,python-2.7,amazon-web-services,psycopg2,amazon-vpc | 25,115,502 | 1 | false | 0 | 0 | Point your python code to the same address and port you're using for the tunnelling.
If you're not sure check the pgAdmin destination in the configuration and just copy it. | 1 | 2 | 0 | My RDS is in a VPC, so it has a private IP address. I can connect my RDS database instance from my local computer with pgAdmin using SSH tunneling via EC2 Elastic IP.
Now I want to connect to the database instance in my code in python. How can I do that? | AWS - Connect to RDS via EC2 tunnel | 0 | 1 | 1 | 667 |
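A hedged sketch of the Python side, assuming the SSH tunnel already forwards a local port to the RDS endpoint (for example ssh -L 5433:&lt;rds-endpoint&gt;:5432 user@&lt;elastic-ip&gt;); host, port and credentials are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",      # local end of the SSH tunnel, not the RDS hostname
    port=5433,             # whatever local port the tunnel forwards
    dbname="mydb",
    user="myuser",
    password="mypassword",
)
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone())
conn.close()
```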
25,112,802 | 2014-08-04T06:21:00.000 | 0 | 0 | 1 | 0 | python,sublimetext2 | 25,138,882 | 1 | false | 0 | 0 | You can accept user input in Sublime Text with SublimeREPL module. Install it via package control. | 1 | 0 | 0 | I have been learning python since last week weeks. I am using sublime text2 editor.
I have a simple file which prints some small text. How do I run this?
I have tried using ctrl+B but it only builds the file. How can I execute it? | Execute a python script from sublime text 2 | 0 | 0 | 0 | 115 |
25,118,283 | 2014-08-04T12:04:00.000 | 0 | 0 | 0 | 1 | java,python,eclipse,plugins | 25,133,261 | 2 | false | 1 | 0 | I solved this problem:
There was no issue with either Eclipse or with PyDev. It was about the Java version I had. PyDev works with JDK 7, but I had JDK 6. Because of this, even after I copied PyDev to dropins, nothing showed up in Preferences. Once I used JDK 7, it worked.
Thanks, | 1 | 0 | 0 | I am on an assignment to work with Jython. I tried to install PyDev plugin to my eclipse ( Kepler service release 2 on Linux 64 bit machine )manually (dev machine doesnot have internet connection). But when I do manually by downloading .zip file and adding it as following:
Help -> Install new software -> Add -> Archive:
but I am getting an error.
No repository found at file:/home/lhananth/eclipse/dropins/PyDev%203.6.0.zip!.
No repository found at file:/home/lhananth/eclipse/dropins/PyDev%203.6.0.zip!.
I tried to manually add the unziped folders to dropin folder of eclipse but its not working as well- Python is not appearing as a selection in the Eclipse, Window, Preferences.
Can somebody help me out? (I tried all the replies for similar posts available on Stack Overflow) | Adding PyDev eclipse pulgin manually | 0 | 0 | 0 | 347 |
25,120,159 | 2014-08-04T13:44:00.000 | 2 | 0 | 1 | 0 | python,sockets,thread-safety | 25,122,539 | 1 | true | 0 | 0 | The socket API is thread safe (at least on linux and windows) to the extent that the system won't crash and the data will all be transferred. Its just that data sent among threads may be interleaved and there is no guarantee what any given thread will receive. But the minumum unit of transfer is 1 byte so if you have a protocol where messages are only 1 byte and interleaving doesn't make a difference, ... send away! | 1 | 0 | 0 | as i have found on thread safety of socket, it was not,
But what about each thread accessing the socket to write or read only one byte at a time (1 byte meaning 1 character)?
Is that also unsafe?
I am coding in Python. | is socket thread safe in writing or reading a 1byte? | 1.2 | 0 | 0 | 356 |
25,120,761 | 2014-08-04T14:13:00.000 | 8 | 0 | 0 | 0 | python,sockets,python-2.7 | 25,121,548 | 3 | false | 0 | 0 | send has extra information that recv doesn't: how much data there is to send. If you have 100 bytes of data to send, sendall can objectively determine if fewer than 100 bytes were sent by the first call to send, and continually send data until all 100 bytes are sent.
When you try to read 1024 bytes, but only get 512 back, you have no way of knowing if that is because the other 512 bytes are delayed and you should try to read more, or if there were only 512 bytes to read in the first place. You can never say for a fact that there will be more data to read, rendering recvall meaningless. The most you can do is decide how long you are willing to wait (timeout) and how many times you are willing to retry before giving up.
You might wonder why there is an apparent difference between reading from a file and reading from a socket. With a file, you have extra information from the file system about how much data is in the file, so you reliably distinguish between EOF and some other that may have prevented you from reading the available data. There is no such source of metadata for sockets. | 2 | 17 | 0 | When using recv() method, sometimes we can't receive as much data as we want, just like using send(). But we can use sendall() to solve the problem of sending data, how about receiving data? Why there is no such recvall() method? | why does python socket library not include recvall() method like sendall()? | 1 | 0 | 1 | 16,283 |
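The usual workaround, sketched below, is a small helper that keeps calling recv() until exactly n bytes have arrived; this only makes sense when the protocol itself says how many bytes to expect.

```python
def recv_exactly(sock, n):
    """Read exactly n bytes from a connected socket, or raise if the peer closes early."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:                              # peer closed the connection
            raise RuntimeError("socket closed with %d bytes still missing" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```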
25,120,761 | 2014-08-04T14:13:00.000 | 0 | 0 | 0 | 0 | python,sockets,python-2.7 | 25,121,894 | 3 | false | 0 | 0 | Because recvall is fundamentally confusing: your assumption was that it would read exactly-N bytes, but I would have thought it would completely exhaust the stream, based on the name.
An operation that completely exhausts the stream is a dangerous API for a bunch of reasons, and the ambiguity in naming makes this a pretty unproductive API. | 2 | 17 | 0 | When using recv() method, sometimes we can't receive as much data as we want, just like using send(). But we can use sendall() to solve the problem of sending data, how about receiving data? Why there is no such recvall() method? | why does python socket library not include recvall() method like sendall()? | 0 | 0 | 1 | 16,283 |
25,122,947 | 2014-08-04T16:07:00.000 | -1 | 0 | 1 | 0 | python,database,computer-vision,pickle | 25,288,486 | 1 | true | 0 | 0 | Use a database because it allows you to query faster. I've done this before. I would suggest against using cPickle. What specific implementation are you using? | 1 | 1 | 1 | I have previously saved a dictionary which maps image_name -> list of feature vectors, with the file being ~32 Gb. I have been using cPickle to load the dictionary in, but since I only have 8 GB of RAM, this process takes forever. Someone suggested using a database to store all the info, and reading from that, but would that be a faster/better solution than reading a file from disk? Why? | Using Pickle vs database for loading large amount of data? | 1.2 | 0 | 0 | 811 |
25,125,447 | 2014-08-04T18:44:00.000 | 0 | 0 | 1 | 0 | python,sockets,flask | 25,129,481 | 1 | false | 0 | 0 | You probably need SO_REUSEPORT and SO_REUSEADDR socket options set? That error means that you tried to bind a port before the previous binding timed out after a close. | 1 | 1 | 0 | I'm on OS-X, using the flask library to make a small api.
Usually when I terminate the process with Ctrl-C it used to just raise KeyboardInterrupt, but now it will exit with socket.error: [Errno 48] Address already in use instead. After that, trying to restart the program throws the same error. This used to happen occasionally, but now seems to happen every time. Activity Monitor shows the Python process to still be running with 3 threads.
The fix is to quit the process from Activity Monitor.
Why does the process not get terminated properly anymore (note: I am using Ctrl-C, not Ctrl-Z), and is there a way to fix this minor inconvenience? | Ctrl-C not properly closing multi-threaded python (+ flask) program | 0 | 0 | 0 | 1,193 |
25,125,532 | 2014-08-04T18:49:00.000 | 1 | 0 | 1 | 0 | python,installation,iexpress | 31,952,116 | 1 | false | 0 | 0 | iexpress doesn't let you include folders, but you can include a batch file, which may create folders and copy files to the respective folder. To run a batch file, specify cmd /c IncludedBatchFile.bat under Install Program in the iexpress wizard. | 1 | 0 | 0 | ok so i've used iexpress a few times without a problem. i created a nice little program for my buddies and i and i'm now in the process of creating a installation package for it. i like iexpress cause it makes it easy and has the license agreement window n whatnot.
OK, so the program is made. Using Windows and iexpress I attempt to make the installer; the problem is that there is one folder containing an item I need, and it has to be in that folder when the installed program runs. Problem: I can select files but not folders for the list of items to be in the installer.
Question: how do I include the folder in the install package so that there don't need to be additional steps during the installation?
I have thought about zipping it, but there isn't a way (that I know of) to add an extract command after the initial install extract.
I figure installers are to programs what instruction booklets are to Ikea furniture, so I figured this would be the best place for help. Thanks very much. | iexpress assistance for my program | 0.197375 | 0 | 0 | 535 |
25,125,959 | 2014-08-04T19:19:00.000 | 4 | 0 | 0 | 0 | python,django,rest,django-rest-framework | 25,138,732 | 3 | false | 1 | 0 | Why there is generics if ModelViewSet gives same abilities and more?
Let me first rephrase the question a little more explicitly for you...
"Why are there Generic Views, when there are also Generic ViewSets"
Which really just comes down to a question of why REST framework supports both views and viewsets. Answer - ViewSets are useful for prototyping or for cases when your API URLs neatly map to a fixed convention throughout (eg CRUD style APIs). Views are useful for being explicit, or for cases when your URLs do not neatly map to a fixed convention throughout. | 1 | 12 | 0 | I use generics and plain urls for my REST API, but now I stuck with problem: I want custom actions, simple views to make some things with my models, like "run", "publish", etc.
ViewSet provides the action decorator to create custom actions, but only in ViewSets; there are also special routers, which give us the ability to simplify everything using a Rails-ish convention-over-configuration approach.
But I find that ModelViewSet gives us the same abilities as generics: full CRUD, serializers, filters, custom pre/post hooks and queryset, which leads to the question:
Why are there generics if ModelViewSet gives the same abilities and more? What's the difference? | Django REST Framework: Generics or ModelViewSets? | 0.26052 | 0 | 0 | 14,838 |
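For context, a hedged sketch of the kind of custom action the question refers to, on a ModelViewSet. The decorator is @action(detail=True) in current DRF; older releases spelled it @detail_route. The Article model and serializer are placeholders.

```python
from rest_framework import status, viewsets
from rest_framework.decorators import action
from rest_framework.response import Response

class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()          # assumes an Article model exists
    serializer_class = ArticleSerializer      # and a matching serializer

    @action(detail=True, methods=["post"])    # routed as /articles/<pk>/publish/
    def publish(self, request, pk=None):
        article = self.get_object()
        article.published = True
        article.save()
        return Response({"status": "published"}, status=status.HTTP_200_OK)
```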
25,126,843 | 2014-08-04T20:16:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing,python-requests | 25,128,782 | 2 | false | 0 | 0 | Figure out what your bottleneck is going to be before designing a solution;
The first thing to look at is probably network bandwidth. If one stream can saturate your network, downloading more than one together won't be faster.
The second thing is disk write throughput. Can your disk and OS handle all these concurrent writes?
If you want to do transcoding, you might also run into computational limits. | 1 | 0 | 0 | I would like to be able to use the multiprocessing library in python to continuously stream from multiple live web apis with python-request (Using the Stream option). Would this be possible on a dual-core Linux system or am I better off running them as single programs in multiple screen sessions?
Would I also want to use a pool of workers?
Thank you for the help, let me know if the question is unclear. | Is it possible to process X amount of streaming API streams with multiprocessing? | 0.099668 | 0 | 1 | 218 |
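A hedged sketch of the multiprocessing variant: one process per endpoint, each holding a streamed requests connection. The URLs are placeholders; a worker pool is not really needed when the process count equals the stream count.

```python
import multiprocessing
import requests

STREAM_URLS = [
    "https://example.com/stream/a",   # placeholder endpoints
    "https://example.com/stream/b",
]

def consume(url):
    resp = requests.get(url, stream=True)
    for line in resp.iter_lines():
        if line:                      # skip keep-alive blank lines
            print(url, line[:80])

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=consume, args=(u,)) for u in STREAM_URLS]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```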
25,129,311 | 2014-08-04T23:58:00.000 | 1 | 0 | 0 | 0 | python,c++,user-interface,jvm,pyc | 25,132,319 | 1 | true | 0 | 1 | All GUI toolkits on Python are a wrapper around C/C++ code. On Java there a some "pure" Java toolkits like Swing, but a the lowest level they depend on C code to do the drawing and handle user input. There's no special support for things like graphics in the Java VM.
As for how the GUI gets rendered at the lowest level, it depends. On Windows, user mode software isn't allowed direct access the video hardware. Ultimately any C/C++ GUI code has to go through either GDI or Direct3D to do the rendering. The kernel mode GDI code is able to do all the rendering itself by writing to the framebuffer, but also supports acceleration by passing operations to display driver. On the other hand, the Direct3D kernel code passes pretty much everything to the driver which in turn passes everything on to the GPU. Almost all of the kernel mode code is written in C, while code running on the GPU is a mixture of hand coded assembly from and code written in higher level shading languages.
Note that GPU assembly language is very different from Intel x86 assembly language, and varies considerably between manufacturers and GPU generations.
I'm not sure what current practice is on Linux and other Unix type operating systems, but it used to be common to give the X server, which is a user mode process, direct access to the framebuffer. C code in the X server was ultimately responsible for rendering. Presumably this has changed at least somewhat now that GPU acceleration is more common. | 1 | 0 | 0 | So I have been doing a lot of reading about execution environments (Python's, JVM...) and I am starting to implement one on my own. It is a register-based environment written in C. I have a basic byte code format defined and the execution is going pretty smooth so far. My question is how does VEs render GUIs. In a more detailed description about my work so far, my VE has a screen buffer (experimenting with it). Every time I poke it, I output the screen buffer completely to know the output.
So far so good with basic calculations and stuff but I hit a bump when I wanted to understand how to render GUIs. I am nowhere with this. Any help would be appreciated. Even if I am thinking completely wrong about this, any pointers as a start in the right direction would be really great. Thanks. | How do virtual machines render GUI? | 1.2 | 0 | 0 | 357 |
25,130,345 | 2014-08-05T02:34:00.000 | 3 | 1 | 1 | 0 | python,private | 25,130,467 | 5 | true | 0 | 0 | You can change the name of the the correctAnswer attribute in the code that you overwrite the tester with. This will instantly break all solutions that rely on the name of correctAnswer.
Furthermore, you could run both versions of the tester and diff the outputs. If there is a difference, then their code relies on the attribute name in some way. | 1 | 4 | 0 | I've read a number of SO threads about why Python doesn't have truly private variables, and I understand it for most applications.
Here's my problem: I'm creating a class project. In this class project, students design an agent that takes a multiple choice test. We want to be able to grade the agents' answers immediately so that the agents can learn from their wrong answers to previous problems. So, we need the correct answer to each problem to be stored in the program. Students are running these projects on their local machine, so they can see all the tester's code. They can't modify it -- we overwrite any of their changes to the tester code when they turn it in. They just turn in a new file representing their agent.
In Java, this is straightforward. In the Problem class, there is a private correctAnswer variable. correctAnswer always stores the correct answer to the problem. Agents can only read correctAnswer through a checkAnswer method, and in order to call checkAnswer, agents must actually give an answer to the question that cannot be changed later.
I need to recreate this behavior in Python, and so far I'm at a loss. It seems that no matter where in the program correctAnswer is stored, agents can access it -- I'm familiar with the underscore convention, but in this problem, I need agents to be incapable of accessing the right answer. The only thing I can think of is to name correctAnswer something different when we're testing students' code so that their agents can't anticipate what it will be called, but that's an inelegant solution.
Any suggestions? It's acceptable for agents to be able to read correctAnswer so long as we can detect when they read it (so we can set the 'agent's answer' variable to read-only after that... although then I'll need a way to do that, too, oof). | Recreating "private" class variables in Python | 1.2 | 0 | 0 | 120 |
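A hedged sketch of the tester-side idea from the accepted answer: keep the key under a name-mangled attribute (only a deterrent in Python, not true privacy), lock in the agent's first answer, and rotate the attribute name for the grading run if needed.

```python
class Problem(object):
    def __init__(self, prompt, correct_answer):
        self.prompt = prompt
        self.__correct = correct_answer      # name-mangled to _Problem__correct
        self.answered = None                 # the agent's first (and only) answer

    def check_answer(self, given):
        if self.answered is None:
            self.answered = given            # lock in the first answer
        return self.answered == self.__correct

p = Problem("2 + 2 = ?", "4")
print(p.check_answer("5"))   # False, and "5" is now locked in
print(p.check_answer("4"))   # still False: only the first answer counts
```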
25,138,508 | 2014-08-05T12:11:00.000 | 0 | 0 | 0 | 0 | python,image,3d,transform | 25,142,706 | 2 | false | 0 | 0 | Firstly, all lines in 3d correspond to an equation; secondly, all lines in 3d that lie on a particular plane for part of their length correspond to equations that belong to a set of linear equations that share certain features, which you would need to determine. The first thing you should do is identify the four corners of the supposed plane - they will have x, y or z values more extreme than the other points. Then check that the lines between the corners have equations in the set - three points in 3d always define a plane, four points may not. Then you should 'plot' the points of two parallel sides using the appropriate linear equations. All the other points in the supposed plane will be 'on' lines (whose equations are also in the set) that are perpendicular between the two parallel sides. The two end points of a perpendicular line on the sides will define each equation. The crucial thing to remember when determining whether a point is 'on' a line is that it may not be, even if the supposed plane was inputted as a plane. This is because x, y and z values as generated by an equation will be rounded so as correspond to 'real' points as defined by the resolution that the graphics program allows. Therefore you must allow for a (very small) discrepancy between where a point 'should be' and where it actually is - this may be just one pixel (or whatever unit of resolution is being used). To look at it another way - a point may be on a perpendicular between two sides but not on a perpendicular between the other two solely because of a rounding error with one of the two equations. If you want to test for a 'bumpy' plane, for whatever reason, just increase the discrepancy allowed. If you post a carefully worded question about the set of equations for the lines in a plane on math.stackexchange.com someone may know more about it. | 1 | 0 | 1 | I used micro CT (it generates a kind of 3D image object) to evaluate my samples which were shaped like a cone. However the main surface which should be flat can not always be placed parallel to the surface of image stacks. To perform the transform, first of all, I have to find a way to identify the flat surface. Therefore I learnt python to read the image data into numpy array.
Then I realized that I totally have no clue how to achieve the idea in a mathematical way.
Any idea, suggestion, or even a package recommendation would be much appreciated. | How to check a random 3d object surface if is flat in python | 0 | 0 | 0 | 435 |
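As an alternative to the line-equation approach sketched in the answer (not the same method), a purely numerical check is to least-squares-fit a plane z = a*x + b*y + c to the candidate surface points with NumPy and threshold the residuals; the tolerance has to be chosen to match the scanner's resolution, and a vertical plane would need a different parameterization.

```python
import numpy as np

def is_flat(points, tol=1.0):
    """points: (N, 3) array of x, y, z coordinates of the candidate surface."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, _, _, _ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A.dot(coeffs)
    return np.max(np.abs(residuals)) < tol, coeffs

flat, (a, b, c) = is_flat(np.random.rand(100, 3) * [10.0, 10.0, 0.1])
print(flat, a, b, c)
```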
25,139,041 | 2014-08-05T12:38:00.000 | 0 | 0 | 1 | 0 | python,google-app-engine,google-cloud-datastore,app-engine-ndb | 25,140,245 | 1 | false | 1 | 0 | There's probably a few ways to go about this, and using Google Cloud SQL would probably make your life easier for this. What you could do though is add a BooleanProperty to your entity types for example, named 'New', that would be set to 'True' by default. You would need to create an algorithm that would check through the datastore to return you any entity who's 'New' BooleanProperty is set to True, and then set it to False once it's returned. | 1 | 0 | 0 | So I'm making productivity app for myself and ppl around me. There will be a lots of different stuff related to different type groups or individual persons for different types of reasons.
Is there some best-practice approach, algorithm, or set of guidelines for doing this properly that I should know about before I screw everything up?
If it matters, I'm using GAE, Python, NDB.
CLARIFICATION EDIT:
someone contributed to the household's savings account
someone posted a picture to the album of the party that I attended
someone from the household added something to the shopping list
I received a direct message
someone uploaded a document to the project workspace, etc.
I need to keep track of which items are new and which have been seen. | best practice for tracking new/seen items | 0 | 0 | 0 | 60 |
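A hedged sketch of the BooleanProperty idea from the answer, using the legacy Python App Engine NDB API; the model and field names are only illustrative.

```python
from google.appengine.ext import ndb

class ActivityItem(ndb.Model):
    owner = ndb.StringProperty()
    text = ndb.StringProperty()
    new = ndb.BooleanProperty(default=True)

def fetch_and_mark_seen(owner_id):
    items = ActivityItem.query(ActivityItem.owner == owner_id,
                               ActivityItem.new == True).fetch()
    for item in items:
        item.new = False
    ndb.put_multi(items)          # flip the flag in one batched write
    return items
```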
25,143,105 | 2014-08-05T15:49:00.000 | 1 | 0 | 0 | 0 | python,apache,flask,wsgi | 25,143,417 | 1 | true | 1 | 0 | The WSGIApplicationGroup directive may be what you're looking for as long as you have the wsgi app running in daemon mode (otherwise I believe apache's default behavior is to use prefork which spins up a process to handle each individual request):
The WSGIApplicationGroup directive can be used to specify which application group a WSGI application or set of WSGI applications belongs to. All WSGI applications within the same application group will execute within the context of the same Python sub interpreter of the process handling the request.
You have to provide an argument to the directive that specifies a name for the application group. There's a few expanding variables: %{GLOBAL}, %{SERVER}, %{RESOURCE} and %{ENV:variable}; or you can specify your own explicit name. %{GLOBAL} is special in that it expands to the empty string, which has the following behavior:
The application group name will be set to the empty string.
Any WSGI applications in the global application group will always be executed within the context of the first interpreter created by Python when it is initialised. Forcing a WSGI application to run within the first interpreter can be necessary when a third party C extension module for Python has used the simplified threading API for manipulation of the Python GIL and thus will not run correctly within any additional sub interpreters created by Python.
I would recommend specifying something other than %{GLOBAL}.
For every process you have mod_wsgi spawn, everything will be executed in the same environment. Then you can simply control the number of database connections based on the number of processes you want mod_wsgi to spawn. | 1 | 1 | 0 | I'm using wsgi apache and flask to run my application. I'm use from yourapplication import app as application to start my application. That works so far fine. The problem is, with every request a new instance of my application is created. That leads to the unfortunate situation that my flask application creates a new database connection but only closes it after about 15 min. Since my server allows only 16 open DB connections the server starts to block requests very soon. BTW: This is not happening when I run flask without apache/wsgi since it opens only one connection and serves all requests as I want.
What I want: I want to run only one Flask instance which then serves all requests. | start only one flask instance using apache + wsgi | 1.2 | 1 | 0 | 1,457 |
25,143,621 | 2014-08-05T16:14:00.000 | 2 | 0 | 0 | 1 | python,django,python-2.7 | 25,291,353 | 1 | true | 1 | 0 | Okay, so to reiterate my last post.
There was a call to a Django service that was failing on application startup. No error was thrown, instead it was absorbed by Sentry. Those who were already using the VM on their local machines had worked around the issue.
The issue was identified by importing ipdb and calling its set_trace() function. From the console, I stepped through the application, testing likely variables and return values until it refused to continue. This narrowed it down to the misbehaving service and its unthrown error.
The code has been updated with proper try/catch blocks and the error is now handled gracefully.
So to summarise: Not a malfunctioning VM, but a problem with code. | 1 | 2 | 0 | I'm running a vagrant box on Mac OS X. The VM is running Ubuntu 12.04, with Python 2.7 and Django 1.4.5. When I start up manage.py, I call it like this:
./manage.py runserver 0.0.0.0:8000
And if I visit http://127.0.0.1:8000 from within the VM, the text browsers I've tried report that the HTTP request has been sent and then wait for a response until the request times out. No response ever comes.
I can telnet to the port like this:
telnet 127.0.0.1 8000
And enter random gibberish, which manage.py reports as the following:
127.0.0.1 - - [05/Aug/2014 17:06:26] code 400, message Bad request syntax ('asdfasdfadsfasd')
127.0.0.1 - - [05/Aug/2014 17:06:26] "asdfasdfadsfasd" 400 -
So manage.py is listening on that port. But a standard HTTP request generates no response from manage.py, either in the console or in the browser.
I've tried using different ports which hasn't had any effect. Does anyone have any ideas?
UPDATE
Some additional curl output.
Executing 'curl -v http://127.0.0.1:8000' returns
'* About to connect() to 127.0.0.1 port 8000 (#0)
* Trying 127.0.0.1... connected
GET / HTTP/1.1
User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Host: 127.0.0.1:8000
Accept: /
'
Executing 'curl -v http://somefakedomain' results in
'* getaddrinfo(3) failed for somefakedomain:80
* Couldn't resolve host 'somefakedomain'
* Closing connection #0
curl: (6) Couldn't resolve host somefakedomain' | Django manage.py runserver fails to respond | 1.2 | 0 | 0 | 2,089 |
25,145,552 | 2014-08-05T18:09:00.000 | -2 | 0 | 0 | 0 | python,lucene,nlp,scikit-learn,tf-idf | 67,782,295 | 4 | false | 0 | 0 | The lengths of the documents The number of terms in common Whether the terms are common or unusual How many times each term appears | 1 | 42 | 1 | I have a corpus which has around 8 million news articles, I need to get the TFIDF representation of them as a sparse matrix. I have been able to do that using scikit-learn for relatively lower number of samples, but I believe it can't be used for such a huge dataset as it loads the input matrix into memory first and that's an expensive process.
Does anyone know what the best way would be to extract the TFIDF vectors for large datasets? | TFIDF for Large Dataset | -0.099668 | 0 | 0 | 28,481 |
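One common out-of-core route in scikit-learn (a sketch, not necessarily the best option for every setup) is to stream the articles through a stateless HashingVectorizer, which never holds a vocabulary in memory, and then apply a TfidfTransformer to the hashed counts if IDF weighting is still needed. The article iterator and file name are placeholders, and alternate_sign is the parameter name in recent scikit-learn releases.

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

def iter_articles():
    with open("articles.txt") as f:        # placeholder: one article per line
        for line in f:
            yield line

vectorizer = HashingVectorizer(n_features=2 ** 20, alternate_sign=False)
counts = vectorizer.transform(iter_articles())   # sparse matrix, no vocabulary kept

tfidf = TfidfTransformer().fit_transform(counts)
print(tfidf.shape)
```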
25,147,323 | 2014-08-05T20:02:00.000 | 0 | 0 | 0 | 0 | android,python,ios,parse-platform,kivy | 25,147,496 | 3 | false | 1 | 1 | The answer is essentially that you must simply save the data somewhere. The details will depend on your requirements, and aren't specific to kivy - you can look up normal android or python practices.
I'm not sure exactly what android guarantees about permissions, but storing stuff in your app directory (on the main partition, not 'external' storage) should be inaccessible to other apps, though anything with root can view them. You can use normal python tools if you want to encrypt the data, and manage your own kivy gui for getting a decryption key/password from the user. | 1 | 3 | 0 | I am working on a Kivy app for iOS and Android and need help with keeping the user persistently logged in, even after the app is closed or killed. I am using Parse to store user credentials.
I've already added an on_pause method to the App class, but this only keeps the user logged in if the app is closed but not killed. Is there a best practice for securely allowing persistent user login with Kivy, even after an app is killed?
Edit: I prefer a single Kivy solution that works for both an Android app and an iOS app, without the need to edit/add iOS or Android specific code. | Saving login screen username and password for Kivy app | 0 | 0 | 0 | 5,234 |
25,148,418 | 2014-08-05T21:13:00.000 | 0 | 0 | 1 | 0 | python,text-analysis | 25,149,261 | 4 | false | 0 | 0 | If you can post the way you are doing it or thinking of doing it with R, I suspect someone could offer some suggestions as to how to do it with Python efficiently. For example, you can make a numpy array of strings and use functions in the numpy.char module to do vectorized operations over strings if you prefer that to writing list-comprehensions or for-loops. | 2 | 0 | 0 | I'm looking for a more efficient way of loading text data into Python, instead of using .readlines(), then manually parsing through the data. My goal here is to run different models on the text.
My classifiers are People's names, which are listed before the text of their... let's call them 'Reviews'... which are separated by ***. Here is an example of the txt file:
Mike P, Review, December, 2013
Mike P, Review, June, 2013
Tom A, Review, December, 2013
Tom A, Review, June, 2013
Mark D, Review, December, 2013
Mark D, Review, June, 2012
Sally M, Review, December, 2011
***
This is Mike P's first review
***
This is Mike P's second review
***
This is Tom A's first review
***
Etc...
Ultimately, I need to create a bag of words from the 'Reviews'. I can do this in R, but I'm forcing myself to learn Python for data analysis and keep spinning my wheels every which way I turn.
Thanks in advance! | Reading text file into python | 0 | 0 | 0 | 752 |
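A hedged sketch of one way to read this layout without readlines(): split the whole file on the *** separators and pair each review with its header line. The filename and the exact header format are assumptions taken from the example above.

```python
from collections import Counter

with open("reviews.txt") as f:
    parts = f.read().split("***")

header, review_blocks = parts[0], parts[1:]
names = [line.split(",")[0].strip() for line in header.splitlines() if line.strip()]
reviews = [block.strip() for block in review_blocks if block.strip()]

# one bag of words per review, paired with the name line it corresponds to
bags = [(name, Counter(text.lower().split())) for name, text in zip(names, reviews)]
print(bags)
```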
25,148,418 | 2014-08-05T21:13:00.000 | 0 | 0 | 1 | 0 | python,text-analysis | 25,148,491 | 4 | false | 0 | 0 | If it's a big amount of data to read at once you can iterate manually via readline() and then parse it on the way dumping unnecessary entries. | 2 | 0 | 0 | I'm looking for a more efficient way of loading text data into Python, instead of using .readlines(), then manually parsing through the data. My goal here is to run different models on the text.
My classifiers are People's names, which are listed before the text of their... let's call them 'Reviews'... which are separated by ***. Here is an example of the txt file:
Mike P, Review, December, 2013
Mike P, Review, June, 2013
Tom A, Review, December, 2013
Tom A, Review, June, 2013
Mark D, Review, December, 2013
Mark D, Review, June, 2012
Sally M, Review, December, 2011
***
This is Mike P's first review
***
This is Mike P's second review
***
This is Tom A's first review
***
Etc...
Ultimately, I need to create a bag of words from the 'Reviews'. I can do this in R, but I'm forcing myself to learn Python for data analysis and keep spinning my wheels every which way I turn.
Thanks in advance! | Reading text file into python | 0 | 0 | 0 | 752 |
25,149,761 | 2014-08-05T23:08:00.000 | 0 | 0 | 1 | 1 | python | 25,150,279 | 3 | false | 0 | 0 | The PYTHONPATH environmenbt variable is not used to select the path of the Python executable - which executable is selected depends, as in all other cases, on the shell's PATH environment variable. PYTHONPATH is used to augment the search list of directories (sys.path in Python) in which Python will look for modules to satisfy imports.
Since the interpreter puts certain directories on sys.path before it actions PYTHONPATH precisely to ensure that replacement modules with standard names do not shadow the standard library names. So any standard library module will be imported from the library associated with the interpreter it was installed with (unless you do some manual furkling, which I wouldn't recommend).
venv/bin/activate does a lot of stuff that needs to be handled in the calling shell's namespace, which can make tailoring code rather difficult if you can't find a way to source the script.
venv/
src/
project_files
I want to run a makefile (which calls out to Python) in the project_files, but I want to run it from a virtual environment. Because of the way my deployment orchestration works, I can't simply do a source venv/bin/activate.
Instead, I've tried to export PYTHONPATH={project_path}/venv/bin/python2.7. When I try to run the makefile, however, the python scripts aren't finding the dependencies installed in the virtualenv. Am I missing something obvious? | Using PYTHONPATH to use a virtualenv | 0 | 0 | 0 | 103 |
25,149,761 | 2014-08-05T23:08:00.000 | 0 | 0 | 1 | 1 | python | 25,150,471 | 3 | false | 0 | 0 | You can actually just call the Python interpreter in your virtual environment. So, in your Makefile, instead of calling python, call venv/bin/python. | 2 | 0 | 0 | I have a virtualenv in a structure like this:
venv/
src/
project_files
I want to run a makefile (which calls out to Python) in the project_files, but I want to run it from a virtual environment. Because of the way my deployment orchestration works, I can't simply do a source venv/bin/activate.
Instead, I've tried to export PYTHONPATH={project_path}/venv/bin/python2.7. When I try to run the makefile, however, the python scripts aren't finding the dependencies installed in the virtualenv. Am I missing something obvious? | Using PYTHONPATH to use a virtualenv | 0 | 0 | 0 | 103 |
25,152,287 | 2014-08-06T04:33:00.000 | 0 | 0 | 1 | 0 | python | 25,152,320 | 2 | false | 0 | 1 | If you're using Windows, you can create a shortcut. Or you can just put your .py file on your desktop.
If you want to make an .exe file I would suggest installing py2exe. I've compiled Tkinter Python scripts into .exes with py2exe many times. | 1 | 0 | 0 | I wrote a program in Python, then created a GUI using Tkinter. When I use programs on my computer (like Microsoft Word), I don't need to access the GUI from the command line I just click the application icon.
How do I put my program (the program itself is in the same .py file as the GUI) into an application icon that will start my program? | how to create an icon for my python application | 0 | 0 | 0 | 2,183 |
25,156,170 | 2014-08-06T09:00:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python,html,mako | 25,156,421 | 1 | false | 1 | 0 | Ajax is the "right" way to do this.
In order to insert the values into your separate Javascript file dynamically, it can no longer be served as a static file. Beyond that, it adds an extra layer of problems with security and maintainability as you have to deal with string escaping, possible script injection, and having Mako syntax in your Javascript. Not to mention losing the ability to host your .js files on a CDN or server configured for static files. | 1 | 2 | 0 | I have a dashboard I am working on, using Python cherrypy framework and Mako template language.
I had an html file for each of dashboard pages.
There, I used Mako to pass some data to html and inline Javascript.
For example, to display the names of some processes when I only had the list of ids, I passed a Python dict that maps ids to their corresponding names, and then used the dict in ${} tags.
However, as I am now moving these Javascript codes into a separate file with .js extension, I found out simply putting the same Mako code blocks in the Javascript code does not work.
Is there any way I could use Mako template language in an external .js file that is imported in an html file?
Is it considered a bad practice and should I pass all these data using XMLHTTPRequests when I am passing them to Javascript? | Using Mako Template in Javascript | 0.197375 | 0 | 0 | 1,595 |
25,156,559 | 2014-08-06T09:19:00.000 | 0 | 0 | 0 | 0 | python,branch,history,revision,pysvn | 25,157,103 | 1 | true | 0 | 0 | Oh well, I found a way: I can create a list of log messages using the log command and loop through those messages. They are dictionaries containing all relevant information.. | 1 | 0 | 0 | I'm using PySVN to access the SVN API from a python script.
I can't seem to find a way to figure out what other revision numbers there are in a given branch.
I can determine the revision number of my working copy trunk, as well as the revision number of the latest change by using info and/or info2, but I would like to find out what other revision numbers I have in the current branch (previous and later) so I will be able to update to other specific revisions.
Any ideas?
Thx, Martin | Python PySVN - Branch revision history | 1.2 | 0 | 0 | 409 |
25,162,141 | 2014-08-06T13:50:00.000 | 3 | 0 | 1 | 1 | python,linux,python-module | 25,162,261 | 2 | false | 0 | 0 | You should install Python libraries with the Python package installer, pip.
Create a virtualenv with the Python version you want to use, activate it, and do pip install NetfilterQueue. You'll still need to install the system dependencies (eg libnetfilter-queue-dev in this case) with apt-get. | 1 | 3 | 0 | I have different python versions installed on my ubuntu machine. The default version is 2.7.
So when I install any new python module, for example using:
#apt-get install python-nfqueue
it will be installed just for the default version (2.7)
How can I install the new modules for the other versions?
Is there a way to do it using apt-get install?
Thank you! | Install python module for non-default version on linux | 0.291313 | 0 | 0 | 2,428 |
25,164,837 | 2014-08-06T15:51:00.000 | 1 | 0 | 0 | 1 | python,bash | 25,164,957 | 3 | false | 0 | 0 | Yes, that is best and almost only way to pass data from python to bash.
Also, your function can write to a file, which would then be read by the bash script.
Is there any way I can invoke a Python function from bash and retrieve this collection of values? If not, should I just have the Python function print to STDOUT and have the shell script parse the result? | How to pass collection data from Python to bash? | 0.066568 | 0 | 0 | 2,192 |
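A hedged sketch of the print-to-STDOUT route: the Python side emits one key per line and the shell script captures the output with command substitution. The dict name stands in for whatever is defined in the production script.

```python
# keys_to_stdout.py
PROCESS_MAP = {"web": {"port": 80}, "worker": {"port": 0}}   # placeholder for the prod dict;
                                                             # in practice, import it from there

if __name__ == "__main__":
    for key in PROCESS_MAP:
        print(key)          # one key per line
        # in bash:  keys=$(python keys_to_stdout.py); for k in $keys; do ...; done
```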
25,168,058 | 2014-08-06T18:55:00.000 | 0 | 0 | 0 | 0 | python,pandas,openpyxl,versions | 25,178,533 | 1 | false | 0 | 0 | The best thing would be to remove the version of openpyxl you installed and let Pandas take care. | 1 | 0 | 1 | My installed version of the python(2.7) module pandas (0.14.0) will not import. The message I receive is this:
UserWarning: Installed openpyxl is not supported at this time. Use >=1.6.1 and <2.0.0.
Here's the problem - I already have openpyxl version 1.8.6 installed so I can't figure out what the problem might be! Does anybody know where the issue may lie? Do I need a different combination of versions? | Python pandas module openpxyl version issue | 0 | 1 | 0 | 114 |
25,169,506 | 2014-08-06T20:22:00.000 | 1 | 0 | 1 | 0 | python,pandas | 25,170,275 | 2 | false | 0 | 0 | Try to locate your pandas lib in /python*/lib/site-packages, add dir to your sys.path file. | 1 | 0 | 1 | I installed pandas using pip and get the following message "pip install pandas
Requirement already satisfied (use --upgrade to upgrade): pandas in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
Cleaning up..."
When I load up python and try to import pandas, it says module not found. Please help | error in import pandas after installing it using pip | 0.099668 | 0 | 0 | 1,176 |
25,170,016 | 2014-08-06T20:55:00.000 | 0 | 0 | 1 | 1 | python,virtualenv,tornado | 25,170,606 | 2 | false | 0 | 0 | Could it be that the virtualEnv is inheriting the global site-packages? I'm not sure if I added -no-site-packages when I set up the virtualEnv. Is there an easy way to address this setting now or to test this possibility?
no-global-site-packages.txt is present in the python2.7 directory
and
orig-prefix.txt contains a parent directory outside of the virtualenv of the older version of tornado that is being loaded | 1 | 1 | 0 | I am running python-2.7 with virtualenv on a unix server to which I do not have root access. I updated the module tornado using pip install tornado --upgrade because installing ipython required tornado >= 3.1.0 but only version 2.4 was installed by default on the server. However, when I try to open ipython, it still complains that I don't have the updated version.
I confirmed that ipython is correctly aliased to the virtualenv, and that the upgrade had indeed produced tornado version 4.0 in the site-packages of the virtualenv.
However if I open python (correctly aliased to the virtualenv) and import tornado, I find that it is importing the earlier version (2.4) and not the newer version from my virtualenv. Importing another package that was only installed on the virtualenv correctly imports it from the site-packages of the virtualenv.
Any idea how I should tell python to use the updated version of tornado by default instead of the earlier version that isn't on the virtualenv?
One really hacky thing that I tried was appending to my virtualenv activate file the following:
PYTHONPATH=path_to_standardVE/lib/python2.7/site-packages/tornado:$PYTHONPATH
If I check $PYTHONPATH upon startup, it indeed contains this path at the front. However, loading the module in python still loads the 2.4 version.
Thanks! | virtualenv not finding updated module | 0 | 0 | 0 | 466 |
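A quick diagnostic sketch: run this with the virtualenv's interpreter to see exactly which tornado gets imported and from where.

```python
import sys
import tornado

print(tornado.version)      # e.g. "4.0" vs "2.4"
print(tornado.__file__)     # should point inside the virtualenv's site-packages
print(sys.path[:5])         # the entries Python searches first
```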
25,171,044 | 2014-08-06T22:05:00.000 | 2 | 0 | 0 | 0 | python,sockets,python-3.x,tcp | 25,171,183 | 1 | true | 0 | 0 | No. Unlike UDP sockets, TCP sockets work are connection-oriented. Whatever data is written into a socket, "magically" appears to come out of the socket at the other end as a stream of data. For that, both sockets maintain a virtual connection, a state. Among other things, the state defines both endpoints of the conntection - IP and port numbers of both sockets. So a single TCP socket can only talk to a single TCP socket at the other end.
UDP sockets, on the other hand, operate on a per-packet basis (connectionless), allowing you to send and receive packets to/from any destination using the same socket. However, UDP does not guarantee reliability or in-order delivery.
Btw, your question has nothing to do with Python. All sockets (except raw sockets) are operating system sockets that work in the same way in all languages. | 1 | 0 | 0 | I'm trying to write a server for a chat program. I want the server to have a TCP connection with every chat user. Is there a way for the server to have multiple TCP connections at the same time without creating a socket for every connection? And if yes, how? | Multiple TCP connections in python | 1.2 | 0 | 1 | 1,292 |
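A hedged sketch of what the answer describes: one listening socket, and a separate per-client socket returned by accept() for every user, each handled in its own thread. Host and port are placeholders, and the handler just echoes instead of broadcasting.

```python
import socket
import threading

def handle_client(conn, addr):
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:                 # client disconnected
                break
            conn.sendall(data)           # echo; a chat server would broadcast instead

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 5000))
server.listen(5)

while True:
    conn, addr = server.accept()         # a new socket object per connected user
    threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()
```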
25,172,220 | 2014-08-07T00:03:00.000 | 1 | 0 | 0 | 0 | python,pyqt | 25,172,853 | 7 | true | 0 | 1 | You can use the removeItem() method to remove an item from the QComboBox.
void QComboBox::removeItem ( int index )
Removes the item at the given index from the combobox. This will update the current index if the index is removed.
This function does nothing if index is out of range.
If you don't know the index, use the findText() method.
There are no hide/unhide methods for QComboBox items. | 1 | 4 | 0 | I can't find a way to hide QComboBox items. So far the only way to filter its items out is to delete the existing ones (with the .clear() method) and then rebuild the entire QComboBox again using its .addItem() method.
I would rather temporarily hide the items, and when they are needed, unhide them again.
Can hiding/unhiding of QComboBox items be accomplished? | How to hide QComboBox items instead of clearing them out | 1.2 | 0 | 0 | 10,685 |
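A hedged sketch of the remove-and-reinsert pattern in PyQt (PyQt5 imports here; the question's PyQt version may differ). The "hide" is simulated by remembering the removed item so it can be put back at the same index later.

```python
import sys
from PyQt5.QtWidgets import QApplication, QComboBox

app = QApplication(sys.argv)
combo = QComboBox()
combo.addItems(["alpha", "beta", "gamma"])

# "hide" an item: find it, remember it, remove it
idx = combo.findText("beta")
if idx != -1:
    hidden = (idx, combo.itemText(idx))
    combo.removeItem(idx)
    # ... later, "unhide" it by inserting it back at its old position
    combo.insertItem(hidden[0], hidden[1])

combo.show()
app.exec_()
```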
25,172,492 | 2014-08-07T00:39:00.000 | -1 | 0 | 1 | 0 | python,sympy | 26,978,949 | 2 | false | 0 | 0 | Here is a sequence of numbers:
4, 10, 16, 22, 28
The nth term of a sequence is always written in the form "?n + ?".
The number in front of the "n" is always the difference to get from one term to the next. Since the difference is 6, the first part of our rule will be "6n". The rule follows the six times table: 6, 12, 18, 24... etc.
The numbers in the sequence are always 2 less than the 6 times table so we "adjust" our rule by subtracting 2. Now putting this together gives us:
nth term = 6n - 2.
Your Welcome happy to help:-) | 1 | 2 | 0 | Is there a way to compute the nth term of a Taylor series expansion without defining the value of n? In the case of sine it is (-1)**n*x**(2*n+1)/(2*n+1)!. In Maxima it is a (somewhat) related form to do it with powerseries(sin(x), x, 0). | How to find the nth term of a Taylor series in Sympy | -0.099668 | 0 | 0 | 992 |
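A hedged SymPy sketch for the question itself: build the general nth term of sin(x) symbolically without fixing n, then substitute values of n to recover individual terms.

```python
from sympy import symbols, factorial, sin, series

x = symbols('x')
n = symbols('n', integer=True, nonnegative=True)

nth_term = (-1)**n * x**(2*n + 1) / factorial(2*n + 1)

for k in range(3):
    print(nth_term.subs(n, k))          # x, -x**3/6, x**5/120
print(series(sin(x), x, 0, 8))          # cross-check against the ordinary expansion
```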
25,174,797 | 2014-08-07T05:28:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-models,django-forms | 25,174,963 | 2 | false | 1 | 0 | Define a Custom ModelForm by specifying model in Meta and declare the required additional field there. After that set form attribute of your Admin class with the name of YourForm. | 1 | 0 | 0 | I have a model form and I need to add a checkbox to it. This checkbox is not mapped to any database field. Is this possible ? How ! | Can I add a checkbox in to Model Forms in Django even if there is no Field in DB for it | 0 | 0 | 0 | 868 |
25,176,025 | 2014-08-07T06:57:00.000 | 1 | 0 | 0 | 0 | python,tkinter | 25,183,356 | 1 | false | 0 | 1 | It can only respond to keyboard events if it has the focus. If it is withdrawn it cannot have the keyboard focus. So, no, you cannot have a withdrawn window be the active window. | 1 | 0 | 0 | Is there a way to set the withdrawed tk window to the top level, i.e. the active window, even though its withdrawed?
Its so it will listen to keys. | TKinter Python - Set active window when withdrawed | 0.197375 | 0 | 0 | 246 |
25,176,734 | 2014-08-07T07:37:00.000 | 2 | 0 | 0 | 1 | python,django,twisted,event-driven,twisted.web | 25,198,042 | 2 | false | 1 | 0 | Nope, unless you heavily modify django db adapters and some core component you will not get any advantage. There are some tool for simplyfing the job, but you will be on the bleeding edge trying to adapt something built with the blocking paradigm since the beginning, to something completely different.
On the other side, performance should not be worse, as 99.9% of the time your app itself is the bottleneck, not your WSGI infrastructure.
Regarding async django, a lot of people have had luck with gevent, but you need to carefully analyze your app to be sure all of the components are gevent-friendly (and this may not be an easy task, especially for db adapters).
Remember, even if your app is 99.9999999% non-blocking, you are still blocking. | 1 | 3 | 0 | I have a Django application which I need to deploy in a WSGI container. I can either chose an event driven app server like TwistedWeb or a process driven server like uWSGI. I completely understand the difference between an event driven and a process driven server and I know Django framework is blocking in nature.
I came across TwistedWeb which lets us run a WSGI application in a simple fashion.
My questions are as follows:
1) Would I gain anything by running Twisted instead of uWSGI as Django is blocking in nature. Is TwistedWeb different from the standard twisted library ? I know people run Twisted with Django when they need support for async as well, for ex chat along with normal functionality and they still want to have just one app. I have no such use case and for me its just a website.
2) Would the performance be worse on TwistedWeb as its just a single process and my request would block as Django is synchronous in nature ? Or TwistedWeb runs something like uWSGI which launches multiple processes before hand and distributes requests in a roundrobin fashion among those ? If yes then is TwistedWeb any better than uWSGI ?
3) Is there any other protocol other than WSGI which can integrate Twisted with Django and still give me async behavior (trying my luck here :) ) | Is twistedweb with django recommeneded | 0.197375 | 0 | 0 | 244 |
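If you do end up exploring the gevent route mentioned in the answer above, a minimal sketch looks like this (the mysite.wsgi module path is an assumption, and the gevent-friendliness of your DB adapter still has to be verified):
from gevent import monkey
monkey.patch_all()                       # must run before other imports

from gevent.pywsgi import WSGIServer
from mysite.wsgi import application      # the standard Django WSGI entry point

WSGIServer(('0.0.0.0', 8000), application).serve_forever()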
25,177,236 | 2014-08-07T08:04:00.000 | 1 | 0 | 0 | 0 | python,keyboard,kivy,python-3.4 | 25,405,116 | 1 | false | 0 | 1 | I have done something similar; however, I did not use popups. My application has a ScreenManager with many Screens, each screen has sub-menus, and each sub-menu of each screen requires its own key mappings.
To better handle the switching of these sets of key mappings, I created a KeyboardManager class and a KeyboardLayout class:
The KeyboardManager class has the keyboard_handler() function which you give to the Window.
The KeyboardLayout class would contain a dictionary of all the bindings, i.e. {'a': a_key_callback, 'esc': esc_key_callback}
Each sub-menu in my app would have a KeyboardLayout instance, and in it I would define my key mappings in its dictionary. When the sub-menu is brought up, I would attach the KeyboardLayout to the KeyboardManager (save a reference to the KeyboardLayout in the KeyboardManager). The KeyboardManager.keyboard_handler method then intercepts key events, looks up the function to be called in the currently attached KeyboardLayout and then calls that function. In the keyboard_handler you can decide what to do if a key mapping is not there.
In your case, you would need one KeyboardLayout for the main layout and one for the popup. When the popup is opened, attach its KeyboardLayout to the KeyboardManager; when you leave the popup, attach the main layout's KeyboardLayout back to the KeyboardManager.
This is the method I applied and it worked very well for me. Very easy to manage once the classes are set up. | 1 | 0 | 0 | I have a main Layout and some Popups. I want keys to do some functions when the main Layout is focused, other functions for one Popup when it's opened, others for another Popup, etc. What is the best way to do that? | How can I use keyboard for different layouts in Kivy? | 0.197375 | 0 | 0 | 435
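A minimal sketch of the KeyboardManager/KeyboardLayout idea described in the answer above (the on_key_down callback signature and the use of codepoint as the lookup key are assumptions — check them against your Kivy version):
from kivy.core.window import Window

class KeyboardLayout(object):
    def __init__(self, bindings=None):
        # e.g. {'a': a_key_callback, 'q': quit_callback}
        self.bindings = bindings or {}

class KeyboardManager(object):
    def __init__(self):
        self.layout = None
        Window.bind(on_key_down=self.keyboard_handler)

    def attach(self, layout):
        self.layout = layout

    def keyboard_handler(self, window, key, scancode, codepoint, modifiers):
        if self.layout and codepoint in self.layout.bindings:
            self.layout.bindings[codepoint]()   # call the mapped function
            return True                         # consume the event
        return False                            # fall through for unmapped keys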
25,182,812 | 2014-08-07T12:39:00.000 | 27 | 1 | 0 | 1 | python,pytest,pudb | 25,183,130 | 2 | true | 0 | 0 | Simply by adding the -s flag pytest will not replace stdin and stdout and debugging will be accessible, i.e. pytest -s my_file_test.py will do the trick.
In the documentation provided by ambi it is also said that previously passing -s explicitly was required for regular pdb too; now the -s flag is implicitly applied with the --pdb flag.
However, pytest does not implicitly support pudb, so setting -s is needed. | 1 | 24 | 0 | Previously my testing library of choice was unittest. It was working with my favourite debugger - Pudb. Not Pdb!!!
To use Pudb with unittest, I paste import pudb;pudb.set_trace() between the lines of code.
I then executed python -m unittest my_file_test, where my_file_test is module representation of my_file_test.py file.
Simply using nosetests my_file_test.py won't work - AttributeError: StringIO instance has no attribute 'fileno' will be thrown.
With py.test neither works:
py.test my_file_test.py
nor
python -m pytest my_file_test.py
both throw ValueError: redirected Stdin is pseudofile, has no fileno()
Any ideas about how to use Pudb with py.test | Using Python pudb debugger with pytest | 1.2 | 0 | 0 | 4,263 |
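A minimal illustration of the accepted answer (the file name is an assumption):
# inside my_file_test.py, at the point where you want to stop:
import pudb; pudb.set_trace()
# then run the tests without output capturing:
#   pytest -s my_file_test.py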
25,186,308 | 2014-08-07T15:23:00.000 | 4 | 0 | 0 | 0 | python,security,ubuntu,flask | 25,212,067 | 1 | false | 1 | 0 | I understand you're using Apache or Nginx as your web server. If that's correct, I would place both your app's code and the app.wsgi file in your home directory.
Placing a file in /var/www allows it to be seen by the outside world in some cases (that is, unless you specifically configure your web server to ignore it or deny access to it). Placing it in /home/user doesn't allow it to be seen by the outside world, unless explicitly configured.
As for permissions, You would need to give the web server user (usually www-data in Apache, unless flask_user is your web server user as well) read permission to the WSGI file, and probably also execute permissions. Not sure about permissions to the other python files, but that's easy to test. Start off with denying your web server user all permissions to the file. If that doesn't work, give it read permissions, and so on until the site works. That would be the minimum needed permission. | 1 | 4 | 0 | I have built a simple flask app on ubuntu server and have placed the code in the following directories:
Main app code: /home/user/flaskapp
WSGI Config: www/flaskapp/app.wsgi
My questions are:
Is the placement of the app's code in my home directory okay in production?
What should I have in my Folder Permissions to run a safe/secure site?
My Ubuntu Users name is 'flask_user', should I give it any special permissions or groups? | Setup of Flask App Directory and Permissions? | 0.664037 | 0 | 0 | 2,504 |
25,187,640 | 2014-08-07T16:28:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,qt4,pyqt4 | 25,195,595 | 2 | false | 0 | 1 | In PyQt4 it's not a bug but a feature, and you can't change this behaviour. In PyQt5, the placeholder text is shown as long as the text is empty.
A simple way to work around the problem is to give focus to some other widget before the QLineEdit. When the user presses the TAB key, the next focus is the QLineEdit. | 1 | 2 | 0 | I have a QLineEdit with a PlaceholderText.
I want to clear the PlaceholderText only when a person starts typing; otherwise the blinking cursor and PlaceholderText should both be there in that QLineEdit.
It is the first field of the page, so I have set the focus to this QLineEdit, but the PlaceholderText disappears as soon as this page is displayed.
Please suggest whether I'll have to add a SIGNAL/SLOT for this QLineEdit, so that the PlaceholderText doesn't get cleared off. | How to keep both cursor and PlaceholderText displayed at the same time in a QLineEdit? | 0.099668 | 0 | 0 | 256
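A small sketch of the workaround from the answer above (widget names and layout are assumptions): keep initial focus on another widget and make the QLineEdit next in the tab order, so the placeholder stays visible until the user tabs in and types.
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
window = QtGui.QWidget()
layout = QtGui.QVBoxLayout(window)
name_edit = QtGui.QLineEdit()
name_edit.setPlaceholderText('Enter your name')
ok_button = QtGui.QPushButton('OK')
layout.addWidget(name_edit)
layout.addWidget(ok_button)
ok_button.setFocus()                              # initial focus stays off the line edit
QtGui.QWidget.setTabOrder(ok_button, name_edit)   # TAB moves focus into the line edit
window.show()
app.exec_()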
25,189,859 | 2014-08-07T18:43:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,interpreter,bytecode,jit | 25,189,969 | 2 | false | 0 | 0 | Python loads the main script into memory, compiles it into bytecode and runs that. If you modify the source file in the meantime, you're not affecting the bytecode.
If you're running the script as the main script (i.e. by calling it like python myfile.py), then the bytecode will be discarded when the script exits.
If you're importing the script, however, then the bytecode will be written to disk as a .pyc file which won't be recompiled when imported again, unless you modify the corresponding .py file.
Your big 6.5 MB program consists of many modules which are imported by the (probably small) main script, so only that will have to be compiled at each run. All the other files will have their .pyc file ready to run. | 2 | 4 | 0 | Say I run a Python (2.7, though I'm not sure that makes a difference here) script. Instead of terminating the script, I tab out, or somehow switch back to my editing environment. I can then modify the script and save it, but this changes nothing in the still-running script.
Does Python load all source files into memory completely at launch? I am under the impression that this is how the Python interpreter works, but this contradicts my other views of the Python interpreter: I have heard that .pyc files serve as byte-code for Python's virtual machine, like .class files in Java. At the same time however, some (very few in my understanding) implementations of Python also use just-in-time compilation techniques.
So am I correct in thinking that if I make a change to a .py file while my script is running, I don't see that change until I re-run the script, because at launch all necessary .py files are compiled into .pyc files, and simply modifying the .py files does not remake the .pyc files?
If that is correct, then why don't huge programs, like the one I'm working on with ~6,550 kilobytes of source code distributed over 20+ .py files, take forever to compile at startup? How is the program itself so fast?
Additional Info:
I am not using third-party modules. All of the files have been written locally. The main source file is relatively small (10 kB), but the source file I primarily work on is 65 kB. It was also written locally and changes every time before launch. | How does Python read and interpret source files? | 0.291313 | 0 | 0 | 1,871 |
25,189,859 | 2014-08-07T18:43:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,interpreter,bytecode,jit | 25,190,071 | 2 | false | 0 | 0 | First of all, you are indeed correct in your understanding that changes to a Python source file aren't seen by the interpreter until the next run. There are some debugging systems, usually built for proprietary purposes, that allow you to reload modules, but this bring attendant complexities such as existing objects retaining references to code from the old module, for example. It can get really ugly, though.
The reason huge programs start up so quickly is that the interpreter tries to create a .pyc file for every .py file it imports if either no corresponding .pyc file exists or the .py is newer. The .pyc is indeed the program compiled into byte code, so it's relatively quick to load.
As far as JIT compilation goes, you may be thinking of the PyPy implementation, which is written in Python and has backends in several different languages. It's increasingly being used in Python 2 shops where execution speed is important, but it's a long way from the CPython that we all know and love.
Does Python load all source files into memory completely at launch? I am under the impression that this is how the Python interpreter works, but this contradicts my other views of the Python interpreter: I have heard that .pyc files serve as byte-code for Python's virtual machine, like .class files in Java. At the same time however, some (very few in my understanding) implementations of Python also use just-in-time compilation techniques.
So am I correct in thinking that if I make a change to a .py file while my script is running, I don't see that change until I re-run the script, because at launch all necessary .py files are compiled into .pyc files, and simply modifying the .py files does not remake the .pyc files?
If that is correct, then why don't huge programs, like the one I'm working on with ~6,550 kilobytes of source code distributed over 20+ .py files, take forever to compile at startup? How is the program itself so fast?
Additional Info:
I am not using third-party modules. All of the files have been written locally. The main source file is relatively small (10 kB), but the source file I primarily work on is 65 kB. It was also written locally and changes every time before launch. | How does Python read and interpret source files? | 0.197375 | 0 | 0 | 1,871 |
25,191,537 | 2014-08-07T20:23:00.000 | 0 | 0 | 0 | 0 | python,authentication,oauth,flask | 25,207,309 | 1 | false | 1 | 0 | One option would be to require the user to register with your site after using OAuth2, but that's silly - you use OAuth2 to save that in the first place.
I'd just not save a password for this user. Why would you need it anyway? He's authenticating via OAuth2 as you said, and you need to ping the OAuth2 provider to verify the user. | 1 | 0 | 0 | I have a User model that has name, email, and password.
If a user signs up normally (through a form) the password stored in my db is hashed using passlib. Then, a token is generated and returned to the user's client. The token is a serialization of the player's id. To verify a player, he simply logs in with the token in the password field of a basic auth, and I simply deserialize the token and look up the player via the deserialized id. Also, a player may verify himself with his password.
The problem with using Google's OAuth2 is that a player does NOT set his password; he only sends the server a Google verified token, which the server sends to Google and obtains the user's email, name, etc.
How do I generate a password for the user? What am I supposed to do here?
My hackish workaround right now for Google OAuth2 user registration is simply: get the user's info from Google, generate a bogus password (which is hashed), and then generate the auth token for the user. Then, replace the bogus password with the auth token (which is hashed) and insert that as the user's password. The auth token is then returned to the user's client.
In other words, the auth token becomes the password. Right now my auth tokens don't expire either.
Obviously this is a huge hack. What's the correct way to do this? Should I just ping Google every time a user needs to verify himself? | Python Flask and Google OAuth2 best practices | 0 | 0 | 0 | 396 |
25,192,029 | 2014-08-07T20:56:00.000 | 0 | 0 | 0 | 0 | python,nltk | 25,414,604 | 1 | false | 0 | 0 | Read in the chunked portion of your corpus and convert it into the format that the NLTK expects, i.e. as a list of shallow Trees. Once you have it in this form, you can pass it to the evaluate() method just like you would pass the "gold standard" examples.
The evaluate method will strip off the chunks, run your text through the chunker, and compare the result with the chunks you supplied to calculate accuracy. | 1 | 1 | 1 | How to test the default NLTK NER chunker's accuracy on own corpus?
I've tagged a percentage of my own corpus. I'm curious if it's possible to use the default NLTK tagger to see accuracy rate on this corpus?
I already know about the ne_chunker.evaluate() function, but it's not immediately clear to me how to input in my own corpus for evaluation (rather than the gold standard corpus) | How to test the default NLTK NER chunker's accuracy on own corpus? | 0 | 0 | 0 | 269 |
25,193,192 | 2014-08-07T22:18:00.000 | 0 | 0 | 0 | 0 | python,solr,lucene,solr-query-syntax | 25,193,866 | 2 | false | 1 | 0 | You shouldn't query solr when there is no term being looked for (and I seriously doubt google looks over its searchable indexes when a search term is empty). This logic should be built into whatever mechanism you use to parse the user supplied query terms before constructing the solr query. Let's say the user's input is represented as a simple string where each word is treated as a unique query term. You would want to split the string on spaces into an array of strings, map over the array and remove strings prefixed by "-", and then construct the query terms from what remains in the array. If flattening the array yields an empty string, return an empty array instead of querying solr at all. | 1 | 0 | 0 | I want my search tool to have a similar behaviour to Google Search when all of the elements entered by the user are excluded (eg.: user input is -obama). In those cases, Google returns an empty result. In my current code, my program just makes an empty Solr query, which causes an error in Solr.
I know that you can enter *:* to get all the results, but what should I fill in my Solr query so that Solr will return an empty search result?
EDIT:
Just to make it clearer, the thing is that when I have something like -obama, I want Solr to return an empty search result. If you google -obama, that's what you get, but if you put -obama on Solr, it seems that the result is everything (all the documents), except for the ones that have "obama" | How do I make Solr return an empty result? | 0 | 0 | 1 | 660 |
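A minimal sketch of the pre-parsing step described in the answer above (the function name is my own):
def build_terms(user_input):
    # drop excluded ("-" prefixed) terms before talking to Solr at all
    return [t for t in user_input.split() if not t.startswith('-')]

terms = build_terms('-obama')
if not terms:
    results = []      # mimic Google: empty result set, no Solr query issued
else:
    pass              # build and send the real Solr query from `terms` here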
25,194,297 | 2014-08-08T00:23:00.000 | 3 | 0 | 0 | 0 | python-3.x,peewee | 25,365,070 | 3 | false | 0 | 0 | Peewee cannot create databases with MySql or with other systems that require database and user setup, but will create the database with sqlite when the first table is created. | 2 | 11 | 0 | I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any the database I may need to connect to, I would need to create it manually using mysql or some other tool.
Is that accurate, or am I missing something? | Can peewee create a new MySQL database | 0.197375 | 1 | 0 | 4,151 |
25,194,297 | 2014-08-08T00:23:00.000 | 14 | 0 | 0 | 0 | python-3.x,peewee | 25,195,428 | 3 | true | 0 | 0 | Peewee can create tables but not databases. That's standard for ORMs, as creating databases is very vendor-specific and generally considered a very administrative task. PostgreSQL requires you to connect to a specific database, Oracle muddles the distinction between users and databases, SQLite considers each file to be a database...it's very environment specific. | 2 | 11 | 0 | I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any the database I may need to connect to, I would need to create it manually using mysql or some other tool.
Is that accurate, or am I missing something? | Can peewee create a new MySQL database | 1.2 | 1 | 0 | 4,151 |
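A hedged sketch of the usual workaround: create the database once with a plain MySQL driver (peewee will not do this for you), then point peewee at it and let it create the tables. Connection details are assumptions, and depending on your driver the password keyword may be spelled passwd.
import pymysql
import peewee

# step 1: create the database with a raw driver connection
conn = pymysql.connect(host='localhost', user='root', password='secret')
conn.cursor().execute('CREATE DATABASE IF NOT EXISTS mydb')
conn.close()

# step 2: point peewee at it and create the tables
db = peewee.MySQLDatabase('mydb', host='localhost', user='root', password='secret')

class Person(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = db

db.connect()
db.create_tables([Person])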
25,195,723 | 2014-08-08T03:38:00.000 | 1 | 0 | 0 | 0 | python,mysql,excel,win32com,xlrd | 25,203,796 | 1 | true | 1 | 0 | Probably not the answer you were looking for, but your post is very broad, and I've used win32com and Excel a fair bit and don't see those as good tools towards your goal. An easier strategy is this:
for the server, use Flask: it is a Python web framework with a built-in development server that makes it crazy easy to respond to HTTP requests via Python code and HTML templates. You'll have a fully capable server running in 5 minutes, then you will need a bit of time to create code to get data from your DB and render from templates (which are really easy to use).
for the database, use SQLite (there is far more overhead integrating with MySQL); because you only have 2 days, so
you could also use a simple CSV file, since the API (Python has a CSV file read/write module) is much simpler, less ramp up time. One CSV per user, easy to manage. You don't worry about insertion of rows for a user, you just append; and you don't implement remove of rows for a user, you just mark as inactive (a column for active/inactive in your CSV). In processing GET request from client, as you read from the CSV, you can count how many certain rows are inactive, and do a re-write of the CSV, so once in a while the request will be a little slower to respond to client.
even simpler yet you could use in-memory data structure of your choice if you don't need persistence across restarts of the server. If this is for a demo this should be acceptable limitation.
for the client side, use jQuery on top of javascript -- maybe you are doing that already. Makes it super easy to manipulate the DOM and use effects like slide-in/out etc. Get yourself the book "Learning jQuery", you'll be able to make good use of jQuery in just a couple hours.
If you only have two days it might be a little tight, but you will probably need more than 2 days to get around the issues you are facing with your current strategy, and issues you will face imminently. | 1 | 0 | 0 | I have developed a website where the pages are simply html tables. I have also developed a server by expanding on python's SimpleHTTPServer. Now I am developing my database.
Most of the table contents on each page are static and doesn't need to be touched. However, there is one column per table (i.e. page) that needs to be editable and stored. The values are simply text that the user can enter. The user enters the text via html textareas that are appended to the tables via javascript.
The database is to store key/value pairs where the value is the user entered text (for now at least).
Current situation
Because the original format of my webpages was xlsx files I opted to use an excel workbook as my database that basically just mirrors the displayed web html tables (pages).
I hook up to the excel workbook through win32com. Every time the table (page) loads, javascript iterates through the html textareas and sends an individual request to the server to load in its respective text from the database.
Currently this approach works but is terribly slow. I have tried to optimize everything as much as I can and I believe the speed limitation is a direct consequence of win32com.
Thus, I see four possible ways to go:
Replace my current win32com functionality with xlrd
Try to load all the html textareas for a table (page) at once through one server call to the database using win32com
Switch to something like sql (probably use mysql since it's simple and robust enough for my needs)
Use xlrd but make a single call to the server for each table (page) as in (2)
My schedule to build this functionality is around two days.
Does anyone have any thoughts on the tradeoffs in time-spent-coding versus speed of these approaches? If anyone has any better/more streamlined methods in mind please share! | Database in Excel using win32com or xlrd Or Database in mysql | 1.2 | 1 | 0 | 273 |
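A bare-bones sketch of the Flask-plus-CSV idea from the answer above (file names, the table.html template and the CSV layout are all assumptions), just to show how little server code is involved:
import csv
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/user/<name>')
def user_table(name):
    # read this user's CSV, skipping rows marked inactive
    with open('%s.csv' % name) as f:
        rows = [r for r in csv.reader(f) if r and r[-1] != 'inactive']
    return render_template('table.html', rows=rows)

@app.route('/user/<name>/add', methods=['POST'])
def add_row(name):
    # appending is the only write operation; "deletes" just flip the flag
    with open('%s.csv' % name, 'a') as f:
        csv.writer(f).writerow([request.form['key'], request.form['value'], 'active'])
    return 'ok'

if __name__ == '__main__':
    app.run(debug=True)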
25,197,798 | 2014-08-08T07:09:00.000 | 1 | 0 | 0 | 0 | python,scrapy | 25,197,916 | 1 | true | 1 | 0 | Broad Crawls
Scrapy defaults are optimized for crawling specific sites. These sites are often handled by a single Scrapy spider, although this is not necessary or required (for example, there are generic spiders that handle any given site thrown at them).
In addition to this “focused crawl”, there is another common type of crawling which covers a large (potentially unlimited) number of domains, and is only limited by time or other arbitrary constraint, rather than stopping when the domain was crawled to completion or when there are no more requests to perform. These are called “broad crawls” and is the typical crawlers employed by search engines.
These are some common properties often found in broad crawls:
they crawl many domains (often, unbounded) instead of a specific set of sites
they don't necessarily crawl domains to completion, because it would be impractical (or impossible) to do so, and instead limit the crawl by time or number of pages crawled
they are simpler in logic (as opposed to very complex spiders with many extraction rules) because data is often post-processed in a separate stage
they crawl many domains concurrently, which allows them to achieve faster crawl speeds by not being limited by any particular site constraint (each site is crawled slowly to respect politeness, but many sites are crawled in parallel)
As said above, Scrapy default settings are optimized for focused crawls, not broad crawls. However, due to its asynchronous architecture, Scrapy is very well suited for performing fast broad crawls. | 1 | 0 | 0 | Is Scrapy framework efficient in crawling any website ? I ask this question because I found on their tutorial that they build usually regular expressions that depends on the architecture (the structure of the links) of the website to crawl it. Does this mean Scrapy is not able to be generic and crawl any website whatever the manner on which its URL are structured ? Because in my case I have to deal with a very large number of websites: it is impossible to program regular expressions for each one of them. | Is Scrapy able to crawl any type of websites? | 1.2 | 0 | 0 | 218 |
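For illustration, a generic spider with no site-specific regular expressions might look like this (import paths are for recent Scrapy releases; treat it as a sketch rather than a production crawler):
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class BroadSpider(CrawlSpider):
    name = 'broad'
    start_urls = ['http://example.com']
    # follow every link found, no site-specific patterns
    rules = [Rule(LinkExtractor(), callback='parse_item', follow=True)]

    def parse_item(self, response):
        yield {'url': response.url,
               'title': response.css('title::text').extract_first()}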
25,199,405 | 2014-08-08T08:47:00.000 | 10 | 0 | 0 | 1 | python,windows,ip,xml-rpc | 25,221,564 | 1 | false | 0 | 0 | Every domain name gets resolved. There is no exception to this rule, including with regards to a local site.
When you make a request to localhost, localhost's IP gets resolved by the host file every time it gets requested. In Windows, the host file controls this. But if you make a request to 127.0.0.1, the IP address is already resolved, so any request goes directly to this IP. | 1 | 15 | 0 | I've setup an XML-RPC server/client communication under Windows. What I've noticed is that if exchanged data volume become huge, there's a difference in starting the server listening on "localhost" vs. "127.0.0.1". If "127.0.0.1" is set, the communication speed is faster than using "localhost". Could somebody explain why? I thought it could be a matter on naming resolving, but....locally too? | "localhost" vs "127.0.0.1" performance | 1 | 0 | 1 | 5,148 |
25,199,709 | 2014-08-08T09:03:00.000 | 1 | 1 | 0 | 1 | python,unit-testing,firewall-access | 25,200,050 | 1 | false | 0 | 0 | Is it possible to have the script itself run through these steps? By this I mean have the setup phase of your unit tests probe for firewall, and if detected dynamically setup a proxy somehow, use it to run unit tests, then when done teardown proxy. That seems like it would achieve the transparency you're aiming for. | 1 | 0 | 0 | Here is the problem: I do have several python packages that do have unittest that do require access to different online services in order to run, like connecting to a postgresql database or a LDAP/AD server.
In many cases these are not going to execute successfully because the local network is firewalled, allowing only basic outgoing traffic on ports like 22, 80, 443, 8080 and 8443.
I know that the first thing coming into your mind is: build a VPN. This is not the solution I am looking for, and that's due to two important issues: it will affect other software running on the same machine, probably breaking it.
Another solution I had in mind was SSH port forwarding, which I successfully used, but this is very hard to configure and, worse, it requires me to re-configure the addresses the python script is trying to connect to, and I do not want to go this way.
I am looking for a solution that could work like this:
detect if there is a firewall preventing your access
setup the bypassing strategy (proxy?)
run the script
restore settings.
Is there a way to setup this in a transparent way, one that would not require me to make changes to the executed script/app ? | Transparent solution for bypassing local outgoing firewalls for python scripts | 0.197375 | 0 | 0 | 352 |
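A hedged sketch of the probe-and-proxy idea from the answer (the probe target, proxy address and reliance on HTTP_PROXY/HTTPS_PROXY are assumptions — not every client library honours those variables):
import os
import socket
import unittest

class ServiceTest(unittest.TestCase):
    def setUp(self):
        # probe the service; if it is unreachable, assume a firewall and set a proxy
        try:
            socket.create_connection(('db.example.com', 5432), timeout=2).close()
            self.proxied = False
        except (socket.error, socket.timeout):
            os.environ['HTTP_PROXY'] = 'http://proxy.example.com:8080'
            os.environ['HTTPS_PROXY'] = 'http://proxy.example.com:8080'
            self.proxied = True

    def tearDown(self):
        # restore the environment after the test run
        if self.proxied:
            os.environ.pop('HTTP_PROXY', None)
            os.environ.pop('HTTPS_PROXY', None)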
25,205,157 | 2014-08-08T13:54:00.000 | 0 | 0 | 0 | 0 | oracle,python-2.7,blob | 25,205,260 | 1 | false | 0 | 0 | If you have a pure BLOB in the database, as opposed to, say, an ORDImage that happens to be stored in a BLOB under the covers, the BLOB itself has no idea what sort of binary data it contains. Normally, when the table was designed, a column would be added that would store the data type and/or the file name. | 1 | 0 | 0 | This is not a question of a code, I need to extract some BLOB data from an Oracle database using python script. My question is what are the steps in dealing with BLOB data and how to read as images, videos and text? Since I have no access to the database itself, is it possible to know the type of BLOBs stored if it is pictures, videos or texts? Do I need encoding or decoding in order to tranfer these BLOBs into .jpg, .avi or .txt files ? These are very basic questions but I am new to programming so need some help to find a starting point :) | Reading BLOB data from Oracle database using python | 0 | 1 | 0 | 1,342 |
25,207,697 | 2014-08-08T16:09:00.000 | 3 | 0 | 0 | 0 | python,database,django,postgresql,web | 25,208,098 | 1 | true | 1 | 0 | Are you allowed to use paging in your output? If so, then i'd start by setting a page size of 100 (for example) and then use LIMIT 100 in my various SQL queries. Essentially, each time the user clicks next or prev on the web page a new query would be executed based on the current filtering or sorting options with the LIMIT. The SQL should be pretty easy to figure out. | 1 | 1 | 0 | I am still a noob in web app development and sorry if this question might seem obvious for you guys.
Currently I am developing a web application for my University using Python and Django. One feature of my web app is to retrieve a large set of data from a table in the database (PostgreSQL) and display it in tabular form on a web page. Each column of the table needs to have sorting and filtering. The data set goes up to roughly 2 million rows.
So I wonder if something like jpxGrid could help me to achieve such goal or it would be too slow to handle/sort/display/render such a large data set on web page. I plan to retrieve all the data inside the table once (only initiate one database query call) and pass it into jpxGrid, however, my colleague suggests that each sort and filter should initiate a separate query call to the database to achieve better performance(database order by is very fast). I tried to use another open source jquery library that handles the form and enables sorting, filtering and paging(non professional outdated one) at the beginning, which starts to lag after 5k data rows and becomes impossible to use after 20k rows.
My question is whether something like jpxGrid is a good solution to my problem or whether I should build my own system that lets the database handle the sorting and filtering (I probably need to add a paging feature too). Thank you very much for helping. | Handle and display large data set in web browser | 1.2 | 1 | 0 | 1,235
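A minimal sketch of the paging approach from the answer (model and field names are assumptions): let PostgreSQL do the ORDER BY and have the ORM add LIMIT/OFFSET via Django's Paginator.
from django.core.paginator import Paginator
from myapp.models import Record   # assumed model

def page_of_records(sort_field, page_number, page_size=100):
    qs = Record.objects.order_by(sort_field)            # becomes ORDER BY ... in PostgreSQL
    return Paginator(qs, page_size).page(page_number)   # adds LIMIT/OFFSET lazily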
25,209,753 | 2014-08-08T18:13:00.000 | 4 | 0 | 1 | 0 | python,pip,setuptools,easy-install | 25,209,985 | 2 | true | 0 | 0 | You probably want to make sure that you and your colleagues use the same dependencies during development.
I think I would try to use virtualenv for this. If you and your colleagues install it, it will give you a python environment for this project only, and dependencies for this project only.
So the steps would be:
Everybody installs virtualenv on their computers so they get an isolated environment to use for development of this project only.
One of you determines the current dependencies and installs them in your virtualenv.
You export a list of your used dependencies using this command:
(inside virtual environment) pip freeze > requirements.txt
You then share this text file with the others. They use this command to import the exact same packages and versions into their virtual environment:
(inside virtual environment) pip install -r requirements.txt
Just make sure that everybody enters their virtual environment before issuing these commands, otherwise the text file will contain their normal python environment installed packages. | 1 | 3 | 0 | I have a Python script, with several external dependencies, that I wish to distribute to colleagues. However, we will need to modify this script regularly so I don't want to install it per-se (i.e. copy to site-packages). From what I've seen setuptools seems to do this implicitly.
Is there a recommended approach to installing dependencies without installing the application/script itself? | Installing dependencies only - setuptools | 1.2 | 0 | 0 | 2,261 |
25,212,009 | 2014-08-08T20:48:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,heroku,bigdata | 25,887,408 | 4 | false | 1 | 0 | You might also consider using the PostGIS postgres extension which includes support for raster data types (basically large grids of numbers) and has many features to make use of them.
However, do not use the ORM in this case, you will want to do SQL directly on the server. The ORM will add a huge amount of overhead for large numerical datasets. It's also not very adapted to handling large matrices within python itself, for that you need numpy. | 1 | 21 | 0 | I am scoping out a project with large, mostly-uncompressible time series data, and wondering if Django + Postgres with raw SQL is the right call.
I have time series data that is ~2K objects/hour, every hour. This is about 2 million rows per year I store, and I would like to 1) be able to slice off data for analysis through a connection, 2) be able to do elementary overview work on the web, served by Django. I think the best idea is to use Django for the objects themselves, but drop to raw SQL to deal with the large time series data associated. I see this as a hybrid approach; that might be a red flag, but using the full ORM for a long series of data samples feels like overkill. Is there a better way? | Django + Postgres + Large Time Series | 0 | 1 | 0 | 10,343 |
25,212,475 | 2014-08-08T21:24:00.000 | 0 | 0 | 0 | 0 | python,django,celery | 25,213,065 | 1 | false | 0 | 0 | Celery is meant to do background jobs. That means it cannot directly interact to update your web page. To achieve something like what you described, you would need to store the progress data externally (like say into a database) and then query this data source in the webpage via AJAX. Yeah, it is a bit complicated. | 1 | 0 | 0 | I am looking for libraries that can help in displaying a progress message so a user knows how far along the task is. I found Celery, and I was wondering if it is possible to use Celery to display a progress message that updates on the page during the load. For example: "Loaded x amount of data" and enumerate x. | Using celery to update a progress message? | 0 | 0 | 1 | 61 |
25,217,223 | 2014-08-09T09:43:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine,vagrant | 26,824,688 | 1 | true | 1 | 0 | Finally found the answer!
In the latest version of google app engine, there is a new parameter you can pass to dev_appserver.py.
using dev_appserver.py --use_mtime_file_watcher=True works!
Although the change takes 1-2 seconds to detect, it still works!
The sync folder works and I can see all the files.
But, after I run the dev appserver, changes to python files by the host machine are not dynamically updated.
To see updates to my python files, I have to relaunch dev appserver in my Virtual Machine.
Also, I have grunt tasks that watch html/css files. These also do not sync properly when updated by editors outside the Virtual Machine.
I suspect that it's something to do with the way Vagrant syncs files changed on the host machine.
Has anyone found a solution to this problem? | Vagrant and Google App Engine are not syncing files | 1.2 | 0 | 0 | 298 |
25,219,136 | 2014-08-09T13:36:00.000 | 0 | 0 | 1 | 1 | python,windows-7,virtualenv,virtualenvwrapper | 25,244,240 | 1 | true | 0 | 0 | I found a solution; I just hope it helps someone like me.
Download mktemp.exe and put it into C:\msys\1.0\bin, then everything will be OK... | 1 | 1 | 0 | I am trying to set up virtualenvwrapper in Git Bash (Windows 7), but get an error message.
When I run this command: " $ source /c/Python27/Scripts/virtualenvwrapper.sh"
Then I get an error: sh.exe":mktemp:command not found ERROR: virtualenvwrapper could not create a temporary file name.
Somebody help me... | sh.exe":mktemp:command not found ERROR: virtualenvwrapper could not create a temporary file name | 1.2 | 0 | 0 | 946 |
25,219,326 | 2014-08-09T13:59:00.000 | 6 | 0 | 1 | 0 | python,concurrency | 25,220,520 | 1 | true | 0 | 0 | Python does not have those operations. Java has more sophisticated concurrency controls than Python does.
CPython (the typical implementation almost everyone uses) has a Global Interpreter Lock that you will want to understand.
Jython is a Python implementation on the JVM, and so shares many of the concurrency characteristics of Java. | 1 | 6 | 0 | Trying to find out whether Python supports CAS operations, lock-free programming, and concurrency like in Java? | Does python have compare and swap operations | 1.2 | 0 | 0 | 1,552
25,219,911 | 2014-08-09T15:08:00.000 | 0 | 0 | 0 | 0 | python,django | 46,187,663 | 3 | false | 1 | 0 | zypper install python-pip
pip install virtualenv
virtualenv name-env
source name-env/bin/activate
(name-env) pip install django==version
pip install django | 1 | 0 | 0 | I keep receiving the following error message when trying to install Python-Django on my OpenSuse Linux VM:
The installation has failed. For more information, see the log file at
/var/log/YaST2/y2log. Failure stage was: Adding Repositories
Not sure how to add additional Repositories when I am using the opensuse download center. Does anyone know how to resolve this error?
Thank you. | OpenSuse Python-Django Install Issue | 0 | 0 | 0 | 1,428 |
25,221,497 | 2014-08-09T17:58:00.000 | 0 | 0 | 0 | 0 | python,flask | 25,221,517 | 1 | true | 1 | 0 | If you use send_file() with a filename (not a file object), Flask will automatically set the Content-Length header for you.
send_from_directory() uses send_file() with a filename, so you are set there.
Do make sure you are using Flask 0.10 or newer; the header code was added in that version. | 1 | 2 | 0 | I am writing a small Python Flask application which allows people to download files from my server. These files will be served by Python, not a web server like Nginx or Apache.
I tried to use send_from_directory() and send_file() and I can download my file, but there was no file size because the Content-Length field was missing from the header.
How can I add the header when I use send_from_directory() or send_file()? Or is there any better way to do this?
Thank you. | Serving static files in Flask with Content-Length in HTTP response header | 1.2 | 0 | 0 | 1,698 |
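A minimal sketch based on the answer above (the directory is an assumption): passing a filename, rather than a file object, lets Flask fill in Content-Length itself.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/files/<path:filename>')
def download(filename):
    # a filename (not a file object) lets Flask set Content-Length
    return send_from_directory('/srv/downloads', filename, as_attachment=True)

if __name__ == '__main__':
    app.run()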
25,222,515 | 2014-08-09T20:01:00.000 | 1 | 0 | 0 | 0 | python,html,database | 25,222,611 | 3 | false | 1 | 0 | If I were creating that type of application, then
I would have some common queries like get by current date, current time, date ranges, time ranges, and others, based on my application, for the user to select easily.
Some autocompletion for common keywords.
If the data changes frequently there is no use saving the HTML; generating a new one is a good option | 1 | 0 | 0 | I'm working on a project that allows users to enter SQL queries with parameters; that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and they then get the results back to their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | Creating an archive - Save results or request them every time? | 0.066568 | 1 | 0 | 35 |
25,222,515 | 2014-08-09T20:01:00.000 | 1 | 0 | 0 | 0 | python,html,database | 25,222,656 | 3 | true | 1 | 0 | Specifically regarding retrieving the results from queries that have been run previously I would suggest saving the results to be able to view later rather than running the queries again and again. The main benefits of this approach are:
You save unnecessary computational work re-running the same queries;
You guarantee that the result set will be the same as the original report. For example if you save just the SQL then the records queried may have changed since the query was last run or records may have been added / deleted.
The disadvantage of this approach is that it will probably use more disk space, but this is unlikely to be an issue unless you have queries returning millions of rows (in which case html is probably not such a good idea anyway). | 3 | 0 | 0 | I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | Creating an archive - Save results or request them every time? | 1.2 | 1 | 0 | 35 |
25,222,515 | 2014-08-09T20:01:00.000 | 1 | 0 | 0 | 0 | python,html,database | 25,222,678 | 3 | false | 1 | 0 | The crucial difference is that if data changes, new query will return different result than what was saved some time ago, so you have to decide if the user should get the up to date data or a snapshot of what the data used to be.
If relevant data does not change, it's a matter of whether the queries will be expensive, how many users will run them and how often, then you may decide to save them instead of re-running queries, to improve performance. | 3 | 0 | 0 | I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | Creating an archive - Save results or request them every time? | 0.066568 | 1 | 0 | 35 |
25,226,153 | 2014-08-10T06:27:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine | 25,226,221 | 1 | false | 1 | 0 | If you call .put() on an entity that you've previously retrieved from the datastore, it will update the existing entity. (Make sure you're not specifying a new key for the entity.) | 1 | 1 | 0 | In appengine documentation, it says that the put() method replaces the previous entity. But when I do so it always adds a new entity to the datastore. How do I update an entity? | Python-How to update an entity in apeengine? | 0.664037 | 0 | 0 | 40 |
25,228,637 | 2014-08-10T12:31:00.000 | 1 | 1 | 0 | 0 | python,curl,pycurl | 25,228,720 | 2 | false | 0 | 0 | Having multiple easy interfaces running concurrently in the same thread means building your own reactor and driving curl at a lower level. That's painful in C, and just as painful in Python, which is why libcurl offers, and recommends, multi.
But that "in the same thread" is key here. You can also create a pool of threads and throw the easy instances into that. In C, that can still be painful; in Python, it's dead simple. In fact, the first example in the docs for using a concurrent.futures.ThreadPoolExecutor does something similar, but actually more complicated than you need here, and it's still just a few lines of code.
If you're comparing multi vs. easy with a manual reactor, the simplicity is the main benefit. In C, you could easily implement a more efficient reactor than the one libcurl uses; in Python, that may or may not be true. But in either language, the performance cost of switching among a handful of network requests is going to be so tiny compared to everything else you're doing—especially waiting for those network requests—that it's unlikely to ever matter.
If you're comparing multi vs. easy with a thread pool, then a reactor can definitely outperform threads (except on platforms where you can tie a thread pool to a proactor, as with Windows I/O completion ports), especially for huge numbers of concurrent connections. Also, each thread needs its own stack, which typically means about 1MB of memory pages allocated (although not all of them used), which can be a serious problem in 32-bit land for huge numbers of connections. That's why very few serious servers use threads for connections. But in a client making a handful of connections, none of this matters; again, the costs incurred by wasting 8 threads vs. using a reactor will be so small compared to the real costs of your program that they won't matter. | 1 | 1 | 0 | I am making something which involves pycurl since pycurl depends on libcurl, I was reading through its documentation and came across this Multi interface where you could perform several transfers using a single multi object. I was wondering if this is faster/more memory efficient than having miltiple easy interfaces ? I was wondering what is the advantage with this approach since the site barely says,
"Enable multiple simultaneous transfers in the same thread without making it complicated for the application." | Is the Multi interface in curl faster or more efficient than using multiple easy interfaces? | 0.099668 | 0 | 0 | 1,906 |
25,229,703 | 2014-08-10T14:43:00.000 | 4 | 1 | 0 | 1 | python,file-io,abort | 25,229,783 | 1 | true | 0 | 0 | Probably not. It will release the file handle when the script stops running. Also you typically only have to worry about corrupting a file when you kill a script that is writing to the file, in case it is interrupted mid-write. | 1 | 2 | 0 | If I run a python script (with Linux) which reads in a file (e.g.: with open(inputfile) as infi:): Will the file be in danger when I abort the script by pressing Ctrl C? | Will aborting a Python script corrupt file which is open for read? | 1.2 | 0 | 0 | 662 |
25,233,396 | 2014-08-10T21:57:00.000 | 1 | 0 | 0 | 1 | python,audio | 25,243,720 | 1 | true | 1 | 0 | I finally got an answer/workaround.
I am using Python's multiprocessing functionality to run multiple pygame instances.
So I can play more than one music file at a time with full control over play mode and playback position. | 1 | 0 | 0 | I am searching for a way/framework to play at least 3 music files at once in a python application. It should run at least under ubuntu and mac as well as on the raspberry pi.
I need per channel/music file/"deck" that is played:
Control of the Volume of the Playback
Control of the start position of the playback
Play and Pause/Resume functionality
Should support mp3 (Not a must but would be great!)
Built-in repeat functionality would be great; really great would be a fade to the next iteration when the song is over.
If I can also play at least two video files with audio over the same framework, this would be great, but is not a must.
Does anyone have an idea? I already tried pygame, but its player can play just one music file at a time or has no control over the playback position.
I need that for a theatre where a background sound should be played (and started simultaneously with the light) and, when it is over, the next file fades over. While that is happening there are some effects (e.g. a bell) on a third audio layer. | Playback of at least 3 music files at one in python | 1.2 | 0 | 0 | 57
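A hedged sketch of the multiprocessing workaround from the accepted answer: each process gets its own pygame mixer, so each "deck" has independent volume and repeat (start-position support depends on the audio format). File paths are placeholders.
import time
from multiprocessing import Process

def deck(path, volume=1.0, loops=0):
    import pygame
    pygame.mixer.init()
    pygame.mixer.music.load(path)
    pygame.mixer.music.set_volume(volume)
    pygame.mixer.music.play(loops=loops)     # loops=-1 repeats forever
    while pygame.mixer.music.get_busy():
        time.sleep(0.1)

if __name__ == '__main__':
    decks = [Process(target=deck, args=('background.mp3', 0.6, -1)),
             Process(target=deck, args=('bell.ogg', 1.0))]
    for p in decks:
        p.start()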
25,233,539 | 2014-08-10T22:16:00.000 | 1 | 0 | 0 | 0 | python | 25,233,675 | 2 | false | 0 | 0 | Another possibility is to represent the last axis of 20 bits as a single 32 bit integer. This way a 5000x5000 array would suffice. | 1 | 0 | 1 | I'm working with a large 3 dimensional array of data that is binary, each value is one of two possible values. I currently have this data stored in the numpy array as int32 objects that are either 1 or 0.
It works fine for small arrays, but eventually I will need to make the array 5000x5000x20, which I can't even get close to without getting "Memory Error".
Does anyone have any suggestions for a better way to do this? I am really hoping that I can keep it all together in one data structure because I will need to access slices of it along all three axes. | Large Array of binary data | 0.099668 | 0 | 0 | 119 |
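A sketch of the packing idea from the answer above: fold the 20 binary values of the last axis into one uint32 per cell, cutting memory roughly twentyfold.
import numpy as np

def pack(bits):                       # bits: 0/1 array of shape (N, M, 20)
    packed = np.zeros(bits.shape[:2], dtype=np.uint32)
    for i in range(bits.shape[2]):
        packed |= bits[:, :, i].astype(np.uint32) << np.uint32(i)
    return packed

def get_bit(packed, i):               # recover layer i as a 0/1 array
    return (packed >> np.uint32(i)) & np.uint32(1)

data = np.random.randint(0, 2, size=(4, 4, 20))
assert np.array_equal(get_bit(pack(data), 3), data[:, :, 3])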
25,234,336 | 2014-08-11T00:30:00.000 | 10 | 0 | 0 | 1 | python,amazon-web-services,tcp,amazon-ec2,firewall | 25,234,530 | 1 | false | 0 | 0 | Your TCP_IP is only listening locally because you set your listening IP to 127.0.0.1.
Set TCP_IP = "0.0.0.0" and it will listen on "all" interfaces including your externally-facing IP. | 1 | 5 | 0 | I am trying to run a simple Python TCP server on my EC2, listening on port 6666. I've created an inbound TCP firewall rule to open port 6666, and there are no restrictions on outgoing ports.
I cannot connect to my instance from the outside world however, testing with telnet or netcat can never make the connection. Things do work if I make a connection from localhost however.
Any ideas as to what could be wrong?
#!/usr/bin/env python
import socket
TCP_IP = '127.0.0.1'
TCP_PORT = 6666
BUFFER_SIZE = 20 # Normally 1024, but we want fast response
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connection address:', addr
while 1:
data = conn.recv(BUFFER_SIZE)
if not data: break
print "received data:", data
conn.send(data) # echo
conn.close() | Simple Python TCP server not working on Amazon EC2 instance | 1 | 0 | 1 | 3,240 |
25,235,040 | 2014-08-11T02:17:00.000 | -1 | 1 | 1 | 0 | python,c,assembly,bit | 25,236,304 | 2 | false | 0 | 0 | You can store the bits from the 9th onwards in another variable. Then make those bits 0 in EAX. Then do EAX << 7 and add those bits to it again. | 1 | 0 | 0 | For example: EAX = 10101010 00001110 11001010 00100000
I want to shift EAX's high 8 bits (AH) right 7 times; what can I do in C or in Python?
In asm : SHR ah,7
The result of EAX is:10101010 00001110 00000001 00100000
And how about SHR ax,7?
I have tried ((EAX & 0xff00) >> 8) >> 7, but I don't know how to add it back to EAX. | how to move a number's high 8 bits 7 times in c or python? | -0.099668 | 0 | 0 | 384
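For reference, a worked Python equivalent of SHR ah,7 on the example value (plain integers, no libraries): clear bits 8-15, then write back the shifted byte.
eax = 0b10101010000011101100101000100000
ah = (eax >> 8) & 0xFF                      # the high byte of AX
eax = (eax & ~0xFF00) | ((ah >> 7) << 8)    # clear bits 8-15, write back the shifted byte
print(bin(eax))                             # 0b10101010000011100000000100100000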
25,238,028 | 2014-08-11T07:36:00.000 | 0 | 0 | 0 | 0 | python,scipy,cluster-analysis,hierarchical-clustering | 25,242,915 | 1 | true | 0 | 0 | The solution was that scipy has its own built-in function to turn a linkage matrix into a binary tree: scipy.cluster.hierarchy.to_tree(matrix) | 1 | 0 | 1 | Recently I was visualizing my datasets using the python modules scikit and scipy hierarchical clustering and dendrogram. The dendrogram method draws a graph for me, and now I need to export this tree as a graph in my code. I am wondering whether there is any way to get this data. Any help would be really appreciated. Thanks. | Python hierarchical clustering visualization dump [scipy] | 1.2 | 0 | 0 | 682
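A minimal example of the to_tree call from the answer above (random data stands in for the real dataset):
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

X = np.random.rand(10, 3)
Z = linkage(X, method='ward')    # the linkage matrix used for the dendrogram
root = to_tree(Z)                # root ClusterNode of the binary tree
print(root.get_count(), root.get_left(), root.get_right())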
25,238,425 | 2014-08-11T08:03:00.000 | 5 | 0 | 0 | 0 | python,django,oauth,django-rest-framework,django-oauth | 47,976,325 | 2 | false | 1 | 0 | As the previous answer suggested, you should extend AbstractUser from django.contrib.auth.models.
The problem with the access token that the OP is referring to occurs when changing the AUTH_USER_MODEL setting AFTER django-oauth2-provider was migrated.
When django-oauth2-provider is migrated, it creates a foreign key constraint between the User model and django-oauth2-provider.
The solution is very easy:
Create your new User model and change the AUTH_USER_MODEL setting.
Go to the django_migration table in your database.
Delete all rows of django-oauth2-provider.
run python manage.py makemigrations
run python manage.py migrate
Now, the django-oauth2-provider tables are connected to the RIGHT User model. | 1 | 3 | 0 | I am really stuck in my project right now. I am trying to implement Oauth2 for my app. I found out about django-oauth2-provider a lot and tried it. The only problem is, it uses the User model at django.contrib.auth. The main users of our site are saved in a custom model called User which does not inherit from or extend the model at django.contrib.auth.
Is there any way to use my custom User model for creating clients and token?
If django-oauth2-provider can't be used for this purpose, can anyone recommend me some oauth2 library with the option to implement oauth2 with my own model.
Sincerely,
Sushant Karki | django-oauth2-provider with custom user model? | 0.462117 | 0 | 0 | 2,847 |
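A minimal sketch of the AbstractUser approach from the answer above (app and field names are assumptions):
# myapp/models.py
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    phone = models.CharField(max_length=20, blank=True)

# settings.py
AUTH_USER_MODEL = 'myapp.User'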
25,239,361 | 2014-08-11T08:57:00.000 | 2 | 0 | 0 | 0 | python,mysql | 25,239,591 | 2 | true | 0 | 0 | You can just save the base64 string in a TEXT column type. After retrieval just decode this string with base64.decodestring(data) ! | 2 | 2 | 0 | I've done some research, and I don't fully understand what I found.
My aim is to, using a udp listener I wrote in python, store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want that data to be put into a table, and store it as a string in base64. In terms of storing something as an integer by using the 'columnname' INT format, I would like to know how to store the information as base64? ie. 'data' TEXT(base64)? or something similar.
Could I do this by simply using the TEXT datatype, and encode the data in the python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | How to store base64 information in a MySQL table? | 1.2 | 1 | 0 | 5,730 |
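A hedged sketch tying the pieces together in Python 2 style, matching the base64.decodestring call in the accepted answer (table, column and connection details are assumptions):
import base64
import MySQLdb

raw_udp_bytes = '\x01\x02\xff'                     # example payload from the UDP listener
encoded = base64.encodestring(raw_udp_bytes)

db = MySQLdb.connect(host='localhost', user='ed', passwd='secret', db='telemetry')
cur = db.cursor()
cur.execute("INSERT INTO packets (data) VALUES (%s)", (encoded,))   # `data` is a TEXT column
db.commit()

cur.execute("SELECT data FROM packets ORDER BY id DESC LIMIT 1")
original = base64.decodestring(cur.fetchone()[0])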
25,239,361 | 2014-08-11T08:57:00.000 | 0 | 0 | 0 | 0 | python,mysql | 62,777,767 | 2 | false | 0 | 0 | You can store a base64 string in a TEXT column type, but in my experience I recommend using the LONGTEXT type to avoid truncation errors with big base64 texts. | 2 | 2 | 0 | I've done some research, and I don't fully understand what I found.
My aim is to, using a udp listener I wrote in python, store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want that data to be put into a table, and store it as a string in base64. In terms of storing something as an integer by using the 'columnname' INT format, I would like to know how to store the information as base64? ie. 'data' TEXT(base64)? or something similar.
Could I do this by simply using the TEXT datatype, and encode the data in the python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | How to store base64 information in a MySQL table? | 0 | 1 | 0 | 5,730 |
25,245,710 | 2014-08-11T14:28:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto | 25,245,827 | 1 | true | 1 | 0 | After some research in the boto docs, it looks like using the prefix parameter in the lifecycle add_rule method allows you to do this. | 1 | 0 | 0 | Using Boto, you can create an S3 bucket and configure a lifecycle for it; say expire keys after 5 days. I would like to not have a default lifecycle for my bucket, but instead set a lifecycle depending on the path within the bucket. For instance, having path /a/ keys expire in 5 days, and path /b/ keys to never expire.
Is there a way to do this using Boto? Or is expiration tied to buckets and there is no alternative?
Thank you | Setting a lifecycle for a path within a bucket | 1.2 | 1 | 1 | 41 |
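A short illustration of the prefix-based rule mentioned in the answer above, using the classic boto 2.x lifecycle API; the bucket name and rule id are made up:
import boto
from boto.s3.lifecycle import Lifecycle, Expiration

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')   # hypothetical bucket name

lifecycle = Lifecycle()
# Only keys under the a/ prefix expire after 5 days; keys under b/ get no rule, so they never expire.
lifecycle.add_rule('expire-a', 'a/', 'Enabled', Expiration(days=5))
bucket.configure_lifecycle(lifecycle)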
25,247,910 | 2014-08-11T16:22:00.000 | 3 | 0 | 1 | 0 | python,list,python-2.7 | 25,247,976 | 2 | true | 0 | 0 | Provided there are no other references to l, then memory will be released yes.
What happens is:
l[25:100] creates a new list object with references to the values from l indices 25 through to 100 (exclusive).
l is rebound to now refer the new list object.
The reference count for the old list object formerly bound by l is decremented by 1.
If the reference count dropped to 0, the old list object formerly bound by l is deleted and memory is freed. | 2 | 0 | 0 | My python list named l contains 100 items.
If I do,
l = l[25:100]
Will this release the memory for the first 25 items?
If not, how can this be achieved? | Will slicing a list release memory? | 1.2 | 0 | 0 | 77 |
25,247,910 | 2014-08-11T16:22:00.000 | 0 | 0 | 1 | 0 | python,list,python-2.7 | 25,249,574 | 2 | false | 0 | 0 | There is also an option that doesn't allocate a new list: del l[:25] will remove the first 25 entries from the existing list, shifting the following ones down. Any other references to l will still point to it, but it is altered into a shorter list.
Note that reference semantics also apply to each of the items; they may have references from elsewhere than the list. It is also not guaranteed that they will be freed right away. | 2 | 0 | 0 | My python list named l contains 100 items.
If I do,
l = l[25:100]
Will this release the memory for the first 25 items?
If not, how can this be achieved? | Will slicing a list release memory? | 0 | 0 | 0 | 77 |
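A tiny illustration of the two approaches described in the answers above:
l = list(range(100))
l = l[25:100]       # rebinds l to a new list; the old one is freed once nothing references it

l = list(range(100))
other = l
del l[:25]          # shrinks the existing list in place
print(len(other))   # 75 -- 'other' still points at the same, now shorter, list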
25,250,164 | 2014-08-11T18:38:00.000 | 2 | 1 | 1 | 0 | python | 25,250,281 | 1 | true | 0 | 0 | Coderbyte only has the standard 2.7.2 python. There is not a way to import a package they do not have setup for you to use in their environment. | 1 | 0 | 0 | I have been trying to import modules on my python Coderbyte challenges, but to no avail. I noticed that the C++ challenges allow includes, so I've been relying on , for the C++ challenges.
My question is, is there a way to successfully use other modules for challenges written in Python on Coderbyte? | CoderByte Python import statements | 1.2 | 0 | 0 | 552 |
25,250,998 | 2014-08-11T19:35:00.000 | 2 | 0 | 1 | 0 | python,multiple-instances,spyder | 25,689,444 | 4 | false | 0 | 0 | Although clicking on the Spyder icon will not allow you to open two instances, you can open a second instance by simply going to the folder where spyder.py is and running spyder.py from the command line.
Further, you could make an icon for your desktop that simply runs spyder.py from its location. However, I don't know how multiple instances will affect user preferences if Spyder has those. | 2 | 85 | 0 | I want to be able to have two instances which are completely independent in the sense that I can be working on two separate unrelated projects in different folders without any interference. | How do I run two separate instances of Spyder | 0.099668 | 0 | 0 | 66,493 |
25,250,998 | 2014-08-11T19:35:00.000 | 159 | 0 | 1 | 0 | python,multiple-instances,spyder | 25,956,248 | 4 | true | 0 | 0 | (Spyder maintainer here) This is easy. You need to go to:
Tools > Preferences > Application
in Spyder 5, or
Tools > Preferences > General
in Spyder 4, click the "Advanced Settings" tab, and deactivate the option called
[ ] Use a single instance
Then every time you start Spyder a new window will be opened. If you want the old behavior back, just activate that option again. | 2 | 85 | 0 | I want to be able to have two instances which are completely independent in the sense that I can be working on two separate unrelated projects in different folders without any interference. | How do I run two separate instances of Spyder | 1.2 | 0 | 0 | 66,493 |
25,252,413 | 2014-08-11T21:05:00.000 | 0 | 0 | 1 | 1 | python,windows,cmd | 25,253,872 | 1 | false | 0 | 0 | Create a shortcut to cmd.exe.
Get properties on that shortcut, and there's an entry for the current working directory. (They change the name of that field every other version of Windows, but I believe most recently it's called "Start In:".) Just set that to C:\Users\Name\FolderLocation\ProjectFolder.
Now, when you double-click that shortcut, it'll open a cmd.exe window, in your project directory. | 1 | 0 | 0 | To run Python files (.py) with Windows CMD.exe, I SHIFT + Right Click on my project folder which contains all of my Python code files. Doing this shows a menu containing the option Open command window here, which I click to open CMD.exe with the prompt C:\Users\Name\FolderLocation\ProjectFolder>. I then type the python command, and the file I want to run in my project folder (python MyFile.py) which runs the file, of course.
What I would like to know is if there is a way I can setup a shortcut to open CMD.exe with my project folder opened/ being accessed so then all I have to do is type python and the file name? Thanks | Making a Shortcut for Running Python Files in a Folder with CMD.exe | 0 | 0 | 0 | 5,375 |
25,255,032 | 2014-08-12T01:57:00.000 | 1 | 0 | 1 | 0 | python | 25,255,208 | 1 | true | 0 | 0 | When you import a module for the first time, it gets executed. So, you need to encapsulate all your execution flow inside functions that will be called from the main program.
In fact, in plot10i.py your main is so trivial that it is almost useless: it just prints hello.
You don't need to use if __name__ == '__main__' if you don't have anything to put in it. But even if you don't, when your project is small you can use that block to include some tests. For example, I would add one for things like datetime.strptime(x, '%d/%m/%Y %H:%M') because they are easy to mess up. | 1 | 0 | 0 | Using Python 2.7 on Linux
I have two files: one is called plot10.py and the other is called plot10i.py, which has def main() in it.
with plot10.py being my main file, this is my code: | Import File In Python to another File | 1.2 | 0 | 0 | 59 |
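A minimal sketch of the structure the answer above describes, using the file names from the question (the print is just a stand-in for the real plotting code):
# plot10i.py
def main():
    print('hello')

if __name__ == '__main__':
    main()            # runs only when plot10i.py is executed directly

# plot10.py
import plot10i        # importing no longer runs main() as a side effect
plot10i.main()        # call it explicitly when you want it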
25,255,414 | 2014-08-12T02:49:00.000 | 0 | 0 | 0 | 0 | python-2.7,facebook-graph-api,selenium-webdriver,phantomjs,facebook-sdk-3.0 | 25,302,804 | 1 | false | 1 | 0 | We faced the same issue. We resolved it by closing the browser automatically after a particular time interval, clearing the temporary cache, opening a new browser instance, and continuing the process. | 1 | 3 | 0 | I am doing a data extraction project where I am required to build a web scraping program, written in Python using Selenium and the PhantomJS headless WebKit browser, to scrape public information such as friend lists on Facebook. The program starts fairly fast, but after a day of running it gets slower and slower and I cannot figure out why. Can anyone give me an idea why it is getting slower? I am running on a local machine with pretty good specs: 4 GB RAM and a quad-core processor. Does FB provide any API to find friends of friends? | Web Crawler gets slower with time | 0 | 0 | 1 | 204
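A rough sketch of the periodic-restart idea from the answer above, assuming an old Selenium release that still ships the PhantomJS driver; the URL list and the restart interval are placeholders:
from selenium import webdriver

urls_to_scrape = ['http://example.com/page%d' % i for i in range(1000)]  # placeholder list

driver = webdriver.PhantomJS()
for i, url in enumerate(urls_to_scrape):
    driver.get(url)
    # ... extract whatever you need from driver.page_source here ...
    if i and i % 200 == 0:          # recycle the browser every 200 pages (arbitrary interval)
        driver.quit()               # frees the browser's accumulated memory and cache
        driver = webdriver.PhantomJS()
driver.quit()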
25,255,554 | 2014-08-12T03:09:00.000 | 1 | 0 | 1 | 0 | python | 25,255,614 | 2 | false | 0 | 0 | This should work for trivial usage, but beware if both instances of your program are, for example, writing to the same file, or acting on a database of some kind. You may get unexpected results. | 1 | 2 | 0 | Let's say I input "A" to the Python program and run it. This would take some time. Rather than waiting and doing nothing, I change the input to "B" in the source code and run another instance of the program. The two instances will output some results when they're done. Would this work or would this mess up some stuff? | Python: Running a code, change some input parameters, and run another instance | 0.099668 | 0 | 0 | 344 |
25,257,031 | 2014-08-12T05:51:00.000 | 0 | 0 | 1 | 0 | python,enigma2 | 38,699,309 | 2 | false | 0 | 0 | 1- Telnet to your STB and enter "enigma2". Your STB will do a Restart GUI, and then you will be able to see debug messages
2- Run your plugin and check what you need | 1 | 0 | 0 | I am new to the Enigma Dreambox. I am making a simple login plugin and want to see the output of print statements so I can trace my plugin. Where can I find this print output, or is there another way to trace the program? | How to see print statement output while coding in enigma os | 0 | 0 | 0 | 360
25,261,552 | 2014-08-12T10:04:00.000 | 1 | 0 | 0 | 0 | python,sockets,tcp,udp,multiplayer | 25,261,988 | 1 | true | 0 | 0 | I was thinking I could create a new tcp/udp socket on a new port and let the matched(matched in the search room) players connect to that socket and then have a perfect isolated room for the two to interact with each another.
Yes, you can do that. And there shouldn't be anything hard about it at all. You just bind a socket on the first available port, pass that port to the two players, and wait for them to connect. If you're worried about hackers swooping in by, e.g., portscanning for new ports opening up, there are ways to deal with that, but given that you're not attempting any cheap protection I doubt it's an issue.
Or do I need a new server (machine/hardware) and then create a new socket on that one and let the peered players connect to it.
Why would you need that? What could it do for you? Sure, it could take some load off the first server, but there are plenty of ways you could load-balance if that's the issue; doing it asymmetrically like this tends to lead to one server at 100% while the other one's at 5%…
Or maybe there is another way of doing this.
One obvious way is to not do anything. Just let them keep talking to the same port they're already talking to, just attach a different handler (or a different state in the client state machine, or whatever; you haven't given us any clue how you're implementing your server). I don't know what you think you're getting out of "perfect isolation". But even if you want it to be a different process, you can just migrate the two client sockets over to the other process; there's no reason to make them connect to a new port.
Another way to do it is to get the server out of the way entirely—STUN or hole-punch them together and let them P2P the game protocol.
Anything in between those two extremes doesn't seem worth doing, unless you have some constraints you haven't explained.
OBS. I am not going to have the game running on the server to deal with cheaters for now, because this would be too much load for the server CPU on my setup.
I'm guessing that if putting even minimal game logic on the server for cheat protection is too expensive, spinning off a separate process for every pair of clients may also be too expensive. Another reason not to do it. | 1 | 1 | 0 | I am trying to create a multiplayer game for iPhone (cocos2d), which is almost finished, but the multiplayer part is left. I have searched the web for two days now and can't find anything that answers my question.
I have created a search room (TCP socket on port 2000) that matches players who are searching for a quick match to play. After two players have been matched, the server disconnects them from the search room to leave space for incoming searchers (clients/players).
Now I'm wondering how to create the play room (where the two players interact and play)?
I was thinking I could create a new TCP/UDP socket on a new port and let the matched (matched in the search room) players connect to that socket, and then have a perfectly isolated room for the two to interact with each other.
Or do I need a new server (machine/hardware) and then create a new socket on that one and let the peered players connect to it?
Or maybe there is another way of doing this.
OBS. I am not going to have the game running on the server to deal with cheaters for now, because this would be too much load for the server CPU on my setup. | Multi multiplayer server sockets in python | 1.2 | 0 | 1 | 902
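A bare-bones sketch of the "bind on the first available port" idea from the answer above (no error handling, and the actual game loop is omitted):
import socket

room = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
room.bind(('', 0))                  # port 0 asks the OS for any free port
room.listen(2)                      # the room only ever holds two players
room_port = room.getsockname()[1]   # send this port back to both matched clients

players = []
while len(players) < 2:
    conn, addr = room.accept()
    players.append(conn)
# ... relay game messages between players[0] and players[1] here ...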
25,261,647 | 2014-08-12T10:08:00.000 | 3 | 0 | 1 | 0 | python,aes,python-3.4 | 25,435,955 | 6 | false | 0 | 0 | To add to @enrico.bacis' answer: AES is not implemented in the standard library. It is implemented in the PyCrypto library, which is stable and well tested. If you need AES, add PyCrypto as a dependency of your code.
While the AES primitives are, in theory, simple enough that you could write an implementation of them in pure Python, it is strongly recommended that you not do so. This is the first rule of crypto: don't implement it yourself. In particular, if you're just rolling your own crypto library, you'll almost certainly leave yourself open to some kind of side-channel attack. | 1 | 20 | 0 | Is it possible to encrypt/decrypt data with AES without installing extra modules? I need to send/receive data from C#, which is encrypted with the System.Security.Cryptography reference.
UPDATE
I have tried to use PyAES, but that is too old. I updated some things to make that work, but it didn't.
I also can't install it, because its latest version is 3.3 while my version is 3.4. | Python AES encryption without extra module | 0.099668 | 0 | 0 | 48,069
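A hedged sketch of CBC-mode AES with the PyCrypto package recommended above; the key is a throwaway example, the padding is hand-rolled PKCS#7, and the mode/key/IV handling must of course match whatever the C# side uses:
from Crypto.Cipher import AES      # provided by the PyCrypto package
from Crypto import Random

key = b'0123456789abcdef'          # 16-byte example key -- use a real secret
iv = Random.new().read(AES.block_size)

def pad(data):                     # PKCS#7-style padding to a 16-byte boundary
    n = AES.block_size - len(data) % AES.block_size
    return data + bytes(bytearray([n])) * n

ciphertext = iv + AES.new(key, AES.MODE_CBC, iv).encrypt(pad(b'secret message'))

decipher = AES.new(key, AES.MODE_CBC, ciphertext[:AES.block_size])
plain = decipher.decrypt(ciphertext[AES.block_size:])
plain = plain[:-ord(plain[-1:])]   # strip the padding again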
25,265,110 | 2014-08-12T13:07:00.000 | 0 | 0 | 0 | 1 | python-3.x,google-cloud-platform,google-cloud-storage | 57,971,532 | 1 | false | 0 | 0 | Although this is quite an old topic, I will try to provide an answer, especially for people who might stumble on this in the course of their own work. I have experience using more recent versions of gcsfs and it works quite well. You can find the latest documentation at https://gcsfs.readthedocs.io/en/latest. To make it work you need to have the environment variable:
GOOGLE_APPLICATION_CREDENTIALS=SERVICE_ACCOUNT_KEY.json. | 1 | 1 | 0 | When mounting and writing files in Google Cloud Storage using gcsfs, gcsfs creates the folders and files but does not write the file contents. Most of the time it shows an input/output error. It occurs even when we copy files from a local directory to the mounted gcsfs directory.
gcsfs version 0.15 | gcsfs is not writing files in the google bucket | 0 | 1 | 0 | 554 |
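As an alternative to the FUSE mount, writing through gcsfs's own Python API tends to be more reliable; a small sketch, assuming GOOGLE_APPLICATION_CREDENTIALS is set as described above and using made-up project and bucket names:
import gcsfs

fs = gcsfs.GCSFileSystem(project='my-project')           # hypothetical project id

with fs.open('my-bucket/results/output.txt', 'w') as f:  # hypothetical bucket/path
    f.write('hello from gcsfs')

print(fs.ls('my-bucket/results'))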
25,265,148 | 2014-08-12T13:08:00.000 | 0 | 1 | 0 | 0 | python,ssh | 25,265,180 | 3 | false | 0 | 0 | Have you considered Dropbox or SVN ? | 2 | 0 | 0 | I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer?
Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course) | update file on ssh and link to local computer | 0 | 0 | 1 | 2,518 |
25,265,148 | 2014-08-12T13:08:00.000 | 0 | 1 | 0 | 0 | python,ssh | 25,265,274 | 3 | false | 0 | 0 | I don't know your local computer OS, but if it is Linux or OSX, you can consider LFTP. This is an FTP client which supports SFTP://. This client has the "mirror" functionality. With a single command you can mirror your local files against a server.
Note: what you need is a reverse mirror. in LFTP this is mirror -r | 2 | 0 | 0 | I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer?
Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course) | update file on ssh and link to local computer | 0 | 0 | 1 | 2,518 |
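Besides the LFTP/Dropbox/SVN suggestions above, a pure-Python option (not mentioned in the answers) is to push just the edited file over SFTP with paramiko; the host, credentials, and paths below are placeholders:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('example.com', username='me', password='secret')       # placeholder credentials

sftp = ssh.open_sftp()
sftp.put('local/edited_file.py', '/var/www/app/edited_file.py')    # local path -> remote path
sftp.close()
ssh.close()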
25,269,845 | 2014-08-12T16:52:00.000 | 4 | 0 | 1 | 0 | python,kivy,setup-deployment | 61,918,928 | 3 | true | 0 | 1 | You need a web server and a database to get this working.
Create a licenses table in your database.
Each time a new client pays for your software or asks for a trial, you generate a new long random license, insert it in the licenses table, associate it to the client's email address and send it to the client via email.
Each time a client tries to install the software on their computers, you ask for a license and contact the webserver to ensure that the license exists and is still valid.
Using that, people can still just create multiple emails and thus potentially get an infinite amount of trial versions.
You can then try to add a file somewhere in the person's computer, a place where nobody would ever look for, and just paste the old license there so that when the app starts again (even from a new installation), it can read the license from there and contact the webserver without asking for a license. With this method, when your app contacts the server with an expired trial license, your server can reply with a "license expired" signal to let your app know that it has to ask for a non-trial license now, and the server should only accept non-trial licenses coming from that app from now on. This whole method breaks if your clients realize that your app is taking this information from a local file because they can just delete it when found.
Another idea that comes to mind is to associate the MAC address of a laptop (or any other unique identifier you can think of) to one license instead of an email address, either at license-creation time (the client would need to send you his MAC address when asking for a trial) or at installation time (your app can check for the MAC address of the laptop it's running on). | 2 | 7 | 0 | I have made a desktop application in kivy and able to make single executable(.app) with pyinstaller. Now I wanted to give it to customers with the trial period of 10 days or so.
The problem is how to make a trial version which stops working 10 days after installation, and which should not work even if the user uninstalls and reinstalls it after the trial period is over.
Giving partial features in the trial version is not an option.
Environment
Mac OS and Python 2.7 with Kivy | How to make trial period for my python application? | 1.2 | 0 | 0 | 2,478 |
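A rough client-side sketch of the license check described in the answer above; the endpoint URL and the JSON shape are invented for illustration, and uuid.getnode() is just one cheap way to get a machine identifier:
import uuid
import requests   # any HTTP client would do

LICENSE_SERVER = 'https://example.com/api/check_license'   # hypothetical endpoint

def license_is_valid(license_key):
    try:
        resp = requests.post(LICENSE_SERVER,
                             data={'license': license_key,
                                   'machine': str(uuid.getnode())},
                             timeout=10)
        return resp.json().get('status') == 'valid'   # the server decides trial/expired/valid
    except requests.RequestException:
        return False   # offline -> treat as invalid, or fall back to a cached answer

if not license_is_valid('ABC123-TRIAL'):
    raise SystemExit('License expired or invalid - please purchase a license.')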
25,269,845 | 2014-08-12T16:52:00.000 | 0 | 0 | 1 | 0 | python,kivy,setup-deployment | 69,546,902 | 3 | false | 0 | 1 | My idea is
Make a table in the database.
Use the datetime module and put the system date in that table as the beginning date.
Use timedelta(15) (to calculate the date on which the program should expire; here I used a 15-day trial in the code) and store it in the database table as the expiry date.
Now, each time your app starts, add logic that checks whether the current date has reached the expiry date; if it has, show an "expired" error or run your own expiry logic.
Note: make sure the beginning and expiry dates are written only once, otherwise they will be changed again and again. | 2 | 7 | 0 | I have made a desktop application in kivy and able to make single executable(.app) with pyinstaller. Now I wanted to give it to customers with the trial period of 10 days or so.
The problem is how to make a trial version which stops working 10 days after installation, and which should not work even if the user uninstalls and reinstalls it after the trial period is over.
Giving partial features in the trial version is not an option.
Environment
Mac OS and Python 2.7 with Kivy | How to make trial period for my python application? | 0 | 0 | 0 | 2,478 |
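A minimal version of the datetime/timedelta idea above, using a local SQLite file instead of a full database (the file and table names are made up, and a user who finds and deletes the file can of course reset the trial):
import sqlite3
from datetime import datetime, timedelta

FMT = '%Y-%m-%d %H:%M:%S'
conn = sqlite3.connect('app_data.db')
conn.execute('CREATE TABLE IF NOT EXISTS trial (begin TEXT, expiry TEXT)')

row = conn.execute('SELECT begin, expiry FROM trial').fetchone()
if row is None:                                    # first run: record the trial window once
    begin = datetime.now()
    expiry = begin + timedelta(days=10)            # 10-day trial
    conn.execute('INSERT INTO trial VALUES (?, ?)',
                 (begin.strftime(FMT), expiry.strftime(FMT)))
    conn.commit()
else:
    expiry = datetime.strptime(row[1], FMT)

if datetime.now() > expiry:
    raise SystemExit('Trial period has expired.')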
25,270,885 | 2014-08-12T17:48:00.000 | 56 | 0 | 1 | 1 | python,batch-file,python-3.x,cx-freeze | 25,936,813 | 3 | true | 0 | 0 | I faced a similar problem (Python 3.4 32-bit, on Windows 7 64-bit). After installation of cx_freeze, three files appeared in c:\Python34\Scripts\:
cxfreeze
cxfreeze-postinstall
cxfreeze-quickstart
These files have no file extensions, but appear to be Python scripts. When you run python.exe cxfreeze-postinstall from the command prompt, two batch files are being created in the Python scripts directory:
cxfreeze.bat
cxfreeze-quickstart.bat
From that moment on, you should be able to run cx_freeze.
cx_freeze was installed using the provided win32 installer (cx_Freeze-4.3.3.win32-py3.4.exe). Installing it using pip gave exactly the same result. | 2 | 26 | 0 | I am using python 3.4 at win-8. I want to obtain .exe program from python code. I learned that it can be done by cx_Freeze.
In MS-DOS command line, I wrote pip install cx_Freeze to set up cx_Freeze. It is installed but it is not working.
(When I wrote cxfreeze to command line, I get this warning:C:\Users\USER>cxfreeze
'cxfreeze' is not recognized as an internal or external command,operable program or batch file.)
(I also added location of cxfreeze to "PATH" by environment variables)
Any help would be appreciated, thanks. | installing cx_Freeze to python at windows | 1.2 | 0 | 0 | 27,553
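Once the cxfreeze.bat script works (or instead of calling it directly), a minimal cx_Freeze setup script can be used to build the .exe; the script and project names here are placeholders:
# setup.py -- run with:  python setup.py build
from cx_Freeze import setup, Executable

setup(
    name='myapp',                              # hypothetical project name
    version='0.1',
    description='Example frozen application',
    executables=[Executable('myscript.py')],   # the script to turn into an .exe
)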