Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,869,258 | 2016-04-26T15:23:00.000 | 3 | 0 | 1 | 0 | python-3.x,anaconda,graphviz,spyder | 64,227,029 | 3 | false | 0 | 0 | Open Anaconda Prompt
Run-> "conda install python-graphviz" in anaconda prompt.
After installing graphviz, copy the directory:
C:\Users\Admin\anaconda3\Library\bin\graphviz
Open Control Panel\System\Advanced system settings
Environment variables\Path\Edit\New
Paste that copied directory and then click Ok | 1 | 9 | 0 | I am attempting to use Graphviz from Spyder (via an Anaconda install). I am having trouble understanding what is needed to do this and how to go about loading packages, setting variables, etc.
A straightforward approach for a new Python, Graphviz, and Spyder user would be great!
Also, apart from just creating and running Graphviz, how can one run Graphviz from python with a pre-generated .gv file? | How to use Graphviz with Anaconda/Spyder? | 0.197375 | 0 | 0 | 36,755 |
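As a hedged sketch for the last part of the question (running Graphviz from Python on a pre-generated .gv file), assuming the graphviz package installed by the conda command above and a hypothetical file name tree.gv (exact API details may vary with the package version):

```python
# Hedged sketch: assumes the graphviz package from `conda install python-graphviz`
# and a hypothetical DOT file named tree.gv in the working directory.
from graphviz import Source

with open("tree.gv") as f:
    src = Source(f.read(), format="png")

src.render("tree")   # writes tree.png next to the script
```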
36,873,990 | 2016-04-26T19:23:00.000 | 0 | 0 | 0 | 0 | python,networking,network-programming | 36,913,510 | 2 | false | 0 | 0 | Blocking of traffic has to happen inside the router. If the router does not have this feature, consider to replace it with a new one. | 2 | 1 | 0 | I am trying to find a way to block a certain Mac address / Internal IP from accessing the internet (Blocking a device in the LAN to WAN) in python.
This option is available in every modern router in every home but mine is kinda old and doesn't have that feature.
I have a basic knowledge in networking stuff and consider myself an Advanced-Beginner in python, so I'm up for the challenge but still need your help.
*Of course with the option to enable the internet again for that device | How can I block Internet access for a certain IP in my local network in python? | 0 | 0 | 1 | 1,760 |
36,873,990 | 2016-04-26T19:23:00.000 | 0 | 0 | 0 | 0 | python,networking,network-programming | 70,466,752 | 2 | false | 0 | 0 | I know I am kinda late now but... You can't necessarily block internet access to a machine like you would do in your router's config.
What you CAN do is implement something like an ARP Spoofer. Basically what you would do in a Man-in-the-Middle attack.
You send a malicious ARP packet to poison the target's ARP table, making it believe your machine is the router/default gateway. That way you can intercept every packet transmitted by the target, and you can then choose whether or not to route them.
If you choose not to forward the packets, the connection to the internet is cut off.
If you want to forward the packets to the actual router (in order to allow the target to access the internet) you must enable IP Forwarding on your machine.
You can do this by running echo 1 >> /proc/sys/net/ipv4/ip_forward on Linux or changing the Registry Key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter on Windows ('1' forwards the packets, '0' doesn't). By default IP forwarding is set to 0.
Remember you must resend the malicious ARP packet every couple of seconds as the ARP tables get updated quite frequently. This means you don't necessarily have to change the IP Forwarding configuration on your machine. After a minute or less of exiting the script the target's ARP table will go back to normal, giving them access to the internet again.
Here are some python modules you might want to take a look at:
Scapy (Packet Manipulation Tool)
winreg (Windows Registry) | 2 | 1 | 0 | I am trying to find a way to block a certain Mac address / Internal IP from accessing the internet (Blocking a device in the LAN to WAN) in python.
This option is available in every modern router in every home but mine is kinda old and doesn't have that feature.
I have a basic knowledge in networking stuff and consider myself an Advanced-Beginner in python, so I'm up for the challenge but still need your help.
*Of course with the option to enable the internet again for that device | How can I block Internet access for a certain IP in my local network in python? | 0 | 0 | 1 | 1,760 |
36,874,061 | 2016-04-26T19:27:00.000 | 2 | 0 | 0 | 0 | python-3.x,wamp,autobahn,crossbar | 36,875,764 | 2 | false | 0 | 0 | In case of your machine has a resolvable hostname try with:
import socket
socket.gethostbyname(socket.getfqdn())
Update. This is a more complete solution, should work fine with all OS:
import socket
print([l for l in (
    [ip for ip in socket.gethostbyname_ex(socket.gethostname())[2]
     if not ip.startswith('127.')][:1],
    [[(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close())
      for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]]
) if l][0][0]) | 1 | 1 | 0 | I write chat using crossbar.io. We have several nodes of chat.
I need to write statistics about each of the nodes, which is why I need to get the host name where a specific node is running.
Is it possible to get the host name from a component instance?
I use last version of crossbar/autobahn and python 3.4.
Expect get - 127.0.0.1 if I use local environment. | How to get Crossbar.io host name? | 0.197375 | 0 | 1 | 121 |
36,874,278 | 2016-04-26T19:40:00.000 | 0 | 0 | 0 | 1 | google-app-engine,google-app-engine-python | 36,874,638 | 2 | false | 1 | 0 | You can use a cron job that will start a task. In this task, you can call all your instances to clean up expired objects. | 1 | 1 | 0 | Been reading up a bit on background threads and it seems to only be allowed for backend instance. I have created an LRU instance cache that I want to call period cleanup jobs on to remove all expired objects. This will be used in both frontend and backend instances.
I thought about using deferred or taskqueue but those do not have the option to route a request back to the same instance. Any ideas? | How to create an equivalent of a background thread for an auto-scaling instance | 0 | 0 | 0 | 153 |
36,876,389 | 2016-04-26T21:48:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 36,876,832 | 1 | true | 0 | 0 | Discriminant analisys (a.k.a. supervised classification) is the way to go. You adjust the model by using the coordinates of the points and the information on the node they belong to. As a result, you obtain a model you can use to predict the node for new points as they are known. Linear discriminant analysis is one of the simplest algorithms. | 1 | 0 | 1 | I'm interested to approach the confirming point in polygon problem from another direction. I have a dataframe containing series of coordinates, known to be in certain polygon (administrative area). I have other dataframes with coordinates not assigned to any admin area. Would using SciKit offer an alternate means to assign these to the admin area.
Example:
I know (x, y) point 1 is in admin area a if (x, y) point 2 is within specified radius of point (1, i) can assign it to the same admin area. Does this approach sound viable? | Using cluster analysis as alternative to point in polygon assignment | 1.2 | 0 | 0 | 152 |
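A small sketch of the discriminant-analysis idea from the answer, using scikit-learn; the coordinates and area labels below are made up for illustration:

```python
# Sketch only: coordinates and admin-area labels are made up for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_known = np.array([[1.0, 2.0], [1.1, 2.2], [5.0, 7.0], [5.2, 6.8]])   # (x, y) points
areas = np.array(["area_a", "area_a", "area_b", "area_b"])             # known admin areas

clf = LinearDiscriminantAnalysis().fit(X_known, areas)

X_unknown = np.array([[1.05, 2.1], [5.1, 6.9]])    # points with no admin area yet
print(clf.predict(X_unknown))                      # -> ['area_a' 'area_b']
```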
36,876,676 | 2016-04-26T22:08:00.000 | -2 | 0 | 1 | 0 | python-3.x,pyinstaller,virus,trojan | 36,876,835 | 1 | false | 0 | 0 | This is not trouble with python or .exe - this is antivirus policy. If you want to distribute your app/program you need certificate. Or you can tell your clients to disable AV (very bad solution - your reputation and trust may be trashed). Best way is to redistribute python programs is as-is with .py or (if there is need for compiling) in source format - let the clients do rest of work - open source. If you need closed source app - you buy certificate. | 1 | 1 | 0 | I had a perfectly normal file. I downloaded pyinstaller, created a .exe with it, and wanted to share it. I uploaded it to dropbox, filehopper and one more (cant remember which) each time i tried to share it. Every single time, when i download the file to check if it works, my computer says trojan virus detected and quarantines the file. How do I fix/whats wrong??? TIA | Pyinstaller creates Trojan Virus when converting files | -0.379949 | 0 | 0 | 5,390 |
36,877,127 | 2016-04-26T22:44:00.000 | 1 | 0 | 0 | 1 | python,proxy,twisted,reverse-proxy | 36,877,399 | 1 | true | 0 | 0 | You can router your scripts with an web framework, like: Django, Flask, Web2Py...
Or, if you prefer you can create an router script for route manually | 1 | 0 | 0 | Is there some way to run multiple twisted servers simultaneously on the same port? So that they would be listening on different directories (for example: example.com/twisted1 is one twisted script, and example.com/twisted2 is another script) | Run multiple twisted servers? | 1.2 | 0 | 0 | 127 |
36,877,581 | 2016-04-26T23:28:00.000 | 0 | 0 | 0 | 1 | python,django,celery | 36,891,584 | 2 | false | 1 | 0 | First, just to explain how it works briefly. You have a celery client running in your code. You call tasks.add(1,2) and a new Celery Task is created. That task is transferred by the Broker to the queue. Yes the queue is persisted in Rabbimq or SQS. The Celery Daemon is always running and is listening for new tasks. When there is a new task in the queue, it starts a new Celery Worker to perform the work.
To answer your questions:
The Celery daemon is always running, and it is what starts the Celery workers.
Yes, RabbitMQ or SQS does the work of the queue.
With the celery monitor you can monitor how many tasks are running, how many are completed, what is the size of the queue, etc. | 1 | 1 | 0 | I have divided celery into following parts
Celery
Celery worker
Celery daemon
Broker: RabbitMQ or SQS
Queue
Result backend
Celery monitor (Flower)
My Understanding
When i hit celery task in django e,g tasks.add(1,2). Then celery adds that task to queue. I am confused if thats 4 or 5 in above list
WHen task goes to queue Then worker gets that task and delete from queue
The result of that task is saved in Result Backend
My Confusions
Whats diff between celery daemon and celery worker
Is RabbitMQ doing the work of the queue? Does that mean tasks get saved in RabbitMQ or SQS?
What does Flower do? Does it monitor workers, tasks, queues, or results? | Some confusions regarding celery in python | 0 | 0 | 0 | 385
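For reference, a minimal sketch of the tasks.add(1, 2) flow discussed above (the broker URL and result backend below are placeholders):

```python
# tasks.py -- minimal sketch; the broker URL and result backend are placeholders.
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//", backend="rpc://")

@app.task
def add(x, y):
    return x + y

# From Django (or any client code):
#   result = add.delay(1, 2)   # a message is sent to the broker's queue
#   result.get(timeout=10)     # read back from the result backend -> 3
# A worker process is started separately with:  celery -A tasks worker
```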
36,878,089 | 2016-04-27T00:19:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 36,878,371 | 5 | false | 0 | 0 | I think the numpy method column_stack is more interesting because you do not need to create a column numpy array to stack it in the matrix of interest. With the column_stack you just need to create a normal numpy array. | 1 | 24 | 1 | I have a 60000 by 200 numpy array. I want to make it 60000 by 201 by adding a column of 1's to the right. (so every row is [prev, 1])
Concatenate with axis = 1 doesn't work because it seems like concatenate requires all input arrays to have the same dimension.
How should I do this? I can't find any existing useful answer, and most of the answers about this were written a few years ago so things might be different now. | Python: Add a column to numpy 2d array | 0.039979 | 0 | 0 | 64,625 |
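A short sketch of the column_stack approach from the answer, for the 60000 by 200 case in the question (random data as a stand-in):

```python
import numpy as np

X = np.random.rand(60000, 200)                    # stand-in for the original array
X1 = np.column_stack((X, np.ones(X.shape[0])))    # the 1-D ones array becomes a column
# equivalent: np.hstack((X, np.ones((X.shape[0], 1))))
print(X1.shape)                                   # (60000, 201)
```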
36,879,127 | 2016-04-27T02:17:00.000 | 5 | 0 | 0 | 0 | python,postgresql,sqlalchemy,psycopg2 | 36,879,218 | 1 | false | 0 | 0 | As pointed out by Gordon, there doesn't appear to be a predefined limit on the number of values sets you can have in your statement. But you would want to keep this to a reasonable limit to avoid consuming too much memory at both the client and the server. The client only needs to build the string and the server needs to parse it as well.
If you want to insert a large number of rows speedily COPY FROM is what you are looking for. | 1 | 7 | 0 | When inserting rows via INSERT INTO tbl VALUES (...), (...), ...;, what is the maximum number of values I can use?
To clarify, PostgreSQL supports using VALUES to insert multiple rows at once. My question isn't how many columns I can insert, but rather how many rows of columns I can insert into a single VALUES clause. The table in question has only ~10 columns.
Can I insert 100K+ rows at a time using this format?
I am assembling my statements using SQLAlchemy Core / psycopg2 if that matters. | What is the maximum number of VALUES that can be put in a PostgreSQL INSERT statement? | 0.761594 | 1 | 0 | 5,268 |
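Two hedged sketches of the bulk-load options touched on above, using psycopg2; the connection string, table and column names are hypothetical, and in practice you would pick one of the two options:

```python
# Sketch only: connection string, table and column names are hypothetical.
import io
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
rows = [(i, "name-%d" % i) for i in range(100000)]

# Option 1: multi-row VALUES, paged into batches by execute_values
execute_values(cur, "INSERT INTO tbl (id, name) VALUES %s", rows, page_size=1000)

# Option 2: COPY FROM, usually the fastest bulk load (use one option or the other)
buf = io.StringIO("".join("%d\t%s\n" % r for r in rows))
cur.copy_from(buf, "tbl", columns=("id", "name"))

conn.commit()
```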
36,881,830 | 2016-04-27T06:21:00.000 | 1 | 0 | 1 | 1 | python,subprocess | 36,881,946 | 2 | false | 0 | 0 | Yes, a new process is spawned every time you call subprocess.call() or any of its relatives, including Popen(). You do not need to explicitly kill the subprocesses normally--you'd just wait for them to exit. | 1 | 1 | 0 | If subprocess.call is invoked N times, I wonder if N subprocess will be created or not.
And when will the subprocess close? Should I kill it manually?
What about subprocess.Popen? | In python, will subprocess.call produce an individual subprocess every time being invoked? | 0.099668 | 0 | 0 | 56 |
36,883,631 | 2016-04-27T07:53:00.000 | 0 | 0 | 1 | 0 | python,arrays,cluster-computing | 36,883,848 | 1 | false | 0 | 0 | I also work with really big datasets (complete genomes or all possible gene combinations) and i store these in a zipped database with pickle. this way it is ram efficient and uses a lot less hard disk memory.
I suggest you try that. | 1 | 1 | 0 | I need to create a big array in python from Sqlite database. It's size is 1000_000_000*1000_000_000 and each item is one or zero. Actually, my computer can't store in RAM this volume of information. Maybe someone have idea how to work in this situation? Maybe store these vectors in database or there is some framework for similar needs? If i am able to do this, then i need to build clusters, that problem frighten me not less, with this information size.
Thanks in advance/ | How do you work with big array in python? | 0 | 0 | 0 | 259 |
36,884,019 | 2016-04-27T08:12:00.000 | 0 | 0 | 1 | 0 | python,equality | 36,884,102 | 2 | false | 0 | 0 | is checks if the two items are the exact same object. This check identity
== checks if the two objects are equal values
You use is not None to make sure that the object the "real" none and not just false-y. | 1 | 0 | 0 | Two items may be unequal in many ways. Can python tell what is the reason?
For example: 5 is not 6, int(5) is not float(5), "5" is not "5 ", ...
Edit: I did not ask what kinds of equality test there are, but why and how those are not equal. I think my question is not duplicate. | If "a is not b" how to know what is the difference? | 0 | 0 | 0 | 99 |
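A quick, concrete illustration of the identity-versus-equality distinction discussed in this thread:

```python
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)         # True  -- equal values
print(a is b)         # False -- two distinct list objects
print(5 == 5.0)       # True  -- int and float compare equal by value
print(a is not None)  # True  -- identity check against the None singleton
```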
36,887,637 | 2016-04-27T10:48:00.000 | 0 | 1 | 0 | 0 | python-2.7,unit-testing,pytest | 36,905,061 | 1 | true | 0 | 0 | os.path.dirname(os.path.abspath(__file__)) (which is what I think you meant above) worked fine for me so far - it should work as long as Python can figure the path of the file, and with pytest I can't imagine a scenario where that wouldn't be true. | 1 | 0 | 0 | During a pytest fixture, what is the best way to robustly get the location of a text file for users that may specify different working directories at runtime e.g. I want a person using the cmd line in the test fixture directory find the file as well as an integration server which may work in the project's root. Can I somehow include the text file in a module? What are best practices for including and getting access to non .py files?
I am aware of BASE_DIR = os.path.dirname(os.path.dirname(__file__)), but I am not sure if this will always refer to the same directory given a particular way of running the test suite. | pytest: robust file path of a txt file used in tests | 1.2 | 0 | 0 | 1,067 |
36,888,098 | 2016-04-27T11:11:00.000 | 1 | 0 | 1 | 0 | python,mongodb,pymongo | 37,026,108 | 1 | true | 0 | 0 | Depending on the schema of your objects, you could hypothetically write an aggregation pipeline that would first transform the objects, then filter the results based on the results and then return those filtered results.
The main reason I would not recommend this way though is that, given a fairly large size for your dataset, the aggregation is going to fail because of memory problems.
And that is without mentioning the long execution time for this command. | 1 | 2 | 0 | So basically I have this collection where objects are stored with a string parameter.
example:
{"string_": "MSWCHI20160501"}
The last part of that string is a date, so my question is this: Is there a way of writing a mongo query which will take that string, convert part of it into an IsoDate object and then filter objects by that IsoDate.
p.s
I know I can do a migration but I wonder if I can achieve that without one. | Write a query in MongoDB's client pymongo that converts a part of the string to a date on the fly | 1.2 | 1 | 0 | 61 |
36,894,191 | 2016-04-27T15:24:00.000 | 0 | 0 | 0 | 0 | python,numpy,random,machine-learning,normal-distribution | 60,298,484 | 4 | false | 0 | 0 | You can subdivide your targeted range (by convention) to equal partitions and then calculate the integration of each and all area, then call uniform method on each partition according to the surface.
In Python the per-partition areas can be computed with:
from scipy.integrate import quad_vec
from scipy.stats import norm
quad_vec(norm.pdf, 1, 4, points=[0.5, 2.5, 3, 4], full_output=True) | 1 | 44 | 1 | In a machine learning task we should get a group of random numbers w.r.t. a normal distribution with bounds. We can get a normal distribution number with np.random.normal() but it doesn't offer any bound parameter. I want to know how to do that? | How to get a normal distribution within a range in numpy? | 0 | 0 | 0 | 43,386
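As a side note, not part of the answer above: a common alternative for sampling a normal distribution restricted to a range is scipy.stats.truncnorm; a hedged sketch, where the bounds, mean and sigma are example values:

```python
# Alternative sketch (not the integration approach above): scipy.stats.truncnorm.
from scipy.stats import truncnorm

low, high, mu, sigma = 1.0, 4.0, 2.5, 1.0
a, b = (low - mu) / sigma, (high - mu) / sigma      # bounds in standard-normal units
samples = truncnorm(a, b, loc=mu, scale=sigma).rvs(size=1000)
```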
36,894,358 | 2016-04-27T15:31:00.000 | 1 | 0 | 0 | 0 | python,opencv,image-processing,machine-learning,computer-vision | 36,908,550 | 2 | false | 0 | 0 | If the shadows cover a significant part of the image then this problem is non-trivial.
If the shadow is a small fraction of the area you're interested though you could try using k-medoids instead of k-means and as Piglet mentioned using a different color space with separate chromaticity and luminance channels may help. | 1 | 5 | 1 | Is it possible to extract the 'true' color of building façade from a photo/ a set of similar photos and removing the distraction of shadow? Currently, I'm using K-means clustering to get the dominant colors, however, it extracts darker colors (if the building is red, then the 1st color would be dark red) as there are lots of shadow areas in real photos.
Any suggestions are greatly appreciated!
Thanks in advance! | What is a good way to extract dominant colors from image without the shadow? | 0.099668 | 0 | 0 | 383 |
36,895,495 | 2016-04-27T16:19:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,casting,floating-point,precision | 36,895,565 | 1 | false | 0 | 0 | You probably want to use the ** operator instead of ^. ** is the power operator in python, ^ is the Binary XOR operator.
float(1)/(2**7) yields the correct 0.0078125. | 1 | 0 | 0 | When I enter float(1)/(2^7) in the Python console it outputs 0.2. But it is actually 0.0078125.
Could anyone please tell what I am doing wrong? | Python float() giving incorrect values | 0.53705 | 0 | 0 | 73 |
36,898,571 | 2016-04-27T18:56:00.000 | 1 | 0 | 0 | 0 | python-2.7,tkinter,tkinter-canvas | 36,898,667 | 1 | true | 0 | 1 | If you have the full path then that should work fine as it is. | 1 | 0 | 0 | I am making a program that will display images on a Tkinter canvas, but I need to use images in a different folder than the program is in so I can't use what I usually do:
img = PhotoImage(file=some_img)
I have an os path like C:\Users\SomeUser\Documents\some_img. I need to access some_img to make it a PhotoImage while it is in a different folder. How would I go about doing that? | Using an os path in Tkinter PhotoImage | 1.2 | 0 | 0 | 252 |
36,898,597 | 2016-04-27T18:57:00.000 | 0 | 0 | 1 | 0 | python,performance,python-3.x,file-io,echo | 36,898,685 | 3 | false | 0 | 0 | If you are indeed I/O bound by the time it takes to write the file, multi-threading with a pool of threads may help. Of course, there is a limit to that, but at least, it would allow you to issue non-blocking file writes. | 2 | 1 | 0 | I'm currently involved in a Python project that involves handling massive amounts of data. In this, I have to print massive amounts of data to files. They are always one-liners, but sometimes consisting of millions of digits.
The actual mathematical operations in Python only take seconds, minutes at most. Printing them to a file takes up to several hours; which I don't always have.
Is there any way of speeding up the I/O?
From what I figure, the number is stored in the RAM (Or at least I assume so, it's the only thing which would take up 11GB of RAM), but Python does not print it to a text file immediately. Is there a way to dump that information -- if it is the number -- to a file? I've tried Task Manager's Dump, which gave me a 22GB dump file (Yes, you read that right), and it doesn't look like there's what I was looking for in there, albeit it wasn't very clear.
If it makes a difference, I have Python 3.5.1 (Anaconda and Spyder), Windows 8.1 x64 and 16GB RAM.
By the way, I do run Garbage Collect (gc module) inside the script, and I delete variables that are not needed, so those 11GB aren't just junk. | Python 3 - Faster Print & I/O | 0 | 0 | 0 | 833 |
36,898,597 | 2016-04-27T18:57:00.000 | 0 | 0 | 1 | 0 | python,performance,python-3.x,file-io,echo | 36,898,804 | 3 | false | 0 | 0 | Multithreading could speed it up (have printers on other threads that you write to in memory that have a queue).
Maybe a system design standpoint, but maybe evaluate whether or not you need to write everything to the file. Perhaps consider creating various levels of logging so that a release mode could run faster (if that makes sense in your context). | 2 | 1 | 0 | I'm currently involved in a Python project that involves handling massive amounts of data. In this, I have to print massive amounts of data to files. They are always one-liners, but sometimes consisting of millions of digits.
The actual mathematical operations in Python only take seconds, minutes at most. Printing them to a file takes up to several hours; which I don't always have.
Is there any way of speeding up the I/O?
From what I figure, the number is stored in the RAM (Or at least I assume so, it's the only thing which would take up 11GB of RAM), but Python does not print it to a text file immediately. Is there a way to dump that information -- if it is the number -- to a file? I've tried Task Manager's Dump, which gave me a 22GB dump file (Yes, you read that right), and it doesn't look like there's what I was looking for in there, albeit it wasn't very clear.
If it makes a difference, I have Python 3.5.1 (Anaconda and Spyder), Windows 8.1 x64 and 16GB RAM.
By the way, I do run Garbage Collect (gc module) inside the script, and I delete variables that are not needed, so those 11GB aren't just junk. | Python 3 - Faster Print & I/O | 0 | 0 | 0 | 833 |
36,898,723 | 2016-04-27T19:03:00.000 | 0 | 0 | 0 | 0 | python-3.x,datatable | 36,917,130 | 1 | true | 1 | 0 | It's easier than I thought, there is no need for PHP and MariaDB.
When using nginx, you need uwsgi and uwsgi-plugin-cgi to let nginx know that the Python script is a script and not data. Point to the Python script in the Ajax parameter of the DataTables JS code, make it executable, print the array with a JSON function in the Python script, and include the CGI/JSON header strings. The array should look like the one in the example on the DataTables website (Ajax source).
It's all running in the memory now. | 1 | 0 | 0 | I'm a beginner at website programming and want to understand some basics.
I've created a Python 3 script which fetches some data from a website and makes some calculations. Result is then about 20 rows with 7 columns.
What is the easiest way to make them available on my website? When refreshing my website, the Python script should fetch the data from the 3rd party website and this data should then be displayed in a simple table with sorting option.
I've discovered the jQuery plugin DataTables with Ajax JSON source. I would create a PHP script which executes the Python script which writes data to a DB like MariaDB. PHP then creates a JSON for Ajax.
Is this the right way or are there easier ways? Maybe using a framework etc.?
Thanks! | Using DataTables with Python | 1.2 | 0 | 0 | 1,506 |
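A minimal sketch of the CGI-style script described in the answer above; the rows are placeholders and the {"data": ...} shape follows DataTables' default Ajax convention:

```python
#!/usr/bin/env python3
# Minimal sketch of the CGI-style script described above; the rows are placeholders.
import json

rows = [
    ["row1-col1", "row1-col2"],
    ["row2-col1", "row2-col2"],
]

print("Content-Type: application/json")
print()                               # blank line ends the CGI headers
print(json.dumps({"data": rows}))     # DataTables' default Ajax shape
```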
36,900,529 | 2016-04-27T20:45:00.000 | 3 | 0 | 1 | 0 | java,python,regex | 36,900,603 | 1 | true | 0 | 0 | A regex doesn't exist for this language because the language is not regular.
You'll want to either create a HashMap<char, int> that captures the count of each char in the string and compare that to the input string or sort your list of words and the user's input and compare them. | 1 | 0 | 0 | I have a list of words:
"abac"
"abcc"
"acb"
"aaaa"
...
User can input any word for searching in this list.
My goal is to find in list specific word that contains the same count of each character from input word. For example, if an input word is "abca" then only first word from list should match, if "cba" - then only third.
I decided to use regex which will be applied to each words in list separately until matching.
My attempt is regex /^[abca]{4}$/ but this is the wrong approach since it ignores the count of each character therefore second and fourth words from list matching also, though they shouldn't.
Will appreciate any help. | Regex to find word with exact count of any letter | 1.2 | 0 | 0 | 125 |
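A short Python sketch of the character-count idea suggested in the answer (using collections.Counter instead of a HashMap), which avoids regex entirely:

```python
from collections import Counter

words = ["abac", "abcc", "acb", "aaaa"]

def matches(query, candidates):
    target = Counter(query)   # e.g. Counter('abca') counts {'a': 2, 'b': 1, 'c': 1}
    return [w for w in candidates if Counter(w) == target]

print(matches("abca", words))   # ['abac']
print(matches("cba", words))    # ['acb']
```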
36,901,311 | 2016-04-27T21:32:00.000 | 1 | 0 | 0 | 0 | python-2.7,pandas | 36,946,734 | 2 | false | 0 | 0 | Sorry for the garbled question. in order to do a groupby for a timedelta value the best way is to do a pd.numeric on the 'timedelta value' and once the results are obtained we can again do a pd.to_timedelta on it. | 1 | 0 | 1 | I have a Pandas Dataframe which has a field txn['time_diff']
Send_Agent Pay_Agent Send_Time Pay_Time score \
0 AKC383903 AXX100000 2014-08-19 18:52:35 2015-05-01 22:08:39 1
1 AWA280699 AXX100000 2014-08-19 19:32:18 2015-05-01 17:12:32 1
2 ALI030170 ALI030170 2014-08-26 10:11:40 2015-05-01 22:20:09 1
3 AKC403474 AXX100000 2014-08-19 20:35:53 2015-05-01 21:27:12 1
4 AED002616 AED002616 2014-09-28 18:37:32 2015-05-01 14:06:17 1
5 ALI030170 ALI030170 2014-08-20 05:08:03 2015-05-01 21:29:43 1
6 ADA414187 ADA414187 2014-09-26 17:46:24 2015-05-01 21:37:51 1
7 AWA042396 AWA042396 2014-08-27 12:07:11 2015-05-01 17:39:31 1
8 AED002616 AED002616 2014-08-23 04:53:03 2015-05-01 13:33:12 1
9 ALA500685 AXX100000 2014-08-27 16:41:26 2015-05-01 19:01:52 1
10 AWA263407 AXX100000 2014-08-27 18:04:24 2015-05-01 10:39:14 1
11 ACH928457 ACH928457 2014-08-28 10:26:41 2015-05-01 11:55:59 1
time_diff
0 255 days 03:16:04
1 254 days 21:40:14
2 248 days 12:08:29
3 255 days 00:51:19
4 214 days 19:28:45
5 254 days 16:21:40
6 217 days 03:51:27
7 247 days 05:32:20
8 251 days 08:40:09
9 247 days 02:20:26
10 246 days 16:34:50
11 246 days 01:29:18
txn['time_diff'].min() works fine. But txn['time_diff'].groupby(txn['Send_Agent']).min() gives me the output in seconds
Send_Agent
A03010016 86546000000000
A03020048 53056000000000
A10001087 113459000000000
A11120030 680136000000000
A11120074 787844000000000
A11120106 1478045000000000
A11120117 2505686000000000
A11120227 923508000000000
A11120294 1460320000000000
A11120304 970226000000000
A11120393 3787969000000000
A11120414 2499079000000000
A11120425 65753000000000
A11140016 782269000000000
But I want it in terms of days , hours , mins.
I did the following
txn = txn.astype(str)
Time_diff_min = txn['time_diff'].groupby(txn['Send_Agent']).min()
The output I get is in the right format but is erroneous and is fetching the "first" value it finds for that "groupby"
In [15]: Time_diff_min = txn['time_diff'].groupby(txn['Send_Agent']).min()
In [16]: Time_diff_min
Out[16]:
Send_Agent
A03010016 1 days 00:02:26.000000000
A03020048 0 days 14:44:16.000000000
A10001087 1 days 07:30:59.000000000
A11120030 13 days 06:29:35.000000000
A11120074 9 days 02:50:44.000000000
A11120106 17 days 02:34:05.000000000
A11120117 29 days 00:01:26.000000000
A11120227 10 days 16:31:48.000000000
A11120294 16 days 21:38:40.000000000
A11120304 11 days 05:30:26.000000000
A11120393 43 days 20:12:49.000000000
A11120414 28 days 22:11:19.000000000
A11120425 0 days 18:15:53.000000000
A11140016 9 days 01:17:49.000000000
A11140104 0 days 15:33:06.000000000
A11140126 1 days 18:36:07.000000000
A11140214 23 days 02:30:07.000000000
Also
Time_diff_min = txn['time_diff']..min().groupby(txn['Send_Agent'])
throws an error that I cannot groupby on a timedelta | obtaining the min value of time_diff for a Pandas Dataframe | 0.099668 | 0 | 0 | 110 |
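A small self-contained sketch of the pd.to_numeric / pd.to_timedelta round-trip described in the answer (column names follow the question's dataframe; the data is a stand-in):

```python
import pandas as pd

# Stand-in for the question's dataframe.
txn = pd.DataFrame({
    "Send_Agent": ["A", "A", "B"],
    "time_diff": pd.to_timedelta(["255 days 03:16:04", "254 days 21:40:14", "248 days 12:08:29"]),
})

as_numeric = pd.to_numeric(txn["time_diff"])               # int64 nanoseconds
per_agent = as_numeric.groupby(txn["Send_Agent"]).min()    # numeric groupby works fine
time_diff_min = pd.to_timedelta(per_agent)                 # back to "248 days 12:08:29" style
print(time_diff_min)
```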
36,901,709 | 2016-04-27T22:01:00.000 | 0 | 0 | 0 | 0 | jquery,python,mysql,api,scripting | 36,901,878 | 1 | true | 0 | 0 | You don't mention what server side language you're using, but the concepts would be the same for all - make your query to get your 200K variables, loop through the result set, making the curl call to the API for each, store the results in an array, json encode the array at the end of the loop, and then dump the result to a file. As for the limit to requests per time period, most languages have some sort of pause function, in PHP it's sleep(). If all else fails, you could put a loop that does nothing (except take time) into each call to put a delay in the process. | 1 | 0 | 0 | I am 'kind of' new to programming and must have searched a large chunk of the web in connection with this question. I am sure the answer is somewhere out there but I am probably simply not using the right terminology to find it. Nevertheless, I did my best and I am totally stuck. I hope people here understand the feeling and won't mind helping.
I am currently working on a data driven web app that I am building together with an outsourced developer while also learning more about programming. I've got some rusty knowledge of it but I've been working in business-oriented non-technical roles for a few years now and the technical knowledge gathered some dust.
The said web app uses MySql database to store information. In this MySql database there is currently a table containing 200,000 variables (Company Names). I want to run those Company Names through a third-party json RESTful API to return some additional data regarding those Companies.
There are 2 questions here and I don't expect straight answers. Pointing me in the right learning direction would be sufficient:
1. How would I go about taking those 200,000 variables and executing a script that would automatically make 200,000 calls to the API to obtain the data I am after. How do I then save this data to a json or csv file to import to MySql? I know how to make single API requests, using curl but making automated large volume requests like that is a mystery to me. I don't know whether I should create a json file out of it or somehow queue the requests, I am lost.
2. The API mentioned above is limited to 600 calls per 5 minutes perios, how do I introduce some sort of control system so that when the maximum volume of API calls is reached the script pauses and only returns to working when the specified amount of time goes by? What language is best to interact with the json RESTful API and to write the script described in question no1?
Thank you for your help.
Kam | Most effective way to run execute API calls | 1.2 | 0 | 1 | 392 |
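A hedged Python sketch of the loop-plus-pause idea from the answer (the endpoint, parameter name and output file are placeholders; 600 calls per 5 minutes works out to at most 2 calls per second):

```python
# Sketch only: the endpoint, parameter name and output file are placeholders.
import json
import time
import requests

company_names = ["Acme Ltd", "Globex"]   # in practice: fetched from the MySQL table

results = []
for name in company_names:
    resp = requests.get("https://api.example.com/companies",
                        params={"name": name}, timeout=30)
    resp.raise_for_status()
    results.append(resp.json())
    time.sleep(0.5)   # 600 calls / 5 minutes = at most 2 calls per second

with open("companies.json", "w") as f:
    json.dump(results, f)   # can then be imported into MySQL
```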
36,903,698 | 2016-04-28T01:30:00.000 | 1 | 1 | 0 | 1 | python,c++,sockets,zeromq | 36,903,756 | 1 | false | 0 | 0 | And this is what happens when you forget to call connect() on the socket... | 1 | 0 | 0 | I am trying to write an application that uses ZeroMQ to recieve messages from clients. I receive the message from the client in the main loop, and need to send an update to a second socket (general idea is to establish a 'change feed' on objects in the database the application is built on).
Receiving the message works fine, and both sockets are connected without issue. However, sending the request on the outbound port simply hangs, and the test server meant to receive the message does not receive anything.
Is it possible to use both a REQ and REP socket within the same application?
For reference, the main application is C++ and the test server and test client communicating with it are written in Python. They are all running on Ubuntu 14.40. Thanks!
Alex | C++ ZeroMQ Single Application with both REQ and REP sockets | 0.197375 | 0 | 0 | 97 |
36,905,809 | 2016-04-28T05:16:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,url | 36,947,097 | 1 | false | 0 | 0 | This is typically a function of the terminal that you're using.
In iterm2, you can click links by pressing cmd+alt+left click.
In gnome-terminal, you can click links by pressting ctrl+left click or right clicking and open link. | 1 | 0 | 0 | I know writing print "website url" does not provide a clickable url. So is it possible to get a clickable URL in python, which would take you directly to that website? And if so, how can it be done? | Clickable website URL | 0 | 0 | 1 | 46 |
36,909,860 | 2016-04-28T08:57:00.000 | 1 | 0 | 1 | 0 | python,algorithm,sorting,dictionary | 36,910,515 | 2 | false | 0 | 0 | Binary Search is a searching technique which exploits the fact that list of keys in which a key is to be searched is already sorted, it doesn't requires you to sort and then search, making its worst case search time O(log n).
If you do not have a sorted list of keys and want to search for a key, then you will have to go for linear search, which in the worst case runs with O(n) complexity. There is no need to sort and then search, which is definitely slower, since the best known sorting algorithms only work in O(n log n) time.
Building a dictionary from a list of keys and then performing a lookup is of no advantage here because linear search will yield the same for better performance and also there need for auxiliary memory which would be needed in case of dictionary; however if you have multiple lookups and key space is small using a dictionary can of advantage since building the dictionary is one time work of O(n) and subsequent lookups can be done by O(1) at the expense of some memory which will be used by the dictionary. | 2 | 3 | 0 | I was studying hash tables and a thought came:
Why not use dictionaries for searching an element instead of first sorting the list then doing binary search? (assume that I want to search multiple times)
We can convert a list to a dictionary in O(n) (I think) time because we have to go through all the elements.
We add all those elements to dictionary and this takes O(1) time
When the dictionary is ready,we can then search for any element in O(1) time(average) and O(n) is the worst case
Now if we talk about average case O(n) is better than other sorting algorithms because at best they take O(nlogn).And if I am right about all of what I have said then why not do this way?
I know there are various other things which you can do with the sorted elements which cannot be done in an unsorted dictionary or array.But if we stick only to search then Is it not a better way to do search than other sorting algorithms? | Using dictionary instead of sorting and then searching | 0.099668 | 0 | 0 | 725 |
36,909,860 | 2016-04-28T08:57:00.000 | 2 | 0 | 1 | 0 | python,algorithm,sorting,dictionary | 36,911,006 | 2 | true | 0 | 0 | Right, a well-designed hash table can beat sorting and searching.
For a proper choice, there are many factors entering into play such as in-place requirement, dynamism of the data set, number of searches vs. insertions/deletions, ease to build an effective hashing function... | 2 | 3 | 0 | I was studying hash tables and a thought came:
Why not use dictionaries for searching an element instead of first sorting the list then doing binary search? (assume that I want to search multiple times)
We can convert a list to a dictionary in O(n) (I think) time because we have to go through all the elements.
We add all those elements to dictionary and this takes O(1) time
When the dictionary is ready,we can then search for any element in O(1) time(average) and O(n) is the worst case
Now if we talk about average case O(n) is better than other sorting algorithms because at best they take O(nlogn).And if I am right about all of what I have said then why not do this way?
I know there are various other things which you can do with the sorted elements which cannot be done in an unsorted dictionary or array.But if we stick only to search then Is it not a better way to do search than other sorting algorithms? | Using dictionary instead of sorting and then searching | 1.2 | 0 | 0 | 725 |
36,911,785 | 2016-04-28T10:21:00.000 | 0 | 0 | 1 | 0 | python,artificial-intelligence | 37,138,718 | 2 | false | 0 | 0 | You need to assign score (evaluation) to each move based on the rules of the game. When you choose appropriate scoring method you can evaluate which out of 5 possible actions is the best one.
As an example let's assume simple game where you must take all opponents pawns by placing your pawn on top of theirs (checkers with simplified/relaxed rules). When you move pawn to the next free cell without exposing your pawn to the danger you can assign score +1 and when you take opponent's pawn +3. If opponent takes your pawn in next move you subtract your score -3. You can define other scoring rules. When you apply scoring to all possible moves you can then select best move either using MinMax algorithm for 2 players game or some greedy search algorithm which just maximizes the score selecting action which yields highest score on next move without predicting opponent's move. | 1 | 0 | 0 | I do not see too how I could set this:
I must code a small IA for asymmetric board game for 2 players. Each turn each player has a number of action points to use to move their pieces on the board (10x10).
For now I know how to generate the list of possible moves for each pawn based on the number of given action point but I block to the next step, selecting the best move. How could I code the fact that for example 5 action points it is best to move one pawn 3 cells and another 2 cells that move one pawn 5 cells. Do I have to use a particular algorithm or apply a programming concept ...? Well, I'm lost.
Hope you can help me :) | define the best possible move (AI - game) | 0 | 0 | 0 | 839 |
36,912,233 | 2016-04-28T10:40:00.000 | 0 | 0 | 1 | 0 | python,console,pycharm | 36,912,291 | 1 | false | 1 | 0 | Click the icon in the lower left corner of pycharm, then you will get a button for the console. | 1 | 0 | 0 | I am new to PyCharm and I am stuck on something really stupid: I cannot get Pycharm to display a Python interpreter console window.
The help tells me to click "Tools -> Run Python Console" in the main menu, which is simple and logical enough, except there is no Run Python Console command in my Tools submenu. There is a "Tools -> Python Console..." command (yes with the dots, plus an icon), but it does nothing. Ditto for the "Python Console" box (with the same icon) in the right end of the bottom bar.
I have searched a lot for a solution, but nobody seems to have discussed this or a similar problem.
My installation is:
PyCharm Community Edition 2016.1.2,
Build #PC-145.844, built on April 8, 2016,
JRE: 1.8.0_60-b27 x86,
JVM: Java HotSpot(TM) Server VM by Oracle Corporation
Thanks for any hints. | No "Run Python Console" in PyCharm menu | 0 | 0 | 0 | 1,002 |
36,917,042 | 2016-04-28T14:02:00.000 | 1 | 0 | 1 | 0 | python | 37,045,296 | 18 | false | 0 | 0 | L = [1, 2, 3]
a = zip(L, L[1:]+L[:1])
for i in a:
    b = list(i)
    print b | 2 | 77 | 0 | Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
1 - 2
2 - 3
3 - 1 | Pairwise circular Python 'for' loop | 0.011111 | 0 | 0 | 7,843 |
36,917,042 | 2016-04-28T14:02:00.000 | 3 | 0 | 1 | 0 | python | 37,017,122 | 18 | false | 0 | 0 | If you don't want to consume too much memory, you can try my solution:
[(l[i], l[(i+1) % len(l)]) for i, v in enumerate(l)]
It's a little slower, but consume less memory. | 2 | 77 | 0 | Is there a nice Pythonic way to loop over a list, retuning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
1 - 2
2 - 3
3 - 1 | Pairwise circular Python 'for' loop | 0.033321 | 0 | 0 | 7,843 |
36,920,262 | 2016-04-28T16:19:00.000 | 0 | 0 | 0 | 0 | python,vectorization,cosine-similarity | 65,979,818 | 5 | false | 0 | 0 | Below worked for me, have to provide correct signature
import numpy as np
from scipy.spatial.distance import cosine

def cosine_distances(embedding_matrix, extracted_embedding):
    return cosine(embedding_matrix, extracted_embedding)

cosine_distances = np.vectorize(cosine_distances, signature='(m),(d)->()')
cosine_distances(corpus_embeddings, extracted_embedding)
In my case
corpus_embeddings is a (10000,128) matrix
extracted_embedding is a 128-dimensional vector | 1 | 5 | 1 | In python, is there a vectorized efficient way to calculate the cosine distance of a sparse array u to a sparse matrix v, resulting in an array of elements [1, 2, ..., n] corresponding to cosine(u,v[0]), cosine(u,v[1]), ..., cosine(u, v[n])? | Cosine distance of vector to matrix | 0 | 0 | 0 | 3,904 |
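For the original question (one vector against every row of a matrix), a fully vectorised alternative to np.vectorize is scikit-learn's pairwise helper; a sketch with stand-in data (scipy.sparse input is also accepted):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

matrix = np.random.rand(10000, 128)   # dense stand-in; scipy.sparse matrices also work
vector = np.random.rand(1, 128)       # must be 2-D: shape (1, n_features)

distances = cosine_distances(matrix, vector).ravel()   # shape (10000,)
```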
36,921,961 | 2016-04-28T17:46:00.000 | 0 | 0 | 1 | 0 | python,performance,opencv,compilation,cython | 68,894,961 | 4 | false | 0 | 0 | If you try to find your answer using cython with Visual Studio to convert python code into pyd( Python Dynamic Module ) then, you will have a blurry answer. As, visual code that you expect to work might not due to compatibility issue with later versions. For instance, 1900, 1929 of msvc.
You will need to edit the cygwin compiler settings in distutils to get things done. If you want to use MinGW, then you also need to include the configuration of the compiler used in distutils.
A very simple way is to use Nuitka; it is a very simple and reliable way to convert Python code into C and a Python dynamic library. No configuration, no extra support required.
Let's grab basics
1). Install nuitka, pip install nuitka
2). Install MinGW, from SourceForge
3). Add MinGW to PATH
And everything is good to go now.
4).Open cmd as admin, type python -m nuitka --module file.py
Nuitka will create file.c, file.pyi(For imports) and file.cp39_architecture.pyd in current directory. From file.pyd you can import the module directly into your main.py and it will be lightning fast.
But, if you want to create a standalone application, then try:
python -m nuitka file.py | 2 | 10 | 0 | I'm creating a project that uses Python OpenCV. My image processing is a bit slow, so I thought I could make the code faster by creating a .pyd file (I read that somewhere).
I am able to create a .c file using Cython, but how to make a .pyd? While they are a kind of .dll, should I make a .dll first and convert it? And I think they're not platform-independent, what are equivalents on Unix?
Thanks for any help! | How to create a .pyd file? | 0 | 0 | 0 | 28,584 |
36,922,177 | 2016-04-28T17:58:00.000 | 3 | 0 | 0 | 1 | python,bash,ubuntu | 36,922,507 | 2 | false | 0 | 0 | The rsync command is the right out-of-the-box solution to this problem. From the manpage:
It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
A simple loop of rsync and sleep will do for you. | 1 | 0 | 0 | I have some csv files which are continuously updated with new entries.
I want to write a script to copy those files to another server which is going to copy continuously without any repeating.
How can I manage to do that with a bash, or python script?
Thanks, | Copying a continuously growing file from one server to another in ubuntu bash | 0.291313 | 0 | 0 | 1,310 |
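Since the question allows either bash or Python, here is a hedged Python sketch of the rsync-plus-sleep loop suggested in the answer (paths and host are placeholders):

```python
# Sketch only: source path and destination are placeholders.
import subprocess
import time

SRC = "/var/data/growing.csv"
DST = "user@backup-host:/var/data/"

while True:
    # --append-verify (rsync >= 3.0) sends only the newly appended part of the file
    subprocess.call(["rsync", "-az", "--append-verify", SRC, DST])
    time.sleep(60)
```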
36,923,007 | 2016-04-28T18:42:00.000 | 1 | 0 | 1 | 1 | python,scheduled-tasks | 36,923,390 | 1 | true | 0 | 0 | The answer to this question will likely depend on your platform, the available facilities and your particular project needs.
First let me address system resources. If you want to use the fewest resources, just call time.sleep(NNN), where NNN is the number of seconds until the next instance of 10AM. time.sleep will suspend execution of your program and should consume zero (or virtually zero resources). The python GC may periodically wake up and do maintenance, but it's work should be negligible.
If you're on Unix, cron is the typical facility for scheduling future tasks. It implements a fairly efficient Franta–Maly event list manager. It will determine based on the list of tasks which will occurr next and sleep until then.
On Windows, you have the Schedule Manager. It's a Frankenstein of complexity -- but it's incredibly flexible and can handle running missed events due to power outages and laptop hibernates, etc... | 1 | 0 | 0 | Let's say I want to run some function once a day at 10 am.
Do I simply keep a script running in the background forever?
What if I don't want to keep my laptop open/on for many days at a time?
Will the process eat a lot of CPU?
Are the answers to these questions different if I use cron/launchd vs scheduling programmatically? Thanks! | How are scheduled Python programs typically ran? | 1.2 | 0 | 0 | 64 |
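A small sketch of the "sleep until the next 10 AM" computation described in the answer (run_daily_job is a placeholder for the real task):

```python
import time
from datetime import datetime, timedelta

def run_daily_job():
    print("running the daily task")   # placeholder for the real work

def seconds_until(hour=10, minute=0):
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:                 # already past 10 AM today -> use tomorrow
        target += timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(10))     # essentially zero CPU while sleeping
    run_daily_job()
```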
36,924,296 | 2016-04-28T19:54:00.000 | 0 | 0 | 1 | 0 | python,raw-input | 36,924,497 | 2 | false | 0 | 0 | Well I'm not sure of having a program understand the english language. It will only take a string literal as a string literal. "Good" does not mean Good or Bad to the Interpreter in Python.
What I'd suggest is making a dictionary of all of the good phrases you want, such as I'm good, Feelin' great, I'm A OK. You can store all of these good feeling string literals to your "Good Feels" dictionary and vice versa for your Bad feels string literals.
I'm not too sure how you'd work around spelling with <100% accuracy and the interpreter still picking it up
I'm a bit inexperienced myself, but I'd say a predefined dictionary is your best bet, maybe throw in an else statement that prompts the user to spell correctly if he can't get one of the saying right. | 1 | 0 | 0 | (I am new to Python and programming) I am using raw_input() so I can make a program that talks to the user. For example:
Program: How are you?
User: I am doing great!/I feel terrible.
I need my program to respond accordingly, as in "YAY!" or "Aw man... I hope you feel better soon." so can you please give me ways to scan for words such as "good" or "bad" in the user's raw input so my program knows how to respond?
I know a few ways to do this, but the problem is, I want multiple words for it to look for, like great, amazing, and awesome can all be classified into the "good" group. AND, I need it where it doesn't have to be exact. I keep on running into problems where the user has to exactly type, "I am good." instead of all the different variations that they could possibly say it. THANK YOU IN ADVANCE! | Python: If Raw_Input Contains...BLAH | 0 | 0 | 0 | 362 |
36,925,440 | 2016-04-28T21:04:00.000 | 0 | 1 | 0 | 0 | python,django,unit-testing,django-compressor | 38,980,458 | 1 | false | 1 | 0 | I think if you also set COMPRESS_PRECOMPILERS = () in your test-specific settings, that should fix your problem. | 1 | 0 | 0 | When I run my unit tests I am getting UncompressableFileError for files installed through Bower. This happens because I don't run bower install in my unit tests and I don't want to have to run bower install for my unit tests.
Is there a way to disable django-compressor, or to mock the files so that this error doesn't happen?
I have COMPRESS_ENABLED set to False but no luck there, it still looks for the file. | Django-Compressor throws UncompressableFileError on bower installed asset | 0 | 0 | 0 | 99 |
36,926,819 | 2016-04-28T22:54:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,cluster-analysis | 36,930,521 | 1 | false | 0 | 0 | Nothing is free, and you don't want algorithms to perform unnecessary computations.
Inertia is only sensible for k-means (and even then, do not compare different values of k), and it's simply the variance sum of the data. I.e. compute the mean of every cluster, then the squared deviations from it. Don't compute distances, the equation is simply ((x-mu)**2).sum() | 1 | 0 | 1 | Scikit-learn MiniBatchKMeans has an inertia field that can be used to see how tight clusters are. Does the Birch clustering algorithm have an equivalent? There does not seem to be in the documentation.
If there is no built in way to check this measurement, does it make sense to find the average euclidian distance for each point's closest neighbor in each cluster., then find the mean of those average distances? | Can I get "inertia" for sklearn Birch clusters? | 0.197375 | 0 | 0 | 886 |
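A sketch of the manual computation the answer describes (squared deviations from each cluster's own mean) applied to Birch labels, with stand-in data:

```python
import numpy as np
from sklearn.cluster import Birch

X = np.random.rand(500, 2)                     # stand-in data
labels = Birch(n_clusters=5).fit_predict(X)

# inertia-style score: squared deviations from each cluster's own mean
inertia = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
              for k in np.unique(labels))
print(inertia)
```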
36,926,832 | 2016-04-28T22:55:00.000 | 0 | 0 | 1 | 0 | python,multithreading,sockets,udp,ports | 36,927,011 | 1 | false | 0 | 0 | If you are a host then you are creating a new socket for each new client. With that in mind you can create a program that listens for connections and then create a new thread for each connection (to a client). Each thread can do multiple tasks, control the socket and/or exchange data with the main thread.
The same applies for you being a client: you can create a new thread for each new connection.
I hope that helps. | 1 | 0 | 0 | I am working on a school project, and had an idea which i think will benifet me outside of school and is a bit over what school requires from me.
That is why i have a little lack of knowledge, everything regarding threads and dealing with multiple clients at once.
I had a few ideas, such as using UDP and wait for 2 connections and handle each one, but it made my code really messy and hard to follow, and really not efficent.
I would like to know if there is a good way to handle such a problem, and how. | Exchange data between 2 connections efficently python sockets | 0 | 0 | 1 | 54 |
36,927,432 | 2016-04-29T00:05:00.000 | 0 | 0 | 0 | 0 | python,numpy,scipy,complex-numbers,zooming | 36,928,073 | 1 | false | 0 | 0 | This is not a good answer but it seems to work quite well. Instead of using the default parameters for the zoom method, I'm using order=0. I then proceed to deal with the real and imaginary part separately, as described in my question. This seems to reduce the artifacts although some smaller artifacts remain. It is by no means perfect and if somebody has a better answer, I would be very interested. | 1 | 0 | 1 | I have a numpy array of values and I wanted to scale (zoom) it. With floats I was able to use scipy.ndimage.zoom but now my array contains complex values which are not supported by scipy.ndimage.zoom. My workaround was to separate the array into two parts (real and imaginary) and scale them independently. After that I add them back together. Unfortunately this produces a lot of tiny artifacts in my 'image'. Does somebody know a better way? Maybe there also exists a python library for this? I couldn't find one.
Thank you! | Scipy zoom with complex values | 0 | 0 | 0 | 215 |
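For reference, a sketch of the real/imaginary workaround with order=0 described in this thread (the array is a stand-in):

```python
import numpy as np
from scipy.ndimage import zoom

a = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)   # stand-in complex image
factor = 2.0

zoomed = zoom(a.real, factor, order=0) + 1j * zoom(a.imag, factor, order=0)
```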
36,928,288 | 2016-04-29T01:54:00.000 | 0 | 0 | 0 | 0 | python,django | 36,930,928 | 2 | false | 1 | 0 | Did you install dev packages? assuming that you have already installed psycopg2
if not and if you are on ubuntu do this sudo apt-get install libpq-dev python-dev | 1 | 1 | 0 | After completing the installation of python djnago perfectly. When running a command
"python manage.py runserver"
getting an error like
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2' | python django No module named 'psycopg2' error | 0 | 0 | 0 | 1,046 |
36,932,420 | 2016-04-29T07:39:00.000 | 19 | 0 | 1 | 1 | python,ubuntu | 36,932,680 | 3 | true | 0 | 0 | If you are writing the output to the same file in disk, then yes, it will be overwritten. However, it seems that you're actually printing to the stdout and then redirect it to a file. So that is not the case here.
Now answer to your question is simple: there is no interaction between two different executions of the same code. When you execute a program or a script OS will load the code to the memory and execute it and subsequent changes to code has nothing to do with the code that is already running. Technically a program that is running is called a process. Also when you run a code on two different terminals there will be two different processes on the OS one for each of them and there is no way for two process to interfere unless you explicitly do that (IPC or inter-process communication) which you are doing here.
So in summary you can run your code simultaneously on different terminals they will be completely independent. | 2 | 15 | 0 | I have a python script which takes a while to finish its executing depending on the passed argument. So if I run them from two terminals with different arguments, do they get their own version of the code? I can't see two .pyc files being generated.
Terminal 1 runs: python prog.py 1000 > out_1000.out
Before the script running on terminal 1 terminates, I start another one; thus terminal 2 runs: python prog.py 100 > out_100.out
Or basically my question is could they interfere with each other? | Run same python code in two terminals, will them interfere each other? | 1.2 | 0 | 0 | 6,112 |
36,932,420 | 2016-04-29T07:39:00.000 | 3 | 0 | 1 | 1 | python,ubuntu | 36,932,532 | 3 | false | 0 | 0 | Each Python interpreter process is independent. How the script reacts to itself being run multiple times depends on the exact code in use, but in general they should not interfere. | 2 | 15 | 0 | I have a python script which takes a while to finish its executing depending on the passed argument. So if I run them from two terminals with different arguments, do they get their own version of the code? I can't see two .pyc files being generated.
Terminal 1 runs: python prog.py 1000 > out_1000.out
Before the script running on terminal 1 terminates, I start another one; thus terminal 2 runs: python prog.py 100 > out_100.out
Or basically my question is could they interfere with each other? | Run same python code in two terminals, will them interfere each other? | 0.197375 | 0 | 0 | 6,112 |
36,935,030 | 2016-04-29T09:47:00.000 | 0 | 0 | 0 | 0 | python-2.7,google-chrome,cookies | 36,935,556 | 1 | true | 0 | 0 | A solution turned to be tricky: when you add or change profiles, sometimes Chrome changes folders where it stores cookies. In my case the solution was to change "Default" word in cookie path to "Profile 2". | 1 | 0 | 0 | I was using browsercookie library and it was awesome. However, at some moment it just stopped working and I cannot see why.
Basically, it throws the error that it cannot locate cookie file at /Users/UserName/Library/Application Support/Google/Chrome/Default/Cookies
Google and Stackoverflow search does not give a hint where to look for an error. Would appreciate any help.
Mac OS 10.11.3, Chrome Version 50.0.2661.86 (64-bit), python2.7, pysqlite preinstalled. | BrowserCookieError: Can not find cookie file | 1.2 | 0 | 1 | 910 |
36,941,823 | 2016-04-29T15:08:00.000 | -1 | 0 | 0 | 0 | python,django,entity-attribute-value | 40,278,064 | 2 | false | 1 | 0 | I am trying to answer,let me know, wheather we are on a same plane. I think, you need to formulate EAV database scema first. For that identify what are the entities,attributes, and the associated values. Here, in the example mentioned by you, entity maybe device and it's attribute maybe setting. If we take other example, say in case of car sales, entity is sales recipt, attribute is product purchased by the customer(car), and values are price, car model, car colour etc.
Make master tables and tables that stores mappings if any.
This schema implementation in models.py will make your models, and insert values in those models through shell, or insert script. | 1 | 6 | 0 | I need to implement a fairly standard entity-attribute-value hierarchy. There are devices of multiple types, each type has a bunch of settings it can have, each individual device has a set of particular values for each setting. It seems that both django-eav and eav-django packages are no longer maintained, so I guess I need to roll my own. But how do I architect this? So far, I am thinking something like this (skipping a lot of detail)
class DeviceType(Model):
    name = CharField()

class Device(Model):
    name = CharField()
    type = ForeignKey(DeviceType)

class Setting(Model):
    name = CharField()
    type = CharField(choices=(('Number', 'int'), ('String', 'str'), ('Boolean', 'bool')))
    device_type = ForeignKey(DeviceType)

class Value(Model):
    device = ForeignKey(Device)
    setting = ForeignKey(Setting)
    value = CharField()

    def __setattr__(self, name, value):
        if name == 'value':
            ... do validation based on the setting type ...

    def __getattr__(self, name):
        if name == 'value':
            ... convert string to whatever is the correct value for the type ...
Am I missing something? Is there a better way of doing this? Will this work? | How to implement EAV in Django | -0.099668 | 0 | 0 | 2,167 |
36,943,283 | 2016-04-29T16:25:00.000 | 1 | 0 | 0 | 0 | python-2.7,scipy | 36,951,224 | 1 | true | 0 | 0 | All these are in weave, which is not used anywhere else in scipy itself. So unless you're using weave directly, you're likely OK. And there is likely no reason to use weave in new code anyway. | 1 | 0 | 1 | Having some problems with scipy. Installed latest version using pip (0.17.0). Run scipy.test() and I'm getting the following errors. Are they okay to ignore? I'm using python 2.7.6.
Thanks for your help.
======================================================================
ERROR: test_add_function_ordered (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 477, in test_add_function_ordered
q.add_function('f',string.upper)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 833, in add_function
self.add_function_persistent(code,function)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_add_function_persistent1 (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 466, in test_add_function_persistent1
q.add_function_persistent('code',i)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_get_existing_files2 (test_catalog.TestCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 394, in test_get_existing_files2
q.add_function('code', os.getpid)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 833, in add_function
self.add_function_persistent(code,function)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 849, in add_function_persistent
cat = get_catalog(cat_dir,mode)
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
======================================================================
ERROR: test_create_catalog (test_catalog.TestGetCatalog)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/tests/test_catalog.py", line 286, in test_create_catalog
cat = catalog.get_catalog(pardir,'c')
File "/usr/local/lib/python2.7/dist-packages/scipy/weave/catalog.py", line 486, in get_catalog
sh = shelve.open(catalog_file,mode)
File "/usr/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib/python2.7/shelve.py", line 222, in init
import anydbm
File "/usr/lib/python2.7/anydbm.py", line 50, in
_errors.append(_mod.error)
AttributeError: 'module' object has no attribute 'error'
Ran 20343 tests in 138.416s
FAILED (KNOWNFAIL=98, SKIP=1679, errors=4) | scipy.test() results in errors | 1.2 | 0 | 0 | 82 |
36,944,403 | 2016-04-29T17:37:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,jupyter-notebook | 36,944,682 | 3 | false | 0 | 0 | If you're using the web interface, simply hit the interrupt kernel button in the toolbar. This throws a KeyboardInterrupt and immediately stops execution of the cell. | 1 | 3 | 0 | I'm using the Jupyter (previously iPython Notebooks) environment, with Python 3, and I want a program to terminate early.
Usually in Python, I'd just do a raise SystemExit or sys.exit().
However, in the Jupyter environment, I'm stuck with the ugly message "An exception has occurred, use %tb to see the full traceback." Is there a way in Jupyter to just tell the program to terminate immediately, without error?
Thanks. | Stopping Program in Jupyter | 0 | 0 | 0 | 2,063 |
36,946,288 | 2016-04-29T19:40:00.000 | 1 | 1 | 0 | 1 | python,git | 36,946,689 | 1 | true | 0 | 0 | Check the .git/FETCH_HEAD for the time stamp and the content.
Every time you fetch, Git updates that file's content and modification time. | 1 | 0 | 0 | TL;DR
I would like to be able to check whether a git repo (located on a shared network drive) was updated, without using a git command. I was thinking of checking one of the files located in the .git folder to do so, but I can't find the best file to check. Does anyone have a suggestion on how to achieve this?
Why:
The reason why I need to do this is because I have many git repos located on a shared drive. From a python application I built, I synchronize the content of some of these git repo on a local drive on a lot of workstation and render nodes.
I don't want to use git because the git server is not powerful enough to handle the number of requests that all the computers in the studio would constantly need to make.
This is why I ended up with the solution of putting the repos on the network server and syncing the repo content to a local cache on each computer using rsync.
That works fine, but as time goes by the repos are getting larger and the rsync is taking too much time. So I would ideally like to check one file that tells me whether the local copy is out of sync with the network copy, and perform the rsync only when it is.
Thanks | How to check if a git repo was updated without using a git command | 1.2 | 0 | 0 | 63 |
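A minimal sketch of the check suggested in the answer above: compare the modification time of .git/FETCH_HEAD on the network copy against a locally stored timestamp before deciding to run rsync. The repo path and the stamp-file name are placeholders, not from the original post:

import os

def repo_updated_since(repo_path, last_sync_stamp_file):
    # Return True if .git/FETCH_HEAD changed after the last recorded sync.
    fetch_head = os.path.join(repo_path, ".git", "FETCH_HEAD")
    if not os.path.exists(fetch_head):
        return True  # no FETCH_HEAD yet -- assume we need to sync
    if not os.path.exists(last_sync_stamp_file):
        return True  # never synced before
    return os.path.getmtime(fetch_head) > os.path.getmtime(last_sync_stamp_file)

# Example: only run rsync when the network copy looks newer,
# then touch the stamp file to record the sync time.
if repo_updated_since("/mnt/share/some_repo", "/tmp/some_repo.last_sync"):
    pass  # call rsync here, then update the stamp file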
36,948,236 | 2016-04-29T22:11:00.000 | 1 | 0 | 1 | 0 | python,windows,scrapy,anaconda | 36,999,348 | 1 | true | 1 | 0 | Try the command scrapy.bat startproject tutorial, it should solve the problem.
And you don't need to edit the environment path. | 1 | 2 | 0 | I used Anaconda to install Scrapy on a Windows 10 system, but I cannot start Scrapy with scrapy startproject tutorial; I get the feedback "bash: scrapy: command not found".
After searching the internet, I found a suggestion from a similar topic to append the path C:\Users\conny\Anaconda2\Lib\site-packages\scrapy to the PATH environment variable, but it still doesn't work.
Do you have any idea what the problem is? | python scrapy can not start project | 1.2 | 0 | 0 | 702
36,949,405 | 2016-04-30T00:28:00.000 | 1 | 1 | 1 | 0 | python,pythonanywhere,xgboost | 36,953,546 | 2 | false | 0 | 0 | You probably installed it for a version of Python that is different to the one that you're running. | 2 | 0 | 0 | I installed xgboost in PythonAnywhere and it shows successful but when I import it in a script, an error is given which says, "No module xgboost found". What can be the reason? | Xgboost giving import error in pythonAnywhere | 0.099668 | 0 | 0 | 136 |
36,949,405 | 2016-04-30T00:28:00.000 | 0 | 1 | 1 | 0 | python,pythonanywhere,xgboost | 46,962,994 | 2 | false | 0 | 0 | In my case, I use Anaconda2 and installed xgboost through git. Everything was ok but I got this message while trying to use import xgboost:
No module xgboost found
When I ran pip install xgboost I got the message that everything was OK and xgboost was installed.
I went into ../Anaconda2/Lib/site-packages and saw a folder xgboost-0.6-py2.7.egg, and inside it there was another folder named xgboost. I just copied that xgboost folder and pasted it directly into ../Anaconda2/Lib/site-packages. And now it works =) | 2 | 0 | 0 | I installed xgboost in PythonAnywhere and it shows successful but when I import it in a script, an error is given which says, "No module xgboost found". What can be the reason? | Xgboost giving import error in pythonAnywhere | 0 | 0 | 0 | 136
36,950,087 | 2016-04-30T02:38:00.000 | 3 | 0 | 1 | 0 | python | 36,950,102 | 1 | true | 0 | 0 | It's just a personal preference of the language designer. Neither way is more correct than the other. But Python tends toward making things explicit, so you see design decisions tilting in this direction. | 1 | 0 | 0 | I understand that self allows a method to act on specific data member of a class instance, but why is it necessary that self is included as a function parameter? Why is it not just a keyword like 'this' in C++? | Why is 'self' required to be a function parameter? | 1.2 | 0 | 0 | 40 |
36,950,694 | 2016-04-30T04:23:00.000 | 0 | 0 | 0 | 1 | python,ios,appium | 38,962,272 | 3 | false | 0 | 0 | Install libimobiledevice. Run command idevicesyslog using python and capture the logs. | 3 | 1 | 0 | I am new to MAC world. The requirement is to capture ios logs to a file and grep it for a ip address. Using Appium and python2.7, is there any way to do it without launching xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! | Capture IOS logs to a text file with an automated script | 0 | 0 | 0 | 860 |
36,950,694 | 2016-04-30T04:23:00.000 | 0 | 0 | 0 | 1 | python,ios,appium | 36,951,633 | 3 | false | 0 | 0 | Do you mean capturing the app log?
If so, that happens automatically;
you do not need to launch Xcode.
It's not about Xcode; it depends on where the log files are written
and how you do the logging. | 3 | 1 | 0 | I am new to MAC world. The requirement is to capture ios logs to a file and grep it for a ip address. Using Appium and python2.7, is there any way to do it without launching xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! | Capture IOS logs to a text file with an automated script | 0 | 0 | 0 | 860 |
36,950,694 | 2016-04-30T04:23:00.000 | 0 | 0 | 0 | 1 | python,ios,appium | 38,728,406 | 3 | false | 0 | 0 | Installed Apple configurator 2 on my mac.
I ran the command /usr/local/bin/cfgutil syslog at the command-line prompt to see the log. | 3 | 1 | 0 | I am new to MAC world. The requirement is to capture ios logs to a file and grep it for a ip address. Using Appium and python2.7, is there any way to do it without launching xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! | Capture IOS logs to a text file with an automated script | 0 | 0 | 0 | 860 |
36,956,477 | 2016-04-30T15:03:00.000 | 1 | 0 | 0 | 1 | python,linux,python-3.x,dbus,bluez | 36,988,374 | 1 | true | 0 | 0 | A system update resolved this problem. | 1 | 1 | 0 | I have a BLE device which has a bunch of GATT services running on it. My goal is to access and read data from the service characteristics on this device from a Linux computer (BlueZ version is 5.37). I have enabled experimental mode - therefore, full GATT support should be available. BlueZ's DBUS API, however, only provides the org.bluez.GattManager1 interface for the connected device, and not the org.bluez.GattCharacteristic1 or org.bluez.GattService1 interfaces which I need. Is there something I'm doing wrong? The device is connected and paired, and really I've just run out of ideas as how to make this work, or what may be wrong.
If it helps, I'm using Python and the DBUS module to interface with BlueZ. | BlueZ DBUS API - GATT interfaces unavailable for BLE device | 1.2 | 0 | 0 | 1,158 |
36,957,843 | 2016-04-30T17:14:00.000 | 0 | 1 | 1 | 0 | python,shell,path,environment-variables | 36,957,901 | 2 | false | 0 | 0 | PYTHONPATH is the default search path for importing modules. If you use bash, you could type echo $PYTHONPATH to look at it. | 1 | 3 | 0 | What is the $PYTHONPATH variable, and what's the significance in setting it?
Also, if I want to know the content of my current pythonpath, how do I find that out? | Trying to understand the pythonpath variable | 0 | 0 | 0 | 464 |
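Complementing the echo $PYTHONPATH suggestion in the answer above, a quick way to inspect both the raw variable and the effective import search path from inside Python:

import os
import sys

# The raw environment variable (may be unset if you never exported it).
print(os.environ.get("PYTHONPATH", "<not set>"))

# The effective module search path Python actually uses;
# it contains the PYTHONPATH entries plus the standard locations.
print(sys.path)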
36,959,589 | 2016-04-30T19:59:00.000 | 3 | 1 | 0 | 0 | python,c,multithreading,linear-algebra,hdf5 | 36,959,985 | 1 | true | 0 | 0 | Hardware
As Sven Marnach wrote in the comments, your problem is most likely I/O bound since disk access is orders of magnitude slower than RAM access.
So the fastest way is probably to have a machine with enough memory to keep the whole matrix multiplication and the result in RAM. It would save lots of time if you read the matrix only once.
Replacing the hard disk with an SSD would also help, because an SSD can read and write a lot faster.
Software
Barring that, for speeding up reads from disk, you could use the mmap module. This should help, especially once the OS figures out you're reading pieces of the same file over and over and starts to keep it in the cache.
Since the calculation can be done by row, you might benefit from using numpy in combination with a multiprocessing.Pool for that calculation. But only really if a single process cannot use all of the available disk read bandwidth. | 1 | 1 | 1 | I have a simple problem: multiply a matrix by a vector. However, the implementation of the multiplication is complicated because the matrix is 18 GB (3000^2 by 500).
Some info:
The matrix is stored in HDF5 format. It's Matlab output. It's dense so no sparsity savings there.
I have to do this matrix multiplication roughly 2000 times over the course of my algorithm (MCMC Bayesian Inversion)
My program is a combination of Python and C, where the Python code handles most of the MCMC procedure: keeping track of the random walk, generating perturbations, checking MH Criteria, saving accepted proposals, monitoring the burnout, etc. The C code is simply compiled into a separate executable and called when I need to solve the forward (acoustic wave) problem. All communication between the Python and C is done via the file system. All this is to say I don't already have ctype stuff going on.
The C program is already parallelized using MPI, but I don't think that's an appropriate solution for this MV multiplication problem.
Our program is run mainly on linux, but occasionally on OSX and Windows. Cross-platform capabilities without too much headache is a must.
Right now I have a single-thread implementation where the python code reads in the matrix a few thousand lines at a time and performs the multiplication. However, this is a significant bottleneck for my program since it takes so darn long. I'd like to multithread it to speed it up a bit.
I'm trying to get an idea of whether it would be faster (computation-time-wise, not implementation time) for python to handle the multithreading and to continue to use numpy operations to do the multiplication, or to code an MV multiplication function with multithreading in C and bind it with ctypes.
I will likely do both and time them since shaving time off of an extremely long running program is important. I was wondering if anyone had encountered this situation before, though, and had any insight (or perhaps other suggestions?)
As a side question, I can only find algorithmic improvements for nxn matrices for m-v multiplication. Does anyone know of one that can be used on an mxn matrix? | Efficient Matrix-Vector Multiplication: Multithreading directly in Python vs. using ctypes to bind a multithreaded C function | 1.2 | 0 | 0 | 342 |
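A sketch of the row-block approach discussed in the answer above, reading the HDF5 matrix in chunks so only a slice is ever in memory. The file path, dataset name, and block size are assumptions; h5py is used here, although the answer also mentions mmap as an alternative:

import h5py
import numpy as np

def chunked_matvec(h5_path, dataset_name, vec, block_rows=1000):
    # Compute A.dot(vec) without loading the full matrix A into RAM.
    with h5py.File(h5_path, "r") as f:
        A = f[dataset_name]            # on-disk dataset, shape (n_rows, n_cols)
        n_rows = A.shape[0]
        out = np.empty(n_rows, dtype=np.float64)
        for start in range(0, n_rows, block_rows):
            stop = min(start + block_rows, n_rows)
            block = A[start:stop, :]   # only this slice is read from disk
            out[start:stop] = np.dot(block, vec)
    return out

# result = chunked_matvec("forward_model.h5", "G", np.ones(500))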
36,959,902 | 2016-04-30T20:28:00.000 | 3 | 0 | 0 | 0 | python,user-interface,tkinter | 36,961,427 | 1 | true | 0 | 1 | You cannot scroll a label. I suggest using an entry widget. You can set the state to disabled to prevent users from using it like an entry widget, and you can change the borders to make it look like a label. | 1 | 1 | 0 | I am trying to create a label for a string in TKinter. The string can be very long, and greater than the length of the label. Therefore, I wanted to implement a label which can scroll sideways, to show the entirety of the string.
How would you do this in TKinter? | Scrollable label for TKinter? | 1.2 | 0 | 0 | 379 |
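A small sketch of the answer's suggestion: an Entry widget made read-only and flat so it looks like a label, with a horizontal scrollbar so long text can be scrolled sideways. The widget names and sample text are illustrative (in Python 2 the module is Tkinter rather than tkinter):

import tkinter as tk

root = tk.Tk()
text = tk.StringVar(value="A very long string " * 20)

# Read-only Entry styled to resemble a label.
fake_label = tk.Entry(root, textvariable=text, state="readonly",
                      relief="flat", readonlybackground=root.cget("bg"))
fake_label.pack(fill="x", padx=5)

# Horizontal scrollbar wired to the Entry's x-view.
scroll = tk.Scrollbar(root, orient="horizontal", command=fake_label.xview)
fake_label.configure(xscrollcommand=scroll.set)
scroll.pack(fill="x", padx=5)

root.mainloop()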
36,960,431 | 2016-04-30T21:26:00.000 | 1 | 1 | 0 | 1 | python,qpython | 42,102,991 | 2 | false | 0 | 0 | You need a compiler to build the cryptography module, and one is not included. The best option is to get the cross-compiler and then build the module yourself. I don't see any prebuilt ssh/paramiko module for QPython.
Maybe you can try other libraries, such as busybox/ssh or dropbear for ARM.
Update
I've taken a proper look at the QPython modules, and both OpenSSL and SSH are preinstalled. You don't need to install them.
I'm still having problems with the Crypto module; I can't see how useful the ssh module is without the Crypto one ... omg.
Update 2
I tried the QPyPi lib manager and found cryptography in the list, but at install time it couldn't be found. I can't believe how difficult it is to get ssh working with QPython. | 1 | 1 | 0 | I have run into errors trying to pip install fabric, or paramiko (results in a pycrypto install RuntimeError: chmod error).
Is there a way to ssh from within a qpython script? | Is there a way to ssh with qpython? | 0.099668 | 0 | 0 | 650 |
36,960,576 | 2016-04-30T21:42:00.000 | 2 | 0 | 1 | 0 | python,sorting | 36,960,631 | 1 | true | 0 | 0 | Use a tuple of (my_date, my_time) as the "single element" you're sorting on. You could build a datetime.datetime object from the two, but that seems unnecessary just to sort them.
This applies in general to any situation where you want a lexicographical comparison between multiple quantities. "Lexicographical" meaning, most-significant first with less-significant quantities as tie-breakers, which is exactly what the standard comparisons do for tuple. | 1 | 0 | 0 | I have a list of objects that each contain a datetime.date() and a datetime.time() element. I know how to sort the array based on a single element using insertion sort, or any other sorting algorithm. However, how would I sort this list in chronological order using date AND time? | Python: sort by date and time? | 1.2 | 0 | 0 | 797 |
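A minimal illustration of the tuple-key approach from the answer above; the object type and the attribute names date and time are assumptions, standing in for the asker's real objects:

import datetime
from operator import attrgetter
from collections import namedtuple

Event = namedtuple("Event", ["name", "date", "time"])  # stand-in for the real objects

events = [
    Event("b", datetime.date(2016, 5, 1), datetime.time(9, 30)),
    Event("a", datetime.date(2016, 5, 1), datetime.time(8, 15)),
    Event("c", datetime.date(2016, 4, 30), datetime.time(23, 59)),
]

# date is compared first; time breaks ties -- chronological order overall.
events.sort(key=attrgetter("date", "time"))
print(events)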
36,961,672 | 2016-05-01T00:22:00.000 | 2 | 0 | 0 | 0 | web-services,amazon-web-services,amazon-ec2,flask-sqlalchemy,python-webbrowser | 36,962,685 | 1 | false | 1 | 0 | It seems that the web service isn't up and running, or it is not listening on the right port, or it is listening only on the 127.0.0.1 address. Check with the 'sudo netstat -tnlp' command; you should see the process name and which IP and port it is listening on. | 1 | 0 | 0 | I hosted a Python/Flask web service on my Amazon (AWS) EC2 instance and modified the security group rules such that all inbound traffic is allowed.
I can log in via ssh and ping (with the public IP) works fine, but I couldn't open the service URL from a web browser. Could anyone please suggest how I can debug this issue?
Thanks, | Web service hosted on EC2 host is not reachable from browser | 0.379949 | 0 | 1 | 543 |
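One cause consistent with the answer above is the Flask development server listening only on 127.0.0.1. A hedged sketch of binding it to all interfaces (the port is an example, and the EC2 security group must still allow it; use a real WSGI server for production):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # 0.0.0.0 makes the dev server reachable from outside the instance,
    # not just from localhost.
    app.run(host="0.0.0.0", port=5000)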
36,962,317 | 2016-05-01T02:10:00.000 | 0 | 0 | 0 | 0 | python-2.7,user-interface,python-3.x,wxpython | 36,962,367 | 1 | false | 0 | 0 | I tried going through my code and deleting every single "wx.", and now it works. I guess Phoenix doesn't need that. | 1 | 0 | 0 | I'm taking a course in Python, and the current assignment is to convert a previous assignment written in Python 2 (which used wxPython) to Python 3 (which needs Phoenix). I successfully installed Phoenix, and in the Py3 shell I can now import wx just fine. However, if I try to run my actually script, it immediately gets this error:
Traceback (most recent call last):
File "C:\Python27\transferdrillPy3.py", line 10, in
class windowClass(wx.Frame):
NameError: name 'wx' is not defined
What's up with that? | Can't import wx (wxPython Phoenix) into my script | 0 | 0 | 0 | 233 |
36,962,378 | 2016-05-01T02:21:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 36,962,439 | 3 | false | 0 | 0 | You can separate each statement with a semi-colon like so...
x = 0; y = 5
while(x < y): print(x); x=x+1 | 1 | 1 | 0 | In Python, I am trying to make a variable increment in value while it is less than another number. I know that it is possible to do a for loop in the form (print(x) for x in range(1, 5)). My question is, is there a similar way to do a while loop in this form, such as x += 1 while x < y? | Python single line while loop | 0 | 0 | 0 | 5,023 |
36,966,042 | 2016-05-01T11:19:00.000 | 2 | 0 | 0 | 0 | python,pythonanywhere,alexa-skills-kit | 36,969,127 | 2 | true | 1 | 0 | You can't, but I seriously doubt that anyone would write code that would fall down when there were extra headers fields in a request. Perhaps you're misinterpreting the error. | 1 | 1 | 0 | I am trying to develop a web service back-end for an Alexa skill, and this requires me to have very specific headers in the HTTP response.
Looking at the details of my response (using hurl.it), I have a whole bunch of HTTP headers that Amazon doesn't want. How can I remove the 'X-Clacks-Overhead', the 'Server', and similar response headers?
I am using Flask and Python 3. | Remove Headers from Flask Response | 1.2 | 0 | 0 | 6,083 |
36,968,992 | 2016-05-01T16:26:00.000 | 0 | 0 | 1 | 0 | python-2.7,loops,pygame,sprite | 37,002,212 | 2 | false | 0 | 1 | Try making one group for all sprites in your game, and filling it while filling the separate groups | 2 | 0 | 0 | I guess this might be a trivial question, nonetheless I couldn't find the answer anywhere.
I am currently building a small rpg style game and I am starting to have quite a few sprite groups.
I was wondering if there was a way to ask Pygame to refer to all existing groups ?
I would need this function to be able to move sprites which are deleted from my game to a deleted_sprite_group that I have.
At the moment I am adding all the groups to a list and iterating through that list but this requires some maintenance.
Note: I am not that lazy as to mind using a list but I try and optimize and clean up my code every so often. ;)
Thank you for your help ! | Is there a way to iterate through all existing groups automatically? | 0 | 0 | 0 | 36 |
36,968,992 | 2016-05-01T16:26:00.000 | 0 | 0 | 1 | 0 | python-2.7,loops,pygame,sprite | 37,038,660 | 2 | true | 0 | 1 | From @Ni.
Sprite.kill() removes a sprite from all its groups | 2 | 0 | 0 | I guess this might be a trivial question, nonetheless I couldn't find the answer anywhere.
I am currently building a small rpg style game and I am starting to have quite a few sprite groups.
I was wondering if there was a way to ask Pygame to refer to all existing groups ?
I would need this function to be able to move sprites which are deleted from my game to a deleted_sprite_group that I have.
At the moment I am adding all the groups to a list and iterating through that list but this requires some maintenance.
Note: I am not that lazy as to mind using a list but I try and optimize and clean up my code every so often. ;)
Thank you for your help ! | Is there a way to iterate through all existing groups automatically? | 1.2 | 0 | 0 | 36 |
36,970,110 | 2016-05-01T18:06:00.000 | 9 | 1 | 0 | 1 | python,linux,embedded | 37,817,521 | 2 | true | 0 | 0 | Bus errors are generally caused by applications trying to access memory that the hardware cannot physically address. In your case there is a segmentation fault, which may be caused by dereferencing a bad pointer or something similar that ends up accessing a memory address which is not physically addressable. I'd start by root-causing the segmentation fault first, as the bus error is the secondary symptom. | 1 | 9 | 0 | I'm working on a Variscite board with a Yocto distribution and Python 2.7.3.
I sometimes get a Bus error message from the Python interpreter.
My program runs normally for at least some hours or days before the error occurs.
But once I get it, I get it again immediately when I try to restart my program.
I have to reboot before the system works again.
My program uses only a serial port, a bit of USB communication and some TCP sockets.
I can switch to another hardware and get the same problems.
I also used the python selftest with
python -c "from test import testall"
And I get errors for these two tests
test_getattr (test.test_builtin.BuiltinTest) ... ERROR test_nameprep
(test.test_codecs.NameprepTest) ... ERROR
And the selftest stops always at
test_callback_register_double (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Segmentation
fault
But when the systems runs some hours the selftests stops earlier at
ctypes.macholib.dyld
Bus error
I checked the RAM with memtester, it seems to be okay.
How can I find the cause of the problems? | How to determine the cause for "BUS-Error" | 1.2 | 0 | 0 | 17,329
36,970,312 | 2016-05-01T18:24:00.000 | 0 | 0 | 1 | 0 | python,file,raw-data | 36,970,338 | 3 | false | 0 | 0 | This output is OK.
Python is outputting this data with double backslashes to show that it is non-printable. However, it's stored correctly, as bytes. | 1 | 2 | 0 | I have following problem:
I want to read from file into a raw binary string :
The file looks like this (with escape characters, not binary data):
\xfc\xe8\x82\x00\x00\x00\x60\x89\xe5\x31\xc0\x64\x8b\x50\x30\x8b\x52
code used:
data = open("filename", "rb").read()
result obtained:
b"\\xfc\\xe8\\x82\\x00\\x00\\x00\\x60\\x89\\xe5\\x31\\xc0\\x64\\x8b\\x50\\x30\\x8b\\x52"
With double \ .
How can I read it as a binary string with real \xaa characters?
(Without escape characters) | read \xHH escapes from file as raw binary in Python | 0 | 0 | 0 | 1,438 |
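If the file really stores the escape sequences as plain text (as the question describes) rather than raw bytes, one way to turn that text into the intended raw bytes is a unicode_escape round-trip. This goes beyond the answer above, which only explains the doubled backslashes in the repr; it is a sketch, not the poster's solution:

with open("filename", "r") as f:          # read as text, not binary
    text = f.read().strip()

# '\\xfc\\xe8...' (literal characters) -> b'\xfc\xe8...' (real bytes)
raw = text.encode("latin-1").decode("unicode_escape").encode("latin-1")

print(repr(raw))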
36,971,884 | 2016-05-01T21:00:00.000 | 0 | 0 | 0 | 0 | python,file | 36,971,947 | 2 | false | 0 | 1 | For this to happen you need to use a relative path from your Game directory (assuming this is the working directory for your game).
Eg. "Game\Images\an image.gif" | 1 | 0 | 0 | so i'm going to be as in depth as possible, here's my problem:
I'm using the turtle.addshape() command to add images onto the turtle. In order to do this I have to do turtle.addshape(C:\Users\Username Here\Desktop\Game\Images\an image.gif)
The problem is with this, is that if I were to distribute my file, it would search for C:\Users\Username Here\Desktop\Game
I want it to find the image, WHEREVER the file is and WHOEVER is using it, for example:
C:\Users\ the computers user here \ where ever the file is located here \Game\Images\an image.gif
If You Can Help Me Please Do, It's Been Driving Me CRAZY - Thanks :D | Python - File Directory Issue | 0 | 0 | 0 | 179 |
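A sketch building on the answer's relative-path idea: resolve the image location from the script's own directory via __file__ rather than the current working directory, so the image is found wherever the Game folder ends up. The folder and file names are the asker's examples:

import os
import turtle

# Directory that contains this script, regardless of user name or drive.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

image_path = os.path.join(BASE_DIR, "Images", "an image.gif")
screen = turtle.Screen()
screen.addshape(image_path)   # register the gif so it can be used as a shape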
36,972,296 | 2016-05-01T21:40:00.000 | 0 | 0 | 0 | 0 | python,amazon-sqs | 36,972,378 | 2 | false | 1 | 0 | It looks like you can do the following:
Assigner
Reads from the assigner queue and assigns the proper IDs.
Packs the data into bulk batches and uploads them to S3.
Sends the S3 path to the Dumper queue.
The Dumper reads the bulk batches and dumps them into the DB in bulk. | 2 | 1 | 0 | I am trying to scale an export system that works in the following steps:
Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want.
Make an external API call to verify address information for each of them.
Make an internal API call to get store and price information about the product on each record.
Assign identifiers to each record in a specific format, which is different for each export.
Dump all the data into a file, zip it and email it.
As of now all of this happens in one monolithic Python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and the whole export process is slow because all the steps are blocking and sequential.
In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps:
Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id.
Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id.
Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id.
Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id.
Dumper queue - ???
This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most.
I'm stumped by how to add the last step in the process?
Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case?
Here are the options I could think of:
The dumper queue just saves it's incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure.
Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost.
None of the above options seem good enough. What other options do I have?
At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct.
Msg Delay Reasons
As asked in the comments:
Internal/External API response times may vary. Hard to quantify.
If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line. | Scaling a sequential program into chain of queues | 0 | 0 | 0 | 121 |
36,972,296 | 2016-05-01T21:40:00.000 | 0 | 0 | 0 | 0 | python,amazon-sqs | 61,071,601 | 2 | false | 1 | 0 | You should probably use a cache instead of a queue. | 2 | 1 | 0 | I am trying to scale an export system that works in the following steps:
Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want.
Make an external API call to verify address information for each of them.
Make an internal API call to get store and price information about the product on each record.
Assign identifiers to each record in a specific format, which is different for each export.
Dump all the data into a file, zip it and email it.
As of now all of this happens in one monolithic Python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and the whole export process is slow because all the steps are blocking and sequential.
In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps:
Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id.
Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id.
Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id.
Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id.
Dumper queue - ???
This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most.
I'm stumped by how to add the last step in the process?
Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case?
Here are the options I could think of:
The dumper queue just saves it's incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure.
Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost.
None of the above options seem good enough. What other options do I have?
At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct.
Msg Delay Reasons
As asked in the comments:
Internal/External API response times may vary. Hard to quantify.
If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line. | Scaling a sequential program into chain of queues | 0 | 0 | 0 | 121 |
36,976,966 | 2016-05-02T07:17:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,cloud9-ide | 36,977,498 | 2 | false | 1 | 0 | Ok, found it. While after installing on c9 there is the ~/workspace/tensorflow-path with all the files (incl. the ops-files) in them, actually there also is the /usr/local/lib/python2.7/dist-packages/tensorflow-path.
When running from the ~/workspace/tensorflow-path the ops-files are still loaded from the /usr...-path. So when editing my python/ops/seq2seq.py in the /usr..-path all is fine and I get access to my third return-value. | 1 | 2 | 0 | In a local installation I added a return value of model_with_buckets() in /python/ops/seq2seq.py. Works like magic (locally). Then I upload both my model-files (/models/rnn/translate/seq2seq_model.py) as well as my new /python/ops/seq2seq.py to cloud 9.
But then when I run it the system complains it's requesting 3 return values but only getting 2 (even though the new seq2seq.py should return 3). Does c9 cache those ops-files somewhere?
Thx | cloud9 installation doesnt let me edit /python/ops/seq2seq.py | 0.099668 | 0 | 0 | 139 |
36,978,007 | 2016-05-02T08:19:00.000 | 2 | 0 | 0 | 0 | python,selenium,switch-statement | 36,978,124 | 1 | false | 0 | 0 | With .get(url), just like you got to the first page. | 1 | 0 | 0 | My main Question:
How do I switch pages?
I did some things on a page and then switched to another one;
how do I update the driver to be the current page? | Selenium Python, New Page | 0.379949 | 0 | 1 | 108 |
36,980,514 | 2016-05-02T10:42:00.000 | 1 | 0 | 1 | 0 | python,plugins,gimp | 37,013,691 | 1 | true | 0 | 0 | So -
It is possible to create plug-ins for GIMP in Python, C, Scheme and some other languages, with varying levels of support, as no one maintains the bindings for some of them.
However, these plug-ins interact with GIMP only by exchanging data and issuing commands to GIMP through a GIMP-only "wire" protocol - it is not possible for a GIMP plug-in, in Python or otherwise, to create additional UI elements in GIMP beyond what it creates by itself in its own windows.
Also, it is not possible to receive events from GIMP's UI in the plug-in itself. To date, the workaround for plug-ins that need user input on the image itself is to draw an image preview on the plug-in window (which some plug-ins that ship with GIMP 2.8 do) - or, for example, to ask the user to create a specific Path, using the path nodes as markers that can be retrieved from the plug-in.
Due to this constraints, it is not possible to create a custom tool or path editor. You can however do these things in GIMP's main code itself and propose a patch to the project - but them, you have to use C + gobject. | 1 | 0 | 0 | So basically I'm looking for a way to write a GIMP plugin in Python that would create some additional user interface elements and allow me to add some functionality on top of what the GIMP has.
Question(s) are:
Is this even possible?
If yes then where is it best to look for a guide?
(or perhaps which source files I would be recommended to read, or maybe somebody could point me to a good example plugin I could make use of)
To give you a better glimpse of what I need: I want to create a custom tool-like plugin, a path editor, that would be able to display paths in the editing viewport and list them in an additional window. I welcome any tips on the topic. | Creating GIMP interface plugins | 1.2 | 0 | 0 | 825
36,983,393 | 2016-05-02T13:15:00.000 | 0 | 0 | 0 | 0 | javascript,jquery,python,django,file | 36,983,801 | 1 | false | 1 | 0 | One viable option that would work and not set out security alarms all over the place, would be to use file form field on your page and ask an end user to give that file to you.
Then you can use the HTML5 File API and do whatever you need in JavaScript, or send it to your server. | 1 | 0 | 0 | I need to access a local file on the client side of a Django project and read an XML file from the client's local disk, like C:\\test.xml
I am doing this in a single HTML-and-script file, using Chrome's --allow-file-access flag to get permission for this access, and it works; but when I move this code into my Django project and use this jQuery script in my HTML templates, it does not work and shows a cross-origin request ... error.
Please help me. Why is this happening and what is the solution?
Thanks. | access local file with jquery in client side of a Django project | 0 | 0 | 0 | 269 |
36,984,164 | 2016-05-02T13:53:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,plot | 36,984,838 | 1 | true | 0 | 0 | I found a way to do it.
First, you have to change the type of graph, from bars to normal (a line
plot: pyplot.plot()).
Then, when setting the errors, there's a parameter called
errorevery=z, which lets you put error bars every z samples. | 1 | 0 | 0 | My problem is that I have a bar plot in which I have from 100 to 500 elements in X, and if I put yerr on each one it turns out pretty ugly.
What I want to do is to put the yerr bars every ten elements (for example: the first element of x has the yerr bars, but from 2 to 9 don't; then, 10 has, but 11 to 19 don't... and so on).
You know some way to do this? | Python yerr bars every ten samples | 1.2 | 0 | 0 | 58 |
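A small sketch of the errorevery approach from the answer above, using plt.errorbar so error bars are drawn only on every tenth sample; the data here is synthetic:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(200)
y = np.sin(x / 20.0)
yerr = 0.1 * np.ones_like(y)

# errorevery=10 draws an error bar on every 10th point only.
plt.errorbar(x, y, yerr=yerr, errorevery=10, capsize=3)
plt.show()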
36,984,229 | 2016-05-02T13:56:00.000 | 2 | 0 | 1 | 0 | python,markdown,spyder,jupyter-notebook | 62,840,072 | 2 | false | 0 | 0 | If you want to comment/uncomment you could use CTRL +1 to change a single line, and with CTRL+4 you could change an entire block. | 2 | 13 | 0 | I was experimenting in the Spyder IDE and learned, that it is possible to define cells using the #%% separator. Now I couldn't find it, but is it possible to use markdown, like in a Jupyter notebook? | Markdown in Spyder IDE | 0.197375 | 0 | 0 | 8,988 |
36,984,229 | 2016-05-02T13:56:00.000 | 7 | 0 | 1 | 0 | python,markdown,spyder,jupyter-notebook | 44,282,393 | 2 | true | 0 | 0 | No. As far as I know, the Spyder IDE has no markdown support implemented the way Jupyter notebooks do, even when it is connected to a Jupyter notebook instance locally.
I would suggest using a Jupyter notebook for coding and annotation if you need such a thing; Spyder is just an IDE. But if you want to edit Jupyter notebooks in an IDE, take a look at the PyCharm IDE. | 2 | 13 | 0 | I was experimenting in the Spyder IDE and learned that it is possible to define cells using the #%% separator. Now I couldn't find it, but is it possible to use markdown, like in a Jupyter notebook? | Markdown in Spyder IDE | 1.2 | 0 | 0 | 8,988
36,985,522 | 2016-05-02T14:58:00.000 | 0 | 0 | 1 | 0 | python,ide,spyder | 37,000,145 | 1 | true | 0 | 0 | Just press Ctrl+Shift+I to display the IPython console,
or,
as a kind of workaround, go to View, click on Attached Console Window (debugging), and then open a new console; it shows the hidden console. | 1 | 0 | 0 | Sometimes when I use Spyder, if I press the wrong button or the computation is too intensive, my current IPython console literally vanishes from my screen. So I open another one with Console -> Open IPython console, but the other one still seems to be active somehow.
Is there a way to force the display of the ones that have disappeared? | Spyder retrieve hidden IPython console | 1.2 | 0 | 0 | 877
36,986,700 | 2016-05-02T16:00:00.000 | 1 | 0 | 1 | 0 | parallel-processing,python-3.4 | 36,986,878 | 1 | true | 0 | 0 | Have you tried "import threading" and "import Queue" in your code? They are both standard libs in Python. There should be no need for an install. | 1 | 0 | 0 | When I pip install or conda install "threading" I get an error saying it cannot be found, I am having a similar problem with Queue. Does Anaconda only fetch 64-bit libraries? I am trying to go through Parallel Programming with Python.
How do I install this library correctly?
Is any other information needed? | Unable to install threading with Anaconda 64-bit | 1.2 | 0 | 0 | 4,244
36,987,716 | 2016-05-02T17:00:00.000 | 1 | 0 | 1 | 0 | python,argparse,optparse | 36,988,831 | 2 | false | 0 | 0 | You may have to elaborate on what access you have to the scripts and their parsers. Are they black boxes that you can only invoke with -h and get back a help or usage message, or can you inspect the parser?
For example when using argparse, you make a parser, assign it some attributes and create argument Actions. Those action objects are collected in a list, parser._actions.
Look at the parser.format_help and parser.format_usage methods to see what values are passed to the help formatter to create the string displays.
Apart from examining the argparse.py file, I'd suggest creating a parser in an interactive session, and examine the objects that are created. | 1 | 1 | 0 | let me explain what I have in mind to do in order to give you some context.
I have a bunch of Python scripts (that use argparse or optparse) and their output usually goes to the console in JSON, plain text or CSV format.
I would like to build a webapp (Angular + Node, for instance) that automatically generates a web page for each of my scripts, including an input box for every argument needed by the Python script, so that I can run them from the UI.
I do not want to write, for each Python script, the list and types of arguments it needs to run; I am looking for an automatic way to extract such a list from each Python script itself.
I can try to parse the -h output for each of the scripts, or parse the script itself (add_option), but that may be error-prone.
Are you aware of any tools/script/module that allow me to do it automatically?
Thanks a lot. | Extract arguments from python script | 0.099668 | 0 | 0 | 1,093 |
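A sketch of the introspection route mentioned in the answer above: if you can import the module that builds the parser, its parser._actions list already describes every option (flags, type, default, help), which is enough to auto-generate a web form. The example parser here is made up for illustration:

import argparse

# Stand-in for one of the existing scripts' parsers.
parser = argparse.ArgumentParser(description="example")
parser.add_argument("--count", type=int, default=1, help="how many items")
parser.add_argument("--csv", action="store_true", help="emit CSV output")

for action in parser._actions:          # note: _actions is a private attribute
    print({
        "flags": action.option_strings,
        "dest": action.dest,
        "type": getattr(action.type, "__name__", None),
        "default": action.default,
        "help": action.help,
    })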
36,988,384 | 2016-05-02T17:42:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,charts,openpyxl | 36,988,605 | 2 | false | 0 | 0 | This isn't easy, but it should be possible. You will need to work through the XML source of a suitably formatted sample chart and see which particular variables need setting or changing. openpyxl implements the complete chart API, but this is unfortunately very complicated. | 1 | 1 | 0 | Using openpyxl, the charts inserted into my worksheet have a border on them. Is there any way to set the style of the chart (pie/bar), for instance via the styles.Style/styles.borders module, to have no border, or at least a thin white border, so that they would print borderless?
The only option I see on the object is .style = <int>, which doesn't seem to actually affect the design of the final graphic. | openpyxl - Ability to remove border from charts? | 0 | 1 | 0 | 1,846
36,993,230 | 2016-05-02T23:30:00.000 | 0 | 0 | 1 | 0 | python-sphinx,restructuredtext | 66,927,504 | 2 | false | 0 | 0 | Perhaps indicate the start and end of the section where the files should go with a comment (.. START_GLOB_INCLUDE etc), and then have a build pre-process step that finds the files you want and rewrites that section of the master file. | 1 | 7 | 0 | I am trying to write documentation and have multiple files used by multiple toctrees. Previously I used an empty file with .. include:: <isonum.txt>; however, this does not work for multiple files in a directory with subdirectories. Another solution I have used was a relative file path to the index file I am linking to, but this messes up the Sphinx nav tree. So my question is: how do I include a directory of files with RST and Sphinx? | How to include a directory of files with RST and Sphinx | 0 | 0 | 0 | 5,575
36,994,096 | 2016-05-03T01:17:00.000 | 3 | 0 | 1 | 0 | python,proxy,installation,pip,pycharm | 36,994,097 | 1 | false | 0 | 0 | This is pip's behavior when it can't connect to the internet properly. In my case, I had Fiddler running so pip couldn't get through Fiddler's proxy.
To anyone else getting this behavior: check your network, firewalls, proxies, and so on. | 1 | 5 | 0 | I'm on Windows 7 running PyCharm Community Edition 2016.1.2 and Python 3.4.3, and I have the following behavior:
Installing new packages in PyCharm (from Settings -> Project Interpreter) failed with the error message No matching distribution found for [package name], e.g. No matching distribution found for numpy.
Updating packages in PyCharm (from Settings -> Project Interpreter) claimed to succeed with a Package successfully installed notification, but did not change the package version in the table (and did not actually do the update).
Installing new packages on the command line with pip (e.g. pip install numpy failed with the same error message as PyCharm.
Updating packages on the command line gave the output Requirement already up-to-date and no update.
What???? | "No matching distribution found" on all pip installations in PyCharm | 0.53705 | 0 | 0 | 15,383 |
36,994,993 | 2016-05-03T03:13:00.000 | 5 | 0 | 1 | 0 | powershell-ise,python-interactive | 36,996,724 | 1 | false | 0 | 0 | When people say it "runs in the background" they mean that when you try to run Python in ISE, it opens a legacy console app which ISE illogically hides (even though it can't bridge your actions to that app).
If you run a script which runs and terminates, that's fine, you can do that. You can, for instance, run python --version ...
But you can only start it interactively if you run it as a separate window using the start-process command: start python
Frankly, you'll have a lot better success interacting with PowerShell using the native console version of PowerShell.exe instead of ISE -- there, you can run python interactively and get the output into PowerShell without redirecting it through files. | 1 | 1 | 0 | According Microsoft, we can't run interactive console like python in it's own powershell ISE console. According to some sources it runs in the background. Can we run the same python interpreter in foreground? | How to run python interpreter in windows powershell ISE? | 0.761594 | 0 | 0 | 5,798 |
36,995,801 | 2016-05-03T04:48:00.000 | 0 | 0 | 0 | 0 | mysql,mysql-workbench,mysql-python | 37,046,039 | 1 | false | 0 | 0 | Likely you do not have permission to create a database. | 1 | 0 | 0 | When am trying to create a database on server it shows an error
Error Code: 1006. Can't create database 'mmmm' (errno: 2)
How can I solve this error?
The server is mysql. | Error Code: 1006. Can't create database 'mmmm' (errno: 2) | 0 | 1 | 0 | 1,074 |
36,997,104 | 2016-05-03T06:26:00.000 | 3 | 0 | 0 | 0 | python,ip,haproxy | 36,997,238 | 1 | true | 0 | 0 | I had this issue with AWS ELB and Apache. The solution was mod_rpaf, which reads the X-Forwarded-For header and substitutes it into the standard remote-address field.
You should check that HAProxy is setting the X-Forwarded-For header (which contains the real client IP). You can use mod_rpaf or another technique to read the real IP. | 1 | 4 | 0 | In my python application, the browser sends a request to my server to fetch certain information. I need to track the IP of the source from where the request was made. Normally, I am able to fetch that info with this call:
request.headers.get('Remote-Addr')
But, when I deploy the application behind a load balancer like HaProxy, the IP given is that of the load balancer and not the browser's.
How do I obtain the IP of the browser at my server when it's behind a load balancer?
Another problem in my case is that I am using a TCP connection from the browser to my server via HAProxy, not HTTP. | How to track IP of the browser when there is a load balancer in the middle | 1.2 | 0 | 1 | 242
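A hedged sketch of reading the client IP from the X-Forwarded-For header, as the answer suggests. This only works when HAProxy runs in HTTP mode and is configured to add the header (option forwardfor); in pure TCP mode the header never exists. The request object here follows the question's own request.headers.get(...) style and is illustrative:

def client_ip(request):
    # X-Forwarded-For may hold a chain "client, proxy1, proxy2";
    # the left-most entry is the original client.
    forwarded = request.headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()
    # Fall back to the peer address (which will be the load balancer itself).
    return request.headers.get("Remote-Addr")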
36,998,698 | 2016-05-03T07:59:00.000 | 0 | 0 | 0 | 0 | python,google-analytics,google-api,google-analytics-api | 37,029,398 | 2 | false | 0 | 0 | Google Analytics stores a ton (technical term) of data; there are a lot of metrics and dimensions, and some of them (such as the users metric) have to be calculated specifically for every query. It's easy to underestimate the flexibility of Google Analytics, but the fact that it's easy to apply a carefully defined segment to three-year old data in real time means that the data will be stored in a horrendously complicated format, which is kept away from you for proprietary purposes.
So the data set would be vast, and incomprehensible. On top of that, there would be serious ramifications with regard to privacy, because of the way that Google stores the data (an issue which they can circumvent so long as you can only access the data through their protocols.
Short answer, you can take as much data as you can accurately describe and ask for, but there's no 'download all' button. | 1 | 1 | 0 | I'm using Google Analytics API to make a Python program.
For now it's capable to make specific querys, but...
Is it possible to obtain a large JSON with all the data in a Google Analytics account?
I've been searching and i didn't have found any answer.
Someone know if it's possible and how? | Query all data of a Google Analytcs account | 0 | 1 | 1 | 488 |
37,001,538 | 2016-05-03T10:20:00.000 | 5 | 0 | 0 | 0 | python,scikit-learn,tableau-api,qlikview | 37,810,021 | 2 | true | 0 | 0 | There's no straightforward route to calling Python from QlikView. I have used this:
Create a Python program that outputs CSV (or any file format that QlikView can read)
Invoke your Python program from the QlikView script: EXEC python3 my_program.py > my_output.csv
Read the output into QlikView: LOAD * FROM my_output.csv (...)
Note that the EXEC command requires the privilege "Can Execute External Programs" on the Settings tab of the script editor. | 1 | 2 | 1 | I am using Scikit-Learn and Pandas libraries of Python for Data Analysis.
How to interface Python with data visualization tools such as Qlikview? | How to interface Python with Qlikview for data visualization? | 1.2 | 0 | 0 | 8,577 |
37,002,150 | 2016-05-03T10:51:00.000 | 2 | 0 | 1 | 0 | python-3.x,dll,ctypes | 38,547,145 | 1 | true | 0 | 1 | I would recommend using Cython to do your wrapping. Cython allows you to use C/C++ code directly with very little changes (in addition to some boilerplate). For wrapping large libraries, it's often straightforward to get something up and running very quickly with minimal extra wrapping work (such as in Ctypes). It's also been my experience that Cython scales better... although it takes more front end work to stand Cython up rather than Ctypes, it is in my opinion more maintainable and lends itself well to the programmatic generation of wrapping code to which you allude. | 1 | 2 | 0 | I know how to use ctypes to call a function from a C++ .dll in Python by creating a "wrapper" function that casts the Python input types to C. I think of this as essentially recreating the function signatures in Python, where the function body contains the type cast to C and a corresponding .dll function call.
I currently have a set of C++ .dll files. Each library contains many functions, some of which are overloaded. I am tasked with writing a Python interface for each of these .dll files. My current way forward is to "use the hammer I have" and go through each function, lovingly crafting a corresponding Python wrapper for each... this will involve my looking at the API documentation for each of the functions within the .dlls and coding them up one by one. My instinct tells me, though, that there may be a much more efficient way to go about this.
My question is: Is there a programmatic way of interfacing with a Windows C++ .dll that does not require crafting corresponding wrappers for each of the functions? Thanks. | How to programmatically wrap a C++ dll with Python | 1.2 | 0 | 0 | 1,193 |
37,005,545 | 2016-05-03T13:28:00.000 | 0 | 0 | 0 | 0 | python,pandas,matplotlib | 37,005,546 | 1 | false | 0 | 0 | After a bit of investigation, I realized that I could just use:
plt.close()
with no argument to close the current figure, or:
plt.close('all')
to close all of the opened figures. | 1 | 0 | 1 | I'm hitting MemoryError: In RendererAgg: Out of memory when I plot several pandas.scatter_matrix() figures.
Normally I use:
plt.close(fig)
to close matplotlib figures, so that I release the memory used, but pandas.scatter_matrix() does not return a matplotlib figure, rather it returns the axes object. For example:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
ax = pd.scatter_matrix(df, alpha=0.2)
How do I close this figure? | How to close pandas.scatter_matrix() figure | 0 | 0 | 0 | 521 |
37,006,366 | 2016-05-03T14:00:00.000 | 1 | 0 | 1 | 1 | python,windows,scheduled-tasks | 37,008,442 | 1 | true | 0 | 0 | Have you tried using:
Action: Start a Program
Program/script: C:\<path to python.exe>\python.exe
Add arguments: C:\\<path to script>\\script.py | 1 | 0 | 0 | I have multiple recurring tasks scheduled to run several times per day to keep some different data stores in sync with one another. The settings for the 'Actions' tab are as follows:
Action: Start a Program
Program/script: C:\<path to script>.py
Add arguments:
Start in: C:\<directory of script>
I can run the python files just fine if I use the command line and navigate to the file location and use python or even just using python without navigating.
For some reason, the scripts just won't run with a scheduled task. I've checked all over and tried various things like making sure the user profile is set correctly and has all of the necessary privileges, which holds true. These scripts have been working for several weeks now with no problems, so something has changed that we aren't able to identify at this time.
Any suggestions? | Python Script won't run as a scheduled task on Windows 2008 R2 | 1.2 | 0 | 0 | 1,159 |
37,009,692 | 2016-05-03T16:38:00.000 | 6 | 0 | 0 | 0 | python,django,python-requests | 38,236,543 | 1 | true | 1 | 0 | I did bunch of things but I believe pip uninstall pyopenssl did the trick | 1 | 3 | 0 | I keep getting SSLError: ('bad handshake SysCallError(0, None)) anytime I try to make a request with python requests in my django app.
What could possibly be the issue? | SSL Error: Bad handshake | 1.2 | 0 | 1 | 1,752 |
37,009,777 | 2016-05-03T16:43:00.000 | 0 | 0 | 1 | 1 | python,windows-10,sleep | 40,618,727 | 3 | false | 0 | 0 | set hdd close wait time = 0 in power options | 1 | 5 | 0 | How can I make a computer sleep with a python script?
It has to be sleep, not hibernate or anything else.
I have tried to use cmd but there is no command to sleep or I didn't find one. | How to make a Windows 10 computer go to sleep with a python script? | 0 | 0 | 0 | 12,969 |
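Beyond the Power Options tweak in the answer above, a commonly cited way to put Windows to sleep from Python is to shell out to rundll32. Caveat: SetSuspendState suspends to hibernate instead of sleep when hibernation is enabled, so you may need to disable hibernation (powercfg -h off) first; this is an illustration, not a guaranteed solution:

import subprocess

# Ask Windows to suspend. With hibernation disabled this results in sleep.
subprocess.call(["rundll32.exe", "powrprof.dll,SetSuspendState", "0,1,0"])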
37,010,482 | 2016-05-03T17:20:00.000 | 1 | 0 | 0 | 0 | java,python,automation,web,automated-tests | 37,010,752 | 1 | false | 1 | 0 | Generally, you can inspect the web traffic to figure out what kind of request is being sent. EG., the tamperdata plugin for firefox, or the firebug net panel.
Figure out what the browser is sending (EG., POST request to the server) which will include all the form data of buttons and dropdowns, and then replicate that in your own code using Apache HTTP Client or jsoup or other HTTP client library. | 1 | 1 | 0 | I am trying to automate a process in which a user goes on a specific website, clicks a few buttons, selects the same values on the drop down lists and finally gets a link on which he/she can then download csv files of the data.
The third-party vendor does not have an API. How can I automate such a step?
The data I am looking for is processed by the third party and not available on the screen at any given point. | How to automate user clicking through third-party website to retrieve data? | 0.197375 | 0 | 1 | 218 |
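A hedged sketch of the replay approach from the answer above, using Python requests instead of the Java HTTP clients it names (the question is tagged python). The URL and form field names are placeholders you would take from the captured browser traffic:

import requests

session = requests.Session()

# Values copied from the POST request observed in the browser's network panel.
form_data = {
    "report": "monthly",      # placeholder field names --
    "format": "csv",          # use the ones seen in the captured request
}

resp = session.post("https://vendor.example.com/reports", data=form_data)
resp.raise_for_status()

with open("report.csv", "wb") as f:
    f.write(resp.content)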
37,011,901 | 2016-05-03T18:38:00.000 | 1 | 1 | 0 | 0 | python,network-programming | 37,013,135 | 1 | false | 0 | 0 | Yes, that would be possible, Python has a large networking support (I would starting with the socket module, see the docs for that).
I would not say that it will be easy or build in a single weekend, but you should give it a try and spend some time on it! | 1 | 0 | 0 | I have not seen any questions regarding to packet filter in Python and I am wondering, If it's possible to build it at all.
Is there any way building a custom firewall in Python? Null-routing specific IP's for example, or blocking them when request amount capacity is reached in 5 seconds.
What modules would it need, would it be extra difficult? Is Python useful for things like firewall?
Also would it be possible to add powerful protection? So it can filter packets on all the layers.
I'm not asking for script or exact tutorial to build it, my sorted question:
How feasible would it be to build a firewall in Python? Could I make it powerful enough to filter packets on all layers? Would it be easy to build a simple firewall? | Packet filter in Python? | 0.197375 | 0 | 0 | 1,306