Dataset columns (name: dtype, min to max):
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
35,394,675
2016-02-14T17:17:00.000
3
1
1
0
python,package,pypi
35,530,716
2
false
0
0
Upon searching pypi.python.org for pi3d I have found that when you go to the pi3d v2.9 page there is now a large bold warning saying that it isn't the latest version and gives a link to v2.10 which was probably put there between the time you asked this question and now. However the fact that the v2.10 was not listed for me shows that your problem is not a local one. Googling site:pypi.python.org pi3d shows pi3d v2.10 as the first result which means that something is wrong with the pypi search engine. The answer to your question is no, there is not an obvious cause of that behaviour. The fact that when I use Google I get a result as opposed to the builtin search implies that their search backend needs to be reindexed.
2
9
0
I maintain the pi3d package, which is available on pypi.python.org. Prior to v2.8 the latest version was always returned by a search for 'pi3d'. Subsequently v2.7 + v2.8, then v2.7 + v2.8 + v2.9 were listed. These three are still listed even though I am now at v2.10, i.e. the latest version is NOT listed, and it requires sharp eyes to spot the text on the v2.9 page saying it's not the latest version! NB: all old versions are marked as 'hidden'. I have tried lots of different permutations of hiding and unhiding releases, updating releases, switching autohide of old releases on and off, editing the text of each release, etc. ad infinitum. Is there some obvious cause of this behaviour that I have missed?
on pypi.python.org what would cause hidden old versions to be returned by explicit search
0.291313
0
0
269
35,395,843
2016-02-14T19:01:00.000
0
0
0
1
python,openshift
35,442,172
1
true
0
1
Bryan has answered the question. Tkinter will not work with WSGI. A web framework such as Django must be used.
1
0
0
I would like to deploy a Python3 app that uses tkinter on OpenShift. I added the following to setup.py: install_requires=["Tcl==8.6.4"]. When I ran git push I received the following error: Could not find suitable distribution for Requirement.parse('Tcl==8.6.4'). Can anyone provide the correct syntax, distribution package name and version?
Using Tkinter with Openshift
1.2
0
0
105
35,397,377
2016-02-14T20:07:00.000
0
0
0
0
python,themes,freeze,python-idle
35,397,402
1
false
0
0
Turns out one way is manually deleting the faulty theme. This allows the Configure IDLE menu to open. Whoops.
1
2
0
So, recently I was using the Python theme function for the IDLE program itself. I downloaded three themes and built my own one, which is selected now. The problem is, I forgot to set colours for the blinker and highlighting, which is hugely problematic. When I went to see if I could change back to the default setting, Python IDLE simply froze up when I selected 'Configure IDLE' under options. I can still scroll through the file, attempt to close the window and minimise it etc, but it has just frozen up. I can't close it or continue working with the file. I've removed Python and then reinstalled it but that hasn't worked, should I just manually delete the themes and force IDLE to use the original one, or is there a way to fix this? I am running Python 2.7 on Windows 8.1. Thanks
Python freezes when configuring IDLE
0
0
0
312
35,398,139
2016-02-14T21:21:00.000
1
0
1
0
python,memory,nonetype
35,398,203
2
false
0
0
The memory location of None is statically allocated; it is fixed when Python is compiled. So different builds/versions of CPython have different ids for None (a brief check is sketched below).
1
2
0
When I type id(None) into a Python interpreter, I get 9545840. I can open another terminal and do the same thing, and I get the same result even if the first terminal has been closed, so apparently None has been assigned a place in memory that has been reserved. When is that memory location decided on? Is it something that changes on every reboot, or is it decided when Python is installed? Is it different on different computers?
None memory location
0.099668
0
0
117
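Editor's sketch for the answer above: a minimal check that id(None) is fixed within a given CPython build. The cross-process comparison is illustrative only; address randomization on some systems can change the value between runs.

```python
# Compare id(None) reported by two fresh interpreter processes of the same build.
# Assumes CPython, where None is a singleton created at interpreter start-up.
import subprocess
import sys

cmd = [sys.executable, "-c", "print(id(None))"]
first = subprocess.check_output(cmd).strip()
second = subprocess.check_output(cmd).strip()

# Same executable, same static layout -> usually the same id, though address
# randomization can make it differ between runs on some systems.
print(first, second, first == second)
```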
35,399,162
2016-02-14T23:09:00.000
0
0
1
0
ipython,ubuntu-15.10
35,399,319
1
false
0
0
Did you try to install the missing traitlets dependency? pip install traitlets
1
0
0
I try to open ipython notebook but I get this message: Traceback (most recent call last): File "/usr/local/bin/ipython", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3080, in @_call_aside File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3066, in _call_aside f(*args, **kwargs) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3093, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 651, in _build_master ws.require(requires) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 952, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 839, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'traitlets' distribution was not found and is required by ipython
ipython notebook doesn't open
0
0
0
114
35,404,323
2016-02-15T08:12:00.000
0
0
0
0
python,pygal
35,423,410
1
false
0
0
Never mind, I got it sorted, I was using the wrong graph type
1
0
1
How can I plot multiple data series with different numbers of elements and have them fill the graph along the x axis? At the moment, if I graph a = [1,2,3,4,5] and b = [1,2,3], the b line only covers half the graph. Is this possible, or do I need to somehow combine the graphs after plotting/rendering them?
pygal data series of different lengths
0
0
0
110
35,407,514
2016-02-15T10:56:00.000
0
0
1
1
python,c++,linux,gdb,arm
35,412,607
1
false
0
0
You are probably missing the Python library headers (something like python3-dev). To install them on Ubuntu or similar, start with sudo apt-get install python3-dev. Or, if you don't plan to use Python scripting in gdb, you can configure with "--without-python". As far as I can tell you are also not configuring gdb correctly: you can leave out --build (if you are building on a PC, arm-none-linux... is wrong), and your --host should be arm-none-linux-gnueabi, not just arm.
1
2
0
I want to debug applications on devices. I prefer to use gdb (ARM version) rather than gdb with gdbserver, because there is a dashboard, a visual interface for GDB in Python. It must cooperate with gdb (ARM version) on the device, so I need to cross-compile an ARM version of gdb with Python; the command used is shown below: ./configure --build=arm-none-linux-gnueabi --host=arm -target=arm-none-linux-gnueabi CC=arm-none-linux-gnueabi-gcc --with-python=python3.3 --libdir=/u01/rootfs/lib/ --prefix=/u01/cross-compilation/gdb-7.7.1/arm_install --disable-libtool --disable-tui --with-termcap=no --with-curses=no But finally an error message appeared during make: checking for python3.3... missing configure: error: unable to find python program python3.3 Here I have python3.3 binaries and libraries which are cross-compiled for ARM. Please give me any suggestion. Thanks in advance.
GDB cross-compilation for arm
0
0
0
1,845
35,411,265
2016-02-15T13:56:00.000
0
1
1
0
python,logging,optimization
35,420,774
2
false
0
0
Use logger.debug('%s', myArray) rather than logger.debug(myArray). The first argument is expected to be a format string (as all the documentation and examples show) and is not assumed to be computationally expensive. However, as @dwanderson points out, the logging will actually only happen if the logger is enabled for the level. Note that you're not forced to pass a format string as the first parameter - you can pass a proxy object that returns the string when str() is called on it (this is also documented). In your case, passing the array directly is what probably triggers the array-to-string conversion. If you use logging in the manner you're supposed to (i.e. as documented), there shouldn't be the problem you describe here; a short sketch follows below.
1
2
1
I'm optimizing a Python program that performs some sort of calculation. It uses NumPy quite extensively. The code is sprinkled with logger.debug calls (logger is the standard Python log object). When I run cProfile I see that NumPy's function that converts an array to string takes 50% of the execution time. This is surprising, since there is no handler that outputs messages at the DEBUG level, only INFO and above. Why is the logger converting its arguments to string even though nobody is going to use this string? Is there a way to prevent it (other than not performing the logger calls)?
Python logger.debug converting arguments to string without logging
0
0
0
2,623
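A minimal sketch of the advice in the answer above, assuming the standard logging module and NumPy; big_array is a placeholder name.

```python
import logging
import numpy as np

logging.basicConfig(level=logging.INFO)   # no DEBUG handler, as in the question
logger = logging.getLogger(__name__)

big_array = np.arange(1000000)

# Lazy %-style formatting: the array-to-string conversion only happens if a
# handler actually wants a DEBUG record, so this line is cheap at INFO level.
logger.debug("array contents: %s", big_array)

# Work that is expensive even before formatting can be guarded explicitly.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("array mean: %s", big_array.mean())
```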
35,414,707
2016-02-15T16:47:00.000
0
0
0
0
python,rethinkdb
35,417,780
1
false
1
0
The easiest thing to do would be to denormalize your data so that your changefeed only has to look at one table.
1
0
0
I use RethinkDB changefeeds and I need to catch events from one table with a condition from another: the first table contains some information, the second table contains info about the user, and I need to catch changes in the first table made by a specific user. I tried joining the tables and using a changefeed on that, but it does not work well. Is there a way to do this?
How to use a changefeed with 2 tables?
0
0
0
35
35,421,803
2016-02-16T00:52:00.000
0
0
0
0
python,database,filesystems,document-oriented-db
35,421,941
2
false
0
0
A document-oriented DB sounds like a much more reliable and professional solution. You can also add stored procedures with the future in mind, and most databases offer text-search capabilities. Backups are easier too: instead of using an incremental tar command, you can use the native DB backup tools. I'm a fan of CouchDB; you can make RESTful calls to it in a "transparent" way, with JSON as the default response.
1
2
0
I'm working on a Python program which has to access data that is currently stored in plain text files. Each file represents a cluster of data points that will be accessed together. I don't need to support different queries; the only thing I need is to retrieve a cluster of data and copy it to memory as fast as possible. I'm wondering if a document-oriented database could work better than my current text-file approach. In particular, I would like to know if the seek time and transfer speed are the same in document-oriented DBs as in files. Should I switch to a document-oriented database or stay with the plain files?
Document-oriented databases vs plain text files
0
1
0
367
35,422,002
2016-02-16T01:14:00.000
1
0
0
0
python,django,forms,model,message
35,422,055
2
false
1
0
I assume that you will have some view which renders the page on which a user of your site can read the unread notifications. So I think you can simply add a boolean field, unread, to the notifications model. This field is set to true when a new notification is created. After the user renders the page with unread notifications, the view simply changes this field to false. The next time you query for notifications with unread == true, the already-read ones will be skipped (a brief sketch follows below).
1
1
0
I want to develop a notification system with Django. I have a button (and a count of unread messages) that shows all messages to the user, after which the counter should return to zero. How can my database detect that the user has already read the messages, and reset the counter? I don't think I can emulate this with forms, can I?
How can I model this behavior in Django?
0.099668
0
0
105
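A minimal sketch of the flag idea from the answer above. The Notification model, field names and template are hypothetical, not part of the original question.

```python
# models.py - hypothetical Notification model with an "unread" flag
from django.conf import settings
from django.db import models

class Notification(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    text = models.TextField()
    unread = models.BooleanField(default=True)

# views.py - list unread notifications, then mark them as read
from django.shortcuts import render

def inbox(request):
    qs = Notification.objects.filter(user=request.user, unread=True)
    notes = list(qs)          # evaluate before flipping the flag
    qs.update(unread=False)   # the unread counter drops to zero afterwards
    return render(request, "inbox.html", {"notifications": notes})
```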
35,422,495
2016-02-16T02:14:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore,google-console-developer
35,578,109
1
false
1
0
If you check the little question-mark near the statistics summary it says the following: Statistics are generated every 24-48 hours. In addition to the kinds used for your entities, statistics also include system kinds, such as those related to metadata. System kinds are not visible in your kinds menu. Note that statistics may not be available for very large datasets. Could be any of these.
1
0
0
I recently deployed my app on GAE. In my Datastore page of Google Cloud Console, in the Dashboard Summary, it shows that I have 75 entities. However, when I click on the Entities tab, it shows I have 2 entities of one kind and 3 entities of another kind. I remember creating these entities. I'm just curious where the 75 entities comes from? Just checking if I'm doing something wrong here.
Google App Engine Console shows more entities than I created
0.197375
0
0
70
35,428,278
2016-02-16T09:14:00.000
1
0
1
0
python,performance,python-2.7,python-3.x,design-patterns
35,433,954
1
false
0
0
Your question is quite broad, so I can't give you an exact answer. However, what I would generally do here is to run a linter like flake8 over the whole codebase to show you where you have unused imports and if you have references in your files to things that you haven't imported. It won't tell you if a whole file is never imported by anything, but if you remove all unused imports, you can then search your codebase for imports of a particular module and if none are found, you can (relatively) safely delete that module. You can integrate tools like flake8 with most good text editors, so that they highlight mistakes in real time. As you're trying to work with legacy code, you'll more than likely have many errors when you run the tool, as it looks out for style issues as well as the kinds of import/usage issues that you mention. I would recommend fixing these as a matter of principle (as they are non-functional in nature), and then making sure that you run flake8 as part of your continuous integration to avoid regressions. You can, however, disable particular warnings with command-line arguments, which might help you stage things. Another thing you can start to do, though it will take a little longer to yield results, is write and run unit tests with code coverage switched on, so you can see areas of your codebase that are never executed. With a large and legacy project, however, this might be tough going! It will, however, help you gain better insight into the attribute usage you mention in point 1. Because Python is very dynamic, static analysis can only go so far in giving you information about attribute usage. Also, make sure you are using a version control tool (such as git) so that you can track any changes and revert them if you go wrong.
1
0
0
In a legacy system, we have created an init module which loads information and is used by various modules (via import statements). It's a big module which consumes a lot of memory and takes a long time to process, and some of the information is not needed or has not been used so far. There are two proposed solutions. Can we determine in Python who is using this module? For example, LoadData.py (the init module) contains 100 data members; A.py does import LoadData; b = LoadData.name, and B.py does import LoadData; b = LoadData.width. In the above example, A.py uses name and B.py uses width, and the rest of the information is not required (98 data members are not required). Is there any way to help us determine usage of the LoadData module along with usage of its data members? Put simply, we would need to traverse A.py and B.py and identify usage of each object manually. I am trying to implement the first solution, but I have more than 1000 modules and it would be painful to determine this by traversing each module. I am open to any tool which can integrate with Python.
Determine usage/creation of object and data member into another module
0.197375
0
1
41
35,434,188
2016-02-16T13:41:00.000
0
0
1
0
python,strip
35,435,311
2
false
0
0
So, what is the memory on your target system? Unless you have less than 220MB of RAM or so for the whole process, I think str.strip is what you should use there. One could iteratively consume the 1GB file to create a stripped 100MB part - but that would be cost intensive, having to hold up to the full 100MB in an intermediate buffer (which could be allocated in a file, though). But that would be far from "simple" as you request - especially when compared with "strip()". It could however be a nice way to strip whitespace from within the 100MB partitions if that matters.
1
0
0
I have a large string, >100mb in size. I want to remove leading and trailing white space. What is a simple and memory efficient way to do this? Consider the following problem: A 1Gb file will be partitioned for parallel processing. This file is divided into 10 equal parts, each 100 Mb long. A large part of these files is white space, so the leading and trailing white space is to be removed from each 100 Mb part. Is there a memory efficient and simple way to strip this white space from the head and tail of each part.
What is a simple and memory efficient way to strip whitespace from a large string in Python
0
0
0
228
35,436,599
2016-02-16T15:32:00.000
1
0
0
0
python,scikit-learn,linear-regression
35,438,322
3
false
0
0
There is a linear classifier, sklearn.linear_model.RidgeClassifier(alpha=0.), that you can use for this. Setting the Ridge penalty to 0 makes it do exactly the linear regression you want, and it sets the threshold to divide between the classes (a short sketch follows below).
1
1
1
I trained a linear regression model (using sklearn with Python 3); my training set has 94 features and their class was 0 or 1. Then I went to check my linear regression model on the test set and it gave me these results: 1. [ 0.04988957], real value 0; 2. [ 0.00740425], real value 0; 3. [ 0.01907946], real value 0; 4. [ 0.07518938], real value 0; 5. [ 0.15202335], real value 0; 6. [ 0.04531345], real value 0; 7. [ 0.13394644], real value 0; 8. [ 0.16460608], real value 1; 9. [ 0.14846777], real value 0; 10. [ 0.04979875], real value 0. As you can see, row 8 gave the highest value, but the thing is that I want to use my_model.predict(testData) and have it give only 0 or 1 as results. How can I possibly do that? Does the model have any threshold or automatic cutoff that I can use?
Can I make linear regression predict like a classifier?
0.066568
0
0
2,479
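A small sketch of the suggestion above, using scikit-learn; the random arrays stand in for the asker's real 94-feature data.

```python
# RidgeClassifier with no penalty behaves like least-squares regression plus a
# built-in decision threshold, so predict() returns class labels directly.
import numpy as np
from sklearn.linear_model import RidgeClassifier

X_train = np.random.rand(200, 94)         # placeholder data, 94 features
y_train = np.random.randint(0, 2, 200)    # binary labels
X_test = np.random.rand(10, 94)

clf = RidgeClassifier(alpha=0.0)
clf.fit(X_train, y_train)
print(clf.predict(X_test))                # array of 0s and 1s
print(clf.decision_function(X_test))      # underlying continuous scores
```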
35,437,458
2016-02-16T16:11:00.000
1
0
0
0
python,mongodb,mongoengine
35,648,604
1
true
1
0
Mongoengine does not rebuild indexes automatically. Mongoengine tracks changes in models (by the way, this doesn't work if you add sparse to your field when the field doesn't have the unique option) and then fires ensureIndex in MongoDB. But when it fires, make sure you delete the oldest index version manually in MongoDB (Mongoengine doesn't). The problem is: if you add sparse to a field without the unique option, this change is not mapped to the MongoDB index. You need to combine unique = True and sparse = True. If you change indexes in your models, you need to manually delete the old indexes in MongoDB (a small sketch follows below).
1
3
0
When does Mongoengine rebuild (update) information about indexes? I mean, if I added or changed some field (added the unique or sparse option to a field) or added some meta info in the model declaration. So the question is: when does Mongoengine update it? How does it track changes?
When does Mongoengine rebuild indexes?
1.2
1
0
349
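A small sketch of the unique + sparse combination described above, using mongoengine; the document class and database name are made up.

```python
# Hypothetical model showing sparse combined with unique, plus an explicit
# index build. Stale indexes from an older schema still have to be dropped by
# hand in the mongo shell, e.g. db.profile.dropIndex("<old_index_name>").
from mongoengine import Document, StringField, connect

connect("testdb")   # assumes a local MongoDB instance

class Profile(Document):
    nickname = StringField(unique=True, sparse=True)   # sparse only takes effect with unique

Profile.ensure_indexes()   # asks MongoDB to create the indexes declared on the model
```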
35,437,656
2016-02-16T16:19:00.000
0
0
0
0
python,django,excel,django-import-export
35,437,885
2
false
1
0
An easy fix would be adding an apostrophe (') at the beginning of each number when exporting with import-export. This way Excel will recognize those numbers as text (a small sketch follows below).
1
1
0
I am faced with the following problem: when I generate .csv files in Python using django-import-export, even though the field is a string, when I open the file in Excel the leading zeros are omitted. E.g. 000123 > 123. This is a problem, because if I'd like to display a zipcode I need the zeros the way they are. I can wrap it in quotes, but that's not desirable since it will grab unnecessary attention and it just looks bad. I'm also aware that you can fix it in Excel manually by changing the data type, but I don't want to explain that to people who are using my software. Any suggestions? Thanks in advance.
Django import-export leading zeros for numerical values in excel
0
0
0
281
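A possible way to automate the apostrophe trick with django-import-export's dehydrate_<field> hook; the Contact model and zipcode field are hypothetical.

```python
# Prefix an apostrophe during export so Excel keeps the leading zeros.
from import_export import fields, resources
from myapp.models import Contact   # hypothetical model with a CharField "zipcode"

class ContactResource(resources.ModelResource):
    zipcode = fields.Field()

    class Meta:
        model = Contact

    def dehydrate_zipcode(self, contact):
        return "'" + contact.zipcode    # Excel shows '000123 as the text 000123
```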
35,440,612
2016-02-16T18:43:00.000
4
0
0
1
python,django,macos
35,442,348
1
true
1
0
This is what version control is for. Sign up for an account at Github, Bitbucket, or Gitlab, and push your code there.
1
5
0
I'm developing a Django project on my MacBook Pro. I'm constantly paranoid about my house burning down, someone stealing my MacBook, a hard drive failure, or other things that are not likely but catastrophic if they occur. How can I create an automatic backup every hour from the OS X directory where the Django project lives to a service like Dropbox or whatever cloud hosting company there might be a solution for? Is there a Python script that does this? I can't be the only one who has thought of this before.
Create regular backups from OS X to the cloud
1.2
0
0
35
35,441,310
2016-02-16T19:22:00.000
1
0
0
0
python-2.7,csv,pandas
35,441,725
1
true
0
0
Upgrading pandas from 0.15.1 to 0.17.1 resolved this issue.
1
0
1
I need to save to csv, but have date values in the series that are below 1900 (ie Mar 1 1899), which is preventing this from happening. I get ValueError: year=1899 is before 1900; the datetime strftime() methods require year >= 1900. It seems a little absurd for a function like this to work only for dates above 1900s, so I think there must be something I am missing. What is the right way of getting a csv when you're working with a dataframe that has a column with dates before the 1900s?
pandas to_csv on dataframe with a column that has dates below 1900
1.2
0
0
136
35,442,576
2016-02-16T20:42:00.000
0
0
1
0
java,python,scala
35,442,831
3
false
0
0
Scala was made because of this. It mixes functional and OOP language features, so you can create methods by themselves, without creating a class to contain them. Java doesn't have this feature; methods can't be created outside a class. Java is very object oriented: everything (except the primitives) extends Object. Some people say that it is this way because Java is garbage collected; having functions inside objects makes it easier to free space up. It really depends on you whether you find this good or not. In my opinion, it's better to get as far from functional programming as possible. Let's not go back to the C era.
1
0
0
During the 3 years of my working career, I have been working with databases, data, etc. It was only during the last year that I started working with Python for some data analysis. Now I have become interested in the whole Big Data ecosystem, and Python gets me far enough so far. However, recently I chose to learn Scala as my second programming language. It appears that my program usually needs to have a class and a method, and then it needs to be built. It is all very confusing to be honest :) So I read on and it appears that Scala comes from the JVM environment, and I started reading about Java, and it turns out that in Java you cannot just create a program consisting of a single command. You need to create a class, a method, etc. I understand that this is probably because it follows one of the principles of OOP, but could anyone please direct me to a source which explains why we need to create classes and methods in Java - as opposed to listing commands only?
Why do i need to create a class in Java program?
0
0
0
176
35,446,029
2016-02-17T01:00:00.000
0
1
1
0
python,pytest
44,379,571
1
false
0
0
You should just update to newer pytest. Looks like this problem was fixed in pytest=2.9.0.
1
1
0
I execute py.test like this: py.test -s -f, where -f is looponfail mode and -s is --capture=no mode. But print() output is only shown when a test fails. If all tests succeed, none of the print() calls in my code produce output. How can I enable the print() statement even in looponfail mode? Python 3.4, py.test 2.7.2
How could I enable print statement in pytest looponfail mode?
0
0
0
118
35,447,087
2016-02-17T03:01:00.000
3
0
0
0
python,django,caching,django-models,django-views
35,447,745
1
true
1
0
You ask about "caching", which is a really broad topic, and the answer is always a mix of opinion, style and the specific app requirements. Here are a few points to consider. If the data is per user, you can cache it per user: from django.core.cache import cache cache.set(request.user.id,"foo") cache.get(request.user.id) The common practice is to keep a database flag that tells you if the user's data changed since it was cached. So before you fetch the data from cache, check only this flag from the DB. If the flag says nothing changed, get the data from cache. If it did change, pull from the DB, replace the cache, and reset the flag. The flag check should be fast and simple: one table, indexed by user.id, and a boolean flag field. This will squeeze a lot of index rows into a single DB page, and enables fast fetching of a single one-field row. Yet you still get persistent, up-to-date main storage that prevents the use of stale cache data. You can check this flag in middleware. You can run expiry in many ways: clear the cache when the user logs out, run a cron script that clears items, or let the cache backend expire items. If you use a flag check before you use the cache, there is no issue in keeping items in cache except space, and caching backends handle that. If you use the Django simple file cache (which is easy, simple and zero config), you will have to clear the cache; a simple cron script will do. (A brief sketch follows below.)
1
1
0
I want to use caching in Django and I am stuck on how to go about it. I have data in some specific models which are write intensive; records get added to the model continuously. Each user has some specific data in the model, similar to an orders table. Since my model is write intensive I am not sure how effective the caching frameworks in Django are going to be. I tried Django view-specific caching and I am trying to develop a view which first picks up data from the cache. Then I will have another call which brings in data that was added to the model after the caching was done. What I want to do is add the updated data to the original cache data and store it again. In other words, I don't want to expire my cache, I just want to keep adding to my existing cache data; maybe once every 3 hours I can clear it. Is what I am doing right? Are there better ways than this? Can I really add items to an existing cache? I will be very glad for your help.
update existing cache data with newer items in django
1.2
0
0
1,927
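A compressed sketch of the per-user cache plus dirty-flag pattern from the answer above. CacheFlag and expensive_query are hypothetical names, not Django or project APIs.

```python
# Per-user cache guarded by a small "stale" flag table.
from django.core.cache import cache
from django.db import models

class CacheFlag(models.Model):
    user_id = models.IntegerField(unique=True, db_index=True)
    stale = models.BooleanField(default=True)   # set back to True whenever new rows arrive

def get_orders(user):
    flag, _ = CacheFlag.objects.get_or_create(user_id=user.id)
    data = cache.get(user.id)
    if data is None or flag.stale:
        data = expensive_query(user)            # hypothetical DB fetch
        cache.set(user.id, data)
        CacheFlag.objects.filter(user_id=user.id).update(stale=False)
    return data
```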
35,451,340
2016-02-17T08:23:00.000
1
0
0
0
python,web-scraping,scrapy
35,451,490
1
true
1
0
It looks like you are running the command scrapy startproject stack inside the Python interactive shell. Run the same command directly in a bash shell, not inside the Python shell. And you don't need the import scrapy command to create a Scrapy project.
1
4
0
I'm learning Scrapy to create a crawler that can crawl a website and get back the results; however, on creating a new project, it is returning an error. I tried creating a folder manually, but again it returned an error. Any idea how to resolve this? SyntaxError: invalid syntax import scrapy scrapy startproject stack
Scrapy: Create Project returning error
1.2
0
0
4,043
35,451,564
2016-02-17T08:37:00.000
2
0
0
1
python,c,macos,core-foundation,mach
35,452,078
1
true
0
0
You can't. Mac OS X does not keep track of this information in the way you're looking for -- opening an application from another application does not establish a relationship of any sort between those applications.
1
2
0
I'd like to create a daemon (based on a script or some lower-level language) that calculates statistics on all opened applications according to their initiating process. The problem is that the initiating process is not always equivalent to the actual parent process. For instance, I might press a hyperlink in Microsoft Word that opens an executable file like file:///Applications/Chess.app/. In the case above, I've observed that the ppid of 'Chess' is in fact 'launchd', just the same as if I were running it from Launchpad. Perhaps there's a mach_port (or any other) API to figure out who really initiated the application?
Running processes in OS X, Find the initiator process
1.2
0
0
81
35,453,451
2016-02-17T10:02:00.000
1
0
0
0
python,django
35,453,880
2
false
1
0
Add some boolean field (answered, is_answered, etc.) and check on every "Response" click whether it is already answered. Hope it helps.
1
0
0
I am writing a mini-CRM system in which two users can log in at the same time and answer received messages. However, the problem is that they might respond to the same message, because messages only disappear when someone clicks the "Response" button. Is there any suggestion on how to lock the system?
Lock the system
0.099668
0
0
52
35,454,970
2016-02-17T11:08:00.000
1
0
1
1
python,testing,analysis,cuckoo
35,478,371
1
true
0
0
I was able to fix this issue just by changing the configuration file "virtualbox.conf". In this configuration file the virtual machine is listed as [cuckoo1] (the title of the virtual machine configuration section). Since my virtual machine name is "windows_7", I had to change [cuckoo1] to windows_7. That is why Cuckoo didn't pick up the virtual machine configuration (because the configuration is set by default for the [cuckoo1] virtual machine name).
1
2
0
I have installed cuckoo sandbox in ubuntu environment with windows7 32 bit as guest os. I have followed the instructions given in their website.The vm is named windows_7. I have edited the "machine" and "label" field properly in "virtualbox.conf". But when I try to start the cuckoo executing "sudo python cuckoo.py" it gives me an error : "WARNING: Configuration details about machine windows_7 are missing: Option windows_7 is not found in configuration, error: Config instance has no attribute 'windows_7' CRITICAL: CuckooCriticalError: No machines available.".
Cuckoo sandbox: shows "Configuration details about machine windows_7 are missing" error
1.2
0
0
1,540
35,457,300
2016-02-17T12:52:00.000
2
0
0
1
python,google-app-engine,debugging,breakpoints
35,457,609
2
false
1
0
As often happens with these things, writing this question gave me a couple of ideas to try. I was using the Personal edition ... so I downloaded the professional edition ... and it all worked fine. Looks like I'm paying $95 instead of $45 when the 30 day trial runs out.
2
2
0
I'm new to Python, Wing IDE and Google cloud apps. I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app and it launches fine and responds as expected in the web browser. However breakpoints are not working. I'm not sure if this is important but I see the following status message when first starting the debugger: Debugger: Debug process running; pid=xxxx; Not listening (too many connections) ... My run arguments are as per the recommendation in the Wing IDE help file section "Using Wing IDE with Google App Engine", namely: C:\x\guestbook --max_module_instances=1 --threadsafe_override=false One problem I found when trying to follow these instructions. The instructions say go into Project Properties and the Debug/Execute tab and set the Debug Child Processes to Always Debug Child Process. I found this option doesn't exist. Note also that in the guestbook app, if I press the pause button, the code breaks, usually in the python threading.py file in the wait method (which makes sense). Further note also that if I create a generic console app in Wing IDE, breakpoints work fine. I'm running 5.1.9-1 of Wing IDE Personal. I've included the Google appengine directory and the guestbook directories in the python path. Perhaps unrelated but I also find that sys.stdout.write strings are not appearing in the Debug I/O window.
Wing IDE not stopping at break point for Google App Engine
0.197375
0
0
544
35,457,300
2016-02-17T12:52:00.000
5
0
0
1
python,google-app-engine,debugging,breakpoints
42,961,127
2
false
1
0
I copied the wingdbstub.py file (from the debugger packages of Wing IDE) to the folder I am currently running my project in, used 'import wingdbstub', and initiated the debug process. All went well; I can now debug modules.
2
2
0
I'm new to Python, Wing IDE and Google cloud apps. I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app and it launches fine and responds as expected in the web browser. However breakpoints are not working. I'm not sure if this is important but I see the following status message when first starting the debugger: Debugger: Debug process running; pid=xxxx; Not listening (too many connections) ... My run arguments are as per the recommendation in the Wing IDE help file section "Using Wing IDE with Google App Engine", namely: C:\x\guestbook --max_module_instances=1 --threadsafe_override=false One problem I found when trying to follow these instructions. The instructions say go into Project Properties and the Debug/Execute tab and set the Debug Child Processes to Always Debug Child Process. I found this option doesn't exist. Note also that in the guestbook app, if I press the pause button, the code breaks, usually in the python threading.py file in the wait method (which makes sense). Further note also that if I create a generic console app in Wing IDE, breakpoints work fine. I'm running 5.1.9-1 of Wing IDE Personal. I've included the Google appengine directory and the guestbook directories in the python path. Perhaps unrelated but I also find that sys.stdout.write strings are not appearing in the Debug I/O window.
Wing IDE not stopping at break point for Google App Engine
0.462117
0
0
544
35,457,531
2016-02-17T13:03:00.000
0
0
0
1
java,python,web-services,python-requests,legacy
35,458,539
2
false
1
0
Maybe you could add a man-in-the-middle: a socket server that receives the unix strings, parses them into a sys-2 type of message, and sends it on to sys-2. That could be an option to avoid rewriting all calls between the two systems (a rough sketch follows below).
2
0
0
I have a legacy web application sys-1 written in cgi that currently uses a TCP socket connection to communicated with another system sys-2. Sys-1 sends out the data in the form a unix string. Now sys-2 is upgrading to java web service which in turn requires us to upgrade. Is there any way to upgrade involving minimal changes to the existing legacy code. I am contemplating the creating of a code block which gets the output of Sys-1 and changes it into a format required by Sys-2 and vice versa. While researching, I found two ways of doing this: By using the "requests" library in python. Go with the java webservices. I am new to Java web services and have some knowledge in python. Can anyone advise if this method works and which is a better way to opt from a performance and maintenance point of view? Any new suggestions are also welcome!
Python requests vs java webservices
0
0
0
254
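A rough sketch of such a middle layer: accept the legacy unix string on a TCP socket and forward it to the new Java web service over HTTP with requests. The host, port, endpoint URL and payload shape are all placeholders.

```python
import socket
import requests

SYS2_URL = "http://sys-2.example.com/api/messages"   # hypothetical endpoint

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen(5)

while True:
    conn, _ = server.accept()
    raw = conn.recv(65536).decode("utf-8")            # legacy unix string from sys-1
    resp = requests.post(SYS2_URL, json={"payload": raw})
    conn.sendall(resp.text.encode("utf-8"))           # relay the reply back to sys-1
    conn.close()
```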
35,457,531
2016-02-17T13:03:00.000
0
0
0
1
java,python,web-services,python-requests,legacy
35,458,442
2
true
1
0
Is there any way to upgrade involving minimal changes to the existing legacy code. The solution mentioned, adding a conversion layer outside of the application, would have the least impact on the existing code base (in that it does not change the existing code base). Can anyone advise if this method works Would writing a Legacy-System-2 to Modern-System-2 converter work? Yes. You could write this in any language you feel comfortable in. Web Services are Web Services, it matters not what they are implemented in. Same with TCP sockets. better way to opt from a performance How important is performance? If this is used once in a blue moon then who cares. Adding a box between services will make the communication between services slower. If implemented well and running close to either System 1 or System 2 likely not much slower. maintenance point of view? Adding additional infrastructure adds complexity thus more problems with maintenance. It also adds a new chunk of code to maintain, and if System 1 needs to use System 2 in a new way you have two lots of code to maintain (Legacy System 1 and Legacy/Modern converter). Any new suggestions are also welcome! How bad is legacy? Could you rip the System-1-to-System-2 code out into some nice interfaces that you could update to use Modern System 2 without too much pain? Long term this would have a lower overall cost, but would have a (potentially significantly) larger upfront cost. So you have to make a decision on what, for your organisation, is more important. Time To Market or Long Term Maintenance. No one but your organisation can answer that.
2
0
0
I have a legacy web application sys-1 written in cgi that currently uses a TCP socket connection to communicated with another system sys-2. Sys-1 sends out the data in the form a unix string. Now sys-2 is upgrading to java web service which in turn requires us to upgrade. Is there any way to upgrade involving minimal changes to the existing legacy code. I am contemplating the creating of a code block which gets the output of Sys-1 and changes it into a format required by Sys-2 and vice versa. While researching, I found two ways of doing this: By using the "requests" library in python. Go with the java webservices. I am new to Java web services and have some knowledge in python. Can anyone advise if this method works and which is a better way to opt from a performance and maintenance point of view? Any new suggestions are also welcome!
Python requests vs java webservices
1.2
0
0
254
35,460,433
2016-02-17T15:09:00.000
0
0
0
1
python,debugging,gdb,qt-creator,opensuse
35,525,248
1
true
0
0
Works fine if "Run in terminal" unchecked or terminal changed back from konsole to xterm (works in konsole previously - weird).
1
1
0
See that when trying to debug my program in Qt Creator "Application Output" pane: Debugging starts Debugging has failed Debugging has finished Or freezes after Debugging starts Was able to run previously. Any way to fix this or to discover the problem? Qt Creator 3.5.1, gcc 4.8.5, gdb 7.9.1, Python 2.7.9 P.S. Hmm, works fine if "Run in terminal" unchecked or terminal changed back from konsole to xterm (works in konsole previously - weird).
Qt Creator failed to start gdb in latest openSUSE
1.2
0
0
116
35,463,019
2016-02-17T16:59:00.000
0
1
0
1
python,linux,process
35,463,485
1
false
0
0
Would running a filter in htop be quick enough? Run htop, Press F5 to enter tree mode, then F4 to filter, and type in python... it should show all the python processes as they open/close
1
2
0
I am on Linux and wish to find the process spawned by a Python command. Example: shutil.copyfile. How do I do so? Generally I have just read the processes from the terminal with ps however this command completes nearly instantaneously so I cannot do that for this without some lucky timing. htop doesn't show the info, strace seems to show a lot of info but I can't seem to get the process in it.
How to find the name of a process spawned by Python?
0
0
0
155
35,464,113
2016-02-17T17:51:00.000
0
0
1
1
python,macos,import,terminal
35,464,530
1
true
0
0
That error means that there is no 'intelhex' on your Python path. The contents of /usr/local/bin should not matter (those are executable files but are not the Python modules). Are you sure that you installed the package and are loading it from the same Python site packages location you installed it to?
1
0
0
I am using the terminal on a MacBook Pro, trying to use intelhex in my code. I downloaded intelhex using sudo pip install intelhex - success. pip list shows intelhex installed. I run my code and receive this error: Traceback (most recent call last): File "./myCode.py", line 20, in from intelhex import IntelHex ImportError: No module named 'intelhex' I am using Python 2.7.11. ls /usr/local/bin shows the contents of intelhex: hex2bin.py bin2hex.py hexmerge.py hexdiff.py Where am I going wrong?!
Intelhex - import error - macOSX Terminal
1.2
0
0
1,912
35,464,138
2016-02-17T14:21:00.000
1
0
0
0
python,numpy,memory,arrays
35,464,658
2
false
0
0
By definition you cannot append anything to an array, because when the array is allocated in memory it has to reserve as much space as it is going to need. What you can do is either declare an array with the known geometry and initial values and then overwrite the new values row by row, keeping a counter of the rows "appended", or double the size of the array when you run out of space (a short sketch follows below).
1
0
1
Is there a way to create a 3D numpy array by appending 2D numpy arrays? What I currently do is append my 2D numpy arrays to an initialized list containing a predetermined 2D numpy array, i.e., List=[np.zeros((600,600))]. After appending all my 2D numpy arrays I use numpy.dstack to create the 3D numpy array. I think this is not a very efficient method. Any suggestions?
Creating a 3d numpy array matrix using append method
0.099668
0
0
1,796
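A short sketch of the preallocation approach from the answer above; the sizes are just examples.

```python
# Preallocate the 3-D array once instead of appending 2-D slices.
import numpy as np

n_slices = 10
stack = np.zeros((n_slices, 600, 600))   # reserve all the memory up front

for i in range(n_slices):
    frame = np.random.rand(600, 600)     # stand-in for the real 2-D array
    stack[i] = frame                     # write into the reserved slot

print(stack.shape)                       # (10, 600, 600)
```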
35,466,165
2016-02-17T19:40:00.000
0
0
0
0
python,list,pyodbc,netezza,executemany
35,599,759
2
false
0
0
Netezza is good for bulk loads, where executemany() inserts a number of rows in one go. The best way to load millions of rows is the "nzload" utility, which can be scheduled via VBScript or an Excel macro on Windows, or a shell script on Linux.
1
1
0
I have about a million records in a list that I would like to write to a Netezza table. I have been using the executemany() command with pyodbc, which seems to be very slow (I can load much faster if I save the records to Excel and load into Netezza from the Excel file). Are there any faster alternatives to loading a list than the executemany() command? PS1: The list is generated by a proprietary DAG in our company, so writing to the list is very fast. PS2: I have also tried looping executemany() in chunks, with each chunk containing a list of 100 records. It takes approximately 60 seconds to load, which seems very slow.
Loading data to Netezza as a list is very slow
0
1
0
806
35,466,429
2016-02-17T19:53:00.000
7
0
1
0
python,python-3.x,opencv
44,714,952
5
false
0
0
For anyone who would like to install OpenCV for Python 3.5.1, use the library called opencv-contrib-python. This library works for Python 3.5.1.
2
10
1
I have searched quite a bit regarding this and I've tried some of these methods myself, but I'm unable to work with OpenCV. So can any of you help me install OpenCV for Python 3.5.1? I'm using Anaconda along with PyCharm on Windows. Or is this not possible, and do I have to use Python 2.7? Thanks in advance.
OpenCV for Python 3.5.1
1
0
0
79,914
35,466,429
2016-02-17T19:53:00.000
1
0
1
0
python,python-3.x,opencv
63,226,795
5
false
0
0
For Windows 10 and Python 3.5.1 & 3.6, this worked for me: pip install opencv-contrib-python
2
10
1
I have searched quite a bit regarding this and I've tried some of these methods myself, but I'm unable to work with OpenCV. So can any of you help me install OpenCV for Python 3.5.1? I'm using Anaconda along with PyCharm on Windows. Or is this not possible, and do I have to use Python 2.7? Thanks in advance.
OpenCV for Python 3.5.1
0.039979
0
0
79,914
35,472,785
2016-02-18T04:23:00.000
1
0
0
0
r,python-2.7,machine-learning,prediction,h2o
41,201,223
3
false
0
0
I have tried to use many of the default methods inside H2O with time series data. If you treat the system as a state machine where the state variables are a series of lagged prior states, it's possible, but not entirely effective, as the prior states don't maintain their causal order. One way to alleviate this is to assign weights to each lagged state set based on time passed, similar to how an EMA gives precedence to more recent data (a small lag-feature sketch follows below). If you are looking to see how easy or effective DL/ML can be for a non-linear time series model, I would start with an easy problem to validate that the DL approach gives any improvement over a simple 1-period ARIMA/GARCH-type process. I have used this technique with varying success. What I have had success with is taking well-known non-linear time series models and improving their predictive qualities with additional factors, using the handcrafted non-linear model as an input to the DL method. It seems that certain qualities that I haven't manually worked out about the entire parameter space are able to supplement a decent foundation. The real question at that point is that you have now introduced immense complexity that isn't entirely understood. Is that complexity warranted in the compiled landscape when the non-linear model encapsulates about 95% of the information between the two stages?
1
1
1
We have hourly time series data with 2 columns: one is the timestamp and the other is the error rate. We used an H2O deep-learning model to learn and predict the future error rate, but it looks like it requires at least 2 features (besides the timestamp) for creating the model. Is there any way H2O can learn this type of data (time, value), having only one feature, and predict the value for a given future time?
Can we predict time-series single-dimensional data using H2O?
0.066568
0
0
2,186
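One common way to realise the lagged-state idea above is to turn the single series into several lag columns so a generic learner sees multiple features per row. This sketch uses pandas; the column names and values are examples, and handing the frame to H2O is only indicated in a comment.

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2016-01-01", periods=8, freq="H"),
    "error_rate": [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.5],
})

# Each lag column carries an earlier value of the series as an extra feature.
for lag in (1, 2, 3):
    df["lag_%d" % lag] = df["error_rate"].shift(lag)

df = df.dropna()   # the first rows have no history yet
# df could now be converted for H2O (e.g. h2o.H2OFrame(df)) with lag_* as predictors
print(df.head())
```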
35,479,437
2016-02-18T10:54:00.000
0
0
1
0
python,pip
38,482,667
1
false
0
0
Make sure of two things. First, the pip version is the same on the offline server and the online one. To find out: pip -V. To update (if needed): pip install --upgrade pip. Second, the Python version is the same in both virtual environments or servers. To find out: python (the header will have the version info). In my case I was calling pip install --download outside the virtual environment (using the default Python version, 2.7) and then installing in a virtual environment with Python 3, and the error I got was exactly the one you mentioned.
1
2
0
In order to make packages installed offline, I use the -d (or --download) option to pip install. For instance, pip install --download dependencies -r requirements.txt will download the packages for all required dependencies mentioned in requirements.txt to dependencies dir (but will not install them). Then I use pip install --no-index --find-links dependencies -r requirements.txt to install those downloaded packages without accessing the network. Most of the time it works fine, but sometimes installation fails with error "Could not find a version that satisfies the requirement xyz". After doing pip install --user xyz --find-links dependencies manually (xyz IS present in the dependencies folder), installation fails with the same "Could not find a version that satisfies the requirement abc" error, but with different package 'abc'. It repeats several times until I manually resolve all failed dependencies. How could I make run pip install --no-index --find-links dependencies -r requirements.txt without those weird dependency errors not finding packages that are already there?
Offline installation for pip packages fails with error "Could not find a version that satisfies the requirement"
0
0
0
2,118
35,484,772
2016-02-18T14:54:00.000
0
0
1
0
python,c++,windows
35,495,227
1
false
0
0
You could have your Python executable call the C++ executable and have that executable take in command-line arguments. So basically, in Python you have the service main code and a few basic cases that call into a normal C++ executable (a rough sketch follows below). Not extremely efficient, but it works.
1
0
0
Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service? By combining, I mean to form a single executable. I want to write a Windows Service, and I've followed some tutorials that show how to do it using C++, i.e. writing the Service Program (in Windows) and using ServiceMain() functions as logical services. However, I prefer not to write the ServiceMain() functions in C++. Instead, I wonder whether I could write these logical services using Python and compile to binary using py2exe. Is this possible? - could I substitute the ServiceMain() functions for py2exe compiled modules? If so, please provide the details on how to do it.
Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service?
0
0
0
75
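A rough sketch of the split described above: the Python side owns the service logic and shells out to the ordinary C++ executable. The path and the command-line arguments are made up.

```python
import subprocess

CPP_EXE = r"C:\Program Files\MyApp\worker.exe"   # hypothetical binary

def run_task(task_name):
    # The C++ program decides what to do based on its command-line arguments.
    proc = subprocess.Popen([CPP_EXE, "--task", task_name],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out
```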
35,484,844
2016-02-18T14:57:00.000
3
0
1
0
python,module,ptvs
42,005,862
3
false
0
0
I just wanted to add the below in addition to the verified answer, for a very specific scenario. I was recently asked to fix the same problem that the OP was experiencing for a work machine, which had recently had the user accounts migrated over to a new domain. Setup: Visual Studio 2013 PTVS 2.2.30718 Anaconda 3.5 Basically, Anaconda was installed for localmachine/UserA. Once the users were migrated over to the new domain (newdomain/UserA), the Python environment had to be updated from within VS2013, by clicking View > Other Windows > Python Environments. Once that was setup, the python scripts would run as expected, although none of the Search Folder references would work. They were then removed and re-added but to no avail. Various other things were tried, including setting up totally fresh projects, and linking them using the Search Paths, but to no avail. The only thing that fixed the problem was to reinstall the Python Environment (in my case Anaconda3) outside of a user account (by clicking the "for all users, using administrator privileges" option during the install). Then I restarted, removed and re-added the search folders, and the python worked as expected, including all the search paths. I hope that helps someone, as I just wasted hours solving it... D :)
1
8
0
In Visual Studio with PTVS I have two separate Python projects, one contains a Python source file named lib.py for use as a library of functions and the other is a main that uses the functions in the library. I am using an import statement in the main to reference the functions in the library project but get the following error: No module named lib I primarily program in F# using Visual Studio so my mindset is adding references to other .NET projects. How do I think in the Pythonic way to accomplish this?
PTVS: How to reference or use Python source code in one project from a second project
0.197375
0
0
7,145
35,485,629
2016-02-18T15:29:00.000
0
0
1
0
python,pandas
35,486,318
2
false
0
0
Figured it out: I specified the data type on import with dtype = {"phone": str, "other_phone": str} (a short sketch follows below).
1
0
1
I'm using pandas to input a list of names and phone numbers, clean that list, then export it. When I export the list, all of the phone numbers have '.0' tacked on to the end. I tried two solutions: A: round() B: converting to integer then converting to text (which has worked in the past) For some reason when I tried A, the decimal still comes out when I export to a text file and when I tried B, I got an unexpected negative ten digit number Any ideas about what's happening here and/or how to fix it? Thanks!
Removing decimals on export python
0
0
0
79
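A short sketch of that fix, assuming pandas; the file and column names follow the answer but are otherwise placeholders.

```python
# Read the phone columns as strings so pandas never casts them to floats,
# which is what produces the trailing ".0" on export.
import pandas as pd

df = pd.read_csv("contacts.csv", dtype={"phone": str, "other_phone": str})
df.to_csv("contacts_clean.csv", index=False)   # numbers round-trip as text
```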
35,488,268
2016-02-18T17:22:00.000
4
0
0
0
python-3.x,google-search
35,488,561
2
false
0
0
Often when searching for Python stuff, I add the search term "python" anyway, because many names refer to entirely different things in the world as well. Using "python3" here appears to solve your problem. I also feel it is a lot less obtrusive than the hacks you describe.
1
17
0
I like to use google when I'm searching for documentation on things related to python. Many times what I am looking for turns out to be in the official python documentation on docs.python.org. Unfortunately, at time of writing, the docs for the python 2.x branch tend to rank much higher on google than the 3.x branch, and I often end up having to switch to the 3.x branch after loading the page for the 2.x documentation. The designers of docs.python.org have made it easy to switch between python versions, which is great; but I just find it annoying to have to switch python versions and wait an extra page load every time I follow a link from google. Has anyone has done anything to combat this? I'd love to hear your solutions. Here's what I've tried so far: clicking on the python 3.x link farther down - this works sometimes, but often the discrepancy in ranking between 2.x and 3.x results is quite big, and the 3.x things are hard to find. copying the url from the search result and manually replacing the 2 with 3 - this works but is also inconvenient.
How to make google search results default to python3 docs
0.379949
0
1
710
35,489,583
2016-02-18T18:29:00.000
0
0
0
0
python,networkx,graphlab
35,491,984
2
false
0
0
Here is the first cut at porting from NetworkX to GraphLab. However, iterating appears to be very slow. temp1 = cc['component_id'] temp1.remove_column('__id') id_set = set() id_set = temp1['component_id'] for item in id_set: nodeset = cc_out[cc_out['component_id'] == item]['__id']
1
0
0
What is the GraphLab equivalent to the following NetworkX code? for nodeset in nx.connected_components(G): In GraphLab, I would like to obtain a set of Vertex IDs for each connected component.
NetworkX to GraphLab Connected Component Conversion
0
0
1
156
35,492,556
2016-02-18T21:14:00.000
12
0
0
0
python,numpy,machine-learning,computer-vision,scikit-learn
35,492,991
1
true
0
0
In sklearn you can do this only for a linear kernel, using SGDClassifier (with an appropriate selection of loss/penalty terms: loss should be hinge, and penalty L2); a small sketch follows below. Incremental learning is supported through the partial_fit method, and this is implemented for neither SVC nor LinearSVC. Unfortunately, in practice, fitting an SVM in incremental fashion on such small datasets is rather useless. SVM has an easily obtainable global solution, thus you do not need pretraining of any form; in fact it should not matter at all, if you are thinking about pretraining in the neural network sense. If correctly implemented, SVM should completely forget the previous dataset. Why not learn on the whole data in one pass? This is what SVM is supposed to do. Unless you are working with some non-convex modification of SVM (then pretraining makes sense). To sum up: from the theoretical and practical point of view there is no point in pretraining an SVM. You can either learn only on the second dataset, or on both at the same time. Pretraining is only reasonable for methods which suffer from local minima (or hard convergence of any kind) and thus need to start near the actual solution to be able to find a reasonable model (like neural networks). SVM is not one of them. You can use incremental fitting (although in sklearn it is very limited) for efficiency reasons, but for such a small dataset you will be just fine fitting the whole dataset at once.
1
7
1
I have two data sets of different sizes: 1) data set 1 is high-dimensional with 4500 samples (sketches); 2) data set 2 is low-dimensional with 1000 samples (real data). I suppose that "both data sets have the same distribution". I want to train a non-linear SVM model using sklearn on the first data set (as a pre-training), and after that I want to update the model on a part of the second data set (to fit the model). How can I develop this kind of update in sklearn? How can I update an SVM model?
How to update an SVM model with new data
1.2
0
0
4,307
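A small sketch of the SGDClassifier route mentioned above; the random arrays are placeholders, and note that partial_fit requires the same number of features in every batch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X_sketches = np.random.rand(4500, 50)            # "pre-training" set (placeholder)
y_sketches = np.random.randint(0, 2, 4500)
X_real = np.random.rand(1000, 50)                # smaller "real" set (placeholder)
y_real = np.random.randint(0, 2, 1000)

clf = SGDClassifier(loss="hinge", penalty="l2")  # linear-SVM-style objective
clf.partial_fit(X_sketches, y_sketches, classes=np.array([0, 1]))
clf.partial_fit(X_real, y_real)                  # incremental update with new data
print(clf.score(X_real, y_real))
```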
35,493,291
2016-02-18T22:00:00.000
0
0
0
1
python,client,rethinkdb,failover,rethinkdb-python
35,513,048
1
false
0
0
Below is my opinion on how I set things up. When the local proxy crashes, it should be restarted by a process monitor like systemd. I don't use the RethinkDB local proxy; I run HAProxy in TCP mode locally on every app server to forward to RethinkDB. I use Consul Template so that when a RethinkDB node joins the cluster, the HAProxy configuration is updated to add the node and HAProxy restarts on its own. HAProxy is very lightweight and rock solid for me. Not just for RethinkDB: HAProxy runs locally and does all kinds of proxying, even MySQL/Redis. HAProxy has all kinds of routing/failover scenarios, like backup backends. (A purely client-side fallback sketch also follows below.)
1
0
0
I have: 4 servers running a single RethinkDB instance in cluster (4 shards / 3 replicas tables) 2 application servers (tornado + RethinkDB proxy) The clients connect only to their local proxy. How to specify both the local + the other proxy so that the clients could fail over to the other proxies when their local proxy crashes or experiences issues?
RethinkDB clients connection failover between proxies
0
0
0
102
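A complementary, purely client-side fallback sketch using the rethinkdb Python driver rather than HAProxy; the proxy hosts are placeholders and the broad exception handling is deliberate for brevity.

```python
import rethinkdb as r

PROXIES = [("127.0.0.1", 28015), ("app2.internal", 28015)]   # hypothetical proxies

def connect_with_failover():
    last_error = None
    for host, port in PROXIES:
        try:
            return r.connect(host=host, port=port)   # try the local proxy first
        except Exception as exc:                     # driver raises a connection error
            last_error = exc
    raise last_error

conn = connect_with_failover()
```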
35,493,485
2016-02-18T22:12:00.000
1
1
0
0
python,email,alert,splunk
35,514,817
2
true
0
0
Well stated, @IvanStarostin. The script should always be located in $SPLUNK_HOME/bin/scripts, or in $SPLUNK_HOME/etc//bin/scripts in the case of an app. When an alert triggers you can select a script to be run in the following way: run the desired search and then click Save as Alert; configure how often your search should run and the conditions under which the alert should be triggered (e.g. when the result count is equal to 0); then select Run a script from the Add Actions menu; enter the file name of the script that you want to run, and you are set up! You can test your script in the search bar too by piping it after your query: ....|script commandname
1
0
0
I have a Splunk query which returns several JSON results and that I want to save as alert, sending regular emails to a list of people. I have created a Python script which takes as input some JSONS like the ones from the Splunk logs and beautifies the results. How can I configure the Splunk alert so that the users get by email the beautified results? Is it possible to configure Splunk to run the Python script on the query results and put the beautified output in the email body? Should I upload the script somewhere?
How to configure Python script to change body for Splunk email alert?
1.2
0
0
1,055
35,493,845
2016-02-18T22:35:00.000
2
0
1
0
python,elasticsearch,kibana
35,494,855
1
true
0
0
Make sure the field is mapped as a date field in ES and not a text field; that is likely your issue. The field name doesn't matter, other than that when you tell Kibana about your index you need to pick the correct field. The ISO/XML datetime format is the default for ES, but it can be changed in the mapping if you need it to be, e.g. 2016-02-18T22:38:27.568Z
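As a hedged illustration (the index name, type name and field names below are made up), this is roughly how you would declare a date mapping and send an ISO-8601 timestamp from the elasticsearch-py client of that era:

```python
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()   # hypothetical local node; adjust hosts as needed

# Explicit mapping so "timestamp" is treated as a date field, not a string.
es.indices.create(index='myindex', ignore=400, body={
    'mappings': {'mytype': {'properties': {'timestamp': {'type': 'date'}}}}
})

doc = {'value': 42, 'timestamp': datetime.utcnow().isoformat() + 'Z'}
es.index(index='myindex', doc_type='mytype', body=doc)   # Kibana can then bucket on "timestamp"
```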
1
0
0
I am sending data to Elasticsearch from a Python script; it works fine and I am able to view it in Kibana. Now I want to insert a timestamp for each record/document so that I can get some plots in Kibana based on the time information, for example the number of documents submitted per five minutes, etc. When I append the time information to each record, it is visible in Kibana as plain data, and furthermore the time field is not there when viewing the mapping section in Elasticsearch HQ. What could be the problem? In what format should I insert the timestamp, and do I need some special field name for it? (I am using Kibana 3; the field name I am using is '_timestamp'.)
Time information in elasticsearch data
1.2
0
0
434
35,495,530
2016-02-19T01:07:00.000
0
0
0
1
python,dll,memory-leaks,ctypes,dllexport
35,613,787
1
true
0
0
I ended up writing a program in C without dynamic memory allocation to test the library. The leak is indeed in one of the functions I'm calling, not the Python program.
1
0
0
I've written an abstraction layer in Python for a piece of commercial software that has an API used for accessing the database back end. The API is exposed via a Windows DLL, and my library is written in Python. My Python package loads the necessary libraries provided by the application, initializes them, and creates a couple of Python APIs on top. There are low level functions that simply wrap the API, and make the functions callable from Python, as well as a higher level interface that makes interaction with the native API more efficient. The problem I'm encountering is that when running a daemon that uses the library, it seems there is a memory leak. (Several hundred KB/s) I've used several Python memory profiling tools, as well as tested each function individually, and only one function seems to leak, yet no tool reports that memory has been lost during execution of that function. On Linux, I would use Valgrind to figure out if the vendor's library was the culprit, but the application only runs on Windows. How can I diagnose whether the vendor is at fault, or if it's the way I'm accessing their library?
Diagnosing memory leak from Windows DLL accessed in Python with ctypes
1.2
0
0
217
35,495,874
2016-02-19T01:44:00.000
1
0
0
0
python,django,authorization,mechanicalturk
35,503,013
1
true
1
0
Every request from AWS will include additional URL parameters: workerId, assignmentId, hitId. That's probably the easiest way to identify a request coming from MTurk. There may be headers, as well, but they're not documented anywhere.
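A minimal sketch of that check in a Django view; the view and template names are hypothetical, and the exact policy (e.g. how you treat MTurk's preview mode) is up to you:

```python
from django.http import HttpResponseForbidden
from django.shortcuts import render

MTURK_PARAMS = ('workerId', 'assignmentId', 'hitId')

def external_question(request):
    # Reject requests that lack the URL parameters MTurk appends.
    if not all(p in request.GET for p in MTURK_PARAMS):
        return HttpResponseForbidden('Not an MTurk request')
    return render(request, 'hit_form.html',
                  {'assignment_id': request.GET['assignmentId']})
```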
1
0
0
I have a django application that I want to host a form on to use as the template for an ExternalHit on Amazon's Mechanical Turk. I've been trying to figure out ways that I can make it so only mturk is authorized to view this document. One idea I've been considering is looking at the request headers and confirming that the request came from Amazon. However, I couldn't find any documentation regarding any of these topics and I am worried that if the source of the request ever changes the page will become inaccessible to mturk. Anyone have any suggestions or solutions that they have implemented? Fyi, I'm using python/django/boto.
What options are there for verifying that mturk is requesting my ExternalQuestion and not a 3rd party?
1.2
0
1
59
35,496,055
2016-02-19T02:05:00.000
2
0
1
0
python,anaconda
35,496,124
1
true
0
0
No, you shouldn't need to uninstall anything. Anaconda, including its own Python distribution, lives in a separate directory. Anaconda adjusts the paths to make this work, so if some things relied on specifics of your old Python paths, those may break, but that's about all.
1
3
0
I had some issues with matplotlib in virtualenvironments on Python and was recommended to uninstall 3.5 to install anaconda as a result. If so, do I need to pip uninstall everything (both globally and on my user) I see from pip freeze as well as everything I've installed with brew? Or will Anaconda be able to utilize what is already installed?
Do I need to uninstall Python 3.5 before installing Anaconda on OSX?
1.2
0
0
2,468
35,496,145
2016-02-19T02:15:00.000
7
0
1
0
python,arrays,algorithm,big-o
35,496,208
5
false
0
0
There is a very simple-looking solution that is O(n): XOR the elements of your sequence together using the ^ operator. The final value of the accumulator will be the unique number. The proof is simple: XOR-ing a number with itself yields zero, so since every number except one appears together with its duplicate, the net result of XOR-ing all the duplicated numbers is zero. XOR-ing the unique number with zero yields the number itself.
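Using the example list from the question, a one-line version of this idea:

```python
from functools import reduce
from operator import xor

L = [1, 4, 2, 6, 4, 3, 2, 6, 3]
unique = reduce(xor, L)   # -> 1, in O(n) time and O(1) extra space
print(unique)
```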
2
3
0
For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's the pseudocode of what I had in mind: initialize a dictionary to store the number of occurrences of each element (~O(n)), then look through the dictionary to find the element whose count is 1 (~O(n)). This keeps the total time complexity at O(n). Does this seem like the right idea? Also, if the array were sorted, how would the time complexity change? I'm thinking it would be some variation of binary search which would reduce it to O(log n).
Find the unique element in an unordered array consisting of duplicates
1
0
0
1,157
35,496,145
2016-02-19T02:15:00.000
1
0
1
0
python,arrays,algorithm,big-o
35,496,242
5
false
0
0
Your outlined algorithm is basically correct, and it's what the Counter-based solution by @BrendanAbel does. I encourage you to implement the algorithm yourself without Counter as a good exercise. You can't beat O(n) even if the array is sorted (unless the array is sorted by the number of occurrences!). The unique element could be anywhere in the array, and until you find it, you can't narrow down the search space (unlike binary search, where you can eliminate half of the remaining possibilities with each test).
2
3
0
For example, if L = [1,4,2,6,4,3,2,6,3], then we want 1 as the unique element. Here's the pseudocode of what I had in mind: initialize a dictionary to store the number of occurrences of each element (~O(n)), then look through the dictionary to find the element whose count is 1 (~O(n)). This keeps the total time complexity at O(n). Does this seem like the right idea? Also, if the array were sorted, how would the time complexity change? I'm thinking it would be some variation of binary search which would reduce it to O(log n).
Find the unique element in an unordered array consisting of duplicates
0.039979
0
0
1,157
35,497,392
2016-02-19T04:36:00.000
0
1
0
0
python,django,iis,fastcgi,gdal
35,591,876
1
true
1
0
Solved it by restarting the machine.
1
0
0
I set up a Django website via IIS Manager, which works fine, then I added a function that uses the GDAL libs and the function works fine too. It is also fine if I run the website from CMD with the command python path\manage.py runserver 8000. But it cannot run via IIS: I get the error DLL load failed: The specified module could not be found., raised by from osgeo import gdal, osr. My guess is I need to add environment variables to the FastCGI settings of IIS. I added these two environment variables to the collection but it does not work: GDAL_DATA C:\Program Files (x86)\GDAL\gdal-data and GDAL_DRIVER_PATH C:\Program Files (x86)\GDAL\gdalplugins. Any help would be appreciated.
How to setup FastCGI setting of IIS with GDAL libs
1.2
0
0
134
35,505,089
2016-02-19T12:12:00.000
0
1
0
0
python,amazon-web-services,aws-lambda
42,859,631
2
false
1
0
You also have to include the query string parameter in the section Resources/Method Request.
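Assuming the mapping template shown in the question is attached to the method, and 'itemid' is declared under Method Request -> URL Query String Parameters, the Python Lambda handler would see the value roughly like this (a sketch, not a definitive setup):

```python
def lambda_handler(event, context):
    # The mapping template {"itemid": "$input.params('itemid')"} turns the
    # query string parameter into a plain key on the event dict.
    item_id = event.get('itemid')
    return {'itemid': item_id}
```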
1
0
0
I am creating an api with AWS API Gateway with Lambda functions. I want to be able to make an API call with the following criteria: In the method request of the API i have specified the Query String: itemid I want to be able to use this itemid value within my lambda function I am using Python in Lambda I have tried putting the following in the Mapping template under the Method execution, however get an error: -{ "itemid": "$input.params('itemid')" }
AWS Lambda parameter passing
0
0
0
1,250
35,507,732
2016-02-19T14:32:00.000
0
0
1
1
python,azure,queue
35,508,696
2
false
0
0
One possible strategy could be to use WebJobs. WebJobs can execute Python scripts and run on a schedule. Say you run a WebJob every 5 minutes: the Python script can poll the queue, do some processing and post the results back to your API.
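A very rough sketch of what such a WebJob script might do, assuming the legacy azure-servicebus SDK of that era; the namespace, key, queue name, callback field and run_script helper are all placeholders:

```python
import json
import requests
from azure.servicebus import ServiceBusService   # legacy SDK (assumption)

def run_script(payload):
    # Placeholder for whatever Python script the request asks to run.
    return {'ok': True, 'echo': payload}

bus = ServiceBusService(service_namespace='my-namespace',             # placeholder
                        shared_access_key_name='RootManageSharedAccessKey',
                        shared_access_key_value='<access-key>')       # placeholder

msg = bus.receive_queue_message('script-requests', peek_lock=False)   # placeholder queue
if msg.body is not None:
    request = json.loads(msg.body)
    result = run_script(request.get('input'))
    requests.post(request['callback_url'], json=result)               # post result back to the API
```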
1
0
0
I'm trying to define an architecture where multiple Python scripts need to be run in parallel and on demand. Imagine the following setup: script requestors (web API) -> Service Bus queue -> script execution -> result posted back to script requestor To this end, the script requestor places a script request message on the queue, together with an API endpoint where the result should be posted back to. The script request message also contains the input for the script to be run. The Service Bus queue decouples producers and consumers. A generic set of "workers" simply look for new messages on the queue, take the input message and call a Python script with said input. Then they post back the result to the API endpoint. But what strategies could I use to "run the Python scripts"?
Using Microsoft Azure to run "a bunch of Python scripts on demand"
0
0
0
634
35,508,255
2016-02-19T14:58:00.000
-1
0
1
0
python,regex
35,508,334
3
false
0
0
Something like: [^a-zA-Z\-](ios)[^a-zA-Z\-] This might, however, be problematic at the beginning or the end of a line.
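A hedged alternative using lookarounds keeps the same idea but also handles the start and end of the string; here it is run against the five examples from the question:

```python
import re

# Lookaround version of the same character-class idea.
pattern = re.compile(r'(?<![A-Za-z-])ios(?![A-Za-z-])')

tests = ['It seems carpedios',
         'I like "ios" because they have blue products',
         'I like carpedios and ios',
         'I like carpedios and ios.',
         'i like carped-ios']
for t in tests:
    print(bool(pattern.search(t)), t)   # False, True, True, True, False
```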
1
5
0
After some search this seems more difficult than I thought: I am trying to write a regular expression in Python to find a word which is not surrounded by other letters or dashes. In the following examples, I am trying to match ios: It seems carpedios I like "ios" because they have blue products I like carpedios and ios I like carpedios and ios. i like carped-ios The matches should be as follows: 1: don't match because ios is after d. 2: match because ios is not surrounded by letters. 3: match because one of ios is not surrounded by letters. 4: match because one of ios is not surrounded by letters. 5: don't match because ios is followed by -. How to do it with regex?
Find word not surrounded by alpha char
-0.066568
0
0
297
35,509,019
2016-02-19T15:35:00.000
1
0
1
1
python,linux
35,509,182
1
false
0
0
Look at setuptools and distutils. These are the classic tools for Python packaging.
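As a hedged sketch (the package name, dependency and entry point below are placeholders), a minimal setup.py built on setuptools looks like this; running pip install . or python setup.py install then pulls in the dependencies automatically:

```python
from setuptools import setup, find_packages

setup(
    name='mygui-app',                    # placeholder project name
    version='1.0',
    packages=find_packages(),
    install_requires=['some-external-lib'],   # installed automatically as a dependency
    entry_points={'console_scripts': ['mygui-app = myguiapp.main:main']},  # placeholder
)
```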
1
3
0
I have created a simple software with GUI. It has several source files. I can run the project in my editor. I think it is ready for the 1.0 release. But I don't know how to create a setup/installer for my software. The source is in python. Environment is Linux(Ubuntu). I used an external library which does not come with standard Python library. How can I create the installer, so that I just distribute the source code in the tar file. And the user installs the software on his machine(Linux) by running a setup/installer file? Please note: When the setup is run, it should automatically take care of the dependencies.(Also, I don't want to build an executable for distribution.) Something similar to what happens when I type: sudo apt-get install XXXX
How to create setup/installer for my Python project which has dependencies?
0.197375
0
0
137
35,514,183
2016-02-19T20:16:00.000
1
0
1
1
python,redis,celery
35,532,031
2
true
0
0
If you need to preserve the native Python data structure, I'd recommend using one of the serialization modules such as cPickle, which will preserve the data structure but won't be readable outside of Python.
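If you go the serializer route instead of switching to dicts, a hedged Celery 3.x-style configuration sketch (the broker URL is a placeholder) would be:

```python
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')   # placeholder broker URL
app.conf.update(
    CELERY_TASK_SERIALIZER='pickle',     # tuples/namedtuples survive the round trip
    CELERY_RESULT_SERIALIZER='pickle',
    CELERY_ACCEPT_CONTENT=['pickle'],
)
```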
1
4
0
I noticed this when using the delay() function to asynchronously send tasks. If I queue a task such as task.delay(("tuple",)), celery will store the argument as ["tuple"] and later the function will get the list back and not the tuple. Guessing this is because the data is being stored into json. This is fine for tuples, however I'm using namedtuples which can no longer be referenced properly once converted to a list. I see the obvious solution of switching the namedtuples out with dicts. Is there any other method? I couldn't seem to find anything in the configuration for celery. I'm using redis as the broker.
Python celery - tuples in arguments converted to lists
1.2
0
0
951
35,514,886
2016-02-19T20:57:00.000
1
0
1
0
python,pyqt,pyqt4,python-3.4,large-data
35,515,404
2
false
0
1
You might consider the HDF5 format, which can access using h5py, pytables, or other python packages. Depending on the dataformat, HDF5 could enable you to access the data on the HD in an efficient manner, which in practice means that you can save memory. The downside is that it requires some effort on your side as a programmer.
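A tiny h5py sketch of that idea; the file name, dataset name and shapes are arbitrary:

```python
import numpy as np
import h5py

# Write a large array to disk once...
with h5py.File('measurements.h5', 'w') as f:
    f.create_dataset('samples', data=np.random.rand(100000, 10))

# ...then read back only the slices you actually need.
with h5py.File('measurements.h5', 'r') as f:
    chunk = f['samples'][0:1000]   # loads just the first 1000 rows into memory
    print(chunk.mean())
```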
1
1
0
I'm working on an application using Python (3.4) and PyQt. The goal of the program is to manage and analyze large amount of data - up to ~50 binary files, which might be of total size up to 2-3 GB. When I tried to load a couple files into the program, it stops responding during loading and then takes ~1.5GB RAM just to keep running. My question is quite general - what are the possible methods in python/PyQt for handling such data sets?
Python - managing large data
0.099668
0
0
367
35,516,720
2016-02-19T23:07:00.000
1
0
0
0
python,django,performance
35,516,808
1
false
1
0
Yes, any content within the {% if condition %} and {% endif %} tags will not be sent the client if condition evaluates to False. It is not hidden via CSS, the content will simply not exist in the response. It will also reduce the size of your HTTP response.
1
0
0
{% if foo == 1 %} <-- blah blah blah -->> {% endif %} If the if block above evaluates to false, would the content inside the if block still be rendered to the client but hidden instead? If not, is this an acceptable way to reduce page load?
Django: Can template tags prevent content to be rendered to the clients?
0.197375
0
0
59
35,516,849
2016-02-19T23:18:00.000
1
0
0
1
apache-kafka,kafka-python
35,533,318
1
true
0
0
I'm actually not sure for Kafka 0.9, haven't yet had the need to go over the new design thoroughly, but AFAIK this wasn't possible in v8. It certainly wasn't possible with the low-level consumer, but I also think that, if you assign more threads than you have partitions in the high-level consumer, only one thread per partition would be active at any time. This is why we say that parallelism in Kafka is determined by the number of partitions (which can be dynamically increased for a topic). If you think about it, that would require coordination on the message level between the consuming threads, which would be detrimental to performance. Consumer groups in v0.8 were used to make the thread -> partition assignment a responsibility of Kafka, not to coordinate multiple threads over a single partition. Now, it could be that this changed in 0.9, but I doubt that very much. [EDIT] Now that I'm reading your question once again, I hope I understood your question correctly. I mean, having multiple consumers (not consumer threads) per partition is a regular thing (each has its own offset), so I assumed you were asking about threads/partitions relationship.
1
0
0
For context, I am trying to transfer our python worker processes over to a kafka (0.9.0) based architecture, but I am confused about the limitations of partitions with respect to the consumer threads. Will having multiple consumers on a partition cause the other threads on the same partition to wait for the current thread to finish?
Mulitple Python Consumer Threads on a Single Partition with Kafka 0.9.0
1.2
0
0
793
35,516,906
2016-02-19T23:22:00.000
1
0
1
0
python,authentication,jupyter,jupyter-notebook,jupyterhub
35,571,314
2
false
0
0
There is a login hook in the config; you can write your own authentication there.
1
1
0
I've setup a Jupyter Notebook server with appropriate password and SSL so it is accessed via HTTPS. However, I'm looking now for a way to enforce a two factor authentication with username and password for loging in. The current Jupyter Notebook server only asks for a password and I hence have to create a shared one (no username though). I know about JupyterHub, but at the moment I'm looking for a way to add a username (or multiple usernames) and correspond password (passwords), so that everyone can access the same work space without necessarily having credentials on the Linux server side. Is this even possible, or do I have to resort to deploying a JupyterHub server?
two factor authentication with username and password for a Jupyter Notebook server
0.099668
0
0
2,499
35,522,767
2016-02-20T11:45:00.000
1
0
0
0
python,numpy,scikit-learn
37,745,749
1
false
0
0
I met the same problem today and have now solved it. The cause was that I had installed numpy manually, while I used pip to install the other packages. To solve it: find the old version of numpy (you can import numpy and print its path), delete that folder, then use pip to install it again.
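A quick way to locate the old copy (the first step above) is:

```python
import numpy
print(numpy.__version__)   # which version actually gets imported
print(numpy.__file__)      # where it lives on disk: the folder to remove
```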
1
0
1
I have been trying to install and use scikit-learn and nltk. However, I get the following error while importing anything: Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/site-packages/sklearn/init.py", line 57, in from .base import clone File "/usr/local/lib/python2.7/site-packages/sklearn/base.py", line 11, in from .utils.fixes import signature File "/usr/local/lib/python2.7/site-packages/sklearn/utils/init.py", line 10, in from .murmurhash import murmurhash3_32 File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling I did a pip uninstall numpy followed by a pip install numpy and also a pip uninstall scikit-learn and again reinstalled it. But the error persists.
Installation of nltk and scikit-learn
0.197375
0
0
474
35,524,022
2016-02-20T13:37:00.000
4
0
0
1
python,django,shell,subprocess,gunicorn
35,524,148
1
true
1
0
1) The user who runs gunicorn has no permission to run .sh files. 2) Your .sh file is not executable. 3) Try using the full path to the file. Also, which error do you get when trying to run it in production?
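For point 3, a hedged sketch of the view code with absolute paths (gunicorn's working directory and PATH usually differ from your shell's); every path and value here is a placeholder for your real layout:

```python
import subprocess

SCRIPT_DIR = '/home/deploy/project/stored'            # placeholder absolute path
outputfile, url = 'out.json', 'http://example.com'    # placeholders for the real arguments

cmd = '/usr/bin/xvfb-run {}/all_crawlers.sh {} {}'.format(SCRIPT_DIR, outputfile, url)
subprocess.call(cmd, shell=True, cwd=SCRIPT_DIR)      # absolute paths, explicit cwd
```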
1
1
0
I have a Django 1.9 project deployed using gunicorn, with a view that contains the line subprocess.call(["xvfb-run ./stored/all_crawlers.sh "+outputfile+" " + url], shell=True, cwd=path_to_sh_file) which runs fine with ./manage.py runserver but fails on deployment (deployed with gunicorn and WSGI). Any suggestion on how to fix it?
Django deployed project not running subprocess shell command
1.2
0
0
309
35,531,367
2016-02-21T01:51:00.000
1
0
0
0
python,pandas
35,531,393
4
false
0
0
Try this method: Create a duplicate data set. Use .mode() to find the most common value. Pop all items with that value from the set. Run .mode() again on the modified data set.
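A hedged pandas sketch of that recipe on toy data (plus a value_counts shortcut that gives the same answer):

```python
import pandas as pd

s = pd.Series([1, 1, 1, 2, 2, 3])            # toy data
first_mode = s.mode()[0]                      # most common value -> 1
second_mode = s[s != first_mode].mode()[0]    # drop it, take the mode again -> 2
print(first_mode, second_mode)

# Equivalent shortcut via frequency counts:
print(s.value_counts().index[1])              # second most common value -> 2
```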
1
0
1
So I'm generating a summary report from a data set. I used .describe() to do the heavy work but it doesn't generate everything I need i.e. the second most common thing in the data set. I noticed that if I use .mode() it returns the most common value, is there an easy way to get the second most common?
In pandas, how to get 2nd mode
0.049958
0
0
2,825
35,534,039
2016-02-21T08:39:00.000
0
0
0
0
python,django,django-rest-framework
38,318,621
1
false
1
0
One simple way would be to just reset the index in transform_dataframe method. df = df.reset_index() This would just add a new index and set your old index as a column, included in the output.
1
0
1
Background: I am using django-rest-pandas for serving JSON & XLS. Observation: when I hit the URL with format=xls, I get the complete data in the downloaded file. But for format=json, the index field of the dataframe is not part of the records. Question: how can I make django-rest-pandas include the dataframe's index field in the JSON response? Note that the index field is present as part of the serializer (extending serializers.ModelSerializer).
Include django rest pandas dataframe Index field in json response
0
0
0
363
35,534,170
2016-02-21T08:56:00.000
5
0
0
0
python,django,refactoring
35,534,947
1
true
1
0
This is not necessary, since Django will only pick up the updated files, and the whole idea of collectstatic is that you don't have to manually manage the static files. However, if the old files do take up a lot of space, once in a while you can delete all the files and directories in the static directory and then run collectstatic again. Then the /static/ dir will include only the current files. Before you do this, check how long it takes and prepare for maintenance. Note: deleting and re-creating all files may still require these files to be reloaded by client browsers or a CDN. It depends on your specific configuration: CDN, caching headers that use the file creation dates, etc.
1
3
0
Is the any automated way to remove (or at least mark) unused non-used (non-referenced) files located in /static/ folder and its sub-folders in Django project?
Django: removing non-used files
1.2
0
0
1,427
35,535,422
2016-02-21T11:19:00.000
0
1
0
0
python
59,456,007
6
false
0
0
When working with Python projects it's always a good idea to create a so-called virtual environment; this way your modules will be better organized and import errors are reduced. For example, let's assume that you have a script.py which imports multiple modules, including pypiwin32. Here are the steps to solve your problem: 1. Depending on your operating system, download and install the virtualenv package; on Debian it's as simple as sudo apt install virtualenv. 2. After installing the virtualenv package, go to your project/script folder and create a virtualenv with virtualenv venv (it creates a folder named venv in that directory). 3. Activate your virtualenv with source /path/to/venv/bin/activate; if you are already in the directory where venv exists, just issue source venv/bin/activate. 4. After activating your venv, install your project dependencies: pip install pypiwin32 or pip install pywin. 5. Run your script; it won't throw that error again :)
3
6
0
I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com". Any ideas?
No module named win32com
0
0
0
17,931
35,535,422
2016-02-21T11:19:00.000
8
1
0
0
python
35,535,450
6
true
0
0
As it is not built into Python, you will need to install it. pip install pywin
3
6
0
I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com". Any ideas?
No module named win32com
1.2
0
0
17,931
35,535,422
2016-02-21T11:19:00.000
1
1
0
0
python
59,476,830
6
false
0
0
This will work as well python -m pip install pywin32
3
6
0
I've just installed Python for the first time and I'm trying to reference the win32com module however, whenever I try to import it I get the message "no module name win32com". Any ideas?
No module named win32com
0.033321
0
0
17,931
35,538,946
2016-02-21T16:53:00.000
1
0
1
0
python,cython
35,573,200
1
true
0
1
I guess you can just cast to void *, pass it into your container, then cast back to your extension type. It's up to you to ensure you still hold a reference to the object so that the pointer does not become invalid.
1
0
0
I have a multithreaded cython application and would like to pass an extension type between threads that holds a pointer to a thread safe Circular buffer that also makes various calculations. Is there any way to make a c++ container handle a Extension type?
Gil-less container for cython extension type
1.2
0
0
143
35,539,077
2016-02-21T17:03:00.000
1
0
0
0
python,matplotlib,dct
41,886,452
1
false
0
0
Are you willing to use a library outside of numpy, scipy, and matplotlib? If so you can use skimage.color.rgb2yiq() from the scikit-image library.
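A hedged sketch combining the two pieces: it uses a sample image that ships with scikit-image and builds a 2-D DCT out of scipy.fftpack.dct, since scipy has no direct dct2() equivalent:

```python
import numpy as np
from skimage import data
from skimage.color import rgb2yiq
from scipy.fftpack import dct

rgb = data.astronaut() / 255.0        # sample RGB image bundled with scikit-image
yiq = rgb2yiq(rgb)                    # same job as MATLAB's rgb2ntsc()

def dct2(block):
    # 2-D DCT built by applying the 1-D DCT along each axis.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

y_dct = dct2(yiq[:, :, 0])            # DCT of the luminance (Y) channel
print(y_dct.shape)
```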
1
1
1
In MATLAB we have the rgb2ntsc() function to get the YIQ components of an RGB image. Is there a similar function available in Python (numpy, matplotlib or scipy libraries)? Also, to apply a discrete cosine transform (to compress it) we can use dct2() in MATLAB; is there a similar function in Python?
how to convert a RGB image to its YIQ components in python , and then apply dct transform to compress it?
0.197375
0
0
1,415
35,542,519
2016-02-21T21:45:00.000
2
0
1
0
python,nltk
35,542,694
1
false
0
0
You haven't given us much to go on. But let's assume you have a paragraph of text. Here's one I just stole from a Yelp review: What a beautiful train station in the heart of New York City. I've grown up seeing memorable images of GCT on newspapers, in movies, and in magazines, so I was well aware of what the interior of the station looked like. However, it's still a gem. To stand in the centre of the main hall during rush hour is an interesting experience- commuters streaming vigorously around you, sunlight beaming in through the massive windows, announcements booming on the PA system. It's a true NY experience. Okay, there are a bunch of words there. What kind of words do you want? Adjectives? Adverbs? NLTK will help you "tag" the words, so you can find all the ad-words: "beautiful", "memorable", "interesting", "massive", "true". Now, what are you going to do with them? Maybe you can throw in some verbs and nouns, "beaming" sounds pretty good. But "announcements" isn't so interesting. Regardless, you can build an associations database. This ad-word appears in a paragraph with these other words. Maybe you can count the frequency of each word, over your total corpus. Maybe "restaurant" appears a lot, but "pesthole" is relatively rare. So you can filter that way? (Only keep "interesting" words.) Or maybe you go the other way, and extract synonyms: if "romantic" and "girlfriend" appear together a lot, then call them "correlated words" and use them as part of your search engine? We don't know what you're trying to accomplish, so it's hard to make suggestions. But yes, NLTK can help you select certain subgroups of words, IF that's actually relevant.
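If tagging is indeed what you want, a hedged NLTK sketch (using part of the review paragraph above, and requiring the punkt and POS tagger data to have been downloaded) looks like this:

```python
import nltk  # assumes nltk.download('punkt') and the default POS tagger data are available

text = ("What a beautiful train station in the heart of New York City. "
        "To stand in the centre of the main hall during rush hour is an "
        "interesting experience.")

tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
# Keep adjectives (JJ*) and adverbs (RB*), the "ad-words" discussed above.
ad_words = [w for w, tag in tagged if tag.startswith('JJ') or tag.startswith('RB')]
print(ad_words)
```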
1
0
0
Given words like "romantic" or "underground", I'd like to use python to go through a list of text data and retrieve entries that contain those words and associated words such as "girlfriend" or "hole-in-the-wall". It's been suggested that I work with NLTK to do this, but I have no idea where to start and I know nothing about language processing or linguistics. Any pointers would be much appreciated.
finding word associations using natural language processing
0.379949
0
0
2,265
35,544,448
2016-02-22T01:32:00.000
3
0
1
0
python,pygame
35,544,509
1
true
0
1
You should only call pygame.init() and pygame.quit() in one and the same file: the main file where your game loop runs. You will need other scripts for different things, but all of those you can just import in this main file where the game loop runs. If you find this confusing, check out some pygame projects on GitHub; that will help you understand the structure. Also, studying a little object orientation will help you a lot when making games.
1
0
0
I am programming a game with multiple script files and I am wondering, on the files that I have used pygame.init(), do I have to call pygame.quit() at the end of the file?
Python & Pygame - Does pygame.quit() have to be written at the end of every .py file?
1.2
0
0
62
35,544,800
2016-02-22T02:18:00.000
1
0
1
0
python,excel,combinations
35,557,793
1
false
0
0
I was told to use pandas to get at each of the individual states in the Excel file. I then use a dictionary structure to store the per-state values and look up the states from the file against it.
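A hedged pandas sketch of that approach; the file names, column names and sample values are assumptions about how the two spreadsheets are laid out:

```python
import pandas as pd

people = pd.read_excel('licenses.xlsx')        # assumed columns: Name, States ("AZ,CA,DC")
values = pd.read_excel('state_values.xlsx')    # assumed columns: State, Value

# e.g. {'AZ': 4, 'CA': 30, 'DC': 23, ...}
state_value = dict(zip(values['State'], values['Value']))

# Split each comma-separated list and sum the per-state values.
people['Total'] = people['States'].apply(
    lambda s: sum(state_value.get(st.strip(), 0) for st in s.split(',')))
print(people[['Name', 'Total']])
```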
1
0
0
I have a file that has a column with names, and another with comma separated US licenses, for example, AZ,CA,CO,DC,HI,IA,ID; but any combination of 50 states is possible. I have another file that has a certain value attached to each state, for example AZ=4, CA=30, DC=23, and so on for all 50. I need to add up the amount that each person is holding via their combination of licenses. Say, someone with just CA, would have 30, while some one with AZ, CA and DC, would end up with 30+4+23=57; and any combination of 50 licenses is possible. I know a bit of Python, but not enough to know how to even get started, what packages to use, what the architecture should be.. Any guidance is appreciated. Thank you.
How do I parse out all US states from comma separated strings in Python from an excel file.
0.197375
0
0
52
35,544,961
2016-02-22T02:39:00.000
1
0
0
1
python,unix,lsof
35,546,294
1
false
0
0
If you know the PID (eg. 12345) of the process, you can determine the entire argv array by reading the special file /proc/12345/cmdline. It contains the argv array separated by NUL (\0) characters.
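A small helper that does exactly that (demonstrated on the current process, but any PID works):

```python
import os

def full_argv(pid):
    """Return the complete argv of a process from /proc/<pid>/cmdline."""
    with open('/proc/{}/cmdline'.format(pid), 'rb') as f:
        raw = f.read()
    # argv entries are separated by NUL bytes; the trailing NUL yields an empty item.
    return [arg.decode() for arg in raw.split(b'\0') if arg]

print(full_argv(os.getpid()))   # demo on the current process; pass any PID you like
```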
1
3
0
I currently have a python script that accomplishes a very useful task in a large network. What I do is use lsof -iTCP -F and a few other options to dump all listening TCP sockets. I am able to get argv[0], this is not a problem. But to get the full argv value, I need to then run a ps, and map the PIDs together, and then merge the full argv value from ps into the record created by lsof. This feels needlessly complex, and for 10000+ hosts, in Python, it is very slow to merge this data. Is there any way to show the full argv value w/lsof? I have read the manual and I couldn't find anything, so I am not too hopeful there is any way to do this. Sure, I could write a patch for lsof, but then I'd have to deploy it to 10000+ systems, and that's a non-starter at this point. Also, if anyone has any clever ways to deal with the processing in Python such that it doesn't take 10 minutes to merge the data, I'd love to know. Currently, I load all the lsof and ps data into a dict where the key is (ip,pid) and then I merge them. I then create a new dict using the data in the merged dict where the key is (ip,port). This is really slow because the first two processes require iterating over all the lsof data. This is probably not a question, but I figured I'd throw it in here. My only idea at this point is to count processers and spawn N subprocesses, each with a chunk of the data to parse, then return them all back to the parent.
Is there any way for lsof to show the entire argv array instead of just argv[0]
0.197375
0
0
150
35,545,822
2016-02-22T04:27:00.000
0
0
0
0
python,amazon-web-services,flask,amazon-elastic-beanstalk
35,546,210
1
true
1
0
My best guess is that adding if __name__ == '__main__' didn't fix anything, but it coincidentally happened to work that time.
1
1
0
I was setting up a simple flask app on AWS with Elastic Beanstalk, but had a bug that would result in a timeout error when visiting the page ERROR: The operation timed out. The state of the environment is unknown. when running 'eb create'). Ultimately I fixed it by inserting the standard if __name__ == '__main__': condition before appplication.run() which I had originally excluded. My question is: Why should the conditional be necessary for Elastic Beanstalk to run the application? I thought the only purpose of __name__ == '__main__' was so that code does not run when used as a module and I don't see why the absence of the conditional would prevent code from running.
if __name__ == "__main__" condition with flask/Elastic Beanstalk
1.2
0
0
1,304
35,549,309
2016-02-22T08:50:00.000
1
0
0
0
python,mysql,django
35,551,670
2
false
1
0
You should delete the migrations folder inside your app folder. You should also delete the database file, if there is one (for SQLite there is a file called db.sqlite3 in the root project folder, but I'm not sure how this works for MySQL). Then run makemigrations and migrate.
1
1
0
I'm trying to reset my django database so I've run manage.py sqlflush and run that output in MySQL. I've then run manage.py flush. I think this should clear everything. I've then run manage.py makemigrations which seemed to identify all tables that would need building but when I run manage.py migrate it says nothing needs doing and so I now don't have any tables when I run my app.
Rebuilding Django development server database
0.099668
0
0
3,441
35,551,326
2016-02-22T10:31:00.000
27
0
0
0
python,tensorflow,tensorboard
43,568,782
3
false
0
0
You should provide a port flag (--port=6007). But I am here to explain how you can find it and other flags without any documentation. Almost all command line tools have a flag -h or --help which shows all the flags the tool accepts. By running it you will see information about the port flag, and that --logdir also allows you to pass a comma-separated list of log directories; you can also inspect separate event files and tags with the --event_file and --tag flags.
1
63
1
Is there a way to change the default port (6006) on TensorBoard so we could open multiple TensorBoards? Maybe an option like --port="8008"?
Tensorflow Tensorboard default port
1
0
0
72,323
35,552,667
2016-02-22T11:39:00.000
0
0
0
0
python,scipy,interpolation,nan
35,576,424
2
false
0
0
Spline fitting/interpolation is global, so it's likely that even a single nan is messing up the whole mesh.
2
0
1
I have a gridded velocity field that I want to interpolate in Python. Currently I'm using scipy.interpolate's RectBivariateSpline to do this, but I want to be able to define edges of my field by setting certain values in the grid to NaN. However, when I do this it messes up the interpolation of the entire grid, effectively making it NaN everywhere. Apparently this is an error in the scipy fitpack, so what would be the best way to work around this? I want to be able to keep the NaNs in the grid to work with edges and out of bounds later on, but I don't want it to affect the interpolation in the rest of the grid.
Ignoring NaN when interpolating grid in Python
0
0
0
776
35,552,667
2016-02-22T11:39:00.000
1
0
0
0
python,scipy,interpolation,nan
35,552,764
2
false
0
0
All languages that implement floating point correctly (which includes python) allow you to test for a NaN by comparing a number with itself. x is not equal to x if, and only if, x is NaN. You'll be able to use that to filter your data set accordingly.
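A small numpy illustration of both the x != x trick and its explicit np.isnan equivalent, applied to a toy grid:

```python
import numpy as np

grid = np.array([[1.0, 2.0, np.nan],
                 [4.0, np.nan, 6.0]])

mask = grid == grid          # False exactly where the value is NaN
print(grid[mask])            # only the finite values -> [1. 2. 4. 6.]
print(np.isnan(grid))        # numpy's explicit equivalent of the same test
```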
2
0
1
I have a gridded velocity field that I want to interpolate in Python. Currently I'm using scipy.interpolate's RectBivariateSpline to do this, but I want to be able to define edges of my field by setting certain values in the grid to NaN. However, when I do this it messes up the interpolation of the entire grid, effectively making it NaN everywhere. Apparently this is an error in the scipy fitpack, so what would be the best way to work around this? I want to be able to keep the NaNs in the grid to work with edges and out of bounds later on, but I don't want it to affect the interpolation in the rest of the grid.
Ignoring NaN when interpolating grid in Python
0.099668
0
0
776
35,555,798
2016-02-22T14:11:00.000
3
0
1
0
python,spyder
35,555,927
1
true
0
0
Simply removing the line which was an issue and starting Spyder did the trick. Spyder rebuilt the spyder.ini file upon running spyder.exe.
1
1
0
I'm working with WinPython and Spyder, and somehow spyder wouldn't start. It would briefly flash an error message of which the relevant line is: ConfigParser.ParsingError: File contains parsing errors: D:\progs\WinPython-64bit-2.7.10.3\settings\.spyder\spyder.ini [line 431]: u'_/switch to'. Then delving into that file it seems to be clipped. It abruptly ends on line 431 with _/switch to in the [shortcuts] section of the file. Can anyone link me to a complete spyder.ini file, I can't find it in the spyder github? Or if it's the last line (or one of the last few lines), provide me with the bit I'm missing?
Replace corrupted spyder.ini file (with winpython 64)
1.2
0
0
1,152
35,560,068
2016-02-22T17:29:00.000
8
0
1
0
python
35,560,186
2
true
0
0
You are missing that Python's division (from Python 3 on) is by default float division, so precision is lost in the intermediate result. Force integer division by using // instead of / and you will get the same result.
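To see the difference on the numbers from the question:

```python
print(int((226553150 * 1023473145) / 5))   # 46374212988031352: / produces a rounded float
print((226553150 * 1023473145) // 5)       # 46374212988031350: // keeps exact integer arithmetic
```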
1
5
0
I'm trying to execute the following code: int((226553150 * 1023473145) / 5). Python 3 gives me the answer 46374212988031352, but Ruby and Swift give me the answer 46374212988031350. What am I missing?
python 3 long ints and multiplication
1.2
0
0
575
35,561,072
2016-02-22T18:26:00.000
0
0
1
0
python,loops,python-3.x,functional-programming
35,561,218
3
false
0
0
In a recursive function there are two main components: the recursive call and the base case. The recursive call is when you call the function from within itself, and the base case is where the function returns/stops calling itself. For your recursive call, you want n * factorial(n-1), because this is essentially the definition of a factorial (n * (n-1) * (n-2) * ... * 2). Following this logic, the base case should be when n == 2. Good luck, hope I was able to point you in the right direction.
1
2
0
So we just started learning about loops and got this assignment def factorial_cap(num): For positive integer n, the factorial of n (denoted as n!), is the product of all positive integers from 1 to n inclusive. Implement the function that returns the smallest positive n such that n! is greater than or equal to argument num. Examples: factorial_cap(20) → 4 #3!<20 but 4!>20 factorial_cap(24) → 4 #4!=24 Can anyone give me a direction as to where to start? I am quite lost at how to even begin to start this. I fully understand what my program should do, just not how to start it.
Starting loops with python
0
0
0
124
35,561,176
2016-02-22T18:32:00.000
0
0
0
1
python,celery
47,490,225
2
false
0
0
Regarding the AttributeError message, adding a backend config setting similar to below should help resolve it: app = Celery('tasks', broker='pyamqp://guest@localhost//', backend='amqp://')
1
0
0
I'm using Celery with RabbitMQ as the broker and redis as the result backend. I'm now manually dispatching tasks to the worker. I can get the task IDs as soon as I sent the tasks out. But actually Celery worker did not work on them. I cannot see the resulted files on my disk. And later when I want to use AsyncResult to check the results, of course I got AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for' I checked RabbitMQ and redis, they're both working (redis-cli ping). The log also says Connected to amqp://myuser:**@127.0.0.1:5672/myvhost. Another interesting thing is that I actually have another remote server consuming the tasks connected to the broker. It also logs "Connected to amqp", but the two the nodes cannot see each other: mingle: searching for neighbors, mingle: all alone. The system worked before. I wonder where should I start to look for clues. Thanks.
Celery worker not consuming task and not retrieving results
0
0
0
1,693
35,565,733
2016-02-22T23:07:00.000
1
0
1
0
python,selenium,phantomjs
46,802,328
4
false
1
0
If you were able to execute in Terminal, just restart PyCharm and it will synchronize the environment variables from the system. (You can check in "RUN" => "Edit Configurations")
1
4
0
note: PhantomJS runs in PyCharm environment, but not IDLE I have successfully used PhantomJS in Python in the past, but I do not know what to do to revert to that set up. I am receiving this error in Python (2.7.11): selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. I have tried to 'symlink' phantomjs to the path (usr/local/bin [which is also in the path]), and even manually locate /usr/local/bin to place phantomjs in the bin folder. However, there is still a path error in python. What am I missing?
PhantomJS was placed in path and can execute in terminal, but PATH error in Python
0.049958
0
1
8,984
35,567,020
2016-02-23T01:14:00.000
0
0
0
1
python,linux,sockets
37,220,484
1
true
0
0
So it turns out the problem came from the websocket module bundled with the Google Cloud SDK. It has a bug where, after 8192 bytes, it will not continue to read from the socket. This can be fixed by placing the websocket library maintained by Hiroki Ohtani earlier on your PYTHONPATH than the Google Cloud SDK.
1
1
0
I've created a docker image based on Ubuntu 14.04 which runs a python websocket client to read from a 3rd party service that sends variable length JSON encoded strings down. I find that the service works well until the encoded string is longer than 8192 bytes and then the JSON is malformed, as everything past 8192 bytes has been cut off. If I use the exact same code on my mac, I see the data come back exactly as expected. I am 100% confident that this is an issue with my linux configuration but I am not sure how to debug this or move forward. Is this perhaps a buffer issue or something even more insidious? Can you recommend any debugging steps?
Websocket client on linux cuts off response after 8192 bytes
1.2
0
1
91
35,570,376
2016-02-23T06:28:00.000
1
1
1
0
python,eclipse,ide
35,571,869
1
true
0
0
The 'Import Existing Projects into Workspace' wizard has a 'Copy projects into workspace' check box on the first page. Unchecking this option will make Eclipse work on the original files.
1
0
0
I have used the "import existing project" option to import an existing project into workspace. However, eclipse actually makes copies of the original files and create a new project. So, if I made a change on a file. It only affect on the copied file in workspace. The original file is untouched. My question is how do I make my modification affected on the original files?
eclipse modify imported project files
1.2
0
0
98
35,571,862
2016-02-23T07:59:00.000
2
1
0
1
python,pip,freebsd
35,946,582
2
false
0
0
The assumption that powerful and high-profile existing python tools use a lot of different python packages almost always holds true. We use FreeBSD in our company for quite some time together with a lot of python based tools (web frameworks, py-supervisor, etc.) and we never ran into the issue that a certain tool would not run on freeBSD or not be available for freeBSD. So to answer your question: Yes, all/most python packages are available on FreeBSD One caveat: The freeBSD ports system is really great and will manage all compatibility and dependency issues for you. If you are using it (you probably should), then you might want to avoid pip. We had a problem in the past where the package manager for ruby did not really play well with the ports database and installed a lot of incompatible gems. This was a temporary issue with rubygems but gave us a real headache. We tend to install everything from ports since then and try to avoid 3rd party package managers like composer, pip, gems, etc. Often the ports invoke the package managers but with some additional arguments so they ensure not to break dependencies.
1
5
0
The development environment, we use, is FreeBSD. We are evaluating Python for developing some tools/utilities. I am trying to figure out if all/most python packages are available for FreeBSD. I tried using a CentOS/Ubuntu and it was fairly easy to install python as well as packages (using pip). On FreeBSD, it was not as easy but may be I'm not using the correct steps or am missing something. We've some tools/utilities on FreeBSD that run locally and I want Python to interact with them - hence, FreeBSD. Any inputs/pointers would be really appreciated. Regards Sharad
Is Python support for FreeBSD as good as for say CentOS/Ubuntu/other linux flavors?
0.197375
0
0
982
35,574,857
2016-02-23T10:26:00.000
2
0
0
0
python,django,rest,django-rest-framework
35,583,466
1
false
1
0
"Normal" Django views (usually) return HTML pages. Django-Rest-Framework views (usually) return JSON. I am assuming you are looking for something more like a Single page application. In this case you will have a main view that will be the bulk of the HTML page. This will be served from "standard" Django view returning HTML (which will likely include a fair bit of JavaScript). Once the page is loaded the JavaScript code will makes requests to the DRF views. So when you interact with the page, JavaScript will request Json, and update (not reload) the page based on the contents of the JSON. Does that make sense?
1
0
0
I want to use Django REST framework for my new project but I am not sure if I can do it efficiently. I would like to be able to easily integrate a classic Django app into my API. However, I don't know how to proceed so that the apps respect the REST framework philosophy. Will I have to rewrite all the views, or is there a more suitable solution?
How to use classic Django app with Django REST framework?
0.379949
0
0
422
35,575,425
2016-02-23T10:49:00.000
12
0
1
1
python-2.7,debugging,gdb
47,475,156
2
true
0
0
I have the same issue, with gdb 8.0.1 compiled on Ubuntu 14.04 LTS. It turns out the installation misses the necessary Python files. One indication was that "make install" stopped complaining about makeinfo being missing, although I did not change any of the .texi sources. My fix was to go into the build area, into gdb/data-directory, and do "make install" once more, which installed the missing Python scripts. Must be some weird tool bug somewhere.
1
14
0
I have suddenly started seeing this message on nearly every GDB output line whilst debugging: Python Exception Installation error: gdb.execute_unwinders function is missing What is this? How do I rectify it?
GDB Error Installation error: gdb.execute_unwinders function is missing
1.2
0
0
10,678
35,576,509
2016-02-23T11:36:00.000
0
0
1
0
python,jupyter,jupyter-notebook
64,781,833
3
false
0
0
Press F11 to view the Jupyter notebook in full-screen mode. Press F11 once more to come out of full-screen mode.
1
15
0
I'm doing a bit of choropleth map plotting in a Jupyter notebook (with Folium), and I was just wondering if there's any way of making an output cell fullscreen? It would just make the map a bit easier to view. If not, is there an easy way of modifying the maximum height of an output cell?
Making a Jupyter notebook output cell fullscreen
0
0
0
10,495
35,577,179
2016-02-23T12:06:00.000
0
0
1
0
python,numpy
48,353,933
1
false
0
0
It may be that you have installed pip for a different (lower) Python version than your default. To check, first look at your default Python version with: $python. Then check which Python version your pip is linked to with: $pip --version. See whether the two Python versions match. If they don't match, you need to upgrade your pip: $pip install -U pip. Now install numpy with: sudo pip install numpy. Hope this helps!
1
0
1
I have installed numpy-1.11.0b3 by, pip install "numpy-1.11.0b3+mkl-cp35-cp35m-win32.whl". The installation became successful. But, when I write "import numpy" at the Python Shell (3.5.1), I am getting the error as - ImportError: No module named 'numpy'. Can anyone suggest me regarding this ? Regards, Arpan Ghose
getting error for importing numpy at Python 3.5.1
0
0
0
429
35,577,248
2016-02-23T12:10:00.000
1
0
0
0
django,python-3.4
35,579,891
3
false
1
0
With python manage.py these commands are listed: Available subcommands: [auth] changepassword createsuperuser [django] check compilemessages createcachetable dbshell diffsettings dumpdata flush inspectdb loaddata makemessages makemigrations migrate sendtestemail shell showmigrations sqlflush sqlmigrate sqlsequencereset squashmigrations startapp startproject test testserver [sessions] clearsessions [staticfiles] collectstatic findstatic runserver There is no "validate" command in the list.
2
10
0
With running this command: python manage.py validate I faced with this error: Unknown command: 'validate' What should I do now? For more explanations: Linux Virtualenv Python 3.4.3+ Django (1, 9, 2, 'final', 0)
django "python manage.py validate" error : unknown command 'validate'
0.066568
0
0
7,322
35,577,248
2016-02-23T12:10:00.000
0
0
0
0
django,python-3.4
35,597,374
3
false
1
0
With this command: pip install Django==1.8.2 the problem will be solved. Django==1.9.2 does not support some commands.
2
10
0
With running this command: python manage.py validate I faced with this error: Unknown command: 'validate' What should I do now? For more explanations: Linux Virtualenv Python 3.4.3+ Django (1, 9, 2, 'final', 0)
django "python manage.py validate" error : unknown command 'validate'
0
0
0
7,322
35,580,213
2016-02-23T14:29:00.000
0
0
1
0
python,visual-studio-2015
36,571,734
1
true
0
0
Use Anaconda to install many of the typical packages; after that, your program should run without that error.
1
2
0
I program with Python 3.4 and VS 2015. When I try to add numpy to VS 2015 in the pip window I see the following problem: "Unable to find vcvarsall.bat". Can anyone help me?
Unable to find vcvarsall.bat in python 3.4 and vs 2015
1.2
0
0
490
35,581,528
2016-02-23T15:28:00.000
1
0
0
0
python,multithreading,pandas,geopandas
35,583,196
3
false
0
0
I am assuming you have already implemented GeoPandas and are still finding difficulties? You can improve this by further hashing your coordinate data, similar to how Google hashes its search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine you took the first (leftmost) digit of your coords and put each set of corresponding data into a separate SQLite file. Each digit then acts as a hash pointing to the correct file to look in. Your lookup time is now improved by a factor of about 20 (range(-9, 10)), assuming the hash lookup itself takes minimal time in comparison.
2
2
1
I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time. Now I need to make a spatial join over those points to get a zip code for each one, and I really want to optimise this process. So I wonder: is there any relatively easy way to parallelize those computations?
Fastest approach for geopandas (reading and spatialJoin)
0.066568
1
0
939
35,581,528
2016-02-23T15:28:00.000
1
0
0
0
python,multithreading,pandas,geopandas
35,786,998
3
true
0
0
As it turned out, the most convenient solution in my case is to use the pandas.read_sql function with a specific chunksize parameter. In this case it returns a generator of data chunks, which can be effectively fed to mp.Pool().map() along with the job; in this (my) case the job consists of 1) reading the geoboundaries, 2) performing the spatial join on the chunk, and 3) writing the chunk to the database.
2
2
1
I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time. Now I need to make a spatial join over those points to get a zip code for each one, and I really want to optimise this process. So I wonder: is there any relatively easy way to parallelize those computations?
Fastest approach for geopandas (reading and spatialJoin)
1.2
1
0
939
35,583,348
2016-02-23T16:48:00.000
0
0
0
0
python,html,css,jinja2
35,588,407
1
false
1
0
You can use flask, and put your css stylesheet in a folder named "static", at the root of your project. Call this file "style.css".
1
0
0
I am relatively new to Jinja and templating and have been struggling to get this sorted for some time now.. Here's my layout of folders: templates base content form styles newstyle I have a base template with blockhead/block sidebar/block content/block form layout. I extend it to my content template which has lots of HTML notes I have collected under the block content. This has then been extended to form template which is a dynamic block and takes user inputs to login and post comments. All this should be viewed on the same web page, when I render form.html using jinja_env.get_template(template) along with args. But its not working. Either I see only my sidebar and form block or I see only my content but never all the three... I tried to 'include' base onto content template and include content onto form but this messes up the CSS. Can anyone help?? Also I am a bit confused about to use dynaminc links using url_for which I came across in one of stackoverflow questions ?? Which Template should this be used in??
How to include template (with CSS) onto a template which is being rendered?
0
0
0
116