Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
23,900,813
2014-05-27T23:55:00.000
0
0
0
0
python,django,django-views,django-sites
23,903,821
2
false
1
0
Add SITE_ID = 1 to your settings.py and run python manage.py syncdb to create the corresponding tables if they do not exist; this will work. Then you can log in to your admin site: click Sites and change the default example.com to your own domain. This is used when you edit an object: the admin will provide a "View on site" button if you defined a get_absolute_url function in your models.py.
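A minimal sketch of the two pieces this answer mentions, assuming a Django of that era (the Book model and the URL pattern name 'book-detail' are hypothetical):

# settings.py
SITE_ID = 1

# models.py
from django.db import models
from django.core.urlresolvers import reverse  # pre-Django-2.0 import path

class Book(models.Model):
    title = models.CharField(max_length=200)

    def get_absolute_url(self):
        # the admin shows a "View on site" button when this method exists
        return reverse('book-detail', args=[self.pk])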
1
0
0
I have an issue with the Sites system in Django. My problem is that when running Site.objects.get_current() on some pages on the initial load of the development server, I get an exception saying Site matching query does not exist. However, if I load the main page of the site, it loads just fine, and then I can go back to any of the pages that raised the exception and they load fine as well. Has anyone come across this issue before? Thanks, Nick
Issue with Sites system in Django
0
0
0
58
23,904,066
2014-05-28T06:15:00.000
0
1
0
0
python,scala,unit-testing
24,052,936
2
true
0
0
Short answer: maybe, but don't. Eventually common sense prevailed. I now aim to either use pyspark or port the unit tests across to Scala along with the rest of the library. Exposing Scala to Python was achievable by generating Scala code that prints exactly what I wanted, then calling SBT from Python to compile and run that code and capture the stdout. I started mocking the Scala version of the same API in Python to create these custom Scala scripts, and with a compilation step added for each query it was getting very slow. As I started considering a command-line interface or a socket-based API for my mocked Scala classes, the reality sank in. To answer the actual question of running my existing Python unit tests: while I think it could still be possible, it is not a very good idea.
1
0
0
I have a library written in Python, complete with unit tests. I'm about to start porting the functionality to Scala to run on Spark and as part of an Android application. But I'm loath to reproduce the unit tests in Scala. Is there a method for exposing the to-be-written Scala library to external interrogation from Python? I could rewrite the tests to use a command-line interface, but I wondered if there were other ways. I have ruled out Jython because it is not compatible with my existing Python 3 library and unit tests.
Scala Testing with Python unittests
1.2
0
0
520
23,908,547
2014-05-28T10:02:00.000
1
1
1
0
python,performance,jit,numba
23,944,220
2
false
0
0
Numba is already mapping math.sqrt calls to sqrt/sqrtf in libc. The slowdown probably comes from the overhead of Numba. This overhead comes from (un)boxing PyObjects and detecting whether errors occurred in the compiled code. It affects calls to small functions made from Python, but less so when calling from another Numba-compiled function, because there is no (un)boxing. If you set the environment variable NUMBA_OPT=3, aggressive optimization is turned on, which eliminates some of the overhead but increases code-generation time.
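A sketch of the boundary the answer describes, assuming a reasonably recent Numba; the functions are made up for illustration:

import math
from numba import njit

@njit
def hypot_inner(a, b):
    # tiny function: calling it straight from Python pays the (un)boxing toll
    return math.sqrt(a * a + b * b)

@njit
def hypot_sum(n):
    # calling hypot_inner from compiled code skips the (un)boxing entirely
    total = 0.0
    for i in range(n):
        total += hypot_inner(float(i), float(i + 1))
    return total

print(hypot_sum(1000000))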
2
1
1
I have a complex function that performs math operations that cannot be vectorized. I have found that using the Numba JIT compiler actually slows performance. It is probably because within this function I call Python's math.sqrt. How can I force Numba to replace calls to Python's math.sqrt with faster C calls to sqrt? -- regards Kes
How to improve the speed of math.sqrt() with the Numba JIT compiler in Python 2.7
0.099668
0
0
1,612
23,908,547
2014-05-28T10:02:00.000
4
1
1
0
python,performance,jit,numba
23,943,709
2
false
0
0
Numba already replaces calls to math.sqrt with calls to a machine-code sqrt library. So, if you are getting slower performance, it might be something else. Can you post the code you are trying to speed up? Also, which version of Numba are you using? In the latest version of Numba, you can call the inspect_types method of the decorated function to print a listing of what is being interpreted as Python objects (and is therefore still slow).
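For example, a quick way to use that method (the function here is a hypothetical stand-in for the asker's code):

import math
from numba import jit

@jit
def sqrt_sum(n):
    total = 0.0
    for i in range(n):
        total += math.sqrt(i)
    return total

sqrt_sum(1000)            # run once so a compiled version exists
sqrt_sum.inspect_types()  # lines typed as 'pyobject' are still on the slow path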
2
1
1
I have a complex function that performs math operations that cannot be vectorized. I have found that using the Numba JIT compiler actually slows performance. It is probably because within this function I call Python's math.sqrt. How can I force Numba to replace calls to Python's math.sqrt with faster C calls to sqrt? -- regards Kes
How to improve the speed of math.sqrt() with the Numba JIT compiler in Python 2.7
0.379949
0
0
1,612
23,909,692
2014-05-28T10:54:00.000
25
1
1
0
python,unit-testing
23,909,751
1
true
0
0
You can't. Local variables are local to the function and cannot be accessed from the outside. But the bigger point is that you shouldn't actually be trying to test the value of a local variable. Functions should be treated as black boxes from the outside. They are given some parameters, and they return some values and/or change external state. Those are the only things you should check for.
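A minimal sketch of testing the observable behaviour instead; the add_tax function is hypothetical:

import unittest

def add_tax(price, rate=0.2):
    total = price * (1 + rate)  # 'total' is a local: don't try to test it directly
    return round(total, 2)

class AddTaxTest(unittest.TestCase):
    def test_return_value(self):
        # assert on what the function returns, not on its internals
        self.assertEqual(add_tax(10.0), 12.0)

if __name__ == '__main__':
    unittest.main()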
1
15
0
I need to write a test that will check a local variable value inside a static function. I went through all the unittest docs but found nothing yet.
How to unittest local variable in Python
1.2
0
0
9,780
23,913,689
2014-05-28T13:56:00.000
1
0
0
1
python,linux,mod-wsgi,wsgi
23,920,537
1
false
1
0
OSError [Errno 10] no child processes can mean the program ran but took too much memory and died. Starting jobs within Apache is fine. Running as root is a bit sketchy, but isn't that big of a deal. Note that the root account's setup, like PATH, might be different from your account's. This would explain why it runs from the shell but not from Apache. In your program, log the current directory. If the script requires a certain module in a certain location, that could cause weird problems. Also, root tends not to have the current directory (i.e. ".") on sys.path.
1
0
0
I have a Python WSGI script that is attempting to make a call to generate an OpenSSL script. Using subprocess.check_call(args), the process throws an OSError [Errno 10] no child processes. The owner of the openssl bin is root:root. Could this be the problem? Or does Apache not allow child processes? Using just subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) seems to work fine; I just want to wait and make sure the process finishes before moving on. communicate() and wait() both fail with the same error. Running it outside of WSGI, the code works fine. This is Python 2.6, btw.
Python wsgi OSError: [Errno 10] No child process
0.197375
0
0
1,426
23,914,274
2014-05-28T14:20:00.000
1
0
1
0
python
23,915,208
1
true
0
0
The choice between strings and integers depends on the size of your DNA sequence. I know that some sequences can contain over a million elements. It is better to use typed integers if you are dealing with a lot of information. Otherwise, you can use strings if that is more suitable for you.
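One hedged way to compare the two representations is the standard-library array module, which stores typed integers compactly; exact sizes vary by platform and Python version:

import sys
from array import array

seq_str = 'ATGC' * 250000                    # one-million-base sequence as text
seq_int = array('b', [1, 2, 3, 4] * 250000)  # same data as signed 8-bit integers

print(sys.getsizeof(seq_str))   # about one byte per base, plus string overhead
print(sys.getsizeof(seq_int))   # about one byte per base, plus array overhead
# a plain list of one million Python ints would be many times larger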
1
3
0
I am trying to represent the sequence of a biological virus as ATGCs, but I have seen code where it is represented as 1234s instead. Are there any differences in memory usage or code speed if we store it as the integers [1,2,3,4] instead of the letters [A,T,G,C]? For those who might need a bit more context, I will not be doing any mathematical operations on the string of numbers/letters apart from changing their identities at random positions (i.e. mutation), keeping track of the positions that are mutated away from a reference sequence in a dictionary (such as: {2:'G', 52:'A'} or {2:3, 52:1}), and exporting the full sequence of any biological virus strain by iterating over the reference sequence and checking the mutation dictionary for any mutations.
Should I store categorical variables as integers or letters in Python?
1.2
0
0
86
23,916,186
2014-05-28T15:42:00.000
7
0
1
0
python,shared-libraries,python-wheel,python-cffi
30,941,390
2
false
0
0
Wheels are the standard way of distributing Python packages, but there is a problem when you have extension modules that depend on other .so files. This is because the normal Linux dynamic linker is used, and that only looks in /usr/lib or /usr/local/lib. This is a problem when installing a wheel in a virtualenv. As far as I know, you have three options: static linking, so the 'wrapper' does not depend on anything else; using ctypes to wrap the .so directly from Python; or splitting the distribution into a wheel with the Python code & wrapper, and a separate RPM or DEB to install the .so into either /usr/lib or /usr/local/lib. A wheel may work if you include the dependent .so as a data file to be stored in /lib and install into the root Python environment (haven't tried that), but this will break if someone tries to install the wheel into a virtualenv (did try that).
1
30
0
I want to create a package for Python that embeds and uses an external library (.so) on Linux, using the cffi module. Is there a standard way to include a .so file in a Python package? The package will be used only internally and won't be published to PyPI. I think wheel packages are the best option: they create a platform-specific package with all files ready to be copied, so there is no need to build anything on the target environments.
How to include external library with python wheel package
1
0
0
12,753
23,917,036
2014-05-28T16:23:00.000
4
0
1
0
metadata,ipython-notebook
24,728,131
2
false
0
0
If I understand what you are looking for: from the Cell Toolbar (top right of the ipython notebook toolbar), select Edit Metadata from the drop-down list.
1
10
0
I want to store the values of my IPython.html.widgets in my ipython notebook somehow. Is there a way to modify the metadata of the current cell from the code within the cell itself?
Modify cell's metadata
0.379949
0
0
4,821
23,917,776
2014-05-28T17:04:00.000
-1
0
1
0
python,function,return,exec
65,267,489
10
false
0
0
If we need a function that is in a file in another directory (e.g. we need function1 from the file my_py_file.py located in /home/.../another_directory), we can use the following code:

def cl_import_function(a_func, py_file, in_Dir):
    import sys
    sys.path.insert(0, in_Dir)
    ax = 'from %s import %s' % (py_file, a_func)
    loc = {}
    exec(ax, globals(), loc)
    getFx = loc[a_func]  # the original had loc[afunc], which raises a NameError
    return getFx

test = cl_import_function('function1', r'my_py_file', r'/home/.../another_directory/')
test()

(a simple way for newbies...)
2
41
0
For testing purposes I want to directly execute a function defined inside another function. I can get to the code object of the child function through the code (func_code) of the parent function, but when I exec it, I get no return value. Is there a way to get the return value from the exec'ed code?
How do I get the return value when using Python exec on the code object of a function?
-0.019997
0
0
42,376
23,917,776
2014-05-28T17:04:00.000
0
0
1
0
python,function,return,exec
70,064,689
10
false
0
0
Use eval() instead of exec(); it returns the result.
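A quick illustration of the difference; note that eval only accepts expressions, not statements:

# eval evaluates an expression and hands back its value
result = eval('2 + 3')
print(result)            # 5

# exec runs statements for side effects and returns None
namespace = {}
exec('x = 2 + 3', namespace)
print(namespace['x'])    # 5, recovered from the namespace exec filled in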
2
41
0
For testing purposes I want to directly execute a function defined inside another function. I can get to the code object of the child function through the code (func_code) of the parent function, but when I exec it, I get no return value. Is there a way to get the return value from the exec'ed code?
How do I get the return value when using Python exec on the code object of a function?
0
0
0
42,376
23,920,481
2014-05-28T19:41:00.000
-1
0
0
0
python,mysql,django,postgresql
23,920,627
3
false
1
0
Usually, when the settings that control the application are changed, the server has to be restarted.
3
0
0
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do need to restart it, when and why? Any explanation would be really helpful. Cheers!
When do I need to restart database server in Django?
-0.066568
1
0
237
23,920,481
2014-05-28T19:41:00.000
2
0
0
0
python,mysql,django,postgresql
23,920,963
3
false
1
0
You will not NEED to restart your database in production due to anything you've done in Django. You may need to restart it to change your database security or configuration settings, but that has nothing to do with Django and in a lot of cases doesn't even need a restart.
3
0
0
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do need to restart it, when and why? Any explanation would be really helpful. Cheers!
When do I need to restart database server in Django?
0.132549
1
0
237
23,920,481
2014-05-28T19:41:00.000
1
0
0
0
python,mysql,django,postgresql
23,920,777
3
true
1
0
You shouldn't really ever need to restart the database server. You probably do need to restart - or at least reload - the web server whenever any of the code changes. But the db is a separate process, and shouldn't need to be restarted.
3
0
0
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do need to restart it, when and why? Any explanation would be really helpful. Cheers!
When do I need to restart database server in Django?
1.2
1
0
237
23,921,986
2014-05-28T21:13:00.000
2
0
0
0
python,web-scraping,beautifulsoup,web-crawler
23,922,228
2
false
1
0
You're basically asking "how do I write a search engine." This is... not trivial. The right way to do this is to just use Google's (or Bing's, or Yahoo!'s, or...) search API and show the top n results. But if you're just working on a personal project to teach yourself some concepts (not sure which ones those would be, exactly), then here are a few suggestions:

- Search the text content of the appropriate tags (<p>, <div>, and so forth) for relevant keywords (duh).
- Use the relevant keywords to check for the presence of tags that might contain what you're looking for. For example, if you're looking for a list of things, then a page containing <ul> or <ol> or even <table> might be a good candidate.
- Build a synonym dictionary and search each page for synonyms of your keywords too. Limiting yourself to "US" might mean an artificially low ranking for a page containing just "America".
- Keep a list of words which are not in your set of keywords and give a higher ranking to pages which contain the most of them. These pages are (arguably) more likely to contain the answer you're looking for.

A sketch of the first two suggestions follows. Good luck (you'll need it)!
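A rough sketch of keyword-based relevance scoring with BeautifulSoup; the scoring rule and bonus weight are made up:

from bs4 import BeautifulSoup

KEYWORDS = {'list', 'venomous', 'snakes', 'us'}

def relevance(html):
    soup = BeautifulSoup(html)
    score = 0
    # count keyword hits inside text-bearing tags
    for tag in soup.find_all(['p', 'div', 'li']):
        words = tag.get_text().lower().split()
        score += sum(1 for w in words if w in KEYWORDS)
    # arbitrary bonus for structures likely to hold a list
    if soup.find(['ul', 'ol', 'table']):
        score += 5
    return score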
1
8
0
I'm trying to teach myself a concept by writing a script. Basically, I'm trying to write a Python script that, given a few keywords, will crawl web pages until it finds the data I need. For example, say I want to find a list of venomous snakes that live in the US. I might run my script with the keywords list,venomous,snakes,US, and I want to be able to trust with at least 80% certainty that it will return a list of snakes in the US. I already know how to implement the web spider part; I just want to learn how I can determine a web page's relevance without knowing a single thing about the page's structure. I have researched web-scraping techniques, but they all seem to assume knowledge of the page's HTML tag structure. Is there a certain algorithm out there that would allow me to pull data from the page and determine its relevance? Any pointers would be greatly appreciated. I am using Python with urllib and BeautifulSoup.
Web scraping without knowledge of page structure
0.197375
0
1
3,298
23,924,714
2014-05-29T02:14:00.000
1
0
0
0
python,scikit-learn,outliers
23,930,543
1
false
0
0
There's no support for masking in scikit-learn; outlier detection is done ad hoc by some estimators (e.g. DBSCAN, or RANSAC, which will appear in the next release). If you want to remove outliers yourself, just use NumPy indexing.
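A minimal sketch of outlier removal via NumPy boolean indexing, using a simple z-score rule as the assumed criterion:

import numpy as np

x = np.array([1.0, 1.1, 0.9, 12.0, 1.05])   # 12.0 is the outlier
z = np.abs((x - x.mean()) / x.std())        # z-score of every point
clean = x[z < 1.5]                          # boolean indexing keeps the inliers
print(clean)                                # [ 1.    1.1   0.9   1.05]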
1
2
1
I have a pipeline where I transform some data and fit a curve to it. Is there a preferred/standard way for masking the outliers in the data?
Is it possible to mask outliers within a scikit learn pipeline?
0.197375
0
0
906
23,925,726
2014-05-29T04:31:00.000
11
0
0
0
python,django
25,240,502
12
false
1
0
You may be calling a Site object before creating the site model (before syncdb or migrate), e.g.: site = Site.objects.get(id=settings.SITE_ID)
5
35
0
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user, upon clicking submit I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I believe there is an extra step I am missing. Any suggestions or advice?
Django: relation "django_site" does not exist
1
0
0
30,101
23,925,726
2014-05-29T04:31:00.000
1
0
0
0
python,django
62,523,888
12
false
1
0
If you are getting this error when deploying your Django app to Heroku, make sure you have run: heroku run python manage.py migrate This worked for me.
5
35
0
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user, upon clicking submit I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I believe there is an extra step I am missing. Any suggestions or advice?
Django: relation "django_site" does not exist
0.016665
0
0
30,101
23,925,726
2014-05-29T04:31:00.000
-1
0
0
0
python,django
70,516,679
12
false
1
0
I just restarted my computer and the problem disappeared :) (restarting docker-compose is not enough).
5
35
0
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user, upon clicking submit I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I believe there is an extra step I am missing. Any suggestions or advice?
Django: relation "django_site" does not exist
-0.016665
0
0
30,101
23,925,726
2014-05-29T04:31:00.000
1
0
0
0
python,django
34,530,826
12
false
1
0
Horrible code led to this error for me. I had a global variable to get the current site, SITE = Site.objects.get(pk=1); this was evaluated during migration and led to the error.
5
35
0
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user, upon clicking submit I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I believe there is an extra step I am missing. Any suggestions or advice?
Django: relation "django_site" does not exist
0.016665
0
0
30,101
23,925,726
2014-05-29T04:31:00.000
1
0
0
0
python,django
60,034,840
12
false
1
0
Going to leave this here for future me: python manage.py makemigrations allauth This worked for me; I forgot why, and it took me too long to figure out how I fixed this the first time. Edit: makemigrations sometimes doesn't pick up third-party apps like allauth, which some of my projects use, so I have to specify those ones explicitly.
5
35
0
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user, upon clicking submit I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I believe there is an extra step I am missing. Any suggestions or advice?
Django: relation "django_site" does not exist
0.016665
0
0
30,101
23,930,672
2014-05-29T10:00:00.000
0
0
1
0
python,vim,pyflakes
23,931,250
2
false
0
0
You are supposed to install plugins and colorschemes in $HOME/.vim/. If that directory doesn't exist already, create it. Same deal with $HOME/.vim/ftplugin/ and so on…
1
0
0
I am having difficulty getting pyflakes to work in Vim for static analysis of Python code. When I search for folders on my system with the name "ftplugin", I see the following results: /usr/share/vim/vimfiles/after/ftplugin /usr/share/vim/vimfiles/after/ftplugin/ftplugin /usr/share/vim/vimfiles/ftplugin /usr/share/vim/vim72/ftplugin Where exactly do I have to add the pyflakes files? I tried all the locations, but that does not help. I have also set filetype plugin indent on. Still no luck.
Where to add pyflakes folder for static analysis in vim for python?
0
0
0
159
23,932,002
2014-05-29T11:12:00.000
0
0
0
0
python,jython
23,934,381
2
false
1
0
The first thing to do is to read the exception. About seven lines in, your exception says "Caused by: java.lang.NullPointerException". I would focus on that. Where is the null coming from? Also note that your stack trace is missing some lines at the end, where it says "... 7 more". This makes it hard to read the exception, because we don't know what the missing lines say. See if you can find a way to show the missing lines, in case they are helpful.
1
0
0
I am a Java person, not well versed in Jython or Python, so pardon my ignorance if this is a basic question. I am using Jython 2.5, Python 2.5 and JRE 1.7. Intermittently, the Jython interpreter fails to start, throwing an error like:

Exception in thread "main" java.lang.ExceptionInInitializerError
    at java.lang.J9VMInternals.initialize(J9VMInternals.java:258)
    at org.python.core.PySystemState.initStaticFields(PySystemState.java:912)
    at org.python.core.PySystemState.doInitialize(PySystemState.java:882)
    at org.python.core.PySystemState.initialize(PySystemState.java:800)
    at org.python.core.PySystemState.initialize(PySystemState.java:750)
    at org.python.core.PySystemState.initialize(PySystemState.java:743)
    at org.python.util.jython.run(jython.java:150)
    at org.python.util.jython.main(jython.java:129)
Caused by: java.lang.NullPointerException
    at org.python.core.PyObject._cmpeq_unsafe(PyObject.java:1362)
    at org.python.core.PyObject._eq(PyObject.java:1456)
    at org.python.core.PyObject.equals(PyObject.java:244)
    at java.util.HashMap.put(HashMap.java:475)
    at java.util.HashSet.add(HashSet.java:217)
    at org.python.core.PyType.fromClass(PyType.java:1317)
    at org.python.core.PyType.fromClass(PyType.java:1275)
    at org.python.core.PyEllipsis.(PyEllipsis.java:14)
    at java.lang.J9VMInternals.initializeImpl(Native Method)
    at java.lang.J9VMInternals.initialize(J9VMInternals.java:236)
    ... 7 more

I did search the net, but I did not find any helpful information. If anyone of you has solved this issue, please share. Thanks, Ashoka
Jython throws java.lang.ExceptionInInitializerError intermittently
0
0
0
913
23,936,409
2014-05-29T14:59:00.000
14
0
1
0
ipython,ipython-notebook
23,940,905
1
true
0
0
You can launch your IPython notebook in the folder that you want the server to have access to, and the user won't be able to go to the parent directory. But you should realize that when the user launches a kernel, they will be able to chdir to any folder in the file system. So if you want to limit access for users of the IPython notebook, you should use Unix file permissions to jail each user to their starting folder. The way I would do this is to:

- Create an ipython user that doesn't belong to any existing user groups.
- Create a folder for the ipython user (cloud, for instance).
- Launch the IPython notebook as ipython in the cloud folder.
1
13
0
I'm running an IPython notebook server in the cloud and I want to expose it as a service so that users can play around with the notebook. I noticed that from the notebook I can access the filesystem and inspect files on it; I want to limit this access. I want only specific folders to be accessible from the IPython notebook.
How to limit ipython notebook directory access
1.2
0
0
6,374
23,943,024
2014-05-29T21:07:00.000
0
0
1
0
java,python,json,dictionary,map
23,943,045
2
false
1
0
You could pass it as JSON and parse the information out of the JSON in Java. If the data is simple enough, you could make every other string in Java's String[] args parameter represent a key or a value, and then have your code loop through those and add them to a map.
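On the Python side, the JSON route could look like this sketch; MyTool.jar and the parameter names are hypothetical:

import json
import subprocess

params = {'input': 'data.csv', 'threshold': 0.8, 'verbose': True}

# one argument carrying the whole dict; Java would parse args[0]
# with a JSON library of its choosing
subprocess.call(['java', '-jar', 'MyTool.jar', json.dumps(params)])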
1
0
0
I am calling a Java file within Python code. Currently, I am passing it several parameters which Java sees in its String[] args array. I would prefer to pass just one parameter that is a Python dictionary (dict) which Java can understand and make into a map. I know I will probably have to pass the python dictionary as a string. How can I do this? Should I use JSON?
Pass a Python dict to a Java map possibly with JSON?
0
0
0
1,082
23,944,242
2014-05-29T22:46:00.000
7
1
1
0
python,c,numpy,gmp,gmpy
23,946,348
1
true
0
0
numpy and GMPY2 have different purposes. numpy has fast numerical libraries, but to achieve high performance it is effectively restricted to working with vectors or arrays of low-level types: 16-, 32-, or 64-bit integers, or 32- or 64-bit floating-point values. For example, numpy accesses highly optimized routines written in C (or Fortran) for performing matrix multiplication. GMPY2 uses the GMP, MPFR, and MPC libraries for multiple-precision calculations. It isn't targeted towards vector or matrix operations. The Python interpreter adds overhead to each call into an external library. Whether or not the slowdown is significant depends on how much time is spent by the external library. If the running time of the external library is very short, say 10e-8 seconds, then Python's overhead is significant. If the running time of the external library is relatively long, several seconds or longer, then Python's overhead is probably insignificant. Since you haven't said what you are trying to accomplish, I can't give a better answer. Disclaimer: I maintain GMPY2.
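To make the contrast concrete, a tiny illustrative sketch of each library's sweet spot (not a benchmark):

import numpy as np
import gmpy2

# numpy: vectorized arithmetic over fixed-width machine types
a = np.arange(1000000, dtype=np.int64)
print(a.sum())                   # one optimized C loop over a million values

# gmpy2: individual values of arbitrary precision
big = gmpy2.mpz(2) ** 10000      # far beyond any 64-bit integer
print(gmpy2.num_digits(big))     # 3011 decimal digits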
1
2
1
I understand that GMPY2 supports the GMP library and that numpy has fast numerical libraries. I want to know how their speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language; however, I have been wrong about these generalizations before. I can't get GMP to work on my computer, so I can't run any tests. If I could, I would test just general math like addition and maybe some trig functions. I'll figure out GMP later.
How do numpy and GMPY2 compare with GMP in terms of speed?
1.2
0
0
2,014
23,948,317
2014-05-30T06:21:00.000
1
0
1
0
python,virtualenv
23,948,495
5
false
0
0
A Virtual Environment, put simply, is an isolated working copy of Python which allows you to work on a specific project without worry of affecting other projects. For example, you can work on a project which requires Django 1.3 while also maintaining a project which requires Django 1.0.
2
15
0
I am a beginner in Python. I read virtualenv is preferred during Python project development. I couldn't understand this point at all. Why is virtualenv preferred?
Why is virtualenv necessary?
0.039979
0
0
8,475
23,948,317
2014-05-30T06:21:00.000
1
0
1
0
python,virtualenv
57,047,780
5
false
0
0
Suppose you are working on multiple projects: one project requires a certain version of Python, and another requires some other version. If you are not working in a virtual environment, both projects will use the same version installed locally on your machine, which can cause errors for one of them. With a virtual environment, you create an isolated instance of your Python setup where you can store the libraries and versions separately. Each time, you can create a new virtual environment and work in it as a fresh one.
2
15
0
I am a beginner in Python. I read virtualenv is preferred during Python project development. I couldn't understand this point at all. Why is virtualenv preferred?
Why is virtualenv necessary?
0.039979
0
0
8,475
23,954,173
2014-05-30T12:03:00.000
0
0
1
0
python,emacs,python-mode,ido
23,982,213
1
false
0
0
M-x imenu RET provides that functionality already. It should work with python-mode.el as well as with the shipped python.el. Checked with GNU Emacs 24.3.90.1 (i686-pc-linux-gnu, GTK+ Version 3.6.5) of 2014-04-21. In case of trouble, start from emacs -Q.
1
1
0
The function ido-goto-symbol when run in emacs python mode, shows the message No items suitable for an index found in this buffer. How can I get the ido-goto-symbol to work in python-mode so that it lists functions and classes present in the python file?
Ido-goto-symbol command support for emacs python mode
0
0
0
198
23,957,232
2014-05-30T14:40:00.000
1
0
0
0
python,django,email
23,959,164
3
false
1
0
As J. C. Leitão pointed out, the Django error debug page has JavaScript and CSS (most CSS doesn't work in email). But all this CSS and JS code is inline; the debug page is a single HTML file with no external resources. In my company, we include the HTML as an attachment to the report email. When we feel the plain-text traceback is not clear enough, we download the HTML page and open it. The user experience is not as good as Sentry's, but it is much better than the plain-text-only version.
1
7
0
Django has an awesome debug page that shows up whenever there's an exception in the code. That page shows all levels in the stack compactly, and you can expand any level you're interested in. It shows only in debug mode. Django also has an awesome feature for sending email reports when errors are encountered on the production server by actual users. These reports have a stacktrace with much less information than the styled debug page. There's an awesome optional setting 'include_html': True that makes the email include all the information of the debug page, which is really useful. The problem with this setting is that the HTML comes apparently unstyled, so all those levels of the stack are expanded to show all the data they contain. This results in such a long email that GMail usually can't even display it without sending you to a dedicated view. But the real problem is that it's too big to navigate in and find the stack level you want. What I want: I want Django to send that detailed stacktrace, but I want the levels of the stack to be collapsable just like in the debug page. How can I do that? (And no, I don't want to use Sentry.)
Making the stack levels in Django HTML email reports collapsable
0.066568
0
0
123
23,963,825
2014-05-30T21:44:00.000
0
0
0
0
python,user-input
23,963,904
1
false
0
0
No, Python cannot accept this kind of input through raw_input. This is because sequences like Ctrl-C, Ctrl-Z, etc. are not keyboard inputs; they are signals that are processed by the terminal (not the program). You can try to set up signal handlers that will do this for you, but that is not a very reliable solution (regardless of whether you're using Python or something else). The best solution for accepting this kind of input is to use either curses or readline (with adjustments to the configuration to handle things like Ctrl-H). Using readline will make your life much easier, but it comes at the cost that you have to license your program under the GNU GPL (or similar), whereas curses does not have this kind of restriction.
1
0
0
I am writing a command-line interface in Python that accepts a lot of user input. For the values that I am querying the user about, there is a significant amount of "additional information" that I could display, but I would rather display it only if the user needs help with how to provide a value. So I thought I would provide my usual raw_input prompt, but also try to accept some Ctrl-H type sequences to output this help info. Can Python accept this kind of input via raw_input in a terminal/shell? Is there another, more proper way to do this (preferably in the stdlib)?
Can Python Accept Ctrl+[char] using raw_input?
0
0
0
158
23,964,506
2014-05-30T22:55:00.000
3
0
1
0
python,string,escaping
23,964,541
2
false
0
0
The r prefix for a string literal doesn't disable escaping; it changes escaping so that the sequence \x (where x is any character) is "converted" to itself. So \' emits \', and your string is unterminated because there's no ' at the end of it.
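A short demonstration of the behaviour described:

s = r'something\''      # the backslash and the quote both survive
print(s)                # something\'
print(len(s))           # 11, not 10

# r'something\' alone would be a SyntaxError: the \' does not close the
# literal, hence the usual workaround for a trailing backslash:
s = r'something' + '\\'
print(s)                # something\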
1
0
0
I'm trying to understand why Python has this unheard-of behavior. If I'm writing a raw-data string, it is much more likely that I won't want to escape quotes. This behavior forces us to write this weird code: s = r'something' + '\\' instead of just s = r'something\'. Any ideas why Python's developers found this more sensible? EDIT: I'm not asking why it is so. I'm asking what motivated this design decision, and whether anyone finds anything good in it.
Why does python allow escaping quotes in rawdata string?
0.291313
0
0
78
23,965,028
2014-05-31T00:00:00.000
7
0
1
0
python,for-loop,generator,cython,coroutine
23,983,403
1
false
0
0
Some recommendations:

- Cython supports generators out of the box, so you should try passing your Python code with generators to Cython and see what kind of speedup you get.
- The next step is to add as much static typing information to your loops as possible, to speed up the work the generators are doing.
- Python generators are cool, but if performance is important, they aren't the fastest way to do things. You're much better off converting your bottlenecks to work with contiguous arrays. Check out Cython's typed memoryviews. You can also use Cython with C++ std::vectors and other high-performance container objects.

We'll need more information about your goals and constraints to provide more help here. A stripped-down example would be helpful.
1
4
0
I have Python code with lots of loops that consume data from Python generators; some also re-yield the processed data. This is a bottleneck, and I want to speed this part up and was thinking of using Cython. What is the recommended way to deal with generators and yield? I would like to: convert Python generators into Cython without data copies; make Cython for loops consume data produced by Python generators; and yield data like a generator. I would guess this is a common enough use case; what are the recommended ways to do this?
Cythonizing for loops that iterate over generators
1
0
0
2,859
23,965,691
2014-05-31T01:57:00.000
0
0
1
0
python,git,virtualenv
23,965,791
1
true
0
0
When using env, the first python instance in your PATH will be used. What is the output when you run which python? Which version is virtualenv using instead? It may be that without that change you're using the system Python instead of the virtualenv's. How are you actually invoking the scripts? If you invoke them directly via bin/python in the virtualenv you want to use, then that Python should be used. Otherwise, if you just want to invoke python without a path, it's best to source the activate script of the virtualenv you want to use.
1
0
0
Files from the Grinberg Flask tutorial on Git won't work for me locally unless I add #!/usr/bin/env python to the first line, but I thought the default Python for my Xubuntu apt-get installation of virtualenv was supposed to be Python 2.7? Can I invoke virtualenv in a way that ensures the right Python gets used, so I don't have to add the shebang to every file I check out from Git? I also have to chmod 755 these files before they work in my local virtualenv. Am I causing these problems somehow? Is there a way to avoid having to change the files every time?
virtualenv gets wrong python?
1.2
0
0
2,213
23,965,870
2014-05-31T02:37:00.000
6
0
1
0
python,json,string,dictionary,ordereddictionary
23,965,905
2
true
0
0
When you convert an OrderedDict to a normal dict, you can't guarantee the ordering will be preserved, because dicts are unordered; that's why OrderedDict exists in the first place. It seems like you're trying to have your cake and eat it too here. If you want the order of the JSON string preserved, use the answer from the question I linked to in the comments to load your JSON string directly into an OrderedDict. But then you have to live with whatever performance penalty that carries (I don't know offhand what that penalty is; it may even be negligible for your use case). If you want the best possible performance, just use dict. But it's going to be unordered.
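The linked approach, sketched: pass object_pairs_hook so json.loads builds an OrderedDict directly.

import json
from collections import OrderedDict

raw = '{"b": 1, "a": 2, "c": 3}'
data = json.loads(raw, object_pairs_hook=OrderedDict)
print(list(data.keys()))   # ['b', 'a', 'c'] -- the JSON order, preserved

plain = dict(data)         # the ordering guarantee is lost here (pre-3.7 dicts)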
1
1
0
How do I convert an OrderedDict to a normal dictionary while preserving the same order? The reason I am asking is that when I fetch my data from an API, I get a JSON string, on which I use json.loads(str) to get a dictionary. The dictionary returned from json.loads(...) is randomly ordered. Also, I've read that OrderedDict is slow to work with, so I want to use a regular dict in the same order as the original JSON string. Slightly off-topic: is there any way to convert a JSON string to a dict using json.loads(...) while maintaining the same order, without using collections.OrderedDict?
Convert OrderedDict to normal dict preserving order?
1.2
0
0
3,780
23,967,242
2014-05-31T06:44:00.000
1
1
0
1
python,apache,web-deployment
23,967,560
1
true
0
0
It depends on how the GUI is written, what abc.exe does, and how you want to use the web interface. In general, what you want is not possible. While a local application has only one user and it is clear when that user terminates the program, a web application can have millions of users at the same time, and when the application doesn't hear anything from a user it is not clear whether the user closed the window, the network connection broke, or anything else. That's why web applications are as stateless as possible, or keep session information in databases. This is not the case for local applications, so you would probably have to rewrite large parts of the C code.
1
2
0
I wrote a program in C, and designed its GUI using Python. Now I want to convert it to a web app. I have GUI.py and abc.exe file. Can I directly execute GUI Python script (GUI.py) on 'Apache2' local server? If yes, then how?
How to run GUI Python script on Apache?
1.2
0
0
417
23,971,667
2014-05-31T15:24:00.000
3
0
0
0
python,sql,client,hive
23,984,349
1
false
0
0
Fixed! It was due to permissions on the remote server. Changing the user in the connect statement from 'root' to 'hdfs' solved the problem.
1
1
0
I am using pyhs2 as a Hive client. A SQL statement with a 'where' clause was not recognized; I got 'pyhs2.error.Pyhs2Exception: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask'. But it runs fine in the Hive shell.
python hive client pyhs2 does not recognize 'where' clause in sql statement
0.53705
1
0
945
23,979,911
2014-06-01T12:33:00.000
0
0
1
0
python,python-2.7
23,979,979
2
false
0
0
You could solve it with str.split, but this should be solved with os.path.split.
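For example, run on Windows to match the question's backslash paths:

import os.path

full_path = r'D:\Users\Zache\Downloads\example.obj'
directory, filename = os.path.split(full_path)
print(directory)   # D:\Users\Zache\Downloads
print(filename)    # example.obj

# building a sibling path for another file next to example.obj
target = os.path.join(directory, 'other_file.obj')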
1
0
0
Say that I have the string "D:\Users\Zache\Downloads\example.obj" and I want to copy another file to the same directory as example.obj. How do I do this in a way that's not hardcoded? "example" can also be something else (user input). I'm using filedialog2 to get the big string. This is for an exporter with a basic GUI.
Remove a string (filename) from a bigger string (the whole path)
0
0
0
34
23,980,268
2014-06-01T13:15:00.000
3
0
0
0
xml,python-2.7,ms-word,python-docx
24,001,416
1
true
0
0
The short answer is that you can't reliably determine soft page breaks from a .docx file. You can identify hard page breaks, and you may be able to detect where Word broke pages the last time it "flowed" the document. A Word document is a "flowed" document, meaning that Word's layout engine "flows" the text of the document into a page until it runs out of room, then creates a new page into which it flows the remaining text. These "soft" page breaks are not specified in the .docx file; they are determined by Word at the time of rendering, either for display or printing. This makes sense because whenever you change, for example, the margins, the pages may break at different locations. An implication of this is that the .docx file does not contain markup identifying where the following text should flow onto a new page. A hard page break is one explicitly inserted by the document author to force the following content onto a new page, without regard to whether the current page is full. These are implemented using a break element, within a run I believe, and can be detected. As an aid to assistive technologies, like a voice reader for the visually impaired, Word may insert <w:lastRenderedPageBreak> elements. I don't know much about these and under what circumstances Word inserts them, but it might be an avenue worth exploring.
1
1
0
How do I identify a new page, or some identifier that denotes a page number, using python-docx? I've looked through the docs to no avail so far, and have also tried looking for the WD_BREAK.PAGE attribute, but this feature is not yet supported. All help is appreciated, thanks.
Find a new page in a word document
1.2
0
0
666
23,983,822
2014-06-01T19:50:00.000
4
0
1
0
python,ipython,anaconda,qtconsole
23,998,898
2
true
0
0
The launcher always points to the root environment (Python 2). If you have activated the Python 3 environment, you can launch the notebook by just typing ipython notebook (and the same with the qtconsole with ipython qtconsole).
2
4
0
I have installed Anaconda 2.0 in a Windows 7 environment. The default Python is 2.7, but Python 3.4 is also installed. I am able to activate Python 3.4 with the command "activate py3k". After this, the Spyder IDE works correctly with Python 3.4. But 1) I'm not able to start IPython Notebook and QT Console with Python 3.4, and 2) I'm not able to start Anaconda with Python 3.4 as the default (so that the launcher starts the three apps - Spyder, IPython Notebook and IPython QT Console - with Python 3.4).
How to activate Ipython Notebook and QT Console with Python 3.4 in Anaconda 2.0
1.2
0
0
8,218
23,983,822
2014-06-01T19:50:00.000
5
0
1
0
python,ipython,anaconda,qtconsole
24,006,124
2
false
0
0
As asmeurer said, when in your py3k environment in the command prompt, you can launch a 3.4 kernel with the ipython notebook command. You can run both a 2.7 and a 3.4 at the same time if you specify a different port, for instance, ipython notebook --port 8080 The 2.7 will default to 8888. Note that, by default, IPython will look in your current directory for notebooks and store them there if you create them, so it can be helpful to create a directory just for Python 3 notebooks and either cd to it before launching or specify a directory with ipython notebook --port 8080 --notebook-dir C:\\Users\\[User name]\\Documents\\ipython3notebooks
2
4
0
I have installed Anaconda 2.0 in a Windows 7 environment. The default Python is 2.7, but Python 3.4 is also installed. I am able to activate Python 3.4 with the command "activate py3k". After this, the Spyder IDE works correctly with Python 3.4. But 1) I'm not able to start IPython Notebook and QT Console with Python 3.4, and 2) I'm not able to start Anaconda with Python 3.4 as the default (so that the launcher starts the three apps - Spyder, IPython Notebook and IPython QT Console - with Python 3.4).
How to activate Ipython Notebook and QT Console with Python 3.4 in Anaconda 2.0
0.462117
0
0
8,218
23,985,795
2014-06-02T00:19:00.000
1
0
0
0
python,amazon-web-services,amazon-s3,flask
23,986,820
1
false
1
0
Make the request to your Flask application, which will authenticate the user and then issue a redirect to the S3 object. The trick is that the redirect should be to a signed temporary URL that expires in a minute or so, so it can't be saved and used later or by others. You can use boto.s3.key.generate_url function in your Flask app to create the temporary URL.
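A sketch with boto (the pre-boto3 API the answer refers to); the bucket and key names are hypothetical:

import boto

conn = boto.connect_s3()                        # credentials from the usual boto config
bucket = conn.get_bucket('my-private-bucket')   # hypothetical bucket name
key = bucket.get_key('images/user42/pic.jpg')   # hypothetical object key

url = key.generate_url(expires_in=60)           # signed URL, valid for 60 seconds
# redirect the authenticated user to `url`; after a minute the link goes dead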
1
0
0
I am trying to serve files securely (images in this case) to my users. I would like to do this using Flask and preferably Amazon S3; however, I would be open to another cloud storage solution if required. I have managed to get my Flask static files like CSS and such on S3, but this is all non-secure: everyone who has the link can open the static files. This is obviously not what I want for secure content. I can't seem to figure out how I can make a file available only to the authenticated user that 'owns' the file. For example: when I log into my Dropbox account, copy a random file's download link, then go over to another computer and use this link, it denies me access, even though I am still logged in and the download link was available to the user on the other PC.
Secure access of webassets with Flask and AWS S3
0.197375
0
0
890
23,986,203
2014-06-02T01:27:00.000
0
0
1
0
python,list
23,986,267
4
false
0
0
A list of lists is similar to having a 2-D array. To reference an element of the inner list, just use mylist[i][j] where i refers to the index of the list and j to the element in that list. In the case of 2, this would be: mylist[0][0]
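For example:

mylist = [[2], [9], [3], [5], [7]]

value = mylist[0][0]   # first inner list, then its first element
print(value)           # 2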
1
1
0
I have a list that contains lists: mylist=[[2],[9],[3],[5],[7]]. Is there a way in Python to get the value inside the inner list (not just print it)? For example, we would get the integer 2 from the list mylist[0], without iterating over the whole of mylist.
integer extraction from list of lists in python
0
0
0
60
23,987,050
2014-06-02T03:41:00.000
1
0
0
0
python,django,orm
23,987,194
2
false
1
0
I chose option 1 when I set up my environment, which does much of the same stuff. I have a JSON interface that's used to pass data back to the server. Since I'm on a well-protected VLAN, this works great. The biggest benefit, like you say, is the Django ORM. A simple address call with proper data is all that's needed. I also think this is the simplest method. The "blocking on the DB" issue should be non-existent. I suppose that it would depend on the DB backend, but really, that's one of the benefits of a DB. For example, a single-threaded file-based sqlite instance may not work. I keep things in Django as much as I can. This could also help with DB security/integrity, since it's only ever accessed in one place. If your client accesses the DB directly, you'll need to ship username/password with the Client. My recommendation is to go with 1. It will make your life easier, with fewer lines of code. Besides, as long as you code Client properly, it should be easy to modify DB access later on down the road.
1
2
0
So in my spare time I've been developing a piece of network-monitoring software that can be installed on a bunch of clients, and the clients report data back to the server (RAM/CPU/storage/network usage and the like). For the administrative console as well as reporting, I've decided to use Django, which has been a learning experience in itself. The clients report to the server asynchronously with whatever data they happen to have (as of right now, it's just received and dumped, not stored in a DB). I need to access this data in Django. I have already created the models to match my needs. However, I don't know how to go about getting the actual data into the Django DB safely. What is the way to do this? I thought of a few options, but they all had drawbacks: 1) give the Django app a reference to the server and start a thread that continuously checks for new data and writes it to the DB; 2) have the server access the Django DB directly and write its data there. The problem with 1 is that I'm coupling the server even more tightly with the Django app, but the upside is that I can use the ORM to write the data nicely. The problem with 2 is that I can't use the ORM to write data, and I'm not sure whether it could cause blocking on the DB in some cases. Is there some obvious good option I'm missing? I'm sorry if this question is vague; I'm very new to Django and I don't want to write myself into a corner.
How to access Django DB and ORM outside of Django
0.099668
1
0
849
23,987,917
2014-06-02T05:37:00.000
0
0
0
1
python,debugging,file-io,lsof
23,990,975
1
false
0
0
I never worked with Mac OS, but I could imagine this: maybe Python locks the file on open, and the hex editor fails if it tries to open the file afterwards. The system hanging and getting slow (even after killing all processes) sounds like some kind of caching that fills up your memory until your computer starts using the hard disk as memory (and turns really slow). I think you should find out how files are opened with Python on Mac OS (whether there is some kind of lock), and you should take care that this large file is never held completely in memory (there are different methods for reading large files in chunks). Greetings, Kuishi. PS: I apologize for my English; it isn't my native language.
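One common chunked-read pattern (a sketch; the 1 MiB chunk size is arbitrary and handle() is a hypothetical per-chunk processor):

CHUNK = 1024 * 1024   # 1 MiB per read keeps memory use flat

def handle(chunk):
    pass  # hypothetical processing of one chunk

with open('image.dmg', mode='rb') as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:        # an empty bytes object signals end of file
            break
        handle(chunk)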
1
0
0
I have a problem understanding a strange file-locking behavior in the Python debugger. I have a 2TB image file which my script reads. Everything works perfectly until I want to read the same file with a different hex editor. If the file is opened in the hex editor before I start my script, everything is fine. If I try to open the file while the script is paused at a breakpoint, my system almost hangs and becomes very slow. I can normally kill Python and the hex editor from the terminal, but it is very slow and takes up to 10 minutes. The same problem appears AFTER I stop the script, even after extensively killing all Python instances. The disk where this image sits remains locked and cannot be unmounted (only with a diskutil force command), and the system hangs if I try to open the file anywhere else. Also, I can't start scripts one after another; the next script just stops working and hangs my system. I have to wait up to 10 minutes to be able to work with the file again. I tried to find the process which locks the file with the "sudo lsof +D" command, but it doesn't list anything. Here are some more details: — My system is Mac OS X 10.9. Python is 3.4. I use Eclipse with PyDev to develop the script. — I use the open('image.dmg', mode='rb') command to open the file in Python and close() to close it. — The file is a 2TB disk image on an external ExFAT-formatted drive. Other files don't have such problems. The file is write-protected in Finder settings. Can anyone point me in the proper direction to locate the source of this problem?
File is locked by Python debugger
0
0
0
79
23,993,475
2014-06-02T11:29:00.000
0
0
0
0
python-2.7,openerp
23,993,609
2
false
1
0
Yes, it is possible. You can create two views for the same table, with a separate menu and action for each view.
1
3
0
I need multiple form views of the same object in my module. I created multiple forms, but OpenERP shows only one form related to the object; the other forms are hidden. I looked in the documentation but there is no answer. If anybody knows, please help. Thanks in advance.
Is it possible to show multiple form views or tree views of the same object in Openerp?
0
0
0
3,944
23,993,638
2014-06-02T11:39:00.000
0
0
0
1
python,hadoop
25,421,497
3
false
0
0
As far as I know, the same task is run on many nodes. As soon as one node returns the result, the tasks on the other nodes are killed. That's why the job SUCCEEDED but individual tasks are in the KILLED state.
1
13
0
I just finished setting up a small hadoop cluster (using 3 ubuntu machines and apache hadoop 2.2.0) and am now trying to run python streaming jobs. Running a test job I encounter the following problem: Almost all map tasks are marked as successful but with note saying Container killed. On the online interface the log for the map jobs says: Progress 100.00 State SUCCEEDED but under Note it says for almost every attempt (~200) Container killed by the ApplicationMaster. or Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 In the log file associated with the attempt I can see a log saying Task 'attempt_xxxxxxxxx_0' done. I also get 3 attempts with the same log, only those 3 have State KILLED which are under killed jobs. stderr output is empty for all jobs/attempts. When looking at the application master log and following one of the successful (but killed) attempts I find the following logs: Transitioned from NEW to UNASSIGNED Transitioned from UNASSIGNED to ASSIGNED several progress updates, including: 1.0 Done acknowledgement RUNNING to SUCCESS_CONTAINER_CLEANUP CONTAINER_REMOTE_CLEANUP KILLING attempt_xxxx Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED Task Transitioned from RUNNING to SUCCEEDED All the attempts are numbered xxxx_0 so I assume they are not killed as a result of speculative execution. Should I be worried about this? And what causes the containers to be killed? Any suggestions would be greatly appreciated!
Hadoop streaming jobs SUCCEEDED but killed by ApplicationMaster
0
0
0
3,350
23,995,129
2014-06-02T12:58:00.000
1
0
1
1
python,windows,pip
23,996,033
1
true
0
0
python-dev installs the headers and libraries on Linux, allowing you to write and compile extensions. On Windows, the standard installer provides these by default. The header files are in the $installroot\include directory, and the link libraries are found in $installroot\libs. For example, I've installed Python 2.7 into c:\python27. This means my include files are located in c:\python27\include and the link libraries are in c:\python27\libs.
1
0
0
Actually, we can simply install python-dev on Linux with the following command: sudo apt-get install python-dev, since Python is integrated with Linux. But no such luck on Windows: for "pip install python-dev" there is no corresponding package on pypi.python.org. What should I install on Windows to match what "apt-get install python-dev" installs under Linux?
How can I manage to install packages within python-dev on windows systems just like that on Linux?
1.2
0
0
7,578
23,998,359
2014-06-02T15:42:00.000
0
0
0
0
python,angularjs,testing,selenium
24,002,478
1
false
0
0
As @Siking said, this is clearly a timing issue. The fact is that Selenium is very fast, often faster than the loading of the element. Sometimes Selenium requires a pause or a sleep to ensure that an element is present. I also recommend, especially for asynchronous requests, waiting until the element is present (e.g. waitForElementPresent) to make sure the Ajax call has finished.
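An explicit wait is the usual fix; a sketch with Selenium's Python bindings, where the locator and URL are hypothetical:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('http://example.com')   # placeholder URL

wait = WebDriverWait(driver, 10)
# blocks until the element is visible and enabled, or times out after 10 s
element = wait.until(EC.element_to_be_clickable((By.ID, 'element-x')))
element.click()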
1
4
0
I've created a test that clicks on an element X. This element is only revealed after you click on another button; the elements are connected with ng-hide. When I try to run my code, the click on element X doesn't work. However, in debug mode, or after adding a one-second sleep, it does. I'm using the Selenium framework in Python, with a remote webdriver and an implicit wait of 10 seconds. Does someone know the reason for this behavior?
Selenium click on elements works only in debug mode
0
0
1
2,065
23,998,471
2014-06-02T15:49:00.000
-2
0
1
0
python,python-3.x,global-variables
23,998,513
3
false
0
0
Should I declare this variable again and again as global in each function? You should not have any global variables at all; put these variables and functions into a class instead.
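A minimal sketch of that refactor; the counter example is hypothetical:

class Counter(object):
    """Holds what would otherwise be module-level globals."""

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1   # every method sees the same instance state

    def reset(self):
        self.value = 0

c = Counter()
c.increment()
print(c.value)   # 1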
1
3
0
I am using a variable in multiple functions, and each of those functions changes the variable's value. I already declared the variable as global in the first function. Should I declare this variable again as global in each function (and will this not overwrite the first global variable I declared)? Or should I not declare it again as global in those functions, with the local uses there still being treated as global because I already declared the variable global the first time?
Global variable changed by multiple functions - how to declare in Python
-0.132549
0
0
7,982
23,998,849
2014-06-02T16:10:00.000
0
0
0
1
python
23,999,339
1
false
0
0
I just solved the issue by following these steps: from Python26/Scripts/ I ran the command easy_install pip, then I ran pip install -U multi-mechanize, and it works...
1
1
0
I just followed the steps to install multimechanize on Windows. I didn't get errors during the installation; I tried with Python 2.7 and 2.6... but I'm getting the following error when I try to create a new project: C:\multi-mechanize-1.2.0>multimech-newproject msilesMultimech 'multimech-newproject' is not recognized as an internal or external command, operable program or batch file. Is there something else that I need to do or install?
multimech is not recognized on Windows
0
0
0
79
23,999,013
2014-06-02T16:19:00.000
3
0
1
1
python,corba,omniorb
24,046,108
1
true
0
0
With help from Duncan Grisby: the version of omniORBpy must match the Win32/Win64 status of your environment.
1. Copy the distribution to a directory (I used python27/lib/site-packages/omniORB).
2. Add to or create a PYTHONPATH environment variable that points to ../omniORB/lib/python and ../omniORB/lib/x86_win32.
3. Merge the contents of sample.reg into your Windows Registry.
4. Add an explicit PATH environment entry for ../omniORB/bin/x86_win32.
Please note that omniORB is case-sensitive for the paths, even though Windows is not.
1
2
0
I'd like to experiment with a Python (v2.7) app accessing a CORBA API, but I keep going around in circles about which omniORB pieces are necessary and where they should be placed. I've downloaded omniORBpy-4.2.0-win64-py27, which I thought should have contained the bits I needed. Is it as simple as adding the files in the bin\x86_win32 directory into my Python lib\site-packages directory? I've found conflicting information about using the PYTHONPATH environment variable (I don't have one now); is it necessary?
Python: Installing OmniOrbpy in Windows64(Windows 7) environment
1.2
0
0
801
24,000,654
2014-06-02T18:06:00.000
0
0
0
0
python,neural-network,backpropagation,pybrain
27,366,408
1
false
0
0
The assert statement checks whether a condition is true; in this case, whether the input dimension (indim) of your network is the same as that of your dataset ds. Check whether im3.flatten() really yields 12288 values: the failing assert ds.indim == network.indim means the flattened image does not match the 12288 inputs you declared.
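A small sketch of that check, using a stand-in image whose flattened size is exactly 12288 (64 x 64 x 3); the real im3 loaded via OpenCV should satisfy the same assertion:

```python
import numpy as np
from pybrain.datasets import SupervisedDataSet

im3 = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in for the OpenCV image
ds = SupervisedDataSet(12288, 1)

flat = im3.flatten()
assert flat.size == ds.indim, \
    "image has %d values but the dataset expects %d" % (flat.size, ds.indim)
ds.appendLinked(flat, [10])
```

If the assertion fires on Windows but not Linux, the picture loaded there likely has a different size or number of channels.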
1
2
1
I initialized ds = SupervisedDataSet(12288,1) and added data with ds.appendLinked(im3.flatten(),10), where im3 is an OpenCV picture, and this is my trainer: trainer = BackpropTrainer(red, ds). When execution reaches BackpropTrainer, I get an AssertionError in backprop.py line 35, self.setData(dataset). It's a PyBrain error on Windows; I developed the code on Linux, where it ran without problems. I don't know what else to do; I tried reinstalling everything but I still get the same error. Can anyone help me?
I get a PyBrain BackpropTrainer AssertionError on Windows 7, which requirement is missing?
0
0
0
424
24,001,364
2014-06-02T18:55:00.000
1
0
0
0
python,amazon-web-services,elastic-map-reduce
29,903,784
1
false
0
0
You can use pip install mmh3 to install it.
1
0
1
I need to use mmh3 for hashing. However, when I run "python MultiwayJoin.py R.csv S.csv T.csv -r emr > output.txt" in the terminal, it returned an error saying: File "MultiwayJoin.py", line 5, in import mmh3 ImportError: No module named mmh3
mmh3 not installed on Elastic MapReduce in AWS
0.197375
0
0
1,466
24,002,076
2014-06-02T19:44:00.000
3
0
1
0
python,plot,graphics,anaconda,spyder
59,919,089
2
false
0
0
If you are using the latest version of Spyder, just switch the "Variable explorer" pane to "Plots", which is shown above the console.
1
12
0
Setup: Anaconda 2.0.0 (Win 64), Spyder (2.3.0rc that came with Anaconda) I configure the graphics: Tools > Preferences > iPython console > Graphics > Graphics backend > Inline But no matter what I do the graphics always open in a separate window? Is there a way to force them to be inline in the console?
Spyder Plot Inline
0.291313
0
0
31,810
24,002,485
2014-06-02T20:13:00.000
0
0
0
0
python,machine-learning,scikit-learn,nltk
64,190,596
4
false
0
0
I think a better method would be: Step 1: create word/sentence embeddings for each text/sentence. Step 2: calculate the POS tags and feed them to an embedder as in Step 1. Step 3: element-wise multiply the two vectors (see the sketch below). This ensures that the word embeddings in each sentence are weighted by the POS tags associated with them. Thanks
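A toy numpy sketch of step 3; the two matrices stand in for the outputs of a word embedder and a POS-tag embedder (one row per token, and both must have the same shape), which this answer leaves unspecified:

```python
import numpy as np

# Pretend outputs of a word embedder and a POS-tag embedder: 2 tokens x 4 dims
word_vecs = np.array([[0.1, 0.2, 0.3, 0.4],
                      [0.5, 0.1, 0.0, 0.2]])
pos_vecs  = np.array([[1.0, 0.5, 1.0, 0.5],
                      [0.5, 1.0, 0.5, 1.0]])

weighted = word_vecs * pos_vecs        # element-wise product, shape (2, 4)
sentence_vec = weighted.mean(axis=0)   # one fixed-length feature vector
print(sentence_vec)
```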
1
13
1
I want to use the part-of-speech (POS) tags returned by nltk.pos_tag as features for a sklearn classifier. How can I convert them to a vector and use them? e.g. sent = "This is POS example" tok=nltk.tokenize.word_tokenize(sent) pos=nltk.pos_tag(tok) print (pos) This returns the following: [('This', 'DT'), ('is', 'VBZ'), ('POS', 'NNP'), ('example', 'NN')] Now I am unable to apply any of the vectorizers (DictVectorizer, FeatureHasher, or CountVectorizer from scikit-learn) to use in a classifier. Please suggest.
python: How to use POS (part of speech) features in scikit learn classfiers (SVM) etc
0
0
0
11,721
24,006,376
2014-06-03T02:53:00.000
0
0
0
0
python,openpyxl
24,012,565
1
false
0
0
This is currently not directly possible in openpyxl, because it would require reassigning all the cells below the new row. You can do it yourself by iterating through the relevant rows (starting at the end) and writing each row's values into the row below it, then writing the new values into the row you have freed up; a sketch follows.
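A sketch of that manual shift, assuming a recent openpyxl (max_row/max_column) and a hypothetical file name; note it copies cell values only, not styles:

```python
from openpyxl import load_workbook

wb = load_workbook("example.xlsx")  # hypothetical file
ws = wb.active

# Walk bottom-up so no row is overwritten before it has been copied down
for row in range(ws.max_row, 0, -1):
    for col in range(1, ws.max_column + 1):
        ws.cell(row=row + 1, column=col).value = ws.cell(row=row, column=col).value

# Row 1 is now a duplicate of the old first row; clear it and write new values
for col in range(1, ws.max_column + 1):
    ws.cell(row=1, column=col).value = None
ws.cell(row=1, column=1).value = "inserted value"

wb.save("example.xlsx")
```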
1
1
0
I'm curious if there's a way to insert a new row (that would push all the existing rows down) in an existing openpyxl worksheet? (I'm looking to insert at the first row if it helps) I looked through all the docs and didn't see anything mentioned.
How to insert a row in openpyxl
0
1
0
5,098
24,009,656
2014-06-03T07:42:00.000
0
0
0
0
python,animation,matplotlib,ipython
24,048,037
1
false
0
0
As @tcaswell pointed out, the problem was caused by the callback that was indirectly calling plt.show().
1
0
1
I have a python a script that generates an animation using matplotlib's animation.FuncAnimation and animation.FFMpegWriter. It works well, but there's an issue when running the code in IPython: each frame of the animation is displayed on screen while being generated, which slows down the movie generation process. I've tried issuing plt.ioff() before running the animation code, but the figure is still displayed on screen. Is there a way to disable this behavior in IPython? On a related note, if a run the script from a shell (i.e. python myMovieGenScript.py), only one frame is shown, blocking execution. I can close it and the rest of the frames are rendered off screen (which is what I want). Is there a way to prevent that single frame to be displayed, so no user interaction is required?
Matplotlib animation + IPython: temporary disabling interactive mode?
0
0
0
285
24,015,758
2014-06-03T12:54:00.000
1
0
0
0
python,database,django,oracle,database-connection
24,016,200
1
false
0
0
It can depend on how your hosting is set up, but if it is allowed you will need the following: a static IP or Dynamic DNS setup so your home server can be found reliably; port forwarding on your router to allow traffic to reach the server; and the willingness to expose your home systems to the dangers of the internet. Strictly speaking a static IP/Dynamic DNS setup is not required, but without one you will have to change the website configuration every time your home system changes IPs, the frequency of which depends on your ISP. It's also worth noting that many ISPs consider running servers on your home network a violation of the terms of service for residential customers, but in practice, as long as you aren't generating too much traffic, it's not usually an issue. With port forwarding on your router, you can specify that traffic arriving on a particular port be redirected to a specific internal address:port on your network (e.g. myhomesystem.com:12345 could be redirected to 192.168.1.5:1521). Once those are in place, you can use the static IP or the Dynamic DNS entry as the hostname to connect to (a connection sketch follows).
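Once the Dynamic DNS name and port forward exist, the connection from the hosted site looks like any other cx_Oracle connection; everything below (host, port, SID, credentials) is a placeholder:

```python
import cx_Oracle

# myhomesystem.example.com:12345 is forwarded by the router to 192.168.1.5:1521
dsn = cx_Oracle.makedsn("myhomesystem.example.com", 12345, "ORCL")
conn = cx_Oracle.connect("scott", "tiger", dsn)

cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
conn.close()
```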
1
0
0
I have a "local" Oracle database in my work network. I also have a website at a hosting service. Can I connect to the 'local' Oracle database from the hosting service? Or does the Oracle database need to be at the same server as my website? At my work computer I can connect to the Oracle database with a host name, username, password, and port number.
Connect to local Oracle database from online website
0.197375
1
0
490
24,019,331
2014-06-03T15:40:00.000
0
0
0
1
python,network-programming,zeromq,ethernet,pyzmq
24,155,570
2
false
0
0
Maybe you could periodically send a datagram message containing the peer's IP address (or some other useful information) to the broadcast address, to allow other peers to discover it. And after a peer's address is discovered you can establish a connection via ZeroMQ or some other kind of connection. :)
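A rough sketch of that discovery scheme with plain sockets; the port number and the advertised ZeroMQ endpoint are arbitrary placeholders:

```python
import socket
import time

PORT = 50000  # arbitrary discovery port
MESSAGE = b"peer-at tcp://192.168.1.5:5555"  # advertise your ZeroMQ endpoint

# Announcer: broadcast our address every 2 seconds
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
while True:
    sock.sendto(MESSAGE, ("<broadcast>", PORT))
    time.sleep(2)

# Listener (run on the other machine instead of the loop above):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", PORT))
#   data, addr = sock.recvfrom(1024)   # addr[0] is the announcing peer's IP
```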
1
1
0
I need to transfer data via pyzmq between two computers connected by an ethernet cable. I have already set up a script that runs correctly on a single computer, but I need to find the TCP address of the other computer in order to communicate. They both run Ubuntu 14.04. One of them should be a server processing requests while the other sends requests. How do I transfer data over TCP through ethernet? I simply need a way to find the address. EDIT: (Clarification) I am running a behavioural study. I have a program called OpenSesame which runs in Python and takes Python scripts. I need a participant to be able to sit at a computer and ask another person questions (specifically for help in a task). I need a server (preferably using pyzmq) connected by ethernet to communicate with that computer. I wrote a script; it works on the same computer, but not over ethernet. I need to find the address.
How can I find the tcp address for another computer to transfer over ethernet?
0
0
1
223
24,022,754
2014-06-03T18:51:00.000
1
0
1
0
python,md5,checksum
24,025,573
2
false
0
0
Recent research on MD5 collisions may have worried you: in 2013 algorithms were published that can generate MD5 collisions in about a second on an ordinary computer. However, I assure you this does not nullify the use of MD5 for checking file integrity and duplicates. It is highly unlikely that you'll get two normal-use files with the same hash, unless you're deliberately looking for trouble and plant binary files crafted to collide. If you're still feeling paranoid, I advise you to use a hash function with a larger output space, such as SHA-512.
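For the questioner's use case the only moving parts are reading each file in chunks and grouping by digest. A sketch with SHA-512 (swap in hashlib.md5 if preferred); the root directory is a placeholder:

```python
import hashlib
import os

def file_digest(path, chunk_size=1 << 20):
    """SHA-512 of a file, read in 1 MB chunks so large files fit in memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

digests = {}
for dirpath, dirnames, filenames in os.walk("/path/to/records"):  # placeholder
    for name in filenames:
        full = os.path.join(dirpath, name)
        digests.setdefault(file_digest(full), []).append(full)

# Any digest that maps to more than one path is a duplicate group
duplicates = dict((d, ps) for d, ps in digests.items() if len(ps) > 1)
```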
1
4
0
I've been tasked with consolidating about 15 years of records from a laboratory, most of which is either student work or raw data. We're talking 100,000+ human-generated files. My plan is to write a Python 2.7 script that will map the entire directory structure, create checksums for each, and then flag duplicates for deletion. I'm expecting probably 10-25% duplicates. My understanding is that MD5 collisions are possible, theoretically, but so unlikely that this is essentially a safe procedure (let's say that if 1 collision happened, my job would be safe). Is this a safe assumption? In case implementation matters, the only Python libraries I intend to use are: hashlib for the checksum; sqlite for databasing the results; os for directory mapping
Is it safe to use MD5 checksums to search for duplicate files across multiple hard drives?
0.099668
0
0
4,101
24,023,131
2014-06-03T19:12:00.000
2
0
0
0
python,django,bower
24,024,759
5
false
1
0
There is no recommended way; it depends on your project. If you are using bower/node for more than the Django project, it might make sense to place it in your project root (above Django) so that it may be reused elsewhere. If it's purely for Django's static files, then it might make sense to place it in a src/ directory outside of the staticfiles system, which builds into the static directory that is exported via collectstatic.
2
18
0
I'm new to the Django framework and I have read that 'static' files like CSS and JS must be inside the 'static' directory, but my question is: given that the bower package manager installs its dependencies in a new directory called bower_components in the current directory, must the bower.json be created in the 'static' Django directory? And if so, isn't bower.json then exported by the collectstatic command (something probably not wanted)? What is the recommended way to work with bower and the Django framework? Update: Thanks Yuji 'Tomita' Tomita, your answer gives more perspective. I want to use bower just to manage front-end dependencies like jQuery, Bootstrap and so on; by that logic they must live inside the static/ Django directory, but doing it that way can cause the bower.json to be treated as a static resource, something probably not wanted.
How to use bower package manager in Django App?
0.07983
0
0
8,184
24,023,131
2014-06-03T19:12:00.000
2
0
0
0
python,django,bower
25,576,643
5
false
1
0
If you're afraid of the bower.json being included, the collectstatic command has an --ignore option that you can use to exclude whatever you want.
2
18
0
I'm new to the Django framework and I have read that 'static' files like CSS and JS must be inside the 'static' directory, but my question is: given that the bower package manager installs its dependencies in a new directory called bower_components in the current directory, must the bower.json be created in the 'static' Django directory? And if so, isn't bower.json then exported by the collectstatic command (something probably not wanted)? What is the recommended way to work with bower and the Django framework? Update: Thanks Yuji 'Tomita' Tomita, your answer gives more perspective. I want to use bower just to manage front-end dependencies like jQuery, Bootstrap and so on; by that logic they must live inside the static/ Django directory, but doing it that way can cause the bower.json to be treated as a static resource, something probably not wanted.
How to use bower package manager in Django App?
0.07983
0
0
8,184
24,024,736
2014-06-03T20:54:00.000
0
0
1
0
python,random
24,024,883
2
false
0
0
Yes, there is: random.triangular(low, high, mode). Note that the third argument is the mode, not the mean; your sample mean will be close to (low + high + mode) / 3 rather than equal to the mode. Edit: see comments below, this has drawbacks.
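A quick sketch showing both the call and the drawback just mentioned: with mode 42 the theoretical mean of the triangular distribution is (23 + 72 + 42) / 3 ≈ 45.7, not 42:

```python
import random

ages = [int(round(random.triangular(23, 72, 42))) for _ in range(100)]
print(min(ages), max(ages))          # stays within 23..72
print(sum(ages) / float(len(ages)))  # tends toward ~45.7, not 42
```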
1
0
1
I need to generate 100 age values between 23 and 72 and the mean value must be 42 years. Do you think such a function already exists in standard python? If not, I think I know python just enough and should be able to code the algorithm but I am hoping something is already there for use. Any hints?
How to generate random int around specific mean?
0
0
0
110
24,025,334
2014-06-03T21:35:00.000
1
0
1
0
python
24,025,363
1
true
0
0
Pretty much. It's not necessarily creating a new instance of the class, since for numbers in a certain range, Python will store canonical instances of those integers and hand you an old one when you ask for a new one. You can do the same thing with your own classes if you want, by implementing __new__.
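A quick demonstration of that caching, which is a CPython implementation detail rather than a language guarantee:

```python
a = int("256")
b = int("256")
print(a is b)   # True on CPython: ints in -5..256 are cached singletons

c = int("257")
d = int("257")
print(c is d)   # False: outside the cached range you get fresh objects
```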
1
0
0
Does that mean when I call int("123"), it is actually creating an instance of int class?
Python print(int) gives me <class 'int'> instead of <type 'int'>
1.2
0
0
62
24,026,618
2014-06-03T23:25:00.000
0
0
0
0
python,bokeh
24,030,090
3
false
1
0
Since Bokeh uses the HTML5 canvas as a backend, it writes its output to static HTML pages. You could always export the HTML to PDF later.
1
29
1
Is it possible to output individual figures from Bokeh as pdf or svg images? I feel like I'm missing something obvious, but I've checked the online help pages and gone through the bokeh.objects api and haven't found anything...
Exporting figures from Bokeh as svg or pdf?
0
0
0
17,265
24,027,928
2014-06-04T02:12:00.000
0
0
0
0
python,selenium,web-scraping,mechanize
24,027,994
2
false
1
0
I would try to watch Chrome's network tab, and try to imitate the final request to get the image. If it turned out to be too difficult, then I would use selenium as you suggested.
1
1
0
My script logs in to my account, navigates the links it needs to, but I need to download an image. This seems to be easy enough to do using urlretrive. The problem is that the src attribute for the image contains a link which points it to the page which initiates a download prompt, and so my only foreseeable option is to right click and select 'save as'. I'm using mechanize and from what I can tell Mechanize doesn't have this functionality. My question is should I switch to something like Selenium?
Using Mechanize for python, need to be able to right click
0
0
1
194
24,028,075
2014-06-04T02:36:00.000
0
0
1
0
python,sublimetext3,ipython-notebook
59,419,565
2
false
0
0
I had the same concern when moving from Jupyter Notebook to Sublime Text, and tried Anaconda's autocomplete, which is far worse than Jupyter's. But after trying SublimeCodeIntel, all my problems were solved! It is almost the same as VSCode's auto-complete and code tips.
1
0
0
Does anyone know of anything comparable? I am currently using Anaconda for linting and autocompletion, but it is nowhere near as good as the IPython Notebook autocompletion.
Python autocompletion comparable to IPython Notebook for Sublime Text 3
0
0
0
525
24,030,051
2014-06-04T05:57:00.000
0
0
0
0
python,linux,pygtk,browser-automation
24,030,111
1
false
0
0
Selenium with PhantomJS should be a good replacement for splinter.
1
1
0
I have tried splinter for browser automation, using the Firefox webdriver in splinter. But the problem is high CPU usage when Firefox loads, and sometimes it hangs the GUI. Please suggest an option. I'm on a Linux box (Ubuntu) and building an app using PyGTK.
Is there any inbuilt browser for web automation using python?
0
0
1
260
24,032,359
2014-06-04T08:20:00.000
0
0
0
0
python-2.7,selenium,selenium-webdriver,robotframework,automated-tests
24,085,441
3
false
1
0
I have seen this issue and found that there is a recovery period where Selenium does not work correctly for a short time after closing a window. Try using a fixed delay or poll with Wait Until Keyword Succeeds combined with a keyword from Selenium2Library.
1
0
0
I am having a problem on handling the pop up windows in robot framework. The process I want to automate is : When the button is clicked, the popup window appears. When the link from that popup window is clicked, the popup window is closed automatically and go back to the main page. While the popup window appears, the main page is disabled, and it can be enabled only when the link from the pop up window is clicked. The problem I have here is that I cannot go back to the main page after clicking the link from the popup window. I got the following error. 20140604 16:04:24.160 : FAIL : NoSuchWindowException: Message: u'Unable to get browser' I hope you guys can help me solve this problem. Thank you!
Going back to the main page after closing the pop up window
0
0
1
1,622
24,036,291
2014-06-04T11:33:00.000
11
0
0
1
c#,python,protocol-buffers,protobuf-net
24,038,019
1
true
0
0
DateTime is spoofed via a multi-field message that is not trivial, but not impossible, to understand. In hindsight, I wish I had done it a different way, but it is what it is. The definition is available in bcl.proto in the protobuf-net project. However! If you are targeting multiple platforms, I strongly recommend you simply use a long (or similar) in your DTO model, representing some time granularity since some epoch (seconds or milliseconds since 1970, for example).
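On the Python side of such a DTO, converting to and from milliseconds-since-1970 is a few lines (on the C# side the same value would travel as a long). A sketch for naive UTC datetimes:

```python
import datetime

EPOCH = datetime.datetime(1970, 1, 1)

def to_millis(dt):
    """Naive UTC datetime -> milliseconds since the Unix epoch."""
    return int((dt - EPOCH).total_seconds() * 1000)

def from_millis(ms):
    return EPOCH + datetime.timedelta(milliseconds=ms)

now = datetime.datetime.utcnow()
roundtrip = from_millis(to_millis(now))
assert now - roundtrip < datetime.timedelta(milliseconds=1)
```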
1
8
0
I'm working on a project consisting of a client and a server. The client is written in Python (and will run on Linux) and the server in C#. I'm communicating through standard sockets and I'm using protobuf-net for the protocol definition. However, I'm wondering how protobuf-net handles DateTime serialization. Unix datetimes differ from the standard .NET DateTime, so how should I handle this situation? Thanks
How protobuf-net serialize DateTime?
1.2
0
0
3,765
24,036,896
2014-06-04T12:02:00.000
0
0
0
0
python-2.7,openstack
24,052,584
1
false
0
0
Use the size attribute on the volume object. For example:
for vol in cinder_client_instance.volumes.list():
    print vol.id    # the cinder volume ID
    print vol.size  # the cinder volume size (in GB)
1
0
0
Can anyone help me get the OpenStack cinder volume size through the Python bindings? I have a list of cinder volumes extracted by calling the cinder_client_instance.volumes.list() method. Now I want to know the size of each volume. How can I get the volume size?
Get the openstack cinder volume size through python bindings
0
0
0
220
24,039,160
2014-06-04T13:44:00.000
0
0
0
0
python,c++,swig
24,071,860
2
false
0
0
You can export your function with a different but compatible signature. In your case, declare the export of your function to take int instead of bool. SWIG will generate the wrapper code for int, but the compiler will call your bool function at the C++ level (unless you have a bool overload). There are no overloads in Python, so I don't know if the SWIG wrapper code will flag an error if you give a bool as the call parameter. SWIG may not like the implicit int -> bool conversion, but it might be OK with the implicit bool -> int one.
2
1
0
I'm updating my bindings to support swig 3.0.1 , but I'm getting an error when trying to call a function that expects a boolean (it was not happening before with 2.0.9)... Specifically: TypeError: in method 'MClass_setStatus', argument 2 of type 'bool' Any hint on what actually changed ?
swig 3.0.1 , python 3 and bool data types
0
0
0
526
24,039,160
2014-06-04T13:44:00.000
1
0
0
0
python,c++,swig
24,077,769
2
true
0
0
I need backward compatibility, so this isn't an acceptable option for me. I was able to revert to the legacy behaviour using -DSWIG_PYTHON_LEGACY_BOOL on the swig command line.
2
1
0
I'm updating my bindings to support swig 3.0.1 , but I'm getting an error when trying to call a function that expects a boolean (it was not happening before with 2.0.9)... Specifically: TypeError: in method 'MClass_setStatus', argument 2 of type 'bool' Any hint on what actually changed ?
swig 3.0.1 , python 3 and bool data types
1.2
0
0
526
24,048,350
2014-06-04T22:00:00.000
0
0
1
0
python,kivy,pyinstaller
39,481,035
4
false
0
1
I had a similar issue with garden modules. When I install them like this:
start cmd /C "garden install --kivy roulettescroll"
start cmd /C "garden install --kivy roulette"
start cmd /C "garden install --kivy tickline"
start cmd /C "garden install --kivy datetimepicker"
and add the matching flags to the PyInstaller call:
PyInstaller ... --hidden-import=kivy.garden.tickline --hidden-import=kivy.garden.roulette --hidden-import=kivy.garden.roulettescroll --hidden-import=kivy.garden.datetimepicker ...
I'm able to install and have no problems with Python picking up the modules. Unfortunately, the datetimepicker shows up as white rectangles, and I have no idea what I might be missing, but at least I got past the import issue.
3
4
0
I have a Kivy-based Python project that I'm trying to build. It uses the NavigationDrawer component from Kivy Garden, through an import: from kivy.garden.navigationdrawer import NavigationDrawer I have a PyInstaller spec file for it which builds a distributable version. This version works well on my machine, but unfortunately not on other machines. Running the interpreter in the 'dist' version with the -v switch, it appears that when I run the distributable on my machine, the navigationdrawer component is not actually coming from inside my build folder. All the other imports show something like: import kivy.graphics.gl_instructions # dynamically loaded from C:\Users\me\myapp\dist\RACECA~1\kivy.graphics.gl_instructions.pyd But the navigationdrawer import says: import kivy.garden.navigationdrawer """directory C:\Users\me\.kivy\garden\garden.navigationdrawer C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc matches C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.py import kivy.garden.navigationdrawer # precompiled from C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc""" But noo! I don't want you to import them from c:\users. I want them to get nicely copied into my dist folder like all the other imports. I've tried adding c:\users\me to PyInstaller's pathex, the system PATH and PYTHONPATH without any joy. Anyone have any ideas?
Kivy Garden in PyInstaller - stuck trying to trace import
0
0
0
2,433
24,048,350
2014-06-04T22:00:00.000
2
0
1
0
python,kivy,pyinstaller
24,048,458
4
false
0
1
You could just copy the navigationdrawer code from C:\Users\me\.kivy\garden\garden.navigationdrawer to your app directory, call the folder 'navigationdrawer' and replace the import with from navigationdrawer import NavigationDrawer. It's not quite the 'right' way to do it (there's probably some way to make pyinstaller copy it in), but it should work fine.
3
4
0
I have a Kivy-based Python project that I'm trying to build. It uses the NavigationDrawer component from Kivy Garden, through an import: from kivy.garden.navigationdrawer import NavigationDrawer I have a PyInstaller spec file for it which builds a distributable version. This version works well on my machine, but unfortunately not on other machines. Running the interpreter in the 'dist' version with the -v switch, it appears that when I run the distributable on my machine, the navigationdrawer component is not actually coming from inside my build folder. All the other imports show something like: import kivy.graphics.gl_instructions # dynamically loaded from C:\Users\me\myapp\dist\RACECA~1\kivy.graphics.gl_instructions.pyd But the navigationdrawer import says: import kivy.garden.navigationdrawer """directory C:\Users\me\.kivy\garden\garden.navigationdrawer C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc matches C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.py import kivy.garden.navigationdrawer # precompiled from C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc""" But noo! I don't want you to import them from c:\users. I want them to get nicely copied into my dist folder like all the other imports. I've tried adding c:\users\me to PyInstaller's pathex, the system PATH and PYTHONPATH without any joy. Anyone have any ideas?
Kivy Garden in PyInstaller - stuck trying to trace import
0.099668
0
0
2,433
24,048,350
2014-06-04T22:00:00.000
-1
0
1
0
python,kivy,pyinstaller
35,244,089
4
false
0
1
Go to your Python site-packages: there is a folder named garden; look in its __init__.py, where there is an explanation of how to install navigationdrawer.
3
4
0
I have a Kivy-based Python project that I'm trying to build. It uses the NavigationDrawer component from Kivy Garden, through an import: from kivy.garden.navigationdrawer import NavigationDrawer I have a PyInstaller spec file for it which builds a distributable version. This version works well on my machine, but unfortunately not on other machines. Running the interpreter in the 'dist' version with the -v switch, it appears that when I run the distributable on my machine, the navigationdrawer component is not actually coming from inside my build folder. All the other imports show something like: import kivy.graphics.gl_instructions # dynamically loaded from C:\Users\me\myapp\dist\RACECA~1\kivy.graphics.gl_instructions.pyd But the navigationdrawer import says: import kivy.garden.navigationdrawer """directory C:\Users\me\.kivy\garden\garden.navigationdrawer C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc matches C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.py import kivy.garden.navigationdrawer # precompiled from C:\Users\me\.kivy\garden\garden.navigationdrawer\__init__.pyc""" But noo! I don't want you to import them from c:\users. I want them to get nicely copied into my dist folder like all the other imports. I've tried adding c:\users\me to PyInstaller's pathex, the system PATH and PYTHONPATH without any joy. Anyone have any ideas?
Kivy Garden in PyInstaller - stuck trying to trace import
-0.049958
0
0
2,433
24,050,155
2014-06-05T01:20:00.000
1
0
0
1
python,google-app-engine,app-engine-ndb
32,898,743
4
false
1
0
I am answering this over a year after it was asked. The only way to test these sorts of things is by deploying an app on GAE. What I sometimes do when I run across these challenges is just "whip up" a quick application tailor-made to test the scenario under consideration. Then, as you put it, you just have to script the operations using some combination of tasks, cron, and client-side curl-type calls. The particular tradeoff in the original question is write throughput versus consistency. This is actually pretty straightforward once you get the hang of it: a strongly consistent query requires that the entities are in the same entity group, while at the same time a given entity group may only take approximately 1 write per second. So you have to look at your needs and usage pattern to figure out whether you can use an entity group (see the sketch below).
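The tradeoff in NDB terms, as a hedged sketch (the model names are made up): putting entities in one entity group buys strongly consistent ancestor queries, at the price of the ~1 write/second limit on that group:

```python
from google.appengine.ext import ndb

class Account(ndb.Model):
    pass

class Txn(ndb.Model):
    amount = ndb.IntegerProperty()

account_key = ndb.Key(Account, "alice")  # hypothetical entity-group root

# Writes into this group are serialized (~1/sec sustained)...
Txn(parent=account_key, amount=10).put()

# ...but ancestor queries against it are strongly consistent
recent = Txn.query(ancestor=account_key).fetch(10)
```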
3
1
0
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How do you test the consistency models and race conditions on Google App Engine / NDB?
0.049958
0
0
261
24,050,155
2014-06-05T01:20:00.000
1
0
0
1
python,google-app-engine,app-engine-ndb
24,050,936
4
false
1
0
I am not sure it can be tested. The inconsistencies are inconsistent. I think you just have to know that datastore operations have inconsistencies, and code around them. You don't want to plan on observations from your tests being dependable in the future.
3
1
0
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How do you test the consistency models and race conditions on Google App Engine / NDB?
0.049958
0
0
261
24,050,155
2014-06-05T01:20:00.000
2
0
0
1
python,google-app-engine,app-engine-ndb
24,050,360
4
false
1
0
You really need to do testing in the real environment. At best the dev environment is an approximation of production. You certainly can't draw any conclusions at all about performance by just using the SDK: in many cases the SDK is faster (startup times) and slower (queries on large datasets). Eventual consistency is emulated and not 100% the same as production.
3
1
0
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?
How do you test the consistency models and race conditions on Google App Engine / NDB?
0.099668
0
0
261
24,052,693
2014-06-05T06:03:00.000
3
0
1
0
python
24,052,765
3
false
0
0
I like ','.join(str(i) for i in lst), but it's really not much different than map
1
2
0
What is the most convenient way to join a list of ints to form a str without spaces? For example [1, 2, 3] will turn '1,2,3'. (without spaces). I know that ','.join([1, 2, 3]) would raise a TypeError.
Easiest way to join list of ints in Python?
0.197375
0
0
1,522
24,054,363
2014-06-05T07:38:00.000
1
1
0
0
python,oauth-2.0,keyerror
24,058,208
1
true
0
0
You get this KeyError because there is no refresh_token in the response. If you didn't ask for offline access in your request, there will be no refresh token in the response; only an access token, the bearer token type, and the token expiry.
1
0
0
Hi! I use google-mail-oauth2-tools but I have a problem: when I enter the verification code the program dies. Traceback (most recent call last): File "oauth2.py", line 346, in <module> main(sys.argv) File "oauth2.py", line 315, in main print 'Refresh Token: %s' % response['refresh_token'] KeyError: 'refresh_token'. Why? Thank you!
google-mail-oauth2-tools KeyError :-/
1.2
0
1
467
24,055,220
2014-06-05T08:27:00.000
3
0
1
0
python,printing,scripting,syntax-error,twill
24,055,670
4
false
0
0
I just found the answer. Thanks for viewing this question and probably for that downvote (SAD FACE).. So, the solution I found was to use the 2to3 script tool found in the Python folder. Basically, it refactors Python 2 code into Python 3 code.
1
10
0
I've been trying to learn Twill scripting in Python and I am using Python 3.4 and Twill 1.8.0. I've been reading some posts here and found it interesting to study, but I do have a problem installing Twill. I know that print in Python 3 is now a function, meaning it needs parentheses, and that's where my problem starts. Looking through the code of Twill 1.8.0, it doesn't seem to have been updated for that change yet: strings passed to print are not enclosed in parentheses. So I was wondering, is there a new patch or version of Twill that adapts to these Python changes? Or is there anything I can do about this aside from manually editing the Twill files? Your response is highly appreciated.
Is there a new/updated Twill?
0.148885
0
0
4,901
24,057,239
2014-06-05T10:05:00.000
1
0
0
0
python,django
24,057,408
1
false
1
0
You need to run collectstatic in your live environment. Have you set up your static folders and placed the appropriate declarations in your nginx config? If yes, then just run: ./manage.py collectstatic
1
0
0
I am using the default Django admin panel. I have just moved my Django site to my live server and found that the admin panel has no styling, but on my local server everything is fine. I should mention I am using nginx. To investigate, I checked the path /usr/local/lib/python2.7/site-packages/django/contrib/ and found that there is no /django/contrib/ directory in my virtual environment. Is that the reason for the missing Django admin panel interface?
Django admin panel interface missing
0.197375
0
0
449
24,059,900
2014-06-05T12:11:00.000
0
0
1
0
python-3.x,module
24,060,037
1
true
0
0
Skip the .py. It's just import filename.
1
0
0
I have written a small program and saved it as a module using if __name__ == '__main__': main(). Now I want to use this program in another program that I'm writing. How do I do that without just copying and pasting the previous program's code? I have tried import filename.py but that doesn't work. Some context: the first program calculates the area of a polygon; the second program needs to use the area in order to calculate the centroid of a polygon.
How do I use a previous program that I've written in a new program that I am currently writing?
1.2
0
0
19
24,061,320
2014-06-05T13:18:00.000
2
0
0
0
python,xml,django,post
24,061,391
1
true
1
0
I'm not quite sure what you're asking. Do you just want to know how to access the POST data? You can get that via request.body, which will contain your XML as a string. You can then use your favourite Python XML parser on it.
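A minimal sketch of such a view using the standard-library parser; the view name is arbitrary, and csrf_exempt is assumed because a raw-XML client typically won't send a CSRF token:

```python
import xml.etree.ElementTree as ET
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def receive_xml(request):
    if request.method == "POST":
        root = ET.fromstring(request.body)  # request.body is the raw POST payload
        return HttpResponse("received root element: %s" % root.tag)
    return HttpResponse(status=405)
```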
1
0
0
There is XML data passed in a POST request as a simple string, not inside some name=value pair, with the HTTP header (optionally) set to 'Content-Type: text/xml'. How can I get this data in Python (with its own means or with Django's tools)?
How can I get XML data in Python/Django passed in POST request?
1.2
0
1
2,986
24,062,830
2014-06-05T14:26:00.000
1
0
1
0
python,c,distutils
24,066,500
3
false
0
0
I'd consider building the python module as a subproject of a normal shared library build. So, use automake, autoconf or something like that to build the shared library, have a python_bindings directory with a setup.py and your python module.
1
16
0
I have written a library whose main functionality is implemented in C (speed is critical), with a thin Python layer around it to deal with the ctypes nastiness. I'm coming to package it and I'm wondering how I might best go about this. The code it must interface with is a shared library. I have a Makefile which builds the C code and creates the .so file, but I don't know how I compile this via distutils. Should I just call out to make with subprocess by overriding the install command (if so, is install the place for this, or is build more appropriate?) Update: I want to note that this is not a Python extension. That is, the C library contains no code to itself interact with the Python runtime. Python is making foreign function calls to a straight C shared library.
Best way to package a Python library that includes a C shared library?
0.066568
0
0
4,767
24,063,919
2014-06-05T15:13:00.000
1
0
1
0
python
24,064,458
1
true
0
0
A Python module/package is for distributing code, not data.
Building modules or packages means creating different types of distribution formats, among them:
- Python source code - for pure Python apps; a sort of ZIP archive, extension .zip or .tar.gz
- egg - a zip archive with whatever can be compiled already compiled; extension .egg
- exe - a Windows installation package
- whl (wheel) - a compiled distribution package
- etc.
Do not confuse a distribution package with a means of providing data to an application. While it is possible, and sometimes done, to ship static data as part of a distribution package, you should never include changing data in your package: that conflicts with the fact that the package is installed in one place and can be used simultaneously by multiple applications.
As for adding files to the build directory: in general, your build should manage all its content on its own and should not expect the distribution package to be modified afterwards. Forget about this kind of trick and you will save yourself a lot of headache.
1
0
0
The Python documentation states that we can build and install a Python module separately. What benefit does that provide? Does it mean that we can add more files to the build directory if we have not installed the module yet (though the module has been built)?
Can we add more files into the build directory if we have not installed the python module yet?
1.2
0
0
19
24,064,222
2014-06-05T15:27:00.000
1
0
1
0
python,psychopy
24,072,167
2
false
0
0
In concrete terms, when you add a Sound component in Builder, you just need to add an expression in the "Start (time)" field that takes account of the duration of the first sound stimulus and the ISI for this trial. So if you have a column for the ISI in the conditions file as Jonas suggests (let's say it is called "ISI") and a Sound component for the first auditory stimulus (called, say, "sound1"), then you could put this in the Start field of the second sound stimulus: $sound1.getDuration() + ISI The $ symbol indicates that this line is to be interpreted as a Python code expression and not as a literal duration. This assumes that sound1 starts at the very beginning of a trial. If it starts, say 1 second into the trial, then just add a constant to the expression: $1.0 + sound1.getDuration() + ISI Your ISI column should contain values in seconds. If you prefer milliseconds, then do this: $sound1.getDuration() + ISI/1000.0
1
1
0
I'm totally new to PsychoPy and I'm working with Builder. I'm not familiar with Python coding at all. I have audio stimuli that have variable durations. In each trial, I want the second stimulus to start 500ms or 1500ms after the end of the first stimulus. Is there a way to do this in Builder? If I have to do it on Coder, what should I do? Thank you very much!
variable stimuli duration but two kinds of fixed ISI in PsychoPy
0.099668
0
0
991
24,069,015
2014-06-05T19:48:00.000
3
0
1
0
python
24,069,166
1
true
0
0
You should split your code into multiple modules when it begins to be unwieldy to keep it all in one module. This is to some extent a matter of taste. Note that it may unwieldy for the code author (i.e., file is too big to navigate easily) or for the user of the library (e.g., too many unrelated functions/classes jammed together in the same namespace, hard to keep track of them).
1
0
0
I'm not positive on the terminology here, so that may explain why searching on my own yielded no results. I was just curious if there is a widely accepted, general method for writing a module in Python. Obviously people prefer splitting things into segmented .py scripts, importing when needed, and packing it all into a folder. What I want to know: is there a general method for how/when/why we stop writing things together in one .py and begin a new one (other than obvious things like one .py script for the main job, and then a preferences.py to handle reading/writing prefs)?
Python convention for separating files
1.2
0
0
50
24,069,488
2014-06-05T20:17:00.000
1
0
1
0
python
24,592,547
1
true
0
0
After a long time I got it. Python(x,y) is a Python 2.7 distribution. I was trying to install mlpy module for Python 3.5. That was the problem. Thanks for everyone who tried to help :)
1
1
0
I always used Python(x,y) for college programming because it makes my life a lot easier. I never needed to worry about the headache of installing new modules because all the modules I needed until now came along with Python(x,y). The problem is that right now I need the "mlpy" module, but I can't find a way to integrate it with Python(x,y), and I can't find whether there really is a way to do it. I already tried mlpy's ".exe" installer and its "setup.py", but with no success. Is there an easy way of doing it? EDIT: I want to use a DTW function. That's why I need mlpy. I won't need mlpy if Python(x,y) already has a module with a DTW function. The problem is: I can't find such a function.
How do I install additional modules for Python(x,y)?
1.2
0
0
6,850
24,069,711
2014-06-05T20:31:00.000
1
0
0
0
python,rdf,freebase
25,625,683
1
false
1
0
The Freebase dump is in RDF format. The easiest way to query it is to dump it (or a subset of it) into an RDF store. It'll be quicker to query, but you'll need to pay the database load time up front first.
1
3
0
I downloaded the freebase data dump and I want to use it to get information about a query just like how I do it using the web API. How exactly do I do it? I tried using a simple zgrep but the result was a mess and takes too much time. Any graceful way to do it (preferably something that plays nicely with python)?
How to search freebase data dump
0.197375
0
1
964
24,072,231
2014-06-06T00:11:00.000
-1
0
0
0
python,csv,web-applications,flask
64,239,216
4
false
1
0
I am absolutely baffled by how many people discourage using CSV as a database storage back-end format. Concurrency: There is NO reason why CSV cannot be used with concurrency. Just as a database thread can write to one area of a binary file at the same time that another thread writes to another area of the same binary file, databases can do EXACTLY the same thing with CSV files. And just as a journal is used to maintain the atomic nature of individual transactions, the same thing can be done with CSV. Speed: Why on earth would a database read and write a WHOLE file at a time, when it can do what it does for ALL other storage formats: look up the starting byte of a record in an index file, SEEK to it in constant time, overwrite the data, comment out anything left over, and record the free space for later use in a separate index file, just as a database could zero out the bytes of any unneeded areas of a binary "row" and record the free space in a separate index file. I just do not understand this hostility to non-binary formats, when everything that can be done with one format can be done with the other; everything, except perhaps raw binary data compression, depending on the particular CSV syntax in use (special binary comments, etc.). Emergency access: The added benefit of CSV is that when the database dies, which inevitably happens, you are left with a CSV file that can still be accessed quickly in an emergency; which is the primary reason I never use binary storage for essential data that should be quickly accessible even when the database breaks due to incompetent programming. Yes, the CSV file would have to be re-indexed every time you made changes to it in a spreadsheet program, but that is no different from having to re-index a binary database after the index/table gets corrupted/deleted/out-of-sync.
2
3
0
I am using Flask to make a small webapp to manage a group project, in this website I need to manage attendances, and also meetings reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database.
Something wrong with using CSV as database for a webapp?
-0.049958
1
0
3,024
24,072,231
2014-06-06T00:11:00.000
1
0
0
0
python,csv,web-applications,flask
47,320,760
4
false
1
0
I think there's nothing wrong with that as long as you abstract away from it, i.e. make sure you have a clean separation between what you write and how you implement it. That will bloat your code a bit, but it will make sure you can swap out your CSV storage in a matter of days. In other words, pretend that you can persist your data as if you're keeping it in memory. Don't write "openCSVFile" in your Flask app; use initPersistence(). Don't write "csvFile.appendRecord()"; use "persister.saveNewReport()". When and if you find CSV to actually be a bottleneck, you can just write a new persister plugin. There are added benefits, too: you don't have to use a mock library in tests to make them faster, you just provide another persister (a sketch follows).
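A minimal sketch of such a persister for the meeting-reports case; all names here are invented for illustration:

```python
import csv

class CsvReportPersister(object):
    """One interchangeable back-end; a SQL-backed class with the same two
    methods could replace it later without touching the Flask views."""
    def __init__(self, path):
        self.path = path

    def save_new_report(self, row):
        with open(self.path, "ab") as f:   # "ab" keeps csv happy on Py2/Windows
            csv.writer(f).writerow(row)

    def all_reports(self):
        with open(self.path, "rb") as f:
            return list(csv.reader(f))

persister = CsvReportPersister("reports.csv")  # hypothetical file
persister.save_new_report(["2014-06-06", "weekly meeting", "all present"])
```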
2
3
0
I am using Flask to make a small webapp to manage a group project, in this website I need to manage attendances, and also meetings reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database.
Something wrong with using CSV as database for a webapp?
0.049958
1
0
3,024
24,072,413
2014-06-06T00:33:00.000
0
0
1
0
python,flask
24,072,542
1
false
1
0
When Run is clicked, it runs app.py on the Codebox VM, probably with something like python app.py. To see an update, you should first save your change (Ctrl-S), then stop the running app and run it again.
1
0
0
When I create a Python box, I run the command pip install -r requirements.txt, then I click the Run button on the side and I can see the existing app run. What exactly is happening when Run is clicked? I've updated the existing file app.py so that def hello(): returns something new; however, it seems like Codebox does not update and I still see "hello world". I'm trying to follow the Mega Flask Tutorial but haven't been able to make the Flask server return the correct value; it always returns "hello world".
How do I run flask on codebox IDE?
0
0
0
256
24,072,713
2014-06-06T01:16:00.000
4
0
0
0
python,filepath
24,072,745
2
false
1
0
If you are presenting a filename to a user for any reason, it's better if that filename follows the usual OS conventions. Windows has been able to use the / for path separators for as long as there have been paths - this was a DOS feature.
1
11
0
Currently I use os.path.join almost always in my django project for cross OS support; the only places where I don't currently use it are for template names and for URLs. So in situations where I want the path '/path/to/some/file.ext' I use os.path.join('path', 'to', 'some', 'file.ext'). However I just tested my project on windows to see whether that worked fine / was necessary and it seems windows will happily accept '/' or '\\' (or '\' when working outside of python), and as all UNIX systems all use '/' it seems like there is no reason to ever use '\\', in which case is it necessary to use os.path.join anywhere? Is there a situation in which adding a '/' or using posixpath will cause problems on certain operating systems (not including XP or below as they are no longer officially supported)? If not I think I will just use posixpath or adding a '/' for joining variables with other variables or variables with strings and not separate out string paths (so leave it as '/path/to/some/file.ext') unless there is another reason for me to not do that other than it breaking things. To avoid this being potentially closed as primarily-opinion based I would like to clarify that my specific question is whether not using os.path.join will ever cause a python program to not work as intended on a supported operating system.
Is os.path.join necessary?
0.379949
0
0
2,005
24,074,684
2014-06-06T05:19:00.000
-1
0
0
1
python,linux,audio,alsa,pulseaudio
24,119,286
2
false
0
0
I guess it depends on what you would like to do with it after you get it "into" Python. I would definitely look at the scikits.audiolab library. That's what you might use if you wanted to draw up spectrograms of whatever sound you are trying to process (I'm guessing that's what you want to do?).
1
0
0
I was wondering if it was possible to play a sound directly into a input from python. I am using linux, and with that I am using OSS, ALSA, and Pulseaudio
How to play a sound onto a input stream
-0.099668
0
0
3,682
24,077,041
2014-06-06T08:05:00.000
1
0
0
1
python,google-app-engine
24,088,125
1
true
1
0
Unfortunately there is not currently a well-supported way to do this. However, with the disclaimer that this is likely to break at some point in the future since it depends on internal implementation details, you can fetch the relevant _AE_Backup_Information and _AE_DatastoreAdmin_Operation entities from your datastore and inspect them for information regarding the backup. In particular, the _AE_DatastoreAdmin_Operation entity has the fields active_jobs, completed_jobs, and status.
1
0
0
I am taking a backup of the datastore using task queues. I want to check whether the backup completed successfully or not. I can detect the end of the backup job by checking the task queue, but how can I check whether the backup was successful or failed due to some error?
Google App Engine Check Success of backup programmatically
1.2
0
0
69
24,077,365
2014-06-06T08:27:00.000
1
0
0
1
python,windows
24,079,238
1
true
0
0
I guess a workaround for this would be to check the process's memory usage with a shell command. If the specific process does not matter, you could run a shell command to get the general system memory status. But this will only work for memory-hungry processes (a polling sketch follows).
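Instead of shelling out, the third-party psutil package (a swapped-in technique, not mentioned in the answer itself) can poll for the process directly; the process name below is a placeholder:

```python
import time
import psutil  # third-party: pip install psutil

def wait_for_exit(name):
    """Block until no running process is called `name` (psutil >= 2.0 API)."""
    while True:
        running = False
        for p in psutil.process_iter():
            try:
                if p.name() == name:
                    running = True
                    break
            except psutil.NoSuchProcess:
                pass  # the process died while we were iterating
        if not running:
            return
        time.sleep(5)

wait_for_exit("HandBrakeCLI.exe")  # hypothetical process name
print("conversion finished, continuing")
```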
1
1
0
In python, is there any way to check the progress of another windows application. That is to say for example, downloading a file in chrome, or converting a file in handbrake, Is there any way to get the current status of these processes. Specifically I want my script to wait until another program finishes a conversion, then continue.
Python, How to check windows application progress
1.2
0
0
81
24,081,061
2014-06-06T11:43:00.000
0
0
1
0
python,debugging,python-idle
24,083,884
1
false
0
0
Ensure that debugging is enabled in IDLE by clicking Debug -> Debugger on the menu bar; breakpoints are skipped if you are not in debug mode. It's also possible that the execution path never reaches your breakpoint. If that's the case, you should place a breakpoint earlier in your script.
1
0
0
When I set a breakpoint in a .py file in Python IDLE and run the debugger, the debugger bypasses the breakpoint and runs through the code without stopping at it. Can anyone help me with this issue?
Python IDLE debugger bypasses the breakpoint set?
0
0
0
796