Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
39,976,601
2016-10-11T11:49:00.000
1
0
0
0
python-2.7,ckan
40,041,730
1
true
0
0
If your dataset is private you will not be able to search it, so my advice is to make the dataset public so that your new fields are searchable by default. I understand that the system is designed for general purposes as a public Open Data Catalog. If your organization needs some datasets to stay hidden, I recommend checking out the different options for membership and seeing which fits best. You can alter the permissions, although adding new permission levels is not currently easy.
1
1
0
I am adding a new field in the ckan_dataset.json, and I want to be able to search by this field when I search in the default search box. The logical thing would be to include the field_name somewhere, but I'm not able to find where. I also know that it searches by title by default, so I have tried to find where it does that, but with no success.
How to search a field in CKAN
1.2
0
0
156
39,980,030
2016-10-11T14:46:00.000
3
0
1
0
python,class
39,993,932
2
true
0
0
You'll find quite a few similar examples - like len(obj) instead of obj.length(), hash(obj) instead of obj.hash(), isinstance(obj, cls) instead of obj.isinstance(cls). You may also have noticed that addition is spelled obj1 + obj2 instead of obj1.add(obj2), subtraction obj1 - obj2 instead of obj1.sub(obj2), and so on. The point is that some builtin "functions" should be considered operators rather than real functions, and they are supported by "__magic__" methods (__len__, __hash__, __add__, etc.). As for the "why", you'd have to ask GvR, but historical reasons set aside, it at least avoids a lot of namespace pollution and name clashes. How would you name the length of a "Line" or "Rectangle" class if length were already a "kind of, but not explicitly, reserved" name? And how should introspection understand that Rectangle.length() doesn't mean Rectangle is a sizeable sequence-like object? Using generic "operator" functions (note that proper operators also exist as functions, cf. the operator module) plus "__magic__" methods makes the intention clear and leaves normal names open for "user space" semantics. With regard to the "regular sentence structure for English", I have to say I don't really care - the very first programming language I learned was Apple's HyperScript (which later became AppleScript), and I quickly found it uselessly verbose. As for "I could imagine there are cases where you're not sure if the object is even a proper object, rather than just None": None is a "proper" object. Everything in Python (well, everything you can bind to a name) is a "proper" object.
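A minimal sketch (not part of the original answer) of how those builtin "operator" functions delegate to __magic__ methods; the Path class is made up purely for illustration:

```python
class Path:
    def __init__(self, *segments):
        self.segments = list(segments)

    def __len__(self):
        # len(path) delegates here
        return len(self.segments)

    def __add__(self, other):
        # path1 + path2 delegates here
        return Path(*(self.segments + other.segments))

p = Path("usr", "local") + Path("bin")
print(len(p))  # 3 -- calls Path.__len__
```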
2
1
0
When you need to access an object's attributes dynamically in Python, you can just use the builtin functions hasattr(object, attribute) or getattr(object, attribute). However, this seems like an odd order for the syntax to take. It's less readable and less intuitive, as it messes up the regular sentence structure for English: if hasattr(person, age) reads as "if has attribute Person age", whereas having it as a method of the object would be much more readable: if person.hasattr(age) reads as "if Person has attribute age". Is there a particular reason for not implementing it this way? I could imagine there are cases where you're not sure if the object is even a proper object, rather than just None, but surely in those cases of uncertainty you could just use the builtin function anyway for extra safety. Is there some other drawback or consideration I'm not thinking of that makes adding these not worth it?
Why don't objects have hasattr and getattr as attributes?
1.2
0
0
240
39,980,030
2016-10-11T14:46:00.000
1
0
1
0
python,class
39,982,252
2
false
0
0
It's part of the language design. I guess you'll find some docs about the more complicated thoughts behind it, but the key points are: You suggest using a method of an object in place of a builtin function that works on all objects - why should this function be specific to this object? Semantics: the getattr function works on objects, not as part of an object. Namespace: the methods of an object are defined by you, not by the language. Internal functions are of the form __getattr__, and you will find this function on your object ;-). getattr uses it internally, so you can even override it (if you know what you're doing).
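A hypothetical illustration of that last point: getattr() and hasattr() fall back to an object's __getattr__ hook when normal attribute lookup fails (the Config class below is invented for the example):

```python
class Config:
    def __init__(self, values):
        self._values = values

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

cfg = Config({"debug": True})
print(getattr(cfg, "debug"))    # True  -- routed through __getattr__
print(hasattr(cfg, "timeout"))  # False -- the AttributeError is swallowed by hasattr
```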
2
1
0
When you need to access an object's attributes dynamically in Python, you can just use the builtin functions hasattr(object, attribute) or getattr(object, attribute). However, this seems like an odd order for the syntax to take. It's less readable and less intuitive, as it messes up the regular sentence structure for English: if hasattr(person, age) reads as "if has attribute Person age", whereas having it as a method of the object would be much more readable: if person.hasattr(age) reads as "if Person has attribute age". Is there a particular reason for not implementing it this way? I could imagine there are cases where you're not sure if the object is even a proper object, rather than just None, but surely in those cases of uncertainty you could just use the builtin function anyway for extra safety. Is there some other drawback or consideration I'm not thinking of that makes adding these not worth it?
Why don't objects have hasattr and getattr as attributes?
0.099668
0
0
240
39,981,292
2016-10-11T15:42:00.000
1
0
0
0
python,django,wsgi,django-settings
39,981,423
2
true
1
0
Just set DJANGO_SETTINGS_MODULE in the environment variables to your desired config file. That won't require you to change any of the other services' config files, and you don't even need to change the Django settings files.
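A hedged sketch of why this works: os.environ.setdefault() only supplies a fallback, so an exported DJANGO_SETTINGS_MODULE takes precedence and wsgi.py can stay unchanged (the module names below are the ones from the question):

```python
import os

# In the shell before starting the process, e.g.:
#   export DJANGO_SETTINGS_MODULE=core.dev_settings   (dev)
#   export DJANGO_SETTINGS_MODULE=core.stg_settings   (staging)

# wsgi.py can stay as it is -- setdefault() is a no-op when the variable already exists:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')
```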
1
0
0
I have already searched the web on this doubt, but the answers don't really seem to apply to my case. I have 3 different config files - Dev, Staging, Prod (of course). I want to modularize settings properly without repetition, so I have made base_settings.py and I am importing it into dev_settings.py, stg_settings.py, etc. Problem - how do I invoke the scripts on each env properly with minimal changes? Right now, I'm doing this (taking the dev env as an example): python manage.py runserver --settings=core.dev_settings. This works so far, but I am not convinced how good a workaround this is, because wsgi.py and a couple of other services have os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings'). I am looking to do something without changing the config files of other services. Thank you everyone in advance. PS - I've tried to be as clear as possible, but please excuse me if anything is unclear.
How to configure Django settings for different environments in a modular way?
1.2
0
0
780
39,982,491
2016-10-11T16:47:00.000
1
1
0
1
python,linux,python-2.7,permissions,permission-denied
39,982,764
2
true
0
0
drwxr-xr-x means that: 1] the directory's owner can list its contents, create new files in it (elevated access), etc.; 2] members of the directory's group and other users can also list its contents and have simple access to it. So in fact you don't have to change the directory's permissions unless you know what you are doing; you could just run your script with sudo, like sudo python my_script.py.
1
0
0
I'm having an error for what seems to be a permissions problem when trying to create a zip file in a specified folder. testfolder has the following permissions: drwxr-xr-x 193 nobody nobody When trying to launch the following command in python I get the following: p= subprocess.Popen(['7z','a','-pinfected','-y','/home/John/testfolder/yada.zip'] + ['test.txt'],stdout=PIPE.subprocess,stderr=PIPE.subprocess) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/local/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied Any idea what is wrong with the permissions? I'm pretty new to this; my Python runs from the /usr/local/bin path
Using 7zip with python to create a password protected file in a given path
1.2
0
0
589
39,983,695
2016-10-11T18:00:00.000
-3
0
1
0
python,boolean-expression
60,946,294
8
false
0
0
Falsy means something empty, like an empty list, tuple, or string - any data type holding an empty value - or None. Truthy means everything else: anything that is not falsy is truthy.
2
174
0
I just learned there are truthy and falsy values in python which are different from the normal True and False. Can someone please explain in depth what truthy and falsy values are? Where should I use them? What is the difference between truthy and True values and falsy and False values?
What is Truthy and Falsy? How is it different from True and False?
-0.07486
0
0
81,630
39,983,695
2016-10-11T18:00:00.000
3
0
1
0
python,boolean-expression
68,893,814
8
false
0
0
Any object in Python can be tested for its truth value. It can be used in an if or while condition or as operand of the Boolean operations. The following values are considered False: None False zero of any numeric type, for example, 0, 0L, 0.0, 0j. any empty sequence, for example, '', (), []. any empty mapping, for example, {}. instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False. All other values are considered True -- thus objects of many types are always true. Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated.
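A few hedged examples of the rules listed above (note that in Python 3 the hook is named __bool__ rather than the Python 2 name __nonzero__):

```python
# Falsy values
print(bool(None), bool(0), bool(0.0), bool(''), bool([]), bool({}))  # all False
# Truthy values
print(bool(42), bool('a'), bool([0]))                                # all True

class Basket:
    def __init__(self, items):
        self.items = items
    def __len__(self):
        # with no __bool__ defined, truth testing falls back to __len__
        return len(self.items)

if not Basket([]):
    print("empty baskets are falsy")
```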
2
174
0
I just learned there are truthy and falsy values in python which are different from the normal True and False. Can someone please explain in depth what truthy and falsy values are? Where should I use them? What is the difference between truthy and True values and falsy and False values?
What is Truthy and Falsy? How is it different from True and False?
0.07486
0
0
81,630
39,984,082
2016-10-11T18:22:00.000
2
0
1
0
python,class,oop,encapsulation
39,984,839
2
true
0
0
The difference between a class and a function should be that a class has state. Some classes don't have state, but this is rarely a good idea (I'm sure there are exceptions - abstract base classes (ABCs), for instance, but I'm not sure if they count), and some functions do have state, but this is rarely a good idea (caching or instrumentation might be exceptions). If you want a URL as input and, say, a dict as output, and then you are done with that website, there's no reason to have a class. Just have a function that takes a URL and returns a dict. Stateless functions are simpler abstractions than classes, so all other things being equal, prefer them. However, very often there may be intermediate state involved. For instance, maybe you are scraping a family of pages rooted in a base URL, and it's too expensive to do all this eagerly. Maybe then what you want is a class that takes the root URL in its constructor. It then has some methods for querying which child URLs it can follow down, and methods for ordering subsequent scraping of children, which might be stored in nested data structures. And of course, if your task is reasonably complicated, you may well have layers with functions using classes, or classes calling functions. But persistent state is a good indicator of whether the immediate task should be written as a class or a set of functions. Edit: just to close the loop and come round to the original question: no, I would say it's not pythonesque to wrap all functions in classes. Free functions are just fine in Python; it all depends what's appropriate. Also, the term pythonesque is not very pythonic ;-)
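A rough sketch of that contrast, with made-up names (fetch_page, SiteScraper) purely for illustration:

```python
import urllib.request

def fetch_page(url):
    """Stateless: URL in, text out -- a plain function is enough."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode('utf-8', errors='replace')

class SiteScraper:
    """Stateful: remembers the root URL and which pages were already visited."""
    def __init__(self, root_url):
        self.root_url = root_url
        self.visited = set()

    def scrape(self, url):
        if url in self.visited:
            return None
        self.visited.add(url)
        return fetch_page(url)
```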
1
1
0
So, I'm developing a simple web-scraper in Python right now, but I had a question on how to structure my code. In other programming languages (especially compiled languages like C++ and C#), I've been in the habit of wrapping all of my functions in classes. I.e. in my web-scraping example I would have a class called something like "WebScraper" maybe and then hold all of the functions within that class. I might even go so far as to create a second helper class like "WebScraperManager" if I needed to instantiate multiple instances of the original "WebScraper" class. This leads me to my current question, though. Would similar logic hold in the current example? Or would I simply define a WebScraper.py file, without a wrapper class inside that file, and then just import the functions as I needed them into some main.py file?
Is It "Python-esque" to wrap all functions in a class?
1.2
0
0
1,125
39,984,611
2016-10-11T18:51:00.000
7
0
1
1
python,anaconda,conda,portability,miniconda
51,868,203
3
false
0
0
Since you mentioned WinPython as an option, but said you dismissed it for being 'too large': WinPython now includes a 'Zero' version with each release that has nearly all of the bloat removed (equivalent to the relationship between Miniconda and Anaconda). I believe the folder containing the WinPython-64bit v3.6.3.0Zero release clocked in around 50-100MB.
1
28
0
I'd like to deploy Python to non-programmers in my organization such that the install process consists entirely of syncing a directory from Perforce and maybe running one batch file that sets up environment variables. Is it possible to package up Miniconda in such a way it can be "installed" just by copying down a directory? What does its installer do? The reason for this is that I'd like to automate certain tasks for our artists by providing them with Python scripts they can run from the commandline. But I need to get the interpreter onto their machines without having to run any kind of installer or uninstaller, or any process that can fail in a non-idempotent way. A batch file that sets up env vars is fine, because it is idempotent. An installer that can fail partway through and put the workstation into a state requiring intervention to fix is not. In particular, adding a library to everyone's install should consist of my using conda on my desk, checking the ensuing directory into P4, and then letting artists pick it up automatically with their next p4 sync. I looked at WinPython, but at 1.4GB it is too large. Portable Python is defunct. We are exclusively a Windows shop, so do not need Linux- or Mac-portable solutions.
Can Anaconda be packaged for a portable zero-configuration install?
1
0
0
32,449
39,986,952
2016-10-11T21:21:00.000
0
0
1
0
python,anaconda,software-distribution
39,987,141
2
false
0
0
There are many options: Create a pip repository in the offline network. Deploy your project with its dependencies. Use setuptools to create a setup.py file for easy installation. Use py2exe to create an executable instead of a python program.
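As a hedged illustration of the setuptools option, a minimal setup.py might look like this (the package name and dependency list are placeholders, not from the question):

```python
from setuptools import setup, find_packages

setup(
    name='mylibrary',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['requests'],  # whatever the project actually needs
)
```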
1
0
0
Package managers like conda, pip and their online repositories make distributing packages easy and robust. But I am looking for ways to distribute to users that want to install and run my library on machines that are deliberately disconnected from internet for security purposes. I am to assume these computers don't have Python or any other packages or package managers like conda installed. I am also looking for recommended workflows for bundling my dependencies with the package as well.
How to distribute Python libraries to users without internet
0
0
0
515
39,989,680
2016-10-12T02:41:00.000
0
0
0
0
python
39,989,752
1
false
0
0
I think you're going to have to extract your own snippets by opening and reading the url in the search result.
1
1
0
I am now trying the python module google, which only returns the URLs from the search results. I want to have the snippets as well - how could I do that? (The Google web search API is deprecated.)
How can I get the google search snippets using Python?
0
0
1
167
39,995,380
2016-10-12T09:40:00.000
0
0
1
1
python,windows,anaconda
39,995,712
8
false
0
0
Anaconda should add itself to the PATH variable so you can start any .py file with "python yourpythonfile.py" and it should work from any folder. Alternatively download pycharm community edition, open your python file there and run it. Make sure to have python.exe added as interpreter in the settings.
4
29
0
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
0
0
0
131,482
39,995,380
2016-10-12T09:40:00.000
4
0
1
1
python,windows,anaconda
54,141,774
8
false
0
0
Launch JupyterLab from Anaconda and perform the following steps in JupyterLab: click the folder icon in the side menu, create a new "Text File", rename untitled.txt to untitled.py (the name of the open file changes as well), start up the "Terminal" (on Windows, PowerShell starts up), and execute the command python untitled.py.
4
29
0
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
0.099668
0
0
131,482
39,995,380
2016-10-12T09:40:00.000
0
0
1
1
python,windows,anaconda
68,916,916
8
false
0
0
If you get the following error: can't open file 'command.py': [Errno 2] No such file or directory then follow these steps to fix it: Check that you are in the correct directory, where the Python file is. If you are not in the correct directory, change the current working directory with cd path, for instance: cd F:\COURSE\Files. Now that you are in the directory where your .py file is, run it with the command python app.py.
4
29
0
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
0
0
0
131,482
39,995,380
2016-10-12T09:40:00.000
2
0
1
1
python,windows,anaconda
56,315,497
8
false
0
0
Right click on a .py file and choose 'open with'. Scroll down through the list of applications and click something like 'use a different program'. Navigate to C:\Users\<username>\AppData\Local\Continuum\anaconda3, click on python.exe and then click on 'ok' or 'open'. Now when you double click on any .py file it will run it through Anaconda's interpreter and therefore run the python code. I presume if you run it through the command line the same would apply, but perhaps someone could correct me?
4
29
0
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
0.049958
0
0
131,482
39,996,098
2016-10-12T10:15:00.000
0
0
1
0
python,azure,azure-worker-roles,azure-cloud-services
39,997,193
1
false
0
0
Likely many ways to solve your problem, but specifically from a worker role standpoint: Worker (and web) roles have definable startup tasks, allowing you to execute code/script during role startup. This allows you to do things like copying content from blob storage to local disk on your role instance. In this scenario, the blob where your code is stored acts like a shared disk.
1
0
0
I have a single Azure Cloud Service as a project in Visual Studio 2015, which contains 2 Python Worker Roles. They each have their own folder with source code files, and they are deployed to separate VMs. However, they both rely on some identical pieces of code. Right now my solution is to just include a copy of the code in each worker role, but then I have to remember to apply changes to both worker roles in case of a bug fix. I have tried making a folder on the project level, containing the shared files, but when I add them to the worker role, VS just copies the files. Is there a way to implement something like a shared folder, which only copies the files upon building the project?
Share python source code files between Azure Worker Roles in the same project
0
0
0
34
39,997,176
2016-10-12T11:14:00.000
9
0
0
0
python,testing,xpath,automated-tests,robotframework
40,018,399
3
true
0
0
Since you are locating the element using XPath, I assume that you're using Selenium2Library. In that library there is a keyword named Page Should Contain Element, which takes a selector argument, for example the XPath that defines your element. The keyword fails if the page does not contain the specified element. For the condition, use this: ${Result}= Page Should Contain Element ${Xpath} followed by Run Keyword Unless '${RESULT}'=='PASS' Keyword args*. You can also use another keyword: Xpath Should Match X Times.
1
4
0
I'm wondering, I'd love to find or write condition to check if some element exists. If it does than I want to execute body of IF condition. If it doesn't exist than to execute body of ELSE. Is there some condition like this or is it necessary to write by myself somehow?
Robot Framework - check if element defined by xpath exists
1.2
0
1
22,789
40,000,198
2016-10-12T13:40:00.000
0
0
0
0
python,shutil
47,220,051
1
false
0
0
Run the python script using Windows Task Scheduler; there it's possible to run the code using different credentials.
1
2
0
I am using the shutil module to delete a folder of windows7, however I would need Python to delete the folder using different credentials from the ones currently using the machine. Is it possible to set it to somehow the delete with different credentials?
Drop/delete folder with different credentials
0
0
0
91
40,001,836
2016-10-12T14:55:00.000
1
0
1
0
python,python-3.5,projects-and-solutions,spyder
44,144,280
1
false
0
0
There is an easy solution to this, at least for simple cases and as of May 2017 (Spyder 3.1.2): Create a new empty project in Spyder 3. The new project directory will then have a subdirectory named ".spyproject" with these files in it: codestyle.ini, encoding.ini, vcs.ini, workspace.ini. Copy the entire .spyproject subdirectory to the old Spyder 2 project directory. This allows Spyder 3 to at least see the source files in the old project directory, even if all the settings don't track. I only have dumb use cases (e.g. no "related projects") in my Spyder 2 projects. But this way I don't have to generate 75 new projects and manually import the old files.
1
1
0
The past few months, I've been working on a project using Spyder2 IDE with Python 2.7. However, now I'm being instructed to look into ways of translating the program from Python 2.7 to Python 3.5, which means I'm using Anaconda3 now instead of Anaconda2, and that means I'm using Spyder3 as the default IDE instead of Spyder2. I want to be able to import the entire project, but Spyder3 does not recognize it as such. So how to I import a Spyder2 Project into the Spyder3 IDE?
Import Project from Spyder2 to Spyder3
0.197375
0
0
505
40,002,232
2016-10-12T15:14:00.000
1
0
0
0
python,scikit-learn
44,608,692
2
false
0
0
If you use fit only on the training and transform on the test data, you won't get the correct result. When using fit_transform on the training data, it means that the machine is learning from the parameters in the feature space and also transforming (scaling) the training data. On the other hand, you should only use transform on the test data to scale it according to the parameters learned from the training data.
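A hedged sketch of the split described above, assuming scikit-learn's CountVectorizer:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["the cat sat", "the dog barked"]
test_docs = ["the cat barked"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_docs)  # learn the vocabulary AND transform
X_test = vectorizer.transform(test_docs)        # reuse the learned vocabulary only

print(X_train.shape, X_test.shape)  # both matrices share the same columns
```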
1
0
1
As the title says, I am using fit_transform with the CountVectorizer on the training data and then I am using transform only on the testing data. Will this give me the same result as using fit only on the training data and transform only on the testing data?
fit_transform with the training data and transform with the testing
0.099668
0
0
2,610
40,003,821
2016-10-12T16:34:00.000
6
0
1
0
python,linux,python-3.x,pyqt,pyinstaller
40,005,728
2
true
0
0
The binary file is in the dist folder, not the build folder.
1
4
0
I am using Python 3.5.2 , PyQt 5.7 , PyInstaller 3.2 and I'm in Linux I can compile file.py with : pyinstaller file.py but when I run the binary file in Build folder it returns: Error loading Python lib '/home/arash/build/file/libpython3.5m.so.1.0': /home/arash/build/file/libpython3.5m.so.1.0: cannot open shared object file: No such file or directory Where is the python library (.so file) to copy inside binary file or PyInstaller flag for copy library file?
How to compile python with PyInstaller in Linux
1.2
0
0
2,044
40,004,334
2016-10-12T17:01:00.000
0
0
0
0
python-2.7,pandas
40,025,421
1
false
0
0
I think this is a misleading question/thought process. If you think of data in strictly 2 dimensions then a regression line on a scatter plot makes sense. But let's say you have 5 dimensions of data you are plotting in your scatter matrix. In this case the regression for each pair of dimensions is not an accurate representation of the global regression. I would be wary of presenting that to anyone, as I can easily see where it could create confusion. That being said, if you don't care about a regression across all of your dimensions, then you could write your own function to do this. A quick walkthrough of the steps might be: 1. identify the number of dimensions N; 2. create the figure; 3. use a double for loop over N - the first walks down the rows, the second walks across the columns; 4. at each position add a subplot, calculate the regression (if it is not the kde/hist position), and plot the scatter cloud and regression line, or the kde/hist.
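One possible sketch of such a function, assuming a modern pandas import path for scatter_matrix (in a pandas scatter matrix, axes[i, j] plots column j on x against column i on y):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

df = pd.DataFrame(np.random.rand(100, 3), columns=['a', 'b', 'c'])
axes = scatter_matrix(df, diagonal='hist')

for i in range(df.shape[1]):
    for j in range(df.shape[1]):
        if i == j:
            continue  # the diagonal holds the histogram, skip it
        slope, intercept = np.polyfit(df.iloc[:, j], df.iloc[:, i], 1)
        xs = np.linspace(df.iloc[:, j].min(), df.iloc[:, j].max(), 20)
        axes[i, j].plot(xs, slope * xs + intercept, linewidth=1)

plt.show()
```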
1
0
1
I'm using scatter_matrix for correlation visualization and calculating correlation values using corr(). Is it possible to have the scatter_matrix visualization draw the regression line in the scatter plots?
Putting a Regression Line When Using Pandas scatter_matrix
0
0
0
674
40,007,759
2016-10-12T20:27:00.000
0
0
1
0
python,image-processing
40,007,828
1
true
0
0
You can search for these libraries: dlib, PIL (Pillow), OpenCV and scikit-image. These are image-processing libraries for Python. Hope it helps.
1
0
1
I am starting a new project with a friend of mine. We want to design a system that would alert the driver if the car is diverting from its original path and it is dangerous. So, in a nutshell, we have to design a real-time algorithm that would take pictures from the camera and process them. All of this will be done in Python. I was wondering if anyone has any advice for us, or could maybe point out some things that we have to consider. Cheers!
Digital Image Processing via Python
1.2
0
0
124
40,008,788
2016-10-12T21:38:00.000
1
0
0
0
python,django,django-models
40,009,025
2
false
1
0
I think this is likely best accomplished by writing a server-side python script and adding a cronjob
1
1
0
How do you schedule updating the contents of a database table in Django based on time of day. For example every 5 minutes Django will call a REST api to update contents of a table.
Schedule table updates in Django
0.099668
0
0
752
40,010,657
2016-10-13T01:00:00.000
0
1
0
1
php,python,apache
40,065,573
2
false
1
0
It looks like I could use suEXEC. It is an Apache module that is not installed by default because they really don't want you to use it. It can be installed using apt-get. That said, I found the real answer to my issue: heyu uses the serial ports to do its work. I needed to add www-data to the dialout group and then reboot. This circumvented the need to run my code as me (as I had already added myself to the dialout group a long time ago) in favor of properly changing the permissions. Thanks.
1
0
0
I am using Ubuntu server 12.04 to run Apache2 web server. I am hosting several webpages, and most are working fine. One page is running a cgi script which mostly works (I have the python code working outside Apache building the html code nicely.) However, I am calling a home automation program (heyu) and it is returning different answers then when I run it in my user account. Is there a way I can... 1 call the heyu program from my python script as a specific user, (me) and leave the rest of the python code and cgi code alone? 2, configure apache2 to run the cgi code, as a whole, as me? I would like to leave all the other pages unchanged. Maybe using the sites_available part. 3, at least determine which user is running the cgi code so maybe I can get heyu to be OK with that user. Thanks, Mark.
Apache2 server run script as specific user
0
0
0
206
40,011,896
2016-10-13T03:36:00.000
1
0
0
0
python,nlp,nltk,stanford-nlp
57,003,384
7
false
0
0
NLTK can be used for the learning phase and for performing natural language processing from scratch at a basic level. Stanford NLP gives you high-level flexibility to get tasks done in the fastest and easiest way. If you want speed and production use, you can go for Stanford NLP.
2
30
1
I have recently started to use NLTK toolkit for creating few solutions using Python. I hear a lot of community activity regarding using Stanford NLP. Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw some light on few basic differences or even more in detail. Can Stanford NLP be used using Python?
NLTK vs Stanford NLP
0.028564
0
0
15,755
40,011,896
2016-10-13T03:36:00.000
1
0
0
0
python,nlp,nltk,stanford-nlp
50,968,392
7
false
0
0
I would add to this answer that if you are looking to parse date/time events StanfordCoreNLP contains SuTime which is the best datetime parser available. The support for arbitrary texts like 'Next Monday afternoon' is not present in any other package.
2
30
1
I have recently started to use NLTK toolkit for creating few solutions using Python. I hear a lot of community activity regarding using Stanford NLP. Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw some light on few basic differences or even more in detail. Can Stanford NLP be used using Python?
NLTK vs Stanford NLP
0.028564
0
0
15,755
40,012,153
2016-10-13T04:10:00.000
0
0
0
0
python,url,web-applications
48,018,208
2
true
1
0
I just used the urlencode filter with the title, something like {{ title|urlencode }}.
1
0
0
I am new to Python and am trying to build a blog-style web app. My major problem is that I want the title of each post to be its link, which I would store in my database. I am using the serial number of each post as the URL, but it doesn't meet my needs. Any help is appreciated.
url encoding in python and sqlite web app
1.2
0
0
98
40,012,264
2016-10-13T04:25:00.000
0
0
1
0
list,python-2.7
40,012,356
3
false
0
0
Use a for loop to read the numbers from the list. Create a variable and assign the current number to it, read another number and compare them using an if statement. If they are the same, add them to a running sum (e.g. sameNumSum += number); otherwise multiply them into a running product. Create and initialize these two variables before the for loop. I just gave you the algorithm; you can turn it into your own code. Hope that helps.
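For reference, one possible sketch using collections.Counter - a different route than the loop described above, but the same idea of separating repeated from unique values:

```python
from collections import Counter

numbers = [2, 2, 4, 4, 5, 7, 8, 9, 9]
counts = Counter(numbers)

# sum every occurrence of values that appear more than once: 2+2+4+4+9+9 = 30
dup_sum = sum(n * c for n, c in counts.items() if c > 1)

# multiply the values that appear exactly once: 5*7*8 = 280
product = 1
for n, c in counts.items():
    if c == 1:
        product *= n

print(dup_sum, product)  # 30 280
```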
1
0
0
I am new to Python. I am trying to print the sum of all duplicate numbers and the product of non-duplicate numbers from a Python list. For example, list = [2,2,4,4,5,7,8,9,9]. What I want is sum = 2+2+4+4+9+9 and product = 5*7*8.
print sum of duplicate numbers and product of non duplicate numbers from the list
0
0
0
79
40,013,849
2016-10-13T06:34:00.000
0
1
0
0
php,python,web,messagebox
40,015,293
1
false
0
0
I think you will need to read about pub/sub for messaging services. For PHP, you can use libraries such as Redis. So, for example, user1 subscribes to topic1; whenever any user publishes to topic1, user1 will be notified, and you can implement what happens for user1 from there.
1
0
0
I am running a website where user can send in-site message (no instantaneity required) to other user, and the receiver will get a notification about the message. Now I am using a simple system to implement that, detail below. Table Message: id content receiver sender Table User: some info notification some info When a User A send message to User B, a record will be add to Message and the B.notification will increase by 1. When B open the message box the notification will decrease to 0. It's simple but does well. I wonder how you/company implement message system like that. No need to care about UE(like confirm which message is read by user), just the struct implement. Thank a lot :D
How to implement a message system?
0
0
0
123
40,015,869
2016-10-13T08:25:00.000
-1
0
0
0
python,xml,http,python-requests,generator
40,018,118
2
false
0
0
The only way of connecting a data producer that requires a push interface for its data sink with a data consumer that requires a pull interface for its data source is through an intermediate buffer. Such a system can be operated only by running the producer and the consumer in "parallel" - the producer fills the buffer and the consumer reads from it, each of them being suspended as necessary. Such a parallelism can be simulated with cooperative multitasking, where the producer yields the control to the consumer when the buffer is full, and the consumer returns the control to the producer when the buffer gets empty. By taking the generator approach you will be building a custom-tailored cooperative multitasking solution for your case, which will hardly end up being simpler compared to the easy pipe-based approach, where the responsibility of scheduling the producer and the consumer is entirely with the OS.
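For what it's worth, requests does accept an iterator or generator as the data argument and sends it with chunked transfer encoding, which is one concrete form of the producer/consumer coupling discussed above; a minimal sketch (the URL is a placeholder):

```python
import requests

def xml_chunks():
    # a stand-in for the real producer that yields POST data piece by piece
    yield b"<items>"
    for i in range(3):
        yield ("<item>%d</item>" % i).encode("ascii")
    yield b"</items>"

resp = requests.post("http://example.com/upload", data=xml_chunks())
print(resp.status_code)
```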
1
6
0
I'm using the Python requests library to send a POST request. The part of the program that produces the POST data can write into an arbitrary file-like object (output stream). How can I make these two parts fit? I would have expected that requests provides a streaming interface for this use case, but it seems it doesn't. It only accepts as data argument a file-like object from which it reads. It doesn't provide a file-like object into which I can write. Is this a fundamental issue with the Python HTTP libraries? Ideas so far: It seems that the simplest solution is to fork() and to let the requests library communicate with the POST data producer through a pipe. Is there a better way? Alternatively, I could try to complicate the POST data producer. However, that one is parsing one XML stream (from stdin) and producing a new XML stream to be used as POST data. Then I have the same problem in reverse: the XML serializer libraries want to write into a file-like object, and I'm not aware of any possibility that an XML serializer provides a file-like object from which others can read. I'm also aware that the cleanest, classic solution to this is coroutines, which are somewhat available in Python through generators (yield). The POST data could be streamed through (yield) instead of a file-like object and use a pull-parser. However, is it possible to make requests accept an iterator for POST data? And is there an XML serializer that can readily be used in combination with yield? Or, are there any wrapper objects that turn writing into a file-like object into a generator, and/or provide a file-like object that wraps an iterator?
How to stream POST data into Python requests?
-0.099668
0
1
10,296
40,019,042
2016-10-13T10:55:00.000
2
1
0
1
php,python,ssh,centos,exec
40,019,200
1
true
0
0
Perhaps it's caused by buffering of the output. Try adding the -u option to your Python command - this forces stdout, stdin and stderr to be unbuffered.
1
0
0
I am connecting to a server through PHP SSH and then using exec to run a python program on that server. If I connect to that server through PuTTY and execute the same command through the command line, I get a result like: Evaluating.... Connecting.... Retrieving data.... 1) Statement 1 2) Statement 2 . . . N) Statement N The Python program was written by somebody else... When I connect through SSH in PHP, I can execute $ssh->exec("ls") and get the full results, exactly as on the server command line. But when I try $ssh->exec("python myscript.py -s statement 0 0 0"); I can't get the full results; I just get a random line as output. In general, if somebody has experienced the same issue and solved it, please let me know. Thanks
PHP exec command is not returning full data from Python script
1.2
0
0
142
40,019,315
2016-10-13T11:07:00.000
0
0
0
0
python,opencv,image-processing
40,022,760
1
false
0
1
I'm not sure I understand exactly what you want, but to get bounding boxes around words in an image, I would do this: Apply processing to get a good thresholding: only text, background in black, text in white. This step depends on the type and quality of your image. Compute the sum of each line. The sum should be different from 0 where there is text, and all lines in the space between text lines should be null (you can set a threshold on this value if there is some noise). You can find the top/bottom line for each text line. For each text line found in step 2, compute the sum of each column. As in step 2, columns with words should be different from 0. You can find all spaces between words and letters. Remove all spaces which are too small to be a space between two words. Congratulations, you have the top/bottom lines and first/last columns of each word.
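A hedged numpy sketch of the row/column projection idea, assuming binary is a thresholded image with text as white (non-zero) on a black background; merging gaps that are too small to separate words (the last step above) is left out for brevity:

```python
import numpy as np

def find_runs(profile):
    """Return (start, end) index pairs where the 1-D profile is non-zero."""
    nonzero = profile > 0
    runs, start = [], None
    for i, v in enumerate(nonzero):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(nonzero)))
    return runs

def word_boxes(binary):
    boxes = []
    for top, bottom in find_runs(binary.sum(axis=1)):    # text lines (row projection)
        line = binary[top:bottom, :]
        for left, right in find_runs(line.sum(axis=0)):  # words/letters (column projection)
            boxes.append((left, top, right, bottom))
    return boxes
```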
1
0
0
I want to compare two screenshots containing text. Basically both the screenshots contains some pretty formatted text. I want to compare if the same formatting being reflected in both the pictures as well as same text appearing at same location in both images. How I am doing it right now is - Apply bilateral filters to remove the underlines of text. Apply threshold with value 180 as min value and clear them out Apply Gaussian blur on the image to remove the unfilled space between the characters. Apply threshold again with value 250 as min value. Compute contours in the images Draw rectangle bounding box around contours use O(n^2) algo to find out max overlapped rectangle and compare text within it. However the problem is the contours appearing in both the images are different, i.e. in one of the image number of contours are 38 while other contains 53. I want to have a generic solution and don't want to depend upon the image content. However one thing for sure is the image is containing a well formatted text. Thanks
putting bounding box around text in a image
0
0
0
1,329
40,020,767
2016-10-13T12:17:00.000
2
0
0
0
python,apache-spark,ibm-cloud,ibm-cloud-plugin
40,021,035
1
true
0
0
In a Python notebook: !pip install <package> and then import <package>
1
0
1
1) I have Spark on Bluemix platform, how do I add a library there ? I can see the preloaded libraries but cant add a library that I want. Any command line argument that will install a library? pip install --package is not working there 2) I have Spark and Mongo DB running, but I am not able to connect both of them. con ='mongodb://admin:ITCW....ssl=true' ssl1 ="LS0tLS ....." client = MongoClient(con,ssl=True) db = client.mongo11 collection = db.mongo11 ff=db.sammy.find() Error I am getting is : SSL handshake failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Add a library in Spark in Bluemix & connect MongoDB , Spark together
1.2
1
0
106
40,022,937
2016-10-13T13:50:00.000
0
0
0
0
python,video,youtube,video-streaming
40,024,596
1
false
0
0
If what you are measuring is the total time the user spent watching the video (including stalls/interruptions) then your strategy would work. However, if you are looking to measure the video duration, then simply counting the number of seconds since "start" was pressed isn't very accurate. While your wall clock is still running, the video playback could jitter or even stall for a number of reasons. Slow network (buffering) Decoding/rendering delay. For instance, if you are trying to play a high-resolution video on a low-end device. Probably many other...
1
0
0
I have a task connected with detecting the time spent watching a video. If I watch a video, for example https://www.youtube.com/watch?v=jkaMiaRLgvY, is it possible to get the number of seconds that pass from the moment the start button is pressed until stop?
Python: count time of watching video
0
0
0
513
40,023,005
2016-10-13T13:54:00.000
0
0
0
0
python,elasticsearch,profiler
40,023,342
1
true
0
0
adding profile="true" to the body did the trick. In my opinion this should be an argument like size etc in the search method of the Elasticsearch class
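A hedged sketch of that workaround with the Python client (the index name and query are placeholders):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()
resp = es.search(index="my-index", body={
    "profile": True,  # ask Elasticsearch to return profiling information
    "query": {"match": {"title": "python"}},
})
print(resp["profile"])  # per-shard timing breakdown
```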
1
1
0
Elasticsearch has a very useful feature called profile API. That is you can get very useful information on the queries performed. I am using the python elasticsearch library to perform the queries and want to be able to get those information back but I don't see anywhere in the docs that this is possible. Have you managed to do it someway?
Elasticsearch profile api in python library
1.2
0
1
223
40,023,987
2016-10-13T14:35:00.000
2
0
1
0
python,monkeypatching
40,024,979
2
false
0
0
I suppose on some grammatical level they may be equivalent. However, decorators are applied at the time a function or class is defined, and monkeypatching modifies an object at runtime, making them very different both in spirit and in actual use.
2
2
0
Recently I was reading about the monkey patching technique and wondered whether decorators in Python can be said to be an example of it.
Is decorators in python example of monkey patching technique?
0.197375
0
0
1,096
40,023,987
2016-10-13T14:35:00.000
5
0
1
0
python,monkeypatching
40,024,454
2
true
0
0
Decorator = a function that takes a function as an argument and returns a function. Monkey patching = replacing the value of a field on an object with a different value (not necessarily a function). In the case of functions, monkey patching can be performed via a decorator, so I guess a decorator might be thought of as an example of monkey patching. However, monkey patching commonly refers to altering the behaviour of a 3rd party library, and in that case decorators are less useful.
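A small side-by-side illustration of the two ideas (an invented example, not from the answer):

```python
# Decorator: applied where the function is defined.
def shout(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello " + name

print(greet("world"))  # HELLO WORLD

# Monkey patching: replacing an attribute on an existing object at runtime.
import math
original_sqrt = math.sqrt
math.sqrt = lambda x: round(original_sqrt(x), 2)  # swap in a replacement
print(math.sqrt(2))                               # 1.41 -- patched behaviour
math.sqrt = original_sqrt                         # restore the original
```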
2
2
0
Recently I was reading about the monkey patching technique and wondered whether decorators in Python can be said to be an example of it.
Is decorators in python example of monkey patching technique?
1.2
0
0
1,096
40,033,066
2016-10-14T00:21:00.000
2
0
0
1
python,linux
40,033,097
1
true
0
0
You can set up the script to run via cron, configuring time as @reboot With python scripts, you will not need to compile it. You might need to install it, depending on what assumptions your script makes about its environment.
1
2
0
I've been learning Python for a project required for work. We are starting up a new server that will be running linux, and need a python script to run that will monitor a folder and handle files when they are placed in the folder. I have the python "app" working, but I'm having a hard time finding how to make this script run when the server is started. I know it's something simple, but my linux knowledge falls short here. Secondary question: As I understand it I don't need to compile or install this application, basically just call the start script to run it. Is that correct?
Run a python application/script on startup using Linux
1.2
0
0
87
40,033,539
2016-10-14T01:30:00.000
1
0
0
0
python,signals,glade
57,244,185
2
false
0
1
I know this is an old question, but for future reference, for Python3 and Gtk3 what worked for me was: myTree.set_property('activate-on-single-click', True)
1
2
0
I am creating a graphical interface with Glade that contains a treeview. I want to have a button that is enabled by a single click on a row of the treeview. I am using row-activated, but when I activate a row for the first time I have to double-click the row. Which signal should I use to detect the click so the row is activated with a single click?
Tree view row activated with ONE click
0.099668
0
0
721
40,034,010
2016-10-14T02:33:00.000
1
0
1
0
python,ajax,asynchronous
40,034,220
1
true
1
0
Asynchronous behavior applies to either side independently. Either side can take advantage of the capability to take care of several tasks as they become ready rather than blocking on a single task and doing nothing in the meantime. For example, servers do things asynchronously (or at least they should) while clients usually don't need to (though there can be benefits if they do and modern programming practices encourage that they do).
1
0
0
So I want to implement async file upload for a website. It uses python and javascript for frontend. After googling, there are a few great posts on them. However, the posts use different methods and I don't understand which one is the right one. Method 1: Use ajax post to the backend. Comment: does it make a difference? I thought async has to be in the backend not the front? So when the backend is writing files to disk, it will still be single threaded. Method 2: Use celery or asyncio to upload file in python. Method 3: use background thread to upload file in python. Any advice would be thankful.
Confusion on async file upload in python
1.2
0
1
786
40,034,334
2016-10-14T03:12:00.000
2
1
0
0
python,scala,compression,information-theory
40,046,686
1
true
0
0
All this is going to do is tell you whether the words in the sentence, and maybe phrases in the sentence, are in the dictionary you supplied. I don't see how that's complexity. More like grade level. And there are better tools for that. Anyway, I'll answer your question. Yes, you can preset the zlib compressor a dictionary. All it is is up to 32K bytes of text. You don't need to run zlib on the dictionary or "freeze a state" -- you simply start compressing the new data, but permit it to look back at the dictionary for matching strings. However 32K isn't very much. That's as far back as zlib's deflate format will look, and you can't load much of the English language in 32K bytes. LZMA2 also allows for a preset dictionary, but it can be much larger, up to 4 GB. There is a Python binding for the LZMA2 library, but you may need to extend it to provide the dictionary preset function.
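A hedged sketch of the preset-dictionary mechanism with zlib (the zdict argument is available in Python 3.3+); the corpus here is a toy stand-in for a real one:

```python
import zlib

corpus = b"the quick brown fox jumps over the lazy dog " * 100
preset = corpus[-32768:]  # zlib can only look back at the last 32K bytes

def extra_bytes(sentence):
    # level, method, wbits, memLevel, strategy, zdict
    comp = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, 8,
                            zlib.Z_DEFAULT_STRATEGY, preset)
    return len(comp.compress(sentence.encode()) + comp.flush())

# sentences that match the preset dictionary compress to fewer extra bytes
print(extra_bytes("the quick brown fox jumps over the lazy dog"))
print(extra_bytes("zygomorphic quetzalcoatl xylophone"))
```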
1
1
1
I'm trying to write an algorithm that can work out the 'unexpectedness' or 'information complexity' of a sentence. More specifically I'm trying to sort a set of sentences so the least complex come first. My thought was that I could use a compression library, like zlib?, 'pre train' it on a large corpus of text in the same language (call this the 'Corpus') and then append to that corpus of text the different sentences. That is I could define the complexity measure for a sentence to be how many more bytes it requires to compress the whole corpus with that sentence appended, versus the whole corpus with a different sentence appended. (The fewer extra bytes, the more predictable or 'expected' that sentence is, and therefore the lower the complexity). Does that make sense? The problem is with trying to find the right library that will let me do this, preferably from python. I could do this by literally appending sentences to a large corpus and asking a compression library to compress the whole shebang, but if possible, I'd much rather halt the processing of the compression library at the end of the corpus, take a snapshot of the relevant compression 'state', and then, with all that 'state' available try to compress the final sentence. (I would then roll back to the snapshot of the state at the end of the corpus and try a different sentence). Can anyone help me with a compression library that might be suitable for this need? (Something that lets me 'freeze' its state after 'pre training'.) I'd prefer to use a library I can call from Python, or Scala. Even better if it is pure python (or pure scala)
Using compression library to estimate information complexity of an english sentence?
1.2
0
0
101
40,035,593
2016-10-14T05:32:00.000
3
0
1
0
python
40,035,621
2
false
0
0
Circular dependencies are caused by A importing B and B importing A. The usual solution is to make a C which imports B and A so that A and B don't have to import each other. You could also concatenate the two files if they're too tightly coupled. This is a problem in almost all languages; it's harder to parse code with circular dependencies, so most languages restrict it to a directed acyclic graph. By importing modules inside functions, you avoid a circular dependency. Somewhat. First A imports B, then, when the function is called, B can import A. Since you have to call the function for B to import A, you don't get the same A imports B imports A imports B ... loop.
2
0
0
We chose python for a new project because we wanted to use a language where we could write beautiful code in non-verbose way. Our consultant we used to write the code has delivered a great working solution. But when we look at the code, it's riddled with function local from X import Y. We promptly moved the imports to the top of the files but are now stricken with circular dependencies. We have absolutely no wish to resolve the circular dependencies and we have no wish to move the imports back to the functions, which is extremely verbose. Question 1: How do we resolve this? Question 2: what is this circular dependency non-sense? How can the Python community accept this when other languages seems to have solved this just fine? (I hope it isn't regarded as a feature of some kind)
Python circular dependencies
0.291313
0
0
1,869
40,035,593
2016-10-14T05:32:00.000
2
0
1
0
python
40,035,664
2
false
0
0
We handle circular dependencies because, well, we usually don't create them. It's not exactly common when two modules require each other. If you can't refactor your A and B modules into a common module C that both import, you should just use local imports for one of the modules, only in the functions that require it. But really, you'll be better off modifying your modules.
2
0
0
We chose python for a new project because we wanted to use a language where we could write beautiful code in non-verbose way. Our consultant we used to write the code has delivered a great working solution. But when we look at the code, it's riddled with function local from X import Y. We promptly moved the imports to the top of the files but are now stricken with circular dependencies. We have absolutely no wish to resolve the circular dependencies and we have no wish to move the imports back to the functions, which is extremely verbose. Question 1: How do we resolve this? Question 2: what is this circular dependency non-sense? How can the Python community accept this when other languages seems to have solved this just fine? (I hope it isn't regarded as a feature of some kind)
Python circular dependencies
0.197375
0
0
1,869
40,036,942
2016-10-14T07:03:00.000
3
0
1
0
python,build,visual-studio-code
56,199,476
3
false
0
0
If you use the Code Runner extension you can add the following to your settings (click on the '{}' icon in the top right corner to get the settings.json file): "code-runner.executorMap": { "python": "$pythonPath -u $fullFileName xxx" } where xxx is your argument. This is a global change so you have to change when working on other files.
1
24
0
I am running a Python program that takes some command line arguments. How can I provide these arguments when I am building a program within the Visual Studio Code?
Running a Python program with arguments from within the Visual Studio Code
0.197375
0
0
41,152
40,037,580
2016-10-14T07:38:00.000
1
0
1
0
python,transitions
40,037,938
1
true
0
0
There is no concept of end state, but you can define a state 'end' on each fsm and check for it (see 'checking state' in the git readme), or you could add a 'on enter' reference on the 'end' state and that function will be called when the 'end' state is entered. Haven't seen transitions before, looks very nice, I like being able to produce the diagrams.
1
0
0
What is the best way to detect a sequence of characters in python? I'm trying to use transitions package by Tal yarkoni for creating fsm's based on input sequences. Then i want to use the created fsms for new sequence recognition. I'm storing the created fsm in a dict with sequence number as key. All the fsms from the dictionary should make transition as per input chars. The one which reaches end state is the required sequence and the function should return the key. Problem is that there is no concept of end state in the transitions fsm model. Is it possible to do this using transitions package?
Sequence Recognition using fsm in python
1.2
0
0
146
40,039,104
2016-10-14T09:00:00.000
0
0
1
0
regex,string,python-2.7
57,814,568
1
false
0
0
I tried this: path = input(r'Input your path: '). It seems that path then holds something like \path\the\user\chose. By the way, I use Python 3.
1
3
0
How can I apply raw string notation to input from the user? For example, I want to get a path from the user and enforce raw string notation on it, so if the input is something like "\path\the\user\chose" it will be accepted as raw input / be converted later to r"\path\the\user\chose".
Python raw string notation on input from the user
0
0
0
3,798
40,046,656
2016-10-14T15:18:00.000
1
0
1
1
python,python-2.7,centos
40,047,015
1
false
0
0
Replacing 2.7.6 with 2.7.12 would be fine using the procedure you linked. There should be no real problems with libraries installed with pip easy_install as the version updates are minor. Worst comes to worst and there is a library conflict it would be because the python library used for compiling may be different and you can always reinstall the library which would recompile against the correct python library if required. This is only problematic if the library being installed is actually compiled against the python library. Pure python packages would not be affected. If you were doing a major version change this would be okay as well as on CentOS you have to call python with python2.7 and not python, so a new version would call with python2.8
1
0
0
I run a script on several CentOS machines that compiles Python 2.7.6 from source and installs it. I would now like to update the script so that it updates Python to 2.7.12, and don't really know how to tackle this. Should I do this exactly the same way, just with the source code of the higher version, and it will overwrite the old Python version? Should I first uninstall the old Python version? If so, then how? Sorry if this is trivial - I tried Googling and searching through Stack, but did not find anyone with a similar problem.
Updating Python version that's compiled from source
0.197375
0
0
240
40,048,987
2016-10-14T17:36:00.000
3
0
0
0
python,r,scikit-learn,pmml
40,049,831
1
true
0
0
You can't connect different specialized representations (such as R and Scikit-Learn native data structures) over a generalized representation (such as PMML). You may have better luck trying to translate R data structures to Scikit-Learn data structures directly. XGBoost is really an exception to the above rule, because its R and Scikit-Learn implementations are just thin wrappers around the native XGBoost library. Inside a trained R XGBoost object there's a blob raw, which is the model in its native XGBoost representation. Save it to a file, and load it in Python using the xgb.Booster.load_model(fname) method. If you know that you need to deploy the XGBoost model in Scikit-Learn, then why do you train it in R?
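A hedged sketch of that route on the Python side (the model file name is a placeholder for whatever the raw blob was saved as in R):

```python
import xgboost as xgb

booster = xgb.Booster()
booster.load_model("model_saved_from_r.bin")  # the native XGBoost blob written out in R

# For prediction, wrap the input in a DMatrix:
# preds = booster.predict(xgb.DMatrix(X))
```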
1
7
1
There seem to be a few options for exporting PMML models out of scikit-learn, such as sklearn2pmml, but a lot less information going in the other direction. My case is an XGboost model previously built in R, and saved to PMML using r2pmml, that I would like to use in Python. Scikit normally uses pickle to save/load models, but is it also possible to import models into scikit-learn using PMML?
Importing PMML models into Python (Scikit-learn)
1.2
0
0
2,878
40,051,205
2016-10-14T20:06:00.000
4
0
1
1
python,linux,python-3.5
40,051,396
1
true
0
0
Pip is a python script. Open it and see : it begins with #!/usr/bin/python You can either create a symbolic link in the old path to point to the new one, or replace the shebang with the new path. You can also keep your distrib interpreter safe by leaving it be and set the compiled one into a new virtualenv.
1
3
0
I have compiled the Python sources with the --prefix option. After running make install the binaries are copied to a folder in my account's home directory. I needed to rename this folder, but when I use pip after the renaming it says that it can't find the Python interpreter; it shows an absolute path to the previous location (before renaming). Using grep I found multiple references to absolute paths relative to the --prefix folder. I tried to override them by setting the PATH, PYTHONPATH and PYTHONHOME environment variables, but it's no better. Is there a way to compile the Python sources so that I can freely move the installation afterwards?
Move python folder on linux
1.2
0
0
1,303
40,051,602
2016-10-14T20:36:00.000
0
0
0
0
python,django
40,051,673
2
false
1
0
This is possible. However, the client machine would need to be equipped with the correct technologies for this to work. When you launch a web app on a server (live), the server is required to have certain settings and software installed. For example, for a Django web app the server must have a version of Django installed. Hence, whichever machine runs your web app must have Django installed, and presumably the database too. It might be quite a hassle, but it's possible. It's just like development, where multiple users may work on one project: they all need to have that project 'installed' on their devices so they can run it locally.
2
1
0
I've never worked with Django before so forgive me if a question sounds stupid. I need to develop a web application, but I do not want to deploy it on a server. I need to package it, so that others would "install" it on their machine and run it. Why I want to do it this way? There are many reasons, which I don't want to go into right now. My question is: can I do it? If yes, then how?
packaging django application and deploying it locally
0
0
0
184
40,051,602
2016-10-14T20:36:00.000
0
0
0
0
python,django
40,051,692
2
false
1
0
You need to use a Python-to-executable tool, with Django already bundled in it. The website files can be placed into the dist folder, or whatever folder contains the executable. Then you can compress it and share it with others (who have the same OS as you). For example: you have a script written in Django (I'm too lazy to actually write one), and you want to share it with someone who doesn't have Python and Django on his/her computer.
2
1
0
I've never worked with Django before so forgive me if a question sounds stupid. I need to develop a web application, but I do not want to deploy it on a server. I need to package it, so that others would "install" it on their machine and run it. Why I want to do it this way? There are many reasons, which I don't want to go into right now. My question is: can I do it? If yes, then how?
packaging django application and deploying it locally
0
0
0
184
40,052,745
2016-10-14T22:14:00.000
3
0
1
0
python-3.x,concurrency,async-await,python-asyncio
40,053,743
1
true
0
0
Yes, a single await coro() call is two times slower than just func(). But the whole asyncio-based program in total may be (and often is) faster than threaded-based solution.
1
1
0
In a couple of youtube videos I’ve seen today, both David Beazley and Yuri S. say that async is 2x slower than functions. I don’t understand this. The whole point of async is concurrency, so even if a single function is faster than a single coroutine, that’s almost never going to be a real world situation. Instead, you’re going to have a lot of coroutines running at the same time, instead of one at a time with functions, so who cares if one on one a function is faster? How is that a relevant benchmark?
python async speed compared to functions
1.2
0
0
332
40,054,168
2016-10-15T01:50:00.000
3
0
1
0
python-3.x,unicode,utf-8
40,064,177
2
false
0
0
If you want json to output a string that has non-ASCII characters in it then you need to pass ensure_ascii=False and then encode manually afterward.
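A small sketch of both directions (variable and file names are only for illustration):

```python
import json

raw = '{"region": "\\u0420\\u0435\\u0433\\u0438\\u043e\\u043d"}'
data = json.loads(raw)             # json.loads decodes the \uXXXX escapes automatically
print(data["region"])              # -> Регион

# writing back without re-escaping the Cyrillic characters
text = json.dumps(data, ensure_ascii=False)
with open("out.json", "w", encoding="utf-8") as fh:
    fh.write(text)
```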
1
0
0
I want to read a JSON file containing Cyrillic symbols. The Cyrillic symbols are represented like \u123. Python converts them to '\\u123' instead of the Cyrillic symbol. For example, the string "\u0420\u0435\u0433\u0438\u043e\u043d" should become "Регион", but becomes "\\u0420\\u0435\\u0433\\u0438\\u043e\\u043d". encode() just makes string look like u"..." or adds a new \. How do I convert "\u0420\u0435\u0433\u0438\u043e\u043d" to "Регион"?
How to encode Cyrillic characters in JSON
0.291313
0
0
5,699
40,055,676
2016-10-15T06:17:00.000
1
0
0
0
python,django,port
40,063,068
2
true
1
0
Port 80 has no magical meaning, it is not "reserved" or "privileged" on your server (besides most likely requiring root privileges to access, as others have mentioned). It is just a regular port that was chosen to be a default for http, so you don't have to write google.com:80 every time in your browser, that's it. If you have no web server running such as apache or nginx which usually listen to that port, then port 80 is up for grabs. You can run django runserver on it, you can run a plain python script listening to it, whatever you like.
2
1
0
Using the command python manage.py runserver 0.0.0.0:8000 we can host a Django server locally on any port. So a developer can use reserved and privileged port numbers, say python manage.py runserver 127.0.0.1:80. So now I am using port 80, defined for the HTTP protocol. Why does this not raise any issues, and how is this request granted?
How is Django able to grant reserved port numbers?
1.2
0
0
115
40,055,676
2016-10-15T06:17:00.000
1
0
0
0
python,django,port
40,055,695
2
false
1
0
You should use a proper server instead of Django's test server such as nginx or apache to run the server in production on port 80. Running something like sudo python manage.py runserver 0.0.0.0:80 is not recommended at all.
2
1
0
Using the command python manage.py runserver 0.0.0.0:8000 we can host a Django server locally on any port. So a developer can use reserved and privileged port numbers, say python manage.py runserver 127.0.0.1:80. So now I am using port 80, defined for the HTTP protocol. Why does this not raise any issues, and how is this request granted?
How is Django able to grant reserved port numbers?
0.099668
0
0
115
40,061,555
2016-10-15T16:33:00.000
0
0
0
0
python,django,model
40,061,752
2
false
1
0
It depends on how many models you define. If you have only 1 to 5 model classes, just put them in a single file; if you have more than 5, I suggest splitting them across several files. In my experience, though, when the models are spread across several files it becomes a little cumbersome when it comes to importing stuff; see the sketch below.
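One common arrangement for the multi-file case, shown here as a hypothetical sketch (the file names topic1.py/topic2.py come from the question, the model names are placeholders), is to turn models.py into a package and re-export everything from its __init__.py so the rest of the project can keep importing from app.models:

```python
# myapp/models/__init__.py  (hypothetical layout)
# each topicN.py defines its models as usual with `from django.db import models`
from .topic1 import Topic1Model   # noqa: F401
from .topic2 import Topic2Model   # noqa: F401
```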
1
3
0
When creating a reusable app, should I put all models I define into single file models.py or can I group the models into several files like topic1.py, topic2.py? Please describe all reasons pro and contra.
Using other file names than models.py for Django models?
0
0
0
965
40,065,396
2016-10-15T23:49:00.000
0
0
0
0
python,tensorflow,deep-learning,lstm
40,065,428
1
true
0
0
It sounds like you want tf.unpack()
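A small sketch of what that call looks like; this uses the old tf.unpack name from the TensorFlow versions current at the time (newer releases renamed it to tf.unstack):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[4, 10, 8])   # [B, T, S] with B=4
parts = tf.unpack(x)   # a Python list of 4 tensors, each shaped [10, 8]
# no session/eval needed: the list is produced at graph-construction time
```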
1
0
1
Tensorflow: Converting a tensor [B, T, S] to a list of B tensors shaped [T, S], where B, T, S are whole positive numbers ... How can I convert this? I can't do eval because no session is running at the time I want to do this.
Tensorflow: Converting a tensor [B, T, S] to a list of B tensors shaped [T, S]
1.2
0
0
76
40,066,446
2016-10-16T03:12:00.000
2
0
0
0
python-3.x,kivy
40,067,336
1
false
0
1
You can implement it yourself: just bind the on_text event and change the suggestion_text property. You may also check for the TAB key press event to know when to change the text to the suggested completion.
1
2
0
I am new to kivy. I want to know if there is a way to create a textinput with autocomplete functionality, that lets you select from a dictionary with 200 items. Similar to the select2 that you have in HTML/CSS
Kivy TextInput with autocomplete
0.379949
0
0
1,838
40,068,842
2016-10-16T09:33:00.000
1
0
0
0
python,django,exception,transactions
40,071,369
1
true
1
0
We use microservices in our company, and at least once a month one of our microservices is down for a while. We have a Transaction model for the payment process, with statuses for every step that happens before we send a product to the user. If something goes wrong or one of the connected microservices is down, we mark it with status=error and save it to the database. Then we use a cron job to find and finish those processes. You need to try something to begin with, and if it does not fit your needs, try something else.
1
0
0
In programming web applications, Django in particular, sometimes we have a set of actions that must all succeed or all fail (in order to ensure a predictable state of some sort). Now obviously, when we are working with the database, we can use transactions. But in some circumstances, these (all or nothing) constraints are needed outside of a database context (e.g. if payment is a success, we must send the product activation code or else risk customer complaints, etc.). But let's say that on some fateful day, the send_code() function just failed time and again due to some temporary network error (that lasted for 1+ hours). Should I log the error and manually fix the problem, e.g. send the mail manually? Should I set up some kind of work queue, where when things fail, they just go back onto the end of the queue for a future retry? What if the logging/queueing systems also fail? (Am I worrying too much at this point?)
What is the proper workflow for ensuring "transactional procedures" in case of exceptions
1.2
0
0
28
40,068,892
2016-10-16T09:40:00.000
0
0
0
0
python,openerp,odoo-8
44,375,335
2
false
0
0
Hello Viral, when you upload the Excel data for the first time, pick one unique column (i.e. an ID). When you upload data the second time, check that unique column: if a matching record is found, only update its data; otherwise upload the data as a new record.
2
1
0
In the new API (Python - Odoo) I successfully upload an Excel file. But if I upload the same file a second time, the data is duplicated. How can I upload only unique data? If there is no change in the Excel file, no records should change; but if some data has changed, only those records should be updated and the remaining records should stay the same as uploaded. Thanks
Unique Data Upload in Python Excel
0
1
0
58
40,068,892
2016-10-16T09:40:00.000
0
0
0
0
python,openerp,odoo-8
40,134,328
2
false
0
0
For that you need at least one field that identifies the record, so you can check for duplicates.
2
1
0
In the new API (Python - Odoo) I successfully upload an Excel file. But if I upload the same file a second time, the data is duplicated. How can I upload only unique data? If there is no change in the Excel file, no records should change; but if some data has changed, only those records should be updated and the remaining records should stay the same as uploaded. Thanks
Unique Data Upload in Python Excel
0
1
0
58
40,071,459
2016-10-16T14:34:00.000
0
1
0
0
python,twitter,tweepy
40,072,173
1
false
0
0
There is no workaround for rate limits other than polling for the rate limit status and waiting for the rate limit to be over. You can also use the flag 'wait_on_rate_limit=True'; this way tweepy will poll the rate limit by itself and sleep until the rate limit period is over. You can also use the flag 'monitor_rate_limit=True' if you want to handle the rate limit "Exception" by yourself. That being said, you should really devise some smaller geo range, since your rate limit will be reached every 0.000000001 seconds (or less... it's still twitter).
1
0
0
streamer.filter(locations=[-180, -90, 180, 90], languages=['en'], async=True) I am trying to extract the tweets which have been geotagged from the twitter streaming API using the above call. However, I guess tweepy is not able to handle the requests and quickly falls behind the twitter rate. Is there a suggested workaround the problem ?
Prevent Tweepy from falling behind
0
0
1
74
40,071,582
2016-10-16T14:46:00.000
1
0
0
0
python,tkinter
40,071,607
2
false
0
1
You need to save stuff in some kind of config file. In general I'd recommend JSON or YAML as file formats, or ini for ease of parsing. Also, do not forget about the Windows registry (portability is lost then, though).
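Rather than pickling widgets or lambdas, the usual trick is to save only the plot state (a scene identifier plus whatever variables matter) and rebuild the widgets from it on load. A rough sketch, with all names invented for illustration:

```python
import json

def save_game(path, state):
    # state is a plain dict, e.g. {"scene": "crossroads", "inventory": ["map"]}
    with open(path, "w") as fh:
        json.dump(state, fh)

def load_game(path):
    with open(path) as fh:
        return json.load(fh)

# On load, look the scene id up in a dict mapping scene names to scene-building
# functions, and call the matching function to recreate the Label/Buttons for
# that point in the plot; the bindings and commands are recreated in code, not saved.
```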
1
2
0
I'm working on a text-based game in Python using Tkinter. All the time the window contains a Label and a few Buttons (mostly 3). If it didn't have GUI, I could just use Pickle or even Python i/o (.txt file) to save data and later retrieve it. But what is the best way to tell the program to load the exact widgets without losing bindings, buttons' commands, classes etc.? P.S.: Buttons lead to cleaning the frame of widgets and summoning new widgets. I'm thinking of assigning a lambda (button's command) to a variable and then saving it (Pickle?) to be able to load it in the future and get the right point in the plot. Should I go for it or is there a better, alternative way to accomplish the thing? (If using lambda may work, I'd still be grateful to see your way of doing that.)
In-game save feature (in tkinter game)
0.099668
0
0
625
40,071,987
2016-10-16T15:27:00.000
1
0
1
0
python,linux,virtualenv,archlinux-arm,pacman-package-manager
40,072,017
1
true
0
0
You can create the virtualenv with the --system-site-packages switch to use system-wide packages in addition to the ones installed in the stdlib.
1
0
0
My system is Archlinux. My project will use NumPy, and my project is in a virtual environment created by virtualenv. As it is difficult to install NumPy by pip, I install it by Pacman: sudo pacman -S python-scikit-learn But how can I use it in virtualenv?
How to use the NumPy installed by Pacman in virtualenv?
1.2
0
0
492
40,072,873
2016-10-16T16:54:00.000
0
0
1
0
python,multithreading
63,320,614
4
false
0
0
the GIL does not protect you from modification of the internal states of the objects that you are accessing concurrently from different threads, meaning that you can still mess things up if you don't take measures. So, despite the fact that two threads may not be running at the same exact time, they can still be trying to manipulate the internal state of an object (one at a time, intermittently), and if you don't prevent that from happening (with some locking mechanism) your code could/will eventually fail. Regards.
1
43
0
I believe it is a stupid question but I still can't find the answer. Actually it's better to separate it into two questions: 1) Am I right that we could have a lot of threads, but because of the GIL only one thread is executing at any given moment? 2) If so, why do we still need locks? We use locks to avoid the case when two threads are trying to read/write some shared object, but because of the GIL two threads can't be executing at the same moment, can they?
Why do we need locks for threads, if we have GIL?
0
0
0
5,744
40,074,378
2016-10-16T19:14:00.000
0
0
0
0
python,raspberry-pi
40,074,663
1
false
1
0
If I were faced with a problem like this right now, I would do this: 1) First I'd try figuring out whether I can use the event loop of the web framework to execute the code communicating with the Raspberry Pi asynchronously (i.e. inside the event handlers). 2) If I failed to find a web framework extensible enough to do what I need, or if it turned out that the Raspberry Pi part can't be done asynchronously (e.g. it takes too long to execute), I would figure out the difference between threads and processes in Python, which of the two I can use in my specific situation, and what tools can help me with that. This answer is as specific as the question (at the time of writing).
1
0
0
I want to run program in infinite loop which handles GPIO in raspberry PI and gets requests in infinite loop (as HTTP server). Is it possible? I tried Flask framework, but infinite loop waits for requests and then my program is executed.
Is it possible to run program in python with additional HTTP server in infinite loop?
0
0
0
165
40,076,481
2016-10-16T23:08:00.000
0
0
1
0
java,python,c++,multithreading,pthreads
40,077,181
2
true
0
0
Your design sounds like the correct approach. Don't think of them as per-thread mutexes: think of them as per-counter mutexes (each element of your array should probably be a mutex/counter pair). In the main thread there may be no need to lock all of the mutexes and then read all of the counters: you might be able to do the lock/read/unlock for each counter in sequence, if the value is something like your example (number of requests handled by each thread) where reading all the counters together doesn't give a "more correct" answer than reading them in sequence. Alternatively, you could use atomic variables for the counters instead of locks if your language/environment offers that.
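A Python sketch of the per-counter lock idea (the names and the number of workers are illustrative only):

```python
import threading

class Counter(object):
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

counters = [Counter() for _ in range(8)]   # one counter per worker thread

def handle_request(worker_idx):
    c = counters[worker_idx]
    with c.lock:            # only ever contends with the main thread's read
        c.value += 1

def total():
    # main thread: lock/read/unlock each counter in sequence
    s = 0
    for c in counters:
        with c.lock:
            s += c.value
    return s
```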
2
0
0
Here's the scenario: a main thread spawns upto N worker threads that each will update a counter (say they are counting number of requests handled by each of them). The total counter also needs to be read by the main thread on an API request. I was thinking of designing it like so: 1) Global hashmap/array/linked-list of counters. 2) Each worker thread accesses this global structure using the thread-ID as the key, so that there's no mutex required to protect one worker thread from another. 3) However, here's the tough part: no example I could find online handles this: I want the main thread to be able to read and sum up all counter values on demand, say to serve an API request. I will NEED a mutex here, right? So, effectively, I will need a per-worker-thread mutex that will lock the mutex before updating the global array -- given each worker thread only contends with main thread, the mutex will fail only when main thread is serving the API request. The main thread: when it receives API request, it will have to lock each of the worker-thread-specific mutex one by one, read that thread's counter to get the total count. Am I overcomplicating this? I don't like requiring per-worker-thread mutex in this design. Thanks for any inputs.
Global array storing counters updated by each thread; main thread to read counters on demand?
1.2
0
1
89
40,076,481
2016-10-16T23:08:00.000
0
0
1
0
java,python,c++,multithreading,pthreads
40,076,739
2
false
0
0
Just use an std::atomic<int> to keep a running count. When any thread updates its counter it also updates the running count. When the main thread needs the count it reads the running count. The result may be less than the actual total at any given moment, but whenever things settle down, the total will be right.
2
0
0
Here's the scenario: a main thread spawns upto N worker threads that each will update a counter (say they are counting number of requests handled by each of them). The total counter also needs to be read by the main thread on an API request. I was thinking of designing it like so: 1) Global hashmap/array/linked-list of counters. 2) Each worker thread accesses this global structure using the thread-ID as the key, so that there's no mutex required to protect one worker thread from another. 3) However, here's the tough part: no example I could find online handles this: I want the main thread to be able to read and sum up all counter values on demand, say to serve an API request. I will NEED a mutex here, right? So, effectively, I will need a per-worker-thread mutex that will lock the mutex before updating the global array -- given each worker thread only contends with main thread, the mutex will fail only when main thread is serving the API request. The main thread: when it receives API request, it will have to lock each of the worker-thread-specific mutex one by one, read that thread's counter to get the total count. Am I overcomplicating this? I don't like requiring per-worker-thread mutex in this design. Thanks for any inputs.
Global array storing counters updated by each thread; main thread to read counters on demand?
0
0
1
89
40,077,546
2016-10-17T02:09:00.000
2
0
0
0
python,panda3d
40,102,063
1
false
0
1
Make those objects children of base.render2d, base.aspect2d or base.pixel2d. For proper GUI elements take a look at DirectGUI, for "I just want to throw these images up on the screen" at CardMaker.
1
3
0
I want to make a game in panda3d with support for touch because I want it to be playable on my windows tablet also without attaching a keyboard. What I want to do is, find a way to draw 2d shapes that don't change when the camera is rotated. I want to add a dynamic analog pad so I must be able to animate it when the d-pad is used with mouse/touch. Any help will be appreciated
How to draw onscreen controls in panda 3d?
0.379949
0
0
239
40,085,367
2016-10-17T11:36:00.000
2
0
0
0
python,numpy,scipy,grid
40,087,542
1
true
0
0
RectBivariateSpline Imagine your grid as a canyon, where the high values are peaks and the low values are valleys. The bivariate spline is going to try to fit a thin sheet over that canyon to interpolate. This will still work on irregularly spaced input, as long as the x and y array you supply are also irregularly spaced, and everything still lies on a rectangular grid. RegularGridInterpolator Same canyon, but now we'll linearly interpolate the surrounding gridpoints to interpolate. We'll assume the input data is regularly spaced to save some computation. It sounds like this won't work for you. Now What? Both of these map 2D-1D. It sounds like you have an irregular input space with, but rectangularly spaced sample points, and an output space with regularly spaced sample points. You might just try LinearNDInterpolator, since you're in 2D it won't be that much more expensive. If you're trying to interpolate a mapping between two 2D things, you'll want to do two interpolations, one that interpolates (x1, y1) -> x2 and one that interpolates (x1, y1) -> y2. Vstacking the output of those will give you an array of points in your output space. I don't know of a more efficient method in scipy for taking advantage of the expected structure of the interpolation output, given an irregular grid input.
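For completeness, a small sketch of RectBivariateSpline on a non-uniform but rectangular input grid, evaluated on a regular output grid; kx=ky=1 gives the linear interpolation mentioned in the question, and the data here is made up:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.array([0.0, 0.4, 1.1, 2.0, 3.5])     # non-uniform spacing along x
y = np.array([0.0, 0.7, 1.0, 2.2, 4.0])     # non-uniform spacing along y
z = np.random.rand(len(x), len(y))           # values defined on the x-by-y grid

spline = RectBivariateSpline(x, y, z, kx=1, ky=1)   # bilinear interpolation
xi = np.linspace(x.min(), x.max(), 50)              # regularly spaced output grid
yi = np.linspace(y.min(), y.max(), 50)
zi = spline(xi, yi)                                  # array of shape (50, 50)
```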
1
0
1
I pretty new to python, and I'm looking for the most efficient pythonic way to interpolate from a grid to another one. The original grid is a structured grid (the terms regular or rectangular grid are also used), and the spacing is not uniform. The new grid is a regularly spaced grid. Both grids are 2D. For now it's ok using a simple interpolation method like linear interpolation, pheraps in the future I could try something different like bicubic. I'm aware that there are methods to interpolate from an irregular grid to a regular one, however given the regular structure of my original grid, more efficient methods should be available. After searching in the scipy docs, I have found 2 methods that seem to fit my case: scipy.interpolate.RegularGridInterpolator and scipy.interpolate.RectBivariateSpline. I don't understand the difference between the two functions, which one should I use? Is the difference purely in the interpolation methods? Also, while the non-uniform spacing of the original grid is explicitly mentioned in RegularGridInterpolator, RectBivariateSpline doc is silent about it. Thanks, Andrea
Which scipy function should I use to interpolate from a rectangular grid to regularly spaced rectangular grid in 2D?
1.2
0
0
401
40,086,091
2016-10-17T12:13:00.000
0
0
1
0
python,django,python-3.x,vagrant,centos7
40,120,623
1
false
1
0
pyvenv-3.4 --without-pip name_of_environment worked; it looks like pip was not installed. Thanks for the help.
1
0
0
I am using Centos7 with vagrant and virtualbox on windows10. I am trying to create pyvenv virtual environment to develop python web apps with django. I have installed python 3.4. However, when I type pyvenv-3.4 name_of_environment, it gives back an error Error: [Errno 71] Protocol error: 'lib' -> '/vagrant/django_apps/app1/name_of_environment/lib64' What is wrong?
Error: [Errno 71] Protocol error: pyvenv
0
0
0
1,972
40,090,368
2016-10-17T15:32:00.000
2
0
1
1
python-2.7,shell,google-cloud-sdk
42,702,977
2
false
0
0
An additional thing to add to @cherba's answer: On Windows I found CLOUDSDK_PYTHON had to be a user level variable not a system level variable. (That's the first box if you're looking at windows system environment variables.)
1
2
0
I am trying to install the Google Cloud SDK which requires Python 2.7. I have both Python 3.5 and 2.7 with Anaconda. I am given a shell script and I would like to tell the shell script to use Python 2.7. How would I do this?
Installing Google Cloud SDK with Python 2.7
0.197375
0
0
2,759
40,090,892
2016-10-17T16:01:00.000
2
0
0
0
python,gpu,caffe
40,091,765
2
false
0
0
This happens when you run out of memory in the GPU. Are you sure you stopped the first script properly? Check the running processes on your system (ps -A in ubuntu) and see if the python script is still running. Kill it if it is. You should also check the memory being used in your GPU (nvidia-smi).
1
3
1
I am trying to run a neural network with pycaffe on the GPU. This works when I call the script for the first time. When I run the same script a second time, CUDA throws the error in the title. Batch size is 1, the image size at this moment is 243x322, and the GPU has 8 GB of RAM. I guess I am missing a command that resets the memory? Thank you very much! EDIT: Maybe I should clarify a few things: I am running caffe on Windows. When I call the script with python script.py, the process terminates and the GPU memory gets freed, so this works. With ipython, which I use for debugging, the GPU memory indeed does not get freed (after one pass, 6 of the 8 GB are in use, thanks for the nvidia-smi suggestion!). So, what I am looking for is a command I can call from within Python, along the lines of: run network, process image, output, free GPU memory.
Check failed: error == cudaSuccess (2 vs. 0) out of memory
0.197375
0
0
2,965
40,091,704
2016-10-17T16:50:00.000
0
0
0
0
python,django,pug
40,091,767
1
false
1
0
You could try href="{% static 'images/favicon.ico' %}?v=1", keeping the ?v=1 outside the static tag but inside the href attribute.
1
0
0
I use Jade (pyjade) with my Django project. For now I need to use static template tag with GET variable specified - something like following: link(rel="shortcut icon", href="{% static 'images/favicon.ico?v=1' %}"). But I get /static/images/favicon.ico%3Fv%3D1 instead of /static/images/favicon.ico?v=1 Why it happens and how can I fix this? Thanks in advance!
GET variables with Jade in Django templates
0
0
0
241
40,092,290
2016-10-17T17:26:00.000
3
0
1
0
python
40,092,381
3
false
0
0
First you'll need to create a Python function that reads from a text file. Second, create a function that converts the temperatures. Then create a function that writes the results to a file. This is a very broad question, and you can't expect to get full working code, so start with the first step and we'll be happy to help with more specific problems.
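A minimal sketch along those lines, assuming "Degrees" means degrees Celsius and that the input file has one Fahrenheit value per line (the file names are placeholders):

```python
def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0

with open("fahrenheit.txt") as src, open("celsius.txt", "w") as dst:
    for line in src:
        line = line.strip()
        if line:                                  # skip blank lines
            c = fahrenheit_to_celsius(float(line))
            dst.write("%.2f\n" % c)               # one Celsius value per line
```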
1
0
0
I'm new to Python and I've been given the task to create a program which uses a text file which contains figures in Fahrenheit, and then I need to change them into a text file which gives the figures in Degrees... Only problem is, I have no idea where to start. Any advice?
Python - Creating a text file converting Fahrenheit to Degrees
0.197375
0
0
1,728
40,096,695
2016-10-17T22:31:00.000
2
0
0
1
python,uwsgi,supervisord
40,096,953
1
false
1
0
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
1
1
0
I have my Python app running through uWSGI. Rarely, the app will encounter an error which makes it not be able to load. At that point, if I send requests to uWSGI, I get the error no python application found, check your startup logs for errors. What I would like to happen in this situation is for uWSGI to just die so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this? More info about my setup: Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
How to Make uWSGI die when it encounters an error?
0.379949
0
0
221
40,099,001
2016-10-18T03:26:00.000
1
0
0
0
python-xarray
40,099,554
1
false
0
0
Use "conda install xarray==0.8.0" if you're using anaconda, or "pip install xarray==0.8.0" otherwise.
1
1
1
I am reading someone else's pickle file that may have data types based on xarray. Now I cannot read in the pickle file, with the error "No module named core.dataset". I guess this may be an xarray issue. My collaborator asked me to change my version to his version and try again. My version is 0.8.2, and his version is 0.8.0. So how can I change back to his version? Thanks!
how to install previous version of xarray
0.197375
0
0
999
40,100,083
2016-10-18T05:18:00.000
0
0
0
0
python,angularjs,mongodb
40,100,193
1
true
1
0
I don't see a problem with your approach, except that because you have real-time data, I would encourage you to go with some kind of WebSockets approach, like Socket.io on Node and the front end. The reason why I say this is that the alternative approach, long-polling, involves a lot of HTTP traffic back and forth between your server and client, which is a performance bottleneck. Angular is perfectly fine for this, as you will not need to manually update your model data on the front end, thanks to two-way data binding. There are many charting frameworks and libraries like D3.js and HighCharts that can be plugged into your front end to chart your data; use whichever you like.
1
1
0
I want to write an app for plotting various data (cpu, ram,disk etc.) from Linux machines. On the client side: Data will be collected via a python script and saved to a database (on a remote server) eg.: In each second create an entry in a mongodb collection with: session identifier,cpu used, ram,iops and their values. This data will be written in sessions of a few hours (so ~25K-50K entries per session) On the server side: Data will be processed having the 'session' identified, plotted and saved to a cpu graph png/ram graph png etc. Also it will write to a separate collection in mongodb identification that will be used to gather and present this data in a webpage. The page will have the possibility to start the client on the remote machine. Is this approach optimal? Is there a better but simple way to store the data ? Can I make the page construct and display the session dynamically to be used for example to zoom. Will mongo be able to store/save hundreds of millions of entries like this ? I was thinking on using angular + nodejs or angular + flask on the server and mongodb. I don't know flask or node, which will be easier to use for creating a simple REST. My skill levels: python advanced, javascript/html/css medium, angularjs 1 beginner.
SPA webapp for plotting data with angular and python
1.2
0
0
250
40,100,528
2016-10-18T05:53:00.000
0
0
0
0
python-2.7,robotframework,selenium2library
61,486,889
3
false
1
0
If you know the element is clickable and just want to click anyway, try using Click Element At Coordinates with a 0,0 offset. It'll ignore the fact that it's obscured and will just click.
1
1
0
I'm using Robotframework selenium2Library with python base and Firefox browser for automating our web application. Having below issue when ever a Click event is about occur, Header in the web application is immovable during page scroll(ie., whenever page scroll happens header would always be available for user view, only the contents get's scrolled) now the issue is, when a element about to get clicked is not available in page view, click event tries to scroll page to bring the element on top of the webpage,which is exactly below the header(overlap) and click event never occurs, getting below exception. WebDriverException: Message: Element is not clickable at point (1362.63330078125, 15.5). Other element would receive the click: https://url/url/chat/chat.asp','popup','height=600, width=680, scrollbars=no, resizable=yes, directories=no, menubar=no, status=no, toolbar=no'));"> I have tried Wait Until Page is Visible keyword, but still this doesn't help, as the next statement, Click event(Click Element, Click Link etc) is again scrolling up to the header. Header being visible all time is a feature in our web application and due this scrips are failing, Can some one please help to over come this issue and make the click event to get executed successfully?
Robotframework Selenium2Library header overlay on element to be clicked during page scroll
0
0
1
1,072
40,100,596
2016-10-18T05:58:00.000
3
0
0
0
python,unicode,encode,gb2312
40,100,834
2
false
1
0
囧 is not in gb2312; use gb18030 instead. I guess Firefox may extend the encoding method when it encounters unknown characters.
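For example:

```python
# -*- coding: utf-8 -*-
s = u'囧'
try:
    s.encode('gb2312')        # fails: 囧 is outside the gb2312 table
except UnicodeEncodeError:
    pass
print(s.encode('gb18030'))    # works: gb18030 is a superset of gb2312
```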
1
1
0
Firefox can display '囧' in gb2312-encoded HTML, but u'囧'.encode('gb2312') throws UnicodeEncodeError. 1. Is there a map, so Firefox can look up gb2312-encoded characters in that map, find its display matrix and display 囧? 2. Is there a map for translating Unicode to gb2312, but u'囧' is not in that map?
u'囧'.encode('gb2312') throws UnicodeEncodeError
0.291313
0
1
211
40,101,049
2016-10-18T06:29:00.000
1
0
0
0
python,django-models
40,101,170
1
false
1
0
The questions you should ask are the following: Can A be linked to at most 1 or many (more than 1) B? Can B be linked to at most 1 or many A? If A can be linked to many B and B can be linked to many A, you need a many-to-many link. If A can be linked to at most 1 B and B can be linked to many A, you need a one-to-many link, where the link column is in table A. If A can be linked to at most 1 B and B can be linked to at most 1 A, you need a one-to-one link. At this point you should consider whether it is viable to join them into a single table, though this may not be possible or good from other considerations. In your case, ask yourself the question: Can a PossessableObject be linked to at most 1 other PossessableObject, or to many other PossessableObjects? Or in other words: Can a PossessableObject be owned by at most 1 other PossessableObject, or by many other PossessableObjects? If the answer is at most 1, use a one-to-many link; if the answer is many, use a many-to-many link. Also, with regard to your question on a PossesableObject_Table for each possible type of object: I think it is best to put the things they have in common in a single table and then specify types. Then create a separate table for each type of object that holds the unique properties of that object and connect those, but your way will work as well. It depends on how many different types you have and what you find the easiest to work with. Remember: as long as it works, it is fine.
1
1
0
Let's say that I want to develop a game (RTS-like, with an economy orientation) in which the player, as well as the AI, can possess almost every in-game object. For example: the player possesses a piece of land and some buildings on it; other players, or the AI, can also have some buildings, or something else, on this land piece; also, someone can possess an entire region of such land pieces and sell some of it to others. Possessable objects can be movable or immovable, but all of them have common attributes, such as owner, title, world coords and so on. What DB structure, with respect to Django models, will be most suitable for this description? Owner_Table - (one-to-many) - PossesableObject_Table; PossesableObject_Table - (many-to-many) - PossesableObject_Table (for example, a building linked to the land piece it is on) or Owner_Table - (one-to-many) - PossesableObjectType_Table (a table for each type of possessable object); PossesableObjectType_Table - (one-to-many) - PossesableObjectType_Table (for the type of linking already explained above)
Model development choice
0.197375
0
0
23
40,105,483
2016-10-18T10:14:00.000
1
0
1
0
python,editor
40,105,820
3
false
0
0
Try PyCharm, probably this software will cover all your needs
1
0
0
I am looking for Python editors which suggest inputs (mentioned in a file/database) while using custom modules and functions when writing programs. Does a similar type of editor already exist that I can build something upon? Can I develop such an editor? If yes, how?
Custom python editor
0.066568
0
0
107
40,109,065
2016-10-18T13:02:00.000
1
0
0
0
python,openerp,odoo-9
40,124,695
1
true
1
0
Your smart button on partners should use a new action, like the button for customer or vendor bills. This button definition should include context="{'default_partner_id': active_id}", which will allow the partner filter to be changed later on, or the upcoming action definition should include the partner in its domain. The action should be for model account.invoice and has to have the following domain: [('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')] If you want to filter only outgoing (customer) invoices, add a filter tuple for field type.
1
2
0
In accounting -> Customer Invoices, there is a filter called Overdue. Now I want to calculate the overdue payments per user and then display it onto the customer form view. I just want to know how can we apply the condition of filter in python code. I have already defined a smart button to display it with a (total invoice value) by inheriting account.invoice. "Overdue" filter in invoice search view: ['&', ('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]
Display Sum of overdue payments in Customer Form view for each customer
1.2
0
0
270
40,109,228
2016-10-18T13:09:00.000
1
0
1
1
python,eclipse,pydev
40,130,262
1
false
0
0
Unfortunately no, this is a Python restriction on setting the next line to be executed: it can't set the next statement after an exception is thrown (it can't even go to a different block -- i.e.: if you're inside a try..except, you can't set the next statement to be out of that block). You could in theory take a look at Python itself as it's open source and see how it handles that and make it more generic to handle your situation, but apart from that, what you want is not doable.
1
1
0
I am using Eclipse + PyDev, although I can break on exception using PyDev->Manage Exception Breakpoints, I am unable to continue the execution after the exception. What I would like to be able to do is to set the next statement before the exception so I can run a few commands in the console window and continue execution. If I use Eclipse -> Run -> Set Next Statement before the exception, the editor will show the next statement being where I set it but then when resuming the execution, the program will be terminated. Can this be done ?
How to retry before an exception with Eclipse/PyDev
0.197375
0
0
39
40,109,379
2016-10-18T13:16:00.000
0
0
0
0
python,python-2.7,python-3.x,opencv,numpy
40,109,662
1
false
0
0
You can try this: download the OpenCV module, then copy the ./opencv/build/python/3.4/x64/cv2.pyd file to the Python installation directory path ./Python34/Lib/site-packages. I hope this helps
1
1
1
So I have OpenCV on my computer all sorted out, I can use it in C/C++ and the Python 2.7.* that came with my OS. My computer runs on Linux Deepin and whilst I usually use OpenCV on C++, I need to use Python 3.4.3 for some OpenCV tasks. Problem is, I've installed python 3.4.3 now but whenever I try to run an OpenCV program on it, it doesn't recognize numpy or cv2, the modules I need for OpenCV. I've already built and installed OpenCV and I'd rather not do it again Is there some way I can link my new Python 3.4.3 environment to numpy and the opencv I already built so I can use OpenCV on Python 3.4.3? Thanks in advance
How do I link python 3.4.3 to opencv?
0
0
0
1,213
40,110,260
2016-10-18T13:56:00.000
0
0
0
0
python,scipy,curve-fitting
40,116,907
2
true
0
0
No, least_squares (hence curve_fit) only supports box constraints.
1
0
1
I'm using scipy.optimize.curve_fit for fitting a sigmoidal curve to data. I need to bound one of the parameters to [-3, 0.5] and [0.5, 3.0]. I tried fitting the curve without bounds and then, if the parameter is lower than zero, fitting once more with bounds [-3, 0.5], and conversely with [0.5, 3.0]. Is it possible to bound a parameter in curve_fit with two intervals?
Python, scipy, curve_fit, bounds: How can I constraint param by two intervals?
1.2
0
0
569
40,110,714
2016-10-18T14:18:00.000
0
0
1
1
python-3.x,squish
46,297,701
1
false
0
0
Squish binary packages generally work on any supported operating system they have been compiled for.
1
0
0
Does the squish build used in Windows 7 work for windows server 2008 as well? Or should I build squish separately for windows server?
Squish build for windows server 2008
0
0
0
55
40,111,605
2016-10-18T14:55:00.000
1
0
1
0
python,pycharm,quandl
40,111,939
2
false
0
0
Try with sudo pip install (your package) on the terminal: sudo pip install quandl. Or: sudo easy_install quandl
1
0
0
I am trying to install quandl in PyCharm. I am trying to do this by going into project interpreter clicking the "+" button and then selecting Quandl. I am getting the following error. OSError: [Errno 13] Permission denied: '/Users/raysabbineni/Library/Python/2.7' I have installed pandas and sklearn in the above way so I'm not sure what the error with quandl is.
install quandl in pycharm
0.099668
0
0
2,033
40,114,836
2016-10-18T17:47:00.000
0
0
1
0
python,anaconda,launcher
40,135,824
1
false
0
0
I reinstalled the program; the problem was that I accidentally had 2 versions of the same program.
1
0
0
I downloaded the Anaconda launcher and successfully launched the IPython notebook; then the next time I opened the Anaconda launcher I couldn't find the IPython notebook in the app list.
can't find ipython notebook in the app list of anaconda launcher
0
0
0
97
40,116,215
2016-10-18T19:11:00.000
17
0
0
0
python,scikit-learn,xgboost
51,822,131
4
false
0
0
In my case, the same error was thrown during a regular fit call. The root of the issue was that the objective was manually set to multi:softmax, but there were only 2 classes. Changing it to binary:logistic solved the problem.
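A sketch of that fix; the data here is synthetic, just to show the parameter:

```python
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=200, n_classes=2, random_state=0)

# two classes -> use binary:logistic; 'multi:softmax' would trigger the num_class error
clf = XGBClassifier(objective='binary:logistic')
clf.fit(X, y)
print(clf.predict(X[:5]))
```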
1
20
1
I am trying to use the XGBClassifier wrapper provided by sklearn for a multiclass problem. My classes are [0, 1, 2], the objective that I use is multi:softmax. When I am trying to fit the classifier I get xgboost.core.XGBoostError: value 0for Parameter num_class should be greater equal to 1 If I try to set the num_class parameter the I get the error got an unexpected keyword argument 'num_class' Sklearn is setting this parameter automatically so I am not supposed to pass that argument. But why do I get the first error?
xgboost sklearn wrapper value 0for Parameter num_class should be greater equal to 1
1
0
0
14,175
40,116,845
2016-10-18T19:48:00.000
2
0
1
0
python,json,transformation,velocity
52,438,986
3
false
0
0
I have not found a transformer library suitable for my needs and spent a couple of days trying to create my own. And then I realized that creating a transformation scheme is more difficult than writing native Python code that transforms one JSON-like Python object to another. I understand that this is not the answer to the original question. And I also understand that my approach has certain limitations; e.g. if you need to generate documentation it wouldn't work. But if you just need to transform JSON-like objects, consider the possibility of just writing Python code that does it. Chances are that the code will be cleaner and easier to understand than a transformation schema description. I wish I had considered this approach more seriously a couple of days ago.
1
12
0
Does anyone know of a python library to convert JSON to JSON in an XSLT/Velocity template style? JSON + transformation template = JSON (New) Thanks!
Python Library - json to json transformations
0.132549
0
0
8,228
40,117,180
2016-10-18T20:07:00.000
2
0
0
0
excel,python-3.x,pandas,openpyxl
40,126,930
2
true
0
0
What do you mean by "extracting the formulae faster"? They are stored with each cell so you have to go cell by cell. When it comes to parsing, openpyxl includes a tokeniser which you might find useful. In theory this would allow you to read the worksheet XML files directly and only parse the nodes with formulae in them. However, you'd also have to handle the "shared formulae" that some applications use. openpyxl automatically converts such formulae into per-cell ones. Internally Pandas relies on xlrd to read the files, so the ETL of getting the stuff into Pandas won't be faster than working directly with worksheet objects.
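The tokeniser mentioned above can be used roughly like this; this is a sketch, and the exact import path and whether a separate parse() call is needed may vary between openpyxl versions:

```python
from openpyxl.formula import Tokenizer

tok = Tokenizer('=B1 + C1')
# depending on your openpyxl version you may need to call tok.parse() first
refs = [t.value for t in tok.items
        if t.type == 'OPERAND' and t.subtype == 'RANGE']
print(refs)   # expected: ['B1', 'C1']
```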
2
2
0
As part of a bigger set of tests I need to extract all the formulas within an uploaded Excel workbook. I then need to parse each formula into its respective range references and dump those references into a simple database. For example, if Cell A1 has a formula =B1 + C1 then my database would record B1 and C1 as referenced cells. Currently I read formulas one at a time using openpyxl and then parse them. This is fine for smaller workbooks, but for large workbooks it can be very slow. It feels entirely inefficient. Could pandas or a similar module extract Excel formulas faster? Or is there perhaps a better way to extract all workbook formulas than reading it one cell at a time? Any advice would be highly appreciated.
Fastest way to parse all Excel formulas using Python 3.5
1.2
1
0
994
40,117,180
2016-10-18T20:07:00.000
0
0
0
0
excel,python-3.x,pandas,openpyxl
40,118,989
2
false
0
0
Don't know about Python, but a fast approach to the problem is: get all the formulas in R1C1 mode into an array using SpecialCells, feed them into a collection/dictionary to get the uniques, then parse the uniques.
2
2
0
As part of a bigger set of tests I need to extract all the formulas within an uploaded Excel workbook. I then need to parse each formula into its respective range references and dump those references into a simple database. For example, if Cell A1 has a formula =B1 + C1 then my database would record B1 and C1 as referenced cells. Currently I read formulas one at a time using openpyxl and then parse them. This is fine for smaller workbooks, but for large workbooks it can be very slow. It feels entirely inefficient. Could pandas or a similar module extract Excel formulas faster? Or is there perhaps a better way to extract all workbook formulas than reading it one cell at a time? Any advice would be highly appreciated.
Fastest way to parse all Excel formulas using Python 3.5
0
1
0
994
40,120,312
2016-10-19T00:48:00.000
-3
0
0
1
python,django,celery,amazon-elastic-beanstalk,celerybeat
40,166,437
2
true
1
0
In case someone experience similar issues: I ended up switching to a different Queue / Task framework for django. It is called django-q and was set up and working in less than an hour. It has all the features that I needed and also better Django integration than Celery (since djcelery is no longer active). Django-q is super easy to use and also lighter than the huge Celery framework. I can only recommend it!
2
13
0
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment. So far I have used only a single instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that don't make it work for me: Using django cache + locking: This approach is more like a quick fix than a real solution. This can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also tasks are still submitted multiple times, this approach only makes sure that execution of the duplicates stops. Using leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the enviroment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async work loads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined urls. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed if the tasks are modified in the main app. How to I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django?
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
1.2
0
0
1,251
40,120,312
2016-10-19T00:48:00.000
1
0
0
1
python,django,celery,amazon-elastic-beanstalk,celerybeat
54,745,929
2
false
1
0
I guess you could split celery beat out into a different group. Your auto scaling group runs multiple django instances, but celery is not included in the ec2 config of the scaling group. You should have a different set of instances (or just one) for celery beat.
2
13
0
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment. So far I have used only a single instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that don't make it work for me: Using django cache + locking: This approach is more like a quick fix than a real solution. This can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also tasks are still submitted multiple times, this approach only makes sure that execution of the duplicates stops. Using leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the enviroment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async work loads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined urls. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed if the tasks are modified in the main app. How to I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django?
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
0.099668
0
0
1,251
40,124,568
2016-10-19T07:12:00.000
1
0
0
0
python,django,nginx,gunicorn,django-media
40,125,586
1
true
1
0
You need to implement a solution for sharing files from one server to another. NFS is the standard in Unixes like Linux. An alternative is to use live mirroring, i.e. create a copy of the media files directory in the nginx server and keep it synchronized. There are probably many options for setting this up; I've successfully used lsyncd.
1
1
0
I have separate servers, one running NGINX and the other running gunicorn/Django. I managed to serve static files from NGINX directly, as recommended by the Django documentation, but I have an issue with files uploaded by users: they are uploaded to the server that has gunicorn, not the server that has NGINX, so users can't find their files and browse them. How do I upload files from Django to another server? Or how do I transfer files to the other server after uploading, so NGINX can serve them? Note: I don't have the CDN option, I'll serve my statics from my own servers.
Serve uploaded files from NGINX server instead of gunicorn/Django
1.2
0
0
167
40,126,407
2016-10-19T08:47:00.000
2
0
0
0
python,image,image-processing,rgb
40,127,791
1
true
0
0
Per color plane, replace the pixel at (X, Y) by the pixel at (X-1, Y+3), for example. (Of course your shifts will be different.) You can do that in-place, taking care to loop by increasing or decreasing coordinate to avoid overwriting. There is no need to worry about transparency.
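A rough numpy sketch of that per-plane shift; the offsets and function name are arbitrary examples:

```python
import numpy as np

def rgb_shift(img, shifts=((0, 0), (-1, 3), (2, -2))):
    """img: H x W x 3 array; shifts: one (dy, dx) offset per channel."""
    out = np.empty_like(img)
    for c, (dy, dx) in enumerate(shifts):
        # np.roll wraps around at the edges; use slicing/padding instead if wrap-around is unwanted
        out[..., c] = np.roll(np.roll(img[..., c], dy, axis=0), dx, axis=1)
    return out
```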
1
1
0
What I'm trying to do is recreating what is commonly called an "RGB shift" effect, which is very easy to achieve with image manipulation programs. I imagine I can "split" the channels of the image by either opening the image as a matrix of triples or opening the image three times and every time operate just on one channel, but I wouldn't know how to "offset" the channels when merging them back together (possibly by creating a new image and position each channel's [0,0] pixel in an offsetted position?) and reduce each channel's opacity as to not show just the last channel inserted into the image. Has anyone tried to do this? Do you know if it is possible? If so, how did you do it? Thanks everyone in advance!
Split and shift RGB channels in Python
1.2
0
0
1,180
40,128,751
2016-10-19T10:28:00.000
1
0
0
1
python-2.7,opencv,ubuntu
45,497,131
4
false
0
0
sudo apt-get install build-essential cmake git pkg-config sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev sudo apt-get install libgtk2.0-dev sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev sudo apt-get install libatlas-base-dev gfortran sudo apt-get install python2.7-dev sudo pip install numpy sudo apt-get install python-opencv Then you can have a try: $ python Python 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import cv >>> import cv2
1
10
1
I have tried a lot of online posts to install opencv but they are not working for Ubuntu 16.04. May anyone please give me the steps to install openCV 2.4.13 on it?
How to install openCV 2.4.13 for Python 2.7 on Ubuntu 16.04?
0.049958
0
0
29,525
40,136,285
2016-10-19T15:53:00.000
3
0
0
0
python,django,passwords,password-encryption
40,136,359
1
true
1
0
No, there is no logical way of doing this that doesn't imply a huge security breach in the software. If the passwords are stored correctly (salted and hashed), then even site admins with unrestricted access on the database can not tell you what the passwords are in plain text. You should push back against this unreasonable request. If you have a working "password reset" functionality, then nobody but the user ever needs to know a user's password. If you don't have a reliable "password reset" feature, then try and steer the conversation and development effort in this direction. There is rarely any real business need for knowing/printing user passwords, and these kind of feature requests may be coming from non-technical people who have misunderstandings (or no understanding) about the implementation detail of authentication and authorization.
1
1
0
I am working on Django 1.9 project and I have been asked to enable some users to print a page with a list of a set of users and their passwords. Of course passwords are encrypted and there is no out-of-the-box ways of doing this. I know this would imply a security breach so my question is kind of contradictory, but is there any logical way of doing this that doesn't imply a huge security breach in the software?
How to expose user passwords in the most "secure" way in django?
1.2
0
0
288