Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
40,878,993 | 2016-11-30T01:57:00.000 | 0 | 0 | 1 | 0 | python,exe,anaconda | 40,879,151 | 2 | false | 0 | 0 | Check if a python package called py2exe is installed. Try using that. | 2 | 0 | 0 | I have installed anaconda. How can I convert the .py to .exe by anaconda only without the need to install other things like pyInstaller? I cannot easily install other packages in my company | Convert python to exe by anaconda | 0 | 0 | 0 | 1,567 |
40,878,993 | 2016-11-30T01:57:00.000 | 0 | 0 | 1 | 0 | python,exe,anaconda | 69,865,934 | 2 | false | 0 | 0 | Install auto-py-to-exe with pip install auto-py-to-exe and then run auto-py-to-exe | 2 | 0 | 0 | I have installed anaconda. How can I convert the .py to .exe by anaconda only without the need to install other things like pyInstaller? I cannot easily install other packages in my company | Convert python to exe by anaconda | 0 | 0 | 0 | 1,567 |
40,879,007 | 2016-11-30T01:58:00.000 | 0 | 0 | 1 | 1 | python,pywin32,python-3.6 | 44,041,940 | 2 | false | 0 | 0 | Simply rename
HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.6-32
To:
HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.6
This worked for Python 3.6.1 as well.
Taken from the link above. | 1 | 1 | 0 | I just installed python 3.6.0b4 (default, Nov 22 2016) amd64 on my Win 7 computer. When I try to install pywin32-220.win-amd64-py3.6 I get the error message Python version 3.6-32 required, which was not found in the registry.
Python version 3.6-32 sounds like the 32bit version, which seems inappropriate. Perhaps I misunderstand.
I've seen posts about a similar problem installing pywin 3.5-32, but none relating to 3.6b4 or the 64 bit version.
How do I fix this? | Python 3.6.0b4 amd64 - pywin32-220.win-amd64-py3.6 can't find python 3.6-32 | 0 | 0 | 0 | 4,192 |
40,879,385 | 2016-11-30T02:45:00.000 | 1 | 0 | 1 | 0 | python,version-control,visual-studio-code | 47,595,986 | 1 | false | 0 | 0 | So, I know this question is a year old, but I just had the same one and did some digging. It looks to me like the objectdb stores information about rope.base.resources.File objects, which include the local path of the file. As such, checking in the ObjectDB doesn't make a lot of sense. The config.py file, on the other hand, seems like it could be useful to share for a project. | 1 | 4 | 0 | The file .vscode/.ropeproject/objectdb has been modified (created) after applying refactoring to some python code (using DonJayamanne's pythonVSCode extension).
Should this objectdb file be excluded from versioning in vscode?
What info does it actually contain? | Should rope objectdb file be excluded from vcs? | 0.197375 | 0 | 0 | 617 |
40,883,050 | 2016-11-30T07:57:00.000 | 1 | 0 | 0 | 0 | python,c++,quantlib,quantlib-swig | 40,915,939 | 1 | false | 0 | 0 | As you found out, you can't just rename the library. When you compiled QuantLib, you chose the "Debug" configuration, which gave you QuantLib-vc140-mt-gd.lib. To get the QuantLib-vc140-mt.lib that Python is asking for, use the "Release" configuration instead. (Incidentally, the compiled library will also be a lot faster...) | 1 | 0 | 0 | I had QuantLib 1.9 built already (succeeded), then I tried to install QuantLib-Python from SWIG 1.9. I worked with VS2015,boost_1_62_0 (msvs-14.0 32bit), Anaconda3, QuantLib-1.9, QuantLib-SWIG-1.9 and swigwin-3.0.10,all in the same folder.
When I did "python setup.py build" in the dev command prompt for VS2015, I came across the error: Link: fatal error LINK1104: cannot open file 'QuantLib-vc140-mt.lib'. So I went to the QuantLib-lib folder and found that the lib file in there is called "QuantLib-vc140-mt-gd.lib". I made a copy of it, renamed it to 'QuantLib-vc140-mt.lib' and ran the build command again; this time it ran longer but I got this new error under some of the obj files: "quantlib fatal error LNK2001: unresolved external symbol __imp___CrtDbgReportW"
I am really new to the subject and would really appreciate if someone could shed some light on this. | QuantLib 1.9 Fatal Error when Build Python | 0.197375 | 0 | 0 | 155 |
40,885,331 | 2016-11-30T10:02:00.000 | -1 | 0 | 0 | 0 | python,refresh,reload | 40,885,562 | 2 | false | 0 | 0 | It's unclear what you mean by "reload", but the normal behavior of Python is that you need to restart the program for it to take a new look at a Python module and reread it. Also, you cannot add buttons inside a plain Python script ... for that you may need HTML or some other UI layer. | 1 | 0 | 0 | I am very new to Python and I wanted to ask how to reload a Python script: I want to make a button inside a script that says Reload, and when you press it, it refreshes the script, so that I do not have to close it and open it again. Is there any way to do this? I need a step by step guide.
Thanks | How to reload a Python script | -0.099668 | 0 | 0 | 1,588 |
40,887,074 | 2016-11-30T11:22:00.000 | 0 | 1 | 1 | 0 | python-sphinx | 41,701,850 | 1 | false | 0 | 0 | Very easy -- just precede with the following line:
.. highlight:: none
Otherwise Sphinx assumes it is Python code (default)! | 1 | 0 | 0 | I am trying to include in my source .rst file literal producing text like:
::
@reboot myscript
However @reboot appears in boldface. Did not find how to avoid it. | Character @ in a :: literal | 0 | 0 | 0 | 33 |
40,887,631 | 2016-11-30T11:48:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 40,947,457 | 1 | false | 0 | 0 | Regarding Problem 1, if you expect the different components of the target value to be independent, you can approach the problem as building a classifier for every component. That is, if the features are F = (F_1, F_2, ..., F_N) and the targets Y = (Y_1, Y_2, ..., Y_N), create a classifier with features F and target Y_1, a second classifier with features F and target Y_2, etc.
Regarding Problem 2, if you are not dealing with a time series, IMO the best you can do is simply predict the most frequent value for each feature.
That said, I believe your question fits better another stack exchange like cross-validated. | 1 | 1 | 1 | Disclaimer: I'm new to the field of Machine Learning, and even though I have done my fair share of research during the past month I still lack deep understanding on this topic.
I have been playing around with the scikit library with the objective of learning how to predict new data based on historic information, and classify existing information.
I'm trying to solve 2 different problems which may be correlated:
Problem 1
Given a data set containing rows R1 ... RN with features F1 ... FN, and a target per each group of rows, determine in which group does row R(N+1) belongs to.
Now, the target value is not singular, it's a set of values; The best solution I have been able to come up with is to represent those sets of values as a concatenation, this creates an artificial class and allows me to represent multiple values using only one attribute. Is there a better approach to this?
What I'm expecting is to be able to pass totally new set of rows, and being told which are the target values per each of them.
Problem 2
Given a data set containing rows R1 ... RN with features F1 ... FN, predict the values of R(N+1) based on the frequency of the features.
A few considerations here:
Most of the features are categorical in nature.
Some of the features are dates, so when doing the prediction the date should be in the future relative to the historic data.
The frequency analysis needs to be done per row, because certain sets of values may be invalid.
My question here is: Is there any process/ML algorithm, which given historic data would be able to predict a new set of values based on just the frequency of the parameters?
If you have any suggestions, please let me know. | Multi-Output Classification using scikit Decision Trees | 0 | 0 | 0 | 903 |
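A minimal sketch of the "one classifier per target component" idea from the answer above (toy data, all names hypothetical):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy data: 100 rows, 4 features, 2 categorical target components per row
X = np.random.rand(100, 4)
Y = np.random.randint(0, 3, size=(100, 2))

# train one tree per target component instead of concatenating the targets
models = [DecisionTreeClassifier().fit(X, Y[:, i]) for i in range(Y.shape[1])]

new_row = np.random.rand(1, 4)
print([m.predict(new_row)[0] for m in models])  # one prediction per component
```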
40,889,221 | 2016-11-30T13:08:00.000 | 0 | 1 | 0 | 0 | python,gdb,tensorflow,pdb | 68,941,661 | 3 | false | 0 | 0 | Adding on mrry's answer, in today's TF2 environment, the main entry point would be TFE_Execute, this should be where you add the breakpoint. | 1 | 6 | 0 | I am debugging decode_raw_op_test from TensorFlow. The test file is written in python however it executes code from underlying C++ files.
Using pdb, I could debug python test file however it doesn't recognize c++ file. Is there a way in which we can debug underlying c++ code?
(I tried using gdb on decode_raw_op_test but it gives "File not in executable format: File format not recognized") | Debugging TensorFlow tests: pdb or gdb? | 0 | 0 | 0 | 3,923 |
40,894,487 | 2016-11-30T17:23:00.000 | 3 | 0 | 1 | 0 | python,multithreading,web-crawler,python-multithreading | 40,894,613 | 2 | true | 1 | 0 | The rule of thumb when deciding whether to use threads in Python or not is to ask the question, whether the task that the threads will be doing, is that CPU intensive or I/O intensive. If the answer is I/O intensive, then you can go with threads.
Because of the GIL, the Python interpreter will run only one thread at a time. If a thread is doing some I/O, it will block waiting for the data to become available (from the network connection or the disk, for example), and in the meanwhile the interpreter will context switch to another thread. On the other hand, if the thread is doing a CPU intensive task, the other threads will have to wait till the interpreter decides to run them.
Web crawling is mostly an I/O oriented task, you need to make an HTTP connection, send a request, wait for response. Yes, after you get the response you need to spend some CPU to parse it, but besides that it is mostly I/O work. So, I believe, threads are a suitable choice in this case.
(And of course, respect the robots.txt, and don't storm the servers with too many requests :-) | 1 | 0 | 0 | I've made simple web-crawler with Python. So far everything it does it creates set of urls that should be visited, set of urls that was already visited. While parsing page it adds all the links on that page to the should be visited set and page url to the already visited set and so on while length of should_be_visited is > 0. So far it does everything in one thread.
Now I want to add parallelism to this application, so I need to have same kind of set of links and few threads / processes, where each will pop one url from should_be_visited and update already_visited. I'm really lost at threading and multiprocessing, which I should use, do I need some Pools, Queues? | Python threading or multiprocessing for web-crawler? | 1.2 | 0 | 1 | 1,352 |
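A minimal sketch of the thread-plus-queue layout the answer recommends (seed URL, thread count and the parsing step are placeholders):

```python
import queue
import threading
import urllib.request

to_visit = queue.Queue()
visited, visited_lock = set(), threading.Lock()

def worker():
    while True:
        url = to_visit.get()
        try:
            with visited_lock:               # keep the shared set consistent
                if url in visited:
                    continue
                visited.add(url)
            html = urllib.request.urlopen(url, timeout=10).read()
            # ...parse html here and to_visit.put() any newly found links...
        except Exception:
            pass                             # ignore fetch errors in this sketch
        finally:
            to_visit.task_done()

for _ in range(8):                           # 8 I/O-bound worker threads
    threading.Thread(target=worker, daemon=True).start()

to_visit.put("http://example.com")           # seed URL (placeholder)
to_visit.join()                              # block until the queue is drained
```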
40,894,943 | 2016-11-30T17:47:00.000 | 3 | 1 | 0 | 1 | python,c | 40,895,004 | 1 | true | 0 | 0 | By "Python's read" I assume you mean the read method of file objects. That method is closer in spirit to C's fread: it implements buffering and it tries to satisfy the requested amount, unless that is impossible due to an IO error or end-of-file condition.
If you really need to call the read() function available in many C environments, you can call os.read() to invoke the underlying C function. The only difference is that it returns the data read as a byte string, and it raises an exception in the cases when the C function would return -1.
If you call os.read(), remember to give it the file descriptor obtained using the fileno method on file objects, or returned by functions in the os module such as os.open, os.pipe, etc. Also remember not to mix calls to os.open() and file.open(), since the latter does buffering and can cause later calls to os.open() not to return the buffered data. | 1 | 1 | 0 | C's read:
The read() function shall attempt to read nbyte bytes from the file associated with the open file descriptor, fildes, into the buffer pointed to by buf.
Upon successful completion, these functions shall return a non-negative integer indicating the number of bytes actually read. Otherwise, the functions shall return −1 and set errno to indicate the error.
Python's read:
Read at most n characters from stream.
Read from underlying buffer until we have n characters or we hit EOF.
If n is negative or omitted, read until EOF.
Bold fonts are mine. Basically Python will insist on finding EOF if the currently available data is less than the buffer size... How do I make it simply return whatever is available? | What's Python's equivalent to C's read function? | 1.2 | 0 | 0 | 1,069 |
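A small sketch of the os.read() behaviour described in the answer (using a pipe so it runs anywhere):

```python
import os

r, w = os.pipe()
os.write(w, b"partial data")

# like C's read(): returns the bytes currently available, up to 65536,
# instead of waiting for the full amount or EOF
chunk = os.read(r, 65536)
print(chunk)          # b'partial data'

os.close(r)
os.close(w)
```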
40,896,575 | 2016-11-30T19:25:00.000 | 0 | 1 | 1 | 0 | python | 40,896,863 | 2 | false | 0 | 0 | As lucasnadalutti mentioned, you can access them by importing your module.
In terms of advantages, it can make your main program care less about where the imports are coming from if the imp_mod handles all imports, however, as your program gets more complex and starts to include more namespaces, this approach can get more messy. You can start to handle a bit of this by using __init__.py within directories to handle imports to do a similar thing, but as things get more complex, personally, I feel it add a little more complexity. I'd rather just know where a module came from to look it up. | 1 | 2 | 0 | If I were to create a module that was called for example imp_mod.py and inside it contained all (subjectively used) relevant modules that I frequently used.
Would importing this module into my main program allow me access to the imports contained inside imp_mod.py?
If so, what disadvantages would this bring?
I guess a major advantage would be a reduction of time spent importing even though its only a couple of seconds saved... | What benefits or disadvantages would importing a module that contains 'import' commands? | 0 | 0 | 0 | 2,324 |
40,898,087 | 2016-11-30T20:58:00.000 | 0 | 1 | 0 | 0 | debian,python-3.5,file-sharing | 40,911,268 | 1 | true | 0 | 0 | Probably the easiest option to achieve this, that is also secure, is to use sshfs between the servers. | 1 | 0 | 0 | Right, I have a bot that has 2 shards, each on its own server. I need a way to share data between the two, preferably as files, but I'm unsure how to achieve this.
The bot is completely python3.5 based
The servers are both running Headless Debian Jessie
The two servers aren't connected via LAN, so this has to be sharing data over the internet
The data doesn't need to be encrypted, as no sensitive data is shared | Share data between two scripts on different servers | 1.2 | 0 | 1 | 27 |
40,900,493 | 2016-12-01T00:09:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,shell,convention | 48,886,552 | 1 | false | 0 | 0 | I don't think that this is a good python question since the exit code isn't normally used by python unless you're calling one python program from another -- But the general point here remains... The primary consumer of the exit code will be a shell or something that is interested in process management. As such, 0 is always used for success. Other conventions aren't so strong -- Just document the exit codes if you have more than 2 :-) and maybe tag this with shell or something to raise it with a more suitable audience.
– mgilson | 1 | 0 | 0 | According to the Python documentation:
The optional argument arg can be an integer giving the exit status (defaulting to zero), or another type of object. If it is an integer, zero is considered “successful termination” and any nonzero value is considered “abnormal termination” by shells and the like. Most systems require it to be in the range 0-127, and produce undefined results otherwise. Some systems have a convention for assigning specific meanings to specific exit codes, but these are generally underdeveloped; Unix programs generally use 2 for command line syntax errors and 1 for all other kind of errors.
So if I wanted to use the rest of the integer range for exit codes, would there be a convention as to what numbers should be used? I've looked on this site and on others but I cannot find anything related to an agreement on the codes beyond 0,1,2. Is there anything in Python philosophy that might dictate the values I should use? | Is there a convention for python exit codes? | 0.379949 | 0 | 0 | 4,276 |
40,902,238 | 2016-12-01T03:40:00.000 | 0 | 0 | 0 | 1 | php,python,google-app-engine,ftp | 40,904,184 | 1 | true | 1 | 0 | App Engine projects are not based on server virtual machines. App Engine is a platform as a service, not infrastructure as a service. Your code is packaged up and served on Google App Engine in a manner that can scale easily. App Engine is not a drop-in replacement for your old school web hosting, its quite a bit different.
That said, FTP is just a mechanism to move files. If your files just need to be processed by a job, you can look at providing an upload for your users where the files end up residing on Google Cloud Storage and then your cron job reads from that location and does any processing that is needed. What results from that processing might result in further considerations. Don't look at FTP being a requirement, but rather a means to moving files and you'll probably have plenty of options. | 1 | 0 | 0 | I'm running a PHP app on GCloud (Google App Engine). This app will require users to submit files for processing via FTP. A python cron job with process them.
Given that dev to prod is via the GAE deployment, I'm assuming there is no FTP access to the app folder structure.
How would I go about providing simple one-way FTP to my users? Can I deploy a Python project that will be a server? Or do I need to run a VM?
I've done some searching which suggests the VM option, but surely there are other options? | GCloud App needs FTP - do I need a VM or can I create an FTP app? | 1.2 | 0 | 0 | 50 |
40,902,567 | 2016-12-01T04:17:00.000 | 0 | 0 | 1 | 1 | python,macos,pip | 40,905,073 | 2 | false | 0 | 0 | Finally got it to work after I symlinked it with brew's Python.
It was not symlinked into /usr/local.
The command is simply brew link python and now which python will point to /usr/local/bin/python | 1 | 1 | 0 | I used brew to install Python 2.7 and now my Mac has 2 Python versions,
one in /usr/bin/python and another one in /usr/local/Cellar/python/2.7.12_2/
pip installed oursql to /usr/local/lib/python2.7/site-packages
What should I do about it? | How to solve the ImportError: No module named oursql error | 0 | 0 | 0 | 1,134 |
40,902,770 | 2016-12-01T04:39:00.000 | 1 | 0 | 1 | 0 | python,virtualenv | 40,902,817 | 2 | false | 0 | 0 | It is better to have a separate environment for every project. Maybe one project needs some package at version 1.3 and another needs 1.6. So it is much easier to have an environment for each project than to have one for all.
If you had only one environment you would have to update (change) packages every time you wanted to compile a project that needs some different versions. | 1 | 0 | 0 | I am new to Python. Just installed Anaconda and everything works fine. Also, the documentation mentioned it is good to configure a virtual environment.
Since Anaconda works like a virtual environment, I don't need to configure another virtual environment.
Right or wrong? | Once Anaconda is installed is there no need to install a virtual environment? | 0.099668 | 0 | 0 | 149 |
40,904,368 | 2016-12-01T06:58:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,nlp,pos-tagger | 43,810,812 | 1 | false | 0 | 0 | To calculate accuracy on any dataset, count the number of words (excluding start and stop symbols) that you tagged correctly, and divide by the total number of words (excluding start and stop symbols). | 1 | 1 | 1 | I am implementing the Viterbi Algorithm for POS-Tagger using the Brown-corpus as my data set. Now an important aspect of this NLP task is finding the accuracy of the model. So I really need help as what to implement. It's easier using the nltk toolkit but since I am not using a toolkit, I am stuck on how to determine the accuracy of my model. Any help, code examples or referral links would be appreciated. Thanks | Finding the accuracy of an HMM model for POS-Tagger | 0 | 0 | 0 | 582 |
40,906,584 | 2016-12-01T09:14:00.000 | 2 | 0 | 1 | 1 | python,eclipse,virtualenv,pydev | 40,910,909 | 1 | false | 0 | 0 | Not sure... by default, any run will get the 'default' interpreter (which is the first interpreter in Preferences > PyDev > Interpreters > Python interpreter -- you may reorder those using the up/down button in that screen).
Now, that's the default, you may also configure to use a different interpreter per project (select project > alt+Enter for its properties > PyDev - Interpreter/Grammar > Interpreter).
Or you can choose a different one per launch:
Menu > Run > Run Configurations > Select launch > Interpreter.
Also, you may want to double check to make sure that the paths in the interpreter configuration window (Preferences > PyDev > Interpreters > Python interpreter > select interpreter) actually map to the proper site-packages/external libs you expect. | 1 | 1 | 0 | I have Python2.7 installed and Python3.5 installed on my windows machine.These are at locations C:\Python27 and C:\Python35-32. Both these are added in System Path environment variables and can be accessed from any directory.
Now I create a virtualenv in the Python35-32 directory successfully under a sub-directory CODING_LABS.
I try to link/point my Eclipse Python interpreter to the python.exe file contained in CODING_LABS. This is done OK.
However, when I run my script from Eclipse, it still points to Python27. I am unable to figure out why. | Linking python virtual environment to eclipse | 0.379949 | 0 | 0 | 1,333 |
40,907,731 | 2016-12-01T10:07:00.000 | 1 | 0 | 1 | 0 | python,performance,numpy,processing-efficiency | 40,907,857 | 1 | true | 0 | 0 | Yes, both approaches will have to loop over the values in the two matrices. However, python is dynamically typed such that the body of the loop needs to check the types of the three indices used for iteration, ensure that indexing the two input matrices is supported, determine the type of the values extracted from the matrices, ...
The numpy implementation is, as you said, lower-level and makes stronger assumptions about the input and output. In particular, the matrix multiplication is implemented in a statically typed language (C or Fortran--I can't quite remember) such that the overhead of type checking disappears. Furthermore, indexing in lower-level languages is a relatively simple operation. | 1 | 0 | 1 | When you have two numPy matrix, you could call a dot function to multiply them. Or you could loop through each manually and multiply each value manually. Why and what is the speed difference ? Surely dot function still has to do that but Lower level? | What is the underlying difference between multiplying a matrix and looping through? | 1.2 | 0 | 0 | 35 |
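A rough timing sketch of the difference (matrix size is arbitrary; exact numbers will vary by machine):

```python
import time
import numpy as np

n = 100
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.time()
c_fast = a.dot(b)                      # typed, compiled inner loops
t1 = time.time()

c_slow = [[sum(a[i, k] * b[k, j] for k in range(n))   # interpreted loops with
           for j in range(n)] for i in range(n)]      # per-element type checks
t2 = time.time()

print("numpy dot: %.4fs   python loops: %.4fs" % (t1 - t0, t2 - t1))
```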
40,915,789 | 2016-12-01T16:33:00.000 | 1 | 0 | 1 | 1 | python,virtualenv | 40,916,124 | 2 | false | 0 | 0 | Since virtualenv copies python completely (including the binary) there is no way to know the exact path it originated from.
However, you can easily find the version by running ./python --version inside the environment's bin folder. | 1 | 0 | 0 | I have an old computer with dozens of old python projects installed, each with its own different virtualenv, and many of which built with different versions of python.
I'd prefer not to have to download these different versions when I create new virtualenvs via virtualenv -p whatever path that version of python has
My question is: within a virtualenv, is there a command I can run to find the path to the version of python which was used to create that particular environment?
For example, if I created a venv with 'virtualenv -p /usr/bin/python3.4' and then ran this command with the venv activated, it would return '/usr/bin/python3.4' | Discover path to python version used to make virtualenv | 0.099668 | 0 | 0 | 137 |
40,916,671 | 2016-12-01T17:19:00.000 | 5 | 0 | 1 | 0 | python,set,elements | 40,916,706 | 4 | false | 0 | 0 | Sets are unordered.
The remove command takes the element that you specify.
The pop takes any element. There's no way of predicting which | 3 | 7 | 0 | I'm reading about sets and see terms like "specific" elements and "arbitrary" elements. For example:
"The method remove removes a specific element from a set; pop removes an arbitrary element".
Can someone explain arbitrary elements? | What is an arbitrary element in Python? | 0.244919 | 0 | 0 | 5,252 |
40,916,671 | 2016-12-01T17:19:00.000 | 0 | 0 | 1 | 0 | python,set,elements | 40,916,937 | 4 | false | 0 | 0 | It is an element that is chosen without any particular rule or arbiter, rather than by a specific rule or structure. In mathematics, X + X = 2X where X is an arbitrary value which is not defined in the equation. | 3 | 7 | 0 | I'm reading about sets and see terms like "specific" elements and "arbitrary" elements. For example:
"The method remove removes a specific element from a set; pop removes an arbitrary element".
Can someone explain arbitrary elements? | What is an arbitrary element in Python? | 0 | 0 | 0 | 5,252 |
40,916,671 | 2016-12-01T17:19:00.000 | 0 | 0 | 1 | 0 | python,set,elements | 57,626,194 | 4 | false | 0 | 0 | According to what I've seen, the pop method removes an arbitrary element from the set. In the case of numbers, it removes the lesser positive number
e.g
nums = {4, 3, 3, 3, 3, 4, 5, 6, 1 ,-3}
print(nums)
nums.add(-7)
nums.pop()
print(nums)
RESULT:
{1, 3, 4, 5, 6, -3}
{3, 4, 5, 6, -7, -3} | 3 | 7 | 0 | I'm reading about sets and see terms like "specific" elements and "arbitrary" elements. For example:
"The method remove removes a specific element from a set; pop removes an arbitrary element".
Can someone explain arbitrary elements? | What is an arbitrary element in Python? | 0 | 0 | 0 | 5,252 |
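A tiny sketch of the remove/pop distinction discussed above:

```python
s = {"a", "b", "c"}

s.remove("b")     # specific: "b" is removed; raises KeyError if it were absent
x = s.pop()       # arbitrary: the implementation picks some element for you

print(x, s)       # which element was popped is not something you should rely on
```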
40,919,809 | 2016-12-01T20:30:00.000 | 2 | 0 | 0 | 1 | python,multithreading,asynchronous,couchdb,tornado | 40,922,571 | 1 | true | 0 | 0 | Use AsyncHTTPClient or CurlAsyncHTTPClient. Since the "requests" library is synchronous, it blocks the Tornado event loop during execution and you can only have one request in progress at a time. To do asynchronous networking operations with Tornado requires purpose-built asynchronous network code, like CurlAsyncHTTPClient.
Yes, CurlAsyncHTTPClient is a bit faster than AsyncHTTPClient, you may notice a speedup if you stream large amounts of data with it.
async and await are faster than gen.coroutine and yield, so if you have yield statements that are executed very frequently in a tight loop, or if you have deeply nested coroutines that call coroutines, it will be worthwhile to port your code. | 1 | 0 | 0 | How can I minimize the thread lock with Tornado? Actually, I have already the working code, but I suspect that it is not fully asynchronous.
I have a really long task.
It consists of making several requests to CouchDB to get meta-data and to construct a final link. Then I need to make the last request to CouchDB and stream a file (from 10 MB up to 100 MB). So, the result will be the streaming of a large file to a client.
The problem is that the server can receive 100 simultaneous requests to download large files, and I need to not lock the thread and keep receiving new requests (I have to minimize the thread lock).
So, I am making several synchronous requests (requests library) and then stream a large file with chunks with AsyncHttpClient.
The questions are as follows:
1) Should I use AsyncHTTPClient EVERYWHERE? Since I have some interface it will take quite a lot of time to replace all synchronous requests with asynchronous ones. Is it worth doing it?
2) Should I use tornado.curl_httpclient.CurlAsyncHTTPClient? Will the code run faster (file download, making requests)?
3) I see that Python 3.5 introduced async and theoretically it can be faster. Should I use async or keep using the decorator @gen.coroutine? | Optimize Async Tornado code. Minimize the thread lock | 1.2 | 0 | 0 | 148 |
40,925,079 | 2016-12-02T04:57:00.000 | 1 | 0 | 0 | 0 | android,python,opencv,image-processing | 42,197,304 | 1 | true | 0 | 1 | This process can be done via NDK & OpenCV with Java support in Android Studio.
You need to compile NDK libraries and generate .so files. Then the project will work. | 1 | 1 | 1 | I am working on my OpenCV project with Python which basically recognizes hand gestures. I want to do the same using Android. Is it possible? Is there any way to do so?
I want to recognize very basic hand gestures using my Android device. Is it possible with Python & OpenCV with Android? Also share any other way possible. | How can I integrate my Python based OpenCV project in Android? | 1.2 | 0 | 0 | 220 |
40,927,573 | 2016-12-02T08:17:00.000 | 0 | 0 | 0 | 0 | jquery,python,json,linux,nfc | 40,928,045 | 1 | true | 1 | 0 | The login flow: the user starts the browser, goes to your website, and instead of manually entering credentials clicks on "log in via NFC." The server records an identification session for that IP and timestamp (and maybe other info about the client hardware, for safety) in the database and "expects" incoming NFC data.
On the client PC / phone you'll have to install your own application/service, which will be able to receive data from the NFC scanner (which usually works as a keyboard) and send it to your server, e.g. via ASP.NET WebAPI or another REST endpoint...
The server will accept the data from that IP, find the matching record in the database for that IP and perform the login (+ a time limit? + checking the client hardware, for safety?). Then on the server side you have a confirmed logon and the user can proceed (you can redirect him to your secure site).
Note 1: The critical point is to pair the client browser and the PC/mobile application which reads the NFC tags correctly and safely.
Note. 2 You will need to select the appropriate NFC scanner, which will ideally have a standardized drivers built-in Win / Linux OS (otherwise you often solve the problem of missing / non-functional NFC drivers). | 1 | 0 | 0 | I have a web system where staff can log in with a username and password, then enter data. Is there a way to add the option for users to seamlessly log in just by swiping the card against an NFC scanner? The idea is to have multiple communal PCs people can walk up to and quickly authenticate.
It's important that the usual text login form works too for people using the site on PCs or phones without the NFC option.
The web client PCs with an NFC scanner could be linux or windows.
(The web system is a bootstrap/jquery site which gets supplied with JSON data from a python web.py backend. I'm able to modify the server and the client PCs.) | Optional NFC login to web based system | 1.2 | 0 | 1 | 1,332 |
40,930,450 | 2016-12-02T10:50:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,subprocess,external,executable | 40,985,274 | 1 | true | 0 | 0 | As I've not had any response I've kind of gone down a different route with this. Rather than relying on the subprocess module to call the exe I have moved that logic out into a batch file. The xmls are still modified by the python script and most of the logic is still handled in script. It's not what ideally would have liked from the program but it will have to do.
Thanks to anybody who gave this some thought and tried to at least look for an alternative. Even if nobody answered. | 1 | 0 | 0 | I am currently getting an issue with an external executable crashing when it is launched from a Python script. So far I have tried using various subprocess calls. As well as the more redundant methods such as os.system and os.startfile.
Now the exe doesn't have this issue when I call it normally from the command line or by double-clicking on it from the explorer window. I've looked around to see if other people have had a similar problem too. As far as I can tell the closest possible cause of this issue is that the child process unnecessarily hangs due to the I/O exceeding 65K. So I've tried using Popen without PIPES and I have also changed the stdout and stdin to write to temporary files to try and alleviate my problem. But unfortunately none of this has worked.
What I eventually want to do is be able to autorun this executable several times with various outputs provided by xmls. Everything else is pretty much in place, including the xml modifications which the executable requires. I have also tested the xml modification portion of the code as a standalone script to make sure that this isn't the issue.
Due to the nature of script I am a bit reluctant to put up any actual code up on the net as the company I work for is a bit strict when it comes to showing code. I would ask my colleagues if I could but unfortunately I'm the only one here who actually has used python.
Any help would be much appreciated.
Thanks. | External executable crashes when being launched from Python script | 1.2 | 0 | 0 | 146 |
40,937,544 | 2016-12-02T17:10:00.000 | 0 | 0 | 1 | 0 | python,django,web-applications,visual-studio-code,atom-editor | 63,690,785 | 5 | false | 1 | 0 | Nothing worked for me until I had disabled auto reload (--noreload as an argument is crucial, not really sure why it causes problem with debugging) | 1 | 26 | 0 | I'm new at django development and come from desktop/mobile app development with Xcode and related IDE.
I have to use Django and I was wondering if there was an efficient way to debug it using Visual Studio Code (or Atom).
Any help related to Django IDE would be helpful too. | How to use visual studio code to debug django | 0 | 0 | 0 | 35,237 |
40,939,604 | 2016-12-02T19:28:00.000 | 0 | 0 | 0 | 0 | python,facebook | 45,242,035 | 1 | false | 0 | 0 | I faced a similar problem. Now to get the picture of a post you can call
{post-id}?fields=picture,full_picture
where picture returns the url of a thumbnail of the image, while full_picture returns the url of the real image. | 1 | 0 | 0 | I am trying to get pictures from photo type posts on Facebook. I am using Python. I tried to access post_id/picture, but I keep getting:
facebook.GraphAPIError: (#12) picture edge for this type is deprecated for versions v2.3 and higher
Is there any alternative to the picture edge in v2.8? The documentation still lists picture as an option.
Thanks. | Facebook API get picture from post | 0 | 0 | 1 | 298 |
40,941,476 | 2016-12-02T21:59:00.000 | 69 | 0 | 1 | 0 | python,jupyter-notebook | 40,941,511 | 2 | true | 0 | 0 | Type the fully qualified function name, then type ??. | 1 | 27 | 0 | For example, I want to check the source code of a python library, directly in the notebook, is there a way to do that?
Thank you | How can I check source code of a module in Jupyter notebook? | 1.2 | 0 | 0 | 24,254 |
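For the same result outside a notebook cell, inspect.getsource() works for anything implemented in Python (a small sketch):

```python
import inspect
import json

print(inspect.getsource(json.dumps))   # full source of the function

# inside Jupyter you would simply run:
#   json.dumps??
```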
40,942,308 | 2016-12-02T23:18:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,user-interface,tkinter | 40,949,445 | 2 | false | 0 | 1 | @Manny102030
I got this code. Basically what I want is to insert a node in a tree with the value that the user inputs in the Tkinter window. I don't know if the value the user is entering is actually being inserted, because I can't figure out how to call the function that I created to print the tree (the function is also in the BST class).
What I did was create the BST in the mainWindow class, then in the BST I open the window for the user to type the value, and when they click "Ok" it calls the insert function. Then in insert I pass the value from the user to create the node to put in the tree... Any improvements/ideas on how to call the function to print the tree?
from tkinter import Toplevel, Label, Entry, Button  # widgets used below (import assumed missing from the snippet)
class mainWindow(object):
def __init__(self,master):
self.master = master
self.b=Button(master,text="Add value",command=self.popupAdd)
self.b.pack()
def popupAdd(self):
self.w=BST(self.master)
self.master.wait_window(self.w.top)
class BST(object):
def __init__(self,master):
self._root = None
top=self.top=Toplevel(master)
self.l=Label(top,text="Add a new value")
self.l.pack()
self.e=Entry(top)
self.e.pack()
self.b=Button(top,text='Ok',command=self.insert)
self.b.pack()
def insert(self):
valor = self.e.get()  # value typed by the user (defined here because it is used below)
novo = No(valor)  # insert value in a new node (No is the node class, assumed defined elsewhere)
if self._root == None:
self._root = novo
else:
pai = self._root
temp = self._root
while temp != None:
if valor > temp.getValor():
pai = temp
temp = temp.getRight()
elif (valor < temp.getValor()):
pai = temp
temp = temp.getLeft()
else:
temp = None
print("Value Already exists")
if valor > pai.getValor():
pai.setRight(novo)
elif valor < pai.getValor():
pai.setLeft(novo)
self.top.destroy()
def printTree(self, root):
if root != None:
self.printTree(root.getLeft())
print(" " + str(root.getValor()), end="")
self.printTree(root.getRight()) | 1 | 0 | 0 | I have a code in Python that has various options such as Add, Remove, Search etc...
Can I make a GUI using Tkinter that, when it runs, shows buttons with all the options, and when you click for example "Add", an input box appears for the user to add a new value, then it goes back to the initial page, etc.? I do this very easily in Java using JOptionPane (not with buttons though). I tried searching for menus in Tkinter but that is not what I want (those are the ones that appear at the top left of the page)...
Appreciate all the help | Python TKinter Menu with Options | 0 | 0 | 0 | 883 |
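A hypothetical driver for the snippet above, assuming the mainWindow, BST and No (node) classes are defined in the same file:

```python
from tkinter import Tk

root = Tk()
app = mainWindow(root)    # shows the "Add value" button
root.mainloop()

# after a value has been added through the popup, the tree could be printed with
# something like: app.w.printTree(app.w._root)
```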
40,942,391 | 2016-12-02T23:25:00.000 | 2 | 0 | 0 | 0 | python,machine-learning | 40,942,885 | 2 | false | 0 | 0 | Since there's an unknown function that generates the output, it's a regression problem. Neural network with 2 hidden layers and e.g. sigmoid can learn any arbitrary function. | 2 | 2 | 1 | I need some orientation for a problem I’m trying to solve. Anything would be appreciated, a keyword to Google or some indication !
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking ? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items, I want to determine how they rank compared to each other by the ranking function.
I have to my disposition an infinite number of items since I generate them myself. The ranking function could be anything but let’s say it is :
attribute a score = 1/3 * ( x1 + x2 + x3 ) to each item and sort by descending score
The goal for the model is to guess as close as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance ! | ML model to predict rankings (arbitrary ordering of a list) | 0.197375 | 0 | 0 | 1,288 |
40,942,391 | 2016-12-02T23:25:00.000 | 1 | 0 | 0 | 0 | python,machine-learning | 40,947,117 | 2 | true | 0 | 0 | It could be treated as a regression problem with the following trick: You are given 5 items with 5 feature vectors and the "black box" function outputs 5 distinct scores as [1, 2, 3, 4, 5]. Treat these as continuous values. So, you can think of your function as operating by taking five distinct input vectors x1, x2, x3, x4, x5 and outputting five scalar target variables t1, t2, t3, t4, t5 where the target variables for your training set are the scores the items get. For example, if the ranking for a single sample is (x1,4), (x2,5), (x3,3), (x4,1), (x5,2) then set t1=4, t2=5, t3=3, t4=1 and t5=2. MLPs have the "universal approximation" capability and given a black box function, they can approximate it arbitrarily close, dependent on the hidden unit count. So, build a 2 layer MLP with the inputs as the five feature vectors and the outputs as the five ranking scores. You are going to minimize a sum of squares error function, the classical regression error function. And don't use any regularization term, since you are going to try to mimic a deterministic black fox function, there is no random noise inherent in the outputs of that function, so you shouldn't be afraid of any overfitting issues. | 2 | 2 | 1 | I need some orientation for a problem I’m trying to solve. Anything would be appreciated, a keyword to Google or some indication !
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking ? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items, I want to determine how they rank compared to each other by the ranking function.
I have to my disposition an infinite number of items since I generate them myself. The ranking function could be anything but let’s say it is :
attribute a score = 1/3 * ( x1 + x2 + x3 ) to each item and sort by descending score
The goal for the model is to guess as close as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance ! | ML model to predict rankings (arbitrary ordering of a list) | 1.2 | 0 | 0 | 1,288 |
40,943,923 | 2016-12-03T03:21:00.000 | 2 | 0 | 0 | 0 | python,flask | 58,601,756 | 2 | false | 1 | 0 | As far as routing goes, Pluggable Views (aka class-based views) are far superior to Blueprints, which are just a bunch of functions with route decorators.
The Pluggable Views paradigm facilitates code reuse by organizing view logic in classes and subclassing them. URL routes are registered with an app.add_url_rule() call, which is great because it follows the S in the SOLID principles (separation of concerns). In the Blueprints approach, each piece of front-end view logic is encapsulated within a view function, which is not well suited for code reuse. | 1 | 2 | 0 | What's the difference between PluggableViews and Blueprint in Python Flask? | Difference between Pluggable Views and Blueprint in Python Flask | 0.197375 | 0 | 0 | 3,114 |
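A minimal sketch of a pluggable (class-based) view registered via add_url_rule, as described above (route and names are made up):

```python
from flask import Flask
from flask.views import MethodView

app = Flask(__name__)

class UserAPI(MethodView):
    def get(self, user_id):
        return "user %d" % user_id        # handles GET /users/<id>

# routing is declared separately from the view logic
app.add_url_rule("/users/<int:user_id>", view_func=UserAPI.as_view("user_api"))
```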
40,947,387 | 2016-12-03T11:46:00.000 | 0 | 0 | 1 | 1 | python,memory,memory-management | 40,947,523 | 3 | false | 0 | 0 | You can just open task manager and look how much ram does it take. I use Ubuntu and it came preinstalled. | 1 | 0 | 0 | I am running python program using Linux operating system and i want to know how much total memory used for this process. Is there any way to determine the total memory usage ? | total memory used by running python code | 0 | 0 | 0 | 494 |
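Besides watching a task manager as the answer suggests, a standard-library sketch for checking the current process from inside the script (Linux reports ru_maxrss in kilobytes):

```python
import resource

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("peak resident memory: %.1f MB" % (peak_kb / 1024.0))
```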
40,948,991 | 2016-12-03T14:45:00.000 | 1 | 0 | 0 | 1 | python,linux,debian | 40,949,085 | 1 | true | 0 | 0 | You gave nohup one single argument containing spaces and quotes, and it failed to find a command with that name. Split it so the command is openvpn, with two more arguments (you'll probably find the extra quotes around the last argument shouldn't be there either). Sometimes this job is left to a shell, as with the system function, but that is in general riskier (similar to SQL injection) and inefficient (running another process for a trivial task). | 1 | 0 | 0 | For some strange reason when i run a python script with:
subprocess.Popen(["nohup", "openvpn --config '/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn'"])
I get
nohup: failed to run command ‘openvpn --config '/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn'’: No such file or directory
I can run openvpn --config "/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn" with no errors. I've also tried running other commands and get the exact same error. | nohup: failed to run command | 1.2 | 0 | 0 | 1,657 |
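A sketch of the fix the answer describes: pass each argument to Popen separately instead of one quoted string:

```python
import subprocess

subprocess.Popen([
    "nohup", "openvpn", "--config",
    "/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn",
])
```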
40,949,988 | 2016-12-03T16:34:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,computer-vision,neural-network | 45,639,855 | 2 | false | 0 | 0 | It is good that you have created your own program. I would suggest you to keep experimenting with basic problems, such as MNIST by adding more hidden layers, plotting variation of loss with training iterations using different learning rates, etc.
In general, the learning rate should not be kept high initially when the network weights are random and it is a good practice to keep decreasing the learning rate over training period. Plotting the values of loss or error function w.r.t training iterations will give you good insight regarding this. If learning rate is very high, loss will fluctuate and vary too much. If learning rate is very small, loss will decrease very slowly with training iterations. If you are interested, read about this in Andrew Ng's course or some blog.
About your question regarding number of hidden layers and neurons, it better to start experimenting with lower number initially, such as 1 hidden layer and 30 neurons in your case. In your next experiment, you can have 2 hidden layers, however, keep track of number of learning parameters (weights and biases) compared to training samples you have because small training samples with large number of network parameters can overfit your network.
After experimenting with small problems, you can try same thing with some framework, let's say Tensorflow, after which you can attempt more challenging problems. | 2 | 1 | 1 | I have built a system where the neural network can change size (number of and size of hidden layers, etc). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problem (binary inputs, etc). Works really well.
When I then make the structure 784 inputs, 1 hidden layer of 30 neurons, and 10 outputs, and apply the MNIST digit set, where each input is a pixel value, I simply cannot get good results (no better than random!). My question is quite theory based: If my code does seem to work with the other problems, should I assume i need to keep experimenting different learning rates, hidden layers, etc for this one? Or should decide theres a more underlying problem?
How do I find the right combo of layers, learning rate, etc? How would you go about this?
Testing is also difficult as it takes about 2 hours to get to a point where it should have learnt... (on a mac)
No, I am not using TensorFlow, or other libraries, because I am challenging myself. Either way, it does work ...to a point!
Many thanks. And apologies for the slightly abstract question - but I know its a problem many beginners have - so I hope it helps others too. | how to define an issue with neural networks | 0 | 0 | 0 | 24 |
40,949,988 | 2016-12-03T16:34:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,computer-vision,neural-network | 40,950,086 | 2 | false | 0 | 0 | A quick advice may be to solve an intermediate task (e.g. to use your own 5x5 ASCII "pictures" of digits), to have more neurons in the hidden layer, to reduce the data set for quicker simulation, to compare your implementation to other custom implementations in your programming language. | 2 | 1 | 1 | I have built a system where the neural network can change size (number of and size of hidden layers, etc). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problem (binary inputs, etc). Works really well.
When I then make the structure 784 inputs, 1 hidden layer of 30 neurons, and 10 outputs, and apply the MNIST digit set, where each input is a pixel value, I simply cannot get good results (no better than random!). My question is quite theory based: If my code does seem to work with the other problems, should I assume i need to keep experimenting different learning rates, hidden layers, etc for this one? Or should decide theres a more underlying problem?
How do I find the right combo of layers, learning rate, etc? How would you go about this?
Testing is also difficult as it takes about 2 hours to get to a point where it should have learnt... (on a mac)
No, I am not using TensorFlow, or other libraries, because I am challenging myself. Either way, it does work ...to a point!
Many thanks. And apologies for the slightly abstract question - but I know its a problem many beginners have - so I hope it helps others too. | how to define an issue with neural networks | 0 | 0 | 0 | 24 |
40,954,342 | 2016-12-04T01:14:00.000 | 3 | 0 | 1 | 0 | python | 40,954,421 | 2 | true | 0 | 0 | Best is to use PYTHONPATH. Set it to the path where your common modules are found, before running Python. Then you can just do import my_funcs for example. | 1 | 0 | 0 | now I have a folder named my_funcs which have __init__.py and some .py files containing some functions and classes I wrote that I want to use for several projects.
So I want to know the best practice for these projects to direct import from this folder.
one solution is to sys.path.append('.../my_funcs'), in this case I will have to put this in front of the import statement for every .py file.
Any suggestions? BTW, I'm on Windows | Best practice for common functions used by several Python projects | 1.2 | 0 | 0 | 78 |
40,957,229 | 2016-12-04T09:31:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,nltk | 40,957,243 | 1 | false | 0 | 0 | Tokenize the string into words.
Use set membership operators, which are quick, to eliminate leading/trailing tokens while they match the list of stopwords.
If the next step really needs a string, then concatenate the list of words back into one with the idiomatic ' '.join(your_list) | 1 | 1 | 0 | Working using NLTK and I am prototyping a project I have in mind. I come from PHP so Python is a little unknown for me.
I have a list of stopwords and an n-word string, n being between 1 and 4.
I want to clean that string by trimming both ends of any stopwords. If I need to retest the string after I remove a stopword because there might be another one right after it.
How would you do that performance-wise in Python? | How to remove stopwords at the beginning or the end of a string in Python? | 0.197375 | 0 | 0 | 324 |
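A small sketch of the trim-from-both-ends idea in the answer (whitespace tokenization for simplicity; NLTK's word_tokenize could be swapped in, and the stopword set here is a stand-in):

```python
stopwords = {"the", "a", "an", "of", "in"}   # placeholder list

def trim_stopwords(text):
    tokens = text.split()                    # or nltk.word_tokenize(text)
    while tokens and tokens[0].lower() in stopwords:
        tokens.pop(0)                        # drop leading stopwords
    while tokens and tokens[-1].lower() in stopwords:
        tokens.pop()                         # drop trailing stopwords
    return " ".join(tokens)

print(trim_stopwords("the big red house of"))   # -> "big red house"
```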
40,957,552 | 2016-12-04T10:17:00.000 | 1 | 0 | 1 | 1 | python,python-multiprocessing | 40,957,775 | 1 | true | 0 | 0 | Yes, you understand correctly.
According to documentation, this method only needed to maintain the multiprocessing module in working condition under frozen exe under Windows. | 1 | 2 | 0 | Am I understand correctly, that multiprocessing.freeze_support() need only to compile .py script to .exe in windows? Or is it used in other things? | Freeze_support what is needed? | 1.2 | 0 | 0 | 1,868 |
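A minimal sketch of where freeze_support() sits; it is a no-op except inside a frozen Windows executable:

```python
from multiprocessing import Process, freeze_support

def work():
    print("child process running")

if __name__ == "__main__":
    freeze_support()              # only matters for frozen .exe builds on Windows
    p = Process(target=work)
    p.start()
    p.join()
```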
40,957,941 | 2016-12-04T11:06:00.000 | 0 | 0 | 0 | 0 | python-3.x,tkinter | 54,459,365 | 1 | false | 0 | 1 | You can use PyAutoGUI and Tkinter to:
Get current mouse position relative to desktop coordinates.
Minimize tkinter window or drag it out of the screen.
Simulate click event when the window will be hidden.
Return back tkinter window.
It should work but I'm not sure how fast it would be. | 1 | 0 | 0 | I'm trying to write a semi-transparent click-through program to use like onion skin over my 3D application.
The one thing I couldn't find by googling is how to make the window click-through. Is there an attribute or something for it in tkinter? Or maybe some way around it? | python 3.4 tkinter, click trough window | 0 | 0 | 0 | 50 |
40,958,107 | 2016-12-04T11:28:00.000 | 1 | 0 | 0 | 1 | python,postgresql,mutex,kubernetes,distributed-system | 40,968,608 | 1 | true | 0 | 0 | A completely different approach would be to run a (web) server that executes the job functionality. At a high level, the idea is that the webserver can contact this new job server to execute functionality. In addition, this new job server will have an internal cron to trigger the same functionality every 2 hours.
There could be 2 approaches to implementing this:
You can put the checking mechanism inside the jobserver code to ensure that even if 2 API calls happen simultaneously to the job server, only one executes, while the other waits. You could use the language platform's locking features to achieve this, or use a message queue.
You can put the checking mechanism outside the jobserver code (in the database) to ensure that only one API call executes. Similar to what you suggested. If you use a postgres transaction, you don't have to worry about your job crashing and the value of the lock remaining set.
The pros/cons of both approaches are straightforward. The major difference in my mind between 1 & 2, is that if you update the job server code, then you might have a situation where 2 job servers might be running at the same time. This would destroy the isolation property you want. Hence, database might work better, or be more idiomatic in the k8s sense (all servers are stateless so all the k8s goodies work; put any shared state in a database that can handle concurrency).
Addressing your ideas, here are my thoughts:
Find a setting in k8s that will limit this: k8s will not start things with the same name (in the metadata of the spec). But anything else goes for a job, and k8s will start another job.
a) etcd3 supports distributed locking primitives. However, I've never used this and I don't really know what to watch out for.
b) postgres lock value should work. Even in case of a job crash, you don't have to worry about the value of the lock remaining set.
Querying k8s API server for things that should be atomic is not a good idea like you said. I've used a system that reacts to k8s events (like an annotation change on an object spec), but I've had bugs where my 'operator' suddenly stops getting k8s events and needs to be restarted, or again, if I want to push an update to the event-handler server, then there might be 2 event handlers that exist at the same time.
I would recommend sticking with what you are best familiar with. In my case that would be implementing a job-server like k8s deployment that runs as a server and listens to events/API calls. | 1 | 0 | 0 | I have a Python program that I am running as a Job on a Kubernetes cluster every 2 hours. I also have a webserver that starts the job whenever user clicks a button on a page.
I need to ensure that at most only one instance of the Job is running on the cluster at any given time.
Given that I am using Kubernetes to run the job and connecting to Postgresql from within the job, the solution should somehow leverage these two. I thought a bit about it and came up with the following ideas:
Find a setting in Kubernetes that would set this limit, attempts to start second instance would then fail. I was unable to find this setting.
Create a shared lock, or mutex. Disadvantage is that if job crashes, I may not unlock before quitting.
Kubernetes is running etcd, maybe I can use that
Create a 'lock' table in Postgresql; when a new instance connects, it checks if it is the only one running. Use transactions somehow so that one wins and proceeds, while others quit. I have not yet thought this out, but it should work.
Query kubernetes API for a label I use on the job, see if there are some instances. This may not be atomic, so more than one instance may slip through.
What are the usual solutions to this problem given the platform choice I made? What should I do, so that I don't reinvent the wheel and have something reliable? | Ensuring at most a single instance of job executing on Kubernetes and writing into Postgresql | 1.2 | 1 | 0 | 1,116 |
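One way to realize the Postgres-lock idea from the answer is a session-level advisory lock, sketched below (connection string and lock key are placeholders); the lock is released automatically if the job crashes and its connection drops:

```python
import psycopg2

conn = psycopg2.connect("dbname=jobs user=worker")          # hypothetical DSN
cur = conn.cursor()

cur.execute("SELECT pg_try_advisory_lock(42)")               # 42 = arbitrary job key
(got_lock,) = cur.fetchone()
if not got_lock:
    raise SystemExit("another instance is already running")

try:
    pass  # ...do the actual 2-hourly work here...
finally:
    cur.execute("SELECT pg_advisory_unlock(42)")
    conn.close()
```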
40,959,177 | 2016-12-04T13:34:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning | 40,960,903 | 1 | true | 0 | 0 | Embedding matrix is similar to any other variable. If you set the trainable flag to True it will train it (see tf.Variable) | 1 | 0 | 1 | In tensorflow,we may see these codes.
embeddings=tf.Variable(tf.random_uniform([vocabulary_size,embedding_size],-1.0,1.0))
embed=tf.nn.embedding_lookup(embeddings,train_inputs)
When tensorflow is training,does embedding matrix remain unchanged?
In a blog,it is said that embedding matrix can update.I wonder how does it work.Thanks a lot ! | In tensorflow,does embedding matrix remain unchanged? | 1.2 | 0 | 0 | 94 |
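As a follow-up to the answer: in the TF 1.x API used in the question, whether the embedding matrix keeps changing during training is controlled by the trainable flag on the variable. A small hedged sketch (sizes are placeholders):
import tensorflow as tf

vocabulary_size, embedding_size = 10000, 128
# trainable=True is the default: the optimizer updates the rows used in each batch
embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0), trainable=True)
# trainable=False freezes the matrix, e.g. when loading pre-trained vectors
frozen = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0), trainable=False)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)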
40,959,626 | 2016-12-04T14:28:00.000 | 0 | 0 | 0 | 0 | python,pandas | 40,962,551 | 2 | false | 0 | 0 | If your index is unique and you are OK with returning one row (in the case of multiple rows having the same max value) then you can use the idxmax method.
df.loc[df['money'].idxmax()]
And if you want to add some flair you can highlight the max value in each column with:
df.loc[df['money'].idxmax()].style.highlight_max() | 1 | 1 | 1 | searched for this, but cannot find an answer.
Say I have a dataframe (apologies for formatting):
a Dave $400
a Dave $400
a Dave $400
b Fred $220
c James $150
c James $150
d Harry $50
And I want to filter the dataframe so it only shows the rows where the third column is the MAXIMUM value, could someone point me in the right direction?
i.e. it would only show Dave's rows
All I can find is ways of showing the rows where its the maximum value for each separate index (the indexes being A, B, C etc)
Thank you in advance | Python Pandas - Only showing rows in DF for the MAX values of a column | 0 | 0 | 0 | 327 |
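Following up on the idxmax answer above: a plain boolean mask keeps every row that ties for the maximum (all of Dave's rows), not just one. A small sketch, assuming the third column is called 'amount':
import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'a', 'b', 'c', 'c', 'd'],
                   'name': ['Dave', 'Dave', 'Dave', 'Fred', 'James', 'James', 'Harry'],
                   'amount': [400, 400, 400, 220, 150, 150, 50]})
top = df[df['amount'] == df['amount'].max()]  # keeps all rows sharing the max value
print(top)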
40,960,079 | 2016-12-04T15:15:00.000 | 1 | 0 | 1 | 0 | python,c++,c,algorithm | 40,960,224 | 1 | false | 0 | 0 | The bisection algorithm can be used to find a root in a range where the function is monotonic. You can find such segments by studying the derivative function, but in the general case, no assumptions can be made as to the monotonicity of a given function over any range.
For example, the function f(x) = sin(1/x) has an infinite number of roots between -1 and 1. To enumerate these roots, you must first determine the ranges where it is monotonic and these ranges become vanishingly small as x comes closer to 0. | 1 | 0 | 0 | Is there a way to find all the roots of a function using something on the lines of the bisection algorithm?
I thought of checking on both sides of the midpoint in a certain range but it still doesn't seem to guarantee how deep I would have to go to be able to know if there is a root in the newly generated range; also how would I know how many roots are there in a given range even when I know that the corresponding values on applying the function are of opposite sign?
Thanks. | Bisection algorithm to find multiple roots | 0.197375 | 0 | 0 | 1,208 |
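A minimal sketch of the approach described in the answer: scan a grid of subintervals for sign changes and bisect each one. The step count is an assumption, and roots where the function touches zero without changing sign can still be missed.
def find_roots(f, a, b, steps=1000, tol=1e-9):
    """Bisect every subinterval of [a, b] that shows a sign change."""
    roots, h, lo = [], (b - a) / steps, a
    for _ in range(steps):
        hi = lo + h
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:
            roots.append(lo)
        elif flo * fhi < 0:          # sign change: bisect this subinterval
            x0, x1 = lo, hi
            while x1 - x0 > tol:
                mid = 0.5 * (x0 + x1)
                if f(x0) * f(mid) <= 0:
                    x1 = mid
                else:
                    x0 = mid
            roots.append(0.5 * (x0 + x1))
        lo = hi
    return roots

print(find_roots(lambda x: x**3 - x, -2.0, 2.0))  # roughly [-1.0, 0.0, 1.0]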
40,960,357 | 2016-12-04T15:42:00.000 | -1 | 0 | 0 | 0 | python,tree,regression,cart | 41,774,573 | 2 | false | 0 | 0 | You can't; use matlab. Struggling with this at the moment. Using a python based home-cooked decision tree is also an option. However, there is no guarantee it will work properly (lots of places you can screw up). And you need to implement with numpy if you want any kind of reasonable run-time (also struggling with this now).
If you still have this problem, I do have a decision tree working with node knowledge and am implementing pruning this weekend...
If I get it to run fast and the code isn't too embarrassingly complicated, I will post a GitHub up here if you are still interested, in exchange for endorsements of ML'ing and Python/Numpy expertise on my LinkedIn. | 1 | 0 | 1 | I'm using scikit-learn to construct regression trees, using tree.DecisionTreeRegression().
I'm giving 56 data samples and it constructs me a Tree with 56 nodes (pruning=0).
How can I implement some pruning to the tree? Any help is appreciated! | Python Decision Tree Regressor Pruning | -0.099668 | 0 | 0 | 1,207 |
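The answers above note that scikit-learn had no built-in post-pruning at the time; a common workaround is pre-pruning, i.e. limiting growth through the constructor parameters. A hedged sketch with placeholder data:
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(56, 3)   # placeholder data, 56 samples as in the question
y = np.random.rand(56)
tree = DecisionTreeRegressor(max_depth=4,          # cap the depth
                             min_samples_split=10, # don't split tiny nodes
                             min_samples_leaf=5)   # each leaf must cover >= 5 samples
tree.fit(X, y)
print(tree.tree_.node_count)  # far fewer than one node per sample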
40,963,218 | 2016-12-04T20:21:00.000 | 2 | 1 | 0 | 0 | python,raspberry-pi,raspbian | 40,967,869 | 1 | false | 0 | 0 | Did you try using @reboot "your command" in your crontab?
Try crontab -e, and add @reboot python -m SimpleHTTPServer | 1 | 2 | 0 | How would you start SimpleHTTPServer when the Pi boots up?
I made a startup.sh with python -m SimpleHTTPServer & but no luck | Run Python SimpleHTTPServer when raspberry Pi turns on | 0.379949 | 0 | 0 | 231 |
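For reference, the crontab approach from the answer uses cron's @reboot keyword (a single line added via crontab -e); the port, path, and log file below are assumptions, and changing directory first matters because SimpleHTTPServer serves the current directory:
@reboot cd /home/pi/www && python -m SimpleHTTPServer 8000 >> /home/pi/httpserver.log 2>&1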
40,963,775 | 2016-12-04T21:19:00.000 | 0 | 0 | 1 | 0 | python,pip | 40,963,832 | 3 | false | 1 | 0 | I've done this before. I created a virtualenv for my project so all dependencies (including the python executable) are contained within the project sub-directory tree. Then just zip up that directory tree. To install elsewhere, just unzip and run it. | 1 | 1 | 0 | For a project, I can't let users use pip install before running the app.
My project is a python flask app that I used pip to grab the dependencies. How do I bundle it so the apps can run without using pip install? | How to bundle python pip dependencies with project so don't need to run pip install | 0 | 0 | 0 | 2,017 |
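Besides zipping a whole virtualenv as in the answer, another common way to ship a Flask app without running pip on the target is to vendor the dependencies into the project at build time. A hedged sketch (folder names are assumptions); the pip command is run once on the build machine:
# Build machine (same Python version/OS as the target):
#   pip install -r requirements.txt -t vendor/
# Then ship the project folder (including vendor/) and put this bootstrap at the top of the app:
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "vendor"))
import flask  # now resolved from vendor/, no pip install needed on the target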
40,964,465 | 2016-12-04T22:28:00.000 | 0 | 0 | 0 | 0 | python,authentication | 40,964,673 | 1 | false | 0 | 0 | If you do not want to use SSL - there are a few options:
The client must send some authentication token (you may call it a password) to the server as one of the first pieces of data sent through the socket. This is the simplest way, and it is also cross-platform.
The client must send the id of its process (OS-specific). The server then makes some system calls to determine the path to the executable file of that client process. If it is a valid path, the client is approved. For example, a valid path might be '/bin/my_client' or "C:\Program Files\MyClient\my_client.exe", and if some other client (say, with path '/bin/some_another_app') tries to communicate with your server, it will be rejected. But I think this is also overhead, and the implementation is OS-specific. | 1 | 0 | 0 | Imagine you have two python processes, one server and one client, that interact with each other.
Both processes/programs run on the same host and communicate via TCP, eg. by using the AMP protocol of the twisted framework.
Could you think of an efficient and smart way how both python programs can authenticate each other?
What I want to achieve is, that for instance the server only accepts a connection from an authentic client and where not allowed third party processes can connect to the server.
I want to avoid things like public-key cryptography or SSL-connections because of the huge overhead. | Mutual authentication of python processes | 0 | 0 | 1 | 79 |
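A minimal sketch of the token idea from the accepted answer, using a shared secret plus an HMAC over a random challenge so the secret itself never crosses the socket. The secret value and the framing over the socket are assumptions left to the application.
import hmac
import hashlib
import os

SHARED_SECRET = b"read-this-from-a-file-both-processes-can-access"  # placeholder

def make_challenge():
    return os.urandom(32)                       # server sends this to the client

def answer_challenge(challenge):                # client computes and sends this back
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def check_response(challenge, response):        # server verifies the reply
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert check_response(challenge, answer_challenge(challenge))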
40,968,598 | 2016-12-05T06:51:00.000 | 2 | 0 | 0 | 1 | python,c++,sockets,unix-socket | 40,968,779 | 2 | false | 0 | 0 | It sounds like a design flaw that you need to send this much data over the socket to begin-with and that there is this risk of the reader not keeping up with the writer. As an alternative, you may want to consider using a delta-encoding, where you alternate between "key frame"s (whole frames) and multiple frames encoded as deltas from the the prior frame. You may also want to consider writing the data to a local buffer and then, on your UNIX domain socket, implementing a custom protocol that allows reading a sequence of frames starting at a given timestamp or a single frame given a timestamp. If all reads go through such buffer rather than directly from the source, I imagine you could also add additional encoding / compression options in that protocol. Also, if the server application that exports the data to a UNIX socket is a separate application from the one that is reading in the data and writing it to a buffer, you won't need to worry about your data ingestion being blocked by slow readers. | 1 | 0 | 0 | I have a C++ program which reads frames from a high speed camera and write each frame to a socket (unix socket). Each write is of 4096 bytes. Each frame is roughly 5MB. ( There is no guarantee that frame size would be constant but it is always a multiple of 4096 bytes. )
There is a python script which reads from the socket: 10 * 4096 bytes at each call of recv. Often I get unexpected behavior, which I think boils down to understanding the following about sockets. I believe both of my programs are writing/recving in blocking mode.
Can I write whole frame in one go (write call with 5MB of data)? Is it recommended? Speed is major concern here.
If python client fails to read or read slowly than write, does it mean that after some time write operation on socket would not add to buffer? Or, would they overwrite the buffer? If no-one is reading the socket, I'd not mind overwriting the buffer.
Ideally, I'd like my application to write to socket as fast as possibly. If no one is reading the data, then overwriting is fine. If someone is reading the data from socket but not reading fast enough, I'd like to store all data in buffer. Then how can I force my socket to increase the buffer size when reading is slow? | Yet another confustion about sending/recieving large amount of data over (unix-) socket | 0.197375 | 0 | 0 | 502 |
40,969,733 | 2016-12-05T08:09:00.000 | 4 | 0 | 1 | 1 | python,variables,memory,ipc,ram | 40,969,773 | 4 | false | 0 | 0 | Make it a (web) microservice: formalize all different CLI arguments as HTTP endpoints and send requests to it from main application. | 1 | 1 | 0 | I have a python script, that needs to load a large file from disk to a variable. This takes a while. The script will be called many times from another application (still unknown), with different options and the stdout will be used. Is there any possibility to avoid reading the large file for each single call of the script?
I guess i could have one large script running in the background that holds the variable. But then, how can I call the script with different options and read the stdout from another application? | Keeping Python Variables between Script Calls | 0.197375 | 0 | 0 | 2,241 |
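A minimal sketch of the microservice idea from the answer: load the large file once when the server starts and let each CLI-style call become an HTTP request. Flask, the file name, and the /lookup endpoint are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("huge_data.txt") as f:   # expensive load happens once, at startup
    BIG_DATA = f.read()

@app.route("/lookup")
def lookup():
    needle = request.args.get("q", "")
    return jsonify(found=needle in BIG_DATA)

if __name__ == "__main__":
    app.run(port=5000)
# callers then hit e.g.  http://localhost:5000/lookup?q=foo  instead of re-running the script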
40,971,318 | 2016-12-07T12:32:00.000 | 1 | 1 | 1 | 0 | python,python-unittest,allure | 41,627,450 | 2 | false | 0 | 0 | Allure just adds additional categories, like steps for tests and features/stories for groups of tests. Test fixtures are the responsibility of the testing framework, not of a reporting tool like Allure. | 2 | 1 | 0 | Does allure plugin for python have setup/teardown facilities, like those of python unittest module? | Does allure plugin for python have setup/teardown facilities? | 0.099668 | 0 | 0 | 539
40,971,318 | 2016-12-05T09:46:00.000 | 1 | 1 | 1 | 0 | python,python-unittest,allure | 45,822,745 | 2 | false | 0 | 0 | There are no such facilities like setUp or tearDown in allure.
But you can use the py.test fixtures to implement setUp and tearDown by yourself. | 2 | 1 | 0 | Does allure plugin for python have setup/teardown facilities, like those of python unittest module? | Does allure plugin for python have setup/teardown facilities? | 0.099668 | 0 | 0 | 539 |
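A small sketch of the py.test fixture suggestion from the second answer; setup and teardown live in the test framework, and Allure simply reports what ran. The resource here is a placeholder.
import pytest

@pytest.fixture
def resource():
    handle = open("test_resource.txt", "w")   # setup (placeholder resource)
    yield handle                              # the test body runs here
    handle.close()                            # teardown, runs even if the test fails

def test_something(resource):
    assert resource.writable()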
40,972,550 | 2016-12-05T10:52:00.000 | 0 | 0 | 0 | 0 | python-3.x,flask,compiler-errors | 40,977,871 | 1 | true | 1 | 0 | I got this sorted out. Perhaps because I had set raise_exceptions=True; setting it to False resolved the issue. | 1 | 0 | 0 | I am trying to validate the username and password of users in a Flask app using ldap3. The normal ldap package does not install on Python 3.5.
The user enters a username and password through the login form; I am trying to authenticate the user with that username/password and allow them to access the index page if authentication succeeds.
Does the authentication return true or false, so that I can redirect to the next page based on the outcome?
The LDAP_PROVIDER_URL = "ldaps://appauth.corp.domain.com:636";
Please help me with the code for this.
When I type appauth.corp.domain.com or corp.domain.com as HOST I get the following error
(r_web) C:\Users\dasa17\r_web\RosterWeb\RosterWeb>python Roster.py
Traceback (most recent call last): File "Roster.py", line 10, in
s = Server(appauth.corp.domain.com, port=636, get_info=ALL) NameError: name 'appauth' is not defined
(r_web) C:\Users\dasa17\r_web\RosterWeb\RosterWeb>python Roster.py
Traceback (most recent call last): File "Roster.py", line 10, in
s = Server(corp.domain.com, port=636, get_info=ALL) NameError: name 'corp' is not defined
I made some modifications , now I am able to run it by giving dummy username and password. However, I am getting a different error now.>>> c = Connection(s,user='dasa17',password='',check_names=True, lazy=False,raise_exceptions=False)
c.open()
Traceback (most recent call last):
File "", line 1, in
c.open()
File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 57, in open
self.connection.refresh_server_info()
File "C:\Python35\lib\site-packages\ldap3\core\connection.py", line 1017, in refresh_server_info
self.server.get_info_from_server(self)
File "C:\Python35\lib\site-packages\ldap3\core\server.py", line 382, in get_info_from_server
self._get_dsa_info(connection)
File "C:\Python35\lib\site-packages\ldap3\core\server.py", line 308, in _get_dsa_info
get_operational_attributes=True)
File "C:\Python35\lib\site-packages\ldap3\core\connection.py", line 571, in search
response = self.post_send_search(self.send('searchRequest', request, controls))
File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 140, in post_send_search
responses, result = self.get_response(message_id)
File "C:\Python35\lib\site-packages\ldap3\strategy\base.py", line 298, in get_response
responses = self._get_response(message_id)
File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 158, in _get_response
responses = self.receiving()
File "C:\Python35\lib\site-packages\ldap3\strategy\sync.py", line 92, in receiving
raise communication_exception_factory(LDAPSocketReceiveError, exc) (self.connection.last_error)
ldap3.core.exceptions.LDAPSocketReceiveError: error receiving data: [WinError 10054] An existing connection was forcibly closed by the remote host | ldap3 bind syntax error in flask application | 1.2 | 0 | 0 | 1,094 |
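Two things stand out in the traceback above: the NameError happens because the hostname is passed unquoted, and the accepted fix was raise_exceptions=False. A hedged ldap3 sketch with placeholder credentials:
from ldap3 import Server, Connection, ALL

server = Server('appauth.corp.domain.com', port=636, use_ssl=True, get_info=ALL)  # quoted string
conn = Connection(server,
                  user='CORP\\dasa17',   # placeholder
                  password='secret',     # placeholder
                  raise_exceptions=False)
if conn.bind():
    print('authenticated')
else:
    print('failed:', conn.result)
conn.unbind()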
40,973,950 | 2016-12-05T12:07:00.000 | 0 | 1 | 0 | 0 | python,rest,gis | 41,084,870 | 2 | false | 1 | 0 | realestate.co.nz seems to have both Javascript and Ruby APIs. I'm going to investigate the possibility of building a Python port as their code is on github/realestate.co.nz
I have no financial interest in either TradeMe or realestate.co.nz, for the record. Just a guy trying to avoid screen scraping. | 2 | 0 | 0 | I failed to get approval for my application that I started to write against the TradeMe API. My API access was not approved. I'm therefore looking for alternatives.
Any NZ property for sale APIs out there? I have seen realestate.co.nz which according to the github repo, might provide something in PHP and Ruby, but the Ruby repo hasn't been touched in several years. Google API perhaps?
I'm specifically interested in obtaining geo-location information for the properties on sale. | NZ Property for sale API | 0 | 0 | 0 | 453 |
40,973,950 | 2016-12-05T12:07:00.000 | 0 | 1 | 0 | 0 | python,rest,gis | 41,067,476 | 2 | false | 1 | 0 | The sandbox should let you access trademe without the need to access the main server. | 2 | 0 | 0 | I failed to get approval for my application that I started to write against the TradeMe API. My API access was not approved. I'm therefore looking for alternatives.
Any NZ property for sale APIs out there? I have seen realestate.co.nz which according to the github repo, might provide something in PHP and Ruby, but the Ruby repo hasn't been touched in several years. Google API perhaps?
I'm specifically interested in obtaining geo-location information for the properties on sale. | NZ Property for sale API | 0 | 0 | 0 | 453 |
40,974,077 | 2016-12-05T12:15:00.000 | 0 | 0 | 0 | 0 | python,modbus,can-bus,canopen | 41,042,839 | 1 | true | 0 | 0 | This will be the gateway's business to fix. There is no general answer, nor is there a standard for how such gateways work. Gateways have some manner of software that allows you to map data between the two field buses. In this case I suppose it would be either a specific CANopen PDO or a specific CAN id that you map to a Modbus address.
In case you are just writing a CANopen client, neither you or the firmware should need to worry about Modbus. Just make a CANopen node that is standard compliant and let the gateway deal with the actual protocol conversion.
You may however have to do the PDO mapping in order to let your client and the gateway know how to speak with each other, but that should preferably be a user-level configuration of the finished product, rather than some hard-coded mapping. | 1 | 0 | 0 | I am now studying and developing a CANopen client with a python stack and i'm struggling to find out how to communicate with a slave Modbus through a gateway.
Since the gateway address is the one present in the CANopen Object Dictionary, and the gateway holds the addresses of the Modbus slave I/O, how do I specify the address of the Modbus input?
As I understand it, CANopen uses the node-ID to select the server and an address to select the property to read/write, but in this case I need to go farther than that and point to a specific input.
Just to be clear, I'm in the "studying" phase; I have no particular CANopen/Modbus gateway in mind.
Regards. | How does CANopen client communicate with Modbus slave through CANopen/Modbus gateway ? | 1.2 | 0 | 1 | 339 |
40,974,327 | 2016-12-05T12:29:00.000 | 4 | 0 | 1 | 0 | python,pycharm,project | 40,974,533 | 1 | true | 0 | 0 | In Pycharm Preferences, Appearance & Behaviour > System Settings, and there will be a Startup/Shutdown section. It doesn't look like you can specify a particular project, but you can stop it from reopening the last project on startup (which is what seems to be happening for you now), and select from a list instead. | 1 | 0 | 0 | Every time I start PyCharm Community Edition it automatically opens a project in the PyCharmProjects directory. There are several projects in that directory. I would like to have another project open automatically. Is that possible? | How can I change the default project in PyCharm? | 1.2 | 0 | 0 | 2,361 |
40,976,901 | 2016-12-05T14:46:00.000 | 0 | 0 | 1 | 0 | python,json,csv | 40,977,897 | 1 | false | 0 | 0 | turns out it was the json.dumps(), should've read more into what it does! Thanks. | 1 | 0 | 1 | I've been researching the past few days on how to achieve this, to no avail.
I have a JSON file with a large array of json objects like so:
[{
"tweet": "@SHendersonFreep @realDonaldTrump watch your portfolios go to the Caribbean banks and on to Switzerland. Speculation without regulation",
"user": "DGregsonRN"
},{
"tweet": "RT @CodeAud: James Mattis Vs Iran.\n\"The appointment of Mattis by @realDonaldTrump got the Iranian military leaders' more attention\". https:\u2026",
"user": "American1765"
},{
"tweet": "@realDonaldTrump the oyou seem to be only fraud I see is you, and seem scared since you want to block the recount???hmm cheater",
"user": "tgg216"
},{
"tweet": "RT @realDonaldTrump: @Lord_Sugar Dopey Sugar--because it was open all season long--you can't play golf in the snow, you stupid ass.",
"user": "grepsalot"
},{
"tweet": "RT @Prayer4Chandler: @realDonaldTrump Hello Mr. President, would you be willing to meet Chairman #ManHeeLee of #HWPL to discuss the #PeaceT\u2026",
"user": "harrymalpoy1"
},{
"tweet": "RT @realDonaldTrump: Thank you Ohio! Together, we made history \u2013 and now, the real work begins. America will start winning again! #AmericaF\u2026",
"user": "trumpemall"
}]
And I am trying to access each key and value, and write them to a csv file. I believe using json.loads(json.dumps(file)) should work in normal json format, but because there is an array of objects, I can't seem to be able to access each individual one.
converter.py:
import json
import csv
f = open("tweets_load.json",'r')
y = json.loads(json.dumps(f.read(), separators=(',',':')))
t = csv.writer(open("test.csv", "wb+"))
# Write CSV Header, If you dont need that, remove this line
t.writerow(["tweet", "user"])
for x in y:
t.writerow([x[0],x[0]])
grab_tweets.py:
import tweepy
import json
def get_api(cfg):
auth = tweepy.OAuthHandler(cfg['consumer_key'], cfg['consumer_secret'])
auth.set_access_token(cfg['access_token'], cfg['access_token_secret'])
return tweepy.API(auth)
def main():
cfg = {
"consumer_key" : "xxx",
"consumer_secret" : "xxx",
"access_token" : "xxx",
"access_token_secret" : "xxx"
}
api = get_api(cfg)
json_ret = tweepy.Cursor(api.search, q="@realDonaldTrump",count="100").items(100)
restapi =""
for tweet in json_ret:
rest = json.dumps({'tweet' : tweet.text,'user' :str(tweet.user.screen_name)},sort_keys=True,indent=4,separators=(',',': '))
restapi = restapi+str(rest)+","
f = open("tweets.json",'a')
f.write(str(restapi))
f.close()
if __name__ == "__main__":
main()
The output so far is looking like:
tweet,user^M
{,{^M
"
","
"^M
, ^M
, ^M
, ^M
, ^M
"""",""""^M
t,t^M
w,w^M
e,e^M
e,e^M
t,t^M
"""",""""^M
:,:^M
, ^M
"""",""""^M
R,R^M
T,T^M
, ^M
@,@^M
r,r^M
e,e^M
a,a^M
l,l^M
D,D^M
o,o^M
n,n^M
a,a^M
l,l^M
What exactly am I doing wrong? | Access key value from JSON array of objects Python | 0 | 0 | 0 | 1,675 |
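Following up on the self-answer above: since tweets_load.json already holds a JSON array, json.load parses it directly and json.dumps is not needed at all. A sketch written for Python 3 (on Python 2 open the CSV in 'wb' mode instead), using the field names from the sample:
import json
import csv

with open("tweets_load.json") as f:
    records = json.load(f)                 # list of dicts, one per tweet

with open("test.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["tweet", "user"])
    for item in records:
        writer.writerow([item["tweet"], item["user"]])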
40,978,516 | 2016-12-05T16:11:00.000 | 1 | 0 | 0 | 0 | python,web-services,rest,dsl,mps | 40,981,915 | 1 | true | 0 | 0 | I think this is a nice idea. To determine what kind of solution you could build you should consider different aspects:
Who would write these API combinations?
What kind of tool support would be appropriate? I mean validation, syntax highlighting, autocompletion, typesystem checks, etc
How much time would make sense to invest on it?
Depending on these answers you could consider different options. The simplest one is to build a DSL using ANTLR. You get a parser, then you build some program to process the AST and generate the code to call the APIs. Your users will just have to edit these programs in a text editor with no support. The benefit of this is that the cost of implementing it is reduced, and your users could write these programs using a simple text editor.
Alternatively you could use a Language Workbench like Xtext or Jetbrains MPS to build some specific editors for your language and provide a better editing experience to your users. | 1 | 1 | 0 | We have a back end that exposes 50-60 Rest APIs. These will largely be consumed by standalone applications like a Python script or a Java program.
One issue we have is the APIs are at a very granular level, they do not match the business use case. For example to perform a business use case end user might have to call 4 to 5 APIs.
I want to develop a DSL or some solution that will help provide a high level abstraction that will enable end users to implement business use cases with ease. This can either be a standalone abstraction or a "library" for Python or or some much high level programming language.
For the specific purpose of combining multiple Rest API calls to create a business use case transaction, what are the approaches available.
Thanks | Meta language for rest client | 1.2 | 0 | 1 | 190 |
40,980,163 | 2016-12-05T17:42:00.000 | 0 | 0 | 0 | 0 | python-2.7,wxpython,word-wrap | 64,083,696 | 3 | false | 0 | 1 | It would be more maintainable to use the constants stc.WRAP_NONE, stc.WRAP_WORD, stc.WRAP_CHAR and stc.WRAP_WHITESPACE instead of their numerical values. | 2 | 1 | 0 | I was wondering about this, so I did quite a bit of google searches, and came up with the SetWrapMode(self, mode) function. However, it was never really detailed, and there was nothing that really said how to use it. I ended up figuring it out, so I thought I'd post a thread here and answer my own question for anyone else who is wondering how to make an stc.StyledTextCtrl() have word wrap. | How to set up word wrap for an stc.StyledTextCtrl() in wxPython | 0 | 0 | 0 | 265 |
40,980,163 | 2016-12-05T17:42:00.000 | 0 | 0 | 0 | 0 | python-2.7,wxpython,word-wrap | 40,986,021 | 3 | false | 0 | 1 | I see you answered your own question, and you are right in every way except for one small detail. There are actually several different wrap modes. The types and values corresponding to them are as follows:
0: None
1: Word Wrap
2: Character Wrap
3: White Space Wrap
So you cannot enter any value above 0 to get word wrap. In fact if you enter a value outside of the 0-3 you should just end up getting no wrap as the value shouldn't be recognized by Scintilla, which is what the stc library is. | 2 | 1 | 0 | I was wondering about this, so I did quite a bit of google searches, and came up with the SetWrapMode(self, mode) function. However, it was never really detailed, and there was nothing that really said how to use it. I ended up figuring it out, so I thought I'd post a thread here and answer my own question for anyone else who is wondering how to make an stc.StyledTextCtrl() have word wrap. | How to set up word wrap for an stc.StyledTextCtrl() in wxPython | 0 | 0 | 0 | 265 |
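Pulling the two answers together, a minimal sketch; 1 is the word-wrap value listed above (stc.WRAP_WORD), and 0 turns wrapping off again:
import wx
from wx import stc

app = wx.App(False)
frame = wx.Frame(None, title="wrap demo")
editor = stc.StyledTextCtrl(frame)
editor.SetWrapMode(1)   # 1 = word wrap; 2 = character wrap; 3 = whitespace wrap; 0 = none
frame.Show()
app.MainLoop()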
40,980,731 | 2016-12-05T18:16:00.000 | 0 | 0 | 0 | 0 | python,flask,alembic,flask-migrate | 64,959,830 | 2 | false | 1 | 0 | You can also check in your database and the current version should be displayed in a table called alembic_version. | 2 | 2 | 0 | My flask application now has 20+ migrations built with flask-migrate and they all have hashed file names like: 389d9662fec7_.py
I want to double check the settings on the latest migration that I ran, but don't want to open every file to look for the correct one. I could create a new dummy migration and look at what it references as the down_revision but that seems clunky.
I'm using flask-script, flask-migrate, and flask-sqlalchemy
My question is: How can I quickly find the latest migration that I created? | How do I find the latest migration created w/ flask-migrate? | 0 | 0 | 0 | 1,521 |
40,980,731 | 2016-12-05T18:16:00.000 | 3 | 0 | 0 | 0 | python,flask,alembic,flask-migrate | 40,980,983 | 2 | true | 1 | 0 | ./manage.py db history -r current: will show the migrations in the order they will be applied. -r current: shows only the migrations since the currently applied one.
./manage.py db heads will show the most recent migration for each branch (typically there's only one branch). ./manage.py db upgrade would apply all migrations to get to the head.
Use the -v flag to get verbose output, including the full path to the migration. | 2 | 2 | 0 | My flask application now has 20+ migrations built with flask-migrate and they all have hashed file names like: 389d9662fec7_.py
I want to double check the settings on the latest migration that I ran, but don't want to open every file to look for the correct one. I could create a new dummy migration and look at what it references as the down_revision but that seems clunky.
I'm using flask-script, flask-migrate, and flask-sqlalchemy
My question is: How can I quickly find the latest migration that I created? | How do I find the latest migration created w/ flask-migrate? | 1.2 | 0 | 0 | 1,521 |
40,981,908 | 2016-12-05T19:33:00.000 | 3 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda | 40,981,986 | 3 | false | 0 | 0 | AWS Lambda's Python environment comes pre-installed with boto3. Any other libraries you want need to be part of the zip you upload. You can install them locally with pip install whatever -t mysrcfolder. | 1 | 1 | 0 | I'm new to AWS Lambda and pretty new to Python.
I wanted to write a python lambda that uses the AWS API.
boto is the most popular python module to do this so I wanted to include it.
Looking at examples online I put import boto3 at the top of my Lambda and it just worked- I was able to use boto in my Lambda.
How does AWS know about boto? It's a community module. Are there a list of supported modules for Lambdas? Does AWS cache its own copy of community modules? | How does AWS know where my imports are? | 0.197375 | 0 | 1 | 70 |
40,987,039 | 2016-12-06T02:51:00.000 | 4 | 0 | 0 | 0 | python,django,sqlite | 40,987,703 | 1 | false | 1 | 0 | Delete all the folders named 'migrations'. And go to terminal and run ./manage.py makemigrations, ./manage.py migrate --run-syncdb. | 1 | 2 | 0 | I deleted my database. I want to start afresh with a new database. How can I do that ? I tried making a new datasource but it gives me an error while applying migrations/or migrating that it couldn't find the tables? Which is true because its an empty database.
A similar scenario would be when some one pulls a version of my code. He wouldn't have migrations or the database (untracked). How would he run the application? | How to start afresh with a new database in Django? | 0.664037 | 0 | 0 | 3,214 |
40,987,412 | 2016-12-06T03:38:00.000 | 1 | 0 | 0 | 0 | python,sockets,connect,wireless | 40,987,772 | 2 | false | 0 | 0 | but one question. Is the socket server specific to python, or can
another language host and python connect or vise-versa?
As long as you are using sockets - you can connect to any socket-based server (made with any language). And vice-versa: any socket-based client will be able to connect to your server. Moreover it's cross-platform: socket-based client from any OS can connect to any socket-based server (from any OS). | 1 | 1 | 0 | I am wondering if there is any way to wirelessly connect to a computer/server using python's socket library. The dir(socket) brought up a lot of stuff and I wanted help sorting it out. | How do I wirelessly connect to a computer using python | 0.099668 | 0 | 1 | 102 |
40,989,746 | 2016-12-06T07:12:00.000 | 1 | 0 | 1 | 0 | python,exe,command-prompt,python-3.5,pyinstaller | 40,989,945 | 2 | false | 0 | 0 | You don't have a module named pefile; install it with pip install pefile, then try again | 1 | 0 | 0 | How to convert python Python1.py created in Visual Studio 2015 to Python1.exe? With PyInstaller I got an error, so I need to find some other tool to convert my PythonConsole.py to PythonConsole.exe | Convert .py to .exe | 0.099668 | 0 | 0 | 369
40,989,958 | 2016-12-06T07:26:00.000 | 0 | 1 | 0 | 1 | python,ibm-cloud,nao-robot | 41,919,816 | 1 | false | 0 | 0 | You can add any path to the PYTHONPATH environment variable from within your behavior. However, this has bad side effects, like:
If you forget to remove the path from the environment right after importing your module, you won't know anymore where you are importing modules from, since there is only one Python context for the whole NAOqi and all the behaviors.
For the same reason (a single Python context), you'll need to restart NAOqi if you change the module you are trying to import. | 1 | 0 | 0 | Currently, I am doing a project about Nao Robot. I am having problem with importing the python class file into choregraphe. So anyone knows how to do this?
Error message
[ERROR] behavior.box :init:8 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1271833616__root__RecordSound_3__RecSoundFile_4: ALProxy::ALProxy Can't find service: | How to import IBM Bluemix Watson speech-to-text in Choregraphe? | 0 | 0 | 0 | 272 |
40,991,259 | 2016-12-06T08:52:00.000 | 2 | 0 | 1 | 1 | python,pyinstaller | 56,047,956 | 2 | false | 0 | 0 | If you set your command directory to the .py script location and run pyinstaller yourscript.py, it will generate folders in the same location as your script. The folder named dist/ will contain the .exe file. | 1 | 2 | 0 | I am completely new to python and trying to create an application (or .exe) file for python using pyinstaller. I ran the command pyinstaller -[DIRECTORY].py and it saved it to an output directory "C:\Windows\System32\Dist\Foo", however when i tried to locate the directory it did not seem to exist (Dist).
NOTE: i'm trying to convert a .py file to .exe file in Python 3.5
Thanks for any help :) | Cannot find output folder for .exe file using pyinstaller | 0.197375 | 0 | 0 | 2,898 |
40,991,949 | 2016-12-06T09:31:00.000 | 2 | 0 | 0 | 0 | python,gtk,gtk3,pygobject | 41,002,629 | 1 | false | 0 | 1 | Setting the window size request should be sufficient. If your UI makes the window larger that is the same as your widgets becoming truncated on a smaller monitor.
To prevent this you'll need to put widgets that grows your UI inside scrollable windows. Watch out for labels. You will need to get them to wrap properly. | 1 | 2 | 0 | I want to test the appearance of a window on a smaller monitor than the one I'm using on the development machine.
I tried with set_geometry_hints() (setting minimum and maximum width and height), set_resizable(False), set_default_size(), and set_size_request(). However every time the window is bigger, because child widgets request a bigger size.
I noticed on a smaller monitor with a resolution smaller than the request size, the widgets are truncated. I have to be sure this doesn't happen refactoring the GUI layout, so I want to simulate on my monitor.
How can I make the window smaller without truncating widgets? | Gtk3: set a fixed window size (smaller than the requested size from the child widgets) | 0.379949 | 0 | 0 | 1,748 |
40,992,566 | 2016-12-06T10:02:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pycharm | 45,028,022 | 1 | false | 0 | 0 | I know of 2 quick ways, but none of them is quick enough.
1: Using the AceJump addon, you just jump to one parens, hit delete, then jump to the other one, and hit delete... Naturally this has the disadvantage that parentheses in a situation like this: ([{(([]))}) would be harder to jump to.
2: There is a command called "Move caret to matching brace". Then, using either AceJump to jump directly to your first brace (or just navigating to it in any way), you activate the function "Move caret to matching brace 2 times". After moving the caret 2 times, you can delete the first matching parens, and then use the action to navigate back ("Back"), and then delete your second brace.
3: Solution 2 does not work for quotes. For them, instead of executing the action "Move to matching brace" you can use the incremental selection, and jump to the most convenient of the 2 quote signs... This however doesn't allow you to navigate back to the previous(or next) quote and delete that one. Therefore for quotes, I have no solution, but this "incremental selection" can work in a few situations (when one of the quotes is at the beginning or the end of a line) | 1 | 7 | 0 | The opening and closing quotes (or brackets, braces etc) are highlighted in PyCharm(and other editors). So this means it can identify the pair.
Now, is there a way to delete both the quotes at once (or brackets, braces etc) when either of the opening or closing quotes are deleted, Since it identifies the pair?
For eg. I want this in one keyboard action (by both cases either deleting the opening or closing square bracket):
From this: [[a for a in l1 if a!=0]]
To this: [a for a in l1 if a!=0]
I googled and searched on SO but couldn't find it. | Delete opening and closing quotes (or brackets,braces,etc) at once | 0 | 0 | 0 | 574 |
40,993,351 | 2016-12-06T10:41:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,python-3.x | 40,993,473 | 2 | false | 0 | 0 | If you did not kill the Python process, then it is most likely that the system ran out of some resource, and then the OS will kill it. | 2 | 0 | 0 | When I run Python script after I get message Killed. What does it mean?
I tried to find logs, but they are empty | Why I get message Killed in Python? | 0.099668 | 0 | 0 | 99 |
40,993,351 | 2016-12-06T10:41:00.000 | 2 | 0 | 0 | 0 | python,python-2.7,python-3.x | 40,993,472 | 2 | true | 0 | 0 | What does it mean?
answer -
Your script crossed some limit in the amount of system resources that you are allowed to use. Depending on your OS and configuration, this could mean you had too many open files, used too much filesytem space or something else. | 2 | 0 | 0 | When I run Python script after I get message Killed. What does it mean?
I tried to find logs, but they are empty | Why I get message Killed in Python? | 1.2 | 0 | 0 | 99 |
40,995,291 | 2016-12-06T12:20:00.000 | 0 | 0 | 1 | 0 | python,numpy,hdf5 | 40,995,578 | 1 | false | 0 | 0 | accepted
locals().update(npzfile)
a # and/or b
In the IPython session, locals() is a large dictionary with variables that you've defined, the input history lines, and various outputs. Update adds the dictionary values of npzfile to that larger one.
By the way, you can also load and save MATLAB .mat files. Use scipy.io.loadmat and savemat. It handles v4 (Level 1.0), v6 and v7 to 7.2 files. But you have the same issue - the result is a dictionary.
Octave has an expression form of the load command, that loads the data into a structure
S = load ("file", "options", "v1", "v2", ...) | 1 | 0 | 1 | I have a large data-set that I want save them in a npz file. But because the size of file is big for memory I cant save them in npz file.
Now i want insert data in iterations into npz file.
How can I do this?
Are HDF5 is better for this? | How can I update npz file in python? | 0 | 0 | 0 | 528 |
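Since an .npz archive cannot be appended to in place, HDF5 is the usual answer for inserting data in iterations. A hedged h5py sketch with a resizable dataset (file name, shapes, and chunk size are assumptions):
import numpy as np
import h5py

with h5py.File("big_data.h5", "w") as f:
    dset = f.create_dataset("samples", shape=(0, 128), maxshape=(None, 128),
                            chunks=(1024, 128), dtype="float32")
    for _ in range(10):                             # your real iteration loop
        batch = np.random.rand(1024, 128).astype("float32")
        dset.resize(dset.shape[0] + batch.shape[0], axis=0)
        dset[-batch.shape[0]:] = batch              # only one batch is in memory at a time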
40,997,813 | 2016-12-06T14:33:00.000 | 1 | 0 | 0 | 0 | python,dymola | 41,197,119 | 1 | false | 0 | 0 | You can reduce the size of the simulation result file by using variable selections in Dymola. That will restrict the output to states, parameters, and the variables that match your selection criteria.
The new Dymola 2017 FD01 has a user interface for defining variable selections. | 1 | 1 | 1 | I have been having some issues trying to open a simulation result output file (.mat) in Python. Upon loading the file I am faced with the following error:
ValueError: Not enough bytes to read matrix 'description'; is this a
badly-formed file? Consider listing matrices with whosmat and
loading named matrices with variable_names kwarg to loadmat
Has anyone been successful in rectifying this error? I have heard there is a script DyMat which can manage mat files in Python but haven't had any luck with it so far.
Any suggestions would be greatly appreciated. | Not enough memory to read .mat result file into Python | 0.197375 | 0 | 0 | 374 |
41,000,124 | 2016-12-06T16:23:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,spyder | 63,757,231 | 1 | false | 0 | 0 | did you try this:
plot window pane -> Mute inline plotting
Toggle to enable / disable plotting within spyder plot window pane | 1 | 0 | 0 | I am used to code in Vim and run my scripts on the command line. My coworker uses Spyder, with is, I admit, a very good tool.
The problem comes in scripts that use matplotlib, where Spyder (or IPython) interferes with at least pyplot.show(), which is typically not required in Spyder, and pyplot.savefig(), which causes an unwanted pyplot.show() in Spyder.
I have tried so far, without success:
ticking 'Execute in a new dedicated Python interpreter' in run settings dialog box
specifying in Spyder the Python interpreter to use when running scripts
disabling the PYTHONSTARTUP script in Spyder, by pointing to a noop script
Any suggestion? | How to have Spyder act as command line Python interpreter | 0 | 0 | 0 | 552 |
41,001,442 | 2016-12-06T17:34:00.000 | 1 | 0 | 1 | 0 | python | 41,001,555 | 4 | false | 0 | 0 | Only slightly less new than you but I'll give this one a go, basically opening and closing are pretty different actions in a language like python. When you are opening the file what you are really doing is creating an object to be worked within your application that represents the file, so you create it with a function that informs the OS that the file has been opened and creates an object that python can use to read and write to the file. When it comes time to close the file what basically needs to be done is for your app to tell the OS that it is done with the file and dispose of the object that represented the file fro memory, and the easiest way to do that is with a method on the object itself. Also note that a syntax like "file".open would require the string type to include methods for opening files, which would be a very strange design and require a lot of extensions on the string type for anything else you wanted to implement with that syntax. close(file) would make a bit more sense but would still be a clunky way of releasing that object/letting the OS know the file was no longer open, and you would be passing a variable file representing the object created when you opened the file rather than a string pointing to the file's path. | 1 | 8 | 0 | Regarding syntax in Python
Why do we use open("file") to open but not "file".close() to close it?
Why isn't it "file".open() or inversely close("file")? | Python open() vs. .close() | 0.049958 | 0 | 0 | 3,290 |
41,001,551 | 2016-12-06T17:40:00.000 | 3 | 0 | 0 | 0 | python,django,bash,python-venv | 41,004,545 | 2 | true | 1 | 0 | run virtualenv venv in your desired directory
after it is installed, from the terminal run:
source your_folder/venv/bin/activate
now you should see (venv) before $ in the shell
that means your env is active
to install packages run pip install package_name
run pip freeze to get installed packages
go to the project folder that includes the manage.py file
run python manage.py runserver to make sure that everything runs fine
to access the django shell run python manage.py shell | 1 | 0 | 0 | I have been given an existing project to work on and I am really struggling to get the environment set up.
The project folder firstly contains manage.py server, which I use as an entry point to run the server.
There is also a venv folder which contains all the modules etc. I need.
So when I do runserver on manage.py, I get that "No module named sqlserver_ado.base". Even when I have activated the virtual environment and am in bash.... this module for instance is in venv folder in a venv\Lib\site-packages.
I am so very confused. I have also tried copying whatever modules are said to be missing and have ran into other issues this way also. | How to run manage.py inside venv? | 1.2 | 0 | 0 | 4,324 |
41,003,897 | 2016-12-06T20:05:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,cross-validation | 41,003,996 | 1 | false | 0 | 0 | If you run cross_val_predict then you can check the metric on the result. It is not a waste of compute time because cross_val_predict doesn't compute scores itself.
This won't give you per-fold scores though, only the aggregated score (which is not necessarily bad). I think you can workaround that by creating KFold / ... instance explicitly and then using it to split the cross_val_predict result. | 1 | 1 | 1 | I'm not sure whether I'm missing something really easy but I have a hard time trying to google something out.
I can see there are cross_val_score and cross_val_predict functions in scikit-learn. However, I can't find a way to get both score and predictions at one go. Seems quite obvious as calling the functions above one after another is a waste of computing time. Is there a cross_val_score_predict function or similar? | Scikit-learn - Cross validates score and predictions at one go? | 0.197375 | 0 | 0 | 80 |
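A sketch of the answer above: one cross_val_predict pass gives the out-of-fold predictions, the metric is computed on them afterwards, and the same KFold object recovers per-fold scores. Data and estimator are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold, cross_val_predict

X, y = np.random.rand(100, 4), np.random.rand(100)   # placeholder data
model = LinearRegression()
cv = KFold(n_splits=5)

preds = cross_val_predict(model, X, y, cv=cv)         # single pass over the folds
print("aggregated r2:", r2_score(y, preds))
for i, (_, test_idx) in enumerate(cv.split(X)):       # per-fold scores from the same predictions
    print("fold", i, r2_score(y[test_idx], preds[test_idx]))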
41,005,047 | 2016-12-06T21:15:00.000 | 6 | 0 | 1 | 0 | python,pycharm | 41,005,483 | 2 | false | 0 | 0 | Select the file in the Project or Project Files pane and then hit File>Save as... The save-as dialog that pops up has an option (checked by default) to "open copy in editor". Not exactly what you asked for but it's the easiest way I could find. | 2 | 9 | 0 | Just got started with pycharm and was wondering how can I simply copy a file in pycharm for editing purposes? For instance, I have a file opened, want to edit the code but want to make sure that I do not accidentally "over-save" the original file.
In other environments, I can simply right click the file, and copy it to a new file. I do not see a 'copy to new file' option in pycharm, but instead I do have to manually open a new file (File>New>Python..), and then manually copy all the code from the original file and paste it in the empty new file.
Am I missing something or is that not possible in pycharm? | Pycharm - how can I copy a file (tab) in pycharm (to a new tab)? | 1 | 0 | 0 | 2,801 |
41,005,047 | 2016-12-06T21:15:00.000 | 7 | 0 | 1 | 0 | python,pycharm | 41,005,579 | 2 | true | 0 | 0 | If I understood well, you are looking for something similar to a duplicate functionality.
So to do this, you can simply go to the project structure (the panel to the left) and copy the file in this way:
simply Ctrl + c
Right click on the chosen file > Copy
After this, paste your file in the directory that you want in this way:
simply Ctrl + v
Right click on the chosen directory > Paste
In this way you have the possibility to duplicate the chosen file with another name. | 2 | 9 | 0 | Just got started with pycharm and was wondering how can I simply copy a file in pycharm for editing purposes? For instance, I have a file opened, want to edit the code but want to make sure that I do not accidentally "over-save" the original file.
In other environments, I can simply right click the file, and copy it to a new file. I do not see a 'copy to new file' option in pycharm, but instead I do have to manually open a new file (File>New>Python..), and then manually copy all the code from the original file and paste it in the empty new file.
Am I missing something or is that not possible in pycharm? | Pycharm - how can I copy a file (tab) in pycharm (to a new tab)? | 1.2 | 0 | 0 | 2,801 |
41,006,591 | 2016-12-06T23:06:00.000 | 1 | 0 | 0 | 0 | python,django,django-models,solr,django-haystack | 41,016,811 | 1 | true | 1 | 0 | There is no need to keep updating an expires_in field in your database - keep an expires_at with the time when the ad expires, and calculate the time left in your retrieval method in your model or in your view.
This way you'll avoid having to write more data to your database as traffic increases, and if the expiry date changes you won't run into a possible race condition if people are viewing the page at the same time while you're updating the expiry time. | 1 | 0 | 0 | I use Django as backend for my web-app and django-haystack(with Solr) for searching & displaying results.
I use the RealTimeSignalProccessor form django-haystack , but I have one problem:
- I have an Auction model with an expires DateTimeField. When I'm displaying the results I'm doing it similar to eBay (ex. Expires in: 1h 23m 5s).
The problem is that on the page where all Auctions are displayed, if you want to update the Expires in parameter every time you visit this view (as I've read in the django-haystack documentation), you'll have to use the object.save() method to update the Solr indexing database. But if I do that for 30 results every time I go to that view where all auctions are listed, it's very slow and not efficient.
Is there any other solution ? What do you suggest ? | Optimize displaying results with django-haystack RealTimeSignalProcessor | 1.2 | 0 | 0 | 78 |
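A sketch of the accepted answer's suggestion: store expires_at and compute the remaining time at read time, so nothing is saved and the Solr index is never touched. Field and property names are assumptions; a template can then show {{ auction.time_left }}.
from datetime import timedelta
from django.db import models
from django.utils import timezone

class Auction(models.Model):
    expires_at = models.DateTimeField()

    @property
    def time_left(self):
        # recomputed on every access, no save() and no re-indexing needed
        return max(self.expires_at - timezone.now(), timedelta(0))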
41,010,560 | 2016-12-07T06:09:00.000 | 2 | 0 | 0 | 1 | python,apache-spark,celery,distributed,jobs | 41,021,060 | 2 | false | 1 | 0 | Adding to the above answer, there are other areas also to identify.
Integration with the existing big data stack if you have.
Data pipeline for ingestion
You mentioned "backend for web application". I assume it's for read operations. The response times for any batch application might not be a good fit for a web application.
The choice of streaming can help you get the data into the cluster faster. But it will not guarantee the response times needed for a web app. You need to look at HBase and Solr (if you are searching).
Spark is undoubtedly better and faster than other batch frameworks. In streaming there may be a few others. As I mentioned above, you should consider the parameters on which your choice is made. | 2 | 2 | 1 | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think spark is better in scalability perspective. But is it OK Spark as backend of web-application? | For distributing calculation task, which is better celery or spark | 0.197375 | 0 | 0 | 3,004 |
41,010,560 | 2016-12-07T06:09:00.000 | 1 | 0 | 0 | 1 | python,apache-spark,celery,distributed,jobs | 41,012,633 | 2 | true | 1 | 0 | Celery: a really good technology for distributed streaming, and it supports Python, which is itself strong in computation and easy to write. Streaming applications in Celery support many features as well, with little overhead on the CPU.
Spark: it supports various programming languages (Java, Scala, Python). It is not pure streaming but micro-batch streaming, as per the Spark documentation.
If your task can only be fulfilled by streaming and you don't need the SQL-like features, then Celery will be the best. But if you need various features along with streaming, then Spark will be better. In that case consider how many batches per second your application will generate. | 1 | 2 | 1 | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think spark is better in scalability perspective. But is it OK Spark as backend of web-application? | For distributing calculation task, which is better celery or spark | 1.2 | 0 | 0 | 3,004 |
41,017,578 | 2016-12-07T12:32:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 41,017,686 | 1 | false | 0 | 0 | Just use {} and [] and forget about dict() and list() | 1 | 0 | 0 | I've heard that construction {} works much faster then dict(). But does that mean I have to write {} everywhere ? | Is there any reason in writing dict() instead of {}? | 0 | 0 | 0 | 89 |
41,020,061 | 2016-12-07T14:30:00.000 | 6 | 0 | 1 | 0 | python | 41,020,095 | 2 | true | 0 | 0 | No, they are not.
But slicing [0:1] will always give you a subset of the iterable. If you want to just get the element, just do so: t[0]. That is an integer, not a tuple. | 1 | 3 | 0 | I'm slightly confused over tuple notation. Is each of the elements in a tuple treated as a tuple or as whatever type that element is?
For example, for the tuple t = (1,2,3,4), 1 in t is True, which means that the int 1 is in the tuple t.
However, if I evaluate t[0:1], we get the tuple (1,).
Even more confusing is the fact that (1,) in t is False.
What's going on here? Which is it; are the elements of t tuples or integers? | Are the elements of tuples, themselves tuples? | 1.2 | 0 | 0 | 51 |
41,020,807 | 2016-12-07T15:03:00.000 | 0 | 1 | 0 | 0 | python,django,django-templates,django-views | 41,021,233 | 2 | false | 1 | 0 | In your views you can handle any incoming get/post requests. Based on the handler for that button (and the button obviously must send something to the server) you can call any function. | 1 | 0 | 0 | Let's say that I have a python function that only sends email to myself, that I want to call whenever the user clicks on a button in a template, without any redirection (maybe just a popup message).
Is there a way to do that? | Django: call python function when clicking on button | 0 | 0 | 0 | 981 |
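A hedged sketch of the answer above: the button sends a request to a small view that calls the function and returns JSON, so no redirect happens. The URL, view name, and send_email_to_myself are placeholders; the view still needs a urls.py entry and the CSRF token Django expects.
from django.http import JsonResponse

def send_email_to_myself():
    pass  # stand-in for the real mail-sending function

def send_mail_view(request):
    if request.method == "POST":
        send_email_to_myself()
        return JsonResponse({"ok": True})
    return JsonResponse({"ok": False}, status=405)

# template side, posting without leaving the page:
# <button onclick="fetch('/send-mail/', {method: 'POST', headers: {'X-CSRFToken': token}})
#                  .then(() => alert('sent'))">Email me</button>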
41,021,109 | 2016-12-07T15:17:00.000 | 3 | 1 | 0 | 1 | python,user-interface,terminal,raspberry-pi | 41,022,529 | 2 | false | 0 | 1 | Without knowing your Pi setup it's a bit difficult. But with the assumption you're running raspbian with its default "desktop" mode:
Open a terminal on your Pi, either by sshing to it or connecting a monitor/keyboard.
First we need to allow you to login automatically, so sudo nano /etc/inittab to open the inittab for editing.
Find the line 1:2345:respawn:/sbin/getty 115200 tty1 and change it to #1:2345:respawn:/sbin/getty 115200 tty1
Under that line, add 1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1. Type Ctrl+O and then Ctrl+X to save and exit
Next, we can edit the rc.local. sudo nano /etc/rc.local
Add a line su -l pi -c startx (replacing pi with the username you want to launch as) above the exit 0 line. This will launch X on startup, which allows other applications to use graphical interfaces.
Add the command you'd like to run below the previous line (e.g python /path/to/mycoolscript.py &), but still above the exit 0 line.
Note the & included here. This "forks" the process, allowing other commands to run even if your script hasn't exited yet. Ctrl+O and Ctrl+X again to save and exit.
Now when you power on your Pi, it'll automatically log in, start X, and then launch the python script you've written!
Also, my program requires an internet connection on execution but pi connects to wifi later and my script executes first and ends with not connecting to the internet.
This should be solved in the script itself. Create a simple while loop that checks for internet access, waits, and repeats until the wifi connects. | 1 | 2 | 0 | I want to run a python script which executes a GUI on startup(as pi boots up). But I don't see any GUI on screen but when I open terminal my program executes automatically and GUI appears. Also, my program requires an internet connection on execution but pi connects to wifi later and my script executes first and ends with not connecting to the internet.
Is there any way my python script executes after pi boots up properly and pi connected with internet | raspberry pi : Auto run GUI on boot | 0.291313 | 0 | 0 | 7,947 |
41,022,923 | 2016-12-07T16:45:00.000 | 2 | 1 | 0 | 0 | python,pic,pyserial,ftdi | 41,058,265 | 1 | false | 1 | 0 | So I found that the rounded numbers didn't work i.e. 100000, 200000, 250000 but the multiples of 115200 do. i.e. 230400, 460800
I tried to use 230400 at first but the baud rate my microcontroller can produce is either 235294 or 222222. 235294 yields an error of -2.1% and 222222 yields an error of 3.55%. I naturally picked the one with the lower error; however, it didn't work, and at first I didn't bother trying 222222. For some reason 222222 works while 235294 doesn't, though. So I don't actually have to use the 250000 baud rate I initially thought I'd have to.
I still don't know why pyserial doesn't work with those baud rates when putty does, so clearly my laptop can physically do it. Anyway will know in future to try more standard baud rates as well as when using microcontrollers which can't produce the exact baud rate required to try frequencies both above and below. | 1 | 3 | 0 | I'm trying to get pySerial to communicate with a microcontroller over an FTDI lead at a baud rate of 500,000. I know my microcontroller and FTDI lead can both handle it as can my laptop itself, as I can send to a putty terminal at that baud no problem. However I don't get anything when I try to send stuff to my python script with pySerial, although the same python code works with a lower baud rate.
The pySerial documentation says:
"The parameter baudrate can be one of the standard values: 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 115200. These are well supported on all platforms.
Standard values above 115200, such as: 230400, 460800, 500000, 576000, 921600, 1000000, 1152000, 1500000, 2000000, 2500000, 3000000, 3500000, 4000000 also work on many platforms and devices."
So, I'm assuming why it's not working is because my system doesn't support it, but how do I check what values my system supports/is there anything I can do to make it work? I unfortunately do need to transmit at least 250,000 and at a nice round number like 250,000 or 500000 (to do with clock error on the microcontroller).
Thanks in advance for your help! | PySerial - Max baud rate for platform (windows) | 0.379949 | 0 | 0 | 3,411 |
41,023,513 | 2016-12-07T17:16:00.000 | 0 | 0 | 1 | 0 | python,pycharm,remote-debugging | 53,441,266 | 6 | false | 0 | 0 | Turning off the firewall addressed the problem in my case (macOS - Mojave). Note that this is not a general solution as it was not tested in any other environments/OS. | 3 | 22 | 0 | When I start PyCharm for remote python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine IP is the same and already containing previously uploaded helpers. Is the behaviour correct? | pycharm always "uploading pycharm helpers" to same remote python interpreter when starts | 0 | 0 | 0 | 10,073 |
41,023,513 | 2016-12-07T17:16:00.000 | 1 | 0 | 1 | 0 | python,pycharm,remote-debugging | 54,241,936 | 6 | false | 0 | 0 | Note that -- at least as late as version 2018.3.x -- PyCharm also appears to require re-uploading of the helpers when the local network connection changes as well, for some reason.
What I've observed in my case is that if, while PyCharm remains running, I relocate my laptop and connect to a different LAN, the next remote debugging session I initiate will trigger the lengthy helper upload. It turns out that the contents of the helpers directory actually uploaded in this case are exactly identical to the contents already present in that directory on the remote system (I compared them), so this upload is entirely superfluous, but PyCharm isn't able to detect this.
As there's no way I know of in PyCharm to bypass or cancel automatic helpers upload, the only recourse is to completely exit from PyCharm (close all open project windows) after each change of network connection and restart the IDE. In my experience, this will cause the helper upload to succeed in the "checking remote helpers" phase, before actually uploading all the helpers again. Of course, this is major annoyance if you have multiple projects open, but it's faster than waiting the (tens of) minutes for the agonizingly slow helpers upload to complete.
All of what other responders describe for the course of action to take when changing PyCharm versions is true. It is sufficient to use rsync, ftp, scp, or whatever to transfer the contents of the new local helpers directory (on Linux, a subdirectory of where the app is installed) to the remote system (on Linux, ~/.pycharm_helpers, where ~ is the home directory of the user name used for the remote debugging session), and update the remote build.txt in the helpers directory with the new PyCharm version. | 3 | 22 | 0 | When I start PyCharm for remote python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine IP is the same and already containing previously uploaded helpers. Is the behaviour correct? | pycharm always "uploading pycharm helpers" to same remote python interpreter when starts | 0.033321 | 0 | 0 | 10,073 |
41,023,513 | 2016-12-07T17:16:00.000 | 2 | 0 | 1 | 0 | python,pycharm,remote-debugging | 70,324,377 | 6 | false | 0 | 0 | In my case, several projects are deployed to the remote server by PyCharm. All of them get stuck when one of the projects goes wrong on the remote server. Solution: keep open only the one you need to work on and restart PyCharm via "Invalidate Caches".
41,024,605 | 2016-12-07T18:21:00.000 | 3 | 1 | 1 | 0 | python | 41,024,664 | 1 | true | 0 | 0 | In most test runners, a test failure is indicated by raising an exception - which is what the assert statement does if its expression evaluates to False.
Thus assert(1 == 0) will fail, and abort that specific test with an AssertionError exception. This is caught by the test framework, and the test is marked as having failed.
The framework/test-runner then moves on to the next test. | 1 | 0 | 0 | I have gone through the source code of many functional test cases written in Python. Much of the code uses assert for testing equality - why is that? | Use of assert statement in test cases written in python | 1.2 | 0 | 0 | 224
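A small sketch of the idea (the add function and test name are made up for illustration): a failing assert raises AssertionError, which a runner such as pytest reports as a test failure.

    def add(a, b):
        return a + b

    def test_add():
        assert add(2, 2) == 4   # passes: nothing happens
        assert add(1, 0) == 0   # fails: raises AssertionError, test is marked as failed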
41,025,035 | 2016-12-07T18:46:00.000 | 0 | 0 | 1 | 0 | python,goto | 41,025,132 | 3 | false | 0 | 0 | There is no goto built into Python. There are ways to effectively 'halt' in a method by using yield and creating a generator, which is essentially how Python coroutines work (see the asyncio module); however, this isn't really appropriate for your needs.
For saving game state, serialising the state you need to resume the gameplay is a much better and more general idea. You could use pickle for this serialisation. | 1 | 0 | 0 | I'm coding a text game in Python 3.4, and when I thought about adding a save-game feature, this question came up:
How can I jump to the place that the player stopped?
My friends and I are making a simple game, and I just want to jump to a certain part of the code; I can't do that without making around 15 copies of the code, so can I jump to a line? | Goto\Jump in Python | 0 | 0 | 0 | 2,080
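A hedged sketch of the pickle-based save approach suggested above; the state dictionary and file name are invented for illustration:

    import pickle

    game_state = {"scene": "castle_gate", "health": 72, "inventory": ["torch", "key"]}

    # Save the state instead of trying to jump back into the code later.
    with open("savegame.pkl", "wb") as f:
        pickle.dump(game_state, f)

    # On the next run, load it and dispatch to the right part of the game.
    with open("savegame.pkl", "rb") as f:
        restored = pickle.load(f)
    print(restored["scene"])  # e.g. "castle_gate"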
41,030,128 | 2016-12-08T01:10:00.000 | 0 | 0 | 1 | 0 | string,python-3.x,character-encoding | 42,528,593 | 3 | false | 0 | 0 | I am far from a Python expert, but str('yadayada').encode('utf-8).decode('utf-8) contains syntax errors: the closing quote after each 'utf-8' is missing. str('yadayada').encode('utf-8').decode('utf-8') (note the closing quotes) works fine. | 1 | 11 | 0 | I have some code that grabs strings from one environment and reproduces them in another. I am using Python 3.5. I keep running into this kind of error:
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2013' in
position 112: Body ('–') is not valid Latin-1. Use
body.encode('utf-8') if you want to send it encoded in UTF-8.
...and I want to avoid it. This error is coming from the requests module. The problem is that I am dealing with literally tens of thousands of strings and new ones are added all the time. People are cutting and pasting from Excel and whatnot - and I have no idea what characters I will bump into, so I can't just run a str.replace(). I would like to make sure that every string I get from environment 1 is properly UTF-8 encoded before I send it to environment 2.
I tried str('yadayada').encode('utf-8).decode('utf-8) and that didn't work. I tried str('yadaya', 'utf-8') and that didn't work. I tried declaring "# -*- coding: UTF-8 -*-" and that didn't work. | str encoding from latin-1 to utf-8 arbitrarily | 0 | 0 | 0 | 21,059 |
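A hedged sketch of the fix the error message itself suggests - encode the text to UTF-8 bytes before handing it to requests; the URL, header and payload here are assumptions for illustration:

    import requests

    text = "dash \u2013 here"   # contains U+2013, which Latin-1 cannot represent

    # Passing bytes (plus an explicit charset header) avoids the Latin-1
    # encoding error shown in the question.
    resp = requests.post(
        "https://example.com/api",
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain; charset=utf-8"},
    )
    print(resp.status_code)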
41,031,326 | 2016-12-08T03:33:00.000 | 0 | 0 | 1 | 0 | python,database,file,cloud,storage | 41,031,479 | 1 | false | 0 | 0 | You could install gsutil and the boto library and use that. | 1 | 0 | 0 | I currently have a Python program which reads a local file (containing a pickled database object) and saves to that file when it's done. I'd like to branch out and use this program on multiple computers accessing the same database, but I don't want to worry about synchronizing the local database files with each other, so I've been considering cloud storage options. Does anyone know how I might store a single data file in the cloud and interact with it using Python?
I've considered something like Google Cloud Platform and similar services, but those seem to be more server-oriented whereas I just need to access a single file on my own machines. | Access a Cloud-stored File using Python 3? | 0 | 1 | 0 | 57 |
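A hedged sketch of one way to do this, using boto3 (the successor to the boto library mentioned above) with an S3 bucket as the example backend; the bucket and key names are made up, and a Google Cloud Storage client could be used in much the same way:

    import pickle
    import boto3

    s3 = boto3.client("s3")

    # Download the shared database file, work on it locally, then upload it back.
    s3.download_file("my-shared-bucket", "game.db.pkl", "local.db.pkl")
    with open("local.db.pkl", "rb") as f:
        db = pickle.load(f)

    # ... modify db ...

    with open("local.db.pkl", "wb") as f:
        pickle.dump(db, f)
    s3.upload_file("local.db.pkl", "my-shared-bucket", "game.db.pkl")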
41,038,457 | 2016-12-08T11:32:00.000 | 0 | 1 | 0 | 0 | python,midi,music21 | 41,043,154 | 3 | false | 0 | 0 | The SMF (Standard MIDI File) format has no restrictions on how events are organized into tracks. It is common to have one track per channel, but it's also possible to have multiple channels in one track, or multiple tracks with events for the same channel.
The organization of the tracks is entirely determined by humans. It is unlikely that you can write code that can correctly determine how some random brain works.
All you have to go on are conventions (e.g., melody is likely to be in the first track(s), or has a certain structure), but you have to know if these conventions are actually used in the files you're handling. | 1 | 3 | 0 | I'm processing a bulk of midi files that are made for existing pop songs using music21.
While channel 10 is reserved for percussion, melodic tracks are spread across different channels, so I was wondering if there is an efficient way to pick out the main melody (vocal) track.
I'm guessing one way to do it is to pick a track that consists of single notes rather than overlapping harmonies (chords), and one that plays throughout the song, but is there any other efficient way? | music21: picking out melody track | 0 | 0 | 0 | 1,661
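A hedged music21 sketch of the heuristic described in the question - prefer the part with the highest proportion of single notes (fewest chords); the file name is an assumption and this is not a reliable melody detector:

    from music21 import converter, chord

    score = converter.parse("song.mid")

    best_part, best_ratio = None, -1.0
    for part in score.parts:
        notes = list(part.recurse().notes)      # Note and Chord objects
        if not notes:
            continue
        single = sum(1 for n in notes if not isinstance(n, chord.Chord))
        ratio = single / len(notes)
        if ratio > best_ratio:
            best_part, best_ratio = part, ratio

    print(best_part, best_ratio)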