Dataset schema (column: dtype, observed range):

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 or 1
Python Basics and Environment: int64, 0 or 1
System Administration and DevOps: int64, 0 or 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 or 1
GUI and Desktop Applications: int64, 0 or 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 or 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 or 1
Networking and APIs: int64, 0 or 1
ViewCount: int64, 8 to 6.81M
47,360,213
2017-11-17T22:28:00.000
2
1
1
0
python
47,360,365
1
true
0
0
Yes. Why? Portability: Python scripts in Linux environments are recognized as such, but NT is... special. If there's ever a need to move the script to anything made by Microsoft, you need the extension. Readability: appending a filename extension makes it obvious what kind of file it is when others look in your directories. Usability: if you ever need to import it into Python as a module, you NEED the .py extension. Convention: it is standard for all Python scripts to have the .py or .pyc extension.
1
2
0
I'm writing a Python script that is only meant to be run from the console, e.g. $> myscript. I'm wondering what the leading convention in Python is for scripts of this sort. Should I call it myscript, or myscript.py?
Should executable Python script be named with .py?
1.2
0
0
612
47,360,357
2017-11-17T22:43:00.000
2
0
1
0
python,python-3.x,file-io,stream,extract
47,361,037
2
true
0
0
Try io.TextIOWrapper to wrap the io.BufferedReader returned by extractfile.
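A minimal, self-contained sketch of this suggestion (the member name notes.txt is made up for illustration; a small archive is built in memory so the example runs on its own):

```python
import io
import tarfile

# Build a small tar archive in memory so the example is self-contained.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = "hello\nworld\n".encode("utf-8")
    info = tarfile.TarInfo(name="notes.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# extractfile() returns a binary file object; wrap it for text I/O.
with tarfile.open(fileobj=buf, mode="r") as tar:
    binary = tar.extractfile("notes.txt")
    text = io.TextIOWrapper(binary, encoding="utf-8")
    lines = text.readlines()

print(lines)  # ['hello\n', 'world\n']
```

The wrapper streams and decodes on the fly, so no temporary file on disk is needed.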
1
2
0
Is there a simple way to extract a text file from a tar file as a file object of text I/O in python 3.4 or later? I am revising my python2 code to python3, and I found TarFile.extractfile, which used to return a file object with text I/O, now returns a io.BufferedReader object which seems to have binary I/O. The other part of my code expects a text I/O, and I need to absorb this change in some way. One method I can think of is to use TarFile.extract and write the file to a directory, and open it by open function, but I wonder if there is a way to get the text I/O stream directly.
Extracting a text file from tar with tarfile module in python3
1.2
0
0
1,210
47,361,305
2017-11-18T00:42:00.000
0
0
1
1
python,azure
47,396,410
4
false
0
0
Thank you guys for the response. This one works: file_service.get_share_stats("myShareName"). The only thing is that it does not return the exact size; it rounds the size up to the nearest gigabyte.
1
0
0
My file is stored in the Microsoft Azure file service. Question How can I get the size of the file?
Size of a file stored at azure file service
0
0
0
1,697
47,361,570
2017-11-18T01:31:00.000
0
0
0
0
python,dask
47,420,745
2
true
0
0
I'm not sure this is the "best" way, but here's how I ended up doing it: create a pandas DataFrame whose index is the series of index keys I want to keep (e.g., pd.DataFrame(index=overlap_list)), then inner-join the Dask DataFrame with it.
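A sketch of the same join trick, shown here with plain pandas (whose API dask.dataframe mirrors for this operation); the names cache and overlap_list come from the question, the data values are made up:

```python
import pandas as pd

# Stand-in for the Dask dataframe from the question (plain pandas here,
# since dask.dataframe mirrors this part of the pandas API).
cache = pd.DataFrame({"val": [10, 20, 30, 40]}, index=["a", "b", "c", "d"])

# Index keys we want to keep.
overlap_list = ["b", "d"]

# Build a keys-only frame and inner-join: only rows whose index appears
# in overlap_list survive.
keys = pd.DataFrame(index=overlap_list)
subset = cache.join(keys, how="inner")

print(subset["val"].tolist())
```

With a real Dask dataframe the join is lazy and executes when you call `.compute()`.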
1
3
1
I'd like to take a subset of rows of a Dask dataframe based on a set of index keys. (Specifically, I want to find rows of ddf1 whose index is not in the index of ddf2.) Both cache.drop([overlap_list]) and diff = cache[should_keep_bool_array] either throw a NotImplementedException or otherwise don't work. What is the best way to do this?
Dask: subset (or drop) rows from Dataframe by index
1.2
0
0
1,328
47,362,271
2017-11-18T03:44:00.000
1
0
1
0
python,pip
47,368,605
1
true
0
0
As @metatoaster suggested, python setup.py develop reflects changes immediately in the environment and makes the new functions available. I haven't tried @Paul H's suggestion, which is pip install -e . Thank you both for your comments; the problem is solved.
1
3
0
I created and installed a Python package as follows: I coded a bunch of functions in an __init__.py file and ran python setup.py install dist to create a tar.gz, which was installed through pip. Everything works well and I can import the package and the functions. I decided to add a new function to the __init__ file and redid the whole procedure described above to reinstall (or update) my package. The new function doesn't seem to be available when importing the package, even after the update. Any ideas on how to update my package?
Python: upgrade my own package
1.2
0
0
931
47,362,304
2017-11-18T03:51:00.000
3
0
1
0
python,python-3.x,dictionary,expression
47,362,340
3
true
0
0
Python uses the same equality test that the == operator uses. All of the keys you're using (1, True, 1.0, and 1.00) compare as equal to each other. What's happening with your dict is that it retains the first key, then updates the value associated with that key for each subsequent key that compares as equal to it. It's a bit unintuitive, since the key values are not the same (except for 1.0 and 1.00), but they are "equal". Similarly, {1: 'd'}[True] evaluates to 'd' because True == 1.
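The behavior described above can be verified directly in a few lines:

```python
# All four keys compare equal (True == 1 == 1.0), so the dict keeps the
# first key object written and the last value written to that slot.
d = {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'}
print(d)        # {1: 'd'}

# Lookup by any "equal" key hits the same slot.
print(d[True])  # d
print(d[1.0])   # d

# hash() is consistent with ==, which is why all four land in one slot:
print(hash(1) == hash(True) == hash(1.0))  # True
```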
3
1
0
When I enter the expression {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} into the Python shell, I get back {1: 'd'}. But when I write {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} [True], the interpreter returns me 'd'. I don't understand how this dictionary evaluation works.
Why does "{1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'}" evaluate to "{1: 'd'}"?
1.2
0
0
300
47,362,304
2017-11-18T03:51:00.000
1
0
1
0
python,python-3.x,dictionary,expression
47,362,335
3
false
0
0
All of the values 1, True, 1.0 and 1.00 are equal (1.0 and 1.00 are the exact same value). So they are all considered the same key in the dictionary. You can't have a dict with multiple equal keys.
3
1
0
When I enter the expression {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} into the Python shell, I get back {1: 'd'}. But when I write {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} [True], the interpreter returns me 'd'. I don't understand how this dictionary evaluation works.
Why does "{1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'}" evaluate to "{1: 'd'}"?
0.066568
0
0
300
47,362,304
2017-11-18T03:51:00.000
1
0
1
0
python,python-3.x,dictionary,expression
47,362,339
3
false
0
0
Python doesn't support duplicate keys in a dictionary. In the example, all the keys are the same (True == 1 evaluates to True), so Python discards every key-value pair except the last one.
3
1
0
When I enter the expression {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} into the Python shell, I get back {1: 'd'}. But when I write {1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'} [True], the interpreter returns me 'd'. I don't understand how this dictionary evaluation works.
Why does "{1: 'a', True: 'b', 1.0: 'c', 1.00: 'd'}" evaluate to "{1: 'd'}"?
0.066568
0
0
300
47,367,180
2017-11-18T14:31:00.000
0
0
0
0
python,tensorflow,tensorflow-datasets
52,067,484
1
false
0
0
You can try two options: (1) Write a generator and then use Dataset.from_generator: in your generator you can read your file line by line, appending to your example as you go, and yield when you encounter your custom delimiter. (2) First parse your file, create tf.train.SequenceExample objects spanning multiple lines, and then store your dataset as a TFRecordDataset (the more cumbersome option, in my opinion).
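A sketch of the first option. The delimiter token "<END>" is made up for illustration; the generator itself is plain Python, and the trailing comment shows roughly how it would feed tf.data.Dataset.from_generator:

```python
# Accumulate lines until a custom delimiter, yielding one multi-line
# example at a time.
def multiline_examples(lines, delimiter="<END>"):
    buf = []
    for line in lines:
        line = line.rstrip("\n")
        if line == delimiter:
            if buf:
                yield "\n".join(buf)
            buf = []
        else:
            buf.append(line)
    if buf:  # trailing example without a final delimiter
        yield "\n".join(buf)

text = ["first line\n", "second line\n", "<END>\n", "only line\n", "<END>\n"]
examples = list(multiline_examples(text))
print(examples)  # ['first line\nsecond line', 'only line']

# With TensorFlow installed this would plug into the Dataset API roughly as:
#   ds = tf.data.Dataset.from_generator(
#       lambda: multiline_examples(open("corpus.txt")),
#       output_types=tf.string)
```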
1
1
1
tf.data.* has dataset classes. There is a TextLineDataset, but what I need is one for multiline text (between start/end tokens). Is there a way to use a different line-break delimiter for tf.data.TextLineDataset? I am an experienced developer, but a python neophyte. I can read but my writing is limited. I am bending an existing Tensorflow NMT tutorial to my own dataset. Most TFRecord tutorials involve jpgs or other structured data.
Multi-line text dataset in Tensorflow
0
0
0
805
47,368,296
2017-11-18T16:24:00.000
14
0
0
0
python,pandas,csv,floating-point,rounding
57,801,832
2
false
0
0
I realise this is an old question, but maybe this will help someone else: I had a similar problem, but couldn't quite use the same solution. Unfortunately the float_precision option only exists when using the C engine and not with the Python engine. So if you have to use the Python engine for some other reason (for example because the C engine can't deal with regex literals as delimiters), this little "trick" worked for me: in the pd.read_csv arguments, define dtype='str' and then convert your dataframe to whatever dtype you want, e.g. df = df.astype('float64'). Bit of a hack, but it seems to work. If anyone has any suggestions on how to solve this in a better way, let me know.
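A small sketch of the str-then-astype workaround (column names and values are made up; with the C engine, pandas' float_precision="round_trip" option is the more direct fix):

```python
import io
import pandas as pd

csv = io.StringIO("a,b\n2470.691137,1\n2484.306910,2\n")

# Read everything as strings (works with either parser engine), then
# convert the numeric column explicitly.
df = pd.read_csv(csv, dtype=str)
df["a"] = df["a"].astype("float64")

print(df["a"].tolist())
```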
1
24
1
I have a csv file containing numerical values such as 1524.449677. There are always exactly 6 decimal places. When I import the csv file (and other columns) via pandas read_csv, the column automatically gets the datatype object. My issue is that the values are shown as 2470.6911370000003 which actually should be 2470.691137. Or the value 2484.30691 is shown as 2484.3069100000002. This seems to be a datatype issue in some way. I tried to explicitly provide the data type when importing via read_csv by giving the dtype argument as {'columnname': np.float64}. Still the issue did not go away. How can I get the values imported and shown exactly as they are in the source csv file?
Pandas read csv file with float values results in weird rounding and decimal digits
1
0
0
30,458
47,369,737
2017-11-18T18:44:00.000
0
0
1
1
django,python-3.x,virtualenv
54,036,952
6
false
0
0
Try running the command virtualenv -p python env, i.e. without specifying python3. I had the same problem while using PyCharm, and after running this command in the terminal the error was resolved.
3
19
0
I have added python36/Scripts in the environment variable's path file and python36 as well is added. But it still shows the following error Command = C:\Users\Sonalika\dev\trydjango1-11>virtualenv -p python3 Error I receive: The path python3 (from --python=python3) does not exist
"The path python3 (from --python=python3) does not exist" error
0
0
0
52,786
47,369,737
2017-11-18T18:44:00.000
0
0
1
1
django,python-3.x,virtualenv
54,619,898
6
false
0
0
When you open your Command Prompt window, make sure to select "Run as administrator"
3
19
0
I have added python36/Scripts in the environment variable's path file and python36 as well is added. But it still shows the following error Command = C:\Users\Sonalika\dev\trydjango1-11>virtualenv -p python3 Error I receive: The path python3 (from --python=python3) does not exist
"The path python3 (from --python=python3) does not exist" error
0
0
0
52,786
47,369,737
2017-11-18T18:44:00.000
2
0
1
1
django,python-3.x,virtualenv
54,673,747
6
false
0
0
If you already have Python in your PATH, it works by default with python, not python3. You just have to run virtualenv -p python env.
3
19
0
I have added python36/Scripts in the environment variable's path file and python36 as well is added. But it still shows the following error Command = C:\Users\Sonalika\dev\trydjango1-11>virtualenv -p python3 Error I receive: The path python3 (from --python=python3) does not exist
"The path python3 (from --python=python3) does not exist" error
0.066568
0
0
52,786
47,370,367
2017-11-18T19:52:00.000
1
0
1
0
python,django,pip,virtualenv
47,370,397
1
false
1
0
You can't tell pip where to install packages. There's a standard place for packages, and that's where it installs them. pip makes sure it installs utilities (such as django-admin) in the path, so when the virtual environment is activated you can run it.
1
1
0
I am new to python/django/pip. I created a virtualenv and I was in the bin dir when I did a pip install django. It created the django files there which is not what I wanted. So I did pip uninstall django. Then I created a folder called web in my virtualenv root and tried pip install again. This time it said loading from cache and installed django in bin folder again. So I deleted it and tried again with --no-cache-dir. This downloaded django fresh, but I am still finding the installation in bin directory! This is driving me crazy, how can I get it to install in the web directory? Any help much appreciated.
Django not installing in folder where I want
0.197375
0
0
801
47,373,656
2017-11-19T04:22:00.000
0
0
0
0
python,sql,django
47,373,721
3
false
1
0
The answer is Django Celery. It's for any task that needs to run on a condition or schedule. Look at Celery's beat functionality for your use case.
2
0
0
I'm making an elimination style voting website. People log in, vote for their least favorite participant, and at the end of the day the person with the most votes goes inactive. Almost everything works: The log in, the voting, etc. But I have no idea how to make a program that checks for the most votes and alters the database to change the status of a participant at a particular time without needing a user to enter the website. I don't even know where to put the code. And how would I make sure it's constantly running? The way I see it, views.py only works when a user goes to the URL, so if no one visited the website at the time of vote tallying it wouldn't work, so that's a no no. I could make a script outside of the Django project that does this, and then run it with nohup &, but then I'd lose the model notation and would have to make manual queries, plus I'm sure there's a better, more Django way to do this. Any solutions to this problem? Or maybe some direction you can point me in?
How to make a code that alters database in background Django
0
0
0
41
47,373,656
2017-11-19T04:22:00.000
1
0
0
0
python,sql,django
47,373,793
3
false
1
0
If your scenario is just to update the database based on the number of votes, you can straight away go for cron. Write a script that checks for the votes and changes the status of the user in the database. Schedule it in cron to run it once at the end of the day / poll. Place the script in a directory that is not reachable by outsiders. cron entry for a script to run once a day at 23:30 hrs: 30 23 * * * python /root/scripts/status_change_script.py
2
0
0
I'm making an elimination style voting website. People log in, vote for their least favorite participant, and at the end of the day the person with the most votes goes inactive. Almost everything works: The log in, the voting, etc. But I have no idea how to make a program that checks for the most votes and alters the database to change the status of a participant at a particular time without needing a user to enter the website. I don't even know where to put the code. And how would I make sure it's constantly running? The way I see it, views.py only works when a user goes to the URL, so if no one visited the website at the time of vote tallying it wouldn't work, so that's a no no. I could make a script outside of the Django project that does this, and then run it with nohup &, but then I'd lose the model notation and would have to make manual queries, plus I'm sure there's a better, more Django way to do this. Any solutions to this problem? Or maybe some direction you can point me in?
How to make a code that alters database in background Django
0.066568
0
0
41
47,376,246
2017-11-19T11:20:00.000
1
0
1
0
python-3.x,pycharm,raspbian,raspberry-pi3
57,246,451
1
false
0
0
That's because you haven't set a usable repository URL. Do it now: File --> Settings --> Project XXX --> Project Interpreter --> Add (plus icon on the right) --> Manage Repositories --> Add (plus icon on the right). Now enter a working repository URL and click "OK". Or perhaps you access the network through a proxy? Search "proxy" in the settings page, then configure it there.
1
1
0
I am getting the error: Error loading package list:pypi.python.org.. when I try to install new packages. Please can anyone help me to solve the error.
Error loading package list:pypi.python.org in Pycharm 2017.2.4 with Python 3.5
0.197375
0
0
3,517
47,377,617
2017-11-19T13:50:00.000
3
1
0
0
python,obd-ii
47,616,373
2
true
1
0
Some cars will report battery level on PID 0x5B of mode 1.
1
1
0
I'm looking for a way to get the battery level in EV using OBD2 dongle, I manage to get the fuel level in Toyota and Ford but I need to get data from the battery management systems (BMS) on EV. I'm using python obd libraries and I have both comma.ai panda and OBDLink-LX dongles. thanks, Avi
obd2 battery level on electric vehicle
1.2
0
0
1,579
47,378,315
2017-11-19T15:03:00.000
0
1
0
0
r,python-3.x,google-translate
47,410,428
1
false
0
0
Just as an update, for anyone seeking to translate dataframes/variables: translateR doesn't seem to accept the Google API key. However, everything works very smoothly using the googleLanguageR package.
1
2
0
I understand from previous questions that access to the Google Translate API may have recently changed. My goal is to translate individual tweets in a dataframe from language X to English. I tried to set up the Google Cloud Translate API with no success. I have set up the gcloud SDK, enabled billing, and the certification looks OK. But still no success. Has anyone else had any recent experience with it using R and/or Python?
Google Cloud Translate API : challenge setup (R and/or Python)
0
0
1
200
47,380,618
2017-11-19T18:43:00.000
0
1
0
0
python,api,telegram,telethon
47,380,957
1
false
0
0
There is an error in the library; you need to update it: pip install telethon --upgrade
1
2
0
Python Telethon I need to receive messages from the channel Error: >>> client.get_message_history(-1001143136828) Traceback (most recent call last): File "messages.py", line 23, in total, messages, senders = client.get_message_history(-1001143136828) File "/Users/kosyachniy/anaconda/lib/python3.5/site-packages/telethon/telegram_client.py", line 548, in get_message_history add_mark=True KeyError: 1143136828
Telethon How do I get channel messages?
0
0
1
2,377
47,380,749
2017-11-19T18:56:00.000
1
0
1
0
python-3.x,semaphore,named
47,380,899
2
false
0
0
Why not use threading.Semaphore? You only need a system-level name if a semaphore is to be accessed by another process. Threads by definition share an address space, so they have access to basically everything in the right scope. Edit: Note that Semaphore is a class, so each call to threading.Semaphore() creates a separate instance of the Semaphore class.
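A minimal sketch of the suggestion: "named" semaphores that exist only inside the current process are just a registry dict mapping names to threading.Semaphore instances. The class name, the name "db-writes", and the max-access counts are made up for illustration:

```python
import threading

class NamedSemaphores:
    """Process-local registry of named semaphores."""
    def __init__(self):
        self._lock = threading.Lock()  # protects the registry itself
        self._sems = {}

    def get(self, name, max_access=1):
        # Create the semaphore on first use, then always return the
        # same instance for the same name.
        with self._lock:
            if name not in self._sems:
                self._sems[name] = threading.Semaphore(max_access)
            return self._sems[name]

registry = NamedSemaphores()
sem = registry.get("db-writes", max_access=2)

with sem:  # acquire/release like any semaphore
    pass

# The same name always yields the same semaphore object:
print(registry.get("db-writes") is sem)  # True
```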
1
0
0
I know that there are python modules that allow the use of IPC and System V named semaphores. However, these resources exist at the system level. For my particular multithreaded python3 application, I need named semaphores in order to protect certain totally unrelated sections of code, but these semaphores should only be process-specific, not system-wide. I have been unable to find any python code which implements named semaphores that solely exist within the current multithreaded process. Does anyone know of any code like that which has already been written? I specifically want semaphores and not just a simple mutex, because I want to allow a certain number of concurrent accesses to these critical sections of code within the process, up to some configurable maximum number of accesses. Any suggestions? Thank you in advance.
python3 - Named semaphores only within a given process?
0.099668
0
0
806
47,382,322
2017-11-19T21:45:00.000
3
0
1
0
python,jupyter-notebook
65,589,607
3
false
0
0
Consider running jupyter labextension update --all in terminal to check and install updates.
2
5
0
What is the correct way to upgrade Jupyter Notebook extensions (e.g. RISE, ipywidgets)? I have a routine which keeps my Python packages updated by running pip install --upgrade, and this downloads and installs new Notebook extension versions too, when they are available. Should I also run jupyter nbextension install --py --sys-prefix, and possibly even jupyter nbextension enable --py --sys-prefix for each of the Notebook extensions which pip gets a new package? Thanks
What is the correct way to update Jupyter Notebook extensions?
0.197375
0
0
7,042
47,382,322
2017-11-19T21:45:00.000
3
0
1
0
python,jupyter-notebook
47,416,013
3
true
0
0
jupyter nbextension install --py --sys-prefix installs the extension code in the correct place, so yes you should run this when you update an extension. jupyter nbextension enable --py --sys-prefix just writes in a JSON file to load the relevant extension - so unless the extension changes its name then no, you don't need to rerun this.
2
5
0
What is the correct way to upgrade Jupyter Notebook extensions (e.g. RISE, ipywidgets)? I have a routine which keeps my Python packages updated by running pip install --upgrade, and this downloads and installs new Notebook extension versions too, when they are available. Should I also run jupyter nbextension install --py --sys-prefix, and possibly even jupyter nbextension enable --py --sys-prefix for each of the Notebook extensions which pip gets a new package? Thanks
What is the correct way to update Jupyter Notebook extensions?
1.2
0
0
7,042
47,383,336
2017-11-19T23:58:00.000
0
0
0
0
python,tensorflow,tflearn
47,400,198
1
true
0
0
I think you don't need map_fn; you need tf.nn.dynamic_rnn instead. It takes an RNN cell (so it knows what the output and the state are) and returns the concatenated outputs and concatenated states.
1
0
1
Suppose I have training data X, Y where X has shape (n,m,p) I want to set up a neural network which applies a RNN (followed by a dense layer) given by f to each i-slice (i,a,b) and outputs f(m,x) which has shape (p') then concatenates all the output slices (presumably using tf.map_fn(f,X)) to form a vector of dimension (n,p') then runs the next neural network on (n,p'). Essentially something similar to: X' = tf.map_fn(f,X) Y= g(X') I am having difficulty getting my head around how to prepare my training data, which shape should X, X' (and later Z) should be. Further more what if I wanted to merge X' with another dataset, say Z? Y = g(X' concat Z)
How do I use the output of one RNN applied to slices as input of the next?
1.2
0
0
39
47,383,691
2017-11-20T00:55:00.000
1
0
1
0
python,python-2.7,math
47,383,747
1
true
0
0
I didn't need to make my integer a symbol after all. The code in the question works.
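As a purely numeric illustration of the limit in the question (not the sympy route): the ratio of consecutive Fibonacci numbers converges to the golden ratio (1 + sqrt(5)) / 2, and a plain loop shows the convergence without any symbolic machinery:

```python
import math

def fib_ratio(n):
    # Ratio of consecutive Fibonacci numbers after n iterations.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

golden = (1 + math.sqrt(5)) / 2
print(fib_ratio(40), golden)  # both approximately 1.618033988749895
```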
1
2
0
I know you can probably do a limit of a mathematical operation, say the limit of n + 1 as n approaches infinity, but can I do the limit of a function? For instances the limit of dividing to Fibonacci numbers as their index (and value) gets greater, approaching infinity? For example lim(func, start, approach) I have looked at sympy, and it is either not possible or I don't know how to pass a symbol argument as an integer. Eg: iters = Symbol("iters") print limit(main(iters),iters,0)
Do A Limit on the Output of a Function in Python
1.2
0
0
148
47,388,287
2017-11-20T09:03:00.000
1
0
1
0
python,visual-studio
47,388,616
2
false
0
0
Python is usually not compiled but executed using the python interpreter. You should only ever compile it to an .exe if you want to execute it on windows without having to install python first. That can be done by using pyinstaller or py2exe (in case you use python 2).
2
6
0
How do you compile .py to .exe in Microsoft Visual Studio 2017? I have looked through the menus but can not find what I'm looking for?
How to compile .py to .exe in Microsoft Visual Studio Community 2017?
0.099668
0
0
26,618
47,388,287
2017-11-20T09:03:00.000
8
0
1
0
python,visual-studio
47,405,841
2
true
0
0
Hey Tom! You can compile it, but only into a .pyc file, which is compiled Python bytecode rather than a Windows executable. I personally chose to install pyinstaller and ran pyinstaller [filename].py from the command line. It is easier than py2exe.
2
6
0
How do you compile .py to .exe in Microsoft Visual Studio 2017? I have looked through the menus but can not find what I'm looking for?
How to compile .py to .exe in Microsoft Visual Studio Community 2017?
1.2
0
0
26,618
47,390,019
2017-11-20T10:37:00.000
0
0
0
0
python,scipy,cluster-analysis,hierarchical-clustering,linkage
47,400,793
1
false
0
0
The linkage output clearly does not contain the entire distance matrix. The output is (n-1) x 4, but the full condensed distance matrix is much larger: n*(n-1)/2 entries. The contents of the linkage matrix are well described in the SciPy documentation.
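The size mismatch is easy to see on a toy example; here the data is random and n = 10 is made up for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.random((10, 3))          # n = 10 observations

Z = linkage(X, method="single")  # linkage matrix: one row per merge
d = pdist(X)                     # condensed distance matrix

print(Z.shape)  # (9, 4): n-1 merge steps, 4 columns each
print(d.shape)  # (45,): n*(n-1)/2 pairwise distances
```

So the linkage matrix only records 9 merge distances, far fewer than the 45 pairwise distances it was computed from.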
1
0
1
I would like to reconstruct a condensed distance matrix from the linkage output when using scipy.cluster.hierarchy.linkage. The input is an m by n observation matrix. I understand therefore, that the distances are computed internally and then finally output in the linkage table. Does anybody know of a solution to obtain the condensed distance matrix from the linkage output?
Obtain distance matrix from scipy `linkage` output
0
0
0
284
47,390,692
2017-11-20T11:13:00.000
0
0
1
0
pycharm,python-3.5,cv2
59,021,822
3
false
0
0
An easy way to go: open Settings in PyCharm. On the left there will be an option Project: <your project name>; click it, then click Project Interpreter. Press the plus sign over the right frame, type opencv-python, and install that package. Then write import cv2 in your program to use it.
1
2
0
I am getting an error while installing the cv2 package from the project interpreter: Non zero exit code (1). When I execute the command pip install cv2, it gives the error: Could not find a version that satisfies the requirement cv2 (from versions: ) No matching distribution found for cv2. Please help me solve this error.
Error while installing cv2 in pycharm - python 3.5.3
0
0
0
12,105
47,391,785
2017-11-20T12:13:00.000
1
1
1
1
python
47,391,863
3
false
0
0
In general, Python itself does not provide any way to do this. Once Python starts up, there is already one particular version running, and it can't change to a different version on the fly. Your operating system may provide some way of choosing which version of Python is the default (e.g. eselect python on Gentoo Linux), but it's impossible to say whether that's the case for you without knowing what OS you're using. If your OS doesn't provide something like this, it is possible to make your own scripts to set and change a default Python version.
1
1
0
I want to change the default Python version I use, without affecting any other defaults. Appending the Python path to the $PATH environment variable only accomplishes that as long as there is only the python executable at that given location, which is not the case if I want to use the /usr/bin version (export PATH=/usr/bin:$PATH). I know I could theoretically create a symbolic link in some other folder of my choosing, but this is not very elegant. Is there any other way of changing the default Python (like a nice environment variable that Python uses which takes precedence over the $PATH environment variable)?
Changing the default python without changing the $PATH variable?
0.066568
0
0
637
47,395,189
2017-11-20T15:13:00.000
1
1
0
0
python,cross-platform,keyboard-shortcuts
52,665,948
1
false
0
0
You can't, every system has its global shortcuts and only in some of them they are partially listed somewhere.
1
0
0
How can I get the information on what's key bound for a specific function such as copying, for example in Windows it is Ctrl + C. How can I get that information with script?
How can I get platform specific keybind information?
0.197375
0
0
19
47,396,382
2017-11-20T16:13:00.000
0
0
0
0
android,python,python-3.x
47,451,587
1
false
0
1
change org.renpy.android.PythonActivity to org.kivy.android.PythonActivity
1
0
0
i use python3crystax to build plyer apk but when i run apk on android it show error message "Accessing org.renpy.android.PythonActivity is deprecated and will be removed in a future version." i try to use old version of python-for-android it get other error ! if i just change plyer init Activity it can run but still can see any notify! File "main.py", line 9, in <module> 11-07 00:58:08.840 7082-7112/youer.com.school I/python: from kivy.app import App 11-07 00:58:08.841 7082-7112/youer.com.school I/python: File "/data/user/0/youer.com.school/files/app/crystax_python/site-packages/kivy/app.py", line 319, in <module> 11-07 00:58:08.843 7082-7112/youer.com.school I/python: from kivy.base import runTouchApp, stopTouchApp 11-07 00:58:08.843 7082-7112/youer.com.school I/python: File "/data/user/0/youer.com.school/files/app/crystax_python/site-packages/kivy/base.py", line 26, in <module> 11-07 00:58:08.845 7082-7112/youer.com.school I/python: from kivy.clock import Clock 11-07 00:58:08.845 7082-7112/youer.com.school I/python: File "/data/user/0/youer.com.school/files/app/crystax_python/site-packages/kivy/clock.py", line 362, in <module> 11-07 00:58:08.847 7082-7112/youer.com.school I/python: from kivy._clock import CyClockBase, ClockEvent, FreeClockEvent, \ 11-07 00:58:08.848 7082-7112/youer.com.school I/python: ImportError: dlopen failed: cannot locate symbol "_Py_NoneStruct" referenced by "/data/data/youer.com.school/files/app/crystax_python/site-packages/kivy/_clock.so"...
some error come from plyer on android and python3crystax
0
0
0
56
47,397,297
2017-11-20T17:02:00.000
0
0
1
0
python,stream
47,397,476
1
true
0
0
Well, what you shouldn't do is dump all the contents of the file into a single string; log files can grow to 100 MB and beyond. If you are only allowed one file handle, what I would recommend is to keep a queue of lines to write and periodically dump them into the log; in the free time, it can read the last 100 lines.
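One common way to read the last N lines without holding the whole file as a string is collections.deque with maxlen: it iterates the file line by line but retains only the most recent N lines in memory. The demo below writes its own temporary log file so it is self-contained; the 250-line size is made up:

```python
import os
import tempfile
from collections import deque

def tail(path, n=100):
    # Stream the file line by line; the deque discards all but the
    # last n lines as it fills.
    with open(path, "r") as f:
        return list(deque(f, maxlen=n))

# Self-contained demo with a temporary file:
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    for i in range(250):
        f.write(f"line {i}\n")

last = tail(path, n=100)
print(last[0], last[-1])  # line 150 ... line 249
os.remove(path)
```

Re-running tail() after the server appends more lines simply picks up the new tail; no state needs to be kept between calls.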
1
0
0
Assuming a server dumps logs to a file and we need to read the last 100 lines while the file keeps being appended to. How to tackle this kind of case?
How to read a file using python, where data is appended continuously?
1.2
0
0
31
47,398,738
2017-11-20T18:26:00.000
0
0
0
0
python,input
47,398,877
2
false
0
1
You could try making a python key-logger. However, it would be much easier to just use Pygame.
2
2
0
I have a laser pointer that I'm using along with my webcam as a drawing tablet, but I'd like to use the extra buttons on it too. They seem to be bound to Page Up, Page Down, Escape, and Period. I can't seem to figure out a way to get the input(which is handled like it's a keyboard) without any windows being selected. I've tried serial and pyusb, but I've had issues with both of those. I got it to work with Pygame, but as far as I know, you can't receive input without the window it creates being selected. Any ideas?
Live Keyboard Input in Python?
0
0
0
1,423
47,398,738
2017-11-20T18:26:00.000
0
0
0
0
python,input
47,518,662
2
true
0
1
CodeSurgeon answered me in a comment. Looks like there are a lot of youtube tutorials on the subject, surprisingly. This one shows a cross-platform approach using the pynput module, while this one looks to be using a windows-specific approach (pyhook and pythoncom). Can't vouch for either of these as I just found them through some searching, and I am sure there are others as well. I found that pynput works for me. (Windows 10/Python 3.4)
2
2
0
I have a laser pointer that I'm using along with my webcam as a drawing tablet, but I'd like to use the extra buttons on it too. They seem to be bound to Page Up, Page Down, Escape, and Period. I can't seem to figure out a way to get the input(which is handled like it's a keyboard) without any windows being selected. I've tried serial and pyusb, but I've had issues with both of those. I got it to work with Pygame, but as far as I know, you can't receive input without the window it creates being selected. Any ideas?
Live Keyboard Input in Python?
1.2
0
0
1,423
47,404,898
2017-11-21T03:52:00.000
0
0
0
0
python,gdal,ogr
47,405,311
3
false
0
0
If you want to do this manually, you'll need to test each cell for Square v Polygon intersection and Square v Line intersection. If you treat each square as a 2D point this becomes easier: it's now a Point v Polygon problem. Check game-dev forums for collision algorithms. Good luck!
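A minimal sketch of the Point v Polygon test the answer alludes to, using the classic ray-casting algorithm (pure Python; the square polygon and test points are made up for illustration):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the ray's y level?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that level.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

Applied to a raster, you would run this on each cell center to collect the (row, col) indices that fall inside the polygon; cells merely *intersected* by the boundary additionally need the Square v Line edge tests mentioned above.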
1
1
1
I want to get a list of indices (row,col) for all raster cells that fall within or are intersected by a polygon feature. Looking for a solution in python, ideally with gdal/ogr modules. Other posts have suggested rasterizing the polygon, but I would rather have direct access to the cell indices if possible.
Find indices of raster cells that intersect with a polygon
0
0
0
3,761
47,405,270
2017-11-21T04:35:00.000
0
0
1
0
python,installation,peewee
47,406,813
1
false
0
0
It is actually solved: Microsoft Visual C++ for Python needed to be installed on my computer. After installing that, it worked properly.
1
0
0
Command "C:\Users\asus\Anaconda2\python.exe -u -c "import setuptools, tokenize;file='c:\users\asus\appdata\local\temp\pip-build-54ytkq\peewee\setup.py'; f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record c:\users\asus\appd ata\local\temp\pip-q5dadg-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\asus\appdata\local\temp\pip- build-54ytkq\peewee\
I am not able to install peewee using pycharm terminal. I am using this 'pip install peewee' but can't import the library
0
0
0
308
47,406,736
2017-11-21T06:39:00.000
0
0
0
1
python-2.7,cygwin
47,559,740
1
false
0
0
You may have figured this out by now, but I believe tcsh is available in the package list, have you tried installing it to see if it fits your need?
1
0
0
I have created a python tool which generates shell script & run it using command subprocess.Popen("csh -l -c " + os.environ["TEMP"] + "\qa_tool\oos_script_generator.csh") This runs in c-shell. Is there any way I can run same script in cygwin?
Run shell script(.csh) in cygwin using python
0
0
0
377
47,406,919
2017-11-21T06:52:00.000
1
0
0
0
python,mysql,server
47,407,459
2
false
0
0
Look into the Python sockets module. You can send Network messages through it.
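One way to apply that suggestion, sketched below: each running copy of the script binds a TCP port as a guard, and a peer can check a given host by trying to connect to that port. The port number and function names are my own illustration, not part of the original answer.

```python
import socket

GUARD_PORT = 47200  # arbitrary port chosen for this sketch


def acquire_guard(port=GUARD_PORT):
    """Bind a TCP port as a run guard; raises OSError if this host already holds it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", port))
    s.listen(1)
    return s  # keep a reference alive; closing it releases the guard


def is_running_on(host, port=GUARD_PORT, timeout=0.5):
    """Return True if a script instance's guard socket answers on the given host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Before starting its real work, the script would call `is_running_on(host)` for each known department machine and exit if any answers.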
2
0
0
I'm going to distribute my "handy" python script to my co-workers in the same department as me. The script works with a MySQL database, and they are all on the same network as me. But I don't want the script to run at the same time on several machines, because that would cause problems with the database. So I decided that the distributed script would first check whether the same script is running on other computers in my department: if there aren't any, it would continue to run; it would not run if the same script is already running on another department computer.
How to check if a the same python script is running in another computer in a small network?
0.099668
0
0
271
47,406,919
2017-11-21T06:52:00.000
0
0
0
0
python,mysql,server
47,407,517
2
false
0
0
Consider using a mutex on the database if it is only able to be handled by a single instance of the script. It would save you trawling the network constantly looking for an instance of the script running.
2
0
0
I'm going to distribute my "handy" python script to my co-workers in the same department as me. The script works with a MySQL database, and they are all on the same network as me. But I don't want the script to run at the same time on several machines, because that would cause problems with the database. So I decided that the distributed script would first check whether the same script is running on other computers in my department: if there aren't any, it would continue to run; it would not run if the same script is already running on another department computer.
How to check if a the same python script is running in another computer in a small network?
0
0
0
271
47,408,124
2017-11-21T08:10:00.000
1
0
1
0
python,navigation,visual-studio-code
47,408,645
1
false
1
0
Since Visual Studio Code now supports multiple root workspaces, you could add your virtual environment to it. Then Quick Open will work.
1
0
0
Is it possible to setup a VS Code to allow Quick Open (Cmd+P / Ctrl+P) a source file in current virtual env (outside my project folder) E.g. open a Django (or any other 3rd party) source file
Visual Studio Code Python: Quick open a file in $VIRTUAL_ENV
0.197375
0
0
25
47,408,608
2017-11-21T08:39:00.000
1
0
1
0
python,bash
47,408,961
3
false
0
0
A snippet of code would help. However, I presume you're loading all the files into memory in one go, and since the files are huge, that might have exhausted RAM completely, thus making the script die. If your use case is to get a particular line/text from every file, I would recommend using the re module for the pattern and reading the files accordingly. Please refer to the syslog (you can find it in /var/log/ on Ubuntu); it will give you hints about the possible reasons for the script failure.
2
8
0
I have about 1 million files (which are simulation outputs). I want to store specific information from them in one file. I have a for loop which goes to 1M. I put a counter to track the state of the for loop. It gets killed somewhere between 875000 and 900000. I thought that it may be a space problem. When I run df -h or df /, I have about 68G available. What are other possible reasons that a Python script may be killed? How can I explore it more?
How can I find the reason that Python script is killed?
0.066568
0
0
17,372
47,408,608
2017-11-21T08:39:00.000
21
0
1
0
python,bash
47,409,304
3
false
0
0
On a Linux system, check the output of dmesg. If the process is getting killed by the kernel, it will have a explanation there. Most probable reason: out of memory, or out of file descriptors.
2
8
0
I have about 1 million files (which are simulation outputs). I want to store specific information from them in one file. I have a for loop which goes to 1M. I put a counter to track the state of the for loop. It gets killed somewhere between 875000 and 900000. I thought that it may be a space problem. When I run df -h or df /, I have about 68G available. What are other possible reasons that a Python script may be killed? How can I explore it more?
How can I find the reason that Python script is killed?
1
0
0
17,372
47,409,882
2017-11-21T09:44:00.000
0
0
0
0
python-3.x,mininet
47,410,635
1
false
0
0
You may want to use traceroute or even pathping.
1
0
0
I am working on mininet simulator and I want to know a way to capture complete path from source to destination. I have tried this with wireshark but only I can get is source and destination address in icmp packet.
How can I capture complete path along with intermediate nodes in mininet?
0
0
0
255
47,411,725
2017-11-21T11:14:00.000
0
0
0
0
python,kivy
47,487,305
1
false
0
1
It doesn't matter what Kivy version you have installed locally (a new one will be built for Android), and both python2.7 or python3.5+ should work.
1
0
0
Currently, we are developing with Kivy 1.10.0 and Python 3.4.4 on Windows. At the time of Build, I plan to build it through Linux in the virtual environment. In order to run on both iOS and Android, what version do I need for both Kivy and Python in Linux?
What version running in iOS and Android in Kivy
0
0
0
26
47,412,127
2017-11-21T11:33:00.000
1
0
0
0
javascript,xml,python-2.7,odoo-10
49,551,211
2
false
1
0
It's simple: set create='false' and edit='false' on the tree view in the XML.
2
1
0
I need to remove the import button from my tree view and add an edit button to the tree view. How can I do that? Also, I need to add a button near the create button in the kanban view to open another tree view. How can I do that? Please help me.
Remove Import Button in odoo tree view
0.099668
0
0
1,705
47,412,127
2017-11-21T11:33:00.000
0
0
0
0
javascript,xml,python-2.7,odoo-10
47,417,416
2
false
1
0
You need to extend the ListView (list_view.js) and change the method render_buttons. You can then render a custom template (you can use ListView.buttons template as a reference) with the buttons you need. You can do the same for the KanbanView (kanban_view.js).
2
1
0
I need to remove the import button from my tree view and add an edit button to the tree view. How can I do that? Also, I need to add a button near the create button in the kanban view to open another tree view. How can I do that? Please help me.
Remove Import Button in odoo tree view
0
0
0
1,705
47,417,060
2017-11-21T15:45:00.000
0
0
0
0
python,tensorflow,neural-network,perceptron,multi-layer
47,534,280
1
true
0
0
Apparently, feeding an array [x,n] (where x is the number of inputs, I would otherwise have to feed seperatley in each iteration and n is the amount of sequential pixels) to my network and then optimizing the loss computed for this input, is exactly what I was looking for.
1
1
1
The input of my network is n sequential pixels of a NxN image (where n is small compared to N) and the output is 1 pixel. The loss is defined as the squared difference between the ouput and the desired output. I want to use the optimizer on the average loss after iterating over the whole image. But if I try to collect the loss in a list and average those losses after all the iterations are done, feeding this to my optimizer, causes an error because Tensorflow does not know where this loss comes from since it is not on the computational graph.
Is it possible to optimize a Tensorflow MLP with the averaged loss after n iterations instead of every iteration?
1.2
0
0
42
47,418,025
2017-11-21T16:30:00.000
2
1
0
1
bash,python-3.x,raspberry-pi,raspbian
47,431,418
2
true
0
0
For udisks --detach the parameter should be the device, not the mounting point. For example, if the USB Disk is /dev/sdb the command would be udisks --detach /dev/sdb If the command still doesn't work you could try udiskctl power-off -b <device> which should also work.
1
2
0
So I'm trying to get a working code in Python that will eject/unmount all USB flash drives attached to the Pi (Running Raspbian) - so that they can be removed safely. The final code will be run from within the python program. Additionally, I'd like to eject/unmount the USB flash drive even if it's in use. I've looked around and can't see how to do this. Thanks. udisks --detach /media/pi/DOCS/ - 'Blocked device... Resource temporarily available'... udisks --detach /media/pi/ - 'Blocked device...Resource temporarily available'... udisks --detach /media/ - 'Blocked device...Resource temporarily available'... sudo udisks --detach /media/pi/DOCS/ - still blocked... sudo umount /path/to/devicename - command not found... eject /media/pi/DOCS/ - Unable to open '/dev/sda' (DOCS is the name if my USB flash drive. - though I want to eject all USB flash drives - not just my one) So I'm going to ask the user in Python to select their USB flash drive from a list, which is pretty easy (just read in the folder) - so I will have the pathway to the USB. I'm still not sure which code can safely disconnect the USB flash drive - Maybe more research is the answer. Thanks for your help so far.
Ejecting/unmounting random USB flash drive in Raspberry pi / Python
1.2
0
0
6,438
47,418,760
2017-11-21T17:07:00.000
1
0
1
0
python,jupyter-notebook
70,648,620
6
false
0
0
Open the notebook in a text editor as a text file. You should find a JSON like structure Change "editable": false, to "editable": true, under "metadata"
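Since a .ipynb file is just JSON, the same edit can be scripted with the stdlib json module rather than done by hand. The tiny notebook dict below is a minimal stand-in for a real notebook's structure, not a complete .ipynb:

```python
import json

# Minimal stand-in for a notebook's JSON structure (a real file has more keys).
nb = {"cells": [{"cell_type": "code", "metadata": {"editable": False}, "source": []}]}

# Flip the "editable" flag to true in every cell's metadata.
for cell in nb["cells"]:
    cell.setdefault("metadata", {})["editable"] = True

unlocked = json.dumps(nb)  # this string would be written back to the .ipynb file
```

For a real notebook you would `json.load` the file, run the loop, and `json.dump` it back.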
3
7
0
I have opened a python Jupyter notebook but did not notice that it was in read-only, Not Trusted mode. How to save my changes now? Things that I have tried and did not help: File -> Make a Copy File -> Save and Checkpoint File -> Download as File -> Trust Notebook
How to save changes in read-only Jupyter Notebook
0.033321
0
0
17,612
47,418,760
2017-11-21T17:07:00.000
0
0
1
0
python,jupyter-notebook
61,972,021
6
false
0
0
You can navigate to the following tabs - View --> Cell Toolbar --> Edit Metadata Now, all the cells will have 'edit metadata', click on it and set the 'editable' to 'true' or just delete that json entry. The cells will now be editable :) note: you might have to do this for every cell in the notebook you wanna edit
3
7
0
I have opened a python Jupyter notebook but did not notice that it was in read-only, Not Trusted mode. How to save my changes now? Things that I have tried and did not help: File -> Make a Copy File -> Save and Checkpoint File -> Download as File -> Trust Notebook
How to save changes in read-only Jupyter Notebook
0
0
0
17,612
47,418,760
2017-11-21T17:07:00.000
6
0
1
0
python,jupyter-notebook
47,418,941
6
true
0
0
One hack around this issue: Select all cells (or cells that you need) in your read-only notebook. You can select all cells by clicking on the first cell and then shift+clicking the last cell. Copy all cells using CTRL+C (COMMAND+C if you are using MAC) Create a new jupyter notebook page Click CTRL+V (COMMAND+V if you are using MAC) twice Save your new notebook Hope this hack helps
3
7
0
I have opened a python Jupyter notebook but did not notice that it was in read-only, Not Trusted mode. How to save my changes now? Things that I have tried and did not help: File -> Make a Copy File -> Save and Checkpoint File -> Download as File -> Trust Notebook
How to save changes in read-only Jupyter Notebook
1.2
0
0
17,612
47,421,880
2017-11-21T20:13:00.000
0
0
0
0
python-3.x,pandas
47,422,098
3
false
0
0
Iterating over the iloc row ranges will do the trick.
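A sketch of that idea: slice the frame with iloc in fixed-size chunks and write each slice straight to CSV, so no extra DataFrames are kept around. The chunk size and file names here are illustrative:

```python
import pandas as pd

df = pd.DataFrame({"a": range(10)})  # stand-in for the large frame
chunk = 5  # rows per output file

for i, start in enumerate(range(0, len(df), chunk)):
    # each iloc slice is written out immediately, not stored
    df.iloc[start:start + chunk].to_csv(f"file_{i + 1}.csv", index=False)
```

With 10 rows and chunk=5 this produces file_1.csv (rows 0-4) and file_2.csv (rows 5-9).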
1
2
1
I have a large file, imported into a single dataframe in Pandas. I'm using pandas to split up a file into many segments, by the number of rows in the dataframe. eg: 10 rows: file 1 gets [0:4] file 2 gets [5:9] Is there a way to do this without having to create more dataframes?
pandas: split dataframe into multiple csvs
0
0
0
8,708
47,422,326
2017-11-21T20:41:00.000
0
0
1
1
python,python-3.x,python-2.7
47,422,425
3
false
0
0
The easiest way (assuming you're only doing this from the CLI) is to use a doskey alias. In your case it would be something like doskey python27=C:\Users\linekm475\AppData\Local\Programs\Python\Python27\python.exe $* (the $* forwards any arguments to the interpreter). Please note that there are some serious limitations to doskey macros (most notably that they can only be used from the command line, not from a script or batch file, nor can they be used on either side of a pipe), but it will accomplish what you're trying to do.
1
0
0
I have python 3.6 installed on my computer with Windows 10. Python 3.6 is in the path: C:\Users\linekm475\AppData\Local\Programs\Python\Python36-32 and I added a path so it works to just type in "python" in cmd to open python 3.6 in cmd. And the problem I have now is that I want to have python 2.7 installed too so I can use both of them. So I installed python 2.7 and it's in the path: C:\Users\linekm475\AppData\Local\Programs\Python\Python27 Is it possible to do so when I type something like "python27" in cmd that it opens up python 2.7?
Run python 3.6 and 2.7 on the same computer
0
0
0
3,080
47,422,401
2017-11-21T20:46:00.000
-4
0
0
1
python,airflow,apache-airflow
47,440,562
2
false
0
0
Why do you need airflow to do that? You could implement an infinite loop in your spark job. It seems more efficient.
1
1
0
I am using airflow to schedule a spark job, I need to got a DAG "run continuously" -- Like I just want a DAG to run, and when it finishes, to start a new DAG instance again. I have two options in my mind: 1. Allow only one DAG instance running at a time and run the DAG more frequently 2. Have another DAG to watch and kick off when it is needed. Have anyone implement it? Suggestions? Thanks!
How to set airflow DAG runs continuously
-1
0
0
1,128
47,422,506
2017-11-21T20:52:00.000
0
0
1
1
python,python-3.x,file,io
47,422,562
2
false
0
0
One reason shell scripts do this is that a shell variable cannot contain arbitrary binary data (namely, null bytes). Python variables have no such limitations; just store the data in a variable.
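In Python the usual pattern is simply a (bytes) literal at module level, written back out on demand — no here-document trick needed. A small sketch; the payload contents and file name are made up for illustration:

```python
import base64

# The payload lives inside the script itself, base64-encoded so even binary
# data survives as plain ASCII source text.
PAYLOAD_B64 = base64.b64encode(b"hello from the embedded file\x00\x01").decode("ascii")


def extract(path):
    """Decode the embedded payload and write it out as a separate file."""
    data = base64.b64decode(PAYLOAD_B64)
    with open(path, "wb") as f:
        f.write(data)
    return data
```

An installer-style script would call `extract("file.bin")` at run time to materialize the file.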
1
0
0
I am not sure what this is called. I have seen an installer program store data at the end of the installer script — it looked like just a mess of random text — which then got turned into a separate file upon running the installer. There was some command that said something like line xxx -> end | file.txt. I'm sorry if this makes no sense, but if anyone knows what I am looking for, I'd like to do the same thing in a Python script. Directions to a resource or an explanation of how to do it here would be great, thanks.
Store a file inside a Python file
0
0
0
41
47,426,089
2017-11-22T02:57:00.000
1
0
0
0
python,python-3.x,pandas
59,068,008
4
false
0
0
With the drop method, using inplace=False you have the option to create a subset DataFrame while keeping the original DataFrame untouched; with del, I believe this option is not available.
1
20
1
I know it might be old debate, but out of pandas.drop and python del function which is better in terms of performance over large dataset? I am learning machine learning using python 3 and not sure which one to use. My data is in pandas data frame format. But python del function is in built-in function for python.
python del vs pandas drop
0.049958
0
0
10,359
47,426,249
2017-11-22T03:18:00.000
4
0
0
0
python,python-3.x,python-2.7,graphviz
57,178,590
3
false
0
0
If your goal is to avoid duplicate edges, use Digraph(strict=True) or Graph(strict=True).
1
1
0
Is there any way to get list of edges in graphviz in python. In my program I wanted to check whether edge already exist between nodes before adding edge in the directed graph. I couldn't find any function like get_edge() or has_connected() function in grahviz in python. Is there any other way of doing the above mentioned task? Any help would be appreciated.
Finding list of edges in graphviz in Python
0.26052
0
0
2,186
47,427,810
2017-11-22T06:02:00.000
0
0
1
0
python
47,427,856
4
false
0
0
You can use a regex (Python's re module) for this.
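For example, with the string from the question, a pattern anchored on the .jpg suffix pulls out just the filename (the pattern itself is my suggestion, not from the answer):

```python
import re

s = "\texample\tDart_181120172410.jpg\tImgCaption\t"

# Match a run of non-tab characters only if it ends in ".jpg"
match = re.search(r"[^\t]+\.jpg", s)
filename = match.group(0) if match else None
print(filename)  # Dart_181120172410.jpg
```

If the string can contain other extensions, the same pattern simply returns None when no .jpg field is present.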
1
0
0
This is a string: \texample\tDart_181120172410.jpg\tImgCaption\t Is there anyway to get the Dart_181120172410.jpg and could say to get this substring if only it contains .jpg at the end. The actual string is even longer
How to get a substring from a string if one string is found
0
0
0
49
47,428,195
2017-11-22T06:32:00.000
5
0
1
0
python-2.7,session,google-colaboratory
47,440,890
1
false
0
0
VMs time out after a period of inactivity, so you'll want to structure your notebooks to install custom dependencies if needed. A typical pattern is to have a cell at the top of your notebook that executes apt and pip install commands as needed. In the case of opencv, that would look something like: !apt install -y -qq libsm6 libxext6 && pip install -q -U opencv-python (Takes ~7 seconds for me on a fresh VM.)
1
1
1
In Colaboratory, is there a way to save library installations across sessions? If so how, can I do that? I'd like to learn more about Colaboratory session management in general. Currently, everytime I have to import, for e.g. a cv2 module(as this is not available by default), I need to reinstall the module with !pip install opencv-python along with it's dependencies that provide for shared objects, through a !apt=-get package-name.
How to save installations across sessions?
0.761594
0
0
309
47,429,513
2017-11-22T08:00:00.000
0
0
1
0
python,python-3.x,operators,exponentiation
47,429,715
3
false
0
0
This explanation seems quite clear to me. Let me show you an example that might illuminate it: print 2 ** 2 ** 3 # prints 256 If you read this from left to right, you would first do 2 ** 2, which would result in 4, and then 4 ** 3, which would give us 64. It seems we have the wrong answer. :) However, from right to left... you would first do 2 ** 3, which would be 8, and then 2 ** 8, giving us 256! I hope that clears this point up for you. :) EDIT: Martijn Pieters answered your question more accurately, sorry. I forgot to say it follows mathematical convention.
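You can confirm both readings directly in Python 3; parentheses force the left-to-right interpretation:

```python
# Right-to-left (Python's actual rule): 2 ** (2 ** 3) = 2 ** 8
assert 2 ** 2 ** 3 == 256

# Left-to-right would instead be (2 ** 2) ** 3 = 4 ** 3
assert (2 ** 2) ** 3 == 64
```

This matches the mathematical convention for stacked exponents, where a^(b^c) is the intended reading.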
1
9
0
I am reading an Intro to Python textbook and came across this line: Operators on the same row have equal precedence and are applied left to right, except for exponentiation, which is applied right to left. I understand most of this, but I do not understand why they say exponentiation is applied right to left. They do not provide any examples either. Also, am I allowed to ask general questions like this, or are only problem solving questions preferred?
Why is exponentiation applied right to left?
0
0
0
5,446
47,429,532
2017-11-22T08:01:00.000
0
0
1
0
python,shell
47,429,862
3
false
0
0
"I ran it locally for more than 4 hours with no problem, so I can confirm that the code is correct." You could be surprised by the number of bugs that only reveal themselves after months, if not years, of correct processing... What you have confirmed is that the code does not break on the first run; but unless you have tested it with all possible corner cases in the input (including badly formatted ones), you cannot confirm that it will never break. That is the reason why a program that is intended to run unattended should be carefully designed to always (try to*) leave a trace before exiting. try: except: and the logging module are your best friends here. * Of course, in case of a system crash or a power outage there's nothing you can do at the user-program level...
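A minimal shape for the "leave a trace before exiting" advice — the log file name and the per-item function are illustrative stand-ins, not the asker's real code:

```python
import logging

logging.basicConfig(filename="attachment.log", level=logging.INFO)


def process_one(item):
    # stand-in for the real per-email attachment handling
    if item is None:
        raise ValueError("bad item")
    return item


def main_loop(items):
    handled = 0
    for item in items:
        try:
            process_one(item)
            handled += 1
        except Exception:
            # log the full traceback instead of dying silently
            logging.exception("failed on item %r", item)
    return handled
```

With this shape, a bad input is recorded in attachment.log and the loop keeps going instead of killing the whole script.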
1
0
0
I wrote a python script with a while True loop to catch an email's attachment, but sometimes I found it would exit unexpectedly on the server. I ran it locally for more than 4 hours with no problem, so I can confirm that the code is correct. So is there a mechanism to restart Python when it exits unexpectedly, such as process monitoring? I am a novice in Linux. Remark: I run this python script like python attachment.py & in a shell script.
python restart after unexpected exit with `while true` loop
0
0
0
879
47,431,661
2017-11-22T10:00:00.000
2
0
0
0
python,machine-learning,scikit-learn,cross-validation
47,432,646
1
true
0
0
This depends on what you specify in the cv parameter. If the dependent (target) variable is binary or multiclass it will use StratifiedKFold, else it will use KFold. You can also override the options by specifying a function (sklearn or otherwise) to perform the splits. The KFold function will divide the data into sequential folds. If you want it to do a random split, you can set the shuffle parameter to True. If you want to fix the random shuffle you can set a value for the random_state. If you do not, it will take a random value and the folds will be different every time you run the function. For StratifiedKFold, it will split the data while attempting to keep the same ratio of classes of the dependent variable in each split. Because of this, there can be slight changes every time you call the function, i.e. it will not be sequential by default.
1
2
1
In this: cross_val_score(GaussianNB(),features,target, cv=10) Are we splitting the data randomly into 10 or is it done sequentially?
Does cross_val_score take sequential samples or random samples?
1.2
0
0
992
47,433,414
2017-11-22T11:20:00.000
1
0
0
0
python,amazon-web-services,boto3
47,442,387
2
false
0
0
You will need to iterate over the policies to get policy names. I am not aware of a get-policy type api that uses policy names only policy ARNs. Is there a reason that you do not want to get a list of policies? Other than to not download the list.
1
4
0
I am trying to get a policy from boto3 client but there is no method to do so using policy name. By wrapping the create_policy method in a try-except block i can check whether a policy exists or not. Is there any way to get a policy-arn by name using boto3 except for listing all policies and iterating over it.
boto3 iam client: get policy by name
0.099668
0
1
2,854
47,433,990
2017-11-22T11:49:00.000
-1
0
1
0
multithreading,winapi,python-multithreading
47,439,720
1
false
0
0
Look at the RegisterWindowMessage function, it does pretty much exactly what you are after (provides a message number that should not collide with any other). The one downside is that the message number is then not a constant but will vary from run to run of your program, this makes the message loop somewhat more complicated but is well worth it for this sort of thing.
1
1
0
I have an application in which the worker thread needs to call a method on main thread. I am planning to send a custom message via win32api.PostThreadMessage(with message WM_USER+X), and when this message is received, some function must get executed on main thread. What I am looking is, to register a method to the corresponding WM_USER_X message?
MultiThreading: Can we register a method on main thread corresponding to a custom message on windows?
-0.197375
0
0
59
47,437,396
2017-11-22T14:39:00.000
0
0
1
0
python,anaconda,jupyter-notebook
60,792,997
3
false
0
0
Update numpy and then try again; or remove ipykernel, install it again, and then try.
1
1
0
My kernel keeps dying but I have no idea what the problem is. I have reinstalled anaconda a couple of times but nothing is working. The error message is "can't import PY3." Anyone got any ideas on how to solve this?
Kernel keeps dying using jupyter (Anaconda)
0
0
0
2,427
47,437,955
2017-11-22T15:07:00.000
0
0
1
0
python,byte,big5
47,438,018
1
false
0
0
Add a backslash to your string. Instead of: b'\x%x\x%x' % (0xa4, 0x41) try: b'\\x%x\\x%x' % (0xa4, 0x41) This should prevent the error, and the output string will still contain a single backslash.
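An alternative to escape-string formatting: build the two bytes directly with bytes([hi, lo]) and decode each pair. Python ships a big5 codec, so iterating a slice of the 0xA440-0xC67E range from the question can look like this (not every byte pair is a valid Big5 code point, hence the try/except):

```python
chars = []
for code in range(0xA440, 0xA460):  # a small slice of the 0xA440-0xC67E range
    hi, lo = code >> 8, code & 0xFF
    try:
        # construct the two-byte sequence directly, no "\x" string formatting needed
        chars.append(bytes([hi, lo]).decode("big5"))
    except UnicodeDecodeError:
        pass  # skip byte pairs unassigned in Big5
```

Extending the range bound to 0xC67F covers the full span the question asks about.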
1
0
0
I want to get the char in 0xA440-0xC67E with big5 encoding. I can get the char by decoding byte code like b'\xa4\x41'.decode('big5'), and how can I put this in for loop? I can't modify it in format way like b'\x%x\x%x' % (0xa4, 0x41). It will return error "(value error) invalid \x escape at position 0"
Iterate Big5 charset in python
0
0
0
57
47,439,668
2017-11-22T16:31:00.000
-1
0
0
0
python-3.x,google-chrome-devtools,selenium-chromedriver
47,486,605
2
false
1
0
What is going on here? I've had this issue before and my solution was to uninstall and reinstall chrome and it worked fine. But now, even uninstalling and reinstalling doesn't work and this 'Devtools listening...' line is popping up right away and my script doesn't work! Everyone seems to have the same experience with this message. It's working fine for the longest time and then all of a sudden this message pops up and the script stops working? Chrome has to be caching something somewhere.
1
0
0
Does anybody have the solution yet? The script was working perfectly fine until yesterday, but suddenly it stopped working. Don't know why.. :( I tried searching the answer all over Google, but didn't find yet. Tried adding options.add_argument('--log-level=3') as well, but no luck. Can someone help me out with this?
Python selenium: DevTools listening on ws://127.0.0.1 keeps on popping up
-0.099668
0
1
1,868
47,441,017
2017-11-22T17:49:00.000
4
0
1
0
python,oop,procedural-programming
47,441,162
1
false
0
0
I would say it's not bad at all, if the problem can be solved more easily that way and that your language supports it. Programming paradigms are just different approaches to solving problems: Procedural says "What are the steps that need to be performed to solve this problem?" Functional says "What values should be transformed and how should they be transformed to solve this problem?" OOP says "What objects need to interact with one another and what messages are needed to be sent between them to solve this problem?" When you divide up your problem into smaller ones, you might find that some parts of the problem can be solved more easily in a functional way, other parts a procedural way. It's totally possible to have such a problem. Python is mainly procedural and quite OOP, and has functional features (namely functools). It's not as functional as Haskell and not as OOP as C#, but it allows you to use those paradigms to some extent.
1
3
0
I have been working on a program to solve a Rubik's cube, and I found myself unsure of a couple of things. Firstly, if I have a bunch of functions that have general applications (e.g., clearing the screen), and don't know whether or not I should change them to methods inside of a class. Part of my code is OOP and the other part is procedural. Does this violate PEP8/is it a bad practice? And I guess in broader terms, is it a bad idea to mix two styles of programming (OOP, functional, procedural, etc)?
Is it bad practice to mix OOP and procedural programming in Python (or to mix programming styles in general)
0.664037
0
0
2,765
47,441,418
2017-11-22T18:15:00.000
2
0
1
0
python,date,date-format,strptime
47,441,540
1
true
0
0
You should prepend the date placeholders with %. In your case it should be datetime.strptime(start_date[:-6], '%Y-%m-%d %H:%M:%S').
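With the % directives in place, the parse from the question succeeds (the pytz/localize step is left out of this sketch):

```python
from datetime import datetime

start_date = "2017-12-11 10:00:00-08:00"

# strip the "-08:00" UTC-offset suffix, then parse with %-prefixed directives
parsed = datetime.strptime(start_date[:-6], "%Y-%m-%d %H:%M:%S")
```

Without the % prefixes, strptime treats "Y-m-d H:M:S" as literal characters, which is exactly why the original format string never matched.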
1
0
0
I'm getting the following error: time data '2017-12-11 10:00:00' does not match format 'Y-m-d H:M:S' The formatting looks perfect to me. Here is my code: start_date = '2017-12-11 10:00:00-08:00' start_date = pytz.timezone(event.time_zone).localize(datetime.strptime(start_date[:-6],'Y-m-d H:M:S')) Am I missing something?
python time data does not match format with identical mapping
1.2
0
0
468
47,442,088
2017-11-22T19:01:00.000
1
0
1
0
python
47,442,411
2
false
0
0
According to the Python 2 documentation: On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter the decimal number 0.1 is the binary fraction 0.00011001100110011001100110011001100110011001100110011010 which is close to, but not exactly equal to, 1/10.
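The same figure can be read off at runtime from sys.float_info (this holds for Python 3 as well, since CPython floats are IEEE-754 doubles on typical machines):

```python
import sys

# 53 bits of mantissa, which works out to 15 reliably-representable
# significant decimal digits.
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.dig)       # 15
```

sys.float_info also exposes max, min, and epsilon if you need the full picture of the platform's float type.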
1
1
0
Does anyone know what is Python's default float precision value? Couldn't find anything via Google!
Python2.7 default float precision
0.099668
0
0
1,689
47,444,039
2017-11-22T21:20:00.000
2
0
0
0
python,tensorflow,convolution
47,444,089
1
true
0
0
Yes, you could do it that way. Another is to use the stride parameter to specify the stride in each dimension. You can set the other dimension's stride to 0. stride=(1, 0) is the proper syntax, I think.
1
1
1
Put another way I want the filter to cover the image across one dimension and only move in the other direction. If I have an input with shape (ignoring batch size and number of input channels) (h, w), I want to have a filter to have shape (x, w) where x<h. Would creating a filter with shape (x, w) and using padding of 'VALID' be the correct thing to do (the idea being VALID will force that the filter is only convolved across the image when the filter 'fits' across the image in the width dimension)? Might there be another, better way to do this?
How to create a conv layer where the filter doesn't move across one dimension
1.2
0
0
20
47,444,178
2017-11-22T21:32:00.000
0
0
0
1
python,linux,python-3.x,python-os
47,444,271
3
false
0
0
I guess on UNIX-based systems it really just comes down to personal preference. I'd use os.getlogin() if I planned to write the code for other platforms such as Windows. Moreover, environment variables can easily be manipulated, so os.getlogin() is more secure in cases where security is a priority.
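One practical difference worth knowing: os.getlogin() asks the OS for the login name of the controlling terminal and can raise OSError when there is none (cron jobs, daemons, some containers), whereas the environment variables are always readable. A best-effort helper sketch combining the two (the fallback chain is my suggestion):

```python
import getpass
import os


def current_user():
    """Best-effort username lookup with sensible fallbacks."""
    try:
        return os.getlogin()  # login of the controlling terminal
    except OSError:
        # no controlling terminal: fall back to env vars, then getpass
        return os.environ.get("USER") or os.environ.get("LOGNAME") or getpass.getuser()
```

getpass.getuser() itself checks LOGNAME/USER/etc. and finally the password database, so it is the most robust single call if you don't care which source wins.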
1
7
0
Is there a difference between using os.getlogin() and os.environ for getting the current user's username on Linux? At different times I've seen someone recommend looking at the environment variables $USER or $LOGNAME, and other times os.getlogin() was recommended. So I'm curious: is one preferred, or are there situations where you would use one over the other, or are they simply two ways of doing the same thing?
Difference between os.getlogin() and os.environ for getting Username
0
0
0
9,908
47,448,510
2017-11-23T05:56:00.000
17
0
0
0
python,django,python-3.x
56,654,583
2
true
1
0
Ctrl + C worked perfectly on PC
1
4
0
Is there a way in one of the Django scripts to change the CTRL+BREAK exit command to something else when using "python manage.py runserver?"
Change "Quit the server with CTRL-BREAK."
1.2
0
0
16,874
47,449,444
2017-11-23T07:04:00.000
0
0
0
0
python,flask,single-sign-on,flask-login,flask-oauthlib
47,532,413
1
false
1
0
So, I've spent some time thinking about this (as I want to do it for my own website), and I've come up with a theoretical solution. From what I understand in my implementation of Google's OAuth API, OAuth is about sending the user on a link to the server that hosts the OAuth keys, and then returning back to the sender. So, my idea is that in the login form, you have buttons that act as links to the OAuth client. I haven't tested this, but since no one else has replied to this I figure this will give you a nice little project to implement yourself and let us know if it works.
1
1
0
For now, I'm trying to use flask-oauthlib with flask-login together. flask-oauthlib provides a simple way to use oauth, but I want a login manager to automatically redirect all users who are not logged in to /login. In flask-login, I can use login_required to accomplish this. But how could I achieve this by flask-oauthlib? Further, how to manage session when using flask-oauthlib? I understand how SSO works, but I'm confusing how could I know if this token is expired or not?
How to use flask-oauthlib with flask-login together?
0
0
1
504
47,454,016
2017-11-23T11:06:00.000
0
0
0
1
windows,python-3.x,variables,cmd,path
47,454,887
1
true
0
0
OK! I figured out the issue - The python36 folder was not placed in the C: location. Now I've corrected it and it's working for me! Thanks!
1
0
0
I'm wondering how I can run Python 3 by default when I type python in the Windows CMD. This is not a duplicate question because I've already added C:\Python36 to the PATH variable, so when I type py in CMD it gives me Python 3.6.2, while py2 gives me Python 2.7.11. Since I need to run python script.py through another application, I can't change the command to py script.py, so my question is: how can I run Python 3 using the command python script.py? I've tried placing Python 3 preceding Python 2, like C:\Python36;C:\Python27, in the PATH variable, but when I type python, CMD still shows Python 2.7.11... I'm confused about why it didn't work. Thank you very much in advance!
How to run Python 3 when I type "python" in windows CMD
1.2
0
0
100
47,454,686
2017-11-23T11:41:00.000
0
0
0
0
django,python-3.x,postgresql,mqtt,thingsboard
47,798,648
2
true
1
0
ThingsBoard has APIs which you can use. You may also customise it based on your requirements.
1
0
0
I am using the Python3, Django and database as Postgresql, and I wanted to use the Thingsboard dashboard in my web application. can anyone pls guide me how can I use this
I am using the python django, and i wanted to know to get use thingsboard dashboard, and database as postgresql
1.2
1
0
304
47,455,323
2017-11-23T12:11:00.000
0
0
0
0
python,pandas,python-datetime
47,455,441
1
true
0
0
As pointed out by jezrael and EdChum, I had bad data in my column. The errors='coerce' option solved this problem.
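For reference, the format string itself is fine; it is stray values like '0' that break parsing. A stdlib-only sketch confirming this (the try_parse helper is hypothetical, added only to illustrate what errors='coerce' does per value):

```python
from datetime import datetime

FMT = "%d%b%Y:%H:%M:%S"

# strptime is case-insensitive for month names, so '24JUN2017' parses fine.
good = datetime.strptime("24JUN2017:14:46:57", FMT)
print(good)  # 2017-06-24 14:46:57

def try_parse(value):
    """Return the parsed datetime, or None for bad data such as '0'."""
    try:
        return datetime.strptime(value, FMT)
    except ValueError:
        return None  # pd.to_datetime(..., errors='coerce') would emit NaT here

print(try_parse("0"))  # None
```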
1
1
1
I know this has been asked 1000 times before but what i am experiencing is really weird and I cant troubleshoot it. I have a date column structured like this: 24JUN2017:14:46:57 I use: pd.to_datetime('24JUN2017:14:46:57', format="%d%b%Y:%H:%M:%S") and it works fine. But when I try to input the whole column: pd.to_datetime(df['date_column'], format="%d%b%Y:%H:%M:%S") I get the error of the title. Anyone knows why I might be experiencing this? ps:I have checked for any empty spaces and there are none
Weird behavior with datetime: error: time data '0' does not match format '%d%b%Y:%H:%M:%S'
1.2
0
0
111
47,455,914
2017-11-23T12:46:00.000
0
0
0
0
python,nlp,deep-learning,nltk
47,459,041
1
false
0
0
Use POS tagging / Parse Trees to take the corpus of job responsibility sentences that you have accumulated and split it into some broad categories. (e.g. sentences that start out with a bunch of comma-separated verbs, sentences that have a single verb but are in the past tense (i.e. requiring the person to have some specific type of experience), etc). Now try to create a transformation rule for each of those parts. The rule for the sentence you put above might be to prefix the sentence with the word "Can". Some rules might require to change the tense of some verb, convert some noun from singular to plural, swap the order of some parts of the sentence, etc. All these things should be doable using standard NLP tools.
1
0
1
I am working on a NLP problem of rewriting job responsibilities and requirements to skill sets ready to be mentioned in the resume. Example: 1:responsibility: "Train, supervise, motivate and develop sales personnel to attain or exceed their sales territory sales goals;" 1:skill: "Have provided training,guidance and motivation to sales representatives to achieve or exceed their sales area and target." I have tried to do this using paraphrasing techniques,but the narration of sentence and some words are not appropriate to be mentioned in resume. Any help and guidance will be appreciable. Thanks
conversion of job responsibilites and requirements from job descriptions to skillset Using Nlp
0
0
0
53
47,458,024
2017-11-23T14:41:00.000
0
0
0
1
python,windows,server
47,458,154
1
false
0
0
You can use the subprocess module or os.system to launch a command into a windows shell. Then you can use Powershell or cmd instructions.
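One possible sketch, assuming the share is reachable as a UNC path; the paths, share name, and the 'net use' authentication step are assumptions to adapt to your setup:

```python
import shutil
import subprocess

# These paths are assumptions -- adjust to your machine and share.
SRC = r"C:\data\report.xlsx"
DST = r"\\remote-server\share\report.xlsx"

def copy_to_share(src, dst, user=None, password=None):
    """Copy a local file to a (possibly remote) path, authenticating first."""
    if user and password:
        # 'net use' attaches credentials to the share without assigning a
        # drive letter, so the UNC path becomes directly usable.
        share = dst.rsplit("\\", 1)[0]
        subprocess.run(["net", "use", share, password, f"/user:{user}"],
                       check=True)
    shutil.copy2(src, dst)  # copies the file contents plus timestamps
```

Calling copy_to_share(SRC, DST, user="DOMAIN\\alice", password="...") would then authenticate and copy; without credentials the function is just shutil.copy2 with a UNC destination.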
1
0
0
How can I easily copy a local file to a remote server using python? I don't want to map the drive to my local machine and the windows server requires a username and password. The local machine is also a windows machine. Examples I've seen are with linux and mapping the drive. Unfortunately that's not an option for me.
Transfer file from local machine to remote windows server using python
0
0
1
749
47,459,054
2017-11-23T15:38:00.000
0
0
1
0
python,jupyter
68,312,763
2
false
0
0
Hit Esc to exit edit mode. Click to select the topmost cell you want to delete; in JupyterLab, a blue vertical line appears to the cell's left. Hit End to scroll to the bottom of the notebook. Shift-click the last cell (similar to step 2) to select the range of cells to delete. Hit D, D to delete all the selected cells. Hope that helps.
2
10
0
I everyday try out my solutions before answering a pandas question in stack overflow . Usually after two to three days the jupyter notebook I use will be with n number of cells. Is there a way to delete all cells at once other than creating new notebook by deleting current one? D D deletes one cell at a time.
Is there a way to delete all cells at once in jupyter?
0
0
0
7,112
47,459,054
2017-11-23T15:38:00.000
25
0
1
0
python,jupyter
47,459,164
2
true
0
0
You can delete all Jupyter notebook's cells at once as follows: Select all cells : ESC + SHIFT + DOWN (starting from the top) Then click ESC + D D (D twice)
2
10
0
I everyday try out my solutions before answering a pandas question in stack overflow . Usually after two to three days the jupyter notebook I use will be with n number of cells. Is there a way to delete all cells at once other than creating new notebook by deleting current one? D D deletes one cell at a time.
Is there a way to delete all cells at once in jupyter?
1.2
0
0
7,112
47,459,747
2017-11-23T16:19:00.000
3
0
1
0
python,pygame
50,942,227
4
false
0
0
I had the same issue on Windows. My antivirus was blocking pip's requests. Try disabling your antivirus (in my case I killed it manually from Task Manager).
3
2
0
I am trying to install pygame with pip install . but every time i tried i faced to this error. Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Operation cancelled by user* I have done it with other libraries but I faced the same problem
pip install is not working
0.148885
0
0
15,925
47,459,747
2017-11-23T16:19:00.000
0
0
1
0
python,pygame
47,460,018
4
false
0
0
It looks like pip is not connecting to the internet. I have a few options -- I don't know if they will work, but you can try them. Try to reinstall pip (pip3 if using Python 3). I had to do this on my system, as pip3 didn't work initially either. Check whether you can ping a website from your terminal to verify connectivity. You could have an error with your terminal (if using Linux) and not with Python itself. Good luck, and hope this helps!
3
2
0
I am trying to install pygame with pip install . but every time i tried i faced to this error. Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Operation cancelled by user* I have done it with other libraries but I faced the same problem
pip install is not working
0
0
0
15,925
47,459,747
2017-11-23T16:19:00.000
1
0
1
0
python,pygame
54,913,694
4
false
0
0
I had the same error message when I tried to install Python packages on my laptop with Windows 10. I tried all the methods recommended online and it still didn't work. I have been noticing for a while that Windows 10 automatically toggles the proxy setting, causing Internet access problems sometimes. Then I googled with the keywords 'windows 10 automatic proxy setting off'. Someone mentioned modifying the regedit key value of [ProxyEnabled=1] under Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings. I opened regedit and found it was [ProxyEnabled=1]. I compared HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings with another workstation with a relatively clean image. There it was [ProxyEnabled=0], with no other extra entries for Proxy. Solution: set ProxyEnabled=0 and delete any other Proxy entries. It worked successfully! pip install package_name
3
2
0
I am trying to install pygame with pip install . but every time i tried i faced to this error. Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))': /simple/pygame-1-9-3-cp36-cp36m-win-amd64/ Operation cancelled by user* I have done it with other libraries but I faced the same problem
pip install is not working
0.049958
0
0
15,925
47,460,240
2017-11-23T16:51:00.000
2
1
1
0
python,c,numerical-methods
47,460,725
1
true
0
0
Binary file, but please be careful with the format of the data you are saving. If possible, reduce the width of each variable you are using. For example, do you need to save a double or float, or can you get away with a 16- or 32-bit integer? Further, yes, you may apply a compression scheme to compress the data before saving and decompress it after reading, but that requires much more work, and it is probably overkill for what you are doing.
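For the Python side, a sketch of writing and lazily reading raw little-endian doubles with the stdlib struct module; the filename is arbitrary, and the C writer would produce the same layout with fwrite:

```python
import os
import struct

path = "results.bin"
if os.path.exists(path):
    os.remove(path)  # start fresh for this demo

# Each double is 8 bytes; '<d' fixes little-endian so the layout matches a
# C program doing fwrite(&x, sizeof(double), 1, fp) on a little-endian box.
values = [3.141592653589793, 2.718281828459045, -1.0e-12]

with open(path, "ab") as f:  # 'ab' lets the simulation keep appending
    for v in values:
        f.write(struct.pack("<d", v))

def read_doubles(p):
    """Yield doubles one at a time, so the file never has to fit in memory."""
    with open(p, "rb") as f:
        while chunk := f.read(8):
            yield struct.unpack("<d", chunk)[0]

print(list(read_doubles(path)))  # the three values round-trip exactly
```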
1
2
0
I am currently running Simulations written in C later analyzing the results using Python scripts. ATM the C Programm is writing the results (lots of double values) in a text file which is slowly but surely eating a lot of disc space. Is there a file format which is more space efficient to store lots of numeric values? At best but not necessarily it should fulfill the following requirements Values can be appended continuously such that not all values have to be in memory at once. The file is more or less easily readable using Python. I feel like this should be a really common question, but looking for an answer I only found descriptions of various data types within C.
Space efficient file type to store double precision floats
1.2
0
0
201
47,462,591
2017-11-23T19:58:00.000
19
0
1
0
python,virtualenv
47,462,620
2
true
0
0
Once you activate your virtual environment, you should be able to list out packages using pip list and verify version using python --version.
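If you also want to verify from inside Python that you are in the environment you think you are, a small stdlib-only check (one sketch among several possible):

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points into the environment while
    # sys.base_prefix points at the system Python; old-style virtualenv
    # sets sys.real_prefix instead.
    return (
        hasattr(sys, "real_prefix")
        or sys.prefix != getattr(sys, "base_prefix", sys.prefix)
    )

print("in virtualenv:", in_virtualenv())
print("python", sys.version.split()[0], "at", sys.prefix)
```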
1
17
0
How do you check what package and version are installed inside virtualenv? My thought was to create requirement.txt file. But, is there another way to do this from CLI?
Python: How do you check what is in virtualenv?
1.2
0
0
26,470
47,463,593
2017-11-23T21:44:00.000
0
1
0
1
php,python,json,apache,file-io
47,467,584
2
true
0
0
Thank you to user2693053 for solving this problem: Sounds like a privilege issue, although I am not experienced enough with calling Python from php to know whether that could be an issue here. and yes, it is a privilege issue! I fixed the problem by doing the sudo chmod 777 * command in the /var/www/html directory.
2
0
0
I have an open() call in my Python script which opens the file data.json, in w mode, on an Apache server running on a Raspberry Pi. This script in turn is run by PHP using the shell_exec command. When the script is run alone, the Python code works. However, it does not function when run by PHP. Does anyone have any idea why this is happening, or is more information needed? Thank you in advance for your help!
Python open() function not working when script executed by shell_exec() in PHP
1.2
0
0
165
47,463,593
2017-11-23T21:44:00.000
1
1
0
1
php,python,json,apache,file-io
47,464,087
2
false
0
0
Did you use the relative or absolute path? Try using the absolute path and check what your working directory is. I think you are not starting the Python script in the directory you think you are. :)
2
0
0
I have a open() command in my Python script which opens the file data.json in my Apache serve running on Raspberry Pi in w mode. This script in turn is run by PHP using the shell_exec command. When the script is run alone, the Python code works. However, it does not function when run by PHP. Does anyone have any idea why this is happening, or is more information needed? Thank you in advance for your help!
Python open() function not working when script executed by shell_exec() in PHP
0.099668
0
0
165
47,465,474
2017-11-24T02:17:00.000
0
0
1
0
python-3.x
47,465,487
1
true
0
0
Make a mapping of operators to attributes of operator, and then have the function get the entry in the map and apply it to the operands.
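A minimal implementation of that idea using the stdlib operator module:

```python
import operator

# Map each symbol to the matching function from the stdlib operator module;
# extend the dict to support more symbols.
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def apply_operator(op, a, b):
    try:
        return OPS[op](a, b)
    except KeyError:
        raise ValueError(f"unsupported operator: {op!r}") from None

print(apply_operator("+", 40, 5))  # 45
print(apply_operator("-", 40, 5))  # 35
```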
1
1
0
I'm trying to write a function that takes any operator and applies it to two integers. Any thoughts? for example: apply_operator("+", 40, 5) 45 apply_operator("-", 40, 5) 35
Applying any operator to two integers (in python)
1.2
0
0
30
47,466,304
2017-11-24T04:17:00.000
0
0
0
0
python,opencv,background-subtraction,cascade-classifier
52,090,170
1
false
0
0
A cascade classifier works on an image, where as whether a car is moving or not comes from info across frames (time). Could you add details on how exactly you tried using the cascade? A cascade can possibly detect cars, but you would need another layer of intelligence/rules on top of it to infer whether it was moving or not. Are you getting lot of falses while running the cascade on every frame? If so, some things you could try: Modify cascade training params to make things tighter (more false rejection) at each stage Add more layers - i.e. train longer. if the cascade training is running out negatives for higher layers, you may want to add a lot more images from where it can take negative samples. Try correlating boxes detected across time as a way of reducing falses After you have achieved a good enough detection rate with false positives low enough, you can focus on the logic needed to figure out whether a car is stopped or not, possibly based on tracking the detected car over time.
1
0
0
I used background subtraction to detect moving cars, and also to detect stopped cars on the road, but accurately finding non-moving cars is tedious. I've tried cascade classifiers, but they give too many false positives, and it would be helpful if I could separate the regions as moving and non-moving dynamically.
How to detect cars in road that never move in complete video?
0
0
0
104
47,466,390
2017-11-24T04:30:00.000
1
0
1
0
python,python-3.x,jinja2
47,466,422
1
true
1
0
If you mean to just iterate through the string, you can explicitly make the array in python and pass it into jinja. Try to minimise logic in jinja as much as possible.
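A quick sketch of the Python side (the render call is hypothetical; in Jinja2 you can also iterate a string directly in a {% for %} loop, but building the list in Python keeps the template simple):

```python
# list() splits a string into single-character items; build the array in
# Python and pass it into the template context.
letters = list("acbdefrg")
print(letters)  # ['a', 'c', 'b', 'd', 'e', 'f', 'r', 'g']

# Hypothetical view code -- hand the list to render() and loop over it
# with {% for ch in letters %} in the template:
# return render_template("page.html", letters=letters)
```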
1
0
0
I need to convert a string acbdefrg into an array of individual letters [a,b,c,d,e,f,r,g]. I understand how it's done in python through list(your_string). Hoever this code does not work in jinja2. Is this possible, or will I have to make the array in python and pass it over to jinja?
Split string into individual charactor array in jinja2
1.2
0
0
375
47,466,917
2017-11-24T05:32:00.000
0
0
1
0
python,file-handling
47,467,062
2
false
0
0
Is '0' the label? If each entry is only one sentence, you can do string.split('.') using a period as the delimiter. This might mis-split sentences containing things like 'Mr.' or 'Mrs.', though, so you might need to add some if statements to handle those cases.
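If the label is always the last whitespace-separated token on each line, a sketch that avoids the 'Mr.'/'Mrs.' problem entirely (the sample lines come from the question):

```python
# rsplit with maxsplit=1 separates sentence from label without touching
# periods inside the sentence, so abbreviations stay safe.
lines = [
    "A very, very, very slow-moving, aimless movie about a distressed, drifting young man.  0",
    "Not sure who was more lost - the flat characters or the audience, nearly half of whom walked out.  0",
]

sentences, labels = [], []
for line in lines:
    sentence, label = line.rsplit(maxsplit=1)
    sentences.append(sentence)
    labels.append(label)

print(labels)  # ['0', '0']
print(sentences[0][-10:])  # 'young man.'
```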
1
0
0
I have a sample of a file with sentences and labels. How can it be split into sentences and labels? A very, very, very slow-moving, aimless movie about a distressed, drifting young man. 0 Not sure who was more lost - the flat characters or the audience, nearly half of whom walked out. 0 Attempting artiness with black & white and clever camera angles, the movie disappointed - became even more ridiculous - as the acting was poor and the plot and lines almost non-existent. 0 Very little music or anything to speak of. 0 output list of sentences: ['A very, very, very slow-moving, aimless movie about a distressed, drifting young man','Not sure who was more lost - the flat characters or the audience, nearly half of whom walked out'] corresponding labels: ['0','0']
splitting lines in python with sentences and labels
0
0
0
671
47,466,982
2017-11-24T05:40:00.000
0
1
1
0
python,git,intellij-idea
47,467,268
3
false
0
0
The issue is that I had never committed a .pyc file before. I've now added it to .gitignore and whatnot, but why did I never commit one before? It turns out IntelliJ IDEA completely ignores .pyc files; I cannot even create a .pyc file in the project manually. So maybe that is why.
1
0
0
i've been working with the python project before and never submitted a pyc file. Not sure as to why. I am using intellij, maybe it automatically ignores those files and doesn't display them? maybe I am not using CPython? python --version spits out to me Python 3.6.1 :: Continuum Analytics, Inc.
pyc got checked into my git project by coworker, how comes?
0
0
0
30
47,469,591
2017-11-24T08:56:00.000
0
0
0
0
python,open-source
47,469,592
1
false
0
0
In the Animation tab, try a right click on the model and, under "roving type", select "response roved".
1
0
1
I am asking this for mgx4: I imported a .uff file containing FRF from Experimental Modal Analysis (EMA). I generated the mesh (geometry) of the tested structure with lines and triangle surfaces. I succeed to realize the analysis and extract eigen frequencies and modal damping. What I am still unable to do is to animate the geometry according to eigen shapes as a function of eigen frequencies. When I am on the tab "Measurement" on the tab menu "ANIMATION" and I put the frequency cursor on one peak of the FRF and I click on "Play", I can just see one node moving. I would like to observe all the nodes moving in order to identify the eigen shape of the mode observed. Do you please have an idea on how to do that? Thank you for your help!
Animating modes in openmodal: roving type
0
0
0
45
47,473,750
2017-11-24T12:53:00.000
2
0
0
0
python,sockets,asyncore
47,474,778
1
true
0
0
1. asyncore relies on the operating system for the whole connection handling, therefore what you are asking is OS dependent. It has very little to do with Python. Using twisted instead of asyncore wouldn't solve your problem. On Windows, for example, you can listen for only 5 connections coming in simultaneously. So the first requirement is: run it on a *nix platform. The rest depends on how long your handlers take and on your bandwidth.
2. What you can do is combine asyncore and threading to speed up waiting for the next connection. I.e. you can make handlers that run in separate threads. It will be a little messy, but it is one possible solution. When the server accepts a connection, instead of creating a new traditional handler (which would slow down checking for the following connection, because asyncore waits until that handler does at least a little bit of its job), you create a handler that deals with read and write as non-blocking. I.e. it starts a thread and does the job; then, only when it has data ready, it sends it upon the following loop() check. This way, you allow asyncore.loop() to check the server's socket more often.
3. Or you can use two different socket_maps with two different asyncore.loop()s. You use one map (dictionary), let's say the default one, asyncore.socket_map, to check the server, and use one asyncore.loop(), let's say in the main thread, only for the server. And you start the second asyncore.loop() in a thread, using your custom dictionary for the client handlers. So one loop checks only the server that accepts connections; when a connection arrives, it creates a handler which goes into a separate map for handlers, which is checked by another asyncore.loop() running in a thread. This way, you do not mix the server connection checks and the client handling, and the server is checked immediately after it accepts one connection while the other loop balances between clients.
If you are determined to go even faster, you can exploit multiprocessor machines by having more maps for handlers, for example one per CPU, with as many threads running asyncore.loop()s. Note that sockets are I/O operations using system calls, and select() is one too; therefore the GIL is released while asyncore.loop() is waiting for results. This means that you take full advantage of multithreading, and each CPU deals with its share of clients in a literally parallel way. What you would have to do is make the server distribute the load and start threaded loops upon connection arrivals. Don't forget that asyncore.loop() ends when its map empties, so the loop() in a thread that manages clients must be started when a new connection is accepted, and restarted if at some point there are no more connections present.
4. If you want to be able to run your server on multiple computers and use them as a cluster, then you install a process balancer in front. I do not see a serious need for one if you wrote the asyncore server correctly and want to run it on a single computer only.
1
1
0
I've built a server listening on a specific port on my machine using Python (asyncore and sockets), and I was curious to know whether there is anything I can do when too many people connect at once to my server. The code itself cannot be changed, but would adding more processes work? Or is it a hardware concern, and should I focus on adding a load balancer in front and balancing the requests across multiple servers? This question is borderline between Stack Overflow (code/Python) and Server Fault (server management). I decided to go with SO because of the code, but if you think Server Fault is better, let me know.
How to handle a burst of connection to a port?
1.2
0
1
78
47,475,206
2017-11-24T14:22:00.000
-1
0
0
0
python,opencv,object,depth
63,786,602
1
false
0
0
The content of my research is the same as yours, but I have a problem now. I use stereocalibrate() to calibrate the binocular camera, and found that the obtained translation matrix is very different from the actual baseline distance. In addition, the parameters used in stereocalibrate() are obtained by calibrating the two cameras with calibrate().
1
0
1
I need to calculate distance from camera to depth image pixel. I searched through internet but I found stereo image related info and code example where I need info for depth image. Here, I defined depth image in gray scale(0-255) and I defined a particular value( let range defined 0 pixel value is equal to 5m and 255 pixel value is equal to 500m in gray scale). camera's intrinsic (focal length, image sensor format) and extrinsic (rotation and transition matrix) is given. I need to calculate distance from different camera orientation and rotation. I want to do it using opencv python. Is there any specific documentation and code example regarding this? Or any further info is necessary to find this.
distance calculation between camera and depth pixel?
-0.197375
0
0
569
47,477,709
2017-11-24T17:17:00.000
1
0
0
0
python,macos,python-2.7,terminal,macos-sierra
47,478,014
1
true
0
0
By default CTRL+Z suspends a process and places it in the background; see man bash and search for job control. On the OS X standard bash, CTRL+C sends a SIGINT (interrupt signal) to the foreground process and also prints ^C. SIGINT can be ignored or handled by the running process. By default Python handles SIGINT and converts it to a KeyboardInterrupt exception. If your scripts use a bare except: clause, catch BaseException, or install their own SIGINT handler, they can swallow the CTRL+C. Check your scripts for signal handling and overly broad exception handling. To check for remapping in the terminal, type stty -a and look under cchars; you should see intr = ^C; and susp = ^Z;.
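A POSIX-only sketch of how a custom handler intercepts SIGINT before Python's default KeyboardInterrupt behaviour (illustrative only, not the asker's script; sending the signal to our own process stands in for pressing CTRL+C):

```python
import os
import signal
import time

# Python installs a SIGINT handler that raises KeyboardInterrupt, which
# derives from BaseException; a broad 'except Exception' will not swallow
# it, but a bare 'except:' clause or a custom handler like this one will.
caught = []

def handler(signum, frame):
    caught.append(signum)

old = signal.signal(signal.SIGINT, handler)
try:
    os.kill(os.getpid(), signal.SIGINT)  # simulate pressing CTRL+C
    for _ in range(100):                 # give the handler a moment to run
        if caught:
            break
        time.sleep(0.01)
finally:
    signal.signal(signal.SIGINT, old)    # restore the default behaviour

print(caught == [signal.SIGINT])
```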
1
1
0
So I've been using Control-Z to stop my Python scripts, but I recently noticed they were still active in Activity Monitor, and I can only terminate them there. Control-C doesn't work and only prints out ^C. Perhaps the combo got remapped? Any suggestions on how I can figure this out?
Terminate Python in Terminal (control-C just printing ^C) macOS 10.12
1.2
0
0
993
47,477,771
2017-11-24T17:22:00.000
1
0
0
0
python,django,python-3.x,heroku,hosting
47,478,237
1
false
1
0
You are probably using sqlite as the database. You must not use that on Heroku, as it is stored on the ephemeral file system. Use the proper Postgres add-on.
1
0
0
After 30mins of not being used Heroju resets my django site, however when the site is reset the database/models controlled and changed in the admin page are reset back to when the site was first uploaded. How do i stop this and make changes made in admin mode permanently to the site? thank you.
Heroku and Django, database resets when heroku restarts the site
0.197375
0
0
40
47,478,083
2017-11-24T17:49:00.000
0
0
0
0
python,scrapy,scrapy-spider
47,480,292
1
false
1
0
You could subclass JsonItemExporter, overriding the finish_exporting method to process the final JSON file.
1
0
0
Say I am parsing a listing page and generating a JSON output. At the end when all stuff is parsed, I want to run some operations on final scraped result. How can i do it in Scrapy? I know about process_item but it works for each item in iteration. The closed thing I found was close_spider but I am not sure that if I run scrapy crawl spider whether it will give me JSON I created in closed_spider? If yes then how?
How to process final scrapers result in Scrapy?
0
0
0
135