Dataset schema (column: dtype, value range or string length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
44,058,239
2017-05-18T21:43:00.000
6
1
0
0
python-3.x,amazon-web-services,sqlite,aws-lambda
44,076,628
8
false
0
0
This isn't a solution, but I have an explanation of why it happens. Python 3 has sqlite support in the standard library (stable to the point that pip knows about it and will not allow installation of pysqlite). However, this library requires the SQLite development tools (C libraries) to be on the machine at runtime. Amazon's Linux AMI, which is what AWS Lambda runs on (bare AMI instances), does not have these installed by default. I'm not sure whether this means that sqlite support isn't installed or just won't work until the libraries are added, because I tested things in the wrong order. Python 2 did not bundle sqlite support in the same way; you had to use a third-party lib like pysqlite to get it, which means the binaries can be built more easily without depending on machine state or path variables. My suggestion, which I see you've already done, is to run that function on Python 2.7 if you can (and make your unit testing just that much harder :/). Because of the limitation (it being baked into Python's base libraries in 3), it is more difficult to create a Lambda-friendly deployment package. The only things I can suggest are to either petition AWS to add that support to Lambda or, if you can get away without actually using the sqlite pieces in nltk, copy Anaconda's approach of shipping blank libraries that have the proper methods and attributes but don't actually do anything. If you're curious about the latter, check out any of the fake/_sqlite3 files in an Anaconda install; the idea is only to avoid import errors. A minimal sketch of such a stub module follows below.
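As an illustration of the stub-library idea above, here is a minimal sketch (not taken from Anaconda; the class and function names are only assumptions chosen to satisfy sqlite3-style imports). A stub like this only prevents ImportError; any real use of the database API will fail loudly.

    # fake sqlite3 stub: satisfies "import sqlite3"-style imports without
    # providing any real database functionality (assumption: the importing
    # code never actually calls these APIs).
    class Error(Exception):
        """Placeholder for sqlite3.Error."""

    class Cursor(object):
        def execute(self, *args, **kwargs):
            raise NotImplementedError("sqlite3 is not available in this environment")

        def fetchall(self):
            raise NotImplementedError("sqlite3 is not available in this environment")

    class Connection(object):
        def cursor(self):
            return Cursor()

        def close(self):
            pass

    def connect(*args, **kwargs):
        return Connection()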
2
23
0
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite. In my code I am using nltk which has a import sqlite3 in one of the files. Steps taken till now: Deployment package has only python modules that I am using in the root. I get the error: Unable to import module 'my_program': No module named '_sqlite3' Added the _sqlite3.so from /home/my_username/anaconda2/envs/py3k/lib/python3.6/lib-dynload/_sqlite3.so into package root. Then my error changed to: Unable to import module 'my_program': dynamic module does not define module export function (PyInit__sqlite3) Added the SQLite precompiled binaries from sqlite.org to the root of my package but I still get the error as point #2. My setup: Ubuntu 16.04, python3 virtual env AWS lambda env: python3 How can I fix this problem?
sqlite3 error on AWS lambda with Python 3
1
1
0
7,936
44,058,239
2017-05-18T21:43:00.000
1
1
0
0
python-3.x,amazon-web-services,sqlite,aws-lambda
49,342,276
8
false
0
0
My solution may or may not apply to you (it depends on Python 3.5), but hopefully it sheds some light on a similar issue. sqlite3 comes with the standard library, but it is not built into the Python 3.6 that AWS uses, for the reason explained by apathyman and other answers. The quick hack is to include the shared object (.so) in your Lambda package: find ~ -name _sqlite3.so In my case: /home/user/anaconda3/pkgs/python-3.5.2-0/lib/python3.5/lib-dynload/_sqlite3.so However, that is not entirely sufficient. You will get: ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory Because _sqlite3.so is built against Python 3.5, it also requires the Python 3.5 shared object. You will need that in your deployment package as well: find ~ -name libpython3.5m.so* In my case: /home/user/anaconda3/pkgs/python-3.5.2-0/lib/libpython3.5m.so.1.0 This solution will likely not work if you are using a _sqlite3.so built with Python 3.6, because the libpython3.6 built by AWS will likely not support this. However, this is just my educated guess. If anyone has done it successfully, please let me know.
2
23
0
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite. In my code I am using nltk which has a import sqlite3 in one of the files. Steps taken till now: Deployment package has only python modules that I am using in the root. I get the error: Unable to import module 'my_program': No module named '_sqlite3' Added the _sqlite3.so from /home/my_username/anaconda2/envs/py3k/lib/python3.6/lib-dynload/_sqlite3.so into package root. Then my error changed to: Unable to import module 'my_program': dynamic module does not define module export function (PyInit__sqlite3) Added the SQLite precompiled binaries from sqlite.org to the root of my package but I still get the error as point #2. My setup: Ubuntu 16.04, python3 virtual env AWS lambda env: python3 How can I fix this problem?
sqlite3 error on AWS lambda with Python 3
0.024995
1
0
7,936
44,058,544
2017-05-18T22:11:00.000
6
0
1
0
python,audio,mp3,encoder,pydub
48,354,157
3
false
1
0
The other solution did not work for me. The problem in my case was that the ffmpeg version installed with Anaconda did not seem to be compiled with an MP3 encoder. So instead of: DEA.L. mp3 MP3 (MPEG audio layer 3) (decoders: mp3 mp3float mp3_at ) (encoders: libmp3lame ) I saw: DEA.L. mp3 MP3 (MPEG audio layer 3) (decoders: mp3 mp3float mp3_at ) without the (encoders: ...) part. My solution was: (1) ffmpeg -codecs | grep mp3, to check whether any encoder is listed (there isn't); (2) conda uninstall ffmpeg; (3) open a new terminal window; (4) brew install ffmpeg --with-libmp3lame; (5) ffmpeg -codecs | grep mp3 again, to confirm an encoder is now present. A sketch of the pydub export call follows below.
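For context, a minimal pydub export sketch that exercises the mp3 encoder; the file names here are assumptions, and it presumes an ffmpeg build with libmp3lame is on the PATH as described above.

    from pydub import AudioSegment

    # Load an existing mp3 and re-export it; exporting requires an mp3
    # encoder (libmp3lame) to be available in the ffmpeg build on the PATH.
    sound = AudioSegment.from_mp3("input.mp3")   # assumed input file
    sound.export("output.mp3", format="mp3")     # fails without an encoder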
1
5
0
I'm trying to export a file as mp3 in pydub, but I get this error: Automatic encoder selection failed for output stream #0:0. Default encoder for format mp3 is probably disabled. Please choose an encoder manually How do I select an encoder manually, what is the default encoder, and how could I enable it? PS: My Pydub opens mp3 files without any problem. I'm using Windows and Libav.
Pydub export error - Choose encoder manually
1
0
0
8,086
44,062,679
2017-05-19T06:15:00.000
1
0
0
0
python,chatbot
44,240,013
1
true
0
0
Yes, we can. The data folder is ".\data", which is a path relative to where you are invoking ubuntu_corpus_training_example.py. Create a folder ubuntu_dialogs and unzip all the folders into it; trainer.py looks at the .\data\ubuntu_dialogs***.tsv files.
1
0
1
Looking to create a custom trainer for chatterbot, In the ubuntu corpus trainer, it looks as if the training is done based on all the conversation entries. I manually copy the ubuntu_dialogs.tgz to the 'data' folder. Trainer fails with error file could not be opened successfully https://github.com/gunthercox/ChatterBot/blob/master/examples/ubuntu_corpus_training_example.py can i unzip all the data to ubuntu_dialogs and provide to trainer? edit: Yes, we can , the data folder is ".\data" , which is path from where you are invoking the ubuntu_corpus_training_example.py. create a folder ubuntu_dialogs and unzip all the folders, the trainer.py looks at .\data\ubuntu_dialogs***.tsv files
ChatterBot Ubuntu Corpus Trainer
1.2
0
0
941
44,066,098
2017-05-19T09:18:00.000
0
0
0
0
python,selenium,selenium-webdriver,window-handles,getcurrenturl
44,066,234
4
false
0
0
Instead of switching to the other tab, locate the link element that opens it and save its @href attribute into a string or list. A short Selenium sketch follows below.
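A minimal sketch of that idea: collect the href attributes of anchor elements in the current page instead of switching tabs. The driver setup and the page URL are assumptions.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()              # assumed driver
    driver.get("https://example.com")        # assumed page

    # Save the target URLs without ever switching tabs.
    urls = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]
    print(urls)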
1
1
0
I often open hundreds of tabs when using web browsers, and this slows my computer. So I want to write a browser manager in Python and Selenium , which opens tabs and can save the urls of those tabs, then I can reopen them later. But it seems like the only way to get the url of a tab in Python Selenium is calling get_current_url. I'm wondering if there's a way to get the url of a tab without switching to it?
Selenium: How to get current url of a tab without switching to it?
0
0
1
1,256
44,067,771
2017-05-19T10:32:00.000
0
0
1
0
revit-api,revitpythonshell,pyrevit
44,070,495
3
false
0
0
Ok sorry. Obviously I was wrong, it does point to all Elements, including the ones that are not instantiated.
2
2
0
Is it possible to access all the family types of a certain category (e.g. Windows, Doors, ...) with Revit API? In contrast with the instances. For what I know, using FilteredElementCollector(doc).OfCategory(...).ToElements() or FilteredElementCollector(doc).OfClass(...).ToElements() point to the instances of that class/type, but I want to check if a particular type is already loaded within Revit, even if it hasn't been instantiated yet. (I'm using pyRevit, Revit 2017) Thanks a lot!
How to access all the family types through revit API?
0
0
0
5,006
44,067,771
2017-05-19T10:32:00.000
2
0
1
0
revit-api,revitpythonshell,pyrevit
44,085,790
3
true
0
0
In your FilteredElementCollector, add WhereElementIsElementType() before calling ToElements(). For family-based elements like doors, you'll get back FamilySymbol elements, and from there you can check whether they're active. A short pyRevit-style sketch follows below.
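A sketch of that filter in pyRevit-style Python; getting the document via pyrevit and the Doors category are assumptions, so adapt them to the category you need.

    from pyrevit import revit
    from Autodesk.Revit.DB import FilteredElementCollector, BuiltInCategory

    doc = revit.doc  # assumed: the active Revit document via pyRevit

    # Collect loaded door *types* (FamilySymbol), not placed instances.
    door_types = (FilteredElementCollector(doc)
                  .OfCategory(BuiltInCategory.OST_Doors)
                  .WhereElementIsElementType()
                  .ToElements())

    for symbol in door_types:
        # FamilySymbol.IsActive indicates whether the type has been activated.
        print(symbol.Id, symbol.IsActive)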
2
2
0
Is it possible to access all the family types of a certain category (e.g. Windows, Doors, ...) with Revit API? In contrast with the instances. For what I know, using FilteredElementCollector(doc).OfCategory(...).ToElements() or FilteredElementCollector(doc).OfClass(...).ToElements() point to the instances of that class/type, but I want to check if a particular type is already loaded within Revit, even if it hasn't been instantiated yet. (I'm using pyRevit, Revit 2017) Thanks a lot!
How to access all the family types through revit API?
1.2
0
0
5,006
44,072,308
2017-05-19T14:12:00.000
0
0
0
0
python,django,file,optimization,deployment
44,072,397
1
false
1
0
Here are some tips: Follow a directory structure similar to this: appname/%Y/%m/%d. Use AWS S3, or another storage service, for your media and static files. Compress files to reduce size. Use whitenoise, or a similar package, to serve static files in production. Note: These tips are not solely for media, but also static files.
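To illustrate the directory-structure tip, a hedged Django model sketch; the model and field names are assumptions, and the %Y/%m/%d pattern in upload_to produces the date-based layout mentioned above.

    from django.db import models

    class Report(models.Model):  # assumed model name
        # Files land in MEDIA_ROOT/reports/<year>/<month>/<day>/<filename>
        pdf = models.FileField(upload_to="reports/%Y/%m/%d/")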
1
0
0
What should I know about saving files in production? I need to save lots media which are pdf files. The only thing I know so far, is that I shall rename pdf files into my own naming system (for example by overwriting storage in django). But what else is important ? Or that's all, just saving all files like 1.pdf,2.pdf,3.pdf,4.pdf.. in one media folder and it will work in long term without any other tricks and optimization? I am using django1.8 and python 2.7, but I guess it's very general question regarding running production server at all . I hope it's not off topic, as far I faced the lack of information on the issue.
What are important tricks about saving heavy media files in production?
0
0
0
41
44,074,550
2017-05-19T16:09:00.000
0
0
0
0
python,amazon-web-services,etl,aws-glue
44,107,589
1
true
0
0
The 'T' in ETL stands for 'Transform', and aggregation is one of the most common transforms performed. Briefly: yes, ETL can do this for you. The rest depends on your specific needs. Do you need any drill-down? Increasing resolution on zoom, perhaps? This would affect the whole design, but in general, preparing your data for the presentation layer is exactly what ETL is used for.
1
0
0
I haven't been able to find any direct answers, so I thought I'd ask here. Can ETL, say for example AWS Glue, be used to perform aggregations to lower the resolution of data to AVG, MIN, MAX, etc over arbitrary time ranges? e.g. - Given 2000+ data points of outside temperature in the past month, use an ETL job to lower that resolution to 30 data points of daily averages over the past month. (actual use case of such data aside, just an example). The idea is to perform aggregations to lower the resolution of data to make charts, graphs, etc display long time ranges of large data sets more quickly, as we don't need every individual data point that we must then dynamically aggregate on the fly for these charts and graphs. My research so far only suggests that ETL be used for 1 to 1 transformations of data, not 1000 to 1. It seems ETL is used more for transforming data to appropriate structure to store in a db, and not for aggregating over large data sets. Could I use ETL to solve my aggregation needs? This will be on a very large scale, implemented with AWS and Python.
Using ETL for Aggregations
1.2
1
0
588
44,075,041
2017-05-19T16:39:00.000
0
1
0
0
python
44,166,939
1
false
0
0
First attempt. It works, seemingly every time, but I don't know if it's the most efficient way: Take the first and last time stamps and the number of frames to calculate an average time step. Use the average time step and the difference between the target and the beginning timestamp to find an approximate index. Check the approximate timestamp and the two surrounding timestamps against the target. If the target falls between them, take the index with the minimum difference. If not, set the approximate index as the new beginning or end, accordingly, and repeat. A rough sketch of this interpolation-style search is below.
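A rough sketch of that interpolation-style search, under heavy assumptions about the file layout: fixed-size records with an 8-byte little-endian double timestamp at the start of each record, and monotonically increasing timestamps. The record size and format are placeholders.

    import struct

    RECORD_SIZE = 4096      # assumption: bytes per frame record
    TS_FORMAT = "<d"        # assumption: 8-byte little-endian double at record start

    def read_ts(f, index):
        f.seek(index * RECORD_SIZE)
        return struct.unpack(TS_FORMAT, f.read(8))[0]

    def find_closest(f, n_records, target):
        lo, hi = 0, n_records - 1
        t_lo, t_hi = read_ts(f, lo), read_ts(f, hi)
        while hi - lo > 1:
            # Guess an index from the average time step, then narrow the bracket.
            step = (t_hi - t_lo) / (hi - lo)
            guess = lo + int((target - t_lo) / step) if step > 0 else (lo + hi) // 2
            guess = min(max(guess, lo + 1), hi - 1)
            t_guess = read_ts(f, guess)
            if t_guess < target:
                lo, t_lo = guess, t_guess
            else:
                hi, t_hi = guess, t_guess
        # Return whichever of the two bracketing records is closer.
        return lo if abs(t_lo - target) <= abs(t_hi - target) else hi

    # Assumed usage:
    # with open("frames.bin", "rb") as f:
    #     idx = find_closest(f, n_records, target_timestamp)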
1
0
1
I have a large binary file (~4 GB) containing a series of image and time stamp data. I want to find the image that most closely corresponds to a user-given time stamp. There are millions of time stamps in the file, though. In Python 2.7, using seek, read, struct.unpack, it took over 900 seconds just to read all the time stamps into an array. Is there an efficient algorithm for finding the closest value that doesn't require reading all of the values? They monotonically increase, though at very irregular intervals.
Finding closest value in a binary file
0
0
0
40
44,075,134
2017-05-19T16:45:00.000
2
0
1
0
python,python-2.7,atom-editor,todo
44,075,252
3
false
0
0
I use input() calls with a description of the surrounding code and of what needs to be done; they stop execution, which makes it obvious something still needs doing, while still allowing you to continue when you press Enter.
1
0
0
How do I clearly mark and label "To-Do"s directly in my code (using Atom on Mac) for later?
How do I mark and label "To-Do"s?
0.132549
0
0
455
44,076,649
2017-05-19T18:22:00.000
0
0
0
0
python,numpy,keras,lstm,training-data
44,091,300
1
false
0
0
I found the answer to this on the Keras slack from user rocketknight. Use the model.fit_generator function. Define a generator function somewhere within your main python script that "yields" a batch of data. Then call this function in the arguments of the model.fit_generator function.
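A hedged sketch of that pattern: a generator that builds 3D (samples, timesteps, features) batches from a long 2D array on the fly and feeds them to fit_generator. The array shapes, window length, and the model itself are assumptions.

    import numpy as np

    def window_generator(data, targets, window=50, batch_size=32):
        """Yield (X, y) batches of sliding windows from a 2D array (time, features)."""
        while True:  # Keras generators are expected to loop forever
            X, y = [], []
            for start in np.random.randint(0, len(data) - window, size=batch_size):
                X.append(data[start:start + window])
                y.append(targets[start + window])
            yield np.asarray(X), np.asarray(y)

    # Assumed usage with an already-compiled Keras model:
    # model.fit_generator(window_generator(data, targets),
    #                     steps_per_epoch=200, epochs=10)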
1
0
1
I am currently trying to use a "simple" LSTM network implemented through Keras for a summer project. Looking at the example code given, it appears the LSTM code wants a pre-generated 3D numpy array. As the dataset and the associated time interval I want to use are both rather large, it would be very prohibitive for me to load a "complete array" all at once. Is it possible to load the raw dataset and apply the sequencing transform to it as needed by the network (in this case construct the 3D array from x time-interval windows that then increment by 1 each time)? If so, how would you go about doing this? Thanks for any help you can provide!
How-To Generate a 3D Numpy Array On-Demand for an LSTM
0
0
0
160
44,077,099
2017-05-19T18:50:00.000
1
0
1
1
python,windows
44,077,157
1
false
0
0
No, there is no way to set an environment variable that calls a function when addressed. That's simply not something that environment variables can do.
1
0
0
Is it possible to set a environment variable in Windows 7 that will run a python script when called? Couldn't do it when I tried.
Windows environment variable that runs a script
0.197375
0
0
45
44,078,101
2017-05-19T19:55:00.000
4
0
0
1
python,docker
44,078,274
2
false
0
0
If they have docker then you can distribute your whole application as a docker image. This is the main docker use-case.
1
1
0
I am developing a python application (a package) that can be distributed with distutils. I need to share it with someone that does not have python installed. Is it possible to bundle the entire package and distribute it with docker?
Can I distribute a python application with docker?
0.379949
0
0
317
44,078,542
2017-05-19T20:29:00.000
1
0
1
0
python,memory
44,078,638
1
true
0
0
Objects are not copied, but referenced. If your objects are small (e.g. integers), the overhead of a list of tuples or a dict is significant. If your objects are large (e.g. very long unique strings), the overhead is much less, compared to the size of the objects, so the memory usage will not increase much due to creation of another dict / list of the same objects.
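A tiny demonstration of the point about references: both containers hold the very same objects, so the objects themselves are not duplicated; only the (comparatively small) container structures add memory.

    class Payload:
        pass

    objs = {i: Payload() for i in range(1, 1001)}
    reverse = {(obj, i): i for i, obj in objs.items()}

    # The same object appears in both containers; no copy is made.
    print(objs[1] is next(obj for (obj, i) in reverse if i == 1))  # True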
1
1
0
Suppose I have 1000 different objects of the same class instantiated, and I assign it to a dictionary whose keys are integers from 1 to 1000, and whose values are those 1000 objects. Now, I create another dictionary, whose keys are tuples (obj1, 1), (obj2,2), etc. The obj's being those same 1000 objects. And whose values are 1 to 1000. Does the existence of those 2 dictionaries mean that memory usage will be doubled, because 1000 objects are in each dict's key and values? They should NOT be, right? Because we are not creating new objects, we are merely assigning references to those same objects. So I can have 1000 similar dictionaries with those objects as either values or as keys (part of a tuple), and have no significant increases in memory usage. Is that right?
What really takes up memory in Python?
1.2
0
0
75
44,078,888
2017-05-19T20:58:00.000
0
0
0
0
python,shell,clickable
63,865,129
2
false
1
0
Sorry. No. IDLE is the most basic of editors and the shell is just that.
1
0
0
Say, I have a list full of html links that looks something like this: https://www.nytimes.com/2017/05/19/realestate/they-can-afford-to-buy-but-they-would-rather-rent.html When I run a script in Python 3.6 Idle, I get the list as an output in Python 3.6 shell; however, they are not clickable. Is there a simple way to make them clickable in the shell? I've googled around but I can't seem to find anything involving the Python shell. Thank you.
Clickable html links in Python 3.6 shell?
0
0
0
6,771
44,080,884
2017-05-20T01:03:00.000
0
0
1
0
python,visual-studio,libraries
44,081,043
1
true
0
0
An IDE makes no difference to the Python libraries that are available to you; the libraries that were available to you in PyDev are still on your machine somewhere. It's possible this confusion arose because your IDE didn't automatically detect those libraries, as PyDev might have done, but all you need to do is show it where they are!
1
0
0
I switched over to Visual Studio from PyDev and I was wondering whether there would be any difference in the amount of libraries available?
Do the amount of libraries for Python differ by IDE?
1.2
0
0
39
44,082,419
2017-05-20T05:49:00.000
0
0
1
1
python,linux
44,082,982
2
true
0
0
From what I understood, you have two versions of Python. One is in /usr/local/bin/python and the other is in /usr/bin/python. In your current configuration the default python points to /usr/local/bin/python, and you want to use the one in /usr/bin. Update your ~/.bashrc and append this line at the end: alias python=/usr/bin/python Then open a new terminal, or run source ~/.bashrc in the current one. Run which python to see the location of the python executable; it should now show /usr/bin/python. Also, if you want to get packages into your current python (i.e. /usr/local/bin/python), you can use pip with that particular Python version. Find the pip location using which pip. Assuming the pip location is /usr/local/bin/pip, run the install through that interpreter: /usr/local/bin/python /usr/local/bin/pip install
1
0
0
As I understood, I have two versions of python 2.7 installed on my machine. One is located in /usr/bin and another one is in /usr/local/bin. When I type python in the shell, it calls one in /usr/local/bin, and it doesn't have access to all the packages installed using apt-get and pip. I have tried to set up an alias, but when I type sudo python it still calls one in /usr/local/bin. I want to always use one in /usr/bin, since I have all the packages there. How do I do that?
Messed up with two python versions on linux
1.2
0
0
420
44,087,485
2017-05-20T15:10:00.000
0
0
1
0
python,global-variables,coroutine
70,409,164
2
false
0
0
You can make your own thread-safe class that wraps the state and use it like an in-memory data store. A small asyncio-flavoured sketch follows below.
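A small sketch of that idea for coroutines, using an asyncio.Lock rather than a threading lock since discord.py runs everything on one event loop; the attribute name and default value are assumptions.

    import asyncio

    class SharedState:
        """Wraps a mutable value so coroutines update it through one place."""
        def __init__(self, time_offset=0.0):   # assumed initial value
            self._time_offset = time_offset
            self._lock = asyncio.Lock()

        async def get_offset(self):
            async with self._lock:
                return self._time_offset

        async def set_offset(self, value):
            async with self._lock:
                self._time_offset = value

    state = SharedState()

    # Inside any coroutine/command handler:
    # offset = await state.get_offset()
    # await state.set_offset(new_value)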
1
8
0
I'm building a discord bot using the discord.py library - all user interaction therefore necessarily takes place in coroutines, defined with async and called with await. One of my functions is going to require a saved state variable - a time offset used in a calculation that will occasionally need to be updated manually by users. I can't use a normal global variable in the main thread - the coroutines can't see them. What's a sensible design pattern for preserving a state variable between multiple coroutines?
Using global state variables in coroutines?
0
0
0
678
44,087,849
2017-05-20T15:44:00.000
0
0
0
0
python,eclipse,apache-kafka,kafka-producer-api
44,092,800
3
false
1
0
Please include the Kafka client library within the WAR file of the Java application that you are deploying to Tomcat.
3
0
0
After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand. I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported. I am not sure if the problem is with Kafka and the compatibility with Eclipse or Java SDK (3.6), it could be?. Anyone knows the minimum required version of Java for Kafka? Also, I have found that with Kafka is really used Scala but I want to know if I can use this Eclipse IDE version for not change this. Another solution that I found is to use a Python script called from the Java application, but I have no way to call it from there since I follow several tutorials but then nothing works, but I have to continue on this because it seems an easier option. I have developed the .py script and works with the Kafka server, now I have to found the solution to exchange variables from Java and Python. If anyone knows any good tutorial for this, please, let me know. After this resume of my days and after hitting my head with the walls, maybe someone has found this error previously and can help me to find the solution, I really appreciate it and sorry for the long history.
Send message to a kafka topic using java
0
0
1
370
44,087,849
2017-05-20T15:44:00.000
0
0
0
0
python,eclipse,apache-kafka,kafka-producer-api
44,109,016
3
false
1
0
Please use org.apache.kafka.clients.producer.KafkaProducer rather than kafka.producer.Producer (which is the old client API) and make sure you have the Kafka client library on the classpath. The client library is entirely in Java. It's the old API that's written in scala, as is the server-side code. You don't need to import the server library in your code or add it to the classpath if you use the new client API.
3
0
0
After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand. I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported. I am not sure if the problem is with Kafka and the compatibility with Eclipse or Java SDK (3.6), it could be?. Anyone knows the minimum required version of Java for Kafka? Also, I have found that with Kafka is really used Scala but I want to know if I can use this Eclipse IDE version for not change this. Another solution that I found is to use a Python script called from the Java application, but I have no way to call it from there since I follow several tutorials but then nothing works, but I have to continue on this because it seems an easier option. I have developed the .py script and works with the Kafka server, now I have to found the solution to exchange variables from Java and Python. If anyone knows any good tutorial for this, please, let me know. After this resume of my days and after hitting my head with the walls, maybe someone has found this error previously and can help me to find the solution, I really appreciate it and sorry for the long history.
Send message to a kafka topic using java
0
0
1
370
44,087,849
2017-05-20T15:44:00.000
0
0
0
0
python,eclipse,apache-kafka,kafka-producer-api
44,161,576
3
true
1
0
In the end, the problem was that the library was not added correctly. I had to add it in the build.xml file, importing the library there. Maybe this is useful for people who use an old Eclipse version. Now it finds the library, but I have to update the Java version, which is another matter. So this issue is solved.
3
0
0
After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand. I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported. I am not sure if the problem is with Kafka and the compatibility with Eclipse or Java SDK (3.6), it could be?. Anyone knows the minimum required version of Java for Kafka? Also, I have found that with Kafka is really used Scala but I want to know if I can use this Eclipse IDE version for not change this. Another solution that I found is to use a Python script called from the Java application, but I have no way to call it from there since I follow several tutorials but then nothing works, but I have to continue on this because it seems an easier option. I have developed the .py script and works with the Kafka server, now I have to found the solution to exchange variables from Java and Python. If anyone knows any good tutorial for this, please, let me know. After this resume of my days and after hitting my head with the walls, maybe someone has found this error previously and can help me to find the solution, I really appreciate it and sorry for the long history.
Send message to a kafka topic using java
1.2
0
1
370
44,089,046
2017-05-20T17:43:00.000
0
0
0
1
python,terminal,filepath
44,089,362
1
true
0
0
Two problems are going on here: 1) That path is probably correct; you're just not using find correctly. You need to do sudo find / -name "*.app" (note the quotes around "*.app"). From the man page: -iname pattern Like -name, but the match is case insensitive. For example, the patterns 'fo*' and 'F??' match the file names 'Foo', 'FOO', 'foo', 'fOo', etc. In these patterns, unlike filename expansion by the shell, an initial '.' can be matched by '*'. That is, find -name *bar will match the file '.foobar'. Please note that you should quote patterns as a matter of course, otherwise the shell will expand any wildcard characters in them. 2) Try using subprocess.Popen(["/Applications/Google Earth.app"], shell=True). Don't worry too much about security problems with shell=True unless you are taking user input. If it's just hardcoded to use your path to Google Earth, you're fine. If you have user input in your logic, however, DO NOT use shell=True. shell=True just means that shell metacharacters will work if they are in the command. The reason you need it is that subprocess.Popen() will have trouble parsing your command since there is a space in the path. Alternatively, you could just use os.system("/Applications/Google Earth.app"). A hedged launch sketch follows below.
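For completeness, a sketch that avoids shell parsing entirely by using the macOS open command to launch the bundle; this is a different route than the shell=True suggestion above, and the path is the one from the question and is assumed to exist.

    import subprocess

    # "open" is the standard macOS launcher for .app bundles; passing the
    # arguments as a list avoids any issue with the space in the path.
    subprocess.Popen(["open", "/Applications/Google Earth.app"])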
1
0
0
I am trying to build a script that opens the Google Earth.app which I can see in Finder, but when I go to the applications folder it is not present. I looked at some other posts to find the filepath of Google Earth.app via sudo find / -iname *.app, which was /Applications/Google Earth.app. When I try and find this file I get 'No such file or directory'. Could some one please explain why you applications that are in Finder don't show up in terminal? Also how would I find the correct file path so I can use subprocess.Popen() to open Google Earth in Python.
Trying to open Google Earth via a Script, file path nonexistent
1.2
0
0
103
44,089,424
2017-05-20T18:28:00.000
0
0
1
0
python,opencv,import
48,517,382
1
false
0
0
To import OpenCV in your Python project, use: import cv2
1
0
0
I have problem about importing OpenCV to my project. Not actually problem, but I didn't find how to do that. I know it's trivial, but I really don't know. I have opencv downloaded and compiled in my home directory. I know how to import it in virtualenv, but how to import it directly from original - non virtualenv python2.7?
How to import opencv in python (not from virtualenv) [UBUNTU]
0
0
0
71
44,090,457
2017-05-20T20:19:00.000
0
0
0
0
telegram-bot,python-telegram-bot
44,094,915
1
false
1
0
You have lots of options. First, you need to store all chat_ids; you can do that in a database or a simple text file. Then you need a trigger in order to start sending messages. I'm not familiar with your technology stack, but I just create a simple service to do it.
1
1
0
I've built a small telegram bot using python-telegram-bot. When a conversation is started,I add a periodical job to the job queue and then message back to the user every X minutes. Problem is when my bot goes offline (maintenance, failures, etc), the jobqueue is lost and clients do not receive updates anymore unless they send /start again I could maybe store all chat_ids in a persistent queue and restore them at startup but how do I send a message without responding to an update ?
Restoring job-queue between telegram-bot restarts
0
0
1
778
44,094,933
2017-05-21T08:50:00.000
0
0
0
0
python,django,database,data-visualization,data-mining
44,206,133
1
false
1
0
This depends on what functionality you can offer. Many very interesting data mining tools will only read raw data files, so storing the data in a database does not help you at all. But then you won't want to run them "on the web" anyway, as they easily eat all your resources. Either way, first get your requirements straight and explicit, then settle on the analysis tools, and then decide on the storage backend depending on what those tools can use.
1
0
0
I am building a web application that allows users to login and upload data files that would eventually be used to perform data visualisation and data mining features - Imagine a SAS EG/Orange equivalent on the web. What are the best practices to store these files (in a database or on file) to facilitate efficient retrieval and processing of the data and the pros and cons of each method?
Storing data files on Django application
0
1
0
299
44,096,321
2017-05-21T11:28:00.000
1
0
1
0
python,python-3.x,scons
44,202,387
1
false
0
0
The fact that SCons runs on Python 2 doesn't matter. Your own project can be Python 3, Python 4, or anything your heart desires. SCons doesn't care; you just need to remember that you're in Python 2 land when you're editing your SConstruct files.
1
1
0
I am newbie at Python and am learning to code with Python 3 (which I plan to keep as my default version). Another software, I intend to use needs Python 2 for compiling (compilation with SCons). Is there a way around this i.e. keeping Python 3 while still compiling with SCons. Can virtualenv do this?
compile SCons in python3
0.197375
0
0
857
44,097,436
2017-05-21T13:25:00.000
1
0
0
1
python,google-app-engine
44,226,486
1
true
1
0
To my knowledge you cannot edit the codebase once it's uploaded; it can only be modified locally. If you want to access App Engine, have a look at appengine.google.com. If you look under "Versions" you'll see the links to the version(s) of your app.
1
0
0
Hello I did a hello world python application in local host and then I deployed it to the google app engine using google application launcher but I don't know where access it inside the google app engine from inside the dashboard.And is there any way through which I can edit the hello world once its uploaded to app engine. I went through their tutorial but I am stuck right here .
[Python ]google app engine deploying the file to server
1.2
0
0
22
44,098,235
2017-05-21T14:50:00.000
4
0
0
0
python,sqlite
44,098,371
1
true
0
0
Running SELECT name FROM sqlite_master while connected to your database will give you all the table names. You can then do a fetchall and check the size, or even the contents, of the list. No try/except is necessary (the list will simply be empty if the database doesn't contain any tables). A short sketch follows below.
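A short sketch of that check; the database file name is an assumption.

    import sqlite3

    conn = sqlite3.connect("mydata.db")   # assumed database file
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()

    if not tables:
        print("Database is empty (no tables).")
    else:
        print("Tables:", [name for (name,) in tables])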
1
2
0
I am using Python 2.7 and SQLite3. When I starting work with DB I want to check - does my database is empty or on not. I mean does it already have any tables or not. My idea is to use the simple SELECT from any table. And wrap this select in try:exception block. So if exception was raised then my DB is empty. Maybe someone know the better way for checking?
How could I check - does my SQLite3 database is empty?
1.2
1
0
1,822
44,099,095
2017-05-21T16:17:00.000
0
0
0
0
python,machine-learning,nlp,cluster-analysis,edit-distance
44,106,903
1
false
0
0
There is only a limited set of POS tags. Rather than using edit distance on the tag strings, compute a POS-to-POS similarity matrix just once. You may even want to edit this matrix as desired, e.g. to make two POS tags effectively the same, or to increase the difference between two tags. Store it in a numpy array, convert all your sequences to indexes, and then compute similarities using that lookup table. For performance reasons, use numpy where possible, and write the performance-critical code in Cython, because the Python interpreter is very slow. A sketch of the index-and-lookup idea follows below.
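A sketch of the index-and-lookup idea; the tag set and the way the similarity matrix is filled are assumptions (here it starts from an identity matrix that you would then edit).

    import numpy as np

    # Assumed corpus: a list of POS-tag sequences.
    corpus = [["CC", "DT", "VBZ", "RB", "JJ"],
              ["CC", "DT", "VB", "RB", "JJ"]]

    tags = sorted({t for sent in corpus for t in sent})
    index = {t: i for i, t in enumerate(tags)}

    # POS-to-POS similarity matrix; start from identity and edit as desired,
    # e.g. sim[index["VB"], index["VBZ"]] = 0.9 to treat them as near-equal.
    sim = np.eye(len(tags))

    # Encode sentences once as integer arrays for fast lookups.
    encoded = [np.array([index[t] for t in sent]) for sent in corpus]

    # Position-wise similarity of two equal-length sequences via fancy indexing:
    a, b = encoded[0], encoded[1]
    print(sim[a, b].mean())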
1
2
1
I have a large set (36k sentence) of sentences (text list) and their POS tags (POS list), and I'd like to group/cluster the elements in the POS list using edit distance/Levenshtein: (e.g Sentx POS tags= [CC DT VBZ RB JJ], Senty POS tags= [CC DT VBZ RB JJ] ) are in cluster edit distance =0, while ([CC DT VBZ RB JJ], [CC DT VB RB JJ]) are in cluster edit distance =1. I understand how the clustering algorithms work but I am confused how to approach such a problem in python and how to store the clusters in data structures so that I can retrieve them easily. I tried to create a matrix (measuring the distance of each sentence with all the sentences in the corpus) but it takes very long to be processed.
How to group sentences by edit distance?
0
0
0
520
44,100,906
2017-05-21T19:23:00.000
-1
0
1
0
python,python-3.x
44,100,972
1
false
0
0
Try this: from bs4 import BeautifulSoup. Edit: this was already answered by @jonsharpe and @Vinícius Aguiar in the comments under the question.
1
1
0
I am doing a python webpage scraper . Some tutorial told me to use this package: BeautifulSoup. So I installed it using pip. Then, in my script, I try to import BeautifulSoup as bs. But I was warned that no module named BeautifulSoup. Is there a reliable way to get module name out of an installed package?
I installed a pip package, how to know the module name to import?
-0.197375
0
0
91
44,101,319
2017-05-21T20:12:00.000
1
0
0
0
python,django,sqlite
44,101,694
1
false
1
0
You should use manage.py, not django-admin, to run management commands inside your project.
1
1
0
Slighty new to django.. I got this error when attempting to do "django-admin makemigrations" django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. Also get this error when attempting to load my home page: OperationalError at /home/ no such table: home_post
Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured
0.197375
0
0
2,203
44,102,086
2017-05-21T21:42:00.000
0
0
1
1
python,module
44,102,104
2
false
0
0
Check whether pip36, or more likely pip3, is a command you can run. Often the pip command corresponds to the first installed Python version, so if you install another one later, its pip gets a suffix according to its version. If that is the case, then you'll want to do pip36 (or pip3) install moduleXYZ.
1
0
0
I have on my Fedora 20 additionally to 2.7 a python3.6 version installed. When I run a script with the 3.6 version it's missing the requests module. When I try to install it with the pip command it says it's already there. So, how can I install this module in python3.6? Any hints? Thanks
How to install a module to one of many pythons in Fedora 20
0
0
0
311
44,108,625
2017-05-22T08:55:00.000
0
0
0
1
python,google-app-engine,terminal
44,108,967
1
true
0
0
That instance of the terminal will be busy doing the thing you asked it to. But you can just open a new window or tab.
1
0
0
How come I can't do anything else on terminal when It's running a localhost. Like I run a google app engine on it and make a new local host, I for example try to see what directory I'm in or even change directories, it doesn't respond to any of my requests? Thank you
Getting Terminal to do something else while running a localhost on it.
1.2
0
0
35
44,109,334
2017-05-22T09:26:00.000
0
0
0
0
python,django
44,109,481
1
false
1
0
You can use the blue-green deployment approach. In this approach there are two identical servers: one is blue, the other is green. At any time only one of the environments is live, with the live environment serving all production traffic. As you prepare a new release of your software, you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment; the blue one is now idle.
1
3
0
Things being used: Supervisor to run uwsgi uwsgi to bring up my Django non-rel 1.6 based ML app (Django Upgrade in process) I am using Uwsgi to start my Django ML based app. But somehow as it has to load a lot of binaries for initialization, it takes about 20-30 seconds for supervisor to restart and load new code. How can I reduce this time? or is there any other way to run the Django app so as to reload quickly when code changes? With ZERO downtime? as Nginx will start throwing 5xx if it can't connect to Django.
Zero Downtime Code Update for Django App
0
0
0
614
44,109,867
2017-05-22T09:53:00.000
0
0
1
0
python,visual-studio-code
44,113,475
2
false
0
0
I got it. Ctrl+Shift+P (Command palette) -> Python: Select Workspace interpreter
1
1
0
I am trying to start using Visual Studio Code with Python/Jupyter extensions by Don Jayamanne. I have both Python 3.5 and 3.6 kernels on my system, but I am unable to make them both visible to those extensions. Only the system default kernel is available in VS Code. How to make sure VS Code and Python extensions see all available Python kernels and allow me to choose from them?
Using multiple Python kernels in Visual Studio Code
0
0
0
11,024
44,113,128
2017-05-22T12:40:00.000
2
0
0
0
python-3.x,word2vec,word-embedding
45,963,156
1
true
0
0
Not really, skip-gram and CBOW are simply the names of the two Word2vec models. They are shallow neural networks that generate word embeddings by predicting a context from a word and vice versa, and then treating the output of the hidden layer as the vector/representation. GloVe uses a different approach, making use of the global statistics of a corpus by training on a co-occurrence matrix, rather than local context windows.
1
1
1
I'm trying different word embeddings methods, in order to pick the approache that works the best for me. I tried word2vec and FastText. Now, I would like to try Glove. In both word2vec and FastText, there is two versions: Skip-gram (predict context from word) and CBOW (predict word from context). But in Glove python package, there is no parameter that enables you to choose whether you want to use skipg-gram or Cbow. Given that Glove does not work the same way as w2v, I'm wondering: Does it make sense to talk about skip-gram and cbow when using The Glove method ? Thanks in Advance
Does it make sense to talk about skip-gram and cbow when using The Glove method?
1.2
0
0
341
44,115,291
2017-05-22T14:19:00.000
2
1
0
0
python,python-2.7,python-3.x
44,116,325
2
true
0
0
The main thing here is to decide on a connection design and to choose a protocol: will you have a persistent connection to your server, or connect each time new data is ready? Will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate serving service? The latter would be the more secure way if other people will be connecting to nginx to view sites etc. Write or use another server running on another port, for example another nginx process just for this. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection. On the client side, build a packet every x seconds with all the data (pickle.dumps(), json or something similar), then connect to your port with your credentials and pass the packet. A Python script can wait for it there. Alternatively, you can write a socket server from scratch in Python (not extra hard) to wait for your packets. The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you desire or need to. I don't think that is necessary, though, and coding recovery from broken connections can become bulky. No: just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data. Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security and then pass only data. Do not worry about speed: sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want in as short an interval as you need; nothing short of a DoS attack will impact it much. On Windows it is better to use a finished server, because Windows sometimes does not release a socket in time, so you may be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, address reuse and some flow control will be needed). As long as your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python or modify and run one of the examples from the internet. That's just me, though. A bare-bones client sketch follows below.
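A bare-bones sketch of the client side of the plain-socket option, without the SSL and authentication layers discussed above (left out only for brevity); the host, port, the newline-delimited JSON framing, and the read_sensors() function are assumptions.

    import json
    import socket
    import time

    HOST, PORT = "192.0.2.10", 9000   # assumed server address

    def send_readings(readings):
        """Send ~100 floats as one newline-terminated JSON line."""
        payload = json.dumps(readings).encode("utf-8") + b"\n"
        with socket.create_connection((HOST, PORT), timeout=5) as conn:
            conn.sendall(payload)

    while True:
        readings = read_sensors()     # assumed function returning a list of floats
        send_readings(readings)
        time.sleep(1)                 # 1-30 second cadence from the question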
1
0
0
Key points: I need to send roughly ~100 float numbers every 1-30 seconds from one machine to another. The first machine is catching those values through sensors connected to it. The second machine is listening for them, passing them to an http server (nginx), a telegram bot and another program sending emails with alerts. How would you do this and why? Please be accurate. It's the first time I work with sockets and with python, but I'm confident I can do this. Just give me crucial details, lighten me up! Some small portion (a few rows) of the core would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
Efficient way to send results every 1-30 seconds from one machine to another
1.2
0
0
274
44,116,020
2017-05-22T14:50:00.000
0
0
0
1
python,rabbitmq
44,122,840
1
false
0
0
It seems from the description you provide that the issue is due to your servers connecting to multiple queues at the same time. Since your prefetch count is set to 1 per queue, a server connected to 3 queues will consume up to 3 messages even though it will only be processing one at a time (per your description of the processing). It's not clear from your question whether there is a need for multiple queues or whether you could have all tasks end up in a single queue: do all the servers consume all the tasks? Do you need to be able to stop the processing of certain tasks? If you need to be able to "stop" the processing of certain tasks, or to control the distribution of processing across your servers, you'll need to manage the consumers in your servers so that only one is active at a time (otherwise you're going to block/consume some messages due to the prefetch of 1). If you do not need to control the processing of the various tasks, it would be far simpler to have all of the messages end up in a single queue, with a single consumer per server on that queue set up with a prefetch of one. A minimal consumer sketch follows below.
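A minimal sketch of the single-queue consumer described above, using the pika 1.x API; the queue name and broker address are assumptions, and the processing is a placeholder.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="tasks", durable=True)   # assumed queue name

    # Only one unacknowledged message per consumer: the server works on a
    # single task at a time and the broker holds the rest.
    channel.basic_qos(prefetch_count=1)

    def handle(ch, method, properties, body):
        run_long_task(body)                              # placeholder for the real work
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="tasks", on_message_callback=handle)
    channel.start_consuming()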
1
0
0
I'm using RabbitMQ to manage multiple servers executing long lasting tasks. Each server can listen to one or more queues, but each server should process only one task at a time. Each time I start a consumer in a server, I configure it with channel.basic_qos(prefetch_count=1) so that only one tasks is processed for the respective queue. Suppose we have: - 2 queues: task1, task2. - 2 servers: server1, server2. - Both servers work with task1 and task2. If the next messages are produced at the same time: - messageA for tasks1 - messageB for tasks2 - messageC for tasks1 What I expect: - messageA gets processed by server1 - messageB gets processed by server2 . messageC stays queued until one of the servers is ready (finishes its current task). What I actually get: - messageA gets processed by worker1 - messageB gets processed by worker2 - messageC gets processed by worker2 (WRONG) I do not start consumers at the same time. In fact, working tasks are constantly being turned on/off in each server. Most of the time servers work with different queues (server1: tasks1, tasks2, task3; server2: tasks1, tasks5; server3: tasks2, tasks5; and so on). How could I manage to do this? EDIT Based on Olivier's answer: Tasks are different. Each server is able to handle some of the tasks, not all of them. A server could process only one task at a time. I tried using exchanges with routing_keys, but I found two problems: all of the servers binded to the routing key task_i would process its tasks (I need it to be processed only once), and if there is no server binded to task_i, then its messages are dropped (I need to remain queued until some server can handle it).
RabbitMQ: multiple queues/one (long) task at a time
0
0
0
539
44,117,162
2017-05-22T15:50:00.000
1
0
1
1
python,process
44,117,271
1
true
0
0
I suppose figuring out why your own app is using all the CPU would be your first priority. :) My psychic powers suggest that you are polling the system constantly without sleeping. Did you consider sleeping for half a second after enumerating processes? Using CPU metrics isn't the best way to accomplish what you are after. You didn't mention what OS you are on, but if you are on Windows, then you want to be tracking the window in the foreground, as that is what the user is interacting with. By getting the foreground HWND, you can likely map it back to a process id and ultimately a process name. Not sure about Mac, but I bet there's an equivalent call.
1
0
0
Goal To track the program real time that user is interacting with. Expected output To get the information of current process that user is interacting with. What I've done Used psutil to list all the process and find the process that uses CPU most. But it resulted in returning python.exe, which was using most CPU because it was counting processes. Question Is there any other way around to do the task without this kind of mistake? Or any keyword that I can google for would be nice, too.
Recognizing active process
1.2
0
0
25
44,121,989
2017-05-22T20:53:00.000
-1
0
0
0
django,python-3.x,amazon-s3,boto
44,122,528
1
true
1
0
Found the issue. django-storages-redux was temporarily replacing django-storages while the latter's development was interrupted. Now the django-storages team has resumed supporting it. That means the correct combination to use is django-storages + boto3. A hedged settings sketch follows below.
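For reference, a hedged settings.py sketch for the django-storages + boto3 combination; the bucket name and region are assumptions, and credentials are often better supplied through the environment or IAM roles.

    # settings.py (excerpt)
    INSTALLED_APPS += ["storages"]

    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

    AWS_STORAGE_BUCKET_NAME = "my-example-bucket"   # assumed bucket name
    AWS_S3_REGION_NAME = "us-east-1"                # assumed region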
1
0
0
I was running Django 1.11 with Python 3.5 and I decided to upgrade to Python 3.6. Most things worked well, but I am having issues connection to AWS S3. I know that they have a new boto version boto3 and that django-storages is a little outdated, so now there is django-storages-redux. I've been trying multiple combinations of boto/boto3 and django-storages-redux/django-storages to see if it works. But I'm getting a lot of erros, from SSL connection failing to the whole website being offline due to server errors. The newest is my website throwing a 400 Bad Request to all urls. My app does run on Python 3.5, so I'm confident that the issue is around collectstatic and S3. Is there anybody here who made a similar update work and tell me what configuration was used? Thanks a lot!
Django: Upgrading to python3.6 with Amazon S3
1.2
1
0
149
44,123,641
2017-05-22T23:41:00.000
5
0
1
0
python,google-assistant-sdk,google-assist-api
44,134,868
2
false
0
0
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly. Speaking to the engineers on this subject while at Google I/O, they indicated that there are some technical complications on their end to doing this, but they understand the use cases. They need to see questions like this to know that people want the feature. Hopefully it will make it into an upcoming Developer Preview.
1
3
0
I am using the python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string calling the method resp.result.spoken_request_text from \googlesamples\assistant\__main__.py and I have the answer as an audio stream from the assistant API with the method resp.audio_out.audio_data also from \googlesamples\assistant\__main__.py I would like to know if it is possible to have the answer from the service as a string as well (hoping it is available in the service definition or that it could be included), and how I could access/request the answer as string. Thanks in advance.
How to receive answer from Google Assistant as a String, not as an audio stream
0.462117
0
1
1,474
44,123,678
2017-05-22T23:47:00.000
0
0
0
0
python,multithreading,sqlite,multiprocessing
44,123,745
3
false
0
0
One solution could be to acquire a lock in your program before accessing the database. That way, the threads or processes will wait for the others to finish inserting a link before performing their own request. A short sketch follows below.
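A short sketch of that idea with threading.Lock; check_same_thread=False lets the single connection be shared across threads, and the table schema and database name are assumptions.

    import sqlite3
    import threading

    lock = threading.Lock()
    conn = sqlite3.connect("crawler.db", check_same_thread=False)  # assumed file name
    conn.execute("CREATE TABLE IF NOT EXISTS links (url TEXT PRIMARY KEY)")

    def add_link_if_new(url):
        """Insert the URL only if it is not already stored; serialized by the lock."""
        with lock:
            exists = conn.execute(
                "SELECT 1 FROM links WHERE url = ?", (url,)
            ).fetchone()
            if exists is None:
                conn.execute("INSERT INTO links (url) VALUES (?)", (url,))
                conn.commit()
            return exists is None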
1
3
0
I'm making a web crawler in Python that collects redirects/links, adds them to a database, and enters them as a new row if the link doesn't already exist. I want like to use multi-threading but having trouble because I have to check in real time if there is an entry with a given URL. I was initially using sqlite3 but realised I can't use it simultaneously on different threads. I don't really want to use MySQL (or something similar) as it needs more disk space and runs as separate server. Is there anyway to make sqlite3 work with multiple threads?
Getting SQLite3 to work with multiple threads
0
1
0
7,557
44,124,471
2017-05-23T01:40:00.000
0
0
0
0
python,machine-learning,scikit-learn
54,380,982
2
false
0
0
Another solution is to do a bivariate analysis of the categorical variable against the target variable. What you will get is a picture of how each level affects the target. Once you have this, you can combine the levels that have a similar effect. This will help you reduce the number of levels while making sure each remaining level has a significant impact. A pandas sketch of this grouping follows below.
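A pandas sketch of that grouping idea: compute the mean target per job name, then bucket jobs whose target rates are similar. The column names and the number of buckets are assumptions, as is the existence of a DataFrame df.

    import pandas as pd

    # Assumed columns: "job_name" (categorical) and "target" (numeric/binary label).
    rates = df.groupby("job_name")["target"].mean()

    # Bucket levels with similar target rates into a handful of groups.
    buckets = pd.qcut(rates, q=5, labels=False, duplicates="drop")

    # Replace the high-cardinality column with the small grouped one,
    # which is then easy to one-hot encode.
    df["job_group"] = df["job_name"].map(buckets)
    encoded = pd.get_dummies(df["job_group"], prefix="job_group")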
1
0
1
guys. I have a large data set (60k samples with 50 features). One of this features (which is really relevant for me) is job names. There are many jobs names that I'd like to encode to fit in some models, like linear regression or SVCs. However, I don't know how to handle them. I tried to use pandas dummy variables and Scikit-learn One-hot Encoding, but it generate many features that I may not be encounter on test set. I tried to use the scikit-learn LabelEncoder(), but I also got some errors when I was encoding the variables float() > str() error, for example. What would you guys recommend me to handle with this several categorical features? Thank you all.
How to encode categorical with many levels on scikit-learn?
0
0
0
1,235
44,133,280
2017-05-23T11:17:00.000
0
0
0
0
python,string,pandas,split
44,134,880
1
true
0
0
OK, just solved it: with df.shape I found out what the dimensions are and then ran a for loop: for i in range(1, x): df[df.columns[i]] = df[df.columns[i]].str.split('/').str[-1] If you have any more efficient ways, let me know :) (A name-based variant is sketched below.)
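Since the question asks for columns whose names start with "line", a name-based variant may be slightly more direct than indexing by position; this is a sketch under the assumption that df is the DataFrame in question.

    # Select only the columns whose names start with "line" and keep the part
    # after the last "/" (e.g. "10 MW / color" -> "color").
    line_cols = [c for c in df.columns if str(c).startswith("line")]
    for col in line_cols:
        df[col] = df[col].str.split("/").str[-1].str.strip()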
1
0
1
I am currently in the phase of data preparation and have a certain issue I would like to make easy. The content of my columns: 10 MW / color. All the columns which have this content are named with line nr. [int] or a [str] What I want to display and which is the data of interest is the color. What I did was following: df['columnname'] = df['columnname'].str.split('/').str[-1] The problem which occurs is that this operation should be done on all columns which names start with the string "line". How could I do this? I thought about doing it with a for-loop or a while-loop but I am quite unsure how to do the referencing in the data frame then since I am a nooby in python for now. Thanks for your help!
Strip certain content of columns in multiple columns
1.2
0
0
50
44,139,153
2017-05-23T15:30:00.000
0
0
0
1
python-3.x,celery,aiohttp
51,969,202
1
false
1
0
If your tasks are not CPU-heavy, then yes, it is good practice. But if they are, you need to move them to a separate service or use run_in_executor(). Otherwise your aiohttp event loop will be blocked by these tasks and the server will not be able to accept new requests. A small handler sketch follows below.
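A small sketch of an aiohttp handler that returns 202 immediately and pushes the heavy work onto an executor; the job function and route are assumptions.

    import asyncio
    from aiohttp import web

    def heavy_job(params):
        """Placeholder for the CPU-heavy / blocking database work."""
        ...

    async def submit(request):
        params = await request.json()
        loop = asyncio.get_event_loop()
        # Run the blocking job in the default thread pool so the event loop
        # stays free to accept new requests.
        loop.run_in_executor(None, heavy_job, params)
        return web.Response(status=202)

    app = web.Application()
    app.router.add_post("/jobs", submit)    # assumed route
    # web.run_app(app)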
1
2
0
I have a web service that accepts post requests. A post request specifies a specific job to be executed in the background, that modifies a database used for later analysis. The sender of the request does not care about the result, and only needs to receive a 202 acknowledgment from the web service. How it was implemented so far: Flask Web service will get the http request , and add the necessary parameters to the task queue (rq workers), and return back an acknowledgement. A separate rq worker process listens on the queue and processes the job. We have now switched to aiohttp, and realized that the web service can now schedule the actual job request in its own event loop, by using the aiohttp.ensure_future() method. This however blurs the lines between the web-server and the task queue. On the positive side, it eliminates the need of having to manage the rq workers. Is this considered a good practice?
Background processes in the aiohttp event loop
0
0
0
381
44,139,772
2017-05-23T16:00:00.000
0
0
0
0
python,django,database,postgresql,orm
44,139,838
1
false
1
0
In your case, I would rather go with Django's internal implementation and use the Django ORM, as you will not need to worry about handling connections and the different exceptions that may arise in your own DAO-style implementation. Given your requirement of accessing user databases, there is still overhead for individual users to create a database and set up something to connect with your codebase. So I think sticking with Django is the sounder choice.
1
0
0
I have pretty simple model. User defines url and database name for his own Postgres server. My django backend fetches some info from client DB to make some calculations, analytics and draw some graphs. How to handle connections? Create new one when client opens a page, or keep connections alive all the time?(about 250-300 possible clients) Can I use Django ORM or smth like SQLAlchemy? Or even psycopg library? Does anyone tackle such a problem before? Thanks
Handle connections to user defined DB in Django
0
1
0
77
44,140,535
2017-05-23T16:39:00.000
0
0
0
1
python
44,140,659
1
false
0
0
The thing to remember is that you run the script as your own user, but startup mechanisms like cron do not, so you need to: Ensure that the executable flags are set for all users and that the script is in a directory that everybody has access to. Use the absolute path for everything, including the script. Specify what to run it with, again with the absolute path.
1
0
0
I am from electrical engineering and currently working on a project using UP-Board, I have attached LEDs, switch, Webcam, USB flash drive with it. I have created an executable script that I want to run at startup. when I try to run the script in terminal using the code sudo /etc/init.d/testRun start it runs perfectly. Now when I write this command in terminal sudo update-rc.d testRun defaults to register the script to be run at startup it gives me the following error insserv: warning: script 'testRun' missing LSB tags and overrides Please guide me how to resolve this? I am from Electrical engineering background, so novice in this field of coding. Thanks a lot :)
Error while Registering the script to be run at start-up, how to resolve?
0
0
0
24
44,140,675
2017-05-23T16:46:00.000
1
0
0
0
python,pandas
44,140,770
1
false
0
0
I think the simplest and most efficient path would be to have two tables. The reason is that with one big table your algorithm can take O(n^2), since you have to iterate n times over your markers and then match each element against n performance records. With the two-table approach your complexity goes to O(n * m), where n is the number of technical markers and m is the number of records in performance. In this use case I'd imagine your n is based on whichever subset you want to look at rather than the whole set, so n < m, and you can simply apply a short circuit to make the algorithm much more efficient. Alternatively, if you were able to build a master lookup table capturing all the relationships between a performance and a technical marker, then your complexity is essentially a hash lookup, or O(1).
1
1
1
I have a DataFrame with about 6 million rows of daily data that I will use to find how certain technical markers affected their respective stocks’ long term performance. I have 2 approaches, which one is recommended? Make 2 different tables, one of raw data and one (a filtered copy) containing the technical markers, then do “lookups” on the master table to get the subsequent performance. Use 1 big table, containing both the markers and the performance data. I’m not sure what is more computationally expensive – calculating the technical markers for all the rows, even the unneeded ones, or doing the lookups against the master table. Thanks.
Python pandas Several DataFrames Best Practice
0.197375
0
0
320
44,141,059
2017-05-23T17:08:00.000
0
0
0
0
python,process,cluster-computing,pymc3,dirichlet
44,144,746
1
false
0
0
If I understand you correctly, you're trying to extract which category (1 through k) a data point belongs to. However, a Dirichlet random variable only produces a probability vector. This should be used as a prior for a Categorical RV, and when that is sampled from, it will result in a numbered category.
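A minimal PyMC3 sketch of that idea, assuming a fixed maximum number of components K and 1-D data; all names and sizes are illustrative, and marginalized mixtures such as pm.NormalMixture are usually faster because sampling discrete labels directly is slow:

import numpy as np
import pymc3 as pm

data = np.random.randn(100)   # placeholder observations
K = 5                         # assumed maximum number of clusters

with pm.Model():
    # Mixture weights: a probability vector, not cluster labels.
    w = pm.Dirichlet('w', a=np.ones(K))
    # Latent cluster id per observation - this is what you extract.
    z = pm.Categorical('z', p=w, shape=len(data))
    mu = pm.Normal('mu', mu=0.0, sd=10.0, shape=K)
    pm.Normal('obs', mu=mu[z], sd=1.0, observed=data)
    trace = pm.sample(1000)

# Posterior samples of the cluster assignment for each data point.
labels = trace['z']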
1
0
1
I am using PyMC3 to cluster my grouped data. Basically, I have g vectors and would like to cluster the g vectors into m clusters. However, I have two problems. The first one is that, it seems PyMC3 could only deal with one-dimensional data but not vectors. The second problem is, I do not know how to extract the cluster id for the raw data. I do extract the number of components (k) and corresponding weights. But I could not extract the id that indicating the which cluster that each point belongs to. Any ideas or comments are welcomed!
How to extract cluster id from Dirichlet process in PyMC3 for grouped data?
0
0
0
95
44,142,810
2017-05-23T18:49:00.000
0
1
1
0
python,pytest
44,164,747
1
false
0
0
Thanks to Chanda for giving me a solution. Test markers are very useful! Another way to run a specific test class is the -k option, which will run only a specific test-case class in a module, skipping the others.
1
0
0
I have a module that I use to hold specific testcase classes. I would like to implement an easy way to run them all or one by one. I did notice that if I run pytest passing the whole module, all the test will run; but I would like also to pass one single testcase, if I want to. Is this possible or do I need one module per testcase class?
multiple testcase classes in the same module, how to run them as whole or single test with pytest
0
0
0
74
44,144,421
2017-05-23T20:28:00.000
1
0
0
0
python,apache-spark,machine-learning,pyspark,spark-dataframe
44,207,434
1
true
0
0
This error was due to the order of joining the 2 PySpark dataframes. I tried changing the order of the join from, say, a.join(b) to b.join(a) and it worked.
1
0
1
After preprocessing the pyspark dataframe , I am trying to apply pipeline to it but I am getting below error: java.lang.AssertionError: assertion failed: No plan for MetastoreRelation. What is the meaning of this and how to solve this. My code has become quite large, so I will explain the steps 1. I have 8000 columns and 68k rows in my spark dataframe. Out of 8k columns, 500 are categorical to which I applied pyspark.ml one hot encoding as a stage in ml.pipeline encoders2 = [OneHotEncoder(inputCol=c, outputCol="{0}_enc".format(c)) for c in cat_numeric[i:i+2]] but this is very slow and even after 3 hours it was not complete. I am using 40gb memory on each of 12 nodes!. 2. So I am reading 100 columns from pyspark dataframe , creating pandas dataframe from that and doing one hot encoding. Then I transform pandas daaframe back into pyspark data and merge it with original dataframe. 3. Then I try to apply pipeline with stages of string indexer and OHE for categorical string features which are just 5 and then assembler to create 'features' and 'labels' . But in this stage I get the above error. 4. Please let me know if my approach is wrong or if I am missing anything. Also let me know if you want more information. Thanks
PySpark dataframe pipeline throws No plan for MetastoreRelation Error
1.2
0
0
1,261
44,144,949
2017-05-23T21:01:00.000
1
0
0
1
python,fuse,inotify
44,539,296
2
false
0
0
Inotify is concerned with calls at the VFS level only; if you perform FUSE operations within a FUSE filesystem, inotify will not know about them.
2
1
0
So I have a filesystem that is downloading some data, storing it in memory, and representing only completed downloads as files to the user. However, each download may take time to complete, so I don't want the user to have to wait for all the files to finish downloading. The way I do this is by choosing which 'files' to list in readdir. When I open nautlius to see the files, I only see the first few and have to refresh to see the rest. When I monitor the inotify activity, I noticed there are no CREATE events for the newly completed downloads. What do I need to do to create this notification?
inotify CREATE notification with fusepy
0.099668
0
0
212
44,144,949
2017-05-23T21:01:00.000
0
0
0
1
python,fuse,inotify
44,150,850
2
false
0
0
You need IN_CLOSE_WRITE. From the inotify man page: IN_CLOSE_WRITE (+) File opened for writing was closed.
2
1
0
So I have a filesystem that is downloading some data, storing it in memory, and representing only completed downloads as files to the user. However, each download may take time to complete, so I don't want the user to have to wait for all the files to finish downloading. The way I do this is by choosing which 'files' to list in readdir. When I open nautlius to see the files, I only see the first few and have to refresh to see the rest. When I monitor the inotify activity, I noticed there are no CREATE events for the newly completed downloads. What do I need to do to create this notification?
inotify CREATE notification with fusepy
0
0
0
212
44,146,146
2017-05-23T22:37:00.000
2
0
1
0
python,scikit-learn
44,147,287
1
false
0
0
After spending a couple of hours to no avail, I deleted the Anaconda Python folder and reinstalled. I have the latest bits now and the problem is solved :)
1
3
1
I'm new to open source so appreciate any/all help. I've got notebook server 4.2.3 running on: Python 3.5.2 |Anaconda 4.2.0 (64-bit) on my Windows10 machine. When trying to update scikit-learn from 0.17 to 0.18, I get below error which I believe indicates one of the dependency files is outdated. I can't understand how or why since I just (<1 month ago) installed Python through anaconda. Note I get the same error when I try conda update scikit-learn conda install scikit-learn=0.18 pip install -U scikit-learn ImportError: cannot import name '_remove_dead_weakref' How do I fix it? Should I try to uninstall and re-install? If so what's the safest (meaning will cleanly remove all bits) way to do this? Thanks in advance. I'm trying to update to 0.18. I'm running
'_remove_dead_weakref' error when updating scikit-learn on Win10 machine
0.379949
0
0
3,110
44,149,906
2017-05-24T05:44:00.000
0
0
1
0
python,installation,packages
44,150,031
2
false
0
0
First, pip install package --download="path to directory", then pip install --no-index --find-links="path to directory" package_name. Note: pip download replaces the --download option to pip install, which is now deprecated and will be removed in pip 10.
1
3
0
I'd like to do something like pip install requests --save packages.txt so I could have list of all I used and later I could just pip -r install packages.txt when I clone it from repository.
How to install package using pip and save name to file to install it later
0
0
0
1,693
44,150,253
2017-05-24T06:09:00.000
8
0
1
0
python-2.7,python-3.x,visual-studio-code,pylint
44,177,534
1
true
0
0
Yes, there is a way. You can set one interpreter for each folder (project) you have open in VS Code, and this will dictate the linting (assuming you have the modules installed for each interpreter). This way you can have two different projects open at the same time and each will use its own interpreter and linter. The automatic way: the ideal way to select the interpreter for your current project folder is to open the command palette (F1), type "Python: Select Workspace Interpreter" and choose from the dropdown list (all while you have your project open). And that's it: VS Code will use that interpreter and linter. The manual way: now, if when you try to select an interpreter as described above you get a message like "There are no entries to select from" even though you have both interpreters installed, as happened to me (when I tried on Windows, I think maybe because of the Python Launcher for Windows), you can still select the interpreter; you just have to do it manually. While you have your project open, write "Preferences: Open Workspace Settings" in the command palette and hit enter. This will create and open an empty settings.json file in a hidden .vscode folder inside your project folder. In this file you can set the exact Python interpreter you want to use in the project, for example, for Python 2, like this: "python.pythonPath": "C:\\Python27\\python.exe" With that you should be able to lint and debug with a specific interpreter.
1
7
0
I have two projects in two windows , one in python2 and other in python3. Is there anyway I can use both pylint for python2 and python3 in vscode for different projects on the fly ? I tried, but I can't use both of them. Either I can able to set python2 pylint or the python3 one.
How to configure pylint for both python2 and python3 in vs code
1.2
0
0
5,317
44,155,943
2017-05-24T10:37:00.000
0
0
0
1
python,cx-oracle
44,171,743
1
false
0
0
You cannot mix 32-bit and 64-bit together. Everything (Oracle client, Python, cx_Oracle) must be 32-bit or everything must be 64-bit. The error above looks like you are trying to mix a 64-bit Oracle client with a 32-bit Python.
1
0
0
Trying to install cx_Oracle on Solaris11U3 but getting ld: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error python setup.py build running build running build_ext building 'cx_Oracle' extension cc -DNDEBUG -KPIC -DPIC -I/oracle/database/rdbms/demo -I/oracle/database/rdbms/public -I/usr/include/python2.7 -c cx_Oracle.c -o build/temp.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.o -DBUILD_VERSION=5.2.1 "SessionPool.c", line 202: warning: integer overflow detected: op "<<" cc -G build/temp.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.o -L/oracle/database/lib -L/usr/lib -lclntsh -lpython2.7 -o build/lib.solaris-2.11-sun4v.32bit-2.7-11g/cx_Oracle.so ld: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error: command 'cc' failed with exit status 2 Tried all available information on the internet: Installed gcc Installed solarisstudio12.4 Installed instantclient-basic-solaris.sparc64-12.2.0.1.0, instantclient-odbc-solaris.sparc64-12.2.0.1.0 Set LD_LIBRARY_PATH to oracle home directory:instantclient_12_2/ Same issue seen while installing DBD:Oracle perl module.
Python module cx_Oracle ld installation issue on Solaris11U3 SPARC: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error
0
1
0
268
44,156,976
2017-05-24T11:24:00.000
1
0
0
0
python,django,server
44,157,132
2
true
1
0
I want to know if django creates a new instance of the view (or the whole app) on each request. Nope - unless you serve your project with plain cgi of course but that would be a very very strange choice. Im not sure if this fit too but would love to know if a request will have to wait for a current one to complete before the incoming starts? Depends on how many worker processes / threads you use to serve your project. A single worker process/thread will handle a single request at a time obviously.
1
0
0
I want to know if django creates a new instance of the view (or the whole app) on each request. Im not sure if this fit too but would love to know if a request will have to wait for a current one to complete before the incoming starts? Thanks in advance
Django handling request
1.2
0
0
59
44,157,384
2017-05-24T11:40:00.000
1
0
1
0
python,pycharm
44,159,643
1
true
0
0
To modify a variable value in debug mode, I use the 'Evaluate Code Fragment' tool, which is at the top right of the debug subwindow. You can also access it by right-clicking on a variable and choosing Evaluate Expression. Then, if I execute myclass.attribute = some_value, the attribute is actually set to the chosen value. I'm not sure if it is a good alternative for you; I don't think it is possible to modify attributes of classes other than by adding a line of code directly.
1
0
0
I've been using the Pycharm's debugger quite a lot now and I noticed, that it is possible to change value of a variable - F2, or right click - at any point. Unless it is an attribute of a class. Say, I have a class that is creating a network and have a variable self.current_depth that controls how deep I am from the seed. If in debugging I want to change it, it lets me put the value in, but it does not get rewritten and keeps the original value. I was under the impression that an attribute of a class is just a bit memory that can be overwritten. How does this work then? Is it a different case like class keeps its memory together and I would have to rewrite the whole thing? I cannot find some reference to it. Thanks a lot!
Why cannot set value in debugging
1.2
0
0
769
44,158,025
2017-05-24T12:08:00.000
2
0
0
1
python,google-app-engine,django-templates,webapp2
44,158,851
1
true
1
0
Many people use Jinja templates with webapp2 in GAE, but you can also use Django templates. The two template systems are very similar so it is fairly easy to switch between the two. The template system that you use is quite independent of webapp2. It works like this: Render your template to get a string representation of your HTML page Transmit the string with webapp2 Feel free to use Jinja, Django, or any other templating system. Webapp2 doesn't provide templates because it is not part of its job.
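As an illustration, a minimal webapp2 handler that renders a Jinja2 template to a string and writes it to the response; the templates folder and index.html are assumed, not taken from the question:

import os
import jinja2
import webapp2

# Assumed template directory layout; adjust to your project.
JINJA_ENV = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates')),
    autoescape=True)

class MainPage(webapp2.RequestHandler):
    def get(self):
        template = JINJA_ENV.get_template('index.html')
        # Render to a string, then hand the string to webapp2.
        self.response.write(template.render({'name': 'world'}))

app = webapp2.WSGIApplication([('/', MainPage)])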
1
0
0
I am using Django 1.2 template tags over webapp framework in Google App engine. I intend to migrate over to Webapp2. I was looking for a way to this in webapp2, but did not find a template engine for webapp2. So, should I continue with webapp's template engine or use pure Django template engine.
Is there any way to use Django templates in Webapp2 framework in Google App Engine
1.2
0
0
109
44,160,324
2017-05-24T13:47:00.000
0
0
0
0
python,machine-learning,dummy-variable
44,366,690
2
true
0
0
In a case where there is more than one categorical variable that needs to be replaced by dummies, the approach should be to encode each of the variables as dummies (as in the case of a single categorical variable) and then remove one dummy level for each variable in order to avoid collinearity. Basically, each categorical variable should be treated the same as a single individual one.
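A small pandas sketch of that approach, with hypothetical column names; drop_first=True drops one level per variable:

import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'green'],
                   'size':  ['S', 'M', 'L'],
                   'price': [10.0, 12.5, 9.9]})

# One dummy set per categorical column; drop_first=True removes one level
# from each variable to avoid the dummy-variable trap (collinearity).
encoded = pd.get_dummies(df, columns=['color', 'size'], drop_first=True)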
2
1
1
I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one Categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid colinearity however is anyone familiar with what the approach should be when dealing with more than one type of categorical variable? Do I do the same thing for each? e.g translate each type of record into a dummy variable and then for each remove one dummy variable so as to avoid colinearity?
Using dummy variables for Machine Learning with more than one categorical variable
1.2
0
0
1,175
44,160,324
2017-05-24T13:47:00.000
1
0
0
0
python,machine-learning,dummy-variable
44,163,726
2
false
0
0
If there are many categorical variables, and these variables also have many levels, using dummy variables might not be a good option. If a categorical variable has data in the form of bins, e.g. a variable age with values like 10-18, 18-30, 31-50, ..., you can either use label encoding, create a new numerical feature using the mean/median of the bins, or create two features for the lower and upper age. If you have timestamps from the start of a task to its end, e.g. the time a machine was started to the time it was stopped, you can create a new feature by calculating the duration in hours or minutes. Given many categorical variables but with a small number of levels, the obvious way out is to apply one-hot encoding to the categorical variables. But when a categorical variable has many levels, there may be certain levels that are too rare or too frequent; applying one-hot encoding to such data would hurt model performance. In such cases it is recommended to apply some business logic / feature engineering to reduce the number of levels first. Afterwards you can use one-hot encoding on the new feature if it is still categorical.
2
1
1
I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one Categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid colinearity however is anyone familiar with what the approach should be when dealing with more than one type of categorical variable? Do I do the same thing for each? e.g translate each type of record into a dummy variable and then for each remove one dummy variable so as to avoid colinearity?
Using dummy variables for Machine Learning with more than one categorical variable
0.099668
0
0
1,175
44,160,902
2017-05-24T14:11:00.000
0
0
1
0
python,file
47,921,223
2
false
0
0
I don't know of any way, but: to use the API the free/demo version is enough! If you cannot install it on your machine due to missing privileges, etc., you can also copy the installation from another machine. You might then need to tell the API implementation where to find your GAMS "installation".
1
4
0
There is a way to read GDX(GAMS Data eXchange ) files, without need of GAMS??? There is a way to read via GAMS and its api for Python,but I haven't GAMS in my pc. Thnx
How to read GDX file with Python,without GAMS
0
0
1
527
44,161,614
2017-05-24T14:39:00.000
0
0
0
0
postgresql,python-3.5,django-migrations,django-1.8,django-postgresql
47,838,958
1
true
1
0
so simple! python manage.py migrate "app_name"
1
0
0
I recently did an inspectdb of our old database which is on MySQL. We want to move to postgre. now we want the inspection to migrate to a different schema is there a migrate command to achieve this? so different apps shall use different schemas in the same database
how to specify an app to migrate to a schema django
1.2
1
0
204
44,165,218
2017-05-24T17:38:00.000
1
0
0
0
python,user-interface,pyqt,qt-creator,qt-designer
44,184,047
1
true
0
1
In Qt Designer, check the "scalable" (scaledContents) property of the QLabel. Also, to make it expandable, make sure you properly set the layout of the frame.
1
1
0
I have a frame around the area that is selected (it contains Qlabel), and a frame around the buttons/objects by the bottom of the screen. I want my screen to STAY this ratio even if I drag it out or upload a very large image to Qlabel. How do I do this? I tried messing around with the "minimum size" for the bottom frame but that did nothing.
How to stop image from being pushed off screen?
1.2
0
0
54
44,165,574
2017-05-24T18:02:00.000
0
0
1
0
python-3.x,link-grammar
44,175,967
2
false
0
0
I too am facing the same issue. After installing "Microsoft Visual Studio 14.0" and "Visual C++ Build Tools", running "pip install pylinkgrammar" throws an error saying "Microsoft Visual Studio 14.0 failed with exit status 2"; I think it's not compatible with Python version 3.
1
0
0
I am trying to install link grammar on windows for python using "pip install pylinkgrammar" command. I am getting an error which says "Microsoft Visual Studio 14.0 required." Then I installed Visual C++ Build tools and I get the same error. Can somebody help me?
installing pylinkgrammar for windows
0
0
0
310
44,166,154
2017-05-24T18:34:00.000
4
0
1
0
python,object,numbers,interpreter
44,166,381
2
false
0
0
Yes, it would. Just like contrived benchmarks prove more or less whatever you want them to prove. What happens when you add 3 + 23431827340987123049712093874192376491287364912873641298374190234691283764? Well, if you're using primitive types you get all kinds of overflows. If you're using Python objects then it does some long maths, which yeah, is slower than native math, but they're big numbers so you kind of have to deal with that.
1
3
0
Every time I perform an arthetic operation in python, new number objects are created. Wouldn't it be more efficient for the interpreter to perform arithmetic operations with primitive data types rather than having the overhead of creating objects (even for numbers) for arithmetic operations? Thank you in advance
Why are numbers represented as objects in python?
0.379949
0
0
367
44,167,386
2017-05-24T19:55:00.000
5
0
0
0
python,django,postgresql,django-migrations
44,167,568
1
true
1
0
The issue comes from PostgreSQL, which rewrites each row when a new column (field) is added with a default. What you would need to do is write your own data migration in the following way: Add the new column with null=True - in this case the data will not be rewritten and the migration will finish pretty fast. Migrate it. Add a default value. Migrate it again. That is basically a simple pattern for dealing with adding a new column to a huge Postgres database.
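A rough sketch of that pattern on a hypothetical model; the app, model and field names are placeholders, and the batched backfill is one way to keep each step small:

from django.db import models

# Step 1: add the new column as nullable so Postgres does not rewrite rows.
class Reading(models.Model):               # hypothetical model name
    source = models.CharField(max_length=32, null=True)

# python manage.py makemigrations && python manage.py migrate

# Step 2 (data migration, run via RunPython): backfill the value in batches,
# then optionally alter the field to null=False with a default and migrate again.
def backfill_source(apps, schema_editor):
    Reading = apps.get_model('myapp', 'Reading')   # 'myapp' is a placeholder
    while True:
        ids = list(Reading.objects.filter(source__isnull=True)
                                  .values_list('pk', flat=True)[:10000])
        if not ids:
            break
        Reading.objects.filter(pk__in=ids).update(source='legacy')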
1
5
0
I have a table which I am working on and it contains 11 million rows there abouts... I need to run a migration on this table but since Django trys to store it all in cache I run out of ram or disk space which ever comes first and it comes to abrupt halt. I'm curious to know if anyone has faced this issue and has come up with a solution to essentially "paginate" migrations maybe into blocks of 10-20k rows at a time? Just to give a bit of background I am using Django 1.10 and Postgres 9.4 and I want to keep this automated still if possible (which I still think it can be) Thanks Sam
Django migration 11 million rows, need to break it down
1.2
1
0
995
44,167,842
2017-05-24T20:24:00.000
2
1
0
1
python,python-2.7
44,167,881
3
true
0
0
Yes, rename it to /usr/bin/doxypy
2
2
0
I have a python script: /usr/bin/doxypy.py I have added #!/usr/local/bin/python as first line and given full permission to script with chmod 777 /usr/bin/doxypy.py. If I want to run it as a linux command, let's say, I want to run it with only doxypy, is there any way to achieve this?
How can I run python script as linux command
1.2
0
0
1,422
44,167,842
2017-05-24T20:24:00.000
1
1
0
1
python,python-2.7
44,167,906
3
false
0
0
Rename the file to doxypy and put it in a folder on $PATH, e.g. /usr/bin
2
2
0
I have a python script: /usr/bin/doxypy.py I have added #!/usr/local/bin/python as first line and given full permission to script with chmod 777 /usr/bin/doxypy.py. If I want to run it as a linux command, let's say, I want to run it with only doxypy, is there any way to achieve this?
How can I run python script as linux command
0.066568
0
0
1,422
44,168,336
2017-05-24T20:57:00.000
1
1
1
0
python,scipy,aws-lambda
44,195,826
1
true
0
0
There is no supported way of doing it. You may try removing manually some parts which you do not use. However, you're on your own if you go down this route.
1
0
0
Is it possible to only install part of a python package such as scipy to reduce the total size. The latest version (0.19.0) appears to be using 174MB, I'm only using it for its spectrogram feature. I'm putting this on AWS Lambda and I'm over the 512MB storage limit. I realize there are other options I could take, e.g. using another spectrogram implementation, manually removing scipy files, etc. I'm wondering if there is any automated way to do this?
modular scipy install to fit on space constrained environment
1.2
0
0
68
44,168,482
2017-05-24T21:07:00.000
0
0
0
0
python,html,apache,nginx
44,175,812
1
false
0
0
I suggest you use monitoring tools for that. If you need alerting, use a tool like Nagios or Zabbix. If you don't need alerting, you can set up a tool like Munin or Cacti.
1
0
0
I'd like to poll servers/network devices via SNMP/ICMP/SSH and serve up content about each device in a web browser to a user. Things like uptime, CPU usage, etc. What's a good way to do this? Would I store the data in a db and then present it the framework of my choosing?
How do I serve dynamic content in a dashboard?
0
0
1
61
44,170,581
2017-05-25T00:43:00.000
5
0
0
0
python,keras
44,213,304
4
false
0
0
As it says, it's not an issue; it's just a warning. It still works fine, although they might change it any day and then the code will not work. In Keras 2, Convolution2D has been replaced by Conv2D along with some changes in the parameters. Convolution* layers are renamed Conv*. Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3)).
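A minimal Keras 2 sketch of the new call form; the layer size and input shape are arbitrary:

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# Keras 1: Convolution2D(64, 3, 3, activation='relu', input_shape=(32, 32, 3))
# Keras 2: the kernel size is passed as a tuple.
model.add(Conv2D(64, (3, 3), activation='relu', input_shape=(32, 32, 3)))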
1
7
1
I am trying to use keras to create a CNN, but I keep getting this warning which I do not understand how to fix. Update your Conv2D call to the Keras 2 API: Conv2D(64, (3, 3), activation="relu") after removing the cwd from sys.path. Can anyone give any ideas about fixing this?
Warning from keras: "Update your Conv2D call to the Keras 2 API"
0.244919
0
0
6,588
44,173,237
2017-05-25T05:51:00.000
1
0
1
1
python,c++,compilation,interpreted-language
44,173,717
3
false
0
0
In practice, no. The languages C/C++ were written as a better assembler; their underlying operations were designed to fit the processors of the 1970s. Subsequently, processors have been driven to run quickly, and so they have been designed around instructions which can make C/C++ faster. This close link between the semantics of the language and the silicon has given a head start to the C/C++ community. An example of the C/C++ advantage is how simple types and objects can be created on the stack. The stack is implemented as a processor stack, and objects will only be valid while their call stack is current. Java/Python implement all their objects on the free store, having lambdas and closures which stretch the lifespan of their objects beyond the call stack which creates them. The free store is a more costly way of creating objects, and this is one of the penalties these languages take. JIT-compiling the Java/Python bytecode can make up some of the difference and (theoretically) beat the performance of the C/C++ code. When JIT compiled, a language is compiled for the processor in the box (possibly with better features than when the code was written) and with knowledge of the exact data being used with the code. This means the JIT compiler is tuned to the exact usage of the code, rather than a compiler's best guess.
1
1
0
I want to understand this difference between interpreted languages and compiled languages. Lot of explanation is found online and I understand all of them. But question is, softwares are distributed as exe(on windows) as final products. So if I write my program in Python/Java and compile it to exe, would that work as fast as if I had written and compiled in C/C++?
Does Python/Java program work as fast as C if I convert both of them into exe?
0.066568
0
0
791
44,175,700
2017-05-25T08:20:00.000
0
0
0
0
python-2.7,cpu,mxnet
44,424,599
2
false
0
0
@user3824903 I think to create a bin directory, you have to compile MXNet from source with option USE_OPENCV=1
1
1
1
I have used im2rec.py to convert "caltech101 images" into record io format: I have created "caltech.lst" succesfully using os.system('python %s/tools/im2rec.py --list=1 --recursive=1 --shuffle=1 data/caltech data/101_ObjectCategories'%MXNET_HOME) Then, when I run this : os.system("python %s/tools/im2rec.py --train-ratio=0.8 --test-ratio=0.2 --num-thread=4 --pass-through=1 data/caltech data/101_ObjectCategories"%MXNET_HOME) I have this error : attributeError:'module' object has no attribute 'MXIndexedRecordIO' Please, someone has an idea to fix this error ? Thanks in advance. Environment info Operating System:Windows 8.1 MXNet version:0.9.5
attributeError:'module' object has no attribute 'MXIndexedRecordIO'
0
0
0
375
44,180,835
2017-05-25T12:39:00.000
1
0
1
0
python
44,181,046
2
false
1
0
These fuzzy values "x y's ago" are clearly display values calculated from some original source data. Are you sourcing these data from some API? You should instead try to source the original data behind these display values. Probably the "huge database" you are sourcing these records from can be queried in a way that returns absolute values for dates, rather than these fuzzy ones. (as an aside, I find the current trend of using so-called human-friendly fuzzy date stamps to be extremely annoying, especially when you can't turn them off. Not only does it impact screen scraping applications as this appears to be, but it's really a hindrance for time-critical data such as ticketing systems with date-stamped notes. I look forward to seeing this UI trend abate).
1
2
0
I have a huge database which includes a "posted time" field. This field contains values such as: 2 days ago, 3 months ago, 5 minutes ago... I can sort it the hard way which involves looking first at the second parameter (day, month, minutes) and then looking at the first parameter which is the number. I was wondering if there is a better (easier) way?
Sort dates in Python
0.099668
0
0
100
44,183,927
2017-05-25T15:11:00.000
0
0
0
0
python,pandas,numpy,ipython,jupyter-notebook
44,183,963
2
false
0
0
I would go through and find the item with the smallest id in the list, set it to 1, then find the next smallest, set it to 2, and so on. Edit: you are right, that would take way too long. I would just go through and set one of them to 1, the next one to 2, and so on. It doesn't matter what order the ids are in (I am guessing). When a new item is added, just set it to 9067, and so on.
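With pandas this remapping can be done in one pass; a sketch with hypothetical column names:

import pandas as pd

ratings = pd.DataFrame({'user_id': [1, 2, 1],
                        'item_id': [7, 165201, 42],   # sparse original IDs
                        'rating':  [5, 3, 4]})

# Map each distinct item_id to a dense index 1..n_unique.
codes, uniques = pd.factorize(ratings['item_id'])
ratings['item_idx'] = codes + 1
# `uniques` keeps the reverse mapping: uniques[idx - 1] is the original ID.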
1
2
1
I'm making a recommender system, and I'd like to have a matrix of ratings (User/Item). My problem is there are only 9066 unique items in the dataset, but their IDs range from 1 to 165201. So I need a way to map the IDs to be in the range of 1 to 9066, instead of 1 to 165201. How do I do that?
Normalize IDs column
0
0
0
210
44,184,942
2017-05-25T16:02:00.000
0
1
0
0
python,amazon-web-services,aws-lambda,paramiko,mrjob
44,213,779
2
true
1
0
Instead of using SSH, I've solved this before by pushing a message onto an SQS queue that a process on the server monitors. This is much simpler to implement and avoids needing to keep credentials in the Lambda function or place it in a VPC.
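A minimal sketch of the Lambda (producer) side with boto3; the queue URL and message fields are placeholders, and a process on the server would poll the queue and run the job:

import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/mapreduce-jobs'  # placeholder

def lambda_handler(event, context):
    # Enqueue the job description and return immediately.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({'job': 'wordcount',
                                             'input': event.get('input')}))
    return {'statusCode': 202, 'body': 'job queued'}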
1
0
0
I am trying to spawn a mapreduce job using the mrjob library from AWS Lambda. The job takes longer than the 5 minute Lambda time limit, so I want to execute a remote job. Using the paramiko package, I ssh'd onto the server and ran a nohup command to spawn a background job, but this still waits until the end of job. Is there anyway to do this with Lambda?
How to execute a job on a server on Lambda without waiting for the response?
1.2
0
1
441
44,186,370
2017-05-25T17:25:00.000
72
0
1
0
python,jupyter-notebook,jupyter
45,479,729
2
false
0
0
Little late answer, but you might try this at the top of your notebook: %config Completer.use_jedi = False
1
24
0
I just installed a few libraries for Deep Learning like keras, theano etc. The installation went fine but when I write code in Jupyter notebook and press tab for autocompletion, the kernel of jupyter notebook seems to take too long for autocompletion. There have been time when it has taken minutes to display autocompleted options. I initially thought that the kernel hung so I had to restart it every time. I read in another Stack Overflow post that installing pyreadline may help. I installed it but I'm still having the same problem. Has anyone else faced this problem? How do I go about fixing this? Any pointers would be greatly appreciated.
Kernel taking too long to autocomplete (tab) in Jupyter notebook
1
0
0
11,238
44,186,905
2017-05-25T17:55:00.000
0
0
0
0
python,user-interface,raspberry-pi3
44,188,711
2
true
0
1
Your Pi boots up and displays a console - just text - by running a program (getty). Then you run another application called a graphical display manager, which in turn runs a window manager. On a Pi it is usually GNOME, but there are many others; this window manager is what displays your GUI window. What you want is obviously possible, it is just non-trivial to do. What you are talking about is either a kiosk-mode application, still running 'on the desktop' as you say but obscuring it completely and not allowing you to switch or de-focus, or an even more complicated JeOS-like Kodi/XBMC bare-metal installation running without your current window manager. Your Python would have to do the job of the display manager and the window manager, and it would be very, very slow. Use a really light window manager and go kiosk mode. Or you could go with text! There are libraries, e.g. ncurses, but I'm not sure how that would work with your touch-screen display.
1
1
0
I am creating a GUI interface that will be using a 7" touch display with a raspberry pi 3. I want the GUI to take the place of the desktop, I do not want it displayed in a window on the desktop. any thoughts on how to do that. I have read the raspberry pi documentation to edit the rc.local script to start the application at login, but I can not figure out how to set up the python GUI with out creating a window
how to replace the desktop interface with a python application
1.2
0
0
737
44,187,592
2017-05-25T18:36:00.000
0
0
0
0
python,node.js,sockets,express,socket.io
44,192,695
1
false
0
0
I would use a job queue for this kind of task: store each job's info in a queue so you can cancel it and query its status. You can use a Node module like kue.
1
0
0
I am trying to build a Node app which calls a Python script (which takes a lot of time to run). The user essentially chooses parameters and then clicks a run button, which triggers an event in socket.on('python-event') and runs the Python script. I am using socket.io to send real-time data to the user about the status of the Python program, using the stdout stream I get from Python. But the problem I am facing is that if the user clicks the run button twice, the event handler is triggered twice and runs 2 instances of the Python script, which corrupts stdout. How can I ensure only one event trigger happens at a time, and that if a new event trigger happens it kills the previous instance and its stdout stream and then runs a new instance of the Python script using the updated parameters? I tried using socket.once() but it only allows the event to trigger once per connection.
Make sure only one instance of event-handler is triggered at a time in socket.io
0
0
1
80
44,187,774
2017-05-25T18:47:00.000
2
0
0
0
python,scrapy
45,843,257
1
false
1
0
Go to settings.py in the project folder and change ROBOTSTXT_OBEY = True to ROBOTSTXT_OBEY = False.
1
1
0
How can I catch a request that is forbidden by robots.txt in scrapy? Usually this seems to get ignored automatically, i.e. nothing in the output, so I really can't tell what happens to those urls. Ideally if crawling a url leads to this forbidden by robots.txt error, I'd like to output a record like {'url': url, 'status': 'forbidden by robots.txt'}. How can I do that? New to scrapy. Appreciate any help.
How to catch forbidden by robots.txt?
0.379949
0
1
1,197
44,188,070
2017-05-25T19:07:00.000
0
0
1
0
python,rounding
44,205,462
2
false
0
0
You could divide your altitude by 1000.0 and cast it to an int, which would drop the decimal: if int(altitude/1000.0) % 2 == 0. Then you can do whatever you want with that information.
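A small sketch of the "never round down" rule described in the question, with heading reduced to an east/west flag:

def vfr_cruise_altitude(requested_ft, eastbound):
    # Eastbound wants odd thousands + 500 ft, westbound wants even thousands + 500 ft.
    wanted_parity = 1 if eastbound else 0
    thousands = int(requested_ft) // 1000
    while True:
        candidate = thousands * 1000 + 500
        if candidate >= requested_ft and thousands % 2 == wanted_parity:
            return candidate
        thousands += 1   # never round down, keep climbing until legal

print(vfr_cruise_altitude(1500, eastbound=False))   # 2500
print(vfr_cruise_altitude(6500, eastbound=True))    # 7500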
1
1
0
If an aircraft is flying VFR in the US, if the heading is east, the altitude must be an odd thousand plus 500 feet (1500, 3500, 5500, etc). If flying west, the altitude must be an even thousand plus 500 feet (2500, 4500, 6500, etc). If I input a given altitude, but it is the wrong (odd or even) for the heading, how do I get Python to correct it next higher odd or even thousandths (1500 becomes 2500, 6500 becomes 7500, etc)? We never round down for altitudes. Thanks!
Determining Thousandths in a number
0
0
0
54
44,191,741
2017-05-26T00:29:00.000
0
0
1
0
python,anaconda
47,492,223
4
false
0
0
I'm not sure if it's the exact same error, but I deleted all files linked to conda under the folder C:\Users\{user} and that solved the problem.
3
2
0
I'm trying to import a environment to my anaconda and then I get a error like this: ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'defaults::openssl-1.0.2k-1'. UnicodeDecodeError('ascii', '/Users/fengxinlin/google-cloud-sdk/bin:/opt/local/bin:/opt/local/sbin:/usr/local/Cellar/mongodb/2.4.9/bin:/Users/fengxinlin/anaconda2/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/git/bin:/opt/ImageMagick/bin}\xc2\xa0', 236, 237, 'ordinal not in range(128)') Attempting to roll back. It looks like a decoding issue, but what can I do to fix this error?
How to handle "ERROR conda.core.link:_execute_actions(337)"?
0
0
0
5,939
44,191,741
2017-05-26T00:29:00.000
0
0
1
0
python,anaconda
56,430,301
4
true
0
0
Finally I solved this problem somehow. I completely removed Anaconda2 and downloaded Anaconda3, then re-imported the env file, and I never got this error again.
3
2
0
I'm trying to import a environment to my anaconda and then I get a error like this: ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'defaults::openssl-1.0.2k-1'. UnicodeDecodeError('ascii', '/Users/fengxinlin/google-cloud-sdk/bin:/opt/local/bin:/opt/local/sbin:/usr/local/Cellar/mongodb/2.4.9/bin:/Users/fengxinlin/anaconda2/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/git/bin:/opt/ImageMagick/bin}\xc2\xa0', 236, 237, 'ordinal not in range(128)') Attempting to roll back. It looks like a decoding issue, but what can I do to fix this error?
How to handle "ERROR conda.core.link:_execute_actions(337)"?
1.2
0
0
5,939
44,191,741
2017-05-26T00:29:00.000
0
0
1
0
python,anaconda
54,131,335
4
false
0
0
I got this on Linux OS and was missing the openssl package or some of its commands.
3
2
0
I'm trying to import a environment to my anaconda and then I get a error like this: ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'defaults::openssl-1.0.2k-1'. UnicodeDecodeError('ascii', '/Users/fengxinlin/google-cloud-sdk/bin:/opt/local/bin:/opt/local/sbin:/usr/local/Cellar/mongodb/2.4.9/bin:/Users/fengxinlin/anaconda2/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/git/bin:/opt/ImageMagick/bin}\xc2\xa0', 236, 237, 'ordinal not in range(128)') Attempting to roll back. It looks like a decoding issue, but what can I do to fix this error?
How to handle "ERROR conda.core.link:_execute_actions(337)"?
0
0
0
5,939
44,192,062
2017-05-26T01:08:00.000
0
0
0
0
python,django
44,192,368
2
false
1
0
It's normally good practice to leave your project directory as it is and keep the apps in their own separate directories, with them defined in settings.py. This will make your project much more organised and easier to maintain.
1
0
0
Is it considered against Django "best practices" to place templates, views, or models inside the Django config app? (The same app with settings.py) Templates are probably a "no" because template files in a templates directory under the config app will not be found by default, I had to add a django.template.loaders.filesystem.Loader to load them. Thank you for any advice.
Should templates, views, and models go in the Django config app?
0
0
0
24
44,200,097
2017-05-26T11:12:00.000
0
0
0
0
python-3.x,networking,tcp,scapy,pcap
44,315,276
1
false
0
0
I guess you are trying to work with Python. Here's a list of the libraries: pypcap: not maintained, the last update was in 2015; pcap_ylg: even older, updated 4 years ago; pcapy: recently updated, but quite a low-level library, hard to use for beginners; pcap/pylibpcap: again, very old. The best choice seems to be scapy, which has tons of options and is easy to use. There are currently two versions of scapy: the original scapy: in active development, supporting all Python versions, a high-level library for any pcap API; scapy3k: a scapy fork which used to support Python 3, but which has not received scapy's new features/updates since 2015.
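As a quick illustration of why scapy is convenient, a minimal capture/save/reload sketch; capturing needs root privileges, and the filter and counts are arbitrary:

from scapy.all import sniff, wrpcap, rdpcap, sendp

# Capture 10 TCP packets, write them to a pcap file, then read them back.
packets = sniff(filter='tcp', count=10)
wrpcap('capture.pcap', packets)
replayed = rdpcap('capture.pcap')
replayed.summary()
# sendp(replayed)  # replay the captured frames on the wire (also needs root)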
1
0
0
I want to make a comparison list between different libraries for TCP packet capturing, replaying and monitoring. I have come across several libraries and cannot decide with which I should start working. I want to compare pros and cons of these libraries so that its easier for me to decide. The libraries which I wanna compare are, pypcap pcap_ylg pcapy scapy3k pcap pylibpcap I tried to find online and read the documentation but could not find useful information.
Comparison between packet capturing libraries in Python 3
0
0
1
516
44,200,399
2017-05-26T11:25:00.000
0
0
1
0
python-2.7,parallel-processing,callback
44,200,634
1
false
0
0
Create a global variable which you will access via both threads.
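A minimal sketch of that idea with the standard threading module: one background thread polls and updates the shared score under a lock, and the bet-reading code takes a snapshot whenever it needs it (fetch_score is a placeholder):

import threading
import time

score_lock = threading.Lock()
current_score = {'home': 0, 'away': 0}

def fetch_score():
    # Placeholder for whatever actually retrieves the live score.
    return {'home': 1, 'away': 0}

def score_updater(poll_seconds=5):
    while True:
        latest = fetch_score()
        with score_lock:              # protect the shared variable
            current_score.update(latest)
        time.sleep(poll_seconds)

updater = threading.Thread(target=score_updater)
updater.daemon = True
updater.start()

# Anywhere in the bet-reading code:
with score_lock:
    snapshot = dict(current_score)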
1
0
0
I have code which gets the scores of a match repeatedly and at the same time reads bets. In some circumstances it needs to know the current score, which needs to be updated regularly or else we miss the state of the game, but having control over the polling rate is a must. I want to know: is there a package which can update the score in a variable in parallel with the current thread, so that I can access this variable whenever I need to?
How can I use callback in python?
0
0
0
23
44,202,776
2017-05-26T13:24:00.000
1
0
1
0
python,python-3.x
44,202,884
3
false
0
0
There is no =- operator. Depending on the context this might be two operators, e.g. x =- y is equivalent to x = (-y) (so there are two operators: assignment and negation) or an assignment with a negative constant: x =- 1 is equivalent to x = (-1) (in this context - is not an operator, it's just a negative constant).
1
0
0
What does the python operator =- do? I'm not asking about the -= operator, which I realize is shorthand for x = x - value.
What does the python operator =- do?
0.066568
0
0
138
44,203,170
2017-05-26T13:43:00.000
1
0
1
0
python
44,203,237
2
false
0
0
I don't think you can do that, but you could create a virtualenv and delete those modules there.
1
0
0
I am setting this sys.modules['os']=None for restricting OS modules in my python notebook. But I want to restrict it by default, is there any file in /bin where I can add this line. If not, is it possible in RestrictedPython?
restrict importing certain modules in python
0.099668
0
0
446
44,203,397
2017-05-26T13:54:00.000
20
0
0
0
python,utf-8,python-requests
44,203,633
4
false
1
0
The default assumed content encoding for text/html is ISO-8859-1, aka Latin-1 :( See RFC 2854. UTF-8 was too young to become the default; it was born in 1993, about the same time as HTML and HTTP. Use .content to access the byte stream, or .text to access the decoded Unicode stream. If the HTTP server does not care about the correct encoding, the value of .text may be off.
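For example (the URL is a placeholder):

import requests

r = requests.get('http://example.com/page')   # placeholder URL

raw_bytes = r.content          # undecoded byte stream
r.encoding = 'utf-8'           # override the Latin-1 default if you know better
text = r.text                  # decoded with the encoding set above
# requests can also guess from the body: r.encoding = r.apparent_encoding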
1
56
0
When the content-type of the server is 'Content-Type:text/html', requests.get() returns improperly encoded data. However, if we have the content type explicitly as 'Content-Type:text/html; charset=utf-8', it returns properly encoded data. Also, when we use urllib.urlopen(), it returns properly encoded data. Has anyone noticed this before? Why does requests.get() behave like this?
python requests.get() returns improperly decoded text instead of UTF-8?
1
0
1
113,907
44,204,086
2017-05-26T14:27:00.000
0
0
0
0
python,ssas,olap
44,204,330
2
false
0
0
It seems Python does not support loading a .NET DLL, but IronPython does. We had an MS BI automation project before using IronPython to connect to SSAS; it was a nice experience. www.mdx-helper.com
1
0
1
Does anyone know of a Python package to connect to SSAS multidimensional and/or SSAS tabular that supports MDX and/or DAX queries. I know of olap.xmla but that requires an HTTP connection. I am looking for a Python equivalent of olapR in R. Thanks
SSAS connection from Python
0
0
0
6,435
44,207,726
2017-05-26T18:06:00.000
0
0
0
0
python,sqlalchemy
44,395,701
2
false
1
0
I tried extending Query but had a hard time. Eventually (and unfortunately) I moved back to my previous approach of little helper functions returning filters and applying them to queries. I still wish I would find an approach that automatically adds certain filters if a table (Base) has certain columns. Juergen
1
1
0
in my app I have a mixin that defines 2 fields like start_date and end_date. I've added this mixin to all table declarations which require these fields. I've also defined a function that returns filters (conditions) to test a timestamp (e.g. now) to be >= start_date and < end_date. Currently I'm manually adding these filters whenever I need to query a table with these fields. However sometimes me or my colleagues forget to add the filters, and I wonder whether it is possible to automatically extend any query on such a table. Like e.g. an additional function in the mixin that is invoked by SQLalchemy whenever it "compiles" the statement. I'm using 'compile' only as an example here, actually I don't know when or how to best do that. Any idea how to achieve this? In case it works for SELECT, does it also work for INSERT and UPDATE? thanks a lot for your help Juergen
sqlalchemy automatically extend query or update or insert upon table definition
0
1
0
182
44,209,681
2017-05-26T20:27:00.000
-1
1
0
0
python,automation,talend
44,209,855
2
false
0
0
As soon as you can run a Python script from command line, you should be able to run it from Talend using tSystem component.
2
0
0
I'm trying to automate some stuff I would otherwise have to do manually, so I can run one python script instead of taking a whole bunch of steps. I want to find a way to run a Talend job from the python script. How do I accomplish this? Is it even possible?
Running Talend jobs with Python
-0.099668
0
0
1,683
44,209,681
2017-05-26T20:27:00.000
2
1
0
0
python,automation,talend
44,209,956
2
true
0
0
Oops, sorry. From the Studio, build the job to get an autonomous job you can launch from the command line. Extract the files from the generated archive. Look for the folder "script/yourJobname". Check the syntax in one of the .bat or .sh files, depending on which one you prefer. Launch the jar file using subprocess.call (or another way to execute a jar file from Python). Hope this helps. TRF
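A minimal sketch of the last step; all paths, the project name and the job class are placeholders, so mirror the exact command you find in the generated .sh/.bat:

import subprocess

cmd = ['java', '-cp',
       '/opt/talend/my_job/my_job_0_1.jar:/opt/talend/my_job/lib/*',
       'my_project.my_job_0_1.my_job',
       '--context_param', 'input_dir=/data/in']
ret = subprocess.call(cmd)
if ret != 0:
    raise RuntimeError('Talend job failed with exit code %d' % ret)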
2
0
0
I'm trying to automate some stuff I would otherwise have to do manually, so I can run one python script instead of taking a whole bunch of steps. I want to find a way to run a Talend job from the python script. How do I accomplish this? Is it even possible?
Running Talend jobs with Python
1.2
0
0
1,683
44,211,959
2017-05-27T01:11:00.000
3
0
1
0
python,file,io
44,211,980
2
true
0
0
./file_name and file_name mean the same thing - a file called file_name in the current working directory. ../file_name means a file called file_name in the parent directory of the current working directory. Summary: . represents the current directory whereas .. represents the parent directory. Explanation by example: if the current working directory is this/that/folder then: . results in this/that/folder; .. results in this/that; ../.. results in this; .././../other results in this/other
1
3
0
What is the difference between "./file_name", "../file_name" and "file_name"when used as the file path in Python? For example, if you want to save in the file_path, is it correct that "../file_name" will save file_name inside the current directory? And "./file_name" will save it to the desktop? It's really confusing.
Python File Path Name
1.2
0
0
131
44,212,063
2017-05-27T01:30:00.000
1
0
0
0
python,opencv,numpy,keras
44,212,992
2
false
0
0
As with anything regarding performance or efficiency, test it yourself. The problem with recommendations for the "best" of anything is that they might change from year to year. First, you should determine if this is even an issue you should be tackling. If you're not experiencing performance or storage issues, then don't bother optimizing until it becomes a problem. Whatever you do, don't waste your time on premature optimizations. Next, assuming it actually is an issue, try out every method for saving to see which one yields the smallest results in the shortest amount of time. Maybe compression is the answer, but it might slow things down? Maybe pickling objects would be faster? Who knows until you've tried. Finally, weigh the trade-offs and decide which method you can compromise on; you'll almost never have one silver-bullet solution. While you're at it, determine if just throwing more CPU, RAM or disk space at the problem would solve it. Cloud computing affords you a lot of headroom in those areas.
1
0
1
I'm attempting to create an autonomous RC car and my Python program is supposed to query the live stream on a given interval and add it to a training dataset. The data I want to collect is the array of the current image from OpenCV and the current speed and angle of the car. I would then like it to be loaded into Keras for processing. I found out that numpy.save() just saves one array to a file. What is the best/most efficient way of saving data for my needs?
How to save large Python numpy datasets?
0.099668
0
0
744
44,212,569
2017-05-27T03:12:00.000
0
0
1
0
ipython
52,744,978
2
false
0
0
Thanks @jack yang. 1. emacs ~/.ipython/profile_default/ipython_config.py 2. at the end, write c.AliasManager.user_aliases = [('e', 'emacsclient -t')] 3. exit and restart ipython
1
1
0
I am a bit confused about how to save an IPython alias so that every time I open an IPython session (after saving the alias once) I can use the alias directly, without having to enter it again. For example, when using IPython on Linux (or Windows), I would like to use vi filename rather than !vi filename.
How to save ipython alias forever?
0
0
0
526