Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,512,155 | 2017-08-04T17:05:00.000 | 0 | 0 | 1 | 0 | python,anaconda,gis,jupyter-notebook,data-science | 62,469,212 | 2 | false | 0 | 0 | You can also achieve this through the Anaconda GUI.
Go to the Environments tab.
Select your preferred environment, right-click, and open it in cmd.
Then install the geopandas package from the cmd window.
This way you will be isolating your package installation from the root environment. | 1 | 4 | 0 | I am using my own environment called datascience in Anaconda.
When I found that I needed the Geopandas package and installed it using conda install, the package was installed in the "root environment".
Is there any way to install packages directly into a specific environment, or to copy them from the root environment to another?
Thanks! | In Anaconda, How to install packages in NON-ROOT environment? | 0 | 0 | 0 | 7,553 |
45,513,191 | 2017-08-04T18:17:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,module,reload | 47,103,371 | 2 | false | 0 | 0 | As an appendix to DYZ's answer, I use module reloading when analysing large data sets in an interactive Python interpreter.
Loading and preprocessing the data takes some time, and I prefer to keep it in memory (in a module, by the way) rather than restart the interpreter every time I change something in my modules. | 1 | 4 | 0 | I am going through a book on Python which spends a decent amount of time on module reloading, but does not explain well when it would be useful.
Hence, I am wondering: what are some real-life examples of when such a technique becomes handy?
That is, I understand what reloading does. I am trying to understand how one would use it in the real world.
Edit:
I am not suggesting that this technique is not useful. Instead, I am trying to learn more about its applications, as it seems quite cool.
Also, I think the people who marked this question as a duplicate took no time to actually read it and see how it differs from the proposed duplicate.
I am not asking HOW TO reload a module in Python. I already know how to do that. I am asking WHY one would want to reload a module in Python in real life. There is a huge difference in the nature of these questions. | Real-life module reloading in Python? | 0 | 0 | 0 | 92
45,515,031 | 2017-08-04T20:30:00.000 | 2 | 0 | 0 | 0 | python,pandas,dataframe,scikit-learn,missing-data | 57,315,631 | 9 | false | 0 | 0 | The fastest way to find the count of NaNs, or the fraction of NaNs, per column is:
for the count: df.isna().sum()
for the fraction (multiply by 100 for a percentage): df.isna().mean() | 2 | 9 | 1 | I'm working on a machine learning problem in which there are many missing values in the features. There are hundreds of features and I would like to remove those features that have too many missing values (e.g. features with more than 80% missing values). How can I do that in Python?
My data is a Pandas dataframe. | How to remove columns with too many missing values in Python | 0.044415 | 0 | 0 | 35,470 |
45,515,031 | 2017-08-04T20:30:00.000 | 0 | 0 | 0 | 0 | python,pandas,dataframe,scikit-learn,missing-data | 66,210,039 | 9 | false | 0 | 0 | One thing about dropna(): according to the documentation, the thresh argument specifies the number of non-NaN values required to keep a row or column. | 2 | 9 | 1 | I'm working on a machine learning problem in which there are many missing values in the features. There are hundreds of features and I would like to remove those features that have too many missing values (e.g. features with more than 80% missing values). How can I do that in Python?
My data is a Pandas dataframe. | How to remove columns with too many missing values in Python | 0 | 0 | 0 | 35,470 |
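Putting the two answers above together, a minimal pandas sketch (with made-up column names, and the 80% cutoff taken from the question) for dropping columns that are mostly missing:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "a": [1, np.nan, np.nan, np.nan, np.nan],   # 80% missing
    "b": [1, 2, 3, np.nan, 5],                  # 20% missing
})

# Option 1: keep only columns whose fraction of NaNs is at most 0.8
df_filtered = df.loc[:, df.isna().mean() <= 0.8]

# Option 2: dropna with thresh = minimum number of non-NaN values per column
df_filtered2 = df.dropna(axis=1, thresh=int(0.2 * len(df)))

print(df_filtered.columns.tolist(), df_filtered2.columns.tolist())
```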
45,517,437 | 2017-08-05T01:40:00.000 | 0 | 0 | 0 | 0 | javascript,php,python,angularjs,web | 45,517,697 | 1 | true | 1 | 0 | Download the entire HTML content of the web page.
For JS, look for <script> tags and download the file referenced by the src attribute, or capture any inline script inside the tag.
For CSS, use the <link> tags to download the stylesheets, and also look for any <style> tags for inline CSS styling.
For images, scan for <img> tags and download each image using its src attribute.
The same approach can be used for audio/video, etc. | 1 | 1 | 0 | I am not talking about software like Surf Online, HTTrack, or any browser's 'save page' feature; I need to know how it actually happens in the background. I am interested in making my own program to do that.
Also, is it possible to do this in JavaScript? If yes, what libraries or other APIs should I look into that could be helpful? Please give me any kind of information about the topic; I couldn't find anything relevant to contribute to my research. | How to get all files like: js,css,images from a website to save it for offline use? | 1.2 | 0 | 1 | 188
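A minimal Python sketch of the tag-scanning approach described in the answer above; it assumes the requests and beautifulsoup4 packages are installed, and the URL is just a placeholder:

```python
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = "https://example.com/"          # placeholder URL
html = requests.get(base_url).text
soup = BeautifulSoup(html, "html.parser")

# Collect asset URLs from <script src>, <link href>, and <img src> tags
assets = []
assets += [tag["src"] for tag in soup.find_all("script", src=True)]
assets += [tag["href"] for tag in soup.find_all("link", href=True)]
assets += [tag["src"] for tag in soup.find_all("img", src=True)]

os.makedirs("offline_copy", exist_ok=True)
with open(os.path.join("offline_copy", "index.html"), "w", encoding="utf-8") as f:
    f.write(html)

for url in assets:
    full_url = urljoin(base_url, url)                 # resolve relative paths
    filename = os.path.basename(full_url.split("?")[0]) or "asset"
    data = requests.get(full_url).content
    with open(os.path.join("offline_copy", filename), "wb") as f:
        f.write(data)
```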
45,517,925 | 2017-08-05T03:37:00.000 | 0 | 0 | 0 | 0 | python,opencv,raspberry-pi,raspbian,opencv3.0 | 45,788,917 | 2 | true | 0 | 0 | If you haven't already done so, you should consider the following:
Reduce each image to the minimum size required for recognizing the target object of each classifier. If different objects require different resolutions, you can even use a set of copies of the original image at different sizes.
Identify search regions for each classifier and thereby reduce the search area. For example, if you are searching for face landmarks, you can define search regions for the left eye, right eye, nose, and mouth after running the face detector and finding the rectangle that contains the face.
I am not sure how much further code optimization will help, because OpenCV already does some hardware optimization. | 1 | 0 | 1 | I'm working on a Raspberry Pi, an embedded Linux platform with Raspbian Jessie where Python 2.7 is already installed, and I have OpenCV algorithms that must run in real time and apply several HAAR cascade classifiers to the same image. Is there any method to reduce the time of these operations, such as multithreading?
I have also heard about GPU computation, but I don't know where to start.
Thank you for the help. | How to run openCV algorithms in real-time on Raspberry PI3 | 1.2 | 0 | 0 | 669 |
45,518,829 | 2017-08-05T06:05:00.000 | 1 | 0 | 0 | 0 | python-2.7,odoo-10 | 45,519,483 | 1 | false | 1 | 0 | You need to create a scheduler that runs every day. It should find the expiry dates that fall within the next 10 days, and for those records trigger the mail.
Create a scheduler
Find the expiry dates
Create an email template
Trigger the email
Please refer to the sale subscription module; it has a subscription expiry reminder. | 1 | 0 | 0 | How do I calculate the expiry date in Odoo 10, and how do I notify customers by email/SMS 10 days beforehand?
For example:
If the expiry date is near, the customer gets a notification through mail or SMS.
Can anyone suggest any solution? | expiry date in odoo 10 | 0.197375 | 0 | 0 | 325 |
45,519,339 | 2017-08-05T07:16:00.000 | 1 | 0 | 0 | 0 | python,pyspark | 66,559,845 | 2 | false | 0 | 0 | You can pass multiple files as a single comma-separated string to --files, like this:
--files "filepath1,filepath2,filepath3" \
This worked for me. | 1 | 0 | 1 | I am new to Spark and am using Python to write jobs with pyspark. I wanted to run my script on a YARN cluster and remove the verbose logging by sending a log4j.properties file (to set the logging level to WARN) using the --files flag. I have a local CSV file that the script uses and I need to include this as well. How do I use the --files flag to include both files?
I am using the following command:
/opt/spark/bin/spark-submit --master yarn --deploy-mode cluster --num-executors 50 --executor-cores 2 --executor-memory 2G --files /opt/spark/conf/log4j.properties ./list.csv ./read_parquet.py
But I get the following error:
Error: Cannot load main class from JAR file:/opt/spark/conf/./list.csv
| Pyspark: How to use --files tag for multiple files while running job on Yarn cluster | 0.099668 | 0 | 0 | 4,088
45,519,810 | 2017-08-05T08:19:00.000 | 1 | 0 | 0 | 0 | python,django,database,celery,scheduled-tasks | 45,519,970 | 1 | true | 1 | 0 | The whole point of Celery is to work in exactly this way, i.e. as a distributed task server. You can spin up workers on as many machines as you like, and the broker (e.g. RabbitMQ) will distribute tasks to them as necessary.
I'm not sure what you're asking about data integrity, though. Data doesn't get "sent back" to the database; the workers connect directly to the database in exactly the same way as the rest of your Django code. | 1 | 0 | 0 | There's a lot of info out there and honestly it's a bit too much to digest and I'm a bit lost.
My web app has to do some very resource-intensive tasks. The standard setup right now is the app on one server and static/media on another for hosting. What I would like to do is set up Celery so I can call task.delay for these resource-intensive tasks.
I'd like to dedicate the resources of entirely separate servers to these resource-intensive tasks.
Here's the question: how do I set up Celery so that the .delay calls made from my main server (where the app is hosted) are sent to these separate servers?
Note: These functions will be kicking data back to the database / affecting models, so data integrity is important here. So, how does the retrieved data (assuming the above is possible) get sent back to the database from the separate servers while preserving integrity?
Is this possible, and if so where do I begin? Information overload!
If not what should I be doing / what am I doing wrong? | Django talking to multiple servers | 1.2 | 0 | 0 | 790 |
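A minimal Celery sketch of the setup the answer describes: the web box and the worker boxes share the same broker, and the .delay() call only enqueues the task. The broker URL and task body are placeholders:

```python
# tasks.py -- shared by the web server and every worker machine
from celery import Celery

# Placeholder broker URL; point it at your actual RabbitMQ/Redis host.
app = Celery("myproject", broker="amqp://guest@broker-host//")

@app.task
def heavy_job(record_id):
    # Runs on whichever worker machine picks the task up.
    # In a real project this body would use your Django ORM models,
    # which connect to the shared database directly.
    return record_id * 2

# On the web server you only enqueue the task:
#   heavy_job.delay(42)
# On each worker machine you run:
#   celery -A tasks worker --loglevel=info
```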
45,521,079 | 2017-08-05T10:53:00.000 | 12 | 0 | 0 | 0 | python,django,django-channels | 48,280,901 | 1 | true | 1 | 0 | To begin with, channels is nothing but an asynchronous task queue. It's very similar to celery, the major difference being in performance & reliability.
Channels is faster than Celery, but Celery is more reliable. To add more context: Channels executes a task only once (irrespective of whether it fails or succeeds). On the other hand, Celery retries a task until it fails a certain number of times or succeeds.
Now, coming to your questions, let's take this example:
Suppose you were to build Clash of Clans using Channels and web-sockets.
1) Yes, Channels is suitable for a real-time game as long as you write custom logic for the situations where a task in the async queue fails.
The web-sockets will send and receive messages via Channels. So, in the case where one player's request to deploy a troop on the battlefield is not successfully sent to the server, you need to write custom logic to handle this situation (like retrying a request at least 3 times before dumping it out of the task queue).
2) Not really. They are pretty much the same. Ultimately you'll have to use web-sockets and a queue where you can send/receive messages simultaneously.
3) Yes, you'll have to implement a web-socket client in your application (Android, iOS, desktop) which will send/receive messages from the backend via Channels. | 1 | 11 | 0 | I want to make a real-time game. I wanted to use NodeJS-SocketIO or aiohttp, until I came across django-channels and read its documentation.
This is a good module
Questions:
Is django-channels suitable for a real-time game?
Does django-channels have an advantage over aiohttp/NodeJS-SocketIO?
Is it suitable for all clients (Android, iOS, desktop)? | Is django-channels suitable for real time game? | 1.2 | 0 | 0 | 2,897
45,521,248 | 2017-08-05T11:12:00.000 | -1 | 1 | 0 | 0 | python,pdf,encoding,character-encoding,reportlab | 45,521,777 | 1 | false | 0 | 0 | I think you need to lower your sights a little. All UTF-8 characters is a really tall order. Do you really need Chinese, Japanese, Telugu, Hebrew, Arabic and emojis?
You have listed German, Polish and Russian. So you need a good coverage of Latin characters, plus Cyrillic.
On Windows, both Arial and Times New Roman will give you that. If you really do need Telugu and other East Asian writing systems then Arial Unicode MS has probably as much coverage as you are likely to want. | 1 | 0 | 0 | I'm using ReportLab to create invoices. Since my customers can have various characters in their names (german, polish, russian letters), I want them displayed correctly in my PDFs.
I know the key is to have a proper font style. Which font can handle all UTF-8 characters with no problems?
If there is no such font, how can I solve this? | UTF-8 font style for ReportLab | -0.197375 | 0 | 0 | 831 |
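A minimal ReportLab sketch of registering a Unicode TTF font so Latin plus Cyrillic names render correctly; the font file name is an assumption, so point it at any TTF with the coverage you need:

```python
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.pdfgen import canvas

# Assumed path: substitute a TTF file that actually exists on your system
pdfmetrics.registerFont(TTFont("InvoiceFont", "DejaVuSans.ttf"))

c = canvas.Canvas("invoice.pdf")
c.setFont("InvoiceFont", 12)
c.drawString(72, 720, u"Müller, Łukasz, Иванов")  # German, Polish, Russian names
c.save()
```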
45,523,749 | 2017-08-05T15:49:00.000 | 17 | 0 | 1 | 0 | python,pandas,indexing | 49,416,048 | 6 | false | 0 | 0 | Easiest I can think of is df.loc[start:end].iloc[:-1].
Chops off the last one. | 1 | 20 | 1 | When slicing a dataframe using loc,
df.loc[start:end]
both start and end are included. Is there an easy way to exclude the end when using loc? | Pandas slicing excluding the end | 1 | 0 | 0 | 11,186 |
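A tiny runnable example of the df.loc[start:end].iloc[:-1] trick from the answer; the index labels are made up:

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30, 40]}, index=["a", "b", "c", "d"])

print(df.loc["a":"c"])            # label slicing includes the end label "c"
print(df.loc["a":"c"].iloc[:-1])  # dropping the last row excludes "c"
```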
45,525,260 | 2017-08-05T18:37:00.000 | 1 | 0 | 1 | 0 | python,machine-learning,nlp,deep-learning,nltk | 45,703,053 | 6 | false | 0 | 0 | Let's say the text is "Please order a pizza for me" or "May I have a cab booking from uber".
Use a good library like NLTK and parse these sentences. As social English is generally grammatically incorrect, you might have to train your parser with your own broken-English corpora. Next, these are the steps you have to follow to get an idea of what a user wants.
Find the sentence boundaries (full stops) in a paragraph, keeping in mind abbreviations and lingo like "....", "???", etc.
Next, find all the verbs and noun phrases in individual sentences; this can be done through POS (part-of-speech) tagging with different libraries.
After that the real work starts. My approach would be to create a graph of verbs where similar verbs are close to each other and dissimilar verbs are very far apart.
Let's say you have words like arrange, instruction, command, directive, dictate, which are close to order. So if your user writes any one of the above verbs in their text, your algorithm will identify that the user really means to imply "order". You can also use the edges of that graph to specify the context in which the verb was used.
Now, you have to assign an action to this verb "order" based on the noun phrases which were parsed in the original sentence.
This is just a high-level explanation of this algorithm; it has many problems which need serious consideration, some of which are listed below:
Finding a similarity index between the root verb and the given verb in a very short time.
New words that don't have an entry in the graph. A possible approach is to update your graph by searching Google for the word, finding a context from the pages on which it was mentioned, and finding an appropriate place for this new word in the graph.
Similarity indexes of misspelled words with proper verbs or nouns.
If you want to build a more sophisticated model, you can construct a graph for every part of speech and select appropriate words from each graph to form sentences in response to the queries. The above-mentioned graph is meant for the verb part of speech. | 1 | 0 | 0 | I am working on automating the task flow of an application using text-based Natural Language Processing.
It is something like a chat application where the user can type in a text area. At the same time, Python code interprets what the user wants and performs the corresponding action.
Application has commands/actions like:
Create Task
Give Name to as t1
Add time to task
Connect t1 to t2
The users can type in chat (natural language). It will be like a general English conversation, for example:
Can you create a task with name t1 and assign time to it. Also, connect t1 to t2
I could write a rule-driven parser, but it would be limited to a few rules only.
Which approach or algorithm can I use to solve this task?
How can I map general English to command or action? | NLP general English to action | 0.033321 | 0 | 0 | 1,282 |
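A minimal sketch of the sentence-splitting and POS-tagging steps mentioned in the answer above, assuming NLTK is installed and its 'punkt' and 'averaged_perceptron_tagger' data have been downloaded:

```python
import nltk

# One-time downloads (assumed already done):
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

text = "Can you create a task with name t1 and assign time to it. Also, connect t1 to t2"

for sentence in nltk.sent_tokenize(text):
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    # Keep verbs (VB*) and nouns (NN*) as candidate actions and objects
    verbs = [w for w, tag in tagged if tag.startswith("VB")]
    nouns = [w for w, tag in tagged if tag.startswith("NN")]
    print(verbs, nouns)
```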
45,525,759 | 2017-08-05T19:40:00.000 | 0 | 0 | 0 | 0 | python,django,conventions | 45,525,793 | 2 | false | 1 | 0 | Usually, if I have stuff to be shared between different sections, I either:
1) make a lib and install it via pip
2) make a helper folder within the project | 1 | 1 | 0 | I have the following folder setup:
myproject
web
worker
The web project is a website containing the typical Django setup, including the models and some service classes which move the logic away from the views. The website will display data coming from a database. The worker folder contains 2 classes which fill the database and aggregate it. These 2 classes are like background processes. My question is, how should I structure this?
Should each worker class get its own application folder?
If so, I would prefer to move the models out of the web project and into the myproject folder, since they are shared between the applications. However, this seems to be against Django convention. Why so? And how would the convention handle this?
If not, where should I put these worker processes? And how should I run them?
Thanks in advance! | Django where to put the shared models and logic | 0 | 0 | 0 | 644 |
45,527,497 | 2017-08-06T00:18:00.000 | 0 | 0 | 0 | 0 | python-2.7,sqlite,electron,node-gyp | 45,533,423 | 1 | false | 0 | 0 | This has been resolved....
Uninstalled Python 2.7.13. Reinstalled, added path to PATH variable again, now command 'python' works just fine... | 1 | 0 | 0 | I am trying to include sqlite3 in an electron project I am getting my hands dirty with. I have never used electron, nor Node before, excuse my ignorance. I understand that to do this on Windows, I need Python installed, I need to download sqlite3, and I need to install it.
As per the NPM sqlite3 page, I am trying to install it using npm install --build-from-source
It always fails with
unpack_sqlite_dep
'python' is not recognized as an internal or external command,
operable program or batch file.
I have Python 2.7 installed and the path has been added to environment variable PATH. I can verify that if I type 'python' in cmd, I get the same response. BUT, if I type 'py', it works....
So, my question is: how can I make node-gyp use the 'py' command instead of 'python' when trying to unpack sqlite3?
If this is not possible, how can I make 'python' an acceptable command to use?
I am using Windows 10 if this helps. Also, please let me know if I can do this whole procedure in a different way.
Thanks for any help! | Failing to install sqlite3 plugin for electron project on windows | 0 | 1 | 0 | 259 |
45,527,740 | 2017-08-06T01:10:00.000 | 0 | 0 | 0 | 0 | python,zeromq,pyzmq | 45,527,870 | 2 | false | 0 | 0 | ZeroMQ provides feature-rich tools; all the rest is the designer's job:
Given the task description above, one possible approach is to create application-level logic, a distributed Finite State Automaton (dFSA) with a certain depth of process-state memory. The processors for each task phase report reaching any of the recognised states (again via a parallel ZeroMQ signalling/messaging infrastructure, with enter/exit-triggered state changes), and the dFSA logic can thus, on a global scale, orchestrate and operate the requested "side"-steps, "branches" and/or "re-submits" and similar crème de la crème tricks.
Each free worker simply always notifies the dFSA infrastructure of its exit-triggered state change when it finishes a task unit. Your dFSA infrastructure therefore always knows, and does not have to search ad hoc for, the address of such a worker, as it continuously keeps records of the free-state dFSA nodes to pick from. Re-discovery, watchdogs, heartbeats and re-confirmation handshaking schemes are also possible within the dFSA infrastructure signalling.
It is that simple. | 1 | 0 | 1 | I'm using a ROUTER-to-DEALER design and in between I'm doing some processing, mainly deserializing my data. But sometimes I need to re-send a payload on the backend, i.e. to a worker destination.
Re-sending a payload is easy, but I need to re-send it to a new worker (a free one, or at least not the same worker).
I noticed that the *_multipart functions hold three fields, with the first one being an address (of a worker?).
Q: Is there a way to find the address of a free worker? | How to re-send a payload to a free worker in ZeroMQ? | 0 | 0 | 1 | 56
45,529,142 | 2017-08-06T06:14:00.000 | 0 | 0 | 0 | 0 | python,mysql,django | 45,529,591 | 1 | false | 1 | 0 | inspectdb is far from perfect. If you have an existing db with a bit of complexity, you will probably end up changing a lot of the code generated by this command. Once you're done, by the way, it should work fine. What's your exact issue? If you run inspectdb and it creates a model for your table, you should be able to import it and use it like a normal model. Can you share more details or the errors you are getting while querying the table you're interested in? | 2 | 0 | 0 | I have two databases in MySQL that have tables built from another program I have written to get data, etc. However, I would like to use Django and am having trouble understanding the model/view after going through the tutorial and countless hours of googling. My problem is I just want to access and display the data. I tried to create routers and ran inspectdb to create models, only to get 1146 (table doesn't exist) issues. I have a unique key, let's say (a, b), and 6 other columns in the table. I just need to access those 6 columns row by row. I'm getting so many issues. If you need more details please let me know. Thank you. | Django and Premade mysql databases | 0 | 1 | 0 | 104
45,530,573 | 2017-08-06T09:25:00.000 | 0 | 0 | 0 | 1 | python,macos,homebrew | 45,530,632 | 3 | false | 0 | 0 | Update $PATH in your .bashrc file.
For example, add the following line to your .bashrc:
export PATH=/usr/local/Cellar/python/2.7.13_1/bin:$PATH | 1 | 2 | 0 | I've installed python via homebrew. It is located in:
/usr/local/Cellar/python/2.7.13_1
which should be right.
Now I am trying to use this Python installation, but "which python" only shows the macOS Python installation at "/usr/bin/python". So I checked the $PATH and it looks like everything should be OK.
"echo $PATH" results in this: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
I restarted the terminal window and this occurs every time. I also ran
"brew doctor" and no warnings appeared.
What I am using:
Standard macOS Terminal app
Does anybody have a clue how this problem could be solved? | Homebrew macOS - Python installation | 0 | 0 | 0 | 434
45,531,514 | 2017-08-06T11:23:00.000 | 0 | 0 | 0 | 0 | python,nltk | 45,533,385 | 2 | false | 0 | 0 | This is a well-known problem in NLP and it is often referred to as tokenization. I can think of two possible solutions:
try different NLTK tokenizers (e.g. the Twitter tokenizer), which may be able to cover all of your cases
run Named Entity Recognition (NER) on your sentences. This allows you to recognise entities present in the text. It could work because it can recognise "heart rate" as a single entity, and thus as a single token. | 1 | 0 | 1 | I am having some trouble with NLTK's FreqDist. Let me give you some context first:
I have built a web crawler that crawls webpages of companies selling wearable products (smartwatches etc.).
I am then doing some linguistic analysis and for that analysis I am also using some NLTK functions - in this case FreqDist.
nltk.FreqDist works fine in general - it does the job and does it well; I don't get any errors etc.
My only problem is that the phrase "heart rate" comes up often, and because I am generating a list of the most frequently used words, I get heart and rate separately, to the tune of a few hundred occurrences each.
Now of course rate and heart can both occur without being used as "heart rate", but how do I count the occurrences of "heart rate" instead of just the 2 words separately, and I do mean in an accurate way? I don't want to subtract one from the other in my current Counters or anything like that.
Thank you in advance! | NLTK FreqDist counting two words as one | 0 | 0 | 0 | 1,205 |
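One minimal way to keep "heart rate" together before building the frequency distribution is to merge the bigram into a single token ahead of tokenization (NLTK's MWETokenizer is another option); the sample text is made up:

```python
import nltk
from nltk import FreqDist

text = "The watch tracks heart rate. Heart rate alerts use your resting heart rate."

# Merge the multiword expression into one token before counting
normalized = text.lower().replace("heart rate", "heart_rate")
tokens = nltk.word_tokenize(normalized)  # requires the 'punkt' data

fdist = FreqDist(tokens)
print(fdist.most_common(5))  # 'heart_rate' now counts as a single word
```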
45,533,640 | 2017-08-06T15:36:00.000 | 2 | 0 | 0 | 1 | python,prolog,swi-prolog | 45,533,749 | 1 | true | 0 | 0 | You need to use the stdout/1 and stderr/1 options of process_create/3.
For example, here is a simple predicate that simply copies the process output to standard output:
output_from_process(Exec, Args) :-
process_create(Exec, Args, [stdout(pipe(Stream)),
stderr(pipe(Stream))]),
copy_stream_data(Stream, current_output),
% the process may terminate with any exit code.
catch(close(Stream), error(process_error(_,exit(_)), _), true).
You can adapt the copy_stream_data/2 call to write the output to any other stream. | 1 | 4 | 0 | How can I make a swi-prolog program that executes a Python file score.py and gets the output?
I've read about process_create/3 and exec/1 but I can't find much documentation | How to get output value from python script executed in swi-prolog | 1.2 | 0 | 0 | 403 |
45,535,616 | 2017-08-06T19:09:00.000 | 1 | 0 | 0 | 0 | python,postgresql | 45,537,959 | 1 | true | 0 | 0 | Pg 10 partitioning right now is functionally the same as 9.6, just with prettier notation. Pretty much anything you can do in Pg 10, you can also do in 9.6 with table-inheritance based partitioning, it's just not as convenient.
It looks like you may not have understood that table inheritance is used for partitioning in 9.6, since you refer to doing big UNIONs. This is unnecessary, PostgreSQL does it for you if you do inheritance-based partitioning. You can also have triggers that route inserts into the parent table into child tables, though it's more efficient for the application to route tuples like you suggest, by inserting directly into partitions. This will also work in PostgreSQL 10.
Pg's new built-in partitioning doesn't yet offer any new features you can't get with inheritance, like support for unique constraints across partitions, FKs referencing partitioned tables, etc. So there's really no reason to wait.
Just study up on how to do partitioning on 9.6.
I don't know if you can convert 9.6-style manual partitioning into PostgreSQL 10 native partitioning without copying the data. Ask on the mailing list or post a new specific question.
That said... often when people think they need partitioning, they don't. How sure are you that it's worth it? | 1 | 1 | 0 | I have a task which would really benefit from implementing partitioned tables, but I am torn because Postgres 10 will be coming out relatively soon.
If I just build normal tables and handle the logic with Python format strings to ensure that my data is loaded to the correct tables, can I turn this into a partition easily later?
Can I upgrade Postgres 9.6 to 10 right now? Or is that not advisable?
Should I install an extension like pg_partman?
My format string approach would just create separate tables (f{server}{source}{%Y%m}) and then I would union them together I suppose. Hopefully, I could eventually create a master table though without tearing anything down. | Postgres partitioning options right now? | 1.2 | 1 | 0 | 130 |
45,537,958 | 2017-08-07T00:35:00.000 | 0 | 0 | 0 | 0 | python,scripting,neural-network | 45,537,986 | 1 | false | 0 | 0 | Wrap it in a Python based web server listening on some agreed-on port. Hit it with HTTP requests when you want to supply a new file or retrieve results. | 1 | 0 | 1 | To be clear I have no idea what I'm doing here and any help would be useful.
I have a number of saved files, Keras neural network models and dataframes. I want to create a program that loads all of these files so that the data is there and waiting for when needed.
Any data sent to the algorithm will be standardised and fed into the neural networks.
The algorithm may be called hundreds of times in quick succession and so I don't want to have to import the model and standardisation parameters every time as it will slow everything down.
As I understand it the plan is to have this program running in the background on a server and then somehow call it when required.
How would I go about setting up something like this? I'm asking here first because I've never attempted anything like this before and I don't even know where to start. I'm really hoping you can help me find some direction or maybe provide an example of something similar. Even a search term that would help me research would be useful.
Many thanks | Python have program running and ready for when called | 0 | 0 | 0 | 28 |
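A minimal sketch of the 'wrap it in a web server' idea from the answer above: the model and preprocessing objects are loaded once at startup and reused for every request. Flask and Keras are assumed to be installed, and the file names are placeholders:

```python
from flask import Flask, request, jsonify
from keras.models import load_model
import numpy as np
import pandas as pd

app = Flask(__name__)

# Loaded once, when the server starts, not on every call
model = load_model("my_model.h5")             # placeholder model file
scaler_params = pd.read_pickle("scaler.pkl")  # placeholder preprocessing data

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.json["features"], dtype=float).reshape(1, -1)
    # Apply your real standardisation here using scaler_params
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=5000)
```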
45,538,613 | 2017-08-07T02:29:00.000 | 0 | 0 | 0 | 0 | python,chatbot,facebook-messenger-bot,facebook-chatbot | 45,578,431 | 1 | true | 0 | 0 | While postbacks return a payload, all a quick reply does is allow the users to send text by clicking a button. You can handle a quick reply the same way you would handle text input. | 1 | 0 | 0 | I've been struggling with this for a while and I'm just trying to get the payload that returns after someone clicks a quick reply. Has anyone dealt with this in python for messenger? | Getting quick replies from response in messenger bot | 1.2 | 0 | 1 | 352 |
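For reference, a small sketch of pulling the quick-reply payload out of a Messenger webhook event in Python; the JSON shape in the comments reflects the Messenger webhook format as I understand it, and the names are made up:

```python
def extract_quick_reply_payload(webhook_event):
    """Return the quick_reply payload from one messaging event, or None."""
    # A messaging event looks roughly like:
    # {"sender": {...}, "message": {"text": "...",
    #                               "quick_reply": {"payload": "MY_PAYLOAD"}}}
    message = webhook_event.get("message", {})
    quick_reply = message.get("quick_reply")
    return quick_reply["payload"] if quick_reply else None

# Hypothetical usage inside your webhook handler:
# for entry in data["entry"]:
#     for event in entry["messaging"]:
#         payload = extract_quick_reply_payload(event)
```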
45,539,241 | 2017-08-07T04:01:00.000 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 45,539,442 | 4 | false | 0 | 0 | make sure you have write permission in order to create a excel temporary lock file in said directory... | 3 | 0 | 0 | I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 0 | 1 | 0 | 2,830 |
45,539,241 | 2017-08-07T04:01:00.000 | 6 | 0 | 0 | 0 | python,excel,openpyxl | 50,027,342 | 4 | false | 0 | 0 | Windows does not let you modify open Excel files in another program -- only Excel may modify open Excel files. You must close the file before modifying it with the script. (This is one nice thing about *nix systems.) | 3 | 0 | 0 | I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 1 | 1 | 0 | 2,830 |
45,539,241 | 2017-08-07T04:01:00.000 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 50,026,988 | 4 | false | 0 | 0 | I've had this issue with Excel files that are located in synced OneDrive folders. If I copy the file to a unsynced directory, openpyxl no longer has problems reading the .xlsx file while it is open in Excel. | 3 | 0 | 0 | I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script.
When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission denied error.
I tried to remove the read only mode on the folder I'm working on, I have all permissions on the computer, I clearly specified the save directory of my excel workbooks.
Any idea on what could be the issue? | openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed | 0 | 1 | 0 | 2,830 |
45,540,633 | 2017-08-07T06:31:00.000 | 1 | 0 | 1 | 0 | python,variables | 45,540,781 | 1 | true | 0 | 0 | A DB is the best bet for your use case:
It gives you the flexibility to know which part of the data you have already processed, by having a status flag.
Persistent data (you can also have data replication).
You can scale easily if, in future, your application pulls more and more data.
2-10 updates per second is a good use case for a write-heavy application backed by a DB, as you will gather tons of data in a short duration. | 1 | 0 | 0 | I have a python file that is taking websocket data and constantly updating a giant list. It updates somewhere between 2 to 10 times a second. This file runs constantly.
I want to be able to call that list from a different file so this file can process that data and do something else with it.
Basically file 1 is a worker that keeps the current state in a list, I need to be able to get this state from file 2.
I have 2 questions:
Is there any way of doing this easily? I guess the most obvious answers are storing the list in a file or a DB, which leads me to my second question;
Given that the list is updating somewhere between 2 and 10 times a second, which would be better? a file or a db? can these IO functions handle these types of update speeds? | return data from another python file that's constantly updating | 1.2 | 0 | 1 | 25 |
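A minimal sqlite3 sketch of the DB approach from the answer above: the worker writes each update with a status flag, and the second script reads only unprocessed rows. The table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect("state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS updates ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " payload TEXT,"
    " processed INTEGER DEFAULT 0)"
)

# Writer side (the websocket worker): insert each update as it arrives
conn.execute("INSERT INTO updates (payload) VALUES (?)", ("some websocket data",))
conn.commit()

# Reader side (the second script): fetch and mark unprocessed rows
rows = conn.execute("SELECT id, payload FROM updates WHERE processed = 0").fetchall()
for row_id, payload in rows:
    # ... process payload ...
    conn.execute("UPDATE updates SET processed = 1 WHERE id = ?", (row_id,))
conn.commit()
```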
45,540,638 | 2017-08-07T06:31:00.000 | 0 | 0 | 0 | 0 | python,html,bokeh | 45,540,958 | 3 | false | 1 | 0 | Is your requirement to create an application using Python that users will access via a browser and update some data in a table?
Use Django or any web framework; basically, you are trying to build a web app!
or
If you are looking for something else, please describe your requirement more thoroughly. | 1 | 0 | 0 | I want to create a table in a browser that's been created with Python. That part can be done using the DataTable of the bokeh library. The problem is that I want to extract data from the table when a user gives his/her input in the table itself.
Is there any Python library I could use to do this? It would be better if I could do this with bokeh, though. | Extract user input to python from a table created in browser? | 0 | 0 | 1 | 1,099
45,542,610 | 2017-08-07T08:35:00.000 | 0 | 0 | 0 | 0 | python,excel,xlwings | 58,832,632 | 5 | false | 0 | 0 | I agree with you; the best solution is for everyone to have Anaconda installed on their computer. The best thing is that you do not need administrator privileges to install Anaconda; just use the option "just me". | 3 | 3 | 0 | I'm having trouble finding an answer to this - I'm writing some simple VBA with the goal that my colleagues can install it as an add in or custom tab. Coming from Python I would, of course, prefer to work with xlwings or pyxll, but as I understood it in order to call any python you would have to install xlwings on every computer?
The ideal scenario would be that I could develop excel add ins with xlwings or pyxll and export it as if it were a normal excel add in, so that my colleagues can install it easily. Unfortunately, I can't install all the required python modules on every target computer.
is this possible or just wishful thinking? | Distribute xlwings macro without xlwings installation? | 0 | 0 | 0 | 2,218 |
45,542,610 | 2017-08-07T08:35:00.000 | 0 | 0 | 0 | 0 | python,excel,xlwings | 70,219,841 | 5 | false | 0 | 0 | I face this situation daily at work. Your best option would be to convert your scripts to executables with pyinstaller & have your coworkers store them at X location (example: Desktop/ExcelScripts), then I'd recommend creating a button to run X script at that location. Cheers | 3 | 3 | 0 | I'm having trouble finding an answer to this - I'm writing some simple VBA with the goal that my colleagues can install it as an add in or custom tab. Coming from Python I would, of course, prefer to work with xlwings or pyxll, but as I understood it in order to call any python you would have to install xlwings on every computer?
The ideal scenario would be that I could develop excel add ins with xlwings or pyxll and export it as if it were a normal excel add in, so that my colleagues can install it easily. Unfortunately, I can't install all the required python modules on every target computer.
is this possible or just wishful thinking? | Distribute xlwings macro without xlwings installation? | 0 | 0 | 0 | 2,218 |
45,542,610 | 2017-08-07T08:35:00.000 | 1 | 0 | 0 | 0 | python,excel,xlwings | 45,656,282 | 5 | false | 0 | 0 | I'm not an expert, but in my experience this is wishful thinking.
First things first, your colleagues will need Python installed on their computer to run it from VBA, even with the xlwings add-in. However, if they have Python (and any relevant modules that you use, like numpy, xlwings, etc.) then you can just give your colleagues a copy of your macro (maybe with the xlwings source code above so they don't need to upload the add-in to VBA).
Again I am not an expert, but I have tried working around this issue and can't find a better solution. | 3 | 3 | 0 | I'm having trouble finding an answer to this - I'm writing some simple VBA with the goal that my colleagues can install it as an add in or custom tab. Coming from Python I would, of course, prefer to work with xlwings or pyxll, but as I understood it in order to call any python you would have to install xlwings on every computer?
The ideal scenario would be that I could develop excel add ins with xlwings or pyxll and export it as if it were a normal excel add in, so that my colleagues can install it easily. Unfortunately, I can't install all the required python modules on every target computer.
is this possible or just wishful thinking? | Distribute xlwings macro without xlwings installation? | 0.039979 | 0 | 0 | 2,218 |
45,543,511 | 2017-08-07T09:24:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo-10 | 45,543,831 | 2 | false | 1 | 0 | I had a little experience with Odoo 8. As far as I know, once you add a field to a model you can't simply remove it, because it passes through the ORM to the DBMS, so if you remove it from the model it will remain in the database table. Maybe you can try to remove the table from the database and re-execute your code (I used to do that on a test database; I DON'T recommend doing so on a database with real data). If that doesn't work, I think you should create a new database and install your module after you make your changes to the model. | 1 | 1 | 0 | I have a custom module with a model installed. I want to update the module as I have made a change in the design of the module. Is there any way to update the model structure, like removing or deleting a field in the model, in Odoo v10? | How to update Odoo Custom Module's Model | 0 | 0 | 0 | 1,414
45,543,799 | 2017-08-07T09:39:00.000 | 0 | 0 | 0 | 0 | python,pandas,pivot-table | 49,675,393 | 1 | false | 0 | 0 | If you're using DataFrame.pivot_table(), try using DataFrame.pivot() instead; it has much smaller memory consumption and is also faster.
This solution is only possible if you're not using a custom aggregation function to construct your pivot table and if the tuple of columns you're pivoting on doesn't have redundant combinations. | 1 | 1 | 1 | I have 300 million rows and 3 columns in pandas.
I want to pivot this to a wide format.
I estimate that the total memory in the current long format is 9.6 GB.
I arrived at this by doing
300,000,000 * 3 * 8 bytes per "cell".
I want to convert to a wide format with
1.9 million rows * 1000 columns.
I estimate that it should take 15.2 GB.
When I pivot, the memory usage goes to 64 GB (per the Linux resource monitor), up to 30 GB of swap gets used, and then the IPython kernel dies, which I am assuming is an out-of-memory related death.
Am I correct that during the generation of a pivot table the RAM usage will spike to more than the 64 GB of RAM that my desktop has? Why does generating a pivot table exceed system RAM? | Pandas: calculating how much RAM is needed to generate a pivot table? | 0 | 0 | 0 | 528 |
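A tiny illustration of the pivot() call the answer recommends; the column names are hypothetical:

```python
import pandas as pd

long_df = pd.DataFrame({
    "row_id": [1, 1, 2, 2],
    "feature": ["a", "b", "a", "b"],
    "value": [10.0, 20.0, 30.0, 40.0],
})

# Pure reshaping, no aggregation: valid only if (row_id, feature) pairs are unique
wide_df = long_df.pivot(index="row_id", columns="feature", values="value")
print(wide_df)
```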
45,544,378 | 2017-08-07T10:07:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 46,266,628 | 1 | false | 0 | 0 | Py2exe does not change the underlying operation of Python, so using it will not sidestep the issue. The multiprocessing module may be the best option. | 1 | 0 | 0 | Actually, my question is that parallelization in Python is implemented through multiprocessing, but if I implement multithreading, parallelization can't be achieved. If we convert the program with py2exe, does it work like parallel execution?
Thank you in advance | Python multi threading py2exe | 0 | 0 | 0 | 141 |
45,546,490 | 2017-08-07T12:05:00.000 | 0 | 0 | 0 | 0 | python,windows,wxpython | 45,688,399 | 2 | false | 0 | 1 | Well apparently if I exclude the style "wx.DD_DEFAULT_STYLE" then it works just fine.
So this works:
style = wx.DD_DIR_MUST_EXIST
But this doesn't focus the dialog properly on the defaultPath:
style = wx.DD_DEFAULT_STYLE | wx.DD_DIR_MUST_EXIST
I guess it must be a bug somewhere | 1 | 0 | 0 | I'm using wxPython DirDialog and it seems to have a bug.
When launching the dialog I specify a default path (defaultPath).
That path is being selected by the dialog but the dialog is not scrolled to the selected path.
Instead the dialog is scrolled to the top of the dialog.
This leaves the user to scroll A LOT down to reach the default path.
Very inconvenient.
Any way to correct this?
Using:
Python 2.6.5
wxPython 2.8.12.1
Windows 8.1 | wxPython DirDialog does not scroll to selected folder | 0 | 0 | 0 | 114 |
45,549,952 | 2017-08-07T14:56:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,machine-learning,pip | 47,348,361 | 4 | false | 0 | 0 | Everyone prefers different ways of using different Python versions. So what I prefer the most is to define different variables for your different Python versions and add/remove the variables in the System variable PATH to use a different Python Version.
So for example:
If you are using anaconda for Python 3, you may make a variable conda3 and add the following in it:
C:\Anaconda3;C:\Anaconda3\Library\mingw-w64\bin;C:\Anaconda3\Library\usr\bin;C:\Anaconda3\Library\bin;C:\Anaconda3\Scripts;
So of course, the values change depending on where you have installed Python.
In a similar way you can add Python 2, and depending on which version you want to use, you may add (taking the above example as the basis) %conda3% to your System PATH variable.
Note:
Even if you are adding different Python variables to the System PATH variable, the system stops searching for another Python version as soon as it finds the first one.
If you are using Anaconda for Python 3.6, I see no problem in installing TensorFlow for Python 3.6 - so you can simply do:
conda install tensorflow
and that should work | 1 | 0 | 0 | Is it possible to install 3 different python versions on windows 10 simultaneously? I'm using 2.7 for Udacity course, 3.6 for my college project and now I need to install Python 3.5 for "Tensorflow" package. Is it possible to have? Or is there any way to install tensorflow on python 3.6? Any suggestions will be appreciated. | About different python versions installation | 0 | 0 | 0 | 591 |
45,551,212 | 2017-08-07T16:03:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,neural-network,keras | 45,798,320 | 1 | false | 0 | 0 | OK, so I found a solution to this issue, though it adds a lot of overhead.
Initially I thought the Keras callback could be of use, but despite the fact that it provided the flexibility that I wanted (i.e. run only on test data or only on a subset, and not for every test), it seems that callbacks are only given summary data from the logs.
So the first step was to create a custom metric that would do the same calculation as any metric with the 2 arrays (the true values and the predicted values) and, once those calculations are done, output them to a file for later use.
Then, once we found a way to gather all the data for every sample, the next step was to implement a method that could give a good measure of error. I'm currently implementing a handful of methods, but the most fitting one seems to be Bayesian bootstrapping (user lmc2179 has a great Python implementation). I also implemented ensemble methods and Gaussian processes as alternatives, to use as other metrics, along with some other Bayesian methods.
I'll try to find out if there are internals in Keras that are set during the training and testing phases, to see if I can set a trigger for my metric. The main issue with using all the data is that you obtain a lot of unreliable data points at the start, since the network is not yet optimized. Some data filtering could be useful to remove a good amount of those points and improve the results of the error predictors.
I'll update if I find anything interesting. | 1 | 0 | 1 | This might sound silly but I'm just wondering about the possibility of modifying a neural network to obtain a probability density function rather than a single value when you are trying to predict a scalar. I know that when you are trying to classify images or words you can get a probability for each class, so I'm thinking there might be a way to do something similar with a continuous value and plot it. (Similar to the posterior plot with bayesian optimisation)
Such details could be interesting when deploying a model for prediction and could provide more flexibility than a single value.
Does anyone know a way to obtain such an output?
Thanks! | NN: outputting a probability density function instead of a single value | 0 | 0 | 0 | 117 |
45,551,570 | 2017-08-07T16:24:00.000 | 1 | 0 | 0 | 0 | python,image,python-3.x | 45,551,702 | 1 | false | 0 | 0 | Images have so many formats: compressed, uncompressed, black & white or colored; they may be flat or layered; they may be constructed as raster or vector images. So the answer is generally NO. | 1 | 1 | 1 | I have been looking for a few hours now and I can't find an answer to whether I can rescale an image without using an imaging library. I am using Python 3; I don't really know if that matters. Thank you | Can I rescale an image without using an imaging library? | 0.197375 | 0 | 0 | 39
45,555,122 | 2017-08-07T20:22:00.000 | 0 | 0 | 0 | 0 | python,keras,lstm,rnn | 45,555,309 | 1 | false | 0 | 0 | Data for Keras LSTM models is always in the form of (batch_size, n_steps, n_features). When you use shuffle=True, you shuffle along the batch_size dimension, thereby retaining the natural order of each sequence that is n_steps long.
In cases where each of your batch_size sequences is unrelated to the others, it is natural to use a stateless model. Each array still contains order (a time series), but does not depend on the others in your batch. | 1 | 1 | 1 | The main purpose of the LSTM is to utilize its memory property. Based on that, what is the point of a stateless LSTM existing? Don’t we “convert” it into a simple NN by doing that?
In other words: does the stateless use of an LSTM aim to model the sequences (windows) in the input data - if we apply shuffle=False in the fit call in Keras - (e.g. for a window of 10 time steps, capture any pattern between 10-character words)? If yes, why don’t we convert the initial input data to match the form of the sequences under inspection and then use a plain NN?
If we choose shuffle=True, then we are losing any information that could be found in our data (e.g. time-series data - sequences), aren't we? In that case I would expect it to behave similarly to a plain NN and to get the same results between the two by setting the same random seed.
Am I missing something in my thinking?
Thanks! | What is the point of stateless LSTM to exists? | 0 | 0 | 0 | 408 |
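A minimal Keras sketch of the (batch_size, n_steps, n_features) layout mentioned in the answer, with made-up dimensions; shuffle=True in fit() reorders whole sequences along the first axis, while each n_steps window stays intact:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

n_samples, n_steps, n_features = 100, 10, 3
X = np.random.rand(n_samples, n_steps, n_features)
y = np.random.rand(n_samples, 1)

model = Sequential([
    LSTM(16, input_shape=(n_steps, n_features)),  # stateless by default
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# shuffle=True reorders the 100 sequences, but each 10-step window stays intact
model.fit(X, y, epochs=2, batch_size=16, shuffle=True, verbose=0)
```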
45,556,834 | 2017-08-07T22:55:00.000 | 1 | 0 | 1 | 0 | python,dictionary,discord | 45,600,608 | 1 | false | 0 | 0 | Here's how I ended up solving my problem:
temp = copy.deepcopy(list(members)) | 1 | 0 | 0 | The current situation:
I'm using Discord's API to retrieve a dictionary of member objects in my server. This dictionary is constantly changing in size as new members join and old members leave.
I currently have a program that has a run time of around 30 minutes and accesses this dictionary of member objects so it's guaranteed that this dictionary changes size as I iterate over it in my program; this causes an error in my for loop. I also can't seem to deepcopy this dictionary;
TypeError: can't pickle dict_values objects
Any ideas of how I can work around this problem?
Code:
for i in members:
do something; <--- while this is happening members changes in size
Trying this also doesn't work:
temp = copy.deepcopy(members)
This is what the dict is: dict_values([<discord.member.Member object at 0x1094b3268>, <discord.member.Member object at 0x1094b32f0>, etc | How do I deal with a dictionary of objects that is constantly changing in Python? | 0.197375 | 0 | 0 | 1,576 |
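A small sketch of why the list(...) snapshot in the answer works: deepcopy cannot handle the raw dict_values view (that is the TypeError from the question), but materialising it with list() first gives a stable copy, and often iterating over a plain snapshot is enough. The members dict here is a stand-in for the Discord member mapping:

```python
import copy

members = {"id1": "Alice", "id2": "Bob"}   # stand-in for the Discord members dict
view = members.values()                    # a live dict_values view

# deepcopy of the raw view is what raised the TypeError in the question;
# list() materialises the view into a normal list first
snapshot = copy.deepcopy(list(view))

# Often a plain snapshot is enough: iterate over it while the dict keeps changing
for member in list(members.values()):
    print(member)
```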
45,557,554 | 2017-08-08T00:33:00.000 | 0 | 1 | 0 | 0 | python,facebook,facebook-graph-api,bots,messenger | 45,558,293 | 2 | true | 0 | 0 | Think I figured it out.. User clicks ad -> opens up a website to get the pixel conversion hit -> immediate re-direct back to messenger. | 2 | 0 | 0 | I'm creating a Facebook Messenger Bot thats powered by a python backend.
I'm promoting my bot via FB ads and am trying to figure out if there's any possible way to use Pixel's conversion tracking to improve metrics (pretty sure Facebook's ad placement tries to optimize based on conversion results).
Anyone know if this is possible? Everything I'm finding so far is javascript code that you need to put on your website, and I don't have or need a website for my bot.
Thanks! | Facebook Ad Pixel Conversion tracking for Messenger Bot | 1.2 | 0 | 1 | 632 |
45,557,554 | 2017-08-08T00:33:00.000 | 0 | 1 | 0 | 0 | python,facebook,facebook-graph-api,bots,messenger | 45,763,816 | 2 | false | 0 | 0 | "User clicks ad
-> opens up a website to get the pixel conversion hit
-> immediate re-direct back to messenger"
I don't see how this is helping, as the pixel conversion hit happens before any action happens within the chat.
If you want to track a certain action within the chat, couldn't you redirect to a website and re-direct back to the chat from within the chat, say in the middle of the question flow? | 2 | 0 | 0 | I'm creating a Facebook Messenger Bot thats powered by a python backend.
I'm promoting my bot via FB ads and am trying to figure out if there's any possible way to use Pixel's conversion tracking to improve metrics (pretty sure Facebook's ad placement tries to optimize based on conversion results).
Anyone know if this is possible? Everything I'm finding so far is javascript code that you need to put on your website, and I don't have or need a website for my bot.
Thanks! | Facebook Ad Pixel Conversion tracking for Messenger Bot | 0 | 0 | 1 | 632 |
45,557,769 | 2017-08-08T01:06:00.000 | 2 | 0 | 1 | 0 | python-3.x,network-programming,ubuntu-14.04,python-3.5 | 45,557,803 | 2 | true | 0 | 0 | do I have to run the socket.read() in while true? Is there any better way to avoid all those unnecessary cpu-cycles?
It's a blocking read(). So your process (thread) is essentially sleeping while awaiting the next network communication, rather than consuming CPU cycles. | 1 | 1 | 0 | I have to write a simple tcp forwarder in python which will run for ever. I'll get data in 1 min intervals. So do I have to run the socket.read() in while true?
Is there any better way to avoid all those unnecessary cpu-cycles?
And one more thing, the socket.read() is in a thread. | python3 socket reading, avoid while True or best practice | 1.2 | 0 | 1 | 244 |
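A minimal sketch of the point made in the answer: recv() on a plain blocking socket sleeps until data arrives, so the while True loop does not burn CPU. The host and port are placeholders:

```python
import socket

HOST, PORT = "127.0.0.1", 9000   # placeholder address of the upstream sender

with socket.create_connection((HOST, PORT)) as sock:
    while True:
        data = sock.recv(4096)   # blocks (sleeps) until data arrives or the peer closes
        if not data:             # empty bytes means the connection was closed
            break
        # forward / process `data` here
        print(len(data), "bytes received")
```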
45,557,791 | 2017-08-08T01:09:00.000 | 1 | 1 | 1 | 0 | python,git,python-2.7,github | 45,558,306 | 2 | false | 0 | 0 | Personally I would clone it locally and then reference the module from there. If a bug suddenly surfaces in the most recent commits of the module, it may impact your application right away. Keeping a stable version locally eliminates one more place to check when debugging your application.
Of course, if you pull the module from GitHub directly then you'll get all the latest updates and features, but I would only do that if the module is thoroughly tested before changes get committed.
This is just my two cents. Hope that helps. | 1 | 0 | 0 | I am working in Python 2.7 on some code which needs to import a module from another GitHub repository. Any suggestions on what's the best way to import the module? I could git clone the other GitHub repository locally, but what if there is a change I am not aware of, so I still need to sync? Or should I pull the code from GitHub directly? Thanks in advance. | Suggestion on import python module from another github project | 0.099668 | 0 | 0 | 377
45,559,360 | 2017-08-08T04:34:00.000 | 0 | 0 | 1 | 1 | python,linux,python-2.7,redhat,centos6 | 45,559,463 | 2 | false | 0 | 0 | Make sure you have extracted the archive and are in the directory where the Makefile resides. | 1 | 0 | 0 | Please, can anyone help with how to install Python or update it to a new version on CentOS 6.5?
I am getting the below error while installing from the .tar.gz after running the make command:
make: *** No targets specified and no makefile found. Stop.
Kindly, can anyone help?
Regards,
Sriram | Not able to install python new version 2.7.8 on centos | 0 | 0 | 0 | 98 |
45,567,986 | 2017-08-08T12:09:00.000 | 3 | 0 | 1 | 0 | python-3.x | 56,149,833 | 6 | false | 0 | 0 | This worked for me. I am also using Python 3.7.3 and used pip install virtualenv first. With this I did not have mkvirtualenv or workon in the Python Scripts folder. Once I ran pip install virtualenvwrapper-win, both .bat files were added to my 3.7.3 Scripts folder. | 3 | 5 | 0 | I am using advanced Python as well as Pycharm (Up to date as of 2017) when I am using pip/virtual env install.
I got this error:
'virtualenv' is not recognized as an internal or external command,
operable program or batch file.
Could advise a solution for this?
Thanks. | Python 3.6 . 'virtualenv' is not recognized as an internal or external command, operable program or batch file | 0.099668 | 0 | 0 | 24,305 |
45,567,986 | 2017-08-08T12:09:00.000 | 2 | 0 | 1 | 0 | python-3.x | 55,760,181 | 6 | false | 0 | 0 | I had a similar problem using Python 3.7.3 on Windows 10 - 64 Bit and in my case it turned out that I had installed the incorrect version of the virtualenvwrapper. I had used the command
pip install virtualenvwrapper
which installed successfully, but if you're on Windows you need to make sure you run this command
pip install virtualenvwrapper-win
with the "-win" at the end.
I had to reinstall it and after that my command mkvirtualenv project_name worked fine.
Hope this is useful to someone else out there. | 3 | 5 | 0 | I am using advanced Python as well as Pycharm (Up to date as of 2017) when I am using pip/virtual env install.
I got this error:
'virtualenv' is not recognized as an internal or external command,
operable program or batch file.
Could advise a solution for this?
Thanks. | Python 3.6 . 'virtualenv' is not recognized as an internal or external command, operable program or batch file | 0.066568 | 0 | 0 | 24,305 |
45,567,986 | 2017-08-08T12:09:00.000 | 0 | 0 | 1 | 0 | python-3.x | 63,209,722 | 6 | false | 0 | 0 | The following worked well for me:
pip install virtualenv
pip install virtualenvwrapper-win
mkvirtualenv project_name | 3 | 5 | 0 | I am using advanced Python as well as Pycharm (Up to date as of 2017) when I am using pip/virtual env install.
I got this error:
'virtualenv' is not recognized as an internal or external command,
operable program or batch file.
Could advise a solution for this?
Thanks. | Python 3.6 . 'virtualenv' is not recognized as an internal or external command, operable program or batch file | 0 | 0 | 0 | 24,305 |
45,574,177 | 2017-08-08T17:04:00.000 | 1 | 0 | 0 | 0 | python,django,session-cookies,django-settings,django-sessions | 52,615,621 | 1 | false | 1 | 0 | I solved it this way (assumed on Unix based OS):
first create new value in /etc/hosts:
127.0.0.1 {your local testdomain}
add {your local testdomain} to ALLOWED_HOSTS in settings.py
Open your application e.g. on {your local testdomain}:8000
Open admin interface on localhost:8000/admin
Because of the cookie policy the session data is stored per domain. | 1 | 1 | 0 | I have two applications in Django (one for admin and one for regular users). I want to allow the admin user to login in admin panel and also login in the home page as a regular user (two different users with different credentials).
I know that the session is saved as a cookie, so my best guess is that I have to use a different SESSION_COOKIE_NAME in each app, but I don't know if this is the best approach.
How can I set different login sessions for each app?. | Django: Two types of users logged in at same time | 0.197375 | 0 | 0 | 634 |
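The SESSION_COOKIE_NAME idea mentioned in the question is usually enough on its own; a minimal sketch, assuming the admin deployment and the public site run from separate (hypothetical) settings modules:
# settings_admin.py -- hypothetical settings module for the admin deployment
from .settings_base import *             # assumed shared base settings
SESSION_COOKIE_NAME = "admin_sessionid"   # admin login stored in its own cookie
CSRF_COOKIE_NAME = "admin_csrftoken"
# settings_site.py -- hypothetical settings module for the regular-user site
# SESSION_COOKIE_NAME = "site_sessionid"
# CSRF_COOKIE_NAME = "site_csrftoken"
Because the cookies no longer share a name, the browser keeps both sessions, so an admin login and a regular-user login can coexist.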
45,575,016 | 2017-08-08T17:56:00.000 | 0 | 0 | 1 | 0 | python,path,seaborn | 45,575,093 | 1 | false | 0 | 0 | You probably want to use
plot_name.savefig(plot_path)
instead of
plot_name.savefig('plot_path') (note no '-s). | 1 | 1 | 0 | I created a path variable for my project using
proj_path = pathlib.Path('C:/users/data/lives/here')
I now want to save a seaborn plot as png so I created a new path variable for the file
plot_path = proj_path.joinpath('plot_name.png')
but when I call plot_name.savefig(plot_path) returns
TypeError: Object does not appear to be a 8-bit string path or a Python file-like object
What path format is accepted by savefig and how do I convert plot_path? | What format of path should be used for savefig? | 0 | 0 | 0 | 670 |
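A self-contained sketch of the fix from the answer above; wrapping the path in str() is a safe fallback on matplotlib versions that predate pathlib support:
import pathlib
import matplotlib
matplotlib.use("Agg")                  # headless backend so the script can run without a display
import matplotlib.pyplot as plt
proj_path = pathlib.Path("C:/users/data/lives/here")   # path from the question
plot_path = proj_path / "plot_name.png"
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1, 3, 2])          # stand-in for the seaborn plot
fig.savefig(str(plot_path))            # str() keeps older matplotlib versions happy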
45,575,850 | 2017-08-08T18:45:00.000 | 0 | 0 | 0 | 0 | python,gis | 45,592,372 | 1 | false | 0 | 0 | I found that it can be done with two rectangles that span the two hemispheres. For level=0, an example is below:
from s2sphere import *
region1=LatLngRect(LatLng.from_degrees(-90,0),LatLng.from_degrees(90,180))
r1=RegionCoverer()
r1.min_level,r1.max_level=(0,0)
cell_IDs1 = r1.get_covering(region1)
region2=LatLngRect(LatLng.from_degrees(-90,180),LatLng.from_degrees(90,0))
r2=RegionCoverer()
r2.min_level,r2.max_level=(0,0)
cell_IDs2 = r2.get_covering(region2)
all_cell_IDs = set(cell_IDs1) | set(cell_IDs2)
The last line glues the hemispheres. Printing the cell lat-longs
for i in all_cell_IDs: print i.to_lat_lng()
gives:
LatLng: 0.0,0.0
LatLng: 0.0,90.0
LatLng: -0.0,-180.0
LatLng: -0.0,-90.0
LatLng: 90.0,-180.0
LatLng: -90.0,0.0
Which are the six sides of a cube. As expected, with level=1, there will be 24 cells. | 1 | 0 | 0 | I am new to the python s2sphere library. At a given level, I want a list of all cell IDs. I tried using a rectangle bounded by the poles:
region = s2sphere.LatLngRect(LatLng.from_degrees(-90,0),LatLng.from_degrees(90,0))
but region.area() gives 0. I could take the union of many smaller rectangles, but that seems messy. Is there an elegant way to do this? | Python S2/S2sphere library - find all s2 cells on earth at a given level | 0 | 0 | 0 | 545 |
45,577,630 | 2017-08-08T20:38:00.000 | 25 | 0 | 0 | 0 | python,python-3.x,pandas | 45,577,693 | 1 | false | 0 | 0 | print(df2[['col1', 'col2', 'col3']].head(10)) will select the top 10 rows from columns 'col1', 'col2', and 'col3' from the dataframe without modifying the dataframe. | 1 | 11 | 1 | How do you print (in the terminal) a subset of columns from a pandas dataframe?
I don't want to remove any columns from the dataframe; I just want to see a few columns in the terminal to get an idea of how the data is pulling through.
Right now, I have print(df2.head(10)) which prints the first 10 rows of the dataframe, but how to I choose a few columns to print? Can you choose columns by their indexed number and/or name? | Print sample set of columns from dataframe in Pandas? | 1 | 0 | 0 | 31,744 |
45,578,060 | 2017-08-08T21:08:00.000 | 11 | 0 | 0 | 0 | javascript,python | 45,578,326 | 2 | true | 1 | 0 | Apache is a web server, flask is a web framework in python, websockets are a protocol, and cgi is something totally different, and none of them help you on the front end.
You could deploy a simple backend in flask or django or pylons or any other python web framework. I like django, but it may be a little heavy for your purpose, flask is a bit more lightweight. You deploy them to a server with a web server installed and use something like apache to distribute.
Then you need a front end and a way of delivering your front end. Flask / Django are both fully capable of doing so in conjunction with a web server, or you could use a static file server like Amazon S3.
On your front end, you need to load D3 and probably some kind of utility like jQuery to load and parse your data from the back end, then use D3 however you like to present it on screen. | 2 | 11 | 0 | I'm currently working on a project that involves parsing through a user-supplied file, doing computations with that data, and visualizing the results using graphing utilities. Right now, I'm stuck with using Python as the back-end because it has scientific libraries unavailable in JavaScript, but I want to move the entire tool to a web server, where I can do much slicker visualizations using D3.js.
The workflow would be something like: obtain the file contents from the browser, execute the Python script with the contents, return jsonified objects of computed values, and plot those objects using D3. I already have the back-end and front-end working by themselves, but want to know: How can I go about bridging the two? From what I've gathered, I need to do something along the lines of launching a server, sending AJAX requests to the server, and retrieving data from the server. But with the number of frameworks out there (Flask, cgi, apache, websockets, etc.), I'm not really sure where to start. This will likely only be a very simple web app with just a file submit page and a data visualization page. Any help is appreciated! | Bridging a Python back-end and JavaScript front-end | 1.2 | 0 | 0 | 27,397 |
45,578,060 | 2017-08-08T21:08:00.000 | 7 | 0 | 0 | 0 | javascript,python | 45,578,190 | 2 | false | 1 | 0 | Flask is easy to get up and running and is Python based. It works well with REST APIs and data sent by JSON (or JSON API).
This is one solution with which I have some experience and which seems to work well and is not hard to get up and running (and natural to work with Python). I can't tell you whether it is the best solution for your needs, but it should work.
If you are overwhelmed and don't know where to start, you can pick one of the options and google search for a tutorial. With a decent tutorial, you should have an example up and running by the end of the tutorial, and then you will know if you are comfortable working with it and have an idea whether it will meet your needs.
Then you could do a proof-of-concept; make a small app that just handles one small part (the one you are most concerned about handling, perhaps) and write something which will do it.
By then, you can be pretty sure you have a good tool for the purpose (unless you were convinced otherwise by the proof-of-concept -- in which case, try again with another option :-)) | 2 | 11 | 0 | I'm currently working on a project that involves parsing through a user-supplied file, doing computations with that data, and visualizing the results using graphing utilities. Right now, I'm stuck with using Python as the back-end because it has scientific libraries unavailable in JavaScript, but I want to move the entire tool to a web server, where I can do much slicker visualizations using D3.js.
The workflow would be something like: obtain the file contents from the browser, execute the Python script with the contents, return jsonified objects of computed values, and plot those objects using D3. I already have the back-end and front-end working by themselves, but want to know: How can I go about bridging the two? From what I've gathered, I need to do something along the lines of launching a server, sending AJAX requests to the server, and retrieving data from the server. But with the number of frameworks out there (Flask, cgi, apache, websockets, etc.), I'm not really sure where to start. This will likely only be a very simple web app with just a file submit page and a data visualization page. Any help is appreciated! | Bridging a Python back-end and JavaScript front-end | 1 | 0 | 0 | 27,397 |
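Both answers above point at a small Flask backend; a minimal sketch of the upload-compute-JSON round trip they describe (the endpoint name and the toy computation are placeholders):
from flask import Flask, jsonify, request
app = Flask(__name__)
@app.route("/compute", methods=["POST"])        # hypothetical endpoint the D3 front end POSTs to
def compute():
    uploaded = request.files["datafile"]        # file supplied by the browser
    values = [float(line) for line in uploaded.read().decode("utf-8").splitlines() if line.strip()]
    result = {"count": len(values), "mean": sum(values) / len(values) if values else None}
    return jsonify(result)                      # D3 fetches this JSON and draws the plots
if __name__ == "__main__":
    app.run(port=5000)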
45,580,435 | 2017-08-09T01:44:00.000 | 2 | 0 | 0 | 0 | python,django,django-forms | 52,530,445 | 2 | false | 1 | 0 | As far as i have understood ..all the implicit validations are performed using clean..
which is performed when we check for the form validity using is_valid:
but we can add our own validations to it by overriding the function clean():
what we have to do is we call the super().clean() such that all the implicit validations are done by the clean() defined by django and then add our own validations to it if necessary...
when you call the super().clean() it returns the dictionary containing the cleaned_data..
you can store it in a variable ...
or else you can access the dictionary using self.cleaned_data | 1 | 0 | 0 | This is an obvious question for everyone, but I don't understand what the term "clean" means. I can use clean_data, and use form validation on my forms. However, I still don't understand what this means.
In order to use validation, do I always need to use the keyword "clean"? | What does clean mean in Django? | 0.197375 | 0 | 0 | 3,453 |
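A minimal sketch of overriding clean() as described in the answer above (the form and its fields are purely illustrative):
from django import forms
class SignupForm(forms.Form):
    password = forms.CharField(widget=forms.PasswordInput)
    confirm = forms.CharField(widget=forms.PasswordInput)
    def clean(self):
        cleaned_data = super(SignupForm, self).clean()   # Django's implicit validation runs here
        if cleaned_data.get("password") != cleaned_data.get("confirm"):
            raise forms.ValidationError("Passwords do not match")
        return cleaned_data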
45,582,182 | 2017-08-09T05:18:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,deep-learning,artificial-intelligence | 45,587,277 | 1 | true | 0 | 0 | A classifier actually gives you a probability of item belonging to a category, unless you add a final layer or post-processing that translates those probabilities to one and zeros. So, you can define a certain confidence threshold for probabilities and if classifier does not output probabilities above the threshold then call the output undecided.
An "audi" can still have features that make network believe it is tree for example. | 1 | 0 | 1 | A machine learning model has been trained to recognize the name of Animals and Plants. If suppose an automobile name is given, is it possible to say that the given name doesn't belong to the category animals or plants. If possible, kindly mention the methodology or algorithm which achieves this scenario.
E.g. If 'Lion' or 'Coconut Tree' is given the model will be predicting either 'Animals' or 'Trees' category. If suppose, 'Audi' is given, is it possible to say that the given item belongs neither to 'Animals' or 'Plants'. (Note : I have heard that the machine learning model will try to fit into either one the category). | Possibility of identification of non trained item in Machine Learning | 1.2 | 0 | 0 | 48 |
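A minimal sketch of the thresholding idea from the answer, assuming the model outputs class probabilities (the 0.8 cutoff is an arbitrary choice):
import numpy as np
def predict_with_reject(probabilities, labels, threshold=0.8):
    """Return a label only when the model is confident enough, otherwise 'undecided'."""
    probabilities = np.asarray(probabilities)
    best = int(np.argmax(probabilities))
    return labels[best] if probabilities[best] >= threshold else "undecided"
print(predict_with_reject([0.55, 0.45], ["Animals", "Plants"]))  # -> undecided
print(predict_with_reject([0.95, 0.05], ["Animals", "Plants"]))  # -> Animals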
45,585,796 | 2017-08-09T08:46:00.000 | 0 | 0 | 1 | 1 | python | 45,588,633 | 3 | false | 0 | 0 | You could try the embeddable version (Really a zipped portable version), but I'm not sure about dependencies management (i.e. pip) and path variables and whatnot. | 1 | 1 | 0 | I am trying to install Python 3.6.2 on a windows vps I have but I need admin rights to do it.
I tried various different methods but none of them worked.
There is no MSI version for python 3 so that does not work either.
Any ideas? | Install Python 3.6.2 on Windows without admin rights | 0 | 0 | 0 | 4,368 |
45,591,428 | 2017-08-09T13:00:00.000 | 0 | 0 | 0 | 1 | python,working-directory | 45,591,819 | 5 | false | 0 | 0 | os.getcwd() has nothing to do with OSX in particular. It simply returns the directory/location of the source-file. If my source-file is on my desktop it would return C:\Users\Dave\Desktop\ or let say the source-file is saved on an external storage device it could return something like G:\Programs\. It is the same for both unix-based and Windows systems. | 2 | 7 | 0 | My book states:
Every program that runs on your computer has a current working directory, or cwd. Any filenames or paths that do not begin with the root folder are assumed to be under the current working directory
As I am on OSX, my root folder is /. When I type in os.getcwd() in my Python shell, I get /Users/apple/Documents. Why am I getting the Documents folder in my cwd? Is it saying that Python is using Documents folder? Isn't there any path heading to Python that begins with / (the root folder)? Also, does every program have a different cwd? | What exactly is current working directory? | 0 | 0 | 0 | 16,071 |
45,591,428 | 2017-08-09T13:00:00.000 | 0 | 0 | 0 | 1 | python,working-directory | 45,591,529 | 5 | false | 0 | 0 | Python is usually (except if you are working with virtual environments) accessible from any of your directory. You can check the variables in your path and Python should be available. So the directory you get when you ask Python is the one in which you started Python. Change directory in your shell before starting Python and you will see you will it. | 2 | 7 | 0 | My book states:
Every program that runs on your computer has a current working directory, or cwd. Any filenames or paths that do not begin with the root folder are assumed to be under the current working directory
As I am on OSX, my root folder is /. When I type in os.getcwd() in my Python shell, I get /Users/apple/Documents. Why am I getting the Documents folder in my cwd? Is it saying that Python is using Documents folder? Isn't there any path heading to Python that begins with / (the root folder)? Also, does every program have a different cwd? | What exactly is current working directory? | 0 | 0 | 0 | 16,071 |
45,593,421 | 2017-08-09T14:23:00.000 | 1 | 0 | 1 | 0 | python,django,pycharm | 45,594,147 | 2 | false | 1 | 0 | So apparently all I needed to do was close Pycharm and start it up again. | 1 | 0 | 0 | I'm having a bit of trouble with Pycharm. I tried opening a project with it, one that I know has files and folders in it, but for some reason, the project view and the editor is not visible at all. I checked one of the modules, and there is code in it, so I don't know why I can't get these things to show properly. | Pycharm is not showing the project view or editor of a project | 0.099668 | 0 | 0 | 1,576 |
45,593,608 | 2017-08-09T14:31:00.000 | 1 | 1 | 0 | 0 | php,python,mysql | 45,593,914 | 3 | false | 0 | 0 | on the terminal of your raspi use the following command:
mysql -u <username> -p -h <hostname or IP address> --port <port>
where you switch out your hostname with your ip address. since currently you can only connect via local host | 1 | 0 | 0 | I would like to host a database on my raspberry pi to which I can access from any device. I would like to access the contents of the database using python.
What I've done so far:
I installed the necessary mysql packages, including apache 2.
I created my first database which I named test.
I wrote a simple php
script that connects and displays all the contents of my simple
database. The script is located on the raspberry pi at /var/www/html
and is executed when I enter the following from my laptop
(192.168.3.14/select.php)
Now my goal is to be able to connect to the database using python from my laptop. But I seem to have an error connecting to it, this is what I wrote to connect to it.
db = MySQLdb.connect("192.168.3.14","root","12345","test" )
Any help or direction is appreciated. | Raspberry Pi Database Server | 0.066568 | 1 | 0 | 104 |
45,594,748 | 2017-08-09T15:22:00.000 | 1 | 0 | 0 | 0 | python-2.7,robotframework | 45,595,174 | 1 | false | 1 | 0 | There is no way to cause that keyword to ignore parts of the HTML. You will have to write your own keyword, or modify the html before doing the compare. | 1 | 0 | 0 | Im trying to compare the page source with Robot Framework. I added the robotframework-difflibrary for extra compare, however I can not send a wildcard for check.
In my page source there is always the date and time which will never be the same. So my result will always be false because the date / time part is different.
Is it possible to let Robot Framework to ignore some parts of the HTML?
Example of time / date HTML tag:
<td class="r">12:43:01
<td class="r" width="10%">8-8-2017
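One way to implement the "modify the html before doing the compare" suggestion is a small custom keyword library that blanks out the volatile time/date cells first; the regular expressions below are assumptions based on the sample cells in the question:
import re
def normalize_volatile_cells(html):
    """Replace time and date values so two page sources can be compared."""
    html = re.sub(r"\d{2}:\d{2}:\d{2}", "HH:MM:SS", html)       # e.g. 12:43:01
    html = re.sub(r"\d{1,2}-\d{1,2}-\d{4}", "D-M-YYYY", html)   # e.g. 8-8-2017
    return html
def page_sources_should_match(actual_html, expected_html):
    """Usable as a Robot Framework keyword: compare sources ignoring time/date cells."""
    if normalize_volatile_cells(actual_html) != normalize_volatile_cells(expected_html):
        raise AssertionError("Page sources differ outside the ignored time/date cells")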
45,599,187 | 2017-08-09T19:31:00.000 | 2 | 0 | 1 | 1 | python,windows,git,jupyter | 63,553,489 | 3 | false | 0 | 0 | If you are running Jupyter Notebook in Windows run conda install posix.
It worked for me. | 2 | 6 | 0 | I have installed Git Bash, python 3.6 and Anaconda for the course which requires me to use Unix commands within Jupyter, such as !ls, !cat, !head etc.
However, for each of these commands I get (e.g.):
'ls' is not recognized as an internal or external command,
operable program or batch file.
I am using Windows 10. What can I do to be able to proceed with the course?
Thanks! | Python - Unix commands not recognized in Jupyter | 0.132549 | 0 | 0 | 6,860 |
45,599,187 | 2017-08-09T19:31:00.000 | 11 | 0 | 1 | 1 | python,windows,git,jupyter | 45,751,003 | 3 | false | 0 | 0 | Please don't use !ls as mentioned in the course.
Use %ls in the jupyter notebook and it works fine.
Hope it helps. | 2 | 6 | 0 | I have installed Git Bash, python 3.6 and Anaconda for the course which requires me to use Unix commands within Jupyter, such as !ls, !cat, !head etc.
However, for each of these commands I get (e.g.):
'ls' is not recognized as an internal or external command,
operable program or batch file.
I am using Windows 10. What can I do to be able to proceed with the course?
Thanks! | Python - Unix commands not recognized in Jupyter | 1 | 0 | 0 | 6,860 |
45,601,984 | 2017-08-09T23:10:00.000 | 0 | 0 | 1 | 0 | python | 45,602,412 | 2 | false | 0 | 0 | You can set dependencies on another python package (e.g. using install_requires in your setup.py), but if your code relies on a specific non-Python binary you cannot have that installed automatically as part of the pip install process.
You could create a native package for your operating system, which would allow you to set dependencies on other system packages such that when your Python script was installed with apt/yum/dnf/etc, the necessary binary would be installed as well. | 1 | 0 | 0 | I am trying to make a python package that relies on a command line utility to work. I am wondering if anyone knows how to make pip install that command line utility when pip installs my package. The only documentation I can seem to find is on dependency_links which looks to be depreciated. | Installing a command line utility from installing a python package | 0 | 0 | 0 | 103 |
45,607,301 | 2017-08-10T07:35:00.000 | 0 | 0 | 0 | 0 | python,json,cassandra,cqlsh | 60,649,389 | 4 | false | 0 | 0 | You can use bash redirction to get json file.
cqlsh -e "select JSON * from ${keyspace}.${table}" | awk 'NR>3 {print $0}' | head -n -2 > table.json | 1 | 1 | 0 | I want to export data from Cassandra to Json file, because Pentaho didn't support my version of Cassandra 3.10 | How to export data from cassandra to Json file using Python or other language? | 0 | 1 | 0 | 3,302 |
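Since the question also allows a Python solution, here is a hedged sketch using the DataStax cassandra-driver; the contact point, keyspace and table names are placeholders, and non-JSON types such as UUIDs are stringified:
import json
from cassandra.cluster import Cluster
from cassandra.query import dict_factory
cluster = Cluster(["127.0.0.1"])            # adjust contact points for your cluster
session = cluster.connect("my_keyspace")     # placeholder keyspace
session.row_factory = dict_factory           # rows come back as plain dicts
rows = session.execute("SELECT * FROM my_table")        # placeholder table
with open("table.json", "w") as out:
    json.dump([dict(r) for r in rows], out, default=str)  # default=str handles UUID/datetime
cluster.shutdown()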
45,608,490 | 2017-08-10T08:36:00.000 | 0 | 0 | 0 | 1 | python,celery | 45,608,632 | 1 | false | 1 | 0 | Try a web server like flask that forwards requests to the celery workers. Or try a server that reads from a queue (SQS, AMQP,...) and does the same.
No matter the solution you choose, you end up with 2 services: the celery worker itself and the "server" that calls the celery tasks. They both share the same code but are launched with different command lines.
Alternately, if the task code is small enough, you could just import the git repository in your code and call it from there | 1 | 1 | 0 | Imagine that I've written a celery task, and put the code to the server, however, when I want to send the task to the server, I need to reuse the code written before.
So my question is that are there any methods to seperate the code between server and client. | how to seperate celery code into server and client side? | 0 | 0 | 1 | 222 |
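A minimal sketch of the "two services sharing a broker" setup from the answer: the client side only needs the broker URL and the task name, not the task's code (both values below are placeholders):
from celery import Celery
# Client side: no task code imported, only the broker address.
app = Celery(broker="amqp://guest@broker-host//")                       # placeholder broker URL
result = app.send_task("tasks.process_data", args=[{"user_id": 42}])    # queued by name only
print(result.id)   # call result.get() only if a result backend is configured on the worker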
45,610,370 | 2017-08-10T09:54:00.000 | 0 | 0 | 1 | 0 | python,algorithm,machine-learning,nlp | 45,646,885 | 2 | false | 0 | 0 | One possible algorithmic solution is to create a longer compositional dictionary representing all possible first_name last_name. Then for any given list of tokens as a name (words separated with space), for each token, find all dictionary enteries which have shortest edit distance to that token | 1 | 0 | 1 | My problem is
I have full names with concatenated names, like "davidrobert jones". I want to split it to be "david robert jones".
I tested the solutions using longest prefix matching algorithm with a names dictionary, but it's not that simple because a name could be written in many ways.
I added phonetic matching algorithm too, but also there are many names that could have same pronunciation and so they're very ambiguous.
What is the best solution to do so?, i believe machine learning could have an answer, but i don't know much about machine learning. | An algorithm to split concatenated names | 0 | 0 | 0 | 180 |
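A minimal sketch of the dictionary-based segmentation both the question and the answer describe, using a tiny illustrative name set; a real system would need a much larger dictionary plus the fuzzy/phonetic matching discussed above:
KNOWN_NAMES = {"david", "robert", "jones"}    # illustrative dictionary
def split_concatenated(token, names=KNOWN_NAMES):
    """Split a token into known names, longest prefix first with backtracking; None if impossible."""
    if not token:
        return []
    for end in range(len(token), 0, -1):
        prefix = token[:end]
        if prefix in names:
            rest = split_concatenated(token[end:], names)
            if rest is not None:
                return [prefix] + rest
    return None
parts = []
for word in "davidrobert jones".split():
    parts.extend(split_concatenated(word) or [word])
print(" ".join(parts))    # -> david robert jones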
45,610,737 | 2017-08-10T10:09:00.000 | 0 | 0 | 0 | 0 | python,sql,sql-server,excel,ssis | 45,614,658 | 1 | false | 0 | 0 | You can try using BiML, which dynamically creates packages based on meta data.
The only other possible solution is to write a script task. | 1 | 0 | 0 | I have a task to import multiple Excel files in their respective sql server tables. The Excel files are of different schema and I need a mechanism to create a table dynamically; so that I don't have to write a Create Table query. I use SSIS, and I have seen some SSIS articles on the same. However, it looks I have to define the table anyhow. OpenRowSet doesn't work well in case of large excel files. | Multiple Excel with different schema Upload in SQL | 0 | 1 | 0 | 59 |
45,612,349 | 2017-08-10T11:22:00.000 | 1 | 0 | 0 | 0 | python,django,heroku | 45,612,723 | 1 | false | 1 | 0 | I suggest you to create a Django management command for your project like python mananage.py run_this_once_a_day. And you can use Heroku schedular to trigger this scheduling. | 1 | 0 | 0 | I have deployed a django app on heroku. So far it works fine. Now I have to schedule a task (its in the form of python script) once a day. The job would take the data from heroku database perform some calculations and post the results back in the database. I have looked at some solutions for this usually they are using rails in heroku. I am confused whether I should do it using the cron jobs extension available in django or using the scheduled jobs option in heroku. Since the application is using using heroku I thought of using that only but I dont get any help how to add python jobs in it. Kindly help. | running scheduled job in django app deployed on heroku | 0.197375 | 0 | 0 | 397 |
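A minimal sketch of the management command the answer suggests; it would live in yourapp/management/commands/run_this_once_a_day.py (app name and body are placeholders), and Heroku Scheduler would run python manage.py run_this_once_a_day once a day:
from django.core.management.base import BaseCommand
class Command(BaseCommand):
    help = "Daily job: pull data from the database, compute, write the results back"
    def handle(self, *args, **options):
        # Placeholder body: query your models, run the calculations,
        # and save the results back to the database here.
        self.stdout.write("daily job finished")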
45,615,556 | 2017-08-10T13:45:00.000 | 0 | 0 | 1 | 0 | python,json | 45,637,067 | 2 | false | 0 | 0 | I solved my problem with regex. re.match(pattern,str) function. | 1 | 1 | 0 | I got data from JSON now I want to determine whether the data is timestamp or not. But it is not a datetime object, it is string. How can I do this? | How to determine timestamp field from JSON? | 0 | 0 | 0 | 298 |
45,615,840 | 2017-08-10T13:57:00.000 | 0 | 0 | 1 | 0 | python | 45,615,917 | 3 | false | 0 | 0 | You can try using substring after finding the position of the target word. Have you tried to code anything so far? | 1 | 0 | 0 | So I need a simple way to pull ten words from before and after a search term in a paragraph, and have it extract all of it into a sentence.
example:
paragraph = 'The domestic dog (Canis lupus familiaris or Canis familiaris) is a member of genus Canis (canines) that forms part of the wolf-like canids, and is the most widely abundant carnivore. The dog and the extant gray wolf are sister taxa, with modern wolves not closely related to the wolves that were first domesticated, which implies that the direct ancestor of the dog is extinct. The dog was the first domesticated species and has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes.'
input
wolf
output
most widely abundant carnivore. The dog and the extant gray wolf are sister taxa, with modern wolves not closely related to | How do I pull a number of words around a specific word in python? | 0 | 0 | 1 | 336 |
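A minimal sketch of the slicing approach from the answer, working on whole words; the window size and the substring-style matching are simple assumptions:
def words_around(text, term, window=10):
    """Return up to `window` words before and after the first word containing `term`."""
    words = text.split()
    for i, word in enumerate(words):
        if term.lower() in word.lower():
            start = max(0, i - window)
            return " ".join(words[start:i + window + 1])
    return ""
# print(words_around(paragraph, "wolf"))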
45,616,787 | 2017-08-10T14:39:00.000 | -1 | 0 | 1 | 0 | python,windows,pyside,py2exe | 45,617,193 | 1 | false | 0 | 1 | I fixed the issue. There were some differences in the MSVCP and MSVCR dll files between the 2 machines. I copied all the missing dll files from the machine that was working to the one that wasn't both in the System32 and SysWOW64 directories and now the program is working.
The files were:
msvcp60.dll
msvcp100.dll
mscvp120.dll
msvcr60.dll
msvcr100.dll
msvcr120.dll
Hope this helps anyone in the future! | 1 | 0 | 0 | I have 3 identical (I thought) servers that are running 2012 R2. I built the app using python 3.4 and PySide 1.2.4 on a Windows 7 machine. Running the setup file gives me the executable as well as 3 dll files: QtCore4.dll, QtGui4.dll, and QtNetwork4,dll. I copied all these files to the 3 servers. I can run the exe from 2 of the servers just fine, but the third one is giving me trouble. At first it was giving me an error saying that MSVCR100.dll was not installed. So, I copied msvcr100.dll from one of the other servers where the exe runs fine. Now when I try to run the exe I get the following error:
Traceback (most recent call last):
File "Ninja_Lite.py", line 3, in
File "C:\Python34\lib\site-packages\zipextimporter.py", line 109, in load_module
ImportError: MemoryLoadLibrary failed loading PySide\QtGui.pyd: The specified module could not be found. (126)
Does anyone have any idea what could be causing this error to only happen on one of the 3 servers? | Issue with py2exe executable on Windows Server 2012 R2 | -0.197375 | 0 | 0 | 477 |
45,617,095 | 2017-08-10T14:53:00.000 | 2 | 0 | 0 | 0 | python,sockets,ip,gethostbyname | 45,618,059 | 1 | true | 0 | 0 | I don't see any "wrong" IPs in your question. A DNS server is allowed to return multiple IP addresses for the same host. The client generally just picks one of them. A lot of servers use this as a part of their load balancing, as clients select any available server and since they generally would pick different ones the traffic gets split up evenly. Your ping command and your gethostbyname command are just selecting different available IPs, but neither is "wrong".
You can see all the IPs that are returned for a given hostname with a tool like nslookup or dig. | 1 | 0 | 0 | socket.gethostbyname("vidzi.tv") giving '104.20.87.139'
ping vidzi.tv gives '104.20.86.139'
socket.gethostbyname("www.vidzi.tv") giving '104.20.87.139'
ping www.vidzi.tv gives '104.20.86.139'
Why socket.gethostbyname is giving wrong IP for this website? It is giving right IP for other websites? | socket.gethostbyname giving wrong IP | 1.2 | 0 | 1 | 1,420 |
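Besides nslookup/dig, the full address list is visible from Python itself with socket.gethostbyname_ex; any of the returned addresses is a valid answer for the host:
import socket
hostname, aliases, addresses = socket.gethostbyname_ex("vidzi.tv")
print(addresses)    # e.g. ['104.20.86.139', '104.20.87.139']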
45,618,524 | 2017-08-10T16:02:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,import,module | 72,499,535 | 2 | false | 0 | 0 | Matplotlib is an entire library, so if you are using import matplotlib as plt in your code, it might not work. Use 'import matplotlib.plyplot as plt' instead. | 1 | 0 | 1 | This question may appear similar to previously asked questions, but it is not.
I have a Python script with a single line:
import matplotlib
This fails with the error:
'module' object is not callable
random.py - print a random integer between 1 and 100
(followed by 3 more lines of usage of random.py)
If I start python from the command line, then type
import matplotlib
That works. I can instantiate classes from the module, plot figures and so on.
I am completely lost as to what is going on. Any clue appreciated.
Python version 2.6.6 on 64 bit x86 Linux machine. | import matplotlib fails with "'module' object not callable" error | 0 | 0 | 0 | 1,820 |
45,618,708 | 2017-08-10T16:12:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x | 45,618,770 | 2 | true | 0 | 0 | Just check the ASCII table. It is child.send(b'\x04') (EOT) | 1 | 1 | 0 | I've been looking online for a way to break out of a screen terminal session. The keys to break out of a session are:
Ctrl+A then D
OR
Ctrl+A then Ctrl+D
I've got the Ctrl+A part:
child.send(b'\x01')
But cannot find the keycode for Ctrl+D | Python keycodes to break out of a screen terminal session | 1.2 | 0 | 0 | 38 |
45,621,400 | 2017-08-10T18:47:00.000 | 0 | 0 | 1 | 0 | python,list | 45,621,435 | 1 | false | 0 | 0 | Is L[index] like a pointer which points to a particular value in the list ...
No. It is an indicator to the compiler that the __getitem__(), __setitem__(), or __delitem__() method (or their C-level equivalents) of the object should be called depending on which operation is required. | 1 | 0 | 0 | We see a lot of operations on lists using its index like L[index] and we get the value associated with that index in a particular list.
I have a doubt on what exactly is python doing when we say "get me an element at this particular index(L[index])".
Is L[index] like a pointer which points to a particular value in the list and is it the reason why the value is changed through the assignment L[index]=value in the same address unlike other variables?
Any help on this would be appreciated. | Regarding python lists | 0 | 0 | 0 | 45 |
45,621,637 | 2017-08-10T19:01:00.000 | 2 | 0 | 1 | 0 | python,vba,excel,xlwings | 45,622,510 | 1 | true | 0 | 0 | RunPython basically just does what it says: run python code. So to run a module rather than a single function, you could do: RunPython("import filename"). | 1 | 0 | 0 | I have python programs that use python's xlwings module to communicate with excel. They work great, but I would like to run them using a button from excel. I imported xlwings to VBA and use the RunPython command to do so. That also works great, however the code I use with RunPython is something like:
"from filename import function;function()"
which requires me to make the entire python program a function. This is annoying when I go back to make edits to the python program because every variable is local. Any tips on running the file from RunPython without having to create a function out of it? | Running Python from VBA using xlwings without defining function | 1.2 | 1 | 0 | 873 |
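A minimal sketch of what the module driven by RunPython("import filename") could look like: the top-level statements run on import (RunPython normally starts a fresh interpreter per call), so no wrapper function is needed; the sheet and range names are placeholders:
# filename.py -- executed top to bottom by RunPython("import filename")
import xlwings as xw
wb = xw.Book.caller()              # the workbook whose button triggered the VBA call
sheet = wb.sheets["Sheet1"]        # placeholder sheet name
values = sheet.range("A1:A10").value
sheet.range("B1").value = sum(v for v in values if v is not None)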
45,622,805 | 2017-08-10T20:19:00.000 | 0 | 0 | 1 | 0 | python,node.js,artificial-intelligence,ocr | 52,320,544 | 1 | false | 0 | 0 | If every document have the same format.
Try dissecting the document into individual parts and feed OCR only the part where you need the text.
If not, good luck, I'm looking for the answer too. | 1 | 1 | 0 | User uploads tabular data with information like classes, professors, schedule and such.
I want to easily extract that information.
I can use an OCR library, but it'd simply output text as randomly mixed.
I would have no idea what something belongs to.
Is there a way to train the OCR a little bit to only look at a certain part of the image (form) and then label the data, so that when it extracts the text it is all labeled?
Suppose I had a form with lots of data; I want it to only look at the address section and label it.
Or, if it is spreadsheet-like data, I want it to label it by columns.
Simply extract all text into string isn't that useful. | How to extract text information from specified places using OCR? | 0 | 0 | 0 | 518 |
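A sketch of the "only look at a certain part of the image" idea, assuming a fixed form layout so the address box sits at known pixel coordinates (the box below is a placeholder), using Pillow plus pytesseract:
from PIL import Image
import pytesseract
image = Image.open("form.png")
address_box = (50, 400, 600, 520)            # (left, top, right, bottom) -- placeholder coordinates
address_text = pytesseract.image_to_string(image.crop(address_box))
record = {"address": address_text.strip()}   # labelled because we chose the region, not the OCR
print(record)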
45,625,918 | 2017-08-11T02:12:00.000 | 2 | 0 | 0 | 0 | python,python-3.x,tkinter,width | 45,626,120 | 1 | true | 0 | 1 | The width will never be bigger than the parent frame.
You can call winfo_reqwidth to get the requested width of the widget. I'm not sure if that will give you the answer you are looking for. I'm not entirely what the real problem is that you are trying to solve. | 1 | 2 | 0 | Say I have a Frame in tkinter with a set width and height (placed using the place method), and I add a child Frame to that parent Frame (using the pack method). In that child Frame, I add an arbitrary amount of widgets, so that the child Frame's width is dynamically set depending on its children.
My question is how do I get the width of the child Frame if its width is greater than its parent?
I know there's a winfo_width method to get the width of a widget, but it only returns the width of the parent Frame if its width is greater than its parent. In other words, how do I get the actual width of a widget, not just the width of the part of the widget that is displayed? | Get width of child in tkinter | 1.2 | 0 | 0 | 174 |
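A small sketch of the winfo_reqwidth suggestion; update_idletasks() makes sure geometry has been computed before it is queried:
import tkinter as tk
root = tk.Tk()
parent = tk.Frame(root, width=200, height=100)
parent.place(x=0, y=0)
child = tk.Frame(parent)
child.pack()
for i in range(12):                              # enough widgets to outgrow the parent
    tk.Label(child, text="widget %d" % i).pack(side="left")
root.update_idletasks()
print(child.winfo_width())      # the displayed width, limited by the parent
print(child.winfo_reqwidth())   # the width the child asked for, even if larger than the parent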
45,627,467 | 2017-08-11T05:18:00.000 | 0 | 1 | 0 | 0 | php,python,opencv | 45,742,503 | 1 | true | 0 | 0 | Well, I believe that if you want to use cv2.imread you need to have a file, because one of the parameters is a file name and that is going to make everythins slower and resource consuming.
I think that what you want to use is cv2.imdecode (similar to imread, but in memory), in which case you only need to turn your image into the correct type of stream, a numpy array for example.
I would also consider to process everything on python at once if you want to take in account performance. | 1 | 1 | 0 | I am capturing an image from PHP script from webcam and I want to send it to python script so that I can use OpenCV feature homography functions on it. How can I send an image using exec command in PHP and pass that image to cv2.imread() function in python?
Thanks. | How to send a image captured from webcam in PHP to Python | 1.2 | 0 | 0 | 438 |
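A sketch of the cv2.imdecode route from the answer: PHP pipes the raw image bytes into the Python process (for example via proc_open), and Python 3 decodes them in memory instead of reading a file:
import sys
import numpy as np
import cv2
raw = sys.stdin.buffer.read()                     # image bytes piped in by the PHP side
arr = np.frombuffer(raw, dtype=np.uint8)
image = cv2.imdecode(arr, cv2.IMREAD_COLOR)       # same result as cv2.imread, but from memory
if image is None:
    sys.exit("could not decode image")
print(image.shape)                                # ready for the feature/homography steps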
45,628,002 | 2017-08-11T06:03:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 45,628,111 | 1 | false | 0 | 0 | zip in Python can give you result you want:
e.g. c = zip(a, b), c will contain [(basket,banana), (fridge,apple), (table,grapes), (basket,apple), (fridge,banana)] | 1 | 0 | 0 | Hi, I have 2 lists from a loop.
A = [basket,fridge,table,basket,fridge]
B = [banana, apple, grapes, apple, banana]
Is there a way for me to format it as:
c = [basket:banana, fridge:apple, table:grapes, basket:apple, fridge:banana]
or
c = [basket, banana, fridge, apple, table, grapes, basket, apple, fridge, banana]
My goal would be to list how many bananas in basket; bananas in fridge and so on. | Combine list A value 0 to list B value 0: Python | 0 | 0 | 0 | 10 |
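Building on the zip answer, the "how many bananas in the basket" goal maps directly onto collections.Counter:
from collections import Counter
A = ["basket", "fridge", "table", "basket", "fridge"]
B = ["banana", "apple", "grapes", "apple", "banana"]
counts = Counter(zip(A, B))
print(counts[("basket", "banana")])        # 1
for (place, fruit), n in counts.items():
    print("%d %s in %s" % (n, fruit, place))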
45,628,665 | 2017-08-11T06:46:00.000 | 1 | 0 | 1 | 0 | r,python-3.x,list,pandas,dataframe | 45,633,068 | 2 | false | 0 | 0 | Found a solution to select a particular dataframe/dataframe_column from a list of dataframes.
In R : x = listOfdf$df1$df2$df3
In Python : x = listOfdf['df1']['df2']['df3']
Thank you :) | 1 | 1 | 1 | I have a list of dataframes in R, with which I'm trying to select a particular dataframe as follows:
x = listOfdf$df1$df2$df3
Now, trying hard to find an equivalent way to do so in Python. Like, the syntax on how a particular DataFrame be selected from a list of DataFrames in Pandas Python. | How to select a particular dataframe from a list of dataframes in Python equivalent to R? | 0.099668 | 0 | 0 | 4,311 |
45,630,562 | 2017-08-11T08:36:00.000 | 1 | 0 | 0 | 0 | python,mysql,database,database-design,amazon-ec2 | 45,643,778 | 2 | false | 1 | 0 | The problem is you don't have access to RDS filesystem, therefore cannot upload csv there (and import too).
Modify your Python Scraper to connect to DB directly and insert data there. | 1 | 0 | 0 | I have a Python Scraper that I run periodically in my free tier AWS EC2 instance using Cron that outputs a csv file every day containing around 4-5000 rows with 8 columns. I have been ssh-ing into it from my home Ubuntu OS and adding the new data to a SQLite database which I can then use to extract the data I want.
Now I would like to try the free tier AWS MySQL database so I can have the database in the Cloud and pull data from it from my terminal on my home PC. I have searched around and found no direct tutorial on how this could be done. It would be great if anyone that has done this could give me a conceptual idea of the steps I would need to take. Ideally I would like to automate the updating of the database as soon as my EC2 instance updates with a new csv table. I can do all the de-duping once the table is in the aws MySQL database.
Any advice or link to tutorials on this most welcome. As I stated, I have searched quite a bit for guides but haven't found anything on this. Perhaps the concept is completely wrong and there is an entirely different way of doing it that I am not seeing? | Exported scraped .csv file from AWS EC2 to AWS MYSQL database | 0.099668 | 1 | 0 | 158 |
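A hedged sketch of the "insert directly from the scraper" suggestion: read the daily CSV and bulk-insert it into an RDS MySQL table with pymysql (endpoint, credentials, table and column names are all placeholders):
import csv
import pymysql
conn = pymysql.connect(host="mydb.xxxxxxxx.rds.amazonaws.com",   # placeholder RDS endpoint
                       user="scraper", password="secret", db="scrapes")
with open("daily_scrape.csv", newline="") as f, conn.cursor() as cur:
    rows = [tuple(r) for r in csv.reader(f)]
    cur.executemany(
        "INSERT INTO listings (c1, c2, c3, c4, c5, c6, c7, c8) "
        "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)", rows)
conn.commit()
conn.close()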
45,631,450 | 2017-08-11T09:20:00.000 | 1 | 0 | 0 | 0 | python,hadoop,hadoop-streaming | 45,631,639 | 1 | false | 0 | 0 | Not sure whether these questions are on topic here, but fortunately the answer is simple enough:
These days a million rows is simply not that large anymore; even Excel can hold more than a million.
If you have a few million rows in a large table, and want to run quick small select statements, the answer is that you are probably better off without Hadoop.
Hadoop is great for sets of 100 million rows, but does not scale down too well (in performance and required maintenance).
Therefore, I would recommend trying a 'normal' database solution, like MySQL, at least until your data starts growing significantly.
You can use python for advanced analytical processing, but for simple queries I would recommend using SQL. | 1 | 1 | 0 | I am looking for a solution to build an application with the following features:
A database compound of -potentially- millions of rows in a table, that might be related with a few small ones.
Fast single queries, such as "SELECT * FROM table WHERE field LIKE %value"
It will run on a Linux Server: Single node, but maybe multiple nodes in the future.
Do you think Python and Hadoop is a good choice?
Where could I find a quick example written in Python to add/retrieve information to Hadoop in order to see a proof of concept running with my one eyes and take a decision?
Thanks in advance! | is the choice of Python and Hadoop a good one for this scenario? | 0.197375 | 1 | 0 | 43 |
45,633,267 | 2017-08-11T10:43:00.000 | 3 | 0 | 0 | 0 | python,django,django-models | 45,633,856 | 1 | true | 1 | 0 | You should recognise that Django fields represent database columns. A ForeignKey field is exactly that, a field on the model that represents a key in another model. But you can't model a "one-to-many" field in that way; what would the field on the model represent? So no, it is not possible. | 1 | 1 | 0 | Is there a OneToManyField relationship in Django? There is a ManyToOneField relationship but that restrict you declare the relationship on the Many side. | OneToManyField relationship in Django? | 1.2 | 0 | 0 | 503 |
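Concretely, the one-to-many relationship is declared with a ForeignKey on the "many" side; a minimal sketch:
from django.db import models
class Album(models.Model):
    title = models.CharField(max_length=100)
class Track(models.Model):                         # the "many" side holds the key
    album = models.ForeignKey(Album, related_name="tracks", on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
# some_album.tracks.all() then gives the one-to-many view from the Album side.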
45,633,650 | 2017-08-11T11:03:00.000 | 0 | 0 | 0 | 1 | python,hadoop,oozie,oozie-workflow | 45,682,834 | 1 | false | 1 | 0 | You can use the Cloudera Hue or Apache Ambari tools, which will give you all the information about Oozie.
If you are looking for more, you can write your own program using the APIs exposed by Oozie. | 1 | 0 | 0 | I am new to Oozie. I have a couple of questions on Oozie job scheduling.
Can we get a list of jobs which scheduled on ozzie server for everyday run using some programmatic approach? Considering there are multiple job scheduled to run everyday may be for next couple of months or year.
How to know programmatically that a scheduled job had failed to run at day end for reporting purpose?
Can we do a ranking on oozie scheduled job on the basis of their execution time?
Thanks much for any help on this. | Apache Oozie workflows | 0 | 0 | 0 | 170 |
45,634,854 | 2017-08-11T12:07:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,pyserial | 45,635,399 | 1 | false | 1 | 0 | You might be able to tell whether the device is physically plugged in by checking the status of one of the RS232 control lines - CTS, DSR, RI, or CD (all of which are exposed as properties in PySerial). Not all USB-serial adapters support any of these.
If the only connection to the device is the TX/RX lines, your choices are extremely limited:
Send a command to the device and see if it responds. Hopefully its protocol includes a do-nothing command for this purpose.
If the device sends data periodically without needing an explicit command, save a timestamp whenever data is received, and return False if it's been significantly longer than the period since the last reception. | 1 | 0 | 0 | So I am working on a project that has a Raspberry Pi connected to a Serial Device via a USB to Serial Connector. I am trying to use PySerial to track the data being sent over the connected Serial device, however there is a problem.
Currently, I have my project set up so that every 5 seconds it calls a custom port.open() method I have created, which returns True if the port is actually open. This is so that I don't have to have the Serial Device plugged in when I initially go to start the program.
However I'd also like to set it up so that the program can also detect when my serial device is disconnected, and then reconnected. But I am not sure how to accomplish this.
If I attempt to use the PySerial method isOpen() to check if the device is there, I am always having it return true as long as the USB to Serial connector is plugged in, even if I have no Serial device hooked up to the connector itself. | PySerial, check if Serial is Connected | 0 | 0 | 0 | 1,786 |
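A sketch combining the two suggestions from the answer: poll a modem control line where the adapter actually wires one, and otherwise treat an exception or prolonged silence as a disconnect (port name, baud rate and timings are placeholders):
import time
import serial
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # placeholder port and baud rate
last_rx = time.time()
while True:
    try:
        data = ser.read(64)
        line_ok = bool(ser.dsr or ser.cts)    # only meaningful if the adapter wires these lines
    except serial.SerialException:            # the USB-serial adapter itself disappeared
        print("serial device disconnected")
        break
    if data:
        last_rx = time.time()
    elif time.time() - last_rx > 10 or not line_ok:
        print("serial device appears to be disconnected or silent")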
45,635,746 | 2017-08-11T12:53:00.000 | 4 | 0 | 0 | 0 | python,django,django-tables2 | 45,635,938 | 1 | true | 1 | 0 | You're using a hyphen but the module name has an underscore. The instructions ask you to add "django_tables2" to INSTALLED_APPS and not "django-tables2". | 1 | 1 | 0 | I am trying to use django-tables2 in my django project but I keep getting
"ModuleNotFoundError: No module named 'django-tables2'" error.
installed it with pip install - everything OK.
added django-tables2 to INSTALLED_APPS (it seems the problem is here).
Thanks you. | django-tables2 module missing | 1.2 | 0 | 0 | 1,651 |
45,637,732 | 2017-08-11T14:32:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,gpio,spi | 45,734,337 | 1 | true | 0 | 0 | I fixed it by connecting the spi devices to other pins. | 1 | 0 | 0 | Im using spidev on a raspberry pi 3 to communicate with 2 spi devices.
One on GPIO8 and one on GPIO16.
Spidev requires a parameter for CS but GPIO16 is not a CS line. How do I fix this? | spidev software CS | 1.2 | 0 | 0 | 162 |
45,641,038 | 2017-08-11T17:47:00.000 | 0 | 0 | 1 | 0 | python,workfront-api | 45,641,584 | 1 | false | 0 | 0 | Parsing nested objects is a challenge with Python. I would suggest opting for multiple CSVs for each set of objects referenced in the collection of another object (tasks contained in a project, for example). Ensure that you have a key that can be used to link entries between the files, such as the tasks' project ID. | 1 | 0 | 1 | I would like to convert each workfront api object (Projects, Isues, Tasks, etc) output to a csv using Python. Is there a recommended way to do this with the nested objects? For example: json to csv, list to csv, etc?
Thanks,
Victor | workfront api output to csv | 0 | 0 | 0 | 88 |
45,641,646 | 2017-08-11T18:25:00.000 | 0 | 0 | 1 | 0 | python,ubuntu,jupyter-notebook,jupyter | 45,641,706 | 2 | false | 0 | 0 | The command you give should open Jupyter in the filesystem view first. You then navigate from there to a notebook and double-click that, which will open the notebook in a new browser tab. | 1 | 0 | 0 | When I write jupyter notebook in terminal it doesn't open notebook, it opens Jupyter tree. What can be the problem? | Why doesn't Jupyter notebook command open notebook? | 0 | 0 | 0 | 840 |
45,644,200 | 2017-08-11T21:44:00.000 | 7 | 0 | 0 | 0 | python-3.x,catboost | 51,193,723 | 3 | false | 0 | 0 | CatBoost also has scale_pos_weight parameter starting from version 0.6.1 | 1 | 7 | 1 | Is there a parameter like "scale_pos_weight" in catboost package as we used to have in the xgboost package in python ? | for Imbalanced data dealing with cat boost | 1 | 0 | 0 | 14,128 |
45,644,367 | 2017-08-11T22:02:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,neural-network,deep-learning | 45,657,010 | 2 | false | 0 | 0 | I finally implemented the requirement by forcing certain blocks of the weight matrix corresponding to the first layer to be constant zero. That is, rather than just define w1 = tf.Variables(tf.random_normal([100,10])), I define ten 10 by 1 weight vectors and concatenate them with zeros to form a block diagonal matrix as final w1. | 1 | 0 | 1 | I would like to implement a feed-forward neural network, with the only difference from a usual one that I'd manually control the correspondence between input features and the first hidden layer neurons. For example, in the input layer I have features f1, f2, ..., f100, and in the first hidden layer I have h1, h2, ..., h10. I want the first 10 features f1-f10 fed into h1, and f11-f20 fed into h2, etc.
Graphically, unlike the common deep learning technique dropout which is to prevent over-fitting by randomly omit hidden nodes for a certain layer, here what I want is to statically (fixed) omit certain hidden edges between input and hidden.
I am implementing it using Tensorflow and didn't find a way of specifying this requirement. I also looked into other platforms such as pytourch and theano, but still haven't got an answer. Any idea of implementation using Python would be appreciated! | How to implement a neural network model, with fixed correspondence between the input layer and the first hidden layer specified? | 0 | 0 | 0 | 75 |
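A small sketch of the block-diagonal construction described above: numpy builds a fixed 0/1 mask (features 1-10 feed hidden unit 1, and so on) that can be multiplied into the 100x10 weight variable so the unwanted connections stay at zero:
import numpy as np
n_hidden, block = 10, 10
mask = np.kron(np.eye(n_hidden), np.ones((block, 1)))   # shape (100, 10), block-diagonal 0/1
# e.g. TF1-style: w1 = tf.Variable(tf.random_normal([100, 10])) * tf.constant(mask, tf.float32)
print(mask.shape)                                # (100, 10)
print(mask[:10, 0].all(), mask[:10, 1].any())    # True False -> f1..f10 connect only to h1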
45,644,840 | 2017-08-11T22:58:00.000 | 1 | 0 | 0 | 0 | python,multithreading,numpy,ctypes | 45,644,861 | 1 | true | 0 | 0 | I'd initialize the array with np.empty and then pass the buffer to the C function. That should allow each core to grab whatever pages from the array it needs during the initialization. | 1 | 0 | 1 | Is it possible to initialize a numpy.ndarray in a parallel fashion such that the corresponding pages will be distributed among the NUMA-nodes on the system?
The ndarray will later be passed to a multi-threaded C function which yields much better performance if the passed data is allocated in parallel (adhering to the first-touch policy) | NUMA-friendly initialization of numpy.ndarray | 1.2 | 0 | 0 | 167 |
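A minimal sketch of that pattern: np.empty allocates without touching the pages, and the raw buffer is handed to a (hypothetical) parallel C function via ctypes so the threads themselves do the first touch:
import ctypes
import numpy as np
n = 10 ** 8
arr = np.empty(n, dtype=np.float64)          # pages not yet faulted in / bound to a NUMA node
lib = ctypes.CDLL("./libcompute.so")          # placeholder shared library
lib.parallel_init.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
lib.parallel_init(arr.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), arr.size)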
45,645,507 | 2017-08-12T00:53:00.000 | 0 | 0 | 0 | 0 | python-2.7,recursion,tensorflow | 45,714,701 | 1 | false | 0 | 0 | I had made a coding error that referenced a tensor in the construction of the same tensor. I don't know if changing recursion depth would solve similar, but unbugged, situations. | 1 | 0 | 1 | Is there a hack so that I can increase the maximum recursion depth allowed? I only need it to be 2-3 times as big.
I have a tensorflow graph with many tensors that are lazily constructed because they depend on other tensors (which may or may not be constructed yet). I can guarantee that this process terminates, and that I will not run out of memory. However, I run into this recursion depth error. | increase maximum recursion depth tensorflow | 0 | 0 | 0 | 252 |
45,645,884 | 2017-08-12T02:13:00.000 | 0 | 0 | 0 | 0 | python,audio,pygame | 45,657,002 | 1 | true | 0 | 1 | Not exactly a fix, but it seems that if I disable the mixer(pygame.mixer.quit()) it doesn't cancel out the audio. Unfortunately, I haven't found a way to use audio in two pygame windows at once.
Feel free to post another answer if anyone figures out a full fix. | 1 | 0 | 0 | I wrote a music player in Python using Pygame and Pydub(I used Pygame for actually playing the music, while Pydub is probably unrelated to the issue).
The music works fine even when windows are switched unless I switch to another Pygame window. I thought this effect would go away if I compiled it(cx_freeze), but that didn't work.
So I was wondering if there is any way to let the music keep playing when the window is switched to another Pygame window.
I used pygame.mixer.music instead of Sound objects if that might somehow be related.
Thanks in advance! | Pygame Audio Stops When Window Switches? | 1.2 | 0 | 0 | 115 |
45,646,104 | 2017-08-12T03:05:00.000 | 0 | 1 | 0 | 1 | python,linux,bash,shell,pexpect | 45,718,528 | 2 | false | 0 | 0 | The short answer appears to be no you cannot do this with pexpect.
Auditd is a possible alternative for tracking user input, but I have not figured out how to get it to record commands such as 'ls' and 'cd' because they do not call the system execve() command. The best work around I have found is to use the script command which opens another interactive terminal where every command entered in the prompt is recorded. You can use the file the script command outputs to (by default is typescript) to log all user commands. | 1 | 0 | 0 | As a sys admin I am trying to simulate a user on a virtual machine for testing log file monitoring.
The simulated user will be automated to perform various tasks that should show up in bash history, "ls", "cd", "touch" etc. It is important that they show up in bash history because the bash history is logged.
I have thought about writing directly to the bash history but would prefer to more accurately simulate a users behavior. The reason being that the bash history is not the only log file being watched and it would be better if logs for the same event remained synchronized.
Details
I am working on CentOS Linux release 7.3.1611
Python 2.7.5 is installed
I have already tried to use pexpect.run('ls') or pexpect.spawn('ls'), 'ls' does not show up in the bash history with either command. | Is it possible to use pexpect to generate bash shell commands that will show up in bash history? | 0 | 0 | 0 | 1,718 |
45,646,249 | 2017-08-12T03:33:00.000 | 0 | 0 | 0 | 0 | python-3.x,cgi | 45,646,512 | 1 | false | 0 | 0 | Well, I've finally found the solution. The problem (which I didn't see first) was that the server sent plain text to client. Here is one way to send binary data :
import cgi
import os
import shutil
import sys
print('Content-Type: application/octet-stream; file="Library.db"')
print('Content-Disposition: attachment; filename="Library.db"\n')
sys.stdout.flush()
db = os.path.realpath('..') + '/Library.db'
with open(db,'rb') as file:
shutil.copyfileobj(file, sys.stdout.buffer)
But if someone has a better syntax, I would be glad to see it ! Thank you ! | 1 | 0 | 0 | I'm working on a little python3 server and I want to download a sqlite database from this server. But when I tried that, I discovered that the downloaded file is larger than the original : the original file size is 108K, the downloaded file size is 247K. I've tried this many times, and each time I had the same result. I also checked the sum with sha256, which have different results.
Here is my downloader.py file :
import cgi
import os
print('Content-Type: application/octet-stream')
print('Content-Disposition: attachment; filename="Library.db"\n')
db = os.path.realpath('..') + '/Library.db'
with open(db,'rb') as file:
    print(file.read())
Thanks in advance !
EDIT :
I tried that :
$ ./downloader > file
file's size is also 247K. | File downloaded larger than original | 0 | 1 | 0 | 445 |
45,649,949 | 2017-08-12T11:53:00.000 | 3 | 0 | 1 | 0 | python,newline | 45,649,980 | 3 | false | 0 | 0 | Your question's pretty unclear, but if you simply need to remove all newlines, use str.replace (replace("\n", "")) | 1 | 2 | 0 | I have a sequence file begining with > and afterwards a lot of letters followed by newlines and then again >. I am able to extract the line begining with > and the letter lines in to two different variables, when there are no newlines, but I does not work if there are any newlines. My question is, how can I, in my script, remove these newlines? | Removing newlines in Python | 0.197375 | 0 | 0 | 1,658 |