Dataset schema - each record below is one pipe-delimited row with the following columns, in order:
Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | DISCREPANCY (int64) | Tags (string) | ERRORS (int64) | A_Id (int64) | API_CHANGE (int64) | AnswerCount (int64) | REVIEW (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | DOCUMENTATION (int64) | Question (string) | Title (string) | CONCEPTUAL (int64) | Score (float64) | API_USAGE (int64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
44,577,583 | 2017-06-15T21:49:00.000 | 0 | 0 | 1 | 1 | 1 | python,windows,py2exe | 0 | 44,632,191 | 0 | 1 | 0 | true | 0 | 0 | I solved the problem myself, and I'm going to share the answer in case someone ever runs into the same problem. I just had to download a 32-bit version of Canopy (with Python 2.7) and py2exe in order for them to work on Windows 7. | 1 | 5 | 0 | 0 | I created a .exe file using Py2exe on Windows 10, but when I try to run it on a Windows 7 computer it says that the OS version is wrong.
Can anyone tell me how to fix this? (like using another Python or Py2exe version or setting a specific configuration inside setup.py) | Py2exe - Can't run a .exe created on Windows 10 with a Windows 7 computer | 0 | 1.2 | 1 | 0 | 0 | 850 |
44,586,049 | 2017-06-16T09:43:00.000 | 0 | 0 | 1 | 1 | 0 | python,anaconda,default | 0 | 44,586,285 | 0 | 1 | 0 | true | 0 | 0 | Set the environment path variable of your default python interpreter in system properties.
or, if that doesn't work, run the script explicitly:
in cmd: C:\Python27\python.exe yourfilename.py
where the first part of the command is your interpreter's location and the second is your file name | 1 | 2 | 0 | 0 | Windows PowerShell and cmd use the Anaconda Python instead of the default Windows installation
how to make them use the default python installation?
my os is win 8.1
python 3.6
anaconda python 3.6 | How to change cmd python from anaconda to default python? | 0 | 1.2 | 1 | 0 | 0 | 1,228 |
44,594,309 | 2017-06-16T16:50:00.000 | 1 | 1 | 0 | 0 | 0 | python,discord,discord.py | 0 | 44,830,944 | 0 | 3 | 0 | false | 0 | 0 | In Discord, you're never going to be 100% sure who invited the user.
Using Invite, you know who created the invite.
Using on_member_join, you know who joined.
So, yes, you could have to check invites and see which invite got revoked. However, you will never know for sure who invited since anyone can paste the same invite link anywhere. | 2 | 3 | 0 | 0 | I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method. | Discord.py show who invited a user | 0 | 0.066568 | 1 | 0 | 1 | 13,355 |
44,594,309 | 2017-06-16T16:50:00.000 | 1 | 1 | 0 | 0 | 0 | python,discord,discord.py | 0 | 45,571,128 | 0 | 3 | 0 | false | 0 | 0 | Watching the number of uses an invite has had, or for when they run out of uses and are revoked, is the only way to see how a user was invited to the server. | 2 | 3 | 0 | 0 | I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method. | Discord.py show who invited a user | 0 | 0.066568 | 1 | 0 | 1 | 13,355 |
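A hedged sketch of the invite-diffing approach described in both answers above, assuming the modern discord.py API (guild.invites() returning Invite objects with a uses count, plus the on_member_join event); the cache layout is illustrative, not part of the original answers:

```python
import discord

client = discord.Client(intents=discord.Intents.all())
invite_uses = {}  # guild.id -> {invite.code: uses}

@client.event
async def on_ready():
    for guild in client.guilds:
        invite_uses[guild.id] = {i.code: i.uses for i in await guild.invites()}

@client.event
async def on_member_join(member):
    before = invite_uses.get(member.guild.id, {})
    after = {i.code: i.uses for i in await member.guild.invites()}
    # The invite whose use count went up is (probably) the one that was used;
    # as the answers note, this is never a 100% guarantee.
    used = [code for code, uses in after.items() if uses > before.get(code, 0)]
    invite_uses[member.guild.id] = after
    print("{} probably joined via: {}".format(member, used))
```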
44,595,736 | 2017-06-16T18:24:00.000 | 1 | 1 | 0 | 1 | 0 | python,unix,operating-system | 0 | 44,595,853 | 0 | 3 | 0 | true | 0 | 0 | Python 3.6 has pathlib and its Path objects have methods:
is_dir()
is_file()
is_symlink()
is_socket()
is_fifo()
is_block_device()
is_char_device()
pathlib takes a bit to get used to (at least for me having come to Python from C/C++ on Unix), but it is a nice library | 1 | 6 | 0 | 0 | I would like to get the unix file type of a file specified by path (find out whether it is a regular file, a named pipe, a block device, ...)
I found in the docs os.stat(path).st_type but in Python 3.6, this seems not to work.
Another approach is to use os.DirEntry objects (e. g. by os.listdir(path)), but there are only methods is_dir(), is_file() and is_symlink().
Any ideas how to do it? | Get unix file type with Python os module | 0 | 1.2 | 1 | 0 | 0 | 1,711 |
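A small self-contained sketch of the pathlib approach from the accepted answer (the ordering - checking is_symlink() first because the other tests follow symlinks - is my own addition):

```python
from pathlib import Path

def unix_file_type(path):
    p = Path(path)
    if p.is_symlink():
        return "symlink"
    if p.is_dir():
        return "directory"
    if p.is_fifo():
        return "named pipe"
    if p.is_socket():
        return "socket"
    if p.is_block_device():
        return "block device"
    if p.is_char_device():
        return "character device"
    if p.is_file():
        return "regular file"
    return "unknown"

print(unix_file_type("/dev/null"))  # -> character device
```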
44,597,555 | 2017-06-16T20:33:00.000 | 2 | 0 | 0 | 0 | 0 | python,deep-learning | 0 | 44,609,082 | 0 | 2 | 0 | false | 0 | 0 | They are useful for on-the-fly augmentations, which the previous poster mentioned. This however is not neccessarily restricted to generators, because you can fit for one epoch and then augment your data and fit again.
What does not work with fit is using too much data per epoch though. This means that if you have a dataset of 1 TB and only 8 GB of RAM you can use the generator to load the data on the fly and only hold a couple of batches in memory. This helps tremendously on scaling to huge datasets. | 1 | 4 | 1 | 0 | When and how should I use fit_generator?
What is the difference between fit and fit_generator? | How to use model.fit_generator in keras | 0 | 0.197375 | 1 | 0 | 0 | 3,009 |
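A minimal sketch of fit_generator with a hand-written batch generator, assuming the Keras 2-era API; model, train_files and the two load_* helpers are placeholders, not real library functions:

```python
import numpy as np

def batch_generator(file_paths, batch_size=32):
    # Must loop forever; Keras ends each epoch after steps_per_epoch batches
    while True:
        for i in range(0, len(file_paths), batch_size):
            chunk = file_paths[i:i + batch_size]
            x = np.stack([load_example(p) for p in chunk])  # hypothetical loader
            y = np.stack([load_label(p) for p in chunk])    # hypothetical loader
            yield x, y

model.fit_generator(batch_generator(train_files),
                    steps_per_epoch=len(train_files) // 32,
                    epochs=10)
```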
44,606,342 | 2017-06-17T15:37:00.000 | 1 | 0 | 0 | 0 | 0 | python,postgresql,python-3.x,flask | 0 | 44,608,243 | 0 | 1 | 0 | true | 1 | 0 | These requirements are more or less straightforward to follow. Given that you will have a persistent database that can share the state of each file with multiple sessions - and even multiple deploys - of your system - and that is more or less a given with Python + PostgreSQL.
I'd suggest you create a Python class with a few fields you can use for the whole process, and use an ORM like SQLAlchemy or Django's to bind those to a database. The fields you will need are more or less: filename, filepath, timestamp, check_status - plus some extras like "locked_for_checking" and "checker" (which might be a foreign key to a Users collection). On presenting a file as a suggestion to a given user, you set the "locked_for_checking" flag - and for the overall listing, you build a list that excludes files that are "checked" or "locked_for_checking" (and sort the files by timestamp/size or other metadata that meets your requirements).
You will need some logic to "unlock for checking" if the first user does not complete the check within a given time frame, but that is it. | 1 | 0 | 0 | 0 | Firstly, this question isn't a request for code suggestions - it's more of a question about the general approach others would take for a given problem.
I've been given the task of writing a web application in python to allow users to check the content of media files held on a shared server. There will also likely be a postgres database from which records for each file will be gathered.
I want the web app to:
1) Suggest the next file to check (from files that have yet to be checked) and have a link to the next unchecked file once the result of the previous check have been submitted.
2) Prevent the app from suggesting the same file to multiple users simultaneously.
If it was just one user checking the files it would be easier, but I'm having trouble conceptualising how i'm going to achieve the two points above with multiple simultaneous users.
As I say, this isn't a code request i'm just just interested in what approach/tools others feel would be best suited to this type of project.
If there are any python libraries that could be useful i'd be interested to hear any recommendations.
Thanks | Python web app ideas- incremental/unique file suggestions for multiple users | 0 | 1.2 | 1 | 1 | 0 | 49 |
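A hedged Django-style sketch of the model described in the answer above; the field names come from the answer, everything else (model name, locking helper) is illustrative, and strict concurrency would additionally want a transaction with select_for_update:

```python
from django.contrib.auth.models import User
from django.db import models

class MediaFile(models.Model):
    filename = models.CharField(max_length=255)
    filepath = models.CharField(max_length=1024)
    timestamp = models.DateTimeField(auto_now_add=True)
    check_status = models.BooleanField(default=False)
    locked_for_checking = models.BooleanField(default=False)
    checker = models.ForeignKey(User, null=True, blank=True,
                                on_delete=models.SET_NULL)

def next_unchecked(user):
    """Suggest the oldest file nobody is checking, and lock it for this user."""
    f = (MediaFile.objects
         .filter(check_status=False, locked_for_checking=False)
         .order_by('timestamp')
         .first())
    if f is not None:
        f.locked_for_checking = True
        f.checker = user
        f.save()
    return f
```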
44,618,760 | 2017-06-18T19:31:00.000 | 1 | 0 | 1 | 0 | 0 | c#,python,data-sharing,multiple-languages | 0 | 44,618,813 | 0 | 2 | 0 | false | 0 | 0 | Any kind of IPC (InterProcess Communication) — sockets or shared memory. Any common format — plain text files or structured, JSON, e.g. Or a database. | 2 | 0 | 0 | 0 | I want to share data between programs that run locally which uses different languages, I don't know how to approach this.
For example, if I have a program that uses C# to run and another that uses python to run, and I want to share some strings between the two, how can I do it?
I thought about using sockets for this but I'm not sure that this is the right approach, I also thought about saving the data in a file, then reading the file from the other program, but, it might even be worse than using sockets.
Note that I need to share strings almost a thousand times between the programs | How to share data between programs that use different languages to run | 0 | 0.099668 | 1 | 0 | 0 | 406 |
44,618,760 | 2017-06-18T19:31:00.000 | 3 | 0 | 1 | 0 | 0 | c#,python,data-sharing,multiple-languages | 0 | 44,618,831 | 0 | 2 | 0 | true | 0 | 0 | There are a lot of ways to do so, I would recommend you reading more about IPC (Inter Process Communication) - sockets, pipes, named pipes, shared memory and etc...
Each method has its own advantages; therefore, you need to think about what you're trying to achieve and choose the method that fits you best. | 2 | 0 | 0 | 0 | I want to share data between programs that run locally which use different languages; I don't know how to approach this.
For example, if I have a program that uses C# to run and another that uses python to run, and I want to share some strings between the two, how can I do it?
I thought about using sockets for this but I'm not sure that this is the right approach, I also thought about saving the data in a file, then reading the file from the other program, but, it might even be worse than using sockets.
Note that I need to share strings almost a thousand times between the programs | How to share data between programs that use different languages to run | 0 | 1.2 | 1 | 0 | 0 | 406 |
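As a hedged illustration of the sockets option mentioned in both answers, a minimal Python TCP server that echoes UTF-8 strings on localhost; the C# program would connect with its own TCP client (not shown), and the port number is arbitrary:

```python
import socket

HOST, PORT = "127.0.0.1", 50007  # arbitrary local port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        while True:
            data = conn.recv(4096)   # raw bytes from the other process
            if not data:
                break
            text = data.decode("utf-8")
            conn.sendall(("echo: " + text).encode("utf-8"))
```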
44,618,843 | 2017-06-18T19:41:00.000 | 0 | 0 | 0 | 0 | 0 | python-3.x,scapy,packet-sniffers,sniffing | 0 | 44,703,883 | 0 | 2 | 0 | false | 0 | 0 | Maybe you can get your device MAC address and filter any packets with that address as source address. | 1 | 1 | 0 | 0 | how do I sniff packets that are only outbound packets?
I tried to sniff only destination port but it doesn't succeed at all | Python(scapy): how to sniff packets that are only outboun packets | 0 | 0 | 1 | 0 | 1 | 736 |
44,626,578 | 2017-06-19T09:17:00.000 | 0 | 0 | 1 | 0 | 0 | excel,windows,python-2.7 | 0 | 44,626,892 | 0 | 1 | 0 | false | 0 | 0 | I am assuming that both excel sheets have a list of words, with one word in each cell.
The best way to write this program would be something like this:
Open the first excel file, you might find it easier to open if you export it as a CSV first.
Create a Dictionary to store word and Cell Index Pairs
Iterate over each cell/word, add the word to the dictionary as the Key, with the Cell Reference as the Value.
Open the second excel file.
Iterate over each cell/word, check if the word is present in the Dictionary, if it is, you can print out the corresponding cells or store them however you want. | 1 | 0 | 0 | 0 | I want to find the same words in two different excel workbooks. I have two excel workbooks (data.xls and data1.xls). If in data.xls have the same words in the data1.xls, i want it to print the row of data1.xls that contain of the same words with data.xls. I hope u can help me. Thank you. | python- how to find same words in two different excel workbooks | 0 | 0 | 1 | 1 | 0 | 67 |
44,630,642 | 2017-06-19T12:30:00.000 | 2 | 0 | 0 | 0 | 0 | python,arrays,django | 0 | 61,437,282 | 0 | 5 | 0 | false | 1 | 0 | I don't know why nobody has suggested it, but you can always pickle things and put the result into a binary field.
The advantages of this method are that it will work with just about any database, it's efficient, and it's applicable to more than just arrays. The downside is that you can't have the database run queries on the pickled data (not easily, anyway). | 1 | 52 | 0 | 0 | I was wondering if it's possible to store an array in a Django model?
I'm asking this because I need to store an array of int (e.g [1,2,3]) in a field and then be able to search a specific array and get a match with it or by it's possible combinations.
I was thinking to store that arrays as strings in CharFields and then, when I need to search something, concatenate the values(obtained by filtering other model) with '[', ']' and ',' and then use a object filter with that generated string. The problem is that I will have to generate each possible combination and then filter them one by one until I get a match, and I believe that this might be inefficient.
So, I hope you can give me other ideas that I could try.
I'm not asking for code, necessarily, any ideas on how to achieve this will be good. | Is it possible to store an array in Django model? | 1 | 0.07983 | 1 | 0 | 0 | 103,363 |
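A hedged sketch of the pickle-into-a-binary-field idea from the answer above, using Django's BinaryField; as the answer warns, the database cannot query inside the blob:

```python
import pickle
from django.db import models

class Record(models.Model):
    data = models.BinaryField()  # opaque bytes as far as the database goes

    def set_array(self, values):
        self.data = pickle.dumps(values)

    def get_array(self):
        # bytes() guards against backends returning a memoryview
        return pickle.loads(bytes(self.data))

# r = Record(); r.set_array([1, 2, 3]); r.save()
# r.get_array()  -> [1, 2, 3]
```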
44,632,982 | 2017-06-19T14:15:00.000 | 0 | 1 | 0 | 0 | 0 | python,api,twitter,twitter-oauth,chatbot | 0 | 44,717,595 | 0 | 2 | 1 | true | 0 | 0 | Answering my own question.
A webhook isn't needed. After searching Twitter's documentation for long hours, I made a well-working DM bot: it uses the Twitter Streaming API and the StreamListener class from tweepy, and whenever a DM is received, I send a request to the REST API, which sends a DM to the intended recipient. | 1 | 0 | 0 | 0 | I am trying to build a Twitter chat bot which is interactive and replies according to incoming messages from users. The webhook documentation is unclear on how I receive incoming message notifications. I'm using Python. | Does twitter support webhooks for chatbots or should i use Stream API? | 0 | 1.2 | 1 | 0 | 1 | 662
44,639,106 | 2017-06-19T20:10:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow | 0 | 44,937,787 | 0 | 1 | 0 | false | 0 | 0 | I can't comment on the question because of low rep, so using an answer instead.
Can you clarify your question a bit, maybe with a small concrete example using very small tensors?
What are the "columns" you are referring to? You say that you want to keep 50 columns (presumably 50 numbers) per image. If so, the (10, 50) shape seems like what you want - it has 50 numbers for each image in the batch. The (10, 50, 20, 3) shape you mention would allocate 50 numbers to each "image_column x channel". That is 20*3*50 = 3000 numbers per image. How do you want to construct them from the 50 that you have?
Also, can you give a link to tf.batch_nd(). I did not find anything similar and relevant. | 1 | 0 | 1 | 0 | I have a tensor of shape (10, 100, 20, 3). Basically, it can be thought of as a batch of images. So the image height is 100 and width is 20 and channel depth is 3.
I have run some computations to generate a set of 10*50 indices corresponding to 50 columns I would like to keep per image in the batch. The indices are stored in a tensor of shape (10, 50). I would like to end up with a tensor of shape (10, 50, 20, 3).
I have looked into tf.batch_nd() but I can't figure out the semantics for how indices are actually used.
Any thoughts? | TensorFlow extracting columns | 0 | 0 | 1 | 0 | 0 | 78 |
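For what the question seems to be after - keeping 50 of the 100 rows per image - a hedged sketch with tf.gather_nd (tf.batch_nd does not appear to exist); this uses TF 1.x-era names and assumes the (10, 50) indices select along axis 1:

```python
import tensorflow as tf

x = tf.random_normal([10, 100, 20, 3])                         # batch of images
idx = tf.random_uniform([10, 50], maxval=100, dtype=tf.int32)  # stand-in indices

batch = tf.tile(tf.range(10)[:, None], [1, 50])                # (10, 50) batch ids
pairs = tf.stack([batch, idx], axis=-1)                        # (10, 50, 2)
out = tf.gather_nd(x, pairs)                                   # (10, 50, 20, 3)
```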
44,651,925 | 2017-06-20T11:33:00.000 | 2 | 0 | 1 | 0 | 1 | python,linux,windows,keyboard-shortcuts,interpreter | 0 | 44,653,688 | 0 | 1 | 0 | true | 0 | 0 | Normally, IDLE has an Option / Configure IDLE menu which allows you to remap almost any action to a key combination. The newline and indent action is by default mapped to Key Return and Num Keypad Return, while Ctrl J is used for plain newline and indent. But it is easy to change this mapping configuration. | 1 | 1 | 0 | 0 | Recently my Enter key stopped working. For sure it's a hardware problem!. However I managed so many days without Enter key by using the alternatives ctrl + j or ctrl + m .Running python programs was fine as I would run the script by saving it in a file. Now that I need to give commandline values I have to press enter for it to be accepted in the IDLE Interpreter. While typing this too I can't press enter or ctrl + j or ctrl + m.
But how did I type this newline? I copied an empty newline from another file. Even this doesn't work in the interpreter. Can someone suggest any way to enter values in the Python IDLE interpreter without actually using the Enter key?
One good alternative would be to use cmd or a terminal with the command line python script.py, and then use Ctrl+M, as that works there.
But I miss the Python interpreter. Any alternatives or suggestions?
Of course an on-screen keyboard is an option, but I'm looking for key alternatives to Enter in the Python interpreter. Is that even possible? | Alternative for 'enter' key in python interpreter? | 0 | 1.2 | 1 | 0 | 0 | 1,474
44,677,753 | 2017-06-21T13:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,rpm,packaging,distutils,rpm-spec | 0 | 46,433,084 | 0 | 1 | 0 | false | 0 | 0 | Any answer likely depends on the distro for which the rpm was built. A generic, albeit manual approach, would to start with rpm -q --requires $PACKAGE but as you already have the spec file, you can simply rpmspec -q --requires *spec to get that same info. Look for the packages providing Python resources, e.g., python3-requests. You'll need to translate each of these into the Python package name, e.g., 'requests' for your setup.py. You may find that rpm -q --provides python3-requests to be useful at this step; maybe not. | 1 | 0 | 0 | 0 | Basically I'm working on porting a program from being packaged with RPM into using setup.py to package it as a wheel. My core question is whether there exists some guide or tool on how to make this conversion.
The key issue is that I'm looking to convert dependencies as specified by RPM's spec file to setup.py and can't find any information online as to how to do this. | How to convert RPM spec file dependencies to Python setup.py? | 0 | 0 | 1 | 0 | 0 | 497 |
44,678,133 | 2017-06-21T13:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,selenium,selenium-chromedriver,user-agent | 0 | 44,678,533 | 0 | 1 | 0 | true | 0 | 0 | I would go with creating a new driver and copy all the necessary attributes from the old driver except the user agent. | 1 | 0 | 0 | 0 | I am running some code with selenium using python, and I figured out that I need to dynamically change the UserAgent after I already created the webdriver. Any advice if it is possible and how this could be done? Just to highlight - I want to change it on the fly, after almost each GET or POST request I send | Python selenium with chrome webdriver - change user agent | 1 | 1.2 | 1 | 0 | 1 | 1,149 |
44,678,706 | 2017-06-21T14:17:00.000 | 2 | 0 | 1 | 0 | 1 | python,python-3.x,virtualenv,virtualenvwrapper | 0 | 44,679,103 | 0 | 4 | 0 | false | 0 | 0 | Requirements:
Virtual Env
Pycharm
Go to Virtual env and type which python
Add remote project interpreter (File > Default Settings > Project Interpreter (cog) add remote)
You'll need to set up your file system so that PyCharm can also open the project.
NOTE:
Do not turn off your virtual environment without saving your run configurations; that will cause PyCharm to see your run configurations as corrupt.
There's a button at the top right that reads "Share" - enable this and your run configs will be saved to a .idea file and you'll have a lot fewer issues | 2 | 4 | 0 | 0 | I've been searching for this with no success. I don't know if I'm missing something, but I have a virtualenv already - how do I create a project to associate the virtualenv with? Thanks.
P.S. I'm on Windows | Associating a python project with a virtual environment | 0 | 0.099668 | 1 | 0 | 0 | 3,130
44,678,706 | 2017-06-21T14:17:00.000 | 1 | 0 | 1 | 0 | 1 | python,python-3.x,virtualenv,virtualenvwrapper | 0 | 44,679,249 | 0 | 4 | 0 | false | 0 | 0 | If you already have your virtualenv installed you just need to start using it.
Create your project's virtual environment using virtualenv env_name in cmd. To associate a specific version of Python with your environment, use: virtualenv env_name -p pythonx.x;
Activate your environment by navigating into its Scripts folder and executing activate.
Your terminal now is using your virtual environment, that means every python package you install and the python version you run will be the ones you configured inside your env.
I like to create environments with names similar to my projects, and I always use one environment per project; that helps keep track of which packages each of my projects needs to run.
If you haven't read much about venvs yet, try googling about requirements.txt along with pip freeze command those are pretty useful to keep track of your project's packages. | 2 | 4 | 0 | 0 | been searching for this with no success, i don't know if i am missing something but i have a virtualenv already but how do i create a project to associate the virtualenv with, thanks
P.S. Am on windows | Associating a python project with a virtual environment | 0 | 0.049958 | 1 | 0 | 0 | 3,130 |
44,679,656 | 2017-06-21T14:58:00.000 | 1 | 0 | 0 | 0 | 0 | python,http,scapy | 0 | 44,703,791 | 0 | 3 | 0 | false | 0 | 0 | Yes, you can. You can filter by TCP port 80 (checking each packet or using BPF) and then check the TCP payload to ensure there is an HTTP header. | 1 | 2 | 0 | 0 | I am trying to make a filter for packets that contain HTTP data, yet I don't have a clue on how to do so.
I.E. Is there a way to filter packets using Scapy that are only HTTP? | Using Scapy to fitler HTTP packets | 0 | 0.066568 | 1 | 0 | 1 | 3,860 |
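A hedged Scapy sketch of the answer's two-step filter (BPF for port 80, then a payload check); the HTTP method-prefix heuristic is my own simplification:

```python
from scapy.all import Raw, TCP, sniff

HTTP_PREFIXES = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"HTTP/1.")

def is_http(pkt):
    return (TCP in pkt and Raw in pkt
            and pkt[Raw].load.startswith(HTTP_PREFIXES))

sniff(filter="tcp port 80", lfilter=is_http,
      prn=lambda p: p.summary(), count=10)
```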
44,698,229 | 2017-06-22T11:33:00.000 | 1 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 62,299,368 | 0 | 2 | 0 | false | 1 | 0 | Try
gcloud app deploy dispatch.yaml
...to connect services to dispatch rules. | 1 | 0 | 0 | 0 | I edited my dispatch.yaml and deployed on app engine using
appcfg.py update_dispatch .
But when I go and see source code under StackDriver debug, I don't see the change.
Why don't the changes get reflected? When I deploy the complete app with appcfg.py update . the changes do get reflected.
But if I only want to update the dispatch file, how do I do that? | dispatch.yaml not getting updated | 0 | 0.099668 | 1 | 0 | 0 | 306
44,703,003 | 2017-06-22T14:59:00.000 | 1 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 44,703,268 | 0 | 1 | 1 | true | 0 | 0 | Multiprocessing is generally for when you want to take advantage of the computational power of multiple processing cores. Multiprocessing limits your options on how to handle shared state between components of your program, as memory is copied initially on process creation, but not shared or updated automatically. Threads execute from the same region of memory, and do not have this restriction, but cannot take advantage of multiple cores for computational performance. Your application does not sound like it would require large amounts of computation, and simply would benefit from concurrency to be able to handle user input, networking, and a small amount of processing at the same time. I would say you need threads not processes. I am not experienced enough with asyncio to give a good comparison of that to threads.
Edit: This looks like a fairly involved project, so don't expect it to go perfectly the first time you hit "run", but definitely very doable and interesting.
Here's how I would structure this project...
I see effectively four separate threads here (maybe small ancillary daemon threads for stupid little tasks)
I would have one thread acting as your temperature controller (PID control / whatever) that has sole control of the heater output (other threads get to make requests to change the setpoint / control mode (duty cycle / PID)).
I would have one main thread (with a few daemon threads) to handle the data logging: the main thread listens for logging commands (pause, resume, get, etc.); daemon threads poll the thermometer, rotate log files, etc.
I am not as familiar with networking, and this will be specific to your client application, but I would probably get started with http.server just for prototyping, or maybe something like websockets and a little bit of asyncio. The main thing is that it would interact with the data logger and temperature controller threads with getters and setters rather than directly modifying values
Finally, for the keypad input, I would likely just make up a quick tkinter application to grab keypresses, because that's what I know. Again, form a request with the tkinter app, but don't modify values directly; use getters and setters when "talking" between threads. It just keeps things better organized and compartmentalized. | 1 | 1 | 0 | 0 | I am trying to build a temperature control module that can be controlled over a network or with manual controls. the individual parts of my program all work but I'm having trouble figuring out how to make them all work together.also my temperature control module is python and the client is C#.
As far as physical components go, I have a keypad that sets a temperature and turns the heater on and off, an LCD screen that displays temperature data, and of course a temperature sensor.
for my network stuff i need to:
constantly send temperature data to the client.
send a list of log files to the client.
await prompts from the client to either set the desired temperature or send a log file to the client.
So far all the hardware works fine and each individual part of the network functions works, but not together. I have not tried to use both physical and network components.
I have been attempting to use threads for this but was wondering if I should be using something else?
EDIT:
here is the basic logic behind what i want to do:
Hardware:
keypad takes a number inputs until '*' it then sets a temp variable.
temp variable is compared to sensor data and the heater is turned on or off accordingly.
'#' turns off the heater and sets the temp variable to 0.
sensor data is written to log files while temp variable is not 0
Network:
upon client connect the client is sent a list of log files
temperature sensor data is continuously sent to client.
prompt handler listens for prompts.
if client requests log file the temperature data is halted and the file sent after which the temperature data is resumed.
client can send a command to the prompt handler to set the temp variable to trigger the heater
client can send a command to the prompt handler to stop the heater and set temp variable to 0
commands from either the keypad or client should work at all times. | should i be using threads multiprocessing or asycio for my project? | 0 | 1.2 | 1 | 0 | 0 | 53 |
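A hedged sketch of the getter/setter pattern the answer above keeps stressing - one controller thread owns the heater while other threads only talk to it through locked accessors; read_sensor and set_heater are hypothetical hardware calls:

```python
import threading
import time

class TempController(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self._lock = threading.Lock()
        self._setpoint = 0.0          # 0 means "heater off"

    def set_target(self, temp):       # called from keypad / network threads
        with self._lock:
            self._setpoint = temp

    def get_target(self):
        with self._lock:
            return self._setpoint

    def run(self):                    # sole owner of the heater output
        while True:
            target = self.get_target()
            current = read_sensor()                        # hypothetical
            set_heater(bool(target) and current < target)  # hypothetical
            time.sleep(0.5)
```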
44,705,077 | 2017-06-22T16:39:00.000 | 0 | 0 | 0 | 0 | 1 | python,opencv,cmake | 1 | 44,717,895 | 0 | 1 | 0 | false | 0 | 1 | The problem was an old version of the module lurking an a different folder where the python script was actually looking. This must have been created in the past with an OpenCV 3.1 environment. | 1 | 0 | 0 | 0 | I'm trying to run a python script that uses a custom module written by someone else. I created that module by running CMake according to the creator's instructions. Running my python script, I get the error: ImportError: libopencv_imgproc.so.3.1: cannot open shared object file: No such file or directory. This error is caused by the module I created earlier.
There is no file of that name since I have OpenCV 3.2.0 installed, so in usr/local/lib there's libopencv_imgproc.so.3.2.0. I don't know how to fix this or where to start looking. The CMakeLists.txt of the module has a line
find_package(OpenCV 3 COMPONENTS core highgui imgproc REQUIRED).
I tried changing it to
find_package(OpenCV 3.2.0 COMPONENTS core highgui imgproc REQUIRED),
without success. | How can I force CMake to use the correct OpenCV version? | 0 | 0 | 1 | 0 | 0 | 430 |
44,711,048 | 2017-06-22T23:57:00.000 | 0 | 0 | 1 | 0 | 0 | python,string,utf-8,byte | 0 | 44,711,115 | 0 | 2 | 0 | false | 0 | 0 | You need to decode the byte data:
byte_data.decode("utf-8") | 1 | 1 | 0 | 0 | I'm a Python3 User. And I'm now face some problem about byte to string control..
First, I'm get data from some server as a byte.
[Byte data] : b'\xaaD\x12\x1c+\x00\x00 \x18\x08\x00\x00\x88\xb4\xa2\x07\xf8\xaf\xb6\x19\x00\x00\x00\x00\x03Q\xfa3/\x00\x00\x00\x1d\x00\x00\x00\x86=\xbd\xc9~\x98uA>\xdf#=\x9a\xd8\xdb\x18\x1c_\x9c\xc1\xe4\xb4\xfc;'
This data isn't escape any string type such as utf-8, unicode-escape ...
Who know the solution how to control these data? | How to escape the string "\x0a\xfd\x ....." in python? | 0 | 0 | 1 | 0 | 0 | 1,232 |
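One illustrative caveat worth adding: the payload shown starts with 0xaa, which is not valid UTF-8, so a plain .decode("utf-8") would raise UnicodeDecodeError on it; the error-handling options below are standard Python, not something the answer covers:

```python
raw = b'\xaaD\x12\x1c+\x00\x00 \x18\x08\x00\x00'

print(raw.decode("utf-8", errors="replace"))  # lossy: bad bytes become U+FFFD
print(raw.decode("latin-1"))                  # lossless: every byte maps to a char

# If this is really a binary protocol, parse it instead, e.g. with struct
import struct
header = struct.unpack_from("<BcHH", raw)     # illustrative layout only
```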
44,711,871 | 2017-06-23T01:55:00.000 | 1 | 0 | 0 | 0 | 0 | python,python-3.x,intellij-idea,ide | 0 | 44,711,956 | 0 | 1 | 0 | true | 0 | 0 | It is pre-configured as a file template in some IDEs like PyDev, but not in IDEA; you can add it manually if you want it. I also recommend using PyCharm instead of IDEA for Python. | 1 | 0 | 0 | 0 | I am learning Python by watching YouTube videos and also through an online course that I bought. In every video I watch, the first line of each file is: __author__ = 'dev'. For some reason, when I start a new file this does not come up. What does this mean, and if it is an issue, how do I correct it?
FYI I am using IntelliJ IDEA as an IDE.
Thank you! | script first line __author__="dev" does not show up | 0 | 1.2 | 1 | 0 | 0 | 91
44,716,368 | 2017-06-23T08:14:00.000 | 2 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 1 | 44,736,370 | 0 | 1 | 0 | true | 0 | 0 | The NaNs are produced because the eigenvalues (self.lambdas_) of the input matrix are negative, which provokes the error: the square root does not operate on negative values.
The issue might be overcome by setting KernelPCA(remove_zero_eig=True, ...), but doing so would not preserve the original dimensionality of the data. Using this parameter is a last resort, as the model's results may be skewed.
Actually, it has been stated that negative eigenvalues indicate model misspecification, which is obviously bad. A possible way around this without eroding the data's dimensionality via the remove_zero_eig parameter is to reduce the number of original features that are highly correlated. Build the correlation matrix and see what those values are, then omit the redundant features and fit the KernelPCA() again. | 1 | 2 | 0 | 0 | After applying KernelPCA to my data and passing it to a classifier (SVC) I'm getting the following error:
ValueError: Input contains NaN, infinity or a value too large for
dtype('float64').
and this warning while performing KernelPCA:
RuntimeWarning: invalid value encountered in sqrt X_transformed =
self.alphas_ * np.sqrt(self.lambdas_)
Looking at the transformed data I've found several nan values.
It makes no difference which kernel I'm using. I tried cosine, rbf and linear.
But what's interesting:
My original data only contains values between 0 and 1 (no inf or nan), it's scaled with MinMaxScaler
Applying standard PCA works, which I thought to be the same as KernelPCA with linear kernel.
Some more facts:
My data is high dimensional ( > 8000 features) and mostly sparse.
I'm using the newest version of scikit-learn, 18.2
Any idea how to overcome this and what could be the reason? | KernelPCA produces NaNs | 0 | 1.2 | 1 | 0 | 0 | 622 |
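A small sketch of the suggested fixes - dropping highly correlated features first, then fitting KernelPCA; the 0.95 threshold is an arbitrary illustration, and with 8000+ features the correlation matrix itself gets large:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def drop_correlated(X, threshold=0.95):
    corr = np.abs(np.corrcoef(X, rowvar=False))
    upper = np.triu(corr, k=1)                # count each pair once
    keep = [i for i in range(X.shape[1]) if not (upper[:, i] > threshold).any()]
    return X[:, keep]

X_reduced = drop_correlated(X)                # X: your MinMax-scaled matrix
kpca = KernelPCA(kernel="rbf", remove_zero_eig=True)
X_transformed = kpca.fit_transform(X_reduced)
```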
44,721,450 | 2017-06-23T12:27:00.000 | 0 | 0 | 1 | 0 | 1 | python,eclipse,import,pip,docx | 0 | 44,775,360 | 0 | 2 | 0 | false | 0 | 0 | thanks for the reply.
The actual problem was that I was using Python 3.6, whereas Eclipse only accepts Python grammar versions up to 3.5. The docx package also only works with Python 2.6, 2.7, 3.3, or 3.4, so I installed Python 3.4 and docx is now working! | 1 | 0 | 0 | 0 | So basically I used pip to install the docx Python package and it installed correctly (verified by the freeze command). However, I cannot import the package in Eclipse.
Through some serious effort I've noticed that I can import the package using the 32-bit IDLE shell, whereas I cannot when using the 64-bit IDLE shell. My PC is 64-bit, so I do not know why I cannot import a 32-bit package in Eclipse, a problem I've never encountered before.
Does anybody have any insights as to how I can import this package properly in eclipse? I'm sure there's a very reasonable cause and hopefully solution as to why this is happening and would really appreciate if anyone could help with this issue as I need to use this package for the specific project I aim to do.
side note: I'm using python 3.6 if that's of any relevance | Eclipse cannot import already installed pip package | 0 | 0 | 1 | 0 | 0 | 582 |
44,723,464 | 2017-06-23T14:08:00.000 | 0 | 0 | 0 | 0 | 0 | python,machine-learning,deep-learning,keras | 1 | 45,011,256 | 0 | 1 | 0 | true | 0 | 0 | The low accuracy was caused by a problem in the layers. I just modified my network and obtained 0.7496 accuracy. | 1 | 1 | 0 | 0 | I was trying to train the CIFAR10 and MNIST datasets on the VGG16 network. In my first attempt, I got an error which says the shape of input_2 (labels) must be (None,2,2,10). What information does this structure hold in a 2x2x10 array? I expect input_2 to have shape (None, 10) (there are 10 classes in both my datasets).
I tried to expand dimensions of my labels from (None,10) to (None,2,2,10). But I am sure this is not the correct way to do it since I obtain a very low accuracy (around 0.09)
(I am using keras, Python3.5) | VGG16 Training new dataset: Why VGG16 needs label to have shape (None,2,2,10) and how do I train mnist dataset with this network? | 0 | 1.2 | 1 | 0 | 0 | 212 |
44,727,232 | 2017-06-23T17:45:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 57,961,296 | 0 | 9 | 0 | false | 0 | 0 | This absolutely worked for me . I am using windows 10 professional edition and it has taken me almost 6 months to get this solution.Thanks to the suggestion made above.
I followed this suggestion and it worked right away and smoothly. All I did was to instruct the scheduler to run python.exe with my script as an argument just as explained by this fellow below
This what I did Suppose the script you want to run is E:\My script.py. Instead of running the script directly, instruct the task scheduler to run python.exe with the script as an argument. For example:
C:\Python27\ArcGIS10.2\python.exe
"E:\My script.py"
The location of python.exe depends on your install. If you don’t know where it is, you can discover its location; copy and paste the following code into a new Python script then execute the script. The script will print the location of python.exe as well as other information about your Python environment. | 2 | 39 | 0 | 0 | I already tried to convert my .py file into .exe file. Unfortunately, the .exe file gives problems; I believe this is because my code is fairly complicated.
So, I am trying to schedule my .py file directly with Task Scheduler, but every time I do it and then run it to see if it works, a window pops up and asks me how I would like to open the program?-.-
Does any of you know how I can successfully schedule my .py file with Task Scheduler? Please help, thanks
Windows 10
Python 3.5.2 | Scheduling a .py file on Task Scheduler in Windows 10 | 0 | 0 | 1 | 0 | 0 | 80,315 |
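The code snippet referenced in the answer above was not captured in this extract; a minimal equivalent that prints the interpreter location and environment details would be:

```python
import sys

print(sys.executable)  # full path to the running python.exe
print(sys.version)     # interpreter version details
print(sys.prefix)      # installation directory
```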
44,727,232 | 2017-06-23T17:45:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 44,728,388 | 0 | 9 | 0 | false | 0 | 0 | The script you execute would be the exe found in your python directory
ex) C:\Python27\python.exe
The "argument" would be the path to your script
ex) C:\Path\To\Script.py
So think of it like this: you aren't executing your script technically as a scheduled task. You are executing the root python exe for your computer with your script being fed as a parameter. | 2 | 39 | 0 | 0 | I already tried to convert my .py file into .exe file. Unfortunately, the .exe file gives problems; I believe this is because my code is fairly complicated.
So, I am trying to schedule my .py file directly with Task Scheduler, but every time I do it and then run it to see if it works, a window pops up and asks me how I would like to open the program?-.-
Does any of you know how I can successfully schedule my .py file with Task Scheduler? Please help, thanks
Windows 10
Python 3.5.2 | Scheduling a .py file on Task Scheduler in Windows 10 | 0 | 0.022219 | 1 | 0 | 0 | 80,315 |
44,740,161 | 2017-06-24T19:25:00.000 | 2 | 0 | 0 | 0 | 0 | python-3.x,nlp,word2vec | 0 | 44,740,700 | 0 | 1 | 0 | true | 0 | 0 | If you are splitting each entry into a list of words, that's essentially 'tokenization'.
Word2Vec just learns vectors for each word, not for each text example ('record') – so there's nothing to 'preserve', no vectors for the 45,000 records are ever created. But if there are 26,000 unique words among the records (after applying min_count), you will have 26,000 vectors at the end.
Gensim's Doc2Vec (the 'Paragraph Vector' algorithm) can create a vector for each text example, so you may want to try that.
If you only have word-vectors, one simplistic way to create a vector for a larger text is to just add all the individual word vectors together. Further options include choosing between using the unit-normed word-vectors or raw word-vectors of many magnitudes; whether to then unit-norm the sum; and whether to otherwise weight the words by any other importance factor (such as TF/IDF).
Note that unless your documents are very long, this is a quite small training set for either Word2Vec or Doc2Vec. | 1 | 0 | 1 | 0 | I have 45000 text records in my dataframe. I wanted to convert those 45000 records into word vectors so that I can train a classifier on the word vector. I am not tokenizing the sentences. I just split the each entry into list of words.
After training the word2vec model with 300 features, the shape of the model came out to only 26000. How can I preserve all of my 45000 records?
In the classifier model, I need all of those 45000 records, so that it can match 45000 output labels. | how to preserve number of records in word2vec? | 0 | 1.2 | 1 | 0 | 0 | 277 |
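A hedged gensim sketch of the Doc2Vec suggestion; parameter names follow recent gensim (older releases spell vector_size as size), and the tags are simply row numbers:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=text.split(), tags=[i])
        for i, text in enumerate(texts)]      # texts: your 45,000 records

model = Doc2Vec(docs, vector_size=300, min_count=2, epochs=20)

X = [model.docvecs[i] for i in range(len(docs))]  # one vector per record
```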
44,771,725 | 2017-06-27T03:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-3.x,pyqt,pyqt4 | 0 | 44,775,948 | 0 | 1 | 0 | false | 0 | 1 | This is a pretty broad question. I recommend checking out the many tutorials on Youtube.com.
However, in your init method, put something like this:
self.ui.charge_codes_combo.currentIndexChanged.connect(self.setup_payments)
In my example, the combo box was placed on a form in Qt Designer. self.setup_payments is the method triggered by the change in the combo box.
I hope this helps! | 1 | 0 | 0 | 0 | Exactly how do I utilize the various event methods that widgets have? Say I have a comboBox(drop down list) and I want to initiate a function every time someone changes the choice. There is the changeEvent() method in the documentation but It would be great if someone explains to me with a piece of code. | How to use pyqt widget event() method? | 0 | 0 | 1 | 0 | 0 | 102 |
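A self-contained PyQt4 sketch of the same signal/slot hookup without Qt Designer; the widget names are illustrative:

```python
import sys
from PyQt4 import QtGui

class Demo(QtGui.QWidget):
    def __init__(self):
        super(Demo, self).__init__()
        self.combo = QtGui.QComboBox(self)
        self.combo.addItems(["alpha", "beta", "gamma"])
        # Fires every time the user changes the selection
        self.combo.currentIndexChanged.connect(self.on_change)

    def on_change(self, index):
        print("new choice:", self.combo.itemText(index))

app = QtGui.QApplication(sys.argv)
w = Demo()
w.show()
sys.exit(app.exec_())
```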
44,788,533 | 2017-06-27T19:40:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex,str-replace | 0 | 44,788,966 | 0 | 1 | 0 | false | 0 | 0 | state code always contains 2 uppercase characters, so you can use this pattern to do your replacement.
match this:
([A-Z]{2})\. (with the dot escaped so only a literal period after the two capitals is matched)
and replace by this: $1 | 1 | 0 | 0 | 0 | I have sentences with state codes followed by a . (ie. "CA.", "AL.", but also good "CA", "AL") or things like "acct." or "no."
I'd like to:
1. remove those "."
2. keep other "."
3. change no. to #
For example, I'd like:
"Mr. J. Edgar Hoover from CA. owes us $123.45 from acct. no. 98765."
to become
"Mr. J. Edgar Hoover from CA owes us $123.45 from acct # 98765."
Changing " no." to " #"
and "acct." to "acct"
is easily done with regex or replace and I could do that first to get those out of the way. (I'm open to other efficient approaches).
But how do I change state code . to state code and keep the right state code?
Thanks! | Python remove . after state | 0 | 0 | 1 | 0 | 0 | 59 |
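A hedged end-to-end sketch combining the answer's pattern with the asker's two easy replacements; the \b word boundary is my addition, and in Python's re.sub the backreference is written \1 rather than $1:

```python
import re

s = "Mr. J. Edgar Hoover from CA. owes us $123.45 from acct. no. 98765."
s = s.replace(" no.", " #").replace("acct.", "acct")
s = re.sub(r'\b([A-Z]{2})\.', r'\1', s)
print(s)  # Mr. J. Edgar Hoover from CA owes us $123.45 from acct # 98765.
```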
44,810,259 | 2017-06-28T18:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,forms | 0 | 44,810,611 | 0 | 1 | 0 | false | 1 | 0 | This can only be done using JavaScript. The hard part is to have the management form sync up with the number of rows.
But there are two alternatives:
Semi-JavaScript (Mezzanine's approach): Generate a ton of rows in the formset and only show one empty. Upon the click of the "add another row" button, unhide the next one. This makes it easier to handle the management form, as the unfilled extras don't need any work.
No fix needed: Add as many rows as is humanly sane. In general, people don't need 40 rows, they get bored with filling out the form or worry that all that work is lost when the browser crashes.
Hope this helps you along. Good luck! | 1 | 1 | 0 | 0 | I am trying to create a model via a form that has multiple other models related to it. Say I have a model Publisher, then another model Article with a foreign key to Publisher. When creating a Publisher via a form, I want to create An article at the same time. I know how to do this via formsets. However, I don't know how to add a button that says add extra article at the same view, without having to be redirected to a new page and losing the old data since the form was not saved. What I want is when someone clicks add new article, for a new form for article to appear and for the user to add a new Article. Is this possible to be done in the same view in django, if so can someone give me and idea how to approach this?
I would show code or my attempts, but I am not sure how to even approach it. | Option to add extra choices in django form | 0 | 0 | 1 | 0 | 0 | 56 |
44,836,123 | 2017-06-29T22:49:00.000 | 1 | 0 | 0 | 0 | 0 | r,conda,python-3.6,rpy2,libiconv | 1 | 44,935,654 | 0 | 2 | 0 | true | 0 | 0 | I uninstalled rpy2 and reinstalled with --verborse. I then found
ld: warning: ignoring file /opt/local/lib/libpcre.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libpcre.dylib
ld: warning: ignoring file /opt/local/lib/liblzma.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/liblzma.dylib
ld: warning: ignoring file /opt/local/lib/libbz2.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libbz2.dylib
ld: warning: ignoring file /opt/local/lib/libz.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libz.dylib
ld: warning: ignoring file /opt/local/lib/libiconv.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libiconv.dylib
ld: warning: ignoring file /opt/local/lib/libicuuc.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicuuc.dylib
ld: warning: ignoring file /opt/local/lib/libicui18n.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicui18n.dylib
ld: warning: ignoring file /opt/local/Library/Frameworks/R.framework/R, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/Library/Frameworks/R.framework/R
So I supposed the reason was the architecture incompatibility of the libiconv in /opt/local, causing make to fall back onto the outdated libiconv in /usr/lib. This was strange because my machine should be running on x86_64, not i386. I then tried export ARCHFLAGS="-arch x86_64" and reinstalled libiconv using port. This resolved the problem. | 2 | 2 | 1 | 0 | I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8).
I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error:
Traceback (most recent call last):
File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in
from rpy2 import robjects
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/init.py", line 16, in
import rpy2.rinterface as rinterface
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/init.py", line 92, in
from rpy2.rinterface._rinterface import (baseenv,
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so
Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
Which first seemed like a problem caused by Anaconda, and so I remove all Anaconda-related files but the problem persists.
I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the oldest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4?
I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow change its depending R to the 3.4 one in the system. Another solution is to resolve the error
Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
After much struggling, I've found no good result with either. | Installing rpy2 to work with R 3.4.0 on OSX | 1 | 1.2 | 1 | 0 | 0 | 1,095 |
44,836,123 | 2017-06-29T22:49:00.000 | 0 | 0 | 0 | 0 | 0 | r,conda,python-3.6,rpy2,libiconv | 1 | 53,839,320 | 0 | 2 | 0 | false | 0 | 0 | I had uninstall the version pip installed and install from source python setup.py install on the download https://bitbucket.org/rpy2/rpy2/downloads/. FWIW not using Anaconda at all either. | 2 | 2 | 1 | 0 | I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8).
I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error:
Traceback (most recent call last):
File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in
from rpy2 import robjects
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/init.py", line 16, in
import rpy2.rinterface as rinterface
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/init.py", line 92, in
from rpy2.rinterface._rinterface import (baseenv,
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so
Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
Which first seemed like a problem caused by Anaconda, and so I remove all Anaconda-related files but the problem persists.
I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the oldest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4?
I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow change its depending R to the 3.4 one in the system. Another solution is to resolve the error
Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
After much struggling, I've found no good result with either. | Installing rpy2 to work with R 3.4.0 on OSX | 1 | 0 | 1 | 0 | 0 | 1,095 |
44,839,204 | 2017-06-30T05:32:00.000 | 0 | 0 | 1 | 0 | 0 | python,django,web-deployment,pythonanywhere | 0 | 44,839,307 | 1 | 4 | 0 | false | 1 | 0 | Try to type "python3 --version". This can work on linux, but I am not sure whether it works on pythonanywhere | 3 | 2 | 0 | 0 | I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this?
Please help me. | how can I change default python version in pythonanywhere? | 0 | 0 | 1 | 0 | 0 | 3,626 |
44,839,204 | 2017-06-30T05:32:00.000 | 2 | 0 | 1 | 0 | 0 | python,django,web-deployment,pythonanywhere | 0 | 44,850,691 | 1 | 4 | 0 | true | 1 | 0 | Python 3.6 is available as python3.6 in a console on PythonAnywhere. | 3 | 2 | 0 | 0 | I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this?
Please help me. | how can I change default python version in pythonanywhere? | 0 | 1.2 | 1 | 0 | 0 | 3,626 |
44,839,204 | 2017-06-30T05:32:00.000 | 2 | 0 | 1 | 0 | 0 | python,django,web-deployment,pythonanywhere | 0 | 55,050,163 | 1 | 4 | 0 | false | 1 | 0 | to set your default python version from 2.7 to 3.7 run the command below
$ alias python=python3
that's it now check the version
$ python --version
it should be solved | 3 | 2 | 0 | 0 | I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this?
Please help me. | how can I change default python version in pythonanywhere? | 0 | 0.099668 | 1 | 0 | 0 | 3,626 |
44,851,959 | 2017-06-30T17:24:00.000 | 0 | 1 | 0 | 0 | 0 | python,scapy | 0 | 44,997,621 | 0 | 1 | 0 | true | 1 | 0 | You can directly answer HTTP requests to pages different to that specific webpage with HTTP redirections (e.g. HTTP 302). Moreover, you should only route packets going to the desired webpage and block the rest (you can do so with a firewall such as iptables). | 1 | 0 | 0 | 0 | I have built a MITM with python and scapy.I want to make the "victim" device be redirected to a specific page each time it tried to access a website. Any suggestions on how to do it?
*Keep in mind that all the traffic from the device already passes through my machine before being routed. | Python : Redirecting device with MITM | 0 | 1.2 | 1 | 0 | 1 | 141 |
44,857,970 | 2017-07-01T06:19:00.000 | 3 | 0 | 0 | 0 | 0 | python,random,pixel | 0 | 44,858,027 | 0 | 4 | 0 | false | 0 | 0 | I'd suggest making a list of coordinates of all non-zero pixels (by checking all pixels in the image), then using random.shuffle on the list and taking the first 100 elements. | 1 | 4 | 1 | 0 | I have a binary image of large size (2000x2000). In this image most of the pixel values are zero and some of them are 1. I need to get only 100 randomly chosen pixel coordinates with value 1 from image. I am beginner in python, so please answer. | how to get random pixel index from binary image with value 1 in python? | 0 | 0.148885 | 1 | 0 | 0 | 3,545 |
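A hedged NumPy version of the same idea - argwhere collects the non-zero coordinates, and sampling 100 without replacement stands in for the shuffle:

```python
import numpy as np

img = np.random.randint(0, 2, size=(2000, 2000))  # stand-in binary image

coords = np.argwhere(img == 1)                    # (N, 2) array of (row, col)
picked = coords[np.random.choice(len(coords), size=100, replace=False)]
print(picked.shape)                               # (100, 2)
```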
44,871,312 | 2017-07-02T13:26:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.x,anaconda,spyder | 0 | 44,871,723 | 0 | 1 | 0 | false | 0 | 0 | (Spyder developer here) Please use the Variable Explorer to visualize Numpy arrays and Pandas DataFrames. That's its main purpose. | 1 | 0 | 1 | 0 | Hi on running a code in the console I am getting the display as:
runfile('C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing/praCTICE.py', wdir='C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing')
and on viewing a small matrix it is showing up as
array([['France', 44.0, 72000.0],
['Spain', 27.0, 48000.0],
['Germany', 30.0, 54000.0],
...,
['France', 48.0, 79000.0],
['Germany', 50.0, 83000.0],
['France', 37.0, 67000.0]], dtype=object)
Even though the matrix is pretty small, how do I change it to get the default view when I run my code in the IPython console?
I installed the latest version of anaconda | In spyder how to get back default view of running a code in Ipython console | 0 | 0.197375 | 1 | 0 | 0 | 272 |
44,902,885 | 2017-07-04T10:02:00.000 | 0 | 0 | 1 | 1 | 0 | python,azure,pyspark,jupyter-notebook,azure-hdinsight | 0 | 44,903,656 | 0 | 3 | 0 | false | 0 | 0 | Have you tried installing using pip?
In some cases where you have both Python 2 and Python 3, you have to run pip3 instead of just pip to invoke pip for Python 3. | 1 | 1 | 1 | 0 | I would like to install python 3.5 packages so they would be available in Jupyter notebook with pyspark3 kernel.
I've tried to run the following script action:
#!/bin/bash
source /usr/bin/anaconda/envs/py35/bin/activate py35
sudo /usr/bin/anaconda/envs/py35/bin/conda install -y keras tensorflow theano gensim
but the packages get installed on python 2.7 and not in 3.5 | how to install python package on azure hdinsight pyspark3 kernel? | 0 | 0 | 1 | 0 | 0 | 2,656 |
44,911,066 | 2017-07-04T16:59:00.000 | 2 | 0 | 0 | 0 | 0 | python,postgresql,azure,psycopg2 | 0 | 44,915,875 | 0 | 1 | 0 | true | 1 | 0 | You don't need the specific pg_config from the target database. It's only being used to compile against libpq, the client library for PostgreSQL, so you only need the matching PostgreSQL client installed on your local machine.
If you're on Windows I strongly advise you to install a pre-compiled PostgreSQL. You can just install the whole server, it comes with the client libraries.
If you're on Linux, you'll probably need the PostgreSQL -devel or -dev package that matches your PostgreSQL version. | 1 | 0 | 0 | 0 | Trying to install a postgresql database which resides on Azure for my python flask application; but the installation of psycopg2 package requires the pg_config file which comes when postgresql is installed. So how do I export the pg_config file from the postgresql database which also resides on azure? Is pg_config all psycopg2 need for a successful installation? | How to retrieve the pg_config file from Azure postgresql Database | 0 | 1.2 | 1 | 1 | 0 | 164 |
44,955,528 | 2017-07-06T17:31:00.000 | 0 | 0 | 1 | 0 | 1 | python | 0 | 44,977,316 | 0 | 3 | 0 | true | 0 | 0 | Using coroutines (multithreading) will provide the desired concurrent functionality. Source in the comments of the question and of user2357112's answer. | 1 | 0 | 0 | 0 | Suppose I want a program Foo.py which has some arbitrary routines Bar(), Quux(), and Fizz(). Let's say that the usual order of execution from a procedural perspective should be Bar() -> Quux() -> Fizz(). However, Fizz() should conditionally call a function Buzz() depending on some runtime action, and calling Buzz() at any time during Fizz() should return the process back to Quux().
I have a fair understanding of how concurrent processes can be implemented in assembly using system calls depending on the architecture, but what options are available to me in Python, where I can't – and frankly would prefer not to – use lots of jumps and directly move an instruction pointer around? When searching for an answer, I found loops and recursion as a suggestion for going back in a program. I don't think a loop would work without stopping the Fizz() process to wait for the condition check for Buzz(), and I'm not sure how recursion could be implemented in this scenario either. (My Buzz() would be like a "Back" button on a GUI). | What are my options for navigating through subroutines? | 1 | 1.2 | 1 | 0 | 0 | 55 |
44,956,676 | 2017-07-06T18:44:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x | 0 | 44,957,535 | 0 | 4 | 0 | false | 0 | 0 | This may not be the most efficient solution, but you could also just hard code it e.g. create a variable equivalent to zero, add one to the variable for each word in the line, and append the word to a list when variable = 5. Then reset the variable equal to zero. | 1 | 1 | 0 | 0 | What would be a pythonic way to create a list of (to illustrate with an example) the fifth string of every line of a text file, assuming it ressembles something like this:
12, 27.i, 3, 6.7, Hello, 438
In this case, the script would add "Hello" (without quotes) to the list.
In other words (to generalize), with an input "input.txt", how could I get a list in python that takes the nth string (n being a defined number) of every line?
Many thanks in advance! | How to create a list of string in nth position of every line in Python | 0 | 0 | 1 | 0 | 0 | 202 |
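For comparison, a more pythonic sketch than manual counting, assuming comma-separated fields as in the sample line:
n = 4  # the fifth string, zero-indexed
with open('input.txt') as f:
    fifth = [line.split(',')[n].strip() for line in f]
print(fifth)  # ['Hello', ...]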
44,958,993 | 2017-07-06T21:16:00.000 | 6 | 1 | 0 | 1 | 0 | python,docker,pytest,pytest-django | 0 | 44,959,229 | 0 | 1 | 0 | true | 0 | 0 | There is no way to do that. You can use a different pytest configuration using pytest -c but tox.ini and setup.cfg must reside in the top-level directory of your package, next to setup.py. | 1 | 5 | 0 | 0 | How can I set an environment variable with the location of the pytest.ini, tox.ini or setup.cfg for running pytest by default?
I created a docker container with a volume pointing to my project directory, so every change I make is also visible inside the docker container. The problem is that I have a pytest.ini file on my project root which won't apply to the docker container.
So I want to set an environment variable inside the docker container to specify where to look for the pytest configuration. Does anyone have any idea how could I do that? | pytest: environment variable to specify pytest.ini location | 0 | 1.2 | 1 | 0 | 0 | 7,501 |
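In practice that invocation looks like this (the path is a placeholder for wherever the host's config lives):
pytest -c /host/project/pytest.ini tests/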
44,981,576 | 2017-07-08T01:11:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex | 0 | 44,981,592 | 0 | 4 | 0 | false | 0 | 0 | You could use the re.findall(regex, string, flags) function in Python. That returns the non-overlapping matches of the pattern in the string as a list of strings. You could then grab the second member of the returned list. | 1 | 0 | 0 | 0 | The text file I'm searching through looks like a lot of text blocks like this:
MKC,2017-06-23 07:54,-94.5930,39.1230,79.00,73.90,84.41,220.00,4.00,0.00,29.68,1003.90,10.00,M,FEW,M,M,M,9500.00,M,M,M,M,KMKC 230754Z 22004KT 10SM FEW095 26/23 A2968 RMK AO2 SLP039 T02610233
(That's all one line)
I'm looking to grab the 2nd occurrence in the line that matches r',\d\.\d{2},', which in this case would be 0.00
I don't know how to specify that I want the nth occurrence of the pattern.
Extra: I've never seen the first value that matches the same pattern go over 9.99, meaning 10.00 and then it would no longer match the same pattern, but it would be nice if there was a way to take this into account. | How to grab the nth occurence of a float on a line using regex? | 0 | 0 | 1 | 0 | 0 | 87 |
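A hedged sketch of that; note the capture group plus a lookahead for the trailing comma, because with the literal pattern adjacent matches share a comma and findall would skip the second one:
import re

line = 'MKC,2017-06-23 07:54,-94.5930,39.1230,79.00,73.90,84.41,220.00,4.00,0.00,29.68,1003.90,10.00,M'
matches = re.findall(r',(\d\.\d{2})(?=,)', line)
print(matches[1])  # '0.00'  (matches[0] is '4.00'; '10.00' no longer matches, per the Extra note)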
44,982,302 | 2017-07-08T03:50:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,amazon-web-services,amazon-s3,aws-lambda,aws-sdk | 0 | 45,005,925 | 0 | 3 | 0 | true | 1 | 0 | Three steps I followed
1) Connected to AWS Lambda with boto3 and used the add_permission API.
2) Also applied get_policy to verify the permission.
3) Connected to S3 with the boto3 resource to configure the BucketNotification API and put LambdaFunctionConfigurations. | 1 | 2 | 0 | 0 | How to add an S3 bucket trigger to a Lambda function with boto3? Then I want to attach that Lambda function to dynamically created S3 buckets programmatically (boto3) | how to add the trigger s3 bucket to lambda function dynamically(python boto3 API) | 0 | 1.2 | 1 | 0 | 1 | 3,218
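A hedged boto3 sketch of those three steps; the function name, bucket name, and ARNs are placeholders:
import boto3

lam = boto3.client('lambda')
lam.add_permission(                         # step 1: let S3 invoke the function
    FunctionName='my-function',
    StatementId='s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::my-bucket',
)
lam.get_policy(FunctionName='my-function')  # step 2: verify the permission stuck

notification = boto3.resource('s3').BucketNotification('my-bucket')  # step 3
notification.put(NotificationConfiguration={'LambdaFunctionConfigurations': [{
    'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
    'Events': ['s3:ObjectCreated:*'],
}]})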
44,988,422 | 2017-07-08T16:35:00.000 | 2 | 0 | 0 | 0 | 1 | python,django,database,cron | 0 | 44,988,567 | 0 | 2 | 0 | false | 1 | 0 | I think you should definitely choose the third alternative: a cron job (or scheduled task) to update the database regularly seems the best option.
You don't need to use a separate Python function; you can schedule a task with Celery, which can be easily integrated with Django using django-celery | 1 | 0 | 0 | 0 | I'm learning Django and to practice I'm currently developing a clone page of YTS; it's a movie torrents repository*.
As of right now, I scraped all the movies from the website and have them in a single db table called Movie with all the basic information of each movie (I'm planning on adding one more for Genre).
Every few days YTS will post new movies and I want my clone-web to automatically add them to the database. I'm currently stuck on deciding how to do this:
I was planning on comparing the movie id of the last movie in my db against the last movie in the YTS db each time the user enters the website, but that'd mean making a request to YTS every time my page loads; it'd also mean some very slow code would be executed inside my index() views method.
Another strategy would be to query the last time my db was updated (new entries were introduced) and, if it's, let's say, more than a day ago, then request new movies from YTS. Problem with this is I don't seem to find any method to query the time of the last db updates. Does such a method even exist?
I could also set a cron job to update the information, but I'm having problems making changes from a separate Python function (I import django.db and such but the interpreter refuses to execute django db instructions).
So, all in all, what's the best strategy to update my database from a third party service/website without bothering the user with loading times? How do you set such updates in non-intrusive way to the user? How do you generally do it?
* I know a torrents website borders on the illegal and I don't intend, in any way, to make my project available to the public | Updating my Django website's database from a third party service, strategies? | 0 | 0.197375 | 1 | 0 | 0 | 114
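A hedged sketch of the suggested Celery route; the task body and schedule are placeholders:
from celery import shared_task

@shared_task
def sync_new_movies():
    # Compare the newest Movie row against the remote site and insert
    # anything newer; runs in a worker, never inside a user request.
    ...

# In the Celery/Django settings, run it once a day via celery beat:
CELERY_BEAT_SCHEDULE = {
    'sync-new-movies': {'task': 'movies.tasks.sync_new_movies', 'schedule': 60 * 60 * 24},
}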
44,994,358 | 2017-07-09T08:11:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,session,cookies,session-cookies | 0 | 44,994,487 | 0 | 2 | 0 | false | 1 | 0 | Well you can't know if users disconnected their internet or WiFi.
But you can check whether the user is still online and browsing the website.
To achieve that, you can use JavaScript to send a request every 10 seconds (more or less) and check whether the user is still on the site; if the user is not online anymore you can make some changes, etc. But in general you can't access the user's device and check the status of its WiFi or ... | 1 | 1 | 0 | 0 | I have a website and I want to destroy some session or cookie in Django when the user disconnects suddenly or goes offline (WiFi disconnect or mobile data disconnect).
But I don't know how to do this!
Is there any default library to do this? | Destroy session or cookie in django when user get offline | 0 | 0 | 1 | 0 | 0 | 748 |
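A hedged Django-side sketch of that heartbeat idea (the client-side JavaScript timer and URL wiring are omitted; using the cache is my assumption):
from django.core.cache import cache
from django.http import HttpResponse

def heartbeat(request):
    # The page's JS calls this every ~10 s; the key expires if the pings stop,
    # which is your signal that the user went offline.
    cache.set('online-%s' % request.session.session_key, True, timeout=30)
    return HttpResponse('ok')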
44,996,563 | 2017-07-09T12:49:00.000 | 0 | 0 | 0 | 1 | 1 | python,ssl,cloud9-ide | 0 | 44,998,382 | 0 | 1 | 0 | false | 0 | 0 | Cloud9 runs your app behind an https proxy, so you need to just use http, since cloud9 proxy won't accept your self signed certificate. | 1 | 0 | 0 | 1 | I am migrating my personal hobby python web application from 127.0.0.1 to cloud 9 lately, but found myself completely new to the idea of setting up ssl certificate. I did some online research on openssl and its python wrapper but still couldn't find any definitive guide on how to set it up in practice, specifically for the cloud 9 IDE platform.
Could someone please give a walkthough, or point out some references link here? Thanks.
By the way, I'm using cherrypy for the python server.
EDIT: specifically, I have the following questions:
Is it required to run openssl on the server (in my case, Cloud9 bash), or can I run openssl on my local laptop and then upload the generated key and cert?
Does it make any sense to use a passphrase to protect the key? I don't see the point here; please correct me if I'm wrong.
How do I install it on Cloud9? | Could someone please provide a walkthrough on how to setup a self signing ssl certificate on cloud 9? | 0 | 0 | 1 | 0 | 0 | 184
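For reference, on a host where you do terminate TLS yourself, CherryPy's built-in SSL configuration looks like this (a sketch; the cert/key paths are placeholders, and per the answer above this isn't needed behind Cloud9's proxy):
import cherrypy

cherrypy.config.update({
    'server.ssl_module': 'builtin',
    'server.ssl_certificate': 'cert.pem',     # can be generated locally with openssl and uploaded
    'server.ssl_private_key': 'privkey.pem',
})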
44,997,969 | 2017-07-09T15:20:00.000 | 0 | 0 | 1 | 1 | 0 | python,cmake,anaconda | 1 | 44,999,337 | 0 | 1 | 0 | false | 0 | 0 | Since the "REQUIRED" option to find_package() is not working, you can be explicit about which Python library using CMake options with cache variables:
cmake -DPYTHON_INCLUDE_DIR=C:\Python36\include -DPYTHON_LIBRARY=C:\Python36\libs\python36.lib .. | 1 | 1 | 0 | 0 | FindPythonLibs.cmake is somehow finding Python versions that don't exist/were uninstalled.
When I run find_package(PythonLibs 3 REQUIRED) CMake properly finds my Python3.6 installation and adds its include path, but then I get the error
No rule to make target 'C:/Users/ultim/Anaconda2/libs/python27.lib', needed by 'minotaur-cpp.exe'. Stop.
This directory doesn't exist, and I recently uninstalled Anaconda and the python that came with it. I've looked through my environment variables and registry, but find no reference to this location.
Would anyone know where there might still be a reference to this location? | CMake's find packages finds nonexisting python library | 0 | 0 | 1 | 0 | 0 | 114 |
45,015,116 | 2017-07-10T14:46:00.000 | 3 | 0 | 0 | 1 | 0 | python,airflow | 0 | 45,117,737 | 0 | 1 | 0 | false | 1 | 0 | We use Docker to run code with different dependencies, and the DockerOperator in the Airflow DAG, which can run Docker containers, also on remote machines (with the Docker daemon already running). We actually have only one Airflow server to run jobs, but several more machines with a Docker daemon running, which the Airflow executors call.
For continuous integration we use gitlab CI with the Gitlab container registry for each repository. This should be easily doable with Jenkins. | 1 | 6 | 0 | 0 | I'm thinking of starting to use Apache Airflow for a project and am wondering how people manage continuous integration and dependencies with airflow. More specifically
Say I have the following set up
3 Airflow servers: dev staging and production.
I have two python DAGs whose source code I want to keep in separate repos.
The DAGs themselves are simple, basically just use a Python operator to call main(*args, **kwargs). However the actual code that's run by main is very large and stretches several files/modules.
Each python code base has different dependencies
for example,
Dag1 uses Python2.7 pandas==0.18.1, requests=2.13.0
Dag2 uses Python3.6 pandas==0.20.0 and Numba==0.27 as well as some cythonized code that needs to be compiled
How do I manage Airflow running these two Dag's with completely different dependencies?
Also, how do I manage the continuous integration of the code for both these DAGs into each different Airflow environment (dev, staging, prod)? (Do I just get Jenkins or something to ssh to the airflow server and do something like git pull origin BRANCH?)
Hopefully this question isn't too vague and people see the problems i'm having. | Apache Airflow Continous Integration Workflow and Dependency management | 0 | 0.53705 | 1 | 0 | 0 | 1,922 |
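A hedged DockerOperator sketch matching the setup described above; the image, host, and dag object are placeholders, and the import path is the Airflow 1.x one:
from airflow.operators.docker_operator import DockerOperator

run_dag1 = DockerOperator(
    task_id='run_dag1_main',
    image='registry.example.com/dag1:latest',  # Python 2.7 + pandas 0.18.1 baked into the image
    command='python main.py',
    docker_url='tcp://worker-host:2375',       # a remote machine's Docker daemon
    dag=dag,
)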
45,041,154 | 2017-07-11T17:31:00.000 | 0 | 0 | 0 | 0 | 1 | python,python-2.7,python-3.x,user-interface,kivy | 0 | 45,068,664 | 0 | 1 | 0 | false | 0 | 1 | I think you should first convert the video to an animated image format (GIF), load it into the Image class in the kv file, and then use Clock to schedule loading the new screen (login) after some seconds, depending on the duration of the GIF | 1 | 0 | 0 | 0 | I have a GUI that starts off with a video written in Kivy. That GUI is supposed to then begin loading the whole program in the background while the clip is playing, and after the clip, a window for login is supposed to come up. How do I load the whole program and at the same time load the video to play at the start of the program?
I used event dispatcher but it didn't work.
Additionally, how do I transition the window from the video, to the login, to the first page of the GUI, without loading them as separate GUIs?
Thank you very much. | Placing a Video at Start of GUI to Transition to Main Code Kivy | 0 | 0 | 1 | 0 | 0 | 28 |
45,043,181 | 2017-07-11T19:37:00.000 | 0 | 0 | 0 | 1 | 0 | python,windows,eclipse,odoo-10 | 0 | 56,924,563 | 0 | 2 | 0 | false | 1 | 0 | You can try the following:
python ./odoo-bin -c odoo.conf
Hope this helps. | 1 | 1 | 0 | 0 | I'm running ODOO 10 from source code in Eclipse on Windows 10. It's running ok in the web interface (on localhost)
I want to control the odoo via command line at the same time. Can I do so while its running in the web interface?
If so how do I invoke the odoo commands to the server? | How to control odoo 10 from command line while its running in the web | 0 | 0 | 1 | 0 | 0 | 1,605 |
45,043,654 | 2017-07-11T20:09:00.000 | 0 | 0 | 0 | 0 | 0 | python,flask,wtforms,flask-wtforms | 0 | 45,047,577 | 0 | 1 | 0 | false | 1 | 0 | The file is uploaded by the user's browser, which sends only the file's name and contents, never its location on the user's machine (browsers withhold the client-side path for security reasons). You save the received data under a path of your own choosing. That's why you can't get its full path.
If an apple flies through the sky, how do you know which apple tree it came from? | 1 | 0 | 0 | 0 | When you make a file field with WTForms in Flask, it only returns the filename. Does anyone know how to get it to return the full path of the file? | Python Flask Wtforms File Field full path | 0 | 0 | 1 | 0 | 0 | 786
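A hedged sketch of the usual Flask-side handling; the upload directory is a placeholder:
import os
from werkzeug.utils import secure_filename

f = form.upload.data                            # the FileField's FileStorage object
filename = secure_filename(f.filename)          # only the bare client filename exists
f.save(os.path.join('/srv/uploads', filename))  # the full path is yours to choose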
45,043,961 | 2017-07-11T20:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow,protocol-buffers,bazel | 1 | 45,048,559 | 0 | 2 | 0 | false | 0 | 0 | Are you using load in the BUILD file you're building?
load("@protobuf//:protobuf.bzl", "py_proto_library")?
The error seems to indicate the symbol py_proto_library isn't loaded into skylark. | 1 | 0 | 0 | 0 | I'm getting the following error when trying to run
$ bazel build object_detection/...
And I'm getting ~20 of the same error (1 for each time it attempts to build that). I think it's something with the way I need to configure bazel to recognize the py_proto_library, but I don't know where, or how I would do this.
/src/github.com/tensorflow/tensorflow_models/object_detection/protos/BUILD:325:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?).
I also think it could be an issue with the fact that initially I had installed the cpp version of tensorflow, and then I built it for python. | Bazel has no definition for py_proto_library | 0 | 0 | 1 | 0 | 0 | 802 |
45,045,165 | 2017-07-11T22:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,reactjs,django-rest-framework | 0 | 45,045,751 | 0 | 2 | 0 | false | 1 | 0 | For dev:
You can run both of them in two different shells. By default, your Django REST API will be at 127.0.0.1:8000 and React will be at 127.0.0.1:8081. I do not think there will be any issues for the two to communicate via the fetch API. Just make sure you have ALLOWED_HOSTS=['127.0.0.1'] in your Django settings file.
For production:
It will work like any other mobile app or web app does! Host your RESTful API on an application server (AWS, Heroku or whatever you choose) and create and host your React app separately. Use JavaScript's fetch API to send requests to your Django endpoints and use the JSON/XML response to render your views in React. | 1 | 3 | 0 | 0 | I have an application backend in Django-rest-framework, and I have a reactjs app.
How can I get them to work together?
For development I open two terminals and run them separately. Is there some way to make them work together?
Also, I have no idea how to deploy it to production.
I tried to look for a ready-made GitHub project, but couldn't find anything.
Thanks! | How to configure Django Rest Framework + React | 1 | 0 | 1 | 0 | 0 | 999 |
45,053,733 | 2017-07-12T09:27:00.000 | 1 | 1 | 1 | 0 | 1 | python,amazon-web-services,lambda,virtualenv,travis-ci | 1 | 45,054,273 | 0 | 1 | 0 | true | 0 | 0 | Solved it. I was installing the Python modules into a subdirectory of my project root, rather than in the project root itself.
Essentially was doing this:
pip install -r requirements.txt -t ./virtualenv/
when I should have been doing this:
pip install -r requirements.txt -t ./ (the -t flag tells pip which target directory to install into) | 1 | 0 | 0 | 0 | I have an AWS Lambda handler in Python 2.7 that is deployed from Travis CI. However, when I try running the function I received an error from AWS saying that it cannot import the enum module (enum34). Is there a simple way to resolve this? Should Travis CI include the virtual environment that Python is running in? If not, how do I include that virtualenv?
Additionally, when I deploy from Travis CI, it seems to prepend an "index." onto the handler_name field. Does anyone know why this happens, or how to disable it? I can't seem to find an answer. | Enum Module with AWS Lambda Python 2.7, Deployed with Travis CI | 1 | 1.2 | 1 | 0 | 0 | 297 |
45,060,419 | 2017-07-12T14:24:00.000 | 0 | 0 | 0 | 0 | 0 | python,matlab | 0 | 45,061,095 | 0 | 1 | 0 | false | 0 | 0 | Matlab's radon() function is not circular. This was the problem. Although the output image sizes do still differ, I am getting essentially the result I want. | 1 | 0 | 1 | 0 | I am trying to translate some matlab code to python. In the matlab code, I have a radon transform function. I start with a 146x146 image, feed it into the radon() function, and get a 211x90 image. When I feed the same image into my python radon() function, I get a 146x90 image. The documentation for the python radon () function says it is a circular radon transform. Is the matlab function also circular? Why are these returning different shaped images and how can I get the outputs to match? | is the Matlab radon() function a "circular" radon transform? | 0 | 0 | 1 | 0 | 0 | 269 |
45,071,567 | 2017-07-13T04:43:00.000 | 0 | 1 | 0 | 0 | 0 | python,grpc | 0 | 70,484,074 | 0 | 3 | 0 | false | 0 | 0 | Metadata is passed as a sequence of key/value 2-tuples. If your metadata has one key/value pair, use a list, e.g. [(key, value)]. If your metadata has multiple k/v pairs, use a list, e.g. [(key1, value1), (key2, value2)], or a tuple, e.g. ((key1, value1), (key2, value2)). | 1 | 9 | 0 | 1 | I want to know how to send custom header (or metadata) using Python gRPC. I looked into documents and I couldn't find anything. | How to send custom header (metadata) with Python gRPC? | 0 | 0 | 1 | 0 | 1 | 12,988
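A hedged client-side sketch; the stub, method, and header names are placeholders:
response = stub.SayHello(
    request,
    metadata=[('x-custom-header', 'value'), ('trace-id', 'abc123')],
)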
45,088,254 | 2017-07-13T18:17:00.000 | 0 | 0 | 1 | 0 | 0 | python,django,localhost | 0 | 45,088,312 | 0 | 2 | 0 | false | 1 | 0 | You started a new project and replaced its settings.py with one from another project? If so, just install the required packages with pip and update your database. To update the database: python manage.py makemigrations and then python manage.py migrate. | 1 | 1 | 0 | 0 | I started a new project in Django but the local environment settings come from the previous project.
So how can I reset the local environment settings?
Thank you.. | Django Local Environment Settings | 0 | 0 | 1 | 0 | 0 | 1,354 |
45,104,994 | 2017-07-14T14:05:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x,matplotlib,statistics,histogram | 0 | 45,105,114 | 0 | 2 | 0 | false | 0 | 0 | As you pointed out, len(set(list)) is the number of unique values for the "delivery days" variable. This is not the same thing as the bin size; it's the number of distinct bins. I would use "bin size" to describe the number of items in one bin; "bin count" would be a better name for the number of bins.
If you want to generate a histogram, supposing the original list of days is called days_list, a quick high-level approach is:
Make a new set unique_days = set(days_list)
Iterate over each value day in unique_days
For the current day, set the height of the bar (or size of the bin) in the histogram to be equal to days_list.count(day). This will tell you the number of times the current "day" value for number of delivery days appeared in the days_list list of delivery times.
Does this make sense?
If the problem is not that you're manually calculating the histogram wrong but that pyplot is doing something wrong, it would help if you included some code for how you are using pyplot. | 1 | 0 | 0 | 0 | I have this list of delivery times in days for cars that are 0 years old. The list contains nearly 20,000 delivery days with many days being repeated. My question is how do i get the histogram to show bin sizes as 1 day. I have set the bin size to the amount of unique delivery days there by:
len(set(list))
but when I generate the histogram, the frequency of 0 delivery days is over 5000; however, when I do list.count(0) it returns 4500. | Histogram bins size to equal 1 day - pyplot | 0 | 0 | 1 | 0 | 0 | 1,029
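A hedged matplotlib alternative that forces 1-day-wide bins directly, so the bar heights match list.count(); it assumes the delivery days are integers:
import matplotlib.pyplot as plt

bins = range(min(days_list), max(days_list) + 2)  # one bin per day, edges on integers
plt.hist(days_list, bins=bins)
plt.show()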
45,111,731 | 2017-07-14T21:17:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,sockets,qpython3 | 0 | 69,265,192 | 0 | 2 | 0 | false | 0 | 1 | That is the loopback address, so this won't work across devices:
HOST = '127.0.0.1'
Instead, use the server's real IP address on the network for your host, and make sure port 5000 is already open on the server. | 1 | 0 | 0 | 0 | Does anyone know how I can send a string over a socket from qpython3 on Android (client) to Python 2.7 on Linux (server)?
For Python 2.7 on Linux (the server) I know what to do, but I don't know how to create the client with qpython3 on Android.
Does anyone know?
TKS | Sending string via socket qpython3 android (client) to python2.7 linux (server) | 0 | 0 | 1 | 0 | 0 | 217 |
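A hedged qpython3 client sketch; replace the IP with the Linux server's LAN address:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.1.50', 5000))   # the server's network IP, not 127.0.0.1
s.sendall('hello from android'.encode('utf-8'))
s.close()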
45,144,525 | 2017-07-17T12:37:00.000 | 1 | 0 | 1 | 1 | 0 | python,python-3.x,pythonpath | 0 | 45,145,568 | 0 | 2 | 0 | false | 0 | 0 | I assume you are using Linux
Before executing your application you can set it inline: PYTHONPATH=/path/to/packages python your_script.py
A more elegant way is to use a virtualenv, where you can have different packages for each application.
With virtualenvwrapper you activate an environment with workon env and leave it with deactivate.
Python 3 ships virtual-environment support by default (the venv module). | 1 | 0 | 0 | 0 | Can anyone let me know how to set PYTHONPATH?
Do we need to set it in the environment variables (is it system specific) or we can independently set the PYTHONPATH and use it to run any independent python application?
I need to pick up a module from a package in a directory different from the one I run my application from. How do I include these packages in my application? | how to use PYTHONPATH for independent python application | 0 | 0 | 1 | 0 | 0 | 371
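Equivalently, from inside the program itself (the path and module name are placeholders):
import sys
sys.path.insert(0, '/path/to/other/packages')  # searched before the default locations
import mymodule  # hypothetical module living in that directory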
45,156,592 | 2017-07-18T02:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,proxy,python-requests,urllib | 1 | 45,156,637 | 0 | 1 | 0 | false | 0 | 0 | Check if there is any proxy setting in chrome | 1 | 0 | 0 | 1 | I'm writing this application where the user can perform a web search to obtain some information from a particular website.
Everything works well except when I'm connected to the Internet via Proxy (it's a corporate proxy).
The thing is, it works sometimes.
By sometimes I mean that if it stops working, all I have to do is to use any web browser (Chrome, IE, etc.) to surf the internet and then python's requests start working as before.
The error I get is:
OSError('Tunnel connection failed: 407 Proxy Authentication Required',)
My guess is that some sort of credentials are validated and the proxy tunnel is up again.
I tried with the proxies handlers but it remains the same.
My doubts are:
How do I know if the proxy needs authentication, and if so, how do I do it without hardcoding the username and password, since this application will be used by others?
Is there a way to use the Windows default proxy configuration so it will work like the browsers do?
What do you think happens when I surf the internet and the python requests then start working again?
I tried with requests and urllib.request
Any help is appreciated.
Thank you! | Python URL Request under corporate proxy | 1 | 0 | 1 | 0 | 1 | 1,617 |
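For reference, requests also accepts explicit proxy settings; a hedged sketch that reads credentials from the environment rather than hardcoding them, which is the question's concern (the proxy URL is a placeholder):
import os
import requests

proxy = os.environ.get('HTTPS_PROXY', 'http://user:password@proxy.corp.local:8080')
r = requests.get('https://example.com', proxies={'http': proxy, 'https': proxy})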
45,163,450 | 2017-07-18T10:00:00.000 | 0 | 0 | 0 | 0 | 1 | java,android,python,testing,monkeyrunner | 0 | 49,088,774 | 0 | 2 | 0 | false | 1 | 1 | In addition to @ohbo's solution, copying AdbWinApi.dll, AdbWinUsbApi.dll into framework folder solved my problem. | 1 | 2 | 0 | 0 | i try to run my android test script by "monkeyrunner cameraTest.py"
but it doesn't work; cmd shows me this:
SWT folder '..\framework\x86' does not exist.
Please set ANDROID_SWT to point to the folder containing swt.jar for your platform.
Does anyone know how to deal with this? Thanks | SWT folder '..\framework\x86' does not exist. Please set ANDROID_SWT to point to the folder containing swt.jar for your platform | 1 | 0 | 1 | 0 | 0 | 2,025
45,177,975 | 2017-07-18T22:12:00.000 | 1 | 0 | 1 | 0 | 0 | python,audio,pyaudio,channels | 0 | 46,686,683 | 0 | 1 | 0 | false | 0 | 0 | My solution is not very elegant, but it does work. Open separate streams with the appropriate input_device_index for each.
stream1 = audio.open(input_device_index = 1 ...)
stream2 = audio.open(input_device_index = 2 ...) | 1 | 1 | 0 | 0 | I need to open a multi-channel audio file (two or more microphones) and record the audio of each of them on a different file. With PyAudio I know how to open a multi-channel file (open method) and stop when 1.5 seconds of silence are recorded, but eventually I end up with a single (multi-channel) file. I would like to work live on each of input channels separately: record them on a separate file when a pause is detected. For instance if channel 1 has a silence after 5 seconds I stop its recording on a file, while I keep on recording channel 2 until a silence on that channel is detected as well (e.g., after 10 seconds). Could anyone tell me if this is possible with PyAudio, or point me to the right (Python) library if not? | Read different streams separately | 1 | 0.197375 | 1 | 0 | 0 | 476 |
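Filled out, a hedged version of those two streams might look like this; the format and rate are assumptions (16-bit mono at 44.1 kHz):
import pyaudio

audio = pyaudio.PyAudio()
common = dict(format=pyaudio.paInt16, channels=1, rate=44100,
              input=True, frames_per_buffer=1024)
stream1 = audio.open(input_device_index=1, **common)
stream2 = audio.open(input_device_index=2, **common)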
45,198,564 | 2017-07-19T18:37:00.000 | 1 | 0 | 1 | 0 | 0 | python,pandas,dataframe | 0 | 45,198,690 | 0 | 5 | 0 | false | 0 | 0 | If you have a string you can always just choose parts of it by writing:
foo = 'abcdefg'
foo2 = foo[2:4]
print foo2
then the output would be:
cd | 1 | 1 | 1 | 0 | I have a column in my dataframe (call it 'FY') which has financial year values in the format: 2015/2016 or 2016/2017.
I want to convert the whole column so it says 15/16 or 16/17 etc instead.
I presume you somehow only take the 3rd, 4th and 5th character from the string, as well as the 8th and 9th, but haven't got a clue how to do it.
Could anyone help me? Thank you. | Python Pandas - Dataframe column - Convert FY in format '2015/2016' to '15/16' | 0 | 0.039979 | 1 | 0 | 0 | 138 |
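For the actual DataFrame column, the vectorized version of that slicing uses pandas' .str accessor:
df['FY'] = df['FY'].str[2:4] + '/' + df['FY'].str[7:9]  # '2015/2016' -> '15/16'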
45,218,374 | 2017-07-20T14:57:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow,object-detection | 0 | 45,222,284 | 0 | 2 | 0 | false | 0 | 0 | To find what to use for output_node_names, just checkout the graph.pbtxt file. In this case it was Softmax | 1 | 1 | 1 | 0 | I've followed the pet detector tutorial, i have exported the model using "export_inference_graph.py".
However, when I try to freeze the graph using the provided "freeze_graph.py", I'm not sure what --output_node_names to use.
Does anyone know which I should use, or more importantly how I find out what to use for when I train my own model. | freezing the "tensorflow object detection api pet detector" graph | 0 | 0 | 1 | 0 | 0 | 1,483 |
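Once the node name is known, a hedged invocation sketch (the paths are placeholders):
python freeze_graph.py \
  --input_graph=graph.pbtxt \
  --input_checkpoint=model.ckpt \
  --output_graph=frozen.pb \
  --output_node_names=Softmax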
45,247,320 | 2017-07-21T22:02:00.000 | 1 | 0 | 1 | 0 | 0 | python,datetime,time | 0 | 45,247,574 | 0 | 2 | 0 | false | 0 | 0 | Assuming that you mean 6.5 hours of elapsed time, that would be a timedelta. The time object is for time-of-day, as on a 24-hour clock. These are different concepts, and shouldn't be mixed.
You should also not think of time-of-day as "time elapsed since midnight", as some days include daylight saving time transitions, which can increase or decrease this value. For example, for most locations of the United States, on 2017-11-05 the time you gave of 06:30:00 will have 7.5 hours elapsed since midnight, as the hour between 1 and 2 repeats for the fall-back transition.
So the answer to your question is - don't. | 1 | 6 | 0 | 0 | I have a datetime stamp (e.g. time(6,30)) which would return 06:30:00.
I was wondering how I could then convert this into 6.5 hrs.
Kind regards | Convert datetime into number of hours? | 0 | 0.099668 | 1 | 0 | 0 | 5,634 |
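If you do need the numeric conversion, a quick sketch with timedelta:
from datetime import timedelta

d = timedelta(hours=6, minutes=30)
print(d.total_seconds() / 3600)  # 6.5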
45,250,797 | 2017-07-22T04:48:00.000 | 2 | 0 | 1 | 0 | 0 | python,linux,bash,python-2.7,python-3.x | 0 | 45,251,074 | 0 | 1 | 0 | true | 0 | 0 | In your Python 2 pip, run pip freeze > requirements.txt. This will write all your installed packages to a text file.
Then, using your Python 3 pip (perhaps pip3), run pip install -r /path/to/requirements.txt. This will install all of the packages as listed in the requirements.txt file. | 1 | 1 | 0 | 0 | I installed Anaconda with Python 2.7 and then later installed the Python 3.6 kernel. I have lots of Python 2 packages and I don't want to have to manually install all of the packages for Python 3. Has anyone written, or does anyone know how to write, a bash script that will go through all my Python 2 packages and just run pip3 install [PACKAGE NAME]? | How can I install all my python 2 packages for python 3? | 0 | 1.2 | 1 | 0 | 0 | 120 |
45,279,148 | 2017-07-24T11:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,rest,soap,django-rest-framework | 0 | 45,458,723 | 0 | 2 | 0 | false | 1 | 0 | Let's discuss both approaches and their pros and cons.
Separate SOAP Service
Reusing the same code - if you are sure the code changes will not impact the two code flows, it is good to go.
Extension of features - if you are sure a new feature extension will not impact other parts, it is again fine.
Scalability - if the new APIs are part of the same application and you are sure it will scale under more load, it is again a good option.
Extension - if you are sure that adding more APIs in the future will not create a mess of code, it is again good to go for.
SOAP Wrapper Using Python (my favourite and suggested way to go)
Separation of concerns - with this you can make sure that whatever code you write stays separate from the main logic, and you can easily plug new things in and out.
The answer to all the above questions in this case is YES.
Your call.
Comments and criticism are most welcome | 1 | 15 | 0 | 0 | I have existing REST APIs, written using Django Rest Framework and now due to some client requirements I have to expose some of them as SOAP web services.
I want to know how to go about writing a wrapper in python so that I can expose some of my REST APIs as SOAP web services. OR should I make SOAP web services separately and reuse code ?
I know this is an odd situation but any help would be greatly appreciated. | Write a wrapper to expose existing REST APIs as SOAP web services? | 0 | 0 | 1 | 0 | 1 | 3,292 |
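One concrete way to build such a wrapper is the spyne library; naming spyne is my assumption, since the answer doesn't prescribe a tool. A minimal sketch that exposes a shared function as a SOAP endpoint:
from spyne import Application, ServiceBase, Unicode, rpc
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication

class WrapperService(ServiceBase):
    @rpc(Unicode, _returns=Unicode)
    def get_item(ctx, item_id):
        return lookup_item(item_id)  # hypothetical service-layer helper the REST views also call

app = Application([WrapperService], tns='myapp.soap',
                  in_protocol=Soap11(validate=True), out_protocol=Soap11())
wsgi_app = WsgiApplication(app)  # mount alongside the DRF app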
45,281,971 | 2017-07-24T13:29:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 45,282,175 | 0 | 1 | 0 | true | 1 | 0 | It totally depends on your application.
If you are the only developer working on the project,
it is advisable to write one view for each web page or event.
If you have multiple developers in house, you can split a view if you want to make a part of it reusable or something like that.
Again, it's all about how your team works; better to stick to the same style for the entire project.
All the best | 1 | 0 | 0 | 0 | This is a non-specific question about best practice in Django. Also note when I say "app" I'm referring to Django's definition of apps within a project.
How should you go about deciding when to use a new view and when to create an entirely new app? In theory, you can have a simple webapp running entirely on one views.py for an existing app.
So how do you go about deciding when to branch off to a new app or just add a new function in your views.py? Is it just whatever makes the most sense? | When To Use A View Vs. A New Project | 0 | 1.2 | 1 | 0 | 0 | 28 |
45,282,194 | 2017-07-24T13:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow | 0 | 45,282,483 | 0 | 1 | 0 | true | 0 | 0 | This is impossible. You have one tensor, which contains batch_size * max_word_length
elements, and another tensor, which contains batch_size * predicted_label elements. Hence there are
batch_size * (max_word_length + predicted_label)
elements. And now you want to create a new tensor [batch_size, max_word_length, predicted_label] with
batch_size * max_word_length * predicted_label
elements. You don't have enough elements for this.
I have one tensor (input) shaped as [?, 38] [batch_size, max_word_length] and one (prediction) shaped as [?, 3] [batch_size, predicted_label].
My goal is to combine both tensors into a single tensor with the shape of [?, 38, 3].
This tensor is used as the input of my second stage.
Seems easy, but I can't find a way of doing it.
Can (and will) you tell me how to do this? | Tensorflow: combining two tensors with dimension X into one tensor with dimension X+1 | 0 | 1.2 | 1 | 0 | 0 | 834 |
45,298,393 | 2017-07-25T08:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,treeview,openerp,one2many | 0 | 45,298,797 | 0 | 1 | 0 | false | 1 | 0 | To use a one2many field you need a many2one field on the product pointing to this new model you created. To make it easy, use a many2many field instead; it's better that way. Fill it with an onchange: just search for the products whose parent_id equals the selected product and add those records to your many2many field.
If you need to keep the one2many field, it would help to add more code showing what you did and which many2one field you added on your product for your new model.
I used an onchange_parent_product function, and also added a filter according to parent_product_id.
But the treeview shows nothing when I select a parent product.
Please help me: how can I fill the treeview lines automatically? | How to autofill child produts in treeview, when parent product (BOM) is selected in odoo? | 0 | 0 | 1 | 0 | 0 | 433
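A hedged Odoo 10 onchange sketch of that idea; the model and field names are placeholders, and it assumes your products carry the parent_id link described in the question:
from odoo import api, fields, models

class BomSelector(models.Model):
    _name = 'my.bom.selector'

    parent_product_id = fields.Many2one('product.product')
    child_product_ids = fields.Many2many('product.product')

    @api.onchange('parent_product_id')
    def _onchange_parent(self):
        # Fill the m2m with all children of the selected parent product.
        self.child_product_ids = self.env['product.product'].search(
            [('parent_id', '=', self.parent_product_id.id)])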
45,299,561 | 2017-07-25T09:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow,resolution,inference,tensor | 0 | 45,456,995 | 0 | 1 | 0 | true | 0 | 0 | Okay so here is what I did:
input and output tensors now have the shape (batchsize, None, None, channels)
The training images now have to be resized outside of the network.
Important reminder: images within one training batch still have to be the same size, since they are stacked into batches! At inference time the batch size is 1, so the size does not matter.
For other resolutions I have to modify the code and retrain the model. Is it possible to make my code resolution independent? In theory convolutions of images are resolution independent, I don't see a reason why this wouldn't be possible.
I have no idea how to do this in tensorflow though. Is there anything out there to help me with this?
Thanks | Variable Resolution with Tensorflow for Superresolution | 1 | 1.2 | 1 | 0 | 0 | 208 |
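In TF1-style code that shape declaration looks like this (3 channels assumed):
import tensorflow as tf

# None for height and width lets the same graph accept any resolution.
x = tf.placeholder(tf.float32, shape=[None, None, None, 3])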
45,310,481 | 2017-07-25T17:55:00.000 | 3 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 45,312,185 | 0 | 2 | 0 | false | 0 | 0 | @Cedric's Answer.
Additionally, if you get the pickle error for pickling functions, add the 'dill' library to your pickling script. You just need to import it at the start, and it will do the rest.
I also want to make it able to take any generic matplotlib graph.
I do not want to just load an image and zoom into that, I want to zoom into the graph along the horizontal axis. (I know how to do this)
Is there some way I can save and load a created graph as a data file or is there an object I can save and load later?
(typically, I would be creating my graph and then displaying it with the matplotlib plt.show, but the graph creation takes time and I do not want to recreate the graph every time I want to display it) | python matplotlib save graph as data file | 1 | 0.291313 | 1 | 0 | 0 | 6,630 |
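A hedged sketch of pickling a figure for later redisplay; many figures pickle fine, and dill covers cases plain pickle can't:
import pickle
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.barh(['a', 'b'], [3, 5])
with open('figure.pkl', 'wb') as fh:
    pickle.dump(fig, fh)       # save once, after the slow graph creation

with open('figure.pkl', 'rb') as fh:
    fig = pickle.load(fh)      # later: reload instantly and show
plt.show()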
45,311,601 | 2017-07-25T19:04:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 45,311,699 | 0 | 1 | 0 | false | 0 | 0 | Type
which python
in the terminal. | 1 | 0 | 0 | 0 | I'm trying to download pygame on my computer and use it, but from what I've seen, I need the 32-bit python not the 64-bit one I have. However, I cannot find where the file is on my computer to delete it. I looked through all of my files with the name of 'python' but nothing has shown up about the 64-bit pre-installed program. Anyone know how to find it? | Where is python on mac? (osx el capitan) | 0 | 0 | 1 | 0 | 0 | 98
45,335,812 | 2017-07-26T19:26:00.000 | 1 | 0 | 1 | 1 | 1 | python | 0 | 45,335,902 | 0 | 3 | 0 | false | 0 | 0 | In the environment variables, under Path, add your Python path; you said you already did, so please make sure a semicolon (not a comma) separates it from the previous entry.
Once it's added, save the environment variables dialog. Then close all Command Prompt windows and open a new one.
Only then will the Command Prompt pick up your new Python configuration.
Main thing: when both versions are installed, entering plain python may launch Python 2.
For Python 3, type python3 and it should work | 3 | 0 | 0 | 0 | I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0 | 0.066568 | 1 | 0 | 0 | 262 |
45,335,812 | 2017-07-26T19:26:00.000 | 0 | 0 | 1 | 1 | 1 | python | 0 | 45,361,096 | 0 | 3 | 0 | false | 0 | 0 | Thanks everyone, I ended up uninstalling and then re-downloading python, and selecting the button that says "add to environment variables." Previously, I typed the addition to Path myself, so I thought it might make a difference if I included it in the installation process instead. Then, I completely restarted my computer rather than just Command Prompt itself. I'm not sure which of these two things did it, but it works now! | 3 | 0 | 0 | 0 | I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0 | 0 | 1 | 0 | 0 | 262 |
45,335,812 | 2017-07-26T19:26:00.000 | 0 | 0 | 1 | 1 | 1 | python | 0 | 45,338,244 | 0 | 3 | 0 | false | 0 | 0 | Why are you using command prompt? I just use the python shell that comes with IDLE. It’s much simpler.
If you have to use command prompt for some reason, your problem is probably that you need to type in python3. Plain python is what you use for Python 2 in the command prompt.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0 | 0 | 1 | 0 | 0 | 262 |
45,341,070 | 2017-07-27T03:50:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 45,341,485 | 0 | 2 | 0 | false | 0 | 0 | To exit out of the "interactive mode" that you mentioned (the included REPL shell in IDLE) and write a script, you will have to create a new file by either selecting the option from the top navigation bar or pressing Control-N. As for running the file, there's also that option on the navigation bar; alternatively, you can press F5 to run the program. | 1 | 0 | 0 | 0 | I know some basics of Java and C++, and am looking to learn Python
I am trying to develop some random stuff to get a good feel of how it works, but I can only make one-line scripts that run every time I press enter to go to the next line.
I've seen tutorial videos where they can just open up files from a menu and type away until they eventually run the program.
I'm using IDLE, and I don't see options to open up new files; I can only make one or two line programs. When I tried to make a calculator program, I didn't know how to run it because it ran every line of code I typed in unless there were ...'s under the >>>'s.
I think it's because I am in interactive mode, whatever that is.
How do I turn it off, if that's the problem? | python 3.6.2 not giving me option to create or run files or anything of the sort | 0 | 0 | 1 | 0 | 0 | 68
45,350,985 | 2017-07-27T12:34:00.000 | 2 | 0 | 1 | 0 | 0 | python-3.x,asynchronous,parallel-processing,python-asyncio | 0 | 46,352,707 | 0 | 1 | 0 | true | 0 | 0 | Any I/O bound task would be a good case for asyncio. In the context of network programming: any application that requires simultaneous handling of thousands of connections. Web server, web crawler, chat backend, MMO game backend, torrent tracker and so on. Keep in mind, though, that you should go async all the way and use async versions of all libraries performing blocking I/O, like the database drivers, etc. | 1 | 6 | 0 | 0 | Feeling the need to learn how to use asyncio, but cannot think of an applicable problem (or problem set) that can help me learn this new technique.
Could you suggest a problem that can help me understand and learn asyncio usage in practice?
In other words: can you suggest an example of some abstract problem or application which, while coding it, will help me learn how to use asyncio in practice?
Thank you | Python asyncio training exercises | 0 | 1.2 | 1 | 0 | 0 | 1,304 |
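A hedged starter exercise in the spirit of that answer: a minimal asyncio TCP echo server, in 2017-era Python 3.5+ syntax:
import asyncio

async def handle(reader, writer):
    data = await reader.read(1024)
    writer.write(data)          # echo the bytes back
    await writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.start_server(handle, '127.0.0.1', 8888))
loop.run_forever()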
45,362,440 | 2017-07-27T23:12:00.000 | 1 | 0 | 0 | 0 | 1 | python-3.x,odbc,driver,32bit-64bit,pyodbc | 0 | 45,365,583 | 0 | 1 | 0 | true | 0 | 0 | A 32bit application can NOT invoke a 64bit dll, so python 32bit can not talk to a 64bit driver for sure.
msodbc driver for sql server is in essence a dll file: msodbcsql13.dll
I just found out (which is not even mentioned by microsoft) that "odbc for sql server 13.1 x64" will install a 64bit msodbcsql13.dll in system32 and a 32bit msodbcsql13.dll in SysWOW64 ( 32bit version of "system32" on a 64bit windows system)
I can not however be certain that the network protocol between a 32bit client talking to 64bit sql server will be the same as a 64bit client talking to a 64bit sql server. But, I believe that, once a request is put on the network by the client to the server, 32bit or 64bit doesn't matter anymore. Someone please comment on this | 1 | 0 | 0 | 0 | What I can observe:
I am using Windows 7 64-bit. My code (which establishes an ODBC connection with a SQL Server on the network, simple reading operations only) is written in Python 3.6.2 32-bit.
I pip installed pyodbc, so I assume that was 32bit as well.
I downloaded and installed the 64bit "Microsoft® ODBC Driver 13.1 for SQL Server®" from microsoft website.
My python code connects to other computers on the network, which run Server 2003 32-bit and either SQL Server 2005 (32-bit) or SQL Server 2008 (32-bit).
The setup works.
Moreover: a cursory test shows that the above setup can successfully connect to a computer with Microsoft Server 2008 (64-bit) running SQL Server 2012 (64-bit) with the configuration under "SQL Server Network Connection (32bit)" being empty (meaning, the 32-bit dll is missing), while the default 64-bit network connection configuration contains the usual config options like ip address and listening port info.
My own explanation:
[1] the client and the server's OS and ODBC interfaces can be of any 32/64 bit combination, but the protocol that travels thru the network between my computer and the sql computer will be identical.
[2] 32 bit python+pyodbc can talk to microsoft's 64bit odbc driver, because... 32 bit python knows how to use a 64 bit DLL...? | 32bit pyodbc for 32bit python (3.6) works with microsoft's 64 bit odbc driver. Why? | 0 | 1.2 | 1 | 1 | 0 | 1,620 |
45,368,358 | 2017-07-28T08:18:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x,user-interface,tkinter,pyinstaller,tkinter-canvas | 0 | 51,119,738 | 1 | 2 | 0 | false | 0 | 1 | Another option would be to manually open a CMD window, navigate to, and then execute your exe, rather than letting the packaged application spawn the instance. | 1 | 1 | 0 | 0 | I'm working on a GUI that I would like to put at the disposal for my colleagues to use under the form of .exe , after some researchs i found pyinstaller as "freezer" which work great after downloading the github version , but my issue is even if the .exe is created when i run it , it show up for less than a second on the screen and it disapears
I would like to know how to keep it on the screen (most important part) and have it close only when the user closes it himself.
Thanks in advance for the help! | avoiding a pyinstaller .exe disapear of the screen without closing | 0 | 0 | 1 | 0 | 0 | 177 |
45,380,268 | 2017-07-28T18:36:00.000 | 2 | 1 | 1 | 0 | 0 | python | 0 | 45,380,349 | 0 | 2 | 0 | false | 0 | 0 | If by Python you mean standard Python (CPython), then no! The byte-code (.pyc or .pyo files) is just a binary encoding of your code, line by line, and is interpreted at run-time. But if you use PyPy, yes! It has a JIT compiler and it runs your byte-code the way Java and .NET (CLR) do. | 1 | 6 | 0 | 0 | I'm a little confused as to how the PVM gets the cpu to carry out the bytecode instructions. I had read somewhere on StackOverflow that it doesn't convert byte-code to machine code (alas, I can't find the thread now).
Does it already have tons of pre-compiled machine instructions hard-coded that it runs/chooses one of those depending on the byte code?
Thank you. | Does the Python Virtual Machine (CPython) convert bytecode into machine language? | 0 | 0.197375 | 1 | 0 | 0 | 2,285 |
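You can watch that byte-code being interpreted instruction by instruction with the dis module:
import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints opcodes such as LOAD_FAST and BINARY_ADD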
45,385,751 | 2017-07-29T05:45:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,database,django-models | 0 | 45,386,569 | 0 | 1 | 0 | false | 1 | 0 | The whole point of migrations is that you run them on both your local database and in production, to keep them in sync. | 1 | 1 | 0 | 0 | When we work with Django in local environment, we change the structure of the Data Base using Command Prompt through migration.
But when Django runs on a server, I don't know how to apply such changes. How can I type commands to change the database structure? Is it a good approach to upload the site files every time I make a change? | Changing database structure of Django in server | 1 | 0.197375 | 1 | 0 | 0 | 196
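Concretely, that usually means running the same migration command on the server (a sketch; host and path are placeholders):
ssh user@yourserver
cd /srv/yourproject
python manage.py migrate   # applies the committed migration files to the production DB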
45,406,471 | 2017-07-31T01:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,authentication,login,web-crawler,lxml | 0 | 45,406,522 | 0 | 1 | 0 | true | 1 | 0 | It very much depends on the method of authentication used. If it's HTTP Basic Auth, then you should be able to pass those headers along with the request. If it's using a web page-based login, you'll need to automate that request and pass back the cookies or whatever session token is used with the next request. | 1 | 0 | 0 | 0 | I can get html of a web site using lxml module if authentication is not required. However, when it required, how do I input 'User Name' and 'Password' using python? | How to get html using python when the site requires authenticasion? | 1 | 1.2 | 1 | 0 | 1 | 29 |
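A hedged sketch of both cases using requests, handing the HTML to lxml afterwards; URLs and form field names are placeholders:
import requests
from lxml import html

# HTTP Basic Auth:
r = requests.get('https://example.com/page', auth=('username', 'password'))

# Form-based login: post the credentials once; the session keeps the cookies.
s = requests.Session()
s.post('https://example.com/login', data={'user': 'username', 'pass': 'password'})
r = s.get('https://example.com/page')

tree = html.fromstring(r.content)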
45,410,078 | 2017-07-31T07:44:00.000 | 0 | 0 | 0 | 0 | 0 | python,redis,scrapy | 0 | 45,420,788 | 0 | 1 | 1 | false | 1 | 0 | The pipeline is a different script, yes. In the settings file you can enable the pipeline. A pipeline can be used to store the crawled results in any database you want. | 1 | 0 | 0 | 0 | I am using scrapy-redis now, and I am ok with it, and I am success to crawl in different computer by using the same redis server.
But I don't understand how to use the scrapy-redis pipeline properly.
In my understanding, I think I need a script separate from the spiders to deal with the items in the Redis pipeline list; then I can do things like store them in the database.
Do I understand right: do I have to write another script, which is somehow independent from the spider? | how to use scrapy-redis pipeline? | 0 | 0 | 1 | 0 | 0 | 489
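Enabling scrapy-redis's shipped pipeline in settings.py looks like this; a separate consumer script then pops the serialized items off the Redis list it feeds:
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}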
45,437,357 | 2017-08-01T12:05:00.000 | 2 | 0 | 0 | 0 | 0 | python,machine-learning,reinforcement-learning,openai-gym | 0 | 45,468,144 | 0 | 2 | 0 | false | 0 | 0 | No, OpenAI Gym environments will not provide you with the information in that form. In order to collect that information you will need to explore the environment via sampling: i.e. selecting actions and receiving observations and rewards. With these samples you can estimate them.
One basic way to approximate these values is to use LSPI (least square policy iteration), as far as I remember, you will find more about this in Sutton too. | 1 | 3 | 1 | 0 | I am currently reading "Reinforcement Learning" from Sutton & Barto and I am attempting to write some of the methods myself.
Policy iteration is the one I am currently working on. I am trying to use OpenAI Gym for a simple problem, such as CartPole or continuous mountain car.
However, for policy iteration, I need both the transition matrix between states and the Reward matrix.
Are these available from the 'environment' that you build in OpenAI Gym.
I am using python.
If not, how do I calculate these values, and use the environment? | Implementing Policy iteration methods in Open AI Gym | 0 | 0.197375 | 1 | 0 | 0 | 1,540 |
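A hedged sketch of estimating transitions by sampling, using the classic pre-0.26 Gym API and a small discrete environment:
import gym
from collections import defaultdict

env = gym.make('FrozenLake-v0')   # discrete states; CartPole's state is continuous
counts = defaultdict(lambda: defaultdict(int))
s = env.reset()
for _ in range(10000):
    a = env.action_space.sample()
    s2, r, done, _ = env.step(a)
    counts[(s, a)][s2] += 1       # empirical transition frequencies
    s = env.reset() if done else s2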
45,444,964 | 2017-08-01T18:12:00.000 | 21 | 0 | 0 | 0 | 0 | python,gensim,word2vec | 0 | 45,453,040 | 0 | 2 | 0 | true | 0 | 0 | size is, as you note, the dimensionality of the vector.
Word2Vec needs large, varied text examples to create its 'dense' embedding vectors per word. (It's the competition between many contrasting examples during training which allows the word-vectors to move to positions that have interesting distances and spatial-relationships with each other.)
If you only have a vocabulary of 30 words, word2vec is unlikely an appropriate technology. And if trying to apply it, you'd want to use a vector size much lower than your vocabulary size – ideally much lower. For example, texts containing many examples of each of tens-of-thousands of words might justify 100-dimensional word-vectors.
Using a higher dimensionality than vocabulary size would more-or-less guarantee 'overfitting'. The training could tend toward an idiosyncratic vector for each word – essentially like a 'one-hot' encoding – that would perform better than any other encoding, because there's no cross-word interference forced by representing a larger number of words in a smaller number of dimensions.
That'd mean a model that does about as well as possible on the Word2Vec internal nearby-word prediction task – but then awful on other downstream tasks, because there's been no generalizable relative-relations knowledge captured. (The cross-word interference is what the algorithm needs, over many training cycles, to incrementally settle into an arrangement where similar words must be similar in learned weights, and contrasting words different.) | 2 | 9 | 1 | 0 | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | Python: What is the "size" parameter in Gensim Word2vec model class | 0 | 1.2 | 1 | 0 | 0 | 14,983 |
45,444,964 | 2017-08-01T18:12:00.000 | 0 | 0 | 0 | 0 | 0 | python,gensim,word2vec | 0 | 65,432,085 | 0 | 2 | 0 | false | 0 | 0 | It's equal to vector_size.
Put simply, it's the uniform dimensionality of the output vector for each word that you trained with word2vec. | 2 | 9 | 1 | 0 | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | Python: What is the "size" parameter in Gensim Word2vec model class | 0 | 0 | 1 | 0 | 0 | 14,983 |
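For reference, a hedged usage sketch with the 2017-era gensim API, where the parameter is still called size (newer releases renamed it vector_size):
from gensim.models import Word2Vec

sentences = [['the', 'cat', 'sat'], ['the', 'dog', 'ran']]  # toy corpus
model = Word2Vec(sentences, size=100, min_count=1)
vec = model.wv['cat']   # a 100-dimensional vector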
45,446,829 | 2017-08-01T20:08:00.000 | 0 | 0 | 0 | 0 | 0 | python,caffe | 0 | 45,447,380 | 0 | 1 | 0 | true | 0 | 0 | You are confusing test and validation sets. A validation set is a set where you know the labels (like in training) but you do not train on it. The validation set is used to make sure you are not overfitting the training data.
At test time you may present your model with unlabeled data and make predictions for these samples. | 1 | 1 | 1 | 0 | During the process of making an lmdb file, we are supposed to make a train.txt and a val.txt file. I have already made a train.txt file which consists of the image name, a space, and its corresponding label, e.g. image1.JPG 0.
Now that I have to make the val.txt file, I'm confused as to how I give it its corresponding values, since it is my test data and I am hoping to predict those. Can anyone tell me what this val.txt file is and what it is supposed to be doing. | Caffe LMDB train and val.txt | 0 | 1.2 | 1 | 0 | 0 | 427
45,455,892 | 2017-08-02T08:47:00.000 | 0 | 0 | 0 | 0 | 0 | python,windows,xlwings | 0 | 45,456,886 | 0 | 1 | 0 | false | 0 | 0 | The add-in replaces the need for the settings in VBA in newer versions.
One can debug the xlam module using "xlwings" as a password.
This enabled me to realize that the OPTIMIZED_CONNECTION parameter is now set through the "USE UDF SERVER" keyword in the xlwings.conf sheet (which does work)
I am using the 0.11.4 version of xlwings.
Sorry for this boring question and thanks in advance for any help. | xlwings VBA function settings edit | 0 | 0 | 1 | 1 | 0 | 702 |
45,475,587 | 2017-08-03T05:31:00.000 | 0 | 0 | 1 | 0 | 0 | python,binary,complement | 0 | 45,475,668 | 0 | 1 | 0 | false | 0 | 0 | You can use the ~ operator together with a mask, because Python integers are not fixed-width: for a 5-bit A = 0b00100, ~A & 0b11111 gives 0b11011.
If A is a string version of a decimal, convert it into int first. | 1 | 0 | 0 | 0 | all.
I want to change 1 to 0 and 0 to 1 in binary.
for example,
if binary is 00000110.
I want to change it to 11111001.
How do I do that in Python 3?
Best regards. | how to change 1 to 0 and 0 to 1 in binary(Python) | 0 | 0 | 1 | 0 | 0 | 865 |
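Two hedged ways to do it for the question's 8-bit example:
n = 0b00000110
print(bin(~n & 0xFF))   # 0b11111001: the mask keeps the result to 8 bits

s = '00000110'
print(s.translate(str.maketrans('01', '10')))   # '11111001' as a string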
45,487,397 | 2017-08-03T14:38:00.000 | -1 | 0 | 1 | 0 | 1 | python,pip,cherrypy | 0 | 45,493,774 | 0 | 1 | 0 | false | 0 | 0 | Ended up copying my entire lib\site-packages folder to the remote server, placed where it would have been on my old server, and it worked fine.
TL;DR: copy your %PYTHON_HOME%/lib/site-packages folder to your remote machine and it might work. You need the same version of Python installed; in my case it was 2.7. | 1 | 0 | 0 | 0 | Good Morning everyone, I am attempting to install CherryPy on a server without internet access. It has Windows Server 2012. I can RDP to it, which is how I have attempted to install it. The server has Python 2.7 installed.
What I have tried (unsuccessfully):
RDP to the server, pip install cherrypy from command line (issue is that it is offline)
Downloaded the .gz files, RDP to server, from the command line ran python setup.py install (pointing at the extracted setup.py); it says that there are dependencies that are unable to be downloaded (because offline).
Downloaded the whl file, attempted to run, did not work.
Is there a way to download the package, along with all dependencies, on a remote computer (with internet access) and then copy the files over and install? I have attempted to find this information without success.
thank you all for your help. | Unable to install cherrypy on an offline server | 0 | -0.197375 | 1 | 0 | 0 | 259 |
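For the "download elsewhere, install offline" idea in the question, pip supports this directly; run the first command on an internet-connected machine, copy the folder over, then run the second on the server:
pip download cherrypy -d pkgs                         # grabs CherryPy plus its dependencies
pip install --no-index --find-links pkgs cherrypy     # installs purely from the copied folder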
45,498,188 | 2017-08-04T04:16:00.000 | 1 | 0 | 1 | 0 | 1 | python,sqlite,ipython | 0 | 45,498,306 | 0 | 2 | 0 | true | 0 | 0 | That is because .fetchall() leaves your cursor (c) pointing past the last row; the result set has been consumed.
If you want to read from your DB again, you should .execute the query again.
Or, if you just want to reuse the fetched data, store c.fetchall() in a variable. | 1 | 0 | 0 | 0 | So I was trying to learn sqlite and how to use it from an IPython notebook, and I have a sqlite object named db.
I am executing this command:
sel = """SELECT * FROM candidates;"""
c=db.cursor().execute(sel)
and when I do this in the next cell:
c.fetchall()
it does print out all the rows but when I run this same command again i.e. I run
c.fetchall() again, it doesn't print out anything; it just displays two square brackets with nothing inside them. But when I run the above first command, i.e. c=db.cursor().execute(sel), and then run db.fetchall(), it again prints out the table.
This is very weird and I don't understand it, what does this mean? | Weird behavior by db.cursor.execute() | 1 | 1.2 | 1 | 1 | 0 | 189 |
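A hedged sketch of the variable-storing fix:
c = db.cursor().execute(sel)
rows = c.fetchall()   # consume the cursor exactly once
print(rows)           # reuse rows as often as you like
print(rows)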
45,500,972 | 2017-08-04T07:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,sql,django | 0 | 45,501,557 | 0 | 3 | 0 | false | 1 | 0 | @Daniel Roseman helped me understand the answer.
SOLVED:
What I was getting from the query was a Character model instance, so I couldn't have accessed it through result.Character but through result.Field_Inside_Of_Character | 1 | 0 | 0 | 1 | I am using django 1.10 and python 3.6.1
when executing
get_or_none(models.Character, pk=0), with SQL's get method, the query returns a hashmap i.e.: <Character: example>
How can I extract the value example?
I tried .values(), I tried iterating, I tried .Character
nothing seems to work, and I can't find a solution in the documentation.
Thank you, | Django SQL get query returns a hashmap, how to access the value? | 0 | 0 | 1 | 1 | 0 | 414 |
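A hedged illustration of that fix; the field name is a placeholder:
result = get_or_none(models.Character, pk=0)  # returns a Character instance, not a dict
print(result.name)      # read a field on the instance, e.g. a hypothetical 'name' field
print(str(result))      # '<Character: example>' comes from the model's __str__/__repr__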