Column schema (name: dtype, range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
45,947,457
2017-08-29T20:01:00.000
0
0
0
0
python,django,python-3.x,django-models
45,951,472
2
false
1
0
You can convert the list to JSON and store it in a CharField.
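A minimal sketch of that approach (the variable names are hypothetical): the list of sizes is serialized with the standard json module before saving and deserialized when reading back; the resulting string is what a Django CharField would store.

```python
import json

# Hypothetical sizes entered by the seller.
sizes = [1, 2, 3, 4]

# Serialize before saving, e.g. shoe.sizes = encoded for a CharField.
encoded = json.dumps(sizes)

# Deserialize after reading the field back, e.g. to render a <select>
# from which the buyer picks exactly one size.
decoded = json.loads(encoded)
assert decoded == sizes
```

In a form, the decoded list can then feed a ChoiceField so the user can select only one of the stored sizes.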
1
1
0
Excuse my English; I'm a beginner in Django and I want to achieve this in my model and form. The scenario is like purchasing shoes on Amazon: 1. the shoe has a list of sizes the user can select from; 2. the user selects a size and adds it to the cart; 3. the user places an order, which is saved in a model with the respective size included. Now imagine a shoe seller who, when selling, can enter a list of sizes like this in Python: size = [1, 2, 3, 4], which is also saved on a shoe model. How can I implement this in Django? How do I save a list of sizes entered by a seller to a model, and how can I display this list so the user can select only one of its values? Please help.
Django: how can I get a model that stores a list (as in Python) of shoe sizes?
0
0
0
858
45,949,911
2017-08-29T23:54:00.000
0
0
1
0
javascript,python,web-scraping,cross-domain,aggregate
45,950,238
1
false
0
0
I suggest you use Selenium WebDriver to log in and get the cookies, then use the requests library to scrape the pages. That is what my company does in its scraping system. If you only use Selenium WebDriver, you will need too much memory and CPU capacity. If you are good at HTML and JS, using the requests library to simulate the login yourself is also a good approach. For a website that requires login, the most important thing is to get the cookie.
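A minimal sketch of the cookie hand-off that answer describes. It assumes the documented shape of Selenium's driver.get_cookies() (a list of dicts with "name" and "value" keys); the conversion itself is plain Python, and the resulting dict can be passed to requests via requests.get(url, cookies=...).

```python
def selenium_cookies_to_dict(selenium_cookies):
    """Convert the list of cookie dicts returned by Selenium's
    driver.get_cookies() into the {name: value} mapping that the
    requests library accepts through its `cookies=` parameter."""
    return {c["name"]: c["value"] for c in selenium_cookies}

# Example input in the documented get_cookies() shape:
raw = [
    {"name": "sessionid", "value": "abc123", "domain": "example.com"},
    {"name": "csrftoken", "value": "xyz789", "domain": "example.com"},
]
jar = selenium_cookies_to_dict(raw)
# requests.get("https://example.com/data", cookies=jar)  # hypothetical URL
```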
1
0
0
I'm a fresh-out-of-college programmer with some experience in Python and JavaScript, and I'm trying to develop either a website or just a back-end system that will aggregate information from online market websites which don't have any API (or none that I've found, anyway). Ideally I would also want the system to write to local storage, to track changes to the data over time in some kind of database, but that's down the road a bit. I've already pounded out some JavaScript that can grab the data I want, but there doesn't seem to be a way to access or act upon data from other websites, due to cross-domain security protections, or to save the data to local storage so it can be read from other pages. I know there are ways to aggregate data, as I've seen other websites that do this. I can load websites in Python using urllib2 and use regular expressions to parse what I want from some pages, but on a couple of the desired sites I would need to log in before I can access the data I want to gather. Since I am relatively new to programming, is there an ideal tool or programming language that would streamline or simplify what I'm trying to do? If not, could you please point me in the right direction for how I might go about this? After doing some searching, there seems to be a general lack of material on cross-domain data gathering and aggregation; maybe I'm not even using the right terminology to describe what I'm trying to do. Whichever way you would look at this, please help! :-)
Looking for ways to aggregate info/data from different websites
0
0
1
123
45,950,381
2017-08-30T01:03:00.000
0
0
1
0
python,multithreading,multiprocessing
45,950,446
2
false
0
0
Every sub-process will have its own resources, so yes, it does imply that. More precisely, every sub-process will copy a part of the original dataframe, determined by your implementation. But will it be faster than a shared one? I'm not sure. Unless your dataframe implements a read/write lock, reading one shared copy and reading separate copies cost the same. But why would a dataframe need to lock read operations? It doesn't make sense.
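A small sketch of the pattern in the question, using a plain list as a stand-in for the dataframe. With the POSIX "fork" start method, workers inherit the module-level global via copy-on-write, so there is no up-front duplication (though CPython's reference counting can still dirty pages over time, which gradually copies the touched memory):

```python
import multiprocessing as mp

# A large read-only structure standing in for the dataframe.
DATA = list(range(100_000))

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(DATA[lo:hi])      # each worker reads the inherited global

ctx = mp.get_context("fork")     # fork is POSIX-only; forked children do
                                 # not re-import the module, so no
                                 # __main__ guard is needed in this sketch
with ctx.Pool(2) as pool:
    partials = pool.map(chunk_sum, [(0, 50_000), (50_000, 100_000)])
total = sum(partials)
```

With the "spawn" start method (the Windows and macOS default), each worker re-imports the module instead, so the global is rebuilt per process rather than shared.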
1
0
1
I have a read-only, very large dataframe and I want to do some calculations, so I use multiprocessing.map and set the dataframe as a global. However, does this imply that for each process, the program will copy the dataframe separately (so that it would be faster than a shared one)?
multiprocessing.map in python with very large read-only object?
0
0
0
843
45,951,244
2017-08-30T03:05:00.000
1
0
0
0
python,regex,django,url
45,970,813
1
true
1
0
(.*\/)([^\/]*)$ will put the page in $2 and something you'll have to parse in $1. OP added: The above did the trick. For reference, the full regex became (multi-lined for readability): ^(?P<domain>[a-zA-Z0-9\_\-]*)/ (?P<version>[a-zA-Z0-9\-\_\.]*)/ (?P<collections>.*)/ (?P<page>[^\/]*)$
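The full pattern from that answer can be checked quickly with Python's re module. The named groups below are the ones from the answer, applied to one of the example URLs from the question; note that this single-line form requires at least one collection segment to be present:

```python
import re

# The combined pattern from the answer, written as one string.
pattern = re.compile(
    r"^(?P<domain>[a-zA-Z0-9_\-]*)/"
    r"(?P<version>[a-zA-Z0-9\-_.]*)/"
    r"(?P<collections>.*)/"
    r"(?P<page>[^/]*)$"
)

m = pattern.match("development/3.2/authentication/acl/creating-aco")
assert m is not None
# The greedy (?P<collections>.*) absorbs the middle segments,
# leaving the final segment as the page.
```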
1
0
0
The URL structure looks like this: <domain>/<version>/<collection>/<sub-collection>/<page>, where domain is anything a-zA-Z0-9\-\_, version is anything a-zA-Z0-9\-\_\. (but would likely be only 1.0, 1.1, 2.0.0 etc.), and both collection and sub-collection are optional groups which follow the <domain> constraints. The current regex is: r'^(?P<domain_slug>[a-zA-Z0-9\-\_]*)\/(?P<version_slug>[a-zA-Z0-9\-\_\.]*)\/(?P<slug>[a-zA-Z0-9\-\_]+)\/$. How can I capture the optional positions while ensuring the last segment is always the page? Additionally, is this even a good idea? Would it be better to just pass this through the Django URL conf and let a view function handle all the work? Example URLs could include: administration/5.0.3/reset-user-password user/1.0/getting-started/setting-up-an-account development/3.2/authentication/acl/creating-aco
Complex Python Regex for Django View
1.2
0
0
44
45,955,464
2017-08-30T08:24:00.000
2
1
0
0
python,django,email
45,955,886
2
true
1
0
The easiest way would be something like this: make a table, e.g. challenge, in the database with the following columns: challenge_name, week1, week2, ..., as many weeks as you need. Make them nullable, so if some challenge is shorter, the remaining weeks can be null. In the users table (or a new one) add two columns: one for the active challenge, the other for the day started. Then yes, you can run a cron job daily, or maybe twice a day, in the morning and in the afternoon, that executes a Python function for sending mail. In that function you go through all users, check their current challenge, calculate the week they are in, query the challenge table for the mail content, and then send it. I think this is the best solution, certainly the simplest and most solid one :)
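The week calculation at the heart of that cron function is essentially a one-liner. This sketch (the function and parameter names are hypothetical) shows how the daily job could decide which week's email a user should get, and when to stop:

```python
from datetime import date

def current_week(day_started, today, total_weeks=12):
    """Return the 1-based week a user is in, or None once the
    challenge (total_weeks long) is over or has not started yet."""
    week = (today - day_started).days // 7 + 1
    if 1 <= week <= total_weeks:
        return week
    return None

# A user who signed up 15 days ago is in week 3:
assert current_week(date(2017, 8, 15), date(2017, 8, 30)) == 3
# After 12 weeks the job stops mailing them:
assert current_week(date(2017, 1, 1), date(2017, 12, 1)) is None
```

Because the week is derived from the stored start date each run, no per-week boolean flags are needed, and users who sign up months apart are handled automatically.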
1
1
0
The goal: once a user has signed up, they start a challenge. Once they've started the challenge, I'd like to be able to send users weekly emails for 12 weeks (or however long the challenge lasts). Bear in mind, one user may sign up today and one may sign up in three months' time. Each week, the email will be different and specific to that week, so I don't want the week-one email being sent alongside week two for 12 weeks. Should I add 12 boolean fields inside the extended User model, then run checks against them and set the value to True once that week has passed? Assuming this is the case, would I then need to set up a cron task which runs every week, compares the user's sign-up date to today's date, and then checks off the week-one boolean? Then use the post_save signal to run a check to see which week has been completed and send the week-one email? I hope that makes sense; I'm trying to get my head around the logic. I'm comfortable with sending email now; it's putting together an automated process, as currently it's all manual. Please let me know if I need to elaborate on anything. Any help would be much appreciated.
Send weekly emails for a limited time after sign up in django
1.2
0
0
752
45,959,832
2017-08-30T11:58:00.000
-1
0
0
0
python,django,vue.js,vue-component,server-side-rendering
45,959,886
1
false
1
0
I think the best method really is to split the frontend and backend into two separate entities communicating via an API, rather than bleeding the two together, as in the long run that creates headaches.
1
0
0
I have an existing Python/Django app with the Jinja template engine. I have a template with a filter and a list that gets rendered correctly via the server, and the filters work perfectly without JavaScript (based on a URL with parameters). The server also responds with JSON if the same filter URL is requested by AJAX, so it's ready for an enhanced version. Now I would like to make it nicer and update/re-render the list asynchronously when I change the filter, based on the JSON response I receive. Questions: When I init Vue on top of the page template, it removes/re-renders everything in the app; everything within the Vue root element becomes white. Can I declare only parts of my whole template (the one filter and the list) as separate Vue components and combine these instances (there are other parts that are not involved in the async update, which Vue should not take care of)? Can I somehow reuse the existing markup of my components in Jinja so Vue can re-render the components, or do I have to copy-paste it into JavaScript (please no!)? TL;DR: I don't want to create the whole model with Vue and then prerender the whole app (the modern SSR way) with Vue working out the diff. I just want to use Vue on top of an existing, working Django app.
Vue.JS on top of existing python/django/jinja app for filter and list render
-0.197375
0
0
464
45,960,590
2017-08-30T12:35:00.000
5
0
1
0
python,console,ipython,spyder
50,839,157
3
false
0
0
The variable explorer makes Spyder great for debugging, but it doesn't stack up to full-featured IDEs such as PyCharm Community Edition. In my opinion, Spyder is a much worse debugger since the console was removed, so after 2 months of frustration with the newer version I "downgraded" back to version 3.1.4, which I love. This will get you back to the version of Spyder where the beloved console still exists: conda uninstall spyder, then conda install spyder=3.1.4
2
19
0
After upgrading to Spyder version 3.2.1, I can't find the Python console in Spyder. It is inconvenient when I plot data interactively through the IPython console. How can I add the Python console back to Spyder?
How to add a Python console in Spyder
0.321513
0
0
37,462
45,960,590
2017-08-30T12:35:00.000
1
0
1
0
python,console,ipython,spyder
65,611,582
3
false
0
0
Alt + Z, or View -> Panes -> IPython Console.
2
19
0
After upgrading to Spyder version 3.2.1, I can't find the Python console in Spyder. It is inconvenient when I plot data interactively through the IPython console. How can I add the Python console back to Spyder?
How to add a Python console in Spyder
0.066568
0
0
37,462
45,961,177
2017-08-30T13:00:00.000
0
0
0
0
python,wireshark,scapy,packets,sniffing
48,503,065
1
false
0
0
Scapy itself has many libraries and extensions, which are either pre-installed or which you will have to install based on your needs. Your question is a bit vague about what exactly your comparison factor between the two is, but for example, Scapy will need an HTTPS decoder library installed to decode the information in those packets. Also, in Scapy you can write your own protocol dissectors as you see fit. If you are doing real-time parsing without a PCAP file, Scapy is still a reasonable option even with its packet-drop ratio. But if you don't need real-time processing, I suggest using Wireshark/tcpdump to record a PCAP file; you can then dissect the PCAP file using Scapy. Hope this helps.
1
0
0
I've tried using Scapy's sniff function to sniff some packets and compared the result to Wireshark's output. Upon displaying Scapy's sniffed packets and Wireshark's sniffed packets on the same interface, I discovered that Wireshark can sniff some packets that Scapy was apparently not able to sniff and display. Is there a reason why, and if so, how can I prevent it so that Scapy does not 'drop' any packets and sniffs all the packets Wireshark can receive?
Scapy cannot sniff some packets
0
0
1
600
45,964,972
2017-08-30T15:58:00.000
0
0
0
0
python,mysql,django,unit-testing,innodb
46,018,456
1
false
1
0
Since I have never seen anyone use the feature of having a bigger block size, I have no experience with making it work, and I recommend you not be the first to try. Instead I offer several likely workarounds. Don't use VARCHAR(255) blindly; make the lengths realistic for the data involved. Don't use utf8 (or utf8mb4) for columns that can only contain ASCII, for example postcodes, hex strings, UUIDs, country codes, etc.; use CHARACTER SET ascii for those. Vertically partition the table, that is, split it into two tables with the same PRIMARY KEY. Don't splay arrays across columns (e.g. phone1, phone2, phone3); use another table with multiple rows instead.
1
0
0
Hope you have a great day. I have a table with 470 columns, to be exact. I am working on Django unit testing, and the tests won't execute, giving this error when I run python manage.py test: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline. To resolve this issue I am trying to increase innodb_page_size in the MySQL my.cnf file, but when I restart the MySQL server after changing the value in my.cnf, MySQL won't restart. I have tried almost every solution available on Stack Overflow, but with no success. MySQL version = 5.5.57, Ubuntu version = 16.04. Any help would be greatly appreciated. Thank you.
Changing innodb_page_size in my.cnf file does not restart mysql database
0
1
0
978
45,974,121
2017-08-31T05:49:00.000
1
1
1
0
text-to-speech,python-2.x,translate
45,987,971
2
false
0
0
I have tried writing the string as a Unicode literal, as in u"Qu'est-ce que tu fais? Gardez-le de côté." (note the double quotes, since the text contains apostrophes). The characters are then handled as Unicode rather than decoded from ASCII, which resolves the error. So the text you want converted to speech can contain UTF-8 characters and can be transformed easily.
1
3
0
While using the gTTS Google translator module in Python 2.x, I am getting this error: File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/gtts/tts.py", line 94, in init if self._len(text) <= self.MAX_CHARS: File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/gtts/tts.py", line 154, in _len return len(unicode(text)) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 0: ordinal not in range(128). Even though I have included # -*- coding: utf-8 -*- in my Python script, I get the error when using non-ASCII characters. Tell me another way to implement this, like writing a sentence in English and having it translated into another language. But that is also not working, as I get speech in English with only the accent changed. I have searched a lot everywhere but can't find an answer. Please help!
getting Unicode decode error while using gTTS in python
0.099668
0
0
856
45,974,470
2017-08-31T06:15:00.000
0
0
0
0
python,django,shell
45,974,601
2
false
1
0
Use a relative import, from .models import Teacher, instead, if you are inside your app main.
2
1
0
This is what I did to change the name of my Django app, which has classes under models: I deleted the database and the migration folder, then ran the migrations again and the tables were created (MariaDB). All works fine, except when I run python manage.py shell, which starts the interpreter alright, but when I try from main.models import Teacher I get LookupError: No installed app with label 'models'. Teacher is the class in models.py of the app main; main is what I renamed my earlier app 'foo' to.
Changed the name of a Django app. Now python manage.py shell can't see the models.
0
0
0
185
45,974,470
2017-08-31T06:15:00.000
0
0
0
0
python,django,shell
45,974,639
2
false
1
0
Editing the answer, as I misunderstood the question earlier. If the server is running and the site is working as expected (as you mentioned) and you are facing problems only in the shell, you just have to close the shell and reopen it. However, it seems here that you have not added an __init__.py file in the new folder, so it cannot find models.py. If that is not the case, please edit the question to show the list of installed apps and the entire stack trace.
2
1
0
This is what I did to change the name of my Django app, which has classes under models: I deleted the database and the migration folder, then ran the migrations again and the tables were created (MariaDB). All works fine, except when I run python manage.py shell, which starts the interpreter alright, but when I try from main.models import Teacher I get LookupError: No installed app with label 'models'. Teacher is the class in models.py of the app main; main is what I renamed my earlier app 'foo' to.
Changed the name of a Django app. Now python manage.py shell can't see the models.
0
0
0
185
45,983,192
2017-08-31T13:53:00.000
0
0
1
0
python-2.7,py2exe
45,983,305
2
false
0
0
For Windows users: go to the SourceForge site where the cx_Freeze project is hosted and download the file corresponding to your version of Python. After downloading it, launch the executable and let yourself be guided; nothing too technical so far. Then open the command line (Start > Run... > cmd) and go to the scripts subfolder of your Python installation (for me, C:\python34\scripts). On Windows or Linux, the script syntax is the same: cxfreeze file.py.
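If you want to stay with py2exe rather than switching to cx_Freeze, its bundle_files option can pack everything into a single executable. This setup.py is a sketch of the commonly used recipe (the script name file.py stands in for your own; build on Windows under Python 2 with python setup.py py2exe):

```python
# setup.py -- hypothetical single-file py2exe build configuration
from distutils.core import setup
import py2exe  # noqa: F401  (importing registers the py2exe command)

setup(
    console=["file.py"],            # your script
    options={"py2exe": {
        "bundle_files": 1,          # 1 = bundle everything, incl. the interpreter
        "compressed": True,
    }},
    zipfile=None,                   # fold the library archive into the exe
)
```

With bundle_files set to 1 and zipfile set to None, the dist folder should contain a single self-contained .exe that can be moved elsewhere.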
1
0
0
I have a .py file and want to convert it to .exe. I have tried py2exe, but it created a dist folder, and if I move the exe file out of dist it does not work. How can I get a single, self-contained working .exe file?
How to make a .py into an independent .exe?
0
0
0
198
45,985,692
2017-08-31T15:58:00.000
2
0
0
0
python,arrays,numpy,resize,reshape
45,985,888
2
false
0
0
Neither. reshape only changes the shape of the data, not the total size, so you can for example reshape an array of shape 1x9 into one which is 3x3, but not into 2x4. resize does a similar thing, but lets you change the total size, in which case it fills the new space by repeating elements of the array being resized. You have two choices: write your own function that does the resizing the way you want, or use one of the Python imaging libraries (PIL, Pillow, ...) to apply common image-resizing functions.
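The distinction is easy to see with a small array, and it shows why np.resize is not a downsampling tool: it just truncates or repeats elements. For a quick 2:1 downsample of a grid like the one in the question, plain slicing is often enough (for exact resampling to an arbitrary shape you would need interpolation, e.g. scipy.ndimage.zoom or Pillow):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # reshape: same 12 elements, new shape

try:
    a.reshape(2, 3)               # 6 != 12, so this raises ValueError
except ValueError:
    pass

b = np.resize(a, (2, 3))          # resize: truncates/repeats to fit
# b is the first 6 flattened elements, [[0, 1, 2], [3, 4, 5]] -- not a
# downsampled version of a.

big = np.zeros((2907, 2331))      # the high-resolution grid from the question
small = big[::2, ::2]             # keep every second row and column
# Note: slicing yields (1454, 1166), close to but not exactly the
# (1453, 1166) target; hitting the target exactly needs interpolation.
assert small.shape == (1454, 1166)
```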
1
0
1
I've been scouring the Stack Exchange archives and cannot seem to come across the right answer: should reshape be used, should resize be used? Both fail. Setup: 3 netCDF files at two resolutions, one at 500 metres and two at 1000 metres. I need to resize / decrease the resolution of / reshape (or whatever the right word is) the higher-resolution file :) Using either gdalinfo or print(np.shape(array)), we know that the higher-resolution file has a shape of (2907, 2331) and the lower-resolution arrays have a shape of (1453, 1166). So I have tried both np.resize(array, (1453, 1166)) and np.reshape(array, (1453, 1166)) and receive errors such as: ValueError: cannot reshape array of size 6776217 into shape (1453,1166). Surely I'm using the wrong terms/lingo, and I apologize for that. On the command line, doing what I need would be as simple as gdal_translate -outsize x y -of GTiff infile outfile. Please help!
Numpy resize or Numpy reshape
0.197375
0
0
1,755
45,985,912
2017-08-31T16:11:00.000
14
0
1
0
python,pycharm,virtualenv
46,007,206
3
false
0
0
In File > Settings > Project: > Project Structure, at the bottom, there is "Exclude files:". You can put something in there like venv or venv;coverage.xml (given your comment). It doesn't seem to recognize paths (e.g. foo/venv), but this does what you requested.
1
17
0
In Python 3, I've moved away from creating virtualenvs in ~/.virtualenvs and towards keeping them in the project directory ./venv However now search results in every PyCharm project include results from venv subdirectories, until you manually right-click and exclude them from the project. How to omit directories named venv from PyCharm indexing/searching, globally?
PyCharm: always mark venv directory as excluded
1
0
0
12,756
45,986,266
2017-08-31T16:32:00.000
0
0
0
0
javascript,python,html,css,user-interface
45,986,589
3
false
1
1
The popular form of JavaScript, ES6 (which you are talking about), is designed to run in the browser, so the limitation is that it can only make calls via the browser; it cannot directly interact with the OS the way Python's os module can. This means you will need a web service on your computer that runs a specific piece of Python code and returns the responses. That requires a web service/web framework, preferably a Python one like Django or Flask, which will run the Python script for you because it can make OS calls on the server machine. Other non-Python web services are certainly capable of this too, but the natural preference here would be a Python-based service. Side note: if this were Node.js (server-side JS) rather than ES6 in the browser, you would have an upper hand, i.e. you could invoke Python scripts on your server, because Node.js, like the Python-based web servers, supports OS calls.
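A minimal sketch of that local web service, using only Python's standard library as a stand-in for the Flask/Django service the answer suggests: the HTML/JS GUI would fetch() an endpoint like this and render the JSON it returns. The endpoint path and response fields here are made up for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AssistantHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/notes":          # hypothetical endpoint
            body = json.dumps({"notes": ["buy milk"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):              # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), AssistantHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Standing in for the browser's fetch() call:
url = "http://127.0.0.1:%d/api/notes" % server.server_port
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
server.shutdown()
```

An Electron (or plain browser) front end would issue the same GET request from JavaScript and build the GUI around the returned JSON.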
2
0
0
I've built a very simple assistant app in Python which can do very basic tasks like taking notes, reminders, a stopwatch, a timer, web scraping for news feeds, etc. tkinter seems confusing and looks old-ish to me. On the other hand, CSS and JS seem much easier for designing the GUI side and look far more elegant. Is it possible to design a desktop GUI app (maybe with Electron?) using HTML+CSS+JavaScript that will run my old Python code? I've been coding for only two months and I suck at it; please excuse my newbie-ness. TL;DR: simply, I want to make the GUI side with HTML+CSS+JavaScript to take user input, but then have it run Python scripts and show the output in the GUI app. Is that possible?
Is it possible to design a GUI with HTML+CSS+JavaScript that actually runs a Python script?
0
0
0
2,102
45,986,266
2017-08-31T16:32:00.000
0
0
0
0
javascript,python,html,css,user-interface
45,986,381
3
false
1
1
It can't be done directly; you'd have to make it like a web app (although with a local webserver serving Python responses). EDIT: if you don't mind running it in a web browser, you can quite easily make a webserver that will evaluate your queries...
2
0
0
I've built a very simple assistant app in Python which can do very basic tasks like taking notes, reminders, a stopwatch, a timer, web scraping for news feeds, etc. tkinter seems confusing and looks old-ish to me. On the other hand, CSS and JS seem much easier for designing the GUI side and look far more elegant. Is it possible to design a desktop GUI app (maybe with Electron?) using HTML+CSS+JavaScript that will run my old Python code? I've been coding for only two months and I suck at it; please excuse my newbie-ness. TL;DR: simply, I want to make the GUI side with HTML+CSS+JavaScript to take user input, but then have it run Python scripts and show the output in the GUI app. Is that possible?
Is it possible to design a GUI with HTML+CSS+JavaScript that actually runs a Python script?
0
0
0
2,102
45,992,329
2017-09-01T01:52:00.000
0
0
0
1
python,sockets
45,992,812
1
false
0
0
So I think I figured out the answer. By default (protocol 0), the pickle data format uses a printable ASCII representation, and since that is a byte-by-byte representation, endianness does not matter. The binary pickle protocols likewise define their own fixed byte layout, so no byte-order conversion is needed in either case.
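A quick sanity check of that conclusion: the bytes pickle produces are a self-describing, platform-independent format, so the receiver simply feeds the exact bytes it got off the socket to pickle.loads, with no htonl/ntohl-style fixups, whichever protocol was used:

```python
import pickle

obj = {"count": 1024, "values": [1, 2, 3]}

for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    wire = pickle.dumps(obj, protocol=proto)  # bytes to send over the socket
    assert pickle.loads(wire) == obj          # no byte swapping on receipt
```

(The usual caveat applies: never unpickle data from an untrusted peer, since pickle can execute arbitrary code during loading.)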
1
0
1
I know that for integers one can use htonl and ntohl, but what about pickle byte streams? If I know that the next 150 bytes received are a pickled object, do I still have to reverse the byte order in case one machine is big-endian and the other is little-endian?
Does Pickle.dumps and loads used for sending network data require change in byte order?
0
0
0
358
45,993,828
2017-09-01T05:33:00.000
9
0
0
0
python,opencv,video-streaming,rtmp
50,102,832
1
false
0
0
Just open the RTMP address instead of your camera: myrtmp_addr = "rtmp://myip:1935/myapp/mystream"; cap = cv2.VideoCapture(myrtmp_addr); ret, frame = cap.read() (note that VideoCapture.read() returns the success flag first, then the frame). From there you can handle your frames just like when they come from your camera. If it still doesn't work, check whether you have a valid version of FFmpeg linked with your OpenCV; you can check it with print(cv2.getBuildInformation()). Hope I could help.
1
5
1
I'm developing a Python program to receive live streaming video from an Android device via RTMP. I created a server, and I'm also capable of transmitting a video stream from the Android device. But the problem is I can't access that stream in OpenCV. Can anyone tell me a way to access it via OpenCV? It would be better if you could post Python code snippets.
Get videoStream from RTMP to opencv
1
0
0
9,717
45,995,076
2017-09-01T07:15:00.000
1
0
0
0
python,machine-learning,tensorflow,object-detection
45,997,098
1
false
0
0
Either you fine-tune your model on pedestrians, with just a few thousand steps (a small training dataset would be enough), or you look in your label definition file (the .pbtxt file), search for the person label, and do whatever you want with the others.
1
1
1
On the tensorflow/models repo on GitHub, they supply five pre-trained models for object detection. These models are trained on the COCO dataset and can identify 90 different objects. I need a model to just detect people, and nothing else. I can modify the code to only print labels on people, but it will still look for the other 89 objects, which takes more time than just looking for one object. I can train my own model, but I would rather be able to use a pre-trained model, instead of spending a lot of time training my own model. So is there a way, either by modifying the model file, or the TensorFlow or Object Detection API code, so it only looks for a single object?
Is there a way to limit an existing TensorFlow object detection model?
0.197375
0
0
1,270
45,996,727
2017-09-01T09:04:00.000
2
0
0
0
python,numpy,random,montecarlo,mersenne-twister
45,996,951
1
true
0
0
I do not think anyone can tell you whether this algorithm suffices without knowing how the random numbers are being used. What I would do is replace the numpy random numbers with something else; there are certainly other modules available that provide different algorithms. If your simulation results are not affected by the choice of random number generator, that is already a good sign.
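That swap-and-compare check can be done with the standard library alone: random.Random is the same Mersenne Twister that numpy's legacy generator uses, while random.SystemRandom draws from the OS entropy source, so agreement between the two is a cheap sanity check. Here a tiny hit-or-miss estimate of pi stands in for the real integrand:

```python
import random

def estimate_pi(rng, n=200_000):
    """Hit-or-miss Monte Carlo estimate of pi using the given RNG."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

mt = estimate_pi(random.Random(12345))    # Mersenne Twister (seedable)
osr = estimate_pi(random.SystemRandom())  # OS entropy, different generator
assert abs(mt - osr) < 0.05               # estimates should agree closely
```

For production runs, numpy's newer Generator API also lets you swap bit generators (e.g. PCG64 in place of MT19937) with a one-line change, which makes the same comparison easy at full scale.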
1
1
1
I wrote a Monte Carlo (MC) code in Python with a Fortran extension (compiled with f2py). As it is a stochastic integration, the algorithm relies heavily on random numbers, namely I use ~ 10^8 - 10^9 random numbers for a typical run. So far, I didn't really mind the 'quality' of the random numbers - this is, however, something that I want to check out. My question is: does the Mersenne-Twister used by numpy suffice or are there better random number generators out there that one should (could) use? (better in the sense of runtime as well as quality of the generated sequence) Any suggestions/experiences are most definitely welcome, thanks!
numpy.random and Monte Carlo
1.2
0
0
859
45,997,541
2017-09-01T09:50:00.000
-5
0
1
0
python
45,998,552
4
false
0
0
On CentOS you would use yum rather than apt-get (which is the Debian/Ubuntu equivalent): simply yum install python3 (on older CentOS 7 releases the package typically comes from EPEL), then use -p python3 while creating your virtual environment. Installing Python 3 this way will not disturb your system Python (2.7).
1
2
0
I have a CentOS 7 machine which already has Python 2.7.5 installed. Now I want to install Python 3 alongside it without disturbing the original Python 2. If I install with pip, I fear it would install version 3 on top of the existing version. Can someone please guide me on how to do this? Also, I have created a virtualenvs directory inside my installation where I want to create the virtualenvs. At present, whenever I create a virtualenv using the virtualenv command, it automatically copies the Python 2 installation there. I want my virtualenvs to contain version 3, and anything outside my virtualenvs should run with version 2. Is this even possible? Thanks a lot for any answers.
Install Python version 3 side by side with version 2 in Centos 7
-1
0
0
6,155
45,997,744
2017-09-01T10:00:00.000
2
1
0
0
python,unit-testing,selenium,uwsgi
45,997,798
1
true
1
0
This seems like you're mixing up different levels of testing. Mocking/patching is appropriate for a unit test, where you test a function in isolation. What you're describing is an integration test; there, rather than patching, you should set up your app to run against a test database.
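For the unit-test level the answer mentions, the standard library's unittest.mock can replace a database-access function inside the module under test. The module layout and function names below are hypothetical stand-ins for the ones in app.py:

```python
from unittest import mock

# Hypothetical stand-ins for app.py and its DB-access function.
def fetch_file(name):
    raise RuntimeError("would hit the real database")

def view(name):
    return {"name": name, "data": fetch_file(name)}

# In a unit test, patch the DB function where it is *looked up*,
# i.e. in the module that calls it:
with mock.patch(__name__ + ".fetch_file", return_value=b"fake bytes"):
    result = view("report.pdf")

assert result == {"name": "report.pdf", "data": b"fake bytes"}
```

This works because view() resolves fetch_file through its module's globals at call time; the key point is that patching happens inside the test process, which is exactly what a separate uwsgi worker process cannot see.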
1
2
0
I have a Python WSGI app that is configured to run with uwsgi, which is installed in the app's virtual environment. The app's main functionality is to retrieve files from a database. I need to test this functionality while running the app with uwsgi, and at the same time I need to mock the outputs of the function that connects to the database. When running uwsgi, this proves to be a hard (impossible?) thing to do. The main app is called app.py. In the same directory there's a tests module (a dir with __init__.py) containing the tests. I try to patch the function's output with patch (from unittest.mock), then open the web page with Selenium in a test case, all while uwsgi is running. But uwsgi's output seems unaffected by the patching; uwsgi just uses the real function from app.py. What could I possibly do to achieve the required behaviour? I need to test how the app works with uwsgi, and at the same time I cannot use any database.
Unittest + Mock + UWSGI
1.2
0
0
367
45,999,285
2017-09-01T11:27:00.000
6
1
0
1
python-2.7,amazon-ec2,aws-cli
46,288,793
2
true
0
0
Ran into the same error. Running this command fixed it for me: export AWS_DEFAULT_REGION=us-east-1. You might also try specifying the region when running any command: aws s3 ls --region us-east-1. Alternatively, run aws configure and enter a valid region as the default region name. Hope this helps!
2
0
0
I configured the AWS CLI on a Linux system. While running any command, like aws ec2 describe-instances, it shows the error "Invalid IPv6 URL".
Invalid IPv6 URL while running commands using AWS CLI
1.2
0
1
6,082
45,999,285
2017-09-01T11:27:00.000
1
1
0
1
python-2.7,amazon-ec2,aws-cli
50,748,839
2
false
0
0
I ran into this issue because the region was mistyped. When you run aws configure during initial setup, if you try to delete a mistaken entry, you can end up with invalid characters in the region name. Hopefully, running aws configure again will resolve your issue.
2
0
0
I configured the AWS CLI on a Linux system. While running any command, like aws ec2 describe-instances, it shows the error "Invalid IPv6 URL".
Invalid IPv6 URL while running commands using AWS CLI
0.099668
0
1
6,082
46,001,197
2017-09-01T13:21:00.000
0
0
1
0
python,python-3.x,variables
46,001,305
1
false
0
0
If you are using Python 3.x, then raw_input() will give you NameError: name 'raw_input' is not defined. In Python 3.x, use name = input("What is your name?") instead, while what you have will run perfectly in Python 2.x.
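If the script has to run under both versions, a common compatibility shim binds one name to whichever function exists:

```python
try:
    read_input = raw_input      # Python 2: raw_input() returns the raw string
except NameError:
    read_input = input          # Python 3: input() already returns a string

# name = read_input("What is your name?")  # works on both versions
```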
1
1
0
I've been working on a project recently to refresh my Python, and I need to set a variable to a name. name = raw_input("What is your name?") is what I have so far, and I can't get it to set the variable (name) to any word. When I put print(name) on the next line, it comes up with an error. How do I set the variable to whatever the player inputs?
How do I use raw_input to set a variable to any word?
0
0
0
80
46,003,172
2017-09-01T15:07:00.000
3
0
1
0
python,numba
46,003,271
2
false
0
0
Numba supports namedtuples in nopython mode, so they are a good alternative to a dict for passing a large group of parameters into a Numba-jitted function.
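A sketch of that pattern (Numba itself is left out so the snippet stays dependency-free; with Numba installed you would simply uncomment the decorator, since namedtuples are supported in nopython mode, and the physics constants here are just an illustrative stand-in for the question's parameters):

```python
from collections import namedtuple

# Constant run-time parameters, grouped in one immutable object.
Params = namedtuple("Params", ["dt", "gravity", "drag"])

# @numba.njit          # <- with Numba installed, jit-compile as-is
def step(velocity, p):
    """Advance a falling object's velocity by one time step."""
    return velocity + (p.gravity - p.drag * velocity) * p.dt

p = Params(dt=0.1, gravity=9.81, drag=0.5)
v = step(0.0, p)       # 0.0 + (9.81 - 0.0) * 0.1 = 0.981
```

Because the namedtuple is immutable and its field types are fixed, Numba can infer a concrete type for the whole parameter bundle, which is exactly what it cannot do for an arbitrary dict.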
1
7
0
In my code, there are a lot of parameters which are constant during the running. I defined a dict type variable to store them. But I find that numba cannot support dict. What is a better way to solve this?
Replacement of dict type for numba as parameters of a python function
0.291313
0
0
2,207
46,004,579
2017-09-01T16:36:00.000
0
0
1
0
python
46,004,605
2
false
0
0
On Python 3, import builtins; on older (2.x) versions, import __builtin__. You can inspect any module's contents with the dir() function.
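For example, on Python 3 the familiar built-in functions all show up as attributes of that module:

```python
import builtins

names = dir(builtins)    # everything the interpreter pre-loads
assert "len" in names and "print" in names and "range" in names

# The global built-in names are literally the module's attributes:
assert builtins.len is len
```

(Note that the built-ins are implemented in C, so unlike pure-Python standard-library modules there is no .py source file to open; dir() and help() are the way to browse them.)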
1
3
0
Like we have python modules in the standard library from which we can import methods and use them, is there also a module where all the built-in functions are defined? If yes, how can I view that module?
Where are the python Built-in functions stored?
0
0
0
2,074
46,005,544
2017-09-01T17:47:00.000
0
0
1
0
python,spyder
66,217,726
3
false
0
0
In the menu bar, click on 'Consoles', then click on 'Restart kernel'.
3
6
0
Does anybody know the keyboard shortcut for restarting the kernel in Spyder? It says it should be Ctrl + . but that is not working. I'm using a norwegian keyboard so I'm thinking it is related to that. I have tried various combinations of Ctrl + something to no avail. Anybody else using a non-US/EN keyboard having this issue? Suggestions? It's not life or death but it would be nice to know the shortcut.
Keyboard shortcut for restarting kernel in spyder
0
0
0
29,879
46,005,544
2017-09-01T17:47:00.000
17
0
1
0
python,spyder
56,983,585
3
true
0
0
I figured it out. The console (not the editor or the file explorer or some other part of the interface) has to be selected, then hit Ctrl + .. Facepalm.
3
6
0
Does anybody know the keyboard shortcut for restarting the kernel in Spyder? It says it should be Ctrl + . but that is not working. I'm using a norwegian keyboard so I'm thinking it is related to that. I have tried various combinations of Ctrl + something to no avail. Anybody else using a non-US/EN keyboard having this issue? Suggestions? It's not life or death but it would be nice to know the shortcut.
Keyboard shortcut for restarting kernel in spyder
1.2
0
0
29,879
46,005,544
2017-09-01T17:47:00.000
3
0
1
0
python,spyder
46,008,661
3
false
0
0
Ctrl + L works on my Hebrew keyboard so it should also work with the norwegian one.
3
6
0
Does anybody know the keyboard shortcut for restarting the kernel in Spyder? It says it should be Ctrl + . but that is not working. I'm using a norwegian keyboard so I'm thinking it is related to that. I have tried various combinations of Ctrl + something to no avail. Anybody else using a non-US/EN keyboard having this issue? Suggestions? It's not life or death but it would be nice to know the shortcut.
Keyboard shortcut for restarting kernel in spyder
0.197375
0
0
29,879
46,005,960
2017-09-01T18:19:00.000
0
0
1
0
python,django,virtualenv
46,009,125
2
false
1
0
This is how I do it. Create a virtualenv. Activate the virtualenv. Install Django and other packages in the virtualenv. python manage.py runserver... Perhaps it is not so much a case of Django using the virtualenv, as the virtualenv using Django.
1
0
0
I need help to understand how I can: 1) Choose which virtualenv my django project should use? As I understand it (maybe I'm wrong!), when I activate the virtualenv, that is the one my project will be using. But what about if I'm running 2 different projects on a single server and each one should use its own virtualenv? I'm looking for your help :)
How to specify which virtualenv to use in my django project?
0
0
0
52
46,009,089
2017-09-01T23:27:00.000
1
0
1
0
python,django,nginx,virtualenv,digital-ocean
46,009,209
1
false
1
0
virtualenv -p python3 envname or virtualenv -p /usr/bin/python3 <venv-name>. That's going to create a virtualenv with Python 3; tell me if I misunderstood the question.
1
0
0
I have a droplet from DigitalOcean. It runs Ubuntu 16.04. By default, it is using Python 2. I have a website created with Django. I want to set up a virtual env and run Python 3 in the virtual environment. The HTTP server in the droplet is NGINX. How can I let the droplet pick up the Python 3 in the virtual environment as the Python for my Django project? Thanks!
How to setup python 3 from virtual environment as the default python for Django
0.197375
0
0
255
46,011,743
2017-09-02T08:02:00.000
1
0
0
0
python,selenium,web-scraping,selenium-chromedriver,geckodriver
46,012,320
1
false
1
0
Two words: Browser Fingerprinting. It's a huge topic in its own right and, as Tarun mentioned, would take a decent amount of research to nail this issue on its head. But possible, I believe.
1
0
0
I am trying to scrape a dynamic-content (JavaScript) page with Python + Selenium + BS4 and the page blocks my requests at random (the software might be F5 AMS). I managed to bypass this by changing the user agent for each of the browsers I have specified. The thing is, only the Chrome driver can get past the rejection. The same code, adjusted for the PhantomJS or Firefox drivers, is blocked constantly, as if I am not even changing the user agent. I must say that I am also multithreading, meaning starting 4 browsers at the same time. Why does this happen? What does the Chrome WebDriver have to offer that gets past the firewall while the rest don't? I really need to get the results because I want to change to Firefox; therefore, I want to make Firefox pass just as Chrome does.
How does Chrome Driver work, but Firefox, PhantomJS and HTMLUnit not?
0.197375
0
1
181
46,011,785
2017-09-02T08:08:00.000
32
0
1
0
python,jupyter-notebook,ipython-notebook,jupyter,data-science
48,120,444
4
false
0
0
<sup>superscript text </sup> also works, and might be better because latex formatting changes the whole line etc.
1
46
0
I want to to use numbers to indicate references in footnotes, so I was wondering inside of Jupyter Notebook how can I use superscripts and subscripts?
How to do superscripts and subscripts in Jupyter Notebook?
1
0
0
58,725
46,013,423
2017-09-02T11:45:00.000
1
0
0
0
python,django
46,013,451
2
false
0
0
You can make a symptoms table and then relate it using one-to-many relationship.
1
0
0
Example: I have a table named disease, and it contains four fields: DiseaseId DiseaseName DiseaseType Symptoms As for one disease, many symptoms can be present. So, can I store as much data as needed in that one field, Symptoms? How? And if I can't, then what's the other solution?
Can I add multiple symptoms for one disease in the same table?
0.099668
0
0
61
46,013,567
2017-09-02T12:03:00.000
1
0
0
0
mongodb,pythonanywhere,mlab
46,028,070
1
false
0
0
Hey, after checking I found that PythonAnywhere requires a paid plan in order to use mLab services, or other services.
1
0
0
When I try to connect to my application deployed at PythonAnywhere, the database is not working; it seems the app can't reach it. When I use my computer and run the app, everything seems to be perfect. Anyone have any ideas? Thanks very much.
pythonanywhere with mlab(mongoDB)
0.197375
1
0
197
46,015,426
2017-09-02T15:38:00.000
1
0
0
0
python,machine-learning,scikit-learn
68,411,615
2
false
0
0
There are alternatives like XGBoost or LightGBM that are distributed (i.e., can run in parallel). These are well documented and popular boosting algorithms.
1
6
1
Relatively new to model building with sklearn. I know cross validation can be parallelized via the n_jobs parameter, but if I'm not using CV, how can I utilize my available cores to speed up model fitting?
How can I parallelize fitting a gradient boosting model with sklearn?
0.099668
0
0
3,044
46,016,385
2017-09-02T17:28:00.000
0
0
1
0
python,visual-studio-2012
53,776,734
1
false
0
0
This error indicates your compiler can't find the required header files to compile the PyRadiomics C Extensions' source code. The easiest solution is to get the latest PyRadiomics wheel from PyPi (pip install pyradiomics), which are precompiled for 64-bit Python 2.7, 3.4, 3.4, 3.4 on windows, linux and mac.
1
1
0
The below error occurs while installing pyradiomics packages using python setup.py install fatal error C1083: Cannot open include file: 'stdlib.h': No such file or directory error: command 'C:\Program Files\Microsoft Visual Studio 14.0\VC\bin\cl.exe' failed with exit status 2
C:\\Program Files\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit status 2
0
0
0
1,842
46,016,736
2017-09-02T18:05:00.000
8
0
0
0
javascript,django,python-3.x,frontend,backend
46,016,757
3
true
1
0
No, Python can't be used in the frontend. You need frontend technologies like HTML, CSS, JavaScript, jQuery etc. for the frontend. Python can be used as a scripting language in the backend.
1
12
0
I'm watching the udemy Django tutorial that requires using JavaScript as the front-end and Python for the back-end: Can you replace JavaScript with Python? What are the advantages and disadvantages of that?
Can you use Python for both front end and back end using Django framework?
1.2
0
0
17,671
46,016,838
2017-09-02T18:14:00.000
1
0
0
0
python,pandas,dataframe
58,381,427
4
false
0
0
You can also do like this, df[df == '?'] = np.nan
2
2
1
I have a dataset and there are missing values which are encoded as ?. My problem is how can I change the missing values, ?, to NaN? So I can drop any row with NaN. Can I just use .replace() ?
How can I convert '?' to NaN
0.049958
0
0
8,652
46,016,838
2017-09-02T18:14:00.000
2
0
0
0
python,pandas,dataframe
52,961,763
4
false
0
0
You can also read the data initially by passing df = pd.read_csv('filename', na_values='?'). It will automatically replace '?' with NaN.
2
2
1
I have a dataset and there are missing values which are encoded as ?. My problem is how can I change the missing values, ?, to NaN? So I can drop any row with NaN. Can I just use .replace() ?
How can I convert '?' to NaN
0.099668
0
0
8,652
46,017,576
2017-09-02T19:46:00.000
0
0
0
0
python,database,csv,panel
46,018,254
3
false
0
0
I would also suggest using a DB; it is much more convenient to update tables in a DB than a csv file. Moreover, if you have a substantial number of observations, you will be able to access/manipulate your data much faster. Another solution is to keep separate updates in separate .csv files. You can still keep your major file (the one which is regularly updated), and at the same time create separate files for each update.
2
2
1
The data consists of Date, Open, High, Low, Close, Volume and is currently stored in a .csv file. It is currently updated every minute, and as time goes by the file keeps growing and growing. The problem is that when I need 500 observations from the data, I need to import the whole .csv file, and that is a problem, yes. Especially when I need to access the data fast. In Python I use the data mostly in a data frame or panel.
Most efficient way to store financial data (Python)
0
0
0
841
46,017,576
2017-09-02T19:46:00.000
0
0
0
0
python,database,csv,panel
70,355,115
3
false
0
0
You may want to check RethinkDB; it gives you speed, reliability and also a flexible searching ability, and it has a good Python driver. I also recommend using Docker, because in that case, regardless of which database you want to use, you can easily store the data of your DB inside a folder, and you can change that folder at any time (when you have a 1TB hard drive and now want to change it to a 4TB one). Maybe using Docker in your project is more important than the DB.
2
2
1
The data consists of Date, Open, High, Low, Close, Volume and is currently stored in a .csv file. It is currently updated every minute, and as time goes by the file keeps growing and growing. The problem is that when I need 500 observations from the data, I need to import the whole .csv file, and that is a problem, yes. Especially when I need to access the data fast. In Python I use the data mostly in a data frame or panel.
Most efficient way to store financial data (Python)
0
0
0
841
46,018,426
2017-09-02T21:36:00.000
0
0
0
0
python,sockets,pythonanywhere
46,066,092
2
false
0
0
Nope. PythonAnywhere doesn't support the socket module.
2
1
0
I have a beginner PythonAnywhere account, which, the account comparison page notes, have "Access to External Internet Sites: Specific Sites via HTTP(S) Only." So I know only certain hosts can be accessed through HTTP protocols, but are there restrictions on use of the socket module? In particular, can I set up a Python server using socket?
PythonAnywhere - Are sockets allowed?
0
0
1
1,618
46,018,426
2017-09-02T21:36:00.000
4
0
0
0
python,sockets,pythonanywhere
46,093,607
2
false
0
0
PythonAnywhere dev here. Short answer: you can't run a socket server on PythonAnywhere, no. Longer answer: the socket module is supported, and from paid accounts you can use it for outbound connections just like you could on your normal machine. On a free account, you could also create a socket connection to the proxy server that handles free accounts' Internet access, and then use the HTTP protocol to request a whitelisted site from it (though that would be hard work, and it would be easier to use requests or something like that). What you can't do on PythonAnywhere is run a socket server that can be accessed from outside our system.
2
1
0
I have a beginner PythonAnywhere account, which, the account comparison page notes, have "Access to External Internet Sites: Specific Sites via HTTP(S) Only." So I know only certain hosts can be accessed through HTTP protocols, but are there restrictions on use of the socket module? In particular, can I set up a Python server using socket?
PythonAnywhere - Are sockets allowed?
0.379949
0
1
1,618
46,018,812
2017-09-02T22:37:00.000
0
0
1
0
python,google-cloud-platform,jupyter-notebook
55,023,413
3
false
0
0
I know it's an old question but let me give it a shot. When you say Google Cloud instance do you mean their compute engine (virtual machine)? If so, you don't need to keep your laptop running all the time to continue a process in the cloud; there is an alternative. You can run your Python programs from the terminal and use tools like Screen or tmux, which keep your (training) process running even if your GC instance is disconnected. You can even turn your system off. I just finished a 24-hr-long hyperparameter optimization marathon a few days back. Allow me to also mention here that sometimes "Screen" throws X11 display related errors, so tmux can be used instead. Usage instructions: Install tmux: sudo apt-get install tmux. Start a new tmux session to run your process: tmux new -s "name_of_sess" --> this opens a new window in the terminal. Type your command there, like python my_program.py, to start your training. Detach the session: Ctrl+B -> release -> press 'd' (while inside the session). List all the tmux sessions running: tmux ls. Attach a session: tmux attach -t "id_from_above_command_lists". Kill a tmux session: Ctrl+Shift+d. Note: Commands are mentioned based on what I remember; they may have mistakes. A quick search will give you the exact syntax. The idea is to show how easy it is to use.
3
0
0
So I want to run a neural network on Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running?
Keep Google Cloud Jupyter Notebook Running while computer sleeps
0
0
0
2,193
46,018,812
2017-09-02T22:37:00.000
0
0
1
0
python,google-cloud-platform,jupyter-notebook
56,662,974
3
false
0
0
In the remote (GCP) SSH client: Install tmux: sudo apt-get install tmux. Start a new tmux session: tmux new -s "session". Run commands as normal.
3
0
0
So I want to run a neural network on Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running?
Keep Google Cloud Jupyter Notebook Running while computer sleeps
0
0
0
2,193
46,018,812
2017-09-02T22:37:00.000
2
0
1
0
python,google-cloud-platform,jupyter-notebook
51,087,148
3
false
0
0
I just discovered this huge oversight in GCP. I don't understand how they could design it this way. This behavior defeats a major point of using the cloud. They want us to pay for this? I use a laptop which sometimes needs to be asleep in a backpack, I can't keep a computer on all day just so a cloud computer can run. If I can't figure anything else out, we are just going to have to use two cloud computers. Maybe use like a small free cloud computer to keep the big datalab one running. We shouldn't have to resort to this.
3
0
0
So I want to run a neural network on Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running?
Keep Google Cloud Jupyter Notebook Running while computer sleeps
0.132549
0
0
2,193
46,020,031
2017-09-03T03:33:00.000
0
0
0
0
python,web-scraping,scrapy,scrapy-spider
46,020,248
1
true
1
0
The response is stored in memory. That said, every time you call response.xpath it doesn't hit the website but reads from memory.
1
1
0
Does Scrapy ping the website every time I use response.xpath? Or is there one response value stored in memory per request and all subsequent xpath queries are run locally?
Optimizing Scrapy xpath queries
1.2
0
1
98
46,022,688
2017-09-03T10:52:00.000
0
0
0
0
python,numpy,vectorization
46,022,740
2
false
0
0
I found another answer: A = np.where(A < 0, A + 5, A)
1
0
1
Very basic question: Suppose I have a 1D numpy array (A) containing 5 elements: A = np.array([ -4.0, 5.0, -3.5, 5.4, -5.9]) I need to add 5 to all the elements of A that are less than zero. What is the numpy way to do this without for-looping?
Change values of a 1D numpy array based on certain condition
0
0
0
4,001
46,023,310
2017-09-03T12:09:00.000
1
0
0
0
android,python-3.x,pygame,apk
55,198,484
1
true
0
1
I am also building an APK from Pygame. But the tool pgs4a is actually designed for Python 2 only. No tool has been developed that enables the use of Python 3, so the only option is Python 2. Till now it has not been possible to use Python 3 for this purpose.
1
1
0
Recently I was trying to package all the files from a pygame program into an Android package (apk) using PGS4A, but it says that I need python 2.7. How to create an apk from pygame with python 3.5?
How to create an APK from Pygame using PGS4A?
1.2
0
0
2,183
46,023,855
2017-09-03T13:23:00.000
5
0
1
0
python,python-3.x
46,023,888
5
true
0
0
and True just checks that .. well, nothing, since True is True. i in incoming_json and incoming_json[i] would check that the value of the key represented in incoming_json is True as well (or a value evaluated as True). If you actually want to check for the boolean value True (and not 1 etc.), use incoming_json[i] is True.
1
2
0
I receive a JSON dictionary, and I want to check if certain keys are present and are true. I check whether they're present using all(i in incoming_json for i in ['title', 'code', 'university', 'lecturer']), but I'm stuck with checking if they're true. I tried all(i in incoming_json and True for i in ['title', 'code', 'university', 'lecturer']) and all(i in incoming_json for i in ['title', 'code', 'university', 'lecturer'] if i), but they don't seem to make any difference. What am I doing wrong? Example JSON: {title: "Example title", code: "1234", university: "2", lecturer: "John Doe"} Clarification: I only need to know if they're truthy or falsy. Edit: thanks for the responses, I could've accepted any of them, but I accepted the one that explained what I did wrong.
Check if items are in a dictionary and are true using Python
1.2
0
0
92
46,024,476
2017-09-03T14:34:00.000
0
1
0
1
python,cron
46,024,560
2
true
0
0
Well, no. Your argument - */60 * * * * - means "run every 60 minutes". And you can't specify a shorter interval than 1 minute, not in standard Unix cron anyway.
1
1
0
I tried making a cronjob to run my script every second. I used */60 * * * * as a parameter to run every second, but it didn't work. Please suggest how I should run my script every second.
I want to make a cronjob such that it runs a python script every second
1.2
0
0
1,441
46,028,132
2017-09-03T21:51:00.000
0
0
1
0
python,python-3.x,exe,pyinstaller
46,028,466
1
true
0
0
It seems like your Pyinstaller is using the wrong version of Python, to make it use the correct one you probably want to use an explicit declaration of what Python interpreter you're using. It's normally something like python -m pyinstaller {args} but other ones could be python3.5 I'd recommend using a virtual environment so you're sure what Python interpreter you're using.
1
0
0
I have a 64 bit PC, and python 3.6.2 (64 bit), python 3.5.4 (32 bit), and python 3.5.4 (64 bit), all installed and added to my path. In Pycharm, I created a virtual environment based off of python 3.5.4 (32 bit) and wrote a project in this env. Each version of python I have installed has an associated virtual env, and all of them have pyinstaller installed on them via Pycharm's installer. However, when I open up a command prompt in the project folder and type pyinstaller -F project_name.py it spits out a .exe that only runs on 64 bit machines. Everything is tested and works perfectly well on 64 bit PCs, but I get an error on 32 bit PCs asking me to check whether or not the system is 32 bit or 64 bit. How can this be possible, and how do I fix it? EDIT: It seems as though pyinstaller is accessing the python35 folder instead of the python35-32 folder when running. How do I stop this?
Making a 32 bit .exe from PyInstaller using PyCharm
1.2
0
0
2,959
46,032,658
2017-09-04T07:58:00.000
0
0
0
0
python,tensorflow
46,032,724
2
false
0
0
That should work. Check whether you are using an environment but not updating the tensorflow version within the environment. Also, please restart the notebook after saving it, then run the cells and try again. That should work. Verify in the notebook: run print(tf.__version__). Please mark the answer if it resolves the issue.
1
1
1
I am facing the below error while running the code for LinearClassifier in tensorflow. AttributeError: module 'tensorflow.python.estimator.estimator_lib' has no attribute 'LinearRegressor' My current version for tensorflow is 1.2.1. I tried to update the version of the package from ANACONDA environment, its not showing for an upgrade. I tried to upgrade it from command prompt by using below command, it is successfully updating the package however it is not reflecting to the actual library when I am using it. pip install --upgrade tensorflow==1.3.0 FYI, I am using Jupyter Notebook and have created a separate environment for tensorflow. Please let me know if I have missed anything.
Not able to update the version of tensorflow
0
0
0
5,131
46,038,671
2017-09-04T13:59:00.000
1
0
0
0
python,computer-vision,artificial-intelligence,keras,training-data
46,039,296
3
false
0
0
First detect the cars present in the image, and obtain their size and alignment. Then go for segmentation and labeling of the parking lot by fixing a suitable size and alignment.
2
0
1
I have a project that uses a deep CNN to classify a parking lot. My idea is to classify every space by whether there is a car in it or not. And my question is, how do I prepare my image dataset to train my model? I have downloaded the PKLot dataset for training, including negative and positive images. Should I turn all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have landscape and portrait images.) Thanks :)
how to prepare image dataset for training model?
0.066568
0
0
822
46,038,671
2017-09-04T13:59:00.000
1
0
0
0
python,computer-vision,artificial-intelligence,keras,training-data
46,153,830
3
false
0
0
As you want to use the PKLot dataset for training your machine and test with real data, the best approach is to make both datasets similar and homologous: they must be normalized, fixed-size, gray-scaled and have parameterized shapes. Then you can use the Scale-Invariant Feature Transform (SIFT) for image feature extraction as a basic method. The exact definition often depends on the problem or the type of application. Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. You can use these types of image features based on your problem: corners / interest points, edges, blobs / regions of interest points, ridges, ...
2
0
1
I have a project that uses a deep CNN to classify a parking lot. My idea is to classify every space by whether there is a car in it or not. And my question is, how do I prepare my image dataset to train my model? I have downloaded the PKLot dataset for training, including negative and positive images. Should I turn all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have landscape and portrait images.) Thanks :)
how to prepare image dataset for training model?
0.066568
0
0
822
46,041,148
2017-09-04T16:32:00.000
14
0
0
0
python,pandas
46,041,308
2
true
0
0
The first one computes correlation with another dataframe: between rows or columns of two DataFrame objects The second one computes it with itself Compute pairwise correlation of columns
1
16
1
What is the reason for Pandas to provide two different correlation functions? DataFrame.corrwith(other, axis=0, drop=False): Compute pairwise correlation between rows or columns of two DataFrame objects vs. DataFrame.corr(method='pearson', min_periods=1): Compute pairwise correlation of columns, excluding NA/null values (from pandas 0.20.3 documentation)
Pandas corr() vs corrwith()
1.2
0
0
22,525
46,045,916
2017-09-05T01:46:00.000
1
0
1
0
python,python-3.x,scikit-learn
46,290,419
2
false
0
0
Try starting fresh and reinstall all of the necessary modules through miniconda3. Maybe the scikit-learn included on that install did not work. You can then try sudo apt-get install python3-scipy python3-sklearn
1
0
1
I am trying to import 'sklearn' using Python 3.4.3 on a Raspberry Pi 3 running Raspbian. I downloaded miniconda3, which includes all the necessary modules to use scikit. However, when I attempt to import 'sklearn' in IDLE, I receive an error stating that there is no module named 'sklearn'.
No Module named "sklearn"
0.099668
0
0
4,166
46,046,120
2017-09-05T02:18:00.000
0
0
0
0
python,selenium,google-chrome,webdriver,selenium-chromedriver
51,934,022
1
false
0
0
I faced the same issue but it was happening with Firefox. On more investigation I found that there were two different Firefox installations on my comp: x86 and x64. I uninstalled the x86 version and the error got resolved. Check if you've got two different instances of chrome installed on your pc. If yes, then remove one of them.
1
2
0
I have a few Python selenium scripts running on a computer, these scripts open, close, and create chromedriver object instances regularly. After some time of doing this, I get an error "Only one usage of each socket address" on all scripts except for one, the one that doesn't get the error is throwing timeout exceptions. I'm trying to catch the error but it still is thrown and not caught. How do I fix the main issue? Is there too many object instances?
Only one usage of each socket address address (python selenium)
0
0
1
1,448
46,046,256
2017-09-05T02:40:00.000
0
0
1
0
python,nlp,nltk,stanford-nlp,wordnet
46,046,357
2
false
0
0
I have heard of NLTK being useful, but I am sure you can find a lot of public GitHub repos if you search. Some results that come up are TextBlob, Stanford CoreNLP, spaCy, gensim.
1
0
0
I have the task of sentence completion, I have the subj, verb, adverb or subject and all I need is the appropriate preposition in between. Is there any NLP tool that can give distribution over the prepositions that can go with the verb? Best
Finding the best preposition for a verb
0
0
0
829
46,048,152
2017-09-05T06:18:00.000
1
0
1
0
python
46,048,835
1
true
0
0
You can serialise the dictionary into json and write it out onto the disk. You'll need to convert the dates into some unambiguous format. If it's too big, you can zip files. Since they're text, they should compress fairly well.
1
0
0
I need to store a dictionary of which keys are dates and values are a list of strings. The file was originally stored by matlab in .mat with only around 100MB. When we try to store it in pickle, it become 1G. Are there any good ways to save this dictionary in python?
Best way to store a dictionary of which keys are dates and values are a list of strings in python
1.2
0
0
63
46,048,165
2017-09-05T06:18:00.000
0
0
0
1
java,python,apache-kafka,workflow
46,048,325
1
true
0
0
If you publish Kafka messages with keys they will be directed to topic partitions such that all similar keys go to the same partition. Alternatively you can use Kafka Streams to read an input topic and route messages to a set of output topics based on the keys provided with the messages.
1
0
0
I am migrating my Java project from RabbitMQ to Kafka (for some reasons). However, I am facing one difficulty. In the current workflow, I post all the messages to a RabbitMQ exchange, and based on the routing key of the messages, the messages are redirected to one or more queues. I want to retain the same functionality in Kafka. (I know Kafka is not originally suited for it, but I want a workaround.) Basically, I want something like this: whenever a message is received by a topic, based on the meta present in the message, the message should be redirected to another set of topics. What is the fastest way to achieve this? I would prefer a Python or Java solution. Thanks
How to add workflow to Kafka messages?
1.2
0
0
391
46,054,748
2017-09-05T12:14:00.000
1
0
0
0
python,odoo-8,invoice,point-of-sale,accounting
46,104,613
1
true
1
0
It does not need to be fixed, because it is not a bug: when you close the session you post the entries and reconcile payments. I see that as being for security reasons too; you don't want some user to invoice a yesterday's POS order, as he could use the invoice for money laundering (I exaggerated the event).
1
0
0
Does anybody know why it's not possible to generate an invoice after the POS session is closed and the POS orders are booked? And how to fix this?
Odoo POS create invoice after POS session is closed
1.2
0
0
428
46,055,886
2017-09-05T13:12:00.000
3
0
0
0
python,r,statistics,covariance
46,061,009
1
true
0
0
OK, you only need one matrix and randomness isn't important. Here's a way to construct a matrix according to your description. Start with an identity matrix 50 by 50. Assign 10 to the first (upper left) element. Assign a small number (I don't know what's appropriate for your problem, maybe 0.1? 0.01? It's up to you) to all the other elements. Now take that matrix and square it (i.e. compute transpose(X) . X where X is your matrix). Presto! You've squared the eigenvalues so now you have a covariance matrix. If the small element is small enough, X is already positive definite. But squaring guarantees it (assuming there are no zero eigenvalues, which you can verify by computing the determinant -- if the determinant is nonzero then there are no zero eigenvalues). I assume you can find Python functions for these operations.
1
0
1
So I would like to generate a 50 X 50 covariance matrix for a random variable X given the following conditions: one variance is 10 times larger than the others the parameters of X are only slightly correlated Is there a way of doing this in Python/R etc? Or is there a covariance matrix that you can think of that might satisfy these requirements? Thank you for your help!
How to generate a random covariance matrix in Python?
1.2
0
0
1,796
46,056,161
2017-09-05T13:26:00.000
1
0
1
1
python,cmd,installation,python-install
63,153,547
3
false
0
0
For Windows I was unable to find a way to download Python using just CMD, but if you have python.exe on your system then you can use the below method to install it (you can also make a .bat file to automate it). Download the python.exe file on your computer from the official site. Open CMD and change your directory to the path where you have python.exe. Paste this code in your command prompt; make sure to change the name to your file's version in the below code (e.g. python-3.8.5.exe): python-3.6.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0 It will also set the path variables.
1
7
0
Is it possible to install Python from cmd on Windows? If so, how to do it?
How to install Python using Windows Command Prompt
0.066568
0
0
66,195
46,057,558
2017-09-05T14:33:00.000
3
0
0
0
python,twitter-bootstrap,flask,jinja2
46,057,739
1
false
1
0
Jinja is a rendering engine. It doesn't care what it's rendering, and that includes not caring that you decided to use Twitter Bootstrap to lay out your HTML page. wtf.form_field is a macro from Flask-Bootstrap that handles rendering the label, input, errors, and other information with Bootstrap's CSS. There is no requirement that you use it. You can look at its source to see what you would need to do instead. There is nothing wrong with using pre-made CSS like Bootstrap, or libraries like Flask-WTF and Flask-Bootstrap, to handle parts of the application you don't want to deal with.
1
0
0
I've been building my Flask app from a little code I found in a tutorial. My Flask app is using Bootstrap, WTForms and Jinja to render the HTML files. I would like to clean up since I don't think I need Bootstrap. Can you use Jinja and WTForms without Bootstrap? I'm pretty sure you can but I found a lot of examples that have code like {{ wtf.form_field(form.username) }} which is making me insecure. What is the advantage of {{ wtf.form_field(form.field1) }} instead of {{ form.name.label }} {{ form.name(size=20) }}? Can you use Flask without Bootstrap? I don't want to include all those files.
Can you use Jinja without Bootstrap in Flask?
0.53705
0
0
456
46,060,352
2017-09-05T17:23:00.000
1
1
1
0
python
46,060,417
2
false
0
0
Yes. If a function doesn't access self, it should most likely not be a method. You can use a full module if your goal is to arrange your functions in a distinct namespace. Python uses namespaces everywhere, so we need not shy away from top-level names the way C++ tends to and Java enforces (effectively, because they're not that global after all).
1
0
0
In my test suite I have a file called "utils.py" in which I have assorted functions required by many of the tests. To accomplish this I created a "Utils" class and had all of the functions inside it. A colleague, with more Python experience, insisted that there should be no such class and instead all of these functions should be top-level. Thus "Utils.get_feature_id()" became "get_feature_id()". Would you concur with his assertion? Robert
Is it Pythonic to have global functions not part of a class?
0.099668
0
0
71
46,061,674
2017-09-05T18:59:00.000
1
0
1
0
python,pycharm
46,063,604
2
true
0
0
Fixed by doing a hard uninstall. Steps taken to do this: Deleting the application rm -rf ~/Library/Preferences/PyCharm2017.2 rm -rf ~/Library/Caches/PyCharm2017.2 Then a re-install of PyCharm 2017.2.2.
2
3
0
When I open Pycharm it tries to open previous projects I was working on. It starts to index these projects and opens the tip of the day. It then freezes - I can't close any windows or do anything within Pycharm. How do I open Pycharm and not open these projects. I have tried restarting my machine and re-installing Pycharm. (running on Mac OS Sierra 10.12.6 and Pycharm 2017.2.2)
Pycharm freezes upon opening. How do I open Pycharm without opening any projects?
1.2
0
0
364
46,061,674
2017-09-05T18:59:00.000
0
0
1
0
python,pycharm
50,934,182
2
false
0
0
I know this post is quite old. However, I am new to pycharm and had exactly the same issue. I had pycharm-community-2018.1.4 and this would not start after selection Open-->File or Project. I did a repair installation with pycharm-community-182.2949.11. Boom, it worked like a charm.
2
3
0
When I open Pycharm it tries to open previous projects I was working on. It starts to index these projects and opens the tip of the day. It then freezes - I can't close any windows or do anything within Pycharm. How do I open Pycharm and not open these projects. I have tried restarting my machine and re-installing Pycharm. (running on Mac OS Sierra 10.12.6 and Pycharm 2017.2.2)
Pycharm freezes upon opening. How do I open Pycharm without opening any projects?
0
0
0
364
46,062,103
2017-09-05T19:31:00.000
1
0
1
1
python,bash,macos
46,064,025
1
true
0
0
From your Python version directory, Pip installs packages to './lib/python/site-packages/' and creates the binary in './bin/'. If you install a package to your User directory with: pip install --user [packagename] the Python version directory is: /Users/[username]/Library/Python/[version]/ otherwise the directory is usually: /Library/Frameworks/Python.framework/Versions/[version]. Create a symbolic link from the virtualenv binary in /Users/[username]/Library/Python/3.6/bin/ to /usr/local/bin/ in your path with ln -s: ln -s /Users/[username]/Library/Python/3.6/bin/virtualenv /usr/local/bin/virtualenv and you should be all set. If you need to delete the symbolic link simply use rm: rm /usr/local/bin/virtualenv
1
0
0
I am fresh reinstalling python 2.7 (python, pip) and 3.6 (python3, pip3). However, when I installed pipenv and virtualenv for pythn3 using pip3 - the corresponding bash commands are not added, so simple things like $ virtualenv --version fail. What is going on here? can anyone help, please? Thanks
osx pip python3 - installing packages does not create alias
1.2
0
0
373
46,062,523
2017-09-05T19:59:00.000
0
0
0
0
python,pandas,dask
57,173,648
2
false
0
0
dask.dataframe.from_pandas(pandas.Series(my_data), npartitions=n) is what you need. from_pandas accepts both pandas.DataFrame/Series.
1
2
1
I was trying to add a column in dask dataframe, but it's not letting me to add columns of type list, so I reached a little bit and found that it would add a dask series. However I'm unable to convert my list to a dask series. Can you help me out?
Initialize a dask series
0
0
0
1,854
46,062,649
2017-09-05T20:08:00.000
1
0
0
0
python-2.7,tensorflow,tensorflow-gpu,tf-slim
49,581,528
1
true
0
0
You can partition the GPU usage using the following code. You can set the fraction of the GPU to be used for training and evaluation separately. The code below means that the process is given 30% of the memory. gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3000) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) sess.run(tf.app.run())
1
1
1
I am interested in using the tensorflow slim library (tf.contrib.slim) to do evaluation of a model performance on a(n) (entire) test set periodically during training. The documentation is pretty clear that slim.evaluation.evaluation_loop is the way to go, and it looks promising. The issue is that I don't have a second gpu to spare, this model parameters take up an entire gpu's worth of memory, and I would like to do concurrent evaluation. For example, if I had 2 GPUs, I could run a python script that terminated with "slim.learning.train()" on the first gpu, and another that terminated with "slim.evaluation.evaluation_loop()" on the second gpu. Is there an approach that can manage 1 gpu's resources for both tasks? tf.train.Supervisor comes to mind, but I don't honestly know.
tensorflow slim concurrent train and evaluation loops; single device
1.2
0
0
215
46,063,312
2017-09-05T21:03:00.000
-1
0
1
0
python
46,063,432
2
false
0
0
My file1.py has only the following line: x="hello world" Then from file1 import x print x actually prints the value from file1.
1
0
0
Fairly new to python, have looked around on various so sources, but so far nothing has made the code actually function. The code I am using is from file1 import x. (have also tried import file1.x) but both of them make the module run rather than giving me the variable. Is there any other code to use or am I missing something?
importing variables across modules
-0.099668
0
0
31
46,063,595
2017-09-05T21:27:00.000
2
0
0
0
python,apk,kivy
46,064,362
2
false
0
1
While there is a numpy recipe, I believe there is no scipy or keras one, and it's certainly going to be quite some work to write them. So while theoretically yes, python-for-android would do the job, in practice you'll have to get your hands dirty to get that going.
1
3
0
I have written a Python code which imports libraries like numpy, scipy, keras (deep learning). Is it possible to convert it to mobile .apk using say kivy? I couldn't find any documentation specifying it is possible or not possible. Kindly help.
Converting Python Code to apk for Android?
0.197375
0
0
12,122
46,064,849
2017-09-05T23:55:00.000
2
0
1
0
python,installation,scipy,pip,python-3.6
46,065,204
1
true
0
0
For 32-bit Python you need scipy‑0.19.1‑cp36‑cp36m‑win32.whl. To install scipy‑0.19.1‑cp36‑cp36m‑win_amd64.whl you need 64-bit Python.
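A quick way to confirm which of the two wheels matches your interpreter is to check the pointer size of the running Python:

```python
import struct

# Pointer size in bits: 32 -> win32 wheels, 64 -> win_amd64 wheels
bits = struct.calcsize("P") * 8
print(bits)
```

If this prints 32 you need the win32 wheel; 64 means the win_amd64 one.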
1
0
0
I am trying to install scipy for 3.6, but i get an error: "scipy.....whl is not supported on this platform." I have been attempting to do this via my scripts and using pip install but i am unsure why this is not working.
Error installing scipy whl file using pip inPython 3.6
1.2
0
0
153
46,065,520
2017-09-06T01:36:00.000
-2
0
1
0
python,python-3.x
46,065,660
2
false
0
0
__dict__ is interesting. What you need to understand is how Python looks up attributes, methods, etc. Things like __name__ show up in dir(instance) but are not part of the plain __dict__ of your class. Only explicitly defined attributes and methods are part of A.__dict__; the other stuff is found through a more involved lookup that walks the class, its base classes, and so on, until you get to object.
2
2
0
For the attributes of a class: The attribute __dict__ of a class or an instance doesn't include __base__, __name__. What attributes does __dict__ of a class contain, and what doesn't? How can I get all the attributes of a class? For the attributes of an instance: The attribute __dict__ of an instance doesn't include __class__. What attributes does __dict__ of an instance contain, and what doesn't? How can I get all the attributes of an instance? Thanks.
What attributes does `__dict__` of a class or an instance contain, and what doesn't?
-0.197375
0
0
69
46,065,520
2017-09-06T01:36:00.000
0
0
1
0
python,python-3.x
46,066,414
2
false
0
0
__dict__ excludes: Any attributes (or descriptors) defined on the type (or any type in the mro). Any attributes declared as part of a type implemented in C. Any attributes declared in __slots__. The best you can do for listing all attributes is dir, but it is often unreliable.
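A minimal demonstration of which names land where (the class and attribute names are made up for illustration):

```python
class Base:
    def inherited(self):
        pass

class A(Base):
    class_attr = 1
    def method(self):
        pass

a = A()
a.instance_attr = 2

# The class __dict__ holds only what was defined on A itself
assert 'class_attr' in A.__dict__ and 'method' in A.__dict__
assert 'inherited' not in A.__dict__   # lives in Base.__dict__
assert '__name__' not in A.__dict__    # part of the type machinery, not the dict

# The instance __dict__ holds only per-instance attributes
assert list(vars(a)) == ['instance_attr']
assert 'class_attr' not in a.__dict__

# dir() aggregates the instance, its class, and all bases
assert {'class_attr', 'method', 'inherited', 'instance_attr'} <= set(dir(a))
```

This also shows why dir is the closest thing to "all attributes", even though it remains best-effort rather than guaranteed complete.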
2
2
0
For the attributes of a class: The attribute __dict__ of a class or an instance doesn't include __base__, __name__. What attributes does __dict__ of a class contain, and what doesn't? How can I get all the attributes of a class? For the attributes of an instance: The attribute __dict__ of an instance doesn't include __class__. What attributes does __dict__ of an instance contain, and what doesn't? How can I get all the attributes of an instance? Thanks.
What attributes does `__dict__` of a class or an instance contain, and what doesn't?
0
0
0
69
46,067,848
2017-09-06T06:01:00.000
1
0
0
1
python,airflow,google-cloud-dataproc
53,117,725
2
false
0
0
We were running into the same issue using Google Composer, which was running Airflow 1.9. We upgraded to Airflow 1.10, which Google just released, and this fixed the issue. Now, when I run the operator it can see the cluster - it looks in the correct region. Previously it was always looking in global.
1
1
0
I am performing some operation using DataProcPySparkOperator. This operator is only taking a cluster name as parameter, there is no option to specify region and by default it considers cluster with global region. For clusters with regions other than global, the following error occurs: googleapiclient.errors.HttpError: https://dataproc.googleapis.com/v1/projects//regions/global/jobs:submit?alt=json returned "No current cluster for project id '' with name ''` Am i missing anything or its just limitation with these operators?
Airflow DataProcPySparkOperator not considering cluster other than global region
0.099668
0
0
822
46,069,364
2017-09-06T07:34:00.000
1
0
0
0
algorithm,python-3.x,machine-learning,artificial-intelligence
46,070,137
4
false
0
0
This is not a machine learning problem but an optimization problem, so you need a greedy shortest-path algorithm. Indeed it could be solved this way, but the challenge is to represent your grid as a graph... For example, decompose the grid into an n x n matrix. In your shortest-path algorithm, a node is an element of your matrix (so you exclude the elements of the matrix that contain the scattered points) and the weights of the arcs are the distances. However, n must be kept reasonable, since the number of nodes grows quadratically with n and the search becomes expensive... Maybe other algorithms exist for this specific problem, but I'm not aware of them.
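As a toy illustration of the graph formulation (the grid size and obstacle layout below are made up), a plain breadth-first search already finds a shortest path on an unweighted grid while skipping blocked cells:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS on a grid; grid[r][c] == 1 marks a blocked (scattered-point) cell."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Reconstruct the path by walking predecessors back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

# Example obstacle layout (arbitrary data)
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = shortest_path(grid, (0, 0), (3, 3))
print(path)
```

To respect the required path width/offset, one common trick is to dilate the obstacles (also mark cells within the offset distance of a point as blocked) before running the search; for weighted distances or larger grids, A* with a distance heuristic is the usual upgrade.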
2
0
1
Assume that I have set of points scattered on the XY plane, and i have two points say start and end point any where in XY plane. I want to find the shortest path between start and end point without touching scattered points. The path has to maintain certain offset ( i.e assume path has some width ). How to approach this kind of problems in programming, Are there any algorithms in machine learning.
Shortest root using machine learning/AI
0.049958
0
0
1,861
46,069,364
2017-09-06T07:34:00.000
0
0
0
0
algorithm,python-3.x,machine-learning,artificial-intelligence
46,078,672
4
false
0
0
Like others have already stated: this is not a typical "Artificial Intelligence" problem; it is a kind of path-planning problem. There are different algorithms available. If your path doesn't need to satisfy any constraints, e.g. smoothness, you can use an A* algorithm with distance as the heuristic. You have to represent your XY space as a graph where each node has a coordinate. Further, you need to make sure that no nodes lie near the points you want to avoid. If your path needs to satisfy constraints, this turns into a more complicated path-planning problem where you could apply optimization or RRTs.
2
0
1
Assume that I have set of points scattered on the XY plane, and i have two points say start and end point any where in XY plane. I want to find the shortest path between start and end point without touching scattered points. The path has to maintain certain offset ( i.e assume path has some width ). How to approach this kind of problems in programming, Are there any algorithms in machine learning.
Shortest root using machine learning/AI
0
0
0
1,861
46,070,112
2017-09-06T08:17:00.000
2
0
1
1
python,macos,intellij-idea
46,070,218
2
true
0
0
Try this in menu of IDEA: File -> Settings -> Project: Name of project -> Project Interpreter and from above in the window you can choice interpreter version or virtualenv.
1
1
0
I'm highly confused about this. Python3 is installed per default on the MacBook. which python3 will output /Library/Frameworks/Python.framework/Versions/3.6/bin/python3 Great, so that's my SDK to put into IntelliJ IDEA, one should think. I have the Python plugin for IDEA. However, I can't run Python files. So I try to change the configuration and set it to the above PATH for the Python interpreter. However, still nothing. Trying to run the Python file inside IDEA will prompt a new configuration? I can run the script just file doing python3 script.py in the terminal? I know the path for the Python3 library, yet, IDEA doesn't recognise it at all and doesn't save the configuration. What am I doing wrong in this process? This should be fairly easy to set up but turns out it isn't :) I even tried to create a Python 3.6.2 virtual environment with the IDEA internal tool - same thing? It doesn't allow me to run the Python3 script from inside IDEA. Should I use python from usr/bin/python? If I cd there, I can see Python3. But inside IDEA, i only have access to Python2..
How do I set up Python 3 with IntelliJ IDEA on OSX?
1.2
0
0
5,298
46,071,484
2017-09-06T09:25:00.000
0
0
0
0
python,asterisk,voip,asteriskami
46,072,812
1
false
0
1
You can start with a nice book, like O'Reilly's "The Future of Telephony". There is no way to do something really valuable without knowledge of the Asterisk dialplan. All of those technologies are used for different things.
1
1
0
I want to create an GUI application, that would communicate with Asterisk server and provide functions, such as call forwarding, originating calls, etc. I wanted to use Kivy (Python GUI framework), but there here is so many different tools (AGI, AMI, FastAGI) and libraries (Pyst2, StarPy, etc.) to manage asterisk, that i don't even know where to start. I have already written some code (using Pyst2 asterisk manager) but I have a feeling, that this is not the best solution, as said application should be able to have multiple instances open simultaneously and AMI would be too messy for that purpose. Could someone give me some advice or suggestions what tools would be best to use in this case?
Creating touch GUI application for communication with Asterisk
0
0
0
168
46,075,238
2017-09-06T12:27:00.000
0
0
1
0
python,osgeo
46,075,462
1
false
0
0
Go to Add/Remove Programs in Windows and uninstall everything, then reinstall everything. Do your Python install first and PyCharm should detect everything. I'd suggest doing the 32-bit Python install, as some packages aren't compiled for 64-bit and that makes them a bit challenging to find and install. pip is standard in the Python install now; you'll see a checkbox for it in the installer, which is pre-checked. I'd change your install directory to something simple like C:\Python27. Other than that it should be pretty straightforward. However, I doubt you really need to uninstall Python. You can just go to Settings and, under the interpreter section, add the C:\Python27 directory. PyCharm is constantly scanning for installed modules, so it will know in real time that you've installed them. You can even install packages via pip while PyCharm is open and within seconds PyCharm will recognize them as valid packages. If you're missing packages you can also import them in PyCharm and, when you get the red underline saying a package is missing, hover your mouse over it and hit Alt+Enter to get a menu to install it.
1
1
0
My python installation is a mess. Therefore I'd like to reinstall the entire installation of it. (Unfortunately,) I've also installed QGIS and PyCharm (mostly making it a mess) and I want to start clean..! So, what is the best way to get rid of every little python thingy and what are the best packages/methods for reinstalling Python27, QGIS and PyCharm? Should I go for osgeo or not, should I first install Pycharm or Python etc. Hopefully you have some good thoughts and tools on this. I'm a fan of pip, so in the end I hope its possible just to use pip for installing the packages all around. I'm working on Windows 7, 64bit (thanks for the headsup Karel)
Complete reinstall python, pycharm, qgis
0
0
0
817
46,078,899
2017-09-06T15:15:00.000
0
0
0
0
python,django,pycharm
46,081,739
2
false
1
0
First kill the Django server using sudo lsof -t -i tcp:8000 | xargs kill -9 (whatever port you are using, replace 8000 with it). Then right-click on the project's main directory (the one containing settings.py) -> Mark Directory as -> Sources Root. In Edit Configurations, leave Working Directory blank, and then click the Run button. It worked for me. I am using a Linux system; I'm not sure about Windows, but this should work there too.
1
3
0
I'm new to Pycharm IDE. Previously have been using Atom editor and a shell. I am working in Windows 7. Tonight I created a small Django project in Pycharm. I used the "Tools: Run manage.py task" menu option. Then I ran runserver from the terminal it opened. This launched my server at http://127.0.0.1:8000, as expected. But later I accidentally closed the manage.py window pane in pycharm ide. The server continues to run. Even after I closed Pycharm, the server remains running. How do I stop it?!?! Embarassing.
Started a django server from inside Pycharm, but now I cannot turn it off
0
0
0
2,346
46,080,092
2017-09-06T16:20:00.000
1
1
0
0
c#,python,wcf,client-server,rpc
46,080,832
2
true
1
1
I'm not sure I completely understand your use case, but I would suggest having a look at a REST API approach if you need to have .Net talk to Python, or vice versa.
1
0
0
I am trying an application to write an application on which I have used a client/server architecture. The client side is developed using .NET\C# and the server side is developed on python. To communicate the both sides, I used first tcp/ip socket; so i put my python's methods on a loop then I ask each time from my c# application to run an method. This idea is very bad as it require to cover all use cases that can be happening on network or something like that. After a work of search, I have found three technologies that can answer a client/server architecture which are RPC, RMI and WCF. RMI a java oriented solution so it is rejected. So, my question here is: does RPC and WCF support multi programming languages (interoperability) especially betwenn C# and python?
Client Server architecture using python and C#
1.2
0
0
525
46,081,434
2017-09-06T17:47:00.000
0
0
0
0
python,pycharm,cx-oracle,python-packaging
46,081,563
1
false
0
0
The file cx_Oracle.cp36-win32.pyd is the cx_Oracle module itself. When you import cx_Oracle in your Python program, the Python interpreter loads this .pyd (which is de facto a DLL linked against python36.dll) and calls functions from this Python "module". cx_Oracle has no further files (i.e. no cx_Oracle.py(o|c)) the way other modules do.
1
0
0
I installed cx_Oracle python package using pycharm's package installer. It installed successfully and also works perfectly. The installed directory for the package is shown as c:\program files(x86)\python36-32\lib\site-packages\. When I go to this directory I do not see the package directory for cx_Oracle. I only see following two related to cx_Oracle - cx_Oracle-6.0.1.dist-info directory and cx_Oracle.cp36-win32.pyd file. For other packages I see package directory as well as a info directory but for cx_Oracle I see only the info directory. Where does the package directory for cx_Oracle is present?
issue in locating cx_oracle python package files/directory
0
0
0
598
46,085,215
2017-09-06T22:30:00.000
0
1
1
0
python,python-3.x,string
46,085,264
3
false
0
0
You use the encode and decode methods for this, and supply the desired encoding to them. It's not clear to me if you know the encoding beforehand. If you don't know it you're in trouble. You may have to guess the encoding in some way, risking garbage output.
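The encode/decode approach above requires knowing the codec before the round trip. If the constraint is merely that the intermediate system only accepts ASCII text (an assumption about the asker's setup), Base64 gives a lossless bytes -> ASCII string -> bytes round trip, and the real decode with the correct codec happens only at the end:

```python
import base64

raw = 'héllo wörld'.encode('latin1')  # pretend this arrived as opaque bytes

# bytes -> ASCII-only str (safe to pass through text-only channels)
ascii_text = base64.b64encode(raw).decode('ascii')

# ASCII str -> the original bytes, unchanged
restored = base64.b64decode(ascii_text)
assert restored == raw

# Finally decode with whatever codec the data really is
print(restored.decode('latin1'))
```

The point of Base64 here is that it never guesses an encoding, so no bytes can be corrupted in transit; the guessing risk the answer mentions is deferred to the final decode step.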
1
2
0
I'm Using python 3.5 I have a couple of byte strings representing text that is encoded in various codecs so: b'mybytesstring' , now some are Utf8 encoded other are latin1 and so on. What I want to in the following order is: transform the bytes string into an ascii like string. transform the ascii like string back to a bytes string. decode the bytes string with correct codec. The problem is that I have to move the bytes string into something that does not accept bytes objects so I'm looking for a solution that lets me do bytes -> ascii -> bytes safely.
Convert bytes to ascii and back save in Python?
0
0
0
13,895
46,085,556
2017-09-06T23:08:00.000
3
0
1
0
python,anaconda,ubuntu-16.04,jupyter
46,086,306
1
false
0
0
I found a kernel.json from 2015 buried down in .local/share/jupyter/kernels/python3/kernel.json that was pointing to /usr/bin/python3. Moving it out of the way seems to have fixed things. The strategy was to use jupyter --paths to find the places it looks and check each one.
1
1
0
I have installed Anaconda3-4.4.0 on Ubuntu 16.04. I can't find any variation of virtualenv, PATH hacking, or modifications to kernel.json, that stops it from using /usr/bin/python3 when creating a new notebook. I want it to use the anaconda installed version exclusively so I can have different versions of packages in the anaconda environment. I have removed my ~/.jupyter and ~/.ipython. I have tried virtual environments and everything I can think of. What do I have to do to get it prevent it from using the system installed version of python3.
Anaconda Jupyter notebook uses /usr/bin/python3
0.53705
0
0
239
46,086,401
2017-09-07T01:12:00.000
0
0
0
0
python,matplotlib,scipy,cluster-analysis,hierarchical-clustering
52,784,797
1
true
0
0
fcluster from scipy.cluster.hierarchy will do.
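A sketch of how fcluster produces the color labels (the point data below is made up; k=3 as in the question):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Made-up 2-D points: three well-separated blobs of 20 points each
X = np.vstack([rng.normal(loc, 0.3, size=(20, 2))
               for loc in ((0, 0), (3, 3), (0, 3))])

Z = linkage(X, method="single")
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into k=3 clusters

# labels holds one integer cluster id per point, usable directly as colors:
# import matplotlib.pyplot as plt
# plt.scatter(X[:, 0], X[:, 1], c=labels)
print(sorted(set(labels)))
```

criterion="maxclust" asks for at most t flat clusters from the linkage tree, which is the usual way to get k color labels out of a hierarchical clustering.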
1
1
1
How can I plot a bunch of 2-D points X (say, using matplotlib) with color labels from scipy.cluster.hierarchy.linkage(X, method="single") (say, k = 3)?
How to Plot with Label from scipy.cluster.hierarchy.linkage?
1.2
0
0
340
46,090,908
2017-09-07T07:53:00.000
0
0
0
0
python-3.x,gurobi
59,344,648
1
false
0
0
Your statements are somewhat contradictory: if Gurobi were to add redundant constraints at the beginning, the presolve step would remove them again, and second, them being redundant would mean they won't influence the solution. Thus, there would be a modelling error on your side. As sascha mentioned, it is extremely unlikely that any constraint-generating routine or presolve step would be falsely implemented - such errors would show up immediately, that is, before being patched into the published versions. And from a theoretical side: these methods require mathematical proofs that guarantee they do not cut off feasible solutions. What is sometimes overlooked - I did that only recently - is the default lower bound of 0 on all variables, which stems from the LP notation according to standard form. Further investigation can be done if you post the .lp and .mps files of your initial problem with your question. That way people can investigate themselves.
1
0
1
Is there any way to check whether Gurobi is adding an extra redundant constraint to the model? I tried model.write() at every iteration and it seems fine. But still, my output is not what I'm expecting. I want to check whether there are any unwanted constraints added by Gurobi when solving.
Check Gurobi redundant constraints
0
0
0
249
46,095,206
2017-09-07T11:27:00.000
1
0
1
0
python,validation,libphonenumber
46,095,311
1
false
0
0
A static library most certainly won't know anything about company internal extensions. That library is merely a combination of country specific format validators plus a database of known assigned blocks. The information is not validated in realtime against a live system, so is never guaranteed to be accurate (if a number has been assigned or unassigned yesterday, it won't know about it until that built-in database is being updated). Even if a certain block of numbers is known as being assigned, there's no guarantee that any one individual number within that block is currently in use. The best this library can tell you is that a number looks like a number that could plausibly be in use. Whether it's actually in use and belongs to the user you're interacting with can only be verified by sending a validation code to that number, or some similar validation loop.
1
0
0
I've been using the Python Google Phone Numbers library, and find it to be a good alternative to Twilio. Does the is.valid_number method check if the number is actually connected to a human, or just that its in a correct/valid format/style? For example, obviously a phone number 123 won't be valid because its format is wrong, but if 408-800-1000 is a base corporate phone number and they've given their current employees numbers like 408-800-1001, 408-800-1002, 408-800-1003... and haven't yet reached past 408-800-1007... would something like 408-800-1008 return valid or not?
What is the definition of a 'valid' phone number in Python's libphonenumber
0.197375
0
0
2,432
46,095,249
2017-09-07T11:29:00.000
1
0
0
0
python,scikit-learn,knn
46,101,630
1
true
0
0
I would recommend looking up this book. Introduction to Machine Learning with Python A Guide for Data Scientists by Andreas C. Müller, Sarah Guido The book has code written to visualise various outputs for Machine Learning algorithms.
1
0
1
I want to know is there any way to see "under the hood" on other python sklearn algorithms. For example, I have created a decision tree classifier using sklearn and have been able to export the exact structure of the tree but would like to also be able to do this with other algorithms, for example KNN classification. Is this possible?
Output from scikit learn ML algorithms
1.2
0
0
60
46,097,968
2017-09-07T13:42:00.000
2
0
0
0
python,tensorflow,neural-network,deep-learning,image-segmentation
46,203,664
2
false
0
0
If I understand correctly, you have a portion of each image labeled void in which you are not interested at all. Since there is no easy way to obtain the real value behind these void spots, why don't you map those points to the background label and see what results your model gives? I would try, in a preprocessing step, to clear the void label from the data and substitute it with the background label. Another possible strategy, if you don't simply want to map void labels to background, is to run a mask (with a continuous motion from top to bottom, right to left) that checks the neighboring pixels of each void pixel (say a 5x5 pixel area) and assigns to the void pixel the most common label besides void. Also, you can always keep a better subset of the data by filtering out images where the percentage of void labels is over a threshold. You can keep only images with no void labels, or, more likely, keep images that have less than a threshold (e.g. 5%) of non-labeled points. For those images you can implement the aforementioned strategies for replacing the void labels.
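A numpy sketch of the remap-to-background and image-filtering strategies described above (the label ids 255 and 0 are assumptions; substitute your dataset's actual values):

```python
import numpy as np

# Hypothetical label ids -- substitute your dataset's actual values
VOID, BACKGROUND = 255, 0

# Tiny example label map (made-up data)
labels = np.array([[255,   1,   1],
                   [  0, 255,   2],
                   [  0,   0, 255]])

# Strategy 1: map every void pixel to the background label
cleaned = np.where(labels == VOID, BACKGROUND, labels)
assert not np.any(cleaned == VOID)

# Image-level filtering: keep only images whose void fraction is under 5%
void_fraction = np.mean(labels == VOID)
keep_image = void_fraction < 0.05
print(void_fraction, keep_image)
```

The same np.where remap would run per image inside a tf.data or numpy preprocessing pipeline before the labels reach the loss.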
1
21
1
I was wondering how to handle not labeled parts of an image in image segmentation using TensorFlow. For example, my input is an image of height * width * channels. The labels are too of the size height * width, with one label for every pixel. Some parts of the image are annotated, other parts are not. I would wish that those parts have no influence on the gradient computation whatsoever. Furthermore, I am not interested in the network predicting this “void” label. Is there a label or a function for this? At the moment I am using tf.nn.sparse_softmax_cross_entropy_with_logits.
TensorFlow: How to handle void labeled data in image segmentation?
0.197375
0
0
3,236
46,098,385
2017-09-07T14:01:00.000
0
0
0
0
javascript,python,node.js,socket.io
46,098,475
2
false
1
0
I believe you need a simple API on the server which accept input from the client which can be done via JavaScript. There are several technologies you could have a look at: Ajax. WebSockets.
1
1
0
I have a python script which resides on a web-server (running node.js) and does some machine learning computation. The data has to be supplied to the python script using javascript running in web-browser. How can this be done? I want to know the complete setup. For now, the server is the localhost only.
communicate between python script on server side and javascript in the web-browser
0
0
1
951
46,099,439
2017-09-07T14:52:00.000
3
0
0
0
python,api,curl,user-agent
69,885,034
2
false
1
0
I know this post is a few years old, but since I stumbled upon it...

tl;dr: Do not use the user agent to determine the return format unless absolutely necessary. Use the Accept header or (less ideal) use a separate endpoint/URL.

The standard and most future-proof way to set the desired return format for a specific endpoint is to use the Accept header. Accept is explicitly designed to allow the client to state what response format they would like returned. The value will be a standard MIME type. Web browsers, by default, will send text/html as the value of the Accept header. Most Javascript libraries and frontend frameworks will send application/json, but this can usually be explicitly set to something else (e.g. text/xml) if needed. All mobile app frameworks and HTTP client libraries that I am aware of can set this header if needed.

There are two big problems with trying to use the user agent to determine the response format:

1. The list will be massive. You will need to account for every possible client which needs to be supported today. If this endpoint is used internally, this may not be an immediate problem, as you might be able to enforce which user agents you will accept (which may cause its own set of problems in the future, e.g. forcing your users to a specific version of Internet Explorer indefinitely) and thereby keep the list small. If this endpoint is to be exposed externally, you will almost certainly miss something you badly need to accept.
2. The list will change. You will need to account for every possible client which needs to be supported tomorrow, next week, next year, and in five years. This becomes a self-induced maintenance headache.

Two notes regarding Accept:

1. Please read up on how to use the Accept header before attempting to implement against it. Here is an actual example from this website: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9. Given this, I would return HTML.
2. The value of the header can be */*, which basically just says "whatever" or "I don't care". At that point, the server is allowed to determine the response format.
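A minimal sketch of the negotiation described above. The helper name `preferred_format` and the decision to default to JSON are assumptions for illustration, not part of any framework API; it also ignores q-value weighting for brevity, which a real implementation should honor.

```python
def preferred_format(accept_header):
    """Pick 'html' or 'json' from an Accept header value, defaulting to JSON.

    Simplified sketch: walks media ranges in the order given and ignores
    q-values, which a production implementation should respect.
    """
    if not accept_header or accept_header.strip() == "*/*":
        return "json"  # client doesn't care, so the server chooses
    for part in accept_header.split(","):
        # Strip any parameters such as ";q=0.9" to get the bare media type.
        media_type = part.split(";")[0].strip().lower()
        if media_type in ("text/html", "application/xhtml+xml"):
            return "html"
        if media_type == "application/json":
            return "json"
    return "json"

# A browser typically leads with text/html; an API client with application/json.
browser = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
print(preferred_format(browser))             # html
print(preferred_format("application/json"))  # json
```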
1
2
0
I'm looking for a way to show HTML to a user if they call from a browser, or just give them the API response in JSON if the call is made from an application, a terminal with curl, or generally any other way. I know a number of APIs do this and I believe Django's REST framework does this. I've been able to fool a number of those APIs by passing in my browser's user agent to curl, so I know this is done using user agents — but how do I implement this so it covers every (or most) user agents out there? There has to be a file/database or a regex, so that I don't have to worry about updating my user-agent lists every few months, or worry that my users on the latest browsers might not be able to access my website.
How to detect if a GET request is from a browser or not
0.291313
0
1
1,500
46,099,695
2017-09-07T15:04:00.000
5
0
1
0
python,plotly,pyinstaller
47,979,807
5
false
0
0
It seems plotly isn't fully supported by PyInstaller. I used a workaround that worked for me:

1. Don't use the one-file option.
2. Completely copy the plotly package (for me it was Lib\site-packages\plotly) from the Python installation directory into the /dist/{exe name}/ directory.
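The copy step above can be sketched in Python with `shutil`. The helper name and both paths (`Lib/site-packages`, `dist/my_app`) are illustrative assumptions — adjust them to your own interpreter and app name:

```python
import shutil
from pathlib import Path


def copy_plotly_into_dist(site_packages, dist_dir):
    """Copy <site_packages>/plotly into <dist_dir>/plotly if not already there.

    Mirrors the manual workaround: after building WITHOUT the one-file option,
    place the full plotly package next to the frozen executable.
    """
    src = Path(site_packages) / "plotly"
    dst = Path(dist_dir) / "plotly"
    if src.is_dir() and not dst.exists():
        shutil.copytree(src, dst)
    return dst


# Example call -- both paths are placeholders for your environment:
# copy_plotly_into_dist(r"C:\Python34\Lib\site-packages", r"dist\comdty_runtime")
```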
2
9
1
I am compiling my current program using pyinstaller and it seems to not be able to handle all required files in plotly. It runs fine on its own, and without plotly it can compile and run as well. It seems to be failing to find a file "default-schema.json" that I cannot even locate anywhere on my drive.

Traceback (most recent call last):
  File "comdty_runtime.py", line 17, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "actual_vs_mai.py", line 12, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\__init__.py", line 31, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_objs\__init__.py", line 14, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_objs\graph_objs.py", line 34, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_reference.py", line 578, in <module>
  File "site-packages\plotly\graph_reference.py", line 70, in get_graph_reference
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1215, in resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1457, in get_resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1530, in _get
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 474, in get_data
    with open(path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'H:\Python\Commodity_MAI_Trade_List\Code\dist\comdty_runtime\plotly\package_data\default-schema.json'
Failed to execute script comdty_runtime
pyinstaller fails with plotly
0.197375
0
0
6,865
46,099,695
2017-09-07T15:04:00.000
0
0
1
0
python,plotly,pyinstaller
70,137,929
5
false
0
0
Before using pyinstaller, make sure that the imports that you are using in your program are specifically listed in the requirements file. Running pip freeze > requirements.txt quickly creates a requirements file for your program. I am not sure if this is the solution, but I had similar issues that seemed to go away once I updated the requirements file before making an executable.
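For completeness, a sketch of the same step done from Python — the programmatic equivalent of running pip freeze > requirements.txt in a shell (the output filename is just the conventional one):

```python
import subprocess
import sys

# Ask the current interpreter's pip for the installed-package list...
result = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True, check=True,
)

# ...and write it out as a requirements file.
with open("requirements.txt", "w") as f:
    f.write(result.stdout)
```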
2
9
1
I am compiling my current program using pyinstaller and it seems to not be able to handle all required files in plotly. It runs fine on its own, and without plotly it can compile and run as well. It seems to be failing to find a file "default-schema.json" that I cannot even locate anywhere on my drive.

Traceback (most recent call last):
  File "comdty_runtime.py", line 17, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "actual_vs_mai.py", line 12, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users\ktehrani\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\__init__.py", line 31, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_objs\__init__.py", line 14, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_objs\graph_objs.py", line 34, in <module>
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
    exec(bytecode, module.__dict__)
  File "site-packages\plotly\graph_reference.py", line 578, in <module>
  File "site-packages\plotly\graph_reference.py", line 70, in get_graph_reference
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1215, in resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1457, in get_resource_string
  File "site-packages\setuptools-27.2.0-py3.4.egg\pkg_resources\__init__.py", line 1530, in _get
  File "d:\users*\appdata\local\continuum\anaconda3\envs\py34\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 474, in get_data
    with open(path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'H:\Python\Commodity_MAI_Trade_List\Code\dist\comdty_runtime\plotly\package_data\default-schema.json'
Failed to execute script comdty_runtime
pyinstaller fails with plotly
0
0
0
6,865