Dataset schema (column: dtype, observed range):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string, lengths 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, lengths 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, lengths 25 to 6.53k
Title: string, lengths 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
32,268,889
2015-08-28T10:28:00.000
2
0
0
1
0
python-2.7,centos6,zebra-printers,barcode-printing
0
32,270,859
0
1
0
false
0
0
If you are printing to a network printer, open a TCP connection to port 9100. If you are printing to a USB printer, look for a USB library for Python. Once you have a connection, send a print string formatted in ZPL. Look on the Zebra site for the ZPL manual; it has examples of how to print a barcode. Normal Linux drivers will print graphics and text but do not include a barcode font.
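The network-printer path described in this answer can be sketched with the standard library alone; the printer address and the sample ZPL label below are illustrative assumptions, not part of the original answer:

```python
import socket

def send_zpl(host, zpl, port=9100, timeout=5.0):
    """Send a raw ZPL string to a network printer listening on port 9100."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(zpl.encode("ascii"))

# A minimal ZPL label with a Code 128 barcode (field values are examples;
# see the ZPL manual on the Zebra site for the full command reference):
label = "^XA^FO50,50^BCN,100,Y,N,N^FD123456789^FS^XZ"
# send_zpl("192.168.1.50", label)  # printer address is a placeholder
```

The call is commented out because it needs a reachable printer; everything else runs as-is.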
1
1
0
0
I want to print barcodes on my Zebra desktop label printer on CentOS 6.5, but I did not find any Python drivers for it, nor a script that I could use in my project. Does anyone know how to print a barcode on a Zebra printer?
How to print barcode in Centos 6 using python
0
0.379949
1
0
0
281
32,278,459
2015-08-28T19:24:00.000
0
0
0
0
1
javascript,python,ajax,jsp,jsp-tags
0
32,278,473
0
1
1
false
1
0
The developer console (F12 in Chrome and Firefox) is a wonderful thing. Check the Network or Net tab; there you can see all the requests between your browser and your server.
1
1
0
0
I have a JSP page called X.JSP (it contains a few radio buttons and a submit button). When I hit the submit button in X.JSP, the next page displayed is Y.JSP?xxxx=1111&yyyy=2222&zzzz=3333. The parameters xxxx=1111&yyyy=2222&zzzz=3333 are generated after I click the submit button. Currently I am scripting this in Python: I select a radio button and post the form, but I am not able to get the desired output. How do I find out what page, service, or AJAX call is being made when I hit the submit button in X.JSP, so that I can hit that page directly? Or is there a better way to solve this?
How to know what page or call is being made when a JSP page is submitted
1
0
1
0
1
49
32,284,938
2015-08-29T10:15:00.000
7
0
1
0
0
python,windows,download,wxpython
1
48,046,623
0
10
0
false
0
1
Steps to install wxWidgets and pygame for Python IDLE: install Python 3.x on your system, opting to add it to your PATH; open the Python CLI to check whether Python is working; then open a command prompt (cmd) and type pip to check whether pip is installed. Next, enter the commands pip install wheel and pip install pygame. To install wxPython, enter the command pip install -U wxPython. That's all!
2
20
0
0
So I was looking around at different things to do on Python, like code for flashing text or a timer, but when I copied them into my window, there were constant syntax errors. Now, maybe you're not meant to copy them straight in, but one error I got was 'no module named wx'. I learned that I could get that module by installing wxPython. Problem is, I've tried all 4 options and none of them have worked for me. Which one do I download and how do I set it up using Windows? Thanks
How to properly install wxPython?
0
1
1
0
0
91,741
32,284,938
2015-08-29T10:15:00.000
0
0
1
0
0
python,windows,download,wxpython
1
54,313,732
0
10
0
false
0
1
Check the version of wxPython and the version of Python on your machine. For Python 2.7, use the wxPython3.0-win32-3.0.2.0-py27 package.
2
20
0
0
So I was looking around at different things to do on Python, like code for flashing text or a timer, but when I copied them into my window, there were constant syntax errors. Now, maybe you're not meant to copy them straight in, but one error I got was 'no module named wx'. I learned that I could get that module by installing wxPython. Problem is, I've tried all 4 options and none of them have worked for me. Which one do I download and how do I set it up using Windows? Thanks
How to properly install wxPython?
0
0
1
0
0
91,741
32,285,041
2015-08-29T10:32:00.000
0
1
1
0
1
python,macos,module,installation
0
32,285,525
0
1
0
true
0
0
I think there are several different things to take into consideration here. When you import a module (doing import factorial), Python will look through the defined path and try to find the module you're trying to import. In the simplest case, if your module is in the same folder as the script trying to import it, it will be found. If it's somewhere else, you will have to specify the path. Now, site-packages is where Python keeps installed libraries. So, for example, when you do pip install x, Python puts the module x in your site-packages destination, and when you try to import it, Python looks for it there. To manage site-packages better, read about virtualenv. If you want your module to go there, you first need to create a package that you can install. For that, look at distutils or the various packaging alternatives that involve some type of build process based on a setup file. I don't want to go into detail on any of these points because all of them have been covered before; I just wanted to give you a general idea of where to look.
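The lookup described above can be inspected directly. A small sketch, using the stdlib json module as a stand-in for the custom factorial module (an assumption; any importable module behaves the same way):

```python
import sys
import json  # stand-in for the custom `factorial` module

# Python resolves imports by scanning sys.path in order: the script's
# own directory first, then site-packages and the standard library.
print(sys.path[0])

# After a successful import you can see exactly which file was loaded,
# which tells you whether your module in site-packages was picked up:
print(json.__file__)
```

If `import factorial` fails, checking whether your site-packages directory appears in `sys.path` is usually the first diagnostic step.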
1
1
0
0
I don't know how many duplicates of this are out there, but none of those I looked at solved my problem. To practice writing and installing custom modules I've written a simple factorial module. I have made a factorial folder in my site-packages folder containing factorial.py and an empty __init__.py file. But typing import factorial does not work. How can I solve this? I also tried pip install factorial but that didn't work either. Do I have to save the code intended to use factorial in the same folder inside site-packages, or can I save it wherever I want? Greetings, holofox. EDIT: I solved it. Everything was correct as I did it; I had some problems importing and using it properly in my code...
Install a custom Python 2.7.10 module on Mac
0
1.2
1
0
0
512
32,297,244
2015-08-30T13:51:00.000
0
0
0
0
1
sql-server,python-2.7,sqlalchemy,pyodbc,pymssql
0
32,305,711
0
1
0
false
0
0
SQLAlchemy bulk operations don't really insert in bulk; it's stated in the docs, and we've checked it with our DBA. Thank you, we'll try the XML.
1
2
0
0
My original purpose was to bulk insert into my DB. I tried pyodbc, sqlalchemy and ceodbc to do this with the executemany function, but my DBA checked and they execute each row individually. His solution was to run a procedure that receives a table (user-defined data type) as a parameter and loads it into the real table. The problem is that no library seems to support that (or at least there is no example on the internet). So my question is: has anyone tried bulk insert before, or does anyone know how to pass a user-defined data type to a stored procedure? EDIT: I also tried a bulk insert query, but the problem is that it requires a local path or share, and that won't happen because of organization limits.
Python mssql passing procedure user defined data type as parameter
0
0
1
1
0
826
32,305,131
2015-08-31T06:29:00.000
0
0
0
0
0
python,oracle,mongodb
0
32,305,243
0
1
0
false
0
0
To answer your question directly: 1. Connect to Oracle. 2. Fetch the delta data by timestamp or id (all records the first time). 3. Transform the data to JSON. 4. Write the JSON to Mongo with pymongo. 5. Save the maximum timestamp/id for the next iteration. Keep in mind that you should think about data model considerations: a relational DB (like Oracle) and a document DB (like Mongo) will usually have different data models.
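Steps 2–4 above can be sketched as follows. The cursor and pymongo calls in the comments are assumptions (they need live Oracle and Mongo connections); only the row-to-document transform is runnable on its own:

```python
def rows_to_docs(columns, rows):
    """Turn DB-API cursor rows (tuples) into JSON-ready dicts keyed by column name."""
    return [dict(zip(columns, row)) for row in rows]

# With real connections this would be, roughly (cx_Oracle and pymongo
# are assumptions, not named in the original answer):
#   cursor.execute("SELECT * FROM src WHERE id > :last_id", last_id=last_id)
#   cols = [c[0] for c in cursor.description]
#   docs = rows_to_docs(cols, cursor.fetchall())
#   mongo_collection.insert_many(docs)

columns = ["id", "name"]
rows = [(1, "a"), (2, "b")]
print(rows_to_docs(columns, rows))
```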
1
1
0
0
I know there are various ETL tools available to export data from oracle to MongoDB but i wish to use python as intermediate to perform this. Please can anyone guide me how to proceed with this? Requirement: Initially i want to add all the records from oracle to mongoDB and after that I want to insert only newly inserted records from Oracle into MongoDB. Appreciate any kind of help.
Export from Oracle to MongoDB using python
0
0
1
1
0
889
32,316,088
2015-08-31T16:47:00.000
7
0
0
1
0
python,mysql,google-cloud-datastore
0
33,367,328
0
3
0
true
1
0
There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything. You could always write a quick script using a local queue that parallelizes the work. The basic gist would be: Queuing script pulls data out of your MySQL instance and puts it on a queue. (Many) Workers pull from this queue, and try to write the item to Datastore. On failure, push the item back on the queue. Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads. Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading it into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k). If you have properties (which will be indexed by default), this cost goes up substantially. The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties). A single "new entity write" is equivalent to: + 2 (1 x 2 write ops fixed cost per new entity) + 10 (5 x 2 write ops per indexed property) = 12 "operations" per entity. So your actual cost to load this data is: 120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00
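The queue-plus-workers scheme described above can be sketched with the stdlib queue and threading modules. Here write_entity is a stand-in for the real Datastore write (an assumption, not an actual Datastore API call), and the input rows are a placeholder for data pulled from MySQL:

```python
import queue
import threading

results = []  # stand-in for "written to Datastore"

def write_entity(item):
    """Placeholder for the Datastore write; in real code this raises on failure."""
    results.append(item)

def worker(q):
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work
            q.task_done()
            return
        try:
            write_entity(item)
        except Exception:
            q.put(item)           # push back on failure, as the answer suggests
        q.task_done()

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
for t in threads:
    t.start()
for row in range(100):            # stand-in for rows pulled from MySQL
    q.put(row)
for _ in threads:
    q.put(None)
q.join()                          # blocks until every task_done() is called
```

Since the bottleneck is network I/O, many more than four threads would be reasonable against the real service; the count here just keeps the sketch small.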
1
6
0
0
We are migrating some data from our production database and would like to archive most of this data in Cloud Datastore. Eventually we would move all our data there, but initially we are focusing on the archived data as a test. Our language of choice is Python, and we have been able to transfer data from MySQL to the Datastore row by row. We have approximately 120 million rows to transfer, and a one-row-at-a-time method will take a very long time. Has anyone found documentation or examples of how to bulk insert data into Cloud Datastore using Python? Any comments or suggestions are appreciated. Thank you in advance.
Is it possible to Bulk Insert using Google Cloud Datastore
1
1.2
1
1
0
3,726
32,325,390
2015-09-01T06:58:00.000
1
0
1
0
0
python,mysql,dictionary,scrapy
0
32,334,348
0
2
0
false
0
0
If you want to store dynamic data in a database, here are a few options. It really depends on what you need out of this. First, you could go with a NoSQL solution, like MongoDB. NoSQL allows you to store unstructured data in a database without an explicit data schema. It's a pretty big topic, with far better guides/information than I could provide you. NoSQL may not be suited to the rest of your project, though. Second, if possible, you could switch to PostgreSQL and use its HSTORE column (unavailable in MySQL). The HSTORE column is designed to store a bunch of key/value pairs, and this column type supports BTREE, GIST, GIN, and HASH indexing. You'll need to make sure you're familiar with PostgreSQL and how it differs from MySQL; some of your other SQL may no longer work as you'd expect. Third, you can serialize the data, then store the serialized entity. Both json and pickle come to mind. The viability and reliability of this will of course depend on how complicated your dictionaries are. Serializing data, especially with pickle, can be dangerous, so make sure you're familiar with how it works from a security perspective. Fourth, use an "Entity-Attribute-Value" table. This mimics a dictionary's key/value pairing: you essentially create a new table with three columns, "Related_Object_ID", "Attribute", and "Value". You lose a lot of the object metadata you'd normally get in a table, and SQL queries can become much more complicated. Any of these options can be a double-edged sword. Make sure you've read up on the downsides of whichever option you want to go with, or, in looking into the options more, perhaps you'll find something that better suits you and your project.
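The third option (serializing) is the simplest to demonstrate. A minimal sketch with the stdlib json module, assuming the dictionary's values are themselves JSON-serializable (arbitrary Python objects would need pickle or a custom encoder instead):

```python
import json

# Example index mapping a title to a JSON-serializable record
# (the field names here are illustrative, not from the question).
index = {"How to store a dict": {"url": "http://example.com/q", "score": 3}}

# Serialize into a plain string, suitable for a TEXT/VARCHAR column;
# store `blob` with your usual INSERT statement.
blob = json.dumps(index)

# Later, read the column back and restore the dictionary:
restored = json.loads(blob)
assert restored == index
```

Note that JSON round-trips dicts, lists, strings, and numbers, but not live Python objects; that limitation is exactly why the answer mentions pickle and its security caveats as the alternative.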
1
1
0
0
I am doing a mini-project on Web-Crawler+Search-Engine. I already know how to scrape data using Scrapy framework. Now I want to do indexing. For that I figured out Python dictionary is the best option for me. I want mapping to be like name/title of an object (a string) -> the object itself (a Python object). Now the problem is that I don't know how to store dynamic dict in MySQL database and I definitely want to store this dict as it is! Some commands on how to go about doing that would be very much appreciated!
How to store a dynamic python dictionary in MySQL database?
0
0.099668
1
1
0
339
32,343,072
2015-09-02T00:53:00.000
1
0
0
1
0
python-2.7
1
40,198,608
0
3
0
false
0
0
Go to the command prompt. Type cd .. and hit Enter; it will say C:\Users>. Type cd .. again and it will say C:\>. Type cd python27 (or the name of your Python folder) so it says C:\Python27>, then type cd scripts so it says C:\Python27\Scripts>. Now type easy_install twilio, wait for the process to run, and you will have twilio installed for Python.
2
2
0
0
I'm new here and also a new Python learner. I was trying to install twilio package through the Windows command line interface, but I got a syntax error(please see below). I know there're related posts, however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that...(I can see the easy_install and pip files in the Scripts folder under Python) Can anyone please help? Thanks in advance! Microsoft Windows [Version 6.3.9600] (c) 2013 Microsoft Corporation. All rights reserved. C:\WINDOWS\system32>python Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on wi n32 Type "help", "copyright", "credits" or "license" for more information. easy_install twilio File "", line 1 easy_install twilio ^ SyntaxError: invalid syntax
getting "SyntaxError" when installing twilio in the Windows command line interface
0
0.066568
1
0
0
3,665
32,343,072
2015-09-02T00:53:00.000
3
0
0
1
0
python-2.7
1
40,895,682
0
3
0
false
0
0
You should not type python first; that starts the Python interactive shell. Open a new command prompt and directly type: easy_install twilio
2
2
0
0
I'm new here and also a new Python learner. I was trying to install twilio package through the Windows command line interface, but I got a syntax error(please see below). I know there're related posts, however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that...(I can see the easy_install and pip files in the Scripts folder under Python) Can anyone please help? Thanks in advance! Microsoft Windows [Version 6.3.9600] (c) 2013 Microsoft Corporation. All rights reserved. C:\WINDOWS\system32>python Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on wi n32 Type "help", "copyright", "credits" or "license" for more information. easy_install twilio File "", line 1 easy_install twilio ^ SyntaxError: invalid syntax
getting "SyntaxError" when installing twilio in the Windows command line interface
0
0.197375
1
0
0
3,665
32,343,180
2015-09-02T01:10:00.000
0
0
0
0
0
python,django,django-rest-framework,django-rest-auth
0
34,567,070
0
1
0
false
1
0
djoser supports only Basic Auth and the token auth from Django REST Framework. What you can do is use login and logout from Django OAuth Toolkit, and then use djoser views for actions such as registration and password reset.
1
0
0
0
When developing a Django project, many third party authentication packages are available, for example: Django OAuth Toolkit, OAuth 2.0 support. Djoser, provides a set of views to handle basic actions such as registration, login, logout, password reset and account activation. Currently, I just want to support basic actions registration, login and so on. So Djoser could be my best choice. But if I want to support OAuth 2.0 later, I will have two tokens, one is from Djoser, and another is from Django OAuth Toolkit. I just got confused here, how to handler two tokens at the same time? Or should I just replace Djoser with Django OAuth Toolkit, if so, how to support basic actions such as registration?
Multiple authentication app support
0
0
1
0
0
308
32,353,887
2015-09-02T12:59:00.000
2
0
1
0
0
python,python-2.7,python-import
0
32,359,039
0
1
0
false
0
0
When you import a module, any/all module objects/functions/etc are cached so that importing the same module again is a no-op. Subsequently these objects/functions/etc will not be freed when the local names referring to them go out of scope. This only affects functions and objects defined globally within the module and it's very likely that there won't be a lot of those, so it's probably not something to worry about. To specifically answer your question, there is effectively no difference in terms of performance unless the import is inside a function or branch which is never executed. In that rare case, having it inside the branch or function is slightly faster/less resource intensive, but it won't gain you much.
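The caching behaviour described above is easy to observe: after the first import inside a function returns, the module object stays in sys.modules, so re-running the import is just a dictionary lookup. A small stdlib-only demonstration:

```python
import sys

def uses_math():
    import math          # first call pays the import cost; later calls hit the cache
    return math.sqrt(4)

uses_math()

# The module persists in the interpreter-wide cache even though the
# local name `math` went out of scope when the function returned:
assert "math" in sys.modules
```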
1
0
0
0
When memory issues are critical, do I save some memory if I do my Python imports inside a function, so that when the call finishes everything is discarded from memory? Or does this burden my memory and CPU more, especially when I make many calls to that specific function? (The user makes the calls, and I do not know in advance how many she will make.) What are the memory implications of this difference?
Python import statement inside and outside a function. What is better for the memory?
0
0.379949
1
0
0
618
32,365,141
2015-09-03T00:15:00.000
1
0
1
0
0
syntax,markdown,ipython-notebook
0
40,813,583
0
3
0
false
0
0
Late answer but backtick ` is the way to go, just like here at SO.
1
3
0
0
In Markdown, how do you highlight code <--- just like that. I don't know how to search for it because I'm not sure what it's called; I also don't know what the little dashes are called. I tried doing what I would do on SO, but it just reads it as normal text. Update: This is what I have: foo. On SO it actually shows the highlighting bar, whereas in IPython Notebook it doesn't; it only changes the font style.
Markdown in iPython Notebook: how to highlight code?
0
0.066568
1
0
0
3,161
32,381,227
2015-09-03T16:42:00.000
1
0
0
0
0
python,cassandra,cassandra-2.0
0
32,381,556
0
1
0
true
0
0
When you do an insert or update in Cassandra, the new value overrides the old value even if it is the same value. This is because Cassandra does not do a read of the existing data before storing the new data. Every write is just an append operation and Cassandra has no idea if the new value is the same as the old value. When you do a read, Cassandra will find the most recent write and that is the current value. So if your insert or update sets a TTL, then that TTL will override any previous TTL for the columns you inserted/updated. So if you are writing data with a TTL of 10 seconds, then you need to write the same data again before the 10 seconds is up if you want it to stick around for another 10 seconds.
1
1
0
1
When running model.update(args, kwargs) in python, if the data is not different it doesn't actually make any changes, correct? If so, does it update the TTL? If not, how can we make it so it will reset the TTL? Use case: We have a model that stores out loop information for twisted and we have a TTL of 10 seconds on it. The program is set to automatically check that configuration every 2 seconds and if it does not then we want that data to be removed out of the active loops. Here is where it gets tricky, the data rarely changes once it has been set for a particular loop. I can post the model and other information if it would helpful.
Updating only TTL in cassandra
0
1.2
1
0
0
473
32,390,195
2015-09-04T04:40:00.000
4
0
0
0
0
python,django
0
32,390,343
0
1
0
false
1
0
Instead of multiple views.py files, you can divide your application into individual apps within your project, for example separate apps for user registration, contact, and article control. This way your code will look much cleaner, and in case of a bug you will be able to debug the specific application easily.
1
1
0
0
I am new to the Django framework and just started learning Django 1.8. In other frameworks like Laravel and Rails we can make different controller files, for example UserController.php, ContactController.php, etc. I think that in Django, views.py is similar to a controller. In my views.py I have a long stretch of code, more than 300 lines, and I want to make my code clean by making separate views files for user registration, contact, article control, etc. My question is: how can I achieve this, i.e. make many views.py files that act like controllers?
Can we make many views.py in Django as a Controller?
0
0.664037
1
0
0
602
32,390,291
2015-09-04T04:49:00.000
22
0
1
0
0
python,pip
0
53,797,785
0
10
0
false
1
0
I have tried both pipreqs and pigar and found pigar better, because it also generates information about where each package is used, and it has more options.
3
67
0
0
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
Pip freeze for only project requirements
0
1
1
0
0
48,165
32,390,291
2015-09-04T04:49:00.000
0
0
1
0
0
python,pip
0
70,058,507
0
10
0
false
1
0
I just had the same issue; here's what I've found to solve the problem. First create the venv in the directory of your project, then activate it. For Linux/macOS: python3 -m venv ./venv then source venv/bin/activate. For Windows: python3 -m venv .\venv then venv\Scripts\activate.bat. Now pip freeze > requirements.txt should only capture the libraries used in the project. NB: if you have already begun your project, you will have to reinstall the libraries inside the venv so that pip freeze sees them.
3
67
0
0
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
Pip freeze for only project requirements
0
0
1
0
0
48,165
32,390,291
2015-09-04T04:49:00.000
1
0
1
0
0
python,pip
0
65,285,557
0
10
0
false
1
0
If you are using Linux, you can strip the version pins with sed: pip freeze | sed 's/==.*$//' > requirements.txt
3
67
0
0
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
Pip freeze for only project requirements
0
0.019997
1
0
0
48,165
32,397,089
2015-09-04T11:33:00.000
0
0
0
0
0
python,sockets,networking,connection
1
32,403,092
0
2
0
false
0
0
The issue is not related to the programming language, in this case Python. The operating system (Windows or Linux) has the final word regarding the resilience of the socket.
2
0
0
0
I'm trying to make a socket connection that will stay alive in the event of connection loss. Basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems? Anyway, my connection code m.connect(("localhost", 5000)) is inside an if, a try, and a while, e.g.: while True: if tryconnection: #Error handling try: m.connect(("localhost", 5000)) init = True tryconnection = False except socket.error: init = False tryconnection = True And at the end of my code I just call m.send("example") when I press a button; if that returns an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic setup with a while loop around x.accept(). So how do I keep them both alive when the connection closes, so they can reconnect when it opens again? Or is my code all right, and is simulating on the same computer just messing with it?
Keeping python sockets alive in event of connection loss
0
0
1
0
1
1,309
32,397,089
2015-09-04T11:33:00.000
1
0
0
0
0
python,sockets,networking,connection
1
32,399,470
0
2
0
true
0
0
I'm assuming we're dealing with TCP here, since you use the word "connection". It all depends on what you mean by "connection loss". If you mean that the data exchange between the server and the client may be suspended/unresponsive (important: I did not say "closed" here) for a long time, seconds or minutes, then there's not much you can do about it, and that's fine, because the TCP protocol was carefully designed to handle such situations gracefully. The timeout before one side decides the other is definitely down, gives up, and closes the connection is very long (minutes). An example of such a situation: the client is your smartphone, connected to some server on the web, and you enter a long tunnel. But when you say "But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer localhost and just clicking the X button", what you are doing is actually closing the connections. If you abruptly terminate the server, your operating system's TCP/IP implementation will know that there is no longer a process listening on port 5000 and will cleanly close all connections to that port. In doing so, a few TCP segments will be exchanged with the client side (a TCP four-way teardown or a reset), and all clients will be disconnected. It is important to understand that this is done at the level of the TCP/IP implementation, that is to say, your operating system. If you abruptly terminate a client, accordingly, your operating system's TCP/IP implementation will cleanly close the connection from its port Y to your server's port 5000. In both cases, at the network level, this is the same as if you had explicitly (not abruptly) closed the connection in your code. And once closed, there is no way to re-establish those connections as they were before; you have to establish new connections.
If you want to establish these new connections and get the application logic back to the state it was in before, TCP alone can't help you. You need a higher-level protocol, maybe your own, to implement a stateful client/server application.
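On the client side, "establish new connections" usually means a retry loop. A minimal sketch, assuming a TCP client; the helper names and the exponential backoff schedule are illustrative, not from the original answer:

```python
import socket
import time

def backoff_delays(base=0.5, cap=30.0, attempts=8):
    """Exponential backoff schedule: 0.5, 1, 2, 4, ... seconds, capped at `cap`."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def connect_with_retry(host, port):
    """Try to (re)connect, sleeping between attempts instead of spinning."""
    for delay in backoff_delays():
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            time.sleep(delay)
    raise ConnectionError("gave up reconnecting to %s:%d" % (host, port))

# sock = connect_with_retry("localhost", 5000)  # needs a running server
```

The backoff keeps a dead server from being hammered in a tight loop, which is the usual failure mode of a bare while True around connect().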
2
0
0
0
I'm trying to make a socket connection that will stay alive in the event of connection loss. Basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems? Anyway, my connection code m.connect(("localhost", 5000)) is inside an if, a try, and a while, e.g.: while True: if tryconnection: #Error handling try: m.connect(("localhost", 5000)) init = True tryconnection = False except socket.error: init = False tryconnection = True And at the end of my code I just call m.send("example") when I press a button; if that returns an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic setup with a while loop around x.accept(). So how do I keep them both alive when the connection closes, so they can reconnect when it opens again? Or is my code all right, and is simulating on the same computer just messing with it?
Keeping python sockets alive in event of connection loss
0
1.2
1
0
1
1,309
32,406,765
2015-09-04T21:21:00.000
0
1
0
1
0
python,eclipse,version-control,synchronization,pydev
0
32,408,606
0
3
0
true
1
0
I use Mercurial. I picked it because it seemed easier, but it is only easier, not easy. There is a Mercurial Eclipse plugin. Save a copy of your workspace, and maybe your Eclipse folder too, before trying it :)
2
0
0
1
I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace. However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places. However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively? Thanks
Using SCM to synchronize PyDev Eclipse projects between different computers
0
1.2
1
0
0
86
32,406,765
2015-09-04T21:21:00.000
0
1
0
1
0
python,eclipse,version-control,synchronization,pydev
0
32,466,408
0
3
0
false
1
0
I use Bitbucket coupled with Mercurial: my repository is on Bitbucket, and I pull from and push to it with Mercurial from within Eclipse. For my backup I have an independent Carbonite process that backs up all hard disk files, but I imagine there is a clever, free, programmatic way to do the same, if one knew how to write the appropriate scripts. Glad the first suggestion was helpful; you are wise to bite the bullet and get this in place now. ;)
2
0
0
1
I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace. However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places. However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively? Thanks
Using SCM to synchronize PyDev Eclipse projects between different computers
0
0
1
0
0
86
32,420,853
2015-09-06T06:50:00.000
7
0
0
1
0
python,macos,opencv,homebrew,opencv3.0
0
37,454,424
0
2
0
false
0
0
It's weird that there is no concise instruction for installing OpenCV 3 with Python 3, so here it is, step by step: 1. Install Homebrew Python 3.5: brew install python3. 2. Tap homebrew/science: brew tap homebrew/science. 3. Install any Python 3 package using pip3; this creates the site-packages folder for Python 3. For example: pip3 install numpy. 4. Install OpenCV 3: brew install opencv3 --with-python3. 5. Now you can find the site-packages folder created in step 3. Just run the following command to link OpenCV 3 to Python 3: echo /usr/local/opt/opencv3/lib/python3.5/site-packages >> /usr/local/lib/python3.5/site-packages/opencv3.pth. You may have to adjust the command correspondingly for your installed Homebrew Python version (e.g. 3.4).
2
4
0
0
When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7: If you need Python to find bindings for this keg-only formula, run: echo /usr/local/opt/opencv3/lib/python2.7/site-packages >> /usr/local/lib/python2.7/site-packages/opencv3.pth While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3?
Homebrew installation of OpenCV 3.0 not linking to Python
0
1
1
0
0
7,461
32,420,853
2015-09-06T06:50:00.000
4
0
0
1
0
python,macos,opencv,homebrew,opencv3.0
0
32,510,430
0
2
0
false
0
0
You need to install opencv like brew install opencv3 --with-python3. You can see a list of options for a package by running brew info opencv3.
2
4
0
0
When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7: If you need Python to find bindings for this keg-only formula, run: echo /usr/local/opt/opencv3/lib/python2.7/site-packages >> /usr/local/lib/python2.7/site-packages/opencv3.pth While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3?
Homebrew installation of OpenCV 3.0 not linking to Python
0
0.379949
1
0
0
7,461
32,423,519
2015-09-06T12:23:00.000
1
1
0
0
0
python-3.x,gunicorn,aiohttp
0
32,440,342
0
1
0
false
1
0
At least for now, aiohttp is a library without built-in support for reading configuration from an .ini or .yaml file. But you can easily write code yourself that reads a config file and sets up the aiohttp server from it.
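A minimal sketch of what that hand-rolled config switch could look like, using only the stdlib configparser. The section names and keys are assumptions, and the aiohttp calls are left as comments rather than run:

```python
# Minimal sketch: pick development vs production settings from an .ini file
# and hand them to whatever sets up the aiohttp app. Section and key names
# here are hypothetical.
import configparser

INI = """
[development]
host = 127.0.0.1
port = 8080
debug = true

[production]
host = 0.0.0.0
port = 80
debug = false
"""

def load_settings(env):
    parser = configparser.ConfigParser()
    parser.read_string(INI)   # in a real app: parser.read('development.ini')
    section = parser[env]
    return {
        "host": section.get("host"),
        "port": section.getint("port"),
        "debug": section.getboolean("debug"),
    }

settings = load_settings("development")
print(settings["port"])   # 8080
# With aiohttp you would then do something like:
#   app = web.Application(debug=settings["debug"])
#   web.run_app(app, host=settings["host"], port=settings["port"])
```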
1
1
0
0
I implemented my first aiohttp based RESTlike service, which works quite fine as a toy example. Now I want to run it using gunicorn. All examples I found, specify some prepared application in some module, which is then hosted by gunicorn. This requires me to setup the application at import time, which I don't like. I would like to specify some config file (development.ini, production.ini) as I'm used from Pyramid and setup the application based on that ini file. This is common to more or less all python web frameworks, but I don't get how to do it with aiohttp + gunicorn. What is the smartest way to switch between development and production settings using those tools?
Configuring an aiohttp app hosted by gunicorn
0
0.197375
1
0
0
273
32,428,052
2015-09-06T20:35:00.000
1
0
0
0
0
rethinkdb,rethinkdb-python,rethinkdb-javascript
0
32,465,192
0
1
0
false
0
0
As mentioned elsewhere, you can avoid the port conflict by passing in the -o 1 argument. This shifts the ports that the proxy uses by an offset of 1.
1
1
0
0
We have two client machines; how do we connect both of them using the proxy server? As you said earlier: "To start a RethinkDB proxy on the client: rethinkdb proxy -j -j ..." Only one of the clients can connect in this way, since the ports will already be in use.
How to setup rethinkdb proxy server
0
0.197375
1
0
0
282
32,438,646
2015-09-07T12:20:00.000
0
1
0
0
0
python,email,parsing
1
32,442,153
0
1
0
false
0
0
So you have 1500 .eml files and want to identify mails from mailer-daemons and which address caused the mailer-daemon message? Just iterate over the files, check the From: line to see if it is a mailer-daemon message, and then extract the address that caused the error from the text. There is no other way than iterating over them line by line.
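A sketch of that loop's core, using the stdlib email module. Bounce formats differ between MTAs, so the From-header check and the address regex below are only heuristics:

```python
# Rough sketch: detect a mailer-daemon bounce and pull the failing address
# out of the body. Real bounce formats vary by MTA, so the regex here is
# only a heuristic.
import email
import re

RAW = """\
From: MAILER-DAEMON@example.org
Subject: Undelivered Mail Returned to Sender

<bob@nowhere.invalid>: host nowhere.invalid said: 550 User unknown
"""

def failed_addresses(raw_message):
    msg = email.message_from_string(raw_message)
    sender = msg.get("From", "")
    if "mailer-daemon" not in sender.lower():
        return []          # not a bounce, skip it
    body = msg.get_payload()
    return re.findall(r"<([^>]+@[^>]+)>", body)

print(failed_addresses(RAW))   # ['bob@nowhere.invalid']
```

Against a directory of 1500 .eml files you would open each file, read its text, and feed it through the same function.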
1
0
0
0
I have something like 1500 mail messages in eml format and I want to parse them and get the e-mail addresses that caused errors and the error message (or code). I would like to try to do it in Python. Does someone have any idea how to do that, other than parsing line by line and searching for the line and error code (or know software to do that)? I see nothing about errors in the mail headers, which is sad.
Parse mailer daemons, failure notices
0
0
1
0
1
165
32,444,840
2015-09-07T19:16:00.000
2
0
1
0
0
ipython,jupyter
0
65,136,451
0
4
0
false
0
0
You can switch the cell from 'Code' to 'Raw NBConvert'.
4
34
0
0
Is it possible to comment out whole cells in jupyter? I need it for this case: I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them) Thanks
jupyter - how to comment out cells?
1
0.099668
1
0
0
42,040
32,444,840
2015-09-07T19:16:00.000
39
0
1
0
0
ipython,jupyter
0
52,076,374
0
4
0
false
0
0
Mark the content of the cell and press Ctrl+/. It will comment out all lines in that cell. Repeat the same steps to uncomment the lines of your cell.
4
34
0
0
Is it possible to comment out whole cells in jupyter? I need it for this case: I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them) Thanks
jupyter - how to comment out cells?
1
1
1
0
0
42,040
32,444,840
2015-09-07T19:16:00.000
16
0
1
0
0
ipython,jupyter
0
38,432,841
0
4
0
false
0
0
If you switch the cell to 'raw NBConvert' the code retains its formatting, while all text remains in a single font (important if you have any commented sections), so it remains readable. 'Markdown' will interpret the commented sections as headers and change the size and colour accordingly, making the cell rather messy. On a side note I use this to interrupt the process if I want to stop it - it seems much more effective than 'Kernel --> Interrupt'.
4
34
0
0
Is it possible to comment out whole cells in jupyter? I need it for this case: I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them) Thanks
jupyter - how to comment out cells?
1
1
1
0
0
42,040
32,444,840
2015-09-07T19:16:00.000
23
0
1
0
0
ipython,jupyter
0
32,458,871
0
4
0
false
0
0
I think the easiest thing will be to change the cell type to 'Markdown' with M when you don't want to run it and change back to 'Code' with Y when you do. In a short test I did, I did not lose my formatting when switching back and forth. I don't think you can select multiple cells at once.
4
34
0
0
Is it possible to comment out whole cells in jupyter? I need it for this case: I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them) Thanks
jupyter - how to comment out cells?
1
1
1
0
0
42,040
32,458,158
2015-09-08T12:42:00.000
1
0
1
0
0
autocomplete,ipython,pycharm
0
51,340,397
0
2
0
false
0
0
Ctrl+Space conflicts with the input-language switching shortcut on Windows. You need to change the setting in the keymap: File -> Settings -> Keymap -> Main menu -> Code -> Completion -> Basic.
1
5
0
0
If I start ipython in a terminal, when I type 'im' and press TAB, the terminal will auto-complete it to 'import'. But when I click the Python console button at the bottom of the PyCharm IDE and the ipython environment shows, typing 'im' and pressing TAB gives no autocompletion. PyCharm uses pydevconsole.py to create the ipython environment, but I do not know how to change it to enable autocompletion.
pycharm python console autocompletion
0
0.099668
1
0
0
5,218
32,488,808
2015-09-09T20:41:00.000
1
0
0
0
0
python,statistics,scipy
0
32,510,602
0
3
0
false
0
0
You can also use the histogram, piecewise uniform distribution directly, then you get exactly the corresponding random numbers instead of an approximation. The inverse cdf, ppf, is piecewise linear and linear interpolation can be used to transform uniform random numbers appropriately.
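A numpy-only sketch of that idea: build the cumulative distribution from the 30 buckets and push uniform randoms through its linearly interpolated inverse. The flat histogram below is made-up stand-in data:

```python
# Sketch of the piecewise-linear inverse CDF idea: given 30 (minutes,
# fraction) points, build the cumulative distribution and map uniform
# randoms through its linearly interpolated inverse.
import numpy as np

rng = np.random.default_rng(0)

# Toy histogram: bucket edges in minutes and the fraction of trials per bucket.
minutes = np.arange(1, 31)              # 1..30
probs = np.ones(30) / 30.0              # hypothetical flat histogram

cdf = np.concatenate(([0.0], np.cumsum(probs)))
edges = np.concatenate(([0.0], minutes.astype(float)))

u = rng.uniform(0.0, 1.0, size=10000)
samples = np.interp(u, cdf, edges)      # inverse CDF via linear interpolation
```

The resulting samples land in [0, 30] minutes and follow the bucket probabilities exactly, with linear interpolation inside each bucket.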
1
3
1
0
Suppose I have a process where I push a button, and after a certain amount of time (from 1 to 30 minutes), an event occurs. I then run a very large number of trials, and record how long it takes the event to occur for each trial. This raw data is then reduced to a set of 30 data points where the x value is the number of minutes it took for the event to occur, and the y value is the percentage of trials which fell into that bucket. I do not have access to the original data. How can I use this set of 30 points to identify an appropriate probability distribution which I can then use to generate representative random samples? I feel like scipy.stats has all the tools I need built in, but for the life of me I can't figure out how to go about it. Any tips?
Use Histogram data to generate random samples in scipy
0
0.066568
1
0
0
2,122
32,490,561
2015-09-09T23:12:00.000
4
0
1
0
1
python,excel,date,csv,import-from-csv
0
32,490,655
0
2
0
true
0
0
Option 3. Import it properly Use DATA, Get External Data, From Text and when the wizard prompts you choose the appropriate DMY combination (Step 3 of 3, Under Column data format, and Date).
1
0
1
0
I have a CSV file where the date is formatted as yy/mm/dd, but Excel is reading it wrongly as dd/mm/yyyy (e.g. 8th September 2015 is read as 15th of September 2008). I know how to change the format that Excel outputs, but how can I change the format it uses to interpret the CSV data? I'd like to keep it to Excel if possible, but I could work with a Python program.
Change date format when importing from CSV
1
1.2
1
0
0
1,588
32,511,575
2015-09-10T21:05:00.000
4
0
0
0
0
python,django,mkdocs
0
32,511,724
0
3
0
false
1
0
Django is just a framework; you need to host your static files and serve them with something like Nginx or Apache.
1
2
0
0
I tend to write my API documentation in Markdown and generate a static site with MkDocs. However, the site I'd like to host the documentation on is a Django site. So my question is, and I can't seem to find an answer Googling around, is how would I go about hosting the MkDocs static generated site files at a location like /api/v1/docs and have the static site viewable at that URL? UPDATE: I should point out I do serve all static files under Apache and do NOT use runserver or Debug mode to serve static files, that's just crazy. The site is completely built and working along with a REST API. My question was simply how do I get Django (or should I say Apache) to serve the static site (for my API docs) generated by MkDocs under a certain URL. I understand how to serve media and static files for the site, but not necessarily how to serve what MkDocs generates. It generates a completely static 'site' directory with HTML and assets files in it.
Hosting API docs generated with mkdocs at a URL within a Django project
0
0.26052
1
0
0
1,356
32,530,506
2015-09-11T19:06:00.000
4
0
1
1
0
python,macos,pip,homebrew
0
32,530,618
0
3
0
false
0
0
Homebrew is a package manager, similar to apt on ubuntu or yum on some other linux distros. Pip is also a package manager, but is specific to python packages. Homebrew can be used to install a variety of things such as databases like MySQL and mongodb or webservers like apache or nginx.
1
48
0
0
I want to install pillow on my Mac. I have python 2.7 and python 3.4, both installed with Homebrew. I tried brew install pillow and it worked fine, but only for python 2.7. I haven't been able to find a way to install it for python 3. I tried brew install pillow3 but no luck. I've found a post on SO that says to first install pip3 with Homebrew and then use pip3 install pillow. As it happens, I have already installed pip3. I've never understood the difference, if any, between installing a python package with pip and installing it with Homebrew. Can you explain it to me? Also, is it preferable to install with Homebrew if a formula is available? If installing with Homebrew is indeed preferable, do you know how to install pillow for python 3 with Homebrew? The first answers indicate that I haven't made myself plain. If I had installed pillow with pip install pillow instead of brew install pillow would the installation on my system be any different? Why would Homebrew make a formula that does something that pip already does? Would it check for additional prerequisites or something? Why is there a formula for pillow with python2, but not as far as I can tell for pillow with python3?
Is there a difference between "brew install" and "pip install"?
0
0.26052
1
0
0
38,886
32,531,468
2015-09-11T20:15:00.000
0
0
1
0
1
python,regex
0
32,531,657
0
2
1
false
0
0
I'd recommend cleaning the input up a bit first. Get rid of all the whitespace, or at least the spaces. Check for an equals sign and see if there's a 0 on one side or the other. If there is, you can remove it. If not, you have to decide how clever you want to be. Get it close to a format you want to deal with, in other words. Then you can check if they've entered a valid expression. You also need to decide if you can handle shuffling the order of the terms around, and letters other than x. Also whether you want the ^ or ** or just x2. You probably want to grab each term individually (all the terms between the + or -) and decide what kind of term it is. In other words, there's a lot to do before the re expression. Incidentally, have you seen SymPy?
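To make the cleanup-then-match suggestion concrete, here is one possible sketch. It deliberately handles only the rigid form ax^2+bx+c=0 with positive integer coefficients and '+' signs; the pattern is an illustration, not a finished parser:

```python
# One possible approach after cleanup: strip spaces, then match the fixed
# form ax^2+bx+c=0. This assumes integer coefficients and '+' signs only;
# a real parser would also need '-', missing terms, and '**'.
import re

PATTERN = re.compile(r"^(\d+)x\^2\+(\d+)x\+(\d+)=0$")

def coefficients(equation):
    cleaned = equation.replace(" ", "")
    match = PATTERN.match(cleaned)
    if match is None:
        return None
    # Groups come back as strings; turn them into usable integers.
    return tuple(int(g) for g in match.groups())

print(coefficients("2x^2 + 3x + 1 = 0"))   # (2, 3, 1)
```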
1
0
0
0
I am currently trying to do a quadratic equation solver. I searched on the web how people did their version, but all of them implied that the user entered the coefficients, which, although the easiest way, I really hate it. I want the user to introduce the whole equation, and let the program know which are the coefficients, and calculate the solution. I discovered the concept of regex, and automatically, the re module. I understand how to implement the quadratic formula to solve the problem, but the problem is that I don't know which function should I use, and how to get the coefficients from the input. I want the regex to be like: \d(\sx\s\(\^)\s2/x\^2)(\s\+\s)\dbx(\s\+\s)\d = 0 To find the coefficients in this: ax^2 + bx + c = 0 I am aware that the regex sucks, because I only started to understand it yesterday, so you can also tell me how to improve that. EDIT: Let's clarify what I exactly want. How to improve the regex that I tried doing above? What Python function should I use so that I can only have the coefficients? How can I take the groups and turn them into usable integers, assuming that it doesn't store those groups?
Python: Regular expression for quadratic equations
0
0
1
0
0
2,333
32,553,806
2015-09-13T19:39:00.000
2
0
0
0
0
python,machine-learning,tf-idf,text-classification
0
32,555,599
0
2
0
true
0
0
What I think you have is an unsupervised learning application. Clustering. Using the combined X & Y dataset, generate clusters. Then overlay the X boundary; the boundary that contains all X samples. All items from Y in the X boundary can be considered X. And the X-ness of a given sample from Y is the distance from the X cluster centroid. Something like that.
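A tiny numpy-only sketch of the centroid-distance scoring; the 2-d points are made-up stand-ins for TF-IDF rows:

```python
# Numpy-only sketch of the centroid idea: score each Y document by its
# distance to the centroid of X. In practice the rows would be TF-IDF
# vectors; here they are tiny made-up 2-d points.
import numpy as np

X = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]])   # known class-A docs
Y = np.array([[1.0, 1.0], [5.0, 5.0]])                # unknown docs

centroid = X.mean(axis=0)
distances = np.linalg.norm(Y - centroid, axis=1)

# Smaller distance = more "A-like"; the first Y point sits on the centroid.
print(distances[0] < distances[1])   # True
```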
2
1
1
1
I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A. In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X. This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class. A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class? FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple.
If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A?
0
1.2
1
0
0
264
32,553,806
2015-09-13T19:39:00.000
1
0
0
0
0
python,machine-learning,tf-idf,text-classification
0
32,604,626
0
2
0
false
0
0
The easiest thing to do is what was already proposed - clustering. More specifically, you extract a single feature vector from set X and then apply K-means clustering to the whole X & Y set. ps: Be careful not to confuse k-means with kNN (k-nearest neighbors). You are able to apply only unsupervised learning methods.
2
1
1
1
I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A. In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X. This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class. A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class? FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple.
If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A?
0
0.099668
1
0
0
264
32,572,114
2015-09-14T19:07:00.000
0
0
0
0
0
java,c#,python,c++,swig
0
32,615,679
0
1
0
false
0
1
As you are using C++, you can try std::wstring, as it has typemaps for all of C#, Python and Java. It is in std_wstring.i.
1
2
0
0
I use %include wchar.i in C# and it seems to work correctly for all wchar_t values and arrays mapping to C#'s string. Swig's library for Python also contains the typemaps for wchar_t in wchar.i file. Java's library doesn't have wchar.i. What's the reason for that? And also how I can achieve type mapping from wchar_t types in C++ to String in Java?
SWIG: wchar_t support for Java
0
0
1
0
0
426
32,573,995
2015-09-14T21:09:00.000
1
0
0
1
0
python,apache-spark,ipython,ibm-cloud,jupyter
0
32,574,697
0
1
0
true
0
0
You cannot add 3rd party libraries at this point in the beta. This will most certainly be coming later in the beta as it's a popular requirement ;-)
1
2
1
0
I'm currently playing around with the Apache Spark Service in IBM Bluemix. There is a quick start composite application (Boilerplate) consisting of the Spark Service itself, an OpenStack Swift service for the data and an IPython/Jupyter Notebook. I want to add some 3rd party libraries to the system and I'm wondering how this could be achieved. Using an python import statement doesn't really help since the libraries are then expected to be located on the SparkWorker nodes. Is there a ways of loading python libraries in Spark from an external source during job runtime (e.g. a Swift or ftp source)? thanks a lot!
How can I reference libraries for ApacheSpark using IPython Notebook only?
0
1.2
1
0
0
157
32,578,106
2015-09-15T05:06:00.000
24
0
0
1
0
macos,python-2.7
0
32,578,175
0
1
0
false
0
0
If you install Python using brew, the relevant headers are already installed for you. In other words, you don't need python-devel.
1
27
0
0
brew and port does not provide python-devel. How can I install it in Mac OS. Is there an equivalent in Mac OS?
how to install python-devel in Mac OS?
0
1
1
0
0
59,901
32,590,327
2015-09-15T15:44:00.000
0
0
0
0
0
python,url,networking,request
0
32,590,563
0
2
0
false
0
0
It is not difficult to parse the webpage and find the links to all "attached" files (css, icons, js, images, etc.) that will be fetched by the browser; those are what you see in the 'Network' panel. The harder part is that some files are fetched by JavaScript using ajax. The only way to handle that (completely and correctly) is to simulate a browser (parse html+css and run the javascripts), which I don't think Python can do on its own.
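For the static part, a sketch with the stdlib HTMLParser that collects the href/src attributes a page references; anything fetched later by JavaScript will not appear this way:

```python
# Sketch with the stdlib parser: collect the href/src attributes that a
# static page references (css, js, images). Files fetched later by
# JavaScript/ajax will not show up.
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.assets.append(value)

HTML = ('<html><head><link href="style.css">'
        '<script src="app.js"></script></head>'
        '<body><img src="logo.png"></body></html>')

collector = AssetCollector()
collector.feed(HTML)
print(collector.assets)   # ['style.css', 'app.js', 'logo.png']
```

In practice you would fetch the page body first (e.g. with urllib) and feed that to the collector.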
1
0
0
0
So I have a question; How does one get the files from a webpage and the urls attached to them. For example, Google.com so we go to google.com and open firebug (Mozilla/chrome) and go to the "network" We then see the location of every file attached, and extension of the file. How do I do this in python? For url stuff, I usually look into urllib/mechanize/selenium but none of these seem to support what I want or I don't know the code that would be associated with it. I'm using linux python 2.7 - Any help/answers would be awesome. Thank you for anyone attempting to answer this. Edit: The things the back end servers generate, I don't know how but firebug in the "net" or "network" section show this information. I wondered if it could be implemented into python some how.
Get files attached to URL using python
0
0
1
0
1
76
32,601,954
2015-09-16T07:11:00.000
0
0
0
0
0
python,django,url
0
49,653,008
0
1
0
true
1
0
What I usually do is create a dir with the upload date as its name, e.g. 04042018/, and then rename the uploaded file with a uuid4, e.g. 550e8400-e29b-41d4-a716-446655440000.jpg. So the full path for the uploaded file will be something like this: site_media/media/04042018/550e8400-e29b-41d4-a716-446655440000.jpg. Personally I think this is better than something like site_media/media/username/file.jpg, because it will be easier to figure out which images belong to whom.
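A plain-Python sketch of that naming scheme; in Django the same logic would go into the model field's upload_to callable, and the helper name here is made up:

```python
# Plain-Python sketch of the naming scheme: a date-named directory plus a
# uuid4 filename that keeps the original extension. In Django this logic
# would live in the model field's upload_to callable.
import datetime
import os
import re
import uuid

def upload_path(original_filename):
    date_dir = datetime.date.today().strftime("%d%m%Y")   # e.g. 04042018
    _, ext = os.path.splitext(original_filename)
    return "site_media/media/%s/%s%s" % (date_dir, uuid.uuid4(), ext)

path = upload_path("photo.jpg")
print(path)
```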
1
1
0
0
For example, uploading a gif to gfycat generates URLs in the form of Adjective-Adjective-Noun such as ForkedTestyRabbit. Obviously going to this URL allows you to view the model instance that was uploaded. So I'm thinking the post upload generates a unique random URL, e.g. /uploads/PurpleSmellyGiraffe. The model will have a column that is the custom URL part, i.e. PurpleSmellyGiraffe, and then in the urls.py we have anything /uploads/* will select the corresponding model with that URL. However, I'm not sure this is the best practice. Could I get some feedback/suggestions on how to implement this?
What is the right way in Django to generate random URLs for user uploaded items?
0
1.2
1
0
0
508
32,611,932
2015-09-16T14:47:00.000
5
0
1
0
0
python,pip,anaconda,conda
0
32,615,498
0
2
0
false
0
0
Odds are that Anaconda automatically edited your .bashrc so that anaconda/bin is in front of your /usr/bin folder in your $PATH variable. To check this, type echo $PATH, and the command line will return a list of directory paths. Your computer checks each of these places for pip when you type pip in the command line, and executes the first one it finds in your PATH. You can open /home/username/.bashrc with whatever text editor you choose. Wherever it adds anaconda/bin to the path, with something like export PATH=/anaconda/bin:$PATH, just replace it with export PATH=$PATH:/anaconda/bin. Note though, this will make your OS use your system python as well. Instead of all of this, you can always just use the direct path to pip when calling it. Or you can alias it using alias pip=/path/to/system/pip, and you can put that line in your .bashrc file in order to apply it whenever you log in to the PC.
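A toy illustration of why the ordering matters: the shell runs the first matching executable it finds while scanning $PATH left to right. The directories and command sets here are hypothetical:

```python
# Toy model of $PATH resolution: the first directory (left to right) that
# contains the command wins, which is why reordering $PATH switches pips.
def resolve(command, path_dirs, installed):
    # installed maps directory -> set of commands available there
    for directory in path_dirs:
        if command in installed.get(directory, set()):
            return directory + "/" + command
    return None

installed = {
    "/anaconda/bin": {"pip", "python"},
    "/usr/bin": {"pip", "python"},
}

# anaconda first -> anaconda's pip wins
print(resolve("pip", ["/anaconda/bin", "/usr/bin"], installed))  # /anaconda/bin/pip
# system dirs first -> system pip wins
print(resolve("pip", ["/usr/bin", "/anaconda/bin"], installed))  # /usr/bin/pip
```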
2
8
0
0
I am now using anaconda pip after I installed pip by "conda install pip", if I want to use system pip again, how can I make it? Or how can I switch system pip to anaconda pip?
How can I switch using pip between system and anaconda
0
0.462117
1
0
0
9,104
32,611,932
2015-09-16T14:47:00.000
1
0
1
0
0
python,pip,anaconda,conda
0
32,617,096
0
2
0
false
0
0
You don't need to change your path. Just use the full path to the system pip (generally /usr/bin/pip or /usr/local/bin/pip) to use the system pip.
2
8
0
0
I am now using anaconda pip after I installed pip by "conda install pip", if I want to use system pip again, how can I make it? Or how can I switch system pip to anaconda pip?
How can I switch using pip between system and anaconda
0
0.099668
1
0
0
9,104
32,652,611
2015-09-18T12:55:00.000
0
0
1
0
1
python,pdf,matplotlib
0
32,664,477
0
3
0
false
0
0
The PDF backend makes one page per figure. Use subplots to get multiple plots into one figure and they'll all show up together on one page of the PDF.
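A short sketch of the subplots approach, assuming matplotlib is available; the Agg backend and a temp file keep it self-contained:

```python
# Sketch: put several plots on one figure with subplots, so the saved PDF
# has a single page. Uses the non-interactive Agg backend.
import os
import tempfile

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for i, ax in enumerate(axes.flat):
    ax.plot([0, 1, 2], [0, i, 2 * i])
    ax.set_title("plot %d" % i)

pdf_path = os.path.join(tempfile.mkdtemp(), "all_plots.pdf")
fig.savefig(pdf_path)          # one figure -> one PDF page
```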
1
1
0
0
I'm trying to get my figures in just one pdf page, but I don't know how to do this. I found out that it's possible to save multiple figures in a pdf file with 'matplotlib.backends.backend_pdf', but it doesn't work for just one page. Has anyone any ideas ? Convert the figures to just one figure ?
Save multiple figures in one pdf page, matplotlib
0
0
1
0
0
8,551
32,659,008
2015-09-18T18:45:00.000
1
0
0
0
0
python,sql-server,database-design,triggers,database-cloning
0
32,659,567
0
1
0
true
0
0
lad2015 answered the first part. The second part can be infinitely more dangerous as it involves calling outside the Sql Server process. In the bad old days one would use the xp_cmdshell. These days it may be more worthwhile to create an Unsafe CLR stored procedure that'll call the python script. But it is very dangerous and I cannot stress just how much you shouldn't do it unless you really have no other choices. I'd prefer to see a polling python script that runs 100% external to Sql that connects to a Status table populated by the trigger and performs work accordingly.
1
0
0
0
I've a read-only access to a database server dbserver1. I need to store the result set generated from my query running on dbserver1 into another server of mine dbserver2. How should I go about doing that? Also can I setup a trigger which will automatically copy new entries that will come in to the dbserver1 to dbserver2? Source and destination are both using Microsoft SQL server. Following on that, I need to call a python script on a database trigger event. Any ideas on how could that be accomplished?
Copy database from one server to another server + trigger python script on a database event
0
1.2
1
1
0
449
32,660,088
2015-09-18T19:58:00.000
1
1
0
1
0
php,python,apache,permission-denied
0
32,660,141
0
2
0
false
0
0
You need read permission to run the python script.
1
2
0
0
I have a PHP script that is supposed to execute a python script as user "apache" but is returning the error: /transform/anaconda/bin/python: can't open file '/transform/python_code/edit_doc_with_new_demo_info.py': [Errno 13] Permission denied Permissions for edit_doc_with_new_demo_info.py are ---xrwx--x. 1 apache PosixUsers 4077 Sep 18 12:14 edit_doc_with_new_demo_info.py. The line that is calling this python script is: shell_exec('/transform/anaconda/bin/python /transform/python_code/edit_doc_with_new_demo_info.py ' . escapeshellarg($json_python_data) .' >> /transform/edit_subject_python.log 2>&1') If apache is the owner of the python file and the owner has execute permission how can it be unable to open the file?
Python can't open file
0
0.099668
1
0
0
4,074
32,660,496
2015-09-18T20:27:00.000
0
0
0
0
0
python,mysql,django
0
32,660,678
0
2
0
false
1
0
You could create a model UserItems for each user with a ForeignKey pointing to the user and an item ID pointing to items. The UserItems model should store the unique item IDs of the items that belong to a user. This should scale better if items can be attached to multiple users or if items can exist that aren't attached to any user yet.
1
2
0
0
Good evening, I am working on some little website for fun and want users to be able to add items to their accounts. What I am struggling with is coming up with a proper solution how to implement this properly. I thought about adding the User Object itself to the item's model via ForeignKey but wouldn't it be necessary to filter through all entries in the end to find the elements attached to x user? While this would work, it seems quite inefficient, especially when the database has grown to some point. What would be a better solution?
Add models to specific user (Django)
0
0
1
0
0
1,112
32,675,024
2015-09-20T02:07:00.000
1
0
1
0
1
python,scikit-learn,python-import,anaconda
1
65,357,723
0
8
0
false
0
0
SOLVED: reinstalled Python 3.7.9 (not the latest), installed numpy 1.17.5 (not the latest), installed scikit-learn (latest); sklearn works now!
4
24
1
0
Beginner here. I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn” The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one. Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn” Does anyone know how to solve this problem?
Getting PyCharm to import sklearn
0
0.024995
1
0
0
67,839
32,675,024
2015-09-20T02:07:00.000
0
0
1
0
1
python,scikit-learn,python-import,anaconda
1
53,357,368
0
8
0
false
0
0
For Mac OS: PyCharm --> Preferences --> Project Interpreter --> Double Click on pip (a new window will open with search option) --> mention 'Scikit-learn' on the search bar --> Install Packages --> Once installed, close that new window --> OK on the existing window and you are done.
4
24
1
0
Beginner here. I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn” The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one. Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn” Does anyone know how to solve this problem?
Getting PyCharm to import sklearn
0
0
1
0
0
67,839
32,675,024
2015-09-20T02:07:00.000
8
0
1
0
1
python,scikit-learn,python-import,anaconda
1
54,702,828
0
8
0
false
0
0
Please note: in the package search, search for 'scikit-learn' instead of 'sklearn'.
4
24
1
0
Beginner here. I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn” The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one. Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn” Does anyone know how to solve this problem?
Getting PyCharm to import sklearn
0
1
1
0
0
67,839
32,675,024
2015-09-20T02:07:00.000
1
0
1
0
1
python,scikit-learn,python-import,anaconda
1
50,808,255
0
8
0
false
0
0
The same error occurred for me; I fixed it by selecting File Menu -> Default Settings -> Project Interpreter -> press the + button, type 'sklearn', and press the Install button. Installation will be done in 10 to 20 seconds. If the issue is not resolved, please check your PyCharm interpreter path. Sometimes your machine has both Python 2.7 and Python 3.6 installed, and there may be a conflict when choosing one.
4
24
1
0
Beginner here. I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn” The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one. Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn” Does anyone know how to solve this problem?
Getting PyCharm to import sklearn
0
0.024995
1
0
0
67,839
32,680,081
2015-09-20T13:45:00.000
2
0
1
0
0
python,pip
1
32,680,295
0
2
0
false
0
0
A couple more points: Check that you're installing the library into the virtualenv that you want to use. There are some libraries whose package names are different from the name you import. You could take a look at their documentation online (googling python <library> will usually bring up the information) to see if you're importing the package correctly.
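One quick way to check the second point, sketched with a stdlib module and a made-up missing name, is to ask the interpreter itself which names it can find:

```python
import importlib.util
import sys

def diagnose(module_name):
    """Report whether *this* interpreter can import `module_name`."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return "%r not found for %s" % (module_name, sys.executable)
    return "%r found at %s" % (module_name, spec.origin)

print(diagnose("json"))             # stdlib module, always found
print(diagnose("no_such_pkg_xyz"))  # hypothetical missing package name
```

If the library pip just installed shows up as "not found", pip and the interpreter you are running are almost certainly two different environments; `python -m pip install <library>` installs into the interpreter you actually run.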
1
38
0
0
I have successfully installed a library with pip install <library-name>. But when I try to import it, python raises ImportError: No module named <library-name>. Why do I get this error and how can I use the installed library?
ImportError after successful pip installation
0
0.197375
1
0
0
67,829
32,680,534
2015-09-20T14:35:00.000
2
0
0
0
0
javascript,python,browser,beautifulsoup,python-requests
0
32,680,624
0
1
0
false
1
0
In most cases it is enough to analyze the "Network" tab of the developer tools and see the requests that are fired when you hit the button you mentioned. Once you understand those requests, you will be able to implement your scraper to run similar requests and grab the relevant data.
1
0
0
0
I'm facing a new problem. I'm writing a scraper for a website, usually for this kind of tasks I use selenium, but in this case I cannot use anything that simulate a web-browser. Researching on StackOverflow, I read the best solution is to undestand what javascript did and rebuild the request over HTTP. Yeah, I understand well in theory, but don't know how to start, as I don't know well technologies involved. In my specific case, some HTML is added to the page when the button is clicked. With developer tools I set a breakpoint on the 'click' event, but from here, I'm literally lost. Anyone can link some resource and examples I can study?
Python - Rebuild Javascript generated code using Requests Module
0
0.379949
1
0
1
84
32,682,065
2015-09-20T17:10:00.000
1
1
0
0
0
python,nginx,apache2,server,uwsgi
0
32,682,202
0
1
1
true
1
0
Andrew, I believe that you can move some pieces of your deployment topology. My suggestion is to use nginx for delivering HTTP content, and expose your application using some web framework, i.e. tornadoweb (my preference, considering its async core, and the best documented compared to twisted, even though twisted is a really great framework). You can communicate between nginx and tornado by proxy. It is simple to configure. You can replicate your service instance to distribute your calculation application inside the same machine and across other hosts. This can be easily configured by nginx upstreams. If you need more performance, you can break your application into small modules and integrate them using async messaging. You can choose zeromq or rabbitmq, among other solutions. Then you can have different topologies, gradually applied during the evolution of your application. 1st topology: nginx -> tornadoweb. 2nd topology: nginx with load balancing (upstreams) -> tornadoweb replicated on [1..n] instances. 3rd topology: [2nd topology] -> your app integrated by messaging (zeromq, amqp (rabbitmq), ...). My favorite is the 3rd in the end, but for the moment you should start with the 1st and 2nd. There are a lot of options, but these three may be sufficient for a simple organization of your app.
1
1
0
0
I have a site, that performs some heavy calculations, using library for symbolic math. Currently average calculation time is 5 seconds. I know, that ask too broad question, but nevertheless, what is the optimized configuration for this type of sites? What server is best for this? Currently, I'm using Apache with mod_wsgi, but I don't know how to correctly configure it. On average site is receiving 40 requests per second. How many processes, threads, MaxClients etc. should I set? Maybe, it is better to use nginx/uwsgi/gunicorn (I'm using python as programming language)? Anyway, any info is highly appreciated.
Best server configuration for site with heavy calculations
0
1.2
1
0
0
229
32,708,630
2015-09-22T04:53:00.000
0
1
1
0
0
python,global
0
32,709,121
0
2
0
false
1
0
Delegate the opening and management of the serial port to a separate daemon, and use a UNIX domain socket to transfer the file descriptor for the serial port to the client programs.
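A minimal sketch of that descriptor hand-off, with both ends in the same process and a temp file standing in for the serial port (requires Python 3.9+ on a Unix platform for `socket.send_fds`/`recv_fds`):

```python
import os
import socket
import tempfile

# A connected pair of UNIX domain sockets standing in for the daemon
# and one game process (same process here, just to show the mechanics).
daemon_end, client_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

with tempfile.TemporaryFile() as port:          # stand-in for the serial port
    port.write(b"hello from the serial port")
    port.flush()

    # Daemon side: ship the open file descriptor over the socket (SCM_RIGHTS).
    socket.send_fds(daemon_end, [b"take-fd"], [port.fileno()])

    # Client side: receive the descriptor and use it directly.
    msg, fds, flags, addr = socket.recv_fds(client_end, 1024, maxfds=1)
    received = os.fdopen(fds[0], "rb")
    received.seek(0)                            # dup'd fd shares the offset
    data = received.read()
    received.close()

daemon_end.close()
client_end.close()
print(data)  # b'hello from the serial port'
```

In the real setup the daemon opens the serial port once at startup, and each game connects to the daemon's UNIX socket and receives a descriptor for the already-open port.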
1
3
0
0
I have 5 different games written in python that run on a raspberry pi. Each game needs to pass data in and out to a controller using a serial connection. The games get called by some other code (written in nodeJS) that lets the user select any of the games. I'm thinking I don't want to open and close a serial port every time I start and finish a game. Is there anyway to make a serial object instance "global", open it once, and then access it from multiple game modules, all of which can open and close at will? I see that if I make a module which assigns a Serial object to a variable (using PySerial) I can access that variable from any module that goes on to import this first module, but I can see using the id() function that they are actually different objects - different instances - when they are imported by the various games. Any ideas about how to do this?
Python - global Serial obj instance accessible from multiple modules
0
0
1
0
0
328
32,712,557
2015-09-22T08:53:00.000
0
0
1
0
0
python,pyenchant
0
58,909,625
0
2
0
false
0
0
You can use hunspell for Italian and myspell for Spanish: sudo apt install hunspell-it (Italian), sudo apt install myspell-es (Spanish).
1
6
0
0
I am using pyenchant package for spell check in Python. I am able to do it successfully for languages English, French, German. Also, I want to do it for languages Italian and Spanish. I looked into available dictionaries in enchant using enchant.list_languages() and I got only ['de_DE', 'en_AU', 'en_GB', 'en_US', 'fr_FR']. I am looking for how to do spell check for Italian and Spanish using enchant package or any other package/techniques. Thanks.
Pyenchant - Italian and Spanish Languages
0
0
1
0
0
4,538
32,715,175
2015-09-22T11:02:00.000
1
0
0
0
0
python,mysql,django
1
32,723,517
0
3
0
false
1
0
This approach worked! I was able to install mysqlclient inside the virtual environment with the following command: python -m pip install mysqlclient. Thanks much!
1
0
0
0
I had posted about this error some time back but need some more clarification on this. I'm currently building out a Django Web Application using Visual Studio 2013 on a Windows 10 machine (Running Python3.4). While starting out I was constantly dealing with the MySQL connectivity issue, for which I did a mysqlclient pip-install. I had created two projects that use MySQL as a backend and after installing the mysqlclient I was able to connect to the database through the current project I was working on. When I opened the second project and tried to connect to the database, I got the same 'No Module called MySqlDB' error. Now, the difference between both projects was that the first one was NOT created within a Virtual Environment whereas the second was. So I have come to deduce that projects created within the Python virtual environment are not able to connect to the database. Can someone here please help me in getting this problem solved. I need to know how the mysqlclient module can be loaded onto a virtual environment so that a project can use it. Thanks
No Module Named MySqlDb in Python Virtual Enviroment
0
0.066568
1
1
0
660
32,742,093
2015-09-23T14:18:00.000
20
0
0
1
0
python,windows,python-2.7,cmd
0
32,742,619
0
2
0
true
0
0
py command comes with Python3.x and allow to choose among multiple Python interpreters. For example if you have both Python 3.4 and 2.7 installed, py -2 will start python2.7 and py -3 will start python3.4 . If you just use py it will start the one that was defined as default. So the official way would be to install Python 3.x, declare Python 2.7 as the default, and the py command will do its job. But if you just want py to be an alias of python, doskey py=python.exe as proposed by @Nizil and @ergonaut will be much simpler... Or copying python.exe to py.exe in Python27 folder if you do not want to be bothered by the limitations of doskey.
1
6
0
0
I have a very weird request. An executable I have has a system call to a python script which goes like py file1.py Now, in my system though py is shown as an unrecognized internal or external command. python file1.py works however. is there some way I can get my windows command prompt to recognize that py and python refer to the same thing?
how to access python from command line using py instead of python
1
1.2
1
0
0
29,688
32,756,711
2015-09-24T08:25:00.000
12
0
1
0
0
python,flask,virtualenv
0
32,756,729
0
1
0
true
1
0
No, there is no requirement to use a virtualenv. No project ever would require you to use one; it is just a method of insulating a collection of Python libraries from other projects. I personally do strongly recommend you use a virtualenv, because it makes it much, much easier to swap out versions of libraries and not affect other Python projects. Without a virtualenv, you just continue with installing the dependencies, and they'll end up in your system libraries collection. Installing the Flask project with pip will pull in a few other packages, such as Werkzeug, Jinja2 and itsdangerous. These would all be installed 'globally'.
1
6
0
0
I just started exploring Flask. Earlier I tried to explore Django but found it a bit complicated. However, Installing Flask requires us to install virtualenv first which, As I can recall, is not required in the case of Django. In case it is not required, how to go ahead without virtualenv?
Is it necessary to use virtualenv to use Flask framework?
1
1.2
1
0
0
2,441
32,762,675
2015-09-24T13:29:00.000
2
1
0
1
0
python,salt-stack
0
32,769,491
0
1
0
true
0
0
Only Python extensions are supported, so your best bet is to do the following: 1) Deploy your non-Python components via a file.managed / file.recurse state. 2) Ensure your custom execution module has a __virtual__() function checking for the existence of the non-Python dependencies, and returning False if they are not present. This will keep the module from being loaded and used unless the deps are present. 3) Sync your custom modules using saltutil.sync_modules. This function will also re-invoke the loader to update the available execution modules on the minion, so if you had already synced your custom module and later deployed the non-Python dependencies, saltutil.sync_modules would re-load the custom modules and, provided your __virtual__() function returned either True or the desired module name, your execution module would then be available for use.
1
0
0
0
I am currently transforming a perl / bash tool into a salt module and I am wondering how I should sync the non-python parts of this module to my minions. I want to run salt agent-less and ideally the dependencies would by synced automatically alongside the module itself once its called via salt-ssh. But it seems that only python scripts get synced. Any thoughts for a nice and clean solution? Copying the necessary files from the salt fileserver during module execution seems somehow wrong to me..
How to sync a salt execution module with non-python dependencies
0
1.2
1
0
0
284
32,771,481
2015-09-24T21:48:00.000
1
0
0
0
0
python,pygame,python-idle
0
32,771,570
0
1
0
false
0
1
If you're on windows, a quick thing could be to make a one-line batch script. start pythonw.exe <filename> The start keyword causes the shell to not wait for the program to finish. pythonw vs python prevents the python terminal from appearing. If you're on linux/mac, you could do the same with shell scripts.
1
0
0
0
I'm just wondering (as im starting to make a game with pygame) is there a way to 1) Run a program that only opens a GUI and not the shell and 2) Run the program without having to edit with idle (I'd like to do this so the game will look more professional. I would like to just have a desktop shortcut or something that when clicked, runs only the GUI and doesnt show my code or show the shell.
How to make the text shell not appear when running a program
1
0.197375
1
0
0
87
32,772,954
2015-09-25T00:21:00.000
2
0
0
0
1
python,python-2.7,openpyxl
0
32,776,318
0
2
0
false
0
0
If you want to preserve the integrity of the workbook, i.e. retain the formulae, then you cannot use data_only=True. The documentation makes this very clear.
1
6
0
0
Because I need to parse and then use the actual data in cells, I open an xlsm in openpyxl with data_only = True. This has proved very useful. Now though, having the same need for an xlsm that contains formuale in cells, when I then save my changes, the formulae are missing from the saved version. Are data_only = True and formulae mutually exclusive? If not, how can I access the actual value in cells without losing the formulae when I save? When I say I lose the formulae, it seems that the results of the formulae (sums, concatenattions etc.) get preserved. But the actual formulaes themselves are no longer displayed when a cell is clicked. UPDATE: To confirm whether or not the formulaes were being preserved or not, I've re-opened the saved xlsm, this time with data_only left as False. I've checked the value of a cell that had been constructed using a formula. Had formulae been preserved, opening the xlsm with data_only set to False should have return the formula. But it returns the actual text value (which is not what I want).
How to save in openpyxl without losing formulae?
0
0.197375
1
1
0
6,128
32,816,966
2015-09-28T06:36:00.000
0
0
0
0
0
python,django,django-models
0
32,830,790
0
2
1
false
1
0
It is not clear what you mean by "my user". Is this just the admin user who sets this configuration globally for the site, or does every user of the site have his/her own preferences? In the latter case, make a new model called Preferences which has a one-to-one relation to the user model. Then in your query you should create three separate queries according to the preference values.
1
1
0
0
suppose i have three models, at my view I am showing all the item of these models, i want to give my user privilege to set which model's objects are to be shown at view at first, second and third. what is the best way to implement this?
how to keep django models in database for users customized view?
0
0
1
0
0
106
32,817,302
2015-09-28T06:59:00.000
0
0
0
1
1
python,linux,macos,terminal,stdout
0
32,818,127
0
2
0
false
0
0
File write operations are buffered by default, so the file isn't effectively written until either the buffer is full, the file is closed or you explicitly call flush() on the file. But anyway: don't use direct file access if you want to log to a file; use either a logging.StreamHandler with an opened file as stream or, better, a logging.FileHandler. Both will take care of flushing the file.
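A minimal sketch of the two-file idea with logging.FileHandler (the file name is made up); each thread gets its own logger and file, and FileHandler flushes every record so tail -f sees it immediately:

```python
import logging
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "download.log")  # hypothetical per-thread file

logger = logging.getLogger("downloader")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path)           # flushes after each record
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
logger.addHandler(handler)

logger.info("fetched 10 records")                 # already on disk at this point

with open(log_path) as f:
    contents = f.read()
print(contents)  # downloader: fetched 10 records
```

With a second logger/handler pair for the serial thread, `tail -f download.log` and `tail -f serial.log` in two terminal windows give the side-by-side view described in the question.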
1
6
0
0
This is a python question, but also a linux/BSD question. I have a python script with two threads, one downloading data from the web and the other sending data to a device over a serial port. Both of these threads print a lot of status information to stdout using python's logging module. What I would like is to have two terminal windows open, side by side, and have each terminal window show the output from one thread, rather than have the messages from both interleaved in a single window. Are there file descriptors other than stdin, stdout & stderr to write to and connect to other terminal windows? Perhaps this wish is better fulfilled with a GUI? I'm not sure how to get started with this. edit: I've tried writing status messages to two different files instead of printing them to stdout, and then monitoring these two files with tail -f in other terminal windows, but this doesn't work for live monitoring because the files aren't written to until you call close() on them.
Is it possible to output to and monitor streams other than stdin, stdout & stderr? (python)
1
0
1
0
0
437
32,820,673
2015-09-28T10:21:00.000
1
0
1
1
1
python,windows,uninstallation
0
32,821,417
0
2
0
true
0
0
Did you try to reinstall the version you want to delete and then uninstall it afterwards?
1
3
0
0
I accidentally downloaded Python 3.4.2 a while back but I actually needed Python 2.7, so I deleted the 3.4.2 files and downloaded 2.7 instead. Now I need Python 3, so I tried to download it but I noticed that in the control panel in the Uninstall Programs section it tells me that the 3.4.2 from back then is still on my PC. Every time I try to uninstall/change/repair/download a newer version I can't and it tells me A program required to complete the installation can not be found... I can not find any remaining files connected to any sort of Python in my PC. My operating system is Windows 10. Does someone know how to solve this?
Can't uninstall Python on Windows (3.4.2)
0
1.2
1
0
0
8,420
32,834,722
2015-09-29T02:28:00.000
6
0
1
0
0
python,spyder
0
43,390,881
0
4
0
false
0
0
Just use the key combination Ctrl+Shift+V.
4
21
0
0
I accidentally closed the variable explorer on my Pythone(Spyder)... Does anyone know how I can reopen it? If I must reinstall the program, then I will but I just want to see if there is a way. Thank you!
spyder python how to re-open Variable Explorer
1
1
1
0
0
51,281
32,834,722
2015-09-29T02:28:00.000
2
0
1
0
0
python,spyder
0
59,318,950
0
4
0
false
0
0
You can go to View, then under View click on Window layout, and in that click on 'reset to Spyder default', which will give you the default layout of Spyder.
4
21
0
0
I accidentally closed the variable explorer on my Pythone(Spyder)... Does anyone know how I can reopen it? If I must reinstall the program, then I will but I just want to see if there is a way. Thank you!
spyder python how to re-open Variable Explorer
1
0.099668
1
0
0
51,281
32,834,722
2015-09-29T02:28:00.000
39
0
1
0
0
python,spyder
0
32,834,969
0
4
0
true
0
0
Go to View/Panes and select Variable Explorer.
4
21
0
0
I accidentally closed the variable explorer on my Pythone(Spyder)... Does anyone know how I can reopen it? If I must reinstall the program, then I will but I just want to see if there is a way. Thank you!
spyder python how to re-open Variable Explorer
1
1.2
1
0
0
51,281
32,834,722
2015-09-29T02:28:00.000
1
0
1
0
0
python,spyder
0
56,264,798
0
4
0
false
0
0
Just go to Tools (look at the menu bar at the top middle), select 'Reset Spyder to factory defaults' and click OK. You will get your default Spyder console.
4
21
0
0
I accidentally closed the variable explorer on my Pythone(Spyder)... Does anyone know how I can reopen it? If I must reinstall the program, then I will but I just want to see if there is a way. Thank you!
spyder python how to re-open Variable Explorer
1
0.049958
1
0
0
51,281
32,836,510
2015-09-29T05:47:00.000
0
0
0
0
0
python,oracle,database-connection,cx-oracle
0
32,836,606
0
2
0
false
0
0
Yes, you definitely need to install an Oracle client; it even says so in the cx_Oracle readme.txt. Another recommendation you can find there is installing an Oracle Instant Client, which is the minimal installation needed to communicate with Oracle and is the simplest to use. Other dependencies can usually be found in the readme.txt file, which should be the first place to look for these details.
1
1
0
0
I am writing a Python script to fetch and update some data on a remote oracle database from a Linux server. I would like to know how can I connect to remote oracle database from the server. Do I necessarily need to have an oracle client installed on my server or any connector can be used for the same? And also if I use cx_Oracle module in Python, is there any dependency that has to be fulfilled for making it work?
Python connection to Oracle database
1
0
1
1
0
6,425
32,849,325
2015-09-29T16:35:00.000
0
0
0
0
0
python,contextmenu,menuitem,maya,right-click
0
32,870,983
0
1
0
true
1
0
Okay, I think I have found a solution. It's the best I have found, but it is not very handy... Create a copy of buildObjectMenuItemsNow.mel and dagMenuProc.mel in your script folder so Maya will read those ones instead of the native ones. Once you have done that you can modify dagMenuProc.mel without destroying anything, as you're working on a copy. The proc that needs modifications is the last one: dagMenuProc (line 2240), and you can start modifying inside the loop (line 2266). What I have done is make a call to a Python file that adds some more items to the menu, and you can obviously remove some items by deleting a few MEL lines. I hope this can help someone. And if there is another way of doing this, I would love to hear about it!
1
0
0
0
I am trying to edit the right click context sensitive menu from Maya. I have found how to add a menuItem but I would like to have this Item in the top of the list not at the bottom... I think in this case I need to deleteAllItems from the menu, add mine and then re-add the default Maya ones but I don't know how to re-add them. Where can I find it ? Most of the topics say "modify Maya's source code" but that's not even an option. Any suggestions ? Thx !
Maya right click context sensitive menu
0
1.2
1
0
0
955
32,902,615
2015-10-02T07:45:00.000
2
0
1
0
0
python,multithreading,python-2.7,python-3.x,python-multithreading
0
32,902,789
0
2
0
true
0
0
I would suggest creating an addToVariable or setVariable method to take care of the actual adding to the variable in question. This way you can just set a flag, and if the flag is set addToVariable returns immediately instead of actually adding to the variable in question. If not, you should look into operator overloading. Basically what you want is to make your own number-class and write a new add function which contains some sort of flag that stops further additions to have any effect.
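A minimal sketch of the flag idea with hypothetical names; f() just flips the flag and the adder method becomes a no-op:

```python
class Accumulator:
    """Counter whose additions can be frozen by a flag, as suggested above."""

    def __init__(self):
        self.value = 0
        self.frozen = False

    def add(self, amount):
        if self.frozen:
            return          # additions silently have no effect while frozen
        self.value += amount


acc = Accumulator()
acc.add(5)
acc.frozen = True           # e.g. set inside f()
acc.add(100)                # ignored: the flag is set
acc.frozen = False
acc.add(1)
print(acc.value)  # 6
```

The periodic thread calls `acc.add(...)` unconditionally; whether the call actually changes anything is decided entirely by the flag.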
1
0
0
0
I am working with multi-threading. There is a thread that periodically adds to a certain variable, say every 5 seconds. I want to make it so that, given a call to function f(), additions to that variable will have no effect. I can't find how to do this anywhere and I'm not sure where to start. Is there any syntax that makes certain operations not no effect no specific variables for a certain duration? Or is there any starting point someone could give me and I can figure out the rest? Thanks.
Make certain operators have no effect over a duration of time? (Python)
1
1.2
1
0
0
51
32,913,859
2015-10-02T18:46:00.000
1
0
0
0
0
java,php,python,html,mysql
0
32,915,448
0
1
0
true
1
0
The programming language does not really matter for the way to solve the problem. You can implement it in the language you are comfortable with. There are two basic ways to solve the problem: 1) Use a crawler which creates an index of words found on the different pages, then use that index to look up the searched word; or 2) When the user has entered the search expression, start crawling the pages and look for whether the search expression is found. Of course both solutions have different (dis)advantages. For example: In 1) you need to do an initial crawl (and update it later on when the pages change). In 1) you need to store the crawl result in some sort of database. In 1) you will receive instant search results. In 2) you don't need a database/datastore. In 2) you will have to wait until all pages are searched before showing the final result list.
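The index-based approach can be sketched in a few lines of Python with an inverted index (the page contents here are made up; a real crawler would fetch them from your site):

```python
# Hypothetical "crawled" pages: URL -> text content.
pages = {
    "essay1.html": "The quick brown fox jumps",
    "essay2.html": "A fox ran through the field",
}

# Inverted index: word -> set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(word):
    """Return the pages containing `word`, in a stable order."""
    return sorted(index.get(word.lower(), set()))

print(search("fox"))  # ['essay1.html', 'essay2.html']
```

Your PHP search page would query such an index (stored in a database) and then pull the matching paragraphs from the stored page text.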
1
0
0
0
I made a website with many pages, on each page is a sample essay. The homepage is a page with a search field. I'm attempting to design a system where a user can type in a word and when they click 'search', multiple paragaphs containing the searched word from the pages with a sample essays are loaded on to the page. I'm 14 and have been programming for about 2 years, can anyone please explain to me the programming languages/technologies I'll need to accomplish this task and provide suggestions as to how I can achieve my task. All I have so far are the web pages with articles and a custom search page I've made with PHP. Any suggestions?
Creating a webpage crawler that finds and maches user input
0
1.2
1
0
1
41
32,937,078
2015-10-04T18:48:00.000
1
1
0
0
1
python,python-import,panda3d
1
32,937,162
0
1
0
false
0
0
Try changing your PYTHONPATH. I met a problem like this, then I modified my PYTHONPATH and it worked.
1
1
0
0
I have been searching the web for hours now, found several instances where someone had the same problem, but I seem to be too much of a newb with linux/ubuntu to follow the instructions properly, as none of the given solutions worked. Whenever I try to run a panda3d sample file from the python shell, I would give me an error saying: Traceback (most recent call last): File "/usr/share/panda3d/samples/asteroids/main.py", line 16, in from direct.showbase.ShowBase import ShowBase ImportError: No module named 'direct' What really bugs me is that when I try to execute the .py file directly (without opening it in the IDLE or pycharm) it works just fine. I know this has been asked before, but I would like to ask for a working step by step solution to be able to import panda3d from pycharm and the IDLE. I have no clue how to get it working, as none of the answers given to this question worked for me.
panda3d python import error
0
0.197375
1
0
0
501
32,953,669
2015-10-05T16:44:00.000
3
0
0
0
0
python,postgresql,imdb,imdbpy
0
32,972,532
0
1
0
true
0
0
Problem solved! For those having the same problem, here it goes: Download the Java Movie Database. It works with Postgres or MySQL. You'll have to download the Java runtime. After that, open the readme in the directory where you installed the Java Movie Database; all the instructions are there, but I'll help you. Follow the link to the *.list archives and download them. Move them into a new folder. After that, open JMDB (Java Movie Database) and select the right folders for moviecollection, sound etc. (they are in C:/programs...). In IMDb-import select the folder you created which contains the *.list files. Well, this is the end. Run JMDB and you'll have your DB populated.
1
0
0
0
I'm startint a project on my own and I'm having some troubles with importing datas from IMDb. Already downloaded everything that's necessary but I'm kinda newbie in this python and command lines stuff, and it's pissing me off because I'm doing my homework (trying to learn how to do these things) but I can't reach it :( So, is there anybody who could create a step-by-step of how to do it? I mean, something like: You'll need to download this and run this commands on 'x'. Create a database and then run 'x'. It would be amazing for me and other people who don't know how to do this as well and I would truly appreciate A LOT, really!. Oh, I'm using Windows.
How to run imdbpy2sql.py and import data from IMDb to postgres
0
1.2
1
1
0
1,057
32,965,937
2015-10-06T09:07:00.000
1
0
0
0
1
wxpython,wxwidgets
0
32,968,457
0
2
0
false
0
1
The simplest way I see to do it would be to store only the abbreviated contents in the wxTextCtrl itself normally, and only replace it with the full contents when the user is about to start editing it (i.e. when you get wxEVT_SET_FOCUS), then replace it with the abbreviated version again later (i.e. when you get wxEVT_KILL_FOCUS).
2
0
0
0
Last week I was given a requirement for an already crowded display/input screen written in wxPython (data stored in MySQL database) to show the first 24 characters of two free format comment fields but to allow up to 255 characters to be displayed/input. For instance on entry the screen might show “Right hip and knee X-ra” whilst the full entry continues “ys requested 24th September in both laying and standing positions with knee shown straighten and bent through 75 degrees”. Whilst I can create a wxtextCtrl that holds more characters than displayed (scrolls) I cannot work out how to display/enter the contents as a multiline box when selected. I have gone through our wxPython book and searched online with no joy.
wxtextCtrl To Allow Input/Display Of More Text Than Default Display
0
0.099668
1
0
0
109
32,965,937
2015-10-06T09:07:00.000
0
0
0
0
1
wxpython,wxwidgets
0
33,018,442
0
2
0
false
0
1
Have you considered using myTextCtrl.SetToolTipString() and setting it to the same value as the textctrl contents? In this manner only the first 24 characters show in the textctrl, but if you hover over it the entire string will be displayed as a tooltip.
2
0
0
0
Last week I was given a requirement for an already crowded display/input screen written in wxPython (data stored in MySQL database) to show the first 24 characters of two free format comment fields but to allow up to 255 characters to be displayed/input. For instance on entry the screen might show “Right hip and knee X-ra” whilst the full entry continues “ys requested 24th September in both laying and standing positions with knee shown straighten and bent through 75 degrees”. Whilst I can create a wxtextCtrl that holds more characters than displayed (scrolls) I cannot work out how to display/enter the contents as a multiline box when selected. I have gone through our wxPython book and searched online with no joy.
wxtextCtrl To Allow Input/Display Of More Text Than Default Display
0
0
1
0
0
109
32,978,233
2015-10-06T19:36:00.000
0
1
0
1
0
python,gdb
0
32,979,189
0
2
0
false
0
0
I was able to figure it out. What I understood is that GDB embeds the Python interpreter so it can use Python as an extension language. You can't just import gdb from /usr/bin/python like it's an ordinary Python library, because GDB isn't structured as a library. What you can do is source MY-SCRIPT.py from within gdb (equivalent to running gdb -x MY-SCRIPT.py).
1
0
0
0
where should my python files be stored so that I can run that using gdb. I have custom gdb located at /usr/local/myproject/bin. I start my gdb session by calling ./arm-none-eabi-gdb from the above location. I don't know how this gdb and python are integrated into each other. Can anyone help.?
Invoke gdb from python script
0
0
1
0
0
886
32,981,331
2015-10-06T23:18:00.000
0
0
1
0
0
python
0
32,981,354
0
3
0
false
0
0
Just cast them to float using float(x), where x is the string that contains the number.
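A minimal sketch, using io.StringIO in place of the real file and assuming the fifth column is the product of columns 2 and 3 (adjust to whichever two columns you need):

```python
import io

# Hypothetical 4-column input: a name followed by three numeric columns.
raw = io.StringIO("alpha 1.5 2.0 3.0\nbeta 4.0 0.5 2.0\n")

rows = []
for line in raw:
    name, a, b, c = line.split()
    a, b, c = float(a), float(b), float(c)   # cast the numeric strings
    rows.append((name, a, b, c, a * b))      # 5th column: product of cols 2 and 3
print(rows[0])  # ('alpha', 1.5, 2.0, 3.0, 3.0)
```

With the values as floats, any arithmetic between columns works normally; only column 1 stays a string.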
1
0
0
0
I am trying to read in a 4 column txt file and create a 5th column. Column 1 is a string and columns 2-4 are numbers, however they are being read as strings. I have two questions - my python script is currently unable to perform multiplication on two of the columns because it is reading columns 2-4 as strings. I want to know how to change columns 2-4 (which are numbers) to floating numbers and then create a 5th column that is two of the previous columns multiplied together.
Change data type in text file from string to float Python
0
0
1
0
0
761
33,009,532
2015-10-08T07:40:00.000
0
0
0
0
1
python-2.7,selenium-webdriver
1
33,010,025
0
1
0
false
0
0
Cross-check your Selenium language bindings, and check with older versions.
1
0
0
0
I am implementing selenium and i have already included "from selenium import webdriver" but still getting this error "ImportError: cannot import name webdriver" Any idea how to resolve this error?
ImportError: cannot import name webdriver
0
0
1
0
1
1,185
33,023,432
2015-10-08T18:33:00.000
6
0
1
0
0
python,arguments,global-variables,parameter-passing
0
33,023,611
0
2
0
false
0
0
This is a very generic question so it is hard to be specific. What you seem to be describing is a bunch of inter-related functions that share data. That pattern is usually implemented as an Object. Instead of a bunch of functions, create a class with a lot of methods. For the common data, use attributes. Set the attributes, then call the methods. The methods can refer to the attributes without them being explicitly passed as parameters.
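A minimal sketch of that pattern with made-up names; the auxiliary results become attributes computed once and read by any method, with no parameter passing:

```python
class Pipeline:
    """Shared state lives on the object instead of being threaded
    through every function call (all names here are hypothetical)."""

    def __init__(self, data):
        self.data = data
        self.total = None       # auxiliary result, computed once
        self.mean = None

    def compute(self):
        # Compute the auxiliary values and store them as attributes.
        self.total = sum(self.data)
        self.mean = self.total / len(self.data)

    def report(self):
        # Reads the attributes directly; nothing is re-computed or passed in.
        return f"total={self.total}, mean={self.mean}"


p = Pipeline([1, 2, 3, 4])
p.compute()
print(p.report())  # total=10, mean=2.5
```

Unlike true globals, the state is scoped to one instance, so two Pipelines with different data can coexist without interfering.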
1
5
0
0
Let's say I have a python module that has a lot of functions that rely on each other, processing each others results. There's lots of cohesion. That means I'll be passing back and forth a lot of arguments. Either that, or I'd be using global variables. What are best practices to deal with such a situation if? Things that come to mind would be replacing those parameters with dictionaries. But I don't necessarily like how that changes the function signature to something less expressive. Or I can wrap everything into a class. But that feels like I'm cheating and using "pseudo"-global variables? I'm asking specifically for how to deal with this in Python but I understand that many of those things would apply to other languages as well. I don't have a specific code example right, it's just something that came to mind when I was thinking about this issue. Examples could be: You have a function that calculates something. In the process, a lot of auxiliary stuff is calculated. Your processing routines need access to this auxiliary stuff, and you don't want to just re-compute it.
Avoiding global variables but also too many function arguments (Python)
0
1
1
0
0
1,672
33,029,293
2015-10-09T03:13:00.000
0
0
0
0
0
java,python,subprocess,jython,processbuilder
0
38,824,000
0
1
0
false
1
1
If there is no limit on the length of the string argument for launching the Python script, you could simply encode the binary data from the image into a string and pass that. The only problems you might encounter with this approach would be null characters and negative byte values.
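One common workaround for both problems is base64: the Java side (e.g. with java.util.Base64) encodes the image bytes into an ASCII string and passes that as the argument, and the Python script decodes it back. A sketch of the Python side; the function name is made up:

```python
import base64

def decode_image_arg(arg):
    """Turn a base64 command-line argument back into raw bytes.
    base64 output is plain ASCII, so null characters and high
    ("negative") byte values never appear in the argument itself."""
    return base64.b64decode(arg)

# Round-trip demo with bytes that would break a plain string argument.
data = b"\x00\xff\x10binary"
encoded = base64.b64encode(data).decode("ascii")  # what the Java side would pass
assert decode_image_arg(encoded) == data
```

The cost is a 4/3 size expansion of the argument, so for large frames a pipe to the subprocess's stdin would likely scale better than an argv string.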
1
0
0
0
I have a working program written in Java (a 3D game) and some scripts in Python written with Theano to process images. I am trying to capture the frames of the game as it is running and run these scripts on the frames. My current implementation grabs the binary data from each pixel in the frame, saves the frame as a PNG image and calls the Python script (using ProcessBuilder), which opens the image and does its thing. Writing an image to file and then opening it in Python is pretty inefficient, so I would like to pass the binary data from Java to Python directly. If I am not mistaken, ProcessBuilder only takes arguments as strings, so does anyone know how I can pass this binary data directly to my Python script? Any ideas? Thanks
Passing binary data from java to python
0
0
1
0
0
250
33,034,687
2015-10-09T09:29:00.000
0
0
1
1
0
python
0
33,034,977
0
3
0
false
0
0
If it's not a multithreaded program, then just let the program do whatever it needs and then: raw_input("Press Enter to stop the charade.\n") Maybe it's not exactly what you're looking for, but on the other hand you should not rely on a predefined sleep time.
1
0
0
0
I want the program to wait like 5 seconds before exiting the console after finishing whatever it does, so the user can read the "Good Bye" message, how can one do this?
How to stop console from exiting
0
0
1
0
0
67
33,043,704
2015-10-09T17:06:00.000
1
0
0
0
0
python,hadoop,hive,hbase,bigdata
0
33,043,867
0
2
0
false
0
0
If the data is already in CSV, or any other format on the Linux file system that Pig can understand, just do a hadoop fs -copyFromLocal to move it into HDFS. If you want to read/process the raw H5 file format using Python on HDFS, look at hadoop-streaming (map/reduce). Python can handle 2GB on a decent Linux system; not sure you need Hadoop for it at all.
2
0
1
0
I have downloaded a subset of the Million Song Dataset, which is about 2GB. However, the data is broken down into folders and sub-folders, and in the sub-folders the files are all in H5 format. I understand they can be read using Python, but I do not know how to extract them and load them into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load them into HBase or Hive? It would help if someone could point me to the right resource.
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
0
0.099668
1
0
0
726
33,043,704
2015-10-09T17:06:00.000
0
0
0
0
0
python,hadoop,hive,hbase,bigdata
0
50,411,499
0
2
0
false
0
0
Don't load that amount of small files into HDFS. Hadoop doesn't handle lots of small files well: each small file incurs overhead because the block size (usually 64MB) is much bigger. I want to do this myself, so I'm thinking about solutions. The Million Song Dataset files are no more than 1MB each. My approach would be to aggregate the data somehow before importing it into HDFS. The blog post "The Small Files Problem" from Cloudera may shed some light.
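The pre-aggregation step could be sketched like this; the file layout is hypothetical, and a real run would first convert each H5 file to a CSV row or two before concatenating.

```python
import csv
import os
import tempfile

def aggregate_csvs(root, out_path):
    """Concatenate many small CSV files under `root` into one large file,
    which HDFS handles far better than a million tiny ones.
    `out_path` should live outside `root` so it is not swept up itself."""
    count = 0
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in sorted(filenames):
                if name.endswith(".csv"):
                    with open(os.path.join(dirpath, name), newline="") as f:
                        for row in csv.reader(f):
                            writer.writerow(row)
                            count += 1
    return count

# Demo with a made-up layout of tiny per-song files.
src = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(src, "song%d.csv" % i), "w") as f:
        f.write("id%d,title%d\n" % (i, i))
big = os.path.join(tempfile.mkdtemp(), "all_songs.csv")
assert aggregate_csvs(src, big) == 3
```

Once the rows live in one large file, a single hadoop fs -copyFromLocal moves them into HDFS without the per-file block overhead.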
2
0
1
0
I have downloaded a subset of the Million Song Dataset, which is about 2GB. However, the data is broken down into folders and sub-folders, and in the sub-folders the files are all in H5 format. I understand they can be read using Python, but I do not know how to extract them and load them into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load them into HBase or Hive? It would help if someone could point me to the right resource.
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
0
0
1
0
0
726
33,057,049
2015-10-10T17:42:00.000
3
0
0
0
0
python,resize,pygtk
0
33,057,480
0
1
0
true
0
1
For the first question: Gtk.Window.resize(width, height) should work. If you use set_size_request(width, height), you cannot resize your window smaller than these values. For the second question: Gtk.Window.set_resizable(False)
1
1
0
0
I want to set a window's size, and then be able to resize it while the program is running. I've been able to make the window large, but I can't resize it smaller than the original set size. For a different project, I would also like to know how to make it so the window is not resizable at all.
PYGTK Resizing permissions
0
1.2
1
0
0
41
33,068,445
2015-10-11T18:15:00.000
0
1
1
0
0
python,windows,atom-editor
0
43,814,862
0
3
0
false
0
0
From Atom > Preferences > Install, search for the atom-runner package and install it. Close the Atom editor and reopen it; this helps the editor pick up the right path and should solve the issue. If this doesn't help, manually add the path of the Python installation directory to the system PATH. This will solve the issue.
1
3
0
0
I have read numerous articles about running code in the Atom code editor, however, I cannot seem to understand how this can be done. Could anyone explain it in simpler terms? I want to run my Python code in it and I have downloaded the 'python-tools-0.6.5' and 'atom-script-2.29.0' files from the Atom website and I just need to know how to get them working.
Run Code In Atom Code Editor
0
0
1
0
0
17,470
33,080,063
2015-10-12T11:32:00.000
-3
1
0
0
0
javascript,python,ios,swift,encryption
0
33,080,475
0
3
0
false
1
0
It's a very broad question, and there is a variety of ways to do it. First of all, you need to choose an encryption method and decide the purpose for which you encrypt the data. There are 3 main approaches: 1. Symmetric encryption. Encrypter and decrypter have access to the same key. It's quite simple, but it has one big drawback: the key needs to be shared, so if you put it in the client it can be stolen and abused. As a solution, you would need to use some other method to send the key and encrypt it on the client. 2. Asymmetric encryption. With asymmetric encryption the situation is different. To encrypt data you use a public/private key pair. The public key is usually used to encrypt data, but decryption is possible only with the private key. So you can hand out the public key to your clients, and they will be able to send encrypted traffic back to you. But you still need to be sure you are talking to the right public key so your encryption isn't abused. It's usually used in TLS (SSL), SSH, signing updates, etc. 3. Hashing (it's not really encryption). It's the simplest. With hashing, you produce a string that can't be reverted, with the rule that the same data will always produce the same hash. So just pick the most suitable method and find an appropriate package in the language you use.
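For the third option, Python's standard hashlib already covers it; a minimal sketch:

```python
import hashlib

def fingerprint(data):
    """One-way hash: the same input always yields the same digest,
    but the original data cannot be recovered from it."""
    return hashlib.sha256(data).hexdigest()

a = fingerprint(b"secret payload")
b = fingerprint(b"secret payload")
c = fingerprint(b"different payload")
assert a == b   # deterministic: same data, same hash
assert a != c   # any change to the input flips the digest
print(len(a))   # 64 hex characters for SHA-256
```

Note this only lets the server verify the file was not tampered with; it does not hide the contents, which is why it is listed as "not really encryption".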
1
0
0
0
I have very simple iOS and Android applications that download a txt file from a web server and present it to the user. I'm looking for a way that only the application can read the file, so no one else can download and use it. I would like to take the file on my computer, encrypt it somehow, upload the result to the server, and have the client know how to read the file when it downloads it. What is the simplest way to do this kind of thing? Thanks a lot!
encrypting data on server and decrypting it on client
0
-0.197375
1
0
0
1,327
33,089,288
2015-10-12T20:02:00.000
0
0
1
0
0
ipython-notebook
0
33,089,553
0
2
0
false
0
0
The three tabs "Files", "Running" and "Clusters" should be correct. The "Running" tab has the active notebooks listed. Yes, perhaps it was an older tutorial. Do you have a link?
2
0
0
0
I have installed iPython Notebook as a component of my Anaconda installation. To become familiar with using iPython Notebook I started through an introductory tutorial and immediately ran into a discrepancy. When I open the notebook the tutorial shows I should have three tabs across the top labelled 'Notebooks', 'Running' and 'Clusters'. Instead I have 'Files', 'Running' and 'Clusters'. I cannot figure out how to switch the Files tab to the Notebooks tab...or is the Files tab correct and is the tutorial out-of-date? I would much prefer to just have the Notebooks view as listing all the file folders just gets in the way. Can someone tell me how to swap out the Files tab for the Notebooks tab?
Missing 'Notebooks' tab
0
0
1
0
0
135
33,089,288
2015-10-12T20:02:00.000
0
0
1
0
0
ipython-notebook
0
39,973,387
0
2
0
false
0
0
To start a notebook in notebook server 4.2.1, go to the Files tab; on the top right side you will see a New drop-down menu. Click on Python [root] and it should start a new notebook.
2
0
0
0
I have installed iPython Notebook as a component of my Anaconda installation. To become familiar with using iPython Notebook I started through an introductory tutorial and immediately ran into a discrepancy. When I open the notebook the tutorial shows I should have three tabs across the top labelled 'Notebooks', 'Running' and 'Clusters'. Instead I have 'Files', 'Running' and 'Clusters'. I cannot figure out how to switch the Files tab to the Notebooks tab...or is the Files tab correct and is the tutorial out-of-date? I would much prefer to just have the Notebooks view as listing all the file folders just gets in the way. Can someone tell me how to swap out the Files tab for the Notebooks tab?
Missing 'Notebooks' tab
0
0
1
0
0
135
33,126,905
2015-10-14T13:37:00.000
1
0
0
0
0
python,ios,objective-c,xcode,socketrocket
0
33,127,431
0
1
0
true
0
0
You do exactly that: you can pass around keys and make them show up in separate windows. From the way you've phrased your question, you appear new to streams/sockets. I'd recommend you first start with one socket and make a chat application, so you can get a feel for how to develop protocols which let you do that.
1
2
0
0
I would like to create multiple sockets between all users. How can I pass a key and ID so that the server is divided into separate windows? Thank you.
SocketRocket (iOS): How to identify which user is chatting with another?
0
1.2
1
0
1
141
33,137,569
2015-10-14T23:48:00.000
0
0
0
1
0
python
0
33,137,802
0
1
0
false
0
0
Lots of options. Which you should choose depends on many things; sorry for being vague, but your question does not give any details besides the name Chrome. However you start Chrome (e.g. by a button or menu of your window manager), maybe you can tweak that button or menu to start both programs. This is probably the simplest solution, but of course it bears the risk that a different event starts Chrome (e.g. clicking a link in a mail program), which would then bypass your program. Alternatively, you can write a daemon process which you start via login scripts at login time. That daemon would try to figure out when you start Chrome (e.g. by polling the process table) and react by starting your accompanying program. You can also configure Chrome so that it starts your program when it launches; how to do this is a whole new question with lots of answers, but the options include writing a plugin for Chrome which does what you want. Finally, you can have your program up and running all the time (started via login scripts, i.e. too early) but stay invisible and passive until Chrome starts up; this is basically the same as the daemon option, just that your program is itself the daemon. If you tell us more, we can probably give narrower and thus more fitting answers.
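The daemon approach (polling the process table for Chrome) could be sketched like this. The OS-specific part of listing processes (psutil, tasklist, /proc, etc.) is injected as a callable so the core loop stays testable:

```python
import time

def watch_for(target, list_processes, on_start, poll_interval=1.0, max_polls=None):
    """Poll the process table; call on_start() once each time `target`
    newly appears. `list_processes` is any callable returning the names
    of the currently running processes."""
    seen = False
    polls = 0
    while max_polls is None or polls < max_polls:
        running = target in list_processes()
        if running and not seen:
            on_start()       # e.g. launch the companion program here
            seen = True
        elif not running:
            seen = False     # re-arm for the next time Chrome starts
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_interval)
    return seen

# Demo with a fake process table instead of a real OS call.
snapshots = iter([["bash"], ["bash", "chrome"], ["bash", "chrome"]])
started = []
watch_for("chrome", lambda: next(snapshots), lambda: started.append(1),
          poll_interval=0, max_polls=3)
assert started == [1]  # companion launched exactly once
```

In a real daemon, max_polls would be None and on_start would call something like subprocess.Popen for the companion program; the process-listing callable shown here is a stand-in, not a real API.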
1
0
0
0
I have a program I want to open when I open Chrome. Instead of opening them separately, I want it to automatically start when Chrome starts. How would I write code to have the program attach itself to Chrome? I don't want the program to start on startup, just when Chrome starts. I know I can right click on the Chrome icon on my desktop and change the properties to open both programs but I want to know how to do the same thing with code.
Python code to open with another program?
0
0
1
0
0
61
33,161,725
2015-10-16T02:45:00.000
0
0
1
1
0
python,ubuntu
1
33,161,926
0
1
0
false
0
0
Try killing the process that has the Python file open, or reboot your Windows machine. That may resolve the problem.
1
0
0
0
I use a shared folder to share files between a Linux and a Windows machine. After I edit a Python file using Notepad++ on Windows, change the owner of the file to the Ubuntu user, and run the file, I want to edit the file again; but when I save it, I get the following error: 'Please check whether this file is opened in another program'. Is it related to a Linux daemon? If both machines were Windows, I could edit it as administrator, but I am new to Linux and do not know how to deal with this. Thanks for your help!
unable to save a file in a shared folder on a remote ubuntu machine using notepad++
0
0
1
0
0
59
33,170,016
2015-10-16T12:01:00.000
0
0
0
0
0
python,django,orm
0
67,953,078
0
5
0
false
1
0
I had to add import django, and then this worked.
1
26
0
0
I've used Django ORM for one of my web-app and I'm very much comfortable with it. Now I've a new requirement which needs database but nothing else that Django offers. I don't want to invest more time in learning one more ORM like sqlalchemy. I think I can still do from django.db import models and create models, but then without manage.py how would I do migration and syncing?
How to use Django 1.8.5 ORM without creating a django project?
0
0
1
0
0
15,755
33,172,090
2015-10-16T13:45:00.000
1
0
0
0
0
python,algorithm,performance,minimum-spanning-tree,delaunay
0
33,175,701
0
1
0
true
0
0
NB: this assumes we're working in 2-d I suspect that what you are doing now is feeding all point to point distances to the MST library. There are on the order of N^2 of these distances and the asymptotic runtime of Kruskal's algorithm on such an input is N^2 * log N. Most algorithms for Delaunay triangulation take N log N time. Once the triangulation has been computed only the edges in the triangulation need to be considered (since an MST is always a subset of the triangulation). There are O(N) such edges so the runtime of Kruskal's algorithm in scipy.sparse.csgraph should be N log N. So this brings you to an asymptotic time complexity of N log N. The reason that scipy.sparse.csgraph doesn't incorporate Delaunay triangulation is that the algorithm works on arbitrary input, not only Euclidean inputs. I'm not quite sure how much this will help you in practice but that's what it looks like asymptotically.
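The second stage (Kruskal on just the triangulation's O(N) edges) can be sketched in pure Python with a union-find. In a real run the edge list would come from scipy.spatial.Delaunay, each triangle contributing its three edges with Euclidean weights; the toy weights below are made up:

```python
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) tuples. Returns the MST edges.
    Runs in O(E log E), so O(N log N) on a Delaunay edge list."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # edge joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Toy edge list standing in for Delaunay output on 4 points.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 2, 3), (4, 0, 3)]
tree = kruskal_mst(4, edges)
assert len(tree) == 3                   # an MST on N points has N-1 edges
assert sum(w for w, _, _ in tree) == 4  # weights 1 + 1 + 2
```

In practice, feeding only the Delaunay edges (rather than all N^2 pairwise distances) into scipy.sparse.csgraph.minimum_spanning_tree as a sparse matrix should give the same speedup without hand-rolling Kruskal.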
1
0
1
0
I have a code that makes Minimum Spanning Trees of many sets of points (about 25000 data sets containing 40-10000 points in each set) and this is obviously taking a while. I am using the MST algorithm from scipy.sparse.csgraph. I have been told that the MST is a subset of the Delaunay Triangulation, so it was suggested I speed up my code by finding the DT first and finding the MST from that. Does anyone know how much difference this would make? Also, if this makes it quicker, why is it not part of the algorithm in the first place? If it is quicker to calculate the DT and then the MST, then why would scipy.sparse.csgraph.minimum_spanning_tree do something else instead? Please note: I am not a computer whizz, some people may say I should be using a different language but Python is the only one I know well enough to do this sort of thing, and please use simple language in your answers, no jargon please!
Speed up Python MST calculation using Delaunay Triangulation
0
1.2
1
0
0
659
33,183,963
2015-10-17T07:18:00.000
0
0
0
1
1
multithreading,python-2.7,sockets,google-app-engine,apple-push-notifications
1
33,184,298
0
1
0
false
1
0
Found the issue. I was calling start_background_thread with the argument set to function() (i.e. the result of calling it) instead of the function object itself. Once I passed it as function, it worked as expected.
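The distinction, in a generic sketch (the runner here is a stand-in for an API like start_background_thread, which expects a callable and invokes it itself):

```python
def run_in_background(target):
    """Stand-in for a thread-starting API: it takes a callable
    and calls it itself, later and elsewhere."""
    return target()

def task():
    return "ran"

# Wrong: task() calls the function immediately and passes its RESULT,
# so the runner would then try to call the string "ran".
# run_in_background(task())   # TypeError: 'str' object is not callable
# Right: pass the function object itself and let the runner call it.
assert run_in_background(task) == "ran"
assert callable(task) and not callable(task())
```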
1
0
0
0
Does Google App engine support multithreading? I am seeing conflicting reports on the web. Basically, I am working on a module that communicates with Apple Push Notification server (APNS). It uses a worker thread that constantly looks for new messages that need to be pushed to client devices in a pull queue. After it gets a message and sends it to APNS, I want it to start another thread that checks if APNS sends any error response in return. Basically, look for incoming bytes on same socket (it can only be error response per APNS spec). If nothing comes in 60s since the last message send time in the main thread, the error response thread terminates. While this "error response handler" thread is waiting for bytes from APNS (using a select on the socket), I want the main thread to continue looking at the pull queue and send any packets that come immediately. I tried starting a background thread to do the function of the error response handler. But this does not work as expected since the threads are executed serially. I.e control returns to the main thread only after the error response thread has finished its 60s wait. So, if there are messages that arrive during this time they sit in the queue. I tried threadsafe: false in .yaml My questions are: Is there any way to have true multithreading in GAE. I.e the main thread and the error response thread execute in parallel in the above example instead of forcing one to wait while other executes? If no, is there any solution to the problem I am trying to solve? I tried kick-starting a new instance to handle the error response (basically use a push queue), but I don't know how to pass the socket object the error response thread will have to listen to as a parameter to this push queue task (since dill is not supported to serialize socket objects)
Multi-threading on Google app engine
0
0
1
0
0
921