Dataset columns (name: dtype, value range or string-length range):

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
29,888,716
2015-04-27T06:32:00.000
0
1
0
1
python,ubuntu-12.04,dynamic-library
29,888,758
3
false
0
0
If I remember correctly, executing export ... via os.system only sets that shell variable within that call's own shell, so it is not available in subsequent os.system calls. You should set LD_LIBRARY_PATH in the shell before executing the Python script. By the way, also avoid setting relative paths…
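The scoping point can be demonstrated with the standard library; a minimal sketch (the path /opt/demo/lib is a made-up placeholder). Instead of exporting inside a child shell, modify the parent process environment, which child processes inherit:

```python
import os
import subprocess

# Each os.system/subprocess call runs in a fresh shell, so an `export`
# inside one call does not survive into the next one.

# Instead, extend the parent process environment; children inherit it.
env = os.environ.copy()
env["LD_LIBRARY_PATH"] = "/opt/demo/lib:" + env.get("LD_LIBRARY_PATH", "")

# The child sees the variable without any `export` in the command line.
out = subprocess.check_output(
    ["sh", "-c", "echo $LD_LIBRARY_PATH"], env=env, text=True
)
print(out.strip())  # starts with /opt/demo/lib
```

Alternatively, as the answer suggests, set the variable in the shell once before launching Python.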
1
2
0
When I call an executable in Python using os.system("./mydemo") in Ubuntu, it can't find the .so file (libmsc.so) needed by mydemo. I used os.system("export LD_LIBRARY_PATH=pwd:$LD_LIBRARY_PATH;"), but it still can't find libmsc.so. The libmsc.so is in the current directory, and shouldn't be global.
I use os.system to run an executable file, but it needs a .so file and can't find the library
0
0
0
926
29,888,975
2015-04-27T06:48:00.000
0
0
0
0
python,messagebox,odoo
42,901,463
2
false
1
0
You can do one thing: call a form view using a button (type="action"). Keep your button in the form view's footer, and perform your desired operation when it is clicked. NB: your form view's model should be different; it should not be the same as the current view's.
1
0
0
I would like to show a message box with 'text', 'Yes' and 'No' buttons. If the user clicks 'Yes', the method should continue its work. How can I add this alert message box in Odoo with Python? If you have any ideas, please share them. Thanks.
Yes/No message box in Odoo
0
0
0
3,798
29,890,204
2015-04-27T07:59:00.000
0
0
0
0
python-2.7,calendar,odoo,odoo-8
29,891,572
1
true
1
0
Does the field used for 'display_start' in your view have a default value on your model? I think you get this error because that field returns False. If it returned today's date, I think it would be OK.
1
0
0
I extended the class calendar_event to add a status with a statusbar. It works perfectly when I update an event, BUT when I try to create one I have a problem:

Traceback (most recent call last):
  File "/home/x/workspace/Odoo8/openerp/http.py", line 530, in _handle_exception
    return super(JsonRequest, self)._handle_exception(exception)
  File "/home/x/workspace/Odoo8/openerp/http.py", line 567, in dispatch
    result = self._call_function(**self.params)
  File "/home/x/workspace/Odoo8/openerp/http.py", line 303, in _call_function
    return checked_call(self.db, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/service/model.py", line 113, in wrapper
    return f(dbname, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/http.py", line 300, in checked_call
    return self.endpoint(*a, **kw)
  File "/home/x/workspace/Odoo8/openerp/http.py", line 796, in __call__
    return self.method(*args, **kw)
  File "/home/x/workspace/Odoo8/openerp/http.py", line 396, in response_wrap
    response = f(*args, **kw)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/web/controllers/main.py", line 949, in call_kw
    return self._call_kw(model, method, args, kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/web/controllers/main.py", line 941, in _call_kw
    return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/cap_addons/cap_CRM/models/calendar_event.py", line 66, in create
    res = super(calendar_event, self).create(cr, uid, vals, context=context)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/crm/calendar_event.py", line 36, in create
    res = super(calendar_event, self).create(cr, uid, vals, context=context)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/calendar/calendar.py", line 1646, in create
    res = super(calendar_event, self).create(cr, uid, vals, context=context)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/mail/mail_thread.py", line 377, in create
    thread_id = super(mail_thread, self).create(cr, uid, values, context=context)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 336, in old_api
    result = method(recs, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/models.py", line 4042, in create
    record = self.browse(self._create(old_vals))
  File "/home/x/workspace/Odoo8/openerp/api.py", line 239, in wrapper
    return new_api(self, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 462, in new_api
    result = method(self._model, cr, uid, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/models.py", line 4214, in _create
    recs.modified(self._fields)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 239, in wrapper
    return new_api(self, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/models.py", line 5608, in modified
    spec += self._fields[fname].modified(self)
  File "/home/x/workspace/Odoo8/openerp/fields.py", line 1414, in modified
    spec = super(_Relational, self).modified(records)
  File "/home/x/workspace/Odoo8/openerp/fields.py", line 908, in modified
    target = env[field.model_name].search([(path, 'in', records.ids)])
  File "/home/x/workspace/Odoo8/openerp/api.py", line 239, in wrapper
    return new_api(self, *args, **kwargs)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 462, in new_api
    result = method(self._model, cr, uid, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/calendar/calendar.py", line 1511, in search
    res = self.get_recurrent_ids(cr, uid, res, args, order=order, context=context)
  File "/home/x/workspace/Odoo8/openerp/api.py", line 241, in wrapper
    return old_api(self, *args, **kwargs)
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/calendar/calendar.py", line 1187, in get_recurrent_ids
    result_data.append(self.get_search_fields(ev, order_fields))
  File "/home/x/workspace/alpbureautique_openerp/openerp/addons/calendar/calendar.py", line 1155, in get_search_fields
    sort_fields['sort_start'] = browse_event['display_start'].replace(' ', '').replace('-', '')
AttributeError: 'bool' object has no attribute 'replace'

This error is raised when I call create() via super(), and it is linked with the field "participant_without_owner". My code:

class calendar_event(osv.Model):
    _inherit = 'calendar.event'
    _columns = {
        'cap_state': fields.selection(
            [('open', 'Confirmed'),
             ('cancel', 'Cancelled'),
             ('pending', 'Pending'),
             ('done', 'Held')],
            string='Status', track_visibility='onchange',
            help='The status is set to Confirmed, when a case is created.\n'
                 'When the call is over, the status is set to Held.\n'
                 'If the call is not applicable anymore, the status can be set to Cancelled.'),
        'participant_without_owner': fields.char(compute="_compute_participant", store=True),
    }
    _default = {
        'cap_state': 'open'
    }

    @api.depends('partner_ids', 'user_id')
    def _compute_participant(self):
        for record in self:
            if record.user_id.partner_id in record.partner_ids:
                participants = record.partner_ids - record.user_id.partner_id
                chaine = str("")
                for p in participants:
                    chaine = chaine + p.name + ", "
                record.participant_without_owner = chaine

    def done_event_in_tree(self, cr, uid, ids, context=None):
        res = self.write(cr, uid, ids, {'cap_state': 'done'}, context)
        return res

    def create(self, cr, uid, vals, context=None):
        import datetime as dt
        if context is None:
            context = {}
        date_appel = datetime.strptime(vals['start_datetime'], '%Y-%m-%d %H:%M:%S')
        print type(date_appel)
        print type(dt.datetime.today())
        if date_appel > dt.datetime.today():
            vals['cap_state'] = 'open'
        else:
            vals['cap_state'] = 'done'
        vals['participant_without_owner'] = ""
        print vals
        res = super(calendar_event, self).create(cr, uid, vals, context=context)
        return res

In Odoo v8.
Odoo calendar.event: 'bool' object has no attribute 'replace'
1.2
0
0
823
29,890,684
2015-04-27T08:25:00.000
2
0
0
0
python,django,flask,sqlalchemy
29,890,773
2
false
1
0
Firstly, don't do this; you're in for a world of pain. Use an API to pass data between the apps. But if you are resigned to doing it, there isn't actually any problem with migrations. Write all of them in one app only, either Django or Alembic, and run them there. Since the apps share the database tables, that's all there is to it.
1
6
0
I have two repositories, one written in Flask and one in Django. These projects share the database model, which is written in SQLAlchemy on the Flask side and in the Django ORM on the Django side. When I write a migration script in Flask with Alembic, how can the Django project migrate with that script? I have also thought about using Django with SQLAlchemy, but I can't find Django projects that use SQLAlchemy. Is that a bad idea? Thanks.
How to manage django and flask application sharing one database model?
0.197375
1
0
3,757
29,893,476
2015-04-27T10:42:00.000
1
0
0
0
linux,postgresql,python-3.x,psycopg2,amazon-redshift
29,915,754
1
true
0
0
Re-declaring a cursor doesn't create new connection while using psycopg2.
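The claim is easy to illustrate with the stdlib sqlite3 module, which follows the same DB-API pattern as psycopg2 (a sketch only; the table name is made up). Cursors are cheap objects created on top of a single connection, so per-session state such as temp tables survives across them:

```python
import sqlite3

# One connection; cursors are lightweight objects created on top of it.
conn = sqlite3.connect(":memory:")

cur1 = conn.cursor()
cur1.execute("CREATE TEMP TABLE results (a INTEGER)")
cur1.execute("INSERT INTO results VALUES (1)")

# "Re-declaring" a cursor does not open a new connection, so the temp
# table created through the first cursor is still visible.
cur2 = conn.cursor()
print(cur2.execute("SELECT a FROM results").fetchall())  # [(1,)]
print(cur1.connection is cur2.connection)                # True
```

The same holds for psycopg2: only connect() opens a session; cursor() reuses it.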
1
0
0
I am using the psycopg2 library with Python 3 on a Linux server to create some temporary tables on Redshift and query these tables to get results and write them to files on the server. Since my queries are long and take about 15 minutes to create all the temp tables that I ultimately pull data from, how do I ensure that my connection persists and I don't lose the temp tables that I later query? Right now I just do a cursor() before the execute(); is there a default timeout for these? I have noticed that whenever I do a select a,b from #results_table or select * from #results_table the query just freezes/hangs, but select top 35 from #results_table returns the results (select top 40 fails!). There are about 100 rows in #results_table, and I am not able to get them all. I did a ps aux and the process just stays in the S+ state. If I manually run the query on Redshift it finishes in seconds. Any ideas?
Does redeclaring a cursor create new connection while using psycopg2?
1.2
1
0
82
29,896,309
2015-04-27T12:50:00.000
1
0
1
0
python-3.x,python-2.7,proxy,anaconda,conda
40,988,680
3
false
0
0
There is a chance that the .condarc file is hidden, as it was in my case. I was using Linux Mint (Sarah) and couldn't find the file; it turned out it was hidden in my home directory, and once I opted to show hidden files I could find it.
2
10
0
I am trying to set up a proxy server in Anaconda because my firewall does not allow me to run online commands such as conda update. I see online that I should create a .condarc file that contains the proxy address. Unfortunately, I don't know how to create that file (is it a text file?) or where to put it (in which folder? in the Anaconda folder?). Any help appreciated. Thanks!
how to create a .condarc file for Anaconda?
0.066568
0
0
46,665
29,896,309
2015-04-27T12:50:00.000
0
0
1
0
python-3.x,python-2.7,proxy,anaconda,conda
69,901,282
3
false
0
0
To create the .condarc file, open the Anaconda Prompt and type: conda config. The file will appear in your user's home directory.
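For reference, the proxy section of a .condarc is plain YAML along these lines (host, port and credentials below are placeholders, not real values):

```yaml
# ~/.condarc
proxy_servers:
  http: http://user:password@proxy.example.com:8080
  https: http://user:password@proxy.example.com:8080
```

It is an ordinary text file; creating it by hand in the home directory works just as well as letting conda config create it.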
2
10
0
I am trying to set up a proxy server in Anaconda because my firewall does not allow me to run online commands such as conda update. I see online that I should create a .condarc file that contains the proxy address. Unfortunately, I don't know how to create that file (is it a text file?) or where to put it (in which folder? in the Anaconda folder?). Any help appreciated. Thanks!
how to create a .condarc file for Anaconda?
0
0
0
46,665
29,896,688
2015-04-27T13:05:00.000
0
0
0
0
memory,logging,textbox,tkinter,python-3.4
29,900,168
1
true
0
1
In general, no, there are no memory limitations with writing to a scrolled text widget. Internally, the text is stored in an efficient b-tree (efficient, unless all the data is a single line, since the b-tree leaves are lines). There might be a limit of some sort, but it would likely be in the millions of lines or so.
1
0
0
I am fairly new to Python and to GUI programming, and have been learning the Tkinter package to further my development. I have written a simple data logger that sends a command to a device via a serial or TCP connection, and then reads the response back, displaying it in a ScrolledText widget. In addition, I have a button that allows me to save the contents of the ScrolledText widget into a text file. I was testing my software by sending a looped command, with a 0.5 second delay between commands. The aim was to test the durability of the logger so it may later be deployed to automatically monitor and log the output of the devices it is connected to. After 30-40 minutes, I find that the program crashes on my Windows 7 system, and I suspect that it may be caused by a memory issue. The crash is a rather nondescript, "pythonw.exe has stopped working" message. When I monitor the process using Windows Task Manager, the memory used by pythonw.exe increases each time a response is read, and will eventually reach nearly 2Gb. It may be that I need to rethink my logic and have the software log to the disk in 'real time', while the ScrolledText box overwrites the oldest data after x-number of lines... However, for my own education, I was wondering if there was a better way to manage the memory used by ScrolledText? Thanks in advance!
Are there memory limitations when outputting to a ScrolledText widget?
1.2
0
0
72
29,900,058
2015-04-27T15:35:00.000
0
1
1
0
python,django,assembly,web,web-applications
29,905,544
1
true
0
0
The simplest solution is to put the two output files into a single zip file, send a Content-Type header indicating it's in zip format, and send the output as raw data. The alternative is for the client to request each file in turn with a separate HTTP request.
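The zip-bundling idea can be sketched with the standard library (file names and contents below are invented for the demo):

```python
import io
import zipfile

# Stand-ins for the two compiler outputs (listing and hex file).
listing = b"0000 86  ; example listing line\n"
hexfile = b":00000001FF\n"

# Bundle both outputs into a single zip held in memory, ready to be sent
# as one HTTP response with Content-Type: application/zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("program.lst", listing)
    zf.writestr("program.hex", hexfile)

data = buf.getvalue()
print(zipfile.ZipFile(io.BytesIO(data)).namelist())  # ['program.lst', 'program.hex']
```

The whole archive stays in memory, so nothing temporary has to be written to disk on the server.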
1
0
0
The compiler works by receiving a file that contains the macro assembler source code then, it generates two files, one is the list and the other one is the hex file. Everything works alright while offline but I want to make it online. In this case, the user will provide an MC68HC11 assembly source code file to my server (I already have the server up and running) and after this, my server will compile it using the Python script I wrote and then it will give the user an option to download the list and the hex file.
What's the best way to make an Assembly compiler in Python available online?
1.2
0
0
112
29,902,069
2015-04-27T17:15:00.000
0
1
0
1
python,message-queue,messaging,distributed,distributed-system
29,904,422
3
false
0
0
I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature technology and insanely reliable). ZMQ is an option if you want a single-hop messaging system instead of a brokered messaging system such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to utilize the message passing: if it is task dispatch, you can use Celery; if you need slightly lower-level access, use Kombu with the librabbitmq transport.
1
2
0
I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
1. I don't want to open any ports or punch holes in the firewall.
2. I also don't want to export/forward any internal ports outside my network.
3. A time delay of less than, say, 5 minutes is acceptable, but closer to real time would be nice, if possible.
1+2 → I need to use a third party accessible by all my nodes. From this it follows that I probably also want to use encryption.
Solutions considered:
- Email: setting up separate or shared free email accounts (e.g. Gmail) which each client connects to using IMAP/SMTP
- Google Docs: using a shared online spreadsheet (e.g. Google Docs) and some Python library for accessing/changing cells, with a polling mechanism
- XMPP, using connections to a third-party server
- IRC
- Renting a cheap $5 VPS and setting up a ZeroMQ publish-subscribe node (or any other protocol) forwarded over SSH, and having all nodes connect to it
Are there any other publicly (freely) accessible message queues available (or platforms that can be misused as a message queue)? I am aware of the option of setting up my own message broker (RabbitMQ, Mosquitto, etc.) and making it accessible to my nodes somehow (SSH forwarding to a third host, etc.). But my question is primarily about solutions that don't require me to do that, i.e. solutions that utilize already available/accessible third-party infrastructure. (I.e., are there any public message brokers I can use?)
Simple way for message passing in distributed system
0
0
1
1,745
29,903,119
2015-04-27T18:17:00.000
1
0
0
0
python,ssl,amazon-s3,boto,sslv3
30,356,292
2
true
0
0
At a high level, the client and the server negotiate which protocol version to use as part of the SSL/TLS handshake: the highest version supported by both the client and the server wins. If the client supports the latest and greatest, which is TLS 1.2, and the server supports it as well, they will agree on TLS 1.2. You can sniff the traffic using Wireshark or other similar packet-capture tools to determine whether the encrypted traffic is using SSLv3 or TLS.
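On the Python side you can also pin a protocol floor with the ssl module, so anything below TLS 1.2 is refused outright (a sketch; how boto wires up its own SSL context may differ):

```python
import ssl

# A default client context already disables SSLv2/SSLv3 in modern Python.
ctx = ssl.create_default_context()

# Be explicit anyway: refuse anything below TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
# After a real handshake, SSLSocket.version() reports the negotiated
# protocol as a string, e.g. 'TLSv1.2' or 'TLSv1.3'.
```

A socket wrapped with this context will simply fail the handshake against an SSLv3-only peer, which is another way to verify nothing is silently falling back.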
2
10
0
Amazon is sunsetting SSLv3 support soon, and I am trying to verify that boto is utilizing TLS. Is there a good way to verify this? Or is there a good test to show TLS utilization?
How to tell if boto is using SSLv3 or TLS?
1.2
0
1
3,258
29,903,119
2015-04-27T18:17:00.000
3
0
0
0
python,ssl,amazon-s3,boto,sslv3
30,388,720
2
false
0
0
As stated above, you can use a packet sniffer to determine if SSLv3 connections are being made:

sudo tcpdump -i eth0 'tcp[((tcp[12]>>4)*4)+9:2]=0x0300'

Replace 'eth0' with the correct interface. Then test that it's working by performing an SSLv3 connection with openssl:

openssl s_client -connect s3.amazonaws.com:443 -ssl3

That activity should be captured by tcpdump if the network interface is correct. Finally, test your app; if it's using SSLv3 that should be visible as well. You can also change the capture filter to see which protocol is being used:
TLSv1 - 0x0301
TLSv1.1 - 0x0302
TLSv1.2 - 0x0303
2
10
0
Amazon is sunsetting SSLv3 support soon, and I am trying to verify that boto is utilizing TLS. Is there a good way to verify this? Or is there a good test to show TLS utilization?
How to tell if boto is using SSLv3 or TLS?
0.291313
0
1
3,258
29,903,134
2015-04-27T18:18:00.000
-1
0
0
0
python,django,django-models,celery,django-celery
29,903,655
5
false
1
0
Is there any reason you wouldn't just calculate the boolean field in your business logic? I.e., when you receive a request related to that race, simply check the times and assess whether or not the race is active (for display, analysis, etc.). I'm assuming you won't have high load (one reason for pre-computing) in a case like this.
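The "calculate it in business logic" idea amounts to a derived property rather than a stored flag; a minimal sketch in plain Python (in Django this would live on the model as a property):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Race:
    start_time: datetime
    end_time: datetime

    @property
    def is_active(self) -> bool:
        # Derived on every read: no scheduled task needs to flip a stored flag.
        return self.start_time <= datetime.now() < self.end_time

now = datetime.now()
race = Race(now - timedelta(hours=1), now + timedelta(hours=1))
print(race.is_active)  # True: we are between start and end
```

Since the value is computed from start_time and end_time at read time, it can never drift out of sync the way a boolean column updated by a scheduler could.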
1
18
0
I am working on a Django project for a racing event, in which a table in the database has three fields:
1) a boolean field to know whether the race is active or not
2) the race start time
3) the race end time
When creating an object of it, the start_time and end_time are specified. How do I change the value of the boolean field to True when the race starts and to False when it ends? How do I schedule these activities?
Django: How to automatically change a field's value at the time mentioned in the same object?
-0.039979
0
0
13,168
29,903,381
2015-04-27T18:31:00.000
1
0
0
0
python,angularjs,mongodb,frameworks,mean-stack
36,828,297
1
false
1
0
To answer my own question about a year later: what I would do now is just run my Python script in a tiny web server that lives on the same server as my MEAN app. It wouldn't have any external ports exposed, and the MEAN app would just ping it for information and get JSON back. Just in case anyone is looking at this question down the road... I find this way easier than trying to integrate the Python script into the application itself.
1
1
0
I currently have a small webapp on AWS based on the MEAN stack (Mongo, Express, Angular, Node) but have a Python script I would like to execute on the front end. Is there a way to incorporate this? Basically, I have some data objects on my AngularJS frontend from a MongoDB that I would like to manipulate with Python, and I don't know how to get them into a Python scope, do something to them, and send them to a view. Is this possible? If so, how could it be done? Or is this totally against framework conventions and should never be done?
Webapp architecture: Putting python in a MEAN stack app
0.197375
0
0
1,124
29,903,889
2015-04-27T19:01:00.000
0
0
0
1
python,google-app-engine,console,cloud
29,906,509
1
false
1
0
Most likely, you are logged into Gmail under a different account. Go to Gmail, and click Sign Out. Then go to the developer console. It should ask you to log in or select from several accounts.
1
0
0
(I've been using appengine since 2009 and haven't needed support until now.) I've been added to a new project from the cloud console. When I try to upload the app, AppEngine launcher says "This application does not exist". Furthermore, in Cloud console, nothing appears under the appengine heading. At the same time, however, the old appengine.appspot.com DOES have the application listed. Any help?
Cannot upload to an app ID though I have the correct permissions
0
0
0
30
29,905,262
2015-04-27T20:24:00.000
2
1
1
1
python,python-3.x,debian,apt
29,975,297
2
false
0
0
I'm going to answer my own question, since I have found a solution to my problem. I had previously run apt-get upgrade on my system after setting my Debian release to jessie. This did not replace Python 3.2, though. What did replace it was running apt-get dist-upgrade; after that, apt-get autoremove removed Python 3.2. I doubt this could cause a problem, since I hadn't installed any external libraries.
1
1
0
I have Python 3.2 installed by default on my Raspbian Linux, but I want Python 3.4 (time.perf_counter, yield from, etc.). Installing Python 3.4 via apt-get is no problem, but when I type python3 in my shell I still get Python 3.2 (since /usr/bin/python3 still links to it). Should I change the symlink, or is there a better way to do this?
Upgrade Python 3.2 to Python 3.4 on linux
0.197375
0
0
5,897
29,906,736
2015-04-27T21:59:00.000
2
0
1
0
python,uml
29,906,785
1
true
0
0
I would say, do it only if they add some useful information for the readers of the said UML diagram. In general, any piece of documentation should only be written if it is useful to your users; otherwise it will only get in the way of finding other, more important things. In the case of docstrings, you should definitely write the __hash__ method's docstring, and maybe mention it in the class docstring.
1
0
0
I wonder whether or not to include special Python methods such as __str__ or __eq__, etc., in a UML diagram.
Should I include special methods for a Python class UML?
1.2
0
0
301
29,906,888
2015-04-27T22:10:00.000
1
1
0
0
php,python,node.js,ubuntu,nginx
29,909,359
1
false
1
0
The kind of setup you're describing is straightforward and not complicated. Nginx works fine as a reverse proxy and as a web server that handles serving static assets. For PHP, you just need to proxy to php-fpm (running on a TCP port or unix socket). For Python, you need a WSGI server (something like uwsgi or gunicorn, again on a TCP port or unix socket) to serve the Python app, and have Nginx proxy requests to it. For your Node.js app, just run the node server on a port like 8000 and have Nginx proxy requests to it. If you have a bunch of websites, each should have a server block matching a unique server name (i.e. mapped to a virtual host). The setup is as reliable as your backend services (php-fpm, the WSGI server, and the Node.js server). As long as those services are up and running (as daemon services), Nginx should have no problem proxying to them. I have used all 3 setups on one server and have never experienced problems with any of the above.
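A rough sketch of what such a setup looks like in Nginx configuration (server names, ports and socket paths below are placeholders):

```nginx
# PHP site: Nginx serves static files, hands .php requests to php-fpm.
server {
    server_name php.example.com;
    root /var/www/php-site;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

# Python site: proxy to a gunicorn/uwsgi worker listening locally.
server {
    server_name py.example.com;
    location / { proxy_pass http://127.0.0.1:8001; }
}

# Node.js site: proxy to the node server.
server {
    server_name node.example.com;
    location / { proxy_pass http://127.0.0.1:8000; }
}
```

Each server block matches one server_name, so all three apps can share port 80 on the same VPS.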
1
0
0
Is there a rational way to serve multiple websites via PHP:Nginx, Python:??? and Node.js on the same VPS? And would it be reliable? The sites are expected to be low in traffic. I currently have PHP running on Nginx on Ubuntu via DigitalOcean, and I would like to stick to Nginx for PHP and any major web server for Python.
Running node, PHP and Python on the same vps
0.197375
0
0
789
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
57,754,599
15
false
1
0
The best way to resolve this is, while creating your interpreter, to select your system's global Python path (/usr/local/bin/python3.7). Make sure that in the PyCharm shell, python --version shows 3.7; it shouldn't show 2.7.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
1
0
0
0
python,beautifulsoup
54,489,766
15
false
1
0
One possible reason: if you have more than one Python version installed and, let's say, you installed beautifulsoup4 using pip3, it will only be available for import when you run the python3 shell.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0.013333
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
3
0
0
0
python,beautifulsoup
53,284,293
15
false
1
0
Copy bs4 and beautifulsoup4-4.6.0.dist-info from C:\python\Lib\site-packages to your local project directory. It worked for me. Here, Python actually looks for the library in the local directory rather than in the place where the library was installed!
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0.039979
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
3
0
0
0
python,beautifulsoup
52,847,210
15
false
1
0
I experienced a variation of this problem and am posting for others' benefit. I named my Python example script bs4.py. Inside this script, whenever I tried to import bs4 using the command from bs4 import BeautifulSoup, an ImportError was thrown, but confusingly (for me) the import worked perfectly from an interactive shell within the same venv environment. After renaming the Python script, imports work as expected. The error was caused by Python trying to import the script itself from the local directory rather than using the system copy of bs4.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0.039979
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
72,153,534
15
false
1
0
For me it was a permissions issue. Directory "/usr/local/lib/python#.#/site-packages/bs4" was only 'rwx' by root and no other groups/users. Please check permissions on that directory.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
5
0
0
0
python,beautifulsoup
59,960,168
15
false
1
0
Make sure the directory from which you are running your script does not contain a filename called bs4.py.
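The shadowing effect described here is easy to reproduce with any module name; a sketch that shadows the stdlib json module in a throwaway temp directory, exactly as a local bs4.py shadows the real bs4 package:

```python
import os
import sys
import tempfile

# Create a directory containing json.py, which will shadow the stdlib
# module, just as a local bs4.py shadows the real bs4 package.
d = tempfile.mkdtemp()
with open(os.path.join(d, "json.py"), "w") as f:
    f.write("shadowed = True\n")

sys.path.insert(0, d)          # same effect as running a script from that dir
sys.modules.pop("json", None)  # forget any previously imported stdlib json

import json                    # resolves to the local file, not the stdlib

print(getattr(json, "shadowed", False))  # True: the real module is hidden
```

The directory a script runs from sits at the front of sys.path, which is why a stray bs4.py there wins over the installed package.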
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0.066568
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
69,835,809
15
false
1
0
I had the same problem. The issue was that the file in which I was importing BeautifulSoup from bs4 was in another folder. Just moving the file out of the inner folder made it work.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
61,982,622
15
false
1
0
There is no problem with the package; you just need to copy bs4 and beautifulsoup4-4.6.0.dist-info into your project directory.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
51,398,896
15
false
1
0
I was also facing this type of error in the beginning, even after installing all the required modules, including pip install bs4 (if you have installed this, there is no need to install beautifulsoup4/BeautifulSoup4 through pip or anywhere else; it comes with bs4 itself). Solution: just go to where Python is installed, e.g. C:\python\Lib\site-packages, then copy the bs4 and beautifulsoup4-4.6.0.dist-info folders and paste them into the project folder where you have saved your working project.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
0
0
0
0
python,beautifulsoup
70,643,924
15
false
1
0
For anyone else that might have the same issue as me: I tried all the above, but it still didn't work. The first issue was that I was using a virtual environment, so I needed to run the pip install in the PyCharm terminal instead of a command prompt to install it there. Secondly, I had typed import Beautifulsoup with the s not capitalized; I changed it to BeautifulSoup and it worked.
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
0
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
17
0
0
0
python,beautifulsoup
29,924,863
15
true
1
0
The issue was that I named my file HTMLParser.py, and that name is already used somewhere in the bs4 module. Thanks to everyone that helped!
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
1.2
0
1
46,868
29,907,405
2015-04-27T22:57:00.000
6
0
0
0
python,beautifulsoup
60,569,945
15
false
1
0
I found out, after numerous attempts to solve ImportError: cannot import name 'BeautifulSoup4', that the class is actually called BeautifulSoup, so the import should be: from bs4 import BeautifulSoup
12
18
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
Cannot import Beautiful Soup
1
0
1
46,868
29,907,747
2015-04-27T23:31:00.000
0
0
1
0
python,pandas,pickle
29,908,276
3
false
0
0
Not an easy problem. I assume you can do nothing about the fact that your web server calls your application multiple times. In that case I see two solutions: (1) Write TWO separate applications. The first application, A, loads the large file and then it just sits there, waiting for the other application to access the data. "A" provides access as required, so it's basically a sort of custom server. The second application, B, is the one that gets called multiple times by the web server. On each call, it extracts the necessary data from A using some form of interprocess communication. This ought to be relatively fast. The Python standard library offers some tools for interprocess communication (socket, http server) but they are rather low-level. Alternatives are almost certainly going to be operating-system dependent. (2) Perhaps you can pre-digest or pre-analyze the large file, writing out a more compact file that can be loaded quickly. A similar idea is suggested by tdelaney in his comment (some sort of database arrangement).
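A minimal sketch of suggestion (2): pre-digest the large file once into a compact pickle so that each later invocation only pays a fast load. The analysis step and file names here are made up for illustration.

```python
import os
import pickle
import tempfile

def predigest(raw_lines):
    # Stand-in for the slow analysis of the huge source file:
    # here we just build word counts from a list of tokens.
    return {word: raw_lines.count(word) for word in set(raw_lines)}

# Run the slow step once, offline, and save the small result.
digest = predigest(["a", "b", "a"])
path = os.path.join(tempfile.mkdtemp(), "digest.pkl")
with open(path, "wb") as f:
    pickle.dump(digest, f)

# Each later invocation loads the compact digest instead of re-parsing.
with open(path, "rb") as f:
    loaded = pickle.load(f)
assert loaded == {"a": 2, "b": 1}
```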
1
2
0
I have a python script that needs to read a huge file into a var and then search into it and perform other stuff, the problem is the web server calls this script multiple times and every time i am having a latency of around 8 seconds while the file loads. Is it possible to make the file persist in memory to have faster access to it atlater times ? I know i can make the script as a service using supervisor but i can't do that for this. Any other suggestions please. PS I am already using var = pickle.load(open(file))
python make huge file persist in memory
0
0
0
612
29,914,101
2015-04-28T08:22:00.000
4
0
1
0
python,file,class,module,convention
29,914,670
3
false
0
0
Python has the concepts of packages, modules and classes. If you put one class per module, the advantage of having modules is gone. If you have a huge class, it might be OK to put that class in a separate file, but then again, is it good to have big classes? No: they are hard to test and maintain. It is better to have more small classes with specific tasks, logically grouped into as few files as possible.
1
16
0
I've started programming in python 2 weeks ago. I'm making a separate file (module) for each class as I've done before in languages like Java or C#. But now, seeing tutorials and code from other people, I've realized that many people use the same files to define more than 1 class and the main function but I don't know if they do it like that because are just examples or because it's a python convention or something like that (to define and group many classes in the same files). So, in Python, one file for each class or many classes in the same files if they can be grouped by any particular feature? (like motor vehicles by one side and just vehicles by the other side). It's obvious that each one has his own style, but when I ask, I hope general answers or just the conventions, anyway, if someone wants to tell me his opinion about his own style and why, feel free to do it! ;)
Python: one single module (file .py) for each class?
0.26052
0
0
13,313
29,914,909
2015-04-28T09:02:00.000
0
0
0
0
windows,python-3.x,pyqt4
29,999,330
1
false
0
1
The QFileDialog.saveState() and QFileDialog.restoreState() methods can save and restore the current directory of the dialog box.
1
1
1
I have a pyQT4 application where the user is asked for a savefile (QFileDialog and all that...) One annoyance is it does not remember the last directory so multiple call always defaults to the working directory of the application (or whatever I set the 3rd argument to) If I set the option to not use the native file browser it remembers but "it is not native to windows" (note this doesn't bug me as I am a linux user, but others are not...) One option I was considering was saving the last working directory and populating the 3rd argument with that for every call but this seems quite brutal, especially as it seems matplotlib appears to be remembering the last directory (so it is possible) Any ideas? filename = QtGui.QFileDialog.getSaveFileName(self, "Save Plot to CSV", '', "CSV Data (*.csv)")
pyQT4 native file dialog remembering last directory
0
0
0
305
29,915,632
2015-04-28T09:34:00.000
2
0
0
1
python-2.7,google-app-engine,google-cloud-datastore,webapp2
29,918,649
1
false
1
0
Hash the entities and use the hash value as the key for your Entity.
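A sketch of this idea with hypothetical field names: derive a deterministic digest from the entity's content, so a repeated submission maps to the same key instead of a new row. In Datastore/NDB this digest would be used as the entity's key name.

```python
import hashlib

def entity_key(fields):
    """Derive a deterministic key from an entity's content.

    Two submissions with identical field values hash to the same
    digest, so the second write overwrites the first row instead of
    creating a duplicate.
    """
    # Sort the items so dict ordering never changes the digest.
    canonical = "|".join(f"{k}={fields[k]}" for k in sorted(fields))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = entity_key({"name": "Alice", "email": "a@example.com"})
b = entity_key({"email": "a@example.com", "name": "Alice"})
assert a == b        # field order does not matter
assert len(a) == 64  # hex digest of SHA-256
```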
1
1
0
I am trying to send the data to google app engine in python using Webapp2, But when I check the entries in the data in console I found duplicate entries which means except Id everything is same.I want to avoid those duplicate entries.Please suggest me if there is anyway to find the duplicate values to avoid.Thanks in advance.
Avoid duplicate entries in Datastore
0.379949
0
0
456
29,915,865
2015-04-28T09:45:00.000
1
0
0
0
python,django
29,916,039
1
true
1
0
With Django you can use any Python package, just as in any "normal" Python program. If you have a module that communicates with your server, you can use it; if not, you will have to write one on your own, possibly with socket programming.
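A minimal loopback sketch of the socket-programming route mentioned above. The command format and the tiny server standing in for the real device are made up; a Django view would run only the client half after the form POST, using the host and port the user entered.

```python
import socket
import threading

def serve_once(srv):
    # Stand-in for the remote process: accept one connection,
    # read a command and acknowledge it.
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(b"ACK:" + data)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0 = pick any free port
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# What a view could do after the form POST: connect and send a command.
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"slider=42")
reply = cli.recv(1024)
cli.close()
srv.close()
assert reply == b"ACK:slider=42"
```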
1
0
0
Is it possible to use Django for communication with some kind of server process? For example on my Django website I want to have form where I input connection details (host and port) and after connection I want to send some request or events to other server process (some simple action like slider moving or clicking a button). Can I use python socket programming for this or is there some easier way?
Socket communication with Django
1.2
0
1
409
29,924,590
2015-04-28T16:12:00.000
0
0
0
0
python,numpy,segmentation-fault,scipy,sparse-matrix
29,927,508
2
false
0
0
Resolved the issue; it turns out this is a memory problem. I ran the operation on another machine and received a MemoryError (whereas my machine gives a segfault), and when given more memory it turns into a "negative dimensions not allowed" error a long way into it, which I presume is an integer overflow in the calculation.
1
0
1
I have a CSR sparse matrix in scipy of size 444075 x 444075. I wish to multiply it by its transpose. However, when I do m * m.T it causes a segmentation fault 11 error. Is this a memory issue, and if so, is there a way to allocate more memory to the program? Is there a clever workaround/hack using subroutines other routines from scipy to multiply a different way?
Scipy - Multiplying large sparse matrix causes segmentation fault?
0
0
0
850
29,926,772
2015-04-28T18:05:00.000
0
0
0
0
python,csv,numpy
29,929,834
1
false
0
0
Assuming I have understood what you mean by headers (it would be easier to tell with a few complete lines, even if you had to scale it down from your actual file)... I would first read the irregular lines with normal Python, then, on the regular lines, use genfromtxt with skip_header and usecols (pass a tuple such as tuple(range(2, 102)); note that (i for i in range(2, 102)) would be a generator, not a tuple).
1
0
1
I have a set of data that is below some metadata. I'm looking to put the headers into a numpy array to be used later. However the first header needs to be ignored as that is the x data header, then the other columns are the y headers. How do i read this?
putting headers into an array, python
0
0
0
118
29,926,911
2015-04-28T18:13:00.000
1
0
0
0
python,django,celery,django-celery
29,927,235
1
false
1
0
My first suggestion is to psychologically separate Celery from Django when you start to think of the two. They can run in the same environment, but Celery is to asynchronous processes what Django is to HTTP requests. Also remember that Celery, unlike Django, requires another service to function: a message broker. So by using Celery you will increase your architectural requirements. To address your specific use case, you'll need a system to publish messages from each Celery task to a message broker, and your web client will need to subscribe to those messages. There's a lot involved here, but the short version is that you can use Redis as your Celery message broker as well as your pub/sub service to get messages back to the browser. You can then use e.g. django-redis-websockets to subscribe the browser to the task state messages in Redis
1
2
0
I currently have a typical Django structure set up for a project and one web application. The web application is set up so that a user inputs some information, and this information is taken as the input to run a Python program. This python program sometimes can take quite a while to finish (grabbing things from the web and doing some text mining scoring) - sometimes it can take multiple minutes to load. On the command line, this program would periodically display where it was in the process (it'd first say how many things it found to score against, then it'd say where in the number of things found it is in the scoring process), which was very useful. However, when I moved this over to a Django set up, I no longer have this capability (at least, not in the same way since now this is sent to log files). The way I set it up is that there is an input view, and then a results view. The results view takes the input and runs the Python program. It won't display the results until the entire program is run. So on the user side, the browser just sits there for sometimes minutes before the results are displayed. Obviously, this is not ideal. Does anyone know of the best way to bring status information on a task to Django? I've looked into Celery a little bit, but I think since I'm still a beginner in Django that I'm confusing myself with some of the documentation. For instance: even if the task is sent off asynchronously to a worker, how does the browser grab the current state of the program?? Also, consistent documentation seems to be lacking for celery on Django (I've seen people set up celery many different ways on their Django projects). I would appreciate any input here, I've been stuck on this for a while now.
Display progress of a long running Python task in Django
0.197375
0
0
1,205
29,928,477
2015-04-28T19:42:00.000
0
0
0
1
python,google-app-engine
29,928,760
1
true
1
0
Fixed by shutting down all instances (on all modules/versions just to be safe).
1
2
0
I am currently experiencing an issue in my GAE app with sending requests to non-default modules. Every request throws an error in the logs saying: Request attempted to contact a stopped backend. When I try to access the module directly through the browser, I get: The requested URL / was not found on this server. I attempted to stop and start the "backend" modules a few times to no avail. I also tried changing the default version for the module to a previous working version, but the requests from my front-end are still hitting the "new", non-default version. When I try to access a previous version of the module through the browser, it does work however. One final symptom: I am able to upload my non-default modules fine, but cannot upload my default front-end module. The process continually says "Checking if deployment succeeded...Will check again in 60 seconds.", even after rolling back the update. I Googled the error from the logs and found almost literally nothing. Anyone have any idea what's going on here, or how to fix it?
GAE module: "Request attempted to contact a stopped backend."
1.2
0
0
607
29,928,485
2015-04-28T19:43:00.000
0
0
0
0
python,django,pyqt,saas,pyqtgraph
29,947,088
2
true
1
1
Here is what I have sort of put together by pulling several threads online: Ruby on Rails seems to be more popular than Python at this moment. If you go with Python, Flask and Django are good frameworks. bokeh seems to be a good way of plotting to a browser. AFAIK, there is no way to take an existing PyQt or pyqtgraph application and have it run on the web. I am not sure how Twisted (Tornado, Node.js and friends) fits into web SaaS, but I see it referred to occasionally since it is asynchronous and event-driven. People often suggest using REST, but that seems slow to me. Not sure why...
2
4
0
Is there a way to take existing python pyqtgraph and pyqt application and have it display on a web page to implement software as a service? I suspect that there has to be a supporting web framework like Django in between, but I am not sure how this is done. Any hints links examples welcome.
Displaying pyqtgraph and pyqt widgets on web
1.2
0
0
1,640
29,928,485
2015-04-28T19:43:00.000
0
0
0
0
python,django,pyqt,saas,pyqtgraph
29,987,875
2
false
1
1
If all you need are static plots, then it should be straightforward to draw and export to an SVG file, then display the SVG in a webpage (or export to image, as svg rendering is not reliable in all browsers). If you need interactivity, then you're going to need a different solution and probably pyqtgraph is not the tool for this job. VisPy does have some early browser support but this has only been demonstrated with ipython notebook.
2
4
0
Is there a way to take existing python pyqtgraph and pyqt application and have it display on a web page to implement software as a service? I suspect that there has to be a supporting web framework like Django in between, but I am not sure how this is done. Any hints links examples welcome.
Displaying pyqtgraph and pyqt widgets on web
0
0
0
1,640
29,928,638
2015-04-28T19:52:00.000
6
1
0
0
python,twitter,tweepy,tweets,twitter-streaming-api
31,647,823
5
false
0
0
Here's a workaround to fetch the replies to a tweet made by "username" using the REST API with tweepy: 1) Find the tweet_id of the tweet for which the replies are to be fetched. 2) Using the API's search method, query q="@username" with since_id=tweet_id to retrieve all tweets since tweet_id. 3) The results whose in_reply_to_status_id matches tweet_id are the replies to the post.
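Step 3 can be sketched in plain Python. The dicts below are made-up stand-ins for the tweepy Status objects returned by the search in step 2; only the in_reply_to_status_id filtering is shown, and no real API call is made.

```python
def replies_to(tweets, tweet_id):
    """Keep only the tweets that are direct replies to tweet_id."""
    return [t for t in tweets if t.get("in_reply_to_status_id") == tweet_id]

# Hypothetical results of api.search(q="@username", since_id=100):
fetched = [
    {"id": 101, "text": "nice post!", "in_reply_to_status_id": 100},
    {"id": 102, "text": "@username hello", "in_reply_to_status_id": None},
    {"id": 103, "text": "agreed", "in_reply_to_status_id": 100},
]
assert [t["id"] for t in replies_to(fetched, 100)] == [101, 103]
```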
1
17
0
I am trying to go through tweets of a particular user and get all replies on that tweet. I found that the APIv1.1 of twitter does not directly support it. Is there a hack or a workaround on getting the replies for a particular tweet. I am using python Streaming API.
Getting tweet replies to a particular tweet from a particular user
1
0
1
18,865
29,930,029
2015-04-28T21:11:00.000
0
0
0
1
python,celery
30,126,084
1
false
0
0
The directory option in supervisord is where you specify your project directory path. Example: directory="/home/celery/pictures/myapp"
1
2
0
I was configuring supervisor daemon to be able to start/stop Celery. It did not work. After debuging back and forth I realized that the problem was that it did not change the working directory to the one mentioned in the directory option in supervisord.conf under program:celery configuration. Hopefully there is a workdir in Celery but I am curious - what is the purpose of the directory option then?
Supervisor and directory option
0
0
0
498
29,930,160
2015-04-28T21:21:00.000
0
0
0
0
python,scipy,scikit-learn,sparse-matrix,pca
29,930,257
3
false
0
0
Even if the input matrix is sparse, the output will not be a sparse matrix. If the system cannot hold a dense matrix, it will not be able to hold the results either.
1
2
1
I'm trying to decomposing signals in components (matrix factorization) in a large sparse matrix in Python using the sklearn library. I made use of scipy's scipy.sparse.csc_matrix to construct my matrix of data. However I'm unable to perform any analysis such as factor analysis or independent component analysis. The only thing I'm able to do is use truncatedSVD or scipy's scipy.sparse.linalg.svds and perform PCA. Does anyone know any work-arounds to doing ICA or FA on a sparse matrix in python? Any help would be much appreciated! Thanks.
Performing Decomposition on Sparse Matrices in Python
0
0
0
1,097
29,932,535
2015-04-29T00:56:00.000
0
1
0
0
python,audio,radio,gnuradio
30,705,038
2
false
0
0
You can simply use alsa or pulse audio to configure a "virtual" capture device, use that as the device name in the GNU Radio audio sink, FM modulate the audio signal and send the result to your RF hardware. That's pretty much a typical GNU Radio use case. You might want to have a look at the gr-analog examples :)
2
0
0
I'm trying to set GNU Radio as an audio processor for a little community radio in my town. I've already installed GNU Radio and it's working, but I'm not a sound engineer, so I need some help. This is my installation: MIC & Music Player ----> Mixer ----> GNU Radio ---> FM Emitter I need to know what filters and modules to set to improve sound in this workflow. Could any of you give me an outline of what GNU Radio modules to use?
GNU Radio on Community radios
0
0
0
310
29,932,535
2015-04-29T00:56:00.000
0
1
0
0
python,audio,radio,gnuradio
30,032,777
2
true
0
0
Since the aim is to improve sound quality in our little community radio, the right way to achieve it is to use audio processor software, as @KevinReid said. For the record, one possible solution is to use this schema with Jack: MIC & Music Player ----> Mixer ----> PC with audio processor ---> FM Emitter. The PC with audio processor is a GNU/Linux based PC with Jack as the sound server and Calf Jack Hub (calf.sourceforge.net) as the audio processor. Steps: Install jack, qjackctl and calf. Open qjackctl and start the Jack server. Open calf and set the filters you want (EQ, limiter, compressor, etc.). Set the connections so you take the input, send it through the filters, and put it into the output (i.e. headset connector or lineout). That's all. All this can be done by command line, at startup, etc., but this shows the main idea.
2
0
0
I'm trying to set GNU Radio as an audio processor for a little community radio in my town. I've already installed GNU Radio and it's working, but I'm not a sound engineer, so I need some help. This is my installation: MIC & Music Player ----> Mixer ----> GNU Radio ---> FM Emitter I need to know what filters and modules to set to improve sound in this workflow. Could any of you give me an outline of what GNU Radio modules to use?
GNU Radio on Community radios
1.2
0
0
310
29,933,219
2015-04-29T02:19:00.000
1
0
1
0
ipython,ipython-notebook
29,990,375
1
true
0
0
Well it took a bit of googling, but if it's any help to anyone you can prevent your Mac from going to sleep by opening terminal and typing pmset noidle, which will tell the power management utility to temporarily disable sleep.
1
2
0
I would like to run an IPython notebook script that will take several hours to complete (processing of ~40 movies). However the script pauses when the screen locks and then resumes execution when I login to my account.. Is there anyway to prevent the IPython script from pausing while the screen is locked?
How to prevent IPython notebook script pausing when the screen locks
1.2
0
0
1,398
29,933,581
2015-04-29T03:02:00.000
2
0
0
0
python,python-2.7,user-interface,button,tkinter
29,941,782
1
true
0
1
Is it possible to mount more than one image AND text on a Tkinter button? Strictly speaking, no, it is not possible. Or, is it possible to put a FRAME containing images + text on a button? Yes, though it probably won't work on OSX. It would probably take you less time to actually try it than to type in the question on stackoverflow. A little research goes a long way. You can also simply not use a button. Just use a frame or canvas, and set up bindings on the container and/or it's contents to react to a button click.
1
2
0
Is it possible to mount more than one image AND text on a Tkinter button? Or, is it possible to put a FRAME containing images + text on a button? I want a big button containing multiple widgets that, when taken together, fully describe the option the user will be able to choose. I appreciate any suggestions!!
Python Tkinter: multiple images and text on a BIG button?
1.2
0
0
917
29,934,451
2015-04-29T04:35:00.000
11
0
1
1
python,docker,pip,git-submodules
29,936,384
5
false
0
0
If you use GitHub with a private repo you will have to create an SSH deploy key and add the private key to your app folder for builds. pip install git+git://github.com/myuser/foo.git@v123 Alternatively, you can mount a pip-cache folder from the host into the container and do pip install from that folder. You'd have to keep the Python packages in the cache dir with your app. pip install --no-index --find-links=/my/pip-cache/ You can install Python packages into this pip-cache with the following command: pre pip 9.0.1: pip install --download pip-cache/ package1 package2 pip 9.0.1+ (thanks for the comment @James Hiew): pip download --dest pip-cache/ package1 package2
1
13
0
I have a fairly large private python package I just finished creating. I'd like to install it as part of my build process for an app in a Docker container (though this isn't so important). The package source is quite large, so ideally I'd avoid downloading/keeping the whole source. Right now, I've been just passing around the package source along with my app, but this is unwieldy and hopefully temporary. What's a better way? git submodule/subtree? I'm pretty new to this.
How to Install Private Python Package as Part of Build
1
0
0
7,187
29,935,200
2015-04-29T05:38:00.000
2
0
0
0
python,django,rest,django-rest-framework
29,935,296
1
false
1
0
Consider the upvote button to the left. When you click it, a request may be sent to stackoverflow.com/question/12345/upvote. It creates an "action resource" on the db, so later you can go to your user profile and check out the list of actions you took. You can consider doing the same thing for your application. It may be a better user experience to have immediate action taken like SO, or a "batch" request like with gmail's check boxes.
1
1
0
I have a question about REST design in general and specifically what the best way to implement a solution is in Django Rest Framework. Here it the situation: Say I have an app for keeping track of albums that the user likes. In the browser, the user sees a list of albums and each one has a check box next to it. Checking the box means you like the album. At the bottom of the page is a submit button. I want the submit button to initiate an AJAX request that sends tp my API endpoint a list of the ids (as in, the Djano model ids) of the albums that are liked by the user. My question is, is this a standard approach for doing this sort of thing (I am new to web stuff and REST in particular). In other words, is there a better way to handle the transmission of these data than to send an array of ids like this? As a corollary, if this is an alright approach, how does one implement this in Django Rest Framework in a way which is consistent with its intended methodology. I am keeping this question a little vague (not presenting any code for the album serializer, for example) intentionally because I am looking to learn some fundamentals, not to debug a particular piece of code. Thanks a lot in advance!
Django rest framework: correctly handle incoming array of model ids
0.379949
0
0
116
29,939,110
2015-04-29T09:08:00.000
0
0
0
0
python,angularjs,http,simplehttpserver
29,939,768
3
false
1
0
Well, I had a similar problem, but the difference is that I had Spring on the server side. You can capture the page-not-found exception in your server-side implementation and redirect to the default page [route] in your app. In Spring we have handlers for page-not-found exceptions; I guess they are available in Python too.
1
0
0
I have an angularjs app that uses Angular UI Router and the URL that are created have a # in them.. Eg. http://localhost:8081/#/login. I am using Python Simple HTTP server to run the app while developing. I need to remove the # from the URL. I know how to remove it by enabling HTML5 mode in angular. But that method has its problems and i want to remove the # from the server side. How can i do this using Python Simple HTTP Server?
Remove # from the URL in Python Simple HTTP Server
0
0
0
1,156
29,942,739
2015-04-29T11:44:00.000
0
0
0
0
python,image-processing,flood-fill
30,708,149
1
true
0
0
Hell yeah! The scipy.ndimage.measurements module helps: label() finds the connected regions, and find_objects() returns a bounding-box slice for each labeled region.
1
1
1
I have a binary multidimensional image. And I want to get some implementation of flood fill that will give me the next: List of connected regions (with adjacent pixels with value True). For each region I want to get its bounding box and list of pixel coordinates of all pixels from the interconnected region. Is something like that implemented?
Python: flood filling of multidimensional image
1.2
0
0
897
29,945,960
2015-04-29T13:58:00.000
1
0
0
0
python,file,client,server
29,946,100
3
false
0
0
If you can install software on the server, and the server allows HTTP connections, you can write your own simple HTTP server (Python has libraries for doing that). If not, the answer would depend on what services are available on the server.
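A minimal sketch of the "write your own simple HTTP server" suggestion, using only the standard library. This uses Python 3's http.server and urllib.request (in 2015-era Python 2 the equivalents were BaseHTTPServer and urllib2); the URL path and payload bytes are made up.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly the number of bytes the client declared.
        length = int(self.headers["Content-Length"])
        received["body"] = self.rfile.read(length)  # e.g. the pcap bytes
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), UploadHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST the file bytes to the server.
url = f"http://127.0.0.1:{server.server_address[1]}/upload"
payload = b"\xd4\xc3\xb2\xa1 fake pcap bytes"
urllib.request.urlopen(urllib.request.Request(url, data=payload)).read()
server.shutdown()
assert received["body"] == payload
```

A pickled dictionary could be sent the same way, either as a second POST or as a second part of the body with a known length prefix.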
1
2
0
I have a Python client behind a NAT and a python server with a public IP address. My job is to send a pcap file (the size of a few MB) from the client to a server, as well as a dictionary with some data. Is there any easy way of doing this without resorting to third-party libraries (e.g. twisted, tornado)? If not, what's the easiest alternative? I thought I could send the pcap file through http so that it would be easier to read it on the server side, and perhaps I could do the same with the dictionary by first pickling it. Would it be a good solution? (I have complete control on the server, where I can install whatever)
Python: send file to a server without third-party libraries?
0.066568
0
1
635
29,946,610
2015-04-29T14:25:00.000
1
1
0
0
python,python-3.x,file-upload,email-attachments,mime-message
29,947,447
1
true
0
0
I wasn't closing the stream after writing to the file. So the code couldn't find the file. However when the script finished, the stream would get closed by force and I would see the file in the folder.
1
0
0
I am using Python3 and mime.multipart to send an attachment. I was able to send attachment successfully. But today, I get an error saying file does not exist, when I can see in WINSCP that it clearly does. Is this a permissions issue? Also when I list the contents of the directory, the file DOES NOT show up. What is going on?
Python cannot find file to send as email attachment
1.2
0
1
115
29,948,415
2015-04-29T15:37:00.000
0
1
0
0
php,python
29,948,514
1
false
0
0
Not really possible like that. What you could do is use exec or passthru to run the Python script in a shell and return the results. The best option would be to do a separate call to the Python script and combine the results on the client.
1
0
0
I want to include the following python code into php. The name of the python file is hello.py. The contents are print "Hello World" I want to call this python script in php and show the same in webpage
including python code in php
0
0
0
50
29,949,697
2015-04-29T16:40:00.000
6
0
1
0
python,multithreading,queue
29,950,005
2
false
0
0
The simplest fix would probably be to let the producer consume its old value, if there is one, before putting. You can use Queue.get_nowait() for this. Stylistically, I'm not too keen on using a Queue for something only ever intended to hold one object. A normal Lock + a reference variable will make it more obvious what the code does.
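The drain-then-put idea can be sketched with Queue.get_nowait(). This assumes a single producer; with several producers, the drop-and-put pair would need a lock around it to stay atomic.

```python
import queue

latest = queue.Queue(maxsize=1)

def publish(value):
    """Producer: drop any stale value, then put the fresh one.

    get_nowait() empties the one-slot queue if it is full, so
    put_nowait() never blocks waiting for a slow consumer.
    """
    try:
        latest.get_nowait()  # discard the unconsumed old value
    except queue.Empty:
        pass
    latest.put_nowait(value)

publish("reading #1")
publish("reading #2")  # overwrites #1 instead of blocking
assert latest.get() == "reading #2"
```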
1
4
0
I have two threads: a producer and a consumer. The producer periodically acquires information and provides it to the consumer. The consumer wants only the most current copy of the information and will check it aperiodically and maybe a long intervals. It seems like the simplest mechanism to facilitate this communication would be to create a Queue.Queue(maxsize=1). However, if the producer acquires new information before the old information is consumed it will block until the consumer uses the out of date information first. Is there a way for the producer to overwrite the old information? Is there a better threadsafe mechanism to accomplish this?
multithreading - overwrite old value in queue?
1
0
0
2,704
29,950,300
2015-04-29T17:13:00.000
33
0
1
1
python,virtualenv,virtualenvwrapper,pyenv
46,344,026
2
false
0
0
Short version: virtualenv allows you to create local (per-directory), independent python installations by cloning from existing ones pyenv allows you to install (build from source) different versions of Python alongside each other; you can then clone them with virtualenv or use pyenv to select which one to run at any given time Longer version: Virtualenv allows you to create a custom Python installation e.g. in a subdirectory of your project. This is done by cloning from an existing Python installation somewhere on your system (some files are copied, some are reused/shared to save space). Each of your projects can thus have their own python (or even several) under their respective virtualenv. It is perfectly fine for some/all virtualenvs to even have the same version of python (e.g. 3.8.5) without conflict - they live separately and don't know about each other. If you want to use any of those pythons from shell, you have to activate it (by running a script which will temporarily modify your PATH to ensure that that virtualenv's bin/ directory comes first). From that point, calling python (or pip etc.) will invoke that virtualenv's version until you deactivate it (which restores the PATH). It is also possible to call into a virtualenv Python using its absolute path - this can be useful e.g. when invoking Python from a script. Pyenv operates on a wider scale than virtualenv. It is used to install (build from source) arbitrary versions of Python (it holds a register of available versions). By default, they're all installed alongside each other under ~/.pyenv, so they're "more global" than virtualenv. Then, it allows you to configure which version of Python to run when you use the python command (without virtualenv). This can be done at a global level or, separately, per directory (by placing a .python-version file in a directory). 
It's done by prepending pyenv's shim python script to your PATH (permanently, unlike in virtualenv) which then decides which "real" python to invoke. You can even configure pyenv to call into one of your virtualenv pythons (by using the pyenv-virtualenv plugin). You can also duplicate Python versions (by giving them different names) and let them diverge. Using pyenv can be a convenient way of installing Python for subsequent virtualenv use.
1
209
0
I recently learned how to use virtualenv and virtualenvwrapper in my workflow but I've seen pyenv mentioned in a few guides but I can't seem to get an understanding of what pyenv is and how it is different/similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complimentary tool? If the latter what does it do differently and how do the two (and virtualenvwrapper if applicable) work together?
What is the relationship between virtualenv and pyenv?
1
0
0
58,878
29,952,053
2015-04-29T18:47:00.000
2
0
0
0
python,django,heroku
29,954,119
4
false
1
0
Heroku only uses the libraries in your requirements.txt file. Whatever version of Django is specified there is what it will install.
1
0
0
I'm starting to work on my first-ever Django/Heroku project – I'm working on a friend's web app that's already partially coded. It's built in Django 1.6. There's no virtualenv, and when I clone it and try to run it in Django 1.8 it crashes and burns. The app itself is currently online and functional, and when I run the app locally in Django 1.6, no issues. How is Heroku handling dependencies like this? Does it install dependencies on its server by reading the requirements.txt?
How does Heroku handle Django dependencies?
0.099668
0
0
86
29,952,975
2015-04-29T19:35:00.000
1
0
0
1
python,celery
29,971,218
1
true
0
0
Short answer is no and that is by design. Long answer is yes, you can always send in unneeded information to the worker whose sole purpose is to identify the caller and the caller's state.
1
0
0
Is it possible to lookup what code called (delay(), apply_async(), apply(), etc.) a task from within the task's code? Strings would be fine. Ideally, I would like to get the caller's stack trace.
Celery, find the task caller from the task?
1.2
0
0
249
29,953,112
2015-04-29T19:45:00.000
0
0
0
0
python,django
29,953,247
3
false
1
0
You can use SQLAlchemy. You can add the main script as part of the Django project. You can serve content using Django REST Framework; the database connection happens when you access it through manage.py, and the table is available to anyone with permissions. tl;dr: you need a connection to said database.
1
0
0
I have a directory which contains a Django project with a models.py in which I have defined some models. I have another directory somewhere else and which has a Python script. In this script I would like to import one of the models "Foo" from models.py. With the table "Foo", I want to create entries, update, get, etc... How do I go about doing this?
How to use a model from a Django project in another Python script
0
0
0
1,229
29,956,042
2015-04-29T22:55:00.000
8
0
1
0
python,django,git,heroku,virtualenv
29,956,174
2
true
0
0
On top of what Othman said, virtualenvs are simply not portable. Trying to move it will break it, and it's easier to create a new environment than to fix it. So, even on deployment platforms that do use virtual environments, checking them in to git is not going to work.
1
5
0
Tutorials online are telling me to put venv in my .gitignore file. Why wouldn't I want to push my virtual environment so that I or other developers could easily pull the project to their locals and conveniently have all dependencies?
Why shouldn't I push a virtualenv to Heroku?
1.2
0
0
4,110
29,961,898
2015-04-30T07:44:00.000
4
0
0
0
python,flask,flask-login,anonymous-users
30,008,742
2
false
1
0
You can use a AnonymousUserMixin subclass if you like, but you need to add some logic to it so that you can associate each anonymous user with a cart stored in your database. This is what you can do: When a new user connects to your application you assign a randomly generated unique id. You can write this random id to the user session (if you want the cart to be dropped when the user closes the browser window) or to a long-lived cookie (if you want the cart to be remembered even after closing the browser). You can use Flask-Login for managing the session/cookie actually, you don't have to treat unknown users as anonymous, as soon as you assign an id to them you can treat them as logged in users. How do you know if an anonymous user is known or new? When the user connects you check if the session or cookie exist, and look for the id there. If an id is found, then you can locate the cart for the user. If you use a subclass of AnonymousUserMixin, then you can add the id as a member variable, so that you can do current_user.id even for anonymous users. You can have this logic in the Flask-Login user loader callback. When the user is ready to pay you convert the anonymous user to a registered user, preserving the id. If you have a cron job that routinely cleans up old/abandoned anonymous carts from the database, you may find that an old anonymous user connects and provides a user id that does not have a cart in the database (because the cart was deemed stale and deleted). You can handle this by creating a brand new cart for the same id, and you can even notify the user that the contents of the cart expired and were removed. Hope this helps!
2
7
0
My app implements a shopping cart in which anonymous users can fill their cart with products. User Login is required only before payment. How can this be implemented? The main challenge is that flask must keep track of the user (even if anonymous) and their orders. My current approach is to leverage the AnonymousUserMixin object that is assigned to current_user. The assumption is that current_user will not change throughout the session. However, I noticed that a new AnonymousUserMixin object is assigned to current_user, for example, upon every browser page refresh. Notice that this does not happen if a user is authenticated. Any suggestions on how to circumvent this?
How to track anonymous users with Flask
0.379949
0
0
2,806
29,961,898
2015-04-30T07:44:00.000
9
0
0
0
python,flask,flask-login,anonymous-users
29,962,315
2
false
1
0
There is no need for a custom AnonymousUserMixin, you can keep the shopping cart data in session: anonymous user adds something to hist cart -> update his session with the cart data the user wants to check out -> redirect him to login page logged in user is back at the check out -> take his cart data out of the session and do whatever you would do if he was logged in the whole time
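A back-of-the-envelope sketch of that flow, with a plain dict standing in for Flask's session object (the function names here are made up for illustration; in a real view you would read and write flask.session the same way):

```python
# A plain dict stands in for flask.session in this sketch.
session = {}

def add_to_cart(session, item):
    # Anonymous user adds something: keep the cart data in the session.
    session.setdefault('cart', []).append(item)

def checkout(session, user):
    # At checkout the now-logged-in user inherits whatever the anonymous
    # visitor put in the session-backed cart.
    cart = session.pop('cart', [])
    return {'user': user, 'items': cart}

add_to_cart(session, 'book')
add_to_cart(session, 'pen')
order = checkout(session, 'alice')
print(order)  # {'user': 'alice', 'items': ['book', 'pen']}
```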
2
7
0
My app implements a shopping cart in which anonymous users can fill their cart with products. User Login is required only before payment. How can this be implemented? The main challenge is that flask must keep track of the user (even if anonymous) and their orders. My current approach is to leverage the AnonymousUserMixin object that is assigned to current_user. The assumption is that current_user will not change throughout the session. However, I noticed that a new AnonymousUserMixin object is assigned to current_user, for example, upon every browser page refresh. Notice that this does not happen if a user is authenticated. Any suggestions on how to circumvent this?
How to track anonymous users with Flask
1
0
0
2,806
29,962,386
2015-04-30T08:10:00.000
2
0
0
0
python-2.7,peewee
29,968,980
1
true
0
0
What do you mean "active"? Active as in being "checked out" by a thread, or active as in "has a connection to the database"? For the first, you would just do pooled_db._in_use. For the second, it's a little trickier -- basically it will be the combination of pooled_db._in_use (a dict) and pooled_db._connections (a heap).
1
1
0
I am using Python's peewee ORM with MYSQL. I want to list the active connections for the PooledDatabase. Is there any way to list..?
Counting Active connections in peewee ORM
1.2
1
0
283
29,963,686
2015-04-30T09:15:00.000
-2
0
1
0
python,list,slice,notation,shallow-copy
49,813,254
5
false
0
0
There are really three things here, not two kinds of copy. 1) list1 = list[:] and list1 = copy.copy(list) both make a shallow copy: a new outer list whose elements are shared with the original. copy.deepcopy(list) makes a deep copy, which also copies any nested mutable objects. 2) list2 = list is not a copy at all; it just binds a second name to the same list, so modifying list2 also affects list. Example: list = ['abc', 123, 'xyz']; list1 = list[:]; list2 = list; list1.append(456); list2.append('789'); now list is ['abc', 123, 'xyz', '789'], list1 is ['abc', 123, 'xyz', 456], and list2 is ['abc', 123, 'xyz', '789'] (the very same object as list). For a flat list of immutable items like this one, someList[:] is all you need to get an independent copy.
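A runnable illustration of the distinction, using only the standard library:

```python
import copy

original = ['abc', 123, ['nested']]

alias = original                 # no copy at all: a second name for the same list
shallow = original[:]            # shallow copy: new outer list, shared inner objects
deep = copy.deepcopy(original)   # deep copy: nested objects are copied too

original.append(456)

# The alias sees the append; both copies do not.
print(alias)    # ['abc', 123, ['nested'], 456]
print(shallow)  # ['abc', 123, ['nested']]
print(deep)     # ['abc', 123, ['nested']]

# Mutating a *nested* object leaks into the shallow copy but not the deep one.
original[2].append('x')
print(shallow[2])  # ['nested', 'x']
print(deep[2])     # ['nested']
```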
1
32
0
I sometimes get across this way of printing or returning a list - someList[:]. I don't see why people use it, as it returns the full list. Why not simply write someList, without the [:] part?
What does this notation do for lists in Python: "someList[:]"?
-0.07983
0
0
3,330
29,967,612
2015-04-30T12:19:00.000
0
0
0
0
python,websocket
29,967,827
1
false
1
0
It depends on your software design: if you decide the logic from WebSocketServer.py and CoreApplication.py belongs together, merge it. If not, you need some kind of inter-process communication (IPC). You can use websockets for this IPC, but I would suggest you use something simpler. For example, you can use JSON-RPC over TCP or Unix domain sockets to send control messages from CoreApplication.py to WebSocketServer.py.
1
2
0
I am trying to understand how to use websockets correctly and seem to be missing some fundamental part of the puzzle. Say I have a website with 3 different pages: newsfeed1.html newsfeed2.html newsfeed3.html When a user goes to one of those pages they get a feed specific to the page, i.e. newsfeed1.html = sport, newsfeed2.html = world news etc. There is a CoreApplication.py that does all the handling of getting data and parsing etc. Then there is a WebSocketServer.py, using say Autobahn. All the examples I have looked at, and that is a lot, only seem to react to a message from the client (browser) within the WebSocketServer.py, think chat echo examples. So a client browser sends a chat message and it is echoed back or broadcast to all connected client browsers. What I am trying to figure out is, given the following two components: CoreApplication.py WebSocketServer.py How to best make CoreApplication.py communicate with WebSocketServer.py for the purpose of sending messages to connected users. Normally, should CoreApplication.py simply send command messages to the WebSocketServer.py as a client? For example like this: CoreApplication.py -> Connects to WebSocketServer.py as a normal client -> sends a JSON command message (like broadcast message X to all users || send message Y to specific remote client) -> WebSocketServer.py determines how to process the incoming message dependent on which client is connected to which feed and sends to the according remote client browsers. OR, should CoreApplication.py connect programmatically with WebSocketServer.py? As I cannot seem to find any examples of being able to do this, for example with Autobahn or other simple web sockets, as once the WebSocketServer is instantiated it seems to run in a loop and does not accept external sendMessage requests? So to sum up the question: What is the best practice?
To simply make CoreApplication.py interact with WebSocketServer.py as a client (with special command data) or for CoreApplication.py to use an already running instance of WebSocketServer.py (both of which are on the same machine) through some more direct method to directly sendMessages without having to make a full websocket connection first to the WebSocketServer.py server?
WebSockets best practice for connecting an external application to push data
0
0
1
704
29,968,829
2015-04-30T13:18:00.000
0
0
1
0
server,ipython
42,136,317
4
false
0
0
If it is a text file, create an empty file, edit it and then copy/paste the content. You can do this to bypass the 25 MB constraint.
1
15
0
I did setup an ipython server for other people (in my company department) to have a chance to learn and work with python. Now I wonder how people can load their own local data into the ipython notebook session on the remote server. Is there any way to do this?
Load local data into IPython notebook server
0
0
0
52,276
29,969,746
2015-04-30T13:57:00.000
2
0
0
0
python,gtk,pygtk
29,976,673
1
true
0
1
GTK+ 2 has a type GtkComboBoxEntry that always has the entry box you don't want (and handles some model-related things). Your Glade file uses a GtkComboBoxEntry. Change it to GtkComboBox and, assuming everything else is set up properly (your model is correct and you have a GtkCellRendererText), you should be good to go. (Thanks to gregier in irc.gimp.net/#gtk+ for some information.)
1
0
0
I want the user to be able to select the items on a PyGTK ComboBox, while not being able to write in the combo. He/She should be allowed just to select one of the items. So I can't use set_active(False), for it will disable the combo. How can I do this ?
How do I disable edition of a PyGtk Combobox?
1.2
0
0
667
29,971,186
2015-04-30T15:01:00.000
0
1
0
0
python,windows,excel,python-2.7,xlrd
30,945,220
3
false
0
0
I had the same problem, and I think you have to look at the Excel cells so that they are not picked up as empty; that's how I solved it.
1
4
0
I'm stumped on this one, please help me oh wise stack exchangers... I have a function that uses xlrd to read in an .xls file which is a file that my company puts out every few months. The file is always in the same format, just with updated data. I haven't had issues reading in the .xls files in the past but the newest release .xls file is not being read in and is producing this error: *** formula/tFunc unknown FuncID:186 Things I've tried: I compared the new .xls file with the old to see if I could spot any differences. None that I could find. I deleted all of the macros that were contained in the file (older versions also had macros) Updated xlrd to version 0.9.3 but get the same error These files are originally .xlsm files. I open them and save them as .xls files so that xlrd can read them in. This worked just fine on previous releases of the file. After upgrading to xlrd 0.9.3 which supposedly supports .xlsx, I tried saving the .xlsm file as.xlsx and tried to read it in but got an error with a blank error message Useful Info: Python 2.7 xlrd 0.9.3 Windows 7 (not sure if this matters but...) My guess is that there is some sort of formula in the new file that xlrd doesn't know how to read. Does anybody know what FuncID: 186 is? Edit: Still no clue on where to go with this. Anybody out there run into this? I tried searching up FuncID 186 to see if it's an excel function but to no avail...
Python XLRD Error : formula/tFunc unknown FuncID:186
0
1
0
1,917
29,972,537
2015-04-30T16:01:00.000
1
0
1
0
python,multiprocessing,pickle,serialization
30,238,617
2
false
0
0
I'm the author of dill and pathos. Multiprocessing should use cPickle by default, so you shouldn't have to do anything. If your object doesn't serialize, you have two options: go to a fork of multiprocessing or some other parallel backend, or add methods to your class (i.e. __reduce__ methods) that register how to serialize the object.
1
6
0
Using Python 2.7, I am passing many large objects across processes using a manager derived from multiprocessing.managers.BaseManager and I would like to use cPickle as the serializer to save time; how can this be done? I see that the BaseManager initializer takes a serializer argument, but the only options appear to be pickle and xmlrpclib.
How do I change the serializer that my multiprocessing.managers.BaseManager subclass uses to cPickle?
0.099668
0
0
330
29,973,700
2015-04-30T17:01:00.000
3
1
1
0
python,unit-testing,testing,functional-testing
29,974,588
1
true
0
0
Arguably, the best solution is to split your function into two pieces. One piece to do the parsing, the second to do the writing. Then, you can unit test each piece separately. For the first piece, give it a file and verify the parsing function returns the proper string, and/or throws the proper exception. For the second, give it a string to write, and then verify that the file was written and that the contents match your string. It's tempting to skip the test that writes the data, since it's reasonable to assume that the python open and write functions work. However, the unit testing also proves that the data you pass in is the data that gets written (ie: you don't have a bug that causes a fixed string to be written to the file). If refactoring the code isn't something you can do, you can still test the function. Feed it the data to be parsed, then open the file that it wrote to and compare the result to what you expect it to be.
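A sketch of that refactoring, using hypothetical parse_line/write_result functions and plain unittest (ddt would work the same way); the names and the key=value format are invented for illustration:

```python
import os
import tempfile
import unittest

# Hypothetical split of the original function: parse_line() does the
# parsing and returns data, write_result() does nothing but the disk I/O.
def parse_line(line):
    key, sep, value = line.partition('=')
    if not sep:
        raise ValueError('not a key=value line: %r' % line)
    return key.strip(), value.strip()

def write_result(path, text):
    with open(path, 'w') as handle:
        handle.write(text)

class ParserWriterTests(unittest.TestCase):
    def test_parse_returns_expected_pair(self):
        self.assertEqual(parse_line('name = demo'), ('name', 'demo'))

    def test_parse_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            parse_line('no value here')

    def test_write_puts_data_on_disk(self):
        # Verify the data passed in is exactly the data written.
        path = os.path.join(tempfile.mkdtemp(), 'out.txt')
        write_result(path, 'hello')
        with open(path) as handle:
            self.assertEqual(handle.read(), 'hello')

if __name__ == '__main__':
    unittest.main(exit=False)
```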
1
1
0
I have written a piece of software in Python that does a lot of parsing and a lot of writing files to disk. I am starting to write unit tests, but have no idea how to unit test a function that just writes some data to disk, and returns nothing. I am familiar with unittest and ddt. Any advice or even a link to a resource where I could learn more would be appreciated.
How to write tests for writers / parsers? (Python)
1.2
0
0
324
29,974,933
2015-04-30T18:17:00.000
2
0
1
0
python,multiprocessing,python-multiprocessing
31,207,441
2
false
0
0
From the Python multiprocessing documentation. start() Start the process’s activity. This must be called at most once per process object. It arranges for the object’s run() method to be invoked in a separate process. A Process object can be run only once. If you need to re-run the same routine (the target parameter) you must instantiate a new Process object. This is due to the fact that Process objects are encapsulating unique OS instances. Tip: do not play with the internals of Python Thread and Process objects, you might get seriously hurt. The _popen attribute you're seeing is a sentinel the Process object uses to understand when the child OS process is gone. It's literally a pipe and it's used to block the join() call until the process does not terminate.
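A minimal sketch of the "new object per run" pattern; the work function is a made-up stand-in for the real target:

```python
from multiprocessing import Process

def work(n):
    # Trivial stand-in target; a real worker would do actual computation.
    _ = n * n

def run_once(n):
    # A Process object is single-use: after start() + join() it cannot be
    # started again, so "restarting" means instantiating a fresh Process.
    proc = Process(target=work, args=(n,))
    proc.start()
    proc.join()
    return proc.exitcode

if __name__ == '__main__':
    print(run_once(3), run_once(4))  # 0 0 -- two independent, successful runs
```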
1
2
0
I am using python multiprocessing Process. Why can't I start or restart a process after it exits and I do a join. The process is gone, but in the instantiated class _popen is not set to None after the process dies and I do a join. If I try and start again it tells me I can't start a process twice.
Python multiprocessing restart after join
0.197375
0
0
1,647
29,976,283
2015-04-30T19:34:00.000
1
1
1
0
python
29,976,372
2
false
0
0
No, you can't. You can only have one interpreter instance running all of the code in a single program at a time. The exception is if you break out some of your functionality into a totally separate program that communicates with the other part of your code through some form of inter-process communication; then you can run those totally separate programs however you like. But for code that is not separated like that, it's not possible. It will probably be more straightforward to adapt the entirety of your code to work with PyPy one way or another, instead of trying to break out bits and pieces. If that's absolutely not possible, then PyPy probably can't help you.
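A rough sketch of the separate-program approach using the standard library's subprocess module. The worker code and the run_hot_loop name are invented for illustration; in practice the worker would be its own file and `interpreter` would be the path to a PyPy binary instead of sys.executable:

```python
import subprocess
import sys

# The hot code lives in a separate, self-contained program (given inline
# via -c here; in practice it would be its own script run under PyPy).
WORKER = "import sys; print(sum(i * i for i in range(int(sys.argv[1]))))"

def run_hot_loop(n, interpreter=sys.executable):
    # Swap `interpreter` for a pypy binary path to speed up just this part,
    # communicating with it through stdout (a very simple form of IPC).
    out = subprocess.check_output([interpreter, "-c", WORKER, str(n)])
    return int(out.strip())

print(run_hot_loop(10))  # 285
```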
2
0
0
I need pypy to speed up my python code. While the pypy doesn't support a lot of modules I need (e.g. GNU Radio). Could I use pypy to speed up parts of my python code. Could I use pypy to only speed up some of my python files? How can I do that?
Could pypy speed up parts of my python code?
0.099668
0
0
259
29,976,283
2015-04-30T19:34:00.000
0
1
1
0
python
30,705,404
2
false
0
0
No, you can't. And GNU Radio does the signal processing and scheduling in C++, so that's totally opaque to your python interpreter. Also, GNU Radio itself is highly optimized and contains specialized implementations for most of the CPU intense tasks for SSE, SSE4, and some NEON. I need pypy to speed up my python code. I doubt that. If your program runs too slow, it's probably nothing your Python interpreter can solve -- you might have to look into what could take so much time, and solve this on a higher level.
2
0
0
I need pypy to speed up my python code. While the pypy doesn't support a lot of modules I need (e.g. GNU Radio). Could I use pypy to speed up parts of my python code. Could I use pypy to only speed up some of my python files? How can I do that?
Could pypy speed up parts of my python code?
0
0
0
259
29,976,769
2015-04-30T20:00:00.000
1
1
0
0
python,unit-testing,nosetests,coverage.py
29,985,334
2
true
0
0
The simplest way to direct coverage.py's focus is to use the source option, usually source=. to indicate that you only want to measure code in the current working tree.
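A minimal .coveragerc along those lines; the omit entry is optional and shown only as an example:

```ini
# .coveragerc -- place it in the directory you run nosetests from
[run]
# measure only code in the current working tree,
# so third-party imports are excluded automatically
source = .
# optionally also skip the tests themselves
omit =
    tests/*
```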
1
2
0
I am using nosetests --with-coverage to test and see code coverage of my unit tests. The class that I test has many external dependencies and I mock all of them in my unit test. When I run nosetests --with-coverage, it shows a really long list of all the imports (including something I don't even know where it is being used). I learned that I can use .coveragerc for configuration purposes but it seems like I cannot find a helpful instruction on the web. My questions are.. 1) In which directory do I need to add .coveragerc? How do I specify the directories in .coveragerc? My tests are in a folder called "tests".. /project_folder /project_folder/tests 2)It is going to be a pretty long list if I were to add each in omit= ... What is the best way to only show the class that I am testing with the unittest in the coverage report? It would be nice if I could get some beginner level code examples for .coveragerc. Thanks.
how to omit imports using .coveragerc in coverage.py?
1.2
0
0
1,938
29,977,495
2015-04-30T20:46:00.000
-1
0
0
0
python,django,django-forms
68,890,513
6
false
1
0
python manage.py <your_script_name>. Here the script name is your Python file; there is no need to include the .py extension.
1
8
0
Perhaps there is a different way of going about this problem, but I am fairly new to using Django. I have written a custom Python script and would like to run a function or .py file when a user presses a "submit" button on the webpage. How can I get a parameter to be passed into a Python function from a submit button using Django?
Running Python script in Django from submit
-0.033321
0
0
37,522
29,978,859
2015-04-30T22:26:00.000
0
0
0
0
python,django,deployment,django-models
29,980,168
2
false
1
0
syncdb has been changed to migrate since Django 1.7. From the documentation (new in Django 1.7): Migrations are Django’s way of propagating changes you make to your models (adding a field, deleting a model, etc.) into your database schema. They’re designed to be mostly automatic, but you’ll need to know when to make migrations, when to run them, and the common problems you might run into. Prior to version 1.7, Django only supported adding new models to the database; it was not possible to alter or remove existing models via the syncdb command (the predecessor to migrate). The difference between migrate and makemigrations is nicely stated by doru.
1
1
0
So I've managed to create a site with django 1.8 and I'm ready to deploy. I have several new models and I'm using django-allauth which has it's own models. I've also managed to make changes to settings with a config file to use different databases for production and development as well as turning off debug when it's production, etc. I've uploaded my project folder to the production server and getting ready to add new lines in http.conf for Apache but I can't wrap my head around the database. Do I run syncdb or makemigrations on the production server? How does django know to use the production db and not the development db. My settings look for hostname from socket to decide if it's production or development. What should I do next?
Django syncdb on first deployment
0
0
0
300
29,980,999
2015-05-01T02:52:00.000
1
0
1
0
python,ide
29,981,080
1
true
0
0
Try using PyCharm or Visual Studio Community Edition.
1
0
0
I'm not really sure how to phrase this. When you use a function and an open parenthesis in IDLE a little yellow window pops up and contains the function documentation. For example, if you type int( in IDLE and wait, the window will say "() int(x = 0) -> int or long" which is the function description. Are there any other IDEs that support this?
Is it possible to see the function description windows that pop up in IDLE when you use a function in any other IDEs?
1.2
0
0
53
29,988,504
2015-05-01T14:13:00.000
2
0
1
1
python,python-2.7,python-3.x,anaconda
29,989,510
2
false
0
0
No, it won't. You can have multiple Python installs; as long as you don't remove your system Python or manually change the default, you will be fine.
1
2
0
I'm planning to install Anaconda3 for Python 3.4. Since by default, Mac OS X uses Python2, will install Anaconda3 change the default Python version for the system? I don't want that to happen since Python3 can break backwards compatibility. If it does change the default Python version, how can I avoid that?
Will installing Anaconda3 change Mac OS X default Python version to 3.4?
0.197375
0
0
2,423
29,988,923
2015-05-01T14:37:00.000
2
0
0
1
python-2.7,osx-mavericks,homebrew,icu,mapnik
30,045,778
2
false
0
0
This was Homebrew's fault and should be fixed after brew update && brew upgrade mapnik; sorry!
1
0
0
What is the best way to downgrade icu4c from 55.1 to 54.1 on Mac OS X Mavericks. I tried brew switch icu4c 54.1 and failed. Reason to switch back to 54.1 I am trying to setup and use Mapnik. I was able to install Mapnik from homebrew - brew install mapnik But, I get the following error when I try to import mapnik in python Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/mapnik/__init__.py", line 69, in <module> from _mapnik import * ImportError: dlopen(/usr/local/lib/python2.7/site-packages/mapnik/_mapnik.so, 2): Library not loaded: /usr/local/opt/icu4c/lib/libicuuc.54.dylib Referenced from: /usr/local/Cellar/mapnik/2.2.0_5/lib/libmapnik.dylib Reason: image not found Python version on my Mac - Python 2.7.5 (default, Mar 9 2014, 22:15:05) Is switching icu4c back to 54.1 way to go? Or, Am I missing something? Thanks for the help in advance.
icu4c + Mapnik - I want to switch icu4c from 55.1 to 54.1 to get Mapnik to work
0.197375
0
0
824
29,990,202
2015-05-01T15:46:00.000
0
0
0
0
python,wxpython,objectlistview-python
30,030,742
2
false
0
1
For the future: what I did was to show the color as a background of the row of the list.
1
0
0
I am using wxPython ObjectListView and it is very easy to use. Now I need to render a wx.Color as a column but I haven't found a way in the documentation. Basically I have list of items each of them have the following attributes: name, surname and hair color. Hair color is a RGB color and I would like to show it as a column in my ObjectListView. Is there a way to do it ? Many thanks
ObjectListView wxPython: how to show a wx.Color
0
0
0
360
29,991,871
2015-05-01T17:30:00.000
3
0
0
0
python,django,web
66,293,699
2
false
1
0
When you have no database in your project, a simple python manage.py migrate will create a new db.sqlite3 file.
1
7
0
I had a duplicate sqlite database. I tried deleting the duplicate but instead deleted both. Is there a way I can generate a new database? The data was not especially important.
Generating new SQLite database django
0.291313
1
0
5,560
29,994,341
2015-05-01T20:09:00.000
0
0
1
0
python,pygame,zooming,pixel
29,994,418
1
false
0
1
First make the canvas and triangle at a large size and zoom out; then when you zoom in, it won't get pixelated.
1
0
0
So, I'm working with pygame in python 2.7.9 and I'm trying to make some kind of zoom, to view details of a fractal. I can easily draw a Sierpinski triangle using polygons, my idea is to zoom a area of the triangle and see details of the depth without pixelation. So far, I can zoom in but the surface gets pixelated Is there anyway to do this? Thanks.
Zoom in and Zoom Out Pygame
0
0
0
1,289
29,999,068
2015-05-02T06:07:00.000
0
0
0
0
image-processing,overlay,python-imaging-library
30,013,456
2
false
0
0
Actually I decided to make the heatmap using matplotlib and numpy, instead of creating superimposed images.
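For reference, the averaging itself is a one-liner with NumPy; the tiny 4x4 masks below are made-up stand-ins for the 1000 full-size images (1 where a pixel is covered, 0 where it is empty):

```python
import numpy as np

# Hypothetical stand-in for the real images: same-shape binary masks.
masks = [np.zeros((4, 4)) for _ in range(3)]
masks[0][1:3, 1:3] = 1
masks[1][1:3, 1:3] = 1
masks[2][0:2, 0:2] = 1

# Stacking and averaging gives the superposition in one step:
# 0.0 = white (covered in no image), 1.0 = black (covered in all).
heat = np.mean(np.stack(masks), axis=0)
print(heat[1, 1])  # 1.0 (covered in all three masks)
print(heat[0, 0])  # ~0.33 (covered in one of three)
```

The resulting `heat` array can then be rendered with matplotlib's imshow (e.g. with a grayscale colormap) to get the white-to-black overlay.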
1
3
0
I need to create an overlay/composite of 1000 images, all of the same size on top of each other. They all will have same level of transparency, such that any pixel which has no image in any of the 1000 images will be white while a pixel which has an image in each of the 1000 images will be black in the final overlay of 1000. I am new to the domain and have been trying to figure out the best way of doing it. I realized one can use blend or paste(unsure about the diff between them at this point), but they take just 2 images as arguments. How can i superimpose all 1000?
Python imaging library overlay of 1000 images
0
0
0
212
30,000,275
2015-05-02T08:48:00.000
0
0
1
0
python,python-3.x,tuples,extract
30,000,302
3
true
0
1
app = pyglet.window.Window(*WIN)
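The * operator unpacks the tuple into positional arguments. A self-contained sketch with a stand-in function, since pyglet may not be installed:

```python
# Hypothetical function standing in for pyglet.window.Window's signature.
def window(width, height, title, resizable, style):
    return (width, height, title, resizable, style)

WIN = 640, 360, "Example", False, 'tool'  # x, y, title, resizable, style

# * unpacks the tuple: equivalent to window(WIN[0], WIN[1], ..., WIN[4]).
app = window(*WIN)
print(app)  # (640, 360, 'Example', False, 'tool')
```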
1
0
0
I have a tuple of constants: WIN = 640, 360, "Example", False, 'tool' # x, y, title, resizable, style And now to use it I have to type in this: app = pyglet.window.Window(WIN[0], WIN[1], WIN[2], WIN[3], WIN[4]) Is there a method to split tuple into separate elements like this: app = pyglet.window.Window(WIN.extract()) ?
Python: method to extract tuple
1.2
0
0
123
30,000,294
2015-05-02T08:51:00.000
3
0
1
1
python,ubuntu,python-3.x,numpy
30,000,588
2
true
0
0
You're close: to update a package for Python 3.4 you have to sudo pip3 install -U numpy (note the pip3). It might be the case that you still have to install pip3 first (not sure if it is bundled). For that you would sudo apt-get install python3-pip. If you have a recent Ubuntu (I believe starting from 14.10), then you already have Python 3.4 when you first boot up Ubuntu, as well as pip3 pre-installed. You can also install through Ubuntu's package manager, but if you want an OS-independent way, you can just use pip3.
1
2
0
I have python 2.7, and I installed beside it python 3.4, but python 3.4 has not numpy package. When I use sudo pip install -U numpy, it install it in python2.7 location. How can I install numpy for python 3.4 in a machine that has already python 2.7?
install numpy for python 3.4 in ubuntu
1.2
0
0
5,372
30,002,869
2015-05-02T13:29:00.000
2
0
0
0
python,python-2.7,tkinter
30,014,871
1
true
0
1
You cannot change the window border, but you can remove it entirely and draw your own border. You'll also be responsible for adding the ability to move and resize the window. Search this site for "overrideredirect" for lots of questions and answers related to this feature. As for third party themes: no, there aren't any.
1
1
0
My question is simple, apart from the three themes pre-installed in Tkinter are there any other themes I can get ? Something like 3rd party themes ? If not, how can I change the button or other widgets looks (manually changing the form,etc..)? Also I would like to know if it is possible to change the outside window look, like the look of the [ _ ] [ [] ] [X] buttons of the window, if not is there a way to remove them so I can put my own buttons in the frame? Any code example or link is welcome.
3rd party Tkinter themes and modifying outside window buttons
1.2
0
0
174
30,005,704
2015-05-02T18:04:00.000
3
0
1
0
python,performance,debugging,pycharm
30,017,850
1
false
1
0
The way to get fast debugging sessions in PyCharm (Professional edition) is to use remote debugging, similar to pdb.set_trace(). In the Run/Debug Configurations dialogue, create a Remote Debug configuration. The dialogue contains the instructions, which I will repeat here for completeness' sake: Add pycharm-debug.egg from the PyCharm installation to the Python path. Add the following import statement: import pydevd Add the following command to connect to the debug server: pydevd.settrace('localhost', port=$SERVER_PORT, stdoutToServer=True, stderrToServer=True) These strings can be copied from the dialogue and pasted into the source. When you choose the host and server port in the dialogue, the pasteable strings will update themselves. Of course, they can also be concatenated to a oneliner using ;. After the settrace() method has been run, the breakpoints you have set in PyCharm will become active. So, where's the file pycharm-debug.egg? Somewhere in the near vicinity of the PyCharm binary. In OS X, you will find the file within the Contents/debug-eggs directory within PyCharm.app. I assume other PyCharm distributions have a similar directory. If you're running the application using a virtualenv, install the egg using easy_install. If you prefer to run your application within PyCharm (stdout in the PyCharm console is useful), then add the path to the egg file to the Project Interpreter's file paths.
1
5
0
I am using PyCharm to debug a moderately complex Pyramid web application with a lot of dependencies. When I run the application inside PyCharm using PyCharm's Debug run, application startup slows down significantly. This kills the normal web application workflow of edit, save, refresh. The slowdown is significant, making the application restart take tens of seconds instead of fractions of a second. Is there a way to speed up PyCharm debug runs? A similar slowdown does not occur when using hardcoded import pdb ; pdb.set_trace() style breakpoints in normal Run mode.
PyCharm integrated debugger slows down application
0.53705
0
0
2,048
30,009,595
2015-05-03T02:24:00.000
1
1
0
1
python,pushbullet
30,035,305
1
false
1
0
Could you shut down the script on your VPS, copy the cache files over to the Pi and run the script there? Then do the reverse when you want to move it back to the VPS. You could possibly run the script on both systems, but then you'd need to synchronize between them, which sounds like a lot of unnecessary work. For instance, you could run a third server that you can check with to see if you've sent something yet, but you would need to be able to lock items on there so you don't have a race condition between your two scripts.
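A rough sketch of the locking idea. The "third server" here is stood in for by a shared directory (a network mount or small key-value store in practice; the path is a placeholder), and the atomic claim uses O_EXCL create-if-absent, so only one host can win a given item:

```python
import os
import tempfile
import uuid

# Stand-in for the shared third server: a directory both hosts can reach.
# (Hypothetical path; O_EXCL atomicity over NFS has caveats worth checking.)
CLAIM_DIR = os.path.join(tempfile.gettempdir(), "pushbullet-claims")


def try_claim(item_id):
    """Atomically claim an item; return False if the other host got it first."""
    os.makedirs(CLAIM_DIR, exist_ok=True)
    path = os.path.join(CLAIM_DIR, item_id)
    try:
        # O_EXCL makes create-if-absent atomic, so only one host can win.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.close(fd)
    return True


item = "factory-image-" + uuid.uuid4().hex  # e.g. a hash of the watched page
first = try_claim(item)   # this host wins the claim...
second = try_claim(item)  # ...and the "other" host backs off
print(first, second)      # True False
```

Each script would call try_claim() with a stable id for the update it detected, and only send the Pushbullet notification if the claim succeeds; that also solves the stale-cache double-notification when the VPS comes back.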
1
1
0
I have a Python script that manages Pushbullet channels for Nexus Android device factory images. It runs on my VPS (cron job that runs every 10 minutes), but my provider has warned that there may be intermittent downtime over the next several days. The VPS is running Ubuntu Server 15.04. I have a Raspberry Pi that's always on, and I can easily modify the script so that it works independently on both the VPS and the Pi. I would like the primary functionality to exist on the VPS, but I want to fall back to the Pi if the VPS goes down. What would be the best way to facilitate this handoff between the two systems (in both directions)? The Pi is running Raspbian Wheezy. Additionally, the script uses urlwatch to actually watch the requisite page for updates. It keeps a cache file on the local system for each URL. If the Pi takes over and determines a change is made, it will notify the Pushbullet channel(s) as it should. When the VPS comes back up and takes over, it will have the old cache files and will notify the channel(s) again, which I want to avoid. So: How can I properly run the script on whichever system happens to be up at the moment (preferring the VPS), and how can I manage the urlwatch caches between the two systems?
Python script fallback to second server
0.197375
0
0
104
30,010,620
2015-05-03T05:40:00.000
5
0
1
0
python,google-app-engine,pycharm,wtforms
30,010,724
2
false
1
0
Try deleting the libraries from your project, then re-importing them. Also, I assume you've done this, but make sure the libraries are actually installed and present in a reachable location that is properly mapped.
2
5
0
I refactored my webapp and now my IDE, PyCharm, marks some imports in red. Why? from wtforms import Form, TextField, validators, SelectField. My IDE marks Form, TextField and SelectField in red, saying they cannot be imported ("Unresolved reference"). What should I do if I need those classes in my project?
Why are my imports no longer working?
0.462117
0
0
299
30,010,620
2015-05-03T05:40:00.000
3
0
1
0
python,google-app-engine,pycharm,wtforms
30,010,736
2
true
1
0
You need to install it in your environment (according to the comments, you didn't). Please try the following: Settings -> Project: MyProjectName -> Project Interpreter. Then click on the green plus and choose your packages.
2
5
0
I refactored my webapp and now my IDE, PyCharm, marks some imports in red. Why? from wtforms import Form, TextField, validators, SelectField. My IDE marks Form, TextField and SelectField in red, saying they cannot be imported ("Unresolved reference"). What should I do if I need those classes in my project?
Why are my imports no longer working?
1.2
0
0
299
30,011,715
2015-05-03T08:25:00.000
2
0
1
0
python,python-3.x,python-asyncio
30,012,377
1
true
0
0
You can run aiozmq and aiohttp on a quamash loop in the main thread; it just works. If you really need to run different loops in different threads (I don't understand why, but you may have the desire), you should instantiate those loops manually. I doubt that an event loop policy will be useful: it's convenient sometimes, but you have a different case.
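A minimal sketch of the manual-instantiation approach. The thread names referencing aiohttp/aiozmq are just labels; the point is that each thread creates and owns its loop explicitly, with no custom policy involved:

```python
import asyncio
import threading


def run_loop_in_thread(name, results):
    # Each thread gets its own, manually created event loop; no policy needed.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    async def work():
        await asyncio.sleep(0.01)
        results[name] = threading.current_thread().name

    loop.run_until_complete(work())
    loop.close()


results = {}
threads = [
    threading.Thread(target=run_loop_in_thread, args=(n, results), name=n)
    for n in ("aiohttp-loop", "aiozmq-loop")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # each coroutine ran on the loop owned by its own thread
```

Communication between loops then has to go through thread-safe primitives (e.g. loop.call_soon_threadsafe from another thread), not direct awaits across loops.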
1
2
0
Let's say in one application I would like to run quamash, aiozmq and aiohttp in different threads. It should be possible to write a custom event loop policy that would return an appropriate loop, e.g. based on the name of a thread. However, it's not clear what types of communication are supported between event loops from different providers.
Does asyncio allow co-existence of multiple implementations?
1.2
0
0
108
30,011,988
2015-05-03T08:59:00.000
2
0
1
0
python,emacs,elisp,introspection
30,012,064
1
false
0
0
I usually run a Python interactive session in Emacs while coding Python. With your source file open, press C-c C-z, where you can specify which Python interpreter you want to use. This will split the window in two, with the source code on the left and the output/Python shell on the right. While your source file is selected you can press C-c C-c to evaluate the buffer and have its output displayed in the window on the right. After evaluating your code you can switch to the shell with C-x o, and while in the shell you can use dir() to list all variables, modules, functions, etc., as if you had done the whole thing in an interactive Python shell. In your case you can run dir(a) or type(a) in the shell on the right.
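For concreteness, this is the kind of introspection you'd type in the shell window on the right after evaluating the buffer with C-c C-c:

```python
a = [1, 2, 3]

print(type(a))  # the object's type

# Its public methods (underscore-prefixed dunder names filtered out):
methods = [n for n in dir(a) if not n.startswith("_")]
print(methods)

# dir() with no argument lists the names defined in the current session:
print("a" in dir())
```

The inspect module goes further (signatures, source, docstrings), but for a quick "what is this object?" check, dir() and type() in the inferior shell cover most of what PyCharm's variable pane shows.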
1
2
0
I remember seeing PyCharm and RStudio having a way of showing what data/modules/functions are in the current interactive session. E.g. when you do a = [1, 2, 3], there will be a small part of the window giving information on the object a. Is there any way we can have something similar in Emacs (perhaps making use of the python inspect module)?
Writing elisp program for automatic introspection of Python Objects
0.379949
0
0
126
30,014,267
2015-05-03T13:17:00.000
2
0
1
0
python,keyboard-shortcuts,ipython,key-bindings
30,243,280
1
true
0
0
Reposting as an answer: You can set InteractiveShell.readline_parse_and_bind in a config file (default value is here). It takes a list of readline config commands. IPython also uses .inputrc, but things in that config value take precedence, and Ctrl+L is in there by default.
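A sketch of what the profile config might look like for the bindings in the question. This targets the readline-based IPython of that era; the option name may not exist in newer, prompt_toolkit-based versions:

```python
# ~/.ipython/profile_default/ipython_config.py  (sketch; readline-era IPython)
c = get_config()

# Replace IPython's default readline bindings entirely, reclaiming
# Ctrl+L from clear-screen and matching the ~/.inputrc bindings:
c.InteractiveShell.readline_parse_and_bind = [
    '"\C-j": backward-char',
    '"\C-l": forward-char',
    'tab: complete',
]
```

Because this list overrides rather than extends the defaults, leaving an entry out (like the default Ctrl+L clear-screen binding) effectively deactivates it.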
1
3
0
Is it possible to define custom keybindings and/or deactivate the default ones for the IPython terminal interface? For example, I have bound C-j and C-l to move left and right in my terminal by configuring the ~/.inputrc file (Linux), but when using the IPython terminal, C-l is captured first and actually clears the screen. So my questions are: 1) Is it possible to deactivate some IPython keybindings? 2) Even better, is it possible to fully configure the IPython keymap?
Custom Keybindings for IPython terminal
1.2
0
0
893
30,015,650
2015-05-03T15:29:00.000
0
0
1
0
python,python-2.7
30,015,904
2
false
0
0
Sometimes it is useful to end your methods with return self, so it's possible to do something like object.a().b()
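A small illustration of that chaining style (the class and method names are made up for the example):

```python
class Builder:
    def __init__(self):
        self.parts = []

    def a(self):
        self.parts.append("a")
        return self  # returning self is what makes chaining possible

    def b(self):
        self.parts.append("b")
        return self


obj = Builder()
obj.a().b()        # each call returns the same object, so calls chain
print(obj.parts)   # ['a', 'b']
```

Without the return self lines, obj.a() would evaluate to None and the .b() call would raise an AttributeError.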
2
0
0
For functions that do not return any value, is there any benefit in ending them with either return or return(True), as opposed to nothing at all? I assume there is no performance difference, and Python does not require the statements, but is there a general Pythonic convention it is good to follow?
Benefit of ending functions in Python which do not return values with return statement
0
0
0
226
30,015,650
2015-05-03T15:29:00.000
0
0
1
0
python,python-2.7
30,015,730
2
false
0
0
In general, there is no difference. One possible difference is when returning from, for example, a conditional block: return lets you bypass the rest of the block, which shouldn't run anyway. Also, consider the difference between return(True) and return: a bare return means the function's return value is None, while return(True) makes it True. This can cause side effects if the caller conditionally checks the function's return value.
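The None/True distinction is easy to see directly:

```python
def with_bare_return():
    return          # caller sees None


def with_true():
    return True     # caller sees True


def with_nothing():
    pass            # falling off the end of the function also returns None


print(with_bare_return())  # None
print(with_true())         # True
print(with_nothing())      # None
```

So a bare return and no return at all are interchangeable for the caller; return(True) is the only variant that changes behaviour, and then only if someone inspects the result.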
2
0
0
For functions that do not return any value, is there any benefit in ending them with either return or return(True), as opposed to nothing at all? I assume there is no performance difference, and Python does not require the statements, but is there a general Pythonic convention it is good to follow?
Benefit of ending functions in Python which do not return values with return statement
0
0
0
226
30,017,368
2015-05-03T17:56:00.000
0
0
1
0
python,class,inheritance
30,017,486
3
false
0
0
Defining a class binds a class object to a name so it can be used later; the code inside its methods is not executed at definition time, only when the methods are called. Once both classes have been created, in other words once each class object is bound to a name, either can be used in any order. A name must be bound to an object before you can use it, and that is exactly what function and class definitions do: the function or class name is the variable, and the function or class object is the value. Although this will run, it is poor style and I would refrain from doing it. It's fine for a child to call a parent method, but not the other way around.
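A minimal illustration (class names made up): the name Child inside the method body is only looked up when the method is called, by which time the later class statement has already run:

```python
class Parent:
    def make_child(self):
        # `Child` is resolved by name at *call* time, not at definition time,
        # so it's fine that the class below doesn't exist yet.
        return Child()


class Child(Parent):
    pass


p = Parent()
print(type(p.make_child()).__name__)  # Child
```

If make_child() were called before the Child class statement executed (e.g. at module import, between the two definitions), it would raise a NameError instead.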
2
0
0
A question related to inheritance in Python: why is it correct to use a child class object inside a parent class method, when the child class is defined later in the code? How does Python know the child class will be defined later? When do the class statements get executed?
Why does it work when a parent class method accesses a child object in python even though it is defined later?
0
0
0
119
30,017,368
2015-05-03T17:56:00.000
2
0
1
0
python,class,inheritance
30,017,424
3
false
0
0
Python is a dynamic language, so every name is resolved at runtime. There is no need for a name to already be defined when the class's methods are defined.
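The same late binding applies to plain functions, which makes the rule easy to demonstrate:

```python
def greet():
    # `name` doesn't exist yet when this def statement runs -- that's fine,
    # because the body only resolves names when the function is called.
    return "hello " + name


name = "world"   # defined *after* the function body referenced it
print(greet())   # hello world
```

Calling greet() before the assignment to name would raise a NameError; the def statement itself never complains.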
2
0
0
A question related to inheritance in Python: why is it correct to use a child class object inside a parent class method, when the child class is defined later in the code? How does Python know the child class will be defined later? When do the class statements get executed?
Why does it work when a parent class method accesses a child object in python even though it is defined later?
0.132549
0
0
119
30,020,044
2015-05-03T22:16:00.000
0
0
1
1
python,python-2.7,python-3.x
30,020,231
1
false
0
0
The problem is probably a stale symlink. To solve this, back up your current python link: cp /usr/bin/python ~/Desktop. Then remove the old soft link and create a new soft link pointing to the Python 3.4.3 installation: rm -f /usr/bin/python and ln -s /System/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 /usr/bin/python
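A safe way to see the mechanics before touching system paths: the same retargeting, demonstrated in a throwaway directory (the real commands in the answer operate on /usr/bin, so keep the backup step):

```shell
# Demonstration in a temp dir of how replacing a symlink retargets `python`.
cd "$(mktemp -d)"
printf '2.7 interpreter\n' > python2.7
printf '3.4 interpreter\n' > python3.4

ln -s python2.7 python   # the stale link: python -> python2.7
rm -f python             # remove the old soft link
ln -s python3.4 python   # recreate it pointing at the 3.4 binary

readlink python          # python3.4
cat python               # 3.4 interpreter
```

Note that on recent OS X versions System Integrity Protection can block writes to system directories, so this fix may need to target a link in /usr/local/bin instead.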
1
1
0
I installed Python 3.4.3 and then installed Python 2.7.9 on a MacBook Air. If I run python on the command line, it shows Python 2.7.9. I removed Python 2.7.9, and it still shows Python 2.7.9. What is the problem? Thanks.
Try to run Python 3.4.3 but still shows Python 2.7.9
0
0
0
1,024
30,020,226
2015-05-03T22:42:00.000
7
0
0
0
python,django,apache,http
30,021,032
1
true
1
0
When reading request content, if the full length of the content has not been read, the WSGI layer will raise an IOError exception when wsgi.input.read() is called. This may be passed on as-is by Django, or, in more recent versions, converted to a derived IOError exception type called UnreadablePostError. If your application code isn't specifically checking for broken request content and handling that exception type, it propagates back up and is dealt with by Django as an unhandled exception. Django will at that point attempt to write a 500 error response. When that is written by the WSGI layer, it will fail, as the connection has been closed; that is something which can only be detected by actually attempting to write the response. So Django should not be giving you incomplete POST data; an exception should be raised instead. As to whether there is a better way of handling it, the answer is no. With the WSGI specification being based around a blocking model, detecting and handling a dropped connection in a clean way isn't really possible. One would need to switch to an async web server and framework to handle it better, and that means not being able to use WSGI or Django. FWIW, there have been past discussions of dropped connections on the mod_wsgi mailing list. You might therefore go to Google Groups and search the list archives using terms like 'dropped connection', 'failed connection' or 'closed connection' and see what you can find.
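A sketch of the "check for broken request content" part. The import is guarded so the snippet works whether or not the running Django exposes UnreadablePostError (it subclasses IOError, so catching it covers older versions too); the request classes at the bottom are stand-ins for illustration:

```python
# Treat a dropped connection while reading the body as an expected condition
# instead of letting it bubble up as an unhandled 500.
try:
    from django.http import UnreadablePostError  # newer Django raises this
except ImportError:
    UnreadablePostError = IOError  # stand-in when Django isn't importable here


def read_body_safely(request):
    """Return the raw body, or None if the client disconnected mid-upload."""
    try:
        return request.body
    except UnreadablePostError:
        return None


# Stand-in request objects, only for demonstrating the two paths:
class GoodRequest:
    body = b"field=value"


class DroppedRequest:
    @property
    def body(self):
        raise UnreadablePostError("client closed the connection")


print(read_body_safely(GoodRequest()))     # b'field=value'
print(read_body_safely(DroppedRequest()))  # None
```

In a real view you'd return early (e.g. a 400-style response or nothing at all) when the body comes back None, rather than proceeding to validate missing fields.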
1
4
0
When Django is deployed on Apache with mod_wsgi, it seems to handle incomplete or cancelled requests in a very odd way. If the client cancels the request, the cancelled request is not cancelled in Django. For example, if you are uploading a big file, the body of the request is streamed, so if the client cancels the request while Django is reading the body, it is still processed (just incomplete), and the actual cancellation is never noticed. This is one log example from Apache when a request is cancelled: [Fri May 01 22:05:51.055968 2015] [:error] [pid 31609] (70008)Partial results are valid but processing is incomplete: [client 172.31.43.91:3645] mod_wsgi (pid=31609): Unable to get bucket brigade for request. Then in the Django code, the actual POST dictionary is never built (because the request is incomplete, yet it arrives at Django and is processed as if it had data), so Django then fails when trying to get the data (and returns "missing XX field" errors, or whatever the logic does to handle them). Finally, when Django tries to write back the response, it fails as well, since the client has already closed the connection. This scenario happens very often on a request used as a REST service endpoint for a mobile app: the mobile app uploads large files, and the request is cancelled on app suspend/close, yet the server always seems to get the partial request. The complete log when this happens looks something like this: [Fri May 01 22:05:51.055968 2015] [:error] [pid 31609] (70008)Partial results are valid but processing is incomplete: [client 172.31.43.91:3645] mod_wsgi (pid=31609): Unable to get bucket brigade for request. [Fri May 01 22:05:51.062690 2015] [:error] [pid 10580] some error message related to missing data here [Fri May 01 22:05:51.068790 2015] [:error] [pid 10580] [remote 172.31.43.91:0] mod_wsgi (pid=10580): Exception occurred processing WSGI script 'some-path/wsgi.py'. 
[Fri May 01 22:05:51.068827 2015] [:error] [pid 10580] [remote 172.31.43.91:0] IOError: failed to write data
Now the final question: is there a way to detect this kind of incomplete request and handle it accordingly, rather than just failing later with missing required data?
Django/apache handle incomplete/cancelled http requests
1.2
0
0
5,582