Dataset columns (dtype, observed min/max or string-length range):

- Q_Id: int64, 2.93k to 49.7M
- CreationDate: stringlengths 23 to 23
- Users Score: int64, -10 to 437
- Other: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- System Administration and DevOps: int64, 0 to 1
- DISCREPANCY: int64, 0 to 1
- Tags: stringlengths 6 to 90
- ERRORS: int64, 0 to 1
- A_Id: int64, 2.98k to 72.5M
- API_CHANGE: int64, 0 to 1
- AnswerCount: int64, 1 to 42
- REVIEW: int64, 0 to 1
- is_accepted: bool, 2 classes
- Web Development: int64, 0 to 1
- GUI and Desktop Applications: int64, 0 to 1
- Answer: stringlengths 15 to 5.1k
- Available Count: int64, 1 to 17
- Q_Score: int64, 0 to 3.67k
- Data Science and Machine Learning: int64, 0 to 1
- DOCUMENTATION: int64, 0 to 1
- Question: stringlengths 25 to 6.53k
- Title: stringlengths 11 to 148
- CONCEPTUAL: int64, 0 to 1
- Score: float64, -1 to 1.2
- API_USAGE: int64, 1 to 1
- Database and SQL: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- ViewCount: int64, 15 to 3.72M

Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
27,304,973 | 2014-12-04T22:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,tkinter | 0 | 27,305,312 | 0 | 1 | 0 | true | 0 | 1 | No, there isn't a way to tell what type of event is flagged by <<ListboxSelect>>. However, using <<ListboxSelect>> is useless for binding to a letter key. It will only fire for mouse clicks and for arrow-key navigation of the listbox.
You can bind to <KeyPress>. That binding will tell you what key was pressed, and in that binding you can change the widget's selection. You can get that information from event.keysym. | 1 | 0 | 0 | 0 | I am binding to <<ListboxSelect>> and I want to implement a function which recognizes an alphabetical key-press and uses this to jump to the corresponding place in an alphabetically sorted list.
Is there a way to tell what type of event is flagged by <<ListboxSelect>>? I see something in the documentation about getting the event's "keysym" but I don't know how to access this. | How to determine the type of event flagged by <<ListboxSelect>> in Tkinter? | 0 | 1.2 | 1 | 0 | 0 | 47 |
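A minimal sketch of the <KeyPress> plus event.keysym approach this answer describes; the sample names and widget setup are hypothetical:

```python
import tkinter as tk  # on Python 2: import Tkinter as tk

root = tk.Tk()
names = sorted(["alice", "bob", "carol", "dave"])
listbox = tk.Listbox(root)
for name in names:
    listbox.insert(tk.END, name)
listbox.pack()

def jump_to_letter(event):
    # event.keysym is the symbolic name of the pressed key, e.g. "a"
    letter = event.keysym.lower()
    for index, name in enumerate(names):
        if name.startswith(letter):
            listbox.selection_clear(0, tk.END)
            listbox.selection_set(index)
            listbox.see(index)  # scroll the first match into view
            break

# the listbox must have keyboard focus to receive <KeyPress> events
listbox.bind("<KeyPress>", jump_to_letter)
listbox.focus_set()
root.mainloop()
```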
27,308,293 | 2014-12-05T03:32:00.000 | -2 | 0 | 1 | 0 | 0 | python,duplicates,package,pip | 0 | 27,308,401 | 0 | 9 | 0 | false | 0 | 0 | This is not possible with the pip command line tool. All of the packages on PyPI have unique names. Packages often require and depend on each other, and assume the name will not change. | 5 | 51 | 0 | 0 | When installing a new python package with PIP, can I change the package name because there is another package with the same name?
Or, how can I change the existing package's name? | How to install python package with a different name using PIP | 0 | -0.044415 | 1 | 0 | 0 | 22,776 |
27,308,293 | 2014-12-05T03:32:00.000 | 13 | 0 | 1 | 0 | 0 | python,duplicates,package,pip | 0 | 55,817,170 | 0 | 9 | 0 | false | 0 | 0 | It's not possible to change "import path" (installed name) by specifying arguments to pip. All other options require some form of "changes to the package":
A. Use pip install -e git+http://some_url#egg=some-name: that way even if both packages have the same import path, they will be saved under different directories (using some-name provided after #egg=). After this you can go to the source directories of packages (usually venv/src/some-name) and rename some folders to change import paths
B-C. Fork the repository, make changes, then install the package from that repository. Or you can publish your package on PyPI using different name and install it by name
D. use pip download to put one of the packages in your project, then rename folders as you like | 5 | 51 | 0 | 0 | When installing a new python package with PIP, can I change the package name because there is another package with the same name?
Or, how can I change the existing package's name? | How to install python package with a different name using PIP | 0 | 1 | 1 | 0 | 0 | 22,776 |
27,308,293 | 2014-12-05T03:32:00.000 | 0 | 0 | 1 | 0 | 0 | python,duplicates,package,pip | 0 | 48,315,327 | 0 | 9 | 0 | false | 0 | 0 | I don't think it is possible to change the name of a package using pip.
pip can only install packages that exist on the index, and it raises an error if the name you give it does not match an existing package. | 5 | 51 | 0 | 0 | When installing a new python package with PIP, can I change the package name because there is another package with the same name?
Or, how can I change the existing package's name? | How to install python package with a different name using PIP | 0 | 0 | 1 | 0 | 0 | 22,776 |
27,308,293 | 2014-12-05T03:32:00.000 | 2 | 0 | 1 | 0 | 0 | python,duplicates,package,pip | 0 | 49,680,767 | 0 | 9 | 0 | false | 0 | 0 | If you are struggling to install the correct package when using pip install 'module', you could always download its corresponding wheel file (.whl extension) and then install this directly using pip. This has worked for me in various situations in the past. | 5 | 51 | 0 | 0 | When installing a new python package with PIP, can I change the package name because there is another package with the same name?
Or, how can I change the existing package's name? | How to install python package with a different name using PIP | 0 | 0.044415 | 1 | 0 | 0 | 22,776 |
27,308,293 | 2014-12-05T03:32:00.000 | 4 | 0 | 1 | 0 | 0 | python,duplicates,package,pip | 0 | 50,367,038 | 0 | 9 | 0 | false | 0 | 0 | Create a new virtualenv and then install the package into it; this way you can also keep different versions of packages. | 5 | 51 | 0 | 0 | When installing a new python package with PIP, can I change the package name because there is another package with the same name?
Or, how can I change the existing package's name? | How to install python package with a different name using PIP | 0 | 0.088656 | 1 | 0 | 0 | 22,776 |
27,308,836 | 2014-12-05T04:35:00.000 | 1 | 0 | 0 | 0 | 0 | python,module,openerp-7,odoo,time-management | 0 | 27,353,088 | 0 | 1 | 0 | true | 1 | 0 | You can manage this by grouping employees according to their privileges. For example, with a Managerial group and an Employee group, each can have different (or partly shared) privileges on certain Python objects in OpenERP. Identify those objects and explore further under Settings >> Users >> Groups. | 1 | 0 | 0 | 0 | I have downloaded, installed, and also tested Odoo 8 and OpenERP 7 via an online virtual machine. I have spent many hours tinkering with the apps and features of both, but despite all that searching and tinkering I am unable to find a method to change the timesheet-approval functionality in the manner I will explain below.
Each project will have an assigned manager. Any number of employees can enter time for a project. Once employees send their timesheet to be approved, each respective manager will only get that portion of the timesheet for which time was charged to the project they managed. They should be able to view each project and the employees in them. | In using OpenErp 7 or Odoo 8, how do I modify it such that a manager assigned to a project is the one who will approve all timesheet entries for it? | 0 | 1.2 | 1 | 0 | 0 | 364
27,337,587 | 2014-12-06T23:03:00.000 | 0 | 1 | 0 | 1 | 1 | python,linux,boot,raspbian,autostart | 0 | 27,344,131 | 0 | 2 | 0 | false | 0 | 0 | Try to use bootup option in crontab:
@reboot python /path/to/pythonfile.py | 1 | 0 | 0 | 0 | Can anyone tell me how to start a python script on boot, and then also load the GUI? I am on the Debian-based Raspbian OS.
The reason I want to run the python script on boot is that I need to read keyboard input from an RFID reader. I am currently using raw_input() to read data from the RFID reader. The 11-character hex value is then compared against a set of values in a txt file. This raw_input() did not work for me when autostarting the python script using crontab, nor with LXDE autostart.
So, I am thinking of running the python script at boot so that it reads keyboard input. If there are any other ways of reading keyboard input with crontab autostart or LXDE autostart, please let me know. | Starting a python script at boot and loading GUI after that | 0 | 0 | 1 | 0 | 0 | 1,590
27,358,870 | 2014-12-08T13:15:00.000 | 3 | 0 | 0 | 0 | 0 | python,kivy | 0 | 27,380,669 | 0 | 3 | 0 | true | 0 | 1 | There is no position or size in a Canvas. A Canvas acts just as a container for graphics instructions, unlike an Fbo, which draws into a Texture and so does have a size.
In Kivy, Canvas.size doesn't exist, but I guess you called your widget a canvas. By default, a Widget's size is (100, 100). If you put it into a layout, the size will change once the layout knows its own size. This means you need to listen for changes to Widget.size, or use a size you already know, like Window.size. | 1 | 2 | 0 | 0 | In my app I need to know how big the canvas is in pixels.
Instead, calling canvas.size returns [100, 100] no matter how many pixels wide the canvas is.
Can you please tell me a way to find out how many pixels wide and high the canvas is? | Size of canvas in kivy | 0 | 1.2 | 1 | 0 | 0 | 5,156
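A small sketch of the "listen for changes to Widget.size" approach from this answer; the widget class and the print are illustrative only:

```python
from kivy.app import App
from kivy.uix.widget import Widget

class PaintArea(Widget):  # hypothetical widget standing in for "the canvas"
    def __init__(self, **kwargs):
        super(PaintArea, self).__init__(**kwargs)
        # size is still the default (100, 100) here; the real value
        # arrives once the parent layout has computed its own size
        self.bind(size=self.on_size_change)

    def on_size_change(self, instance, new_size):
        print("widget is now %d x %d pixels" % (new_size[0], new_size[1]))

class DemoApp(App):
    def build(self):
        return PaintArea()

if __name__ == "__main__":
    DemoApp().run()
```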
27,361,967 | 2014-12-08T16:06:00.000 | -1 | 0 | 0 | 0 | 0 | python,sql-server,spss | 0 | 27,362,970 | 0 | 3 | 0 | true | 0 | 0 | This isn't as clean as working directly with whatever database is holding the data, but you could do something with an exported data set:
There may or may not be a way for you to write and run an export script from inside your Admin panel or whatever. If not, you could write a simple Python script using Selenium WebDriver which logs into your admin panel and exports all data to a *.sav data file.
Then you can use the Python SPSS extensions to write your analysis scripts. Note that these scripts have to run on a machine that has a copy of SPSS installed.
Once you have your data and analysis results accessible to Python, you should be able to easily write that to your other database. | 3 | 1 | 0 | 0 | I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report.
My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin?
If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide.
Thanks so much. | Automating IBM SPSS Data Collection survey export? | 0 | 1.2 | 1 | 0 | 0 | 1,546 |
27,361,967 | 2014-12-08T16:06:00.000 | 1 | 0 | 0 | 0 | 0 | python,sql-server,spss | 0 | 29,892,280 | 0 | 3 | 0 | false | 0 | 0 | There are a number of different ways you can accomplish easing this task and even automate it completely. However, if you are not an IBM SPSS Data Collection expert and don't have access to somebody who is or have the time to become one, I'd suggest getting in touch with some of the consultants who offer services on the platform. Internally IBM doesn't have many skilled SPSS resources available, so they rely heavily on external partners to do services on a lot of their products. This goes for IBM SPSS Data Collection in particular, but is also largely true for SPSS Statistics.
As noted by previous contributors there is an approach using Python for data cleaning, merging and other transformations and then loading that output into your report database. For maintenance reasons I'd probably not suggest this approach. Though you are most likely able to automate the export of data from SPSS Data Collection to a sav file with a simple SPSS Syntax (and an SPSS add-on data component), it is extremely error prone when upgrading either SPSS Statistics or SPSS Data Collection.
From a best practice standpoint, you ought to use the SPSS Data Collection Data Management module. It is very flexible and hardly requires any maintenance on upgrades, because you are working within the same data model framework (e.g. survey metadata, survey versions, labels etc. is handled implicitly) right until you load your transformed data into your reporting database.
Ideally the approach would be to build the mentioned SPSS Data Collection Data Management script and trigger it at the end of each completed interview. In this way your reporting will be close to real-time (you can make it actual real-time by triggering the DM script during the interview using the interview script events - just a FYI).
All scripting on the SPSS Data Collection platform including Data Management scripting is very VB-like, so for most people knowing VB, it is very easy to get started and it is documented very well in the SPSS Data Collection DDL. There you'll also be able to find examples of extracting survey data from SPSS Data Collection surveys (as well as reading and writing data to/from other databases, files etc.). There are also many examples of data manipulation and transformation.
Lastly, to answer your specific questions:
Yes, there is always an MS SQL Server behind SPSS Data Collection, no exceptions. However, generally speaking the data model is way too complex to read out data directly from it. If you have a look in it, you'll quickly realize this.
The MDD file (short for Meta Data Document) contains all survey metadata, including data source specifications, version history etc. Without it you'll not be able to make anything of the survey data in the database, which is the main reason I'd suggest staying within the SPSS Data Collection platform for as large a part of your data handling as possible. However, it is indeed just a readable XML file.
Note that the SPSS Data Collection Data Management Module requires a separate license and if the scripting needed is large or complex, you'd probably want base professional too, if that's not what you already use for developing the questionnaires and handling the surveys.
Hope that helps. | 3 | 1 | 0 | 0 | I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report.
My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin?
If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide.
Thanks so much. | Automating IBM SPSS Data Collection survey export? | 0 | 0.066568 | 1 | 0 | 0 | 1,546 |
27,361,967 | 2014-12-08T16:06:00.000 | 2 | 0 | 0 | 0 | 0 | python,sql-server,spss | 0 | 30,706,866 | 0 | 3 | 0 | false | 0 | 0 | As mentioned by other contributors, there are a few ways to achieve this. The simplest I can suggest is using the DMS (data management script) and the Windows scheduler. Ideally you should follow the steps below.
Prerequisite:
1. You should have access to the server running IBM Data collection
2. Basic knowledge of windows task scheduler
3. Knowledge of DMS scripting
Approach:
1. Create a new DMS script from the template
2. If you want to perform only data extract / transformation, you only need input and output data source
3. In the input data source, create/build the connection string pointing to your survey on IBM Data collection server. Use the data source as SQL
4. In the select query: use "Select * from VDATA" if you want to export all variables
5. Set the output data connection string by selecting the output data format as SPSS (if you want to export it in SPSS)
6. run the script manually and see if the SPSS export is what is expected
7. Create batch file using text editor (save with .bat extension). Add below lines
cd "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS"
Call DMSRun YOURDMSFILENAME.dms
Then add a line to copy (using XCOPY) the data / files extracted to the location where you want to further process it.
Save the file and open windows scheduler to schedule the execution of this batch file for data extraction.
If you want to do any further processing, you create an mrs or dms file and add to the batch file.
Hope this helps! | 3 | 1 | 0 | 0 | I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report.
My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin?
If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide.
Thanks so much. | Automating IBM SPSS Data Collection survey export? | 0 | 0.132549 | 1 | 0 | 0 | 1,546 |
27,401,918 | 2014-12-10T13:12:00.000 | 4 | 1 | 0 | 0 | 1 | python,bluetooth,pebble-watch | 0 | 27,412,126 | 0 | 5 | 0 | false | 0 | 0 | Apple iDevices do use private resolvable addresses with Bluetooth Low Energy (BLE). They cycle to a different address every ~15 minutes. Only paired devices that have a so called Identity Resolving Key can "decipher" these seemingly random addresses and associate them back to the paired device.
So to do something like this with your iPhone, you need to pair it with your raspberry pi.
Then what you can do is make a simple iOS app that advertises some data (what does not matter because when the app is backgrounded, only iOS itself gets to put data into the advertising packet). On the raspberry pi you can then use hcitool lescan to scan for the BLE advertisements. If the address of the advertisement can be resolved using the IRK, you know with high certainty that it's the iPhone. I'm not sure if hcitool does any IRK math out of the box, but the resolving algorithm is well specified by the Bluetooth spec.
Pebble currently does indeed use a fixed address. However, it is only advertising when it is disconnected from the phone it is supposed to be connected to. So, for your use case, using its BLE advertisements is not very useful. Currently, there is no API in the Pebble SDK to allow an app on the Pebble to advertise data.
FWIW, the commands you mentioned are useful only for Bluetooth 2.1 ("Classic") and probably only useful if the other device is discoverable (basically never, unless it's in the Settings / Bluetooth menu). | 1 | 3 | 0 | 0 | My ultimate goal is to allow my Raspberry Pi to detect when my iPhone or Pebble watch is nearby. I am presently focusing on the Pebble, as I believe the iPhone randomizes the MAC address. I have the static MAC address of the Pebble watch.
My question is: how do I detect the presence of the MAC address through Bluetooth?
I have tried hcitool rssi [mac address] and l2ping [mac address], however both need a confirmation of the connection on the watch before any response. I want it to be automatic...
I also tried hcitool scan, but it takes a while, presumably because it is going through all possibilities. I simply want to search for a particular MAC address.
EDIT: I just tried "hcitool name [Mac Address]", which returns the name of the device, and if it is not there it returns null, so this is the idea... is there a python equivalent of this?
I am new to python, so hopefully someone can point me to how I can simply ping the MAC address and see how strong the RSSI value is? | Detecting presence of particular bluetooth device with MAC address | 0 | 0.158649 | 1 | 0 | 0 | 15,931
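On the final question: a sketch using the PyBluez package, whose bluetooth.lookup_name() behaves much like hcitool name, returning the device name for a reachable address and None otherwise. The MAC below is a placeholder, and note that PyBluez does not expose RSSI this way:

```python
import bluetooth  # pip install pybluez

PEBBLE_MAC = "00:11:22:33:44:55"  # placeholder: your watch's address

name = bluetooth.lookup_name(PEBBLE_MAC, timeout=5)
if name is not None:
    print("%s (%s) is in range" % (name, PEBBLE_MAC))
else:
    print("device not found")
```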
27,403,050 | 2014-12-10T14:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,django,database,data-migration | 0 | 60,700,719 | 0 | 4 | 0 | false | 1 | 0 | I faced the same issue. All I did to get out of it was drop all tables in my DB and then run:
python manage.py makemigrations
and:
python manage.py migrate | 4 | 4 | 0 | 0 | I have a Django project and I did the following:
Added a table with some columns
Insert some records into the db
Added a new column that I didn't realize I needed
Made an update to populate that column
When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated.
When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet.
What should I do?
EDIT: More info
I first made a class, class A, and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I needed an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database.
Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better. | Django 1054 - Unknown Column in field list | 0 | 0 | 1 | 0 | 0 | 8,200 |
27,403,050 | 2014-12-10T14:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,django,database,data-migration | 0 | 40,621,929 | 0 | 4 | 0 | false | 1 | 0 | This happened to me because I faked one migration (m1), created another (m2), and then tried to migrate m2 before I had faked my initial migration (m1).
So in my case I had to migrate --fake <app name> m1 and then migrate <app name> m2. | 4 | 4 | 0 | 0 | I have a Django project and I did the following:
Added a table with some columns
Insert some records into the db
Added a new column that I didn't realize I needed
Made an update to populate that column
When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated.
When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet.
What should I do?
EDIT: More info
I first made a class, class A, and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I needed an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database.
Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better. | Django 1054 - Unknown Column in field list | 0 | 0 | 1 | 0 | 0 | 8,200 |
27,403,050 | 2014-12-10T14:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,django,database,data-migration | 0 | 29,351,981 | 0 | 4 | 0 | false | 1 | 0 | Unless the new column has a default value defined, the insert statement will expect to add data to that column. Can you move the data load to after the second migration? (I would have commented, but do not yet have sufficient reputation.) | 4 | 4 | 0 | 0 | I have a Django project and I did the following:
Added a table with some columns
Insert some records into the db
Added a new column that I didn't realize I needed
Made an update to populate that column
When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated.
When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet.
What should I do?
EDIT: More info
I first made a class, class A, and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I needed an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database.
Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better. | Django 1054 - Unknown Column in field list | 0 | 0 | 1 | 0 | 0 | 8,200 |
27,403,050 | 2014-12-10T14:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,django,database,data-migration | 0 | 31,042,146 | 0 | 4 | 0 | true | 1 | 0 | I believe this was because the migration scripts were getting called out of order, due to a problem I had setting them up. Everything is ok now. | 4 | 4 | 0 | 0 | I have a Django project and I did the following:
Added a table with some columns
Insert some records into the db
Added a new column that I didn't realize I needed
Made an update to populate that column
When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated.
When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet.
What should I do?
EDIT: More info
I first made a class, class A, and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I needed an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database.
Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better. | Django 1054 - Unknown Column in field list | 0 | 1.2 | 1 | 0 | 0 | 8,200 |
27,416,913 | 2014-12-11T06:46:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,crm,zoho | 0 | 27,498,436 | 0 | 2 | 0 | false | 1 | 0 | All you need is to export the Django auth users; if an extended user model is there, you can also export that extended user model...
Please update your question with what database you are using. | 1 | 0 | 0 | 0 | How to track django user details using zoho CRM?
I am new to Zoho CRM; I have gathered a little information and some details about how Zoho CRM works.
Now I want to know one thing: I have implemented the Django project and also have an account in Zoho CRM. I would like to track all my user details from the app database in Zoho CRM.
How do I export app database users to Zoho CRM, and how do I track user behaviour? | How to track django user details using zoho CRM | 0 | 0 | 1 | 0 | 0 | 812
27,510,077 | 2014-12-16T17:08:00.000 | 2 | 0 | 1 | 0 | 0 | python,self-modifying | 0 | 27,510,374 | 0 | 1 | 0 | true | 0 | 0 | My first inclination is to say "don't do that". Self-modifying Python (really any language) makes it extremely difficult to maintain a versioned library.
You make a bug fix and need to redistribute - how do you merge data you stored via self-modification?
Very hard to authenticate packaging using a hash - once the local version is modified it's hard to tell which version it originated from, because SHAs won't match.
It's unsafe - you could just save and load a Python class that's not stored with your package; however, if it's user-writable, a foreign process could add arbitrary Python code to that file to be evaluated. Kind of like SQL injection, but Python style.
Python makes it so trivial to load and dump JSON files that, for simple things, I wouldn't think of anything else. Even CSV files are trivial and can be bound to maps, but can be more easily manipulated as data using your favorite spreadsheet editor.
My suggestion - don't use self-modifying Python unless you just want to experiment; it's just not a practical solution in the real world, unless you're working in an embedded environment where disk and memory are at a premium. | 1 | 0 | 0 | 0 | I've considered storing the high scores for my game as variables in the code itself rather than as a text file, as I've done so far, because it means fewer additional files are required to run it and awarding yourself 999999 points becomes harder.
However, this would then require me to run self-modifying code to overwrite the global variables representing the scores permanently. I looked into that and considering that all I want to do is really just to change global variables, all the stuff I found was too advanced.
I'd appreciate it if someone could give me an explanation of how to write self-modifying Python code to do just that, preferably with an example too, as it aids understanding. | Self-modifying Python code to keep track of high scores | 0 | 1.2 | 1 | 0 | 0 | 414
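For contrast with self-modification, a minimal JSON-based high-score store of the kind the answer recommends; the file name and score keys are arbitrary:

```python
import json
import os

SCORES_FILE = "highscores.json"

def load_scores():
    # returns an empty table the first time the game runs
    if os.path.exists(SCORES_FILE):
        with open(SCORES_FILE) as f:
            return json.load(f)
    return {}

def save_scores(scores):
    with open(SCORES_FILE, "w") as f:
        json.dump(scores, f, indent=2)

scores = load_scores()
scores["level1"] = max(scores.get("level1", 0), 4200)  # keep the best score
save_scores(scores)
```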
27,517,255 | 2014-12-17T02:05:00.000 | 0 | 0 | 1 | 1 | 1 | python,windows,command-line,command | 0 | 27,557,706 | 0 | 2 | 0 | false | 0 | 0 | Issue resolved. Since no feasible solution was found in 2 days, I decided to wipe all keys containing 'python' from the registry, as well as some files that were not parts of other programs. This resolved the issue after re-installing python.
If anyone finds the true cause of this misbehavior and another, less brutal, solution, please write it here for future reference. | 1 | 0 | 0 | 0 | I seem to have a problem launching python from the command line. I have tried various things with no success.
Problem: When trying to run python from the command line, there is no response, i.e. I do not get a message about 'command not found' and the console does not launch. The only option to open the python console is to run C:\Python34\python.exe directly. Running with the python command does not work even when in the python directory, but python.exe launches. The issue with launching this way is that the python console is launched in a new window. This whole problem is present only on one machine, while on my other machine I am able to run python correctly and the console launches in the command prompt window from which the python command was executed.
PATH is correctly set to
C:\Python34\;C:\Python34\Scripts;...
and where python correctly returns C:\Python34\python.exe. I verified that running other commands imported through PATH (such as javac) run correctly.
Things I tried:
Completely re-installing python both with x86 and x64 python installations with no success.
Copy installation from my second machine and manually set the path variables - again no success.
Can anyone hint how to resolve this behavior?
(Additional info: Win 8.1 x64, python 3.4.2) | Python console not launching properly from command line | 0 | 0 | 1 | 0 | 0 | 632 |
27,517,939 | 2014-12-17T03:40:00.000 | 1 | 1 | 0 | 1 | 1 | python,c++,protocol-buffers | 1 | 27,687,619 | 0 | 1 | 0 | false | 0 | 1 | So this is a happy non-answer based on my experience. The pure-python bindings for google protobuf are a terrible port of C/C++ stuff. However, I had quite a bit of success wrapping C google protobuf generated bindings using cffi. Someone should go ahead and create a more generic binding, but that would just be a short consulting stint. | 1 | 2 | 0 | 0 | Protobuf with pure python runs 3x slower on pypy than on CPython.
So I tried to use the c++ implementation for pypy.
There are two errors (PyFloatObject undefined, and const char* to char*) when I compile the protobuf (2.6.1 release) c++ implementation for pypy.
It compiles successfully after I modify python/google/protobuf/pyext/message.c, but then I get a 'Segmentation fault' error when I use protobuf with the c++ implementation on pypy.
I don't know how to fix it; help me, please! | Is there any way to use Google Protobuf on pypy? | 0 | 0.197375 | 1 | 0 | 0 | 567
27,539,852 | 2014-12-18T05:35:00.000 | 1 | 0 | 1 | 0 | 0 | python,string,parsing,format,character | 0 | 27,539,969 | 0 | 4 | 0 | false | 0 | 0 | Do you mean the list method?
s='abccda'
list(s) # ['a', 'b', 'c', 'c', 'd', 'a'] | 1 | 2 | 0 | 0 | In Python 2.7 how do I parse 'abc' into 'a b c' for a very long string (like 1000 chars)?
Or how would I convert 'abccda' to '1 2 3 3 4 1'? (where each unique letter maps to a unique digit, 1-4)
I imagine I could pop the chars off, one by one, but I'm new to Python and wonder if there is a simple function that does it. | python parse string into individual characters | 0 | 0.049958 | 1 | 0 | 0 | 116
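A short sketch covering both transformations from the question, building on list()/str.join():

```python
s = "abccda"

# 'abc' -> 'a b c': join the characters with spaces
print(" ".join(s))  # a b c c d a

# 'abccda' -> '1 2 3 3 4 1': number each unique letter
# in order of first appearance
mapping = {}
for ch in s:
    if ch not in mapping:
        mapping[ch] = str(len(mapping) + 1)
print(" ".join(mapping[ch] for ch in s))  # 1 2 3 3 4 1
```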
27,553,319 | 2014-12-18T18:30:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-3.x,urllib | 0 | 27,556,712 | 0 | 1 | 0 | false | 0 | 0 | Now how do I store the last known absolute url to extract the netloc from it and append it to relative url? Should I save the last known absolute URL in a text file?
What do you think is wrong with this? Seems to make sense to me... (depending on context, obviously) | 1 | 0 | 0 | 0 | I am new to Python programming. While making an application, I ran into this problem.
I am parsing URLs using Python's urllib library. I want to convert any relative url into its corresponding absolute url. I get relative and absolute URLs in random fashion and they may not be from the same domain. Now how do I store the last known absolute url to extract the netloc from it and append it to the relative url? Should I save the last known absolute URL in a text file? Or is there any better option for this problem? | URL parsing issue in python | 1 | 0 | 1 | 0 | 1 | 117
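A sketch of the "remember the last absolute URL" idea using the standard library's urljoin; while the program runs, a plain variable is enough and no text file is needed:

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

last_absolute = None

def resolve(url):
    """Return an absolute URL, resolving relative ones against
    the last absolute URL seen."""
    global last_absolute
    if url.startswith(("http://", "https://")):
        last_absolute = url
        return url
    if last_absolute is None:
        raise ValueError("no absolute URL seen yet")
    return urljoin(last_absolute, url)

print(resolve("http://example.com/a/b.html"))  # absolute: remembered
print(resolve("../c.html"))                    # http://example.com/c.html
```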
27,555,520 | 2014-12-18T20:55:00.000 | 1 | 0 | 1 | 0 | 0 | python,class | 0 | 27,555,893 | 0 | 2 | 1 | false | 0 | 0 | If your program starts to take on big dimensions, then yes, you could split your classes or simply your functions according to what they do. Usually functions that do similar tasks or that work on the same data are put together.
To import a file containing a set of functions that you defined: if the file is in the same folder as your main script and, for example, the script containing the functions you want to import is called functions.py, you can simply do import functions or from functions import *, or, better, from functions import function_1.
Let's talk about the 3 ways of importing files that I have just mentioned:
import functions
Here, in order to use one of your functions, for example foo, you have to put the name of the module in front of the name of the function followed by a .:
functions.foo('this is a string')
from functions import *
In this case, you can directly call foo just typing foo('this is a new method of importing files'). * means that you have just imported everything from the module functions.
from functions import function_1
In this last case, you have imported a specific function function_1 from the module functions, and you can use just the function_1 from the same module:
function_1('I cannot use the function "foo" here, because I did not import it') | 1 | 0 | 0 | 0 | I am wondering if it would be a good idea to use different .py scripts for different parts of a Python program. Like one .py file for a calculator and another for class files, etc. If it is a good idea, is it possible?
If possible, where can I find how to do so?
I am asking this because I find it confusing to have so much code in a single file, and have to find it anytime fixing is needed. | Is it a good idea to make use different Python scripts for programs? | 1 | 0.099668 | 1 | 0 | 0 | 545 |
27,559,330 | 2014-12-19T03:32:00.000 | 1 | 0 | 1 | 0 | 1 | python,django,terminal,komodoedit | 1 | 27,559,411 | 0 | 1 | 0 | true | 1 | 0 | Let's start from the beginning:
You are in your project folder eg /home/me/myproject
You create a new virtualenv, eg virtualenv /home/me/virtualenvs/myprojectenv
You activate the new virtualenv:
source /home/me/virtualenvs/myprojectenv/bin/activate
...this means that python and pip commands now point to the versions installed in your virtualenv
You install your project dependencies pip install django
You can ./manage.py runserver successfully
Now, the virtualenv has only been activated in your current terminal session. If you cd outside your project directory the virtualenv is still active. But if you open a new terminal window (or turn off the computer and come back later) the virtualenv is not activated.
If the virtualenv is not activated then the python and pip commands point to the system-installed copies (if they exist) and Django has not been installed there.
All you need to do when you open a new terminal is step 3. above:
source /home/me/virtualenvs/myprojectenv/bin/activate
Possibly the tutorial you followed got you to install virtualenvwrapper which is an additional convenience layer around the virtualenv commands above. In that case the steps would look like this:
You are in your project folder eg /home/me/myproject
You create a new virtualenv, eg mkvirtualenv myprojectenv
...virtualenv has already been activated for you now!
You install your project dependencies pip install django
You can ./manage.py runserver successfully
and whenever you start a new shell session you need to:
workon myprojectenv
in order to re-activate the virtualenv | 1 | 0 | 0 | 0 | I sincerely apologize for the noobish dribble thats about to come out here:
Okay so I am following along with a youtube tutorial using terminal/django/komodo edit to make a simple website. This is my first time really using terminal, I am having issues. I have read up on the terminal and searched this site for my question but to no avail. Im hoping someone will take the time to answer this for me as it is most infuriating. This is my first time working with virtual env's as well.
So my question is, How do I uhmm, I suppose "save" my virtual env settings?
So I have set up a new virtualenv, downloaded django and started up my server so I can see things such as the admin page and log in page from the internet page. Things go as they should along with the tutorial until it comes time to eventually turn off my computer.
When I reload the virtualenv I cannot run the server, it gives me: Import error, no module named django.core.management.
I use pip freeze and it shows that django is no longer installed.
If trying to reinstall django it gives a long block of error messages.
All the work done within the virtualenv folder is still visible in the komodo edit pages; however, it seems the terminal does not want to work properly. My only option thus far has been to completely remake a virtualenv, re-set it all up with the proper imports, files, django and restart the project.
so my questions are:
how do I save my terminal and/or virtualenv settings?
What do I need to do before logging off to ensure I will be able to continue with my project?
Let's say I am going to continue with my project: how do I start up the project again via terminal? Is that where I am going wrong? I've assumed up until now that I must go into terminal, start the server again and then continue with my project from komodo edit, but inside the terminal everything goes wrong.
Im not even explicitly saying I cannot continue with my project, I am more saying the terminal is not recognizing I had django installed within my virtualenv, and it is not letting me start the server again.
I have tried doing the research on my own, I am not one to sit back and wait for an answer but being completely new, this is baffling. I am sorry for the noob questions, feel free to link another answered question or website that has the answer.
Thank you all!! | Terminal/django/komodo edit | 0 | 1.2 | 1 | 0 | 0 | 323 |
27,584,321 | 2014-12-20T21:05:00.000 | 2 | 0 | 0 | 0 | 0 | python,amazon-dynamodb,boto,sample | 0 | 27,599,549 | 0 | 2 | 0 | true | 0 | 0 | Use Table.scan(max_page_size=1000) | 1 | 2 | 0 | 0 | I am using amazon dynamodb and accessing it via the python boto query interface. I have a very simple requirement
I want to get 1000 entries. But I don't know the primary keys beforehand. I just want to get 1000 entries. How can I do this? ...I know how to use the query_2 but that requires knowing primary keys beforehand.
And maybe afterwards I want to get another, different 1000 and go on like that. You can consider it as sampling without replacement. How can I do this?
Any help is much appreciated. | python dynamodb get 1000 entries | 0 | 1.2 | 1 | 0 | 1 | 1,903 |
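A sketch of how that looks with boto 2's dynamodb2 layer; the table name is hypothetical. scan() returns a lazy result set, so itertools.islice() can peel off batches of 1000 without knowing any primary keys:

```python
from itertools import islice
from boto.dynamodb2.table import Table

table = Table("mytable")  # hypothetical table name
results = table.scan(max_page_size=1000)

first_1000 = list(islice(results, 1000))
next_1000 = list(islice(results, 1000))  # continues where the first stopped
```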
27,586,108 | 2014-12-21T02:02:00.000 | 1 | 0 | 0 | 0 | 0 | python-3.x,tkinter,exe,py2exe | 1 | 29,677,578 | 0 | 1 | 0 | false | 0 | 1 | As far as I understand, there is not a version of py2exe for python 3.x.
You'd be best off going for cx_freeze (Sentdex also has a tutorial on that) | 1 | 0 | 0 | 0 | I have made a small program in Python that involves tkinter, and I made it a pyw file because it has a shortcut on the desktop and I do not want the command prompt to get in the way. I used py2exe following the sentdex tutorial on youtube, and I got an output file, but the output file shows an error and exits before I can read it. The pyw file on its own works fine, but I don't know how to get the exe output file to work correctly.
Information:
Python - 3.4.2;
OS - Windows 8.1;
Folder - Multiple items (photos and audio for the program);
Program - A simple animation in tkinter
If you need the program, tell me and I can upload the folder containing the program. | Python py2exe with tkinter (pyw file) | 0 | 0 | 1 | 0 | 0 | 216 |
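If you go the cx_freeze route the answer suggests, a minimal setup.py sketch for a tkinter .pyw program might look like this; the script and asset names are placeholders, and base="Win32GUI" keeps the console window hidden, matching the intent of the .pyw file:

```python
from cx_Freeze import setup, Executable

setup(
    name="animation",
    version="1.0",
    # bundle the photos and audio the program loads at runtime
    options={"build_exe": {"include_files": ["photos/", "audio/"]}},
    executables=[Executable("animation.pyw", base="Win32GUI")],
)
```

Build with python setup.py build; the .exe and its dependencies land in the build/ folder.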
27,608,053 | 2014-12-22T18:02:00.000 | 1 | 0 | 0 | 0 | 0 | python,opencv,image-processing | 0 | 27,689,079 | 0 | 1 | 0 | true | 0 | 0 | There is a standard Python function, imghdr.what. It rulez!
^__^ | 1 | 1 | 1 | 0 | I read an image (of unknown format; the most frequent are PNGs or JPGs) from a buffer.
I can decode it with cv2.imdecode, I can even check if it is valid (imdecode returns non-None).
But how can I reveal the image type (PNG, JPG, something else) of the buffer I've just read? | OpenCV: how to get image format if reading from buffer? | 0 | 1.2 | 1 | 0 | 0 | 2,355 |
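A sketch: imghdr.what() accepts raw bytes through its h= argument, so the same buffer you pass to cv2.imdecode can be inspected directly (the file read below stands in for your real buffer):

```python
import imghdr

with open("image.png", "rb") as f:  # stand-in for your network buffer
    buf = f.read()

kind = imghdr.what(None, h=buf)  # the filename is ignored when h= is given
print(kind)  # e.g. 'png', 'jpeg', or None if the format is unknown
```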
27,628,753 | 2014-12-23T22:10:00.000 | 1 | 0 | 0 | 0 | 1 | python,network-programming,client-server,port,server | 0 | 27,628,809 | 0 | 3 | 0 | false | 0 | 0 | In order to do that, the server must listen on a certain port (or ports).
This means the client(s) will need to interact with it on these ports.
So... no, it is impossible to do that on some random unknown port. | 1 | 1 | 0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact port number. Is that even possible? The thought is to choose a random port number and have clients use it to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my port number. But how could my client get the port number?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | How to give a python client a port number from a python server | 0 | 0.066568 | 1 | 0 | 1 | 689 |
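The server side of that is straightforward; as the answers note, the open problem is only advertising the number. A minimal sketch:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))   # port 0 asks the OS for any free port
server.listen(5)
port = server.getsockname()[1]

# the client cannot guess this, so publish it somewhere it can look it
# up: a shared file, a tiny registry service, a DNS-SD record, etc.
print("listening on port", port)
```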
27,628,753 | 2014-12-23T22:10:00.000 | 0 | 0 | 0 | 0 | 1 | python,network-programming,client-server,port,server | 0 | 27,630,673 | 0 | 3 | 0 | false | 0 | 0 | Take a look at Zeroconf; it seems to be the path to where you are trying to get. | 3 | 1 | 0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact port number. Is that even possible? The thought is to choose a random port number and have clients use it to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my port number. But how could my client get the port number?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | How to give a python client a port number from a python server | 0 | 0 | 1 | 0 | 1 | 689 |
27,628,753 | 2014-12-23T22:10:00.000 | 0 | 0 | 0 | 0 | 1 | python,network-programming,client-server,port,server | 0 | 27,628,855 | 0 | 3 | 0 | false | 0 | 0 | You need to advertise the port number somehow. Although DNS doesn't do that (well, you could probably cook up some resource record on the server object, but that's not really done), there are many network services that do. LDAP like active directory (you need write rights), DNS-SD dns service discovery, universal plug and play, service location protocol, all come to mind. You could even record the port number on some web page somewhere and have the client read it. | 1 | 1 | 0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact port number. Is that even possible? The thought is to choose a random port number and have clients use it to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my port number. But how could my client get the port number?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | How to give a python client a port number from a python server | 0 | 0 | 1 | 0 | 1 | 689 |
27,629,227 | 2014-12-23T22:57:00.000 | -1 | 0 | 0 | 0 | 0 | python,scipy,least-squares,intel-mkl | 0 | 45,621,086 | 0 | 1 | 0 | false | 0 | 0 | You could try Intel's python distribution. It includes a pre-built scipy optimized with MKL. | 1 | 6 | 1 | 0 | I'm doing a Monte Carlo simulation in Python in which I obtain a set of intensities at certain 2D coordinates and then fit a 2D Gaussian to them. I'm using the scipy.optimize.leastsq function and it all seems to work well except for the following error:
Intel MKL ERROR: Parameter 6 was incorrect on entry to DGELSD.
The problem occurs multiple times in a simulation. I have looked around and understand it is something to do with a bug in Intel's MKL library. I can't seem to find a solution to the problem, and so I was looking at an alternative fitting function I could use. If someone does know how to get rid of the problem, that would be good also. | Intel MKL Error with Gaussian Fitting in Python? | 1 | -0.197375 | 1 | 0 | 0 | 860
27,641,616 | 2014-12-24T19:51:00.000 | 1 | 1 | 0 | 0 | 0 | python,database,numpy,dataset,storage | 0 | 27,641,772 | 0 | 1 | 1 | true | 0 | 0 | Reading 500 files in python should not take much time, as the overall file size is around a few MB. Your data structure is plain and simple in your file chunks; it'll not even take much time to parse, I guess.
If the actual slowness is because of opening and closing files, then there may be an OS-related issue (it may have very poor I/O).
Did you time how long it takes to read all the files?
You can also try using small database structures like sqlite, where you can store your file data and access the required data on the fly. | 1 | 0 | 0 | 0 | I've accumulated a set of 500 or so files, each of which has an array and a header that stores metadata. Something like:
2,.25,.9,26 #<-- header, which is actually cryptic metadata
1.7331,0
1.7163,0
1.7042,0
1.6951,0
1.6881,0
1.6825,0
1.678,0
1.6743,0
1.6713,0
I'd like to read these arrays into memory selectively. We've built a GUI that lets users select one or multiple files from disk; then each is read into the program. If users want to read in all 500 files, the program is slow opening and closing each file. Therefore, my question is: will it speed up my program to store all of these in a single structure? Something like hdf5? Ideally, this would have faster access than the individual files. What is the best way to go about this? I haven't ever dealt with these types of considerations. What's the best way to speed up this bottleneck in Python? The total data is only a few megabytes; I'd even be amenable to storing it in the program somewhere, not just on disk (but I don't know how to do this). | Better way to store a set of files with arrays? | 0 | 1.2 | 1 | 0 | 0 | 66
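If you do consolidate into HDF5, a sketch with h5py keeps each original file as one dataset with its header stored as an attribute; the parsing step here is a stand-in for your real reader:

```python
import h5py
import numpy as np

# stand-in for your real parsing: one (header, Nx2 array) pair per file
parsed_files = [("2,.25,.9,26", np.array([[1.7331, 0], [1.7163, 0]]))]

with h5py.File("all_curves.h5", "w") as hf:
    for i, (header, array) in enumerate(parsed_files):
        ds = hf.create_dataset("curve_%03d" % i, data=array)
        ds.attrs["header"] = header  # keep the cryptic metadata alongside

# later: selective, fast access with a single open file
with h5py.File("all_curves.h5", "r") as hf:
    data = hf["curve_000"][:]
    meta = hf["curve_000"].attrs["header"]
```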
27,645,172 | 2014-12-25T07:32:00.000 | 1 | 1 | 0 | 0 | 0 | python,nginx,uwsgi | 1 | 27,646,090 | 0 | 1 | 0 | false | 1 | 0 | The limit here is nginx. It cannot avoid buffering the input (except when in websockets mode). You may have more luck with apache or the uWSGI http router (albeit I suppose they are not a viable choice). | 1 | 0 | 0 | 0 | I have an api with publishers + subscribers and I want to stop a publisher from uploading a lot of data if there are no subscribers. In an effort to avoid another RTT, I want to parse the HTTP header, see if there are any subscribers and, if not, return an HTTP error before the publisher finishes sending all of the data.
Is this possible? If so, how do I achieve it? I do not have post-buffering enabled in uwsgi and the data is being uploaded with a transfer encoding of chunked. Therefore, since uWSGI is giving me a content-length header, it must have buffered the whole thing somewhere previously. How do I get it to stop?
P.S. uWSGI is being sent the data via nginx. Is there perhaps some configuration that I need to set there too? | Can uWSGI return a response before the POST data is uploaded? | 1 | 0 | 1 | 0 | 1 | 244
27,682,975 | 2014-12-29T03:17:00.000 | 3 | 0 | 0 | 0 | 0 | python,web-scraping | 0 | 27,683,149 | 0 | 1 | 0 | true | 0 | 0 | The infinite scroll is probably using an Ajax query to retrieve more data as you scroll. Use your browser's dev tools to inspect the request structure and try to hit the same endpoint directly. In this way you can get the data you need, often in json or xml format.
In chrome open the dev tools (Ctrl + Shift + I in windows) and switch to the network tab. Then begin to scroll; when more content is loaded you should see new network activity, specifically an Ajax request (you can filter by "xhr"). Click on the new network item and you will get detailed info on the request, such as headers, the request body, the structure of the response, and the url (endpoint) the request is hitting. Scraping this url is the same as scraping a website, except there will be no html to parse through, just formatted data.
Some websites will try to block this type of behavior. If that happens I suggest using phantomjs without selenium. It can be very fast (in comparison to selenium) for mimicking human interaction on websites. | 1 | 0 | 0 | 0 | I am trying to use python to scrape a website implemented with infinite scroll. Actually, the web is pinterest. I know how to use selenium webdriver to scrape a web with infinite scroll. However, the webdriver basically imitates the process of visiting the web and is slow, much slower than using BeautifulSoup and urllib for scraping. Do you know any time efficient ways to scrape a web with infinite scroll? Thanks. | Is there any fast ways to scrape a website with infinite scroll? | 0 | 1.2 | 1 | 0 | 1 | 3,050 |
27,690,388 | 2014-12-29T14:06:00.000 | 0 | 0 | 1 | 0 | 0 | python,powershell | 0 | 27,753,819 | 0 | 1 | 0 | true | 0 | 0 | According to Lukas Graf:
Replace source with . (a single dot) and the relative path after it with a full, absolute path | 1 | 1 | 0 | 0 | I want to activate this virtual environment:
(G:/virt_env/virt1)
I'm just following a virtualenv tutorial. I have created a virtual environment (look above); the next step is activating it, but that tutorial was written for Unix. So how do I activate this virtual environment using PowerShell 2? This assumes basic knowledge of PowerShell.
Edit: Question answered | How do I activate a virtual environment in Powershell 2? | 0 | 1.2 | 1 | 0 | 0 | 522 |
27,714,535 | 2014-12-31T00:22:00.000 | 3 | 0 | 0 | 0 | 0 | python,opencv,pixel,integral | 0 | 27,717,883 | 0 | 2 | 0 | false | 0 | 0 | The sumElems function in OpenCV will help you find the sum of the pixels of the whole image in python. If you want to find only the sum of a particular portion of an image, you will have to select the ROI of the image on which the sum is to be calculated.
As a side note, if you have computed the integral image, the very last pixel represents the sum of all the pixels of the image. | 2 | 2 | 0 | 0 | I have an image and want to find the sum of a part of it and then compare it to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know of the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestions? | how to do the sum of pixels with Python and OpenCV | 0 | 0.291313 | 1 | 0 | 0 | 17,785
27,714,535 | 2014-12-31T00:22:00.000 | 5 | 0 | 0 | 0 | 0 | python,opencv,pixel,integral | 0 | 27,738,842 | 0 | 2 | 0 | false | 0 | 0 | np.sum(img[y1:y2, x1:x2, c1:c2]), where c1 and c2 are the channels. | 2 | 2 | 0 | 0 | I have an image and want to find the sum of a part of it and then compare it to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know of the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestions? | how to do the sum of pixels with Python and OpenCV | 0 | 0.462117 | 1 | 0 | 0 | 17,785
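Putting the two answers together, a sketch of summing a drawn rectangle and comparing it against a threshold; the file name, coordinates, and threshold are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("image.png")  # placeholder image

# rectangle drawn on the image: top-left (x1, y1), bottom-right (x2, y2)
x1, y1, x2, y2 = 10, 20, 110, 220
roi_sum = np.sum(img[y1:y2, x1:x2])  # sums over all channels of the ROI

THRESHOLD = 500000  # placeholder value
print("above threshold" if roi_sum > THRESHOLD else "below threshold")
```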
27,716,752 | 2014-12-31T05:50:00.000 | 1 | 0 | 1 | 0 | 0 | python,string,list,comparison,overlapping | 0 | 27,716,829 | 0 | 3 | 0 | false | 0 | 0 | all possible list combinations to string, and avoiding overlapping elements
Is a combination one or more complete items, in their exact current order in the list, that match a pattern or subpattern of the string? I believe one of the requirements is not to rearrange the items in the list (ab doesn't get substituted for ba). I believe one of the requirements is not to rearrange the characters in the string. If the subpattern appears twice, then you want the combinations to reflect two individual copies of the subpattern by themselves, as well as tuples combining items of the subpattern with other subpatterns that match too. You want multiple permutations of the matches. | 1 | 0 | 0 | 0 | I want to know how to compare a string to a list.
For example
I have the string 'abcdab' and a list ['ab','bcd','da']. Is there any way to compare all possible list combinations to the string, and avoid overlapping elements, so that the output will be a list of tuples like
[('ab','da'),('bcd'),('bcd','ab'),('ab','ab'),('ab'),('da')].
The output should avoid combinations such as ('bcd', 'da'), as the character 'd' would be repeated in the tuple while it appears only once in the string.
As pointed out in the answer, the characters in the string and list elements must not be rearranged.
One way I tried was to split the string into all possible combinations and compare, which gives 2^(n-1) possibilities, n being the number of characters. It was very time consuming.
I am new to python programming.
Thanks in advance. | Python: comparing list to a string | 1 | 0.066568 | 1 | 0 | 0 | 567 |
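One brute-force way to produce exactly that output: collect every occurrence of every list element in the string, then keep only the subsets whose character spans do not overlap. This sketch is exponential in the number of occurrences, so it suits short inputs only:

```python
from itertools import combinations

def occurrences(s, words):
    # every (start, end, word) occurrence of each word in s
    occ = []
    for w in words:
        start = s.find(w)
        while start != -1:
            occ.append((start, start + len(w), w))
            start = s.find(w, start + 1)
    return occ

def non_overlapping(s, words):
    occ = occurrences(s, words)
    results = set()
    for r in range(1, len(occ) + 1):
        for combo in combinations(occ, r):
            spans = sorted(combo)
            # each span must end before the next one begins
            if all(a[1] <= b[0] for a, b in zip(spans, spans[1:])):
                results.add(tuple(w for _, _, w in spans))
    return results

print(non_overlapping("abcdab", ["ab", "bcd", "da"]))
# {('ab',), ('bcd',), ('da',), ('ab', 'ab'), ('ab', 'da'), ('bcd', 'ab')}
```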
27,719,695 | 2014-12-31T10:20:00.000 | 0 | 1 | 0 | 1 | 0 | python,encryption,https | 0 | 27,719,872 | 0 | 2 | 0 | true | 0 | 0 | if you have no problem rolling out a key file to all nodes ...
simply throw your messages into AES, and move the output like you moved the unencrypted messages ...
on the other side ... decrypt, and handle the plaintext like the messages you handled before ... | 1 | 0 | 0 | 0 | I want to create a python program that can communicate with another python program running on another machine. They should communicate via network. For me, it's super simple using BasicHTTPServer. I just have to direct my message to http:// server2 : port /my/message and server2 can do whatever action needed based on that message "/my/message". It is also very time-efficient as I do not have to check a file every X seconds or something similar. (My other idea was to put text files via ssh to the remote server and then read that file..)
The downside is that this is not password-protected and not encrypted. I would like to have both, but still keep transferring messages this simple.
The machines that are communicating know each other and I can put key files on all those machines.
I also stumbled upon twisted, but it looks rather complicated. Also gevent looks way too complicated with gevent.ssl.SSLsocket, because I have to check for byte length of messages and stuff..
Is there a simple example on how to set something like this up? | Python network communication with encryption and password protection | 0 | 1.2 | 1 | 0 | 0 | 595 |
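A minimal sketch of the shared-key idea from the answer above, using the cryptography package's Fernet recipe (AES-based and authenticated); the key file name is an assumption:

    from cryptography.fernet import Fernet

    # One-time setup, then copy shared.key to every node:
    # open('shared.key', 'wb').write(Fernet.generate_key())

    key = open('shared.key', 'rb').read()
    f = Fernet(key)

    token = f.encrypt(b'/my/message')   # send this over the wire instead of plaintext
    plaintext = f.decrypt(token)        # on the receiving side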
27,724,624 | 2014-12-31T17:57:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,apache | 0 | 27,725,014 | 0 | 1 | 0 | true | 1 | 0 | That's the nature of using it in that format/setup. In development, running 'manage.py runserver', it auto-reloads on file changes.
In production/proxy setups like yours, you need to reload/restart the service for changes to take effect. | 1 | 0 | 0 | 0 | I am testing a web application on both a shared host and Apache on localhost, using Django and fastcgi. When I edit my code and refresh the page, many times the new code does not take effect. I think this is a cache issue, but I don't know where it comes from in the application.
For example: adding a new url pattern to mysite/urls.py does not take effect until I restart the Apache server on localhost or wait some time on the shared host.
I did not find any entries in mysite/settings.py that might offer a solution for that issue. I use Django 1.7 and Python 3.4.2. | How to cancel Django cache in fastcgi | 0 | 1.2 | 1 | 0 | 0 | 47
27,743,031 | 2015-01-02T13:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,pymysql | 0 | 27,743,210 | 0 | 2 | 0 | false | 0 | 0 | This happens to be one of the reasons desktop client-server architecture gave way to web architecture. Once a desktop user has access to a dbms, they don't have to use just the SQL in your application. They can do whatever their privileges allow.
In those bad old days, client-server apps could only change rows in the DBMS via stored procedures. They didn't have direct privileges to INSERT, UPDATE, or DELETE rows. The users of those apps had accounts that were GRANTed a limited set of privileges; they could SELECT rows and run procedures, and that was it. They certainly did not have any create / drop table privilege.
(This is why a typical DBMS has such granular privilege control.)
You should restrict the privileges of the account or accounts employed by the users of your desktop app. (The same is, of course, true for web app access accounts.) Ideally, each user should have her own account. It should only grant access to the particular database your application needs.
Then, if you don't trust your users to avoid trashing your data, you can write, test, and deploy stored procedures to do every insert, update, or delete needed by your app.
This is a notoriously slow and bureaucratic way to get IT done; you may want to make good backups and trust your users, or switch to a web app.
If you do trust them tolerably well, then restrict them to the particular database employed by your app. | 1 | 1 | 0 | 0 | I have a Python client program (which will be available to a limited number of users) that fetches data from a remote MySQL-DB using the pymysql-Module.
The problem is that the login data for the DB is visible to everyone who takes a look at the code, so anyone could manipulate or delete data in the DB. Even if I stored the login data in an encrypted file, someone could still edit the code and insert their own MySQL queries (and again manipulate or delete data).
So how can I access the DB from my program and still SELECT, DELETE or UPDATE data in it, but make sure that no one can execute their own (malicious) SQL code (except the statements that are triggered by using the GUI)? | Secure MySQL login data in a Python client program | 0 | 0 | 1 | 1 | 0 | 934
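A sketch of the restricted-account setup described in the answer; the account, database and stored procedure names are hypothetical:

    import pymysql

    # Run once as an administrator (plain SQL, for reference):
    #   CREATE USER 'app_user'@'%' IDENTIFIED BY 'secret';
    #   GRANT SELECT, EXECUTE ON app_db.* TO 'app_user'@'%';

    # The client connects with the limited account: it can read rows and call
    # vetted stored procedures, but cannot DELETE or DROP anything.
    conn = pymysql.connect(host='dbhost', user='app_user',
                           password='secret', db='app_db')
    with conn.cursor() as cur:
        cur.callproc('insert_order', (42, 'widget'))   # hypothetical procedure
    conn.commit()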
27,761,448 | 2015-01-04T01:27:00.000 | 1 | 0 | 0 | 0 | 0 | python,tile,tmx,tiled,cocos2d-python | 0 | 27,761,506 | 0 | 1 | 0 | false | 0 | 1 | All I had to do was
cell.tile.image = image | 1 | 0 | 0 | 0 | -I'm using python and cocos2D
I have the file loading a tmx map, but now I want to change a specific tile to display an image from another file. I have saved the specific tile that I want to change in a variable, but I don't know how to change it.
Thanks in advance | python cocos2d change tile's image | 0 | 0.197375 | 1 | 0 | 0 | 266 |
27,761,684 | 2015-01-04T02:11:00.000 | 0 | 1 | 1 | 0 | 0 | python,mysql,ruby,json,mongodb | 0 | 27,761,882 | 0 | 1 | 1 | false | 1 | 0 | I think this all boils down to what the most important needs are for the project. These are some of the questions I would try to answer before selecting the technology:
Will I need to access records individually after inserting into the database?
Will I ever need to aggregate the data when reading it (for reporting, for instance)?
Is it more important to the project goals to have the data written quickly or read quickly?
How large do I anticipate the data will grow and will the database technology I select scale easily, cheaply and reliably to support the data volume?
Will the schema of the data change? Do I need a schemaless database solution like MongoDB?
Where are the trade offs between development time/cost, maintenance time/cost and time/cost for running the program?
Without knowing much about the particulars of your project or its goals, I would say it's generally not a good idea to store a single JSON object for the entirety of the data. This would likely make it more difficult to read the data and append to it in the future. You should probably apply some more thought to how to model your data and represent it in the database in a way that will make sense when you actually need to use it later. | 1 | 0 | 0 | 0 | I'm trying to design a system that can periodically download a large amount of data from an outside API.
A user could have around 600,000 records of data that I need to fetch once, then check back every hour or so to reconcile both datasets.
I'm thinking about doing this in Python or Ruby in background tasks eventually, but I'm curious about how to store the data.
Would it be possible/a good idea to store everything in one record hashed as JSON vs copying each record individually?
It would be nice to be able to index or search the data without anything failing, so I was wondering what would be the best implementation memory-wise.
For example, if a user has 500,000 tweet records and I want to store all of them, which would be a better implementation?
one record as JSON => user_1 = {id:1 twt:"blah"},{id:2 twt:"blah"},.....{id:600,000 twt:"blah"}
vs
many records =>
id:1 outside_id=1 twt:"blah"
id:2 outside_id=1 twt:"blah"
id:3 outside_id=1 twt:"blah"
I'm curious how I would find out how memory intensive each method is or what is the best solution.
The records are a lot more complex, with maybe 40 attributes per record I wanted to store.
Also would MySQL or MongoDB be a better solution for fastest copy/storage? | Best Way to Store Large Amount of Outside API Data... using Ruby or Python | 1 | 0 | 1 | 0 | 0 | 149 |
27,799,692 | 2015-01-06T13:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,zip,epub,epub3 | 1 | 28,436,076 | 0 | 2 | 0 | true | 1 | 0 | The solution I've found:
delete the previous mimetype file
when creating the new archive, create a new mimetype file before adding anything else: zipFile.writestr("mimetype", "application/epub+zip")
Why does it work: the mimetype is the same for all epubs, "application/epub+zip", so there is no need to use the original file. | 1 | 0 | 0 | 0 | I'm working on a script to create an epub from html files, but when I check my epub I have the following error: Mimetype entry missing or not the first in archive
The mimetype is present, but it's not the first file in the epub. Any idea how to put it in first place in any case using Python? | epub3 : how to add the mimetype at first in archive | 0 | 1.2 | 1 | 0 | 1 | 1,098
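A minimal sketch of the fix: write an uncompressed mimetype entry first, then everything else; the XML content is a placeholder:

    import zipfile

    container_xml = '...'   # hypothetical META-INF/container.xml content

    with zipfile.ZipFile('book.epub', 'w') as zf:
        # Must be the very first entry and stored without compression
        zf.writestr('mimetype', 'application/epub+zip',
                    compress_type=zipfile.ZIP_STORED)
        zf.writestr('META-INF/container.xml', container_xml,
                    compress_type=zipfile.ZIP_DEFLATED)   # remaining files as usual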
27,810,571 | 2015-01-07T02:30:00.000 | 1 | 0 | 1 | 0 | 0 | python,loops | 0 | 27,810,605 | 0 | 2 | 0 | false | 0 | 0 | Check out the module matplotlib, it was made to give plotting visuals in python. I hope this helps a little. | 1 | 0 | 0 | 0 | I am a beginner so I don't think I need to use anything complicated.
Basically I have to print y=x^+3 for the range x=0 to x=4 using formatted output and I don't know how.
From what I have learned so far, I'm supposed to use formatted output, looping and variable width output to do this.
Does anyone know how to do it? Thank you very much. | How to formatted output a graph of an equation using python? | 0 | 0.099668 | 1 | 0 | 0 | 102 |
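The exponent in "y=x^+3" is garbled in the question; assuming y = x**3 purely for illustration, a formatted-output loop could look like:

    for x in range(0, 5):                    # x = 0 to x = 4
        y = x ** 3                           # assumed equation; adjust as needed
        print('x = {0:2d}  y = {1:4d}'.format(x, y))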
27,813,209 | 2015-01-07T05:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,intellij-idea | 0 | 27,828,457 | 0 | 1 | 0 | true | 0 | 1 | Normally the binary modules for a Python interpreter are rescanned on IntelliJ IDEA restart. Please try restarting the IDE. | 1 | 0 | 0 | 0 | community!
My problem:
I have an item, namely gi.repository.Gtk, marked as "Unresolved reference: Gtk".
The Gtk module did not exist at the moment of setting up the Python SDK in IDEA; however, I installed it a little bit later.
I can't figure out how to force a re-sync of the classpath for Python. | intellij: update classes in classpath for python plug-in | 0 | 1.2 | 1 | 0 | 0 | 93
27,834,570 | 2015-01-08T06:57:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,environment-variables | 1 | 28,414,823 | 0 | 1 | 0 | true | 1 | 0 | well, it turned out that the AppConfig is not the right spot for a task like that .. i realized my loading of secrets with a modification of the projects manage.py and im planning to release an app with all of the code in the near furture :) | 1 | 0 | 0 | 0 | Id like to have my secret keys loaded via environment vars that shall be check on startup of my Django app. Im using an AppConfig for that purpose, because that code will be executed on startup.
For now i wrote a little helper to get the vars and a list of vars to check. Which is working fine.
The problem:
I also wrote a Django management command to help enter and store the needed vars and save them to the user's .profile, BUT when I have my checks in place, the AppConfig will raise errors before I even have the chance to run my configuration command :(
So how do I enable that configuration management command whilst still having that env check run on startup?
For now I'm going to use a plain Python script that doesn't load Django at all (which I don't need for now anyway), but in case I might need to alter the database (and thus need Django for some setup task), how would I be able to sneak past my own startup check in my AppConfig?
Where else might I be placing the checks?
I tried the main urls.py, but this will only be loaded once the first url lookup is needed, and thus one might start the server and not see any errors, and still the app will not work once the first url is entered in the browser. | How To check for environment vars during django startup | 0 | 1.2 | 1 | 0 | 0 | 170
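A minimal sketch of the manage.py approach the asker settled on; the variable names and command name are hypothetical:

    # manage.py (excerpt) -- verify required secrets before Django starts
    import os
    import sys

    REQUIRED_VARS = ['MYAPP_SECRET_KEY', 'MYAPP_DB_PASSWORD']   # hypothetical

    if __name__ == '__main__':
        # Skip the check when running the command that sets the vars up
        if 'configure_env' not in sys.argv:
            missing = [v for v in REQUIRED_VARS if v not in os.environ]
            if missing:
                sys.exit('Missing environment variables: ' + ', '.join(missing))
        # ... then continue with the usual execute_from_command_line(sys.argv)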
27,870,003 | 2015-01-09T22:05:00.000 | 0 | 0 | 1 | 1 | 0 | python,pip,sudo,osx-yosemite | 0 | 57,711,516 | 0 | 5 | 0 | false | 0 | 0 | If you altered your $PATH variable that could also cause the problem. If you think that might be the issue, check your ~/.bash_profile or ~/.bashrc | 2 | 156 | 0 | 0 | While installing pip and python I have run into an error that says:
The directory '/Users/Parthenon/Library/Logs/pi' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.
because I now have to install using sudo.
I had python and a handful of libraries already installed on my Mac, I'm running Yosemite. I recently had to do a clean wipe and then reinstall of the OS. Now I'm getting this prompt and I'm having trouble figuring out how to change it
Before my command line was Parthenon$ now it's Philips-MBP:~ Parthenon$
I am the sole owner of this computer and this is the only account on it. This seems to be a problem when upgrading to python 3.4, nothing seems to be in the right place, virtualenv isn't going where I expect it to, etc. | pip install: Please check the permissions and owner of that directory | 0 | 0 | 1 | 0 | 0 | 189,575 |
27,870,003 | 2015-01-09T22:05:00.000 | 61 | 0 | 1 | 1 | 0 | python,pip,sudo,osx-yosemite | 0 | 39,810,683 | 0 | 5 | 0 | false | 0 | 0 | pip install --user <package name> (no sudo needed) worked for me for a very similar problem. | 2 | 156 | 0 | 0 | While installing pip and python I have run into an error that says:
The directory '/Users/Parthenon/Library/Logs/pi' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.
because I now have to install using sudo.
I had python and a handful of libraries already installed on my Mac, I'm running Yosemite. I recently had to do a clean wipe and then reinstall of the OS. Now I'm getting this prompt and I'm having trouble figuring out how to change it
Before my command line was Parthenon$ now it's Philips-MBP:~ Parthenon$
I am the sole owner of this computer and this is the only account on it. This seems to be a problem when upgrading to python 3.4, nothing seems to be in the right place, virtualenv isn't going where I expect it to, etc. | pip install: Please check the permissions and owner of that directory | 0 | 1 | 1 | 0 | 0 | 189,575 |
27,885,666 | 2015-01-11T09:36:00.000 | 1 | 0 | 1 | 1 | 0 | python,virtualenv,virtualenvwrapper | 0 | 27,885,868 | 1 | 1 | 0 | false | 0 | 0 | When creating a virtual environment, you can specify which Python to use.
For example,
virtualenv -p/usr/bin/python2.7 env
Same for mkvirtualenv. | 1 | 0 | 0 | 0 | I am using autoenv and virtualenvwrapper in Python and trying to configure the specific Python version in it.
the autoenv file (called .env) contains (simply)
echo 'my_env'
Is there a way to configure its Python version? | how to setup virtualenv to use a different python version in each virtual env | 1 | 0.197375 | 1 | 0 | 0 | 572
27,912,757 | 2015-01-12T23:49:00.000 | 2 | 1 | 1 | 0 | 0 | python,c++,c,binary | 0 | 27,912,842 | 0 | 2 | 0 | true | 0 | 0 | The tab is represented in the ASCII chart as 0x09, or "00001001" as a binary string (0x08 is backspace, not tab).
The Enter key is different because it could represent CR (carriage return), LF (Linefeed), or both.
The CR is represented as 0x0d, or "00001101" as binary string.
The LF is represented as 0x0A, or "00001010" as binary string.
A common convention is '\t' for tab and '\r' for CR, '\n' for newline. | 1 | 0 | 0 | 0 | I'm trying to make a python program that converts text to one long binary string. The usual lines of text and sentences are easy enough to convert into binary, but I'm having trouble with the whitespace.
How do I put in a binary byte to represent the enter key?
Do I just put in the '\' and an 'n' as strings?
I would ideally want to be able to convert an entire text file into a binary string and be able to convert it back again. Obviously, if I were to do this with a python script, the tabbing would get messed up and the program would be broken.
Would a C language be better for doing this stuff?
Obviously a C program would still function without its whitespace whereas python would not.
In short, I need to know how to represent the 'tab' and 'enter' keys in binary, and how to create a function to translate them into binary. would bin(ord('\n')) be good? | text - binary conversion | 0 | 1.2 | 1 | 0 | 0 | 310 |
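A minimal round-trip sketch: every character, including '\t' and '\n', becomes an 8-bit group, so whitespace survives the conversion (assumes characters fit in one byte):

    def to_binary(text):
        return ''.join(format(ord(c), '08b') for c in text)

    def from_binary(bits):
        return ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

    s = 'a\tb\nc'
    assert from_binary(to_binary(s)) == s   # tabs and newlines come back intact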
27,929,197 | 2015-01-13T18:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,encoding,urllib | 0 | 27,931,174 | 0 | 1 | 0 | false | 0 | 0 | Yes, it is an encoding issue. Make sure that when you parse your data it doesn't encrypt it or turn it into a byte-encoded format. Those characters mean your computer can't read the data that is being stored, so it isn't being stored in a data type that we can read. | 1 | 0 | 0 | 0 | I am using urllib in python 2.7 to retrieve webpages from the internet. After parsing the data and storing it in the database, I get the following symbols – and †and — and so on. I wanted to know how these symbols are generated and how to get rid of them? Is it an encoding issue? | MySQL - Strange symbols while using urllib in Python | 0 | 0 | 1 | 0 | 0 | 55
27,952,331 | 2015-01-14T20:59:00.000 | 92 | 0 | 1 | 0 | 0 | python,pycharm | 0 | 27,952,376 | 0 | 3 | 0 | true | 0 | 0 | Menu: Run -> Edit configurations -> "+" (add new config) -> Python.
Script name: program.py
If you need to debug a script from installed packages, such as tox, you can specify the full path too. For example:
Script name: /home/your_user/.envs/env_name/bin/tox
Above, /home/your_user/.envs/env_name is the path to the virtual environment containing the tox package.
Script params: -t input1 -t1 input2 | 2 | 63 | 0 | 0 | I have been using PyCharm for a bit so I am not an expert.
How I normally ran my programs was with the terminal like so:
program.py -t input1 -t1 input2
I was wondering how I can debug this?
For other programs I wrote, I did not have any arguments so debugging was simply setting break points and pressing debug. | Debugging with PyCharm terminal arguments | 0 | 1.2 | 1 | 0 | 0 | 43,698 |
27,952,331 | 2015-01-14T20:59:00.000 | 1 | 0 | 1 | 0 | 0 | python,pycharm | 0 | 51,116,540 | 0 | 3 | 0 | false | 0 | 0 | It was almost correct but just needed a little correction with the full script path.
Menu: Run->Edit configurations->"+" (add new config)->Python.
Script name: path + /program.py
Script params: -t input1 -t1 input2 | 2 | 63 | 0 | 0 | I have been using PyCharm for a bit so I am not an expert.
How I normally ran my programs was with the terminal like so:
program.py -t input1 -t1 input2
I was wondering how I can debug this?
For other programs I wrote, I did not have any arguments so debugging was simply setting break points and pressing debug. | Debugging with PyCharm terminal arguments | 0 | 0.066568 | 1 | 0 | 0 | 43,698 |
27,955,622 | 2015-01-15T01:51:00.000 | 0 | 1 | 0 | 0 | 0 | java,python,processbuilder,inter-process-communicat | 0 | 27,969,431 | 0 | 1 | 0 | false | 1 | 0 | From your scenario, you are looking for inter-process communication.
You can achieve this using a shared file. Your Python script will write the output to a text file, and your Java program will read the same file. | 1 | 1 | 0 | 0 | Good evening all,
I am running a Python script inside Java using ProcessBuilder.
The Python script returns a list, and I don't know how to get it into Java and use it, since all I can do for the moment with ProcessBuilder is print errors or outputs.
Is it possible to get the list in Java as well?
Many thanks | Java/python using processBuilder | 1 | 0 | 1 | 0 | 0 | 266 |
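One common approach (an assumption, not the only way): have the Python script print the list as a single JSON line on stdout, and parse that line from the subprocess's output stream on the Java side:

    import json

    result = [1, 2, 3]          # hypothetical list to hand back to Java
    print(json.dumps(result))   # Java reads this line from the subprocess stdout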
27,972,459 | 2015-01-15T20:32:00.000 | 2 | 0 | 0 | 0 | 0 | python,rest,paypal | 0 | 27,989,541 | 0 | 1 | 0 | false | 1 | 0 | Got a reply from PayPal support. Apparently you can take the same token you pass to BillingAgreement.execute() and pass it to GetExpressCheckoutDetails in their classic API. I tried it and it works. It means you have to use both APIs (which we weren't planning to do) and store both API auth info, which is annoying. Hopefully they'll fix it someday, but if it's been high-priority for two months I'm not holding my breath. | 1 | 1 | 0 | 0 | Let's say I have a billing agreement, which I just executed on the callback from the PayPal site:
resource = BillingAgreement.execute(token)
The resource returned does not have any payer information (name, email, etc). I can use the ID to load the full BillingAgreement:
billing_agreement = BillingAgreement.find(resource.id)
That returns successfully, but the resulting object also lacks any payer info.
This seems like a critical oversight in the REST API's design. If I just got a user to sign up for a subscription, don't I need to know who they are? How else will I send them a confirmation email, allow them to cancel later, etc?
Thanks for the help! | In the PayPal REST API, how can I get payer information from a newly executed BillingAgreement? | 0 | 0.379949 | 1 | 0 | 1 | 555 |
27,978,383 | 2015-01-16T06:25:00.000 | 5 | 1 | 0 | 0 | 0 | python | 0 | 27,978,545 | 0 | 2 | 0 | false | 0 | 0 | Even if you did this, I'd strongly discourage it. Root can access pretty much everyone's home directory, but the nuances of adding programs to the PATH that the root user doesn't technically own can be detrimental at best - might lead to a few root services not working properly, and actively insecure at worst.
There's literally nothing wrong with installing your own copy of pyenv as another user. There's no pain involved and there's not much sense in doing it any other way. | 1 | 7 | 0 | 0 | How to use pyenv with another user?
For example, if I have installed pyenv in user test's environment, I can use pyenv when I log in as test.
However, how can I use pyenv when I log in as another user, such as root? | How to use pyenv with another user? | 0 | 0.462117 | 1 | 0 | 0 | 3,900
28,009,390 | 2015-01-18T11:56:00.000 | 6 | 0 | 0 | 0 | 0 | python,python-2.7 | 0 | 28,009,442 | 0 | 4 | 0 | false | 0 | 0 | This question is rather subjective to the definition of random, and the distribution you wish to replicate.
The simplest solution:
Choose one random number, rand1 : [31, 294] (the upper bound leaves at least 31 for each of the other two)
Choose a second random number, rand2 : [31, (325 - rand1)]
Then the third cannot be random due to the constraint so calc via 356-(rand1+rand2) | 1 | 0 | 1 | 0 | please how can I randomly select 3 numbers whose sum is 356 and each of these 3 is more than 30?
So output should be for example [100, 34, 222]
(but not [1,5,350])
I would like to use random module to do this. thank you! | randomly select 3 numbers whose sum is 356 and each of these 3 is more than 30 | 0 | 1 | 1 | 0 | 0 | 131 |
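A sketch of the accepted method with the bounds tightened so each value is strictly greater than 30; note the resulting triples are valid but not uniformly distributed:

    import random

    def three_numbers(total=356, minimum=31):
        a = random.randint(minimum, total - 2 * minimum)
        b = random.randint(minimum, total - minimum - a)
        c = total - a - b                    # guaranteed >= minimum by the bounds
        return [a, b, c]

    nums = three_numbers()
    assert sum(nums) == 356 and all(n > 30 for n in nums)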
28,017,037 | 2015-01-19T02:11:00.000 | 6 | 0 | 1 | 0 | 0 | python,heap | 0 | 28,017,065 | 0 | 2 | 0 | false | 0 | 0 | You can't. Or rather, you can't specify it as a lambda. You can however make a heap of tuples, heapq.heappush(Q, (key(v), v)) and heapq.heappop(Q)[1]. | 1 | 2 | 0 | 0 | In python heapq if you are putting in objects, how can u use a lambda to specify its key? Like heapq.heappush(Q, v, key=lambda x: f(x)).
Thanks | How to use lambdas in python heapq? | 0 | 1 | 1 | 0 | 0 | 5,551 |
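A sketch of the tuple workaround from the answer; the counter breaks ties so the payload objects are never compared:

    import heapq
    import itertools

    def f(x):
        return -x                       # hypothetical key function

    Q, counter = [], itertools.count()
    for v in [3, 1, 2]:
        heapq.heappush(Q, (f(v), next(counter), v))

    smallest = heapq.heappop(Q)[2]      # the payload sits in the last slot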
28,024,191 | 2015-01-19T12:00:00.000 | 0 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 0 | 28,024,414 | 0 | 1 | 0 | false | 0 | 0 | (My answer is based on the usage of svm.SVC, Lasso may be different.)
I think that you are supposed to pass the Gram matrix instead of X to the fit method.
Also, the Gram matrix has shape (n_samples, n_samples) so it should also be too large for memory in your case, right? | 1 | 4 | 1 | 0 | I'm trying to train a linear model on a very large dataset.
The feature space is small but there are too many samples to hold in memory.
I'm calculating the Gram matrix on-the-fly and trying to pass it as an argument to sklearn Lasso (or other algorithms) but, when I call fit, it needs the actual X and y matrices.
Any idea how to use the 'precompute' feature without storing the original matrices? | Using precomputed Gram matrix in sklearn linear models (Lasso, Lars, etc) | 0 | 0 | 1 | 0 | 0 | 1,183 |
28,024,651 | 2015-01-19T12:26:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,login,navigation | 0 | 28,024,750 | 0 | 2 | 0 | false | 1 | 0 | 1: Create your own view that the login button takes the person to when clicked, and load your index.html template there. You do not have to use the built-in login.
2: Logo? You can use any logo/format you want. Django doesn't come with a template that creates the look of the site (besides the admin one), so you need to create it. | 2 | 0 | 0 | 0 | I'm using the Django template login and want to navigate from the login to my own written index.html file.
So if a user pushes the "login" button, the result page is my index file.
My second question is how to use logos in Django/Python and what structure I need in my project.
best regards | Django Navigation from Template-Login to a self written index.html | 1 | 0 | 1 | 0 | 0 | 85 |
28,024,651 | 2015-01-19T12:26:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,login,navigation | 0 | 28,028,387 | 0 | 2 | 0 | false | 1 | 0 | Create a custom login and registration in your project, and write a custom backend if you have more users. If a login request comes from a user, then redirect to index.html. | 2 | 0 | 0 | 0 | I'm using the Django template login and want to navigate from the login to my own written index.html file.
So if a user pushes the "login" button, the result page is my index file.
My second question is how to use logos in Django/Python and what structure I need in my project.
best regards | Django Navigation from Template-Login to a self written index.html | 1 | 0 | 1 | 0 | 0 | 85 |
28,057,552 | 2015-01-21T00:14:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets,buffer,recv | 0 | 28,057,641 | 0 | 2 | 0 | false | 0 | 0 | Learn about the OSI layers and the different connection types such as TCP and UDP. socket.send on a TCP socket transmits a stream of data. If you look into the OSI layers, you will find that the 4th layer (i.e. the transport layer) buffers the data to be transmitted. The default size of the buffer depends on the implementation. | 2 | 0 | 0 | 0 | I'm trying to learn how the socket module works and I have a dumb question:
Where is socket.send()'s sent data stored before being cleared by socket.recv()?
I believe there must be a buffer somewhere in the middle awaiting socket.recv() calls to pull this data out.
I just made a test with a server which sent a lot of data at once, and then connected to a client that (intentionally) pulled data very slowly. The final result was that the data was sent within a fraction of a second and, on the other side, it was entirely received in small chunks of 10 bytes (.recv(10)), which took 20 seconds.
Where has this data been stored meanwhile? What is the default size of this buffer? How can it be accessed and modified?
thanks. | Trying to understand buffering in Python socket module | 0 | 0 | 1 | 0 | 1 | 184 |
28,057,552 | 2015-01-21T00:14:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets,buffer,recv | 0 | 28,057,652 | 0 | 2 | 0 | true | 0 | 0 | The OS (kernel) buffers the data.
On Linux the buffer size parameters may be accessed through the /proc interface. See man 7 socket for more details (towards the end). | 2 | 0 | 0 | 0 | I'm trying to learn how the socket module works and I have a dumb question:
Where is socket.send()'s sent data stored before being cleared by socket.recv()?
I believe there must be a buffer somewhere in the middle awaiting socket.recv() calls to pull this data out.
I just made a test with a server which sent a lot of data at once, and then connected to a client that (intentionally) pulled data very slowly. The final result was that the data was sent within a fraction of a second and, on the other side, it was entirely received in small chunks of 10 bytes (.recv(10)), which took 20 seconds.
Where has this data been stored meanwhile? What is the default size of this buffer? How can it be accessed and modified?
thanks. | Trying to understand buffering in Python socket module | 0 | 1.2 | 1 | 0 | 1 | 184 |
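A sketch showing how to inspect (and request) those kernel buffer sizes from Python:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))   # current receive buffer
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))   # current send buffer

    # Ask the kernel for a bigger receive buffer (it may round or cap the value)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)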
28,063,378 | 2015-01-21T09:14:00.000 | 1 | 0 | 0 | 0 | 0 | python,macos,apache,ampps | 0 | 28,284,966 | 0 | 1 | 0 | true | 1 | 0 | As far as I can tell, AMPPS does not work on Yosemite at all (@ v2.8) | 1 | 0 | 0 | 0 | I have just recently acquired a new Mac (Yosemite OSX 10.10). I am reconfiguring everything and I need to work with external configs and Python on some new projects. Which brings me to two additional questions:
how to include an extra configuration with an external config file? Can I just include it in the httpd.conf of Apache from the AMPPS GUI? Would I need to do additional settings in the Admin Panel?
how do I set up mod_wsgi in AMPPS? Is there a specific set of actions to trigger? Will I need some specific workflow to get it to work with my external config (a bunch of virtual hosts in which some applications run on Python)?
Thanks in advance. | AMPPS Yosemite Python configuration | 0 | 1.2 | 1 | 0 | 0 | 259 |
28,069,439 | 2015-01-21T14:19:00.000 | 7 | 0 | 0 | 0 | 0 | python,mysql,openerp-7,odoo | 0 | 28,069,999 | 0 | 1 | 0 | true | 1 | 0 | You can't: the Odoo ORM does not support null numeric values.
If you really need to distinguish an "empty" value from a zero value, you need a workaround: either use a string field (simpler, but it needs additional checks to allow only numbers, and it's harder to perform calculations) or use a second boolean field to mark empty values. | 1 | 8 | 0 | 0 | In Odoo/OpenERP 7, I have a nullable numeric column called balance
'balance': fields.float(string='Balance',digits=(20,2))
I am trying to use Python code to update None into the field
self.write(cr, uid, [1], { 'balance':None })
but, rather frustratingly, Python treats None the same as 0 so I end up with 0 in the DB instead of the expected Null.
Any pointers on how I can use the write command to store a null value? | Odoo - Write null to numeric column | 0 | 1.2 | 1 | 0 | 0 | 4,136 |
28,076,316 | 2015-01-21T20:34:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,scheduled-tasks,celery,django-celery | 0 | 28,076,369 | 0 | 2 | 0 | false | 1 | 0 | Use a simple cron job to trigger a custom Django management command. | 2 | 0 | 0 | 0 | Some tuples in my database need to be erased after a given time. I need my django app to check if the rows have expired.
While I can definitely write the function, how do I make Django run it every day at a fixed time?
I have heard about task queues like Celery. Are they too powerful for this? Is there anything simpler? Maybe something built-in? | How do I make my django application perform a job at a fixed time everyday? | 0 | 0.197375 | 1 | 0 | 0 | 140
28,076,316 | 2015-01-21T20:34:00.000 | -1 | 0 | 0 | 0 | 0 | python,django,scheduled-tasks,celery,django-celery | 0 | 28,076,358 | 0 | 2 | 0 | false | 1 | 0 | Use a threading.Thread to schedule a continuous event loop. In your thread, use time.sleep to create a gap before the next occurrence of the event. | 2 | 0 | 0 | 0 | Some tuples in my database need to be erased after a given time. I need my django app to check if the rows have expired.
While I can definitely write the function, how do I make Django run it every day at a fixed time?
I have heard about task queues like Celery. Are they too powerful for this? Is there anything simpler? Maybe something built-in? | How do I make my django application perform a job at a fixed time everyday? | 0 | -0.099668 | 1 | 0 | 0 | 140
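A sketch of the cron + management command approach from the first answer; the app, model and schedule are hypothetical:

    # myapp/management/commands/purge_expired.py
    from django.core.management.base import BaseCommand
    from django.utils import timezone
    from myapp.models import Entry          # hypothetical model with expires_at

    class Command(BaseCommand):
        help = 'Delete rows whose expiry time has passed'

        def handle(self, *args, **options):
            Entry.objects.filter(expires_at__lt=timezone.now()).delete()

    # crontab entry (runs every day at 03:00):
    #   0 3 * * * /path/to/python /path/to/manage.py purge_expired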
28,101,995 | 2015-01-23T02:12:00.000 | 2 | 0 | 0 | 1 | 0 | python,bash,hadoop,mapreduce,apache-spark | 0 | 28,112,455 | 0 | 1 | 0 | false | 0 | 0 | The main reason Hadoop Streaming is considered slow is that for both mapper and reducer you have to pass arguments via stdin, which means you have to serialize them as text, and to get the output of both mapper and reducer you have to deserialize it from text back into Java structures, which usually consumes a lot of time.
If you have a third-party compiled application that is capable of reading the input data from stdin and passing data to stdout, you don't have much choice but to run it in Hadoop Streaming or in a Spark pipe. But of course the native mapreduce application would be faster, as it would eliminate the need for data serialization/deserialization when passing it to the application.
But if your application just accepts the filename and reads the file by itself (from NFS, for instance), it would be the same speed as a native one, though you should consider that this type of use is not the intended case for either Hadoop or Spark - these frameworks were developed to process the data with the APIs they provide
-copy file to node
-on node execute a commercial program with file as argument
-pass output back to folder
My intuition is telling me that this should be a similar speed to the compiled versions. Would it be? | Hadoop streaming with Bash - how slow? | 0 | 0.379949 | 1 | 0 | 0 | 481 |
28,106,854 | 2015-01-23T09:36:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,function,session | 0 | 28,107,498 | 0 | 1 | 0 | true | 1 | 0 | You can't.
This question betrays a fundamental misunderstanding of how both sessions and web applications work. Web code is stateless; there is nothing "running" for any user in between the requests that user makes. If a user doesn't make any requests for longer than the session timeout, the only time the server code knows about it is the next time they actually do make a request: at which point the code will compare the last request time with the current time and either treat the session as valid or expired. If the user goes away and never comes back, there is simply no way the server will ever know. | 1 | 0 | 0 | 0 | So I'm using request.session.set_expiry(NUMBER_OF_SECONDS) in order to check if the user of my webpage has been inactive for a number of seconds, closing the session when it happens. The problem is that I want to call a function to do some things just before the session expires, and I don't know how I can do that.
Thanks in advance! | Django: calling a function before session expires | 0 | 1.2 | 1 | 0 | 0 | 312 |
28,108,106 | 2015-01-23T10:43:00.000 | 2 | 0 | 1 | 0 | 0 | python,64-bit,32-bit,anaconda,pythonxy | 1 | 28,198,803 | 0 | 1 | 1 | true | 0 | 0 | You'll need to reinstall everything for the 64-bit Python, but note that Anaconda and conda may already come with everything that you need. | 1 | 2 | 0 | 0 | I have some doubts about how to do that, I hope anybody can point me in the correct direction.
My current situation is that I am working with the python package Python(x,y) 32 bits in a Windows machine with 64 bits. And, as many of you know, I am having some problems with the Memory error.
So I am thinking about changing to 64 bits, let's say with Anaconda for example.
My concern is about what can happen with all the previous work done with 32-bit Python 2.7. Will it work with Anaconda 64?
And if I finally change to Anaconda 64, I really don't think I can still use Qt Designer anymore; if I am not wrong, it only works with 32-bit Python, right?
Sorry if any question sounds very basic; I really do not have any idea about this. | from python 32 bits to python 64 bits | 0 | 1.2 | 1 | 0 | 0 | 5,321
28,124,450 | 2015-01-24T10:20:00.000 | 2 | 0 | 0 | 0 | 0 | python,python-2.7 | 0 | 28,124,506 | 0 | 1 | 0 | true | 0 | 0 | What you could do is choose a min/max interval of how long you want to wait. Keep a note of how many requests have been made in the last 60 minutes and if you're still below the quota, download a document and wait for rand(min, max). This is not very fancy and doesn't distribute the wait times across the whole 60 minutes interval, but it's easy to implement.
Another way would be to randomly choose 100 numbers between 0 and 60*60. These are the seconds on which you make requests. Sort them and, as you progress through the array, each time wait for next - current seconds. (Or even use the sched module to simplify it a bit.) | 1 | 0 | 0 | 0 | I am using an API that returns articles in my language in certain categories. This API limits me to 100 calls per 60-minute interval.
I don't want to make 100 calls straight away and make my script wait until 60 minutes has passed.
I could then shoot an API call every 36 seconds, but I also don't want the API calls to be shot evenly.
What is a feasible way to make my script make 100 API calls at random intervals of time, as long as the 100 fit in 60 minutes?
I thought of making a function that would generate 100 timestamps in this 60 minutes interval, and then at the right time of each timestamp, it would shoot an API call, but I think that'd be overkill, and I'm not sure how I could do that either. | How can I distribute 100 API calls randomly in a 60 minutes time interval? | 1 | 1.2 | 1 | 0 | 1 | 303 |
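A sketch of the second answer's idea: draw 100 random offsets inside the hour, sort them, and sleep between consecutive ones; make_api_call is a hypothetical stand-in for the real request:

    import random
    import time

    def make_api_call():
        pass                        # hypothetical: perform one API request here

    offsets = sorted(random.uniform(0, 3600) for _ in range(100))
    previous = 0.0
    for t in offsets:
        time.sleep(t - previous)    # wait until the next scheduled moment
        previous = t
        make_api_call()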
28,127,310 | 2015-01-24T16:00:00.000 | 2 | 0 | 1 | 0 | 0 | python,python-3.x | 0 | 28,127,456 | 0 | 3 | 0 | false | 0 | 0 | On windows, you can quit the interpreter with CTRL-z and on linux/mac you can quit with CTRL-D. If you type exit, it will tell you exactly what to do.
If you are not running interactively (i.e. by running python your_script.py) the code will just end when it's reached the bottom of the file.
If you actually do need to interact with the interpreter while you are running your code, you can use input (or raw_input, depending on the python version) for fetching information from simple prompts, or the code module for more complex interactions. | 1 | 2 | 0 | 0 | I have made a number guessing game and once you have guessed the number it will say well done but idle stays open, how do I get it to close? | how do I get my program to close once finished in python | 0 | 0.132549 | 1 | 0 | 0 | 4,071 |
28,127,730 | 2015-01-24T16:40:00.000 | 3 | 0 | 1 | 0 | 0 | python,pygame,pip,python-3.4 | 0 | 28,127,786 | 0 | 9 | 0 | false | 0 | 1 | 14 y/o? Good for you! You can put the file into your python/scripts folder and run pip install *file* (where *file* is your filename). | 1 | 9 | 0 | 0 | So I have this little problem. When I try to install PyGame for Python 3.4 I download a .whl (wheel?) file and don't know how to use it. Some guys told me something about pip but don't know how to use/install it. | How to install PyGame on Python 3.4? | 0 | 0.066568 | 1 | 0 | 0 | 64,844 |
28,136,063 | 2015-01-25T11:52:00.000 | 1 | 0 | 0 | 0 | 0 | ajax,python-3.x,cookies,xmlhttprequest,bottle | 0 | 28,137,747 | 0 | 2 | 0 | false | 1 | 0 | I tried to set_cookies('test', 123, secret='mysecret') under AJAX request, it worked, but still couldn't find previous cookies.
Then I noticed that my previous cookies, called cook1 and cook2, written under a 'normal' http request, had the same domain but a different 'path' (seen in the Chrome resource explorer). They were set under path '/XXX/dev' and my AJAX request was just under path '/XXX'
So I modified my AJAX request from /XXX/do_stuff to point to '/XXX/dev/do_stuff', and then, surprise ! cook1 and cook2 could be read by my AJAX request.
Not sure if it's a Bottle bug or if such behaviour is designed on purpose (in this case, if someone can explain to me why...), but at least I have my solution. | 1 | 1 | 0 | 0 | I use the bottle set/get cookie mechanism to track my user_id (with the 'secret' param when calling set/get_cookie()).
During a normal http(s) request everything is fine, but when making an xhr request (same domain), user_id = request.get_cookie('user_id', secret='mysecret') returns None.
When checking on client browser, cookie and key/value are still available.
How to deal with it ?
(I've always been told that xhr requests are http requests, so from the same domain, cookies should be shared, no? Is the problem arising from Bottle's 'secret' handling?) | Bottle, how to get_cookie during AJAX request (same domain) | 0 | 0.099668 | 1 | 0 | 0 | 167
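A sketch generalizing the fix: pin the cookie path explicitly inside the route handlers so every route sees it, whatever URL set it; routes and values are hypothetical:

    from bottle import route, request, response

    @route('/XXX/login')
    def login():
        # path='/' makes the cookie visible to both page and AJAX handlers
        response.set_cookie('user_id', '42', secret='mysecret', path='/')
        return 'ok'

    @route('/XXX/do_stuff')
    def do_stuff():
        return request.get_cookie('user_id', secret='mysecret') or 'no cookie'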
28,139,507 | 2015-01-25T17:50:00.000 | 0 | 0 | 1 | 0 | 0 | ipython | 0 | 34,588,036 | 0 | 2 | 0 | false | 0 | 0 | You can output your history with the commands
%history -f filename
To output the history with line numbers and from all sessions:
%history -g -f filename | 1 | 3 | 0 | 0 | I use vim with a couple of plugins as a python ide. Along with an open vim session, I run an ipython session in a split console. I've found their combination is a great productivity tool for programming data analysis scripts.
What I'm missing is a way to show all current session history in a side panel, so that I could easily do some copy-pasting from there to the vim session to create a script. Something similar to 'tail -f' would do, if only I knew where ipython stores the current session history.
I already know:
ipython has '%history' and 'hist' commands, BUT I'm looking for a way to display the history in a panel outside of the ipython session.
history is stored in a sqlite file under .ipython/(profile), BUT I don't know how to access that file.
I hope I've been clear about my question.
Thanks in advance for all your help. | Where does ipython store the current session history? | 0 | 0 | 1 | 0 | 0 | 851 |
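A sketch of reading that sqlite file directly, assuming the default profile layout and the history table schema used by recent IPython versions (table name and columns are assumptions worth verifying):

    import os
    import sqlite3

    path = os.path.expanduser('~/.ipython/profile_default/history.sqlite')
    conn = sqlite3.connect(path)
    for session, line, source in conn.execute(
            'SELECT session, line, source FROM history ORDER BY session, line'):
        print(source)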
28,159,111 | 2015-01-26T21:30:00.000 | 0 | 0 | 1 | 0 | 1 | python,django | 1 | 28,159,200 | 0 | 1 | 0 | true | 1 | 0 | Make sure your wsgi file points to the right virtualenv! | 1 | 0 | 0 | 0 | I installed a django-secure into a virtualenv using pip. The install was normal. The module shows up in the virtualenv pip list and in virtualenvs/dev/lib/python2.7/site-packages. I get the following error when running my code.
ImportError: No module named djangosecure
The folder is in there and there is an __init__.py. No install issues. What am I doing wrong and how do I fix it? | Virtualenv installed package not found | 0 | 1.2 | 1 | 0 | 0 | 196
28,161,508 | 2015-01-27T00:53:00.000 | 0 | 0 | 1 | 1 | 0 | python,gitpython | 1 | 28,179,875 | 0 | 1 | 0 | false | 0 | 0 | You might not have git.ext in your PATH, but that can easily be tested by executing it yourself.
If you see an error, you can either add it to the PATH, or set the GIT_PYTHON_GIT_EXECUTABLEto the executable git-python should execute for git commandline services. | 1 | 0 | 0 | 0 | I am facing an issue while cloning a git repo.
I am using function clone_from from GitPython library
from git import Repo
Repo.clone_from("git://github.com/facebook/buck.git", "D:\sample")
I am getting error
WindowsError: The system cannot find the file specified
Can someone please tell me if this is how to clone a repo using the library? | GitPython - clone_from not working | 1 | 0 | 1 | 0 | 0 | 1,005 |
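A sketch of the environment-variable workaround from the answer above; the git.exe location is a hypothetical Windows path, and the variable must be set before importing git:

    import os
    os.environ['GIT_PYTHON_GIT_EXECUTABLE'] = r'C:\Program Files\Git\bin\git.exe'

    from git import Repo
    Repo.clone_from('git://github.com/facebook/buck.git', r'D:\sample')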
28,162,229 | 2015-01-27T02:26:00.000 | 0 | 1 | 0 | 1 | 1 | python,linux,bash,ssh | 0 | 28,223,316 | 0 | 2 | 0 | true | 0 | 0 | The solution to my problem is to use PATH=/cygdrive/c/WINDOWS/system32:/bin cmd /c in front of the script call, sth like: ssh user@host "PATH=/cygdrive/c/WINDOWS/system32:/bin cmd /c script" .This will run the script using in Windows env.
In my case the problem was that the script was run under the cygwin environment and I wanted it to be run in a Windows environment. | 1 | 1 | 0 | 0 | I know I can find multiple answers to this question but I have a problem with the result.
I have a Windows PC with a script on it and a Linux PC that has to start the script using ssh.
The problem I am seeing is that for some reason it's using the Linux environment to run the script and not the Windows env. Is this expected and if yes how can I start a remote script (From Linux) and still use the Windows env?
Linux: Python 2.7
Windows: Python 3.4
My example:
I am running: ssh user@host "WINDOWS_PYTHON_PATH Script.py arg1 arg2 arg3" and it fails internally at a copy command.
I can't run ssh user@host "Script.py arg1 arg2 arg3" because then it will fail to run the script because of the python version.
The way I run the command in Windows is using the same syntax "Script.py arg1 arg2 arg3" and it works.
It looks like it's using the Linux env to run the script. I would like to run the script on Windows no matter who triggers it. How can I achieve this? | How to execute a remote script using ssh | 1 | 1.2 | 1 | 0 | 0 | 865 |
28,173,716 | 2015-01-27T15:12:00.000 | 0 | 0 | 1 | 0 | 0 | python-multithreading | 0 | 28,207,425 | 0 | 1 | 0 | true | 0 | 0 | How about you create a list of 101 elements, 0 through 100. Then store the output from processing file x into list element x. When all processing is complete, write the data in list elements 0 to 100 to the file. | 1 | 0 | 0 | 0 | I am using Python threading to process multiple files, and I want the output of each of the files 0 through 100 being processed to be written to a file in an orderly fashion.
Currently I am saving the output of the threads as they get executed, and hence the order is not maintained.
How can I achieve this? | Python threading how get output of multiprocess in sequence | 0 | 1.2 | 1 | 0 | 0 | 109 |
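A sketch of the indexed-list idea from the answer; worker i writes only to slot i, so the list itself needs no locking:

    import threading

    def process_file(i, results):
        results[i] = 'output of file %d' % i    # hypothetical per-file work

    results = [None] * 101
    threads = [threading.Thread(target=process_file, args=(i, results))
               for i in range(101)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    with open('combined.txt', 'w') as out:
        out.write('\n'.join(results))           # ordered 0..100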
28,174,487 | 2015-01-27T15:50:00.000 | 3 | 0 | 0 | 0 | 0 | python,sockets,ethernet | 0 | 28,174,528 | 0 | 1 | 0 | false | 0 | 0 | Unplug one and if it stops working - you found the right one.
If it does not stop working, it is the other one. | 1 | 1 | 0 | 0 | I have an embedded system connected with an ethernet port to one of my 2 ethernet interfaces, but I have the problem that my python code for the socket connection does not know where to connect to the embedded system.
I mean, sometimes I get the connection and sometimes I just don't (and I have to change the cable to the other interface), because I don't know how the socket functionality is getting the right ethernet port to which it has to connect.
Is there anything I can do in my python code to know the correct ethernet port to which the embedded system is connected? (in order to know it every time I connect, without changing the cable to another interface) | I have 2 ethernet interfaces on the same PC, how to know which interface is connected in python? | 0 | 0.53705 | 1 | 0 | 1 | 172
28,183,527 | 2015-01-28T02:18:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,gmail | 0 | 28,184,142 | 0 | 2 | 0 | false | 0 | 0 | Unfortunately the categories are not exposed to IMAP. You can work around that by using filters in Gmail to apply normal user labels. (Filter on, e.g., category:social.) | 2 | 0 | 0 | 0 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | How to extract mail in "categories" label in gmail? | 1 | 0 | 1 | 0 | 1 | 596 |
28,183,527 | 2015-01-28T02:18:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,gmail | 0 | 28,218,594 | 0 | 2 | 0 | false | 0 | 0 | Yes, categories are not available in IMAP. However, rather than filters, I found that gmail api is more favorable for me to get mail by category. | 2 | 0 | 0 | 0 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | How to extract mail in "categories" label in gmail? | 1 | 0 | 1 | 0 | 1 | 596 |
28,187,233 | 2015-01-28T07:56:00.000 | 2 | 0 | 0 | 0 | 1 | python,matlab,curve-fitting,least-squares,surface | 0 | 28,189,659 | 0 | 2 | 0 | false | 0 | 0 | Dont use any toolboxes, GUIs or special functions for this problem. Your problem is very common and the equation you provided may be solved in a very straight-forward manner. The solution to the linear least squares problem can be outlined as:
The basis of the vector space is x^2, y^2, z^2, xy, yz, zx, x, y, z, 1. Therefore your vector has 10 dimensions.
Your problem may be expressed as Ap=b, where p = [A B C D G H I J K L]^T is the vector containing your 10 parameters. The right hand side b should be all zeros, but will contain some residual due to model errors, uncertainty in the data or for numerical reasons. This residual has to be minimized.
The matrix A has a dimension of N by 10, where N denotes the number of known points on surface of the parabola.
A = [x(1)^2 y(1)^2 ... y(1) z(1) 1
...
x(N)^2 y(N)^2 ... y(N) z(N) 1]
Solve the overdetermined homogeneous system of linear equations. Note that with b = 0, computing p = A\b only returns the trivial all-zero solution; either fix one parameter (e.g. set L = 1 and move its column to the right-hand side) or take p as the right singular vector of A belonging to the smallest singular value. | 2 | 0 | 1 | 0 | I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the parabola accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the parabola until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y) and I am not aware of anything in python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the matrix turned out to be invertible and hence my parameters were just all zero. I am also stuck on this and any help would be appreciated. I don't really mind the method, as I am familiar with matlab, python and linear algebra if need be.
Thanks | Rotated Paraboloid Surface Fitting | 0 | 0.197375 | 1 | 0 | 0 | 1,508 |
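A sketch of the homogeneous least-squares fit described in the answer; because b is all zeros, the parameter vector is taken from the SVD rather than from A\b (the data arrays below are random stand-ins):

    import numpy as np

    # x, y, z: 1-D arrays of the measured surface points
    x, y, z = np.random.rand(50), np.random.rand(50), np.random.rand(50)

    A = np.column_stack([x**2, y**2, z**2, x*y, y*z, z*x,
                         x, y, z, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]          # [A B C D G H I J K L], unit-norm null-space estimate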
28,187,233 | 2015-01-28T07:56:00.000 | 0 | 0 | 0 | 0 | 1 | python,matlab,curve-fitting,least-squares,surface | 0 | 28,188,683 | 0 | 2 | 0 | false | 0 | 0 | Do you have enough data points to fit all 10 parameters - you will need at least 10?
I also suspect that 10 parameters are too many to describe a general paraboloid, meaning that some of the parameters are dependent. My feeling is that a translated and rotated paraboloid needs 7 parameters (although I'm not really sure)
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the parabola accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the parabola until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y) and I am not aware of anything in python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the matrix turned out to be invertible and hence my parameters were just all zero. I am also stuck on this and any help would be appreciated. I don't really mind the method, as I am familiar with matlab, python and linear algebra if need be.
Thanks | Rotated Paraboloid Surface Fitting | 0 | 0 | 1 | 0 | 0 | 1,508 |
28,215,502 | 2015-01-29T13:05:00.000 | 2 | 0 | 0 | 1 | 0 | python,svn,externals | 0 | 28,215,771 | 0 | 1 | 0 | true | 0 | 0 | First of all, don't use sed. Use Python's string methods or the re module.
Second, I recommend running svn propget ... first, to fetch the old value. Then, you manipulate it (within Python, no need to run sed). Finally, you run svn propset.
Alternatively, you could run a second Python script as editor for svn propedit. Here, too, you don't need sed if you already have Python. | 1 | 0 | 0 | 0 | I am currently writing a python script which needs to run a sed command to replace stuff from the svn:externals data.
I tried to run sed on "svn propedit svn:externals ." but the outcome is not the one expected.
Does anyone know how to do this ? | Run replace command on svn:externals (python) | 0 | 1.2 | 1 | 0 | 0 | 153 |
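A sketch of the propget/propset round trip suggested in the answer; the replacement rule is a hypothetical example:

    import subprocess

    old = subprocess.check_output(['svn', 'propget', 'svn:externals', '.'])
    new = old.decode().replace('old-server', 'new-server')   # hypothetical edit
    subprocess.check_call(['svn', 'propset', 'svn:externals', new, '.'])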
28,226,283 | 2015-01-29T22:58:00.000 | -3 | 0 | 1 | 0 | 1 | python,deployment | 0 | 28,262,034 | 0 | 2 | 0 | false | 0 | 0 | So the question is how do I solve this?
You cannot solve this problem that way. There is no method in setup.py to describe external dependencies outside of the Python ecosystem. Just document them in a README. | 1 | 7 | 0 | 0 | For python applications that install with pip, how can you handle their C extension requirements automatically?
For example, the mysqlclient module requires development libs of MySQL installed on the system. When you initially install an application requiring that module it'll fail if the MySQL development libraries are not on the system. So the question is how do I solve this?
Is there a way to solve this with setup.py already that I do not know about?
If not am I supposed to use a pure python module implementation?
Note; I'm not looking for answers like "just use py2exe". | How to handle C extensions for python apps with pip? | 0 | -0.291313 | 1 | 0 | 0 | 371 |
28,228,431 | 2015-01-30T02:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,pygame,collision | 0 | 28,228,513 | 0 | 1 | 0 | true | 0 | 1 | What you are looking for is functionality usually provided by a so-called physics engine. For very basic shapes, it is simple enough to code the basic functionality yourself. (The simplest case for 2D shapes is the collision detection between circles).
Collision detection gets pretty hard pretty quickly, especially if you want to do it at a reasonably fast rate (such as you would need for the sort of project you are describing) and also especially if you are dealing with arbitrary, non-regular shapes (which your description seems to indicate). So, unless you are interested in learning how to code an optimized collision detection system, I suggest you google for python physics engines. I have never used any, so I can't personally recommend one.
Good luck! | 1 | 0 | 0 | 0 | I have a pygame program where there's a face in the center. What I want the program to do is have a bunch of objects on the screen, all irregular. Some would be circles, others would be cut-out pictures of objects like surf boards, chairs, bananas, etc. The user would be able to drag the objects around, and they'd collide with each other and the face in the center, and so be unable to pass through them. Could anyone show me how I would do this? Thanks!
-EDIT- And by not be able to pass through, I mean they'd move along the edge of the object, trying to follow the mouse. | Pygame image collision | 0 | 1.2 | 1 | 0 | 0 | 123 |
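A sketch of the simplest case mentioned in the answer, circle-vs-circle collision:

    import math

    def circles_collide(x1, y1, r1, x2, y2, r2):
        # Colliding when the center distance is below the sum of the radii
        return math.hypot(x2 - x1, y2 - y1) < r1 + r2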
28,253,855 | 2015-01-31T16:29:00.000 | 0 | 0 | 0 | 0 | 1 | python,flask | 1 | 28,257,714 | 0 | 1 | 0 | false | 1 | 0 | You need to configure the firewall on your server/workstation to allow connections on port 5000. Setting the IP to 0.0.0.0 allows connections to your machine, but only if you have the port open. Also, you will need to connect via the IP of your machine and not localhost, since localhost will only work from the machine where the server is running. | 1 | 0 | 0 | 0 | I have an apache server setup on a Pi, and I'm trying to learn Flask. I set it up so that the 'view' from the index '/' returns "hello world". Then I ran my main program. Nothing happens from the browser on the PC I'm SSH'ing from; I just get an error saying , but when I used the Pi directly and went to http://localhost:5000/ I got a response. I read about setting Host to '0.0.0.0' but that didn't help. How can I get my Flask to accept all connections? Does it make a difference that I have an 'index.html' in '/'? | Flask isn't recognising connections from other clients | 1 | 0 | 1 | 0 | 0 | 48
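A sketch of the server-side half described in the answer; the firewall still has to allow TCP port 5000:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'hello world'

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)   # listen on all interfaces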
28,280,308 | 2015-02-02T14:48:00.000 | 0 | 0 | 1 | 0 | 0 | python,debugging,spyder | 0 | 36,927,378 | 0 | 7 | 0 | false | 0 | 0 | One minor extra regarding point 3:
It also seemed to me the debug console frequently froze, doing prints, evaluating, etc, but pressing the stop (Exit debug) button usually got it back to the bottom of the call stack and then I could go back up ('u') to the frame I was debugging in. Worth a try. This might be for a later version of Spyder (2.3.5.2) | 4 | 82 | 0 | 0 | I like Python and I like Spyder but I find debugging with Spyder terrible!
Every time I put a breakpoint, I need to press two buttons: first the debug button and then the continue button (it pauses at the first line automatically), which is annoying.
Moreover, rather than having the standard IPython console with auto-completion etc., I have a lousy ipdb>> console, which is just garbage.
The worst thing is that this console freezes very frequently, even if I write prints or simple evaluations to try to figure out what the bug is. This is much worse than MATLAB.
Last but not least, if I call a function from within the ipdb>> console and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start debugging (Ctrl+F5).
Do you have a solution, or can you tell me how you debug Python scripts and functions?
I am using a fresh install of Anaconda on Windows 8.1 64-bit. | How do I debug efficiently with Spyder in Python? | 0 | 0 | 1 | 0 | 0 | 113,747 |
28,280,308 | 2015-02-02T14:48:00.000 | 0 | 0 | 1 | 0 | 0 | python,debugging,spyder | 0 | 49,370,618 | 0 | 7 | 0 | false | 0 | 0 | You can use debug shortcut keys like:
Step Over: F10
Step Into: F11
These can be configured in Tools > Preferences > Keyboard shortcuts. | 4 | 82 | 0 | 0 | I like Python and I like Spyder, but I find debugging with Spyder terrible!
Every time I put a breakpoint, I need to press two buttons: first the debug button and then the continue button (it pauses at the first line automatically), which is annoying.
Moreover, rather than having the standard IPython console with auto-completion etc., I have a lousy ipdb>> console, which is just garbage.
The worst thing is that this console freezes very frequently, even if I write prints or simple evaluations to try to figure out what the bug is. This is much worse than MATLAB.
Last but not least, if I call a function from within the ipdb>> console and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start debugging (Ctrl+F5).
Do you have a solution, or can you tell me how you debug Python scripts and functions?
I am using a fresh install of Anaconda on Windows 8.1 64-bit. | How do I debug efficiently with Spyder in Python? | 0 | 0 | 1 | 0 | 0 | 113,747 |
28,280,308 | 2015-02-02T14:48:00.000 | 61 | 0 | 1 | 0 | 0 | python,debugging,spyder | 0 | 28,285,708 | 0 | 7 | 0 | true | 0 | 0 | (Spyder maintainer here) After our 4.2.0 version, released in November 2020, the debugging experience in Spyder is quite good. What we provide now is what people coming from Matlab would expect from a debugger, i.e. something that works like IPython and lets you inspect and plot variables at the current breakpoint or frame.
Now about your points:
If there is a breakpoint present in the file you're trying to debug, then Spyder enters debug mode and continues until the first breakpoint is met. If it's present in another file, then you still need to press Debug first and then Continue.
IPdb is the IPython debugger console. In Spyder 4.2.0 or above it comes with code completion, syntax highlighting, history browsing of commands with the up/down arrows (separate from the IPython history), multi-line evaluation of code, and inline and interactive plots with Matplotlib.
This is fixed now. Also, to avoid clashes between Python code and Pdb commands, if you have (for instance) a variable called n and write n in the prompt to see its value, we will show it instead of running the n Pdb command. To run that command instead, you have to prefix it with an exclamation mark, like this: !n
This is fixed too. You can set breakpoints in IPdb and they will be taken into account in your current session. | 4 | 82 | 0 | 0 | I like Python and I like Spyder, but I find debugging with Spyder terrible!
Every time I put a breakpoint, I need to press two buttons: first the debug button and then the continue button (it pauses at the first line automatically), which is annoying.
Moreover, rather than having the standard IPython console with auto-completion etc., I have a lousy ipdb>> console, which is just garbage.
The worst thing is that this console freezes very frequently, even if I write prints or simple evaluations to try to figure out what the bug is. This is much worse than MATLAB.
Last but not least, if I call a function from within the ipdb>> console and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start debugging (Ctrl+F5).
Do you have a solution, or can you tell me how you debug Python scripts and functions?
I am using a fresh install of Anaconda on Windows 8.1 64-bit. | How do I debug efficiently with Spyder in Python? | 0 | 1.2 | 1 | 0 | 0 | 113,747 |
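A hypothetical IPdb session illustrating points 3 and 4 of the maintainer's answer above (the variable n, the file name, and the line numbers are illustrative, not from the original posts):

    ipdb> n                # a local variable named n exists, so its value is shown
    5
    ipdb> !n               # the exclamation mark forces the Pdb "next" command
    > script.py(12)main()
    ipdb> b script.py:20   # breakpoints set here are honored in the current session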
28,280,308 | 2015-02-02T14:48:00.000 | 1 | 0 | 1 | 0 | 0 | python,debugging,spyder | 0 | 39,023,817 | 0 | 7 | 0 | false | 0 | 0 | Here is how I debug in Spyder in order to avoid freezing the IDE. I do this if I alter the script while in debugging mode.
I close out the current IPython (debugging) console [x]
Open a new one [Menu bar -> Consoles -> Open an IPython console]
Enter debug mode again [blue play/pause button].
Still a bit annoying, but it has the added benefit of clearing (resetting) the variable list. | 4 | 82 | 0 | 0 | I like Python and I like Spyder, but I find debugging with Spyder terrible!
Every time I put a breakpoint, I need to press two buttons: first the debug button and then the continue button (it pauses at the first line automatically), which is annoying.
Moreover, rather than having the standard IPython console with auto-completion etc., I have a lousy ipdb>> console, which is just garbage.
The worst thing is that this console freezes very frequently, even if I write prints or simple evaluations to try to figure out what the bug is. This is much worse than MATLAB.
Last but not least, if I call a function from within the ipdb>> console and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start debugging (Ctrl+F5).
Do you have a solution, or can you tell me how you debug Python scripts and functions?
I am using a fresh install of Anaconda on Windows 8.1 64-bit. | How do I debug efficiently with Spyder in Python? | 0 | 0.028564 | 1 | 0 | 0 | 113,747 |
28,308,285 | 2015-02-03T20:38:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 28,308,422 | 0 | 2 | 1 | false | 0 | 0 | It depends.
If your code is spending most of its time waiting for network operations (likely, in a web scraping application), threading is appropriate. The best way to implement a thread pool is to use concurrent.futures (available since Python 3.2, with a futures backport on PyPI for Python 2). Failing that, you can create a Queue.Queue object (queue.Queue in Python 3) and write each thread as an infinite loop that consumes work items from the queue and processes them.
If your code is spending most of its time processing data after you've downloaded it, threading is useless due to the GIL. concurrent.futures also provides support for process concurrency, but again requires Python 3 (or the backport). For older Pythons, use multiprocessing. It provides a Pool type which simplifies creating a process pool.
You should profile your code (using cProfile) to determine which of those two scenarios you are experiencing. | 1 | 0 | 0 | 0 | Since my scraper is running so slowly (one page at a time), I'm trying to use threads to make it work faster. I have a function scrape(website) that takes in a website to scrape, so I can easily create each thread and call start() on it.
Now I want to implement a num_threads variable that is the number of threads that I want to run at the same time. What is the best way to handle those multiple threads?
For example: suppose num_threads = 5; my goal is to start 5 threads, grab the first 5 websites in the list, and scrape them. Then, if thread #3 finishes, it will grab the 6th website from the list to scrape immediately, not wait until the other threads end.
Any recommendations for how to handle this? Thank you | Python what is the best way to handle multiple threads | 0 | 0 | 1 | 0 | 1 | 1,243 |
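A minimal sketch of the pool behavior the asker wants, using the concurrent.futures approach from the answer above (scrape and the URL list are stand-ins for the asker's own):

    from concurrent.futures import ThreadPoolExecutor

    def scrape(website):
        return website  # placeholder for the real fetch-and-parse logic

    websites = ["http://example.com/page%d" % i for i in range(20)]  # placeholder URLs
    num_threads = 5

    # The executor keeps num_threads workers busy: as soon as one worker
    # finishes a site, it immediately picks up the next pending one.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        results = list(pool.map(scrape, websites))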
28,314,822 | 2015-02-04T06:27:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 28,314,945 | 0 | 2 | 0 | false | 0 | 0 | A background task should never try to kill a foreground task - it might be doing something important. The background task should pass a status of "Restart Needed" and the foreground task should exit when convenient or after prompting the user. This would be an extension of your current status checking.
In your background thread you can have a status that is periodically fetched by the foreground, and the foreground can take action based on the returned status. You could also supply a callback to the background thread that it calls when there is a problem. | 1 | 0 | 0 | 0 | I have a client-and-server model. The server periodically sends its health status to the client. The client has a background thread to take care of it (the main thread is doing something else). Once the client notices the server is in a bad state, it will do some cleanup work and then kill itself (kill the client process).
The problem is this:
At the very beginning of the process, it does atexit.register(specific_cleanup_func). Once the client recognizes the server is in a bad state, the background thread will do general_cleanup_func() and os.kill(os.getpid(), signal.SIGTERM). I hoped the os.kill() called by the background thread would trigger the registered specific_cleanup_func, but that is not the case. I also tried calling sys.exit() from the background thread, but the process does not exit. I wonder how to trigger the registered function from the background thread while killing the process, or how to let the background thread ask the main thread to do all that cleanup and then sys.exit(). | python background thread cannot trigger function registered at atexit.register() | 0 | 0 | 1 | 0 | 0 | 451
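One sketch of making the question's pattern work, based on two facts: atexit handlers do not run on an unhandled SIGTERM, and Python signal handlers always execute in the main thread, so raising SystemExit there lets atexit fire (function names mirror the question's):

    import atexit
    import signal
    import sys

    def specific_cleanup_func():
        print("specific cleanup")

    atexit.register(specific_cleanup_func)

    # With this handler installed, SIGTERM raises SystemExit in the main
    # thread instead of killing the process outright, so atexit-registered
    # functions get a chance to run.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

    # The background thread can then keep doing:
    #   os.kill(os.getpid(), signal.SIGTERM)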
28,318,105 | 2015-02-04T09:42:00.000 | 0 | 0 | 0 | 0 | 1 | django,python-2.7,apache2,mod-wsgi,pyinotify | 0 | 28,320,868 | 0 | 2 | 0 | false | 1 | 0 | You shouldn't prevent spawning multiple processes, because it's a good thing, especially in a production environment. You should consider using an external tool, separate from Django, or add a check for whether the folder listener is already running (for example, by monitoring the persistence of a PID file and its content). | 1 | 0 | 0 | 0 | I'm running Apache with Django and mod_wsgi enabled in 2 different processes.
I read that the second process is an on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once.
I understood that running django runserver with the --noreload flag will resolve the problem in development mode, but I cannot find a solution for this in production mode on my Apache web server.
I have two questions:
How can I run with only one process in production, or at least make only one process run the ready() function?
Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on server startup, not on the first request.
For further explanation, I am experiencing a scenario as follows:
The ready() function creates a folder listener using pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes.
I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener. | how to run Apache with mod_wsgi and django in one process only? | 0 | 0 | 1 | 0 | 0 | 444 |
28,318,105 | 2015-02-04T09:42:00.000 | 2 | 0 | 0 | 0 | 1 | django,python-2.7,apache2,mod-wsgi,pyinotify | 0 | 28,321,203 | 0 | 2 | 0 | false | 1 | 0 | No, the second process is not an on-change listener; I don't know where you read that. That happens with the dev server, not with mod_wsgi.
You should not try to prevent Apache from serving multiple processes. If you do, the speed of your site will be massively reduced: it will only be able to serve a single request at a time, with others queued until the first finishes. That's no good for anything other than a toy site.
Instead, you should fix your AppConfig. Rather than blindly spawning a listener, you should check to see if it has already been created before starting a new one. | 2 | 1 | 0 | 0 | I'm running Apache with Django and mod_wsgi enabled in 2 different processes.
I read that the second process is an on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once.
I understood that running django runserver with the --noreload flag will resolve the problem in development mode, but I cannot find a solution for this in production mode on my Apache web server.
I have two questions:
How can I run with only one process in production, or at least make only one process run the ready() function?
Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on server startup, not on the first request.
For further explanation, I am experiencing a scenario as follows:
The ready() function creates a folder listener using pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes.
I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener. | how to run Apache with mod_wsgi and django in one process only? | 0 | 0.197375 | 1 | 0 | 0 | 444 |
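A minimal sketch of the guard both answers above suggest, using atomic creation of a PID file so only one process starts the listener (the app name, lock path, and listener stub are assumptions, not from the original posts):

    import os
    from django.apps import AppConfig

    LOCK_PATH = "/tmp/myapp-listener.pid"  # hypothetical lock-file location

    def start_folder_listener():
        pass  # the pyinotify watcher would be set up here

    class MyAppConfig(AppConfig):
        name = "myapp"  # hypothetical app name

        def ready(self):
            try:
                # O_CREAT | O_EXCL is atomic: only one process can create
                # the file, so only that process starts the folder listener.
                fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            except OSError:
                return  # another process already owns the listener
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            # Note: the lock file must be removed on shutdown, or no
            # process will start the listener after a restart.
            start_folder_listener()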
28,327,779 | 2015-02-04T17:36:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,lighttable | 0 | 28,327,929 | 0 | 4 | 0 | false | 0 | 0 | Hit Ctrl + Space to bring up the control pane. Then start typing Set Syntax and select Set Syntax to Python. Start typing your Python, then press Ctrl + Shift + Enter to build and run the program. | 3 | 6 | 0 | 0 | I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead?
I looked on Google and GitHub and I couldn't find anything promising.
I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal. | Running Python 3 from Light Table | 0 | 0 | 1 | 0 | 0 | 4,744 |
28,327,779 | 2015-02-04T17:36:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,lighttable | 0 | 28,331,730 | 0 | 4 | 0 | false | 0 | 0 | I had the same problem. It worked for me after saving the file with a .py extension and then typing Cmd+Enter. | 3 | 6 | 0 | 0 | I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead?
I looked on Google and GitHub and I couldn't find anything promising.
I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal. | Running Python 3 from Light Table | 0 | 0 | 1 | 0 | 0 | 4,744 |