Dataset columns (name: type, min to max):
Q_Id: int64, 337 to 49.3M
CreationDate: stringlengths, 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: stringlengths, 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: stringlengths, 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: stringlengths, 15 to 29k
Title: stringlengths, 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
17,303,329
2013-06-25T17:03:00.000
1
0
1
0
python,compatibility
17,303,683
4
false
0
0
No, what you named is pretty much how it's done, though the What's New pages and the documentation proper may be more useful than the full changelog. Compatibility with such a huge, moving target is infeasible to automate even partially. It's not as much work as it sounds, because:
- Some people do have test suites ;-)
- You don't (usually) need to consider bugfix releases (such as 2.7.x for various x). It's possible that your code requires a bug fix, but generally the .0 releases are quite reliable, and code compatible with x.y.0 can run on any x.y.z version.
- Thanks to the backwards-compatibility policy, it is enough to establish a minimum supported version; all later releases of the same major version will stay compatible. This doesn't help in your case, as 2.7 is the last 2.x release ever, but if you target, say, 2.5, then you usually don't have to check for 2.6 or 2.7 compatibility.
- If you keep your eyes open while coding, and have a bit of experience as well as a good memory, you'll know when you've used functionality that was introduced in a recent version. Even if you don't know which version specifically, you can look it up quickly in the documentation.
- Some people start out with the intent of supporting a specific version, and keep that in mind while developing. Even if the code happens to work on other versions, they'd consider those unsupported and wouldn't claim compatibility.
So, you could either limit yourself to 2.7 (it's been out for three years), or perform tests on older releases. If you just want to determine whether it's compatible, not which incompatibilities there are and how they can be fixed, you can:
- Search the What's New pages for new features, most importantly new syntax, that you used.
- Check the version constraints of the third-party libraries you used.
- Search the documentation of the standard-library modules you use for newly added functionality.
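The "minimum supported version" point above can be sketched as a small runtime guard; the MIN_VERSION floor here is a hypothetical example, not something the answer prescribes:

```python
import sys

# Hypothetical floor: the oldest version the project has actually been
# tested against, per the "minimum supported version" advice above.
MIN_VERSION = (2, 6)

def is_supported(version=None):
    """Return True if `version` (default: the running interpreter's
    major.minor) meets the declared floor."""
    if version is None:
        version = sys.version_info[:2]
    return tuple(version) >= MIN_VERSION

print(is_supported((2, 7)))  # -> True
print(is_supported((2, 5)))  # -> False
```

Thanks to the backwards-compatibility policy the answer describes, a single floor like this is usually all a project needs to declare.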
4
4
0
I have created a medium-sized project in Python 2.7.3 containing around 100 modules. I wish to find out with which previous versions of Python (e.g. 2.6.x, 2.7.x) my code is compatible (before releasing my project into the public domain). What is the easiest way to find out? Solutions I know of: install multiple versions of Python and check in every version, but I don't have test cases defined yet, so I'd need to define those first; or read and compare the changelogs of the various Python versions I wish to check compatibility for, and work it out accordingly. Kindly provide better solutions.
How do I find out all previous versions of python with which my code is compatible
0.049958
0
0
778
17,303,329
2013-06-25T17:03:00.000
3
0
1
0
python,compatibility
17,303,410
4
false
0
0
I don't really know of a way to get around this without some test cases. Even if your code can run on an older version of Python, there is no guarantee that it works correctly without a suite of test cases that exercises it sufficiently.
4
4
0
I have created a medium-sized project in Python 2.7.3 containing around 100 modules. I wish to find out with which previous versions of Python (e.g. 2.6.x, 2.7.x) my code is compatible (before releasing my project into the public domain). What is the easiest way to find out? Solutions I know of: install multiple versions of Python and check in every version, but I don't have test cases defined yet, so I'd need to define those first; or read and compare the changelogs of the various Python versions I wish to check compatibility for, and work it out accordingly. Kindly provide better solutions.
How do I find out all previous versions of python with which my code is compatible
0.148885
0
0
778
17,303,329
2013-06-25T17:03:00.000
1
0
1
0
python,compatibility
17,303,916
4
false
0
0
1) If you're going to maintain compatibility with previous versions, testing is the way to go. Even if your code happens to be compatible now, it can stop being so at any moment in the future if you don't pay attention. 2) If backwards compatibility is not an objective but just a "nice side-feature for those lucky enough", an easy way for OSS is to let users try it out, noting that "it was tested in <version> but may work in previous ones as well". If there's anyone in your user base interested in running your code in an earlier version (and maintain compatibility with it), they'll probably give you feedback. If there isn't, why bother?
4
4
0
I have created a medium-sized project in Python 2.7.3 containing around 100 modules. I wish to find out with which previous versions of Python (e.g. 2.6.x, 2.7.x) my code is compatible (before releasing my project into the public domain). What is the easiest way to find out? Solutions I know of: install multiple versions of Python and check in every version, but I don't have test cases defined yet, so I'd need to define those first; or read and compare the changelogs of the various Python versions I wish to check compatibility for, and work it out accordingly. Kindly provide better solutions.
How do I find out all previous versions of python with which my code is compatible
0.049958
0
0
778
17,303,329
2013-06-25T17:03:00.000
1
0
1
0
python,compatibility
17,304,022
4
false
0
0
It's a lot easier with some test cases, but manual testing can give you a reasonable idea. Take the furthest-back version that you would hope to support (I would suggest 2.5.x, but go further back if you must). Manually test with that version, keeping notes of what you did and, especially, where it fails, if anywhere. If it does fail, either address the issue or do a binary search to see at which version the failure point(s) disappear. This could work even better if you start from a version that you are quite sure will fail, maybe 2.0.
4
4
0
I have created a medium-sized project in Python 2.7.3 containing around 100 modules. I wish to find out with which previous versions of Python (e.g. 2.6.x, 2.7.x) my code is compatible (before releasing my project into the public domain). What is the easiest way to find out? Solutions I know of: install multiple versions of Python and check in every version, but I don't have test cases defined yet, so I'd need to define those first; or read and compare the changelogs of the various Python versions I wish to check compatibility for, and work it out accordingly. Kindly provide better solutions.
How do I find out all previous versions of python with which my code is compatible
0.049958
0
0
778
17,303,998
2013-06-25T17:41:00.000
0
0
0
0
python,python-2.7,audio,tkinter
17,304,375
1
false
0
1
It depends on the sound file format: if it is a .wav, you can probably just read it into a NumPy array and then plot it; otherwise you will have to parse the file format first.
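A minimal sketch of the reading step, using only the standard-library wave and struct modules and assuming a mono 16-bit file; the plotting itself (e.g. matplotlib's FigureCanvasTkAgg embedded in the Tkinter window) is left out. The tiny in-memory WAV is fabricated here so the sketch is self-contained:

```python
import io
import struct
import wave

def read_wav_samples(source):
    """Return a list of integer samples from a mono 16-bit WAV file
    (or file-like object)."""
    with wave.open(source, "rb") as wf:
        n = wf.getnframes()
        raw = wf.readframes(n)
        # "<%dh" = n little-endian signed 16-bit values, one per frame (mono)
        return list(struct.unpack("<%dh" % n, raw))

# Build a tiny in-memory WAV purely for demonstration.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit
    wf.setframerate(8000)
    wf.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))
buf.seek(0)

samples = read_wav_samples(buf)
print(samples)  # -> [0, 1000, -1000, 0]
```

Once you have the sample list (or a NumPy array), any plotting widget can draw it.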
1
0
0
Does anyone know how to display the waveform of a sound file in a Tkinter window (Python)?
Plot wavewform in Tkinter (Python)
0
0
0
257
17,305,880
2013-06-25T19:29:00.000
0
0
0
0
python,dataset,match
17,306,062
1
false
0
0
Let's call the two files Dirty and Clean. You could have a loop that indexes through Dirty, and within it another loop that indexes through Clean to see whether the line you are searching for exists. If the line does exist in Clean, write it out into a new file (call it NEWDATABASE); if not, write what already exists in Dirty. If you are more specific with your question and show what the lines look like, I can help more.
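A hedged sketch of the loop described above, using a substring match since the question is about partial matches; the function and data names are placeholders, and the clean list is kept as a plain list to mirror the nested-loop idea:

```python
def clean_up(dirty_lines, clean_entries):
    """For each dirty line, emit the first clean entry it contains;
    otherwise emit the dirty line unchanged."""
    result = []
    for line in dirty_lines:
        for entry in clean_entries:
            if entry in line:       # partial match: clean entry inside dirty line
                result.append(entry)
                break
        else:                       # no break: nothing matched
            result.append(line)
    return result

print(clean_up(["xx Ford Focus zz", "junk"], ["Ford Focus"]))
# -> ['Ford Focus', 'junk']
```

For large files, turning `clean_entries` into a set (for exact matches) or an index keyed by first word would avoid the quadratic scan.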
1
0
0
I have a database which consists of a line with car brands, models and a lot of rubbish and some other, clean information. I also have a database with most of the car brands and models, and I want to check if these brands and models appear in the dirty lines and replace them with the brand and model. I am more or less new to Python, so it would be nice to receive some support.
Check if part of lines match with lines in dataset
0
0
0
77
17,308,521
2013-06-25T22:16:00.000
2
1
1
0
python,testing
17,308,545
1
false
0
0
No, you can't really detect whether you're in a test context, at least not without a lot of unnecessary machinery. For example, you could keep a state variable in the testing package that you set when running your tests, but then you would have to import that module (or variable) in all of your modules, which would be far from elegant. Globals are evil. The best way to filter output based on the execution context is to use the logging module: emit all non-essential warning messages at a low level (like DEBUG) and ignore them when you run your tests. Another option would be to add a custom level for the messages you explicitly want to ignore when running the tests.
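A minimal sketch of that logging-based filtering, with a hypothetical logger name: chatty diagnostics go out at DEBUG, and a run configured at WARNING (as a test setup might be) never sees them:

```python
import io
import logging

# Hypothetical application logger.
log = logging.getLogger("myapp")

def do_work():
    log.debug("chatty diagnostic that tests should not see")
    log.warning("real problem that should always be visible")
    return "done"

# Capture this logger's output so we can inspect what got through.
stream = io.StringIO()
log.addHandler(logging.StreamHandler(stream))
log.setLevel(logging.WARNING)  # the "test context" configuration

do_work()
print("chatty" in stream.getvalue())        # -> False
print("real problem" in stream.getvalue())  # -> True
```

The test suite only has to configure the level once; no module ever needs to know whether it is under test.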
1
1
0
I want to suppress certain warning messages when Python is running in a test context. Is there any way to detect this globally in Python?
Is there a way to detect that Python is running a test?
0.379949
0
0
61
17,309,163
2013-06-25T23:23:00.000
6
0
1
0
python,max,minimum
17,309,178
4
true
0
0
There is no built-in function for that. You can just do your_list.index(min(your_list)).
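For illustration (not in the original answer), both the two-pass one-liner and a single-pass variant; on ties, both return the index of the first occurrence:

```python
def argmin(seq):
    # two passes over the data, exactly as the answer suggests
    return seq.index(min(seq))

def argmin_single_pass(seq):
    # single pass: compare indices by the value they point at
    return min(range(len(seq)), key=seq.__getitem__)

print(argmin([3, 1, 2]))              # -> 1
print(argmin_single_pass([3, 1, 2]))  # -> 1
```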
2
3
0
Does Python have a built-in function like min() and max() except that it returns the index rather than the item?
Python min and max functions, but with indices
1.2
0
0
388
17,309,163
2013-06-25T23:23:00.000
2
0
1
0
python,max,minimum
17,309,331
4
false
0
0
If you have NumPy, it has argmax and argmin functions you can use.
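A short illustration of the NumPy route; note that argmin/argmax return the index of the first occurrence when there are ties:

```python
import numpy as np

values = np.array([3, 1, 4, 1, 5])
print(np.argmin(values))  # -> 1 (first occurrence of the minimum)
print(np.argmax(values))  # -> 4
```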
2
3
0
Does Python have a built-in function like min() and max() except that it returns the index rather than the item?
Python min and max functions, but with indices
0.099668
0
0
388
17,309,279
2013-06-25T23:33:00.000
1
0
0
1
python,celery
17,370,082
1
false
0
0
This question lacks details. I guess you need a distributed file system with which both of your computers can work. There are plenty of solutions: GridFS in MongoDB, HDFS in Hadoop. You can also try a simpler solution like SSHFS; in that case, one of your servers mounts the other server's file system. rsync can clone a remote directory if you are not worried about consistency.
1
1
0
I have a program output files. I use celery to output the files with two parallel computers so the output files will distributed in two computers. How can I write a program read files from these two computers?
celery read files from different computers
0.197375
0
0
162
17,309,889
2013-06-26T00:51:00.000
3
0
0
0
python,debugging,flask
52,030,732
17
false
1
0
Quick tip: if you use PyCharm, go to Edit Configurations => Configurations, enable the FLASK_DEBUG checkbox, and restart the run.
5
174
0
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
How to debug a Flask app
0.035279
0
0
341,826
17,309,889
2013-06-26T00:51:00.000
1
0
0
0
python,debugging,flask
54,051,191
17
false
1
0
Use loggers and print statements in the development environment; you can go for Sentry in production environments.
5
174
0
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
How to debug a Flask app
0.011764
0
0
341,826
17,309,889
2013-06-26T00:51:00.000
10
0
0
0
python,debugging,flask
58,817,088
17
false
1
0
To activate debug mode in Flask, simply type set FLASK_DEBUG=1 in CMD on Windows, or export FLASK_DEBUG=1 in a Linux terminal, then restart your app and you are good to go!
5
174
0
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
How to debug a Flask app
1
0
0
341,826
17,309,889
2013-06-26T00:51:00.000
-4
0
0
0
python,debugging,flask
41,045,846
17
false
1
0
If you are running it locally and want to be able to step through the code: python -m pdb script.py
5
174
0
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
How to debug a Flask app
-1
0
0
341,826
17,309,889
2013-06-26T00:51:00.000
0
0
0
0
python,debugging,flask
71,165,056
17
false
1
0
If you're using VSCode, press F5 or go to "Run" and "Run Debugging".
5
174
0
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
How to debug a Flask app
0
0
0
341,826
17,312,282
2013-06-26T05:37:00.000
2
0
1
0
python,state,python-stackless,stackless
17,339,840
2
false
0
0
To be more concrete: Stackless adds pickling support to a number of built-in elements, such as execution frames, modules, and other runtime objects. However, code objects, such as classes, functions and modules, are all pickled by name. What this means is that on the other machine, the same objects must be accessible through the import mechanism. In other words, the pickled execution state will contain the current local variables and so on, but the contents of code objects or modules will not be pickled; these need to be accessible by name when the state is unpickled.
1
2
0
Given a large (4.5 GB codebase) python testing framework whose execution involves many tens of files, many of which are not directly pickle-able, is it possible to wrap initial execution of the program in a one line function, create a Stackless tasklet around that function, and, during execution, pickle the tasklet as a way of saving the whole program's state? What is the limit of Stackless' tasklet pickling capabilities?
Using Stackless Python to save the state of a large running program?
0.197375
0
0
283
17,314,366
2013-06-26T07:46:00.000
3
1
1
0
python,performance,profiling
17,314,426
3
false
0
0
timeit has been a standard module since Python 2.3; take a look at its documentation.
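A quick look at the module the answer points to: timeit.timeit runs a statement repeatedly and returns the total elapsed seconds (the statement here is an arbitrary example). Note it times whole statements, not individual lines; for per-function hot spots the stdlib also ships cProfile:

```python
import timeit

# Run the statement 1000 times and return the total wall-clock seconds.
elapsed = timeit.timeit("sum(range(100))", number=1000)
print(type(elapsed).__name__)  # -> float
```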
1
4
0
I've written a Python script, but running it is taking a lot longer than I had anticipated, and I've no obvious candidate for particular lines in the script taking up the runtime. Is there anything I can put in my code to check how long it's taking to run through each line? Many thanks.
Check running time per line in python
0.197375
0
0
6,436
17,315,214
2013-06-26T08:33:00.000
1
0
0
0
python,django,django-templates
17,315,760
1
true
1
0
Since requests are stateless, you will have to somehow "save" the state of your radio buttons. One option would be to use sessions; the other would be to use a form and instantiate it with the submitted data.
1
0
0
I am using django. My webpage works like this, If i check the radio button and click on submit. it redirects to the same page with jobs redefined on the basis of which radiobuttons were checked. My problem is after loading the page none of the radio buttons are checked. so I would like to know is there any method so that when redirect the same page(ie form action="") the previous selected radio buttons(ie before submit) are selected in this page too?
Is there any simple way to check radio buttons when page loads, based on the what is checked before the page submit?
1.2
0
0
125
17,315,881
2013-06-26T09:07:00.000
84
0
0
0
python,pandas
17,347,945
4
true
0
0
How about: df.index.is_monotonic
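A sketch of the accepted suggestion; note that on pandas 2.0+ the property is spelled is_monotonic_increasing (the bare Index.is_monotonic alias from the answer was deprecated and later removed):

```python
import pandas as pd

# Deliberately unsorted index.
df = pd.DataFrame({"x": [1, 2, 3]}, index=[10, 30, 20])

print(df.index.is_monotonic_increasing)               # -> False
print(df.sort_index().index.is_monotonic_increasing)  # -> True
```

The check is O(n) and does not sort, which matches the question's "preferably without sorting it again".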
1
44
1
I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again. e.g. I can test an index to see if it is unique by index.is_unique() is there a similar way for testing sorted?
How can I check if a Pandas dataframe's index is sorted
1.2
0
0
16,454
17,319,422
2013-06-26T12:00:00.000
25
0
1
0
python,pycharm,pep8
28,055,797
8
false
0
0
For PyCharm 4: File >> Settings >> Editor >> Code Style >> Right margin (columns). Suggestion: take a look at the other options in that tab; they're very helpful.
3
343
0
I am using PyCharm on Windows and want to change the settings to limit the maximum line length to 79 characters, as opposed to the default limit of 120 characters. Where can I change the maximum amount of characters per line in PyCharm?
How do I set the maximum line length in PyCharm?
1
0
0
204,587
17,319,422
2013-06-26T12:00:00.000
7
0
1
0
python,pycharm,pep8
37,107,526
8
false
0
0
You can even set a separate right margin for HTML, under: File >> Settings >> Editor >> Code Style >> HTML >> Other Tab >> Right margin (columns). This is very useful because HTML and JS lines are usually longer than Python lines. :)
3
343
0
I am using PyCharm on Windows and want to change the settings to limit the maximum line length to 79 characters, as opposed to the default limit of 120 characters. Where can I change the maximum amount of characters per line in PyCharm?
How do I set the maximum line length in PyCharm?
1
0
0
204,587
17,319,422
2013-06-26T12:00:00.000
1
0
1
0
python,pycharm,pep8
48,395,959
8
false
0
0
For PyCharm 2017, do the following: File >> Settings >> Editor >> Code Style, then provide values for Hard Wrap and Visual Guides; for wrapping while typing, tick the checkbox. NB: look at the other tabs as well, viz. Python, HTML, JSON, etc.
3
343
0
I am using PyCharm on Windows and want to change the settings to limit the maximum line length to 79 characters, as opposed to the default limit of 120 characters. Where can I change the maximum amount of characters per line in PyCharm?
How do I set the maximum line length in PyCharm?
0.024995
0
0
204,587
17,321,167
2013-06-26T13:18:00.000
0
0
1
0
python,design-patterns
17,321,669
2
false
0
0
There are multiple choices, and which one fits depends entirely on your complete scenario: Chain of Responsibility, if your different classes need to follow a chain of operations; Decorator, when you don't know in advance which sequence of extra features to layer onto your class object; Builder, which would help you to assign parameter values to your class.
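The chaining in the question usually relies on each method returning self (a fluent interface); a minimal sketch with made-up method names:

```python
class Pipeline:
    """Fluent-interface sketch: each step updates internal state and
    returns self, so calls chain as in the question's example."""

    def __init__(self, value):
        self.value = value

    def add(self, n):
        self.value += n
        return self          # returning self is what enables chaining

    def mul(self, n):
        self.value *= n
        return self

print(Pipeline(2).add(3).mul(4).value)  # -> 20
```

Each method sees the result of the previous one via self, which is exactly the "intermediate result" handoff the question asks about.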
1
13
0
I have a class with some methods. Can I pass results between method calls? Example call: result = my_obj.method_1(...).method_2(...).method_3(...), where method_3(...) receives the result from method_2(...), which in turn received the result from method_1(...). Are there patterns or anything else that address the example above?
Calling chains methods with intermediate results
0
0
0
8,889
17,322,820
2013-06-26T14:23:00.000
2
0
0
0
python,decode,dbf,visual-foxpro
17,346,761
3
false
0
0
'?' characters don't convey much. Try looking at the contents of the memo fields as hex, and see whether what you're seeing looks anything like text in any encoding (apologies if you've already tried this using Python). Of course, if it is actually encrypted, you may be out of luck unless you can find out the key and method.
1
2
0
I recently acquired a ton of data stored in Visual FoxPro 9.0 databases. The text I need is in Cyrillic (Russian), but of the 1000 .dbf files (complete with .fpt and .cdx files), only 4 or 5 return readable text. The rest (usually in the form of memos) returns something like this: ??9Y?u? yL??x??itZ?????zv?|7?g?̚?繠X6?~u?ꢴe} ?aL1? Ş6U?|wL(Wz???8???7?@R? .FAc?TY?H???#f U???K???F&?w3A??hEڅԦX?MiOK?,?AZ&GtT??u??r:?q???%,NCGo0??H?5d??]?????O{?? z|??\??pq?ݑ?,??om???K*???lb?5?D?J+z!?? ?G>j=???N ?H?jѺAs`c?HK\i ??9a*q?? For the life of me, I can't figure out how this is encoded. I have tried all kinds of online decoders, opened up the .dbfs in many database programs, and used Python to open and manipulate them. All of them returns the similar messiness as above, but never readable Russian. Note: I know that these databases are not corrupt, because they came accompanied by enterprise software that can open, query and read them successfully. However, that software will not export the data, so I am left working directly with the .dbfs. Happy to share an example .dbf if would help get to the bottom of this.
Can't Read Encoded Text in Visual FoxPro DBF FIles
0.132549
0
0
1,208
17,323,142
2013-06-26T14:37:00.000
0
1
0
0
python,audio,noise
17,323,482
3
false
1
0
Not quite my field, but I suspect that if you take a spectrum (do a Fourier transform, maybe) and compare "good" and "noisy" recordings, you will find that the noise contributes a spectral level across the band that is higher in the bad recordings than in the good ones. Take a look at the signal processing section in SciPy; it can probably help.
2
2
0
Is there any way to algorithmically determine audio quality from a .wav or .mp3 file? Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise). I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise. I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on. So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear.
Determining sound quality from an audio recording?
0
0
0
3,067
17,323,142
2013-06-26T14:37:00.000
1
1
0
0
python,audio,noise
17,326,588
3
false
1
0
It all depends on what your quality problems are, which is not 100% clear from your question, but here are some suggestions: In the case where volume is high and clarity is low, I'm guessing the problem is that the user has the input gain too high. After the recording, you can simply check for distortion. Even better, you can use Automatic Gain Control (AGC) during recording to prevent this from happening in the first place. In the case of too much noise, I'm assuming the issue is that the speaker is too far from the mic. In this case Steve's suggestion might work, but to make it really work you'd need to do a ton of work comparing sample recordings and developing statistics to see how you can discriminate. In practice, I think this is too much work. A simpler alternative that I think will be easier and more likely to work (although not guaranteed) would be to compute an envelope of your signal, build a histogram from that, and see how the histogram compares to those of existing good and bad recordings. If we are talking about speech only, you could divide the signal into three frequency bands (with a time-domain filter, not an FFT) to get an idea of how much is noise (the high and low bands) and how much is sound you care about (the center band). Again, though, I would use AGC during recording, and if the AGC finds it needs to set the input gain too high, it's probably a bad recording.
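The "envelope, then histogram" idea above can be sketched with a crude peak-per-window envelope; the window size and sample values here are arbitrary illustrations, not tuned parameters:

```python
def envelope(samples, window=4):
    """Crude amplitude envelope: the peak |sample| in each fixed-size
    window. The resulting values would feed the histogram comparison
    described in the answer."""
    return [max(abs(s) for s in samples[i:i + window])
            for i in range(0, len(samples), window)]

print(envelope([0, 3, -5, 1, 2, -2, 0, 1]))  # -> [5, 2]
```

A noisy recording tends to show a flatter envelope histogram (energy everywhere) than a clean speech recording (quiet gaps between loud bursts).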
2
2
0
Is there any way to algorithmically determine audio quality from a .wav or .mp3 file? Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise). I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise. I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on. So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear.
Determining sound quality from an audio recording?
0.066568
0
0
3,067
17,324,670
2013-06-26T15:45:00.000
1
0
0
0
python,scroll,qtablewidget
21,234,521
1
true
0
1
This is how I've solved it: while populating table cells with text, if the text has over 1000 lines (text.count("\n")), I put it in a QTextEdit() and then set it with setCellWidget. The reason I didn't put them all in QTextEdit()s is that Windows can show only a limited number of GUI elements (<20000) and, as I said, I have 10 columns x up to 100000 rows.
1
1
0
Problem: I have QTableWidget cells populated with text (10 columns). I found out that no matter how many rows I have (10 or 100000), scrolling over rows with height over ~3000 is very slow and not smooth enough (I've used table.verticalHeader().sectionSize(i) to find out the height of every row). What I've tried: I set the height of those rows using table.setRowHeight(i, 3000), but I still get a delay of 1-5 seconds when scrolling over them. In that case I also need to manually resize the row height in order to see all the text in the row (because it was cut off), and that is slow too. Questions: I need to call table.resizeRowsToContents() for all the smaller rows, so I think the solution would be to set a limit on the height of every row in the table. There is a setMinimumSectionSize() method, but I can't find one for the maximum size; how can I do this? Is it possible to have scrollbars inside every cell whose height is >3000, so I can scroll through the text inside that cell? Maybe that would speed up scrolling through the table? Any help would be appreciated.
Qtablewidget does not scroll smoothly
1.2
0
0
913
17,325,299
2013-06-26T16:14:00.000
3
1
0
0
python-3.x,easygui
32,529,223
2
false
0
0
In addition to what @Benjooster answered previously: apparently the font settings are sometimes not in easygui.py, but rather in Python27\Lib\site-packages\easygui\boxes\state.py.
1
4
0
How can you change the font settings for the input boxes and message text in EasyGUI? I know you have to edit a file somewhere, but that's about it. Exactly how to do it and what to edit would be appreciated. Thanks in advance.
Python EasyGUI module: how to change the font
0.291313
0
0
6,064
17,326,583
2013-06-26T17:22:00.000
0
0
0
0
python,tkinter,tk
33,200,160
2
false
0
1
Another possible way is to insert a frame and resize that, e.g.: import tkinter as tk; root = tk.Tk(); frame = tk.Frame(root, width=1000, height=1000); frame.pack(); root.mainloop() (note that Frame needs the tk. prefix and that mainloop must actually be called). The size of your window will then be determined by the frame, although the answer already given works just fine too.
1
2
0
How can I show a window created with Tkinter.Tk() outside visible screen? I need to make it much bigger than the desktop size, and show part of it defined by coordinates.
Tkinter window outside desktop
0
0
0
656
17,328,275
2013-06-26T18:57:00.000
0
0
0
0
python,django,escaping
23,383,849
1
false
1
0
The problem is resolved in Django 1.6. You can update with: sudo pip install -U django
1
1
0
Some of my text is escaped twice after upgrading from Django 1.4 to Django 1.5. For instance, one label in my template, "{{ field.label_tag }}", is displayed as "Email ou nom d&#39;utilisateur". Is there something to change in the settings to avoid the double escaping? The text "Email ou nom d'utilisateur" comes from the django.po file. The {{ field.label_tag }} comes from the file signin_form.html of the userena package, version 1.2.1. "Email ou nom d'utilisateur" is the French translation of "Email or username"; the translation lives in django.po, and _(u"Email or username") comes from form.py, line 147, of the userena package.
django i18n msgstr quote is escaped twice
0
0
0
190
17,328,290
2013-06-26T18:58:00.000
0
0
0
0
python,opengl,graphics,blender
17,328,734
2
false
0
1
"I want to create a small-scale application like Blender (graphics rendering software) any tips for me?" Yes: readjust your perception of software size/complexity. I occasionally contribute to Blender (TBH, it's been years since I submitted something substantial), and over the years it has turned into a mighty suite. The codebase is correspondingly large.
2
0
0
I have been thinking about my final year project topic and to be honest I want to create something GREAAAT like many others. I know C,C++,Java and Python (Python is getting quite popular these days).. I want to create a small-scale application like Blender (graphics rendering software) any tips for me? I prefer using OpenGL and it's shading language rather than Direct3D since it is open-source. Tell me the stuffs I should know to pull this off and also if the combination of python and OpenGL a good choice for this application ?
Creating small scale application like Blender?
0
0
0
688
17,328,290
2013-06-26T18:58:00.000
0
0
0
0
python,opengl,graphics,blender
17,330,177
2
false
0
1
A small object-viewer should definitely be possible. That's something you can build and add features upon, depending on how much time is left. I would do the visualization and movement in your scene first, then some basic interactions with your objects (translating, rotating, etc.). The final step would be adding tools (edit polygons, sculpt, etc.). If you are fit enough in C++, OpenGL and Software-Architecture on a larger scale it should be doable.
2
0
0
I have been thinking about my final year project topic and to be honest I want to create something GREAAAT like many others. I know C,C++,Java and Python (Python is getting quite popular these days).. I want to create a small-scale application like Blender (graphics rendering software) any tips for me? I prefer using OpenGL and it's shading language rather than Direct3D since it is open-source. Tell me the stuffs I should know to pull this off and also if the combination of python and OpenGL a good choice for this application ?
Creating small scale application like Blender?
0
0
0
688
17,328,538
2013-06-26T19:11:00.000
0
0
1
0
python,settings,reset,restore,python-idle
42,125,592
2
false
0
0
I know you're using a Mac, but here's how to reset WinPython's IDLEX: go to the install location, open settings, and rename .idlerc to idlerc.bak. Note: if you later want to restore it, Windows will not let you rename idlerc.bak back to .idlerc, so you will need to launch IDLEX, close it, then copy the files from idlerc.bak into .idlerc.
1
3
0
I was messing with the Preferences in Python IDLE and I couldn't edit the size of the window. So I right clicked on the windows size where it says "Width and Height" and got a ridiculous size for width! Now IDLE wont even turn on. I tried uninstalling it and installing it again but it doesn't work. I have a Mac. How do I reset IDLE?
How to restore IDLE to factory settings
0
0
0
4,613
17,330,079
2013-06-26T20:41:00.000
3
0
1
0
python,django,pycharm
17,909,331
1
false
1
0
I feel your problem. All you have to do is Ctrl + click on the definition. Please note, however, that this does not open the actual file: it does not redirect you to the actual function, but rather to a skeleton of it. If you want to go to the actual function, you will need to get to it by clicking on External Libraries in your sidebar and doing a search.
1
4
0
I use PyCharm as my IDE for working with Django. So far, its navigation shortcuts have proven very useful. I can go to a specific (project) file with Ctrl+Shift+N, I can go to any class definition with Ctrl+N, and I can go to any symbol with Ctrl+Shift+Alt+N. This is great, but lately I've seen that it would be very useful too to have a shortcut to move to a specific external (or project) module. Is there any shortcut where I can type, for example, django.contrib and see the modules inside the django.contrib package, or base64 and see the modules matching base64, just as easily as I can go to a specific symbol, class, or file?
Navigate to a specific module in PyCharm
0.53705
0
0
1,264
17,332,929
2013-06-27T01:04:00.000
2
0
1
0
python,return,init
17,332,945
3
false
0
0
Have you considered raising an exception? That is the usual way to signal a failure.
3
14
0
First off, I know that the __init__() function of a class in Python cannot return a value, so sadly this option is unavailable. Due to the structure of my code, it makes sense to have data assertions (and prompts for the user to give information) inside the __init__ function of the class. However, this means that the creation of the object can fail, and I would like to be able to gracefully recover from this. I was wondering what the best way to continue with this is. I've considered setting a global boolean as a 'valid construction' flag, but I'd prefer not to. Any other ideas (besides restructuring so assertions can happen outside of the initialization and values are passed in as arguments)? I'm basically looking for a way to have return 0 on success and return -1 on failure during initialization. (Like most C system calls)
Python __init__ return failure to create
0.132549
0
0
18,862
17,332,929
2013-06-27T01:04:00.000
0
0
1
0
python,return,init
70,865,898
3
false
0
0
What about writing a @classmethod factory function? You can then check all the conditions and either create and initialize a new instance, or return None. The calling function then just has to check whether the returned value is not None.
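A minimal sketch of that factory pattern (the Connection class and its validation rule are made up for illustration):

```python
class Connection:
    def __init__(self, host):
        # __init__ assumes the argument was already validated
        self.host = host

    @classmethod
    def create(cls, host):
        """Check the conditions first; return an instance, or None on failure."""
        if not host or " " in host:
            return None
        return cls(host)

conn = Connection.create("example.com")  # a Connection instance
bad = Connection.create("")              # None; the caller checks for this
```

The caller then only has to test `if conn is not None:` before using the object.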
3
14
0
First off, I know that the __init__() function of a class in Python cannot return a value, so sadly this option is unavailable. Due to the structure of my code, it makes sense to have data assertions (and prompts for the user to give information) inside the __init__ function of the class. However, this means that the creation of the object can fail, and I would like to be able to gracefully recover from this. I was wondering what the best way to continue with this is. I've considered setting a global boolean as a 'valid construction' flag, but I'd prefer not to. Any other ideas (besides restructuring so assertions can happen outside of the initialization and values are passed in as arguments)? I'm basically looking for a way to have return 0 on success and return -1 on failure during initialization. (Like most C system calls)
Python __init__ return failure to create
0
0
0
18,862
17,332,929
2013-06-27T01:04:00.000
17
0
1
0
python,return,init
17,332,994
3
true
0
0
You could raise an exception when either assertion fails, or - if you really don't want to, or can't, work with exceptions - you can write the __new__ method in your classes. In Python, __init__ is technically an "initializer" method: it should fill in the attributes and acquire some of the resources your object will need during its life cycle. However, Python does define a real constructor, the __new__ method, which is called prior to __init__ - and unlike __init__, __new__ actually does return a value: the newly created (uninitialized) instance itself. So you can place your checks inside __new__ and simply return None if something fails - otherwise, return the result of the call to the superclass's __new__ method (one can't do the actual memory allocation for the object in pure Python, so ultimately you have to call a constructor written in native code in a superclass - usually this is object.__new__ at the base of your class hierarchy). NB: In Python 2, you must have object as the base class for your hierarchy - otherwise not only is __new__ not called, but a whole lot of features added later to Python objects will just not work. In short, class MyClass(object):, never class MyClass: - unless you are on Python 3.
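A sketch of that __new__ approach (the class name and the negative-value check are illustrative, not from the question):

```python
class Validated(object):  # explicit object base: required on Python 2
    def __new__(cls, value):
        if value < 0:
            # the check failed: "fail" construction by returning None;
            # note that __init__ is then skipped entirely
            return None
        # delegate the actual allocation to the native constructor
        return super(Validated, cls).__new__(cls)

    def __init__(self, value):
        self.value = value

obj = Validated(3)    # a Validated instance with obj.value == 3
none = Validated(-1)  # None, and no exception raised
```

Because __new__ returned something that is not an instance of the class, Python never calls __init__ on it, which is exactly why the invalid case stays cheap.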
3
14
0
First off, I know that the __init__() function of a class in Python cannot return a value, so sadly this option is unavailable. Due to the structure of my code, it makes sense to have data assertions (and prompts for the user to give information) inside the __init__ function of the class. However, this means that the creation of the object can fail, and I would like to be able to gracefully recover from this. I was wondering what the best way to continue with this is. I've considered setting a global boolean as a 'valid construction' flag, but I'd prefer not to. Any other ideas (besides restructuring so assertions can happen outside of the initialization and values are passed in as arguments)? I'm basically looking for a way to have return 0 on success and return -1 on failure during initialization. (Like most C system calls)
Python __init__ return failure to create
1.2
0
0
18,862
17,341,034
2013-06-27T10:44:00.000
0
0
0
0
python,eclipse,openerp
17,343,886
1
false
1
0
Add a many2many field relating to ir.attachment. Check the "Send by Email" button on an invoice: it opens a wizard in which we can add many attachments as well as an email body. For example, add a many2many field relating to ir.attachment and, in the XML line of the field, specify widget="many2many_binary". I don't know whether it is possible to show images as many2many.
1
0
0
Hi, I have created a custom OpenERP module with several fields. I also have a field for attaching an image file. But now I need a field that can attach multiple image files. How can I do this? Hoping for suggestions.
how to add field for multiple image attachment in openerp module
0
0
0
1,243
17,344,335
2013-06-27T13:20:00.000
0
0
0
0
java,python,excel,passwords,xls
17,344,366
1
false
0
0
If you search, there are a number of applications you can download that will unlock the workbook.
1
0
0
So I have a password-protected XLS file which I've forgotten the password for... I'm aware it's a date within a certain range, so I'm trying to write a brute forcer to try various dates of the year. However, I can't find how to use Python/Java to enter the password for the file. It's protected such that I can't open the XLS file unless I have the password, and it has some very important information on there (so important I kept the password in a safe place that I now can't find, lol). I'm using Fedora. Are there any possible suggestions? Thank you.
How to enter password in XLS files with python?
0
1
0
268
17,346,488
2013-06-27T14:53:00.000
0
0
1
1
python,job-scheduling
45,596,425
3
false
0
0
If you want to set this up on Windows, the Task Scheduler is there. Before that, write a batch file with all the commands to run the Python programs one by one, and then register it with the Task Scheduler. It works.
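If the goal is just "run each script only after the previous one succeeded", a small Python driver can also replace the batch file; a sketch (the module names are the ones from the question and are assumed to sit in the current directory):

```python
import subprocess
import sys

def run_in_sequence(scripts):
    """Run each script with the current interpreter; stop on the first failure."""
    for script in scripts:
        code = subprocess.call([sys.executable, script])
        if code != 0:
            print("%s exited with code %d, stopping." % (script, code))
            return False
    return True

# usage (hypothetical file names from the question):
# run_in_sequence(["module1.py", "module2.py", "module3.py",
#                  "module4.py", "module5.py"])
```

A non-zero exit code from any script stops the chain, which is the same "successful completion" gate a scheduler would give you.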
1
1
0
I have to run a list of Python jobs one by one, each after the successful completion of the previous one. How can I accomplish this in a development environment? I know I can use a scheduler in a production environment. For example: module1.py module2.py module3.py module4.py module5.py I need to run module1.py, then after its successful completion trigger module2, then module3... I have heard of the cron scheduler; can I install it in a Windows environment and set it up? Also, I'm on Windows and use PyDev to develop my applications.
running python jobs in sequence
0
0
0
598
17,348,253
2013-06-27T16:17:00.000
0
0
0
1
python,rest,authentication,tornado,userid
17,350,408
1
false
1
0
I am assuming that your authentication function talks to a database and that each page in you app hits the database one or more times. With that in mind, you should probably just authenticate each request. Many cloud/web applications have multiple database queries per page and run just fine. So when performance does get to be problem in your app (it probably won't for a long time), you'll likely already have an average of n queries per page where n is greater than 1. You can either work on bringing down that average or work on making those queries faster.
1
1
0
I would like to maintain statelessness but I also don't want to call my login function on each authenticated request. Would using tornado's secure cookie functionality be feasible for storing the userid in each request for a mobile app? I'm trying to keep performance in mind, so although basic http authentication would work, I dont want to call a login function on each request to get the users id.
How to authenticate a user in a RESTful api and get their user id? (Tornado)
0
0
0
513
17,348,492
2013-06-27T16:29:00.000
3
1
0
0
python,c,python-c-api
17,471,190
1
true
0
0
I eventually found the PyFrame_GetLineNumber(PyFrameObject *f) C function, whose source is located in frameobject.c.
1
3
0
I have a Python code calling some C code (.so file). Is there a way, from within the C code, go get the line number it has been called from at the Python side?
Python calling C: how could C send Python's line number it has been called from?
1.2
0
0
208
17,350,684
2013-06-27T18:31:00.000
0
0
0
1
python,google-app-engine,asynchronous
17,354,787
1
true
1
0
It depends on how long the "interaction" takes. App Engine has a limit of 60 seconds per HTTP request. If your external systems send data periodically, then I would advise grabbing the data in small chunks to respect the 60-second limit. Aggregate those into blobs and then process the data periodically using tasks.
1
2
0
My Python AppEngine app interacts with slow external systems (think receiving data from narrow-band connections). Half-hour-long interactions are a norm. I need to run 10-15 of such interactions in parallel. My options are background tasks and "background threads" (not plain Python threads). Theoretically they look about the same. I'd stick with tasks since background threads don't run on the local development server. Are there any significant advantages of one approach over the other?
Long-running I/O-bound processes in AppEngine: tasks or threads?
1.2
0
0
193
17,351,154
2013-06-27T18:58:00.000
1
0
1
1
python,eclipse,pydev
20,921,210
1
false
0
0
I had this problem too; it turns out I had used "&" in the path of my Eclipse folder. I renamed the folder using just normal characters, and PyDev installed fine. I believe the path to the Eclipse folder has to contain only plain characters, without anything unusual.
1
0
0
I installed Python 32bit on W7. I then "installed" Eclipse 32bit. I successfully added PyDev to Eclipse. I then go to PyDev->Interpreter-Python, and click on "new", browse to C:\Python27\python.exe, click ok, and get the following error: Error getting info on interpreter. Common reasons include -Using an unsupported version -Specifying and invalid interpreter Reasons: See error log for details. Log: org.xml.sax.SAXParseException; lineNumber: 4; columnNumber: 23; The reference to entity "g" must end with the ';' delimiter. Any ideas how to fix this? Thanks!
Failing to define the Python interpeter for PyDev in Eclipse
0.197375
0
0
288
17,353,773
2013-06-27T21:46:00.000
0
0
0
0
python,pandas,data-analysis
17,370,686
1
true
0
0
They have a similar storage mechanism, and only really differ in the indexing scheme. Performance-wise they should be similar. There is more support (code-wise) for multi-level DataFrames, as they are more often used. In addition, Panels have different slicing semantics, so dtype guarantees are different.
1
2
1
I am wondering whether there is any computational or storage disadvantage to using Panels instead of multi-indexed DataFrames in pandas. Or are they the same behind the curtain?
Are pandas Panels as efficient as multi-indexed DataFrames?
1.2
0
0
169
17,360,793
2013-06-28T08:50:00.000
0
0
1
0
python,backwards-compatibility
17,360,835
3
true
0
0
Either directly use the Python 2.4 interpreter to run it, or modify the program's shebang line to point to the interpreter you wish to use. Note that there are many things in common use in recent Python (any/all, the 1 if 2 else 3 syntax, as well as major stdlib and language changes) that may cause your program to experience difficulties. It's also worth noting that a lot of the common 3rd-party modules require at least 2.5, and some of those are even dropping that and only guaranteeing 2.6+ compatibility.
1
0
0
I have python 2.7 installed. I want to use python 2.4 to run python code. Is it possible?
Running python program using earlier version of python
1.2
0
0
5,745
17,362,909
2013-06-28T10:44:00.000
0
1
0
0
c++,python,swig,porting,symbols
37,594,931
1
false
0
1
Problem solved! After using c++filt, I found out that one of the constructors in the lib wasn't defined; after deleting it, the problem was solved.
1
0
0
I am trying to use C++ lib with python using SWIG, my problem is that the main class symbol is missing, $ ldd -r -d _rf24.so 2>&1|grep RF24 undefined symbol: _ZN4RF24C1Ehh (./_rf24.so) $ objdump -t librf24-bcm.so.1.0 |grep RF24 . . . 000032cc g F .text 00000044 _ZN4RF24C1Ehhj 000032cc g F .text 00000044 _ZN4RF24C2Ehhj . . . python exception: ImportError: ./_rf24.so: undefined symbol: _ZN4RF24C1Ehh I tried using the lib objs from the original Makefile or tried to compile them with some flags but the result is the same build lines: $ gcc -c RF24_wrap.cxx -I/usr/include/python2.7 $ gcc -lstdc++ -shared bcm2835.o RF24.o RF24_wrap.o -o _rf24.so RF24.i (the SWIG file): %module rf24 %{ #include "RF24.h" %} %include "RF24.h" //%include "bcm2835.h" %include "carrays.i" %array_class(char, byteArray); RF24.h (relevant part of the class header file): . . . // bla bla bla enums... class RF24 { private: // bla bla bla protected: // bla bla bla public: RF24(uint8_t _cepin, uint8_t _cspin); RF24(uint8_t _cepin, uint8_t _cspin, uint32_t spispeed ) //bla bla bla
Missing / wrong signature when converting C++ library to Python using SWIG
0
0
0
215
17,364,120
2013-06-28T11:53:00.000
0
0
0
1
python,django,web-scraping,scraper,scraperwiki
17,374,282
1
false
1
0
Step # 1 download django-dynamic-scraper-0.3.0-py2.7.tar.gz file Step # 2 Unzip it and change the name of the folder to: django-dynamic-scraper-0.3.0-py2.7.egg Step # 3 paste the folder into C:\Python27\Lib\site-packages
1
0
0
I am trying to make a project in dynamic django scraper. I have tested it on linux and it runs properly. When I try to run the command: syndb i get this error /*****************************************************************************************************************************/ python : WindowsError: [Error 3] The system cannot find the path specified: 'C:\Python27\l ib\site-packages\django_dynamic_scraper-0.3.0-py2.7.egg\dynamic_scraper\migrations/.' At line:1 char:1 + python manage.py syncdb + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (WindowsError: [...migrations/.':String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError /*****************************************************************************************************************************/ The admin server runs properly with the command python manage.py runserver Kindly guide me how i can remove this error
Django Dynamic Scraper Project does not run on windows even though it works on Linux
0
0
0
194
17,366,528
2013-06-28T14:03:00.000
1
1
0
0
python,web-scraping,beautifulsoup,screen-scraping
17,366,729
2
false
1
0
In most countries the telephone number follows one of a very few well-defined patterns that can be matched with a simple regexp; likewise, email addresses have an internationally recognised format. Simply scrape the homepage, contacts or "contact us" page and then parse with regular expressions; you should easily achieve better than 90% accuracy. Alternatively, of course, you could simply submit the restaurant name and town to the local equivalent of the Yellow Pages web site.
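A sketch of that regex pass (the patterns are deliberately simple, tuned to US-style phone numbers and common email shapes; other countries will need their own patterns, and the sample page text is invented):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def extract_contacts(page_text):
    """Pull email addresses and phone-number-shaped strings from raw page text."""
    return {
        "emails": EMAIL_RE.findall(page_text),
        "phones": PHONE_RE.findall(page_text),
    }

page = "Call us at (555) 123-4567 or email info@example-restaurant.com"
contacts = extract_contacts(page)
```

Running the same function over the homepage and any page whose URL contains "contact" is usually enough; the regexes don't care about the surrounding HTML structure at all.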
1
3
0
I'd like to scrape contact info from about 1000-2000 different restaurant websites. Almost all of them have contact information either on the homepage or on some kind of "contact" page, but no two websites are exactly alike (i.e., there's no common pattern to exploit). How can I reliably scrape email/phone # info from sites like these without specifically pointing the Python script to a particular element on the page (i.e., the script needs to be structure agnostic, since each site has a unique HTML structure, they don't all have, e.g., their contact info in a "contact" div). I know there's no way to write a program that will be 100% effective, I'd just like to maximize my hit rate. Any guidance on this—where to start, what to read—would be much appreciated. Thanks.
Scraping Contact Information from Several Unique Sites with Python
0.099668
0
1
2,766
17,366,579
2013-06-28T14:05:00.000
10
0
0
1
python,celery
46,843,345
2
false
0
0
If you want to log everything, you can add the following option to your worker command: -f celery.logs. You can also specify different log levels. For example, if you want to log only warnings and errors, add the following: --loglevel=warning -f celery.logs
1
28
0
Can someone please help and tell me how to get the celery task debug details to a log file? I have a requirement to have the details of celery task logged into a .log file. Can you please make some suggestions on how this can be done without impacting the performance of the task?
Celery Logs into file
1
0
0
63,725
17,369,422
2013-06-28T16:27:00.000
3
0
0
0
python,django,metaprogramming,metaclass
17,369,468
1
true
1
0
The field name in the model has _id appended to it in the table, and it stores the PK of the foreign model (as a FK normally would). When the related field is accessed on a model, Django performs a query to retrieve the foreign model from the database. When a model is assigned to the related field, Django reads the PK of the model and assigns it to the backing field in the table.
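What Django does here can be illustrated with plain sqlite3, outside the ORM entirely; the table and column names below are invented, but the author_id column is exactly the kind of backing field Django creates for a ForeignKey named author:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT,
                         author_id INTEGER REFERENCES author(id));
""")
# assigning a model to the related field stores only its PK:
conn.execute("INSERT INTO author (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO book (title, author_id) VALUES ('Notes', 1)")

# accessing book.author in Django triggers a query roughly like this one:
row = conn.execute(
    "SELECT a.name FROM book b JOIN author a ON a.id = b.author_id"
).fetchone()
```

For a nested serializer, this means the FK attribute is lazy: touching it costs a query per object unless the queryset pre-fetched the relation.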
1
0
0
I am curious how Django handles model relationships at the object level because I am working on building a custom json serializer, and I need to understand this so I have properly handle nested serialization. I am almost positive I will have to dive into some of the internals of python, but that will not be too big of a deal.
How does Django handle foreignKeys internally?
1.2
0
0
38
17,371,059
2013-06-28T18:08:00.000
1
0
1
0
python,numpy
17,371,090
2
false
0
0
Numpy in general is more efficient if you pre-allocate the size. If you know you're going to be populating an MxN matrix...create it first then populate as opposed to using appends for example. While the list does have to be created, a lot of the improvement in efficiency comes from acting on that structure. Reading/writing/computations/etc.
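A small sketch of the pre-allocation pattern; numpy.zeros creates the MxN block directly, with no intermediate Python list to deconstruct:

```python
import numpy as np

rows, cols = 3, 4

# allocate the full MxN block up front ...
a = np.zeros((rows, cols))

# ... then fill it in place, instead of growing a list and converting it
for i in range(rows):
    a[i, :] = i
```

So yes, numpy.zeros (and numpy.empty) sidestep the "build a big Python list first" concern from the question; the list-based constructor is just one convenient way in.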
1
4
1
From what I've read about NumPy arrays, they're more memory-efficient than standard Python lists. What confuses me is that when you create a NumPy array, you have to pass in a Python list. I assume this Python list gets deconstructed, but to me it seems like it defeats the purpose of having a memory-efficient data structure if you have to create a larger, inefficient structure to create the efficient one. Does numpy.zeros get around this?
numpy array memory allocation
0.099668
0
0
4,676
17,373,333
2013-06-28T20:42:00.000
0
0
1
0
python,json,flask,ascii,octal
17,406,851
1
true
1
0
I ended up passing a urlencoded cookie instead of json. This is a hack. I am not really satisfied with this fix right now.
1
1
0
I have written an application using flask. Part of the application creates a dictionary and then the dictionary gets parsed into json(string) with json.dumps. The string then gets stored as a cookie. Everything was working fine in development. I set up a production environment and when the above process takes place, I am unable to read the cookie with javascript. Upon examining the cookie, I can see that an ASCII octal character for comma has been added: \054. There are supposedly no differences between my development and production environments. I did have a newer version of flask in production and read that they changed how cookies are stored, so I blew away flask 0.10.1 and installed 0.9 which is what is on my development environment, but the problem persists. Any ideas where this comma is being replaced by the octal code?
Application producing invalid JSON
1.2
0
0
131
17,373,473
2013-06-28T20:51:00.000
-2
0
1
0
python,pip
70,735,171
8
false
0
0
Use: pip show <package_name>
2
114
0
I would like to be able to search for an available Python package using pip (on the terminal). I would like a functionality similar to apt-cache in Ubuntu. More specifically, I would like to be able to search for packages given a term (similar to apt-cache search [package-name]), and list all available packages.
How do I search for an available Python package using pip?
-0.049958
0
0
155,786
17,373,473
2013-06-28T20:51:00.000
4
0
1
0
python,pip
71,389,162
8
false
0
0
After Dec 2020, search doesn't work. But index does. pip index versions <package_name> Note: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
2
114
0
I would like to be able to search for an available Python package using pip (on the terminal). I would like a functionality similar to apt-cache in Ubuntu. More specifically, I would like to be able to search for packages given a term (similar to apt-cache search [package-name]), and list all available packages.
How do I search for an available Python package using pip?
0.099668
0
0
155,786
17,374,249
2013-06-28T21:53:00.000
0
1
1
0
python,pyramid
27,862,172
2
false
0
0
As stated by Mikhail, code and configuration are not the same. You may want to deploy your package many times and not overwrite already-installed configuration and data. Please note that the db, if present and on the file system (sqlite), is not distributed inside the package either. I guess it's done to allow you to update the code easily. If your intent is to deploy the package in a production environment, all you need to do is to copy both the ini you want to use and the database (if sqlite), or to run the initialize_db script (which is installed in bin) before starting the app. Note that it's always a good idea to test the production ini in a non-production environment to be sure that the settings are good for you, in particular about logging, because you'll have no console logging. Though this is good enough for dev/prod environments, it may be a problem for distribution to 3rd parties. I'm just trying to address similar problems, and I think that the main point is to properly configure setup.py and MANIFEST.in, to include what you need in the egg and properly extract it when installing. The problem seems to be that easy_install skips all files outside your app folder (such as the ini files, which are one dir back). A workaround for that is to skip easy_install, just untar your tarball, then enter your project folder and use: pip install -e . --pre (the --pre is only required if you included pre-release packages in your project, maybe because they are a dependency of formalchemy, as I did). This seems the easiest way to distribute to other people. You may want to create the database somehow anyway, to have it work, unless you explicitly include it in the distribution by adding it to the MANIFEST file.
1
0
0
I am new at pyramid framework and I recently started to play with it. However, I'm a bit confused about how a tarball created with 'sdist' gets installed in a production virtual environment. My scenario is as follows: After finishing a project I created in pyramid called 'myapp', I run: python setup.py sdist in order to create the distribution tarball. The tarball gets created under 'dist' folder and it contains all my project sources as well as the .ini files (development and production). I then create a new production virtual environment by executing: virtualenv --no-site-packages envprod To install the 'myapp' distribution tarball I execute: envprod/bin/easy_install src/myapp/dist/myapp0-0.tar.gz. It then starts to download and install all the requirements for the project and it also installs the sources of my application under envprod/lib/python2.7/site-packages/myapp The problem is that neither development.ini nor production.ini are installed in the new prod environment so I have no way to execute 'pserve' since it needs the .ini file. Am I doing something wrong? Or is there a way to start serving 'myapp' without the .ini files? Thanks!
Deploy a pyramid application with sdist
0
0
0
637
17,374,262
2013-06-28T21:54:00.000
2
0
1
1
python,enthought,icu,canopy
20,845,995
2
false
0
0
Sorry for the late response (we are monitoring the enthought tag; please use it to get our attention more quickly). These warning messages ("Unable to load library icui18n") are spurious and shouldn't affect the usability of Canopy. We have turned these warnings off in version 1.2, which is coming out during the first week of January. Please post again if you see issues.
1
4
0
I have tried to get enthought canopy and follow the procedure. However, when I tried to run ./canopy, it gave this error: Unable to load library icui18n "Cannot load library icui18n: (icui18n: cannot open shared object file: No such file or directory)". I cannot sudo because I am using the university's supercomputing account, no permission to do so. Any advice?
Enthought Canopy 1.1 giving error icui18n: cannot open shared object file: No such file or directory
0.197375
0
0
6,614
17,374,526
2013-06-28T22:22:00.000
2
1
0
0
python,unicode,encoding,python-3.x,ssh
17,374,821
1
false
0
0
The problem might not be your Python code; check your ssh environment. LANG should be en_US.UTF-8 (or another locale containing UTF-8), not ASCII.
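You can see the effect from inside Python: sys.stdout.encoding reflects the locale, and with an ASCII LANG it is the encode step that raises. A small fallback sketch (the function name is made up; fixing LANG on the server is still the proper fix):

```python
import sys

def to_output_bytes(text, encoding=None):
    """Encode text for stdout, falling back to UTF-8 when the locale
    (e.g. a plain-ASCII LANG over ssh) cannot represent it."""
    enc = encoding or sys.stdout.encoding or "ascii"
    try:
        return text.encode(enc)
    except UnicodeEncodeError:
        return text.encode("utf-8")

# superscript two: the kind of character that trips an ASCII stdout
payload = to_output_bytes(u"x\u00b2")
```

Setting PYTHONIOENCODING=utf-8 in the ssh session is another way to override what Python picks up from the locale.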
1
5
0
I made a small application that prints Unicode special characters (i.e. superscript, subscript...). When it runs locally there are no problems, but when it runs in an ssh session I always get a UnicodeEncodeError. Specifically: UnicodeEncodeError 'ascii' can't encode characters in position 0-1: ordinal not in range(128) I tried different ssh clients and computers and double-checked the session's encoding, but the result is the same. This is really weird. Why does this happen? Is this really related to ssh?
UnicodeEncodeError when using python from ssh
0.379949
0
0
1,171
17,376,033
2013-06-29T02:14:00.000
0
0
1
1
python-2.7,locking,shared-file
17,376,105
2
false
0
0
Just a thought... Couldn't you put a 'lock' file in the same directory as the file you're trying to write to? In your distributed processes, check for this lock file. If it exists, sleep for x amount of time and try again. Likewise, when the process that currently has the file open finishes, it deletes the lock file. So, in the simple case of two processes called A and B: process A checks for the lock file, and if it doesn't exist, it creates the lock file and does what it needs to with the file. After it's done, it deletes this lock file. If process A detects the lock file, then that means process B has the file, so sleep and try again later... rinse, repeat.
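A sketch of that lock-file idea in Python; os.open with O_CREAT | O_EXCL makes the create-if-absent step atomic, so two processes cannot both think they got the lock (note that O_EXCL atomicity was historically unreliable on some older NFS setups, which matters on a shared file system):

```python
import errno
import os
import time

def acquire_lock(lock_path, retries=50, delay=0.1):
    """Atomically create lock_path; return its fd, or raise after retries."""
    for _ in range(retries):
        try:
            return os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(delay)  # someone else holds the lock; retry later
    raise RuntimeError("could not acquire %s" % lock_path)

def release_lock(fd, lock_path):
    os.close(fd)
    os.remove(lock_path)
```

Each process would wrap its writes in acquire_lock / release_lock around the shared file's path plus a ".lock" suffix.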
1
4
0
My project requires being run on several different physical machines, which have shared file system among them. One problem arising out of this is how to synchronize write to a common single file. With threads, that can be easily achieved with locks, however my program consists of processes distributed on different machines, which I have no idea how to synchronize. In theory, any way to check whether a file is being opened right now or any lock-like solutions will do, but I just cannot crack out this by myself. A python way would be particularly appreciated.
is there a way to synchronize write to a file among different processes (not threads)
0
0
0
1,617
17,376,904
2013-06-29T05:07:00.000
1
0
0
0
python,arrays,astronomy,fits
19,917,484
1
false
0
0
A FITS file consists of header-data units. A header-data unit contains an ASCII-type header with keyword-value-comment triples, plus either binary FITS tables or (hyperdimensional) image cubes. Each entry in a binary FITS table may itself contain hyperdimensional image cubes. An array is some slice through some dimensions of any of these cubes. Now, as a shortcut to images stored in the first (a.k.a. primary) header-data unit, many viewers allow you to indicate, in square brackets, some indices of windows into these images (which in most common cases is based on the equivalent support in the cfitsio library).
1
0
1
I'm basically trying to plot some images based on a given set of parameters of a .fits file. However, this made me curious: what IS a .fits array? When I type in img[2400,3456] or some random values in the array, I get some output. I guess my question is more conceptual than code-based, but, it boils down to this: what IS a .fits file, and what do the arrays and the outputs represent?
What IS a .fits file, as in, what is a .fits array?
0.197375
0
0
119
17,381,982
2013-06-29T15:55:00.000
0
0
1
0
python,regex,string
17,382,700
8
false
0
0
Depth-first search with an XML parser? Maybe remember the position in the XML document where the text node was found, for later reverse lookup. Your actual goal is still unclear.
1
2
0
I am searching an XML file generated from MS Word for some phrases. The thing is that any phrase can be interrupted by some XML tags that can come between words, or even inside words, as you can see in the example: </w:rPr><w:t> To i</w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:sz w:val="17"/><w:lang w:fareast="JA"/></w:rPr><w:t>ncrease knowledge of and acquired skills for implementing social policies with a view to strengthening the capacity of developing countries at the national and community level.</w:t></w:r></w:p> So my approach to handle this problem was to simply reduce all XML tags into clusters of # characters of the same length, so that when I find any phrase, the regex would ignore all the XML tags between each two characters. What I basically need is the span of this phrase within the actual XML document, since I will use this span in later processing of the XML document; I cannot use clones. This approach works remarkably well, but some phrases cause catastrophic backtracking, such as the following example, so I need someone to point out where the backtracking comes from, or suggest a better solution to the problem.
================================ Here is an example: I have this text where there are some clusters of # characters within it (which I want to keep), and the spaces are also unpredictable, such as the following: Relationship to the #################strategic framework ################## for the period 2014-2015####################: Programme 7, Economic and Social Affairs, subprogramme 3, expected accomplishment (c)####### In order to match the following phrase: Relationship to the strategic framework for the period 2014-2015: programme 7, Economic and Social Affairs, subprogramme 3, expected accomplishment (c) I came up with this regex to accommodate the unpredictable # and space characters: u'R#*e#*l#*a#*t#*i#*o#*n#*s#*h#*i#*p#*\\s*#*t#*o#*\\s*#*t#*h#*e#*\\s*#*s#*t#*r#*a#*t#*e#*g#*i#*c#*\\s*#*f#*r#*a#*m#*e#*w#*o#*r#*k#*\\s*#*f#*o#*r#*\\s*#*t#*h#*e#*\\s*#*p#*e#*r#*i#*o#*d#*\\s*#*2#*0#*1#*4#*\\-#*2#*0#*1#*5#*:#*\\s*#*p#*r#*o#*g#*r#*a#*m#*m#*e#*\\s*#*7#*\\,#*\\s*#*E#*c#*o#*n#*o#*m#*i#*c#*\\s*#*a#*n#*d#*\\s*#*S#*o#*c#*i#*a#*l#*\\s*#*A#*f#*f#*a#*i#*r#*s#*\\,#*\\s*#*s#*u#*b#*p#*r#*o#*g#*r#*a#*m#*m#*e#*\\s*#*3#*\\,#*\\s*#*e#*x#*p#*e#*c#*t#*e#*d#*\\s*#*a#*c#*c#*o#*m#*p#*l#*i#*s#*h#*m#*e#*n#*t#*\\s*#*\\(#*c#*\\)' And it works fine in all the other phrases that I want to match, but this one has a problem leading to some catastrophic backtracking, can anyone spot it? The original text is separated with xml tags, so to make it simpler for the regex, I replaced the tags with these # clusters, here is the original text: </w:rPr><w:t>Relationship to the </w:t></w:r><w:r><w:rPr><w:i/><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t>strategic framework </w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:i/><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t> for the period 2014-2015</w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t>: Programme 7, Economic and Social Affairs, subprogramme 3, expected accomplishment (c)</w:t>
Python regex catastrophic backtracking
0
0
0
3,036
17,382,053
2013-06-29T16:01:00.000
1
0
0
0
python,django,sqlite
17,382,483
2
true
1
0
I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API. What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write.
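A minimal sketch of the suggested approach: build a throwaway SQLite file and read it back as bytes for the response. Names are illustrative; point tempfile at your ramfs mount via dir=..., and in Django wrap the returned bytes in an HttpResponse with a Content-Disposition: attachment header.

```python
import os
import sqlite3
import tempfile

# Illustrative names. Pass dir="/path/to/ramfs" to mkstemp to keep the
# "file" off the real disk; the bytes returned here would be the body of
# the downloadable HTTP response in Django.
def export_rows(rows):
    fd, path = tempfile.mkstemp(suffix=".sqlite")
    os.close(fd)
    try:
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE export (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO export (name) VALUES (?)", rows)
        conn.commit()
        conn.close()
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)  # the temp file never outlives the request

data = export_rows([("alice",), ("bob",)])
```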
1
0
0
In my python/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file. In other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned. Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could reach that? I would like to avoid writing stuff to the disc during the request.
Python: Create and return an SQLite DB as a web request result
1.2
1
0
169
17,384,280
2013-06-29T20:13:00.000
2
0
1
1
python,powershell
17,384,352
1
false
0
0
It seems your file starts with a Unicode BOM. Try saving your file as UTF-8 without a BOM.
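A quick way to check for a BOM, assuming you can read the file's raw bytes (the sample bytes mirror the '\xff' from the error message, which looks like a UTF-16 LE BOM):

```python
import codecs

# Detect the byte-order marks that produce the "Non-UTF-8 code starting
# with '\xff'" error; the sample inputs below are illustrative.
def has_bom(raw):
    boms = (codecs.BOM_UTF8, codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)
    return any(raw.startswith(b) for b in boms)

assert has_bom(b"\xff\xfeimport sys")   # UTF-16 LE BOM, as in the error
assert not has_bom(b"import sys")       # a clean ASCII/UTF-8 file
```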
1
0
0
I am a Python beginner, trying to run from PowerShell on Vista. When trying to call a simple script with python vc.py I get the error: "File "vc.py", line 1 SyntaxError: Non-UTF-8 code starting with '\xff' ..." where vc.py is: import sys print sys.version It does work when I invoke instead: cat vc.py | python The problem with this latter approach is that it gives us problems with the raw_input function.
python run in powershell with script give 'non-utf' error
0.379949
0
0
1,256
17,386,880
2013-06-30T03:33:00.000
37
0
1
0
python,scipy,environment-variables,anaconda
17,407,341
2
true
0
0
No, the only thing that needs to be modified for an Anaconda environment is the PATH (so that it gets the right Python from the environment bin/ directory, or Scripts\ on Windows). The way Anaconda environments work is that they hard link everything that is installed into the environment. For all intents and purposes, this means that each environment is a completely separate installation of Python and all the packages. By using hard links, this is done efficiently. Thus, there's no need to mess with PYTHONPATH because the Python binary in the environment already searches the site-packages in the environment, and the lib of the environment, and so on.
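To illustrate (the paths are made up): activating an environment amounts to prepending its bin/ directory to PATH, nothing more:

```shell
# Illustrative only, with an invented environment path: "activating" a
# conda environment just prepends that environment's bin/ directory to
# PATH. No PYTHONPATH is involved; the env's own python binary searches
# its own site-packages.
ENV_BIN="/opt/anaconda/envs/myenv/bin"
NEW_PATH="$ENV_BIN:$PATH"
FIRST_ENTRY=$(printf '%s' "$NEW_PATH" | cut -d: -f1)
echo "$FIRST_ENTRY"
```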
1
56
0
I am starting to work with the Python Anaconda distribution from Continuum.io to do scipy work. I have been able to get Anaconda up and running, but I cannot tell whether Anaconda creates a new PYTHONPATH environment variable for each new environment it creates, or whether it relies on the common system PYTHONPATH. I could not find any information on this in the documentation. Further, when I did a printenv, I did not see a PYTHONPATH variable in the newly created environment --though I did find a few new anaconda created environment variables. The best I can find is that Anaconda added some Anaconda directories and the new environment directory to the head of PATH variable --but this does not necessarily isolate the new package from the system environment but it is close. Does anyone know the answer to this question or found a way to deal with this concern?
Does `anaconda` create a separate PYTHONPATH variable for each new environment?
1.2
0
0
131,082
17,393,291
2013-06-30T18:03:00.000
1
0
0
0
python,mysql,django
17,393,525
1
true
1
0
Having a string-valued PK should not be a problem in any modern database system. A PK is automatically indexed, so when you perform a look-up with a condition like table1.pk = 'long-string-key', it won't be a string comparison but an index look-up. So it's ok to have string-valued PK, regardless of the length of the key values. In any case, if you need an additional column with all unique values, then I think you should just add a new column.
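As a quick illustration with SQLite (the table and key are made up), EXPLAIN QUERY PLAN shows that a lookup on a string-valued primary key is served by the automatic PK index rather than a full scan:

```python
import sqlite3

# Hypothetical table with a string-valued primary key. The query plan
# reports an index search, not a table scan, for the PK lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (pk TEXT PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO t VALUES ('long-string-key', 'data')")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM t WHERE pk = 'long-string-key'"
).fetchall()
print(plan)
```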
1
0
0
suppose there was a database table with one column, and it's a PK. To make things more specific this is a django project and the database is in mysql. If I needed an additional column with all unique values, should I create a new UniqueField with unique integers, or just write a hash-like function to convert the existing PK's for each existing row (model instance) into a new unique variable. The current PK is a varchar/ & string. With creating a new column it consumes more memory but I think writing a new function and converting fields frequently has disadvantages also. Any ideas?
Database design, adding an extra column versus converting existing column with a function
1.2
1
0
50
17,393,911
2013-06-30T19:09:00.000
1
0
1
0
python
17,393,930
7
false
0
0
An inverse hash function would not be (in general) unique even if you could invert it. For example, there are an infinite number of strings from which hash keys are generated into a finite integer range limited by the word size on your machine.
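A concrete demonstration with CPython's built-in hash(): integer hashing is modular, so two different inputs can share a hash value (the pigeonhole principle in action):

```python
import sys

# CPython hashes ints modulo sys.hash_info.modulus, so these two distinct
# integers collide: an "inverse" of hash() could not tell them apart.
m = sys.hash_info.modulus
assert hash(1) == hash(1 + m)
assert 1 != 1 + m
```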
3
6
0
not sure if this is possible but in python there is a hash() function which takes a string or an integer and generates a [EDIT not-unique] integer representation of that input. My question is (after searching online), how to reverse the generated integer back into the original String. Thanks.
Reverse the hash() function in python
0.028564
0
0
15,557
17,393,911
2013-06-30T19:09:00.000
0
0
1
0
python
17,394,493
7
false
0
0
Hashes are meant to be computationally expensive to reverse. Generally the only way to "reverse" them is to bruteforce the input that was used to generate the output.
3
6
0
not sure if this is possible but in python there is a hash() function which takes a string or an integer and generates a [EDIT not-unique] integer representation of that input. My question is (after searching online), how to reverse the generated integer back into the original String. Thanks.
Reverse the hash() function in python
0
0
0
15,557
17,393,911
2013-06-30T19:09:00.000
0
0
1
0
python
17,398,642
7
false
0
0
Another point that people are missing isn't just that it's hard to find a string that matches a hash, but also that there isn't enough information to determine what the string was. A hash is (usually) a cryptographic way of converting a given input into an integer irreversibly. However, hashes can clash, or collide, which is possible in MD5. As such, under such hashing functions the number of different strings that could hash to the same number is infinite, so even if it were possible to reverse (it's not), you still wouldn't know which string was the original!
3
6
0
not sure if this is possible but in python there is a hash() function which takes a string or an integer and generates a [EDIT not-unique] integer representation of that input. My question is (after searching online), how to reverse the generated integer back into the original String. Thanks.
Reverse the hash() function in python
0
0
0
15,557
17,396,218
2013-07-01T00:51:00.000
0
0
0
0
python,autocomplete,enthought
17,397,527
1
false
0
0
Canopy currently doesn't expose the API to customize auto-completion. But, what exactly do you mean by making it behave like sublime?
1
1
0
I want to be able to customize auto-completion in canopy, so it behaves like sublime. Is it possible? And if so, where do i find the APIs?
Canopy: Changing the way auto-completion works
0
0
0
792
17,397,204
2013-07-01T03:52:00.000
-1
0
1
0
python-2.7,virtualenv,pip
17,947,410
1
false
0
0
Seems that the problem is that Python 2.7 64-bit has a compilation problem. It would seem that I need to download a special package from MS to get access to a 64-bit C/C++ compiler that's compatible with Python 2.7. Not a problem, except that it's 3 gigabytes. So, I just did it on my Linux VM, and Windows Python 2.7 is 32-bit for me now. Not the best solution, but we're supposedly going to upgrade to Python 3 one of these years. Probably about when Python 4 comes out and the Python 3 compiler is obsolete too. C'est la vie!
1
0
0
So I've been working with python 2.7 no problem for a while now. I've been using pip for a couple of months without issue. I recently installed virtualenv and now none of my pythons or pip can find vcvarsall.bat, even though this wasn't a problem before. I thought virtualenv seemed like a good idea, but not if it breaks everything around it. I tried to run repair on VS C++ but it didn't find any problems. Has anyone run into something like this before?
Python 2.7 can't find vcvarsall.bat, MS VS C++ 2008 is installed
-0.197375
0
0
241
17,400,805
2013-07-01T09:08:00.000
13
0
1
0
python,cpython
17,401,698
3
true
0
0
Python isn't written in C. Arguably, Python is written in an esoteric English dialect using BNF. However, all the following statements are true: Python is a language, consisting of a language specification and a bunch of standard modules Python source code is compiled to a bytecode representation this bytecode could in principle be executed directly by a suitably-designed processor but I'm not aware of one actually existing in the absence of a processor that natively understands the bytecode, some other program must be used to translate the bytecode to something a hardware processor can understand one real implementation of this runtime facility is CPython CPython is itself written in C, but ... C is a language, consisting of a language specification and a bunch of standard libraries C source code is compiled to some bytecode format (typically something platform-specific) this platform specific format is typically the native instruction set of some processor (in which case it may be called "object code" or "machine code") this native bytecode doesn't retain any magical C-ness: it is just instructions. It doesn't make any difference to the processor which language the bytecode was compiled from so the CPython executable which translates your Python bytecode is a sequence of instructions executing directly on your processor so you have: Python bytecode being interpreted by machine code being interpreted by the hardware processor Jython is another implementation of the same Python runtime facility Jython is written in Java, but ... Java is a language, consisting of a spec, standard libraries etc. etc. 
Java source code is compiled to a different bytecode Java bytecode is also executable either on suitable hardware, or by some runtime facility The Java runtime environment which provides this facility may also be written in C so you have: Python bytecode being interpreted by Java bytecode being interpreted by machine code being interpreted by the hardware processor You can add more layers indefinitely: consider that your "hardware processor" may really be a software emulation, or that hardware processors may have a front-end that decodes their "native" instruction set into another internal bytecode. All of these layers are defined by what they do (executing or interpreting instructions according to some specification), not how they implement it. Oh, and I skipped over the compilation step. The C compiler is typically written in C (and getting any language to the stage where it can compile itself is traditionally significant), but it could just as well be written in Python or Java. Again, the compiler is defined by what it does (transforms some source language to some output such as a bytecode, according to the language spec), rather than how it is implemented.
1
4
0
From what I know, CPython programs are compiled into intermediate bytecode, which is executed by the virtual machine. Then how does one identify without knowing beforehand that CPython is written in C. Isn't there some common DNA for both which can be matched to identify this?
What does it mean when people say CPython is written in C?
1.2
0
0
686
17,403,346
2013-07-01T11:25:00.000
3
0
0
0
python,django,orm,cassandra
17,403,637
2
true
1
0
There's an external backend for Cassandra, but it has some issues with the authentication middleware, which doesn't handle users correctly in the admin. If you use a non-relational database, you lose a lot of goodies that django has. You could try using Postgres' nosql extension for the parts of your data that you want to store in a nosql'y way, and the regular Postgres' tables for the rest.
1
1
0
i am working on developing a Django application with Cassandra as the back end database. while Django supports ORM feature for SQL, i wonder if there is any thing similar for Cassandra. what would be the best approach to load the schema into the Cassandra server and perform CRUD operations. P.S. I am complete beginner to Cassandra.
Cassandra-Django python application approach
1.2
1
0
410
17,406,252
2013-07-01T13:57:00.000
6
0
0
1
python,command-line,import
17,406,315
1
true
0
0
Two solutions here: you can run the script through the interpreter explicitly, like this: python my_program.py, or add this as the first line of the file: #!/usr/bin/env python, which tells the shell to hand the script to Python instead of trying to run it as shell commands.
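A short sketch of the second option (the file name is made up; python3 is used so the demo runs on current systems, while the original answer targets Python 2):

```shell
# Invented file name. The shebang on line 1 lets the kernel pick the
# interpreter when the script is executed directly.
cat > my_program.py <<'EOF'
#!/usr/bin/env python3
print("hello from python")
EOF
chmod +x my_program.py
./my_program.py
```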
1
0
0
I am running a Python script from Linux command line, and the script itself, on the first line, import several modules. I got some error message and searched online. Here is a reply from the author of the Python script: it appears that you are running dexseq_count.py as if it were a shell script, rather than from Python. As a consequence, the first line of the script is interpreted as the Linux command 'import' rather than as Python code, leading to the error you report. I am curious if the first line of import in Python has been mis-interpretated in Linux, and if so, how can I solve this problem? I have to run in the cmd line instead of in Python. Thanks so much!
Running Python script from cmd line but start with import in code
1.2
0
0
1,408
17,407,276
2013-07-01T14:46:00.000
1
1
1
1
python,macos,python-2.7
17,407,429
1
true
0
0
The standard directory which is already searched by python depends on the version of python. For the Apple installed python 2.7 it is /Library/Python/2.7/site-packages the README in that directory says This directory exists so that 3rd party packages can be installed here. Read the source for site.py for more details.
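To see which directory your interpreter actually searches (the exact path varies by installation and platform), you can ask Python itself:

```python
import sysconfig

# Where this Python looks for third-party packages; on Apple's Python 2.7
# this would be /Library/Python/2.7/site-packages, elsewhere it differs
# (e.g. dist-packages on Debian-based systems).
purelib = sysconfig.get_path("purelib")
print(purelib)
```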
1
0
0
I've written some python modules that I'd like to be able to import anytime on Mac OS X. I've done some googling and I've gotten some mixed responses so I'd like to know what the "best" practice is for storing those files safely. I'm running Python2.7 and I want to make sure I don't mess with the Mac install of Python or anything like that. Thanks for the help
Good location to store .py files on Mac
1.2
0
0
498
17,408,276
2013-07-01T15:33:00.000
2
0
0
0
python,sql,sqlalchemy
17,408,674
1
false
0
0
If you've declared that column as an enum type (as you should for cases such as these where the values are drawn from a small, fixed set of strings), then using ORDER BY on that column will order results according to the order in which the values of the enum were declared. So the datatype for that column should be ENUM('in process', 'discharged', 'None'); that will cause ORDER BY to sort in the order you desire. Specifically, each value in an enum is assigned a numerical index and that index is used when comparing enum values for sorting purposes. (The exact way in which you should declare an enum will vary according to which type of backend you're using.)
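In MySQL the declaration would be state ENUM('in process', 'discharged', 'None'). As a runnable illustration of the same custom ordering, here is the equivalent effect in SQLite, which lacks ENUM, using a CASE expression; this is a different mechanism than ENUM, shown only to demonstrate the ordering:

```python
import sqlite3

# SQLite has no ENUM, so a CASE expression imposes the custom order here.
# With MySQL you would instead declare
#   state ENUM('in process', 'discharged', 'None')
# and a plain ORDER BY state would yield this same order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, state TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(1, "None"), (2, "discharged"), (3, "in process")])
rows = conn.execute("""
    SELECT state FROM records
    ORDER BY CASE state
        WHEN 'in process' THEN 0
        WHEN 'discharged' THEN 1
        ELSE 2
    END
""").fetchall()
print([r[0] for r in rows])
```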
1
1
0
I have a state column in my table which has the following possible values: discharged, in process and None. Can I fetch all the records in the following order: in process, discharged followed by None?
Sqlalchemy order_by custom ordering?
0.379949
1
0
1,384
17,409,127
2013-07-01T16:18:00.000
2
1
0
1
python,eclipse,nose,python-unittest
19,227,424
1
true
0
0
I eventually found in the Preferences > PyDev > PyUnit menu that adding -s to the Parameters for test running stopped this. The parameter prevents the capture of stdout that nose does by default. The alternate --nocapture parameter should work too.
1
2
0
I'm using Eclipse / PyDev and PyUnit on OSX for development. It was recommended to me that I use Nose to execute our suite of tests. When I configure Nose as the test runner, however, output from the interactive console (either standalone or during debugging) disappears. I can type commands but do not see any output. Is this normal, or am I missing some configuration?
Where does console output go when Eclipse PyUnit Test Runner configured to use Nose
1.2
0
0
969
17,412,982
2013-07-01T20:21:00.000
0
0
0
1
python,ubuntu
17,413,212
1
true
0
0
The shebang line #!/usr/bin/python3 should work if sh, bash, etc. is trying to launch your script. If it is being run from another script as python myscript.py, you'll have to find that script and get it to launch yours using python3 myscript.py
1
0
0
The standard python version of ubuntu 13.04 is python 2.7. I know that I can call a python script of version 3.3 by calling python3.3 or python3 in terminal instead of only "python", which starts the version 2.7... e.g. python3 myscript.py But now I have a Python 3.3 script in the system start routine and can only give the path to the file. The system recognizes it as a python script (via the shebang #!/usr/bin/python3). But how do I make it open with the correct version? It gets opened with the standard Python install, so it won't work or even show up.
starting a python 3.3. script at ubuntu startup
1.2
0
0
1,070
17,413,796
2013-07-01T16:53:00.000
1
0
1
0
python,pygame,libraries,file-format,code-structure
17,413,852
1
true
0
0
Importing a module is close to making its contents available where the import happens, though not by textual pasting: the first import executes the imported file once, caches the module in sys.modules, and binds its name in the importing file, so its classes are usable as if they were defined locally (through the module's namespace). So essentially, it's helpful with organization and sanity. Once a project starts to get large enough, it would be impractical to maintain all the code in a single file.
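A small runnable sketch of the split-into-modules idea (module and class names are invented; the module is written to a temp directory only so the example is self-contained, where normally entities.py would simply sit next to your main file):

```python
import os
import sys
import tempfile
import textwrap

# Invented names: a tiny "entities" module holding one game class.
d = tempfile.mkdtemp()
with open(os.path.join(d, "entities.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class Player:
            def __init__(self, name):
                self.name = name
    """))
sys.path.insert(0, d)
import entities  # executes entities.py once and binds the module name

p = entities.Player("hero")
print(p.name)
```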
1
2
0
I am writing a game in Python that has many classes and methods, and I want to know if it is advantageous to store the classes in the main python .py file or store each category of classes in separate files, then import them. This would help with organization, but are there any other pros/cons?
Separate files or one file for classes/methods
1.2
0
0
108
17,414,855
2013-07-01T22:44:00.000
0
0
0
1
python,windows,usb,pyusb
17,516,390
1
false
0
0
What about polling? Create a Python app that enumerates the list of attached USB devices every couple of seconds or so. Keep a list/dictionary of your initially detected devices, and compare against it to determine what was attached/detached since your last polling iteration. This isn't the best approach, and enumerating all the devices takes a short while, so I'm not too sure this would be the most CPU-efficient method.
1
0
0
On a windows OS, how can I get python to detect if anything is plugged in to a specific USB location on the computer. For example "Port_#0002.Hub_#0003" I've tried pyUSB which worked fine for detecting a specific device, but I couldn't seem to figure out how to just check a specific port/hub location for any kind of device.
How can I get Python to watch a USB port for any device?
0
0
0
1,021
17,415,024
2013-07-01T23:04:00.000
3
0
0
0
python,cassandra,cql
17,419,541
1
true
0
0
Your manual solution of timing the requests is enough, if nodes that are slow to respond are also ones that are slow to process the query. Internally Cassandra will avoid slow nodes if it can by using the dynamic snitch. This orders nodes by recent latency statistics and will avoid reading from the slowest nodes if the consistency level allows. NB writes go to all available nodes, but you don't have to wait for them to all respond if your consistency level allows. There may be some client support for what you want in a python client - Astyanax in Java uses something very like the dynamic snitch in the client to avoid sending requests to slow nodes.
1
3
0
Intro: I have a Python application using a Cassandra 1.2.4 cluster with a replication factor of 3, all reads and writes are done with a consistency level of 2. To access the cluster I use the CQL library. The Cassandra cluster is running on rackspace's virtual servers. The problem: From time to time one of the nodes can become slower than usual, in this case I want to be able to detect this situation and prevent making requests to the slow node and if possible to stop using it at all (this should theoretically be possible since the RF is 3 and the CL is 2 for every single request). So far the solution I came up with involves timing the requests to each of the nodes and preventing future connections to the slow node. But still this doesn't solves all the problem because even connecting to another node a particular query may end up being served by the slow node after the coordinator node routes the query. The questions: What's the best way of detecting the slow node from a Python application? Is there a way to stop using one of the Cassandra nodes from Python in this scenario without human intervention? Thanks in advance!
How to prevent traffic to/from a slow Cassandra node using Python
1.2
0
1
968
17,416,448
2013-07-02T02:21:00.000
2
0
0
0
python,numpy,linear-algebra,multidimensional-array
17,416,531
1
true
0
0
Let's say you're trying to use a Markov chain to model English sentence syntax. Your transition matrix will give you the probability of going from one part of speech to another part of speech. Now let's suppose that we're using a 3rd-order Markov model. This would give us the probability of going from state 123 to 23X, where X is a valid state. The Markov transition matrix would be N^3 x N, which is still a 2-dimensional matrix regardless of the dimensionality of the states themselves. If you're generating the probability distributions based on empirical evidence, then, in this case, there will be states with probability 0. If you're worried about sparsity, perhaps arrays are not the best choice. Instead of using an array of arrays, perhaps you should use a dictionary of dictionaries. Or if you have many transition matrices, an array of dictionaries of dictionaries. EDIT (based off comment): You're right, that is more complicated. Nonetheless, for any state, (i,j), there exists a probability distribution for going to the next state, (m,n). Hence, we have our "outer" dictionary, whose keys are all the possible states. Each key (state) points to a value that is a dictionary, which holds the probability distribution for that state.
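A minimal sketch of that dictionary-of-dictionaries representation (the states and probabilities are invented):

```python
from collections import defaultdict

# Invented states/probabilities: keys of the outer dict are current
# states, each value is the probability distribution over next states.
# Only nonzero transitions are stored, which handles sparsity for free.
transitions = defaultdict(dict)
transitions[(0, 1)][(1, 1)] = 0.7   # P(next=(1,1) | current=(0,1))
transitions[(0, 1)][(2, 0)] = 0.3

def prob(cur, nxt):
    # A missing entry means a transition that was never observed.
    return transitions.get(cur, {}).get(nxt, 0.0)

assert abs(sum(transitions[(0, 1)].values()) - 1.0) < 1e-9
```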
1
2
1
First of all, I am aware that matrix and array are two different data types in NumPy. But I put both in the title to make it a general question. If you are editing this question, please feel free to remove one. Ok, here is my question, Here is an edit to the original question. Consider a Markov Chain with a 2 dimensional state vector x_t=(y_t,z_t) where y_t and z_t are both scalars. What is the best way of representing/storing/manipulating transition matrix of this Markov Chain? Now, what I explained is a simplified version of my problem. My Markov Chain state vector is a 5*1 vector. Hope this clarifies
Multiplication of Multidimensional matrices (arrays) in Python
1.2
0
0
846
17,416,893
2013-07-02T03:20:00.000
1
0
0
0
python,user-interface,websocket,pyqt
17,417,077
1
true
0
1
A way to do this is to use a QWebView, insert that into your App and then load a HTML5 page in the WebView and use that to communicate with the server. This way you can probably even reuse the code for the mobile client as the code for the desktop chat interface.
1
0
0
I'm creating a desktop application that is interfaced with using a mobile app or mobile communications ( twitter, txt ) I already have the mechanisms in place to share media ( youtube, instagram, ) with the desktop app from a mobile device. But, I would like to add a websocket chatbox to the desktop interface. So, that users can add msgs using a webview or websocket client within the mobile app. BUT How do I combine websockets with pyqt? I've found very few examples online... just looking for some insight on this problem.
PyQt and WebSockets
1.2
0
1
1,153
17,419,724
2013-07-02T07:21:00.000
0
0
0
0
python,eclipse,openerp
17,486,874
1
false
1
0
Verify that the OpenERP service is running on your computer. You can check this via Taskbar -> Task Manager -> Services: look for the OpenERP service and start it if it is not running; an error may have caused it to fail on startup. There might also be errors in your custom module. Developing custom modules on Windows is more tedious than on Linux, because on Linux you can run the server in terminal mode and view the logged output directly in the console.
1
0
0
Hi I have been working on openerp-7 (win-7) custom module creation . I have been loading openerp server through localhost:8069 . But today the application failed to start and its generating error " Oops! Google Chrome could not connect to localhost:8069 " . What should I do now to fix this issue? Plz help Hopes for suggestion
openerp not loading on localhost
0
0
1
1,240
17,420,396
2013-07-02T08:00:00.000
0
0
0
0
python,mysql,python-3.x,oursql
17,420,506
1
false
0
0
OK, I moved libmysql.dll to the same directory as python.exe, instead of the DLLs folder, and it seems to work now.
1
0
0
I'm trying to work with oursql in python 3.2, and it's really not going so well. Facts: I downloaded oursql binary and ran the installer. I have MySQL 5.1 installed. I separately downloaded the libmysql dll and placed it in the System32 directory. I downloaded cython for version 3.1 because there wasn't one for 2.7 or 3.2. I have python versions 2.7, 3.1, and 3.2 installed. I rebooted. I now still get the ImportError: DLL load failed: The specified module could not be found. error when running import oursql from the Python 3.1 shell. Any ideas?
Error on installing oursql for Python 3.1
0
1
0
196
17,423,384
2013-07-02T10:37:00.000
2
0
0
0
python,mysql,multithreading,python-2.7
17,423,440
1
true
0
0
No, it does not. You have to tell the server on the other side that the connection is closed, because it can't tell the difference between "going away" and "I haven't sent my next query yet" without an explicit signal from you. The connection can time out, of course, but it won't be closed or cleaned up without instructions from you.
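The usual defence is to release the connection in a finally block, so even a thread that dies from an exception returns it. A sketch with a stand-in pool (PooledDB works similarly: calling close() on a pooled connection returns it to the pool):

```python
# DummyPool/DummyConnection are stand-ins for PooledDB, used only to show
# the pattern: release the connection in a finally block so a dying
# thread cannot leak it.
class DummyPool:
    def __init__(self):
        self.in_use = 0

    def connection(self):
        self.in_use += 1
        return DummyConnection(self)

class DummyConnection:
    def __init__(self, pool):
        self.pool = pool

    def close(self):  # with PooledDB, close() returns it to the pool
        self.pool.in_use -= 1

pool = DummyPool()

def worker(pool):
    conn = pool.connection()
    try:
        raise RuntimeError("thread dies mid-query")
    finally:
        conn.close()  # runs even though the "thread" errored out

try:
    worker(pool)
except RuntimeError:
    pass
assert pool.in_use == 0  # nothing leaked
```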
1
0
0
I am using python 2.7 and Mysql. I am using multi-threading and giving connections to different threads by using PooledDB . I give db connections to different threads by pool.dedicated_connection().Now if a thread takes a connection from pool and dies due to some reason with closing it(ie. without returning it to the pool).What happens to this connection. If it lives forever how to return it to the pool??
Does database connection return to pool if a thread holding it dies?
1.2
1
0
178
17,424,014
2013-07-02T11:10:00.000
2
0
1
0
multithreading,python-2.7
17,424,154
1
true
0
0
When you say a thread "dies", do you mean you intentionally terminate it or it fails due to error? If you're intentionally terminating it and you're worried about the time required to spawn a new thread, why not keep the thread persistent and simply have it do the job that the new thread would have done? This is a pretty standard approach - maintain a pool of "worker" threads and have a work queue with pending items to execute. They all run an identical loop which is to pull an item off the queue and execute it. These items can be objects with methods which contain the code to execute if it's convenient to work that way - if the tasks are all very similar then it might be easier to put the code into the thread's own function instead. If you're talking about threads failing due to error, I wouldn't have imagined this was common enough to worry about it. If it is, you probably need to look at making your code more robust. In either case, spawning a thread on most systems should be a lightweight activity - a lot more lightweight than spawning a whole new process, for example. As a result, I really wouldn't worry about keeping a pool of threads in reserve to use - that really sounds like early optimisation to me. Even if spawning threads were slow, consider what you would be doing by spawning threads in advance - you would be taking up more memory (some memory in the OS to keep track of a the thread, some in Python for the objects that it uses to track the thread), although not a great deal; you'd also be spending more time at the start of your program creating all these threads. So, you might save a little time while you were running, but instead your program takes significantly longer to start. That doesn't sound like a sensible trade-off to me unless the speed and latency of your code is absolutely critical while it's running, and if speed is that critical then I'm not sure a pure Python solution is the right approach anyway. 
Something like C/C++ is going to give you better control of scheduling, at the expense of much more complexity. In summary: seriously, don't worry about it, just spawn threads as you need them. Trust me, there will be much bigger speed problems elsewhere in your code which are much more deserving of your time.
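The persistent worker-pool pattern described above can be sketched like this, in Python 3 syntax (the question targets 2.7, where the module is named Queue; the pool size and the doubling task are illustrative):

```python
import queue
import threading

# Persistent workers pull tasks off a shared queue instead of being
# respawned; a None sentinel per worker shuts the pool down cleanly.
tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:        # sentinel: stop this worker
            break
        with lock:
            results.append(item * 2)   # stand-in for real work
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    tasks.put(i)
tasks.join()                    # wait until every task is processed
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()
```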
1
0
0
I am using python 2.7 .I am using multi-threading.Now if a thread dies I again create one to compensate for it.So should I create a lot of threads before hand and store them and use from them when one or more existing threads die or should I create one when some thread dies?? Which is more efficient in terms of time ??
should I create threads before hand to save time?
1.2
0
0
33
17,434,390
2013-07-02T19:40:00.000
0
0
0
0
python,tcp-ip
47,622,960
2
false
0
0
You can send TCP SYN packets to the server to initiate a handshake, but as you are using fake IPs, the SYN-ACK packets sent back by the server will go somewhere else, so you won't be able to acknowledge the server and actually establish the connection. You'd better read more about the TCP handshake and SYN cookies.
1
1
0
I want to "establish" a TCP connection to a server with a fake sender IP address by using Python 2.6 on Windows 7. Is this possible without a proxy/Tor? I understand that I won't get a response, I'm only interested in sending an HTTP GET request to mimic a DDOS attack on my web server.
TCP connection with a fake source IP address
0
0
1
4,544
17,436,014
2013-07-02T21:19:00.000
3
0
0
0
javascript,python,selenium,beautifulsoup
55,484,972
3
false
1
0
I would recommend using Selenium for interacting with web pages, whether in a full-blown browser or a browser in headless mode, such as headless Chrome. I would also say that Beautiful Soup is better for inspecting the page and writing logic that depends on whether (or what) an element is found, and then using Selenium to execute interactive tasks with the page if the user desires.
1
55
0
I'm scraping content from a website using Python. First I used BeautifulSoup and Mechanize on Python but I saw that the website had a button that created content via JavaScript so I decided to use Selenium. Given that I can find elements and get their content using Selenium with methods like driver.find_element_by_xpath, what reason is there to use BeautifulSoup when I could just use Selenium for everything? And in this particular case, I need to use Selenium to click on the JavaScript button so is it better to use Selenium to parse as well or should I use both Selenium and Beautiful Soup?
Selenium versus BeautifulSoup for web scraping
0.197375
0
1
40,675
17,440,323
2013-07-03T05:27:00.000
1
0
0
1
google-app-engine,python-2.7,app-engine-ndb
17,441,520
1
false
1
0
GAE works like this: you can have multiple instances of your program, each with a separate address space, so one instance has no access to another. If you mark your code as thread-safe, each instance can run multiple threads that share the same code and memory (the counter, in your case), so you need locking to avoid conflicts. Memcache is synchronized: an updated value is visible to all instances and all their threads, so there are no concurrent races at that level; you can read the most recent cached value and check (with compare-and-set) whether it changed while you made your update. As for simulating concurrent access to a piece of code: you should not simulate it; use explicit locking at the thread or instance level instead. Concurrent races are very hard to simulate, because which program or thread wins a race is undefined in every environment, be it Linux, Windows, or Python.
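Within a single instance, the thread-level locking described in this answer can be sketched with the standard library alone (the counter and workload are made up; GAE's memcache compare-and-set would still be needed across instances):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # serialize the read-modify-write on the counter
            counter += 1

# Four threads racing on the same counter; the lock prevents lost updates.
threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000 -- no increments lost
```

Without the `with lock:` block, `counter += 1` is a non-atomic read-modify-write and some increments can be lost under contention.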
1
0
0
Is it possible to simulate concurrent access to a piece of code in Google App Engine? I am trying to unit test a piece of code that increments a counter. It is possible that the code will be used by different instances of the app concurrently and although I have made the datastore access sections transactional and also used memcache cas I would feel better if there was some way to test it. I have tried setting up background threads but Testbed seems to be creating a new environment for each thread.
Testing concurrent access in GAE
0.197375
0
0
221
17,442,184
2013-07-03T07:32:00.000
0
0
1
0
python,redis,pycharm
44,322,725
3
false
0
0
This is very simple: rename your program file from "redis.py" to some other name (e.g. redis_test.py). Your own redis.py shadows the installed redis package, because Python finds it first on the module search path.
1
5
0
I installed Python, Django, and Redis. When I run "import redis" from Vim it works fine, but when I write "import redis" in the PyCharm IDE, PyCharm reports "no module named redis". Why, and what should I do?
pycharm no module named redis
0
0
0
10,942
17,444,608
2013-07-03T09:41:00.000
2
0
1
0
python,dictionary,rounding
17,446,861
2
false
0
0
You can use int() if you just want to chop off the decimals, or round() if you want to round to the nearest value at a given precision, e.g. round(2.456, 1) == 2.5 and round(2.456) == 2.
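Applied to the dictionary from the question, a minimal sketch (the keys and values here are made up):

```python
# Round every value in a dictionary to the nearest whole number.
prices = {'apple': 2.456, 'pear': 3.791}   # hypothetical data

rounded = {k: round(v) for k, v in prices.items()}
print(rounded)    # {'apple': 2, 'pear': 4}

# Or truncate toward zero instead of rounding:
truncated = {k: int(v) for k, v in prices.items()}
print(truncated)  # {'apple': 2, 'pear': 3}
```

Note the difference on 3.791: round() goes up to 4, while int() simply drops the fractional part and gives 3.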
1
1
0
I have a program in Python 3.4 that prints some numbers from a dictionary, but the numbers have long decimal expansions. How do I get it to print the dictionary with no decimal places?
Rounding numbers in a dictionary
0.197375
0
0
1,304
17,446,703
2013-07-03T11:26:00.000
0
0
0
0
python,eclipse,openerp
17,447,105
1
false
1
0
You can create a field from a function: create a record in the 'ir.model.fields' object. If you are creating a simple field (float, char, boolean), you have to give the Field Name, the Label, and the Model (the object on which you want to create the field); if it is a many2one or many2many field, you also have to give the Object Relation. Hope this helps.
1
1
0
Hi, I have created a button in my custom OpenERP module. I want to attach a function to this button that creates a field. I have added the function, but how do I implement the field creation itself? Please help.
How to create field through func in openerp?
0
0
0
198
17,449,313
2013-07-03T13:26:00.000
1
0
1
0
python,pip
17,449,533
3
false
0
0
I do not know whether there is a standard way or a module to do it, but you can pretty much do it by first installing the package using pip; you will then find the .py file(s) at the lib/python2.7/site-packages location.
2
0
0
Taking the famous BeautifulSoup package as an example, I am wondering whether there is a standard way to convert a pip package into a standalone .py file or files?
How to convert a pip package into a standalone py file(or files)?
0.066568
0
0
90
17,449,313
2013-07-03T13:26:00.000
0
0
1
0
python,pip
17,449,817
3
false
0
0
A Python package, as long as it doesn't use compiled extensions, is basically a set of files in a directory structure, so just download the source package, find the directory that contains the Python scripts, and put it where you want. For pure-Python packages, all that the "build"/"install" steps do is put the Python scripts somewhere on your Python path.
2
0
0
Taking the famous BeautifulSoup package as an example, I am wondering whether there is a standard way to convert a pip package into a standalone .py file or files?
How to convert a pip package into a standalone py file(or files)?
0
0
0
90
17,451,874
2013-07-03T15:18:00.000
6
0
1
0
ipython,paste,ipython-magic
17,475,225
3
false
0
0
You have two options: To edit it by hand, run %cpaste. You can then paste the code in with your terminal's standard paste (try Ctrl-Shift-V) and edit it; enter -- on a line by itself to finish. To work with it as text in your code, run %paste foo; the clipboard contents will be stored in the variable foo.
2
7
0
When using the magic %paste in IPython, it executes the pasted code rather than just pasting it. How can I get it to just paste the copied code so that it can be edited?
When using magic %paste in IPython, how can I get it to just paste the copied code, rather than paste and execute, so that it can be edited
1
0
0
3,184
17,451,874
2013-07-03T15:18:00.000
3
0
1
0
ipython,paste,ipython-magic
30,046,297
3
false
0
0
If you are not concerned with indentation, there is a workaround in IPython: just run %autoindent to turn automatic indentation off.
2
7
0
When using the magic %paste in IPython, it executes the pasted code rather than just pasting it. How can I get it to just paste the copied code so that it can be edited?
When using magic %paste in IPython, how can I get it to just paste the copied code, rather than paste and execute, so that it can be edited
0.197375
0
0
3,184
17,456,233
2013-07-03T19:09:00.000
1
0
1
0
python,comparison
17,456,347
4
false
0
0
What you're asking for is a fuzzy match. Instead of checking string equality, check whether the two strings being compared have a Levenshtein distance of 1 or less. Levenshtein distance is essentially the number of insertions, deletions, or substitutions it takes to get from word A to word B, so it accounts for small typos. Hope this is what you were looking for.
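A minimal sketch of that idea: a pure-Python Levenshtein distance, applied with a one-edit tolerance to the plate strings from the question's example:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # insertions, deletions and substitutions needed to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Treat two plates as the same car if they differ by at most one edit.
print(levenshtein('ASD456', 'ASF456'))  # 1 -> considered a match (typo)
print(levenshtein('ASD456', 'XYZ999'))  # 6 -> not a match
```

With this in hand, you would compare each plate in file A against each plate in file B and accept pairs whose distance is 0 or 1 before computing the time difference.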
1
0
1
I am working on a traffic study and I have the following problem: I have one CSV file containing time-stamps and car license plate numbers for one location, and another CSV file containing the same for a second location. I am trying to find matching license plates between the two files and then compute the time difference between the sightings. I know how to match strings, but is there a way to find near matches, to detect user input errors in the license plate numbers? Essentially the data looks like this: A = [['09:02:56','ASD456'],...] B = [...,['09:03:45','ASD456'],...] I want the time difference between the two sightings, but if the data was entered slightly incorrectly and the plate in B reads 'ASF456', I want it to catch that too.
Python Matching License Plates
0.049958
0
0
1,137
17,456,384
2013-07-03T19:18:00.000
2
0
0
0
python,python-idle
17,456,637
1
true
0
0
You can configure IDLE this way: Open up the menu item Options -> Configure IDLE... Go to Keys tab In the drop down menu on the right side of the dialog change the select to "IDLE Classic Unix"
1
0
0
Similar to Emacs C-f, C-b, C-p, C-n, is there a way to navigate a .py script in IDLE without using the arrow keys? When I go to configure IDLE I don't see any key bindings for this. *Please don't just leave a smug comment such as 'Why are you using IDLE in the first place?'
Navigate Python Script in IDLE without Arrow Keys
1.2
0
0
213
17,457,418
2013-07-03T20:21:00.000
1
0
0
0
python,numpy,pandas
17,457,967
3
false
0
0
df = df.convert_objects(convert_numeric=True) will work in most cases. Note that this copies the data; it would be preferable to get the values into a numeric type on the initial read. If you post your code and a small example, someone might be able to help you with that.
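As an aside, later pandas versions deprecated convert_objects in favour of pd.to_numeric. A sketch with a hypothetical object-typed frame (the column names and values are made up), where errors='coerce' turns unparseable strings such as 'na' into NaN:

```python
import pandas as pd

# Hypothetical frame of object-typed columns, as a MySQL fetch might return.
df = pd.DataFrame({'speed': ['1.5', '2.0', 'na'],
                   'count': ['10', 'na', '30']})

# Coerce each column: valid numbers become floats, 'na' becomes NaN.
df = df.apply(lambda col: pd.to_numeric(col, errors='coerce'))

print(df.dtypes)  # both columns are now float64
```

This is column-by-column, so a genuinely non-numeric column can simply be left out of the apply.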
1
4
1
I have a pandas DataFrame created from a MySQL query, which returns the data with object dtype. The data is mostly numeric, with some 'na' values. How can I cast the DataFrame's types so the numeric values are appropriately typed (floats) and the 'na' values are represented as numpy NaN?
Converting Pandas Dataframe types
0.066568
0
0
4,908
17,457,460
2013-07-03T20:24:00.000
0
1
0
0
python,c++,svm,libsvm
18,509,671
3
false
0
0
easy.py is a script for training and evaluating a classifier; it performs a grid search over the SVM parameters using grid.py. grid.py has a parameter "nr_local_worker" that defines the number of threads; you might wish to increase it (check your processor load).
1
3
1
I'm using LIBSVM in a 5x2 cross-validation to classify a very large amount of data: 47k samples for training and 47k samples for testing, in 10 different configurations. I usually use LIBSVM's easy.py script to classify the data, but it's taking very long; I've been waiting for results for more than 3 hours with nothing, and I still have to repeat this procedure 9 more times! Does anybody know how to make LIBSVM faster with this much data? Do the C++ LIBSVM functions run faster than the Python ones?
Large training and testing data in libsvm
0
0
0
3,432