Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31,764,392 | 2015-08-01T17:36:00.000 | 0 | 0 | 1 | 0 | wxpython | 31,764,393 | 1 | true | 0 | 1 | It took a while to find the answer, so if you have a similar issue the answer lies in default_color and oob_color.
Define it similar to this:
self.Client_rate = ic.IntCtrl(self.panel3,-1,value=0,size=(25,22),default_color=self.txt_colour,oob_color="red")
Then if you need to change the foreground colour alter like this:
self.Client_rate.SetColors(default_color=self.txt_colour, oob_color="red")
I trust this helps someone. Note: it requires the American spelling "color"! | 1 | 1 | 0 | When changing the foreground colour of an IntCtrl, it acted as if the colour was not set on each subsequent update: the text colour reverted back to black.
Is there a way to avoid this? | How to prevent IntCtrl() ignoring SetForegroundColour | 1.2 | 0 | 0 | 17 |
31,766,524 | 2015-08-01T21:42:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,file-handling | 31,766,723 | 3 | false | 0 | 0 | close can throw exceptions, for example, if you run out of disk space while it's trying to flush your last writes, or if you pulled out the USB stick the file was on.
As for the correct way to deal with this, that depends on the details of your application. Maybe you want to show the user an error message. Maybe you want to shut down your program. Maybe you want to retry whatever you were doing, but with a different file. Whatever the response you choose, it'll probably be implemented with a try-except block in whatever layer of your program is best equipped to deal with it. | 1 | 9 | 0 | I know that in Python the file.close() method doesn't have any return value, but I can't find any information on whether in some cases it throws an exception. If it doesn't do that either, then I guess the second part of this question is superfluous.
If it does, then what would be the "correct" way to handle the file.close() method throwing an exception inside a "with" statement used to open the file?
Is there a situation where file.close() could fail immediately after a file has been open and read from successfully? | file.close() exception handling inside a with statement in Python | 0.066568 | 0 | 0 | 7,395 |
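As a rough illustration of the try-except advice in the answer above, here is a minimal sketch; the file name and the logging reaction are placeholders, not part of the original answer.

```python
import logging

def read_settings(path="settings.txt"):  # hypothetical file name
    try:
        with open(path) as f:     # close() runs automatically when the block exits
            return f.read()
    except OSError as exc:        # also catches a failure raised by close() itself
        logging.error("Could not read %s: %s", path, exc)
        return None
```

The exception raised by close() simply propagates out of the with block like any other, so it can be caught in whatever layer is best placed to react.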
31,769,814 | 2015-08-02T08:04:00.000 | 0 | 1 | 0 | 0 | python,unit-testing,tdd,integration-testing | 31,769,998 | 2 | false | 0 | 0 | One way to test this kind of workflow is by using a special database just for testing. The test database mirrors the structure of your production database, but is otherwise completely empty (i.e. no data is in the tables). The routine is then as follows
Connect to the test database (and maybe reload its structure)
For every testcase, do the following:
Load the minimal set of data into the database necessary to test your routine
Run your function to test and grab its output (if any)
Perform some tests to see that your function did what you expected it to do.
Drop all data from the database before the next test case runs
After all your tests are done, disconnect from the database | 1 | 1 | 0 | Usually the workflow I have is as follows:
Perform SQL query on database,
Load it into memory
Transform data based on logic foo()
Insert the transformed data to a table in a database.
How should unit test be written for this kind of workflow? I'm really new to testing.
Anyway, I'm using Python 3.4. | How should unit test be written for data transformation? | 0 | 1 | 0 | 538 |
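A hedged sketch of the routine described in the answer above, using the standard unittest module and an in-memory sqlite3 database; the table names and the foo() transformation are invented for the example.

```python
import sqlite3
import unittest

def foo(rows):
    # stand-in for the real transformation logic
    return [(name.upper(),) for (name,) in rows]

class TransformTest(unittest.TestCase):
    def setUp(self):
        # connect to an empty test database and recreate the structure
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE src (name TEXT)")
        self.conn.execute("CREATE TABLE dst (name TEXT)")

    def tearDown(self):
        # throw the test database away before the next test case runs
        self.conn.close()

    def test_transform(self):
        # load the minimal set of data needed for this case
        self.conn.executemany("INSERT INTO src VALUES (?)", [("alice",), ("bob",)])
        rows = self.conn.execute("SELECT name FROM src").fetchall()
        # run the function under test and store its output
        self.conn.executemany("INSERT INTO dst VALUES (?)", foo(rows))
        result = [r[0] for r in self.conn.execute("SELECT name FROM dst ORDER BY name")]
        self.assertEqual(result, ["ALICE", "BOB"])

if __name__ == "__main__":
    unittest.main()
```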
31,769,887 | 2015-08-02T08:16:00.000 | 11 | 1 | 0 | 0 | pytest,python-hypothesis | 31,770,016 | 2 | true | 0 | 0 | It means more or less what it says: You have a test which failed the first time but succeeded the second time when rerun with the same example. This could be a Hypothesis bug, but it usually isn't. The most common cause of this is that you have a test which depends on some external state - e.g. if you're using a system random number generator rather than a Hypothesis provided one, or if your test creates some files and only fails if the files did not exist at the start of the test. The second most common cause of this is that your failure is a recursion error and the example which triggered it at one level of function calls did not at another.
You haven't really provided enough information to say what's actually happening, so it's hard to provide more specific advice than that. If you're running a recent version of Hypothesis (e.g. 1.9.0 certainly does it) you should have been given quite detailed diagnostics about what is going on - it will tell you what the original exception you got was and it will report if the values passed in seemed to change between calls. | 1 | 5 | 0 | I am using the hypothesis python package for testing.
I am getting the following error:
Flaky: Hypothesis test_visiting produces unreliable results: Falsified on the first call but did not on a subsequent one
As far as I can tell, the test is working correctly.
How do I get around this? | What does Flaky: Hypothesis test produces unreliable results mean? | 1.2 | 0 | 0 | 3,046 |
31,779,148 | 2015-08-03T03:49:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt4 | 31,779,357 | 1 | false | 0 | 1 | the problem is that the
checkBox.isChecked()
method was being called in the wrong place. It was evaluated only when the program first ran (when the checkbox was still empty), but not afterwards; the signal is generated by mouse interaction only once the program is running.
So I moved the call into the function that checks the state of the checkbox after a button is clicked.
Please be merciful in grading this post thread :). New to stackoverflow. | 1 | 0 | 0 | I am using Qt Designer to build a UI. In Qt Designer, in the Signal/Slot editor I set up the following:
Sender: radioButton,
Signal: toggled(bool),
Receiver: checkbox,
Slot: setChecked(bool)
When I run my .py file, as expected, when I select radioButton with the mouse in the user interface, checkbox is checked.
However, if I add a button that calls a function, which includes the following code:
print(checkBox.isChecked())
The boolean value I get in return is 'False', even though the checkbox is visibly checked.
Does anyone know why?
Thanks. | PyQt4 isChecked() method | 0 | 0 | 0 | 404 |
31,779,186 | 2015-08-03T03:54:00.000 | -2 | 0 | 1 | 0 | python,python-2.7 | 31,781,039 | 1 | false | 0 | 0 | According to the documentation (and my testing) copy.deepcopy does NOT take an extra argument (well it does, but not the depth), copy.deepcopy(my_object, 3) results in AttributeError: 'int' object has no attribute 'get' (since it expect a memo object here instead).
On the other hand copy.deepcopy does not require an additional argument. You could just call copy.deepcopy(my_object) and it will copy with indefinite depth.
There are some gotchas you could run into that make the copy misbehave. Basically you could end up copying too much or too little depending on whether the objects have proper support for deep copies. | 1 | 0 | 0 | I want to copy a Python object to an arbitrary depth. For example,
copy.deepcopy(my_object, 3) would deep copy my_object to a depth of three and then shallow copy anything deeper than 3.
Is there some python function that already exists that can do this? If not, what is the most efficient way to go about this? | How to copy python object to arbitrary depth? | -0.379949 | 0 | 0 | 288 |
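Since no standard function takes a depth limit, one way to roll it yourself is sketched below; it only handles dicts and lists and is purely illustrative.

```python
import copy

def limited_deepcopy(obj, depth):
    """Deep-copy obj down to `depth` levels, then fall back to shallow copies."""
    if depth <= 0:
        return copy.copy(obj)
    if isinstance(obj, dict):
        return {k: limited_deepcopy(v, depth - 1) for k, v in obj.items()}
    if isinstance(obj, list):
        return [limited_deepcopy(v, depth - 1) for v in obj]
    return copy.copy(obj)

nested = {"a": [{"b": [1, 2]}]}
partial = limited_deepcopy(nested, 2)
print(partial["a"][0] is nested["a"][0])            # False: a new dict was created
print(partial["a"][0]["b"] is nested["a"][0]["b"])  # True: contents beyond the depth limit are shared
```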
31,785,593 | 2015-08-03T11:03:00.000 | 3 | 0 | 0 | 0 | python,django | 31,785,953 | 1 | true | 1 | 0 | If you are using sqlite3 you don't need to tell any ip or port to your teammate. He just needs the path/name of the sqlite database. You can find the name in your settings.py file in the 'DATABASE' variable. | 1 | 1 | 0 | I am maintaining one database in django and some another application written in java want to access that database and add some data in it. Here i want common database for Java application and Django application. So whenever need data, we can make query to that database directly. How is it possible??? | How to access django database from another application | 1.2 | 0 | 0 | 2,301 |
31,786,076 | 2015-08-03T11:29:00.000 | 0 | 0 | 0 | 0 | python,scipy,ndimage | 66,423,995 | 1 | false | 0 | 0 | This is really late, but the data that gets passed to your filter function is a numpy array. You should just be able to reshape the data like normal
arr = arr.reshape((y, x)) | 1 | 2 | 1 | I am trying to use scipy.generic_filter to process an image. However, I need to further subset the window within the function I am applying. In another words I need to know the process (function) used to convert the 2D window to 1D array within the generic filter, so I can recreate the 2D array within the applied function in the right way. Does anybody know what function doe the scipy filter use to reshape the 2D to 1D? | Scipy.generic_filter - window translation to 1D | 0 | 0 | 0 | 64 |
31,786,122 | 2015-08-03T11:31:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,kivy,sudo | 31,861,791 | 2 | true | 0 | 1 | Well, I would call this problem solved, even if a few questions remain.
Here are the key points:
The slowdown is caused by Kivy being unable to load the proper video driver under "sudo", and using software rendering instead.
I haven't figured out why the driver isn't loading with sudo or how to fix it. However...
After compiling the program with Pyinstaller, everything works fine. The executable can be started with sudo, GPIO is working, Kivy loads the appropriate driver, everything works fast, as it should.
To sum it up, the reason of the initial problem has been found, no fix for launching the program directly with Python was yet found, but the problem was removed by compiling the program with Pyinstaller. (still, not a convenient way for debugging.) | 1 | 6 | 0 | I've been writing a Kivy graphical program on Raspberry Pi, with the KivyPie OS (Linux pre-configured for Kivy development).
For some reason, it's running extremely slow if started with sudo.
Normally, running "python main.py", the program runs at about 30 cycles per second.
However, if I do "sudo python main.py", it runs as slowly as 1 cycle per 5-10 seconds.
I need to use sudo to access Raspberry's GPIO. (unless I try some other way to do it, that I see people discuss).
I'm interested, though, what could be the cause of such a massive performance drop with sudo? And is it possible to work around that?
PS: Running the same program on my PC (Linux) with and without sudo doesn't seem to cause such problem. Only on Raspberry. | Raspberry Pi Python (Kivy) extremely slow with sudo | 1.2 | 0 | 0 | 2,524 |
31,790,076 | 2015-08-03T14:34:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,azure,web-applications | 31,790,837 | 2 | false | 1 | 0 | yes there is. look at backend and frontend instances. your question is too broad to go into more detail. in general the backend type of instance is used for long running tasks but you could also do everyrhing in the frontend instance. | 1 | 1 | 0 | I am currently working with MS Azure. There I have a worker role and a web role. In worker role I start an infinite loop to process some data continously. The web role is performing the interaction with the client. There I use a MVC Framework, which on server side is written in C# and on client side in Javascript.
Now I'm interested in GAE engine. I read a lot about the app engine. I want to build an application in Python. But I don't really understand the architecture. Is there a counterpart in the project structure like the worker and web role in Azure? | Worker role and web role counterpart in GAE | 0.099668 | 0 | 0 | 97 |
31,790,133 | 2015-08-03T14:37:00.000 | 0 | 1 | 0 | 1 | python,linux,arm,raspberry-pi,init.d | 31,791,309 | 1 | true | 0 | 0 | Ah bah, let's just give a quick answer.
After creating a script in /etc/init.d, you need to add a soft-link to the directory /etc/rc2.d, such as sudo ln -s /etc/init.d/<your script> /etc/rc2.d/S99<your script>. Assuming, of course, that you run runlevel 2. You can check that with the command runlevel.
The S means the script is 'started', the number determines the order in which processes are started.
You will also want to remove the entry from rc2.d that starts the graphical environment. What command that is depends on how your pi is configured. | 1 | 0 | 0 | I have a python script. This script is essentially my own desktop/UI. However, I would like to replace the default Raspbian (Raspberry Pi linux distro) desktop enviroment with my own version. How would I go about:
Disabling the default desktop and
Launching my python script (fullscreen) at startup?
This is on the Raspberry Pi running a modified version of debian linux.
(Edit: I tried making a startup script in the /etc/init.d directory, and added it to chmod, but I still can't seem to get it to start up. The script contained the normal .sh stuff, but also contained the python command that opened the script in my designated directory.) | Starting a python script at boot - Raspbian | 1.2 | 0 | 0 | 344 |
31,792,302 | 2015-08-03T16:27:00.000 | 1 | 0 | 0 | 1 | python-2.7,google-app-engine,web-applications,configuration,app.yaml | 31,796,794 | 2 | false | 1 | 0 | To "configure your app," generally speaking, is to specify, via some mechanism, parameters that can be used to direct the behavior of your app at runtime. Additionally, in the case of Google App Engine, these parameters can affect the behavior of the framework and services surrounding your app.
When you specify these parameters, and how you specify them, depends on the app and the framework, and sometimes also on your own philosophy of what needs to be parameterized. Readable data files in formats like YAML are a popular choice, particularly for web applications and services. In this case, the configuration will be read and obeyed when your application is deployed to Google App Engine, or launched locally via GoogleAppEngineLauncher.
Now, this might seem like a lot of bother to you. After all, the easiest way you have to change your app's behavior is to simply write code that implements the behavior you want! When you have configuration via files, it's generally more work to set up: something has to read the configuration file and twiddle the appropriate switches/variables in your application. (In the specific case of app.yaml, this is not something you have to worry about, but Google's engineers certainly do.) So what are some of the advantages of pulling out "configuration" into files like this?
Configuration files like YAML are relatively easy to edit. If you understand what the parameters are, then changing a value is a piece of cake! Doing the same thing in code may not be quite as obvious.
In some cases, the configuration parameters will affect things that happen before your app ever gets run – such as pulling out static content and deploying that to Google App Engine's front-end servers for better performance and lower cost. You couldn't direct that behavior from your app because your app is not running yet – it's still in the process of being deployed when the static content is handled.
Sometimes, you want your application to behave one way in one environment (testing) and another way in another environment (production). Or, you might want your application to behave some reasonably sensible way by default, but allow someone deploying your application to be able to change its behavior if the default isn't to their liking. Configuration files make this easier: to change the behavior, you can simply change the configuration file before you deploy/launch the application. | 1 | 1 | 0 | I am working on Google App Engine (GAE) which has a file called (app.yaml). As I am new to programming, I have been wondering, what does it mean to configure an app? | What does app configuration mean? | 0.099668 | 0 | 0 | 3,561 |
31,794,152 | 2015-08-03T18:22:00.000 | 1 | 0 | 0 | 1 | python,flask | 31,794,311 | 2 | true | 1 | 0 | No. The code won't be viewable. Server side code is not accessible unless you give someone access or post it somewhere public. | 1 | 0 | 0 | I don't want other people to see my application code. When I host my application, will others be able to see the code that is running? | Is the application code visible to others when it is run? | 1.2 | 0 | 0 | 75 |
31,794,158 | 2015-08-03T18:22:00.000 | 2 | 0 | 0 | 0 | python,numpy | 31,794,556 | 1 | false | 0 | 0 | I think you should first remap your data, then create the histogram, and then interpret the histogram knowing the values have been transformed. One possibility would be to tweak the histogram tick labels so that they display mapped values.
One possible way of doing it, for example, would be:
Sort one dimension of data as an unidimensional array;
Integrate this array, so you have a cumulative distribution;
Find the steepest part of this distribution, and choose a horizontal interval corresponding to a "good" bin size for the peak of your histogram - that is, a size that gives you good resolution;
Find the size of this same interval along the vertical axis. That will give you a bin size to apply along the vertical axis;
Create the bins using the vertical span of that bin - that is, "draw" horizontal, equidistant lines to create your bins, instead of the most common way of drawing vertical ones;
That way, you'll have lots of bins where data is more dense, and lesser bins where data is more sparse.
Two things to consider:
The mapping function is the cumulative distribution of the sorted values along that dimension. This can be quite arbitrary. If the distribution resembles some well known algebraic function, you could define it mathematically and use it to perform a two-way transform between actual value data and "adaptive" histogram data;
This applies to only one dimension. Care must be taken as how this would work if the histograms from multiple dimensions are to be combined. | 1 | 1 | 1 | I am currently working on a project where I have to bin up to 10-dimensional data. This works totally fine with numpy.histogramdd, however with one have a serious obstacle:
My parameter space is pretty large, but only a fraction is actually inhabited by data (say, maybe a few % or so...). In these regions, the data is quite rich, so I would like to use relatively small bin widths. The problem here, however, is that the RAM usage totally explodes. I see usage of 20GB+ for only 5 dimensions which is already absolutely not practical. I tried defining the grid myself, but the problem persists...
My idea would be to manually specify the bin edges, where I just use very large bin widths for empty regions in the data space. Only in regions where I actually have data, I would need to go to a finer scale.
I was wondering if anyone here knows of such an implementation already which works in arbitrary numbers of dimensions.
thanks | Python adaptive histogram widths | 0.379949 | 0 | 0 | 740 |
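numpy.histogramdd accepts explicit bin edges per dimension, which is exactly the manual-edges idea from the question. A hedged sketch with made-up 2-D data: fine edges where the data is dense, wide edges over the empty outskirts.

```python
import numpy as np

# toy data: a dense cluster near the origin inside a much larger, mostly empty space
data = np.random.randn(10000, 2) * 0.5

fine = np.linspace(-2, 2, 81)              # small bin widths where the data is dense
coarse_lo = np.linspace(-100, -2, 5)[:-1]  # wide bins for the empty outskirts
coarse_hi = np.linspace(2, 100, 5)[1:]
edges = np.concatenate([coarse_lo, fine, coarse_hi])

hist, used_edges = np.histogramdd(data, bins=[edges, edges])
print(hist.shape)   # (88, 88) instead of a huge uniform grid
```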
31,796,174 | 2015-08-03T20:27:00.000 | 7 | 1 | 0 | 1 | python,module,smtplib | 35,091,800 | 4 | false | 0 | 0 | I will tell you a probable why you might be getting error like Error no module smtplib
I had created a program named email.py.
Now, email is a module in Python, and because of that it started giving an error for smtplib as well.
So I had to delete the email.pyc file that had been created and then rename email.py to mymail.py.
After that, there was no smtplib error.
Make sure your file name is not conflicting with the python module. Also see because of that any *.pyc file created inside the folder | 1 | 13 | 0 | I tried to install python module via pip, but it was not successful.
can any one help me to install smtplib python module in ubuntu 12.10 OS? | How to install python smtplib module in ubuntu os | 1 | 0 | 0 | 58,280 |
31,796,798 | 2015-08-03T21:07:00.000 | 1 | 0 | 1 | 0 | python,datetime | 31,796,826 | 4 | false | 0 | 0 | Off the cuff-
Did you try %b? | 1 | 13 | 0 | How can I convert 'Jan' to an integer using Datetime? When I try strptime, I get an error time data 'Jan' does not match format '%m' | Python - Convert Month Name to Integer | 0.049958 | 0 | 0 | 37,304 |
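A quick sketch of the %b suggestion above (%b matches abbreviated month names such as 'Jan', %B matches full names); the calendar-based lookup is an alternative, not something from the original answer.

```python
from datetime import datetime
import calendar

print(datetime.strptime('Jan', '%b').month)  # 1

# alternative: build a lookup table from the calendar module
lookup = {name: num for num, name in enumerate(calendar.month_abbr) if num}
print(lookup['Jan'])  # 1
```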
31,798,097 | 2015-08-03T23:02:00.000 | 0 | 0 | 0 | 0 | python,autocad,euclidean-distance,freecad | 31,815,098 | 1 | false | 0 | 0 | You might look into the DXF file type. (drawing exchange format)
AutoCAD and most freeCAD will read/write DXF files. | 1 | 1 | 1 | I'm looking for software that can fit a signed distance function to a vector image or output from either AutoCAD or FreeCAD. Preferable CAD data because that output is 3D. I'm looking at coding something in either C or Python but I thought I'd check to see if there was anything out there because I couldn't find anything using google.
Thanks for the help | Fit a level set field to CAD data | 0 | 0 | 0 | 122 |
31,799,087 | 2015-08-04T01:01:00.000 | 2 | 1 | 0 | 1 | python,macos,vim,osx-yosemite | 31,800,107 | 1 | true | 0 | 0 | Vim doesn't check Python syntax out of the box, so a plugin is probably causing this issue.
Not sure why an OS upgrade would make a Vim plugin suddenly start being more zealous about things, of course, but your list of installed plugins (however you manage them) is probably the best place to start narrowing down your problem. | 1 | 1 | 0 | Overview
After upgrading to 10.11 Yosemite, I discovered that vim (on the terminal) highlights a bunch of errors in my python scripts that are actually not errors.
e.g.
This line:
from django.conf.urls import patterns
gets called out as an [import-error] Unable to import 'django.conf.urls'.
This error is not true because I can open up a python shell from the command line and import the supposedly missing module. I'm also getting a bunch of other errors all the way through my python file too: [bad-continuation] Wrong continued indentation, [invalid-name] Invalid constant name, etc.
All of these errors are not true.
Question
Anyway, how do I turn off these python error checks?
vim Details
vim --version:
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 5 2014 21:00:28)
Compiled by [email protected]
Normal version without GUI. Features included (+) or not (-):
-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path
+find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse
+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg -osfiletype
+path_extra -perl +persistent_undo +postscript +printer -profile +python/dyn
-python3 +quickfix +reltime -rightleft +ruby/dyn +scrollbind +signs
+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title
-toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
+vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp
-xterm_clipboard -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe
Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses | How Do I Turn Off Python Error Checking in vim? (vim terminal 7.3, OS X 10.11 Yosemite) | 1.2 | 0 | 0 | 421 |
31,804,892 | 2015-08-04T08:58:00.000 | 1 | 0 | 1 | 1 | python,celery,fileparsing | 31,877,500 | 1 | true | 0 | 0 | Using Memcached sounds like a much easier solution - a task is for processing, memcached is for storage - why use a task for storage?
Personally I'd recommend using Redis over memcached.
An alternative would be to try ZODB - it stores Python objects natively. If your application really suffers from serialization overhead maybe this would help. But I'd strongly recommend testing this with your real workload against JSON/memcached. | 1 | 3 | 0 | I have got a program that handle about 500 000 files {Ai} and for each file, it will fetch a definition {Di} for the parsing.
For now, each file {Ai} is parsed by a dedicated celery task and each time the definition file {Di} is parsed again to generate an object. This object is used for the parsing of the file {Ai} (JSON representation).
I would like to store the definition file (generated object) {Di(object)} to make it available for whole task.
So I wonder what would be the best choice to manage it:
Memcache + Python-memcached,
A long-running task to "store" the object with a set(add)/get interface.
For performance and memory usage, what would be the best choice ? | Share objects between celery tasks | 1.2 | 0 | 0 | 2,958 |
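A hedged sketch of the Redis suggestion from the answer above: cache the parsed definition object, serialized to JSON, under a key derived from the definition file so every task can reuse it. The connection settings and parse_definition() are placeholders.

```python
import json
import redis

r = redis.StrictRedis(host="localhost", port=6379, decode_responses=True)

def get_definition(definition_path):
    key = "definition:" + definition_path
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                 # reuse the already-parsed object
    parsed = parse_definition(definition_path)    # hypothetical expensive parser
    r.set(key, json.dumps(parsed))
    return parsed
```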
31,808,379 | 2015-08-04T11:42:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,sqlite | 34,991,644 | 1 | false | 1 | 0 | Just delete the db.sqlite3 file in the project directory
Recently in my django project, I faced similar problems. Initially I created two classes in my models.py, then after a custom migration for populating the database with initial data, I needed to make three classes in models.py, where the third table would need to be populated with data in the second migration. This caused similar problems. I simply deleted the db.sqlite3 file in the project directory, backed up my custom migrations, made necessary changes to my models.py, then ran makemigrations followed by a migrate. Everything went just fine. Hope it helps. | 1 | 0 | 0 | I named an app incorrectly in Django which I have renamed but I'm now getting migration errors for non-existent parent nodes. So I'd like to fresh install. Is there a django native way of doing this or best practice? At this stage I think I'll just start a new app and copy the db over. | Fresh Install SQLite3 in Django 1.8 | 0 | 0 | 0 | 544 |
31,814,825 | 2015-08-04T16:42:00.000 | 0 | 0 | 0 | 0 | python,gensim,word2vec | 57,694,733 | 2 | false | 0 | 0 | I think it is possible to obtain antonym using
king-men+women=queen analogies.
Here queen (an antonym of king and a synonym of woman) is the result returned from the trained word2vec model.
Let's say there is a word X with a synonym Y, and Y has an antonym Z. Then we can say X - Y + Z gives an antonym of X and a synonym of Z. | 1 | 11 | 0 | I am currently working on a word2vec model using gensim in Python, and want to write a function that can help me find the antonyms and synonyms of a given word.
For example:
antonym("sad")="happy"
synonym("upset")="enraged"
Is there a way to do that in word2vec? | How to obtain antonyms through word2vec? | 0 | 0 | 0 | 5,653 |
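For reference, the analogy arithmetic described in the answer maps onto gensim's most_similar with positive and negative word lists. A hedged sketch: the model path is a placeholder, newer gensim versions expose the call as model.wv.most_similar, and whether the top hit is a usable "antonym" depends entirely on the training corpus.

```python
from gensim.models import Word2Vec

model = Word2Vec.load("my_word2vec.model")   # placeholder path

# classic analogy: king - man + woman ~ queen
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# the X - Y + Z idea from the answer, with made-up words
print(model.most_similar(positive=["sad", "fortunate"], negative=["unfortunate"], topn=3))
```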
31,820,655 | 2015-08-04T22:46:00.000 | -3 | 0 | 0 | 0 | python,postgresql | 31,820,790 | 3 | false | 0 | 0 | I think no. You're forced to read the entire value of a column. You can divide the date in few columns, one for the year, another for the month, etc. , or store the date on an integer format if you want an aggressive space optimization. But it will doing the database worst about scalability and modifications.
Databases are slow compared to plain C/C++ code, you have to accept that, but they make things that are hard to do in C/C++ much easier.
If you think make a game and save your 'save game' on SQL forget it. Use it if you're doing a back-end server or a management application, tool, etc. | 1 | 0 | 0 | Say you have a column that contains the values for the year, month and date. Is it possible to get just the year? In particular I have
ALTER TABLE pmk_pp_disturbances.disturbances_natural ADD COLUMN sdate timestamp without time zone;
and want just the 2004 from 2004-08-10 05:00:00. Can this be done with Postgres or must a script parse the string? By the way, any rules as to when to "let the database do the work" vs. let the script running on the local computer do the work? I once heard querying databases is slower than the rest of the program written in C/C++, generally speaking. | Can Postgres be used to take only the portion of a date from a field? | -0.197375 | 1 | 0 | 38 |
31,821,553 | 2015-08-05T00:34:00.000 | 2 | 0 | 0 | 0 | python,django,redirect,nginx,varnish | 31,821,657 | 1 | false | 1 | 0 | It all depends on the load actually... if you have a lot of requests going to the old urls than it might be useful to have some caching. But in general I would say that doing it in Django, adding all of the urls to a database model and querying (optionally caching the results in Django or even Varnish) should do the trick.
These things are not impossible to do in Varnish or Nginx but Django will be far easier to link up to a database so that would have my vote. | 1 | 2 | 0 | We are building a Django app to replace a legacy system which used custom URLs for almost every resource. No pattern to the URLs at all. There are about 350,000 custom URLs that we now need to 301 redirect to a correct URL in the new system.
Our new system will use Django, but will also have Varnish and Nginx in front of it, so we could use any of these tools for the solution.
In Django, I think we could either make a very very large custom urls.py file, or maybe make a middleware that does a database lookup against a table with all the redirects.
Or perhaps there's a way to handle this in Varnish or Nginx so the requests never even hit Django.
My question: what's the most performant way to handle thousands of custom URL redirects? | How to handle thousands of legacy urls in Django, Varnish, Nginx? | 0.379949 | 0 | 0 | 107 |
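A hedged sketch of the "table of redirects plus middleware" option mentioned above; the model and class names are invented, it is written in the old-style middleware form current around Django 1.8, and Django's bundled django.contrib.redirects app implements essentially the same idea.

```python
# models.py
from django.db import models

class LegacyRedirect(models.Model):
    old_path = models.CharField(max_length=500, db_index=True, unique=True)
    new_path = models.CharField(max_length=500)

# middleware.py
from django.http import HttpResponsePermanentRedirect
from myapp.models import LegacyRedirect   # placeholder app label

class LegacyRedirectMiddleware(object):
    def process_response(self, request, response):
        if response.status_code != 404:
            return response               # only rescue requests that would otherwise 404
        try:
            target = LegacyRedirect.objects.get(old_path=request.path)
        except LegacyRedirect.DoesNotExist:
            return response
        return HttpResponsePermanentRedirect(target.new_path)
```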
31,822,523 | 2015-08-05T02:39:00.000 | 0 | 0 | 1 | 0 | python,indexing | 31,822,646 | 4 | false | 0 | 0 | In python: 0 == -0, so x[0] == x[-0].
Why is sequence indexing zero based instead of one based? It is a choice the language designer should do. Most languages I know of use 0 based indexing. Xpath uses 1 based for selection.
Using negative indexing is also a convention for the language. Not sure why it was chosen, but it allows for circling or looping the sequence by simple addition (subtraction) on the index. | 1 | 4 | 1 | I'm curious in Python why x[0] retrieves the first element of x while x[-1] retrieves the first element when reading in the reverse order. The syntax seems inconsistent to me since in the one case we're counting distance from the first element, whereas we don't count distance from the last element when reading backwards. Wouldn't something like x[-0] make more sense? One thought I have is that intervals in Python are generally thought of as inclusive with respect to the lower bound but exclusive for the upper bound, and so the index could maybe be interpreted as distance from a lower or upper bound element. Any ideas on why this notation was chosen? (I'm also just curious why zero indexing is preferred at all.) | Logic behind Python indexing | 0 | 0 | 0 | 180 |
31,822,714 | 2015-08-05T03:02:00.000 | 0 | 1 | 0 | 1 | python,c++,c,linux | 31,830,627 | 1 | false | 0 | 0 | Here is the only way to do that I can think. It is a bit confusing but if you follow the steps it is very simple:
If I want to select total cpu use of Google Chrome process:
$ ps -e -o pcpu,comm | grep chrome | awk '{ print $1 }' | paste -sd+ | bc -l | 1 | 0 | 0 | I have a python server that forks itself once it receives a request. The python service has several C++ .so objects it can call into, as well as the python process itself.
My question is, in any one of these processes, I would like to be able to see how much CPU all instances of this server are currently using. So lets say I have foo.py, I want to see how much CPU all instances of foo.py are currently using. For example, foo.py(1) is using 200% cpu, foo.py(2) is using 300%, and foo.py(3) is using 50%, id like to arrive at 550%.
The only way I can think of doing this myself is getting the PID of every process and scanning through the /proc filesystem. Is there a more general way available within C/Python/POSIX for such an operation?
Thank you! | Query total CPU usage of all instances of a process on Linux OS | 0 | 0 | 0 | 114 |
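Within Python, the psutil package wraps the /proc scanning mentioned in the question. A hedged sketch that sums the CPU usage of every process whose command line mentions foo.py; the one-second sampling interval is an arbitrary choice.

```python
import time
import psutil

def total_cpu_percent(script_name="foo.py"):
    procs = []
    for p in psutil.process_iter():
        try:
            if script_name in " ".join(p.cmdline()):
                procs.append(p)
                p.cpu_percent(interval=None)   # first call primes the per-process counter
        except psutil.Error:
            pass                               # process vanished or access was denied
    time.sleep(1.0)                            # measurement interval
    return sum(p.cpu_percent(interval=None) for p in procs)

print(total_cpu_percent())                     # e.g. 550.0 for the example in the question
```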
31,825,032 | 2015-08-05T06:35:00.000 | 1 | 0 | 0 | 0 | python-2.7,odoo-8 | 31,846,746 | 1 | true | 1 | 0 | We need to give the Rights of "View Online Payment Options" in the user form, after that user will able to see the payment button in sale_order as well as invoice also and also see in a website. | 1 | 0 | 0 | Once i installed payment_paypal module, still it is not showing after confirm the sale order and validate the invoice. | Paypal button is not showing in Sale Order and Accounting Invoice odoo? | 1.2 | 0 | 0 | 497 |
31,827,094 | 2015-08-05T08:21:00.000 | 3 | 1 | 1 | 0 | python,email,encryption,sendmail | 31,828,903 | 3 | false | 0 | 0 | Encryption basically tries to rely on one (and only one) secret.
That is, one piece of data that is known to those who want to communicate but not to an attacker.
In the past attempts have been made to e.g. (also) keep the encryption algorithm/implementation secret, but if that implementation is widely used (in a popular cryptosystem) those attempts have generally fared poorly.
In general that one secret is the password. So that even if the attacker knows the encryption algorithm, he cannot decrypt the traffic if he doesn't know the password.
As others have shown, encrypting a password and giving a script the means to decrypt it is futile if the attacker can get hold of the script. It's like a safe with the combination of the lock written on the door.
On the other hand as long as you can keep your script secret, the key in it is secret as well.
So if you restrict the permissions of your script such that only the root/administrator user can read or execute it, the only way for an attacker to access it is to have cracked the root/administrator account. In which case you've probably already lost.
The biggest challenges in cases like these are operational.
Here are some examples of things that you should not do:
Make the script readable by every user.
Store the script where it can be read by a publicly accessible web-server.
Upload it to github or any other public hosting service.
Store it in an unencrypted backup.
Update: You should also consider how the script uses the password. If it sends the password over the internet in cleartext, you don't have much security anyway. | 1 | 3 | 0 | I want to have Python send a mail automatically after certain events occur. In my script I have to enter a password. Is there any way to encrypt my password and use it in this script?
Please give an example as I am not an expert in python. I have seen few answers on this topic but those aren't discussed completely, just some hints are given. | How to use encrypted password in python email | 0.197375 | 0 | 0 | 6,684 |
31,827,125 | 2015-08-05T08:22:00.000 | 1 | 0 | 1 | 0 | python,function,size,readability | 31,827,199 | 1 | false | 0 | 0 | A good rule of thumb (and it's more a guideline of thumb really) is that you should be able to view the entire function on one screen.
That makes it a lot easier to see the control flow without having to scroll all over the place in whatever editor you're using.
If you can't understand fully what a function does at first glance, it's probably a good idea to refactor chunks of code so that the more detailed steps are placed in their own, well-named, separate function and just called from this one.
However, it's not a hard-and-fast rule, you'll adapt your approach depending on your level of expertise and how complex the code actually is. | 1 | 0 | 0 | New programmer here!
I'm creating my first script on my own, and I have a particular function that is quite large, as in 50 lines.
I understand that theoretically a function can be as large as you need it to be, but etiquette-wise, where is a good place to stay under?
I'm using Python 2.something if that makes a difference. | Is there a limit to how large a function can be? | 0.197375 | 0 | 0 | 55 |
31,828,141 | 2015-08-05T09:11:00.000 | 3 | 0 | 0 | 0 | python,django,security,django-models,django-settings | 31,828,245 | 1 | true | 1 | 0 | No, settings.SECRET_KEY is not used for password hashing | 1 | 1 | 0 | I have a Django powered site(Project-1) running with some users registered on it. I am now creating a revamped version of the site in a new separate Django project(Project-2) which I would make live once finished. I would need to populate the User data along with their hashed passwords currently in database of Project-1 into database of Project-2. Would having different SECRET_KEYs for Project-1 and Project-2 be an issue to get the hashed passwords migrated and working in Project-2? | Django SECRET_KEY : Copying hashed passwords into different Django project | 1.2 | 0 | 0 | 184 |
31,828,928 | 2015-08-05T09:46:00.000 | 0 | 1 | 0 | 1 | python,testing,celery | 31,877,460 | 1 | false | 0 | 0 | To facilitate testing you should first run the task from ipython to verify that it does what it should.
Then to verify scheduling you should change the celerybeat schedule to run in the near future, and verify that it does in fact run.
Once you have verified functionality and schedule you can update the celerybeat schedule to midnight, and be at least some way confident that it will run like it should. | 1 | 0 | 0 | I have a periodical celery job that is supposed to run every night at midnight. Of course I can just run the system and leave it overnight to see the result. But I can see that it's not going to be very efficient in terms of solving potential problems and energy.
In such situation, is there a trick to make the testing easier? | testing celery job that runs each night | 0 | 0 | 0 | 158 |
31,834,738 | 2015-08-05T14:01:00.000 | 1 | 0 | 0 | 0 | python,web-scraping,scrapy,rabbitmq,celery | 31,841,508 | 1 | true | 1 | 0 | Yes, using RabbitMQ is very helpful for your use case since your crawling agent can utilize a message queue for storing the results while your document processor can then store that in both your database back end (in this reply I'll assume mongodb) and your search engine (and I'll assume elastic search here).
What one gets in this scenario is a very rapid and dynamic search engine and crawler that can be scaled.
As for the celery + rabbitmq + scrapy portion: celery would be a good way to schedule your scrapy crawlers and distribute your crawler bots across your infrastructure. Celery just uses RabbitMQ as its back end to consolidate and distribute the jobs between each instance. So for your use case, to use both celery and scrapy, just write the code for your scrapy bot to use its own rabbitmq queue for storing the results, then write up a document processor to store the results into your persistent database back end. Then set up celery to schedule the batches of site crawls. Throw in the sched module to maintain a bit of sanity in your crawling schedule.
Also, review the works done at google for how they resolve the issues for over crawling a site in thier algorithm plus respect sane robots.txt settings and your crawler should be good to go. | 1 | 2 | 0 | I want to run multiple spiders to crawl many different websites. Websites I want to crawl take different time to be scraped (some take about 24h, others 4h, ...). I have multiple workers (less than the number of websites) to launch scrapy and a queue where I put the websites I want to crawl. Once a worker has finished crawling a website, the website goes back to the queue waiting for a worker to be available to launch scrapy, and so on.
The problem is that small websites will be crawled more times than big ones, and I want all websites to be crawled the same number of times.
I was thinking about using RabbitMQ for queue management and to prioritise some websites.
But when I search for RabbitMQ, it is often used with Celery. What I understood about these tools is that Celery allows launching some code on a schedule, and RabbitMQ uses messages and queues to define the execution order.
In my case, I don't know if using only RabbitMQ without Celery will work. Also, is using RabbitMQ helpful for my problem?
Thanks | Scrapy - Use RabbitMQ only or Celery + RabbitMQ for scraping multiple websites? | 1.2 | 0 | 1 | 1,802 |
31,837,170 | 2015-08-05T15:48:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 31,837,244 | 3 | false | 0 | 0 | number % 2
is equal to (shorthand for)
number % 2 != 0
because 1 evaluates to True and 0 to False. | 1 | 3 | 0 | I'm learning about Python's boolean logic and how you can shorten things down. Are the two expressions in the title equivalent? If not, what are the differences between them? | Difference between 'number % 2:' and 'number % 2 == 0'? | 0.132549 | 0 | 0 | 2,539 |
31,838,882 | 2015-08-05T17:16:00.000 | 0 | 0 | 0 | 0 | python,django,django-migrations | 53,122,187 | 10 | false | 1 | 0 | I checked it by looking up the table django_migrations, which stores all applied migrations. | 1 | 74 | 0 | In Django, is there an easy way to check whether all database migrations have been run? I've found manage.py migrate --list, which gives me the information I want, but the format isn't very machine readable.
For context: I have a script that shouldn't start running until the database has been migrated. For various reasons, it would be tricky to send a signal from the process that's running the migrations. So I'd like to have my script periodically check the database to see if all the migrations have run. | Check for pending Django migrations | 0 | 0 | 0 | 57,259 |
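A hedged sketch of asking Django itself instead of reading django_migrations by hand; MigrationExecutor is not a documented public API, so this may need adjusting between Django versions, and it assumes Django has already been set up (for example inside a management command).

```python
from django.db import connection
from django.db.migrations.executor import MigrationExecutor

def has_pending_migrations():
    executor = MigrationExecutor(connection)
    targets = executor.loader.graph.leaf_nodes()
    return bool(executor.migration_plan(targets))

# e.g. poll until everything has been applied:
# while has_pending_migrations(): time.sleep(5)
```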
31,841,054 | 2015-08-05T19:19:00.000 | 6 | 0 | 0 | 0 | python,sqlalchemy,flask-sqlalchemy | 31,857,049 | 1 | false | 0 | 0 | I got it:
from sqlalchemy import func
(func.extract('dow', <my_object>) == some_day)
dow stands for 'day of week'
The extract is an SQLAlchemy function allowing the extraction of any field from the column object. | 1 | 5 | 0 | I have an SQLAlchemy DB column which is of type datetime:
type(<my_object>) --> sqlalchemy.orm.attributes.InstrumentedAttribute
How do I reach the actual date in order to filter the DB by weekday() ? | Extract a weekday() from an SQLAlchemy InstrumentedAttribute (Column type is datetime) | 1 | 1 | 0 | 3,875 |
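To round out the accepted answer above, a hedged sketch of how the extract expression is typically used in a query; the Disturbance model, the sdate column and the session object are assumptions for the example, and 'dow' (Sunday = 0) relies on backend support such as PostgreSQL.

```python
from sqlalchemy import extract

monday = 1   # PostgreSQL's 'dow' counts Sunday as 0

# hypothetical model with a DateTime column called sdate
monday_rows = (
    session.query(Disturbance)
    .filter(extract('dow', Disturbance.sdate) == monday)
    .all()
)
```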
31,841,751 | 2015-08-05T19:58:00.000 | 1 | 0 | 0 | 0 | python,csv,paraview | 32,314,832 | 3 | false | 0 | 0 | Improving the @GregNash's answer. If you want to include only a single file (called foo.csv):
outcsv = CSVReader(FileName= 'foo.csv')
Or if you want to include all files with certain pattern use glob. For example if files start with string foo (aka foo.csv.0, foo.csv.1, foo.csv.2):
myreader = CSVReader(FileName=glob.glob('foo*'))
To use glob is neccesary import glob in the preamble. In general in Filename you could work with strings generated with python which could contain more complex pattern files and file's path. | 2 | 0 | 0 | How do I use ParaView's CSVReader in a Python Script? An example would be appreciated. | How do I use ParaView's CSVReader in a Python Script? | 0.066568 | 0 | 0 | 521 |
31,841,751 | 2015-08-05T19:58:00.000 | 1 | 0 | 0 | 0 | python,csv,paraview | 31,842,727 | 3 | false | 0 | 0 | Unfortunately, I don't know Paraview at all. But I found "... simply record your work in the desktop application in the form of a python script ..." at their site. If you import a CSV like that, it might give you a hint. | 2 | 0 | 0 | How do I use ParaView's CSVReader in a Python Script? An example would be appreciated. | How do I use ParaView's CSVReader in a Python Script? | 0.066568 | 0 | 0 | 521 |
31,842,424 | 2015-08-05T20:41:00.000 | 1 | 0 | 1 | 0 | python,string,csv,boolean | 31,842,481 | 1 | false | 0 | 0 | I believe it is reading the value as the string "0", which is a truthy value. Try using int(field) instead of the field itself. | 1 | 1 | 0 | I have a CSV file that I'm reading. One of the fields contains only 1s and 0s. I was under the assumption that Python will return a Boolean value of False for 0, but when my program reads a field whose value is 0, it returns True.
Is this because entries read in a .csv file are strings? | Boolean value of fields in .csv file in Python | 0.197375 | 0 | 0 | 1,611 |
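Two lines that illustrate the point in the answer above, since the csv module hands every field back as a string:

```python
print(bool("0"))       # True  - any non-empty string is truthy
print(bool(int("0")))  # False - convert the field first, as the answer suggests
```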
31,842,785 | 2015-08-05T21:03:00.000 | 6 | 1 | 0 | 1 | python,pyserial,beagleboneblack,uart,baud-rate | 33,552,144 | 3 | true | 0 | 0 | The AM335x technical reference manual (TI document spruh73) gives the baud rate limits for the UART sub-system in the UART section (section 19.1.1, page 4208 in version spruh73l):
Baud rate from 300 bps up to 3.6864 Mbps
The UART modules each have a 48MHz clock to generate their timing. They can be configured in one of two modes: UART 16x and UART 13x, in which that clock is divided by 16 and 13, respectively. There is then a configured 16-bit divisor to generate the actual baud rate from that clock. So for 300 bps it would be UART 16x and a divisor of 10000, or 48MHz / 16 / 1000 = 300 bps.
When you tell the omap-serial kernel driver (that's the driver used for UARTs on the BeagleBone), it calculates the mode and divisor that best approximates the rate you want. The actual rate you'll get is limited by the way it's generated - for example if you asked for an arbitrary baud of 2998 bps, I suppose you'd actually get 2997.003 bps, because 48MHz / 16 / 1001 = 2997.003 is closer to 2998 than 48 MHz / 16 / 1000 = 3000.
So the UART modules can certainly generate all the standard baud rates, as well as a large range of arbitrary ones (you'd have to actually do the math to see how close it can get). On Linux based systems, PySerial is just sending along the baud you tell it to the kernel driver through an ioctl call, so it won't limit you either.
Note: I just tested sending data on from the BeagleBone Black at 200 bps and it worked fine, but it doesn't generate 110 bps (the next lower standard baud rate below 300 bps), so the listed limits are really the lowest and highest standard rates it can generate. | 2 | 5 | 0 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. | Maximum Beaglebone Black UART baud? | 1.2 | 0 | 0 | 5,044 |
31,842,785 | 2015-08-05T21:03:00.000 | 0 | 1 | 0 | 1 | python,pyserial,beagleboneblack,uart,baud-rate | 31,902,876 | 3 | false | 0 | 0 | The BBB reference manual does not contain any information on Baud Rate for UART but for serial communication I usually prefer using value of BAUDRATE = 115200, which works in most of the cases without any issues. | 2 | 5 | 0 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. | Maximum Beaglebone Black UART baud? | 0 | 0 | 0 | 5,044 |
31,842,810 | 2015-08-05T21:04:00.000 | 0 | 0 | 0 | 0 | python,excel,pandas,xlwt | 31,842,901 | 1 | false | 0 | 0 | If a large proportion of them are similar, and this is a one-off operation it may be worth your while coding the solution for the majority and handling the other documents (or groups of them if they are similar) separately. If using Python to do this you could simply build a dynamic query where the columns that are present in a given excel sheet are built into the INSERT statements. Of course, this assumes that your database table allows for NULLs or that a default value is present on the columns that aren't in a given document. | 1 | 0 | 0 | I have several thousand excel documents. All of these documents are 95% the same in terms of column headings. However, since they are not 100% identical, I cannot simply merge them together and upload it into a database without messing up the data.
Would anyone happen to have a library or an example that they've ran into that would help? | Python merge excel documents with dynamic columns | 0 | 1 | 0 | 177 |
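If the goal is just to line up the mostly-overlapping columns before loading them into a database, pandas aligns columns by name and fills the gaps with NaN. A hedged sketch; the glob pattern and the output step are placeholders.

```python
import glob
import pandas as pd

frames = [pd.read_excel(path) for path in glob.glob("input/*.xlsx")]  # placeholder pattern
merged = pd.concat(frames, ignore_index=True)   # union of all column headings, NaN where absent
merged.to_csv("merged.csv", index=False)        # or merged.to_sql(...) straight into the database
```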
31,846,436 | 2015-08-06T03:41:00.000 | 5 | 0 | 0 | 0 | python,django,inline | 44,101,423 | 1 | false | 1 | 0 | Instead of using ForeignKey, use OneToOneField and it will display just one item without the add another link. | 1 | 3 | 0 | I have a foreign key in model and I am making inline in admin side. I passed extra=0 to display only one form and its working but I am getting Add another model in admin.
I don't want to display "Add another model" in the admin, just one form only.
How can I do that . How can I remove Add another option from admin | django admin inline with only one form without add another option | 0.761594 | 0 | 0 | 1,934 |
31,847,163 | 2015-08-06T05:00:00.000 | 0 | 0 | 0 | 0 | django,python-2.7 | 31,848,062 | 1 | false | 1 | 0 | If both computers are in same network, you can use local IP and the port you indicated with runserver command. For instance, if the computer with django app has an IP of 192.168.1.145, you need to go to http://192.168.1.145:8000 to access your app in other computers with same network.
If it's about accessing the app from different computers with different networks. We have servers for that. If you have to need the app from your own computer, you need to get a static IP.(It's not recommended though.) Call your ISP for static IP. | 1 | 0 | 0 | I am running a django runserver from my macbook at home. Able to load the page in my mac. But when i tried copy the link and load the page on other PC the page is not loading. Why? Please help.. | Django runserver - page is loading in other PCs | 0 | 0 | 0 | 51 |
31,849,867 | 2015-08-06T07:50:00.000 | 2 | 0 | 0 | 0 | python,django,django-admin,django-grappelli | 31,850,151 | 4 | false | 1 | 0 | If you want to change the appearance of the admin in general you should override admin templates. This is covered in details here: Overriding admin templates. Sometimes you can just extend the original admin file and then overwrite a block like {% block extrastyle %}{% endblock %} in django/contrib/admin/templates/admin/base.html as an example.
If your style is model specific you can add additional styles via the Media meta class in your admin.py. See an example here:
class MyModelAdmin(admin.ModelAdmin):
class Media:
js = ('js/admin/my_own_admin.js',)
css = {
'all': ('css/admin/my_own_admin.css',)
} | 1 | 1 | 0 | In Django grappelli, how can I add my own css files to all the admin pages? Or is there a way to extend admin's base.html template? | Django (grappelli): how add my own css to all the pages or how to extend admin's base.html? | 0.099668 | 0 | 0 | 3,792 |
31,852,534 | 2015-08-06T09:52:00.000 | 0 | 0 | 0 | 0 | android,kivy,qpython | 31,858,806 | 2 | false | 0 | 1 | I don't know about qpython's log stuff, but you can see the kivy log output by viewing the logcat stream, e.g. from an attached computer (with developer mode enabled) using adb logcat.
This has always been possible and is the standard way to view the log output with kivy (and with android in general). It sounds like your direct problem relates to a change in qpython. | 1 | 1 | 0 | I'm having some trouble with my Kivy apps when run from the QPython launcher.
If I run the standard pong example, I don't see any output.
This used to work.
So, I suspect that QPython, or Kivy has taken an 'upgrade' which has broken something.
In the past I would be able to swipe down to see the log output icon.
But, now that's no longer there.
Well done QPython, Kivy !!!!
So, what's changed?
And, how am I supposed to check program log output to see why it no longer runs?
Regards
Nick | Kivy QPython app on Android Moto G phone - no log output | 0 | 0 | 0 | 429 |
31,855,794 | 2015-08-06T12:28:00.000 | 24 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 33,249,008 | 6 | false | 0 | 0 | Michael's suggestion of running your own nbviewer instance is a good one I used in the past with an Enterprise Github server.
Another lightweight alternative is to have a cell at the end of your notebook that does a shell call to nbconvert so that it's automatically refreshed after running the whole thing:
!ipython nbconvert <notebook name>.ipynb --to html
EDIT: With Jupyter/IPython's Big Split, you'll probably want to change this to !jupyter nbconvert <notebook name>.ipynb --to html now. | 1 | 179 | 0 | I am trying to wrap my head around what I can/cannot do with Jupyter.
I have a Jupyter server running on our internal server, accessible via VPN and password protected.
I am the only one actually creating notebooks but I would like to make some notebooks visible to other team members in a read-only way. Ideally I could just share a URL with them that they would bookmark for when they want to see the notebook with refreshed data.
I saw export options but cannot find any mention of "publishing" or "making public" local live notebooks. Is this impossible? Is it maybe just a wrong way to think about how Jupyter should be used? | How can I share Jupyter notebooks with non-programmers? | 1 | 0 | 0 | 147,878 |
31,857,628 | 2015-08-06T13:51:00.000 | 2 | 1 | 0 | 0 | python,python-asyncio | 31,880,066 | 1 | false | 0 | 0 | You may call synchronous LDAP library in thread pool (loop.run_in_executor()).
aiohttp itself doesn't contain abstractions for sessions and authentication but there are aiohttp_session and aiohttp_security libraries. I'm working on these but current status is alpha. You may try it as beta-tester :) | 1 | 2 | 0 | I'm looking for solution how to setup domain authorization with aiohttp.
There are several LDAP libraries, but all of them block the event loop, plus I don't have a clear understanding of user authorization with aiohttp.
As I see it, I need session management: store isLoggedIn=True in a cookie, check that cookie at every route -> redirect to the login handler, and check the key in every template? It seems very insecure, the session could be stolen. | Proper way to setup ldap auth with aiohttp.web | 0.379949 | 0 | 1 | 549 |
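A hedged sketch of the run_in_executor suggestion from the answer: push the blocking LDAP bind into a thread so the aiohttp handler stays responsive. check_ldap_credentials is a stand-in for whatever synchronous LDAP library is used (python-ldap, ldap3, ...), and the handler is written in the modern async/await style.

```python
import asyncio
from aiohttp import web

def check_ldap_credentials(username, password):
    # placeholder: call your synchronous LDAP library here and return True/False
    return False

async def login(request):
    data = await request.post()
    loop = asyncio.get_event_loop()
    ok = await loop.run_in_executor(
        None, check_ldap_credentials, data["username"], data["password"])
    if not ok:
        raise web.HTTPUnauthorized()
    return web.Response(text="logged in")

app = web.Application()
app.router.add_post("/login", login)
```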
31,860,476 | 2015-08-06T15:57:00.000 | 1 | 0 | 0 | 1 | python,linux,curl,wifi,bandwidth | 31,860,813 | 1 | true | 0 | 0 | Simply sending packets as fast as possible to a random destination (that is not localhost) should work.
You'll need to use udp (otherwise you need a connection acknowledge before you can send data).
cat /dev/urandom | pv | nc -u 1.1.1.1 9123
pv is optional (but nice).
You can also use /dev/zero, but there may be a risk of link-level compression.
Of course, make sure the router is not actually connected to the internet (you don't want to flood a server somewhere!), and that your computer has the router as the default route. | 1 | 7 | 0 | I need to generate a very high level of wifi activity for a study to see if very close proximity to a transceiver can have a negative impact on development of bee colonies.
I have tried to write an application which spawns several web-socket server-client pairs to continuously transfer mid-sized files (this approach hit >100MB). However, we want to run this on a single computer connected to a wifi router, so the packets invariably end up getting routed via the loopback interface, not the WLAN.
Alternatively I have tried using a either simple ping floods and curling the router, but this is not producing nearly the maximum bandwidth the router is capable of.
Is there a quick fix on linux to force the traffic over the network? The computer we are using has both an ethernet and a wireless interface, and I found one thread online which suggested setting up iptables to force traffic between the two interfaces and avoid the loopback. | Generating maximum wifi activity through 1 computer | 1.2 | 0 | 0 | 93 |
31,860,630 | 2015-08-06T16:05:00.000 | 2 | 0 | 1 | 1 | python,hdfs,race-condition,ioerror | 31,934,576 | 1 | true | 0 | 0 | (Setting aside that it sounds like HDFS might not be the right solution for your use case, I'll assume you can't switch to something else. If you can, take a look at Redis, or memcached.)
It seems like this is the kind of thing where you should have a single service that's responsible for computing/caching these results. That way all your processes will have to do is request that the resource be created if it's not already. If it's not already computed, the service will compute it; once it's been computed (or if it already was), either a signal saying the resource is available, or even just the resource itself, is returned to your process.
If for some reason you can't do that, you could try using HDFS for synchronization. For example, you could try creating the resource with a sentinel value inside which signals that process A is currently building this file. Meanwhile process A could be computing the value and writing it to a temporary resource; once it's finished, it could just move the temporary resource over the sentinel resource. It's clunky and hackish, and you should try to avoid it, but it's an option.
You say you want to avoid expensive recalculations, but if process B is waiting for process A to compute the resource, why can't process B (and C and D) be computing it as well for itself/themselves? If this is okay with you, then in the event that a resource doesn't already exist, you could just have each process start computing and writing to a temporary file, then move the file to the resource location. Hopefully moves are atomic, so one of them will cleanly win; it doesn't matter which if they're all identical. Once it's there, it'll be available in the future. This does involve the possibility of multiple processes sending the same data to the HDFS cluster at the same time, so it's not the most efficient, but how bad it is depends on your use case. You can lessen the inefficiency by, for example, checking after computation and before upload to the HDFS whether someone else has created the resource since you last looked; if so, there's no need to even create the temporary resource.
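A rough sketch of that "write to a temporary name, then rename" pattern, shelling out to the standard hdfs CLI; the helper names are invented for the example:
import subprocess
import uuid

def hdfs_exists(path):
    # `hdfs dfs -test -e` exits with 0 when the path exists
    return subprocess.call(["hdfs", "dfs", "-test", "-e", path]) == 0

def publish(local_file, final_path):
    tmp_path = "%s._tmp_%s" % (final_path, uuid.uuid4().hex)
    subprocess.check_call(["hdfs", "dfs", "-put", local_file, tmp_path])
    # The rename makes the resource appear all at once; if several processes race,
    # only one -mv succeeds and the losers simply clean up their temporary file.
    if subprocess.call(["hdfs", "dfs", "-mv", tmp_path, final_path]) != 0:
        subprocess.call(["hdfs", "dfs", "-rm", tmp_path])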
TLDR: You can do it with just HDFS, but it would be better to have a service that manages it for you, and it would probably be even better not to use HDFS for this (though you still would possibly want a service to handle it for you, even if you're using Redis or memcached; it depends, once again, on your particular use case). | 1 | 5 | 0 | So I have some code that attempts to find a resource on HDFS...if it is not there it will calculate the contents of that file, then write it. And next time it goes to be accessed the reader can just look at the file. This is to prevent expensive recalculation of certain functions
However...I have several processes running at the same time on different machines on the same cluster. I SUSPECT that they are trying to access the same resource and I'm hitting a race condition that leads a lot of errors where I either can't open a file or a file exists but can't be read.
Hopefully this timeline will demonstrate what I believe my issue to be
Process A goes to access resource X
Process A finds resource X exists and begins writing
Process B goes to access resource X
Process A finishes writing resource X
...and so on
Obviously I would want Process B to wait for Process A to be done with Resource X and simply read it when A is done.
Something like semaphores come to mind but I am unaware of how to use these across different python processes on separate processors looking at the same HDFS location. Any help would be greatly appreciated
UPDATE: To be clear..process A and process B will end up calculating the exact same output (i.e. the same filename, with the same contents, to the same location). Ideally, B shouldn't have to calculate it. B would wait for A to calculate it, then read the output once A is done. Essentially this whole process is working like a "long term cache" using HDFS. Where a given function will have an output signature. Any process that wants the output of a function, will first determine the output signature (this is basically a hash of some function parameters, inputs, etc.). It will then check the HDFS to see if it is there. If it's not...it will write calculate it and write it to the HDFS so that other processes can also read it. | Sharing a resource (file) across different python processes using HDFS | 1.2 | 0 | 0 | 132 |
31,862,293 | 2015-08-06T17:36:00.000 | 1 | 0 | 0 | 1 | python,apache-spark,ipython,pyspark | 66,149,862 | 8 | false | 0 | 0 | Tested with spark 3.0.1 and python 3.7.7 (with ipython/jupyter installed)
To start pyspark with IPython:
$ PYSPARK_DRIVER_PYTHON=ipython pyspark
To start pyspark with jupyter notebook:
$ PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook pyspark | 1 | 33 | 0 | I want to load IPython shell (not IPython notebook) in which I can use PySpark through command line. Is that possible?
I have installed Spark-1.4.1. | How to load IPython shell with PySpark | 0.024995 | 0 | 0 | 26,058 |
31,862,957 | 2015-08-06T18:14:00.000 | 0 | 0 | 0 | 0 | python,mongodb,pymongo,database | 31,867,932 | 1 | true | 0 | 0 | Your post's points and questions are quoted below. My comments follow each quote.
By having a database where each slide is a document that contains the filename of the slide, the filename of the preview thumbnail of the
slide, and an array containing searchable tag words, one will be able
to query specific sets of slides very quickly.
Sounds good. I'm assuming that the slides are in one collection.
The user-created "groups" can be individual collections, where a
collection is created when a group is created, slides are
added/removed from the collection as needed, and the collection can be
destroyed when the group is deleted.
I suggest that you do not create a collection for each group. You may end up with hundreds or thousands of collections to represent "groups". This could bite you down the road if you decide you want to introduce Sharding (which may only be done on a collection). Consider creating one collection just for "groups" that will contain the unique user id to which the group is associated.
[Will I be able to use MongoDB, with decent performance, to] create
and delete collections as needed?
Constantly creating and deleting collections is not good design. You probably do not need to do this. If you have any familiarity with RDBMS then a collection is analogous to a table. Would you keep creating dozens/hundreds/thousands of tables on the fly? Probably not.
[Will I be able to use MongoDB, with decent performance, to] copy and
delete specific documents from a master collection into newly created
collections?
Yes, it is possible to take data from one collection and save it to another collection. Deleting documents is also quite easy and performant. (Indexing is your friend.)
[Will I be able to use MongoDB, with decent performance, to] store an
array of searchable tags in string format in a dynamic array
associated with each document?
Yes. MongoDB is very good at this.
[Will I be able to use MongoDB, with decent performance, to] query
slides within a collection based on a single tag word stored in an
array within each document?
Yes. Decent performance will depend on the size of your collection and, more importantly, the existence of relevant indexes.
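To make the tag-array pattern concrete, a small PyMongo sketch; the database, collection and field names are just examples:
from pymongo import MongoClient

slides = MongoClient().slide_gallery.slides  # example database/collection names

slides.create_index("tags")  # multikey index so tag queries stay fast

slides.insert_one({
    "slide_file": "q3_results.key",
    "thumbnail_file": "q3_results.png",
    "tags": ["finance", "2015", "charts"],
})

for doc in slides.find({"tags": "finance"}):  # matches any slide whose tag array contains the word
    print(doc["thumbnail_file"])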
[Will I be able to use MongoDB, with decent performance, to] maintain
the database between system shutdowns and opening / closing the
program?
I'm assuming that you are referring to replication and failover. If so, yes, MongoDB supports this quite well. | 1 | 0 | 0 | This will be a bit of a lengthly post, sorry in advance. I have a bit of experience using MongoDB (been awhile) and I'm so-so with python, but I have a big project and I would like some feedback before spending lots of time coding.
The project involves creating a gallery where individual presentation slides (from apple keynote '09) can be selected and parsed together into a presentation. This way a user with a few thousand slides can use this program to create a new presentation by mixing and matching old slides, rather than having to open up each presentation and copy-paste all of the desired slides into a new presentation manually.
Within the program there is a master gallery that contains all the slides. Each slide may be selected and assigned searchable tags. New "groups" of slides may be formed, where all slides with a specific set of tags are added to the group automatically. In addition, individual slides can be dragged from the master gallery and dropped into a user-created group.
There is a folder with preview images for each slide, here is why I believe I need MongoDB: By having a database where each slide is a document that contains the filename of the slide, the filename of the preview thumbnail of the slide, and an array containing searchable tag words, one will be able to query specific sets of slides very quickly. The query will return an array of matching slides which can than be looped through to add each slide thumbnail to the GUI gallery. The user-created "groups" can be individual collections, where a collection is created when a group is created, slides are added/removed from the collection as needed, and the collection can be destroyed when the group is deleted. This also will allow permanent storage as the database and its collections will persist between opening and closing the program.
My question is, will I be able to use MongoDB (through pyMongo) to do the following with decent performance:
-Create and delete collections as needed
-Copy and delete specific documents from a master collection into newly created collections
-Store an array of searchable tags in string format in a dynamic array associated with each document
-Query slides within a collection based on a single tag word stored in an array within each document
-Maintain the database between system shutdowns and opening / closing the program.
Thanks! | Will I be able to implement this design using MongoDB (via pyMongo) | 1.2 | 1 | 0 | 70 |
31,863,996 | 2015-08-06T19:12:00.000 | 1 | 0 | 0 | 1 | python,django,rabbitmq,celery | 32,602,968 | 1 | true | 1 | 0 | I found the problem in my code,
So in one of my tasks I was opening a connection with urllib3 (to parse a remote resource) and it was getting hung.
After moving out that portion in async task, things are working fine now. | 1 | 0 | 0 | I am using rabbitmq as broker, there is a strange behaviour that is happening in my production environment only. Randomly sometimes my celery stops consuming messages from a queue, while it consumes from other queues.
This leads to a pileup of messages in the queue; if I restart my celeryd, everything starts to work fine.
"/var/logs/celeryd/worker" does not indicate any error. I am not even sure where to start looking as I am new to Python/Django.
Any help will be greatly appreciated. | Celery worker stops consuming from a specific queue while it consumes from other queues | 1.2 | 0 | 0 | 1,570 |
31,866,429 | 2015-08-06T21:51:00.000 | 0 | 0 | 0 | 0 | python,scrapy,virtualenv | 33,439,385 | 3 | true | 1 | 0 | It's not possible to do what I wanted to do on the GoDaddy plan I had. | 1 | 1 | 0 | Here's my problem,
I have a shared hosting (GoDaddy Linux Hosting package) account and I'd like to create .py file to do some scraping for me. To do this I need the scrapy module (scrapy.org). Because of the shared account I can't install new modules so I installed VirtualEnv and created a new virtual env. that has pip, wheel, etc. preinstalled.
Running pip install scrapy does NOT complete successfully because scrapy has a lot of dependencies like libxml2, and it also needs the python-dev tools. If I had access to 'sudo apt-get ...' this would be easy, but I don't. I can only use pip and easy_install.
So how do I install the Python dev tools? And how do I install the dependencies? Is this even possible?
Cheers | Installing Scrapy on Python VirtualEnv | 1.2 | 0 | 0 | 2,964 |
31,866,507 | 2015-08-06T21:57:00.000 | 13 | 0 | 0 | 0 | python,windows | 31,866,538 | 2 | true | 0 | 0 | As long as the computer doesn't get put to sleep, your process should continue to run. | 2 | 14 | 0 | I am running a Python script that uses the requests library to get data from a service.
The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping.
If it will eventually halt anything, how do I keep this from happening? Thanks. | Keep Python script running after screen lock (Win. 7) | 1.2 | 0 | 1 | 27,171 |
31,866,507 | 2015-08-06T21:57:00.000 | 7 | 0 | 0 | 0 | python,windows | 31,866,586 | 2 | false | 0 | 0 | Check "Power Options" in the Control panel. You don't need to worry about the screen locking or turning off as these wont affect running processes. However, if your system is set to sleep after a set amount of time you may need to change this to Never. Keep in mind there are separate settings depending on whether or not the system is plugged in. | 2 | 14 | 0 | I am running a Python script that uses the requests library to get data from a service.
The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping.
If it will eventually halt anything, how do I keep this from happening? Thanks. | Keep Python script running after screen lock (Win. 7) | 1 | 0 | 1 | 27,171 |
31,867,145 | 2015-08-06T22:59:00.000 | 0 | 0 | 0 | 0 | python,tkinter,launch,os.system | 31,867,275 | 1 | false | 0 | 1 | I'd consider using subprocess library in python.
import subprocess
p = subprocess.Popen(["python", "my_script_1.py"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate()[0]
Hope this helps. | 1 | 0 | 0 | I'm creating a program using Python/Tkinter and it involves a main screen with a bunch of buttons. Each button will launch another Python script in the same folder.
I'm trying to create a function that will quit the script and open another. How can this be done?
People have recommended os.system on other similar threads, however everything I've tried has failed to work.
Any ideas/help would be greatly appreciated,
Jarrod | Function to Launch Another Python Script | 0 | 0 | 0 | 428 |
31,868,486 | 2015-08-07T01:48:00.000 | -3 | 0 | 0 | 0 | python,wifi,wireless | 54,299,200 | 3 | false | 0 | 0 | c:\netsh
netsh> wlan
netsh wlan> show all | 1 | 10 | 0 | I am trying to find out how I can list all of the available wireless networks in Python. I am using Windows 8.1.
Is there a built-in function I can call, or through a library?
Please kindly show me the code which prints the list. | List All Wireless Networks Python for PC | -0.197375 | 0 | 0 | 24,082 |
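To drive netsh from Python, as the question above asks, something along these lines should work on Windows; the SSID parsing is approximate and depends on the locale of netsh's output:
import re
import subprocess

output = subprocess.check_output(["netsh", "wlan", "show", "networks"]).decode("utf-8", "ignore")
ssids = [s.strip() for s in re.findall(r"SSID \d+\s*:\s*(.+)", output)]
print(ssids)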
31,869,524 | 2015-08-07T04:03:00.000 | 4 | 0 | 1 | 0 | java,python,javaw,pythonw | 31,869,550 | 3 | false | 0 | 0 | By default java opens a console window when executed in windows OS. By using javaw the java process doesn't open in a console window. It is a good UX practice to use javaw in scripts or bundled executables. I guess it is the same for pythonw also. 'w' stands for 'Windows' as in Java for windows. | 1 | 6 | 0 | I just made a connection between python's: pythonw.exe and java's: javaw.exe and I'm curious about this as I cant figure out what some of those ending letters mean. I know that javac is the Java compiler so I assume the w on the end of the name also has some significance. I've also seen more like javap, javah etc. Could someone outline the meanings for the most common endings like c, w, h, p, etc?
I've tried googling and searching on Stackoverflow but haven't found anything that isn't just about a specific ending.
Edit:
I realize there are a lot of isolated answers to these questions. All I really want to know is if there's a place where I can view a complete (or decent) list of the common letters and their meanings, or if someone could outline them for me? Also what to call these endings so that I'm not referring to them just "ending letters"? | What Do The Ending Letters Mean - pythonw, javaw, javap, javac, etc | 0.26052 | 0 | 0 | 715 |
31,870,616 | 2015-08-07T05:52:00.000 | -3 | 0 | 1 | 0 | python,python-2.7,time | 31,870,657 | 4 | false | 0 | 0 | If the program would know how much data it is getting, you could set it up to function like a progress bar.. | 1 | 1 | 0 | I created a python file that collect data. After collecting all the data, it will print out "Done.". Sometimes, it might take atleast 3 minutes to collect all the data.
I would like to know how to print something like "Please wait..." every 30 seconds, and have it stop after collecting all the data.
Can anyone help me please? | Python Priting Out Something While Waiting For Long Output | -0.148885 | 0 | 0 | 1,834 |
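A minimal sketch of one way to do this with a background thread; collect_all_the_data is a placeholder for whatever long-running collection code you already have:
import threading
import time

def collect_all_the_data():
    time.sleep(90)             # placeholder for the real, slow collection step

done = threading.Event()

def keep_user_posted():
    while not done.wait(30):   # prints every 30 seconds until done is set
        print("Please wait...")

reporter = threading.Thread(target=keep_user_posted)
reporter.start()

collect_all_the_data()
done.set()                     # stops the reporter loop
reporter.join()
print("Done.")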
31,870,995 | 2015-08-07T06:23:00.000 | 1 | 1 | 1 | 0 | python,compilation,word2vec | 34,438,547 | 4 | false | 0 | 0 | Similar to user1151923, after adding MinGW\bin to my path variable and uninstalling\reinstalling gensim through pip, I still received the same warning message. I ran the following code to fix this problem (installed gensim from conda).
pip uninstall gensim
conda install gensim | 3 | 5 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim or the train will be slow, and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | Gensim needs a C compiler? | 0.049958 | 0 | 0 | 4,228 |
31,870,995 | 2015-08-07T06:23:00.000 | 0 | 1 | 1 | 0 | python,compilation,word2vec | 56,945,149 | 4 | false | 0 | 0 | I had the same problem and tried many solutions, but none of them worked except degrading to gensim version 3.7.1. | 3 | 5 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim or the train will be slow, and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | Gensim needs a C compiler? | 0 | 0 | 0 | 4,228 |
31,870,995 | 2015-08-07T06:23:00.000 | 0 | 1 | 1 | 0 | python,compilation,word2vec | 61,358,854 | 4 | false | 0 | 0 | When I installed it from conda-forge then I obtained a version that is already compiled and fast:
conda install -c conda-forge gensim | 3 | 5 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim or the train will be slow, and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | Gensim needs a C compiler? | 0 | 0 | 0 | 4,228 |
31,879,337 | 2015-08-07T13:47:00.000 | 2 | 0 | 0 | 0 | python,mysql | 31,879,445 | 2 | false | 0 | 0 | Are you using an ORM like SQLAlchemy?
Anyway, to answer your question directly, you can use json or pickle to convert your list to a string and store that. Then to get it back, you can parse it (as JSON or a pickle) and get the list back.
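A quick sketch of that JSON round trip; the database column itself would just be VARCHAR or TEXT:
import json

coords = [1, 157, 421]            # image number plus a coordinate, as in the question
as_text = json.dumps(coords)      # '[1, 157, 421]' -- this string is what you store

restored = json.loads(as_text)    # after reading the string back from MySQL
print(restored[1])                # 157 -- list indexing works again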
However, if your list is always a 3 point coordinate, I'd recommend making separate x, y, and z columns in your table. You could easily write functions to store a list in the correct columns and convert the columns to a list, if you need that. | 1 | 3 | 0 | How can I store python 'list' values into MySQL and access it later from the same database like a normal list?
I tried storing the list as a varchar type and it did store it. However, while accessing the data from MySQL I couldn't access the same stored value as a list, but it instead it acts as a string. So, accessing the list with index was no longer possible. Is it perhaps easier to store some data in the form of sets datatype? I see the MySQL datatype 'set' but i'm unable to use it from python. When I try to store set from python into MySQL, it throws the following error: 'MySQLConverter' object has no attribute '_set_to_mysql'. Any help is appreciated
P.S. I have to store co-ordinate of an image within the list along with the image number. So, it is going to be in the form [1,157,421] | storing python list into mysql and accessing it | 0.197375 | 1 | 0 | 5,462 |
31,879,606 | 2015-08-07T14:01:00.000 | 1 | 0 | 0 | 1 | python,deployment | 31,885,764 | 1 | false | 1 | 0 | I don't see why you couldn't deploy on the same node (that's essentially what I do when I'm developing locally), but if you want to be able to rapidly scale you'll probably want them to be separate.
I haven't used rabbitmq in production with celery, but I use redis as the broker and it was easy for me to get redis as a service. The web app sends messages to the broker and worker nodes pick up the messages (and perhaps provide a result to the broker).
You can scale the web app, broker service (or the underlying node it's running on), and the number of worker nodes as appropriate. Separating the components allows you to scale them individually and I find that it's easier to maintain. | 1 | 0 | 0 | My web app is using celery for async job and rabbitmq for messaging, etc. The standard stuff. When it comes to deployment, are rabbitmq and celery normally deployed in the same node where the web app is running or separate? What are the differences? | flask application deployment: rabbitmq and celery | 0.197375 | 0 | 0 | 603 |
31,880,734 | 2015-08-07T14:52:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-admin | 44,893,888 | 1 | false | 1 | 0 | I fixed a very similar issue today where I couldn't assign Users permissions concerning tables that were created in multiple databases because those tables didn't appear in the list of "available permissions."
It appears that I accidentally migrated the model creation migrations to the default database before I correctly used the --database DATABASE flag with manage.py migrate. So I had the same table names in both the default and auxiliary databases. I dropped the tables in the default database, leaving only the tables in the auxiliary database, and then the tables appeared in the permissions list. | 1 | 1 | 0 | I registered a new model in Django's admin interface but I can't see any permissions related to it that I can assign to users or groups.
Could it be related to the fact that my models come from a different database? | Can't see permissions for new model in Django's admin interface | 0 | 0 | 0 | 725 |
31,880,759 | 2015-08-07T14:53:00.000 | 0 | 0 | 1 | 0 | python,arrays | 31,881,089 | 3 | false | 0 | 0 | From your description, it might be enough to have a counter associated with each sub-array as to how many of the items in that sub-array have already been bought. Give that you haven't shown any details as to your representation of these things, I can't give more details as to how to implement this. | 2 | 0 | 1 | I'm relatively new to Python and have already written a code to randomly select from two tables based on user input but the next function I need to create is more complex and I'm having trouble wrapping my head around.
I'm going to have some code that's going to take user input and generate an amount of money I'm going to add to a variable, let's say, wallet.
I then want to write some code that takes random objects from an array based on price.
Now here's the caveat(s). Lets say array A is chosen. In Array A there will be 3-4 other sub arrays. Within those arrays are 4 objects first, second, third, and fourth. With the first being the cheapest and the fourth being the most expensive. I want this code to NOT be able to buy object second without having bought object first. I don't want an object purchasable unless the prerequisite is also purchased.
I'm just having a hard time thinking it through (a weakness in general in programming I need to overcome) but any advice or links to a concept similar to what I'm aiming to do would be greatly appreciated. Thanks! | Python - Pick from Array based on price with a caveat | 0 | 0 | 0 | 45 |
31,880,759 | 2015-08-07T14:53:00.000 | 0 | 0 | 1 | 0 | python,arrays | 31,881,413 | 3 | false | 0 | 0 | It's difficult to understand what you're getting at, because you're not expressing your ideas very well. You're finding this general programming difficult as programming can be considered as the precise expression of ideas.
So, in very general terms you are trying to simulate some sort of curated shopping experience. You need to:
Track a value of currency.
Manage a product catalogue.
Allow a selection of products based on value and constraints based on prior selections.
If I were doing this, I might write a class that I'd use to manage the basket. I might instantiate a basket with a budget figure and a product catalogue to select from. I might express the constraints in the catalogue, but enforce them in the basket.
I would probably use the basket (budget, tally and selections) to filter the product catalogue to highlight eligible products.
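A tiny illustration of the "must buy them in order" rule such a basket would enforce; the item names and prices are invented for the example:
tiers = [("first", 10), ("second", 25), ("third", 60), ("fourth", 120)]

def next_purchasable(owned_count, wallet):
    # Only the next tier is eligible, and only if the wallet can cover it.
    if owned_count >= len(tiers):
        return None
    name, price = tiers[owned_count]
    return (name, price) if price <= wallet else None

print(next_purchasable(0, 30))   # ('first', 10) -- must start at the cheapest object
print(next_purchasable(1, 5))    # None -- 'second' exists but is not affordable yet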
If multiple transactions are allowed, the basket would need to have knowledge of previous purchases and therefore which prerequisites have already been fulfilled. | 2 | 0 | 1 | I'm relatively new to Python and have already written a code to randomly select from two tables based on user input but the next function I need to create is more complex and I'm having trouble wrapping my head around.
I'm going to have some code that's going to take user input and generate an amount of money I'm going to add to a variable, lets say, wallet.
I then want to write some code that takes random objects from an array based on price.
Now here's the caveat(s). Lets say array A is chosen. In Array A there will be 3-4 other sub arrays. Within those arrays are 4 objects first, second, third, and fourth. With the first being the cheapest and the fourth being the most expensive. I want this code to NOT be able to buy object second without having bought object first. I don't want an object purchasable unless the prerequisite is also purchased.
I'm just having a hard time thinking it through (a weakness in general in programming I need to overcome) but any advice or links to a concept similar to what I'm aiming to do would be greatly appreciated. Thanks! | Python - Pick from Array based on price with a caveat | 0 | 0 | 0 | 45 |
31,882,605 | 2015-08-07T16:32:00.000 | 0 | 0 | 1 | 0 | python,spyder,imdb,imdbpy | 31,885,585 | 1 | false | 0 | 0 | Suggestion (may or may not work):
launch "Winpython Command Prompt"
type 'pip install IMDbPy'
....
type 'pip list' to check pip did install it | 1 | 1 | 0 | I'm trying to get a version of IMDbPy that I can install using the WinPython installer - there are a variety of programs that have been made compatible with, however IMDbPy doesn't have a specific WinPython package.
I've tried to download several different versions of it and install it with the WinPython Installer, however most of them have instantly been rejected due to the incorrect file type for the package; I got one to be accepted only for it to reject it later in the process.
I'm just wondering if there is a way to get the software installed and usable within Spyder. | IMDbPy with WinPython | 0 | 0 | 0 | 309 |
31,883,046 | 2015-08-07T17:00:00.000 | 0 | 0 | 1 | 0 | python-3.x,installation | 70,541,886 | 3 | false | 0 | 0 | Download msi file for particular version and mention the target directory name and run below command in Download folder(where msi placed)
msiexec /a python-2.7.10.msi /qb TARGETDIR=D:\python27
inside D:\python27 we got python path and application | 1 | 1 | 0 | I want to install python3 on a computer that I do not have admin rights. Are there any ways to go around ? Thanks. | How to install python3 without admin rights on Windows? | 0 | 0 | 0 | 6,451 |
31,883,505 | 2015-08-07T17:30:00.000 | 2 | 0 | 0 | 0 | python,django,virtualenv | 31,883,608 | 4 | false | 1 | 0 | Common approach, if you'd like to configure region, but did not want to store sensitive information in repo, is to pass it through environment variables. When you need it just call os.environ('SECRET') (even in your settings.py). Better with some fallback value.
Virtualenv does not helps you to hide anything, it just prevent you system-wide Python installation from littering by one-project-required-packages. | 2 | 16 | 0 | I am using Django, python, virtualenv, virtualenvwrapper and Vagrant.
So far I have simply left my secret_key inside of the settings.py file.
This works fine for local files. However, I have already placed my files in Git. I know this is not acceptable for production (Apache).
What is the correct way to go about hiding my secret_key?
Should I use virtualenv to hide it? | How to I hide my secret_key using virtualenv and Django? | 0.099668 | 0 | 0 | 14,236 |
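A minimal settings.py sketch of the environment-variable approach from the first answer; note that os.environ is a mapping, so the lookup is os.environ['...'] or os.environ.get(...), and the variable name here is just an example:
# settings.py
import os

SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "unsafe-dev-only-key")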
31,883,505 | 2015-08-07T17:30:00.000 | 0 | 0 | 0 | 0 | python,django,virtualenv | 63,311,862 | 4 | false | 1 | 0 | The solution I use is to create a file sec.py and place it next to my settings.py file. Then in at line 1 of settings.py call from .sec import *. Be sure to include the period in front of the file name. Be sure to list sec.py in your .gitignore file. | 2 | 16 | 0 | I am using Django, python, virtualenv, virtualenvwrapper and Vagrant.
So far I have simply left my secret_key inside of the settings.py file.
This works fine for local files. However, I have already placed my files in Git. I know this is not acceptable for production (Apache).
What is the correct way to go about hiding my secret_key?
Should I use virtualenv to hide it? | How to I hide my secret_key using virtualenv and Django? | 0 | 0 | 0 | 14,236 |
31,883,784 | 2015-08-07T17:47:00.000 | 0 | 0 | 1 | 0 | python,numbers,format,decimal,floating-point-precision | 31,884,402 | 5 | false | 0 | 0 | Python also has a builtin "round" function: x = round(2.00001, 2) I believe is the command you would use. | 1 | 0 | 0 | This is what I have:
x = 2.00001
This is what I need:
x = 2.00
I am using:
float("%.2f" % x)
But all I get is:
2
How can I limit the decimal places to two AND make sure there are always two decimal places even if they are zero?
Note: I do not want the final output to be a string. | How to fix floating point decimal to two places even if number is 2.00000 | 0 | 0 | 0 | 330 |
31,884,573 | 2015-08-07T18:41:00.000 | -1 | 0 | 0 | 0 | python,django,database-migration | 48,024,289 | 3 | false | 1 | 0 | Worth noting for future readers that the migrations can hang when trying to apply a migration for an incorrect size CharField (DB implementation dependent). I was trying to alter a CharField to be greater than size 255 and it was just hanging. Even after terminating the connections as stated it would not fix it as a CharField of size greater than 255 as that was incorrect with my implementation (postgresql).
TLDR; Ensure your CharField is 255 or less, if greater change your CharField to a TextField and it could fix your problem! | 2 | 9 | 0 | I have a django migration I am trying to apply. It gets made fine (it's small, it's only adding a CharField to two different Models. However when I run the actual migrate it hangs (no failure, no success, just sits).
Through googling I've found that other open connections can mess with it so I restarted the DB. However this DB is connect to continuously running jobs and new queries do sneak in right away. However they are small, and last time I tried restarting I THINK I was able to execute my migrate before anything else. Still nothing.
Are there any other known issues that cause something like this? | Django 1.7 Migrations hanging | -0.066568 | 0 | 0 | 4,210 |
31,884,573 | 2015-08-07T18:41:00.000 | 6 | 0 | 0 | 0 | python,django,database-migration | 31,884,628 | 3 | true | 1 | 0 | At least in PostgreSQL you cannot modify tables (even if it's just adding new columns) while there are active transactions. The easiest workaround for this is usually to:
run the migration script (which will hang)
restart your webserver/wsgi container
When restarting your webserver all open transactions will be aborted (assuming you don't have background processes which also have transactions open), so as soon as no transactions are blocking your table, the migration will finish. | 2 | 9 | 0 | I have a django migration I am trying to apply. It gets made fine (it's small, it's only adding a CharField to two different Models. However when I run the actual migrate it hangs (no failure, no success, just sits).
Through googling I've found that other open connections can mess with it so I restarted the DB. However this DB is connect to continuously running jobs and new queries do sneak in right away. However they are small, and last time I tried restarting I THINK I was able to execute my migrate before anything else. Still nothing.
Are there any other known issues that cause something like this? | Django 1.7 Migrations hanging | 1.2 | 0 | 0 | 4,210 |
31,886,385 | 2015-08-07T20:47:00.000 | 4 | 0 | 1 | 0 | python,ghostscript | 52,513,311 | 2 | false | 0 | 0 | Jamie's answer is not very helpful, OP is saying that he has indeed downloaded and installed ghostscript and even posted that he's using a python 2.7 which is supposed to be supported by ghostscript no problem.
I got the same error:
RuntimeError: Can not find Ghostscript DLL in registry
My problem was actually that I had Python(3.6) 64bit installed while having Ghostscript 32bit installed. Uninstalling the 32bit Ghostscript and installing 64bit Ghostscript resolved the issue.
You can check your python version by running python.exe and checking the header message.
python
Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 11:27:44) [MSC
v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information. | 1 | 5 | 0 | I've been trying to import ghostscript into Python in order to convert pdf files to a .tiff format.
I am using Python version 2.7.10 on Windows 8.
I have successfully downloaded and installed ghostscript using pip, and it appears in the correct location (...\Anaconda\Lib\sitepackages). I've confirmed that other packages located in this directory can be imported into Python.
I am using the command import ghostscript
When I do so, I get an error message:
RuntimeError: Can not find Ghostscript DLL in registry
The traceback indicates that calling the file "ghoscript_init_.py" successfully imports _gsprint as gs.
However, when the import function attempts to access "ghostscript_gsprint.py", it produces the RuntimeError where it is unable to find the Ghostscript DLL.
I would be very grateful for any advice or tips. Thanks! | Importing Ghostscript in Python on Windows 8 | 0.379949 | 0 | 0 | 13,033 |
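A quick way to check the bitness of the Python interpreter mentioned in the answer above, so it can be matched against the installed Ghostscript build:
import platform
import struct

print(platform.python_version())
print(struct.calcsize("P") * 8, "bit")   # 32 or 64 -- must match the Ghostscript installer you used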
31,886,734 | 2015-08-07T21:13:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql,heroku,django-models | 31,888,017 | 1 | true | 1 | 0 | i suggest you update the data in your local then make a fixture, commit and push it in your heroku. then do load the data using the terminal
update data (locally)
make a fixture (manage.py dumpdata)
commit and push to heroku
login via terminal (heroku login)
load the data (heroku run python manage.py loaddata .json) | 1 | 1 | 0 | I want to update a field in my users table on my django project hosted on heroku.
Is there a way I can run a script (if so, from where?) and using what?
That allows me to update a field in the database? I could do this manually in the Django admin, but it would take way too long as there is a large number of users.
Any advice is appreciated. | How to update my production database in a django/heroku project with a script | 1.2 | 0 | 0 | 673 |
31,887,058 | 2015-08-07T21:42:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,user-interface,kivy | 31,888,122 | 2 | true | 0 | 1 | self.pos and self.size would have sufficed. How silly of me. | 1 | 0 | 0 | I have a 10 x 10 GridLayout with 100 Image widgets, and on_touch_down, I want the picture that was touched to change to a different picture. Since the touch signal will bubble through GridLayout and all its 100 Image children, I want to do a check in on_touch_down to see if the touch coordinates are within the area occupied by the Image that was touched. How can I find the vertices of the Image, or is there an alternative to my approach? Calculating each Image's four vertices as they are added would be rather difficult since I am stretching these Images.
Many thanks in advance. :) | How Can I Find the Area/Vertices Occupied By a Widget in Kivy? | 1.2 | 0 | 0 | 74 |
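For reference, the bounds check from the accepted answer can be written with the widget's own pos/size; Kivy's collide_point does exactly that. A sketch, with a made-up replacement image name:
from kivy.uix.image import Image

class TappableImage(Image):
    def on_touch_down(self, touch):
        # self.pos / self.size describe the rectangle the stretched Image occupies;
        # collide_point(x, y) checks whether the touch falls inside it.
        if self.collide_point(*touch.pos):
            self.source = "touched.png"   # hypothetical replacement picture
            return True                   # stop the touch from reaching the other 99 images
        return super(TappableImage, self).on_touch_down(touch)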
31,888,624 | 2015-08-08T01:08:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,numpy,grid,distance | 31,888,691 | 3 | false | 0 | 0 | The simplest way I know of to calculate the distance between two points on a plane is using the Pythagorean theorem.
That is, picture a right angle triangle where the hypotenuse goes between the two points and the base of the triangle is parallel to the x axis and the height is parallel to the y axis. We then know that the distance (represented by the length of the hypotenuse) h adheres to the following: h^2 = a^2 + b^2, where a and b are the lengths of the two remaining sides of the triangle.
It's hard to give any other help without seeing your code. Have you tried something similar yet? You need to specify your question more if you want more specific answers. | 1 | 2 | 1 | I have a 10 x 10 grid of cells (as a numpy array). I also have a list of 3 points on that grid. For each cell on the grid, I need to find the closest of the three points. I can do this in series of nested loops in python (2.7) which works but is slow (especially if I upscale to larger grids) but I suspect there is a faster way. Does anyone have any suggestions? | Calculating distances on grid | 0 | 0 | 0 | 2,279 |
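Since the question mentions NumPy and speed, here is a vectorized version of the same distance idea; the three point coordinates are made up:
import numpy as np

points = np.array([[1, 2], [7, 3], [4, 8]])      # the 3 reference points (row, col)

rows, cols = np.indices((10, 10))
cells = np.stack([rows, cols], axis=-1)          # (10, 10, 2) grid of cell coordinates

# Distance from every cell to every point -> shape (10, 10, 3), no Python loops.
dists = np.sqrt(((cells[..., None, :] - points) ** 2).sum(axis=-1))
nearest = dists.argmin(axis=-1)                  # index (0-2) of the closest point per cell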
31,889,124 | 2015-08-08T02:47:00.000 | 5 | 0 | 0 | 0 | python,flask,remote-access,flask-restful | 31,889,144 | 1 | true | 1 | 0 | The problem is not from Flask,
The IP specified in app.run(host='0.0.0.0') must be owned by your server.
If you want to launch Flask on remote server, deploy the code on that server using SSH and run it using a remote session. | 1 | 3 | 0 | I'm able to run the flask app on the local system using app.run(). But when I try to run it on remote server using app.run(host='0.0.0.0',port='81') or app.run(host='<remote ip>'),both don't work. I want to know if something else has to be done. | How to run a flask app on a remote server from a local system? | 1.2 | 0 | 0 | 8,067 |
31,892,531 | 2015-08-08T11:17:00.000 | 1 | 1 | 0 | 0 | python,unit-testing | 31,892,566 | 1 | true | 0 | 0 | When unit testing, you test a particular unit (function/method...) in isolation, meaning that you don't care if other components that your function uses, work (since there are other unit test cases that cover those).
So to answer your question - it's out of the scope of your unit tests whether an external service like Google oAuth works. You just need to tests that you make a correct call to it, and here's where Mock comes in handy. It remembers the call for you to inspect and make some assertions about it, but it prevents the request for actually going out to the external service / component / library / whatever.
Edit: If you find your code is too complex and difficult to test, that might be an indication that it should be refactored into smaller more manageable pieces. | 1 | 0 | 0 | I am fairly new to unit testing. And at the moment I have trouble on trying to unit test a Google oAuth Picasa authentication. It involves major changes to the code if I would like to unit tested it (yeah, I develop unit test after the app works).
I have read that Mock Object is probably the way to go. But if I use Mock, how do I know that the functionality (that is Google oAuth Picasa authentication), is really working?
Or, aside that I develop unit testing after the app finished, did I made other mistakes in understanding Mock? | How can mock object replace all system functionality being tested? | 1.2 | 0 | 0 | 32 |
31,892,667 | 2015-08-08T11:35:00.000 | 2 | 0 | 1 | 0 | python,multithreading,multiprocessing | 31,892,771 | 2 | false | 0 | 0 | It depends on your code and the type of the problem you are trying to solve. Python GIL applies to CPU-bound threads, i.e. threads that want to do CPU-intensive tasks*.
However, if your threads are I/O-bound, i.e. they spend most of their time waiting for input/output, the GIL is not a problem, because a thread does not need to hold the lock on the Python interpreter while it is waiting for an I/O operation to complete.
An example would be waiting for a network operation to complete (downloading a file, for example). You can easily fire multiple threads and download the files simultaneously.
*footnote: even for CPU-bound tasks, the GIL only applies to Python code. If a thread uses a C-extension (i.e. library written in C), it won't need the GIL while executing, because it won't be running Python instructions.
Edit: Reading your comment saying that you will be invoking the POS simulator functions via sockets... that's an I/O operation. While the server will be executing function calls, Python (the client) will simply wait doing nothing, and the threads will not need to hold GIL most of the time. | 1 | 0 | 0 | I have a requirement which requires me to run 480 threads and 16 processes( each process will have 30 threads, so it's 480). I have heard that due to GIL, python does not have a good support for multi threading! Is there any way to execute the application with above requirements efficiently in python? Thanks in advance. | Python multithreading,multi processing | 0.197375 | 0 | 0 | 61 |
31,893,477 | 2015-08-08T13:11:00.000 | 0 | 0 | 0 | 1 | python,linux,qt,ubuntu,pyqt | 42,756,312 | 2 | false | 0 | 0 | This is a hacky solution.
Install qt4-qtconfig: sudo apt-get install qt4-qtconfig
Run sudo qtconfig or gksudo qtconfig.
Change GUI Style to GTK+.
Edited. | 1 | 0 | 0 | Ok the title explains it all. But just to clarify.
I have Ubuntu and programmed a GUI app with Qt Designer 4 and PyQt4. The program works fine when running python main.py in a terminal.
Last week I made an update and now the program needs sudo privileges to start. So I type sudo python main.py.
But oh my god, what an ugly interface came up.
And I don't know how to get the really nice normal-mode interface back in my program, and in all of the other programs I'll make. Is there any way to set a variable for Python? Do I need to execute any command line code?
The program is deployed only in Linux machines.
P.S.
I search a lot in the web and couldn't find a working solution. | How to run PyQt4 app with sudo privelages in Ubuntu and keep the normal user style | 0 | 0 | 0 | 1,223 |
31,893,930 | 2015-08-08T14:08:00.000 | -1 | 0 | 0 | 0 | csv,pandas,ipython-notebook | 53,851,540 | 7 | false | 0 | 0 | My simple approach to download all the files from the jupyter notebook would be by simply using this wonderful command
!tar cvfz my_compressed_file_name.tar.gz *
This will download all the files of the server including the notebooks.
In case your server has multiple folders, you might want to use the following command: write ../ before the * for every step up the directory tree.
tar cvfz zipname.tar.gz ../../*
Hope it helps.. | 1 | 33 | 1 | I run an iPython Notebook server, and would like users to be able to download a pandas dataframe as a csv file so that they can use it in their own environment. There's no personal data, so if the solution involves writing the file at the server (which I can do) and then downloading that file, I'd be happy with that. | Download CSV from an iPython Notebook | -0.028564 | 0 | 0 | 50,655 |
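For the single-DataFrame case in the question, another common pattern is to write the CSV on the server and hand back a link; the filename here is arbitrary:
import pandas as pd
from IPython.display import FileLink

df = pd.DataFrame({"a": [1, 2, 3]})   # stand-in for the real dataframe
df.to_csv("results.csv", index=False)
FileLink("results.csv")               # shows a clickable download link in the notebook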
31,896,870 | 2015-08-08T17:51:00.000 | 2 | 0 | 1 | 0 | python,boolean-expression | 31,896,914 | 3 | false | 0 | 0 | Your multiple operators all have the same precedence, so now it is going to work through them serially. 1<2<3 goes to 1<2 which is T, then 2<3 is T. 2<3<1 has two parts, 2<3 is T, but 3<1 is F so the entire expression evaluates to F. | 1 | 2 | 0 | The title says it all. For example 1<2<3 returns True and 2<3<1 returns False.
It's great that it works, but I can't explain why it works... I can't find anything about it in the documentation. It's always: expression boolean_operator expression, not two boolean operators). Also: a<b returns a boolean, and boolean boolean_operator expression does not explain the behaviour.
I'm sure the explanation is (almost) obvious, but I seem to miss it. | Why does `a<b<c` work in Python? | 0.132549 | 0 | 0 | 168 |
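As an aside, the language feature at work here is comparison chaining: a < b < c is evaluated as a < b and b < c, with b evaluated only once. For example:
print(1 < 2 < 3)      # True:  (1 < 2) and (2 < 3)
print(2 < 3 < 1)      # False: (2 < 3) and (3 < 1)
print((1 < 2) < 3)    # True, but differently: (1 < 2) is True, i.e. 1, and 1 < 3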
31,900,892 | 2015-08-09T04:24:00.000 | 0 | 0 | 1 | 0 | python | 32,379,727 | 5 | false | 0 | 0 | ('abc') is equivalent to 'abc'.
'a' in ('abc') is equivalent to 'a' in 'abc'.
'a' in ('abc', ) returns False, just as 'a' in ['abc'] does.
'a' in ['a', 'b', 'c'] returns True as 'a' in 'abc'. | 2 | 72 | 0 | When using the interpreter, the expression 'a' in ('abc') returns True, while 'a' in ['abc'] returns False. Can somebody explain this behaviour? | Why is 'a' in ('abc') True while 'a' in ['abc'] is False? | 0 | 0 | 0 | 4,571 |
31,900,892 | 2015-08-09T04:24:00.000 | 136 | 0 | 1 | 0 | python | 31,900,920 | 5 | false | 0 | 0 | ('abc') is the same as 'abc'. 'abc' contains the substring 'a', hence 'a' in 'abc' == True.
If you want the tuple instead, you need to write ('abc', ).
['abc'] is a list (containing a single element, the string 'abc'). 'a' is not a member of this list, so 'a' in ['abc'] == False | 2 | 72 | 0 | When using the interpreter, the expression 'a' in ('abc') returns True, while 'a' in ['abc'] returns False. Can somebody explain this behaviour? | Why is 'a' in ('abc') True while 'a' in ['abc'] is False? | 1 | 0 | 0 | 4,571 |
31,903,327 | 2015-08-09T10:39:00.000 | 0 | 0 | 1 | 0 | python-2.7,openerp,odoo | 31,917,329 | 2 | true | 0 | 0 | Python will give you functionalities(i.e. "Back end" Not DataBase) and XML will gives you the view(i.e "Front End").
OSV = Object Service. Keeps the definitions of objects and their fields in memory, more or less.
"arch" will give "View Architecture" for XML! | 2 | 1 | 0 | And also explain what is osv.osv and sometimes why we include class name at last line in python code like this student(). Why do we need to do that?
And last what is arch field in xml code.
Thanks in advance | What does python code do and what does xml code do in odoo? | 1.2 | 0 | 1 | 282 |
31,903,327 | 2015-08-09T10:39:00.000 | 2 | 0 | 1 | 0 | python-2.7,openerp,odoo | 31,939,808 | 2 | false | 0 | 0 | If you have experience with MVC, then you can compare odoo python file to a model / controller which holds the business logic, for creating masters etc
and an XML file to a view, which presents the data in the UI.
The osv class lives in the OSV module of the OpenERP server; it contains all the OpenERP properties you see, like _columns, _defaults and many other things.
student() - its like a constructor to invoke the object, but its not needed now in latest versions | 2 | 1 | 0 | And also explain what is osv.osv and sometimes why we include class name at last line in python code like this student(). Why do we need to do that?
And last what is arch field in xml code.
Thanks in advance | What does python code do and what does xml code do in odoo? | 0.197375 | 0 | 1 | 282 |
31,903,574 | 2015-08-09T11:10:00.000 | 0 | 0 | 0 | 1 | python,twisted | 32,285,162 | 2 | false | 0 | 0 | The only way to support a cross-platform unexpected disconnection (unplug) is to implement a application-level ping message to ping clients in a specific interval. | 1 | 2 | 0 | I wrote a TCP server using Python Twisted to send/receive binary data from clients.
When a client close their application or calls the abortConnection method, I get the connectionLost event normally but when the client disconnects unexpectedly, I don't get the disconnect event, therefore, I can't remove the disconnected client from the queue.
By unexpected disconnect I mean disabling the network adapter or lost the network connection somehow.
My question is, how can I handle this sort of unexpected connection losts? | Twisted unexpected connection lost | 0 | 0 | 0 | 860 |
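A rough sketch of such an application-level keepalive using Twisted's LoopingCall; the interval and the PING message format are placeholders for whatever your protocol uses:
from twisted.internet.task import LoopingCall

class KeepAliveMixin(object):
    PING_INTERVAL = 30  # seconds, arbitrary choice for the example

    def startKeepAlive(self):
        self._pinger = LoopingCall(self._sendPing)
        self._pinger.start(self.PING_INTERVAL, now=False)

    def _sendPing(self):
        # transport.write is the usual Twisted way to push bytes to the peer;
        # a client that never answers within some timeout can then be dropped.
        self.transport.write(b"PING\n")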
31,904,761 | 2015-08-09T13:32:00.000 | 29 | 0 | 0 | 0 | python,web,tcp,flask,server | 31,904,923 | 1 | false | 1 | 0 | To answer to your second question. You can just hit the IP address of the machine that your flask app is running, e.g. 192.168.1.100 in a browser on different machine on the same network and you are there. Though, you will not be able to access it if you are on a different network. Firewalls or VLans can cause you problems with reaching your application.
If that computer has a public IP, then you can hit that IP from anywhere on the planet and you will be able to reach the app. Usually this might impose some configuration, since most of the public servers are behind some sort of router or firewall. | 1 | 53 | 0 | I am reading the Flask documentation. I was told that with app.run(host='0.0.0.0'), I could make the server publicly available.
What does it mean ? How can I visit the server in another computer (just localhost:5000 in my own computer) ? | What does "app.run(host='0.0.0.0') " mean in Flask | 1 | 0 | 0 | 132,602 |
31,906,949 | 2015-08-09T17:32:00.000 | 2 | 0 | 1 | 0 | qpython | 41,871,252 | 4 | false | 0 | 0 | go to settings->input method select word-based | 3 | 2 | 0 | Very basic question. Im trying to use qpython. I can type things in the console but no obvious way to enter a return (or enter) | in qpython, how do I enter a "return" character | 0.099668 | 0 | 0 | 3,088 |
31,906,949 | 2015-08-09T17:32:00.000 | 0 | 0 | 1 | 0 | qpython | 32,237,684 | 4 | false | 0 | 0 | The console works just like a normally python console. You can use a function if you want to write a script in the console. | 3 | 2 | 0 | Very basic question. Im trying to use qpython. I can type things in the console but no obvious way to enter a return (or enter) | in qpython, how do I enter a "return" character | 0 | 0 | 0 | 3,088 |
31,906,949 | 2015-08-09T17:32:00.000 | 0 | 0 | 1 | 0 | qpython | 33,434,430 | 4 | false | 0 | 0 | There is no way of doing it.
The console will automatically input a break line when the line of code ends so you can continue inputting in the screen without any scroll bars.
For complex code, you should use the editor. | 3 | 2 | 0 | Very basic question. Im trying to use qpython. I can type things in the console but no obvious way to enter a return (or enter) | in qpython, how do I enter a "return" character | 0 | 0 | 0 | 3,088 |
31,907,080 | 2015-08-09T17:43:00.000 | 0 | 0 | 0 | 0 | python,algorithm,data-structures,queue | 31,908,093 | 1 | false | 0 | 0 | You can do something like this:
Start timer 60 minutes
Get the pages that people visit
Save pages
If the timer has not ended, do steps 2-3 again; once the timer has ended:
Count which one is the most visited
Count which one is the second most visited
Etc | 1 | 0 | 0 | If there a data structures likes container/queue, based on time , I could use it this way: add item(may duplicate) into it one by one, pop out those added time ealier then 60 minutes; count the queue; then I got top 10 most added items, in a dymatice period, said, 60min.
How to implement this time based container ? | python, data structures, algorithm: how to rank top 10 most visited pages in latest 60 minutes? | 0 | 0 | 1 | 347 |
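A compact sketch of the sliding-window container the question describes, using a deque plus a Counter:
import time
from collections import Counter, deque

WINDOW = 60 * 60          # 60 minutes, in seconds
hits = deque()            # (timestamp, page) pairs, oldest on the left

def record_hit(page):
    hits.append((time.time(), page))

def top10():
    cutoff = time.time() - WINDOW
    while hits and hits[0][0] < cutoff:   # evict anything older than the window
        hits.popleft()
    return Counter(page for _, page in hits).most_common(10)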
31,908,956 | 2015-08-09T21:21:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 31,909,251 | 3 | false | 0 | 0 | np.where is the answer. I spend time messing with np.place without knowing its existence. | 1 | 4 | 1 | Say I have a large array of value 0~255. I wanted every element in this array that is higher than 100 got multiplied by 1.2, otherwise, got multiplied by 0.8.
It sounded simple but I could not find anyway other than iterate through all the variable and multiply it one by one. | Numpy conditional multiply data in array (if true multiply A, false multiply B) | 0.066568 | 0 | 0 | 6,219 |
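To make the np.where suggestion concrete (sample values invented):
import numpy as np

a = np.array([50, 120, 200, 80], dtype=float)
result = np.where(a > 100, a * 1.2, a * 0.8)   # elementwise: >100 gets *1.2, the rest *0.8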
31,910,812 | 2015-08-10T02:22:00.000 | 0 | 0 | 0 | 1 | python,uwsgi,gunicorn,apscheduler | 31,929,832 | 1 | false | 1 | 0 | I'm not aware of any way to do this with either, at least not without some sort of RPC. That is, run APScheduler in a separate process and then connect to it from each worker. You may want to look up projects like RPyC and Execnet to do that. | 1 | 7 | 0 | The title basically says it all. I have gunicorn running my app with 5 workers. I have a data structure that all the workers need access to that is being updated on a schedule by apscheduler. Currently apscheduler is being run once per worker, but I just want it run once period. Is there a way to do this? I've tried using the --preload option, which let's me load the shared data structure just once, but doesn't seem to let all the workers have access to it when it updates. I'm open to switching to uWSGI if that helps. | Running ApScheduler in Gunicorn Without Duplicating Per Worker | 0 | 0 | 0 | 1,182 |
31,914,900 | 2015-08-10T08:26:00.000 | 0 | 0 | 1 | 0 | python,garbage-collection | 35,272,896 | 1 | false | 0 | 0 | Unless you are overriding the __del__ methods, you should not worry about circular dependencies, as Python is able to properly cope with them. | 1 | 8 | 0 | I have some python code where gc.collect() seems to free a lot of memory. Given Python's reference counting nature, I am inclined to think that my program contains a lot of cyclical references. Since some data structures are rather big, I would like to introduce weak references. Now I need to find the circular references, having found a few of the obvious ones, I wonder if one can detect circular references and the objects that form the ring explicitly. So far I have only seen tutorials on how to call gc.collect et. al. | How to find out which specific circular references are present in code | 0 | 0 | 0 | 909 |
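As an aside on actually locating the cycles, the gc module can be asked to keep what it collects so you can inspect it (a small sketch):
import gc

gc.set_debug(gc.DEBUG_SAVEALL)   # collected objects are kept in gc.garbage instead of freed
unreachable = gc.collect()
print(unreachable, "objects were found in reference cycles")
for obj in gc.garbage[:10]:      # peek at a few of them to see where they come from
    print(type(obj), repr(obj)[:80])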
31,915,120 | 2015-08-10T08:39:00.000 | 2 | 0 | 1 | 0 | python-2.7 | 31,915,492 | 2 | false | 0 | 0 | Use pickle.dump in Python 3.x, or cPickle.dump in Python 2.x. | 1 | 2 | 1 | I just new in python. How to save variable data to file like save command in MATLAB.
Thank you | Python save variable to file like save() in MATLAB | 0.197375 | 0 | 0 | 1,761 |
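To flesh out the pickle suggestion above (Python 3 syntax; on Python 2, cPickle is the faster drop-in):
import pickle

data = {"x": [1, 2, 3], "name": "experiment 1"}   # any picklable variables

with open("workspace.pkl", "wb") as f:            # roughly MATLAB's save
    pickle.dump(data, f)

with open("workspace.pkl", "rb") as f:            # roughly MATLAB's load
    restored = pickle.load(f)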
31,923,606 | 2015-08-10T15:32:00.000 | 1 | 0 | 1 | 0 | python-2.7,windows-10,spyder | 32,023,483 | 1 | false | 0 | 0 | First, one correction: the problem was with starting Spyder, not running .py or .pyw files. Anyway, things work all right now after de-installing Spyder and Python, and reinstalling the Python(x,y) package (instead of Anaconda's). Then, when starting Spyder from the Python(x,y)start window, it behaves normally. | 1 | 0 | 0 | Can't open Spyder2 in Windows 10.0 (# 10240): the icon just appears briefly. Python 2.7.10 and Spyder 2.3.1 were loaded with Anaconda 2.3.0 (64-bit). The python console works fine - but I can't get my *.py or *.pyw files running. There is probably some message in the Python console when attemtping to open Spyder, but I don't know how to capture it. | Can't run Spyder or .py(w) scripts with Windows 10 | 0.197375 | 0 | 0 | 1,096 |
31,924,923 | 2015-08-10T16:43:00.000 | 0 | 0 | 1 | 0 | python,class,oop,default | 31,925,266 | 1 | true | 0 | 0 | Here are a few reasons you might want to use None as a default argument
It is actually the default value you want
This is especially applicable to SqlAlchemy because you may have a database column with default value of NULL
You want a mutable default value
Using a mutable default value such as a list can cause unpredictable behavior. You can provide a default value of None in order to change the scope of the mutable argument and avoid these issues.
You want to force the use of keyword arguments
In python 2, the only way to force all arguments to be keywords is to define them as keyword arguments, with no positional arguments. | 1 | 0 | 0 | I see for defining a class in sqlalchemy, the popular pattern is to pass the variables of the User like name, password, email, etc. with default value None to the construct function.
But in other classes (like the posts of a blog) they do not define default values.
So, to be clear:
why defining default values and why None ? | Why pass python construct variables with default None? | 1.2 | 0 | 0 | 76 |
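A quick illustration of the mutable-default pitfall the accepted answer mentions, and the None-sentinel fix:
def bad(item, bucket=[]):        # one shared list for every call
    bucket.append(item)
    return bucket

def good(item, bucket=None):     # None as a sentinel gives each call a fresh list
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(bad(1), bad(2))    # [1, 2] [1, 2]  -- surprising shared state
print(good(1), good(2))  # [1] [2]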
31,926,126 | 2015-08-10T17:58:00.000 | 1 | 0 | 1 | 0 | python,arrays,pygame,blit | 31,948,684 | 2 | false | 0 | 1 | use pygame.sprite.Group or multithread the blit method. In pygame 1.9.2alpha, it releases the python gil and allows multi-cpu rendering.
Also look up for pygame dirty rendering. Depending on what you want to draw, this can give you significant speed increase. | 1 | 0 | 0 | I was looking for a faster way to blit multiple objects in pygame than the traditional blitting method, in which you have an array or list, and you use a for loop to go and insert each image at its position.
Maybe there is a way of blitting the whole array at once, without having to go value by value throughout the whole array?
Thanks for the ideas and help! | Using arrays to blit multiple objects in pygame? | 0.099668 | 0 | 0 | 662 |
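A minimal sketch of the pygame.sprite.Group suggestion from the answer above: wrap each image in a Sprite once, then a single draw() call blits the whole set (sizes and colours are invented):
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

class Tile(pygame.sprite.Sprite):
    def __init__(self, image, pos):
        super(Tile, self).__init__()
        self.image = image
        self.rect = image.get_rect(topleft=pos)

tile_img = pygame.Surface((32, 32))
tile_img.fill((200, 50, 50))

group = pygame.sprite.Group()
for x in range(10):
    group.add(Tile(tile_img, (x * 32, 0)))

group.draw(screen)          # one call blits every sprite in the group
pygame.display.flip()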