Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23,551,808 | 2014-05-08T20:24:00.000 | 4 | 0 | 0 | 1 | python,django,architecture,celery | 23,552,055 | 2 | false | 1 | 0 | What will strongly simplify your processing is some shared storage, accessible from all cooperating servers. With such a design, you can distribute the work among more servers without worrying about which server will perform the next processing step.
Using AWS S3 (or similar) cloud storage
If you can use some cloud storage, like AWS S3, use that.
If your servers also run on AWS, you do not pay for traffic within the same region, and transfers are quite fast.
The main advantage is that your data are available from all the servers under the same bucket/key name, so you do not have to track who is processing which file, as all servers share the storage on S3.
Note: if you need to get rid of old files, you can even set up a lifecycle policy on the bucket, e.g. to delete files older than 1 day or 1 week.
Using other types of shared storage
There are more options:
Samba
central file server
FTP
Google storage (very similar to AWS S3)
Swift (from OpenStack)
etc.
For small files you could even use Redis, but such solutions are, for good reasons, rather rare. | 2 | 1 | 0 | Currently we have everything set up on a single cloud server, which includes:
Database server
Apache
Celery
redis to serve as a broker for celery and for some other tasks
etc
Now we are thinking of breaking the main components apart onto separate servers, e.g. a separate database server, separate storage for media files, and web servers behind load balancers. The reason is to avoid paying for one heavy server and to use load balancers to create servers on demand, reducing cost and improving overall speed.
I am really confused about Celery only: has anyone ever used Celery on multiple production servers behind load balancers? Any guidance would be appreciated.
Consider one small use case and how it is currently done on a single server (the confusion is how this can be done when we use multiple servers):
User uploads an abc.pptx file -> a reference is stored in the database -> the file is stored on the server disk
A task (convert document to PDF) is created and goes into the Redis (broker) queue
Celery, which is running on the same server, picks the task from the queue
It reads the file and converts it to PDF using software called docsplit
It creates a folder on the server disk (which will be served as static content later on) and puts the PDF file, its thumbnail, its plain text, and the original file there
Considering the above use case, how can you set up multiple web servers which can perform the same functionality? | django-celery infrastructure over multiple servers, broker is redis | 0.379949 | 0 | 0 | 2,804 |
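As an editor's illustration of the shared-storage pattern described in the answer above (not part of the original post): a minimal sketch using boto3, where the bucket name and key scheme are assumptions.

```python
import boto3

BUCKET = "my-shared-uploads"  # illustrative name, not from the original answer

s3 = boto3.client("s3")

def store_upload(local_path, file_id):
    # The web server uploads the original file under a key derived from the
    # database id, so any Celery worker on any machine can find it by id alone.
    key = "uploads/%s/original" % file_id
    s3.upload_file(local_path, BUCKET, key)
    return key

def fetch_for_processing(file_id, local_path):
    # A worker downloads the file by id, converts it, and writes the PDF,
    # thumbnail and plain text back under the same prefix.
    s3.download_file(BUCKET, "uploads/%s/original" % file_id, local_path)
```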
23,552,613 | 2014-05-08T21:11:00.000 | 3 | 1 | 1 | 0 | python | 23,552,656 | 1 | true | 0 | 0 | Yes, all values in Python are objects, including integers, floats, functions, classes, and None. I've never heard it described as a "pure" object-oriented language, but it seems to meet your description of one. | 1 | 0 | 0 | I use Ruby on a daily basis and know it is a purely object-oriented language. As far as I know, the distinguishing characteristic of purely object-oriented languages is that all values are objects, even ints, floats, chars, etc. that would be primitive types in other languages like Java.
Is Python the same way? I always knew Python as a general purpose object oriented/functional/procedural language that is also good for scripting, but I never thought that it could be purely OO.
Anyone have any explanations? | Is Python a pure object-oriented language | 1.2 | 0 | 0 | 5,949 |
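A quick interpreter session illustrating the claim above that all values in Python are objects (output comments assume Python 3):

```python
print(isinstance(1, object))      # True: ints are objects
print(isinstance(None, object))   # True: even None is an object
print((42).bit_length())          # 6: integer literals have methods
print(type(len))                  # <class 'builtin_function_or_method'>
```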
23,556,119 | 2014-05-09T03:26:00.000 | 0 | 0 | 1 | 0 | python,multithreading | 27,574,636 | 2 | false | 0 | 0 | Looking back at this, I feel like an absolute idiot.
The proper way to handle this would be calling a function that accepts input and checks it against different expected strings or numerical values.
I just thought I'd answer this in a little more clear way than Stu did. | 1 | 0 | 0 | I'm writing a text based RPG, and I am using Python 3.3.4. It will be played through the Python command line, with no graphics. I'm wanting to make it so that no matter what options the user is presented with, they have the capability of typing "exit" or "help" and exiting (or getting help information) respectively at any time during the game. So if they're in the middle of fighting some monster and the only options presented to them directly would be to attack, defend, or flee, they're still able to exit or get help. Same for once they leave that function and they're just moving around the map, or talking to NPCs. From what I've found, starting a thread that waits for "exit" to be entered is the best way to do that. If I'm wrong, please let me know! If not, please explain (or show me a guide) how I'm supposed to go about this because none of my ideas have worked.
I've attempted using the threading module, and _thread. I may not have implemented them correctly, and I have no code to show for my attempts as I just trashed it when it didn't work.
Thank you in advance. | Python Multithreading to allow user to exit at anytime | 0 | 0 | 0 | 837 |
23,556,597 | 2014-05-09T04:25:00.000 | 1 | 0 | 0 | 0 | python,django,google-app-engine,google-drive-api | 23,559,388 | 1 | false | 1 | 0 | You can insert/upload files using the Drive API and set the "restricted" label to prevent downloading of the file. You would then set the appropriate permissions to this file to allow anyone or a specified set of users to access the file.
Download restrictions may or may not apply for files that are converted to one of the Google Apps formats because the option to prevent downloading seems unavailable for these files through the Google Drive UI. You would have to test this yourself. | 1 | 1 | 0 | I'm working on an app in django that allows users to upload documents to google drive and share them with friends. The problem is I want to restrict the shared documents to view only (no download option). How can I go about doing this? | Uploading and serving media files from google drive on django | 0.197375 | 0 | 0 | 909 |
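As a rough, unverified sketch of the answer above: with the Drive API v2 and google-api-python-client, the "restricted" label is set in the file metadata at insert time. The field names reflect the v2 API as the editor understands it and should be checked against the current documentation.

```python
from googleapiclient.http import MediaFileUpload

def upload_view_only(service, path):
    # `service` is assumed to be an authorized Drive v2 service object.
    body = {
        "title": "report.pdf",
        # In Drive v2, labels.restricted is meant to prevent viewers
        # from downloading or copying the file (assumption: verify).
        "labels": {"restricted": True},
    }
    media = MediaFileUpload(path, mimetype="application/pdf")
    return service.files().insert(body=body, media_body=media).execute()
```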
23,556,752 | 2014-05-09T04:41:00.000 | 2 | 0 | 0 | 0 | python,regex,django | 23,557,031 | 2 | true | 1 | 0 | Part after the ? not used in the url dispatching process because it contains GET parameters. Otherwise you will not be able to use a GET request with parameters on patterns like r'^foo/&' (ampersand at the end means the end of the string, and without it it would be harder to use patterns like r'^foo/' and r'^foo/bar/' at the same time).
So for your url the pattern should look like r'^databaseadd/q=DQ-TDOXRTA&' and then in add_recipe view you need to check for the recipe GET parameter. | 1 | 0 | 0 | I have the following URL:
http://127.0.0.1:8000/databaseadd/q=DQ-TDOXRTA?recipe=&recipe=&recipe=&recipe=&recipe=
I'd like to match if the keyword "recipe=" is found in the url.
I have the following line in django url.py but it's not working:
url(r'recipe=', 'plots.views.add_recipe'),
Are the "&" and the "?" throwing it off?
Thanks!
Alex | Django: Search URL for keyword RegExp | 1.2 | 0 | 0 | 157 |
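A sketch of what the answer describes: match only the path in urls.py and read the repeated recipe parameter from request.GET in the view (Django 1.x style, matching the question's era; the response body is a placeholder).

```python
# urls.py
# url(r'^databaseadd/q=DQ-TDOXRTA$', 'plots.views.add_recipe'),

# plots/views.py
from django.http import HttpResponse

def add_recipe(request):
    # getlist() returns every repeated ?recipe=... value from the query string.
    recipes = request.GET.getlist('recipe')
    return HttpResponse("Got %d recipe parameters" % len(recipes))
```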
23,567,368 | 2014-05-09T14:38:00.000 | 1 | 0 | 0 | 1 | python,websocket,tornado,uwsgi | 23,577,862 | 1 | true | 1 | 0 | The uWSGI Tornado loop engine is no more than a proof of concept. You could try to use it, but native uWSGI websockets support, or having nginx route requests to both uWSGI and Tornado, are certainly better (and more solid) choices. | 1 | 0 | 0 | I'm working on a little chat project and I use Tornado websockets for the communication between the web browser and the server. Everything works fine there, but I also use Tornado's integrated web framework and I want to configure my app to run on an nginx + uWSGI web server. I read that to integrate Tornado and uWSGI I have to run the Tornado application in WSGI mode, but that way the asynchronous methods are not supported. So I ask: what is the best way to integrate a Tornado websocket with uWSGI? Or should I run the Tornado websocket separately and configure it in nginx apart from the rest of my app? | How to integrate a websocket between tornado and uwsgi? | 1.2 | 0 | 0 | 728 |
23,567,726 | 2014-05-09T14:53:00.000 | 2 | 0 | 0 | 0 | python,qt,pyqt,pyqtgraph | 23,572,444 | 1 | false | 0 | 1 | It's likely you have items in the scene that accept their own mouse input, but it's difficult to say without seeing code. In particular, be wary of complex plot lines that are made clickable--it is very expensive to compute the intersection of the mouse cursor with such complex shapes.
The best (some would say only) way to solve performance issues is to profile your application: run python -m cProfile -s cumulative your_script.py once without moving the mouse, and again with mouse movement (be sure to spend plenty of time moving the mouse), and then compare the outputs to see where the interpreter is spending all of its time. | 1 | 0 | 0 | I have a multi-threaded (via pyqt) application which plots realtime data (data is processed in the second thread and passed to the gui thread to plot via a pyqt-signal). If I place the mouse over the application it continues to run at full speed (as measured by the time difference between calls to app.processEvents()). As soon as I begin moving the mouse, the update rate slows to a crawl, increasing again when I stop moving the mouse.
Does anyone know how I can resolve this/debug the issue?
The code is quite lengthy and complex so I'd rather not post it here. Thanks! | PYQTGraph application slows down when mouse moves over application | 0.379949 | 0 | 0 | 742 |
23,568,682 | 2014-05-09T15:39:00.000 | 0 | 0 | 1 | 1 | python,multithreading,events,twisted | 23,571,618 | 3 | false | 0 | 0 | The connection scheme is also important in the choice. How many concurrent connections do you expect ? How long will a client stay connected ?
If each connection is tied to a thread, many concurrent connections or very long lasting connections ( as with websockets ) will choke the system. For these scenarios an event loop based solution will be better.
When connections are short and the heavy processing happens after disconnection, the two models are roughly even. | 1 | 4 | 0 | I am writing a simple daemon to receive data from N mobile devices. Each device will poll the server and send the data it needs as simple JSON. In generic terms, the server will receive the data and then "do stuff" with it.
I know this topic has been beaten a bunch of times but I am having a hard time understanding the pros and cons.
Would threads or events (think Twisted in Python) work better for this situation as far as concurrency and scalability is concerned? The event model seems to make more sense but I wanted to poll you guys. Data comes in -> Process data -> wait for more data. What if the "do stuff" was something very computationally intensive? What if the "do stuff" was very IO intensive (such as inserting into a database). Would this block the event loop? What are the pros and drawbacks of each approach? | Thread vs Event Loop - network programming (language agnostic) | 0 | 0 | 0 | 1,730 |
23,571,451 | 2014-05-09T18:16:00.000 | 1 | 0 | 0 | 0 | python,google-app-engine,reportlab | 23,571,695 | 1 | true | 1 | 0 | If you want to access files from your application code that are covered by a static-file route in your app config, you need to set application_readable to true. Or, you can move/copy the file somewhere else in your project. | 1 | 0 | 0 | I have an app running on GAE, using reportlab to email generated PDF's.
When I run my reportlab app on localhost everything works perfectly. But when I run it after deploying, it throws out an error.
Error
IOError: Cannot open resource
"/base/data/home/apps/myapp/1.375717494064852868/static/img/__.jpg"
Line
img = [
[Image(os.path.join(os.path.dirname(os.path.abspath(__file__)),
'static/img/__.jpg'))]
]
app.yaml
- url: /static/img
static_dir: static/img | Reportlab error after deploying | 1.2 | 0 | 0 | 127 |
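Following the answer above, the handler would look something like this; only the application_readable line is new relative to the question's snippet:

```yaml
- url: /static/img
  static_dir: static/img
  application_readable: true
```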
23,572,471 | 2014-05-09T19:27:00.000 | 1 | 0 | 1 | 1 | python,nltk,python-idle | 23,572,642 | 2 | false | 0 | 0 | Supplementing the answer above, when you install python packages they will install under the default version of python you are using. Since the module imports in python 2.7.6 make sure that you aren't using the Python 3 version of IDLE. | 1 | 1 | 0 | Programming noob here. I'm on Mac OS 10.5.8. I have Python 2.7.6 and have installed NLTK. If I run Python from Terminal, I can "import nltk" with no problem. But if I open IDLE (either from Terminal or by double-clicking on the application) and try the same thing there, I get an error message, "ImportError: No module named nltk". I assume this is a path problem, but what exactly should I do?
The directory where I installed NLTK is "My Documents/Python/nltk-2.0.4". But within this there are various other directories called build, dist, etc. Which of these is the exact directory that IDLE needs to be able to find? And how do I add that directory to IDLE's path? | How do I get IDLE to find NLTK? | 0.099668 | 0 | 0 | 2,634 |
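A quick way to diagnose this: run the following in both the Terminal interpreter and inside IDLE and compare the output. If the executables or paths differ, IDLE is running a different Python than the one NLTK was installed into.

```python
import sys
print(sys.executable)   # which interpreter binary is running
print(sys.version)      # which Python version it is
print(sys.path)         # where it looks for modules such as nltk
```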
23,574,720 | 2014-05-09T22:11:00.000 | 3 | 0 | 1 | 0 | python,numpy,fortran,fortran90,f2py | 24,146,620 | 2 | false | 0 | 0 | I'm doing something depressingly similar. Instead of dynamic memory allocation via C we have a single global array with integer indices (also at global scope), but otherwise it's much the same. Weird, inconsistent input file and all.
I'd advise against trying to rewrite the majority of the program, whether in Python or anything else. It's time-consuming, unpleasant and largely unnecessary. As an alternative, get the F77 code base to the point where it compiles cleanly enough that you're willing to trust it, then write an interface routine.
I now have a big, ugly F77 code base which sits behind an interface. The program requires input as a text file so a large part of the interface's job is to produce that text file. Beyond that, the legacy code is reduced to a single gateway routine which takes a few arguments (including a means of identifying the text file) and returns the answer. If you use the iso_c_binding of Fortran 2003 you can expose the interface in a format C understands, at which point you can link it to whatever you wish.
As far as the modern code (mostly optimisation routines) is concerned, the legacy code base is the single subroutine behind the C interface. This is much nicer than trying to modify the old code further and probably a valid strategy for your case as well. | 1 | 6 | 0 | I am supposed to be doing research with this huge Fortran 77 program (which I recently ported to Fortran 90 superficially). It is a very old piece of software used for modeling using finite element methods.
It is a monstrosity. It is roughly 240,000 lines.
Since it began its life in Fortran 77, it uses some really dirty hacks for dynamic memory allocation; basically it uses the functions from the C standard library, mixed programming with C and Fortran. I am yet to fully grasp how allocation works. The program is built to be easily extendable by the user, and the user generally needs to allocate some globally accessible arrays for later use. This is done by having an array of memory addresses, which point to the beginning addresses of dynamically allocable arrays. Of course, which element of the address array pointing to which information all depends on conventions which has to be learned by the user, before one can start to really program. There are two address arrays, one for integers, and the other for floating points.
By dirty hacks, I mean inconsistent ones. For example an update in the optimization algorithm of the GNU compilers caused the program to exit with random memory leaks.
The program is far from elegant. Global variable names are generally short (3-4 characters) and cryptic. Passing data across routines is of course accomplished by using common blocks, which include all program switches, and the aforementioned arrays.
The usage of the program is roughly like that of an interactive shell, albeit a stupid one. First, an input file is read by the program itself, then per choice, the user is dropped into a pseudo-shell, in which the user has to type 4 character wide commands, followed by the parameters. The parser then parses the command, and corresponding subroutine is called with the parameters. You would guess that there is a loop structure in this pseudo-parser (a goto bonanza, rather) which wraps the subroutine behavior in a manner more complex than it should be in the 21st century.
The format of the input file is the same (commands, then parameters), since it is the same parser. But the syntax is not really consistent (by that, I mean it lacks control structures, and some commands cause the finite state machine to do behavior that contradict with other commands; it lacks definite grammar), time to time causing the end user to discover pitfalls. The user must learn these pitfalls by experience; I did not see them in any documentation of the program. This is a problem that can easily be avoided with python, and it is not even necessary to implement a parser.
What I want to do:
Port parts of the program into python, namely the parts that don't have anything to do with numerical computation. This includes
cleaning up and abstracting the API with an OOP approach in python,
giving meaningful variable names,
migrating dynamic allocation to either numpy or Fortran 90 and losing the C part,
migrating non-numerical execution to Python, and wrapping the numerical objects using f2py, so there is no loss in performance. Have I mentioned that the program is damn fast in its current state? Hopefully porting the calls to numerical subroutines and I/O to Python will not slow it down to an impractical level (or will it?).
Making use of python's interactive shell as a replacement for the pseudo-shell. This way, there will not be any inconsistencies for the end user. The aforementioned commands will be simply replaced by functions defined in python. This will allow the user to actually access the data. Plus, the user will be able to extend the program without going to deep.
What I wonder:
Is f2py suitable and up to this task of wrapping numerous subroutines and common blocks without any confusion? I have only seen single-file examples on the net for f2py; I know that numpy has used it to wrap LAPACK and similar libraries, but I need reassurance that f2py is a tool consistent enough for this task. (A minimal f2py example is sketched after this question.)
Whether there are any suggestions on the general strategy that I should follow, or pitfalls I should avoid.
How can & should I implement a system in this python-wrapped Fortran 90 environment, so that I will be able to modify (allocate and assign) globally accessible arrays and variables inside fortran routines. This should preferably omit address arrays and I should preferably be able to inject verbal representations into the namespaces. These variables should preferably be accessible inside both python and fortran.
Notes:
I may have been asking for too much, something beyond the boundaries of the possible realm. In this case, please forgive me for I am a beginner with this aspect of programming; and don't hesitate to correct me.
The "program" I have been talking about is open source but it is commercial and the license does not allow its distribution, so I decided not to mention its name. However, you could deduce it from the 2nd sentence and the description I gave throughout. | Porting an old fortran program to work with python+numpy | 0.291313 | 0 | 0 | 1,394 |
23,576,603 | 2014-05-10T02:49:00.000 | 0 | 0 | 1 | 0 | python,tuples | 23,576,631 | 2 | false | 0 | 0 | Yes, collections.namedtuple does this, although you'll access the color with x.color rather than x['color'] | 1 | 0 | 0 | Is it possible to create a tuple in Python with indices that are string-based, not number-based? That is, if I have a tuple of ("yellow", "fish", "10"), I would be able to access by [color] instead of [0]. This is really only for the sake of convenience.
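For illustration, the namedtuple approach from the answer above:

```python
from collections import namedtuple

Fish = namedtuple('Fish', ['color', 'kind', 'count'])
x = Fish('yellow', 'fish', '10')
print(x.color)   # 'yellow': attribute access instead of x[0]
print(x[0])      # positional indexing still works too
```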
23,577,208 | 2014-05-10T04:49:00.000 | 0 | 0 | 1 | 0 | javascript,ipython,ipython-notebook | 61,441,244 | 3 | false | 0 | 0 | as now in 5.7.4 version shortcut to create cell above is esc + a | 1 | 1 | 0 | I upgraded to IPython 2.0 today.
A lot of changes seem good, but the button to insert a new cell above/below seems to be gone.
The option is still in the menu, and I believe the keyboard shortcut works.
But the button is gone.
I'm sure there is a way to turn it back on, but the documentation for the new version doesn't seem super complete.
Possibly, to turn it back on, I need to adjust something in the config, which is just a Python script.
Maybe even tell it to insert a new element, and bind some javascript to it. | IPython 2.0 Notebook: where has the "Add Cell Above" button gone, and how do I get it back? | 0 | 0 | 0 | 826 |
23,579,631 | 2014-05-10T09:57:00.000 | 0 | 0 | 1 | 0 | python,turtle-graphics | 51,011,317 | 2 | false | 0 | 1 | I would keep all the x and y coordinates used for the maze in an array, as stated above, and then define a detection box of a given size around the turtle. | 1 | 3 | 0 | I'm preparing exercises for school classes involving Python's turtle library.
The students are already drawing terrific pictures, but I want them to be able to detect existing pictures and colours in order to modify the behaviour of their program.
For example I would like to provide them with code which draws a maze using turtle, and then they can write the code to navigate the turtle around the maze (don't worry, I'll start simpler).
Is there a way to detect the colour of the pixels already drawn by the turtle?
Thanks! | How to read pixel colours using Python turtle.py | 0 | 0 | 0 | 3,154 |
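One simple option, sketched here by the editor rather than taken from an answer: instead of reading real pixels, record everything the maze-drawing code draws and query that record. The grid representation is an assumption, not part of turtle itself.

```python
import turtle

drawn = {}  # (x, y) grid cell -> colour painted there

def paint_cell(t, x, y, colour):
    # Draw a maze cell and remember its colour for later lookups.
    t.penup(); t.goto(x, y); t.pendown()
    t.dot(10, colour)
    drawn[(round(x), round(y))] = colour

def colour_at(x, y):
    # Students query this instead of reading screen pixels.
    return drawn.get((round(x), round(y)))

t = turtle.Turtle()
paint_cell(t, 0, 0, "red")
print(colour_at(0, 0))  # 'red'
```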
23,583,017 | 2014-05-10T15:40:00.000 | 0 | 0 | 1 | 0 | python,debugging,exception,web2py,pycharm | 23,634,516 | 1 | true | 1 | 0 | I figured this out by looking through the web2py source code. Apparently, web2py is set up to do what I want to do for a particular debugger, seemingly called Wing Db. There's a constant env var ref in the code named WINGDB_ACTIVE that, if set, redirects exceptions to an external debugger.
All I had to do was define this env var, WINGDB_ACTIVE, as 1 in my PyCharm execution configuration, and voila, exceptions are now passed through to my debugger!
One thing I'd like to do, however, is avoid the web2py ticketing system and just allow exceptions to be caught in the normal way in PyCharm. Currently, any attempt to catch exceptions, even via an "All Exceptions" breakpoint, never results in anything getting caught by Pycharm.
Can someone tell me if this is possible, and if so, how to do it? | Can web2py be made to allow exceptions to be caught in my PyCharm debugger? | 1.2 | 0 | 0 | 95 |
23,586,090 | 2014-05-10T20:41:00.000 | 1 | 0 | 0 | 0 | python,angularjs,single-page-application,custom-headers | 23,587,708 | 1 | true | 1 | 0 | Store the token in sessionStorage or localStorage. In your application startup (config or run) look for this information and set your header.
Perhaps if your user selects "remember me" when they log in, save the token in local storage; otherwise keep it in session storage. | 1 | 0 | 0 | I am building a single-page web application (AngularJS + Python) where a user first needs to log in with a username and password. After the user is authenticated, a new custom header with a token is created and sent every time the application makes calls to the Python API.
One thing I noticed though, is that if I refresh the page (with F5 or Ctrl+F5) then the browser loses this custom header, so it is not sent anymore to the api.
Is there a way to keep the custom headers even after a refresh of the page? | Single page application losing custom headers after a refresh | 1.2 | 0 | 0 | 597 |
23,587,542 | 2014-05-10T23:54:00.000 | 0 | 1 | 1 | 0 | python,configuration-files | 23,587,747 | 5 | false | 0 | 0 | I've done this frequently in company-internal tools and games. The primary reason is simplicity: you just import the file and don't need to care about formats or parsers. Usually it has been exactly what @zmo said: constants meant for non-programmers on the team to modify (say, the size of the grid of a game level, or the display resolution).
Sometimes it has been useful to be able to have logic in the configuration, for example alternative functions that populate the initial configuration of the board in the game. I've found this a great advantage, actually.
I acknowledge that this could lead to hard-to-debug problems. Perhaps in these cases those modules have been more like game-level init modules than typical config files. Anyhow, I've been really happy with this straightforward way of making clear textual config files with the ability to have logic there too, and I haven't gotten bitten by it.
Is there much of a precedence for this? Are there any guides as to the best way to implement this?
Just to clarify: In my particular use case, this will only be used by programmers or people who know what they're doing. It's not a config file in a piece of software that will be distributed to end users. | Using config files written in Python | 0 | 0 | 0 | 1,283 |
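For reference, the pattern under discussion is simply a module of constants that the program imports; names here are illustrative:

```python
# settings.py: edited by teammates, imported by the program.
GRID_SIZE = (64, 48)
RESOLUTION = (1280, 720)

def initial_board():
    # Logic is allowed too, which is the advantage (and the risk)
    # discussed above.
    return [[0] * GRID_SIZE[0] for _ in range(GRID_SIZE[1])]

# elsewhere: import settings; board = settings.initial_board()
```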
23,587,542 | 2014-05-10T23:54:00.000 | 0 | 1 | 1 | 0 | python,configuration-files | 23,587,749 | 5 | false | 0 | 0 | This is yet another config file option. There are several quite adequate config file formats available.
Please take a moment to understand the system administrator's viewpoint or some 3rd party vendor supporting your product. If there is yet another config file format they might drop your product. If you have a product that is of monumental importance then people will go through the hassle of learning the syntax just to read your config file. (like X.org, or apache)
If you plan on having another programming language access or write the config file info, then a Python-based config file would be a bad idea.
Is there much of a precedence for this? Are there any guides as to the best way to implement this?
Just to clarify: In my particular use case, this will only be used by programmers or people who know what they're doing. It's not a config file in a piece of software that will be distributed to end users. | Using config files written in Python | 0 | 0 | 0 | 1,283 |
23,587,568 | 2014-05-10T23:57:00.000 | 5 | 0 | 1 | 0 | python,algorithm,list,frequency | 23,587,628 | 5 | false | 0 | 0 | Why not have two lists, one for questions not yet picked and one for questions that have been picked. Initially, the not-yet-picked list will be full, and you will pick elements from it which will be removed and added to the picked list. Once the not-yet-picked list is empty, repeat the same process as above, this time using the full picked list as the not-yet-picked list and vice versa. | 2 | 4 | 0 | I am building a quiz application which pulls questions randomly from a pool of questions. However, there is a requirement that the pool of questions be limited to questions that the user has not already seen. If, however, the user has seen all the questions, then the algorithm should "reset" and only show questions the user has seen once. That is, always show the user questions that they have never seen or, if they have seen all of them, always show them questions they have seen less frequently before showing questions they have seen more frequently.
The list (L) is created in such a way that the following is true: any value in the list (I) may exist once or be repeated in the list multiple times. Let's define another value in the list, J, such that it's not the same value as I. Then 0 <= abs(frequency(I) - frequency(J)) <= 1 will always be true.
To put it another way: if a value is repeated in the list 5 times, and 5 times is the maximum number of times any value has been repeated in the list, then all values in the list will be repeated either 4 or 5 times. The algorithm should return all values in the list with frequency == 4 before it returns any with frequency == 5.
Sorry this is so verbose, I'm struggling to define this problem succinctly. Please feel free to leave comments with questions and I will further qualify if needed.
Thanks in advance for any help you can provide.
Clarification
Thank you for the proposed answers so far. I don't think any of them are there yet. Let me further explain.
I'm not interacting with the user and asking them questions. I'm assigning the question ids to an exam record so that when the user begins an exam, the list of questions they have access to is determined. Therefore, I have two data structures to work with:
List of possible question ids that the user has access to
List of all question ids this user has ever been assigned previously. This is the list L described above.
So, unless I am mistaken, the algorithm/solution to this problem will need to involve list &/or set based operations using the two lists described above.
The result will be a list of question ids I can associate with the exam record and then insert into the database. | Efficient algorithm for determining values not as frequent in a list | 0.197375 | 0 | 0 | 314 |
23,587,568 | 2014-05-10T23:57:00.000 | 0 | 0 | 1 | 0 | python,algorithm,list,frequency | 23,587,653 | 5 | false | 0 | 0 | Let's define a "pivot" that separates a list into two sections. The pivot partitions the array such that all numbers before the pivot have been picked one more time than the numbers after the pivot (or more generally, all numbers before the pivot are ineligible for picking, while all numbers after the pivot are eligible for picking).
You simply need to pick a random item from the list of numbers after the pivot, swap it with the number on the pivot, and then increment the pivot. When the pivot reaches the end of the list, you can reset it back to the start.
Alternatively, you can also use two lists, which is much easier to implement, but is slightly less efficient as it needs to expand/shrink the list. Most of the time, the ease of implementation would trump the inefficiency so the two-list would usually be my first choice. | 2 | 4 | 0 | I am building a quiz application which pulls questions randomly from a pool of questions. However, there is a requirement that the pool of questions be limited to questions that the user has not already seen. If, however, the user has seen all the questions, then the algorithm should "reset" and only show questions the user has seen once. That is, always show the user questions that they have never seen or, if they have seen all of them, always show them questions they have seen less frequently before showing questions they have seen more frequently.
The list (L) is created in such a way that the following is true: any value in the list (I) may exist once or be repeated in the list multiple times. Let's define another value in the list, J, such that it's not the same value as I. Then 0 <= abs(frequency(I) - frequency(J)) <= 1 will always be true.
To put it another way: if a value is repeated in the list 5 times, and 5 times is the maximum number of times any value has been repeated in the list, then all values in the list will be repeated either 4 or 5 times. The algorithm should return all values in the list with frequency == 4 before it returns any with frequency == 5.
Sorry this is so verbose, I'm struggling to define this problem succinctly. Please feel free to leave comments with questions and I will further qualify if needed.
Thanks in advance for any help you can provide.
Clarification
Thank you for the proposed answers so far. I don't think any of them are there yet. Let me further explain.
I'm not interacting with the user and asking them questions. I'm assigning the question ids to an exam record so that when the user begins an exam, the list of questions they have access to is determined. Therefore, I have two data structures to work with:
List of possible question ids that the user has access to
List of all question ids this user has ever been assigned previously. This is the list L described above.
So, unless I am mistaken, the algorithm/solution to this problem will need to involve list &/or set based operations using the two lists described above.
The result will be a list of question ids I can associate with the exam record and then insert into the database. | Efficient algorithm for determining values not as frequent in a list | 0 | 0 | 0 | 314 |
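A sketch (editor's illustration) of the least-frequent-first idea from the answers, phrased in terms of the two structures the question describes: the list of accessible ids and the list of previously assigned ids.

```python
import random
from collections import Counter

def pick_exam_questions(accessible_ids, assigned_history, n):
    """Pick n ids, always preferring the least frequently assigned."""
    freq = Counter(assigned_history)
    pool = list(accessible_ids)
    random.shuffle(pool)                  # break ties randomly
    pool.sort(key=lambda qid: freq[qid])  # stable sort: never-seen ids first
    return pool[:n]

# never-seen ids 3 and 4 are returned before the once- or twice-seen ones
print(pick_exam_questions([1, 2, 3, 4], [1, 2, 1], 2))
```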
23,590,683 | 2014-05-11T08:48:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python,perl | 23,590,735 | 1 | true | 0 | 0 | Short answer: in Python, you can use mechanize together with BeautifulSoup to achieve what you want. Try it and figure out from the docs of these packages how to use them, then come back here with specific questions when things don't work. | 1 | 1 | 0 | Basically I need a tool (preferably for Linux) for getting some data from a webpage, opening some others, filling in forms on them, closing windows, and clicking some buttons.
What would be the tool for doing this as a whole? Can a scripting language like Perl or Python do it for me?
This may be difficult, so please give me the most user-friendly way. :-)
I am not familiar with Perl or Python, but I am strong in my will to make it work because it is important to me. Note that I can't do anything on the server, so everything that needs to be done is done from the client side / web browser.
open webpage
on that page click on the link to open webpage in new window
on new page click on two radio buttons and submit
close that window and return to previous
extract part of the text from tag that follows text "IP Address: " to the end of the line and save it to the variable
extract part of the text from tag that follows text "Timestamp: " to the end of the line and save it to the variable
extract part of the text from tag that follows text "File Name: " to the end of the line and save it to the variable (these three are similar to make)
open URL in the new window
fill in the form with the data that is extracted from the previous page and submit
if there is "answer"(result is table) from the form do "procedure1"(copy entire table with result and concat its cells, close window and return to previous, again click some buttons/links on that page etc.....)
if there is "no answer"(empty table) from the form do "precedure2"(opening new window, filling form with data in variables, clicking some buttons and closing that window) than proceed to "procedure1"
Thank you all for helping me. | Some scripting language for my task? | 1.2 | 0 | 1 | 135 |
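A minimal sketch of the mechanize + BeautifulSoup combination named in the answer (Python 2 era). The URL, form index, field name and text marker are placeholders standing in for the steps listed in the question.

```python
import mechanize
from bs4 import BeautifulSoup

br = mechanize.Browser()
br.open("http://example.com/start")      # placeholder URL

soup = BeautifulSoup(br.response().read())
# e.g. find the text node containing "IP Address:" on the page
ip_line = soup.find(text=lambda t: t and "IP Address:" in t)

br.select_form(nr=0)                     # first form on the page
br["ip_field"] = ip_line or ""           # hypothetical field name
response = br.submit()
```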
23,590,741 | 2014-05-11T08:56:00.000 | 1 | 0 | 0 | 1 | python,django,apache,virtualhost,mezzanine | 23,596,414 | 1 | true | 1 | 0 | It sounds like you've misconfigured the "sites" section.
Under "sites" in the admin interface, you'll find each configured site. Each of the pages in a site is related to one of these, and matched to the host name you use in the browser.
You'll probably find there's only one site record configured, and its domain doesn't match your production host that you're accessing the site via. If you update it, it should resolve everything. | 1 | 0 | 0 | I developed my first shop using mezzanine.
If I run it with python manage.py runserver 0.0.0.0:8000 it works well, but if I try to put an Apache virtual host in front of it, the result I get is awful, because I only see the home page, but not the other ones.
I checked the generated HTML, and it looks very different.
I think it's a problem of mezzanine configuration, maybe on the configured sites, but I am not able to understand what I have to change.
Can you please give me a hint? | mezzanine shop is not shown if behind a virtualhost | 1.2 | 0 | 0 | 66 |
23,591,711 | 2014-05-11T10:54:00.000 | 1 | 0 | 1 | 0 | python,django,django-models | 23,591,861 | 1 | false | 1 | 0 | The GMT offset for Pacific/Auckland is UTC+12 (hours).
datetime.datetime(2014, 5, 4, 12, 0, tzinfo=UTC) represents noon UTC on the 4th. But due to the offset, this is midnight in Auckland. However, midnight in Auckland is in fact "hour 0" on the 5th.
So, yes, the issue does have to do with timezones, but it's in fact not an issue. The date didn't change, it's just expressed in a different timezone.
Naturally, you could rollback the migration, and you account for timezones differently in step 1 where you transform the date into a datetime. | 1 | 2 | 0 | I'm just wondering where/what did I do wrong. So here's the scenario.
I have a field date = DateField() then I want to change it to date = DateTimeField() without losing the data stored from it.
What I did:
Add a temporary field temp = DateTimeField(), then transfer the value from date = DateField() to temp via a data migration
remove date = DateField()
Add date = DateTimeField()
Transfer stored value from temp to date
Everything went great, no errors. But one thing changed, the value.
For example:
Old data: datetime.date(2014, 5, 5)
New data: datetime.datetime(2014, 5, 4, 12, 0, tzinfo=UTC)
So my question is, why did it changed and deduct 1 day from the original value? Any thoughts? Is it because of the timezone? Timezone was set to Pacific/Auckland
Any help would be much appreciated. Thanks! | datamigration from datefield to datetimefield | 0.197375 | 0 | 0 | 56 |
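The "change" in the accepted answer can be reproduced directly: the same instant rendered in both zones (a sketch using pytz).

```python
from datetime import datetime
import pytz

utc_value = datetime(2014, 5, 4, 12, 0, tzinfo=pytz.utc)
auckland = pytz.timezone('Pacific/Auckland')
print(utc_value)                       # 2014-05-04 12:00:00+00:00
print(utc_value.astimezone(auckland))  # 2014-05-05 00:00:00+12:00, same instant
```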
23,593,173 | 2014-05-11T13:27:00.000 | -1 | 0 | 1 | 0 | python | 23,593,743 | 2 | true | 0 | 0 | Well, there's no such thing as a private option. You should just avoid running python setup.py register by mistake. Aren't you afraid of running rm -rf / by mistake? Or rm /boot/linux*? ;-)
In case you do run python setup.py register by mistake, you can always log on to PyPI and manually delete your package from the index. | 1 | 7 | 0 | Is there a way to prevent accidental publication of a private package, such as "private": true in npm? | How to prevent accidental publish Python package using setup.py | 1.2 | 0 | 0 | 460 |
23,593,618 | 2014-05-11T14:11:00.000 | 0 | 0 | 1 | 0 | python,sql,rdbms | 23,593,850 | 2 | false | 0 | 0 | People usually make use of a Web framework, instead of implementing the basic machinery themselves as you are doing.
That is: Python is a great language that easily allows one to translate "JSON into SQL" with a small amount of code, and it is great for learning. If you are doing this for educational purposes, it is a nice project to continue fiddling with, and maybe you can find some nice ideas in this answer and in others.
But for "real world" usage, the apparently simple plan runs into real-world issues. Tens or even hundreds of them: how to properly separate HTML/CSS templates from content and from core logic, how to deal with many, many aspects of security, etc.
Then the frameworks come into play: a web framework is a software project that has had, over the years, sometimes hundreds of hours of work from several contributors, and that has thought about, and addresses, all of the issues a real web application can and will face.
So, it is OK if one wants to do everything from scratch, believing one can come up with a framework that has distinguishing capabilities not available in any existing project. And it is OK to make small projects for learning purposes. It is not OK to try to come up with something from scratch for real production servers without a deep knowledge of all the issues involved, and without knowing at least 3 or 4 frameworks well.
So, now you've got a basic understanding of a way to get to a framework; it is time to learn some of the frameworks themselves. Try, for example, Bottle and Flask (microframeworks) and Django (a fully featured framework for web application development), maybe Tornado (an HTTP server, but with enough of a web framework in it to be usable, and to be very instructive). Just reading the documentation on "how to get started" with these projects, to get to a "hello world" page, will lead you to lots of concepts you probably had not thought about yet. | 2 | 1 | 0 | I have a server with a database; the server will listen for HTTP requests and use JSON for
data transfer.
Currently, what my server code (Python) mainly does is read the JSON, convert it to SQL, and make some modifications to the database. So the function of the server, as I see it, is only that of a converter between JSON and SQL. Is this the right procedure that people usually follow?
Then I come up with another idea, I can define a class for user information and every user's information in the database is an instance of that class. When I get the JSON file, first I put it into the instance, do some operation and then put it into the database. In my understanding, it adds a language layer between the http request and the database.
The question is, what do people usually do? | How do people usually operate information on server database? | 0 | 1 | 0 | 46 |
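To make the suggestion in the answer concrete, here is roughly what the first page of Flask's documentation leads to, as an editor's sketch: a minimal JSON endpoint (route and payload are illustrative).

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/users", methods=["POST"])
def create_user():
    data = request.get_json()   # the incoming JSON document
    # ... validate and store `data` here ...
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run()
```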
23,593,618 | 2014-05-11T14:11:00.000 | 0 | 0 | 1 | 0 | python,sql,rdbms | 23,593,774 | 2 | false | 0 | 0 | The answer is: people usually do whatever they need to do. The layer between the database and the client normally provides a higher-level API, to make requests independent from the actual database. But what this higher level looks like depends on the application you have. | 2 | 0 | 0 | I have a server with a database; the server will listen for HTTP requests and use JSON for
data transfer.
Currently, what my server code (Python) mainly does is read the JSON, convert it to SQL, and make some modifications to the database. So the function of the server, as I see it, is only that of a converter between JSON and SQL. Is this the right procedure that people usually follow?
Then I come up with another idea, I can define a class for user information and every user's information in the database is an instance of that class. When I get the JSON file, first I put it into the instance, do some operation and then put it into the database. In my understanding, it adds a language layer between the http request and the database.
The question is, what do people usually do? | How do people usually operate information on server database? | 0 | 1 | 0 | 46 |
23,594,048 | 2014-05-11T14:52:00.000 | 6 | 0 | 1 | 0 | python,compilation,pyinstaller | 64,578,211 | 2 | false | 0 | 0 | PyInstaller will generate a .spec file. You can run it first against your one file app and then for future runs reference your edited .spec file
Use one file app:
pyinstaller main.py
Note: this will overwrite main.spec so don't use this on future runs
Use .spec file:
pyinstaller main.spec | 1 | 6 | 0 | How can I specifiy the .spec file while compiling my .py file with PyInstaller using the --onefile and --noconsole options? | How can I specifiy the .spec file in PyInstaller | 1 | 0 | 0 | 11,918 |
23,595,346 | 2014-05-11T16:53:00.000 | 1 | 0 | 1 | 0 | python | 23,595,617 | 2 | false | 0 | 0 | Both python 2.7 and python 3 coexist on one machine happily.
If you name the scripts .py for those you would like to run with Python 2.7 and .py3 for those that you would like to run with Python 3, then you can invoke the scripts just by typing their names or by double-clicking. These associations are set up by default by the installer.
You can force the python version on the command line, assuming both are on the path by typing python or python3 for any script file regardless of the extension.
It is also worth looking at virtualenv for your testing.
N.B. For installing from pypi you can use pip or pip3 and the install for the appropriate version will be done. | 1 | 0 | 0 | I have two versions of Python installed in my computer, Python 3.4 and Python 2.7, and I use both of these installations. When I run a script, how do I choose which versions I want to use? May I rename the names of the executables for that (Python.exe -> Python27.exe)?
Thanks. | Multiple Python versions in one computer (windows 7) | 0.099668 | 0 | 0 | 327 |
23,597,816 | 2014-05-11T20:54:00.000 | 0 | 0 | 0 | 1 | python,git,kivy | 23,609,691 | 1 | false | 0 | 0 | This value is not typically something you would include in a git repo because it is specific to your system. Other people using your repo may have Kivy installed somewhere else, or be on an entirely different OS where that path does not exist. | 1 | 1 | 0 | Create a file named 'py.ini' and place it either in your users application data directory, or in 'C:\Windows'. It will contain the path used to start Kivy.
I put my Kivy installation at 'C:\utils\kivy'
so my copy says:
[commands]
kivy="c:\utils\kivy\kivy.bat"
(You could also add commands to start other script interpreters, such as jython or IronPython.)
so my question is:
What commands are supposed to be used in GIT to add this path into a variable "kivy".
Or is it even supposed to be a variable?
And in GIT, to get the script working, it uses "source /c/.../Kivy-1.8.0-py2.7-win32/kivyenv.sh"
But, to add the path, they said to use "C:...\kivy.bat"
Why does " /c/" change to "C:"
and why is it 'kivy.bat', not 'kivyenv.sh'?
Thank you. | Adding Path in GIT for KIVY | 0 | 0 | 0 | 256 |
23,598,452 | 2014-05-11T22:05:00.000 | 2 | 0 | 1 | 0 | python,data-structures | 23,598,489 | 1 | true | 0 | 0 | Currently, GameServer has a set of GameInstances, and GameInstance has a set of players. This requires me to iterate through every game and player to find the correct game, and I don't think this is the best way to do it, because it will have to be run hundreds of times per second.
You're right! And while I will answer your specific question, what you should do is read about datastructures. It's essential that every working programmer have at least a basic understanding of the most common datastructures, and their performance characteristics.
Based on your description of your problem, you need to maintain a mapping, probably using a hashtable, between a key which identifies each game, and the object which describes it. | 1 | 1 | 0 | I am implementing a game server in python. The GameServer class holds multiple game instances, which each hold multiple players. I am trying to figure out the most optimal data structures to use. There is a single function that receives all incoming data, and it needs to find the player within the games, and update the info.
Currently, GameServer has a set of GameInstances, and GameInstance has a set of players. This requires me to iterate through every game and player to find the correct game, and I don't think this is the best way to do it, because it will have to be run hundreds of times per second.
The incoming data has a connection (from which the data was received), and the message. This means I store a connection for every player within their class, so that I can send messages back to a specific player. I can't keep a dict of every player connection, because they have to be grouped by game instance. Please help me understand the most efficient way to structure this. | How to structure game instance data in Python | 1.2 | 0 | 0 | 111 |
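A sketch of the hashtable-based lookup the answer recommends: one dict from game id to instance, plus one from connection to (game, player), so the receive handler never iterates. The handle() method is hypothetical.

```python
class GameServer(object):
    def __init__(self):
        self.games = {}           # game_id -> GameInstance
        self.by_connection = {}   # connection -> (game, player)

    def register(self, game_id, game, connection, player):
        self.games[game_id] = game
        self.by_connection[connection] = (game, player)

    def on_data(self, connection, message):
        # O(1) lookup instead of scanning every game and player.
        game, player = self.by_connection[connection]
        game.handle(player, message)  # hypothetical GameInstance method
```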
23,599,345 | 2014-05-12T00:16:00.000 | 0 | 0 | 1 | 0 | python | 42,448,823 | 4 | false | 0 | 0 | This line:
self._file = open(self._format_filename())
Should be:
self._file = open(self._format_filename(), self._mode)
This will create a new file and write to it. Without passing 'self._mode', the file is created but is not opened for writing.
It will log gps records and the log file will be a gpx(I have the script for that).
So for today it will create a file named 12-05-2014.gpx and will keep all the gps records until it turns 13-05-2014. Then it will create a new file 13-05-2014.gpx and keep logging in there.
Is this possible? Could you give me some hints about that please? | Create log file one every day in python | 0 | 0 | 0 | 5,664 |
23,599,345 | 2014-05-12T00:16:00.000 | 0 | 0 | 1 | 0 | python | 37,257,150 | 4 | false | 0 | 0 | Addition to the presented code:
The file needs to be opened in append mode. On Windows and Raspbian this did not work, and I had to change this line:
self._file = open(self._filename)
into:
self._file = open(self._filename, self._mode)
Hope this will help someone to make full use of this.
GWS | 2 | 2 | 0 | I would like to write a script in python which will logging a file, one per day.
It will log gps records and the log file will be a gpx(I have the script for that).
So for today it will create a file named 12-05-2014.gpx and will keep all the gps records until it turns 13-05-2014. Then it will create a new file 13-05-2014.gpx and keep logging in there.
Is this possible? Could you give me some hints about that please? | Create log file one every day in python | 0 | 0 | 0 | 5,664 |
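For completeness (editor's note): the standard library already covers one-file-per-day logging. A sketch with logging.handlers.TimedRotatingFileHandler; note the handler appends the date as a suffix on rollover rather than using it as the whole filename.

```python
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler('gps.gpx', when='midnight')
handler.suffix = '%d-%m-%Y'   # date format appended when the file rolls over
logger = logging.getLogger('gps')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('<trkpt lat="52.0" lon="4.0"></trkpt>')  # example GPS record
```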
23,603,414 | 2014-05-12T07:35:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,apache2,wsgi | 23,625,565 | 1 | false | 1 | 0 | Solved: Changed the django settings. Apparently Debug was False, should be True | 1 | 0 | 0 | When I run 127.0.0.1 on the web (company LAN) I get an 500 page, but the costume one I wrote to my specific django application.
I'm using apache 2.2 with mod_wsgi and django 1.5.1 (Python 2.7). I checked all of my settings so it should work fine but I can't understand why it's still returning a 500.
Appreciate any help,
Alon | Django using mod_wsgi returns my 500 page | 0 | 0 | 0 | 23 |
23,603,762 | 2014-05-12T07:57:00.000 | 0 | 0 | 0 | 0 | python,algorithm | 23,613,409 | 3 | false | 0 | 0 | So the problem is: how many ways can you roll with attributes 12, 13 and 12 and a talent of 7. Lets assume you know the outcome of the first dice, lets say its 11. Then the problem is reduced to how many ways can you roll with attributes 13 and 12 and with a talent of 7. Now try it with a different first roll, lets say you rolled 14 for the first time. You are over by 2, so the problem now is how many ways can you roll with attributes 13 and 12 and with a talent of 5. Now try with a first roll of 20. The question now is how many ways can you roll with attributes 13 and 12 and with a talent of -1 (in the last case its obviously 0). | 1 | 1 | 1 | The Dark Eye is a popular fantasy role-playing game, the German equivalent of Dungeons and Dragons. In this system a character has a number of attributes and talents. Each talent has several attributes associated with it, and to make a talent check the player rolls a d20 (a 20-sided die) for each associated attribute. Each time a roll result exceeds the attribute score, the difference between roll and attribute is added up. If this difference total (the excess ) is greater than the talent value, the check fails.
Your first task for this tutorial is to write a function that takes as input a list of attribute scores associated with a talent as well as the talent ranks, and returns the probability that the test succeeds. Your algorithm must work efficiently for lists of arbitrary length.
Hint: First write a function that computes the probability distribution for the excess. This can be done efficiently using dynamic programming.
What does this mean? I solved the knapsack problem without any major issues (both 0/1 and unbounded), but I have no idea what to do with this?
The smallest problem to solve first would be rank 1 with a single attribute of say 12 (using the example above) - the probability of passing would be 12/20 right? then rank 2 would be 13/20 then rank 3 would be 14/20?
Am I on the right track? I feel like I might be misunderstanding the actual game rules. | Dynamic Programming for Dice probability in a Role Playing Pen & Paper game | 0 | 0 | 0 | 1,791 |
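A sketch of the dynamic program the hint asks for: build the probability distribution of the total excess one die at a time, then sum the probability that the excess does not exceed the talent value.

```python
from collections import defaultdict

def success_probability(attributes, talent):
    dist = {0: 1.0}  # dist[e] = probability the excess so far equals e
    for attr in attributes:
        new = defaultdict(float)
        for excess, p in dist.items():
            for roll in range(1, 21):        # one d20
                over = max(0, roll - attr)   # excess contributed by this die
                new[excess + over] += p / 20.0
        dist = new
    # the check succeeds when the total excess is at most the talent value
    return sum(p for e, p in dist.items() if e <= talent)

print(success_probability([12, 13, 12], 7))
```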
23,606,102 | 2014-05-12T10:02:00.000 | 1 | 0 | 0 | 0 | python,postgresql,psycopg2 | 23,606,436 | 1 | true | 0 | 0 | psycopg2 will never prompt for a password - that's a feature of psql, not of the underlying libpq that both psql and psycopg2 use. There's no equivalent of -w / -W because there's no password prompt feature to turn on or off.
If you want to prompt for a password you must do it yourself in your code: trap the exception thrown when authentication fails because a password is required, prompt the user for a password, and reconnect using the password. That's what psql does anyway, if you take a look at the sources. | 1 | 2 | 0 | Is there an option in psycopg2 (in the connect() method) similar to psql -w (never issue a password prompt) and -W (force psql to prompt for a password before connecting to a database)? | Is there a way to ask the user for a password or not with psycopg2? | 1.2 | 1 | 0 | 1,041 |
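A sketch of the prompt-and-retry pattern the answer describes; the DSN is illustrative, and note that OperationalError can also signal failures other than a missing password.

```python
import getpass
import psycopg2

def connect(dsn="dbname=test user=alice"):  # illustrative DSN
    try:
        return psycopg2.connect(dsn)
    except psycopg2.OperationalError:
        # A password is (probably) required: ask and retry,
        # which is essentially what psql itself does.
        pw = getpass.getpass("Password: ")
        return psycopg2.connect(dsn + " password=" + pw)
```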
23,610,748 | 2014-05-12T13:44:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine | 23,612,408 | 1 | true | 1 | 0 | There's no 100% sure way to assess the number of frontend instance hours. An instance can serve more than one request at a time. In addition, the algorithm of the scheduler (the system that starts the instances) is not documented by Google.
Depending on how demanding your code is, I think you can expect a standard F1 instance to hold up to 5 requests in parallel, that's a maximum. 2 is a safer bet.
My recommendation, if possible, would be to simulate standard interaction on your website with limited number of users, and see how the number of instances grow, then extrapolate.
For example, let's say you simulate 100 requests per minute during 2 hours, and you see that GAE spawns 5 instances for that, then you can extrapolate that a continuous load of 3000 requests per minute would require 150 instances during the same 2 hours. Then I would double this number for safety, and end up with an estimate of 300 instances. | 1 | 0 | 0 | We are developing a Python server on Google App Engine that should be capable of handling incoming HTTP POST requests (around 1,000 to 3,000 per minute in total). Each of the requests will trigger some datastore writing operations. In addition we will write a web-client as a human-usable interface for displaying and analyse stored data.
First we are trying to estimate usage for GAE to have at least an approximation about the costs we would have to cover in future based on the number of requests. As for datastore write operations and data storage size it is fairly easy to come up with an approximate number, though it is not so obvious for the frontend and backend instance hours.
As far as I understand, each time a request comes in, an instance is started, which then runs for 15 minutes. If a request comes in within those 15 minutes, the same instance is used. And now it gets a bit tricky, I think: if two requests come in at the very same time (which is not so odd with 3,000 requests per minute), does Google fire up another instance, and hence count an additional (at least) 0.25 instance hours? Also, I am not quite sure how a web client that constantly performs read operations on the datastore in order to display and analyse data would increase the instance hours.
Does anyone know a reliable way of counting instance hours and creating meaningful estimations? We would use that information to know how expensive it would be to run an application on GAE in comparison to just ordering a web server. | GAE: how to quantify Frontend Instance Hours usage? | 1.2 | 0 | 0 | 1,362 |
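The extrapolation suggested in the answer, as a worked calculation (numbers from the example above):

```python
measured_rpm, measured_instances = 100, 5   # from the 2-hour simulation
target_rpm = 3000

estimate = target_rpm // measured_rpm * measured_instances  # 150 instances
safe = 2 * estimate                                         # doubled: 300
hours = safe * 2                # 600 instance hours for the 2-hour window
print(estimate, safe, hours)
```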
23,611,310 | 2014-05-12T14:09:00.000 | 3 | 0 | 1 | 0 | python,exception | 23,611,442 | 2 | true | 0 | 0 | Exceptions can model any exception to the normal flow of the code. Errors are the most common use-case, but anything where the normal return type doesn't really make sense can be seen as an opportunity to raise an exception instead.
Raising an exception for non-error use-cases is perfectly fine, if it simplifies your flow.
Remember, the Exception object raised and caught is itself a value. Attaching information to it is a perfectly acceptable way to communicate with a caller and signal a deviation from 'normal'.
I'd prefer an exception over a sentinel value like None; it makes the handling explicit, and can actually lead to cleaner handling and better feedback to a developer when they forget to handle the exceptional return value. If your code normally returns a list, but in specific, exceptional circumstances, you return None, and the caller doesn't handle it, then you get weird bugs down the line somewhere. TypeError: 'NoneType' object is not iterable is a lot more cryptic than an explicit unhandled exception. | 1 | 2 | 0 | Is it good Python practice to have a function return None most of the time, while exceptionally returning useful values by raising an exception where the values are stored?
I am a little uneasy with this because exceptions are most often used to signal some kind of problem, with some attributes of the exception giving some details about the problem. Here, I would like the exception to actually mean "here is an exceptional result of this function".
Using an exception for this is tempting because (1) this is only done in exceptional circumstances and (2) this is efficient, in Python (more than an if … is not None:…). On the other hand, the exception itself is not the sign of an error of any sort, just a vehicle for the exceptionally returned values.
Is there any official recommendation against using exceptions for exceptionally returning values from a function?
PS: Here is a use case:
An object method updates the internal state of the object based on new data (it's a finite state automaton).
At some point (usually after getting many data points), the method considers that some action must be taken (in my case: some data from the object should be stored in a database, and the object's state is reset to the initial state, where it is ready to get more data).
Thus, the sequence of events, for the method, is: get data, update state, get data, update state,… ah! we reached a special state where information about the object should be stored! reset the object's state and send the relevant information out; get data, update state, get data, update state,… Thus, most of the time, the function updates the internal state of the object and does not return anything. Exceptionally, it must send important information about the object. | Using exceptions to return exceptional values: is this good practice? | 1.2 | 0 | 0 | 128 |
23,613,260 | 2014-05-12T15:37:00.000 | 1 | 0 | 0 | 0 | javascript,python,css,selenium | 23,649,576 | 1 | true | 0 | 0 | Overflow takes only one of five values (overflow: auto | hidden | scroll | visible | inherit). Use visible | 1 | 1 | 0 | I am trying to use Python Selenium to click a button on a webpage, but Selenium is giving exception "Element is not currently visible and so may not be interacted with".
The DOM structure is quite simple:
<body style="overflow: hidden;">
...
<div aria-hidden="false" style="display: block; ...">
...
<button id="button-aaa" aria-hidden="true" style="...">
...
</div>
...
</body>
I have searched Google and Stackoverflow. Some users say Selenium cannot click a web element which is under a parent node with overflow: hidden. But surprisingly, I found that Selenium is able to click some other buttons which are also under a parent node with overflow: hidden.
Anyway, I have tried to use driver.execute_script to change the <body> style to overflow: none, but Selenium is still unable to click this button.
I have also tried to change the button's aria-hidden="true" to aria-hidden="false", but Selenium still failed to click.
I have also tried to add "display: block;" to the button's style, and tried all different combination of style changes, but Selenium still failed to click.
I have used this command to check the button: buttonelement.is_displayed(). It always returns False no matter what style I change in the DOM. The button is clearly visible in the Firefox browser, and it is clickable and functioning. By using the Chrome console, I am able to select the button using its ID.
May I know how can I check what is causing a web element to be invisible to Python Selenium? | How to check what is causing a web element to be invisible to Python Selenium? | 1.2 | 0 | 1 | 1,022 |
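Following the accepted suggestion above (overflow: visible), a hedged snippet to try from Python before clicking (the URL is a placeholder; 'visible' is a valid overflow value, unlike the 'none' tried in the question):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/page-with-the-button')   # hypothetical page
# Make the hiding ancestor show its overflowing content again.
driver.execute_script("document.body.style.overflow = 'visible';")
driver.find_element_by_id('button-aaa').click()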
23,620,930 | 2014-05-13T00:16:00.000 | 4 | 1 | 0 | 0 | php,python,ruby,perl,pointers | 23,620,955 | 3 | true | 0 | 0 | Because pointers, while very versatile, are a pain-in-the-ass source of bugs. The whole point of higher-order languages is abstraction of dangerous or verbose constructs into safer and shorter ones: you trade power for ease of development. Thus, for example, arrays in dynamic languages all know how to allocate themselves, free themselves, and even resize themselves, so the programmer does not need to worry about it (and can't mess it up). It is the same reason why we don't normally program in assembly unless we really really want to control every cycle of the processor: too verbose, too easy to make a mistake (which is why C/C++, Objective-C and so on exist in the first place). Dynamic languages are a step further in the same direction. | 3 | 0 | 0 | I was wondering why scripting languages (python,php,ruby,perl) don't have pointers such as C/C++,Objective-C so on? | Why Scripting (Dynamic) Languages Don't Have Pointers? | 1.2 | 0 | 0 | 370 |
23,620,930 | 2014-05-13T00:16:00.000 | 0 | 1 | 0 | 0 | php,python,ruby,perl,pointers | 23,620,990 | 3 | false | 0 | 0 | A major difference between the two categories is that scripting languages don't necessarily deal with things like, say, memory in a direct manner. In your C languages you have memory and have to allocate and manage it. In PHP, however, you don't generally manage your memory directly (and in most cases, the memory usage is transparent to the programmer). The underlying software does this for you. So it's entirely possible to write software without knowing a thing about machine level code, malloc, etc. | 3 | 0 | 0 | I was wondering why scripting languages (python,php,ruby,perl) don't have pointers such as C/C++,Objective-C so on? | Why Scripting (Dynamic) Languages Don't Have Pointers? | 0 | 0 | 0 | 370 |
23,620,930 | 2014-05-13T00:16:00.000 | 0 | 1 | 0 | 0 | php,python,ruby,perl,pointers | 23,620,951 | 3 | false | 0 | 0 | The question is too general to be answered on Stack Overflow.
But the short answer is: scripting languages try to be as hardware-independent as they can, and that includes having no knowledge of RAM structure and content. | 3 | 0 | 0 | I was wondering why scripting languages (python,php,ruby,perl) don't have pointers such as C/C++,Objective-C so on? | Why Scripting (Dynamic) Languages Don't Have Pointers? | 0 | 0 | 0 | 370 |
23,621,423 | 2014-05-13T01:23:00.000 | 1 | 0 | 1 | 0 | python,arrays,memory-management,anaconda,cellular-automata | 23,622,236 | 2 | false | 0 | 0 | If the grid is sparsely populated, you might be better off tracking just the populated parts, using a different data structure, rather than a giant Python list (array). | 2 | 3 | 1 | I have a high-intensity model that is written in Python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays: one a fine grid array, the other a coarser grid mesh. Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the program is run, it only uses 1% of the CPU but maxes out the RAM (8 GB). It takes days to run. What would be the best way to start to solve this problem? Would GPU processing be a good idea or do I need to find a way to offload some of the completed calculations to the HDD?
I am just trying to find avenues of thought to move towards a solution. Is my model just pulling too much data into RAM, resulting in slow calculations? | All of Ram being used in a Cellular Automata Python Script | 0.099668 | 0 | 0 | 151 |
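A minimal sketch of the sparse representation suggested in the answer above: keep only the live cells in a set of (x, y) tuples instead of a dense 200,000-cell list (the Conway-style rule is a stand-in for the real update rule):

from collections import Counter

live = {(10, 20), (10, 21), (11, 20)}      # only populated cells are stored

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(live):
    # Count how many live neighbours every candidate cell has.
    counts = Counter(n for cell in live for n in neighbours(cell))
    return set(cell for cell, c in counts.items()
               if c == 3 or (c == 2 and cell in live))

live = step(live)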
23,621,423 | 2014-05-13T01:23:00.000 | 1 | 0 | 1 | 0 | python,arrays,memory-management,anaconda,cellular-automata | 23,622,159 | 2 | false | 0 | 0 | Sounds like your problem is memory management. You're likely writing to your swap file, which would drastically slow down your processing. GPU wouldn't help you with this, as you said you're maxing out your RAM, not your processing (CPU). You probably need to rewrite your algorithm or use different datatypes, but you haven't shared your code, so it's hard to diagnose just based on what you've written. I hope this is enough information to get you heading in the right direction. | 2 | 3 | 1 | I have a high-intensity model that is written in Python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays: one a fine grid array, the other a coarser grid mesh. Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the program is run, it only uses 1% of the CPU but maxes out the RAM (8 GB). It takes days to run. What would be the best way to start to solve this problem? Would GPU processing be a good idea or do I need to find a way to offload some of the completed calculations to the HDD?
I am just trying to find avenues of thought to move towards a solution. Is my model just pulling too much data into RAM, resulting in slow calculations? | All of Ram being used in a Cellular Automata Python Script | 0.099668 | 0 | 0 | 151 |
23,622,285 | 2014-05-13T03:22:00.000 | 1 | 0 | 0 | 1 | python,path,directory,delete-directory | 23,622,356 | 2 | true | 0 | 0 | Yes, you can do this. The script will be loaded into memory when it runs, so it can delete its parent directory (and therefore itself) directly from the script without any issues. Just use shutil.rmtree rather than os.rmdir, because os.rmdir can't remove a directory that isn't empty.
Here's a one-liner that will do it (be careful running this in a directory with stuff you don't want deleted!)
import os, shutil
shutil.rmtree(os.path.dirname(os.path.realpath(__file__))) | 2 | 0 | 0 | Is it possible to make a script delete its own folder? Clearly there are some issues here, but I feel like it might be possible.
Essentially, what I'm trying to create is a script that, once it has finished doing its thing, will delete its own folder and a few other files. But I'm mainly having an issue with trying to tell it to delete itself as a way of closing itself.
As a final action, or in a sense telling the PC to do one more action after it has closed itself.
Any help would be greatly appreciated :)
tl;dr: can an application tell Windows to run a command after it closes itself? | Making a Script that deletes its own folder when it finishes | 1.2 | 0 | 0 | 123 |
23,622,285 | 2014-05-13T03:22:00.000 | 1 | 0 | 0 | 1 | python,path,directory,delete-directory | 23,622,353 | 2 | false | 0 | 0 | You mean it would delete the entire folder, including the script itself?
Why not invoke the script with a cmd or batch file (on Windows; bash script on *nix) that would first execute the script, and then do whatever cleanup you want to do afterwards? The wrapper file could live in another directory so it would not also get deleted. | 2 | 0 | 0 | Is it possible to make a script delete its own folder? Clearly there are some issues here, but I feel like it might be possible.
Essentially, what I'm trying to create is a script that, once it has finished doing its thing, will delete its own folder and a few other files. But I'm mainly having an issue with trying to tell it to delete itself as a way of closing itself.
As a final action, or in a sense telling the PC to do one more action after it has closed itself.
Any help would be greatly appreciated :)
tl;dr: can an application tell Windows to run a command after it closes itself? | Making a Script that deletes its own folder when it finishes | 0.099668 | 0 | 0 | 123 |
23,622,923 | 2014-05-13T04:40:00.000 | 5 | 1 | 0 | 0 | python,c++,unit-testing | 23,623,089 | 5 | false | 0 | 1 | It really depends on what it is you are trying to test. It almost always makes sense to write unit tests in the same language as the code you are testing so that you can construct the objects under test or invoke the functions under test, both of which can be most easily done in the same language, and verify that they work correctly. There are, however, cases in which it makes sense to use a different language, namely:
Integration tests that run a number of different components or applications together.
Tests that verify compilation or interpretation failures which could not be tested in the language, itself, since you are validating that an error occurs at the language level.
An example of #1 might be a program that starts up multiple different servers connected to each other, issues requests to the server, and verifies those responses. Or, as a simpler example, a program that simply forks an application under test as a subprocess and verifies that it produces the expected outputs for a given input.
An example of #2 might be a program that verifies that a certain piece of C++ code will produce a static assertion failure or that a particular template instantiation which is intentionally disallowed will result in a compilation failure if someone attempts to use it.
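As a hedged Python illustration of #1 (the binary name and its expected output are made up):

import subprocess
import unittest

class BlackBoxTest(unittest.TestCase):
    def test_known_input(self):
        # './mytool' and its I/O contract are hypothetical placeholders.
        proc = subprocess.Popen(['./mytool', '--count', '3'],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(proc.returncode, 0)
        self.assertEqual(out.strip(), b'1 2 3')

if __name__ == '__main__':
    unittest.main()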
To answer your larger question, it is not bad practice per-se to write tests in a different language. Whatever makes the tests more convenient to write, easier to understand, more robust to changes in implementation, more sensitive to regressions, and better on any one of the properties that define good testing would be a good justification to write the tests one way vs another. If that means writing the tests in another language, then go for it. That being said, small unit tests typically need to be able to invoke the item under test directly which, in most cases, means writing the unit tests in the same language as the component under test. | 5 | 28 | 0 | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python.
I’m planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library into a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | Is it acceptable practice to unit-test a program in a different language? | 0.197375 | 0 | 0 | 4,198 |
23,622,923 | 2014-05-13T04:40:00.000 | 4 | 1 | 0 | 0 | python,c++,unit-testing | 23,623,087 | 5 | false | 0 | 1 | Why not? It's an awesome idea, because you really understand that you are testing the unit like a black box.
Of course there may be technical issues involved: what if you need to mock some parts of the unit under test? That may be difficult from a different language.
This is a common practice for integration tests, though; I've seen lots of programs driven from external tools, such as a website from Selenium or an application from Cucumber. Both of those can be considered the same as a custom Python script.
If you consider that the difference between integration testing and unit testing is the number of things under test at any given time, the only reason you shouldn't do this is tool support. | 5 | 28 | 0 | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python.
I’m planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library into a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | Is it acceptable practice to unit-test a program in a different language? | 0.158649 | 0 | 0 | 4,198 |
23,622,923 | 2014-05-13T04:40:00.000 | 4 | 1 | 0 | 0 | python,c++,unit-testing | 23,623,031 | 5 | false | 0 | 1 | I would say it depends on what you're actually trying to test. For true unit testing, it is, I think, best to test in the same language, or at least a binary-compatible language (e.g. testing Java with Groovy -- I use Spock in this case, which is Groovy based, to unit-test my Java code, since I can intermingle the Java with the Groovy), but if you are testing results, then I think it's fair to switch languages.
For example, I have tested the expected results when given a specific set of data while running a Perl application via nose in Python. This works because I'm not unit testing the Perl code, per se, but the outcomes of that Perl code.
In that case, to unit test actual Perl functions that are part of the application, I would use a Perl-based test framework such as Test::More. | 5 | 28 | 0 | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python.
I’m planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library into a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | Is it acceptable practice to unit-test a program in a different language? | 0.158649 | 0 | 0 | 4,198 |
23,622,923 | 2014-05-13T04:40:00.000 | 9 | 1 | 0 | 0 | python,c++,unit-testing | 23,623,093 | 5 | false | 0 | 1 | A few things to keep in mind:
If you are writing tests as you code, then, by all means, use whatever language works best to give you rapid feedback. This enables fast test-code cycles (and is fun as well). BUT.
Always have well-written tests in the language of the consumer. How is your client/consumer going to call your functions? What language will they be using? Using the same language minimizes integration issues later on in the life-cycle. | 5 | 28 | 0 | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python.
I’m planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library into a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | Is it acceptable practice to unit-test a program in a different language? | 1 | 0 | 0 | 4,198 |
23,622,923 | 2014-05-13T04:40:00.000 | 28 | 1 | 0 | 0 | python,c++,unit-testing | 23,623,088 | 5 | true | 0 | 1 | I'd say that it's best to test the API that your users will be exposed to. Other tests are good to have as well, but that's the most important aspect.
If your users are going to write C/C++ code linking to your library, then it would be good to have tests making use of your library the same way.
If you are going to ship a Python wrapper (why not?) then you should have Python tests.
Of course, there is a convenience aspect to this, as well. It may be easier to write tests in Python, and you might have time constraints that make it more appealing, etc.
I guess what I'm saying is: There's nothing inherently wrong with tests being in a different language from the code under test (that's totally normal for testing a REST API, for instance), but make sure you have tests for the public-facing API at a minimum.
Aside, on terminology:
I don't think the types of tests you are describing are "unit tests" in the usual sense of the term. Probably "functional test" would be more accurate.
A unit test typically tests a very small component - such as a function call - that might be one piece of larger functionality. Unit tests like these are often "white box" tests, so you can see the inner workings of your code.
Testing something from a user's point of view (such as your professor's command-line tests) gives you "black box" tests, and these examples sit at a more functional level rather than the "unit" level.
I'm sure plenty of people may disagree with that, though - it's not a rigidly-defined set of terms. | 5 | 28 | 0 | I have a static library I created from C++, and would like to test this using a Driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python.
I’m planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library into a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | Is it acceptable practice to unit-test a program in a different language? | 1.2 | 0 | 0 | 4,198 |
23,623,717 | 2014-05-13T05:54:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,ubuntu,graphviz | 34,070,637 | 2 | false | 0 | 0 | This is a bug in latest ubuntu xdot package, please use xdot in pip repository:
sudo apt-get remove xdot
sudo pip install xdot | 1 | 0 | 1 | Lately I have observed that xdot utility which is implemented in python to view dot graphs is giving me following error when I am trying to open any dot file.
File "/usr/bin/xdot", line 4, in <module>
    xdot.main()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1947, in main
    win.open_file(args[0])
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1881, in open_file
    self.set_dotcode(fp.read(), filename)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1863, in set_dotcode
    if self.widget.set_dotcode(dotcode, filename):
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1477, in set_dotcode
    self.set_xdotcode(xdotcode)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1497, in set_xdotcode
    self.graph = parser.parse()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1167, in parse
    DotParser.parse(self)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 977, in parse
    self.parse_graph()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 986, in parse_graph
    self.parse_stmt()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1032, in parse_stmt
    self.handle_node(id, attrs)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1142, in handle_node
    shapes.extend(parser.parse())
File "/usr/lib/python2.7/dist-packages/xdot.py", line 612, in parse
    w = s.read_number()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 494, in read_number
    return int(self.read_code())
ValueError: invalid literal for int() with base 10: '206.05'
I have observed a few things:
The same utility works fine for me on previous Ubuntu versions (12.04, 13.04). The problem occurs when it is run on Ubuntu 14.04. I am not sure if it is an Ubuntu problem.
As per the trace log above, the int() function has encountered a float value, which is causing the exception at the end of the log. But the contents of my dot files do not contain any float values, so how come the trace shows ValueError: invalid literal for int() with base 10: '206.05'?
Any clue will be helpful. | Graphviz xdot utility fails to parse graphs | 0 | 0 | 0 | 1,384 |
23,623,763 | 2014-05-13T05:58:00.000 | 0 | 0 | 1 | 1 | python,crash,installation | 23,644,079 | 1 | false | 0 | 0 | It turns out uninstalling Blender, uninstalling Python, reinstalling Python and then reinstalling Blender(optional) did the trick. Blender 2.70 has some Python 3.4 libraries bundled with it on installation, but I'm guessing for some reason, having an older version of Python already installed caused conflicts/dependency issues/wh. I'd love to know the details on why, but for now I'm just glad I can run python scripts again. | 1 | 0 | 0 | This issue doesn't pertain to any code exactly. I think my installation (python 3.3.5) became corrupted somehow. I've tried uninstalling and reinstalling, as well as repairing, but nothing has worked. It's been a while since I last ran any python code or did anything involving python, really, so I can't say I accidentally messed up my own install. The only thing I can think of which might be an issue is installation and updates of Blender to 2.7. | pythonw.exe stopped working, can't run IDLE or any .py files | 0 | 0 | 0 | 676 |
23,627,989 | 2014-05-13T09:51:00.000 | 1 | 0 | 0 | 0 | java,python,c++,xml,sax | 23,628,074 | 1 | false | 0 | 0 | Not really sure if this answers your question, but I would take a different approach. 10GB is not THAT much data, so you could implement a two-phase parsing.
Phase 1 would be to split the file into smaller chunks based on some tag, so you end up with more, smaller files. For example, if your first file is A.xml, you split it into A_0.xml, A_1.xml, etc.
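A rough sketch of such a splitter using ElementTree's iterparse, so the 10GB file never has to fit in memory (the <record> tag and chunk size are assumptions about your data):

import xml.etree.cElementTree as ET

def write_chunk(base, n, pieces):
    with open('%s_%d.xml' % (base, n), 'wb') as f:
        f.write(b'<chunk>')
        for piece in pieces:
            f.write(piece)
        f.write(b'</chunk>')

def split(path, tag='record', per_chunk=100000):
    base = path.rsplit('.', 1)[0]
    pieces, n = [], 0
    for event, elem in ET.iterparse(path):
        if elem.tag == tag:
            pieces.append(ET.tostring(elem))
            elem.clear()                      # keep memory use flat
            if len(pieces) == per_chunk:
                write_chunk(base, n, pieces)
                pieces, n = [], n + 1
    if pieces:
        write_chunk(base, n, pieces)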
Phase 2 would do the real heavy lifting on each chunk, so you invoke it on A_0.xml, then after that on A_1.xml, etc. You could then restart on a chunk after your code has exited. | 1 | 0 | 0 | I'm working on a project that needs to parse a very big XML file (about 10GB). Because processing time is really long (about days), it's possible that my code exits in the middle of the process, so I want to save my code's status once in a while and then be able to restart it from the last save point.
Is there a way to start (restart) a SAX parser not from the beginning of an XML file?
P.S: I'm programming using Python, but solutions for Java and C++ are also acceptable. | restart SAX parser from the middle of the document | 0.197375 | 0 | 1 | 162 |
23,628,936 | 2014-05-14T06:49:00.000 | 0 | 0 | 0 | 0 | python,django | 23,638,267 | 1 | false | 1 | 0 | If you catch that raised exception at the place in your view where you triggered the signal, you can then respond with whatever you want in the view. | 1 | 0 | 0 | I'm trying to find a way to interrupt a save and return an HttpResponse from it. So far I've only managed to do this by raising an Exception, but I want to return a proper HttpResponse.
Any ideas? Also looking for any other ways of stopping a save. | Return HttpResponse from pre_save() | 0 | 0 | 0 | 78 |
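A hedged sketch of the catch-in-the-view pattern from the answer above (the model, field and view names are all made up; this is not a complete Django app):

from django.core.exceptions import ValidationError
from django.db.models.signals import pre_save
from django.http import HttpResponse

def block_save(sender, instance, **kwargs):
    if instance.is_locked:                      # hypothetical field
        raise ValidationError('record is locked')

pre_save.connect(block_save, sender=MyModel)    # MyModel is a hypothetical model

def update(request, pk):
    obj = MyModel.objects.get(pk=pk)
    obj.name = request.POST.get('name', '')
    try:
        obj.save()                              # fires pre_save, which may raise
    except ValidationError as exc:
        return HttpResponse(str(exc), status=409)
    return HttpResponse('saved')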
23,634,759 | 2014-05-13T14:56:00.000 | 8 | 0 | 0 | 0 | python,nlp,nltk | 23,634,933 | 3 | false | 0 | 0 | No, there is not. What is more important, it is quite a complex problem, which can be a topic of research, and not something that a "simple built-in function" could solve. Such an operation requires semantic analysis of the sentence. Think, for example, about "I think that I could run faster": which of the 3 verbs should be negated? We know that it is "think", but for the algorithm they are all just the same. Even detecting whether you should use "do" or "does" is not so easy. Consider "Mary and Jane walked down the road" and "Jane walked down the road"; without a parse tree you won't be able to distinguish the singular/plural problem. To sum up, there is no, and cannot be, any simple solution. You can design any kind of heuristic you want (one such is the proposed POS-based negation) and if it fails, start research in this area. | 1 | 5 | 1 | I am new to NLTK. I would like to create the negative of a sentence (which will usually be in the present tense). For example, is there a function to allow me to convert:
'I run' to 'I do not run'
or
'She runs' to 'She does not run'.
I suppose I could use POS to detect the verb and its preceding pronoun, but I just wondered if there was a simpler built-in function | How to create the negative of a sentence in nltk | 1 | 0 | 0 | 2,143 |
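Building on the POS idea above, a deliberately naive sketch (a fragile heuristic for illustration only; assumes the standard NLTK tokenizer and tagger models are installed):

import nltk
from nltk.stem import WordNetLemmatizer

def naive_negate(sentence):
    # Fragile heuristic: negate only the first present-tense verb found.
    lemmatize = WordNetLemmatizer().lemmatize
    out, done = [], False
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        if not done and tag == 'VBZ':                     # "runs"
            out += ['does', 'not', lemmatize(word, 'v')]  # -> does not run
            done = True
        elif not done and tag == 'VBP':                   # "run"
            out += ['do', 'not', word]                    # -> do not run
            done = True
        else:
            out.append(word)
    return ' '.join(out)

print(naive_negate('She runs'))   # She does not run
print(naive_negate('I run'))      # I do not run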
23,636,068 | 2014-05-13T15:56:00.000 | 0 | 0 | 0 | 0 | python,mysql,sql,mysql-python | 23,636,177 | 1 | false | 0 | 0 | Step 1 - Make your initial insert into a staging table.
Step 2 - deal with duplicate records and all other issues.
Step 3 - write to your real tables from the staging table. | 1 | 1 | 0 | I have a question in MySQL and Python MySQLdb library:
Suppose I'd like to insert a bulk of records into the DB. When I'm inserting, there may be duplicates among the records in the bulk, or a record in the bulk may be a duplicate of a record in the table. In case of duplicates, I'd like to ignore the duplication and just insert the rest of the bulk. In case of a duplication within the bulk, I'd like only one of the records (any of them) to be inserted, and the rest of the bulk to also be inserted.
How can I write this in MySQL syntax? And is it built into the MySQLdb library in Python? | Bulk insert into MySQL on duplicate | 0 | 1 | 0 | 289 |
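A hedged sketch of the staging-table flow from the answer above, using MySQLdb (table and column names are made up; assumes a UNIQUE key on column k of the target table):

import MySQLdb

bulk_rows = [(1, 'a'), (1, 'b'), (2, 'c')]   # duplicates on purpose
conn = MySQLdb.connect(host='localhost', user='u', passwd='p', db='d')
cur = conn.cursor()
cur.execute("CREATE TEMPORARY TABLE staging LIKE target")
cur.executemany("INSERT INTO staging (k, v) VALUES (%s, %s)", bulk_rows)
# GROUP BY collapses duplicates *within* the bulk to one row each;
# IGNORE drops rows that collide with the UNIQUE key already in target.
cur.execute("INSERT IGNORE INTO target (k, v) "
            "SELECT k, MIN(v) FROM staging GROUP BY k")
conn.commit()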
23,639,085 | 2014-05-13T18:39:00.000 | -2 | 0 | 0 | 0 | python,django,django-manage.py,manage.py | 38,094,213 | 15 | false | 1 | 0 | I was struggling with the same problem and found one solution. I guess it can help you.
When you run python manage.py runserver, it will take 127.0.0.1 as the default IP address and 8000 as the default port number, which can be configured in your Python environment.
In your Python environment, go to <your python env>\Lib\site-packages\django\core\management\commands\runserver.py and set:
1. default_port = '<your_port>'
2. find this under def handle and set
if not options.get('addrport'):
    self.addr = '0.0.0.0'
    self.port = self.default_port
Now if you run "python manage.py runserver", it will run by default on "0.0.0.0:<your_port>".
Enjoy coding ..... | 3 | 193 | 0 | I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini. | django change default runserver port | -0.02666 | 0 | 0 | 268,838 |
23,639,085 | 2014-05-13T18:39:00.000 | -2 | 0 | 0 | 0 | python,django,django-manage.py,manage.py | 26,317,016 | 15 | false | 1 | 0 | This is an old post but for those who are interested:
If you want to change the default port number so that when you run the "runserver" command you start with your preferred port, do this:
Find your python installation. (you can have multiple pythons installed and you can have your virtual environment version as well so make sure you find the right one)
Inside the python folder locate the site-packages folder. Inside that you will find your django installation
Open the django folder-> core -> management -> commands
Inside the commands folder open up the runserver.py script with a text editor
Find the DEFAULT_PORT field. It is equal to 8000 by default. Change it to whatever you like.
DEFAULT_PORT = "8080"
Restart your server: python manage.py runserver and see that it uses your set port number
It works with python 2.7 but it should work with newer versions of python as well. Good luck | 3 | 193 | 0 | I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini. | django change default runserver port | -0.02666 | 0 | 0 | 268,838 |
23,639,085 | 2014-05-13T18:39:00.000 | 2 | 0 | 0 | 0 | python,django,django-manage.py,manage.py | 36,612,064 | 15 | false | 1 | 0 | I'm very late to the party here, but if you use an IDE like PyCharm, there's an option in 'Edit Configurations' under the 'Run' menu (Run > Edit Configurations) where you can specify a default port. This of course is relevant only if you are debugging/testing through PyCharm. | 3 | 193 | 0 | I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini. | django change default runserver port | 0.02666 | 0 | 0 | 268,838 |
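For reference, a hedged sketch of the sys.argv idea from the question itself: a few lines near the top of manage.py, before execute_from_command_line(sys.argv), that read config.ini (the section and option names are assumptions) and append the configured address:port when none was given:

import sys
from ConfigParser import ConfigParser   # 'configparser' on Python 3

if len(sys.argv) == 2 and sys.argv[1] == 'runserver':
    cfg = ConfigParser()
    if cfg.read('config.ini'):
        # assumes config.ini contains:  [runserver]  addrport = 127.0.0.1:8080
        sys.argv.append(cfg.get('runserver', 'addrport'))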
23,639,412 | 2014-05-13T18:57:00.000 | 1 | 0 | 0 | 0 | python,django | 23,640,050 | 2 | false | 1 | 0 | This is not strictly an answer to your question, but in general the best practice is to store time in UTC and convert it to whatever timezone you want at display time. This way there is less ambiguity about the time. | 1 | 0 | 0 | Please help!!!
In my views I have the date in my local timezone, but in the database Django stores the date in UTC...
What must I do to store the date in the database in my local timezone? (My local zone is Europe/Kiev.)
Please help))) | Django was push in base wrong date | 0.099668 | 0 | 0 | 59 |
23,640,182 | 2014-05-13T19:41:00.000 | 7 | 0 | 1 | 0 | python,pip | 23,642,321 | 3 | true | 0 | 0 | There are a few options available:
Try simple ignorance
Simply do not care about these packages being present in pip output.
Delete these lines from output
Filter the output through some grep filter and have the result clean.
Use virtualenv and do not install unwanted packages into it
Note, that pip freeze in virtualenv does not report globally installed packages (however it typically reports argparse and wsgiref for me - nothing seems to be really perfect.)
Write your own pipwarm command
which would call pip freeze and modify the output as needed (removing the unneeded lines).
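For instance, a tiny version of such a script (the ignore list is hypothetical; note it does not hide the dependencies of the ignored packages):

import subprocess

IGNORE = set(['pylint', 'pep8'])   # hypothetical ignore list

for line in subprocess.check_output(['pip', 'freeze']).decode().splitlines():
    if line.split('==')[0].lower() not in IGNORE:
        print(line)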
I am aware I probably did not give you the answer you asked for, but maybe the virtualenv is close to what you need, as it allows the global presence of those packages and still allows not having these packages in the output of pip freeze.
Install pep8 and pylint as scripts but keep them away from pip visibility
In case you just care about having pylint and pep8 available as command-line tools, but do not require them to be visible to pip freeze, there are multiple options:
Install pep8 and pylint into virtualenv and copy the scripts to /usr/bin
If you install pylint and pep8 into a separate virtualenv, find the location of the executables by which pep8 and which pylint and copy these files somewhere where they will be visible, e.g. to /usr/bin. The scripts you copy or move from the virtualenv have a hardcoded path to the required python packages in the virtualenv and will safely run even when copied (just the scripts, do not touch the rest of the related virtualenv). Note that there is no need to activate the given virtualenv to make that work.
Install pep8 and pylint system wide but keep developing in virtualenv
System wide installed command line tools are typically installed into location, which makes them globally visible. At the same time, system wide installed packages are not seen by pip freeze when called in virtualenv. | 2 | 12 | 0 | Title basically says it all. How can I tell pip freeze to ignore certain packages, like pylint and pep8, and their dependencies? | Ignore certain packages and their dependencies with pip freeze | 1.2 | 0 | 0 | 9,938 |
23,640,182 | 2014-05-13T19:41:00.000 | 10 | 0 | 1 | 0 | python,pip | 43,137,206 | 3 | false | 0 | 0 | My approach is the following:
In my .bashrc I create the following alias: alias pipfreezeignore='pip freeze | grep -vFxf ignore_requirements.txt'
Create the virtual environment, and install first all the packages that I do not want to keep track of (i.e. pip install jedi flake8 importmagic autopep8 yapf).
Immediately save them in an ignore_requirements.txt file, as in pip freeze > ignore_requirements.txt.
Install the rest of packages (e.g. pip install django)
Use pipfreezeignore > requirements.txt (in the same folder where ignore_requirements.txt is) so I just get in requirements.txt the installed packages that are not in ignore_requirements.txt
If you always want to ignore the same packages (through all your virtual environments), you might redefine the alias as in alias pipfreezeignore='pip freeze | grep -vFxf /abs/path/to/ignore_requirements.txt'
Just make sure that no packages from ignore_requirements.txt are actually necessary for your project. | 2 | 12 | 0 | Title basically says it all. How can I tell pip freeze to ignore certain packages, like pylint and pep8, and their dependencies? | Ignore certain packages and their dependencies with pip freeze | 1 | 0 | 0 | 9,938 |
23,643,027 | 2014-05-13T23:04:00.000 | 2 | 0 | 1 | 0 | python,environment-variables,pythonpath | 23,643,251 | 1 | false | 0 | 0 | Definitely do not add your project to site-packages; this is going to spoil your system Python installation and will fire back the moment some other app comes along or you need to install something.
There are at least two popular options for installing python apps in an isolated manner:
Using virtualenv
See virtualenv project. It allows
creation of a new isolated python environment - the python for this environment is different from the system one and has its own PYTHONPATH setup; this allows keeping all installed packages private to it.
activation and deactivation of given virtualenv from command line for command line usage. After activate, you can run pip install etc. and it will affect only given virtualenv install.
calling any Python script by starting it with the virtualenv's Python copy - this will use the related virtualenv (note that there is no need to call any activate)
using zc.buildout
This package provides the command buildout. With it, you can use a special configuration file, which allows the creation of a local python environment with all packages and scripts.
Conclusions
virtualenv seems more popular today and I find it much easier to learn and use
zc.buildout might be also working for you, but be prepared for a bit longer learning time
installing into system Python directories should always be reserved for very special cases (pip, easy_install); better to avoid it.
installing into private directories and manipulating PYTHONPATH is also an option, but then you would be repeating what virtualenv already provides | 1 | 1 | 0 | I have a Python project which contains three components: main executable scripts, modules which those scripts rely on, and data (sqlite3 databases, flat files, etc.) which those scripts manipulate. The top level has an __init__.py file so that other programs can also borrow from the modules if needed.
The question is, is it more "Pythonic" or "correct" to move my project into the default site-packages directory, or to modify PYTHONPATH to include one directory above my project (so that the project can be imported from)? On the one hand, what I've described is not strictly a "package", but a "project" with data that can be treated as a package. So I'm leaning in the direction of modifying PYTHONPATH (after all, PYTHONPATH must exist for a reason, right?) | Should I add my Python project to the site-packages directory, or append my project to PYTHONPATH? | 0.379949 | 0 | 0 | 392 |
23,644,545 | 2014-05-14T02:11:00.000 | 0 | 0 | 0 | 1 | python,hadoop | 23,644,971 | 2 | false | 0 | 0 | Did not try, but taking the example with KeyFieldBasedPartitioner and simply replacing:
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
with
-partitioner org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
Should work. | 1 | 0 | 1 | I'm using python with Hadoop streaming to do a project, and I need the similar functionality provided by the TotalOrderPartitioner and InputSampler in Hadoop, that is, I need to sample the data first and create a partition file, then use the partition file to decide which K-V pair will go to which reducer in the mapper. I need to do it in Hadoop 1.0.4.
I could only find some Hadoop streaming examples with KeyFieldBasedPartitioner and customized partitioners, which use the -partitioner option in the command to tell Hadoop to use these partitioners. The examples I found using TotalOrderPartitioner and InputSampler are all in Java, and they need to use the writePartitionFile() of InputSampler and the DistributedCache class to do the job. So I am wondering if it is possible to use TotalOrderPartitioner with hadoop streaming? If it is possible, how can I organize my code to use it? If it is not, is it practical to implement the total partitioner in python first and then use it? | Using TotalOrderPartitioner in Hadoop streaming | 0 | 0 | 0 | 909 |
23,645,572 | 2014-05-14T04:19:00.000 | 4 | 0 | 0 | 1 | python,django,google-app-engine | 23,646,875 | 1 | true | 1 | 0 | In simple words, these are two APIs for the same datastore: db is the older version and ndb the newer one. The difference is in the models; in the datastore itself these are the same thing. NDB provides advantages like handling caching (memcache) itself, and ndb is faster than db, so you should definitely go with ndb. To use the ndb datastore, just use ndb.Model while defining your models | 1 | 2 | 0 | I have been going through the Google App Engine documentation (Python) now and found two different types of storage.
NDB Datastore
DB Datastore
Both quota limits (free) seem to be the same, and their database design too. However, NDB automatically caches data in Memcache!
I am actually wondering when to use which storage? What are the general practices regarding this?
Can I completely rely on NDB and ignore DB? How should it be done?
I have been using Django for a while and read that in Django-nonrel the JOIN operations can somehow be done in NDB, while the rest of the storage uses DB! Why is that? Both storages are schemaless and pretty much use the same design. How is it that someone can tweak JOIN in NDB and not in DB? | App Engine: Difference between NDB and Datastore | 1.2 | 1 | 0 | 568 |
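For example, a minimal ndb model along the lines of the answer above (the kind and its properties are made up):

from google.appengine.ext import ndb

class Book(ndb.Model):                 # hypothetical kind
    title = ndb.StringProperty()
    pages = ndb.IntegerProperty()

key = Book(title='Example', pages=123).put()   # datastore write
book = key.get()                               # read; ndb may serve it from memcache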
23,646,485 | 2014-05-14T05:42:00.000 | 1 | 1 | 0 | 0 | python,pyramid,mako,waitress | 23,654,584 | 2 | false | 1 | 1 | Oh my,
I found the thing... I had <%block cached="True" cache_key="${self.filename}+body"> and the file inclusion was inside of that block.
Cheerious:) | 1 | 0 | 0 | I've a strange issue. pserve --reload has stopped reloading the templates. It is reloading if some .py-file is changing, but won't notice .mak-file changes anymore.
I tried to fix it by:
Checking the file permissions
Creating a new virtualenv, which didn't help.
Installing a different version of mako, without any effect.
Checking that the Python from the virtualenv is being used
Playing with development.ini. It has the flag: pyramid.reload_templates = true
Any idea how to start debugging the system?
Versions:
Python 2.7
pyramid 1.5
pyramid_mako 1.02
mako 0.9.1
Yours
Heikki | Pyramid Mako pserver --reload not reloading in Mac | 0.099668 | 0 | 0 | 216 |
23,647,516 | 2014-05-14T06:49:00.000 | 1 | 0 | 0 | 1 | python-idle | 41,288,341 | 2 | false | 0 | 0 | I've found that if the file uses characters such as æ, ø or å, even commented out, Python 2.7 IDLE won't save the file. | 2 | 1 | 0 | Sometimes I can save a script by using Ctrl+S when using IDLE, but then other times it fails to save for no reason, as I haven't changed anything. Is this a bug of some kind? It's starting to irritate me.
Platform: Windows 7 Professional | Ctrl+s does not always save a file on IDLE | 0.099668 | 0 | 0 | 1,776 |
23,647,516 | 2014-05-14T06:49:00.000 | 2 | 0 | 0 | 1 | python-idle | 34,439,578 | 2 | false | 0 | 0 | It may be some character. I faced the same trouble; then I removed a character such as "Ç" and it worked fine. | 2 | 1 | 0 | Sometimes I can save a script by using Ctrl+S when using IDLE, but then other times it fails to save for no reason, as I haven't changed anything. Is this a bug of some kind? It's starting to irritate me.
Platform: Windows 7 Professional | Ctrl+s does not always save a file on IDLE | 0.197375 | 0 | 0 | 1,776 |
23,653,170 | 2014-05-14T11:18:00.000 | 0 | 0 | 1 | 0 | python,gunicorn | 25,699,363 | 1 | false | 1 | 0 | Assuming that by global variable you mean another process that keeps them in memory or on disk, yes, I think so. I haven't checked the source code of Gunicorn, but a problem I had with an old piece of code was that several users retrieved the same key from a legacy MyISAM table, incremented it, and created a new entry using it, assuming it was unique, to create a new record. The result was that occasionally (under very heavy traffic) only one record was created (the newest one overwriting the older ones, all using the same incremented key). This problem was never observed after a hardware upgrade, when I reduced the gunicorn workers of the website to one, which was the reason to explore this probable cause in the first place.
Now usually, reducing the workers will degrade performance, and it is better to deal with these issues with transactions (if you are using an ACID RDBMS, unlike MyISAM). The same issue should be present with Redis and similar stores.
Also, this shouldn't be a problem with files and sockets, since, to my knowledge, the operating system will block other processes (even children) from accessing an open file. | 1 | 3 | 0 | Is it possible to have multiple workers running with Gunicorn and have them accessing some global variable in an ordered manner i.e. without running into problems with race conditions? | Variable access in gunicorn with multiple workers | 0 | 0 | 0 | 1,514 |
23,654,867 | 2014-05-14T12:31:00.000 | 0 | 0 | 1 | 0 | python,regex,string,python-3.x,lazy-evaluation | 23,654,986 | 2 | false | 0 | 0 | If the number of elements in that list is truly on the order of 10**8, then you are probably better off doing a linear search if you only want to do it once. Otherwise, you have to create this huge string, which is really very inefficient. The other thing I can think of, if you need to do this more than once, is inserting the collection into a hashtable and doing the search faster. | 1 | 2 | 0 | I'm looking for a way to run a regex over a (long) iterable of "characters" in Python. (Python doesn't actually have characters, so it's actually an iterable of one-length strings. But same difference.)
The re module only allows searching over strings (or buffers), as far as I can tell.
I could implement it myself, but that seems a little silly.
Alternatively, I could convert the iterable to a string and run the regex over the string, but that gets (hideously) inefficient. (A worst-case example: re.search(".a", "".join('a' for a in range(10**8))) peaks at over 900M of RAM (private working set) on my (x64) machine, and takes ~12 seconds, even though it only needs to look at the first two characters in the iterable.) | Is there any streaming regular-expression module for Python? | 0 | 0 | 0 | 127 |
23,656,062 | 2014-05-14T13:27:00.000 | 0 | 0 | 1 | 0 | python,powershell,cmd | 42,074,963 | 5 | false | 0 | 0 | The python file closes because the interpreter encounters an error in the script. To overcome this challenge, simply use the Python IDLE program to run your script instead of running it from the CLI.
Start IDLE, create a new file, then open your script and click Run Module from the Run menu.
Hope this is useful. | 1 | 0 | 0 | Tried running this code first through powershell and then through cmd and even simply clicking it. I am typing "start python myfile.py" to run it. In each case, the file blinks on screen and immediately closes.
The only way I can view it is to drag the file directly into cmd, in which case it looks perfect.
I am doing the first exercise from "Learn Python the Hard Way". I'm doing everything to the letter as in the book, with one exception. See below*. I've read and re-read it.
I don't think this matters, but the book tells me to simply write, "python" "notepad" etc. to run those (and other) programs from my terminal. And I figured out I have to write, "start python", "start notepad", etc. for it to actually run.
As per suggestions to this problem on other similar questions here, I have attempted to end my code with 1.) main() 2.) raw_input() and
3.) if __name__ == '__main__':
main()
No success. Here is my file code which I made in Notepad++:
print "Hello World!"
print "Hello again"
print "I like typing this."
print "This is fun."
print 'Yay! Printing.'
print "I'd much rather you 'not'."
print 'I "said" do not touch this.'
So pretty much, the program executes and closes itself. How might I keep it open in python on its own?
It has been hours of trying to figure this out and this seriously seems like it should be so simple. Could someone please, pretty please, pleeeease point me in the right direction? | Python file opens and immediately closes | 0 | 0 | 0 | 13,378 |
23,657,718 | 2014-05-14T14:35:00.000 | 1 | 0 | 0 | 1 | python,django | 23,654,584 | 2 | false | 1 | 1 | Start simple and use whatever seems easiest for you. Consider optimizing later on, and only if needed.
Use of async libraries becomes helpful once you have thousands of requests a second. Much sooner, you are likely to have performance problems related to the database (if you use one), which are not resolved by async magic. | 1 | 1 | 0 | I have a REST API and now I want to create a web site that will use this API as its only and primary data source. The system is distributed: the REST API is on one group of machines and the site will be on the other(s).
I'm expecting to have quite a lot of load, so I'd like to make requests as efficient as possible.
Do I need some async HTTP requests library, or will any HTTP client library work?
The API is done using Flask; the web site will also be built using Flask, with Jinja as the template engine. | Consume REST API from Python: do I need an async library? | 0.066568 | 0 | 1 | 3,037 |
23,659,248 | 2014-05-14T15:40:00.000 | 0 | 0 | 0 | 1 | python,user-interface,cross-platform | 23,660,787 | 2 | false | 0 | 1 | Firstly, you should always use .pyw for GUIs.
Secondly, you could convert it to .exe if you want people without python to be able to use your program. The process is simple. The hardest part is downloading one of these:
for Python 2.x: py2exe
for Python 3.x: cx_Freeze
You can simply google instructions on how to use them if you decide to go down that path.
Also, if you're using messageboxes in your GUI, it won't work. You will have to create windows/toplevels instead. | 2 | 1 | 0 | I am developing a simple standalone, graphical application in python. My development has been done on linux but I would like to distribute the application cross-platform.
I have a launcher script which checks a bunch of environment variables and then sets various configuration options, and then calls the application with what amounts to python main.py (specifically os.system('python main.py %s'% (arg1, arg2...)) )
On OS X (without X11), the launcher script crashed with an error like Could not run application, need access to screen. A very quick google search later, the script was working locally by replacing python main.py with pythonw main.py.
My question is, what is the best way to write the launcher script so that it can do the right thing across platforms and not crash? Note that this question is not asking how to determine what platform I am on. The solution "check to see if I am on OS X, and if so invoke pythonw instead" is what I have done for now, but it seems like a somewhat hacky fix because it depends on understanding the details of the windowing system (which could easily break sometime in the future) and I wonder if there is a cleaner way.
This question does not yet have a satisfactory answer. | python or pythonw in creating a cross-platform standalone GUI app | 0 | 0 | 0 | 933 |
23,659,248 | 2014-05-14T15:40:00.000 | 0 | 0 | 0 | 1 | python,user-interface,cross-platform | 23,659,489 | 2 | false | 0 | 1 | If you save the file as main.pyw, it should run the script without opening up a new cmd/terminal.
Then you can run it as python main.pyw | 2 | 1 | 0 | I am developing a simple standalone, graphical application in python. My development has been done on linux but I would like to distribute the application cross-platform.
I have a launcher script which checks a bunch of environment variables and then sets various configuration options, and then calls the application with what amounts to python main.py (specifically os.system('python main.py %s'% (arg1, arg2...)) )
On OS X (without X11), the launcher script crashed with an error like Could not run application, need access to screen. A very quick google search later, the script was working locally by replacing python main.py with pythonw main.py.
My question is, what is the best way to write the launcher script so that it can do the right thing across platforms and not crash? Note that this question is not asking how to determine what platform I am on. The solution "check to see if I am on OS X, and if so invoke pythonw instead" is what I have done for now, but it seems like a somewhat hacky fix because it depends on understanding the details of the windowing system (which could easily break sometime in the future) and I wonder if there is a cleaner way.
This question does not yet have a satisfactory answer. | python or pythonw in creating a cross-platform standalone GUI app | 0 | 0 | 0 | 933 |
23,659,698 | 2014-05-14T16:01:00.000 | 1 | 0 | 0 | 0 | python,scipy,mathematical-optimization,minimize | 23,659,957 | 1 | true | 0 | 0 | You have 2 options I can think of:
opt for constrained optimization
modify your objective function to diverge whenever your numerical simulation does not converge. Basically this means returning a large value, large compared to a 'normal' value, which depends on your problem at hand. minimize will then try to optimize going in another direction
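A minimal sketch of the second option (simulate() below is a dummy stand-in for the real numerical simulation):

import numpy as np
from scipy.optimize import minimize

def simulate(x):                      # stand-in for the real simulation
    if x[0] < 0:                      # pretend it diverges for x[0] < 0
        return float('inf')
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def objective(x):
    value = simulate(x)
    if not np.isfinite(value):        # diverged: return a large finite penalty
        return 1e12                   # instead of inf/NaN
    return value

result = minimize(objective, [0.1, 0.2], method='Nelder-Mead')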
I am however a bit surprised that minimize does not understand inf as a large value, and does not try to look for a solution in another direction. Could it be that it returns with 0 iterations only when your objective function returns nan? You could try debugging the issue by printing the value just before the return statement in your objective function. | 1 | 3 | 1 | I'm using scipy.optimize.minimize for unrestricted optimization of an objective function which receives a couple of parameters and runs a complex numerical simulation based on these parameters. This simulation does not always converge in which case I make the objective function return inf, in some cases, in others NaN.
I thought that this hack would prevent the minimization from converging anywhere near a set of parameters that makes the simulation diverge. Instead, I encountered a case where the simulation won't even converge for the starting set of parameters but instead of failing, the optimization terminates "successfully" with 0 iterations. It doesn't seem to care about the objective function returning inf.
Is there a way to tell scipy.optimize.minimize to fail, e.g. by raising some sort of exception. While in this case it's obvious that the optimization didn't terminate successfully - because of 0 iterations and the fact that I know the optimal result - at some point I want to run problems that I don't know the solution for and I need to rely on minimize to tell me if shit hit the fan. If returning lots of nans and infs doesn't "break" the algorithm I guess I'll have to do it by brute force.
Here is an example of what the almost-iteration looks like.
The function - a function of two variables - is called 4 times in all:
1) at the starting point -> simulation diverges, f(x) = inf
2) at a point 1e-5 to the right (gradient approximation) -> simulation diverges, f(x) = inf
3) at a point 1e-5 higher (grad. appr.) -> simulation converges, f(x) = some finite value
4) once more at the starting point -> simulation diverges, f(x) = inf | Tell scipy.optimize.minimize to fail | 1.2 | 0 | 0 | 2,291 |
23,662,048 | 2014-05-14T18:11:00.000 | 1 | 0 | 0 | 0 | python,windows-8 | 23,662,251 | 3 | true | 0 | 0 | Assuming that you used one of the standard installers for Python on Windows, .py is already registered and it should just work. Copy it to your desktop and double-click it. A console running the program should appear and run as normal. It's still a console app; if the customer wants a GUI app, that's a different story.
btw, you shouldn't even have to type c:\path\to\python\dir\python.exe ftpgrab.py, just a plain ftpgrab.py or ftpgrab should do. | 2 | 0 | 0 | I have a program called ftpgrab.py. At the command prompt to run it I type:
c:\path\to\python\dir\python.exe ftpgrab.py
Is there a way on Windows 8 to create an icon which I can double-click to run this? | I have a Python program that the customer would like to access as a double clickable desktop icon on Windows 8. How do I do this? 8 | 1.2 | 0 | 0 | 74 |
23,662,048 | 2014-05-14T18:11:00.000 | 2 | 0 | 0 | 0 | python,windows-8 | 23,662,083 | 3 | false | 0 | 0 | Create a file named foo.bat;
copy your command into that file and save it;
double-click foo.bat... | 2 | 0 | 0 | I have a program called ftpgrab.py. At the command prompt, to run it I type:
c:\path\to\python\dir\python.exe ftpgrab.py
Is there a way on Windows 8 to create an icon which I can double-click to run this? | I have a Python program that the customer would like to access as a double clickable desktop icon on Windows 8. How do I do this? 8 | 0.132549 | 0 | 0 | 74 |
23,666,321 | 2014-05-14T22:36:00.000 | 2 | 0 | 0 | 0 | python,django-1.7,django-migrations | 23,666,529 | 1 | true | 1 | 0 | It's done via MIGRATION_MODULES setting.
In my case:
MIGRATION_MODULES = dict([(app, 'migrations.' + app) for app in INSTALLED_APPS]) | 1 | 3 | 0 | In Django 1.7, using the provided makemigrations command (not the one from South), is there a way to change where the generated migration files are stored?
I'm keeping these files under version control and for apps imported from Django's contrib, they get generated right inside the app directory, which resides outside my project's root path.
For example, the auth app gets the files generated in this location in my case:
/home/dev/.envs/myproj/lib/python2.7/site-packages/django/contrib/auth/migrations/0002_group.py
Thanks | Change base path of the generated migration files | 1.2 | 0 | 0 | 798 |
23,666,333 | 2014-05-14T22:38:00.000 | 0 | 0 | 0 | 0 | python,camera,pygame,2d-games | 23,671,391 | 2 | false | 0 | 1 | Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. This way I have to blit only one surface per frame, and it is already prepared.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
Feel free to ask for more details, if you deem it useful :) | 1 | 0 | 0 | So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. So now I want to create a camera, so that the map object can be larger than the display. To do this I've devised that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and even read a few other similar posts, but I also need to know: will I have to restructure all the game code to work this in? My first attempt was to make all objects move on the screen when the player moves, but I feel that there is a better way to do this, as it screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks | 2D Game Engine - Implementing a Camera | 0 | 0 | 0 | 2,049 |
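A minimal sketch of the offset-based camera the answer's approach leads to: entities keep world coordinates (so collision detection is untouched), and a camera offset is subtracted only at draw time. level_surface, player_image and the map dimensions are assumptions.

import pygame

SCREEN_W, SCREEN_H = 800, 640
LEVEL_W, LEVEL_H = 3200, 1280   # hypothetical map size in pixels

def camera_offset(player_rect):
    # center on the player, clamped so the view never leaves the level
    x = max(0, min(player_rect.centerx - SCREEN_W // 2, LEVEL_W - SCREEN_W))
    y = max(0, min(player_rect.centery - SCREEN_H // 2, LEVEL_H - SCREEN_H))
    return x, y

def draw(screen, level_surface, player_image, player_rect):
    ox, oy = camera_offset(player_rect)
    screen.blit(level_surface, (-ox, -oy))                 # one pre-rendered blit
    screen.blit(player_image, (player_rect.x - ox, player_rect.y - oy))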
23,667,610 | 2014-05-15T01:04:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,python-unittest | 72,185,121 | 3 | false | 0 | 0 | As mentioned above, setUp() and tearDown() run before and after each test; however, setUpClass() and tearDownClass() run only once for the whole class. | 1 | 139 | 0 | What is the difference between setUp() and setUpClass() in the Python unittest framework? Why would setup be handled in one method over the other?
I want to understand what part of setup is done in the setUp() and setUpClass() functions, as well as with tearDown() and tearDownClass(). | What is the difference between setUp() and setUpClass() in Python unittest? | 0 | 0 | 0 | 61,754 |
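A minimal sketch making the ordering concrete: setUpClass/tearDownClass must be classmethods and run once per class, while setUp/tearDown wrap every individual test.

import unittest

class Example(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once, before any test in this class (e.g. open a DB connection)
        cls.shared = []

    def setUp(self):
        # runs before *each* test (e.g. reset per-test state)
        self.shared.append('started')

    def test_one(self):
        self.assertTrue(self.shared)

    def test_two(self):
        self.assertTrue(self.shared)

    def tearDown(self):
        pass  # runs after each test

    @classmethod
    def tearDownClass(cls):
        # runs once, after all tests in the class
        del cls.shared

if __name__ == '__main__':
    unittest.main()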
23,667,700 | 2014-05-15T01:15:00.000 | 3 | 0 | 0 | 0 | python,c++,algorithm,boost | 23,668,100 | 1 | true | 0 | 0 | One rectangle can be divided into two rectangles by drawing either a horizontal or a vertical line. Divide one of those rectangles and the result is three rectangles. Continue until you have N rectangles. Some limitations to observe to improve the results:
Don't divide a rectangle with a horizontal line if the height is below some threshold
Don't divide a rectangle with a vertical line if the width is below some threshold
Divide by quarters, thirds, or halves, i.e. avoid 90/10 splits
Keep a list of rectangles sorted by area, and always divide the rectangle with the largest area | 1 | 0 | 1 | How do I split one big rectangle into N smaller rectangles so that the result looks random?
I need to generate a couple of divisions for different values of N.
Is there a library for this in Boost for C++, or one for Python? | How to split one big rectangle on N smaller rectangles to look random? | 1.2 | 0 | 0 | 1,261
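Since no library was named, a sketch in Python of the recursive split the answer describes; the thresholds and the split ratios are assumptions following its guidelines.

import random

MIN_W, MIN_H = 20, 20  # assumed minimum side lengths

def split_rectangles(rect, n):
    # rect is (x, y, w, h); always split the largest remaining rectangle
    rects = [rect]
    while len(rects) < n:
        rects.sort(key=lambda r: r[2] * r[3], reverse=True)
        x, y, w, h = rects.pop(0)
        ratio = random.choice([0.33, 0.5, 0.66])  # avoid 90/10 splits
        if w >= h and w >= 2 * MIN_W:
            cut = min(max(int(w * ratio), MIN_W), w - MIN_W)
            rects += [(x, y, cut, h), (x + cut, y, w - cut, h)]
        elif h >= 2 * MIN_H:
            cut = min(max(int(h * ratio), MIN_H), h - MIN_H)
            rects += [(x, y, w, cut), (x, y + cut, w, h - cut)]
        else:
            rects.append((x, y, w, h))  # too small to split further
            break
    return rects

print(split_rectangles((0, 0, 400, 300), 8))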
23,673,433 | 2014-05-15T08:50:00.000 | 1 | 0 | 0 | 0 | python,xlsxwriter | 23,674,407 | 1 | true | 0 | 0 | No, such functionality is not what xlsxwriter offers.
This package is able to write Excel files, but the HTML import you describe uses MS Excel's GUI functionality, and since MS Excel is not a requirement of xlsxwriter, do not expect it to be present.
On the other hand, you could use Python to do the conversion of HTML to spreadsheet data yourself. It is definitely not a drag-and-drop solution, but for repetitive tasks it can be much more productive in the end. | 1 | 0 | 0 | I just wanted to know if it is possible to insert the contents of a local tabulated HTML file into an Excel worksheet using xlsxwriter. Manually it works fine by just dragging and dropping the file into Excel and the formatting is preserved, but I can't find any information on inserting file contents into Excel using xlsxwriter. I can only find information related to inserting an image into Excel.
Many thanks for reading this,
MikG | Using Python's xlsxwriter to drop a tabulated html file into worksheet - is this possible? | 1.2 | 1 | 0 | 448 |
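A sketch of the do-it-yourself conversion the answer suggests: pandas.read_html (which needs lxml or html5lib installed) parses the tabulated HTML, and the frames are written through the xlsxwriter engine. The file names are placeholders.

import pandas as pd

tables = pd.read_html('report.html')   # one DataFrame per <table> element
with pd.ExcelWriter('report.xlsx', engine='xlsxwriter') as writer:
    for i, table in enumerate(tables):
        table.to_excel(writer, sheet_name='Table%d' % (i + 1), index=False)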
23,676,718 | 2014-05-15T11:19:00.000 | 1 | 0 | 0 | 0 | python,django,datetime,timezone | 23,676,984 | 2 | false | 0 | 0 | Yes, you should store everything in your db in UTC.
I don't know why you say you won't be able to cope with DST. On the contrary, any good timezone library - such as pytz - is quite capable of translating UTC to the correct time in any timezone, taking DST into account. | 1 | 0 | 0 | I have an app which will work in multiple timezones. I need the app to be able to say "Do this at 10 PM, 12 April, 2015, in the America/New York timezone and repeat every 30 days." (And similar).
If I get a datetime from the user in PST, should I be storing the datetime in the DB after converting it to UTC?
Pros: Easier to manage, every datetime in DB is in UTC.
Cons: Cannot take DST into account (I think). | Convert datetime to UTC before storing in DB? | 0.099668 | 1 | 0 | 158
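A sketch of the store-UTC approach with pytz, showing that DST is handled automatically when converting back for display; the example instant is arbitrary.

from datetime import datetime
import pytz

eastern = pytz.timezone('America/New_York')

# user input in local time -> store as UTC
local_dt = eastern.localize(datetime(2015, 4, 12, 22, 0))  # 10 PM local
utc_dt = local_dt.astimezone(pytz.utc)                     # what goes in the DB

# reading back: UTC -> local, with the correct DST offset applied
shown = utc_dt.astimezone(eastern)
print(utc_dt.isoformat(), '->', shown.isoformat())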
23,677,734 | 2014-05-15T12:06:00.000 | 8 | 0 | 0 | 0 | python,classification,scikit-learn,feature-selection | 23,682,058 | 1 | true | 0 | 0 | In general the p-value indicates how probable a given outcome or a more extreme outcome is under the null hypothesis. In your case of feature selection, the null hypothesis is something like this feature contains no information about the prediction target, where no information is to be interpreted in the sense of the scoring method: If your scoring method tests e.g. univariate linear interaction (f_classif, f_regression in sklearn.feature_selection are options for your scoring function), then the null hypothesis says that this linear interaction is not present.
TL;DR The p-value of a feature selection score indicates the probability that this score or a higher score would be obtained if this variable showed no interaction with the target.
Another general statement: scores are better if greater, p-values are better if smaller (and losses are better if smaller). | 1 | 5 | 1 | Recently, I have used sklearn (a Python machine learning library) to do a short-text classification task. I found that the SelectKBest class can choose the K best features. However, the first argument of SelectKBest is a score function, which is described as "taking two arrays X and y, and returning a pair of arrays (scores, pvalues)". I understand the scores, but what is the meaning of the pvalues? | What's the meaning of p-values which produced by feature selection (i.e. chi2 method)? | 1.2 | 0 | 0 | 6,714
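A minimal sketch of where the scores and p-values come from, using chi2 (matching the question's title) on a bundled dataset:

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

iris = load_iris()
selector = SelectKBest(chi2, k=2).fit(iris.data, iris.target)

# scores_ are better if greater, pvalues_ are better if smaller
for score, p in zip(selector.scores_, selector.pvalues_):
    print('score=%.2f  p-value=%.4f' % (score, p))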
23,678,292 | 2014-05-15T12:30:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,google-calendar-api,autostart | 23,686,241 | 1 | true | 0 | 0 | Figured this out now. It's tough, not knowing whether it's a Python, Linux or Google issue, but it was a Google one. I found that other people across the web have had issues with client_secrets.json as well, and the solution is to find where its location is stored in the Python code, and instead of just having the name of the file, include the path as well, like this.
CLIENT_SECRETS = '/home/blahblahblah/client_secrets.json'
Then it all works fine - calling it from another folder, and it starts on startup. :) | 1 | 0 | 0 | I have written a Python tkinter program which runs on my Raspberry Pi and does a number of things, including interfacing with my Google Calendar (read-only access). I can navigate to the directory it is in and run it there - it works fine.
I would like the program to start at boot-up, so I added it to the autostart file in /etc/xdg/lxsession/LXDE, as per advice from the web. However, it does not start at boot. So I tried running the line of code I put in that file manually, and I get this.
(code I run) python /home/blahblah/MyScript.py
WARNING: Please configure OAuth 2.0
To make this sample run you will need to download the client_secrets.json file and save it at:
/home/blahblah/client_secrets.json
The thing is, that file DOES exist. But for some reason the Google code doesn't realise this when I run the script from elsewhere.
How then can I get my script to run at bootup? | Can't auto-start Python program on Raspberry Pi due to Google Calendar | 1.2 | 0 | 0 | 381 |
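A slightly more portable variant of the accepted fix, offered as a sketch: resolve client_secrets.json relative to the script itself instead of hard-coding the home directory, so autostart launches from any working directory still find it.

import os

HERE = os.path.dirname(os.path.abspath(__file__))
CLIENT_SECRETS = os.path.join(HERE, 'client_secrets.json')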
23,678,753 | 2014-05-15T12:51:00.000 | 0 | 0 | 0 | 0 | python,pyqt,py2exe | 23,681,056 | 1 | false | 0 | 1 | PyQt is essentially a wrapper around Qt.
In theory it should be possible to use Qt without PyQt, but you'd have to make the bindings yourself. | 1 | 0 | 0 | I just used py2exe to compile my Python app that uses PyQt. Its size is 23 MB.
The PyQt libraries (PyQt4.QtCore.pyd, PyQt4.QtGui.pyd, QtCore4.dll and QtGui4.dll) add up to more than 17 MB.
Is there a way to use Qt with a reduced size?
Thank you! | Reduce Size py2exe and pyqt | 0 | 0 | 0 | 475 |
23,679,480 | 2014-05-15T13:19:00.000 | 0 | 0 | 0 | 0 | python,web-scraping,beautifulsoup | 65,200,203 | 2 | false | 1 | 0 | print(soup.find('h1',class_='pdp_product_title'))
does not give any result
<div class="pr2-sm css-1ou6bb2"><h2 class="headline-5-small pb1-sm d-sm-ib css-1ppcdci" data-test="product-sub-title">Women's Shoe</h2><h1 id="pdp_product_title" class="headline-2 css-zis9ta" data-test="product-title">Nike Air Force 1 Shadow</h1></div> | 1 | 9 | 0 | In mechanize we click links either by using follow_link or click_link. Is there a similar kind of thing in beautiful soup to click a link on a web page? | Clicking link using beautifulsoup in python | 0 | 0 | 1 | 52,594 |
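BeautifulSoup only parses markup; there is nothing to 'click'. The usual workaround is to read the link's href and fetch it yourself. A sketch using requests (an assumption, since the question names no HTTP library), with a placeholder URL and link text:

import requests
from bs4 import BeautifulSoup
try:
    from urllib.parse import urljoin   # Python 3
except ImportError:
    from urlparse import urljoin       # Python 2

url = 'http://example.com/'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

link = soup.find('a', text='Next')     # the link you would have 'clicked'
if link is not None:
    next_page = requests.get(urljoin(url, link['href']))
    print(next_page.status_code)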
23,679,951 | 2014-05-15T13:38:00.000 | 0 | 0 | 1 | 0 | python,pandas | 23,698,952 | 1 | false | 0 | 0 | Found the best solution:
pd.tools.merge.concat([test.construction,test.ops],join='outer')
Joins along the date index and keeps the different columns. To the extent the column names are the same, it will join 'inner' or 'outer' as specified. | 1 | 0 | 1 | I have two dataframes, each with a series of dates as the index. The dates do not overlap (in other words, one date range runs from, say, 2013-01-01 through 2016-06-15 by month, and the second DataFrame starts on 2016-06-15 and runs quarterly through 2035-06-15).
Most of the column names overlap (i.e. are the same) and the join works fine. However, there is one column in each DataFrame that I would like to preserve as 'belonging' to the original DataFrame so that I have them both available for future use. I gave each a different name. For example, DF1 has a column entitled opselapsed_time and DF2 has a column entitled constructionelapsed_time.
When I try to combine DF1 and DF2 together using the command DF1.combine_first(DF2) or vice versa I get this error: ValueError: Cannot convert NA to integer.
Could someone please give me advice on how best to resolve this?
Do I need to just stick with using a merge/join type solution instead of combine_first? | 'combine first' in pandas produces NA error | 0 | 0 | 0 | 373 |
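For reference, a sketch of the same outer join with the top-level API (pd.tools.merge was later removed); the frames and column names are invented to mirror the question:

import pandas as pd

ops = pd.DataFrame(
    {'rate': [0.90, 0.95], 'opselapsed_time': [10, 11]},
    index=pd.to_datetime(['2013-01-01', '2013-02-01']))
construction = pd.DataFrame(
    {'rate': [1.00, 1.10], 'constructionelapsed_time': [0, 1]},
    index=pd.to_datetime(['2016-06-15', '2016-09-15']))

# join='outer' keeps the columns unique to each frame, filled with NaN
combined = pd.concat([ops, construction], join='outer').sort_index()
print(combined)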
23,680,786 | 2014-05-15T14:12:00.000 | 2 | 0 | 1 | 0 | python,ipython | 30,603,070 | 1 | false | 0 | 0 | If you are connected to the kernel via the IPython console client and are on a Unix-like OS, you can detach using Ctrl-\ | 1 | 4 | 0 | Can someone please tell me how I can detach from an IPython kernel without terminating it?
I see in the documentation of quit() that there is a parameter keep_kernel, but unfortunately quit(keep_kernel=True) won't work. | Detach from IPython kernel without terminating it | 0.379949 | 0 | 0 | 682 |
23,682,222 | 2014-05-15T15:11:00.000 | 2 | 0 | 0 | 0 | python,django | 23,682,246 | 1 | true | 1 | 0 | Django is a server-side framework: your Python code executes on the server, not in the browser. | 1 | 1 | 0 | Hello :) My task is to write a web framework for a Python program with some massive calculations, which should be executed on a proper server, with the result sent to the browser once it's calculated. My question is: is Django the right framework for that purpose? I tried to find out where Django executes scripts, but I haven't found a satisfying answer, so I hope to find one here.
Thank you for any attention. | Do Django excecute scripts at server or in browser? | 1.2 | 0 | 0 | 44 |
23,687,183 | 2014-05-15T19:37:00.000 | 0 | 0 | 0 | 0 | python | 23,687,339 | 2 | false | 0 | 0 | This problem is a bit underspecified.
Say you're on Windows: you could assign the correct Python invocation as the default open action for the .xml file type, but this would then apply to all .xml files, and you might need elevated access rights to make the change.
On Linux, you'd have to deal with the various desktop environments' ways of doing this ... and then I'd open an issue, because it does not work with xfe running under awesome on my box here.
In short: Solve this problem by avoiding it.
Can you make do with "drag this file onto the application icon"?
Can you make the .xml files appear in a dedicated place and have a background task check for new ones? | 1 | 0 | 0 | I have a project where the goal is to have a user double-click on an XML file, which is then handed to my Python script to work with. What is the best way to accomplish something like this? Can this even be done programmatically, or is this some kind of system preference I have to modify? How can my program retrieve the file that was double-clicked (at least its name)? | Execute Python when double clicking a file | 0 | 0 | 1 | 301
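Whatever launch mechanism is chosen, Windows and most Linux file managers pass the double-clicked file's path as the first command-line argument, so the script side reduces to sys.argv; a minimal sketch:

import sys

def main():
    if len(sys.argv) < 2:
        sys.exit('usage: handler.py FILE.xml')
    path = sys.argv[1]          # the file that was double-clicked
    with open(path) as f:
        print('received %d bytes from %s' % (len(f.read()), path))

if __name__ == '__main__':
    main()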
23,688,227 | 2014-05-15T20:41:00.000 | 9 | 0 | 0 | 0 | python,matplotlib | 23,688,578 | 1 | true | 0 | 0 | I do not like those layers
pylab : massive namespace dump that pulls in everything from pyplot and numpy. It is not a 'layer' so much as a very cluttered namespace.
pyplot : state-machine based layer (it knows what your 'current axes' and 'current figure' are and applies the functions to that axes/figure). You should use this only when you are playing around in an interactive terminal (other than plt.subplots or plt.figure for setting up the figure/axes objects). Making a distinction between this layer and pylab is dumb. This is the layer that makes it 'like MATLAB'.
the OO layer : which is what you should use in all of your scripts
Figures are the top-level container objects. They can contain Axes objects and Artist objects (technically Axes are Artists, but it is useful for pedagogical reasons to distinguish between the Axes objects in a figure and the other artists (such as text objects) that are in the Figure but not associated with an Axes) and know about the Canvas object. Each Axes can contain more Artist objects. The Artists are the useful things you want to put on your graph (lines, text, images, etc). Artists know how to draw themselves on a Canvas. When you call fig.savefig (or render the figure to the screen) the Figure object loops over all of its children and tells them to draw themselves onto its Canvas.
The different Backends provide implementations of the Canvas objects, hence the same figure can be rendered to a raster or vector graphic by just changing the Canvas object that is being used.
Unless you want to write a new backend, many of these details are not important and the fact that matplotlib hides them from you is why it is useful.
If the book couldn't get this correct, I would take everything it says with a grain of salt. | 1 | 4 | 1 | I have been working with matplotlib recently. I studied many examples and was able to modify them to suit several of my needs, but I would like to better understand the general structure of the library. For this reason, apart from reading a lot of tutorials on the web, I also purchased the ebook "Matplotlib for Python Developers", by Tosi. While it is full of good examples, I still don't fully grasp the relationships between all the different levels.
The book clearly explains that matplotlib has 3 main "modes":
1) pylab, to work similarly to Matlab
2) pyplot, to work in a procedural way
3) the full OO system
Concerning the objects of the OO system, the book lists 3 levels:
1) FigureCanvas, the container class for the Figure instance
2) Figure, the container for Axes instances
3) Axes, the areas that hold the basic elements (lines, points, text...)
The problem is that, reading the official documentation, I also encountered the notions of Backends and Artists. While I understand their basic logic, I am quite confused about their role with respect to the previous classifications. In particular, are the Artists an intermediate level between the FigureCanvas and the Figure, or is that hierarchy not suitable in this case?
I would be grateful to receive some clarifications, possibly also referring to other documentation I could have missed.
Thanks. | Confusion about Artist place in matplotlib hierarchy | 1.2 | 0 | 0 | 1,693 |
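A minimal sketch of the recommended OO layer: pyplot is used only to create the Figure/Axes pair, everything else goes through the objects, and each plotting call returns the Artists it adds.

import matplotlib.pyplot as plt

fig, ax = plt.subplots()                              # Figure + one Axes
lines = ax.plot([0, 1, 2], [0, 1, 4], label='data')   # returns Line2D artists
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.savefig('figure.png')   # the Figure asks its children to draw on its Canvas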
23,691,819 | 2014-05-16T02:43:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter | 23,704,276 | 1 | false | 0 | 1 | I cannot answer this specifically for an 'unlock' event (if there even is one in the GUI; I can't find one by a cursory search).
But I think the real answer is to change the question. Having a program simply take focus when the user unlocks the display is very un-Windows-ish behavior. The Windows user expects to see the desktop just as s/he left it before the display was locked -- why would s/he want to see your program on top when unlocking, regardless of why Windows might have locked the display?
Maybe you want to recast your program as something that runs in the background and pops up a notification (in the notification area on the right side of the toolbar) when it wants the user to do something? | 1 | 0 | 0 | I have a tkinter program written in Python 3.3.3. I need to make the program get focus when the user unlocks the computer. I don't really know how to go ahead and start with this; users have a .exe version of the program that I created with cxfreeze. If I need to modify the .py and create another .exe, that wouldn't be a problem.
After some research I found that one can use the ctypes module to lock the computer, but that's not very helpful because I need to know if it is locked or unlocked. I also saw commands from win32com, but I can't seem to find a way to trigger a command when it gets unlocked.
What is the best way to get focus on my program after the computer is unlocked?
Any help is greatly appreciated. | Get focus on tkinter program after pc is unlocked | 0 | 0 | 0 | 100 |
23,695,305 | 2014-05-16T12:45:00.000 | 0 | 0 | 0 | 0 | python,scipy,openshift | 33,723,529 | 1 | false | 0 | 0 | Either add scipy to your setup.py file, or log in to your OpenShift app (rhc ssh yourapp) and install it manually: pip install scipy | 1 | 0 | 0 | I would like to install scipy on OpenShift but I don't know how to do it. I'm an absolute beginner with Python and OpenShift. Therefore it would be great if somebody could provide a step-by-step explanation of how to proceed. | install scipy on openshift | 0 | 0 | 0 | 232
23,696,185 | 2014-05-16T13:23:00.000 | 0 | 0 | 1 | 1 | python,python-2.7 | 23,697,040 | 1 | true | 0 | 0 | m = s.readline() has \n at the end of line. Then you're doing .format(i, m, values) which writes m in the middle of the string.
I leave it as an exercise to the reader to find out what happens when you write such a line to a file. :-)
(hint: m = s.readline().rstrip('\n')) | 1 | 0 | 0 | I've been trying to write lines to a file based on specific file names from the same directory, a search of the file names in another log file (given as an input), and the modified date of the files.
The output is limited to under 80 characters per line.
def getFiles(flag, file):
    if (flag == True):
        file_version = open(file)
        if file_version:
            s = mmap.mmap(file_version.fileno(), 0, access=mmap.ACCESS_READ)
            file_version.close()
    file = open('AllModules.txt', 'wb')
    for i, values in dict.items():
        # search keys in version file
        if (flag == True):
            index = s.find(bytes(i))
            if index > 0:
                s.seek(index + len(i) + 1)
                m = s.readline()
                line_new = '{:>0} {:>12} {:>12}'.format(i, m, values)
                file.write(line_new)
                s.seek(0)
        else:
            file.write(i + '\n')
    file.close()

if __name__ == '__main__':
    dict = {}
    for file in os.listdir(os.getcwd()):
        if os.path.splitext(file)[1] == '.psw' or os.path.splitext(file)[1] == '.pkw':
            time.ctime(os.path.getmtime(file))
            dict.update({str(os.path.splitext(file)[0]).upper(): time.strftime('%d/%m/%y')})
    if (len(sys.argv) > 1):
        if os.path.exists(sys.argv[1]):
            getFiles(True, sys.argv[1])
    else:
        getFiles(False, None)
The output is always like:
BW_LIB_INCL 13.1 rev. 259 [20140425 16:28]
16/05/14
The data is interpreted correctly, but the formatting is not: the time is put on the next line instead of on the same line.
This is happening to all the lines of my new file.
Could someone give me a hint? | Writing to file limitation on line length | 1.2 | 0 | 0 | 1,868 |
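A two-line demonstration of the bug the answer points at, with placeholder data: the newline embedded by readline() splits the formatted record.

m = 'BW_LIB_INCL 13.1 rev. 259 [20140425 16:28]\n'   # what readline() returns
print('{:>0} {:>12} {:>12}'.format('KEY', m, '16/05/14'))
# the date lands on the next line; m.rstrip('\n') keeps it on one line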
23,699,378 | 2014-05-16T15:40:00.000 | 0 | 0 | 1 | 0 | python,algorithm,permutation,time-complexity,combinatorics | 23,734,414 | 7 | false | 0 | 0 | The first part is straightforward if you work wholly on the lexicographic side of things. Given my answer on the other thread, you can go from a permutation to the factorial representation instantly. Basically, you imagine a list {0,1,2,3}, and the number I need to go along is the factorial representation, so for 0,1,2,3 I keep taking the zeroth element and get 000 (0*3! + 0*2! + 0*1!).
0,1,2,3 => 000
1032 => 101, i.e. 1*3! + 0*2! + 1*1! = 7, the 8th permutation (as 000 is the first permutation)
And you can work out the degree trivially, as each transposition which swaps a pair of numbers (a, b), a < b, changes one digit of the representation:
So 0123 -> 1023 is 000 -> 100.
If a > b you swap the numbers and then subtract one from the right-hand number.
Given two permutations/lexicographic numbers, I just permute the digits from right to left like a bubble sort, counting the degree that I need and building the new lexicographic number as I go. So to go from 0123 to 1032 I first move the 1 to the left, then the zero is in the right position, and then I move the 2 into position; both of those moves had pairs with the right-hand number greater than the left-hand number, so both add a 1, giving 101.
This deals with your first problem. The second is much more difficult, as the numbers of degree two are not evenly distributed. I don't see anything better than getting the global lexicographic number (global meaning here the number without any exclusions) of the permutation you want, e.g. 78 in your example, and then going through all the lexicographic numbers; each time you get to one which is degree 2, add one to your global lexicographic number, e.g. 78 -> 79 when you find the first number of degree 2. Obviously, this will not be fast. Alternatively, you could try generating all the numbers of degree two, which might easily be a lot less work than computing all the numbers up to your target: given a set of n elements, there are (n-1)(n-2) numbers of degree 2, though it's not clear to me that this holds going forward. You could then just see which ones have a lexicographic number less than your target number, and again add one to its global lexicographic number.
I'll see if I can come up with something better. | 1 | 13 | 1 | I've been working on this for hours but couldn't figure it out.
Define a permutation's degree to be the minimum number of transpositions that need to be composed to create it. So the degree of (0, 1, 2, 3) is 0, the degree of (0, 1, 3, 2) is 1, the degree of (1, 0, 3, 2) is 2, etc.
Look at the space S(n, d): the space of all permutations of a sequence of length n that have degree d.
I want two algorithms. One that takes a permutation in that space and assigns it an index number, and another that takes an index number of an item in S(n, d) and retrieves its permutation. The index numbers should obviously be successive (i.e. in the range 0 to len(S(n, d)) - 1), with each permutation having a distinct index number.
I'd like this implemented in O(sane); which means that if you're asking for permutation number 17, the algorithm shouldn't go over all the permutations between 0 and 16 to retrieve your permutation.
Any idea how to solve this?
(If you're going to include code, I prefer Python, thank you.)
Update:
I want a solution in which
The permutations are ordered according to their lexicographic order (and not by manually ordering them, but by an efficient algorithm that produces them in lexicographic order to begin with), and
I want the algorithm to accept a sequence of different degrees as well, so I could say "I want permutation number 78 out of all permutations of degrees 1, 3 or 4 out of the permutation space of range(5)". (Basically the function would take a tuple of degrees.) This'll also affect the reverse function that calculates index from permutation; based on the set of degrees, the index would be different.
I've tried solving this for the last two days and I was not successful. If you could provide Python code, that'd be best. | Get permutation with specified degree by index number | 0 | 0 | 0 | 1,866 |
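For the unrestricted part of the question, a sketch of the standard Lehmer-code bijection between a lexicographic index and a permutation; restricting it to the fixed-degree subspaces still needs the counting argument the answer discusses.

from math import factorial

def index_to_perm(index, n):
    # decode a lexicographic index into a permutation of range(n)
    items, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        f = factorial(i)
        perm.append(items.pop(index // f))
        index %= f
    return tuple(perm)

def perm_to_index(perm):
    items, index = sorted(perm), 0
    for i, p in enumerate(perm):
        j = items.index(p)
        index += j * factorial(len(perm) - 1 - i)
        items.pop(j)
    return index

assert perm_to_index(index_to_perm(7, 4)) == 7
print(index_to_perm(7, 4))   # (1, 0, 3, 2), the 8th permutation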
23,702,324 | 2014-05-16T18:31:00.000 | 0 | 0 | 1 | 0 | javascript,python,cocos2d-x,cocos2d-js | 33,130,769 | 4 | false | 1 | 0 | You should know, Chrome is tenacious about caching.
You can turn it off every way they offer, and it will still retain js files you don't want it to.
My advice is to restart your entire browser--not just the tab you're debugging--at least once an hour. | 3 | 4 | 0 | Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file is retained somehow in the localhost web server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'force' the cocos compile or cocos run commands to overwrite any old .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself? | Cocos2D-JS -- Compile and Run a 'Clean' build? | 0 | 0 | 0 | 4,146 |
23,702,324 | 2014-05-16T18:31:00.000 | 7 | 0 | 1 | 0 | javascript,python,cocos2d-x,cocos2d-js | 23,703,216 | 4 | true | 1 | 0 | Fixed it: the .js files were being cached by my browser.
Issue:
Chrome Browser was Caching the .js files. I solved this problem by turning off Caching. I did not realize that the localhost was indeed pointing to the project directory.
Solution: Disable Caching in Chrome:
Menu (top right icon) -> Tools -> Developer Tools -> Settings (Gear Icon) -> check the box for disabling caching (when DevTools is open) | 3 | 4 | 0 | Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file is retained somehow in the localhost web server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'force' the cocos compile or cocos run commands to overwrite any old .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself? | Cocos2D-JS -- Compile and Run a 'Clean' build? | 1.2 | 0 | 0 | 4,146 |
23,702,324 | 2014-05-16T18:31:00.000 | 0 | 0 | 1 | 0 | javascript,python,cocos2d-x,cocos2d-js | 34,370,929 | 4 | false | 1 | 0 | Yes, it was as simple as this: open the dev tools with F12, go to Settings, disable caching, and when you run your game, activate the dev tools again (F12) and refresh the page. | 3 | 4 | 0 | Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file is retained somehow in the localhost web server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'force' the cocos compile or cocos run commands to overwrite any old .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself? | Cocos2D-JS -- Compile and Run a 'Clean' build? | 0 | 0 | 0 | 4,146 |
23,703,135 | 2014-05-16T19:24:00.000 | 0 | 0 | 0 | 0 | python,flask | 23,703,319 | 1 | false | 1 | 0 | I see a few solutions:
Read /dev/urandom a few times, calculate the SHA-256 of the number, and use it as a file name; a collision is extremely improbable.
Use Redis and a command like LPUSH; using it from Python is very easy. Then RPOP from the right end of the linked list, and there's your queue. | 1 | 0 | 0 | My web app asks users 3 questions and simply writes that to a file, a1,a2,a3. I also have a real-time visualization of the average of the data (it reads from the file in real time).
Must I use a database to ensure that no (or minimal) information is lost? Is it possible to produce a queue of reads/writes? (Since the files are small, I am not too worried about the execution time of each call.) Does Python/Flask already take care of this?
I am quite experienced in Python itself, but not in this area (with Flask). | Is it possible to make writing to files/reading from files safe for a questionnaire type website? | 0 | 1 | 0 | 31
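A sketch of the Redis option from the answer, assuming a local Redis server and the redis-py client: the Flask view pushes each submission, and a single consumer drains the queue into the file, serializing the writes.

import json
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def submit(a1, a2, a3):
    # producer side: called from the request handler
    r.lpush('answers', json.dumps([a1, a2, a3]))

def drain(path='answers.txt'):
    # consumer side: single writer, so no concurrent file access
    with open(path, 'a') as f:
        item = r.rpop('answers')
        while item is not None:
            f.write(','.join(map(str, json.loads(item))) + '\n')
            item = r.rpop('answers')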