| Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9,694,967 |
2012-03-14T02:05:00.000
| 4 | 0 | 0 | 0 |
python,database,sqlite,indexing,bigdata
| 9,695,095 | 3 | true | 0 | 0 |
"sqlite3 is too slow, and I need something more heavyweight"
First, sqlite3 is fast, sometimes faster than MySQL.
Second, you have to use indexes; putting a compound index on (date1, date2, name) will speed things up significantly.
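For illustration, a minimal sketch of that index suggestion, assuming a table named foo with the columns from the query in the question (the data here is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (a, b, c, date1, date2, name)")
cur.execute("INSERT INTO foo VALUES (1, 2, 3, '2012-01-01', '2012-06-01', 'alice')")

# Compound index on the columns used in the WHERE clause, as suggested above.
cur.execute("CREATE INDEX idx_foo ON foo (date1, date2, name)")

# The same kind of query the question times; with the index it no longer
# needs a full table scan for every lookup.
cur.execute("SELECT a, b, c FROM foo WHERE date1<=? AND date2>? AND name=?",
            ("2012-03-14", "2012-03-14", "alice"))
print(cur.fetchall())
```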
| 1 | 2 | 0 |
I have a spreadsheet with about 1.7m lines, totalling 1 GB, and need to perform various queries on it. Being most comfortable with Python, my first approach was to hack together a bunch of dictionaries keyed in a way that would facilitate the queries I was trying to make. E.g. if I needed to be able to access everyone with a particular area code and age, I would make an areacode_age 2-dimensional dict. I ended up needing quite a few of these, which multiplied my memory footprint (to the order of ~10GB), and even though I had enough RAM to support this, the process was still quite slow.
At this point, it seemed like I was playing a sucker's game. "Well this is what relational databases were made for, right?", I thought. I imported sqlite3 and loaded my data into an in-memory database. I figured databases are built for speed and that this would solve my problems.
It turns out though, that doing a query like "SELECT (a, b, c) FROM foo WHERE date1<=d AND date2>e AND name=f" takes 0.05 seconds. Doing this for my 1.7m rows would take 24 hours of compute time. My hacky approach with dictionaries was about 3 orders of magnitude faster for this particular task (and, in this example, I couldn't key on date1 and date2 obviously, so I was getting every row that matched name and then filtering by date).
So, my question is, why is this so slow, and how can I make it fast? And what is the Pythonic approach? Possibilities I've been considering:
sqlite3 is too slow, and I need something more heavyweight
I need to somehow change my schema or my queries to be more... optimized?
the approaches I've tried so far are entirely wrong and I need a whole new tool of some kind
I read somewhere that, in sqlite 3, doing repeated calls to cursor.execute is much slower than using cursor.executemany. It turns out that executemany isn't even compatible with select statements though, so I think this was a red herring.
Thanks.
|
Querying (pretty) big relational data in Python in a reasonable amount of time?
| 1.2 | 1 | 0 | 572 |
9,695,320 |
2012-03-14T02:59:00.000
| 4 | 0 | 0 | 1 |
python,google-app-engine
| 9,696,354 | 2 | false | 0 | 0 |
Wrap appcfg.py in a shell script. Before actually running appcfg.py update, save the current time (adjusting for your time zone if necessary) in a file that's marked as a resource. You can open and read that file from the deployed app.
Alternatively, have that script substitute the current time directly into code, obviating the need for a file open.
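A minimal sketch of the wrapper idea, written in Python rather than shell; the file name deploy_time.txt and the project layout are assumptions:

```python
# deploy.py - hypothetical wrapper that records the deploy time, then deploys.
import datetime
import subprocess

# Write the current (UTC) time into a file that ships with the app as a resource.
with open("deploy_time.txt", "w") as f:
    f.write(datetime.datetime.utcnow().isoformat())

# Then run the usual deployment command.
subprocess.call(["appcfg.py", "update", "."])
```

The deployed app can then simply open deploy_time.txt at runtime and read the value back.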
| 1 | 3 | 0 |
I need to know, at runtime, the timestamp of when my app was deployed on the GAE server.
Surely I could generate some Python constant in the deployment script, but is there an easier and more correct way to reach the goal?
(I'd prefer not to use the datastore for that.)
|
How do I know timestamp when my Python app was deployed on GAE?
| 0.379949 | 0 | 0 | 266 |
9,695,924 |
2012-03-14T04:25:00.000
| 3 | 0 | 1 | 0 |
python
| 9,695,974 | 1 | true | 0 | 0 |
In 2.x, there is no difference; str is a sequence of bytes.
In 3.x, a byte string is written as a bytes literal, b'...'; it can be obtained from a string by encoding it to a specific charset, and it is the type you get from binary I/O operations.
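A small illustration of the 3.x distinction, with .encode() as the bridge between the two:

```python
text = "héllo"                        # str: a sequence of Unicode characters
data = text.encode("utf-8")           # bytes: the same text in a specific charset

print(type(text), type(data))         # <class 'str'> <class 'bytes'>
print(data)                           # b'h\xc3\xa9llo'
print(data.decode("utf-8") == text)   # True
# In 2.x, plain str is already a byte string and u'...' is the Unicode type.
```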
| 1 | 0 | 0 |
What's the difference between a string and a byte string?
When is it appropriate to use a byte string instead of a string?
More specifically, if I download an image or another binary file from the web, why do I need to convert it to a byte string before I can save it?
|
String vs Byte string
| 1.2 | 0 | 0 | 184 |
9,696,294 |
2012-03-14T05:13:00.000
| 0 | 0 | 0 | 0 |
python,django,ftp,web-hosting
| 9,698,149 | 3 | false | 1 | 0 |
There are different tools for FTP and SSH file transfer. Which one is best for you depends on your environment (e.g. operating system) and your needs (do you want a graphical or command line interface?). But basically it's always a program you run on your machine that connects to a server to upload files. You don't do anything through a web site (except finding out which server to connect to and maybe setting up an account / password).
| 1 | 0 | 0 |
I'm in the midst of trying to get my first website up and running, and all of a sudden I get to the point where I need to get my files online, and I have zero idea how to do that. I thought it would be as easy as selecting your files and clicking upload, but so far it has not been that easy. Currently I'm using djangoeurope.com, so if anyone has experience with that site, that would be extra helpful.
|
Beginner advice on how to use FTP or SSH? (django)
| 0 | 0 | 0 | 676 |
9,698,557 |
2012-03-14T08:54:00.000
| 0 | 0 | 0 | 0 |
python,authentication,proxy,pip
| 65,307,484 | 11 | false | 0 | 0 |
For me, the issue was being inside a conda environment. Most likely it used the pip command from the conda environment ("where pip" pointed to the conda environment). Setting proxy settings via --proxy or set http_proxy did not help.
Instead, simply opening a new CMD window and doing "pip install " there helped.
| 1 | 99 | 0 |
My computer is running windows behind a proxy on a windows server (using active directory), and I can't figure out how to get through it with pip (in python3). I have tried using --proxy, but it still just times out. I have also tried setting a long timeout (60s), but that made no difference. My proxy settings are correct, and I compared them with those that I'm using successfully in TortoiseHG to make sure.
Are there any other tricks that anyone knows of that I can try, or is there some limitation in pip with regards to windows proxies?
Update: My failed attempts involved searching pypi. I've just tried actually installing something and it worked. Searching still fails though. Does this indicate a bug in pip or do they work differently?
|
How to use pip on windows behind an authenticating proxy
| 0 | 0 | 0 | 241,528 |
9,700,623 |
2012-03-14T11:00:00.000
| 0 | 0 | 0 | 0 |
python,build,titanium,titanium-mobile,appcelerator-mobile
| 9,706,293 | 1 | false | 1 | 1 |
Ajeet, I believe you can create a directory for android and iphone inside the resources folder that you can keep your platform-specific code/assets in. I think the compiler recognizes this.
| 1 | 1 | 0 |
I am building a Titanium mobile project.
I have some folders with some .js files in the Resources folder. The problem I ran into is that I need to exclude some of the folders while building for iPhone, but those folders are needed in my Android build.
I looked into the Python files in the SDK folder and found out that there are separate files, i.e. builder.py, for iPhone and Android.
While building for Android or iOS all of my JS files get built, which I don't want, as it increases my app size.
For now I have successfully edited the builder.py file so that it copies the selected folders into my iPhone resources folder. It runs fine on the simulator, but when I tried running on the device I got an error saying a .js file was missing.
I know that my copied .js files were not archived.
Can anyone help me configure that builder.py so that we can exclude some folders from getting built?
|
Avoid folders in Resources folder from being built or compiled
| 0 | 0 | 0 | 332 |
9,700,942 |
2012-03-14T11:21:00.000
| 4 | 0 | 0 | 0 |
python,plone
| 9,704,631 | 1 | false | 1 | 0 |
You can change the workflow used for File objects, or indeed copy the File type in portal_types to a new Drawing type and change the workflow for that new type if you want to treat them differently to standard files in your CMS.
| 1 | 1 | 0 |
How can I use Plone 4.1.4 to manage AutoCAD drawings with different roles like architect, senior architect, project manager, and accounts manager (who manages the user accounts)? I would first of all like to know whether Plone can be used to create a workflow for uploaded AutoCAD drawing files, or for uploaded files as such. The doubt arises from certain Plone documentation which says that, by default, the content types Image and File have no workflow.
I wish to track the comments and changes made by the different user roles to the drawing files, as well as provide a lock, i.e. iterate through working copies of the drawing files that have been uploaded. Can anyone suggest the best approach to this project using Plone?
|
can i use plone workflow to manage autocad related drawings?
| 0.664037 | 0 | 0 | 150 |
9,701,682 |
2012-03-14T12:08:00.000
| 3 | 0 | 1 | 0 |
python,download,urllib2,fedora
| 9,702,345 | 3 | false | 0 | 0 |
First, the HTTP server should return a Content-Length header. This usually means the file is a static file; if it is a dynamic file, such as the output of PHP or JSP, you cannot do such a split.
Then, you can use the HTTP Range header in the request; this header tells the server which part of the file to return. See the Python docs for how to set and parse HTTP headers.
To do this, if the part size is 100k, your first request with Range: 0-100000 (100k) will get the first part, and the Content-Length in its response tells you the size of the file; then start some threads with different Range values, and it will work.
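A rough sketch of one such ranged request with urllib2 (as used in the question); the URL and sizes are placeholders:

```python
import urllib2

url = "http://example.com/bigfile.bin"     # placeholder URL
start, end = 0, 99999                       # first ~100k bytes (inclusive)

req = urllib2.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
resp = urllib2.urlopen(req)
chunk = resp.read()

# A server that honours Range answers with status 206 and a Content-Range
# header like "bytes 0-99999/12345678", which also reveals the total size.
print resp.getcode(), resp.info().getheader("Content-Range")
```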
| 1 | 6 | 0 |
I'm trying to create a 'Download Manager' for Linux that lets me download one single file using multiple threads. This is what I'm trying to do :
Divide the file to be downloaded into different parts by specifying an offset
Download the different parts into a temporary location
Merge them into a single file.
Steps 2 and 3 are solvable, and it is at Step #1 that I'm stuck. How do I specify an offset while downloading a file?
Using something along the lines of open("/path/to/file", "wb").write(urllib2.urlopen(url).read()) does not let me specify a starting point to read from. Is there any alternative to this?
|
Download A Single File Using Multiple Threads
| 0.197375 | 0 | 1 | 3,774 |
9,703,511 |
2012-03-14T14:06:00.000
| 5 | 0 | 0 | 0 |
python,django,django-orm,django-q
| 9,703,606 | 3 | true | 1 | 0 |
No, but you could create the Q object first and use that; alternatively, create your query as a dict and pass that (with **) to both your filter method and the Q object.
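A small sketch of the dict route; the model and field names here (MyModel, date1, name) are just stand-ins:

```python
import datetime
from django.db.models import Q

# Build the lookup once as a dict...
params = {"date1__lte": datetime.date(2012, 3, 14), "name": "alice"}

# ...then reuse it both for a plain filter and inside a Q object.
qs = MyModel.objects.filter(**params)
q = Q(**params)
qs_or = MyModel.objects.filter(q | Q(name="bob"))
```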
| 1 | 8 | 0 |
I have a Django QuerySet, and I want to get a Q object out of it. (i.e. that holds the exact same query as that queryset.)
Is that possible? And if so, how?
|
Django: Extracting a `Q` object from a `QuerySet`
| 1.2 | 0 | 0 | 1,110 |
9,705,201 |
2012-03-14T15:45:00.000
| 2 | 0 | 0 | 0 |
wxpython,mouseevent,proximity,shaped-window
| 9,716,353 | 2 | false | 0 | 1 |
I don't think it can be done that easily if the mouse is outside the main frame. That said, you can always do the following:
1) Start a timer in your main frame and poll it every 50 milliseconds (or whatever suits you);
2) Once you poll it in your OnTimer event handler, check the mouse position via wx.GetMousePosition() (this will be in screen coordinates);
3) In the same OnTimer method, get the screen position of your frame, via frame.GetScreenPosition();
4) Compare the mouse position with the frame position (maybe using a Euclidean distance calculation, or whatever suits you). Then set your frame transparency according to this distance (remember to make it fully opaque if the mouse is inside the frame rectangle).
I just did it for the fun of it; it shouldn't take more than 5 minutes to implement.
Hope this helps.
Andrea.
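A rough, untested sketch of those four steps (the polling interval and fade thresholds are arbitrary):

```python
import wx

class ProximityFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Proximity demo")
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.OnTimer, self.timer)
        self.timer.Start(50)                     # step 1: poll every 50 ms

    def OnTimer(self, event):
        mouse = wx.GetMousePosition()            # step 2: mouse in screen coordinates
        rect = self.GetScreenRect()              # step 3: frame position and size
        if rect.Contains(mouse):
            self.SetTransparent(255)             # fully opaque inside the frame
            return
        # step 4: distance from the mouse to the frame centre drives the alpha
        cx, cy = rect.x + rect.width / 2, rect.y + rect.height / 2
        dist = ((mouse.x - cx) ** 2 + (mouse.y - cy) ** 2) ** 0.5
        alpha = max(60, 255 - int(min(dist, 500) / 500.0 * 195))
        self.SetTransparent(alpha)

if __name__ == "__main__":
    app = wx.App(False)
    ProximityFrame().Show()
    app.MainLoop()
```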
| 1 | 4 | 0 |
In Python using wxPython, how can I set the transparency and size of a window based on the proximity of the mouse relative to the application's window, or frame?
Eg. similar to a hyperbolic zoom, or The Dock in MAC OS X? I am trying to achieve this effect with a png with transparency and a shaped window.
Any libraries or code snippets that do this would be great too. Thanks.
|
Capturing mouse events outside wx.Frame in Python
| 0.197375 | 0 | 0 | 2,043 |
9,705,852 |
2012-03-14T16:21:00.000
| 2 | 0 | 0 | 0 |
javascript,python,django,django-templates
| 9,705,930 | 3 | false | 1 | 0 |
This depends a lot on what you are trying to do. If the chart is dynamic and animated, doing it client side with js may be the only choice. It also depends on how much data you have. I would not recommend doing it in js if you have over 10mb of raw data.
| 2 | 4 | 0 |
In general, is it better for performance to do lots of data calculations on the server side or on the JavaScript side?
I have a bunch of data that I'm displaying on a page, and I'm wondering if I should format/parse/make calculations on that data on the server side (in Python) and return a template, or if I should return the data as is and do all my calculating/formatting on the JavaScript side?
Are there any general rules of thumb when making these decisions?
Examples of things i'm calculating - converting timestamps to dates.
|
Server side or Javascript calculations?
| 0.132549 | 0 | 0 | 1,820 |
9,705,852 |
2012-03-14T16:21:00.000
| 1 | 0 | 0 | 0 |
javascript,python,django,django-templates
| 9,708,096 | 3 | false | 1 | 0 |
In addition to the facts stated by thedk, you should also keep in mind that calculations you do on client side are more likely to fail because the client may not fulfill certain preconditions. Think of disabled JavaScript or an unreliable internet connection. You generally have no control over your data as soon as it has left the server.
So, it would be highly advisable to move only unimportant calculations to the client side. Something like datetime formatting might be okay, but don't try to parse your whole website with JavaScript. Your website should work (and look acceptable) even if JavaScript is disabled on the client.
| 2 | 4 | 0 |
In general, is it better for performance to do lots of data calculations on the server side or on the JavaScript side?
I have a bunch of data that I'm displaying on a page, and I'm wondering if I should format/parse/make calculations on that data on the server side (in Python) and return a template, or if I should return the data as is and do all my calculating/formatting on the JavaScript side?
Are there any general rules of thumb when making these decisions?
Examples of things i'm calculating - converting timestamps to dates.
|
Server side or Javascript calculations?
| 0.066568 | 0 | 0 | 1,820 |
9,707,816 |
2012-03-14T18:22:00.000
| 1 | 1 | 0 | 0 |
python,django
| 9,707,962 | 1 | true | 1 | 0 |
The filesystem cache in Django works like any of the other caches, when the timeout value expires, the cache is "invalidated". In the case of files, that means it will be deleted/overwritten.
If you want long-term storage, you need to use a long-term storage solution (Django's cache framework is specifically not a long-term storage solution). Just save the tweets to your DB or manually to a file. You can still implement caching in addition to this, but you need to handle the long-term storage end.
| 1 | 0 | 0 |
I'm using Django to power a site where I pull in tweets from twitter timelines for use (for about 50 different people). I want to keep a large dictionary of all the tweets in a cache so I don't have to poll twitter every page-refresh. Right now I have it so when it retrieves tweets (30) from twitter, it saves it in the default cache with the key being that user's ID. However, I want it to save these in the long-term so the list of tweets for a user grows over time.
My question is, if I save them using the file-system cache instead, will the files themselves (pickled dictionaries) get deleted after the timeout value, or will it just re-read them into the cache from the file? That way, I could still add to the file over time. Thanks!
|
Do files with filesystem caching in Django delete after timeout?
| 1.2 | 0 | 0 | 970 |
9,709,513 |
2012-03-14T20:10:00.000
| 6 | 1 | 1 | 0 |
c++,python,algorithm,optimization,scipy
| 9,709,955 | 3 | true | 0 | 0 |
Often the choice between double and float is made more on space demands than speed. Modern processors are capable of operating on doubles quite fast.
Floats may be faster than doubles when using SIMD instructions (such as SSE), which can operate on multiple values at a time. Also, if the operations are faster than the memory pipeline, the smaller memory requirements of float will speed things up overall.
| 2 | 7 | 0 |
I am reading through code for optimization routines (Nelder Mead, SQP...). Languages are C++, Python. I observe that often conversion from double to float is performed, or methods are duplicated with double resp. float arguments. Why is it profitable in optimization routines code, and is it significant? In my own code in C++, should I be careful for types double and float and why?
Kind regards.
|
Double or float - optimization routines
| 1.2 | 0 | 0 | 958 |
9,709,513 |
2012-03-14T20:10:00.000
| 2 | 1 | 1 | 0 |
c++,python,algorithm,optimization,scipy
| 9,710,279 | 3 | false | 0 | 0 |
Other times that I've come across the need to consider the choice between double and float types in terms of optimisation include:
Networking. Sending double-precision data across a socket connection will obviously require more time than sending half that amount of data.
Mobile and embedded processors may only be able to handle high-speed single-precision calculations efficiently on a coprocessor.
As mentioned in another answer, modern desktop processors can handle double-precision processing quite fast. However, you have to ask yourself if the double-precision processing is really required. I work with audio, and the only time that I can think of where I would need to process double-precision data would be when using high-order filters where numerical errors can accumulate. Most of the time this can be avoided by paying more careful attention to the algorithm design. There are, of course, other scientific or engineering applications where double-precision data is required in order to correctly represent a huge dynamic range.
Even so, the question of how much effort to spend on considering the data type to use really depends on your target platform. If the platform can crunch through doubles with negligible overhead and you have memory to spare then there is no need to concern yourself. Profile small sections of test code to find out.
| 2 | 7 | 0 |
I am reading through code for optimization routines (Nelder Mead, SQP...). Languages are C++, Python. I observe that often conversion from double to float is performed, or methods are duplicated with double resp. float arguments. Why is it profitable in optimization routines code, and is it significant? In my own code in C++, should I be careful for types double and float and why?
Kind regards.
|
Double or float - optimization routines
| 0.132549 | 0 | 0 | 958 |
9,710,914 |
2012-03-14T21:58:00.000
| 0 | 0 | 0 | 0 |
python,sockets,socketserver
| 9,711,006 | 1 | true | 0 | 0 |
socket is the low-level interface that SocketServer (as well as other networking code) is built on. I'd start out learning it, whether you plan to use it directly or not, just so that you know what you're working with.
Also, SocketServer is of no use if you're writing client code. :)
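For a feel of the lower-level API, here is about the smallest possible TCP echo server written directly against socket (stream sockets, as in the question):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5000))
server.listen(1)

conn, addr = server.accept()   # blocks until a client connects
data = conn.recv(1024)         # read up to 1 KB from the client
conn.sendall(data)             # echo it straight back
conn.close()
server.close()
```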
| 1 | 0 | 0 |
I've read that socketserver is easier to use, but for someone who is just learning about sockets, which would be quicker and more beginner-friendly, socket or socketserver? For a very basic client/server setup using stream sockets. (Python)
|
Which is better to use for the server of a basic server/client socket implementation, "socket" or "socketserver"?
| 1.2 | 0 | 1 | 72 |
9,711,561 |
2012-03-14T22:56:00.000
| 3 | 0 | 0 | 0 |
python,youtube,gdata
| 9,714,978 | 1 | true | 0 | 0 |
Short answer: Not possible.
Long answer: Videos are just data files. So the question becomes: is it possible for a program on Computer A to tell Server B to send a file to Server C using standard internet communication? YouTube only accepts POST requests for uploading videos, so Server B would need to send this request. And you can't tell Server B to do this with just a URL.
| 1 | 1 | 0 |
Is it possible to upload a video to YouTube from a remote URL (not from the local machine)? I am using the YouTube API and Python gdata tools for this.
I don't have the videos on the server where the script will run, and I want to upload them directly to youtube from a remote URL, instead of downloading them first... Do you know if this possible?
|
Upload video to youtube via URL with python gdata
| 1.2 | 0 | 1 | 1,998 |
9,712,898 |
2012-03-15T01:44:00.000
| 2 | 1 | 0 | 0 |
java,php,javascript,c++,python
| 9,712,935 | 2 | true | 0 | 0 |
It is possible to raise the limits in Apache and PHP to handle files of this size. The basic HTTP upload mechanism does not offer progressive information, however, so I would usually consider this acceptable only for LAN-type connections.
The normal alternative is to locate a Flash or Javascript uploader widget. These have the bonus that they can display progressive information and will integrate well with a PHP-based website.
| 1 | 4 | 0 |
I have a problem I've been dealing with lately. My application asks its users to upload videos, to be shared with a private community. They are teaching videos, which are not always optimized for web quality to start with. The problem is, many of the videos are huge, way over the 50 megs I've seen in another question. In one case, a video was over a gig, and the only solution I had was to take the client's video from box.net, upload it to the video server via FTP, then associate it with the client's account by updating the database manually. Obviously, we don't want to deal with videos this way, we need it to all be handled automatically.
I've considered using either the box.net or dropbox API to facilitate large uploads, but would rather not go that way if I don't have to. We're using PHP for the main logic of the site, though I'm comfortable with many other languages, especially Python, but including Java, C++, or Perl. If I have to dedicate a whole server or server instance to handling the uploads, I will.
I'd rather do the client-side using native browser JavaScript, instead of Flash or other proprietary tech.
What is the final answer to uploading huge files though the web, by handling the server response in PHP or any other language?
|
Uploading huge files with PHP or any other language?
| 1.2 | 0 | 1 | 200 |
9,713,908 |
2012-03-15T04:16:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine
| 9,715,283 | 2 | false | 0 | 0 |
Though I agree with the suggestions in the comments, I think I have a better solution to your problem (hopefully :))
Although it's not necessary, you can use a pull queue in your application to simplify the design of your problem. The pattern I am suggesting is like this:
1) A servlet (let's call it the controller) centrally handles execution of various tasks and is exposed at a URL
2) The jobs are initiated by the controller by hitting the URL of the job (assuming a pull queue again)
3) After job completion, the job hits back at controller URL to report completion of job
4) Controller in turn deletes the job from queue which is done, and adds next logical job to queue
And this is repeated.
In this case your job code is unchanged even if logic of sequence changes or new jobs are added. You might need to make changes to controller only.
| 1 | 0 | 0 |
Hi, I'm struggling with a problem. I created a number of crons and I want to run them one after another in a specific order. Let's say I have crons A, B, C and D and want to run cron B after completion of cron A, after that run cron D, and after that cron C. I searched for a way to accomplish this but could not find any. Can anyone help?
|
Google app engine how to schedule Crons one after another
| 0.099668 | 0 | 0 | 112 |
9,714,161 |
2012-03-15T04:50:00.000
| 4 | 0 | 1 | 0 |
python,coding-style,formatting
| 9,714,174 | 3 | false | 0 | 0 |
It means you shouldn't do things like a = f ( 1 ) or l = [ 2, 3 ].
| 1 | 1 | 0 |
Python tutorial says "Use spaces around operators and after commas, but not directly inside bracketing constructs: a = f(1, 2) + g(3, 4)." What does "not directly inside bracketing constructs" exactly mean?
|
Spaces in Python coding style
| 0.26052 | 0 | 0 | 3,381 |
9,714,877 |
2012-03-15T06:21:00.000
| 5 | 0 | 0 | 0 |
php,python,json,zend-framework,serialization
| 9,716,132 | 3 | true | 1 | 0 |
Use JSON for data serialization. It's clean, simple, compact, widely supported, and understands data types. Use SOAP only if you like pain. It is a bloated sack of cruft built upon another bloated sack of cruft.
| 3 | 1 | 0 |
I am writing car rental software with a front end and a back end, where the back end does the accounting. I have to send some data like customer name, amount, currency etc. to the accounting engine to prepare the ledgers. I am confused whether to use JSON or SOAP for information exchange between the front and back ends. Your suggestions are precious. Thank you.
|
Either json or Soap to exchange data in my project?
| 1.2 | 0 | 1 | 181 |
9,714,877 |
2012-03-15T06:21:00.000
| 3 | 0 | 0 | 0 |
php,python,json,zend-framework,serialization
| 9,720,556 | 3 | false | 1 | 0 |
Use JSON.
My argument is that JSON maps directly to and from native data types in common scripting languages.
If you use Python, then None <-> null, True <-> true, False <-> false, int/float <-> Number, str/unicode <-> String, list <-> Array and dict <-> Object. You feel right at home with JSON.
If you use PHP, there should be similar mappings.
XML is always a foreign language for any programming language except Scala.
| 3 | 1 | 0 |
I am writing car rental software with a front end and a back end, where the back end does the accounting. I have to send some data like customer name, amount, currency etc. to the accounting engine to prepare the ledgers. I am confused whether to use JSON or SOAP for information exchange between the front and back ends. Your suggestions are precious. Thank you.
|
Either json or Soap to exchange data in my project?
| 0.197375 | 0 | 1 | 181 |
9,714,877 |
2012-03-15T06:21:00.000
| 0 | 0 | 0 | 0 |
php,python,json,zend-framework,serialization
| 9,720,841 | 3 | false | 1 | 0 |
Depending on your needs, you could use both. For example, using XML bindings you get the (de)serialization of the data going across the wire for free. That is, if you're going to be POSTing lots of data to your web service and want to avoid calling the equivalent of "request.getParameter" for each parameter, building your own objects, and creating/registering different servlets for each endpoint, the bindings can save development time. And for the response, you can have the payload defined as a String and return JSON text, which gives you the benefits of that compact, JavaScript-friendly notation.
| 3 | 1 | 0 |
I am writing car rental software with a front end and a back end, where the back end does the accounting. I have to send some data like customer name, amount, currency etc. to the accounting engine to prepare the ledgers. I am confused whether to use JSON or SOAP for information exchange between the front and back ends. Your suggestions are precious. Thank you.
|
Either json or Soap to exchange data in my project?
| 0 | 0 | 1 | 181 |
9,715,395 |
2012-03-15T07:19:00.000
| 7 | 0 | 1 | 0 |
python,html,vim,ubuntu
| 9,715,804 | 1 | true | 0 | 0 |
That's what compiler scripts are for!
The idea is to put a "compiler script" in your vim's compiler directory. That script is actually a settings file (the difference between script files and settings files in vim is only conceptual - technically they are the same), just like your .vimrc file. That script should contain configurations that are only loaded when you want them to be. For example :compiler python loads your python settings.
Check out :help compiler for more info.
There are also "filetype plugins" - the main difference between them and compilers is that they are loaded automatically by the vim's filetype detection mechanism - which is actually an extensive set of scripts that can detect pretty much any filetype - unless you use an exotic language, or define your own extension, and even then you can extend that mechanism with your own ftdetect scripts. This is different from compiler scripts, which you need to explicitly call via a :compiler command, or define an :autocmd that calls the :compiler command.
Check out :help filetype for more info.
Compiler scripts are more suitable for compiler-specific settings like make settings and build/run shortcuts. It makes sense to build a C program the same way whether you are in a .c or .h file, in the makefile, or in one of the program's resource text files.
Filetype scripts are more suitable for filetype-specific settings like syntax or code completion. It doesn't make sense to use C syntax and code completion for a C program's makefile or .ini file.
That said - for interpreted languages it doesn't really matter (unless you use a makefile to run them).
| 1 | 3 | 0 |
I'm pretty new to Vim and I just set it up so that I can write Python code, with code completion, folding, etc. and am able to compile it also with a shortcut via plug-ins.
The thing is, I would also like to write some HTML/CSS in Vim as well and I'd like to install some similar plug-ins. I know that I could do this and configure different short-cuts for each language, but I'd like to set it up into two separate workspaces, so that am either working in my python or html workspace but not both. Is there any way to do this? Thanks in advance!
|
Configuring Vim workspaces for programming in multiple languages?
| 1.2 | 0 | 0 | 1,189 |
9,715,877 |
2012-03-15T08:03:00.000
| 8 | 0 | 1 | 0 |
python,string,encoding,utf
| 9,715,989 | 1 | true | 0 | 0 |
Python 3 distinguishes between text and binary data. Text is guaranteed to be in Unicode, though no specific encoding is specified, as far as I could see. So it could be UTF-8, or UTF-16, or UTF-32¹ – but you wouldn't even notice.
The main point here is: You shouldn't even care. If you want to deal with text, then use text strings and access them by code point (which is the number of a single Unicode character and independent of the internal UTF – which may organise code points in several smaller code units). If you want bytes, then use b"" and access them by byte. And if you want to have a string in a byte sequence in a specific encoding, you use .encode().
¹ Or even UTF-9, if someone is insane enough to implement Python on a PDP-10.
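A tiny example of the code-point view versus the encoded-byte view:

```python
s = "naïve"                 # text, indexed by code point
b = s.encode("utf-8")       # bytes, the UTF-8 encoding of that text

print(len(s))   # 5 code points
print(len(b))   # 6 bytes - 'ï' takes two bytes in UTF-8
print(s[2])     # 'ï', a character
print(b[2])     # 195, a single byte (an int in Python 3)
```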
| 1 | 5 | 0 |
I believe most of you who are familiar with Python have read Dive Into Python 3. In chapter 4.3, it says this:
In Python 3, all strings are sequences of Unicode characters. There is no such thing as a Python string encoded in UTF-8, or a Python string encoded as CP-1252. “Is this string UTF-8?” is an invalid question.
Somehow I understand what this means: strings = characters in the Unicode set, and Python can help you encode characters according to different encoding methods. However, are characters in Python stored as bytes in computers anyway? For example, s = 'strings', and s is surely stored in my computer as a byte stream '0100100101...' or whatever. Then what is this encoding method used here - the "default" encoding method of Python?
Thanks!
|
how strings are stored by python in computers?
| 1.2 | 0 | 0 | 3,787 |
9,719,937 |
2012-03-15T12:43:00.000
| 1 | 1 | 0 | 0 |
php,python,netbeans,ide
| 9,719,988 | 2 | false | 0 | 0 |
First download NetBeans with PHP support, and from the plugin manager install Python support.
| 1 | 0 | 0 |
The NetBeans IDE supports PHP when you download the PHP bundle version. I also found a download of NetBeans for Python. But how can I make one NetBeans IDE support both PHP and Python?
|
How to make netbean IDE support both python and php
| 0.099668 | 0 | 0 | 400 |
9,720,797 |
2012-03-15T13:35:00.000
| 2 | 0 | 1 | 0 |
ipython,matplotlib
| 9,722,583 | 1 | true | 0 | 0 |
ipython's magic function %who should do the job.
| 1 | 3 | 0 |
Scenario:
I often work in 'pylab' mode of iPython for interactive data analysis. During these sessions I create many intermediate variables and sometimes I forget what I have called things, especially if an analysis session is running for several days (obviously with interruptions).
Now the problem is, that with the dir() command one sees all defined variables in this iPython session, but because it's a pylab session, many important numpy and matplotlib commands are in the global namespace and it's basically hopeless to find my own defined variables in this huge list.
Is there any way to filter this for 'imported' ones and created ones so that I can see only the variables that I have manually created during this session?
|
How to identify/find self-created variables in iPython session?
| 1.2 | 0 | 0 | 270 |
9,720,894 |
2012-03-15T13:41:00.000
| 11 | 0 | 0 | 0 |
java,python,machine-learning,nltk,mahout
| 9,722,329 | 3 | false | 1 | 0 |
I think one big thing Java has going for it is Hadoop. If you really mean large scale, you'll want to be able to use something like that. Generally speaking Java has the performance advantage, and more libraries available. So: Java.
| 3 | 34 | 0 |
I am currently embarking on a project that will involve crawling and processing huge amounts of data (hundreds of gigs), and also mining them for extracting structured data, named entity recognition, deduplication, classification etc.
I'm familiar with ML tools from both Java and the Python world: Lingpipe, Mahout, NLTK, etc. However, when it comes down to picking a platform for such a large scale problem - I lack sufficient experience to decide between Java or Python.
I know this sounds like a vague question, but I am looking for general advice on picking either Java or Python. The JVM offers better performance(?) over Python, but do libraries like Lingpipe etc. match up with the Python ecosystem? If I went with Python, how easy would it be to scale it and manage it across multiple machines, etc.?
Which one should I go with and why?
|
Large scale machine learning - Python or Java?
| 1 | 0 | 0 | 12,549 |
9,720,894 |
2012-03-15T13:41:00.000
| 5 | 0 | 0 | 0 |
java,python,machine-learning,nltk,mahout
| 9,735,214 | 3 | false | 1 | 0 |
If you are looking at NoSQL databases fit for ML tasks, then Neo4j is one of the more production-ready (relatively) options capable of handling big data. It is native to Java but comes with a nice REST API out of the box, and hence can be integrated with the platform of your choice. Java will give you a performance edge here.
| 3 | 34 | 0 |
I am currently embarking on a project that will involve crawling and processing huge amounts of data (hundreds of gigs), and also mining them for extracting structured data, named entity recognition, deduplication, classification etc.
I'm familiar with ML tools from both Java and the Python world: Lingpipe, Mahout, NLTK, etc. However, when it comes down to picking a platform for such a large scale problem - I lack sufficient experience to decide between Java or Python.
I know this sounds like a vague question, but I am looking for general advice on picking either Java or Python. The JVM offers better performance(?) over Python, but do libraries like Lingpipe etc. match up with the Python ecosystem? If I went with Python, how easy would it be to scale it and manage it across multiple machines, etc.?
Which one should I go with and why?
|
Large scale machine learning - Python or Java?
| 0.321513 | 0 | 0 | 12,549 |
9,720,894 |
2012-03-15T13:41:00.000
| 18 | 0 | 0 | 0 |
java,python,machine-learning,nltk,mahout
| 9,723,569 | 3 | true | 1 | 0 |
Apache is going strong, producing excellent stuff like Lucene/Solr/Nutch for search, Mahout for big-data machine learning, Hadoop for MapReduce, OpenNLP for NLP, and a lot of NoSQL tools. The best part is the big "I", which stands for integration: these products integrate well with each other and, in most situations, complement each other.
Python is great too; however, if you consider the above from the ASF, then I would go with Java, like Sean Owen. Python will always be available for the above, but mostly as add-ons and not the actual stuff. For example, you can do Hadoop with Python by using Streaming, etc.
I partially switched from C++ to Java in order to utilize some of the very popular Apache products like Lucene, Solr & OpenNLP and also other popular open source NoSQL Java products like Neo4j & OrientDB.
| 3 | 34 | 0 |
I am currently embarking on a project that will involve crawling and processing huge amounts of data (hundreds of gigs), and also mining them for extracting structured data, named entity recognition, deduplication, classification etc.
I'm familiar with ML tools from both Java and the Python world: Lingpipe, Mahout, NLTK, etc. However, when it comes down to picking a platform for such a large scale problem - I lack sufficient experience to decide between Java or Python.
I know this sounds like a vague question, but I am looking for general advice on picking either Java or Python. The JVM offers better performance(?) over Python, but do libraries like Lingpipe etc. match up with the Python ecosystem? If I went with Python, how easy would it be to scale it and manage it across multiple machines, etc.?
Which one should I go with and why?
|
Large scale machine learning - Python or Java?
| 1.2 | 0 | 0 | 12,549 |
9,722,778 |
2012-03-15T15:21:00.000
| 1 | 1 | 0 | 1 |
python,unix
| 10,034,142 | 1 | true | 0 | 0 |
You're asking about something pretty messy here. I suspect that none of this is what you want to do at all, and that you really want to accomplish this some simpler way. However, presuming you really want to mess with process groups...
Generally, a new process group is created only by the setpgrp(2) system call. Otherwise, processes created by fork(2) are always members of the current process group. That said, upon creating a new process group, the processes in that group aren't even controlled by any tty and doing what you appear to want to do properly requires understanding the whole process group model. A good reference for how all this works is Stevens, "Advanced Programming in the Unix Environment", which goes into it in gory detail.
If you really want to go down this route, you're going to have to implement popen or the equivalent yourself with all the appropriate system calls made.
| 1 | 1 | 0 |
I'm writing a unittesting framework for servers that uses popen to basically execute "python myserver.py" with shell=False, run some tests, and then proceed to take the server down by killpg.
This myserver.py can and will use multiprocessing to spawn subprocesses of its own. The problem is, from my tests, it seems that the pgrp pid of the server processes shares the same group pid as the actual main thread running the unittests, therefore doing an os.killpg on the group pid will not only take down the server but also the process calling the popen (not what I want to do). Why does it do this? And how can I make them be on separate group pids that I can kill independently?
|
Popen-ing a python call that invokes a script using multiprocessing (pgrp issue)?
| 1.2 | 0 | 0 | 241 |
9,723,000 |
2012-03-15T15:33:00.000
| 7 | 0 | 1 | 0 |
python,parsing,datetime,pandas
| 9,739,828 | 1 | true | 0 | 0 |
Pass dateutil.parser.parse (or another datetime conversion function) in the converters argument to read_csv
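A minimal sketch of that suggestion; the column name "when" is just an example:

```python
import pandas as pd
from dateutil import parser
from io import StringIO   # stands in for a real CSV file on disk

csv = StringIO(u"id,when,value\n1,2012-03-15 15:33:00,3.5\n2,2012-03-16 09:00:00,7.1\n")

# Parse the 'when' column into datetime objects without touching the index.
df = pd.read_csv(csv, converters={"when": parser.parse})
print(df["when"][0])    # 2012-03-15 15:33:00
print(df.index)         # still the default integer index
```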
| 1 | 6 | 1 |
I have a csv file where one of the columns is a date/time string. How do I parse it correctly with pandas? I don't want to make that column the index. Thanks!
Uri
|
How do I tell pandas to parse a particular column as a datetime object, but not make it an index?
| 1.2 | 0 | 0 | 1,378 |
9,723,381 |
2012-03-15T15:53:00.000
| 1 | 0 | 1 | 0 |
python,import
| 9,723,534 | 3 | false | 0 | 0 |
Python has modules that give your code more functionality. import re gives access to the re module, which provides RegEx support. If you type help() at the Python interpreter and then type modules, it will return a list of all the modules.
| 1 | 0 | 0 |
I keep noticing blocks of code starting with import string, import re or import sys.
I know that you must import a module before you can use it. Is the import based on the object?
|
import string/re/sys in python
| 0.066568 | 0 | 0 | 1,444 |
9,724,539 |
2012-03-15T16:59:00.000
| 0 | 0 | 0 | 0 |
python,django
| 10,783,975 | 2 | false | 1 | 0 |
Think of CBVs, more specifically "generic class-based views", as a large tree of Python classes, starting with the simplest class; each one subclasses and overrides methods from another. For example, the ArchiveIndexView is typically the view you will sub-class for the index of your site. It adds an extra context variable called latest. You must supply it with a date_field, num_latest, and a couple of optionals in the view class. You can also pass these arguments in through the URLConf. However, it is more tidy and clean to have the logic in views.py. It is quite convenient once you get the hang of it. You can create mixins of your own that are essentially as powerful as your brain. Beyond a mixin, though, for something you want available on every page, perhaps use a template tag or, at worst, a custom context processor.
| 1 | 0 | 0 |
I have a news section on my site done with "James Bennett - Practical Django Projects, 2nd Edition (2009)". So I am using date-based views, which will be deprecated in Django 1.4. How can I convert my views and urls to class-based views? Maybe you have seen this done; please just post a link. I can't find any working example, at least for MonthMixin.
|
Can you share an example of using class based view with MonthMixin?
| 0 | 0 | 0 | 362 |
9,725,725 |
2012-03-15T18:17:00.000
| 1 | 0 | 0 | 0 |
python,.net,xml,xslt,ado.net
| 11,595,618 | 1 | true | 0 | 0 |
Finally we have used SAXON + XSLT2.0 (saxon called from Perl) and Perl::Twig for the parts we did not know how to program in XSLT
| 1 | 1 | 0 |
I have 10 XML files containing several Objects.
The XML files define ACTIONS on those objects.
ACTIONS on objects=
MODIFY values
DELETE Object
CREATE Object with values
I need to get the result of those 10 XML files (10 files of actions on those objects).
Any suggestion ?
programming .NET and ADO ?
programming PYTHON and minidom ?
spyXML from Altova ?
a commercial tool to load MYSQL ?
|
XML files to be sum up
| 1.2 | 0 | 1 | 81 |
9,726,214 |
2012-03-15T18:50:00.000
| 0 | 1 | 1 | 0 |
python,testing
| 9,726,303 | 2 | false | 0 | 0 |
Functional testing. Or regression testing, if that is its purpose. Or code coverage, if you structure your data to cover all code paths.
| 1 | 4 | 0 |
I have a completely non-interactive python program that takes some command-line options and input files and produces output files. It can be fairly easily tested by choosing simple cases and writing the input and expected output files by hand, then running the program on the input files and comparing output files to the expected ones.
1) What's the name for this type of testing?
2) Is there a python package to do this type of testing?
It's not difficult to set up by hand in the most basic form, and I did that already. But then I ran into cases like output files containing the date and other information that can legitimately change between the runs - I considered writing something that would let me specify which sections of the reference files should be allowed to be different and still have the test pass, and realized I might be getting into "reinventing the wheel" territory.
(I rewrote a good part of unittest functionality before I caught myself last time this happened...)
|
Testing full program by comparing output file to reference file: what's it called, and is there a package for it?
| 0 | 0 | 0 | 1,131 |
9,726,483 |
2012-03-15T19:09:00.000
| 3 | 0 | 1 | 0 |
python,wxpython
| 9,726,647 | 2 | false | 0 | 0 |
Only the first import executes the file. Subsequent imports copy the reference from sys.modules.
| 2 | 2 | 0 |
Just for my knowledge, how does Python, especially wxPython, react to multiple imports? If I import wx in multiple files, how does it handle that when the main frame is called? Does it slow things down, or does it first check whether the module has already been imported?
|
Multiple imports of python modules
| 0.291313 | 0 | 0 | 111 |
9,726,483 |
2012-03-15T19:09:00.000
| 5 | 0 | 1 | 0 |
python,wxpython
| 9,726,645 | 2 | true | 0 | 0 |
When Python imports a file, it keeps track of it by storing it in sys.modules. So whenever Python imports a file it checks there first and, if it finds it there, returns that instead; if it is not there, it imports it, adds it to sys.modules, and then returns it.
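You can watch this happen directly (assuming wxPython is installed):

```python
import sys

import wx                          # first import: the module is executed and cached
print("wx" in sys.modules)         # True

import wx                          # second import: just a dictionary lookup, no re-execution
print(sys.modules["wx"] is wx)     # True - the very same module object
```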
| 2 | 2 | 0 |
Just for my knowledge, how does Python, especially wxPython, react to multiple imports? If I import wx in multiple files, how does it handle that when the main frame is called? Does it slow things down, or does it first check whether the module has already been imported?
|
Multiple imports of python modules
| 1.2 | 0 | 0 | 111 |
9,727,608 |
2012-03-15T20:25:00.000
| 0 | 0 | 0 | 1 |
python,bottle
| 10,681,349 | 1 | true | 1 | 0 |
I actually resolved the issue. The Bottle framework tutorial encourages first-time users to set up the server on a high port (to avoid conflicts with Apache, etc.) for development. I was missing two parts of the process: 1. import the Python script so that it can be called from the main Bottle file; 2. in the main Bottle file, add a route to the API link (for the JavaScript to work). I'm not sure if I would have had to add the route if I was running the server on port 80.
| 1 | 1 | 0 |
I have written a webapp using traditional cgi. I'm now trying to rewrite it with bottle
The page is simple...the user fills out a form, hits submit and the data object is sent to a python script that used to live in my cgi-bin
The python script generates an image, and prints the url for that image out to standard out
On callback, I use javascript to display the newly generated image on the page formatted with html.
The issue that I'm having with Bottle is getting the image-generating script to execute when it receives the POST request. I'm used to handling the POST request and callback with JavaScript (or jQuery). Should I be using a Bottle method instead?
|
bottle framework: getting requests and routing to work
| 1.2 | 0 | 0 | 693 |
9,730,769 |
2012-03-16T01:49:00.000
| 0 | 0 | 0 | 0 |
python,wxpython,wxwidgets
| 9,742,560 | 1 | false | 1 | 1 |
I don't think you are missing anything; this hasn't been implemented yet. Mouse and keyboard events are high on my todo list, though - I will update this question when they have been added.
| 1 | 2 | 0 |
I would like to bind the wx.html2.WebView.New widget with wx.EVT_LEFT_UP, however it doesn't work (the event doesn't get noticed, nothing happens).
Is there anything I am missing?
|
Binding a wx.html2.WebView.New Widget?
| 0 | 0 | 0 | 419 |
9,731,496 |
2012-03-16T03:36:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,matrix,distributed
| 9,731,658 | 2 | true | 0 | 0 |
You will probably get the best performance if you use one thread for each CPU core available to the machine running your application. You won't get any performance benefit by running more threads than you have processors.
If you are planning to spawn new threads each time you perform a matrix multiplication then there is very little hope of your multi-threaded app ever outperforming the single-threaded version unless you are multiplying really huge matrices. The overhead involved in thread creation is just too high relative to the time required to multiply matrices. However, you could get a significant performance boost if you spawn all the worker threads once when your process starts and then reuse them over and over again to perform many matrix multiplications.
For each pair of matrices you want to multiply you will want to load the multiplicand and multiplier matrices into memory once and then allow all of your worker threads to access the memory simultaneously. This should be safe because those matrices will not be changing during the multiplication.
You should also be able to allow all the worker threads to write their output simultaneously into the same output matrix because (due to the nature of matrix multiplication) each thread will end up writing its output to different elements of the matrix and there will not be any contention.
I think you should distribute the rows between threads by maintaining an integer NextRowToProcess that is shared by all of the threads. Whenever a thread is ready to process another row it calls InterlockedIncrement (or whatever atomic increment operation you have available on your platform) to safely get the next row to process.
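A rough Python sketch of that shared-row-counter idea; a threading.Lock stands in for the atomic increment (and note that CPython's GIL limits how much real speed-up pure-Python arithmetic will see):

```python
import threading

def multiply_threaded(A, B, num_threads=4):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]   # shared output matrix (no contention per element)
    next_row = [0]                    # shared "NextRowToProcess"
    lock = threading.Lock()           # plays the role of the atomic increment

    def worker():
        while True:
            with lock:                # safely grab the next row to process
                row = next_row[0]
                next_row[0] += 1
            if row >= n:
                return
            for j in range(p):
                C[row][j] = sum(A[row][k] * B[k][j] for k in range(m))

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return C

# Tiny usage example
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(multiply_threaded(A, B))   # [[19, 22], [43, 50]]
```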
| 1 | 2 | 1 |
For a class project I am writing a simple matrix multiplier in Python. My professor has asked for it to be threaded. The way I handle this right now is to create a thread for every row and throw the result in another matrix.
What I wanted to know is whether it would be faster if, instead of creating a thread for each row, it created some fixed number of threads that each handle several rows.
For example: given Matrix1 100x100 * Matrix2 100x100 (matrix sizes can vary widely):
4 threads each handling 25 rows
10 threads each handling 10 rows
Maybe this is a problem of fine tuning or maybe the thread creation process overhead is still faster than the above distribution mechanism.
|
Creating a thread for each operation or a some threads for various operations?
| 1.2 | 0 | 0 | 251 |
9,734,403 |
2012-03-16T09:05:00.000
| 2 | 0 | 0 | 0 |
python,machine-learning,scikits,scikit-learn
| 9,760,852 | 1 | true | 0 | 0 |
You can either use an aggregate confusion matrix or compute one for each CV partition and compute the mean and the standard deviation (or standard error) for each component in the matrix as a measure of the variability.
For the classification report, the code would need to be modified to accept 2 dimensional inputs so as to pass the predictions for each CV partitions and then compute the mean scores and std deviation for each class.
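One way to build the aggregate matrix, sketched against a recent scikit-learn (the module paths differ from the 2012-era scikits.learn the question uses):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, classification_report

X, y = load_iris(return_X_y=True)

y_true_all, y_pred_all = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    y_true_all.append(y[test_idx])
    y_pred_all.append(clf.predict(X[test_idx]))

# Concatenate the per-fold labels/predictions, then compute one matrix over all folds.
y_true_all = np.concatenate(y_true_all)
y_pred_all = np.concatenate(y_pred_all)
print(confusion_matrix(y_true_all, y_pred_all))
print(classification_report(y_true_all, y_pred_all))
```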
| 1 | 6 | 1 |
I am training a svm classifier with cross validation (stratifiedKfold) using the scikits interfaces. For each test set (of k), I get a classification result. I want to have a confusion matrix with all the results.
Scikits has a confusion matrix interface:
sklearn.metrics.confusion_matrix(y_true, y_pred)
My question is how I should accumulate the y_true and y_pred values. They are numpy arrays. Should I define the size of the arrays based on my k-fold parameter? And for each result should I add the y_true and y_pred to the array?
|
scikits confusion matrix with cross validation
| 1.2 | 0 | 0 | 4,075 |
9,735,381 |
2012-03-16T10:15:00.000
| 1 | 0 | 1 | 0 |
python,nltk,corpus
| 9,878,800 | 2 | false | 0 | 0 |
CategorizedCorpusReader only supports one level of categories. But since categories are based on the filename, you are free to set up your own name/category scheme and filter the corpus fileids as needed.
How do you want to use the multi-level categories? If you have follow-up questions, explain what you want to achieve and what you have tried so far.
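As a sketch of one such scheme: encode "parent/sub" in the file path and let cat_pattern turn it into the category name (the directory layout here is an assumption):

```python
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

# Assumed layout: corpus/news/politics/a.txt, corpus/news/sports/b.txt, ...
reader = CategorizedPlaintextCorpusReader(
    "corpus",
    r".*\.txt",
    cat_pattern=r"(\w+/\w+)/.*",   # category becomes e.g. "news/politics"
)

print(reader.categories())
print(reader.fileids(categories="news/politics"))

# A "parent category" is then just a prefix match over the category strings.
news_cats = [c for c in reader.categories() if c.startswith("news/")]
```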
| 1 | 3 | 0 |
I was trying to create another category under a parent category.
Is it possible to create one? How can it be done, and how can I refer to these sub-categories?
|
How to create a sub-category for a corpus in NLTK Python
| 0.099668 | 0 | 0 | 990 |
9,736,542 |
2012-03-16T11:38:00.000
| 1 | 0 | 1 | 0 |
python
| 9,739,990 | 2 | true | 0 | 0 |
Sounds like you have a stand-alone program that reads from stdin, and you want to automate input to it using python. Download and use the pexpect module, that's what it's for.
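A minimal pexpect sketch; the prompt strings ("Name:", "(Y/N)") are assumptions about what script1.py actually prints:

```python
import pexpect

child = pexpect.spawn("python script1.py")

child.expect("Name:")          # wait for the first prompt
child.sendline("Sujith")       # answer it

child.expect_exact("(Y/N)")    # an assumed yes/no prompt
child.sendline("Y")

child.expect(pexpect.EOF)      # let the script run to completion
print(child.before)            # whatever it printed after the last answer
```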
| 1 | 1 | 0 |
I have a python script, say script1.py. It will prompt the user with a series of questions like Name, 'Y'/'N' type questions. Now I need to call this python script from another python script, say script2.py, such that I would define the user inputs in script2.py. So how do I pass the input sequentially?
Help would be appreciated.
Regards,
Sujith
|
Python : Automate the user input data(Multiple sequential inputs)?
| 1.2 | 0 | 0 | 751 |
9,737,757 |
2012-03-16T13:02:00.000
| 1 | 1 | 0 | 0 |
python,encryption,public-key-encryption
| 9,738,049 | 2 | false | 0 | 0 |
What you need is to do something like SSL does: exchange a key using public key encryption, then use symmetric encryption. Asymmetric encryption is very inefficient in terms of performance, and should not be used for such stuff.
| 2 | 3 | 0 |
I have a program that regularly appends small pieces (say 8 bytes) of sensitive data to a number of logfiles. I would like this data to be encrypted. I want the program to start automatically at boot time, so I don't want to type a password at program start. I also don't want it to store a password somewhere, since that would almost defeat the purpose of encryption.
For these reasons, it seems to me that public key encryption would be a good choice. The program knows my public key, but my private key is password protected somewhere else.
So far, so good. But when I try to use PyCrypto to RSA (or ElGamal)-encrypt a small 5-byte string, the output explodes to 128 bytes. My logfiles are large enough as it is... On the other hand, when I try a symmetric crypto, like Blowfish, the output string is just as large as the input string.
So, my question is: Is there a reasonably secure public key encryption algorithm where I can encrypt data 8 bytes at a time and don't have it blow up? (I guess a factor of 2 would be OK). I think what I want is a public key stream cipher.
If there is not such a thing, I think I will just give up and use a symmetric crypto and give the password manually on startup.
|
Is there a public key stream cipher encryption?
| 0.099668 | 0 | 0 | 2,836 |
9,737,757 |
2012-03-16T13:02:00.000
| 5 | 1 | 0 | 0 |
python,encryption,public-key-encryption
| 9,738,026 | 2 | true | 0 | 0 |
Typically this is solved by having the program create some (real) random numbers which are used as a secret key for a symmetric encryption algorithm.
In your program you have to do something like:
Generate some real random data (maybe use /dev/random) as a secret key.
Encrypt the secret key with the public key algorithm.
Use the secret key for some other symmetric algorithm.
To decrypt this,
Use the private key to decrypt the secret key.
Use the secret key and the symmetric algorithm to decrypt the data.
You might want to get some random data (e.g. >=256bit) for a 'good' key.
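A rough sketch of those steps with PyCrypto (the library named in the question); this is an illustration of the hybrid idea, not production-grade crypto:

```python
import os
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP

# Once: generate/load the key pair (the private key would normally live elsewhere).
rsa_key = RSA.generate(2048)
public_key = rsa_key.publickey()

# Encrypt: random symmetric key, wrapped with the public key.
session_key = os.urandom(32)                        # 256-bit secret key
enc_session_key = PKCS1_OAEP.new(public_key).encrypt(session_key)

iv = os.urandom(16)
cipher = AES.new(session_key, AES.MODE_CFB, iv)
ciphertext = cipher.encrypt(b"small log record")    # small data stays small

# Decrypt (wherever the private key lives): unwrap the key, then the data.
session_key2 = PKCS1_OAEP.new(rsa_key).decrypt(enc_session_key)
print(AES.new(session_key2, AES.MODE_CFB, iv).decrypt(ciphertext))
```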
| 2 | 3 | 0 |
I have a program that regularly appends small pieces (say 8 bytes) of sensitive data to a number of logfiles. I would like this data to be encrypted. I want the program to start automatically at boot time, so I don't want to type a password at program start. I also don't want it to store a password somewhere, since that would almost defeat the purpose of encryption.
For these reasons, it seems to me that public key encryption would be a good choice. The program knows my public key, but my private key is password protected somewhere else.
So far, so good. But when I try to use PyCrypto to RSA (or ElGamal)-encrypt a small 5-byte string, the output explodes to 128 bytes. My logfiles are large enough as it is... On the other hand, when I try a symmetric crypto, like Blowfish, the output string is just as large as the input string.
So, my question is: Is there a reasonably secure public key encryption algorithm where I can encrypt data 8 bytes at a time and don't have it blow up? (I guess a factor of 2 would be OK). I think what I want is a public key stream cipher.
If there is not such a thing, I think I will just give up and use a symmetric crypto and give the password manually on startup.
|
Is there a public key stream cipher encryption?
| 1.2 | 0 | 0 | 2,836 |
9,738,522 |
2012-03-16T13:54:00.000
| 4 | 0 | 0 | 0 |
python,algorithm,http,web-scraping,beautifulsoup
| 9,738,568 | 2 | false | 0 | 0 |
You could use the HEAD HTTP method and look at the Last-Modified and ETag headers, etc. before actually downloading the full content again.
However nothing guarantees that the server will actually update these headers when the entity's (URL's) content changes, or indeed even respond properly to the HEAD method.
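A minimal sketch of that check with the requests library (the header handling is illustrative; as noted, servers may not honour it):
import requests

def changed_since(url, last_etag=None, last_modified=None):
    resp = requests.head(url, allow_redirects=True, timeout=10)
    etag = resp.headers.get("ETag")
    modified = resp.headers.get("Last-Modified")
    if etag and last_etag:
        return etag != last_etag          # different ETag -> content changed
    if modified and last_modified:
        return modified != last_modified
    return True                           # no usable headers, assume it changed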
| 2 | 0 | 0 |
I need to create software in Python which monitors sites for changes. At the moment I have a periodic task that compares the content of a site with the previous version. Is there any easier way to check if the content of a site has changed, maybe the time of the last change, so I can avoid downloading the content every time?
|
Get last changes on site
| 0.379949 | 0 | 0 | 118 |
9,738,522 |
2012-03-16T13:54:00.000
| 1 | 0 | 0 | 0 |
python,algorithm,http,web-scraping,beautifulsoup
| 9,738,636 | 2 | false | 0 | 0 |
Although it doesn't answer your question, I think it's worth mentioning that you don't have to store the previous version of the website to look for changes. You can just compute its md5 sum and store that, then compute it for the new version and check whether they are equal.
And about the question itself, AKX gave a great answer - just look for the Last-Modified header, but remember it is not guaranteed to work.
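A short sketch of the digest idea (the variable names are illustrative):
import hashlib

def fingerprint(html):
    # store only this digest instead of the whole previous page
    return hashlib.md5(html.encode("utf-8")).hexdigest()

# the page changed if fingerprint(new_html) != stored_digest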
| 2 | 0 | 0 |
I need to create software in Python which monitors sites for changes. At the moment I have a periodic task that compares the content of a site with the previous version. Is there any easier way to check if the content of a site has changed, maybe the time of the last change, so I can avoid downloading the content every time?
|
Get last changes on site
| 0.099668 | 0 | 0 | 118 |
9,739,963 |
2012-03-16T15:22:00.000
| 2 | 0 | 1 | 0 |
python
| 9,740,014 | 4 | false | 0 | 0 |
You can seek() to a position and write a single byte. It will overwrite what's there, rather than inserting.
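A minimal sketch of that (path, offset and value are placeholders):
def patch_byte(path, offset, value):
    # "r+b" opens for in-place update; nothing else in the file is rewritten
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(bytes([value]))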
| 1 | 6 | 0 |
This is a theoretical question as I don't have an actual problem, but I got to wondering ...
If I had a huge file, say many gigs long and I wanted to change a single byte and I knew the offset of that byte, how could I do this efficiently? Is there a way to do this without rewriting the entire file and only writing the single byte?
I'm not seeing anything in the Python file api that would let me write to a particular offset in a file.
|
Python - Small Change to a Huge File
| 0.099668 | 0 | 0 | 1,583 |
9,742,351 |
2012-03-16T18:05:00.000
| 0 | 0 | 0 | 0 |
python,encryption,passwords,hashlib
| 9,742,498 | 5 | false | 0 | 0 |
The HTTPS channel over which you send the password to the server provides encryption that is good enough.
However, you need a more secure storage mechanism for the password. Use an algorithm like "bcrypt" with many thousands of hash iterations (bcrypt calls this the cost factor, and it should be at least 16, meaning 2^16 iterations), and a random "salt". This works by deriving an encryption key from the password, which is a computationally expensive process, then using that key to encrypt some known cipher text, which is saved for comparison on future login attempts.
Also, using HTTPS on the login only is not sufficient. You should use it for any requests that require an authenticated user, or that carry an authentication cookie.
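A minimal sketch with the bcrypt package (the cost factor shown is an assumption; pick one per the advice above and your hardware):
import bcrypt

def hash_password(plaintext):
    # gensalt() embeds a random salt; rounds is the cost factor (2**rounds iterations)
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt(rounds=14))

def check_password(plaintext, stored_hash):
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)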
| 1 | 0 | 0 |
I'm trying to send username and password data from a web form to my server.
The password is sent as plain text over a https connection, then properly encrypted on the server (using python hashlib.sha224) before being stored, however I'm not sure how to transmit the password text to the server in an encrypted format.
My web client is written in javascript, and the server is written in python.
|
How to encrypt password sent to server
| 0 | 0 | 1 | 2,259 |
9,742,739 |
2012-03-16T18:36:00.000
| -4 | 0 | 1 | 0 |
python,matrix,multiprocessing
| 9,743,633 | 4 | false | 0 | 0 |
You don't.
Either they return their edits in a format you can use in the main programme, or you use some kind of interprocess-communication to have them send their edits over, or you use some kind of shared storage, such as a database, or a datastructure server like redis.
| 1 | 9 | 0 |
I am making a process pool and each of them needs to write to different parts of a matrix that exists in the main program. There is no fear of overwriting information, as each process will work with different rows of the matrix. How can I make the matrix writable from within the processes?
The program is a matrix multiplier a professor assigned me, and it has to be multiprocessed. It will create a process for every core the computer has. The main program will send different parts of the matrix to the processes and they will compute them, then return them in a way that lets me identify which response corresponds to which row it was based on.
|
How do I make processes able to write in an array of the main program?
| -1 | 0 | 0 | 29,338 |
9,743,838 |
2012-03-16T20:08:00.000
| 4 | 0 | 1 | 1 |
python,subprocess
| 9,743,899 | 4 | false | 0 | 0 |
You don't need to run a thread for each process. You can peek at the stdout streams for each process without blocking on them, and only read from them if they have data available to read.
You do have to be careful not to accidentally block on them, though, if you're not intending to.
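A POSIX-only sketch of non-blocking reads with select (the worker commands are placeholders; select on pipes does not work on Windows):
import select
import subprocess

cmds = [["worker", "--job", "1"], ["worker", "--job", "2"]]  # placeholders
procs = [subprocess.Popen(c, stdout=subprocess.PIPE) for c in cmds]

while any(p.poll() is None for p in procs):
    # select() reports which pipes have data, so reads never block
    ready, _, _ = select.select([p.stdout for p in procs], [], [], 0.5)
    for pipe in ready:
        line = pipe.readline()
        if line:
            print(line.decode().rstrip())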
| 1 | 22 | 0 |
I want to run many processes in parallel with the ability to read their stdout at any time. How should I do it? Do I need to run a thread for each subprocess.Popen() call, or what?
|
Python subprocess in parallel
| 0.197375 | 0 | 0 | 25,715 |
9,744,806 |
2012-03-16T21:35:00.000
| 4 | 0 | 1 | 0 |
python
| 9,745,145 | 1 | true | 0 | 0 |
It should not break any tools and it should work on Python 3.
It is OK if it doesn't hurt source code readability, i.e., you can still find out what the function does and how to use it.
The problem might be that it masks a poor design. If several methods use the same list of arguments the code should be refactored (create an object that works with the list) rather than patched by generating repetitive docstrings.
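For reference, the decorator the question describes can be as small as this (the names and shared text are illustrative, not from the question):
def docstring(text):
    def deco(func):
        func.__doc__ = text   # assign the shared docstring
        return func
    return deco

SHARED_DOC = "Accepts (x, y, mode); see the class overview for details."

@docstring(SHARED_DOC)
def from_file(x, y, mode):
    pass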
| 1 | 7 | 0 |
A python docstring must be given as a literal string; but sometimes it's useful to have similar docstrings for several functions (e.g., different constructors), or several access methods might accept the same list of arguments (and then rely on the same hidden method), so it would be nice to use the same description everywhere. For such cases I can construct a docstring by assigning to __doc__, which I do by means of a simple decorator. The system works very nicely (in python 2), and I'm pleased with how simple, clear and well-encapsulated it is.
The question: Is this a good idea? In particular, are there tools that would be confused by this set-up (e.g., anything that extracts docstrings from the source rather than from the bytecode). Is the solution still going to work in python 3? Are there other reasons or circumstances that would make this inadvisable?
|
Modifying a python docstring with a decorator: Is it a good idea?
| 1.2 | 0 | 0 | 1,266 |
9,746,586 |
2012-03-17T01:56:00.000
| 2 | 1 | 0 | 0 |
c++,python,embedding,extending
| 9,746,618 | 2 | false | 0 | 1 |
In my opinion, in your case it makes no sense to embed Python in C++, while the reverse could be beneficial.
In most programs, the performance problems are very localized, which means that you should rewrite the problematic code in C++ only where it makes sense, leaving Python for the rest.
This gives you the best of both worlds: the speed of C++ where you need it, and the ease of use and flexibility of Python everywhere else. What is also great is that you can do this process step by step, replacing the slow code paths bit by bit, always leaving the whole application in a usable (and testable!) state.
The reverse wouldn't make sense: you'd have to rewrite almost all the code, sacrificing the flexibility of the Python structure.
Still, as always when talking about performance, measure before acting: if your bottleneck is not CPU/memory bound, switching to C++ isn't likely to produce much advantage.
| 1 | 1 | 0 |
I have some big mysql databases with data for calculations and some parts where I need to get data from external websites.
I used Python to do the whole thing until now, but what shall I say: it's not a speedster.
Now I'm thinking about mixing Python with C++ using Boost::Python and the Python C API.
The question I've got now is: what is the better way to get some speed?
Shall I extend Python with some C++ code, or shall I embed Python code into a C++ program?
I will for sure get some speed increase using C++ code for the calculating parts, and I think that calling the Python interpreter inside a C application will not be better, because the Python interpreter will run the whole time. And I would have to wrap Python libraries like mysqldb or urllib3 to have a nice way to work inside C++.
So what would you suggest is the better way to go: extending or embedding?
( I love the Python language, but I'm also familiar with C++ and respect it for speed )
Update:
So I switched some parts from Python to C++ and used (real) multithreading in my C modules, and my program now needs 30 minutes instead of 7 hours :))))
|
Speed - embedding python in c++ or extending python with c++
| 0.197375 | 0 | 0 | 2,903 |
9,747,258 |
2012-03-17T04:30:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,google-api,google-api-client,google-api-python-client
| 9,748,040 | 2 | false | 1 | 0 |
The packages need to be locally available. Where did you put the packages: in the Python folder or in your project folder?
| 1 | 0 | 0 |
I've been experimenting with Google App Engine, and I'm trying to import certain libraries in order to execute API commands. I've been having trouble importing, however. When I try to execute "from apiclient.discovery import build", my website no longer loads. When I test locally in IDLE, this command works.
|
Google App Engine library imports
| 0 | 0 | 1 | 2,099 |
9,748,915 |
2012-03-17T09:43:00.000
| 3 | 0 | 0 | 0 |
python,user-interface,pygame
| 9,751,549 | 2 | true | 0 | 1 |
You might try running two separate programs. I just ran two of my pygame programs separately, and they work fine. Run one using the other, maybe? Or, if that doesn't work, use two surfaces as screens, and draw one into the other.
| 2 | 2 | 0 |
Is there any way to bind two windows from separate processes together using Python/Pygame? By binding I mean in two possible ways:
A large window that contains two smaller windows
Two separate windows which appear side to side (perhaps using OS environment variables?)
|
Possibility of binding multiple PyGame windows
| 1.2 | 0 | 0 | 531 |
9,748,915 |
2012-03-17T09:43:00.000
| 0 | 0 | 0 | 0 |
python,user-interface,pygame
| 9,855,904 | 2 | false | 0 | 1 |
Interprocess communication is probably the simplest. The issue though is that SDL is fundamentally not set up for multiple windows.
Probably the best long-term solution is to set up with wxPython, and then use PyGame inside of it. This will let you have all manner of windows with PyGame renderers.
| 2 | 2 | 0 |
Is there any way to bind two windows from separate processes together using Python/Pygame? By binding I mean in two possible ways:
A large window that contains two smaller windows
Two separate windows which appear side to side (perhaps using OS environment variables?)
|
Possibility of binding multiple PyGame windows
| 0 | 0 | 0 | 531 |
9,750,481 |
2012-03-17T13:50:00.000
| 2 | 0 | 0 | 0 |
python,download,html-parsing,beautifulsoup,printing-web-page
| 9,750,658 | 1 | true | 1 | 0 |
Use http.client to send a HEAD request to the URL. This will return only the headers for the resource; then you can look at the Content-Type header and see if it is text/html. If it is, send a GET request to the URL to get the body.
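A minimal Python 3 sketch of that check (http.client is the Python 3 name the answer uses; in Python 2 it is httplib):
import http.client
from urllib.parse import urlparse

def is_html(url):
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    ctype = conn.getresponse().getheader("Content-Type", "")
    conn.close()
    return ctype.startswith("text/html")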
| 1 | 0 | 0 |
I want to write a Python script which downloads a web page only if the page contains HTML. I know that the Content-Type header will be used. Please suggest some way to do it, as I am unable to find a way to get the headers before downloading the file.
|
Download a URL only if it is a HTML Webpage
| 1.2 | 0 | 1 | 103 |
9,752,808 |
2012-03-17T18:54:00.000
| 1 | 1 | 1 | 0 |
python,yaml,sublimetext
| 9,752,868 | 1 | true | 0 | 0 |
/Users/me/Developer/Cellar/python/2.7.2/lib/python2.7 doesn't seem like a pre-installed version of Python on a Mac. Can you try to identify the system-wide Python installation and use the explicit path to the python executable to execute setup.py install? Then try the Sublime Text plug-in.
The default Mac OS X Python should be located at /Library/Frameworks/Python.framework/Versions/...
| 1 | 0 | 0 |
I'm using OS X with Sublime text build 2181, and I am having trouble using the Yaml module in a Sublime Text plugin.
I have installed PyYaml by doing python setup.py install. When I go to the python console, and try import yaml I have no problems. But when I try to save my Sublime Text plugin with the import yaml statement, I keep getting ImportError: No module name yaml
I'm using the pre-installed version of Python, version 2.7.
Last line of the install output:
Writing /Users/me/Developer/Cellar/python/2.7.2/lib/python2.7/site-packages/PyYAML-3.10-py2.7.egg-info
Any help would be greatly appreciated.
|
Python SublimeText plugin - No module named Yaml
| 1.2 | 0 | 0 | 5,021 |
9,752,891 |
2012-03-17T19:08:00.000
| 0 | 0 | 0 | 0 |
python,url,dynamic
| 9,753,135 | 1 | true | 1 | 0 |
If you're parsing product pages, these URLs usually contain some kind of product id.
Find the pattern to extract product id from URLs, and use it to filter already visited URLs.
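A sketch of the filtering idea (the URL pattern is hypothetical; adapt it to each store's URL scheme):
import re

seen_products = set()

def product_id(url):
    # assumed pattern: ids look like ".../product/12345" or "?pid=12345"
    m = re.search(r"(?:product/|pid=)(\d+)", url)
    return m.group(1) if m else url   # fall back to the full URL

def should_crawl(url):
    pid = product_id(url)
    if pid in seen_products:
        return False
    seen_products.add(pid)
    return True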
| 1 | 0 | 0 |
I am crawling online stores for price comparison. Most of the stores use dynamic URLs heavily. This is causing my crawler to spend a lot of time on every online store. Even though most of them have only 5-6k unique products, they have >= 300k unique URLs. Any idea how to get around this?
Thanks in advance!
|
How to handle dynamic URLs while crawling online stores?
| 1.2 | 0 | 1 | 205 |
9,754,056 |
2012-03-17T21:48:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,csv,tkinter
| 9,754,842 | 2 | false | 0 | 1 |
You can use a lambda function to pass an argument to your load function. Unfortunately pastebin is down right now, so I cannot have a look at your code. The idea is something like this:
for filename in filenames:
    ...Button(..., command=lambda f=filename: loadFile(f), ...)
So in your loadFile function you receive the filename as the first parameter.
| 1 | 3 | 0 |
I'm trying to get this program to print the contents of a .csv file onto a GUI I've created in Tkinter. It mostly works, but I can't figure out a way to get each button to print the contents of the file it's linked to.
At the moment I've created a variable that links to just one of the files, which shows that it works.
The variable is "loadFiles", and the project it's set to open is "a_P.csv". Is there any way I can make the buttons link this variable to the relevant .csv file?
The code is in this pastebin link: http://pastebin.com/ZP2pPvKA
The program searches for files ending in "_P.csv" in the same folder as it, so you may have to create a .csv with 7 objects in it.
|
How to make a button open a specific .csv file using Python Tkinter?
| 0.099668 | 0 | 0 | 2,189 |
9,755,990 |
2012-03-18T04:26:00.000
| 2 | 0 | 1 | 0 |
python,list,tuples,immutability,python-internals
| 9,756,037 | 8 | false | 0 | 0 |
A tuple is immutable in the sense that the tuple itself can not expand or shrink, not that all the items contained themselves are immutable. Otherwise tuples are dull.
| 2 | 200 | 0 |
If a tuple is immutable then why can it contain mutable items?
It is seemingly a contradiction that when a mutable item such as a list does get modified, the tuple it belongs to maintains being immutable.
|
Why can tuples contain mutable items?
| 0.049958 | 0 | 0 | 37,518 |
9,755,990 |
2012-03-18T04:26:00.000
| 5 | 0 | 1 | 0 |
python,list,tuples,immutability,python-internals
| 9,756,035 | 8 | false | 0 | 0 |
I'll go out on a limb here and say that the relevant part here is that while you can change the contents of a list, or the state of an object, contained within a tuple, what you can't change is that the object or list is there. If you had something that depended on thing[3] being a list, even if empty, then I could see this being useful.
| 2 | 200 | 0 |
If a tuple is immutable then why can it contain mutable items?
It is seemingly a contradiction that when a mutable item such as a list does get modified, the tuple it belongs to maintains being immutable.
|
Why can tuples contain mutable items?
| 0.124353 | 0 | 0 | 37,518 |
9,757,203 |
2012-03-18T09:15:00.000
| 6 | 0 | 0 | 1 |
python,google-app-engine,python-2.7
| 9,757,219 | 1 | false | 1 | 0 |
AppEngine restricts you from doing things that don't make sense. Your AppEngine application can't go wandering all over the filesystem once it is running on Google's servers, and Google's servers certainly don't have a C: drive.
Whatever you are trying to accomplish by changing directories, it's something that you need to accomplish in a different way in an AppEngine application.
| 1 | 0 | 0 |
import os
os.chdir(r"c:\Users")
This works in the command prompt but not on localhost (Google App Engine).
Can anyone help?
|
change directory (python) doesnt work in localhost
| 1 | 0 | 0 | 213 |
9,757,361 |
2012-03-18T09:49:00.000
| 1 | 0 | 1 | 0 |
python,openpyxl
| 9,757,506 | 1 | false | 0 | 0 |
i want it as they are in excel file.
A date is recorded in an Excel file (both 2007+ XLSX files and earlier XLS files) as a floating point number of days (and fraction thereof) since some date in 1899/1900 or 1904. Only the "number format" that is recorded against the cell can be used to distinguish whether a date or a number was intended.
You will need to be able to retrieve the actual float value and the "number format" and apply the format to the float value. If the "number format" being used is one of the standard ones, this should be easy enough to do. Customised number formats are another matter. Locale-dependant formats likewise.
To get detailed help, you will need to give examples of what raw data you have got and what you want to "see" and how it is now being presented ("changing date format").
| 1 | 0 | 0 |
I am new to Python and I want to read an Office 2010 Excel file without changing its style. Currently it's working fine, except that it changes the date format. I want dates exactly as they are in the Excel file.
|
How to read office 2010 excelfile using openpyxl without changing style
| 0.197375 | 1 | 0 | 719 |
9,758,636 |
2012-03-18T13:16:00.000
| 0 | 1 | 0 | 0 |
python,api,twitter
| 29,076,949 | 2 | false | 0 | 0 |
I am currently studying the Twitter structure and have found that there is a field called tweet_count associated with each tweet, giving the number of times that particular original tweet has been retweeted.
| 2 | 1 | 0 |
I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most.
Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted??
I am using twitter module for python.
|
Getting Retweet Count of a Given Tweet ID Number
| 0 | 0 | 1 | 1,571 |
9,758,636 |
2012-03-18T13:16:00.000
| 0 | 1 | 0 | 0 |
python,api,twitter
| 9,758,703 | 2 | false | 0 | 0 |
I don't think so, since one can either retweet using the retweet command or use a commented retweet. At least the second alternative generates a new tweet id.
| 2 | 1 | 0 |
I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most.
Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted??
I am using twitter module for python.
|
Getting Retweet Count of a Given Tweet ID Number
| 0 | 0 | 1 | 1,571 |
9,759,558 |
2012-03-18T15:33:00.000
| 0 | 0 | 0 | 0 |
python,templates,flask,bottle
| 26,185,476 | 3 | false | 1 | 0 |
Note: this same solution can be used with the other template engines. The technique is exactly the same, but you use BaseTemplate (it works for all template classes) or the class for the engine you want to use.
| 1 | 7 | 0 |
Is there a bottle.py equivalent of context processors that you get in Flask?
|
Include variables in template context on every page with Bottle.py
| 0 | 0 | 0 | 1,518 |
9,759,680 |
2012-03-18T15:49:00.000
| 6 | 0 | 1 | 0 |
python,class,function,object,methods
| 9,759,706 | 2 | true | 0 | 0 |
Methods need to be called on a specific object. Functions don't.
The functions that are available at any time are the built-in ones, such as sorted and list, plus any functions that are in modules that you've imported or that you've defined yourself. The methods that are available on a particular object are the ones that are defined on that object's type.
| 2 | 1 | 0 |
In sorted(list(mydict.keys())), sorted and list don't need an object prefix (someobject.), but keys() does (mydict.keys()). When, or for what functions, is the prefix necessary?
|
In Python, when a function doesn't need an object prefix?
| 1.2 | 0 | 0 | 121 |
9,759,680 |
2012-03-18T15:49:00.000
| 2 | 0 | 1 | 0 |
python,class,function,object,methods
| 9,759,707 | 2 | false | 0 | 0 |
The "prefix" means that you are calling a method from an object (someobject or dict in your example). If your function is not a method of an object, you do not need "a prefix"
| 2 | 1 | 0 |
In sorted(list(mydict.keys())), sorted and list don't need an object prefix (someobject.), but keys() does (mydict.keys()). When, or for what functions, is the prefix necessary?
|
In Python, when a function doesn't need an object prefix?
| 0.197375 | 0 | 0 | 121 |
9,760,636 |
2012-03-18T17:46:00.000
| 0 | 0 | 0 | 0 |
python,keyword,wikipedia,topic-maps
| 9,760,985 | 2 | false | 0 | 0 |
You can scrape the categories if you want. If you're working with python, you can read the wikitext directly from their API, and use mwlib to parse the article and find the links.
A more interesting but harder to implement approach would be to create clusters of related terms, and given the list of terms extracted from an article, find the closest terms to them.
| 1 | 1 | 1 |
I am writing a user-app that takes input from the user as the current open wikipedia page. I have written a piece of code that takes this as input to my module and generates a list of keywords related to that particular article using webscraping and natural language processing.
I want to expand the functionality of the app by providing, in addition to the keywords I have identified, a set of related topics that may be of interest to the user. Is there any API that Wikipedia provides that will do the trick? If there isn't, can anybody point me to what I should be looking into (in case I have to write code from scratch)? I would also appreciate any pointers to an algorithm that will train the machine to identify topic maps. I am not seeking a paper, but rather a practical implementation of something basic.
So to summarize:
I need a way to find topics related to the current article in Wikipedia (categories will also do).
I would also appreciate a sample algorithm for training a machine to identify topics that are usually related and clustered.
PS: please be specific, because I have already researched a number of the obvious possibilities.
Appreciate it, thank you.
|
How to get related topics from a present wikipedia article?
| 0 | 0 | 0 | 1,060 |
9,761,240 |
2012-03-18T18:52:00.000
| 1 | 0 | 1 | 0 |
python,class,automation,replication
| 9,761,303 | 2 | true | 0 | 0 |
Any new-style object can get a reference to its class by accessing its __class__ attribute. From there it can invoke a constructor, manipulate class attributes, etc.
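A minimal sketch of a self-replicating instance using that attribute (the class and attribute names are illustrative):
class Cell(object):
    def __init__(self, name, data):
        self.name = name
        self.data = data

    def replicate(self, new_name, new_data):
        # self.__class__ keeps working for subclasses of Cell as well
        return self.__class__(new_name, new_data)

# the network instance would call cell.replicate(...) and register the new object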
| 1 | 1 | 0 |
I am trying to come up with a way of taking a created instance that acts as an environment for n-many sub-instances within; like having a overall 'network' instance with multiple dynamically-interconnecting objects inside. My current idea is for the network instance to first be instantiated, and then an initial sub-object created inside. The network would have a way to receive an input and send an output, and those inputs pass by the sub-objects and the sub-objects collectively form an output to send.
What I need is for a way that can have a cell, when certain parameters are met, take itself and create a new object that is a copy of itself but with different name and inserting a different stored data; not replacing the original, but expanding the collective in the network instance. So this would allow for a database-like system that could dynamically expand without being told a set range of object names, but be able to self-replicate. If it would be possible, the objects would replicate themselves as their instances and not call their class, but I'm completely open to ideas.
I have been able to test manually creating individual objects and have them interact how I would like, but still unable to get a simplified way of making the objects class self-replicate on its own initiative. So that's what I really need, sorry for the wall of text, and thanks for reading.
|
Python - Creating self-replicating class-object instance
| 1.2 | 0 | 0 | 1,634 |
9,762,156 |
2012-03-18T20:48:00.000
| 0 | 0 | 1 | 0 |
python,windows,filesystems,dokan
| 9,844,594 | 1 | false | 0 | 0 |
No way (without some very deep kernel-mode hacking). You need to have a filesystem visible to the OS via the driver stack in order to run an EXE from it. One option is to create a hidden filesystem or map the virtual file system to the directory on existing NTFS drive (eg. our Callback File System lets you do this), but in any case a kernel-mode driver is required.
There's one more option possible but I didn't see viable implementations of it: create an SMB server module and create a network mapped drive, which is connected to this SMB server.
| 1 | 1 | 0 |
A quick question for anyone who knows the answer. I'm working with virtual file systems and Python. I have an EXE file within my file system; is it possible to run this application without having to expose the file system with something like Dokan?
If not possible, is there a way to expose the file system without the need of drivers/admin privileges like Dokan requires in Windows?
Any help is appreciated, thanks!
|
python - Run Application out of Virtual File System
| 0 | 0 | 0 | 477 |
9,762,841 |
2012-03-18T22:22:00.000
| 1 | 0 | 0 | 0 |
python,sqlite,user-interface,wxpython,wxwidgets
| 9,771,997 | 1 | false | 0 | 1 |
You could use wx.grid or one of the ListCtrls. There's an example of a grid with 100 million cells in the wxPython demo that you could use for guidance on projects with lots of information. For ListCtrls, you would want to use a Virtual ListCtrl using the wx.LC_VIRTUAL flag. There's an example of that in the demo as well.
| 1 | 1 | 0 |
I'm creating a Python app using a relatively big SQL database (250k rows). The application needs a GUI whose most important part is presenting the results of SQL queries.
So I'm looking for the best way to quickly present data in tables in the GUI.
Preferably I'd be using wx, as it has a seamless connection to the main application I'm working with. What I need is the least effort between running an SQL query and populating the GUI table.
I once used wx.grid, but it seemed to have limited functionality. I also know of wx.grid.pygridtablebase - what is the difference?
What would be easiest way to do this?
|
Most seamless way to present data in gui
| 0.197375 | 1 | 0 | 420 |
9,763,056 |
2012-03-18T22:54:00.000
| 0 | 0 | 0 | 1 |
python,macos,module,homebrew
| 10,824,368 | 1 | false | 0 | 0 |
From the Homebrew page: "Homebrew installs packages into their own isolated prefix and then symlinks everything into /usr/local"
I think that the OS X preinstalled python looks for modules in
/Library/Frameworks/Python.framework/Versions/Current//lib/python2.7/site-packages
So maybe you need to symlink your Homebrew installed packages to there.
| 1 | 0 | 0 |
I am installing modules with Homebrew and other installers, and they are not recognized by my default Python. Module installations with easy_install (such as pip) appear to be available for my system and system Python.
My default python is located here and is this version:
15:49 [~]: which python
/usr/local/bin/python
15:49 [~]: python -d
Python 2.7.2 (default, Mar 18 2012, 15:13:08)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin Type "help",
"copyright", "credits" or "license" for more information.
The packages do appear to be located in /library/frameworks/, GEOS.framework is one example.
What do I need to modify to gain access to my modules?
System: Mac os x 10.5.8
|
default python does not locate modules installed with homebrew
| 0 | 0 | 0 | 311 |
9,763,675 |
2012-03-19T00:41:00.000
| 5 | 1 | 1 | 0 |
python
| 9,763,705 | 2 | true | 0 | 0 |
Opening the file in write/read mode (w+) will truncate the file without rewriting it if it already exists.
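A minimal sketch of that, writing the clipboard text as UTF-8 (the file name and clipboard variable are placeholders; io.open is used so the same code works on Python 2.6+ and 3):
import io

clipboard_text = u"...text read from the clipboard..."   # placeholder

# mode "w" (or "w+") truncates any existing content; the codec keeps the text as UTF-8
with io.open("out.txt", "w", encoding="utf-8") as f:
    f.write(clipboard_text)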
| 1 | 2 | 0 |
I basically want to copy what's on the clipboard and paste it into a file in UTF-8 encoding, but whatever I try, the file has '?' symbols in it and is ASCII encoded...
But what I found out is, if there is a file that's already in UTF-8 encoding, then whatever I paste into it manually (deleting what's there already) won't have the '?' in it.
So if there is a way to clear the content of a UTF-8 file, then copy what's on the clipboard and write it to that file, that would be great.
If I create the file, it always ends up being ASCII...
I already know how to copy from the clipboard and write it to a file; it's just how to clear a file that's confusing...
|
How to erase all text from a file using python, but not delete/recreate the file?
| 1.2 | 0 | 0 | 12,883 |
9,764,895 |
2012-03-19T04:03:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,coffeescript,go
| 9,764,949 | 2 | true | 1 | 0 |
Coffeescript compiles to Javascript, which can be run in a web browser. In that case, App Engine can serve up the resulting javascript.
I don't know of any way to compile coffeescript to python, java or go though, so you can't use it as a server side language.
| 1 | 2 | 0 |
Does anyone know if it is possible to use Coffeescript on Google App Engine? If so how can this be done with the app engine Python or Go platforms?
|
How to use Coffeescript on Google App Engine
| 1.2 | 0 | 0 | 1,031 |
9,764,930 |
2012-03-19T04:09:00.000
| 9 | 0 | 1 | 0 |
python,regex
| 9,765,037 | 4 | false | 0 | 0 |
You can fix the problem of (\.\w+)+ only capturing the last match by doing this instead: ((?:\.\w+)+)
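A quick check of the fixed pattern (the email address here is made up for illustration):
import re

m = re.match(r"(\w+)@(\w+)((?:\.\w+)+)", "user@webmail.something.edu.tr")
print(m.group(3))                         # ".something.edu.tr" - the whole run
print(m.group(3).lstrip(".").split("."))  # ['something', 'edu', 'tr']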
| 1 | 34 | 0 |
While matching an email address, after I match something like yasar@webmail, I want to capture one or more of (\.\w+) (what I am doing is a little bit more complicated, this is just an example). I tried adding (.\w+)+, but it only captures the last match. For example, [email protected] matches but only includes .tr after the yasar@webmail part, so I lose the .something and .edu groups. Can I do this with Python regular expressions, or would you suggest matching everything at first and splitting the subpatterns later?
|
Capturing repeating subpatterns in Python regex
| 1 | 0 | 0 | 55,735 |
9,764,963 |
2012-03-19T04:13:00.000
| 2 | 0 | 0 | 0 |
python,mysql-python
| 9,765,239 | 2 | true | 0 | 0 |
I guess you're using InnoDB. This is default for an InnoDB transaction.
REPEATABLE READ
This is the default isolation level for InnoDB. For consistent reads,
there is an important difference from the READ COMMITTED isolation
level: All consistent reads within the same transaction read the
snapshot established by the first read. This convention means that if
you issue several plain (nonlocking) SELECT statements within the same
transaction, these SELECT statements are consistent also with respect
to each other. See Section 13.2.8.2, “Consistent Nonlocking Reads”.
I haven't tested it yet, but forcing MySQLdb to start a new transaction by issuing a commit() on the current connection, or creating a new connection, might solve the issue.
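Something along these lines (conn stands for the existing MySQLdb connection in the question):
# end the old snapshot before querying again
conn.commit()                     # or conn.rollback()
cur = conn.cursor()
cur.execute("SELECT * FROM bitter.test WHERE id > 34")
rows = cur.fetchall()             # now sees rows committed by other sessions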
| 1 | 2 | 0 |
>>> _cursor.execute("select * from bitter.test where id > 34")
1L
>>> _cursor.fetchall()
({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},)
>>> _cursor.execute("select * from bitter.test where id > 34")
1L
>>> _cursor.fetchall()
({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},)
>>>
The first time I run cursor.execute and cursor.fetchall, I get the right result.
Before the second time I run execute and fetchall,
I insert data into MySQL whose id is 36, and I also run the commit command in MySQL,
but cursor.execute/fetchall can only get the earlier data, without the new row.
|
cursor fetch wrong records from mysql
| 1.2 | 1 | 0 | 677 |
9,767,585 |
2012-03-19T09:20:00.000
| 6 | 0 | 0 | 0 |
python,jinja2
| 9,767,951 | 5 | false | 1 | 0 |
Try wrapping the content of the other files in {% raw %} {% endraw %}.
You can use jQuery if you don't want to edit the external files:
Make a div to contain the content: <div id="contentoffile"></div>
and use jQuery to load the file: $("#contentoffile").load("url to file") - the URL can be relative.
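A quick check of what {% raw %} does (a standalone jinja2 snippet; the variable name is made up):
from jinja2 import Environment

env = Environment()
tpl = env.from_string("{% raw %}{{ not_a_variable }}{% endraw %}")
print(tpl.render())   # prints: {{ not_a_variable }}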
| 1 | 19 | 0 |
I'm trying to insert a file into a page using Jinja 2.6 with the include tag. This worked fine until I started using characters in the file that are reminiscent of Jinja syntax, at which point it realized it couldn't parse them and bombed.
Short of going through the file and escaping all the characters, what can I do to tell Jinja to just include the file as is?
|
Insert static files literally into Jinja templates without parsing them
| 1 | 0 | 0 | 18,579 |
9,768,218 |
2012-03-19T10:04:00.000
| 11 | 0 | 0 | 0 |
python,ctypes,pickle
| 9,771,616 | 3 | true | 0 | 0 |
Python has no way of doing that automatically for you:
You will have to build code to pick out all the desired data yourself, putting it in a suitable Python data structure (or just adding the data to a single bytes string where you know where each element is by its offset) - and then save that object to disk.
This is not a "Python" problem - it is exactly the problem Python solves for you when you use Python objects and data. When coding in C or at a lower level, you are responsible for knowing not only where your data is, but also the length of each chunk of data (and allocating memory for each chunk, freeing it when done, and so on). And this is what you have to do in this case.
Your data structure should give you not only the pointers, but also the length of the data in each pointed location (in a way or the other - if the pointer is to another structure, "size_of" will work for you)
| 3 | 8 | 1 |
I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled
|
How to save ctypes objects containing pointers
| 1.2 | 0 | 0 | 14,249 |
9,768,218 |
2012-03-19T10:04:00.000
| 1 | 0 | 0 | 0 |
python,ctypes,pickle
| 41,899,145 | 3 | false | 0 | 0 |
To pickle a ctypes object that has pointers, you would have to define your own __getstate__/__reduce__ methods for pickling and __setstate__ for unpickling. More information in the docs for pickle module.
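A minimal sketch of the __reduce__ approach for a pointer-free structure (with pointers you would also have to serialize the pointed-to data and restore it in the rebuild helper; the Point class is made up for illustration):
import ctypes
import pickle

class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

    def __reduce__(self):
        # pickle the raw bytes of the structure
        raw = ctypes.string_at(ctypes.addressof(self), ctypes.sizeof(self))
        return (_rebuild, (raw,))

def _rebuild(raw):
    return Point.from_buffer_copy(raw)

copy = pickle.loads(pickle.dumps(Point(1.0, 2.0)))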
| 3 | 8 | 1 |
I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled
|
How to save ctypes objects containing pointers
| 0.066568 | 0 | 0 | 14,249 |
9,768,218 |
2012-03-19T10:04:00.000
| 0 | 0 | 0 | 0 |
python,ctypes,pickle
| 9,768,597 | 3 | false | 0 | 0 |
You could copy the data into a Python data structure and dereference the pointers as you go (using the contents attribute of a pointer).
| 3 | 8 | 1 |
I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers.
How can I save the ctypes object and what the pointers are pointing to for later use?
I tried
scipy.io.savemat => TypeError: Could not convert object to array
cPickle => ctypes objects containing pointers cannot be pickled
|
How to save ctypes objects containing pointers
| 0 | 0 | 0 | 14,249 |
9,768,794 |
2012-03-19T10:48:00.000
| 1 | 0 | 1 | 0 |
python
| 9,768,810 | 2 | true | 0 | 0 |
"Immediately", no. The garbage collector will sweep it up next run, assuming there are no other references to that object.
| 1 | 1 | 0 |
I have a large dataset that I am dealing with in Python. It is hierarchical just like DOM. I have a root node object, and from that object all the other objects emanate.
SO, if I just do del obj where obj is root node, will the entire hierarchy be gone immediately?
|
Python: deleting large data
| 1.2 | 0 | 0 | 64 |
9,768,901 |
2012-03-19T10:57:00.000
| 0 | 0 | 1 | 0 |
javascript,python,sandbox,embedded-v8
| 9,796,559 | 2 | false | 0 | 0 |
Would simply locking down the V8 instance (ie: giving it no permissions in a chroot) and killing the process if it doesn't return after a certain amount of time not work?
| 2 | 6 | 0 |
I'm new to V8 and plan on using it in a python web application. The purpose is to let users submit and execute certain JS scripts. Obviously this is a security threat so I'm looking for resources that document the ways one might 'lock down' v8. For example, can I create a white list of functions allowed to be called? Or a blacklist of libraries not allowed to be referenced?
|
How to "Lock down" V8?
| 0 | 0 | 0 | 670 |
9,768,901 |
2012-03-19T10:57:00.000
| 1 | 0 | 1 | 0 |
javascript,python,sandbox,embedded-v8
| 9,796,731 | 2 | false | 0 | 0 |
If you use a plain V8 (i.e. not something like node.js) there won't be any dangerous functions. JavaScript itself doesn't have a stdlib containing filesystem functions etc.
The only thing a malicious user can do is creating infinite loops, deep recursions and memory hogs.
| 2 | 6 | 0 |
I'm new to V8 and plan on using it in a python web application. The purpose is to let users submit and execute certain JS scripts. Obviously this is a security threat so I'm looking for resources that document the ways one might 'lock down' v8. For example, can I create a white list of functions allowed to be called? Or a blacklist of libraries not allowed to be referenced?
|
How to "Lock down" V8?
| 0.099668 | 0 | 0 | 670 |
9,771,171 |
2012-03-17T17:03:00.000
| 0 | 1 | 0 | 0 |
python,openerp
| 10,222,065 | 3 | false | 1 | 0 |
I don't know for sure, but I think you can also use the scheduled actions in Administration -> Scheduler -> Scheduled Actions; otherwise ir.cron is the best option for scheduling outgoing emails.
| 3 | 4 | 0 |
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it appropriately to the invoice workflow. Now, normally when an invoice is confirmed, an email is automatically sent.
Is there a way to set a date for when the email should be sent instead of sending it immediately? Like "send email one week after confirmation"?
|
openerp schedule server action
| 0 | 0 | 0 | 1,470 |
9,771,171 |
2012-03-17T17:03:00.000
| 9 | 1 | 0 | 0 |
python,openerp
| 9,784,730 | 3 | true | 1 | 0 |
There is an object, ir.cron, which runs at a specific time period. There you can specify the time when you want to send the mail.
This object will call the function you give in the Method attribute. In this function you have to search for those invoices which are in the created state, then check the date when each was created, and if it is >= 7 days ago, send the mail.
Or
You can create an ir.cron record from the specific workflow action of the invoice, with its Next Execution Date set to 7 or 8 days later.
| 3 | 4 | 0 |
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it appropriately to the invoice workflow. Now, normally when an invoice is confirmed, an email is automatically sent.
Is there a way to set a date for when the email should be sent instead of sending it immediately? Like "send email one week after confirmation"?
|
openerp schedule server action
| 1.2 | 0 | 0 | 1,470 |
9,771,171 |
2012-03-17T17:03:00.000
| 0 | 1 | 0 | 0 |
python,openerp
| 10,615,931 | 3 | false | 1 | 0 |
With OpenERP 6.1, the new Email Engine has an email queue, so all you need to do is queue your email on that queue. There is already a Scheduled Action which processes this email queue at a defined interval, so you can simply change the trigger time of that action. See the Email Engine API for how to queue your emails in the email queue.
Regards
| 3 | 4 | 0 |
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it appropriately to the invoice workflow. Now, normally when an invoice is confirmed, an email is automatically sent.
Is there a way to set a date for when the email should be sent instead of sending it immediately? Like "send email one week after confirmation"?
|
openerp schedule server action
| 0 | 0 | 0 | 1,470 |
9,773,232 |
2012-03-19T15:43:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,server-side-includes,static-files
| 9,782,676 | 2 | false | 1 | 0 |
Or use a framework like django, which will help in inheritance of templates.
| 1 | 0 | 0 |
Is there a decent way to "simulate" server side includes using Python on Google App Engine?
I would really like to split my static html files up into smaller pieces for two reasons:
They will be easier to manage from a development perspective
HTML that is redundant across multiple pages can be more easily re-used and updates to the HTML will show on all pages instead of having to copy and paste updates
|
Practical server side includes with Python on Google App Engine
| 0 | 0 | 0 | 1,024 |
9,774,966 |
2012-03-19T17:31:00.000
| 0 | 1 | 0 | 0 |
python,ruby,vim,code-completion
| 9,775,180 | 1 | false | 0 | 0 |
Commercial IDEs for Python like Wing (www.wingware.com) and PyCharm (www.jetbrains.com/pycharm) are better at solving the majority of code-completion issues. Of course, they are not free. I myself was not able to get satisfactory results when using Eclipse with the PyDev plugin.
| 1 | 10 | 0 |
I'm working on a large python project using vim with tagexplorer, pythoncomplete, and ctags. Tag-based code-browsing and code-completion features don't work the way they should unfortunately because ctags doesn't tie instances to types.
Hypothetical scenarios:
Auto Complete: vim won't auto-complete method on() in myCar.ignition().on() because ctags doesn't know that ignition() returns TypeIgnition.
Code Browsing: vim won't browse into TypeCar when I click on myCar but instead presents me with multiple definition matches, incorrect matches, or no matches because ctags doesn't backtrack and tie instances to types.
The problem seems to stem from python being a dynamically typed language. Neither scenario would present a challenge otherwise. Is there an effective alternative to tags-based code-browsing and code-completion and an IDE or vim plugin that implements it well?
Note: Please vote "re-open". Solutions to this problem are valuable to the community. The question was originally formulated very vaguely, that's no longer the case.
|
How to address python code-browsing and code-completion issues in vim?
| 0 | 0 | 0 | 273 |
9,776,206 |
2012-03-19T18:59:00.000
| 1 | 0 | 0 | 0 |
c++,python,c,wxpython
| 9,777,740 | 3 | false | 0 | 1 |
You could strictly separate design (the Python part) and code (the C++ part) like this:
Write a complete C++ program that works in the terminal/console, and then make the Python application call this C++ terminal program via os.popen (or, better, the subprocess module).
So if your program is a calculator, it does this:
(python gui) 5 + 5 -> my_c_program.exe "5 + 5" -> (returns) 10 -> (python gui) display
That way you can use your program with and without the GUI.
It's easier and faster than embedding Python in your C++ program or extending Python with C++.
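A sketch of the calling side with the modern subprocess API ("./my_c_program" is a stand-in for the compiled C++ binary; requires Python 3.7+ for capture_output/text):
import subprocess

result = subprocess.run(["./my_c_program", "5 + 5"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())   # -> "10"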
I basically do the same thing on my current project, but like this:
php: my webinterface
python: for structure and logic and easy operation
c++: for heavy calculations and where I need speed
so php -> python -> c++
and it works very well for me :)
| 1 | 0 | 0 |
I am writing an application where I am coding all of the front-end and GUI with a Python library (wxPython specifically). For this application, I would like to write the model class in C and call the compiled C code from Python. How can this be implemented in Python?
I know this is a little vague as a question, but I am struggling with the starting point.
|
Using C compiled code from python GUI
| 0.066568 | 0 | 0 | 894 |
9,776,425 |
2012-03-19T19:16:00.000
| 3 | 0 | 1 | 0 |
python,filesize
| 9,776,500 | 3 | false | 0 | 0 |
no. 1 is definitely the worst. If at all, it's better to seek() and tell(), but that's not as good as the other two.
no. 2 and no. 3 are equally ok IMO. I think no. 3 is a bit clearer to read, but that's negligible.
| 2 | 1 | 0 |
There are actually three ways I have in mind to determine a file's size:
open and read it, and get the size of the string with len()
using os.stat and getting it via st_size -> which should be the "right" way because it's handled by the underlying OS
os.path.getsize, which should be the same as above
So what is the actual right way to determine the filesize? What is the worst way?
Or doesn't it even matter, because in the end it is all the same?
(I can imagine the first method having a problem with really large files, while the other two do not)
|
Whats the best way to get the filesize?
| 0.197375 | 0 | 0 | 3,713 |
9,776,425 |
2012-03-19T19:16:00.000
| 4 | 0 | 1 | 0 |
python,filesize
| 9,777,252 | 3 | false | 0 | 0 |
Method 1 is the slowest way possible. Don't use it unless you will need the entire contents of the file as a string later.
Methods 2 and 3 are the fastest, since they don't even have to open the file.
Using f.seek(0, os.SEEK_END) and f.tell() requires opening the file, and might be a bit slower than 2 & 3 unless you're going to open the file anyway.
All methods will give the same result when no other program is writing to the file. If the file is in the middle of being modified when your code runs, seek+tell can sometimes give you a more up-to-date answer than 2&3.
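For reference, the three approaches side by side (the file name is a placeholder):
import os

size_a = os.path.getsize("big.bin")        # method 3: no open() needed
size_b = os.stat("big.bin").st_size        # method 2: same value via stat()

with open("big.bin", "rb") as f:           # only worth it if the file is open anyway
    f.seek(0, os.SEEK_END)
    size_c = f.tell()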
| 2 | 1 | 0 |
There are actually three ways I have in mind to determine a files size:
open and read it, and get the size of the string with len()
using os.stat and getting it via st_size -> what should be the "right" way because its handled by the underlying os
os.path.getsize what should be the same as above
So what is the actual right way to determine the filesize? What is the worst way to do?
Or doesn't it even matter because at the end it is all the same?
(I can imagine the first method having a problem with really large files, while the two others have not)
|
Whats the best way to get the filesize?
| 0.26052 | 0 | 0 | 3,713 |
9,776,529 |
2012-03-19T19:24:00.000
| 0 | 1 | 0 | 1 |
android,python,android-intent,android-emulator,monkeyrunner
| 10,211,905 | 1 | false | 0 | 0 |
wpa_cli should work. Open wpa_cli and then run:
add_network
set_network ssid "APSSID"
set_network key_mgmt NONE   (if the AP is configured as open/none)
save_config
enable
This set of commands should work if Wi-Fi is turned ON in the UI.
With monkeyrunner, navigating via keycodes is the only option, OR
you need to make an APK for your specific operations.
| 1 | 1 | 0 |
I am trying to connect an Android device to a specific AP without keycodes. I am looking for adb shell commands or a monkeyrunner script that can perform this.
Hope you guys can help me with this.
PS: After researching for days, the only way I found is using wpa_cli in the adb shell, but I couldn't actually connect because I was not able to find the exact commands.
|
How to connect android device to specific AP with adb shell or monkeyrunner
| 0 | 0 | 0 | 799 |
9,777,879 |
2012-03-19T21:01:00.000
| 2 | 0 | 0 | 0 |
python,django,wizard,django-formwizard
| 9,778,474 | 3 | false | 1 | 0 |
What do you want to do ?
If you want to create a wizard where step x is repeated n times then answer is yes, you can do that and it is not that hard.
You just need to create a wizard class factory that creates the class given specific parameters and you're done.
In case you mean, can I change the steps of a wizard on-the-fly.
answer is still yes but then things will get a bit more complicated than that since you will have to change the internal state of the wizard after its initialization.
This is not fun at all, if you really need the second option I really suggest to think about it, try to find an alternative design and choose the dynamic wizard approach as last resort.
| 1 | 3 | 0 |
Is it possible for the steps of the wizard to be dynamic? For example, can the second step occur repeatedly, n times?
|
Dynamic number of Steps using Django Wizard
| 0.132549 | 0 | 0 | 1,832 |
9,778,006 |
2012-03-19T21:11:00.000
| 1 | 0 | 0 | 0 |
python,encryption,cryptography,toolkit
| 9,821,078 | 5 | true | 0 | 0 |
I ended up using M2Crypto after trying PyOpenSSL.
The problem with PyOpenSSL is that it doesn't have a method to return a public key; I was having a lot of trouble with this.
M2Crypto has its own encryption methods as well, meaning you don't need to install multiple libraries :)
| 1 | 2 | 0 |
I have an assignment to create a secure communication between 2 people with a middle man.
The messages have to be encrypted using public and private keys and a X.509 certificate should be created for each user, this certificate is stored by the third party.
I'm currently sending messages between users through sockets.
Could someone suggest an easy to understand library that I could use to perform simple encryption? Any appropriate reading sources about the library will help as well.
|
Python open source cryptographic toolkit
| 1.2 | 0 | 1 | 994 |
9,779,200 |
2012-03-19T23:00:00.000
| 1 | 1 | 1 | 1 |
python,daemon
| 9,779,553 | 3 | false | 0 | 0 |
I've written many things in C/C++ and Perl that are initiated when a LINUX box O.S. boots, launching them using the rc.d.
Also I've written a couple of java and python scripts that are started the same way I've mentioned above, but I needed a little shell-script (.sh file) to launch them and I used rc.5.
Let me tell you that your concerns about their runtime environments are completely valid; you will have to be careful about which runlevel you use... (only from rc.2 to rc.5, because rc.1 and rc.6 are for the system).
If the runlevel is too low, the python runtime might not be up at the time you are launching your program and it could flop. e.g.: In a LAMP Server MySQL and Apache are started in rc.3 where the Network is already available.
I think your best shot is to make your script in python and launch it using a .sh file from rc.5.
Good luck!
| 2 | 18 | 0 |
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable.
I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it.
The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
|
Is writing a daemon in Python a good idea?
| 0.066568 | 0 | 0 | 2,480 |
9,779,200 |
2012-03-19T23:00:00.000
| 14 | 1 | 1 | 1 |
python,daemon
| 9,779,293 | 3 | true | 0 | 0 |
I've written a number of daemons in Python for my last company. The short answer is, it works just fine. As long as the code itself doesn't have some huge memory bomb, I've never seen any gradual degradation or memory hogging. Be mindful of anything in the global or class scopes, because they'll live on, so use del more liberally than you might normally. Otherwise, like I said, no issues I can personally report.
And in case you're wondering, they ran for months and months (let's say 6 months usually) between routine reboots with zero problems.
| 2 | 18 | 0 |
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable.
I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it.
The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
|
Is writing a daemon in Python a good idea?
| 1.2 | 0 | 0 | 2,480 |
9,780,717 |
2012-03-20T02:43:00.000
| 2 | 0 | 1 | 1 |
python,macos,pip,python-2.6
| 40,450,261 | 36 | false | 0 | 0 |
(Context: My OS is Amazon linux using AWS. It seems similar to RedHat but it's stripped down a bit, it seems.)
Exit the shell, then open a new shell. The pip command now works.
That's what solved the problem at this location.
You might want to know as well: The pip commands to install software then needed to be written like this example (jupyter for example) to work correctly on my system:
pip install jupyter --user
Specifically, note the lack of sudo, and the presence of --user
Would be real nice if pip docs had said anything about all this, but that would take typing in more characters I guess.
| 10 | 580 | 0 |
I downloaded pip and ran python setup.py install and everything worked just fine. The very next step in the tutorial is to run pip install <lib you want> but before it even tries to find anything online I get an error "bash: pip: command not found".
This is on Mac OS X, which I'm new to, so I'm assuming there's some kind of path setting that was not set correctly when I ran setup.py. How can I investigate further? What do I need to check to get a better idea of the exact cause of the problem?
EDIT: I also tried installing Python 2.7 for Mac in the hopes that the friendly install process would do any housekeeping like editing PATH and whatever else needs to happen for everything to work according to the tutorials, but this didn't work. After installing, running 'python' still ran Python 2.6 and PATH was not updated.
|
bash: pip: command not found
| 0.011111 | 0 | 0 | 1,748,193 |