Q_Id
int64 337
49.3M
| CreationDate
stringlengths 23
23
| Users Score
int64 -42
1.15k
| Other
int64 0
1
| Python Basics and Environment
int64 0
1
| System Administration and DevOps
int64 0
1
| Tags
stringlengths 6
105
| A_Id
int64 518
72.5M
| AnswerCount
int64 1
64
| is_accepted
bool 2
classes | Web Development
int64 0
1
| GUI and Desktop Applications
int64 0
1
| Answer
stringlengths 6
11.6k
| Available Count
int64 1
31
| Q_Score
int64 0
6.79k
| Data Science and Machine Learning
int64 0
1
| Question
stringlengths 15
29k
| Title
stringlengths 11
150
| Score
float64 -1
1.2
| Database and SQL
int64 0
1
| Networking and APIs
int64 0
1
| ViewCount
int64 8
6.81M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
32,256,361 | 2015-08-27T17:57:00.000 | 1 | 1 | 1 | 0 | python,mocking,patch | 32,256,435 | 2 | false | 0 | 0 | There is no such requirement, and yes, everything is an object in Python.
It is nothing more than a style choice; do you import the module yourself or does patch take care of this for you? Because that's the only difference between the two approaches; either patch() imports the module, or you do.
For the code-under-test, I prefer mock.patch() to take care of this, as this ensures that the import takes place as the test runs. This ensures I get a test error state (test failure), rather than problems while loading the test. All other modules are fair game. | 1 | 1 | 0 | I have been writing unit tests for over a year now, and have always used patch.object for pretty much everything (modules, classes, etc).
My coworker says that patch.object should never be used to patch an object in a module (i.e. patch.object(socket, 'socket'), instead you should always use patch('socket.socket').
I much prefer the patch.object method, as it allows me to import modules and is more pythonic in my opinion. Is my coworker right?
Note: I have looked through the patch documentation and can't find any warnings on this subject. Isn't everything an object in python? | python mocking: mock.patch.object gotchas | 0.099668 | 0 | 0 | 315 |
32,260,031 | 2015-08-27T21:50:00.000 | 0 | 0 | 0 | 0 | python,django,mongodb | 32,260,215 | 1 | false | 1 | 0 | If you are using mongoengine, there is no need of django-nonrel.You can directly use django latest versions. | 1 | 0 | 0 | Anybody know of any currently worked on projects that wire up MongoDB to the most recent version of Django? mongoengine's Django module github hasn't been updated in 2 years (and I don't know if I can use its regular module with Django) and django-nonrel uses Django 1.6. Anybody tried using django-nonrel with Django 1.8? | Django: MongoDB engine for Django 1.8 | 0 | 1 | 0 | 210 |
32,260,884 | 2015-08-27T23:13:00.000 | 0 | 0 | 0 | 1 | google-api,google-drive-api,google-api-python-client | 32,260,966 | 1 | false | 1 | 0 | I believe that this is a limit that google sets to stop people spamming the service and tying it up. It doesn't have anything to do with your app itself but is set on the Google server side. If the Google server receives over a particular number of requests within a certain time this is the error you get. There is nothing you can do in your app to overcome this. You can talk to Google about it and usually paying for Google licenses ect can allow you much higher limits before being restricted. | 1 | 1 | 0 | I have a huge amount of users and files in a Google Drive domain. +100k users, +10M of files. I need to fetch all the permissions for these files every month.
Each user have files owned by themselves, and files shared by other domain users and/or external users (users that don't belong to the domain). Most of the files are owned by domain users. There is more than 7 millions of unique files owned by domain users.
My app is a backend app, which runs with a token granted by the domain admin user.
I think that doing batch requests is the best way to do this. Then, I configured my app to 1000 requests per user, in google developer console.
I tried the following cases:
1000 requests per batch, up to 1000 per user -> lots of user rate limits
1000 requests per batch, up to 100 per user -> lots of rate limit errors
100 requests per batch, up to 100 per user -> lots of rate limit errors
100 requests per batch, up to 50 per user -> lots of rate limits errors
100 requests per batch, up to 10 per user -> not errors anymore
I'm using quotaUser parameter to uniquely identify each user in batch requests.
I checked my app to confirm that each batch was not going to google out of its time. I checked also to see if each batch have no more than the limit of file_id configured to fetch. Everything was right.
I also wait each batch to finish before sending the next one.
Every time I see a 403 Rate Limit Exceeded, I do an exponential backoff. Sometimes I have to retry after 9 steps, which is 2**9 seconds waiting.
So, I can't see the point of Google Drive API limits. I'm sure my app is doing everything right, but I can't increase the limits to fetch more permissions per second. | Can't find a way to deal with Google Drive API 403 Rate Limit Exceeded | 0 | 0 | 0 | 884 |
32,261,516 | 2015-08-28T00:27:00.000 | 1 | 0 | 1 | 0 | python,reverse-engineering,class-diagram,enterprise-architect,visual-paradigm | 32,267,378 | 2 | false | 0 | 0 | It's possible, certainly. There may not be anything in the code that the reverse-engineering process recognizes as a UML relationship. Precisely what that would be depends on the language and tool, since there are no standardized UML profiles for any implementation languages.
UML is fundamentally object-oriented, but in Python, object-orientation is optional. If the code doesn't use classes, there isn't much for UML to work with. Python's dynamic typing also makes it tricky to deduce the types of variables from the source code, which means it's hard for the UML tool to identify associations.
In EA, there are some options you can play with under Tools -- Options -- Source Code Engineering. On that page there's "Create dependencies for operation returns and parameter types," which I believe is off by default. But since EA treats all Python types as var I don't think this would have much of an effect.
There are further options per language, but I don't think there's anything in the Python section that affects relationships. | 2 | 2 | 0 | I am trying to work on a Python project thant is not documented.
I did a reverse engineering to get the Class diagram 2 times in windows environment : with sparx EA and with visual paradigm.
But in both cases I got a class diagram with classes without relationships (even if i did configure the process to generate them). is it possible or is there a problem ? | class diagram without relationships after python reverse engineering | 0.099668 | 0 | 0 | 524 |
32,261,516 | 2015-08-28T00:27:00.000 | 2 | 0 | 1 | 0 | python,reverse-engineering,class-diagram,enterprise-architect,visual-paradigm | 32,266,569 | 2 | false | 0 | 0 | Almost all tools (for myself I have expertise in EA and backdated in RSA) have difficulties showing relations between classes. Basically they RE the structure (files/packages) and the operations/properties of the single classes. In some cases you also will get relations but as said: this is limited.
Anyhow, if you are going to understand the code it's good practice to complete/correct the missing relations between the classes and thereby adding comments as well. | 2 | 2 | 0 | I am trying to work on a Python project thant is not documented.
I did a reverse engineering to get the Class diagram 2 times in windows environment : with sparx EA and with visual paradigm.
But in both cases I got a class diagram with classes without relationships (even if i did configure the process to generate them). is it possible or is there a problem ? | class diagram without relationships after python reverse engineering | 0.197375 | 0 | 0 | 524 |
32,268,889 | 2015-08-28T10:28:00.000 | 2 | 0 | 0 | 1 | python-2.7,centos6,zebra-printers,barcode-printing | 32,270,859 | 1 | false | 0 | 0 | If you are printing to a network printer open a TCP connection to port 9100. If you are printing to a USB printer look up a USB library for python.
Once you have a connection send a print string formatted in ZPL. Look on the Zebra site for the ZPL manual. There are examples in there on how to print a barcode.
Normal Linux drivers will print graphics and text but do not have a barcode font. | 1 | 1 | 0 | I want to print Barcode on my Zebra Desktop label printer on CentOS 6.5 but I did not find any python drivers for that and did not found script so that i can use in my project.
Does anyone know how to print Barcode in Zebra printer? | How to print barcode in Centos 6 using python | 0.379949 | 0 | 0 | 281 |
32,278,459 | 2015-08-28T19:24:00.000 | 0 | 0 | 0 | 0 | javascript,python,ajax,jsp,jsp-tags | 32,278,473 | 1 | false | 1 | 0 | The developers console ( F12 in Chrome and Firefox) is a wonderful thing.
Check the Network or Net tab. There you can see all the requests between your browser and your server. | 1 | 1 | 0 | I have a JSP page called X.JSP (contains few radio button and submit button), when i hit the submit button in X.JSP, the next page is displayed Y.JSP?xxxx=1111&yyyy=2222&zzzz=3333
how to know what page or service or ajax call is being made when i hit the submit button in X.JSP page.
xxxx=1111&yyyy=2222&zzzz=3333 these are generated after i click the submit button in X.JSP
Currently i am using python to script.
i select a radio button and post the form. i am not able to get the desired O/P.
how do I what page or service or ajax call is being made when i hit the submit button in X.JSP page, so that i can directly hit that page
or is there any better way to solve this | how to know what page or call is being when in a JSP page when submited | 0 | 0 | 1 | 49 |
32,278,654 | 2015-08-28T19:39:00.000 | 0 | 0 | 1 | 0 | python | 32,278,961 | 4 | false | 0 | 0 | Python is object oriented. This means we have "objects", which basically enclose their own data, and have their own methods. a String is an example of an object. Another example would be if you have a Person object. You can't just do walk(), you have to do Miles.walk(). You could try walk(Miles). But not everything can walk, so we make the function walk() specific to Person objects.
So yes, Python creators could have made capitalize('str') legal, but they decided to make the capitalize function specific to String objects. | 2 | 3 | 0 | I am a little confused about functions in Python, and how they are classified. For one, we have functions like print(), that simply encode some instructions and act on input. But also, we have functions like 'str'.capitalize(), that can only act when they have an "executor" attached to them. This might not be a well-informed question, but what are the differences between these forms, and how are they classified? | Difference between functions that act alone and those with the "dot" operator | 0 | 0 | 0 | 214 |
32,278,654 | 2015-08-28T19:39:00.000 | 1 | 0 | 1 | 0 | python | 32,279,097 | 4 | false | 0 | 0 | Python is a multi paradigm language that you can write structural and object oriented. Python has built-in functions and built-in classes; for example when you use sequence of characters between two quotation mark (') you instantiate string class.This instance called object. Objects may contain functions or/and other objects. you can access internal functions or object with DOT. | 2 | 3 | 0 | I am a little confused about functions in Python, and how they are classified. For one, we have functions like print(), that simply encode some instructions and act on input. But also, we have functions like 'str'.capitalize(), that can only act when they have an "executor" attached to them. This might not be a well-informed question, but what are the differences between these forms, and how are they classified? | Difference between functions that act alone and those with the "dot" operator | 0.049958 | 0 | 0 | 214 |
32,281,130 | 2015-08-28T23:23:00.000 | 2 | 0 | 0 | 0 | python-2.7,machine-learning,statistics,logistic-regression | 32,281,179 | 1 | true | 0 | 0 | How much of a problem it is depends on the nature of your data. The bigger issue will be that you simply have a huge class imbalance (50 As for every B). If you end up getting good classification accuracy anyway, then fine - nothing to do. What to do next depends on your data and the nature of the problem and what is acceptable in a solution. There really isn't a dead set "do this" answer for this question. | 1 | 1 | 1 | Currently, I am trying to implement a basic logistic regression algorithm in Python to differentiate between A vs. B.
For my training and test data, I have ~50,000 samples of A vs. 1000 samples of B. Is this a problem if I use half the data of each to train the algorithm and the other half as testing data (25000 train A, 500 train B and so on for testing accuracy).
If so, how can I overcome this problem. Should I consider resampling, doing some other "fancy stuff". | Computational Logistic Regression With Python, Different Sample Sizes | 1.2 | 0 | 0 | 69 |
32,284,334 | 2015-08-29T08:56:00.000 | 5 | 0 | 1 | 0 | python,regex | 32,284,387 | 6 | false | 0 | 0 | The regular expression modifier A|B means that "if either A or B matches, then the whole thing matches". So in your case, the resulting regular expression matches if/where any of the following 5 regular expressions match:
\ba\b
\bthe\b
\bone\b \breason\b
reasons\b \bfor\b
\bof\b
To limit the extent to which | applies, use the non-capturing grouping for this, that is (?:something|something else). Also, for having an optional s at the end of reason you do not need to use alteration; this is exactly equal to reasons?.
Thus we get the regular expression \b(?:a|the|one) reasons? (?:for|of)\b.
Note that you do not need to use the word boundary operators \b within the regular expression, only at the beginning and end (otherwise it would match something like everyone reasons forever). | 1 | 4 | 0 | We know \ba\b|\bthe\b will match either word "a" or "the"
I want to build a regex expression to match a pattern like
a/the/one reason/reasons for/of
Which means I want to match a string s contains 3 words:
the first word of s should be "a", "the" or "one"
the second word should be "reason" or "reasons"
the third word of s should be "for" or "of"
The regex \ba\b|\bthe\b|\bone\b \breason\b|reasons\b \bfor\b|\bof\b doesn't help.
How can I do this? BTW, I use python. Thanks. | Python regex: Alternation for sets of words | 0.16514 | 0 | 0 | 7,605 |
32,284,815 | 2015-08-29T09:59:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 36,967,238 | 1 | false | 1 | 0 | From inside the spider:
self.crawler.settings.get("JOBDIR") | 1 | 0 | 0 | I'm running scrapy like this
scrapy crawl somespider -s JOBDIR=crawls/somespider-1 -a input_data=data
(For maintaining the Job state)
When something unexpected happens (eg. Connection lost)
A CloseSpider exception is raised and the spider is later scheduled to run as a cron job
I usually pass **kwargs inside __init__ to the new spider crawl
However JOBDIR is'nt found inside **kwargs
Is there any way i can access this value from inside the spider? | Scrapy/Python: How to get JOBDIR setting from inside of spider? | 0 | 0 | 0 | 562 |
32,284,938 | 2015-08-29T10:15:00.000 | 7 | 0 | 1 | 0 | python,windows,download,wxpython | 48,046,623 | 10 | false | 0 | 1 | 3 steps to install wx-widgets and pygame in python IDLE
Install python 3xxx in your system opting (Add 3xxx to your path).
open python CLI to see whether python is working or not.
then open command prompt (CMD).
type PIP to see whether pip is installed or not.
enter command : pip install wheel
enter command : pip install pygame
To install wxpython
enter command : pip install -U wxPython
Thats all !! | 2 | 20 | 0 | So I was looking around at different things to do on Python, like code for flashing text or a timer, but when I copied them into my window, there were constant syntax errors. Now, maybe you're not meant to copy them straight in, but one error I got was 'no module named wx'. I learned that I could get that module by installing wxPython. Problem is, I've tried all 4 options and none of them have worked for me. Which one do I download and how do I set it up using Windows?
Thanks | How to properly install wxPython? | 1 | 0 | 0 | 91,741 |
32,284,938 | 2015-08-29T10:15:00.000 | 0 | 0 | 1 | 0 | python,windows,download,wxpython | 54,313,732 | 10 | false | 0 | 1 | Check the version of wxpython and the version of python you have in your machine.
For python 2.7 use wxPython3.0-win32-3.0.2.0-py27 package | 2 | 20 | 0 | So I was looking around at different things to do on Python, like code for flashing text or a timer, but when I copied them into my window, there were constant syntax errors. Now, maybe you're not meant to copy them straight in, but one error I got was 'no module named wx'. I learned that I could get that module by installing wxPython. Problem is, I've tried all 4 options and none of them have worked for me. Which one do I download and how do I set it up using Windows?
Thanks | How to properly install wxPython? | 0 | 0 | 0 | 91,741 |
32,285,041 | 2015-08-29T10:32:00.000 | 0 | 1 | 1 | 0 | python,macos,module,installation | 32,285,525 | 1 | true | 0 | 0 | I think there are several different things to take into consideration here.
When you're importing a module, (doing import factorial), Python will look in the defined PATH and try to find the module you're trying to import. In the simplest case, if your module is in the same folder where your script is trying to import it, it will find it. If it's somewhere else, you will have to specify the path.
Now, site-packages is where Python keep the installed libraries. So for example when you do pip install x, Python will put the module x in your site-packages destination and when you try to import it, it will look for it there. In order to manage site-packages better, try to read about virtualenv.
If you want your module to go there, first you need to create a package that you can install. For that look at distutils or all the different alternatives for packaging that involve some type of building process based on a setup file.
I don't want to go into details in any of these points because all of them have been covered before. Just wanted to give you a general idea of where to look for. | 1 | 1 | 0 | I don't know how many duplicates of this are out there but none of those I looked at solved my problem.
To practice writing and installing custom modules I've written a simple factorial module. I have made a factorial folder in my site-packages folder containing factorial.py and an empty __init__.py file.
But typing import factorial does not work. How can I solve this? I also tried pip install factorial but that didn't work either.
Do I have to save my code intended to use factorial in the same folder inside site-packages or can I save it whereever I want?
Greetings
holofox
EDIT: I solved it. Everything was correct as I did it. Had some problems at importing and using it properly in my code... | Install a custom Python 2.7.10 module on Mac | 1.2 | 0 | 0 | 512 |
32,285,927 | 2015-08-29T12:15:00.000 | 69 | 0 | 1 | 0 | python,type-conversion | 32,286,009 | 3 | false | 0 | 0 | There is two methods:
float_number = float ( decimal_number )
float_number = decimal_number * 1.0
First one using the built in function and second one without any function, but takecare about the 1.0 not 1 as it should be float so the result will be float. | 1 | 68 | 0 | I need to convert a Decimal value to float in Python. Is there any method available for this type conversion? | How to do Decimal to float conversion in Python? | 1 | 0 | 0 | 92,406 |
32,286,059 | 2015-08-29T12:30:00.000 | 0 | 0 | 0 | 0 | python,image,web-crawler | 32,286,161 | 1 | true | 1 | 0 | If there is no alt attribute in a tag, or it is empty, check fo attribute name, if not name, check for id. Well, id, when .asp or .aspx for instance, doesn't have to make sense. But, well, as a last resort, use src attribute by getting just the filename without an extension. Sometimes attribute class can also be used, but, well, I don't recomend it. Even id can be very much deceiving.
You will have trouble with JS imposed images, of course, but even that can be solved with a lot of time and will.
As for precautions, what exactly do you mean? Check whether src is really an image or what? | 1 | 0 | 0 | I am writing an image crawler that scrapes the images from a Web page. This done by finding the img tag on the Web page. But recently I noticed, some img tags don't have an alt attribute in it. Is there any way to find the keywords for that particular image?
And are there any precautions for crawling the websites for images? | Crawling and finding keywords for images without any "alt" attribute | 1.2 | 0 | 1 | 278 |
32,286,655 | 2015-08-29T13:37:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 32,286,881 | 2 | false | 0 | 0 | With multiprocessing a primitive way to do this whould be chunck the file into 5 equal pieces and give it to 5 different processes write their results to 5 different files, when all processes are done you will merge the results.
You can have the same logic with Python threads without much complication. And probably won't make any difference since the bottle neck is probably the API. So in the end it does not really matter which approach you choose here.
There are two things two consider though:
Using Threads, you are not realy using multiple CPUs hence you are have "wasted resources"
Using Multiprocessing will use multiple processors but it is heavier on start up ... So you will benefit from never stoping the script and keeping the processes alive if the script needs to run very often.
Since the information you gave about the scenario where you use this script (or better say program) is limited, it really hard to say which is the better approach. | 1 | 0 | 0 | I have created a script that :
Imports a list of IP's from .txt ( around 5K )
Connects to a REST API and performs a query based on the IP ( web logs for each IP)
Data is returned from the API and some calculations are done on the data
Results of calculations are written to a .csv
At the moment it's really slow as it takes one IP at a time does everything and then goes to the next IP .
I may be wrong but from my understanding with threading or multiprocessing i could have 3-4 threads each doing an IP which would increase the speed of the tool by a huge margin . Is my understanding correct and if it is should i be looking at threading or multi-processing for my task ?
Any help would amazing
Random info, running python 2.7.5 , Win7 with plenty or resources. | Python threading or multiprocessing for my 'tool' | 0.099668 | 0 | 1 | 110 |
32,290,233 | 2015-08-29T20:09:00.000 | 10 | 0 | 0 | 0 | python,numpy,scipy | 32,290,589 | 2 | false | 0 | 0 | While numpy returns an array with discrete datapoints, 'interp1d' returns a function. You can use the generated function later in your code as often as you want. Furthermore, you can choose other methods than linear interpoationn | 1 | 22 | 1 | I'm trying to choose between numpy.interp vs scipy.interpolate.interp1d (with kind='linear' of course). I realize they have different interfaces but that doesn't matter much to me (I can code around either interface). I'm wondering whether there are other differences I should be aware of. Thanks. | Choosing between numpy.interp vs scipy.interpolate.interp1d (with kind='linear') | 1 | 0 | 0 | 7,258 |
32,290,371 | 2015-08-29T20:31:00.000 | 0 | 0 | 0 | 0 | javascript,python,mysql,web | 32,290,680 | 1 | false | 0 | 0 | You need to write handler/controller functions to handle each request from the client (the view). The routers will route each request to a specific controller which invoke the code i.e. query the database (via the Model) and return data with the response to the client via that controller. Read more on MVC and frameworks like Flask/Django for more info. | 1 | 0 | 0 | when the client server submit a request in the website,
it triggers a python program in that website's server
that python program scrap data in the Internet, or do its other job.
and return the scrapped data to the user.
thanks | how to trigger a python module in a linux website server when client submit data through website | 0 | 0 | 1 | 22 |
32,297,244 | 2015-08-30T13:51:00.000 | 0 | 0 | 0 | 0 | sql-server,python-2.7,sqlalchemy,pyodbc,pymssql | 32,305,711 | 1 | false | 0 | 0 | Sqlalchemy bulk operations doesn't really inserts a bulk. It's written in the docs.
And we've checked it with our dba.
Thank you we'll try the xml. | 1 | 2 | 0 | My original purpose was to bulk insert into my db
I tried pyodbc, sqlalchemy and ceodbc to do this with executemany function but my dba checked and they execute each row individually.
his solution was to run procedure that recieve table (user defined data type) as parameter and load it into the real table.
The problem is no library seem to support that (or at least no example on the internet).
So my question is anyone try bulk insert before or know how to pass user data defined type to stored procedure?
EDIT
I also tried bulk insert query but the problem it requires local path or share and it will not happend because organizition limits | Python mssql passing procedure user defined data type as parameter | 0 | 1 | 0 | 826 |
32,300,616 | 2015-08-30T20:04:00.000 | 0 | 0 | 0 | 0 | python,apache-spark,cloud | 32,304,062 | 1 | false | 0 | 0 | If your object's aren't picklable your options are pretty limited. If you can create them on the executor side though (frequently a useful option for things like database connections), you can parallelize a regular list (e.g. maybe a list of the constructor parameters) and then use map if your dostuff function returns (picklable) values you want to use or foreach if your dostuff function is called for its side effects (like updating a database or similar). | 1 | 0 | 1 | I try to understand the capabilities of Spark, but I fail to see if the following is possible in Python.
I have some objects that are non Pickable (wrapped from C++ with SWIG).
I have a list of those objects obj_list = [obj1, obj2, ...]
All those objects have a member function called .dostuff
I'd like to parallelized the following loop in Spark (in order to run it on AWS since I don't have a big architecture internally. We could probably use multiprocessing, but I don't think I can easily send the objects over the network):
[x.dostuff() for x in obj_list]
Any pointers would be appreciated. | Passing objects to Spark | 0 | 0 | 0 | 335 |
32,301,928 | 2015-08-30T22:53:00.000 | 3 | 0 | 1 | 0 | python,windows | 32,301,968 | 1 | true | 0 | 0 | Save your module.py file to /documents. Then navigate to /documents in Powershell and start the Python command line by typing python and pressing enter. Then you can import the file with the command import module | 1 | 1 | 0 | I have a python file that I wrote myself, that has several functions in it. I am trying to import it from the python interpreter from within powershell (Windows 7). This is a learning exercise. But it isn't importing, it's throwing an error that the module does not exist. My question is where do I need to save my .py file to? Is there a specific place that python looks for modules to be imported? I'm using python2.7.10.
This isn't a module from the standard library or a third party that I've downloaded. It is a python file that I am importing so that I can call functions from it...
It's answered now, I was just calling python from withion the wrong directory. I have to call python from the same directory that the file is saved in.
Thank you guys. | Where do I save a .py file so that I can import it from python interpreter ( from powershell ) | 1.2 | 0 | 0 | 4,095 |
32,303,115 | 2015-08-31T02:14:00.000 | 0 | 0 | 0 | 0 | python,peewee,datefield | 32,429,590 | 1 | false | 0 | 0 | You need to store the data in the database using the format %Y-%m-%d. When you extract the data you can present it in any format you like, but to ensure the data is sorted correctly (and recognized by SQLite as a date) you must use the %Y-%m-%d format (or unix timestamps if you prefer that way). | 1 | 0 | 0 | I would like to read a column of date from SQL database. However, the format of the date in the database is something like 27-Jan-13 which is day-month-year. When I read this column using peewee DateField it is read in a format which cannot be compared later using datetime.date.
Can anyone help me solve the issue? | Reading DateField with specific format from SQL database using peewee | 0 | 1 | 0 | 366 |
32,304,781 | 2015-08-31T05:57:00.000 | 0 | 1 | 0 | 0 | python,html,hyperlink | 32,305,050 | 1 | false | 1 | 0 | You need to one of:
Attach A.html as well as report.html,
Post A.html to a shared location such as Google drive and modify the link to point to it, or
Put the content of A.html into a hidden <div> with a show method. | 1 | 0 | 0 | I am trying to embed a A.html file inside B.html(report) file as a hyperlink.
Just to be clear, both html files are offline and available only on my local machine. and at last the report html file will be sent over email.
The email recipient will not be having access to my local machine. So if they click on hyperlink in report.html, they will get "404 - File or directory not found".
Is there any way to embed A.html inside report.html, so that email recipient can open A.html from report.html on their machine | Embed one html file inside other report html file as hyperlink in python | 0 | 0 | 1 | 78 |
32,305,131 | 2015-08-31T06:29:00.000 | 0 | 0 | 0 | 0 | python,oracle,mongodb | 32,305,243 | 1 | false | 0 | 0 | To answer your question directly:
1. Connect to Oracle
2. Fetch all the delta data by timestamp or id (first time is all records)
3. Transform the data to json
4. Write the json to mongo with pymongo
5. Save the maximum timestamp / id for next iteration
Keep in mind that you should think about the data model considerations and usually relational DB (like Oracle) and document DB (like mongo) will have different data model. | 1 | 1 | 0 | I know there are various ETL tools available to export data from oracle to MongoDB but i wish to use python as intermediate to perform this. Please can anyone guide me how to proceed with this?
Requirement:
Initially i want to add all the records from oracle to mongoDB and after that I want to insert only newly inserted records from Oracle into MongoDB.
Appreciate any kind of help. | Export from Oracle to MongoDB using python | 0 | 1 | 0 | 889 |
32,305,936 | 2015-08-31T07:22:00.000 | 0 | 0 | 1 | 1 | python | 32,323,128 | 2 | true | 0 | 0 | npyscreen has CallSubShell function which allows to execute a command line program. The CallSubShell is actually switching from curses mode to normal mode and executes the command using os.system then at the end of the command execution switches back to curses mode.
Note:- I was not able to make the standard input working properly in the command execution. Also you may want to clear the screen before calling CallSubShell. | 2 | 0 | 0 | I have a npyscreen program which has set of options and also I have another normal python command line program which interacts with user by asking yes/no question(s) like a wizard.
I want to integrate the normal python command line program in to the npyscreen program, so when user selects a option I want to run this normal python program. I do not want to reimplement the whole normal python command line program into npyscreen.
Is there anyway to do?
I found one function "npyscreen.CallSubShell" but didn't find any example code and much help in the documentation about this function.
Thanks in advance for any help.
/Shan | running command line program from npyscreen select option | 1.2 | 0 | 0 | 903 |
32,305,936 | 2015-08-31T07:22:00.000 | 0 | 0 | 1 | 1 | python | 32,691,384 | 2 | false | 0 | 0 | Thanks for the solution Shan. This works for me. Also as you said, uncommenting curses.endwin() works for scripts that are interactive. | 2 | 0 | 0 | I have a npyscreen program which has set of options and also I have another normal python command line program which interacts with user by asking yes/no question(s) like a wizard.
I want to integrate the normal python command line program in to the npyscreen program, so when user selects a option I want to run this normal python program. I do not want to reimplement the whole normal python command line program into npyscreen.
Is there anyway to do?
I found one function "npyscreen.CallSubShell" but didn't find any example code and much help in the documentation about this function.
Thanks in advance for any help.
/Shan | running command line program from npyscreen select option | 0 | 0 | 0 | 903 |
32,308,397 | 2015-08-31T09:47:00.000 | 1 | 0 | 0 | 0 | pythonanywhere | 32,329,235 | 2 | false | 1 | 0 | The most likely issue is that you don't have a web app at the domain that you're trying to access. For instance, if you've added the CNAME to www.mydomain.com, you must have a web app at www.mydomain.com. The fact that you're getting a "coming soon" page suggests that the CNAME is correctly set up to go to PythonAnywhere. | 1 | 0 | 0 | I have an app deployed on pythonanywhere and setup to use a custom domain. I'm in the process of getting the domain and I wanted to ask if there is a way to access my application via the CNAME webapp-xxxxxx.pythonanywhere.com which has been provided by pythonanywhere. Currently trying to access it takes me to the coming soon page.
Thank you. | Access my web app with CNAME | 0.099668 | 0 | 0 | 1,566 |
32,311,385 | 2015-08-31T12:33:00.000 | 0 | 0 | 0 | 0 | python,webserver,python-requests | 32,314,458 | 1 | false | 1 | 0 | I don't know specifically Python APIs but, for sure, HTTP specification does not allow a single GET request to fetch multiple resources. To each request always corresponds one and only one resource in response. This is intrinsic of the protocol.
In some situations you have to do many request to obtain a single resource, or a part of it, as happens with range requests. But also in this case every request has only one response which is finally used by the client to assemble the complete final resource. | 1 | 0 | 0 | I am using python requests right now but if there is a way to do this then it would be a game changer... Specifically i want to download a bunch of pdf's from one web site. I have the urls to the pages i want. Can i grab more then one at a time? | Is there a way to fetch multiple urls(chunks) from a web server with one GET request? | 0 | 0 | 1 | 96 |
32,311,470 | 2015-08-31T12:37:00.000 | 0 | 0 | 0 | 1 | python,django,nginx,websocket | 32,324,682 | 2 | false | 1 | 0 | Just needed to change the port... May be this will help somebody. | 2 | 0 | 0 | I have a Django app, with a real-time chat using tornado, redis and WebSockets. Project is running on the ubuntu server. On my local server everything is working good, but doesn't work at all on production server. I get an error
WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
privatemessages.js:232
close dialog ws
I have tried to change nginx configuration, settings.py, tried to open the 8888 port, but still no result. | WebSockets connection refused | 0 | 0 | 0 | 2,943 |
32,311,470 | 2015-08-31T12:37:00.000 | 0 | 0 | 0 | 1 | python,django,nginx,websocket | 32,313,073 | 2 | false | 1 | 0 | Seems to be you are using WebSockets as a separate service, so try to add the Access-control-origins add_header Access-Control-Allow-Origin *; | 2 | 0 | 0 | I have a Django app, with a real-time chat using tornado, redis and WebSockets. Project is running on the ubuntu server. On my local server everything is working good, but doesn't work at all on production server. I get an error
WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
privatemessages.js:232
close dialog ws
I have tried to change nginx configuration, settings.py, tried to open the 8888 port, but still no result. | WebSockets connection refused | 0 | 0 | 0 | 2,943 |
32,311,986 | 2015-08-31T13:04:00.000 | 0 | 0 | 0 | 0 | java,python,amazon-sqs | 32,312,184 | 1 | false | 1 | 0 | You Can manually check by visiting your aws console, whether your message are in proper format. Verify that. | 1 | 0 | 0 | I am working with SQS. I am sending messages from a java code and receiving it from a python script. I am sending json object in string form using JSONObject.toString(). Sometimes python script receive the proper string but sometimes it get the message in following format:
���'��eq��z��߭��N��n6�N��~��~��m=���v+���Myӟ=���e�M�ߟv�۽y�����8��w��;�M��N�۞�㾹뾷�n���7�}7�o=��4۽����߾v��6��<�}7�}4�ν��=���߾{��}�n6���߭��^������~���|�]��N��~��κ�����y�������^��}��M��θ��:�^�����_|߮6��5�^�q��z�ږǫiخ�����n�Wږǭʗ�9�������F���8�����4�N�u��q�������_o�<���o�Zo�<�n�뗷 | Not receiving string messages from SQS | 0 | 0 | 0 | 34 |
32,313,826 | 2015-08-31T14:37:00.000 | 1 | 0 | 1 | 1 | python,mingw,distutils | 32,594,079 | 1 | true | 0 | 0 | Ok, figured it out. If you run the python.exe included with winpython it doesn't set the environment variables and so won't find gcc. If you run the special WinPython.exe it will set the variables and everything works fine. | 1 | 1 | 0 | I'm trying out WinPython as an option to recommend to users who need to run my Python software. Crucially, distutils needs to work with MinGW.
WinPython includes mingwpy and provides a gcc.exe in the Python scripts directory. When checking os.environ I can see that this directory is added to the (temporary) path environment variable.
Unfortunately, distutils still can't find gcc. Does anyone know if there is a way to make distutils find the included gcc file without making changes to the system? | Distutils can't find gcc from mingwpy in WinPython | 1.2 | 0 | 0 | 596 |
32,316,088 | 2015-08-31T16:47:00.000 | 7 | 0 | 0 | 1 | python,mysql,google-cloud-datastore | 33,367,328 | 3 | true | 1 | 0 | There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.
You could always write a quick script using a local queue that parallelizes the work.
The basic gist would be:
Queuing script pulls data out of your MySQL instance and puts it on a queue.
(Many) Workers pull from this queue, and try to write the item to Datastore.
On failure, push the item back on the queue.
Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading it into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k).
If you have properties (which will be indexed by default), this cost goes up substantially.
The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties).
A single "new entity write" is equivalent to:
+ 2 (1 x 2 write ops fixed cost per new entity)
+ 10 (5 x 2 write ops per indexed property)
= 12 "operations" per entity.
So your actual cost to load this data is:
120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00 | 1 | 6 | 0 | We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.
Eventually we would move all our data there, however initially focusing on the archived data as a test.
Our language of choice is Python, and have been able to transfer data from mysql to the datastore row by row.
We have approximately 120 million rows to transfer and at a one row at a time method will take a very long time.
Has anyone found some documentation or examples on how to bulk insert data into cloud datastore using python?
Any comments, suggestions is appreciated thank you in advanced. | Is it possible to Bulk Insert using Google Cloud Datastore | 1.2 | 1 | 0 | 3,726 |
32,316,480 | 2015-08-31T17:13:00.000 | 0 | 1 | 0 | 0 | python,encoding,kindle,latin1 | 64,088,459 | 1 | false | 1 | 0 | Use <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
I previously used <meta charset="UTF-8" />, which did not seem to work. | 1 | 1 | 0 | Basicly, I'm crawling text from a webpage with python using Beautifulsoup, then save it as an HTML and send it to my Kindle as a mail attachement. The problem is; Kindle supports Latin1(ISO-8859-1) encoding, however the text I'm parsing includes characters that are not a part of Latin1. So when I try to encode text as Latin1 python gives following error because of the illegal characters:
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2019'
in position 17: ordinal not in range(256)
When I try to encode it as UTF-8, this time script runs perfectly but Kindle replaces some incompatible characters with gibberish. | Text Encoding for Kindle with Python | 0 | 0 | 1 | 171 |
32,322,434 | 2015-09-01T01:57:00.000 | 1 | 0 | 1 | 0 | python,api,rest | 32,322,492 | 1 | true | 0 | 0 | You don't need a library for the client. However, developers tend to like libraries because it helps with things like creating authorization headers, creating parameterized URLs and converting response bodies into native types.
However, there are good/bad ways of building these kinds of libraries. Many libraries hide the HTTP API and introduce a whole new set of issues that HTTP interfaces were originally designed to avoid. | 1 | 3 | 0 | I'm kinda confused. So I understand that If we want to grab data from an API, we can just call that API url in whatever language we are using ( example in python, we can do urllib.open( url of api) and then load the json). My question is, if we can just open the url in any language, what's the point of the API libraries that developers usually have on the site ( library wrapper for python, java, c#, ruby, etc). Do we need to use a specific library to call an API in that specific language? Can we not just open up the API url in any language? What's the point of having a library in each language if we can just extract the API in each of those languages? | Difference between API and API library/Wrapper | 1.2 | 0 | 1 | 2,429 |
32,322,503 | 2015-09-01T02:09:00.000 | 1 | 0 | 1 | 0 | python,amazon-web-services,boto,boto3 | 70,191,456 | 2 | false | 0 | 0 | Boto is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3.
while
Boto3 generates the client from a JSON service definition file. The client’s methods support every single type of interaction with the target AWS service. Resources, on the other hand, are generated from JSON resource definition files. Boto3 generates the client and the resource from different definitions | 1 | 158 | 0 | I'm new to AWS using Python and I'm trying to learn the boto API however I noticed that there are two major versions/packages for Python. That would be boto and boto3.
What is the difference between the AWS boto and boto3 libraries? | What is the difference between the AWS boto and boto3 | 0.099668 | 0 | 1 | 51,063 |
32,325,390 | 2015-09-01T06:58:00.000 | 1 | 0 | 1 | 0 | python,mysql,dictionary,scrapy | 32,334,348 | 2 | false | 0 | 0 | If you want to store dynamic data in a database, here are a few options. It really depends on what you need out of this.
First, you could go with a NoSQL solution, like MongoDB. NoSQL allows you to store unstructured data in a database without an explicit data schema. It's a pretty big topic, with far better guides/information than I could provide you. NoSQL may not be suited to the rest of your project, though.
Second, if possible, you could switch to PostgreSQL, and use it's HSTORE column (unavailable in MySQL). The HSTORE column is designed to store a bunch of Key/Value pairs. This column types supports BTREE, GIST, GIN, and HASH indexing. You're going to need to ensure you're familiar with PostgreSQL, and how it differs from MySQL. Some of your other SQL may no longer work as you'd expect.
Third, you can serialize the data, then store the serialized entity. Both json and pickle come to mind. The viability and reliability of this will of course depend on how complicated your dictionaries are. Serializing data, especially with pickle can be dangerous, so ensure you're familiar with how it works from a security perspective.
Fourth, use an "Entity-Attribute-Value" table. This mimics a dictionaries "Key/Value" pairing. You, essentially, create a new table with three columns of "Related_Object_ID", "Attribute", "Value". You lose a lot of object metadata you'd normally get in a table, and SQL queries can become much more complicated.
Any of these options can be a double edged sword. Make sure you've read up on the downfalls of whatever option you want to go with, or, in looking into the options more, perhaps you'll find something that better suits you and your project. | 1 | 1 | 0 | I am doing a mini-project on Web-Crawler+Search-Engine. I already know how to scrape data using Scrapy framework. Now I want to do indexing. For that I figured out Python dictionary is the best option for me. I want mapping to be like name/title of an object (a string) -> the object itself (a Python object).
Now the problem is that I don't know how to store dynamic dict in MySQL database and I definitely want to store this dict as it is!
Some commands on how to go about doing that would be very much appreciated! | How to store a dynamic python dictionary in MySQL database? | 0.099668 | 1 | 0 | 339 |
32,330,346 | 2015-09-01T11:23:00.000 | 1 | 1 | 0 | 0 | python,encoding,gzip,diff | 32,343,370 | 2 | false | 0 | 0 | It is pointless to do a diff of a compressed file if there are any differences, since the entire compressed file will be different after the first difference in the uncompressed data. If there are a small set of differences in the uncompressed data, then the only way to find those is to uncompress the data. | 1 | 0 | 0 | I want to perform DIFF over encoded content (gzip mainly), Is there any way ?
Right now I am decoding the content and performing the diff, it adds a lot of time overhead.
I am using python zlib library for decodig and libdiff for taking diff. | How to perform diff over encoded content | 0.099668 | 0 | 0 | 57 |
32,343,072 | 2015-09-02T00:53:00.000 | 1 | 0 | 0 | 1 | python-2.7 | 40,198,608 | 3 | false | 0 | 0 | go to the command prompt
it will say the account and all that jazz.
type cd ..
then hit enter
it will say C:\Users>
type cd .. again
then it will say C:>
type cd python27 (or the name of your python folder)
it will say C:\Python27>
type cd scripts
it will say C:\python27/scripts>
type easy_install twilio
then wait for it to run the procceses and then you will have twilio installed to python. | 2 | 2 | 0 | I'm new here and also a new Python learner. I was trying to install twilio package through the Windows command line interface, but I got a syntax error(please see below). I know there're related posts, however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that...(I can see the easy_install and pip files in the Scripts folder under Python) Can anyone please help? Thanks in advance!
Microsoft Windows [Version 6.3.9600] (c) 2013 Microsoft Corporation.
All rights reserved.
C:\WINDOWS\system32>python Python 2.7.10 (default, May 23 2015,
09:44:00) [MSC v.1500 64 bit (AMD64)] on wi n32 Type "help",
"copyright", "credits" or "license" for more information.
easy_install twilio File "", line 1
easy_install twilio
^ SyntaxError: invalid syntax | getting "SyntaxError" when installing twilio in the Windows command line interface | 0.066568 | 0 | 0 | 3,665 |
32,343,072 | 2015-09-02T00:53:00.000 | 3 | 0 | 0 | 1 | python-2.7 | 40,895,682 | 3 | false | 0 | 0 | You should not type python first, then it becomes python command line.
Open a new command prompt and directly type:
easy_install twilio | 2 | 2 | 0 | I'm new here and also a new Python learner. I was trying to install twilio package through the Windows command line interface, but I got a syntax error(please see below). I know there're related posts, however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that...(I can see the easy_install and pip files in the Scripts folder under Python) Can anyone please help? Thanks in advance!
Microsoft Windows [Version 6.3.9600] (c) 2013 Microsoft Corporation.
All rights reserved.
C:\WINDOWS\system32>python Python 2.7.10 (default, May 23 2015,
09:44:00) [MSC v.1500 64 bit (AMD64)] on wi n32 Type "help",
"copyright", "credits" or "license" for more information.
easy_install twilio File "", line 1
easy_install twilio
^ SyntaxError: invalid syntax | getting "SyntaxError" when installing twilio in the Windows command line interface | 0.197375 | 0 | 0 | 3,665 |
32,343,180 | 2015-09-02T01:10:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework,django-rest-auth | 34,567,070 | 1 | false | 1 | 0 | djoser supports only Basic Auth and the token auth by Django Rest Framework. What you can do is to make use of login and logout from Django OAuth Toolkit and then djoser views such as like register, password reset. | 1 | 0 | 0 | When developing a Django project, many third party authentication packages are available, for example:
Django OAuth Toolkit, OAuth 2.0 support.
Djoser, provides a set of views to handle basic actions such as registration, login, logout, password reset and account activation.
Currently, I just want to support basic actions registration, login and so on. So Djoser could be my best choice.
But if I want to support OAuth 2.0 later, I will have two tokens, one is from Djoser, and another is from Django OAuth Toolkit. I just got confused here, how to handler two tokens at the same time?
Or should I just replace Djoser with Django OAuth Toolkit, if so, how to support basic actions such as registration? | Multiple authentication app support | 0 | 0 | 0 | 308 |
32,345,638 | 2015-09-02T06:00:00.000 | 4 | 0 | 0 | 0 | python,matlab,simulink | 32,347,770 | 3 | false | 0 | 0 | Until now there is no library like Simulink in Python. The closest match is the Modelica language with OpenModelica and a python implementation JModelica. | 1 | 14 | 1 | I have been searching and found many libraries (scipy, numpy, matplotlib) for Python that lets a user easily shift from MATLAB to Python. However, I am unable to find any library that is related to the Simulink in MATLAB. I would like to know if such a library exists or something else that resembles Simulink in it's GUI and computation features. | Simulink for Python | 0.26052 | 0 | 0 | 28,386 |
32,346,261 | 2015-09-02T06:44:00.000 | 0 | 1 | 0 | 0 | python,apache,tomcat6,.htpasswd,ckan | 32,395,525 | 1 | false | 1 | 0 | You are using nginx, arent you?
So you can make the athentification with nginx with just adding two lines to one file and creating a password file.
In /etc/nginx/sites-available/ckan add following lines:
auth_basic "Restricted";
auth_basic_user_file /filedestination;
then create create a file at your filedestination with following content:
USERNAME:PASSWORD
The password must be in md5.
Have fun with ckan! | 1 | 0 | 0 | I have included Location header in My Virtual Host file.
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/.htpasswd
require valid-user
Also created user to access the domain. but the user i have created using htpasswd is not allow other user to make any activity in CKAN Instance.
anyone Have an idea..Please let me know | How to use htpasswd For CKAN Instance | 0 | 0 | 0 | 75 |
32,348,812 | 2015-09-02T08:59:00.000 | 3 | 0 | 0 | 0 | python,amazon-web-services,file-upload,amazon-s3,boto | 32,352,584 | 1 | true | 1 | 0 | There is no API in S3 to retrieve a part of a multi-part upload. You can list the parts but I don't believe there is any way to retrieve an individual part once it has been uploaded.
You can re-upload a part. S3 will just throw away the previous part and use the new one in it's place. So, if you had the old and new versions of the file locally and were keeping track of the parts yourself, I suppose you could, in theory, replace individual parts that had been modified after the multipart upload was initiated. However, it seems to me that this would be a very complicated and error-prone process. What if the change made to a file was to add several MB's of data to it? Wouldn't that change your boundaries? Would that potentially affect other parts, as well?
I'm not saying it can't be done but I am saying it seems complicated and would require you to do a lot of bookkeeping on the client side. | 1 | 1 | 0 | I'm working on aws S3 multipart upload, And I am facing following issue.
Basically I am uploading a file chunk by chunk to s3, And during the time if any write happens to the file locally, I would like to reflect that change to the s3 object which is in current upload process.
Here is the procedure that I am following,
Initiate multipart upload operation.
upload the parts one by one [5 mb chunk size.] [do not complete that operation yet.]
During the time if a write goes to that file, [assuming i have the details for the write [offset, no_bytes_written] ].
I will calculate the part no for that write happen locally, And read that chunk from the s3 uploaded object.
Read the same chunk from the local file and write to read part from s3.
Upload the same part to s3 object.
This will be an a-sync operation. I will complete the multipart operation at the end.
I am facing an issue in reading the uploaded part that is in multipart uploading process. Is there any API available for the same?
Any help would be greatly appreciated. | How to read a part of amazon s3 key, assuming that "multipart upload complete" is yet to happen for that key? | 1.2 | 1 | 1 | 1,069 |
32,351,417 | 2015-09-02T11:03:00.000 | 1 | 0 | 1 | 1 | python,unicode,ipython,jupyter | 32,355,092 | 1 | false | 0 | 0 | Well setting IPYTHONDIR variable to another location rather than the default one which also includes unicode characters, solved the problem, it is not an elegant solution in fact but it works. | 1 | 2 | 0 | Jupyter data path on my laptop includes unicode characters because my name has specific letters (\u00d6 and \u00fc) which are not available in plain Latin. I tried to change data path by changing JUPYTER_PATH variable but according to documentation it must include %APPDATA% variable and unfortunately %APPDATA% also includes the same letters. Is there any way to solve this problem ? | Does Jupyter data path support unicode? | 0.197375 | 0 | 0 | 503 |
32,353,881 | 2015-09-02T12:59:00.000 | 0 | 0 | 1 | 0 | wpf,ironpython,avalonedit | 33,526,046 | 1 | false | 0 | 1 | You could take a look at pdb and dbd. This are some very useful modules. You could go through the code and integrate it in your application. | 1 | 3 | 0 | I have a WPF application.It has a iron python script editor using AvalonEdit. We were able to validate and run the iron python scripts using this application.
But we need to integrate an iron python debugger in to this application.
Can anyone suggest better solution for this? | Iron python debugger for my application | 0 | 0 | 0 | 256 |
32,353,887 | 2015-09-02T12:59:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-import | 32,359,039 | 1 | false | 0 | 0 | When you import a module, any/all module objects/functions/etc are cached so that importing the same module again is a no-op. Subsequently these objects/functions/etc will not be freed when the local names referring to them go out of scope. This only affects functions and objects defined globally within the module and it's very likely that there won't be a lot of those, so it's probably not something to worry about.
To specifically answer your question, there is effectively no difference in terms of performance unless the import is inside a function or branch which is never executed. In that rare case, having it inside the branch or function is slightly faster/less resource intensive, but it won't gain you much. | 1 | 0 | 0 | When memory issues are critical, do I save some memory when I do my python imports inside a function, so as to when the call finishes everything will be discarder from memory? Or this cumbers more my memory and CPU especially when I do a lot of calls of the specific function? (The user does the calls and I do not know in advance how many she will do). What are the memory implications of this difference? | Python import statement inside and outside a function. What is better for the memory? | 0.379949 | 0 | 0 | 618 |
32,355,681 | 2015-09-02T14:18:00.000 | 0 | 1 | 0 | 1 | python,gcc,proxy,compilation,media | 32,355,833 | 1 | false | 0 | 0 | Could be a dependency issue. Give this a shot:
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev | 1 | 0 | 0 | I am installing mediaproxy on my server debian. Please review the error pasted below. I have also tried installing the dependencies but still this error occurs. Need help on this.
root@server:/usr/local/src/mediaproxy-2.5.2# ./setup.py build running build running build_py running build_ext building 'mediaproxy.interfaces.system._conntrack' extension x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DMODULE_VERSION=2.5.2 -I/usr/include/python2.7 -c mediaproxy/interfaces/system/_conntrack.c -o build/temp.linux-x86_64-2.7/mediaproxy/interfaces/system/_conntrack.o mediaproxy/interfaces/system/_conntrack.c:12:29: fatal error: libiptc/libiptc.h: No such file or directory #include ^ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Thanks. Faisal | Error during mediaproxy installation | 0 | 0 | 0 | 118 |
32,358,150 | 2015-09-02T16:16:00.000 | 1 | 0 | 1 | 0 | python | 32,358,169 | 1 | false | 0 | 0 | They'll be fine. They are not dependent on the OS or the hardware; they are only dependent on the Python version. Python automatically creates, and updates, them when needed. | 1 | 1 | 0 | Should I delete the .pyc files if I copy a project from one computer to another, lets say a linux machine to a windows machine? Or will it automatically correct itself. | Should I delete .pyc files if I copy project to new comptuer | 0.197375 | 0 | 0 | 158 |
32,361,764 | 2015-09-02T19:41:00.000 | 0 | 0 | 0 | 0 | python,django,import,django-dev-server | 32,362,402 | 3 | false | 1 | 0 | I found the culprit, or at least a culprit. I had omitted (in my .bashrc) the "export ", and now I'm on to another problem. | 1 | 1 | 0 | I am trying to pick up an old Django project and my immediate goal is to see what I can get running on my computer on the development server. I get:
Inner Sanctum ~/pragmatometer $ python manage.py runserver
Traceback (most recent call last):
File "manage.py", line 10, in
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 49, in _setup
self._wrapped = Settings(settings_module)
File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 132, in __init__
% (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'pragmatometer.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named pragmatometer.settings
Here is some command line output:
Inner Sanctum ~/pragmatometer $ /bin/pwd
/Users/jonathan/pragmatometer
Inner Sanctum ~/pragmatometer $ echo $PYTHONPATH
/Users/jonathan
Inner Sanctum ~/pragmatometer $ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pragmatometer
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named pragmatometer
>>> import pragmatometer.settings
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named pragmatometer.settings
>>>
What should I be doing that I'm not? (Or, as it was an older project, should I just start with a fresh new project?)
Thanks, | Why can't I import modules in a Django project? | 0 | 0 | 0 | 2,926 |
32,362,339 | 2015-09-02T20:16:00.000 | 0 | 0 | 0 | 0 | python | 32,365,343 | 1 | true | 0 | 0 | If you are using the webbrowser module, you could try passing autoraise=False to the webbrowser.open() function. | 1 | 0 | 0 | I have a python script that opens firefox. Is there a way I can run it in the background and suppress the window from popping up? Something like & for running in background? | Suppress window opening from python script | 1.2 | 0 | 0 | 166 |
32,362,384 | 2015-09-02T20:19:00.000 | 1 | 0 | 0 | 0 | python,django,django-south | 32,385,965 | 1 | false | 1 | 0 | I had no solution, so I ran loaddata locally and used pg_dump and ran the dump with pqsl -f and restored the data. | 1 | 2 | 0 | I am trying to migrate django models from sqlite to postgres. I tested it locally and now trying to do the samething with remote database. I dumped the data first then started the application which created the tables in remote database.
Finally I am trying to loaddata but it looks like hanged and no errors.
Is there a way to get verbose ? Or I am not sure how to diagnose this issue. It just 199M size file and when I test locally loaddata works in few minutes. | manage.py loaddata hangs when loading to remote postgres | 0.197375 | 1 | 0 | 353 |
32,362,586 | 2015-09-02T20:33:00.000 | 1 | 0 | 0 | 0 | python,networking,network-programming,mininet | 36,837,694 | 1 | false | 0 | 0 | Since you are setting the controller to be remote, --controller=remote, you need to provide a controller explicitly.
For example, if you are using POX,
In another terminal, run this:
cd pox
./pox.py openflow.discovery forwarding.l2_learning
Now do a pingall in the mininet console, there should be 0% packet loss | 1 | 0 | 0 | I want to create a custom topology using Python API and mininet. It should be such that, if there are n number of hosts, then odd numbered hosts can ping each other and also even numbered hosts can ping each other.
For example, if we have 5 hosts, h1 .. to h5,
then h1 can ping h3 and h5, while h2 can only ping h4.
I have tried writing code, in which I added links between all even hosts and between all odd hosts. But I am not able to get the desired outcome. h1 is able to ping h3, but not h5.
Also, is it correct to define links between hosts? Or should we only have links between hosts and switches and within switches? | Custom Topology in Mininet | 0.197375 | 0 | 1 | 769 |
32,363,678 | 2015-09-02T21:47:00.000 | 0 | 1 | 0 | 0 | python,opencv,raspberry-pi | 32,363,759 | 1 | false | 0 | 0 | You need to ensure that your frame rate is fast enough to get a decent still of the moving car. When filming, each frame will most likely be blurry, and our brain pieces together the number plate on playback. Of course a blurry frame is no good for letter recognition, so is something you'll need to deal with on the hardware side, rather than software side.
Remember the old saying: Garbage in; Garbage out. | 1 | 1 | 1 | I'm working on detecting license plates with openCV, python and a raspberry pi. Most of it is already covered. What I want is to detect the ROI (Region of interest) of the plate, track it on a few frames and add those together to get a more clear and crisp image of the plate.
I want to get a better image of the plate by taking the information from several frames. I detect the plate, and have a collection of plates from several frames, as many as I wish and as many as the car is moving by the camera. How can I take all those and get a better version? | openCV track object in video and obtain a better image from multiple frames | 0 | 0 | 0 | 330 |
32,365,141 | 2015-09-03T00:15:00.000 | 1 | 0 | 1 | 0 | syntax,markdown,ipython-notebook | 40,813,583 | 3 | false | 0 | 0 | Late answer but backtick ` is the way to go, just like here at SO. | 1 | 3 | 0 | In Markdown, how do you highlight code <--- just like that. I don't know how to search for it because I'm not sure what it's called. I also don't know what the little dashes are called either. I tried doing what I would do in SO, but it just reads it as normal text
Update:
This is what I have:
foo in SO it actually shows the hilighting
bar whereas in iPython Notebook, it doesn't, it only changes the fontstyle | Markdown in iPython Notebook: how to highlight code? | 0.066568 | 0 | 0 | 3,161 |
32,367,279 | 2015-09-03T04:58:00.000 | 3 | 0 | 0 | 0 | python,django | 32,367,413 | 3 | false | 1 | 0 | Without seeing your script, I would have to say that you have blocking calls, such as socket.recv() or os.system(executable) running at the time of the CTRL+C.
Your script is stuck after the CTRL+C because python executes the KeyboardInterrupt AFTER the the current command is completed, but before the next one. If there is a blocking function waiting for a response, such as an exit code, packet, or URL, until it times out, you're stuck unless you abort it with task manager or by closing the console.
In the case of threading, it kills all threads after it completes its current command. Again, if you have a blocking call, the thread will not exit until it receives its response. | 2 | 2 | 0 | I start Django server with python manage.py runserver and then quit with CONTROL-C, but I can still access urls in ROOT_URLCONF, why? | Django server is still running after CONTROL-C | 0.197375 | 0 | 0 | 6,725 |
32,367,279 | 2015-09-03T04:58:00.000 | -1 | 0 | 0 | 0 | python,django | 53,162,008 | 3 | false | 1 | 0 | just type exit(), that is what I did and it worked | 2 | 2 | 0 | I start Django server with python manage.py runserver and then quit with CONTROL-C, but I can still access urls in ROOT_URLCONF, why? | Django server is still running after CONTROL-C | -0.066568 | 0 | 0 | 6,725 |
32,367,572 | 2015-09-03T05:26:00.000 | 2 | 0 | 1 | 0 | python,anaconda,spyder | 65,988,220 | 3 | false | 0 | 0 | For newbies like me, you got to use Anaconda prompt (windows cmd terminal didn't work for me) took a few mins to figure out to use Anaconda prompt to run spyder --reset command. | 3 | 9 | 0 | My system is windows 8, and I download Anaconda python 3.4 from the official website. The spyder has been all well until yesterday, I can open the spyder, and the icon shows on my taskbar, but no matter how I click the icon, the program window doesn't show up on the screen. I have uninstalled and reinstalled the Anaconda, but still can't fix it. Any suggestion?
By the way, I usually open spyder from "Anaconda/Scripts/spyder.exe", is that a problem? | Anaconda Spyder can't show the program window on windows 8 | 0.132549 | 0 | 0 | 8,721 |
32,367,572 | 2015-09-03T05:26:00.000 | 12 | 0 | 1 | 0 | python,anaconda,spyder | 35,349,189 | 3 | false | 0 | 0 | I ran into the same problem after installing Spyder for Python 2.7 with an existing installation of Spyder running on Python 3.5. What worked for me was the following: I closed all running Spyder applications, ran the "Reset Spyder Settings" shortcut for Anaconda2, and voila, spyder2 started displaying properly after running. | 3 | 9 | 0 | My system is windows 8, and I download Anaconda python 3.4 from the official website. The spyder has been all well until yesterday, I can open the spyder, and the icon shows on my taskbar, but no matter how I click the icon, the program window doesn't show up on the screen. I have uninstalled and reinstalled the Anaconda, but still can't fix it. Any suggestion?
By the way, I usually open spyder from "Anaconda/Scripts/spyder.exe", is that a problem? | Anaconda Spyder can't show the program window on windows 8 | 1 | 0 | 0 | 8,721 |
32,367,572 | 2015-09-03T05:26:00.000 | 10 | 0 | 1 | 0 | python,anaconda,spyder | 50,910,563 | 3 | false | 0 | 0 | spyder --reset from the command line works. | 3 | 9 | 0 | My system is windows 8, and I download Anaconda python 3.4 from the official website. The spyder has been all well until yesterday, I can open the spyder, and the icon shows on my taskbar, but no matter how I click the icon, the program window doesn't show up on the screen. I have uninstalled and reinstalled the Anaconda, but still can't fix it. Any suggestion?
By the way, I usually open spyder from "Anaconda/Scripts/spyder.exe", is that a problem? | Anaconda Spyder can't show the program window on windows 8 | 1 | 0 | 0 | 8,721 |
32,371,099 | 2015-09-03T08:52:00.000 | 0 | 0 | 0 | 1 | python,batch-file,window,command | 32,371,292 | 1 | true | 0 | 0 | Run your script with pythonw.exe instead of python.exe and it won't show dos shell. | 1 | 0 | 0 | I dont want to open a command window when i am Running application,
I directed shortcut to .bat file while creating .exe file where the application is python based
Code in .bat file is like this
@python\python.exe -m demo.demo %*
where demo is my application name (.bat file name) | How to run a batch file without launching a “command window”? | 1.2 | 0 | 0 | 1,362 |
32,371,449 | 2015-09-03T09:09:00.000 | 6 | 0 | 1 | 0 | python,list,python-2.7 | 32,371,508 | 3 | true | 0 | 0 | [0,0,0,0,0,0,] and [0,0,0,0,0,0] are the same lists. Last comma is acceptable by python syntax analyzer in order to distinguish a variable and a tuple with a variable inside: (1) is int, (1,) is tuple. | 2 | 1 | 0 | When playing with a code I have noticed that [0,]*6, does not return [0,0,0,0,0,0,] but rather [0,0,0,0,0,0]. Can you please explain why? | Creating a list through multiplication | 1.2 | 0 | 0 | 100 |
32,371,449 | 2015-09-03T09:09:00.000 | 1 | 0 | 1 | 0 | python,list,python-2.7 | 32,372,377 | 3 | false | 0 | 0 | Lists do not end with commas. It's just the way the syntax goes. However, Python will think the ('hello world') to be a string. To make a tuple, you must end it with a comma ('hello world',). So, in your case, Python thought [0,] to be equivalent to [0]. It's just the way the syntax goes. | 2 | 1 | 0 | When playing with a code I have noticed that [0,]*6, does not return [0,0,0,0,0,0,] but rather [0,0,0,0,0,0]. Can you please explain why? | Creating a list through multiplication | 0.066568 | 0 | 0 | 100 |
32,371,896 | 2015-09-03T09:30:00.000 | 0 | 0 | 0 | 0 | python,django,remote-debugging | 32,372,423 | 2 | false | 1 | 0 | You cannot output it to the console. Since the process is not called from a console, you cannot see the stdout in a console. You can only redirect the output to a file and read the file.
If at all you want the logs in the console, then you have to call the django server from console. i.e python manage.py runserver, which should only be used for development time, as this server is not good to be used in production | 2 | 0 | 0 | I'm developing Django server on Ubuntu OS. Since there is no browser on that machine, I can only debug the server remotely. So I just configure it with Apache and WSGI, and now I can access it through machine public IP.
Then I want to record logs in some views for debugging, if I output the log to a file, I can see it in the file, but if I want to output it to the console, I just get confused here, where is the console? since I didn't launch it with python manage.py runserver manually, currently running server process was launched by WSGI automatically. Of course, I can just stop the process launched by WSGI, and re-launch it with python manage.py runserver manually. If so, I can't access it through machine public IP.
So how can I see logs in the console in putty | Debugging django remotely | 0 | 0 | 0 | 689 |
32,371,896 | 2015-09-03T09:30:00.000 | 3 | 0 | 0 | 0 | python,django,remote-debugging | 32,372,593 | 2 | true | 1 | 0 | Firstly, you shouldn't be developing on the server. Do that locally and debug in the usual way there.
If you're debugging production issues, you will indeed need to use the log files. But it's pretty simple to see those in the console; you can do tail -f /var/log/my_log_file.log and the console will show the log as it is being written. | 2 | 0 | 0 | I'm developing Django server on Ubuntu OS. Since there is no browser on that machine, I can only debug the server remotely. So I just configure it with Apache and WSGI, and now I can access it through machine public IP.
Then I want to record logs in some views for debugging, if I output the log to a file, I can see it in the file, but if I want to output it to the console, I just get confused here, where is the console? since I didn't launch it with python manage.py runserver manually, currently running server process was launched by WSGI automatically. Of course, I can just stop the process launched by WSGI, and re-launch it with python manage.py runserver manually. If so, I can't access it through machine public IP.
So how can I see logs in the console in putty | Debugging django remotely | 1.2 | 0 | 0 | 689 |
32,376,092 | 2015-09-03T12:46:00.000 | 7 | 0 | 0 | 0 | python,django,migration,django-migrations | 32,376,288 | 3 | false | 1 | 0 | Migrations synchronize the state of your database with the state of your code. If you don't check in the migrations into version control, you lose the intermediate steps. You won't be able to go back in the version control history and just run the code, as the database won't match the models at that point in time.
Migrations, like any code, should be tested, at the very least on a basic level. Even though they are auto-generated, that's not a guarantee that they will work 100% of the time. So the safe path is to create the migrations in your development environment, test them, and then push them to the production environment to apply them there. | 3 | 6 | 0 | This is a common practice that people working on django project usually push migrations to the version control system along with other code.
My question is why this practice is so common? Why not just push the updated models and everyone generate migrations locally. This approach can reduce the effort for resolving migrations conflicts too. | Why there is need to push django migrations to version control system | 1 | 0 | 0 | 1,373 |
32,376,092 | 2015-09-03T12:46:00.000 | 2 | 0 | 0 | 0 | python,django,migration,django-migrations | 32,376,183 | 3 | false | 1 | 0 | Firstly, migrations in version control allows you to run them in production.
Secondly, migrations are not always automatically generated. For example, if you add a new field to a model, you might write a migration to populate the field. That migration cannot be re-created from the models. If that migration is not in version control, then no-one else will be able to run it. | 3 | 6 | 0 | This is a common practice that people working on django project usually push migrations to the version control system along with other code.
My question is why this practice is so common? Why not just push the updated models and everyone generate migrations locally. This approach can reduce the effort for resolving migrations conflicts too. | Why there is need to push django migrations to version control system | 0.132549 | 0 | 0 | 1,373 |
32,376,092 | 2015-09-03T12:46:00.000 | 9 | 0 | 0 | 0 | python,django,migration,django-migrations | 32,376,337 | 3 | true | 1 | 0 | If you didn't commit them to a VCS then what would happen is people would make potentially conflicting changes to the model.
When finally ready to deploy, you would still need django to make new migrations that would then merge everybodys changes together. And this just creates an additional unnecessary step that can introduce bugs.
You also are assuming everybody will always be able to work on an up to date version of the code which isn't always possible when you start working on branches that are not ready to be merged into mainline. | 3 | 6 | 0 | This is a common practice that people working on django project usually push migrations to the version control system along with other code.
My question is why this practice is so common? Why not just push the updated models and everyone generate migrations locally. This approach can reduce the effort for resolving migrations conflicts too. | Why there is need to push django migrations to version control system | 1.2 | 0 | 0 | 1,373 |
32,376,618 | 2015-09-03T13:10:00.000 | 0 | 1 | 0 | 0 | python,linux,arm,raspberry-pi | 32,380,390 | 1 | true | 0 | 0 | The nice thing about python is that you rarely need to worry about the
underlying architecture. Python is interpreted, so the interpreter does
all the hard work of handling 32 bit, 64 bit, little-endian, big-endian,
soft or hard floating point etc.
Also, you don't need to compile your python, as the interpreter will
also compile your source if your provide both the .py and the .pyc or .pyo file
and the latter does not match what is needed. Compiling python is
not the same as compiling C, for example, as python targets a virtual
machine, not real hardware. The resulting .pyc or .pyo files are
however tied to the particular version of python.
Generally, source files are usually provided, and if there is no .pyc or .pyo for them,
then the first time python is run it will create them (if it has
file permissions). A second run will then use the compiled versions,
if the source has not changed. | 1 | 0 | 0 | I'm trying to be sure I understand some basics about programming for different ARM architectures (e.g. ARMv5 vs ARMv7).
I have a python program that was ported to the newer Raspberry Pi B(Cotrex-7A). What would it take to also have it run on an ARMv6 or ARMv5 architecture. The program does simple waveform processing and serial communication with no need for a GPU.
My understanding is that I would have to recompile the program for each of the architectures to account for the different instruction sets. And I would also need to run the same version of Linux (in this case Wheezy), but is there more I have to consider here?
Is there the possibility that if it compiles in an ARMv7 it won't on an ARMv6 or ARMv5
Thanks | Programming on different ARM architectures | 1.2 | 0 | 0 | 1,564 |
32,379,716 | 2015-09-03T15:23:00.000 | 1 | 0 | 1 | 1 | python-2.7,installation | 32,380,336 | 1 | true | 0 | 0 | If you mean python installer for windows, yes it's enough and installer doesn't need internet connection, but if you want to install another modules through pip you will need internet connection. | 1 | 1 | 0 | I have a Windows machine that I want to install Python (2.7) on.
That machine is not connected to the internet and never will be.
Hence the question: If I download the thing that the python site
calls the installer and copy it to that machine, will that be
enough to install python? Or does the installer need internet
access, like so many "installers" these days?
(Yes, I could just try it. Got a very slow connection...)
Anyone happens to know the answer to the same question regarding
wxpython that would be great.
Thanks. | Does Installer Need an Internet Connection? | 1.2 | 0 | 0 | 979 |
32,380,760 | 2015-09-03T16:16:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore | 32,404,416 | 1 | false | 1 | 0 | Here's how I remedied this:
I went to console.developers.google.com > project > hockeybias-hrd > appengine > settings > domains > add
In 'step 2' on that page I put the 'www' for the subdomain in the textbox which enabled the Add button.
I clicked on the 'Add' button and the issue was solved.
I will note that this was the second time I have been 'head-faked' by google's use of greyed-out text to mean something other than 'disabled'... 'www' was the default value in the subdomain textbox - BUT it was greyed-out AND the 'Add'button was disabled right next to it. So, I did not initially think I could enter a value there. | 1 | 0 | 0 | If a user enters ‘hockeybias.com’ into his/her browser as a URL to get to my hockey news aggregation site, the default page comes up correctly. It has in the past and does so today.
However, as of this summer, if someone uses ‘www.hockeybias.com’ the user will get the following error message:
Error: Not Found
The requested URL / was not found on this server.
This is a relatively new issue for me as ‘www.hockeybias.com’ worked fine in the past.
The issue seems to have come up after I migrated from the ‘Master/Slave Datastore’ version of GAE to the ‘High Replication Datastore’ (HRD) version of GAE earlier this summer.
The issue occurred while the site used python2.5. And I migrated the site to python2.7 this morning and am still having the issue. | GAE/python site is no longer handling requests prefaced with 'www.' | 0 | 0 | 0 | 26 |
32,381,227 | 2015-09-03T16:42:00.000 | 1 | 0 | 0 | 0 | python,cassandra,cassandra-2.0 | 32,381,556 | 1 | true | 0 | 0 | When you do an insert or update in Cassandra, the new value overrides the old value even if it is the same value. This is because Cassandra does not do a read of the existing data before storing the new data. Every write is just an append operation and Cassandra has no idea if the new value is the same as the old value.
When you do a read, Cassandra will find the most recent write and that is the current value. So if your insert or update sets a TTL, then that TTL will override any previous TTL for the columns you inserted/updated.
So if you are writing data with a TTL of 10 seconds, then you need to write the same data again before the 10 seconds is up if you want it to stick around for another 10 seconds. | 1 | 1 | 0 | When running model.update(args, kwargs) in python, if the data is not different it doesn't actually make any changes, correct? If so, does it update the TTL? If not, how can we make it so it will reset the TTL?
Use case:
We have a model that stores out loop information for twisted and we have a TTL of 10 seconds on it. The program is set to automatically check that configuration every 2 seconds and if it does not then we want that data to be removed out of the active loops. Here is where it gets tricky, the data rarely changes once it has been set for a particular loop.
I can post the model and other information if it would helpful. | Updating only TTL in cassandra | 1.2 | 0 | 0 | 473 |
32,382,416 | 2015-09-03T17:55:00.000 | 10 | 0 | 1 | 0 | python,python-2.7,raw-input | 32,382,444 | 2 | true | 0 | 0 | var1 = int((raw_input()) has three left parentheses and two right ones. Until you complete the expression with another right parentheses, Python thinks you have not finished writing the expression. That is why it shows an ellipsis.
When you type "12", the full expression becomes var1 = int((raw_input())12, which is not valid syntax because you can't have a number immediately after a closing paren. | 1 | 3 | 0 | So, I recently started learning python and I am in the raw_input() section.
So while I was trying out different things, I made an error (at least that's what I think for now). Can someone please explain what is the difference between the two statements?
var1 = int(raw_input())
var1 = int((raw_input())
I know that the first one waits for an input from the user and assigns it to the variable var1 but, in the second case, this is the output I am getting.
>>> x = int((raw_input()) On pressing enter, it just shows the ellipses and waits for a user input.
... 12 Twelve was my input and then I get the following error.
File "<stdin>", line 2
12
^
SyntaxError: invalid syntax
I know it clearly says it is a Syntax Error but shouldn't it even accept the statement? Why does it wait for an in input?
Thank you.
Python Version: 2.7
OS: Windows | Python: What does this mean in raw_input function? | 1.2 | 0 | 0 | 439 |
32,385,881 | 2015-09-03T21:40:00.000 | 0 | 0 | 0 | 0 | python,django,web-services,soap | 32,386,137 | 1 | false | 1 | 0 | You don't need to build a web API if all you want to do is consume someone else's API. Just retrieve the data - with something like requests - and use it in your Django app. | 1 | 0 | 0 | I have a django application to show some data from my database (ORACLE), but now I need to show some data from a web service.
I need to build a form based in the request of the web service, and show the response of the web service.
I googling to expose my app as a web service and send and retrieve XML data.
But I am very confused and I don't know where to start or which django package to use(PyXML,DJANGO REST).
I am not sure if I need to build a web API or I can consume the web service without the web api.
Can someone give some advices to achieve this task. | Consume web service with DJANGO | 0 | 0 | 0 | 2,079 |
32,386,686 | 2015-09-03T22:56:00.000 | 0 | 0 | 1 | 0 | c#,python,algorithm,random | 32,386,815 | 5 | false | 0 | 0 | One semi rough and quick heuristic check would be to sort a string by individual letters, and compare its sorted sequence against a likelihood of generating that sequence randomly for that length. i.e. for word length 2, a (sorted) string "AA" given 26 letters in alphabet, has a chance of 1/(26*26), but a (sorted) string "AB" - which is generated by "AB" and "BA" - has a chance of 2/(26*26).
P.S. from programming perspective, another way, would be to run a spell checker against it and figure out how many "mistakes" are there. Then put a threshold value against it. | 1 | 0 | 0 | I need to be able to test a text list for file names that seem random;
e.g. aggvvcx.com or kbzaandc.exe
Is there any sensible/reasonable way to do this? The only idea I have is to check for ratio of occurrence of vowels to consonant but this doesn't seem reliable neither does using a dictionary.
EDIT: Definition of randomness
The only information I have about the nature of the randomness is that it is a filename. Maybe it is possible to get a dictionary of common file names and use some kind of pattern parser to determine common file naming patterns and run this against the list after training? This would obviously be a futile approach if we're considering multiple languages, but I'm only interested in checking for English filenames. | How to determine if a filename is random? | 0 | 0 | 0 | 297 |
32,389,977 | 2015-09-04T04:16:00.000 | 0 | 0 | 0 | 1 | python,c++,c,linux | 48,399,591 | 4 | false | 0 | 0 | If you used sudo to start the gdb, make sure you have the PATH correct.
Try this sudo PATH=$PATH gdb ... | 2 | 15 | 0 | When I use gdb to debug my C++ program with segmentation fault, I come with this error in gdb.
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
I am using Gdb 7.7.1 and g++ version 4.8.4. I have googled around but haven't get answers. Can any one solve my error? Thank you very much. | Import Error: No module name libstdcxx | 0 | 0 | 0 | 14,223 |
32,389,977 | 2015-09-04T04:16:00.000 | 21 | 0 | 0 | 1 | python,c++,c,linux | 33,897,420 | 4 | false | 0 | 0 | This is a bug in /usr/lib/debug/usr/lib/$triple/libstdc++.so.6.0.18-gdb.py;
When you start gdb, please enter:
python sys.path.append("/usr/share/gcc-4.8/python"); | 2 | 15 | 0 | When I use gdb to debug my C++ program with segmentation fault, I come with this error in gdb.
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
I am using Gdb 7.7.1 and g++ version 4.8.4. I have googled around but haven't get answers. Can any one solve my error? Thank you very much. | Import Error: No module name libstdcxx | 1 | 0 | 0 | 14,223 |
32,390,195 | 2015-09-04T04:40:00.000 | 4 | 0 | 0 | 0 | python,django | 32,390,343 | 1 | false | 1 | 0 | Instead of multiple views.py, You can divide your application into individual applications within your project. Like separate applications for userRegistration, Contact, ArticleControl. this way your code will look much cleaner. And in case of any bug you will be able to debug that specific application easily. | 1 | 1 | 0 | Iam new to Django Framework and just started learning django 1.8. In other framework Like Laravel,Rails there we can make different controller file. for example UserController.php,ContactController.php etc.
I think in Django, views.py is similar to Controller and In django ,in views.py i have a long line of code of more than 300 lines and i want to make my code clean by making seperate views.py for userRegistration,Contact,ArticleControl etc. My question is how can i achieve this ie:making many views.py like controller | Can we make many views.py in Django as a Controller? | 0.664037 | 0 | 0 | 602 |
32,390,291 | 2015-09-04T04:49:00.000 | 22 | 0 | 1 | 0 | python,pip | 53,797,785 | 10 | false | 1 | 0 | I have tried both pipreqs and pigar and found pigar is better because it also generates information about where it is used, it also has more options. | 3 | 67 | 0 | When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior.
I have, however, done something wrong as this now includes things like Django in projects that have no business with Django.
How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv. | Pip freeze for only project requirements | 1 | 0 | 0 | 48,165 |
32,390,291 | 2015-09-04T04:49:00.000 | 0 | 0 | 1 | 0 | python,pip | 70,058,507 | 10 | false | 1 | 0 | I just had the same issue, here's what I've found to solve the problem.
First create the venv in the directory of your project, then activate it.
For Linux/MacOS : python3 -m venv ./venv source myvenv/bin/activate
For Windows : python3 -m venv .\venv env\Scripts\activate.bat
Now pip freeze > requirements.txt should only takes the library used in the project.
NB: If you have already begin your project you will have to reinstall all the library to have them in pip freeze. | 3 | 67 | 0 | When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior.
I have, however, done something wrong as this now includes things like Django in projects that have no business with Django.
How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv. | Pip freeze for only project requirements | 0 | 0 | 0 | 48,165 |
32,390,291 | 2015-09-04T04:49:00.000 | 1 | 0 | 1 | 0 | python,pip | 65,285,557 | 10 | false | 1 | 0 | if you are using linux then do it with sed
pip freeze | sed 's/==.*$/''/' > requirements.txt | 3 | 67 | 0 | When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior.
I have, however, done something wrong as this now includes things like Django in projects that have no business with Django.
How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv. | Pip freeze for only project requirements | 0.019997 | 0 | 0 | 48,165 |
32,392,858 | 2015-09-04T07:52:00.000 | 1 | 0 | 1 | 0 | python | 32,393,092 | 2 | false | 0 | 0 | You should always use a virtualenv. For each project, install the requirements you need inside that project's own virtualenv. | 1 | 0 | 0 | There are several versions of python on my mac OS X system.
I just installed beautiful soup4 on python2.7.6, but how can I install the same module on the version 3.4.3? | How to install third-party modules on python3 | 0.099668 | 0 | 0 | 114 |
32,394,655 | 2015-09-04T09:31:00.000 | 2 | 0 | 1 | 0 | python,pycharm | 32,394,855 | 2 | false | 0 | 0 | Press Ctrl-Shift-(Numpad Minus). It will collapse all the functions, putting your cursor back to the start of the current function. Then press Ctrl-Shift-(Numpad Plus) and you will get back where you were before. Or you can use Ctrl-(Numpad Minus) and collapse only the function you are in now. | 1 | 1 | 0 | This is a simple problem with a potentially simple solution (maybe there is a plugin that will help or a better way of working with the tool).
I have a large Python file with many function definitions that look very similar. Each definition is a few hundred lines long. When I'm trying to make sense of a few lines of code I often want to know which function definition those lines of code belong to. My current method for doing this is scrolling back up the file (I'm using PyCharm) until I reach the start of the function definition. This is tedious and time consuming. Is there a better way of determining which function I am currently viewing? | How do I automatically find out the name of the function I am currently viewing in PyCharm? | 0.197375 | 0 | 0 | 68 |
32,396,300 | 2015-09-04T10:52:00.000 | 2 | 0 | 0 | 0 | python,ajax,django,web,global | 32,396,689 | 1 | true | 1 | 0 | The answer is clear, surely: you should not be using global variables. If you need to store state for a user, do it in the session or the database. | 1 | 1 | 0 | I developed a django web-application and I tested it in local.. good news, it works!
Know I am trying to move forward. I want to make it reachable via public Internet.. Bad news, it won't work!
The client-side interact with server using Ajax and execute some python script and get some result to display in the web page.
The problem is that my application/server can't handle multiple connexions!!
I clarify:
Mainly, the problem is that when more than one client are served (2 for ex.), each one will ask the server to run a python script and, because there is a lot of global variables in the script, the two clients will modify them simultaneously and than bof!
Can multi-threading be a solution? How?
PS: It's clear, I am newbie to web :-). Thanks | Django: global variables in multi-connections | 1.2 | 0 | 0 | 295 |
32,397,089 | 2015-09-04T11:33:00.000 | 0 | 0 | 0 | 0 | python,sockets,networking,connection | 32,403,092 | 2 | false | 0 | 0 | The issue is not related to the programming language, in this case python. The oeprating system (Windows or linux), has the final word regarding the resilience degree of the socket. | 2 | 0 | 0 | I'm trying to make a socket connection that will stay alive so that in event of connection loss. So basically I want to keep the server always open (also the client preferably) and restart the client after the connection is lost. But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer "localhost" and just clicking the X button. Could this be the source of my problems?
Anyway my connection code
m.connect(("localhost", 5000))
is in a if and try and while e.g.
while True:
if tryconnection:
#Error handeling
try:
m.connect(("localhost", 5000))
init = True
tryconnection = False
except socket.error:
init = False
tryconnection = True
And at the end of my code I just a m.send("example") when I press a button and if that returns an error the code of trying to connect to "localhost" starts again. And the server is a pretty generic server setup with a while loop around the x.accept(). So how do keep them both alive when the connection closes so they can reconnect when it opens again. Or is my code alright and its just by simulating on the same computer is messing with it? | Keeping python sockets alive in event of connection loss | 0 | 0 | 1 | 1,309 |
32,397,089 | 2015-09-04T11:33:00.000 | 1 | 0 | 0 | 0 | python,sockets,networking,connection | 32,399,470 | 2 | true | 0 | 0 | I'm assuming we're dealing with TCP here since you use the word "connection".
It all depend by what you mean by "connection loss".
If by connection loss you mean that the data exchanges between the server and the client may be suspended/irresponsive (important: I did not say "closed" here) for a long among of time, seconds or minutes, then there's not much you can do about it and it's fine like that because the TCP protocol have been carefully designed to handle such situations gracefully. The timeout before deciding one or the other side is definitely down, give up, and close the connection is veeeery long (minutes). Example of such situation: the client is your smartphone, connected to some server on the web, and you enter a long tunnel.
But when you say: "But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer localhost and just clicking the X button", what you are doing is actually closing the connections.
If you abruptly terminate the server: the TCP/IP implementation of your operating system will know that there's not any more a process listening on port 5000, and will cleanly close all connections to that port. In doing so a few TCP segments exchange will occur with the client(s) side (it's a TCP 4-way tear down or a reset), and all clients will be disconected. It is important to understand that this is done at the TCP/IP implementation level, that's to say your operating system.
If you abruptly terminate a client, accordingly, the TCP/IP implementation of your operating system will cleanly close the connection from it's port Y to your server port 5000.
In both cases/side, at the network level, that would be the same as if you explicitly (not abruptly) closed the connection in your code.
...and once closed, there's no way you can possibly re-establish those connections as they were before. You have to establish new connections.
If you want to establish these new connections and get the application logic to the state it was before, now that's another topic. TCP alone can't help you here. You need a higher level protocol, maybe your own, to implement stateful client/server application. | 2 | 0 | 0 | I'm trying to make a socket connection that will stay alive so that in event of connection loss. So basically I want to keep the server always open (also the client preferably) and restart the client after the connection is lost. But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer "localhost" and just clicking the X button. Could this be the source of my problems?
Anyway my connection code
m.connect(("localhost", 5000))
is in a if and try and while e.g.
while True:
if tryconnection:
#Error handeling
try:
m.connect(("localhost", 5000))
init = True
tryconnection = False
except socket.error:
init = False
tryconnection = True
And at the end of my code I just a m.send("example") when I press a button and if that returns an error the code of trying to connect to "localhost" starts again. And the server is a pretty generic server setup with a while loop around the x.accept(). So how do keep them both alive when the connection closes so they can reconnect when it opens again. Or is my code alright and its just by simulating on the same computer is messing with it? | Keeping python sockets alive in event of connection loss | 1.2 | 0 | 1 | 1,309 |
32,401,501 | 2015-09-04T15:20:00.000 | 2 | 0 | 1 | 0 | dronekit-python,dronekit | 32,404,415 | 2 | true | 0 | 0 | ArduCopter 3.3-rc9 added a 3 second velocity timeout. This is to prevent a lost connection from causing a flyaway. To continue flying in the same direction, just send the same packet repeatedly. | 1 | 0 | 0 | Running dronekit-python with ArduCopter as SITL. When specifying a velocity (only) in the set_position_local_ned_encode, the drone moves for a few seconds and stops.
This happens both with the example code (guided_set_speed_yaw.py) and a very small test program that ONLY does the set_position after the appropriate init. All other parts of all examples seem to work fine.
All running on Fedora. I don't see this listed as a bug, or any issues related to this. Any ideas or pointers are appreciated. | dronekit set_position_target_local_ned_encode | 1.2 | 0 | 0 | 836 |
32,403,617 | 2015-09-04T17:30:00.000 | 1 | 0 | 0 | 0 | python,django,mezzanine | 32,489,152 | 1 | true | 1 | 0 | Hello if you are far ahead in your development i am sorry for you. If not, run away as far as you can from mezzanine. The documentation for this CMS is scarce.
Lucky for you you can solve this by using "page.branch_level" instead of just "branch_level". The former, will give you the depth of the current branch, and the latter will give you the depth of the page related to the page tree. Hope this can help you. | 1 | 1 | 0 | I have the following menu structure:
Personal
PersonalOption1
Sub-Option1
Sub-Option2
PersonalOption2
Enterprise
EnterpriseOption1
EnterpriseOption2
From the Page on Sub-Option1, I'm trying to generate a page_menu to only show:
PersonalOption1
PersonalOption2
But based on the branch_level value, I'm getting:
PersonalOption1
PersonalOption2
Enterprise
EnterpriseOption1
EnterpriseOption2
This is the tree I'm getting using branch_level to identify each node:
Personal (branch_level: 0)
PersonalOption1 (branch_level: 1)
Sub-Option1 (branch_level: 2)
Sub-Option2 (branch_level: 2)
PersonalOption2 (branch_level: 1)
Enterprise (branch_level: 1)
EnterpriseOption1 (branch_level: 1)
EnterpriseOption2 (branch_level: 1)
Enterprise should have branch_level 0. | Mezzanine (Django) menu tree generation from lower branch level | 1.2 | 0 | 0 | 136 |
32,404,408 | 2015-09-04T18:26:00.000 | 0 | 0 | 0 | 1 | python,linux | 32,468,089 | 1 | false | 0 | 0 | In this case I have a flask back end that needed to do something privileged. I broke it up into two back ends - one unprivileged and another small privileged piece rather than use sudo.
It is also possible to run sudo in a pty but I decided against this approach as it does indeed have a security flaw. | 1 | 2 | 0 | I am writing a python administrative daemon on linux that needs to start/stop other services. Following the principle of least privilege, I want to run this normally with regular user privileges but when it needs to start/stop other services, I want it to become root. Essentially I want to do what sudo would do from the command line. I cannot directly exec sudo from the daemon because it has no tty. I want to avoid running the daemon as root when it does not need to run as root. Is there any way to do this from python without needing to use sudo?
Thank you in advance.
Ranga. | "sudo" operations from python daemon | 0 | 0 | 0 | 299 |
32,404,764 | 2015-09-04T18:49:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,automation,celery,django-celery | 32,410,234 | 1 | true | 1 | 0 | You need, at minimum, a user model, comment model, article model, and most likely, a site model to store your RSS URLs and metadata about that site.
You will then need to create a function to parse the RSS from your URLs, and populate your article table. You will need to call this function on a periodic basis, either via Cron, or something like Celery.
The case of user-submitted articles is similar, although rather than a site model, you would need something like a category or channel model.
The rest is all forms and views.
The syndication framework does not parse RSS, it generates RSS from an existing model, so that's useless in your case, unless you intend to publish an RSS feed of your articles, linking to the comments pages (Reddit does this). | 1 | 1 | 0 | Django noob, please bear with me
How does to parse the rss/atom feed of an external site(any news site) and create a comments section for each post? Or simply on reddit where user submits the links; here the links are to be updated from a single/multiple websites and add a comment section.
Its easy to do with syndication framework, if site is in same db. But I couldn't find the exact solution and process to make it work for external sites. I have created the user model and comments model.I got stuck at automating the process of adding links.
Using django==1.8, python==2.7 Thanks a lot
EDIT: How to do it in celery? | Automation in django: Celery | 1.2 | 0 | 0 | 131 |
32,405,224 | 2015-09-04T19:25:00.000 | 1 | 0 | 0 | 0 | python,pygame,collision,rect | 32,407,425 | 1 | true | 0 | 0 | At some point, you will need to check to see if the Rect collides with any other Rect. With that in mind, there are some ways to speed things up, basically relying on grouping Rects.
For example, assuming these Rects are objects in the level that don't move around, you could sort them by the X-coordinate, and remember the maximum width. When you want to run collision detection, start at the main Rect's left side minus the maximum width, and loop through until the Rect's right side. Any Rects outside of that range do not have the ability to collide, and so do not need to be checked.
Alternately, you could divide the level into, say, 16 squares, and give each square a list of all Rects within the square. Then, just decide which square the main Rect is in, and just compare with the Rects in there. (With logic for overlaps, of course.)
There's a large number of ways to do this. | 1 | 0 | 0 | I was wondering if it was possible to find out if a Rect is colliding with another Rect. The problem is that I do not know what/where that other Rect is.
I have a Rect which moves around (of which I know where it is).
I have many other Rects on the same "map".
I dont want to make a list of all Rects on the map and then try collideRect with each and every one of them.
Does anyone have an idea under these circumstances for a function that takes a Rect and returns a list of all other Rects with which it collides? (Without using the collideRect function for all existing Rects?)
Can I somehow "scan" only the area of the first Rect and if there is another Rect in the same "spot" I return the other Rect?
I have come up with nothing so far... | Collision between Rect and another unknown Rect | 1.2 | 0 | 0 | 80 |
32,406,089 | 2015-09-04T20:27:00.000 | 0 | 0 | 0 | 0 | python,ftp | 40,610,963 | 1 | false | 0 | 0 | To download a file from FTP this code will do the job
import urllib urllib.urlretrieve('ftp://server/path/to/file', 'file') # if you need to pass credentials: # urllib.urlretrieve('ftp://username:password@server/path/to/file', 'file') | 1 | 0 | 0 | I've noticed that the FTP library doesn't seem to have a method or function of straight up downloading a file from an FTP server. The only function I've come across for downloading a file is ftp.retrbinary and in order to transfer the file contents, you essentially have to write the contents to a pre-existing file on the local computer where the Python script is located.
Is there a way to download the file as-is without having to create a local file first?
Edit: I think the better question to ask is: do I need to have a pre-existing file in order to download an FTP server file's contents? | How do I simply transfer and download a file from an FTP server with Python? | 0 | 0 | 1 | 214 |
32,406,471 | 2015-09-04T20:55:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 32,408,458 | 1 | false | 0 | 0 | I was able to correct this by adding a new PYTHONPATH and the correct path to the enviroment variable under console in pycharm | 1 | 1 | 0 | I am having an issue importing a python path into pycharm. I have tried adding the path I want to the following.
Default Settings > Project Interpreters > Interpreter Path
When I do this I see the correct path that I am trying to import from, but when I try to run from the console I still get an import error.
Thanks | Pycharm Python Path | 0 | 0 | 0 | 1,159 |
32,406,765 | 2015-09-04T21:21:00.000 | 0 | 1 | 0 | 1 | python,eclipse,version-control,synchronization,pydev | 32,408,606 | 3 | true | 1 | 0 | I use mercurial. I picked it because it seemed easier. But is is only easiER.
There is mercurial eclipse plugin.
Save a copy of your workspace and maybe your eclipse folder too before daring it :) | 2 | 0 | 0 | I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks | Using SCM to synchronize PyDev eclipse projects between different computer | 1.2 | 0 | 0 | 86 |
32,406,765 | 2015-09-04T21:21:00.000 | 0 | 1 | 0 | 1 | python,eclipse,version-control,synchronization,pydev | 32,466,408 | 3 | false | 1 | 0 | I use bitbucket coupled with mercurial. That is my repository is on bitbucket and i pull and psuh to it from mercurial within eclipse
For my backup i have an independent carbonite process going to net back all hard disk files. But I imagine there is a clever free programatic way to do so. If one knew how to write the appropriate scripts.
Glad the first suggestion was helpful .you are wise to bite the bullet and get this in place now. ;) | 2 | 0 | 0 | I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks | Using SCM to synchronize PyDev eclipse projects between different computer | 0 | 0 | 0 | 86 |
32,409,274 | 2015-09-05T03:38:00.000 | 4 | 0 | 0 | 0 | javascript,java,python,c,algorithm | 32,409,317 | 5 | false | 0 | 0 | You can do it in O(nlog(n)):
Sort the array
find the duplicates (they will be next to each other) in one pass.
I think that is what the interviewer wanted to hear.
if you did a merge sort or a quick sort, finding the duplicates could be done at merging in hidden time.
These can be implemented "in-place", or "by-part" if the array is too large to fit in memory. | 3 | 3 | 1 | During one of technical interview got this question.
I know way to solve this problem using (in java) HashSet.
But i could not understand when interviewer forced on the word "a very large array let's say 10 million elements in the given array".
Do i need to change the approach? If not, what should be the efficient to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | Algo to find duplicates in a very large array | 0.158649 | 0 | 0 | 3,410 |
32,409,274 | 2015-09-05T03:38:00.000 | 4 | 0 | 0 | 0 | javascript,java,python,c,algorithm | 32,409,400 | 5 | false | 0 | 0 | There were some key things, which interviewer expected you to ask back like: if you can not load the array in memory, then how much I can load. These are the steps to solve the problem:
you need to divide the array in how much memory is available to you.
Let's say you can load 1M number at a time. You have split the data in k parts. You load the first 1M and build Min Heap of it. Then remove the top and apply the Heapify on Min Heap.
Repeat the same for other parts of the data.
Now you will have K sorted splits.
Now fetch a first number from each K split and again build a Min Heap.
Now remove the top from the Min Heap and store the value in temporary variable as well for comparing with the next coming number for finding the duplicates.
Now fetch the next number from the same split (part) whose number got removed last time. Put that number on top of Min Heap and apply Heapify.
Now the top of the Min Heap is your next sorted number and compare it with the temporary variable for finding the duplicates. Update thetemporary variable` if number is not duplicate. | 3 | 3 | 1 | During one of technical interview got this question.
I know way to solve this problem using (in java) HashSet.
But i could not understand when interviewer forced on the word "a very large array let's say 10 million elements in the given array".
Do i need to change the approach? If not, what should be the efficient to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | Algo to find duplicates in a very large array | 0.158649 | 0 | 0 | 3,410 |
32,409,274 | 2015-09-05T03:38:00.000 | 3 | 0 | 0 | 0 | javascript,java,python,c,algorithm | 32,409,590 | 5 | true | 0 | 0 | One thing to keep in mind is that O-notation doesn't necessarily tell you what algorithm is fastest. If one algorithm is O(n log n) and another algorithm is O(n2), then there is some value M such that the first algorithm is faster for all n > M. But M could be much larger than the amount of data you'll ever have to deal with.
The reason I'm bringing this up is that I think a HashSet is probably still the best answer, although I'd have to profile it to find out for sure. Assuming that you aren't allowed to set up a hash table with 10 million buckets, you may still be able to set up a reasonable-sized table. Say you can create a HashSet with table size 100,000. The buckets will then be sets of objects. If n is the size of the array, the average bucket size will be n / 100000. So to see if an element is already in the HashSet, and add it if not, will take a fixed amount of time to compute the hash value, and O(n) to search the elements in the bucket if they're stored in a linear list(*). Technically, this means that the algorithm to find all duplicates is O(n2). But since one of the n's in n2 is for a linear list that is so much smaller than the array size (by a factor of 100000), it seems likely to me that it will still take much less time than a O(n log n) sort, for 10 million items. The value of M, the point at which the O(n log n) sort becomes faster, is likely to be much, much larger than that. (I'm just guessing, though; to find out for certain would require some profiling.)
I'd tend to lean against using a sort anyway, because if all you need to do is find duplicates, a sort is doing more work than you need. You shouldn't need to put the elements in order, just to find duplicates. That to me suggests that a sort is not likely to be the best answer.
(*) Note that in Java 8, the elements in each bucket will be in some kind of search tree, probably a red-black tree, instead of in a linear list. So the algorithm will still be O(n log n), and still probably lots faster than a sort. | 3 | 3 | 1 | During one of technical interview got this question.
I know way to solve this problem using (in java) HashSet.
But i could not understand when interviewer forced on the word "a very large array let's say 10 million elements in the given array".
Do i need to change the approach? If not, what should be the efficient to achieve this?
PS: Algo or implementation is language agnostic.
Thank you. | Algo to find duplicates in a very large array | 1.2 | 0 | 0 | 3,410 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.