Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,356,413 | 2016-09-06T19:24:00.000 | -3 | 0 | 0 | 1 | python,windows,ssl,pip | 55,395,471 | 7 | false | 0 | 0 | Open Anaconda Navigator.
Go to File\Preferences.
For 'Enable SSL verification', either choose Disable (not recommended)
or keep it enabled and indicate the SSL certificate path (optional).
To update a package to a specific version:
Select Install at the top right.
Select the package and click on the tick.
Choose Mark for update, or
choose Mark for specific version installation.
Click Apply | 2 | 128 | 0 | I just installed Python3 from python.org and am having trouble installing packages with pip. By design, there is a man-in-the-middle packet inspection appliance on the network here that inspects all packets (ssl included) by resigning all ssl connections with its own certificate. Part of the GPO pushes the custom root certificate into the Windows Keystore.
When using Java, if I need to access any external https sites, I need to manually update the cacerts in the JVM to trust the Self-Signed CA certificate.
How do I accomplish that for python? Right now, when I try to install packages using pip, understandably, I get wonderful [SSL: CERTIFICATE_VERIFY_FAILED] errors.
I realize I can ignore them using the --trusted-host parameter, but I don't want to do that for every package I'm trying to install.
Is there a way to update the CA Certificate store that python uses? | How to add a custom CA Root certificate to the CA Store used by pip in Windows? | -0.085505 | 0 | 0 | 213,196 |
39,356,413 | 2016-09-06T19:24:00.000 | 50 | 0 | 0 | 1 | python,windows,ssl,pip | 39,358,282 | 7 | false | 0 | 0 | Run: python -c "import ssl; print(ssl.get_default_verify_paths())" to check the current paths which are used to verify the certificate. Add your company's root certificate to one of those.
The path openssl_capath_env points to the environment variable: SSL_CERT_DIR.
If SSL_CERT_DIR doesn't exist, you will need to create it and point it to a valid folder within your filesystem. You can then add your certificate to this folder to use it. | 2 | 128 | 0 | I just installed Python3 from python.org and am having trouble installing packages with pip. By design, there is a man-in-the-middle packet inspection appliance on the network here that inspects all packets (ssl included) by resigning all ssl connections with its own certificate. Part of the GPO pushes the custom root certificate into the Windows Keystore.
When using Java, if I need to access any external https sites, I need to manually update the cacerts in the JVM to trust the Self-Signed CA certificate.
How do I accomplish that for python? Right now, when I try to install packages using pip, understandably, I get wonderful [SSL: CERTIFICATE_VERIFY_FAILED] errors.
I realize I can ignore them using the --trusted-host parameter, but I don't want to do that for every package I'm trying to install.
Is there a way to update the CA Certificate store that python uses? | How to add a custom CA Root certificate to the CA Store used by pip in Windows? | 1 | 0 | 0 | 213,196 |
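A minimal sketch of the check described in the answer above: see where Python's ssl module looks for CA certificates, then point the SSL_CERT_DIR environment variable at a folder holding the company root CA. The folder and bundle paths below are placeholders.

```python
import ssl

print(ssl.get_default_verify_paths())
# shows openssl_capath_env ('SSL_CERT_DIR'), openssl_cafile_env, etc.
# Then, outside Python (Windows):  set SSL_CERT_DIR=C:\certs
# pip also accepts a bundle directly: pip install --cert C:\certs\root.pem <pkg>
```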
39,361,214 | 2016-09-07T04:29:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 39,361,292 | 5 | false | 0 | 0 | This regular expression will detect a single character surrounded by spaces, if the character is a plus, minus, multiplication or division sign: r' ([-+*/]) '. Note the spaces inside the apostrophes, and that the hyphen is placed first so it is not read as a character range. The parentheses "capture" the character in the middle. If you need to recognize a different set of characters, change the set inside the brackets.
If you haven't dealt with regular expressions before, read up on the re module. They are very useful for simple text processing. The two relevant features here are "character classes" (the square brackets in my example) and "capturing parentheses" (the round parens). | 1 | 0 | 0 | Anyone know how I can find the character in the center that is surrounded by spaces?
1 + 1
I'd like to be able to separate the + in the middle to use in an if/else statement.
Sorry if I'm not too clear, I'm a Python beginner. | Python detect character surrounded by spaces | 0.039979 | 0 | 0 | 2,051 |
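A short sketch of the regex from the answer above, applied to the question's example string:

```python
import re

match = re.search(r' ([-+*/]) ', '1 + 1')
if match:
    operator = match.group(1)   # -> '+'
```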
39,361,258 | 2016-09-07T04:34:00.000 | 2 | 0 | 1 | 0 | python,boolean-expression,short-circuiting,operator-precedence | 39,361,346 | 2 | false | 0 | 0 | Because and binds more tightly than or, the expression parses as 1 or ((1/0) and 1). b or anything_else is defined to return b if b is true-ish, without evaluating anything_else. Since your first 1 is true-ish, your 1/0 never gets evaluated, hence no error. By "true-ish" I mean any value that Python considers true, not only the True boolean value. Try your expression with True or [2] in place of the first 1 to see what I mean. | 1 | 2 | 0 | When I evaluate the following expression:
1 or (1/0) and 1
What rules (precedence, short-circuit evaluation, etc.) are followed to get the answer? | Python precedence rules for boolean operators | 0.197375 | 0 | 0 | 544
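A two-line demonstration of the parse and the short-circuit described above; this is safe to run because the division is never evaluated:

```python
result = 1 or (1/0) and 1    # same as: 1 or ((1/0) and 1)
assert result == 1           # no ZeroDivisionError is raised
```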
39,368,789 | 2016-09-07T11:33:00.000 | 0 | 0 | 1 | 0 | python,git,pip | 39,369,148 | 2 | false | 0 | 0 | The best way to do this would be to clone the repository, or just download the requirements.txt file, and then run pip install -r requirements.txt to install all the module dependencies. | 1 | 2 | 0 | For example, we have project Foo with dependency Bar (that is in a private Git repo) and we want to install Bar into the Foo directory via pip from requirements.txt.
We can manually install Bar with console command:
pip install --target=. git+ssh://git.repo/some_pkg.git#egg=SomePackage
But how to install Bar into current directory from requirements.txt? | How to install package via pip requirements.txt from VCS into current directory? | 0 | 0 | 0 | 1,394 |
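For illustration, the same VCS spec from the question can go on its own line of requirements.txt (repo URL and egg name are the question's placeholders):

```
git+ssh://git.repo/some_pkg.git#egg=SomePackage
```

Running pip install --target=. -r requirements.txt then installs Bar into the current directory.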
39,369,335 | 2016-09-07T11:58:00.000 | 0 | 0 | 1 | 1 | ipython,anaconda,jupyter,jupyter-notebook | 39,370,822 | 1 | false | 0 | 0 | Found the solution - go to your Anaconda install directory (for me this was C:\Anaconda3) and open the file cwp.py in a text editor. Change the line
os.chdir(documents_folder)
to
os.chdir("C:\\my\\path\\here"). | 1 | 0 | 0 | I am trying to change the default Jupyter Notebook start directory on my Windows 7 Enterprise machine. Other answers have suggested changing the "Start In" field found through Right-click>Properties>Shortcut on the Jupyter program in my Start menu, however this doesn't have any effect. When I change this field to my desired directory and try running the program it still opens in the default directory, when I recheck the "Start In" field it is the same as whatever I had changed it to so it looks like it isn't being changed back by Windows, rather it's being disregarded entirely. For reference the default directory is at P:\ which is not a local directory and is hosted on my company servers, and I am trying to change the Jupyter startup directory to C:.
I'm sure the path is correct - I've tried a few different ones and they are working with autocomplete. I should mention this is a locked down corporate machine and I have to run Jupyter as administrator or else it exits immediately. I do have elevated rights and have checked the user permissions on Jupyter. This is using the Jupyter that comes as default with the current Python 3.5 distribution of Anaconda - I have also tried reinstalling the whole Anaconda package and I'm currently working with a fresh default install.
I am wondering if there is perhaps a way through changing the startup script that is run when you execute the program? | Changing Jupyter Notebook start location [Win 7 Enterprise] | 0 | 0 | 0 | 433 |
39,371,646 | 2016-09-07T13:46:00.000 | 2 | 0 | 1 | 0 | python,pdb,ipdb | 39,399,896 | 2 | false | 0 | 0 | If you want to keep your breakpoints rather than clearing them, but also want them not to be reached, you can use pdb's disable command. I don't see a convenient way to disable all breakpoints in a concise way, but you can list their numbers in the disable command. You can also be selective about this, and disable some breakpoints and leave others enabled. You can undo the effect of a disable command with pdb's enable command. The break command (or just b) with no parameters shows, for each breakpoint, whether it is enabled. | 1 | 6 | 0 | Is there a way to tell pdb or ipdb to skip all future break-points and just finish execution as if they weren't there? | Is it possible to skip breakpoints in pdb / ipdb? | 0.197375 | 0 | 0 | 8,222
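A sketch of the pdb session this describes; the breakpoint numbers are illustrative:

```
(Pdb) break              # list breakpoints and whether each is enabled
(Pdb) disable 1 2 3      # future hits of these breakpoints are skipped
(Pdb) continue           # runs to completion as if they weren't there
```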
39,371,700 | 2016-09-07T13:48:00.000 | 0 | 0 | 0 | 0 | python,flash,actionscript,jsfl | 41,182,630 | 2 | false | 0 | 1 | I've got the same issue.
I suggest you look at the structure of a .fla file.
To do that, take one of your .fla files and save it from Animate as a .xfl.
It's basically a folder containing the whole library and the visible content.
If you open the DOMDocument.xml you'll see the XML structure of your .fla.
With Visual Basic or other software you may be able to "recreate" a DOMDocument.xml and all the other folders, and then rebuild the .fla.
It's a bit long and tricky, but it may work. | 1 | 1 | 0 | I'm looking to create a lot of .fla files automatically but I'm not sure where to start.
This is for an internal tool for a cartoon studio. They work with Flash CC; each Flash file is a scene of an episode (scene 1, scene 2, scene 3 ...) and in each file we have a .flv of the scene, a .wav, a .jpg, and a specific template (folder and layer organisation and a few things like a safe frame, etc.).
All external elements are in external folders:
a folder with .flv
a folder with .wav
a folder with .jpg
And everything is named incrementally, like this: "video_sc1, video_sc2", "sound_sc1, sound_sc2", etc.
So, is it possible to create many .fla files (the same number as detected in the .flv folder, for example) with all the "corresponding" assets inside?
And finally obtain something like "animfile_sc01.fla, animfile_sc02.fla ..." without doing all this manually (very boring and time-consuming).
I do some Python, and very little .jsfl, so I'm not sure about the feasibility of my project (I use Flash mainly for animation and graphics, not so much for coding).
Do you have any hint or tip to show me ?
Thanks !
(PS: my English is not perfect, but I can explain better if needed) | Automatically creating hundreds of .fla files that gather external resources | 0 | 0 | 0 | 223
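A hedged Python sketch of the driving loop only, under the assumption that the scene list comes from the .flv folder as the question describes; the folder path and the .jpg naming are placeholders, and the actual .fla/.xfl assembly would happen where the print is:

```python
import os

flv_dir = "C:/project/flv"   # placeholder folder of video_scN.flv files
for name in sorted(os.listdir(flv_dir)):
    if name.startswith("video_") and name.endswith(".flv"):
        scene = name[len("video_"):-len(".flv")]   # e.g. "sc1"
        print("would build animfile_%s.fla from video_%s.flv, "
              "sound_%s.wav and the matching .jpg" % (scene, scene, scene))
```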
39,372,150 | 2016-09-07T14:07:00.000 | 0 | 0 | 1 | 0 | python,sublimetext3,sublime-anaconda | 39,795,023 | 1 | false | 0 | 0 | Fixed by using PackageControl to reinstall Anaconda. | 1 | 0 | 0 | I am using a Sublime Text 3 portable app and I simply dragged all of the Anaconda files into the packages directory, i.e. \Sublime Text Build 3114 x64\Data\Packages\anaconda-1.3.4.
However, I keep getting an error in the console that says ImportError: No module named 'anaconda-1'. I can see the Anaconda option when I right-click anywhere, but all of the commands in the Anaconda menu are greyed out. Nothing else, like the auto-complete, is working either.
Any help is appreciated.
EDIT: Fixed by using PackageControl to reinstall Anaconda. | Anaconda plugin not working on Sublime Text 3 | 0 | 0 | 0 | 1,186 |
39,378,728 | 2016-09-07T20:46:00.000 | 1 | 1 | 0 | 0 | python,django,sms | 39,379,920 | 1 | false | 1 | 0 | Using Twilio is not mandatory, but I do recommend it. Twilio does the heavy lifting, your Django App just needs to make the proper API Requests to Twilio, which has great documentation on it.
Twilio has Webhooks as well, which you can 'hook' to specific Django Views to process certain events. As for the 'programmable' aspect of your app, you can use django-celery, django-cron, RabbitMQ or other task-queueing software. | 1 | 1 | 0 | I came to the requirement to send SMS from my Django app. It's a dashboard for multiple clients, and each client will have the ability to send programmable SMS.
Is this achievable with django-smsish? I have found some packages that aren't updated, and sending SMS via email is not possible.
All answers found are old and I have tried all approaches suggested.
Do I have to use services like twilio mandatorily? Thanks | Sending SMS from django app | 0.197375 | 0 | 0 | 765 |
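A hedged sketch of sending one SMS with the Twilio client library (pip install twilio); the credentials and phone numbers are placeholders and would normally come from Django settings or environment variables:

```python
from twilio.rest import Client

def send_sms(body, to_number):
    # Placeholder credentials; never hard-code these in a real project.
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")
    client.messages.create(to=to_number, from_="+15005550006", body=body)
```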
39,380,527 | 2016-09-07T23:40:00.000 | 0 | 0 | 1 | 0 | python | 39,380,622 | 1 | true | 0 | 1 | Keep the positions of the snake segments in a list (the first segment is the head).
On each move, insert the head's new position before the first segment (and remove the last segment).
Use this list to blit the snake segments. | 1 | 0 | 0 | I'm trying to reproduce the game "Snake" in pygame, using the pygame.blit function instead of pygame.draw. My question is how to make an image follow another image; I mean, make the snake's body image follow its head. In the current state of my program the head moves on its own. | Pygame: how to blit an image that follows another image | 1.2 | 0 | 0 | 42
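A minimal sketch of the list-of-segments idea from the answer above; the surface, starting positions and step size are placeholders:

```python
import pygame

segment_img = pygame.Surface((10, 10))          # stand-in for the snake image
segments = [(50, 50), (40, 50), (30, 50)]       # head first

def move(dx, dy):
    head_x, head_y = segments[0]
    segments.insert(0, (head_x + dx, head_y + dy))  # new head position
    segments.pop()                                   # drop the tail

def draw(screen):
    for pos in segments:
        screen.blit(segment_img, pos)
```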
39,382,531 | 2016-09-08T04:19:00.000 | -3 | 0 | 1 | 0 | python,jupyter-notebook | 39,382,560 | 6 | false | 0 | 0 | Ok youngster, I will break this down for ya in simple steps:
go to the intended cell and put your mouse cursor there
on the keyboard press Ctrl-A and then Delete
Ta-dah all done
Glad to help | 5 | 9 | 0 | All I want to do is try some new codes in ipython notebook and I don't want to save it every time as its done automatically. Instead what I want is clear all the codes of ipython notebook along with reset of variables.
I want to do some coding and clear everything and start coding another set of codes without going to new python portion and without saving the current code.
Any shortcuts will be appreciated.
Note: I want to clear every cells code one time. All I want is an interface which appears when i create new python file, but I don't want to save my current code. | ipython notebook clear all code | -0.099668 | 0 | 0 | 14,420 |
39,382,531 | 2016-09-08T04:19:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 39,472,132 | 6 | false | 0 | 0 | Well, you could use Shift-M to merge the cells (from top) - at least there's only one left to delete manually. | 5 | 9 | 0 | All I want to do is try some new codes in ipython notebook and I don't want to save it every time as its done automatically. Instead what I want is clear all the codes of ipython notebook along with reset of variables.
I want to do some coding and clear everything and start coding another set of codes without going to new python portion and without saving the current code.
Any shortcuts will be appreciated.
Note: I want to clear every cells code one time. All I want is an interface which appears when i create new python file, but I don't want to save my current code. | ipython notebook clear all code | 0.033321 | 0 | 0 | 14,420 |
39,382,531 | 2016-09-08T04:19:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 43,153,884 | 6 | false | 0 | 0 | You can type dd to remove the currently selected cell. And if you continuously press dd you can clear the whole notebook. | 5 | 9 | 0 | All I want to do is try some new codes in ipython notebook and I don't want to save it every time as its done automatically. Instead what I want is clear all the codes of ipython notebook along with reset of variables.
I want to do some coding and clear everything and start coding another set of codes without going to new python portion and without saving the current code.
Any shortcuts will be appreciated.
Note: I want to clear every cells code one time. All I want is an interface which appears when i create new python file, but I don't want to save my current code. | ipython notebook clear all code | 0.033321 | 0 | 0 | 14,420 |
39,382,531 | 2016-09-08T04:19:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 66,080,534 | 6 | false | 0 | 0 | Just do Ctrl+A; this selects all the individual blocks/cells, and then press d twice. | 5 | 9 | 0 | All I want to do is try some new codes in ipython notebook and I don't want to save it every time as its done automatically. Instead what I want is clear all the codes of ipython notebook along with reset of variables.
I want to do some coding and clear everything and start coding another set of codes without going to new python portion and without saving the current code.
Any shortcuts will be appreciated.
Note: I want to clear every cells code one time. All I want is an interface which appears when i create new python file, but I don't want to save my current code. | ipython notebook clear all code | 0 | 0 | 0 | 14,420 |
39,382,531 | 2016-09-08T04:19:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 47,859,663 | 6 | false | 0 | 0 | Press the scissors (cut) button multiple times for each cell you want to delete. | 5 | 9 | 0 | All I want to do is try some new codes in ipython notebook and I don't want to save it every time as its done automatically. Instead what I want is clear all the codes of ipython notebook along with reset of variables.
I want to do some coding and clear everything and start coding another set of codes without going to new python portion and without saving the current code.
Any shortcuts will be appreciated.
Note: I want to clear every cells code one time. All I want is an interface which appears when i create new python file, but I don't want to save my current code. | ipython notebook clear all code | 0.033321 | 0 | 0 | 14,420 |
39,382,725 | 2016-09-08T04:42:00.000 | 1 | 0 | 1 | 0 | python,json,tensorflow | 39,395,872 | 1 | false | 0 | 0 | A Tensor in TensorFlow is a node in the graph which, when run, will produce a tensor. So you can't save the SparseTensor directly because it's not a value (you can serialize the graph). If you do evaluate the sparsetensor, you get a SparseTensorValue object back which can be serialized as it's just a tuple. | 1 | 0 | 1 | I'm creating a list of Sparsetensors in Tensorflow. I want to access them in later sessions of my program. I've read online that you can store Python lists as json files but how do I save a list of Sparsetensors to a json file and then use that later on?
Thanks in advance | Saving Python list containing Tensorflow Sparsetensors to file for later access? | 0.197375 | 0 | 0 | 70 |
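A hedged sketch of the answer's suggestion using the TF 1.x graph-mode API (argument names differ in very old versions): evaluate the SparseTensor to get a SparseTensorValue, then serialize its plain fields:

```python
import json
import tensorflow as tf

st = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0],
                     dense_shape=[3, 4])
with tf.Session() as sess:
    value = sess.run(st)   # a SparseTensorValue tuple, not a graph node
    payload = json.dumps({"indices": value.indices.tolist(),
                          "values": value.values.tolist(),
                          "dense_shape": list(value.dense_shape)})
```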
39,383,557 | 2016-09-08T06:03:00.000 | 21 | 0 | 0 | 0 | python,apache-spark,pyspark,apache-spark-sql | 44,253,561 | 10 | false | 0 | 0 | You can use df.dropDuplicates(['col1','col2']) to get only distinct rows based on colX in the array. | 2 | 149 | 1 | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | Show distinct column values in pyspark dataframe | 1 | 0 | 0 | 344,799 |
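A sketch of the two common idioms from the answers to this question; the tiny DataFrame here is made up, and collect() pulls the distinct values back to the driver:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1), ("a", 1), ("b", 2)], ["col1", "col2"])

values = [row["col1"] for row in df.select("col1").distinct().collect()]
deduped = df.dropDuplicates(["col1", "col2"])   # distinct rows by a column subset
```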
39,383,557 | 2016-09-08T06:03:00.000 | 1 | 0 | 0 | 0 | python,apache-spark,pyspark,apache-spark-sql | 60,578,769 | 10 | false | 0 | 0 | If you want to select all columns' data as distinct rows from a DataFrame (df), then
df.select('*').distinct().show(10,truncate=False) | 2 | 149 | 1 | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | Show distinct column values in pyspark dataframe | 0.019997 | 0 | 0 | 344,799 |
39,387,888 | 2016-09-08T09:56:00.000 | 0 | 0 | 0 | 0 | node.js,ironpython,edgejs | 43,118,997 | 1 | true | 1 | 0 | The problem comes from Edge, where the version number is hardcoded. | 1 | 1 | 0 | I am testing Edge.JS since I need to run Python functions from Node.js. The problem is that Edge seems to want another version of IronPython:
Could not load file or assembly 'IronPython, Version=2.7.0.40, Culture=neutral, PublicKeyToken=7f709c5b713576e1' or one of its dependencies. The system cannot find the file specified.
I have 2.7.6.3 installed, should I downgrade? Or is there a way to set the version in edge? | Change IronPython version in Edge.js? | 1.2 | 0 | 0 | 102 |
39,387,983 | 2016-09-08T10:01:00.000 | 1 | 0 | 0 | 0 | python,mysql,django,pootle | 39,447,127 | 1 | true | 1 | 0 | Install django-debug-toolbar; you can easily check all of the queries that have been executed | 1 | 0 | 0 | I am trying to debug a Pootle (Pootle is built on Django) installation which fails with a Django transaction error whenever I try to add a template to an existing language. Using the Python debugger I can see that it fails when Pootle tries to save a model, as well as all the queries that have been made in that session.
What I can't see is what specifically causes the save to fail. I figure pootle/django must have added some database constraint; how do I figure out which one? MySQL (the database being used) apparently can't log just failed transactions. | How do I get Django to log why an sql transaction failed? | 1.2 | 1 | 0 | 223
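As a hedged alternative to the toolbar, Django itself can log every executed query through the django.db.backends logger when DEBUG is enabled; a minimal settings sketch:

```python
# In settings.py; only emits SQL when DEBUG = True.
LOGGING = {
    "version": 1,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "loggers": {
        "django.db.backends": {"handlers": ["console"], "level": "DEBUG"},
    },
}
```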
39,392,297 | 2016-09-08T13:28:00.000 | 1 | 0 | 0 | 0 | python,blogs,pelican | 60,045,659 | 2 | false | 1 | 0 | If you have a non-standard theme installed, go to the folder of that theme and navigate to the templates folder. There are a lot of different html files. If you want to translate the generated text, like "read more" or "Other articles", open the index.html file inside the template folder and search for the text you want to translate, replace it with yours and regenerate your page. Be cautious not to break the syntax of the template file, tho. | 1 | 1 | 0 | I am setting up a new Pelican blog and stumbled upon a bit of a problem. I am German, the blog is going to be in german so I want the generated text (dates, 'Page 1/5'...) to be in german. (In my post date I include the weekday)
In pelicanconf.py I tried
DEFAULT_LANG = u'ger' and
DEFAULT_LANG = u'de' and
DEFAULT_LANG = u'de_DE'
but I only get everything in en. | How to translate generated language in Pelican? | 0.099668 | 0 | 0 | 538 |
39,394,328 | 2016-09-08T15:02:00.000 | 1 | 1 | 0 | 1 | python,performance,file,io | 39,395,013 | 5 | false | 0 | 0 | If you can find a way to take advantage of hash tables your task will change from O(N^2) to O(N). The implementation will depend on exactly how large your files are and whether or not you have duplicate job IDs in file 2. I'll assume you don't have any duplicates. If you can fit file 2 in memory, just load the thing into pandas with job as the index. If you can't fit file 2 in memory, you can at least build a dictionary of {Job #: row # in file 2}. Either way, finding a match should be substantially faster. | 2 | 0 | 0 | I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.
I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).
The problem
I am looking to compare the contents two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.
Here's an example of the files:
File 1 File 2
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
I would like to compare the contents of lines with the same "Job" field, like so:
Job File 1 Content File 2 Content
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line).
What is the most efficient way of doing this (matching lines)?
The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.
I appreciate any and all help.
Thank you! | Comparing the contents of very large files efficiently | 0.039979 | 0 | 0 | 1,140 |
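A hedged sketch of the hash-join idea from the answer above using pandas: load file 2 once and match on the Job column in a single pass instead of rescanning per line. File names are placeholders; the columns follow the question's example:

```python
import pandas as pd

df1 = pd.read_csv("file1.csv")                  # columns: Job, Time
df2 = pd.read_csv("file2.csv")                  # columns: Job, Start, End
merged = df1.merge(df2, on="Job", how="inner")  # one row per matching Job
```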
39,394,328 | 2016-09-08T15:02:00.000 | 1 | 1 | 0 | 1 | python,performance,file,io | 39,396,201 | 5 | false | 0 | 0 | I was trying to develop something where you'd split one of the files into smaller files (say 100,000 records each) and keep a pickled dictionary of each file that contains all Job_id as a key and its line as a value. In a sense, an index for each database and you could use a hash lookup on each subfile to determine whether you wanted to read its contents.
However, you say that the file grows continually and each Job_id is unique. So, I would bite the bullet and run your current analysis once. Have a line counter that records how many lines you analysed for each file and write to a file somewhere. Then in future, you can use linecache to know what line you want to start at for your next analysis in both file1 and file2; all previous lines have been processed so there's absolutely no point in scanning the whole content of that file again, just start where you ended in the previous analysis.
If you run the analysis at sufficiently frequent intervals, who cares if it's O(n^2) since you're processing, say, 10 records at a time and appending it to your combined database. In other words, the first analysis takes a long time but each subsequent analysis gets quicker and eventually n should converge on 1 so it becomes irrelevant. | 2 | 0 | 0 | I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.
I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).
The problem
I am looking to compare the contents two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.
Here's an example of the files:
File 1 File 2
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
I would like to compare the contents of lines with the same "Job" field, like so:
Job File 1 Content File 2 Content
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line).
What is the most efficient way of doing this (matching lines)?
The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.
I appreciate any and all help.
Thank you! | Comparing the contents of very large files efficiently | 0.039979 | 0 | 0 | 1,140 |
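A sketch of the "remember where you stopped" idea from this answer: persist a line counter per file and only process the newly appended tail on each run. In a real pipeline the state dict would be written to disk between runs:

```python
def unprocessed_lines(path, state):
    """Return only the lines appended since the previous run."""
    start = state.get(path, 0)
    with open(path) as f:
        lines = f.readlines()[start:]
    state[path] = start + len(lines)
    return lines
```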
39,398,318 | 2016-09-08T18:56:00.000 | 1 | 0 | 1 | 0 | python,pycharm,virtualenv | 39,409,962 | 1 | true | 0 | 0 | In PyCharm, do File -> Open and point at the directory. It will turn that directory into a "project" (meaning, it will create a .idea subdirectory). Depending on how you named your virtualenv, it will likely detect the virtualenv and assign it the project's interpreter. | 1 | 2 | 0 | I was sent a bunch of Python files that have various custom dependencies inside nested folders. I used to run the main file from Terminal by first navigating to the main folder, then running python main.py. This worked until I needed to update some modules and ran into permissions problems.
So I downloaded Pycharm and I'm trying to use a virtualenv. I'm stuck though: do I create a new Pycharm project?
Under the project interpreter, I made a new virtualenv with no modules, but when I do pip list in the command window that's below, it lists all my modules.
How can I "import" my existing Python files, put them in a clean virtualenv, and install the modules I need? | Using Pycharm virtualenv with preexisting files | 1.2 | 0 | 0 | 176 |
39,403,002 | 2016-09-09T02:37:00.000 | 0 | 0 | 1 | 0 | python,pip | 45,851,298 | 1 | false | 0 | 0 | Create an empty .egg-info file in your site-packages directory.
For example, on my machine I did touch /usr/lib64/python3.6/site-packages/GLWindow-1.8.0-py3.6.egg-info to trick pip3 into thinking that I've installed GLWindow. | 1 | 1 | 0 | I'm installing the openbabel package, and it can automatically generate the necessary Python libraries during compilation. This saves a good chunk of time, since installing from source via pip takes a few minutes, and that time can be rolled into the initial compilation.
I've listed it as a requirement in my requirements.txt file, but when I go to install (pip install -r requirements.txt), it attempts to reinstall the openbabel Python library. When I run pip show or pip list, openbabel doesn't show up.
Is there a way to manually mark a package as installed so pip thinks it's installed, even if it can't find the package? Or is there a file I can create that pip will use that will tell it openbabel is installed? | Manually set package as installed in Python/pip | 0 | 0 | 0 | 554 |
39,403,477 | 2016-09-09T03:39:00.000 | 1 | 0 | 0 | 0 | python,math,direction,freecad | 39,403,696 | 3 | false | 0 | 0 | We'll need a lot more information to give a good answer, but here is a first attempt, with questions after.
One way to approximate a tangent vector is with a secant vector: If your curve is given parametrically as a function of t and you want the tangent at t_0, then choose some small number e; evaluate the function at t_0 + e and at t_0 - e; then subtract the two results to get the secant vector. It will be a good approximation to the tangent vector if your curve isn't too curvy in that interval around t.
Now for the questions. How is your question related to Python, and where does FreeCAD come in? You have constructed the curve in FreeCAD, and you want to compute tangents in Python? Can you say anything about the curve, like whether it's a cubic spline curve, whether it curves in only one direction, what you mean by "center" and "axis"? (An arbitrary curve with tangent vectors isn't necessarily a cubic spline, might curve in very complicated ways, and doesn't have any notion of a center or axis.) | 2 | 2 | 0 | I can get some info from a Arc.
FirstPoint [x, y, z]
LastPoint [x, y, z]
Center [x, y, z]
Axis [x, y, z] # Perpendicular to the plane
How can I get the FirstPoint's and LastPoint's tangential direction vectors?
I want to get an intersection point from two direction vectors.
I work in FreeCAD. | How can I get Tangential direction Vector in Python? | 0.066568 | 0 | 0 | 964 |
39,403,477 | 2016-09-09T03:39:00.000 | 1 | 0 | 0 | 0 | python,math,direction,freecad | 39,409,346 | 3 | true | 0 | 0 | Circular arc from A to B with center M and normal vector N.
The tangent directions can be obtained by the cross product.
Tangent at A: N x (A-M)
Tangent at B: (B-M) x N
Both correspond to a rotation of +90° or -90° of the radius vectors around the axis N. | 2 | 2 | 0 | I can get some info from an Arc.
FirstPoint [x, y, z]
LastPoint [x, y, z]
Center [x, y, z]
Axis [x, y, z] # Perpendicular to the plane
How can I get the FirstPoint's and LastPoint's tangential direction vectors?
I want to get an intersection point from two direction vectors.
I work in FreeCAD. | How can I get Tangential direction Vector in Python? | 1.2 | 0 | 0 | 964 |
39,406,177 | 2016-09-09T07:32:00.000 | 0 | 0 | 1 | 0 | python,pip,virtualenv,requirements.txt | 53,108,990 | 4 | false | 1 | 0 | If you only want to see what packages you have installed then just do pip freeze.
But if you want all these packages in your requirements.txt, then do
pip freeze > requirements.txt | 2 | 14 | 0 | So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.
The first thing I install in the virtual environment is Flask==0.11.1. Flask installs its following dependencies:
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
wheel==0.24.0
Now, I create a requirements.txt to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:
Do I mention each of the Flask dependencies in the requirements.txt along with the version numbers
OR
Do I just mention the exact Flask version number in the requirements.txt and hope that when they do a pip install requirements.txt, Flask will take care of the dependency management and they will download the right versions of the dependent libraries | Managing contents of requirements.txt for a Python virtual environment | 0 | 0 | 0 | 35,512 |
39,406,177 | 2016-09-09T07:32:00.000 | 5 | 0 | 1 | 0 | python,pip,virtualenv,requirements.txt | 39,406,537 | 4 | true | 1 | 0 | Both approaches are valid and work. But there is a little difference. When you enter all the dependencies in the requirements.txt you will be able to pin the versions of them. If you leave them out, there might be a later update and if Flask has something like Werkzeug>=0.11 in its dependencies, you will get a newer version of Werkzeug installed.
So it comes down to updates vs. defined environment. Whatever suits you better. | 2 | 14 | 0 | So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.
The first thing I install in the virtual environment is Flask==0.11.1. Flask installs its following dependencies:
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
wheel==0.24.0
Now, I create a requirements.txt to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:
Do I mention each of the Flask dependencies in the requirements.txt along with the version numbers
OR
Do I just mention the exact Flask version number in the requirements.txt and hope that when they do a pip install requirements.txt, Flask will take care of the dependency management and they will download the right versions of the dependent libraries | Managing contents of requirements.txt for a Python virtual environment | 1.2 | 0 | 0 | 35,512 |
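For illustration, a fully pinned requirements.txt (as pip freeze would emit it inside the virtualenv), using the versions listed in the question:

```
Flask==0.11.1
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
```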
39,409,084 | 2016-09-09T10:10:00.000 | 13 | 0 | 1 | 0 | python,python-2.7,package | 39,409,288 | 1 | true | 0 | 0 | Go to the directory of the .py file and run python -m compileall .. | 1 | 9 | 0 | If I edit source of an installed package and delete the .pyc when I restart an app that uses it there is no new pyc generated in place indicating there is a cache elsewhere.
How do I force the update to source to be taken into account? | How to force recompile of py source in site-packages? | 1.2 | 0 | 0 | 8,750 |
39,409,581 | 2016-09-09T10:34:00.000 | 0 | 0 | 0 | 0 | python,django | 39,409,980 | 2 | true | 1 | 0 | Make sure the package is correct (include an __init__.py file).
Make sure there are no other utils files at the same directory level. That is, if you are doing from utils import http_utils in views.py, there should not be a utils.py in the same folder; a conflict occurs because of that.
You don't have to include the folder in the INSTALLED_APPS setting, because the utils folder is a package and should be available for importing | 1 | 0 | 0 | I created a new utils package and an http_utils file with some decorators and HTTP utility functions in there. I imported them wherever I am using them and the IDE reports no errors, and I also added the utils module to the INSTALLED_APPS list.
However, when launching the server I am getting an import error:
ImportError: No module named http_utils
What am I missing? What else do I need to do to register a new module? | Django: no module named http_utils | 1.2 | 0 | 0 | 217 |
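A sketch of the layout the answer implies (names other than utils and http_utils are placeholders); the __init__.py marker is what makes utils importable as a package, and it needs no INSTALLED_APPS entry:

```
myproject/
    myapp/
        views.py          # from utils.http_utils import some_decorator
    utils/
        __init__.py       # empty file; marks utils as a package
        http_utils.py
```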
39,410,656 | 2016-09-09T11:34:00.000 | 1 | 0 | 0 | 0 | python,django | 39,411,527 | 1 | true | 1 | 0 | You could create a custom middleware (called after AuthenticationMiddleware) that checks if the user is logged in or not, and if not, replaces the current user object attached to the request with the user of your choice. | 1 | 1 | 0 | Is there a way to globally provide a custom instance of User class instead of AnonymousUser?
It is not possible to assign AnonymousUser instances where User is expected (for example in forms, there is a need to check for authentication and so on), and therefore an ordinary User instance named 'anonymous' (so that we could search for it in the DB) would be globally returned when a non-authenticated user visits the page. Would implementing a custom authentication mechanism somehow do the trick? I also want to ask if such an idea is a standard approach before diving into this. | Providing 'default' User when not logged in instead of AnonymousUser | 1.2 | 0 | 0 | 49
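A hedged sketch of the middleware idea in new-style Django middleware form (Django 1.10+); it must be listed after AuthenticationMiddleware, and the 'anonymous' account and per-request DB lookup are illustrative assumptions (cache the lookup in a real project):

```python
from django.contrib.auth import get_user_model

class DefaultUserMiddleware:
    """Swap AnonymousUser for a real fallback account."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if not request.user.is_authenticated:
            request.user = get_user_model().objects.get(username="anonymous")
        return self.get_response(request)
```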
39,412,829 | 2016-09-09T13:35:00.000 | 20 | 0 | 0 | 0 | python,pandas,decimal,xlm | 52,673,944 | 4 | false | 1 | 0 | This did not start working for me until I used both decimal=',' and thousands='.'
Pandas version: 0.23.4
So try to use both decimal and thousands:
i.e.:
pd.read_html(io="http://example.com", decimal=',', thousands='.')
Before I would only use decimal=',' and the number columns would be saved as type str with the numbers just omitting the comma.(weird behaviour) For example 0,7 would be "07" and "1,9" would be "19"
It is still being saved in the dataframe as type str, but at least I don't have to manually put in the dots. The numbers are correctly displayed; 0,7 -> "0.7" | 1 | 14 | 1 | I was reading an xlm file using pandas.read_html and it works almost perfectly; the problem is that the file has commas as decimal separators instead of dots (the default in read_html).
I could easily replace the commas with dots in one file, but I have almost 200 files with that configuration.
With pandas.read_csv you can define the decimal separator, but I don't know why in pandas.read_html you can only define the thousands separator.
Any guidance on this matter? Is there another way to automate the comma/dot replacement before the file is opened by pandas?
thanks in advance! | pandas.read_html not support decimal comma | 1 | 0 | 0 | 4,626 |
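A sketch of the workaround from the answers above, passing both separators in one call; the URL is a placeholder:

```python
import pandas as pd

tables = pd.read_html("http://example.com/table.html",
                      decimal=",", thousands=".")
df = tables[0]
# If a numeric column still arrives as str, a manual cast finishes the job:
# df["col"] = df["col"].str.replace(",", ".").astype(float)
```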
39,415,278 | 2016-09-09T15:43:00.000 | 5 | 0 | 1 | 0 | python,download,installation,python-3.5,scapy | 42,430,634 | 3 | false | 0 | 0 | use
pip3 install scapy-python3
and you will get it | 2 | 0 | 0 | I installed Python 3.5 on my Windows 8.1 computer. I need to install scapy for this version (I deleted all the files of the last one). How do I download and install it? I can't find anything that helps me to do that no matter how I searched. | Download and install scapy for windows for python 3.5? | 0.321513 | 0 | 0 | 12,592
39,419,108 | 2016-09-09T20:15:00.000 | 2 | 0 | 1 | 0 | python,build,pygame,pyinstaller | 68,652,885 | 2 | false | 0 | 1 | Not trying to dig up this old question, but this was at the top of my Google search so it may be for others as well.
If you intend to distribute the program in some kind of folder, you can always just mark everything unnecessary as hidden in Windows, and it will remain hidden even if you compress or extract it.
For a program that I designed to be very user friendly, I just selected each file and folder that was not necessary to the user and hid them. Unless the user has 'show hidden files' enabled (rarely the default), they aren't likely to be intimidated by the mess of files that PyInstaller creates.
Problem is, alongside the executable, there is so much clutter. So many files, like pyds and dlls accompany the exe in the same directory, making it look so ugly.
Now, I know that these files are important; the modules I used, such as Pygame, need them to work. Still, how do I make PyInstaller build my game, so that it puts the clutter into its own folder? I could just manually make a folder and move the files in there, but it stops the exe from working.
If this info would help any, I used Python 3.4.3 and am on Windows. | How to remove clutter from PyInstaller one-folder build? | 0.197375 | 0 | 0 | 4,472 |
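As an aside (not what the answer above does), PyInstaller's single-file mode avoids the clutter entirely by packing the support files into the executable itself; the script name is a placeholder:

```
pyinstaller --onefile game.py
```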
39,420,152 | 2016-09-09T21:46:00.000 | 0 | 0 | 0 | 0 | python,html,css,web-scraping | 39,420,644 | 1 | true | 1 | 0 | You can use Selenium with Firefox, or PhantomJS if you're on a headless machine; the browser will render the page, then you can locate the element and get its attributes.
On Python the method to get attributes is self-explanatory: Element_obj.get_attribute('attribute_name') | 1 | 0 | 0 | I am trying to scrape the font-size of each section of text in an HTML page. I have spent the past few days trying to do it, but I feel like I am trying to re-invent the wheel. I have looked at Python libraries like cssutils and beautiful-soup, but haven't had much luck sadly. I have made my own HTML parser that finds the font size inside the HTML only, but it doesn't look at stylesheets, which is really important. Any tips to get me headed in the right direction? | Scraping font-size from HTML and CSS | 1.2 | 0 | 0 | 556
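A hedged Selenium sketch: since the browser computes the final CSS, the effective font size can be read per element; the URL and selector are placeholders:

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")
for el in driver.find_elements_by_css_selector("p"):
    # value_of_css_property returns the computed style, stylesheets included
    print(el.value_of_css_property("font-size"))
driver.quit()
```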
39,423,756 | 2016-09-10T07:31:00.000 | 2 | 0 | 0 | 0 | python,database,authorization,key-value,nosql | 39,518,000 | 1 | true | 1 | 0 | Both of the solutions you described have some limitations.
You point out yourself that including the owner ID in the key does not solve the problem of shared data. However, this solution may be acceptable if you add another key/value pair containing the IDs of the contents shared with this user (key: userId:shared, value: [id1, id2, id3...]).
Your second proposal, in which you include the list of users who were granted access to a given content, is OK if and only if your application needs to make a query to retrieve the list of users who have access to a particular content. If your need is to list all contents a given user can access, this design will lead to poor performance, as the K/V store will have to scan all records, and this type of database engine usually doesn't allow you to create an index to optimise this kind of request.
From a more general point of view, with NoSQL databases and especially Key/Value stores, the model has to be defined according to the requests to be made by the application. It may lead you to duplicate some information. The application has the responsibility of maintaining the consistency of the data.
For example, if you need to get all contents for a given user, whether this user is the owner of the content or these contents were shared with him, I suggest you create a key for the user, containing the list of content IDs for that user, as I already said. But if your app also needs to get the list of users allowed to access a given content, you should add their IDs in a field of this content. This would result in something like:
key: contentID, value: { ..., [userId1, userID2...]}
When you remove the access to a given content for a user, your app (and not the datastore) has to remove the userId from the content value, and the contentId from the list of contents for this user.
This design may require your app to make multiple requests: for example, one to get the list of userIDs allowed to access a given content, and one or more to get these user profiles. However, this should not really be a problem as K/V stores usually have very high performance.
I want to use a key value database for storing JSON data owned/shared by different users of the web application (not users of the database). Each user should only be able to access the records they own or share.
In a relational database, I would add a column Owner to the record table, or manage shared ownerships in a separate table, and check access on the application side (Python). For key value stores, two approaches come to mind.
User ID as part of the key
What if I use keys like USERID_RECORDID and then write code to check the USERID before accessing the record? Is that a good idea? It wouldn't work with records that are shared between users.
User ID as part of the value
I could store one or more USERIDs in the value data and check if the data contains the ID of the user trying to access the record. Performance is probably slower than having the user ID as part of the key, but shared ownerships are possible.
What are typical patterns to do what I am trying to do? | How do you control user access to records in a key-value database? | 1.2 | 1 | 0 | 171 |
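A minimal sketch of the duplicated-index design described above, using a plain dict to stand in for the K/V store; as the answer stresses, the application keeps both directions consistent:

```python
store = {}

def share(content_id, user_id):
    store.setdefault("content:%s:users" % content_id, set()).add(user_id)
    store.setdefault("user:%s:contents" % user_id, set()).add(content_id)

def unshare(content_id, user_id):
    store.get("content:%s:users" % content_id, set()).discard(user_id)
    store.get("user:%s:contents" % user_id, set()).discard(content_id)
```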
39,431,684 | 2016-09-11T00:05:00.000 | 0 | 0 | 1 | 1 | python,pybrain | 39,431,699 | 1 | false | 0 | 0 | The problem was resolved by replacing pybrain.pybrain with pybrain. | 1 | 1 | 0 | I'm getting this error:
ImportError: No module named pybrain.structure
When executing:
from pybrain.pybrain.structure import FeedForwardNetwork
From the pybrain tutorial.
I installed pybrain running:
sudo python setup.py install | Install pybrain in ubuntu 16.04 "ImportError: No module named pybrain.structure" | 0 | 0 | 0 | 253 |
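The corrected import the answer implies:

```python
from pybrain.structure import FeedForwardNetwork
```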
39,433,108 | 2016-09-11T05:16:00.000 | 1 | 0 | 1 | 0 | python,scipy | 39,433,909 | 1 | true | 0 | 0 | One simple way is to use the CubicSpline class instead. Then it's CubicSpline(x, y).antiderivative().solve(0.05*M) or thereabouts. | 1 | 0 | 1 | I have arrays t_array and dMdt_array of x and y points. Let's call M = trapz(dMdt_array, t_array). I want to find at what value of t the integral of dM/dt vs t is equal to a certain value -- say 0.05*M. In python, is there a nice way to do this?
I was thinking something like F = interp1d(t_array, dMdt_array). Then some kind of root find for where the integral of F is equal to 0.05*M. Can I do this in python? | Find when integral of an interpolated function is equal to a specific value (python) | 1.2 | 0 | 0 | 113 |
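A sketch of the suggested approach; the data here is made up, and solve returns every t at which the antiderivative of the interpolant reaches the target value:

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_array = np.linspace(0.0, 10.0, 50)      # placeholder data
dMdt_array = np.exp(-t_array)
M = np.trapz(dMdt_array, t_array)

F = CubicSpline(t_array, dMdt_array).antiderivative()  # F(t_array[0]) == 0
roots = F.solve(0.05 * M, extrapolate=False)           # t where integral = 5% of M
```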
39,433,457 | 2016-09-11T06:29:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,exception-handling,smartsheet-api,smartsheet-api-1.1 | 39,480,478 | 1 | false | 0 | 0 | Knowing what a successful response will look like you could try checking for the error response. For example, running a get_row with an invalid rowId will result in this error:
{"requestResponse": null, "result": {"code": 1006, "name": "NotFoundError", "recommendation": "Do not retry without fixing the problem. Hint: Verify that specified URI is correct. If the URI contains an object ID, verify that the object ID is correct and that the requester has access to the corresponding object in Smartsheet.", "shouldRetry": false, "message": "Not Found", "statusCode": 404}}
Seeing requestResponse being null, you can check the result object to know what the code is to look up in the Smartsheet API docs. Also, there is a recommendation parameter that gives next steps. | 1 | 0 | 0 | I am trying to write a try/except block for the Smartsheet API using the Python SDK, especially in cases where the API response to a call returns an error object rather than the usual index result object. Could someone explain what kind of exception I would be catching? I am not sure if I would have to create custom exceptions of my own or whether there is some way to capture exceptions. The API documentation talks about the error messages, not handling them. It would be great if someone could share some simple examples around the same. | Smartsheet SDK exception handling | 0 | 0 | 0 | 153
39,436,020 | 2016-09-11T12:20:00.000 | 1 | 0 | 1 | 0 | python-3.4 | 39,436,077 | 1 | true | 0 | 0 | Use the built-in method list.index
If you want to know where 'c' is:
l = ['a','b','c']
l.index('c') | 1 | 0 | 0 | Im trying to find out the number of where X is in the list e.g:
if I had a list like ['a','b','c','d'] and I have 'c', how would I find where it is in the list, so that it would print '2' (as that's where it is in the list)?
thanks | How to find in a list what thats number python3 | 1.2 | 0 | 0 | 29 |
39,437,667 | 2016-09-11T15:26:00.000 | 0 | 0 | 1 | 0 | python,sql | 39,438,159 | 1 | true | 0 | 0 | In fact, you have used files instead of a database. To answer the question, let us check the advantages of using a database:
it is faster: a service is awaiting commands and your app sends some commands to it. Database Management Systems have a lot of cool stuff implemented which you will be lacking if you use a single file. True, you can create a service which loads the file into memory and serves commands, but while that seems to be easy, it will be inferior to RDBMS's, since your implementation is highly unlikely to be even close to a match of the optimizations done for RDBMS's over decades, unless you implement an RDBMS, but then you end up with an RDBMS, after all
it is safer: RDBMS's encrypt data and have user-password authentication along with port handling
it is smaller: data is stored in a compressed manner, so if you end up with a lot of data, data size will get critical much later
it is developed: you will always have possibilities to upgrade your system with juices implemented recently and to keep up the pace with science's current development
you can use ORM's and other stuff built to ease the pain of data handling
it supports concurrent access: imagine the case when many people are reaching your database at the same time. Instead of you implementing very complicated stuff, you can get this feature instantly
All in all, you will either use a database management system (not necessarily relational), implement your own or work with textual files. Your textual file will quickly be overwhelmed if your application is successful and you will need a database management system. If you write your own, you might have a success story to tell, but it will come only after many years of hard work. So, if you get successful, then you will need database management system. If you do not get successful, you can use textual files, but the question is: is it worth it?
And finally, your textual file is a database, but you are managing it by your custom and probably very primitive (no offence, but it is virtually impossible to achieve results when you are racing against the whole world) database management system compared to the ones out there. So, yes, you should learn to use advanced database management systems and should refactor your project to use one. | 1 | 1 | 0 | I used txt files to store data in it and read it any time i need and search in it and append and delete from it
So why should I use a database when I can still use txt files? | using Python and SQL | 1.2 | 1 | 0 | 55
39,437,766 | 2016-09-11T15:36:00.000 | 1 | 0 | 1 | 0 | python,bigdata | 39,437,863 | 1 | true | 0 | 0 | Assuming the list contains the raw byte strings of the image contents, one quick and dirty way to weed out possible duplicates is to compare the lengths of the byte strings. Two pictures with unequal length byte strings cannot be duplicates.
Then, for each group of pictures with equal length byte strings, hash the byte strings and compare those. If the hashes are equal then the pictures are duplicates, otherwise they're not. (For extra speed, don't bother hashing if there are only two byte strings in a group; just compare the strings directly byte-by-byte.) | 1 | 0 | 0 | I've got the question on interview by python recently. The question was:
we have a large list of pictures in Python (as I understood, we simply read their content and then got a list of their contents), [...], and this list will occupy e.g. 1 GB of RAM. What is the best way to compare them (are any of the pictures the same)?
I answered that we can separate this list into several lists and then compare elements in them.
But I got the answer: "it is wrong".
So my question is the same: what is the best way to compare them here?
Currently I think maybe use python's sets and compare lengths of source list and set? | Compare elements in large list of data | 1.2 | 0 | 0 | 153 |
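A sketch of the length-then-hash scheme from the answer above; images stands in for the interview question's list of raw byte strings:

```python
import hashlib
from collections import defaultdict

def duplicate_groups(images):
    by_len = defaultdict(list)
    for i, data in enumerate(images):
        by_len[len(data)].append(i)      # unequal lengths can't be duplicates
    groups = defaultdict(list)
    for candidates in by_len.values():
        if len(candidates) > 1:          # only hash where a clash is possible
            for i in candidates:
                groups[hashlib.sha1(images[i]).hexdigest()].append(i)
    return [g for g in groups.values() if len(g) > 1]
```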
39,438,003 | 2016-09-11T16:01:00.000 | 1 | 0 | 0 | 0 | python,sockets,tcp,ddos,python-sockets | 39,438,366 | 1 | true | 0 | 0 | What you describe are internals of the TCP stack of the operating system. Python just uses this stack via the socket interface. I doubt that any of these settings can be changed specific to the application at all, i.e. these are system wide settings which can only be changed with administrator privileges. | 1 | 1 | 0 | More in detail, would like to know:
what is the default SYN_RECEIVED timer,
how do I change it,
are SYN cookies or SYN caches implemented.
I'm about to create a simple special-purpose publicly accessible server. I must choose between using built-in TCP sockets or raw sockets and re-implementing the TCP handshake if these security mechanisms are not present. | what anti-ddos security systems python use for socket TCP connections? | 1.2 | 0 | 1 | 216
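For reference, on Linux these are kernel settings inspected (and, with administrator rights, changed) via sysctl rather than from Python, in line with the answer above:

```
sysctl net.ipv4.tcp_syncookies        # 1 means SYN cookies are enabled
sysctl net.ipv4.tcp_synack_retries    # bounds how long SYN_RECEIVED lasts
```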
39,438,843 | 2016-09-11T17:36:00.000 | 1 | 0 | 1 | 0 | python,regex | 39,438,883 | 4 | true | 0 | 0 | Use parentheses
Like:
re.findall("(foo)bar","foobar foogy woogy") | 1 | 0 | 0 | I've been reading the documentation but can't find what I'm looking for.
I'm simply trying to match foo inside foobar but can't seem to see how to do it. Any guidance would be helpful! | Python Regex matching word inside words | 1.2 | 0 | 0 | 162 |
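An equivalent sketch using a lookahead instead of a capturing group:

```python
import re

re.findall(r"foo(?=bar)", "foobar foogy woogy")   # -> ['foo']
```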
39,441,109 | 2016-09-11T22:09:00.000 | 1 | 0 | 0 | 0 | python,django,image-processing,imagekit,photologue | 39,457,291 | 2 | true | 1 | 0 | If you use django-photologue, you can define a thumbnail size and specify that the thumbnail should not be generated at upload time - instead, it gets generated the first time the thumbnail is requested for display.
If you have lots of different sized thumbnails for a photo, this trick can help a user upload their photos faster.
Source: I maintain django-photologue. | 1 | 1 | 0 | I am building a web application that allows users to upload images to their accounts - similar to flickr and 500px.
I want to know the best setup for such an application. I'm using Python 3.4 and Django 1.9
I'm currently thinking about the following:
Heroku
AWS S3
Postgres
I'm struggling to find a suitable image processing library. I've looked at ImageKit and Photologue. But I find Photologue to be a little bit heavy for what I want to do.
I'm basically looking for a way to allow users to upload images of a certain size without locking up the Heroku dynos. Any suggestions?
Thanks | The best Python/Django architecture for image heavy web application | 1.2 | 0 | 0 | 452 |
39,441,764 | 2016-09-11T23:53:00.000 | 2 | 0 | 0 | 1 | google-app-engine,google-cloud-datastore,app-engine-ndb,google-app-engine-python | 39,453,571 | 2 | false | 1 | 0 | The reason for the two implementations is that originally, the Datastore (called App Engine Datastore) was only available from inside App Engine (through a private RPC API). On Python, the only way to access this API was through an ORM-like library (NDB). As you can see on the import, it is part of the App Engine API.
Now Google has made the Datastore available outside of App Engine through a RESTful API called the Cloud Datastore API. The gcloud library is a client library that allows access to different REST APIs from Google Cloud, including the Cloud Datastore API. | 2 | 14 | 0 | ndb: (from google.appengine.ext import ndb)
datastore: (from gcloud import datastore)
What's the difference? I've seen both of them used, and hints they both save data to google datastore. Why are there two different implementations? | what's the difference between google.appengine.ext.ndb and gcloud.datastore? | 0.197375 | 0 | 0 | 1,307 |
39,442,327 | 2016-09-12T01:34:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,neural-network,regression | 39,446,832 | 1 | false | 0 | 0 | If I got this right, you are basically trying to implement classification variables into your input, and this is basically done by adding an input variable for each possible class (in your case "group 1" and "group 2") that holds binary values (1 if the sample belongs to the group, 0 if it doesn't). Whether or not you would want to retain the actual coordinates depends on whether you would like your network to process actual spatial data, or simply base its output on the group that the sample belongs to. As I don't have much experience with the particular module you're using, I am unable to provide actual code, but I hope this helps. | 1 | 0 | 0 | I'm trying to predict an output (regression) where multiple groups have spatial (x,y) coordinates. I've been using scikit-learn's neural network packages (MLPClassifier and MLPRegressor), which I know can be trained with spatial data by inputting a 1-D array per observation (ex. the MNIST dataset).
I'm trying to figure out the best way to tell the model that group 1 has this set of spacial coordinates AND group 2 has a different set of spacial coordinates, and that combination yielded a result. Would it make more sense to input a single array where a group 1 location is represented by 1 and group 2 location is represented by -1? Or to create an array for group 1 and group to and append them? Still pretty new to neural nets - hopefully this question makes sense. | Training a neural network with two groups of spacial coordinates per observation? | 0 | 0 | 0 | 244 |
39,446,262 | 2016-09-12T08:34:00.000 | 2 | 0 | 0 | 0 | python-2.7,image-segmentation,binary-image | 39,447,667 | 1 | true | 0 | 0 | To my mind, this is exactly what can be done using scipy.ndimage.measurements.label and scipy.ndimage.measurements.find_objects
You have to specify what "touching" means. If it means edge-sharing, then the default structure of ndimage.measurements.label is what you need, so you can just pass your array as-is. If touching also means corner-sharing, you will find the right structure in the docstring.
find_objects can then yield a list of bounding-box slices for the objects. | 1 | 1 | 1 | Is there an easy way to implement the segmentation of a binary image in Python?
My 2D "images" are numpy arrays whose values are 1.0 and 0.0. I need a list of all objects with the value 1.0: every black pixel is a pixel of an object, and an object may contain many touching pixels with the value 1.0.
I can use numpy and also scipy.
I already tried to iterate over all pixels, creating sets of pixels and adding each new pixel to an existing set (or creating a new one). Unfortunately the implementation was poor, extremely buggy and also very slow.
Hopefully something like this already exists, or there is an easy way to do this?
Thank you very much | Python: Binary image segmentation | 1.2 | 0 | 0 | 1,440 |
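For example, a minimal runnable sketch of the labelling approach described in the answer above (the sample array is made up):
import numpy as np
from scipy import ndimage

img = np.array([[1., 1., 0., 1.],
                [0., 1., 0., 1.],
                [0., 0., 0., 0.]])

labeled, n = ndimage.label(img)                        # default: edge-sharing connectivity
objects = [np.argwhere(labeled == k) for k in range(1, n + 1)]
# objects[0] is an array of (row, col) coordinates of the first object
slices = ndimage.find_objects(labeled)                 # bounding-box slices, one per object
# for corner-sharing connectivity, pass an all-ones structure instead:
labeled8, n8 = ndimage.label(img, structure=np.ones((3, 3)))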
39,447,513 | 2016-09-12T09:44:00.000 | 0 | 1 | 0 | 0 | django,boost-python | 39,454,692 | 1 | false | 1 | 0 | Put them in $PROJECT_HOME/lib, just like a normal package. | 1 | 0 | 0 | I am working on a Python/Django project which calls a C++ shared library. I am using the boost_python C++ library.
It works fine: I can call C++ methods from the Python interpreter. I can also call these methods from my Django project. But I am wondering: where is the best folder for my C++ shared library?
I actually put this binary shared library in the Django app folder (the same folder as views.py). It works, but I think this is ugly... Is there a specific folder for shared libraries in the Django directory structure?
Thanks | Where is the Folder for shared library in a django project | 0 | 0 | 0 | 628 |
39,448,884 | 2016-09-12T11:03:00.000 | 2 | 0 | 0 | 1 | python,stream,subprocess,stdout,stderr | 39,449,018 | 1 | true | 0 | 0 | There is no other console output than stdout and stderr (assuming that samtools does not write to the terminal directly via a tty device). So, if the output is not captured with the subprocesses stdout, it must have been written to stderr, which can be captured as well using Popen() with stderr=subprocess.PIPE and inspecting the stderr attribute of the resulting process object. | 1 | 0 | 0 | I am working on incorporating a program (samtools) into a pipeline. FYI samtools is a program used to manipulate DNA sequence alignments that are in a SAM format. It takes input and generates an output file via stdin and stdout, so it is quite easily controlled via pythons subprocess.Popen().
When it runs, it also outputs short messages to the console - not using stdout, obviously - and I wonder if it would be possible to catch these as well - potentially by getting an OS-generated handler list?
I guess my question in general is whether it is possible to catch a program's console output if it is not coming from stdout? Thank you. | Is it possible to catch data streams other than stdin, stdout and stderr in a Popen call? | 1.2 | 0 | 0 | 71
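As a minimal sketch of the accepted answer's suggestion (the samtools arguments here are only illustrative):
import subprocess

proc = subprocess.Popen(['samtools', 'view', '-h', 'input.bam'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()   # err holds the short "console" messages
print(err)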
39,449,728 | 2016-09-12T11:53:00.000 | 0 | 0 | 0 | 0 | python-2.7,opencv,yuv | 39,451,022 | 1 | true | 0 | 0 | You should take care of how YUV is arranged in memory. There are various formats involved. The most common being YUV NV12 and NV21. In general, data is stored as unsigned bytes. While the range of Y is from 0~255, it is -128~127 for U and V. As both U and V approach 0, you have less saturation and approach grayscale. In the case of both NV12 and NV21, it is cols * rows of Y followed by 0.5 * cols * rows of U and V. Both NV12 and NV21 is a semi-planar format, so U and V are interleaved. The former starts with U and the latter with V. In the case of a planar format, there is no interleaving involved. | 1 | 0 | 1 | So I've heard that the YUV and YPbPr colour system is essentially the same.
When I convert BGR to YUV, presumably with the COLOR_BGR2YUV OpenCV flag, what are the ranges of the values returned for Y, U and V? Because on Colorizer.org the values seem to be decimals, but I haven't seen OpenCV spit out any decimal places before.
So basically what I'm asking (in a very general, but hopefully easily answerable way):
What does YUV look like in an array? (ranges and such, comparable to Colorizer.org) | Python 2.7 opencv Yuv/ YPbPr | 1.2 | 0 | 0 | 220
39,457,587 | 2016-09-12T19:38:00.000 | 5 | 0 | 1 | 0 | python,flask,jinja2 | 39,458,104 | 2 | true | 1 | 0 | I'll answer my own question; maybe it helps someone who has the same problem.
This works: {{thestring.encode('string_escape')}} | 1 | 4 | 0 | I have the following string in Python: thestring = "123\n456"
In my Jinja2 template, I use {{thestring}} and the output is:
123
456
The only way I can get Jinja2 to print the exact representation 123\n456 (including the \n) is by escaping thestring = "123\\n456".
Is there any other way this can be done directly in the template? | How to print the \n character in Jinja2 | 1.2 | 0 | 0 | 12,017 |
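A minimal sketch of the accepted approach, assuming Python 2 (the string_escape codec was removed in Python 3):
from jinja2 import Template

t = Template("{{ thestring.encode('string_escape') }}")
print(t.render(thestring="123\n456"))   # prints: 123\n456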
39,462,847 | 2016-09-13T05:38:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,rabbitmq,celery,celery-task | 56,372,222 | 2 | true | 0 | 0 | The issue arose because I did not understand the nature of the AMQP protocol or RabbitMQ.
When a celery worker starts, it opens a channel to RabbitMQ. Upon any network change this channel tries to reconnect, but the socket previously opened for the channel is registered against the client's old public IP address. The negotiations between the celery worker (client) and RabbitMQ (server) therefore cannot resume, because the client's address has changed; a new channel needs to be established whenever the client's public IP address changes.
The answer by @qreOct above misses the point, either because I was unable to express the question properly or because of a difference in our perceptions. Still, thanks a lot for taking the time!
RabbitMQ broker on a remote server
Producer that pushes tasks on another remote server
Workers at 3 machines deployed at my workplace
Now, when I started everything, the whole process ran as smoothly as in my tests and everything processed just great!
The problem
Unfortunately, I forgot to consult my network guy about a fixed IP address, and at our location we do not have a fixed IP address from our ISP. So upon a network disconnect my celery workers freeze and do nothing - even once the network is back up - because the IP address changed, the connection to the broker is not being recreated, and the worker is not retrying the connection. I have tried settings like BROKER_CONNECTION_MAX_RETRIES = 0 and BROKER_HEARTBEAT = 10, but I had no option left but to post here and look for experts on this matter!
PS: I cannot manually kill -9 and restart the workers every time the network changes the IP address | Celery worker not reconnecting on network change/IP Change | 1.2 | 0 | 0 | 948
39,462,958 | 2016-09-13T05:48:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,ubuntu,makefile,server | 61,891,341 | 2 | false | 1 | 0 | I can report that:
I encountered the same problem with a python3 / pip3 installation (which is recommended now).
The problem was apparently with the permissions on Python. I simply had to run pelican --listen with superuser rights to make the local server work.
Also, be careful to reinstall with sudo any packages you previously installed without superuser rights, in order to have a fully working installation under sudo. | 2 | 0 | 0 | I was trying to make a blog with pelican, and at the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks!
Python info:
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
Error info:
127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 -
WARNING:root:Unable to find / file.
WARNING:root:Unable to find /.html file.
127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 51036)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe | pelican make serve error with broken pipe? | 0 | 0 | 0 | 151
39,462,958 | 2016-09-13T05:48:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,ubuntu,makefile,server | 39,462,999 | 2 | false | 1 | 0 | Well, I installed pip on Ubuntu and then it all worked.
Not sure if it is a version thing. | 2 | 0 | 0 | I was trying to make a blog with pelican, and at the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks!
Python info:
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
Error info:
127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 -
WARNING:root:Unable to find / file.
WARNING:root:Unable to find /.html file.
127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 51036)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe | pelican make serve error with broken pipe? | 0 | 0 | 0 | 151
39,464,748 | 2016-09-13T07:47:00.000 | 2 | 0 | 0 | 0 | python,flask,redis | 39,465,104 | 2 | false | 1 | 0 | This is about performance and scale. To get those 2 buzzwords buzzing you'll in fact need persistent connections.
Potential race conditions will be no different than with a reconnect on every request, so that shouldn't be a problem. Whether any arise depends on how you're using redis, but if it's just caching there's not much room for error.
I understand the desired statelessness of an API from the client's POV, but I'm not so sure what you mean about the server side.
I'd suggest you put them in the application context, not the sessions (those could become too numerous); the app context gives you the optimal one connection per process (created immediately at startup). Scaling this way becomes easy-peasy: you'll never have to worry about hitting the max connection count on the redis box (and the less multiplexing the better).
My argument against the second option is that I should really try to keep the API as stateless as possible, and I also don't know if keeping something persistent across requests might cause thread race conditions or other side effects.
However, if I want to persist a connection, should it be saved on the session or on the application context? | Should a connection to Redis cluster be made on each Flask request? | 0.197375 | 0 | 0 | 1,131 |
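A rough sketch of the one-connection-per-process idea (host/port are placeholders; redis-py's connection pool also makes the client safe to share across requests):
import redis
from flask import Flask

app = Flask(__name__)
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)   # created once at startup
r = redis.StrictRedis(connection_pool=pool)

@app.route('/cached/<key>')
def cached(key):
    return r.get(key) or 'miss'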
39,466,671 | 2016-09-13T09:34:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 39,481,787 | 2 | false | 0 | 0 | I got the answer through the scikit-learn mailing list so here it is:
'There is no way to use the "efficient" EstimatorCV objects with pipelines.
This is an API bug and there's an open issue and maybe even a PR for that.'
Many thanks to Andreas Mueller for the answer. | 1 | 1 | 1 | I would like to use scikit-learn LassoCV/RidgeCV while applying a 'StandardScaler' on each fold training set. I do not want to apply the scaler before the cross-validation to avoid leakage but I cannot figure out how I am supposed to do that with LassoCV/RidgeCV.
Is there a way to do this? Or should I create a pipeline with Lasso/Ridge and 'manually' search for the hyperparameters (using GridSearchCV, for instance)?
Many thanks. | Use of Scaler with LassoCV, RidgeCV | 0.197375 | 0 | 0 | 726 |
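A minimal sketch of the Pipeline + GridSearchCV route mentioned in the question (the alpha grid is arbitrary; the module path assumes scikit-learn 0.18+):
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([('scale', StandardScaler()), ('lasso', Lasso())])
grid = GridSearchCV(pipe, {'lasso__alpha': np.logspace(-4, 1, 20)}, cv=5)
# grid.fit(X, y): the scaler is then fit inside each CV fold, avoiding leakage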
39,467,399 | 2016-09-13T10:08:00.000 | 0 | 0 | 0 | 1 | python,django,rabbitmq,celery | 40,389,543 | 1 | false | 1 | 0 | CELERY_QUEUES is used only for "internal" celery communication with its workers, not with your custom queues in RabbitMQ independent of celery.
What are you trying to accomplish with two exchanges with the same queue? | 1 | 0 | 0 | I have a RabbitMQ topology(set up independent of celery) with a queue that is bound to two exchanges with the same routing key. Now, I want to set up a celery instance to post to the exchanges and another one to consume from the queue.
I have the following questions in the context of both the producer and the consumer:
Is the CELERY_QUEUES setting necessary in the first place if I specify only the exchange name and routing key in apply_async and the queue name while starting up the consumer? From my understanding of AMQP, this should be enough...
If it is necessary, I can only set one exchange per queue there. Does this mean that the other binding will not work(producer can't post to the other exchange, consumer can't receive messages routed through the other exchange)? Or, can I post and receive messages from the other exchange regardless of the binding in CELERY_QUEUES? | Celery - Single AMQP queue bound to multiple exchanges | 0 | 0 | 0 | 177 |
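For reference, a hedged sketch of declaring one queue bound to two exchanges through kombu's binding helper (all names are invented; check that your kombu version supports this form):
from kombu import Exchange, Queue, binding

ex_a = Exchange('exchange_a', type='direct')
ex_b = Exchange('exchange_b', type='direct')

CELERY_QUEUES = (
    Queue('my_queue', [binding(ex_a, routing_key='my_key'),
                       binding(ex_b, routing_key='my_key')]),
)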
39,472,379 | 2016-09-13T14:21:00.000 | 2 | 0 | 1 | 0 | python,macos,python-2.7,pillow | 39,472,408 | 1 | true | 0 | 0 | Reinstall package properly using python -m pip install package_name
Then import using from PIL import Image. | 1 | 1 | 0 | I have installed the Pillow package from PIP using pip install Pillow and Pillow 3.3.1 got installed. I am working with Python 2.7 on Mac OS 10.11 (El Capitan).
When I try to import the Image module, I run into ImportError: No module named Pillow. I tried to import the following:
import Pillow
import Image
import Pillow.Image
All return the same ImportError.
What is missing? | Import Error: Pillow 3.3.1 & Python 2.7 & El Capitan OS | 1.2 | 0 | 0 | 233 |
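Once the reinstall succeeds, a quick sanity check could look like this (any image file you have on disk will do):
from PIL import Image

img = Image.open('example.png')
print(img.size)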
39,473,445 | 2016-09-13T15:11:00.000 | 2 | 0 | 0 | 1 | python,python-2.7,python-exec | 39,473,520 | 2 | false | 0 | 0 | Execute the code as a user that only owns that specific directory and has no permissions anywhere else?
However, if you do not completely trust the source of the code, you should simply not be using exec under any circumstances. Remember, say you came up with a Python solution... the exec'd code could literally undo whatever restrictions you put on it before doing its nefarious deeds. If you tell us the problem you're trying to solve, we can probably come up with a better idea. | 2 | 0 | 0 | I have a python script which executes a string of code with the exec function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this?
Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How?
So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders, and no user should have access to others' folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that? | Restrict python exec access to one directory | 0.197375 | 0 | 0 | 1,159
39,473,445 | 2016-09-13T15:11:00.000 | 2 | 0 | 0 | 1 | python,python-2.7,python-exec | 39,474,240 | 2 | false | 0 | 0 | The question boils down to: How can I safely execute the code I don't trust.
You can't.
Either you know what the code does or you don't execute it.
You can have an isolated environment for your process, for example with Docker, but its intended use cases are still a long way from safely executing untrusted code. | 2 | 0 | 0 | I have a python script which executes a string of code with the exec function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this?
Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How?
So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders, and no user should have access to others' folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that? | Restrict python exec access to one directory | 0.197375 | 0 | 0 | 1,159
39,473,867 | 2016-09-13T15:32:00.000 | 1 | 0 | 0 | 0 | python,git,markdown,blogs,pelican | 39,476,013 | 1 | false | 1 | 0 | From your question, what I understood is that you are having a problem publishing a Pelican site on GitHub. To my knowledge, below is the way to publish it. I don't know why you got the 404 error, though.
Step 1:
First you need to create a repository on GitHub. To create it, follow these steps:
go to github.com -- sign in -- select GitHub Pages (Project Pages) -- click '+' to create a new repository -- give it a repository name (e.g. Blog, Order System) -- check the 'Public' radio button -- check the 'Initialize with README' checkbox -- click Create Repository.
Note: make sure you use a .gitignore file before committing files.
Step 2:
Once the repository is created, you will be on the master branch.
Click on master -- create a gh-pages branch -- in the branch settings, set gh-pages as the default branch -- click on 'Code' in the menu bar and delete the master branch.
Now you need to get the README file onto your local machine.
Copy the README file from the gh-pages branch -- go to the directory where all the files of your project are stored on your machine -- open a command prompt -- cd into the directory (e.g. here we have order systems) -- then:
order systems> git add .   (press Enter)
order systems> git commit -a -m "initialize"   (press Enter)
order systems> git push origin gh-pages   (press Enter)
It will ask you to enter your git credentials. Enter those and sign in.
Go to the repository settings; you can see the pages are published.
I hope this is helpful for you. | 1 | 1 | 0 | Sorry if I didn't express the question correctly - I am trying to set up a blog on Git using pelican, but I am new to both.
So I followed some websites and tried to release one page; however, when I ran make serve locally, the blog looked OK at localhost:8000.
But after pushing to Git, the blog's template disappears and the webpage looks pretty ugly. Also, if I click on the "Read more" hyperlink, the page navigates to a 404 error.
Did I miss anything here? Many thanks if anyone could shed some light on this! | Git Blog - the pelican template disappears in the new deployed blog but exists in localhost | 0.197375 | 0 | 0 | 55
39,474,896 | 2016-09-13T16:29:00.000 | 1 | 0 | 0 | 0 | python,mysql,django | 39,475,119 | 1 | false | 1 | 0 | You can't install mysql through pip; it's a database, not a Python library (and it's currently in version 5.7). You need to install the binary package for your operating system. | 1 | 0 | 0 | I'm trying to run a server in python/django and I'm getting the following error:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)").
I have MySQL-python installed (1.2.5 version) and mysql installed (0.0.1), both via pip, so I'm not sure why I can't connect to the MySQL server. Does anyone know why? Thanks! | OperationalError: Can't connect to local MySQL server through socket | 0.197375 | 1 | 0 | 295 |
39,477,023 | 2016-09-13T18:47:00.000 | 1 | 0 | 1 | 1 | python,macos,python-2.7 | 39,477,667 | 4 | false | 0 | 0 | You are mixing 32-bit and 64-bit versions of Python.
Probably you installed a 64-bit Python on a 32-bit machine.
Go ahead and uninstall Python, then reinstall it with the right architecture. | 2 | 1 | 1 | I am getting an architecture error while importing any package. I understand my Python build might not be compatible, but I can't figure it out.
Current Python Version - 2.7.10
MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture
MyMachine:desktop ***********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/***********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture | Error "mach-o, but wrong architecture" after installing anaconda on mac | 0.049958 | 0 | 0 | 7,799 |
39,477,023 | 2016-09-13T18:47:00.000 | 3 | 0 | 1 | 1 | python,macos,python-2.7 | 70,210,511 | 4 | false | 0 | 0 | The steps below resolved this problem for me.
Quit the terminal.
Go to Finder => Apps
Right Click on Terminal
Get Info
Check the checkbox Open using Rosetta
Now, open the terminal and try again.
PS: Rosetta allows Macs with the M1 architecture to run apps built for Intel Macs. Most of the time, chip compatibility is the reason behind such architecture problems, so 'Open using Rosetta' makes the terminal use Rosetta by default for these applications. | 2 | 1 | 1 | I am getting an architecture error while importing any package. I understand my Python build might not be compatible, but I can't figure it out.
Current Python Version - 2.7.10
MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture
MyMachine:desktop ***********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/***********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture | Error "mach-o, but wrong architecture" after installing anaconda on mac | 0.148885 | 0 | 0 | 7,799 |
39,477,312 | 2016-09-13T19:08:00.000 | 0 | 0 | 0 | 1 | python,bash,shell,scripting | 39,477,654 | 2 | false | 0 | 0 | Try this code. I have also included a fake-file generator for testing purposes.
Precautionary step:
run rm only after you have checked that everything is OK.
Possible improvement:
move/rename the files instead of copying and then removing them.
# generate fake test files: SUBJA_1.txt ... SUBJC_5.txt
for aaa in {1..5} ; do touch SUBJA_${aaa}.txt SUBJB_${aaa}.txt SUBJC_${aaa}.txt; done
# one folder per subject: copy that subject's files in, then remove the originals
for MYSUBJ in SUBJA SUBJB SUBJC
do
    mkdir "$MYSUBJ"
    cp "$MYSUBJ"*.txt "$MYSUBJ"/
    rm "$MYSUBJ"*.txt
done | 1 | 0 | 0 | I would like to group similar files into folders in the same directory. To give a better idea, I am working on image datasets where I have images of several subjects with varying filenames. However, I will have 10-15 images per subject in the dataset too. So let's say subject A will have 10 images named A_1.png, A_2.png, A_3.png and so on. Similarly, we have n subjects. I have to group the subjects into folders holding all the images corresponding to each subject. I tried using Python but was not able to get it right. Can we do it using bash or shell scripts? If yes, please advise. | Grouping similar files in folders | 0 | 0 | 0 | 220
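If you would rather stay in Python, here is a sketch under the assumption that the subject name is everything before the last underscore in the filename:
import glob, os, shutil

for path in glob.glob('*.png'):
    name = os.path.basename(path)
    subject = name.rsplit('_', 1)[0]          # 'A' from 'A_1.png'
    if not os.path.isdir(subject):
        os.makedirs(subject)
    shutil.move(path, os.path.join(subject, name))   # move instead of copy + remove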
39,478,845 | 2016-09-13T20:56:00.000 | -1 | 0 | 0 | 0 | python,django,python-2.7 | 39,481,262 | 3 | false | 1 | 0 | You could try:
Delete everything in the django_migrations table.
Delete all files in the migrations folder and then run python manage.py makemigrations followed by python manage.py migrate, as you said.
If this doesn't work, try:
Delete everything in the django_migrations table.
Delete all files in the migrations folder, use your old models.py, and run python manage.py makemigrations followed by python manage.py migrate.
Add the new model, then run python manage.py makemigrations followed by python manage.py migrate again. | 2 | 1 | 0 | I added some table models in models.py when first running the app and then ran python manage.py makemigrations followed by python manage.py migrate. This worked well, but after adding two more tables it doesn't work anymore.
It created migrations for the changes made but when I run python manage.py migrate nothing happens. My new tables are not added to the database.
Things I have done:
Deleted all files in the migrations folder and then ran python manage.py makemigrations followed by python manage.py migrate, but the new tables are still not getting added to the database, even though the new table models show up in the migration that was created, i.e. 0001_initial.py.
Deleted the database followed by the steps in 1 above but it still didn't solve my problem. Only the first set of tables get created.
Tried python manage.py makemigrations app_name but it still didn't help. | Django Makemigrations not working in version 1.10 after adding new table | -0.066568 | 0 | 0 | 1,011 |
39,478,845 | 2016-09-13T20:56:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7 | 39,482,630 | 3 | false | 1 | 0 | Can you post your models?
Have you edited manage.py in any way?
Try deleting the migrations and the database again after ensuring that your models are valid, then run manage.py makemigrations appname and then manage.py migrate. | 2 | 1 | 0 | I added some table models in models.py for the first time running the app and then ran python manage.py makemigrations followed by python manage.py migrate. This works well but after adding two more tables it doesn't work again.
It created migrations for the changes made but when I run python manage.py migrate nothing happens. My new tables are not added to the database.
Things I have done:
Deleted all files in the migrations folder and then ran python manage.py makemigrations followed by python manage.py migrate, but the new tables are still not getting added to the database, even though the new table models show up in the migration that was created, i.e. 0001_initial.py.
Deleted the database followed by the steps in 1 above but it still didn't solve my problem. Only the first set of tables get created.
Tried python manage.py makemigrations app_name but it still didn't help. | Django Makemigrations not working in version 1.10 after adding new table | 0 | 0 | 0 | 1,011 |
39,480,992 | 2016-09-14T00:54:00.000 | 0 | 1 | 0 | 0 | python,flask,raspberry-pi,virtualenv,wiringpi | 39,521,293 | 1 | false | 1 | 0 | Turns out I just had to make sure that "root" had the proper libraries installed too. Root and User have different directories for their Python binaries. | 1 | 0 | 0 | I am running my application in a virtualenv using Python3.4.
WiringPi requires sudo privilege to access the hardware pins. Flask, on the other hand, resides in my virtualEnv folder, so I can't access it using sudo flask.
I've tried making it run on startup by placing some commands in /etc/rc.local so that it can have root access automatically. It only tells me that it can't find basic Python library modules (like re).
My RPI2 is running Raspbian. For the time being I am running it using flask run --host=0.0.0.0, which I know I am not supposed to do, but I'll change that later. | WiringPi and Flask Sudo Conflict | 0 | 0 | 0 | 105
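One thing worth noting for others: you can usually give a virtualenv app root access without losing the venv's packages by calling the venv's interpreter directly (the path here is illustrative):
sudo /home/pi/venv/bin/python app.py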
39,482,074 | 2016-09-14T03:28:00.000 | 0 | 0 | 0 | 0 | python-2.7,tkinter,keypress | 39,488,864 | 1 | true | 0 | 1 | No, you are not able to use Tkinter's bind method for a CLI program. | 1 | 0 | 0 | Let's say I need to make use of keypress events. The easiest solution I have so far is the Tkinter module (testing for cross platforms after the code is done).
However, I suck at programming GUI so I am starting with a CLI first. Am I able to use Tkinter.bind() in CLI mode?
I am on Python 2.7 | Can Tkinter keypress modules be used when in CLI | 1.2 | 0 | 0 | 35 |
39,482,504 | 2016-09-14T04:30:00.000 | 1 | 0 | 0 | 0 | python,pyinstaller,cx-oracle | 39,503,038 | 1 | true | 0 | 0 | One thing that you may be running into is the fact that if you used the instant client RPMs when you built cx_Oracle an RPATH would have been burned into the shared library. You can examine its contents and change it using the chrpath command. You can use the special path $ORIGIN in the modified RPATH to specify a path relative to the shared library.
If an RPATH isn't the culprit, then you'll want to examine the output from the ldd command and see where it is looking and then adjust things to make it behave itself! | 1 | 0 | 0 | I wrote a python application that uses cx_Oracle and then generates a pyinstaller bundle (folder/single executable). I should note it is on 64 bit linux. I have a custom spec file that includes the Oracle client libraries so everything that is needed is in the bundle.
When I run the bundled executable on a freshly installed CentOS 7.1 VM, (no Oracle software installed), the program connects to the database successfully and runs without error. However, when I install the bundled executable on another system that contains RHEL 7.2, and I try to run it, I get
Unable to acquire Oracle environment handle.
My understanding is this is due to an Oracle client installation that has some sort of conflict. I tried unsetting ORACLE_HOME on the machine giving me errors. It's almost as though the program is looking for the Oracle client libraries in a location other than in the location where I bundled the client files.
It seems like it should work on both machines or neither machine. I guess I'm not clear on how the Python application/cx_Oracle finds the Oracle client libraries. Again, it seems to have found them fine on a machine with a fresh operating system installation. Any ideas on why this is happening? | Why does pyinstaller generated cx_oracle application work on fresh CentOS machine but not on one with Oracle client installed? | 1.2 | 1 | 0 | 306 |
39,484,863 | 2016-09-14T07:33:00.000 | 13 | 0 | 1 | 1 | python,python-2.7,openstack,setuptools | 46,090,408 | 3 | false | 0 | 0 | setup.py is an integral part of a Python package: it holds the information about the files that make up the package, including the dependencies required to install and run it, entry points, the license, etc.
setup.cfg, on the other hand, is more about settings for plug-ins and the type of distribution you wish to create (bdist/sdist, and whether a wheel is universal or Python-version specific). It can also be used to configure some of the metadata otherwise passed to setup.py. | 1 | 81 | 0 | Need to know what's the difference between setup.py and setup.cfg. Both are used prominently in openstack projects | What's the difference between setup.py and setup.cfg in python projects | 1 | 0 | 0 | 24,854
39,491,258 | 2016-09-14T13:06:00.000 | 0 | 0 | 0 | 0 | python,matplotlib | 46,030,298 | 1 | false | 0 | 0 | Without more detail on your specific problem, it's hard to guess what is the best way to represent your data. I am going to give an example, hopefully it is relevant.
Suppose we are collecting height and weight of a group of people. Maybe the index of the person is your first dimension, and the height and weight depends on who it is. Then one way to represent this data is use height and weight as the x and y axes, and plot each person as a dot in that two dimensional space.
In this example, the person index doesn't really have much meaning, thus no color is needed. | 1 | 0 | 1 | I am stuck with python and matplotlib's imshow(). The aim is to show a two-dimensional color map which represents three dimensions.
My x-axis is represented by an array 'TG' (93 entries). My y-axis is a set of arrays dependent on my 'TG'; to be precise, we have 93 different arrays of length 340. My z-axis is also a set of arrays dependent on my 'TG', equally sized as y (93x340).
Basically, what I have is a set of two-dimensional measurements which I want to plot in a color dependent on a third array. Is there a clever way to do that? I tried to find out on my own first, but all I found addresses the most common case of just a z-plane (a two-dimensional plot). So I have two matrices of the order of (93x340) and one array (93). Do you have any helpful advice? | Matplotlib imshow() | 0 | 0 | 0 | 343
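As a small sketch of the scatter idea from the answer (random data stands in for the real arrays; the 93x340 matrices could be flattened with .ravel() first):
import numpy as np
import matplotlib.pyplot as plt

x = np.random.rand(100)
y = np.random.rand(100)
z = np.random.rand(100)          # third dimension, encoded as color

plt.scatter(x, y, c=z)
plt.colorbar(label='z value')
plt.show()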
39,492,507 | 2016-09-14T14:05:00.000 | 2 | 0 | 0 | 1 | python,twisted | 39,492,575 | 1 | true | 0 | 0 | Clients do not have to be written w/ twisted (they don't even have to be written in Python); they just have to use a protocol that your server supports. | 1 | 1 | 0 | I'm new to twisted. I was wondering if I can use multiple sync clients to connect to a twisted server? Or I have to make the client twisted as well?
Thanks in advance. | how can sync clients connect to twisted server | 1.2 | 0 | 0 | 89 |
39,493,732 | 2016-09-14T14:59:00.000 | 3 | 0 | 1 | 0 | python,pandas,floating-point | 39,493,833 | 2 | true | 0 | 0 | The same string representation will become the same float representation when put through the same parse routine. The float inaccuracy issue occurs either when mathematical operations are performed on the values or when high-precision representations are used, but equality on low-precision values is no reason to worry. | 1 | 0 | 1 | I know using == for float is generally not safe. But does it work for the below scenario?
Read from csv file A.csv, save first half of the data to csv file B.csv without doing anything.
Read from both A.csv and B.csv. Use == to check if data match everywhere in the first half.
These are all done with Pandas. The columns in A.csv have types datetime, string, and float. Obviously == works for datetime and string, so if == works for float as well in this case, it saves a lot of work.
It seems to be working for all my tests, but can I assume it will work all the time? | Python how does == work for float/double? | 1.2 | 0 | 0 | 717 |
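A quick empirical check on your own data would be a sketch like this (the file names are assumed from the question; if it ever prints False, that itself answers the question for your data):
import pandas as pd

a = pd.read_csv('A.csv')
half = a.head(len(a) // 2)
half.to_csv('B.csv', index=False)
b = pd.read_csv('B.csv')
print((half.reset_index(drop=True) == b).all().all())   # True if every cell matches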
39,495,407 | 2016-09-14T16:26:00.000 | 0 | 0 | 1 | 0 | python,input | 39,495,935 | 1 | true | 0 | 0 | No. The input() function takes data from standard input stream and returns it somewhere in your code. Then the contents of standard input stream is forgotten. | 1 | 0 | 0 | As I understand it, input() reads a new line every time it's called, but is there a way for me to make it read, say, the third line of input and then read the second line of input without first storing the second line? | Python - How to choose line in console to read from | 1.2 | 0 | 0 | 73 |
39,497,199 | 2016-09-14T18:23:00.000 | -3 | 0 | 0 | 0 | python,python-3.x,parsing,ssh,ip-address | 39,500,132 | 2 | false | 0 | 0 | use
not set(p).isdisjoint(set("0123456789$,")), where p is the SSH address string. | 1 | 2 | 0 | A Python 3 function receives an SSH address like [email protected]:/random/file/path. I want to access this file with the paramiko lib, which needs the username, IP address, and file path separately.
How can I split this address into these 3 parts, knowing that the input will sometimes omit the username? | How to split an SSH address + path? | -0.291313 | 0 | 1 | 247
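For what it's worth, one hedged way to do the split (the example address is made up; the user part is treated as optional):
import re

def split_ssh(addr):
    m = re.match(r'^(?:(?P<user>[^@]+)@)?(?P<host>[^:]+):(?P<path>.*)$', addr)
    return m.group('user'), m.group('host'), m.group('path')

print(split_ssh('alice@203.0.113.7:/random/file/path'))
print(split_ssh('203.0.113.7:/random/file/path'))   # user comes back as None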
39,498,602 | 2016-09-14T19:52:00.000 | 1 | 0 | 1 | 0 | python,pycharm,pypi | 69,876,229 | 11 | false | 0 | 0 | File -> Settings -> Python Interpreter -> Add package -> Manage repositories -> Set only "https://pypi.python.org/simple", as told by @Jee Mok, works fine for me. | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 0.01818 | 0 | 0 | 51,529 |
39,498,602 | 2016-09-14T19:52:00.000 | 0 | 0 | 1 | 0 | python,pycharm,pypi | 39,499,006 | 11 | false | 0 | 0 | You should update to PyCharm 2016.2.3
Help -> Check for Updates... | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 0 | 0 | 0 | 51,529 |
39,498,602 | 2016-09-14T19:52:00.000 | 31 | 0 | 1 | 0 | python,pycharm,pypi | 50,065,680 | 11 | true | 0 | 0 | I am behind a corporate firewall and had this problem. All I had to do was go to Settings/Appearance and Behavior/System Settings/HTTP Proxy and check Auto-detect proxy settings and it worked. | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 1.2 | 0 | 0 | 51,529 |
39,498,602 | 2016-09-14T19:52:00.000 | 2 | 0 | 1 | 0 | python,pycharm,pypi | 45,807,803 | 11 | false | 0 | 0 | First of all, I think you should use Default Preferences in File to install packages. Second, you should check "Manage Repositories" if there's any unavailable source. | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 0.036348 | 0 | 0 | 51,529 |
39,498,602 | 2016-09-14T19:52:00.000 | 0 | 0 | 1 | 0 | python,pycharm,pypi | 55,610,016 | 11 | false | 0 | 0 | If you are running PyCharm on OSX behind a proxy, you need to define the proxy configuration; the following should do the trick:
Click on PyCharm > Preferences > Appearance and Behavior > System Settings -> HTTP Proxy
Either select Auto-detect proxy settings
Or select Manual proxy configuration
Define the Host Name and Port
and optionally Proxy Authentication if required
Click Apply | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 0 | 0 | 0 | 51,529 |
39,498,602 | 2016-09-14T19:52:00.000 | 0 | 0 | 1 | 0 | python,pycharm,pypi | 62,198,698 | 11 | false | 0 | 0 | If you are using a Corporate license, make sure you are connected to your corporate VPN | 6 | 15 | 0 | I just downloaded PyCharm the other day and wanted to download a few packages. I'm using Windows 10 with PyCharm 2016.2.2 and python 3.5.2. When I go to the available packages screen it continually states:
Error loading package list:pypi.python.org
I was just trying to download BeautifulSoup and I'm clearly missing something. Any help would be wonderful. | PyCharm Error Loading Package List | 0 | 0 | 0 | 51,529 |
39,500,131 | 2016-09-14T21:46:00.000 | 0 | 0 | 0 | 0 | python-2.7,networkx | 39,629,000 | 1 | false | 0 | 0 | It really depends what paths you are looking for.
To begin with, the shortest path gives you a lower bound c_min on the length constraint. Then, given a length constraint c >= c_min, for each node n you know the shortest path P_s_n and the distance c_n from the start to this node. Choose those nodes that satisfy c_n < c. You can then extend P_s_n by any path from n to the goal, checking that the total still satisfies your length constraint. | 1 | 0 | 0 | I am using python 2.7 and networkx.
I have a quite large network and I need to find all the paths (not only the shortest path) between an origin and destination. Since my network is large, I would like to speed up with some constraints, such as path length, cost, etc..
I am using networkx. I don't want to use all_simple_paths because with all_simple_paths, I have to filter all the paths later based on path length (number of nodes in it) or cost of the path (based on arc costs). Filtering all the paths is very expensive for the large network.
I would really appreciate any help. | Find all paths between origin destination with path length constraint | 0 | 0 | 1 | 363 |
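One concrete option: nx.all_simple_paths accepts a cutoff on the number of edges, so the length constraint can be pushed into the search itself, and the cost check can then run lazily on the generator (toy graph below):
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 1.0)])

max_cost = 5.0
for path in nx.all_simple_paths(G, source=0, target=3, cutoff=3):
    cost = sum(G[u][v]['weight'] for u, v in zip(path, path[1:]))
    if cost <= max_cost:
        print(path, cost)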
39,500,416 | 2016-09-14T22:10:00.000 | 0 | 1 | 1 | 0 | python,integration-testing,pytest | 39,559,306 | 2 | true | 0 | 0 | When an assertion fails, further execution of the test is aborted, so at most one failed assertion is ever reported per test.
To achieve what you want, you will have to write your own wrapper around assertions to keep track of them. At the end of the test, check whether the failure count is > 0 and, if so, raise an assertion error.
The count can be reset to zero in either the setup or the teardown of the test. | 2 | 4 | 0 | I am using pytest for writing some tests at integration level. I would like to be able to also report the number of assertions done on each test case. By default, pytest will only report the number of test cases which have passed and failed. | Report number of assertions in pyTest | 1.2 | 0 | 0 | 1,091
39,500,416 | 2016-09-14T22:10:00.000 | 1 | 1 | 1 | 0 | python,integration-testing,pytest | 53,409,809 | 2 | false | 0 | 0 | As far as I can see there is no such possibility to get the number of passed assertions out of pytest.
Reason: it employs Python's standard assert statement, so whether an assertion passes or fails is determined inside the engine used. Unless that engine counts how often it takes the "pass" branch of the if inside assert's implementation, and offers a way to read that counter out, there is simply no way to elicit that information from the engine - so pytest can't tell you either.
Many (unit) test frameworks report the assertion count by default, and IMHO it is desirable, because it is - amongst others - a measure of test quality: sloppy tests might apply too few checks to values. | Report number of assertions in pyTest | 0.099668 | 0 | 0 | 1,091
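A sketch of the wrapper idea from the first answer above, as a soft-check fixture (the fixture name and style are my own, not part of pytest):
import pytest

class SoftCheck(object):
    def __init__(self):
        self.count = 0
        self.failures = []
    def check(self, condition, msg=''):
        self.count += 1                       # total assertions made
        if not condition:
            self.failures.append(msg or 'check #%d failed' % self.count)

@pytest.fixture
def soft():
    s = SoftCheck()
    yield s
    print('assertions run: %d' % s.count)     # visible per test with -s
    assert not s.failures, '; '.join(s.failures)

def test_example(soft):
    soft.check(1 + 1 == 2)
    soft.check('py' in 'pytest')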
39,500,438 | 2016-09-14T22:12:00.000 | 17 | 0 | 1 | 1 | python,bash,command-line,pycharm | 39,500,506 | 6 | true | 0 | 0 | PyCharm can be launched using the charm command line tool (which can be installed while getting started with PyCharm the first time).
charm . | 1 | 13 | 0 | If you type the command "atom ." in the terminal, the Atom editor opens the current folder and I am ready to code.
I am trying to achieve the same with Pycharm using Ubuntu: get the current directory and open it with Pycharm as a project.
Is there a way to achieve this by setting a bash alias? | Opening Pycharm from terminal with the current path | 1.2 | 0 | 0 | 33,524
39,500,513 | 2016-09-14T22:18:00.000 | 1 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-elastic-beanstalk,django-migrations | 39,500,763 | 1 | false | 1 | 0 | It seems that you might have deleted the table or the migrations at some point in time.
When you run makemigrations, Django creates migrations; when you run migrate, it applies them to whichever database is specified in the settings file.
If you keep creating migrations and do not run them against a particular database, that is absolutely fine. Whenever you switch to a database and run migrations, it will handle it, because every database stores the point up to which migrations have been run in the django_migrations table and will only run the migrations that come after it.
To solve your problem, you can delete all databases and migration files and start afresh, as you are perhaps just testing right now. Things will go fine until you delete a migration or a database on any of the servers.
If you have precious data, you should get into the migration files and tables to analyse and manage things. | 1 | 1 | 0 | I am developing a small web application using Django and Elasticbeanstalk.
I created an EB application with two environments (staging and production), created an RDS instance and assigned it to my EB environments.
For development I use a local database, because deploying to AWS takes quite some time.
However, I am having troubles with the migrations. Because I develop and test locally every couple of minutes, I tend to have different migrations locally and on the two environments.
So once I deploy the current version of the app to a certain environment, the "manage.py migrate" fails most of the times because tables already exist or do not exist even though they should (because another environment already created the tables).
So I was wondering how to handle the migration process when using multiple environments for development, staging and production with some common and some exclusive database instances that might not reflect the same structure all the time?
Should I exclude the migration files from the code repository and the eb deployment and run makemigrations & migrate after every deployment? Should I not run migrations automatically using the .ebextensions and apply all the migrations manually through one of the instances?
What's the recommended way of using the same Django application with different database instances on different environments? | Django Migration Process for Elasticbeanstalk / Multiple Databases | 0.197375 | 0 | 0 | 795 |
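For the .ebextensions route, a hedged sketch of a config file (the venv path matches the classic EB Python platform; leader_only makes the migration run on a single instance per deploy instead of on every box):
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true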
39,506,856 | 2016-09-15T08:54:00.000 | 1 | 0 | 0 | 0 | python,pyqt,pyqt4 | 39,506,951 | 1 | true | 0 | 1 | A widget can only exist in one place at a time. You will need to link the two unfortunately. Do yourself a favor and do it properly via a model.
If it were possible for a widget to exist in multiple places, this would lead to a whole lot of problems: cyclic trees, multiple parents, etc. | 1 | 1 | 0 | I'd like to add a QLineEdit/checkbox/button to 2 layouts, so that no matter which one I press, in whichever window, they both do the same thing, update each other as I type, and so on.
Is it possible, or do I need to create a second set of controls and then signal-link them to each other?
Regards
Dariusz | PyQt display 1 widget in 2 layouts? | 1.2 | 0 | 0 | 284 |
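If all you need is mirrored text rather than a full model, here is a minimal PyQt4 sketch of the signal-link approach the question mentions (setText only re-emits textChanged when the text actually changes, so the two-way connection does not recurse):
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
edit_a, edit_b = QtGui.QLineEdit(), QtGui.QLineEdit()
edit_a.textChanged.connect(edit_b.setText)
edit_b.textChanged.connect(edit_a.setText)
edit_a.show(); edit_b.show()
sys.exit(app.exec_())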
39,512,166 | 2016-09-15T13:22:00.000 | 0 | 1 | 0 | 0 | python-2.7,xlrd,rapidminer | 39,519,713 | 3 | false | 0 | 0 | Good day! So, I'm not sure I understand your question correctly, but have you tried a combination of the Read Excel operator with the Loop Examples operator? Your loop subprocess could then use the Write CSV operator or similar. | 1 | 1 | 0 | I am working with RapidMiner at the moment and am trying to copy my RapidMiner results, which are in xlsx files, to txt files in order to do some further processing with Python. I have plain text in column A (A1-A1500) as well as the corresponding filename in column C (C1-C1500).
Now my question:
Is there any possibility (I am thinking of the xlrd module) to read the content of every cell in column A and write it to a newly created txt file whose filename is given in the corresponding cell of column C?
As I have never worked with the xlrd module before I am a bit lost at the moment... | Read Excel Cells and Copy content to txt file | 0 | 1 | 0 | 5,449 |
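Since the question asks about xlrd specifically, a hedged sketch (the sheet index, column positions, and bare filenames in column C are assumptions to adjust to your workbook):
import io
import xlrd

book = xlrd.open_workbook('results.xlsx')
sheet = book.sheet_by_index(0)
for row in range(sheet.nrows):
    text = sheet.cell_value(row, 0)           # column A
    name = sheet.cell_value(row, 2)           # column C
    with io.open(name + '.txt', 'w', encoding='utf-8') as f:
        f.write(u'%s' % text)                 # coerce non-text cells to text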
39,514,804 | 2016-09-15T15:21:00.000 | 0 | 0 | 0 | 1 | python,python-3.x,numpy | 39,660,578 | 1 | true | 0 | 0 | Well... I had missed installing some numpy dependencies.
sudo apt-get install python-numpy
sudo apt-get install libsasl2-dev python-dev libldap2-dev libssl-dev
After installing the above packages, my issue was resolved.
Caravel is running fine. | 1 | 0 | 0 | I am trying to build the source code of caravel.
Following the instructions I have installed the front end dependencies using npm.
On python setup.py install I am getting this error:
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
numpy/core/src/npymath/ieee754.c.src:7:29: fatal error: npy_math_common.h: No such file or directory
#include "npy_math_common.h"
I tried running with python3.
I am running this on Ubuntu 14.04.4 LTS | Error on caravel source code build - no previously included files matching *.pyo | 1.2 | 0 | 0 | 508 |
39,517,040 | 2016-09-15T17:29:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,loops,dictionary,iteration | 39,517,206 | 3 | false | 0 | 0 | Since the values for a given a are strictly increasing with successive i values, you can do a binary search for the value that is closest to your target.
While it's certainly possible to write your own binary search code on your dictionary, I suspect you'd have an easier time with a different data structure. If you used nested lists (with a as the index to the outer list, and i as the index to an inner list), you could use the bisect module to search the inner list efficiently. | 1 | 2 | 1 | I have a dictionary, T, with keys in the form k,i with an associated value that is a real number (float). Let's suppose I choose a particular key a,b from the dictionary T with corresponding value V1—what's the most efficient way to find the closest value to V1 for a key that has the form a+1,i, where i is an integer that ranges from 0 to n? (k,a, and b are also integers.) To add one condition on the values of items in T, as i increases in the key, the value associated to T[a+1,i] is strictly increasing (i.e. T[a+1,i+1] > T[a+1,i]).
I was planning to simply run a while loop that starts from i = 0 and compares the valueT[a+1,i] to V1. To be more clear, the loop would simply stop at the point at which np.abs(T[a+1,i] - V1) < np.abs(T[a+1,i+1] - V1), as I would know the item associated to T[a+1,i] is the closest to T[a,b] = V1. But given the strictly increasing condition I have imposed, is there a more efficient method than running a while loop that iterates over the dictionary elements? i will go from 0 to n where n could be an integer in the millions. Also, this process would be repeated frequently so efficiency is key. | Finding closest value in a dictionary | 0 | 0 | 0 | 2,461 |
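A small sketch of the bisect route on one of those inner lists (the values are invented):
import bisect

def closest_index(sorted_vals, target):
    j = bisect.bisect_left(sorted_vals, target)
    if j == 0:
        return 0
    if j == len(sorted_vals):
        return len(sorted_vals) - 1
    before, after = sorted_vals[j - 1], sorted_vals[j]
    return j if after - target < target - before else j - 1

vals = [0.5, 1.3, 2.8, 7.1, 9.0]      # e.g. [T[a+1, i] for i in range(n)]
print(closest_index(vals, 3.0))        # -> 2 (value 2.8 is nearest)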
39,517,707 | 2016-09-15T18:11:00.000 | 0 | 0 | 0 | 0 | python,gmail-api,google-api-python-client | 39,777,352 | 1 | false | 1 | 0 | This was a bug in the Gmail API. It is fixed now. | 1 | 1 | 0 | I can't find any results when searching Google for this response.
I'm using the current Google Python API Client to make requests against the Gmail API. I can successfully insert a label, I can successfully retrieve a user's SendAs settings, but I cannot update, patch, or create a SendAS without receiving this error.
Here's a brief snippet of my code:
sendAsResource = {"sendAsEmail": "[email protected]",
"isDefault": True,
"replyToAddress": "[email protected]",
"displayName": "Test Sendas",
"isPrimary": False,
"treatAsAlias": False
}
self.service.users().settings().sendAs().create(userId = "me", body=sendAsResource).execute()
The response I get is:
<HttpError 400 when requesting https://www.googleapis.com/gmail/v1/users/me/settings/sendAs?alt=json returned "Custom display name disallowed">
I've tried userId="me" as well as the user i'm authenticated with, both result in this error. I am using a service account with domain wide delegation. Since adding a label works fine, I'm confused why this doesn't.
All pip modules are up to date as of this morning (google-api-python-client==1.5.3)
Edit: After hours of testing I decided to try on another user and this worked fine. There is something unique about my initial test account. | Setting the SendAs via python gmail api returns "Custom display name disallowed" | 0 | 0 | 1 | 964 |
39,520,532 | 2016-09-15T21:18:00.000 | 0 | 0 | 1 | 0 | python,pandas | 62,178,663 | 4 | false | 0 | 0 | After introducing pandas to my script and loading a dataframe with 0.8MB of data, I ran the script and was surprised to see memory usage increase from 13MB to 49MB. I suspected my existing script had a memory leak, so I used a memory profiler to check what was consuming so much memory; the culprit turned out to be pandas. Just the import statement, which loads the library into memory, takes around 30MB. Importing only a specific item (from pandas import DataFrame) didn't help much in saving memory.
Just import pandas takes around 30MB of memory.
Once the import is done, the memory of a dataframe object can be checked using print(df.memory_usage(deep=True)), which depends on the data loaded into the dataframe.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | How to measure the memory footprint of importing pandas? | 0 | 0 | 0 | 493 |
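One way to actually measure the import's footprint on a Unix box (note ru_maxrss is in kilobytes on Linux and bytes on macOS):
import resource

before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
import pandas  # noqa
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('import pandas grew peak RSS by about %d units' % (after - before))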
39,520,532 | 2016-09-15T21:18:00.000 | 3 | 0 | 1 | 0 | python,pandas | 39,520,649 | 4 | false | 0 | 0 | You may also want to use a Memory Profiler to get an idea of how much memory is allocated to your Pandas objects. There are several Python Memory Profilers you can use (a simple Google search can give you an idea). PySizer is one that I used a while ago. | 2 | 0 | 1 | I am running Python on a low memory system.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | How to measure the memory footprint of importing pandas? | 0.148885 | 0 | 0 | 493 |
39,523,214 | 2016-09-16T03:10:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,django-models | 63,384,226 | 6 | false | 1 | 0 | I found the solution for me. When you write polls.apps.PollsConfig in your INSTALLED_APPS, you need to keep in mind that the first polls refers to the app you created, not to the site.
In the Django documentation it can be a bit confusing. | 3 | 5 | 0 | I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error:
ImportError: No module named apps
Traceback (most recent call last):
File "manage.py", line 22, in
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create
mod = import_module(mod_path)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
How can I resolve this error? | Django - ImportError: No module named apps | 0 | 0 | 0 | 24,726 |
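For illustration (not from the original thread), a sketch of the layout the polls.apps.PollsConfig entry expects, following the tutorial's file names:
# polls/apps.py -- must exist and define the class named in settings
from django.apps import AppConfig

class PollsConfig(AppConfig):
    name = 'polls'

# mysite/settings.py (excerpt)
INSTALLED_APPS = [
    'polls.apps.PollsConfig',  # dotted path: app package, apps module, class
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
If polls/apps.py is missing (startapp in older Django versions did not generate it), that dotted path fails with exactly the ImportError shown above.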
39,523,214 | 2016-09-16T03:10:00.000 | 5 | 0 | 0 | 0 | python,django,python-2.7,django-models | 39,537,105 | 6 | false | 1 | 0 | There is an error in the tutorial.
It instructs you to add polls.apps.PollsConfig to the INSTALLED_APPS section of the settings.py file. I changed polls.apps.PollsConfig to simply polls and that did the trick; I was able to make migrations successfully.
I hope this helps other people who run into similar problems. | 3 | 5 | 0 | I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I run "makemigrations polls" for the first time I keep getting this error:
ImportError: No module named apps
Traceback (most recent call last):
File "manage.py", line 22, in
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create
mod = import_module(mod_path)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
How can I resolve this error? | Django - ImportError: No module named apps | 0.16514 | 0 | 0 | 24,726 |
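A sketch of that plain-label fallback: listing the bare app name makes Django fall back to the default AppConfig, so no polls/apps.py is required at all:
# mysite/settings.py (excerpt)
INSTALLED_APPS = [
    'polls',  # bare label: Django supplies a default AppConfig
    # ... the django.contrib.* entries from the tutorial ...
]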
39,523,214 | 2016-09-16T03:10:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,django-models | 42,808,902 | 6 | false | 1 | 0 | In Django 1.10.6 I had the same error ("no module named..."). The solution that worked for me was changing "polls.apps.PollsConfig" to "mysite.polls" in settings.py. | 3 | 5 | 0 | I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I run "makemigrations polls" for the first time I keep getting this error:
ImportError: No module named apps
Traceback (most recent call last):
File "manage.py", line 22, in
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create
mod = import_module(mod_path)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
How can I resolve this error? | Django - ImportError: No module named apps | 0 | 0 | 0 | 24,726 |