Dataset columns (name: dtype, observed min to max):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
33,227,369
2015-10-20T03:15:00.000
0
0
0
0
python,arrays,numpy,pandas,resize
33,228,345
1
false
0
0
If you want to stay within 'Pandas', I would suggest one of the following: df.unstack() which would result in shape (len(index2), maxlen * num_columns) following your notation; here columns will be stored as a MultiIndex. Alternatively, you can use df.to_panel(); Panel is a natural Pandas data structure used for 3 dimensions, as in your case. I believe that the shape should be (num_columns, len(index1), maxlen). You can then fill any nans with .fillna(0).
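A minimal sketch of the unstack() route on a tiny hypothetical two-level index (note that to_panel() and as_matrix() have since been removed from pandas, so only the unstack() path is shown; the column/level names are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical frame with a two-level index (timestamps simplified to ints)
df = pd.DataFrame(
    {"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]},
    index=pd.MultiIndex.from_tuples([(0, 0), (0, 1), (1, 0)], names=["t1", "t2"]),
)

# unstack() pivots the inner level into columns, giving a frame of shape
# (len(index1), maxlen * num_columns) with a column MultiIndex; fillna(0)
# zero-pads the (t1, t2) combinations that are missing.
wide = df.unstack().fillna(0)

# From there, a (len(index1), maxlen, num_columns) ndarray is one reshape away.
arr = wide.to_numpy().reshape(len(wide), df.shape[1], -1).transpose(0, 2, 1)
print(wide.shape, arr.shape)  # (2, 4) (2, 2, 2)
```

The missing (t1=1, t2=1) row comes out as a zero slice, matching what numpy.resize would have padded.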
1
0
1
I have a dataframe with two indexes (both timestamps, but that's probably not relevant). I need to get out a numpy array with shape (len(first_index), maxlen, num_columns), where maxlen is some number (likely the max of all the len(second_index) values, or just something simple like 1000). I can do this with arr = df.as_matrix(...) followed by arr.resize((len(first_index), maxlen, num_columns)); elements in new rows should be 0, so .resize(...) works well. Is there a simpler and more efficient way to do this within the dataframe? Numpy works just fine, but I need maximum efficiency because I have millions of rows.
Pandas' version of numpy.resize for efficient matrix resizing
0
0
0
1,511
33,228,905
2015-10-20T05:45:00.000
0
1
0
0
java,python,c++,python-module
33,229,221
1
false
0
0
If performance is what you are looking for, you should know that pure Python is often 10 to 100 times slower than C++. Depending on how much performance you need, you may get by with optimizing your code or using third-party libraries for number crunching, such as SciPy. Using Cython would be an option worth considering, but weren't you looking to decrease development time by using Python? Using C modules would introduce more complexity, and Cython has a different syntax anyway.
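The profile-first workflow described here can be sketched with the standard-library cProfile; the hot() function below is just a hypothetical stand-in for the single hotspot eating 95% of the CPU time:

```python
import cProfile
import io
import pstats

def hot(n):
    # stand-in for the one function dominating the profile
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = hot(100_000)
profiler.disable()

# Print the three most expensive calls; the top entry is the C-module candidate.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(3)
print(result)
```

Only the function that actually tops this report is worth rewriting in C (or Cython); the rest can stay in Python.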
1
0
0
I have an assignment every week in a heuristic problem solving course, and the assignments are taking up at least 3-4 days of my week (I want to reduce this time). The questions are computationally intensive, and we need to give our best answer within a program execution time of 2 minutes. I started doing assignments in C++ for good runtime performance. Fine. But I would end up using pointers etc. so as not to create copies of data everywhere, which usually meant more debugging time. So I switched to Java for my next assignment: a little slower than C++, but it saves my weekends. I profiled my Java program and saw that a single function was taking up 95% of the CPU time. In this context, if I write my assignment solution in Python, profile it, find the functions using the most CPU time, and implement them as C modules, can I do any better? I can decrease my development time (because I personally find development in Python to be faster), and since I would implement the functions that take up 95% of the CPU time as C modules, I should not take a hit on performance. Is this something I can try? I could just try it out (Python + C modules) and check for myself without asking here, but if I fail I might not have time to re-implement my whole assignment in C++ or Java.
Performance of Python with c modules
0
0
0
51
33,229,441
2015-10-20T06:26:00.000
0
0
1
0
python,path,pycharm
68,943,187
11
false
0
0
I experienced this problem after moving my project to a different root directory, and none of the above solutions worked for me. I solved it by opening my entire project folder instead of just the Python file I was trying to run, and then running the file I wanted while the entire project was loaded into PyCharm.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
0
0
1
0
python,path,pycharm
68,016,309
11
false
0
0
I had this problem because I renamed my project from "xx" to "yy". What I did was go through the files in the .idea directory of "yy" (all XML files) and, wherever the name "xx" appeared, replace it with "yy".
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
3
0
1
0
python,path,pycharm
33,229,464
11
true
0
0
After testing for a bit, I've found a solution (but not an answer to why this error occurs in PyCharm): Delete the file and create it again. (Or rename or move it and create a new file with its old name, both should work.)
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
1.2
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
1
0
1
0
python,path,pycharm
54,367,334
11
false
0
0
I was getting this same error, and the path in "edit configurations" was correct. However, this is what eventually got my code working again: 1) I commented out all of the code in my file ("Ctrl"+"A", then "Ctrl"+"/"). 2) I uncommented something I knew would run (my list of imports). 3) I ran the Python file. This time it actually ran to completion, and after that I was able to uncomment the rest of my code and everything worked again.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0.01818
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
3
0
1
0
python,path,pycharm
54,862,230
11
false
0
0
I had the same problem; mine is probably related to the explanation the others gave: it comes from the .idea directory, whose *.xml files contain the variable $DIR_PROJECT$. Since assigning a new path didn't work, I just deleted my .idea directory, which is loaded automatically each time I open the project's directory. PyCharm regenerated .idea, asked for the script path... and it worked perfectly. CAREFUL => you will lose your project settings, since you are deleting the settings files.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0.054491
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
0
0
1
0
python,path,pycharm
59,196,577
11
false
0
0
The issue kept popping up over and over in PyCharm. So I created a new project and loaded the needed script. Then I provided the directory to path and assigned the default Python version that I wanted to use... and it worked. Then I was able to finally use "execute line in console" once again.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
2
0
1
0
python,path,pycharm
67,854,085
11
false
0
0
Set the working directory correctly: 1. File -> Settings. 2. Build, Execution, Deployment -> Console -> Python Console. 3. Working directory: the path to the directory where the file you're currently working on resides.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0.036348
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
0
0
1
0
python,path,pycharm
70,729,384
11
false
0
0
In my case Run -> Edit Configurations didn't help. I solved it by changing the value of the WORKING_DIRECTORY attribute in .idea/workspace.xml: <option name="WORKING_DIRECTORY" value="$PROJECT_DIR$/your/correct/path/here" />
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
0
0
1
0
python,path,pycharm
70,193,943
11
false
0
0
Open the Qt Designer working directory setting, choose your project path, and click OK; don't use the mysterious default working directory.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
0
0
0
54,881
33,229,441
2015-10-20T06:26:00.000
37
0
1
0
python,path,pycharm
41,593,577
11
false
0
0
It happens because when you create a file, PyCharm automatically assigns the working directory in its run configuration, which of course is the one where you created it. You can change that by going to Run -> Edit Configurations, clicking the folder icon in "Script path:", and correcting the path to the file. Click OK to save, and you should be able to run the file again.
10
35
0
I've seen this question asked before (at least twice), but I haven't found a solution so far, so I'll ask it again with some more details. The problem: when I run my Python main file, PyCharm keeps telling me Cannot start process, the working directory /home/myname/PyCharmProjects/MyProjectName/mypackage does not exist. When did this error occur? After I created a package mypackage for test purposes, moved files into it (including my main file), and then moved the files back to the root folder. The package mypackage was empty after that, but PyCharm still thought that the main file (Main.py) was located in that package. I could still run the program until I deleted the empty package, after which path errors occurred and I was unable to start it. Additional info: I can still run the other files that used to be in mypackage and are now back in my root directory, and I can still create and run new files in my root directory.
PyCharm tells me "Cannot start process, the working directory ... does not exist"
1
0
0
54,881
33,231,156
2015-10-20T08:01:00.000
0
0
0
0
python,django,selenium,selenium-webdriver
33,231,191
1
false
1
0
I suggest you use a continuous integration solution like Jenkins to run your tests periodically.
1
0
0
I'm quite new to this whole Selenium thing and I have a simple question. When I run tests (a Django application) on my local machine, everything works great. But how should this be done on a server? There is no X server, so how can I start up the webdriver there? What's the common way? Thanks
Selenium on server
0
0
1
63
33,234,363
2015-10-20T10:35:00.000
3
0
0
0
python,image,opencv,image-processing,opencv-contour
42,767,296
3
false
0
0
The answer from @rayryeng is excellent! One small thing from my implementation: np.where() returns a tuple containing an array of row indices and an array of column indices. So pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which correspond to the width. img.shape returns (rows, cols, channels), so I think it should be img[pts[0], pts[1]] to slice the ndarray behind img.
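The indexing described above can be sketched without OpenCV at all; the mask below is a hypothetical stand-in for what a filled cv2.drawContours call would produce:

```python
import numpy as np

# 5x5 "grayscale image" and a mask standing in for a filled-contour mask
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
mask = np.zeros_like(img)
mask[1:3, 1:4] = 255  # hypothetical region enclosed by the contour

pts = np.where(mask == 255)   # tuple: (row indices, column indices)
inside = img[pts[0], pts[1]]  # pixel intensities inside the contour
print(inside.tolist())        # [6, 7, 8, 11, 12, 13]
```

Because pts[0] indexes rows (height) and pts[1] indexes columns (width), img[pts[0], pts[1]] picks out exactly the masked pixels.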
1
15
1
I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video. One way I thought I could do that was by converting the image to binary, getting the contours of the blobs (tracked object, in this case) and get the coordinates of the object boundary. Then I can go to these boundary coordinates in the grayscale image, get the pixel intensities surrounded by that boundary, and track this color gradient/pixel intensities in the other frames. This way, I could keep two objects separate from each other, so they won't be considered as new objects in the next frame. I have the contour boundary coordinates, but I don't know how to retrieve the pixel intensities within that boundary. Could someone please help me with that? Thanks!
Access pixel values within a contour boundary using OpenCV in Python
0.197375
0
0
32,084
33,236,054
2015-10-20T11:58:00.000
0
0
0
0
python,python-requests
33,236,211
3
true
0
0
HTTP status codes are usually meant for browsers or, in the case of APIs, for the client talking to the server. For normal web sites, using status codes to convey semantic error information is not really useful; overusing them could even cause the browser to render responses incorrectly. So for normal HTML responses you should expect a code 200 for almost everything. To check for errors, you then have to inspect the application-specific error output in the HTML response. A good way to find out what to look for is to log in from the browser with invalid credentials and check what output is rendered. And since many sites show some kind of user menu once you're logged in, you can check for its existence to figure out whether you're logged in; when it's not there, the login probably failed.
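That check can be sketched as a tiny helper; the "Logout" marker here is purely a hypothetical example of an application-specific cue, and the real string has to be discovered by inspecting the site's logged-in page:

```python
def login_succeeded(status_code, html):
    # A 200 alone proves nothing; also require a marker that the site only
    # renders for authenticated users (e.g. a logout link or user menu).
    return status_code == 200 and "Logout" in html

print(login_succeeded(200, '<a href="/logout">Logout</a>'))  # True
print(login_succeeded(200, "<p>Invalid credentials</p>"))    # False
```

With requests, the arguments would be response.status_code and response.text from the POST to the login form.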
2
1
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
How to verify that we have logged in correctly to a website using requests in python?
1.2
0
1
98
33,236,054
2015-10-20T11:58:00.000
0
0
0
0
python,python-requests
33,236,191
3
false
0
0
What status code the site responds with depends entirely on their implementation; you're more likely to get a non-200 response if you're attempting to log in to a web service. If a login attempt yielded a non-200 response on a normal website, it'd require a special handler on their end, as opposed to a 200 response with a normal page prompting you (presumably a human user, not a script) with a visual cue indicating login failure. If the site you're logging into returns a 200 regardless of success or failure, you may need to use something like lxml or BeautifulSoup to look for indications of success or failure (which presumably you'll be using already to process whatever it is you're logging in to access).
2
1
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
How to verify that we have logged in correctly to a website using requests in python?
0
0
1
98
33,241,211
2015-10-20T15:52:00.000
6
0
0
1
python,linux,file,space
33,241,435
1
true
0
0
You don't need to (and shouldn't) escape the space in the file name. When you are working with a command line shell, you need to escape the space because that's how the shell tokenizes the command and its arguments. Python, however, is expecting a file name, so if the file name has a space, you just include the space.
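A quick demonstration that the file name is passed with its literal space (written to a temp directory so it is safe to run; the name "LC 1.a" is borrowed from the question):

```python
import os
import tempfile

# Create a file whose name contains a space, then check readability.
path = os.path.join(tempfile.mkdtemp(), "LC 1.a")
with open(path, "w") as f:
    f.write("test")

# No quoting, no backslash: the space is just part of the name.
print(os.access(path, os.R_OK))  # True
```

The backslash in '/home/abc/LC\ 1.a' exists only for the shell's tokenizer; os.access never sees a shell, so it never needs it.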
1
3
0
I have a problem with os.access(filename, os.R_OK) when filename is an absolute path on a Linux system with a space in it. I have tried many ways of quoting the space, from "'" + filename + "'" to filename.replace(' ', '\\ '), but nothing works. How can I escape the filename so my shell knows how to access it? In a terminal I would address it as '/home/abc/LC\ 1.a'
Handling a literal space in a filename
1.2
0
0
4,377
33,243,575
2015-10-20T17:57:00.000
1
0
0
0
python,mysql,mysql-python
33,243,613
1
true
0
0
The connection instance is persistent: you can connect once and work with that connection as long as you need.
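The pattern can be sketched with the standard-library sqlite3 driver standing in for MySQLdb (both follow the DB-API shape, though MySQLdb uses %s placeholders rather than ?): connect once, then reuse the connection in every method.

```python
import sqlite3  # stand-in for MySQLdb; the connect-once pattern is the same

conn = sqlite3.connect(":memory:")  # connect once, at startup
conn.execute("CREATE TABLE scores (points INTEGER)")

def add_score(points):
    # every method reuses the one persistent connection
    conn.execute("INSERT INTO scores VALUES (?)", (points,))

add_score(10)
add_score(20)
print(conn.execute("SELECT COUNT(*) FROM scores").fetchone()[0])  # 2
```

In practice you create fresh cursors as needed, but the connect() call itself happens once and is closed at shutdown.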
1
1
0
I'm new to using databases in Python and I'm playing around with MySQLdb. I have several methods that will issue database calls. Do I need to go through the database connection steps every time I want to make a call or is the instance of the database persistent?
Do I need to call MySQLdb.connect() in every method where I execute a database operation?
1.2
1
0
32
33,246,572
2015-10-20T20:52:00.000
0
1
0
1
python,python-2.7,fedora-21
33,246,962
2
false
0
0
The path to python was different for that user than for the others; the user's PATH was pointing to Canopy.
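A quick diagnostic for this kind of problem: print which interpreter and module search path each account is actually using, and compare the output run as the user versus with sudo. A Canopy (or other vendored) interpreter showing up here would explain the missing system module.

```python
import sys

# Which python binary is running, and the first few places it looks for modules
print(sys.executable)
print(sys.path[:3])
```

If the two runs print different executables, the ImportError is an environment problem, not a security one.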
1
0
0
OS: Fedora 21. Python: 2.7.6. If I run a python script as root or using sudo, it runs fine. If I run it as just the user, I get the following: Traceback (most recent call last): File "/home/user/dev_ad_list.py", line 12, in <module> import ldap ImportError: No module named ldap. selinux=disabled -- what other security is preventing a user from running a python script that imports ldap?
Python Script not Running - Has to be something simple
0
0
0
51
33,248,482
2015-10-20T23:24:00.000
2
0
0
0
python,sql,database,postgresql
33,248,538
1
false
0
0
Double quotes in SQL are not strings; they escape table, index, and other object names (e.g. "John Smith" refers to a table named John Smith). Only single-quoted strings are actually strings. In any case, if you are using query parameters properly (which, in your example code, you seem to be), you should not have to worry about escaping your data. Simply pass the raw values to execute (e.g. c.execute(player, ("Bob O'Niel",))).
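The point about parameters can be sketched with the standard-library sqlite3 module (psycopg2 behaves the same way, just with %s placeholders instead of ?): the driver quotes the value, apostrophe included, so names like O'Neal need no manual escaping.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE players (name TEXT)")

# Pass the raw value; the driver does the quoting, apostrophe and all.
db.execute("INSERT INTO players VALUES (?)", ("Bob O'Neal",))
print(db.execute("SELECT name FROM players").fetchone()[0])  # Bob O'Neal
```

Building the string yourself ("INSERT ... VALUES ('" + name + "')") is what breaks on the apostrophe, and it also invites SQL injection.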
1
0
0
I have the following table: create table players (name varchar(30), playerid serial primary key); and I am working with this script:

def registerPlayer(name):
    """Registers new player."""
    db = psycopg2.connect("dbname=tournament")
    c = db.cursor()
    player = "insert into players values (%s);"
    scores = "insert into scores (wins, matches) values (0, 0);"
    c.execute(player, (name,))
    c.execute(scores)
    db.commit()
    db.close()

But when I try to register a player with the argument in quotes, as in registerPlayer("Any Name"), it doesn't work. Now, if I enter the query directly into psql, it works if I use single quotes: INSERT INTO players VALUES ('Any Name'); but not if I use "Any Name"; with the double quotes it tells me: ERROR: column "Any Name" does not exist. This is a problem if I want to enter a name such as Bob O'Neal, because the single quote will close off the entry after the O. The quotes were working fine the other day; then I reformatted so that all the SQL queries were capitalized, and everything stopped working. I returned to the code that was working fine, and now nothing works!
Quotations not working in PostgreSQL Queries
0.379949
1
0
74
33,249,904
2015-10-21T02:23:00.000
0
0
0
0
python,machine-learning,scikit-learn
33,290,142
1
true
0
0
If the gap between the training and cross-validation accuracy is increasing then this is an indication that your model is overfitting on the training data. With every iteration (supplying additional training data) your model is better able to capture the training data, however it is no longer able to better generalise (and thus the cross-validation accuracy converges).
1
0
1
I am trying to plot the learning curves for my SVC classifier with sklearn.learning_curve. From the plot, I find that both my training scores and test scores increase simultaneously, but the gap between the training curve and the cross-validation curve becomes larger as the number of samples increases. As far as I know, the training score should decrease when more samples are supplied. Do you have any idea about this problem?
About learning curves
1.2
0
0
959
33,253,287
2015-10-21T07:24:00.000
1
0
1
0
python,random
33,254,265
2
false
0
0
"Unique" and "random" are contradictory: for anything genuinely random there is a (small, maybe infinitesimal) chance of repetition. If you want something less unwieldy (but less universally unique) than UUIDs, you can roll your own combination of a random number (with a small chance of repetition) and a number derived from the time (for example the Unix epoch, which will never repeat for a single instance if the script is run less often than once per second). If the random number is used as part of, say, a filename, you can generate a name and then check whether the file already exists; if it does, reject the random number as already used and generate another one. Or, if you really need to, you could store all random numbers already used somewhere: load them before each run, add the new number, and save after each run. Finally, there are pseudo-random generators of the form X(n+1) = (X(n)*a + b) mod M. These are hopeless for security/cryptography because, given a few members of the sequence, you can discover the algorithm and predict all future numbers. However, if that predictability is unimportant, then with appropriate constants you can guarantee no repeats until all M members of the sequence have been generated. The numbers are not at all random, but they may appear random to a casual observer.
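One sketch of the time-plus-random combination described above; the token format itself is an arbitrary choice for illustration:

```python
import random
import time

def unique_token():
    # The epoch-seconds prefix separates runs (assuming runs are more than a
    # second apart); the random suffix separates tokens within one run.
    return f"{int(time.time())}-{random.getrandbits(32):08x}"

tokens = {unique_token() for _ in range(100)}
print(len(tokens))  # collisions are possible in principle, just very unlikely
```

For genuinely collision-proof identifiers, uuid.uuid4() from the standard library remains the heavyweight option the answer mentions.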
1
0
0
I have written a script, and I need a unique random number every time I run it. Just for explanation: suppose I run the script 5 times; I want the numbers generated across all runs to be unique. I have found a lot of information about random-number uniqueness, but only within a single run. If you think this is not possible, is there any alternative way?
How to generate random numbers that are unique forever in python
0.099668
0
0
334
33,261,261
2015-10-21T13:54:00.000
0
0
0
0
python,pandas,group-by
33,261,884
2
false
0
0
You need to make a dictionary where the key is the id. Each value is going to be another dictionary mapping outN to a value. Read a line: you get an id, an outN, and a value. Check that you have a dict for that id first and, if not, create one; then shove the value for that outN into the dict for that id. Second step: collect a list of all the outNs. Make a new set; for each value in your dict, add each of its outN keys to the set. At the end, get a list from the set and sort it. Third step: go through each id in your dict's keys, then each outn in your new sorted list of outns, and print its value with a fallback to zero: outnval_by_ids[id].get(outn, "0"). There's a weird case here, in that you are assuming a lot of timestamps are duplicated per id. Be careful that this is really the case; assumptions like that cause bugs.
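The answer above sketches a plain-dict approach; in pandas, which the asker prefers, the same pivot can be sketched with pd.crosstab (the 0/1 indicator columns) plus a groupby to recover the per-ID timestamp:

```python
import pandas as pd

# the table from the question
df = pd.DataFrame({
    "ID": [1, 1, 1, 1, 2, 2, 2],
    "Output": ["out1", "out2", "out5", "out9", "out3", "out4", "out9"],
    "Timestamp": [1501, 1501, 1501, 1501, 1603, 1603, 1603],
})

wide = pd.crosstab(df["ID"], df["Output"])  # one 0/1 column per Output value
wide["timestamp"] = df.groupby("ID")["Timestamp"].first()
print(wide.reset_index().to_string(index=False))
```

crosstab counts occurrences, so this yields exactly 0/1 as long as each (ID, Output) pair appears at most once; duplicated pairs would need an extra .clip(upper=1).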
1
2
1
I have a question per below: I need to transform multiple rows per ID into one row, letting the different "Output" values become columns with binary 1/0, as in the example. Here is my table:

ID  Output  Timestamp
1   out1    1501
1   out2    1501
1   out5    1501
1   out9    1501
2   out3    1603
2   out4    1603
2   out9    1603

To be transformed into the following:

ID  out1  out2  out3  out4  out5  out9  timestamp
1   1     1     0     0     1     1     1501
2   0     0     1     1     0     1     1603

Can someone help me do this in a flexible way in Python, preferably pandas? I'm quite new to this, having used SAS for a good many years, so any "transition tips" are greatly appreciated. Br,
Python multiple rows to one row
0
0
0
1,859
33,263,378
2015-10-21T15:31:00.000
0
0
0
0
python,python-3.x,openpyxl
33,275,544
2
false
0
0
When it comes to value of numbers openpyxl doesn't care about their formatting so it will report 3142 in both cases. I don't think coercing this to a string makes any sense at all.
1
2
0
I filled an Excel sheet with correct float numbers in the German decimal format. So the number 3.142 is correctly written as 3,142; and if it is written as 3.142 (or '3.142, declared as a text entry to avoid the English interpretation 3142), then I want to report an error to the author of the Excel file. So I want to see 3,142 in the first case when reading the file with openpyxl, and 3.142 in the second case, just as written by hand in Excel. However, I see 3.142 in both cases. What can I do?
How to read a cell of a sheet filled with floats containing German decimal point
0
1
0
549
33,264,119
2015-10-21T16:05:00.000
1
1
0
1
python,ssh,google-compute-engine
33,264,941
1
false
0
0
X-Windows (X11 nowadays) is a client-server architecture. You can forward connections to your X server with a -X (uppercase) option to ssh (i.e. $ ssh -X user@host). This should work if everything is installed correctly on the server (apt-get usually does a good job of this, but I don't have a lot of experience with kwrite). EDIT from the ssh man page X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Please refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information. and the relevant -Y -Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls.
1
1
0
I cannot open .py file through Google VM SSH Console. Kwrite and sudo apt-get install xvfb are installed. My command: kwrite test.py I get the following error: kwrite: Cannot connect to X server. Do I need to change the command/install additional software? Thanks
Cannot open .py file in Google Virtual Machine SSH Terminal
0.197375
0
0
72
33,273,885
2015-10-22T05:22:00.000
0
1
1
0
python,api,twitter,tweepy,twitter-streaming-api
33,290,201
1
true
0
0
Once your data has been loaded into JSON format, you can access the username by calling tweet['user']['screen_name'], where tweet is whatever variable you have assigned to hold the JSON object for that specific tweet.
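A minimal sketch of counting tweets per user once the streamed tweets are on disk — it assumes one tweet's JSON per line, as a typical tweepy stream listener writes them; the file layout is an assumption, not part of the original answer:

```python
import json
from collections import Counter

def count_users(lines):
    """Count tweets per screen_name, given an iterable of tweet-JSON lines."""
    counts = Counter()
    for line in lines:
        tweet = json.loads(line)
        counts[tweet["user"]["screen_name"]] += 1
    return counts

# Usage (file name is illustrative):
#   with open("tweets.json") as f:
#       for name, n in count_users(f).most_common():
#           print(name, n)
```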
1
0
0
How do I list the names of users who tweeted a given keyword, along with the count of tweets from each of them? I am using Python and tweepy. I used tweepy to write the JSON results to a file via filter(track=["keyword"]), but I don't know how to list the users who tweeted the given keyword.
how to list all users who tweeted a given keyword using twitter api and tweepy
1.2
0
1
578
33,275,042
2015-10-22T06:57:00.000
0
0
1
0
python-3.x
33,275,076
5
false
0
0
1) Create a variable maxNum and initialize it to the first element of the list (initializing to 0 would fail if all numbers are negative). 2) Loop through the list: if a[i] > maxNum, set maxNum = a[i]. 3) Loop through the list a second time: if a[i] == maxNum, print(i).
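The three steps above can be sketched as one small function:

```python
def indices_of_max(a):
    # Initialize from the first element so all-negative lists work too.
    max_num = a[0]
    for x in a:                 # first pass: find the largest value
        if x > max_num:
            max_num = x
    # second pass: collect every index holding that value
    return [i for i, x in enumerate(a) if x == max_num]
```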
1
1
0
In a list, there might be several largest numbers. I want to get the indices of them all. For example: In the list a=[1,2,3,4,5,5,5] The indices of the largest numbers are 4,5,6 I know the question is easy for most of people, but please be patient to answer my question. Thanks :-)
How to find the index of the largest numbers in a list
0
0
0
54
33,276,126
2015-10-22T08:05:00.000
-2
0
0
0
python,django,django-models,django-admin,django-grappelli
33,277,390
1
false
1
0
I think you can use a non-model class that wraps the Model class and has some extra fields, whose values you can get/set or save somewhere else.
1
0
0
I have a model, which has some fields stored in db. On top of it, I need to implement non-db fields, which would be loaded and saved using a custom API. Users should interact with the model using the admin interface, Grappelli is used to enhance the standard Django admin. I am interested in one of the following: Model virtual fields or properties, where I can override how to read and save custom fields. (Simple python properties won't work with Django admin) Editable callables for admin (not sure if it is even possible) Any other means to display and process custom fields in admin, except of creating custom forms and moving the logic into the forms.
Non-db fields for Django model or admin
-0.379949
0
0
1,083
33,276,615
2015-10-22T08:33:00.000
3
0
1
0
python,module
33,276,690
2
false
0
0
If the script is still running, it's likely that replacing the dependency will not affect it at all - the code will already be in memory. Still, it's better to be safe than sorry. I would install the other script inside a virtualenv, in which you can install whichever versions of modules you want without affecting anything else.
1
1
0
Is it possible to update a python module while it is used in a running script? The situation is the following: 1) I have a script running using pandas 0.15.2. It is a long data processing task and should continue running for at least another week. 2) I would like to run, on the same machine, another script, which requires pandas 0.16. Is it possible for me to do 2) without interrupting 1)?
update python module while it is active
0.291313
0
0
48
33,279,279
2015-10-22T10:51:00.000
0
0
0
0
python,urllib
33,279,321
2
false
0
0
Using urllib you can't perform a click on an <a> tag. You may want to look into selenium-webdriver for that matter. You can fetch its href attribute value and then call the urllib.urlopen(path) function (make sure path variable contains the full path and not the relative path).
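A sketch of the "fetch the href, then open it" approach using only the standard library (Python 3 names; the page markup here is made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkFinder(HTMLParser):
    """Collects href attributes of <a> tags; a 'click' is just a GET there."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def resolve_login_link(page_html, base_url):
    finder = LinkFinder()
    finder.feed(page_html)
    # Pick the link you want (here: the first one) and make it absolute,
    # since hrefs are often relative paths.
    return urljoin(base_url, finder.links[0])

# The resulting URL would then be passed to urllib.request.urlopen(...).
```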
2
1
0
I'm writing a python script to grab my bank account details using urllib. To access the login page, there is a hyperlink button present on the page. How can I get my script to click that button or indeed bypass it? Please, any help me on this would be appreciated.
Python: Clicking a hyperlink button with urllib
0
0
1
1,484
33,279,279
2015-10-22T10:51:00.000
0
0
0
0
python,urllib
33,279,434
2
false
0
0
I want to add to Jason's answer that if you're going to pass login data you might want to step away from urllib and use requests. I would find the url of the page that has the login form and use requests there.
2
1
0
I'm writing a python script to grab my bank account details using urllib. To access the login page, there is a hyperlink button present on the page. How can I get my script to click that button or indeed bypass it? Please, any help me on this would be appreciated.
Python: Clicking a hyperlink button with urllib
0
0
1
1,484
33,280,111
2015-10-22T11:38:00.000
0
0
0
0
python,tkinter
44,626,762
1
false
0
1
There are three ways to insert elements in a tkinter window. First you could use only yourelement.pack() to put the element, and every element you put after, it will be downside the first align at the center. You could use yourelement.pack() and yourelement.place(x=xx,y=xx,width=xx,height=xx) to place the element where you want according to your x and y values. And last of all but not less important, you could use yourelement.grid(row=xx,column=xx) to place the element in the window on a grid you have previously defined. This last way to place the element is redimensionable, but a little bit difficult to assemble all the things in the window, but it's a thing of try and fail.
1
2
0
I am creating an application with Tkinter that contains a grid of widgets. Is there anyway to mimic the behavior of bootstrap such that all the elements appear on the window after it is collapsed?
Bootstrap Behavior in Tkinter
0
0
0
1,698
33,280,783
2015-10-22T12:13:00.000
2
1
0
1
python,windows,cron,crontab,job-scheduling
33,283,524
1
true
0
0
cron is best for jobs that you want to repeat periodically. For one-time jobs, use at or batch.
1
3
0
What are the best methods to set a .py file to run at one specific time in the future? Ideally, its like to do everything within a single script. Details: I often travel for business so I built a program to automatically check me in to my flights 24 hours prior to takeoff so I can board earlier. I currently am editing my script to input my confirmation number and then setting up cron jobs to run said script at the specified time. Is there a better way to do this? Options I know of: • current method • put code in the script to delay until x time. Run the script immediately after booking the flight and it would stay open until the specified time, then check me in and close. This would prevent me from shutting down my computer, though, and my machine is prone to overheating. Ideal method: input my confirmation number & flight date, run the script, have it set up whatever cron automatically, be done with it. I want to make sure whatever method I use doesn't include keeping a script open and running in the background.
Methods to schedule a task prior to runtime
1.2
0
0
50
33,285,738
2015-10-22T16:17:00.000
0
0
0
0
python,amazon-redshift,amazon-rds
34,606,560
1
false
0
0
Currently it's impossible to access the underlying database from a UDF on Redshift; it's not supported yet.
1
1
0
I have 2 tables: a main table and a rules reference table (if / else if / else). I cannot join the tables directly because the reference table contains if/else-if conditional data. I have implemented the same functionality using a distributed-cache UDF in Hive, and I want the same behavior in Redshift. I want to apply the reference rules table to every row of the main table. Can I access the entire reference table inside a UDF?
Accessing RedShift Table inside UDF
0
0
0
263
33,286,349
2015-10-22T16:49:00.000
-2
0
1
0
python,string
33,286,380
4
false
0
0
You can do this: "This string is {{{}}}".format('wonderful') — the doubled braces are literal braces, and the inner {} is the replacement field. (Note that "{{f}}".format(f='wonderful') produces the literal "{f}", because the doubled braces are escaped and there is no field left to substitute.) You can also use named fields: "Hello, {name}!".format(name='John') substitutes {name} with John.
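Both brace patterns in action:

```python
# {{ and }} are literal braces; the inner {} is the replacement field.
s = "This string is {{{}}}".format("wonderful")

# Named fields substitute every {name} occurrence.
greeting = "Hello, {name}!".format(name="John")
```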
1
5
0
Ponder that you have a string which looks like the following 'This string is {{}}' and you would like to transform it into the following 'This string is {wonderful}' if you do 'This string is {{}}'.format('wonderful') it won't work. What's the best way to achieve this?
string.format() with {} inside string as string
-0.099668
0
0
7,116
33,289,820
2015-10-22T20:12:00.000
0
0
1
0
python,spacy
66,596,439
5
false
0
0
from spacy.en import English may give you the error: No module named 'spacy.en'. All language data has been moved to the submodule spacy.lang in spaCy 2.0+, so use from spacy.lang.en import English instead. Then do all the remaining steps as @syllogism_ answered.
1
35
0
How can I extract noun phrases from text using spacy? I am not referring to part of speech tags. In the documentation I cannot find anything about noun phrases or regular parse trees.
Noun phrases with spacy
0
0
0
33,789
33,290,705
2015-10-22T21:03:00.000
0
0
0
0
python,amazon-s3
33,291,826
1
false
0
0
You use HTTPS. SSL certificates not only serve as a public key for encryption, they are also signed by a trusted certificate authority and confirm that the server you intended to contact possesses a certificate (and has the correlated private key) that matches the hostname you used when you made the contact. A properly-configured client will refuse to communicate over HTTPS with a server presenting a certificate with a mismatched hostname, or a certificate signed by an untrusted/unknown certificate authority.
1
0
0
I have a s3 bucket with a file in it and while retrieving file from s3 bucket, I want to check if it is retrieved from the source I am expecting from and not from some man-in-the-middle thingy. What is the best way to do that? Something with Authentication header may be or associate with key?
python : signature matching aws s3 bucket
0
0
1
60
33,290,927
2015-10-22T21:17:00.000
1
1
0
1
python,django-views,fork,uwsgi
58,931,038
1
false
1
0
use lazy-apps = true instead of 1
1
3
0
I use Debian + Nginx + Django + uWSGI. One of my functions uses fork() in the file view.py (the fork works well), then immediately returns render(request, ...). After the fork() the page loads for a long time and then the browser prints the error "Web page not available". On the other hand, the error doesn't occur if I reload the page during loading (because I don't launch the fork() again). The uWSGI documentation says: uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy-apps option. This will instruct uWSGI to load the applications after each worker's fork(). Beware as there is an older option named lazy that is way more invasive and highly discouraged (it is still here only for backward compatibility). I don't understand all of it, and I added the lazy-apps option to my uWSGI configuration: lazy-apps: 1 in my uwsgi.yaml. It doesn't help — am I doing something wrong? What should I do about this problem? P.S. Options other than fork() don't fit my case. P.P.S. Sorry, I used Google Translate.
How to enable the lazy-apps in uwsgi to use fork () in the code?
0.197375
0
0
2,374
33,291,027
2015-10-22T21:24:00.000
1
0
1
0
python,regex,python-2.7,python-3.x
33,291,521
3
false
0
0
The reason for your issue is a combination of greediness and the empty token. When the pattern starts out, it happily matches the ab at the beginning of the string, so the first token is satisfied. The next token is the greedy dot, which consumes all of the remaining characters in your target string and is thereby satisfied. The next token is an alternation. Neither of the first two options can match, since you are at the end of the target string thanks to the greedy dot. However, the empty token trivially matches, which satisfies that entire group. The next token is another greedy dot, but this dot requires zero or more occurrences of any character, so at the end of the string it is trivially satisfied. The final token behaves the same way as the previously described group, so it too is trivially satisfied.
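The behavior described above can be demonstrated directly — the empty alternatives let the greedy dots eat everything, while dropping them forces backtracking until each group matches real text:

```python
import re

s = "abcde asdfg qwerty"

# With empty alternatives: the first .* consumes to the end of the string,
# and groups 2 and 3 fall through to their empty option.
with_empty = re.search(r"(ab|a|).*(as|a|).*(qwe|qw|)", s).groups()

# Without them: the engine must backtrack the dots until "as" and "qwe"
# actually match somewhere.
without_empty = re.search(r"(ab|a|).*(as|a).*(qwe|qw)", s).groups()
```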
2
0
0
As I understand, | tries different subpatterns in alternation and matches the first possible option. Whenever there are multiple groups, the later ones behave unexpectedly when one of the subpatterns is empty giving it priority. Example: re.search("(ab|a|).*(as|a|).*(qwe|qw|)", "abcde asdfg qwerty").groups() returns: ('ab', '', ''). If the empty option is removed re.search("(ab|a|).*(as|a).*(qwe|qw)", "abcde asdfg qwerty").groups() The result is ('ab', 'as', 'qwe') as expected. I am interested in a way to achieve the second result and be able to match a string like abc qwerty and obtain ('ab', '', 'qwe') or abcd asd and obtain ('ab', 'as', ''). The explanation on why the patterns did not work as I expected will be appreciated, but it is not my main concern. Thanks in advance!
Empty string taking precedence in python regex subpatterns with multiple groups
0.066568
0
0
85
33,291,027
2015-10-22T21:24:00.000
1
0
1
0
python,regex,python-2.7,python-3.x
33,291,201
3
false
0
0
The reason that you're getting that middle group is the .* is greedy. It sees all characters in your string and consumes all of them. You probably want something like this: (ab|a|).* ?(as|a|).* (qwe|qw|) It might be more helpful if you posted exactly what you need. I'm not sure what the use case of this might be, and if there is a better way to write that regex.
2
0
0
As I understand, | tries different subpatterns in alternation and matches the first possible option. Whenever there are multiple groups, the later ones behave unexpectedly when one of the subpatterns is empty giving it priority. Example: re.search("(ab|a|).*(as|a|).*(qwe|qw|)", "abcde asdfg qwerty").groups() returns: ('ab', '', ''). If the empty option is removed re.search("(ab|a|).*(as|a).*(qwe|qw)", "abcde asdfg qwerty").groups() The result is ('ab', 'as', 'qwe') as expected. I am interested in a way to achieve the second result and be able to match a string like abc qwerty and obtain ('ab', '', 'qwe') or abcd asd and obtain ('ab', 'as', ''). The explanation on why the patterns did not work as I expected will be appreciated, but it is not my main concern. Thanks in advance!
Empty string taking precedence in python regex subpatterns with multiple groups
0.066568
0
0
85
33,292,063
2015-10-22T22:50:00.000
0
0
1
0
hadoop,apache-spark,ipython,pyspark,jupyter
34,088,151
1
false
0
0
I have a working deployment of CDH 5.5 + Jupyter with PySpark and native Scala Spark. In my case I am using a dedicated user to start a Jupyter server and then connecting to it from a client browser. Before sharing some thoughts about your problem, I would like to point out that if your fifth server is not closely connected to your cluster, you should avoid launching PySpark in yarn-client mode, as the communication latency would surely slow your jobs. As far as I know, yarn-cluster mode cannot be invoked remotely without spark-submit. If you still want your driver node to execute on that fifth server, make sure that your user "ipython" has the correct permissions to access HDFS and the other Hadoop conf directories; you might need to create that user on your other Hadoop nodes. Also make sure that your YARN conf file is correctly configured to reflect the address of your YARN ResourceManager.
1
1
1
Let's assume I've got a 4 nodes Hadoop cluster (Cloudera distro in my case) with a user named 'hadoop' on each node ('/home/hadoop'). Also, I've got a fifth server with installed on it, Jupyter and Anaconda with a user named 'ipython', but without hadoop installation. Let's say I want to start Jupyter remotely from that fifth server in 'yarn_client' mode by keeping the 'ipython' user, my problem is that I've got an issue from logs which says that the user 'ipython' isn't allowed (or something like that). For info I copied-paste a dummy directory (to set the HADOOP_CONF_DIR environment variable) from the Hadoop cluster to that fifth server. Everything works well with the 'local[*]' setting in my 'kernel.json' file (fortunately), but the issue appears back when I change the master value into 'yarn_client' (unfortunately)... Is there a trick to solve that issue ? Or maybe several different tricks ?
Spark: How to start remotely Jupyter in 'yarn_client' mode from a different user
0
0
0
1,327
33,294,758
2015-10-23T04:12:00.000
0
0
0
0
javascript,python,django,cordova,authentication
33,294,839
2
false
1
0
You would need to either expose the Django token in the settings file so that it can be accessed via jQuery, or that decorator won't be accessible via mobile. Alternatively, you can start using something like OAuth.
1
1
0
I have developed a Python/Django application for a company. In this app all the employees of the company have a username and a password to login in. Now there is a need for a phone application that can do some functionality. In some functions I have the decorator @login_required For security reasons I would like to work with this decorator than against it, so how do I? I'm using PhoneGap (JavaScript/JQuery) to make the phone app if that helps. I can do my own research but I just need a starting point. Do I get some sort of token and keep it in all my HTTP request headers? First Attempt: I was thinking that maybe I POST to the server and get some kind of Authentication Token or something. Maybe there is some Javascript code that hashes my password using the same algorithm so that I can compare it to the database. Thanks
Login in from Phone App
0
0
0
57
33,295,439
2015-10-23T05:29:00.000
0
0
0
0
python,django,apache,ffmpeg,moviepy
33,412,882
2
true
1
0
After spending lots of time and trying lots of things, I have finally solved this issue. You can pass the full path of the temp video along with its name, and then it will create the temp video at the given path. Make sure you have write permissions on the directory you are going to set for the temp video.
1
3
0
I am using Moviepy through a Django application on Ubuntu 14.04 system. It is giving me permissions error when it tries to write video file. Following are details of error : MoviePy error: FFMPEG encountered the following error while writing file test1TEMP_MPY_wvf_snd.mp3: test1TEMP_MPY_wvf_snd.mp3: Permission denied It seems it has not correct permissions on directory where it is trying to write temporary files. I have set the 777 on /tmp directory but no luck. Please help me fix this issue. Thanks
MoviePy error: FFMPEG permission error
1.2
0
0
2,767
33,298,821
2015-10-23T09:17:00.000
0
0
0
0
python,amazon-web-services,amazon-s3,amazon-ec2,amazon-iam
33,375,622
5
false
1
0
As mentioned above, you can do this with Boto. To make it more secure and not worry about the user credentials, you could use IAM to grant the EC2 machine access to the specific bucket only. Hope that helps.
1
5
0
I have an EC2 instance and an S3 bucket in different region. The bucket contains some files that are used regularly by my EC2 instance. I want to programatically download the files on my EC2 instance (using python) Is there a way to do that?
Access to Amazon S3 Bucket from EC2 instance
0
1
1
6,069
33,302,773
2015-10-23T12:51:00.000
1
1
0
1
python,labview
33,306,025
4
false
0
0
Why not use the System Exec.vi in Connectivity->Libraries and Executables menu? You can execute the script and get the output.
2
2
0
I need to call a Python script from Labview, someone know which is the best method to do that? I've tried Labpython, but it is not supported on newest versions of Labview and I'm not able to use it on Labview-2014. Definitevly, I'm looking for an advice about python integration: I know this two solutions: 1)Labpython: is a good solution but it is obsolete 2)execute python script with shell_execute block in Labview. I think that it isn't the best solution because is very hard to get the output of python script
Python and Labview
0.049958
0
0
2,470
33,302,773
2015-10-23T12:51:00.000
0
1
0
1
python,labview
55,798,097
4
false
0
0
You can save the Python script as a large string constant (or load it from a text file) within the LabVIEW VI so that it can be manipulated within LabVIEW, then save it to a text file and execute it from the command line in LabVIEW: python yourscript.py
2
2
0
I need to call a Python script from Labview, someone know which is the best method to do that? I've tried Labpython, but it is not supported on newest versions of Labview and I'm not able to use it on Labview-2014. Definitevly, I'm looking for an advice about python integration: I know this two solutions: 1)Labpython: is a good solution but it is obsolete 2)execute python script with shell_execute block in Labview. I think that it isn't the best solution because is very hard to get the output of python script
Python and Labview
0
0
0
2,470
33,306,071
2015-10-23T15:24:00.000
0
0
0
0
python,django,django-rest-framework
68,921,035
4
false
1
0
If you get a single object wrapped in an array, use the .get() method instead of .filter() — .get() returns a single-object response rather than a list.
1
8
0
I am converting a set of existing APIs from tastypie to REST framework. By default when doing list APIs, tastypie returns a dictionary containing the list of objects and a dictionary of metadata, where REST framework just returns an array of objects. For example, I have a model called Site. Tastypie returns a dictionary that looks like { "meta": { ... some data here ...}, "site": [ {... first site...}, {...second site...} ... ] } where REST framework returns just the array [ {... first site...}, {...second site...} ... ] We are not using the metadata from tastypie in any way. What is the least invasive way to change the return value in REST framework? I could override list(), but I would rather have REST framework do its thing where ever possible.
Return dictionary instead of array in REST framework
0
0
0
6,333
33,306,517
2015-10-23T15:46:00.000
1
0
0
0
python,database-connection,psycopg2
33,306,596
1
true
0
0
It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic. I won't comment on what's more "pythonic", since that is a highly subjective question. However, Python doesn't make very strict guarantees on when a destructor is called, making the context/__exit__ approach the right one here.
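A sketch of the context-manager (`__exit__`) approach. The `connect` factory here is a stand-in for something like `functools.partial(psycopg2.connect, dsn)` — that pairing is my assumption, not part of the original answer; any object with a `.close()` method works:

```python
class ManagedDB:
    """Context manager that guarantees the connection is closed on exit,
    even if the body raises — unlike __del__, whose timing Python does
    not strictly guarantee."""

    def __init__(self, connect):
        self._connect = connect  # e.g. functools.partial(psycopg2.connect, dsn)

    def __enter__(self):
        self.conn = self._connect()
        return self.conn

    def __exit__(self, exc_type, exc, tb):
        self.conn.close()
        return False  # don't swallow exceptions from the with-body
```

Usage: `with ManagedDB(connect) as conn: ...` — the connection is closed when the block exits, normally or via an exception.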
1
1
0
I create a database connection in the __init__ method of a Python class and want to make sure that the connection is closed on object destruction. It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic.
What's the preferred way to close a psycopg2 connection used by Python object?
1.2
1
0
523
33,307,845
2015-10-23T17:04:00.000
1
0
0
0
python,sockets,server,client,disconnect
33,307,987
2
true
0
0
Assuming you are not after pings from server to client, I believe your approach is fine. Very often the server will not be able to reach the client, but it works the other way around. You may run out of resources if you have many connected clients. Also, over this established channel you can send other data/metrics — and boom, monitoring was born ;-) If you send other data you will probably realize you don't need to send a heartbeat every 2 seconds, only when no other data was sent — FIX works this way (and so do many other messaging protocols). You may also like something like Kafka, which will transport the messages for you; there are other messaging systems too, and they scale better than connecting all clients directly (assuming you have many of them). Happy messaging.
1
0
0
My basic problem is that I am looking for a way for multiple clients to connect to a server over the internet, and for the server to be able to tell if those clients are online or offline. My current way of doing this is a python socket server, and python clients, which send the server a small message every 2 seconds. The server checks each client to see if it has received such a message in the last 5 seconds, and if not, the client is marked as offline. However, I feel that is is probably not the best way of doing this, and even if it is, there might be a library that does this for me. I have looked for such a library but have come up empty handed. Does anyone know of a better way of doing this, or a library which can automatically check the status of multiple connected clients? Note: by "offline", I mean that the client could be powered off, network connection disconnected or program quit.
Python client-server - tell if client offline
1.2
0
1
1,192
33,308,781
2015-10-23T18:05:00.000
1
0
0
0
python,django,python-3.x,pip,django-rest-framework
67,793,330
29
false
1
0
Also, if you're getting this error while running docker-compose up. Make sure to run docker-compose up --build because docker needs to install the djangorestframework dependency as well.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.006896
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
68,277,646
29
false
1
0
After installing the necessary packages with python3/pip3 inside my virtual environment, it all came down to running my server with python manage.py runserver instead of python3 manage.py runserver. This was because the virtual environment and other packages were installed using python3/pip3 and not python2/pip2, hence running the server with python3 again resulted in the error. I'm sure this will help someone else.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
67,944,634
29
false
1
0
I faced the same problem. In my case, I solved it by updating my Windows Defender configuration.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
61,280,125
29
false
1
0
I know there is an accepted answer for this question and many other answers also but I just wanted to add an another case which happened with me was Updating the django and django rest framework to the latest versions to make them work properly without any error. So all you have to do is just uninstall both django and django rest framework using: pip uninstall django pip uninstall djangorestframework and then install it again using: pip install django pip install djangorestframework
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
2
0
0
0
python,django,python-3.x,pip,django-rest-framework
61,382,641
29
false
1
0
Yeah, for me it was the Python version as well — much better to use pipenv. Create a virtual env using Python 3: install pipenv: pip3 install pipenv; create the virtualenv: pipenv --python 3; activate the virtual env: pipenv shell
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.013792
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
59,230,374
29
false
1
0
(I would assume that folks using containers know what they're doing, but here's my two cents) Let's say you setup your project using cookiecutter-django and enabled the docker container support, be sure to update the pip requirements file with djangorestframework==<x.yy.z> (or whichever python dependency you're trying to install) and re-build the docker images (local and production).
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
68,626,596
29
false
1
0
Install it with pip3 install djangorestframework first and add rest_framework to INSTALLED_APPS in settings.py. This is how I sorted out the problem.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
5
0
0
0
python,django,python-3.x,pip,django-rest-framework
50,729,985
29
false
1
0
If you're using some sort of virtual environment do this! Exit from your virtual environment. Activate your virtual environment. After you've done this you can try running your command again and this time it probably won't have any ImportErrors.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.034469
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
18
0
0
0
python,django,python-3.x,pip,django-rest-framework
52,544,748
29
false
1
0
Also, check for the possibility of a tiny typo: It's rest_framework with an underscore (_) in between! Took me a while to figure out that I was using a dash instead...
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
1
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
-1
0
0
0
python,django,python-3.x,pip,django-rest-framework
56,489,277
29
false
1
0
On Windows, with PowerShell, I had to close and reopen the console and then reactivate the virtual environment.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
-0.006896
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
1
0
0
0
python,django,python-3.x,pip,django-rest-framework
54,212,984
29
false
1
0
if you used pipenv: if you installed rest_framework thru the new pipenv, you need to run it thru the virtual environment: 1.pipenv shell 2.(env) now, run your command(for example python manage.py runserver)
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.006896
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
2
0
0
0
python,django,python-3.x,pip,django-rest-framework
52,474,178
29
false
1
0
If you are working with PyCharm, I found that restarting the program and closing all prompts after adding 'rest_framework' to my INSTALLED_APPS worked for me.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.013792
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
1
0
0
0
python,django,python-3.x,pip,django-rest-framework
71,253,035
29
false
1
0
If it persists after installing and adding it to your INSTALLED_APPS, then it's most likely because you're running the server with python3, and that's okay. So what you do while installing is use python3 -m pip install djangorestframework.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.006896
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
0
0
0
0
python,django,python-3.x,pip,django-rest-framework
43,004,127
29
false
1
0
try this if you are using JWT pip install djangorestframework-jwt
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0
0
0
221,940
33,308,781
2015-10-23T18:05:00.000
2
0
0
0
python,django,python-3.x,pip,django-rest-framework
38,469,924
29
false
1
0
When using a virtual environment like virtualenv without having django-rest-framework installed globally, you may also get this error. The solution would be: activate the environment first with {{your environment name}}/bin/activate for Linux or {{your environment name}}/Scripts/activate for Windows and then run the command again.
15
116
0
I've installed django rest framework using pip install djangorestframework yet I still get this error when I run "python3 manage.py sycndb": ImportError: No module named 'rest_framework' I'm using python3, is this my issue?
Django Rest Framework -- no module named rest_framework
0.013792
0
0
221,940
33,309,735
2015-10-23T19:08:00.000
1
0
1
0
python,ptvs
33,310,137
1
false
0
0
Sending it to the interactive works. Still new to VS and PTVS, so kind of a bonehead move.
1
0
0
I'm not sure how to phrase the question another way, but I upgraded from VS 13' to 15', and I'm no longer able to run multiple statements in the REPL window. I get 'SyntaxError: multiple statements found while compiling a single statement'. I know in 13' i was able to run multiple lines/statements with error. Any suggestions?
How to pass multiple lines into interpreter?
0.197375
0
0
159
33,310,292
2015-10-23T19:45:00.000
2
0
0
0
python,sockets,network-programming
33,310,427
1
true
0
0
Yes, UDP sockets are bidirectional.
1
0
0
Am I able to send and receive data (therefore, to use sendto and recvfrom methods) via the same UDP socket simultaneously in Python? I need to listen for new packets while sending some data to previous clients from another thread.
Am I able to send and receive data via the same UDP socket simultaneously
1.2
0
1
583
33,310,788
2015-10-23T20:21:00.000
0
0
0
0
python,python-3.x,openpyxl
33,320,152
2
false
0
0
I suggest you update your version of openpyxl. It hasn't been a requirement to have a styles.xml in an archive for a while.
1
0
0
I have a directory of .xls and .xlsx files I am trying to load into Python with the openpyxl module. My code is as follows: for i in os.listdir(os.getcwd()): if i.endswith(".xls") or i.endswith(".xlsx"): wb = openpyxl.load_workbook(i) When I run this code I am receiving the following error: raise InvalidFileException(unicode(e)) InvalidFileException: "There is no item named 'xl/styles.xml' in the archive" I am able to load these same files successfully with openpyxl one at a time, but not in a loop. Thank you in advance for any help. James
InvalidFileException error when iterating through directory with openpyxl.load_workbook
0
0
0
1,895
33,312,123
2015-10-23T22:08:00.000
0
0
1
0
python
33,312,656
1
true
0
0
The sys module is built into the Python interpreter and thus has higher priority than your bogus python module. Additionally, before any of your code is run, some part of the Python startup has already imported the sys module and cached it in sys.modules. If a module you try to import is in the cache, the cached module is returned regardless of anything else.
1
0
0
I am relatively new to python and was reading about how python goes about looking for modules when import statement is invoked. It appears that when doing stuff on the python interactive prompt, it should look for sys.py in my current working directory first when I do "import sys" but it seems to be saving me from this mistake somehow. When I do a dir(sys) after I invoke import sys it shows me the correct names even though i put a bogus sys.py in the CWD. Could someone please kindly explain what's happening?
why doesnt sys.py in my current directory break "import sys"
1.2
0
0
111
33,313,496
2015-10-24T01:10:00.000
1
0
1
0
python,package,config,generated
33,313,610
1
true
0
0
I would think you would implement some kind of initialization routine that allows them to pass in the values, assuming this is a module they might import and use. If it is a package that they run from the command line, like python -m SimpleHTTPServer, then you could probably just add some arguments to specify a key file. You could generate a config file in their home directory if it doesn't exist and give them a note to update it if it hasn't been updated yet (and then exit).
1
1
0
I have a python package which build some particular text file, and save it on a bucket on google storage. The access keys of google storage are a parameter of my program which are defined in a file config.py, but these keys should not be given to other people and every user is expected to fill the file with its own access keys. My question is: how am I supposed to distribute this package to my colleagues, so they can modify the config.py with their own google storage access after having pip installed the package. In other words, what is the natural way to allow a package to be installed via pip install in a way that the user have to generate the config file before using the package. I could do the stdin and stdout with the user myself like the first program I did in python but I would be really surprised if no package exists to handle such interactions properly?
generate individual config file for a python package
1.2
0
0
52
33,321,076
2015-10-24T17:19:00.000
0
0
0
0
python,session,cookies,python-requests
33,321,212
2
false
1
0
You need to get the response from the page then regex match for token.
2
0
0
I am trying to extract the data from webpage after log in. To log in website, i can see the token (authenticity_token) in Form Data section. It seems, token generating automatically.I am trying to get token values but no luck.Please anyone help me on this,how to get the token value while sending the post requests.
How to get token value while sending post requests
0
0
1
70
33,321,076
2015-10-24T17:19:00.000
0
0
0
0
python,session,cookies,python-requests
37,413,165
2
false
1
0
The token value is stored in the cookie file. Check the cookie file and extract the value from it. For example, a cookie file after login contains jsessionID=A01~xxxxxxx where 'xxxxxxx' is the token value. Extract this value and post it.
2
0
0
I am trying to extract the data from webpage after log in. To log in website, i can see the token (authenticity_token) in Form Data section. It seems, token generating automatically.I am trying to get token values but no luck.Please anyone help me on this,how to get the token value while sending the post requests.
How to get token value while sending post requests
0
0
1
70
33,324,475
2015-10-24T23:31:00.000
2
0
0
0
python,bokeh
33,339,510
1
true
0
0
There is an open PR to improve this; it will be in the 0.11 release.
1
3
1
Usually I do plotting inside of IPython Notebook with pylab mode. Whenever I use Bokeh, I like to enable output_notebook() to show my plot inside of the IPython notebook. Most annoying part is that Bokeh enable wheel_zoom by default which cause unintended zoom in IPython notebook. I know I can avoid this by passing comma separated tools string what I want to include into bokeh.plotting.figure. but with this solution, I should list up the other tools but wheel_zoom. Is there any way to exclude wheel_zoom only? or Can I disable wheel_zoom in global setting or something like that?
How to disable wheel_zoom in Bokeh?
1.2
0
0
513
33,327,217
2015-10-25T07:20:00.000
0
0
0
0
python,django,markdown,django-wiki
33,327,523
1
true
1
0
Well, it looks like there is just such an extension for this in MarkDown (WikiLinkExtension - which takes a base_url parameter). I've had to modify my copy of django-wiki to add a new setting to use it (submitted an upstream pull request for it too, since I suspect this will be useful to others). Kinda surprised django-wiki didn't have this support already built but there you go. EDIT: Ok, it looks like this approach doesn't play nice with a hierarchical Wiki layout (which Django-wiki is designed for). I've cobbled together a hack that allows me to link to child pages of the current page, which is enough to be workable even if it's kind of limited.
1
0
0
I've got a Django app that I'm working on, with a wiki (powered by django-wiki) sitting under the wiki/ folder. The problem I'm having is that if I create links using Markdown, they all direct to the root /, whereas I want any links generated from the markdown to go into the same subdirectory (under /wiki). The documentation doesn't appear to be particularly forthcoming on this (mainly directing me to to the source code, which so far has revealed nothing). The other avenue I'm looking into is how to direct Markdown itself to prefix all links with a specified path fragment. Is there a Markdown extension or trick that might be useful for accomplishing this?
Defining a default URL prefix using markdown / django-wiki
1.2
0
0
226
33,328,730
2015-10-25T10:41:00.000
0
0
0
0
python,django
33,329,001
1
false
1
0
Groups in Django (django.contrib.auth) are mainly used to grant certain users rights to view content, primarily in the admin. I think your group functionality might be more custom than this, and that you're better off creating your own group models and building your own user and group management structure that suits the way your website is used.
1
0
0
I am currently learning how to use Django. I want to make a web app where you as a user can join groups. These groups have content that just members of this group should be able to see. I learned about users, groups and a bit of authentication. My first impression is, that this is more about the administration of the website itself and I cannot really believe that I can solve my idea with it. I just want to know if thats the way to go in Django. I probably have to create groups in Django that have the right to see the content of the group on the website. But that means that everytime a group is created, I have to create a django group. Is that an overkill or the right way?
How to organize groups in Django?
0
0
0
77
33,329,238
2015-10-25T11:38:00.000
0
0
1
0
python,nlp,extraction
36,612,972
3
false
0
0
SUTime runs on the JVM, so I am not sure if you can call it from Python seamlessly. There's no SUTime port for Python as far as I know.
1
2
0
For date extraction I tried to use NLTK (part of Natural Language Processing) - regular expression,unigram. Using these I could extract date but when I pass different messages for the same date extractor, it is unable to identify the date format. When i further googled it I came across SUTime for extracting date. Can any one tell how to install SUTime and extract date from a text message using python? Or Is there any other way to extract date from a text message using python? (NOTE: Text message are not machine generated. Hence the date format changes from message to message) Example : Text message : "10/10/2015 4:20 CST. At Belendoor terminal UNL is unavailable from Date: October 12, 2015 Time: 1:30 PM until 07:30pm EST." Output : Date1 : 10/10/2015 04:20:00 CST Date2 : 10/12/2015 13:30 Date3 : 10/12/2015 19:30 EST
How to use SUTime,NLP in python inorder to extract date
0
0
0
3,272
33,331,899
2015-10-25T16:13:00.000
0
0
1
0
python,function,namespaces
33,331,933
1
true
0
0
No! Python has namespaces, so you do not have to use prefixes. Prefixes are only useful for languages without namespaces, like C or Objective-C.
1
0
0
I have a file in my Python project containing a number of functions that will modify various data objects within my app. I've called this file conditions.py. Every function within this file, I've prefixed with condition_ (e.g. condition_limits() will limit integers between a specified range, condition_in_list() tests for a given value in a given list...). My question: Given that to use these functions in other files, I will be importing conditions. Do I need to prefix my functions in this file at all? Will conditions.limits() be sufficiently unique and equal to conditions.condition_limits() in use? I ask because I am having a little trouble understanding python's personality with regard to structure, scope, namespace and unique names. I would prefer to not prepend all of my functions for sake of readability and overall cleanliness. Thanks!
Should I use a unique prefix for all functions within a Python module?
1.2
0
0
161
33,345,609
2015-10-26T12:19:00.000
0
0
0
1
python,testing,virtual-machine,squish
33,345,750
1
false
0
0
You can install an SSH server on the Windows machine and then use the paramiko module to communicate with it, or you can use WMI to remotely execute commands on the Windows system.
1
1
0
I'll get right into the point. The problem is: my localmachine is a Windows OS I launched a Windows Virtual Machine (through VirtualBox) that awaits some python commands on my localhost I have a python script that I execute and after the VM is started, I want the script to open inside the VM, a cmd.exe process after cmd.exe opens up, the python script should send to that cmd.exe, inside the VM, the delete command "del c:\folder_name" I did searched on various issue on StackOverflow that suggested me using subprocess.call or subprocess.Popen, but unfortunately none of them worked in my case, because I'm sure that all of the solutions were meant to work on localhost, and not inside a virtual machine, how I want it. Any suggestions? Thank you. PS: I'm trying to do this without installing other packages in host/guest. UPDATE: Isn't there any solution, that will allow me to do this without installing something on VM ?!
Send a DOS command in a virtual machines cmd through python script
0
0
0
615
33,345,960
2015-10-26T12:30:00.000
0
0
0
0
heroku,python-requests
33,346,079
1
false
1
0
Have your host-based firewall throttle those requests. Depending on your setup, you can also add Nginx into the mix, which can throttle requests too.
1
0
0
I am experiencing a once per 60-90 minute spike in traffic that's causing my Heroku app to slow to a crawl for the duration of the spike - NewRelic is reporting response times of 20-50 seconds per request, with 99% of that down to the Heroku router queuing requests up. The request count goes from an average of around 50-100rpm up to 400-500rpm Looking at the logs, it looks to me like a scraping bot or spider trying to access a lot of content pages on the site. However it's not all coming from a single IP. What can I do about it? My sysadmin / devops skills are pretty minimal. Guy
How to deal with excessive requests on heroku
0
0
1
46
33,346,591
2015-10-26T13:08:00.000
1
0
0
0
python,pandas,numpy,nan,difference
42,668,700
5
false
0
0
When we are dealing with normal dataframes, the only difference is the handling of NaN values: count() does not include NaN values while counting rows, whereas size() does. But if we are using these functions with groupby, then to get correct results from count() we have to associate a numeric field with the groupby to get the exact number of rows per group, while for size() there is no need for this kind of association.
1
131
1
That is the difference between groupby("x").count and groupby("x").size in pandas ? Does size just exclude nil ?
What is the difference between size and count in pandas?
0.039979
0
0
60,325
33,349,515
2015-10-26T15:28:00.000
0
0
1
0
python
33,349,868
2
true
0
0
I would advise you to type "python /?" in a command prompt and see which possibilities you have (e.g. python -v gives verbose output on the import statements in your code). That way you might find a way of getting more information without needing to modify your source code. Obviously I don't know if the information you get from python -v is the one you're looking for.
1
0
0
I am analyzing an existing Python code that runs into hundreds of line. Adding log per line to capture flow / understanding run time processing is painful - but then the current application logging is very poor by just using print data. Hence for support purpose these are not enough as its difficult to understand without looking into code. What is the best way of change these unstandard logs into at least something like - Class Name - Method Name - Error Details additional more details With small modifications - I also run into risk of breaking the flow if not dealt carefully. Please let me know which application mechanism logging would be the best?
Application logs for support and analysis purpose
1.2
0
0
38
33,349,846
2015-10-26T15:41:00.000
1
0
0
0
python,django,django-forms
33,350,576
2
false
1
0
I don't think there is a premade solution for you. You'll have to do one of two things: When the form is submitted, examine the value of the field in question. If it is equal to the default value, then ignore the result of has_changed and save it. (Be aware that this could result in duplicate items being saved, depending on your schema.) When the form is submitted, search for an existing record with those field values. If no such record exists, save it. Otherwise do nothing. (If these records contain a last-updated timestamp, you might update that value, depending on your application.)
1
1
0
When looking for this feature, one is flooded under answers pointing toward the Form initial member. Yet, by its design, this approach would not save anything to database if the user does not change at least one value in the form, because the has_changed method returns False when only initial values are submitted. Then, if one were to override has_changed to always return true, the behaviour would be to try to save forms for which no value (nor initial nor user input) is found. Is it possible to have a real default value in Django: a value that the user can change if he wants, but that would still save the form to DB when the form is only submitted with default values ?
How to set a default value for a Django Form Field, that would be saved even in the absence of user initiated changes?
0.099668
0
0
1,119
33,350,153
2015-10-26T15:55:00.000
0
0
1
0
python,pandas,pytables
52,785,994
2
false
0
0
You really have to close the open store manually. There is no other way. Why? PyTables uses a file registry to track open files. A destructor for this file registry is registered with Python's atexit module, which is called when the Python interpreter exits. If this destructor method is called, it will print out the names of every open file. This feature is not configurable.
1
4
1
Is there a way to prevent PyTables from printing out Closing remaining open files:path/to/store.h5...done? I want to get rid of it just because it is clogging up the terminal. I'm using pandas.HDFStore if that matters.
Preventing PyTables (in Pandas) from printing "Closing remaining open files..."
0
0
0
1,063
33,350,869
2015-10-26T16:30:00.000
0
0
0
1
python,google-app-engine
33,351,187
1
false
1
0
It might help to include the code in question, but try putting a \ before the +, that's what can escape things within quotes in python, so it might work here. E.g.: C\+
1
0
0
I'm using full text search and I'd like to search for items that have a property with value 'C+' is there a way I can escape the '+' Char so that this search would work?
Escape characters in Google AppEngine full text search
0
0
0
92
33,352,298
2015-10-26T17:48:00.000
3
0
1
0
python,multithreading
33,352,887
2
true
0
0
python is not very intelligent about switching between threads Python threads work a certain way :-) if I use a thread lock where only 1 thread can run at a time... will that lock actually make anything run slower Err, no because there is nothing else runnable, so nothing else could run slower. If all but 1 threads are locked, will the python interpreter know not to context switch? Yes. The kernel knows which threads are runnable. If no other threads can run then logically speaking (as far as the thread is concerned) the python interpreter won't context switch away from the only runnable thread. The thread doesn't know when it has been switched away from (how can it, it isn't running).
1
5
0
I watched an excellent presentation on the GIL, and how when running in the interpreter only 1 single thread can run at a time. It also seemed that python is not very intelligent about switching between threads. If i am threading some operation that only runs in the interpreter, and it is not particularly CPU heavy, and I use a thread lock where only 1 thread can run at a time for this relatively short interpreter-bound operation, will that lock actually make anything run slower? as opposed to if the lock were not necessary and all threads could run concurrently. If all but 1 threads are locked, will the python interpreter know not to context switch? Edit: by 'making things run slower' I mean if python is context switching to a bunch of locked threads, that will (maybe) be a performance decrease even if the threads don't actually run
How does python handle thread locking / context switching?
1.2
0
0
6,723
33,352,574
2015-10-26T18:01:00.000
0
0
1
0
python,windows,tkinter,console,py2exe
33,352,759
1
false
0
1
import subprocess subprocess.Popen("application.exe", shell = True)
1
0
0
I have made a GUI for my application. All scripts are in Python (2.7) and the GUI is created with Tkinter. I work on Linux, but I needed this app to be executable on Windows. So I've used py2exe, to create an executable. After a while it is working almost perfectly, but I have the following problem: Somewhere in the application, I need to call external programs, namely ImageMagick and LaTeX. I use the commands convert, pdflatex, simply by importing os module and running os.system(build), where build = 'convert page.pdf page.gif'etc. When those commands are called from the *.exe application the console pops up (meaning a console window opens up for a split of a second and closes again). Is there a way, to prevent this behaviour? It does not interrupt the application, but it is ugly and not a desired behaviour. [Note] I chose not to add any samples, since there are lots of files and other content, which, I think, is not related to the problem. I could however try to post minimal (not)working example. But maybe it is not needed. Thanks!
How to stop console from poping up when command is called from python GUI?
0
0
0
893
33,353,130
2015-10-26T18:33:00.000
0
0
0
0
python,animation,expression,maya,curves
33,353,440
1
false
0
0
I used this and it seems to work: cmds.scaleKey("animCurve.x", valuePivot=0, valueScale=0.42)
1
0
0
If I wanted to move it correctly. I would select the whole curve and type *=.42 in the graph editor second box and the whole curve will move up. How do I do this using python. Do I use expression?
How do you move an animation curve in maya python?
0
0
0
594
33,353,398
2015-10-26T18:50:00.000
0
0
0
1
python,design-patterns,command-line-interface,restful-architecture,n-tier-architecture
33,353,621
1
true
1
0
Since your app is not very complex, I see 2 layers here: ServerClient: it provides an API for remote calls and hides the details. It knows how to access the HTTP server, provide auth, deal with errors etc. It has methods like do_something_good() which anyone may call without caring whether it is a remote method or not. CommandLine: it uses optparse (or argparse) to implement the CLI, it may support history etc. This layer uses ServerClient to access the remote service. Neither layer knows anything about the other's internals (only the protocol, i.e. the list of known methods). This will allow you to use something other than HTTP REST and the CLI will still work; or you may replace the CLI with batch files and HTTP should still work.
1
0
0
I am going to write some HTTP (REST) client in Python. This will be a Command Line Interface tool with no gui. I won't use any business logic objects, no database, just using an API to communicate with the server (using Curl). Would you recommend me some architectual patterns for doing that, except for Model View Controller? Note: I am not asking for a design patterns like Command or Strategy. I just want to know how to segregate and decouple abstraction layers. I think using MVC is pointless regarding the fact of not having a business logic - please correct me if I'm wrong. Please give me your suggestions! Do you know any examples of CLI projects (in any language, not necessarily in Python) that are well maintained and with clean code? Cheers
Architectual pattern for CLI tool
1.2
0
0
355
33,353,968
2015-10-26T19:25:00.000
0
0
0
1
python,azure
42,435,442
4
false
0
0
BlobService is the function you are trying to call, but it is not defined anywhere. It should be defined when you call from azure.storage import *. It is probably not being imported, due to a difference in package versions. Calling from azure.storage.blob import * should work, as it is then imported correctly.
3
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0
0
0
3,863
33,353,968
2015-10-26T19:25:00.000
0
0
0
1
python,azure
33,354,366
4
false
0
0
It's been a long time since I did any Python, but BlobService is in the azure.storage.blob namespace I believe. So I don't think your from azure.storage import * is pulling it in. If you've got a code sample in a book which shows otherwise it may just be out of date.
3
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0
0
0
3,863
33,353,968
2015-10-26T19:25:00.000
1
0
0
1
python,azure
33,355,053
4
false
0
0
James, I figured it out. I just changed from azure.storage import * to from azure.storage.blob import * and it seems to be working.
3
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0.049958
0
0
3,863
33,354,977
2015-10-26T20:28:00.000
0
1
0
0
python
33,374,589
2
false
0
0
ser = serial.Serial(3) # open COM4 (port numbering is zero-based) print ser.name # check which port was really used; 'ser' is the serial object. Here is the Python code to open a specific serial port.
1
0
0
I am using Python v2.7 on Win 7 PC. I have my robot connected to computer and COM 4 pops out in device manager. My plan is to send API to robot through COM 4. Here is the question, how could python identify which serial port is for which device? So far, I can list all the available ports in python, but I need to specifically talk to COM 4 to communicate with robot. As a newie, any help would be appreciated.
Serial Port Identity in Python
0
0
1
64
33,355,400
2015-10-26T20:52:00.000
2
0
1
0
windows,python-3.x,qr-code,barcode,zbar
33,369,080
6
false
0
0
Forget wrestling with all of the wrappers. The easiest solution for me was to simply use import os os.system(r'D:\Winapps\Zbar\bin\zbarimg.exe -d d:\Winapps\Zbar\Examples\barcode.png') Worked instantly. Hope this helps anyone else struggling with that issue.
1
2
0
I am pretty new to programming, and have never used Zbar before. I am trying to write a simple script that will allow me to import Zbar and use it to decode a barcode image. I already have a script set up to decode text from images that uses Pytesseract and Tesseract OCR, but I need to be able to decode barcodes as well. I have Windows 7 32 bit, and and am using Python 3.4. I have already installed Zbar and have used it from the command line successfully to decode their barcode sample. I have tried using >pip install zbar, but I keep getting the error: "fatal error C1083: Cannot open include file: 'zbar.h': No such file or directory error: command 'C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN\cl.exe' failed with exit status 2" Getting the pytesseract OCR was painless but I have wasted a lot of time on this barcode portion of it, any help or alternatives would be much appreciated.
How do I import Zbar into my Python 3.4 script?
0.066568
0
0
15,721
33,359,740
2015-10-27T03:59:00.000
4
0
0
0
python,random
37,483,626
7
false
0
0
random.randrange(0, 2) returns a random integer, either 0 or 1, not a float. For a float between 0 and 1, such as 0.3452, use random.random() instead.
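For the float the question actually asks for, the standard-library calls are random.random() and random.uniform(); a quick sketch:

```python
import random

# A float in the half-open interval [0.0, 1.0), e.g. 0.3452... (varies per call)
x = random.random()

# The same idea for an arbitrary range; both endpoints may be included
y = random.uniform(0, 1)

# randrange(0, 2), by contrast, only ever yields the integers 0 or 1
z = random.randrange(0, 2)
```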
1
207
1
I want a random number between 0 and 1, like 0.3452. I used random.randrange(0, 1) but it is always 0 for me. What should I do?
Random number between 0 and 1?
0.113791
0
0
517,528
33,368,621
2015-10-27T12:58:00.000
4
0
1
1
python,flask,tornado,python-asyncio
33,369,914
1
false
1
0
No. It is possible to run Flask on Tornado's WSGIContainer, but since Flask is limited by the WSGI interface it will be unable to take advantage of Tornado's asynchronous features. gunicorn or uwsgi is generally a much better choice than Tornado's WSGIContainer unless you have a specific need to run a Flask application in the same process as native Tornado RequestHandlers.
1
3
0
We have a project that uses Flask + Gunicorn (sync workers). This has worked well for a long time; however, I recently learned that asyncio (Python 3.5) supports async I/O in the standard library. Before asyncio, there were already both Twisted and Tornado async servers. So I wonder whether Flask can use the async feature of Tornado, since Gunicorn supports a tornado worker class.
Can Flask use the async feature of Tornado Server?
0.664037
0
0
1,151
33,370,301
2015-10-27T14:12:00.000
1
0
0
0
python,django-cms
33,373,378
1
true
1
0
They are stored in the database you configured for Django. By default you can inspect the pages in the administration interface at /admin/cms/page/. In the database the table for them is by default named cms_page.
1
1
0
I am creating a new page in django CMS, and I want to see where the HTML pages get stored. I tried to find them everywhere, including in site-packages, but I was not able to. Can anyone tell me where a page gets stored when I create it in the django CMS GUI?
Where the new pages get stored after creation in django CMS?
1.2
0
0
408
33,376,218
2015-10-27T18:50:00.000
1
0
1
0
java,php,python
33,376,321
1
false
0
0
Nope ... when handling multiple users in a single runtime environment, the runtimes store these variables in completely separate variable tables and namespaces, or add custom identifiers to the variable names. In the case of a webserver, each person that visits a site (i.e., each connection) typically gets a completely separate instance of the runtime with its own memory footprint ... it would be like running multiple copies of the same program. If you have two Word docs open, typing in one doesn't change the other one, does it? The same principle is true here as well ...
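The separate-frames point can be illustrated in miniature: the same handler invoked for two concurrent "users" keeps its local variable in its own call frame, so same-named locals never collide. The threads below stand in for real server workers, which is a simplification:

```python
import threading

results = {}

def handle_request(user, value):
    # Each invocation gets its own stack frame, so this 'x' is
    # private to the call even when calls overlap in time.
    x = value * 2
    results[user] = x

t1 = threading.Thread(target=handle_request, args=("user1", 10))
t2 = threading.Thread(target=handle_request, args=("user2", 99))
t1.start(); t2.start()
t1.join(); t2.join()
```

After both threads finish, each user sees only the value computed in its own call, even though the variable name x was "the same" in both.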
1
2
0
I am very new to programming. I don't even know whether the question itself is right, so I will explain what I mean plainly. User 1 logs in to the website, so all the objects and so on are created by the program code as written. Now user 2 logs in to the website at the same time, say, so the same code runs in parallel with user 1's request. That code must contain almost the same variables as for user 1, right? How do PHP, Python, and Java differentiate these same variables running in memory at the same time? You can't have the same variable with different values in memory at the same time, right?
How do compilers differentiate the same variable created from different instantiations
0.197375
0
0
33
33,378,422
2015-10-27T21:02:00.000
8
0
0
0
python,amazon-web-services,boto3,amazon-iam,amazon-cloudfront
57,297,264
4
false
1
0
Just add the profile to the session configuration before the client call: boto3.session.Session(profile_name='YOUR_PROFILE_NAME').client('cloudfront')
1
214
0
I am using the Boto 3 python library, and want to connect to AWS CloudFront. I need to specify the correct AWS Profile (AWS Credentials), but looking at the official documentation, I see no way to specify it. I am initializing the client using the code: client = boto3.client('cloudfront') However, this results in it using the default profile to connect. I couldn't find a method where I can specify which profile to use.
How to choose an AWS profile when using boto3 to connect to CloudFront
1
0
1
163,599
33,381,808
2015-10-28T02:16:00.000
0
0
1
0
python,printf
33,381,822
5
false
0
0
print appends a \n by default, so you don't need to add a \n yourself.
2
0
0
I am reading files in a folder in Python. I want to print each file's content separated by a single empty line. So, after the for loop I am adding print("\n"), which is adding two empty lines after each file's content. How can I resolve this problem?
print only one single empty line
0
0
0
73
33,381,808
2015-10-28T02:16:00.000
1
0
1
0
python,printf
33,382,026
5
false
0
0
From help(print) (I think you're using Python 3): print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False) Prints the values to a stream, or to sys.stdout by default. Optional keyword arguments: file: a file-like object (stream); defaults to the current sys.stdout. sep: string inserted between values, default a space. end: string appended after the last value, default a newline. flush: whether to forcibly flush the stream. So print()'s default end argument is \n. That means you don't need to add a \n as in print('\n'); that prints two newlines. Just use print(). By the way, if you're using Python 2, a bare print statement does the same thing.
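The end behaviour described above can be verified directly by capturing stdout with the standard library:

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print("a")           # default end='\n' appends one newline
    print("b", end="")   # suppress the trailing newline entirely
    print()              # a bare print() emits just the newline

# "a\n" from the first call, "b" with no newline, then "\n"
assert buf.getvalue() == "a\nb\n"
```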
2
0
0
I am reading files in a folder in Python. I want to print each file's content separated by a single empty line. So, after the for loop I am adding print("\n"), which is adding two empty lines after each file's content. How can I resolve this problem?
print only one single empty line
0.039979
0
0
73
33,382,383
2015-10-28T03:24:00.000
3
0
1
0
python,file-writing
33,382,404
2
false
0
0
Option 3: Write the lines as you generate them. Writes are already buffered; you don't have to buffer manually as in options 1 and 2.
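Option 3 in practice: open the file once and write each line as the generator produces it. The file object buffers internally, so per-line write() calls do not translate into per-line disk writes. The generator below is a made-up stand-in for the nested loops and conditions that would build the real Makefile:

```python
import os
import tempfile

def generate_rules():
    # Stand-in for the nested loops/conditions that emit Makefile lines.
    for target in ("all", "clean"):
        yield "{0}:\n".format(target)
        yield "\t@echo {0}\n".format(target)

path = os.path.join(tempfile.mkdtemp(), "Makefile")
with open(path, "w") as f:
    for line in generate_rules():
        f.write(line)  # buffered by the file object; flushed on close
```

Because the lines stream straight from the generator into the buffered file, at no point does the full Makefile text need to live in memory.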
2
1
0
I need to auto-generate a somewhat large Makefile using a Python script. The number of lines is expected to be relatively large. The routine for writing to the file is composed of nested loops, a whole bunch of conditions, etc. My options: Start with an empty string and keep appending the lines to it, finally writing the huge string to the file using file.write (pro: only a single write operation; con: the huge string takes up memory). Start with an empty list and keep appending the lines to it, finally using file.writelines (pro: a single write operation (?); con: the huge list takes up memory). Write each line to the file as it is constructed (pro: no large memory consumption; con: a huge number of write operations). What is the idiomatic/recommended way of writing a large number of lines to a file?
Pythonic way to write a large number of lines to a file
0.291313
0
0
1,175