Dataset columns (dtype and value/length range):
Q_Id (int64): 337 to 49.3M
CreationDate (string): length 23 to 23
Users Score (int64): -42 to 1.15k
Other (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
System Administration and DevOps (int64): 0 to 1
Tags (string): length 6 to 105
A_Id (int64): 518 to 72.5M
AnswerCount (int64): 1 to 64
is_accepted (bool): 2 classes
Web Development (int64): 0 to 1
GUI and Desktop Applications (int64): 0 to 1
Answer (string): length 6 to 11.6k
Available Count (int64): 1 to 31
Q_Score (int64): 0 to 6.79k
Data Science and Machine Learning (int64): 0 to 1
Question (string): length 15 to 29k
Title (string): length 11 to 150
Score (float64): -1 to 1.2
Database and SQL (int64): 0 to 1
Networking and APIs (int64): 0 to 1
ViewCount (int64): 8 to 6.81M
38,378,578
2016-07-14T15:40:00.000
0
1
0
0
python,linux,vlc,libvlc
38,379,334
1
false
0
0
First, make sure you only have one copy of libvlc and that it's current. You can see what options VLC is using to play the file by clicking the "show more options" in the "Open Media" dialog.
1
1
0
I'm creating a python script that plays mp3/cdg karaoke files. I can open these files and they play with no problems when using the standalone VLC gui, however when I use the python libvlc library to open them, they play for a few frames then stop while the audio continues. I'm almost certain that this is because the gui has some configuration set while the python implementation is defaulting to something else, but I'm not sure what it is. My question is: A) is there some way to just "export" the settings from the gui to command line arguments so I can pass them to the python implementation? B) If not, is there some way to compare the settings each one is using?
Video plays in VLC gui but not when using python libvlc
0
0
0
251
38,385,630
2016-07-14T23:13:00.000
0
0
1
1
python-3.x,gdbm
70,973,357
1
false
0
0
I ran into a similar issue, though I am not sure which platform you are using. The steps are: look for a file named _gdbm.cpython-<python version>-<platform>.so (for example _gdbm.cpython-39-darwin.so). Once you find its path, check which Python version appears in that directory path, then try creating a venv with that same Python version and execute your code. Before doing this, make sure you have the appropriate gdbm version installed on the host machine; the package name is different on macOS than on Ubuntu.
1
5
0
In the Python 3 docs, it states that the dbm module will use gdbm if it's installed. In my script I use from dbm.gnu import open as dbm_open to try and import the module. It always returns with the exception ImportError: No module named '_gdbm'. I've gone to the gnu website and have downloaded the latest version. I installed it using ./configure --enable-libgdbm-compat, make; make check; make install, and it installed with no errors. I can access the man page for the library but I still can't import it into Python 3.5.2 (Anaconda). How do I install the Python module for gdbm?
Python: How to Install gdbm for dbm.gnu
0
1
0
2,327
38,386,154
2016-07-15T00:28:00.000
11
0
1
0
python,intellij-idea
44,640,387
1
false
0
0
Go to Settings | Editor | Inspections | PEP 8 Coding Style Violations and add E111 to the "Ignore Errors" list.
1
7
0
I have set my PEP8 indentation to 2 in IntelliJ. However, I still get the PEP8 indentation warning as not being of length 4. I went to 'Setting -> Editor -> Python ' and turned off the indentation warning. However, the inspector still continues to give me warnings.
PEP8 indentation warning in IntelliJ
1
0
0
2,114
38,387,676
2016-07-15T04:03:00.000
0
0
0
1
python,macos,subprocess,osx-elcapitan
38,387,870
3
false
0
0
Thank you everyone for the quick replies. I have been playing with the subprocess module, and I have gotten this to work: import subprocess; m = subprocess.Popen(["say", "hello"]); print(m). The Popen call is a quick way to get this working. However, this only works on my Mac, and I need it to work on my Raspberry Pi for an interactive feature in my code. (I am using the Pi Camera and infrared sensors for a robot that wheels around and, when it senses people in front of it, says "Hey! Please move out of my way!")
1
0
0
While Mac OSX 10.11.5 (El Capitan) has the "say" command to speak in a system generated voice, or so to say, is there any command that is similar for Python that can be used in Python? If Subprocess is utilized, please explain on how to use that.
While Mac OSX has the say command to speak, or so to say, is there any command that is similar for Python?
0
0
0
449
38,388,799
2016-07-15T05:57:00.000
1
0
0
0
python,list,python-2.7,sorting
38,389,853
3
false
0
0
You can use string.split(): sort with string.split(',')[1] as the key, which is the first timestamp field of each row.
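A minimal sketch of that approach, assuming each row is a plain string in the comma-separated format shown in the question:

```python
# Sort rows of "MAC, first_timestamp, second_timestamp, rssi, ..." strings
# by the first timestamp field (index 1 after splitting on commas).
rows = [
    "81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,",
    "3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,",
]

# "YYYY-MM-DD HH:MM:SS" strings sort correctly as text, so no datetime parsing is needed.
rows.sort(key=lambda row: row.split(',')[1].strip())

for row in rows:
    print(row)
```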
1
5
1
Overview: I have data something like this (each row is a string): 81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M 3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M 61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M And I want to sort each row based on the first timestamp that is present in the each String, which for these four records is: 2016-07-14 01:28:59 2016-07-14 06:25:32 2016-07-14 08:26:45 2016-07-14 14:29:13 Now I know the sort() method but I don't understand how can I use here to sort all the rows based on this (timestamp) quantity, and I do need to keep the final sorted data in the same format as some other service is going to use it. I also understand I can make the key() but I am not clear how that can be made to sort on the timestamp field.
Sort A list of Strings Based on certain field
0.066568
1
0
358
38,399,901
2016-07-15T15:28:00.000
1
0
0
0
python,networking,operating-system,share,listdir
38,400,861
1
false
0
0
I'm silly. I figured it out. I just took the target of the Local_Reports folder and wrote os.listdir(r"\\03vs-cmpt04\Local_Reports"). This just searched the network for the folder and listed the correct output: [Daemons, Reports, SQL]
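As a quick illustration (using the host and share names from the answer; adjust them to your environment):

```python
import os

# A raw string avoids having to escape the backslashes in the UNC path.
share = r"\\03vs-cmpt04\Local_Reports"
print(os.listdir(share))  # e.g. ['Daemons', 'Reports', 'SQL']
```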
1
1
0
I have a network shared folder with file path :C\\Local_Reports. I would like to use os.listdir(":C\\Local_Reports"), but the output is ['desktop.ini', 'target.lnk']. This is not the correct output obviously. The correct output would be [Daemons, Reports, SQL]. How do I successfully access this?
listdir of a network shared folder in Python
0.197375
0
1
792
38,400,918
2016-07-15T16:23:00.000
2
1
1
0
java,python,oop
38,400,988
1
false
0
0
The short answer is yes and no. One of the key differences I see in Python compared to Java and C# is that in Python, functions don't have to be in a class. In fact, operations don't even have to be in a function. Java and C# both have two main rules: All code must be in a class. Operations are generally required to be in functions. This isn't true in Python. In fact, you can write a very basic Python script that's not even in a function. Java does not offer that flexibility - sometimes, that can be very positive because those strict rules help keep the code organized. Classes in Python operate in a manner that's very similar to Java and C#, but they aren't necessarily applied in the same way because of the rules above.
1
0
0
My background in programming is mostly Java. It was the first language I learned, and the language I spent the most amount of time with (I then moved on to C# for a little, and eventually C in school). A while back I tried dabbling with Python, and it seemed so different to me (based on my experience with Java). Anyways, now I'm doing much more Python stuff, and I've learned that Python is considered an OOP language with classes and such. I was just curious as to whether these attributes of Python function similarly to their Java counterparts. Please understand, that I'm asking this at a very rudimentary level. I'm still a "new" programmer in the sense that I just know how to write code, but don't know much about the various intricacies and subtleties with various languages and types of programming. Thanks EDIT Sorry, I realize that this was incredibly broad, but I really wasn't looking for specifics. I guess the root of my question stems from my curiosity about the purpose/role of classes in Python to begin with. From my experience, and what I've seen (and this is by no means extensive or considered to be an accurate representation of the actual uses of Python), most of the time, Python is used without classes or any sort of OOP. As to how that relates to Java, I merely wanted to know if there was a special use or scenario for classes in Python. Essentially, since classes are required in Java, and I was brought up on Java, classes seemed like a norm to me. However, when I got to Python, I noticed that a lack of classes was the norm. This led me to wonder whether classes in Python had some sort of special significance. I apologize if this is no more clear than my original post, or if any of this sounds confusing/inaccurate.
Do classes in Python work the same way as classes in Java?
0.379949
0
0
57
38,401,090
2016-07-15T16:33:00.000
0
1
0
0
python,amazon-web-services,aws-lambda,continuous-deployment
38,442,640
1
false
1
0
No, there is no way to accomplish this. Your Lambda function is always provisioned as a whole from the latest zipped package you provide (or S3 bucket/key if you choose that method).
1
0
0
I am new to AWS Lambda, I have phantomjs application to run there. There is a python script of 5 kb and phantomjs binary which makes the whole uploadable zip to 32MB. And I have to upload this bunch all the time. Is there any way of pushing phantomjs binary to AWS lambda /bin folder separately ?
Does AWS Lambda allows to upload binaries separately to avoid re-upload
0
0
0
345
38,402,995
2016-07-15T18:33:00.000
5
0
0
0
python,pandas,dataframe,amazon-redshift,psycopg2
42,047,026
7
false
0
0
Assuming you have access to S3, this approach should work: Step 1: Write the DataFrame as a CSV to S3 (I use the AWS SDK boto3 for this). Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a CREATE TABLE script and push it to Redshift to create an empty table. Step 3: Send a COPY command from your Python environment to Redshift to copy data from S3 into the empty table created in step 2. Works like a charm every time. Step 4: Before your cloud storage folks start yelling at you, delete the CSV from S3. If you see yourself doing this several times, wrapping all four steps in a function keeps it tidy.
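A hedged sketch of those four steps; the bucket, key, table definition, connection parameters, and IAM role are placeholders, and it assumes boto3 plus a psycopg2 connection to Redshift:

```python
import io
import boto3
import psycopg2

def dataframe_to_redshift(df, bucket, key, table, conn_params, iam_role):
    # Step 1: write the DataFrame as CSV to S3.
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

    conn = psycopg2.connect(**conn_params)
    cur = conn.cursor()
    # Step 2: create the empty target table (columns hard-coded here for brevity).
    cur.execute("CREATE TABLE IF NOT EXISTS {} (id INT, value VARCHAR(256))".format(table))
    # Step 3: COPY from S3 into the empty table.
    cur.execute(
        "COPY {} FROM 's3://{}/{}' IAM_ROLE '{}' CSV".format(table, bucket, key, iam_role)
    )
    conn.commit()
    cur.close()
    conn.close()
    # Step 4: clean up the temporary CSV so it does not linger in S3.
    boto3.client("s3").delete_object(Bucket=bucket, Key=key)
```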
1
32
1
I have a dataframe in Python. Can I write this data to Redshift as a new table? I have successfully created a db connection to Redshift and am able to execute simple sql queries. Now I need to write a dataframe to it.
How to write data to Redshift that is a result of a dataframe created in Python?
0.141893
1
0
57,271
38,405,109
2016-07-15T21:13:00.000
0
1
1
0
python,pytest,python-module
38,405,572
1
false
0
0
Ok, the problem was simply that the cwd is prepended to the PYTHONPATH. sys.path.pop(1) (0 is the tests dir, prepended by pytest) resolved the behavior.
1
0
0
This may seem like a simple question, but I haven't found an answer that explains the behavior I'm seeing. Hard to provide a simple repro case but I basically have a package structure like this: a.b.c a.b.utils I have one project that has files in a.b.c. (let's call this aux_project) and another that has files in a.b.d, a.b.utils, etc (call it main_project). I'm trying to import a.b.utils inside pytest tests in the first project, using tests_require. This does not work because a.b is for some reason sourced from inside aux_project/a/b/__init__.pyc instead of the virtualenv and it shadows the other package (i.e. this a.b only has a c in it, not d or utils). This happens ONLY in the test context. In ipython I can load all packages fine, and they are correctly loaded from virtualenv. What's weirder is that if I simply delete the actual directory, the tests do load the pycs from virtualenv and everything works (I need that directory, though) python==2.7.9 What is going on?
How are python module paths translated to filesystem paths?
0
0
0
17
38,405,345
2016-07-15T21:36:00.000
7
0
1
0
python,jupyter-notebook
38,405,533
1
true
0
0
The only way I can see to do it would be to join the cells, and then put the entire thing in a for/while loop.
1
10
0
I've got a Jupyter Notebook with a couple hundred lines of code in it, spread across about 30 cells. If I want to loop through 10 cells in the middle (e.g. using a For Loop), how do you do that? Is it even possible, or do you need to merge all the code in your loop into one cell?
How to loop through multiple cells in Jupyter / iPython Notebook
1.2
0
0
18,735
38,405,454
2016-07-15T21:48:00.000
1
0
1
0
python,performance,intel,python-multiprocessing,intel-mkl
38,405,541
1
true
0
0
In the Spyder menu choose Preferences, then click Console and go to the Advanced settings tab. From there choose the Python interpreter that came with the Intel Distribution.
1
0
0
I've just installed the new Intel Distribution for Python because I need some performance improvements with my Skull Canyon NUC, but I don't understand how to use all the packages/modules modified by Intel. I usually use Anaconda Spyder as my main IDE, how can I "tell" to Spyder to not use the Anaconda standard/included packages/modules instead of the new Intel ones? Thank you for your answers!
Intel Distribution for Python and Spyder IDE
1.2
0
0
892
38,405,633
2016-07-15T22:06:00.000
1
0
1
0
python,python-3.x
38,405,691
2
false
0
0
A dictionary would work. datetime objects can be used as dictionary keys, and can be compared against one another in ranges.
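A small sketch of that idea; the object values and dates are made up for illustration:

```python
from datetime import datetime

# Map each date to the objects associated with it.
objects_by_date = {
    datetime(2016, 7, 1): ["a"],
    datetime(2016, 7, 15): ["b", "c"],
    datetime(2016, 8, 2): ["d"],
}

def objects_in_range(start, end):
    """Return all objects whose date key falls in [start, end]."""
    return [obj
            for date, objs in sorted(objects_by_date.items())
            if start <= date <= end
            for obj in objs]

print(objects_in_range(datetime(2016, 7, 1), datetime(2016, 7, 31)))  # ['a', 'b', 'c']
```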
1
3
0
I have some objects which are associated with dates. I would like to be able to get objects for a given date range, as well as for a single date. What is the best data structure in python to do this? I believe in some languages this would be achieved using a 'Curve'.
What is the best data structure to store objects indexed by date?
0.099668
0
0
782
38,412,298
2016-07-16T14:45:00.000
1
0
0
0
xml,python-2.7,xslt,pdf-generation,xsl-fo
38,413,882
2
false
1
0
XSL FO requires a formatting engine to create print output like PDF from XSL FO input. A freely available one is Apache FOP. There are several other commercial products as well. I know of no XSL FO engines written in Python, though some have Python interfaces.
1
1
0
Is there a simple way to get a PDF from an xml with an xsl-fo? I would like to do it in python. I know how to make an html from an xml & xsl, but I haven't found a code example to get a PDF. Thanks
xml + xslfo to PDF python
0.099668
0
1
1,430
38,413,099
2016-07-16T16:19:00.000
3
0
0
0
python,django,migration,squash
38,413,258
1
true
1
0
If you don't have any important data in your test or production databases, you can use a fresh initial migration and it will be an appropriate solution. I've used this trick many times and it works for me. A few thoughts: sometimes you first need to create migrations for one of your local applications and then for all the others; to be sure everything is OK, commit your migrations and back up your DB before you run ./migrate with an empty DB. NOTE: to speed up your tests you can try running in-memory tests and/or running tests with SQLite if that's possible.
1
1
0
I have an app with 35 migrations which take a while to run (for instance before tests), so I would like to squash them. The squashmigrations command reduces the operations from 99 to 88, but it is still far from optimal. This is probably due to the fact that I have multiple RunPython operations preventing Django from optimizing other operations. All these RunPython operations are useless in the squashed migration because the database is empty. In Django 1.10 the elidable parameter will allow to skip them in this case, but still, a lot of clutter remains. What I had in mind for the squashed migration was closer to the initial migrations Django generates, hence my question: Is it advisable to use a fresh initial migration as a squashed version of a long list of migrations? How would you do that?
Using a fresh initial migration to squash migrations
1.2
0
0
459
38,415,774
2016-07-16T21:25:00.000
86
0
1
0
python,matplotlib,ipython,axis,figure
38,433,583
2
false
0
0
plt.gcf() returns the current figure; plt.gca() returns the current axes.
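For example, to add a finer grid to the figure already on screen in ipython (the minor-tick choice here is just one possibility):

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator

fig = plt.gcf()   # the current figure
ax = plt.gca()    # the current axes

# Add a fine-grained grid on both major and minor ticks.
ax.xaxis.set_minor_locator(AutoMinorLocator())
ax.yaxis.set_minor_locator(AutoMinorLocator())
ax.grid(which='major', linestyle='-')
ax.grid(which='minor', linestyle=':')
fig.canvas.draw()
```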
1
37
1
I want to add more fine grained grid on a plotted graph. The problem is all of the examples require access to the axis object. I want to add specific grid to already plotted graph (from inside ipython). How do I gain access to the current figure and axis in ipython ?
ipython : get access to current figure()
1
0
0
55,702
38,416,381
2016-07-16T22:56:00.000
0
0
0
0
python,sympy,logical-operators
38,424,476
2
false
0
0
If you use the operators &, |, and ~ for and, or, and not, respectively, you will get a symbolic boolean expression. I also recommend using sympy.true and sympy.false instead of 1 and 0.
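A short sketch of what that looks like, using the symbol names from the question:

```python
import sympy
from sympy import symbols

a, b = symbols('a b')

# Build the boolean expression with &, |, ~ instead of and/or/not.
expr = a & ~b

# Substitute sympy.true / sympy.false rather than 1 / 0.
data = {a: sympy.false, b: sympy.true}
print(expr.subs(data))   # False
```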
2
1
1
I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'. 'and' and 'or' work well: a = 0 b = 1 a and b 0 a or b 1 But 'not' introduces a 2nd term 'False' in addition to the desired value: a, not b (0, False) When processed by Sympy (where 'data' (below) provides realworld values to substitute for the variables a and b): algo_raw = 'a, not b' algo_sym = sympy.sympify(algo_raw) algo_sym.subs(data) It chokes on 'False'. I need to disable the 2nd term 'False' such that I get only the desired output '0'.
Need to disable Sympy output of 'False' (0, False) in logical operator 'not'
0
0
0
35
38,416,381
2016-07-16T22:56:00.000
1
0
0
0
python,sympy,logical-operators
38,416,665
2
false
0
0
a, not b doesn't do what you think it does. You are actually asking for, and correctly receiving, a tuple of two items: a, and not b. As the result shows, a is 0 and not b is False (1 being truthy, and the not of a truthy value being False). The fact that a happens to be the same value as the result you want doesn't mean it's giving you the result you want as the first item of the tuple and you just need to throw away the second item! That would be equivalent to just writing a. What you want to do, I assume, is write your condition as a and not b.
2
1
1
I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'. 'and' and 'or' work well: a = 0 b = 1 a and b 0 a or b 1 But 'not' introduces a 2nd term 'False' in addition to the desired value: a, not b (0, False) When processed by Sympy (where 'data' (below) provides realworld values to substitute for the variables a and b): algo_raw = 'a, not b' algo_sym = sympy.sympify(algo_raw) algo_sym.subs(data) It chokes on 'False'. I need to disable the 2nd term 'False' such that I get only the desired output '0'.
Need to disable Sympy output of 'False' (0, False) in logical operator 'not'
0.099668
0
0
35
38,416,671
2016-07-16T23:53:00.000
1
0
1
0
python,code-injection
38,416,692
1
true
0
0
This depends entirely on what you do with the input from the webform. In normal use the form gets encoded as x-www-form-urlencoded or json -- Both formats which can be deserialized into a python dictionary completely safely. Of course, they could be deserialized in unsafe ways too -- Make sure that you use libraries that are dedicated to handling this properly (e.g. urlparse or json). From there, whether the input is safe depends entirely on what the application does with it. (e.g. it is not safe if the application uses eval with input based on the decoded dict). As for automated testing for this -- I don't know of any way to accomplish this, but these problems are generally pretty easy to mitigate by just following normal best-practices (don't eval code you don't trust, etc. etc.)
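A tiny illustration of the point about deserialization versus eval; the payload string is a harmless made-up example:

```python
import json

payload = '{"name": "__import__(\'os\').system(\'echo pwned\')"}'

# Safe: json.loads only builds Python data structures, it never executes code.
data = json.loads(payload)
print(data["name"])   # just a string, nothing is executed

# Dangerous (do NOT do this with untrusted input): eval interprets the text as code.
# eval("__import__('os').system('echo pwned')")   # would actually run the shell command
```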
1
3
0
I've been doing some penetration testing on my own site and have been doing a lot of research on common vulnerabilities. SQL injection comes up a lot but I was wondering, could there possibly be such a thing as python injection? Say for example that a web form submitted a value that was entered in a dictionary in some python backend app. Would it be possible that if that input wasn't handled correctly that python code could be injected and executed in the app? Is there a way to easily and harmlessly test for this vulnerability - if it is indeed a potential vulnerability? I can't seem to find many web resources on this topic.
Python Injection - is there such a thing?
1.2
0
0
6,700
38,418,140
2016-07-17T05:18:00.000
0
1
0
0
javascript,python,html,raspberry-pi2
38,418,381
1
false
1
0
I can suggest a way to handle that situation, but I'm not sure how well it will suit your scenario. Since you are trying to use a wifi network, I think it would be better to use an SQL server to store, sequentially, the commands the vehicle needs to follow from the web interface. Make the vehicle read the database to check whether there are new commands to be executed and, if there are, execute them in order. That way you can divide the work into two parts and handle the project more easily: handling user input via the web interface to control the vehicle, and then making the vehicle read the requests and execute them. Hope this helps you in some way. Cheers!
1
0
0
For a college project I'm tasked with getting a Raspberry Pi to control an RC car over WiFi; the best way to do this would be through a web interface for the sake of accessibility (one of the key reqs for the module). However I keep hitting walls: I can make a python script control the car, but doing this through a web interface has proven to be difficult to say the least. I'm using an Adafruit PWM Pi Hat to control the servo and ESC within the RC car and it only has python libraries as far as I'm aware, so it has to be within python. If there is some method of passing variables from javascript to python that may work, but in a live environment I don't know how reliable it would be. Any help on the matter would prove most valuable, thanks in advance.
How do I control a python script through a web interface?
0
0
1
229
38,419,528
2016-07-17T08:57:00.000
1
0
0
0
php,python,web-scraping
38,419,736
3
false
1
0
Ethics: Using a bot to get at the content of sites can be beneficial to you and the site you're scraping. You can use the data to refer to content of the site, like search engines do. Sometimes you might want to provide a service to users that the original website doesn't offer. However, sometimes scraping is used for nefarious purposes: stealing content, using the computing resources of others, or worse. It is not clear what intention you have, so helping you might be unethical. I'm not saying it is, but it could be. I don't understand 'AucT' saying it is bad practice and then giving an answer; what is that all about? Two notes: Search results take more resources to generate than most other webpages, so they are especially vulnerable to denial-of-service attacks. I run several sites, and I have noticed that a large amount of traffic is caused by bots; it is literally costing me money. Some sites have more traffic from bots than from people. It is getting out of hand, and I had to invest quite a bit of time to get the problem under control. Bots that don't respect bandwidth limits are blocked by me, permanently. I do, of course, allow friendly bots.
1
0
0
This is my first attempt at scraping. There is a website with a search function that I would like to use. When I do a search, the search details aren't shown in the website url. When I inspect the element and look at the Network tab, the request url stays the same (method:post), but when I looked at the bottom, in the Form Data section, I clicked view source and there were my search details in url form. My question is: If the request url = http://somewebsite.com/search and the form data source = startDate=09.07.2016&endDate=10.07.2016 How can I connect the two to pull data for scraping? I'm new to scraping, so if I'm going about this wrong, please tell me. Thanks!
Web-scraping advice/suggestions
0.066568
0
1
167
38,419,662
2016-07-17T09:14:00.000
2
0
0
0
python,gtk3,pygobject
38,427,465
2
false
0
1
Gtk.Fixed doesn't support setting the z-order. It's not meant for overlapping widgets, so the z-order is arbitrary and probably depends on what order the widgets were added in. It's not clear from your question what your intention is; if you are purposely overlapping widgets, use Gtk.Overlay instead. If you are not intending to overlap them, then you should use a more versatile container such as Gtk.Grid which will take the widgets' sizes into account.
2
1
0
I'm using Gtk.Fixed and a viewport putting images that, sometimes, overlaps. How can I set the z-index like in html?
PyGObject: Gtk.Fixed Z-order
0.197375
0
0
611
38,419,662
2016-07-17T09:14:00.000
1
0
0
0
python,gtk3,pygobject
38,430,240
2
true
0
1
Like ptomato said, I had to use Gtk.Overlay. I used an overlay with 3 Gtk.Layouts as layers, and it works fine.
2
1
0
I'm using Gtk.Fixed and a viewport putting images that, sometimes, overlaps. How can I set the z-index like in html?
PyGObject: Gtk.Fixed Z-order
1.2
0
0
611
38,423,926
2016-07-17T17:19:00.000
0
0
1
0
python,python-3.x,dictionary,memory
38,423,989
1
false
0
0
I think the only way of doing that is via IPC: you can use sockets or pipes, and with either of those you have to serialise the dict with pickle or json, which can take several seconds if the dictionary is big. If you don't want to do that, you need some kind of shared memory; multiprocessing allows that, but only for basic datatypes.
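One hedged way to do the socket-plus-pickle variant without hand-rolling the serialisation is multiprocessing.connection, which works between independently started processes; the address and authkey below are placeholders:

```python
# server.py (run inside process p1, which owns the dict)
from multiprocessing.connection import Listener

shared = {"answer": 42, "items": [1, 2, 3]}

with Listener(("localhost", 6000), authkey=b"secret") as listener:
    with listener.accept() as conn:
        conn.send(shared)        # pickled and sent over the socket
```

And the reading side, run as a completely separate process:

```python
# client.py (run inside process p2)
from multiprocessing.connection import Client

with Client(("localhost", 6000), authkey=b"secret") as conn:
    shared = conn.recv()         # the dict sent by p1
    print(shared["answer"])
```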
1
2
0
I have two separate python processes running on a linux server, p1 and p2, how to read a dict of p1 from p2 ? Two processes are independent, so I can't use multiprocessing based approach, and because of slow performance, I don't want to use socket communication or file based approach. my python version is 3.5.1
How to share a dict among separate python processes?
0
0
0
338
38,424,826
2016-07-17T18:55:00.000
0
0
1
0
python,module,modular
38,424,845
1
false
0
0
"How do I overcome this error?" Every file that wants to use a certain module has to import that module. "If I have to reimport the module from within camClass.py..." Yes, you do. "Where do I do it? In the init function? Or just at the top of the script?" At the top of the module is the standard place. (Modules do not have an "init function".)
1
0
0
So I'm making a modular program for a security system in python, but I can't access modules I've imported in main.py from other scripts. That is, say I have main.py that imports the random module. I use import camClass to import a script containing an object class from camClass.py in the same directory. When I try to use the random module from within the class in camClass.py, it is undefined. How do I overcome this error? If I have to reimport the module from within camClass.py, where do I do it? In the init function? Or just at the top of the script? Thanks
Access to imports made from separate files in python?
0
0
0
42
38,426,349
2016-07-17T22:01:00.000
4
0
0
0
python,numpy,linear-algebra,orthogonal
47,932,683
7
false
0
0
If you want a non-square matrix with orthonormal column vectors, you can create a square one with any of the mentioned methods and drop some columns.
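As a sketch of that idea, using a QR decomposition of a random Gaussian matrix and keeping only the first k columns (the function name is made up):

```python
import numpy as np

def random_orthonormal_columns(n, k):
    """Return an n x k matrix whose columns are orthonormal (requires k <= n)."""
    a = np.random.randn(n, n)
    q, _ = np.linalg.qr(a)     # q is a random n x n orthogonal matrix
    return q[:, :k]            # dropping columns keeps the remaining ones orthonormal

m = random_orthonormal_columns(5, 3)
print(np.allclose(m.T.dot(m), np.eye(3)))   # True
```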
1
30
1
Is there a method that I can call to create a random orthonormal matrix in python? Possibly using numpy? Or is there a way to create a orthonormal matrix using multiple numpy methods? Thanks.
How to create random orthonormal matrix in python numpy
0.113791
0
0
28,284
38,429,271
2016-07-18T05:38:00.000
0
0
1
0
python,flask,psutil
38,521,678
1
false
1
0
For future reference, I found a way to do this using Elasticsearch and psutil: I indexed the psutil values into Elasticsearch, then used the date-range and date-histogram aggregations. Thanks!
1
0
0
I'm currently writing a web application using Flask in python that generates the linux/nix performances(CPU, Disk Usage, Memory Usage). I already implemented the python library psutil. My question is how can I get the values of each util with date ranges. For example: Last 3 hours of CPU, Disk Usage, Memory usage. Sorry for the question I'm a beginner in programming.
Python/Flask : psutil date ranges
0
0
0
172
38,431,782
2016-07-18T08:20:00.000
0
0
0
0
python,python-3.x,openmdao
38,446,190
2
false
0
0
Worked for me after updating to 1.7.1 via pip on Fedora v20. The command with conventional naming is: view_tree(top)
1
1
0
Is openmdao GUI available on 1.7.0 version? And if yes, how to run it? I have found, how to run the GUI on the 0.10.7 version, but it doesn't work on the 1.7.
Running openmdao 1.7.0 GUI
0
0
0
287
38,432,336
2016-07-18T08:51:00.000
1
0
0
1
python,pipe,subprocess,buffer
38,433,285
1
false
0
0
Answering your questions: No, it will not all be stored in memory; the child process will get stuck on the write operation after exceeding the pipe-max-size limit (cat /proc/sys/fs/pipe-max-size). The child process will write about 1M before it gets stuck, until the parent process reads a block of data; after that the child process writes the next 1024 bytes sequentially, as fast as they are read. Yes, in the case of blocking IO the process will be blocked by the OS when the write syscall is made. In the case of non-blocking IO I expect the write syscall to return EAGAIN or another system-specific error. So the application will actually be stuck in the write system call, waiting for pipe buffer space to become available. That doesn't necessarily mean it will hang: for example, if the application implements some kind of internal queue and has more than one thread, it can continue to work and add data to its queue while the writing-out thread waits for the buffer.
1
0
0
I've the below piece of code to read data from a child process as its generated and write to a file. from subprocess import Popen, PIPE proc = Popen('..some_shell_command..', shell=True, stdout=PIPE) fd = open("/tmp/procout", "wb") while True: data = proc.stdout.read(1024) if len(data) == 0: break fd.write(data) fd.close() 'Popen' default bufsize is 0 => unbuffered. What will happen if for some reason the write-to-file operation experiences a huge latency? Assuming that the child process is supposed to produce 500GB of data, do all those data get stored in memory until the parent reads them all? (OR) Will the child process wait for 1024 bytes of data to be read by the parent before writing the next 1024 bytes to stdout? (OR) Will the child process wait after the OS pipe buffer gets filled and once the parent reads, the child resumes writing again? (OR) ??
python subprocess pipe unbuffered behaviour
0.197375
0
0
546
38,433,102
2016-07-18T09:26:00.000
1
0
0
0
python-2.7,robotframework
46,287,269
1
true
0
0
You could use the keyword Wait Until Keyword Succeeds and just keep repeating the next keyword you want to use until the download is done. Or you could set the implicit wait time to be higher, so the webdriver waits for an implicit amount of time before it executes another keyword.
1
0
0
In a test case is used the keyword sleep 2s and this is obviously too slow, so i would like to replace with the wait keyword. Thing is, that it is used for a download. So the user downloads a file and then is used the sleep 2s in order to give some time to the Robot Framework to complete the download. But I cannot use wait until element is visible, wait until page contains, or wait until page contains element because nothing changes on the page :/ Any ideas? How you handle this? Thank you in advance!
Replace sleep with wait keyword on RobotFramework
1.2
0
1
5,431
38,433,584
2016-07-18T09:49:00.000
3
0
0
0
python,matplotlib
38,433,637
3
true
0
0
Assuming you know where the curve begins, you can just use plt.plot((x1, x2), (y1, y2), 'r-') to draw the line from the point (x1, y1) to the point (x2, y2). In your case x1 and x2 will be the same, and only y1 and y2 change, since it is a straight vertical line that you want.
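A sketch that finds where the curve begins (the first non-NaN y value) and drops a dashed vertical line from the curve down to the x axis; the data here is invented:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
y = np.where(x > 3, np.sin(x), np.nan)   # curve is NaN for small x

plt.plot(x, y)

# Index of the first point where the curve actually has data.
start = np.argmax(~np.isnan(y))
x0, y0 = x[start], y[start]

# Dashed vertical line from the x axis up to the start of the curve.
plt.plot((x0, x0), (0, y0), 'k--')
plt.show()
```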
1
1
1
I have a curve of some data that I am plotting using matplotlib. The small value x-range of the data consists entirely of NaN values, so that my curve starts abruptly at some value of x>>0 (which is not necessarily the same value for different data sets I have). I would like to place a vertical dashed line where the curve begins, extending from the curve, to the x axis. Can anyone advise how I could do this? Thanks
Python - Plotting vertical line
1.2
0
0
4,256
38,436,215
2016-07-18T12:01:00.000
12
0
1
0
ipython,ipython-notebook,jupyter-notebook
38,519,000
2
false
0
0
The short answer is in a couple of ways, the slightly longer answer is Yes - but you might not get what you expect! Really long answer: The explanation is that when you are working in a notebook, now called a jupyter notebook of course, your work is stored in a series of cells each of which has one or more lines of code or markdown while when you are working in a console all of your work is a series of lines of python code. From within a console session you can save, using %save some or all of your work to one or more python files that you can then paste, import, etc, into notebook cells. You can also save using %save -r to .ipy files your work including the magics as magics rather than the results of magics that again you can use from within your notebook later. You can also use the %notebook magic to save all of your current history in one of an ipynb json file or a python .py text file with the -e export flag. However, it is not clear from the documentation if the history will end up in a single cell, one cell per command or some other division. A little testing suggests one cell per numbered line of your console, so a single command or definition, per cell. Personally I will stick with outputting anything useful into python files using the %save command - or better yet start a notebook when I think I might be doing something that I would need later.
1
12
0
I was using iPython command line interface and after some operations I want to save my operation history to a notebook file. But I was not using iPython notebook from the beginning. Can I still make it?
Can I save ipython command line history to a notebook file?
1
0
0
6,666
38,439,493
2016-07-18T14:36:00.000
1
0
1
1
python,filepath
38,439,753
2
true
0
0
If you are running from the project folder, set a variable (PRJ_PATH) to os.getcwd() and use it when opening the file, e.g. open(os.path.join(PRJ_PATH, 'data', 'data.txt')). If you are running from a subdirectory, set PRJ_PATH to os.path.join(os.getcwd(), '..') and then open the file the same way.
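A variant of that idea, sketched with the directory names from the question but computing the project root from the script's own location (via __file__) rather than the current working directory:

```python
import os

# TestFile.py lives in PythonProject/SubDirectory, so the project root is one level up.
PRJ_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))

data_file = os.path.join(PRJ_PATH, "DataSubDirectory", "data.txt")
with open(data_file) as f:
    print(f.read())
```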
2
0
0
I have a project folder, PythonProject. Within that Project folder I have a 2 subdirectories: SubDirectory & DataSubDirectory. Within SubDirectory, I have a python file, TestFile.py which opens an external file, datafile.txt, contained in DataSubDirectory. Currently, I open this file as such; open(..\\DataSubDirectory\\data.txt) Is there a method by which I can set any file paths within my TestFile.py to be relative to the parent project folder, so that if the file were moved to another Sub Directory, or placed in the parent directory even, I would not get a filepath error? The effect being that any file opened as such; open(data\\data.txt) would actually be opened as open(PythonProject\\data\\data.txt), and not relative to whichever directory it is found?
Set a Subdirectory's File's Working Path Relative to A Parent Directory in Python
1.2
0
0
192
38,439,493
2016-07-18T14:36:00.000
0
0
1
1
python,filepath
38,439,755
2
false
0
0
You can use PythonProject = os.path.dirname(os.path.realpath(sys.argv[0])) to set the PythonProject Path
2
0
0
I have a project folder, PythonProject. Within that Project folder I have a 2 subdirectories: SubDirectory & DataSubDirectory. Within SubDirectory, I have a python file, TestFile.py which opens an external file, datafile.txt, contained in DataSubDirectory. Currently, I open this file as such; open(..\\DataSubDirectory\\data.txt) Is there a method by which I can set any file paths within my TestFile.py to be relative to the parent project folder, so that if the file were moved to another Sub Directory, or placed in the parent directory even, I would not get a filepath error? The effect being that any file opened as such; open(data\\data.txt) would actually be opened as open(PythonProject\\data\\data.txt), and not relative to whichever directory it is found?
Set a Subdirectory's File's Working Path Relative to A Parent Directory in Python
0
0
0
192
38,440,916
2016-07-18T15:43:00.000
1
0
1
1
python,compression,gzip,tar,gzipstream
38,442,022
2
false
0
0
A gzipped tar archive is not an archive of compressed files. It is a compressed archive of files. In contrast, a zip archive is an archive of compressed files. An archive of compressed files is a better archive format, if you want to be able to extract (or update) individual files. But it is an inferior compression technique; unless the component files are mostly quite large or already compressed, compressing the files individually results in quite a bit more overhead. Since the primary use case of gzipped tar archives is transmission of complete repositories, and the entire archive is normally decompressed at once, the fact that it is not possible to decompress and extract an individual file [Note 1] is not a huge cost. On the other hand, the improved compression ratio brings a noticeable benefit. To answer the question, the only way to combine multiple gzipped tar archives is to decompress all of them, combine them into a single tar archive, and then recompress the result; option 1 in the original post. Notes Of course, you can decompress the entire archive and extract a single file from the decompressed stream; it is not necessary to save the result of the decompression. The tar utility will do that transparently. But under the hood, the archive itself is being decompressed. It is not even possible to list the contents of a gzipped tar archive without decompressing the entire archive.
1
0
0
I have a group of about 10 gzipped files that I would like to archive into a single file in order for a user to download. I am wondering what the best approach to this would be. Gunzip everything, then tar-gz the complete set of files into a myfiles.tar.gz? Tar the set of gz files into a myfiles.tar. Option 1 seems to have unnecessary steps as the original files are already compressed. Option 2 seems confusing because there is no indication that the files inside the archive are indeed compressed. How do people usually deal with archiving a group of already compressed files? I am using Python (if it matters), but I am doing the operations via shell executions.
Archiving a group of gzipped files
0.099668
0
0
147
38,442,104
2016-07-18T16:50:00.000
1
0
1
1
python,macos
38,447,997
2
false
0
0
Don't! The names /Library and /System suggest that these are OS-level directories. Nobody installed them; macOS and other Unix-like systems use them by default for system-level services (and they should not even be manually upgraded, or system stability may suffer). For all practical purposes, you should just prepend your installation directory to a system variable called PATH in your $HOME/.bashrc file. Then, whenever YOU use python, the system will always find the first occurrence of python on PATH, which is your python. Open a terminal and enter the following command (once in a lifetime): echo "PATH={a-path-to-the-folder-containing-your-executable-python}:\$PATH" >> $HOME/.bashrc To explain it: the quoted command prepends your installation directory as the first place to search for executable files, and >> $HOME/.bashrc writes this command to the last line of .bashrc, which is a file that sets up your terminal environment automatically upon login.
2
0
0
I installed python using MacPorts and I am satisfied with the result, but found that there are other versions of Python installed in other directories, and I can not remember how they were installed; it's been five years that I have used this notebook and perhaps I installed them by other means a few years ago. I tried to remove all references to the extra Pythons, beyond those installed with MacPorts, but I don't think I managed it; I tried to remove the directories with the command rm -rf, but even using sudo rm -rf I had no success. The old installations are in these directories: /System/Library/Frameworks/Python.framework/Versions/ /Library/Python/ How do I discover the origin of these installations and remove them permanently?
Remove old Python Installation in MAC OS
0.099668
0
0
1,355
38,442,104
2016-07-18T16:50:00.000
1
0
1
1
python,macos
38,442,289
2
true
0
0
Don't remove the system Pythons. They may be used by other programs. (I don't know if anything on OS X actually uses them, but it's best to keep them.) Instead, just make sure that your MacPorts bin directory (at /opt/local/bin) is first on your $PATH.
2
0
0
I installed python using MacPorts and I am satisfied with the result, but found that there are other versions of Python installed in other directories, and I can not remember how they were installed; it's been five years that I have used this notebook and perhaps I installed them by other means a few years ago. I tried to remove all references to the extra Pythons, beyond those installed with MacPorts, but I don't think I managed it; I tried to remove the directories with the command rm -rf, but even using sudo rm -rf I had no success. The old installations are in these directories: /System/Library/Frameworks/Python.framework/Versions/ /Library/Python/ How do I discover the origin of these installations and remove them permanently?
Remove old Python Installation in MAC OS
1.2
0
0
1,355
38,442,161
2016-07-18T16:53:00.000
2
0
0
0
python,lda,gensim,topic-modeling
38,469,723
2
false
0
0
Like the documentation says, there is no natural ordering between topics in LDA. If you have your own criterion for ordering the topics, such as frequency of appearance, you can always retrieve the entire list of topics from your model and sort them yourself. However, even the notion of "top ten most frequent topics" is ambiguous, and one could reasonably come up with several different definitions of frequency. Do you mean the topic that has been assigned to the largest number of word tokens? Do you mean the topic with the highest average proportions among all documents? This ambiguity is the reason gensim has no built-in way to sort topics.
1
0
1
In the official explanation, there is no natural ordering between the topics in LDA. As for the method show_topics(), if it returns num_topics <= self.num_topics, the subset of all topics is therefore arbitrary and may change between two LDA training runs. But I want to find the top ten most frequent topics of the corpus. Is there any other way to achieve this? Many thanks.
How to print top ten topics using Gensim?
0.197375
0
0
836
38,446,706
2016-07-18T22:05:00.000
2
0
0
0
python,tensorflow,protocol-buffers,tensorboard
38,447,138
1
false
0
0
Overall, there isn't first class support for your use case in TensorFlow, so I would parse the merged summaries back into a tf.Summary() protocol buffer, and then filter / print data as you see fit. If you come up with a nice pattern, you could then merge it back into TensorFlow itself. I could imagine making this an optional setting on the tf.train.SummaryWriter, but it is probably best to just have a separate class for console-printing out interesting summaries. If you want to encode into the graph itself which items should be summarized and printed, and which items should only be summarized (or to setup a system of different verbosity levels) you could use the Collections argument to the summary op constructors to organize different summaries into different groups. E.g. the loss summary could be put in collections [GraphKeys.SUMMARIES, 'ALWAYS_PRINT'], but another summary could be in collection [GraphKeys.SUMMARIES, 'PRINT_IF_VERBOSE'], etc. Then you can have different merge_summary ops for the different types of printing, and control which ones are run via command line flags.
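A hedged sketch of the collections idea, written against the TF 1.x summary API (tf.summary.scalar rather than the older tf.scalar_summary mentioned in the question); the collection name 'ALWAYS_PRINT' is arbitrary:

```python
import tensorflow as tf

loss = tf.constant(0.5)
accuracy = tf.constant(0.9)

# Put the loss summary in an extra collection so it can be fetched (and printed) separately.
tf.summary.scalar("loss", loss,
                  collections=[tf.GraphKeys.SUMMARIES, "ALWAYS_PRINT"])
tf.summary.scalar("accuracy", accuracy)   # only in the default SUMMARIES collection

all_summaries = tf.summary.merge_all()
print_summaries = tf.summary.merge(tf.get_collection("ALWAYS_PRINT"))

with tf.Session() as sess:
    merged, to_print = sess.run([all_summaries, print_summaries])
    # Parse the serialized protobuf and print only the values intended for the console.
    summary = tf.Summary()
    summary.ParseFromString(to_print)
    for value in summary.value:
        print(value.tag, value.simple_value)
```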
1
2
1
Tensorflow's scalar/histogram/image_summary functions are very useful for logging data for viewing with tensorboard. But I'd like that information printed to the console as well (e.g. if I'm a crazy person without a desktop environment). Currently, I'm adding the information of interest to the fetch list before calling sess.run, but this seems redundant as I'm already fetching the merged summaries. Fetching the merged summaries returns a protobuf, so I imagine I could scrape it using some generic python protobuf library, but this seems like a common enough use case that there should be an easier way. The main motivation here is encapsulation. Let's stay I have my model and training script in different files. My model has a bunch of calls to tf.scalar_summary for the information that useful to log. Ideally, I'd be able to specify whether or not to additionally print this information to console by changing something in the training script without changing the model file. Currently, I either pass all of the useful information to the training script (so I can fetch them), or I pepper the model file with calls to tf.Print
Print out summaries in console
0.379949
0
0
1,812
38,447,361
2016-07-18T23:21:00.000
0
0
1
0
python,windows,sockets,python-3.x
38,458,160
2
false
0
0
I'm not sure why you're using socket.share, or why you think it would improve performance. You say that you're already threading. A web server is going to be IO bound. Most of your time will be spent: negotiating a TCP/IP connection between client and server; finding the information on disk (memory? Sweet, faster!); reading from the disk (memory?); and writing to the socket. You should also profile your code before you go about making improvements. What is the actual holdup? Making the connection? Reading from disk? Sending back to the client? Unless you've already done some tremendous improvements, I'm pretty sure that the actual TCP/IP negotiation is a few orders of magnitude faster than getting your information from the disk.
1
2
0
I'm writing a file cache server to hold copies of static files for a web server. Whenever a thread in the web server needs a static file, it opens a socket connection to the cache server and sends it the result of socket.share() + the name of the file it wants. The cache server uses the result of socket.share to gain access to the http client via socket.fromshare and sends the contents of a static file. Then it closes its copy of the http client socket, and the thread's connection to it. I'm wondering if using socket.detach instead of socket.close will automagically improve performance? The documentation for socket.detach says this: Put the socket object into closed state without actually closing the underlying file descriptor. The file descriptor is returned, and can be reused for other purposes. Do I have to explicitly use the returned file descriptors somehow when new sockets are created by the cache server, or does the socket module know about existing reusable file descriptors?
Can I reuse a socket file handle with socket.fromshare?
0
0
1
543
38,453,730
2016-07-19T08:52:00.000
4
0
0
0
python,session,login,form-submit,pyramid
38,453,860
1
false
1
0
I recommend passing a parameter like /login?next=pageA.html. If the login fails, you can forward your next parameter to /login again, even though the referer now points to /login. Then, when the user successfully logs in, you can redirect them to pageA.html, which is held in your next parameter. You will indeed need to check that next is a valid value, as someone could copy-paste or try to tamper with this parameter.
1
3
0
I'm adding authentication to an existing pyramid project. The simplest form that I'm currently trying (will be expending later) is for all pages to raise HTTPForbidden. The exception view is /login, which will ask for login details and, on success, return HTTPFound with request.referer as the location. So far so good, this does what I want, which is bringing users back to the page they were trying to access when the login page interrupted them. Let's call this Page A. The login page is a simple HTML form with a submit button. However if the user mistypes username or password, I want to return to the login page with an error message saying "Wrong password" or similar. When that happens, request.referer is now the login page instead of Page A. How do I 'store' Page A (or rather its URL) so that, when the user eventually succeeds in logging in, they find themselves back on Page A? Is the session used for things like this, and are there non-session ways of implementing it? I don't (yet) have a session for this simple page, and am trying to avoid adding different components in one pass.
How to keep request.referer value from redirect after a failed login
0.664037
0
0
611
38,459,234
2016-07-19T12:55:00.000
0
0
0
0
python
38,460,641
1
false
0
0
As it turns out, the data is not stored in the Glueviz session file, but rather loaded fresh from the original data source file each time the saved session is opened. Hence the solution is simple: replace the data source file with a new file (of the same type) containing the updated data. The updated data file must have the exact same name, be in the exact same location, and, I assume, must change only the values within the source data file, not the amount of data, the column titles, or other aspects of the original file. Having done that, reopen Glueviz and reload the session file, and the graphs in Glueviz should update with the new data.
1
1
1
I am using Glueviz 0.7.2 as part of the Anaconda package, on OSX. Glueviz is a data visualization and exploration tool. I am regularly regenerating an updated version of the same data set from an external model, then importing that data set into Glueviz. Currently I can not find a way to have Glueviz refresh or update an existing imported data set. I can add a new data set, ie a second more updated version of the data from the model as a new import data set, but this does not replace the original, and does not enable the new data to show in the graphs set up in Glueviz in a simple way. It seems the only solution to plot the updated data, is to start a new session, and needing to take some time to set up all the plots again. Most tedious! As a python running application, Glueviz must be storing the imported data set somewhere. Hence I thinking a work around would be to replace that existing data with the updated data. With a restart of Glueviz, and a reload of that saved session, I imagine it will not know the difference and simply graph the updated data set within the existing graphs. Problem solved. I am not sure how Glueviz as a python package stores the data file, and what python application would be the best to use to update that data file.
Python Glueviz - is there a way to replace ie update the imported data
0
0
0
464
38,463,258
2016-07-19T15:52:00.000
7
0
0
0
python,openpyxl,xlrd,xlwt,xlutils
38,463,454
2
true
0
0
This is not possible because XLSX files are zip archives and cannot be modified in place. In theory it might be possible to edit only a part of the archive that makes up an OOXML package but, in practice this is almost impossible because relevant data may be spread across different files.
1
1
0
I have an Excel file (xlsx) that already has lots of data in it. Now I am trying to use Python to write new data into this Excel file. I looked at xlwt, xlrd, xlutils, and openpyxl; all of these modules require you to load the data of my excel sheet, then apply changes and save to a new Excel file. Is there any way to just change the data in the existing excel sheet rather than load the workbook or save to new files?
Modifying and writing data in an existing excel file using Python
1.2
1
0
5,172
38,467,133
2016-07-19T19:40:00.000
0
0
1
0
javascript,python,c,variables
38,467,357
2
false
0
0
Computers have gotten so staggeringly powerful that shorter or longer variable names make very little to no difference. But poorly named variables and functions, in any language, can cost a whole lot more money and waste way more time than any perceived performance hit you might take from using a longer name. The extra nanosecond of processing costs nobody anything, but the 45 minutes it takes a senior developer to figure out what the code is even doing costs everybody something.
1
2
0
It seems to me that C programs should tend to have longer variable names, since the names will be destroyed by the compiler and won't make any difference to the performance or binary size of an optimized executable. I don't understand why in fact the opposite happens, in newer, interpreted languages where the name of the variable/method can actually affect performance, variables and methods routinely have much longer names compared to those in C. Is it just a product of the times, or is there a real reason behind this?
Variable name length in C vs newer languages
0
0
0
237
38,468,052
2016-07-19T20:36:00.000
1
0
1
0
python,math,statistics,average,mean
38,468,131
1
false
0
0
This is not possible to solve formulaically in the general case. If you let the number of items of A, B, and C be a, b, and c, respectively, this situation gives you the equations: a + b + c = 27 and 160.17*a + 162.06*b + 140*c = 27 * 156.95. That is two equations, but you are trying to solve for three variables. If you really need to know the answer and are sure that a, b, and c are all fairly small (under 30) integers, you could loop through all possibilities to brute-force it, but I would advise against it.
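If you do decide to brute-force it despite the caveat, a minimal sketch with the numbers from the question, picking the combination with the smallest rounding error:

```python
prices = {"A": 160.17, "B": 162.06, "C": 140.0}
total_n = 27
avg_price = 156.95

best = None
for a in range(total_n + 1):
    for b in range(total_n + 1 - a):
        c = total_n - a - b
        mean = (a * prices["A"] + b * prices["B"] + c * prices["C"]) / total_n
        err = abs(mean - avg_price)
        if best is None or err < best[0]:
            best = (err, a, b, c)

print(best)   # smallest error and the corresponding counts for A, B, C
```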
1
0
0
I am trying to work out the n for three categories from the mean average and total number. I basically have the below: Price n A 160.17 ? B 162.06 ? C 140 ? Total n: 27 Avg price: 156.95 For this one it comes out as A - 3, B - 18, C - 6. I basically found this out by trial and error but was wondering if there is a more targeted way? Due to rounding errors it may also not come out exactly, so ideally I would be after the minimum error. I mostly work in Python but can happily run with pseudo-code or any ideas people have.
Reverse estimating 3 numbers from mean and n
0.197375
0
0
42
38,468,540
2016-07-19T21:07:00.000
4
0
0
0
javascript,python
38,468,623
3
false
1
0
If you are using only javascript and don't feel like a framework is the solution, you'd be better off rewriting your Python script in JavaScript. These two languages have a lot in common and most of the concepts transfer. Calling Python from JavaScript would most likely not work that well. Again, unless you share your Python script (which is encouraged on SO, because a text-only question does not quite fit here), all answers are opinion based.
2
2
0
I'm wondering if there's a good way for me to incorporate Python scripts into my current website. I have a personal website & my own server that I have been working with for a while. So far it's just been html / css / javascript. I have made a Python script in the past that uses another website's API to retrieve something that I would like to display on my website. I've only used Python from the terminal to take input and spit out results. Is there a way for me to run a Python script from javascript through Ajax to get some content back? I don't really want to use a framework like Django or Flask because I feel as though those are mostly for entire projects. I only want to use one Python script on one page for my website. Is this even something I should do? Any advice would be great.
How can I incorporate Python scripts into my website? Do I need a framework?
0.26052
0
0
184
38,468,540
2016-07-19T21:07:00.000
2
0
0
0
javascript,python
38,468,783
3
false
1
0
I completely agree with you about Django, but I think you can give Flask a chance; it is really light and can be used for many purposes. Anyway, if you want to call a Python script you need a way to call it. I think you need a "listener" for the script, for example a service or a web service (for this reason I think Flask can be a really easy solution). Be careful about calling the script: a web service can be reachable from the frontend, but this cannot be done from a "standard" script. My suggestion is to take a look at Flask; it is lighter than you think.
2
2
0
I'm wondering if there's a good way for me to incorporate Python scripts into my current website. I have a personal website & my own server that I have been working with for a while. So far it's just been html / css / javascript. I have made a Python script in the past that uses another website's API to retrieve something that I would like to display on my website. I've only used Python from the terminal to take input and spit out results. Is there a way for me to run a Python script from javascript through Ajax to get some content back? I don't really want to use a framework like Django or Flask because I feel as though those are mostly for entire projects. I only want to use one Python script on one page for my website. Is this even something I should do? Any advice would be great.
How can I incorporate Python scripts into my website? Do I need a framework?
0.132549
0
0
184
38,474,210
2016-07-20T06:50:00.000
0
0
0
0
python,python-2.7,python-3.x,subprocess
38,478,933
2
false
0
0
This worked for me: os.system("rm umengchannel_316_豌豆荚")
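If typing the literal name is the problem, another option is to look the file up by its ASCII prefix in a directory listing and pass whatever the OS reports back to os.remove — a rough sketch, assuming the file sits in the current directory:
import os

for name in os.listdir('.'):
    # match on the known ASCII prefix so the non-ASCII tail never has to be typed
    if name.startswith('umengchannel_316'):
        os.remove(name)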
1
0
0
I've got a file named umengchannel_316_豌豆荚 I want to delete this file.. I tried the following: os.remove(), os.unlink() , shutil.move() but nothing seems to work.. Are there any other approaches to this problem?
How to delete a file with invalid name using python?
0
0
0
85
38,476,379
2016-07-20T08:41:00.000
2
0
1
0
python,intellij-idea,pycharm,conda
38,732,023
1
true
0
0
You need to change your Project Interpreter to point to $CONDA_PREFIX/bin/python where $CONDA_PREFIX is the location of your conda env. The $CONDA_PREFIX environment location you're looking for should be in the second column of the output from conda info --envs.
1
1
0
I am using Python with Conda to manage my environment and libraries. Does anyone know how to get IntelliJ (with the Python plugin) or PyCharm to add the libraries in my Conda environment to my project? It only pulls in site packages even when I select ~/anaconda/bin/python as my Python Interpreter.
How can I get IntelliJ to index libraries in my Python Conda environment?
1.2
0
0
563
38,480,595
2016-07-20T11:54:00.000
1
0
1
0
python,pycharm,exit
38,529,814
2
false
0
0
Solved it using a really bad workaround. I used all functions that are related to exit in Python, including SIG* functions, but uniquely, I did not find a way to catch the exit signal when Python program is being stopped by pressing the "Stop" button in PyCharm application. Finally got a workaround by using tkinter to open an empty window, with my program running in a background thread, and used that to close/stop program execution. Works wonderfully, and catches the SIG* signal as well as executing atexit . Anyways massive thanks to @scrineym as the link really gave a lot of useful information that did help me in development of the final version.
1
1
0
Basically I am writing a script that can be stopped and resumed at any time. So if the user uses, say PyCharm console to execute the program, he can just click on the stop button whenever he wants. Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this? I have already tried atexit.register() to no avail. Also, how do I make sure that an ongoing function is completed before the program can exit? Thanks in advance
Wait and complete processes when Python script is stopped from PyCharm console?
0.099668
0
0
639
38,480,676
2016-07-20T11:58:00.000
0
0
0
0
user-interface,qpython
67,756,714
1
false
0
1
You can use a web app GUI or qsla4AApp; I recommend using qsla4AApp.
1
0
0
What module must I use and where is the documentation for it? I think it is possible because I could to make a GUI using mobile basic on Nokia.
How to make GUI using QPython 3?
0
0
0
3,521
38,483,026
2016-07-20T13:43:00.000
1
0
0
0
python,flask,flask-login
65,905,459
3
false
1
0
I need to make this clear: this is the reason why you should use request_loader with flask_login. There will be a lot of @login_required decorators from flask_login used in your api to guard request access, and a request needs to pass that auth check. There will also be a lot of uses of current_user imported from flask_login; your app needs them to let the request act as the identity of the current user. There are two ways to achieve this with flask_login. Using user_loader makes the request pass @login_required. It is often used for UI logins from a browser. It stores session cookies in the browser and uses them to authenticate later, so you only need to log in once and the session keeps for a while. Using request_loader also satisfies @login_required, but it is usually used with an api_key or basic auth, for example by other apps interacting with your flask app. There are no session cookies, so you need to provide the auth info on every request. With both user_loader and request_loader you get two ways of authenticating for the same api, protected by @login_required and with current_user usable, which is really convenient.
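To make the two loaders concrete, here is a rough, self-contained sketch; the in-memory USERS/TOKENS dicts and the header name are placeholders for illustration, not part of flask_login itself:
from flask import Flask
from flask_login import LoginManager, UserMixin

app = Flask(__name__)
app.secret_key = "change-me"
login_manager = LoginManager(app)

USERS = {"1": "alice"}            # toy "database"
TOKENS = {"secret-token": "1"}    # toy token store

class User(UserMixin):
    def __init__(self, user_id):
        self.id = user_id

@login_manager.user_loader
def load_user(user_id):
    # session/cookie path: called with the id flask_login stored in the session
    return User(user_id) if user_id in USERS else None

@login_manager.request_loader
def load_user_from_request(req):
    # token path: called with the raw request, so any header can be inspected
    user_id = TOKENS.get(req.headers.get("Authorization-Token"))
    return User(user_id) if user_id else None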
1
11
0
I apologize in advance for asking a rather cryptic question. However, I did not understand it despite going through a lot of material. It would be great if you could shed some light on this. What is the purpose of a request_loader in flask-login? How does it interact with the user_loader decorator? If I am using a token based authentication system (I am planning on sending the token to my angularJS front end, storing the token there and sending that token in the authorization-token header), will I need a request_loader or will a user_loader (where I check the auth header and see if the user exists) suffice?
How is Flask-Login's request_loader related to user_loader?
0.066568
0
0
7,541
38,484,452
2016-07-20T14:44:00.000
2
0
0
0
python,qt,pyqt,pyside
38,486,018
2
true
0
1
In the Qt documentation: void QTableWidget::itemChanged(QTableWidgetItem * item) — this signal is emitted whenever the data of the item has changed. Hope this helps. Edit: QTableWidget uses a default item delegate (a QItemDelegate instance) which has a createEditor method and a closeEditor signal. You can reimplement createEditor, which fires when an edit starts, and connect to the closeEditor signal, which fires when the edit ends. This may be the correct way.
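A rough PySide sketch of that delegate idea; the names are illustrative, and the periodic refresh would check the editing flag before touching the table:
from PySide import QtGui

class EditAwareDelegate(QtGui.QItemDelegate):
    """Flags when a cell editor is open so background updates can skip that refresh."""
    def __init__(self, parent=None):
        super(EditAwareDelegate, self).__init__(parent)
        self.editing = False
        self.closeEditor.connect(self._on_close)        # edit finished

    def createEditor(self, parent, option, index):      # edit started
        self.editing = True
        return super(EditAwareDelegate, self).createEditor(parent, option, index)

    def _on_close(self, *args):
        self.editing = False

# usage sketch:
# table = QtGui.QTableWidget(4, 4)
# delegate = EditAwareDelegate(table)
# table.setItemDelegate(delegate)
# ...in the update timer: skip refreshing while delegate.editing is True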
1
2
0
I'm writing an app in Python with the PySide library. I have a QTableWidget that gets updated about every second. The thing is, I want to be able to change the data manually, and I thought that if I could find out whether or not the user is changing the data in the cell, then I could just prevent the program from updating this cell. Otherwise I get "kicked out" of the cell at each update. Is this a good idea? Should I try something else and why? How can I achieve my goal? Many thanks EDIT : I know there exists an itemChanged signal, but what I'd really like to know is if there is a way to tell when the user is writing a new value in the cell, in order not to kick them out while editing.
How to tell when a QTableWidgetItem is being modified
1.2
0
0
1,437
38,484,950
2016-07-20T15:39:00.000
1
0
1
0
python
38,485,086
1
true
0
0
Why not just use (datetime.now() + timedelta(days=7)).isocalendar()[1] rather than calculating it from the current week number at all?
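In code, the whole check collapses to a couple of standard-library lines:
from datetime import datetime, timedelta

week_now = datetime.now().isocalendar()[1]                          # current ISO week number
week_next = (datetime.now() + timedelta(days=7)).isocalendar()[1]   # week N+1, rolls over 52/53-week years automatically
print(week_now, week_next)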
1
0
0
I am writing a script to grab files from a directory based on the week number in the filenames. I need to grab files with week N and week N+1 in the filenames. I have the basics down, but am now struggling to figure out the rollover for new years (the file format uses the isocalendar standard). Isocalendars can have either 52 or 53 weeks in them, so I need a way to figure out if the year I'm in is a 52 or 53 week year so that I can then compare to the results of datetime.now().isocalendar()[1] and see if I need to set N+1 to 1. Is there a built in python function for this?
Calculate Number of Last Week in Isocalendar Year
1.2
0
0
247
38,485,373
2016-07-20T15:58:00.000
4
0
0
1
bash,python-3.x,terminal
38,485,489
3
false
0
0
Execute the following command: which python3, then check the exit status of the command with $?. It will be 0 if the user has python 3 installed, and 1 otherwise.
1
24
0
I am writing a shell script, and before the script runs I want to verify that the user has Python 3 installed. Does anyone know or have any ideas of how I could check that, and the output be a boolean value?
From the terminal verify if python 3 is installed
0.26052
0
0
34,436
38,488,977
2016-07-20T19:12:00.000
1
0
0
1
python,docker,errbot
38,557,886
1
true
1
0
I think the best option, if you run Errbot in a container, is to run it with a real database for persistence (redis for example). Then you can simply run backup.py from anywhere (including your dev machine). Even better, you can just back up your redis directly.
1
2
0
I'm running errbot in a docker container, we did the !backup and we have the backup.py, but when i start the docker container it just run /app/venv/bin/run.sh but i cannot pass -r /srv/backup.py to have all my data restored. any ideas? all the data is safe since the /srv is a mounted volume
how can i restore the backup.py plugin data of errbot running in a docker container
1.2
0
0
75
38,490,811
2016-07-20T21:03:00.000
1
0
0
0
python,tensorflow,neural-network,recurrent-neural-network,lstm
38,638,969
2
false
0
0
However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another. This is the definition of recurrence. All RNNs do this.
2
1
1
I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow. In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another. I understand that because the cells are recurrent, they wouldn't need to do this, but I was just trying to see if this is straight out possible. Cheers!
Horizontally layering LSTM cells
0.099668
0
0
301
38,490,811
2016-07-20T21:03:00.000
0
0
0
0
python,tensorflow,neural-network,recurrent-neural-network,lstm
71,838,557
2
false
0
0
Horizontally stacked cells would be useless in any case I can think of. A common source of confusion is that multiple cells (with different parameters) seem to exist because of how the process within an RNN is visualised. RNNs loop over themselves, so for every input they generate new input for the cell itself; they use the same weights over and over. If you separated these connected RNNs and trained them on the generated sequences (different time steps), I think the weights would descend towards approximately similar parameters, so it would be similar (or equal) to just using one RNN cell with its output fed back as input. You can use multiple cells in a kind of 'horizontal' arrangement when using them in an encoder-decoder model.
2
1
1
I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow. In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another. I understand that because the cells are recurrent, they wouldn't need to do this, but I was just trying to see if this is straight out possible. Cheers!
Horizontally layering LSTM cells
0
0
0
301
38,490,946
2016-07-20T21:13:00.000
11
0
1
1
python,pyspark,jupyter,jupyter-notebook
38,510,399
1
true
1
0
The PYSPARK_DRIVER_PYTHON variable is set to start ipython/jupyter automatically (probably as intended.) Run unset PYSPARK_DRIVER_PYTHON and then try pyspark again. If you wish this to be the default, you'll probably need to modify your profile scripts.
1
2
0
I have run into an issue with spark-submit , throws an error is not a Jupyter Command i.e, pyspark launches a web ui instead of pyspark shell Background info: Installed Scala , Spark using brew on MAC Installed Conda Python 3.5 Spark commands work on Jupyter Notebook 'pyspark' on terminal launches notebook instead of shell Any help is much appreciated.
Pyspark command in terminal launches Jupyter notebook
1.2
0
0
3,753
38,491,463
2016-07-20T21:51:00.000
2
1
0
0
python-2.7,amazon-web-services,aws-lambda,boto3
38,494,011
1
true
0
0
When you are working with the AWS EC2 SDK, you are only working in a single region. So by calling client.describe_images(), you are already filtered to a single region. All AMI images returned in the result are in that same region. To get all AMI images in all regions, then you need to iterate across all regions, calling client.describe_images() within each region.
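A boto3 sketch of that per-region loop; it assumes working credentials, and region availability can vary by account:
import boto3

# describe_images() is region-scoped, so iterate over every EC2 region explicitly
for region in boto3.session.Session().get_available_regions('ec2'):
    client = boto3.client('ec2', region_name=region)
    for image in client.describe_images(Owners=['self'])['Images']:
        print(region, image['ImageId'])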
1
0
0
Im trying to get all Ami images using below code using python 2.7 in AWS lambda result = client.describe_images( Owners=[ 'self' ] ) Im able to get the ami images but not able to get in which region its created...i would like to filter images based on region.Please suggest
get AWS region for describeImages
1.2
0
0
330
38,491,716
2016-07-20T22:11:00.000
0
0
0
1
python,python-2.7,hadoop,hadoop-streaming
38,517,426
1
true
0
0
Use wildcards (glob patterns) in the input path: inputfile/* will work for 1 level of subdirectories, inputfile/*/* will work for 2 levels of subdirectories. Run as: hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming.jar -mapper "python wordcount_mapper.py" -file wordcount_mapper.py -input inputfile/* -output outputfile3
1
0
0
I have written a mapper program in python for hadoop Map-Reduce framework. And I am executing it through the command: hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming.jar -mapper "python wordcount_mapper.py" -file wordcount_mapper.py -input inputfile -output outputfile3 It is working properly if the directory inputfile contains only file. But it is not working and showing error if there is sub directories into the directory inputfile . Like i have two sub directory in (KAKA and KAKU) in inputfile. And the error is showing : 16/07/20 17:01:40 ERROR streaming.StreamJob: Error Launching job : Not a file: hdfs://secondary/user/team/inputfile/kaka So, My question is that what will be the command to reach the files into the Sub Directory.
What will be the Hadoop Streaming Run Command to access the files in the Sub Directory
1.2
0
0
136
38,493,057
2016-07-21T00:44:00.000
10
0
0
0
python,django,python-3.x,django-admin
38,493,996
3
true
1
0
Activate virtualenv and install Django there (with python -m pip install django). Try python -m django startproject mysite. You can use python -m django instead of django-admin since Django 1.9.
1
6
0
I have been trying to set up Django for Python 3 for for 2 days now. I have installed python 3.5.2 on my Mac Mini. I have also have pip3 installed succesfully. I have installed Django using pip3 install Django. The problem is that when I try to start my project by typing django-admin startproject mysite, I get the error -bash: django-admin: command not found. If you need any more info, just let me know, I am also new to Mac so I may be missing something simple. How do I get django-admin working? I have tried pretty much everything I could find on the web.
Getting Django for Python 3 Started for Mac django-admin not working
1.2
0
0
6,001
38,493,144
2016-07-21T00:53:00.000
0
0
1
0
python,windows,python-2.7
38,493,199
1
false
0
0
You're using the wrong path: pip should reside in the Scripts subdirectory. Set PATH to C:\Python27\Scripts, then restart cmd.
1
0
0
I've been having some really odd issues with trying to install and use the Python "Pip" module. Firstly, I've installed the pip module by downloading the getpip.py file and running it which has replaced my pre existing pip which seemed to work fine. However whenever I try to use pip it always comes up with "pip is not recognized as an internal or external command" etc. I've set the path for python by using setx PATH "%PATH%;C:\Python27\python" and then using C:\Python27\Scripts\pip the second time to try and set the path for pip. But one of these seem to work. I can't use pip in cmd neither can I now use python. Does anyone know how to make this work? I'm trying to run this command "pip install -r requirements.txt " even in the right folder but pip is not recognized. Any suggestions? Thanks.
Python 2.7 Pip module not installing or setting paths via cmd?
0
0
0
362
38,493,608
2016-07-21T01:54:00.000
0
0
1
0
python,python-2.7,numpy,anaconda
38,503,517
1
false
0
0
The anaconda package in the AUR is broken. If anyone encounters this, simply install anaconda from their website. The AUR attempts to do a system-wide install, which gets rather screwy with the path.
1
0
1
I recently installed Anaconda on Arch Linux from the Arch repositories. By default, it was set to Python3, whereas I would like to use Python2.7. I followed the Anaconda documentation to create a new Python2 environment. Upon running my Python script which uses Numpy, I got the error No module named NumPy. I found this rather strange, as one of the major points of using Anaconda is easy installation of the NumPy/SciPy stack... Nevertheless, I ran conda install numpy and it installed. Now, I still cannot import numpy, but when I run conda install numpy it says it is already installed. What gives? Output of which conda: /opt/anaconda/envs/python2/bin/conda Output of which python: /opt/anaconda/envs/python2/bin/python
Python Anaconda - no module named numpy
0
0
0
1,790
38,494,736
2016-07-21T04:10:00.000
0
1
1
0
python,audio
47,418,314
2
false
0
0
The problem here is that music has structure, while the sounds you want to find may have different signatures. Using the door as an example, the weight, size and material of the door alone will influence the types of acoustic signatures it will produce. If you want to search by similarity, a bag-of-features approach may be the easy(ish) way to go. However, there are different approaches, such as taking samples by a sliding window along the spectrogram of a sound, and trying to match (by similarity) with a previous sound you recorded, decomposition of the sound, etc...
1
0
0
What I wanna do is just like 'Shazam' or 'SoundHound' with Python, only sound version, not music. For example, when I make sound(e.g door slam), find the most similar sound data in the sound list. I don't know you can understand that because my English is bad, but just imagine the sound version of 'Shazam'. I know that 'Shazam' doesn't have open API. Is there any api like 'Shazam'? Or, How can I implement it?
Get sound input & Find similar sound with Python
0
0
0
2,228
38,496,026
2016-07-21T06:01:00.000
0
1
1
0
python,python-2.7,pdf,text,converter
70,157,888
2
false
0
0
You can use "tabula" python library. which basically uses Java though so you have to install Java SDK and JDK. "pip install tabula" and import it to the python script then you can convert pdf to txt file as: tabula.convert_into("path_or_name_of_pdf.pdf", "output.txt", output_format="csv", pages='all') You can see other functions on google. It worked for me. Cheers!!!
1
3
0
I've been on it for several days + researching the internet on how to get specific information from a pdf file. Eventually I was able to fetch all information using Python from a text file(which I created by going to the PDF file -----> File ------> Save as Text). The question is how do I get Python to accomplish those tasks(Going to the PDF file(opening it - is quite easy open("file path"), clicking on File in the menu, and then saving the file as a text file in the same directory). Just to be clear, I do not require the pdfminer or pypdf libraries as I have already extracted the information with the same file(after converting it manually to txt)
Converting a PDF file to a Text file in Python
0
0
0
3,865
38,498,290
2016-07-21T07:56:00.000
0
0
0
0
python,mysql,relational-database,semantic-web,ontology
38,534,519
2
false
0
0
Use the right tool for the job. You're using RDF (the fact that it carries OWL axioms is immaterial), and you want to store and query it. Use an RDF database. They're optimized for storing and querying RDF. It's a waste of your time to homegrow storage & query in MySQL when other folks have already figured out how best to do this. As an aside, there is a way to map RDF to a relational database; there's a formal specification for this, called R2RML.
1
1
0
I already have an owl ontology which contains classes, instances and object properties. How can I map them to a relational data base such as MYSQL using a Python as a programming language(I prefer Python) ? For example, an ontology can contains the classes: "Country and city" and instances like: "United states and NYC". So I need manage to store them in relational data bases' tables. I would like to know if there is some Python libraries to so.
How can I map an ontology components to a relational database?
0
1
0
1,036
38,510,502
2016-07-21T17:16:00.000
0
0
1
0
python-3.x,anaconda,jupyter,jupyter-notebook,conda
38,616,822
2
false
0
0
I tried to change Python [Root] to Python [Root-test] but it did not work for me. But I found in here that just typing conda create -n py35 python=3.5 ipykernel in the cmd line worked.
1
1
0
When I try to create a new notebook in jupyter, the drop down menu shows Python[root] instead of Python[3]. Why is this? Is this problematic? I'm using Python 3.5.2.
Why does jupyter display python[root]?
0
0
0
1,550
38,510,964
2016-07-21T17:44:00.000
2
0
1
0
python
38,510,997
3
false
0
0
Yes. These are equivalent. The rules used by if are the same as those used by bool. not simply inverts these values without changing the logic for determining truthiness or falseness.
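A quick sanity check of that equivalence, purely for illustration:
for x in (0, 1, "", "a", [], [0], None):
    assert bool(x) is (True if x else False)    # the truth value `if x:` uses
    assert (not x) is (bool(x) is False)        # negation matches `if not x:`
print("all equivalent")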
1
5
0
For a small testing framework we are writing I'm trying to provide some utility functions. One of them is supposed to be equivalent to if x: but if that is completely equivallent to if bool(x) is True: then I could only provide one function to check if x is True: and if x:. Is the negation of that also equivalent? if bool(x) is False: equal to if not x:?
Is `if x:` completely equivalent to `if bool(x) is True:`?
0.132549
0
0
127
38,515,096
2016-07-21T22:03:00.000
0
0
0
1
python-3.x,numpy,pyspark,google-cloud-platform,gcp
38,572,975
1
false
0
0
Not sure if this qualifies as a solution. I submitted the same job using dataproc on google platform and it worked without any problem. I believe the best way to run jobs on google cluster is via the utilities offered on google platform. The dataproc utility seems to iron out any issues related to the environment.
1
0
1
This code runs perfect when I set master to localhost. The problem occurs when I submit on a cluster with two worker nodes. All the machines have same version of python and packages. I have also set the path to point to the desired python version i.e. 3.5.1. when I submit my spark job on the master ssh session. I get the following error - py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, .c..internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/worker.py", line 98, in main command = pickleSer._read_with_length(infile) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length return self.loads(obj) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 419, in loads return pickle.loads(obj, encoding=encoding) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/mllib/init.py", line 25, in import numpy ImportError: No module named 'numpy' I saw other posts where people did not have access to their worker nodes. I do. I get the same message for the other worker node. not sure if I am missing some environment setting. Any help will be much appreciated.
module error in multi-node spark job on google cloud cluster
0
0
0
206
38,515,700
2016-07-21T23:01:00.000
0
0
0
0
python,authentication,terminal,onenote,onenote-api
38,515,902
2
false
1
0
If this is always with the same account - you can make the "browser opening and password typing" a one time setup process. Once you've authenticated, you have the "access token" and the "refresh token". You can keep using the access token for ~1hr. Once it expires, you can use the "refresh token" to exchange it for an "access token" without any user interaction. You should always keep the refresh token so you can get new access tokens later. This is how "background" apps like "IFTTT" keep access to your account for a longer period of time. Answer to your updated question: The initial setup has to be through UI in a browser. If you want to automate this, you'll have to write some UI automation.
1
2
0
I want to create a python script that will allow me to upload files to OneNote via command line. I have it working perfectly and it authenticates fine. However, everytime it goes to authenticate, it has to open a browser window. (This is because authentication tokens only last an hour with OneNote, and it has to use a refresh token to get a new one.) While I don't have to interact with the browser window at all, the fact that it needs to open one is problematic because the program has to run exclusively in a terminal environment. (E.g. the OneNote authentication code tries to open a browser, but it can't because there isn't a browser to open). How can I get around this problem? Please assume it's not possible to change the environment setup. UPDATE: You have to get a code in order to generate an access token. This is the part that launches the browser. It is only required the first time though, for that initial token. Afterwards, refresh token requests don't need the code. (I was calling it for both, which was the issue). That solves the problem of the browser opening each time I run my program. However, it still leaves the problem of the browser having to open that initial time. I can't do that in a terminal environment. Is there a way around that? E.g. Can I save the code and call it later to get the access token (how long until it expires)? Will the code work for any user, or will it only work for me?
How Do I Authenticate OneNote Without Opening Browser?
0
0
1
505
38,517,032
2016-07-22T01:59:00.000
0
0
0
0
python,django,class,authentication,request
38,517,140
2
false
1
0
I don't know what the context of your ClassBasedView is... but you can use the LoginRequiredMixin to require login before calling your class: class ServerDeleteView(LoginRequiredMixin, DeleteView): model = Server success_url = reverse_lazy('ui:dashboard')
1
0
0
Ok, have a class based view that passes a query_set into my AssignedToMe class. The point of this class based view is to see if a user is logged in and if they are, they can go to a page and it will display all of records that are assigned to their ID. Currently, it is working how I want it to but only if a user is logged in. If a user is not logged in, I get the following error 'AnonymousUser' object is not iterable. I want it to redirect the user to the login page if there is no user logged in. Thank you in advance. Please look at the screenshot
Class Based View to get user authentication in Django
0
0
0
523
38,517,334
2016-07-22T02:43:00.000
2
0
0
0
python,pandas,global
38,517,371
1
true
0
0
Yes. Instead of using globals, you should wrap your data into an object and pass that object around to your functions instead (see dependency injection). Wrapping it in an object instead of using a global will : Allow you to unit test your code. This is absolutely the most important reason. Using globals will make it painfully difficult to test your code, since it is impossible to test any of your code in isolation due to its global nature. Perform operations on your code safely without the fear of random mutability bugs Stop awful concurrency bugs that happen because everything is global.
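A small sketch of that wrapping idea; the column names and example frames are made up purely for illustration:
import pandas as pd

class DataStore(object):
    """Holds the shared dataframes and is passed explicitly instead of living in globals."""
    def __init__(self, orders, customers):
        self.orders = orders
        self.customers = customers

def top_customers(store, n=2):
    # functions take the store as an argument, which keeps them easy to unit test
    counts = store.orders['customer_id'].value_counts().head(n)
    return store.customers[store.customers['id'].isin(counts.index)]

# tiny in-memory example; real code would load the millions of rows here instead
store = DataStore(
    orders=pd.DataFrame({'customer_id': [1, 1, 2, 3]}),
    customers=pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']}),
)
print(top_customers(store))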
1
1
1
I have a program that i load millions of rows into dataframes, and i declare them as global so my functions (>50) can all use them like i use a database in the past. I read that using globals are a bad, and due to the memory mapping for it, it is slower to use globals. I like to ask if globals are bad, how would the good practice be? passing > 10 dataframes around functions and nested functions dont seems to be very clean code as well. Recently the program is getting unwieldy as different functions also update different cells, insert, delete data from the dataframes, so i am thinking of wrapping the dataframes in a class to make it more manageable. Is that a good idea?
Global dataframes - good or bad
1.2
0
0
151
38,518,000
2016-07-22T04:09:00.000
0
0
0
0
python,pandas,dataframe,import,sas
38,518,356
1
false
0
0
I don't know how python stores dates, but SAS stores dates as numbers, counting the number of days from Jan 1, 1960. Using that you should be able to convert it in python to a date variable somehow. I'm fairly certain that when data is imported to python the formats aren't honoured so in this case it's easy to work around this, in others it may not be. There's probably some sort of function in python to create a date of Jan 1, 1960 and then increment by the number of days you get from the imported dataset to get the correct date.
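In pandas this increment-from-1960 conversion can be done directly; a hedged sketch, assuming the column really holds whole-day counts and a pandas version recent enough (>= 0.20) to support the origin argument:
import pandas as pd

df = pd.DataFrame({'REPORT_MONTH': [20454.0, 20485.0]})   # example SAS day counts

# SAS dates count days from 1960-01-01, so shift from that origin
df['REPORT_MONTH'] = pd.to_datetime(df['REPORT_MONTH'], unit='D', origin='1960-01-01')
print(df)   # -> 2016-01-01 and 2016-02-01 for these example values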
1
0
1
I have imported a SAS dataset in python dataframe using Pandas read_sas(path) function. REPORT_MONTH is a column in sas dataset defined and saved as DATE9. format. This field is imported as float64 datatype in dataframe and having numbers which is basically a sas internal numbers for storing a date in a sas dataset. Now wondering how can I convert this originally a date field into a date field in dataframe?
Date field in SAS imported in Python pandas Dataframe
0
0
0
398
38,518,227
2016-07-22T04:33:00.000
1
0
0
0
python,excel,dashboard
38,520,493
1
false
0
0
If the "dashboard" is in Excel and if it contains charts that refer to data in the current workbook's worksheets, then the charts will update automatically when the data is refreshed, unless the workbook calculation mode is set to "manual". By default calculation mode is set to "automatic", so changes in data will immediately reflect in charts based on that data. If the "dashboard" lives in some other application that looks at the Excel workbook for the source data, you may need to refresh the data connections in the dashboard application after the Excel source data has been refreshed.
1
0
0
I need to create a dashboard based upon an excel table and I know excel has a feature for creating dashboards. I have seen tutorials on how to do it and have done my research, but in my case, the excel table on which the dashboard would be based is updated every 2 minutes by a python script. My question is, does the dashboard display automatically if a value in the table has modified, or does it need to be reopened, reloaded, etc..?
Can Excel Dashboards update automatically?
0.197375
1
0
1,488
38,521,380
2016-07-22T08:09:00.000
0
0
0
0
python,openerp,openerp-7
38,689,879
1
false
1
0
Yes, it is possible by using the status bar. In order to compute the percentage of sales orders, you should determine how much the quota is for each sale order.
1
0
0
Is it possible to have easily the percentage of sales orders / quotes per users? The objective it is to know what the percentage of quote that become a sale order per user. I have not a clue how I can do it. I am using OpenERP 7
OpenERP - Odoo - How to have the percentage of quote that become a sale orders
0
0
0
118
38,524,212
2016-07-22T10:32:00.000
0
0
0
0
python,postgis,mapnik
39,417,434
2
false
0
0
If you are looking to publish data in postgres as WMS, enable a tile cache, and use a more advanced rendering engine like mapnik, then I would say the one component missing is the GIS server. So, if I am guessing your requirement correctly, here is what the system design could be: use postgres/postgis as the database connection; write your own server-side program in python to create the service definition file for the dynamic WMS (a mapfile if you are going to use MapServer); have your program handle tilecache/tile seeding by changing the configuration file (.yaml) in mapproxy; then hand the WMS over to mapnik for rendering and expose the output. Like someone else mentioned, it would be easy to have a template configuration file for each step and do parameter substitution.
1
0
0
Would that be possible to create programatically a new OGC WMS (1.1/1/3) service using: Python MapProxy Mapnik PostGIS/Postgres any script/gist or sample would be more then appreciated. Cheers, M
python script for creating maproxy OGC WMS service using Mapnik and PostGIS
0
1
0
662
38,524,981
2016-07-22T11:10:00.000
1
0
1
0
python,pyspark
38,537,603
1
true
0
0
If you are working with a HiveContext, then the max length of a table name should be the max length allowed by Hive/Metastore (last time I checked, were 128 characters), probably the same thing happen with SqlContext.
1
1
0
I am a absolute novice in python and this is a very basic question to which I could not find an answer while searching. I am using registerTempTable function to register a dataframe as table, I wanted to check what is the max length there can be for the tablename? To test I used up to 70 characters and it did register the table, but for my own knowledge I wanted to know if there is any restriction on the length of the tablename. Thanks!!
Max tablename length with registerTempTable in Python
1.2
0
0
76
38,527,505
2016-07-22T13:18:00.000
0
0
0
1
python,jenkins,jenkins-plugins
38,527,694
1
false
0
0
What I usually do is go to the build output; on the left you will find "Build Environment Variables" or something similar, and check if you can see them there. But the solution cited in the other SO post usually works for me as well.
1
0
0
I'm currently using a Jenkins build to run a bash command that activates a Python script. The build is parametrised, and i need to be able to set environment variables containing those parameters on the Windows slave to be used by the Python script. So, my question is: How do i set temporary env. variables for the current build and secondly, how do i use Python to retreive them while running a script? An explanation of the process would be great since i couldn't make any solution work.
Jenkins with Windows slave with Python - Setting environment variables while building and using them when running a python script
0
0
0
766
38,527,573
2016-07-22T13:21:00.000
0
1
0
1
python,amazon-web-services,amazon-dynamodb,iot
38,537,832
1
false
1
0
No, it does not. I have done similar setup and it is working fine. Are you sure that your IoT device does not go into some kind of sleep mode after a while ?
1
1
0
We have implemented a simple DynamoDB database that is updated by a remote IoT device that does not have a user (i.e. root) constantly logged in to the device. We have experience issues in logging data as the database is not updated if a user (i.e. root) is not logged into the device (we log in via a ssh session). We are confident that the process is running in the background as we are using a Linux service that runs on bootup to execute a script. We have verified that the script runs on bootup and successfully pushes data to Dynamo upon user log in (via ssh). We have also tried to disassociate a screen session to allow for the device to publish data to Dynamo but this did not seem to fix the issue. Has anyone else experienced this issue? Does amazon AWS require a user (i.e. root) to be logged in to the device at all times in order to allow for data to be published to AWS?
AWS IOT with DynamoDB logging service issue
0
0
0
66
38,529,591
2016-07-22T14:57:00.000
0
1
0
1
python,apache,cgi
38,529,657
1
false
0
0
Have you tried chmod 777 foo.py or chmod +x foo.py? Those are generally the commands used to give file permission to run.
1
0
0
I'm attempting to execute foo.py from mysite.com/foo.py, however the script requires access to directories that would normally require sudo -i root access first. chmod u+s foo.py still doesn't give the script enough permission. What can I do so the script has root access? Thank you!
CGI: Execute python script as root
0
0
0
690
38,531,685
2016-07-22T16:51:00.000
2
0
1
0
python,ipython-notebook,jupyter-notebook
38,531,985
1
false
0
0
Option 1: Run multiple jupyter notebook servers from your project directory root(s). This avoids navigating deeply nested structures using the browser ui. I often run many notebook servers simultaneously without issue. $ cd path/to/project/; jupyter notebook; Option 2: If you know the path you could use webbrowser module $ python -m webbrowser http://localhost:port/path/to/notebook/notebook-name.ipynb Of course you could alias frequently accessed notebooks to something nice as well.
1
2
0
My Jupyter/IPython notebooks reside in various directories all over my file system. I don't enjoy navigating hierarchies of directories in the Jupyter notebook browser every time I have to open a notebook. In absence of the (still) missing feature allowing to bookmark directories within Jupyter, I want to explore if I can open a notebook from the command line such that it is opened by the Jupyter instance that is already running. I don't know how to do this....
Open a Jupyter notebook within running server from command line
0.379949
0
0
1,220
38,531,972
2016-07-22T17:10:00.000
0
0
1
0
python,emacs,ide,pycharm,twisted
41,860,685
2
false
0
0
You can add one more thing to your list: automatic save - (add-hook 'focus-out-hook 'save-buffer). In case your emacs slows down, try (global-hl-line-mode -1) in dotspacemacs/user-config.
1
3
0
I am newbie to Emacs. I mostly work in python (specifically twisted) & trying to configure it more like Pycharm IDE. I installed package elpy. But still it doesn't work well in case of auto completion. Also it shows all errors in red color either they are errors or warnings. I tweaked pyflakes to show only specific errors ( instead of showing all errors mentioned in PEP8 specifications). But I am trying to make it more like Pycharm. Has anybody greater luck with this ? Why pycharm is so good in case of autocompletion and finding definitions/sources of functions/classes ? Also Can we configure virtualenv in emacs ? Any suggestions/resources/ideas will be welcome.
Customizing emacs like pycharm IDE
0
0
0
1,713
38,532,919
2016-07-22T18:13:00.000
0
0
1
0
java,python,time,time-complexity
38,533,334
1
false
0
0
Yes, you went from O(n^2) to O(n). You might want to look into the space complexity as well: you would have to store the lookup structure for one approach, while for the other you would use less space. A hash map looks ideal for this situation if you do not care about memory, or any other structure if it's easier to implement.
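For the membership test itself, a hash-based set gives the same O(n) behaviour with very little code — a small sketch using the example lines from the question:
file1_lines = ["Text1", "Text2", "Text3", "Text4"]   # contents of file 1
file2_lines = ["Text5", "Text7", "Text1", "Text2"]   # contents of file 2

seen = set(file2_lines)                              # O(n) to build, O(1) average lookup
common = [line for line in file1_lines if line in seen]
print(common)                                        # ['Text1', 'Text2']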
1
0
0
So I am reading two files and storing each line in two different lists respectively. Now I have to check if a string in the first list is present in the 2nd list. By normal comparison this will take O(n^2) But using a graph based data structure like - File1_visited[string] = True File2_visited[string] = True. I can check if both are true, then the string is present in both the files. This makes it O(n). Is there any other approach I can reduce the time complexity and Is my understanding correct? Example Scenario - File1- Text1 Text2 Text3 Text4 File2 - Text5 Text7 Text1 Text2 Comparing these two files.
time complexity of this approach
0
0
0
77
38,537,125
2016-07-23T00:58:00.000
2
0
1
0
python,module,dataset,attributeerror,lda
38,537,431
2
true
0
0
Do you have a module named lda.py or lda.pyc in the current directory? If so, then your import statement is finding that module instead of the "real" lda module.
1
1
1
I have installed LDA plibrary (using pip) I have a very simple test code (the next two rows) import lda print lda.datasets.load_reuters() But i keep getting the error AttributeError: 'module' object has no attribute 'datasets' in fact i get that each time i access any attribute/function under lda!
AttributeError: 'module' object has no attribute '__version__'
1.2
0
0
4,833
38,540,517
2016-07-23T10:04:00.000
1
0
0
0
python,sonos
38,955,622
1
true
1
0
You can easily iterate over the group and change all the volumes; for example, to increase the volume on all speakers by 5: for each_speaker in my_zone.group: each_speaker.volume += 5 (assuming my_zone is your speaker object)
1
1
0
I am trying to set group volume in soco (python) for my Sonos speakers. It is straightforward to set individual speaker volume but I have not found any way to set volume on group level (without iterating through each speaker setting the volume individually). Any idea to do this?
Anyone know how to set group volume in soco (python)?
1.2
0
0
442
38,543,266
2016-07-23T15:13:00.000
0
0
1
0
python,anaconda,conda
64,464,084
3
false
0
0
I've found a solution while trying to install conda environment in my Windows. I have a space in one of my directory's name between My and MS. C:\Users\My MS\python\project_1\env So I put " " to encase the whole directory to activate the environment: $ conda activate "C:\Users\My MS\python\project_1\env"
1
3
0
conda env list or conda info -e shows py35 python=3.5 as one of the environment. How to activate conda env which has space in its name?
How to activate conda env with space in its name
0
0
0
6,175
38,545,198
2016-07-23T18:32:00.000
0
0
1
1
python,vagrant,jupyter-notebook
57,043,893
5
false
0
0
Looks like that in newer versions of Jupyter the changes that should be done in the configuration is a little different from the above answers (otherwise you can get error "'' does not appear to be an IPv4 or IPv6 address"). The entire solution: Run: jupyter notebook --generate-config Change in the config the bellow: c.NotebookApp.ip = '0.0.0.0' c.NotebookApp.open_browser = False Now you can run Jupyter simply: jupyter notebook
2
9
0
I want to run jupyter notebook running on my ubuntu vm which i fired using vagrant. $ jupyter notebook --no-browser --port 8004 [I 18:26:10.152 NotebookApp] Serving notebooks from local directory: /home/vagrant/path/to/jupyter/notebook/directory [I 18:26:10.153 NotebookApp] 0 active kernels [I 18:26:10.154 NotebookApp] The Jupyter Notebook is running at: http://localhost:8004/ [I 18:26:10.154 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). Jupyter notebook starts in localhost. But to access the notebook from my host machine I need to start the notebook in 0.0.0.0. How to bind the ip 0.0.0.0 so that it routes to 127.0.0.1 in the vm? My host machine is windows and vm is ubuntu 14.04.4
access jupyter notebook running on vm
0
0
0
20,157
38,545,198
2016-07-23T18:32:00.000
1
0
1
1
python,vagrant,jupyter-notebook
70,621,757
5
false
0
0
For my case (using VMware with Ubuntu), the solution was very simple. By default, the network adapter was already in NAT mode. If not, adjust this in the settings of your VM instance. Type ifconfig in a VM terminal to get your local IP, e.g. 192.168.124.131. Start the notebook: jupyter notebook --ip=192.168.124.131 --no-browser The terminal then gives you a link you can use on the host to access jupyter, e.g. http://192.168.124.131:8888/?token=xxxxxxxxxxxxxxxxxxx.
2
9
0
I want to run jupyter notebook running on my ubuntu vm which i fired using vagrant. $ jupyter notebook --no-browser --port 8004 [I 18:26:10.152 NotebookApp] Serving notebooks from local directory: /home/vagrant/path/to/jupyter/notebook/directory [I 18:26:10.153 NotebookApp] 0 active kernels [I 18:26:10.154 NotebookApp] The Jupyter Notebook is running at: http://localhost:8004/ [I 18:26:10.154 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). Jupyter notebook starts in localhost. But to access the notebook from my host machine I need to start the notebook in 0.0.0.0. How to bind the ip 0.0.0.0 so that it routes to 127.0.0.1 in the vm? My host machine is windows and vm is ubuntu 14.04.4
access jupyter notebook running on vm
0.039979
0
0
20,157
38,547,996
2016-07-24T01:44:00.000
0
0
1
0
python,data-structures
38,548,024
3
false
0
0
Dict with tuples as keys might work.
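A bare-bones version of that dictionary-of-keys idea, just to make it concrete:
class SparseMatrix(object):
    """Only non-zero entries consume memory; access stays O(1) on average."""
    def __init__(self, rows, cols, default=0):
        self.shape = (rows, cols)
        self.default = default
        self.data = {}                        # maps (row, col) -> value

    def __setitem__(self, key, value):
        if value == self.default:
            self.data.pop(key, None)          # keep the dict sparse
        else:
            self.data[key] = value

    def __getitem__(self, key):
        return self.data.get(key, self.default)

m = SparseMatrix(1000, 1000)
m[3, 7] = 42
print(m[3, 7], m[0, 0], len(m.data))          # 42 0 1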
3
1
1
I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Represent sparse matrix in Python without library usage
0
0
0
331
38,547,996
2016-07-24T01:44:00.000
1
0
1
0
python,data-structures
38,548,640
3
false
0
0
The scipy.sparse library uses different formats depending on the purpose. All implement a 2d matrix dictionary of keys - the data structure is a dictionary, with a tuple of the coordinates as key. This is easiest to setup and use. list of lists - has 2 lists of lists. One list has column coordinates, the other column data. One sublist per row of matrix. coo - a classic design. 3 arrays, row coordinates, column coordinates and data values compressed row (or column) - a more complex version of coo, optimized for mathematical operations; based on linear algebra mathematics decades old diagonal - suitable for matrices were most values are on a few diagonals
3
1
1
I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Represent sparse matrix in Python without library usage
0.066568
0
0
331
38,547,996
2016-07-24T01:44:00.000
0
0
1
0
python,data-structures
38,548,006
3
false
0
0
Lots of ways to do it. For example you could keep a list where each list element is either one of your data objects, or an integer representing N blank items.
3
1
1
I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Represent sparse matrix in Python without library usage
0
0
0
331
38,549,253
2016-07-24T06:06:00.000
1
0
0
1
python,ubuntu,tensorflow,command-line,version
62,434,345
18
false
0
0
Another variation, i guess :P python3 -c 'print(__import__("tensorflow").__version__)'
3
367
0
I need to find which version of TensorFlow I have installed. I'm using Ubuntu 16.04 Long Term Support.
How to find which version of TensorFlow is installed in my system?
0.011111
0
0
1,109,737
38,549,253
2016-07-24T06:06:00.000
2
0
0
1
python,ubuntu,tensorflow,command-line,version
62,218,233
18
false
0
0
If you have TensorFlow 2.x: sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))
3
367
0
I need to find which version of TensorFlow I have installed. I'm using Ubuntu 16.04 Long Term Support.
How to find which version of TensorFlow is installed in my system?
0.022219
0
0
1,109,737
38,549,253
2016-07-24T06:06:00.000
4
0
0
1
python,ubuntu,tensorflow,command-line,version
55,177,324
18
false
0
0
Easily get KERAS and TENSORFLOW version number --> Run this command in terminal: [username@usrnm:~] python3 >>import keras; print(keras.__version__) Using TensorFlow backend. 2.2.4 >>import tensorflow as tf; print(tf.__version__) 1.12.0
3
367
0
I need to find which version of TensorFlow I have installed. I'm using Ubuntu 16.04 Long Term Support.
How to find which version of TensorFlow is installed in my system?
0.044415
0
0
1,109,737
38,555,120
2016-07-24T18:03:00.000
4
0
1
0
python,numpy,matrix
38,555,266
3
false
0
0
Well, the first question is, which type of value will you store in your matrix? Supposing it holds integers (and supposing each one takes 4 bytes), you will have 4*10^12 bytes to store. That's a large amount of information (4 TB), so, in the first place, I don't know where you are getting all that information from, and I suggest you only load parts of it that you can manage easily. On the other side, since you can parallelize the work, I would recommend using CUDA if you can afford an NVIDIA card, so you will have much better performance. In summary, it is hard to keep all that information in RAM alone, and you should use parallel tooling. PS: you are using the O() estimation of algorithm time complexity incorrectly. You should have said that you have O(n), with n = size_of_the_matrix, or O(n*m*t), with n, m and t the dimensions of the matrix.
2
1
1
I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000. Considering that: The matrix will be dense, and should be stored in the RAM. I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable). Are my requirements realistic? Which will be the hardware requirements I would need to work in such way in a decent time? I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances.
Hardware requirements to deal with a big matrix - python
0.26052
0
0
230
38,555,120
2016-07-24T18:03:00.000
1
0
1
0
python,numpy,matrix
38,555,206
3
false
0
0
Actually, memory would be a big issue here, depending on the type of the matrix elements: each Python float takes 24 bytes, for example, as it is a boxed object. As your matrix has 10^12 elements, you can do the math. Switching to C would probably make it more memory-efficient, but not faster, as numpy is essentially written in C with lots of optimizations.
2
1
1
I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000. Considering that: The matrix will be dense, and should be stored in the RAM. I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable). Are my requirements realistic? Which will be the hardware requirements I would need to work in such way in a decent time? I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances.
Hardware requirements to deal with a big matrix - python
0.066568
0
0
230
38,556,078
2016-07-24T19:47:00.000
15
0
1
0
python,tensorflow
38,556,752
1
true
0
0
It's true that a Variable can be used any place a Tensor can, but the key differences between the two are that a Variable maintains its state across multiple calls to run() and a variable's value can be updated by backpropagation (it can also be saved, restored etc as per the documentation). These differences mean that you should think of a variable as representing your model's trainable parameters (for example, the weights and biases of a neural network), while you can think of a Tensor as representing the data being fed into your model and the intermediate representations of that data as it passes through your model.
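A tiny graph-mode illustration of the difference; this assumes a TensorFlow 1.x-style session, and the exact initializer name has changed between releases:
import tensorflow as tf

v = tf.Variable(1.0)     # stateful, trainable, persists across run() calls
x = 1.0 + v              # plain Tensor, recomputed from v on every run()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # variables need explicit initialisation
    print(sess.run(x))                             # 2.0
    sess.run(v.assign_add(1.0))                    # only variables can be assigned/updated
    print(sess.run(x))                             # 3.0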
1
11
1
The Tensorflow documentation states that a Variable can be used any place a Tensor can be used, and they seem to be fairly interchangeable. For example, if v is a Variable, then x = 1.0 + v becomes a Tensor. What is the difference between the two, and when would I use one over the other?
In Tensorflow, what is the difference between a Variable and a Tensor?
1.2
0
0
3,747
38,556,461
2016-07-24T20:26:00.000
2
0
0
0
python,django
38,556,568
2
false
1
0
You need some kind of mutex. Since your operations involve the filesystem already, perhaps you could use a file as a mutex. For instance, at the start of the operation, check if a specific file exists in a specific place; if it does, return an error, but if not, create it and proceed, deleting it at the end of the operation (making sure to also delete it in the case of any error).
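A rough sketch of that file-as-mutex idea inside a Django view; the lock path and the run_import helper are placeholders for the real parsing work:
import os
from django.http import HttpResponse

LOCK_PATH = '/tmp/import.lock'

def import_view(request):
    try:
        # O_CREAT | O_EXCL fails atomically if the lock file already exists
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL)
    except OSError:
        return HttpResponse('import already running', status=409)
    try:
        run_import()                 # placeholder for the filesystem/parse/DB work
        return HttpResponse('done')
    finally:
        os.close(fd)
        os.remove(LOCK_PATH)         # always release the lock, even on errors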
1
1
0
I have a django app, where I am using one of the views to fetch data from local filesystem and parse it and add it to my database. Now the thing is, I want to restrict this view from serving multiple requests concurrently, I want them to be served sequentially instead. Or just block the new request when one request is already being served. Is there a way to achieve it?
Is there a way to block django views from serving multiple requests concurrently?
0.197375
0
0
179
38,558,368
2016-07-25T00:59:00.000
2
0
0
0
python-3.x,utf-8,flask
38,558,411
1
false
1
0
The string (BOM) is most likely included in your template file. Open/save it in some editor which doesn't include unnecessary symbols in UTF-8 files. For example Notepad++.
1
0
0
I am trying to use Flask and for some reason it is rendering with a byte-order mark that's a quirk of something using UTF8 (the mark is  in particular for people googling the same issue). I do not know how to get rid of it or if it is a source of some of my problems. I am using Flask on Windows 10. I wish I knew how to reproduce this issue.
flask serving byte-order-mark 
0.379949
0
0
142