Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
33,064,424 | 2015-10-11T11:25:00.000 | 10 | 0 | 1 | 0 | python-idle,python-3.5 | 50,818,666 | 4 | false | 0 | 0 | Following are the steps:
Open IDLE
Go to "OPTIONS" in the top tab
Select "Configure IDLE"
Select "Highlights" from top horizontal tab
Select "Background" radio button just above the image of IDLE on the left hand side
On the right hand side under "Highlight Theme" click on "IDLE Classic/IDLE Dark/IDLE New" as per your preference
Click Apply, then click OK
Done! | 2 | 7 | 0 | Is there a way to change background color of python-3.5 IDLE under windows 10?
I've tried Google and reading the docs, but I can't find the answer. I'm not sure it's even possible... | IDLE background color in python | 1 | 0 | 0 | 92,777 |
33,064,424 | 2015-10-11T11:25:00.000 | 0 | 0 | 1 | 0 | python-idle,python-3.5 | 68,920,836 | 4 | false | 0 | 0 | FOR IDLE DARK BACKGROUND
The answers above from Daniel Puiu and Megha Chovatiya are very good. I followed their instructions and have some extra tips of my own to share:
These detailed instructions also cover creating a custom theme:
Open IDLE
Go to "OPTIONS" in the top tab
Select "Configure IDLE"
Select "Highlights" from top horizontal tab
Because I want the IDLE background to be dark, under Highlighting Theme -> Built-in Theme, change "IDLE Classic" to "IDLE Dark"
There is a small code preview image where you can click to select the specific code element whose color you want to change. Then:
Select the "Background" radio button to change the background behind the text, which works like the highlight color in a Word document
Or select "Foreground" to adjust the color of the text itself
Use "Choose color for" to pick the color you want. You can watch the code preview at the same time to see your changes
When it looks good, click "Apply" and name the custom theme.
Furthermore, you can change the text size or make the text bold:
Go to "Fonts/Tabs" from the top horizontal tab and adjust the font to your style | 2 | 7 | 0 | Is there a way to change background color of python-3.5 IDLE under windows 10?
I've tried Google and reading the docs, but I can't find the answer. I'm not sure it's even possible... | IDLE background color in python | 0 | 0 | 0 | 92,777 |
33,068,445 | 2015-10-11T18:15:00.000 | 0 | 1 | 1 | 0 | python,windows,atom-editor | 43,814,862 | 3 | false | 0 | 0 | From Atom > Preferences > Install:
Search for the atom-runner package and install it.
Close the Atom editor and reopen it. This lets Atom pick up the right path and should solve the issue.
If this doesn't help, manually add the path of the Python installation directory to the system PATH. This will solve the issue. | 1 | 3 | 0 | I have read numerous articles about running code in the Atom code editor; however, I cannot seem to understand how this can be done. Could anyone explain it in simpler terms?
I want to run my Python code in it and I have downloaded the 'python-tools-0.6.5' and 'atom-script-2.29.0' files from the Atom website and I just need to know how to get them working. | Run Code In Atom Code Editor | 0 | 0 | 0 | 17,470 |
33,072,570 | 2015-10-12T02:53:00.000 | 4 | 0 | 1 | 0 | python,oop | 33,072,852 | 6 | false | 0 | 0 | A class defines a real world entity. If you are working on something that exists individually and has its own logic that is separate from others, you should create a class for it. For example, a class that encapsulates database connectivity.
If this is not the case, there is no need to create a class | 3 | 295 | 0 | I have been programming in python for about two years; mostly data stuff (pandas, mpl, numpy), but also automation scripts and small web apps. I'm trying to become a better programmer and increase my python knowledge and one of the things that bothers me is that I have never used a class (outside of copying random flask code for small web apps). I generally understand what they are, but I can't seem to wrap my head around why I would need them over a simple function.
To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, send it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this and why? | When should I be using classes in Python? | 0.132549 | 0 | 0 | 140,866 |
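To illustrate the "class that encapsulates database connectivity" point above, here is a minimal sketch (the class name, table, and methods are invented for illustration, using sqlite3 as a stand-in data source):

```python
import sqlite3

class Database:
    """Encapsulates a database connection and the query logic around it."""

    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def fetch_users(self):
        # All query logic lives in one place instead of loose functions.
        return self.conn.execute("SELECT name FROM users").fetchall()

    def close(self):
        self.conn.close()

db = Database(":memory:")
db.conn.execute("CREATE TABLE users (name TEXT)")
db.conn.execute("INSERT INTO users VALUES ('alice')")
print(db.fetch_users())  # [('alice',)]
db.close()
```

The benefit is that everything that "exists individually and has its own logic" (connection, queries, cleanup) travels together as one object.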
33,072,570 | 2015-10-12T02:53:00.000 | 1 | 0 | 1 | 0 | python,oop | 52,722,066 | 6 | false | 0 | 0 | It depends on your idea and design. If you are a good designer, then OOPs will come out naturally in the form of various design patterns.
For simple script-level processing, OOPs can be overhead.
Simply consider the basic benefits of OOPs like reusability and extendability and make sure if they are needed or not.
OOPs make complex things simpler and simpler things complex.
Simply keep the things simple in either way using OOPs or not using OOPs. Whichever is simpler, use that. | 3 | 295 | 0 | I have been programming in python for about two years; mostly data stuff (pandas, mpl, numpy), but also automation scripts and small web apps. I'm trying to become a better programmer and increase my python knowledge and one of the things that bothers me is that I have never used a class (outside of copying random flask code for small web apps). I generally understand what they are, but I can't seem to wrap my head around why I would need them over a simple function.
To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, send it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this and why? | When should I be using classes in Python? | 0.033321 | 0 | 0 | 140,866 |
33,072,570 | 2015-10-12T02:53:00.000 | 30 | 0 | 1 | 0 | python,oop | 33,073,271 | 6 | false | 0 | 0 | Whenever you need to maintain a state of your functions and it cannot be accomplished with generators (functions which yield rather than return). Generators maintain their own state.
If you want to override any of the standard operators, you need a class.
Whenever you have a use for a Visitor pattern, you'll need classes. Every other design pattern can be accomplished more effectively and cleanly with generators, context managers (which are also better implemented as generators than as classes) and POD types (dictionaries, lists and tuples, etc.).
If you want to write "pythonic" code, you should prefer context managers and generators over classes. It will be cleaner.
If you want to extend functionality, you will almost always be able to accomplish it with containment rather than inheritance.
As with every rule, this has an exception. If you want to encapsulate functionality quickly (i.e., write test code rather than library-level reusable code), you can encapsulate the state in a class. It will be simple and won't need to be reusable.
If you need a C++ style destructor (RAII), you definitely do NOT want to use classes. You want context managers.
To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, send it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this and why? | When should I be using classes in Python? | 1 | 0 | 0 | 140,866 |
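To sketch the first point above — a generator keeping its own state between calls where a class might otherwise be used (a running-total accumulator, chosen here purely for illustration):

```python
def running_total():
    """Generator that maintains its own state (the running total)."""
    total = 0
    while True:
        # Each send() delivers a value and resumes here.
        value = yield total
        total += value

acc = running_total()
next(acc)            # prime the generator up to the first yield
print(acc.send(5))   # 5
print(acc.send(3))   # 8
```

The state (`total`) lives in the generator's frame, so no class attribute or global is needed.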
33,072,577 | 2015-10-12T02:54:00.000 | 0 | 0 | 0 | 0 | python,django | 33,072,629 | 2 | false | 1 | 0 | It depends what are your queries, Your single query to the database will never face any race condition due to ACID principle impose by the database. But if there is any condition like first you are reading the data from database and after some operation on application level you are writing the updated data back to the database, then race condition may occur for that you have to implement the locks or the mutex in python. | 1 | 2 | 0 | Is there any simple way to define critical section?
when a user during the updating some database table, I'd force to make the other user cannot update on the same tables. | How to define critical section in django | 0 | 0 | 0 | 341 |
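The read-modify-write race mentioned above can be sketched at the application level with a plain lock (this uses Python's stdlib `threading.Lock`; for database rows specifically, Django's `select_for_update()` inside `transaction.atomic()` is the usual tool, not shown here):

```python
import threading

counter = 0
lock = threading.Lock()

def update():
    global counter
    for _ in range(100000):
        # Only one thread may perform the read-modify-write at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=update) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — without the lock, updates would be lost
```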
33,075,289 | 2015-10-12T07:17:00.000 | 0 | 0 | 0 | 1 | python,ipc,zeromq,pyzmq | 33,075,923 | 1 | false | 0 | 0 | put some sleep before connecting. so bind will run first, and connect will continue after waiting for sometime | 1 | 0 | 0 | Trying to establish some communication between two python processes , I've come to use pyzmq. Since the communication is simple enough I 'm using the Zmq.PAIR messaging pattern with a tcp socket. Basically one process binds on an address and the other one connects to the same address . However both operations happen at startup , and since I cannot control the order in which the processes start , I am often encountering the case in which 'connect()' is called before 'bind()' which leads to failing in establishing communication.
Is there a way to know a socket is not yet ready to be connected to ?
What are the strategies to employ in such situations in order to obtain a safe connection ? | PyZmq ensure connect() after bind() | 0 | 0 | 0 | 129 |
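The sleep idea above can be made more robust by retrying the connect until the peer has actually bound; a sketch with plain TCP sockets (the ports and delays are arbitrary — and note that ZeroMQ itself normally handles connect-before-bind by reconnecting automatically):

```python
import socket
import threading
import time

def connect_with_retry(addr, attempts=50, delay=0.1):
    """Keep retrying until the peer has bound and is listening, or give up."""
    for _ in range(attempts):
        try:
            return socket.create_connection(addr, timeout=1)
        except OSError:
            time.sleep(delay)
    raise OSError("peer never bound %s:%s" % addr)

# Simulate the 'bind happens late' case: the listener appears after 0.3 s.
server = socket.socket()
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Timer(0.3, server.listen).start()

conn = connect_with_retry(("127.0.0.1", port))
print("connected")
conn.close()
server.close()
```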
33,078,383 | 2015-10-12T10:02:00.000 | 0 | 0 | 0 | 0 | python,sqlite,cookies | 33,078,468 | 1 | true | 0 | 0 | That's not a programming question as is -- sqlite sets a flag in the file so that when trying to open the file a second time, the other sqlite instance knows that the file is "dirty", because the first Sqlite is actively modifying it. There's no way around this -- data integrity is the core functionality of databases, and hence, you really shouldn't try to do that.
I'd generally recommend just writing a Firefox extension, which then can access the cookie database as accessible from within firefox, and export whatever you need to your external program. | 1 | 0 | 0 | I tried to open and read the contents of cookie.sqlite file in the firefox.
cookie.sqlie is the database file were all cookies of webpages opened in firefox are stored. When I am trying to access with a python program it is not allowing to read, as cookie.sqlite is locked. How to open and read the contents. | how to read cookie.sqlite file in firefox through python program | 1.2 | 1 | 0 | 858 |
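A common workaround, not mentioned in the answer above (my assumption: it works because the copy is not held open by Firefox, though copying while Firefox is mid-write can yield an inconsistent snapshot), is to copy the locked file and read the copy:

```python
import os
import shutil
import sqlite3
import tempfile

def read_locked_db(path, query):
    """Copy the (possibly locked) sqlite file and query the copy instead."""
    tmp = os.path.join(tempfile.mkdtemp(), "copy.sqlite")
    shutil.copy2(path, tmp)
    conn = sqlite3.connect(tmp)
    try:
        return conn.execute(query).fetchall()
    finally:
        conn.close()

# For Firefox the cookie table is typically named 'moz_cookies', e.g.:
# rows = read_locked_db("/path/to/cookie.sqlite",
#                       "SELECT host, name, value FROM moz_cookies")
```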
33,080,063 | 2015-10-12T11:32:00.000 | -3 | 1 | 0 | 0 | javascript,python,ios,swift,encryption | 33,080,475 | 3 | false | 1 | 0 | It's very wide question and the variety of ways to do that.
First of all, you need to choose a method of the encryption and purpose why you encrypt the data.
There are 3 main encryption methods:
1. Symmetric Encryption
The encrypter and decrypter have access to the same key. It's quite easy, but it has one big drawback - the key needs to be shared, so if you put it on the client it can be stolen and abused.
As a solution, you would need to use another method to send the key and encrypt it on the client.
2. Asymmetric Encryption
With asymmetric encryption, the situation is different. To encrypt data you need to use public/private key pair.
The public key is usually used to encrypt data, but decryption is possible only with the private key. So you can hand out the public key to your clients, and they will be able to send encrypted traffic back to you.
But you still need to be sure that you are talking to the right public key, so that your encryption is not abused.
Usually, it's used in TLS (SSL), SSH, signing updates, etc.
3. Hashing (it's not really encryption)
It's the simplest. With hashing, you produce a string that can't be reverted, with the rule that the same data will always produce the same hash.
So you can just pick the most suitable method and try to find an appropriate package in the language you use. | 1 | 0 | 0 | I have very simple iOS and Android applications that download a txt file from a web server and present it to the user. I'm looking for a way that only the application will be able to read the file, so no one else can download and use it.
I would like to take the file that is on my computer, encrypt it somehow, upload the result to the server and when the client will download the file it will know how to read it.
What is the simplest way to to this kind of thing?
Thanks a lot! | encrypting data on server and decrypting it on client | -0.197375 | 0 | 0 | 1,327 |
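The hashing point above can be sketched with Python's stdlib hashlib — the same input always yields the same digest, and the digest cannot be reversed:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest: deterministic and one-way."""
    return hashlib.sha256(data).hexdigest()

h1 = digest(b"hello")
h2 = digest(b"hello")
h3 = digest(b"hellp")
print(h1 == h2)  # True  — same data, same hash
print(h1 == h3)  # False — one byte changed, completely different hash
```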
33,080,794 | 2015-10-12T12:10:00.000 | 0 | 0 | 1 | 0 | python,csv,python-3.x | 40,980,117 | 3 | false | 0 | 0 | First of all, at the top of the code do import csv
After that, you need to set a variable name so you can open the CSV file. For example, data=open('CSV name', 'rt'). You will need to fill in where it says CSV name. That's how you open it.
To read a CSV file, you set another variable. For example, data2=csv.reader(data) | 1 | 0 | 1 | I want to import CSV files in a python script. Column and row numbers are not fixed; the first row contains the names of the variables and the following rows are the values of those variables.
I am new to Python, any help is appreciated. Thanks. | importing csv file in python | 0 | 0 | 0 | 4,532 |
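Since the first row holds the variable names, `csv.DictReader` maps each following row onto those names automatically; a minimal sketch (the file contents and column names here are made up — an `io.StringIO` stands in for `open("data.csv")`):

```python
import csv
import io

# Stand-in for open("data.csv"); the columns are invented for illustration.
data = io.StringIO("name,age\nalice,30\nbob,25\n")

rows = list(csv.DictReader(data))  # header row becomes the dict keys
print(rows[0]["name"])  # alice
print(rows[1]["age"])   # 25
```

Note that all values come back as strings; convert numeric columns with `int()`/`float()` as needed.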
33,082,220 | 2015-10-12T13:19:00.000 | 0 | 0 | 0 | 0 | python,opencv | 33,101,046 | 1 | false | 0 | 0 | Install Drivers for required camera, connect it, and use cv2.VideoCapture(int). Here, instead of 0, use a different integer according to the camera. By default, 0 is for the inbuilt webcam.
e.g.: cv2.VideoCapture(1) | 1 | 0 | 1 | The quality of video recording that is required for our project is not met by the webcams. Is it possible to use high megapixel digital cameras (Sony, Canon, Olympus) with OpenCV ?
How to talk to the digital cameras using OpenCV (and specifically using Python) | OpenCV - using digital cameras. | 0 | 0 | 0 | 1,805 |
33,086,758 | 2015-10-12T17:16:00.000 | 3 | 0 | 0 | 0 | python,pandas | 33,087,239 | 1 | true | 0 | 0 | Try changing pandas.options.display.width. (It's 80 by default) | 1 | 3 | 1 | I want to print a Dataframe to a text file,
Say I have a table with 4 lines, and 12 columns.
It looks quite nice when I just use print df, with all the values of a column aligned to the right; however, when there are too many columns (8 in my case) it breaks the table down so that the last 4 columns are printed after 4 lines of 8 values, probably as pandas tries to make the table fit in an 80-character line.
I tried df.to_csv().replace(',','\t'), but then entries longer than a tab cause a jump in the line, and the lines are no longer aligned.
How can I get the nice, orderly, right-aligned format without enforcing 80 characters per line? | print a pandas dataframe to text with lines longer than 80 chars | 1.2 | 0 | 0 | 259 |
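A sketch of the suggestion above (assuming a reasonably recent pandas; the column names and values are made up):

```python
import pandas as pd

pd.options.display.width = 200          # default is 80; None auto-detects
pd.options.display.max_columns = None   # never elide columns

df = pd.DataFrame({f"col{i}": [0, 1, 2, 3] for i in range(12)})
print(df)  # all 12 columns now stay on one set of right-aligned lines
```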
33,087,364 | 2015-10-12T17:57:00.000 | 1 | 0 | 1 | 0 | python,apache-storm | 33,163,891 | 1 | true | 0 | 0 | Hmmm... Your question is kind of genetic and hard to answer. I personally would just go with Storm because:
It keeps the code simpler.
Installing Storm is not too hard.
For non-parallel execution you might be able to use Storm's LocalCluster, which executes Storm programs on a single JVM and thus does not require setting up a cluster. However, be aware that LocalCluster was designed for testing code rather than for use in production (if I am not wrong -- maybe you should double-check other sources to verify). I'm not sure how stably it runs in the long term (and of course, there is no fault-tolerance -- however, if you implement your own local application or "proxy library", you might have no fault-tolerance either).
Writing a "proxy library" is not a bad idea. But do not underestimate the complexity of this approach! It will become quite difficult to build it in a fully compatible way. | 1 | 1 | 0 | I'm creating an application and would like it to be scalable using Storm. The problem is, most users will actually run it in a single node.
I don't think it's fair for those users to have to install Storm in a single node just to use the application. Is there a way to make the application only require Storm if the user needs to run it on multiple machines without having to rewrite part of the application? Maybe a library?
I know I could always write part of the code specific to running in a Storm environment and let the users decide, but I'm looking for a more elegant approach. If no solution exists, I'm thinking about writing a "proxy library" that would decide whether to use bolts in a Storm fashion or in a single node.
I'm using Python, but I can use another language if the solution requires it. | Storm alternative for deploying single node | 1.2 | 0 | 0 | 99 |
33,087,881 | 2015-10-12T18:31:00.000 | 0 | 0 | 1 | 0 | python,sympy | 33,223,272 | 2 | false | 0 | 0 | Just multiply the equations together. The product of two expressions is zero if one or the other is zero.
In SymPy, solve(expr, x) assumes that expr = 0 (for instance, solve(x**2 - 1, x) gives [-1, 1]).
If you want to solve, for instance, x**2 - 1 = 0 OR x**2 - 4 = 0, you should use solve((x**2 - 1)*(x**2 - 4), x). | 1 | 1 | 0 | Suppose I had a system of equations as follows but I have the requirement that within my system of equations I need 1 of 2 statements to be true.
Example:
sympy.solve([eq1 or eq2, eq3 or eq4, eq5 or eq6,...], [vars])
When I put in the or statements, it doesn't work as I intended (the first equation is always evaluated). How can I express the system of equations with or as I intend?
Thanks. | Sympy python OR statements in solve() function | 0 | 0 | 0 | 81 |
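The multiplication trick above can be sketched with SymPy directly — the roots of the product are the union of the roots of the factors:

```python
from sympy import symbols, solve

x = symbols('x')
# Solve: x**2 - 1 = 0  OR  x**2 - 4 = 0
solutions = solve((x**2 - 1) * (x**2 - 4), x)
print(sorted(solutions))  # [-2, -1, 1, 2]
```

For several OR'd pairs within a larger system, each pair would be replaced by one product equation in the list passed to `solve`.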
33,088,102 | 2015-10-12T18:44:00.000 | 2 | 1 | 0 | 0 | python,c++,scripting,embedded-language | 33,088,189 | 2 | false | 0 | 1 | Why would someone need/want to embed a scripting language into a programming language?
The main reason obviously is to allow extensions to the game engine to be provided without the need to recompile the entire game executable program, but instead have the extensions loaded and interpreted at run time.
Many game engines provide such feature for extensibility.
... but I greatly understand how object orientation works.
Object orientation comes in with the interfaces declared for how to interact with the particular scripts.
So Python is itself an object-oriented language which supports OOP principles quite well.
For instance, integrating non-OOP scripting languages, like e.g. Lua scripts (also often used for extensions), makes that harder, but not impossible after all. | 1 | 2 | 0 | I'm interested in making games in the future, and I've heard that my favourite game's engine is made with C++, but it's embedded with Python. I have little experience with programming, but I greatly understand how object orientation works. | What is the purpose of embedding a scripting language into a programming language in a game engine? | 0.197375 | 0 | 0 | 159 |
33,088,597 | 2015-10-12T19:16:00.000 | 0 | 0 | 0 | 0 | python,events,triggers,tkinter,keypress | 33,089,284 | 2 | false | 0 | 1 | There may be nothing you can do. It could very well be that the keyboard itself is sending multiple events (ie: it's a hardware problem that can't be solved with software).
These events are probably coming very close together -- maybe every 100ms or so. You can use that knowledge to affect how you process events. For example, you can only do something special on a key release if it was at least 200ms after the last press. | 1 | 0 | 0 | What do I have to do so that the function that I bind with the <KeyPress> event only triggers once, even when I hold the key? | Python: Tkinter Keypress Event Trigger once: Hold vs. Pressed | 0 | 0 | 0 | 836 |
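The 200 ms idea above can be sketched independently of Tkinter (the threshold and class are my own invention for illustration; in a real handler you would feed it the event's timestamp):

```python
class Debouncer:
    """Accept an event only if enough time has passed since the last one."""

    def __init__(self, min_gap=0.2):
        self.min_gap = min_gap          # seconds between "real" presses
        self.last = float("-inf")       # time of the last accepted event

    def accept(self, now):
        if now - self.last >= self.min_gap:
            self.last = now
            return True
        return False  # too soon after the previous event: key auto-repeat

d = Debouncer()
print(d.accept(0.00))  # True  — first event
print(d.accept(0.10))  # False — 100 ms later, looks like key repeat
print(d.accept(0.35))  # True  — 250 ms gap, a genuine new press
```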
33,089,288 | 2015-10-12T20:02:00.000 | 0 | 0 | 1 | 0 | ipython-notebook | 33,089,553 | 2 | false | 0 | 0 | The three tabs "Files", "Running" and "Clusters" should be correct. The "Running" tab has the active notebooks listed.
Yes, perhaps it was an older tutorial. Do you have a link? | 2 | 0 | 0 | I have installed iPython Notebook as a component of my Anaconda installation. To become familiar with using iPython Notebook I started through an introductory tutorial and immediately ran into a discrepancy. When I open the notebook the tutorial shows I should have three tabs across the top labelled 'Notebooks', 'Running' and 'Clusters'. Instead I have 'Files', 'Running' and 'Clusters'. I cannot figure out how to switch the Files tab to the Notebooks tab...or is the Files tab correct and is the tutorial out-of-date? I would much prefer to just have the Notebooks view as listing all the file folders just gets in the way. Can someone tell me how to swap out the Files tab for the Notebooks tab? | Missing 'Notebooks' tab | 0 | 0 | 0 | 135 |
33,089,288 | 2015-10-12T20:02:00.000 | 0 | 0 | 1 | 0 | ipython-notebook | 39,973,387 | 2 | false | 0 | 0 | To start the notebook in notebook server 4.2.1, go to file tab and on the top right side, you will see new drop down menu. Click on the python[root] and it should start the new notebook. | 2 | 0 | 0 | I have installed iPython Notebook as a component of my Anaconda installation. To become familiar with using iPython Notebook I started through an introductory tutorial and immediately ran into a discrepancy. When I open the notebook the tutorial shows I should have three tabs across the top labelled 'Notebooks', 'Running' and 'Clusters'. Instead I have 'Files', 'Running' and 'Clusters'. I cannot figure out how to switch the Files tab to the Notebooks tab...or is the Files tab correct and is the tutorial out-of-date? I would much prefer to just have the Notebooks view as listing all the file folders just gets in the way. Can someone tell me how to swap out the Files tab for the Notebooks tab? | Missing 'Notebooks' tab | 0 | 0 | 0 | 135 |
33,091,980 | 2015-10-12T23:52:00.000 | 2 | 0 | 1 | 0 | python,csv | 67,879,731 | 3 | false | 0 | 0 | The technical difference is that writerow is going to write a list of values into a single row whereas writerows is going to write multiple rows from a buffer that contains one or more lists.
The practical difference is that writerows is going to be faster, especially if you have a large number of writes to perform, because it can carry them out all at once. | 1 | 16 | 0 | I am new to the realm of Python. I've been playing with some I/O operations on CSV files lately, and I found two methods in the csv module with very similar names - writerow() and writerows(). The difference wasn't very clear to me from the documentation. I tried searching for some examples but they seem to have used them almost interchangeably.
Could anyone help clarify a little bit? | Difference between writerow() and writerows() methods of Python csv module | 0.132549 | 0 | 0 | 48,810 |
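The difference can be shown in a short sketch, writing to an in-memory buffer instead of a file:

```python
import csv
import io

buf = io.StringIO()
w = csv.writer(buf)

w.writerow(["a", 1])                 # one row, from one list of values
w.writerows([["b", 2], ["c", 3]])    # many rows, from a list of lists

print(buf.getvalue())
# a,1
# b,2
# c,3
```

So `w.writerows(rows)` is roughly equivalent to `for row in rows: w.writerow(row)`, carried out in one call.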
33,092,424 | 2015-10-13T00:46:00.000 | 0 | 0 | 0 | 1 | python,libreoffice | 33,251,685 | 3 | true | 1 | 0 | Finally, I found a way to solve this using Python, in an elegant and easy way. Instead of libraries or APIs, Im using a socket to connect to Impress and control it.
At the end of the post you can read the full-text that indicates how to control Impress this way. It is easy, and amazing.
You send a message using Python to Impress ( that is listening in some port ), it receives the message and does things based on your request.
You must enable this "remote control" feeature in the app. I solved my problem using this.
Thanks for your replies!.
LibreOffice Impress Remote Protocol Specification
Communication is over a UTF-8 encoded character stream.
(Using RTL_TEXTENCODING_UTF8 in the LibreOffice portion.)
TCP
More TCP-specific details on setup and initial handshake to be
written, but the actual message protocol is the same as for Bluetooth.
Message Format
A message consists of one or more lines. The first line is the message description,
further lines can add any necessary data. An empty line concludes the message.
I.e. "MESSAGE\n\n" or "MESSAGE\nDATA\nDATA2...\n\n"
You must keep reading a message until an empty line (i.e. double
new-line) is reached to allow for future protocol extension.
Initialisation
Once connected the server sends "LO_SERVER_SERVER_PAIRED".
(I.e. "LO_SERVER_SERVER_PAIRED\n\n" is sent over the stream.)
Subsequently the server will send either slideshow_started if a slideshow is running,
or slideshow_finished if no slideshow is running. (See below for details of.)
The current server implementation then proceeds to send all slide notes and previews
to the client. (This should be changed to prevent memory issues, and a preview
request mechanism implemented.)
Commands (Client to Server)
The client should not assume that the state of the server has changed when a
command has been sent. All changes will be signalled back to the client.
(This is to allow for cases such as multiple clients requesting different changes, etc.)
Any lines in [square brackets] are optional, and should be omitted if not needed.
transition_next
transition_previous
goto_slide
slide_number
presentation_start
presentation_stop
presentation_resume // Resumes after a presentation_blank_screen.
presentation_blank_screen
[Colour String] // Colour the screen will show (default: black). Not
// implemented, and format hasn't yet been defined.
As of gsoc2013, these commands are extended to the existing protocol, since server-end are tolerant with unknown commands, these extensions doesn't break backward compatibility
pointer_started // create a red dot on screen at initial position (x,y)
initial_x // This should be called when user first touch the screen
initial_y // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size
pointer_dismissed // This dismiss the pointer red dot on screen, should be called when user stop touching screen
pointer_coordination // This update pointer's position to current (x,y)
current_x // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size
current_y // unless screenupdater's performance is significantly improved, we should consider limit the update frequency on the
// remote-end
Status/Data (Server to Client)
slideshow_finished // (Also transmitted if no slideshow running when started.)
slideshow_started // (Also transmitted if a slideshow is running on startup.)
numberOfSlides
currentSlideNumber
slide_notes
slideNumber
[Notes] // The notes are an html document, and may also include \n newlines,
// i.e. the client should keep reading until a blank line is reached.
slide_updated // Slide on server has changed
currentSlideNumber
slide_preview // Supplies a preview image for a slide.
slideNumber
image // A Base 64 Encoded png image.
As of gsoc2013, these commands are extended to the existing protocol, since remote-end also ignore all unknown commands (which is the case of gsoc2012 android implementation), backward compatibility is kept.
slideshow_info // once paired, the server-end will send back the title of the current presentation
Title | 1 | 1 | 0 | Im writing an application oriented to speakers and conferences. Im writing it with Python and focused on Linux.
I would like to know if its possible to control LibreOffice Impress with Python, under Linux in some way.
I want to start an instance of LibreOffice Impress with some .odp file loaded, from my Python app. Then, I would like to be able to receive from the odp some info like: previous, current and next slide. Or somehow generate the images of the slides on the go.
Finally, I want to control LibreOffice in real time. This is: move through the slides using direction keys; right and left.
The idea is to use python alone, but I don't mind using external libraries or frameworks.
Thanks a lot. | Control Libreoffice Impress from Python | 1.2 | 0 | 0 | 3,178 |
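Given the message format in the spec above ("MESSAGE\nDATA...\n" terminated by an empty line), a tiny helper to frame commands before sending them over the TCP socket might look like this (the actual socket connection to Impress is omitted):

```python
def frame(command, *args):
    """Build an Impress remote message: one line per field, then a blank line."""
    lines = (command,) + tuple(str(a) for a in args)
    return ("\n".join(lines) + "\n\n").encode("utf-8")

print(frame("transition_next"))  # b'transition_next\n\n'
print(frame("goto_slide", 4))    # b'goto_slide\n4\n\n'

# In use, after connecting to the port Impress listens on:
# sock.sendall(frame("goto_slide", 4))
```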
33,092,493 | 2015-10-13T00:55:00.000 | 6 | 0 | 0 | 0 | python,matplotlib,machine-learning,visualization,pca | 33,094,494 | 2 | false | 0 | 0 | Don't confuse PCA with dimensionality reduction.
PCA is a rotation transformation that aligns the data with the axes in such a way that the first dimension has maximum variance, the second maximum variance among the remainder, etc. Rotations preserve pairwise distances.
When you use PCA for dimensionality reduction, you discard dimensions of your rotated data that have the least variance. High variance is achieved when points are spread far from the mean. Low-variance dimensions are those in which the values are mostly the same, so their absence is presumed to have the least effect on pairwise distances. | 2 | 3 | 1 | I am currently reading up on the t-SNE visualization technique and it was mentioned that one of the drawbacks of using PCA for visualizing high-dimensional data is that it only preserves large pairwise distances between the points. Meaning points which are far apart in high dimensions would also appear far apart in low dimensions, but other than that all other point distances get screwed up.
Could someone help me understand why that is and what it means graphically?
Thanks a lot! | What is meant by PCA preserving only large pairwise distances? | 1 | 0 | 0 | 1,245 |
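The rotation point above can be checked numerically — an orthogonal transform (such as the PCA rotation) leaves every pairwise distance unchanged; only dropping dimensions afterwards distorts them. A small sketch with NumPy (the data and rotation are random, generated just for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))

# Random orthogonal (rotation/reflection) matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Y = X @ Q

def pairwise(A):
    """All pairwise Euclidean distances between the rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

print(np.allclose(pairwise(X), pairwise(Y)))        # True: rotation preserves distances
print(np.allclose(pairwise(X), pairwise(Y[:, :2]))) # False: keeping only 2 dims does not
```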
33,092,493 | 2015-10-13T00:55:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,machine-learning,visualization,pca | 66,142,281 | 2 | false | 0 | 0 | If I can re-phrase @Don Reba's comment:
The PCA transformation itself does not alter distances.
The 2-dimensional plot often used to visualise the PCA results takes into account only two dimensions, disregards all the other dimensions, and as such this visualisation provides a distorted representation of distances. | 2 | 3 | 1 | I am currently reading up on t-SNE visualization technique and it was mentioned that one of the drawback of using PCA for visualizing high dimension data is that it only preserves large pairwise distances between the points. Meaning points which are far apart in high dimension would also appear far apart in low dimensions but other than that all other points distances get screwed up.
Could someone help me understand why that is and what it means graphically?
Thanks a lot! | What is meant by PCA preserving only large pairwise distances? | 0 | 0 | 0 | 1,245 |
33,093,371 | 2015-10-13T02:57:00.000 | 1 | 0 | 1 | 0 | python-2.7 | 33,093,420 | 2 | true | 0 | 0 | I would like to know whether there is a way to open a file in python
by entering only its name
You can open a file without providing the path, as long as the file is located in the same directory as the Python script. | 1 | 0 | 0 | I know that this question may be silly, and I am OK with that since I am at a rookie level.
I would like to know whether there is a way to open a file in python by entering only its name, e.g. open('mbox.txt') instead of open('C:\Python27\mbox.txt')?
Thanks | How to open a file from python with the name only? | 1.2 | 0 | 0 | 106 |
33,095,441 | 2015-10-13T06:18:00.000 | 3 | 0 | 0 | 0 | python,hex | 33,096,117 | 3 | true | 0 | 0 | I try to use like this : "{0:03x}".format(number) and works | 2 | 0 | 0 | I want make my hex number have 2 bytes size. How I can do it? Example,if in C we can make %02f. How I make it in Python2.7?
Thankyou | Make Fix Size of Hex Number Python | 1.2 | 0 | 0 | 1,870 |
33,095,441 | 2015-10-13T06:18:00.000 | 1 | 0 | 0 | 0 | python,hex | 33,095,970 | 3 | false | 0 | 0 | You can format it in a similar way. Just use "%02x" % (the_number,) | 2 | 0 | 0 | I want make my hex number have 2 bytes size. How I can do it? Example,if in C we can make %02f. How I make it in Python2.7?
Thankyou | Make Fix Size of Hex Number Python | 0.066568 | 0 | 0 | 1,870 |
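Both the %-style and str.format spellings pad with zeros to a fixed width; for a 2-byte value you want 4 hex digits:

```python
number = 0x2F

one_byte = "%02x" % number            # '2f'   - 2 hex digits = 1 byte
two_bytes = "{0:04x}".format(number)  # '002f' - 4 hex digits = 2 bytes

print(one_byte, two_bytes)
```

The digit before `x` is the minimum field width in hex digits, so pick 2, 4, 8, ... depending on how many bytes you want to represent.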
33,101,603 | 2015-10-13T11:33:00.000 | 3 | 0 | 0 | 1 | python,docker,uwsgi,wake-on-lan | 67,018,294 | 1 | false | 0 | 0 | It seems that UDP broadcast from docker isn't being routed properly (possibly only broadcasted in the container itself, not on the host).
You can't send a directly addressed UDP WoL message, as the device you're trying to control is 'offline': it doesn't show up in your router's ARP table, and thus the direct message can't be delivered.
You may try setting (CLI) --network host or (compose) network_mode: host.
If you feel this may compromise security (since your container's and host's networks are more directly 'connected') or it otherwise interferes with your container, you may create/use a separate 'WoL' container.
It works fine without the use of docker (normal uwsgi app directly on the server), but with docker it won't work.
I exposed port 9/udp and bound it to port 9 of the host system.
What am I missing here? Or with other words, how can I send a wake on lan command from a docker container to the outside network? | Send a wake on lan packet from a docker container | 0.53705 | 0 | 0 | 5,187 |
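For reference, a Wake-on-LAN magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A sketch (the MAC address is a placeholder; from inside a container the broadcast only reaches the LAN with host networking, as discussed above):

```python
import socket

def make_magic_packet(mac):
    """Wake-on-LAN payload: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

packet = make_magic_packet("aa:bb:cc:dd:ee:ff")  # placeholder MAC
print(len(packet))  # 102 = 6 + 16 * 6
```

The packet contents are independent of the networking problem; it is the broadcast delivery that depends on the container's network mode.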
33,103,572 | 2015-10-13T13:09:00.000 | 0 | 0 | 1 | 0 | qpython | 33,681,879 | 1 | false | 0 | 0 | I think it tries to install the Mac version***, which is obviously not compatible. At least, that's what it looked like when scrolling through the CLI output when issuing the pip install numpy command using the pip script.
I don't think the numpy/scipy modules are available for Qpython for Android. I think we're just going to have to wait. I've been googling this same question and I have not found an answer.
***Ok, the script says it's a version for a variety of OS', just not Android. I have no idea why pip install would download it if it wasn't compatible. It doesn't make sense, so I'm obviously missing something. | 1 | 0 | 0 | In Qpython which uses Python 2.7.2; I can't install the numpy module. The message I get is: command python setup.py egg_info failed with error code 2 | Installation error numpy in Qpython | 0 | 0 | 0 | 1,056 |
33,105,395 | 2015-10-13T14:32:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,setup.py | 33,107,397 | 1 | false | 0 | 0 | setup.py should not touch the global system-wide config.
If you need to place configuration files around the operating system files, please create a separate installation script, preferably using OS tools, and then ask the package users to run this script after completing setup.py installation.
E.g. on Ubuntu you would create a .deb package which takes care of operating system side of things when installing the package. | 1 | 2 | 0 | I have a bunch of conf file and I am running python setup.py install after initializing virtual environment. I want to install them on the machine's OS's /opt or /etc so they are available to all applications.
How do I do this? | python setup.py install outside virtualenv | -0.197375 | 0 | 0 | 277
33,107,955 | 2015-10-13T16:30:00.000 | 1 | 0 | 1 | 0 | python,django,pycharm,ipython,jupyter-notebook | 37,176,675 | 1 | true | 0 | 0 | Turned out to be easier than I thought.
So, in order to use an IPython Notebook to debug a local Django app in the Django console, you need to add the django-extensions library to your project's interpreter, append it to INSTALLED_APPS in settings.py, then go to the "Terminal" tab in PyCharm (its options are in Settings -> Tools -> Terminal), navigate to the folder containing manage.py if you are not there, and run the command
python manage.py shell_plus --notebook | 1 | 0 | 0 | PyCharm 4.5.4 has an option "Python Console" (Tools -> Python Console) which runs a Python interpreter with proper paths and allows to debug Django projects (for example). Aslo it has an option of using IPython if it is present, and the question is if it is possible to connect an IPython Notebook or QtConsole to this interpreter?
I've tried using %connect_info and %qtconsole, but it doesn't work, probably that means that the kernel is not running (like if I were to run just $ ipython). If so can it be started without a lot of trouble? | Connect Ipython Notebook to PyCharm Python console | 1.2 | 0 | 0 | 811 |
33,112,374 | 2015-10-13T20:50:00.000 | 0 | 1 | 1 | 0 | python-3.x,error-handling | 33,115,056 | 1 | true | 0 | 0 | No, there is not for the simple reason an error may be transit for you while for someone else it may be seen as permanent, depending on the usage.
If in doubt, I choose the transient one.
If even you can't be sure about the errors from the software you designed, how could anyone make this choice universal?
One approach is to deal with only the most basic subclasses. You say you want to retry on I/O errors and timeouts; since Python 3.3, IOError and some other exceptions like socket.error have been merged into OSError. So, you can simply check for OSError and it will cover those and many other subclasses like TimeoutError, ConnectionError, FileNotFoundError...
You can see the affected classes with OSError.__subclasses__(). Try on a python shell (do this recursively to find all of them). | 1 | 1 | 0 | In my current project I have modules communicating using simple request/reply form of RPC (remote procedure calls). I want to automatically retry failed requests if and only if there is a chance that a new attempt might be successfull.
Vast majority of errors are permanent, but errors like timeouts and I/O errors are not.
I have defined two custom exceptions - RPCTransientError and RPCPermanentError - and currently I map all errors to one of these two exceptions. If in doubt, I choose the transient one.
I do not want to reinvent the wheel. My question is: is there any existing resource regarding classification of standard exceptions to transient and permanent errors?
I'm using Python 3.3+ with the new OS and IO related exception hierarchy that I like a lot. (PEP 3151). Don't care about previous versions. | Is there any classification of standard Python exceptions to transient and permanent errors? | 1.2 | 0 | 0 | 359 |
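A minimal version of the "retry only on OSError-ish failures" policy suggested above (a sketch; the transient/permanent split is a convention, not a standard, and the flaky function is made up for demonstration):

```python
def is_transient(exc):
    # On Python 3.3+, OSError covers TimeoutError, ConnectionError,
    # IOError and socket.error, so one check handles the common cases.
    return isinstance(exc, OSError)

def call_with_retry(func, attempts=3):
    for attempt in range(attempts):
        try:
            return func()
        except Exception as exc:
            if not is_transient(exc) or attempt == attempts - 1:
                raise  # permanent error, or out of retries

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("temporary failure")
    return "ok"

print(call_with_retry(flaky))  # ok
```

Because TimeoutError and ConnectionError are OSError subclasses in the PEP 3151 hierarchy, the first two failures are retried and the third call succeeds.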
33,112,874 | 2015-10-13T21:23:00.000 | 0 | 0 | 1 | 0 | python,orange | 33,124,530 | 1 | false | 0 | 0 | No, there is no function to do this. Apparently nobody ever needed it.
You can do it yourself, but if you know some Python it should be easier to test a list of rules without using Orange. | 1 | 0 | 1 | I am wondering if Orange can read in a text format rule file and use it to score another dataset. For example, a rule.txt file was previously created in Orange through rule_to_string function and contains rules in this IF...THEN format:
"IF sex=['female'] AND status=['first'] THEN survived=yes". Can Orange read in the rule.txt file and use it to score a test.csv dataset? Thank you very much for helping! | Can Orange read in IF...THEN text format rule file and use it to score another dataset? | 0 | 0 | 0 | 86 |
33,113,938 | 2015-10-13T22:52:00.000 | 2 | 0 | 1 | 0 | python,floating-point,integer,type-conversion,coercion | 33,113,979 | 4 | false | 0 | 0 | We have a good, obvious idea of what "make an int out of this float" means because we think of a float as two parts and we can throw one of them away.
It's not so obvious when we have a string. Make this string into a float implies all kinds of subtle things about the contents of the string, and that is not the kind of thing a sane person wants to see in code where the value is not obvious.
So the short answer is: Python likes obvious things and discourages magic. | 1 | 8 | 0 | When I type int("1.7") Python returns error (specifically, ValueError). I know that I can convert it to integer by int(float("1.7")). I would like to know why the first method returns error. | Why is it not possible to convert "1.7" to integer directly, without converting to float first? | 0.099668 | 0 | 0 | 1,036 |
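Concretely, in current CPython the behavior looks like this:

```python
assert int("17") == 17            # a digits-only string converts directly
assert int(1.7) == 1              # float -> int truncates toward zero
assert int(float("1.7")) == 1     # explicit two-step conversion for "1.7"

try:
    int("1.7")                    # the implicit one-step version is refused
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```

Making the float step explicit is the "no magic" part: the reader can see that precision is being thrown away.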
33,114,337 | 2015-10-13T23:38:00.000 | 2 | 0 | 0 | 0 | python,linux,pymssql | 35,328,465 | 2 | false | 0 | 0 | It is better if when you run python3.4 you can have modules for that version.
Another way to get the desired modules running is to install pip for Python 3.4:
sudo apt-get install python3-pip
Then install the module you want
python3.4 -m pip install pymssql | 1 | 1 | 0 | I'm not overly familiar with Linux and am trying to run a Python script that is dependent upon Python 3.4 as well as pymssql. Both Python 2.7 and 3.4 are installed (usr/local/lib/[PYTHON_VERSION_HERE]). pymssql is also installed, except it's installed in the Python 2.7 directory, not the 3.4 directory. When I run my Python script (python3 myscript.py), I get the following error:
File "myscript.py", line 2, in
import pymssql
ImportError: No module named 'pymssql'
My belief is that I need to install pymssql to the Python 3.4 folder, but that's my uneducated opinion. So my question is this:
How can I get my script to run using Python 3.4 as well as use the pymssql package (sorry, probably wrong term there)?
I've tried many different approaches, broken my Ubuntu install (and subsequently reimaged), and at this point don't know what to do. I am a relative novice, so some of the replies I've seen on the web say to use ENV and separate the versions are really far beyond the scope of my understanding. If I have to go that route, then I will, but if there is another (i.e. easier) way to go here, I'd really appreciate it, as this was supposed to just be a tiny thing I need to take care of but it's tied up 12 hours of my life thus far! Thank you in advance. | How to install pymssql to Python 3.4 rather than 2.7 on Ubuntu Linux? | 0.197375 | 1 | 0 | 5,321 |
33,116,793 | 2015-10-14T04:45:00.000 | 1 | 0 | 0 | 0 | python,usb,output,port,sensors | 33,116,916 | 2 | true | 0 | 0 | The USB port on your computer cannot read analog data because USBs work with digital signals. You would need an analog-to-digital converter (ADC). | 2 | 0 | 0 | I may be confusing a few concepts here so any help is appreciated.
Q1: Is it possible to attach any sensor in the world to the USB on my computer as long as it gives me analog data, and read its output? (e.g. pH, temperature, oxygen sensor etc as long as it gives me analog data)
Q2: If so, then what is the simplest way in python for me read such data.
Comment: I am trying to bypass using PLC's, and trying to see if I can get the output from the sensor directly to the PC. (I do not have drivers for these sensors)
Actual Need: I have an oxygen sensor connected to my computer via a USB. The oxygen sensor is able to send out analog data. The obvious way is to go through a PLC. However, I would like a solution which by-passes PLC's so I can connect the sensor directly to my PC via USB. | Reading Analog Data from sensor connected to USB (Python) | 1.2 | 0 | 0 | 1,931 |
33,116,793 | 2015-10-14T04:45:00.000 | 1 | 0 | 0 | 0 | python,usb,output,port,sensors | 33,117,665 | 2 | false | 0 | 0 | as @digitaLink answered, it is not possible directly via USB and yes, the obvious way is to use a PLC.
I would go the PLC way - in fact, I did it a few times in the past - and start with an Arduino and later develop a custom PCB, put it in a box and done.
Another possibility is to use a raspberry pi (or similar SBC), which has the GPIOs you can use for analog read.
Edit: there is another possibility.
The sensor you use now is _very_likely_ a PLC in itself, that is, the sensor is attached to a microcontroller that uses the USB port for serial communication. Now, the drivers you are missing do nothing else but decode the data coming through the serial port. Take a look inside your harware and try to find out what components there are in.
So what you could do is to try to find out how to communicate with the sensor via a serial terminal. It is probably possible to monitor serial communication (although I must admit, I don't know how to do that), reverse engineer the code and write your own driver in python. You could learn a lot from this, even if you don't succeed. | 2 | 0 | 0 | I may be confusing a few concepts here so any help is appreciated.
Q1: Is it possible to attach any sensor in the world to the USB on my computer as long as it gives me analog data, and read its output? (e.g. pH, temperature, oxygen sensor etc as long as it gives me analog data)
Q2: If so, then what is the simplest way in python for me read such data.
Comment: I am trying to bypass using PLC's, and trying to see if I can get the output from the sensor directly to the PC. (I do not have drivers for these sensors)
Actual Need: I have an oxygen sensor connected to my computer via a USB. The oxygen sensor is able to send out analog data. The obvious way is to go through a PLC. However, I would like a solution which by-passes PLC's so I can connect the sensor directly to my PC via USB. | Reading Analog Data from sensor connected to USB (Python) | 0.099668 | 0 | 0 | 1,931 |
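Once an ADC or microcontroller sits between the sensor and the USB port, the PC side is mostly serial-line parsing (typically with pyserial). A sketch of just the parsing part, assuming a hypothetical firmware that emits lines like b'O2:20.9' followed by a newline:

```python
def parse_reading(line):
    """Turn a raw serial line such as b'O2:20.9' into (name, value)."""
    name, _, value = line.decode("ascii").strip().partition(":")
    return name, float(value)

# With pyserial, a line like this would come from something like:
#   serial.Serial("/dev/ttyUSB0", 9600).readline()
print(parse_reading(b"O2:20.9\n"))  # ('O2', 20.9)
```

The line format here is an assumption; the actual protocol depends entirely on what firmware you put on the converter board.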
33,118,610 | 2015-10-14T06:58:00.000 | 1 | 0 | 1 | 1 | python,linux,windows,bash,shell | 33,118,839 | 1 | true | 0 | 0 | If repetition doesn't matter, then this command will do it:
sort <(sort file1 | uniq) <(sort file2 | uniq) | uniq -d | 1 | 1 | 0 | How do i find out the intersection of 2 files in windows?
TextFile A: 100GB
TextFile B: 10MB
All i can think of is using python
I would read the lines in textfile B into memory in python and compare with each line in text file A.
I was wondering if there is any way to do it via the command prompt in linux/windows. | Finding if a set of lines exists in a large text file | 1.2 | 0 | 0 | 35 |
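The same idea sketched in Python, keeping only the small file in memory and streaming the large one line by line (the paths and sample data below are placeholders):

```python
import os
import tempfile

def common_lines(small_path, big_path):
    with open(small_path) as f:
        wanted = {line.rstrip("\n") for line in f}   # the 10 MB file fits in memory
    found = set()
    with open(big_path) as f:                        # the 100 GB file is streamed
        for line in f:
            line = line.rstrip("\n")
            if line in wanted:
                found.add(line)
    return found

d = tempfile.mkdtemp()
big, small = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
with open(big, "w") as f:
    f.write("x\ny\nz\n")
with open(small, "w") as f:
    f.write("y\nq\n")
print(sorted(common_lines(small, big)))  # ['y']
```

Set membership tests are O(1) on average, so the whole run is a single pass over the big file.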
33,119,667 | 2015-10-14T07:53:00.000 | 1 | 1 | 0 | 0 | python,authentication,gmail-api,imaplib | 33,492,711 | 6 | false | 0 | 0 | Thanks guys. Its working now. The issue was that the google blocked our network.. because of multiple attempts. I tried that unlock URL from a different URL and it didnt work. The catch is that, we have to run that URL in the machine where you are trying to run the script. Hope it may help someone :) | 3 | 17 | 0 | I am running a cron job which executes the python script for reading gmail (2 min interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly its throwing below error
imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure)
and sometimes I am getting the error below:
raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error
When I run the same script on a different machine, it works fine. I am assuming that the host has been blacklisted or something like that.
What are my options?
I can't generate the credentials (Gmail API) as this is under a company domain account.
33,119,667 | 2015-10-14T07:53:00.000 | 2 | 1 | 0 | 0 | python,authentication,gmail-api,imaplib | 41,952,132 | 6 | false | 0 | 0 | Got the same error and it was fixed by getting the new google app password. Maybe this will work for someone | 3 | 17 | 0 | I am running a cron job which executes the python script for reading gmail (2 min interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly its throwing below error
imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure)
and sometimes I am getting the error below:
raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error
When I run the same script on a different machine, it works fine. I am assuming that the host has been blacklisted or something like that.
What are my options?
I can't generate the credentials (Gmail API) as this is under a company domain account. | reading gmail is failing with IMAP | 0.066568 | 0 | 0 | 23,571
33,119,667 | 2015-10-14T07:53:00.000 | 24 | 1 | 0 | 0 | python,authentication,gmail-api,imaplib | 60,630,329 | 6 | false | 0 | 0 | Some apps and devices use less secure sign-in technology. So we need to enable Less secure app access option from gmail account.
Steps:
Log in to Gmail
Go to Google Account
Navigate to Security section
Turn on access for Less secure app access
By following above steps, issue will be resolved. | 3 | 17 | 0 | I am running a cron job which executes the python script for reading gmail (2 min interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly its throwing below error
imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure)
and sometimes I am getting the error below:
raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error
When I run the same script on a different machine, it works fine. I am assuming that the host has been blacklisted or something like that.
What are my options?
I can't generate the credentials (Gmail API) as this is under a company domain account. | reading gmail is failing with IMAP | 1 | 0 | 1 | 23,571
33,126,905 | 2015-10-14T13:37:00.000 | 1 | 0 | 0 | 0 | python,ios,objective-c,xcode,socketrocket | 33,127,431 | 1 | true | 0 | 0 | You do exactly that: you [can] pass around keys and make them show up in separate windows.
From the way you've phrased your question, you appear new to streams/sockets. I'd recommend you first start with one socket and make a chat application so you can get a feel for how to develop protocols which let you do that. | 1 | 2 | 0 | I would like to create multiple sockets between all users.
So how can I pass a key and an ID so that the server conversation is divided into separate windows?
Thank You. | SocketRocket (iOS) : How to identify whom user are chatting with another? | 1.2 | 0 | 1 | 141 |
33,134,816 | 2015-10-14T20:18:00.000 | 1 | 0 | 0 | 0 | python,dropbox,dropbox-api | 33,136,183 | 1 | true | 0 | 0 | No, Dropbox doesn't have an API like this. | 1 | 0 | 0 | I'd like to use the Dropbox API (with access only to my own account) to generate a link to SomeFile.xlsx that I can put in an email to multiple Dropbox account holders, all of whom are presumed to have access to the file. I'd like for the same link, when clicked on, to talk to Dropbox to figure out where SomeFile.xlsx is on their local filesystem and open that up directly.
In other words, I do NOT want to link to the cloud copy of the file. I want to link to the clicker's locally-synced version of the file.
Does Dropbox have that service and does the API let me consume it? I haven't been able to discover the answer from the documentation yet. | Relative link to local copy of Dropbox file | 1.2 | 1 | 0 | 82 |
33,136,726 | 2015-10-14T22:29:00.000 | 0 | 0 | 1 | 0 | python,appium,pytest,python-appium | 33,136,784 | 1 | false | 0 | 0 | If anybody sees this error then run "python setup.py install --force" and "sudo easy_install six==1.10.0" or "sudo pip install --upgrade six"
And after this, do a __pycache__ cleanup at the root directory level. This will fix your problems. | 1 | 0 | 0 | File "build/bdist.macosx-10.10-x86_64/egg/pkg_resources/__init__.py",
line 2309, in load
File "build/bdist.macosx-10.10-x86_64/egg/pkg_resources/__init__.py", line 2326, in require
File "build/bdist.macosx-10.10-x86_64/egg/pkg_resources/__init__.py", line 810, in resolve
pkg_resources.VersionConflict: (six 1.8.0 (/usr/local/lib/python2.7/site-packages/six-1.8.0-py2.7.egg), Requirement.parse('six>=1.9.0')) | python site-packages pkg_resources.VersionConflicts | 0 | 0 | 0 | 333
33,137,191 | 2015-10-14T23:09:00.000 | 1 | 0 | 1 | 0 | python-2.7,compression,python-unicode | 33,137,509 | 1 | true | 0 | 0 | Compression operates on bytes, not text, so naturally it takes a str (2.x) or bytes (3.x) object. You don't need to care what the internal text representation is since you will encode/decode the text yourself.
Is there a safe encoding to choose that will also be somewhat minimal in size (for a majority of utf-8 docs)?
Nope. Just encode as UTF-8 and be done with it.
Do I need to worry about backwards compatibility on decoding vs the earlier documents I've compressed? I did not fully read the lzma docs (my bad) and didn't realize it needed a String.
If you've only compressed ASCII text then you can decode as UTF-8 without issue, since UTF-8 and ASCII encode ASCII text exactly the same way. | 1 | 0 | 0 | I have a Unicode string - under Python 2.7.
I also have a headache today – a real one that was not caused by Unicode – and can't maintain focus on the problem as much as I need to. I'm more mindless-drone than thinker until the pollen count drops.
I need to compress my 'string' using backports.lzma. Occasionally I get an error, because the 'string' is not an ASCII compatible String but a Unicode object that uses some currently unknown character set (probably UTF-8 but no guarantee). lzma.compress wants a String or bytes() compatible object.
I don't necessarily have the character encoding of the unicode at this point in my code. I just know that it's a unicode object. Usually in a similar situation I know the encoding and can act appropriately. I also usually don't care about losing a character or two in transcoding. This time I care.
This leads me to a few questions:
• Is there a safe encoding to choose that will also be somewhat minimal in size (for a majority of utf-8 docs)?
• Do I need to worry about backwards compatibility on decoding vs the earlier documents I've compressed? I did not fully read the lzma docs (my bad) and didn't realize it needed a String. | encoding a unicode string for lzma compression | 1.2 | 0 | 0 | 650 |
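The usual pattern is "encode, then compress" on the way in and "decompress, then decode" on the way out, with UTF-8 as the safe default. Shown here with the stdlib lzma of Python 3; with backports.lzma on 2.7 the compress/decompress calls are the same:

```python
import lzma

text = u"caf\u00e9 \u2603"                    # unicode text of unknown origin
blob = lzma.compress(text.encode("utf-8"))    # bytes in -> compressed bytes out

restored = lzma.decompress(blob).decode("utf-8")
print(restored == text)  # True: the round trip is lossless for any unicode text
```

Since UTF-8 encodes ASCII text identically to ASCII, decoding older ASCII-only archives as UTF-8 is also safe.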
33,137,569 | 2015-10-14T23:48:00.000 | 0 | 0 | 0 | 1 | python | 33,137,802 | 1 | false | 0 | 0 | Lots of options. Which you should choose depends on lots of things. Sorry for being so vague, but your question does not give any details besides the name Chrome.
However you start Chrome, e.g. by a button or menu of your window manager, maybe you can tweak that button or menu to start both programs. This is probably the simplest solution, but of course it bears the risk that a different event starts Chrome (e.g. clicking a link in a mail program). This would then bypass your program.
You can write a daemon process which you start via login-scripts at login time. That daemon would try to figure out when you start Chrome (e. g. by polling the process table) and react by starting your accompanying program.
You can configure Chrome so that it starts your program upon starting time. How to do this is a whole new question in itself with lots of answers; options include at least writing a plugin for Chrome which does what you want.
You can have your program up and running all the time (started via login scripts, i. e. too early) but stay invisible and passive until Chrome starts up. This is basically the same as option 2, just that your program is itself that daemon.
In case you tell us more, we probably can give narrower and thus more fitting answers. | 1 | 0 | 0 | I have a program I want to open when I open Chrome. Instead of opening them separately, I want it to automatically start when Chrome starts. How would I write code to have the program attach itself to Chrome? I don't want the program to start on startup, just when Chrome starts. I know I can right click on the Chrome icon on my desktop and change the properties to open both programs but I want to know how to do the same thing with code. | Python code to open with another program? | 0 | 0 | 0 | 61 |
33,140,894 | 2015-10-15T05:55:00.000 | 0 | 1 | 0 | 0 | python,serial-port | 33,141,447 | 1 | false | 0 | 0 | This is not a python-related question, try a linux forum on boot sequence timeout | 1 | 0 | 0 | I have a python script which uses pygsm as the library for sending and receiving sms. However, I wish to put this script on auto-run upon booting up the raspberry pi. But the connecting time of the huawei modem usually takes a while, hence causing the shell script to skip the step. How do I make it so that it would confirm that it is connected to perform the python script? | Python Auto-Start pyGSM Huawei | 0 | 0 | 0 | 65 |
33,144,885 | 2015-10-15T09:34:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,python-decorators | 33,144,981 | 3 | false | 0 | 0 | In Python 2.6 or newer, yes. Older versions only support function/method decorators. | 1 | 1 | 0 | I know of function decorators and class method decorators.
But, is it also possible to decorate classes? | Is it possible to decorate classes? | 0 | 0 | 0 | 3,122 |
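A tiny example of a class decorator, for instance one that registers classes in a dict:

```python
registry = {}

def register(cls):
    """A class decorator: record the class by name, then return it unchanged."""
    registry[cls.__name__] = cls
    return cls

@register
class Widget(object):
    pass

print("Widget" in registry)  # True
```

A class decorator is just a callable that receives the class object and returns a class (the same one, a modified one, or a wrapper).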
33,145,523 | 2015-10-15T10:01:00.000 | 3 | 0 | 0 | 1 | python,mod-wsgi,wsgi,apscheduler | 33,232,473 | 1 | false | 1 | 0 | You're right -- the scheduler won't start until the first request comes in.
Therefore running a scheduler in a WSGI worker is not a good idea. A better idea would be to run the scheduler in a separate process and connect to the scheduler when necessary via some RPC mechanism like RPyC or Execnet. | 1 | 5 | 0 | I would like to run APScheduler which is a part of WSGI (via Apache's modwsgi with 3 workers) webapp. I am new in WSGI world thus I would appreciate if you could resolve my doubts:
If APScheduler is a part of webapp - it becomes alive just after first request (first after start/reset Apache) which is run at least by one worker? Starting/resetting Apache won't start it - at least one request is needed.
What about concurrent requests - would every worker run same set of APScheduler's tasks or there will be only one set shared between all workers?
Would once running process (webapp run via worker) keep alive (so APScheduler's tasks will execute) or it could terminate after some idle time (as a consequence - APScheduler's tasks won't execute)?
Thank you! | python: APScheduler in WSGI app | 0.53705 | 0 | 0 | 1,030 |
33,145,856 | 2015-10-15T10:17:00.000 | 2 | 0 | 1 | 0 | python,multiprocessing,cpython,python-internals | 33,146,935 | 2 | false | 0 | 0 | None of the functions you mention parallelize automatically. In general, silently spawning threads is considered poor form in most languages (this is changing, but it's still only seen in pure functional languages where thread safety is baked in by design); spawning lots of threads without giving warning is how you get mysterious errors when the user tries to launch their own threads in and get transient errors due to having too many running threads. So even if the GIL wasn't an issue, it would not make sense to do this.
That said, the GIL is there to protect interpreter internals, and that covers any scenario where reference counts are manipulated, which is constantly; with rare exception, it's not possible to do any meaningful work on PyObject*s (which is what all Python level types are represented as in C) with the GIL held. Typically, Python built-ins only release the GIL for blocking operations (I/O, waiting on locks, etc.); it's only in third party C extensions (and ctypes) where GIL release is normal, because in those cases, they convert the PyObjects entirely to C level types, release the GIL now that no reference counting or other internals are being touched, do the expensive work, reacquire the GIL, and convert the results back from C level types to Python level objects. | 1 | 8 | 0 | I understand that cPython has a GIL so that your script can't run on multiple cores without using the multiprocessing module. But is there anything to stop the built in functions such as sorting using multiple cores? I don't understand cPython structure but I think the question I'm asking is 'are builtin functions like sort, any and list comprehensions actually below the GIL? | Does cPython use multiple cores for built-in functions such as sort, any, all? | 0.197375 | 0 | 0 | 152 |
33,146,245 | 2015-10-15T10:35:00.000 | 0 | 0 | 0 | 0 | python,django,printing,receipt | 57,876,597 | 1 | false | 1 | 0 | For a web app to use a device on the client, it has to go through the browser. I may be wrong, but I seriously doubt this is a built-in feature for receipt printers. I see three options:
1) Find/make a normal printer driver that works with your receipt printer, put it on the client box, and just use the normal print js commands.
2) Find/make a browser plugin that talks to the printer and exposes an API.
3) Find/make a simple web app that talks to a server-connected receipt printer (probably via native code execution or script call), and install it on each POS, with CORS to allow remote origin; then just post to that on 127.0.0.1:whatever from the webapp client script.
Side note: I seriously discourage connecting a POS to anything resembling a network any more than absolutely necessary. Every outbound network request or trusted network peer is a potential attack vector. In short, I would never use django or any other web app for physical POS software. | 1 | 1 | 0 | I have created a simple POS application. I now have to send a command to the receipt printer to print the receipt. I don't have any code related to this problem as I don't know where to start even. My questions are:
1) Is Windows a good choice for working with receipt printers as every shop I went to use a desktop application on Windows for POS?
2) Is it possible to control the receipt printer and cash register/drawer from a web app?
3) Is there a good reading material for developing POS systems by myself? | How to send command to receipt printers from Django app? | 0 | 0 | 0 | 1,911 |
33,150,684 | 2015-10-15T14:05:00.000 | 2 | 0 | 0 | 0 | python,caching,pandas,redis | 33,152,516 | 1 | false | 0 | 0 | I have a DF of ~ 1 GB of plain text data. Assuming the dumping to disk is always slower than reading I compared HDF5 write performance with pickle.
HDF5 took 35 sec while pickle did 190 sec. So, you could consider using HDF5 instead of pickle. | 1 | 0 | 1 | Which method of caching pandas DataFrame objects will provide the highest performance? By storing it to a flat file on disk using pickle, or by storing it in a key-value store like Redis? | Caching Pandas Dataframe by Serialization or In-memory KV Store | 0.379949 | 0 | 0 | 1,886
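For the flat-file side of the comparison, the pattern is a plain pickle round trip. A sketch with a dict standing in for the DataFrame (with pandas you would use df.to_pickle / pd.read_pickle, or df.to_hdf for the HDF5 variant):

```python
import os
import pickle
import tempfile

frame = {"col": list(range(5))}   # stand-in for the DataFrame's data

path = os.path.join(tempfile.mkdtemp(), "cache.pkl")
with open(path, "wb") as f:
    pickle.dump(frame, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:
    cached = pickle.load(f)

print(cached == frame)  # True
```

Using the highest pickle protocol matters for speed; the default protocol on Python 2 is much slower and larger.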
33,152,869 | 2015-10-15T15:43:00.000 | 1 | 0 | 1 | 0 | python,django | 56,910,915 | 1 | true | 1 | 0 | It really depends on the situation. Suppose if your Project A need to use Pip version 17.01 to run while your project B need to use Pip version 18.01 to run. So It is not possible to use 1 virtual env to run multiple project but the downside of having multiple virtual environments is it consume much space & resource of PC. | 1 | 2 | 0 | Another newbie to django here. I was wondering if it is recommended/not-recommended to run two different projects in the same virtualenv folder that have the same django version. To be more clear, is it necessary to create separate virtualenv everytime I want to start a new project when i know that i am using same django version for all projects. I am using python django on OSX. | Can I have multiple django projects in the same virtualenv folder | 1.2 | 0 | 0 | 1,645 |
33,154,410 | 2015-10-15T17:02:00.000 | 0 | 1 | 1 | 0 | python | 33,154,751 | 3 | false | 0 | 0 | The error is independent of the version, but it is not clear
what you want to do with your file.
If you want to read from it and you get such an error, it means that your file is not where you think it is. In any case you should write a line like testFile = open("test.txt","r").
If you want to create a new file and write to it, you will have a line like
testFile = open("test.txt","w"). Finally, if your file already exists and you want to add things to it, use testFile = open("test.txt","a") (after having moved the file to the correct place). If your file is not in the directory of the script, you will need to locate it and open it by its path. | 2 | 0 | 0 | I'm using the command testFile = open("test.txt") to open a simple text file and received the following error. Do such errors occur due to the version of Python one uses?
IOError: [Errno 2] No such file or directory: 'Test.txt' | python 2.7: Testing text file in idle | 0 | 0 | 0 | 43
33,154,410 | 2015-10-15T17:02:00.000 | 0 | 1 | 1 | 0 | python | 33,154,742 | 3 | false | 0 | 0 | The syntax for opening a file is:
file object = open(file_name [, access_mode][, buffering])
As you have not mentioned access_mode (optional), the default is read. But if the file 'test.txt' does not exist in the folder where you are executing the script, it will throw an error like the one you got.
To correct it, either add access_mode as "a+" or give the full file path, e.g. C:\test.txt (assuming a Windows system) | 2 | 0 | 0 | I'm using the command testFile = open("test.txt") to open a simple text file and received the following error. Do such errors occur due to the version of Python one uses?
IOError: [Errno 2] No such file or directory: 'Test.txt' | python 2.7: Testing text file in idle | 0 | 0 | 0 | 43
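The three access modes described in the answers can be exercised end to end. This sketch uses a throwaway file in a temporary directory (the path is illustrative, not the asker's actual test.txt):

```python
import os
import tempfile

# Work in a temporary directory so no pre-existing file is assumed
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "test.txt")

# "w" creates the file (or truncates it) for writing
with open(path, "w") as f:
    f.write("first line\n")

# "a" appends to the existing file without truncating it
with open(path, "a") as f:
    f.write("second line\n")

# "r" (the default mode) reads; it raises IOError/FileNotFoundError
# if no file exists at that path -- the error the asker saw
with open(path, "r") as f:
    print(f.read())
```

Opening a path that does not exist with the default read mode is exactly what produces `IOError: [Errno 2] No such file or directory`.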
33,158,779 | 2015-10-15T21:23:00.000 | 3 | 1 | 0 | 0 | python,vim | 33,159,164 | 1 | false | 0 | 0 | This has been solved by adding "let python_highlight_builtins=1" in the .vimrc file. | 1 | 3 | 0 | I'm using the Vim editor to edit my python script on some remote clusters.I'm sure the Vim editors on both clusters have syntax highlight on. However, on one cluster I could see that all python keywords have been highlighted, while on the other one I could see only some of the python keywords are highlighted, and some keywords such as "range", "open" and "float" are not highlighted. Is there anything I can do in the .vimrc file such that all python keywords can be highlighted on that machine?
I don't know if this is related to the version of the Vim editor. On the machine that does not highlight all python keywords, the version of Vim is 7.2. While for the other machine, the version of Vim is 7.4.
Thanks. | Vim only highlights part of the python syntax | 0.53705 | 0 | 0 | 62 |
33,161,270 | 2015-10-16T01:47:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,seaborn | 44,307,542 | 2 | false | 0 | 0 | A solution to this is after importing seaborn do the following:
matplotlib.rcParams['lines.markeredgewidth'] = 1 | 2 | 4 | 1 | Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them.
e.g. plt.plot(x,y,marker='s',markerfacecolor='none')
results in a plot without markers.
Is there a way to get the edges back?
markeredgecolor='k' has no effect. | Seaborn Restore marker edges | 0.099668 | 0 | 0 | 3,009 |
33,161,270 | 2015-10-16T01:47:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,seaborn | 43,644,522 | 2 | false | 0 | 0 | Give edgecolor='k' a try. This worked for me in a similar scatter plot. | 2 | 4 | 1 | Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them.
e.g. plt.plot(x,y,marker='s',markerfacecolor='none')
results in a plot without markers.
Is there a way to get the edges back?
markeredgecolor='k' has no effect. | Seaborn Restore marker edges | 0.099668 | 0 | 0 | 3,009 |
33,161,725 | 2015-10-16T02:45:00.000 | 0 | 0 | 1 | 1 | python,ubuntu | 33,161,926 | 1 | false | 0 | 0 | Try to kill the process that has the Python file open,
or reboot your Windows machine.
That may resolve the problem. | 1 | 0 | 0 | I use a shared folder to share files in Linux with a Windows machine. However, after I edited a Python file using Notepad++ in Windows, changed the owner of the file to the Ubuntu user, and ran the file, I wanted to edit the file again. When I save the file, I get the following error:
'Please check whether this file is opened in another program'
Is this related to a Linux daemon? If both machines were Windows, I could edit it as administrator, but I am new to Linux and do not know how to deal with this.
Thanks for your help!! | unable save file in a shared folder on a remote ubuntu machine using notepad++ | 0 | 0 | 0 | 59
33,162,320 | 2015-10-16T03:56:00.000 | 0 | 0 | 0 | 0 | python,excel,csv,pandas | 33,162,435 | 2 | false | 0 | 0 | This is a time formatting problem/philosophy by Excel. For some reason, Microsoft prefers to hide seconds and sub-seconds on user displays: even MSDOS's dir command omitted seconds.
If I were you, I'd use Excel's format operation and set it to display seconds, then save the spreadsheet as CSV and see if it put anything in it to record the improved formatting.
If that doesn't work, you might explore creating a macro which does the formatting, or use one of Excel's IPC mechanisms to command it to do your bidding. | 1 | 2 | 1 | I am using python pandas to read a csv file. The csv file has a datetime column with second precision "9/1/2015 9:25:00 AM", but if I open it in Excel, it has only minute precision "9/1/15 9:25". Moreover, when I use the pd.read_csv() function, it only shows up to minute precision. Is there any way I could solve the problem using python? Thanks much in advance. | Use Python to Change csv Data Column Format | 0 | 0 | 0 | 1,815
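A minimal pandas sketch (assuming pandas is installed; the column names here are made up) showing that pd.read_csv keeps full second precision once the column is parsed as datetimes — the truncation seen in Excel is only display formatting:

```python
import io

import pandas as pd

# In-memory stand-in for the CSV file; note the seconds in the timestamp
csv_text = "event,time\nlogin,9/1/2015 9:25:42 AM\n"

# parse_dates converts the column to datetime64, preserving seconds
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["time"])

print(df["time"].dt.second.iloc[0])  # prints 42
```

If the seconds really are missing, they were lost before pandas saw the file (for instance by re-saving the CSV from Excel with its default display format).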
33,165,523 | 2015-10-16T08:11:00.000 | 0 | 0 | 0 | 0 | python,logging | 33,166,177 | 1 | false | 0 | 0 | Ok, I found this in the doc
The level and handlers entries are interpreted as for the root logger, except that if a non-root logger’s level is specified as NOTSET, the system consults loggers higher up the hierarchy to determine the effective level of the logger.
So I guess that the NOTSET level only makes the logger inherit its parent's level. | 1 | 0 | 0 | If I have a logger that's directly under root and I set its level to NOTSET while propagate = False, what would happen?
The doc is unclear to me :
When a logger is created, the level is set to NOTSET (which causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger)
I don't know if that means that the root logger is used instead of the NOTSET one or just that the root logger level is inherited | Difference between level = NOTSET and propagate | 0 | 0 | 0 | 98 |
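That inheritance can be checked directly with the standard library: a logger left at NOTSET reports its parent's level as its effective level, and propagate has no influence on that lookup (it only controls whether records flow up to the parent's handlers):

```python
import logging

parent = logging.getLogger("app")
parent.setLevel(logging.WARNING)

child = logging.getLogger("app.child")
child.propagate = False          # stops record propagation to parent handlers

# child.level is NOTSET, so the effective level is found by walking up
print(child.getEffectiveLevel() == logging.WARNING)  # True

# Setting an explicit level stops the upward lookup
child.setLevel(logging.DEBUG)
print(child.getEffectiveLevel() == logging.DEBUG)    # True
```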
33,165,668 | 2015-10-16T08:21:00.000 | 2 | 0 | 0 | 0 | python,cython,kalman-filter | 33,167,366 | 2 | true | 0 | 0 | The size of the covariance matrix is driven by the size of your state. Another question relates to the assumptions on your model and if this can bring up significant optimizations (obviously, optimizing implies reworking the "standard KF").
From my POV, your situation roughly depends on the value (number_of_states² * number_of_iterations)/(processing_power). | 2 | 1 | 1 | I wonder if anyone can give me a pointer to really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions), and many input (cues; say, between tens to hundred thousands). Thus, updating a covariance matrix will be a big issue. I read a bit about Ensemble KF, but, for now, I would really like to stick with the standard KF. [ I started reading and testing it, and I would like to give it a try with my real data. ] | Fast Kalman Filter | 1.2 | 0 | 0 | 1,111 |
33,165,668 | 2015-10-16T08:21:00.000 | 1 | 0 | 0 | 0 | python,cython,kalman-filter | 33,264,437 | 2 | false | 0 | 0 | If you have many measurements per update, you should look at the information form of the Kalman filter. Each additional measurement is just addition. The tradeoff is a more complex predict step, and the cost of inverting the information matrix whenever you want to get your state out. | 2 | 1 | 1 | I wonder if anyone can give me a pointer to really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions), and many input (cues; say, between tens to hundred thousands). Thus, updating a covariance matrix will be a big issue. I read a bit about Ensemble KF, but, for now, I would really like to stick with the standard KF. [ I started reading and testing it, and I would like to give it a try with my real data. ] | Fast Kalman Filter | 0.099668 | 0 | 0 | 1,111 |
33,168,836 | 2015-10-16T10:59:00.000 | 1 | 0 | 0 | 0 | python,r,machine-learning,scikit-learn | 33,170,242 | 1 | false | 0 | 0 | If you have to make predictions at each timestamp, then this doesn't become a time series problem (unless you plan to use the sequence of previous observations to make your next prediction, in which case you will need to train a sequence-based model). Assuming you can only train a model based on the final data you observe, there can be many approaches, but I'd recommend you use Random Forest with a large number of trees and 3 or 4 variables in each tree. That way even if some variables don't give you the desired input, other trees can still make predictions to a fair degree of accuracy. Besides this there can be many ensemble approaches.
The way you're currently doing it may be a loose but practical approximation; it doesn't make much statistical sense. | 1 | 1 | 1 | I have a classification problem with time series data.
Each example has 10 variables which are measured at irregular intervals and in the end the object is classified into 1 of the 2 possible classes (binary classification).
I have only the final class of the example to learn from during training. But when given a new example, I would like to make a prediction at each timestamp (in an online manner). So, if the new example had 25 measurements, I would like to make 25 predictions of its class; one at each timestamp.
The way I am implementing this currently is by using the min, mean and max of the measurements of its 10 variables till that point as features for classification. Is this optimal ? What would be a better way. | Implementing online learning with time series | 0.197375 | 0 | 0 | 317 |
33,169,134 | 2015-10-16T11:15:00.000 | 0 | 0 | 1 | 0 | python,.net,windows,installation | 33,175,765 | 1 | true | 0 | 0 | The first thing would be to check that you have downloaded all the bytes of the .msi. Then right-click on the .msi file, go to Properties, click on the Compatibility tab, and select "Run on compatibility mode for". When I've tried compatibility mode it has not helped, but it might for you. Good luck! | 1 | 0 | 0 | I have installed the Win 8.1 Professional edition.
But the xx.msi file (i.e., Python, .NET Framework, etc.) cannot be installed on Win 8.1.
When I try to install it, it just terminates quickly,
despite my running it with the Administrator role.
So, are there any solutions for installing the xx.msi file on Win 8.1?
Thanks for your attention! | msi can not be installed on win 8.1? | 1.2 | 0 | 0 | 27
33,170,016 | 2015-10-16T12:01:00.000 | 0 | 0 | 0 | 0 | python,django,orm | 67,953,078 | 5 | false | 1 | 0 | I had to add import django, and then this worked. | 1 | 26 | 0 | I've used the Django ORM for one of my web apps and I'm very comfortable with it. Now I have a new requirement which needs a database but nothing else that Django offers. I don't want to invest more time in learning one more ORM like SQLAlchemy.
I think I can still do
from django.db import models
and create models, but then without manage.py how would I do migration and syncing? | How to use Django 1.8.5 ORM without creating a django project? | 0 | 0 | 0 | 15,755 |
33,172,090 | 2015-10-16T13:45:00.000 | 1 | 0 | 0 | 0 | python,algorithm,performance,minimum-spanning-tree,delaunay | 33,175,701 | 1 | true | 0 | 0 | NB: this assumes we're working in 2-d
I suspect that what you are doing now is feeding all point to point distances to the MST library. There are on the order of N^2 of these distances and the asymptotic runtime of Kruskal's algorithm on such an input is N^2 * log N.
Most algorithms for Delaunay triangulation take N log N time. Once the triangulation has been computed only the edges in the triangulation need to be considered (since an MST is always a subset of the triangulation). There are O(N) such edges so the runtime of Kruskal's algorithm in scipy.sparse.csgraph should be N log N. So this brings you to an asymptotic time complexity of N log N.
The reason that scipy.sparse.csgraph doesn't incorporate Delaunay triangulation is that the algorithm works on arbitrary input, not only Euclidean inputs.
I'm not quite sure how much this will help you in practice but that's what it looks like asymptotically. | 1 | 0 | 1 | I have a code that makes Minimum Spanning Trees of many sets of points (about 25000 data sets containing 40-10000 points in each set) and this is obviously taking a while. I am using the MST algorithm from scipy.sparse.csgraph.
I have been told that the MST is a subset of the Delaunay Triangulation, so it was suggested I speed up my code by finding the DT first and finding the MST from that.
Does anyone know how much difference this would make? Also, if this makes it quicker, why is it not part of the algorithm in the first place? If it is quicker to calculate the DT and then the MST, then why would scipy.sparse.csgraph.minimum_spanning_tree do something else instead?
Please note: I am not a computer whizz, some people may say I should be using a different language but Python is the only one I know well enough to do this sort of thing, and please use simple language in your answers, no jargon please! | Speed up Python MST calculation using Delaunay Triangulation | 1.2 | 0 | 0 | 659 |
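A sketch of the suggested pipeline (assuming SciPy is available): triangulate first, then hand only the O(N) Delaunay edges to scipy.sparse.csgraph, instead of all N² pairwise distances:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((200, 2))          # one toy "data set" of 2-D points

# Collect the unique edges of the Delaunay triangulation
tri = Delaunay(pts)
edges = set()
for a, b, c in tri.simplices:
    edges.update([(min(a, b), max(a, b)),
                  (min(b, c), max(b, c)),
                  (min(a, c), max(a, c))])
rows, cols = np.array(sorted(edges)).T

# Sparse graph weighted by Euclidean edge length
weights = np.linalg.norm(pts[rows] - pts[cols], axis=1)
graph = csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))

mst = minimum_spanning_tree(graph)
print(mst.nnz)  # a spanning tree on 200 points has 199 edges
```

Because the Delaunay graph has only O(N) edges, the Kruskal step drops from roughly N² log N to N log N work.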
33,172,753 | 2015-10-16T14:19:00.000 | 0 | 0 | 0 | 0 | python,vector,pygame,trigonometry | 33,172,964 | 3 | false | 0 | 1 | The vector of acceleration will be in the direction from the mouse cursor to the ship, away from the mouse position - so if the line from mouse to ship is at angle theta to horizontal (where anticlockwise theta is +ve, clockwise is -ve), so will the force F acting on the ship to accelerate it. If ship has mass M then using f=ma, acceleration A will be F/M. Acceleration in x axis will be A * cos(theta), in y A * sin(theta) - then use v=u+at after every time interval for your simulation. You will have to use floating point maths for this calculation. | 1 | 0 | 0 | I have a ship that shoots lasers. It draws a line from itself to (mouseX, mouseY). How can I have the ship accelerate in the exact opposite direction as if being pushed backwards?
I have been working through some trig and ratios but am not making much progress. Acceleration is constant so how far the mouse is doesn't matter, just the direction.
I've been primarily working on trying to create a vector that only works with integers in the range (-4,4), as I cannot have the ship move across the screen at a Float speed, and the (-4,4) range would at least give me... 16 different directions for the ship to move, though I'd like that not to be a limitation. | How to have an object run directly away from mouse at a constant speed | 0 | 0 | 0 | 185 |
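The trig in the answer reduces to normalizing the ship-minus-mouse vector; keeping the velocity in floats (and only rounding when drawing) removes the need for the (-4, 4) integer grid entirely. A small sketch with made-up names:

```python
import math

def recoil_velocity(ship, mouse, vel, accel, dt):
    """Accelerate `vel` directly away from `mouse` at constant magnitude `accel`."""
    dx, dy = ship[0] - mouse[0], ship[1] - mouse[1]
    dist = math.hypot(dx, dy)
    if dist == 0:           # mouse exactly on the ship: no defined direction
        return vel
    # Unit vector pointing from the mouse toward the ship
    ux, uy = dx / dist, dy / dist
    return (vel[0] + accel * ux * dt, vel[1] + accel * uy * dt)

# Mouse below the ship (screen y grows downward), so the push is straight up
vx, vy = recoil_velocity(ship=(100.0, 100.0), mouse=(100.0, 200.0),
                         vel=(0.0, 0.0), accel=4.0, dt=1.0)
print(vx, vy)  # (0.0, -4.0)
```

When blitting, round the float position to the nearest pixel; the stored position and velocity stay as floats, so any direction is possible.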
33,172,842 | 2015-10-16T14:24:00.000 | 0 | 0 | 1 | 0 | python,excel,xlwings | 33,176,797 | 4 | true | 0 | 0 | A small workaround could be to package your application with cx_freeze or pyinstaller. Then it can run on a machine without installing Python. The downside is of course that the program tends to be a bit bulky in size. | 3 | 1 | 0 | I am building an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any way to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | create an application in excel using python for a user without python | 1.2 | 1 | 0 | 963
33,172,842 | 2015-10-16T14:24:00.000 | 0 | 0 | 1 | 0 | python,excel,xlwings | 33,176,474 | 4 | false | 0 | 0 | It is possible using xlloop. This is a customized client-server approach, where the client is an Excel .xll which must be installed on the client's machine.
The server can be written in many languages, including Python, and of course it must be launched on a server that has Python installed. Currently the .xll is available only in 32-bit. | 3 | 1 | 0 | I am building an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any way to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | create an application in excel using python for a user without python | 0 | 1 | 0 | 963
33,172,842 | 2015-10-16T14:24:00.000 | 0 | 0 | 1 | 0 | python,excel,xlwings | 33,172,988 | 4 | false | 0 | 0 | No. You need Python installed to interpret the Python functions. | 3 | 1 | 0 | I am building an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any way to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | create an application in excel using python for a user without python | 0 | 1 | 0 | 963
33,176,353 | 2015-10-16T17:35:00.000 | 0 | 0 | 1 | 0 | python,pip,pypi | 53,302,849 | 4 | false | 0 | 0 | In this case, the .pip folder will be hidden as well as the new empty file pip.conf.
To view this folder from the Apple Finder, open Terminal and, at the prompt, copy/paste this in:
defaults write com.apple.finder AppleShowAllFiles -bool true
Next, copy/paste this in:
killall Finder
You can now see the hidden folder; you can then open the pip.conf file in an application like PyCharm, enter your index-url information, save, and then use pip install.
To reverse the visibility of hidden folders/files, go to the prompt and enter:
defaults write com.apple.finder AppleShowAllFiles -bool false
Next, enter this in:
killall Finder
Result: The Apple Finder will restart and hide all the 'hidden' folders/files. | 3 | 3 | 0 | I'm trying to edit ~/.pip/pip.conf file but the .pip folder is not present in osx.
I have pip installed globally as well as using it in various virtual environments.
Do I need to create this folder and file manually? | .pip folder not in HOME directory on OSX | 0 | 0 | 0 | 2,597 |
33,176,353 | 2015-10-16T17:35:00.000 | 1 | 0 | 1 | 0 | python,pip,pypi | 33,177,206 | 4 | false | 0 | 0 | It should be in $HOME/.pip/pip.conf. | 3 | 3 | 0 | I'm trying to edit ~/.pip/pip.conf file but the .pip folder is not present in osx.
I have pip installed globally as well as using it in various virtual environments.
Do I need to create this folder and file manually? | .pip folder not in HOME directory on OSX | 0.049958 | 0 | 0 | 2,597 |
33,176,353 | 2015-10-16T17:35:00.000 | 0 | 0 | 1 | 0 | python,pip,pypi | 33,180,292 | 4 | false | 0 | 0 | I had to create my own pip.conf file.
If using virtual env, create the pip.conf at the root of $VIRTUAL_ENV or the root of your virtual env folder. | 3 | 3 | 0 | I'm trying to edit ~/.pip/pip.conf file but the .pip folder is not present in osx.
I have pip installed globally as well as using it in various virtual environments.
Do I need to create this folder and file manually? | .pip folder not in HOME directory on OSX | 0 | 0 | 0 | 2,597 |
33,176,740 | 2015-10-16T17:57:00.000 | 0 | 0 | 1 | 0 | python-3.x | 33,178,199 | 1 | false | 0 | 0 | Sounds like the file you are trying to run is simply a log of everything that you typed and that was printed while you typed; the first line being the version info of the python you are running, which is definitely not executable code, hence the "invalid syntax" error.
What you'll want to do is open an empty text file, enter all the code again, save it as somefile.py (a better name, obviously ;) and then open that in the run menu.
Alternatively, you could open the file you have already and remove all the lines that are not the code you had typed in. | 1 | 1 | 0 | I am new, like still-have-the-plastic-wrapping-on-me new, to programming anything, much less Python, but I am trying to learn. My first experience is baffling already. I typed out some get-to-know-the-program code and as I was doing it everything was fine within the shell, blah blah. I saved and opened it again to watch it run. However, when I chose run module it gave me a syntax error in a part of the program I didn't even write.
The 2 in "Python 3.2" is red and will not run.
Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
Tried to ask in class and just got a lecture about how I was behind. Someone else pointed me here and said this was the place, if I was going to learn programming I needed to get to know the site anyway!
So how can I check the work if I cannot run it? Am I missing a vital piece of the puzzle?
Assistance would be wonderful.
These were my steps:
Followed the instructions in the code, watched as it drew as I was
adding.
Saved closed everything.
Reopened > chose file option: open> found file opened it
Chose run option> run module
Nothing happened,"invalid syntax" | Syntax Error in the version info when trying to run a module in Python 3.2 | 0 | 0 | 0 | 999 |
33,181,503 | 2015-10-17T00:46:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,dictionary | 33,181,543 | 3 | false | 0 | 0 | To get valz just use d[key2][2]. | 3 | 3 | 0 | I have a dictionary as follows:
d = {key: [val1, val2, val3....], key2: [valx, valy, valz, ...], ....}
Is it possible to get nth element of the value lists? Example: d{key2:[2]} would return 'valz'.
I tried d.get({key:[0]}) but got:
"TypeError: unhashable type: 'dict'" | Accessing nth Element of a list value for a given key: Python dictionary | 0.066568 | 0 | 0 | 5,798 |
33,181,503 | 2015-10-17T00:46:00.000 | 5 | 0 | 1 | 0 | python,python-2.7,dictionary | 33,181,553 | 3 | false | 0 | 0 | d[key][0] does the trick for me, unless I misunderstand the question. | 3 | 3 | 0 | I have a dictionary as follows:
d = {key: [val1, val2, val3....], key2: [valx, valy, valz, ...], ....}
Is it possible to get nth element of the value lists? Example: d{key2:[2]} would return 'valz'.
I tried d.get({key:[0]}) but got:
"TypeError: unhashable type: 'dict'" | Accessing nth Element of a list value for a given key: Python dictionary | 0.321513 | 0 | 0 | 5,798 |
33,181,503 | 2015-10-17T00:46:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,dictionary | 33,181,557 | 3 | false | 0 | 0 | Your problem is that d.get({key:[0]}) is equivalent to d[{key:[0]}], and dictionaries are illegal keys in dictionaries. The correct solution is d[key][0] | 3 | 3 | 0 | I have a dictionary as follows:
d = {key: [val1, val2, val3....], key2: [valx, valy, valz, ...], ....}
Is it possible to get nth element of the value lists? Example: d{key2:[2]} would return 'valz'.
I tried d.get({key:[0]}) but got:
"TypeError: unhashable type: 'dict'" | Accessing nth Element of a list value for a given key: Python dictionary | 0 | 0 | 0 | 5,798 |
33,181,588 | 2015-10-17T00:59:00.000 | 0 | 0 | 1 | 0 | python,string,parsing | 33,181,629 | 1 | true | 0 | 0 | If the b' and the ' at the end are really part of the string, s[2:-1] would work. But probably they are just repr output which isn't actually part of the string. | 1 | 0 | 0 | I have a string that has a b' at the beginning and a ' at the end. It also has a \n after 21 characters; it seems that the API I am calling is adding these. What is the best way to remove these and only these?
Note: I can just remove everything up to the \n and re-add the text that is supposed to be there, since it never changes. | Removing Specific Characters from Specific spots in a Python string | 1.2 | 0 | 0 | 41
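If the wrapper characters really are in the string, slicing plus a replace handles both the b'…' shell and the stray newline in one pass (the sample text here is invented):

```python
raw = "b'first chunk of text\nsecond chunk'"

# Drop the leading b' (two characters) and the trailing ', then the newline
cleaned = raw[2:-1].replace("\n", "")
print(cleaned)  # first chunk of textsecond chunk
```

If instead the value is actually a bytes object whose repr merely displays as b'…', decoding it (e.g. `value.decode("utf-8")`) is the cleaner fix.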
33,183,963 | 2015-10-17T07:18:00.000 | 0 | 0 | 0 | 1 | multithreading,python-2.7,sockets,google-app-engine,apple-push-notifications | 33,184,298 | 1 | false | 1 | 0 | Found the issue. I was calling start_background_thread with argument set to function(). When I fixed it to calling it as function it worked as expected. | 1 | 0 | 0 | Does Google App engine support multithreading? I am seeing conflicting reports on the web.
Basically, I am working on a module that communicates with Apple Push Notification server (APNS). It uses a worker thread that constantly looks for new messages that need to be pushed to client devices in a pull queue.
After it gets a message and sends it to APNS, I want it to start another thread that checks if APNS sends any error response in return. Basically, look for incoming bytes on same socket (it can only be error response per APNS spec). If nothing comes in 60s since the last message send time in the main thread, the error response thread terminates.
While this "error response handler" thread is waiting for bytes from APNS (using a select on the socket), I want the main thread to continue looking at the pull queue and send any packets that come immediately.
I tried starting a background thread to do the function of the error response handler. But this does not work as expected since the threads are executed serially. I.e control returns to the main thread only after the error response thread has finished its 60s wait. So, if there are messages that arrive during this time they sit in the queue.
I tried threadsafe: false in .yaml
My questions are:
Is there any way to have true multithreading in GAE. I.e the main thread and the error response thread execute in parallel in the above example instead of forcing one to wait while other executes?
If no, is there any solution to the problem I am trying to solve? I tried kick-starting a new instance to handle the error response (basically use a push queue), but I don't know how to pass the socket object the error response thread will have to listen to as a parameter to this push queue task (since dill is not supported to serialize socket objects) | Multi-threading on Google app engine | 0 | 0 | 0 | 921 |
33,184,127 | 2015-10-17T07:40:00.000 | 1 | 0 | 0 | 1 | python-2.7,web-applications,command-line,pycharm | 33,184,244 | 1 | true | 1 | 0 | Based on what you've described, there are many ways to approach this:
Create a terminal emulator on your webpage.
If you want nicer UI, you can set up any web framework and have your command line programs in the backend, while exposing frontend interfaces to allow users to input parameters and see the results.
If you're just trying to surface the functionalities of your programs, you can wrap them as services which applications can call and use (including your web terminal/app) | 1 | 1 | 0 | I am new to programming and web apps, so I am not even sure if this question is obvious or not.
Sometimes I find command line programs more time efficient and easier to use (even for the users). So is there a way to publish my command line programs, with their command-line interface, to the web as a web app using CGI or WSGI?
For example, if I make a program to calculate all the formulas in math, can I publish it to the web in its command-line form? I am using Python in PyCharm. I have tried using CGI but couldn't do much because CGI has more to do with forms being sent to a server for information comparison and storage.
-Thanks | Command line programs in a web (python) | 1.2 | 0 | 0 | 179 |
33,187,572 | 2015-10-17T14:11:00.000 | 0 | 0 | 0 | 0 | python,django | 33,187,887 | 2 | false | 1 | 0 | Even if you could do this, it wouldn't help solve your ultimate problem. You can't use order_by on concatenated querysets from different models; that can't possibly work, since it is a request for the database to do an ORDER BY on the query. | 2 | 1 | 0 | In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset?
Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()).
How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on'))
I'm using Django 1.5 and Python 2.7. | Making an alias for an attribute field, to be used in a django queryset | 0 | 0 | 0 | 452 |
33,187,572 | 2015-10-17T14:11:00.000 | 0 | 0 | 0 | 0 | python,django | 33,192,349 | 2 | true | 1 | 0 | It seems qset1 = qset1.annotate(date=Max('submitted_on')) is the closest I have right now. This, or using exclude(). I'll update if I get a better solution. Of course other experts from SO are welcome to chime in with their own answers. | 2 | 1 | 0 | In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset?
Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()).
How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on'))
I'm using Django 1.5 and Python 2.7. | Making an alias for an attribute field, to be used in a django queryset | 1.2 | 0 | 0 | 452 |
33,188,104 | 2015-10-17T15:08:00.000 | 7 | 0 | 1 | 0 | python,python-3.x,pycharm | 33,825,637 | 1 | true | 0 | 0 | If you are using [Right Alt] + F to insert square bracket (for example with Czech keyboard layout), pycharm interprets it as [Ctrl] + [Alt] + F, which is a shortcut to some sort of refactoring.
You can change the shortcut to something else in File - Settings - Keymap - Main menu - Refactor - Extract - Field.. | 1 | 3 | 0 | I am running python 3.5.0 on pycharm community edition 4.5.4, and every time i try to insert an open square bracket to , for example, define a list it just gives me an error saying: "Cannot perform refactoring using selected element(s)"
This is really confusing because it seemed to work fine on the older python versions (3.4.3). | pycharm error:cannot perform refactoring using selected element(s) | 1.2 | 0 | 0 | 3,951 |
33,193,501 | 2015-10-18T01:52:00.000 | -1 | 0 | 0 | 0 | django,python-3.3 | 33,193,714 | 1 | false | 1 | 0 | The easiest way is to use multiple browsers, and if necessary, multiple dev servers on alternate ports. 8000, 8001, etc. | 1 | 1 | 0 | I was wondering whether it is possible to have more than one user log in on a development server using Django 1.8
I am creating an app, where these "active" users are able to view one another details (or fields) respective to the relative models I designed.
Currently, I am only able to log in as a single user and wondered whether it is possible to somehow allow my app to have multiple logins.
Thanks | Multiple user logins on a development server using Django | -0.197375 | 0 | 0 | 103 |
33,194,779 | 2015-10-18T05:51:00.000 | 0 | 0 | 0 | 0 | python,arrays,numpy,resampling | 33,197,300 | 1 | true | 0 | 0 | Following Warren Weckesser's comment, the answer is using scipy.interpolate.interp1d | 1 | 1 | 1 | I've a nx2 ndarray which represent a height profile of the form h(x), with x being a non-negative real number and h(x) the height value in x. The x-values are irregular distributed, meaning:
x[i] - x[i - 1] != x[i + 1] - x[i]
I would like to take my array and create a new one with evenly spaced x-values with the corresponding heights. The distance between the x-values can be any positive number. Is there an efficient way to do something like this using numpy? | Resampling an irregular distributed 1-D signal in python | 1.2 | 0 | 0 | 142 |
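One way to do the resampling (np.interp for plain linear interpolation; scipy.interpolate.interp1d offers higher-order schemes). The profile below is invented for illustration:

```python
import numpy as np

# Irregularly spaced profile: column 0 is x, column 1 is h(x)
profile = np.array([[0.0, 1.0],
                    [0.7, 2.0],
                    [1.2, 0.5],
                    [3.0, 4.0]])

x, h = profile[:, 0], profile[:, 1]

# Evenly spaced x grid with step 0.5 over the same range
x_even = np.arange(x[0], x[-1] + 1e-9, 0.5)

# Linear interpolation of h onto the even grid
h_even = np.interp(x_even, x, h)

resampled = np.column_stack([x_even, h_even])
print(resampled.shape)  # (7, 2)
```

The step (0.5 here) can be any positive number; only the x column needs to be sorted.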
33,196,427 | 2015-10-18T09:36:00.000 | 0 | 0 | 0 | 0 | python-2.7,image-processing,histogram,contrast | 33,197,079 | 1 | true | 0 | 0 | Let the total mass of the histogram be M.
Accumulate the mass in the bins, starting from index zero, until you pass 0.01 M. You get an index Q01.
Accumulate the mass again, this time starting from the maximum index and moving down, until you pass 0.01 M. You get an index Q99.
These indices are the so-called first and last percentiles. The contrast is estimated as Q99 - Q01. | 1 | 0 | 1 | I need to calculate the contrast of a color image, and the steps that were given to me are:
compute the histogram for each RGB channel separately and combine them together as Histogram = histOfRedC + histOfBlueC + histOfgreenC.
normalize it to unit length, as each image is of different size.
The contrast quality is equal to the width of the middle 98% mass of the histogram.
I have done the first two steps but am unable to understand what to compute in the third step. Can somebody please explain what it means? | How to Calculate width of the middle 98% mass of the gray level histogram of an image | 1.2 | 0 | 0 | 140
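The third step can be sketched as follows: Q01 and Q99 are the bins where the normalized cumulative histogram passes 1% and 99% of the total mass, and their difference is the width of the middle 98%. The 256-bin toy histogram below is made up:

```python
import numpy as np

def contrast_width(hist, low=0.01, high=0.99):
    """Width of the middle 98% mass: Q99 - Q01, where Q01/Q99 are the
    bins at which the cumulative mass passes 1% and 99% of the total."""
    cdf = np.cumsum(np.asarray(hist, dtype=float))
    cdf /= cdf[-1]                      # normalize to unit total mass
    q01 = int(np.searchsorted(cdf, low))
    q99 = int(np.searchsorted(cdf, high))
    return q99 - q01

# Toy histogram: all mass squeezed into bins 50..59 -> narrow -> low contrast
hist = np.zeros(256)
hist[50:60] = 1.0
print(contrast_width(hist))  # 9
```

A flat histogram (np.ones(256)) would instead give a width close to the full 256-bin range, i.e. high contrast.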
33,205,691 | 2015-10-19T02:27:00.000 | 1 | 0 | 0 | 0 | python,oslo | 34,715,692 | 1 | false | 0 | 0 | The oslo namespace package is deprecated; after oslo.config is installed you will see nothing in /usr/lib/python2.7/dist_packages/oslo except an intermediate directory, so use oslo_config instead when you want to import something. | 1 | 0 | 0 | I'm trying to use the oslo config package. But I found that someone is using this package like this
import oslo.config
while some others use it like this
import oslo_config
I'm confused; can anyone tell me what the difference between these two packages is?
Thanks | what is the difference between oslo.config and oslo_config? | 0.197375 | 0 | 1 | 363
33,208,805 | 2015-10-19T07:26:00.000 | 1 | 0 | 0 | 0 | python,django | 33,209,925 | 3 | false | 1 | 0 | Do I put the code in a specific view
a django view is a callable that must accept an HTTP request and return an HTTP response, so unless you need to be able to call your code through HTTP there's no point in using a view at all. And even if you want a view exposing this code, it doesn't mean the code doing the API call etc. has to live in the view.
Remember that a "django app" is basically a Python package, so besides the django-specific stuff (views, models etc.) you can put any module you want in it and have your views, custom commands etc. call these modules. So just write a module for your API client with a function doing the fetch / create-model-instance / whatever job, and then call this function from wherever it makes sense (view, custom command called by a cron job, celery task, whatever).
I need to make a call to an API every n days. I can make this call and fetch the data required via Python, but where exactly should I put the code?
Do I put the code in a specific view and then map the view to a URL and have that URL called whenever I want to create new model instances based on the API call?
Or am I approaching this the wrong way? | Create New Model Instance with API Call in Django | 0.066568 | 0 | 0 | 305 |
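The module-layering idea can be sketched like this; every name here (sync_widgets, the JSON shape) is hypothetical, and the HTTP call is injected so the function can run without a network or a Django model:

```python
# myapp/services.py -- plain Python, not a view; callable from a view,
# a management command run by cron, or a Celery task alike.
import json

def sync_widgets(fetch):
    """Fetch API data and return dicts ready to become model instances
    (e.g. Widget.objects.create(**row) in the real app)."""
    payload = json.loads(fetch())
    return [{"name": w["name"], "price": w["price"]}
            for w in payload["widgets"]]

# In production `fetch` might be: lambda: requests.get(API_URL).text
fake_fetch = lambda: '{"widgets": [{"name": "gear", "price": 3}]}'
rows = sync_widgets(fake_fetch)
print(rows)  # [{'name': 'gear', 'price': 3}]
```

Because the function lives in a plain module, none of its callers (view, cron-driven command, task) needs to duplicate the API-client code.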
33,210,021 | 2015-10-19T08:38:00.000 | 0 | 0 | 0 | 0 | python,encryption,autobahn | 33,211,565 | 1 | false | 1 | 0 | You haven't provided information on how you are encrypting your data. But you should never send the private key over the network. Never. Doing that is as secure as locking your house but leaving the key in the keyhole. If you've done it even once, throw the key away and generate a new one.
The strength of RSA comes from the fact that anyone holding the public key can encrypt data, but only the one holding the private key can decrypt. Think of it as an old video store's return box: any customer can put videos in it through the hole, but only the store staff can take the videos out of it.
What you want to do is:
Generate the keys on the server
Client calls the server and grabs the public key
Client encrypts data using the retrieved public key
Client sends the encrypted data to the server.
Server decrypts it using the private key | 1 | 0 | 0 | I'm new to Python and websocket programming.
I want to send data that was encrypted with an RSA key (in Python) through a websocket to a server in the cloud (using Node.js). To decrypt that data I need that key, right?
How can I send the RSA key to the server and use that key to decrypt?
Thank you | Sent RSA Key Through Autobahn | 0 | 0 | 1 | 120
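The steps above can be illustrated with textbook RSA on toy numbers. This is only a sketch of the flow; real code must use a vetted library (e.g. the cryptography package) rather than hand-rolled RSA:

```python
# Toy parameters -- far too small for real use
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

public_key = (n, e)            # this is what travels over the websocket
private_key = (n, d)           # this never leaves the server

message = 42                   # client's plaintext, encoded as an integer
cipher = pow(message, e, n)    # client encrypts with the public key
plain = pow(cipher, d, n)      # server decrypts with the private key
print(message, plain)
```

Only (n, e) ever crosses the network; the Node.js side would hold (n, d) and decrypt exactly as in the last line.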
33,210,107 | 2015-10-19T08:43:00.000 | 2 | 0 | 1 | 0 | python,dictionary | 33,210,477 | 3 | false | 0 | 0 | In addition to @Martijn Pieters' answer:
In case your Foo is a heavy one, you can define a class Bar with minimal functionality (self.id, __eq__ and __hash__) and make your Foo inherit from it.
Then you can use a lightweight Bar instance to retrieve the value associated with a Foo object from your dictionary. | 3 | 2 | 0 | I have a class Foo that has an instance string variable id. I implemented the __eq__ and __hash__ magic methods so that they only use id, so a Foo instance and its string id are mapped to the same hashtable bucket. However, say there is a dict whose keys are Foo instances, and I only have the string id. I want to retrieve the Foo instance that has that id.
One way is to iterate through all keys in the dict, but is there something more efficient/simple? | Retrieve key from key in dict | 0.132549 | 0 | 0 | 88 |
33,210,107 | 2015-10-19T08:43:00.000 | 2 | 0 | 1 | 0 | python,dictionary | 33,210,141 | 3 | false | 0 | 0 | Simply create a dummy Foo object with the given id, and look that up in the map. | 3 | 2 | 0 | I have a class Foo that has an instance string variable id. I implemented the __eq__ and __hash__ magic methods so that they only use id, so a Foo instance and its string id are mapped to the same hashtable bucket. However, say there is a dict whose keys are Foo instances, and I only have the string id. I want to retrieve the Foo instance that has that id.
One way is to iterate through all keys in the dict, but is there something more efficient/simple? | Retrieve key from key in dict | 0.132549 | 0 | 0 | 88 |
33,210,107 | 2015-10-19T08:43:00.000 | 3 | 0 | 1 | 0 | python,dictionary | 33,210,123 | 3 | true | 0 | 0 | There is no way to retrieve the corresponding Foo() object used as a key directly, no.
Your options are to loop over all keys and match the id attribute, or perhaps recreate the object with Foo(id); if all you need to do is use this as a lookup key then that should more than suffice.
If these objects have more attributes and id maps uniquely to those attributes, consider maintaining another dictionary mapping ids to Foo() instances. | 3 | 2 | 0 | I have a class Foo that has an instance string variable id. I implemented the __eq__ and __hash__ magic methods so that they only use id, so a Foo instance and its string id are mapped to the same hashtable bucket. However, say there is a dict whose keys are Foo instances, and I only have the string id. I want to retrieve the Foo instance that has that id.
One way is to iterate through all keys in the dict, but is there something more efficient/simple? | Retrieve key from key in dict | 1.2 | 0 | 0 | 88 |
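A runnable sketch of the answers above: a dummy instance (or the bare string, since __eq__ and __hash__ delegate to id) retrieves the value, while a side dictionary is what actually gives back the key object. All names are illustrative:

```python
class Foo:
    def __init__(self, id):
        self.id = id

    def __eq__(self, other):
        # compare equal to other Foos *and* to bare id strings
        return self.id == (other.id if isinstance(other, Foo) else other)

    def __hash__(self):
        return hash(self.id)

f = Foo("abc")
d = {f: "some value"}

value = d["abc"]          # works: "abc" hashes/compares like Foo("abc")
value2 = d[Foo("abc")]    # the "dummy instance" variant

# To recover the key *object* itself, keep an id -> Foo index:
by_id = {foo.id: foo for foo in d}
key_obj = by_id["abc"]
print(value, key_obj is f)
```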
33,211,130 | 2015-10-19T09:36:00.000 | 0 | 0 | 0 | 0 | python,django,printers | 33,211,416 | 1 | false | 1 | 0 | This should not be a Django matter. It doesn't matter whether the web app is on your local machine; a web app is a web app. You have to make clear in your design whether you wish to print from the server or from the client.
Since I assume that it is more reasonable to print as the end user, you can simply use a bit of JavaScript calling window.print() on click. | 1 | 0 | 0 | I have created a Python-based web application. Basically the application prints customer bills according to their purchases. I have created the bill format but am not sure how to print it by clicking the print button which I have added to the page.
My requirement here is that when I click the print button in my application, the bill should be printed on the printer.
If anyone has an idea of the steps I should follow to solve this, please feel free to add comments. | Steps to connect Epson LK 300 II with python application | 0 | 0 | 0 | 48
33,211,272 | 2015-10-19T09:44:00.000 | 0 | 0 | 1 | 0 | programming-languages,python-nonlocal | 71,878,951 | 3 | false | 0 | 0 | "nonlocal" means that a variable is "neither local or global", i.e, the variable is from an enclosing namespace (typically from an outer function of a nested function).
An important difference between nonlocal and global is that a nonlocal variable must already be bound in the enclosing namespace (otherwise a SyntaxError will be raised), while a global declaration in a local scope does not require the variable to be pre-bound (it will create a new binding in the global namespace if the variable is not pre-bound). | 1 | 5 | 0 | I am learning the concepts of programming languages.
I came across the keyword "nonlocal" in Python syntax.
What is the meaning of nonlocal in Python? | What is the difference between non local variable and global variable? | 0 | 0 | 0 | 4,873
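A small demonstration of the difference: nonlocal rebinds a name in the enclosing function's namespace, while global rebinds the module-level name:

```python
counter = 0                    # module-level (global) binding

def make_counter():
    count = 0                  # must already be bound for nonlocal to work
    def bump():
        nonlocal count         # rebinds the enclosing count...
        count += 1
        return count
    return bump

def bump_global():
    global counter             # ...while global rebinds the module name
    counter += 1
    return counter

c = make_counter()
results = (c(), c(), bump_global())
print(results, counter)  # (1, 2, 1) 1
```

Deleting the `count = 0` line would make the nonlocal declaration a SyntaxError, exactly as the answer describes.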
33,211,867 | 2015-10-19T10:14:00.000 | 0 | 1 | 0 | 1 | python,apscheduler | 43,736,075 | 3 | false | 0 | 0 | Another approach: you can write the logic of your job in a separate function, so you will be able to call this function from your scheduled job as well as from anywhere else. I guess that this is a more explicit way to do what you want. | 1 | 4 | 0 | Using APScheduler version 3.0.3. Services in my application internally use APScheduler to schedule & run jobs. I also created a wrapper class around the actual APScheduler (just a façade; it helps in unit tests). For unit testing these services, I can mock this wrapper class. But I have a situation where I would really like APScheduler to run the job (during a test). Is there any way by which one can force-run the job? | Can we force run the jobs in APScheduler job store? | 0 | 0 | 0 | 1,592
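That suggestion can be sketched as follows; the function name and the scheduler wiring in the comments are illustrative, not taken from the question:

```python
# Keep the job body in a plain function so it can be invoked directly
# (e.g. in a test) as well as by APScheduler.

def do_backup(target="default"):
    # the real job body would go here; `do_backup` is a hypothetical name
    return "backed up " + target

# In the application you would register the very same function, e.g.:
#   from apscheduler.schedulers.background import BackgroundScheduler
#   sched = BackgroundScheduler()
#   sched.add_job(do_backup, "interval", hours=1, args=["db"])
#   sched.start()

# ...and in a unit test you can simply force-run it directly:
result = do_backup("db")
print(result)  # backed up db
```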
33,212,196 | 2015-10-19T10:32:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x | 33,212,364 | 1 | false | 0 | 0 | Yes, indexing should be atomic, as long as you do it on a native list and not one where someone could have overridden __getitem__; however, that's an implementation detail, and you can't rely on it.
No, it's not solely because of the GIL, but because switching between threads is only allowed between single interpreter instructions, and indexing happens to be a single instruction (but only in CPython).
Takeaway: do not rely on atomicity in a high-level scripting language such as Python; if you need barriers, then use semaphores or explicitly atomic data types. | 1 | 1 | 0 | I read somewhere that indexing a list, e.g. l[3], is an atomic operation. Is the atomicity achieved because of the global interpreter lock? | Regarding atomicity of Python statements and Global Interpreter Lock | 0.379949 | 0 | 0 | 107
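When code needs atomicity it can actually rely on, an explicit lock is the portable choice, e.g.:

```python
import threading

lock = threading.Lock()
total = [0]

def bump(n):
    for _ in range(n):
        with lock:             # explicit, implementation-independent barrier
            total[0] += 1      # read-modify-write is NOT atomic on its own

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total[0])  # 4000
```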
33,212,370 | 2015-10-19T10:42:00.000 | 1 | 0 | 0 | 0 | python,internet-explorer,internet-explorer-11,robotframework | 35,595,207 | 2 | false | 1 | 0 | I have also encountered the same problem. Below are the steps which I have followed:
1. I have enabled the proxy in IE.
2. Set the environment variable no_proxy to 127.0.0.1 before launching the browser.
Ex: Set Environment Variable no_proxy 127.0.0.1
3. Set all the internet zones to the same level (medium to high) except restricted sites:
Open browser > Tools > Internet Options > Security tab
4. Enable "Enable Protected Mode" in all zones.
Please let me know your feedback.
I have tested with the iedriverServer.exe (32bit version 2.47.0.0).
One thing to add is that I am using a proxy. When I disable the proxy in IE and enable the automatic proxy configuration... IE can start up. But it can not load the website. For Chrome and FF the proxy is working fine.
Error message:
WebDriverException: Message: Can not connect to the IEDriver. | Robot Framework Internet explorer not opening | 0.099668 | 0 | 1 | 5,285 |
33,212,589 | 2015-10-19T10:53:00.000 | 1 | 0 | 1 | 0 | python-3.x,sentry | 33,223,249 | 1 | false | 0 | 0 | The clients are version agnostic (that is, they run on anything newer than 2.5), so they don't care what version you're using for your applications. That said, the server will never run on Python 3, but it simply doesn't matter, as it's isolated. A bit extreme, but it's kind of the same direction as "can we run the server on Ruby?" | 1 | 3 | 0 | I've been using Sentry with Python 2.7; is there any problem with using Python 3? The documentation on the official site only refers to Sentry with Virtualenv and Pip, and not Virtualenv for Python 3 and Pip3. | Sentry with Python 3 | 0.197375 | 0 | 0 | 886
33,216,350 | 2015-10-19T13:55:00.000 | 1 | 0 | 0 | 0 | python,paraview | 33,282,334 | 1 | true | 0 | 0 | Usually plots are made by plotting one data array versus another. You can often obtain that data directly from the filter/source that produced it and save it to a CSV file.
You can save data from a filter as a CSV file by selecting the filter/source in the Pipeline Browser and choosing File -> Save Data. Choose the CSV File (*.csv) file type. Arrays in the filter/source output are written to different columns in the CSV file. | 1 | 0 | 1 | How can I extract data from the plot data filter in ParaView through a Python script? I want to get, through a Python script, the data from which ParaView is drawing the graph.
If anyone knows the answer, please help.
Thank you | extracting the data through python script in paraview | 1.2 | 0 | 0 | 628 |
33,220,279 | 2015-10-19T17:18:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,cron | 33,220,726 | 1 | false | 0 | 0 | Unfortunately, you cannot combine the weekday option with the interval.
You could add a switch in the request handler of your cron job that will just exit if the current weekday is not Saturday, while your cron job is scheduled "every 2 minutes from 01:00 to 03:00". But that means that your handler will be called 360 times per week doing nothing, and only doing something the other 60 times.
Alternatively, you could use an "every saturday 01:00" cron job (as dispatcher) that will create 60 push tasks (as workers) with a countdown or ETA, spread between 01:00 and 03:00. However, I don't think the execution time is guaranteed. | 1 | 0 | 0 | I wanted to run my cron job as 'schedule: every saturday every 2 minutes from 01:00 to 3:00', and it won't allow this format. Is it possible to set a cron job to target another cron job? Or is my schedule possible, just not in the correct format? | Google App Engine Python Cron Job | 0 | 0 | 0 | 34
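The weekday switch in the handler boils down to a small check; the function name and the injectable now argument are illustrative:

```python
import datetime

def should_run(now=None):
    """Gate for a handler scheduled 'every 2 minutes from 01:00 to 03:00':
    do the real work only on Saturdays, otherwise exit immediately."""
    now = now or datetime.datetime.utcnow()
    return now.weekday() == 5          # Monday == 0 ... Saturday == 5

print(should_run(datetime.datetime(2015, 10, 24, 1, 30)))  # True  (a Saturday)
print(should_run(datetime.datetime(2015, 10, 19, 1, 30)))  # False (a Monday)
```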