Dataset schema (fields appear in this order in each record below; for stringlengths columns the two numbers are the minimum and maximum string length, otherwise the minimum and maximum value):

Column                              dtype          min         max
Q_Id                                int64          337         49.3M
CreationDate                        stringlengths  23          23
Users Score                         int64          -42         1.15k
Other                               int64          0           1
Python Basics and Environment       int64          0           1
System Administration and DevOps    int64          0           1
Tags                                stringlengths  6           105
A_Id                                int64          518         72.5M
AnswerCount                         int64          1           64
is_accepted                         bool           2 classes
Web Development                     int64          0           1
GUI and Desktop Applications        int64          0           1
Answer                              stringlengths  6           11.6k
Available Count                     int64          1           31
Q_Score                             int64          0           6.79k
Data Science and Machine Learning   int64          0           1
Question                            stringlengths  15          29k
Title                               stringlengths  11          150
Score                               float64        -1          1.2
Database and SQL                    int64          0           1
Networking and APIs                 int64          0           1
ViewCount                           int64          8           6.81M
28,582,037
2015-02-18T11:06:00.000
1
0
1
0
python-3.x
49,924,334
2
false
0
1
This is often implemented by writing to a temp file and then moving it to the original file's name.
1
0
0
I am trying to open a file, remove some characters (defined in dic) and then save it to the same file. I can print the output and it looks fine, but I cannot save it into the same file that the original text is being loaded from.

from tkinter import *
from tkinter.filedialog import askopenfilename
from tkinter.messagebox import showerror
import sys
import fileinput

dic = {'/':' ', '{3}':''};

def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text

class MyFrame(Frame):
    def __init__(self):
        Frame.__init__(self)
        self.master.title("Example")
        self.master.rowconfigure(5, weight=1)
        self.master.columnconfigure(5, weight=1)
        self.grid(sticky=W+E+N+S)
        self.button = Button(self, text="Browse", command=self.load_file, width=10)
        self.button.grid(row=1, column=0, sticky=W)

    def load_file(self):
        fname = askopenfilename(filetypes=(("Napisy", "*.txt"), ("All files", "*.*") ))
        if fname:
            try:
                with open (fname, 'r+') as myfile:      #here
                    data = myfile.read()                #here
                    data2 = replace_all(data, dic)      #here
                    print(data2)                        #here
                    data.write(data2)                   #and here should it happen
            except:
                showerror("Open Source File", "Failed to read file\n'%s'" % fname)
            return

if __name__ == "__main__":
    MyFrame().mainloop()

I have tried several commands but either I am receiving python errors or it is simply not working.
How can I save output to the same file that I got the data from, in Python 3
0.099668
0
0
649
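The answer above only sketches the temp-file-then-rename idea in words; here is a minimal runnable version of that approach (the asker's bug was calling data.write(data2) on the string instead of a file object). The widget code is omitted and the file path is illustrative:

```python
import os
import tempfile

def replace_all(text, dic):
    for old, new in dic.items():
        text = text.replace(old, new)
    return text

def rewrite_in_place(fname, dic):
    # Read the original contents.
    with open(fname, 'r') as src:
        data = replace_all(src.read(), dic)
    # Write to a temp file in the same directory, then atomically
    # move it over the original file's name.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(fname) or '.')
    with os.fdopen(fd, 'w') as tmp:
        tmp.write(data)
    os.replace(tmp_path, fname)  # use os.rename() on Python < 3.3
```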
28,583,633
2015-02-18T12:23:00.000
1
1
0
1
python,bash,ubuntu,command-line,history
28,583,843
1
false
0
0
Bash usually saves all your commands in the history buffer except if you specifically mark them to be excluded. There is an environment variable HISTIGNORE which might be configured to ignore python invocations altogether, although this is somewhat unlikely; or you may be marking them for exclusion by typing a space before the command.
1
0
0
I quite often launch python scripts from the command line, like python somescript.py --with-arguments Now I'm wondering why that is not saved in the output of the history command, and whether there is a way to see a history of it.
Why python command-line launches are not saved in history
0.197375
0
0
45
28,584,209
2015-02-18T12:55:00.000
1
0
0
0
python,shell,scheduled-tasks
28,676,803
1
true
0
0
Not sure why it wasn't working to be honest, but after going through the steps to recreate the task, it started working. Maybe I had just mistakenly entered the credentials of the user running the task or something like that.
1
0
0
I'm troubleshooting a python script that does 3 things. Stops ArcGIS Server using subprocess.check_output('net stop "ArcGIS Server"', shell=True) Runs some arcpy functions Starts ArcGIS Server using subprocess.check_output('net start "ArcGIS Server"', shell=True) The script runs fine when run from IDLE. It also runs fine as a scheduled task when I check the radio button that says "Run only when the user is logged on" However, this task runs on a server and I don't want to have to be logged in for the task to run. So I check the radio button that says "Run whether user is logged on or not", and "Run with highest privileges". The result is the script's log file is empty, and the script never completes. It returns the error code in the Task Scheduler, that I mentioned above. More task details: Action: Start a program Program/script: C:\Python27\ArcGIS10.3\python.exe Add arguments (optional): D:\ArcGISData\server-data\Python\copydata.py Am I doing something wrong, or is there any other steps I can try to get this working?
Python Script Running in Windows Task Scheduler Completes with exit code 3221225477
1.2
0
0
1,480
28,587,843
2015-02-18T15:52:00.000
0
0
0
1
python,python-2.7
37,771,773
2
false
0
0
I had the same problem and the solution was: uninstall Ulipad, then install Ulipad on a different disk, e.g. D:\Ulipad
2
0
0
I am familiar with R but new to Python. To use python, I installed Python 2.7, set environment variables and installed wxPython. And then, after installing Ulipad I opened Ulipad but an error message showed this: The logfile 'C:\program file(x86)\Ulipad\Ulipad.exe.log' could not be opened:[Errno13] Permission denied: 'C:\program file(x86)\Ulipad\Ulipad.exe.log' Can you help me to open the Ulipad? Or Is there any other good program like Ulipad? I am not good at programming but only familiar with R. Python seems to be little different from R in an interface.
error in opening Ulipad for python
0
0
0
336
28,587,843
2015-02-18T15:52:00.000
0
0
0
1
python,python-2.7
29,441,423
2
false
0
0
This is happening because under newer Windows OS(Vista, 7, 8) programs do not have write access to "C:\program file(x86)\" for security reasons. Easiest fix for your problem would be to uninstall the current installation and re-install it at a different location e.g. C:\Ulipad. Alternative is to run Ulipad using "Run as an administrator" option but it is not recommended.
2
0
0
I am familiar with R but new to Python. To use python, I installed Python 2.7, set environment variables and installed wxPython. And then, after installing Ulipad I opened Ulipad but an error message showed this: The logfile 'C:\program file(x86)\Ulipad\Ulipad.exe.log' could not be opened:[Errno13] Permission denied: 'C:\program file(x86)\Ulipad\Ulipad.exe.log' Can you help me to open the Ulipad? Or Is there any other good program like Ulipad? I am not good at programming but only familiar with R. Python seems to be little different from R in an interface.
error in opening Ulipad for python
0
0
0
336
28,590,903
2015-02-18T18:15:00.000
0
0
0
1
python,updates
28,593,395
2
false
0
0
The data isn't written to the latest.log file until the process writing it (probably the server) fills or flushes the buffer. There probably isn't any way to change that from within Python. The best bet is to see if you can configure the writing process to flush after each line.
1
0
0
I am making a python program for a Minecraft server that automatically bids on items up to a certain price. In appdata/roaming/.minecraft/logs there is a chat log called "latest.log". This log is constantly being updated with what everyone on my server is saying over chat. If I open it and view the text, the log doesn't automatically update (obviously). How would I use a python script to print every line in my log and automatically update? I am on Windows 8.1 with Python 2.7.9
How to auto-refresh a log.txt file
0
0
0
1,704
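As a complement to the answer above: once the server process actually flushes its buffer, a script can follow latest.log much like tail -f. This is a minimal polling sketch, assuming a plain-text log; it will only ever see data that has been flushed to disk:

```python
import os
import time

def follow(path, interval=1.0):
    """Yield new lines appended to path, polling like `tail -f`."""
    with open(path, 'r') as f:
        f.seek(0, os.SEEK_END)  # start at the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(interval)  # nothing new yet; wait and retry

# for line in follow('latest.log'):
#     print(line.rstrip())
```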
28,592,624
2015-02-18T19:51:00.000
2
1
1
1
vim,python-mode
28,592,881
1
true
0
0
Not trivial. Python-mode uses the Python interpreter Vim is linked against; you'll have to recompile Vim and link it against Anaconda.
1
0
0
When I use python-mode it uses my system (Mac) Python. I have Anaconda installed and want Vim to autocomplete etc. with that version of Python. As it stands now, python-mode will only autocomplete modules from system Python and not any other modules, e.g. pandas, that are installed in the Anaconda distro. Thanx, Tobie
Vim python-mode plugin picks up system python and not anaconda
1.2
0
0
513
28,593,711
2015-02-18T20:53:00.000
5
0
1
1
python,sublimetext
28,593,838
3
true
0
0
You can use ctrl-b to run your python in sublime. If you want to use a different interpreter you can customise under Tools -> Build System
2
1
0
I use Sublime Text and am using the terminal to run my code. I would prefer to use the Python Shell to run my code, as it has color and is not so hard to look at. Is there any easy way to do this other than saving then opening in IDLE?
Using Python Shell with any text editor
1.2
0
0
240
28,593,711
2015-02-18T20:53:00.000
3
0
1
1
python,sublimetext
28,593,841
3
false
0
0
Stick with Sublime Text. It's a popular text editor with syntax highlighting for several different programming languages. Here's what you need to do:

1. Press Ctrl + Shift + P to bring up the command palette and enter "python".
2. Choose the option that says something like "Set syntax to Python".
3. Enter Python code, then press Ctrl + Shift + B to build the project. The code will run below in another view (you will probably be able to move it to the side).

This is the standard procedure for a Python setup in Sublime Text, but you may need to install SublimeREPL for Python in order to get user input. Just give it a Google search.
2
1
0
I use Sublime Text and am using the terminal to run my code. I would prefer to use the Python Shell to run my code, as it has color and is not so hard to look at. Is there any easy way to do this other than saving then opening in IDLE?
Using Python Shell with any text editor
0.197375
0
0
240
28,594,933
2015-02-18T22:10:00.000
0
1
1
0
python,c,automation,automated-tests
28,594,984
1
false
0
0
You can use the subprocess module in Python to spawn other programs, retrieve their output and feed them arbitrary input. The relevant pieces will most likely be the Popen.communicate() method and/or the .stdin and .stdout file objects; ensure when you do this that you passed PIPE as the argument to the stdin and stdout keywords on creation.
1
0
0
I need to score my students' C programming homework. I want to write an autograder script which automatically scores the homework. I plan to write this script in Python. My question is: in some parts of the homework, students get input from the keyboard with scanf. How can I handle this if I try to write an autograder? Is there any way to read from a text file when the scanf line runs in the homework? Any idea is appreciated.
Autograder script - reading keyboard inputs from textfile
0
0
0
160
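A minimal sketch of the Popen/PIPE technique the answer describes, for feeding a test file's contents to a student's scanf-driven program; the binary and file names are hypothetical:

```python
import subprocess

def run_with_input(binary, input_bytes):
    # Spawn the student's program with stdin/stdout/stderr connected to pipes.
    proc = subprocess.Popen(
        [binary],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # communicate() writes the input, reads all output, and waits for exit.
    out, err = proc.communicate(input_bytes)
    return proc.returncode, out, err

# Feed the contents of a test file to the program's scanf calls:
# with open('test01.txt', 'rb') as f:
#     code, out, err = run_with_input('./homework', f.read())
```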
28,596,867
2015-02-19T00:51:00.000
2
0
1
0
python
28,596,911
2
false
0
0
Because the two numbers you are dividing are integers, python 2 floors the quotient of 3/2. If you want to get a float as an answer, just do 3.0/2.0 instead. (note: you don't have to do this in python 3)
1
0
0
Why is negative 3 divided by two equal to negative two, while three divided by two is one, in Python? I have tried it in IDLE and don't understand why -3/2=-2 and 3/2=1 in Python.
Why -3/2=-2 and 3/2=1 in Python
0.197375
0
0
1,447
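A quick demonstration of the floor-division behavior described in the answer above; the comments note where Python 2 and Python 3 differ:

```python
# Python 2: / between two ints floors toward negative infinity.
print(3 / 2)      # 1 in Python 2 (1.5 in Python 3, where / is true division)
print(-3 / 2)     # -2 in Python 2: floored, not truncated toward zero
print(3.0 / 2.0)  # 1.5 in both versions
print(-3 // 2)    # -2 in both versions: // is explicit floor division
```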
28,597,205
2015-02-19T01:27:00.000
2
0
0
1
python,django,deployment,passenger,uwsgi
28,599,998
1
false
1
0
Production performance is pretty the same, so I wouldn't worry about that. uWSGI has some advanced builtin features like clustering and a cron API while Phusion Passenger is more minimalist, but Phusion Passenger provides more friendly tools for administration and inspection (e.g. passenger-status, passenger-memory-stats, passenger-config system-metrics).
1
1
0
Which way of deploying Django app is better (or maybe the better question would be what are pros and cons): using UWSGI, using Phusion Passenger? In my particular case the most important advantage for using Passenger is ease of use (on my hosting I need to place single file in project directory and it's done), but what with performance things, etc.? What do you think?
Django app - deploy using UWSGI or Phusion Passenger
0.379949
0
0
760
28,597,575
2015-02-19T02:11:00.000
3
0
0
0
postgresql,psycopg2,python-db-api
28,602,221
1
true
0
0
You can re-register a plain string type caster for every single PostgreSQL type (or at least for every type you expect a string for in your code): when you register a type caster for an already registered OID the new definition takes precedence. Just have a look at the source code of psycopg (both C and Python) to find the correct OIDs. You can also compile your own version of psycopg disabling type casting. I don't have the source code here right now but probably is just a couple line changes.
1
2
0
Query results from some Postgres data types are converted to native types by psycopg2. Neither pgdb (PostgreSQL) and cx_Oracle seem to do this. …so my attempt to switch pgdb out for psycopg2cffi is proving difficult, as there is a fair bit of code expecting strings, and I need to continue to support cx_Oracle. The psycopg2 docs explain how to register additional types for conversion, but I'd actually like to remove that conversion if possible and get the strings as provided by Postgres. Is that doable?
Can one disable conversion to native types when using psycopg2?
1.2
1
0
274
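A minimal sketch of the re-registration trick from the accepted answer, using psycopg2's extensions API. new_type/register_type are the real entry points; OID 1700 is PostgreSQL's numeric type, and you would repeat the registration for every OID you want returned as a raw string:

```python
import psycopg2
import psycopg2.extensions

def passthrough(value, cursor):
    # psycopg2 hands the adapter the raw string from the server (or None);
    # returning it unchanged disables the native-type conversion.
    return value

# Look up other OIDs in the pg_type catalog (or psycopg's source) for the
# types you care about; 1700 is "numeric".
NUMERIC_AS_STRING = psycopg2.extensions.new_type(
    (1700,), 'NUMERIC_AS_STRING', passthrough)
psycopg2.extensions.register_type(NUMERIC_AS_STRING)
```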
28,600,076
2015-02-19T06:38:00.000
0
1
0
0
python,tweepy
28,600,163
2
false
0
0
PythonAnywhere requires a premium account for web access. You can create a limited account with one web app at your-username.pythonanywhere.com, restricted Internet access from your apps, and low CPU/bandwidth. It works and it's a great way to get started! A premium account ($5/month) gets you:

- Run your Python code in the cloud from one web app and the console
- A Python IDE in your browser with unlimited Python/bash consoles
- One web app with free SSL at your-username.pythonanywhere.com
- Enough power to run a typical 50,000 hit/day website
- 3,000 CPU-seconds per day
- 512MB disk space

That said, I'd just set it up locally if it's for personal use and go from there.
2
0
0
I am using pythonanywhere.com and trying to run an app that I made for twitter that uses tweepy but it keeps saying connection refused or failed to send request. Is there any way to run a python app online easily that sends requests?
Trying to use python on a server
0
0
1
214
28,600,076
2015-02-19T06:38:00.000
0
1
0
0
python,tweepy
28,600,193
2
false
0
0
You will need a server that has a public IP. You have a few options here: You can use a platform-as-a-service provider like Heroku or AWS Elastic Beanstalk. You can get a server online on AWS, install your dependencies and use it instead. As long as you keep your usage low, you can stay withing the free quotas for these services.
2
0
0
I am using pythonanywhere.com and trying to run an app that I made for twitter that uses tweepy but it keeps saying connection refused or failed to send request. Is there any way to run a python app online easily that sends requests?
Trying to use python on a server
0
0
1
214
28,600,606
2015-02-19T07:17:00.000
0
0
1
1
python,anaconda
28,635,473
1
false
0
0
%run is a command that's run from inside of IPython. To use it, you should start ipython first. Or just run python program.py (if your program is named program.py).
1
0
0
I have opened Anaconda - then I maneuvered to the directory where a certain Python program I want to run actually lies. I then tried the %run command, but the command does not seem to work! So how am I to run that program? Does anyone know the right command that one has to use in the black-colored Anaconda console command line to run a Python program existing in a certain directory (to which the command line has been taken)?
Opening Python program from Anaconda
0
0
0
407
28,600,714
2015-02-19T07:25:00.000
1
0
1
0
python,syntax-highlighting,pycharm
28,616,559
1
false
0
0
There is no such feature in PyCharm 4.
1
1
0
PyCharm version: Community Edition 4.0.4 Is it possible to customize my color scheme for a python file in PyCharm such that certain statements are of darker color? e.g. I want to make all statements starting with "logger" to be of gray color so that I can focus on my main code without having to wade through lot of info/debug statements. I tried to find out if I can add new keyword in keywords1 keywords2 keywords3 keywords4 but can't find any such option. And on top of that, I can't find any way to alter colors for keyword1/2/3/4 individually. I can't be the only one wanting to hide/dim logging statements!
how to 'dim' certain python statements in PyCharm
0.197375
0
0
74
28,605,646
2015-02-19T11:50:00.000
1
0
0
1
python,django,parallel-processing,celery,mongoengine
28,642,118
1
true
1
0
When you make the synchronous calls to external systems it will tie up a thread in the application server, so depending on application server you choose and how many concurrent threads/users you have will determine whether doing it that way will work for you. Usually when you have long running requests like that it is a good idea to use a background processing system such as celery, like you suggest.
1
1
0
I'm working on a project that uses Django and mongoengine. When a user presses a button, a trigger to a call_command (django.core.management - just calls a script it seems to me) is made which sshs to multiple servers in parallel, copies some files, parses them and stores them in the database. The problem is that when the button is pressed and the above process is running, if any other user tries to use the website, it doesn't load. Is this because of mongo's lock? This happens as soon as the button is pressed (so when the connections to other servers are still made, not yet writing to the DB) so I was thinking that it's not a mongo issue. So is it a Django issue calling the command synchronously? Do I need to use Celery for this task?
Django's "call_command" hangs the application
1.2
0
0
347
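A minimal Celery sketch of the pattern the accepted answer recommends: move the long-running ssh/copy/parse work out of the request thread. The broker URL and task body are placeholders:

```python
from celery import Celery

# Broker URL is illustrative; point it at your own Redis/RabbitMQ instance.
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def copy_and_parse(servers):
    # ssh to the servers, copy files, parse them, store in MongoDB...
    pass

# In the Django view, enqueue the work instead of blocking the request:
# copy_and_parse.delay(['server1', 'server2'])
```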
28,606,259
2015-02-19T12:20:00.000
0
1
0
0
python,remote-connection
28,615,437
1
true
0
0
When you're not cleaning up the proxy objects they keep a connection live to the pyro daemon. By default the daemon accepts 16 concurrent connections. If you use the with.. as... syntax, you're closing the proxy cleanly after you've done using it and this releases a connection in the daemon, making it available for a new proxy. You can increase the number of 16 by increasing Pyro's threadpool size via the config. Alternatively you could perhaps use the multiplex server type instead of the default threaded one.
1
0
0
I'm using python and writing something that connects to a remote object using Pyro4 When running some unit tests (using pyunit) that repeatedly connects to a remote object with pyro, I found I couldn't run more than 9 tests or the tests would get stuck and just hang there. I've now managed to fix this by using with Pyro4.Proxy(PYRONAME:name) as pyroObject: do something with object... whereas before I was creating the object in the test set up: def setUp(self): self.pyroObject = Pyro4.Proxy(PYRONAME:name) and then using self.pyroObject within the tests Does anyone know why this has fixed the issue? Thanks
python - number of pyro connections
1.2
0
0
499
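For reference, a compact sketch of both fixes mentioned in the accepted answer; the registered object name is illustrative, and the config lines belong in the server (daemon) process:

```python
import Pyro4

# Client side: closing the proxy deterministically releases its
# connection back to the daemon.
with Pyro4.Proxy('PYRONAME:some.object') as obj:
    obj.do_something()

# Server side: raise the threaded server's connection limit, or switch
# to the multiplex server type, before creating the daemon.
Pyro4.config.THREADPOOL_SIZE = 64
Pyro4.config.SERVERTYPE = 'multiplex'
```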
28,606,809
2015-02-19T12:46:00.000
1
1
0
0
python,twisted,irc,twisted.internet,twisted.words
28,607,141
1
false
0
0
Found it, when I override RPL_WHOISUSER, I can get the information after issuing an IRCClient.whois. (And yes, did search for it before I posted my question, but had an epiphany right after I posted my question...)
1
1
0
I'm trying to get the hostmask for a user, to allow some authentication in my IRCClient bot. However, it seems to be removed from all responses? I've tried 'whois', but it only gives me the username and the channels the user is in, not the hostmask. Any hint on how to do this?
How to get a user's hostmask with Twisted IRCClient
0.197375
0
1
37
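A minimal sketch of the override described in the answer, relying on Twisted's dispatch of numeric reply 311 to irc_RPL_WHOISUSER; the exact params layout is an assumption and may vary slightly:

```python
from twisted.words.protocols import irc

class MyBot(irc.IRCClient):
    nickname = 'mybot'

    def lookup(self, nick):
        self.whois(nick)  # triggers RPL_WHOISUSER (311) from the server

    def irc_RPL_WHOISUSER(self, prefix, params):
        # params is roughly [me, nick, user, host, '*', realname]
        nick, user, host = params[1], params[2], params[3]
        print('%s!%s@%s' % (nick, user, host))  # the hostmask
```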
28,608,320
2015-02-19T14:02:00.000
1
0
0
0
arrays,python-3.x,numpy
28,608,797
1
true
0
0
Use array indexing as below: color[0]
1
1
1
How to get the content of a row of a Numpy array ? For example I have a Numpy array with 3 rows color=np.array([[255,0,0],[255,255,0],[0,255,0]]) and I want to retrieve the content of the first row [255,0,0].
How to get the content of a row of a Numpy array?
1.2
0
0
41
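A short demonstration of the row indexing from the accepted answer:

```python
import numpy as np

color = np.array([[255, 0, 0], [255, 255, 0], [0, 255, 0]])
print(color[0])     # array([255,   0,   0]) -- the first row
print(color[-1])    # array([  0, 255,   0]) -- the last row
print(color[0, 2])  # 0 -- row 0, column 2
```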
28,608,641
2015-02-19T14:15:00.000
0
0
1
1
python,shutil
28,608,827
3
false
0
0
rmtree does not appear to have any kind of filtering mechanism that you could use; further, since part of its functionality is to remove the directory itself, and not just its contents, it wouldn't make sense to. If you could do something to the file so that rmtree's attempt to delete it fails, you can have rmtree ignore such errors, thus leaving your file but deleting the others. If you cannot, you could resort to os.walk to loop over the contents of your directory, and thus decide which items to remove for yourself.
1
1
0
I would periodically like to delete the contents of a Windows directory, which includes files and subdirectories that contain more files. However, there is one specific file that I do not want to remove (it is the same file every time). I am using shutil.rmtree to delete the contents of a folder, but I am deleting the file I wish to keep also. How would I make an exception preventing the removal of the file I would like to keep, and is shutil the best method for this?
Deleting contents of a folder selectively with python
0
0
0
897
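A minimal os.walk-based sketch of the selective deletion the answer suggests: walk bottom-up, delete every file except the one to keep, and remove directories only once they are empty (so the kept file's directory chain survives). Names are illustrative:

```python
import os

def clear_dir_except(root, keep_name):
    """Delete everything under root except files named keep_name."""
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            if name != keep_name:
                os.remove(os.path.join(dirpath, name))
        for name in dirnames:
            subdir = os.path.join(dirpath, name)
            if not os.listdir(subdir):  # only remove now-empty directories
                os.rmdir(subdir)
```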
28,613,399
2015-02-19T17:51:00.000
84
0
0
0
python,rest,websocket,httprequest
28,618,369
1
true
0
0
The most efficient operation for what you're describing would be to use a webSocket connection between client and server and have the server send updated price information directly to the client over the webSocket ONLY when the price changes by some meaningful amount or when some minimum amount of time has elapsed and the price has changed. This could be much more efficient than having the client constantly ask for new price changes, and the timing of when the new information gets to the client can be more timely.

So, if you're interested in how quickly the information on a new price level gets to the client, a webSocket can get it there much more timely because the server can just send the new pricing information directly to the client the very moment it changes on the server. Whereas using a REST call, the client has to poll on some fixed time interval and will only ever get new data at the point of their polling interval.

A webSocket can also be faster and easier on your networking infrastructure simply because fewer network operations are involved to send a packet over an already open webSocket connection versus creating a new connection for each REST/Ajax call, sending new data, then closing the connection. How much of a difference/improvement this makes in your particular application would be something you'd have to measure to really know. But webSockets were designed to help with your specific scenario where a client wants to know (as close to real-time as practical) when something changes on the server, so I would definitely think that it would be the preferred design pattern for this type of use.

Here's a comparison of the networking operations involved in sending a price change over an already open webSocket vs. making a REST call.

webSocket:
1. Server sees that a price has changed and immediately sends a message to each client.
2. Client receives the message about the new price.

REST/Ajax:
1. Client sets up a polling interval.
2. Upon the next polling interval trigger, client creates a socket connection to the server.
3. Server receives the request to open a new socket.
4. When the connection is made with the server, client sends a request for new pricing info to the server.
5. Server receives the request for new pricing info and sends a reply with new data (if any).
6. Client receives the new pricing data.
7. Client closes the socket.
8. Server receives the socket close.

As you can see, there's a lot more going on in the REST/Ajax call from a networking point of view because a new connection has to be established for every new call, whereas the webSocket uses an already open connection. In addition, in the webSocket case the server just sends the client new data when new data is available - the client doesn't have to regularly request it. If the pricing information doesn't change super often, the REST/Ajax scenario will also frequently have "do-nothing" calls where the client requests an update, but there is no new data. The webSocket case never has that wasteful call since the server only sends new data when it is available.
1
40
0
I need to constantly access a server to get real time data of financial instruments. The price is constantly changing, so I need to request new prices every 0.5 seconds. The REST APIs of the brokers let me do this; however, I have noticed there's quite some delay when connecting to the server. I just noticed that they also have a websocket API though. According to what I read, they both have some pros/cons. But for what I want to do, and because speed is especially important here, which kind of API would you recommend? Is websocket really faster? Thank you!
websocket vs rest API for real time data?
1.2
0
1
21,513
28,614,874
2015-02-19T19:11:00.000
0
0
1
0
python-2.7,pip,scikit-learn
35,262,231
2
false
0
0
Changing the directory worked in my case. Suppose your Python 2.7.9 is on the C drive; then set your directory as follows and write your command like this: C:\python27\scripts> pip install -U scikit-learn
1
0
1
I have python 2.7.9 (which comes with pip already installed), and I have numpy 1.8.2 and scipy 0.15.1 installed as well. When I try to install scikit-learn, I get the following error: pip install -U scikit-learn SyntaxError: invalid syntax What am I doing wrong? Or is there another way to install scikit-learn on Windows, if I can't use pip?
Unable to install scikit-learn on python 2.7.9 in Windows?
0
0
0
771
28,615,418
2015-02-19T19:39:00.000
0
1
0
1
eclipse,python-3.x,pydev
28,972,434
1
false
0
0
It seems like Eclipse Luna does not provide support for PyDev when it's installed with Aptana. I was able to install Aptana without PyDev and do a separate install of Pydev on its own and this solved the problem.
1
0
0
I'm using the Pydev plugin for Eclipse Luna for Java EE. The python code runs correctly, but errors are showing up for built in keywords like print. Error: Undefined Variable: print I looked on stackoverflow for other answers, and the suggestions have all been to manually configure an interpreter. I changed my interpreter to point at C:/python34/python.exe, but this has not fixed the problem. I also made sure that I was using grammar version 3.0. Update: I think it might be a problem with aptana instead of pydev. I uninstalled aptana, and installed pydev without any issues. But when I tried to reinstall aptana, I can only do it by uninstalling pydev. I need a way to try a previous version of aptana or else a way to install aptana and pydev separately
Syntax errors for keywords in pydev plugin for Eclipse
0
0
0
303
28,618,026
2015-02-19T22:12:00.000
0
0
1
0
python,multithreading
28,620,077
2
false
0
0
You cannot safely terminate a thread without its cooperation. Threads are not isolated within a process, so unsafely terminating a thread contaminates the process. Please, don't go down this road. If you need this kind of isolation, you need a process. You can safely terminate a process without its cooperation, though it may leave system objects (such as files) that the process was working on in an intermediate state. In your case, that may mean a print job half-done and a page halfway in the printer. Or it may mean temporary files that don't get removed.
1
0
0
I have read most of the similar questions on stackoverflow, but none seem to solve my problem. I use ctypes to call a function from a dll file. Therefore, I can't edit the source code of the dll file to add any "end looping" conditions. Also, this function may last long (like some printing command). I need to design a "halt" command in case of an emergency while printing is in progress. The only thing I can do is kill the thread.
How to kill a Python thread without communication
0
0
0
364
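A minimal sketch of the process-based alternative the answer recommends: run the blocking DLL call in a separate process so it can be terminated safely. The DLL name and its exported function are hypothetical:

```python
from multiprocessing import Process
import ctypes

def print_job(dll_path):
    lib = ctypes.CDLL(dll_path)
    lib.StartPrinting()  # hypothetical long-running call inside the DLL

if __name__ == '__main__':
    p = Process(target=print_job, args=('mylib.dll',))
    p.start()
    # ... emergency stop requested:
    p.terminate()  # kills the whole worker process, not a shared thread
    p.join()
```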
28,618,400
2015-02-19T22:38:00.000
6
0
1
0
python,nlp,nltk
28,635,345
7
false
0
0
The English language has two voices: active and passive. Let's take the most used one, the active voice. It follows the subject-verb-object model. To mark the subject, write a rule set with POS tags. Tag the sentence: I[NOUN] shot[VERB] an elephant[NOUN]. You can see that the first noun is the subject, then there is a verb, and then there is an object. To make it more complicated, take the sentence "I shot an elephant with a gun". Here prepositions or subordinate conjunctions like with, at, in can be given roles. The sentence will be tagged as I[NOUN] shot[VERB] an elephant[NOUN] with[IN] a gun[NOUN], and you can easily say that the word with gets an instrumentative role. You can build a rule-based system to get the role of every word in the sentence. Also look at the patterns in the passive voice and write rules for those.
1
20
0
Can Python + NLTK be used to identify the subject of a sentence? From what I have learned till now, a sentence can be broken into a head and its dependents. E.g. "I shot an elephant". In this sentence, I and elephant are dependents of shot. But how do I discern that the subject in this sentence is I?
How to identify the subject of a sentence?
1
0
0
29,575
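A small sketch of the rule-based approach from the answer above, using NLTK's tokenizer and POS tagger (the required NLTK models must be downloaded first); "first noun/pronoun" is of course only a heuristic for simple active-voice sentences:

```python
import nltk  # requires the punkt and averaged_perceptron_tagger models

sentence = "I shot an elephant with a gun"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(tagged)
# [('I', 'PRP'), ('shot', 'VBD'), ('an', 'DT'), ('elephant', 'NN'), ...]

# Heuristic: in a simple active-voice sentence, the first noun or
# pronoun is taken to be the subject.
subject = next(word for word, tag in tagged if tag.startswith(('PRP', 'NN')))
print(subject)  # 'I'
```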
28,618,441
2015-02-19T22:41:00.000
0
0
1
0
python,function,math,3d
70,202,164
3
false
0
0
Your function is not surjective: let p be a prime number other than 2, 3 and 5; then we can't find any x, y, z in N such that p=2^x3^y5^z...
1
1
0
Can anyone help me in finding a bijective mathematical function from N * N * N → N that takes three parameters x, y, and z and returns a number n? I would like to know the function f and its inverse f' in a way that if I have n I will be able to determine x, y, z by applying f'(n).
A bijective function from N*N*N to N
0
0
0
4,542
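Since the answer points out that 2^x·3^y·5^z misses every prime other than 2, 3 and 5, here is a standard genuine bijection N×N×N → N built by nesting the Cantor pairing function, as a sketch; note math.isqrt needs Python 3.8+:

```python
import math

def pair(x, y):
    # Cantor pairing function: a bijection N x N -> N.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(n):
    # Inverse of pair().
    w = (math.isqrt(8 * n + 1) - 1) // 2   # index of the diagonal
    y = n - w * (w + 1) // 2
    return w - y, y

def triple(x, y, z):
    # f : N x N x N -> N, by nesting the 2D pairing.
    return pair(pair(x, y), z)

def untriple(n):
    # f' : N -> N x N x N, the exact inverse of triple().
    xy, z = unpair(n)
    x, y = unpair(xy)
    return x, y, z

assert untriple(triple(4, 7, 9)) == (4, 7, 9)
```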
28,618,468
2015-02-19T22:42:00.000
20
0
0
0
python,amazon-web-services,amazon-s3,boto
58,636,713
10
false
0
0
I know it's a very old question. But as for now, we can just use s3_conn.get_object(Bucket=bucket, Key=key)['Body'].iter_lines()
1
35
1
I have a csv file in S3 and I'm trying to read the header line to get the size (these files are created by our users so they could be almost any size). Is there a way to do this using boto? I thought maybe I could use a python BufferedReader, but I can't figure out how to open a stream from an S3 key. Any suggestions would be great. Thanks!
Read a file line by line from S3 using boto?
1
0
1
83,468
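A short boto3 sketch of the iter_lines() approach from the answer above (bucket and key names are illustrative); the original question predates boto3, so treat this as the modern equivalent rather than the original boto API:

```python
import boto3

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='my-bucket', Key='data/users.csv')

# iter_lines() streams the body instead of downloading the whole object,
# so reading just the header line is cheap even for large files.
header = next(obj['Body'].iter_lines())
print(header.decode('utf-8'))
```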
28,618,591
2015-02-19T22:54:00.000
3
0
0
0
python,numpy,fft
28,618,872
3
false
0
0
The magnitude, r, at a given frequency represents the amount of that frequency in the original signal. The complex argument represents the phase angle, theta: x + i*y = r * exp(i*theta), where x and y are the numbers that the numpy FFT returns.
1
12
1
np.fft.fft() returns a complex array... what is the meaning of the complex numbers? I suppose the real part is the amplitude! Is the imaginary part the phase shift? The phase angle? Or something else? I figured out that the position in the array represents the frequency.
numpy.fft() what is the return value amplitude + phase shift OR angle?
0.197375
0
0
29,182
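A small numpy demonstration of the magnitude/phase decomposition the answer describes:

```python
import numpy as np

signal = np.sin(2 * np.pi * np.arange(64) / 8.0)  # sine with period 8 samples
spectrum = np.fft.fft(signal)

magnitude = np.abs(spectrum)   # r: how much of each frequency is present
phase = np.angle(spectrum)     # theta: phase angle in radians

# Peak lands in bin 8: 64 samples / period 8 = 8 cycles in the window.
print(np.argmax(magnitude[:32]))  # 8
```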
28,619,118
2015-02-19T23:36:00.000
0
0
1
0
python
28,619,217
1
false
0
0
This is the expected result, because I am doing a circular import without realizing it! Thanks to a comment from Iguananaut I realized this is an example/special case of the Circular imports in Python, which is addressed elsewhere.
1
0
0
I have a file structure that looks like this: pckg/ __ init __.py module1 module2 module3 module4 In the __init__.py I import all of the classes from the modules, so they are available on 'from pckg import (class)' However, I can't seem to use this method within the modules. For example, in module1 I have to import the classes I need from module2 and module 3, I can't directly import them from pckg. Is this the expected result when trying to import from within a package, or am I doing something wrong? Let me know if more info would be helpful.
Importing from __init__.py within a file
0
0
0
54
28,620,139
2015-02-20T01:19:00.000
4
0
1
0
python,json,django,rest
28,620,154
2
true
0
0
For this purpose, you should use GET. It's the only one that isn't expected to make changes to the underlying system.
1
1
0
I'm building a very simple REST service in Python. All it does is take in a JSON string, apply an algorithm on it and send back a JSON string response. I understand the difference between GET, POST, PUT and DELETE, but it does not seem like any of them would seem suitable for my scenario.
Which REST verb to use?
1.2
0
0
99
28,621,436
2015-02-20T04:04:00.000
1
0
1
0
python-2.7,pip
39,076,721
8
false
0
0
I faced the same issue and got to know that the error is because it is not able to find the pip.exe to execute. You need to check the path : C:\Python27\Scripts There, you will find the .exe file and if you run the command from that folder, the command should not give you the error or while running the command, please provide entire path instead of just pip command.
2
6
0
I've installed Python 2.7.9, which comes bundled with pip. I've checked that it's there in the modules list. But when I run pip install I get SyntaxError: invalid syntax with install highlighted as the error. What am I doing wrong?
pip not working on windows python 2.7.9
0.024995
0
0
34,753
28,621,436
2015-02-20T04:04:00.000
8
0
1
0
python-2.7,pip
28,621,524
8
false
0
0
Append C:\Python27\Scripts; to the PATH variable, where C:\Python27\Scripts is the path where the pip script is located.
2
6
0
I've installed Python 2.7.9, which comes bundled with pip. I've checked that it's there in the modules list. But when I run pip install I get SyntaxError: invalid syntax with install highlighted as the error. What am I doing wrong?
pip not working on windows python 2.7.9
1
0
0
34,753
28,625,131
2015-02-20T09:05:00.000
2
1
1
0
c++,python-2.7,math
28,625,338
2
false
0
0
When x is large enough (about 4.5E15 for an IEEE double, I think), 2^n-1 isn't representable.
1
1
0
I know that log2(x) accuracy fails when x is large enough and is of the form 2^n-1, for most languages except maybe R and Matlab. Any specific reasons? Edit 1: x is an integer around 10^15 and up
Can anyone explain the inaccuracy in log2 in C++/Python?
0.197375
0
0
80
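A quick demonstration of the representability point in the answer: above roughly 2^53, an integer of the form 2^n-1 no longer survives the conversion to an IEEE double, so log2 never even sees the -1:

```python
import math

x = 2**60 - 1
# 2**60-1 needs 60 significant bits; a double has 53, so it rounds up.
print(float(x) == 2.0**60)                    # True
# log2 therefore operates on exactly the same double as for 2**60:
print(math.log(x, 2) == math.log(2**60, 2))   # True
```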
28,625,474
2015-02-20T09:23:00.000
1
0
0
0
python,api,high-availability
28,629,932
1
true
1
0
Actually there is no silver bullet. You mention two different things: one is availability - it depends on how many nines you want in your 9,999... availability. The second thing is API change.

Availability: some technologies allow you to do hot changes/deployments, which means pending requests go down the old path and new requests go down the new path. If your technology doesn't support it, or you can't use it for other reasons, there are other options:

- In small-scale intranet applications you simply don't care: you stop the world (stop the application, upload the new version and start). On application stop, many web frameworks stop accepting new connections and wait until all pending requests are finished. If yours doesn't support this, you have two options: ignore it (the DB will roll back the current transaction, the user will get an error) or implement it yourself (which may be challenging). Either way, you do your best to shorten the inactivity period.
- Clustering: if you can't afford to stop everything, restart the services one by one, so that some server is available at all times. That's not always possible, because sometimes you have to change your database, and you can't always do that on a working system - or you can't afford to lose any data in case of an update failure.
- Microservices: if you split your application into many independent components connected with persistent queues, then you can turn off only some parts of your system (graceful degradation). For example, you can disable the component that writes changes to the database but still allow reads. If you have the infrastructure to do it quickly, the update may go unnoticed - requests will be put into queues and picked up by the new version.

API change: you version your API; each request says which version it requires. If you control all your clients / are small scale / decide not to support old versions, you don't care: everyone has to update their client. If not, then again microservices may help: you split your public API from your internal API, keep all your public API services running, and announce that some of them are deprecated. You monitor their usage, and when you decide that usage of some version is low enough, you announce end-of-life and later shut down that specific version.

That's the best I've got for the moment.
1
0
0
I'm planning to deliver an API as a web-service. When I update my API's (interpreted) code-base, however, I anticipate possibly needing to restart the API service or even just have a period where the code is being overwritten. This introduces the possibility that incoming API requests may be dropped or, even worse, that processes triggered by the API may be interrupted. The flask library for python appears to offer something of a solution; by enabling debug mode it will check the modified flag on all python files and, for each request, it will reload any modules that have changed. It's not the performance penalty that puts me off this approach - it's the idea that it looks slightly jilted. Surely there is an elegant, high-availability approach to what must be a common issue? Edit: As @piotrek answered below, "there is no silver bullet". One briefly visible comment suggested using DNS to switch to a new API server after an update.
How do I design an API such that code updates don't cause interruptions?
1.2
0
1
33
28,630,336
2015-02-20T13:39:00.000
1
0
1
1
python,batch-file,cmd,dos,affinity
28,848,968
2
true
0
0
This is more of an answer to a question that arose in comments, but I hope it might help. I have to add it as an answer only because it grew too large for the comment limits: There seems to be a misconception about two things here: what "processor affinity" actually means, and how the Windows scheduler actually works. What this SetProcessAffinityMask(...) means is "which processors can this process (i.e. "all threads within the process") can run on," whereas SetThreadAffinityMask(...) is distinctly thread-specific. The Windows scheduler (at the most base level) makes absolutely no distinction between threads and processes - a "process" is simply a container that contains one or more threads. IOW (and over-simplified) - there is no such thing as a process to the scheduler, "threads" are schedulable things: processes have nothing to do with this ("processes" are more life-cycle-management issues about open handles, resources, etc.) If you have a single-threaded process, it does not matter much what you set the "process" affinity mask to: that one thread will be scheduled by the scheduler (for whatever masked processors) according to 1) which processor it was last bound to - ideal case, less overhead, 2) whichever processor is next available for a given runnable thread of the same priority (more complicated than this, but the general idea), and 3) possibly transient issues about priority inversion, waitable objects, kernel APC events, etc. So to answer your question (much more long-windedly than expected): "But if I will use a multicore X like 15 or F or 0xF (meaning in my opinion all 4 cores) it will still run only on the first core" What I said earlier about the scheduler attempting to use the most-recently-used processor is important here: if you have a (or an essentially) single-threaded process, the scheduling algorithm goes for the most-optimistic approach: previously-bound CPU for the switchback (likely cheaper for CPU/main memory cache, prior branch-prediction eval, etc). This explains why you'll see an app (regardless of process-level affinity) with only one (again, caveats apply here) thread seemingly "stuck" to one CPU/core. So: What you are effectively doing with the "/affinity X" switch is 1) constraining the scheduler to only schedule your threads on a subset of CPU cores (i.e. not all), and 2) limit them to a subset of what the scheduler kernel considers "available for next runnable thread switch-to", and 3) if they are not multithreaded apps (and capable of taking advantage of that), "more cores" does not really help anything - you might just be bouncing that single thread of execution around to different cores (although the scheduler tries to minimize this, as described above). That is why your threads are "sticky" - you are telling the scheduler to make them so.
1
0
0
I have a batch that launches a few executables .exe and .py (python) to process some data. With start /affinity X mybatch.bat it will work as it should only if X equals 0, 2, 4 or 8 (the individual cores). But if I use a multicore X like 15 or F or 0xF (meaning, in my opinion, all 4 cores), it will still run only on the first core. Does it have to do with the fact that the batch is calling .exe files that maybe cannot be affinity-controlled this way? OS: Windows 7 64bit
DOS Batch multicore affinity not working
1.2
0
0
632
28,630,414
2015-02-20T13:43:00.000
5
1
1
0
python,string,unicode
28,630,446
1
true
0
0
In Python 2, if the conversion with str() was successful, then you can reverse the result. Using str() on a unicode value is the equivalent of using unicode_value.encode('ascii') and the reverse is to simply use str_value.decode('ascii'). Using unicode(str_value) will use the same implicit ASCII codec to decode. In Python 3, calling str() on a unicode value simply gives you the same object back, since in Python 3 str() is the Unicode type. Using bytes() on a Unicode value without an encoding fails, you always have to use explicit codecs in Python 3 to convert between str and bytes.
1
0
0
The proper way to convert a unicode string u to a (byte)string in Python is by calling u.encode(someencoding). Unfortunately, I didn't know that before and I had used str(u) for conversion. In particular, I called str(u) to coerce u to be a string so that I can make it a valid shelve key (which must be a str). Since I didn't encounter any UnicodeEncodeError, I wonder if this process is reversible/lossless. That is, can I do u = str(converted_unicode) (or u = bytes(converted_unicode) in Python 3) to get the original u?
Is converting Python unicode by casting to str reversible?
1.2
0
0
326
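A short demonstration of the accepted answer's point; this is Python 2 semantics (in Python 3, str() of a unicode value is simply an identity operation):

```python
# Python 2
u_ascii = u'hello'
s = str(u_ascii)                     # implicit ASCII encode,
                                     # same as u_ascii.encode('ascii')
print(s.decode('ascii') == u_ascii)  # True: reversible whenever str() succeeded

try:
    str(u'caf\xe9')                  # non-ASCII is exactly where str() raises
except UnicodeEncodeError as exc:
    print(exc)
```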
28,632,987
2015-02-20T15:51:00.000
1
0
0
0
java,python,excel,apache-poi,poi-hssf
28,634,898
3
false
0
0
Use JODConverter. You have an Excel 4.0 file; too old for Apache POI.
1
0
0
I need to process a lot of .xls files which come out of this Microscopy image analysis software called Aperio (after analysis with Aperio, it allows you to export the data as "read-only" xls format. The save-as only works in Excel on a Mac, on windows machine, the save and save as buttons are greyed out since the files are protected). Unfortunately, the header of these files are not standard OLE2 format. Therefore, they cannot be picked up with Java API POI unless they are manually loaded in Microsoft Excel and save as .xls one by one. Since there are so many of them in the directory, it would be pretty painful to do the save-as by hand. Is there a way to write a Java program to automatically save these files as standard xls files? If it is impossible for Java, what other language can handle this situation, Python? Edit: I loaded one of the files in hex reader and here it is: 09 04 06 00 07 00 10 00 00 00 5C 00 04 00 05 4D 44 41 80 00 08 00 00 00 00 00 00 00 00 00 92 00 19 00 06 00 00 00 00 00 F0 F0 F0 00 00 00 00 00 FF FF FF 00 00 00 00 00 FF FF FF 0C 00 02 00 01 00 0D 00 02 00 64 00 0E 00 02 00 01 00 0F 00 02 00 01 00 11 00 02 00 00 00 22 00 02 00 00 00 2A 00 02 00 00 00 2B 00 02 00 00 00 25 02 04 00 00 00 FF 00 1F 00 02 00 22 00 1E 04 0A 00 00 00 07 47 65 6E 65 72 61 6C 1E 04 04 00 00 00 01 30 1E 04 07 00 00 00 04 30 2E 30 30 1E 04 08 00 00 00 05 23 2C 23 23 30 1E 04 0B 00 00 00 08 23 2C 23 23 30 2E 30 30 1E 04 18 00 00 00 15 23 2C 23 23 30 5F F0 5F 2E 3B 5C 2D 23 2C 23 23 30 5F F0 5F 2E 1E 04 1D 00 00 00 1A 23 2C 23 23 30 5F F0 5F 2E 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 5F F0 5F 2E 1E 04 1E 00 00 00 1B 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 3B 5C 2D 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 1E 04 23 00 00 00 20 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 1E 04 18 00 00 00 15 23 2C 23 23 30 22 F0 2E 22 3B 5C 2D 23 2C 23 23 30 22 F0 2E 22 1E 04 1D 00 00 00 1A 23 2C 23 23 30 22 F0 2E 22 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 22 F0 2E 22 1E 04 1E 00 00 00 1B 23 2C 23 23 30 2E 30 30 22 F0 2E 22 3B 5C 2D 23 2C 23 23 30 2E 30 30 22 F0 2E 22 1E 04 23 00 00 00 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 2E 30 30 22 F0 2E 22 1E 04 05 00 00 00 02 30 25 1E 04 08 00 00 00 05 30 2E 30 30 25 1E 04 0B 00 00 00 08 30 2E 30 30 45 2B 30 30 1E 04 0A 00 00 00 07 23 22 20 22 3F 2F 3F 1E 04 09 00 00 00 06 23 22 20 22 3F 3F 1E 04 0D 00 00 00 0A 64 64 2F 6D 6D 2F 79 79 79 79 1E 04 0C 00 00 00 09 64 64 2F 6D 6D 6D 2F 79 79 1E 04 09 00 00 00 06 64 64 2F 6D 6D 6D 1E 04 09 00 00 00 06 6D 6D 6D 2F 79 79 1E 04 0E 00 00 00 0B 68 3A 6D 6D 5C 20 41 4D 2F 50 4D 1E 04 11 00 00 00 0E 68 3A 6D 6D 3A 73 73 5C 20 41 4D 2F 50 4D 1E 04 07 00 00 00 04 68 3A 6D 6D 1E 04 0A 00 00 00 07 68 3A 6D 6D 3A 73 73 1E 04 13 00 00 00 10 64 64 2F 6D 6D 2F 79 79 79 79 5C 20 68 3A 6D 6D 1E 04 0B 00 00 00 08 23 23 30 2E 30 45 2B 30 1E 04 08 00 00 00 05 6D 6D 3A 73 73 1E 04 04 00 00 00 01 40 1E 04 36 00 00 00 33 5F 2D 2A 20 23 2C 23 23 30 22 F0 2E 22 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 22 F0 2E 22 5F 2D 3B 5F 2D 2A 20 22 2D 22 22 F0 2E 22 5F 2D 3B 5F 2D 40 5F 2D 1E 04 36 00 00 00 33 5F 2D 2A 20 23 2C 23 23 30 5F F0 5F 2E 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 5F F0 5F 2E 5F 2D 3B 5F 2D 2A 20 22 2D 22 5F F0 5F 2E 5F 2D 3B 5F 2D 40 5F 2D 1E 04 3E 00 00 00 3B 5F 2D 2A 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 5F 2D 3B 5F 2D 2A 20 22 2D 22 3F 3F 22 F0 2E 22 5F 2D 3B 5F 2D 40 5F 2D 1E 04 3E 00 00 00 3B 5F 2D 2A 20 23 2C 
23 23 30 2E 30 30 5F F0 5F 2E 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 5F 2D 3B 5F 2D 2A 20 22 2D 22 3F 3F 5F F0 5F 2E 5F 2D 3B 5F 2D 40 5F 2D 31 00 14 00 A0 00 00 00 08 00 0D 4D 53 20 53 61 6E 73 20 53 65 72 69 66 31 00 14 00 A0 00 00 00 0E 00 0D 4D 53 20 53 61 6E 73 20 53 65 72 69 66 31 00
How to program to save a bunch of ".xls" files in Excel
0.066568
1
0
415
28,636,141
2015-02-20T18:50:00.000
0
0
0
0
django,python-3.x,heroku,heroku-postgres
28,636,553
3
false
1
0
It seems to me that you are using raw SQL queries instead of Django ORM calls and this causes portability issues when you switch database engines. I'd strongly suggest to use ORM if it's possible in your case. If not, then I'd say that you need to detect database engine on your own and construct queries depending on current engine. In this case you could try to use 0 instead of false, I guess this should work both on SQLite and Postgres.
2
0
0
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us... ^ HINT: You will need to rewrite or cast the expression. The last function call in the stack trace is: /app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute return self.cursor.execute(sql, params) ... ▶ Local vars I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful.
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
0
1
0
2,600
28,636,141
2015-02-20T18:50:00.000
-1
0
0
0
django,python-3.x,heroku,heroku-postgres
28,638,965
3
false
1
0
The problem is caused by a variable trying to change data types (i.e. from a char field to date-time) in the migration files. A database like PostgreSQL might not know how to change the variable type. So, make sure the variable has the same type in all migrations.
2
0
0
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us... ^ HINT: You will need to rewrite or cast the expression. The last function call in the stack trace is: /app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute return self.cursor.execute(sql, params) ... ▶ Local vars I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful.
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
-0.066568
1
0
2,600
28,640,234
2015-02-20T23:53:00.000
0
0
1
0
windows,ipython-notebook,pandoc
28,641,007
1
false
1
0
I finally solved my problem by adding the full paths to my files. (But I have used wkhtmltopdf, which is simpler to use for a good result.)
1
0
0
Sorry for my English in my post (it is my first on this forum, and my question is perhaps stupid). I encounter a problem in converting an html file to a pdf file with pandoc. Here is my code in the console: set Path=%Path%;C:\Users\nicolas\AppData\Local\Pandoc (redirecting to the Pandoc directory) followed by pandoc --data-dir=C:\Users\nicolas\Desktop essai.html -o essai.pdf As indicated, my file is on the Desktop, but I got the following error: pandoc: essai.html: openFile: does not exist (No such file or directory) I get the same error if I do (with the file essai.html in the same folder as pandoc.exe): pandoc essai.html -o essai.pdf Have you any idea of the cause of my problem? (I should point out that the name of the file I want to convert is correct.) Remark: My original problem was to create a pdf faithful to the beautiful html file generated by IPython Notebook via pandoc, but I encounter the same kind of problem when I want to convert a .ipynb file to pdf with nbconvert.
openFile with pandoc 1.13.2 - Windows 8.1
0
0
0
173
28,643,670
2015-02-21T08:17:00.000
1
0
1
0
python,python-3.x
28,651,330
1
true
0
0
Thanks Joe for pointing that out. Mechanize is not supported for Python 3.x. For my job, I set up a new Python 2.7 environment through conda and switched to it. It addressed the issue.
1
3
0
I am using the Anaconda 2.1.0 distribution of Python on Windows 8. python --version Python 3.4.1 :: Anaconda 2.1.0 (64-bit) I used pip to install the mechanize package. pip (v 6.0.8) installed mechanize 0.2.5 which is the most recent release. But, while trying to import the package, python throws an error: >>> import mechanize Traceback (most recent call last): File "", line 1, in File "C:\Anaconda3\lib\site-packages\mechanize\__init__.py", line 122, in from _mechanize import \ ImportError: No module named '_mechanize' Similar questions here received replies to check if the installation was done on the PYTHONPATH. I also checked sys.path and there seems to be no problem there. >>> import sys >>> sys.path ['', 'C:\\Anaconda3\\Scripts', 'C:\\Anaconda3\\lib\\site-packages\\cssselect-0.9.1-py3.4.egg', 'C:\\Anaconda3', 'C:\\Anaconda3\\python34.zip', 'C:\\Anaconda3\\DLLs', 'C:\\Anaconda3\\lib', 'C:\\Anaconda3\\lib\\site-packages', 'C:\\Anaconda3\\lib\\site-packages\\Sphinx-1.2.3-py3.4.egg', 'C:\\Anaconda3\\lib\\site-packages\\win32', 'C:\\Anaconda3\\lib\\site-packages\\win32\\lib', 'C:\\Anaconda3\\lib\\site-packages\\Pythonwin', 'C:\\Anaconda3\\lib\\site-packages\\runipy-0.1.1-py3.4.egg', 'C:\\Anaconda3\\lib\\site-packages\\setuptools-12.2-py3.4.egg', 'C:\\Anaconda3\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\UserName\\.ipython'] I am able to import other packages residing in the same directory, for e.g. numpy. Am I missing something?
Cannot import package - "ImportError: No module named _mechanize"
1.2
0
0
2,222
28,648,230
2015-02-21T16:41:00.000
3
1
0
1
python-2.7,ubuntu,mono,ironpython,fcntl
28,673,847
1
true
0
0
As far as I can see, the fcntl module of cPython is a builtin module (implemented in C) - those modules need to be explicitly implemented for most alternative Python interpreters like IronPython (in contrast to the modules implemented in plain Python), as they cannot natively load Python C extensions. Additionally, it seems that there currently is no such fcntl implementation in IronPython. There is a Fcntl.cs in IronRuby, however, maybe this could be used as a base for implementing one in IronPython.
1
1
0
I have latest IronPython version built and running in Ubuntu 14.04 through Mono. Building Ironpython and running with Mono seems trivial but I am not convinced I have proper sys.paths or permissions for Ironpython to import modules, especially modules like fcntl. Running ensurepip runs subprocess, and wants to import "fcntl". There are numerous posts already out there, but mostly regarding windows. As I understand, fcntl is part of unix python2.7 standard library. To start the main problem seems to be that Ironpython has no idea where this is, but I also suspect that since fcntl seems to be perl or at least not pure python, that there is more to the story. So my related sys.path questions are: In Ubuntu, where should I install Ironpython (Ironlanguages folder) to? Are there any permissions I need to set? What paths should I add to the sys.path to get Ironpython's standard library found?' What paths should I add to the sys.path to get Ubuntu's python 2.7 installed modules? What paths should I add to the sys.path or methods to get fcntl to import properly in Ironpython Any clues on how to workaround known issues installing pip through ensurepip using mono ipy.exe X:Frames ensurepip Thanks!
Ubuntu and Ironpython: What paths to add to sys.path AND how to import fcntl module?
1.2
0
0
456
28,653,502
2015-02-22T01:35:00.000
0
0
1
1
python,exe,cx-freeze
28,671,674
1
true
0
0
It seems that using Cython makes it impossible to get the script back.
1
0
0
Is it possible to get the .py source file back from an .exe file generated with cx_Freeze? If yes, how can I prevent that when I generate the exe? I don't want anybody to see my Python code. Of course anybody will have access to the bytecode, but it is much harder to disassemble.
Extract python script from exe generated with cx_Freeze
1.2
0
0
916
28,653,507
2015-02-22T01:36:00.000
0
0
0
0
python,youtube,oauth-2.0,youtube-api,google-oauth
28,657,784
1
true
0
0
You will need an access token for each account. There are various ways you can get them, for example request offline access for each account and store the two refresh tokens. There are other ways too. You'll need to detail the behaviour you want to achieve.
1
0
0
I have a Python script that uploads videos to my YouTube channel. Now I have more than one YouTube channel under my account and want to upload to the second channel once I'm done with the first. But OAuth 2 at no point asks me which account I want to upload to, and keeps uploading to the first channel that I authorised. How should I fix this ? Thanks :)
How to Google OAuth2 to upload videos to 2 different YouTube account with a Python script
1.2
0
1
170
28,654,325
2015-02-22T03:53:00.000
9
0
0
0
python,numpy,pandas,scikit-learn,data-scrubbing
28,665,731
3
true
0
0
Scikit-learn doesn't handle missing values currently. For most machine learning algorithms, it is unclear how to handle missing values, and so we rely on the user to handle them prior to giving them to the algorithm. Numpy doesn't have a "missing" value. Pandas uses NaN, but inside numeric algorithms that might lead to confusion. It is possible to use masked arrays, but we don't do that in scikit-learn (yet).
1
33
1
What is python's equivalent of R's NA? To be more specific: R has NaN, NA, NULL, Inf and -Inf. NA is generally used when there is missing data. What is python's equivalent? How libraries such as numpy and pandas handle missing values? How does scikit-learn handle missing values? Is it different for python 2.7 and python 3?
What is python's equivalent of R's NA?
1.2
0
0
54,508
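A small pandas/numpy sketch of the conventions described above: NaN marks missing data, and you drop or impute it before handing the array to scikit-learn:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])  # np.nan plays the role of R's NA
print(s.isnull())     # flags the missing entry
print(s.dropna())     # remove missing values...
print(s.fillna(0.0))  # ...or impute them before scikit-learn sees the data
```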
28,654,775
2015-02-22T05:09:00.000
0
1
0
0
java,python,junit,jenkins,continuous-integration
31,576,925
1
false
1
0
Jenkins allows you to run any external command so you can just call your Python script afterwards
1
0
0
Hello everyone. I am a grader for a programming language class, and I am trying to use Jenkins continuous integration to run some JUnit tests on the code students push to GitHub. I was able to get all the committed jobs into Jenkins, but can I run a Python file in order to push a testing class into their code and then build their projects and get an XML report about their program tests?
How can i add junit test for a java program using python for continuous integration in jenkins
0
0
0
516
28,655,576
2015-02-22T07:31:00.000
1
0
0
0
python,amazon-s3,boto,librsync
28,658,712
1
false
1
0
There is no way to append to or modify an existing object in S3. You can overwrite it completely with new content and you can have versioning enabled on the bucket so the previous versions of the object are still accessible but modifying an existing object is just not supported by the S3 service or API.
1
0
0
I have an app built with boto that syncs files locally using librsync (wrapped in a Python module). I was wondering if it is possible to write to S3 keys so that I could use librsync remotely; for example, I would sync a local file with a file in S3 by taking signatures and deltas and patching the result. In the boto documentation it says that open_write is not implemented yet. But I do know that folks like Dropbox use S3 and librsync too, so there must be a way... Thanks.
Is it possible to write on s3 key using boto?
0.197375
0
0
210
28,657,010
2015-02-22T10:55:00.000
2
0
1
0
python,encoding,utf-8,python-2.x
28,721,881
5
false
0
0
Real-world example #1: it doesn't work in unit tests. The test runner (nose, py.test, ...) initializes sys first, and only then discovers and imports your modules. By that time it's too late to change the default encoding. By the same token, it doesn't work if someone runs your code as a module, as their initialisation comes first. And yes, mixing str and unicode and relying on implicit conversion only pushes the problem further down the line.
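A minimal Python 2 sketch of the kind of bug this hides (assuming sys.setdefaultencoding has been re-exposed via reload(sys), which is exactly the trick being discouraged):

import sys
reload(sys)                          # re-exposes the deleted setdefaultencoding
sys.setdefaultencoding('utf-8')

b = 'caf\xc3\xa9'                    # UTF-8 encoded byte string
u = u'caf\xe9'                       # unicode string
print(b == u)                        # True here; False (plus a UnicodeWarning) under the default ASCII codec
print(hash(b) == hash(u))            # False: "equal" keys that hash differently silently corrupt dict/set lookups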
1
34
0
There is a trend of discouraging setting sys.setdefaultencoding('utf-8') in Python 2. Can anybody list real examples of problems with that? Arguments like it is harmful or it hides bugs don't sound very convincing. UPDATE: Please note that this question is only about utf-8, it is not about changing default encoding "in general case". Please give some examples with code if you can.
Dangers of sys.setdefaultencoding('utf-8')
0.07983
0
0
19,406
28,661,207
2015-02-22T17:56:00.000
0
0
1
0
python,firefox,selenium,mozilla
32,930,931
2
true
0
0
You can think of Selenium as launching Firefox behind the scenes. You won't see it, but it's there, opening up the webpage and manipulating things accordingly. How do you think it does all that cool stuff without you writing explicit URL headers etc.? So you need to have Firefox installed, with a physical display (monitor) attached. You can fake a physical terminal -- it's just input/output -- but as far as I know you still need Firefox itself installed. Sad news, but that's the way it is.
1
1
0
I have a server on which I want to build a script to log in to a page that uses JavaScript, and I want to use Python Selenium to achieve this. We have a shared drive which contains all the installed binaries, and those have to be included. So when running a Python program I won't be using my #!/usr/bin/python but instead efs/path../python; similarly, all the packages are included this way, e.g. sys.path.append("/efs/path.../selenium-egg-info"). This works fine, but as Selenium needs Firefox included, I can see mozilla in the path -- but where are its binaries, and exactly which folder inside mozilla do I need to include?
Selenium include mozilla instance
1.2
0
1
100
28,663,658
2015-02-22T21:44:00.000
0
0
0
0
python,excel,python-3.x,xlwings,vba
28,684,225
2
false
0
0
Thanks for your help. I've got this to work now and I'm super excited about the future possibilities for Python, xlwings and Excel. My problem was simple once I got the looping through the range sorted (which, incidentally, was handily imported as one element per row rather than per cell). I had declared my list outside of the function, so it was not reset each time the true condition was met. It was getting very frustrating watching my cells fill with the same values time after time. Simple once you know how :)
1
0
0
I have a large dataset that I do not have direct access to and am trying to convert the data headers into column headings using Python and then return them back to Excel. I have created the function to do this and it works, but I have hit a snag. What I want the Excel VBA to do is loop down the range and, if the cell's value matches the criteria, call the Python function and return the resulting list items in the columns moving across from the original cell. For example: A1 holds the string to format, and the function returns B1, C1, D1, and so on. I can only get this to work if I hard code B1, C1, D1, etc. Is there a way to do this via the get_address() range method? I think I can then use the offset() method but am not sure.
xlwings output to iterative cell range
0
1
0
5,808
28,663,856
2015-02-22T22:05:00.000
7
0
1
0
python,numpy,multidimensional-array,count
37,332,201
31
false
0
0
y.tolist().count(val), with val 0 or 1. Since a Python list has a native count method, converting to a list before using that function is a simple solution.
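For larger arrays, numpy's own vectorized counting is usually preferable to the list round-trip; a small sketch using the array from the question:

import numpy as np

y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
print(np.count_nonzero(y == 1))          # 4
print((y == 0).sum())                    # 8 -- the boolean mask sums as 0s and 1s
vals, counts = np.unique(y, return_counts=True)
print(dict(zip(vals, counts)))           # {0: 8, 1: 4}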
2
558
1
In Python, I have an ndarray y that is printed as array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]). I'm trying to count how many 0s and how many 1s there are in this array. But when I type y.count(0) or y.count(1), it says numpy.ndarray object has no attribute 'count'. What should I do?
How to count the occurrence of certain item in an ndarray?
1
0
0
883,665
28,663,856
2015-02-22T22:05:00.000
0
0
1
0
python,numpy,multidimensional-array,count
59,595,030
31
false
0
0
Here is something with which you can count the number of occurrences of a particular number. For your example: count_of_zero = list(y[y == 0]).count(0); print(count_of_zero). The comparison y == 0 produces an array of booleans; indexing with it keeps only the matching elements (the zeros), and counting those gives the number of occurrences of 0.
2
558
1
In Python, I have an ndarray y that is printed as array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]). I'm trying to count how many 0s and how many 1s there are in this array. But when I type y.count(0) or y.count(1), it says numpy.ndarray object has no attribute 'count'. What should I do?
How to count the occurrence of certain item in an ndarray?
0
0
0
883,665
28,664,458
2015-02-22T23:05:00.000
0
0
1
0
python,string,parsing,python-3.x,split
28,664,533
1
false
0
0
Use str.find() (Python's equivalent of indexOf) to locate any of the operators you want to check for. Then take every character to the left of that index (using a slice) and assign it to one value, take the character at the index and store it as the operator, and take everything that is left and store it as your second Roman numeral.
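A short sketch of that idea in Python, using a regular expression instead of manual index arithmetic (the variable names left/op/right are taken from the question):

import re

expr = raw_input('Enter expression: ')   # e.g. 'XV + L'
m = re.match(r'\s*([IVXLCDM]+)\s*([+\-*/])\s*([IVXLCDM]+)\s*$', expr)
if m:
    left, op, right = m.groups()         # 'XV', '+', 'L' -- spaces are ignored
else:
    print('Invalid expression')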
1
0
0
I'm trying to create a Roman numeral calculator and am unsure how to parse a user input into three parts -- the first Roman numeral, the operator, and the second numeral -- while ignoring any spaces the user might include. For example, for "XV + L", the "XV" would be held by left, the "+" would be held under op, and right would hold "L".
How to parse an input and assign different values to the split string
0
0
0
64
28,665,515
2015-02-23T01:05:00.000
1
0
1
0
python
28,665,549
3
false
0
0
Every dict has a value for every key -- there is no such thing as "no value" for one or more keys. However, you can set the value to a placeholder, e.g. None, and then simply reassign it once you know exactly what you want the value for each given key to be.
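A minimal sketch of the placeholder approach (names.txt and the value layout are hypothetical):

accounts = {}
with open('names.txt') as f:
    for line in f:
        accounts[line.strip()] = None    # key exists, value is a placeholder

# later, when reading the second file, reassign the real value
accounts['Alice'] = {'bank': '12345', 'money': 100, 'id': 7}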
1
0
0
Is there a way to import a text file and assign the keys without values? I'm doing this so I can import values from another text file and assign them as the values for the keys. For example, I would get a name from one text file and assign it as a key, then get that person's bank account number, amount of money, and ID number from another text file and assign those as the values for the key. I know how to import text files with both key and value from a single text file, but I don't know how to do it separately and assign them to one key-value pair. Thanks.
In Python I'm trying to import a text file into a dictionary but only assign the key, no value
0.066568
0
0
74
28,668,641
2015-02-23T07:18:00.000
1
0
0
1
python,hadoop,mapreduce,hadoop-streaming
28,762,585
1
true
0
0
This question seems very generic to me. Chains of many MapReduce jobs are the most common pattern for production-ready solutions. But as programmers we should always try to use as few MR jobs as possible to get the best performance (you have to be smart in selecting your key-value pairs for the jobs in order to do this) -- though of course it depends on the use case. Some people use different combinations of Hadoop Streaming, Pig, Hive, Java MR etc. jobs to solve one business problem. With the help of a workflow management tool like Oozie, or bash scripts, you can set the dependencies between the jobs; and for exporting/importing data between an RDBMS and HDFS, you can use Sqoop. This is the very basic answer to your query; if you want further explanation of any point, let me know.
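To make the key-value selection concrete, here is a minimal streaming mapper sketch; the tab-separated column layout is an assumption, and a single reducer would then just sum the counts per prefixed key:

#!/usr/bin/env python
import sys

for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')        # assumed columns: category, geo, rectype, ...
    category, geo, rectype = fields[0], fields[1], fields[2]
    # one composite key per statistic, so a single job gathers several counts
    print('category:%s\t1' % category)
    print('geo:%s\t1' % geo)
    print('type:%s\t1' % rectype)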
1
2
0
I have a huge txt data store on which I want to gather some stats. Using Hadoop Streaming and Python I know how to implement MapReduce for gathering stats on a single column, e.g. counting how many records there are for each of 100 categories. I create a simple mapper.py and reducer.py and plug them into the hadoop-streaming command as -mapper and -reducer respectively. Now I am at a bit of a loss as to how to practically approach a more complex task: gathering various stats on various other columns in addition to the categories above (e.g. geographies, types, dates, etc.). All that data is in the same txt files. Do I chain the mapper/reducer tasks together? Do I pass key-value pairs that are initially long (with all data included) and "strip" them of interesting values one by one while processing? Or is this the wrong path? I need practical advice on how people "glue" together various MapReduce tasks for a single data source from within Python.
Python & MapReduce: beyond basics -- how to do more tasks on one database
1.2
0
0
107
28,671,531
2015-02-23T10:24:00.000
0
0
1
0
python,python-2.7,numpy,import,python-import
28,673,761
2
false
0
0
ndimage is a submodule under scipy. This submodule is not imported by scipy's __init__.py, so it will not get imported with import scipy. You can see what is actually imported with import scipy by reading scipy's __init__.py; on my system scipy.__file__ shows it at /usr/local/lib/python2.7/dist-packages/scipy/__init__.pyc. Scipy imports fairly few of its own names, but a lot from numpy -- there is a from numpy import * in there, so you get access to many numpy names, e.g. scipy.array and scipy.mean.
1
0
0
There is something that I don't understand about importing modules in Python. My understanding was that if we use, from the IPython command prompt, e.g. In [1]: from module import *, then it would import all the submodules and function definitions associated with the specific <module>. For example, I need to import the ndimage package from scipy, but from scipy import * does not import everything associated with scipy. The only way it seems to work is to use from scipy import ndimage or import scipy.ndimage. In general, is there a way to know the list of default functions/modules/definitions that are imported (or not imported) by the import * command? PS: I am using the Anaconda distribution of Python (2.7) on Windows 7.
importing module and associated definitions/functions/objects into python namespace using import *
0
0
0
38
28,673,515
2015-02-23T12:10:00.000
2
0
1
0
python,step
28,678,622
1
false
0
0
This module was added to pythonocc after the 0.16 release, which is why it's not included. You'll have to rebuild the project.
1
1
0
I am reading a STEP file (a format which supports the exact color of a component) using Python, and it works, but the object shows only one color, and I have no idea how to solve this. Another situation: I downloaded and installed pythonOCC-0.16.0-win32-py34.exe, and after installation found that some modules are missing (for example OCC.STEPCAFControl and OCC.TDocStd). How do I get these modules? Please help.
Missing some Python OCC module
0.379949
0
0
334
28,675,722
2015-02-23T14:12:00.000
0
0
0
0
python,django,email,templates
28,676,062
2
true
1
0
I believe there are many answers on here already regarding this; but to summarize what I've found: It is "safe" to do so, but take care what variables/objects you expose to the user (i.e. include in the context of the template to be rendered). render_to_string('template_name.txt', {'user': Users}) would be really bad :)
2
3
0
I'm creating a small SaaS app in Django. It gathers data from web servers belonging to different organizations. Once in a while it automatically needs to send out notification mails to their customers (domain owners). I would like to let our users (the web hosters) change the email templates to their likings/needs before sending them out. The email templates are plain Django templates, including a number of available variables. So I created a model for the email templates, which can be edited by the users through a form. They have access to a limited number of template variables per email template. Are there any security issues/risks that I need to be aware of? Or is this approach recommended? My approach is currently aimed at server-side rendering of the emails. I also checked out some solutions for client-side rendering, like Dust.js, but I'm not yet convinced that it will help me.
Is it (un)safe to let users edit email Django templates in your app for emails?
1.2
0
0
183
28,675,722
2015-02-23T14:12:00.000
-1
0
0
0
python,django,email,templates
28,676,356
2
false
1
0
It all depends on the context in which the template will be evaluated, just make sure that no variable is passed that should be considered private. Also, should a security bug be discovered in Django templating system, your web application would be at risk. You would have to validate the input, but you can't really do that, because the input does not have any particular structure. So try and sandbox the process from the rest of the application, if you can. Or simply ask yourself if this feature is really necessary and if you can't just let the user specify what to include in the message by using a checklist or anything similar. At that point, validating the input becomes trivial and you don't have to expose the full template to the user.
2
3
0
I'm creating a small SaaS app in Django. It gathers data from web servers belonging to different organizations. Once in a while it automatically needs to send out notification mails to their customers (domain owners). I would like to let our users (the web hosters) change the email templates to their likings/needs before sending them out. The email templates are plain Django templates, including a number of available variables. So I created a model for the email templates, which can be edited by the users through a form. They have access to a limited number of template variables per email template. Are there any security issues/risks that I need to be aware of? Or is this approach recommended? My approach is currently aimed at server-side rendering of the emails. I also checked out some solutions for client-side rendering, like Dust.js, but I'm not yet convinced that it will help me.
Is it (un)safe to let users edit email Django templates in your app for emails?
-0.099668
0
0
183
28,679,023
2015-02-23T16:56:00.000
0
0
1
0
python
28,679,611
3
true
0
0
The print('hello') call is necessary if you are using Python 3; print 'hello' will still be fine in previous versions. There might be more changes to what you already learned, so keep in mind that the differences between 2.7 and 3 are not just cosmetic.
1
0
0
I just joined a course called Introduction to Python on Coursera. They showed that print "Hello" or print 'hello' works in their online tool called codeSculptor, but on my PC it shows an error; print("hello") works fine on my PC. Why is that?
Difference in Python printing statement in coursera and book
1.2
0
0
54
28,679,250
2015-02-23T17:06:00.000
0
0
0
0
python,ajax,json,node.js
28,687,176
2
false
1
0
If your application needs to process the results of the Python server's requests in your Node.js application, then you need to call the Python server from the Node.js app with the request library and then process the result. Otherwise, you should simply call the Python server's resources through client-side AJAX requests. Thanks.
1
1
0
I need to get data (JSON) into my HTML page with the help of AJAX. I have a Node.js server serving requests, and I have to get the JSON from the server, where Python code processes the data and produces the JSON as output. So: should I save the JSON in a DB and access it there? (seems complicated just for one single use) Should I run a Python server to serve the requests with JSON as the result (calling it directly from HTML via AJAX)? Should I serve requests with Node.js alone, by calling a Python method from Node.js -- and if so, how do I call the Python method? If calling Python requires running a server, which one is preferred (zeropc, or some kind of web framework)? Which is the best solution, or which is preferred over the other in what scenario and based on what factors?
Nodejs Server, get JSON data from Python in html client with Ajax
0
0
1
932
28,685,931
2015-02-24T00:01:00.000
8
0
0
0
django,sqlite,python-3.x,django-1.9
35,020,640
10
false
1
0
The new Django 1.9 has removed "syncdb"; run "python manage.py migrate" instead. If you are trying to create a superuser, run "python manage.py createsuperuser".
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
1
1
0
73,141
28,685,931
2015-02-24T00:01:00.000
0
0
0
0
django,sqlite,python-3.x,django-1.9
34,814,438
10
false
1
0
You can run the command from the project folder as "python.exe manage.py migrate", from a command line or in a batch file. You could also downgrade Django to an older version (before 1.9) if you really need syncdb. For people trying to run syncdb from Visual Studio 2015: the syncdb option was removed from Django 1.9 (deprecated since 1.7), but this is currently not reflected in the VS2015 context menu. Also, in case you weren't asked to create a superuser, you should run this command manually to create one: python.exe manage.py createsuperuser
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
0
1
0
73,141
28,685,931
2015-02-24T00:01:00.000
0
0
0
0
django,sqlite,python-3.x,django-1.9
36,004,441
10
false
1
0
Run the command python manage.py makemigrations, and then python manage.py migrate to sync.
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
0
1
0
73,141
28,685,931
2015-02-24T00:01:00.000
1
0
0
0
django,sqlite,python-3.x,django-1.9
42,688,208
10
false
1
0
Django has now removed the python manage.py syncdb command. You can simply use python manage.py makemigrations followed by python manage.py migrate; the database will sync automatically.
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
0.019997
1
0
73,141
28,685,931
2015-02-24T00:01:00.000
2
0
0
0
django,sqlite,python-3.x,django-1.9
42,795,652
10
false
1
0
From Django 1.9 onwards the syncdb command is removed. So instead of using it, use the migrate command, e.g. python manage.py migrate. Then you can run your server with the python manage.py runserver command.
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
0.039979
1
0
73,141
28,685,931
2015-02-24T00:01:00.000
0
0
0
0
django,sqlite,python-3.x,django-1.9
43,525,717
10
false
1
0
Alternate way: uninstall the Django module from the environment, edit requirements.txt to say Django<1.9, run Install from the Requirements option in the environment, then try syncdb again. This worked for me.
6
32
0
I want to create the tables of a database called "database1.sqlite", so I run the command python manage.py syncdb, but when I execute it I receive the following error: Unknown command: 'syncdb'. Type 'manage.py help' for usage. But when I run manage.py help I don't see any command that looks like a substitute for python manage.py syncdb. Version of Python I use: 3.4.2. Version of Django I use: 1.9. I would be very grateful if somebody could help me solve this issue. Regards and thanks in advance.
"Unknown command syncdb" running "python manage.py syncdb"
0
1
0
73,141
28,686,574
2015-02-24T01:05:00.000
0
0
1
0
python,input,stack,simulation
28,686,839
2
false
0
0
You could write your own stack class (with push, pop and init methods), but you don't have to: a standard list already has the methods pop and append (append works like push). So you can simply write Stack1 = [] and Stack2 = [], then parse the input and call the appropriate methods.
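A minimal sketch of that list-based approach for the exact input format in the question (Python 2, since raw_input is mentioned):

n = int(raw_input())
stacks = {}                                   # stack number -> list
results = []
for _ in range(n):
    parts = raw_input().split()
    if parts[0] == 'PUSH':
        stacks.setdefault(int(parts[1]), []).append(int(parts[2]))
    else:                                     # 'POP'
        results.append(stacks[int(parts[1])].pop())
print('\n'.join(map(str, results)))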
1
0
0
I'm trying to use inputs (such as raw_input()) to create stacks. Input: the first line of the input contains the total number of stack operations N, 0 < N ≤ 100000. Each of the next N lines contains a description of a stack operation, either in the form PUSH A B (meaning push B onto stack A) or in the form POP A (meaning pop an element from stack A), where A is the number of the stack (1 ≤ A ≤ 1000) and B is an integer (0 ≤ B ≤ 10^9). You may assume that every operation is correct (i.e., before each POP operation, the respective stack is not empty). Output: for each POP operation described in the input, output the value which that POP operation gets from the top of the stack it is applied to. Numbers should appear according to the order of the POP operations in the input, each on a separate line. Here are some sample inputs and outputs: Sample Input: 7 PUSH 1 100 PUSH 1 200 PUSH 2 300 PUSH 2 400 POP 2 POP 1 POP 2 Sample Output: 400 200 300 Thanks
python use inputs build stacks
0
0
0
585
28,689,687
2015-02-24T06:45:00.000
0
0
0
0
python,machine-learning,statistics,probability,dirichlet
28,703,110
1
false
0
0
Some ideas. (1) To calculate the normalizing factor exactly, maybe you can rewrite the gamma function via gamma(a_i + 1) = a_i * gamma(a_i) (a_i need not be an integer; let the base case be a_i < 1). Then you'll have sum(a_i, i, 1, n) terms in the numerator and denominator, and you can reorder them so that you divide the largest term by the largest term and multiply those individual ratios together, instead of computing an enormous numerator and an enormous denominator and dividing those. (2) If you don't need to be exact, maybe you can apply Stirling's approximation. (3) Maybe you don't need the pdf at all -- for some purposes you just need a function which is proportional to the pdf. I believe Markov chain Monte Carlo is like that. So, what is the larger goal you are trying to achieve here?
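One more common workaround, offered as a sketch rather than part of the answer above: do everything in log space with scipy.special.gammaln, so gamma(sum(alpha)) never overflows, and combine mixture components with log-sum-exp:

import numpy as np
from scipy.special import gammaln

def log_dirichlet_pdf(x, alpha):
    # log of the normalizing constant: lgamma(sum a_i) - sum lgamma(a_i)
    log_norm = gammaln(np.sum(alpha)) - np.sum(gammaln(alpha))
    return log_norm + np.sum((alpha - 1.0) * np.log(x))

def log_mixture_pdf(x, alphas, weights):
    logs = np.log(weights) + np.array([log_dirichlet_pdf(x, a) for a in alphas])
    m = logs.max()                      # log-sum-exp keeps the mixture sum stable
    return m + np.log(np.exp(logs - m).sum())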
1
0
1
I need to calculate PDFs of a mixture of Dirichlet distributions in Python. But each mixture component has a normalizing constant -- the inverse beta function, whose numerator contains the gamma function of the sum of the hyper-parameters -- so even for a sum of hyper-parameters of size 60 it overflows. Please suggest a workaround for this problem. What happens if I ignore the normalizing constant? First, it's not the calculation of the NC itself that is the problem: for a single Dirichlet I have no problem. But what I have here is a mixture of products of Dirichlets, so each mixture component is a product of many Dirichlets, each with its own NC, and the product of these overflows. Regarding my objective: I have a joint distribution p(s,T,O), where s is discrete and T and O are the Dirichlet variables, i.e. sets of parameter vectors which each sum to 1. Since s is discrete and finite, I have |S| mixtures of products of Dirichlet components, one for each s. My objective is to find p(s|T,O). So I directly substitute a particular (T,O) and calculate the value of each p(s|T,O); for this I need to calculate the NCs. If there were only one mixture component I could ignore the normalizing constant, calculate, and renormalise at the end, but since I have several mixture components each component would be scaled differently, so I can't renormalise. This is my conundrum.
Normalizing constant of mixture of dirichlet distribution goes unbounded
0
0
0
492
28,692,809
2015-02-24T10:02:00.000
0
0
0
1
python-2.7,openstack-neutron
28,739,765
2
false
0
0
I was able to solve this; it was my mistake. I had exported hpext:dns in the keystone_admin and .bashrc files. This value is very specific to HP Cloud -- it applies if you are using HP Cloud and logging into its geos.
2
1
0
I have installed the designate client on the same box where the designate server is running, with OpenStack Juno. I set up the environment by issuing . .venv/bin/activate and the keystone variables by sourcing keystonerc_admin. When I try to run designate --debug server-list I get this error: EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found. Please help me out.
EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found
0
0
0
961
28,692,809
2015-02-24T10:02:00.000
0
0
0
1
python-2.7,openstack-neutron
28,767,745
2
false
0
0
Yes, that value is from before designate was an incubated project, but was running in HP Cloud. The standard 'dns' service should be used for anyone not using the HP Public Cloud service (it is the default in python-designateclient, so you shouldn't have to do anything)
2
1
0
I have installed the designate client on the same box where the designate server is running, with OpenStack Juno. I set up the environment by issuing . .venv/bin/activate and the keystone variables by sourcing keystonerc_admin. When I try to run designate --debug server-list I get this error: EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found. Please help me out.
EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found
0
0
0
961
28,699,580
2015-02-24T15:20:00.000
0
0
0
0
python,module,scheduler
29,127,707
1
true
1
0
For now, I changed the function call for the tidy-up feature to use the background scheduler implementation from Python's APScheduler module. This does not impact the function serving HTTP requests and has solved my problem for now.
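For reference, a minimal sketch of what that looks like (the schedule and the tidy_up body are placeholders):

from apscheduler.schedulers.background import BackgroundScheduler

def tidy_up():
    pass  # remove stale folders here

scheduler = BackgroundScheduler()
scheduler.add_job(tidy_up, 'cron', hour=3)   # e.g. run daily at 03:00
scheduler.start()
# the BaseHTTPServer serve_forever() loop can now run in the main thread
# without being blocked by the scheduler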
1
0
0
I am new to the programming world and trying out something with Python. My requirement is an HTTP web server (built using BaseHTTPServer) that runs forever, takes an input binary file through an HTML form based on user selection, and returns a set of HTML files back to the web client. As part of this, when the user selects a specific input file, a set of folders is created on the server with HTML files written inside them, so I thought of adding a tidy-up feature for these folders: every day the tidy-up would clean up the folders automatically based on a configuration. I built both of these modules in my script (the HTTP web service and the server tidy-up); the tidy-up part uses Python's sched module. Both of these functions work independently: when I comment out the tidy-up function, I can access the server URL in the browser, the index.html page shows up correctly, and everything further works (it accepts the binary, parsing happens, and output HTML is returned); when I comment out the HTTP server function, the tidy-up works based on the configuration set. But when I have both functions in place, the tidy-up works and is invoked correctly at the scheduled time, yet the index.html page is not loaded when I request the server in the browser. I researched the sched module enough to understand that it just schedules multiple events on the system by setting time delays and priorities. I am not able to get both functions to work together. Questions: Is this a correct approach, using sched to achieve the tidy-up? If yes, what could be the reason that the HTTP service functionality is blocked and only the tidy-up is working? Any advice would be helpful. Thanks
does the function in a python program written using sched, impact/block the the other functionality in the program?
1.2
0
0
39
28,702,423
2015-02-24T17:31:00.000
2
0
0
0
python,ajax,google-app-engine,memcached,google-cloud-datastore
28,702,936
2
false
1
0
I would recommend the blur event of the username field, combined with some sort of inline error/warning display. I would also suggest maintaining a memcache of registered usernames to reduce DB hits and improve the user experience -- although probably not populating it with a warm-up, but instead only when requests are made. This is sometimes called a "Repository" pattern. BUT you should only populate the cache with USED usernames -- you should not store the "available" usernames here (or if you do, use a much lower timeout). You should always check directly against the DB/Datastore when actually performing the registration, ideally in some sort of transactional method so that you don't have race conditions when multiple people register. BUT all of this work depends on several things, including how busy your app is and what data storage technology you are using!
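A rough sketch of the cache-only-taken-usernames idea on App Engine (Account is a hypothetical NDB model; adjust to your datastore):

from google.appengine.api import memcache

def username_taken(username):
    cached = memcache.get('user:' + username)
    if cached:
        return True                       # cached "taken" entries are authoritative
    taken = Account.query(Account.username == username).get() is not None
    if taken:
        memcache.set('user:' + username, True)
    return taken                          # "available" is never cached, per the note above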
1
1
0
I want to add 'check username availability' functionality to my signup page using AJAX, and I have a few doubts about how I should implement it. With which event should I trigger my AJAX requests? We can send the requests when the user focuses out of the 'username' input field (the blur event) or as he types (the keyup event) -- which provides a better user experience? On the server side, a simple way of dealing with requests would be to query my main 'Accounts' database, but this could lead to a lot of requests hitting the database (even more if we POST on the keyup event). Should I maintain a separate model for registered usernames only and use that to get better results? Is it possible to use Memcache in this case -- initializing the cache with every username as a key, updating it as we register users, and using a random key to check whether the cache is actually initialized, otherwise passing the queries directly to the DB?
Checking username availability - Handling of AJAX requests (Google App Engine)
0.197375
0
0
170
28,702,522
2015-02-24T17:35:00.000
2
0
0
0
python,macos,wxpython
28,724,680
1
false
0
1
Which version of wxPython are you using? Disabling those kinds of widgets seems to be working fine for me with current builds. For some reason Apple thought that it was a good idea to never give the keyboard focus to some types of controls, because apparently nobody would ever want to use them with anything but a mouse or trackpad. So if the widget never gets the focus, then it can never lose it and so there won't be any EVT_KILL_FOCUS for it either. You can change this in the Keyboard panel in System Preferences by setting "Full Keyboard Access" to "All Controls"
1
0
0
I have a Python (2.7)/wxPython program developed on Windows, which I am trying to migrate to Mac, but I am encountering some problems. The part I am having trouble with consists of two panels: Panel A consists of a tree control containing key=value pairs, with user editing disabled. Panel B consists of a set of controls of various types (filePicker, textCtrl, valueCtrl, choice, checkbox, comboBox, and spinEdit), all of which are initially disabled. When the user selects a tree node, the program checks the key and decides which control on panel B should be used to edit the tree node's value. Panel A then sends the relevant info to panel B using pubsub, which initializes and enables the relevant control. Each control on panel B has an EVT_KILL_FOCUS handler, so that when the user moves away from the control, the control's value is sent back to panel A using pubsub, the tree node's value is updated, and the editing control on panel B is disabled. This works fine on Windows. On Mac I have the following problems: The filePicker and spinCtrl cannot be disabled -- this could lead to incorrect information being sent back to the tree control if either of these controls inappropriately receives focus. The spinctrl, choice, checkbox, and comboctrl appear not to be triggering EVT_KILL_FOCUS events, so no information is sent back to the tree control. I fixed this for the choice control by binding EVT_CHOICE; using non-focus events for the other controls doesn't work as well and creates undesired behaviors. So my questions are: 1) Is it possible to disable the filePicker and spinCtrl on OS X? 2) Is there a way to use the kill-focus events of the spinctrl, choice, checkbox, and comboctrl controls on Mac? 3) If kill-focus events cannot be used, is there an alternate event that would be triggered after editing is complete for each of these controls? Thanks, Rob
wxpython control problems when migrating from win to mac
0.379949
0
0
68
28,704,465
2015-02-24T19:24:00.000
1
0
0
0
python-2.7,selenium-webdriver,browsermob
29,100,263
1
false
1
0
When you configure the WebDriver in your test code, set the proxy address not as localhost:8080 but as 127.0.0.1:8080. I think that Firefox has some problems resolving the proxy localhost:8080 that it does not have with the explicit form 127.0.0.1:8080.
1
1
0
I'm trying to use BrowserMob to proxy pages with Selenium WebDriver. When the initial page request is made, many elements of the page fail to load (e.g., css, jquery includes). If I manually refresh the page everything loads as expected. Has anyone else seen this behavior? Is there a solution? Thanks!
BrowserMob only partially loading page on initial load; fine afterwards
0.197375
0
1
90
28,705,661
2015-02-24T20:30:00.000
1
0
0
0
python,cherrypy
28,705,996
3
false
0
0
Sounds like you want to store a reference to the object stored in Memcache and then pull it back when you need it, rather than relying on the state to handle the loading / saving.
2
2
0
I have a CherryPy webapp that I originally wrote using file-based sessions. From time to time I store potentially large objects in the session, such as the results of running a report -- I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a download, because of the potential of getting different data. While using file-based sessions, this worked fine. Now I am looking at bringing a second server online, and as such I need to be able to share session data between the servers, for which the memcached session storage type appears to be the most appropriate. I briefly looked at using a PostgreSQL storage type, but that option is VERY poorly documented and, from what I could find, may well be broken. So I implemented the memcached option. Now, however, I am running into a problem where, when I try to save certain objects to the session, I get an "AssertionError: Session data for id xxx not set". I'm assuming that this is due to the object size exceeding some arbitrary limit set in the CherryPy session backend or memcached, but I don't really know, since the exception doesn't tell me WHY it wasn't set. I have increased the object size limit in memcached to the maximum of 128MB to see if that helped, but it didn't -- and that's probably not a safe option anyway. So what's my solution here? Is there some way I can use the memcached session storage to store arbitrarily large objects? Do I need to "roll my own" DB-based solution (or the like) for these objects? Is the problem potentially NOT size-based? Or is there another option I am missing?
CherryPy Sessions and large objects?
0.066568
1
0
1,009
28,705,661
2015-02-24T20:30:00.000
1
0
0
0
python,cherrypy
28,717,896
3
false
0
0
From what you have explained, I conclude that conceptually it isn't a good idea to mix user sessions and a cache. What sessions are mostly designed for is holding the state of a user's identity; thus they have security measures, locking to avoid concurrent changes, and other aspects, and session storage is usually volatile. So if you mean to use sessions as a cache, you should understand how sessions really work and what the consequences are. What I suggest you do is establish normal caching of the domain model that produces the report data, and keep sessions for identity. CherryPy details: the default CherryPy session implementation locks the session data, so in the OLAP case your user likely won't be able to perform concurrent requests (open another tab, for instance) until the report is completed. There is, however, an option of manual locking management. The PostgreSQL session storage is broken and may be removed in upcoming releases. The memcached session storage doesn't implement distributed locking, so make sure you use a consistent rule to balance your users across your servers.
2
2
0
I have a CherryPy webapp that I originally wrote using file-based sessions. From time to time I store potentially large objects in the session, such as the results of running a report -- I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a download, because of the potential of getting different data. While using file-based sessions, this worked fine. Now I am looking at bringing a second server online, and as such I need to be able to share session data between the servers, for which the memcached session storage type appears to be the most appropriate. I briefly looked at using a PostgreSQL storage type, but that option is VERY poorly documented and, from what I could find, may well be broken. So I implemented the memcached option. Now, however, I am running into a problem where, when I try to save certain objects to the session, I get an "AssertionError: Session data for id xxx not set". I'm assuming that this is due to the object size exceeding some arbitrary limit set in the CherryPy session backend or memcached, but I don't really know, since the exception doesn't tell me WHY it wasn't set. I have increased the object size limit in memcached to the maximum of 128MB to see if that helped, but it didn't -- and that's probably not a safe option anyway. So what's my solution here? Is there some way I can use the memcached session storage to store arbitrarily large objects? Do I need to "roll my own" DB-based solution (or the like) for these objects? Is the problem potentially NOT size-based? Or is there another option I am missing?
CherryPy Sessions and large objects?
0.066568
1
0
1,009
28,707,240
2015-02-24T22:03:00.000
2
0
1
0
python,html,forms,python-3.x
28,707,296
1
true
1
0
You should use the "name" attribute. For example using radio buttons, each button will have the same name but different Id. When submitted only the one with a value (the selected one) will be submitted.
1
1
0
I am trying to submit a form via Python and I need to know: should I use the "id" value or the "name" value? They are different.
For submiting HTML form data should I use the variable, "id", or "name"
1.2
0
0
39
28,708,890
2015-02-25T00:15:00.000
0
0
0
0
python,python-2.7,bokeh
28,711,218
1
false
1
0
These files are to store data and plots persistently on a bokeh-server, for instance if you want to publish a plot so that it will always be available. If you are just using the server locally and always want a "clean slate" you can run with --backend=memory to use the in-memory data store for Bokeh objects.
1
0
0
On Bokeh 0.7.1 I've noticed that when I run the bokeh-server, files appear in the directory that look like bokeh.data, bokeh.server, and bokeh.sets if I use the default backend, or redis.db if I'm using Redis. I'd like to run my server from a clean start each time, because I've found that if the files persist, over time my performance can be severely impacted. While looking through the API, I found the option to turn load_from_config from True to False. However, tinkering with this didn't resolve the situation (it seems to only control log-in information, on 0.7.1?). Is there a good way to eliminate the need to manually remove these files each time? And what is the advantage of having these files in the first place?
Bokeh Server Files: load_from_config = False
0
0
0
132
28,709,535
2015-02-25T01:22:00.000
0
0
0
1
python,amazon-web-services,tornado,amazon-elb
28,732,773
1
false
0
0
It is possible. For example our setup is ELB->nginx->tornado. nginx is used for app specific proxy, cache and header magic, but can be thrown out of this chain or replaced with something else.
1
0
0
I haven't been able to find any solid information online. I'm curious to know if it's possible (and how) to use the Elastic Load Balancing (ELB) service with Tornado. If it isn't, what's the best alternative for using AWS as a scalable option with Tornado?
Elastic Load Balancing with Tornado
0
0
0
808
28,710,649
2015-02-25T03:33:00.000
1
0
0
0
apache,security,ipython
28,710,800
1
false
1
0
The Apache proxy seems a viable solution (if that meets your needs for login security). You could probably use iptables to forward that server's port (probably bound to localhost?) to port 80 on Apache. This way nobody will be able to access the notebook directly.
1
0
0
I'm not an expert web app developer. I have an IPython Notebook server running on some port. The in-built security is not great -- one global password can be set, no support for multiple users or for integrating with (e.g.) active directory or OpenID. I believe I can use an Apache port redirect to control access. e.g. put a firewall up over the port so external users can't go straight to the notebook, but rather they have to go via port 80 as served by Apache. Is there some way to write a login page which provides multi-user authentication, then only pass authorised users through to the notebook server? I apologise in advance if I have used the wrong terminology or glossed over important details.
Can I use a port-redirect in Apache as a security layer?
0.197375
0
0
74
28,710,913
2015-02-25T04:04:00.000
1
0
1
0
python-3.x
28,710,978
2
false
0
0
The binary floating-point format Python uses cannot represent 53/24 exactly - but the decimal format most humans learn can't represent 53/24 exactly either. 2.2083333333333333 would still be wrong. It's just that when Python's slightly wrong binary result is translated into decimal, the result isn't the same slightly wrong result you'd get if you did the math in decimal.
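A small demonstration of that point (the decimal module prints the exact value of the stored binary float, and fractions does the math exactly):

from decimal import Decimal
from fractions import Fraction

print(53 / 24)           # 2.2083333333333335 -- the nearest representable binary float
print(Decimal(53 / 24))  # the exact decimal expansion of that binary float
print(Fraction(53, 24))  # 53/24 -- exact rational arithmetic, no rounding at all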
1
1
0
In Python, 53/24 gives 2.2083333333333335. Why is that?
Is it a rounding error in python?
0.099668
0
0
36
28,714,197
2015-02-25T08:17:00.000
1
0
0
1
python,bottle
28,722,748
1
false
1
0
In general, a best practice is to do the work in the app, and do (only) presentation in the template. This keeps your so-called business logic as separate as possible from your rendering. Even if it wasn't a bad idea, I don't even know how you could walk through a directory of files from within a template. The subset of Python that's available to you in a template is pretty constrained. Hope that helps!
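A minimal sketch of that split in Bottle (folder layout and template name are assumptions): the route does the filesystem walk, and the template only renders what it is handed.

import os
from bottle import route, template

@route('/gallery/<name>')
def gallery(name):
    folder = os.path.join('media', name)              # hypothetical layout
    images = sorted(f for f in os.listdir(folder)
                    if f.lower().endswith(('.png', '.jpg')))
    return template('gallery', images=images)         # template just loops over images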
1
1
0
I'm beginning to work on a Python 3.4 app to serve a little website (mostly media galleries) with the Bottle framework, using Bottle's simple template engine. I have a YAML file pointing to a folder which contains images and other YAML files (with metadata for videos). The app or the template should grab all the files and treat them according to their type. I'm now at the point where I have to decide whether I should iterate through the folder within the app (in the function behind the @app.route decorator) or in the template. Is there a difference in performance/caching between these two approaches? Where should I place my iteration loops for the best performance and the most "pythonic" way?
Python bottle: iterate through folder in app's route or in template?
0.197375
0
0
105
28,716,002
2015-02-25T09:59:00.000
0
0
1
0
python,unit-testing,python-2.7,nose,nosetests
28,716,083
1
false
0
0
setUp and tearDown are executed for each test. So no, 'closing the cmd' wouldn't do it, because that only happens at the end of the entire test suite. The idea is that you have a consistent starting point for every single test, and these two methods are responsible for setting that up at the beginning, and putting things back the way they were at the end.
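A tiny unittest sketch showing the per-test reset (nose runs these the same way):

import unittest

class ListTest(unittest.TestCase):
    def setUp(self):
        self.items = [1, 2, 3]        # fresh fixture before EACH test

    def tearDown(self):
        self.items = None             # release resources after EACH test

    def test_append(self):
        self.items.append(4)
        self.assertEqual(len(self.items), 4)

    def test_pristine(self):
        # setUp ran again, so the mutation from test_append is invisible here
        self.assertEqual(self.items, [1, 2, 3])

if __name__ == '__main__':
    unittest.main()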
1
0
0
I understand that setUp is necessary because you may have to initialize some variables before running the test (please correct me if I'm wrong), but what is tearDown for? Is it to delete all the variables created in setUp? If so, wouldn't closing the cmd already clear up the data? This is confusing for me. I'm using nose tests with Python 2.7 in particular.
Why is tearDown for in testing?
0
0
0
40
28,719,031
2015-02-25T12:27:00.000
0
0
0
0
python,django
28,719,139
3
false
1
0
This problem has several solutions: put your choices in the settings file and read those values in your apps; define a CONSTANT in a particular app or model and access the choices through its owner's name; or model the choice items with a one-to-many relation, so you can then access the choices through the model. You can also mix these approaches :)
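A sketch of the shared-module variant, using a hypothetical core/choices.py:

# core/choices.py
STATUS_CHOICES = (
    ('draft', 'Draft'),
    ('published', 'Published'),
)

# any_app/models.py
from django.db import models
from core.choices import STATUS_CHOICES

class Article(models.Model):
    status = models.CharField(max_length=20, choices=STATUS_CHOICES)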
2
5
0
In my models I'm using the choices option in some of my fields. But I'm using the same choices in multiple apps in my Django project. Where should I place my choices and how can I load these choices in all my apps?
Share choices across Django apps
0
0
0
1,335
28,719,031
2015-02-25T12:27:00.000
4
0
0
0
python,django
28,719,284
3
false
1
0
We usually have quite a few project-specific apps per project here, and to try and keep dependencies reasonably clean we usually have two more apps: "core" in which we put things shared by other apps (any app can depend on "core", "core" doesn't depend on any app), and "main" in which tie things together ("main" can depend on any app, no app is allowed to depend on "main"). In your case, these shared choices would belong to core.models.
2
5
0
In my models I'm using the choices option in some of my fields. But I'm using the same choices in multiple apps in my Django project. Where should I place my choices and how can I load these choices in all my apps?
Share choices across Django apps
0.26052
0
0
1,335
28,722,011
2015-02-25T14:46:00.000
1
0
1
0
python,emacs,configuration-files,pylint
28,729,056
1
false
0
0
Unfortunately no. You can have a pylintrc file for a project, but not for a single package only.
1
1
0
I know you can disable specific pylint warnings globally, by editing ~/.pylintrc, or locally, by adding # pylint: disable=(...) to a particular code block. I'm writing a library, and I would like to disable certain warnings for all files in the project. Ideally I'd put this list of disabled warnings in some pylintrc file that I place at the project's root directory. I want this to apply only when I'm pylinting source files from that directory and below. Is there a way to do this? I use pylint purely through its emacs interface, epylint.
Can I disable specific pylint warnings for a single project?
0.197375
0
0
353
28,722,273
2015-02-25T14:58:00.000
0
0
0
0
python-2.7,kivy
63,775,920
3
false
0
1
If you are using Kivy and your app loads some images, there is no problem. But if you are using some other package like Pygame to load images, it will crash; likewise if you are reading or writing files such as a text file. This is because when you package your app with Buildozer, it moves your app's additional files (.txt, .png) to a different location, so your Python file fails to find them at the path you specified. To solve this, adjust the paths in your Python file: change the path of each additional file from the current directory of the Python file to the path below. PATH = "/data/data/#package domain#.#package name#/files/app" In the above path, replace #package domain# with the domain of your package and #package name# with your package name. All the additional files of any app installed on an Android device go to this location. Try this out. All the best.
1
1
0
I have a problem with an app which I wrote with Kivy and packaged with Buildozer: it always crashes when I try to run it on my phone. On my PC I use Ubuntu 14.10 and I don't get any error when compiling it (buildozer android debug). Then I send it to my smartphone, install and run it, but it just loads and after a few seconds it crashes. By the way, the Kivy program is not very big. Could someone help me, please? And sorry for my bad grammar ;)
Android App created with Kivy (Buildozer) crashes on phone, but why?
0
0
0
2,562
28,723,232
2015-02-25T15:42:00.000
1
0
0
1
python,qt,pyqt,cups
30,196,277
1
true
0
1
I got around the CUPS sandboxing by having the backend send the information to a listening server on localhost that then processed the job as I needed it. I made sure that the server listening would only accept connections from localhost. I never was able to get pyinstaller or cx_freeze to work with PyQt, but this workaround was a better alternative.
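The loopback-only trick is just a matter of which address the server binds; a minimal sketch (the port number is arbitrary):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 9100))   # binding to loopback: only local processes can connect
srv.listen(1)
conn, addr = srv.accept()       # addr[0] will always be 127.0.0.1
job_data = conn.recv(65536)     # the CUPS backend writes the job here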
1
0
0
I have been trying to get a program I wrote in PyQt to work being called from a CUPS backend on OS X. The problem is that CUPS sandboxing keeps the program from being able to access the PyQt python modules which I have brewed in /usr/local/Cellar. Is there any way to grab those files, as well as the Qt files in the Cellar, and put them all in one contained folder? It is simple for other modules, but PyQt depends on a lot itself. I tried using pyinstaller and cx_freeze, but with no luck. How can I round up all my applications dependencies into one location?
Rounding up Dependencies for PyQt
1.2
0
0
191
28,724,427
2015-02-25T16:32:00.000
0
0
0
0
python,sql-server,pymssql
38,181,077
4
false
0
0
If you want to connect to SQL Server over a secured connection using pymssql, you need to use the "secure" form of the host. For example: unsecured connection host: xxx.database.windows.net:1433; secured connection host: xxx.database.secure.windows.net:1443.
1
4
0
I'm making queries to an MS SQL server using Python code (the pymssql library); however, I was wondering if there is any way to make the connection secure and encrypt the data being sent from the server to Python? Thanks
Can Pymssql have a secure connection (SSL) to MS SQL Server?
0
1
0
6,365
28,724,782
2015-02-25T16:49:00.000
0
1
1
1
python,vim,python-mode
35,023,398
2
false
0
0
If you are not happy with folding, you can disable python-mode's folding by putting let g:pymode_folding = 0 in your ~/.vimrc. What I usually do is keep folding enabled and use the space bar to open a fold. I also set set foldclose=all to automatically re-close folds when the cursor leaves them.
2
0
0
I installed python-mode for vim on my Mac OSX system. I decided to try one of the python motion commands. I hit [C which I thought would go to the next class. But the screen also switched, to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it. Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back in to the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers. How do I get out of this "mode" or whatever I am stuck in?
Installed python-mode; now I only see class names in my file
0
0
0
132
28,724,782
2015-02-25T16:49:00.000
1
1
1
1
python,vim,python-mode
28,724,868
2
false
0
0
It sounds like you've discovered the "folding" feature of Vim. Press zo to open one fold under the cursor. zO opens all folds under the cursor. zv opens just enough folds to see the cursor line. zR opens all folds. See :help folding for details.
2
0
0
I installed python-mode for vim on my Mac OSX system. I decided to try one of the python motion commands. I hit [C which I thought would go to the next class. But the screen also switched, to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it. Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back in to the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers. How do I get out of this "mode" or whatever I am stuck in?
Installed python-mode; now I only see class names in my file
0.099668
0
0
132
28,726,712
2015-02-25T18:21:00.000
1
1
1
0
python,parse-platform,raspberry-pi,six,parsepy
28,727,971
1
false
0
0
You need to install the six module. There is probably an installable package available with apt-get install python-six; you can also install it using pip or easy_install (e.g., pip install six).
1
0
0
I am trying to connect my Raspberry Pi to parse.com with ParsePy, which uses the REST API from parse.com. I am writing some Python code to get it to work and I hit an error in one of the classes supplied by ParsePy -- in particular the datatypes.py module. When the code reaches import six, it fails with NameError: name 'six' is not defined. What can I do so that it finds the right module?
Raspberry Pi error with ParsePy and six.py
0.197375
0
0
311
28,726,763
2015-02-25T18:24:00.000
1
0
0
0
python,django,logging,heroku
28,755,134
1
true
1
0
The best tool I found is newrelic.com. It hooks nicely into Django apps and Heroku, and can even show you the bottlenecks caused by queries and functions inside your views.
1
0
0
I have a backend server running on Heroku. Right now, to go through logs, all I have been using is the heroku logs command. I have also been using that command to track how long requests to each endpoint are taking. Is there a better way to see how long requests to different endpoints take, and a good way to track the bottlenecks that slow these endpoints down? Also, are there any good add-ons for Heroku that can point out bad responses whose status is not 200? I am using Python with Django, if that is relevant.
Better logs and tracking bottlenecks in heroku
1.2
0
0
40
28,732,095
2015-02-25T23:58:00.000
0
0
0
0
python,api,authentication,flask,flask-login
28,750,364
1
true
1
0
The way I ended up going was combining both approaches. user_logged_in fires whenever a user logs in, so I used that signal to generate an API token and store it in the user object at login. Then, when the user wants to make an API call, the token is simply retrieved from the user object. I'm not sure if this is best practice, but it seems to be working fine.
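A rough sketch of wiring those pieces together (the secret key, expiry, and the api_token attribute are assumptions about the app):

from flask import Flask
from flask_login import user_logged_in
from itsdangerous import TimedJSONWebSignatureSerializer as Serializer

app = Flask(__name__)
app.config['SECRET_KEY'] = 'change-me'                 # hypothetical
serializer = Serializer(app.config['SECRET_KEY'], expires_in=3600)

@user_logged_in.connect_via(app)
def issue_api_token(sender, user, **extra):
    # store the signed token on the user at login; API calls read it back later
    user.api_token = serializer.dumps({'id': user.get_id()})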
1
0
0
I have two web applications. One is a website. The other is an API. Both are built using flask. They use different methods of authentication. The website uses the flask-login library. Specifically, is uses login_user if user.check_password supplied by a form is true. The api uses a cryptographically signed token. The api is used by mobile applications (ios for example). These applications make a call to /api/login and POST the same username and password that you would expect on the website. The api then returns a token which the app stores and uses for authentication in the future. The token is generated using the itsdangerous library. Specifically, it is created using TimedJSONWebSignatureSerializer. I am experiencing a confusing problem, now, where one of our website pages needs to access our api. Of course the api won't allow access, because the user doesn't have a properly generated auth token. I have control over every part of the code, but I'm not sure what the most elegant solution is in this case. Should I stop using one of the authentication mechanisms? Should I somehow store the api auth token for the website user? Any advice would be appreciated. UPDATE As I think about this problem, it occurs to me that I could change the token generation process employed by login_user. If login_user used the same token as the api, then presumably I could get the token from the session whenever the user needed to make an api request via the website. Not yet clear if this is insane.
Let website user access api flask
1.2
0
0
143
28,732,751
2015-02-26T01:06:00.000
1
0
0
0
python,amazon-s3,boto
28,745,850
1
true
1
0
When boto uploads a file to S3 it calculates the MD5 checksum locally, sends that checksum to S3 as the Content-MD5 header and then checks the value of the ETag header returned by the S3 service against the previously computed MD5 checksum. If the ETag header does not match the MD5 it raises an S3DataError exception. This exception is a subclass of ClientError and client errors are not retried by boto. It is also possible for the S3 service to return a BadDigest error if the Content-MD5 header we provide does not match the MD5 checksum computed by the service. This is a 400 response from S3 and is also considered a client error and would not be retried.
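A small boto 2 sketch of that round trip; for a simple (non-multipart) upload the ETag is the hex MD5, so the final assert mirrors boto's own check (bucket and key names are hypothetical):

import hashlib
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')
key = Key(bucket, 'some/key')
data = 'hello world'
key.set_contents_from_string(data)   # boto sends Content-MD5 and verifies the ETag
assert key.etag.strip('"') == hashlib.md5(data).hexdigest()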
1
1
0
The boto config has a num_retries parameter for uploads. num_retries The number of times to retry failed requests to an AWS server. If boto receives an error from AWS, it will attempt to recover and retry the request. The default number of retries is 5 but you can change the default with this option. My understanding is that this parameter governs how many times to retry on commands like set_content_from_string. According to the documentation, the same command will fail if the md5 checksum does not match upon upload. My question is, will boto also retry upon checksum failure, or does num_retry apply to a separate class of failures?
Does Boto retry on failed md5 checks?
1.2
0
1
329
28,733,056
2015-02-26T01:42:00.000
2
0
0
0
python,pygame,pyglet,cocos2d-python
28,733,171
2
false
0
1
Pygame and Pyglet are definitely on the radar. Pygame: a great, popular game engine, but with no support for a variety of file types, and it's no longer maintained. Pyglet: very powerful, supports many file types, has thorough documentation and OpenGL support, and is intuitive. Cocos2D: I really feel that Cocos2D is just an unnecessary medium on top of the underlying Pyglet engine.
1
2
0
First off, please do not mark this topic as a duplicate -- all the relevant threads are years old and I would like updated information. What are the pros and cons of the following libraries? I am aware of: PyGame, which seems to be the most popular, but the website is full of broken links and there has been no news in a while. Cocos2D Python, which seems good as Cocos2D is all the rage right now, but has almost no support, and its Stack Overflow pages only seem to get a post every couple of weeks. Pyglet -- the only one I never tried, as Cocos2D seems to use Pyglet; it seems to get more Stack Overflow activity than Cocos2D Python but less than PyGame. Other? Suggest a better library! I am looking for information on ease of use (preferably pythonic) and up-to-date status -- but perhaps most importantly, how active the project is.
Python Game Libraries
0.197375
0
0
1,029
28,738,062
2015-02-26T08:57:00.000
1
1
0
0
python,webkit,gtk,raspberry-pi,startup
28,744,914
1
false
0
0
If you run a desktop environment (DE), use the DE's session manager. If you want to run only your application in fullscreen mode, use ~/.xinitrc to launch it.
1
0
0
I want to run a Python script on a Raspberry Pi when it is turned on. How can I do this? My script uses the "Webkit" and "Gtk" modules. I've tried many methods, but it's still not working; the code works perfectly when run from the Python IDLE.
how to run a python script automatically after startx on raspberrypi
0.197375
0
0
711