Q_Id (int64, 337 to 49.3M) | CreationDate (string, lengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, lengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, lengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, lengths 15 to 29k) | Title (string, lengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,552,865 | 2016-04-11T15:23:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,inspect | 38,218,698 | 1 | true | 0 | 0 | It's a warning message, printed via log.warn in bdist_egg.py. It probably should include the word 'warning', and I'm not sure why it doesn't.
The warning is raised by scan_module, which is used to determine whether the package can be zip compressed or not. You'd probably have to check the mailing list to see why stack introspection prevents compressed eggs, but at a guess I'd say zip compression might mess with the line number information handed to inspect. Referencing __file__ or __path__ will also get the package flagged as not zip safe. | 1 | 0 | 0 | When I build/install my package, the line [package.module]: module MAY be using inspect.stack prints to the log. It doesn't preface this with "warning" or "error", but it seems like a strange thing to print in the midst of the other information (e.g., "creating...egg", "Extracting...to...site-packages").
Is there some reason that I shouldn't be using inspect.stack() within my package? If there is no reason, then why does this one function (out of the hundreds used) result in this strange notification? | Why does log print "module MAY be using inspect.stack" | 1.2 | 0 | 0 | 178 |
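To make the setuptools behaviour above concrete: if a package genuinely needs inspect.stack() (or __file__/__path__) at runtime, the usual way to sidestep the concern is to declare the distribution as not zip-safe. A minimal sketch follows; the package name and version are placeholders, not taken from the question.

```python
# Hypothetical setup.py for a package that uses inspect.stack() at runtime.
# zip_safe=False tells setuptools to install the package unzipped, so the
# "module MAY be using inspect.stack" heuristic no longer matters in practice.
from setuptools import setup, find_packages

setup(
    name="mypackage",            # placeholder
    version="0.1.0",
    packages=find_packages(),
    zip_safe=False,              # never build/install as a zipped egg
)
```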
36,561,040 | 2016-04-11T23:28:00.000 | 1 | 0 | 1 | 0 | python,boto,boto3 | 36,579,850 | 1 | true | 0 | 0 | I think you already read this :
"To create a snapshot for EBS volumes that serve as root devices, you
should stop the instance before taking the snapshot."
This is a typical sysops trade-off. Only if you are 100% sure that no cached data is still pending a write to the EBS volume can you safely create an EBS snapshot without stopping the instance.
HOWEVER,
if that EBS volume is used to launch your instances (as mentioned above), part of the OS swap space might sit inside it. That by itself is not very serious, since you can recreate swap, but you will NEVER want to deal with a partially written OS update (which might be running in the background).
If your EBS volume is used as a database store, then you should stop ALL services that tap into it. In fact, stopping the instance is the only 100% guarantee that no forgotten service (one that you launched and are not sure what it is doing) is still writing to the volume.
A corrupted EBS snapshot defeats the purpose of taking snapshots in the first place. | 1 | 0 | 0 | Is it possible to create a snapshot for EBS without a reboot? I'm planning to write a script to take snapshots on a regular basis for running instances (using the python boto module). I went through the boto documentation, but didn't find anything. Can someone please help on this? | create a snapshot for EBS without a reboot using python boto module | 1.2 | 0 | 0 | 119
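As a companion to the answer above: the API call itself does not require a reboot; the caveats are purely about data consistency. A minimal boto3 sketch (the instance ID, region, and scheduling around it are placeholder assumptions, not part of the original question):

```python
# Snapshot every EBS volume attached to one instance with boto3.
# Credentials come from the usual boto3 configuration (env vars,
# ~/.aws/credentials, or an IAM role); IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_instance_volumes(instance_id):
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description="scheduled snapshot of " + instance_id,
        )
        print("started", snap["SnapshotId"], "for", vol["VolumeId"])

snapshot_instance_volumes("i-0123456789abcdef0")
```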
36,561,231 | 2016-04-11T23:47:00.000 | 0 | 0 | 0 | 1 | python,tensorflow,tensorflow-serving | 37,579,943 | 1 | false | 0 | 0 | You are close, you need to update the environment as they do in this script
.../serving/bazel-bin/tensorflow_serving/example/mnist_export
I printed out the environment update, did it manually
export PYTHONPATH=...
then I was able to import tensorflow_serving | 1 | 4 | 1 | While I admire, and am somewhat baffled by, the documentation's commitment to mediating everything related to TensorFlow Serving through Bazel, my understanding of it is tenuous at best. I'd like to minimize my interaction with it.
I'm implementing my own TF Serving server by adapting code from the Inception + TF Serving tutorial. I find the BUILD files intimidating enough as it is, and rather than slogging through a lengthy debugging process, I decided to simply edit BUILD to refer to the .cc file, in lieu of also building the python stuff which (as I understand it?) isn't strictly necessary.
However, my functional installation of TF Serving can't be imported into python. With normal TensorFlow you build a .whl file and install it that way; is there something similar you can do with TF Serving? That way I could keep the construction and exporting of models in the realm of the friendly python interactive shell rather than editing it, crossing all available fingers, building in bazel, and then /bazel-bin/path/running/whatever.
Simply adding the directory to my PYTHONPATH has so far been unsuccessful.
Thanks! | de-Bazel-ing TensorFlow Serving | 0 | 0 | 0 | 443 |
36,562,764 | 2016-04-12T02:49:00.000 | 1 | 0 | 0 | 0 | database,python-2.7,amazon-dynamodb,insert-update | 36,564,082 | 2 | true | 1 | 0 | At this point, you cannot do this, we have to pass a key (Partition key or Partition key and sort key) to update the item.
Currently, the only way to do this is to scan the table with filters to get all the values which have 0 in "updated" column and get respective Keys.
Pass those keys and update the value.
Hopefully, in future AWS will come up with something better. | 2 | 2 | 0 | I have a dynamo table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set all the 'updated' field to '0' without having to providing a key to avoid fetch and search in the table.
I tried batch write, but seems like update_item required Key inputs. How could I update the entire column to have every value as 0 efficiently please?
I am using a python script.
Thanks a lot. | DynamoDB update entire column efficiently | 1.2 | 1 | 0 | 643 |
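A rough boto3 sketch of the scan-then-update approach from the accepted answer above. The table name comes from the question, but the partition key name ("id") and the exact filter are assumptions; adapt them to the real schema.

```python
# Scan for items whose "updated" attribute is not yet 0, then update each one.
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("Table")

start_key = None
while True:
    kwargs = {
        "FilterExpression": Attr("updated").ne(0),
        "ProjectionExpression": "id",        # 'id' assumed to be the partition key
    }
    if start_key:
        kwargs["ExclusiveStartKey"] = start_key
    resp = table.scan(**kwargs)
    for item in resp.get("Items", []):
        table.update_item(
            Key={"id": item["id"]},
            UpdateExpression="SET #u = :zero",
            ExpressionAttributeNames={"#u": "updated"},
            ExpressionAttributeValues={":zero": 0},
        )
    start_key = resp.get("LastEvaluatedKey")
    if not start_key:
        break
```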
36,562,764 | 2016-04-12T02:49:00.000 | 0 | 0 | 0 | 0 | database,python-2.7,amazon-dynamodb,insert-update | 36,564,129 | 2 | false | 1 | 0 | If you can get partition key, for each partition key you can update the item | 2 | 2 | 0 | I have a dynamo table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set all the 'updated' field to '0' without having to providing a key to avoid fetch and search in the table.
I tried batch write, but seems like update_item required Key inputs. How could I update the entire column to have every value as 0 efficiently please?
I am using a python script.
Thanks a lot. | DynamoDB update entire column efficiently | 0 | 1 | 0 | 643 |
36,562,873 | 2016-04-12T03:02:00.000 | 8 | 0 | 1 | 0 | python,python-2.7,package,requirements.txt | 62,383,040 | 3 | false | 0 | 0 | Inside Pycharm
Go to code > inspect code
Select Whole project option and click OK.
In inspection results panel locate Package requirements section under Python (note that this section will be showed only if there is any requirements.txt or setup.py file). The section will contain one of the following messages:
Package requirement '' is not satisfied if there is any package that is listed in requirements.txt but not used in any .py file.
Package '' is not listed in project requirements if there is any package that is used in .py files, but not listed in requirements.txt.
You have all your required packages remove/add them accordingly. | 1 | 29 | 0 | I use pip freeze > requirements.txt to gather all packages I installed. But after developing many days, some packages are now unused.
How can I find these unused packages and remove them, to make my project more clear? | Python: How to detect unused packages and remove them | 1 | 0 | 0 | 22,930 |
36,563,002 | 2016-04-12T03:15:00.000 | 0 | 1 | 1 | 0 | c#,python | 36,647,773 | 2 | false | 0 | 1 | Can you create a C# class that calls a Python script without using Iron Python and without using any external API? No. That is not possible. You have a few other choices:
Integrate the Python runtime into your program.
Smead already described one way to do this. It will work, and it does avoid creating another process, but it will be a lot of work to get it running, and it is still technically using an API. I do not recommend this for a single Python script where you don't need to pass data back and forth, but it's good to know that option exists if your other options don't pan out.
Use the Process module.
This is probably what I would do. Process has security concerns when a malicious user can cause you to execute bogus shell commands, or if the malicious user can replace the contents of the Python script. It is quite safe when you can lock down those two things.
The speed is unlikely to be a concern. It will literally only take a few minutes to set up a C# program with a process call, so if your mentor is concerned about speed, just write it and measure the speed to see if it's actually a problem.
Consider rewriting the script in C#
C# is a very expressive language with a very strong standard library, so assuming your script is not thousands of lines long, and does not use any obscure Python libraries, this might actually not be much work. If you really must not use Process, this would be the next solution I would consider. | 1 | 0 | 0 | I have a python code. I need to execute the python script from my c# program. After searching a bit about this, I came to know that there is mainly two ways of executing a python script from c#.
One by using 'Process' command and
the other by using Iron Python.
My question might seem dumb, is there any other way through which I can execute a python script? To be more specific, can I create a class , lets say 'Python' in c# and a member function 'execute_script' which doesn't use any api like iron python or doesn't create a process for executing the script, so that if call 'execute_scipt(mypythonprogram.py)' , my script gets executed. Sorry if this seems dumb. If this is possible, please do help me. Thanks in advance. | Creating a class that executes python in c# | 0 | 0 | 0 | 74 |
36,563,227 | 2016-04-12T03:41:00.000 | 6 | 0 | 1 | 0 | python,random | 36,563,317 | 1 | true | 0 | 0 | Unless you're making your own online casino, use random.randint(1,6) six times. Yes, random.randint is pseudo-random, but for all practical purposes each roll will be independent of the previous rolls. | 1 | 2 | 0 | In python, I need to simulate 6 dice.
Should I call random.randint(1,6) six times or do I need to somehow obtain 6 different random number generating objects, each seeded randomly and then call each of them once? | one random number generator vs six | 1.2 | 0 | 0 | 72 |
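For reference, the suggested approach is a one-liner; seeding once (only if reproducibility matters) is optional:

```python
import random

random.seed()                                    # optional: seed once, not per roll
rolls = [random.randint(1, 6) for _ in range(6)]
print(rolls)                                     # e.g. [3, 1, 6, 6, 2, 4]
```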
36,563,381 | 2016-04-12T03:58:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,anaconda,packaging,conda | 36,574,701 | 1 | false | 0 | 0 | First, I would check that you are using the correct python (i.e. which python and confirm that it is the python in your conda environment). Next, you can check if your package is in the site-packages directory of that same python.
The most likely thing, I'd bet, is that the conda package doesn't include everything correctly. Are you sure that you have a build.sh (or bld.bat if you are on windows) and a setup.py? Did you try expanding your built conda package and looking for your python classes in there?
If you expand your built conda package, probably something like gender_univ-VERSION-py35_0.tar.bz2, you should see a lib/python3.5/site-packages/gender_univ directory (i.e. python package). Do you? If not, then the failure is with your building. | 1 | 0 | 0 | I built a python3 package called gender_univ using the Anaconda conda build command. I uploaded the package to the Anaconda cloud and then installed it into my conda environment. Though the package shows up in the list of installed packages when I type conda list, whenever I try to access the package using import gender_univ I get the error no module named gender_univ.
I want to understand why I can't seem to import a package that is apparently installed in my conda virtual environment? Any suggestions. | Python Anaconda package built and installed, but cannot be imported | 0.379949 | 0 | 0 | 453 |
36,563,679 | 2016-04-12T04:33:00.000 | 4 | 0 | 1 | 1 | python | 36,563,699 | 2 | true | 0 | 0 | Yes, Python is not constantly reading the file, the file is only interpreted once per run. The current instance that is already running will not be affected by changes in the script. | 2 | 3 | 0 | If I have a long running process that is running from file.py, can I edit file.py while it is running and run it again, starting a new process and not affect the already running process? | If I have a python program running, can I edit the .py file that it is running from? | 1.2 | 0 | 0 | 264 |
36,563,679 | 2016-04-12T04:33:00.000 | 0 | 0 | 1 | 1 | python | 36,563,817 | 2 | false | 0 | 0 | Of course you can.
When you start the first process, the unmodified code is loaded into memory as its own copy. When you edit the running script, you are editing the file on disk, not that in-memory copy, so you won't change the original.
Even after you click save, it won't make any change to the code in memory that the first process is using.
But as you say, your program is very long. If you change a package the program hasn't used, it may cause the problem, coz the import part is loaded when the program execute the import part. | 2 | 3 | 0 | If I have a long running process that is running from file.py, can I edit file.py while it is running and run it again, starting a new process and not affect the already running process? | If I have a python program running, can I edit the .py file that it is running from? | 0 | 0 | 0 | 264 |
36,564,002 | 2016-04-12T05:04:00.000 | 29 | 0 | 0 | 1 | python,pudb | 36,564,003 | 1 | true | 0 | 0 | Put the focus in the command-line / interpreter pane (using Ctrl-x).
Use the right-arrow key to put the focus on the Clear button. (the background changes color to indicate it is selected)
Now use any of the following commands:
_ (underscore; makes that pane the smallest size possible)
= (equals; makes that pane the largest size possible)
+ (plus; increases the size of that pane with each press)
- (minus; decreases the size of that pane with each press) | 1 | 18 | 0 | Is there any way to resize the command-line / interpreter window/pane in pudb, just like the size of the side pane can be adjusted? | How to make the command-line / interpreter pane/window bigger in pudb? | 1.2 | 0 | 0 | 1,627 |
36,566,586 | 2016-04-12T07:36:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,tweepy,jupyter,canopy | 36,576,493 | 1 | false | 0 | 0 | Whoever provided you with that requirement list was incorrect. There is no released version of Canopy which supports Python 3. I suggest that you install Python 3 using conda. | 1 | 0 | 0 | Hey Guys I am using Windows 7, and started learning Pyton (great language). And for my online tutorial exercise i have to use above combination.
I have installed Canopy, which came with IPython Notebook.
But IPython Notebook only has a Python 2 kernel, no Python 3, and I have to use Python 3 for the exercises.
And please guide me how to install tweepy library of python to download tweets in Canopy. (I mean any commands or options). | Windows + Canopy + Ipython Notebook + Python 3 + Python "tweepy" package | 0.197375 | 0 | 0 | 176 |
36,568,713 | 2016-04-12T09:13:00.000 | 1 | 0 | 0 | 0 | python-2.7,boto,boto3 | 36,582,398 | 1 | true | 0 | 0 | There is currently no way to use a file-like object with upload_file. put_object and upload_part do support these, though you don't get the advantage of automatic multipart uploads. | 1 | 1 | 0 | I am using python with boto3 to upload file into s3 bucket. Boto3 support upload_file() to create s3 object. But this API takes file name as input parameter
Can we give an actual data buffer as a parameter to the upload_file() function instead of a file name?
I know that we can use the put_object() function if we want to give a data buffer as a parameter to create an s3 object. But I want to use upload_file with a data buffer parameter. Is there any way around this?
Thanks in advance | Boto3 : Can we use actual data buffer as parameter instaed of file name to upload file in s3? | 1.2 | 1 | 1 | 638 |
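A small sketch of the put_object route mentioned in the answer above, with an in-memory buffer instead of a file name; the bucket and key are placeholders:

```python
import io
import boto3

s3 = boto3.client("s3")
buffer = io.BytesIO(b"bytes produced in memory, never written to disk")

s3.put_object(Bucket="my-bucket", Key="path/to/object.bin", Body=buffer)
```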
36,571,963 | 2016-04-12T11:28:00.000 | 0 | 0 | 1 | 0 | python,git,pip,setuptools,microservices | 37,508,534 | 2 | true | 0 | 0 | We ended up using git dependencies and not devpi.
I think when git is used, there is no need to add another package repository as long as pip can use this.
The core issue, where the branch code (because of a second level dependency) is different from the one merged to master is not solved yet, instead we work around that by working to remove that second level dependency. | 1 | 4 | 0 | We are trying to find the best way to approach that problem.
Say I work in a Python environment, with pip & setuptools.
I work in a normal git flow, or so I hope.
So:
Move to feature branch in some app, make changes.
Move to feature branch in a dependent lib - Develop thing.
Point the app, using "-e git+ssh" to the feature branch of the dependent lib.
Create a Pull Request.
When this is all done, I want to merge stuff to master, but I can't without making yet another final change to have the app (step 3 above) requirements.txt now point to the main branch of the feature.
Is there any good workflow for "micro services" or multiple dependent source codes in python that we are missing? | Python Development in multiple repositories | 1.2 | 0 | 0 | 2,178 |
36,575,268 | 2016-04-12T13:48:00.000 | 0 | 1 | 0 | 0 | python,linux,gps,updates,working-remotely | 36,577,750 | 1 | false | 0 | 0 | I think you however need a server(or it can be just file-share service). If I got it correctly you need to control(or just update) Raspberry(ies), that connected to internet via 3G. So, there are options I see:
Connect them into a VPN;
Write a script that keeps checking an HTTP/FTP file-sharing server for new updates to your app;
Use reverse-shell, but working depends on NAT specs that uses 3G provider. | 1 | 0 | 0 | I am currently developing a python program for a Raspberry Pi. This Raspberry is meant to control a solar panel. In fact, there will be many Raspberry(ies) controlling solar panels and they will be connected to each others by RJ wires. The idea is that every Raspberry has the same status, there is not any "server" Raspberry and "client" Raspberry.
The program will receive GPS data, i.e. position, time...
Except from the GPS data, the Raspberry(ies) will not have direct internet access. However, it will be possible to plug a 3G key in order to gain access to internet.
The problem is the following : I want to update my python program remotely, by internet provided by my 3G key (the solar panels are in a field, and I'm home for instance so I do not want to drive a hundred miles to get my Raspberry(ies) back and update them manually...). How is it possible to make the update remotely considering that I do not have a real "server" in my network of Raspberry(ies)? | How can I update a python program remotely on linux? | 0 | 0 | 1 | 223 |
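A hedged sketch of the second option from the answer above (polling a server for updates over the 3G link). Every URL, path, and the restart mechanism here are assumptions for illustration only:

```python
# Each Raspberry Pi polls a version file; when the published version changes,
# it downloads the new script, overwrites itself and restarts.
import os
import sys
import time
import urllib.request

VERSION_URL = "http://example.com/panel/version.txt"     # placeholder
SCRIPT_URL = "http://example.com/panel/controller.py"    # placeholder
VERSION_FILE = "/home/pi/controller.version"              # placeholder

def read_local_version():
    try:
        with open(VERSION_FILE) as f:
            return f.read().strip()
    except OSError:
        return ""

def update_if_needed():
    with urllib.request.urlopen(VERSION_URL, timeout=30) as resp:
        remote = resp.read().decode().strip()
    if remote and remote != read_local_version():
        with urllib.request.urlopen(SCRIPT_URL, timeout=60) as resp:
            code = resp.read()
        with open(sys.argv[0], "wb") as f:
            f.write(code)
        with open(VERSION_FILE, "w") as f:
            f.write(remote)
        os.execv(sys.executable, [sys.executable] + sys.argv)   # restart on new code

while True:
    try:
        update_if_needed()
    except Exception as exc:        # the 3G link may be down; just retry later
        print("update check failed:", exc)
    time.sleep(3600)
```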
36,575,776 | 2016-04-12T14:09:00.000 | 0 | 0 | 1 | 0 | python,pandas,machine-learning | 70,480,217 | 2 | false | 0 | 0 | always split your data to train and test split to prevent overfiting.
If some of your features have a large scale and some don't, you should standardize the data. Make sure to fit the standardization on the train set only, so you don't cause data leakage.
You also have to look for missing data and replace or remove it.
If less than 0.5% of the data in a column is missing you can use dropna; otherwise you have to replace it with something (you can replace it with zero, the mean, the previous value, ...).
You also have to check for outliers, for example by using a boxplot.
Outliers are points that are significantly different from the other data in the same group, and they can also affect your predictions in machine learning.
It is also best to check for multicollinearity.
If some features are correlated we have multicollinearity, which can cause wrong predictions from our model.
Finally, some of the columns might be categorical and should be converted to numerical values before use. | 1 | 0 | 1 | I have a DataFrame in Python and I need to preprocess my data. Which is the best method to preprocess the data, knowing that some variables have a huge scale and others don't? The data doesn't have huge variance either. I tried the preprocessing.Scale function and it works, but I'm not sure at all if it is the best way to prepare for the machine learning algorithms. | Data Preprocessing Python | 0 | 0 | 0 | 1,399
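A compact scikit-learn sketch of the split-then-standardize advice above; the toy data stands in for the real DataFrame:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 5) * [1, 10, 100, 1000, 10000]   # toy features, mixed scales
y = np.random.randint(0, 2, size=100)                     # toy binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)    # fit on the training data only
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)     # reuse the same fitted scaler
```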
36,576,158 | 2016-04-12T14:23:00.000 | 0 | 1 | 0 | 0 | python,ssh,paramiko | 36,601,638 | 1 | true | 0 | 0 | I figured out a way to get the data, it was pretty straight forward to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening:
When the connection opens, the server spits out two messages, one saying it can't chdir to a particular directory, then a few milliseconds later it spits out another message stating that you need to connect to the other IP. If I send a command immediately after connecting (doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem as I can check this string for a known message and change the flow of execution.
However, if what I describe is accurate, then this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received.
As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know. | 1 | 1 | 0 | I'm writing a script that uses paramiko to ssh onto several remote hosts and run a few checks. Some hosts are setup as fail-overs for others and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts the host will inform me that you need to connect to another 'active' IP and then close the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner).
I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection, exec_command will obviously give me stdout and stderr, but the host is outputting this immediately upon connection, and it doesn't accept any other incoming requests/messages. It just closes after n seconds.
I don't want to have to wait until the timeout to move onto the next host and I'd also like to verify that that's the reason for not being able to connect and run the checks, otherwise my script works as intended.
Any suggestions as to how I can capture this output, with or without paramiko, is greatly appreciated. | Paramiko get stdout from connection object (not exec_command) | 1.2 | 0 | 1 | 391 |
36,576,353 | 2016-04-12T14:31:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 36,576,845 | 2 | false | 0 | 0 | I am assuming it is a plain text document. In that case you would open('file.txt') as f and then get every [for] line in f and check if 'word' in f.lower() and then incrament a counter accordingly (say wordxtotal += 1) | 1 | 1 | 0 | I have a text file in which I need to search for specific 3 words using Python. For example the words are account, online and offer and I need the count of how many times it appears in the system. | Python word search in a text file | 0 | 0 | 0 | 305 |
36,577,442 | 2016-04-12T15:16:00.000 | 0 | 0 | 0 | 0 | python | 36,577,730 | 2 | false | 0 | 0 | You could use atexit.register() from module atexit to register cleanup functions. If you register functions f, g, h in that order. At program exit these will be executed in the reverse order, h, g, f.
But one thing to note: These functions will be invoked upon normal program exit. Normal means exits handled by python. So won't work in weird cases. | 1 | 0 | 0 | I have a python script that collect data from a database every minutes by timestamp.
Every minutes this script collect data from a given table in DB by that match the current time with a dely of 1 minutes:
For example at ' 216-04-12 14:53 ' the script will look for data
that match ' 216-04-12 14:52 ' in the DB and so on...
Now I want the script to save the last timestamp it collected from the data base before exiting and that for any type of exit (keyboard interrupt, system errors, data base points of failure etc.)
Is there a simple way to do what I want knowing that I can't modify the dataBase | How to make a python script do something before exiting | 0 | 0 | 0 | 1,055 |
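A minimal sketch of the atexit suggestion above; the state-file path and the collection loop are placeholders, and the handler only fires on exits the interpreter itself handles:

```python
import atexit
import json
import time

STATE_FILE = "last_timestamp.json"          # placeholder path
state = {"last_timestamp": None}

def save_state():
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

atexit.register(save_state)                 # runs on normal interpreter shutdown

while True:
    # ... query the database for rows matching "one minute ago" ...
    state["last_timestamp"] = time.strftime("%Y-%m-%d %H:%M")
    time.sleep(60)
```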
36,578,931 | 2016-04-12T16:22:00.000 | 1 | 0 | 0 | 0 | mysql,json,django,python-2.7,utf-8 | 36,579,883 | 1 | false | 1 | 0 | -- encoding: utf-8
It changes only the encoding of the source file, meaning you can define variables/comments using non-ASCII characters.
You can try to use
json.dumps(..., ensure_ascii=False, encoding="ISO-8859-1") | 1 | 0 | 0 | I'm using Django and ajax to print data to an HTML table with jQuery and JSON.
It was working until new data came and had "ú@ñ" type of characters and I got:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xf9 in position 4: invalid start byte
I've read and tried many different possible reasons and it's still not working.
I've tried:
saving my file in UTF-8 in Sublime Text and with a file -bi myfile I still get text/x-python; charset=us-ascii
using # -*- encoding: utf-8 -*- at the beginning of my views.py
changing MySQL charset to CHARSET=utf8mb4 from CHARSET=latin1
json.dumps(list(rows), default=datetime_handler), content_type="application/json", encoding='utf-8')
I'd rather avoid using .decode() for every string in my data but if there's no other solution, it's what I'll have to do. | UnicodeDecodeError: 'utf8' codec can't decode byte | 0.197375 | 0 | 0 | 2,359 |
36,582,249 | 2016-04-12T19:22:00.000 | 1 | 0 | 0 | 0 | python,attributes,arcgis,arcmap | 37,197,270 | 1 | false | 0 | 0 | The easiest way to accomplish this that I can think of is to use Calculate Field. In this case, the script would be:
FC = path to shapefile
field = "ROWID"
arcpy.CalculateField(FC, field, '!Liber! + !Page!', "PYTHON_9.3") | 1 | 0 | 0 | I searched for a question similar to mine, but couldn't find something precisely similar, so if this has been answered already, I apologize.
I am creating a ROW shapefile that contains (among others) 3 fields:
"Liber"
"Page"
"ROWID"
Currently I have "Liber" and "Page" set as text values (though that may later change to a numerical data type), and "ROWID" as a text value. Is it possible to write a script that will automatically calculate "ROWID" as:
"ROW-" + !Liber! + !Page!
I'm trying to set this up for someone to enter data without needing to do much more than enter values in the "Liber" and "Page" fields. If it isn't possible to do it this way, how can I set it so that someone can right-click the "ROWID" field, select "Calculate Values" (which I actually can't find anymore; did they remove this from outside the Geoprocessing Toolbox?), and have it run the above Python script? I can't really expect the data entry person to run the field calculator every time they need to calculate a new "ROWID" value, unfortunately.
Please let me know if this isn't making sense. Thank you! | ArcGIS: Auto-Calculate Field Value from Other Fields | 0.197375 | 0 | 0 | 1,687 |
36,582,318 | 2016-04-12T19:26:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 36,582,371 | 2 | true | 0 | 0 | You can do U[:, None, :] to add a new dimension to the array. | 1 | 0 | 1 | I have a numpy array U with shape (20, 50): 20 spatial points, in a space of 50 dimensions.
How can I transform it into a (20, 1, 50) array, i.e. 20 rows, 1 column, and each element is a 50 dimension point? Kind of encapsulating each row as a numpy array.
Context
The point is that I want to expand the array along the columns (actually, replicating the same array along the columns X times) using numpy.concatenate. But if I would do it straight away I would not get the result I want.
E.g., if I would expand it once along the columns, I would get an array with shape (20, 100). But what I would like is to access each element as a 50-dimensional point, so when I expand it I would expect to have a new U' with shape (20, 2, 50). | Numpy group scalars into arrays | 1.2 | 0 | 0 | 71 |
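The answer above in two lines, plus the concatenation step described in the question:

```python
import numpy as np

U = np.random.rand(20, 50)
U3 = U[:, None, :]                          # shape (20, 1, 50)
tiled = np.concatenate([U3, U3], axis=1)    # shape (20, 2, 50)
print(U3.shape, tiled.shape)
```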
36,585,941 | 2016-04-12T23:36:00.000 | 1 | 1 | 0 | 0 | python,api,github,oauth-2.0,github-api | 39,495,778 | 1 | false | 0 | 0 | The Short, Simple Answer
You should probably give them none of those things. They are equivalent to handing over your username and password to someone.
The Longer Answer
It depends...
Personal Access Tokens
Your personal access token is a unique token that authorises and represents you during API calls, the same way that logging via the web interface authorises you to perform actions there. So when you call an API function with a personal access token, you are performing that API action as if you yourself had logged in and performed the same action. Therefore, if you were to give someone else your token, they would have the same access to the site as they would have if you gave them you username and password combination.
Personal access tokens have attached scopes. Scopes control exactly how much access to GitHub a particular token has. For example, one token my have access to all private repositories, but another token only to public ones.
Client IDs
A client ID represents your application, rather than you. So when you create an application, GitHub gives you an ID that you use to identify your application to GitHub.
Chiefly this allows someone logging into your application using OAuth to see on the GitHub web interface that it's your particular application requesting access to their account.
Client Secrets
A client secret is a random, unguessable string that is used to provide an extra layer of authentication between your application and GitHub. If you think of the client ID as the username of your application, you can think of the client secret as the password.
Should I Share Them?
Whether you wish to share any of these things depends largely on how much you trust the other developers. If you are all working on the same application, it's likely that you will all know the client ID and client secret. But if you want to develop an open-source application that people will install on their own machines, they should generate their own client ID and secrets for their own instances of the app.
It's unlikely that you should ever share a personal access token, but if you have a bot account used by the whole team, then sharing the tokens could also be okay. | 1 | 0 | 0 | I am writing a basic python script and I am trying to use the Github API. Because I am new to the development scene, I am unsure of what I can share with other developers. Do I generate a new personal access token (that I assume can be revoked) or do I give them Client ID and Client Secret?
Can someone explain how OAuth (Client ID and Client Secret) is different from a personal access keys?
Does this logic work across all APIs (not just on Github's)? | Personal Access Tokens, User Tokens | 0.197375 | 0 | 0 | 430 |
36,589,322 | 2016-04-13T05:26:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,methods,pycharm,self | 36,592,035 | 1 | true | 0 | 0 | Its a hint saying that the method can be a static method since it is not acting on instances (ie, you are passing in self but aren't actually making use of it).
There is no conventional means to handle it - either you want that method to be there, because you are creating a class tree and want it to be defined/overridden in the descendants; or for whatever other reason. In this case, you can ignore the warning.
This is entirely different from @staticmethod, which has a lot of other consequences. So it's not a matter of "if I'm not using self, but passing it in, let's just make it a static method"; you have to know what the method is doing.
Static and class methods are most often used in factory classes. | 1 | 1 | 0 | In PyCharm, whenever you declare a method that doesn't make use of the self variable it gives you the warning
Method 'method_name' may be 'static'
I've come across this warning many times and most of the time I just ignore it. However, I was wondering if there is a conventional or pythonic way to handle it.
So basically, my question is what should I do when I come across this? Should I ignore it? Should I replace it with a static method (@staticmethod)? | What if a method doesn't use self? | 1.2 | 0 | 0 | 583 |
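The two options side by side, as a tiny example:

```python
class Report:
    def as_header(self, text):        # PyCharm: "Method 'as_header' may be 'static'"
        return text.upper()

    @staticmethod
    def as_static_header(text):       # no self; callable on the class or an instance
        return text.upper()

print(Report().as_header("totals"))
print(Report.as_static_header("totals"))
```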
36,593,464 | 2016-04-13T09:02:00.000 | 1 | 0 | 1 | 0 | python,multithreading,zeromq,pyzmq | 36,602,302 | 1 | true | 0 | 0 | There is no dinner for free
even if some marketing blurb promises that, do not take it for granted.
Efficient usually means complex resource handling.
Simplest to implement usually conflicts with overheads and efficient resource handling.
Simplest?
Using sir Henry FORD's point of view, a component, that is not present in one's design simply cannot fail.
In this very sense, we strive here not to programmatically control anything beyond the simplest possible use of elementary components of an otherwise smart ZeroMQ library:
Scenario SIMPLEST:
Rule a)
The central HQ-unit ( be it just a thread or a fully isolated process ) .bind()-s its receiving port ( pre-set as a ZMQ_SUBSCRIBE behaviour-archetype ) and "subscribes" its topic-filter to "everything" .setsockopt( ZMQ_SUBSCRIBE, "" ) before it spawns the first DAQ-{ thread | process }, further ref'd as DAQ-unit.
Rule b)
Each DAQ-unit simply .connect()-s to an already setup & ready port on the HQ-unit with a unit-local socket access-port, pre-set as a ZMQ_PUBLISH behaviour-archetype.
Rule c)
Any DAQ-unit simply .send( ..., ZMQ_NOBLOCK )-s as needed it's local-data via a message, which is being delivered in the background by the ZeroMQ-layer to the hands of the HQ-unit, being there queued & available for a further processing at the HQ-unit's will.
Rule d)
The HQ-unit regularly loops and .poll( 1 )-s for a presence of a collected message from any DAQ-unit + .recv( ZMQ_NOBLOCK ) in case any such was present.
That's all
Asynchronous: yes.
Non-blocking: yes.
Simplest: yes.
Scaleable: yes. Almost linearly, until I/O-bound ( still some tweaking possible to handle stressed-I/O-operations ) as a bonus-point... | 1 | 1 | 0 | The parent process launches a few tens of threads that receive data (up to few KB, 10 requests per second), which has to be collected in a list in the parent process. What is the recommended way to achieve this, which is efficient, asynchronous, non-blocking, and simpler to implement with least overhead?
The ZeroMQ guide recommends using a PAIR socket archetype for coordinating threads, but how to scale that with a few hundred threads? | How to collect data from multiple threads in the parent process with ZeroMQ while keeping the solution both as simple & as scaleable as possible? | 1.2 | 0 | 0 | 214 |
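A compact pyzmq sketch of rules a) through d) above, using threads and an inproc endpoint purely for illustration; the endpoint, worker count, and timings are arbitrary assumptions:

```python
import threading
import time
import zmq

ENDPOINT = "inproc://daq"              # use tcp://... across processes/machines
ctx = zmq.Context.instance()

def daq_unit(worker_id):
    pub = ctx.socket(zmq.PUB)
    pub.connect(ENDPOINT)              # rule b)
    time.sleep(0.2)                    # give the connection time to establish
    for i in range(3):
        try:
            pub.send_string("worker %d sample %d" % (worker_id, i), zmq.NOBLOCK)  # rule c)
        except zmq.Again:
            pass                       # HQ not ready; drop or retry as policy dictates
        time.sleep(0.1)
    pub.close()

sub = ctx.socket(zmq.SUB)              # rule a): HQ binds and subscribes to everything
sub.setsockopt(zmq.SUBSCRIBE, b"")
sub.bind(ENDPOINT)

threads = [threading.Thread(target=daq_unit, args=(n,)) for n in range(5)]
for t in threads:
    t.start()

collected = []
deadline = time.time() + 2
while time.time() < deadline:
    if sub.poll(1):                    # rule d): poll, then non-blocking receive
        collected.append(sub.recv_string(zmq.NOBLOCK))

for t in threads:
    t.join()
print(len(collected), "messages collected")
```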
36,600,093 | 2016-04-13T13:34:00.000 | 0 | 0 | 0 | 0 | javascript,python,ajax,d3.js,flask | 36,601,249 | 1 | false | 1 | 0 | If simply loading the json is too heavy for the browser, then doing a complete rendering server-side would not help, as the rendered object would one way or another include the same amount of data.
But I guess you cannot show that much data at once. Since you are going for a zoomable visualizer, you should probably only load the data that is visible at the current scale, within the current window (just like any map application does: you can't just load the whole world-map at street level at once, but zooming can still go smoothly). Quadtrees are normally quite useful for this task. | 1 | 0 | 0 | So my issue is that I'm passing a large JSON file (I'm not sure of the exact size, but it's very very big) into a D3 zoomable treemap.
I'm doing this by way of AJAX call to a Python backend. The performance of my browser just degrades completely when I load the file in, it takes 5-10 mins for it to even appear.
I'm just wondering are there any options that will help with performance? Rendering it server side perhaps?
This is the first ever time I've run into a performance issue like this so I'm really not sure where to go. Any help would be appreciated. | Performance of D3 treemap with large amounts of data | 0 | 0 | 0 | 170 |
36,600,993 | 2016-04-13T14:09:00.000 | 9 | 0 | 0 | 0 | python,python-3.x,kivy | 36,601,394 | 1 | true | 0 | 1 | Widgets have an export_to_png method. Call this from the Widget whose canvas you have drawn on. | 1 | 3 | 0 | I was wondering if it is possible to save a canvas that had several textures painted on it as an image file.
I know I can save regular Image's (kivy.core.image) or Texture's (kivy.graphics.texture) as an image file with the save() function, so if I am able to convert the canvas to an Image or a Texture it should be easy, but so far I wasn't able to do this. | How to save a kivy canvas object as an image file | 1.2 | 0 | 0 | 2,200 |
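A minimal, untested sketch of the export_to_png call in context; the drawing and the one-second delay before exporting are assumptions for illustration:

```python
from kivy.app import App
from kivy.clock import Clock
from kivy.graphics import Color, Ellipse
from kivy.uix.widget import Widget

class Painted(Widget):
    def on_size(self, *args):
        self.canvas.clear()
        with self.canvas:
            Color(1, 0, 0)
            Ellipse(pos=self.pos, size=self.size)

class DemoApp(App):
    def build(self):
        root = Painted()
        # export after the first frames have been drawn
        Clock.schedule_once(lambda dt: root.export_to_png("canvas.png"), 1)
        return root

DemoApp().run()
```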
36,604,460 | 2016-04-13T16:32:00.000 | 1 | 0 | 0 | 0 | python,pyspark | 59,394,797 | 2 | false | 0 | 0 | If you get this error even after verifying that you have NOT used from pyspark.sql.functions import *, then try the following:
Use import builtins as py_builtin
And then correspondingly call it with the same prefix.
Eg: py_builtin.max()
*Adding David Arenburg's and user3610141's comments as an answer, as that is what help me fix my problem in databricks where there was a name collision with min() and max() of pyspark with python build ins. | 1 | 4 | 1 | Python function max(3,6) works under pyspark shell. But if it is put in an application and submit, it will throw an error:
TypeError: _() takes exactly 1 argument (2 given) | Python function such as max() doesn't work in pyspark application | 0.099668 | 0 | 0 | 2,764 |
36,604,543 | 2016-04-13T16:36:00.000 | 0 | 0 | 1 | 0 | python,math,sage | 36,646,556 | 1 | false | 0 | 0 | You could run the Sage REPL from the Sage shell (run sage -sh and start the Sage REPL from there). Then it would use Sage's Python. | 1 | 1 | 0 | Sage only works with python2, but I'm running python3 on my system in a virtual environment. Whenever I try to start the sage REPL, it fails saying module "sage" not found. When I open python2 directly and import sage, it works. So it seems like sage is trying to use python3 and failing. It's probably using my PATH env variable, but I don't want to change this every time I start up the REPL. How can I tell it to use a specific version of python/ipython? | Sage REPL Specify IPython Binary | 0 | 0 | 0 | 29 |
36,606,390 | 2016-04-13T18:15:00.000 | 1 | 1 | 0 | 0 | python,numpy,fft,ifft | 36,609,298 | 1 | true | 0 | 0 | What you are doing is perfectly fine. You are generating the analytic signal to accommodate the negative frequencies in the same way a discrete Hilbert transform would. You will have some scaling issues - you need to double all the non-DC and non-Nyquist signals in the real frequency portion of the FFT results.
Some practical concerns are that this method imparts a delay of the window size, so if you are trying to do this in real-time you should probably examine using a FIR Hilbert transformer and the appropriate sums. The delay will be the group delay of the Hilbert transformer in that case.
Another item of concern is that you need to remember that the DC component of your signal will also shift along with all the other frequencies. As such I would recommend that you demean the data (save the value) before shifting, zero out the DC bin after you FFT the data (to remove whatever frequency component ended up in the DC bin), then add the mean back to preserve the signal levels at the end. | 1 | 1 | 1 | So I am trying to perform a frequency shift on a set of real valued points. In order to achieve a frequency shift, one has to multiply the data by a complex exponential, making the resulting data complex. If I multiply by just a cosine I get results at both the sum and difference frequencies. I want just the sum or the difference.
What I have done is multiply the data by a complex exponential, use fft.fft() to compute the fft, then used fft.irfft() on only the positive frequencies to obtain a real valued dataset that has only a sum or difference shift in frequency. This seems to work great, but I want to know if there are any cons to doing this, or maybe a more appropriate way of accomplishing the same goal. Thanks in advance for any help you can provide! | In python, If I perform an fft on complex data, then irfft only the positive frequencies, how does that affect the data? | 1.2 | 0 | 0 | 581 |
36,606,931 | 2016-04-13T18:43:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,pandas | 69,174,336 | 4 | false | 0 | 0 | Maybe try df = pd.read_csv(header = 0) | 1 | 63 | 1 | When I read in a CSV, I can say pd.read_csv('my.csv', index_col=3) and it sets the third column as index.
How can I do the same if I have a pandas dataframe in memory? And how can I say to use the first row also as an index? The first column and row are strings, rest of the matrix is integer. | How to set in pandas the first column and row as index? | 0 | 0 | 0 | 160,606 |
36,607,394 | 2016-04-13T19:05:00.000 | 0 | 0 | 0 | 0 | javascript,python,robotframework | 71,966,776 | 9 | false | 1 | 0 | For me editing JAVA_ARGS in /etc/default/jenkins didn't work. To make changes permanent on Ubuntu 18.04 LTS when running Jenkins as service I did following:
Run service jenkins status and from second line take path to actual service configuration file, mine was: /lib/systemd/system/jenkins.service
Run sudo vim /lib/systemd/system/jenkins.service find property Environment= under comment Arguments for the Jenkins JVM
Paste: -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-scripts; default-src 'none'; img-src 'self' data: ; style-src 'self' 'unsafe-inline' data: ; script-src 'self' 'unsafe-inline' 'unsafe-eval' ;\" behind -Djava.awt.headless=true
Run sudo service jenkins stop, you should see following warning: Warning: The unit file, source configuration file or drop-ins of jenkins.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Run sudo systemctl daemon-reload
Run sudo service jenkins start
You should be now able to browse robot framework results after restart. | 3 | 25 | 0 | If I open any .html file that generated by Robot Framework and try to convert it in any other format(for example, docx formate) using either any python code or inbuilt command line tool that are available. I am getting below error,
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
· I am getting this error even though I have already enabled JavaScript in my browser.I am using Mozilla Firefox version 45.0.2 on mac.
Can anyone please help me to solve this issue? | Error: Opening Robot Framework log failed | 0 | 0 | 0 | 41,528 |
36,607,394 | 2016-04-13T19:05:00.000 | 0 | 0 | 0 | 0 | javascript,python,robotframework | 58,062,311 | 9 | false | 1 | 0 | The accepted answer works for me but is not persistent. To make it persistent, modify the file /etc/default/jenkins and after JAVA_ARGS line, add the following line:
JAVA_ARGS="$JAVA_ARGS -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-scripts; default-src 'none'; img-src 'self' data: ; style-src 'self' 'unsafe-inline' data: ; script-src 'self' 'unsafe-inline' 'unsafe-eval' ;\""
Change will apply and be persistent after reboot | 3 | 25 | 0 | If I open any .html file that generated by Robot Framework and try to convert it in any other format(for example, docx formate) using either any python code or inbuilt command line tool that are available. I am getting below error,
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
· I am getting this error even though I have already enabled JavaScript in my browser.I am using Mozilla Firefox version 45.0.2 on mac.
Can anyone please help me to solve this issue? | Error: Opening Robot Framework log failed | 0 | 0 | 0 | 41,528 |
36,607,394 | 2016-04-13T19:05:00.000 | 2 | 0 | 0 | 0 | javascript,python,robotframework | 53,811,785 | 9 | false | 1 | 0 | The easiest thing to do is (if there are no worries on security aspects) also a permanent fix.
open the jenkins.xml file and
add the following
<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dhudson.model.DirectoryBrowserSupport.CSP="" -jar "%BASE%\jenkins.war" -- httpPort=8080 --webroot="%BASE%\war"</arguments>
restart the jenkins server
rerun your jenkins jobs to see the result files.
If we are using the script console, every time you restart the jenkins server, the changes will be lost. | 3 | 25 | 0 | If I open any .html file that generated by Robot Framework and try to convert it in any other format(for example, docx formate) using either any python code or inbuilt command line tool that are available. I am getting below error,
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
· I am getting this error even though I have already enabled JavaScript in my browser.I am using Mozilla Firefox version 45.0.2 on mac.
Can anyone please help me to solve this issue? | Error: Opening Robot Framework log failed | 0.044415 | 0 | 0 | 41,528 |
36,607,919 | 2016-04-13T19:33:00.000 | 0 | 0 | 1 | 0 | python,packages,cython | 36,609,493 | 1 | false | 0 | 0 | Turns out it's very simple - create a setup.py file with both packages and extension modules. Setuptools takes care of everything else. | 1 | 2 | 0 | Is it possible to create a Python wheel (or egg) with both Python packages and a Cython extension module? I looked for examples and found none.
How can I do that? I'd rather not create two packages, as the Cython extension module has absolutely no use outside the Python package. | Create a Python package with both Python code and a Cython extension | 0 | 0 | 0 | 45 |
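A sketch of such a setup.py; the package and module names, and the src/ layout, are placeholders:

```python
from setuptools import Extension, find_packages, setup
from Cython.Build import cythonize

extensions = [
    Extension("mypkg._fastcore", ["src/mypkg/_fastcore.pyx"]),
]

setup(
    name="mypkg",
    version="0.1.0",
    packages=find_packages("src"),      # the pure-Python packages
    package_dir={"": "src"},
    ext_modules=cythonize(extensions),  # the Cython extension module
)
```

Running python setup.py bdist_wheel (or pip wheel .) then produces a single wheel containing both the Python code and the compiled extension.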
36,609,150 | 2016-04-13T20:39:00.000 | 5 | 0 | 0 | 0 | python,django,virtualenv | 36,609,193 | 1 | true | 1 | 0 | The entire project is loaded into the same Python process. You can't have two Python environments active at the same time in the same process. So the answer is no - you can't have concurrent virtual environments for apps in the same project. | 1 | 2 | 0 | In Django, a project can contain many apps. Can each app have its own virtualenv? Or do all the apps in a Django project have to use the project's virtualenv? | A different virtualenv for each Django app | 1.2 | 0 | 0 | 89 |
36,609,201 | 2016-04-13T20:42:00.000 | 0 | 0 | 0 | 0 | python,django,git,github | 36,613,541 | 1 | false | 1 | 0 | The django_sessions table should get initialized when you run your first migrations. You said taht you made your migrations, but did you run them (with python manage.py migrate). Also, do you have django.contrib.auth in the installed_apps in your settings file? This is the app that owns that session table | 1 | 1 | 0 | I have just cloned a Django app from Github to a local directory. I know for a fact that the app works because I've run it on others' computers.
When I run the server, I can see the site and register for an account. This works fine (I get a confirmation email). But then my login information causes an error because the DB appears to not have configured properly on my machine. I get the following errors:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py in execute
return self.cursor.execute(sql, params) ...
▶ Local vars
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py in execute
return Database.Cursor.execute(self, query, params) ...
▶ Local vars
The above exception (no such table: django_session) was the direct cause of the following exception:
(It then lists a bunch of problems with local vars).
I tried making migrations with every part of the app but this didn't appear to fix anything. | Problems with database after cloning Django app from Github | 0 | 1 | 0 | 315 |
36,615,987 | 2016-04-14T07:05:00.000 | 0 | 0 | 0 | 0 | python-3.x,machine-learning,data-analysis,feature-selection,data-science | 36,617,627 | 3 | false | 0 | 0 | You are already doing a lot of preprocessing. The only additional step I recommend is to normalize the values after PCA. Then your data should be ready to be fed into your learning algorithm.
Or do you want to avoid PCA? If the correlation between your features is not too strong, this might be ok. Then skip PCA and just normalize the values. | 1 | 0 | 1 | Blockquote
I am a student and beginner in Machine Learning. I want to do feature
selection of columns. My dataset is 50000 X 370 and it is a binary
classification problem.
First i removed the columns with std.deviation = 0, then i removed duplicate columns, After that i checked out top 20 features with highest ROC curve area. What should be the next step apart doing PCA? Can anybody give a sequence of steps to be followed for feature selection? | Suggestions on Feature selection techniques? | 0 | 0 | 0 | 541 |
36,616,309 | 2016-04-14T07:22:00.000 | 8 | 0 | 0 | 0 | python,django,django-rest-framework | 60,892,980 | 2 | false | 1 | 0 | In rest_framework.request.Request
request.body is bytes and is always available, so there is no limit on its usage;
request.data is a "property" method and can raise an exception,
but it gives you parsed data, which is more convenient.
However, the world is not perfect, and here is a case where request.body wins.
Consider this example:
If client send:
content-type: text/plain
and your REST's endpoint doesn't accept text/plain
your server will return 415 Unsupported Media Type
if you access request.data
But what if you know that json.loads(request.body) is valid JSON?
So you want to use that, and only request.body allows it.
FYI: The described example is an AWS SNS notification message sent by AWS to an HTTP endpoint. AWS SNS acts as the client here and, of course, this case is a bug in their SNS.
Another example of benefits from request.body is a case when you have own custom parsing and you use own MIME format. | 1 | 30 | 0 | Django REST framework introduces a Request object that extends the regular HttpRequest, this new object type has request.data to access JSON data for 'POST', 'PUT' and 'PATCH' requests.
However, I can get the same data by accessing request.body parameter which was part of original Django HttpRequest type object.
One difference which I see is that request.data can be accessed only one time. This restriction doesnt apply to request.body.
My question is what are the differences between the two. What is preferred and what is the reason DRF provides an alternative way of doing same thing when There should be one-- and preferably only one --obvious way to do it.
UPDATE: Limiting the usecase where body is always of type JSON. Never XML/ image or conventional form data. What are pros/cons of each? | request.data in DRF vs request.body in Django | 1 | 0 | 1 | 34,630 |
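To make the contrast concrete, a small DRF view sketch along the lines of the SNS example above; falling back to json.loads(request.body) when the parsers refuse the content type is spelled out here purely as illustration:

```python
import json
from rest_framework.response import Response
from rest_framework.views import APIView

class NotificationView(APIView):
    def post(self, request):
        try:
            payload = request.data            # goes through the configured parsers
        except Exception:                     # e.g. text/plain that is really JSON
            payload = json.loads(request.body)
        return Response({"received": payload})
```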
36,616,705 | 2016-04-14T07:42:00.000 | 4 | 0 | 1 | 1 | python,window,python-idle | 37,742,333 | 2 | false | 0 | 0 | On the top menu, choose Options, then Configure IDLE. The Fonts/Tabs tab will be displayed. There is a Size button. Click on it and select a bigger size than the default of 10. At present, this only only affects the font in Shell, editor, and output windows, but that is the main issue for me.
EDIT: correct Options menu entry as suggested by Nathan Wailes. IDLE Preferences was once the title of the resulting dialog, but it is now Settings. | 1 | 2 | 0 | I can't seem to figure out how to zoom in or anything, and the font is so small. Thank you for your help. I'm using a mac if that helps | How to make IDLE window more readable? Font too small for me to see | 0.379949 | 0 | 0 | 8,590 |
36,618,103 | 2016-04-14T08:51:00.000 | 0 | 0 | 0 | 0 | python,selenium | 36,622,332 | 1 | false | 1 | 0 | Selenium doesn't start Chrome in incognito mode, It just creates a new and fresh profile in the temp folder. You could force Selenium to use the default profile or you could launch Chrome with the debug port opened and the let Selenium connect to it. There is also a third way which is to preinstall the webdriver extension in Chrome. These are the only ways I've encountered to automate Chrome with Selenium. | 1 | 0 | 0 | I'm looking for a solution that could help me out automating the already opened application in Chrome web browser using Selenium and Python web driver. The issue is that the application is super secured, and if it is opened in incognito mode as Selenium tries to do, it sends special code on my phone. This defeats the whole purpose. Can someone provide a hacky way or any other work around/open source tool to automate the application. | Selenium: How to work with already opened web application in Chrome | 0 | 0 | 1 | 127 |
36,621,256 | 2016-04-14T11:07:00.000 | 0 | 0 | 1 | 1 | python-2.7,permissions | 36,621,405 | 1 | false | 0 | 0 | You can call subprocess and run normal system command from there:
I did not test it, but I think this should work:
subprocess.call(["chmod", "-R", "777", "/PATH"]) | 1 | 0 | 0 | I want to set permission (777) to my directory ( including all the files and subdirectories ) in one line, don't want to use any os.walk or for loop | set permission(777) for all the files, subdirectories of a directory in python without any loop | 0 | 0 | 0 | 317
36,626,267 | 2016-04-14T14:31:00.000 | 1 | 0 | 1 | 1 | python,multithreading,parallel-processing,job-scheduling | 36,626,600 | 1 | false | 0 | 0 | In addition to your "organiser-script" you will need some program/script on each of the other machines, that listens on the network for commands from the "organiser-script", starts "workers" and reports when "workers" have finished.
But there are existing solutions for your task. Take a good look around before you start coding. | 1 | 1 | 0 | I would like to be able to run multiple, typically long processes, over different machines connected over a local network.
Processes would generally be python scripts.
In other words, suppose that I have 100 processes and 5 machines, and I don't want to run more than 10 processes on each machine at the same time.
My "organiser-script" would then start 10 processes per machine, then send the next ones as the first ones end.
Is there any way to do this in python?
Any suggestion would be very much appreciated!
Thank you! | Running processes (mainly python) over several machines | 0.197375 | 0 | 0 | 157 |
36,627,362 | 2016-04-14T15:17:00.000 | 1 | 0 | 0 | 0 | python,image,numpy,matrix,matplotlib | 36,628,839 | 1 | true | 0 | 0 | Solved using scipy library
import scipy.misc
...(code)
scipy.misc.imsave(name,array,format)
or
scipy.misc.imsave('name.ext',array) where ext is the extension and hence determines the format at which the image will be stored. | 1 | 1 | 1 | I would like to save a numpy matrix as a .png image, but I cannot do so using the matplotlib since I would like to retain its original size (which apperently the matplotlib doesn't do so since it adds the scale and white background etc). Anyone knows how I can go around this problem using numpy or the PIL please? Thanks | Converting a matrix into an image in Python | 1.2 | 0 | 0 | 617 |
36,630,188 | 2016-04-14T17:36:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,python-3.5 | 36,630,380 | 1 | false | 0 | 0 | It is a lot safer to change your environment so that Python 3.5 is given preference over the default Python.
There are many ways to do this; if you do them all, it provides the maximum compatibility.
You can set these in your .bash_profile file, which is a hidden file in your home directory.
You can set the PATH environment variable so that Python 3.5 appears first in the search order; like this PATH='/path/to/your/python3.5/directory':$PATH
You can set a local alias in your shell, so that the python command points to Python 3.5, like this alias python="/path/to/the/file/python3.5"
Once you set the above, make sure you restart the terminal application.
If you download the installer form python.org; it will set these environment variables for you.
Also, if you use a utility like brew it will set the shell up correctly for you.
This will ensure that the shell environment will point to version of Python you want; however this does not affect applications that run on the desktop as most of them don't read the shell environment variables.
So, if you are using and IDE like PyCharm you'll still have to manually set the correct Python version for your projects.
This may seem like a lot of workaround, but on most Linux systems and even on OSX, Python is a core part of the system and it is used by some utilities, therefore it is always dangerous to rip and replace the version of Python that came with the operating system. | 1 | 1 | 0 | just a quick question. I have Python 2.7 on my mac by default. I have also installed 3.4 and use it more than 2.7, but would like to upgrade to the new 3.5. Should I remove 3.4 and just lay down a new install of 3.5, or is there a way to just update it. All my searches just talk about upgrading from 2.7 to 3x. I am just concerned about messing one of the installs up. Any input would be greatly appreciated.
Cheers. | Updating Python 3.4x to 3.5 | 0.197375 | 0 | 0 | 3,553 |
36,630,260 | 2016-04-14T17:40:00.000 | 1 | 0 | 0 | 0 | python,apache-spark,pyspark | 36,644,976 | 2 | false | 0 | 0 | Spark have a function takeSample which can merge two RDD in to an RDD. | 1 | 0 | 1 | I have two python arrays of the same length. They are generated from reading two separate text files. One represents labels; let it be called "labelArray". The other is an array of data arrays; let it be called "dataArray". I want to turn them into an RDD object of LabeledPoint. How can I do this? | RDD from label Array and data Array in python/spark | 0.099668 | 0 | 0 | 1,437 |
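For the question above, a common way to pair a label array with a data array is to parallelize the zipped pairs and map them to LabeledPoint; this is a hedged sketch that assumes an already-created SparkContext named sc, and the small lists stand in for the real labelArray/dataArray.
# Sketch: build an RDD of LabeledPoint from two equal-length Python lists.
from pyspark.mllib.regression import LabeledPoint

labels = [0.0, 1.0, 0.0]                      # stand-in for labelArray
data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # stand-in for dataArray

pairs = sc.parallelize(list(zip(labels, data)))            # assumes `sc` (SparkContext) already exists
labeled = pairs.map(lambda lv: LabeledPoint(lv[0], lv[1])) # label first, feature vector second
print(labeled.take(2))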
36,630,393 | 2016-04-14T17:47:00.000 | 0 | 0 | 1 | 0 | python-3.x | 36,747,743 | 3 | false | 0 | 0 | I have found an answer on here:
positions = [ i+1 for i in range(len(result)) if each == result[i]]
Which works well. | 1 | 0 | 0 | I am trying to work out how I can compare a list of words against a string and report back the word number from list one when they match. I can easily get the unique list of words from a sentence - just removing duplicates, and with enumerate I can get a value for each word, so Mary had a little lamb becomes 1, Mary, 2, had, 3, a etc. But I cannot work out how to then search the original list again and replace each word with its number value (so it becomes 1 2 3 etc).
Any ideas greatly received! | Matching the value of a word in a list with the place value of another list | 0 | 0 | 0 | 36 |
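As a complement to the snippet quoted in the answer above, here is a hedged sketch of the whole mapping the asker describes (number the unique words in order of first appearance, then replace every word in the sentence with its number); the variable names and the sample sentence are illustrative only.
# Sketch: replace each word in a sentence with the 1-based index of its first occurrence.
sentence = "Mary had a little lamb little lamb"
words = sentence.split()

numbering = {}                       # word -> 1-based number, in order of first appearance
for word in words:
    if word not in numbering:
        numbering[word] = len(numbering) + 1

encoded = [numbering[word] for word in words]
print(encoded)   # [1, 2, 3, 4, 5, 4, 5]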
36,630,739 | 2016-04-14T18:06:00.000 | 2 | 0 | 1 | 0 | python,perl,file-processing,bigdata | 36,631,145 | 3 | false | 0 | 0 | You can keep a hash (an associative array) mapping column values to open output file handles, and open an output file only if none is open for that column value yet.
This will be good enough unless you'll hit your limit on maximum number of open files. (Use ulimit -Hn to see it in bash.) If you do, either you need to close file handles (e.g. a random one, or the one that hasn't been used the longest, which is easy to keep track of in another hash), or you need to do multiple passes across the input, processing only as many column values as you can open output files in one pass and skipping them in future passes. | 1 | 1 | 0 | I have a 10 billion line tab-delimited file that I want to split into 5,000 sub-files, based on a column (first column). How can I do this efficiently in Perl or Python?
This has been asked here before but all the approaches open a file for each row read, or they put all the data in memory. | split 10 billion line file into 5,000 files by column value in Perl or Python | 0.132549 | 0 | 0 | 292 |
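A hedged Python sketch of the handle-cache approach described in the answer above: a single pass over the input, at most a bounded number of output files open at once, and the least-recently-used handle closed (and later reopened in append mode) when the limit is reached. The file names, delimiter and limit are assumptions.
# Sketch: split a tab-delimited file into one output file per first-column value,
# keeping only a bounded number of output handles open at any time.
import collections

MAX_OPEN = 1000                       # keep this below the OS limit reported by `ulimit -n`
handles = collections.OrderedDict()   # column value -> open file handle, in LRU order

def get_handle(key):
    if key in handles:
        handles.move_to_end(key)      # mark as most recently used
        return handles[key]
    if len(handles) >= MAX_OPEN:
        _, oldest = handles.popitem(last=False)   # evict the least recently used handle
        oldest.close()
    fh = open("out_%s.txt" % key, "a")            # append mode, so reopened keys keep earlier rows
    handles[key] = fh
    return fh

with open("input.tsv") as src:
    for line in src:
        key = line.split("\t", 1)[0]
        get_handle(key).write(line)

for fh in handles.values():
    fh.close()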
36,632,215 | 2016-04-14T19:26:00.000 | 0 | 0 | 1 | 0 | python-2.7,powershell,pip,ipython,jupyter-notebook | 37,302,528 | 2 | true | 0 | 0 | for me it ended up being a path error
somehow my path changed
once I added 'Continuum\Anaconda\Scripts' to my path it fixed the issue | 1 | 0 | 0 | I used to be able to open IPython Notebook by opening powershell and typing in Ipython Notebook
At some point a few months ago that didn't work and I would get the error 'IPython: The term 'IPython' is not recognized as the name of a cdmlet...'
Now to open IPython Notebook through the powershell or to use pip install I have to open Powershell and type python -m IPython notebook and python -m pip install #package
Why did this happen and how can I fix this? | python -m to open IPython Notebook from Windows Powershell | 1.2 | 0 | 0 | 2,379 |
36,639,002 | 2016-04-15T05:37:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,g++,theano | 38,653,850 | 1 | false | 0 | 0 | On Windows, you need to install mingw to support g++. Usually, it is advisable to use Anaconda distribution to install Python. Theano works with Python3.4 or older versions. You can use conda install command to install mingw. | 1 | 0 | 1 | I am working on some neural networks. But my dataset is same 95 features and about 120 datasets.
So while importing Theano I get the warning that g++ was not detected and that it will degrade performance.
Will this affect even a small dataset?
I will have 2-3 hidden layers.
The shape of my neural network will be (95, 200, 200, 4).
I hope to hear from you. | G++ not detected | 0 | 0 | 0 | 112
36,641,083 | 2016-04-15T07:40:00.000 | 1 | 0 | 0 | 0 | python,django,asynchronous | 36,641,177 | 2 | false | 1 | 0 | Async page refresh can be done only in front-end with javascript. Django will only render the template or return the HTTP response
P.S: You can do page refresh via backend code(Django) or any | 1 | 0 | 0 | I have a page that displays last 10 requests to server. Requests are models that are saved by the middleware.
I need to update the page with new requests, without refreshing.
I know I can use ajax and ping the server periodically, but surely there should be a better approach. | How to implement async page refresh in django? | 0.099668 | 0 | 0 | 914
36,643,784 | 2016-04-15T09:49:00.000 | 2 | 0 | 0 | 0 | python-2.7,wxpython | 36,649,037 | 2 | true | 0 | 1 | Set the style on the text ctrl as
TE_PASSWORD: The text will be echoed as asterisks. | 1 | 0 | 0 | I want to add a simple password check to a Python/wxPython/MySQL application to confirm that the user wants to carry out a particular action. So far I have a DialogBox with a textCtrl for password input and Buttons for Submit or Cancel. At the moment the password appears in the textCtrl. I would prefer this to appear as asterisks whilst the user input is captured but cannot figure out how to do this. How could I implement this? | Creating Simple Password Check In wxPython | 1.2 | 0 | 0 | 493 |
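A minimal sketch of the accepted tip above, assuming a plain wxPython dialog; the widget labels, layout and the hard-coded comparison are placeholders, not the asker's actual application code.
# Sketch: a password entry whose characters are echoed as asterisks.
import wx

app = wx.App(False)
dlg = wx.Dialog(None, title="Confirm action")
pwd = wx.TextCtrl(dlg, style=wx.TE_PASSWORD)    # the key part: TE_PASSWORD masks the input
ok = wx.Button(dlg, wx.ID_OK, "Submit")
cancel = wx.Button(dlg, wx.ID_CANCEL, "Cancel")

sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(pwd, 0, wx.ALL | wx.EXPAND, 10)
sizer.Add(ok, 0, wx.ALL, 5)
sizer.Add(cancel, 0, wx.ALL, 5)
dlg.SetSizerAndFit(sizer)

if dlg.ShowModal() == wx.ID_OK:
    print("entered:", pwd.GetValue())   # compare against the stored password here
dlg.Destroy()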
36,644,144 | 2016-04-15T10:03:00.000 | 2 | 0 | 1 | 0 | python,spyder | 44,979,996 | 6 | false | 0 | 0 | Unblock multi-line comment
Ctrl+5
Multi-line comment
Ctrl+4
NOTE: For my version of Spyder (3.1.4) if I highlighted the entire multi-line comment and used Ctrl+5 the block remained commented out. Only after highlighting a small portion of the multi-line comment did Ctrl+5 work. | 5 | 72 | 0 | I recently changed from the Enthought Canopy Python distribution to Anaconda, which includes the Spyder IDE.
In Canopy's code editor, it was possible to comment and uncomment lines of code by pressing the "Cntrl+/" shortcut key sequence. In Spyder I was unable to find an equivalent shortcut key in the introductory tutorial.
Is there a shortcut key for commenting and uncommenting code in Spyder? | Shortcut key for commenting out lines of Python code in Spyder | 0.066568 | 0 | 0 | 280,328 |
36,644,144 | 2016-04-15T10:03:00.000 | -7 | 0 | 1 | 0 | python,spyder | 43,246,130 | 6 | false | 0 | 0 | on Windows F9 to run single line
Select the lines which you want to run on console and press F9 button for multi line | 5 | 72 | 0 | I recently changed from the Enthought Canopy Python distribution to Anaconda, which includes the Spyder IDE.
In Canopy's code editor, it was possible to comment and uncomment lines of code by pressing the "Cntrl+/" shortcut key sequence. In Spyder I was unable to find an equivalent shortcut key in the introductory tutorial.
Is there a shortcut key for commenting and uncommenting code in Spyder? | Shortcut key for commenting out lines of Python code in Spyder | -1 | 0 | 0 | 280,328 |
36,644,144 | 2016-04-15T10:03:00.000 | 3 | 0 | 1 | 0 | python,spyder | 49,548,882 | 6 | false | 0 | 0 | Yes, there is a shortcut for commenting out lines in Python 3.6 (Spyder).
For Single Line Comment, you can use Ctrl+1. It will look like this #This is a sample piece of code
For multi-line comments, you can use Ctrl+4. It will look like this
#=============
# your piece of code
# some more code
#============= | 5 | 72 | 0 | I recently changed from the Enthought Canopy Python distribution to Anaconda, which includes the Spyder IDE.
In Canopy's code editor, it was possible to comment and uncomment lines of code by pressing the "Cntrl+/" shortcut key sequence. In Spyder I was unable to find an equivalent shortcut key in the introductory tutorial.
Is there a shortcut key for commenting and uncommenting code in Spyder? | Shortcut key for commenting out lines of Python code in Spyder | 0.099668 | 0 | 0 | 280,328 |
36,644,144 | 2016-04-15T10:03:00.000 | 167 | 0 | 1 | 0 | python,spyder | 36,644,714 | 6 | true | 0 | 0 | Single line comment
Ctrl + 1
Multi-line comment select the lines to be commented
Ctrl + 4
Unblock Multi-line comment
Ctrl + 5 | 5 | 72 | 0 | I recently changed from the Enthought Canopy Python distribution to Anaconda, which includes the Spyder IDE.
In Canopy's code editor, it was possible to comment and uncomment lines of code by pressing the "Cntrl+/" shortcut key sequence. In Spyder I was unable to find an equivalent shortcut key in the introductory tutorial.
Is there a shortcut key for commenting and uncommenting code in Spyder? | Shortcut key for commenting out lines of Python code in Spyder | 1.2 | 0 | 0 | 280,328 |
36,644,144 | 2016-04-15T10:03:00.000 | 5 | 0 | 1 | 0 | python,spyder | 62,369,970 | 6 | false | 0 | 0 | While the other answers got it right when it comes to add comments, in my case only the following worked.
Multi-line comment
select the lines to be commented + Ctrl + 4
Multi-line uncomment
select the lines to be uncommented + Ctrl + 1 | 5 | 72 | 0 | I recently changed from the Enthought Canopy Python distribution to Anaconda, which includes the Spyder IDE.
In Canopy's code editor, it was possible to comment and uncomment lines of code by pressing the "Cntrl+/" shortcut key sequence. In Spyder I was unable to find an equivalent shortcut key in the introductory tutorial.
Is there a shortcut key for commenting and uncommenting code in Spyder? | Shortcut key for commenting out lines of Python code in Spyder | 0.16514 | 0 | 0 | 280,328 |
36,645,076 | 2016-04-15T10:46:00.000 | 3 | 0 | 0 | 0 | python,django | 52,244,829 | 3 | true | 1 | 0 | I know it has been a while since I asked the question. I finally fixed this by changing hosts. I went for DigitalOcean (created a new droplet), which supports WSGI. I deployed the app using gunicorn (application server) and nginx (proxy server).
It is not a good idea to deploy a Django app on shared hosting as you will be limited especially installing the required packages. | 2 | 5 | 0 | I am trying to deploy a django app on hostgator shared hosting. I followed the hostgator django installation wiki and i deployed my app. The issue is that i am getting a 500 error internal page when entering the site url in the browser. I contacted the support team but could not provide enough info on troubleshooting the error Premature end of script headers: fcgi.This was the error found on the server error log.
I have installed Django 1.9.5 on the server, and from the Django documentation it does not support FastCGI.
So my question: could the 500 error be caused by the fact that I am running Django 1.9.5 on the server and it does not support FastCGI? If so, do I need to install a lower version of Django to support the FastCGI supported by HostGator shared hosting?
First I thought the error was caused by my .htaccess file, but it has no issue, from what I heard from the support team.
Any Leads to how i can get the app up and running will be appreciated. This is my first time with django app deployment. Thank you in advance | Django app deployment on shared hosting | 1.2 | 0 | 0 | 6,154 |
36,645,076 | 2016-04-15T10:46:00.000 | 0 | 0 | 0 | 0 | python,django | 36,646,426 | 3 | false | 1 | 0 | As you say, Django 1.9 does not support FastCGI.
You could try using Django 1.8, which is a long term support release and does still support FastCGI.
Or you could switch to a different host that supports deploying Django 1.9 with wsgi. | 2 | 5 | 0 | I am trying to deploy a django app on hostgator shared hosting. I followed the hostgator django installation wiki and i deployed my app. The issue is that i am getting a 500 error internal page when entering the site url in the browser. I contacted the support team but could not provide enough info on troubleshooting the error Premature end of script headers: fcgi.This was the error found on the server error log.
I have installed Django 1.9.5 on the server, and from the Django documentation it does not support FastCGI.
So my question: could the 500 error be caused by the fact that I am running Django 1.9.5 on the server and it does not support FastCGI? If so, do I need to install a lower version of Django to support the FastCGI supported by HostGator shared hosting?
First I thought the error was caused by my .htaccess file, but it has no issue, from what I heard from the support team.
Any Leads to how i can get the app up and running will be appreciated. This is my first time with django app deployment. Thank you in advance | Django app deployment on shared hosting | 0 | 0 | 0 | 6,154 |
36,645,478 | 2016-04-15T11:06:00.000 | 3 | 0 | 0 | 0 | python,directory,pyqt,qfiledialog,getopenfilename | 36,650,012 | 1 | false | 0 | 1 | It turns out that the filename which is accessed by QFileDialog.getOpenFileName() is actually not only the filename but the whole path.. | 1 | 2 | 0 | I'm searching for a way to get the path of the directory of the file that I have chosen by QFileDialog.getOpenFileName().
I know that you can access it by os.path.dirname(os.path.realpath(filename), but I'm searching for a better way because I need to work in this directory.
I don't really understand why you can access the file by open(filename, 'r') though your current working directory (when typing print(os.getcwd()) is not the directory of the file.
Maybe there is a way by accessing something like the current working directory of the Qt.Application, but I had no success..
Also I have functions where you need arg1 = directory and arg2 = filename1 (in the directory) as arguments. Funnily enough they suddenly seem to work with just(!) arg1 = 'C:' as directory and arg2 = filename2 when filename2 is the file I've accessed by QFileDialog.getOpenFileName().
I'm happy about any explanation! | pyqt QFileDialog.getOpenFileName() get path of the directory of the file | 0.53705 | 0 | 0 | 5,334 |
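For the question above, a hedged PyQt sketch: it simply applies os.path.dirname to the value returned by getOpenFileName, which (as the accepted answer notes) already contains the full path. A PyQt4-style return of a single string is assumed; PyQt5's getOpenFileName instead returns a (filename, filter) tuple.
# Sketch: get the directory of a file chosen in a file dialog (PyQt4-style API assumed).
import os
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
filename = QtGui.QFileDialog.getOpenFileName(None, "Pick a file")
directory = os.path.dirname(str(filename))   # the returned "filename" is really the full path
print("file:", filename)
print("directory:", directory)
# os.chdir(directory) would make it the working directory for relative file access.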
36,647,169 | 2016-04-15T12:27:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,deep-learning,nolearn | 36,647,234 | 1 | false | 0 | 0 | Try increasing the number of hidden units and the learning rate. The power of neural networks comes from the hidden layers. Depending on the size of your dataset, the number of hidden layers can go upto a few thousands. Also, please elaborate on the kind, and number of features you're using. If the feature set is small, you're better off using SVMs and RandomForests instead of neural networks. | 1 | 0 | 1 | I am currently working on some project related to machine learning.
I extracted some features from the object.
So I trained and tested those features with NB, SVM and other classification algorithms and got results of about 70 to 80%.
When I train the same features with neural networks using nolearn.dbn and then test them, I get only about 25% correctly classified. I had 2 hidden layers.
I still don't understand what is wrong with the neural networks.
I hope to get some help.
Thanks | Python: Deep neural networks | 0.197375 | 0 | 0 | 473 |
36,655,812 | 2016-04-15T19:56:00.000 | 0 | 1 | 0 | 0 | python,apache,cx-oracle | 36,711,130 | 2 | true | 0 | 0 | I was able to solve this with the help of Apache's mod_env module, by natively passing the environment variables to Apache. What I did to achieve this was
--> define the my required env variables in the file /etc/sysconfig/httpd like
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/folder_with_library/
export LD_LIBRARY_PATH
--> Then passing this variable in the httpd.conf file like
PassEnv LD_LIBRARY_PATH
Hope this helps | 1 | 0 | 0 | Need help from someone who has got Apache , Python and cx_Oracle (Lib to run Oracle database using python) .
Even after setting all the required variables still getting the error ": libclntsh.so.11.1: cannot open shared object file: No such file or directory" when running python script .
The same script works perfectly fine when running it from cli.
My working environment is RHEL 6.4
Any help in this matter would be appreciated, from those who got this working in their environment.
Thanks in advance | libclntsh.so.11.1: cannot open shared object file python error while running CGIusing cx_Oracle | 1.2 | 1 | 0 | 2,034
36,656,426 | 2016-04-15T20:38:00.000 | 0 | 0 | 1 | 0 | python,regex | 36,656,842 | 1 | true | 0 | 0 | Does ordering of alternatives matter for speed/choosing between alternatives?
Yes, it does. Alternative groups are analyzed from left to right, and that happens at each position in the input string.
Thus, putting the most common matches at the start is already a boost.
When speaking about unanchored alternation lists in NFA regex (as in Python), it is important that alternatives that can match at the same location should be ordered in such a way that the longest comes first because otherwise a shorter alternative will always "win", and you may end up with xxxone when matching with some|someone -> xxx wanting to get xxx from someone. | 1 | 2 | 0 | I am trying to match (and remove) any of 4000 expressions.
If I put the most common matches at the front will that speed matching (or is it undefined)
although typically exclusive, I sometimes have default cases: 'ax*|a(0-9)|', ie 'a', but I want a greedy match if possible. is it sufficient to reorder 'a(0-9)|ax*' or is this not guaranteed by the specification? | python regular expressions does ordering of alternatives matter for speed/choosing between alternatives | 1.2 | 0 | 0 | 120 |
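A small hedged demonstration of the ordering point made in the accepted answer above, using Python's re module on a made-up string.
# Sketch: with NFA-style alternation the leftmost alternative that can match wins,
# so the longer alternative must come first to get the "greedy" behaviour.
import re

text = "xxxsomeone"
print(re.sub(r"some|someone", "", text))   # prints "xxxone": the shorter "some" matched first
print(re.sub(r"someone|some", "", text))   # prints "xxx": the longer "someone" is tried first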
36,657,049 | 2016-04-15T21:23:00.000 | 2 | 1 | 0 | 0 | python,django,tastypie,throttling | 36,657,503 | 2 | true | 1 | 0 | Throttle key is based on authentication.get_identifier function.
Default implementation of this function returns a combination of IP address and hostname.
Edit
Other implementations (i.e. BasicAuthentication, ApiKeyAuthentication) returns username of the currently logged user or nouser string. | 2 | 3 | 0 | I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object? | TastyPie throttling - by user or by IP? | 1.2 | 0 | 0 | 203 |
36,657,049 | 2016-04-15T21:23:00.000 | 2 | 1 | 0 | 0 | python,django,tastypie,throttling | 36,659,688 | 2 | false | 1 | 0 | Tomasz is mostly right, but some of the authentication classes have a get_identifier method that returns the username of the currently logged in user, otherwise 'nouser'. I plan on standardizing this soon. | 2 | 3 | 0 | I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object? | TastyPie throttling - by user or by IP? | 0.197375 | 0 | 0 | 203 |
36,658,093 | 2016-04-15T22:53:00.000 | 0 | 1 | 0 | 0 | python,ssh,paramiko | 36,700,655 | 1 | false | 0 | 0 | Workaround from another Stack Overflow issue by putting the following cipher/MAC/kex settings into sshd_config:
Ciphers [email protected],[email protected],aes256-ctr,aes128-ctr
MACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 | 1 | 0 | 0 | I"m using python 2.7 and paramiko 1.16
While attempting an SSH to el capitan, paramiko throws the exception no acceptable kex algorithm. I tried setting kex, cyphers in sshd_config, but sshd can't be restarted for some reasons. I tried some client side fixes, but upgrading paramiko did not fix the problem. | paramiko no acceptable kex algorithm while ssh to el capitan | 0 | 0 | 1 | 677 |
36,661,233 | 2016-04-16T07:07:00.000 | 0 | 0 | 0 | 0 | python,web-services,security | 36,664,840 | 1 | false | 1 | 0 | If you want the service itself to run on dedicated hardware within the network and have the webserver itself hosted on the DMZ you'll have to use a proxy. You can use Nginx (for instance) to forward the port in which you're exposing the webserver to the port opened by flask on the dedicated machine. You'll have to configure your firewall to forward that port so that it's accessible to the machine in the DMZ.
I'd provide the configuration for your firewall and Nginx but it really depends on the parameters of your network and the service you want to run. | 1 | 0 | 0 | I have written a python library and a web service using flask to expose functions from that library. The library should run on computer A (to do its processing). In our IT setup, web servers will run on a DMZ (computer B). This being the case, if the flask web service directly imports the library and runs a function, it would be running on the DMZ, rather than the intended computer? How do I design the program such that the library executes on the intended hardware, but the web service is hosted by the webserver on the DMZ? | Python web service and DMZ | 0 | 0 | 0 | 301 |
36,662,393 | 2016-04-16T09:24:00.000 | 6 | 1 | 1 | 0 | python,cron | 36,662,797 | 3 | true | 0 | 0 | You should probably use cron if two conditions are met;
It is available on all platforms your code needs to run on.
Starting a script on a set time is sufficient for your needs.
Mirroring these are two reasons to build your own solution:
Your program needs to be portable across many operating systems, including those that don't have cron available. (like ms-windows)
You need to schedule things in a way other than a set start time, e.g. on a set interval, or if some other condition is met. | 1 | 7 | 0 | If you want to run your python script, let's say every day at 6 pm, is it better to go with a crontab entry or with an Advanced Python Scheduler solution with regard to power, memory, cpu ... consumption?
In my eyes doing a cron job is therefore better, because I do not see the advantage of permanently running an Advanced Python Scheduler. | Cron job vs Advanced Python Scheduler | 1.2 | 0 | 0 | 8,160
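For comparison with a one-line crontab entry, here is a hedged sketch of the APScheduler side of the question: a long-running process that fires a job every day at 6 pm. The job body is a placeholder and the 3.x APScheduler API is assumed.
# Sketch: run a function every day at 18:00 with APScheduler (v3.x API assumed).
from apscheduler.schedulers.blocking import BlockingScheduler

def job():
    print("running the daily task")   # placeholder for the real work

scheduler = BlockingScheduler()
scheduler.add_job(job, "cron", hour=18, minute=0)   # cron-style trigger: every day at 6 pm
scheduler.start()   # blocks; the process must stay alive, unlike a crontab entry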
36,665,110 | 2016-04-16T13:59:00.000 | 4 | 0 | 1 | 0 | python,operators | 36,665,192 | 2 | false | 0 | 0 | No difference at all. You might want to use operator.abs with functions like itertools.accumulate, just like you use operator.add for +. There is a performance difference though.
For example, using operator.add is twice as fast as + (Beazley). | 1 | 4 | 0 | In python what is the difference between :
abs(a) and operator.abs(a)
They are the very same and they work alike. If they are the very same, then why were two separate functions made that do the same thing?
If there is some specific functionality for any one of it - please do explain it. | Python: what's the difference - abs and operator.abs | 0.379949 | 0 | 0 | 408 |
36,665,448 | 2016-04-16T14:33:00.000 | 1 | 0 | 1 | 0 | python,exception-handling | 36,665,546 | 3 | false | 0 | 0 | let the exception happen organically.
It's up to the caller to decide what to do, so give the caller the information it needs - the exception - and let it decide. Passing return codes just loses information, and is error-prone to boot. | 3 | 0 | 0 | If I have a function that reads a file and returns a list of values, but the file doesn't exist, or have read permissions set for the process, or any other kind of exception, is it better to:
let the exception happen organically
Try/catch the exception and print the error as a string, and return -1 or some other failure int
Try/catch the exception and print nothing, but return -1
Try/catch the exception and return the empty list (which is misleading)
Something else
In this case we are speaking of Python if it matters. | Should a function that normally returns list return a negative int on an exception? | 0.066568 | 0 | 0 | 59 |
36,665,448 | 2016-04-16T14:33:00.000 | 2 | 0 | 1 | 0 | python,exception-handling | 36,665,556 | 3 | true | 0 | 0 | The answer is not really Python specific. A general rule about exceptions is that you should only catch them in the context where something can be done to recover from it: e.g. try again every 10 sec up to 3 times, where it makes sense, or ask the user what they want to do: retry, abort?
If the function catching the exception cannot recover sensibly, it is better to let the exception "bubble up". Sometimes, if the context catching the exception can add additional useful information, you can catch the exception, add meaningful information and re-throw a different, more meaningful exception. It really depends on what you are trying to do.
Note that returning an invalid value is a different paradigm of error handling and it makes sense in some situations, but in general, a failure like you describe is better handled through exceptions. | 3 | 0 | 0 | If I have a function that reads a file and returns a list of values, but the file doesn't exist, or have read permissions set for the process, or any other kind of exception, is it better to:
let the exception happen organically
Try/catch the exception and print the error as a string, and return -1 or some other failure int
Try/catch the exception and print nothing, but return -1
Try/catch the exception and return the empty list (which is misleading)
Something else
In this case we are speaking of Python if it matters. | Should a function that normally returns list return a negative int on an exception? | 1.2 | 0 | 0 | 59 |
36,665,448 | 2016-04-16T14:33:00.000 | 1 | 0 | 1 | 0 | python,exception-handling | 36,665,644 | 3 | false | 0 | 0 | If you can guarantee that the function will always be able to completely deal correctly with the exception then handle the exception in the function itself and return None on failure, since an empty list may be a valid return, and None is more Pythonic than an integer return code.
However, it's highly probable that you can't make such a guarantee, so you should allow the exception to bubble up to the caller. Or you may compromise by catching the exception, performing some analysis on it (possibly printing a warning message) and then re-raising the exception, or raising a modified version of it with extra error/warning information attached.
This is in accordance with the ancient programmer proverb:
Never test for an error condition that you don't know how to handle.
:) | 3 | 0 | 0 | If I have a function that reads a file and returns a list of values, but the file doesn't exist, or have read permissions set for the process, or any other kind of exception, is it better to:
let the exception happen organically
Try/catch the exception and print the error as a string, and return -1 or some other failure int
Try/catch the exception and print nothing, but return -1
Try/catch the exception and return the empty list (which is misleading)
Something else
In this case we are speaking of Python if it matters. | Should a function that normally returns list return a negative int on an exception? | 0.066568 | 0 | 0 | 59 |
36,665,977 | 2016-04-16T15:22:00.000 | 3 | 0 | 0 | 0 | python,plot,bokeh | 36,686,063 | 1 | true | 0 | 0 | It is not possible to call python functions directly from JavaScript callbacks. The JS callback code is executing in the browser, which is not the same process as a python interpreter. If you want to have user interactions, or tools, or callbacks that trigger real, actual python functions, you will have to use the Bokeh server (this is in fact what it is designed to do). | 1 | 0 | 0 | I would like to call an external python function to get the callcack modification which I cannot using javascript math.
How can I do this? | Python Bokeh callback external python function | 1.2 | 0 | 0 | 842 |
36,667,149 | 2016-04-16T17:08:00.000 | 4 | 0 | 0 | 0 | python,pygame | 36,670,152 | 3 | true | 0 | 1 | Make classes for each one, where each class is a subclass of pygame.Surface, of the same size as the display. Then you could have 3 variable TITLESCREEN, PLAYING, HIGHSCORES and change them on key press. Then you would be able to blit the correct screen to the display. | 2 | 1 | 0 | How would you be able to create multiple screens using Pygame and events performed by the user?
For example, if I had a menu screen with 2 buttons ('Start' and 'Exit') and the user clicked on 'Start', a new screen would appear in the same window with whatever is next in the game. From that screen, a user could click on another button and move on to another screen/ return to the menu, etc. | Making multiple 'game screens' using Pygame | 1.2 | 0 | 0 | 3,094 |
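A hedged sketch of the state-based approach suggested in the accepted answer above: a TITLESCREEN/PLAYING pair of states drawn on the same display and switched on a key press. The window size, colours and keys are arbitrary choices, not values from the question.
# Sketch: switch between "screens" by tracking a state variable in one main loop.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
TITLESCREEN, PLAYING = 0, 1
state = TITLESCREEN

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if state == TITLESCREEN and event.key == pygame.K_RETURN:
                state = PLAYING            # "Start" pressed
            elif state == PLAYING and event.key == pygame.K_ESCAPE:
                state = TITLESCREEN        # back to the menu

    if state == TITLESCREEN:
        screen.fill((0, 0, 80))            # draw the menu screen here
    else:
        screen.fill((0, 80, 0))            # draw the game screen here
    pygame.display.flip()

pygame.quit()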
36,667,149 | 2016-04-16T17:08:00.000 | 0 | 0 | 0 | 0 | python,pygame | 62,853,674 | 3 | false | 0 | 1 | What I did for this was have a literal main loop that checks to see what "Screen" the player wants to go to. Like if they press the exit button, it returns where to go next. The main loop then runs a the new script for the new screen. | 2 | 1 | 0 | How would you be able to create multiple screens using Pygame and events performed by the user?
For example, if I had a menu screen with 2 buttons ('Start' and 'Exit') and the user clicked on 'Start', a new screen would appear in the same window with whatever is next in the game. From that screen, a user could click on another button and move on to another screen/ return to the menu, etc. | Making multiple 'game screens' using Pygame | 0 | 0 | 0 | 3,094 |
36,667,548 | 2016-04-16T17:45:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,pandas,range,series | 68,226,844 | 5 | false | 0 | 0 | try pd.Series([0 for i in range(20)]).
It will create a pd series with 20 rows | 1 | 27 | 1 | I am new to python and have recently learnt to create a series in python using Pandas. I can define a series eg: x = pd.Series([1, 2, 3, 4, 5]) but how to define the series for a range, say 1 to 100 rather than typing all elements from 1 to 100? | How to create a series of numbers using Pandas in Python | 0.039979 | 0 | 0 | 69,375 |
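For the 1-to-100 range asked about above, passing a range straight to the Series constructor is enough; a small hedged example follows.
# Sketch: a pandas Series holding the integers 1..100 without typing them out.
import pandas as pd

s = pd.Series(range(1, 101))   # values 1..100, default integer index 0..99
print(s.head())
print(s.tail())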
36,668,305 | 2016-04-16T18:52:00.000 | 0 | 0 | 1 | 0 | python,pip,pycharm | 36,668,628 | 1 | false | 0 | 0 | Changed to Python 2.7 - all works | 1 | 0 | 0 | I just recently switched to Python and started using Pycharm. I have installed 'rumps' library but when I tried to import it in PyCharm, the app says library not found. What am I doing wrong? | How to synchronise pip libraries with PyCharm | 0 | 0 | 0 | 74 |
36,668,467 | 2016-04-16T19:06:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 60,068,593 | 4 | false | 0 | 0 | If you want to run your code on the second GPU,it assumes that your machine has two GPUs, You can do the following trick.
open Terminal
open tmux by typing tmux (you can install it by sudo apt-get install tmux)
run this line of code in tmux: CUDA_VISIBLE_DEVICES=1 python YourScript.py
Note: By default, tensorflow uses the first GPU, so with above trick, you can run your another code on the second GPU, separately.
Hope it would be helpful!! | 1 | 17 | 1 | Based on the documentation, the default GPU is the one with the lowest id:
If you have more than one GPU in your system, the GPU with the lowest
ID will be selected by default.
Is it possible to change this default from command line or one line of code? | Change default GPU in TensorFlow | 0 | 0 | 0 | 32,954 |
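Two hedged ways to express the same device selection from inside the script rather than via tmux: mask devices through the environment variable before TensorFlow is imported, or pin ops to a device explicitly. A TensorFlow 1.x-style session API is assumed here.
# Sketch: select the second physical GPU from within the Python script.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # must be set before TensorFlow initialises the GPUs

import tensorflow as tf

# Alternatively, pin specific ops to a device (TF 1.x style). After the masking above,
# "/gpu:0" refers to the only visible GPU, which is physical GPU 1.
with tf.device("/gpu:0"):
    a = tf.constant([1.0, 2.0])
    b = a * 2

with tf.Session() as sess:
    print(sess.run(b))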
36,668,587 | 2016-04-16T19:17:00.000 | 1 | 0 | 0 | 0 | python-2.7,ubuntu,emacs | 36,672,978 | 1 | false | 0 | 1 | You need to start the interpreter first with C-c C-p (run-python). Then you can C-c C-c (python-shell-send-buffer) to send the entire buffer to the python process or even C-c C-r (python-sheel-send-region) to only send the selected region to the python process. See more keybindings for your current buffer with C-h m (describe-mode). | 1 | 0 | 0 | I am new to using emacs as python ide.
I am using Emacs 24.5.1 with its default python mode.
I am writing a game with pygame and pyganim modules, but i am not able to run the code and test it because sometimes when i hit C-c C-c for eval buffer at the down window (i think it calls mini buffer?) i see only Sent: import pygame... (if i have imported pygame for example) and nothing more happens. How can i evaluate the whole code and why is this happening? | Emacs+python C-c C-c not working | 0.197375 | 0 | 0 | 459 |
36,669,500 | 2016-04-16T20:42:00.000 | 0 | 0 | 0 | 1 | python,azure,web-applications,azure-webjobs | 36,669,596 | 2 | false | 1 | 0 | You would need to provide some more information about what kind of interface your web app exposes. Does it only handle normal HTTP1 requests or does it have a web socket or HTTP2 type interface? If it has only HTTP1 requests that it can handle then you just need to make multiple requests or try and do long polling. Otherwise you need to connect with a web socket and stream the data over a normal socket connection. | 2 | 0 | 0 | I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas ? | Streaming values in a python script to a wep app | 0 | 0 | 0 | 63 |
36,669,500 | 2016-04-16T20:42:00.000 | 1 | 0 | 0 | 1 | python,azure,web-applications,azure-webjobs | 36,671,291 | 2 | true | 1 | 0 | You have two main options:
You can have the WebJobs write the values to a database or to Azure Storage (e.g. a queue), and have the Web App read them from there.
Or if the WebJob and App are in the same Web App, you can use the file system. e.g. have the WebJob write things into %home%\data\SomeFolderYouChoose, and have the Web App read from the same place. | 2 | 0 | 0 | I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas ? | Streaming values in a python script to a wep app | 1.2 | 0 | 0 | 63 |
36,672,772 | 2016-04-17T04:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.5 | 36,672,789 | 4 | false | 0 | 0 | You can calculate x = 1/16; and then calculate y = 1/x. Hope it helps | 1 | 0 | 0 | what I need is that I ask the user to input something like 1/2
ex: i=input("Input the half; if one-sixteenth enter 1/16" )
what I need is to take 16 out of 1/16 and use it for calculations.
in math we can easily use 1/(1/16), but in python I get a data type error when trying to assign and do the calculations using the fractional value like this,
Any help is highly appreciated | How to convert 1/2 to 2 in python 3.5 | 0 | 0 | 0 | 153
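A hedged sketch of one way to get the 16 out of a "1/16"-style input, using the fractions module so that "1/2" gives 2 and "1/16" gives 16; the prompt text is a placeholder.
# Sketch: turn an input such as "1/16" into the number 16 (i.e. 1 / (1/16)).
from fractions import Fraction

text = input("Input the fraction, e.g. 1/16: ")   # e.g. "1/16"
value = Fraction(text)                            # Fraction(1, 16)
inverse = 1 / value                               # Fraction(16, 1)
print(float(inverse))                             # 16.0, ready for further calculations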
36,673,170 | 2016-04-17T05:55:00.000 | 1 | 0 | 1 | 1 | python,python-3.x,makefile,installation | 36,673,452 | 1 | false | 0 | 0 | If you don't want to copy the binaries you built into a shared location for system-wide use, you should not make install at all. If the build was successful, it will have produced binaries you can run. You may need to set up an environment for making them use local run-time files instead of the system-wide ones, but this is a common enough requirement for developers that it will often be documented in a README or similar (though as always when dealing with development sources, be prepared that it might not be as meticulously kept up to date as end-user documentation in a released version). | 1 | 1 | 0 | I just downloaded Python sources, unpacked them to /usr/local/src/Python-3.5.1/, run ./configure and make there. Now, according to documentation, I should run make install.
But I don't want to install it somewhere in common system folders, create any links, change or add environment variables, doing anything outside this folder. In other words, I want it to be portable. How do I do it? Will /usr/local/src/Python-3.5.1/python get-pip.py install Pip to /usr/local/src/Python-3.5.1/Lib/site-packages/? Will /usr/local/src/Python-3.5.1/python work properly?
make altinstall, as I understand, still creates links what is not desired. Is it correct that it creates symbolic links as well but simply doesn't touch /usr/bin/python and man?
Probably, I should do ./configure prefix=some/private/path and just make and make install but I still wonder if it's possible to use Python make install. | Build and use Python without make install | 0.197375 | 0 | 0 | 1,409 |
36,673,670 | 2016-04-17T07:07:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 36,673,729 | 1 | true | 1 | 0 | Upgrading Python to 2.7.8 or later versions fixed the issue.
EDIT:
Also check if you are using google app engine SDK 1.8.1 or later version. As of version SDK 1.8.1 the cacerts.txt has been renamed to urlfetch_cacerts.txt. You can try removing cacerts.txt file to fix the problem. | 1 | 0 | 0 | I tried deploying python code using google app engine.
But I got Error Below:
certificate verify failed
I had included proxy certificate in urlfetch_cacerts.py and enabled 'validate_certificate' in urlfetch_stub.py by _API_CALL_VALIDATE_CERTIFICATE_DEFAULT = True.But I still get the error..
Can you suggest any solution?
Thanks in advance. | Certificate Error while Deploying python code in Google App Engine | 1.2 | 0 | 0 | 97 |
36,680,422 | 2016-04-17T18:23:00.000 | 5 | 0 | 1 | 1 | eclipse,python-3.x,cython,pydev | 37,348,293 | 9 | false | 0 | 0 | simply copy all the command "/usr/bin/python3.5" "/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py" build_ext --inplace ,
paste in a command line terminal (tipically bash shell) and press return :) | 5 | 18 | 0 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? | Eclipse pydev warning - "Debugger speedups using cython not found." | 0.110656 | 0 | 0 | 20,938 |
36,680,422 | 2016-04-17T18:23:00.000 | 1 | 0 | 1 | 1 | eclipse,python-3.x,cython,pydev | 50,884,895 | 9 | false | 0 | 0 | I faced a similar issue while using Python3.5 and Eclipse Pydev for debugging. when I tried
>"/usr/bin/python3.5" "/home/frodo/eclipse/plugins/org.python.pydev.core_6.3.3.201805051638/pysrc/setup_cython.py" build_ext --inplace
Traceback (most recent call last):
File "/home/frodo/eclipse/plugins/org.python.pydev.core_6.3.3.201805051638/pysrc/setup_cython.py", line 14, in
from setuptools import setup
ImportError: No module named 'setuptools'
Later I fixed the issue with the below commands to install setuptools and the related python3-dev libraries using
sudo apt-get install python3-setuptools python3-dev
and that resolved the issues while executing the above command. | 5 | 18 | 0 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? | Eclipse pydev warning - "Debugger speedups using cython not found." | 0.022219 | 0 | 0 | 20,938 |
36,680,422 | 2016-04-17T18:23:00.000 | 0 | 0 | 1 | 1 | eclipse,python-3.x,cython,pydev | 53,074,490 | 9 | false | 0 | 0 | On ubuntu, I needed to do the following in a terminal:
sudo apt-get install build-essential
sudo apt-get install python3-dev
I then copied the full setup path from the error in eclipse and onto my command prompt:
python "/home/mark/.eclipse/360744347_linux_gtk_x86_64/plugins/org.python.pydev.core_6.5.0.201809011628/pysrc/setup_cython.py" build_ext --inplace
It finally compiled and the error message no longer appears. | 5 | 18 | 0 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? | Eclipse pydev warning - "Debugger speedups using cython not found." | 0 | 0 | 0 | 20,938 |
36,680,422 | 2016-04-17T18:23:00.000 | 0 | 0 | 1 | 1 | eclipse,python-3.x,cython,pydev | 68,233,508 | 9 | false | 0 | 0 | GNU/Linux / Eclipse 2021-06 / Python 3.6.9, cython installed with apt install cython
Localization of setup_cython.py: find <eclipse binary installation> -name setup_cython.py
Execution : python3 "<previous find result>" build_ext --inplace
That's all folks! | 5 | 18 | 0 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? | Eclipse pydev warning - "Debugger speedups using cython not found." | 0 | 0 | 0 | 20,938 |
36,680,422 | 2016-04-17T18:23:00.000 | 13 | 0 | 1 | 1 | eclipse,python-3.x,cython,pydev | 36,691,558 | 9 | false | 0 | 0 | This is as expected. Run"/usr/bin/python3.5" "/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py" build_ext --inplace as it asks to get the debugger accelerations.
(Nb. The error in the comment below was because this answer was missing an initial double quote.)
Ideally run it from within your virtual environment, if you use one, to make sure you run this for the correct Python version. You'll need to run this once per Python version you use. | 5 | 18 | 0 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? | Eclipse pydev warning - "Debugger speedups using cython not found." | 1 | 0 | 0 | 20,938 |
36,680,852 | 2016-04-17T18:58:00.000 | 1 | 0 | 0 | 0 | android,galaxy-tab,qpython,qpython3 | 39,323,926 | 1 | false | 0 | 1 | You can install and run both apps. It is better to install both, because QPython uses Python 2.7 whereas QPython3 uses Python 3.x. The syntax is different between the two versions, and so are the libraries.
There is a pip console in QPython to install libraries, whereas in QPython3 it is not there. | 1 | 0 | 0 | I'd like to try Qpython on a Samsung Galaxy Tab 4 running Android 4.4.2. Which would be better, in terms of stability, functionality, supported libraries, etc., Qpython or Qpython3? Or can they be installed side-by-side?
Thanks. | Qpython or Qpython3? | 0.197375 | 0 | 0 | 2,861 |
36,684,591 | 2016-04-18T02:32:00.000 | 0 | 0 | 1 | 0 | list,python-3.x,search,input | 36,684,739 | 1 | false | 0 | 0 | what about ?
if any(word in 'some one long two phrase three' for word in list_ if word.isalpha() and len(word) == 7) | 1 | 0 | 0 | I have a list of words from a text file and a list of seven letters that are entered as input from a user. How can I search the list of words for words that contain at least five of the seven letters of input? | Search text file for words cont | 0 | 0 | 0 | 29 |
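For the actual requirement in the question above (words containing at least five of the seven input letters), here is a hedged sketch; the word list and the letters are made-up examples, and repeated letters are only counted once.
# Sketch: keep the words that contain at least 5 of the 7 given letters.
words = ["planets", "apple", "lantern", "zebra"]   # e.g. read from the text file
letters = list("plantes")                          # the 7 letters entered by the user

def score(word):
    # count how many distinct input letters appear in the word
    return sum(1 for letter in set(letters) if letter in word)

matches = [w for w in words if score(w) >= 5]
print(matches)   # ['planets', 'lantern'] for the sample data above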
36,685,347 | 2016-04-18T04:11:00.000 | 1 | 0 | 1 | 0 | python,pandas | 36,686,758 | 3 | false | 0 | 0 | you can use df._get_numeric_data() directly. | 1 | 9 | 1 | I have a DataFrame in which a column might have three kinds of values, integers (12331), integers as strings ('345') or some other string ('text').
Is there a way to drop all rows with the last kind of string from the dataframe, and convert the first kind of string into integers? Or at least some way to ignore the rows that cause type errors if I'm summing the column.
This dataframe is from reading a pretty big CSV file (25 GB), so I'd like some solution that would work when reading in chunks. | Ignoring non-numerical string values in pandas dataframe | 0.066568 | 0 | 0 | 17,034 |
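Beyond _get_numeric_data, a hedged per-chunk sketch for the situation described in the question (coerce numeric-looking strings to numbers and drop the rows that are not numeric); the file name, column name and chunk size are assumptions.
# Sketch: read the CSV in chunks, convert the mixed column to numbers and drop bad rows.
import pandas as pd

total = 0
for chunk in pd.read_csv("big_file.csv", chunksize=1000000):
    chunk["value"] = pd.to_numeric(chunk["value"], errors="coerce")  # 'text' -> NaN, '345' -> 345
    chunk = chunk.dropna(subset=["value"])                           # drop rows that were not numeric
    total += chunk["value"].sum()

print(total)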
36,685,482 | 2016-04-18T04:27:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,raspbian,lirc | 49,105,962 | 2 | false | 0 | 0 | Instead of running two instances on my pi I opted to make what is essentially a transistor switchboard (on a breadboard). I call each send command from a script which first runs another script that turns on one of three GPIOs, activating one of three transistors, and thus exposing one of three IR transmitters to receive signal from the single LIRC gpio.
This actually works very well, and I was able to put this together in less time than it takes to read the tutorials on multiple instances and drivers. I needed this ability because I have multiple components which are of the same make and therefore receive some of the same codes such as power. If each device didn't have it's own transmitter I wouldn't be able to control one device without the other non intended device also responding to the command. | 1 | 0 | 0 | I am new for LIRC programming. Right now I am using GPIO 18 for lirc implementation. But I want to implement multiple IR emitters with different GPIOs and working all as different remotes. This is because I have two same brand TVs in different rooms to control. | How to get multiple instances of LIRC working and each using different GPIO with raspberry pi? | 0 | 0 | 0 | 800 |
36,685,953 | 2016-04-18T05:19:00.000 | 0 | 0 | 1 | 0 | python,atom-editor | 36,686,001 | 1 | false | 0 | 0 | Modifying that file will likely get overwritten when the language-html package gets updated. If you want to set a snippet for yourself, just modify your snippets.cson file. Find "Open your Snippets" in the Command Palette to open the file or open ~/.atom/snippets.cson. | 1 | 0 | 0 | How can I remove an annoying snippet from a language pack in Atom? In particular I find the property snippet from the source.python language pack super useful and it triggers everytime I want to add a property decorator.
Would love a way to remove it completely instead of just change the snippet replace content. | Atom-Editor Remove snippit from language pack | 0 | 0 | 0 | 22 |
36,686,392 | 2016-04-18T05:54:00.000 | 1 | 0 | 1 | 0 | python | 36,686,991 | 2 | false | 0 | 0 | So you want to note down how many times each coordinate was visited. You can use a dictionary. A dictionary is a container which holds a particular item for every particular key. Its denoted by curly braces.
Let the dictionary be d.
d = {}
To set an item,
d[(x, y)] = 1
Here (x,y) is a tuple. You are confused between what tuples and dictionaries are.
You'll do this for the first time. For the next time you'll have to increase it by one which you can do by:
d[(x, y)] += 1 | 1 | 0 | 0 | I have a game that when I land on a place, which is a coordinate,it checks it and replaces that coordinate with an object sort out thing. I'm struggling to do it please can any of you help me.
Please can any of you help me.THANKS :) t | How to check if i landed on the same coordinate three times? | 0.099668 | 0 | 0 | 55 |
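Putting the pieces of the answer above together, here is a hedged sketch that counts visits per coordinate in a dictionary and reports when the same coordinate has been landed on three times; the coordinates are made-up values.
# Sketch: count landings per (x, y) coordinate and detect the third visit.
visits = {}   # (x, y) tuple -> number of times landed there

def land(x, y):
    key = (x, y)
    visits[key] = visits.get(key, 0) + 1
    if visits[key] == 3:
        print("Landed on", key, "three times!")

land(2, 5)
land(1, 1)
land(2, 5)
land(2, 5)   # triggers the message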
36,686,661 | 2016-04-18T06:15:00.000 | 0 | 0 | 0 | 0 | python-3.x,selenium,centos6,bamboo | 36,808,047 | 1 | false | 1 | 0 | I solved the problem by changing the task type from a command task to a script task. My understanding is that not all tasks are run in the sequence as they were defined in the job. If this is not the case, then it might be a bug in Bamboo. | 1 | 0 | 0 | I have setup an Atlassian Bamboo deploy plan. One of its steps is to run a command to run automated UI tests written in Selenium for Python. This runs on a headless Centos 6 server.
I had to install the X-server to simulate the existence of a display
I made the following commands run in the system boot so that the X-server is always started when the machine starts
Xvfb :1 -screen 1600x900x16
export DISPLAY=:1
The command task in the deployment plan simply invokes the following
/usr/local/bin/python3.5 .py
The funny thing is that when I run that directly from the command line it works perfect the the UI unit tests work. They start firefox and start dealing with the site.
On the other hand, when this is done via the deployment command I keep getting the error "The browser appears to have exited "
17-Apr-2016 14:18:23 selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details" As if it still does not sense that there is a display.
I even added a task in the deployment job to run X-server again but it came back with error that the server is already running.
This is done on Bamboo version 5.10.3 build 51020.
So, any ideas why it would fail within the deployment job?
Thanks, | Atlassian Bamboo command tasks not running correctly | 0 | 0 | 1 | 837 |
36,687,929 | 2016-04-18T07:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,scikit-learn,text-classification | 36,693,072 | 1 | true | 0 | 0 | As with most supervised learning algorithms, Random Forest Classifiers do not use a similarity measure, they work directly on the feature supplied to them. So decision trees are built based on the terms in your tf-idf vectors.
If you want to use similarity then you will have to compute a similarity matrix for your documents and use this as your features. | 1 | 1 | 1 | I am doing some work in document classification with scikit-learn. For this purpose, I represent my documents in a tf-idf matrix and feed a Random Forest classifier with this information, works perfectly well. I was just wondering which similarity measure is used by the classifier (cosine, euclidean, etc.) and how I can change it. Haven't found any parameters or informatin in the documentation.
Thanks in advance! | similarity measure scikit-learn document classification | 1.2 | 0 | 0 | 350 |
36,694,745 | 2016-04-18T13:00:00.000 | 0 | 1 | 0 | 1 | python,bash,unix,kill-process,diskspace | 36,695,061 | 2 | false | 0 | 0 | If you delete a file which is opened in some processes, it's marked as deleted, but the content remains on disk, so that all processes still can read it. Once all processes close corresponding descriptors (or simply finish), the space will be reclaimed. | 1 | 0 | 0 | I was writing a huge file output.txt (around 10GB) on a server thorugh a python script using the f.write(row) command but because the process was too long I decided to interrupt the program using
kill -9 pid
The problem is that this space is still used on the server when I check with the command
df -h
How can I empty the disk occupied by this buffer that was trying to write the file?
the file output.txt was empty (0 Byte) when I killed the script, but I still deleted it anyway using
rm output.txt
but the space in the disk doesn't become free, I still have 10 GB wasted.. | Delete an unfinished file | 0 | 0 | 0 | 62 |
36,694,973 | 2016-04-18T13:10:00.000 | 0 | 1 | 0 | 0 | java,python,server,client,hessian | 36,695,144 | 1 | true | 1 | 0 | Burlap and Hessian are 2 different (but related) RPC protocols, with Burlap being XML based and Hessian being binary.
They're both also pretty ancient, so if you have an opportunity to use something else, I'd highly recommend it. If not, then you're going to have to find a Burlap lib for Python.
Since it seems that a Burlap lib for Python simply doesn't exist (at least anymore), your best choice is probably to make a small Java proxy that communicates with a more recent protocol with the Python side and in Burlap with the Java server. | 1 | 0 | 0 | I'm trying to connect a burlap java server with a python client but I can't find any detail whatsoever regarding how to use burlap with python or if it even is implemented for python. Any ideas? Can I build burlap python clients? Any resources? Would using a hessian python client work with a java burlap server? | Burlap java server to work with python client | 1.2 | 0 | 0 | 205 |