Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,324,438 | 2017-06-02T08:39:00.000 | 0 | 0 | 1 | 0 | python,python-datetime | 44,324,559 | 4 | false | 0 | 0 | Split on the space; the first value is the date, like 2017-06-01:
d = output.split(" ")[0]
Then split the second value on the dot to get the time:
t = output.split(" ")[1].split(".")[0] | 1 | 2 | 0 | The output of
/sbin/hwclock --show --utc
looks like
2017-06-01 16:04:47.029482+1:00
How to parse this string into a datetime object in Python? | Parse hw_clock output in Python | 0 | 0 | 0 | 288 |
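A minimal sketch of the split-based approach from the answer above, applied to the sample string from the question (variable names are illustrative):

```python
from datetime import datetime

raw = "2017-06-01 16:04:47.029482+1:00"      # sample hwclock output from the question
date_part = raw.split(" ")[0]                 # '2017-06-01'
time_part = raw.split(" ")[1].split(".")[0]   # '16:04:47' (drops the fraction and offset)
dt = datetime.strptime(date_part + " " + time_part, "%Y-%m-%d %H:%M:%S")
print(dt)                                     # 2017-06-01 16:04:47
```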
44,324,494 | 2017-06-02T08:42:00.000 | 6 | 0 | 1 | 0 | python,python-2.7,python-3.x,pypy | 44,333,848 | 1 | false | 0 | 0 | Using PyPy is not the best choice here, given that it always retains the insertion order in its dicts (with a method that makes dicts use less memory). We could of course make it change the order in which dicts are enumerated, but that defeats the point.
Instead, I'd suggest to hack at the CPython source code to change the way the hash is used inside dictobject.c. For example, after each hash = PyObject_Hash(key); if (hash == -1) { ..error.. }; you could add hash ^= HASH_TWEAK; and compile different versions of CPython with different values for HASH_TWEAK. (I did such a thing at one point, but I can't find it any more. You need to be a bit careful about where the hash values are the original ones or the modified ones.) | 1 | 13 | 0 | We have a large collection of python code that takes some input and produces some output.
We would like to guarantee that, given the identical input, we produce identical output regardless of python version or local environment. (e.g. whether the code is run on Windows, Mac, or Linux, in 32-bit or 64-bit)
We have been enforcing this in an automated test suite by running our program both with and without the -R option to python and comparing the output, assuming that would shake out any spots where our output accidentally wound up dependent on iteration over a dict. (The most common source of non-determinism in our code)
However, as we recently adjusted our code to also support python 3, we discovered a place where our output depended in part on iteration over a dict that used ints as keys. This iteration order changed in python3 as compared to python2, and was making our output different. Our existing tests (all on python 2.7) didn't notice this. (Because -R doesn't affect the hash of ints) Once found, it was easy to fix, but we would like to have found it earlier.
Is there any way to further stress-test our code and give us confidence that we've ferreted out all places where we end up implicitly depending on something that will possibly be different across python versions/environments? I think that something like -R or PYTHONHASHSEED that applied to numbers as well as to str, bytes, and datetime objects could work, but I'm open to other approaches. I would however like our automated test machine to need only a single python version installed, if possible.
Another acceptable alternative would be some way to run our code with pypy tweaked so as to use a different order when iterating items out of a dict; I think our code runs on pypy, though it's not something we've ever explicitly supported. However, if some pypy expert gives us a way to tweak dictionary iteration order on different runs, it's something we'll work towards. | Equivalent to python's -R option that affects the hash of ints | 1 | 0 | 0 | 564 |
44,325,823 | 2017-06-02T09:46:00.000 | 0 | 0 | 0 | 0 | android,python,appium | 44,330,828 | 2 | false | 1 | 0 | Try this
driver.find_element_by_id('main_btn1').click()
Find the ID mentioned under the resource id in case you are using appium version less than 1.0.2
You are pasting the whole package id com.socialnmobile.dictapps.notepad.color.note:id/main_btn1, which Appium won't detect because that is certainly not the element id.
In case this doesn't work, please let me know the contents you see in the inspector. | 2 | 0 | 0 | This is my first post so I did some research before asking this question, but it was all in vain.
I'm writing my python script for Android application and I need to use basic click() command, in order to get deeper.
Android 6.0.1 (xiaomi redmi note 3 pro), SDK installed for Android 6.0, python 3.6.1, Appium 1.0.2 + Pycharm.
Element is localized with no problems, but click() doesn't work, nothing happens.
Part of my script:
driver.find_element_by_id('com.socialnmobile.dictapps.notepad.color.note:id/main_btn1').click()
I tried to use .tap() instead, but it says "AttributeError: 'WebElement' object has no attribute 'tap'".
I would be very grateful for your help, cause I'm stuck with it for good. | Python+Appium+Android 6.0.1 - 'Click()' doesn't work | 0 | 0 | 1 | 305 |
44,325,823 | 2017-06-02T09:46:00.000 | 1 | 0 | 0 | 0 | android,python,appium | 44,336,428 | 2 | false | 1 | 0 | Ok, after a long fight I came up with the solution. My smartphone - Xiaomi Redmi Note 3 Pro - apart from the standard USB Debugging option in settings, has another USB Debugging (security) option. It has to be enabled as well, because the second option protects the smartphone from remote actions. Regards. | 2 | 0 | 0 | This is my first post so I did some research before asking this question, but it was all in vain.
I'm writing my python script for Android application and I need to use basic click() command, in order to get deeper.
Android 6.0.1 (xiaomi redmi note 3 pro), SDK installed for Android 6.0, python 3.6.1, Appium 1.0.2 + Pycharm.
Element is localized with no problems, but click() doesn't work, nothing happens.
Part of my script:
driver.find_element_by_id('com.socialnmobile.dictapps.notepad.color.note:id/main_btn1').click()
I tried to use .tap() instead, but it says "AttributeError: 'WebElement' object has no attribute 'tap'".
I would be very grateful for your help, cause I'm stuck with it for good. | Python+Appium+Android 6.0.1 - 'Click()' doesn't work | 0.099668 | 0 | 1 | 305 |
44,328,855 | 2017-06-02T12:21:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,network-programming,web.py | 50,786,355 | 2 | false | 1 | 0 | You need to check your firewall. Try to disable it and it will work fine. | 1 | 0 | 0 | So, I created a simple site written in Python using Web.py. It's now live and running. I can access it through my computer that runs the server by typing one of this: http://0.0.0.0:8080/ or http://localhost:8080/. However, I can only access it through that computer that runs the Python Web.py server.
I tried accessing it with another computer on the same network, but it gives me an error. Help? | How to access Python web.py server in LAN? | 0 | 0 | 1 | 947 |
44,329,734 | 2017-06-02T13:06:00.000 | -1 | 0 | 0 | 0 | python,pandas | 57,430,035 | 4 | false | 0 | 0 | l2 = ((df.val1.loc[df.val== 'Best'].value_counts().sort_index()/df.val1.loc[df.val.isin(l11)].value_counts().sort_index())).loc[lambda x : x>0.5].index.tolist() | 1 | 4 | 1 | I'm trying out pandas for the first time. I have a dataframe with two columns: user_id and string. Each user_id may have several strings, thus showing up in the dataframe multiple times. I want to derive another dataframe from this; one where only those user_ids are listed that have at least 2 or more strings associated to them.
I tried df[df['user_id'].value_counts()> 1], which I thought was the standard way to do this, but it yields IndexingError: Unalignable boolean Series key provided. Can someone clear out my concept and provide the correct alternative? | Filtering dataframe based on column value_counts (pandas) | -0.049958 | 0 | 0 | 9,412 |
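For the question above (keep only user_ids that occur at least twice), a short pandas sketch with made-up data; combining value_counts with isin avoids the alignment error the question mentions:

```python
import pandas as pd

df = pd.DataFrame({"user_id": [1, 1, 2, 3, 3, 3],
                   "string": ["a", "b", "c", "d", "e", "f"]})

counts = df["user_id"].value_counts()
keep = counts[counts > 1].index          # user_ids that have 2 or more rows
filtered = df[df["user_id"].isin(keep)]
print(filtered)
```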
44,331,043 | 2017-06-02T14:12:00.000 | 0 | 0 | 0 | 0 | python,mysql,pycharm,mysql-python | 44,332,632 | 1 | true | 0 | 0 | Reading between the lines here. I believe what you are being asked to do is called ETL. If somebody were to ask me to do the above my approach would be
Force an agreed upon format for the incoming data (probably a .csv)
Write a Python application to: a. read the data from the CSV, b. condition the data if necessary, c. write the results to the database
Pycharm would be the tool that I would use to write my Python code. It would have nothing to do with MySql. The ETL process should be initiated from the command line so you can automate it. So you need to research the following for python.
Reading files from a csv
Parsing command line arguments
Connecting and writing to a MySql database
Again, I'm doing some guessing here as your question is vague.
SteveJ | 1 | 0 | 0 | I have set up a database using MySQL Community Edition to log serial numbers of HDD's and file names. I am instructed to find a way to integrate Python scripting into the database so that the logs can be entered through python programming instead of manually (as manually would take a ridiculous amount of time.) Pycharm was specified as the programming tool that will be used, I have done research for the past few days and haven't found any solid way that this should be done, the python connector doesn't appear to be able to work with Pycharm. Any suggestions? | What is the best way to integrate Pycharm Python into a MySQL CE Database? | 1.2 | 1 | 0 | 173 |
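A rough sketch of the ETL flow outlined in the answer above, assuming the mysql-connector-python driver and a pre-existing table; the credentials, table and column names are all placeholders:

```python
import argparse
import csv
import mysql.connector  # assumed driver: mysql-connector-python

def load_csv(csv_path, table):
    # Read the agreed-upon CSV format: serial number, file name per row.
    with open(csv_path, newline="") as fh:
        rows = [(r[0], r[1]) for r in csv.reader(fh)]
    conn = mysql.connector.connect(host="localhost", user="loguser",
                                   password="secret", database="hdd_logs")
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO {} (serial, filename) VALUES (%s, %s)".format(table), rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Load an HDD log CSV into MySQL")
    parser.add_argument("csv_path")
    args = parser.parse_args()
    load_csv(args.csv_path, "drive_log")
```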
44,335,053 | 2017-06-02T18:10:00.000 | 0 | 0 | 0 | 0 | qpython3 | 45,024,399 | 4 | false | 0 | 1 | try the following code
import os
os.system("sh") | 3 | 1 | 0 | When I run any script in QPython, it runs in a console then I get prompted "Press enter to exit", but I want to interact with my just-run script instead of exiting. Is there any configuration option to control this behavior?
Does it have anything to do with the <&& exit> option displayed at the top of the console? | QPython prompts "Press enter to exit" after running script. How to interact with console instead? | 0 | 0 | 0 | 1,007 |
44,335,053 | 2017-06-02T18:10:00.000 | 0 | 0 | 0 | 0 | qpython3 | 51,234,532 | 4 | false | 0 | 1 | I know it is a very old post, but I had the same problem:
You need to switch your keyboard to word based instead of character based.
(Enter the three points- preferences-input method-word based) Then you receive a keyboard with an enter-sign.
Press enter and it exits. | 3 | 1 | 0 | When I run any script in QPython, it runs in a console then I get prompted "Press enter to exit", but I want to interact with my just-run script instead of exiting. Is there any configuration option to control this behavior?
Does it have anything to do with the <&& exit> option displayed at the top of the console? | QPython prompts "Press enter to exit" after running script. How to interact with console instead? | 0 | 0 | 0 | 1,007 |
44,335,053 | 2017-06-02T18:10:00.000 | 0 | 0 | 0 | 0 | qpython3 | 46,437,687 | 4 | false | 0 | 1 | If you want to access the console, you could try to raise an exception. | 3 | 1 | 0 | When I run any script in QPython, it runs in a console then I get prompted "Press enter to exit", but I want to interact with my just-run script instead of exiting. Is there any configuration option to control this behavior?
Does it have anything to do with the <&& exit> option displayed at the top of the console? | QPython prompts "Press enter to exit" after running script. How to interact with console instead? | 0 | 0 | 0 | 1,007 |
44,335,270 | 2017-06-02T18:24:00.000 | 0 | 0 | 0 | 1 | python-2.7,robotframework | 44,404,547 | 2 | true | 0 | 0 | Do a right click on ride.py and choose python as default program. | 1 | 1 | 0 | I want to start using the RIDE tool for automation but I am currently unable to launch it from the command prompt. The following are what I have installed.
Python 2.7.11 (32bit)
Wx Python 2.8.12.1(unicode) for Python 2.7
robotframework 3.0.2 (pip installed)
robotframework-ride 1.5.2.1 (pip installed)
When I launch ride.py from cmd, it opens up a word file which has the same ride.py which is installed in the C:\Python27\Scripts folder.
The same setup works on a different machine. I don't understand why in this machine, it opens up a word document instead of launching RIDE | Unable to Launch RIDE from command prompt | 1.2 | 0 | 0 | 1,827 |
44,337,390 | 2017-06-02T21:02:00.000 | 0 | 0 | 0 | 0 | python,flask | 48,550,508 | 2 | false | 1 | 0 | Just wanted to highlight one more fact about the requests object.
As per the documentation, it is kind of proxy to objects that are local to a specific context.
Imagine the context being the handling thread. A request comes in and the web server decides to spawn a new thread (or something else, the underlying object is capable of dealing with concurrency systems other than threads). When Flask starts its internal request handling it figures out that the current thread is the active context and binds the current application and the WSGI environments to that context (thread). It does that in an intelligent way so that one application can invoke another application without breaking. | 1 | 0 | 0 | I read the Flask doc, it said whenever you need to access the GET variables in the URL, you can just import the request object in your current python file?
My question here is: if two users are hitting the same Flask app with the same URL and GET variables, how does Flask differentiate the request objects? Can someone tell me what is under the hood? | Flask/Python: from flask import request | 0 | 0 | 1 | 553
44,340,539 | 2017-06-03T05:14:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,module | 44,340,670 | 1 | true | 0 | 0 | You need to import using the path from your local project.
For example, if your project is named test,
the app is named test1, and
test1 contains module.py and main.py,
you need to import like this:
from test.test1 import module
I hope this helps. | 1 | 0 | 0 | I'm still pretty new to Python and I've been having some trouble getting modules to work. Whenever I do something like this:
module.py:
def function():
var = "this is some stuff"
main.py:
import module
But when i run main.py, i end up getting: ModuleNotFoundError: No module named 'module'
EDIT: I probably should have mentioned this is in the original post, the files are in the same directory | Python 3.6.1 ModuleNotFoundError | 1.2 | 0 | 0 | 773 |
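Since both files sit in the same directory, a plain import module should work when main.py is run from that directory; a minimal sketch of the two files:

```python
# module.py
def function():
    var = "this is some stuff"
    return var

# main.py (same directory, run with: python main.py)
import module
print(module.function())
```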
44,343,975 | 2017-06-03T12:34:00.000 | 0 | 1 | 1 | 1 | python,parallel-processing,automated-tests,adb,pytest | 70,539,100 | 1 | false | 0 | 0 | I think you have to use Selenium Grid for runs on multiple devices. | 1 | 1 | 0 | I have 3 devices connected to my computer and I want to run pytest in parallel on each of them.
Is there any way to do it, either with pytest or with adb?
Thanks in advance. | Running Pytest on Multiple Devices in Parallel | 0 | 0 | 0 | 264 |
44,351,174 | 2017-06-04T05:53:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 44,351,189 | 3 | false | 0 | 0 | You should add parentheses to your print statement
print("".join(tsOutput)) should work | 2 | 0 | 0 | i am getting error in
print "".join(tsOutput)
Is it because the print syntax is different between Python 2.x and 3.x? | print " ".join(tsOutput) ^ SyntaxError: invalid syntax | 0.132549 | 0 | 0 | 2,440
44,351,174 | 2017-06-04T05:53:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 44,351,191 | 3 | true | 0 | 0 | I'm going to go out on a limb and say you must be using python 3. In that case, this must be written as print("".join(tsOutput))
This is because print is a function in python 3, and needs standard function syntax. (In python 2 it was built more tightly into the core language and was an exception to the regular syntax in that it required no parentheses) | 2 | 0 | 0 | i am getting error in
print "".join(tsOutput)
Is it because the print syntax is different between Python 2.x and 3.x? | print " ".join(tsOutput) ^ SyntaxError: invalid syntax | 1.2 | 0 | 0 | 2,440
44,351,248 | 2017-06-04T06:08:00.000 | 0 | 0 | 0 | 0 | python,regression,tflearn | 44,433,959 | 2 | false | 0 | 0 | That's not how regression works. You must have only one column as a target. That's why the tensorflow API only allows one column to be the target of regression, specified with an integer. | 1 | 0 | 1 | How to specify multiple target_column in tflearn.data_utils.load_csv method.
According to Tflearn docs load_csv takes target_column as integer.
Tried passing my target_columns as a list in the load_csv method and as expected got a TypeError: 'list' object cannot be interpreted as an integer traceback.
Any solutions for this.
Thanks | Specifying Multiple targets for regression in TFLearn | 0 | 0 | 0 | 443 |
44,353,532 | 2017-06-04T11:24:00.000 | 1 | 0 | 0 | 0 | python,mysql,django,django-models,django-rest-framework | 44,354,531 | 1 | false | 1 | 0 | It is true that if you used a database for your business logic you could get maximum possible performance and security optimizations. However, you would also risk many things such as
No separation of concerns
Being bound to the database vendor
etc.
Also, whatever logic you write in your database won't be version controlled with your app. Thus, whenever you change your database, you will have to create all those things once again.
Instead, use Django ORM. It will create and manage your database based on your models by itself. As a result, whenever you recreate your database, you will just have to run migrations with one single command and you are done.
This will cover most of the situations. And whenever you will need those speeds of stored procedures, Django ORM has you covered as well.
In short, I believe that business logic should be kept out of the database as much as possible. | 1 | 0 | 0 | I wonder if a good habit is to have some logic in MySQL database (triggers etc.) instead of logic in Django backend. I'm aware of fact that some functionalities may be done both in backend and in database but I would like to do it in accordance with good practices. I'm not sure I should do some things manually or maybe whole database should be generated by Django (is it possible)? What are the best rules to do it as well as possible? I would like to know the opinion of experienced people. | Django - backend logic vs database logic | 0.197375 | 1 | 0 | 133 |
44,354,833 | 2017-06-04T13:55:00.000 | 2 | 0 | 0 | 0 | python,web | 44,354,888 | 2 | true | 1 | 0 | No, that is not possible for two reasons:
Python is a server side language that is not executed in the user's browser
Even if you did use a client side language that runs in the user's browser (i.e. JavaScript), security restrictions prevent this
Think of the implications if this were possible: Would you want any company whose website you are accessing to be able to spy on you like that? | 1 | 0 | 0 | I was wondering if it is possible to get all the URL's from the history page of a person, perhaps with python (using selenium, or any other tool)? | Getting the user's browsing history's URLs | 1.2 | 0 | 1 | 181 |
44,355,163 | 2017-06-04T14:27:00.000 | 1 | 0 | 0 | 0 | python-3.x,numpy | 45,017,647 | 1 | false | 0 | 0 | This is likely due to the fact that typical image formats are compressed. If you open an image using e.g. scipy.ndimage.imread, the file will be decompressed and the result will be a numpy array of size (NxMx3), where N and M are the dimensions of the image and 3 represents the [R, G, B] channels. Transforming this to a string does not perform any compression, so the result will be larger than the original file. | 1 | 0 | 1 | The Operation : transforming a rgb image numpy array to string gives an output that if saved to a file also results in a bigger file than the original image, why? | Numpy array to string gives an output that if saved to a file results in a bigger file than the original image, why? | 0.197375 | 0 | 0 | 15 |
44,356,350 | 2017-06-04T16:37:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,user-interface,tkinter,raspberry-pi2 | 44,356,705 | 2 | false | 0 | 1 | The problem is that you are using place. place should be avoided for exactly this reason. If you learn to use grid and pack properly, and do not call pack_propagate(0) or grid_propagate(0) unless you are certain it is the only solution to your problem, tkinter will do a fantastic job of adapting to different screen resolutions, font sizes, and user preferences.
In other words, the answer to "[is there] an easy way to achieve what I'm after" is "use grid and pack, and avoid place". | 2 | 0 | 0 | I understand that a few others have asked similar questions although i cant find any that match my specific issue so please be gentle.
My issue is every display i use results in my Tkinter display completely changing and im having to reposition and resize everything again through trial and error which is massively time consuming. Is there an easy way around this? I've used place and pixel sizes in the script so i think its everytime the resolution changes im having to start over.
Anyway I thought i set the resolution to the Rpi touchscreen and built the GUI to that. However, now i've bought the touchscreen the GUI isn't even close to fitting it.
I'm not that keen on having to resize and reposition everything again so if there is an easy way to achieve what i'm after i'd be grateful if someone could share it. If not, i'll just have to get on with it.
Cheers
chris | Tkinter Screen Change as Altered Display - Easy way to rectify? | 0.099668 | 0 | 0 | 383 |
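A small sketch of the grid/pack approach recommended in the first answer above; the widgets are placeholders, and the point is that column weights and sticky options let the layout stretch with whatever screen it runs on instead of being pinned to pixel positions:

```python
import tkinter as tk

root = tk.Tk()
root.geometry("800x480")   # assumed touchscreen size, for illustration only

tk.Label(root, text="Header").pack(fill="x")

frame = tk.Frame(root)
frame.pack(fill="both", expand=True)
frame.rowconfigure(0, weight=1)
for col in range(3):
    frame.columnconfigure(col, weight=1)   # columns share extra width equally
    tk.Button(frame, text="Button %d" % col).grid(row=0, column=col, sticky="nsew")

root.mainloop()
```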
44,356,350 | 2017-06-04T16:37:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,user-interface,tkinter,raspberry-pi2 | 44,356,396 | 2 | false | 0 | 1 | You probably want to utilize the width and height aspects of your screen, and make your program relative to those instead of making everything fixed/static.
You can do this by calling winfo_width() or winfo_height() on your widget to get it's size. After you have this, instead of making an oval at 200 pixels if your screen is 400 pixels in width, you can make it relative, i.e. at width/2 (if you declare winfo_width()as a variable). | 2 | 0 | 0 | I understand that a few others have asked similar questions although i cant find any that match my specific issue so please be gentle.
My issue is every display i use results in my Tkinter display completely changing and im having to reposition and resize everything again through trial and error which is massively time consuming. Is there an easy way around this? I've used place and pixel sizes in the script so i think its everytime the resolution changes im having to start over.
Anyway I thought i set the resolution to the Rpi touchscreen and built the GUI to that. However, now i've bought the touchscreen the GUI isn't even close to fitting it.
I'm not that keen on having to resize and reposition everything again so if there is an easy way to achieve what i'm after i'd be grateful if someone could share it. If not, i'll just have to get on with it.
Cheers
chris | Tkinter Screen Change as Altered Display - Easy way to rectify? | 0 | 0 | 0 | 383 |
44,356,514 | 2017-06-04T16:50:00.000 | 0 | 0 | 1 | 0 | python,plugins,notepad++ | 51,279,361 | 2 | false | 0 | 0 | Yeah, I had also this problem of a plugin crashing my notepad++ every minute, but I used notepad++ portable, instead, you just go to the directory where it's installed and look for the plugins directory, notepad++ should offer an inner functionality of doing this. | 1 | 1 | 0 | I currently installed via plug-in manager of Notepad++ the Python Indent plug-in
I cannot uninstall it.
It's in update pane of Notepad++ plug-in manager, I check it and update it.
After update installation it is there again and not in installed plug-ins.
So it cannot be uninstalled.
Any idea how to remove it? | How to uninstall Python indent plug-in from Notepad++? | 0 | 0 | 0 | 1,831 |
44,358,307 | 2017-06-04T20:06:00.000 | 4 | 0 | 0 | 0 | python,django | 44,358,407 | 1 | true | 1 | 0 | The web is stateless. This means that if a browser requests the same page twice, a traditional web server has no real way of knowing if it's the same user.
Enter sessions. Django has an authentication system which requires each user to log in. When the user is logged in they're given a session. A session is made of two parts; A cookie containing a randomly generated token, and a database entry with that same token.
When a user logs in, a new session token is generated and sent, via a cookie, back to the user which the browser stores. At the same time, that record is created in the database. Each time a browser makes a request to Django, it sends its session cookie along with the request and Django compares this to the tokens in the database. If the token exists, the user is considered to be logged in. If the token doesn't exist, the user isn't logged in.
In Django, there are User models which make it easy to check who the currently logged in user is for each request. They're doing all that token checking in the background for us on each and every request made by every user. Armed with this, we can associate other models via "foreign key" relationships to indicate who owns what.
Say you were making a blog where multiple users could write articles. If you wanted to build an editing feature you'd probably want to restrict users to only be allowed to edit their own articles and nobody else's. In this situation, you'd receive the request, find out who the current user was from it, compare that user to the "author" field on the blog Post model and see if that foreign key matches. If it matches, then the user making the current request is the owner and is allowed to edit.
This whole process is secured by the fact that the session tokens are randomly generated hashes, rather than simple ID numbers. A malicious attacker can't simply take the hash and increment the value to try and access adjacent accounts, they'd have to intercept another user's hash entirely. This can be further secured by using SSL certificates so that your connections go over https:// and all traffic is encrypted between the browser and your server. | 1 | 0 | 0 | If for example I want to show a zero(0) for all users to see, and I want all users to add one(1) to the number With their Identity only shown for superusers. And how to make sure that each user only can add one time, and of course what is the Security requirements that have to be done to prevent unautohrized Access to change any of this or to get any information?
I understand this is a big topic, but could someone briefly explain for me what parts of Programming that are involved, and maybe some good books on these topics? | How does django know which user owns what data? | 1.2 | 0 | 0 | 79 |
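A compressed sketch of the ownership check described in the answer above; the model and view names are invented for illustration:

```python
# models.py
from django.conf import settings
from django.db import models

class Post(models.Model):
    author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    body = models.TextField()

# views.py
from django.http import HttpResponse, HttpResponseForbidden
from django.shortcuts import get_object_or_404

def edit_post(request, post_id):
    post = get_object_or_404(Post, pk=post_id)
    if post.author != request.user:           # the session cookie told Django who this is
        return HttpResponseForbidden("Not your post")
    post.body = request.POST.get("body", post.body)
    post.save()
    return HttpResponse("Saved")
```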
44,358,803 | 2017-06-04T21:10:00.000 | 0 | 0 | 0 | 1 | python,django,apache,wsgi | 44,366,981 | 1 | true | 1 | 0 | Sorry for anyone who read this, it would probably have been impossible to solve given my supplied information.
What had actually happened was that I'd been having to modify my wsgi.py script in order to make it happy inside the Apache server, and I'd added a line which said "os.system('/bin/bash --rcfile )" to try and make sure that when running inside apache it got the virtualenv activated.
This line must have been causing some strange problem; another symptom was that I realised when I was running "runserver", it wasn't crashing - the python process was backgrounding itself, where normally it runs inside that console window.
Thanks everyone who asked questions helping me debug! | 1 | 0 | 0 | I have a situation where if I run Apache with wsgi (now uninstalled), a test website works, but running the same server with runserver 0.0.0.0:8080 gives ERR_CONNECTION_REFUSED from local or remote (even with the apache2 service stopped).
Edit: I don't think it's Apache, I've reproduced the problem on a clean server with no Apache installed, so unless Apache somehow modified something under source control it's not that
My knowledge of web details is hazy, I don't even know where to troubleshoot this problem - the devserver runs (runserver prints as expected and doesn't give any errors) but never receives a request, I have nothing in iptables. | django devserver can't connect | 1.2 | 0 | 0 | 246 |
44,359,336 | 2017-06-04T22:17:00.000 | 1 | 0 | 0 | 0 | python,multithreading,cpu | 44,359,362 | 1 | true | 0 | 0 | You can have significantly more than 4 IO-bound threads on a quad-core CPU. However, you do want to have some maximum. Even IO bound processes use the CPU some of the time. For example, when a packet is received, that packet needs to be handled to update TCP state. If you are reading from a socket and writing to a file, some CPU is required to actually copy the characters from the socket buffer to the file buffer under most circumstances. If you use TLS, CPU is typically required to decrypt and encrypt data. So even threads that are mostly doing IO use the CPU some. Eventually the small fraction of time that you are using the CPU will add up and consume the available CPU resources.
Also, note that in Python, because of the global interpreter lock, you can only have one thread using the CPU to run python code at a time. So, the GIL would not typically be held while doing something like waiting for an outgoing connection. During that time other threads could be run. However, for some fraction of the time while reading and writing from a socket or file, the GIL will be held. It's likely with most common work loads that the performance of your application will reach a maximum when the fraction of time your threads need a CPU reaches one full CPU rather than four full CPUs.
You may find that using asyncio or some other event-driven architecture provides better performance. When true this is typically because the event-driven model is better at reducing cross-thread contention for resources.
In response to your question edit, I would not expect 10 threads to be a problem | 1 | 2 | 0 | TL;DR:
If I spawn 10 web requests, each on its own thread, with a CPU that has a 4 thread limit, is this okay or inefficient? Threads are IO bound so sit idle while awaiting server response (I believe). How does CPU deal if more than 4 threads return simultaneously?
I've got a script that currently starts a new thread for every file I need to download (each located at a unique URL) through an http.client.HTTPSConnection. At max, I may need to spawn 730 threads. I have done this, since the threads are all IO bound work (downloading and saving to file), but I am not sure if they are executing in parallel or if the CPU is only executing a set at a time. Total run time for file sizes ranging between 20MB to 110MB was roughly 15 minutes.
My CPU is quad-core with no Hyper-Threading. This means that it should only support 4 threads simultaneously at any given time. Since the work is IO bound and not CPU bound, am I still limited by the hold of only 4 simultaneous threads?
I suppose what is confusing is I am not sure what sequence of events takes place if say I send out just 1 request on 10 threads; what happens if they all return at the same time? Or how does the CPU choose which 4 to finish before moving onto the next available thread?
And after all of this, if the CPU is only handling 4 threads at a time, I would image it is still smart to spawn as many IO threads as I need (since they will sit idle while waiting for server response) right? | Am I overspawning IO bound web scraping threads? | 1.2 | 0 | 1 | 269 |
44,360,273 | 2017-06-05T00:43:00.000 | 2 | 0 | 0 | 0 | python,python-2.7,python-3.x,tensorflow | 44,362,389 | 1 | false | 0 | 0 | When operating Tensorflow from python most code to feed the computational engine with data resides in python domain. There are known differences between python 2/3 when it comes to performance on various tasks. Therefore, I'd guess that the python code you use to feed the net (or TF python layer, which is quite thick) makes heavy use of python features that are (by design) a bit slower in python 3. | 1 | 3 | 1 | My tests show that Tensorflow GPU operations are ~6% slower on Python 3 compared to Python 2. Does anyone have any insight on this?
Platform:
Ubuntu 16.04.2 LTS
Virtualenv 15.0.1
Python 2.7.12
Python 3.6.1
TensorFlow 1.1
CUDA Toolkit 8.0.44
CUDNN 5.1
GPU: GTX 980Ti
CPU: i7 4 GHz
RAM: 32 GB | Tensorflow Slower on Python 3 vs. Python 2 | 0.379949 | 0 | 0 | 1,834 |
44,365,841 | 2017-06-05T09:31:00.000 | 2 | 0 | 1 | 0 | python,pdf | 44,365,891 | 1 | true | 0 | 0 | To open it in the default application for that file type:
subprocess.Popen([file],shell=True)
Considering that you are implementing a user guide, you may want to open it in a web browser.
import webbrowser
webbrowser.open_new(r'file://C:\path\to\file.pdf') | 1 | 0 | 0 | I need to implement the user guide of my program and I was thinking on putting a button that opens the pdf, like if its double clicked on Windows Explorer.
But I've tried os.popen(myfile) and open(myfile), and the interpreter opens it in Python, so when I print it I just get the info of the object <_io.TextIOWrapper name='userguide.pdf' mode='r' encoding='cp1252'>, but what I need is to open it with its native application to avoid embedding the pdf into the program.
Any way to do this?
Thanks | Open file with its default program via python | 1.2 | 0 | 0 | 803 |
44,367,961 | 2017-06-05T11:28:00.000 | 0 | 0 | 0 | 1 | python,redis,celery,estimation | 44,961,359 | 1 | true | 1 | 0 | I don't think there is a magic way to do this.
What I do in my app is just log the execution time for each task and return that as an ETA. If you wanted to get a little more accurate you could also factor in the redis queue size and the task consumption rate. | 1 | 0 | 0 | I want to get eta of task in celery each time with get request. There is no direct api in celery to get task scheduled time (except inspect() - but it's seems very costly to me)
How can I manage the ETA of a particular task? The downside is that storing the ETA in a Django model is not consistent (and I couldn't store the task_id either, because I don't know how to get the ETA from a task_id).
I saw in another question that there is no API for this, because it somehow depends on the broker, etc. But I hope there is some solution.
So what's the best way to manage the task_id to get its ETA?
Backend and broker is redis | Celery best way manage/get eta of task | 1.2 | 0 | 0 | 429 |
44,368,133 | 2017-06-05T11:38:00.000 | 0 | 0 | 0 | 1 | python,django,celery | 44,368,455 | 3 | false | 1 | 0 | Celery was originally written specifically as an offline task processor for Django, and although it was later generalised to deal with any Python code it still works perfectly well with Django.
How many tasks there are and how long they take is pretty much irrelevant to the choice of technology; each Celery worker runs as a separate process, so the limiting resource will be your server capacity. | 2 | 0 | 0 | I have a few django micro-services.
Their main workload is constant background processes, not request handling.
The background processes constantly use Django's ORM, and since I needed to hack a few things for it to work properly (it did for quite a while), now I have problems with the DB connection, since Django is not really built for using DB connections a lot in the background I guess...
Celery is always suggested in these cases, but before switching the entire design, I want to know if it really is a good solution.
Can celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without problems? | Use Django's ORM in Celery | 0 | 0 | 0 | 1,506 |
44,368,133 | 2017-06-05T11:38:00.000 | 3 | 0 | 0 | 1 | python,django,celery | 44,382,660 | 3 | false | 1 | 0 | Can celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without problems?
Yes, depending on your definition of “problems” :-)
More seriously: The Django ORM performance will be mostly limited by the performance characteristics of the underlying database engine.
If your chosen database engine is PostgreSQL, for example, you will be able to handle a high volume of concurrent connections. | 2 | 0 | 0 | I have a few django micro-services.
Their main workload is constant background processes, not request handling.
The background processes constantly use Django's ORM, and since I needed to hack a few things for it to work properly (it did for quite a while), now I have problems with the DB connection, since Django is not really built for using DB connections a lot in the background I guess...
Celery is always suggested in these cases, but before switching the entire design, I want to know if it really is a good solution.
Can celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without problems? | Use Django's ORM in Celery | 0.197375 | 0 | 0 | 1,506 |
44,368,332 | 2017-06-05T11:51:00.000 | 1 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 44,371,412 | 1 | false | 0 | 0 | Telegram doesn't support this at this time, but you can try to use an inline keyboard containing the numbers 0-9 instead. | 1 | 1 | 0 | I am trying to create a ChatBot where, using Python, I interact with the Telegram API to get updates of previous messages and reply suitably. My user's message could be a password, which cannot be displayed and needs to be converted into dots (*****). Is there such a feature to convert the password to dots? Any help would be appreciated. | Telegram API - Password to Dots | 0.197375 | 0 | 0 | 956
44,370,165 | 2017-06-05T13:31:00.000 | 2 | 1 | 0 | 0 | python-3.x,serial-port,cnc | 44,370,436 | 1 | false | 0 | 0 | Gcode per se doesn't support reading anything from any peripheral. Gcode is nothing more than a line-oriented textual machine command format, and is typically fed from a storage medium or file into an interpreter. This interpreter determines the axes movements, usually incorporating some trajectory planner. Then the interpreter emits signals to a peripheral device (LPT port, special card, etc.) that are fed to motor controllers. So without more details, based on your question, I think you're going to need something else to handle any serial connection. If you could clarify or add more details a solution may become apparent. | 1 | 0 | 0 | I am attempting to read a RS232/USB input from a Gcode script. Is it possible to perform this from GCode or am I going to have to wrap it in python or something?
For reference, my algorithm is essentially:
-Perform some CNC movements
-Read/store/record variable from RS232 peripheral
-Repeat a bunch of times in marginally different ways | GCode and RS232 | 0.379949 | 0 | 0 | 263 |
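If the read ends up wrapped in Python as the question suggests, a tiny pyserial sketch could sit between the CNC moves; the port name, baud rate and loop count here are pure assumptions:

```python
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # assumed port and baud rate
readings = []
for _ in range(10):                                     # one read per CNC pass
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        readings.append(line)                           # store/record the variable
ser.close()
print(readings)
```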
44,370,237 | 2017-06-05T13:35:00.000 | 0 | 1 | 0 | 0 | python,import,scipy,export,linear-programming | 44,379,161 | 1 | true | 0 | 0 | No as Sascha mentioned in the comment. Use other alternatives such as cvxpy/cvxopt. | 1 | 0 | 1 | I am trying to manage problems from Scipy. So does Scipy provide techniques to import and export model files? | Does Scipy have techniques to import&export optimisation model files such as LP? | 1.2 | 0 | 0 | 95 |
44,373,924 | 2017-06-05T16:59:00.000 | 4 | 0 | 0 | 0 | python,django,python-3.x | 44,374,195 | 1 | true | 1 | 0 | Django doesn't have any asynchronous method to deal with what you're asking. The platform is request-based so there's no way to send the information later using the same request.
The solution in this case is using two views (or a view that can handle multiple formats, like @SumNeuron mentioned):
The first view loads the page where you'll be loading the data later. On that page, an XMLHttpRequest is done to request the data.
This view only sends the data, it can be just the data in JSON or it can be partial HTML made from a template. | 1 | 1 | 0 | I am new to Django and have completed the 7 part tutorial and am now trying to learn more by making my own app.
Suppose you are making an interactive data visualization application where the front end is powered by d3 (or your favorite JS library) and your data is coming from the server. Since your data is large, you first have to load it into server memory (maybe from a binary file, or however you store it). Yet you don't want your user to be looking at a blank page when they could be seeing the rest of the site (maybe filling in some parameters for the interactive data).
How can you, when a user requests a web-page, load and maintain data in memory on the server while Django still renders the page as well as maybe sends POST requests to update some server side information? | Django: asynchronously load data when page is requested? | 1.2 | 0 | 0 | 1,588 |
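A bare-bones sketch of the two-view pattern from the answer above; the template name and the data loader are stand-ins, with the loaded data kept in a module-level cache so it stays in server memory between requests:

```python
# views.py
from django.http import JsonResponse
from django.shortcuts import render

_CACHE = {}

def load_large_dataset():
    if "points" not in _CACHE:            # load once, keep in memory afterwards
        _CACHE["points"] = [1, 2, 3]      # stand-in for reading the real binary file
    return _CACHE["points"]

def page(request):
    # Renders immediately; the browser fetches the data afterwards via XMLHttpRequest.
    return render(request, "chart.html")

def chart_data(request):
    return JsonResponse({"points": load_large_dataset()})
```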
44,374,397 | 2017-06-05T17:27:00.000 | 1 | 0 | 1 | 0 | python,api | 44,512,919 | 7 | false | 0 | 0 | As it has been already mentioned many times - one of the ways is to create REST API and send input and output over HTTP.
However, there is another option which is more complex. You can use CORBA (Common Object Request Broker Architecture). There is an implementation of CORBA in python omniORB. CORBA allows to interface between apps written in various languages.
There are a number of examples on using CORBA with python on the web. | 1 | 13 | 0 | If I create a package in Python, then another Python user can import that package and interface with it.
How can I create a package so that it doesn't matter what language the other user is calling the library with?
I could specify input and output file formats so that another language can interface with my Python code by merely providing the input files and reading the output files. However, creating the input and output files is very expensive computationally. Is there an easier solution? | How can I create a language independent library using Python? | 0.028564 | 0 | 0 | 1,971 |
44,376,270 | 2017-06-05T19:26:00.000 | 1 | 0 | 0 | 0 | python,html,hosting,vps | 46,166,962 | 1 | false | 1 | 0 | you can connect to your server/serverpilot app via SSH/SFTP.
Filezilla, codeanywhere are options that allow you to do this. | 1 | 0 | 0 | So I just bought a VPS server from Vultr. Then I went on ServerPilot, and installed it on my server.
Now I can access, via SFTP, all the files on my server.
But how can I access these files from my web-browser via Internet? I mean, when I type in the IP address of my Vultr Server, I land on the ServerPilot page "your app xxx is set up". Alright, but how can I access the other files I uploaded now?
Thanks | Beginner VPS Vultr/ServerPilot -> How to change the homepage & access the files I uploaded? | 0.197375 | 0 | 1 | 110 |
44,376,527 | 2017-06-05T19:44:00.000 | 0 | 0 | 1 | 0 | python,pyqt4,spyder | 44,401,170 | 1 | false | 0 | 1 | Qt4 is no longer supported in Anaconda, so no new packages were created for Python 3.6.
You could use a community channel called conda-forge to install PyQt4 for Python 3.6. However, that channel is not supported nor associated with Continuum (the company behind Anacoda). | 1 | 0 | 0 | I'm trying to step through some code using Spyder (Python 3.6) but I keep getting the below error:
ModuleNotFoundError: No module named 'PyQt4'
I have googled it and looked through Stack Overflow, but none of the possibilities seem to work. | How to get PyQt4 working with Spyder | 0 | 0 | 0 | 1,265 |
44,377,666 | 2017-06-05T21:04:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller,publisher,windows-defender | 69,619,137 | 2 | false | 0 | 0 | This is a known False Positive with Windows Defender. This happens to my files as well when tested on a Windows 10 VM, and it happens to others as well. Also, Windows Defender 'Smartscreen' may block any unsigned file even when using another Antivirus, but you should be able to click more information and then continue
You can exclude the file from Windows Defender, but the best solution is to use another antivirus, as Windows Defender is not very good anyway. (that is not just based on my experience but off AV tests)
I am not sure what other antiviruses have the same false positive, but I know there are a few.
You also could test on a VM, where you could disable Windows Defender and Smartscreen, while leaving it enabled on your host system. (VirtualBox is a great free VM software for Windows) | 1 | 7 | 0 | I developed a Python code and I converted it to an .exe with pyinstaller but the problem is that there is no publisher so each time a computer runs my program, Windows Defender throws an alert that says that there is no publisher so the program is not sure...
Does anyone know how to change the publisher of an .exe from none to something or how to implement Publisher in pyinstaller? | Pyinstaller .exe throws Windows Defender [no publisher] | 0 | 0 | 0 | 12,698 |
44,378,607 | 2017-06-05T22:22:00.000 | 0 | 0 | 1 | 0 | regex,python-3.6,mathematical-expressions | 44,378,741 | 2 | false | 0 | 0 | You need ([0-9]+)([+])([0-9]+)(?:([+])([0-9]+))*
You get the '+4' because the outer group (([+])([0-9]+)) is repeated, and only its last match is kept.
The ?: tells Python not to capture the string of that group in the output. | 1 | 2 | 0 | I want a regex to match complex mathematical expressions.
However I will ask for an easier regex because it will be the simplest case.
Example input:
1+2+3+4
I want to separate each char:
[('1', '+', '2', '+', '3', '+', '4')]
With a restriction: there has to be at least one operation (i.e. 1+2).
My regex: ([0-9]+)([+])([0-9]+)(([+])([0-9]+))*
or (\d+)(\+)(\d+)((\+)(\d+))*
Output for re.findall('(\d+)(\+)(\d+)((\+)(\d+))*',"1+2+3+4")
:
[('1', '+', '2', '+4', '+', '4')]
Why is this not working? Is Python the problem? | Why are these regular expressions not working? | 0 | 0 | 0 | 189 |
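The answer above explains that a repeated group only keeps its last match, which is where the '+4' comes from. If the goal is the full token list, one workaround is to validate the overall shape first and then pull the tokens out separately; a sketch:

```python
import re

expr = "1+2+3+4"
if re.fullmatch(r"\d+(?:\+\d+)+", expr):    # requires at least one '+' operation
    tokens = re.findall(r"\d+|\+", expr)
    print(tokens)                            # ['1', '+', '2', '+', '3', '+', '4']
```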
44,378,805 | 2017-06-05T22:43:00.000 | 1 | 0 | 0 | 0 | python,colors,visualization,pyx | 44,378,914 | 2 | false | 1 | 0 | PyX provides several color schemes for defining colors, if you want to use RGB as a hex string you can provide it as: pyx.color.rgbfromhexstring("#1122DD"). | 2 | 2 | 0 | Many Visualisation libraries and languages provide easy ways of generating any color. for example you could simple add a style tag with a css color code to set the color of an object in html.
Is there a similiar way to set the color of a pyx drawing to any wanted color code instead of the predefined pyx color codes?
(for example something like color.code = #1122DD instead of color.rgb.blue) | How to generate any color in PyX | 0.099668 | 0 | 0 | 216 |
44,378,805 | 2017-06-05T22:43:00.000 | 2 | 0 | 0 | 0 | python,colors,visualization,pyx | 44,378,992 | 2 | true | 1 | 0 | Here is the way I found as the most convenient to have a simple RGB color:
pyx.color.rgb(r,g,b)
where r, g, b are numbers in the range [0, 1] and set the percentage of each base color in RGB format. | 2 | 2 | 0 | Many Visualisation libraries and languages provide easy ways of generating any color. for example you could simple add a style tag with a css color code to set the color of an object in html.
Is there a similiar way to set the color of a pyx drawing to any wanted color code instead of the predefined pyx color codes?
(for example something like color.code = #1122DD instead of color.rgb.blue) | How to generate any color in PyX | 1.2 | 0 | 0 | 216 |
44,384,854 | 2017-06-06T08:20:00.000 | 0 | 0 | 1 | 0 | python,arrays,string | 44,384,926 | 7 | true | 0 | 0 | First remove [ and ] from your string, then split on commas, then remove spaces from resulting items (using strip). | 1 | 6 | 0 | I have a string with the following structure.
string = "[abcd, abc, a, b, abc]"
I would like to convert that into an array. I keep using the split function in Python but I get spaces and the brackets on the start and the end of my new array. I tried working around it with some if statements but I keep missing letters in the end from some words.
Keep in mind that I don't know the length of the elements in the string. It could be 1, 2, 3 etc. | Split string into array in Python | 1.2 | 0 | 0 | 16,090 |
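The accepted answer above in code form, using the example string from the question:

```python
s = "[abcd, abc, a, b, abc]"
items = [part.strip() for part in s.strip("[]").split(",")]
print(items)   # ['abcd', 'abc', 'a', 'b', 'abc']
```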
44,385,374 | 2017-06-06T08:45:00.000 | 0 | 0 | 0 | 0 | python,smartsheet-api | 44,395,555 | 3 | false | 0 | 0 | It's unclear from your question if you are reading a sheet or a report. Sample code would be very helpful.
The report API get_report(report_id, page_size=100, page=1, include=None) also allows you to specify the number of results to return. However, note that you can read a maximum of 500 rows at a time from a report. So you would need to use a loop. | 1 | 0 | 0 | I am trying to fetch all the rows that are existing in an Smartsheet using API.
I have generated the bearer authorization token and have the report details. My Python is fetching me results of the first 100 rows (source has 1200 rows). I don't use any filters within the Python. Is this due to any default (page-size value)?
I am unable to retrieve all existing data from this API. | How to fetch all the data from Smartsheet using API? | 0 | 0 | 0 | 1,188 |
44,387,283 | 2017-06-06T10:15:00.000 | 0 | 0 | 0 | 0 | python,server | 44,387,505 | 1 | false | 1 | 0 | is your server is linux or windows?
for linux: you can add a script to run your script on runlevel 3 or 5
write a script put it under /etc/init.d/ folder then link your script /etc/rc3.d or /etc/rc5.d to be start | 1 | 0 | 0 | I'm building a website with some backprocessing with python. I want to know how to execute my python code from the server ?
There is no direct link between my HTML pages and my python code.
Let's say I want to do an addition with python in the server, how can I do that ?
Thanks so much in advance :) | Running python code on server | 0 | 0 | 0 | 105
44,387,732 | 2017-06-06T10:37:00.000 | 4 | 0 | 0 | 0 | python,excel,xlrd,xlsxwriter | 44,417,110 | 1 | true | 0 | 0 | Fundamentally, there is no reason you need to read twice and save twice. For your current (no charts) process, you can just read the data you need using xlrd; then do all your processing; and write once with xlwt.
Following this workflow, it is a relatively simple matter to replace xlwt with XlsxWriter. | 1 | 4 | 0 | I am writing software that manipulates Excel sheets. So far, I've been using xlrd and xlwt to do so, and everything works pretty well.
It opens a sheet (xlrd) and copies select columns to a new workbook (xlwt)
It then opens the newly created workbook to read data (xlrd) and does some math and formatting with the data (which couldn't be done if the file isn't saved once) - (xlwt saves once again)
However, I am now willing to add charts in my documents, and this function is not supported by xlwt. I have found that xlsxwriter does, but this adds other complications to my code: xlsxwriter only has xlsxwriter.close(), which saves AND closes the document.
Does anyone know if there's any workaround for this? Whenever I use xlsxwriter.close(), my workbook object containing the document I'm writing isn't usable anymore. | Saving XlsxWriter workbook more than once | 1.2 | 1 | 0 | 1,717 |
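A sketch of the read-once/write-once workflow from the answer above, with a chart added just before the single close(); the file names and chart range are placeholders:

```python
import xlrd
import xlsxwriter

book = xlrd.open_workbook("input.xls")           # read everything up front
sheet = book.sheet_by_index(0)
values = [sheet.cell_value(r, 0) for r in range(sheet.nrows)]

# ... do all the maths/formatting on `values` in Python here ...

out = xlsxwriter.Workbook("report.xlsx")
ws = out.add_worksheet()
for r, v in enumerate(values):
    ws.write(r, 0, v)

chart = out.add_chart({"type": "line"})
chart.add_series({"values": "=Sheet1!$A$1:$A$%d" % len(values)})
ws.insert_chart("C2", chart)

out.close()                                      # the single save-and-close, done last
```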
44,387,854 | 2017-06-06T10:43:00.000 | 0 | 0 | 0 | 0 | python,matrix,ellipse,gaussianblur | 44,388,121 | 1 | false | 0 | 0 | You need to draw samples from a multi-variate gaussian distribution. The function you can use is numpy.random.multivariate_normal
Your mean vector should be [40, 60]. The covariance matrix C should be 2x2. Regarding its values:
C[1, 1], C[2, 2]: these set the width of the ellipse along each axis. Choose them so that roughly 3*sqrt(C[i, i]) matches the half-width of the ellipse along that axis.
The off-diagonal values are zero if you want the ellipse to be aligned with the axes; otherwise put non-zero values there (keep in mind that C[2, 1] == C[1, 2]).
However, keep in mind that, since it is a Gaussian distribution, the values will be close to 0 at a distance of about 3*sqrt(C[i, i]) from the center, but they will never be truly zero. | 1 | 0 | 1 | I have a 100x100 matrix of zeros. I want to add a 10x20 ellipse around a specific point in the matrix - let's say at position 40,60. The ellipse should be filled with values from 0 to 1 (1 in the center, 0 at the edge) - the numbers should be Gaussian-distributed.
Maybe someone can give me a clue, how to start with this problem.. | Create Matrix with gaussian-distributed ellipsis in python | 0 | 0 | 0 | 221 |
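The answer above suggests drawing random samples; if the goal is instead a matrix of Gaussian intensity values (1 at the centre, falling towards 0 at the ellipse edge) as the question describes, a direct evaluation on a grid is another option. A sketch, where the sigma choices are assumptions:

```python
import numpy as np

h, w = 100, 100
cy, cx = 40, 60                    # centre from the question
sy, sx = 10 / 3.0, 20 / 3.0        # assumed: the 10 and 20 extents span about 3 sigma
y, x = np.mgrid[0:h, 0:w]
M = np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
print(M[40, 60])                   # 1.0 at the centre; values fall off towards the edges
```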
44,389,123 | 2017-06-06T11:43:00.000 | 0 | 0 | 0 | 0 | python-sphinx,accelerated-mobile-page | 44,396,007 | 1 | false | 1 | 0 | There is nothing out of the box. The easiest option is to post-process your HTML output from Sphinx. | 1 | 1 | 0 | I am working on creating a Sphinx theme based on Accelerated Mobile Pages (AMP). While creating it, I came to realize that since AMP uses amp-img tag in place of the img tag. Is there a way to convert all the img tag in the sphinx generated docs to amp-img | Use amp-img tag in place of img tag for images - Sphinx | 0 | 0 | 0 | 94 |
44,391,885 | 2017-06-06T13:50:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,memory-management,garbage-collection,virtualization | 44,571,401 | 1 | false | 0 | 0 | TL;DR
Module reload trick I tried worked locally, broke when used on a machine with a different python version... (?!)
I ended up taking any and all caches I wrote in code and adding them to a global cache list - then clearing them between tests.
Sadly this will break if anyone uses a cache/manual cache mechanism and misses this, tests will start growing in memory again...
For starters I wrote a loop that goes over sys.modules dict and reloads (loops twice) all modules of my code. this worked amazingly - all references were freed properly, but it seems it cannot be used in production/serious code for multiple reasons:
old python versions break when reloading and classes that inherit meta-classes are redefined (I still don't get how this breaks).
unit tests survive the reload and sometimes have bad instances to old classes - especially if the class uses another classes instance. Think super(class_name, self) where self is the previously defined class, and now class_name is the redefined-same-name-class. | 1 | 0 | 0 | I am running a large unit test repository for a complex project.
This project has some things that don't play well with large test amounts:
caches (memoization) that cause objects not to be freed between tests
complex objects at module level that are singletons and might gather data when being used
I am interested in each test (or at least each test suite) having its own "python-object-pool" and being able to free it after.
Sort of a python-garbage-collector-problem workaround.
I imagine a python self-contained temporary and discardable interpreter that can run certain code for me and after i can call "interpreter.free()" and be assured it doesn't leak.
One tough solution for this I found is to use Nose or implement this via subprocess for each time I need an expendable interpreter that will run a test. So each test becomes "fork_and_run(conditions)" and leaks no memory in the original process.
Also saw Nose single process per each test and run the tests sequantially - though people mentioned it sometimes freezes midway - less fun..
Is there a simpler solution?
P.S.
I am not interested in going through vast amounts of other peoples code and trying to make all their caches/objects/projects be perfectly memory-managed objects that can be cleaned.
P.P.S
Our PROD code also creates a new process for each job, which is very comfortable since we don't have to mess around with "surviving forever" and other scary stories. | Temporary object-pool for unit tests? | 0 | 0 | 0 | 128 |
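A small sketch of the "fork_and_run" idea mentioned in the question, using multiprocessing so each test's caches and module-level singletons die with their child process:

```python
import multiprocessing

def fork_and_run(test_callable, *args):
    """Run one test in a throwaway process; its memory is freed when it exits."""
    proc = multiprocessing.Process(target=test_callable, args=args)
    proc.start()
    proc.join()
    return proc.exitcode == 0          # a non-zero exit (uncaught exception) means failure

def example_test():
    assert 1 + 1 == 2

if __name__ == "__main__":
    print(fork_and_run(example_test))
```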
44,392,159 | 2017-06-06T14:02:00.000 | 1 | 0 | 0 | 0 | python,django,virtualenv | 44,392,332 | 5 | false | 1 | 0 | This is to some extent an opinion based question, but because it concerns good practices I'd like to drop few lines.
Some instructors in Django videos don't say anything about virtualenvs, maybe because their course aims at Django and not virtualenvs and their videos can't have it all.
It is generally a good practice to use virtual environments for Python, especially for web development. You'll start with some tutorial and use a certain version of Django. For your next project, you'd eventually like to go with the latest or maybe the LTS version. Don't mess everything up by installing packages globally as root.
Beyond virtualenv I'd strongly recommend using virtualenvwrapper, which makes the use of virtual environments a real pleasure. | 3 | 0 | 0 | I'm a beginner, and I'm learning Django for web development. So I want to know for my little test, should in always have a virtualenv or can I continue to learn without installing it? I've asked because in youtube Django videos, some instructors are installing it, and others aren't. | Should I always use virtualenvs in Django? | 0.039979 | 0 | 0 | 987 |
44,392,159 | 2017-06-06T14:02:00.000 | 1 | 0 | 0 | 0 | python,django,virtualenv | 44,392,512 | 5 | false | 1 | 0 | Your question is:
Can I continue without installing it?
So the answer is yes.
The virtualenv is not a requirement to set up a Django project. Nevertheless, you will eventually use virtualenvs in the future when you are more aware of dependencies, so why don't you start to learn now how to use them?
A virtualenv is not the only solution to isolate your project's dependencies but is the most commonly used in Python developments. | 3 | 0 | 0 | I'm a beginner, and I'm learning Django for web development. So I want to know for my little test, should in always have a virtualenv or can I continue to learn without installing it? I've asked because in youtube Django videos, some instructors are installing it, and others aren't. | Should I always use virtualenvs in Django? | 0.039979 | 0 | 0 | 987 |
44,392,159 | 2017-06-06T14:02:00.000 | 3 | 0 | 0 | 0 | python,django,virtualenv | 44,392,668 | 5 | true | 1 | 0 | No. It's not mandatory to use virtualenvs but it might help.
I think you need to understand what virtualenvs does and when to use it.
What virtualenvs does is creating an isolated environment(for example, a django project) from others.
It's not just a Django component. It can be used for other projects as well.
Let's assume that there are two different projects, A and B. It doesn't matter if they are Django projects.
In project A, you might want to use python module AAA(version 1.00). And in project B, you also want to use python module AAA(version 2.00). Both use the same module but their versions are different.
In those cases you can use virtualenvs to create two different environments and isolate those. And install AAA version 1.0, AAA version 2.0 in those environment respectively.
When people start a new project, it's very natural and common to create a new isolated environment only for it. Then it would be easy to move the project to another computer or recreate exactly the same environment in other machines. Also it won't be affected by other previously installed modules or configuration. If there are other modules before your project might work differently and you might not find that until you deploy.
So it's not mandatory but using it is a good practice.
Also there are many tools for creating an isolated environment in different layers like Conda, Docker etc... Take a look at those as well. | 3 | 0 | 0 | I'm a beginner, and I'm learning Django for web development. So I want to know for my little test, should in always have a virtualenv or can I continue to learn without installing it? I've asked because in youtube Django videos, some instructors are installing it, and others aren't. | Should I always use virtualenvs in Django? | 1.2 | 0 | 0 | 987 |
44,392,676 | 2017-06-06T14:24:00.000 | 1 | 0 | 0 | 0 | python,sql,oracle,pandas,dataframe | 44,519,380 | 1 | true | 0 | 0 | Removing pandas and just using cx_Oracle still resulted in an integer overflow so in the SQL query I'm using:
CAST(field AS NUMBER(19))
At this moment I can only guess that any field between NUMBER(11) and NUMBER(18) will require an explicit CAST to NUMBER(19) to avoid the overflow. | 1 | 0 | 1 | I'm running pandas read_sql_query and cx_Oracle 6.0b2 to retrieve data from an Oracle database I've inherited to a DataFrame.
A field in many Oracle tables has data type NUMBER(15, 0) with unsigned values. When I retrieve data from this field the DataFrame reports the data as int64 but the DataFrame values have 9 or fewer digits and are all signed negative. All the values have changed - I assume an integer overflow is happening somewhere.
If I convert the database values using to_char in the SQL query and then use pandas to_numeric on the DataFrame the values are type int64 and correct.
I'm using Python 3.6.1 x64 and pandas 0.20.1. _USE_BOTTLENECK is False.
How can I retrieve the correct values from the tables without using to_char? | pandas read_sql_query returns negative and incorrect values for Oracle Database number field containing positive values | 1.2 | 1 | 0 | 577 |
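A minimal sketch of the CAST workaround described in the answer above. The connection string, table and column names are hypothetical, and the exact NUMBER precision that triggers the overflow may differ between driver versions.

```python
import cx_Oracle
import pandas as pd

# Hypothetical DSN, table and column names
conn = cx_Oracle.connect("user/password@host:1521/service")

# Casting the NUMBER(15, 0) column up to NUMBER(19) makes the driver hand the
# values back without wrapping them into a signed 32-bit integer
query = "SELECT CAST(big_id AS NUMBER(19)) AS big_id, other_col FROM my_table"
df = pd.read_sql_query(query, conn)
print(df.dtypes)  # big_id should now come back as int64 with the original positive values
```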
44,395,049 | 2017-06-06T16:14:00.000 | 13 | 0 | 1 | 0 | python,regex,syntax | 44,395,132 | 2 | true | 0 | 0 | It's a bit more than the < symbol, in the regular expression you've provided.
What's actually there is a 'Negative lookbehind': (?<! ) which is saying "What's before this is not...". In your case, it's looking for }, on the condition that what comes before it is not \s - whitespace (tabs, spaces...) | 1 | 10 | 0 | I have the regular expression re.sub(r"(?<!\s)\}", r' }', string). What does the (?<!…) sequence indicate? | What do the "(?<!…)" symbols mean in a Python regular expression? | 1.2 | 0 | 0 | 13,318 |
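A small self-contained illustration of the negative lookbehind discussed above; the sample string is made up.

```python
import re

s = "a} b } c}"
# (?<!\s)\} matches a closing brace only when the character before it is NOT
# whitespace, so the already-spaced "b }" is left untouched
print(re.sub(r"(?<!\s)\}", r" }", s))
# -> "a } b } c }"
```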
44,395,496 | 2017-06-06T16:41:00.000 | 1 | 0 | 0 | 0 | python,django,rest,woocommerce | 47,623,906 | 2 | false | 1 | 0 | Woocommerce now has a REST webhooks API that can push orders to your django when orders are made/updated/deleted - its great.
I'm doing this with my site Adventuretickets.nl - you can ask for code samples.
Google woocommerce webhooks API for documentation on what exactly they will send you on new order. You can write normal django views to receive the JSON.
If you do need to pull from Woocommerce, you can use package django-woocommerce. | 1 | 2 | 0 | I am trying to use the REST api to link a wordpress e-commerce (using woocommerce) website and a Django app.
My question is, how can I execute a python script inside Django, and return data after a purchase has been made on the e-commerce website. | Django and woocommerce | 0.099668 | 0 | 0 | 2,463 |
44,396,067 | 2017-06-06T17:18:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 44,396,713 | 1 | false | 0 | 0 | use winscp to keep your remote directory in sync
install ssh
add external tool of "run remote" (I assume this is available)
ssh [email protected] -t "cd /path/to/proj && python myproj.py"
if you have ssh keys setup it should just work ... if not you will probably be prompted for a password | 1 | 2 | 0 | PyCharm Professional has a nice feature of remote development -- developing codes on your local machine and running them on a remote server. Without PyCharm Professional, what would be a good way of mimicking this feature? I really like PyCharm so I won't give up for another IDE. I used to package the codes into an egg file and sftp it to a remove server but this is cumbersome, since I have to make the setup files, etc.
Thanks in advance for your suggestion. | what is the best way of doing a remote development using PyCharm community version | 0 | 0 | 0 | 534 |
44,396,972 | 2017-06-06T18:10:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,pip | 44,397,097 | 1 | false | 0 | 0 | First, make sure that your packages do have compatibility with the version of Python you're looking to use.
Next, run pip freeze > requirements.txt in the base directory of your Python project. This puts everything in a readable file to re-install from. If you know of any packages that require a certain version that you'll want to re-install, put package==x.x.x (where package is the package name and x.x.x is the version number) in the list of packages to make sure it downloads the correct version.
Run pip uninstall -r requirements.txt -y to uninstall all packages. Afterwards, run pip install -r requirements.txt.
This allows you to keep packages at the correct version for the ones you assign a version number in requirements.txt, while upgrading all others. | 1 | 0 | 0 | I recently updated from Python 3.5 to Python 3.6 and am trying to use packages that I had previously downloaded, but they are not working for the updated version of Python. When I try to use pip, I use the command "pip install selenium" and get the message "Requirement already satisfied: selenium in /Users/Jeff/anaconda/lib/python3.5/site-packages" How do I add packages to the new version of Python? | Install packages using pip for updated versions of python | 0.197375 | 0 | 1 | 86 |
44,397,034 | 2017-06-06T18:13:00.000 | 2 | 0 | 0 | 0 | python,pandas,dataframe,indexing | 44,397,071 | 2 | false | 0 | 0 | row_2 = df[['B', 'C']].iloc[1]
OR
# Convert column to 2xN vector, grab row 2
row_2 = list(df[['B', 'C']].apply(tuple, axis=1))[1] | 1 | 2 | 1 | Let's say I have a dataframe df with columns 'A', 'B', 'C'
Now I just want to extract row 2 of df and only columns 'B' and 'C'. What is the most efficient way to do that?
Can you please tell me why df.ix[2, ['B', 'C']] didn't work?
Thank you! | Get a subset of data from one row of Dataframe | 0.197375 | 0 | 0 | 52 |
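A short sketch of the usual ways to pull one row and two columns with current pandas; .ix was deprecated, so .iloc/.loc (shown below with made-up data) are its standard replacements.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})

row = df[["B", "C"]].iloc[1]                      # position-based, as in the answer above
row_by_label = df.loc[df.index[1], ["B", "C"]]    # label-based equivalent
print(row)
```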
44,398,043 | 2017-06-06T19:11:00.000 | 0 | 0 | 1 | 0 | python,download,pip,installation,anaconda | 44,398,117 | 2 | false | 0 | 0 | You can install Anaconda to virtualenv and write your venv to flashdrive or write only site-packages/anaconda folder to flash drive | 1 | 0 | 0 | I am unable to install pip normally because of a proxy at work. I am going to use a flash drive to transfer it onto the computer. Is it possible to put just pip onto a flashdrive and transfer it onto different computer to install? If so how? | Putting Anaconda pip onto Flash drive and installing off it | 0 | 0 | 0 | 520 |
44,399,531 | 2017-06-06T20:45:00.000 | 0 | 1 | 0 | 0 | python-3.x,ssh | 44,401,977 | 1 | true | 0 | 0 | Found this answer out through reddit.
Add the key 'verbose': False to your network device dictionary. More information on the Netmiko Standard Tutorial page | 1 | 0 | 0 | I have a quick question in regards to the Netmiko module (based on the Paramiko module).
I have a for loop that iterates through a bunch of Cisco boxes (around 40)...and once they complete I get the following each time a SSH connection establishes:
SSH connection established to ip address:22
Interactive SSH session established
This isn't in my print statements or anything, it's obviously hard coded within ConnectionHandler (which I use to make the SSH connections).
This output really makes my output muddled and full of 40 extra lines I do not need. Is there any way I can get these removed from the output?
Regards, | Remove Netmiko Automatic Output | 1.2 | 0 | 1 | 384 |
44,399,558 | 2017-06-06T20:46:00.000 | 0 | 0 | 1 | 0 | python,xlwings | 52,744,358 | 2 | false | 0 | 0 | Go to your Anaconda folder, if you installed xlwings correctly, you'll see the dll files there, but they're named after their version number ie. "xlwings32-11.8.dll"
Change the name of the corresponding file to "xlwings32.dll". | 1 | 1 | 0 | I originally had an older version of Anaconda and Python2.7 on my machine, and xlwings worked great. I recently uninstalled Anaconda / Python (via window's add remove programs...), and installed the newest version of Anaconda with Python 3.6 (which includes xlwings).
I went back to my excel sheets, and suddenly it can't find the xlwings dll. I've checked:
The DLL is in the same folder as python.exe
My environment variables are correct, and have the python.exe folder path in PATH.
I can, in Command Line, go to python $, and import xlwings, and see its path is in the correct spot...
Thanks for any ideas! | xlwings - can't find xlwings32.dll | 0 | 0 | 0 | 3,161 |
44,402,355 | 2017-06-07T02:07:00.000 | 1 | 0 | 1 | 0 | python,printing | 44,402,484 | 2 | false | 0 | 0 | OPTIONS:
Remember what you were trying to do when you went in and edited the .py files in the first place, and that might give you a recollection of which files you changed.
Search the python folder and sort by date stamp. The py files that have been recently changed are LIKELY to be the ones you edited. Then go in and revert the edit you made
Flush your Python install and reinstall from scratch. (The 'Nuke the planet from orbit, it's the only way to be sure' option)
In general its a BAD idea to go in and edit the core Python files unless you REALLY know what you are doing. | 1 | 3 | 0 | Given the novice that I am, I have an extremely naive question, apologies for that
I modified certain core .py files by adding print. Now it's printing a bunch of lines and I'm unable to trace which file I added the print statement to. How can I find out what line / file the print is occurring in, so that I can go back and remove it? It happens when code executes import sklearn.
I tried debugging and going back to certain files I had modified and searching for print statements, but was not able to trace them back. | Trace where print is occurring? | 0.099668 | 0 | 0 | 86
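A hedged sketch of option 2 from the answer above: list the most recently modified .py files under the installed sklearn package and show any lines containing a print call. It assumes scikit-learn is importable; adapt the starting directory if the edit was made elsewhere.

```python
import os
import sklearn

pkg_dir = os.path.dirname(sklearn.__file__)

# Collect .py files with their modification times
candidates = []
for root, _, files in os.walk(pkg_dir):
    for name in files:
        if name.endswith(".py"):
            path = os.path.join(root, name)
            candidates.append((os.path.getmtime(path), path))

# Show print calls in the 20 most recently changed files
for _, path in sorted(candidates, reverse=True)[:20]:
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            if "print(" in line:
                print(path, lineno, line.strip())
```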
44,403,526 | 2017-06-07T04:29:00.000 | 0 | 0 | 1 | 0 | python-3.x,network-programming,powershell-3.0 | 44,403,602 | 1 | false | 0 | 0 | Well the easiest thing would be when the host of your mother program has a fixed IP or DNS.
Then your clients could simply connect back to your mother program.
It sounds a bit like a reverse shell you want to program here and I want to mention that according to German Law already the programming of this is illegal. Probably this is the reason why you don't find much about the topic. | 1 | 0 | 0 | My original concept is that i would have a mother program controlling the child programs installed on some (possible up to 5) computers. The child programs job would be to take commands from the mother program and execute those commands on their host computers. I know for this project my program would need to deal with ip addresses. I also know it will have to interact with the windows command promt/powershell. If anyone can give me advice on where to start it would be very much appreciated! Google doesnt return much on this topic for some reason. Thanks people! | Creating a mother/child program in python | 0 | 0 | 0 | 52 |
44,403,745 | 2017-06-07T04:50:00.000 | 5 | 0 | 0 | 0 | python,machine-learning,neural-network,deep-learning,caffe | 44,403,857 | 1 | false | 0 | 0 | There is a fundamental difference between weights and input data: the training data is used to learn the weights (aka "trainable parameters") during training. Once the net is trained, the training data is no longer needed while the weights are kept as part of the model to be used for testing/deployment.
Make sure this difference is clear to you before you proceed.
Layers with trainable parameters have a filler to set the weights initially.
On the other hand, an input data layer does not have trainable parameters, but it should supply the net with input data. Thus, input layers have no filler.
Based on the type of input layer you use, you will need to prepare your training data. | 1 | 4 | 1 | I am studying deep learning and trying to implement it using CAFFE- Python. can anybody tell that how we can assign the weights to each node in input layer instead of using weight filler in caffe? | Deep learning using Caffe - Python | 0.761594 | 0 | 0 | 166 |
44,404,237 | 2017-06-07T05:30:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-endpoints | 44,442,742 | 1 | false | 1 | 0 | Responses from Google Cloud Endpoints APIs must be of content-type application/json. You could bundle the binary data of the PDF in a JSON value by encoding it with base64. | 1 | 0 | 0 | I would like to know whether I could return a pdf as a response to Google App Engine Endpoints for Python. | What are the supported return types for Google App Engine Endpoints for Python? | 0 | 0 | 0 | 40 |
44,406,663 | 2017-06-07T07:50:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt5,qlineedit,qtwidgets | 44,407,346 | 1 | false | 0 | 1 | so umm... sightly embarrassing,
setPlaceHolderText() is supposed to have a lowercase h -> setPlaceholderText()
and QLayout.setFixedSize is suppose to have uppercase S -> SetFixedSize() | 1 | 0 | 0 | I'm trying figure out whats wrong with my PyQt5 install, I've looked at all the documentation and I should be able to use placeHolderText and setPlaceHolderText() but it doesn't look like it does. The QtWidgets.QLineEdit() works and shows up on my gui but can get it to setPlaceHolderText. Also QLayout.setFixedSize also returns the same error. Importing PyQt5 doesn't return any errors so these should work too.
I installed PyQt5 through pip3 on python 3.5.2, has anyone had this issue before, I'm not sure what I've done wrong. | PyQt5 5.8.2 QLineEdit has no attribute 'setPlaceHolderText' | 0 | 0 | 0 | 847 |
44,407,686 | 2017-06-07T08:40:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,python-3.x | 44,407,755 | 1 | false | 0 | 0 | sys does not do a recursive lookup because that is wasteful. Explicitly specify the complete path to ONLY the directories/sub-directories you want to include in your PATH, and python will only look for modules to import there. | 1 | 1 | 0 | Why we add 2 directory path in windows environment variableC:\Python27\;C:\Python27\Scripts\ ?
Is C:\Python27\ alone not sufficient ?
Any helpful answer will be appreciated ! | Why we add 2 directories path in Python in Windows environment variable | 0.197375 | 0 | 0 | 54 |
44,408,340 | 2017-06-07T09:11:00.000 | 1 | 0 | 1 | 0 | python,odoo-9 | 44,408,527 | 1 | false | 1 | 0 | To install a new module in Odoo you have to :
Set access rights on your new module: chmod -R 755 custom_module
As the admin user, activate the technical features
restart the service service odoo restart
Update module list | 1 | 0 | 0 | I have installed odoo - 9 in my local , I cannot able to see my custom module in update module list. I have activated "Developer Mode" even though it is not appearing in my module list. I also updated the module list, Can anybody guide me to install "Custom Module" . | odoo - 9 how to install custom module in odoo 9 | 0.197375 | 0 | 0 | 156 |
44,409,185 | 2017-06-07T09:49:00.000 | 1 | 1 | 0 | 0 | python-2.7,cisco,cisco-ios | 44,484,886 | 1 | true | 0 | 0 | You need to send terminal length 0 command first. That will disable pagination that is enabled on the router by default. | 1 | 0 | 0 | I'm using a python script to access to cisco switch using SSH or telnet .. I'm using module pexpect .. the connection done. My problem that, when I want to show all the configuration using
telconn.sendline("sh run " + "\r")
I can't see all the configuration because I face a problem with --more--. So how can I avoid this, and what can I do to see all the configuration? | Remote access using python 2.7 | 1.2 | 0 | 0 | 94
44,409,981 | 2017-06-07T10:24:00.000 | 4 | 0 | 0 | 0 | python,opencv | 44,412,918 | 2 | true | 0 | 0 | I think it is a little more complicated than what is suggested in the comments, since you are dealing with temperatures. You need to revert the color mapping to a temperature value image, then apply one colormap with OpenCV that you like.
Going back to greyscale is not so straightforward as converting the image from BGR to greyscale, because you have colors like dark red that will may be mapped as dark grey as well as dark blue colors, however they are in totally opposite parts of the scale.
Both of your images are in different scale (temperature wise) as well, so, if you pass them back to grey scale, black is not the same temperature as the other one, so it is not possible to compare them directly.
To get a proper scale value you can try to get the upper rectangle (the one that shows the scale) and separate them in equal pieces and divide the temperature range with the same number of divisions. This will give you a color for each temperature. Then transform both images to cv::Mat double and each pixel will have the temperature value.
Finally you must decide what will be your temperature range to decide the colors for all the images you have. For example you can choose 25-45. Then normalize the images with temperatures (the one with doubles) with the range you selected and normalize them to greyscale images (0 will be 25 and 255 will be 45) and apply the color map to this images.
I hope this helps. | 2 | 1 | 1 | I have a set of thermal images which are encoded with different types of color maps.
I want to use a constant color map to make fault intelligence easier.
Please guide me on how to go about this. | How to convert between different color maps on OpenCV? | 1.2 | 0 | 0 | 942 |
44,409,981 | 2017-06-07T10:24:00.000 | 0 | 0 | 0 | 0 | python,opencv | 44,412,645 | 2 | false | 0 | 0 | You can use cvtcolor to HSV, and then manually change Hue. After you change hue, you can cvt color back to rbg. | 2 | 1 | 1 | I have a set of thermal images which are encoded with different types of color maps.
I want to use a constant color map to make fault intelligence easier.
Please guide me on how to go about this. | How to convert between different color maps on OpenCV? | 0 | 0 | 0 | 942 |
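A minimal sketch of the "one constant colormap for every image" idea from the first answer. The temperature array, the 25-45 range and the choice of COLORMAP_JET are placeholders to be replaced with the values recovered from your own thermal images.

```python
import cv2
import numpy as np

temps = np.random.uniform(20.0, 50.0, size=(240, 320))  # placeholder temperature data

t_min, t_max = 25.0, 45.0  # fixed range shared by all images
norm = np.clip((temps - t_min) / (t_max - t_min), 0.0, 1.0)
gray = (norm * 255).astype(np.uint8)

# One constant OpenCV colormap, so every thermal image ends up on the same scale
colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
cv2.imwrite("thermal_constant_map.png", colored)
```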
44,411,632 | 2017-06-07T11:40:00.000 | 0 | 0 | 0 | 1 | python,django,amazon-s3,uwsgi | 44,416,796 | 2 | false | 1 | 0 | What should be the upper limit?
That depends on your hardware, such as the number of cores in your server
Is there any other solution that I can use in this scenario?
Consider using Celery/rabbitMQ. Celery could be used to process asynchronously file or upload to S3 and notify the events to rabbitMQ | 1 | 1 | 0 | I am developing a django application which handles lots of file uploads from multiple clients periodically. Each file is around 1 to 10 megabytes.
Since uploads are thread blocking I can only serve a number of requests equivalent to the number of uwsgi workers/processes (4 in my case).
What should I do to increase throughput?
Is it advisable to increase number of processes/workers in uwsgi?
What should be the upper limit?
Is there any other solution that I can use in this scenario?
Stack: django+uwsgi+nginx running on amazon ec2 and s3 buckets used for storing zip files. | Django - Handling several file upload requests at a time? | 0 | 0 | 0 | 700 |
44,416,700 | 2017-06-07T15:18:00.000 | 10 | 0 | 0 | 1 | python,docker,containers,lxc,lxd | 44,417,108 | 1 | true | 0 | 0 | If for example someone wants to run a python script which downloads global weather data every 12 hours, why would they use docker?
I wouldn't, in this case. Set up a cron job to run the script.
What is the advantage of using docker to Linux LXC/LXD containers?
Docker was originally built on top of LXC containers. Since then, it has moved to a newer standard, libcontainer.
The major benefit here, is cross-platform compatibility with a much larger ecosystem.
The world of linux containers with lxc probably still has a place, but Docker is quickly bringing containers to everyone and not just linux users.
I am struggling to understand the benefits of using Docker.
for me, the big advantage i see in docker is in my development efforts. i no longer have to worry about older projects that require older runtime libraries and dependencies. it's all encapsulated in docker.
then there's the production scaling and deployment story. with the community and user base around docker, there are simple solutions for nearly every scenario - from one server deployments, to auto-scaling and netflix level stuff that i'll never get near.
I'm just finding it difficult to understand Docker outside of a webapp server context
think slightly more broadly to any app or process that runs continuously, providing an API or service for other applications to consume. it's typically web based services, yes, but any TCP/IP or UDP enabled process should be able to work.
database systems, cache systems, key-value stores, web servers... anything with an always running process that provides an API over TCP/IP or UDP.
the big benefit here is encapsulating the service and all of it's runtime dependencies, like i was saying before.
need to run MongoDB 2.3 and 3.2 on your server? no problem. they are both in separate containers, can both run independently.
want to run mysql for this app, and mongo for that app? done.
containerization is powerful in helping keep apps separate from each other, and in helping to reduce the "works on my machine" problem. | 1 | 7 | 0 | I am trying to understand how docker is useful outside of the webapp space.
If for example someone wants to run a python script which downloads global weather data every 12 hours, why would they use docker?
What is the advantage of using docker to Linux LXC/LXD containers?
I am struggling to understand the benefits of using Docker. | How is docker useful for non webapp applications (e.g. Python scripts)? What is the advantage of using it over LXC/LXD? | 1.2 | 0 | 0 | 2,216 |
44,419,017 | 2017-06-07T17:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,keras | 44,433,064 | 3 | false | 0 | 0 | I had some issues with my tensorflow's installation too.
I personally used anaconda to solve the problem.
After installing anaconda (maybe uninstall the old one if you already have one), launch an anaconda prompt and input conda create -n tensorflow python=3.5; after that, you must activate it with activate tensorflow.
Once it's done, you have to install tensorflow on your python 3.5.
For that, use:
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.0rc1-cp35-cp35m-win_amd64.whl
for cpu version
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.0rc1-cp35-cp35m-win_amd64.whl for gpu version
You now have the r1.2 version of tensorflow.
Then, just use pip install keras and keras will be installed.
Now, all you have to do is launch anaconda navigator, select tensorflow on the scrolling menu and launch spyder/jupyter.
You can now use Keras with a tensorflow backend in Python 3.5
Hope it helped someone ! (It take me so much time to find it by myself) | 1 | 3 | 1 | This is my first question on stackoverflow, please bear with me as I will do my best to provide as much info as possible.
I have Windows 10 on a 64-bit processor. My end goal is to use keras within spyder. The first thing I did was update python to 3.6 and install tensorflow, which seemed to work. When I attempted to get keras, however, it wasn't working, and I read that keras worked on python 3.5. I successfully installed keras on python 3.5, which automatically installed theano as the backend.
But now I have two spyder environments, one running off of python 3.5, one off of 3.6. The 3.5 reads keras but doesn't go through with any modules because it cannot find tensorflow. The 3.6 can read tensorflow, but cannot find keras.
Please let me know what you would recommend. Thank you! | Switching from tensorflow on python 3.6 to python 3.5 | 0 | 0 | 0 | 2,643 |
44,420,434 | 2017-06-07T18:40:00.000 | 0 | 0 | 0 | 0 | python,image-processing,tensorflow | 44,421,256 | 2 | false | 0 | 0 | A file-name suffix is just a suffix (which sometimes help to get info about that file; e.g. Windows decides which tool is called when double-clicked). A suffix does not need to be correct. And of course, changing the suffix will not change the content.
Every format will need its own decoder. JPG, PNG, MAT and co.
To some extent, these are automatically used by reading out metadata (giving some assumptions!). Many image-tools have some imread-function which works for jpg and png, even if there is no suffix (because there is checking for common and supported image-formats).
I'm not sure what tensorflow does automatically, but:
jpg, png, bmp should be no problem
worst-case: use scipy to read and convert
mat is usually a matrix (with infinite different encodings) and often matlab-based
scipy can read many matlab-based formats
bin can be anything (usually stands for binary; no clear mapping like the above)
Don't get me wrong, but i expect someone trying to use tensorflow (not a small, not a simple tool) to know that changing a suffix should never magically transform the content to the new format (especially in the lossless/lossy case like png, jpg). I hope you evaluated this decision and you are not running blindly into using a popular tool. | 1 | 1 | 1 | can the tensorflow read a file contain a normal images for example in JPG, .... or the tensorflow just read the .bin file contains images
What is the difference between a .mat file and a .bin file?
Also, when I rename the .bin file to .mat, does the data in the file change?
Sorry if my language is not clear; I cannot speak English very well. | the difference between .bin file and .mat files | 0 | 0 | 0 | 574
44,423,730 | 2017-06-07T22:24:00.000 | 0 | 0 | 1 | 0 | python,string,python-3.x,input,eval | 44,423,780 | 2 | false | 0 | 0 | This is a somewhat difficult problem because it is hard for the computer to know which of the quotations are extraneous and which are the intended ones. The best solution which i can think of would be to first remove all double quotes and spaces, then add back in double quotes after any instance of the characters "[", insert a space and double quotes after any comma, and insert double quotes before every comma and "]". This is not an elegant solution and may take a few lines of code but unless you can sanitize the input earlier in the program this is probably the best solution. | 1 | 0 | 0 | I am writing a program in Python that takes in user input (not using Python's built-in input) that is a string that contains a list of strings, ie. '["hello", "world"]'. However, some inputs will have multiple quotes inside, ie. '["Hello", "wor"ld"]'. I need the string to always eval() to a list. Any advice on cleansing the input string to ensure that it will always eval? Already tried .replace('"', '\"'). | Eval list within quotes | 0 | 0 | 0 | 1,245 |
44,424,308 | 2017-06-07T23:26:00.000 | 0 | 1 | 0 | 0 | python,c++,arrays,numpy,swig | 44,424,756 | 2 | false | 0 | 1 | You'll find that passing things back and forth between languages is much easier if you use a one-dimensional array in which you access elements using, e.g. arr[y*WIDTH+x].
Since you are operating in C++ you can even wrap these arrays in classes with nice operator()(int x, int y) methods for use on the C++ side.
In fact, this is the internal representation which Numpy uses for arrays: they are all one-dimensional. | 1 | 0 | 0 | C++ part
I have a class a with a public variable 2d int array b that I want to print out in python.(The way I want to access it is a.b)
I have been able to wrap most of the code and I can call most of the functions in class a in python now.
So how can I read b in python? How can I read it into a numpy array with numpy.i? (I found some solutions on how to work with a function, not a variable.) Is there a way I can read any array in the c++ library? Or do I have to deal with each of the variables in the interface file?
for now b is <Swig Object of type 'int (*)[24]' at 0x02F65158> when I try to use it in python
ps:
1. If possible I don't want to modify the cpp part.
I'm trying to access a variable, not a function.
So don't refer me to links that doesn't really answer my question, thanks. | Reading c++ 2d array in python swig | 0 | 0 | 0 | 569 |
44,425,362 | 2017-06-08T01:45:00.000 | 0 | 0 | 1 | 0 | python-3.x | 44,425,421 | 1 | false | 0 | 0 | To get the last digit, you have to divide the number by 10 and get the remainder.
For example, to get the last digit of 123, you can do 123 % 10 which results to 3.
To remove the last digit, you have to divide the number by 10 and discard the remainder.
For example, to remove the last digit of 123, you can do 123 // 10 which results to 12. | 1 | 0 | 0 | I'm doing procedural programming and for my final assignment I have to create an application that will allow the user to do the following:
Allow the user to enter the customer’s details: name, postcode and loyalty card details
Check if the card has expired
Check the loyalty card number is valid by:
Allowing the user to enter the 8 digits shown on the front of the card
Removing the 8th digit and storing it as ‘check_digit’
Reversing the numbers
Multiplying the 1st, 3rd, 5th and 7th digits by 2
If the result of the multiplication is greater than 9 then subtract 9 from the result
Adding together the resulting 7 digits
Checking if the sum of the added digits plus the ‘check_digit’ is divisible by 10
Output whether the loyalty card is valid or not
Output customer and loyalty card details.
But, how do I go about removing the 'last digit' then storing it as a check_digit? Sorry if this is vague, this is copied directly from my assignment brief. | Assignment 3 in Procedural Programming | 0 | 0 | 0 | 77 |
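A sketch of the validation steps listed in the brief, building on the answer's arithmetic (if the number were held as an int, number % 10 gives the check digit and number // 10 removes it). The sample card number is made up, and taking the card number as a string keeps any leading zeros intact.

```python
def loyalty_card_valid(card_number: str) -> bool:
    digits = [int(ch) for ch in card_number]   # the 8 digits on the card front
    check_digit = digits[-1]                   # remove and store the 8th digit
    remaining = digits[:-1][::-1]              # reverse the remaining 7 digits

    total = 0
    for i, d in enumerate(remaining):
        if i % 2 == 0:                         # 1st, 3rd, 5th and 7th digits
            d *= 2
            if d > 9:                          # subtract 9 if the doubling exceeds 9
                d -= 9
        total += d

    return (total + check_digit) % 10 == 0

print(loyalty_card_valid("12345678"))          # made-up card number
```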
44,436,899 | 2017-06-08T13:19:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,tensorflow,neural-network | 50,867,766 | 1 | false | 0 | 0 | This normally means that you did not set the phase_train parameter back to false after testing. | 1 | 0 | 1 | I am using tensorflow for a classification problem.
I have some utility for saving and loading my network. When I restore the network, I can specify a different batch size than for training.
My problem is that I am getting different test results when I restore with a different batch size. However, there is no difference when using the same batch size.
EDIT: Please note that I am not using dropout.
The difference is between 0% and 1% (0.5% on average).
My network is a fully connected layer that predicts two different outputs. I did not have the issue when I only had one task to predict.
My loss op is a sum of both losses.
What could be the issue? Does it have to do with Tensorflow's parallelization strategy? | Is it normal to obtain different test results with different batch sizes with tensorflow | 0 | 0 | 0 | 361 |
44,437,307 | 2017-06-08T13:38:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,tensorboard | 44,444,164 | 1 | true | 0 | 0 | Not in tensorboard, but the predict method can return the class names instead of numbers if you provide label_keys. | 1 | 0 | 1 | What is label_keys parameter good for in a Classifier.
Can you visualize the labeled data on Tensorboard at the Embeddings section? | What is label_keys parameter good for in a Classifier - Tensorflow? | 1.2 | 0 | 0 | 65 |
44,438,534 | 2017-06-08T14:30:00.000 | 4 | 0 | 1 | 0 | python,pyvisa | 44,440,006 | 1 | false | 0 | 0 | The problem was that the device was still being connected to by another instance. The cause of that was because running visa.ResourceManager().list_resources() was listing the device twice, once as a USB device and also as an ASRL.
The solution was to call visa.ResourceManager().list_resources(query='USB?*') to make sure that the instrument is only listed once in my results. (Alternately, I could have disabled USB or GPIB in the device settings.) Then call device.clear() immediately after opening the resource to make sure that the buffers were empty because at the error there might have been unread data there. This solved the problem. | 1 | 2 | 0 | I am working with a Keysight waveform generator and pyVisa and I notice that if my code doesn't complete successfully and ends I need to perform a hard reset of the device to attempt my code again.
I have tried resetting the device under the __del__ method so that the device is in a known state but that doesn't seem to work. I've also tried using pyvisa.resources.SerialInstrument.clear(). Has anyone else had a problem like this and how did you solve it?
The host computer is running windows 7. PyVISA version is 1.8. After the program fails by me cancelling the python script I will try to send a simple *IDN? SCPI command to the device and I get error:
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
If I try to call pyvisa.resources.SerialInstrument.clear() on the device I get error
pyvisa.errors.VisaIOError: VI_ERROR_INV_SETUP (-1073807302): Unable to start operation because setup is invalid (usually due to attributes being set to an inconsistent state). | PyVISA SerialInstrument requires hard reset to connect after failure | 0.664037 | 0 | 0 | 2,062 |
44,439,375 | 2017-06-08T15:05:00.000 | 3 | 0 | 0 | 0 | python,numpy | 44,439,439 | 2 | true | 0 | 0 | Your best bet is probably something like np.count_nonzero(x > threshold), where x is your 2-d array.
As the name implies, count_nonzero counts the number of elements that aren't zero. By making use of the fact that True is 1-ish, you can use it to count the number of elements that are True. | 1 | 4 | 1 | I have a numpy 2d array (8000x7200). I want to count the number of cells having a value greater than a specified threshold. I tried to do this using a double loop, but it takes a lot of time.
Is there a way to perform this calculation quickly? | Conditional summation in python | 1.2 | 0 | 0 | 1,618 |
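A tiny self-contained version of the suggestion above, using random placeholder data at the size mentioned in the question.

```python
import numpy as np

x = np.random.random((8000, 7200))
threshold = 0.75

count = np.count_nonzero(x > threshold)   # vectorised, no Python-level double loop
# equivalent alternative: (x > threshold).sum()
print(count)
```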
44,439,443 | 2017-06-08T15:08:00.000 | 40 | 0 | 1 | 0 | python,opencv,pip,package | 48,156,474 | 13 | false | 0 | 0 | Easy and simple
Prerequisites
pip install matplotlib
pip install numpy
Final step
pip install opencv-python
Specific version
* Final step
* opencv-python==2.4.9 | 1 | 86 | 0 | I know that I could pip install opencv-python which installs opencv3, but is there a separate command or name for opencv specific version such as 2.4.9?
If not, how can I specify which version to install?
Thanks. | Python: How to pip install opencv2 with specific version 2.4.9? | 1 | 0 | 0 | 331,977 |
44,441,002 | 2017-06-08T16:24:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,svm | 44,441,810 | 1 | false | 0 | 0 | Yes, this is mostly a matter of experimentation -- especially as you've told us very little about your data set: separability, linearity, density, connectivity, ... all the characteristics that affect classification algorithms.
Try the linear and Gaussian kernels for starters. If linear doesn't work well and Gaussian does, then try the other kernels.
Once you've found the best 1 or 2 kernels, then play with the cost and gamma parameters. Gamma is a "slack" parameter: it gives the kernel permission to make a certain proportion of raw classification errors as a trade-off for other benefits: width of the gap, simplicity of the partition function, etc.
I haven't yet had an application that got more than trivial benefit from altering the cost. | 1 | 1 | 1 | I'm trying to use SVM from sklearn for a classification problem. I got a highly sparse dataset with more than 50K rows and binary outputs.
The problem is I don't know quite well how to efficiently choose the parameters, mainly the kernel, gamma and C.
For the kernels for example, am I supposed to try all kernels and just keep the one that gives me the most satisfying results or is there something related to our data that we can see in the first place before choosing the kernel ?
Same goes for C and gamma.
Thanks ! | How to choose parameters for svm in sklearn | 0 | 0 | 0 | 726 |
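The answer above boils down to structured experimentation. One common way to automate that (not mentioned in the answer, but standard scikit-learn practice) is a grid search; the data below is synthetic and the parameter grid is only an example.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the real (sparse) feature matrix and binary labels
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [1e-3, 1e-2, 1e-1]},
]
search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

For a large sparse dataset, LinearSVC is often a faster starting point than the kernelised SVC.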
44,443,999 | 2017-06-08T19:23:00.000 | 0 | 0 | 0 | 0 | python-3.x,numpy,histogram | 44,444,173 | 2 | false | 0 | 0 | Python is among the slowest production-ready languages you can use.
As you haven't posted any code, I can only provide general suggestions. They are listed in order of practicality below:
Use a faster Python implementation or compiler, such as PyPy or Cython
Use existing software with your desired functionality. There's nothing wrong with finding free software online.
Use a more efficient (or perhaps even lossy) algorithm to skip computation
Use a faster language such as Rust, C++, C#, or Java | 1 | 0 | 1 | I have about 3 million images and need to calculate a histogram for each one. Right now I am using python but it is taking of lot of time. Is there any way to process the images in batches? I have NVIDIA 1080 Ti GPU cards, so maybe if there is a way to process on the GPU?
I can't find any code or library to process the images in parallel. Any kind of help to boost up the speed is appreciated | Faster calculation histogram of a set of images | 0 | 0 | 0 | 528 |
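A hedged sketch of batching the work across CPU cores with multiprocessing; the glob pattern and worker count are hypothetical, and a GPU route (e.g. CUDA or OpenCV's GPU modules) would need a different approach.

```python
import glob
from multiprocessing import Pool

import cv2
import numpy as np

def histogram(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:                              # unreadable file
        return path, None
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return path, hist

if __name__ == "__main__":
    paths = glob.glob("images/*.jpg")            # hypothetical image location
    with Pool(processes=8) as pool:
        for path, hist in pool.imap_unordered(histogram, paths, chunksize=64):
            if hist is not None:
                np.save(path + ".hist.npy", hist)
```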
44,444,385 | 2017-06-08T19:47:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql,orm | 44,806,040 | 2 | true | 1 | 0 | My eventual solution:
Override the save method of the model, using a raw query to SELECT nextval('serial') inside the override, setting that as the value of the necessary field, then call save on the parent (super(PARENT, self).save()). | 1 | 3 | 0 | I have a PostgreSQL database that is being used by a front-end application built with Django, but being populated by a scraping tool in Node.js. I have made a sequence that I want to use across two different tables/entities, which can be accessed by a function (nexval(serial)) and is called on every insert. This is not the primary key for these tables, but simply a way to maintain order through some metadata. Using it in Node.js during the insertion of the data into the tables is trivial, as I am using raw SQL queries. However, I am struggling with how to represent this using Django models. There does not seem to be any way to associate this Postgres function with a model's field.
Question:
Is there a way to use a Postgres function as the default value of a Django model field? | Postgres Sequences as Default Value for Django Model Field | 1.2 | 1 | 0 | 2,587 |
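A minimal sketch of the save() override described in the accepted answer. The model, field and sequence names ('serial') are placeholders, and the snippet is meant to live inside a configured Django app's models.py rather than run standalone.

```python
from django.db import connection, models

class Item(models.Model):                       # hypothetical model
    order_key = models.BigIntegerField(null=True, editable=False)

    def save(self, *args, **kwargs):
        if self.order_key is None:
            with connection.cursor() as cursor:
                # 'serial' is the shared PostgreSQL sequence
                cursor.execute("SELECT nextval('serial')")
                self.order_key = cursor.fetchone()[0]
        super(Item, self).save(*args, **kwargs)
```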
44,444,634 | 2017-06-08T20:04:00.000 | 0 | 0 | 1 | 0 | python,machine-learning,virtualenv,keras,mnist | 45,195,580 | 2 | false | 0 | 0 | Do you get an error message if you just import keras? I was getting a similar error in the command line and then implemented in Spyder (using Anaconda) and it worked fine. | 1 | 1 | 1 | I want to run keras on anaconda for convolution neural network using mnist handwriting recognition. A day before everything worked fine but as I try to run the same program, i get the following error in the first line:
from keras.datasets import mnist (first line of code)
ModuleNotFoundError: No module named 'keras.datasets'; 'keras' is not a package
I also created virtual environment to use python 3.5 as my python version is 3.6. I have installed both keras and tensorflow. How do i fix the above error? Perhaps it is related to path and not error with keras. My anaconda is installed in E: whearas working environment is C:\Users\Prashant Mahato. | Cannot run keras | 0 | 0 | 0 | 1,591 |
44,451,227 | 2017-06-09T06:55:00.000 | 1 | 0 | 0 | 0 | python,numpy,deep-learning,conv-neural-network | 44,454,289 | 3 | false | 0 | 0 | The standard way is to resize the image such that the smaller side is equal to 224 and then crop the image to 224x224. Resizing the image to 224x224 may distort the image and can lead to erroneous training. For example, a circle might become an ellipse if the image is not a square. It is important to maintain the original aspect ratio. | 2 | 2 | 1 | I have a list of numpy arrays which are actually input images to my CNN. However size of each of my image is not cosistent, and my CNN takes only images which are of dimension 224X224. How do I reshape each of my image into the given dimension?
print(train_images[key].reshape(224, 224,3))
gives me an output
ValueError: total size of new array must be unchanged
I would be very grateful if anybody could help me with this. | How to reshape a 3D numpy array? | 0.066568 | 0 | 0 | 2,121 |
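A small sketch of the resize-then-center-crop approach from the answer above, using OpenCV; the dummy image is only there to show the expected output shape.

```python
import cv2
import numpy as np

def resize_and_center_crop(img, size=224):
    h, w = img.shape[:2]
    scale = size / min(h, w)                   # smaller side becomes 224
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

dummy = np.zeros((300, 500, 3), dtype=np.uint8)
print(resize_and_center_crop(dummy).shape)     # (224, 224, 3)
```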
44,451,227 | 2017-06-09T06:55:00.000 | 1 | 0 | 0 | 0 | python,numpy,deep-learning,conv-neural-network | 44,451,381 | 3 | false | 0 | 0 | Here are a few ways I know to achieve this:
Since you're using python, you can use cv2.resize(), to resize the image to 224x224. The problem here is going to be distortions.
Scale the image to adjust to one of the required sizes (W=224 or H=224) and trim off whatever is extra. There is a loss of information here.
If you have the larger image, and a bounding box, use some delta to bounding box to maintain the aspect ratio and then resize down to the required size.
When you reshape a numpy array, the produce of the dimensions must match. If not, it'll throw a ValueError as you've got. There's no solution using reshape to solve your problem, AFAIK. | 2 | 2 | 1 | I have a list of numpy arrays which are actually input images to my CNN. However size of each of my image is not cosistent, and my CNN takes only images which are of dimension 224X224. How do I reshape each of my image into the given dimension?
print(train_images[key].reshape(224, 224,3))
gives me an output
ValueError: total size of new array must be unchanged
I would be very grateful if anybody could help me with this. | How to reshape a 3D numpy array? | 0.066568 | 0 | 0 | 2,121 |
44,452,143 | 2017-06-09T07:45:00.000 | 1 | 0 | 0 | 0 | python,python-requests | 45,435,412 | 1 | false | 0 | 0 | To send certificate, you need the certificate which contains public key like server.crt. If you have this crt file then you can send it as
r=requests.get('https://server.com', verify='server.crt' )
or if you don't have that file then you can get it using get_ssl_certificate method
cert=ssl.get_server_certificate(('server.com',443),ssl_version=3)
then you can write it into a file and send it. | 1 | 3 | 0 | how to send certificate authentication in python post request,
for example, I used the following, but in a GET request:
requests.get(url, params = params, timeout=60,cert=certs)
where certs is path to certificate,
it's worked fine.
requests.post(url_post,data=params,cert = certs, timeout=60)
not working, error - SSL authentication error | python post request with certificate | 0.197375 | 0 | 1 | 6,412 |
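A hedged example of a client-certificate POST with requests; the URL and file paths are placeholders. Note that cert= carries the client certificate/key used to authenticate you, while verify= points at the CA bundle used to validate the server.

```python
import requests

resp = requests.post(
    "https://server.example.com/api",       # hypothetical endpoint
    data={"param": "value"},
    cert=("client.crt", "client.key"),      # or a single combined .pem path
    verify="ca_bundle.crt",                 # CA bundle for server validation
    timeout=60,
)
print(resp.status_code)
```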
44,454,668 | 2017-06-09T09:51:00.000 | 0 | 1 | 1 | 0 | python-3.x,i2c | 44,459,253 | 2 | false | 0 | 0 | I already did it but it continues with the same error | 1 | 0 | 0 | I'm doing a school project to read the temperature through the sensor mlx90615.
In my code an error appears:
Traceback (most recent call last):
File "/home/p/12345.py", line 21, in
import i2c
ImportError: No module named 'i2c' | install i2c library on python | 0 | 0 | 0 | 614
44,456,932 | 2017-06-09T11:44:00.000 | 1 | 0 | 0 | 0 | python-3.x,opencv,contour | 49,260,218 | 2 | false | 0 | 0 | The mode and method parameter of findContours() are enum with integer values. One can use either the keywords or the integer values assigned to it. This detail can be viewed as an intellisense in visual studio when opencv is included in a project.
Below are the associated values with each enum.
MODES
CV_RETR_EXTERNAL : 0
CV_RETR_LIST : 1
CV_RETR_CCOMP : 2
CV_RETR_TREE : 3
METHODS
CV_CHAIN_APPROX_NONE : 1
CV_CHAIN_APPROX_SIMPLE : 2
CV_CHAIN_APPROX_TC89_L1 : 3
CV_CHAIN_APPROX_TC89_KCOS : 4 | 1 | 0 | 1 | I think,I understood well the function "cv2.findContours(image, mode, method).
But I got this thing contours,hierarchy = cv2.findContours(thresh,2,1) in one of the documents of opencv. I am not getting what is the meaning of 2,1 here and why have they been used. Someone please explain it. | Finding contour in using opencv in python | 0.099668 | 0 | 0 | 777 |
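A tiny runnable illustration of the mapping listed in the answer: cv2.findContours(thresh, 2, 1) is the same call as spelling out cv2.RETR_CCOMP and cv2.CHAIN_APPROX_NONE by name. The return signature differs between OpenCV 2.x/4.x and 3.x, so the contours are taken as the second-to-last element.

```python
import cv2
import numpy as np

# Synthetic binary image with one filled rectangle
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), 255, -1)
_, thresh = cv2.threshold(img, 127, 255, 0)

# Equivalent to cv2.findContours(thresh, 2, 1)
result = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
contours = result[-2]
print(len(contours))
```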
44,457,230 | 2017-06-09T11:58:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 44,457,352 | 1 | true | 0 | 0 | This is because of the Pycharm is your default program to open (.py) files
Solution:
Go to the file properties and under general tab change the "Opens with" option as desired. | 1 | 0 | 0 | I'm very new in programming languages and I'm learning Python. I tried to install PyCharm to make my life easier but I noticed my python files (.py) changed the icon when I installed it, this is really annoying me, I like the old icon! How can I change back to the older one? | PyCharm changed python files icon | 1.2 | 0 | 0 | 405 |
44,457,336 | 2017-06-09T12:02:00.000 | 1 | 0 | 0 | 1 | python,git | 44,457,468 | 1 | true | 0 | 0 | If I remove --name-status and add --raw, I see a format where each individual blob has a before... after... hash. | 1 | 0 | 0 | Having the command
git --no-pager log -m --first-parent --no-renames --reverse --name-status --pretty=oneline --full-index
is there any way to also get the blob hash for each file at that particular commit, next to the "name status"?
The command is used in a deployment pipeline for some huge repositories, so whatever the solution, I aim at keeping it fast, meaning: not spawning new processes.
If not possible, an acceptable approach would be to use a python library / binding. If you think that's the best approach, then please point to some key API calls which I'd need. | git log show file blob id | 1.2 | 0 | 0 | 426 |
44,457,928 | 2017-06-09T12:30:00.000 | 4 | 0 | 1 | 0 | python,32bit-64bit | 44,458,002 | 2 | true | 0 | 0 | Python is an interpreted language, not a compiled one. That basically means that if you're referring to pure Python code, that is, code that does not rely on any native compile libraries, the answer is yes.
If not, then I guess it depends on a bunch of things. | 2 | 0 | 0 | (And vice versa)
Or are there incompatibilities ?
PS: I will be using python 3.5.3 on 2 machines, but one on 64 bit and one on 32 bit. I will change writing/running scripts between the 2 machines often. | Can Python scripts created with 64bit Python be run on same version but 32 bit Python? | 1.2 | 0 | 0 | 652 |
44,457,928 | 2017-06-09T12:30:00.000 | 2 | 0 | 1 | 0 | python,32bit-64bit | 45,111,929 | 2 | false | 0 | 0 | Anything really created with 64bit Python will probably not run under a 32bit version (for example, .pyc byte code files). Plain text scripts (.py files) created with any text editor is however compatible with both 32 and 64 bit Python interpreters. | 2 | 0 | 0 | (And vice versa)
Or are there incompatibilities ?
PS: I will be using python 3.5.3 on 2 machines, but one on 64 bit and one on 32 bit. I will change writing/running scripts between the 2 machines often. | Can Python scripts created with 64bit Python be run on same version but 32 bit Python? | 0.197375 | 0 | 0 | 652 |
44,459,737 | 2017-06-09T13:58:00.000 | 1 | 0 | 0 | 1 | python,linux,scripting | 44,460,033 | 2 | false | 0 | 0 | In general it is not a bad thing to create another process from your own process.
People do this constantly on the bash.
However, one always should ask oneself what is the best environment to do the task you need to do.
For instance I could easily call a python script to cut (the linux tool) a column from a file. However, the overhead to first open the python interpreter, then save the output from cut, and then save that again is possibly higher than checking how to use the bash-tool with man.
However, collecting output from another "serious" program to do further calculations on that output, yes, you can do that nicely with subprocesses (though I would opt for storing that output in a file and then just read in the file if I need to rerun my script).
And this is where launching a subprocess may get tricky: depending on how you open a new subprocess, you can not rely anymore on environment variables.
Especially when dealing with large input data, the output from the subprocess does not get piped further and is therefore collected in memory until the program finishes, which might lead to a memory problem.
To put it short: if using python solves your problem faster than combining bash-only tools, sure, do it. If that involves launching serious subprocesses, ok. However, if you want to replace bash with python, do not do that. | 1 | 6 | 0 | I was researching on whether or not Python can replace Bash for shell scripting purposes. I have seen that Python can execute Linux commands using subprocess.call() or os.system(). But I've read somewhere (forgot the link of the article) that using these is a bad thing. Is this really true?
If yes, then why is it a bad thing?
If not, then is it safe to say that Python can indeed replace Bash for scripting since I could just execute Linux commands using either of the 2 function calls?
Note: If I'm not mistaken, os.system() is deprecated and subprocess.call() should be used instead but that is not the main point of the question. | Is it bad to use subprocess.call() or os.system() when writing Python shell scripts? | 0.099668 | 0 | 0 | 3,744 |
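For completeness, a small example of the usual modern replacement for both os.system and subprocess.call; the command itself is just an illustration.

```python
import subprocess

# subprocess.run (Python 3.5+) with check=True raises CalledProcessError on failure
result = subprocess.run(["ls", "-l", "/tmp"], stdout=subprocess.PIPE, check=True)
print(result.stdout.decode())
```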
44,460,242 | 2017-06-09T14:22:00.000 | 0 | 0 | 1 | 0 | python,data-structures,linked-list | 44,460,291 | 2 | false | 0 | 0 | Yes.
Python has a garbage collector so objects that cannot be reached in any way are automatically destroyed and their memory will be reused for other objects created in the future. | 1 | 0 | 0 | While deleting the end node from the linked list we just set the link of the node pointing to the end node to "None".Does that mean that the end node is destroyed and memory occupied by it has been released? | Memory deallocation in Linked list Python | 0 | 0 | 0 | 260 |
44,461,622 | 2017-06-09T15:30:00.000 | 0 | 0 | 0 | 1 | python,jupyter | 53,287,898 | 1 | false | 0 | 0 | Try this command
%cat array.txt | 1 | 0 | 0 | I wrote !cat array.txt , and the result was:
ERROR: 'cat' is not recognized as an internal or external command,
operable program or batch file. | Jupyter notebook is not able to execute unix command (i.e. ls,pwd etc)? | 0 | 0 | 0 | 464 |
44,464,315 | 2017-06-09T18:12:00.000 | 1 | 0 | 0 | 0 | python,file,io,bit-manipulation | 44,464,380 | 1 | false | 0 | 0 | Endiannes is a problem of binary files. CSV file is a text file. The numbers are not binary numbers but ASCII characters. There is no endiannes in it. | 1 | 0 | 1 | Say I want to process a CSV file. I know in Python I can call the read() function to open the file and read it in a byte at a time, from the first field in the file (i.e. the field in the top left of the file) to the last field (the field in the bottom right).
My question is how I can determine the orientation of a file in memory. That is, if I view the contents of the file as a single binary number and process it as bit stream, how can I know if the first field (the field the read() first returns to us) is stored in the least significant positions of the binary number or the most significant positions? Would that be determined by the endianness of the machine my program is running on?
Here's one (contrived) instance where this distinction would matter. Say I first scanned the binary representation of the file from least significant position to most significant position to determine the widths of each of the CSV values. If I were to then call read(), the first field width I calculated would correspond to the first field read() returns if and only if the first field of the CSV file is stored at the least significant bit positions when we view the file as a single binary number. If the first field was instead stored at the most significant positions, I'd first have to reverse my list of calculated field widths before I could use it.
Here's a more concrete example:
CSV file: abc,12345
Scanned field widths: either [3, 5] or [5, 3] depending on how the CSV file is laid out in memory.
Now, if I call read(), the first field I'll process is abc. If abc happened to be the first field I scanned through when calculating the field widths, I'm good. I'll know that I've scanned the entire first field after reading 3 characters. However, if I first scanned 12345 when calculating the field widths, I've got a problem.
How can I determine how a file is laid out in memory? Is the first field of a file stored in the least significant bit positions, or the most significant bit positions? | Determining the orientation of a file in memory | 0.197375 | 0 | 0 | 29 |