Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,524,577 | 2016-09-16T05:55:00.000 | 1 | 0 | 0 | 0 | python,tactic | 39,710,247 | 1 | true | 1 | 0 | Found it, the API itself provides a method of inserting a sObject to any sType in the system. It was by using server.insert( sType, data = {}) where data is a dictionary of key value pairs. | 1 | 0 | 0 | I wish to create a new sobject for a specific stype.
Currently I am using server.get_unique_sobject( stype, data), but it assumes an sobject may already be present, i.e. it creates a new sobject only if no sobject with the same data already exists in the DB.
I want to create a new sobject every single time, even if an sobject with the same name and data is already present. | How to create an sObject for a sType without using get_unique_sobject method? | 1.2 | 0 | 0 | 21 |
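For illustration, a hedged sketch of that call from the TACTIC Python client; the import, the connection setup and the "vfx/shot" sType are assumptions, only server.insert(sType, data) itself comes from the answer above.

```python
# Sketch only: connection details and the sType are placeholders for your deployment.
from tactic_client_lib import TacticServerStub  # assumed client library

server = TacticServerStub(server="tactic-host", project="my_project", ticket="...")

# insert() creates a brand-new sobject on every call, unlike get_unique_sobject()
data = {"name": "shot_010", "description": "created via the API"}
new_sobject = server.insert("vfx/shot", data)  # "vfx/shot" is a hypothetical sType
print(new_sobject)
```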
39,524,917 | 2016-09-16T06:20:00.000 | -1 | 0 | 1 | 0 | python-3.x,pyinstaller | 39,525,032 | 4 | false | 0 | 0 | You seem to be doing everything from C: - which is a protected area on any up to date version of windows.
I would suggest moving it somewhere more accessible, such as your documents folder or even your desktop. It could just be a permissions issue.
Alternatively, you could try running pyinstaller as an admin. Usually this can be done by right clicking the executable and selecting Run as administrator. | 1 | 1 | 0 | I have pyinstaller in my basic c:\ folder, myfile.py in my c: Pyinstaller folder and both pyinstaller myfile,py and pyinstaller pyinstaller\myfile.py give "failed to create process" What am I doing wrong? | python 3 pyinstaller consistently gives "failed to create process" | -0.049958 | 0 | 0 | 3,220 |
39,525,214 | 2016-09-16T06:41:00.000 | 0 | 0 | 0 | 1 | java,python,scala,apache-spark,pyspark | 39,530,209 | 2 | false | 0 | 0 | your question is unclear. If the data are on your local machine, you should first copy your data to the cluster on HDFS filesystem. Spark can works in 3 modes with YARN (are u using YARN or MESOS ?): cluster, client and standalone. What you are looking for is client-mode or cluster mode. But if you want to start the application from your local machine, use client-mode. If you have an SSH access, you are free to use both.
The simplest way is to copy your code directly on the cluster if it is properly configured then start the application with the ./spark-submit script, providing the class to use as an argument. It works with python script and java/scala classes (I only use python so I don't really know) | 1 | 0 | 1 | I have to send some applications in python to a Apache Spark cluster. There is given a Clustermanager and some worker nodes with the addresses to send the Application to.
My question is: how do I set up and configure Spark on my local computer to send those requests, together with the data to be processed, to the cluster?
I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I find is how to build the cluster, or old advice on how to do it that is out of date. | Configurate Spark by given Cluster | 0 | 0 | 0 | 35
39,530,054 | 2016-09-16T11:10:00.000 | 0 | 0 | 0 | 0 | python,c++,numpy,swig | 39,576,294 | 1 | true | 0 | 0 | Numpy's polynomial package is largely a collection of functions that can accept array-like objects as the polynomial. Therefore, it is sufficient to convert to a normal ndarray, where the value at index n is the coefficient for the term with exponent n. | 1 | 0 | 1 | I'm using SWIG to wrap a C++ library with its own polynomial type. I'd like to create a typemap to automatically convert that to a numpy polynomial. However, browsing the docs for the numpy C API, I'm not seeing anything that would allow me to do this, only numpy arrays. Is it possible to typemap to a polynomial? | Is it possible to create a polynomial through Numpy's C API? | 1.2 | 0 | 0 | 42 |
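For illustration, a small sketch of what that means in practice on the Python side: a plain coefficient ndarray, where index n holds the coefficient of x**n, is all that numpy's polynomial routines need.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Coefficient array: index n holds the coefficient of x**n, e.g. 2 + 0*x + 3*x**2
coefs = np.array([2.0, 0.0, 3.0])

p = Polynomial(coefs)                                 # wrap the plain ndarray
print(p(1.5))                                         # evaluate at x = 1.5
print(np.polynomial.polynomial.polyval(1.5, coefs))   # same result, function-style
```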
39,530,825 | 2016-09-16T11:52:00.000 | 0 | 0 | 0 | 0 | python,sublimetext3,sublime-text-plugin | 39,626,287 | 3 | false | 0 | 0 | It's possible in CudaText app, with Python API. API has on_change_slow event to run your code. And your code must find text x/y position, then call ed.attr() to highlight substring at x/y with any color. It's simple. | 1 | 0 | 0 | I am currently trying to extract information (manually) from a text file. The text file has a decent format (parsable), but it contains something like 'random chars'. These random chars are not completely random, by running an algorithm on them I am able to collect information. I am giving each char a positive integer.
The question is whether or not I can write a sublime text 3 plugin that will help me see those chars.
I would like to change the colour of those chars.
Note: the same char can appear in the same string with 2 colours. The colour depends on the position.
Can such a plugin be written for Sublime Text 3? If not, what can I use instead? The algorithm that gives each char a score is written in Python. | Sublime text change text color | 0 | 0 | 0 | 1,058
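Since the question asks specifically about Sublime Text 3, here is a hedged sketch of the usual plugin approach there (not taken from the answer above): an event listener re-runs the scoring algorithm on change and colours regions with view.add_regions(). Note that add_regions() colours by scope name from the active colour scheme, so the scope names below are only examples, and score_char is a placeholder for the asker's own algorithm.

```python
import sublime
import sublime_plugin

def score_char(ch, pos):
    """Placeholder for the asker's own per-character scoring algorithm."""
    return (ord(ch) + pos) % 3

class CharScoreHighlighter(sublime_plugin.EventListener):
    def on_modified_async(self, view):
        buckets = {0: [], 1: [], 2: []}
        text = view.substr(sublime.Region(0, view.size()))
        for pos, ch in enumerate(text):
            buckets[score_char(ch, pos)].append(sublime.Region(pos, pos + 1))
        # Each score bucket is drawn with a different scope -> a different colour
        scopes = {0: "region.greenish", 1: "region.yellowish", 2: "region.redish"}
        for score, regions in buckets.items():
            view.add_regions("char_score_%d" % score, regions, scopes[score])
```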
39,532,034 | 2016-09-16T12:53:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,kivy,komodo,kivy-language | 39,537,143 | 2 | false | 0 | 1 | This only works for Komodo code intelligence. Run time is still limited to PYTHONPATH. To run a script that's using Kivy, even in your command line, the kivy source has to be on your PYTHONPATH.
You can add items to your PYTHONPATH in Komodo using Edit > Preferences > Environment, then create a new environment variable to append the Kivy installation location to your $PYTHONPATH, i.e. $PYTHONPATH:install/location/kivy.
If you don't mind having it in your system though, I'd just do what @tuan333 suggest above, install it using pip, then make sure you're using THAT Python interpreter in Komodo. | 1 | 1 | 0 | I'm trying to run a Kivy python file on Komodo IDE (for Mac) but its giving me this error
import kivy
ImportError: No module named kivy, although if I drag and drop the file onto the Kivy app it runs normally.
Any ideas? Thanks. | Can't run kivy file on Komodo | 0 | 0 | 0 | 141
39,536,762 | 2016-09-16T17:06:00.000 | 0 | 0 | 0 | 0 | python,proxy,web-crawler,google-trends | 41,170,050 | 3 | false | 0 | 0 | No one is going to write code for you.
But I can leave some comments because I have been using Crawlera proxies for the past few months.
With Crawlera you can scrape Google Trends with a new IP each time, or you can use the same IP each time (it's called session management in Crawlera).
You can send a header 'X-Crawlera-Session':'create' along with your request, and Crawlera on their end will create a session; in response, they will return 'X-Crawlera-Session': ['123123123']. Then, if you think that you are not blocked by Google,
You can send 'X-Crawlera-Session': '123123123' with each of your request so Crawlera will use same IP each time. | 1 | 1 | 0 | Since google trends require you to login, can I still use an IP rotator such as crawlera to download the csv files? If so, is there any example code with python (i.e python + crawlera to download files on google).
Thanks in advance. | Is it possible to use a proxy rotator such as crawlera with google trends? | 0 | 0 | 1 | 1,606 |
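To make the session-management idea concrete, a hedged sketch with requests; the proxy endpoint, the credential format and the verify=False handling are assumptions about a typical Crawlera setup, not taken from the answer.

```python
import requests

# Placeholders: substitute your own Crawlera API key and account endpoint.
proxies = {"http": "http://<API_KEY>:@proxy.crawlera.com:8010",
           "https": "http://<API_KEY>:@proxy.crawlera.com:8010"}

# First request: ask Crawlera to create a sticky session (same outgoing IP)
r = requests.get("https://www.google.com/trends/", proxies=proxies,
                 headers={"X-Crawlera-Session": "create"},
                 verify=False)  # Crawlera HTTPS usually needs their CA cert or verify=False
session_id = r.headers.get("X-Crawlera-Session")

# Subsequent requests reuse the session id so the same IP is used each time
r2 = requests.get("https://www.google.com/trends/explore", proxies=proxies,
                  headers={"X-Crawlera-Session": session_id}, verify=False)
```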
39,537,700 | 2016-09-16T18:10:00.000 | -1 | 0 | 1 | 0 | python,pycharm,xml-rpc | 44,059,972 | 1 | false | 0 | 0 | Hi I had the same problem as you. I solved the problem by making the line 127.0.0.1 localhost as the first line in /etc/hosts. The reason python console does not run is that python console tries to connect to localhost:pycharm-port, but localhost was resolved to the IPv6 addess of ::1, and the connection is refused. | 1 | 4 | 0 | Exception in XML-RPC listener loop (java.net.SocketException: Socket closed).
When I run PyCharm from bash, I get this error. As a result, I can't use the Python console in PyCharm. Does anybody know how to fix it?
OS: ubuntu 16.04 | How to fix python console error in pycharm? | -0.197375 | 0 | 1 | 602 |
39,538,259 | 2016-09-16T18:45:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,debugging,pygame,livewires | 39,582,747 | 1 | true | 0 | 1 | Thanks to Jack Zhai's suggestion, I figured it out.
When the breakpoint is hit, unpause the debugger (shortcut: F5)
Play to the point in the game you want to debug.
Repause the debugger using Break All (shortcut: Ctrl-Alt-Break)
Press F10 a few times; this makes it go a few more steps in the main livewires loop.
In the autos window, there is an entry that contains the a list of the game's objects. Look through it, until you find the one(s) you're looking for. In my case, the list of the objects was self._objects. The object I was looking for was the second, which in other words, was self._objects[1].
The dropdown arrows of the object(s) you're looking for show the object's members. If you want to look at the object's properties in a less cumbersome way, use the Interactive Debugger to assign that object to a variable. That way, you can look at its values through typing objectNameHere.objectValueHere into the interactive debug console.
In my case, I had to type in player = self._objects[1] to get it assigned, then could see the player's x position by then entering player.x into the debug console.
--
I know this answer might only work for my specific problem, so if anyone else has a better one, please post it for others' sake. | 1 | 1 | 0 | I've been making a small game for a homework assignment using Pygame with Livewires. I've been trying to debug it, but to no success; I can't get a look at the variables I want to look at before the main loop is executed. Though I can press F10 to skip over the main loop, that just stops the Autos and Watches window from working; apparently they can only records vars when the game is paused by the debugger.
Is there a way for me to use the debugger to look at the vars in runtime? Because no matter what I do while the debugger is paused, I can't look at the data inside the game objects I want to take a look in | How to debug Pygame application with Visual Studio during runtime | 1.2 | 0 | 0 | 1,009 |
39,538,363 | 2016-09-16T18:52:00.000 | 0 | 0 | 0 | 0 | python,algorithm,graph,graph-theory,networkx | 39,538,477 | 3 | false | 0 | 0 | Why would bfs not solve it? A bfs algorithm is breadth traversal algorithm, i.e. it traverses the tree level wise. This also means, all nodes at same level are traversed at once, which is your desired output.
As pointed out in a comment, this will, however, assume a starting point in the graph. | 1 | 4 | 1 | Firstly, I am not sure what such an algorithm is called, which is the primary problem - so the first part of the question is: what is this algorithm called?
Basically I have a DiGraph() into which I insert the nodes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and the edges ([1,3],[2,3],[3,5],[4,5],[5,7],[6,7],[7,8],[7,9],[7,10])
From this I'm wondering if it's possible to get a collection as follows: [[1, 2, 4, 6], [3], [5], [7], [8, 9, 10]]
EDIT: Let me add some constraints if it helps.
- There are no cycles, this is guaranteed
- There is no one start point for the graph
What I'm trying to do is to collect the nodes at the same level such that their processing can be parallelized, but within the outer collection, the processing is serial.
EDIT2: So clearly I hadn't thought about this enough. The easiest way to describe "level" is in terms of deepest predecessor: all nodes that have the same depth of predecessors. So the first entry in the above list is all nodes that have 0 as the deepest predecessor depth, the second has one, the third has two and so on. Within each list, the order of the siblings is irrelevant as they will be processed in parallel. | Given an acyclic directed graph, return a collection of collections of nodes "at the same level"? | 0 | 0 | 0 | 661
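A sketch of that "deepest predecessor" grouping with networkx, using the exact graph from the question: walk the nodes in topological order, record each node's longest-path depth from any source, and group by that depth.

```python
from collections import defaultdict
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(range(1, 11))
G.add_edges_from([(1, 3), (2, 3), (3, 5), (4, 5), (5, 7),
                  (6, 7), (7, 8), (7, 9), (7, 10)])

# depth[n] = length of the longest predecessor chain ending at n
depth = {}
for node in nx.topological_sort(G):
    preds = list(G.predecessors(node))
    depth[node] = (1 + max(depth[p] for p in preds)) if preds else 0

levels = defaultdict(list)
for node, d in depth.items():
    levels[d].append(node)

print([sorted(levels[d]) for d in sorted(levels)])
# -> [[1, 2, 4, 6], [3], [5], [7], [8, 9, 10]]
```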
39,540,563 | 2016-09-16T21:49:00.000 | 0 | 0 | 0 | 0 | python,wxpython,wxnotebook | 39,541,649 | 1 | false | 0 | 1 | As you've guessed, care needs to be taken when destroying UI objects from their own event handlers. Not only is the current event handler still active, but there may be other pending events that are still in the queue, and if the target object has been destroyed when they are delivered then you can get a crash.
The best thing to do is to defer the destruction until after the current and possible pending event handlers have been completed and there is nothing waiting to be done on the UI object except for the destruction that you want to do. And the best way to do that is to use wx.CallAfter. It will call a function with parameters that you give it the next time the the event loop empties, so implicitly there is not anything else waiting to be done or sent to the UI object in question.
In your case it would be safe to do things like immediately remove the page from the notebook, and hide the page window. Then use wx.CallAfter to call some function (perhaps in the notebook class) which calls the page window's Destroy method and does any other cleanup that is necessary. The reason I suggest splitting these two sets of tasks is not because it will take a long time for the function to be called, but in some cases it may be long enough to flicker momentarily in a transitory state, so the appearance is less smooth for the users. | 1 | 0 | 0 | I want to dynamically create and delete pages in a notebook. In the main class I successfully create and add pages with a button. The pages are a separate class of course, and have a button inside.
I know I can put the button outside the notebook and delete them from the main class but I want to use page's own button to self destruct the whole page as it won't be needed any more.
Sorry I don't post any code but I'm posting from my cellphone. Also it seems to be a question not so specific to need a minimal working example. | Self delete page in wx.Notebook | 0 | 0 | 0 | 224 |
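A hedged sketch of the pattern described above (the notebook/page wiring is illustrative, not taken from the question's code): the page's own button removes the tab immediately, hides the panel, and defers the actual Destroy with wx.CallAfter.

```python
import wx

class SelfDestructPage(wx.Panel):
    def __init__(self, notebook):
        super(SelfDestructPage, self).__init__(notebook)
        self.notebook = notebook
        btn = wx.Button(self, label="Close this page")
        btn.Bind(wx.EVT_BUTTON, self.on_close)

    def on_close(self, event):
        # Find our own index, detach the tab, and hide ourselves right away...
        for i in range(self.notebook.GetPageCount()):
            if self.notebook.GetPage(i) is self:
                self.notebook.RemovePage(i)
                break
        self.Hide()
        # ...then destroy only after current/pending events have been processed.
        wx.CallAfter(self.Destroy)
```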
39,540,741 | 2016-09-16T22:06:00.000 | 10 | 0 | 1 | 0 | python,ocr,google-cloud-platform,google-cloud-vision,text-recognition | 39,809,660 | 2 | false | 0 | 0 | I am unable to tell you why this works, perhaps it has to do with how the language is read, o vs 0, l vs 1, etc. But whenever I use OCR and I am specifically looking for numbers, I have read to set the detection language to "Korean". It works exceptionally well for me and has influenced the accuracy greatly. | 1 | 17 | 1 | I've been trying to implement an OCR program with Python that reads numbers with a specific format, XXX-XXX. I used Google's Cloud Vision API Text Recognition, but the results were unreliable. Out of 30 high-contrast 1280 x 1024 bmp images, only a handful resulted in the correct output, or at least included the correct output in the results. The program tends to omit some numbers, output in non-English languages or sneak in a few special characters.
The goal is to at least output the correct numbers consecutively, doesn't matter if the results are sprinkled with other junk. Is there a way to help the program recognize numbers better, for example limit the results to a specific format, or to numbers only? | Google Cloud Vision - Numbers and Numerals OCR | 1 | 0 | 0 | 5,635 |
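For reference, a hedged sketch of how the Korean language hint is usually passed with the google-cloud-vision Python client; the exact request shape depends on the client-library version, so treat this as an outline only.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("plate.bmp", "rb") as f:
    image = vision.Image(content=f.read())   # vision.types.Image on older client versions

# Hinting Korean ("ko") is the trick described in the answer above.
response = client.text_detection(
    image=image,
    image_context={"language_hints": ["ko"]},
)
print([t.description for t in response.text_annotations])
```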
39,541,393 | 2016-09-16T23:31:00.000 | 4 | 0 | 1 | 0 | python | 39,541,428 | 3 | false | 0 | 0 | In Python lists can be composed of mixed types, there is no way to do something like setting the "type" of a list. Also, even if you could, this "type" is not enforced and could change at any time. | 1 | 2 | 0 | (Maybe b/c I'm from a C++ world)
I want to verify some python variable is
list(string) or list(dict(int, string)) or SomethingIterable(string)
Is there a simple and unified way to do it?
(Instead of writing customized code to iterate and verify each instance..)
I emphasize that I understand that in Python a list can have elements of different types, which is exactly why I ask how to verify a list that is composed of just a certain type, e.g. string. | How to do type verification in python? | 0.26052 | 0 | 0 | 214
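For illustration, one way to express such checks with isinstance and all(); the helper names are just examples (on Python 2 you may prefer basestring over str).

```python
def is_list_of_strings(obj):
    return isinstance(obj, list) and all(isinstance(x, str) for x in obj)

def is_list_of_int_to_str_dicts(obj):
    return isinstance(obj, list) and all(
        isinstance(d, dict) and all(
            isinstance(k, int) and isinstance(v, str) for k, v in d.items()
        )
        for d in obj
    )

print(is_list_of_strings(["a", "b"]))                   # True
print(is_list_of_strings(["a", 3]))                     # False
print(is_list_of_int_to_str_dicts([{1: "x", 2: "y"}]))  # True
```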
39,541,655 | 2016-09-17T00:17:00.000 | 1 | 0 | 1 | 0 | python,matplotlib,data-analysis | 39,543,370 | 1 | true | 0 | 0 | Thank you for prompting me to look at this, as I much prefer 'step' style histograms too! I solved this problem by going into the matplotlib source code. I use anaconda, so it was located in anaconda/lib/site-packages/python2.7/matplotlib.
To change the histogram style I edited two of the files. Assuming that the current directory is matplotlib/, then open up axes/_axes.py and locate the hist() function there (it's on line 5690 on my machine, matplotlib version 1.5.1). You should see the histtype argument there. Change this to 'step'.
Now open up pyplot.py and again locate the hist() function and make the same change to the histtype argument (line 2943 in version 1.5.1 and on my machine). There is a comment about not editing this function, but I only found this to be an issue when I didn't also edit axes/_axes.py as well.
This worked for me! Another alternative would be just to write a wrapper around hist() yourself that changes the default argument. | 1 | 3 | 1 | Is there a way to configure the default argument for histtype of matplotlib's hist() function? The default behavior is to make bar-chart type histograms, which I basically never want to look at, since it is horrible for comparing multiple distributions that have significant overlap.
In case it's somehow relevant, the default behavior I would like to attain is to have histtype='step'. | Setting default histtype in matplotlib? | 1.2 | 0 | 0 | 602 |
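A safer alternative to editing the library source is a thin wrapper (or functools.partial) that makes 'step' your personal default; a minimal sketch:

```python
import functools
import matplotlib.pyplot as plt
import numpy as np

# Option 1: a wrapper with the preferred default, still overridable per call
def step_hist(data, **kwargs):
    kwargs.setdefault("histtype", "step")
    return plt.hist(data, **kwargs)

# Option 2: the same idea with functools.partial
step_hist2 = functools.partial(plt.hist, histtype="step")

data = np.random.randn(1000)
step_hist(data, bins=50)
step_hist2(data, bins=50)
plt.show()
```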
39,542,539 | 2016-09-17T03:22:00.000 | 4 | 1 | 0 | 0 | python,raspberry-pi,kivy,led | 39,542,558 | 1 | false | 0 | 1 | Well apparently I didn't look hard enough. The solution is to copy "~/.kivy/config.ini" to "/root/.kivy/config.ini"
So the commands are
"sudo cp ~/.kivy/config.ini /root/.kivy/config.ini"
And then everything works happily together! | 1 | 1 | 0 | I want to make a kivy app for the raspberry pi that can use a touch screen. I was able to get the demos to work with the touchscreen with just "python ~/kivy/examples/demo/showcase/main.py". The issue comes when I need to start the app with "sudo python main.py", the touchscreen then ceases to work.
The app I am trying to write uses the rpi_ws281x library for controlling addressable leds which HAS to be run as root. Is there a way to run the kivy app as root while still enabling the touchscreen functionality?
If there isn't, is there a way to send data from the kivy app to say a script which is running sudo that controls the leds?
I've looked a lot of places but no one seems to have had this problem before (or they could work around it by changing the privileges of other directories where they were accessing the sudo protected content). Any help is greatly appreciated! | Run Kivy app as root user with touch screen on raspberry pi | 0.664037 | 0 | 0 | 989 |
39,543,888 | 2016-09-17T07:06:00.000 | 0 | 0 | 1 | 0 | python,pygame | 39,551,714 | 4 | false | 0 | 1 | The only way to have the text entries separate to the pygame window is to use someVar = input("A string") so the text input is in the python shell or the command window/Linux terminal and then have pygame reference that var. | 1 | 0 | 0 | this is my first question. What I would like to achieve is in a normal window for the text-based game to be running, however I would also like to have a pygame window running as well that shows a map that updates. Thank you in advance. | Python/Pygame: Can you run a program whilst having a Pygame window that can still update? | 0 | 0 | 0 | 400 |
39,544,944 | 2016-09-17T09:03:00.000 | 2 | 0 | 1 | 0 | python,pip,pyautogui | 39,563,987 | 6 | false | 0 | 1 | I encountered the same error message as you did. This workaround worked for me. Try these steps in order...
Install PyScreeze 0.1.7.
Update PyScreeze to 0.1.8.
Install PyAutoGui.
I hope this helps. | 3 | 3 | 0 | I have been installing PyautoGui on my WIN10 PC. But I am getting the following error, i have been getting a lot of errors jut to get this far.
I have reinstalled Python so its destination folder is C:\Python instead of C:\Users\Home\AppData\Local\Programs\Python\Python35-32; maybe that's why? How do I fix this?
C:\Python\Scripts>pip.exe install pyautogui Collecting pyautogui
Using cached PyAutoGUI-0.9.33.zip Collecting pymsgbox (from pyautogui)
Using cached PyMsgBox-1.0.3.zip Collecting PyTweening>=1.0.1 (from
pyautogui) Using cached PyTweening-1.0.3.zip Collecting Pillow (from
pyautogui) Using cached Pillow-3.3.1-cp35-cp35m-win32.whl Collecting
pyscreeze (from pyautogui) Using cached PyScreeze-0.1.8.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\setup.py",
line 6, in
version=import('pyscreeze').version,
File "c:\users\home\appdata\local\temp\pip-build-kxm3249e\pyscreeze\pyscreeze__init__.py",
line 21, in
from PIL import Image
ImportError: No module named 'PIL'
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\ | Installing PyAutoGUI Error in pip.exe install pyautogui | 0.066568 | 0 | 0 | 10,149 |
39,544,944 | 2016-09-17T09:03:00.000 | 1 | 0 | 1 | 0 | python,pip,pyautogui | 42,882,370 | 6 | false | 0 | 1 | I'm happy to report that this installation error has been fixed as of version 0.9.34. All you have to do is install or update PyAutoGUI from PyPI. | 3 | 3 | 0 | I have been installing PyautoGui on my WIN10 PC. But I am getting the following error, i have been getting a lot of errors jut to get this far.
I have reinstalled Python so its destination folder is C:\Python instead of C:\Users\Home\AppData\Local\Programs\Python\Python35-32; maybe that's why? How do I fix this?
C:\Python\Scripts>pip.exe install pyautogui Collecting pyautogui
Using cached PyAutoGUI-0.9.33.zip Collecting pymsgbox (from pyautogui)
Using cached PyMsgBox-1.0.3.zip Collecting PyTweening>=1.0.1 (from
pyautogui) Using cached PyTweening-1.0.3.zip Collecting Pillow (from
pyautogui) Using cached Pillow-3.3.1-cp35-cp35m-win32.whl Collecting
pyscreeze (from pyautogui) Using cached PyScreeze-0.1.8.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\setup.py",
line 6, in
version=import('pyscreeze').version,
File "c:\users\home\appdata\local\temp\pip-build-kxm3249e\pyscreeze\pyscreeze__init__.py",
line 21, in
from PIL import Image
ImportError: No module named 'PIL'
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\ | Installing PyAutoGUI Error in pip.exe install pyautogui | 0.033321 | 0 | 0 | 10,149 |
39,544,944 | 2016-09-17T09:03:00.000 | 2 | 0 | 1 | 0 | python,pip,pyautogui | 40,112,934 | 6 | false | 0 | 1 | Instead of letting PyautoGUI get all the packages for you.
Install all of them individually. Then, run the pip install --upgrade _packageName_
Then run pip install pyautogui.
Hope this helps. | 3 | 3 | 0 | I have been installing PyautoGui on my WIN10 PC. But I am getting the following error, i have been getting a lot of errors jut to get this far.
I have reinstalled Python so its destination folder is C:\Python instead of C:\Users\Home\AppData\Local\Programs\Python\Python35-32; maybe that's why? How do I fix this?
C:\Python\Scripts>pip.exe install pyautogui Collecting pyautogui
Using cached PyAutoGUI-0.9.33.zip Collecting pymsgbox (from pyautogui)
Using cached PyMsgBox-1.0.3.zip Collecting PyTweening>=1.0.1 (from
pyautogui) Using cached PyTweening-1.0.3.zip Collecting Pillow (from
pyautogui) Using cached Pillow-3.3.1-cp35-cp35m-win32.whl Collecting
pyscreeze (from pyautogui) Using cached PyScreeze-0.1.8.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\setup.py",
line 6, in
version=import('pyscreeze').version,
File "c:\users\home\appdata\local\temp\pip-build-kxm3249e\pyscreeze\pyscreeze__init__.py",
line 21, in
from PIL import Image
ImportError: No module named 'PIL'
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\Home\AppData\Local\Temp\pip-build-kxm3249e\pyscreeze\ | Installing PyAutoGUI Error in pip.exe install pyautogui | 0.066568 | 0 | 0 | 10,149 |
39,545,314 | 2016-09-17T09:44:00.000 | 0 | 0 | 1 | 0 | python,installation | 39,545,380 | 1 | false | 0 | 0 | If you installed IDLE correctly, it will appear in your menu with a folder for IDLE. Look there. If not, attempt to reinstall IDLE. It will not affect anything. | 1 | 0 | 0 | I am a complete beginner to programming. I downloaded Python 3.4.3. 64 bit for my Windows 10 OS. The download specified that it included IDLE. I opened Python with no issue, but IDLE was not in that folder. I searched for it on my C drive and it didn't match any file names.
Did it save somewhere else? | Why isn't IDLE in the folder with the Python program? | 0 | 0 | 0 | 103 |
39,546,228 | 2016-09-17T11:20:00.000 | 3 | 0 | 0 | 0 | python,django,heroku,amazon-s3,large-files | 45,600,079 | 2 | false | 1 | 0 | The points in the other answer are valid. The short answer to the question of "Is there anyway that i can possibly upload large files through Django backend without using JavaScript" is "not without switching away from Heroku".
Keep in mind that any data transmitted to your dynos goes through Heroku's routing mesh, which is what enforces the 30 second request limit to conserve its own finite resources. Long-running transactions of any kind use up bandwidth/compute/etc that could be used to serve other requests, so Heroku applies the limit to help keep things moving across the thousands of dynos. When uploading a file, you will first be constrained by client bandwidth to your server. Then, you will be constrained by the bandwidth between your dynos and S3, on top of any processing your dyno actually does.
The larger the file, the more likely it will be that transmitting the data will exceed the 30 second timeout, particularly in step 1 for clients on unreliable networks. Creating a direct path from client to S3 is a reasonable compromise. | 1 | 15 | 0 | I have a django app that allows users to upload videos. Its hosted on Heroku and the uploaded files stored on an S3 Bucket.
I am using JavaScript to upload the files directly to S3 after obtaining a presigned request from the Django app. This is due to Heroku's 30s request timeout.
Is there any way that I can upload large files through the Django backend without using JavaScript and without compromising the user experience? | Uploading Large files to AWS S3 Bucket with Django on Heroku without 30s request timeout | 0.291313 | 0 | 0 | 1,926
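For completeness, a hedged sketch of the server side of that direct-to-S3 flow: the Django view only hands out a presigned POST (generated with boto3 here), so the large file itself never passes through a dyno. The bucket name, key scheme and view wiring are placeholders.

```python
import boto3
from django.http import JsonResponse

s3 = boto3.client("s3")

def presign_upload(request):
    key = "uploads/%s" % request.GET["filename"]   # placeholder key scheme
    post = s3.generate_presigned_post(
        Bucket="my-video-bucket",                  # placeholder bucket name
        Key=key,
        ExpiresIn=3600,
    )
    # The browser then POSTs the file to post["url"] with post["fields"] as form data,
    # so the upload goes straight to S3 instead of through the Heroku dyno.
    return JsonResponse(post)
```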
39,546,734 | 2016-09-17T12:18:00.000 | 1 | 0 | 0 | 0 | python,django,django-models,django-migrations | 39,547,167 | 1 | true | 1 | 0 | As long as your migration isn't applied to the database you can manually update your migration file located in myapp/migrations/*.py. Find the string '10.07.2016' and update it to a supported format.
A less attractive solution would be to delete the old migration file (as long as it isn't applied to the database) and create a new migration file with python manage.py makemigrations. Because you've updated the model to use a default value, it won't ask for a one-off default this time.
To check whether a migration is applied to the database run: python manage.py showmigrations. | 1 | 1 | 0 | I worked with django 1.9 and added a new field (creation_date) to myapp/models.py.
After that I run "python manage.py makemigrations". I got:
Please select a fix:
Provide a one-off default now (will be set on all existing rows)
Quit, and let me add a default in models.py."
I choose 1-st option and added value in wrong format '10.07.2016'.
After this mistake I couldn't run "python manage.py migrate".
So I decided to change models.py and add a default value "datetime.now".
But after that I still have problems with "python manage.py makemigrations". I see such things like that:
django.core.exceptions.ValidationError: [u"'10.07.2016' value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format."]
How to solve this problem? | Django models, adding new value, migrations | 1.2 | 0 | 0 | 861 |
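For reference, a small sketch of the safer default the asker switched to; the model and field names are illustrative. Passing the callable timezone.now (no parentheses) avoids hard-coding an invalid date string.

```python
from django.db import models
from django.utils import timezone

class Article(models.Model):                 # illustrative model name
    title = models.CharField(max_length=200)
    # The callable is evaluated at save time, so each row gets the current timestamp
    creation_date = models.DateTimeField(default=timezone.now)
```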
39,547,386 | 2016-09-17T13:29:00.000 | 0 | 0 | 0 | 0 | python,image,sockets,stream,udp | 39,556,790 | 1 | false | 0 | 0 | My psychic powers tell me that are hitting the size limit for a UDP packet, which is just under 64KB. You will likely need to split your image bytes up into multiple packets when sending and have some logic to put them back together on the receiving end. You will likely need to roll out your own header format.
Not sure why you would need to base64 encode your image bytes; that just adds 33% of network overhead for no reason.
While UDP has less network overhead than TCP, it generally relies on you, the developer, to come up with your own mechanisms for flow control, fragmentation handling, lost packets, etc... | 1 | 0 | 0 | I am trying to make a video-streaming application, in which i'll be able to both stream my webcam and my desktop. Up until now I've done so with TCP communication in order to make sure everything works, and it does, but very slowly.
I know that usually in live streams like these you would use UDP, but I can't get it to work. I have created a basic UDP client and a server, and it works with sending shorts string, but when it comes to sending a whole image i can't find a solution to that. I have also looked it up online but found only posts about sending images through sockets in general, and they used TCP.
I'm using Python 2.7, pygame to show the images, PIL + VideoCapture to save them, and StringIO + base64 in order to send them as string. | Sending an image through UDP communication | 0 | 0 | 1 | 992 |
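A minimal sketch of the chunking idea on the sending side; the header layout and chunk size are arbitrary choices, not a standard, and the receiver still needs matching reassembly logic.

```python
import socket
import struct

CHUNK = 60000  # keep each datagram safely under the ~64 KB UDP limit

def send_frame(sock, addr, frame_bytes, frame_id):
    chunks = [frame_bytes[i:i + CHUNK] for i in range(0, len(frame_bytes), CHUNK)]
    for idx, chunk in enumerate(chunks):
        # 12-byte ad hoc header: frame id, chunk index, total chunk count
        header = struct.pack("!III", frame_id, idx, len(chunks))
        sock.sendto(header + chunk, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# frame_bytes would be the raw (not base64) JPEG/PNG bytes of one captured frame:
# send_frame(sock, ("127.0.0.1", 9999), frame_bytes, frame_id=0)
```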
39,555,060 | 2016-09-18T07:09:00.000 | 0 | 0 | 0 | 0 | python,multithreading,thread-safety,locking,tensorflow | 39,577,466 | 1 | false | 0 | 0 | Do your inference calls need to be on an up-to-date version of the graph? If you don't mind some delay, you could make a copy of the graph by calling sess.graph.as_graph_def on the training thread, and then create a new session on the inference thread using that graph_def periodically. | 1 | 0 | 1 | I have several threads that either update the weights of my network or run inference on it. I use the use_locking parameter for the optimizer to prevent concurrent updates of the weights.
Inference should always use a recent, and importantly, consistent, version of the weights. In other words, I want to prevent using a weight matrix for inference for which some of the elements are already updated but others are not.
Is this guaranteed? If not, how can I ensure this? There doesn't seem to be a tf.Lock or similar. | Can I safely do inference from another thread while training the network? | 0 | 0 | 0 | 67 |
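A hedged sketch of that periodic-snapshot idea in TensorFlow 1.x style, matching the session-based wording above; freezing the variables into constants is one way to capture a consistent set of weights, and the output node name is a placeholder.

```python
import tensorflow as tf  # TensorFlow 1.x style API assumed

def snapshot_for_inference(train_sess, output_node_names):
    # Freeze the current variable values into constants so the copy is consistent
    frozen = tf.graph_util.convert_variables_to_constants(
        train_sess, train_sess.graph.as_graph_def(), output_node_names)
    g = tf.Graph()
    with g.as_default():
        tf.import_graph_def(frozen, name="")
    return tf.Session(graph=g)

# In the inference thread, periodically refresh the snapshot:
# infer_sess = snapshot_for_inference(train_sess, ["logits"])  # node name is a placeholder
```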
39,555,235 | 2016-09-18T07:38:00.000 | 2 | 0 | 0 | 0 | python,cuda,cython,numba,pycuda | 39,557,406 | 1 | true | 0 | 1 | As far as I am aware, this isn't possible in either language. Neither exposes the necessary toolchain controls for separate compilation or APIs to do runtime linking of device code. | 1 | 2 | 1 | I'm working on a project that involves creating CUDA kernels in Python. Numba works quite well (what these guys have accomplished is quite incredible), and so does PyCUDA.
My problem is that I want to call a C device function from my Python generated kernel. I couldn't find a way to accomplish this. Numba can call CFFI modules but only in CPU code. In PyCUDA I can add my C device functions to the SourceModule, but I couldn't figure out how to include functions that already exist in another library.
Is there a way to accomplish this? | Calling a C++ CUDA device function from a Python kernel | 1.2 | 0 | 0 | 604 |
39,558,984 | 2016-09-18T14:52:00.000 | 5 | 0 | 0 | 0 | python,flask,wtforms,flask-wtforms | 39,559,508 | 1 | false | 1 | 0 | For the whole form, form.errors contains a map of fields to lists of errors. If it is not empty, then the form did not validate. For an individual field, field.errors contains a list of errors for that field. The list is the same as the one in form.errors.
form.validate() performs validation and populates errors. When using Flask-WTF, form.validate_on_submit() performs an additional check that request.method is a "submit" method, which mostly means it is not a GET request. | 1 | 2 | 0 | I called form.validate_on_submit(), but it returned False. How can I find out why the form didn't validate? | Determine why WTForms form didn't validate | 0.761594 | 0 | 0 | 1,729 |
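For illustration, a small Flask view sketch showing where form.errors fits; the form class and field are placeholders.

```python
from flask import Flask, flash, render_template
from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import DataRequired

app = Flask(__name__)
app.secret_key = "dev"  # placeholder

class LoginForm(FlaskForm):
    name = StringField("name", validators=[DataRequired()])

@app.route("/login", methods=["GET", "POST"])
def login():
    form = LoginForm()
    if form.validate_on_submit():
        return "ok"
    # Invalid submit (or a plain GET): form.errors maps field names to lists of messages
    for field, messages in form.errors.items():
        flash("{}: {}".format(field, "; ".join(messages)))
    return render_template("login.html", form=form)
```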
39,559,677 | 2016-09-18T16:00:00.000 | 27 | 0 | 1 | 0 | python,pyinstaller | 39,559,759 | 2 | true | 0 | 0 | Pyinstaller optionally encrypts the python sources with a very strong method.
Of course without the key it is nearly impossible to extract the files.
BUT the sources still need to be accessed at run time or the program couldn't work (or someone would have to provide the password each time, like protected excel files for instance).
It means that the key lies somewhere embedded in the installed software. And since all this stuff is open source, looking at the source code tells you where PyInstaller embeds the key. Of course, it's not trivial, but not an encryption-breaking problem, just reverse engineering with - added - the source available. | 1 | 14 | 0 | I'm trying to understand why PyInstaller documentation states that the --key argument to encrypt Python source code can be easily extracted:
Additionally, Python bytecode can be obfuscated with AES256 by specifying an encryption key on PyInstaller’s command line. Please note that it is still very easy to extract the key and get back the original byte code, but it should prevent most forms of “casual” tampering.
My basic understanding of AES-256 is that if no one has the encryption key you specify, they can't extract it "easily"
Does anyone have better understanding ? | PyInstaller Encryption --key | 1.2 | 0 | 0 | 15,700 |
39,562,939 | 2016-09-18T21:40:00.000 | 0 | 0 | 0 | 0 | python | 39,563,394 | 1 | false | 0 | 0 | First of all, you should not save the result of cross validation. Cross validation is not a training method, it is an evaluation scheme. You should build a single model on your whole dataset and use it to predict.
If, for some reason, you can no longer train your model, you can still use this 5 predictions by averaging them (as random forest itself is a simple averagin ensemble of trees), however going back and retraining should give you bettter results. | 1 | 0 | 1 | I used joblib.dump in python to save models from 5 fold cross validation modelling using random forest. As a result I have 5 models for each dataset saved as: MDL_1.pkl, MDL_2.pkl, MDL_3.pkl, MDL_4.pkl, MDL_5.pkl. Now I want to use these models for prediction of external dataset using predict_proba when the final prediction for each line in my external dataset is an average of 5 models. What is the best way to proceed?
Thank you for your help | predict external dataset with models from random forest | 0 | 0 | 0 | 43 |
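If the five fold-models are what you have to work with, a sketch of averaging their predicted probabilities; the file names follow the question, and X_external stands for the external feature matrix with the same columns as the training data.

```python
import numpy as np
from sklearn.externals import joblib  # plain `import joblib` on newer versions

model_files = ["MDL_1.pkl", "MDL_2.pkl", "MDL_3.pkl", "MDL_4.pkl", "MDL_5.pkl"]
models = [joblib.load(f) for f in model_files]

def ensemble_proba(X_external):
    # Average the class probabilities over the 5 fold-models
    probas = [m.predict_proba(X_external) for m in models]
    return np.mean(probas, axis=0)

# Final class per row = argmax of the averaged probabilities:
# y_pred = ensemble_proba(X_external).argmax(axis=1)
```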
39,563,319 | 2016-09-18T22:42:00.000 | 1 | 0 | 1 | 0 | python | 39,566,295 | 1 | false | 0 | 0 | Let's put it this way:
If you are making a big jig saw puzzle, you start dividing the pieces into sections; clouds with clouds, water with water, grass with grass etc. Subsequently, you don't mix the pieces, but put each of the sections into a different corner on the table.
Likewise, if you have bigger coding projects, you start separating data from code, create functions and classes to organize your code in a way that makes sense, according to their functionality. If the project is big enough, put different sections into different files (modules).
This is not only to limit the scrolling through possibly thousands of lines, but also to keep your mind clean for the section you are working on, and to make the project maintainable.
Last but not least; working this way, most likely at a certain point you will find yourself writing modules, to be reused in other projects. | 1 | 2 | 0 | I have this script I'm working on which is some 2,000 lines. At the top I have a list of settings to be used in the script, then all the functions, then the script itself.
I found myself going back and forth from the actual script to the functions (and the settings) so I thought it might be a good idea to place them in separate files, to be imported in the main script, with the practical benefit of being able to open them in different windows, for ease of access.
Is this approach of any value? or, is there any problem with this, maybe in the long term, when the script grows even more? or is it all down to personal preference?
(if it matters, I'm using Python.) | Is it worth placing functions, variables, settings etc in different files? | 0.197375 | 0 | 0 | 64 |
39,566,809 | 2016-09-19T06:39:00.000 | 5 | 0 | 0 | 0 | python,dask | 57,812,535 | 2 | false | 0 | 0 | you can convert your dask dataframe to a pandas dataframe with the compute function and then use the to_csv. something like this:
df_dask.compute().to_csv('csv_path_file.csv') | 1 | 34 | 1 | New to dask. I have a 1GB CSV file; when I read it into a dask dataframe it creates around 50 partitions, and after my changes to the file, when I write it out, it creates as many files as there are partitions.
Is there a way to write all partitions to a single CSV file, and is there a way to access the partitions?
Thank you. | Writing Dask partitions into single file | 0.462117 | 0 | 0 | 14,834 |
39,570,317 | 2016-09-19T09:58:00.000 | 4 | 0 | 1 | 0 | python,object,if-statement,reference,conditional | 39,570,581 | 1 | true | 0 | 0 | When they say "everything is an object or a reference" they are referring specifically to data. So this naturally does not apply to statements. Of course, all expressions will result in data. For example a == b is <class 'bool'> because it is an expression.
There are some languages where if is an expression but python is not one of them. | 1 | 2 | 0 | As far as I understand, everything in python is an object or a reference.
For example: in x = 1, x is a reference to the integer object 1. If I write print type(x), then Python will tell me the object that x is referencing is an integer.
So what about statements such as if?
if I try print type(if), unsurprisingly, I get a syntax error. I can speculate on why this is the case. Maybe if is a static method of a class, or maybe it has somehow been weirdly defined as non returnable, etc. I just don't know.
Ultimately, I suspect that if has nothing to do with an object or a reference. However, that would surely go against the idea of everything being an object or a reference? | in Python, are statements objects? | 1.2 | 0 | 0 | 172 |
39,571,659 | 2016-09-19T11:05:00.000 | 0 | 0 | 0 | 0 | javascript,python,html | 69,865,240 | 2 | false | 1 | 0 | If you are not gonna use any sensitive data like password you can use localStorage or Url Hash . | 1 | 0 | 0 | I want to post data from html to another html
I know how to post data HTML -> Python and Python -> HTML.
I have a dictionary in the HTML (I get it from Python - return render_to_response('page.html', locals())).
How can I use the dictionary in the second HTML file? | Post data from html to another html | 0 | 0 | 1 | 95
39,573,614 | 2016-09-19T12:48:00.000 | 0 | 1 | 1 | 1 | php,python,bash,variables,share | 39,580,897 | 1 | false | 0 | 0 | OK, I think the best approach for me here would be to limit variable storages from 3 to at least 2 and make python script deal with bash tasks over os.system. To use 2 storages is somehow manageable | 1 | 1 | 0 | I have project that uses same initial variables on same server by different programming languages. they are PHP, python and bash. i need all languages to access those variable and I cannot exclude any language.
For now I keep 3 places to store the variables:
for PHP I have MySQL storage; for Python and bash, 2 separate files.
If the initial value of any variable changes, I need to change it in 3 locations.
I want to simplify that now. Let's assume all systems can access MySQL. Is there a way to define the initial variables in MySQL instead of files? Or what is the best practice to share variables in my case? | share variables by PHP, python, bash | 0 | 0 | 0 | 46
39,574,567 | 2016-09-19T13:35:00.000 | 0 | 0 | 1 | 0 | python-2.7,scipy,scikit-learn,k-means | 39,581,743 | 1 | false | 0 | 0 | So, the situation as of today is: there is no distributed Python implementation of KMeans++ other than in scikit-learn. That situation may change if a good implementation finds its way into scipy. | 1 | 0 | 1 | I have nothing against scikit-learn, but I had to install anaconda to get it, which is a bit obtrusive. | Is there any K-means++ implementation outside of scikit-learn for Python 2.7? | 0 | 0 | 0 | 72 |
39,576,478 | 2016-09-19T15:08:00.000 | 4 | 1 | 1 | 0 | python | 39,576,570 | 3 | true | 0 | 1 | Header files in C/C++ are a "copy paste" mechanism. The included header file is literally written into the file while preprocessing (copy pasting the source code together).
After that is done, the compiler translates the source code. The linker then connects the function calls.
That is somewhat outdated and error prone -> it can be really frustrating as well when something doesn't work as expected.
Newer languages have module systems, which are nicer (import simply does it). | 3 | 2 | 0 | I am a complete newbie to python transiting from C++.
While learning C++, it was explained to me that header files describe or define functions to the compiler so that it understands what means what, i.e., iostream contains the definition of cin (and much more) so that the compiler knows what it is and understands its function.
However, Python and Java do not need header files.
So basically, how does the compiler understand the actual meaning/function of 'print' or 'input' in Python? | Why Python and java does not need any header file whereas C and C++ needs them | 1.2 | 0 | 0 | 2,209
39,576,478 | 2016-09-19T15:08:00.000 | 1 | 1 | 1 | 0 | python | 39,576,943 | 3 | false | 0 | 1 | In Java and Python we use a similar keyword called import to add a package and use the methods in it. But in advanced languages like Java and Python few packages are imported by default.
e.g. in Java java.lang.* is imported by default. | 3 | 2 | 0 | I am a complete newbie to python transiting from C++.
While learning C++, it was explained to me that header files describe or define functions to the compiler so that it understands what means what, i.e., iostream contains the definition of cin (and much more) so that the compiler knows what it is and understands its function.
However, Python and Java do not need header files.
So basically, how does the compiler understand the actual meaning/function of 'print' or 'input' in Python? | Why Python and java does not need any header file whereas C and C++ needs them | 0.066568 | 0 | 0 | 2,209
39,576,478 | 2016-09-19T15:08:00.000 | -1 | 1 | 1 | 0 | python | 39,576,524 | 3 | false | 0 | 1 | Java and Python have import which is similar to include.
Some inbuilt functions are built in and hence does not require any imports. | 3 | 2 | 0 | I am a complete newbie to python transiting from C++.
While learning C++, it was explained to me that header files describe or define functions to the compiler so that it understands what means what, i.e., iostream contains the definition of cin (and much more) so that the compiler knows what it is and understands its function.
However, Python and Java do not need header files.
So basically, how does the compiler understand the actual meaning/function of 'print' or 'input' in Python? | Why Python and java does not need any header file whereas C and C++ needs them | -0.066568 | 0 | 0 | 2,209
39,577,548 | 2016-09-19T16:09:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-rest-framework | 39,584,832 | 1 | false | 1 | 0 | django-locking is the way to go. | 1 | 0 | 0 | I am trying to build a project(like e-commerce) using Django and integrate it with android. (I am not building website, I am trying mobile only, so I am using django-rest-framework to create api)
So my question is how to handle a case where two or more users can book an item at the same time when there is only a single item. (basically how to handle concurrent modification and access of data) ?
Please help. I am stuck on this one. | How to handle concurrent modifications in django? | 0 | 0 | 0 | 76 |
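Besides django-locking, Django's built-in row-level locking also covers the "last item" case; a hedged sketch, with the model and field names made up:

```python
from django.db import transaction
from django.http import JsonResponse

from myapp.models import Item  # hypothetical model with a `stock` field

def book_item(request, item_id):
    with transaction.atomic():
        # select_for_update() locks the row until the transaction ends,
        # so two concurrent bookings cannot both see stock == 1
        item = Item.objects.select_for_update().get(pk=item_id)
        if item.stock < 1:
            return JsonResponse({"ok": False, "reason": "sold out"}, status=409)
        item.stock -= 1
        item.save()
    return JsonResponse({"ok": True})
```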
39,580,809 | 2016-09-19T19:38:00.000 | 1 | 0 | 1 | 0 | python,mongodb,scrapy,mongoexport | 39,580,966 | 1 | true | 0 | 0 | Replace --out scrape-export.csv in your command with --out scrape-export-$(date +"%Y-%m-%d").csv
It'll create filenames in the format scrape-export-2016-09-05 | 1 | 0 | 0 | I have a MongoDB that houses data from a web scrape that runs weekly via Scrapy. I'm going to setup a cron job to run the scrape job weekly. What I would like to do is also export a CSV out of MongoDB using mongoexport however I would like to inject the current date into the file name. I've tried a few different methods without much success. Any help would be greatly appreciated! For reference, my current export string is: mongoexport --host localhost --db glimpsedb --collection scrapedata --csv --out scrape-export.csv --fields dealerid,unitid,seller,street,city,state,zipcode,rvclass,year,make,model,condition,price
So, ideally the file name would be scrape-export-current date.csv
Thanks again! | Export MongoDB to CSV Using File Name Variable | 1.2 | 1 | 0 | 244 |
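Since the weekly job is already driven from Python/Scrapy, another option is to build the dated filename in Python and shell out to mongoexport; a sketch, with the flags copied from the question:

```python
import subprocess
from datetime import date

# e.g. scrape-export-2016-09-19.csv
outfile = "scrape-export-{}.csv".format(date.today().isoformat())

subprocess.check_call([
    "mongoexport", "--host", "localhost", "--db", "glimpsedb",
    "--collection", "scrapedata", "--csv", "--out", outfile,
    "--fields", "dealerid,unitid,seller,street,city,state,zipcode,"
                "rvclass,year,make,model,condition,price",
])
```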
39,582,974 | 2016-09-19T22:21:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 39,583,123 | 1 | true | 0 | 0 | This output means that TensorFlow's shape inference has only been able to infer a partial shape for the mask tensor. It has been able to infer (i) that mask is a 4-D tensor, and (ii) its last dimension is 1; but it does not know statically the shape of the first three dimensions.
If you want to get the actual shape of the tensor, the main approaches are:
Compute mask_val = sess.run(mask) and print mask_val.shape.
Create a symbolic mask_shape = tf.shape(mask) tensor, compute mask_shape_val = sess.run(mask_shape) and print mask_shape_val.
Shapes usually have unknown components if the shape depends on the data, or if the tensor is itself a function of some tensor(s) with a partially known shape. If you believe that the shape of the mask should be static, you can trace the source of the uncertainty by (recursively) looking at the inputs of the operation(s) that compute mask and finding out where the shape becomes partially known. | 1 | 0 | 1 | During debuging the Tensorflow code, I would like to output the shape of a tensor, say, print("mask's shape is: ",mask.get_shape()) However, the corresponding output is mask's shape is (?,?,?,1) How to explain this kind of output, is there anyway to know the exactly value of the first three dimensions of this tensor? | regarding the tensor shape is (?,?,?,1) | 1.2 | 0 | 0 | 569 |
39,585,238 | 2016-09-20T03:25:00.000 | 0 | 0 | 1 | 0 | python,macos | 39,585,904 | 1 | false | 0 | 0 | for mysql-connector installation problem, i found the solution:
Try going to the python3 bin directory and finding the pip executable there. That pip can be overridden by the system python2 pip command, so if you want to install the MySQL-python module into the python3.x site-packages, you should cd to that bin directory and run ./pip install MySQL-python. It downloads the module successfully but the install fails with ImportError: No module named 'ConfigParser'. I googled that error and found there is no such module in python3, but we can get its fork version: mysqlclient.
NOTE: In order not to be conflict with system default python2 pip command, cd and go to python3 bin directory and ./pip install mysqlclient and succeed. | 1 | 0 | 0 | I installed python3.5 on my mac, its installation was automatically. but these days i found there was already python2 on my mac and every module i installed through pip went to /Library/Python/2.7/site-packages.
I find python3 installed location is /Library/Frameworks/Python.framework/Versions/3.5
I then downloaded mysql-connector-python and installed it, but the install location is python2.7/site-packages. When I open PyCharm, whose default interpreter is python3.5, I cannot use mysql-connector. Does anybody know about this problem? | mac two version python conflict | 0 | 1 | 0 | 851
39,596,724 | 2016-09-20T14:25:00.000 | 1 | 0 | 1 | 0 | python-2.7,class,inheritance | 39,596,873 | 1 | false | 0 | 0 | In python everything is an object, an object is an object. You can also create Meta-classes to change some behavior of your objects/object to implement something.
In you case KNNLearner(object) the (object) permits you to pass the class that you want to KNNLearner to inherit. | 1 | 0 | 0 | I am learning to make classes using template codes and one of them is coded with what looks like inheritance syntax:
class KNNLearner(object):
What is the purpose of (object) here? | Does Python have a pre-existing class called 'object'? | 0.197375 | 0 | 0 | 39 |
39,598,572 | 2016-09-20T15:49:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue,python-multithreading,pyzmq | 70,651,650 | 1 | false | 0 | 0 | Q : "Is there a better approach?"
A :
Well, my ultimate performance-candidate would be this :
the sampler will operate two or more separate, statically preallocated "circular" buffers, one being filled in a given phase while the other is free to get sent, and vice versa
once the sampler's filling reaches the end of the first buffer, it starts filling the other, sending the first one and vice versa
ZeroMQ zero-copy, zero-blocking .send( zmq.NOBLOCK ) over an inproc:// transport-class uses just memory-pointer mapping, without moving data in-RAM ( or we can even further reduce the complexity, if moving the filled-up buffer right from here directly to the client, w/o any mediating party ( if not needed otherwise ) for doing so, if using a pre-allocated, static storage,like a numpy.array( ( bufferSize, nBuffersInRoundRobinCYCLE ), dtype = np.int32 ), we can just send an already packed-block of { int32 | int64 }-s or other dtype-mapped data using .data-buffer, round-robin cycling along the set of nBuffersInRoundRobinCYCLE-separate inplace storage buffers (used for sufficient latency-masking, filling them one after another in cycle and letting them get efficiently .send( zmq.NOBLOCK )-sent in the "background" ( behind the back of the Python-GIL-lock blocker tyrant ) in the meantime as needed ).
Tweaking Python-interpreter, disabling gc.disable() at all and tuning the default GIL-lock smooth processing "meat-chopper" from 100[ms] somewhere reasonably above, as no threading is needed anymore, by sys.settimeinterval() and moving several acquired samples in lump multiples of CPU-words ~up~to~ CPU-cache-line lengths ( aligned for reducing the fast-cache-to-slow-RAM-memory cache-consistency management mem-I/O updates ) are left for the next LoD of bleeding performance boosting candidates | 1 | 7 | 0 | I acquire samples (integers) at a very high rate (several kilo samples per seconds) in a thread and put() them in a threading.Queue. The main thread get()s the samples one by one into a list of length 4096, then msgpacks them and finally sends them via ZeroMQ to a client. The client shows the chunks on the screen (print or plot). In short the original idea is, fill the queue with single samples, but empty it in large chunks.
Everything works 100% as expected. But the latter part i.e. accessing the queue is very very slow. Queue gets larger and the output always lags behind by several to tens of seconds.
My question is: how can I do something to make queue access faster? Is there a better approach? | Python threading queue is very slow | 0.197375 | 0 | 0 | 3,388 |
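A compact, hedged sketch of that double-buffer idea; the buffer sizes, the endpoint and the dtype are arbitrary, and more than two buffers may be needed if sends can lag behind the sampler.

```python
import numpy as np
import zmq

BUF_LEN, N_BUFS = 4096, 2
bufs = np.zeros((N_BUFS, BUF_LEN), dtype=np.int32)   # preallocated once, reused forever

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)
push.bind("tcp://*:5555")                             # endpoint is a placeholder

active, fill = 0, 0

def on_sample(value):
    """Called by the acquisition callback for every single sample."""
    global active, fill
    bufs[active, fill] = value
    fill += 1
    if fill == BUF_LEN:
        # Hand the full buffer to ZeroMQ without blocking the sampler;
        # copy=False avoids an extra in-RAM copy of the whole block.
        push.send(bufs[active], zmq.NOBLOCK, copy=False)
        active = (active + 1) % N_BUFS
        fill = 0
```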
39,600,764 | 2016-09-20T17:57:00.000 | 1 | 1 | 0 | 0 | python,optimization,tail-recursion | 39,600,829 | 1 | true | 0 | 0 | It's not possible. inspect doesn't let you rewrite the stack that way, and in any case, it only gives Python stack frames. Even if you could change how the Python stack frames hook up to each other, the C call stack would be unaffected. | 1 | 0 | 0 | I've recently discovered the inspect and thought if it's possible to manually remove "outer" frames of the current frame and thus implementing tail-recursion optimization.
Is it possible? How? | Hacking tail-optimization | 1.2 | 0 | 0 | 36 |
39,602,586 | 2016-09-20T19:45:00.000 | 2 | 0 | 0 | 0 | javascript,python,email,security | 39,603,344 | 1 | true | 1 | 0 | The first time someone accesses the URL, you could send them a random cookie, and save that cookie with the document. On future accesses, check if the cookie matches the saved cookie. If they share the URL with someone, that person won't have the cookie.
Caveats:
If they share the URL with someone else, and the other person goes to the URL first, they will be the one who can access it, not the original recipient.
If the recipient clears cookies, they'll lose access to the document. You'll need a recovery procedure. This could send a new URL to the original email address. | 1 | 0 | 0 | Not sure if this question should come to SO, but here it goes.
I have the following scenario:
A Flask app with typical users that can login using username / password. Users can share some resources among them, but now we want to let them share those with anyone, not users of the app basically.
Because the resources content is important, only the person that received the email should be able to access the resource. Not everyone with the link, in other words.
What I've thought so far:
Create a one-time link -> This could work, but I'd prefer if the link is permanent
Add some Javascript in the HTML email message sent and add a parameter to the request sent so I can make sure the email address that opened the link was the correct one. This assuming that I can do that with Javascript...which is not clear to me. This will make the link permanent though.
Any thoughts? Thanks | Is there a way to share a link with only a spefic mail recipient? | 1.2 | 0 | 0 | 29
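A hedged Flask sketch of that cookie-binding idea; the in-memory store and the rendering are placeholders for the app's real models and templates, and secrets requires Python 3.6+.

```python
import secrets
from flask import Flask, abort, make_response, request

app = Flask(__name__)
claimed = {}   # placeholder store: resource_id -> token (use your DB in practice)

@app.route("/shared/<resource_id>")
def shared(resource_id):
    token = request.cookies.get("share_token")
    if resource_id not in claimed:
        # First visit claims the permanent link for this browser only
        token = secrets.token_urlsafe(32)
        claimed[resource_id] = token
        resp = make_response("resource %s" % resource_id)  # render the real resource here
        resp.set_cookie("share_token", token, httponly=True)
        return resp
    if token != claimed[resource_id]:
        abort(403)   # someone else with the URL but without the cookie
    return "resource %s" % resource_id
```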
39,602,962 | 2016-09-20T20:09:00.000 | 1 | 0 | 1 | 0 | python,windows,memory,memory-management | 39,603,049 | 2 | false | 0 | 0 | Have you heard of paging? Windows dumps some ram (that hasn't been used in a while) to your hard drive to keep your computer from running out or ram and ultimately crashing.
Only Windows deals with memory management. Although, if you use Windows 10, it will also compress your memory, somewhat like a zip file. | 1 | 2 | 0 | I have written a program that expands a database of prime numbers. This program is written in python and runs on windows 10 (x64) with 8GB RAM.
The program stores all primes it has found in a list of integers for further calculations and uses approximately 6-7GB of RAM while running. During some runs however, this figure has dropped to below 100MB. The memory usage then stays low for the duration of the run, though increasing as expected as more numbers are added to the prime array. Note that not all runs result in a memory drop.
Memory usage measured with task manager
These, seemingly random, drops has led me the following theories:
There's a bug in my code, making it drop critical data and messing up the results (most likely but not supported by the results)
Python just happens to optimize my code extremely well after a while.
Python or Windows is compensating for my over-usage of the RAM by cleaning out portions of my prime-number array that aren't used that much. (eventually resulting in incorrect calculations)
Python or Windows is compensating for my over-usage of the RAM by allocating disk space instead of ram.
Questions
What could be the reason(s) for this memory drop?
How does python handle programs that use more than available RAM?
How does Windows handle programs that use more than available RAM? | Using more memory than available | 0.099668 | 0 | 0 | 1,637 |
39,604,918 | 2016-09-20T22:51:00.000 | 6 | 0 | 0 | 0 | python,numpy,machine-learning,neural-network | 39,605,142 | 2 | false | 0 | 0 | (n,) is a tuple of length 1, whose only element is n. (The syntax isn't (n) because that's just n instead of making a tuple.)
If an array has shape (n,), that means it's a 1-dimensional array with a length of n along its only dimension. It's not a row vector or a column vector; it doesn't have rows or columns. It's just a vector. | 1 | 9 | 1 | I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's Neural Networks and Deep Learning, where he says "It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector."
At first I thought (n,) referred to the orientation of the array - so it might refer to a one-column vector as opposed to a vector with only one row. But then I don't see why we need (n,) and (n, 1) both - they seem to say the same thing. I know I'm misunderstanding something but am unsure.
For reference a refers to a vector of activations that will be input to a given layer of a neural network, before being transformed by the weights and biases to produce the output vector of activations for the next layer.
EDIT: This question equivocates between a "one-column vector" (there's no such thing) and a "one-column matrix" (does actually exist). Same for "one-row vector" and "one-row matrix".
A vector is only a list of numbers, or (equivalently) a list of scalar transformations on the basis vectors of a vector space. A vector might look like a matrix when we write it out, if it only has one row (or one column). Confusingly, we will sometimes refer to a "vector of activations" but actually mean "a single-row matrix of activation values transposed so that it is a single-column."
Be aware that in neither case are we discussing a one-dimensional vector, which would be a vector defined by only one number (unless, trivially, n==1, in which case the concept of a "column" or "row" distinction would be meaningless). | What does (n,) mean in the context of numpy and vectors? | 1 | 0 | 0 | 2,722 |
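A quick way to see the difference concretely (output shown in the comments):

import numpy as np

a = np.zeros(3)                 # shape (3,): a 1-D array, no rows or columns
b = np.zeros((3, 1))            # shape (3, 1): a 2-D array, 3 rows and 1 column
print(a.shape, b.shape)         # (3,) (3, 1)
print(a.T.shape)                # (3,)  -- transposing a 1-D array changes nothing
print(b.T.shape)                # (1, 3) -- transposing the 2-D column gives a row
print(a.reshape(-1, 1).shape)   # (3, 1) -- how to turn an (n,) array into (n, 1)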
39,606,308 | 2016-09-21T01:52:00.000 | 0 | 0 | 0 | 0 | javascript,python,pycharm | 39,629,226 | 1 | false | 1 | 0 | Open the Chrome Developer tool setting, and disable the cache.
credit to @All is Vanity | 1 | 1 | 0 | I am developing a simple web application integrated with MySQL database. I am using PyCharm to write Python, HTML, JavaScript, CSS. After I make change to my JavaScript and I run my application on Chrome, the Chrome console suggests that the change did not apply. I already invalid PyCharm caches and restart Pycharm, it still cannot work. Anyone has idea about this?
PS: if I rename the JavaScript file, it will work. But what is the reason of this problem? And how can I solve it without renaming?
Thanks in advance! | PyCharm not respond to my change in JavaScript file | 0 | 0 | 0 | 433 |
39,607,359 | 2016-09-21T04:03:00.000 | 0 | 0 | 0 | 0 | python,django | 39,607,668 | 2 | false | 1 | 0 | OK, so I crossed my fingers, backed my local 0021-0028 migration files, and then deleted them. It worked. I think they key is that the migration files were not yet in the database yet, but not 100% sure. +1 if anyone can answer further for clarification. | 1 | 2 | 0 | So, I committed and pushed all my code, and then deployed my web application successfully. Then, I added a new model to my 'home' app, which (for a reason I now understand, but doesn't matter here), created an IntegrityError (django.db.utils.IntegrityError: insert or update on table "foo" violates foreign key constraint "bar"). I ran python manage.py makemigrations, python manage.py migrate, which causes the the IntegrityError.
However, even if I remove all of my new model code (so that git status comes up with nothing), the IntegrityError still happens. If I connect to my db via a different python instance and run select * from django_migrations;, the latest db migration recorded there (0020) is eight migrations away from my latest local home/migrations migration file (0028).
--> My question is: is it safe for me to delete my local 0021-0028 migration files? Will this fix my problem? | Stuck in a django migration IntegrityError loop: can I delete those migrations that aren't yet in the db? | 0 | 0 | 0 | 801 |
39,607,721 | 2016-09-21T04:42:00.000 | 0 | 0 | 0 | 0 | python,algorithm,chess | 39,607,825 | 4 | false | 0 | 0 | Try something. Draw boards of the following sizes: 1x1, 2x2, 3x3, 4x4, and a few odd ones like 2x4 and 3x4. Starting with the smallest board and working to the largest, start at the bottom left corner and write a 0, then find all moves from zero and write a 1, find all moves from 1 and write a 2, etc. Do this until there are no more possible moves.
After doing this for all 6 boards, you should have noticed a pattern: Some squares couldn't be moved to until you got a larger board, but once a square was "discovered" (ie could be reached), the number of minimum moves to that square was constant for all boards not smaller than the board on which it was first discovered. (Smaller means less than n OR less than x, not less than (n * x) )
This tells something powerful, anecdotally. All squares have a number associated with them that must be discovered. This number is a property of the square, NOT the board, and is NOT dependent on size/shape of the board. It is always true. However, if the square cannot be reached, then obviously the number is not applicable.
So you need to find the number of every square on a 200x200 board, and you need a way to see if a board is a subset of another board to determine if a square is reachable.
Remember, in these programming challenges, some questions that are really hard can be solved in O(1) time by using lookup tables. I'm not saying this one can, but keep that trick in mind. For this one, pre-calculating the 200x200 board numbers and saving them in an array could save a lot of time, whether it is done only once on first run or run before submission and then the results are hard coded in.
If the problem needs move sequences rather than number of moves, the idea is the same: save move sequences with the numbers. | 2 | 0 | 1 | Here is the problem:
Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the up to n = 200 x = 200)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).
My current ideas:
Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.
Floyd-Warshall shortest path algorithm.
Create an adjacency list and perform BFS on that (but I think this would be inefficient).
To be honest though I don't really have a solid grasp on the logic.
Can anyone help me with pseudocode, python code, or even just a logical walk-through of the problem? | Number of shortest paths | 0 | 0 | 0 | 3,995
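Since the question explicitly asks for code, here is a minimal breadth-first-search sketch along the lines of the asker's first idea. It returns the minimum number of knight moves from the bottom-left to the top-right square of an n-by-x board (as described in the question body), and is illustrative rather than tuned for the largest inputs.

from collections import deque

def knight_min_moves(n, x):
    start, target = (0, 0), (n - 1, x - 1)     # (column, row) coordinates
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    dist = {start: 0}
    queue = deque([start])
    while queue:
        c, r = queue.popleft()
        if (c, r) == target:
            return dist[(c, r)]
        for dc, dr in moves:
            nc, nr = c + dc, r + dr
            if 0 <= nc < n and 0 <= nr < x and (nc, nr) not in dist:
                dist[(nc, nr)] = dist[(c, r)] + 1
                queue.append((nc, nr))
    return -1                                   # target unreachable (tiny boards)

print(knight_min_moves(4, 5))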
39,607,721 | 2016-09-21T04:42:00.000 | 0 | 0 | 0 | 0 | python,algorithm,chess | 39,608,395 | 4 | false | 0 | 0 | My approach to this question would be backtracking as the number of squares in the x-axis and y-axis are different.
Note: Backtracking algorithms can be slow for certain cases and fast for the other
Create a 2-D array for the chessboard. You know the starting index and the final index. To reach the final index you need to keep close to the diagonal that joins the two indexes.
From the starting index, look at all the indexes the knight can travel to, choose the one closest to that diagonal and keep traversing; if there is no way to travel any further, backtrack one step and move to the next location available from there.
PS: This is a bit similar to the well-known Knight's Tour problem, in which, choosing any starting point, you have to find a path on which the knight would cover all squares. I have coded this as a Java GUI application; I can send you the link if you want any help.
Hope this helps!! | 2 | 0 | 1 | Here is the problem:
Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the up to n = 200 x = 200)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).
My current ideas:
Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.
Floyd-Warshall shortest path algorithm.
Create an adjacency list and perform BFS on that (but I think this would be inefficient).
To be honest though I don't really have a solid grasp on the logic.
Can anyone help me with pseudocode, python code, or even just a logical walk-through of the problem? | Number of shortest paths | 0 | 0 | 0 | 3,995
39,608,377 | 2016-09-21T05:44:00.000 | 0 | 0 | 0 | 0 | python,ajax,django,rest,django-rest-framework | 39,608,464 | 1 | true | 1 | 0 | I usually follow DDD approach. So all my requests end up being just a CRUD operation for an entity. I always prefer REST APIs, thus I would say if you have DDD approach already in place go with django-rest-framework.
Otherwise, it really does not matter, depends on your need. | 1 | 0 | 0 | I have a bunch of ajax requests on my website (ex. upvote sends request to server)
Should I integrate this functionality server side just with another view function
Or is it recommended that I shove all the necessary views into a Django rest framework? | Should I handle ajax requests in vanilla Django or rest Django? | 1.2 | 0 | 0 | 86 |
39,611,124 | 2016-09-21T08:21:00.000 | 0 | 1 | 0 | 0 | python,producer-consumer,kafka-python | 39,652,029 | 1 | true | 0 | 0 | I found the answer already: just make sure the number of partitions is not equal to one when creating the new topic. | 1 | 0 | 0 | I've tried sending messages from a single producer to 2 consumers with DIFFERENT consumer group ids. The result is that both consumers are able to read the complete message stream (both consumers get the same messages). But I would like to ask: is it possible for these 2 consumers to read different messages while setting them under the SAME consumer group name? | Single producer to multi consumers (Same consumer group) | 1.2 | 0 | 1 | 112
39,611,995 | 2016-09-21T09:02:00.000 | 0 | 0 | 0 | 1 | python,cassandra,cqlsh,cqlengine | 39,628,303 | 1 | false | 0 | 0 | Blob will be converted to a byte array in Python if you read it directly. That looks like a byte array containing the Hex value of the blob.
One way is to explicitly do the conversion in your query.
select id, name, blobasint(value) from table limit 3
There should be a conversion method with the Python driver as well. | 1 | 0 | 0 | I have a column-family/table in cassandra-3.0.6 which has a column named "value" which is defined as a blob data type.
CQLSH query select * from table limit 2; returns me:
id | name | value
id_001 | john | 0x010000000000000000
id_002 | terry | 0x044097a80000000000
If I read this value using cqlengine(Datastax Python Driver), I get the output something like:
{'id':'id_001', 'name':'john', 'value': '\x01\x00\x00\x00\x00\x00\x00\x00\x00'}
{'id':'id_002', 'name':'terry', 'value': '\x04@\x97\xa8\x00\x00\x00\x00\x00'}
Ideally the values in the "value" field are 0 and 1514 for row1 and row2 resp.
However, I am not sure how I can convert the "value" field values extracted using cqlengine to 0 and 1514. I tried few methods like ord(), decode(), etc but nothing worked. :(
Questions:
What is this format?
'\x01\x00\x00\x00\x00\x00\x00\x00\x00' or
'\x04@\x97\xa8\x00\x00\x00\x00\x00'?
How I can convert these arbitrary values to 0 and 1514?
NOTE: I am using python 2.7.9 on Linux
Any help or pointers would be useful.
Thanks, | Not able to convert cassandra blob/bytes string to integer | 0 | 1 | 0 | 1,608 |
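For reference, a Python 2.7 sketch of interpreting the raw bytes returned by the driver. How to decode depends entirely on how the value was written; treating it as a one-byte prefix followed by a big-endian IEEE-754 double is only an assumption that happens to fit the examples above.

import struct

raw = '\x04@\x97\xa8\x00\x00\x00\x00\x00'    # bytes as returned by the driver

# generic view: the whole blob as one big-endian unsigned integer
as_int = int(raw.encode('hex'), 16)
print(as_int)

# assumed layout: 1-byte prefix + 8-byte big-endian double -> 1514.0 in this case
as_double = struct.unpack('>d', raw[1:])[0]
print(as_double)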
39,613,555 | 2016-09-21T10:09:00.000 | 0 | 0 | 1 | 0 | python,n-gram,language-model | 39,613,813 | 1 | false | 0 | 0 | Sounds like you need to store the intermediate frequency counts on disk rather than in memory. Luckily most databases can do this, and python can talk to most databases. | 1 | 1 | 1 | I have 3 million abstracts and I would like to extract 4-grams from them. I want to build a language model so I need to find the frequencies of these 4-grams.
My problem is that I can't extract all these 4-grams in memory. How can I implement a system that can estimate all frequencies for these 4-grams? | N-grams - not in memory | 0 | 0 | 0 | 83
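A minimal sketch of the on-disk counting idea using sqlite3 from the standard library; the tokenisation and the abstracts iterable are simplifying assumptions.

import sqlite3

conn = sqlite3.connect('ngrams.db')
conn.execute('CREATE TABLE IF NOT EXISTS grams (gram TEXT PRIMARY KEY, freq INTEGER)')

def count_4grams(abstracts):
    # abstracts can be any iterable/generator that streams texts from disk
    for text in abstracts:
        tokens = text.lower().split()          # naive tokenisation
        for i in range(len(tokens) - 3):
            gram = ' '.join(tokens[i:i + 4])
            cur = conn.execute('UPDATE grams SET freq = freq + 1 WHERE gram = ?', (gram,))
            if cur.rowcount == 0:
                conn.execute('INSERT INTO grams (gram, freq) VALUES (?, 1)', (gram,))
    conn.commit()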
39,615,861 | 2016-09-21T11:54:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,memcached | 39,623,880 | 2 | false | 1 | 0 | Memcache is shared across users. It is not a cookie, but exists in RAM on the server for all pertinent requests to access. | 1 | 1 | 0 | Using Google App Engine Memcache... Can more than one user access the same key-value pair?
or in other words.. Is there a Memcache created per user or is it shared across multiple users? | Google App Engine Memcache Python | 0 | 0 | 0 | 71 |
39,616,849 | 2016-09-21T12:41:00.000 | 1 | 0 | 0 | 0 | python,google-bigquery | 39,632,118 | 2 | false | 0 | 0 | There's no option to do this in one step. I'd recommend running the query, inspecting the results, and then performing a table copy with WRITE_TRUNCATE to commit the results to the final location if the intermediate output contains at least one row. | 1 | 0 | 0 | In BigQuery it's possible to write to a new table the results of a query. I'd like the table to be created only whenever the query returns at least one row. Basically I don't want to end up creating empty table. I can't find an option to do that. (I am using the Python library, but I suppose the same applies to the raw API) | Write from a query to table in BigQuery only if query is not empty | 0.099668 | 1 | 0 | 788 |
39,617,506 | 2016-09-21T13:09:00.000 | 1 | 0 | 1 | 0 | python,caching,blender | 40,272,613 | 1 | true | 0 | 0 | you can avoid that pycache folder by setting the sys.dont_write_bytecode
varaible to True.
Keep in mind that there won´t be any caches and caches at all for all your python files | 1 | 0 | 0 | I'm currently coding an add-on for Blender (on OSX, but this shouldn't be relevant).
All my python files are in the default add-on folder. This folder is loaded at Blender's startup and I can see, enable and disable it in User Preferences in Blender.
Now, when I edit my add-on, I simply save the file and hit f8 in Blender to reload all the add-ons. This is kind of working, but sometimes (not always), my changes are not reloaded and I have to go to the add-on folder and delete a folder called __pycache__, then back in Blender and hit f8 again.
Is there a way to prevent the cache from loading an obsolete version of my add-on (either by specifying it in my code, or by setting something in Blender)? | Blender: disable addon cache | 1.2 | 0 | 0 | 724 |
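For reference, the setting mentioned in the answer is just the following, placed early in the add-on; note it only stops new caches from being written, it does not make already-imported modules reload by itself.

import sys
sys.dont_write_bytecode = True   # Python stops creating __pycache__ folders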
39,618,985 | 2016-09-21T14:08:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,joblib | 39,621,200 | 1 | false | 0 | 0 | After reverting to scikit-learn 0.16.x, I just needed to install OpenBlas for Ubuntu. It appears that the problem was more a feature of the operating system rather than Python. | 1 | 0 | 1 | I have a Hidden Markov Model that has been pickled with joblib using the sklearn.hmm module. Apparently, in version 0.17.x this module has been deprecated and moved to hmmlearn. I am unable to load the model and I get the following error:
ImportError: No module named 'sklearn.hmm'
I have tried to revert back to version 0.16.x but still cannot load the model. I get the following error:
ImportError: libopenblas.so.0: cannot open shared object file: No such file or directory
I do not have access to the source code to recreate the model and re-pickle it
I am running Python 3.5
Has anyone else experienced this problem and have you found a solution? Does anyone know if scikit-learn has any way to guarantee persistence since the deprecation? | Deprecated Scikit-learn module prevents joblib from loading it | 0 | 0 | 0 | 576 |
39,620,185 | 2016-09-21T15:03:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,parallel-processing,scikit-learn,logistic-regression | 39,620,443 | 2 | false | 0 | 0 | the parallel process backend also depends on the solver method. if you want to utilize multi core, the multiprocessing backend is needed.
but solver like 'sag' can only use threading backend.
and also mostly, it can be blocked due to a lot of pre-processing. | 1 | 2 | 1 | I'm trying to train a huge dataset with sklearn's logistic regression.
I've set the parameter n_jobs=-1 (also have tried n_jobs = 5, 10, ...), but when I open htop, I can see that it still uses only one core.
Does it mean that logistic regression just ignores the n_jobs parameter?
How can I fix this? I really need this process to become parallelized...
P.S. I am using sklearn 0.17.1 | sklearn Logistic Regression with n_jobs=-1 doesn't actually parallelize | 0.099668 | 0 | 0 | 2,100 |
39,621,594 | 2016-09-21T16:11:00.000 | 0 | 0 | 0 | 0 | python,django,email,smtp,webfaction | 39,628,967 | 1 | true | 1 | 0 | You set EMAIL_HOST and EMAIL_PORT just for sending emails to your user.
Mail is sent using the SMTP host and port specified in the EMAIL_HOST and EMAIL_PORT settings. The EMAIL_HOST_USER and EMAIL_HOST_PASSWORD settings, if set, are used to authenticate to the SMTP server, and the EMAIL_USE_TLS and EMAIL_USE_SSL settings control whether a secure connection is used. | 1 | 1 | 0 | I'm implementing a contact form for one of my sites. One thing I'm not sure I understand completely is why you need EMAIL_HOST_USER and EMAIL_HOST_PASSWORD.
The user would only need to provide his/her email address, so what is the EMAIL_HOST_USER referring to then and why would I need to specify an email and password?
EDIT:
I'm using webfaction as my mail server | django email username and password | 1.2 | 0 | 0 | 268 |
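For illustration, a typical settings.py block; the host, port and credentials are placeholders rather than Webfaction-specific values, and they belong to the mailbox the site sends from, not to the visitor filling in the contact form.

# settings.py
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'noreply@example.com'        # the account the site sends as
EMAIL_HOST_PASSWORD = 'app-password-here'
EMAIL_USE_TLS = True
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER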
39,627,787 | 2016-09-21T23:01:00.000 | 2 | 0 | 1 | 0 | python,numpy | 39,640,835 | 2 | false | 0 | 0 | Actually, in order to make all intermediate directories if needed, use os.makedirs(path, exist_ok=True). If the directories already exist, the command will not throw an error. | 2 | 2 | 1 | I'm trying to loop over many arrays and create files stored in different folders.
Is there a way to have np.savetxt creating the folders I need as well?
Thanks | Create Folder with Numpy Savetxt | 0.197375 | 0 | 0 | 6,937 |
39,627,787 | 2016-09-21T23:01:00.000 | 3 | 0 | 1 | 0 | python,numpy | 39,628,096 | 2 | true | 0 | 0 | savetxt just does a open(filename, 'w'). filename can include a directory as part of the path name, but you'll have to first create the directory with something like os.mkdir. In other words, use the standard Python directory and file functions. | 2 | 2 | 1 | I'm trying loop over many arrays and create files stored in different folders.
Is there a way to have np.savetxt creating the folders I need as well?
Thanks | Create Folder with Numpy Savetxt | 1.2 | 0 | 0 | 6,937 |
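Putting the two answers together, a small helper might look like this (Python 3 shown; on Python 2 check os.path.isdir first instead of passing exist_ok).

import os
import numpy as np

def save_array(arr, path):
    # create any missing parent folders, then let savetxt write the file
    os.makedirs(os.path.dirname(path) or '.', exist_ok=True)
    np.savetxt(path, arr)

save_array(np.arange(10), 'results/run_01/values.txt')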
39,628,128 | 2016-09-21T23:44:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-ec2,boto | 39,649,819 | 1 | false | 1 | 0 | At the time of writing, there is no way to do this in a single operation. | 1 | 0 | 0 | Is that a way to create an EC2 instance with tags(I mean, adding tag as parameter when creating instance)
I can't find this function in boto APIs. According to the document, we can only add tags after creating.
However, when creating on the browser, we can configure the tags when creating. So can we do the same thing in boto? (In our course we are required to tag our resource when creating, which is for bill monitor purpose, so adding tags after creating is not allowed.....) | Is there a way to set a tag on an EC2 instance while creating it? | 0 | 0 | 1 | 66 |
39,628,191 | 2016-09-21T23:54:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-webdriver | 39,629,897 | 2 | false | 0 | 0 | In case of implicit wait driver waits till elements appears in DOM but at the same time it does not guarantee that elements are usable. Elements might not be enabled to be used ( like button click ) or elements might not have shape defined at that time.
We are not interested with all the elements on the page as far as we are using selenium. All element might not have shape even.But presence of all the element in DOM is important to have other element working correctly. So implicit wait.
When working with any element, we use explicit wait ( WebDriverwait ) or FluentWait. | 1 | 2 | 0 | Other people have asked this question and there are some answers but they do not clarify one moment. Implicit wait will wait for a specified amount of time if element is not found right away and then will run an error after waiting for the specified amount of time. Does it mean that implicit wait checks for the element the very first second and then waits for the specified time and checks at the last second again?
I know that explicit wait polls the DOM every 500ms. What is the practical use of implicit wait if tests take longer with it? | Selenium Webdriver Python - implicit wait is not clear to me | 0 | 0 | 1 | 1,773 |
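For reference, the two styles look like this in Python; the locator is a placeholder.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()

# implicit wait: one global timeout applied whenever an element is looked up
driver.implicitly_wait(10)                 # seconds

# explicit wait: polls the DOM (every 500 ms by default) for a condition
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'submit-button')))
button.click()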
39,631,465 | 2016-09-22T06:08:00.000 | 1 | 0 | 1 | 0 | javascript,python | 39,632,417 | 3 | false | 1 | 0 | Well, an interpreter is not really a job for a beginner, also you'd better send the code to server side with AJAX, and then display the result in the page. | 1 | 1 | 0 | I want to make a python interpreter by using Javascript.
Then you can input the python code and the Javascript in the webpage can interpret the code into javascript code and then run the code and return the result.
Because I don't have much experience in this area, I would like some advice from more experienced developers.
Thanks very much ... | making a python interpreter using javascript | 0.066568 | 0 | 0 | 4,748 |
39,640,037 | 2016-09-22T13:10:00.000 | 1 | 0 | 0 | 0 | python,django,django-models | 39,640,286 | 1 | true | 1 | 0 | ForeignKey is a many-to-one relationship. Requires a positional argument: the class to which the model is related.
It must be a Relation (class) or Null (if null is allowed). You cannot set 0 (an integer) on a ForeignKey column. | 1 | 1 | 0 | I have to update a record which has a foreign key constraint.
I have to assign 0 to the column which is defined as a foreign key,
but while updating, Django doesn't let me update the record. | How to temporarily disable foreign key constraint in django | 1.2 | 0 | 0 | 1,839
39,642,961 | 2016-09-22T15:21:00.000 | 2 | 0 | 1 | 1 | python,django,postgresql,psycopg2 | 51,605,060 | 4 | false | 0 | 0 | Instead of
pip install psycopg2
try
pip install psycopg2-binary | 1 | 2 | 0 | I have postgresql installed (with postgresql app). When I try "pip install psycopg2", i get "unable to execute gcc-4.2: No such file or directory. How to fix? | How to install psycopg2 for django on mac | 0.099668 | 0 | 0 | 1,746 |
39,643,256 | 2016-09-22T15:34:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,computer-vision,tensorflow | 39,664,586 | 2 | false | 0 | 0 | I'm guessing that you're wanting to automatically store the hyper-parameters as part of the file name in order to organize your experiments better? Unfortunately there isn't a good way to do this with TensorFlow, but you can look at some of the high-level frameworks built on top of it to see if they offer something similar. | 1 | 0 | 1 | Is it possible to group tensorflow FLAGS by type?
E.g.
Some flags are system related (e.g. # of threads) while others are model hyperparams.
Then, is it possible to use the model hyperparams FLAGS, in order to generate a string? (the string will be used to identify the model filename)
Thanks | Is it possible to group tensorflow FLAGS by type and generate a string from them? | 0 | 0 | 0 | 79 |
39,646,077 | 2016-09-22T18:11:00.000 | 0 | 0 | 1 | 0 | python,anaconda | 39,646,217 | 2 | false | 0 | 0 | You can use any text editor to open a .py file, e.g. TextMate, TextWrangler, TextEdit, PyCharm, AquaMacs, etc. | 2 | 1 | 0 | I have installed Anaconda, but I do not know how to open a .py file..
If it is possible, please explain plainly, I browsed several threads, but I understood none of them..
Thanks a lot for your helps..
Best, | How to read a .py file after I install Anaconda? | 0 | 0 | 0 | 8,545 |
39,646,077 | 2016-09-22T18:11:00.000 | 3 | 0 | 1 | 0 | python,anaconda | 39,646,328 | 2 | true | 0 | 0 | In the menu structure of your operating system, you should see a folder for Anaconda. In that folder is an icon for Spyder. Click that icon.
After a while (Spyder loads slowly) you will see the Spyder integrated environment. You can choose File then Open from the menu, or just click the Open icon that looks like an open folder. In the resulting Open dialog box, navigate to the relevant folder and open the relevant .py file. The Open dialog box will see .py, .pyw, and .ipy files by default, but clicking the relevant list box will enable you to see and load many other kinds of files. Opening that file will load the contents into the editor section of Spyder. You can view or edit the file there, or use other parts of Spyder to run, debug, and do other things with the file.
As of now, there is no in-built way to load a .py file in Spyder directly from the operating system. You can set that up in Windows by double-clicking a .py file, then choosing the spyder.exe file, and telling Windows to always use that application to load the file. The Anaconda developers have said that a soon-to-come version of Anaconda will modify the operating system so that .py and other files will load in Spyder with a double-click. But what I said above works for Windows.
This answer was a bit condensed, since I do not know your level of understanding. Ask if you need more details. | 2 | 1 | 0 | I have installed Anaconda, but I do not know how to open a .py file..
If it is possible, please explain plainly, I browsed several threads, but I understood none of them..
Thanks a lot for your helps..
Best, | How to read a .py file after I install Anaconda? | 1.2 | 0 | 0 | 8,545 |
39,648,189 | 2016-09-22T20:19:00.000 | 1 | 0 | 1 | 0 | python,pip,virtualenv | 46,492,773 | 2 | false | 0 | 0 | I may be late, but you can do it either via the config file (~/.pip/pip.conf):
[global]
require-virtualenv = true
or via the environment variable PIP_REQUIRE_VIRTUALENV=true. | 1 | 4 | 0 | Sometimes by mistake I install some packages globally with plain pip install package and contaminate my system instead of creating a proper virtualenv and keeping things tidy.
How can I easily disable global installs with pip at all? Or at least show big fat warning when using it this way to make sure that I know what am I doing? | Disable global installs using pip - allow only virtualenvs | 0.099668 | 0 | 0 | 534 |
39,648,803 | 2016-09-22T21:01:00.000 | 0 | 1 | 0 | 0 | php,python,linux,apache,beagleboneblack | 39,649,062 | 1 | false | 0 | 0 | I am unfamiliar with php, but could you have php write to a temporary text file, and then when that python script gets called, it simply reads in that text file to store the boolean value? | 1 | 1 | 0 | I'm making a simple home automation system with my Beagle Bone, raspberry pi and a hand full of components. I have a simple interface on a webpage and i'm currently trying to remotely toggle a relay. Right now I have a button on the webpage that uses php to call a python script that either turns the relay on or off depending on a boolean.
I'm having trouble figuring out the best way to share this boolean. Is there any way to pass a php varible into the python script? or is there anyway to have the python interpreter "keep/save" the state of the variable in-betweeen instances of the script. Or is the best way just to have it write/read from a common file? any help would be awesome | Best way to share a variable with multiple instances of the same python script | 0 | 0 | 0 | 55 |
39,650,312 | 2016-09-22T23:24:00.000 | 3 | 0 | 0 | 0 | python,numpy | 39,650,350 | 1 | true | 0 | 0 | If x is the array, you could use 2*(x >= 0) - 1.
x >= 0 will be an array of boolean values (i.e. False and True), but when you do arithmetic with it, it is effectively cast to an array of 0s and 1s.
You could also do np.sign(x) + (x == 0). (Note that np.sign(x) returns floating point values, even when x is an integer array.) | 1 | 1 | 1 | I have a large numpy array with positive data, negative data and 0s. I want to convert it to an array with the signs of the current values such that 0 is considered positive. If I use numpy.sign it returns 0 if the current value is 0 but I want something that returns 1 instead. Is there an easy way to do this? | Convert a numpy array into an array of signs with 0 as positive | 1.2 | 0 | 0 | 2,087 |
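Quick check of both suggestions:

import numpy as np

x = np.array([-3, 0, 5])
print(2 * (x >= 0) - 1)          # [-1  1  1]
print(np.sign(x) + (x == 0))     # [-1  1  1]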
39,652,553 | 2016-09-23T04:16:00.000 | 0 | 0 | 0 | 1 | python,python-wheel,python-install | 39,652,742 | 2 | false | 0 | 0 | You don't have to know. Use pip - it will select the most specific wheel available. | 1 | 0 | 1 | We have so may versions of wheel.
How could we know which version should be installed into my system?
I remember there is a certain command which could check my system environment.
Or is there any other ways?
---------------------Example Below this line -----------
scikit_learn-0.17.1-cp27-cp27m-win32.whl
scikit_learn-0.17.1-cp27-cp27m-win_amd64.whl
scikit_learn-0.17.1-cp34-cp34m-win32.whl
scikit_learn-0.17.1-cp34-cp34m-win_amd64.whl
scikit_learn-0.17.1-cp35-cp35m-win32.whl
scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl
scikit_learn-0.18rc2-cp27-cp27m-win32.whl
scikit_learn-0.18rc2-cp27-cp27m-win_amd64.whl
scikit_learn-0.18rc2-cp34-cp34m-win32.whl
scikit_learn-0.18rc2-cp34-cp34m-win_amd64.whl
scikit_learn-0.18rc2-cp35-cp35m-win32.whl
scikit_learn-0.18rc2-cp35-cp35m-win_amd64.whl | How to know which .whl module is suitable for my system with so many? | 0 | 0 | 0 | 1,458 |
39,654,954 | 2016-09-23T07:21:00.000 | 1 | 0 | 1 | 0 | python,user-interface,frameworks | 39,655,710 | 2 | false | 0 | 1 | Yes, after all tinker and pygame are just python classes packaged as modules.
Python frameworks are a bunch of pre-tested and reusable modules that allow you to use and extend upon so you don't have to reinvent the wheel.
Yes, frameworks will have differences in usability and code.
The computer will always need the dependencies, though you can package these in various ways aka create a package that has all your dependencies for the program to run. | 1 | 2 | 0 | Is it possible to create a user interface without the help of python framework (like tinker or pygame) and use only vanilla python code? If yes, how?
Can you briefly explain how python framework works?
Is the code of different python framework different?
If the computer did not have the framework installed, will the program still be runnable if the program uses a framework?
Thanks very much | Making GUI with only python without framework? | 0.099668 | 0 | 0 | 4,330 |
39,654,986 | 2016-09-23T07:23:00.000 | 3 | 1 | 1 | 0 | python,compression,complexity-theory,zlib | 39,662,445 | 1 | false | 0 | 0 | Time complexity is how the processing time scales with the size of the input. For zlib, and any other compression scheme I know of, it is O(n) for both compression and decompression. The time scales linearly with the size of the input.
If you are thinking that the time complexity of decompression is less somehow, then perhaps you are thinking about the constant in front of the n, as opposed to the n. Yes, decompression is usually faster than compression, because that constant is smaller. Not because the time complexity is different, because it isn't. | 1 | 3 | 0 | What is the time complexity of Zlib's deflate algorithm?
I understand that in Python this algorithm is made available via the zlib.compress function.
Presumably the corresponding decompression algorithm has the same or better complexity. | Time complexity of zlib's deflate algorithm | 0.53705 | 0 | 0 | 858 |
39,659,748 | 2016-09-23T11:32:00.000 | 1 | 1 | 1 | 0 | python,eclipse,python-3.x,autocomplete | 39,773,761 | 1 | true | 0 | 0 | If you are using PyDev, make sure that interpreter grammar is set to 3.0 (right click project -> Properties -> PyDev - Interpreter/Grammar) | 1 | 0 | 0 | I changed the interpreter for my python projects from 2.x to 3.5 recently. The code interpretes correctly with the 3.5 version.
I noticed that the autocompletion function of Eclipse still autocompletes as if I am using 2.x Python version. For example: print gets autocompleted without parenthesis as a statement and not as a function. Any idea how to notify the Eclipse that it need to use 3.5 autocompletion? | force eclipse to use Python 3.5 autocompletion | 1.2 | 0 | 0 | 82 |
39,660,973 | 2016-09-23T12:38:00.000 | 0 | 0 | 0 | 0 | python-2.7,audio,wav,wave | 39,747,053 | 1 | false | 1 | 0 | I found solution. The trick was to make wav file the same way as you would make it for 32 bit depth, but set LOWER(not upper) 8 bits(LSBs) to zeros. So in hex format you would have 00 xx xx 00 xx xx ... where xx are some hex numbers. | 1 | 0 | 0 | is it posible to generate wav file in python with 24bit deep and sample width 4, not 3 (3x8=24). The idea is to have 32bit deep, so that sample width of 4 (4x8=32) can be made, but i would try to make upper bits all ones (1), so that it looks like 24bit deep.
I'm open to suggestions.
Thank you. | 24 bit deep wav file generator | 0 | 0 | 0 | 58 |
39,663,091 | 2016-09-23T14:18:00.000 | -1 | 0 | 0 | 1 | python,python-2.7,cmd | 39,663,773 | 3 | false | 0 | 0 | 1.Go to Environmental Variables >
system variable > Path > Edit
2.It looks like this
Path C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\;
3.You can add semicolon (;) at the end and add C:\Python27
4.After adding, it looks like this
C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\;C:\Python27; | 2 | 4 | 0 | I'm using windows cmd to run my python script. I want to run my python script withouth to give the cd command and the directory path.
I would like to type only the name of the python script and run it.
I'm using python 2.7 | How can I add a python script to the windows system path? | -0.066568 | 0 | 0 | 5,762 |
39,663,091 | 2016-09-23T14:18:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,cmd | 69,844,640 | 3 | false | 0 | 0 | Make sure .py files are associated with the Python launcher C:\Windows\py.exe or directly with e.g. C:\Python27\python.exe, then edit your PATHEXT environment variable (System Properties) to add ;.PY at the end. You can now launch Python files in the current directory by typing their name.
To be able to launch a given Python script from any directory, you can either put it in a directory that's already on the PATH, or add a new directory to PATH (I like creating a bin directory in my user folder and adding %USERPROFILE%\bin to PATH) and put it there.
Note that this is more a "how do I use Windows" question rather than a Python question. | 2 | 4 | 0 | I'm using windows cmd to run my python script. I want to run my python script withouth to give the cd command and the directory path.
I would like to type only the name of the python script and run it.
I'm using python 2.7 | How can I add a python script to the windows system path? | 0.066568 | 0 | 0 | 5,762 |
39,665,029 | 2016-09-23T15:59:00.000 | 1 | 1 | 0 | 0 | python | 39,665,400 | 2 | false | 0 | 0 | shutil is a very useful thing to use when copying files.
I once needed a python script that moved all .mp3 files from a directory to a backup, deleted the original directory, created a new one, and moved the .mp3 files back in. shutil was perfect for this.
The formatting for the command is how @Kieran has stated earlier.
If you're looking to keep file metadata, then use shutil.copy2(src, dest), as that is the equivalent of running copy() and copystat() one after another. | 1 | 1 | 0 | I am very new to Python. I am curious to how I would be able to copy some files from my directory to another Users directory on my computer using python script? And would I be correct in saying I need to check the permissions of the users and files? So my question is how do I send files and also check the permissions at the same time | Python sending files from a user directory to another user directory | 0.099668 | 0 | 0 | 639 |
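A small sketch of the copy itself; the paths are placeholders, and whether it succeeds depends on the permissions of the other user's directory.

import shutil

src = '/home/me/report.pdf'
dst = '/home/otheruser/inbox/report.pdf'

try:
    shutil.copy2(src, dst)       # copy2 also preserves timestamps and metadata
except (IOError, OSError) as err:
    print('Copy failed, probably a permissions problem:', err)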
39,668,174 | 2016-09-23T19:24:00.000 | 1 | 0 | 0 | 0 | python,opencv,coordinate-transformation,homography | 39,668,864 | 2 | false | 0 | 0 | The way I see it, the problem is that homography applies a perspective projection which is a non linear transformation (it is linear only while homogeneous coordinates are being used) that cannot be represented as a normal transformation matrix. Multiplying such perspective projection matrix with some other transformations therefore produces undesirable results.
You can try multiplying your original matrix H element wise with:
S = [1,1,scale ; 1,1,scale ; 1/scale, 1/scale, 1]
H_full_size = S * H
where scale is for example 2, if you decreased the size of original image by 2. | 1 | 2 | 1 | I am calculating an homography between two images img1 and img2 (the images contain mostly one planar object, so the homography works well between them) using standard methods in OpenCV in python. Namely, I compute point matches between the images using sift and then call cv2.findHomography.
To make the computation faster I scale down the two images into small1 and small2 and perform the calculations on these smaller copies, so I calculate the homography matrix H, which maps small1 into small2.
However, at the end, I would like to use calculate the homography matrix to project one full-size image img1 onto the other the full-size image img2.
I thought I could simply transform the homography matrix H in the following way H_full_size = A * H * A_inverse where A is the matrix representing the scaling from img1 to small1 and A_inverse is its inverse.
However, that does not work. If I apply cv2.warpPerspective to the scaled down image small1 with H, everything goes as expected and the result (largely) overlaps with small2. If I apply cv2.warpPerspective to the full size image img1 with H_full_size the result does not map to img2.
However, if I project the point matches (detected on the scaled down images) using A (using something like projected_pts = cv2.perspectiveTransform(pts, A)) and then I calculate H_full_size from these, everything works fine.
Any idea what I could be doing wrong here? | homography and image scaling in opencv | 0.099668 | 0 | 0 | 2,276 |
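A numpy sketch of the element-wise correction described in the answer above, which is equivalent to A * H * inverse(A) with A = diag(scale, scale, 1); scale is how many times larger the full-size images are than the small ones.

import numpy as np

def upscale_homography(H, scale):
    # same as A @ H @ inv(A) with A = diag(scale, scale, 1), written element-wise
    S = np.array([[1.0, 1.0, scale],
                  [1.0, 1.0, scale],
                  [1.0 / scale, 1.0 / scale, 1.0]])
    return H * S                  # element-wise product, not a matrix product

H_small = np.eye(3)               # stand-in for cv2.findHomography's output
H_full_size = upscale_homography(H_small, 2.0)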
39,670,135 | 2016-09-23T21:58:00.000 | 0 | 0 | 0 | 1 | python,arm | 39,798,938 | 1 | false | 0 | 0 | Build gevent with dependencies on QEMU raspberry pi. | 1 | 1 | 0 | I want to build greenlet to use on arm32 linux box. I have an ubuntu machine, with my gcc cross-compiler for the arm target. How do I build greenlet for my target from my ubuntu machine? | Cross-compiling greenlet for linux arm target | 0 | 0 | 0 | 141 |
39,671,661 | 2016-09-24T01:34:00.000 | 0 | 0 | 1 | 0 | python,algorithm,data-structures | 39,671,924 | 3 | false | 0 | 0 | The lookup time wouldn't be O(n) because not all items need to be searched, it also depends on the number of buckets. More buckets would decrease the probability of a collision and reduce the chain length.
The number of buckets can be kept as a constant factor of the number of entries by resizing the hash table as needed. Along with a hash function that evenly distributes the values, this keeps the expected chain length bounded, giving constant time lookups.
The hash tables used by hashmaps and hashsets are the same except they store different values. A hashset will contain references to a single value, and a hashmap will contain references to a key and a value. Hashsets can be implemented by delegating to a hashmap where the keys and values are the same. | 2 | 0 | 1 | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and then list[key] and then add to the head of a LinkedList. This occurs in O(1) time
When removing a value from the linkedlist, in python we replace it with a placeholder because hashsets are not allowed to have Null/None values, correct?
When the list[] gets over a certain % of load/fullness, we copy it over to another list
Regarding Time Complexity Confusion:
So one question is, why is Average search/access O(1) if there can be a list of N items at the linkedlist at a given index?
Wouldnt the average case be the searchitem is in the middle of its indexed linkedlist so it should be O(n/2) -> O(n)?
Also, when removing an item, if we are replacing it with a placeholder value, isn't this considered a waste of memory if the placeholder is never used?
And finally, what is the difference between this and a HashMap other than HashMaps can have nulls? And HashMaps are key/value while Hashsets are just value? | Is my understanding of Hashsets correct?(Python) | 0 | 0 | 0 | 1,427 |
39,671,661 | 2016-09-24T01:34:00.000 | 0 | 0 | 1 | 0 | python,algorithm,data-structures | 39,671,749 | 3 | false | 0 | 0 | For your first question - why is the average time complexity of a lookup O(1)? - this statement is in general only true if you have a good hash function. An ideal hash function is one that causes a nice spread on its elements. In particular, hash functions are usually chosen so that the probability that any two elements collide is low. Under this assumption, it's possible to formally prove that the expected number of elements to check is O(1). If you search online for "universal family of hash functions," you'll probably find some good proofs of this result.
As for using placeholders - there are several different ways to implement a hash table. The approach you're using is called "closed addressing" or "hashing with chaining," and in that approach there's little reason to use placeholders. However, other hashing strategies exist as well. One common family of approaches is called "open addressing" (the most famous of which is linear probing hashing), and in those setups placeholder elements are necessary to avoid false negative lookups. Searching online for more details on this will likely give you a good explanation about why.
As for how this differs from HashMap, the HashMap is just one possible implementation of a map abstraction backed by a hash table. Java's HashMap does support nulls, while other approaches don't. | 2 | 0 | 1 | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and then list[key] and then add to the head of a LinkedList. This occurs in O(1) time
When removing a value from the linkedlist, in python we replace it with a placeholder because hashsets are not allowed to have Null/None values, correct?
When the list[] gets over a certain % of load/fullness, we copy it over to another list
Regarding Time Complexity Confusion:
So one question is, why is Average search/access O(1) if there can be a list of N items at the linkedlist at a given index?
Wouldnt the average case be the searchitem is in the middle of its indexed linkedlist so it should be O(n/2) -> O(n)?
Also, when removing an item, if we are replacing it with a placeholder value, isn't this considered a waste of memory if the placeholder is never used?
And finally, what is the difference between this and a HashMap other than HashMaps can have nulls? And HashMaps are key/value while Hashsets are just value? | Is my understanding of Hashsets correct?(Python) | 0 | 0 | 0 | 1,427 |
39,672,376 | 2016-09-24T04:07:00.000 | 0 | 0 | 0 | 0 | python,class | 39,672,721 | 1 | false | 0 | 0 | I would say yes. Basically I want to:
Take the unique set of data
Filter it so that just a subset is considered (filter parameters can be time of recording for example)
Use a genetic algorithm on the filtered data to match a target on average.
Step 3 is irrelevant to the post, I just wanted to give the big picture in order to make my question more clear. | 1 | 1 | 1 | I'm planning to develop a genetic algorithm for a series of acceleration records in a search to find optimum match with a target.
At this point my data is array-like with a unique ID column, X,Y,Z component info in the second, time in the third etc...
That being said each record has several "attributes". Do you think it would be beneficial to create a (records) class considering the fact I will want to do a semi-complicated process with it as a next step?
Thanks | Organizing records to classes | 0 | 0 | 0 | 17 |
39,678,374 | 2016-09-24T16:11:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue | 39,678,524 | 1 | false | 0 | 0 | If you don't cap the size, the Queue can grow until you run out of memory. So no size limit is imposed by Python, but your machine still has finite resources.
In some applications (probably most), the programmer knows memory consumption can't become a problem, due to the specific character of their application.
But if, e.g., you have producers that "run forever", and consumers that run much slower than producers, capping the size is essential to avoid unbounded memory demands.
As to deadlocks, it's highly unlikely that the implementation of Queue is responsible for a deadlock regardless of whether the Queue's size is bounded; far more likely that deadlocks are due to flaws in application code. For example, picture a producer that fetches things over a network, and mistakenly suppresses errors when the network connection is broken. Then the producer can fail to produce any new items, and so a consumer will eventually block forever waiting for something new to show up on the Queue. That "looks like" deadlock, but the cause really has nothing to do with Queue. | 1 | 1 | 0 | so I have Queue.Queue()
i have bunch of producers who puts jobs into that Queue.Queue() and bunch of consumers who pops from the queue
1) is there benefit of capping the Queue size vs. not doing so?
2) by not capping the size, does it really not have any size limit? can grow forever?
I've noticed that deadlock seems to occur more when queue has a fixed size | python: queue.queue() best practice to set size? | 0.197375 | 0 | 0 | 1,074 |
39,679,167 | 2016-09-24T17:38:00.000 | 3 | 0 | 0 | 0 | python,django | 39,679,214 | 1 | false | 1 | 0 | You're confusing two different things here. A class can easily have an attribute that is a list which contains instances of another class, there is nothing difficult about that.
(But note that there is no way in which a Message should extend MessageBox; this should be composition, not inheritance.)
However then you go on to talk about Django models. But Django models, although they are Python classes, also represent tables in the database. And the way you represent one table containing a list of entries in another table is via a foreign key field. So in this case your Message model would have a ForeignKey to MessageBox.
Where you put the send method depends entirely on your logic. A message should probably know how to send itself, so it sounds like the method would go there. | 1 | 0 | 0 | I have been having trouble using django. Right now, I have a messagebox class that is suppose to hold messages, and a message class that extends it. How do I make it so messagebox will hold messages?
Something else that I cannot figure out is how classes are to interact. Like, I have a user that can send messages. Should I call its method to call a method in messagebox to send a msg or can I have a method in user to make a msg directly.
My teacher tries to accentuate cohesion and coupling, but he never even talks about how to implement this in django or implement django period. Any help would be appreciated. | How can a class hold an array of classes in django | 0.53705 | 0 | 0 | 121 |
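A minimal sketch of the composition described in the answer; the field names and related_name values are illustrative.

from django.db import models

class MessageBox(models.Model):
    owner = models.ForeignKey('auth.User', related_name='message_boxes',
                              on_delete=models.CASCADE)

class Message(models.Model):
    box = models.ForeignKey(MessageBox, related_name='messages',
                            on_delete=models.CASCADE)
    sender = models.ForeignKey('auth.User', related_name='sent_messages',
                               on_delete=models.CASCADE)
    body = models.TextField()

    def send(self):
        # a message knows how to deliver itself into its box
        self.save()

# box.messages.all() then returns every Message held by that MessageBox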
39,679,473 | 2016-09-24T18:09:00.000 | 0 | 0 | 0 | 0 | python,tkinter,pyqt,pyqt4 | 39,681,979 | 2 | false | 0 | 1 | No, there is no way to combine widgets from PyQt and Tkinter in a single app. At least, not without resorting to running each toolkit in a separate thread or process. You can't embed the widgets of one into the widgets of the other. | 2 | 1 | 0 | I have an image editor created in tkinter. However, I would add a floating widgets that exist in PyQt. Is there any way to run integrated tkinter with PyQt? | Using widgets intregrated. PyQT with Tkinter | 0 | 0 | 0 | 367 |
39,679,473 | 2016-09-24T18:09:00.000 | 1 | 0 | 0 | 0 | python,tkinter,pyqt,pyqt4 | 39,689,904 | 2 | false | 0 | 1 | I make a workaround that solved the problem. I used python subprocess for call the PyQT instance and the option QtCore.Qt.WindowStaysOnTopHint for app running on top of tkinter. It´s work.
but the best solution is to create a thread in python and call PyQt in this thread. In this case it is possible to pass an instance of tk for PyQt and make communication between the two. It´s work too. It´s fine. | 2 | 1 | 0 | I have an image editor created in tkinter. However, I would add a floating widgets that exist in PyQt. Is there any way to run integrated tkinter with PyQt? | Using widgets intregrated. PyQT with Tkinter | 0.099668 | 0 | 0 | 367 |
39,680,467 | 2016-09-24T20:05:00.000 | 1 | 0 | 1 | 0 | python,encoding,output | 41,566,970 | 1 | true | 0 | 0 | The bytes type in Python is what I should have been using for this. Though I didn't quite understand it when I was posting the question, I needed a list of single-byte variables. This is exactly what the bytes object does, and even better, it can be used exactly like a string. | 1 | 0 | 0 | For example: the character \x80, or 128 in decimal, has no UTF-8 character assigned to it. But if I understand text files correctly, I should still be able to create a file that contains that character, even if nothing can display it. However, when I try to print an array that contains one of these characters, it writes as '\x80', and when I try to write it directly as a chr, I get an error "UnicodeEncodeError: 'charmap' codec can't encode character '\x80' in position 0: character maps to ". Am I doing something fundamentally wrong, or is there a fix I just don't know about here? | Write unmapped characters to file? | 1.2 | 0 | 0 | 146 |
39,681,631 | 2016-09-24T22:45:00.000 | 1 | 0 | 1 | 1 | python,subprocess,chroot,isolation | 39,681,690 | 1 | false | 0 | 0 | I don't know if you have an objection to using a 3'rd party communication library for your task but this sounds like what ZeroMQ would be used for. | 1 | 1 | 0 | I am currently working on a personal project where I have to run two processes simultaneously. The problem is that I have to isolate each of them (they cannot communicate between them or with my system) and I must be able to control their stdin, stdout and stderr. Is there anyway I can achieve this?
Thank you! | Subprocess Isolation in Python | 0.197375 | 0 | 0 | 324 |
39,681,639 | 2016-09-24T22:46:00.000 | 3 | 0 | 1 | 0 | python,memory-management,linked-list | 39,681,668 | 2 | true | 0 | 0 | First, you're evidently talking about a 32-bit machine since your pointers and values are 4 bytes. Obviously, that's different for a 64-bit machine.
Second, the value needn't be 4 bytes. Frequently the value is a pointer or an int, in which case it is 4 bytes (on your 32-bit machine). But if it was a double, for example, it would be 8 bytes. In fact, the payload could be any type at all, and have that type's size.
Third, your book probably is referring to the two pointers - the links - as the "overhead'.
Fourth, your book is omitting the impact of the memory manager ("heap manager"). Frequently, because of alignment issues and heap management issues, heap elements are larger than actually requested. Most heap implementations on a 32-bit machine won't allocate 12 bytes when you ask for 12. They'll allocate 16 bytes. (The last 4 bytes are not used by your program.) Because for many machines, 8-byte alignment of certain values (e.g., doubles) is either required by the machine architecture or desirable for performance reasons. You have to investigate yourself, for your particular heap implementation (that is, the compiler's run-time's heap implementation) what kind of overhead it imposes. Additionally, some heap implementations (many?) actually use memory inside the allocated object for its own bookkeeping purposes. In this case, that header amount is sometimes as small as 4 bytes, but typically, for most machines which require 8 byte alignment for doubles, is 8 bytes. So in this usual case, if you ask for 12 bytes you'll actually use up 24 bytes: 8 bytes of heap overhead and 12 bytes for your data, and that's only 20, so an additional 4 bytes just for alignment! | 2 | 1 | 0 | I understand that a node is usually 8 bytes, 4 for a value, 4 for a pointer in a singly linked list.
Does this mean that a doubly linked list node is 12 bytes in memory with two pointers?
Also this book I'm reading talks about how theres 8 bytes of "overhead" for every 12 byte node, what does this refer to? | What does "overhead" memory refer to in the case of Linked Lists? | 1.2 | 0 | 0 | 1,495 |
39,681,639 | 2016-09-24T22:46:00.000 | 0 | 0 | 1 | 0 | python,memory-management,linked-list | 39,681,696 | 2 | false | 0 | 0 | Does this mean that a doubly linked list node is 12 bytes in memory with two pointers?
Yes, if the data is 4 bytes, and the code is compiled for 32bit and not 64bit, so 4 bytes per pointer.
Also this book I'm reading talks about how theres 8 bytes of "overhead" for every 12 byte node, what does this refer to?
That might be referring to the 8 bytes used for the 2 pointers. But it might also refer to the memory manager's own overhead when allocating the 12 bytes. Memory managers usually have to allocate more bytes than requested so they can store tracking info used later when freeing the memory (class type for destructor calls, array element counts, etc). | 2 | 1 | 0 | I understand that a node is usually 8 bytes, 4 for a value, 4 for a pointer in a singly linked list.
Does this mean that a doubly linked list node is 12 bytes in memory with two pointers?
Also this book I'm reading talks about how theres 8 bytes of "overhead" for every 12 byte node, what does this refer to? | What does "overhead" memory refer to in the case of Linked Lists? | 0 | 0 | 0 | 1,495 |
39,683,194 | 2016-09-25T03:49:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,ptvs | 39,732,669 | 1 | false | 0 | 0 | I have just reinstalled windows 10, python, and visual studio again. It works now. I have no clue why it did not work before. | 1 | 1 | 0 | I am trying to use PTVS in visual studio, but cannot set python interpreter. I installed visual studio enterprise 2015 and installed python 3.5.2.
I opened python environment in visual studio, but I cannot find installed interpreter, even cannot click the '+custom' button.
Please let me know if someone experienced same issue and solved it. | I am trying to use PTVS in visual studio, but cannot set python interpreter | 0 | 0 | 0 | 298 |
39,687,586 | 2016-09-25T13:53:00.000 | 0 | 0 | 0 | 0 | python | 39,687,703 | 2 | false | 0 | 0 | Most probably you use SOCK_STREAM type socket. This is a TCP socket and that means that you push data to one side and it gets from the other side in the same order and without missing chunks, but there are no delimiters. So send() just sends data and recv() receives all the data available to the current moment.
You can use SOCK_DGRAM and then UDP will be used. But in such case every send() will send a datagram and recv() will receive it. But you are not guaranteed that your datagrams will not be shuffled or lost, so you will have to deal with such problems yourself. There is also a limit on maximal datagram size.
Or you can stick to TCP connection but then you have to send delimiters yourself. | 1 | 1 | 0 | SORRY FOR BAD ENGLISH
Why, if I have two send()s on the server and two recv()s on the client, does the first recv() sometimes get the content of the 2nd send() from the server as well, instead of taking just the content of the first one and letting the other recv() take the "due and proper" content of the other send()?
How can I get this to work differently? | Weird behavior of send() and recv() | 0 | 0 | 1 | 66
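If you stay with TCP, the usual fix is to add your own framing so each logical message can be separated again; a minimal length-prefix sketch (a 4-byte big-endian length before each payload) is shown below as one option among several.

import struct

def send_msg(sock, payload):
    # prefix every message with its length so the receiver knows where it ends
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise RuntimeError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)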
39,691,547 | 2016-09-25T20:31:00.000 | 0 | 0 | 0 | 1 | python,cmd,command-line,sublimetext3,command-prompt | 66,841,135 | 5 | false | 0 | 0 | After adding the path variable, restart your PC, worked like a charm. | 3 | 3 | 0 | when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe
Please help me with this setup | Sublime Text 3 subl command not working in Windows 10 | 0 | 0 | 0 | 7,531 |
39,691,547 | 2016-09-25T20:31:00.000 | 1 | 0 | 0 | 1 | python,cmd,command-line,sublimetext3,command-prompt | 60,349,655 | 5 | false | 0 | 0 | You can add gitbash alias like below
open a gitbash terminal and type
alias subl="/c/Program\ Files/Sublime\ Text\ 3/subl.exe"
then try subl . from gitbash
you can also add permanent alias for git bash like below
Go to: C:\Users\ [youruserdirectory] \
make a .bash_profile file
open it with text editor
add the alias .
alias subl="/c/Program\ Files/Sublime\ Text\ 3/subl.exe" | 3 | 3 | 0 | when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe
Please help me with this setup | Sublime Text 3 subl command not working in Windows 10 | 0.039979 | 0 | 0 | 7,531 |
39,691,547 | 2016-09-25T20:31:00.000 | 1 | 0 | 0 | 1 | python,cmd,command-line,sublimetext3,command-prompt | 63,048,909 | 5 | false | 0 | 0 | After adding a path environmental variable, you have just to type subl.exe in command prompt | 3 | 3 | 0 | when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe
Please help me with this setup | Sublime Text 3 subl command not working in Windows 10 | 0.039979 | 0 | 0 | 7,531 |