Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths, 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths, 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths, 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths, 15 to 29k) | Title (stringlengths, 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,655,223 | 2017-08-12T21:50:00.000 | 0 | 0 | 0 | 0 | python,selenium | 45,656,035 | 1 | false | 0 | 0 | I don't think they have any built-in way to do this with any of the browsers. Your best bet would be to connect to the same instance of the browser (this is easier if you use the grid server) from another program and then take screenshots at short intervals. | 1 | 0 | 0 | I've been working with selenium in Python recently.
I was curious if anyone has had experience with recording an instance of a headless browser? I tried finding a way to do this, but didn't find any solutions in Python - a code example would be excellent.
Some tips would be helpful. | Recording video of headless selenium browser | 0 | 0 | 1 | 454 |
45,656,981 | 2017-08-13T04:11:00.000 | 0 | 1 | 1 | 0 | python,unit-testing | 45,657,800 | 1 | false | 0 | 0 | Mock can help to write unit tests.
In unit tests, you want to test a small portion of your implementation. For example, as small as one function or one class.
In moderately large software, these small parts depend on each other. Or sometimes there are external dependencies. You open files, do syscalls, or get external data in some other way.
While writing a directed unit test for a small portion of your code, you do not want to spend time setting up everything else around it (the files, syscalls, external data). Mock comes to your aid there. With mock, you can make the other dependencies of your code behave exactly as you like. This way you can focus on testing your intended implementation.
Coming to the mock with the return value: Say you want to test func_a. And func_a calls func_b. func_b does a lot of funky processing to calculate its return value. For example, talking to an external service, doing a bunch of syscalls, or some other expensive operation. Since you are testing func_a, you only care about possible return values of func_b (so that func_a can use them). In this scenario you would mock func_b and set the return values explicitly. This can really simplify your test complexity. | 1 | 0 | 0 | I am very new to Python and I saw many projects on Github using Mock to do their tests but I don't understand why.
When we use mock, we construct a Mock object with a specific return_value, but I don't truly understand why we do this. I know sometimes it is difficult to build our needed resources but what is the point if we construct an object/function with a certain return value? | Python - Why we should use mock to do test? | 0 | 0 | 0 | 58
45,657,132 | 2017-08-13T04:40:00.000 | 0 | 0 | 1 | 0 | python,parallel-processing,multiprocessing,networkx,ipython-parallel | 45,657,197 | 1 | false | 0 | 0 | Try treating G as a database - that way it can be shared by all the sub-processes, and they will be able to get info from it and do what they need | 1 | 0 | 0 | This might be a naive question but I've really tried searching multiple resources: multiprocessing and ipyparallel, but these seem to lack appropriate information for my task.
What I have is a large directed graph G with 9 million edges and 6 million nodes. My goal is, for a list of target nodes (50k), along with their direct neighbours (both in/out), to extract subgraphs from G. I am currently using networkx to do this.
I tried to use ipyparallel but I could not find a tutorial on how to share an object (in my case, G) across processors for the subgraph function. Is there an easy way to parallelize this across different cpu cores (there are 56 available so I really want to make full use of them)?
Thank you! | Parallelizing subgraph tasks in Python | 0 | 0 | 1 | 280 |
45,657,358 | 2017-08-13T05:20:00.000 | 0 | 1 | 1 | 0 | python | 45,657,424 | 3 | false | 0 | 0 | Try this: any(x in string for x in '@#!') | 1 | 0 | 0 | The program asks me to check whether any special chars like !@#... are present in the given string. How can I do this? | How to check whether there is any special characters present in the given string using python? | 0 | 0 | 0 | 355
45,658,550 | 2017-08-13T08:33:00.000 | 1 | 0 | 1 | 0 | python,type-hinting | 45,658,789 | 1 | false | 0 | 0 | The short answer is there is no easy way. The typing module is, by design, not going to provide much help for runtime checking. PEP 484 says
This PEP aims to provide a standard syntax for type annotations,
opening up Python code to easier static analysis and refactoring,
potential runtime type checking, and (perhaps, in some contexts) code
generation utilizing type information.
Of these goals, static analysis is the most important. This includes
support for off-line type checkers such as mypy, as well as providing
a standard notation that can be used by IDEs for code completion and
refactoring.
Non-goals
While the proposed typing module will contain some building
blocks for runtime type checking -- in particular the get_type_hints()
function -- third party packages would have to be developed to
implement specific runtime type checking functionality, for example
using decorators or metaclasses. Using type hints for performance
optimizations is left as an exercise for the reader. | 1 | 2 | 0 | I have a function accepting other Python functions which are annotated with type hints (__annotations__). I would like to use those hints to do some type checking during runtime. The issue is that type classes from the typing module do not seem very easy to work with (no isinstance, no issubclass). So, I wonder, is there a way to convert them to mypy type objects, and then use mypy.subtypes.is_subtype to compare types from type hints? | Compare types from Python typing | 0.197375 | 0 | 0 | 941
45,661,124 | 2017-08-13T13:57:00.000 | 0 | 0 | 1 | 0 | python,list,numpy | 45,661,326 | 3 | false | 0 | 0 | I found code that accomplishes my request:
x = [str(i[0]) for i in the_list] | 1 | 1 | 1 | I have a list of ten 1-dimension ndarrays, where each one holds a string, and I would like to get one long list where every item will be a string (without using ndarrays anymore). How should I implement it? | How to convert a list of ndarray arrays into list in python | 0 | 0 | 0 | 136
45,662,253 | 2017-08-13T15:58:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,keras,jupyter | 57,688,939 | 7 | false | 0 | 0 | Of course. If you are running on the TensorFlow or CNTK backend, your code will run on your GPU devices by default. But for the Theano backend, you can use the following
Theano flags:
"THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py" | 1 | 154 | 1 | I'm running a Keras model, with a submission deadline of 36 hours. If I train my model on the CPU it will take approx 50 hours; is there a way to run Keras on a GPU?
I'm using Tensorflow backend and running it on my Jupyter notebook, without anaconda installed. | Can I run Keras model on gpu? | 0.057081 | 0 | 0 | 296,920 |
45,662,729 | 2017-08-13T16:54:00.000 | 1 | 0 | 1 | 0 | python,pycharm,virtualenv,virtualenvwrapper | 45,663,849 | 1 | true | 0 | 0 | As @phd says in the comments, and also as mentioned in the virtualenvwrapper docs, virtualenvwrapper includes wrappers for creating and deleting virtual environments and otherwise managing your development workflow. It is essentially created to make virtualenv easier to use. | 1 | 1 | 0 | There is an option in PyCharm (==2017.2) which allows you to create virtual environments without the CLI.
But this option supports the virtualenv package; can PyCharm support virtualenvwrapper too?
I am asking for later use in the CLI only. | Integration virtualenvwrapper into Pycharm | 1.2 | 0 | 0 | 1,481
45,666,183 | 2017-08-14T00:58:00.000 | 0 | 0 | 0 | 0 | python,google-cloud-platform,sparse-matrix,bigdata | 45,666,230 | 1 | true | 0 | 0 | From a Python perspective, I'm currently using h5py to handle this big data. And it is fast too. You should check it out. However, I believe that Google might have provided something to handle this type of data. | 1 | 0 | 0 | I have a large dataset with 100 million rows of user online activities. Each row includes a timestamp, user id, and site domain name. I would like to transform the dataset into a matrix of unique domain and user id, in order to perform some matrix operations. The number of unique domains is about 100K and the number of unique users is about 10 million. The matrix is very sparse.
What are the best packages or technologies to use? I realize my question is very broad. I am using python and Google Cloud Platform, so I am hoping the solutions would be along those lines. | How to load large dataset to python and perform matrix operations | 1.2 | 0 | 0 | 48
45,667,982 | 2017-08-14T05:35:00.000 | 14 | 0 | 1 | 0 | python,virtualenv | 45,668,213 | 1 | true | 0 | 0 | When it's there, pip will check that you have the latest version of pip. You can remove it. | 1 | 15 | 0 | I am using virtualenv in my python project and I noticed a file called
pip-selfcheck.json
What is the purpose of this file? Can I delete it from my project? | pip-selfcheck.json with virtualenv | 1.2 | 0 | 0 | 3,801 |
45,672,563 | 2017-08-14T10:30:00.000 | 0 | 0 | 0 | 0 | php,python,web,network-programming | 45,712,038 | 1 | false | 0 | 0 | Your web application cannot know if there is a video service like Skype running on the same host as the browser, but your application can simply run a speed test to check the currently available bandwidth between the browser and your server. For instance, you can do that easily by writing some AJAX code that automatically starts downloading a big file (using XMLHttpRequest()) and stops after 10 seconds (using XMLHttpRequest.abort()). You can monitor the throughput using progress events. Then, if your app finds that there is not enough bandwidth, you may ask the user to stop network applications. | 1 | 0 | 0 | I am working on a web based application where users will use their webcams for realtime video calls in the web browser (kinda Webx). Is there any way that the web application can identify if any (tcp/udp/https) application/service is using the video service and consuming the network bandwidth so that I can show a message to the user - "Please close skype or gtalk and then proceed with the video call".
In short: how to identify a Skype-like service which is holding the webcam, via the web application, and alert the user to close that app first. | Way to identify tcp/udp/https service holding the webcam and port | 0 | 0 | 1 | 48
45,672,853 | 2017-08-14T10:48:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,pygame | 45,672,890 | 1 | true | 0 | 1 | You wouldn't. Pygame isn't a web based technology for games. If that's one of your main platforms, look into HTML5 based frameworks instead (or a platform that can export its runtime and content to HTML5). | 1 | 0 | 0 | I have a Python 2.7 program with Pygame 2.7 which I wanted to embed into a website. How would you do this? | How do you embed a Python 2.7 game requiring pygame into a website? | 1.2 | 0 | 0 | 75 |
45,675,970 | 2017-08-14T13:44:00.000 | 2 | 0 | 1 | 0 | python | 45,676,160 | 2 | false | 0 | 0 | run virtualenv -p python3 envname or pip install --upgrade virtualenv | 1 | 1 | 0 | I have already created a virtual env, in which Python 3.4 is already installed. Is there a way I can install Python 3.5 in this env? I already tried pip install python3.5, and I get: no distributions found that satisfy the requirement | installing python 3.5 in virtual env | 0.197375 | 0 | 0 | 1,938
45,676,247 | 2017-08-14T13:57:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,pyaudio | 45,676,889 | 3 | true | 0 | 0 | Check in the documentation of pyaudio if it is compatible with your python version
Some modules which are not compatible may be installed without issues, yet still won't work when trying to access them | 2 | 1 | 0 | I ran pip install pyaudio in my terminal and got this error:
Command "/home/oliver/anaconda3/bin/python -u -c "import setuptools,
tokenize;file='/tmp/pip-build-ub9alt7s/pyaudio/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, file, 'exec'))" install
--record /tmp/pip-e9_md34a-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ub9alt7s/pyaudio/
So I ran sudo apt-install python-pyaudio python3-pyaudio
which seemed to work.
Then in jupyter:
import pyaudio
error:
ModuleNotFoundError: No module named 'pyaudio'
Can anyone help me work out this problem? I am not familiar with Ubuntu and its commands, paths, etc. as I've only been using it a few months.
If you need more information, let me know what, and how. Thanks | ModuleNotFoundError: No module named 'pyaudio' | 1.2 | 0 | 0 | 5,269 |
45,676,247 | 2017-08-14T13:57:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,pyaudio | 59,344,854 | 3 | false | 0 | 0 | If you are using Windows then run these commands in the terminal:
pip install pipwin
pipwin install pyaudio | 2 | 1 | 0 | I ran pip install pyaudio in my terminal and got this error:
Command "/home/oliver/anaconda3/bin/python -u -c "import setuptools,
tokenize;file='/tmp/pip-build-ub9alt7s/pyaudio/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, file, 'exec'))" install
--record /tmp/pip-e9_md34a-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ub9alt7s/pyaudio/
So I ran sudo apt-install python-pyaudio python3-pyaudio
which seemed to work.
Then in jupyter:
import pyaudio
error:
ModuleNotFoundError: No module named 'pyaudio'
Can anyone help me work out this problem? I am not familiar with Ubuntu and its commands, paths, etc. as I've only been using it a few months.
If you need more information, let me know what, and how. Thanks | ModuleNotFoundError: No module named 'pyaudio' | 0 | 0 | 0 | 5,269 |
45,676,676 | 2017-08-14T14:20:00.000 | 0 | 0 | 1 | 1 | python | 45,677,292 | 2 | true | 0 | 0 | Metaphox nailed it:
I think you are looking for python -i ./file.py, where the -i flag
will enter interactive mode after executing the file. If you are
already in the console, then execfile. – Metaphox 2 mins ago
But I want to say thanks for the other suggestions as well, which go beyond the original question yet are useful! | 1 | 0 | 0 | Is there a way to execute a Python script, yet stay in the Python shell thereafter, so that variable values could be inspected and such? | Run Python console after script execution in same environment | 1.2 | 0 | 0 | 184
45,677,057 | 2017-08-14T14:39:00.000 | 0 | 0 | 0 | 1 | python,scapy | 45,693,049 | 1 | false | 0 | 0 | I was able to get this working by removing the bin(). This works:
test2 = int(binascii.hexlify('He'),16) | 1 | 0 | 0 | I'm trying to play with a security tool using scapy to spoof ASCII characters in a UDP checksum. I can do it, but only when I hardcode the bytes in Hex notation. But I can't convert the ASCII string word into binary notation. This works to send the bytes of "He" (first two chars of "Hello world"):
sr1(IP(dst=server)/UDP(dport=53, chksum=0x4865)/DNS(rd=1,qd=DNSQR(qname=query)),verbose=0)
But whenever I try to use a variable of test2 instead of 0x4865, the DNS packet is not transmitted over the network. This should create binary for this ASCII:
test2 = bin(int(binascii.hexlify('He'),16))
sr1(IP(dst=server)/UDP(dport=53, chksum=test2)/DNS(rd=1,qd=DNSQR(qname=query)),verbose=0)
When I print the test2 variable, it shows the correct binary notation representation.
How do I convert a string such as He so that it shows in the checksum notation accepted by scapy, i.e. 0x4865? | Spoofing bytes of a UDP checksum over network | 0 | 0 | 0 | 179
45,677,797 | 2017-08-14T15:18:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook,ipython-notebook | 48,306,206 | 1 | false | 0 | 0 | If you are just switching between the starting folder and subfolder, "%cd -"
will work.
Each time you run it, it directs you to the previous directory, which means if you run it twice you will stay in the current directory. | 1 | 0 | 0 | I have a Jupyter Notebook where I frequently use the %cd magic command. At the top of this notebook I set a bookmark (%bookmark 'base_dir') so that I can easily return to my starting directory (via %cd -b 'base_dir').
Is there an easier way to return to the starting directory (i.e. the directory where the running *.ipynb exists)? | Using the %cd magic command in Jupyter Notebooks | 0 | 0 | 0 | 1,287 |
45,678,559 | 2017-08-14T16:03:00.000 | 3 | 1 | 0 | 0 | python,pytest | 45,678,620 | 3 | false | 0 | 0 | To the best of my knowledge, py.test is still the business! | 1 | 2 | 0 | I want to use something like pytest for very basic testing using simple asserts. Is pytest the best choice for this or are there better recent alternatives? | Is there a recent pytest alternative with simple "assert"? | 0.197375 | 0 | 0 | 1,708 |
45,678,932 | 2017-08-14T16:26:00.000 | 1 | 0 | 0 | 1 | python,django,openerp | 45,716,439 | 2 | false | 1 | 0 | Odoo 10 source code does not contain the ./odoo.py file, it is probably from <=9.0, where the now odoo module was named openerp. You should've got the wrong source, or mixed up the two. | 2 | 2 | 0 | While running odoo at the first time it shows ImportError: No module named openerp
C:\Python27\python.exe E:/workspaces/odoo-10.0-20170812/odoo.py -c
E:\workspaces\odoo-10.0-20170812\odoo.conf Traceback (most recent call
last):
File "E:/workspaces/odoo-10.0-20170812/odoo.py", line 160, in
main()
File "E:/workspaces/odoo-10.0-20170812/odoo.py", line 156, in main
import openerp
ImportError: No module named openerp
Process finished with exit code 1 | ImportError: No module named openerp | 0.099668 | 0 | 0 | 2,238 |
45,678,932 | 2017-08-14T16:26:00.000 | 4 | 0 | 0 | 1 | python,django,openerp | 45,686,661 | 2 | true | 1 | 0 | import openerp won't work in Odoo 10 because openerp is replaced with odoo. Upto version 9 it was openerp but in 10 it changed.
So try:
import odoo instead of import openerp.
Odoo 10 source code does not contain an import openerp anywhere, maybe you have downloaded from the wrong source. | 2 | 2 | 0 | While running odoo at the first time it shows ImportError: No module named openerp
C:\Python27\python.exe E:/workspaces/odoo-10.0-20170812/odoo.py -c
E:\workspaces\odoo-10.0-20170812\odoo.conf Traceback (most recent call
last):
File "E:/workspaces/odoo-10.0-20170812/odoo.py", line 160, in
main()
File "E:/workspaces/odoo-10.0-20170812/odoo.py", line 156, in main
import openerp
ImportError: No module named openerp
Process finished with exit code 1 | ImportError: No module named openerp | 1.2 | 0 | 0 | 2,238 |
45,680,851 | 2017-08-14T18:34:00.000 | 2 | 0 | 0 | 0 | python,opencv,slice | 45,681,068 | 2 | true | 0 | 0 | The slice operator works with 3 params: start (inclusive), end (exclusive), and step.
If start is not specified then it defaults to the start of the array; similarly, end defaults to the last element. If step is not specified the default is 1.
This way, if you do [1, 2, 3, 4][0:2] it will return [1, 2]
If you do [1, 2, 3, 4][1:] it will return [2, 3, 4]
If you do [1, 2, 3, 4][1::2] it will return [2, 4]
For negative indexes, that means iterate backwards so [1, 2, 3, 4][::-1] says, from the starting element until the last element iterate the list backwards 1 element at a time, returning [4, 3, 2, 1].
As the question is not entirely clear I hope this clears up the functioning and make you get to an answer. | 1 | 1 | 1 | OpenCV uses BGR encoding, and img[...,::-1] swaps the red and blue axes of img for when an image needs to be in the more common RGB. I've been using it for several months now but still don't understand how it works. | How does [...,::-1] work in slice notation? | 1.2 | 0 | 0 | 207 |
45,681,319 | 2017-08-14T19:01:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,pylint | 45,707,944 | 1 | true | 0 | 0 | I think you are looking for useless-suppression, as in pylint --enable=useless-suppression. It is disabled by default. | 1 | 4 | 0 | With pylint, is it possible to tell it to output warnings on lines that explicitly disable a particular warning, but where the warning doesn't actually occur?
The idea here would be that sometimes I'd like to clean up the suppression lines I added, after refactoring the code.
Now the obvious method would be to remove all suppression lines and then add them back one by one. But since pylint knows about the code and what I ask of it using suppression lines, it'd be better equipped to point out unnecessary suppression lines.
Can pylint do this?
I tried to search for this feature, but came up empty-handed. So I probably picked the wrong search terms. | Can I have pylint warn of suppression lines that would be unnecessary? | 1.2 | 0 | 0 | 146 |
45,682,899 | 2017-08-14T20:54:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter | 45,700,714 | 1 | true | 0 | 1 | I used .grid(row = 0) on both panedWindows. Then I called lift on the window I wanted to raise up and it worked. | 1 | 0 | 0 | The title basically says it. Right now I have the two panedWindows attached to the root window. I would like the windows to either lift() or lower() one panedWindow on top of the other when a button is pressed rather than the panedWindows being stacked on top of each other in the same window.
I also understand there may be a better way of implementing this sort of menu feature. If you know a better way, that would be great too. | How to place a panedWindow behind another panedWindow tkinter | 1.2 | 0 | 0 | 44 |
45,683,889 | 2017-08-14T22:23:00.000 | 1 | 0 | 0 | 0 | python,quantitative-finance,quantlib | 45,896,146 | 1 | false | 0 | 0 | No, unfortunately there's no way around this. For this particular class, you'll have to recreate an instance when your settlement date changes.
Writing a version of the class that takes distances between dates can be done, but it's not currently available. If you write it, please consider creating a pull request for inclusion in the library. | 1 | 0 | 1 | I am using quantlib in python. In order to construct a DiscountCurve object, I need to pass a vector of Dates and corresponding discount factors. The problem is that, when I change the evaluation date to account for settlement days, the curve object is not shifted/adjusted properly and the NPV of the bond does not change as a function of evaluation date.
Is there any way around this? Do I have to construct a different DiscountCurve by shifting the dates whenever I change the number of settlement days?
Ideally, instead of passing a vector of dates, I should be able to pass a vector of distances between consecutive dates but the very first date should be allowed to be the evaluation date. | DiscountCurve is not aware of evaluation date in QuantLib python | 0.197375 | 0 | 0 | 667 |
45,686,418 | 2017-08-15T04:17:00.000 | 1 | 0 | 1 | 1 | python,virtualenv,robotframework | 45,724,271 | 1 | false | 0 | 0 | No, you cannot change the date in a virtualenv separate from the system time. A virtualenv is nothing more than environment variables and symbolic links to some folders, it is not an isolated system. | 1 | 1 | 0 | I am using python virtualenv to run robot framework in linux.
My doubt is about the system date for the virtualenv: is it possible to change the date of the virtualenv without changing the OS-level system date? | Python virtualenv date different from OS | 0.197375 | 0 | 0 | 147
45,687,374 | 2017-08-15T06:13:00.000 | 1 | 0 | 0 | 0 | python,cntk,resnet | 45,702,226 | 1 | true | 0 | 0 | (1) Yes.
(2) 29.37% means that 29.37% of the classifications are correct. Evaluation is on the test data, assuming you are reading both training and test data.
(3) Make sure that the input is in the same format; by that I mean, do you normalize or subtract the mean in your Python? If so, then you need to do the same in C#. Can you run the eval first using Python and see what result you get? | 1 | 3 | 1 | I am new to cntk and python. I have created a python program based on TrainResNet_CIFAR10.py to train 4736 (64x64x3) images and test 2180 images with 4 classes. After training 160 epochs, I got loss = 0.663 and metric = 29.37%. Finished evaluation metric = 18.94%. When I evaluate the trained model based on CNTKLibraryCSEvalExamples.cs to test 2180 images, almost all 2180 are classified as one class (second class). My questions are:
I assume loss is calculated from cross_entropy_with_softmax(z, label_var) and metric is using classification_error(z, label_var). Am I correct and how are they actually determined?
What is the meaning of metric = 29.37% and evaluation metric = 18.94%? Are they from train and test images, respectively?
What could cause totally wrong evaluation results?
Any help will be greatly appreciated. | how loss and metric are calculated in cntk | 1.2 | 0 | 0 | 401 |
45,688,063 | 2017-08-15T07:14:00.000 | 0 | 0 | 1 | 0 | python,setuptools | 45,688,448 | 1 | false | 0 | 0 | It happens because you imported os in a.py, and then you imported a.py as a module. You may either redesign the a into a class or overwrite os as a function in a.py. Hope this helps. | 1 | 0 | 0 | I got a problem when using setup.py. An instances maybe clear:
Suppose I have a.py in my source folder as a module. I import os and implement a function named 'b' in a.py. After running python setup.py install, I should be able to import a and call a.b. But I can also call a.os in my case.
Why does this happen? a.os should not appear, right? How to solve this issue? Looking for help! | The imported packages in source codes are as submodules of my package when using setup.py | 0 | 0 | 0 | 19
45,688,168 | 2017-08-15T07:22:00.000 | 0 | 0 | 0 | 0 | python,excel,pandas,openpyxl,xlsxwriter | 45,689,273 | 1 | false | 0 | 0 | i have been recently working with openpyxl. Generally if one cell has the same style(font/color), you can get the style from cell.font: cell.font.bmeans bold andcell.font.i means italic, cell.font.color contains color object.
But if the style is different within one cell, this cannot help; there is only some minor indication on cell.value | 1 | 0 | 1 | I'm working a lot with Excel xlsx files which I convert using Python 3 into Pandas dataframes, wrangle the data using Pandas and finally write the modified data into xlsx files again.
The files also contain text data which may be formatted. While most modifications (which I have done) have been pretty straightforward, I experience problems when it comes to partly formatted text within a single cell:
Example of cell content: "Medical device whith remote control and a Bluetooth module for communication"
The formatting in the example is bold and italic but may also be a color.
So, I have two questions:
Is there a way of preserving such formatting in xlsx files when importing the file into a Python environment?
Is there a way of creating/modifying such formatting using a specific python library?
So far I have been using Pandas, OpenPyxl, and XlsxWriter but have not succeeded yet. So I shall appreciate your help!
As pointed out below in a comment and the linked question OpenPyxl does not allow for this kind of formatting:
Any other ideas on how to tackle my task? | Modifying and creating xlsx files with Python, specifically formatting single words of a e.g. sentence in a cell | 0 | 1 | 0 | 190 |
45,692,841 | 2017-08-15T12:28:00.000 | 0 | 0 | 1 | 1 | python,python-2.6 | 52,295,075 | 1 | false | 0 | 0 | Nope. Python uses such libraries as sys, os, and etc to get access to system variables and os functional. It can't do it with just core functions. So, in any case, you need to import sys. | 1 | 7 | 0 | I want to find the command line arguments that my program was called with, i.e. sys.argv, but I want to do that before Python makes sys.argv available. This is because I'm running code in usercustomize.py which is imported by the site module, which is imported before Python populates sys.argv. (If you're curious, the reason I'm doing this is to start my debugger without changing my program code.)
Is there any way to find the command line arguments without sys.argv?
Also: The solution needs to work for Python 2.6 :( | Python: Find `sys.argv` before the `sys` module is loaded | 0 | 0 | 0 | 299 |
45,693,078 | 2017-08-15T12:42:00.000 | 0 | 1 | 1 | 0 | python,git,import,git-submodules | 45,695,609 | 1 | false | 0 | 0 | If you want package2 to be a top-level importable package its parent directory (submodule_project in your case) has to be in sys.path. There are many way to do it: sys.path.append(), sys.path.insert(), PYTHONPATH environment variable.
Or may be you don't want to have the code as a submodule at all. It doesn't make sense to have a submodule if the code in the submodule uses absolute import instead of relative (from ..package2 import code2). May be the package should be installed in site-packages (global or in a virtual environment) but not attached to the project as a submodule. | 1 | 0 | 0 | I have added a git submodule in my project. Now all the imports in that submodule are broken because I have to use the full path of import.
For example, if the structure is like this:
Myproject:
- submodule_project:
-- package1:
--- code1.py
-- package2:
--- code2.py
Now, in code1.py there is from package2 import code2. It tells me that package2 is unresolved reference. It is only resolved if I change it to from submodule_project.package2 import code2.
I don't want this because I don't want to change anything in the submodule. I just added it to use some of its packages in my project and to get regularly updated whenever its developers update it. | python importing level 2 packages | 0 | 0 | 0 | 52 |
45,697,068 | 2017-08-15T16:20:00.000 | 3 | 1 | 0 | 0 | python,robotframework | 45,698,543 | 1 | true | 0 | 0 | There's nothing that is officially supported. Though, a solution that might work for you is to import sys, and then scan sys.argv for the --dryrun option. This won't work if you have the dry run argument inside an argument file.
Another simple solution is for you to define a variable when you specify the dry run flag (eg: robot --dryrun --variable DRYRUN:True), and then your logic can use that variable. | 1 | 4 | 0 | I have a listner which updates test result to test management tool at end_test. The problem is when run in dryrun mode it update every thing as Passed which is False result.
Is there a way I can access ROBOT_OPTIONS in my listener, because it will have all the command line options? OR is there an alternative way of checking if -dryrun is enabled in my listener library | In my listener how to check if dryrun flag is set or not | 1.2 | 0 | 0 | 464
45,697,524 | 2017-08-15T16:47:00.000 | 0 | 1 | 0 | 0 | php,python,api,raspberry-pi | 45,698,143 | 1 | false | 0 | 0 | There are two simple ways to achieve this. You haven't described what sort of actions you are processing so the following is quite generic.
Polling
Have a master that all of the workers (pi) connect to and poll to get any work. The workers can do the work and send data back to the master.
Event driven
Run an API on each pi that your master can call for each event. This is going to be the most performant but will probably require more work. | 1 | 0 | 0 | I have a Raspberry Pi running a Python script posting data to a database on my server. So I would like to do the inverse of this. I need this raspberry pi to do some actions when they are called from the website.
What would be the best approach?
Maybe open some port and start listening for events there? | Communicate to a local device from website | 0 | 0 | 1 | 33 |
45,699,832 | 2017-08-15T19:09:00.000 | 0 | 0 | 0 | 0 | python | 45,699,897 | 2 | false | 1 | 0 | Yes; they are both WSGI apps. Just configure a router container to hold them, and map URL prefixes to one or the other. How you do this depends on your WSGI server (some have native support).
Alternatively, have Flask host the Pyramid WSGI app; any Flask route that returns an object that is not a string or a Response object, will be treated as a WSGI app and have the correct data passed in. | 1 | 1 | 0 | I'm building a Pyramid application, but I would like to have a light-weight REST API built in to it. I've built such an API already with Flask, so is it possible to build my application in both Flask and Pyramid simultaneously? | Is it possible to incorporate Flask with Pyramid? | 0 | 0 | 0 | 37 |
45,700,003 | 2017-08-15T19:20:00.000 | 0 | 0 | 0 | 1 | python,django,python-2.7,python-3.x | 45,700,170 | 2 | false | 1 | 0 | Yes. Python 3's binaries are installed with a suffix of "3", so python will launch a Python 2 interpreter and you need to run python3 to specifically use Python 3. | 1 | 2 | 0 | I have been working through the initial tutorial and ran into a load of issues with my anaconda install using python 2.7. In the end it wouldn't launch the server.
Anyway, I decided to change up on my machine to python3. That said, I am now getting strange results which are:
If I use the terminal command $python -m django --version I get the following:
"../Contents/MacOS/Python: No module named django"
If I change to "$python3 -m django --version" terminal gives me back: "1.11.4"
Now, when I am in the tutorial and starting again from the beginning I do the following: "$django-admin startproject mysite"
This seemed to work.
However, when I tried: "$python manage.py runserver" I get the following:
Traceback (most recent call last):
File "manage.py", line 17, in
"Couldn't import Django. Are you sure it's installed and "
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
If I change to include 3, so "$python3 manage.py runserver" all is well.
My question is do I need to always use python3 in every command now? I does not say that in the tutorial.
My Mac OSx has a native install of 2.7 which I believe is required by my machine for other apps dependency.
Any help would be really appreciated! I am sure given I am new to python I am being a complete moron! | "$python manage.py runserver" not working. Only "python3 manage.py runserver" | 0 | 0 | 0 | 10,200 |
45,702,192 | 2017-08-15T21:59:00.000 | 0 | 0 | 1 | 0 | python,mysql,python-multiprocessing,python-multithreading | 45,702,416 | 2 | false | 0 | 0 | For one I wrote in C#, I decided the best work partitioning was each "source" having a thread for extraction, one for each transform "type", and one to load the transformed data to each target.
In my case, I found multiple threads per source just ended up saturating the source server too much; it became less responsive overall (to even non-ETL queries) and the extractions didn't really finish any faster since they ended up competing with each other on the source. Since retrieving the remote extract was more time consuming than the local (in memory) transform, I was able to pipeline the extract results from all sources through one transformer thread/queue (per transform "type"). Similarly, I only had a single target to load the data to, so having multiple threads there would have just monopolized the target.
(Some details omitted/simplified for brevity, and due to poor memory.)
...but I'd think we'd need more details about what your ETL process does. | 1 | 0 | 0 | I've been pouring over everywhere I can to find an answer to this, but can't seem to find anything:
I've got a batch update to a MySQL database that happens every few minutes, with Python handling the ETL work (I'm pulling data from web API's into the MySQL system).
I'm trying to get a sense of what kinds of potential impact (be it positive or negative) I'd see by using either multithreading or multiprocessing to do multiple connections & inserts of the data simultaneously. Each worker (be it thread or process) would be updating a different table from any other worker.
At the moment I'm only updating a half-dozen tables with a few thousand records each, but this needs to be scalable to dozens of tables and hundreds of thousands of records each.
Every other resource I can find out there addresses doing multithreading/processing to the same table, not a distinct table per worker. I get the impression I would definitely want to use multithreading/processing, but it seems everyone's addressing the one-table use case.
Thoughts? | Python Multithreading/processing gains for inserts to different tables in MySQL? | 0 | 1 | 0 | 325 |
45,702,870 | 2017-08-15T23:14:00.000 | 0 | 0 | 0 | 0 | python-2.7,numpy,matplotlib,64-bit,py2exe | 45,702,871 | 1 | false | 0 | 0 | I tracked the error to the numpy library. Numpy calls numpy.linalg._umath_linalg.inv() and the program abruptly exits with no error message, warning, or traceback.
_umath_linalg is a .pyd file and I discovered that this particular .pyd file doesn't like being called from library.zip, which is where py2exe puts it when using bundle option 2 or 1.
The solution is to exclude numpy in the py2exe setup script and copy the entire package folder into the distribution directory and add that directory to the system path at the top of the main python script. | 1 | 0 | 1 | (I have already resolved this issue but it cost me two weeks of my time and my employer a couple of grand, so I'm sharing it here to save some poor soul.)
My company is converting our application from 32-bit to 64-bit. We create an executable using py2exe, using the bundle=2 option. The executable started crashing as soon as it tried to render a matplotlib plot.
Versions:
python==2.7.13,
matplotlib==2.0.0,
numpy==1.13.1,
py2exe==0.6.10a1 | Bundled executable crashes without warning when rendering plots | 0 | 0 | 0 | 69 |
45,704,177 | 2017-08-16T02:25:00.000 | 2 | 0 | 0 | 0 | python,django,sqlite | 45,706,624 | 1 | false | 1 | 0 | sqlite3 is part of the standard library. You don't have to install it.
If it's giving you an error, you probably need to install your distribution's python-dev packages, e.g. with sudo apt-get install python-dev. | 1 | 3 | 0 | I'm trying to create an app using the command python3 manage.py startapp webapp but I'm getting an error that says:
django.core.exceptions.ImproperlyConfigured: Error loading either
pysqlite2 or sqlite3 modules (tried in that order): No module named
'_sqlite3'
So I tried installing sqlite3 using pip install sqlite3 but I got this error:
Using cached sqlite3-99.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-build-dbz_f1ia/sqlite3/setup.py", line 2, in
raise RuntimeError("Package 'sqlite3' must not be downloaded from pypi")
RuntimeError: Package 'sqlite3' must not be downloaded from pypi
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-dbz_f1ia/sqlite3/
I tried running this command: sudo apt install sqlite3 but it says sudo is not a valid command, even apt isn't for some reason. I'm running Python3.6.2. I installed Python on my Godaddy hosting and i'm using SSH to install everything. I installed Python and setup a virtualenv. Afterwards, I installed Django and created a Django project. How can I fix these errors to successfully create a Django app? | Downloading sqlite3 in virtualenv | 0.379949 | 1 | 0 | 5,897 |
45,705,189 | 2017-08-16T04:34:00.000 | 0 | 0 | 0 | 0 | python,django,react-native,virtualenv,react-native-android | 45,727,148 | 1 | false | 1 | 0 | Previously when running global python I was doing manage.py runserver 0.0.0.0:8000. Turns out I had to use my actual machine's IP address with the virtual env. Not sure why it works differently in that way, but it does. | 1 | 0 | 0 | I had been building my React Native app for a long time using a global python environment, which I know is bad form, so I decided to create a virtualenv.
But now when I activate the virtualenv and run the server, none of my api endpoints are reachable (I'm using django btw). Instead, the network call doesn't return for a very long time and then comes back with the error "Network Request Failed". But if I deactivate the virtualenv and go back to global python and run the server, everything works fine.
I've seen this "Network Request Failed" error before during times where I've forgotten to turn the server on, so I know it indicates that the server is unreachable.
Here's one last weird aspect though. If I activate the virtualenv and turn on the server and then type the url to an endpoint into the browser, it successfully reaches the browsable django api for the endpoint. So it seems to be set up fine except that the app for whatever reason can't communicate with it. Very bizarre. | React Native app fails to reach endpoints on python virtualenv, but succeeds with global python env | 0 | 0 | 0 | 75 |
45,710,181 | 2017-08-16T09:43:00.000 | -1 | 0 | 1 | 0 | python-3.x | 45,710,206 | 3 | false | 0 | 0 | You can add a counter of instances in the constructor for example. | 1 | 5 | 0 | In one of my classes, I am printing data from another class that is yet to be initialized.I only want to print that data once the class has been initialized.Is there any way check if the class has been instantiated? | How can I check if a class has been instantiated in Python ? | -0.066568 | 0 | 0 | 5,931 |
45,711,940 | 2017-08-16T11:09:00.000 | 1 | 0 | 1 | 0 | python-3.x | 45,712,201 | 2 | true | 0 | 0 | Except the weirdness of the question :)
What you did is a correct way, but each time you call a new program your stack gets bigger, and after a while your stack is full and you get a stack overflow (no, you don't get this site :p), just the error this site is named after, as you encountered.
If you really want to keep their system busy I would try to do something heavy inside one program. | 1 | 0 | 0 | I was wondering how to make program1 run program2 and program2 run program1 and so on. I have already tried using os.system() on each program to run the other, but a really long line of errors comes up and says maximum recursion depth reached
Thanks | Can I make an endless loop of python programs? | 1.2 | 0 | 0 | 36 |
45,712,240 | 2017-08-16T11:23:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda | 54,216,242 | 2 | false | 0 | 0 | select "Upload a .ZIP file" option from "Code entry type". then you need to change the Handler . for python code you will use "main/lambda_handler". | 2 | 3 | 0 | How to upload a zip folder in AWS lambda functions (python) without storing in S3. I need to upload the python code written by me normally including all python libraries in order to run the program. | How to upload a zip folder in AWS lambda (python) without storing in S3 | 0 | 0 | 0 | 3,797 |
45,712,240 | 2017-08-16T11:23:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda | 45,712,595 | 2 | true | 0 | 0 | Do you want to do this from AWS Console?
If yes then you simply have to select "Upload a .ZIP file" option in the "Code entry type" dropdown while creating the Lambda
That dropdown has 3 options as follows
Edit code inline
Upload a .ZIP file
Upload a file from Amazon S3 | 2 | 3 | 0 | How to upload a zip folder in AWS lambda functions (python) without storing in S3. I need to upload the python code written by me normally including all python libraries in order to run the program. | How to upload a zip folder in AWS lambda (python) without storing in S3 | 1.2 | 0 | 0 | 3,797 |
45,717,860 | 2017-08-16T15:41:00.000 | 0 | 0 | 1 | 0 | python,dataframe,logical-operators,interpretation | 45,743,561 | 1 | false | 0 | 0 | After some work I was able to do this with regular expressions and eval().
Using regex, I extracted both the 'template' and the 'criteria'. The 'template' would look something like 1 & 2 | (3 & 4 & (5 or 6)), and the associated 'criteria' would be something like ['criteria1', 'criteria2', ..., 'criteria6']. Then I could manipulate the criteria however I wanted, then substitute the manipulated values back into the template string. Finally, I could just run eval(template) or whatever name of the final string to be executed. | 1 | 0 | 1 | I'm not sure where exactly to start with this one. I know I can do bit-wise logical combinations of masks like so: (mask1 & mask2) | mask3 | (mask4 & (mask5 | mask6))
Now, if I had a user input a string like: '(criteria1 & criteria2) | criteria3 | (criteria4 & (criteria5 | criteria6))', but needed to interpret each criteria through a function to determine and return a mask, how can I retain the parentheses and logic and then combine the masks? | interpreting a string to do and/or combinations of dataframe masks | 0 | 0 | 0 | 25 |
45,723,246 | 2017-08-16T21:25:00.000 | 1 | 1 | 0 | 0 | python,file,ssh,sftp,paramiko | 45,726,406 | 1 | true | 0 | 0 | Judging from the error message "Unknown type", this error is not caused by initialization of the file object in your sftp session, but rather something afterwards has caused the error. It'll be clear if you can post the source code. | 1 | 0 | 0 | I'm using paramiko in Python in order to write files to linux servers. I seem to have errors when writing to a path which includes folders with Hebrew names.
After initializing a ssh_client and sftp client on that session, i'm using chmod to get to the folder I want to write to.
Then,
I'm using the sftp.file method to get a file object to write some content to.
It works when I have paths in English.
When I have a path that contains Hebrew the method fails..
It fails at the point I'm initializing the file in the sftp session.
The error is
Unknown type for u'/root/\u05e9/filename.json' type
Thanks! | Unknown type error trying to use the file method on sftp client from Paramiko in Python | 1.2 | 0 | 0 | 278 |
45,723,500 | 2017-08-16T21:44:00.000 | 2 | 1 | 0 | 0 | python,selenium,scripting,automation,appium | 45,723,536 | 1 | true | 0 | 0 | The longer a script you write is, the more chances you have for making a mistake while you are writing it.
That said, it is not the case that
tests are failing randomly just because my script is long
Your tests are likely failing because there is an error somewhere, either in the tests or the logic in your script. | 1 | 0 | 0 | What I mean is, if my script is 1000 lines of code long, is it more likely to fail then if it was only a hundred lines long, with everything else being equal?
It seems like my tests are failing randomly just because my script is long | Does length of an automation script directly determine its likelihood of failing? | 1.2 | 0 | 0 | 33 |
45,724,575 | 2017-08-16T23:45:00.000 | 0 | 0 | 0 | 0 | excel,python-2.7,openpyxl | 45,729,433 | 1 | true | 0 | 0 | openpyxl 2.5 includes read support for charts | 1 | 0 | 0 | Working with Python 2.7 and I'd like to add new sheets to a current Excel workbook indexed to a specific position. I know Openpyxl's create_sheet command will allow me to specify an index for a new sheet within an existing workbook, but there's a catch: Openpyxl will delete charts from an existing Excel workbook if opened & saved. And my workbook has charts that I don't wish to be deleted.
Is there another way I can open this workbook, create a few blank sheets that are located precisely after the existing first sheet, all without deleting any of the workbook's charts? | Add indexed sheets to Excel workbook w/out Openpyxl? | 1.2 | 1 | 0 | 70
45,724,955 | 2017-08-17T00:39:00.000 | 27 | 0 | 0 | 0 | python,opencv | 45,725,015 | 2 | false | 0 | 0 | It is just a matter of ratios:
On the x-axis, you have resized by a ratio Rx = newX/oldX, and by a ratio Ry = newY/oldY on the y-axis.
Therefore, your new coordinates for point (x,y) are (Rx * x, Ry * y). | 1 | 13 | 0 | I've resized my image to newX, newY. Prior to resizing I had a point (x,y). Now that I've resized my image I'd like to know where the point is on the new image. Sounds simple but I'm bad at math. Any ideas? | Find new coordinates of a point after image resize | 1 | 0 | 0 | 8,906 |
45,725,440 | 2017-08-17T01:51:00.000 | 1 | 0 | 0 | 0 | python,opencv,matrix,camera,3d-reconstruction | 45,727,419 | 1 | true | 0 | 0 | You already have a code for camera calibration and printing a camera matrix in your OpenCV installation. Go to this path if you are on windows -
C:\opencv\sources\samples\python
There you have a file called calibrate | 1 | 1 | 1 | I want to achieve a 3D-reconstruction algorithm with sfm,
But how should I set the parameters of the Camera Matrix?
I have two cameras, and both know their focal length.
And how about Rotation Matrix and Translation Matrix from world view?
i use python | How can i obtain Camera Matrix in 3Dreconstruction? | 1.2 | 0 | 0 | 574 |
45,726,623 | 2017-08-17T04:32:00.000 | 0 | 0 | 1 | 1 | python | 45,726,701 | 2 | false | 0 | 0 | Brew installs packages into /usr/local/Cellar and then links them to /usr/local/bin (i.e. /usr/local/bin/python3). In my case, I just make sure to have /usr/local/bin in my PATH prior to /usr/bin.
export PATH=/usr/local/bin:$PATH
By using brew, your new packages will be installed to:
/usr/local/Cellar/python
or
/usr/local/Cellar/python3
Package install order shouldn't matter. | 1 | 1 | 0 | As we all know; Apple ship OSX with Python, but it locks it away.
This force me and anyone else that use python, to install another version and start the painful process of installing with pip with 100 tricks and cheats.
Now, I would like to understand how to do this right; and sorry but I can't go with the route of the virtualenv, due to the fact that I run this for a build server running Jenkins, and I have no idea how to set that up correctly.
Could you please clarify for me these?
How do you tell OSX to run the python from brew, instead than system one?
Where is the official python living, and where are the packages installed, when I run pip install with and without the -U and/or the --user option?
In which order should I install a bunch of packages starting from scratch.on a fresh OSX machine,so I can set it up reliably every time?
Mostly I use opencv, scikit-image, numpy, scipy and pillow. These are giving me so many issues and I can't get a reliable setup so that Jenkins is happy to run the python code, using these libraries. | Questions about double install of Python on OSX | 0 | 0 | 0 | 37 |
45,726,834 | 2017-08-17T04:54:00.000 | 0 | 0 | 0 | 0 | python,django,search | 45,727,210 | 1 | false | 1 | 0 | The simplest way to use full text search is to search a single term against a single column in the database.
example: Product.objects.filter(description_text__search='lorem')
Searching against a single field is great but rather limiting. To query against both fields, use a SearchVector
Same way you can use SearchQuery too. | 1 | 0 | 0 | I'm working on a online store using Django, the question is simple, for a simple model named Product with "name" and "description" fields, should I try a full text search using PostgreSQL or a simple query using "icontains" field lookup? | What kind of search should be used | 0 | 0 | 0 | 58 |
45,728,111 | 2017-08-17T06:35:00.000 | 3 | 0 | 0 | 0 | python,mysql,mysql-cluster | 45,729,005 | 1 | true | 0 | 0 | You have to call SQL nodes from your application. Use comma separated ip addresses for this. In your code use
DB_HOST = "ip4, ip5" | 1 | 0 | 0 | I have configured the server to use MySQL Cluster. The Cluster architecture is as follows:
One Cluster Manager(ip1)
Two Data Nodes (ip2,ip3)
Two SQL Nodes(ip4,ip5)
My Question: Which node should I use to connect from Python application? | Connecting to mysql cluster from python application | 1.2 | 1 | 0 | 915 |
45,729,077 | 2017-08-17T07:31:00.000 | 0 | 0 | 0 | 0 | python,windows,kivy,kivy-language | 45,732,412 | 2 | false | 0 | 1 | You can redefine stop in your app-class and you only call super(MyApp, self).stop() if you want to quit. However you need an overlayed widget wich overloads on_close, that you can quit with [escape] in which you write super(MyApp, app).stop(). | 1 | 1 | 0 | Does Kivy offer any of the functions for disabling windows hotkey (ALT+F4) to your app?
Or can I do this through python 3.5+ ? | How to disable windows hotkey ALT+F4 in Kivy app? | 0 | 0 | 0 | 303 |
45,729,494 | 2017-08-17T07:54:00.000 | 1 | 0 | 1 | 0 | python,tensorflow | 45,788,066 | 2 | true | 0 | 0 | I managed to solve the problem. The tip from @amo-ej1 to run in a regular file was a step in the correct direction. This uncovered that the tensor flow process was killing itself off with a SIGKILL and returning an error code of 137.
I tried Tensorflow Debugger tfdbg though this did not provide any further details as the problem was the graph did not initialize. I started to think the graph structure was incorrect, so I dumped out the graph structure using:
tf.summary.FileWriter('./logs/traing_graph', graph)
I then used up Tensorboard to inspect the resultant summary graph structure data dumped out the the directory and found that the tensor dimensions of the Fully Connected layer was wrong , having a width of 15million !!?! (wrong)
It turned out that one of the configurable parameters of the graph was incorrect. It was picking the dimension of the layer 2 tensor shape incorrectly from an incorrect addressing the previous tf.shape type property and it exploded the dimensions of the graph.
There were no OOM error messages in /var/log/system.log so I am unsure why the graph initialisation caused the python tensorflow script process to die.
I fixed the dimensions of the graph and graph initialization worked just fine!
My top tip is visualise your graph with Tensorboard before initialisation and training to do a quick check the resultant graph structure you coded it what you expected it to be. You probably will save yourself a lot of time! :-) | 1 | 2 | 1 | I'm after advice on how to debug what on Tensorflow is struggling with when it hangs.
I have a multi layer CNN which hangs upon global_variables_initializer() is run in the session. I am getting no errors or messages on the console output.
Is there an intelligent way of debugging what Tensorflow is struggling with when it hangs instead of repeatedly commenting out lines of code that makes the graph, and re-running to see where it hangs. Would TensorFlow debugger (tfdbg) help? What options do I have?
Ideally it would be great to just to break current execution and look at some stack or similar to see where the execution is hanging during the init.
I'm currently running Tensorflow 0.12.1 with Python 3 inside a Jupiter notebook. | Debugging Tensorflow hang on global variables initialisation | 1.2 | 0 | 0 | 944 |
45,731,787 | 2017-08-17T09:47:00.000 | 1 | 0 | 0 | 0 | python-3.x,tensorflow,mnist | 45,747,350 | 1 | false | 0 | 0 | I have solved this problem.
I changed line 204 and line 210 of mnist_with_summaries.py to the local directories, and I created some folders.
OR, don't change the code, and I created some folders in the local disk where is the running environment according to the code.
line 204: create /tmp/tensorflow/mnist/input_data
line 210: create /tmp/tensorflow/mnist/logs/mnist_with_summaries | 1 | 1 | 1 | When running this example:" python mnist_with_summaries.py ", it has
occurred the following error:
detailed errors:
Traceback (most recent call last):
File "mnist_with_summaries.py", line 214, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py"
, line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "mnist_with_summaries.py", line 186, in main
tf.gfile.MakeDirs(FLAGS.log_dir)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.p
y", line 367, in recursive_create_dir
pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(dirname), status)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\contextlib.py", line 89, in exit
next(self.gen)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\framework\errors
_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a directory: /tmp\tensorflow
Running environment:windows7+Anaconda3+python3.6+tensorflow1.3.0
Why?Any idea on how to resolve this problem?Thank you! | When running " python mnist_with_summaries.py ", it has occurred the error | 0.197375 | 0 | 0 | 249 |
45,731,858 | 2017-08-17T09:51:00.000 | 7 | 0 | 0 | 1 | python-2.7,apache-airflow | 45,876,209 | 2 | false | 0 | 0 | Without the code, it's kind of hard to help you. However, this means that you have a loop in your DAG. Generally, this error happens when one of your tasks has a downstream task whose own downstream chain includes it again (A calls B calls C calls D calls A again, for example).
That's not permitted by Airflow (and DAGs in general). | 2 | 10 | 0 | I am running the Airflow pipeline and the code looks good, but I'm still getting airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task:
Can you please help to resolve this issue? | airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task | 1 | 0 | 0 | 10,290
45,731,858 | 2017-08-17T09:51:00.000 | 24 | 0 | 0 | 1 | python-2.7,apache-airflow | 54,864,435 | 2 | false | 0 | 0 | This can happen due to duplicate task_id's in multiple tasks. | 2 | 10 | 0 | I am running the Airflow pipeline and the code looks good, but I'm still getting airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task:
Can you please help to resolve this issue? | airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task | 1 | 0 | 0 | 10,290
45,732,437 | 2017-08-17T10:17:00.000 | 1 | 1 | 0 | 1 | python,linux,bash,ubuntu,logging | 45,801,164 | 1 | true | 0 | 0 | Unless the script has an internal logging mechanism like e.g. using logging as mentioned in the comments, the output will have been written to /dev/stdout or /dev/stderr respectively, in which case, if you did not log the respective data streams to a file for persistent storage by using e.g. tee, your output is lost. | 1 | 0 | 0 | I have a bash script, in Python that runs on a Ubuntu server. Today, I mistakenly closed the Putty window after monitoring that the script ran correctly.
There is some useful information that was printed while the script was running, and I would like to recover it.
Is there a directory, like /var/log/syslog for system logs, for Python logs?
This script takes 24 hours to run, on a very costly AWS EC2 instance, and running it again is not an option.
Yes, I should have printed the useful information to a log file myself, from the python script, but no, I did not do that. | Recover previous Python output to terminal | 1.2 | 0 | 0 | 389
45,733,205 | 2017-08-17T10:53:00.000 | 1 | 0 | 0 | 0 | python,django,rest,templates,view | 45,733,426 | 2 | true | 1 | 0 | The kwargs are passed directly onto the template already. You can just to {{ id }}. | 1 | 0 | 0 | Say I have this URL
url(r'users/(?P<id>\d+)/edit$',TemplateView.as_view(template_name='edit.html'))
Is there a way to send the <id> to the template within that url pattern definition? | How to pass Regex named group variable from URL to TemplateView as argument | 1.2 | 0 | 0 | 42 |
45,734,960 | 2017-08-17T12:19:00.000 | 0 | 1 | 0 | 0 | python,mail-server,aiosmtpd | 45,750,954 | 3 | false | 0 | 0 | You may consider the following features:
Message threading
Support for Delivery status
Support for POP and IMAP protocols
Supports for protocols such as RFC 2821 SMTP and RFC 2033 LMTP email message transport
Support Multiple message tagging
Support for PGP/MIME (RFC2015)
Support list-reply
Lets each user manage their own mail lists Supports
Control of message headers during composition
Support for address groups
Prevention of mailing list loops
Junk mail control | 1 | 4 | 0 | I want to write my own small mailserver application in python with aiosmtpd
a) for educational purpose to better understand mailservers
b) to realize my own features
So my question is, what is missing (besides aiosmtpd) for a Mail-Transfer-Agent that can send and receive emails to/from other full MTAs (gmail.com, yahoo.com ...)?
I'm guessing:
1.) Of course a domain and static ip
2.) Valid certificate for this domain
...should be doable with Lets Encrypt
3.) Encryption
...should be doable with SSL/Context/Starttls... with aiosmtpd itself
4.) Resolving MX DNS entries for outgoing emails!?
...should be doable with python library dnspython
5.) Error handling for SMTP communication errors, error replies from other MTAs, bouncing!?
6.) Queue for handling inbound and pending outbund emails!?
Are there any other "essential" features missing?
Of course i know, there are a lot more "advanced" features for a mailserver like spam checking, malware checking, certificate validation, blacklisting, rules, mailboxes and more...
Thanks for all hints!
EDIT:
Let me clarify what is in my mind:
I want to write a mailserver for a club. Its main purpose will be a mailing-list-server. There will be different lists for different groups of the club.
Lets say my domain is myclub.org then there will be for example [email protected], [email protected] and so on.
Only members will be allowed to use this mailserver and only the members will receive emails from this mailserver. No one else will be allowed to send emails to this mailserver nor will receive emails from it. The members email-addresses and their group(s) are stored in a database.
In the future i want to integrate some other useful features, for example:
Auto-reminders
A chatbot, where members can control services and request informations by email
What i don't need:
User Mailboxes
POP/IMAP access
Webinterface
Open relay issue:
I want to reject any [FROM] email address that is not in the members database during SMTP negotiation.
I want to check the sending mailservers for a valid certificate.
The number of emails/member/day will be limited.
I'm not sure, if i really need spam detection for the incoming emails?
Losing emails issue:
I think i will need a "lightweight" retry mechanism. However if an outgoing email can't be delivered after some retries, it will be dropped and only the administrator will be notified, not the sender. The members should not be bothered by email delivery issues. Is there any Python Library that can generate RFC3464 compliant error reply emails?
Reboot issue:
I'm not sure if i really need persistent storage for emails, that are not yet sent? In my use case, all the outgoing emails should be delivered usually within a few seconds (if no delivery problem occurs). Before a (planned) reboot i can check for an empty send queue. | Python aiosmtpd - what is missing for an Mail-Transfer-Agent (MTA)? | 0 | 0 | 0 | 2,053 |
45,735,688 | 2017-08-17T12:53:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,graphics,real-time | 45,740,933 | 1 | false | 0 | 1 | Write a multithreaded script that runs both your computation script and a script for the images (where each image can act as one frame of the video). Keep closing the image window each time the next image is computed.
This solution is makeshift but will work | 1 | 0 | 1 | I am trying to visualize data computed by my program (neural network), by showing images while the program is working, creating a video that shows the progress in real time.
It should be pretty basic, but I'm new to Python, and I'm struggling to find the good framework to do this. It seems that with most libraries (Tkinter, graphics, matplotlib, etc), displaying a video stops the computation, and the user has to interact with the GUI (like close the window) to go back to the program. For now I use PIL.show() to display a single image without stopping the program, but it does not seem suited to video, because I cannot replace the displayed image by another, as the window is not handled by the program anymore.
I'm using Linux Mint and Python 2.7.6
So what is the simplest way to do that ? Is there a library that is well-suited ? Or where can I find an example code doing that ? | Real-time animation in Python | 0 | 0 | 0 | 436 |
45,735,733 | 2017-08-17T12:55:00.000 | 0 | 0 | 0 | 0 | python,web-scraping,beautifulsoup,bs4 | 45,735,827 | 3 | false | 1 | 0 | You will have to use Selenium's click option, which lets you find the 'read more' tag or class and click it; as soon as it appears again you will have to click it again, and when it no longer shows up you can scrape the content you require. | 1 | 0 | 0 | I am trying to scrape reviews from a website and am not able to scrape reviews having a 'read more' option.
I am only able to get data till read more.
I am using BeautifulSoup.
Any help is appreciated. | How to Scrape reviews with read more from Webpages using BeautifulSoup | 0 | 0 | 1 | 2,339 |
45,736,070 | 2017-08-17T13:12:00.000 | 0 | 0 | 0 | 1 | python,linux,cron,command-line-interface | 45,736,226 | 2 | false | 0 | 0 | You can redirect all prints to /dev/tty so they appear in the terminal (python exec.py &> /dev/tty). But cron is executed detached from all terminals. | 1 | 0 | 0 | Just wondering - as a fail safe backup, I'm setting up a python cronjob script from which I can print various things to the terminal.
I was wondering, once the cronjob has finished - am I able to take a terminal dump for the last output? Even if it errors out...
Probably going to be running on a Linux VPS - CentOS (not sure if that 100% matters). | Python print() Terminal dump after cronjob has finished | 0 | 0 | 0 | 27 |
45,736,874 | 2017-08-17T13:45:00.000 | 2 | 0 | 0 | 0 | python,pandas,indexing | 45,737,266 | 3 | false | 0 | 0 | For any python object, () invokes the __call__ method, whereas [] invokes the __getitem__ method (unless you are setting a value, in which case it invokes __setitem__). In other words () and [] invoke different methods, so why would you expect them to act the same? | 1 | 14 | 1 | I do not see any documentation on pandas explaining the parameter False passed into loc. Can anyone explain how () and [] differ in this case? | Why can you do df.loc(False)['value'] in pandas? | 0.132549 | 0 | 0 | 971 |
45,737,486 | 2017-08-17T14:11:00.000 | 1 | 0 | 1 | 0 | python,mongodb,python-3.x,pymongo,pymongo-3.x | 45,737,589 | 1 | true | 0 | 0 | You should be able to just insert into the collection in parallel without needing to do anything special. If you are updating documents then you might find there are issues with locking, and depending on the storage engine which your MongoDB is using there may be collection locking, but this should not affect how you write your python script. | 1 | 1 | 0 | I would like to know how to insert into same MongoDb collection from different python scripts running at the same time using pymongo
Any help or guidance would be very much appreciated, because I couldn't find any clear documentation about it in pymongo or MongoDB yet.
Thanks in advance | Writing in parallel to MongoDb collection from python | 1.2 | 1 | 0 | 754
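To make the answer above concrete, here is a minimal sketch of two independent scripts inserting into the same collection with pymongo; the connection URI, database and collection names are assumptions.
from pymongo import MongoClient

# Each script simply opens its own client; MongoDB handles concurrent inserts.
client = MongoClient("mongodb://localhost:27017")   # assumed URI
collection = client["mydb"]["events"]               # assumed db/collection names

def record_event(payload: dict) -> None:
    # insert_one is safe to call from many processes at the same time
    collection.insert_one(payload)

if __name__ == "__main__":
    record_event({"source": "script_a", "value": 42})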
45,738,082 | 2017-08-17T14:36:00.000 | 3 | 0 | 1 | 0 | python,anaconda,spyder | 45,741,041 | 1 | true | 0 | 0 | (Spyder developer here) No, there isn't and it's not possible to implement such functionality at the moment, sorry. | 1 | 1 | 0 | This would really help with debugging and just be a pretty useful feature. Not sure if I am just missing something or it is not an available feature. | Is there a way to refresh Spyder's Variable Explorer while a program is running? | 1.2 | 0 | 0 | 1,343 |
45,740,126 | 2017-08-17T16:08:00.000 | 1 | 0 | 0 | 0 | python,distributed-computing,mt4 | 45,751,233 | 5 | false | 0 | 0 | Several options:
exchange with files (write data from mt4 into a file for python, another folder in opposite direction with buy/sell instructions);
0MQ (or something like that) as a better option. | 1 | 2 | 0 | I'm using a MetaTrader4 Terminal and I'm experienced python developer.
Does anyone know, how can I connect MT4 and Python? I want to:
- connect to MT4
- read USD/EUR data
- make order (buy/sell)
Does anyone know some library, a page with instructions or a documentation or have at least idea how to do that?
I googled first 30 page but I didn't find anything useful. | How to control MT4 from python? | 0.039979 | 0 | 0 | 12,826 |
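Picking up the file-exchange option from the answer above: a rough, hypothetical sketch of the Python side that polls a prices file written by an MT4 Expert Advisor and writes order instructions back. The file names, locations, and CSV layout are assumptions, not part of any MT4 API.
import csv
import time
from pathlib import Path

PRICES = Path("C:/MT4/MQL4/Files/prices.csv")   # assumed: the EA appends "symbol,bid,ask" lines
ORDERS = Path("C:/MT4/MQL4/Files/orders.csv")   # assumed: the EA polls this file for "symbol,action"

def latest_price():
    with PRICES.open() as f:
        rows = list(csv.reader(f))
    return rows[-1] if rows else None

def send_order(symbol: str, action: str) -> None:
    with ORDERS.open("a", newline="") as f:
        csv.writer(f).writerow([symbol, action])

while True:
    row = latest_price()
    if row:
        symbol, bid, ask = row[0], float(row[1]), float(row[2])
        # your own strategy decides here, e.g. send_order(symbol, "BUY")
        pass
    time.sleep(1)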
45,741,991 | 2017-08-17T17:54:00.000 | 0 | 0 | 0 | 0 | python,html-parsing | 45,742,170 | 2 | false | 1 | 0 | To answer your question, it's highly unlikely you can get ONLY article content targeting <p></p> tags. You WILL get a lot of unnecessary content that will take a ton of effort to filter through, guaranteed.
Try to find an RSS feed for these websites. That will make scraping target data much easier than parsing an entire HTML page. | 2 | 0 | 0 | I have a list of 1,000 URLs of articles published by different agencies, and of course each has its own HTML layout.
I am writing a python code to extract ONLY the article body from each URL. Can this be done by only looking at the <p></p> paragraph tags?
Will I be missing some content? or including irrelevant content by this approach?
Thanks | How to extract article contents from websites with different layouts | 0 | 0 | 1 | 80 |
45,741,991 | 2017-08-17T17:54:00.000 | 0 | 0 | 0 | 0 | python,html-parsing | 45,742,189 | 2 | true | 1 | 0 | For some articles you will be missing content, and for others you will include irrelevant content. There is really no way to grab just the article body from a URL since each site layout will likely vary significantly.
One thing you could try is grabbing text contained in multiple consecutive p tags inside the body tag, but there is still no guarantee you will get just the body of the article.
It would be a lot easier if you broke the list of URLs into a list for each distinct site; that way you could define what the article body is case by case. | 2 | 0 | 0 | I have a list of 1,000 URLs of articles published by different agencies, and of course each has its own HTML layout.
I am writing a python code to extract ONLY the article body from each URL. Can this be done by only looking at the <p></p> paragraph tags?
Will I be missing some content? or including irrelevant content by this approach?
Thanks | How to extract article contents from websites with different layouts | 1.2 | 0 | 1 | 80 |
45,742,377 | 2017-08-17T18:16:00.000 | 0 | 0 | 1 | 1 | python,pyzo | 46,764,383 | 2 | false | 0 | 0 | This is how I fixed it:
I went to the Miniconda3 folder at C:\Users\<user>\Miniconda3 (it might be in another location; the point is you need to find the Miniconda3 folder)
Find the Python application
Rename it to "python.exe"
Then go to shell configuration and replace the path to the operable python program in "exe" with your path (for me it was
C:\Users\<user>\Miniconda3\python.exe) | 1 | 0 | 0 | I recently booted up my Pyzo IDE with the intention of doing some programming, however, upon starting up the python shell it gave this following error:
The given path was not found
The process failed to start (invalid command?). (1)
I am not able to run any code with this error. If I try to run it nothing happens and the error re-appears.
I have tried reinstalling the whole thing without success, I have tried reading the log but there was no error message and I have also tried looking for posts regarding the same problem without success. I was hoping if someone could explain what my problem is and a possible solution, thanks. | Getting a "The process failed to start (invalid command?). (1)" error when starting up Pyzo | 0 | 0 | 0 | 6,080 |
45,743,894 | 2017-08-17T19:51:00.000 | 2 | 0 | 0 | 0 | python,tensorflow | 45,804,171 | 1 | true | 0 | 0 | If your evaluation is a whole epoch, you're right that it doesn't make much sense. eval_steps is more for the case when you're doing mini-batch evaluation and want to evaluate on multiple mini-batches. | 1 | 2 | 1 | I do wonder what is the parameter 'eval_steps' in learn.experiment in tensorflow ? Why would you run over the evaluation set several times every time you want to evaluate your model ?
Thanks! | What is eval_step in Experiment Tensorflow | 1.2 | 0 | 0 | 629 |
45,747,880 | 2017-08-18T03:08:00.000 | 4 | 0 | 0 | 0 | python,openerp,qweb | 45,753,702 | 1 | true | 1 | 0 | The best way to do this is to have a popup target=new and have a statusbar on top right which will be clickable/not readonly (so that the user can go back). And depending on the state of your record, show the appropriate fields
You can of course create a popup, and when the user clicks next destroy that popup and create another one but that doesn't seem like a good idea to me. | 1 | 0 | 0 | I'm trying to create a wizard which has several pages.
I know how to pass to 'target' new or current, to pass the action to a form or tree view, but what I actually need, before that, is to create several steps which will be on different "views" of this wizard, like a form with 'next' and 'back' buttons.
Is there some example code I can look for that?
I've searched on default addons, with no success. | Add button next page - wizard - Odoo v8 | 1.2 | 0 | 0 | 639 |
45,749,992 | 2017-08-18T06:35:00.000 | 6 | 0 | 1 | 0 | python,tensorflow | 49,615,888 | 8 | false | 0 | 0 | If you are using python3 on windows then you might do this as well
pip3 install tensorflow==1.4
you may select any version from "(from versions: 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0)"
I did this when I wanted to downgrade from 1.7 to 1.4 | 2 | 46 | 0 | I have tensorflow 1.2.1 installed, and I need to downgrade it to version 1.1 to run a specific tutorial. What is the safe way to do it? I am using windows 10, python 3.5. Tensorflow was installed with pip3, but "pip3 show tensorflow" returns blank.
Is it possible to have multiple version of tensorflow on the same OS? | How to downgrade tensorflow, multiple versions possible? | 1 | 0 | 0 | 174,217 |
45,749,992 | 2017-08-18T06:35:00.000 | 58 | 0 | 1 | 0 | python,tensorflow | 45,750,287 | 8 | false | 0 | 0 | Pip allows to specify the version
pip install tensorflow==1.1 | 2 | 46 | 0 | I have tensorflow 1.2.1 installed, and I need to downgrade it to version 1.1 to run a specific tutorial. What is the safe way to do it? I am using windows 10, python 3.5. Tensorflow was installed with pip3, but "pip3 show tensorflow" returns blank.
Is it possible to have multiple version of tensorflow on the same OS? | How to downgrade tensorflow, multiple versions possible? | 1 | 0 | 0 | 174,217 |
45,757,754 | 2017-08-18T13:21:00.000 | 1 | 0 | 0 | 0 | python,parent-child,hierarchy,maya,mel | 45,758,995 | 1 | false | 0 | 0 | Any time you change the scale of a parent node, the translation for it's children is going to change - at least, if you're measuring in world space units. So, moving 10 units under a parent scaled to 0.5 will actually move 5 world space units (for example).
I'm pretty sure your rotations should be fine since scale doesn't really change how rotation around a pivot works; however, if you're rotating something from a pivot that is not in the center of the object and you have non-uniform scaling (xyz are not all equal) the rotation inside of the squashed space will feel more like an oval than a circle.
If that's not a problem, the main thing to worry about is the translation positions - you basically need to get the world space positions of each object at each key, 'fix' the scale, then go through the keys and set the world space position again (I would use the xform command for that since you can query and set position with world space values). So, the steps you outlined will probably be the best bet...
If you have non-uniform scales though, you may not actually be able to get the rotations to work out in a way that gives you the same results (just depending on positions/pivots and consecutive descendant positions/pivots). If the parent's scale isn't actually hurting anything and isn't supposed to be keyed/animated, it might be ok to just lock and hide it without any adverse effects. | 1 | 2 | 0 | I have hierarchy of objects with animation on translation and rotation, the scale xyz are equal and static but not 1. When I freeze scale on a parent mesh it's children's animation goes wild. Is there any way to prevent this from happening?
I have found a workaround, but it's not perfect yet. Let's say we have simple setup like this:
parentObject=>childObject
I put childObject in a group "childObjectGroup"
parent childObjectGroup to the world and zero out it's transforms excluding scale.
Bake childObject's transformations to the world so we don't need a group anymore. (I found a good script for that)
Freeze scale transforms on parentObject and childObject
Reparent them back
It works for simple hierarchies like that, but I'm not sure how to apply it to more complicated ones with a deep tree and several branches. Probably I'm missing something and there is a really simple solution to this. | Freeze scale transform on a parent object with animated child (MAYA MEL/Python script) | 0.197375 | 0 | 0 | 1,522
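Following the xform-based approach from the answer, here is a rough maya.cmds sketch of the "record world positions, freeze scale, re-apply positions" steps; the object names are placeholders, and animation layers or constraints are not handled.
import maya.cmds as cmds

def freeze_parent_scale(parent, child):
    times = cmds.keyframe(child, query=True, timeChange=True) or []
    times = sorted(set(times))
    # 1. record the child's world-space position at every key
    positions = {}
    for t in times:
        cmds.currentTime(t, edit=True)
        positions[t] = cmds.xform(child, query=True, worldSpace=True, translation=True)
    # 2. freeze scale on the parent (this is what breaks the child's animation)
    cmds.makeIdentity(parent, apply=True, scale=True, translate=False, rotate=False)
    # 3. re-apply the recorded world-space positions as new keys
    for t in times:
        cmds.currentTime(t, edit=True)
        cmds.xform(child, worldSpace=True, translation=positions[t])
        cmds.setKeyframe(child, attribute="translate")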
45,760,932 | 2017-08-18T16:13:00.000 | 0 | 1 | 1 | 0 | python,ipython | 60,616,311 | 2 | false | 0 | 0 | %run myprogram works for Python scripts/programs.
To run any arbitrary programs, use ! as a prefix, e.g. !myprogram.
Many common shell commands/programs (cd, ls, less, ...) are also registered as IPython magic commands (run via %cd, %ls, ...), and also have registered aliases, so you can directly run them without any prefix, just as cd, ls, less, ... | 1 | 1 | 0 | I'm in IPython and want to run a simple python script that I've saved in a file called "test.py".
I'd like to use the %run test.py command to execute it inside IPython, but I don't know to which folder I need to save my test.py.
Also, how can I change that default folder to something else, for example C:\Users\user\foldername ?
I tried with the .ipython folder (original installation folder) but that's not working. | IPython - running a script with %run command - saved to which folder? | 0 | 0 | 0 | 475 |
45,761,067 | 2017-08-18T16:21:00.000 | 6 | 0 | 0 | 0 | python-3.x,scipy,curve-fitting | 45,766,815 | 1 | true | 0 | 0 | It is normal for the objective function to be called initially with very small (roughly 1e-8) changes in parameter values in order to calculate the partial derivatives to decide which way to go in parameter space. If the result of the objective function does not change at all (not even at 1e-8 level) the fit will give up: changing the parameter values did not change the result.
I would first look into whether the result of your objective function is really sensitive to the parameters. If the changes to your result really are not sensitive to a 1e-8 change, but would be sensitive to a larger change, you may want to increase the value of epsfcn passed to scipy.optimize.leastsq. | 1 | 3 | 1 | I am trying to fit a custom function to some data points using curve_fit. I have tried 1 or two free parameters. I have used it other times. Now I am struggling to make a fit, because the algorithm returns always the initial input values, with infinite sigma, no matter what the initial values are. I have also tried to print the internal parameters with which my custom function is called, and I don't understand, my custom function is called just 4 times, the first three with always the same parameters and the last with a relative change of the parameter of 10^-8. this doesn't look right | python 3.3 : scipy.optimize.curve_fit doesn't update the value of point | 1.2 | 0 | 0 | 985 |
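As a quick illustration of the epsfcn suggestion above (a sketch only; the model and step size are just examples, and epsfcn applies when curve_fit falls through to the Levenberg-Marquardt/leastsq path):
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

xdata = np.linspace(0, 1, 50)
ydata = model(xdata, 2.0, 1.5) + 0.01 * np.random.randn(50)

# epsfcn is forwarded to scipy.optimize.leastsq and controls the finite-difference
# step used for the numerical Jacobian; a larger value can help when the objective
# barely responds to ~1e-8 parameter changes.
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0], epsfcn=1e-4)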
45,761,959 | 2017-08-18T17:25:00.000 | 3 | 0 | 1 | 0 | python,django,md5,filenames,sha1 | 45,762,192 | 2 | true | 0 | 0 | You can use almost anything as a filename, minus reserved characters. Those particular choices tell you nothing about the file itself, aside from its hash value. Provided they aren't uploading identical files, that should prevent file naming collisions. If you don't care about that, have at it.
Usually people upload files in order for someone to pull them back down. So you'd need to have a descriptor of some kind; otherwise users would need to open a mass of files to get the one they want. Perhaps a better option would be to let the user select a name (up to a character limit) and then append the datetime code. Then, in order to have a collision, you'd need to have 2 users select the exact same name at the exact same time. Include seconds in the datetime code, and the chances of collision approach (but never equal) zero. | 2 | 1 | 0 | Let's consider a site where users can upload files. Can I use MD5 or SHA1 hashes of their contents as filenames? If not, what should I use? To avoid collisions. | Can I use MD5 or SHA1 hashes for filenames? | 1.2 | 0 | 0 | 2,604 |
45,761,959 | 2017-08-18T17:25:00.000 | 1 | 0 | 1 | 0 | python,django,md5,filenames,sha1 | 52,287,399 | 2 | false | 0 | 0 | Despite the earlier SHA1 collision attack, the probability of a SHA1 hash collision is still so low that it can be assumed safe to use as a filename in most cases.
The other common approach is using GUID/UUID for every file. So the only question left is how do you want to handle two identical files uploaded by two users. The easiest way is treat them as two separate files and neither of them will be affected by each other.
Though sometimes you might be concerned about storage space. For example, if the files uploaded are really big, you might want to consider storing the two identical files as one to save space. Depending on the user experience of your system, you might need to handle some situations afterwards, such as when one of the two users removed the file. However these are not difficult to handle and just depend on the rest of your system. | 2 | 1 | 0 | Let's consider a site where users can upload files. Can I use MD5 or SHA1 hashes of their contents as filenames? If not, what should I use? To avoid collisions. | Can I use MD5 or SHA1 hashes for filenames? | 0.099668 | 0 | 0 | 2,604 |
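A small sketch combining both answers: name the stored file by a content hash (or a UUID) while keeping the user-supplied name in your database record; how you persist that record is an assumption left to your app.
import hashlib
import uuid

def hashed_name(data: bytes, ext: str) -> str:
    # identical contents collapse to one stored file
    return hashlib.sha1(data).hexdigest() + ext

def unique_name(ext: str) -> str:
    # every upload gets its own file, duplicates and all
    return uuid.uuid4().hex + ext

# stored_name = hashed_name(uploaded_bytes, ".pdf")
# keep the original filename the user typed in a DB column for display/download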
45,763,459 | 2017-08-18T19:01:00.000 | 0 | 1 | 0 | 0 | python,email,exchange-server | 45,763,596 | 2 | false | 0 | 0 | It depends on how well you know the people you're mailing.
If you know them pretty well, it should be fine. If they're total strangers, the recipients might think it's spam and start blocking you.
I could help more if you told me how well you know the recipients. | 2 | 0 | 0 | I have written a Python script that iterates through rows of an Excel file and, for each row:
Gets an e-mail address, name, and name of attachment file to use
Composes an e-mail
Sends out the e-mail
I'm not sure if it's accurate to call this mass-emailing or if it is a candidate for being black-listed because it is sending out individualized e-mails. With a message submission rate of 5/minute, I want to throttle it (or have the limit increased to 100).
So my question is: Is the sort of scenario, assuming the limit is increased to 100, prone to black-listing? | Python Email via MS Exchange: Message Submission Rate Limit | 0 | 0 | 0 | 86 |
45,763,459 | 2017-08-18T19:01:00.000 | 0 | 1 | 0 | 0 | python,email,exchange-server | 45,764,959 | 2 | false | 0 | 0 | It's not easy to answer your question, as it depends heavily on the remote email environment used here and on what you mean by individualized emails (only a different "Hello Mr. ZZ" or "Dear Ms. YY" isn't really an individualized email these days). To give you three possible examples:
Situation 1:
All users are on the same email environment (e.g. Exchange Online / Office 365). Then the remote mail server might see here +100 similar emails and might mark them as spam. If all +100 users are on +100 different email servers that might be different however the following might be possible:
Situation 2:
One user thinks that this email is spam and reports it as spam. Depending on the AntiSpam engine used, a hash value of that email might be created, and other email servers using the same AntiSpam engine might therefore detect your email as spam as well.
Situation 3:
The users are on different email environments but in front of them is an AntiSpam cloud solution. This solution will then see +100 similar emails from one eMail environment and might therefore classify them as SPAM.
Off-topic: You might consider using a service like MailChimp, which uses different email servers to spread out a similar email. This might help prevent such issues, as the mass emails aren't sent from only one server. On top of that you do not risk your own email server being blacklisted, which might have a very bad business impact on your company. | 2 | 0 | 0 | I have written a Python script that iterates through rows of an Excel file and, for each row:
Gets an e-mail address, name, and name of attachment file to use
Composes an e-mail
Sends out the e-mail
I'm not sure if it's accurate to call this mass-emailing or if it is a candidate for being black-listed because it is sending out individualized e-mails. With a message submission rate of 5/minute, I want to throttle it (or have the limit increased to 100).
So my question is: Is the sort of scenario, assuming the limit is increased to 100, prone to black-listing? | Python Email via MS Exchange: Message Submission Rate Limit | 0 | 0 | 0 | 86 |
45,764,187 | 2017-08-18T19:55:00.000 | 0 | 1 | 0 | 0 | python,exception-handling,timing,nas | 45,768,586 | 1 | false | 0 | 0 | Taking on MQTT at this stage would be a big change to this nearly-finished project. But your suggestion of decoupling the near-real-time Python from the NAS drive by using a second script is I think the way to go. If the Python disc interface commands wait 10 seconds for an answer, I can't help that. But I can stop it holding up the time-critical Python functions by keeping all time-critical file accesses local in Pi memory, and replicating whole files in both directions between the Pi and the NAS drive whenever they change. In fact I already have the opportunistic replicator code in Python - I just need to move it out of the main time-critical script into a separate script that will replicate the files. And the replicator Python script will do any waiting, rather than the time-critical Python script. The Pi scheduler will decouple the two scripts for me. Thanks for your help - I was beginning to despair! | 1 | 0 | 0 | For a flood warning system, my Raspberry Pi rings the bells in near-real-time but uses a NAS drive as a postbox to output data files to a PC for slower-time graphing and reporting, and to receive various input data files back. Python on the Pi takes precisely 10 seconds to establish that the NAS drive right next to it is not currently available. I need that to happen in less than a second for each access attempt, otherwise the delays add up and the Pi fails to refresh the hardware watchdog in time. (The Pi performs tasks on a fixed cycle: every second, every 15 seconds (watchdog), every 75 seconds and every 10 minutes.) All disc access attempts are preceded by tests with try-except. But try-except doesn't help, as tests like os.path.exists() or with open() both take 10 seconds before raising the exception, even when the NAS drive is powered down. It's as though there's a 10-second timeout way down in the comms protocol rather than up in the software.
Is there a way of telling try-except not to be so patient? If not, how can I get a more immediate indicator of whether the NAS drive is going to hold up the Pi at the next read/write, so that the Pi can give up and wait till the next cycle? I've done all the file queueing for that, but it's wasted if every check takes 10 seconds. | How can Python test the availability of a NAS drive really quickly? | 0 | 0 | 0 | 226 |
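One common trick for the question above is to probe the NAS with a raw TCP connect and a short timeout before touching the mounted path; the host address and SMB port used here are assumptions about the setup.
import socket

def nas_reachable(host="192.168.1.50", port=445, timeout=0.5):
    # True if the NAS answers on its file-sharing port within `timeout` seconds
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# if nas_reachable(): attempt the file copy this cycle; otherwise queue it for later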
45,769,111 | 2017-08-19T07:51:00.000 | 0 | 0 | 0 | 0 | mongodb,python-3.x,cron | 45,797,856 | 1 | false | 0 | 0 | Yes, I also faced this problem but then I tried by moving small chunks of the data. Sharding is not the better way as per my experience regarding this kind of problem. Same thing for the replica set. | 1 | 0 | 0 | Let me explain the problem
We get real time data which is as big as 0.2Million per day.
Some of these records are of special significance. The attributes
that shall mark them as significant are pushed in a reference collection. Let us say each row in Master Database has the following attributes
a. ID b. Type c. Event 1 d. Event 2 e. Event 3 f. Event 4
For the special markers, we identify them as
Marker1 -- Event 1 -- Value1
Marker2 -- Event 3 -- Value1
Marker3 -- Event 1 -- Value2
and so on. We can add 10000 such markers.
Further, the attribute Type can be Image, Video, Text, Others. Hence the idea is to segregate Data based on Type, which means that we create 4 collections out of Master Collection. This is because we have to run search on collections based on Type and also run some processing.The marker data should show in a different tab on the search screen.
We shall also be running a search on Master Collection through a wild search.
We are running Crons to do these processes as
I. Dumping Data in Master Collection - Cron 1
II. Assigning Markers - Cron 2
III. Segregating Data based on Type - Cron 3
Which runs as a module. Cron 1 - Cron 2 - Cron 3.
But assigning targets and segregation takes a very long time. We are using Python as scripting language.
In fact, the crons don't seem to work at all. The cron works from the command prompt. But scheduling these in crontab does not work. We are giving absolute path to the files. The crons are scheduled at 3 minutes apart.
Can someone help? | How to segregate large real time data in MongoDB | 0 | 1 | 0 | 143 |
45,770,489 | 2017-08-19T10:35:00.000 | 1 | 0 | 1 | 0 | python,parentheses | 45,770,637 | 2 | false | 0 | 0 | Here is an outline of an algorithm to both insert radians( at the proper place and keep the parentheses balanced. This will work if the parentheses are indeed balanced beforehand and if there are no unbalanced parentheses in string literals such as len("abc(d"). It does not seem terribly pythonic, however.
Do not just use replace(). Instead, use find() to find a usage of cos( or other trig function. Set a counter to zero. Then scan the string from immediately after that opening parenthesis [the ( in cos(] to the right. When you encounter an opening parenthesis, increment the counter by one; when you encounter a closing parenthesis, decrement the counter by one. When your counter reaches -1 you have found the close parenthesis for your trig function. Insert a new close parenthesis at that location, then insert your radians( just after the trig function.
Continue this until you have treated all the trig functions in your string. | 1 | 2 | 0 | I am trying to implement a function for balancing parentheses of a given math equation as a string. It should be changing the string, not just checking if it is balanced.
Because the math equation can contain trigonometric functions, I want to add radians() after such functions, because in Python, trigonometric functions take input as radians, while I want degrees.
So tan(65) becomes tan(radians(65)).
cos(65) + sin(35) becomes cos(radians(65)) + sin(radians(35))
cos((30 - 10) * 2) becomes cos(radians((30 - 10) * 2))
So far, what I've done is using replace() to replace cos( with cos(radians(, sin( with sin(radians( and the same thing goes for all the rest trigonometric functions. But the problem is, the string (which is a math equation) becomes parentheses-unbalanced.
How do I write a function to solve this problem? | Function for balancing parentheses | 0.099668 | 0 | 0 | 184 |
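A sketch of the scan-and-count approach described in the answer above; it assumes the input is already balanced and does not try to distinguish e.g. atan from tan.
def add_radians(expr, funcs=("sin", "cos", "tan")):
    for func in funcs:
        start = 0
        while True:
            idx = expr.find(func + "(", start)
            if idx == -1:
                break
            open_idx = idx + len(func)            # index of the '(' after the function name
            depth = 0
            close_idx = None
            for j in range(open_idx + 1, len(expr)):
                if expr[j] == "(":
                    depth += 1
                elif expr[j] == ")":
                    if depth == 0:
                        close_idx = j             # the ')' matching this trig call
                        break
                    depth -= 1
            if close_idx is None:
                break                             # unbalanced input; give up on this function
            expr = expr[:close_idx] + ")" + expr[close_idx:]            # close the new radians(...)
            expr = expr[:open_idx + 1] + "radians(" + expr[open_idx + 1:]
            start = open_idx + 1 + len("radians(")
    return expr

# add_radians("cos((30 - 10) * 2) + tan(65)")
# -> "cos(radians((30 - 10) * 2)) + tan(radians(65))"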
45,771,774 | 2017-08-19T12:53:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,web-scraping,scrapy | 68,959,793 | 1 | false | 1 | 0 | Don't store any intermediate data, and check whether the code is going through any infinite loops.
For storing URLs, use a queuing broker like RabbitMQ or Redis.
For the final data, store it in a database using a Python DB connection library (sqlalchemy, mysql-connector, pyodbc etc., depending on the database selected).
This helps your code run distributed and efficiently (remember to use NullPool or a single pool to avoid too many DB connections).
An easy and efficient way is to use a SQLite database:
insert the 1 million URLs into a table with a status of "done" or "notyet";
after crawling a URL and storing its data in another table, update that URL's status from "notyet" to "done".
This keeps track of the URLs scraped so far, so you can restart the script after any issue and scrape only the not-yet-done URLs. | 1 | 7 | 0 | I'm trying to scrape a rather large website (with around 1 million pages) with Scrapy. The spider works fine and it is able to scrape a few thousand pages before inevitably crashing due to low memory.
Things I've tried:
Using the -s JOBDIR=<DIRECTORY>: This gave me an initial improvement and I was able to crawl about twice the number of URLs than with the previous approach. However, even with this option Scrapy's memory consumption slowly increases, until it is killed by the out-of-memory killer.
Preventing unnecessary functions, such as preventing excessive output by raising the log limit from DEBUG to INFO.
Using yield statements instead of returning arrays.
Keeping the returned data to an absolute minimum.
Running the spider on a beefier machine: This helps me crawl a bit more, but inevitably it crashes again at a later point (and I'm nowhere near the 1 million mark).
Is there something I'm missing which can help me with complete the scraping? | Is there a way to reduce Scrapy's memory consumption? | 0.197375 | 0 | 0 | 2,171 |
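For the SQLite bookkeeping idea from the answer above, a minimal sketch of the "done/notyet" table; the table and column names are assumptions.
import sqlite3

conn = sqlite3.connect("crawl_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, status TEXT DEFAULT 'notyet')")

def add_urls(urls):
    conn.executemany("INSERT OR IGNORE INTO urls (url) VALUES (?)", [(u,) for u in urls])
    conn.commit()

def pending(limit=1000):
    # feed these back into the spider's start_requests after a crash/restart
    cur = conn.execute("SELECT url FROM urls WHERE status = 'notyet' LIMIT ?", (limit,))
    return [row[0] for row in cur]

def mark_done(url):
    conn.execute("UPDATE urls SET status = 'done' WHERE url = ?", (url,))
    conn.commit()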
45,772,510 | 2017-08-19T14:05:00.000 | 0 | 0 | 0 | 1 | python | 45,772,896 | 3 | false | 0 | 0 | In my case I would try something using Task Manager data, probably via subprocess.check_output("ps") (for me that looks good), but you can also use the psutil library.
Tell us what you did later :) | 1 | 0 | 0 | In Python, how do you check that an external program is running? I'd like to track my use of some programs, so I can see the amount of time I've spent with them. For example, if I launch my program , I want to be able to see if Chrome has already been launched, and if so, start a timer which would end when I exit Chrome.
I've seen that the subprocess module can launch external programs, but this is not what I'm looking for.
Thanks in advance. | python: how to check the use of an external program | 0 | 0 | 0 | 47 |
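A small sketch of the psutil approach mentioned in the answer: poll for the process by name and accumulate the time it stays alive. The executable name is an assumption, and the polling granularity is deliberately coarse.
import time
import psutil

def is_running(name="chrome"):
    for proc in psutil.process_iter():
        try:
            if name.lower() in proc.name().lower():
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return False

elapsed, start = 0.0, None
while True:
    running = is_running()
    if running and start is None:
        start = time.time()              # program just appeared
    elif not running and start is not None:
        elapsed += time.time() - start   # program just exited
        start = None
        print("total seconds so far:", int(elapsed))
    time.sleep(5)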
45,774,274 | 2017-08-19T17:13:00.000 | 1 | 0 | 1 | 0 | python,date,time,raspbian | 45,774,473 | 1 | true | 0 | 0 | The code is encrypted (No one can see the source code of the program).
That's a fallacy. Unless you're using a secure processor that can actually decrypt things into memory that can't be read by the operating system, your program is never truly encrypted. Sure, the original python might be hidden, but from the assembly, a somewhat skilled person can easily gather what is happening.
So, since this is kind of a data security question: Security by obscurity doesn't work on general-purpose hardware. Especially not with relatively high-level things like Python.
Now my problem. I need to be sure that the data stored in my database has not been modified.
That is a hard problem, indeed. The problem is that: if someone's able to fully reconstruct the state of your program, they can also reconstruct what your encryption would have done if the data was different.
There's a few ways around that. But in the end, they all break down to a single principle:
You need some hardware device that can encrypt your data as it comes and proves it hasn't been tampered with, e.g. by keeping a counter of how many things have been encrypted. So, if you have e.g 100 things in the database that have been encrypted by your secure, uncloneable crypto hardware, and it shows it has only been used 100 times, you're fine. The same would apply if that hardware would, for example, always do "encrypt(input bytes + timestamp)".
You can't do that in software on a general purpose OS — software can always be made to run with modified data, and if it's just that you patch the physical memory accessed just in time.
So, what you'll need is specific hardware. It feels like a crypto smart card would be able to do something like that, but I don't know whether it includes the functionality to keep a counter or include the timestamp.
One solution that might work is basically using a stream cipher to ensure the integrity of the whole data "stream". Here, part of the secret is the state in which the encryption algorithm is in. Imagine this: You have a smart card with a secret key from a keypair generated on the card itself on it. You hold the other key in your cellar.
You, before shipping the device, encrypt something secret. That puts the smartcard in a state that the malicious tamperer can't guess.
You encrypt the first value, save the output. That changes the internal state!
You encrypt and save the output of a known word or sequence
repeat 2. + 3. for all the other values to be stored.
at the end, you decrypt the data in the database using the key you kept in your cellar. Since the internal state necessarily changed with the input data (i.e. encrypting the same data twice doesn't give the same result!!), the data isn't correctly decryptable if something is missing from the records. You can immediately check via the output generated by the known word. (A rough software-only sketch of this chained-state idea follows below.)
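A rough software-only approximation of that chained-state idea, using an HMAC chain instead of a smart card; note it only helps if the key itself can be kept away from the attacker, which (as argued above) ultimately requires hardware.
import hmac
import hashlib

def chained_tags(key: bytes, records):
    # One tag per record; each tag also covers the previous tag, so editing or
    # removing any earlier record invalidates every tag that comes after it.
    prev = b"bootstrap-secret"                      # step 1: state the tamperer can't guess
    tags = []
    for rec in records:
        tag = hmac.new(key, prev + rec.encode("utf-8"), hashlib.sha256).digest()
        tags.append(tag.hex())
        prev = tag                                  # carry the internal "state" forward
    return tags

# verification later: recompute chained_tags with the same key and compare row by row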
takeaway
What you're trying to do is hard – that namely being:
running software on hardware that you have no control over and having to ensure the authenticity of the data it produced.
Now, the impossible part is actually making sure that data hasn't been tampered with before it enters your software – who says that, for example, the driver for your temperature sensor hasn't been replaced by something that always says "-18 °C"? To avoid the capability of people to tamper with your software, you'll need hardware that enforces the non-tampering. And that's not something you can do on PC-style hardware, unless you disable all debugging possibilities and ensure you have safe booting capability. | 1 | 0 | 0 | First of all sorry for my bad english.
I'm working on a project and I need to generate a code (ID) that I can verify later.
As my project is very extensive, I will give you an example and then explain what I need to solve.
Example: I have a program that gets the temperature of a place once a day, and the data is stored in a local database (I save the temperature, the date, and the unique ID).
The code is encrypted (No one can see the source code of the program).
Now my problem.
I need to be sure that the data stored in my database has not been modified.
What I think can solve this is: For example, the date is 08-19-2017 and the temperature is 25°C. I can do some math operations (for example, multiply them all) to get an ID, and later on I can verify that the code matches the date and temperature.
Do you think this is a good solution or is there a better one?
Thanks all.
I'm using Python and linux. | Generate a unique code using date and time | 1.2 | 0 | 0 | 230 |
45,775,590 | 2017-08-19T19:30:00.000 | 2 | 0 | 0 | 0 | python,flask | 45,775,615 | 1 | true | 1 | 0 | Flask and all of your dependencies are loaded once when you start your web server, so no need to worry about startup time. | 1 | 0 | 0 | I am just about ready to deploy my Flask based website but before I do, I would like to know if the Flask framework is loaded for each session or if it is loaded just once on the server. The reason I ask is because I have a lot of python libraries to load and I want to know if I should load them all at once (if Flask is only loaded once) or load them on a page by page basis (if Flask is loaded for each session). It is really a question about getting the best performance for the end user. | Is Flask loaded just once on the server or every time someone visits site? | 1.2 | 0 | 0 | 53 |
45,775,996 | 2017-08-19T20:27:00.000 | -1 | 0 | 1 | 0 | python,tesseract,python-tesseract | 49,129,541 | 2 | false | 0 | 0 | Please check the correct path of installation of Tesseract-OCR. Setting the correct path, i.e. C:\Program Files (x86)\Tesseract-OCR worked for me. | 1 | 1 | 0 | I installed tesseract-OCR for windows and it resides in C:\Program Files\Tesseract-OCR path in my system.
I set up an environment variable by adding C:\Program Files\Tesseract-OCR in the PATH variable.
I also set up TESSDATA_PREFIX in system variable to the same above tesseract location.
Still, when I try to run the command "tesseract some path\image.tif somepath\output", it gives message as "'tesseract' is not recognized as an internal or external command".
when i run the same command from the location where tesseract is installed, it works fine but i need to have it set in the environment variables as it will also allow PYOCR wrapper to recognise it.
PYOCR is currently giving "pyocr.get_available_tools()[0]" as empty list.
any help is much appreciated. | tesseract command not working from command line in windows | -0.099668 | 0 | 0 | 3,259 |
45,776,795 | 2017-08-19T22:23:00.000 | 0 | 1 | 0 | 0 | python,rocksdb | 49,740,205 | 1 | false | 0 | 0 | Does rocksdb itself or its python-rocksdb API contain any vulnerability that would warrant sanitise those received key values before attempting to retrieve the data with them?
There is no security risk in using rocksdb to store arbitrary data. The issues comes in when you are serializing / deserializing the value. You must use a known safe library. | 1 | 1 | 0 | I am working on a web application that stores metadata about files in rocksdb, using their packed base64 MD5 hashes as a keys(Example: 7XDfSsHImTYaYDUIG8QfYg==) and allows end users to access it by providing same keys. Does rocksdb itself or its python-rocksdb API contain any vulnerability that would warrant sanitise those received key values before attempting to retrieve the data with them? | Are there security security vulnerabilities in rocksdb or python-rocksd that warrant sanitizing keys received from external source? | 0 | 0 | 0 | 252 |
45,779,671 | 2017-08-20T07:39:00.000 | 4 | 0 | 1 | 0 | python,google-assistant-sdk | 45,808,580 | 2 | false | 0 | 0 | Easy fix but hard to find. You just need to make sure that all the settings are there as mentioned before. I completed the above actions, and then I set it to administrator and input the 3 commands:
pip install --upgrade google-api-python-client
pip install --upgrade google-auth-oauthlib[tool]
google-oauthlib-tool --client-secrets path/to/client_secret_XXXXX.json --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless
Success! | 1 | 2 | 0 | I'm receiving this error when trying to install Google Assistant, and I am using Windows 10, Python 3.6 and SDK 0.3.3. Could someone please recommend the next step? I've tried inputting in the string recommended on other sites, which ends with --scope https://googleapis.com... but this did not work. | No module named googlesamples.assistant.auth_helpers | 0.379949 | 0 | 0 | 12,193 |
45,781,750 | 2017-08-20T11:40:00.000 | 1 | 0 | 0 | 0 | ssl,python-3.6 | 45,781,816 | 1 | false | 0 | 0 | This is not possible.
SSL sockets in Python are implemented using OpenSSL. For each SSL socket in python there is a user space state managed by OpenSSL. Transferring a SSL socket to another process would need this internal SSL state to be transferred too. But, Python has no direct access to this state because it only uses the OpenSSL library with the Libraries API and thus can not transfer it. | 1 | 0 | 0 | The first process will receive and send some data (to complete the authentication) after accept a sslsocket, then send the sslsocket to another process.
I know that multiprocessing.reduction.send_handle can send socket, but it didn't work with sslsocket.
Please help. | How to send a sslsocket to a running process | 0.197375 | 0 | 1 | 41 |
45,785,730 | 2017-08-20T18:49:00.000 | 1 | 1 | 0 | 1 | python,amazon-web-services,amazon-s3,amazon-redshift | 45,787,036 | 1 | false | 0 | 0 | COPY command can load multiple files in parallel very fast and efficiently. So when you run one COPY command for each file in your python file, that's going to take a lot of time since you are not taking advantage of parallel loading.
So maybe you can write a script to find bad JSONs in your manifest and kick them out and run a single COPY with the new clean manifest?
Or like you suggested, I would recommend splitting manifest file into small chunks so that COPY can run for multiple files at a time. (NOT a single COPY command for each file) | 1 | 1 | 0 | I have a large manifest file containing about 460,000 entries (all S3 files) that I wish to load to Redshift. Due to issues beyond my control a few (maybe a dozen or more) of these entries contain bad JSON that will cause a COPY command to fail if I pass in the entire manifest at once. Using COPY with a key prefix will also fail in the same way.
To get around this I have written a Python script that will go through the manifest file one URL at a time and issue a COPY command for each one using psycopg2. The script will additionally catch and log any errors to ensure that the script runs even when it comes across a bad file, and allows us to locate and fix the bad files.
The script has been running for a little more than a week now on a spare EC2 instance and is only around 75% complete. I'd like to lower the run time, because this script will be used again.
My understanding of Redshift is that COPY commands are executed in parallel, and with that I had an idea - will splitting the manifest file into smaller chunks and then running the script each of those chunks reduce the time it takes to load all the files? | Using multiple manifest files to load to Redshift from S3? | 0.197375 | 0 | 0 | 792 |
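For the splitting suggestion above, a small sketch that breaks one large Redshift manifest into chunk manifests (the chunk size and output paths are assumptions); each chunk can then be loaded with a single COPY so Redshift parallelises the files inside it.
import json

def split_manifest(path, chunk_size=10000):
    with open(path) as f:
        entries = json.load(f)["entries"]
    chunks = []
    for i in range(0, len(entries), chunk_size):
        out = "manifest_part_{}.json".format(i // chunk_size)
        with open(out, "w") as f:
            json.dump({"entries": entries[i:i + chunk_size]}, f)
        chunks.append(out)
    return chunks   # upload each to S3, then run one COPY ... MANIFEST per chunk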
45,786,134 | 2017-08-20T19:40:00.000 | 0 | 0 | 1 | 1 | python-2.7,py2exe | 45,798,385 | 1 | false | 0 | 0 | Open a command prompt window in windows and execute the .exe from there at the system were it does not work - then you will see the error message why it does not work, and that might help to figure out where the problem is.
If you just double-click the exe the error shows as well, but the cmd window is closed immediately since the process terminates | 1 | 0 | 0 | The executable i created works completely fine on my system,but as soon as opened in an other system the cmd opens for a very brief time and then closes. | why does the executable i created using py2exe only runs on my computer and not on others? | 0 | 0 | 0 | 31 |
45,787,213 | 2017-08-20T22:08:00.000 | 0 | 0 | 0 | 0 | javascript,python,html | 45,788,234 | 2 | false | 1 | 0 | You can use Two Python libraries.
Django
Flask
I recommend Django. Django is easy and fast to build with.
Flask is more complex, but you can build more detailed functionality with it. | 1 | 0 | 0 | I would like to basically call a python script from HTML; after the script is called and it finishes running, I would like to execute a javascript file (which I know how to do). Now my question is: Can I do this with just pure HTML and javascript, or do I need to get a library for python? If I don't need a library, how would I go about doing this? | Runnning python script on HTML page | 0 | 0 | 0 | 76
45,787,580 | 2017-08-20T23:14:00.000 | 0 | 0 | 0 | 0 | python,sockets,server | 45,787,646 | 1 | false | 0 | 0 | There are quite a few ways one can imagine handling this. However, the right solution is almost certainly setting up a database. AWS offers free services below a certain tier of usage. Look there if this is a small personal project.
Since you are using python, you should be using sqlalchemy to define a model and interact with your persistent data. You can setup such a database on an ec2 instance for free if you keep it small enough. RDS makes database management easier, but I'm not sure there is a free tier for it. | 1 | 0 | 0 | I am trying to build a python application that reads AND writes data to a file online that other instances of the application have access to. I know I could use sockets with a dedicated server, but I don't have one. Is there any service that does this, or should I get a server?
Thanks | Python Data Acquisition Without Server | 0 | 0 | 1 | 46 |
45,788,655 | 2017-08-21T02:39:00.000 | 1 | 0 | 1 | 0 | c#,python,dll | 45,789,539 | 1 | true | 0 | 0 | Your options are as follow,
Use a TCP socket, bind it to a port and listen for data in C# while the python application sends all data to it. C# has some great features for sockets such as System.Net.TcpClient and System.Net.TcpServer.
Your other option is that if the C# application only needs to be run once it receives information from the python program and then it can die, you could have the python program start the C# one and pass it parameters containing the information that you need transmitted.
By the looks of it your question only asked if there was a way to communicate, these are probably the best two. Please let me know if I can help anymore. | 1 | 0 | 1 | i have a chart pattern recognition program using neural networks written in python .instead of porting the whole code to C# ,i decided it was better to only send certain bits indicating the following :
Buy=1,Sell=-1,Do nothing=0
Once they are in C#, I could relay them to a third-party program (Multicharts) which would continuously call the C# DLL function and receive these values after a certain time interval.
My question is: is there a way to relay these bits to C# and pack all of this in a DLL, which gets read by the 3rd party program?
The whole reason I want to port to C# is because Multicharts only reads in DLLs and I don't think Python has them.
Sorry for being naive, I don't have a very good grip on C#. | sending data from python to c# | 1.2 | 0 | 0 | 612
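For the TCP option in the accepted answer, a minimal sketch of the Python side sending the 1 / -1 / 0 signals to a C# listener; the host, port, and message framing are assumptions to be mirrored on the C# side.
import socket

def send_signal(signal: int, host="127.0.0.1", port=9000):
    # one newline-terminated integer per message; the C# TcpListener parses it
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall("{}\n".format(signal).encode("ascii"))

# send_signal(1)    # buy
# send_signal(-1)   # sell
# send_signal(0)    # do nothing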
45,789,617 | 2017-08-21T05:06:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,graphene-python | 45,997,346 | 2 | false | 1 | 0 | When you map your Django model to a GraphQL, it create a new model with GraphQL object types from the introspection of the Django model..
And nothing prevents you from combining this model with plain GraphQL object types, or with types mapped from another third-party persistence model. | 1 | 5 | 0 | I've successfully used Graphene-Django to build several GraphQL calls. In all of those cases I populated, in whole or in part, a Django model and then returned the records I populated.
Now I have a situation where I'd like to return some data that I don't wish to store in the Django model. Is this possible to do with Graphene?
Robert | Graphene Django without Django Model? | 0 | 0 | 0 | 2,118 |
45,790,373 | 2017-08-21T06:15:00.000 | 0 | 0 | 1 | 1 | python | 45,790,671 | 1 | true | 0 | 0 | Python3 is not in your "search path"
You need to alter the Windows PATH value so the Python3.exe module is found. | 1 | 0 | 0 | The documentation tells me to type python3 -m venv myenv into the command prompt, assuming the directory I'd like is called myenv. However, when I do this, I get:
"python3 is not recognized as an internal or external command, operable program or batch file."
I have not seen this addressed on here, or in the documentation. My installation seems to have run correctly, because simply typing python shows me what it's supposed to show. | Trying to create a virtual env on Windows 7, using Python 3.6.2 | 1.2 | 0 | 0 | 164 |
45,791,218 | 2017-08-21T07:12:00.000 | 1 | 0 | 1 | 0 | python-2.7 | 45,791,336 | 2 | true | 0 | 0 | Every base64 character encodes 6 bits. If your original string is 80 bits (10 * 8), 80/6 = ~13.3 so you need 14 characters to represent all 80 bits, plus two padding characters.
A base64 string's length must be a multiple of 4, as every 4 characters map to 3 bytes. The '=' character is used as padding.
EDIT: for clarity, 14+2 = 16 | 2 | 0 | 0 | What max length can I expect to a base64encoded string of length 10 in python?
I need to specify that in my database. | What will be the max length of base64encode string of length 10 in python? | 1.2 | 0 | 0 | 1,041 |
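A quick check of the arithmetic in the answer (the general rule is 4 * ceil(n / 3) characters for n input bytes):
import base64
import os

encoded = base64.b64encode(os.urandom(10))
print(len(encoded))   # 16: ceil(10 / 3) * 4, including the two '=' padding characters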