Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
43,220,715 | 2017-04-05T02:18:00.000 | 0 | 0 | 1 | 1 | python,build,dependencies,scons | 43,432,166 | 2 | false | 0 | 0 | Here's another potential solution which is kind of another workaround. Is it possible for the scanner to speculate the list of files that will be generated if the *.i swig interface files are passed to it as the "node" argument? This way the scanner doesn't actually need the files to be present to generate the list of dependencies.
In general, I'm wondering if the solution to this problem is to just write logic to aggressively speculate the dependencies before the SWIG libraries are actually generated. I don't assume much info can be gained from looking at the "_*.so" files themselves. | 2 | 0 | 0 | I have a SCons build system set up to build some libraries from C++, as well as Python wrappers for them via SWIG. Then the results are used for data processing, which is also a part of SCons build. The data processing is Python scripts that use the built SWIG-wrapped libraries.
I've set up the dependencies such that data processing starts after all the libraries and wrappers are built, and that works out well. But there's a caveat (you guessed it, right? :) ). I want to add a source scanner, which also uses some of the SWIG libraries to expand the dependencies. The problem is that the scanner runs too soon. In fact, I see it running twice - once at some point early in the build and the other just before data processing starts. So the first scanner run in parallel build typically happens before all the necessary libraries are built, so it fails.
How can I make the scanner itself depend on library targets?
Or, can I delay the scanner run - or eliminate the first scanner run?
Any other ideas? | How to delay the run of SCons source scanner? | 0 | 0 | 0 | 180 |
43,220,715 | 2017-04-05T02:18:00.000 | 1 | 0 | 1 | 1 | python,build,dependencies,scons | 43,220,910 | 2 | false | 0 | 0 | One workaround I think would work is to turn the scanner into a builder that runs the scan process instead of scanner and generates a file that lists all the dependencies. The data processing build would then simply have a scanner to parse that file. I'd expect SCons not attempt to run it early, because it would be aware of the scanned source file being a target of some builder.
Assuming it works, it is still a sub-par solution as it complicates the build set up and adds extra file I/O of a not-so-small file (the dependencies are thousands of files, with long paths). | 2 | 0 | 0 | I have a SCons build system set up to build some libraries from C++, as well as Python wrappers for them via SWIG. Then the results are used for data processing, which is also a part of SCons build. The data processing is Python scripts that use the built SWIG-wrapped libraries.
I've set up the dependencies such that data processing starts after all the libraries and wrappers are built, and that works out well. But there's a caveat (you guessed it, right? :) ). I want to add a source scanner, which also uses some of the SWIG libraries to expand the dependencies. The problem is that the scanner runs too soon. In fact, I see it running twice - once at some point early in the build and the other just before data processing starts. So the first scanner run in parallel build typically happens before all the necessary libraries are built, so it fails.
How can I make the scanner itself depend on library targets?
Or, can I delay the scanner run - or eliminate the first scanner run?
Any other ideas? | How to delay the run of SCons source scanner? | 0.099668 | 0 | 0 | 180 |
43,223,984 | 2017-04-05T06:58:00.000 | 1 | 1 | 1 | 0 | python,chef-infra | 43,225,019 | 1 | false | 0 | 0 | No, Chef does not support concurrent execution in the general case. Chef-provisioning specifically supports concurrent handling for VM creation, but that's a special case. | 1 | 0 | 0 | I have a Chef recipe, say A, which contains three recipes B, C and D. Now I want to run D in parallel with the sequential execution of B and C. One hack I know of is to write the logic of B, C and D in Python (B.py, C.py and D.py), execute them in parallel (All.py) in some script, and make a consolidated recipe that calls All.py. But the code is too large to convert. Is there any hack to run recipes in parallel in Chef using the same recipes? | Is it possible to run a recipe in parallel with others in chef with some hack? | 0.197375 | 0 | 0 | 541 |
43,236,458 | 2017-04-05T16:11:00.000 | 1 | 0 | 1 | 0 | python,intellij-idea,pycharm | 43,240,575 | 1 | false | 0 | 0 | If you haven't changed your default keymap you can place your cursor on the variable from which you want its documentation and hit Ctrl + q which opens a popup with the available documentation!
If Ctrl + q does not work for you, open File > Settings > Keymap and in the search bar search for "Quick Documentation" and use the listed hot-key mentioned there for that action! | 1 | 2 | 0 | Say I have a file open and I know the type of a variable is a dict, but the editor doesn't know that. Is there a way I can navigate to dict documentation?
I tried Search Everywhere, but that doesn't seem to work.
Thanks! | How do you look up python builtin documentation with intellij/pycharm? | 0.197375 | 0 | 0 | 101 |
43,237,879 | 2017-04-05T17:29:00.000 | 1 | 1 | 1 | 0 | python | 43,238,045 | 4 | false | 0 | 0 | The best way to accomplish this is to have your program run on some type of server that your computer can connect to. A server could be anything from a Raspberry Pi to an old disused computer, a web server or a cloud server. You would have to build a program that can be accessed from your computer; how you access it will differ a lot depending on the server you choose and the way you build your program.
Doing things this way means your script will always be able to check the temperature because it will be running on a system that stays on. | 3 | 0 | 0 | I have a python script that checks the temperature every 24 hours, is there a way to leave it running if I shut the computer down/log off. | Is there a way to leave python code running when the computer is shutdown? | 0.049958 | 0 | 0 | 8,836 |
43,237,879 | 2017-04-05T17:29:00.000 | 0 | 1 | 1 | 0 | python | 43,238,020 | 4 | false | 0 | 0 | Scripts are unable to run while your computer is powered off. What operating system are you running? How are you collecting the temperature? It is hard to give much more help without this information.
One thing I might suggest is powering on the system remotely at a scheduled time, using another networked machine. | 3 | 0 | 0 | I have a python script that checks the temperature every 24 hours, is there a way to leave it running if I shut the computer down/log off. | Is there a way to leave python code running when the computer is shutdown? | 0 | 0 | 0 | 8,836 |
43,237,879 | 2017-04-05T17:29:00.000 | 2 | 1 | 1 | 0 | python | 43,238,017 | 4 | false | 0 | 0 | Shutdown - no.
Logoff - potentially, yes.
If you want the script to automatically start when you turn the computer back on, then you can add the script to your startup folder (Windows) or schedule it (Windows Task Scheduler, cron job, systemd timer).
If you really want a temperature tracker that is permanently available, you can use a low-power solution like the Raspberry Pi rather than leaving your pc on. | 3 | 0 | 0 | I have a python script that checks the temperature every 24 hours, is there a way to leave it running if I shut the computer down/log off. | Is there a way to leave python code running when the computer is shutdown? | 0.099668 | 0 | 0 | 8,836 |
43,240,152 | 2017-04-05T19:37:00.000 | 1 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 43,241,747 | 1 | true | 0 | 0 | For the first part of your question you can make a private group and add your bot as one of its administrators. Then it can talk to the members and answer to their commands.
Even if you don't want to do so, it is possible to check the chat ID of each update that the bot receives. If the chat ID exists in the file, database or even a simple array, the bot answers the command; if not, it just ignores the message or sends a simple text such as the "goodbye" you mentioned (see the sketch after this entry).
Note that bots cannot block people; they can only ignore their messages. | 1 | 3 | 0 | I want to create a Telegram bot for a home project and I want the bot to talk to only 3 people. How can I do this?
I thought of creating a file with the chat ID of each of us and checking it before responding to any command; I think it will work. The bot will send the correct info if the sender is one of us and "goodbye" to anyone else.
But is there any other way to block any other conversation with my bot?
Pd: I'm using python-telegram-bot | Telegram-bot user control | 1.2 | 0 | 1 | 986 |
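A minimal sketch of the chat-ID whitelist idea from the answer above. The IDs are placeholders, and the (update, context) handler signature plus update.effective_chat/context.bot follow python-telegram-bot's context-based API; older versions use (bot, update) instead, so adjust to your version.

```python
# Placeholder chat IDs for the three people allowed to use the bot.
ALLOWED_CHAT_IDS = {111111111, 222222222, 333333333}

def restricted(handler):
    """Only let whitelisted chats reach the wrapped handler."""
    def wrapper(update, context):
        chat_id = update.effective_chat.id
        if chat_id not in ALLOWED_CHAT_IDS:
            context.bot.send_message(chat_id=chat_id, text="goodbye")
            return None
        return handler(update, context)
    return wrapper

@restricted
def info(update, context):
    # Whatever the bot normally does for trusted users.
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text="Here is the correct info.")
```

Register the decorated function as a normal command handler; unknown chats only ever see "goodbye".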
43,241,188 | 2017-04-05T20:38:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,virtualenvwrapper | 44,764,793 | 2 | false | 0 | 0 | I've found you can also do the same for virtualenvwrapper projects like so:
mkproject -p python2 my_venv_project.
Also, make sure the project name comes last. | 2 | 1 | 0 | Using virtualenv I can set the Python version for a specific virtual environment via the -p option. This means I can set different Python versions for different environments. Is there a similar possibility for virtualenvwrapper? Please note I don't want to set a system-wide version used in ALL virtual environments. I would like the flexibility to set the Python version at the virtual-environment level. | Set python version for each virtualenv using virtualenvwrapper | 0 | 0 | 0 | 375 |
43,241,188 | 2017-04-05T20:38:00.000 | 5 | 0 | 1 | 0 | python,virtualenv,virtualenvwrapper | 43,241,448 | 2 | true | 0 | 0 | You should be able to use the -p option when creating a virtualenv using virtualenvwrapper to specify the version: mkvirtualenv -p /usr/bin/python2.7 my_env. | 2 | 1 | 0 | Using virtualenv I can set the python version to a specific virtual environment via th -p option. This means for different environments I can set different python versions. Is there a similar possibility for virtualenvwrapper? Please note I dont want to set a system wide version used in ALL virtual environments. I would like to have the flexibility it to set the python version on virtual environment level. | Set python version for each virtualenv using virtualenvwrapper | 1.2 | 0 | 0 | 375 |
43,245,220 | 2017-04-06T03:35:00.000 | 2 | 0 | 1 | 1 | python,python-3.x,docker,containers,virtualization | 43,245,303 | 1 | false | 0 | 0 | So does Docker share a common Python GIL lock among all containers?
NO.
The GIL is per Python process; a Docker container may have one or many Python processes, each with its own GIL.
If you are not multi-threading, you should not even be aware of the GIL. Are you using threads at all? | 1 | 5 | 0 | When I run a Python script inside a Docker container, it completes one execution loop in ~1 minute. Now as I spin up 2 more containers from same image, and run Python scripts inside them, everything slow down to a crawl and start requiring 5-6 minutes per loop.
None of the scripts are resource bound; there is plenty of RAM and CPU cores sitting around idle. This happens when running 3 containers on a 64-core Xeon Phi system.
So does Docker share a common Python GIL lock among all containers? What are my options to separate the GILs, so each process will run at its full potential speed?
Thank you! | Do Docker containers share a single Python GIL? | 0.379949 | 0 | 0 | 948 |
43,246,011 | 2017-04-06T04:56:00.000 | 0 | 0 | 1 | 0 | python,debugging,pycharm | 43,246,037 | 2 | false | 0 | 0 | No.
Once you power down the computer, you cannot recover its state previous to the power shutdown; as the memory is cleared during a power reboot cycle. | 1 | 0 | 0 | I'm running a python script (which takes about a week) in pycharm debug mode and need to move my PC.
I can pause and restart the script no problem, but can I pause it, shut down and restart the computer and then continue running the script from where I paused it? | Can I shut down and restart pycharm debugger? | 0 | 0 | 0 | 350 |
43,248,083 | 2017-04-06T07:08:00.000 | 1 | 0 | 0 | 0 | javascript,python,uniqueidentifier,distributed-system | 43,288,043 | 2 | false | 1 | 0 | I haven't used Simpleflake itself, but have been using a similar scheme for years, though I use 128 bits instead of 64.
The key ingredient is that most of the bits are random. So even if your libraries choose a slightly different number of bits for the timestamp portion, or a different granularity, the likelihood of collisions is low. Of course, in such cases it lessens the speed improvements in the database.
I imagine that some Simpleflake implementation is "standard" and the other implementations are straight ports---keeping compatibility and characteristics. If not, shame on them for using Simpleflake in their name. | 1 | 3 | 0 | I need to generated unique Ids in distributed manner. Some at server side and another at client side. Server side programming language can be ruby and python while client side is javascript.
I am planning to use simpleflake libraries for the respective languages.
Can I assume that the ids will never collide?
OR they can collide often, due to the implementation details in different packages?
Thanks in advance.
-Amit | Are simpleflake ids generated in different languages consistent? | 0.099668 | 0 | 0 | 317 |
43,248,141 | 2017-04-06T07:11:00.000 | 7 | 0 | 1 | 0 | python,python-3.x,jupyter-notebook | 43,248,254 | 2 | true | 0 | 0 | I assume you are in a Windows environment.
You can run cmd with admin privilege then run jupyter notebook.
Type cmd in the start menu
Right click on Command Prompt and click Run as administrator
Type jupyter notebook in cmd and press enter
Please use this with caution as admin privilege is not something you want to use for every automation. | 1 | 6 | 0 | Because of API I need to run Python with admin privileges.
I can run Anaconda or PyCharm with admin privileges by right-clicking them and choosing to run as administrator.
But how can I run Python in a Jupyter notebook with admin privileges? | How can I run Jupyter notebook with admin privileges? | 1.2 | 0 | 0 | 15,235 |
43,253,823 | 2017-04-06T11:26:00.000 | 1 | 0 | 0 | 0 | node.js,django,python-2.7,oauth-2.0,nodes | 43,254,420 | 2 | false | 1 | 0 | Scenario A
You create a URL in Django matching ^v2/.*$ and then make a request from the Django view to the Node.js process. This way Django can handle user auth and permissions, and Node can be standalone and not know anything about user auth. If you need user data (an ID or something) you can inject it into the request as a header or a cookie.
Scenario B
You dig into the Django REST OAuth implementation, find where tokens are stored in the DB, and on each request you take the OAuth token from the header/cookie and compare it to the one in the DB. You would have to set up nginx as a reverse proxy and route all traffic that matches /v1/.*$ to the Django app, and all traffic that matches /v2/.*/ to the Node app.
Either option is doable, but I would suggest Scenario A. It's easier, quicker and far less error prone. | 2 | 1 | 0 | Hi, I have two processes:
Django and MYSQL
node/express and mongo db.
1. How can I configure these two processes to point to different URLs, e.g. Django pointing to api.abc.com/v1 and Node pointing to api.abc.com/v2?
2. All my user logins are handled in Django and MySQL with OAuth, and I can authenticate the user in Django. But how can I authenticate the user in the Node.js app with the token sent by Django REST OAuth?
Thanks. | Django and Node processes for the same domain | 0.099668 | 0 | 0 | 1,050 |
43,256,137 | 2017-04-06T13:06:00.000 | 0 | 0 | 0 | 0 | python,outlook,openerp,exchange-server,pyexchange | 45,020,552 | 1 | false | 1 | 0 | bulk_update() and bulk_delete() are methods on the Account class, not the Folder class. That's why you're getting the AttributeError. | 1 | 1 | 0 | Which is the good python package/library for integrating MS Exchange in Python.: PyExchange or exchangelib .
I am trying to integrate MS Exchange with Odoo 10; please advise me on the best package to do it. I tried some of the functions of exchangelib, e.g. bulk_create, account.calendar.all(), etc., and they work. But some functions, bulk_update and bulk_delete, are not working.
I am not able to update values from Odoo to the MS Exchange calendar because it shows this error:
AttributeError: 'Calendar' object has no attribute 'bulk_update'
AttributeError: 'Calendar' object has no attribute 'bulk_delete'
please advise | Microsoft Exchange Integration in Python | 0 | 0 | 0 | 310 |
43,260,916 | 2017-04-06T16:32:00.000 | 0 | 0 | 0 | 0 | python,graph,graph-theory,networkx | 43,304,907 | 2 | false | 0 | 0 | Do you use networkx for calculation or visualization?
There is no need to use it for calculation since your model is simple and it is easier to calculate it with matrix (vector) operations. That is suitable for numpy.
The main part of a step is calculating the probability of switching from 0 to 1. Let N be a vector that stores 0 or 1 for each node depending on its state. Then the probability that node n switches from 0 to 1 is numpy.amax(A[n,:] * N) (see the sketch after this entry).
If you need visualization, then there are probably better libraries than networkx. | 1 | 0 | 1 | I have been told the networkx library is the standard Python library for graph-theoretical applications, but I have found using it quite frustrating so far.
What I want to do is this:
Generating an SIS epidemiological network, assigning initial contact rates and recovery rates and then following the progress of the disease.
More precisely, imagine a network of n individuals and an adjacency matrix A. Values of A are in [0,1] range and are contact rates. This means that the (i,j) entry shows the probability that disease is transferred from node i to node j. Initially, each node is assigned a random label, which can be either 1 (for Infective individuals) or 0 (for Susceptible, which are the ones which have not caught the disease yet).
At each time-step, if the node has a 0 label, then with a probability equal to the maximum value of weights for incoming edges to the node, it can turn into a 1. If the node has a 1 label then with a probability specified by its recovery rate, it can turn into a 0. Recovery rate is a value assigned to each node at the beginning of the simulation, and is in [0,1] range.
And while the network evolves in each time step, I want to display the network with each node label coloured differently.
If somebody knows of any other Python library that can do such a thing more efficiently than networkx, I would be grateful if you let me know. | Generating an SIS epidemiological model using Python networkx | 0 | 0 | 0 | 1,096 |
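A minimal numpy sketch of the update step described in the answer above, using the answer's numpy.amax(A[n,:] * N) expression for the infection probability. The network size, contact-rate matrix and rates are made-up illustration values; flip A[i, :] to A[:, i] if your convention is that A[i, j] means transfer from row i to column j.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                                               # number of nodes (illustrative)
A = rng.random((n, n)) * (rng.random((n, n)) < 0.2)  # sparse-ish contact rates in [0, 1]
recovery = rng.random(n)                             # per-node recovery rates
state = (rng.random(n) < 0.1).astype(int)            # 1 = infective, 0 = susceptible

def sis_step(state, A, recovery, rng):
    """One synchronous step of the SIS dynamics."""
    new_state = state.copy()
    for i in range(len(state)):
        if state[i] == 0:
            # Infection probability: max incoming contact rate from infected nodes.
            p_infect = np.amax(A[i, :] * state)
            if rng.random() < p_infect:
                new_state[i] = 1
        elif rng.random() < recovery[i]:
            new_state[i] = 0
    return new_state

for t in range(10):
    state = sis_step(state, A, recovery, rng)
    print(t, int(state.sum()), "infected")
```

For display, the state vector can be mapped to node colours in networkx's drawing functions at each step.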
43,261,953 | 2017-04-06T17:25:00.000 | 1 | 0 | 1 | 0 | python,string | 43,262,034 | 2 | false | 0 | 0 | Use a regex to find all strings that match the pattern of a userid; then you can see if any of them are actual userids. | 1 | 0 | 0 | I have a requirement to find the user from the log.
I have a line of code from my log file. One of the strings in the line is a userid. I have the list of all userid also.
Is there any easy way to identify the user ID mentioned in the line?
Eg: Calling Business function ProcessSOMBFCommitment_Sourcing from F4211FSEditLine for ND9074524. Application Name [P421002], Version [] (BSFNLevel = 3)
Here, ND9074524 is the user ID. My intention is to identify the user from the line.
Other possible userid can be AB9074158, AC9074168, AD9074123, AE9074152
I do not want to loop through all the possible user IDs. I thought of creating a list of all user IDs and finding the one used in the line by some method, but I am not sure such a method exists. | How to find a string among many strings in a line | 0.099668 | 0 | 0 | 41 |
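A small sketch of the regex approach suggested in the answer above. The ID pattern (two uppercase letters followed by seven digits) is inferred from the examples in the question and may need adjusting.

```python
import re

# Known user IDs (taken from the examples in the question).
known_ids = {"ND9074524", "AB9074158", "AC9074168", "AD9074123", "AE9074152"}

line = ("Calling Business function ProcessSOMBFCommitment_Sourcing from "
        "F4211FSEditLine for ND9074524. Application Name [P421002], "
        "Version [] (BSFNLevel = 3)")

# Tokens that look like a user ID: two uppercase letters + seven digits.
candidates = re.findall(r"\b[A-Z]{2}\d{7}\b", line)

# Keep only candidates that really are known user IDs (set lookup, no explicit loop over all IDs).
found = [c for c in candidates if c in known_ids]
print(found)  # ['ND9074524']
```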
43,264,701 | 2017-04-06T19:57:00.000 | 2 | 0 | 0 | 1 | python,mysql,django,redis,celery | 43,264,780 | 1 | false | 0 | 0 | Performance-wise it's probably going to be Redis but performance questions are almost always nuance based.
Redis stores lists of data with no requirement for them to relate to one another, so it is extremely fast when you don't need SQL-style queries against the data it contains (a configuration sketch follows this entry). | 1 | 2 | 0 | Currently I am using Celery to build a scheduled database synchronization feature, which periodically fetches data from multiple databases. If I want to store the task results, would the performance be better if I stored them in Redis instead of an RDBMS like MySQL? | Celery: Is it better to store task results in MySQL or Redis? | 0.379949 | 1 | 0 | 931 |
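A minimal configuration sketch contrasting the two result backends being weighed here; the connection URLs and app name are placeholders.

```python
from celery import Celery

# Redis result backend: in-memory, fast reads/writes, results can auto-expire.
app_redis = Celery(
    "sync_tasks",
    broker="redis://localhost:6379/0",    # placeholder broker URL
    backend="redis://localhost:6379/1",   # placeholder backend URL
)
app_redis.conf.result_expires = 3600      # drop results after an hour (example)

# MySQL (via SQLAlchemy) result backend: durable and queryable with SQL,
# at the cost of slower writes under load.
app_mysql = Celery(
    "sync_tasks",
    broker="redis://localhost:6379/0",
    backend="db+mysql://user:password@localhost/celery_results",  # placeholder DSN
)
```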
43,264,838 | 2017-04-06T20:06:00.000 | 0 | 0 | 0 | 1 | python,django,redis,rabbitmq,celery | 72,343,366 | 2 | false | 0 | 0 | The Redis broker gives tasks to workers in a fair round robin between different queues. Rabbit is FIFO always. For me, a fair round robin was preferable and I tried both. Rabbit seems a tad more stable though. | 2 | 58 | 0 | My rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks?
Does that mean we should use Redis as a message broker IF we are already using it for something else? | Celery: When should you choose Redis as a message broker over RabbitMQ? | 0 | 0 | 0 | 22,467 |
43,264,838 | 2017-04-06T20:06:00.000 | 75 | 0 | 0 | 1 | python,django,redis,rabbitmq,celery | 48,627,555 | 2 | true | 0 | 0 | I've used both recently (2017-2018), and they are both super stable with Celery 4. So your choice can be based on the details of your hosting setup.
If you must use Celery version 2 or version 3, go with RabbitMQ. Otherwise...
If you are using Redis for any other reason, go with Redis
If you are hosting at AWS, go with Redis so that you can use a managed Redis as service
If you hate complicated installs, go with Redis
If you already have RabbitMQ installed, stay with RabbitMQ
In the past, I would have recommended RabbitMQ because it was more stable and easier to setup with Celery than Redis, but I don't believe that's true any more.
Update 2019
AWS now has a managed service that is equivalent to RabbitMQ called Amazon MQ, which could reduce the headache of running this as a service in production. Please comment below if you have any experience with this and celery. | 2 | 58 | 0 | My rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks?
Does that mean we should use Redis as a message broker IF we are already using it for something else? | Celery: When should you choose Redis as a message broker over RabbitMQ? | 1.2 | 0 | 0 | 22,467 |
43,264,959 | 2017-04-06T20:14:00.000 | 0 | 0 | 0 | 0 | python,inverse-transform | 65,609,395 | 3 | false | 0 | 0 | You can use scipy.signal.residuez for the z^-n form of the z-transform, or scipy.signal.residue for the z^n form (a sketch follows this entry). | 1 | 0 | 1 | Is there a way to do inverse z-transforms in Python? (I don’t see anything like this in NumPy or SciPy). | Is there a way to do inverse z-transforms in Python? | 0 | 0 | 0 | 4,204 |
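A small sketch of that partial-fraction route: residuez expands X(z) given numerator/denominator coefficients in powers of z^-1, and each residue/pole pair maps back to an r*p**n term of the time sequence. The example transfer function is made up.

```python
import numpy as np
from scipy import signal

# X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2), coefficients in powers of z^-1
b = [1.0]
a = [1.0, -1.5, 0.5]

r, p, k = signal.residuez(b, a)
print("residues:", r)
print("poles:", p)
print("direct terms:", k)

# Inverse transform term by term: x[n] = sum_i r_i * p_i**n  (causal, n >= 0)
n = np.arange(8)
x = sum(ri * pi**n for ri, pi in zip(r, p)).real

# Cross-check against the impulse response of the same system.
impulse = np.zeros(8)
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)
print(np.allclose(x, h))  # True
```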
43,265,711 | 2017-04-06T21:03:00.000 | 0 | 0 | 1 | 0 | python | 43,265,835 | 1 | false | 0 | 0 | Python documents object.__new__ as receiving the parameters that will eventually be passed to __init__. So if Python is calling your __new__, that's what you're going to get. (Except of course for the first parameter, which will be the class in the call to __new__ and the instance in the call to __init__.)
The result of __new__ is checked. If it is an instance of the class, then __init__ is called.
It seems to me that if __new__ was allocating data, based on the parameters, then it would allocate the wrong data if it was given the wrong parameters.
So I would say that unless you have a specific use case in mind where you know that your parameters don't affect the creation of the object, you should follow the protocol. | 1 | 0 | 0 | I am writing a metaclass and overriding both __new__ and __init__ to have (the same) custom parameters. Must I pass the exact same parameters to type.__new__ and type.__init__ when I call them from the overridden methods?
It would be useful if that was not necessary, because I am in turn inheriting from my metaclass to specialize it further. In these subclasses I am doing most of the work in my __init__ methods. If I had to pass the same parameters to both type.__new__ and type.__init__, I would have to override __new__ in all my subclasses. | Must type.__new__ and type.__init__ be passed the same parameters? | 0 | 0 | 0 | 37 |
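A small sketch of following that protocol in a metaclass, assuming a made-up extra keyword argument: the custom parameter is consumed, and the same standard name/bases/namespace triple is passed on to both type.__new__ and type.__init__.

```python
class Meta(type):
    def __new__(mcls, name, bases, namespace, label=None):
        # Pass only the standard triple up to type.__new__.
        cls = super().__new__(mcls, name, bases, namespace)
        cls._label = label                 # stash the custom parameter ourselves
        return cls

    def __init__(cls, name, bases, namespace, label=None):
        # type.__init__ receives the same triple it would have received anyway.
        super().__init__(name, bases, namespace)

class Widget(metaclass=Meta, label="demo"):
    pass

print(Widget._label)  # 'demo'
```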
43,266,059 | 2017-04-06T21:27:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql,django-models,django-database | 43,267,208 | 2 | false | 1 | 0 | there's always the dump data from django, which is pretty easy to use.
or you could do this manually:
if the 2 databases share the same data (they are mirror one to another) and the same table structure, you could just run a syncdb from django to create the new table structure and then dump and import (i'm assuming you're using mysql, but the general idea is the same) the old database into the new one
if the two databases share different data (still with the same structure) you should import every single row of the two databases: this way, you'll keep relations etc, but you'll have your unique id updated to the new sole db.
if the two databases are different in both data and structure, you'll have to run two sincdb and two imports, but this doesn't seem to be your case | 1 | 2 | 0 | I have a Django project with 5 different PostgreSQL databases. The project was preemptively separated in terms of model routing, but has proven quite problematic so now I'm trying to reverse it. Unfortunately, there's some overlap of empty, migrated tables so pg_dump's out of the question. It looks like django-dumpdb may suit my needs but it doesn't handle per-database export/import. Additionally, Django's dumpdata/loaddata are installing 0 of the records from generated fixtures. Can I have some suggestions as to the least painful way to merge the data? | How to intelligently merge Django databases? | 0.099668 | 1 | 0 | 650 |
43,267,632 | 2017-04-07T00:00:00.000 | 0 | 1 | 0 | 0 | python,bash,bluetooth,rename,overwrite | 43,277,093 | 1 | false | 1 | 0 | Assuming that the mp3 files are all in the same directory, you could perhaps have a cron job running that periodically renames the most recent file, doing something like the one-liner below (a Python sketch also follows this entry):
mv $(ls -1t *.mp3 | head -1) song.mp3
This is a quick example. It would be preferable to put the above in a script and add some "belt and braces" checks to ensure that the script doesn't crash. | 1 | 0 | 0 | As of right now I have a file called song.mp3 that I have integrated into a Python program which will act as an alarm. I would like it so that whenever I send the Raspberry Pi a new song via Bluetooth, it automatically renames this song to song.mp3, thereby overwriting the previous song. That way I don't have to change my alarm program for different songs. Any help? | Automatically overwrite existing file with an incoming file | 0 | 0 | 0 | 141 |
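A Python variant of the same idea, in case it is easier to drop into the existing alarm program than a shell one-liner; the directory path is a placeholder.

```python
import glob
import os
import shutil

MUSIC_DIR = "/home/pi/music"                 # placeholder: wherever Bluetooth drops files
TARGET = os.path.join(MUSIC_DIR, "song.mp3")

def promote_newest_song():
    """Rename the most recently received .mp3 to song.mp3, overwriting the old one."""
    candidates = [p for p in glob.glob(os.path.join(MUSIC_DIR, "*.mp3"))
                  if os.path.abspath(p) != os.path.abspath(TARGET)]
    if not candidates:
        return
    newest = max(candidates, key=os.path.getmtime)
    shutil.move(newest, TARGET)              # replaces an existing song.mp3

if __name__ == "__main__":
    promote_newest_song()                    # run from cron, or call it before the alarm plays
```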
43,268,156 | 2017-04-07T01:17:00.000 | 0 | 0 | 1 | 0 | python,pycharm,xgboost | 70,407,247 | 10 | false | 0 | 0 | If anyone else is installing PyCharm on a Mac and gets the exit code 137 error while running a simple print('test'), it is most likely because of the interpreter path configured in the newly created project.
I believe the error occurs because Python was installed through brew and does not show up under the "Python X.YZ /Library/Frameworks/Python.framework/Versions/X.YZ/bin/pythonX" path.
Workaround: uninstall the Python version installed through brew and then manually install it.
Now it would appear under the interpreter path which is under "Preferences-> Project -> Python Interpreter -> Gear symbol -> add base interpreter" point this to under /Library/Frameworks/.... path | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 0 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 3 | 0 | 1 | 0 | python,pycharm,xgboost | 70,251,987 | 10 | false | 0 | 0 | I've recently run into this error installing PyCharm on an M1 Mac Mini. It was accompanied by an error that said my SDK was invalid upon compilation of a project. It turns out this was due to my Python Interpreter being pointed at a strange directory, I'm not 100% how this happened.
I went to Preferences > Project:yourProject > Python Interpreter and selected a valid SDK from the drop-down (in my case Python 3.8). You'll know the package is valid because it will populate the package list below with packages.
Again, not sure how it happened on install, but this solved it. | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 0.059928 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 0 | 0 | 1 | 0 | python,pycharm,xgboost | 61,272,896 | 10 | false | 0 | 0 | My python process get killed with the 137 error code because my Docker for Windows memory limit was set too low. | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 0 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 5 | 0 | 1 | 0 | python,pycharm,xgboost | 54,504,880 | 10 | false | 0 | 0 | It's not always a memory issue. In my case subprocess.Popen was utilized and it was throwing the error as 137 which looks like signalKILL and the cause is definitely not the memory utilization, because during the runtime it was hardly using 1% of memory use. This seems to be a permission issue after more investigation. I simply moved the scripts from /home/ubuntu to the root directory. | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 0.099668 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 8 | 0 | 1 | 0 | python,pycharm,xgboost | 68,101,107 | 10 | false | 0 | 0 | If you are in Ubuntu, increase the SWAP memory. It will work. Use htop to see SWAP usage, when it is full, it will give error 137. | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 1 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 12 | 0 | 1 | 0 | python,pycharm,xgboost | 59,721,695 | 10 | false | 0 | 0 | {In my experience}
this is because of a memory issue.
When I try to train an ML model using sklearn's fit with the full data set, it abruptly breaks and gives the error below, whereas with a small data set it works fine.
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Interestingly this is not caught in Exception block either | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 1 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | 104 | 0 | 1 | 0 | python,pycharm,xgboost | 50,910,479 | 10 | false | 0 | 0 | Exit code 137 means that your process was killed by (signal 9) SIGKILL . In the case you manually stopped it - there's your answer.
If you didn't manually stop the script and still got this error code, then the script was killed by your OS. In most of the cases, it is caused by excessive memory usage. | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | 1 | 0 | 0 | 119,077 |
43,268,156 | 2017-04-07T01:17:00.000 | -1 | 0 | 1 | 0 | python,pycharm,xgboost | 70,844,300 | 10 | false | 0 | 0 | Click on the gear icon.
Then set the Poetry environment to Python 3.x.
Click Ok and Apply.
Now the code can run without showing any error! | 8 | 87 | 0 | When I stop the script manually in PyCharm, process finished with exit code 137. But I didn't stop the script. Still got the exit code 137. What's the problem?
Python version is 3.6, process finished when running xgboost.train() method. | Process finished with exit code 137 in PyCharm | -0.019997 | 0 | 0 | 119,077 |
43,268,201 | 2017-04-07T01:25:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-3.x,multiprocessing,python-3.4 | 67,351,980 | 3 | false | 0 | 0 | I was throwing a custom exception somewhere in the code, and it was being thrown in most of my processes (in the pool). About 90% of my processes went to sleep because this exception occurred in them. But, instead of getting a normal traceback, I get this cryptic error. Mine was on Linux, though.
To debug this, I removed the pool and ran the code sequentially. | 2 | 15 | 0 | I am running a piece of code using a multiprocessing pool. The code works on a data set and fails on another one. Clearly the issue is data driven - Having said that I am not clear where to begin troubleshooting as the error I receive is the following. Any hints for a starting point would be most helpful. Both sets of data are prepared using the same code - so I don't expect there to be a difference - yet here I am.
Also see comment from Robert - we differ on os, and python version 3.6 (I have 3.4, he has 3.6) and quite different data sets. Yet error is identical down to the lines in the python code.
My suspicions:
there is a memory limit per core that is being enforced.
there is some period of time after which the process literally collects - finds the process is not over and gives up.
Exception in thread Thread-9:
Traceback (most recent call last):
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\threading.py", line 911, in _bootstrap_inner
self.run()
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\threading.py", line 859, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\multiprocessing\pool.py", line 429, in _handle_results
task = get()
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\multiprocessing\connection.py", line 251, in recv
return ForkingPickler.loads(buf.getbuffer())
TypeError: __init__() missing 1 required positional argument: 'message' | TypeError: __init__() missing 1 required positional argument: 'message' using Multiprocessing | 0.132549 | 0 | 0 | 5,810 |
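The traceback above is typical of a worker raising a custom exception that the parent process cannot reconstruct (unpickle) because its __init__ does not pass its arguments to the base class. A hedged sketch of the difference, with made-up exception names:

```python
import pickle

class BadError(Exception):
    def __init__(self, message):
        super().__init__()           # args stays empty, so unpickling calls BadError()
        self.message = message       # -> "__init__() missing 1 required positional argument: 'message'"

class GoodError(Exception):
    def __init__(self, message):
        super().__init__(message)    # message is stored in args and survives pickling
        self.message = message

for exc_cls in (BadError, GoodError):
    try:
        pickle.loads(pickle.dumps(exc_cls("boom")))
        print(exc_cls.__name__, "round-trips fine")
    except TypeError as err:
        print(exc_cls.__name__, "fails:", err)
```

If a third-party library's exception behaves like BadError, catching it inside the worker and re-raising something picklable avoids the cryptic pool-side failure.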
43,268,201 | 2017-04-07T01:25:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-3.x,multiprocessing,python-3.4 | 43,268,953 | 3 | false | 0 | 0 | Thanks to Robert - focusing on lang detect yielded the fact that possibly one of my text entries were empty
LangDetectException: No features in text
rookie mistake - possibly due to encoding errors- re-running after filtering those out - will keep you (Robert) posted. | 2 | 15 | 0 | I am running a piece of code using a multiprocessing pool. The code works on a data set and fails on another one. Clearly the issue is data driven - Having said that I am not clear where to begin troubleshooting as the error I receive is the following. Any hints for a starting point would be most helpful. Both sets of data are prepared using the same code - so I don't expect there to be a difference - yet here I am.
Also see comment from Robert - we differ on os, and python version 3.6 (I have 3.4, he has 3.6) and quite different data sets. Yet error is identical down to the lines in the python code.
My suspicions:
there is a memory limit per core that is being enforced.
there is some period of time after which the process literally collects - finds the process is not over and gives up.
Exception in thread Thread-9:
Traceback (most recent call last):
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\threading.py", line 911, in _bootstrap_inner
self.run()
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\threading.py", line 859, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\multiprocessing\pool.py", line 429, in _handle_results
task = get()
File "C:\Program Files\Python\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\multiprocessing\connection.py", line 251, in recv
return ForkingPickler.loads(buf.getbuffer())
TypeError: __init__() missing 1 required positional argument: 'message' | TypeError: __init__() missing 1 required positional argument: 'message' using Multiprocessing | 0.132549 | 0 | 0 | 5,810 |
43,269,842 | 2017-04-07T04:47:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,performance,api | 43,271,723 | 3 | false | 0 | 0 | From your explanation
We're pulling our data from multiple sources with each user search.
Being directly connected to the scrapers for those sources, we display
the content as each scraper completes content retrieval. I was
originally looking to mimic this in the API, which is obviously quite
different from traditional pagination - hope this clarifies.
So in your API, you want to:
take the query from the user,
initiate the live scrapers, and
get the data back to the user when the scrapers finish the job!
(Correct me if I'm wrong.)
My Answer
This might feel a little complicated, but it is the best approach I can think of.
1. When the user submits the query:
1. Initiate the live scrapers in the Celery queue (take care of the priority).
2. Once the queue is finished, get back to the user with the information you have via sockets (this is how Facebook or any other website sends users notifications). But in your case you will send the resulting HTML data over the socket.
3. Since you will already have the data, moved into the DB as you scraped, you can paginate it like a normal DB.
But this approach gives you a lag of a few seconds or a minute before replying to the user; meanwhile you keep the user busy with something on the UI front. | 2 | 0 | 0 | I've built an API that delivers live data all at once when a user submits a search for content. I'd like to take this API to the next level by delivering the API content to a user as the content is received instead of waiting for all of the data to be received before displaying.
How does one go about this? | Building an API | 0 | 0 | 1 | 92 |
43,269,842 | 2017-04-07T04:47:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,performance,api | 43,269,984 | 3 | false | 0 | 0 | I think the better way is to set a limit in your query. For example, if you have 1000 records in your database, then retrieving all the data at once takes time. So, if a user searches for the word 'apple', you initially send the database request with a limit of 10, and you can add a pagination or scroll feature on your front end. If the user clicks the next page or scrolls your page, you send the database request again with another limit of 10, so the read only fetches that limited amount of data. | 2 | 0 | 0 | I've built an API that delivers live data all at once when a user submits a search for content. I'd like to take this API to the next level by delivering the API content to a user as the content is received instead of waiting for all of the data to be received before displaying.
How does one go about this? | Building an API | 0 | 0 | 1 | 92 |
43,270,149 | 2017-04-07T05:12:00.000 | 0 | 0 | 1 | 0 | python,ios,arrays,anaconda,conda | 49,812,547 | 2 | false | 0 | 0 | Sometimes a restart works. I was also facing the same issue; when I restarted my system, it worked like a charm.
The problem is, every time I try opening it, it literally takes at least 15 minutes to boot up while showing me "Updating metadata..." and subsequently showing me "Updating repodata..." statements.
Would any of you know how to fix or go around this issue?
I'm using a macbook air that has 8gb of RAM and an i5 processor, if that helps. | The Anaconda launcher takes long time to load | 0 | 0 | 0 | 7,621 |
43,270,149 | 2017-04-07T05:12:00.000 | 0 | 0 | 1 | 0 | python,ios,arrays,anaconda,conda | 59,691,347 | 2 | false | 0 | 0 | I started Anaconda Navigator with "Run as Administrator" privileges on my Windows machine, and it worked like a charm. Though it did ask me for Admin credentials for a couple of times while loading different scripts, but the response was <1 min, compared to 6 - 8 mins. earlier.
Search for Anaconda through desktop search or go to Cortana tool on the desktop toolbar and type Anaconda
On the Anaconda icon that shows up, right-click and choose "Run as Administrator"
Provide Admin credentials when prompted
This should hopefully work for Windows 10 users. | 2 | 2 | 0 | I'm new to coding and decided to install Anaconda because I heard it was the most practical platform for beginners.
The problem is, every time I try opening it, it literally takes at least 15 minutes to boot up while showing me "Updating metadata..." and subsequently showing me "Updating repodata..." statements.
Would any of you know how to fix or go around this issue?
I'm using a macbook air that has 8gb of RAM and an i5 processor, if that helps. | The Anaconda launcher takes long time to load | 0 | 0 | 0 | 7,621 |
43,270,820 | 2017-04-07T06:08:00.000 | 98 | 0 | 0 | 1 | python,hadoop,airflow | 43,330,451 | 2 | true | 0 | 0 | In the UI:
Go to the dag, and dag run of the run you want to change
Click on GraphView
Click on task A
Click "Clear"
This will let task A run again, and if it succeeds, task C should run.
This works because when you clear a task's status, the scheduler will treat it as if it hadn't run before for this dag run. | 1 | 54 | 0 | I am using a LocalExecutor and my dag has 3 tasks where task(C) is dependant on task(A). Task(B) and task(A) can run in parallel something like below
A-->C
B
So task(A) has failed but task(B) ran fine. Task(C) is yet to run as task(A) has failed.
My question is: how do I re-run Task(A) alone, so that Task(C) runs once Task(A) completes and the Airflow UI marks them as successful? | How to restart a failed task on Airflow | 1.2 | 0 | 0 | 35,603 |
43,272,664 | 2017-04-07T07:53:00.000 | 126 | 0 | 1 | 0 | python,visual-studio-code,pylint | 45,989,777 | 15 | false | 0 | 0 | Check the path Pylint has been installed to, by typing which pylint on your terminal.
You will get something like: /usr/local/bin/pylint
Copy it.
Go to your Visual Studio Code settings in the preferences tab and find the line that goes
"python.linting.pylintPath": "pylint"
Edit the line to be
"python.linting.pylintPath": "/usr/local/bin/pylint",
replacing the value "pylint" with the path you got from typing which pylint.
Save your changes and reload Visual Studio Code. | 3 | 93 | 0 | I want to run Python code in Microsoft Visual Studio Code but it gives an error:
Linter pylint is not installed
I installed:
The Visual Studio Code Python extension
Python 3
Anaconda
How can I install Pylint? | Error message "Linter pylint is not installed" | 1 | 0 | 0 | 187,350 |
43,272,664 | 2017-04-07T07:53:00.000 | 6 | 0 | 1 | 0 | python,visual-studio-code,pylint | 47,494,232 | 15 | false | 0 | 0 | Try doing this if you're running Visual Studio Code on a Windows machine and getting this error (I'm using Windows 10).
Go to the settings and change the Python path to the location of YOUR python installation.
I.e.,
Change: "python.pythonPath": "python"
To: "python.pythonPath": "C:\\Python36\\python.exe"
And then: Save and reload Visual Studio Code.
Now when you get the prompt telling you that "Linter pylint is not installed", just select the option to 'install pylint'.
Since you've now provided the correct path to your Python installation, the Pylint installation will be successfully completed in the Windows PowerShell Terminal. | 3 | 93 | 0 | I want to run Python code in Microsoft Visual Studio Code but it gives an error:
Linter pylint is not installed
I installed:
The Visual Studio Code Python extension
Python 3
Anaconda
How can I install Pylint? | Error message "Linter pylint is not installed" | 1 | 0 | 0 | 187,350 |
43,272,664 | 2017-04-07T07:53:00.000 | 1 | 0 | 1 | 0 | python,visual-studio-code,pylint | 46,046,078 | 15 | false | 0 | 0 | I had this issue as well and found the error's log regarding permissions or something.
So, I ran Visual Studio Code with administrator privileges and ran "pip install pylint" in the terminal.
Then the error seemed to be fixed.
(I run Visual Studio Code on Windows 10.) | 3 | 93 | 0 | I want to run Python code in Microsoft Visual Studio Code but it gives an error:
Linter pylint is not installed
I installed:
The Visual Studio Code Python extension
Python 3
Anaconda
How can I install Pylint? | Error message "Linter pylint is not installed" | 0.013333 | 0 | 0 | 187,350 |
43,275,431 | 2017-04-07T10:10:00.000 | 1 | 0 | 0 | 0 | python,cntk | 43,362,812 | 2 | true | 0 | 0 | There are the following options:
1) Use distributed learner + training session - then you need to either use ImageDeserializer, or implement your own MinibatchSource (this extensibility only available starting RC2)
2) Use distributed learner + write the training loop yourself. In that case you have to take care of splitting the data (each worker should only read images that correspond to its rank) and all conditions inside the loop should be based on trainer->TotalNumberOfSamples() (i.e. checkpointing if you do any). | 1 | 0 | 1 | I'm training an autoencoder network which needs to read in three images per training sample (one input RGB image, two output RGB images). It was easy to make this work with python and numpy interop and reading the image files in myself.
How can I enable parallel/distributed training with this? Do I have to use the training session construct? Do I have to use the image reader minibatch source with that? | Parallel training with CNTK and numpy interop | 1.2 | 0 | 0 | 194 |
43,275,741 | 2017-04-07T10:25:00.000 | 0 | 0 | 0 | 0 | python,cookies,scrapy | 44,177,895 | 1 | false | 1 | 0 | Facing a similar situation myself. I can get away easily here, but one idea I have is to subclass CookiesMiddleware and then write a method to tweak the jar variable directly. It's dirty, but it may be worth considering.
Another option would be to file a feature request to at least have a function to clear the cookies. That can easily take another year to implement, if it is deemed needed at all; I don't particularly trust the Scrapy devs here.
It just occurred to me that you can use your own cookiejar meta key, and if you want to return to a clean state, you simply use a different value (something like incrementing an integer would do; see the sketch after this entry). | 1 | 2 | 0 | I noticed that I sometimes get blocked while scraping because of a session cookie being used on too many pages.
Is there a way to simply clear all cookies completely during crawling to get back to the initial state of the crawler? | Clear cookies on scrapy completely instead of changing them | 0 | 0 | 1 | 879 |
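A minimal sketch of the cookiejar-meta idea: Scrapy's cookies middleware keeps a separate jar per "cookiejar" meta value, so bumping the value effectively starts over with clean cookies. The URLs and spider details are placeholders.

```python
import scrapy

class RotatingJarSpider(scrapy.Spider):
    name = "rotating_jar"
    start_urls = ["https://example.com/page/1"]   # placeholder

    jar_id = 0

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, meta={"cookiejar": self.jar_id})

    def parse(self, response):
        # ... scrape the page ...
        # To look like a brand-new session from here on, switch to a fresh jar:
        self.jar_id += 1
        yield scrapy.Request(response.urljoin("/page/2"),   # placeholder next URL
                             meta={"cookiejar": self.jar_id},
                             callback=self.parse)
```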
43,276,189 | 2017-04-07T10:46:00.000 | 1 | 0 | 0 | 0 | python,node.js,flask,vesta | 43,281,169 | 1 | false | 1 | 0 | You will have to build them from source yourself or use binaries.
If you build them you will probably need to pass user directory parameters to the build, e.g. you will have to create "/opt", "/lib", "/tmp" and other root folders and point the build to them.
Else just place the binaries in the /bin folder, create it if it doesn't exist, add it to your $PATH and use them directly. | 1 | 0 | 0 | How to run flask (A Python Microframework) on Vesta Panel?
It's not a question specific to Flask; I'm asking about Django, Flask, CherryPy, Sanic, Node.js, StrongLoop, etc.
It looks like a basic question, but Vesta panel is written in PHP and uses Apache and nginx. It's complicated. Python and Node.js use their own sockets. | Flask, Django, NodeJS on Vesta Panel | 0.197375 | 0 | 0 | 708 |
43,276,718 | 2017-04-07T11:11:00.000 | 0 | 0 | 1 | 0 | python,machine-learning,text-classification,malware,malware-detection | 43,302,193 | 1 | false | 0 | 0 | You can use the hash of the lower case of the path, and you can consider only the directory but not the file name, since many malware write random file name, but write to common directories. | 1 | 0 | 0 | I am currently using Dynamic analysis for malware detection. I have list of all the files accessed by malware and benign executable. My aim is to build classifiers on the information extracted through the analysis reports.
As of now I am using the file path string, like c:\hvtqk\modules\packages\reboot.py, as a separate dimension in my classifier. I just want to know if there are any other innovative techniques that can be used to featurize the path strings. | Different Representation of Full file access paths by malware | 0 | 0 | 0 | 62 |
43,287,990 | 2017-04-07T22:31:00.000 | 1 | 0 | 0 | 0 | python,numpy | 43,288,022 | 1 | false | 0 | 0 | Negating a boolean mask array in NumPy is ~mask.
Also, consider whether you actually need where at all. Seemingly the most common use is some_array[np.where(some_mask)], but that's just an unnecessarily wordy and inefficient way to write some_array[some_mask] (see the sketch after this entry). | 1 | 0 | 1 | I get a PEP8 complaint about numpy.where(mask == False) where mask is a boolean array. The PEP8 recommendation is that the comparison should be either 'if condition is false' or 'if not condition'. What is the pythonic syntax for the suggested comparison inside numpy.where()? | Pythonic array indexing with boolean masking array | 0.197375 | 0 | 0 | 905 |
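A tiny illustration of both points:

```python
import numpy as np

values = np.array([3, -1, 7, 0, 5])
mask = values > 0                # boolean mask

print(values[mask])              # boolean indexing, no np.where needed: [3 7 5]
print(values[~mask])             # negate with ~mask instead of mask == False: [-1  0]
```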
43,288,566 | 2017-04-07T23:42:00.000 | 8 | 0 | 1 | 0 | python-3.x,debugging,pycharm,breakpoints | 45,306,358 | 1 | false | 0 | 0 | I'm not sure if this answers your question but you can set a breakpoint on the line of code you want to break at, right click on that break point once it is set and then apply a condition.
An example of such a condition could be:
x > 5
Once you reach the point in your loop/code where this condition is true, i.e. when x = 6, it will break and you can inspect all the current values/state of your code (a sketch follows this entry).
Hope this helps | 1 | 5 | 0 | I have a big dictionary and some of the elements occasionally end up with illegal values. I want to figure out where the illegal values are coming from. PyCharm should constantly monitor the values of my dictionary, and the moment any of them take the illegal value, it should break and let me inspect the state of the program.
I know I can do this by just creating a getter/setter for my dictionary instead of accessing it directly, and then break inside the setter with an appropriate condition.
Is there a way to do it without modifying my code? | How can I make PyCharm break when a variable takes a certain value? | 1 | 0 | 0 | 1,602 |
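One way to phrase that condition for the dictionary case in the question, assuming a hypothetical is_legal() check; PyCharm evaluates the condition in the frame where the breakpoint is set, so the names used must be visible there.

```python
# Hypothetical validity rule for the dictionary's values (illustration only).
def is_legal(value):
    return value is not None and value >= 0

# Paste this expression into the breakpoint's "Condition" field in PyCharm:
#     any(not is_legal(v) for v in my_dict.values())
# The debugger then stops only when some entry has taken an illegal value.

if __name__ == "__main__":
    my_dict = {"a": 1, "b": 2}
    for step in range(5):
        my_dict["b"] -= 1            # put the conditional breakpoint on this line
        print(step, my_dict)
```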
43,293,738 | 2017-04-08T11:51:00.000 | 0 | 0 | 0 | 0 | android,python,mobile,input,user-input | 46,112,131 | 2 | false | 0 | 1 | the sololearn playground allows multiple inputs
enter each input on a new line in the input prompt
infinite loop works not though | 1 | 1 | 0 | I have a program of tic tac toe in Python that I wish to post on SoloLearn, which is a mobile application that runs code in different languages (i.e. Java, C++, Python, etc.)
However, even though my program requires multiple inputs, from hopefully two users, it ends after the first input.
The prompt used is what I see too often on these type of apps, It states that the program requires input but you must enter all of the input somehow all at once.
Is there any way to obtain multiple user inputs from mobile apps that run such code. | Obtain multiple user inputs from mobile apps that run code like SoloLearn? | 0 | 0 | 0 | 101 |
43,293,819 | 2017-04-08T11:59:00.000 | 2 | 0 | 1 | 0 | python,collections,abstract-class | 43,293,878 | 1 | true | 0 | 0 | So what would the implementation of the concrete methods be?
The point of these classes is not to give you yet another list object. They exist to communicate what methods a class would need to implement to adhere to the given protocol.
For the container ABCs, they are not containers themselves; they don't actually hold anything. So you can't provide a concrete __getitem__ method for a Sequence; there is no internal state. And providing an implementation that uses a _list attribute would dictate how a subclass should implement this, but a proxy class, just to name an example, would not have an internal sequence state.
Only methods that can be expressed in terms of other methods, such as __contains__ (return True if __getitem__ doesn't raise an exception) or __iter__ (use an increasing index and produce the result of __getitem__ until it raises an exception), have a concrete implementation for subclassing convenience (see the sketch after this entry). | 1 | 0 | 0 | The collections library provides abstract classes and their subclasses such as MutableSequence and its superclass Sequence.
Why are abstract methods needed in these subclasses, given that classes inheriting from them are then forced to define them? Why can't concrete methods be used instead? | Purpose of Abstract Methods in collections.abc | 1.2 | 0 | 0 | 234 |
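A small sketch of how that split plays out: the subclass supplies only the abstract methods (the state), and the concrete mixin methods come for free because they are written purely in terms of those.

```python
from collections.abc import Sequence

class Squares(Sequence):
    """Read-only sequence of the first n squares; no internal list required."""
    def __init__(self, n):
        self._n = n

    def __getitem__(self, index):       # abstract: only we know the "state"
        if not 0 <= index < self._n:
            raise IndexError(index)
        return index * index

    def __len__(self):                  # abstract: only we know the size
        return self._n

s = Squares(5)
print(list(s))      # [0, 1, 4, 9, 16]  -- __iter__ is a free mixin
print(16 in s)      # True              -- __contains__ is a free mixin
print(s.index(9))   # 3                 -- so is index()
```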
43,295,149 | 2017-04-08T14:13:00.000 | 0 | 0 | 0 | 0 | python-3.x,tkinter | 43,295,456 | 1 | false | 0 | 0 | No, it is not possible to have a transparent canvas.
If all you need is a crosshair, just draw one on the canvas. You can hide it temporarily while you zoom so that it doesn't get zoomed along with everything else. | 1 | 0 | 0 | To be quite specific in what I'm wanting to do...
I have a scrollable map on a canvas. I have zoom in/out capabilities. I want to place a cross hair/bullseye on top of the map so I know where I am going to be zooming in/out with high accuracy. I realize I could be more flexible and do it by locating the mouse pointer position and just have it zoom in/out based on where the mouse is at but I think given the magnitude of the project and the way it keeps evolving I better plan otherwise.
I'm thinking I would have to have two canvases on the screen to pull off what I'm wanting to do. That shouldn't be a problem. The problem...is it possible to make the top canvas trans????, is that parent or lucent...aka see-thru(I can never remember which is which, LOL:)) while still being able to see the cross hairs placed on the center of the top canvas. I don't think it could be done with only one canvas but I might be wrong.
Yes, this is a bit of a tricky question. | Two canvases on top of each with both 'visible' at the same time | 0 | 0 | 0 | 86 |
43,299,688 | 2017-04-08T19:51:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 62,831,415 | 4 | false | 1 | 0 | This fixed it for me:
If you're on windows 10, find or create a random html-file on your system.
Right click the html-file
Open with
Choose another app
Select your browser (e.g Google Chrome) and check the box "Always use this app to open .html"
Now attempt to use view(response) in the Scrapy shell again and it should work. | 1 | 2 | 0 | How do I change the browser used by the view(response) command in the scrapy shell? It defaults to safari on my machine but I'd like it to use chrome as the development tools in chrome are better. | How do I change the browser used by the scrapy view command? | 0 | 0 | 1 | 1,418 |
43,304,612 | 2017-04-09T08:24:00.000 | 0 | 0 | 1 | 0 | python,pip | 48,089,054 | 13 | false | 0 | 0 | I just successfully installed a package for Excel. After installing Python 3.6, you have to download the desired package, then install it.
For example,
python.exe -m pip download openpyxl==2.1.4
python.exe -m pip install openpyxl==2.1.4 | 4 | 106 | 0 | I'm trying to Install PIP for python 3.6 and I've looked over YouTube for tutorials but all of them seem to be out of date and none of them have seemed to work. Any information would be helpful so I can carry on with my project. | How to install PIP on Python 3.6? | 0 | 0 | 0 | 328,795 |
43,304,612 | 2017-04-09T08:24:00.000 | 1 | 0 | 1 | 0 | python,pip | 49,318,883 | 13 | false | 0 | 0 | There are situations where pip doesn't get installed along with the Python installation. Even your whole Scripts folder can be empty.
You can install it manually as well.
Just open a Command Prompt, type python -m ensurepip --default-pip and press Enter.
Make sure that the value of the PATH variable is updated.
This will do the trick.
43,304,612 | 2017-04-09T08:24:00.000 | 1 | 0 | 1 | 0 | python,pip | 45,854,537 | 13 | false | 0 | 0 | There is an issue with downloading and installing Python 3.6. Unchecking pip in the installation prevents the issue. So pip is not given in every installation. | 4 | 106 | 0 | I'm trying to Install PIP for python 3.6 and I've looked over YouTube for tutorials but all of them seem to be out of date and none of them have seemed to work. Any information would be helpful so I can carry on with my project. | How to install PIP on Python 3.6? | 0.015383 | 0 | 0 | 328,795 |
43,304,612 | 2017-04-09T08:24:00.000 | 10 | 0 | 1 | 0 | python,pip | 43,304,827 | 13 | false | 0 | 0 | pip is included in Python installation. If you can't call pip.exe try calling python -m pip [args] from cmd | 4 | 106 | 0 | I'm trying to Install PIP for python 3.6 and I've looked over YouTube for tutorials but all of them seem to be out of date and none of them have seemed to work. Any information would be helpful so I can carry on with my project. | How to install PIP on Python 3.6? | 1 | 0 | 0 | 328,795 |
43,306,222 | 2017-04-09T11:35:00.000 | 0 | 0 | 1 | 0 | python-2.7,python-multithreading,raw-input | 43,306,536 | 1 | true | 0 | 0 | OK, so I figured out that the threading module does not actually run the threads in parallel because of a mechanism called the GIL. My solution is to use multiprocessing instead. It works fine. Hope it helps someone. | 1 | 0 | 0 | I am using Python 2.7 with the threading module. I have a countdown of 24 hours in one thread, while the other thread takes user input using raw_input.
When my program runs, the countdown thread waits for the user input to be entered, and only then does the countdown continue. In the first place, my reason for using threading is to have both threads run at the same time. I just can't understand why one thread would wait for the input of another one, and how do I fix that?
Thanks in advance! | Python thread stuck while another thread waiting for user input | 1.2 | 0 | 0 | 410 |
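A minimal sketch of the multiprocessing approach the answer settles on (Python 2.7 as in the question; the countdown length and prompts are illustrative): the countdown runs in its own process, so raw_input() in the main process cannot stall it.
import time
from multiprocessing import Process

def countdown(seconds):
    while seconds > 0:
        time.sleep(1)
        seconds -= 1
    print 'countdown finished'

if __name__ == '__main__':
    p = Process(target=countdown, args=(24 * 60 * 60,))
    p.start()                                  # runs independently of this process
    answer = raw_input('enter something: ')    # blocking input, countdown keeps going
    print 'you typed:', answer
    p.join()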
43,310,597 | 2017-04-09T18:51:00.000 | 0 | 0 | 0 | 0 | python,apache-spark,cassandra,pyspark | 43,311,283 | 2 | false | 0 | 0 | I'll just give my "short" 2 cents. The official docs are totally fine for you to get started. You might want to specify why this isn't working, i.e. did you run out of memory (perhaps you just need to increase the "driver" memory) or is there some specific error that is causing your example not to work. Also it would be nice if you provided that example.
Here are some of my opinions/experiences. Usually, not always, but most of the time you have multiple columns in a partition. You don't always have to load all the data in a table, and more or less you can keep the processing (most of the time) within a single partition. Since the data is sorted within a partition, this usually goes pretty fast, and it didn't present any significant problems.
If you don't want the whole store-in-Cassandra-then-fetch-to-Spark cycle to do your processing, there are really a lot of solutions out there. Basically that would be Quora material. Here are some of the more common ones:
Do the processing in your application right away - this might require some sort of inter-instance communication framework like Hazelcast or, even better, Akka Cluster; this is really a wide topic
Spark Streaming - just do your processing right away in micro-batches and flush the results for reading to some persistence layer, which might be Cassandra
Apache Flink - use a proper streaming solution and periodically flush the state of the process to e.g. Cassandra
Store data in Cassandra the way it's supposed to be read - this approach is the most advisable (just hard to say more with the info you provided)
The list could go on and on ... user-defined functions in Cassandra, or aggregate functions if your task is something simpler.
It might also be a good idea to provide some details about your use case. More or less, what I said here is pretty general and vague, but then again putting this all into a comment just wouldn't make sense. | 1 | 3 | 1 | I have huge data stored in Cassandra and I wanted to process it using Spark through Python.
I just wanted to know how to interconnect Spark and Cassandra through Python.
I have seen people using sc.cassandraTable, but it isn't working, and fetching all the data at once from Cassandra and then feeding it to Spark doesn't make sense.
Any suggestions? | Spark and Cassandra through Python | 0 | 0 | 0 | 1,860 |
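A sketch of one common way to do the interconnect with the DataStax spark-cassandra-connector and the DataFrame API (the keyspace/table names, host and connector version below are placeholders, not from the original post); the connector has to be supplied at submit time, e.g. spark-submit --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.1 --conf spark.cassandra.connection.host=127.0.0.1 my_script.py:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('cassandra-example').getOrCreate()

df = (spark.read
      .format('org.apache.spark.sql.cassandra')
      .options(keyspace='my_keyspace', table='my_table')
      .load())

# filters on partition key columns are pushed down to Cassandra by the connector,
# so this does not pull the whole table into Spark first
df.filter(df.some_partition_key == 'some_value').show()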
43,314,517 | 2017-04-10T03:23:00.000 | -3 | 0 | 1 | 1 | python,cmd | 43,314,666 | 5 | false | 0 | 0 | For installing multiple packages on the command line, just pass them as a space-delimited list, e.g.:
pip install numpy pandas | 1 | 6 | 0 | I know how to install *.whl files through cmd (the code is simply python -m pip install *so-and-so-.whl). But since I accidentally deleted my OS and had no backups I found myself in the predicament to reinstall all of my whl files for my work.
This comes up to around 50 files. I can do this manually which is pretty simple, but I was wondering how to do this in a single line. I can't seem to find anything that would allow me to simply type in python -m pip install *so-and-so.whl to find all of the whl files in the directory and install them.
Any ideas? | How to install multiple whl files in cmd | -0.119427 | 0 | 0 | 9,927 |
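For the actual question (installing every .whl in a directory in one go), a small sketch that is not part of the answer above: glob the wheel files and hand them all to pip in a single call, so pip can resolve any dependencies between them.
import glob
import subprocess
import sys

wheels = glob.glob('*.whl')          # all wheel files in the current directory
if wheels:
    subprocess.check_call([sys.executable, '-m', 'pip', 'install'] + wheels)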
43,314,679 | 2017-04-10T03:47:00.000 | 1 | 0 | 1 | 0 | python-3.x,binary,exe | 43,314,754 | 2 | false | 0 | 0 | A savvy user who has this .exe version could extract the .pyc (byte code) and then break that down using a Python decompiler like Uncompyle to get it pretty much back to source code. Thus there is a way (and a chance) of the Python source code (or something close to it) being extracted from your .exe version. | 1 | 0 | 0 | I am new to Python and need to compile it into an .exe version. My question is: when a py script is compiled into an .exe, does it mean it can't be decompiled anymore?
Our goal is to make the Python scripts safe when deploying to client servers, as we don't want them to get our source code from the .exe.
43,317,029 | 2017-04-10T07:20:00.000 | 2 | 0 | 1 | 0 | python,windows,portable-applications | 43,376,024 | 1 | true | 0 | 0 | OK. If anyone is interested about how I solved this topic:
I had to perform winPython's 'Register Distribution' process on the client machine to achieve what I wanted. It can be accessed from the advanced menu option from the WinPython Control Panel that is distributed with the WinPython package.
(This somewhat 'registers' the distributed python exe to the clients computer - which enables all .py files in that computer to be interpreted by this new python exe delivered with the winPython package.)
I had to tell my client to perform this action before he could run the py files with a double click. | 1 | 1 | 0 | I want to distribute my python project to others so that they can run it without installing python 3.4 on their PCs (windows)
I have downloaded and extracted both the WinPython-32bit-3.4.4.6Qt5 and WinPython-64bit-3.4.4.6Qt5 packages - and with the help of Spyder - I ran my Python scripts without any problem (on 2 different machines with 32 and 64 bit Windows). (I copied my Python scripts into the settings/.spyder-py3 folder - and opened and ran them from Spyder.) This worked OK even on machines where Python is not installed.
However, when I double click on the python script (with a .py extension) - I see that it does not start running automatically. Windows is asking me to select the program that shall run/open the file. This is happening both on win7 and win8.
This seems strange - because last night it worked OK on the machine with a different version of winPython (py version 3.5). Today even that version is not working.
Cannot find any advice or suggestions regarding this on WinPython documentation or in any other place on the net.
What am I doing wrong? Aren't the .py scripts supposed to run without Spyder being invoked first?
Any help shall be greatly appreciated. | WinPython Cannot run .py files directly (without spyder) | 1.2 | 0 | 0 | 2,558 |
43,317,119 | 2017-04-10T07:25:00.000 | 0 | 0 | 0 | 0 | python,python-import,importerror | 45,893,216 | 3 | false | 0 | 0 | Actually, there is no problem. Just need to restart the Jupyter and you will see it is working well. | 2 | 1 | 1 | I keep getting this error whenever I try running this code, if I can get some insight in what is going on that would be a great help, since I'm pretty new to this coding environment I would really appreciate some help. The code is this:
File "C:\Users\user\Desktop\Python\pythonsimulation.py", line 6, in
from scipy import *
File "C:\Python34\lib\site-packages\scipy__init__.py", line 61, in
from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl
ImportError: cannot import name 'NUMPY_MKL'
Am I missing some package or module that python requires in order to run the code? | Python ImportError - NUMPY MKL | 0 | 0 | 0 | 5,133 |
43,317,119 | 2017-04-10T07:25:00.000 | 0 | 0 | 0 | 0 | python,python-import,importerror | 45,571,159 | 3 | false | 0 | 0 | If you are using Jupyter, try restarting the kernel. please click Restart in Kernel menu. | 2 | 1 | 1 | I keep getting this error whenever I try running this code, if I can get some insight in what is going on that would be a great help, since I'm pretty new to this coding environment I would really appreciate some help. The code is this:
File "C:\Users\user\Desktop\Python\pythonsimulation.py", line 6, in
from scipy import *
File "C:\Python34\lib\site-packages\scipy__init__.py", line 61, in
from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl
ImportError: cannot import name 'NUMPY_MKL'
Am I missing some package or module that python requires in order to run the code? | Python ImportError - NUMPY MKL | 0 | 0 | 0 | 5,133 |
43,321,000 | 2017-04-10T10:38:00.000 | 0 | 0 | 1 | 0 | python,ipython-notebook,jupyter-notebook | 69,334,471 | 5 | false | 0 | 0 | This solution worked for me though. Type the below commands in this order.
%qtconsole
%connect_info
Restart the Kernel and clear all outputs.
These steps worked for me. | 1 | 30 | 0 | I would like to be able to fiddle around in the environment using a console in a Jupyter notebook. Adding an additional cell means that I always have to scroll to the very bottom or create new cells wherever I want a 'console-like' text field. Is it possible to have a permanent console window, e.g. at the bottom of the window?
Thanks! | Is it possible to show a console in a Jupyter notebook? | 0 | 0 | 0 | 31,404 |
43,321,447 | 2017-04-10T11:01:00.000 | 2 | 0 | 0 | 0 | python,django,django-models,django-model-utils | 43,321,849 | 4 | false | 1 | 0 | You should create a ForeignKey field between the user model and the voting model. To allow only a single vote per user per movie, you might want to create a unique key constraint on the voting model over the user id and the movie id.
The vote should then contain a relation to the movie and a rating. If a user withdraws his vote, you remove it from the database.
Using a dictionary will lead to problems for you as you will have multiple movies.
To increase the speed and performance counting the votes you might want to take a look at caching and simply cache the number of votes and update the number every time a vote was added/withdrawn by a user on the specific movie. | 1 | 1 | 0 | Suppose, I am making a movie rating app. A logged-in user should be able to rate the movie with stars (in the range 1 to 5).
I want to quickly access all the rater's name along with their rating.
If a user rates the movie again, the rating should be updated. At the same time, if a user decides to withdraw his rating i.e. provide zero rating, I would like to remove the entry from the field.
I believe dictionary would be the best choice to achieve the same. However, I am open for suggestions.
I also want a user to see all the movies that he/she has rated along with the rating. | Is there a way to have dictionary-like field in Django model? | 0.099668 | 0 | 0 | 66
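A minimal sketch of the model layout described in the answer above (app and field names are illustrative, not from the original post):
from django.conf import settings
from django.db import models

class Movie(models.Model):
    title = models.CharField(max_length=200)

class Vote(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    movie = models.ForeignKey(Movie, on_delete=models.CASCADE, related_name='votes')
    rating = models.PositiveSmallIntegerField()  # 1..5

    class Meta:
        unique_together = ('user', 'movie')  # one vote per user per movie
All raters of a movie with their ratings can then be read with movie.votes.values_list('user__username', 'rating'), and a withdrawn (zero) rating is handled by simply deleting the Vote row.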
43,322,038 | 2017-04-10T11:31:00.000 | 0 | 0 | 0 | 0 | python,selenium,ubuntu,tor | 64,649,803 | 3 | false | 0 | 0 | To see your TorBrowser path and binary open Tor and under the three stripe menu on the top right go Help>Troubleshooting Information | 1 | 3 | 0 | How can I install the tor browser to make it useable in Python using Selenium?
I have tried sudo apt-get install tor-browser, but I don't know where it gets installed, hence what to put in the PATH variable (or in executable-path).
My goal is to
install Tor browser
open Tor Browser with Python Selenium
go to a website. | Ubuntu: Install tor browser & use it with Selenium Python | 0 | 0 | 1 | 2,433 |
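A rough sketch only (the path below is illustrative and not from the original post): once the Tor Browser path is known, Selenium's Firefox driver can be pointed at its bundled firefox binary. In practice the tor process has to be running and extra profile/proxy settings are often needed, so treat this as a starting point rather than a complete recipe.
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

binary = FirefoxBinary('/home/user/tor-browser_en-US/Browser/firefox')  # assumed path
driver = webdriver.Firefox(firefox_binary=binary)
driver.get('https://check.torproject.org')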
43,324,537 | 2017-04-10T13:29:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 43,325,932 | 1 | false | 0 | 0 | Try this out, this will replace the dash with a dash and whitespace. Its basic but it works :)
textfile = "John -Doe0001.txt"
textfile = textfile.replace('-','- ')
x = open(textfile, 'w') | 1 | 0 | 0 | I have a very large repository of .txt documents with the same naming scheme. I would like to write a simple Python script to add a space at a particular point in the filename.
Current Name Scheme:
John -Doe0001.txt
John -Doe0002.txt
John -Doe0003.txt
John -Doe0004.txt
Expected Outcome After Python Script:
John - Doe0001.txt
John - Doe0002.txt
John - Doe0003.txt
John - Doe0004.txt
Any suggestions? | Adding a white space within a filename using Python | 0 | 0 | 0 | 27 |
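To apply the rename across the whole repository of .txt files, a small sketch (the folder path is illustrative); the guard skips files that already contain "- " so the script can be re-run safely:
import os

folder = r'C:\path\to\repository'
for name in os.listdir(folder):
    if name.endswith('.txt') and '-' in name and '- ' not in name:
        new_name = name.replace('-', '- ')
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))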
43,324,788 | 2017-04-10T13:39:00.000 | 2 | 0 | 0 | 0 | python,tensorflow | 50,902,599 | 4 | false | 0 | 0 | Don't run this on your desktop, but for HPC/remote machines with no display, this kills all left over GPU-using processes:
nvidia-smi -q -d PIDS | grep -P "Process ID +: [0-9]+" | grep -Po "[0-9]+" | xargs kill -9 | 1 | 3 | 1 | Whenever I run a python script that uses tensorflow and for some reason decide to kill it before it finishes, there is the problem that ctrl-c doesn't work. I would use ctrl-z but it doesn't release the gpu memory, so when i try to re-run the script there is no memory left. Is there a solution for this in linux? | How to simply kill python-tensorflow process and release memory? | 0.099668 | 0 | 0 | 6,349 |
43,326,405 | 2017-04-10T14:51:00.000 | 0 | 0 | 1 | 0 | python,excel,vba,xlwings | 43,326,946 | 1 | false | 0 | 0 | It seems that xlwings' RunPython VBA macro will start a new Python process each time it is called. This means that you cannot use global variables inside Python to share information between calls.
You could keep the data in the Excel file, for example in an extra sheet that you read and write from your Python script. Otherwise you will need to use a different data persistence solution, for example a separate file or a database. | 1 | 3 | 0 | I am working on a project using xlwings. I have a question: can I set global dynamic variables like a pandas data frame or dict, list etc. to stay alive in memory? Currently I found that between different RunPython VBA calls, the data frame seems to be lost. Does anyone have an idea? Or do you have any recommendations on other plugins which can do that? Thank you very much.
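A sketch of the "extra sheet" idea from the answer above (the sheet name 'cache' and cell address are assumptions, not from the original post); every RunPython call re-reads whatever state it needs from the workbook instead of relying on Python globals:
import xlwings as xw

def save_state(value):
    wb = xw.Book.caller()                 # the workbook that triggered RunPython
    wb.sheets['cache'].range('A1').value = value

def load_state():
    wb = xw.Book.caller()
    return wb.sheets['cache'].range('A1').value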
43,327,551 | 2017-04-10T15:43:00.000 | 0 | 1 | 0 | 0 | python,selenium,webdriver | 43,327,713 | 2 | false | 0 | 0 | Preferably you would have each test be able to run in isolation. If you have a way to create the deal through an API or Database rather than creating one through the UI, you could call that for each test. And, if possible, also clean up that data after your test runs.
If this is not possible, you could also record some data from a test in a database, xml, or json file. Then your following tests could read in that data to get what it needs to run the test. In this case it would be some reference to your financial deal.
The 2nd option is not ideal, but might be appropriate in some cases. | 2 | 1 | 0 | I'm a newbie in automation testing.
Currently I am doing manual testing and trying to automate the process with Selenium Webdriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will be testing different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or maybe there is a better way to do this?
I deeply appreciate any advice!!! Thank you! | How to run all the tests on the same deal. Selenium Webdriver + Python | 0 | 0 | 1 | 284
43,327,551 | 2017-04-10T15:43:00.000 | 0 | 1 | 0 | 0 | python,selenium,webdriver | 43,328,191 | 2 | false | 0 | 0 | There's a couple of approaches here that might help, and some of it depends on if you're using a framework, or just building from scratch using the selenium api.
Use setup and teardown methods at the suite or test level.
This is probably the easiest method, and close to what you asked in your post. Every framework I've worked in supports some sort of setup and teardown method out of the box, and even if it doesn't, they're not hard to write. In your case, you've got a script that calls each of the test cases, so just add a before() method at the beginning of the suite that creates the financial deal you're working on.
If you'd like a new deal made for each individual test, just put the before() method in the parent class of each test case so they inherit and run it with every case.
Use Custom Test Data
This is probably the better way to do this, but it assumes you have db access or a good relationship with your DBM. You generally don't want the success of one test case to rely on the success of another (which is what the first answer meant by isolation). If the creation of the document fails in some way, every single test downstream of that will fail as well, even though they're testing a different feature that might be working. This results in a lot of lost coverage.
So, instead of creating a new financial document every time, speak to your DBM and see if it's possible to create a set of test data that either sits in your test db or is inserted at the beginning of the test suite.
This way you have 1 test that tests document creation, and X tests that verify its functionality based on the test data, and those tests do not rely on each other. | 2 | 1 | 0 | I'm a newbie in automation testing.
Currently I am doing manual testing and trying to automate the process with Selenium Webdriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will be testing different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or maybe there is a better way to do this?
I deeply appreciate any advice!!! Thank you! | How to run all the tests on the same deal. Selenium Webdriver + Python | 0 | 0 | 1 | 284
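A minimal sketch of the first option (suite-level setup/teardown) with unittest; the way the deal is actually created is application-specific and only hinted at in a comment:
import unittest
from selenium import webdriver

class DealTestsBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        # ... drive the UI (or call an API) here to create the financial deal once ...
        cls.deal_url = cls.driver.current_url   # remembered and reused by every test

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

class TestDealDetails(DealTestsBase):
    def test_deal_page_opens(self):
        self.driver.get(self.deal_url)          # same deal for every test in the suite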
43,327,583 | 2017-04-10T15:44:00.000 | 0 | 0 | 1 | 0 | python-3.x | 43,328,487 | 1 | true | 0 | 0 | You are using wrong syntax for print.
In Python 3.x, print requires its arguments to be enclosed in parentheses.
print('%s got :%s expected :%s' % (prefix, repr(got), repr(expected))) | 1 | 0 | 0 | Edit: Nevermind Just figured it out | Syntax error in google Python guide with Py 3 | 1.2 | 0 | 0 | 475 |
43,328,064 | 2017-04-10T16:09:00.000 | 1 | 0 | 1 | 0 | python,pyqt,spyder | 54,673,251 | 2 | false | 0 | 1 | I had a similar problem and found that my application only worked when the graphics settings inside Spyder are set to inline. This can be done at Tools -> Preferences -> IPython console -> Graphics, now change the Backends to inline.
Hope this helps. | 1 | 3 | 0 | I am working for the first time towards the implementation of a very simple GUI in PyQt5, which embeds a matplotlib plot and few buttons for interaction.
I do not really know how to work with classes so I'm making a lot of mistakes, i.e. even if the functionality is simple, I have to iterate a lot between small corrections and verification.
For some reason I would like to debug, however, the whole process is made much, much slower by the fact that at any other try, the python kernel dies and it needs restarting (all done automatically) several times.
That is, every time I try something that should last maybe 5 secs, I end up spending a minute.
Anybody know where to look to spot what is causing these constant death/rebirth circles?
I have been using spyder for some time now and I never experienced this behaviour before, so I'm drawn to think it might have to do with PyQt, but that's about how far I can go. | Spyder + Python 3.5 - how to debug kernel died, restarting? | 0.099668 | 0 | 0 | 14,340 |
43,331,510 | 2017-04-10T19:29:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,scikit-learn,svm,k-means | 43,386,604 | 2 | false | 0 | 0 | My Solution:-
Manual Processing:-
If the size of your dataset is small, you can manually create the vector data (this is also reliable when it is created by yourself). If not, it is much more difficult to apply SVM to classify the images.
Automatic Processing:-
Step 1:-
You can use "Unsupervised Image Clustering" technique to group your images into those 4 categories, then label the images from 1 to 4 after clustering is done. (eg. K-Means Clustering Algorithm)
Step 2:-
You now have a dataset of labeled images. Split it into train and test data.
Step 3:-
Now apply SVM to classify your test images and find out your model accuracy. | 1 | 3 | 1 | I am using scikit-learn library to perform a supervised classification (Support Vector Machine classifier) on a satellite image. My main issue is how to train my SVM classifier. I have watched many videos on youtube and have read a few tutorials on how to train an SVM model in scikit-learn. All the tutorials I have watched, they used the famous Iris datasets. In order to perform a supervised SVM classification in scikit-learn we need to have labels. For Iris datasets we have the Iris.target which is the labels ('setosa', 'versicolor', 'virginica') we are trying to predict. The procedure of training is straightforward by reading the scikit-learn documentation.
In my case, I have to train a SAR satellite image captured over an urban area and I need to classify the urban area, roads, river and vegetation (4 classes). This image has two bands but I do not have label data for each class I am trying to predict such as the Iris data.
So, my question is, do I have to manually create vector data (for the 4 classes) in order to train the SVM model? Is there an easier way to train the model than manually creating vector data? What do we do in this case?
I am bit confused to be honest. I would appreciate any help | How to train an SVM classifier on a satellite image using Python | 0.099668 | 0 | 0 | 6,775 |
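A sketch of the three steps above with scikit-learn; X is assumed here to be an (n_pixels, n_bands) feature array built from the two SAR bands (the random placeholder just keeps the snippet self-contained):
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.random.rand(1000, 2)  # placeholder for the real per-pixel band values

# Step 1: unsupervised labels for the 4 classes
labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)

# Step 2: split the now-labelled data
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

# Step 3: train the SVM and check its accuracy
clf = SVC(kernel='rbf').fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))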
43,331,589 | 2017-04-10T19:34:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,installshield,python-module | 43,726,757 | 1 | true | 0 | 0 | I never received an answer here, so I forged ahead on my own.
The Windows Python 2.7.13 installation includes pip and setuptools by default. That fact allowed me to switch from .exe module installers to wheel (.whl) installers. Since we have no Internet connection, I couldn't use a whl with unmet dependencies, but thankfully none of the modules I needed fell into that category. Once Python itself is installed, each pip installation is triggered right from the InstallShield code via LaunchAppAndWait().
The only "gotcha" was that the pywin32 module has a post-install script that must be run after the install by pip. That was handled automatically with the exe installer, so I didn't even know about it unless things went initially wrong with the whl install. | 1 | 2 | 0 | We have an existing InstallShield installer which installs the following:
Our product
Python 2.7.13 via the official Windows exe installer
3 python modules (pywin32, psycopg, and setuptools) via exe installers
2 egg modules that we produce
Python is installed silently, but the 3 module installers bring up their own installer windows that block our install, look very unprofessional, and require the user to click through them. There appears to be no parameters that we can pass to force them to run silently.
Our installer is 7 years old. I assume that advancements in how Python modules are installed on Windows have made exe-based module installers completely obsolete, but I can't seem to find a clear answer on what the recommended "modern" method of installation would be. Given the following limitations, what can we do to make the installer run to completion with no need to click through the module installers?
The following conditions apply:
We must continue to use InstallShield as the installation engine.
We will not have an Internet connection during installation.
The install is for all users on the machine. | How best to install Python + modules on Windows using InstallShield | 1.2 | 0 | 0 | 847 |
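For the no-Internet constraint, the pip calls triggered from InstallShield can point at a local folder of wheels shipped with the installer; a sketch of the two command lines involved (the paths are illustrative, not from the original post, and the second line is the documented pywin32 post-install step):
python -m pip install --no-index --find-links=C:\InstallerPayload\wheels pywin32
python C:\Python27\Scripts\pywin32_postinstall.py -install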
43,332,985 | 2017-04-10T21:13:00.000 | 1 | 1 | 0 | 0 | python,ssl,build,installation | 50,069,365 | 1 | false | 0 | 0 | In standard python 3 installation ssl is considered a built-in, so there's no option to install it.
Probably rebuilding is the only solution then. | 1 | 1 | 0 | So a few hours ago I built python (3.6) from source on a raspberry pi (raspbian). Now trying to install modules I find out I do not have the SSL module. Looking around at other questions there seams to be no other way other than to rebuild it with the arg --with-ssl or something.
I don't want to do that again as it took about 3 and a half hours to complete. Unless you can multi-thread the make process across all four cores? However the pi thermals will probably hold it back.
Is there a way to install it? On Python 2 you could with pip install ssl; me being stupid, I tried pip3 install ssl, but that is still the Python 2 ssl module, so it throws syntax errors. Then I tried ssl3, but that does not exist.
Any suggestions?
Thanks. | Install the SSL module on Python 3.6 without rebuilding | 0.197375 | 0 | 0 | 1,001 |
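If a rebuild does turn out to be necessary, the usual requirement is that the OpenSSL development headers are present before configure runs (on Raspbian that is the libssl-dev package), and the compile can be spread over the Pi's four cores; a sketch, not from the answer above:
sudo apt-get install libssl-dev
cd Python-3.6.x        # the already-unpacked source tree (placeholder name)
./configure
make -j4               # use all four cores
sudo make install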
43,334,233 | 2017-04-10T23:05:00.000 | 1 | 0 | 0 | 0 | python,urllib2,blocked | 43,334,265 | 1 | false | 0 | 0 | It is indeed possible, maybe the sysadmin noticed that your IP was making way too many requests and decided to block it.
It could also be that the server has a limit of requests that you exceeded.
If you don't have a static IP, a restart of your router should reset your IP, making the ban useless. | 1 | 0 | 0 | I had been using urllib2 to parse data from html webpages. It was working perfectly for some time and stopped working permanently from one website.
Not only did the script stop working, but I was no longer able to access the website at all, from any browser. In fact, the only way I could reach the website was from a proxy, leading me to believe that requests from my computer were blocked.
Is this possible? Has this happened to anyone else? If that is the case, is there anyway to get unblocked? | Python urllib2: Getting blocked from a website? | 0.197375 | 0 | 1 | 608 |
43,334,302 | 2017-04-10T23:14:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,abaqus | 50,130,557 | 3 | false | 0 | 0 | I agree with the answer, except for some minor syntax problems.
Defining instance variables inside the handler is a no-no, not to mention they are not being defined in any sort of __init__() method. Subclass TCPServer and define your instance variables in TCPServer.__init__(). Everything else will work the same.
I am working with Abaqus, a program for analyzing mechanical problems. It is basically a standalone Python interpreter with its own objects etc. Within this program, I run a python script to set up my analysis (so this script can be modified). It also contains a method which has to be executed when an external signal is received. These signals come from the main script that I am running in my own Python engine.
For now, I have the following workflow:
The main script sets a boolean to True when the Abaqus script has to execute a specific function, and pickles this boolean into a file. The Abaqus script regularly checks this file to see whether the boolean has been set to true. If so, it does an analysis and pickles the output, so that the main script can read this output and act on it.
I am looking for a more efficient way to signal the other process to start the analysis, since there is a lot of unnecessary checking going on right know. Data exchange via pickle is not an issue for me, but a more efficient solution is certainly welcome.
Search results always give me solutions with subprocess or the like, which is for two processes started within the same interpreter. I have also looked at ZeroMQ since this is supposed to achieve things like this, but I think this is overkill and would like a solution in python. Both interpreters are running python 2.7 (although different versions) | Communication between two separate Python engines | 0 | 0 | 0 | 1,591 |
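A minimal sketch of that suggestion (Python 2.7, where the module is called SocketServer; the port and the one-line protocol are made up): the state lives on the server object created in __init__, and handlers reach it through self.server.
import SocketServer

class SignalServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_cls):
        SocketServer.TCPServer.__init__(self, server_address, handler_cls)
        self.run_analysis = False                # instance state lives on the server

class SignalHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().strip()
        if command == 'RUN':
            self.server.run_analysis = True      # handlers access state via self.server

server = SignalServer(('localhost', 9999), SignalHandler)
server.serve_forever()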
43,337,694 | 2017-04-11T05:50:00.000 | 0 | 0 | 1 | 0 | python,variables,dimensions | 43,337,961 | 4 | false | 0 | 0 | It is not possible to make arrays with different sizes as (I understood) you want to, because a 2D array is basically a table with rows and columns, and each row has the same number of columns, no matter what.
But you can join the values in each variable and save the resulting strings in the array, and to use them again just split them back and parse the values into the type you need. | 2 | 0 | 1 | I want to create a variable D by combining two other variables x and y.
x has the shape [731] and y has the shape [146].
At the end D should be 2D so that D[0] contains all x-values and D[1] all y-values.
I hope I explained it in a way someone can understand what I want to do.
Can someone help me with this? | python : create a variable with different dimension sizes | 0 | 0 | 0 | 792 |
43,337,694 | 2017-04-11T05:50:00.000 | 2 | 0 | 1 | 0 | python,variables,dimensions | 43,337,785 | 4 | true | 0 | 0 | It is a simple as: D = [x, y]
Hope it helped :) | 2 | 0 | 1 | I want to create a variable D by combining two other variable x and y.
x has the shape [731] and y has the shape [146].
At the end D should be 2D so that D[0] contains all x-values and D[1] all y-values.
I hope I explained it in a way someone can understand what I want to do.
Can someone help me with this? | python : create a variable with different dimension sizes | 1.2 | 0 | 0 | 792 |
43,342,433 | 2017-04-11T09:48:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,object-detection,yolo | 45,476,167 | 4 | false | 0 | 0 | There are 13x13 grid cells, true, but P(object) is calculated for each of 5x13x13 anchor boxes. From the YOLO9000 paper:
When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box.
I can't comment yet because I'm new here, but if you're wondering about test time, it works kind of like an RPN. At each grid cell, the 5 anchor boxes each predict a bounding box, which can be larger than the grid cell, and then non-maximum suppression is used to pick the top few boxes to do classification on.
P(object) is just a probability, the network doesn't "know" if there is really an object in there or not.
You can also look at the source code for the forward_region_layer method in region_layer.c and trace how the losses are calculated, if you're interested. | 3 | 8 | 1 | Currently I am testing the yolo 9000 model for object detection and in the Paper I understand that the image is splited in 13X13 boxes and in each boxes we calculate P(Object), but How can we calculate that ? how can the model know if there is an object in this boxe or not, please I need help to understand that
I am using tensorflow
Thanks, | How Yolo calculate P(Object) in the YOLO 9000 | 0 | 0 | 0 | 1,673 |
43,342,433 | 2017-04-11T09:48:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,object-detection,yolo | 59,667,414 | 4 | false | 0 | 0 | During test time the YOLO network gets the IOU from the default setted value. That is 0.5. | 3 | 8 | 1 | Currently I am testing the yolo 9000 model for object detection and in the Paper I understand that the image is splited in 13X13 boxes and in each boxes we calculate P(Object), but How can we calculate that ? how can the model know if there is an object in this boxe or not, please I need help to understand that
I am using tensorflow
Thanks, | How Yolo calculate P(Object) in the YOLO 9000 | 0 | 0 | 0 | 1,673 |
43,342,433 | 2017-04-11T09:48:00.000 | 4 | 0 | 0 | 0 | python,tensorflow,object-detection,yolo | 44,433,597 | 4 | false | 0 | 0 | They train for the confidence score = P(object) * IOU. For the ground truth box they take P(object)=1 and for rest of the grid pixels the ground truth P(object) is zero. You are training your network to tell you if some object in that grid location i.e. output 0 if not object, output IOU if partial object and output 1 if object is present. So at test time, your model has become capable of telling if there is an object at that location. | 3 | 8 | 1 | Currently I am testing the yolo 9000 model for object detection and in the Paper I understand that the image is splited in 13X13 boxes and in each boxes we calculate P(Object), but How can we calculate that ? how can the model know if there is an object in this boxe or not, please I need help to understand that
I am using tensorflow
Thanks, | How Yolo calculate P(Object) in the YOLO 9000 | 0.197375 | 0 | 0 | 1,673 |
43,343,868 | 2017-04-11T10:52:00.000 | 2 | 0 | 0 | 0 | python,image-processing,computer-vision,deep-learning,keras | 43,345,587 | 1 | true | 0 | 0 | There is no general rules on how to choose the layer for feature extraction but you might use a easy rule of thumb. The deeper you go to the network - the less ImageNet specific semantic features you would have. But in the same time - you are getting less semantic features also.
What I would do is to use the pool layers in both topologies - and if this didn't work well - then I would go deeper by setting the depth as metaparameter. | 1 | 4 | 1 | I am using pretrained resnet50 and inception v3 networks to extract features from my images, which I then use with my ML algo.
Which layers are recommended for feature extraction?
I am currently using "mixed10" in Inception v3 and "avg_pool" in resnet50. The features are modelling well in XGBoost, though.
Thank you. | Feature extraction in Keras | 1.2 | 0 | 0 | 1,958 |
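A sketch of extracting features from one of the layer names mentioned in the question (the images array is a placeholder; real images would be resized to 224x224 and preprocessed the same way):
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.models import Model

base = ResNet50(weights='imagenet')
extractor = Model(inputs=base.input, outputs=base.get_layer('avg_pool').output)

images = np.zeros((1, 224, 224, 3))            # placeholder batch of images
features = extractor.predict(preprocess_input(images))
The same pattern works for Inception v3 with get_layer('mixed10'), and the resulting features can then be fed to XGBoost or another classifier.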
43,345,925 | 2017-04-11T12:27:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning,object-detection | 60,558,628 | 3 | false | 0 | 0 | There is not any rule as such but yes the best practice of proper annotation is to keep certain pixel width while creating bounding boxes.
See, the background changes are the variations in the object which will make it robust, but keep in mind to have enough samples to properly recognize the patterns in the object (edges, shapes, textures, etc.).
Hope i addressed your query! | 3 | 1 | 1 | Currently, I am working to create a deep neural network for object detection, and i am also create my own dataset, and I use the bounding box to annotate my images, and my question is what are the rules to have the best bounding box for my images training. I mean if I wrap my object is it good to limit the background of my object or do I need t find a way to bound only my object.
Thanks, | Best Way to create a bounding box for object detection | 0 | 0 | 0 | 2,423 |
43,345,925 | 2017-04-11T12:27:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning,object-detection | 53,543,602 | 3 | false | 0 | 0 | You can reference YOLO algorithm- this is the best algorithm for object detection. The first, input image will divide into SxS grid cell, Yolo will predict 5 bounding box for each cell and with each bounding box, Yolo also predict the center coordinates of box, width, height of box and confidence score of having any object in that box along with the probabilities that object will belong to M classes. After that, we use Non Max Suppression and IOU to calculate the accuracy between bounding box with ground truth and get only the most exactly bounding box for the object in the input image. | 3 | 1 | 1 | Currently, I am working to create a deep neural network for object detection, and i am also create my own dataset, and I use the bounding box to annotate my images, and my question is what are the rules to have the best bounding box for my images training. I mean if I wrap my object is it good to limit the background of my object or do I need t find a way to bound only my object.
Thanks, | Best Way to create a bounding box for object detection | 0 | 0 | 0 | 2,423 |
43,345,925 | 2017-04-11T12:27:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning,object-detection | 43,346,462 | 3 | false | 0 | 0 | I am not specialized in bounding box, but in general in deep learning, we try to obtain a network which will be robust against irrelevant variables, in your case, the background. Bounding should not be dependant of the background, so set your bounding box the way you want, it should be learned by the network how to replicate it.
The most important thing is more the size of the database and having consistent bounding than precise bounding.
Also if you want your network to be robust against changes of backgrounds, you should have as many backgrounds as possible, uncorrelated with the bounding. | 3 | 1 | 1 | Currently, I am working to create a deep neural network for object detection, and i am also create my own dataset, and I use the bounding box to annotate my images, and my question is what are the rules to have the best bounding box for my images training. I mean if I wrap my object is it good to limit the background of my object or do I need t find a way to bound only my object.
Thanks, | Best Way to create a bounding box for object detection | 0 | 0 | 0 | 2,423 |
43,348,620 | 2017-04-11T14:19:00.000 | 0 | 0 | 0 | 0 | python,django,security | 72,083,078 | 1 | false | 1 | 0 | This is very late, but I have implemented license validation through a Python-Java bridge. I am generating tokens at the device level and put a valid key according to that token so that it doesn't work on another device. I have disabled the main feature for an invalid license key, and I also compiled the source code for deployment. However, it is not difficult to decompile code, but at least we can deter a normal user. | 1 | 3 | 0 | I am making a Django application and I am running into an issue. I know Python is interpreted and it would be impossible to completely fight against piracy, however I want to implement some sort of security/licensing feature for my application.
I thought I would post the question because I can't find much information about this online. I'm not sure if I should create some sort of installer which downloads files from a server depending on if a key is valid or not and installs them onto a users host, or if I should encrypt the files upon sending and decrypt them with a key.
If anybody has any pointers or if anybody has faced this before I'd love to hear! | Protecting or Licensing a Django Application | 0 | 0 | 0 | 1,487 |
43,348,971 | 2017-04-11T14:34:00.000 | 4 | 0 | 0 | 0 | python,django,shell | 43,349,037 | 1 | true | 1 | 0 | The shell starts a new process to run the Python interpreter. The Python interpreter reads manage.py and executes it directly. There's no such thing as "the main Django process". | 1 | 2 | 0 | Django has many management commands. In addition, we can write our own commands.
What happens after I make a shell call python manage.py XXX?
Will the code be executed in a process that launched from the shell?
Or the shell process just communicates with the main Django process that executes the command? | is the Django management command executed in a separate process? | 1.2 | 0 | 0 | 485 |
43,351,596 | 2017-04-11T16:33:00.000 | 6 | 0 | 1 | 0 | python,visual-studio-code,anaconda | 46,554,629 | 15 | false | 0 | 0 | Unfortunately, this does not work on macOS. Despite the fact that I have export CONDA_DEFAULT_ENV='$HOME/anaconda3/envs/dev' in my .zshrc and "python.pythonPath": "${env.CONDA_DEFAULT_ENV}/bin/python",
in my VSCode prefs, the built-in terminal does not use that environment's Python, even if I have started VSCode from the command line where that variable is set. | 3 | 92 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 1 | 0 | 0 | 175,026 |
43,351,596 | 2017-04-11T16:33:00.000 | 5 | 0 | 1 | 0 | python,visual-studio-code,anaconda | 60,607,499 | 15 | false | 0 | 0 | Just launch the VS Code from the Anaconda Navigator. It works for me. | 3 | 92 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 0.066568 | 0 | 0 | 175,026 |
43,351,596 | 2017-04-11T16:33:00.000 | 0 | 0 | 1 | 0 | python,visual-studio-code,anaconda | 66,031,427 | 15 | false | 0 | 0 | As I was not able to solve my problem by suggested ways, I will share how I fixed it.
First of all, even if I was able to activate an environment, the corresponding environment folder was not present in C:\ProgramData\Anaconda3\envs directory.
So I created a new anaconda environment using Anaconda prompt,
a new folder named same as your given environment name will be created in the envs folder.
Next, I activated that environment in Anaconda prompt.
Installed python with conda install python command.
Then on anaconda navigator, selected the newly created environment in the 'Applications on' menu.
Launched vscode through Anaconda navigator.
Now as suggested by other answers, in vscode, opened command palette with Ctrl + Shift + P keyboard shortcut.
Searched and selected Python: Select Interpreter
If the interpreter with newly created environment isn't listed out there, select Enter Interpreter Path and choose the newly created python.exe which is located similar to C:\ProgramData\Anaconda3\envs\<your-new-env>\ .
So the total path will look like C:\ProgramData\Anaconda3\envs\<your-nev-env>\python.exe
Next time onwards the interpreter will be automatically listed among other interpreters.
Now you might see your selected conda environment at bottom left side in vscode. | 3 | 92 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 0 | 0 | 0 | 175,026 |
43,351,701 | 2017-04-11T16:39:00.000 | 0 | 0 | 1 | 0 | python,r,multithreading,rserve,pyrserve | 43,352,367 | 2 | false | 0 | 0 | I kept on working with the code and it turns out that each thread needs its own port in order to work. I didn't find that documented anywhere, I was just trying out different idea. So:
I set up as many instances of Rserve as I wanted threads. Each one of those instances has its own port.
In my python code, when I instantiated the pyRserve object, I assigned it a unique port number.
Multi-threading is now working as desired and fast! | 1 | 1 | 1 | I have a Python script set up where it instantiates Rserve, sets up a few R scripts and functions and then runs some data against the functions. However, I have been unable to create a multi-threaded instance of this same process. My core issue is that one thread always seems to dominate the processing and all of the other threads are ignored.
I've made the assumption that pyRserve can be multi-threaded - is that a correct assumption? Are there any examples out there that show this as a multi-threaded app? | R, Python and pyRserve - multi-threaded examples? | 0 | 0 | 0 | 290 |
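A small sketch of the one-port-per-thread setup (the base port and number of instances are illustrative); each Rserve instance is started beforehand listening on its own port, and every worker thread connects to a different one:
import pyRserve

def worker(thread_index):
    # thread 0 -> port 6311, thread 1 -> port 6312, and so on
    conn = pyRserve.connect(host='localhost', port=6311 + thread_index)
    result = conn.eval('1 + 1')   # any R call; each thread has its own Rserve
    conn.close()
    return result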
43,351,742 | 2017-04-11T16:42:00.000 | 2 | 0 | 0 | 0 | python,opencv,numpy,nao-robot,choregraphe | 43,369,916 | 2 | false | 0 | 0 | It depends if you're using a real NAO or a simulated one.
Simulated one: Choregraphe uses its own embedded Python interpreter, so even if you add the library to your system it won't change anything.
Real NAO: the system Python interpreter is used, so you need to install those libraries on your robot (and not on the computer running Choregraphe). As pip often doesn't work well on NAO, you'll have to manually copy the libraries to /home/nao/.local/lib/python2.7/site-packages | 1 | 1 | 1 | I'm doing a project that requires cv2 and numpy in one of the scripts using Choregraphe, but I get an error:
No module named cv2/numpy.
I think it is because choregraphe has its own python interpreter but I do not know how to install cv2 and numpy into the python of choregraphe.
How can I do it? | how to import cv2 and numpy in Choregraphe for NAO robot? | 0.197375 | 0 | 0 | 1,416 |
43,351,833 | 2017-04-11T16:47:00.000 | 0 | 1 | 0 | 0 | python,unit-testing,jenkins,pytest,xunit | 45,173,514 | 1 | false | 1 | 0 | Check that you didn't include the original results-1.xml and results-2.xml file in the path that Jenkins scan for results.
If you're not sure about it, try to delete the origin files after the merge (and before running the xunit-report action) | 1 | 0 | 0 | xunitmerge create duplicate tests when merged the py.test results of two different set of tests.
I have two test folders and ran them separately using py.test, which created results-1.xml &results-2.xml. after that i am merging as below.
xunitmerge results-1.xml results-2.xml results.xml
which created results.xml, when i publish the results using jenkins (publish xunit results) i see the tests recorded shown them as duplicate though the tests of results-1.xml and results-2.xml are unique.
How to avoid duplicate test results during merge? | xunitmerge does creates duplicate tests when merged the py.test results of two different set of tests | 0 | 0 | 1 | 272 |
43,352,671 | 2017-04-11T17:33:00.000 | 1 | 1 | 1 | 0 | python,c++,binaryfiles | 43,355,184 | 2 | false | 0 | 0 | Binary files contain data.
There are a plethora of data layouts of binary files. Some examples are JPEG, Executables, Word Processor, Raw Text and archive files.
A file may have an extension that may indicate the layout. For example, a ".png" would most likely follow the PNG format. A "bin" or "dat" extension is too generic. One could zip up files and name the archive with a "png" extension.
If there is no file extension or the OS doesn't store the type of a file, then the format of the file is based on discovery (or trying random formats). Some file formats have integrity values in them to help verify correctness. Knowing the integrity value and how it was calculated can assist in classifying the format type. Again, there is no guarantee.
BTW, file formats are independent of the language to used to read them. One could read a gzipped file using FORTRAN or BASIC. | 1 | 0 | 0 | I have a simple (and maybe silly) question about binary data files. If a simple type is used (int/float/..) it is easy to imagine the structure of the binary file (a sequence of floats, with each float written using a fixed number of bytes). But what about structures, objects and functions ? Is there some kind of convension for each language with regards to the order in which the variables names / attributes / methods are written, and if so, can this order be changed and cusotomized ? otherwise, is there some kind of header that describes the format used in each file ?
I'm mostly interested in python and C/C++. When I use a pickled (or gzipped) file for example, python "knows" whether the original object has a certain method or attribute without me casting the unpickled object or indicating its type and I've always wondered how is that implemented. I didn't know how to look this up on Google because it may have something to do with how these languages are designed in the first place. Any pointers would be much appreciated. | Binary files structure for objects and structs in C++/Python | 0.099668 | 0 | 0 | 62 |
43,352,757 | 2017-04-11T17:38:00.000 | 1 | 0 | 0 | 0 | python,wxpython,wxwidgets | 43,353,997 | 2 | false | 0 | 1 | The wxPython package uses native widgets in its core widgets as much as possible. Thus, the wx.Button widget is going to be a native widget that you can only modify via the methods mentioned in the documentation. As Igor mentioned, you can try using SetBackgroundColour() or SetForegroundColour(), although depending on your platform's button widget, they may or may not work.
What you really want is a custom widget. I recommend checking out the GenericButtons, PlateButton and GradientButton for examples. You might even be able to use a GenericButton directly and paint its background as you mentioned. | 2 | 0 | 0 | I have search far and wide on how you can paint the background color of a button or GenButton with a pattern such as lines or cross hatch. I have seen examples of wx DirectContext so that you can draw objects with patterns instead of just solid colors but it seems that this is only for specific shapes and not the color of button objects. Does the dc or gc library allow to paint on these objects. I know that I have to create an event handler for OnPaint and OnResize but I may be missing some steps so that it applies this to the button itself. | WxPython: Setting the background of buttons with wx.Brush | 0.099668 | 0 | 0 | 616 |
43,352,757 | 2017-04-11T17:38:00.000 | 0 | 0 | 0 | 0 | python,wxpython,wxwidgets | 43,353,634 | 2 | false | 0 | 1 | wx.Button object represents a native control. And so unfortunately you can't manipulate how the native control paints itself.
You can try SetBackgroundColour()/SetForegroundColour() but this is as far as you can go. | 2 | 0 | 0 | I have search far and wide on how you can paint the background color of a button or GenButton with a pattern such as lines or cross hatch. I have seen examples of wx DirectContext so that you can draw objects with patterns instead of just solid colors but it seems that this is only for specific shapes and not the color of button objects. Does the dc or gc library allow to paint on these objects. I know that I have to create an event handler for OnPaint and OnResize but I may be missing some steps so that it applies this to the button itself. | WxPython: Setting the background of buttons with wx.Brush | 0 | 0 | 0 | 616 |
43,354,382 | 2017-04-11T19:11:00.000 | 3 | 0 | 1 | 1 | python,django,bash,macos,terminal | 43,354,458 | 10 | false | 0 | 0 | If you have various versions of Python installed, you can launch any of them using pythonX.Y, where X.Y represents the version (for example, python3.6). | 2 | 38 | 0 | My Mac came with Python 2.7 installed by default, but I'd like to use Python 3.6.1 instead.
How can I change the Python version used in Terminal (on Mac OS)?
Please explain clearly and offer no third party version manager suggestions. | How to switch Python versions in Terminal? | 0.059928 | 0 | 0 | 185,402 |