Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
39,155,669
2016-08-25T22:24:00.000
0
0
1
0
python,windows,python-2.7,pip
57,235,242
9
false
0
0
I have installed "python-2.7.16" in my windows 10 PC. PIP is installed under "C:\Python27\Scripts". One can add the "C:\Python27\Scripts" path to environment variable and then access pip command from command prompt. To add the path in environment variable: Control Panel\System and Security\System --> Advanced System Settings --> Advanced --> enviroment variables --> under system variable --> Path(variable name)
5
9
0
I have Python 2.7.11 installed on my machine which to my understanding should come with pip, however when I check the C:\Python27\Tools\Scripts\ directory there is no pip.exe present. I have tried completely removing and reinstalling Python 2.7.11 without success. Running the installer pip is set to be installed, but after the install pip is nowhere to be found. I also have Python 3.4 installed which has pip as expected. Any thoughts?
Python 2.7.11 pip not installed
0
0
0
24,089
39,156,092
2016-08-25T23:12:00.000
0
0
0
0
python,nuke
39,215,596
1
true
0
0
According to Foundry support, it is not possible to insert knobs into existing nodes or groups. The only option is to delete and recreate the set of knobs.
1
0
0
The main interface for adding knobs to nodes in nuke appears to be the node.addKnob() function. The knobs are added in the order that the addKnob method is called. Is there a way to insert knobs before other knobs that have already been created?
Is it possible to insert knobs on nodes in Nuke?
1.2
0
0
643
39,156,864
2016-08-26T00:55:00.000
0
0
1
0
python,multithreading,gil
39,157,969
1
false
0
0
The GIL is a complex topic and the exact behavior in your case is hard to explain without your code, so I cannot tell you whether you will run into trouble in the future. I can only advise bringing your project to a recent version of Python 3 if possible; many improvements have been made to the GIL in Python 3. There is nothing like a magic number of threads at which Python breaks. The general rule is simply: the more threads, the more problems, and the most complicated step is going from one to two. The GIL is released in some situations, especially when C code is executed or I/O is done, which allows code to run in parallel. With the advanced features of modern CPUs it wouldn't be wise to limit your code to just one CPU.
1
0
0
I have been having no problems with performance with Python's Global Interpreter Lock. I've had to make a few things thread-safe - despite common advice, the GIL does NOT automatically guarantee thread-safety - but I've got a program commonly running upwards of 10 threads, where all of them can be active at any time, including together. It is a somewhat complex asynchronous messaging system. I understand multiprocessing and am even using Celery in this program, but the solution would have to be very convoluted to work through multiprocessing for this problem set. I'm running 2.7 and using recursive locks despite their performance penalties. My question is this: will I run into scaling problems with the GIL? I have seen no performance problems with it so far. Measuring this is...problematic. Is there a number of threads or something similar that you hit and it just starts choking? Does GIL performance differ significantly from executing multi-threaded code on a single-core CPU? Thanks!
Having no performance problems with threading and Python's Global Interpreter Lock. Scalability?
0
0
0
100
39,158,621
2016-08-26T05:05:00.000
3
0
1
0
python,multithreading,sqlite
39,163,285
2
false
0
0
This is my issue too. SQLite uses a locking mechanism that prevents concurrent operations on a DB. But here is a trick I use when my databases are small: you can select all your table data into memory, operate on it there, and then update the original table. As I said, this is just a trick and it does not always solve the problem; I advise you to devise a workaround that fits your case.
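A minimal sketch of that trick with the standard sqlite3 module; the database file, the items table and its columns are assumptions made up for illustration, not part of the question.

    import sqlite3

    conn = sqlite3.connect("app.db")
    cur = conn.cursor()

    # pull the whole (small) table into memory
    rows = cur.execute("SELECT id, value FROM items").fetchall()

    # operate on the plain Python objects instead of the live table
    updated = [(value * 2, row_id) for row_id, value in rows]

    # write everything back in one short transaction
    cur.executemany("UPDATE items SET value = ? WHERE id = ?", updated)
    conn.commit()
    conn.close()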
2
2
0
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue, or have some sort of mechanism that will stop the data from getting corrupted?
Can you have multiple read/writes to SQLite database simultaneously?
0.291313
1
0
1,006
39,158,621
2016-08-26T05:05:00.000
2
0
1
0
python,multithreading,sqlite
39,158,655
2
false
0
0
SQLite has a number of robust locking mechanisms to ensure the data doesn't get corrupted, but the problem with that is if you have a number of threads reading and writing to it simultaneously you'll suffer pretty badly in terms of performance as they all trip over the others. It's not intended to be used this way, even if it does work. You probably want to look at using a shared database server of some sort if this is your intended usage pattern. They have much better support for concurrent operations.
2
2
0
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue, or have some sort of mechanism that will stop the data from getting corrupted?
Can you have multiple read/writes to SQLite database simultaneously?
0.197375
1
0
1,006
39,160,816
2016-08-26T07:34:00.000
0
0
0
0
python,web,openerp
39,180,410
1
false
1
0
There is no such thing as 'flag/state'. What you are probably trying to say is that you want to know which operations are taking place on a record. The easiest method is to take a look at your log. There will be statements there in the form /web/dataset/call_kw/model/operation where model is your ORM model and operation could be a search, read, unlink etc. RPC calls are logged in there as well. The format of the log output is a little bit different between different versions of odoo. You can go to a lower level by monitoring sql transactions on postgresql but I do not think that this is what you want.
1
1
0
I am new to Odoo and I want to know how to get the current flag/state of every operation. For example: when we create a new record, how do we know the current flag/state is "add"? Or when we view a record, how do we know the current flag/state is "view"? It is something like the current user id that is stored in the session as "uid"; is there something similar to get the current flag/state in every operation?
How to get the flag/state of current operation in Odoo 9?
0
0
0
107
39,161,806
2016-08-26T08:30:00.000
0
0
0
0
python,django,shopify
39,170,945
1
true
1
0
Here is the recipe: (1) create a proxy in your app to accept the incoming Ajax call from the customer; (2) create a form and button in the customer liquid that submits to your proxy; (3) in the app proxy, validate the call from Shopify and, when valid, look for your form params; (4) open the customer record with the ID of the customer you sent along with the form data, and add an address to their account. Done. Simple.
1
0
0
I'm planning to create a simple app using Django/Python that shows a nice button when installed by the store owner on user's account. Clicking on that button should trigger a webhook request to our servers that would send back the generated shipping address for the user. My questions: Is it possible to create such button through shopify API or this something the store owner must manually add? Is it possible to add a shipping address upon user request? Thanks
Shopify app: adding a new shipping address via webhook
1.2
0
0
256
39,164,943
2016-08-26T11:11:00.000
0
0
1
1
python,linux,python-2.7
39,174,922
3
false
0
0
To build on Tryph's answer, you can install the new version under your home directory, then create a symbolic link to the new interpreter inside a directory that is on your PATH (as set, for example, in .bash_profile). For instance, if you have a bin folder in your home directory that is on the PATH and the new interpreter was installed under ~/python27/bin (a placeholder path), you could run: ln -s ~/python27/bin/python2.7 ~/bin/python27
2
0
0
There is a default Python version, namely Python 2.6, on the GPU server (Linux). Now I want to install a new Python version on the server from source, namely Python 2.7. I should not change the default Python version, since I am not the administrator, among other reasons. So what should I do?
How to install and use another Python version (Python 2.7) on Linux when the default Python version is 2.6
0
0
0
115
39,164,943
2016-08-26T11:11:00.000
0
0
1
1
python,linux,python-2.7
39,165,141
3
false
0
0
You can install your new version of Python. It should be accessible with the python27 command (which may be a symbolic link). Then you will just have to check that the python symbolic link still points to python26. This way, python will keep executing Python 2.6 while python27 will execute Python 2.7.
2
0
0
There is a default Python version, namely Python 2.6, on the GPU server (Linux). Now I want to install a new Python version on the server from source, namely Python 2.7. I should not change the default Python version, since I am not the administrator, among other reasons. So what should I do?
How to install and use another Python version (Python 2.7) on Linux when the default Python version is 2.6
0
0
0
115
39,165,180
2016-08-26T11:23:00.000
0
0
0
0
python,mks-integrity
41,023,731
1
false
0
0
I can't help you with Python, but for MKS: connect to a host with "im connect --hostname=%host% --port=%port%"; run a query with "im runquery --hostname=%host% --port=%port% %query_name%". You can see the help for each command if you just write "im <command> -?".
1
0
0
I would like to create a script in Python for logging into MKS Integrity and calling an already defined MKS query. Since I am a newbie in programming, I was wondering if there is any example script for the task. That would be a great help for getting me started. Thank you!
Python script for MKS integrity query
0
1
0
1,411
39,166,725
2016-08-26T12:47:00.000
1
0
1
1
python,anaconda,opensuse
39,167,113
1
true
0
0
I read the Anaconda documentation, and there is no evidence of Anaconda packages replacing your openSUSE packages; there isn't a reason for it to do so. If I understood it right, conda is very similar to Ruby's gem and similar tools, which definitely don't replace the installed packages. I think you can feel free to install it next to your current packages. Also, you can specify the Python version and Python package versions in Anaconda environments, which is another thing it allows you to do, so you can decide what you will use there. Note, I'm not a conda user; this is how I understood the docs. Hope this helps.
1
1
0
I am using openSUSE Leap 42.1 and do some data analysis work in python. Most of the python packages I use are available in the standard openSUSE repositories (e.g. obs://build.opensuse.org/devel:languages:python); however sometimes they aren't, whereas they are available in Anaconda. I would like to replace all of the python packages installed on my computer with those available through Anaconda. Is it possible to just install Anaconda in parallel with the normal openSUSE packages or should I manually delete the packages I've installed? I know python is used heavily throughout the operating system so I probably don't want to deep clean the system of python before going the Anaconda route. Has anyone done this before? I was unable to find any info on this on the Anaconda site, and I'm curious if there is a clean way to do this.
Switch from linux distro package manager to Anaconda
1.2
0
0
280
39,168,025
2016-08-26T13:54:00.000
2
0
0
0
python,neural-network,tensorflow,lstm
39,177,157
1
false
0
0
If you are using tf.rnn_cell.BasicLSTMCell, the variable you are looking for will have the following suffix in its name: <parent_variable_scope>/BasicLSTMCell/Linear/Matrix. This is a concatenated matrix for all four gates. Its first dimension matches the sum of the second dimensions of the input matrix and the state matrix (or output of the cell, to be exact). The second dimension is 4 times the cell size. The complementary variable is <parent_variable_scope>/BasicLSTMCell/Linear/Bias, which is a vector whose size equals the second dimension of the tensor above (for obvious reasons). You can retrieve the parameters for the four gates by using tf.split() along dimension 1. The split matrices would be in the order [input], [new input], [forget], [output]. I am referring to the code from rnn_cell.py. Keep in mind that the variable represents the parameters of the cell and not the output of the respective gates; but with the above info, I am sure you can get that too, if you so desire. Edit: Added more specific information about the actual tensors Matrix and Bias.
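A rough sketch of that split, assuming a pre-1.0 TensorFlow (contemporary with this answer) where tf.split takes (split_dim, num_split, value); the cell size, input size and the stand-in matrix are made-up placeholders, not the actual graph variable.

    import tensorflow as tf

    num_units = 128      # cell size, illustrative
    input_size = 100     # input dimension, illustrative
    # stand-in for <parent_variable_scope>/BasicLSTMCell/Linear/Matrix; in a real graph
    # you would look the variable up by name instead of creating a dummy tensor
    matrix = tf.zeros([input_size + num_units, 4 * num_units])

    # split the concatenated parameters into the four gate blocks
    # (order: input, new input, forget, output)
    i_w, j_w, f_w, o_w = tf.split(1, 4, matrix)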
1
5
1
I am using the LSTM model that comes by default in tensorflow. I would like to check or to know how to save or show the values of the forget gate in each step, has anyone done this before or at least something similar to this? Till now I have tried with tf.print but many values appear (even more than the ones I was expecting) I would try plotting something with tensorboard but I think those gates are just variables and not extra layers that I can print (also cause they are inside the TF script) Any help will be well received
Tensorflow: show or save forget gate values in LSTM
0.379949
0
0
1,695
39,168,251
2016-08-26T14:06:00.000
0
0
0
0
python,sap-ase
39,202,310
1
false
0
0
You need to capture the actual SQL query text which is sent to the ASE server before conclusions can be drawn.
1
0
0
I have encountered a problem that I can not figure out. I'm working on an application written in Python and a Sybase ASE database using sybpydb to communicate with the datbase. Now I need to update a post where one of the columns in the where clause is of numeric(10) data type. When selecting the post Python treats the data as a float no problem there. But when I try to update the post using the numeric value I just got from the select i get a "Invalid data type" error. My first thought was to try to convert the float to an integer but it still gives the same error
Sybase numeric datatype and Python
0
1
0
152
39,172,559
2016-08-26T18:23:00.000
0
0
0
0
python,scipy,interpolation
39,174,418
1
false
0
0
As long as you can assume that your errors represent one-sigma intervals of normal distributions, you can always generate synthetic datasets, resample and interpolate those, and compute the 1-sigma errors of the results. Or just interpolate values+err and values-err, if all you need is a quick and dirty rough estimate.
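A small sketch of the Monte Carlo variant, assuming the errors are one-sigma widths of normal distributions; the data arrays, the common grid and the number of draws are placeholders for illustration.

    import numpy as np
    from scipy.interpolate import interp1d

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 2.0, 1.5, 3.0])
    y_err = np.array([0.1, 0.2, 0.1, 0.3])
    x_common = np.linspace(0.0, 3.0, 50)   # common grid shared by all data sets

    # draw synthetic datasets and interpolate each one onto the common grid
    samples = []
    for _ in range(1000):
        y_synth = np.random.normal(y, y_err)
        samples.append(interp1d(x, y_synth)(x_common))
    samples = np.array(samples)

    y_interp = samples.mean(axis=0)       # interpolated values
    y_interp_err = samples.std(axis=0)    # their one-sigma errors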
1
0
1
I have a number of data sets, each containing x, y, and y_error values, and I'm simply trying to calculate the average value of y at each x across these data sets. However the data sets are not quite the same length. I thought the best way to get them to an equal length would be to use scipy's interoplate.interp1d for each data set. However, I still need to be able to calculate the error on each of these averaged values, and I'm quite lost on how to accomplish that after doing an interpolation. I'm pretty new to Python and coding in general, so I appreciate your help!
Python: How to interpolate errors using scipy interpolate.interp1d
0
0
0
403
39,172,944
2016-08-26T18:49:00.000
0
0
0
1
python,python-2.7,file-io,path
39,173,233
1
false
0
0
The best way to deal with this is to avoid constructing the path yourself altogether. Let os.path.join() do it for you.
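For example, a minimal sketch (the directory and file names are placeholders taken from the question):

    import os

    base_dir = "/Users/myname/Dev/project"      # hypothetical project directory
    path = os.path.join(base_dir, "resource")   # builds the path with correct separators
    with open(path) as f:
        data = f.read()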
1
0
0
I am trying to open a resource via an absolute path on my MacBook with open(file[, mode]). The resource I am trying to access is not in the same folder as the script that is running. If I use something like /Users/myname/Dev/project/resource I get an IOError: No such file or directory. What's confusing me is that if I add an extra forward slash to the beginning so it starts with //Users/... it finds the resource without a problem. What is going on here?
Absolute path in Python requiring an extra leading forward slash?
0
0
0
107
39,173,069
2016-08-26T18:58:00.000
0
0
0
1
python,python-2.7,pexpect
39,256,145
1
false
0
0
There are three ways in which this problem can be handled, but none of them flushes the buffer. First, in pexpect every send call should be matched with a call to expect; this ensures that the file pointer has moved past the previous send. Second, if there is a series of sends before a single expect, we need a way to move the file pointer to the location of the last send. This can be done with an extra send whose expected output is unique, in the sense that none of the sends in the series produces that output. The third method is to set logfile_read to a file; all the output will be logged there. Before the send whose output you need, record the position of the file pointer, then record it again after the send, and search for the expected pattern in the file between the two positions. The first method is the ideal way to do it.
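A sketch of the third method; the spawned program, the "status" command and the prompt pattern are hypothetical placeholders, and the idea is simply to search only the part of the log produced after the send of interest.

    import pexpect

    child = pexpect.spawn("some_interactive_tool")   # hypothetical command
    log = open("session.log", "wb")
    child.logfile_read = log                         # everything read gets appended here

    start = log.tell()                   # position before the send we care about
    child.sendline("status")             # hypothetical command whose output we want
    child.expect("prompt> ")             # hypothetical unique prompt
    log.flush()

    with open("session.log", "rb") as f:
        f.seek(start)
        output = f.read()                # only the output produced after that send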
1
1
0
I am trying to read the output of pexpect.send(cmd) but here's the problem I am facing. I am sending many commands in a sequence and I want to read/expect after a certain set of commands. Condition is that only the output of last command is to be considered. But expect matches from the point it last read. I have tried different methods such as matching for an EOF before sending the command of which I need the output but EOF means that child has terminated. I have tried reading till timeout and then sending the command but timeout itself causes the child to terminate. I have looked for ways in which I could read from the end or the last line of output. I am considering reading a fixed bytes to a file or string and then manipulate the output to get the info I want. Here as well the fixed number of bytes is not fixed. There does not seems to be a reliable way to do this. Could anyone help me sort this out ?
Pexpect: Read from the last send
0
0
0
716
39,173,459
2016-08-26T19:26:00.000
1
0
1
1
python,macos,python-2.7
39,173,554
1
false
0
0
This doesn't answer the question in the post's title, but leave Python 2 as the default python. If you want to run Python 3, you run python3 or maybe python3.4 or python3.5, depending on your installation. The system and other third-party software depend on python being Python 2. If you change it, you may encounter puzzles down the road. I'm not sure if having a third-party Python 2 is good (OS X ships with Python 2 already), but it should be fine. Edit: Sorry, didn't see there was already an answer. It was posted as I was typing.
1
0
0
So I installed Python 2.7.11 a few months ago; now the class I'm about to take uses 3. So I installed 3 and it works fine. I also uninstalled 2.7.11 by going to Applications and removing it, but going to the terminal and typing which python, the directory is Library/Frameworks/Python.framework/Versions/2.7/bin/python, which means it's still not removed. What should I do... leave it alone? I only need Python 3, but this is bothering me a bit. Thanks.
Remove third-party installed Python on Mac?
0.197375
0
0
251
39,179,880
2016-08-27T10:07:00.000
1
1
0
0
python,excel,csv,win32com
39,360,812
1
false
0
0
If you need to read the file frequently, I think it is better to save it as CSV; otherwise, just read it on the fly. For the performance issue, I think win32com outperforms; however, considering cross-platform compatibility, I think xlrd is better. win32com is more powerful: with it, one can handle Excel in all ways (e.g. reading/writing cells or ranges). However, if you are seeking a quick file conversion, I think pandas.read_excel also works. I am using another package, xlwings, so I am also interested in a comparison among these packages. In my opinion, I would use pandas.read_excel for quick file conversion; if you demand more processing of the Excel file, I would choose win32com.
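If the quick-conversion route is enough, a minimal sketch with pandas would be the following; the file names are placeholders.

    import pandas as pd

    df = pd.read_excel("big_file.xls")        # hypothetical input workbook
    df.to_csv("big_file.csv", index=False)    # write it out as CSV for faster re-reads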
1
1
0
I have this huge Excel (xls) file that I have to read data from. I tried using the xlrd library, but it is pretty slow. I then found out that by converting the Excel file to a CSV file manually and reading the CSV file, the read is orders of magnitude faster. But I cannot ask my client to save the xls as csv manually every time before importing the file. So I thought of converting the file on the fly, before reading it. Has anyone done any benchmarking as to which procedure is faster: open the Excel file with the xlrd library and save it as a CSV file, or open the Excel file with the win32com library and save it as a CSV file? I am asking because the slowest part is the opening of the file, so if I can get a performance boost from using win32com I would gladly try it.
XLRD vs Win32 COM performance comparison
0.197375
1
0
582
39,180,027
2016-08-27T10:21:00.000
0
0
1
0
python,ide,installation,pycharm
39,185,756
1
true
0
0
Unpack the archive, go to the bin folder and run the .sh script.
1
0
0
I am an Ubuntu 16 user and I installed PyCharm Educational 2 on my computer. A few days ago, while starting the app, I received the notification that there's an update: version 3. So I downloaded the file (.tgz) from the developer's website and tried to install the update, but I can only extract the file instead of actually installing it like in a Windows wizard. Can you explain to me what went wrong? Thanks in advance.
Can't *update* pycharm educational
1.2
0
0
85
39,185,255
2016-08-27T20:03:00.000
0
0
1
0
python,windows,pip
39,185,406
1
false
0
0
What you could do is create a symbolic link. Or in your case, on windows, a shortcut. So in your case : C:\path_to_anaconda_interpreter_in_user\Lib\site-packages would be a shortcut leading to C:\Python27\Lib\site-packages (right click python27/lib/site-packages, click 'create shortcut' and move it into your anaconda lib directory) Edit : See Eryksun's comment below
1
1
0
I downloaded Python2.7 a while ago to my C:\ directory. After that I downloaded pip to install packages. After that I installed the Anaconda interpreter to a different directory within my user. I prefer to use the Anaconda interpreter but every time I install a package with pip it is put in C:\Python27\Lib\site-packages. Is there any way I can change the install command with pip or some pip config file so that it installs packages to C:\path_to_anaconda_interpreter_in_user\Lib\site-packages?
Use pip to Install to a different interpreter
0
0
0
1,590
39,185,570
2016-08-27T20:42:00.000
11
0
0
1
python,django,amazon-web-services,amazon-ec2,amazon-elastic-beanstalk
42,735,371
2
true
1
0
I've realised that the problem was that Elastic Beanstalk, for some reasons, kept the unsuccessfully deployed versions under .elasticbeanstalk. The solution, at least in my case, was to remove those temporal (or whatever you call them) versions of the application.
1
18
0
I'm trying to deploy a new version of my Python/Django application using eb deploy. It unfortunately fails due to unexpected version of the application. The problem is that somehow eb deploy screwed up the version and I don't know how to override it. The application I upload is working fine, only the version number is not correct, hence, Elastic Beanstalk marks it as Degraded. When executing eb deploy, I get this error: "Incorrect application version "app-cca6-160820_155843" (deployment 161). Expected version "app-598b-160820_152351" (deployment 159). " The same says in the health status at AWS Console. So, my question is the following: How can I force Elastic Beanstalk to make the uploaded application version the current one so it doesn't complain?
How to force application version on AWS Elastic Beanstalk
1.2
0
0
10,464
39,187,032
2016-08-28T00:41:00.000
1
0
0
0
python,django,database,django-models
39,187,633
2
false
1
0
I do not know specifically how Django people use the terms, but 'record-level operation' should mean an operation on one or more records, while 'table-level operation' should mean an operation on the table as a whole. I am not quite sure which one an operation on all rows should be -- perhaps both, perhaps it depends on the result. In Python, the usual term for 'record-level' would be 'element-wise'. For Python builtins, bool operates on the collection as a whole: bool([0, 1, 0, 3]) is True. For numpy arrays, operations are (at least usually) element-wise: np.array([0, 1, 0, 2]).astype(bool) gives array([False, True, False, True]). Also compare [1, 2, 3] * 2, which gives [1, 2, 3, 1, 2, 3] for a list, versus np.array([1, 2, 3]) * 2, which gives array([2, 4, 6]). I hope this helps; see if it makes sense in context.
1
3
0
While going through the Django documentation to build up detailed knowledge, I came across the terms 'table-level operation' and 'record-level operation'. What is the difference between them? Could anyone please explain these two terms with an example? Do they have other names too? P.S. I am not asking about the difference because I feel they are alike, but because I feel it can be easier to comprehend this way.
What is the difference between table level operation and record-level operation?
0.099668
0
0
471
39,187,958
2016-08-28T04:18:00.000
1
0
1
0
python
39,188,083
2
false
0
0
You have to use slicing with : instead of a > comparison for tuples, like in the answer ahsanul haque provided. Thumbs up for him.
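In other words, select by position with a slice; a quick sketch using the tuple from the question:

    dt = (100, 200, 300, 400)
    # elements at index greater than 1, i.e. from index 2 onwards
    print(dt[2:])   # (300, 400)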
1
0
0
How do you select python Tuple/Dictionary values where the index is greater than some number. I would think the code should look similar to the following assuming we create a Tuple: dt = (100, 200, 300,400) dt[dt.index > 1]
How to select Python Tuple/Dictionary Values where Index > x
0.099668
0
0
62
39,188,662
2016-08-28T06:41:00.000
4
0
0
1
java,python,rabbitmq,celery,messaging
39,188,804
1
true
0
0
The advantage of using Celery is that we mainly need to write the task processing code; handling the delivery of tasks to the task processors is taken care of by the Celery framework. Scaling out task processing is also easy, by just running more Celery workers with higher concurrency (more processing threads/processes). We don't even need to write code for submitting tasks to queues and consuming tasks from the queues. It also has built-in facilities for adding/removing consumers for any of the task queues. The framework supports retrying tasks, failure handling, accumulating results, etc. It has many, many features that help us concentrate on implementing the task processing logic only. Just as an analogy, implementing a map-reduce program to run on Hadoop is not a very complex task. If the data is small, we can write a simple Python script to implement the map-reduce logic that will outperform a Hadoop map-reduce job processing the same data. But when the data is very huge, we have to divide it across machines, run multiple processes across machines and coordinate their execution. The complexity lies in running multiple instances of mapper and then reducer tasks across multiple machines, collecting and distributing the inputs to mappers, transferring the outputs of mappers to the appropriate reducers, monitoring progress, relaunching failed tasks, detecting job completion, etc. But because we have Hadoop, we don't need to care much about the underlying complexity of executing a distributed job. In the same way, Celery also helps us concentrate mainly on the task execution logic.
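For a sense of how little plumbing that leaves to write, here is a minimal hedged sketch; the broker URL and the task body are placeholders, not part of the question.

    from celery import Celery

    # point Celery at whatever broker you run (RabbitMQ, Redis, ...)
    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def add(x, y):
        # the only code we really write: the task processing logic
        return x + y

    # a client just calls .delay(); delivery, retries and workers are Celery's job
    # add.delay(2, 3)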
1
0
0
I am having trouble understanding what the advantage of using Celery is. I realize you can use Celery with Redis, RabbitMQ etc, but why wouldn't I just get the client for those message queue services directly rather than sitting Celery in front of it?
Celery with Redis vs Redis Alone
1.2
0
1
1,102
39,192,386
2016-08-28T14:37:00.000
0
0
1
0
python-3.x,documentation,pydoc
40,321,643
1
true
0
0
On Mac Os they are located at /Library/Frameworks/Python.framework/Versions/3.5/Resources/English.lproj/Documentation/index.html
1
2
0
I downloaded and installed python 3 from the official website for Mac Os. In the setup there was a checkmark for the docs. This package installs the python documentation at a location that is useable for pydoc and IDLE. However I cannot locate it. How can I open the docs or where are they located?
How to open the python documentation?
1.2
0
0
134
39,197,579
2016-08-29T02:03:00.000
0
0
0
0
python,tkinter,widget,transparent,python-3.5
42,109,832
2
false
0
1
It is anyway not possible to do so (without a canvas), but if you really need it, here's a hack: (1) go to Paint; (2) open the image you want and type the text on it in whatever colour or font you want; (3) set this image as your background in the frame that you want. You can then add whatever buttons you want and place them wherever you want. I know this is inconvenient, but it is sometimes useful.
2
2
0
I'm looking to add an image in the background of a Text widget in tkinter, but as far as I'm concerned, that is not possible. So, to work around this, I'm wondering if it is possible to make the background of a Text widget transparent. Thanks in advance.
Make the background of 'Text' widget in tkinter transparent
0
0
0
1,618
39,197,579
2016-08-29T02:03:00.000
1
0
0
0
python,tkinter,widget,transparent,python-3.5
39,197,855
2
true
0
1
No, it is not possible to make the background of the text widget transparent.
2
2
0
I'm looking to add an image in the background of a Text widget in tkinter, but as far as I'm concerned, that is not possible. So, to work around this, I'm wondering if it is possible to make the background of a Text widget transparent. Thanks in advance.
Make the background of 'Text' widget in tkinter transparent
1.2
0
0
1,618
39,204,331
2016-08-29T10:44:00.000
11
0
1
1
python,python-3.x,unix,gdb
52,784,903
2
false
0
0
Actually gdb-peda doesn't really install any executable on your computer; all it does is modify gdb's config file, which is by default located at ~/.gdbinit. You can use cat ~/.gdbinit to peek at what peda did. Therefore, to go back to vanilla gdb, there are two solutions: (1) gdb --nx: this is the better way, since you may need peda someday; (2) rm -rf ~/.gdbinit: this will remove gdb's config file, so what peda did will have no effect on your gdb any more.
2
4
0
While learning debugging, I somehow went into installing gdb and then gdb-peda. But now, I would like to uninstall gdb-peda. Can anyone please guide me?
How to remove/disable gdb-peda in ubuntu
1
0
0
7,074
39,204,331
2016-08-29T10:44:00.000
2
0
1
1
python,python-3.x,unix,gdb
42,280,176
2
false
0
0
You can remove the Peda folder, should be somewhere in your Home directory. After that you should have your old gdb back.
2
4
0
While learning debugging, I somehow went into installing gdb and then gdb-peda. But now, I would like to uninstall gdb-peda. Can anyone please guide me?
How to remove/disable gdb-peda in ubuntu
0.197375
0
0
7,074
39,210,668
2016-08-29T16:11:00.000
0
0
0
0
python,django,admin
39,212,494
4
false
1
0
This can't be done cleanly via templates, at least. You can put the auth app verbose name "authentication and authorization" in your own .po file (and follow the Django docs on translation); this way Django will use your name.
1
3
0
I'm learning python/Django and setting up my first project. All is going well but I've been searching like crazy on something very simple. There is a default menu item "Authentication and Authorization" and I want to change the name. I've searched in the template if I need to extend something, I've searched if there's a .po file or what not but I can't find it nor a hint on which parameter I should overwrite in admin.py to set it. I'm not trying to install multi language or some advanced localization, just want to change the name of that one menu item :) Any ideas?
Change name of the "Authentication and Authorization" menu in Django/python
0
0
0
2,299
39,212,117
2016-08-29T17:43:00.000
1
0
1
0
python,jupyter
40,406,124
2
false
0
0
Another option which requires less mouse movement and clicks than JGreenwell's answer, especially if you prefer accomplishing this with speedy keyboard work like I would: (1) click in cell #1; (2) select all code (Ctrl+A); (3) comment out the code (Ctrl+/); (4) go to the next cell (Shift+Enter) -- it will execute the cell, but that is meaningless since all the code is commented; (5) repeat steps 2-4 up to cell #5; (6) select cell #200 and select Run All Above; (7) go back and un-comment cells 1-5 (Ctrl+A, Ctrl+/). You can easily move between cells using the keyboard: press Esc (command mode), press K (up) or J (down) to select an adjacent cell, press Enter (to enter the code), then press Ctrl+A and Ctrl+/ to un-comment, and repeat from Esc.
1
2
0
Say I have a Jupyter notebook with 200 cells. How can I run from the 5th cell to the 100th cell, without running the other parts of the notebook? Right now I comment out the 101st-200th and 1st-4th cells. I'm sure that is not the best practice.
how to run jupyter notebook from 5th cell to 100th cell, without running other part of the notebook?
0.099668
0
0
935
39,212,632
2016-08-29T18:18:00.000
1
0
1
0
python,python-2.7,python-3.x
39,213,600
4
false
0
0
Because the default value of print's end parameter is \n; if you pass end='\t' or end=' ' instead, you will see the output stay on the same line. This keyword form works in Python 3, and in 2.7 with from __future__ import print_function.
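A quick illustration (the __future__ import is only needed on Python 2.7):

    from __future__ import print_function

    print("a")            # ends with the default "\n", so a newline follows
    print("b", end=" ")   # ends with a space instead of a newline
    print("c")            # prints on the same line as "b"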
1
1
0
Just typing print only gives a newline in Python. Typing print without the brackets in 3.x will also give a newline. Why?
Why print command gives a new line even though there is no data to print
0.049958
0
0
186
39,215,155
2016-08-29T21:02:00.000
2
0
1
0
python,windows,cython,appveyor
39,215,388
2
true
0
0
The typical case in my experience is where you've managed to grab a value from unallocated or unaligned storage -- in short, a memory usage error that finesses the compiler's ability to detect such abuse. Normally, you get a garbage value; the print statement forces an evaluation or memory alignment that "fixes" the problem. This is hard to do accidentally in most modern languages, unless you specifically "hard-cast" a value, changing a type without altering the bit value.
1
1
0
I have an application that's written in a combination of Python and Cython. I recently added a new feature and tests to this application. The tests pass on my local machine (a macbook), but when I push to appveyor (a Windows CI service) the tests fail. This in itself is not so strange. When I add print statements to my Cython code in an attempt to see what is happening when it runs on appveyor, the tests no longer fail. This is frustrating because it leaves me no way to figure out what's happening when the tests fail on appveyor. It also is just perplexing because it violates my understanding of how Python and Cython work in general. My code is complex and there's no reasonable way for me to share an example of this phenomenon. However, I'm looking for reasons this could happen. How and in what situations might a print statement in Cython code have an effect on other computations?
Adding print statements to cython code affects output
1.2
0
0
486
39,215,404
2016-08-29T21:20:00.000
1
1
1
0
python,arduino,processing,firmata,pyprocessing
40,960,308
1
true
0
0
You'll have to place the library in question inside your sketch folder. Python Mode doesn't use your system python, and cannot see any of the modules installed there.
1
0
0
The following code produces an error message in the Processing IDE: from pyfirmata import Arduino,util "No module named pyfirmata" I have no problem running the code directly in the python 2.7 interpreter. But, I can't access the Processing API from the interpreter.
Can't access pyFirmata from Processing.py
1.2
0
0
177
39,217,582
2016-08-30T01:39:00.000
0
0
0
1
python,terminal,directory
39,286,424
1
false
0
0
Try using autocompletion on TAB key press: maybe the names contain some whitespace (less probable). Check the ls -l output: maybe these directories are just broken symbolic links.
1
0
0
the directory /Library/Frameworks/Python.framework/ contains the following four elements: Headers Python Resources Versions When I try to cd into either Headers, Python or Resources (e.g. cd Resources), I get an error message telling me that the element does not exist (e.g.: "-bash: cd: Resources: No such file or directory"). What's going on here?
-bash: cd: Resources: No such file or directory
0
0
0
729
39,217,618
2016-08-30T01:45:00.000
0
0
1
0
python,matlab,dictionary,struct
39,229,405
1
false
0
0
So Python -> MATLAB is a bit tricky with dictionaries/structs, because the type of object MATLAB expects is a dictionary where each key is a single variable you want from Python as a simple data type (array, int, etc.). It doesn't like having nested dictionaries. I recommend either (1) storing each dictionary separately instead of as part of a higher-level object, or (2), even though it is not very nice, converting the structs to individual variables. MATLAB should be able to handle simple non-nested structures like that.
1
0
1
I have a function in Python that outputs a dict. I run this function into MATLAB and save the output to a parameter (say tmp) which is a dict of nested other dicts itself. Now I want to convert this file into a useful format such as structure. To elaborate: tmp is a dict. data = struct(tmp) is a structure but the fields are other dicts. I tried to go through every field and convert it individually, but this is not very efficient. Another option: I have the output saved in a JSON file and can load it into MATLAB. However, it is still not useable.
Convert Python dict files into MATLAB struct
0
0
0
1,329
39,217,639
2016-08-30T01:50:00.000
1
0
1
0
python
39,217,670
7
false
0
0
You can do your list comprehension logic with tuples and then flatten the resulting list: [n for pair in [(x, x+1) for x in [1,5,7]] for n in pair]
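For example, spelling out the two steps and the result:

    pairs = [(x, x + 1) for x in [1, 5, 7]]     # [(1, 2), (5, 6), (7, 8)]
    flat = [n for pair in pairs for n in pair]  # [1, 2, 5, 6, 7, 8]
    print(flat)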
2
5
0
For example, how to convert [1, 5, 7] into [1,2,5,6,7,8] into python? [x, x+1 for x in [1,5,7]] can't work for sure...
How to convert a list by mapping an element into multiple elements in python?
0.028564
0
0
93
39,217,639
2016-08-30T01:50:00.000
0
0
1
0
python
39,217,679
7
false
0
0
If you just want to fill the list with the numbers between the min and max+1 values you can use [i for i in range (min(x),max(x)+2)] assuming x is your list.
2
5
0
For example, how to convert [1, 5, 7] into [1,2,5,6,7,8] into python? [x, x+1 for x in [1,5,7]] can't work for sure...
How to convert a list by mapping an element into multiple elements in python?
0
0
0
93
39,217,946
2016-08-30T02:34:00.000
0
0
1
0
python
39,218,048
2
false
0
0
Yes, this is perfectly normal for any beginner. What you need to do is just continue doing what you are doing. The beginning will be a difficult learning curve, so aim for more beginner/basic level questions (i.e. beginner-level online challenges). Develop plenty of your own programs for fun and, whenever you get stuck, ask online. When other people answer your questions online, look for the most elegant solutions (i.e. quickest execution time, neat coding style, etc...) and try to remember their solutions. The fastest way (in my opinion) to learn to code is to build your own programs for fun. Just never give up, no matter how hard and frustrating it gets.
1
0
0
So I've been learning how to program for about a month now. I just finished reading 'Invent Your Own Games with Python'. Before the book I had never seen a line of code. After reading the book I'm able to read code and understand what's going on. But that's about it. I've got the syntax down and can use all the flow statements. I'm still not able to create my own projects, and when I try to do a challenge online, I just sit there and stare at it not knowing where to start. Is this normal? Is this one of those things where one day I'm gonna wake up and it's gonna click in my head? Any suggestions as to what I can do to accelerate my learning?
Just being Curious
0
0
0
56
39,218,109
2016-08-30T02:57:00.000
0
0
1
0
python,directory,cwd
39,218,124
3
false
0
0
If the user enters the complete file path including the directory, you can parse it (using os.path) and then os.chdir() there.
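A small sketch of that idea; the path is a hypothetical example standing in for whatever the user types.

    import os

    full_path = "/some/dir/data.txt"        # hypothetical path entered by the user
    directory = os.path.dirname(full_path)  # "/some/dir"
    filename = os.path.basename(full_path)  # "data.txt"
    if directory:
        os.chdir(directory)                 # switch the working directory to the file's folder
    with open(filename) as f:
        data = f.read()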
1
0
0
Let's say the program asks the user for a file name to read and then a file to write the processed data into. Is there a way to get the directory of the needed file so the program can alter the current directory to use it? Or is there another way to access that file?
Import files not in the current working directory
0
0
0
201
39,218,263
2016-08-30T03:20:00.000
0
0
0
1
linux,python-3.x,twisted,fedora
39,235,093
3
false
0
0
Getting the latest version of Twisted requires Python 2.7+, because 2.6 support has finally reached EOL. So if you're running an old Python, I'd suggest you build your own Python 2.7+ and altinstall it. It's very important that you don't override CentOS's default Python, as this could lead to a disastrous situation. Once Python is updated, you can do pip install twisted. Alternatively, you could get a yum repo with updated versions of Python and Twisted.
2
0
0
Is there a source tarball of Twisted available for download which could be used to build it in Fedora or CentOS? I see the download for Ubuntu/Debian on the site, of course.
Twisted setup in Fedora or CentOS
0
0
0
450
39,218,263
2016-08-30T03:20:00.000
0
0
0
1
linux,python-3.x,twisted,fedora
39,620,252
3
false
0
0
You can use python pip to install twisted in centos or fedora. Make sure you have python-pip installed then just do sudo pip install twisted in terminal
2
0
0
Is there a source tarball of Twisted available for download which could be used to build it in Fedora or CentOS? I see the download for Ubuntu/Debian on the site, of course.
Twisted setup in Fedora or CentOS
0
0
0
450
39,219,761
2016-08-30T05:58:00.000
3
0
0
0
python,ssh,pycharm,interpreter
39,244,970
1
true
0
0
I've found a solution, the environment variables (including python path) should be defined from pycharm: Run/Debug configurations-> Environment variables. Pycharm won't use bashrc paths.
1
2
0
I am trying to run my code in the server using ssh remote interpreter. The connection and the deployment work but when I want to import libraries located in the server it gives an import error ssh://***@****.com:22/usr/bin/python -u /home//main.py Traceback (most recent call last): File "/home//main.py", line 11, in from clplibs.clp import ContinuousLearningPlatform as clp ImportError: No module named clplibs.clp Process finished with exit code 1
Import error with remote interpreter in pycharm
1.2
0
0
2,436
39,221,697
2016-08-30T07:47:00.000
0
1
0
1
python-2.7,ldap,openldap
39,302,136
2
false
0
0
It's an operational attribute, so you have to request it explicitly, or include "+" in the attributes to be returned. However you should not be using this for your own purposes. It's none of your business. It can change across backup/restore, for example.
1
0
0
I am trying to retrieve internal attributes from openldap server. More specifically I need to retrieve entryUUID attribute of an object. In LDAP, objectGUID is being fetched from server but couldn't retrieve similar field from openldap. SCOPE_SUBTREE is being used to retrieve attributes. Anyone knows way out? Thanks in advance.
Retrieve Internal attributes(entryUUID) from openldap server
0
0
0
2,194
39,221,915
2016-08-30T07:58:00.000
0
0
1
0
python
39,244,597
2
false
0
0
Since you get something from disk, you open a file. So you could use the class with Python's with statement; you should check out context managers. With them, you will be able to implement the functionality you want each time someone accesses the config file, through the __enter__ method, and (if needed) implement the functionality for releasing the resource in the __exit__ method.
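A hedged sketch of what that could look like; the file name, the way the config is parsed and the edit method are placeholders for illustration.

    class ConfigManager(object):
        def __init__(self, path="config.ini"):   # hypothetical config file name
            self.path = path
            self.config = None

        def __enter__(self):
            # re-read the latest config from disk every time the manager is entered
            with open(self.path) as f:
                self.config = f.read()
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            self.config = None   # drop the cached copy; add saving/cleanup here if needed
            return False

        def edit_config(self):
            # every edit method can rely on self.config being fresh
            print(len(self.config))

    # usage: the fresh read happens automatically at the top of each "with" block
    # with ConfigManager() as cm:
    #     cm.edit_config()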
1
0
0
I am writing a Python app which will use a config file, so I am delegating the control of the config file to a dedicated module, configmanager, and within it a class, ConfigManager. Whenever a method within ConfigManager is run, which will change my config file in some way, I will need to get the latest version of the file from the disk. Of course, in the spirit of DRY, I should delegate the opening of the config file to it's own function. However, I feel as though explicitly calling a method to get and return the config file in each function that edits it is not very "clean". Is there a recommended way in Python to run a method, and make a value available to other methods in a class, whenever and before a method is run in that class? In other words: I create ConfigManager.edit_config(). Whenever ConfigManager.edit_config() is called, another function ConfigManager.get_config_file() is run. ConfigManager.get_config_file() makes a value available to the method ConfigManager.edit_config(). And ConfigManager.edit_config() now runs, having access to the value given by ConfigManager.get_config_file(). I expect to have many versions of edit_config() methods in ConfigManager, hence the desire to DRY my code. Is there a recommended way of accomplishing something like this? Or should I just create a function to get the config fine, and manually call it each time?
Python classes - run method when any other method is called
0
0
0
1,599
39,222,209
2016-08-30T08:16:00.000
2
0
1
0
windows,python-3.x,dll,pygame,usb
39,222,282
1
true
0
0
If you installed python on your home PC "for all users" the .dll is in the c:\windows\system32\ (or equivalent). Copy it to your USB drive folder or reinstall python "just for me" on the USB drive so it contains everything in one place.
1
2
0
I want to run python3 on our school computers (under Windows) during our programming classes. I installed python 3.1 onto a USB flash drive at home (using Windows), and brought it to school. However, it gives me the following error: The program can't start because python31.dll is missing from your computer. Try reinstalling the program to fix this problem. How do I get the file, where do I put it (can I put it onto the USB itself?) and/or is there a better alternative for python3 portability? The reason why I don't simply use an online editor is because I also want to have pygame along with python on the USB.
Running python3 from a usb drive (portably)
1.2
0
0
1,856
39,233,347
2016-08-30T17:09:00.000
0
0
0
0
python,django,model-view-controller,permissions
39,233,603
1
false
1
0
You could write your own decorator for this. Or use django.contrib.auth.decorators.user_passes_test(your_test_func) to create a custom decorator. In both cases, have a look at the source code of the permission_required decorator in the above module.
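A hedged sketch of the user_passes_test route; the app label, permission codenames and template name are placeholders, not from the question.

    from django.contrib.auth.decorators import user_passes_test
    from django.shortcuts import render

    def has_either_permission(user):
        # True if the user holds at least one of the two permissions
        return user.has_perm("app.perm_a") or user.has_perm("app.perm_b")

    @user_passes_test(has_either_permission)
    def my_view(request):
        # the view body runs only when the test above passed
        return render(request, "my_template.html")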
1
3
0
For my view, I am checking the permission through the @permission_required decorator, but I really wish to check for either permission A or permission B, so that if the user has at least one of the two permissions, the view executes... Is there a way to do this?
Check django permission or operator?
0
0
0
594
39,235,274
2016-08-30T19:05:00.000
1
0
0
0
python,google-cloud-dataflow,apache-beam
39,255,667
1
false
0
0
While this isn't part of the base distribution, this is something you could implement by processing these elements and sorting them as part of a global window before writing out to a file, with the following caveats: The entire contents of the window would need to fit in memory, or you would need to chunk up the file into smaller global windows. If you are doing the second option, you'd need to have a strategy for writing the smaller windows in order to the file.
1
4
1
I'm using beam to process time series data over overlapping windows. At the end of my pipeline I am writing each element to a file. Each element represents a csv row and one of the fields is a timestamp of the associated window. I would like to write the elements in order of that timestamp. Is there a way to do this using the python beam library?
In python apache beam, is it possible to write elements in a specific order?
0.197375
0
0
998
39,235,868
2016-08-30T19:42:00.000
3
1
1
0
python,multithreading,embedded,pyserial
39,236,047
1
true
0
0
So, ultimately you're looking for advice on how to debug this sort of time dependent problem. Somehow, state is getting created somewhere in your computer, your python process, or the microcontroller that affects things. (It's also theoretically possible that an external environmental factor is affecting things. As an example, if your microcontroller has a realtime clock, you could actually have a time-of-day dependent bug. That seems less likely than other possibilities). First, try restarting your python program. If that fixes things, then you know some state is created inside python or your program that causes the problem. Update your question with this information. If that doesn't fix it, try rebooting your computer. If that fixes things, then you strongly suspect some state in your computer is affecting things. If none of that works, try rebooting the micro controller. If rebooting both the PC and the micro controller doesn't fix things, include that in your question as it is very interesting data. Examples of state that can get created: flow control. The micro controller could be sending xoff, clearing clear-to-send or otherwise indicating it does not want data Flow control in the other direction: your PC could be sending xoff, clearing request-to-send or otherwise indicating that it doesn't want data Your program gets pyserial into a confused state--either because of a bug in your code or pyserial. Serial port configuration--the serial port settings could be getting messed up. Hyper terminal could do various things to clear flow control state or reconfigure the serial port. If restarting python doesn't fix the problem, threading is very unlikely to be your issue. If restarting python fixes the problem threading may be an issue.
1
2
0
I am not expecting a code here, but rather to pick up on knowledge of the folks out there. I have a python code - which uses pyserial for serial communication with an Micro Controller Unit(MCU). My MCU is 128byte RAM and has internal memory. I use ser.write command to write to the MCU and MCU responds with data - I read it using ser.read command. The question here is - It is working excellently until last week. Since yesterday - I am able to do the serial communication only in the morning of the day. After a while, when I read the data the MCU responds with"NONE" message. I read the data next day, it works fine. Strange thing is - I have Hyperterminal installed and it properly communicates with the MCU and reads the data. So I was hoping if anyone have faced this problem before. I am using threads in my python program - Just to check if running the program mulitple times with threads is causing the problem. To my knowledge, threads should only effect the Memory of my PC and not the MCU. I rebooted my Computer and also the MCU and I still have this problem. Note: Pycharm is giving me the answers I mentioned in the question. If I do the same thing in IDLE - it is giving me completely different answers
Pyserial - Embedded Systems
1.2
0
0
252
39,236,025
2016-08-30T19:53:00.000
0
1
0
1
python,linux,installation,hdf5
67,224,754
4
false
1
0
For CentOS 8, I got the warning message "Warning: Couldn't find any HDF5 C++ libraries. Disabling HDF5 support." and I solved it using the command: sudo yum -y install hdf5-devel
1
6
0
I want to use h5py which needs libhdf5-dev to be installed. I installed hdf5 from the source, and thought that any options with compiling that one would offer me the developer headers, but doesn't look like it. Anyone know how I can do this? Is there some other source i need to download? (I cant find any though) I am on amazon linux, yum search libhdf5-dev doesn't give me any result and I cant use rpm nor apt-get there, hence I wanted to compile it myself.
how to install libhdf5-dev? (without yum, rpm nor apt-get)
0
0
0
20,706
39,237,180
2016-08-30T21:13:00.000
1
0
1
0
python
39,237,267
1
true
0
0
The tuple you receive from sys.exc_info() can safely be passed to and used from other threads, even after the death of the thread the tuple came from. The references from the tuple keep things like stack state alive even when the thread is dead. (You won't be able to access the tuple as sys.exc_info() from other threads, so you'll need to store it somewhere before the thread dies, but it sounds like you're aware of that.)
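A small sketch of the pattern; storing the tuple in a shared list is just one way to hand it over to another thread.

    import sys
    import threading
    import traceback

    errors = []

    def worker():
        try:
            1 / 0
        except Exception:
            errors.append(sys.exc_info())   # keep the (type, value, traceback) tuple

    t = threading.Thread(target=worker)
    t.start()
    t.join()                                # the worker thread is dead at this point

    if errors:
        exc_type, exc_value, exc_tb = errors[0]
        traceback.print_exception(exc_type, exc_value, exc_tb)   # still usable here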
1
0
0
Supposing I catch an exception inside a thread and store exc_info tuple somewhere. Then thread finishes. Is my exc_info content still accessible and correct, so I can interpret it later in other thread?
Is result of sys.exc_info() "stable" when its origin thread finishes?
1.2
0
0
23
39,237,729
2016-08-30T21:58:00.000
1
0
1
0
emacs,ipython
39,353,949
1
true
0
0
With python-mode.el, the behaviour is controlled by the customizable variable py-split-window-on-execute. For changes on the fly, the commands py-split-window-on-execute-off and py-split-window-on-execute-on exist.
1
0
0
When I launch ipython in emacs, it rearranges all the windows. I find this annoying. Yet experimentation with my .emacs file yielded no solution. Where should I look? What should I suspect? What can I query?
Launching ipython in emacs undesireably moves my windows
1.2
0
0
17
39,241,151
2016-08-31T05:06:00.000
1
0
0
0
python,django,discourse
39,243,456
1
true
1
0
That sounds plausible. To make sure a user is logged in to both, you may put one of the auths in front of the other. For example, if Discourse is in front of Django, you can use something like the built-in RemoteUserMiddleware. In general, if they are going to be hosted on different domains, take a look at JWT. It has been gaining ground for marrying different services, and the only thing you need is to be able to decode the JWT token, which a lot of languages can do nowadays in the form of libraries.
1
2
0
I need a modern looking forum solution that is self hosted (to go with a django project) The only reasonable thing I can see using is discourse, but that gives me a problem... How can I take care of auth between the two? It will need to be slightly deeper than just auth because I will need a few User tables in my django site as well. I have been reading about some SSO options, but I am unclear on how to appraoch the problem down the road. here is the process that I have roughly in my head... Let me know if it sounds coherent... Use Discourse auth (since it already has social auth and profiles and a lot of user tables. Make some SSO hook for django so that it will accept the Discourse login Upon account creation of the Discourse User, I will send (from the discourse instance) an API request that will create a user in my django instance with the proper user tables for my django site. Does this sound like a good idea?
How would Auth work between Django and Discourse (working together)
1.2
0
0
696
39,241,643
2016-08-31T05:47:00.000
1
0
1
0
python,pdf-generation,importerror
60,907,464
10
false
0
0
I had the same issue and fixed it by switching the Python interpreter (bottom left corner in Visual Studio Code). Try different versions and eventually it should work.
4
27
0
I use Spyder, with Python 2.7, on Windows 10. I was able to install the PyPDF2 package with a conda command from my prompt. It said installation complete. Yet, if I try to run a simple import command: import PyPDF2 I get the error: ImportError: No module named PyPDF2 How can I fix this?
"no module named PyPDF2" error
0.019997
0
0
90,897
39,241,643
2016-08-31T05:47:00.000
0
0
1
0
python,pdf-generation,importerror
71,316,431
10
false
0
0
I encountered the same issue today while doing a Udemy course. Try the following: type import sys then !{sys.executable} -m pip install PyPDF2 and then import PyPDF2. Hope it works for you too.
4
27
0
I use Spyder, with Python 2.7, on Windows 10. I was able to install the PyPDF2 package with a conda command from my prompt. It said installation complete. Yet, if I try to run a simple import command: import PyPDF2 I get the error: ImportError: No module named PyPDF2 How can I fix this?
"no module named PyPDF2" error
0
0
0
90,897
39,241,643
2016-08-31T05:47:00.000
31
0
1
0
python,pdf-generation,importerror
48,355,361
10
false
0
0
In my case, I was trying to import 'pyPdf2' instead of 'PyPDF2'. Observe the case. import PyPDF2 is correct.
4
27
0
I use Spyder, with Python 2.7, on Windows 10. I was able to install the PyPDF2 package with a conda command from my prompt. It said installation complete. Yet, if I try to run a simple import command: import PyPDF2 I get the error: ImportError: No module named PyPDF2 How can I fix this?
"no module named PyPDF2" error
1
0
0
90,897
39,241,643
2016-08-31T05:47:00.000
6
0
1
0
python,pdf-generation,importerror
45,600,015
10
false
0
0
I had this problem too after installing PyPDF2 like this: sudo apt-get install python-pypdf2. When running some simple script with import PyPDF2, I would get an error like this: ImportError: No module named PyPDF2. The solution was to also install pdfmerge, like this: pip install pdfmerge
4
27
0
I use Spyder, with Python 2.7, on Windows 10. I was able to install the PyPDF2 package with a conda command from my prompt. It said the installation was complete. Yet, if I try to run a simple import command: import PyPDF2 I get the error: ImportError: No module named PyPDF2 How can I fix this?
"no module named PyPDF2" error
1
0
0
90,897
39,243,626
2016-08-31T07:45:00.000
0
0
0
0
php,python,mysql,database
39,244,221
1
true
0
0
No, it is not possible to call external scripts from MySQL. The only thing you can do is add an AFTER UPDATE trigger that writes into some queue table. Then you will have the python script POLLING the queue and doing whatever it's supposed to do with the rows it finds.
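A minimal sketch of the polling side, assuming a hypothetical update_queue table that the trigger fills and a hypothetical handle_update() function; the table name, credentials and poll interval are all placeholders.

```python
import time
import mysql.connector  # or any other MySQL driver

conn = mysql.connector.connect(host="localhost", user="worker",
                               password="secret", database="mydb")

def handle_update(payload):
    # hypothetical handler for one queued row
    print("processing", payload)

while True:
    cur = conn.cursor(dictionary=True)
    cur.execute("SELECT id, payload FROM update_queue ORDER BY id")
    for row in cur.fetchall():
        handle_update(row["payload"])
        cur.execute("DELETE FROM update_queue WHERE id = %s", (row["id"],))
    conn.commit()
    cur.close()
    time.sleep(5)  # poll every few seconds
```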
1
1
0
I want to execute a script (probably written in Python) when an update query is executed on a MySQL database. The query is going to be executed from an external system written in PHP to which I don't have access, so I can't edit the source code. The MySQL server is installed on our machine. Any ideas how I can accomplish this, or is it even possible?
Executing script when SQL query is executed
1.2
1
0
54
39,244,082
2016-08-31T08:09:00.000
1
0
1
0
python-2.7,multiprocessing
39,244,140
2
false
0
0
If these processes are independent then something like this is not possible. At least not without some additional mechanisms like sockets (which greatly increases complexity). If these are created via multiprocessing and my_list is just a list then each process will have its own copy of the list. Again in multiprocessing if you define my_list as multiprocessing.Queue then indeed this will be a shared structure between processes.
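A minimal sketch of the multiprocessing.Queue variant mentioned above; the five worker processes and the values they put are just placeholders for the real work.

```python
import multiprocessing as mp

def worker(q, value):
    q.put(value)  # each process contributes its own item

if __name__ == "__main__":
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q, i)) for i in range(5)]
    for p in procs:
        p.start()
    results = [q.get() for _ in range(5)]  # one item per process
    for p in procs:
        p.join()
    print(results)
```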
1
1
0
Let's say that I have an empty list my_list and 5 processes pr1, pr2, pr3, pr4, pr5, where each of them appends something to my list. My question is : Is something like this possible? And will it behave normaly or an error will occur?
Mutual list in multiple processes
0.099668
0
0
57
39,249,639
2016-08-31T12:26:00.000
1
0
0
0
python,arrays,datetime,numpy,multidimensional-array
40,301,138
2
false
0
0
Posting the pseudo solution I used: The problem here is the lack of date-time indexing for 3d array data (i.e. satellite, radar). Whilst there are time series functions in pandas, there are none for arrays (as far as I'm aware). This solution was possible because the data files I use have the date-time in the name, e.g. '200401010000' is 'yyyymmddhhMM'. Construct a 3d array with all the data (missing times in places). Using the list of data files (os.listdir), create a list of timestamps (length matches the 3d array length). Create dfa using the timestamps from (2) as the dfa index and create a column 'inx' of running integers (range(0, len(array)) = integers = index into the 3d array). Create a datetime index using the data start and end times and the known frequency of the data (no missing datetimes), and create a new dfb using this as the index. Left merge dfb from (4) with dfa from (3), so you now have an accurate datetime index and an 'inx' column containing the 3d array index position, with NaNs at missing data. Using this you can then resample the df, to for example 1 day, taking the min and max of 'inx'. This gives you the start - end positions for your array functions. You can also insert arrays of NaNs at missing datetimes (i.e. where 'inx' min/max is NaN) so that your 3d array matches the length of the actual datetimes. A sketch of these steps follows below. Comment if you have questions or if you know of a better solution / package for this problem.
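A rough sketch of the timestamp/reindex/resample steps, assuming hypothetical file names of the form 'yyyymmddhhMM...' and a hypothetical 15-minute data frequency; adjust the parsing and the freq argument to the real data.

```python
import pandas as pd

filenames = ["200401010000.dat", "200401010015.dat", "200401010045.dat"]  # placeholder list

# timestamps from the file names, plus the running 3d-array index 'inx'
timestamps = pd.to_datetime([f[:12] for f in filenames], format="%Y%m%d%H%M")
dfa = pd.DataFrame({"inx": range(len(timestamps))}, index=timestamps)

# full datetime index with no gaps; reindexing makes missing times NaN
full_index = pd.date_range(timestamps.min(), timestamps.max(), freq="15min")
dfb = dfa.reindex(full_index)

# resample to 1 day and take min/max of 'inx' as slice bounds into the 3d array
daily = dfb.resample("D")["inx"].agg(["min", "max"])
print(daily)
```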
1
1
1
Is there a way to index a 3 dimensional array using some form of time index (datetime etc.) on the 3rd dimension? My problem is that I am doing time series analysis on several thousand radar images and I need to get, for example, monthly averages. However if i simply average over every 31 arrays in the 3rd dimension it becomes inacurate due to shorter months and missing data etc.
python 3D numpy array time index
0.099668
0
0
949
39,251,079
2016-08-31T13:31:00.000
1
1
0
0
python,raspberry-pi3
39,251,247
2
false
0
0
Because of the Pi 3's BT/wifi support, the Power LED is controlled directly from the GPU through a GPIO expander. I believe that there's no way to do what you want
1
2
0
How to read the status of the Power-LED (or the rainbow square - if you like that better) on Raspberry Pi 3 to detect an low-voltage-condition in a python script? Since the wiring of the Power-LED has changed since Raspberry Pi 2, it seems that GPIO 35 cannot longer be used for that purpose. Update: Since it seems to be non-trivial to detect a low-power-condition in code on Raspberry Pi 3, i solved it with a quick hardware hack. I soldered a wire between the Output of the APX803 (the power-monitoring device used on Pi 3) and GPIO26 and that way I can simply read GPIO26 to get the power status. Works like a charm.
How to read the status of the Power-LED on Raspberry Pi 3 with Python
0.099668
0
0
3,791
39,251,169
2016-08-31T13:35:00.000
1
0
1
0
python,multithreading,mpi,mpi4py
39,260,892
1
false
0
0
You do not need to implement any hardcore idea. mpiexec is already spawning the processes for you. Assuming you're using the pudb Python debugger, you could do the following: mpiexec -np 4 xterm -e python2.7 -m pudb.run helloword.py The -e option of xterm specifies which program xterm is going to execute. PS: I have not tested it with Python 3.5 but a similar solution will work
1
0
0
To use mpi4py, the standard approach is to use mpiexec to start a program using multiple MPI processes. For example mpiexec -n 4 python3.5 myprog.py. Now, that makes debugging difficult, because one cannot straightforwardly use the Python interpreter plus maybe an IDE debugger on top of it. However, it is no problem to debug a multi-threaded application. So my idea would be: Instead of using mpiexec to spawn the processes, I have a Python script that will spawn several threads, each of them acting as an MPI process, all happening within the Python interpreter. So the use of mpiexec would not be necessary and I could debug my application like any other multi-threaded Python program. Would that be possible, and how? (In general, I'd be very happy to find some good example collections or tutorials for mpi4py, there's not very much available.)
mpi4py: Spawn processes as Python threads for easier debugging
0.197375
0
0
1,007
39,253,346
2016-08-31T15:15:00.000
2
0
0
1
apache-kafka,message-queue,messaging,kafka-python
39,254,261
1
true
0
0
A consumer group takes some time to contact the group coordinator and have partitions assigned automatically, which causes the delay. If you use manual assignment, you will get less delay.
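A minimal kafka-python sketch of manual assignment; the broker address, topic name, and partition number are assumptions for the example.

```python
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")

# Manual assignment skips the group-coordinator / rebalance step entirely
consumer.assign([TopicPartition("my-topic", 0)])

for message in consumer:
    print(message.offset, message.value)
```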
1
0
0
I have written a worker service to consume messages from a Kafka queue, and I have also written a test script to add messages to the queue every few seconds. What I have noticed is that often the consumer will sit by idle for minutes at a time, while messages are being added to the queue. Then suddenly the consumer will pick up the first message, process it, then rapidly move on to the rest. So it eventually catches up, but I'm wondering why there such a delay in the first place?
Why is there a delay between writing to and reading from Kafka queue?
1.2
0
0
276
39,256,378
2016-08-31T18:13:00.000
0
0
1
0
python,python-3.x,bpy
39,298,240
1
false
0
0
Found the error: the pyd file was compiled with a 32-bit Python but was called with a 64-bit Python
1
0
0
I could not get a working example of importing a compiled library (pyd file) in Python. I compiled the blender source code; the result is a bpy.pyd file. This file is placed in the python\lib folder. In the source code I have import bpy The file is found at runtime, but I get a runtime error that the module could not be imported. Does someone have good documentation on importing compiled python modules? I searched ~100 entries, but found only general definitions on how to do this. I tried all suggestions without success. Thanks!
How to Import compiled libs (pyd) in python
0
0
0
703
39,256,913
2016-08-31T18:47:00.000
0
0
1
0
python,ipython,jupyter-notebook,plotly
56,981,228
3
false
0
0
I also met this annoying issue. I found no way to make the plots reappear in the notebook, but a compromise is to display them on an HTML page via File -> Print Preview.
1
8
1
When I create a notebook with plotly plots, save and reopen the notebook, the plots fail to render upon reopening - there are just blank blocks where the plots should be. Is this expected behavior? If not, is there a known fix?
Plotly + iPython Notebook - Plots Disappear on Reopen
0
0
0
1,438
39,257,126
2016-08-31T19:01:00.000
1
0
0
0
python,listbox,wxpython
39,257,327
1
false
0
1
Put your instances in a list. Use GetSelection to retrieve the index of the item you selected from the listbox. Get the corresponding instance from that index in the list.
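A minimal sketch of that approach; the Item class, its title attribute, and the sample data are placeholders for the real class instances.

```python
import wx

class Item:
    def __init__(self, title, payload):
        self.title = title
        self.payload = payload

items = [Item("First", 1), Item("Second", 2), Item("First", 3)]  # titles may repeat

class Frame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Demo")
        self.listbox = wx.ListBox(self, choices=[it.title for it in items])
        self.listbox.Bind(wx.EVT_LISTBOX, self.on_select)

    def on_select(self, event):
        idx = self.listbox.GetSelection()
        if idx != wx.NOT_FOUND:
            instance = items[idx]  # the selected instance, not just its title
            print(instance.payload)

app = wx.App(False)
Frame().Show()
app.MainLoop()
```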
1
0
0
I am building a wxpython project, I have a list of elements that are a class instances. Each of this elements has an attribute title. In a ListBox I want to display only the titles, and when the title selected, after we GetSelection from listbox, the instance should be returned and not just the title. Is this achievable ? Note: Searching for the string is out of question because names(titles) may be recurring. Thank you.
WxPython, How to fill ListBox with class attributes
0.197375
0
0
38
39,257,759
2016-08-31T19:42:00.000
0
0
1
0
python,regex,format,expression
39,257,814
5
false
0
0
For the letter use [a-zA-Z], and if it's only upper case then [A-Z] is sufficient.
2
2
0
I need to match things that format something along the lines of 657432-76, 54678-01, 54364A-12 I got (r'^\d{6}-\d{2}$') and (r'^\d{5}-\d{2}$') but how do you get the letter? thanks!!
Regular Expression with letter
0
0
0
47
39,257,759
2016-08-31T19:42:00.000
0
0
1
0
python,regex,format,expression
39,257,845
5
false
0
0
It seems the pattern generically is 5 digits, then at most one extra letter or digit, then - and then 2 numbers, so you'd use this pattern: pattern = r'^\d{5}.?-\d{2}$'
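A small check of a slightly tightened variant (an explicit optional letter/digit instead of .?), run against the three samples from the question:

```python
import re

pattern = re.compile(r'^\d{5}[0-9A-Za-z]?-\d{2}$')
for s in ('657432-76', '54678-01', '54364A-12'):
    print(s, bool(pattern.match(s)))  # all three print True
```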
2
2
0
I need to match things that format something along the lines of 657432-76, 54678-01, 54364A-12 I got (r'^\d{6}-\d{2}$') and (r'^\d{5}-\d{2}$') but how do you get the letter? thanks!!
Regular Expression with letter
0
0
0
47
39,257,857
2016-08-31T19:48:00.000
3
0
1
0
python,multithreading,cpu
39,257,984
1
true
0
0
The question is very complex, because on the same CPU there can be an arbitrary number of other threads running, from processes that are not under your control. Instead, you can estimate whether a certain piece of code is worth executing in a separate thread, based on the time needed to create a new thread.
1
2
0
I am beginning to appreciate the usefulness of the threading library on python and I was wondering what was the optimal number of threads to keep open in order to maximise the efficiency of the script I am running. Please do keep in mind that my one and only priority is speed, I don't need to multitask /do other stuff (think a computing dedicated server) To be specific, if I run on a 4 cores / 4 threads CPU, will the optimal number of threads be 4? 8? 16? Then again, if I had more than a thread per core (4 core 8 t), would the answer change? Also, does cpu clock influence any of this? I understand there are a variety of implications on the matter but, as much as my research on the subject went, I still feel to be very much in the dark. (I gathered it's not so simple as n threads = n processes)
CPU cores, treads and optimal number of workers - python threading
1.2
0
0
1,450
39,261,395
2016-09-01T01:28:00.000
0
0
0
1
mariadb,python-idle
39,261,467
1
false
0
0
IDLE is in the python-tools package.
1
0
0
Python community. I am looking for a Red Hat Enterprise Linux 7 version of IDLE - Python GUI. The only versions I have found are for Windows and Mac. I will be using it to test and build an API to tie in with HTTP.
idle-python for RHEL 7
0
0
0
68
39,268,717
2016-09-01T10:11:00.000
0
0
1
0
python
39,270,661
3
false
0
0
Python has a lot of stuff. The very basics you can learn on codecademy.com. On python-course.org you have some more advanced topics. If you want to learn something specific you should look at the official Python documentation. Overall, I don't think you want to use Anaconda. It is better to just install the modules with pip unless you are only doing scientific stuff or so.
1
0
0
newbie here. I'm using python (3) on my mac, and although I'm able to write some (basic) scripts, I realise I have lots of confusion around where python is stored, the famous usr/bin directory, where packages are saved, etc. For example I had pip installed and working fine, but then I installed miniconda and all of a sudden pip was 'managed' (for lack of a better term) by conda, some of the packages I had installed couldn't be found anymore etc. This highlights just how confused I am with all of this. Can you recommend a good resource that can explain how these things work together? Ideally something for beginners :)
python installations, directories and environments - good resource to understand the basics
0
0
0
35
39,273,012
2016-09-01T13:34:00.000
1
0
0
0
python,pandas
39,273,086
2
false
0
0
Instead of writing a CSV output which you have to re-parse, you can write and read the pandas.DataFrame in an efficient binary format with the methods pandas.DataFrame.to_pickle() and pandas.read_pickle(), respectively.
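A minimal sketch of handing a DataFrame from one script to the next this way; the file name and the toy frame are placeholders.

```python
import pandas as pd

df = pd.DataFrame({"ts": pd.to_datetime(["2016-09-01", "2016-09-02"]), "x": [1, 2]})

# end of the first script: dtypes (including datetimes) are preserved
df.to_pickle("step1_output.pkl")

# start of the next script: no datetime re-parsing needed
df2 = pd.read_pickle("step1_output.pkl")
assert df2["ts"].dtype == "datetime64[ns]"
```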
1
0
1
I currently have several python pandas scripts that I keep separate because of 1) readability, and 2) sometimes I am interested in the output of these partial individual scripts. However, generally, the CSV file output of one of these scripts is the CSV input of the next and in each I have to re-read datetimes which is inconvenient. What best practices do you suggest for this task? Is it better to just combine all the scripts into one for when I'm interested in running the whole program or is there a more Python/Pandas way to deal with this? thank you and I appreciate all your comments,
Suggestions to handle multiple python pandas scripts
0.099668
0
0
66
39,274,850
2016-09-01T14:54:00.000
1
1
0
0
python,django,testing,rpc,spyne
39,275,854
1
false
1
0
I believe that if you are using a service inside a test, that test should not be a unit test. You might want to consider using factory_boy or mock, both of which are Python modules to mock or fake an object - for instance, to fake the object that gives the response to your rpc call.
1
1
0
I am currently learning building a SOAP web services with django and spyne. I have successfully tested my model using unit test. However, when I tried to test all those @rpc functions, I have no luck there at all. What I have tried in testing those @rpc functions: 1. Get dummy data in model database 2. Start a server at localhost:8000 3. Create a suds.Client object that can communicate with localhost:8000 4. Try to invoke @rpc functions from the suds.Client object, and test if the output matches what I expected. However, when I run the test, I believe the test got blocked by the running server at localhost:8000 thus no test code can be run while the server is running. I tried to make the server run on a different thread, but that messed up my test even more. I have searched as much as I could online and found no materials that can answer this question. TL;DR: how do you test @rpc functions using unit test?
How to test RPC of SOAP web services?
0.197375
0
0
458
39,280,278
2016-09-01T20:21:00.000
0
0
0
0
python,pandas
39,280,341
2
false
0
0
data[c] does not return a single value; it returns a Series (a whole column of data), so the trailing .strip() is called on the Series rather than on each string, which is what raises the AttributeError. Apply the string operations to the entire column instead, either by chaining .str methods or with df[c].apply().
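A minimal sketch of applying the operations column-wise via the .str accessor, using toy data modelled on the question; the dtype check guards against non-string columns such as the id column.

```python
import pandas as pd

data = pd.DataFrame({"id": [1, 2, 3],
                     "city": ["Ontario", "Calgary ", "'Vancouver"],
                     "country": ["Canada", " ' Canada'", "Canada"]})

for c in data.columns:
    if data[c].dtype == object:  # only string-like columns
        data[c] = (data[c].astype(str)
                          .str.replace("'", "")
                          .str.replace('"', "")
                          .str.strip())
print(data)
```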
1
0
1
I'm trying to remove spaces, apostrophes, and double quote in each column data using this for loop for c in data.columns: data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip() but I keep getting this error: AttributeError: 'Series' object has no attribute 'strip' data is the data frame and was obtained from an excel file xl = pd.ExcelFile('test.xlsx'); data = xl.parse(sheetname='Sheet1') Am I missing something? I added the str but that didn't help. Is there a better way to do this. I don't want to use the column labels, like so data['column label'], because the text can be different. I would like to iterate each column and remove the characters mentioned above. incoming data: id city country 1 Ontario Canada 2 Calgary ' Canada' 3 'Vancouver Canada desired output: id city country 1 Ontario Canada 2 Calgary Canada 3 Vancouver Canada
Pandas - how to remove spaces in each column in a dataframe?
0
0
0
5,363
39,281,149
2016-09-01T21:30:00.000
0
0
0
0
python,matrix,equation-solving
39,290,812
2
false
0
0
If you're solving for the matrix, there is an infinite number of solutions (assuming that B is nonzero). Here's one of the possible solutions: Choose an nonzero element of B, Bi. Now construct a matrix A such that the ith column is C / Bi, and the other columns are zero. It should be easy to verify that multiplying this matrix by B gives C.
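A small numpy sketch of that construction with made-up B and C; any index i with a nonzero B_i works.

```python
import numpy as np

B = np.array([[2.0], [0.0], [5.0]])   # n x 1
C = np.array([[4.0], [6.0], [8.0]])   # n x 1

i = 0                                  # index of a nonzero element of B
A = np.zeros((C.shape[0], B.shape[0]))
A[:, i] = (C / B[i, 0]).ravel()        # i-th column is C / B_i, the rest stays zero

assert np.allclose(A @ B, C)
```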
1
0
1
I am trying to solve a matrix equation such as A.B = C. The A is the unknown matrix and I must find it. I have B (n*1) and C (n*1), so A must be n*n. I used the B.T * A.T = C.T method (numpy.linalg.solve(B.T, C.T)). But it produces an error: LinAlgError: Last 2 dimensions of the array must be square. So the problem is that B isn't square.
Solving matrix equation A B = C. with B(n* 1) and C(n *1)
0
0
0
339
39,282,825
2016-09-02T00:49:00.000
1
0
0
0
python,database,postgresql,restore
39,286,433
1
true
0
0
One of the few activities that you cannot perform while a user is connected is dropping the database. So – if that is what you are doing during restore – you'll have to change your approach. Don't drop the database (don't use the -C option in pg_dump or pg_restore), but rather drop and recreate the schemas and objects that don't depend on a schema (like large objects). You can use the -c flag of pg_dump or pg_restore for that. The other problem you might run into is connections with open transactions (state “idle in transaction”). Such connections can hold locks that keep you from dropping and recreating objects, and you'll have to use pg_terminate_backend() to get rid of them.
1
0
0
I run a number of queries for adhoc analysis against a postgres database. Many times I will leave the connection open through the day instead of ending after each query. I receive a postgres dump over scp through a shell script every five minutes and I would like to restore the database without cutting the connections. Is this possible?
Restore postrgres without ending connections
1.2
1
0
271
39,290,234
2016-09-02T10:31:00.000
0
0
1
0
python,distributed-computing
39,290,454
1
false
0
0
For starters, if MySQL isn't handling your performance requirements (and it rightly shouldn't, that doesn't sound like a very sane use-case), consider using something like in-memory caching, or for more flexibility, Redis: It's built for stuff like this, and will likely respond much, much quicker. As an added bonus, it has an even simpler implementation than SQL. Second, consider hashing some user and request details and storing that hash with the response to be able to identify it. Upon receiving a request, store an entry with a 'pending' status, and only handle 'pending' requests - never ones that are missing entirely.
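A rough sketch of that idea with the redis-py client; the key scheme, the 60-second expiry, and compute_response() are hypothetical.

```python
import hashlib
import redis

r = redis.Redis()  # local Redis on the default port

def compute_response(request_id):
    # hypothetical stand-in for the real work
    return b"response for " + str(request_id).encode()

def handle(user_id, request_id):
    key = "resp:" + hashlib.sha1(f"{user_id}:{request_id}".encode()).hexdigest()
    # set(nx=True) succeeds only for the first of several identical requests
    if r.set(key, "pending", nx=True, ex=60):
        response = compute_response(request_id)
        r.set(key, response, ex=60)
        return response
    return r.get(key)  # already claimed: return whatever is stored so far
```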
1
0
0
Let's say my python server has three different responses available. And one user send three HTTP requests at the same time. How can I make sure that one requests get one unique response out of my three different responses? I'm using python and mysql. The problem is that even though I store already responded status in mysql, it's a bit too late by the time the next request came in.
How to response different responses to the same multiple requests based on whether it has been responded?
0
0
1
34
39,290,932
2016-09-02T11:08:00.000
1
0
1
0
python,visual-studio,visual-studio-2015,exe
39,291,254
2
false
0
0
Python is a dynamic language (executed by interpreter) and cannot be compiled to binary. (Similar to javascript, php and etc.) It needs interpreter to execute python commands. It's not possible to do that without 3rd party tools which translates python to another languages and compile them to exe.
1
1
0
How to change python code to .exe file using microsoft Visual Studio 2015 without installing any package? Under "Build" button, there is no convert to .exe file.
How to change python to .exe file in visual studio
0.099668
0
0
11,967
39,298,445
2016-09-02T18:08:00.000
0
1
1
0
python,time,soc
39,298,941
1
false
0
0
time.time uses gettimeofday on platforms that support it. That means its resolution is in microseconds. You might try using time.clock_gettime(time.CLOCK_REALTIME) which provides nanosecond resolution (assuming the underlying hardware/OS provides that). The result still gets converted to floating point as with time.time. It's also possible to load up the libc shared object and invoke the native clock_gettime using the ctypes module. In that way, you could obtain access to the actual nanosecond count provided by the OS. (Using ctypes has a fairly steep learning curve though.)
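A short sketch of the clock_gettime route (Unix-only; the _ns variant needs Python 3.7+, which is an assumption about the target system):

```python
import time

t_float = time.clock_gettime(time.CLOCK_REALTIME)   # float seconds
t_ns = time.clock_gettime_ns(time.CLOCK_REALTIME)   # integer nanoseconds, no float rounding
print(t_float, t_ns)
```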
1
1
0
My team and I are designing a project which requires the detection of rising edge of a square wave and then storing the time using time.time() in a variable.If a same square wave is given to 3 different pins of RPi,and event detection is applied on each pin,theoretically,they should occur at the same time,but they have a delay which causes a difference in phase too (we are calculating phase from time). We concluded that time.time() is a slow function. Can anyone please help me as to which function to use to get SOC more precise than time.time()? Or please provide me with the programming behind the function time.time(). I'll be really thankful.
something better than time.time()?
0
0
0
116
39,300,131
2016-09-02T20:17:00.000
0
0
1
0
python,visual-studio-code
64,252,632
3
false
0
0
I'm quite late to this conversation, but a workaround I use is to put a pass statement at the end of my file, then add a breakpoint to it. I then run it in the debugger and can access all of the variables etc. This allows most of the functionality that I used to use in the PyCharm python terminal, such as exploring data structures, checking out methods, etc. Just remember that, if you want to make a multi-line statement (eg. for a loop), you need to use Shift-Enter to go to the next line otherwise it'll try to evaluate it immediately.
1
4
0
I'm using visual studio code with standard python extension, my issue is that when I run the code the python interpreter instantly closes right after and I only see the output which means that if I create some data structure I have to create it every single time. Is it possible to leave the console open after running the code and maybe running multiple files in the same python interpreter instance?
Visual Studio Code - python console
0
0
0
5,458
39,303,681
2016-09-03T05:48:00.000
1
1
1
1
ubuntu,python-appium
41,982,234
1
false
0
0
Try to use nosetest. Install: pip install nose Run: nosetests (name of the file containing test)
1
0
0
I got the following error while executing a python script on appium ImportError: No module named appium I am running appium in one terminal and tried executing the test on another terminal. Does anyone know what is the reason for this error? and how to resolve it?
Python and Appium
0.197375
0
0
571
39,313,181
2016-09-04T03:07:00.000
1
0
1
0
python,binary,hex,offset,bytecode
39,313,298
1
true
0
0
Use file.seek() to skip between locations in a file. In this case, to go to the next location in the file, you would use file.seek(3121, 1), which seeks 3121 bytes ahead relative to the current position. EDIT: I didn't realize you were changing the file position after opening it, so it should actually be 2865 bytes that you're seeking ahead each time, to account for the 256 you read.
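A minimal sketch of walking all 128 offsets that way; the file name is a placeholder and the numbers come straight from the question (first offset 0x409, 3121 bytes between offsets, 256 bytes read each time).

```python
OFFSET_GAP = 3121      # distance between successive offsets
READ_LEN = 0x100       # 256 bytes extracted per offset
NUM_OFFSETS = 128

records = []
with open("data.bin", "rb") as f:   # hypothetical file name
    f.seek(0x409)                   # jump to the first offset
    for _ in range(NUM_OFFSETS):
        records.append(f.read(READ_LEN))
        f.seek(OFFSET_GAP - READ_LEN, 1)   # 3121 - 256 = 2865 bytes ahead, relative seek
```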
1
0
0
In Python, I'm trying to extract some data from a binary file. I know the offsets of my data. They are always the same. For instance, written beneath is the first 4 offsets and the converted offset as a decimal value. Offset1 - 0x00000409 - 1033 Offset2 - 0x0000103A - 4154 Offset3 - 0x00001C6B - 7275 Offset4 - 0x0000289C - 10396 I know that each offset (after the first one), is 3121 decimals apart, so is there a way I can just skip to the next offset? How do I move 3121 decimals to the next offset? There are 128 offsets that I need to extract. I hope there is a way of dynamically determining the difference (number of bytes) between offsets? I can then get the same data each time, using 0x100 to extract 256 characters from the offset.
Calculate bytes between two hex offsets
1.2
0
0
769
39,316,087
2016-09-04T10:45:00.000
6
0
1
0
python,matlab,audio
52,775,086
2
false
0
0
Use Scipy: import scipy.io.wavfile as wav fs, signal = wav.read(file_name) signal = signal / 32767.0 You need to divide by the max int value if you want exactly the same thing as in Matlab. Warning: if the wav file is int32 encoded, you need to normalize by the max int32 instead
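A slightly more general sketch that derives the divisor from the sample dtype instead of hard-coding 32767; it assumes an integer-PCM wav file and a hypothetical file name.

```python
import numpy as np
import scipy.io.wavfile as wav

fs, signal = wav.read("audio.wav")          # hypothetical file name
max_int = np.iinfo(signal.dtype).max        # e.g. 32767 for int16, taken before converting
signal = signal.astype(np.float64) / max_int
print(fs, signal.min(), signal.max())       # samples now roughly in [-1.0, 1.0]
```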
1
4
0
I'm using wavefile.read() in Python to import a audio file to Python. What I want is read a audio file where every sample is in double and normalized to -1.0 to +1.0 similar to Matlab audioread() function. How can I do it ?
How to read a audio file in Python similar to Matlab audioread?
1
0
0
9,942
39,316,449
2016-09-04T11:31:00.000
1
0
1
0
python,testing
39,316,539
2
true
0
0
The code you test and the code you run should be the same. I do not recommend using a filename, because then you are dealing with (in one function) opening the file - and the errors associated with that part - and then confirming the file format (the actual purpose of the function). It sounds to me that your function's job is to check whether the file's contents contain a specific string. So this function should take any type of content element (an iterable), and as long as the key string is not found, the function should return a None/False/fail condition - and your test should check for that.
1
0
0
I am writing tests for a program I intend to write that checks for certain lines in configuration files. For example, the program might check that the line: AllowConnections- is contained in the file SomeFile.conf. My function stub does not take any arguments because I know the file that I am going to be checking. I am trying to write a tests for this function that check the behavior for different SomeFile.conf files, but I don't see how I could do this. It is possible to change SomeFile.conf in the setup and teardown test functions, but this seems like a bad way to test. Should I change the function so that it can accept a file argument just for the sake of testing?
Should functions take extra arguments for the sake of testing?
1.2
0
0
56
39,318,053
2016-09-04T14:36:00.000
1
1
1
1
python,pip,easy-install
39,318,468
1
true
0
0
After some trial and error I discovered that binutils-dev and python-dev packages were missing and causing the header path errors. After installing those the setup script worked.
1
1
0
I've been trying to install the pybfd module but nothing works so far. Tried the following: pip install pybfd returns error: option --single-version-externally-managed not recognized. After a quick search I found the --egg option for pip which seems to work, says successfully installed but when I try to run my code ImportError: No module named pybfd.bfd easy_install pybfd returns an error as well: Writing /tmp/easy_install-oZUgBf/pybfd-0.1.1/setup.cfg Running pybfd-0.1.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oZUgBf/pybfd-0.1.1/egg-dist-tmp-gWwhoT [-] Error : unable to determine correct include path for bfd.h / dis-asm.h No eggs found in /tmp/easy_install-oZUgBf/pybfd-0.1.1/egg-dist-tmp-gWwhoT (setup script problem?) For the last attempt I downloaded the pybfd repo from GitHub and ran the setup script: [-] Error : unable to determine correct include path for bfd.h / dis-asm.h Does anyone have any idea what could be causing all this and how to actually install the module ?
Problems installing python module pybfd
1.2
0
0
259
39,319,275
2016-09-04T16:49:00.000
0
0
0
0
python,amazon-web-services,lua,server,torch
48,819,337
2
false
1
0
It makes sense to look at the whole task and how it fits with your actual server (Nginx, Lighttpd or Apache), since you are serving static content. If you are going to call a library to create the static content, integrating your library with your web framework would be simpler if you use Flask, but the task might also be a fit for the AWS S3 and Lambda services. It may be worth it to roughly design the whole site and match your content to the tools at hand.
1
0
0
I am building am application to process user's photo on server. Basically, user upload a photo to the server and do some filtering processing using deep learning model. Once it's done filter, user can download the new photo. The filter program is based on the deep learning algorithm, using torch framework, it runs on python/lua. I currently run this filter code on my local ubuntu machine. Just wonder how to turn this into a web service. I have 0 server side knowledge, I did some research, maybe I should use flask or tornado, or other architecture?
how to build an deep learning image processing server
0
0
0
906
39,322,550
2016-09-05T00:12:00.000
0
0
0
0
python,flask,swagger
66,651,876
3
false
1
0
There are three ways of doing it: via a RESTful API extension (Api.doc), via serving swagger templates, or via registering blueprints (from flask-swagger-ui or similar).
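A rough sketch of the blueprint route using the flask-swagger-ui package; the URL paths and the assumption that the YAML spec sits in the app's static folder are mine, not part of the question.

```python
from flask import Flask
from flask_swagger_ui import get_swaggerui_blueprint

app = Flask(__name__)

SWAGGER_URL = "/docs"                  # where the interactive UI will live
API_URL = "/static/swagger.yaml"       # spec served from the static folder

swaggerui_bp = get_swaggerui_blueprint(SWAGGER_URL, API_URL,
                                       config={"app_name": "My service"})
app.register_blueprint(swaggerui_bp, url_prefix=SWAGGER_URL)

if __name__ == "__main__":
    app.run()
```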
1
3
0
I have build a small service with flask and already wrote a swagger yaml file to describe it's API. How can I expose the swagger file through the flask app? I didn't mean to expose the file itself (send_from_directory) but to create new endpoint that will show it as swagger-ui (interactive, if possible)
Exposing API Documentation Using Flask and Swagger
0
0
0
1,718
39,324,217
2016-09-05T05:04:00.000
1
0
1
1
python,oracle,pyinstaller,cx-oracle
39,349,805
1
true
0
0
The error "Unable to acquire Oracle environment handle" means there is something wrong with your Oracle configuration. Check to see what libclntsh.so file you are using. The simplest way to do that is by using the ldd command on the cx_Oracle module that PyInstaller has bundled with the executable. Then check to see if there is a conflict due to setting the environment variable ORACLE_HOME to a different client! If PyInstaller picked up the libclntsh.so file during its packaging you will need to tell it to stop doing that. There must be an Oracle client (either full client or the much simpler instant client) on the target machine, not just the one file (libclntsh.so). You can also verify that your configuration is ok by using the cx_Oracle.so module on the target machine to establish a connection -- independently of your application. If that doesn't work or you don't have a Python installation there for some reason, you can also use SQL*Plus to verify that your configuration is ok as well.
1
0
0
I am building an application in Python using cx_Oracle (v5) and Pyinstaller to package up and distribute the application. When I built and packaged the application, I had the Oracle 12c client installed. However, when I deployed it to a machine with the 11g client installed, it seems not to work. I get the message "Unable to acquire Oracle environment handle". I assume this is as the result of the application being packaged with Pyinstaller while my ORACLE_HOME was pointed to a 12c client. I know that the cx_Oracle I have was built against both 11g and 12 libraries. So, I'm wondering how I deploy an application using Pyinstaller so it can run with either 11 or 12c client libraries installed? By the way, I am building this on Linux (debian/Mint 17.2), and deploying to Linux (CentOS 7).
How do I build a cx_oracle app using pyinstaller to use multiple Oracle client versions?
1.2
0
0
738
39,325,178
2016-09-05T06:48:00.000
2
0
1
0
python-3.x
39,325,222
4
true
0
0
print will always first try to call __str__ on the object you give it. In the first case the __str__ of the int instance 9 is '9'. In the second case, you first explicitly call str on 9 (which calls its __str__ and yields '9'). Then, print calls __str__ on '9', which for a string instance returns it as it is, resulting in '9' again. So in both cases print ends up producing the same output.
1
0
0
What is the difference between print(9) and print(str(9)) in Python when the output is the same for both functions?
Difference between `print(9)` and `print(str(9))`
1.2
0
0
2,204
39,328,658
2016-09-05T10:29:00.000
0
0
1
0
python,python-3.x,scikit-learn,ipython,jupyter-notebook
64,611,718
2
false
0
0
To add to the list of confirmed explanations (point 2): Too much memory is required Stack overflow - Too many recursive steps In my case, when I ran it as a Python script, I got this: Fatal Python error: Cannot recover from stack overflow. ... Aborted (core dumped)
1
22
0
I'm running some code using scipy and scikits.learn on Jupyter notebook using Python 3 kernel. During the computation the kernel is being restarted with a message dialogue saying that “The kernel appears to have died. It will restart automatically.”. The stderr of the underlying Jupyter process just logs the fact that the kernel dies and is going to be restarted without any helpful message. Is there any way of checking the underlying error? It might be a segfault coming from within some C++ code, but I can only guess. I searched for any relevant logs on the server and failed to find anything helpful.
How to debug dying Jupyter Python3 kernel?
0
0
0
6,775
39,332,901
2016-09-05T14:36:00.000
0
0
0
0
python,installation,attributes,scikit-learn,theano
39,334,690
1
false
0
0
Apparently it was caused by some issue with Visual Studio. The import worked when I reinstalled VS and restarted the computer. Thanks @super_cr7 for the prompt reply!
1
0
1
I'm trying to use scikit-learn's neural network module in iPython... running Python 3.5 on a Win10, 64-bit machine. When I try to import from sknn.mlp import Classifier, Layer , I get back the following AttributeError: module 'theano' has no attribute 'gof' ... The command line highlighted for the error is class DisconnectedType(theano.gof.type.Type), within theano\gradient.py Theano version is 0.8.2, everything installed via pip. Any lights on what may be causing this and how to fix it?
Failure to import sknn.mlp / Theano
0
0
0
331
39,334,441
2016-09-05T16:20:00.000
1
0
0
0
python,wxpython,zooming
39,335,812
1
false
0
1
Yes, from your description FloatCanvas would certainly meet your needs. Another possibility to consider would be the wx.GraphicsContext and related classes. It is vector-based (instead of raster) and supports the use of a transformation matrix which would make zooming, rotating, etc. very easy. However, the actual drawing and management of the shapes and such would probably require more work for you than using FloatCanvas.
1
1
0
I am developing a wxpython project where I am drawing a diagram on to a panel that I need to be able to zoom in/out to this diagram(a directed acyclic graph in my case). I will achieve this by mouse scroll when the cursor is on the panel, however that is not a part of my question. I need an advice from an experienced person about the method I am using for zooming. So far I thought as doing, There are lines, rectangles and texts inside rectangles within this diagram. So maybe I could increase/decrease their length/size with the chosen mouse event. But it is hard to keep it balanced because rectangles are connected with lines their angles should not change, and texts inside the rectanges should stay in the middle of them. Other method I thought of doing is to search for a built-in zoom method. Which I heard about something like Scale. However I have some questions about this method. Will this work on vector drawings(like mine) rather than images. And will it be scaling only the panel I chose and not the whole screen ? After I hear your advice about this, I will look deeper into this, but now I am a bit clueless. Sorry if my question is too theoretical. But I felt I needed help in the area. Thanks in advance. Note: Zooming not necessarily applied by scrolling. Note2: My research also led me to FloatCanvas. Is this suitable to my needs ?
WxPython zooming technique
0.197375
0
0
686
39,336,128
2016-09-05T18:43:00.000
2
0
1
0
python,python-2.7,python-import,importerror
39,336,156
1
true
0
0
Your filename moviepy.py shadows installed package. Rename your main file and everything should work fine (if moviepy is installed in used interpreter).
1
0
0
I have a file beginning with from moviepy.editor import *. when I run this file I get the error: Traceback (most recent call last): File "moviepy.py", line 2, in from moviepy.editor import * File "/home/debian/Videos/moviepy.py", line 2, in from moviepy.editor import * ImportError: No module named editor the strange thing is I am 100% sure moviepy is installed. I checked sys.path and in one of the paths is a folder called moviepy with multiple files inside including __init__.py __init__.pyc and editor.py so what am I doing wrong?
how to fix ImportError python
1.2
0
0
704
39,336,204
2016-09-05T18:50:00.000
0
0
0
0
python,python-3.x,tkinter
39,337,845
1
true
0
1
So as Bryan said, tkinter gets the fonts from the standard font locations of the OS. Putting fonts there will allow tkinter to load them. Thanks Bryan
1
2
0
I would like to know where does tkinter load his fonts from. Is it from /usr/share/fonts or does it have a specific folder ? thanks
Where does tkinter load his fonts from?
1.2
0
0
67
39,337,545
2016-09-05T20:48:00.000
0
0
0
0
python,pyqtgraph
39,413,892
1
false
0
1
Sorry, there was a bug in my code which was handling the case where a part of one LinearRegionItem overlapped with another LinearRegionItem. Now I see that one linearRegionItem can lie on top of another one. Consider this solved
1
1
0
I have added 2 LinearRegionItems to a pyqtgraph plot. When I move the boundary of 1 over the other, the boundary never overlaps the other. I would like to know how to allow overlapping. This is a functionality that I need, where I am selecting different regions of the data plot to be used later on.
How to get 2 or more LinearRegionItem to overlap each other
0
0
0
78
39,337,821
2016-09-05T21:20:00.000
2
1
0
1
python,python-2.7,rabbitmq,pika
39,340,382
1
true
0
0
1.If at least two consumers at the same time can get the same message? no - a single message will only be delivered to a single consumer. Because of that, your scenario #2 doesn't come into play at all. You'll never have 2 consumers working on the same message, unless you nack the message back to the queue but continue processing it anyways.
1
1
0
I've multiple consumers which are polling on the same queue, and checking the queue every X seconds, basically after X seconds it could be that at least two consumers can launch basic.get at the very same time. Question are: 1.If at least two consumers at the same time can get the same message? 2.According to what I understood only basic_ack will delete a mesage from the queue, so suppose we have the following scenario: Consumer1 takes msg with basic.get and before it reaches basic_ack line , Consumer2 is getting this message also (basic.get), now Consumer1 reaches the basic.ack, and only now Consumer2 reaches its own basic.ack. What will happen When Consumer2 will reach its basic.ack? Will the message be processes by Consumer2 as well, because actions are not atomic? My code logic of consumer using python pika is as follows: while true: m_frame =None while(m_frame is None): self.connection.sleep(10) m_frame,h_frame,body = self.channel.basic_get('test_queue') self.channel.basic_ack(m_frame.delivery_tag) [Doing some long logic - couple of minutes] Please note that I don't use basic.consume So I don't know if round robin fetching is included for such usage
Rabbitmq one queue multiple consumers
1.2
0
0
1,439
39,339,743
2016-09-06T02:23:00.000
0
0
1
0
python
39,339,769
2
false
0
0
print("\'") repr of a string will never put out a single backslash. repr smartly picks quotes to avoid backslashes. You can get a \' to come out if you do something like '\'"'
1
1
0
I want Python interpreter to show me \' as a calculated value. I tried typing "\\'" and it returned me the same thing. If I try "\'" then the backslash is not shown anymore after I hit return. How do I get it to show the backslash like this after I hit return- \' ? Additional question This is exact question I am not understanding: Expression --> 'C' + + 'D' Calculated Value --> 'C\'D' Find the missing literal
Backslash escaping is not working in Python
0
0
0
964
39,339,804
2016-09-06T02:32:00.000
-2
0
0
1
python,celery,django-celery
39,340,088
1
false
1
0
Something along these lines would work: (\b(dev\.)(\w+)). Then refer to the third group for the stuff after "dev.". You'll need to set it up to capture repeated instances if you want to get multiple.
1
5
0
Background Celery worker can be started against a set of queues using -Q flag. E.g. -Q dev.Q1,dev.Q2,dev.Q3 So far I have seen examples where all the queue names are explicitly listed as comma separated values. It is troublesome if I have a very long list. Question Is there a way I can specify queue names as a regex & celery worker will start consuming from all queues satisfying that regex. E.g. -Q dev.* This should consume from all queuess starting with dev i.e. dev.Q1, dev.Q2, dev.Q3. But what I have seen is - it creates a queue dev..* Also how can I tune the regex so that it doesn't pick ERROR queues e.g. dev.Q1.ERROR, dev.Q2.ERROR.
Celery Worker - Consume from Queue matching a regex
-0.379949
0
0
409
39,345,313
2016-09-06T09:22:00.000
1
0
1
0
python,anaconda,32bit-64bit,development-environment,conda
39,345,619
1
true
0
0
As I understand, Anaconda installs into a self-contained directory (<pwd>/anaconda3). Since 64-bit and 32-bit builds of Python cannot be mixed or converted into each other (in terms of the compiled Python binaries and libraries in site-packages or other PYTHONPATH locations), you have to go with a second (64-bit) Anaconda installation in another directory. If you have 32-bit code that needs to call 64-bit code, you have to rely on subprocesses and pipes (or other IPC mechanisms). You probably have to be careful about your environment variables, e.g. PATH and PYTHONPATH, when doing so.
1
2
0
I have a 32 bit installation of the Anaconda Python distribution. I know how to create environments for different python versions. What I need is to have a 64 bit version of python. Is it possible to create a conda env with the 64 bit version? Or do I have to reinstall anaconda or install a different version of anaconda and then switch the values of the PATH when I need the different versions? I looked and searched the documentation, and the conda create -h help page did not find any mention of this.
How to create python conda 64 bit environment in existing 32bit install?
1.2
0
0
4,169
39,355,693
2016-09-06T18:34:00.000
3
0
1
0
python,regex
39,355,798
2
false
0
0
A hint: A regular expression that will find any amount of whitespace (not just space characters, but also tabs, etc.) around a slash is: r'\s*/\s*'. The regex there is between the apostrophes. The period is just the end of my sentence, and the r tells Python to treat the string inside apostrophes as a "raw" string. If you don't want to find arbitrary whitespace characters, but only the space character itself, the regex is r' */ *'. Note the space in front of the asterisk. The rest I leave to you, since this sounds like a homework problem.
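A short check of that first regex on the exact string from the question:

```python
import re

test = '98 / 100 xx 3/ 4 and 5/6'
cleaned = re.sub(r'\s*/\s*', '/', test)          # '98/100 xx 3/4 and 5/6'
fractions = [tok for tok in cleaned.split(' ') if '/' in tok]
print(fractions)                                 # ['98/100', '3/4', '5/6']
```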
1
0
0
I want to remove whitespaces in a string that are adjacent to / while keeping the rest of the whitespaces. For instance, say I have a string 98 / 100 xx 3/ 4 and 5/6. The desired result for my example would be 98/100 xx 3/4 and 5/6. (So that I could use .split(' ') as a next step and extract those meaningful numbers, i.e.98/100, 3/4, and 5/6 as my final results.) Note: I only want to look for /, no need to worry other operators. I know I should probably use re for this, but I can't figure it out. Any help is appreciated! ---------------------My Approach Below------------------ I used [index_num.end(0) for index_num in re.finditer('/', test)], where test = '98 / 100 xx 3/ 4 and 5/6' to find the index of the /, then check if the previous or the next is a whitespace. That's not ideal and I believe there are easier ways.
Python conditionally remove whitespaces
0.291313
0
0
178