Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
48,237,510 | 2018-01-13T06:37:00.000 | 0 | 0 | 0 | 0 | java,android,python | 48,238,627 | 1 | false | 1 | 0 | If you have access to Python from the Java Android app, you can write the Python output to a file and then read that file from the Java code.
Or, if the Python output is available on the web, you will need a web service that provides JSON/XML output, and then call that web service from the Java code. | 1 | 0 | 0 | I have to develop an Android application which makes use of machine learning algorithms at the back end. Now, for developing the Android app, I use Java, and for implementing the machine learning algorithms I use Python.
My question is how to link the Python code to an Android app written in Java. That is, suppose my Python code generates an output; how do I send this data to the Android application? | How to link python code to android application developed in java? | 0 | 0 | 0 | 347 |
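A minimal sketch of the web-service route suggested in the answer above, assuming Flask is installed; the endpoint name and payload are made up for illustration, and the Android app would simply issue an HTTP GET to this endpoint and parse the JSON:

```python
# hypothetical Flask service exposing the Python model's output as JSON
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/predict")
def predict():
    # stand-in for whatever actually produces the ML output
    result = {"label": "cat", "confidence": 0.93}
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```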
48,241,482 | 2018-01-13T15:50:00.000 | 2 | 0 | 1 | 0 | python,shortest-path | 48,241,513 | 3 | false | 0 | 0 | How about doing list('0123456789abcdef') to make it explicit?
If you don't want to spell it out, [f'{i:x}' for i in range(16)] should also work. | 1 | 4 | 0 | I write a python script, which needs a list of all hexadecimal characters.
Is it a good idea to do list(string.printable[:16]) to get ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'] ? | char list from 0 to f | 0.132549 | 0 | 0 | 284 |
48,241,838 | 2018-01-13T16:28:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 50,647,228 | 2 | false | 1 | 0 | There are certain limitations when using AWS lambda.
1) The total size of your uncompressed code and dependencies should be less than 250MB.
2) The size of your zipped code and dependencies should be less than 75MB.
3) The total fixed size of all function packages in a region should not exceed 75GB.
If you are exceeding the limit, try finding smaller libraries with less dependencies or breaking your functionality in to multiple micro services rather than building code which does all the work for you. This way you don't have to include every library in each function. Hope this helps. | 1 | 1 | 0 | I am doing an example of a Simple Linear Regression in Python and I want to make use of Lambda functions to make it work on AWS so that I could interface it with Alexa. The problem is my Python package is 114 MB. I have tried to separate the package and the code so that I have two lambda functions but to no avail. I have tried every possible way on the internet.
Is there any way I could upload the packages on S3 and read it from there like how we read csv's from S3 using boto3 client? | Working around AWS Lambda space limitations | 0 | 0 | 0 | 946 |
48,241,888 | 2018-01-13T16:32:00.000 | 1 | 0 | 1 | 0 | python,django,pip,requirements.txt | 48,242,094 | 1 | true | 1 | 0 | Sounds like you are working in a virtual environment, yet your Django dependency is installed globally. Check which Python packages are installed globally and uninstall Django (you probably don't need it globally). Then install it into your virtual environment. Now the freeze command should output Django as well.
General note: Most packages should be installed into your project virtual environment. There are only few packages where it makes sense to install them globally (eg aws management tools). | 1 | 1 | 0 | I'm trying to deploy my Django site on Heroku, and thus, I need a requirements.txt file with the necessary packages that Heroku needs to install for me. I understand Django is a necessary package to be installed. Unfortunately, Django isn't included in the file when I run pip freeze > requirements.txt. Why is this? I'm not sure what to show you so you can tell me what's going wrong. Let me know and I'll add it. FYI the site hosts just fine on my local computer, so Django is definitely installed. | why doesn't pip freeze > requirements.txt output Django? | 1.2 | 0 | 0 | 1,475 |
48,242,064 | 2018-01-13T16:49:00.000 | 0 | 0 | 0 | 0 | python,c,arrays,data-files | 48,242,730 | 1 | false | 0 | 0 | I'd say write it to a text file. Per line put one first column number, followed by the second column's list of floats. Put a space between each element.
Assuming you know the maximum number of floats in the second column's array, and the maximum character length of each float, you can use fgets() to read the file line by line. | 1 | 1 | 1 | I have two column dataset where each element of first column corresponds to an array. So basically my second column elements are arrays and first column elements are just numbers. I need to write it in a file using Python and later read it in C. I know HDF5 is the best way to store arrays but I was wondering if there is any other effective way of writing it in .csv/.dat/.txt file. Since I have to read it in C I can't use things like numpy.savez. | How to write arrays as column elements in a Data file in python and read it later in C? | 0 | 0 | 0 | 135 |
48,243,418 | 2018-01-13T19:16:00.000 | 1 | 0 | 1 | 0 | python,multithreading,parallel-processing,gil | 48,243,488 | 1 | true | 0 | 0 | Not quite. The GIL will prevent computationally expensive threads from executing at the same time, and won't give that much of a performance boost.
On the other hand, non-computational tasks (file I/O, for example) can be multithreaded to vastly increase performance. Imagine you have to write 40 files. If the computer can handle writing 20 at once, you can get the writing done much quicker with multithreading than with blocking and writing each file. | 1 | 1 | 0 | As the title says, since in Python we have the GIL, which says that only one thread may execute code at any one time, that means that separate threads don't really run in parallel, but rather "interlaced" but still using only a single core of the CPU.
Doesn't that basically defeat the whole concept of threading, since parallelization using threads is not really possible when they don't actually run in parallel?
I have found a few answers regarding this but nothing that clearly addressed this, I apologize if this specific facet of the issue has already been answered. | Do threads really matter in python? | 1.2 | 0 | 0 | 53 |
48,248,816 | 2018-01-14T10:51:00.000 | 0 | 0 | 1 | 0 | python,tensorflow,pycharm | 48,272,401 | 1 | false | 0 | 0 | I found creating a project and adding the source file makes pychamr remember the project as working previous project in the list. | 1 | 0 | 0 | I recently installed TensorFlow on a Ubunu 14.04 machine using virtual environment and installed PyCharm to analyze TensoFlow Python programs.
I downloaded mnist_softmax.py, the first tutorial program under ~/TF. I opened it with PyCharm and set the Python interpreter to the one in the virtual environment. I can run it, set breakpoint, and do single stepping.
Ok, then, I exit PyCharm . When I start PyCharm again, the recent project list is shown, but the location is /tmp/mnist_softmax.py, not ~/TF/mnist_softmax.py and of course if I try to open it (/tmp/mnist_softmax.py), it complains that the file is not there.
How can I save the mnist_softmax.py as a PyCharm project? I couldn't find the submenu in the File menu. I tried Save All before exiting but it was the same and no *.idea file is under ~/TF.
How can I do it? | how to create and save a pycharm project from an existing python program? | 0 | 0 | 0 | 428 |
48,251,568 | 2018-01-14T16:16:00.000 | 0 | 0 | 0 | 0 | python,ubuntu,tensorflow | 48,259,989 | 1 | false | 0 | 0 | You could copy the contents of your python site-packages across. But if you are generally in the situation where internet access is expensive, you might find it more practical to use a caching proxy for all of your internet traffic. | 1 | 0 | 1 | So I have computers all running on ubuntu and only one of them has python tensorflow module. I want to install tensorflow to all of them but it would be inefficient to connect every computer to the internet and install them all over again.so is there a possible efficient way to just copy paste some files from the computer to another to use this python module? thanks in advance. | how could I copy tensorflow module from one computer to another? | 0 | 0 | 0 | 160 |
48,253,333 | 2018-01-14T19:26:00.000 | 0 | 0 | 1 | 1 | python-3.x,facebook,cmd,windows-10 | 56,398,684 | 1 | false | 0 | 0 | I just face this same issue. I solved by installing facebook-sdk instead just facebook.
Run :- pip install facebook-sdk or pip3 install facebook-sdk | 1 | 0 | 0 | When I type pip3 install --user Facebook in my CMD. Output appear as below
Could not find a version that satisfies the requirement facebook (from versions: )
No matching distribution found for facebook | Fail to install package Facebook in CMD | 0 | 0 | 0 | 911 |
48,255,732 | 2018-01-15T00:59:00.000 | 3 | 1 | 0 | 0 | python,python-3.x,discord,discord.py | 48,287,783 | 3 | false | 0 | 0 | You can iterate through every message and do:
if not message.attachments:
...
message.attachments returns a list, and you can check whether it is empty with if not. | 1 | 5 | 0 | On my Discord server, I have a #selfies channel where people share photos and chat about them. Every now and then, I would like to somehow prune all messages that do not contain files/images. I have tried checking the documentation, but I could not see any way of doing this. Is it not possible? | Is there a way to check if message.content on Discord contains a file? | 0.197375 | 0 | 1 | 10,898
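A rough sketch of how that check could drive the pruning, assuming a recent discord.py where text channels expose purge() (older versions used different calls):

```python
# delete recent messages in the channel that carry no attachments
async def prune_text_only(channel):
    def no_attachment(message):
        return not message.attachments  # empty list -> no file/image
    deleted = await channel.purge(limit=200, check=no_attachment)
    print('Removed {} messages'.format(len(deleted)))
```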
48,257,994 | 2018-01-15T06:34:00.000 | 2 | 0 | 0 | 0 | python,python-2.7,download,urllib,urlretrieve | 48,258,054 | 2 | true | 0 | 0 | I think curl and head would work better than a Python solution here:
curl https://my.website.com/file.txt | head -c 512 > header.txt
EDIT: Also, if you absolutely must have it in a Python script, you can use subprocess to perform the curl piped to head command execution
EDIT 2: For a fully Python solution: The urlopen function (urllib2.urlopen in Python 2, and urllib.request.urlopen in Python 3) returns a file-like stream that you can use the read function on, which allows you to specify a number of bytes. For example, urllib2.urlopen(my_url).read(512) will return the first 512 bytes of my_url | 1 | 6 | 0 | Situation: The file to be downloaded is a large file (>100MB). It takes quite some time, especially with slow internet connection.
Problem: However, I just need the file header (the first 512 bytes), which will decide if the whole file needs to be downloaded or not.
Question: Is there a way to download only the first 512 bytes of a file?
Additional information: Currently the download is done using urllib.urlretrieve in Python2.7 | How to Download only the first x bytes of data Python | 1.2 | 0 | 1 | 1,451 |
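A sketch of the pure-Python route from the answer, written for Python 2.7 as in the question; the Range-header variant is an assumption that only helps if the server honors HTTP range requests:

```python
import urllib2

url = 'https://my.website.com/file.txt'

# read just the first 512 bytes of the response body
header_bytes = urllib2.urlopen(url).read(512)

# alternative: ask the server to send only those bytes in the first place
req = urllib2.Request(url, headers={'Range': 'bytes=0-511'})
header_bytes = urllib2.urlopen(req).read()
```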
48,263,901 | 2018-01-15T13:24:00.000 | 1 | 0 | 0 | 1 | python,apache-kafka,kafka-producer-api | 48,264,104 | 1 | true | 0 | 0 | kafka-python does not support the Kafka Admin APIs at the moment. The only way to create topics via this client is to rely on the auto-create broker feature.
However, as you've noticed, this does not allow you to provide any topic configurations.
You can either:
Set the replication factor in the broker config (that will apply to all topics) by setting default.replication.factor=3 in the broker's server.properties file.
Use a script (like the kafka-topics.sh tool) to explicitly create topics with custom settings.
The Kafka Admin APIs are still relatively new and very few clients apart from the official Java client support it. | 1 | 0 | 0 | As a fact when creating a topic in kafka it is possible to set the replication factor however I was using a KafkaProducer (the kafka api for python pip install kafka)
I thought I could do producer.send(..., replication-factor=3), but there was no option for me to do that.
Now the only option I have left is to directly create a shell script that connects to Kafka to create the topic, but if the Python Kafka client is that limited, why would I continue using it? So, is there a way to set a replication factor when I produce to a Kafka topic? | Python KafkaProducer cannot set replication factor | 1.2 | 0 | 0 | 907
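For reference, later kafka-python releases added an admin client; a sketch assuming such a version is available (broker address and topic name are placeholders):

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
# create the topic explicitly with the desired replication factor
topic = NewTopic(name='my-topic', num_partitions=3, replication_factor=3)
admin.create_topics([topic])
```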
48,264,656 | 2018-01-15T14:07:00.000 | 5 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,machine-learning,amazon-sagemaker | 48,278,872 | 8 | false | 1 | 0 | Do make sure the Amazon SageMaker role has policy attached to it to have access to S3. It can be done in IAM. | 1 | 53 | 1 | I've just started to experiment with AWS SageMaker and would like to load data from an S3 bucket into a pandas dataframe in my SageMaker python jupyter notebook for analysis.
I could use boto to grab the data from S3, but I'm wondering whether there is a more elegant method as part of the SageMaker framework to do this in my python code?
Thanks in advance for any advice. | Load S3 Data into AWS SageMaker Notebook | 0.124353 | 1 | 0 | 73,507 |
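A minimal sketch of the boto3 route mentioned in the question (bucket and key names are placeholders), which works inside a SageMaker notebook once the execution role has S3 access as the answer notes:

```python
import io
import boto3
import pandas as pd

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='my-bucket', Key='data/train.csv')
# stream the object body straight into a pandas dataframe
df = pd.read_csv(io.BytesIO(obj['Body'].read()))
```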
48,264,698 | 2018-01-15T14:09:00.000 | 4 | 0 | 1 | 0 | python,c | 48,264,739 | 2 | false | 0 | 1 | Named arguments are not supported in C.
All arguments must be passed in the correct order. | 1 | 4 | 0 | Is there a way to use named arguments in C function?
Something like function with prototype void foo(int a, int b, int c);
and I want to call it with foo(a=2, c=3, b=1); [replaced the order of b & c and used their names to distinguish]
Motivation: I want a more Pythonic C, where I can easily manipulate my function arguments without mixing them by mistake | A way to emulate named arguments in C | 0.379949 | 0 | 0 | 339 |
48,265,821 | 2018-01-15T15:18:00.000 | 0 | 1 | 0 | 0 | python,c,popen | 48,266,333 | 1 | false | 0 | 0 | Probably not a full answer, but I expect it gives some hints and it is far too long for a comment. You should think twice about your requirements, because it will probably not be that easy depending on your proficiency in C and what OS you are using.
If I have understood correctly, you have a sensor that sends data (which is already unusual unless the sensor is an intelligent one). You want to write a C program that will read that data and either buffer it or retain only the last value (you did not say which), and at the same time will wait for requests from a Python script to give it back what it has received (and kept) from the sensor. That probably means a dual-thread program with quite a bit of synchronization.
You will also need to specify the means of communication between C and Python. You can certainly use the subprocess module, but do not forget to use unbuffered output in C. But you could also imagine an independent program that uses a FIFO or a named pipe with a well-defined protocol for external requests, in order to completely separate both problems.
So my opinion is that this is currently too broad for a single SO question... | 1 | 0 | 0 | I have a flow sensor that I have to read with c because python isn't fast enough but the rest of my code is python. What I want to do is have the c code running in the background and just have the python request a value from it every now and then. I know that popen is probably the easiest way to do this but I don't fully understand how to use it. I don't want completed code I just want a way to send text/numbers back and forth between a python and a c code. I am running raspbian on a raspberry pi zero w. Any help would be appreciated. | How to read a sensor in c but then use that input in python | 0 | 0 | 0 | 215 |
48,265,893 | 2018-01-15T15:23:00.000 | 1 | 0 | 0 | 0 | java,python | 48,266,588 | 2 | false | 1 | 0 | Normally Python and Java have their own interpreters/VM's and cannot be shared. It is possible to use Jython but has limitations (fe. python version and support/compatibility with other python packages).
The interpreter and JVM do not match: Java has strict typing, Python does not. Java compiles and runs; Python is interpreted and can change code at runtime (if you want). These are extra challenges that make putting everything in the same environment very complex.
There are possibilities like a client/server architecture, but the feasibility depends on the level of the framework.
Most of the time, low-level frameworks are optimized to run directly inside your application process. Any loose coupling will introduce performance, security, and compatibility issues. Just think about how reflection or multiple inheritance would work.
If it is a high-level framework (e.g. able to run stand-alone), it is more feasible to use some sort of client/server setup. But you still have to develop a lot for it.
Industry standard is just to implement the framework of your desire in the language you want, then you can get also all the benefits of your platform. | 1 | 1 | 0 | Can Python invoke the Java Framework?
I want to know whether a Python project can invoke a Java Framework, I find a Java Framework in GitHub, whether I can use it in my Python project? | Can Python invoke the Java Framework? | 0.099668 | 0 | 0 | 523 |
48,265,926 | 2018-01-15T15:25:00.000 | 44 | 0 | 0 | 0 | python,machine-learning,keras,deep-learning,keras-layer | 48,267,194 | 2 | true | 0 | 0 | model.layers will give you the list of all layers. The number is consequently len(model.layers) | 1 | 24 | 1 | Is there a way to get the number of layers (not parameters) in a Keras model?
model.summary() is very informative, but it is not straightforward to get the number of layers from it. | Keras: find out the number of layers | 1.2 | 0 | 0 | 16,915 |
48,267,934 | 2018-01-15T17:25:00.000 | 4 | 0 | 1 | 0 | python,python-3.x | 48,268,062 | 1 | true | 0 | 0 | Go into the Python 3.6.4 folder, search for IDLE, then a folder called idlelib will appear. Go into that and there is a file called idle.bat; this is it. | 1 | 1 | 0 | I just installed the newest version of python (3.6.4) and I just had it upgrade from python 3.6.2 but now I can't find IDLE.
Please help I really need to get my work done. | Windows Python 3.6.4: Can't find IDLE | 1.2 | 0 | 0 | 7,598 |
48,269,197 | 2018-01-15T19:00:00.000 | 2 | 0 | 1 | 0 | python-2.7,io,xlsxwriter | 64,650,126 | 2 | false | 0 | 0 | I was able to get around the problem by invoking the workbook.save() inside the loop. I have this long running program that keeps appending lines to the excel file and once the save method is invoked inside the loop, I can see new lines getting added as the program progresses. | 2 | 2 | 0 | So I am writing a program which writes data into an opened excel file.
The issue is that I need to run an infinite loop and the program is closed when it is killed.
The file isn't even created when I do this. workbook.close() is outside the infinite while loop.
Is there a flush method within xlsxwriter so that I can save the data? | Is there a flush method in the xlsxwriter module? [Python 2.7] | 0.197375 | 1 | 0 | 962 |
48,269,197 | 2018-01-15T19:00:00.000 | 0 | 0 | 1 | 0 | python-2.7,io,xlsxwriter | 64,651,008 | 2 | false | 0 | 0 | Is there a flush method in the xlsxwriter module
No. You can only close()/save a file once with XlsxWriter. | 2 | 2 | 0 | So I am writing a program which writes data into an opened excel file.
The issue is that I need to run an infinite loop and the program is closed when it is killed.
The file isn't even created when I do this. workbook.close() is outside the infinite while loop.
Is there a flush method within xlsxwriter so that I can save the data? | Is there a flush method in the xlsxwriter module? [Python 2.7] | 0 | 1 | 0 | 962 |
48,269,248 | 2018-01-15T19:06:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn,python-multithreading,xgboost | 64,898,707 | 4 | false | 0 | 0 | Currently n_jobs can be used to limit threads at predict time:
model._Booster.set_param('n_jobs', 2)
More info
Formerly (but now deprecated):
model._Booster.set_param('nthread', 2) | 2 | 4 | 0 | I want to use XGBoost for online production purposes (Python 2.7 XGBoost API).
In order to be able to do that I want to control and limit the number of threads used by XGBoost at the predict operation.
I'm using the sklearn compatible regressor offered by XGBoost (xgboost.XGBRegressor), and been trying to use the param nthread in the constructor of the regressor to limit the max threads used to 1.
Unfortunately, XGBoost keeps using multiple threads regardless of the value set in nthread.
Is there another way to limit XGBoost and force it to perform the predict operation using n=1 threads? | Limiting the number of threads used by XGBoost | 0 | 0 | 0 | 3,117 |
48,269,248 | 2018-01-15T19:06:00.000 | 1 | 0 | 1 | 0 | python,scikit-learn,python-multithreading,xgboost | 48,275,081 | 4 | false | 0 | 0 | Answer:
Standard set_params with nthread fails, but when using regr._Booster.set_param('nthread', 1) I was able to limit XGBoost to using a single thread.
As mentioned above the env variable OMP_NUM_THREADS=1 works as well. | 2 | 4 | 0 | I want to use XGBoost for online production purposes (Python 2.7 XGBoost API).
In order to be able to do that I want to control and limit the number of threads used by XGBoost at the predict operation.
I'm using the sklearn compatible regressor offered by XGBoost (xgboost.XGBRegressor), and been trying to use the param nthread in the constructor of the regressor to limit the max threads used to 1.
Unfortunately, XGBoost keeps using multiple threads regardless of the value set in nthread.
Is there another way to limit XGBoost and force it to perform the predict operation using n=1 threads? | Limiting the number of threads used by XGBoost | 0.049958 | 0 | 0 | 3,117 |
48,272,093 | 2018-01-15T23:26:00.000 | 1 | 0 | 0 | 0 | python,amazon-redshift,etl,emr,amazon-emr | 48,276,498 | 1 | false | 0 | 0 | I suppose your additional columns are measures, not dimensions. So you can keep the dimensions in the individual columns and include them into sort key, and store measures in JSON, accessing them whenever you need. Also if you can distinguish between frequently used measures vs. occasional you can store the frequently used ones in columns and the occasional ones in JSON. Redshift has native support for extracting the value given the key, and you also have the ability to set up Python UDFs for more complex processing. | 1 | 0 | 0 | Scenario:
I have a Source which maintains the transactions data. They have around 900 columns and based on the requirements of the new business, they add additional columns.
We are a BI team and we only extract around 200 columns which are required for our reporting. But when new business is launched / new analysis is required, sometimes users approach us and request us to pull extra columns from the source.
Current Design:
We have created a table with extra columns for future columns as well.
We are maintaining a 400 column table with the future column names like str_01, str_02...., numer_01, numer_02... date_01, date_02... etc.
We have a mapping table which maps the columns in our table and columns in Source table. Using this mapping table, we extract the data from source.
Problem:
Recently, we have reached the 400 column limit of our table and we won't be able to onboard any new columns. One approach that we can implement is to modify the table to increase the columns to 500 (or 600) but I am looking for other solutions on how to implement ETL / design the table structure for these scenarios. | ETL for a frequently changing Table structure | 0.197375 | 1 | 0 | 131 |
48,272,916 | 2018-01-16T01:30:00.000 | -1 | 0 | 0 | 0 | python,oauth-2.0,uber-api | 48,273,068 | 2 | true | 1 | 0 | The Uber Api does not support oauth2 with a username and password (at least not from python) | 1 | 0 | 0 | I am creating an app which only needs to access the Uber API with one account, mine. Is it possible to connect to the API with my account credentials? And if not, how else can I programmatically order rides through my account? | How to get oauth2 authorization with developer password for Uber API? | 1.2 | 0 | 1 | 219 |
48,273,001 | 2018-01-16T01:45:00.000 | 0 | 0 | 0 | 1 | python,parallel-processing,fabric | 48,341,150 | 1 | true | 0 | 0 | The task1 did not run at all because running a command with & in Fabric does not work.
This is because in Linux, when you log out of a session, all the processes associated with it are terminated.
So if you want to make sure a command keeps running even after you log out of the session you need to run it like this:
run('nohup sh command &') | 1 | 0 | 0 | For my automation purposes, I'm using Fabric. But I could not run 2 tasks at the same time?
For example, I want to run task 1 to collect data in the tmp folder. I want to run task 2, which will generate data and put it in tmp. Task 1 will be running a bit before task 2.
Here is my pseudo code:
output1 = run("./task1_data_logger &")
output2 = run("./task2_main_program")
RESULT: Task2_main_program is running fine but I didn't see task1_data_logger running at all. I thought I put the & so that Task1 can be run in the background.
I've read Parallel execution document but it is more for running parallel in multiple host, which is not my case.
Does anyone know how to run 2 tasks simultaneously instead of serially?
Thank you. | Running 2 tasks at the same time using Fabric | 1.2 | 0 | 0 | 114 |
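A sketch of how the accepted answer's nohup advice could be applied to the pseudo code above, assuming Fabric 1.x inside an existing fabfile task (pty=False helps the background process survive after run returns; the log path is arbitrary):

```python
from fabric.api import run

# start the logger in the background, detached from the session
run('nohup ./task1_data_logger > /tmp/task1.log 2>&1 &', pty=False)
# then run the main program in the foreground as usual
output2 = run('./task2_main_program')
```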
48,273,710 | 2018-01-16T03:33:00.000 | 0 | 0 | 1 | 0 | python,compare,difflib | 48,273,885 | 2 | false | 0 | 0 | You may use below
set(str2).difference(set(str1))
which will give you the difference of the two list. | 1 | 0 | 0 | I'm relatively new to python and I am using difflib to compare two files and I want to find all the lines that don't match. The first file is just one line so it is essentially comparing against all the lines of the second file. When using difflib, the results show the '-' sign in front of the lines that don't match and it doesn't show anything in front of the line that does match. (I thought it would show a '+'). For the lines that have a '-' in front, how can I just write those lines to a brand new file (without the '-' in front) ? Below is the code snippet I am using for the difflib. Any help is greatly appreciated.
import difflib
f=open('fg_new.txt','r')
f1=open('out.txt','r')
str1=f.read()
str2=f1.read()
str1=str1.split()
str2=str2.split()
d=difflib.Differ()
diff=list(d.compare(str2,str1))
print ('\n'.join(diff)) | difflib and removing lines even without + in front of them python | 0 | 0 | 0 | 252 |
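A sketch that extends the snippet in the question to write only the '-' items (with the prefix stripped) to a new file; Differ prefixes items unique to the first sequence passed to compare() with '- ':

```python
import difflib

with open('fg_new.txt') as f, open('out.txt') as f1:
    str1 = f.read().split()
    str2 = f1.read().split()

diff = difflib.Differ().compare(str2, str1)
# keep only the '- ' entries and drop the two-character prefix
missing = [line[2:] for line in diff if line.startswith('- ')]

with open('missing.txt', 'w') as out:
    out.write('\n'.join(missing))
```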
48,279,419 | 2018-01-16T10:46:00.000 | 1 | 1 | 1 | 0 | python,class | 48,279,639 | 2 | false | 0 | 0 | Its not a matter of quality here, in Object-oriented-design which python supports, __init__ is way to supply data when an object is created first. In OOPS, this is called a constructor. In other words A constructor is a method which prepares a valid object.
There are design patterns in large projects that rely on the constructor feature provided by Python. Without it they will not function.
For example:
You want to keep track of every object that is created for a class; now you need a method which is executed every time an object is created, hence the constructor.
Another useful example for a constructor: let's say you want to create a customer object for a bank. A customer of a bank must have an account number, so basically you have to set a rule for what a valid customer object is, hence the constructor. | 1 | 2 | 0 | How does the quality of my code get affected if I don't use the __init__ method in Python? A simple example would be appreciated. | What happens when your class does not explicitly define an __init__ method? | 0.099668 | 0 | 0 | 486
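A minimal illustration of the bank-customer example above (the class and field names are just for illustration):

```python
class Customer(object):
    """A customer is only valid if it carries an account number."""
    def __init__(self, name, account_number):
        self.name = name
        self.account_number = account_number

# the constructor forces callers to supply the required data up front
c = Customer('Alice', '12-345-678')
```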
48,279,902 | 2018-01-16T11:12:00.000 | 1 | 0 | 0 | 0 | python,pandas,apriori | 48,280,113 | 1 | false | 0 | 0 | message is string type, and elif "what is" in message: seems to be correct in syntax.
Have you checked whether the indentation is correct? Sometimes the bug can be a very simple thing.. | 1 | 0 | 1 | I am performing Sequential Rule Mining using Apriori Algorithm and FPA, I have the dataset in excel as shown below, I want to know, how should I load my data into pandas dataframe what I am using is the following read_excel command, but the data contains ---> between items and lies in single column as shown below.
How should I load and perform Pattern Mining. | Sequential Rule Mining using Apriori Algorithm and Pandas | 0.197375 | 0 | 0 | 683 |
48,280,156 | 2018-01-16T11:26:00.000 | 1 | 0 | 1 | 0 | python,pypy,bin | 48,308,373 | 1 | false | 0 | 0 | PyPy, like CPython, includes pip and setuptools. You should open a CMD command line windows, cd to the directory with the pypy.exe or pypy3.exe file, and execute pypy -mensurepip or pypy3 -ensurepip | 1 | 1 | 0 | I have just installed PyPy 3.5 (32-bit version) under Windows 10 and everything seems to be there, apart from the bin directory. That means that I don't have pip at my disposal and thus can't install additional packages.
Is there a way to get it installed properly?
This question refers to PyPy and not to CPython! | PyPy 3.5 (Windows) does not contain bin directory | 0.197375 | 0 | 0 | 463 |
48,282,664 | 2018-01-16T13:43:00.000 | 0 | 0 | 1 | 0 | python | 48,282,772 | 2 | false | 0 | 0 | Parentheses after a name means a function/method is called there.
An object can be created by calling its constructor (the __init__ function). The constructor is invoked by calling the class itself as a function, e.g. Workbook().
The functions or object methods are called similarly using parentheses. | 1 | 2 | 0 | I am just new in Python and have limited knowledge about Object Oriented Programming. Just want to ask a few things about Object, Methods and Function.
I noticed that some objects have parentheses right after their name, like book = Workbook(), and some have no parentheses. May I know the difference between the two?
Similarly, methods have parentheses right after the method name, as in b.get_sheet_names().
May I know what is the concept behind inclusion of parentheses right after the object name and methods. | Purpose of parentheses right after object's and method's name | 0 | 0 | 0 | 1,533 |
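A small example of the difference, using a made-up class in the spirit of the Workbook example from the question:

```python
class Workbook(object):
    def get_sheet_names(self):
        return ['Sheet1', 'Sheet2']

book = Workbook()                # parentheses: the class is called, creating an object
method = book.get_sheet_names    # no parentheses: only a reference to the method
names = book.get_sheet_names()   # parentheses: the method is actually called
print(names)                     # ['Sheet1', 'Sheet2']
```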
48,283,696 | 2018-01-16T14:38:00.000 | 3 | 0 | 0 | 1 | python,heroku,celery | 48,284,437 | 1 | true | 1 | 0 | Quite simply: limit your concurrency (number of celery worker processes) to the number of tasks that can safely run in parallel on this server.
Note that if you have different tasks having widly different resource needs (ie one task that eats a lot of ram and takes minutes to complete and a couple ones that are fast and don't require much resources at all) you might be better using two distinct nodes to serve them (one for the heavy tasks and the other for the light ones) so heavy tasks don't block light ones. You can use queues to route tasks to different celery nodes. | 1 | 3 | 0 | I have an app running on Heroku and I'm using celery together with a worker dyno to process background work.
I'm running tasks that are using quite a lot of memory. These tasks get started at roughly the same time, but I want only one or two tasks to be running at the same time, the others must wait in the queue. How can I achieve that?
If they run at the same time I run out of memory and the system gets restarted. I know why it's using a lot of memory and not looking to decrease that | How to limit the number of tasks that runs in celery | 1.2 | 0 | 0 | 1,789 |
48,287,088 | 2018-01-16T17:48:00.000 | 0 | 0 | 0 | 1 | python,docker,bcolz | 48,287,398 | 1 | false | 0 | 0 | The setup.py compiles bcolz with different flags depending on the CPU. This means bcolz is not portable in a docker container. | 1 | 0 | 0 | I am running a docker container that works perfectly on multiple different hosts. However when I run on AWS cr1.8xlarge one of the packages (bcolz) fails with "invalid instruction" error.
I exec into the container and run bcolz.test() which fails. But if I pip uninstall bcolz and then reinstall the same version with pip install bcolz==1.1.1 and run bcolz.test() again and it works.
How can this be? | Package fails in docker container. Reinstall and it works. Why? | 0 | 0 | 0 | 110 |
48,287,470 | 2018-01-16T18:16:00.000 | 0 | 0 | 0 | 0 | python,html,pdf,jupyter-notebook,jupyter | 48,291,383 | 2 | false | 1 | 0 | I fixed this myself.
It turns out that somewhere in the code, there was a stray plaintext tag.
Although it did not run the entire length of the cell, the fact that the plaintext tag was there at all changed the dynamic of the cell.
Next, I had strange formatting errors (Text was of different size and strangely emphasized) when using = as plaintext in the cell. When opening the cell for editing, these = symbols were big bold and blue. This probably has something to do with the markdown language.
This was solved by placing the = on the same line as other text.
I did have to convert the page to HTML, then use a firefox addon to convert to PDF.
Converting to PDF from jupyter notebook uses LaTeX to transcribe the page, and all html is converted to plaintext.
The page appeared as normal with html tables, and normal html in the markdown cell. I just had to be careful with any extraneous tags.
If anyone else encounters this problem, check your html tags, and make sure that you are not accidentally doing something in markdown language. | 1 | 0 | 0 | I am just getting started with Jupyter Notebook and I'm running into an issue when exporting.
In my current notebook, I alternate between code cells with code and markdown cells. (Which explain my code).
In the markdown cells, sometimes I will use a little HTML to display a table or a list. I will also use the bold tag <b></b> to emphasize a particular portion of text.
My problem is, when I export this notebook to PDF (via the menu in Jupyter Notebook) all of my HTML gets saved as plaintext.
For example, instead of displaying a table, when exporting to PDF, the HTML will be displayed instead. <tr>Table<tr> <th>part1</th>, etc.
I've tried exporting to HTML instead, but even the HTML file displays the HTML as plaintext.
I tried downloading nbconvert (which is probably what I'm doing when I use the jupter GUI anyways) and using that via terminal, but I still get the same result.
Has anyone run into this problem before? | Exporting Jupyter Notebook as either PDF or HTML makes all HTML plaintext | 0 | 0 | 0 | 1,850 |
48,287,766 | 2018-01-16T18:36:00.000 | 3 | 0 | 0 | 0 | python,pandas,dataframe,sorting,dask | 48,287,827 | 1 | true | 0 | 0 | Yes, by calling set_index on the column that you wish to sort. On a single machine it uses your hard drive intelligently for excess space. | 1 | 4 | 1 | I need to sort a data table that is well over the size of the physical memory of the machine I am using. Pandas cannot handle it because it needs to read the entire data into memory. Can dask handle that?
Thanks! | sort very large data with dask? | 1.2 | 0 | 0 | 1,393 |
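A minimal sketch of the set_index approach from the answer (file pattern and column name are placeholders); dask spills to disk when the data does not fit in memory:

```python
import dask.dataframe as dd

df = dd.read_csv('big-data-*.csv')
# set_index performs an out-of-core sort on that column
df_sorted = df.set_index('sort_column')
df_sorted.to_csv('sorted-*.csv')
```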
48,288,763 | 2018-01-16T19:47:00.000 | 2 | 0 | 1 | 0 | python,regex | 48,289,327 | 1 | true | 0 | 0 | If you are ok with the following assertions:
Names and surnames always begin with a capital letter
For names reduced to one capital letter, this letter is always immediately followed by a dot
Names can be separated with either a comma or the "and" word
These names end with a final dot
Then you can use this regex: ^©[0-9]{4} +(([A-Z][a-z]+|[A-Z]\.|and|,) *)*\. * | 1 | 0 | 1 | I want to use regex to match patterns in paragraphs like the following:
©2016 Rina Foygel Barber and Emil Y. Sidky. Many optimization problems arising in high-dimensional statistics decompose naturally into a sum of several terms, where the individual terms are relatively simple but the composite objective function can only be optimized with iterative algorithms. In this paper, we are interested in optimization problems of the form F(Kx) + G(x), where K is a fixed linear transformation, while F and G are functions that may be nonconvex and/or nondifferentiable. In particular, if either of the terms are nonconvex, existing alternating minimization techniques may fail to converge; other types of existing approaches may instead be unable to handle nondifferentiability. We propose the mocca (mirrored convex/concave) algorithm, a primal/dual optimization approach that takes a local convex approximation to each term at every iteration. Inspired by optimization problems arising in computed tomography (CT) imaging, this algorithm can handle a range of nonconvex composite optimization problems, and offers theoretical guarantees for convergence when the overall problem is approximately convex (that is, any concavity in one term is balanced out by convexity in the other term). Empirical results show fast convergence for several structured signal recovery problems.
So that the first line with human names, year, and copyright (©2016 Rina Foygel Barber and Emil Y. Sidky.) can be removed.
The only thing I could come up with so far was to use ^© ?[0-9][0-9][0-9][0-9].+\.. However, this can hardly match things like the above paragraph due to the . in human names. Any suggestions? Thanks! | Python regex match human names with abbr (dots) in text | 1.2 | 0 | 0 | 179
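A sketch of how the regex from the answer could be applied to strip those leading copyright/name spans, assuming re.MULTILINE so ^ also matches at the start of each line:

```python
import re

PATTERN = r'^©[0-9]{4} +(([A-Z][a-z]+|[A-Z]\.|and|,) *)*\. *'

def strip_copyright(paragraph):
    # drop the leading "©2016 Name and Name." portion, keep the abstract text
    return re.sub(PATTERN, '', paragraph, flags=re.MULTILINE)
```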
48,289,834 | 2018-01-16T21:06:00.000 | 0 | 0 | 0 | 0 | python,django | 48,289,880 | 3 | true | 1 | 0 | Yes you can use admin panel for your teachers. for this Purpose you need to mark them as staff to login in admin panel and set for them permissions you want to add them. | 3 | 1 | 0 | I am currently trying to make a learning app. The three main users would be the Admin, Teacher and Student. Should I use the django admin panel for the teachers ? It has a lot of features and is fully customizable and I can choose what a teacher can do or not from there. Is this a correct approach ? | Should I use django admin panel for regular users? | 1.2 | 0 | 0 | 754 |
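A minimal sketch of what "mark them as staff and assign permissions" from the accepted answer might look like, assuming a Teachers group with the desired permissions already exists:

```python
from django.contrib.auth.models import Group

def promote_to_teacher(user):
    user.is_staff = True                         # allows logging in to the admin site
    user.save()
    group = Group.objects.get(name='Teachers')   # hypothetical group holding teacher permissions
    user.groups.add(group)
```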
48,289,834 | 2018-01-16T21:06:00.000 | 0 | 0 | 0 | 0 | python,django | 48,289,917 | 3 | false | 1 | 0 | You should give an admin panel to all users. However, limit what certain users see.
Example: Students shouldn't be able to create courses in the panel but Admins could. | 3 | 1 | 0 | I am currently trying to make a learning app. The three main users would be the Admin, Teacher and Student. Should I use the django admin panel for the teachers ? It has a lot of features and is fully customizable and I can choose what a teacher can do or not from there. Is this a correct approach ? | Should I use django admin panel for regular users? | 0 | 0 | 0 | 754 |
48,289,834 | 2018-01-16T21:06:00.000 | 5 | 0 | 0 | 0 | python,django | 48,290,569 | 3 | false | 1 | 0 | While you can use the admin panel for all users, I don't recommend it. Security is tight, but not very flexible. Also, dedicated pages developed for your user functions can be better suited for the job from both the design and functionality standpoints.
Take the time to develop quality pages for your users. You won't regret it. | 3 | 1 | 0 | I am currently trying to make a learning app. The three main users would be the Admin, Teacher and Student. Should I use the django admin panel for the teachers ? It has a lot of features and is fully customizable and I can choose what a teacher can do or not from there. Is this a correct approach ? | Should I use django admin panel for regular users? | 0.321513 | 0 | 0 | 754 |
48,291,449 | 2018-01-16T23:29:00.000 | 3 | 0 | 1 | 0 | python | 48,291,494 | 1 | false | 0 | 0 | If the function exists exclusively to serve that object type, then you should probably make it a method of the class; that requires the obj.func() syntax.
If the function will also work on objects not of that one class, then you should make it a regular function, performing the generalization and discrimination with the function. This requires the syntax func(obj). | 1 | 2 | 0 | I understand that the dot operator is accessing the method specific to an object that is an instance of the class containing that method/function. However, in which cases do you instead call the function directly on an object, in the form func(obj) as opposed to obj.func()?
Can both techniques always be implemented (at least in custom code) or are there certain cases in which the former should be used over the latter, and vice versa?
I had previously read that the form func(obj) is for processing data that the object holds, but why would this not be possible with doing obj.dataMember.func(), is there an advantage to passing just the object, such as some change in mutability? | In Python, when should an object be passed as an argument as opposed to calling the method on object with the dot operator? | 0.53705 | 0 | 0 | 38 |
48,297,293 | 2018-01-17T09:08:00.000 | 0 | 0 | 1 | 0 | python,namedtuple | 48,297,465 | 2 | true | 0 | 0 | Few problem's what I can see is
You can't specify default argument values for namedtuple classes. This makes them unwieldy when your data may have many optional properties.
The attribute values of namedtuple instances are still accessible using numerical indexes and iteration. Especially in externalised APIs, this can lead to unintentional usage that makes it harder to move to a real class later. If you are not in control of all of the usage of namedtuple instances, it's better to define your own class
and for when to use it please do see the comment by IMCoins | 1 | 1 | 0 | During my online class one of my python tutors told me that namedtuple can cause more harm than good.
I am confused why. Can someone please specify when to use namedtuple and when not to? | When should I use and when should I avoid namedtuple in Python? | 1.2 | 0 | 0 | 1,346 |
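A short illustration of the second point above, showing how namedtuple fields stay reachable by position, which callers may come to rely on:

```python
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(x=1, y=2)

print(p.x)    # attribute access, the intended usage
print(p[0])   # positional access still works, so callers may depend on it
x, y = p      # iteration/unpacking also works, another hidden dependency
```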
48,299,705 | 2018-01-17T11:11:00.000 | 0 | 0 | 0 | 0 | python,postgresql,vpn | 48,399,241 | 1 | false | 0 | 0 | Additional information on the topic revealed that actual issue being the
local address the client is using for sending data when talking to the (database) server:
Your client needs to use the local VPN address it was assigned as the source address.
This is achieved by adding in a socket.bind(_source address_) call before the call to socket.connect(_target address_).
Or, more conveniently, just provide the source address parameter with the socket.create_connection(address[, timeout[, source_address]]) call that is setting up the connection to the server. | 1 | 0 | 0 | I have a project using pandas-python to access data from Postgres using SQLAlchemy createngine function. While I pass the credentials and hostname:portname it throws error and asks me to add the machine IP to pg_conf.hba file on the Postgres server. Which will be cumbersome as I don't have a static IP for my machine and even this project need to be shared with other people and it doesn't make any sense to keep on adding new IPs or making requests with ** IPs as it has sensitive data. | Trying to access Postgres Data within the VPN without adding local machine ip to Postgres Server | 0 | 1 | 0 | 451 |
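A sketch of the create_connection variant from the answer; the VPN address and database endpoint below are placeholders:

```python
import socket

# bind outgoing traffic to the VPN-assigned address (port 0 = any free port)
conn = socket.create_connection(
    ('db.internal.example.com', 5432),
    timeout=10,
    source_address=('10.8.0.6', 0),
)
```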
48,302,876 | 2018-01-17T13:57:00.000 | 1 | 0 | 0 | 0 | python,pandas | 48,303,455 | 1 | false | 0 | 0 | I think the problem is related to the fact that I was trying to assign a None to a bool Series, then it just tries to convert to a different type (why not object?)
Fixed by changing the dtype to object first: dataframe.foo = dataframe.foo.astype(object).
Works like a charm now. | 1 | 0 | 1 | I'm facing a weird issue on Pandas now, not sure if a pandas pitfall or just something I'm missing...
My pd.Series is just
foo
False
False
False
> a.foo.dtype
dtype('bool')
When I use a dataframe.set_value(index, col, None), my whole Series is converted to dtype('float64') (same thing applies to a.at[index, col] = None).
Now my Series is
foo
NaN
NaN
NaN
Do you have any idea on how this happens and how to fix it?
Thanks in advance. :)
Edit:
Using 0.20.1. | dtype changes after set_value/at | 0.197375 | 0 | 0 | 37 |
48,302,947 | 2018-01-17T14:01:00.000 | 0 | 0 | 0 | 0 | python,attributes,gurobi | 48,479,898 | 1 | false | 0 | 0 | If m is the model, i create new model for presolve (mpre = m.presolve()), and then i use mpre.getAttr(GRB.Attr.NumVars), mpre.getAttr(GRB.Attr.NumConstrs) and mpre.getAttr(GRB.Attr.NumConstrs), mpre.getAttr(GRB.Attr.NumNZs). | 1 | 0 | 1 | I try to get number of rows, columns and nonzeros values after presolve. I know about getAttr(GRB.Attr.NumVars), getAttr(GRB.Attr.NumConstrs)...but they give the statics before presolve. Can any one help me.
Thanks | Python Gurobi, report of statistics presolve | 0 | 0 | 0 | 71 |
48,308,350 | 2018-01-17T19:12:00.000 | 2 | 0 | 1 | 0 | python,windows,virtualenv | 48,308,517 | 1 | false | 0 | 0 | packages.txt exists in Python36-32\Scripts\ENV\Scripts, but you weren't in that directory when you ran the pip install command. | 1 | 0 | 0 | I have just activated my virtualenv in Python. I have a text file in the same path named "packages.txt".
Below is the path:
(screenshot of the project path omitted)
But, I get the following error which is shown below.
Unable to find the reason. Please guide. | Error while trying to install a text file of packages | 0.379949 | 0 | 0 | 25 |
48,309,776 | 2018-01-17T20:50:00.000 | 0 | 1 | 0 | 0 | python,selenium,automated-tests,modular | 48,311,473 | 1 | false | 0 | 0 | You don't really want your tests to be sequential. That breaks one of the core rules of unit tests where they should be able to be run in any order.
You haven't posted any code so it's hard to know what to suggest, but if you aren't using the page object model, I would suggest that you start. There are a lot of resources on the web for this, but the basics are that you create a single class per page or widget. That class would hold all the code and locators that pertain to that page. This will help with the modular aspect of what you are seeking because in your script you just instantiate the page object and then consume the API. The details of interacting with the page, the logic, etc. all live in the page object and are exposed via the API it provides.
Changes/updates are easy. If the login page changes, you edit the page object for the login page and you're done. If the page objects are properly implemented and the changes to the page aren't severe, many times you won't need to change the scripts at all.
A simple example would be the login page. In the login class for that page, you would have a login() method that takes username and password. The login() method would handle entering the username and password into the appropriate fields and clicking the sign in button, etc. | 1 | 0 | 0 | Edited Question:
I guess I worded my previous question improperly, I actually want to get away from "unit tests" and create automated, modular system tests that build off of each other to test the application as whole. Many parts are dependent upon the previous pages and subsequent pages cannot be reached without first performing the necessary steps on the previous pages.
For example (and I am sorry I cannot give the actual code), I want to sign into an app, then insert some data, then show that the data was sent successfully. It is more involved than that, however, I would like to make the web driver portion, 'Module 1.x'. Then the sign in portion, 'Module 2.x'. The data portion, 'Module 3.x'. Finally, success portion, 'Module 4.x'. I was hoping to achieve this so that I could eventually say, "ok, for this test, I need it to be a bit more complicated so let's do, IE (ie. Module 1.4), sign in (ie. Module 2.1), add a name (ie Module 3.1), add an address (ie. Module 3.2), add a phone number (ie Module 3.3), then check for success (ie Module 4.1). So, I need all of these strung together. (This is extremely simplified and just an example of what I need to occur. Even in the case of the unit tests, I am unable to simply skip to a page to check that the elements are present without completing the required prerequisite information.) The issue that I am running into with the lengthy tests that I have created is that each one requires multiple edits when something is changed and then multiplied by the number of drivers, in this case Chrome, IE, Edge and Firefox (a factor of 4). Maybe my approach is totally wrong but this is new ground for me, so any advice is much appreciated. Thank you again for your help!
Previous Question:
I have found many answers for creating unit tests, however, I am unable to find any advice on how to make said tests sequential.
I really want to make modular tests that can be reused when the same action is being performed repeatedly. I have tried various ways to achieve this but I have been unsuccessful. Currently I have several lengthy tests that reuse much of the same code in each test, but I have to adjust each one individually with any new changes.
So, I really would like to have .py files that only contain a few lines of code for the specific task that I am trying to complete, while re-using the same browser instance that is already open and on the page where the previous portion of the test left off. Hoping to achieve this by 'calling' the smaller/modular test files.
Any help and/or examples are greatly appreciated. Thank you for your time and assistance with this issue.
Respectfully,
Billiamaire | Is it possible to make sequential tests in Python-Selenium tests? | 0 | 0 | 1 | 150 |
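A rough sketch of the login page object described in the answer, assuming Selenium's By locators; the element IDs are made up:

```python
from selenium.webdriver.common.by import By

class LoginPage(object):
    USERNAME = (By.ID, 'username')   # hypothetical locators
    PASSWORD = (By.ID, 'password')
    SIGN_IN = (By.ID, 'sign-in')

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        # all locator and interaction details live here, not in the test scripts
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SIGN_IN).click()
```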
48,310,756 | 2018-01-17T22:03:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,concurrency,queue,amazon-sqs | 48,311,147 | 1 | false | 1 | 0 | Hard to understand what exactly you're trying to accomplish, but I think you should let the user submit through the web server to a database, and then have your timed process query that database for 'valid' submissions to process every so many minutes. Timestamp, max() and a user/sessionId within SQL should be the only necessary elements.
Might give one of the managed database offerings on AWS, IBM Cloud, etc. a go if you don't want to sink in the time to running your own database server.
Best of luck. | 1 | 0 | 0 | I have made a web form which, upon submit, starts a large process of operations that could result in a large batch of notifications being sent to a number of other users.
In order to avoid an end-user mass-submitting many times during a short time span, I would like to queue up these operations in a queue (or what?) and after a certain delay, only actually submit the latest entry.
I am currently working on AWS and experimenting with various solutions using SQS. Unfortunately I have not been able to find a good solution.
Current solution
Right now, I am doing the following, but I am assuming I am using the wrong tool for the job:
First time the user submits:
The backend receives the request and checks whether a temporary queue called something like temp_queue_[user's id] already exists in Amazon.
If true, delete this queue, then create a new queue with a delivery delay of 10 minutes and enqueue the message in it.
If false, do the same as above, just without deleting a queue first.
I then have a separate process which reloads a list of queues every 10-some minutes and actually commits any message they find.
I have played around with other approaches such as trying different delivery delays, various MessageGroupIds and so forth, but I end up with the same problem which is, that one consumer will not be guaranteed to get all messages, and in flight, delayed and invisible messages are not able to be dequeued.
Furthermore, I cannot "filter" messages from a queue, such as to only receive messages related to only a specific user. So I am definitely starting to think that a queue is the wrong tool. Problem is, I don't know what is.
Best regards, | Delay form submission and commit only the latest | 0 | 0 | 0 | 35 |
48,314,010 | 2018-01-18T04:46:00.000 | 3 | 0 | 1 | 0 | python,python-3.x | 48,314,021 | 6 | false | 0 | 0 | You can use pip freeze > requirements.txt to generate dependencies list, and use pip install -r requirements.txt to install all dependencies. | 2 | 6 | 0 | In Node.js, for a project we create a package.json file, which lists all the dependencies, so that NPM can automatically install them.
Is there an equivalent way to do this with Python ? | Is there a way to automatically install required packages in Python? | 0.099668 | 0 | 0 | 14,641 |
48,314,010 | 2018-01-18T04:46:00.000 | 3 | 0 | 1 | 0 | python,python-3.x | 61,704,488 | 6 | false | 0 | 0 | The quickest and best way that I've found when delivering a program that has dependencies uses a few simple steps.
Use pip install pipreqs that allows running pipreqs /--directory to your program folder--/ and produces the requirements.txt file inside your program's folder for your program that lists all package dependencies.
Copy/paste the folders containing package dependencies from where ever your python packages are downloaded to (eg..\Python\Python38-32\Lib\site-packages) straight into the folder for your program.
Run pip uninstall -r requirements.txt and run the program to make sure it still works without the dependencies installed. | 2 | 6 | 0 | In Node.js, for a project we create a package.json file, which lists all the dependencies, so that NPM can automatically install them.
Is there an equivalent way to do this with Python ? | Is there a way to automatically install required packages in Python? | 0.099668 | 0 | 0 | 14,641 |
48,314,268 | 2018-01-18T05:15:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,aws-glue | 48,823,484 | 9 | false | 0 | 0 | Adding to CedricB,
For development / testing purposes, it's not necessary to upload the code to S3; you can set up a Zeppelin notebook locally and have an SSH connection established so you can have access to the data catalog, crawlers, etc., and also the S3 bucket where your data resides.
After all the testing is completed, you can bundle your code, upload to an S3 bucket. Then create a Job pointing to the ETL script in S3 bucket, so that the job can be run, and scheduled as well. Once all the development/testing is completed, make sure to delete the dev endpoint, as we are charged even for the IDLE time.
Regards | 4 | 38 | 0 | After reading Amazon docs, my understanding is that the only way to run/test a Glue script is to deploy it to a dev endpoint and debug remotely if necessary. At the same time, if the (Python) code consists of multiple files and packages, all except the main script need to be zipped. All this gives me the feeling that Glue is not suitable for any complex ETL task as development and testing is cumbersome. I could test my Spark code locally without having to upload the code to S3 every time, and verify the tests on a CI server without having to pay for a development Glue endpoint. | Can I test AWS Glue code locally? | 0.022219 | 0 | 0 | 23,328 |
48,314,268 | 2018-01-18T05:15:00.000 | 8 | 0 | 0 | 0 | python,amazon-web-services,aws-glue | 54,096,194 | 9 | false | 0 | 0 | I spoke to an AWS sales engineer and they said no, you can only test Glue code by running a Glue transform (in the cloud). He mentioned that there were testing out something called Outpost to allow on-prem operations, but that it wasn't publically available yet. So this seems like a solid "no" which is a shame because it otherwise seems pretty nice. But with out unit tests, its no-go for me. | 4 | 38 | 0 | After reading Amazon docs, my understanding is that the only way to run/test a Glue script is to deploy it to a dev endpoint and debug remotely if necessary. At the same time, if the (Python) code consists of multiple files and packages, all except the main script need to be zipped. All this gives me the feeling that Glue is not suitable for any complex ETL task as development and testing is cumbersome. I could test my Spark code locally without having to upload the code to S3 every time, and verify the tests on a CI server without having to pay for a development Glue endpoint. | Can I test AWS Glue code locally? | 1 | 0 | 0 | 23,328 |
48,314,268 | 2018-01-18T05:15:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,aws-glue | 48,531,207 | 9 | false | 0 | 0 | Not that I know of, and if you have a lot of remote assets, it will be tricky. Using Windows, I normally run a development endpoint and a local zeppelin notebook while I am authoring my job. I shut it down each day.
You could use the job editor > script editor to edit, save, and run the job. Not sure of the cost difference. | 4 | 38 | 0 | After reading Amazon docs, my understanding is that the only way to run/test a Glue script is to deploy it to a dev endpoint and debug remotely if necessary. At the same time, if the (Python) code consists of multiple files and packages, all except the main script need to be zipped. All this gives me the feeling that Glue is not suitable for any complex ETL task as development and testing is cumbersome. I could test my Spark code locally without having to upload the code to S3 every time, and verify the tests on a CI server without having to pay for a development Glue endpoint. | Can I test AWS Glue code locally? | 0.044415 | 0 | 0 | 23,328 |
48,314,268 | 2018-01-18T05:15:00.000 | 7 | 0 | 0 | 0 | python,amazon-web-services,aws-glue | 53,972,448 | 9 | false | 0 | 0 | You can keep glue and pyspark code in separate files and can unit-test the pyspark code locally. For zipping dependency files, we wrote a shell script which zips the files and uploads them to an S3 location, and then applies a CF template to deploy the glue job.
For detecting dependencies, we created (glue job)_dependency.txt file. | 4 | 38 | 0 | After reading Amazon docs, my understanding is that the only way to run/test a Glue script is to deploy it to a dev endpoint and debug remotely if necessary. At the same time, if the (Python) code consists of multiple files and packages, all except the main script need to be zipped. All this gives me the feeling that Glue is not suitable for any complex ETL task as development and testing is cumbersome. I could test my Spark code locally without having to upload the code to S3 every time, and verify the tests on a CI server without having to pay for a development Glue endpoint. | Can I test AWS Glue code locally? | 1 | 0 | 0 | 23,328 |
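A minimal Python sketch of the zip-and-upload step described in that answer; the file names, bucket and key below are assumptions for illustration, not from the original answer:
import zipfile
import boto3

DEPS = ["helpers.py", "transforms.py"]              # hypothetical dependency modules
BUCKET, KEY = "my-glue-artifacts", "libs/job_dependencies.zip"

with zipfile.ZipFile("job_dependencies.zip", "w") as zf:
    for path in DEPS:
        zf.write(path)                              # add each dependency to the archive

boto3.client("s3").upload_file("job_dependencies.zip", BUCKET, KEY)
# The zip can then be referenced from the Glue job's --extra-py-files parameter.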
48,314,549 | 2018-01-18T05:43:00.000 | 1 | 0 | 1 | 0 | python,opencv,pycharm,interpreter | 48,314,822 | 1 | true | 0 | 0 | Go to project interpreter settings in preferences in Pycharm, and use the + sign there to add the module to your interpreter.
This can happen when pip is installing to a directory that is not part of your project's python interpreter's PATH.
Installing via the Pycharm preferences menu always solves for me, although there is a deeper issue of pip not installing to the correct directory... | 1 | 0 | 1 | I am python newbie. I am doing 1 project testing of Keras. I already installed opencv by pip install opencv-python but i can't find opencv in interpreter of Pycharm and have an error when i import cv2. My interpreter is usr/bin/python2.7 | I have trouble with Interpreter in Pycharm | 1.2 | 0 | 0 | 335 |
48,315,785 | 2018-01-18T07:21:00.000 | 3 | 0 | 1 | 0 | python,pyinstaller | 53,320,412 | 5 | false | 0 | 0 | I have been battling with this problem myself. Unfortunately, there is no feasible solution to the problem other than using the ugly console (completely agree there).
The problem stems from the fact that until PyInstaller unpacks all the files into a temp dir, no scripts will be run. From my research, there is no way to alter this functionality using currently available options within PyInstaller. It would be nice if the community behind PyInstaller would make this a standard feature of the module, but until then we may have to explore other installer options.
Happy programming! | 1 | 13 | 0 | I create a single file python application with Pyinstaller using --onefile parameters.
Everything works as expected but the startup time is around 10 seconds on my machine. The problem is that during the file unpacking process of the Pyinstaller package there is no visual feedback, so you don't know if the application is starting or even if you really clicked the icon. This problem becomes worse if the machine is slow (in my test with a very old machine I need almost 20 seconds to see the first login of my application).
Is there a way to create some splash screen or visual feedback (like a progress bar, as on unpackers) during the Pyinstaller bootstrap sequence?
Please note the question is about the Pyinstaller unpacking process BEFORE the real application is executed, not about the application itself, which already has its own splash screen.
thank you
19.01.2018 - UPDATE1
My application is a full GUI app, so I prefer not to use the console as "visual feedback" during the unpacking process. | Pyinstaller adding splash screen or visual feedback during file extraction | 0.119427 | 0 | 0 | 7,827
48,322,232 | 2018-01-18T13:13:00.000 | 0 | 0 | 0 | 0 | python-3.x,decision-tree,one-hot-encoding | 48,341,100 | 1 | false | 0 | 0 | Based on my tests, One-hot encoding results in continuous variables (amounts, in my case) having been assigned higher feature importance.
Also, a single level of a categorical variable must meet a very high bar in order to be selected for splitting early in the tree building. This apparently degrades predictive performance (I didn't see this consequence mentioned in any post, unfortunately).
I will investigate other approaches. | 1 | 0 | 1 | I need to build decision trees on categorical data.
I understood that scikit-learn is only able to deal with numerical values, and the recommended approach is then to use one-hot encoding, preferably using the pandas dummies.
So, I built a sample dataset where all attributes and labels are categorical. At this stage, I am trying to understand how to 'one-hot' encode to be able to use sklearn, but the documentation does not address this case.
Could you give me a quick example or a link to some material for beginners? | Use of one-hot encoder to build decision trees | 0 | 0 | 0 | 792
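A small sketch of the get_dummies workflow the question asks about; the toy columns below are invented for illustration:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy categorical dataset, just to show the mechanics.
df = pd.DataFrame({"color": ["red", "blue", "red", "green"],
                   "size":  ["S", "M", "L", "M"],
                   "label": ["yes", "no", "yes", "no"]})

X = pd.get_dummies(df[["color", "size"]])   # one binary column per category level
y = df["label"]

clf = DecisionTreeClassifier().fit(X, y)
# In real use, align (reindex) the dummy columns of new data to X.columns before predicting.
print(clf.predict(X.head(1)))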
48,323,434 | 2018-01-18T14:20:00.000 | 2 | 0 | 1 | 0 | javascript,python,regex,regex-negation,regex-group | 48,323,646 | 2 | false | 0 | 0 | (^|\s)(\w*(\.))+ - this may satisfy the sample text you've posted. You can find all '.' in third group
UPDATE: if in your text you have words, started with any other symbol, for instance, #asd.qwe.zxc, you can improve your reg exp:
(^|\s)[^@]?(\w*(\.))+ | 1 | 1 | 0 | assuming the following sentence:
this is @sys.any and. here @names hello. and good.bye
how would I find all the '.' besides the ones appearing in words that start with @?
disclaimer, been playing at regex101 for over 2 hours now after reading a few answers on SO and other forums. | Regex find char '.' except for words starting with @ | 0.197375 | 0 | 0 | 98 |
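A quick way to try the suggested pattern on the posted sentence, plus a plainer two-step alternative that records the index of every '.' outside '@'-words (the two-step version is my own sketch, not from the answer):
import re

s = "this is @sys.any and. here @names hello. and good.bye"

# The suggested pattern matches dot-containing words that do not start with '@'.
print([m.group(0).strip() for m in re.finditer(r"(^|\s)(\w*(\.))+", s)])

# Two-step alternative: skip '@'-words, then collect the index of each '.'.
dots = [m.start() + i
        for m in re.finditer(r"\S+", s) if not m.group(0).startswith("@")
        for i, ch in enumerate(m.group(0)) if ch == "."]
print(dots)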
48,324,322 | 2018-01-18T15:04:00.000 | 0 | 0 | 1 | 0 | python-3.x,opencv3.0 | 48,517,556 | 1 | false | 0 | 0 | It could be an issue of having multiple Python versions on your machine; you should select the Python interpreter that is global to your system (the one that utilises pip in the terminal).
I am using vim as my python idle. | Use opencv in python idle | 0 | 0 | 0 | 787 |
48,326,786 | 2018-01-18T17:08:00.000 | 0 | 0 | 1 | 0 | python-3.x,pycharm | 48,326,851 | 1 | false | 0 | 0 | It's telling you that you need Microsoft Visual C++ 10.0 as pandas library has dependencies that refer to that version of Microsoft Visual C++. It provides you the url to download and install it. Use it, then import pandas library. | 1 | 0 | 1 | the error message
error: Microsoft Visual C++ 10.0 is required. Get it with "Microsoft Windows SDK 7.1": www.microsoft.com/download/details.aspx?id=8279 | While trying to import pandas library to pycharm there is an error message | 0 | 0 | 0 | 244 |
48,330,515 | 2018-01-18T21:21:00.000 | 1 | 0 | 0 | 0 | python,flask | 48,330,614 | 3 | false | 1 | 0 | To make an application work with a public host you have to make sure port forwarding is enabled on your modem device; you can establish a connection with the nginx server. | 2 | 0 | 0 | So I've currently got a flask app that I'm using to run a testing app, (this works on localhost) but I can't work out how to launch it so I can test the connectivity from other devices (public).
Can someone explain how I can go about launching it, or at least point me in the right direction to some documentation about how to make it public. I think I'm either not port forwarding it correctly or I need a web server like xampp to run it.
thanks | Launching a flask app/website so other networks can connect | 0.066568 | 0 | 0 | 243 |
48,330,515 | 2018-01-18T21:21:00.000 | 0 | 0 | 0 | 0 | python,flask | 48,330,687 | 3 | false | 1 | 0 | If you change the ip address of the flask server from the default 0.0.0.0 to the ip address of your computer (eg 192.168.1.2), the other clients on your local network can connect.
If you want to expose your app to the whole of the internet you should get a host (eg try heroku.com) that has a fixed ip assigned and is reachable from the internet. | 2 | 0 | 0 | So I've currently got a flask app that I'm using to run a testing app, (this works on localhost) but I can't work out how to launch it so I can test the connectivity from other devices (public).
can someone explain how I can go about launching it, or at least point me in the right direction to some documentation about how to make it public. I don't think I'm either port forwarding it correctly or i need a web server like xampp to run it.
thanks | Launching a flask app/website so other networks can connect | 0 | 0 | 0 | 243 |
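For quick LAN testing, a minimal sketch; binding to all interfaces is my suggestion here, not part of either answer, and you still need port forwarding for access from outside your own network:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "reachable from other devices on the network"

if __name__ == "__main__":
    # 0.0.0.0 listens on every interface; pick any free port you like.
    app.run(host="0.0.0.0", port=5000)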
48,331,004 | 2018-01-18T21:58:00.000 | 1 | 0 | 0 | 0 | python,opencv,image-processing,feature-detection,sift | 48,331,105 | 2 | false | 0 | 0 | I would split the image into smaller windows. So long as you know the windows overlap (I assume you have an idea of the lateral shift) the match in any window will be valid.
You can even use this as a check, the translation between feature points in any part of the image must be the same for the transform to be valid | 2 | 1 | 1 | I want to use OpenCV Python to do SIFT feature detection on remote sensing images. These images are high resolution and can be thousands of pixels wide (7000 x 6000 or bigger). I am having trouble with insufficient memory, however. As a reference point, I ran the same 7000 x 6000 image in Matlab (using VLFEAT) without memory error, although larger images could be problematic. Does anyone have suggestions for processing this kind of data set using OpenCV SIFT?
OpenCV Error: Insufficient memory (Failed to allocate 672000000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file
(I'm using Python 2.7 and OpenCV 3.4 in the Spyder IDE on a Windows 64-bit with 32 GB of RAM.) | Errors processing large images with SIFT OpenCV | 0.099668 | 0 | 0 | 649 |
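A rough sketch of the tiling idea from the first answer; the tile size, file name and the contrib SIFT constructor are assumptions (on OpenCV 3.4 SIFT lives in the opencv-contrib xfeatures2d module), and for brevity the tiles here do not overlap, while the answer recommends overlapping them so border features are not lost:
import cv2
import numpy as np

img = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)
sift = cv2.xfeatures2d.SIFT_create()           # cv2.SIFT_create() on newer builds
tile = 2000                                    # assumed tile size, tune to your RAM

points, descriptors = [], []
for y in range(0, img.shape[0], tile):
    for x in range(0, img.shape[1], tile):
        window = img[y:y + tile, x:x + tile]
        kps, des = sift.detectAndCompute(window, None)
        if des is None:
            continue
        # store coordinates shifted back into the full-image frame
        points.extend((kp.pt[0] + x, kp.pt[1] + y) for kp in kps)
        descriptors.append(des)

descriptors = np.vstack(descriptors) if descriptors else None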
48,331,004 | 2018-01-18T21:58:00.000 | 0 | 0 | 0 | 0 | python,opencv,image-processing,feature-detection,sift | 48,355,204 | 2 | false | 0 | 0 | There are a few flavors how to process SIFT corner detection in this case:
process single image per unit/time one core;
multiprocess 2 or more images /unit time on single core;
multiprocess 2 or more images/unit time on multiple cores.
Read cores as either cpu or gpu. Threading results in serial processing instead of parallel processing.
As stated Rebecca has at least 32gb internal memory on her PC at her disposal which is more than sufficient for option 1 to process at once.
So in that light.. splitting a single image as suggested by Martin... should be a last resort in my opinion.
Why should you avoid splitting a single image in multiple windows during feature detection (w/o running out of memory)?
Answer:
If a corner is located at the split side of the window and thus unwillingly becomes two more or less polygonal straight-line-like shapes, you won't find the corner you're looking for, unless you have a specialized algorithm to search for those anomalies.
In casu:
In Rebecca's case it's crucial to know which approach she took on processing the image(s)... Was it one, two, or many more images loaded simultaneously?
If hundreds or thousands of images are simultaneously loaded into memory... you're basically choking the system by taking away its breathing space (in the form of free memory). In addition, we're not talking about other programs that are loaded into memory and claim (reserve) or consume memory space for various background programs. That comes on top of the issue at hand.
Overthinking:
If as suggested by Martin there is an issue with the Opencv lib in handling such amount of images as described by Rebecca.. do some debugging and then report your findings to Opencv, post a question here at SO as she did... but post also code that shows how you deal with the image processing at the start; as explained above why that is important. And yes as Martin stated... don't post wrappers... totally pointless to do so. A referral link to it (with possible version number) is more than enough... or a tag ;-) | 2 | 1 | 1 | I want to use OpenCV Python to do SIFT feature detection on remote sensing images. These images are high resolution and can be thousands of pixels wide (7000 x 6000 or bigger). I am having trouble with insufficient memory, however. As a reference point, I ran the same 7000 x 6000 image in Matlab (using VLFEAT) without memory error, although larger images could be problematic. Does anyone have suggestions for processing this kind of data set using OpenCV SIFT?
OpenCV Error: Insufficient memory (Failed to allocate 672000000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file
(I'm using Python 2.7 and OpenCV 3.4 in the Spyder IDE on a Windows 64-bit with 32 GB of RAM.) | Errors processing large images with SIFT OpenCV | 0 | 0 | 0 | 649 |
48,333,572 | 2018-01-19T03:11:00.000 | 3 | 0 | 0 | 0 | python,postgresql,csv | 48,336,589 | 1 | true | 0 | 0 | inf (meaning infinity) is a correct value for floating point values (real and double precision), but not for numeric.
So you will either have to use one of the former data types or fix the input data. | 1 | 2 | 1 | I am working on copying csv file content into postgresql database.
While copying into the database, I get this error:
invalid input syntax for type numeric: "inf"
My question is:
I think "inf" means "infinitive" value, is it right? what does "inf" correctly mean? If it is kinda error, is it possible to recover original value?
And, Should I manually correct these values to copy it into the database?
Is there any good solution to fix this problem without manually correcting or setting exceptions in copying codebase? | What mean "Inf" in csv? | 1.2 | 1 | 0 | 1,132 |
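If the CSV really does contain "inf" markers and the target column must stay numeric, one option is to scrub them before COPY; a small sketch (the column index and the empty-string-means-NULL convention are assumptions):
import csv
import math

with open("readings.csv", newline="") as src, \
     open("readings_clean.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    for row in reader:
        try:
            bad = not math.isfinite(float(row[2]))   # assumed numeric column
        except ValueError:
            bad = True
        if bad:
            row[2] = ""          # empty string maps to NULL with COPY ... NULL ''
        writer.writerow(row)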
48,333,923 | 2018-01-19T03:59:00.000 | 2 | 0 | 1 | 0 | python,tabs,spyder,spaces,enter | 48,334,011 | 1 | true | 0 | 0 | (Spyder maintainer here) Spyder uses spaces by default. However, that can be changed to tabs by going to
Preferences > Editor > Advanced settings > Indentation characters | 1 | 1 | 0 | I have read all about the horrendous nature of tabs instead of spaces in Python. When I press enter in Spyder, is the IDE adding a tab or 4 spaces? | Is pressing *enter* in Spyder (in Python) a tab or 4 spaces? | 1.2 | 0 | 0 | 1,047 |
48,333,942 | 2018-01-19T04:02:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,compiler-errors,architecture,nuitka | 49,217,467 | 1 | false | 0 | 0 | I just ran into this problem today.
I think it is because anaconda is probably compiled with a different C compiler. I just used the normal python (no anaconda) and pipenv to have environments. | 1 | 0 | 0 | When I was compiling my python project using Nuitka, there came an error:
python36.lib(python36.dll) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'x86'
I am using Windows 10 64bit, Nuitka 0.5.28.1 Python3.6 64 bit, Visual Studio 2017 Community and Python 3.6.3 |Anaconda custom (64-bit)|.
I want to build a x64 exe file.
I've searched Internet saying that something Configuration Properties, Target Machine. However, I don't have an VS project when using Nuitka, so I don't know where to config. | fatal error LNK1112 when nuitka compile python | 0 | 0 | 0 | 298 |
48,333,999 | 2018-01-19T04:10:00.000 | 6 | 1 | 0 | 1 | python,python-2.7,python-os | 48,334,082 | 3 | false | 0 | 0 | I believe os.path.abspath(os.sep) is close to what you are asking for. | 1 | 5 | 0 | Situation: I need to find the top level [root] directory of any operating system using the most Pythonic way possible, without system calls.
Problem: While I can check for the operating system using things like if "Windows" in platform.system(), I cannot be too sure if the drive letter is always C:\ or / (the latter being unlikely). I also cannot possibly be sure that there are only Windows and *NIXes that needs to be catalog.
Question: Is there a way to get the top level directory of any operating system? Preferably using the os module since I am already using it. | How to get the filesystem's root directory in Python? | 1 | 0 | 0 | 5,801 |
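The accepted suggestion in action (the expected outputs in the comment are what I would anticipate, not verified on every platform):
import os

root = os.path.abspath(os.sep)
print(root)   # '/' on Linux/macOS, something like 'C:\\' on Windows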
48,335,000 | 2018-01-19T06:00:00.000 | 0 | 0 | 0 | 0 | python,google-finance,google-finance-api | 48,366,381 | 1 | false | 1 | 0 | You can't request time series data for multiple stocks from that source at once. Instead, you have to put your request into a loop. Putting your request into a loop, you can request time series stock by stock. | 1 | 0 | 0 | I am able to get historical data on one stock per each request. But I need to get historical data for multiple stocks in a single request from Google finance using python.
Any help will be highly appreciated!
Thanks | Get historical data for multiple stocks in single request using python from Google finance | 0 | 0 | 1 | 799 |
48,335,994 | 2018-01-19T07:23:00.000 | 0 | 0 | 0 | 0 | python,algorithm,classification,knn,supervised-learning | 48,701,185 | 1 | true | 0 | 0 | There is no definite answer to this. The current trend is to not do feature selection and let the classifier decide which features to use. Take current image datasets for example, which also have 1000+ features (depending on the image resolution). They are usually fed to a CNN without any preprocessing. However, this is not generally true. If, for example, you assume there are a lot of correlations in the data, feature selection might help. | 1 | 0 | 1 | My question is,
Does the machine learning algorithm take care of selecting the best features in my data? Or shall I do feature selection and scaling prior to running my machine learning algorithm?
I am aware of a few supervised classification machine learning algorithms such as kNN, Neural Networks, AdaBoost etc.
But are there some you recommend I look at? | Do I have to do feature selection prior to applying my machine learning algorithm? | 1.2 | 0 | 0 | 124
48,338,962 | 2018-01-19T10:29:00.000 | 1 | 0 | 1 | 0 | python,postgresql,jupyter-notebook | 51,459,116 | 1 | false | 0 | 0 | conda install -c anaconda postgresql worked fine for me on Windows 10.
I know postgresql isn't the same module as psycopg2, but the easy installation of postgresql would trump any advantages psycopg2 might have for me. | 1 | 0 | 0 | I have installed Anaconda 2.7 on my desktop and want to connect it to Postgresql server.
I also installed psycopg2 through command prompt and it was successful. But when I import it using Jupyter notebook it shows me the following error.
ImportError Traceback (most recent call
last) in ()
----> 1 import psycopg2
C:\Users\amitdarak\AppData\Local\Continuum\anaconda2\lib\site-packages\psycopg2-2.7.3.2-py2.7-win-amd64.egg\psycopg2__init__.py
in ()
48 # Import the DBAPI-2.0 stuff into top-level module.
49
---> 50 from psycopg2._psycopg import ( # noqa
51 BINARY, NUMBER, STRING, DATETIME, ROWID,
52
ImportError: DLL load failed: The specified module could not be found | Unable to import psycopg2 in python using anaconda | 0.197375 | 1 | 0 | 1,406 |
48,339,205 | 2018-01-19T10:40:00.000 | 5 | 0 | 0 | 0 | python-3.x,rest,django-rest-framework | 52,226,988 | 2 | false | 0 | 0 | as of SEP18, also have a look on Quart, APIstar
as of MAR19, add fastAPI, looks very promising
nota:
Bottle is lighter (& faster) than Flask, but with less bells & whistles
Falcon is fast !
fastAPI is fast as well !
also, as Bottle/Flask are more general frameworks (they have templating features for instance, not API related), frameworks such as Falcon or fastAPI are really designed to serve as APIs framework. | 1 | 10 | 0 | I am new to Python and looking to build rest full web services using python. Due to some dependency, Could not use any other scripting language.
Can anyone suggest whether Python has any API-only kind of framework, or any other lightweight framework for REST APIs in Python?
Thanks,
Pooja | Which python framework will be suitable to rest api only | 0.462117 | 0 | 1 | 10,417 |
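A minimal sketch with one of the API-only options mentioned in the answer (FastAPI); the module name in the run command is an assumption, e.g. save as main.py and run uvicorn main:app:
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int):
    # the path parameter is validated and converted to int automatically
    return {"item_id": item_id}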
48,340,410 | 2018-01-19T11:47:00.000 | 0 | 0 | 0 | 0 | python,django,oracle,datetime | 48,375,785 | 2 | true | 1 | 0 | So finally setting my local machine's timezone to UTC with this timedatectl set-timezone UTC particular command worked. | 1 | 0 | 0 | I have django app running on ubuntu-14.04 and database is oracle.
The timezones are as follow
django- settings - TIME_ZONE = 'UTC'
ubuntu - Asia/Kolkata
oracle dbtimezone - UTC
oracle sessiontimezone - Asia/Kolkata #this is via sqldeveloper
While storing datetimes into db I am doing following.
datetime.datetime.now(timezone.utc)
The error I get is that the time cannot be in the past.
I don't want to change the code line. I can set the timezone of my Ubuntu or oracle as that is my development env. | Confusion about timezones oracle and django | 1.2 | 0 | 0 | 281 |
48,342,270 | 2018-01-19T13:30:00.000 | 0 | 0 | 1 | 0 | python,os.system | 48,342,355 | 1 | false | 0 | 0 | Can I suggest that you use ipython-notebook?
It is very friendly and also supports the shell commands directly. | 1 | 0 | 0 | Python is lovely to me because of short line code.
Is there any other way to clear Python shell without using:
import os
os.system('clear') | Is this possible to clear python shell without os.system("clear")? | 0 | 0 | 0 | 120 |
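As an alternative to spawning a shell, the ANSI clear-screen escape can be printed directly; this is my own suggestion, not from the answer above, and it only works in ANSI-capable terminals (not in IDLE, and only in newer Windows consoles):
print("\033[2J\033[H", end="")   # clear the screen, then move the cursor to the top-left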
48,343,024 | 2018-01-19T14:13:00.000 | 0 | 0 | 0 | 1 | python,windows,oauth,google-bigquery | 48,620,815 | 2 | true | 0 | 0 | You can create the credentials by following this link cloud.google.com/storage/docs/authentication#service_accounts.
In the python script, you can pass the json file path directly to the function you are using to read/write from/to BQ with the private_key argument.
pandas_gbq.read_gbq(query, project_id=myprojectid, ..., private_key='jsonfilepath', dialect='legacy')
pandas.to_gbq(dataframe, destination_table, project_id, chunksize=10000, ..., private_key='jsonfilepath')
Then you schedule the task to run the python script as you'll normally do with the windows task scheduler. | 1 | 1 | 0 | Is there a way to schedule a python script loading data to Bigquery without having to copy the authentication code generated from a google account link for each run.
I am currently using the windows task scheduler to achieve this. | Schedule a python script loading data to BigQuery under windows 10 | 1.2 | 1 | 0 | 853 |
48,344,081 | 2018-01-19T15:09:00.000 | 0 | 0 | 0 | 0 | python,minimization,simulated-annealing | 48,346,637 | 1 | false | 0 | 0 | Gonna answer my own question here. I climbed into the actual .cpp code and found the answers.
In Corana's method, you select how many total iterations N of annealing you want. Then the minimization is a nested series of loops where you vary the step sizes, number of step-size adjustments, and temperature values at user-defined intervals. In PAGMO, they changed this so you explicitly specify how many times you will do these. Those are the n_* parameters and bin_size. I don't think bin_size is a good name here, because it isn't actually a size. It is the number of steps taken through a bin range, such that N=n_T_adj * n_range_adj * bin_range. I think just calling it n_bins or n_bins_adj makes more sense. Every bin_size function evaluations, the stepsize is modified (see below for limits).
In Corana's method you specify the multiplicative factor to decrease the temperature each time it is needed; it could be that you reach the minimum temp before running out of iterations, or vice versa. In PAGMO, the algorithm automatically computes the temperature-change factor so that you reach Tf at the end of the iteration sequence: r_t=(Tf/Ts)**(1/n_T_adj).
The start_range is, I think, a bad name for this variable. The stepsize in the alorithm is a fraction between 0 and start_range which defines the width of the search bins between the upper and lower bounds for each variable. So if stepsize=0.5, width=0.5*(upper_bound-lower_bound). At each iteration, the step size is adjusted based on how many function calls were accepted. If the step size grows larger than start_range, it is reset to that value. I think I would call it step_limit instead. But there you go. | 1 | 0 | 1 | I'm using the PYGMO package to solve some nasty non-linear minimization problems, and am very interested in using their simulated_annealing algorithm, however it has a lot of hyper-parameters for which I don't really have any good intuition. These include:
Ts (float) – starting temperature
Tf (float) – final temperature
n_T_adj (int) – number of temperature adjustments in the annealing schedule
n_range_adj (int) – number of adjustments of the search range performed at a constant temperature
bin_size (int) – number of mutations that are used to compute the acceptance rate
start_range (float) – starting range for mutating the decision vector
Let's say I have a 4 dimensional geometric registration (homography) problem with variables and search ranges:
x1: [-10,10] (a shift in x)
x2: [10,30] (a shift in y)
x3: [-45,0] (rotation angle)
x4: [0.5,2] (scaling/magnification factor)
And the cost function for a random (bad) choice of values is 50. A good value is around zero.
I understand that Ts and Tf are for the Metropolis acceptance criterion of new solutions. That means Ts should be about the expected size of the initial changes in the cost function, and Tf small enough that no more changes are expected.
In Corana's paper, there are many hyperparameters listed that make sense: N_s is the number of evaluation cycles before changing step sizes, N_T are the number of step-size changes before changing the temperature, and r_T is the factor by which the temp is reduced each time. However, I can't figure out how these correlate to pygmo's parameters of n_T_adj, n_range_adj, bin_size, and start_range.
I'm really curious if anyone can explain how pygmo's hyperparameters are used, and how they relate to the original paper by Corana et al? | PAGMO/PYGMO: Anyone understand the options for Corana’s Simulated Annealing? | 0 | 0 | 0 | 175 |
48,344,243 | 2018-01-19T15:18:00.000 | 0 | 0 | 1 | 1 | python,pycharm | 48,344,507 | 1 | false | 0 | 0 | If you haven't looked at the Keymap reference from the Help menu, please do so first. I tried doing Option+Left/Right and I was able to navigate words in PyCharm Community Edition 2017.2.3. Also within the IDE, type Shift+CMD+A to find any action. However, in the PyCharm Terminal (View -> Tool Windows --> Terminal, Shift+Left/Right key combination should work to navigate words. | 1 | 1 | 0 | Most terminal emulators allow you to configure it to make Option+Left/Right Arrow jump forward or backward a word. Is it possible to do this in the PyCharm Terminal? | PyCharm Terminal: Make Option + Arrow Keys go backwards/forwards a word | 0 | 0 | 0 | 214 |
48,349,091 | 2018-01-19T20:32:00.000 | 0 | 0 | 1 | 0 | python,c,gcc | 48,350,932 | 1 | false | 0 | 0 | Well assuming you want to handle all type of projects and their dependencies (which is not easy) the best way is to have a module that generates a Makefile for the project and use it to compile and solve all dependencies | 1 | 0 | 0 | I'm trying to build a simple IDE that is web based in Python. For now, this IDE will support C only. I know it is possible to call the gcc with Python to compile and run a single C file. But what if I would like to compile and run multiple C files from a single project (i.e. linking .h files and .c files), is this possible? If yes, can you please tell me how? | Calling gcc to compile multiple files with Python | 0 | 0 | 0 | 330 |
48,351,902 | 2018-01-20T01:58:00.000 | 0 | 0 | 1 | 0 | python | 48,351,965 | 3 | false | 0 | 0 | For each dict item in the list you want to sort, you want to take the item's value keyed by 'info', which is a tuple, and sort on its second item (addressed as [1], counting from zero).
So: data.sort(key=lambda item: item['info'][1]) | 1 | 1 | 1 | data = [{'info': ('orange', 400000, 'apple'), 'photo': None}, {'info': ('grape', 485000, 'watermelon'), 'photo': None}]
I want to sort data by the 2nd element (400000, 485000) in the tuple in the dictionary. How do I do this?
I followed another answer and my closest attempt was data.sort(key=lambda tup: tup[1]), but that produces this error:
KeyError: 1 | Python - Sort a list by a certain element in a tuple in a dictionary | 0 | 0 | 0 | 55 |
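Running the suggested one-liner on the posted data:
data = [{'info': ('orange', 400000, 'apple'), 'photo': None},
        {'info': ('grape', 485000, 'watermelon'), 'photo': None}]

data.sort(key=lambda item: item['info'][1])   # sort by the 2nd element of the tuple
print([d['info'] for d in data])              # orange (400000) comes before grape (485000)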
48,352,221 | 2018-01-20T03:07:00.000 | 2 | 0 | 1 | 0 | python,numpy,matplotlib,cross-platform | 48,352,262 | 1 | false | 0 | 0 | A popular convention is to list requirements in a text file (requirements.txt) and install them when deploying the project. Depending on your deployment configuration, libraries can be installed in a virtual environment (google keyword: virtualenv), or in a local user folder (pip install --user -r requirements.txt, if this is the only project under this account) or globally (pip install -r requirements.txt, e.g. in a docker container) | 1 | 0 | 0 | I am writing a code in python that uses numpy, matplotlib etc.
How to make sure that even a remote web server with python installed but no extra modules, can run the code without errors?
I usually work on linux environment. Hence from source code, I can install the libraries in a prefix directory and can keep that along with my code. Then add pythonpath locally in my python code to use the directory.
But I started to realize it's not the correct way: first, it can't work cross-platform as the libraries are different, and the code inside my script that extends the pythonpath may not work due to the use of "/" in the path.
So I think I need to create directories like unix, windows, osx etc. and put my code there? I believe this is what I find when I download any code online. Is that what developers generally do to avoid these issues?
48,353,544 | 2018-01-20T07:00:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,amazon-redshift,aws-glue | 54,622,059 | 3 | false | 0 | 0 | AWS Glue should be able to process all the files in a folder irrespective of the name in a single job. If you don’t want the old file to be processed again move it using boto3 api for s3 to another location after each run. | 1 | 1 | 1 | Within AWS Glue how do I deal with files from S3 that will change every week.
Example:
Week 1: “filename01072018.csv”
Week 2: “filename01142018.csv”
These files are setup in the same format but I need Glue to be able to change per week to load this data into Redshift from S3. The code for Glue uses native Python as the backend. | Aws Glue - S3 - Native Python | 0 | 0 | 0 | 292 |
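A sketch of the "move the processed file after each run" suggestion using boto3; the bucket and prefixes are placeholders:
import boto3

s3 = boto3.client("s3")
bucket, key = "my-ingest-bucket", "incoming/filename01142018.csv"

# S3 has no real "move": copy the object to an archive prefix, then delete the original.
s3.copy_object(Bucket=bucket,
               Key=key.replace("incoming/", "processed/"),
               CopySource={"Bucket": bucket, "Key": key})
s3.delete_object(Bucket=bucket, Key=key)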
48,353,603 | 2018-01-20T07:09:00.000 | 0 | 0 | 0 | 0 | python,django,git,digital-ocean | 48,354,146 | 2 | false | 1 | 0 | No. Migrations are integral parts of your source code, they must be kept in your repo and deployed as any other part of your source.
Remember that migrations are not restricted to generated schema change operations - some schema changes cannot be correctly generated by makemigrations and have to be manually coded, and also data migrations can only be handcoded. | 1 | 0 | 0 | I'm going to push my offline Django project to Bitbucket, and then push that repo to my live Django server. My question is, do I exclude the contents in my migrations folders? And then perform makemigrations and migrate on my live server (Ubuntu/DigitalOcean) after the repo has been pushed to there? | Do I add the files in my migrations folder to my live Django server? | 0 | 0 | 0 | 136 |
48,356,271 | 2018-01-20T12:42:00.000 | 0 | 0 | 0 | 0 | python,sql,postgresql,insert | 48,356,325 | 1 | false | 0 | 0 | 150 inserts per second can be a load on a database and can affect performance. There are pros and cons to changing the approach that you have. Here are some things to consider:
Databases implement ACID, so inserts are secure. This is harder to achieve with buffering schemes.
How important is up-to-date information for queries?
What is the query load?
insert is pretty simple. Alternative mechanisms may require re-inventing the wheel.
Do you have other requirements on the inserts, such as ensuring they are in particular order?
No doubt, there are other considerations.
Here are some possible alternative approaches:
If recent data is not a concern, snapshot the database for querying purposes -- say once per day or once per hour.
Batch inserts in the application threads. A single insert can insert multiple rows.
Invest in larger hardware. An insert load that slows down a single processor may have little effect on a larger machine.
Invest in better hardware. More memory and a faster disk (particularly solid state) can have a big impact.
No doubt, there are other approaches as well. | 1 | 0 | 0 | I have a python script which hits dozens of API endpoints every 10s to write climate data to a database. Lets say on average I insert 1,500 rows every 10 seconds, from 10 different threads.
I am thinking of making a batch system whereby the insert queries aren't written to the db as they come in, but added to a waiting list and this list is inserted in batch when it reaches a certain size, and the list of course emptied.
Is this justified due to the overhead with frequently writing small numbers of rows to the db?
If so, would a list be wise? I am worried about if my program terminates unexpectedly, perhaps a form of serialized data would be better? | Overhead on an SQL insert significant? | 0 | 1 | 0 | 46 |
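A sketch of the "insert the buffered rows in one statement" idea using psycopg2's execute_values; the table, columns and connection string are assumptions, and any DB-API executemany works similarly:
import psycopg2
from psycopg2.extras import execute_values

buffer = [("2018-01-20T14:00:00Z", "station-1", 21.4),
          ("2018-01-20T14:00:00Z", "station-2", 19.8)]   # rows accumulated in memory

conn = psycopg2.connect("dbname=climate")
with conn, conn.cursor() as cur:
    execute_values(cur,
                   "INSERT INTO readings (ts, station, value) VALUES %s",
                   buffer)          # one round trip for the whole batch
buffer.clear()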
48,357,141 | 2018-01-20T14:12:00.000 | 1 | 0 | 0 | 0 | python | 48,357,241 | 1 | true | 0 | 0 | I would use the process of elimination. Start off with a set of all 4 digit numbers from 1000 to 9999.
Then if you give the computer a cow (say, for the digit 3 in the third position), the computer knows the secret is of the form _ _ 3 _. Remove all numbers that are not of that form from the set.
If you give the computer a bull, say for the number 4. Remove all 4 digit numbers that don't have a 4 in them somewhere.
For the computers next turn, just pick a random number from the set of numbers that it now knows are still potential values.
Also, if you don't get a bull or a cow from a number, you can remove all numbers that include the digits for the numbers you didn't get a bull or cow for.
Then repeat.
You'll whittle down the potential numbers pretty quickly. Then the computer will either guess the correct number or there will only be one left.
I hope this helps :) | 1 | 0 | 0 | So I've been making a small game in Python named Cows and Bulls. For those who don't know it's very simple. 1 player generates a number the other tries to guess. If the guess has a number on the correct position it gives you a cow. If it has a number but on the wrong position it gives you a bull, so until the cow value isn't 4 (4 digit number) the game keeps going. It keeps giving hints until the number is guessed.
I've actually sucessfully created the player part of the program. Now I moved on to creating an AI. I generate a number, and the PC tries to guess that number.
My problem is the conditions to help the PC find this number. Right now I have the basic ones. If the PC guess finds no bulls and no cows, it discards all those numbers for the next guesses, if it finds all bulls is tries every combination of with those 4 numbers and of course the normal winning conditions.
The PC takes a long time to guess it though. There aren't enough conditions that facilitate the process of guessing the number.
So I was wondering if anyone can give me some tips on what conditions I can put onto my program to facilitate him guessing the right number? I've been thinking about it but been struggling with it. Can't seem to find a good condition that actually helps considerably the time the PC takes to guess.
In any way thanks in advance! | Cows and Bulls PC Guessing | 1.2 | 0 | 1 | 593 |
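A compact sketch of the elimination strategy from the answer, using the question's convention (cow = right digit, right position; bull = right digit, wrong position); it assumes 4 distinct digits for simplicity:
import random
from itertools import permutations

def score(secret, guess):
    cows = sum(s == g for s, g in zip(secret, guess))
    bulls = sum(min(secret.count(d), guess.count(d)) for d in set(guess)) - cows
    return cows, bulls

candidates = ["".join(p) for p in permutations("0123456789", 4)]  # assumes distinct digits
secret = random.choice(candidates)

while True:
    guess = random.choice(candidates)
    feedback = score(secret, guess)
    if feedback == (4, 0):
        print("guessed", guess)
        break
    # keep only the numbers that would have produced the same feedback
    candidates = [c for c in candidates if score(c, guess) == feedback]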
48,357,407 | 2018-01-20T14:41:00.000 | 0 | 0 | 0 | 0 | python,html,sql,asp.net | 48,359,205 | 1 | false | 1 | 0 | Oracle procedure PM_USER_LOGIN_SP has one or more parameters, each of them having its own data type. When calling that procedure, you must match number and data type of each of them.
For example, if it expects 3 parameters, you can't pass only 2 (nor 4) of them (because of wrong number of arguments (parameters)).
If parameter #1 is DATE, you can't pass letter A to it (because of a wrong type). Note that DATEs are kind of "special", because something that looks like a date to us, humans (such as 20.01.2018, which is today) passed to Oracle procedure's DATE data type parameter must really be a date. '20.01.2018' is a string, so either pass date literal, such as DATE '2018-01-20' or use appropriate function with a format mask, TO_DATE('20.01.2018', 'dd.mm.yyyy').
Therefore, have a look at the procedure first, pay attention to what it expects. Then check what you pass to it. | 1 | 0 | 0 | I am working on a crawler using Python to grab some data on company internal web.but when I posted all the data,it showed PLS-00306 wrong number or type of arguments in call to PM_USER_LOGIN_SP
ORA-066550:line 1, column 7
PL/SQL: Statement ignored
I checked my Firefox inspector again and again, and all my request data were right, even I removed some of my request data or changed it, it returned another error code.
Is there someone help me out what's the problem. | Return PLS-00306 During login in with python | 0 | 1 | 0 | 28 |
48,364,941 | 2018-01-21T08:19:00.000 | 0 | 0 | 0 | 0 | python,web-scraping | 48,365,456 | 1 | false | 0 | 0 | Ok to answer your questions in order.
As @GalAbra mentions above, it is dependent on the design of the tool. From the sounds of it though, if index.html forces the browser to post data to tool.py then the IP of where tool.py is located will be the one that requests the page.
The ideal way would be to have a queuing system built into the tool. You could have the client add their request to the queue (possibly in a database) and then have tool.py monitor the queue for new entries and make the request. Possibly use threading when there are multiple new requests in the queue, depending on how much activity you think this tool will see.
Hope this helps | 1 | 0 | 0 | I am creating a web application which scrapes data from some other websites based on what the user searches.
I am planning to host this application on hosting websites like Hostgator or Namecheap.
Currently, the application contains a total of 2 pages. One is index.html and another is tool.py.
index.html takes an input via form and post it to tool.py.
tool.py is responsible for web scraping. I have 2 questions regarding this:
1) Let's say 2 users come to my website and searched simultaneously. Which IP will go to these websites which are to be scraped? Is it users own IP will go or the script IP will go (where the tool.py is located in this case let's suppose Namecheap server ip).
2) If 100's of users search simultaneously, how will the tool.py script reacts? Is there a better way to prevent excessive load to the single script? Maybe distributing and picking scripts randomly (eg: tool1.py, tool2.py, tool3.py etc) | Which IP address will go to the destination website? | 0 | 0 | 1 | 41 |
48,365,313 | 2018-01-21T09:14:00.000 | -1 | 0 | 0 | 0 | python,numpy,scipy,cluster-analysis,correlation | 48,372,244 | 1 | false | 0 | 0 | The obvious choice here is hierarchical agglomerative clustering.
Beware that most tools (e.g., sklearn) expect a distance matrix. But it can be trivially implemented for a similarity matrix instead. Then you can use correlation. This is textbook stuff. | 1 | 0 | 1 | Say I calculated the correlations of prices of 500 stocks, and stored them in a 500x500 correlation matrix, with 1s on the diagonal.
How can I cluster the correlations into smaller correlation matrices (in Python), such that the correlations of stocks in each matrix is maximized? Meaning to say, I would like to cluster the stocks such that in each cluster, the stock prices are all highly correlated with one another.
There is no upper bound to how many smaller matrices I can cluster into, although preferably, their sizes are similar i.e it is better to have 3 100x100 matrices and 1 200x200 matrix than say a 10x10 matrix, 90x90 matrix and 400x400 matrix. (i.e minimize standard deviation of matrix sizes).
Preferably to be done in Python. I've tried to look up SciPy's clustering libraries but have not yet found a solution (I'm new to SciPy and such statistical programming problems).
Any help that points me in the right direction is much appreciated! | Clustering a correlation matrix into multiple matrices of maximum correlations in Python | -0.197375 | 0 | 0 | 1,454 |
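A hedged sketch of the hierarchical-clustering suggestion, converting correlation to a distance first; the 1 - corr distance, the random stand-in data and the cluster count are my own choices, not prescribed by the answer:
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

prices = np.random.rand(250, 500)             # stand-in for 500 price series
corr = np.corrcoef(prices, rowvar=False)      # 500 x 500 correlation matrix

dist = 1 - corr                               # high correlation -> small distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")

labels = fcluster(Z, t=5, criterion="maxclust")   # e.g. split into 5 clusters
print(np.bincount(labels))                        # cluster sizes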
48,365,368 | 2018-01-21T09:21:00.000 | 1 | 0 | 1 | 0 | windows,python-3.x,uninstallation | 48,365,418 | 1 | false | 0 | 0 | Open the installation file like will setup.
When the installation window open, you can see "Change", "Repair" and "Remove".
Select "Remove" and continue. | 1 | 1 | 0 | Python 3.6 (32 bit) is installed on my computer in Program file folder. And it is available as a shortcut in my start menu. But I do not see it in "Control Panel --> Program and Features". So I am unable to uninstall it. Is there any other way (like, command line) to uninstall Python?
I have to uninstall the 32 bit. Then I have to re-install 64-bit version please. | Unable to uninstall Python 3.6 (32 bit) | 0.197375 | 0 | 0 | 4,128 |
48,366,535 | 2018-01-21T12:01:00.000 | 0 | 0 | 0 | 0 | python,xml,odoo-10 | 57,324,645 | 4 | false | 1 | 0 | On Odoo 12 it works only by setting readonly="True" (<- CamelCase). Using edit="False" nothing happens... | 1 | 0 | 0 | I was trying to make all fields in a form view read-only in odoov10. Is there any python method through I can get all the form view fields and change its attribute to readonly="True"? | How to make all fields readonly in form view odoo? | 0 | 0 | 0 | 4,994 |
48,369,319 | 2018-01-21T16:56:00.000 | 0 | 0 | 0 | 0 | python-2.7,tensorflow | 48,369,514 | 1 | false | 0 | 0 | This error probably occurs because you are using python 2.7, whereas tensorflow is for use with python 3.5 and python 3.6.
I have a graph which I am putting all its OPs in the specific device by using tf.device('device') command. But, one of the OPs is only allowed to be in CPU device, so I am using allow_soft_placement=True and it was working correctly in tf 1.1 (it put only the OPs without GPU implementation in CPU and other OPs in GPU). But now (in tf1.4) when I am running my network it is putting all the OPs in the CPU (not just the one which has not GPU implementation).
Any help is appreciated. | Allow soft placement in tensorflow | 0 | 0 | 0 | 518 |
48,370,121 | 2018-01-21T18:14:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,artificial-intelligence,reinforcement-learning,openai-gym | 48,370,179 | 1 | false | 0 | 0 | You need to make up a reward that proxies the behavior you want - and that is actually no trivial business.
If there is some numbers on a fixed part of the screen representing score, then you can use old fashioned image processing techniques to read the numbers and let those be your reward function.
If there is a minimap in a fixed part of the screen with fixed scale and orientation, then you could use minus the distance of your character to a target as reward.
If there are no fixed elements in the UI you can use to proxy the reward, then you are going to have a bad time, unless you can somehow access the internal variables of the console to proxy the reward (using the position coordinates of your PC, for example). | 1 | 0 | 1 | I am new to RL and the best I've done is CartPole in openAI gym. In cartPole, the API automatically provides the reward given the action taken. How am I supposed to decide the reward when all I have is pixel data and no "magic function" that could tell the reward for a certain action.
Say, I want to make a self driving bot in GTA San Andreas. The input I have access to are raw pixels. How am I supposed to figure out the reward for a certain action it takes? | Reinforcement Learning - How to we decide the reward to the agent when the input to the game is only pixels? | 0.379949 | 0 | 0 | 322 |
48,370,499 | 2018-01-21T18:54:00.000 | 0 | 1 | 0 | 0 | python,django,django-apps | 48,372,180 | 2 | false | 1 | 0 | as a simple idea the inserting of new msg in database should be with a condition to limit their numbers (the count of the previous msg isn't > max )
another method : you will show the input of the msg jsut when (selet * form table where userid=sesion and count(usermsg)< max ) | 1 | 3 | 0 | I have been using Django Postman for a few weeks now, and in order to limit the number of messages sent by each user, I have been wondering what would be the best way to limit the number of messages a user can send a day, a week... using Django-postman?
I have been browsing dedicated documentation for weeks too in order to find an answer for the how, but I think this is not a usecase for now, and I do not really know how to manage that.
Of course I am not looking for a well cooked answer, but I would like to avoid writing labyrinthine code, so maybe just a few ideas about it could help me to see clear through that problematic.
Many thanks for your help on that topic! | Is it possible to limit the number of messages a user can send a day with Django postman? | 0 | 0 | 0 | 427 |
48,371,137 | 2018-01-21T20:01:00.000 | 1 | 0 | 0 | 0 | android,python,pygame,mouseevent,mouse | 48,371,785 | 1 | true | 0 | 1 | As far as I know this is not possible.
When handling input, mouse input and touch input are to be handled separately.
So to answer the 2 questions you listed at the end:
As far as I know there is no way to implement this functionality.
You could use the pixel coordinates of the arrows. However you can use Rects for that and test if the place of mouse input/touch input is inside the arrow button Rect with the collidepoint method
You can achieve that as follows:
arrow_left.collidepoint(mouse_x, mouse_y)
I hope this answer helped you! | 1 | 1 | 0 | I am trying to develop a game for Android using pygame.
It would be a platformer. To move the main charachter, I would make the game to wait for mouse events (like pygame.MOUSEBUTTONDOWN).
In order to do that on mobile, I'd like to create a graphic representation of a joypad with arrow keys to be shown on the bottom left corner of the screen.
Now, when the user touches one of the arrows, a MOUSEBUTTONDOWN event should be triggered and the charachter should move accordingly.
My question is: since the "joypad" object is a mere draw, how can I link it to the event with pygame?
Is there a way to do so? Should I use the pixel coordinates of the arrow keys of the joypad or is there a better choice? | pygame - link an object to mouse event | 1.2 | 0 | 0 | 70 |
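A minimal sketch of the Rect/collidepoint approach from the answer; the window size and button position are arbitrary:
import pygame

pygame.init()
screen = pygame.display.set_mode((480, 320))
arrow_left = pygame.Rect(20, 260, 40, 40)    # drawn joypad button, bottom-left corner

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            if arrow_left.collidepoint(event.pos):   # touch/click lands inside the button
                print("move left")
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (200, 200, 200), arrow_left)
    pygame.display.flip()
pygame.quit()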
48,371,880 | 2018-01-21T21:25:00.000 | 1 | 0 | 0 | 0 | java,python,macos,beautifulsoup | 48,371,962 | 1 | false | 1 | 0 | In linux you can use the sudo command to bypass any permission issues, I believe that same can be said for mac os. Just add into your terminal sudo "command" this will basically install it as "super user" hence you shouldn't have an issue reading and writing to files at certain locations. | 1 | 0 | 0 | I am getting the following error message when i try to install Beautifullsoup via mac terminal.
[Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-40423.pth'
Please help. | Beautifull soup install mac error | 0.197375 | 0 | 0 | 33 |
48,375,629 | 2018-01-22T06:15:00.000 | 0 | 0 | 1 | 0 | macos,python-2.7,anaconda,virtualenv,jupyter-notebook | 48,389,018 | 1 | false | 0 | 0 | That's strange, a Jupyter notebook should be able to use all packages you see with conda list. What does conda info --envs show?
You can activate a specific environment by doing source activate 'env-name' and then start a jupyter notebook from there. | 1 | 0 | 0 | I'm new to MacOS, and first time using Anaconda- I've installed Anaconda successfully and am able to read all the packages installed via conda list. I run python via Jupyter, but when I open Jupyter via cmd:Jupyter notebook, I'm not able to import some of the packages that conda lists as already installed. I've read that anaconda automatically uses a virtualenv- so I'm not familiar with how virtual environments work as well- I tried launching Jupyter from the anaconda directory- but the problem persists.
How can I access anaconda via Jupyter notebooks ?
PS: I even opened the Anaconda navigator GUI and launched Jupyter notebook from there- still I'm not able to load those libraries
PS2: I checked virtualenv --version and got command not found so I guess virtualenv isn't installed or used? This is a new machine and I've just installed anaconda and nothing else- not even a fresh copy of python-assuming that python comes inbuilt in the os. | How to access anaconda libraries on a Jupyter notebook on MacOS? | 0 | 0 | 0 | 764 |
48,379,797 | 2018-01-22T10:48:00.000 | 0 | 0 | 1 | 1 | python-3.x,sorting,directory | 48,391,262 | 1 | false | 0 | 0 | I will answer my own question: it turns out that it was the fact that I had moved dir and its complete contents to a different place on my disk, thus giving them different addresses, that caused the different sorting. | 1 | 0 | 0 | I have been using os.walk() to traverse a bunch of subdirectories inside a directory dir. These subdirectories are numbered from 0001 to 0899. I assumed that os.walk(dir) traverses these subdirectories in numerical order, i.e., as they are shown in the finder (I am on Mac), and so far I have had no reason to believe that this is not true.
However, a few days ago I noticed something strange: os.walk() suddenly (?) traverses the folders non-numerically (but always in the same sequence, I think). I am fairly sure that this was not the case before - I would have noticed.
I am aware that I can use sorted(os.walk(dir)) to have the subdirectories processed numerically, but that does not answer my question. How is it possible that the behaviour of os.walk() changed? Could it have to do with upgrading Python in the meantime (which I don't think I did - can this be checked somehow)?
EDIT: it occurred to me that I updated from macOS Sierra to High Sierra in the meantime. Maybe that is where the answer lies? | Strange behaviour of os.walk() | 0 | 0 | 0 | 45
48,382,873 | 2018-01-22T13:36:00.000 | 0 | 0 | 1 | 0 | python,algorithm,filtering,signal-processing,perceptron | 48,451,376 | 1 | true | 0 | 0 | Your explanation is correct. The X input vector is multiplied recursively by the filter coefficients. It's been some time since I wrote an adaptive filter, but if I remember correctly you're multiplying M filter coefficients by the latest M input values to get an update.
So M is the order of your filter, or the number of filter coefficients, and n is the length of the signal you are filtering. And as you note your recursive filter will look at a 'window' of those input values for each filtered output calculation. | 1 | 0 | 1 | I am trying to implement an adaptive filter for noise cancellation, in particular a RLS filter to remove motion artifacts from a signal. To do this I am reading some literature, there is one thing I don't understand and every book or article I found just asumes I already now this.
I have a reference signal represented as a list in Python of about 8000 elements, or samples. I need to input this to the RLS filter, but every algorithm I find always talks about the input vector as
X[n] = [x1[n], x2[n], x3[n], ........, xM[n]]T
Where X is the input vector, and n is a time instant. And here is where I get lost. If n is a time instant, it would mean x[n] is an element in the list, a sample. But if that is the case, what are x1, x2, ...., xM?
I realise this is not strictly a coding problem, but I hope someone can help!
Thanks... | Adaptive Filter input vectors and iteration | 1.2 | 0 | 0 | 345 |
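A tiny numpy sketch of what the answer describes: at each time n, the input vector X[n] is simply the window of the latest M samples. An LMS update is shown because it is shorter than RLS, but the input vector is built the same way; this substitution, the filter order and step size are my own choices:
import numpy as np

x = np.random.randn(8000)      # reference signal (e.g. motion sensor)
d = np.random.randn(8000)      # signal to be cleaned (placeholder data)
M, mu = 16, 0.01               # filter order and step size (assumed values)

w = np.zeros(M)
for n in range(M, len(x)):
    X = x[n - M + 1:n + 1][::-1]      # X[n] = [x[n], x[n-1], ..., x[n-M+1]]
    y = w @ X                         # filter output
    e = d[n] - y                      # error signal
    w = w + mu * e * X                # LMS coefficient update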
48,384,337 | 2018-01-22T14:53:00.000 | 2 | 0 | 0 | 0 | python,boto,boto3 | 48,384,477 | 1 | false | 1 | 0 | No, as of now it's not possible.
You have to specify the primary key to delete an item, although you can optionally pass ConditionExpression to prevent it from being deleted if some condition is not met. That is as much flexibility as the API provides. | 1 | 1 | 0 | Is it possible to delete an item from DynamoDB using the Python Boto3 library by specifying a secondary index value? I won't know the primary key in advance, so is it possible to skip the step of querying the index to retrieve the primary key, and just add a condition to the delete request that includes the secondary index value? | Is it possible to delete a DynamoDB item based on secondary index with Python Boto3 lib? | 0.379949 | 0 | 1 | 1,939
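Since the API requires the primary key, the usual workaround is the two-step query-then-delete; a hedged boto3 sketch where the table, index and attribute names are invented for illustration:
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("my-table")

# 1) look up the primary key via the secondary index
resp = table.query(IndexName="email-index",
                   KeyConditionExpression=Key("email").eq("user@example.com"))

# 2) delete each matching item by its primary key
for item in resp["Items"]:
    table.delete_item(Key={"id": item["id"]})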
48,385,116 | 2018-01-22T15:34:00.000 | 9 | 0 | 1 | 0 | python,tkinter | 48,385,739 | 3 | true | 0 | 1 | _tkinter is a C-based module that wraps an internal tcl/tk interpreter. When you import it, and it only, you get access to this interpreter but you do not get access to any of the python classes.
You certainly can import _tkinter, but then you would have to recreate all of the python interfaces to the tcl/tk functions. | 1 | 5 | 0 | All tutorials simply import tkinter,
I am wondering, though, why not import _tkinter? If my understanding is correct, _tkinter is the actual library in cpython and tkinter is the interface or API.
I am simply trying to grasp the paradigm as I read through some of the tkinter source code. It seems there is some python black magic afoot. | Import _tkinter or tkinter? | 1.2 | 0 | 0 | 3,762 |
48,386,293 | 2018-01-22T16:37:00.000 | 3 | 0 | 0 | 0 | python,arrays,algorithm | 48,389,275 | 1 | false | 0 | 0 | Here is my approach that I managed to come up with.
First of all we know that the resulting array will contain N+M elements, meaning that the left part will contain (N+M)/2 elements, and the right part will contain (N+M)/2 elements as well. Let's denote the resulting array as Ans, and denote the size of one of its parts as PartSize.
Perform a binary search operation on array A. The range of such binary search will be [0, N]. This binary search operation will help you determine the number of elements from array A that will form the left part of the resulting array.
Now, suppose we are testing the value i. If i elements from array A are supposed to be included in the left part of the resulting array, this means that j = PartSize - i elements must be included from array B in the first part as well. We have the following possibilities:
j > M this is an invalid state. In this case it means we still need to choose more elements from array A, so our new binary search range becomes [i + 1, N].
j <= M & A[i+1] < B[j] This is a tricky case. Think about it. If the next element in array A is smaller than the element j in array B, this means that element A[i+1] is supposed to be in the left part rather than element B[j]. In this case our new binary search range becomes [i+1, N].
j <= M & A[i] > B[j+1] This is close to the previous case. If the next element in array B is smaller than the element i in array A, the means that element B[j+1] is supposed to be in the left part rather than element A[i]. In this case our new binary search range becomes [0, i-1].
j <= M & A[i+1] >= B[j] & A[i] <= B[j+1] this is the optimal case, and you have finally found your answer.
After the binary search operation is finished, and you managed to calculate both i and j, you can now easily find the value of the median. You need to handle a few cases here depending on whether N+M is odd or even.
Hope it helps! | 1 | 1 | 1 | I'm working on a competitive programming problem where we're trying to find the median of two sorted arrays. The optimal algorithm is to perform a binary search and identify splitting points, i and j, between the two arrays.
I'm having trouble deriving the solution myself. I don't understand the initial logic. I will follow how I think of the problem so far.
The concept of the median is to partition the given array into two sets. Consider a hypothetical left array and a hypothetical right array after merging the two given arrays. Both these arrays are of the same length.
We know that the median given both those hypothetical arrays works out to be [max(left) + min(right)]/2. This makes sense so far. But the issue here is now knowing how to construct the left and right arrays.
We can choose a splitting point on ArrayA as i and a splitting point on ArrayB as j. Note that len(ArrayA[:i] + ArrayB[:j]) == len(ArrayA[i:] + ArrayB[j:]).
Now we just need to find the cutting points. We could try all splitting points i, j such that they satisfy the median condition. However this works out to be O(m*n) where M is size of ArrayB and where N is size of ArrayA.
I'm not sure how to get where I am to the binary search solution using my train of thought. If someone could give me pointers - that would be awesome. | How to find the median between two sorted arrays? | 0.53705 | 0 | 0 | 711 |
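A compact implementation of the binary-search partition idea sketched in the answer above (binary search over the smaller array; infinity sentinels handle the edge partitions):
def median_two_sorted(a, b):
    if len(a) > len(b):                    # search over the shorter array
        a, b = b, a
    n, m = len(a), len(b)
    lo, hi = 0, n
    while lo <= hi:
        i = (lo + hi) // 2                 # elements of a in the left half
        j = (n + m + 1) // 2 - i           # elements of b in the left half
        a_left = a[i - 1] if i > 0 else float("-inf")
        a_right = a[i] if i < n else float("inf")
        b_left = b[j - 1] if j > 0 else float("-inf")
        b_right = b[j] if j < m else float("inf")
        if a_left <= b_right and b_left <= a_right:
            if (n + m) % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2
        if a_left > b_right:
            hi = i - 1
        else:
            lo = i + 1

print(median_two_sorted([1, 3, 8], [2, 4, 6, 9]))   # 4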
48,387,330 | 2018-01-22T17:41:00.000 | -1 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 48,407,514 | 3 | false | 0 | 0 | No. You need to hardcode user id in your source and compare if user id in admin-ids array. | 1 | 3 | 0 | I have added my bot to a group chat, now for few commands I need to give access only to the group admin, so is it possible to identify if the message sender is admin of the group?
I am using python-telegram-bot library | Telegram Bot: Can we identify if message is from group admin? | -0.066568 | 0 | 1 | 6,220 |
48,387,602 | 2018-01-22T17:59:00.000 | 4 | 0 | 0 | 0 | python,tensorflow,keras | 48,448,962 | 1 | true | 0 | 0 | This is a question that's important in multi-task learning where you have multiple loss functions, a shared neural network structure in the middle, and inputs that may not all be valid for all loss functions.
You can pass in a binary mask which is 1 or 0 for each of your loss functions, in the same way that you pass in the labels. Then multiply each loss by its corresponding mask. The derivative of 1x is just dx, and the derivative of 0x is 0. You end up zeroing out the gradient in the appropriate loss functions. Virtually all optimizers are additive optimizers, meaning you're summing the gradients, so adding a zero is a null operation. Your final loss function should be the sum of all your other losses.
I don't know much about Keras. Another solution is to change your loss function to use the labels only: L = cross_entropy * (label / (label + 1e-6)). That term will be either almost 0 or almost 1. Close enough for government work, and neural networks at least. This is what I actually used the first time, before I realized it was as simple as multiplying by an array of mask values.
Another solution to this problem is to us tf.where and tf.gather_nd to select only the subset of labels and outputs that you want to compare and then pass that subset to the appropriate loss function. I've actually switched to using this method rather than multiplying by a mask. But both work. | 1 | 0 | 1 | I have a neural net with two loss functions, one is binary cross entropy for the 2 classes, and another is a regression. Now I want the regression loss to be evaluated only for class_2, and return 0 for class_1, because the regressed feature is meaningless for class_1.
How can I implement such an algorithm in Keras?
Training it separately on only class_1 data doesn't work because I get nan loss. There are more elegant ways to define the loss to be 0 for one half of the dataset and mean_square_loss for another half? | Multi-Task Learning: Train a neural network to have different loss functions for the two classes? | 1.2 | 0 | 0 | 2,069 |
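A hedged Keras/TensorFlow sketch of the masking idea described above, packing the regression target and a 0/1 validity mask into y_true; the output names in the commented compile call are placeholders, not from the original answer:
import tensorflow as tf

def masked_mse(y_true, y_pred):
    # y_true[:, 0] = regression target, y_true[:, 1] = 1 for class_2, 0 for class_1
    target, mask = y_true[:, 0:1], y_true[:, 1:2]
    sq_err = tf.square(target - y_pred) * mask          # zero loss where mask == 0
    return tf.reduce_sum(sq_err) / (tf.reduce_sum(mask) + 1e-6)

# model.compile(optimizer="adam",
#               loss={"class_out": "binary_crossentropy", "reg_out": masked_mse})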