Column schema (name: type, value or length range):
Q_Id: int64, 2.93k to 49.7M
CreationDate: stringlengths, 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: stringlengths, 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: stringlengths, 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: stringlengths, 25 to 6.53k
Title: stringlengths, 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
43,457,337
2017-04-17T18:33:00.000
0
0
1
0
0
css,ipython-notebook,jupyter-notebook,jupyter
0
67,893,531
0
2
0
false
1
0
The /custom/custom.css stopped working for me when I generated a config file, but if anyone stumbles onto this problem too, the solution is to uncomment the line c.NotebookApp.extra_static_paths = [] in the jupyter_notebook_config.py file and add "./custom/" - or whatever path you chose for your custom css - inside the brackets. P.S.: OS is Linux Manjaro 5.12 and Jupyter Notebook version is 6.3.0.
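A minimal sketch of the config change described in the answer above, assuming the standard config file generated by jupyter notebook --generate-config; the "./custom/" path is just the example used in the answer:

```python
# jupyter_notebook_config.py (created with: jupyter notebook --generate-config)
# Uncomment the setting and point it at the directory that holds your custom.css.
c.NotebookApp.extra_static_paths = ["./custom/"]
```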
2
4
0
0
I'm using Jupyter 4.3.0. I find that when I update my ~/.jupyter/custom/custom.css, the changes are not reflected in my notebook until I kill jupyter-notebook and start it again. This is annoying, so how can I make Jupyter Notebook recognize the custom.css file changes without completely restarting the notebook?
Jupyter reload custom.css?
0
0
1
0
0
643
43,466,644
2017-04-18T07:54:00.000
-2
0
1
0
0
python,arrays
0
43,467,467
0
4
0
false
0
0
If your arrays are plain Python lists, simply define an empty list at the beginning and append each item to it: foo = []; then, for i in range(14): ... foo.append(tab)
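A minimal sketch of the same idea using numpy, since the question asks for a (4, 3, 14) array; the np.zeros((4, 3)) placeholder stands in for whatever 2D array each loop iteration really produces:

```python
import numpy as np

layers = []                         # will hold fourteen (4, 3) arrays
for i in range(14):
    tab = np.zeros((4, 3))          # placeholder for the 2D array built in this iteration
    layers.append(tab)

result = np.stack(layers, axis=-1)  # shape (4, 3, 14): each 2D array is its own "layer"
print(result.shape)
```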
1
6
1
0
I have a project in which there is a for loop running about 14 times. In every iteration, a 2D array is created with this shape (4,3). I would like to concatenate those 2D arrays into one 3D array (with the shape of 4,3,14) so that every 2D array would be in different "layer". How should that be implemented in Python?
how to combine 2D arrays into a 3D array in python?
0
-0.099668
1
0
0
8,835
43,470,642
2017-04-18T11:16:00.000
0
0
0
0
0
python-2.7,wit.ai
0
43,544,729
0
1
0
false
0
0
Since the new release of Messenger, you can convert speech into text, so if you are developing for Messenger or another app with good voice-to-text, you can rely on the app instead of trying to do it yourself. In the end you're going to have just text inputs, but people will be able to convert their speech into text.
1
0
0
0
I'm trying to build a chat bot using wit.ai which will recognize speech and convert it into text in the chat bot. Is it possible with the GUI of wit.ai to make such a chat bot? I have actually converted the voice into text, but I am facing difficulty integrating the voice input with the chat bot. How can I do this?
how to make speech recognition chat bot using wit.ai?
1
0
1
0
0
771
43,483,717
2017-04-18T23:14:00.000
0
0
0
0
0
python,sas,regression,random-forest,dummy-variable
0
48,334,045
0
2
0
false
0
0
You may google the SAS/STAT manual / user guide. Check out any major regression procedure there that supports the CLASS statement. Underneath CLASS it details the REFERENCE... option. They all detail how a design matrix is fashioned. The way you fed your 100 dummies must have been obvious enough to trigger JMP to roll them back into a temporary class variable that re-engineers them back into one single variable. If you want to know exactly how JMP is triggered to do that rollback, go to the JMP website and open a technical support ticket. But mechanically I am confident this is what happens.
1
1
1
0
I am building a regression model with about 300 features using Python Sklearn. One of the features has over 100 categories and I end up having ~100 dummy columns for this feature. Now each of the dummy columns has its own coefficient, or a feature ranking score (if using Random Forest or xgb) - which is something I don't like. However, when I create the same model in SAS JMP, it gives me one single feature score for the feature with 100 categories - it apparently handles categories automatically. Can someone tell me how SAS JMP combines the coefficients/feature importances of 100 dummy variables into one metric, and how I can achieve the same in Python?
Combining effects of dummy variables in a regression model
0
0
1
0
0
284
43,520,574
2017-04-20T13:12:00.000
0
1
0
1
0
pytest,python-appium,aws-device-farm
0
44,378,193
0
1
0
false
1
0
I work for the AWS Device Farm team. This seems like an old thread but I will answer so that it is helpful to everyone in future. Device Farm parses the tests in a random order. In the case of Appium Python it will be the order received from pytest --collect-only. This order may change across executions. The only way to guarantee an order right now is to wrap all the test calls in a new test which will be the only test called. Although not the prettiest solution, this is the only way to achieve this today. We are working on bringing more parity between your local environment and Device Farm in the coming weeks.
1
1
0
0
I'm using Appium-Python with AWS Device Farm, and I noticed that AWS runs my tests in a random order. Since my tests are partly dependent on each other, I need to find a way to tell AWS to run my tests in a specific order. Any ideas about how I can accomplish that? Thanks.
AWS Device Farm- Appium Python - Order of tests
0
0
1
0
0
251
43,531,607
2017-04-20T23:40:00.000
1
1
0
0
1
python,numpy,hash
0
43,531,728
0
1
0
true
0
0
(x + y) % z == ((x % z) + (y % z)) % z, so you can take the modulus before doing the sum: cast a and x to uint64 (multiplying two uint32 values can never overflow uint64), compute h = (a * x) % p + b, and return (h - p) if h > p else h (alternatively: return h % p).
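A hedged sketch of those steps with numpy's fixed-width integers; the function name, the sample parameters and the final mod-m reduction of the hash family h(a,b)(x) = ((a*x + b) mod p) mod m are my additions:

```python
import numpy as np

def universal_hash(x, a, b, p, m):
    # a, b < p and x fits in uint32, so a * x fits in uint64 and cannot overflow.
    x64, a64, b64, p64 = (np.uint64(v) for v in (x, a, b, p))
    h = ((a64 * x64) % p64 + b64) % p64   # (a*x + b) mod p, computed without overflow
    return int(h) % m                     # final mod m of the hash family

print(universal_hash(x=123456789, a=2654435761, b=97, p=4294967311, m=1024))
```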
1
1
1
0
For one of my pet projects I would like to create an arbitrary number of different integer hashes. After a bit of research I figured that universal hashing is the way to go. However, I struggle with the (numpy) implementation. Say I'm trying to use the hash family h(a,b)(x) = ((a * x + b) mod p) mod m and my x can be anywhere in the range of uint32. Then choosing p >= max(x) and a, b < p means that in the worst case a * x + b overflows not only uint32 but also uint64. I tried to find an implementation which solves this problem but only found clever ways to speed up the modulo operation and nothing about overflow. Any suggestions for how to implement this are greatly appreciated. Thanks :-)
Integer overflow in universal hashing implementation
0
1.2
1
0
0
189
43,546,529
2017-04-21T15:25:00.000
3
0
1
0
0
python,python-3.x
0
43,546,641
0
3
0
false
0
0
If you want some kind of state persistence then your options are limited: (1) save the state into a file as you suggest in your question (either a text file or a spreadsheet, but a spreadsheet is harder to do); (2) change your concept so that instead of "running the script" multiple times, the script is always running, but you give it some kind of signal (keyboard input, a GUI with a button, etc.) to let it know to increment the counter; (3) split your script into two halves, a server script and a client. The server would listen for connections from the client and keep track of the current count; the client would then connect and tell the server to increment the count, and if needed the server could also send the previous or new count back to the client for some kind of output. This would prevent having many writes to disk, but the count would be lost if the server process is closed.
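A minimal sketch of option (1), a counter persisted to a text file; the file name is a placeholder:

```python
import os

COUNTER_FILE = "counter.txt"   # placeholder file name

def next_run_number():
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read().strip() or 0)
    count += 1
    with open(COUNTER_FILE, "w") as f:
        f.write(str(count))
    return "%03d" % count      # zero-padded, e.g. "001", "002", ...

print(next_run_number())
```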
1
3
0
0
I am trying to write a script that will have a count that starts at 001 and increases by one every time that the script is run. I just need some help getting this started: what can I do to set it up so that it knows where to start from every time? Is there a way that I can build this into the script? My bad ideas about how to do this so far: have the number (001) exported to a text file, and have the script change that number at the end of every run (001 + 1); this number will also be in a spreadsheet, so have the script read the value from the spreadsheet and add one to that value. I feel like there has to be an easier way, and I'd prefer a way that was self-contained within the script. Can someone help point me in the right direction? Thanks for your input.
Python script to input the next number in a sequence every time it runs.
0
0.197375
1
0
0
545
43,549,448
2017-04-21T18:13:00.000
0
0
0
0
0
python,image,pygame
0
43,549,524
0
2
0
false
0
1
You can also query the size of the image. Adjust the corner coordinates by half of the size in each direction.
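A small sketch of that size/Rect arithmetic, assuming a hypothetical image file and top-left position:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
image = pygame.image.load("player.png")   # hypothetical image file

top_left = (100, 50)                      # the corner coordinates you already have
width, height = image.get_size()
center = (top_left[0] + width // 2, top_left[1] + height // 2)

# Or let pygame do the arithmetic through a Rect:
rect = image.get_rect(topleft=top_left)
print(center, rect.center)
```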
1
0
0
0
I am trying to move an image from its center using pygame. I loaded the image using pygame, but I only have the top corner coordinates of the image. How do I get the coordinates of the center?
Finding the position of center using pygame
0
0
1
0
0
1,167
43,563,447
2017-04-22T19:05:00.000
1
0
1
0
0
python,json,r,bigdata
0
43,563,552
0
1
0
false
0
0
The jsonlite R package supports streaming your data. In that way there is no need to read all the json data into memory. See the documentation of jsonlite for more details, the stream_in function in particular. Alternatively: I would dump the json into a mongo database and process the data from that. You need to install mongodb, and start running mongod. After that you can use mongoimport to import the json file into the database. After that, you can use the mongolite package to read data from the database.
1
1
1
0
I was trying to do some exploratory analyses on a large (2.7 GB) JSON dataset using R, however, the file doesn't even load in the first place. When looking for solutions, I saw that I could process the data in smaller chunks, namely by iterating through the larger file or by down-sampling it. But I'm not really sure how to do that with a JSON dataset. I also thought of converting the original JSON data into .csv, but after having a look around that option didn't look that helpful. Any ideas here?
How to iterate/ loop through a large (>2GB) JSON dataset in R/ Python?
0
0.197375
1
0
0
609
43,579,175
2017-04-24T03:07:00.000
0
0
1
0
0
python,python-3.x,numpy
0
43,579,231
0
3
0
false
0
0
random.choices(['a', 'b'], weights=[0.8, 0.2], k=2)
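A slightly fuller version of that one-liner; note that k is the number of draws, so k=1 picks a single element:

```python
import random

arr = ['a', 'b']

# One draw, with 'a' chosen roughly 80% of the time.
pick = random.choices(arr, weights=[0.8, 0.2], k=1)[0]
print(pick)

# Quick sanity check over many draws: the fraction of 'a' should be close to 0.8.
draws = random.choices(arr, weights=[0.8, 0.2], k=10000)
print(draws.count('a') / len(draws))
```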
1
1
0
0
I want to write a program with the following requirement: arr = ['a', 'b']. How do I write a Python program which chooses 'a' from arr x% of the time (for example 80% of the time)? I have no idea how I should start. Please help. I know random.choice(arr), but it gives a uniformly random choice; I cannot make it biased.
arr = [a,b] choose a, x% of time
0
0
1
0
0
62
43,580,876
2017-04-24T06:08:00.000
0
0
0
0
0
python-2.7,ncurses,curses,python-curses
0
43,591,335
0
2
0
false
0
0
putwin() and getwin() are the functions for saving and restoring an individual window, and they're available in Python.
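A short sketch of those two calls; the helper names and the file path are mine, and note that putwin() writes a binary representation of the window rather than a plain-text dump:

```python
import curses

def save_window(window, path="window.dump"):
    # putwin() serialises the window's contents and attributes into the file object.
    with open(path, "wb") as f:
        window.putwin(f)

def load_window(path="window.dump"):
    # getwin() recreates a window object from data written by putwin().
    with open(path, "rb") as f:
        return curses.getwin(f)
```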
1
1
0
0
I have created a window in curses and created my call flow (data): window = curses.newwin(2500, 2500, 0, 0). How should I copy the window content (an exact replica) to a file?
how to write the curses window content to the file in python?
0
0
1
0
0
971
43,592,879
2017-04-24T16:04:00.000
4
0
1
0
0
python,anaconda,spyder
0
43,593,102
1
8
0
false
0
0
In Preferences, select Python Interpreter. Under Python Interpreter, change from "Default" to "Use the following Python interpreter". The path there should be the default Python executable. Find your Python 2.7 executable and use that.
2
42
0
0
I am using Python 3.6 in Anaconda Spyder on my Mac, but I want to change it to Python 2.7. Can anyone tell me how to do that?
How to change python version in anaconda spyder
0
0.099668
1
0
0
189,338
43,592,879
2017-04-24T16:04:00.000
4
0
1
0
0
python,anaconda,spyder
0
54,063,608
1
8
0
false
0
0
Set python3 as the main version in the terminal: ln -sf python3 /usr/bin/python. Install pip3: apt-get install python3-pip. Update Spyder: pip install -U spyder. Enjoy.
2
42
0
0
I am using Python 3.6 in Anaconda Spyder on my Mac, but I want to change it to Python 2.7. Can anyone tell me how to do that?
How to change python version in anaconda spyder
0
0.099668
1
0
0
189,338
43,596,345
2017-04-24T19:34:00.000
0
0
0
0
0
python,kernel-density
0
43,597,467
0
1
1
false
0
0
I did find a workaround: transform the dataframe's columns into one single column with df.stack().
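A minimal sketch of that workaround with made-up data of the same (544, 33) shape; stack() collapses all columns into one long Series, which kdeplot accepts:

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Made-up frame: one column of normalized prices per seller.
df = pd.DataFrame(np.random.rand(544, 33),
                  columns=["shop%d" % i for i in range(1, 34)])

ax = sns.kdeplot(df.stack())   # one kernel density plot over all columns at once
```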
1
0
1
0
I need to make a single Gaussian kernel density plot of a dataframe with multiple columns, which includes all columns of the dataframe. Does anyone know how to do this? So far I only found how to draw a Gaussian kernel plot of a single column with seaborn: ax = sns.kdeplot(df['shop1']). However, neither ax = sns.kdeplot(df) nor ax = sns.kdeplot(df[['shop1', 'shop2']]) works. Otherwise, is there a workaround where I could transform the dataframe with shape (544, 33) to (17952, 2) by appending the columns to each other? The dataframe includes normalized prices for one product, where each column represents a different seller and the rows indicate the date and time of the prices.
python: one kernel density plot which includes multiple columns in a single dataframe
0
0
1
0
0
465
43,607,466
2017-04-25T09:58:00.000
4
1
0
0
0
python,python-2.7,amazon-web-services,amazon-ec2
0
43,607,778
0
1
1
false
1
0
I think you need to profile your code locally and ensure it really is CPU bound. Could it be that time is spent on the network or accessing disk (e.g. reading the image to start with). If it is CPU bound then explore how to exploit all the cores available (and 25% sounds suspicious - is it maxing out one core?). Python can be hard to parallelise due to the (in)famous GIL. However, only worry about this when you can prove it's a problem, profile first!
1
1
0
0
I'm using a g2.2xlarge instance on Amazon. I have a function that takes 3 minutes to run on my laptop, which is very slow. However, when running it on EC2 it takes the same time, sometimes even more. Looking at the statistics, I noticed EC2 uses at best 25% of CPU. I parallelized my code; it's better, but I get the same execution time between my laptop and EC2. For my function: I have an image as input, and I run my function 2 times (image with and without image processing), which I managed to run in parallel. I then extract 8 text fields from that image using 2 machine learning algorithms (faster-rcnn for field detection plus clstm for text reading), and then the text is displayed on my computer. Any idea how to improve performance (processing time) on EC2?
Why amazon EC2 is as slow as my machine when running python code?
0
0.664037
1
0
0
1,356
43,622,277
2017-04-25T22:54:00.000
0
0
0
0
0
python,django,cassandra,permissions,cqlengine
1
43,630,653
0
2
0
false
1
0
Is the system_auth keyspace RF the same as the number of nodes? Did you already try to run a repair on the system_auth keyspace? If not, do so. To me this sounds like a consistency issue.
2
0
0
0
I use cqlengine with Django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that the user has all the permissions, and sometimes it works; if the user did not have the permissions it should always throw a no-permission error. So what might be the reasons behind this, and how do I find the problem?
Cassandra: occasional permission errors
0
0
1
1
0
292
43,622,277
2017-04-25T22:54:00.000
0
0
0
0
0
python,django,cassandra,permissions,cqlengine
1
43,645,204
0
2
0
false
1
0
If you have authentication enabled, make sure you set an appropriate RF for the keyspace system_auth (it should be equal to the number of nodes). Secondly, make sure the user you have created has the following permissions on all keyspaces: {'ALTER', 'CREATE', 'DROP', 'MODIFY', 'SELECT'}. If the user is a superuser, make sure you add 'AUTHORIZE' as a permission along with the ones listed above. Thirdly, you can set off a read-repair job for all the data in the system_auth keyspace by running: CONSISTENCY ALL; SELECT * FROM system_auth.users; SELECT * FROM system_auth.permissions; SELECT * FROM system_auth.credentials; Hope this resolves the issue!
2
0
0
0
I use cqlengine with Django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that the user has all the permissions, and sometimes it works; if the user did not have the permissions it should always throw a no-permission error. So what might be the reasons behind this, and how do I find the problem?
Cassandra: occasional permission errors
0
0
1
1
0
292
43,624,308
2017-04-26T03:07:00.000
0
0
0
0
0
python,nlp,classification,feature-extraction,sentiment-analysis
0
43,625,960
0
1
0
false
0
0
I think you will find that bag-of-words is not so naive. It's actually a perfectly valid way of representing your data to give it to an SVM. If that's not giving you enough accuracy you can always include bigrams, i.e. word pairs, in your feature vector instead of just unigrams.
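A small sketch of a unigram-plus-bigram bag of words feeding a linear SVM; the two example texts and their labels are made up:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

texts = ["the delivery was late and nobody answered my emails",   # made-up examples
         "great service, thank you very much"]
labels = [1, 0]                                                    # 1 = complaint, 0 = non-complaint

vectorizer = CountVectorizer(ngram_range=(1, 2))   # unigrams plus bigrams (word pairs)
X = vectorizer.fit_transform(texts)

clf = LinearSVC()
clf.fit(X, labels)
```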
1
1
1
0
I have a corpus of around 6000 texts with comments from social networks (FB, Twitter), news content from general and regional news and magazines, etc. I have gone through the first 300 of these texts and tagged each of them as either a customer complaint or a non-complaint. Instead of the naive bag-of-words approach, I am wondering how I can accurately extract the features of these complaint and non-complaint texts. My goal is to use SVM or another classification algorithm/library such as Liblinear to most accurately classify the rest of these texts as either complaint or non-complaint with the current training set of 300 texts. Is this procedure similar to sentiment analysis? If not, where should I start?
How to extract COMPLAINT features from texts in order to classify complaints from non-complaints texts
0
0
1
0
0
319
43,631,693
2017-04-26T10:33:00.000
6
0
0
1
0
python,hadoop,airflow
0
53,409,092
0
5
0
false
0
0
As mentioned by Pablo and Jorge, pausing the DAG will not stop the task from being executed if the execution has already started. However, there is a way to stop a running task from the UI, but it's a bit hacky. When the task is in the running state you can click on CLEAR; this will call job.kill(), the task will be set to shut_down and moved to up_for_retry immediately, hence it is stopped. Clearly Airflow did not mean for you to clear tasks in the running state; however, since Airflow did not disable it either, you can use it as I suggested. Airflow meant CLEAR to be used with failed, up_for_retry etc... Maybe in the future the community will use this bug(?) and implement it as a functionality with a "shut down task" button.
2
55
0
0
How can I stop/kill a running task in the Airflow UI? I am using LocalExecutor. Even if I use CeleryExecutor, how can I kill/stop the running task?
How to stop/kill Airflow tasks from the UI
0
1
1
0
0
83,312
43,631,693
2017-04-26T10:33:00.000
11
0
0
1
0
python,hadoop,airflow
0
50,707,968
0
5
0
false
0
0
from airflow gitter (@villasv) " Not gracefully, no. You can stop a dag (unmark as running) and clear the tasks states or even delete them in the UI. The actual running tasks in the executor won't stop, but might be killed if the executor realizes that it's not in the database anymore. "
2
55
0
0
How can I stop/kill a running task in the Airflow UI? I am using LocalExecutor. Even if I use CeleryExecutor, how can I kill/stop the running task?
How to stop/kill Airflow tasks from the UI
0
1
1
0
0
83,312
43,633,459
2017-04-26T11:53:00.000
0
1
0
0
0
python,linux,security,ssh
0
43,633,928
0
1
0
false
0
0
Why don't you just add an SSH daemon on port 8443 and use ssh-agent forwarding? That way the private key never gets written down on P and you don't have to write and maintain your own program.
1
1
0
0
A Python program P runs on server S1, listening on port 8443. Some other services can send an (id_rsa, ip) pair to P. P could use this pair to make an SSH connection to the ip (creating an ssh process). How do I protect the id_rsa file even if the machine S1 is cracked? How do I keep the root user from getting the id_rsa content (it seems ssh can only use -i keyfile)? The main problem is that P must save the id_rsa file to disk so that ssh can use it to connect to the ip.
how to design this security demands?
1
0
1
0
0
48
43,660,353
2017-04-27T14:14:00.000
1
0
1
0
0
python,class,module,readability
0
43,661,711
0
2
0
false
0
0
You can do as you please. If the code for your classes is short, putting them all in your main script is fine. If they're longish, then splitting them out into separate files is a useful organizing technique (that has the added benefit of the code in them not getting recompiled into byte-code every time the script they are used in is run). Putting them in modules also encourages their reuse since they're no longer mixed in with a lot of other unrelated stuff. Lastly, they may be useful because modules are essentially singleton objects, meaning that there's only one instance of them in your program, which is created the first time the module is imported. Later imports in other modules will just reuse the existing instance. This can be a nice way to do initialization that only has to be done once.
1
0
0
0
As I learn more about Python I am starting to get into the realm of classes. I have been reading on how to properly call a class and how to import the module or package.module, but I was wondering if it is really needed to do this. My question is this: is it required to move your class to a separate module for a functional reason, or is it solely for readability? I can perform all the same tasks using functions defined within my main module, so what is the need for the class, if any, outside of readability?
Is there a reason to create classes in separate modules?
1
0.099668
1
0
0
201
43,670,334
2017-04-28T01:07:00.000
0
0
0
0
0
python,database-design,web-scraping
0
43,670,482
0
1
0
false
0
0
In Windows you can use Task Scheduler, or in Linux crontab. You can configure these to run Python with your script at set intervals of time. This way you don't have a Python script continuously running, which prevents a hangup in a single call from impacting all subsequent attempts to scrape or store to the database. To store the data there are many options: a flat file (write to a text file), a Python binary format (use shelve or pickle), or an actual database like MySQL or PostgreSQL and many more. Google for info. Additionally, an ORM like SQLAlchemy may make controlling, querying, and updating your database a little easier, since it will handle all tables as objects and create the SQL itself, so you don't need to code all queries as strings.
1
0
1
0
I have written a piece of python code that scrapes the odds of horse races from a bookmaker's site. I wish to now: Run the code at prescribed increasingly frequent times as the race draws closer. Store the scraped data in a database fit for extraction and statistical analysis in R. Apologies if the question is poorly phrased/explained - I'm entirely self taught and so have no formal training in computer science. I don't know how to tell a piece of python code to run itself every say n-minutes and I also have no idea how to correctly approach building such a data base or what factors I should be considering. Can someone point me in the right direction for getting started on the above?
How to run python code at prescribed time and store output in database
0
0
1
1
0
73
43,684,654
2017-04-28T16:13:00.000
2
0
1
0
0
python,machine-learning
0
43,684,698
0
2
0
false
0
0
If you have the lists of typical Chinese and English names and the problem is performance only, I suggest you convert the lists into sets and then ask for membership in both sets as this is much faster than finding out whether an element is present in a large list.
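A tiny sketch of the set-lookup idea; the three surnames are placeholders for the real list:

```python
chinese_last_names = {"Lee", "Wang", "Zhang"}   # placeholder, load your full list here

def is_chinese_name(full_name):
    last = full_name.split()[-1]
    # Set membership is O(1) on average, versus a linear scan for a list.
    return last in chinese_last_names

print(is_chinese_name("Bruce Lee"))   # True
```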
1
2
0
0
Given a bunch of names, how can we find out which are Chinese names and which are English names? For the Chinese names, I build a list of Chinese last names to identify them. For example, for Bruce Lee, Lee is a Chinese last name, so we regard Bruce Lee as a Chinese name. However, the list of Chinese last names is large. Is there any better way to do it? If you are not familiar with Chinese names, you can explain how you would distinguish English names from some other names, like French names, Italian names, etc.
How to recognize Chinese or English name using python
0
0.197375
1
0
0
1,206
43,696,735
2017-04-29T14:20:00.000
0
0
1
1
0
python,windows,python-3.x,automation,windows-10
0
59,797,481
0
4
0
false
0
0
You can use this newer package: it's called winapps and it's used for searching, modifying, and uninstalling apps. Its install command in the Windows cmd is pip install winapps.
1
2
0
0
So, as you may know, there are certain apps on Windows that can be installed from the app store and are classified as trusted Windows apps. I am not sure, but I think these do not use the classic .exe format. I am writing a Python script to automate some things when I start my PC, and I need to start a certain Windows app, but I don't know how to do this, as I don't know what I need to start, and I also do not know where these files are located. Can anyone help?
How can I open a Windows 10 app with a python script?
0
0
1
0
0
11,049
43,709,095
2017-04-30T17:10:00.000
-1
0
1
0
0
python,python-3.x
0
43,709,925
0
3
0
false
0
0
Use pip install random. That works with every Python distribution.
2
2
0
0
I am trying to download the random module and was wondering, if I copy some code and put it in a file editor, how do I go about installing it through pip? I placed the code in Notepad and saved it on my desktop as random.py. What do I do now so that I can get this installed through Anaconda? I tried pip install random.py but it says the package is not found. Is there perhaps a zip file of the random module that I can install?
Downloading Random.py Using Anaconda
0
-0.066568
1
0
0
11,349
43,709,095
2017-04-30T17:10:00.000
0
0
1
0
0
python,python-3.x
0
43,709,432
0
3
0
false
0
0
If using python3, it will simply be pip3 install random as @Remi stated.
2
2
0
0
I am trying to download the random module and was wondering, if I copy some code and put it in a file editor, how do I go about installing it through pip? I placed the code in Notepad and saved it on my desktop as random.py. What do I do now so that I can get this installed through Anaconda? I tried pip install random.py but it says the package is not found. Is there perhaps a zip file of the random module that I can install?
Downloading Random.py Using Anaconda
0
0
1
0
0
11,349
43,711,437
2017-04-30T21:20:00.000
1
1
1
0
0
python,math,mathematical-expressions
0
43,711,560
0
1
0
false
0
0
If you trust the user, you could input a string, the complete function expression as it would be done in Python, then call eval() on that string. Python itself evaluates the expression. However, the user could use that string to do many things in your program, many of them very nasty such as taking over your computer or deleting files. If you do not trust the user, you have much more work to do. You could program a "function builder", much like the equation editor in Microsoft Word and similar programs. If you "know just the very basic of Python programming" this is beyond you. You might be able to use a search engine to find one for Python. One more possibility is to write your own evaluator. That would also be beyond you, and you also might be able to find one you can use. If you need more detail, show some more work of your own then ask.
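A hedged sketch of the eval() route with a restricted namespace (the caveat about trusting the user still applies); the helper name and the sample expression are mine:

```python
import math

def make_function(expression):
    # Turns a user-typed expression in x, e.g. "x**2 - math.cos(x)", into f(x).
    # Only do this if you trust the user: eval() can execute arbitrary code.
    scope = {"math": math, "__builtins__": {}}
    return lambda x: eval(expression, scope, {"x": x})

f = make_function("x**3 - 2*x - 5")
print(f(2.0))   # 8 - 4 - 5 = -1.0
```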
1
1
0
0
I have an assignment in numerical analysis that consists of implementing algorithms for root-finding problems. Among them is Newton's method, which calculates values of a function f(x) and its first derivative on each iteration. For that method I need a way for the user of my application to enter a (mathematical) function and save it as a variable, and then use that to get values of the function at different points. I know just the very basics of Python programming and maybe this is pretty easy, but how can I do it?
Mathematical functions on Python 3
0
0.197375
1
0
0
239
43,733,387
2017-05-02T08:29:00.000
2
0
0
1
1
python,django,celery
0
43,733,656
0
5
0
false
1
0
There is the django-celery-beat package, which allows you to dynamically add tasks to the database; they are then executed as you defined in the database (e.g. every 5 minutes). But currently it has a bug which causes the task not to be appended to the Celery queue when added to the database. One suggested workaround is to restart the Celery process every time a new task is added. I solved it with Dan Bader's schedule package. I scheduled a task every minute which checks the database for tasks that need to be executed in the current minute. Then I start each of these tasks in a new thread. Hope this helps.
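A minimal sketch of the schedule-package approach mentioned above; the task body and the time are placeholders:

```python
import time
import schedule

def run_user_task():
    print("doing the daily work")   # placeholder for the real task

# The answer checks the database every minute; here a fixed daily time for illustration.
schedule.every().day.at("09:30").do(run_user_task)

while True:
    schedule.run_pending()
    time.sleep(1)
```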
1
6
0
0
I need to build an app in Django that lets the user do some task every day at the time they specify at runtime. I have looked at Celery but couldn't find anything that will help. I found apply_async and I can get the task to execute once at the specified time, but not recurrently. I am missing something but don't know what. Please suggest how I can achieve this.
How to dynamically schedule tasks in Django?
0
0.07983
1
0
0
6,721
43,779,887
2017-05-04T10:07:00.000
0
0
1
0
1
python,excel,openpyxl
0
43,783,883
0
1
0
false
0
0
Sounds like you might want to take advantage of the type guessing in openpyxl. If so, open the workbook with guess_types=True and see if that helps. NB. this feature is more suited to working with text sources like CSV and is likely to be removed in future releases.
1
0
0
0
I process a report that contains date fields. There are some instances where the date seen in the cell is not a number (how do I know? I use the ISNUMBER() function in Excel to check whether a date value is really a number). Using a recorded macro, for all the date columns, I apply the text-to-columns function in Excel to make these dates pass the ISNUMBER() validation, and then I continue further processing with my Python script. But now I need to replicate the text-to-columns action in Python openpyxl. I naively tried int(cell.value) but this didn't work. So to sum up the question: is there a way in Python to convert a date represented as text to a date represented as a number?
How to convert date formatted as string to a number in excel using openpyxl
1
0
1
1
0
940
43,787,699
2017-05-04T15:56:00.000
1
1
0
1
1
python,amazon-web-services,amazon-ec2,oauth-2.0,google-analytics-api
0
48,089,379
0
1
0
false
0
0
I am not sure why this is happening, but I have a list of steps which might help you. Check if this issue is caused by the Google Analytics API version; Google generally deprecates previous versions of their APIs. I am guessing that you are running this code via cron on your EC2 server; make sure that you include the path to the folder where the .dat file is. Check whether you have the latest credentials in the .dat file; authentication to the API happens through the .dat file. Hope this solves your issue.
1
6
0
0
I have an AWS EC2 machine that has been running nightly google analytics scripts to load into a database. It has been working fine up for months until this weekend. I have not made any changes to the code. These are the two errors that are showing up in my logs: /venv/lib/python3.5/site-packages/oauth2client/_helpers.py:256: UserWarning: Cannot access analytics.dat: No such file or directory warnings.warn(_MISSING_FILE_MESSAGE.format(filename)) Failed to start a local webserver listening on either port 8080 or port 8090. Please check your firewall settings and locally running programs that may be blocking or using those ports. Falling back to --noauth_local_webserver and continuing with authorization. It looks like it is missing my analytics.dat file but I have checked and the file is in the same folder as the script that calls the GA API. I have been searching for hours trying to figure this out but there are very little resources on the above errors for GA. Does anyone know what might be going on here? Any ideas on how to troubleshoot more?
Google analytics .dat file missing, falling back to noauth_local_webserver
1
0.197375
1
0
0
913
43,790,494
2017-05-04T18:41:00.000
0
0
1
0
0
python-3.x
0
43,790,837
0
2
0
false
0
0
I would recommend not doing anything. I don't believe in editing any password submissions, except for sanitizing to prevent security risks.
2
1
0
0
From experience I know that sometimes, while copying and pasting a password into the password field, a white space is copied along with the password and this causes errors (I don't know how common this is, but it happens). Now I'm learning Python (no previous programming experience) and came across the rstrip(), lstrip() and strip() methods. What would be the "right" way to handle such situations? Any insight is highly appreciated.
White spaces while copying a password into the password field
0
0
1
0
0
367
43,790,494
2017-05-04T18:41:00.000
1
0
1
0
0
python-3.x
0
43,790,613
0
2
0
false
0
0
I would use strip() to strip both sides :-) I think it's very annoying when you copy paste a password and it's not accepted because you mis-copied with some extra blank characters.
2
1
0
0
From experience I know that sometimes, while copying and pasting a password into the password field, a white space is copied along with the password and this causes errors (I don't know how common this is, but it happens). Now I'm learning Python (no previous programming experience) and came across the rstrip(), lstrip() and strip() methods. What would be the "right" way to handle such situations? Any insight is highly appreciated.
White spaces while copying a password into the password field
0
0.099668
1
0
0
367
43,791,236
2017-05-04T19:24:00.000
5
0
0
0
0
python-3.x,amazon-s3,boto,boto3
0
43,791,579
0
1
0
true
1
0
There is no way to append data to an existing object in S3. You would have to grab the data locally, add the extra data, and then write it back to S3.
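A short boto3 sketch of that read-modify-write cycle; the bucket, key and appended text are placeholders:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "logs/data.txt"   # placeholder bucket and key

# Read the existing object, append locally, and write the whole thing back.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
body += b"new line of text\n"
s3.put_object(Bucket=bucket, Key=key, Body=body)
```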
1
5
0
0
I know how to write and read from a file in S3 using boto. I'm wondering if there is a way to append to a file without having to download the file and re-upload an edited version?
Appending to a text file in S3
1
1.2
1
0
1
4,234
43,824,204
2017-05-06T18:51:00.000
0
0
1
1
0
python,dulwich
0
46,393,625
0
1
0
false
0
0
You can "stage" a file that no longer exists, which will remove it. Alternatively, there is also a dulwich.porcelain.remove function that provides the equivalent of git rm (i.e. removes the file if it exists and then unversions it).
1
0
0
0
With dulwich I can stage a file using repo.stage, but how do I remove a file ? I am looking for the equivalent of git rm
How do I remove a file from a git repository with dulwich?
0
0
1
0
0
75
43,836,766
2017-05-07T21:12:00.000
0
0
0
1
0
python,c++
0
43,836,803
0
1
0
true
0
0
Java has a native keyword that allows functions from c++ to be brought into java as methods. Python might have the same feature.
1
4
0
0
I need to connect to a data stream written in C++ with my current program in Python, any advice or resources on how to connect?
Connecting to a data stream with Python
0
1.2
1
0
0
95
43,859,988
2017-05-09T02:17:00.000
0
0
0
0
0
python,mysql,matplotlib
0
43,860,724
0
1
0
true
0
0
You can use the datetime module. Although I use the NOW() function to extract datetimes from MySQL, I believe the format is the same. For instance: import datetime as dt. I put the datetime data into a list named datelist, and then you can use the datetime.strptime function to convert the date format to what you want: dates = [dt.datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in datelist]. Finally, you can put the list named dates on the plot's x-axis. I hope this helps.
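A compact sketch of that conversion; the two sample rows stand in for whatever your MySQL query returns:

```python
import datetime as dt
import matplotlib.pyplot as plt

rows = [("2017-05-08 18:25:10", 123.332),     # placeholder rows fetched from MySQL
        ("2017-05-08 18:30:10", 125.101)]

dates = [dt.datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") for ts, _ in rows]
values = [v for _, v in rows]

plt.plot(dates, values)        # matplotlib accepts datetime objects on the x-axis
plt.gcf().autofmt_xdate()      # tilt the date labels so they stay readable
plt.show()
```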
1
0
1
0
After doing a bit of research I am finding it difficult to find out how to use MySQL timestamps in matplotlib. MySQL fields to plot: X-axis: field 'entered', type timestamp, null NO, default CURRENT_TIMESTAMP, sample 2017-05-08 18:25:10. Y-axis: field 'value', type float(12,6), null NO, sample 123.332. What date format is matplotlib looking for? How do I convert to this format? I found out how to convert from a Unix timestamp to a format that is acceptable to matplotlib; is a Unix timestamp better than the timestamp field type I am using? Should I convert my whole table to Unix timestamps instead? Would appreciate any help!
Python - convert mysql timestamps type to matplotlib and graph
0
1.2
1
1
0
202
43,862,233
2017-05-09T06:03:00.000
0
0
0
0
0
python,django,forms
0
43,864,634
0
2
0
false
1
0
Maybe you can store the first form's values in the session and provide them as initial data for the first form when you are rendering the second form with errors. For example: data = {"f1": request.session['abc'], "f2": request.session["xyz"]}; form1 = abc(initial=data)
1
0
0
0
I have a page that I want to behave like this: First, the user only sees a single form, for the sake of example, lets say it allows the user to select the type of product. Upon submitting this form, a second form (whose contents depend on the product type) appears below it. The user can then either fill out the second form and submit it, or revise and resubmit the first form - either way I want the first form to maintain the user's input (in this case, the product type), even if the second form is submitted. How can I do this cleanly in django? What I am struggling with is preserving the data in the first form: e.g. if the user submits the second form and it has validation errors, when the page displays the first form the product type will be rendered blank but I want the option to remain set to what the user picked. This behaviour isn't mysterious or unexpected, but is not what I want. Also, if the user submits the second form successfully, I would like to redirect so that the first form maintains the selection and the second form is cleared. The best that I've thought of is mucking up the URL with the fields of the first form (admittedly not too many parameters) and storing its state there, or combining both forms into one form object in HTML and responding differently based on the name of the submit button (though I don't see how I could use a redirect to clear the second form and keep the first if I do this). Are there any cleaner, more obvious ways that I'm missing? Thanks.
Retain edit data for one form when submitting a second
1
0
1
0
0
35
43,866,696
2017-05-09T09:56:00.000
0
0
0
0
1
python-3.x,turtle-graphics
0
43,875,556
0
1
0
true
0
1
First, you need the bounds of the rectangle in some form -- it can be the lower left position plus a width and height or it can be the lower left position and the upper right position, etc. (It could even be the formulas of the four lines that make up the rectangle.) Then write a predicate function that tests if an (x,y) position is fully within the rectangle or not. You can simply do a series of comparisons to make sure x is greater than the lower left x and less than the upper right x, and ditto for y. Typically returning True or False. If the predicate returns False, indicating you've touched or crossed some line of the rectangle, then turn around and go in the opposite direction (or some other recovery technique.) You can also consider first using turtle's undo feature to eliminate the move that made you touch the line. If you'd like example code that does the above, please indicate such.
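A minimal sketch of such a predicate; the rectangle bounds and the commented turtle usage are only illustrative, in keeping with the request not to solve the whole problem:

```python
def inside_rectangle(x, y, left, bottom, right, top):
    # True while (x, y) is strictly inside the rectangle.
    return left < x < right and bottom < y < top

# Illustrative use with a turtle t and a 400x300 box centred on the origin:
# new_x, new_y = t.xcor() + step_x, t.ycor() + step_y
# if inside_rectangle(new_x, new_y, -200, -150, 200, 150):
#     t.goto(new_x, new_y)
# else:
#     t.left(180)   # turn around instead of touching the line
```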
1
1
0
0
Our task is to create a turtle that always stays within a rectangle. It would be really great if you could show me how I can make a turtle run away from a line another turtle has created. Please don't fix the problem for me.
How can I make a turtle not touch a line?
0
1.2
1
0
0
220
43,871,444
2017-05-09T13:39:00.000
0
0
1
0
0
python,csv,dataframe,merge
0
43,872,949
0
1
0
false
0
0
The first file is something like:
Timestamp ; Flow1 ; Flow2
2017/02/17 00:05 ; 540 ; 0
2017/02/17 00:10 ; 535 ; 0
2017/02/17 00:15 ; 543 ; 0
2017/02/17 00:20 ; 539 ; 0
CSV file #2:
Timestamp ; DOC ; Temperature ; UV254
2017/02/17 00:14 ; 668.9 ; 15,13 ; 239,23
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31
2017/02/17 00:19 ; 668 ; 15,13 ; 239,43
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01
The output file is supposed to be like:
Timestamp ; DOC ; Temperature ; UV254 ; Flow1 ; Flow2
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31 ; 543 ; 0
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01 ; 539 ; 0
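One possible way to do this join in pandas, assuming hypothetical file names and that the Timestamp column names match once the whitespace around the ';' separators is handled; an inner merge keeps only the moments present in both files, as in the sample output:

```python
import pandas as pd

# Hypothetical file names; both files use ';' as separator, as in the samples above.
flows = pd.read_csv("flows.csv", sep=";", skipinitialspace=True)
chem = pd.read_csv("chemistry.csv", sep=";", skipinitialspace=True)

# Inner merge keeps only the timestamps present in both files (00:15, 00:20, ...).
merged = pd.merge(chem, flows, on="Timestamp", how="inner")
merged.to_csv("merged.csv", sep=";", index=False)
```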
1
0
1
0
I would like to know how I can proceed in order to concatenate two CSV files. Here is the composition of these two files: the first one contains data related to water chemistry parameters, with measurements taken on different dates; the second one shows the different flow values of waste water during a certain period of time. The problem is that I am looking to assign each value of the second file (flow values) to the right row in the first file (water chemistry parameters), in such a way that the flow and the other chemical parameters are measured at the same moments. Any suggestions?
Merging two DataFrames (CSV files) with different dates using Python
0
0
1
0
0
37
43,884,375
2017-05-10T05:42:00.000
1
0
0
1
0
python,azure,azure-table-storage
0
43,884,796
0
2
0
true
0
0
Returning the number of entities in the table storage is for sure not available in the Azure Table Storage SDK and service. You could make a table-scan query to return all entities from your table, but if you have millions of these entities the query will probably time out. It is also going to have a pretty big performance impact on your table. Alternatively, you could try making segmented queries in a loop until you reach the end of the table.
1
0
0
0
We have a table in Azure Table Storage that is storing a LOT of data (IoT stuff). We are attempting a simple migration away from Azure Table Storage to our own data services. I'm hoping to get a rough idea of how much data we are migrating exactly, e.g. 2,000,000 records for IoT device #1234. The problem I am facing is getting a count of all the records that are present in the table with some constraints (e.g. count all records pertaining to one IoT device #1234, etc.). I did a fair amount of research and found posts that say that this count feature is not implemented in ATS. These posts, however, were from circa 2010 to 2014. I assumed (hoped) that this feature has been implemented by now, since it's 2017, and I'm trying to find docs for it. I'm using Python to interact with our ATS. Could someone please post a link to the docs that show how I can get the count of records using Python (or even HTTP/REST etc.)? Or if someone knows for sure that this feature is still unavailable, that would help me move on as well and figure out another way to go about things! Thanks in advance!
Counting Records in Azure Table Storage (Year: 2017)
0
1.2
1
0
0
1,349
43,896,486
2017-05-10T15:12:00.000
0
0
1
0
1
python,2d-games,pythonista
0
48,909,025
0
2
0
false
0
1
Instead of putting the joystick on a separate scene, you should draw it on a scene.Node. Then in your game scene, you can add it like another sprite, using Scene.add_child(). To convert the touch positions to the nodes coordinate system, you can use Node.point_from_scene(), and to convert back to the scene’s coordinate system, you use Node.point_to_scene()
1
0
0
0
I am learning Python through Pythonista on the iPhone. The first thing I did was make a simple touch-screen joystick (controller). I'm starting to work on the actual game, but I don't know how to merge or overlay the 2 scenes (one is the actual game, the other is the controller I made in another file). I have already tried importing and running it, but it seems like only one can be run at once, either the controller file or the game file. Any help is appreciated.
Running multiple scenes in Pythonista
0
0
1
0
0
528
43,897,009
2017-05-10T15:34:00.000
0
0
0
0
0
python,google-apps-script,gspread
0
65,546,184
0
2
0
false
0
0
Make sure you're using the latest version of gspread. The one that is e.g. bundled with Google Colab is outdated: !pip install --upgrade gspread This fixed the error in gs.csv_import for me on a team drive.
1
1
0
0
I am trying to access a Spreadsheet on a Team Drive using gspread. It is not working. It works if the spreadsheet is on my Google Drive. I was wondering if gspread has the new Google Drive API v3 capability available to open spreadsheets on Team Drives. If so, how do I specify the fact I want to open a spreadsheet on a Google Team Drive and not my own Google drive? If not, when will that functionality be available? Thanks!
Does gspread Support Accessing Spreadsheets on Team Drives?
1
0
1
1
0
426
43,898,566
2017-05-10T16:54:00.000
1
0
0
0
0
python,amazon-web-services,chalice
0
44,975,845
0
2
0
true
1
0
You wouldn't serve HTML from Chalice directly. It is explicitly designed to work in concert with AWS Lambda and API Gateway to serve dynamic, API-centric content. For the static parts of an SPA, you would use a web server (nginx or Apache) or S3 (with or without CloudFront). Assuming you are interested in a purely "serverless" application model, I suggest looking into using the API Gateway "Proxy" resource type, forwarding to static resources on S3. Worth noting that it's probably possible to serve HTML from Chalice, but from an architecture perspective, that's not the intent of the framework and you'd be swimming upstream to get all the capabilities and benefits from tools purpose-built for serving static traffic (full HTTP semantics w/ caching, conditional gets, etc)
1
1
0
0
Has anyone here ever worked with Chalice? It's an AWS tool for creating APIs. I want to use it to create a single-page application, but I'm not sure how to actually serve HTML from it. I've seen videos where it's explored, but I can't figure out how they actually built the thing. Does anyone have any advice on where to go or how to start this?
Using aws chalice to build a single page application?
0
1.2
1
0
1
923
43,904,029
2017-05-10T23:09:00.000
2
0
0
0
0
python,machine-learning,gensim,word2vec,text-classification
0
43,976,879
0
1
0
false
0
0
If you have a trained word2vec model, you can get a word vector with the __getitem__ method: model = gensim.models.Word2Vec(sentences); print(model["some_word_from_dictionary"]). Unfortunately, embeddings from word2vec/doc2vec are not interpretable by a person (in contrast to topic vectors from LdaModel). P.S. If you have whole texts as the objects in your task, then you should use the Doc2Vec model.
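A small sketch with a toy corpus; newer gensim releases expose the vectors through model.wv rather than by indexing the model directly:

```python
from gensim.models import Word2Vec

sentences = [["this", "is", "a", "document"],        # toy tokenised corpus
             ["another", "short", "document"]]

model = Word2Vec(sentences, min_count=1)

vec = model.wv["document"]   # the learned vector for one vocabulary word
print(vec.shape)
```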
1
1
1
0
I want to analyze the vectors looking for patterns, and use SVM on them to complete a classification task between classes A and B; the task should be supervised (I know it may sound odd, but it's our homework). So I really need to know: 1) how to extract the encoded vectors of a document using a trained model, and 2) how to interpret them, and how does word2vec encode them? I'm using gensim's word2vec.
How extract vocabulary vectors from gensim's word2vec?
0
0.379949
1
0
0
1,519
43,922,344
2017-05-11T17:44:00.000
0
0
0
0
0
django,python-2.7,jinja2
0
43,922,380
0
1
0
true
1
0
Unfortunately you can't do this directly with Django. You'll have to set up an AJAX handler (probably on keypress) in order to do this.
1
1
0
0
So I'm sending an item to my HTML page and putting a value of this item in an input. What I want is, when I change the input, to dynamically print the new value next to the input. Something like this: <input type='text' value="{{item.qty}}"/> {{myNewInputValue}}. I know how to do this with Angular but don't know if it's possible with Python. Thanks.
How to show input value next to it ? Python
0
1.2
1
0
0
61
43,939,873
2017-05-12T14:11:00.000
0
0
1
0
0
python,windows
0
43,940,430
0
1
0
false
0
1
Just an idea - not sure if that would work under your specific conditions (PyQT etc), but couldn't you run it from the pen drive directly? As in create a Python virtual environment (for example using venv, with all the dependencies) on the pendrive and then call your program using the python interpreter in the installed virtual environment. Or use the virtual environment and it's interpreter to install the dependencies?
1
0
0
0
I'm bringing you this issue: I'm trying to create a program to run in Windows using PyQT, to work on a pen drive. My idea is: I plug my pen drive, and everything that I need to run the program is there, including Python 3, PyQT, etc.. I don't want the user to install all the requirements, I just want one executable that install all the programs necessary and then, there will be the executable to open the program. Considering, of course, that Python 3 is not installed in this Windows Machine Just wondering how can I do it? Do you guys have any idea? Thanks, Gus.
Creating a "pen drive program" with Python
0
0
1
0
0
322
43,941,174
2017-05-12T15:15:00.000
2
0
0
0
0
python,python-3.x,nginx,gunicorn
0
43,942,327
0
1
0
false
1
0
Not quite. Basically flask is the webapp, it gets loaded when gunicorn starts up. At that point the flask app is up and running and gunicorn itself can answer requests by sending them to the flask app within its python processes (ie, no net traffic). Nginx sits on top of gunicorn and proxies requests between clients and gunicorn as gunicorn is not a web server. So nginx -> gunicorn -> flask (loaded by gunicorn itself) When gunicorn starts up, it loads and initialises the flask app on its own. Doing that on every request would be very slow. Nginx just proxies to gunicorn's listening port. It does not load a Flask app by itself, which is really a WSGI compliant Python webapp.
1
1
0
0
I was wondering how exactly the request is handled. I mean, I think it's something like this: Nginx receives the request, does initial handling based on configuration, and passes it to Gunicorn; Gunicorn receives it and initiates an instance of the Flask app with the request data; the Flask app receives the request data and does the work it was programmed to do. Is it something like this? Does a new instance of the Flask app get initiated on each request?
How does the Flask-Gunicorn-Nginx setup work under the hood?
0
0.379949
1
0
0
538
43,947,405
2017-05-12T22:42:00.000
5
0
1
0
0
python,multithreading,coroutine,eventlet
0
43,962,169
0
1
0
true
0
1
You wrote the answer yourself; I can only rephrase it. With regard to Eventlet, Gevent, Twisted, Asyncio and other cooperative multitasking libraries, we use the term "blocking" to denote that something blocks everything. An unpatched time.sleep(1) will block all coroutines/greenthreads, as opposed to OS-thread semantics where it would only block the caller OS thread and allow other OS threads to continue. To differentiate things that block an OS thread from things that block a coroutine/greenthread we use the term "yielding". A yielding function is one that allows execution of the rest of the coroutines, while blocking (due to Python execution semantics) only the caller coroutine. Armed with that powerful terminology, tpool.execute() turns a blocking call into a yielding one. Combined with eventlet.spawn(tpool.execute, fun, ...) it would not block even the caller coroutine. Maybe you will find this a helpful combination. And patches are always welcome; Eventlet is a great library because it contains the combined effort of many great people.
1
2
0
0
I am trying to understand what eventlet.tpool is useful for. The docs say that tpool.execute() lets you take a blocking function and run it in a new thread. However, the tpool.execute() method itself blocks until the thread is complete! So how is this possibly useful? If I have some blocking/long running function myfunc() and call it directly, it will block. If I call it inside tpool.execute(myfunc) then the tpool.execute(myfunc) call will block. What exactly is the difference? The only thing I can guess is that when myfunc() is called directly, it not only blocks this coroutine but also prevents other coroutines from running, while calling tpool.execute() will block the current coroutine but somehow yields so that other coroutines can run. Is this the case? Otherwise I don't see how tpool can be useful.
How is eventlet tpool useful?
0
1.2
1
0
0
1,179
43,954,187
2017-05-13T14:16:00.000
-2
0
0
0
0
python,opencv,image-processing,distance
0
43,954,917
0
1
0
false
0
0
I am sorry, but finding a distance is a metrology problem, so you need to calibrate your camera. Calibration is a relatively easy process and is necessary for any measurements. Assuming you only have one calibrated camera, if the orientation/position of this camera is fixed relative to the ground plane, it is possible to calculate the distance between the camera and the feet of somebody (assuming the feet are visible).
1
0
1
0
I want to find out the distance between the camera and the people (detected using the HOG descriptor) in front of the camera. I'm looking for a more subtle approach rather than calibrating the camera and without knowing any distances beforehand. This can fall under the scenario of an autonomous car finding the distance to the car in front. Can someone help me out with sample code or an explanation of how to do so?
How to find the distance between the camera and a detected object using OpenCV in Python?
0
-0.379949
1
0
0
2,354
43,972,059
2017-05-15T05:25:00.000
0
0
0
0
0
python,machine-learning,scikit-learn,statistical-sampling
0
43,972,717
0
1
0
false
0
0
To evaluate a classifier's accuracy against another classifier, you need to randomly sample from the dataset for training and test. Use the test dataset to evaluate each classifier and compare the accuracies in one go. Given a dataset stored in a dataframe, split it into training and test (random sampling is better to get an in-depth understanding of how good your classifier is in all cases; stratified sampling can sometimes mask your true accuracy). Why? Let's take an example: if you are doing stratified sampling on some particular category (and let's assume this category has an exceptionally large, i.e. skewed, amount of data) and the classifier predicts that one category well, you might be led to believe that the classifier works well, even if it doesn't perform better on categories with less information. Where does stratified sampling work better? When you know that the real-world data will also be skewed and you will be satisfied if the most important categories are predicted correctly. (This definitely does not mean that your classifier will work badly on categories with less information; it can work well. It's just that stratified sampling sometimes does not present a full picture.) Use the same training dataset to train all classifiers and the same testing dataset to evaluate them. Also, random sampling would be better.
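A sketch of the "same split, many models" comparison; make_classification is only a stand-in for the 600 x 13 MFCC feature matrix:

```python
from sklearn.datasets import make_classification   # stand-in for the MFCC features
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           n_classes=5, random_state=0)

# One fixed split shared by every model, so the comparison is like for like.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20,
                                          random_state=0, stratify=y)

for clf in (SVC(), KNeighborsClassifier(), GaussianNB()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, clf.score(X_te, y_te))
```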
1
1
1
0
I'm working on a project to classify 30 second samples of audio from 5 different genres (rock, electronic, rap, country, jazz). My dataset consists of 600 songs, exactly 120 for each genre. The features are a 1D array of 13 mfccs for each song and the labels are the genres. Essentially I take the mean of each set of 13 mfccs for each frame of the 30 second sample. This leads to 13 mfccs for each song. I then get the entire dataset, and use sklearn's scaling function. My goal is to compare svm, knearest, and naive bayes classifiers (using the sklearn toolset). I have done some testing already but I've noticed that results vary depending on whether I do random sampling/do stratified sampling. I do the following function in sklearn to get training and testing sets: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0, stratify=y) It has the parameters "random state" and "stratify". When "random state" is ommitted, it randomly samples from the entire dataset; when it is set to 0, the training and test sets are guaranteed to be the same each time. My question is, how do I appropriately compare different classifiers. I assume I should make the same identical call to this function before training and testing each classifer. My suspicion is that I should be handing the exact same split to each classifier, so it should not be random sampling, and stratifying as well. Or should I be stratifying (and random sampling)?
Music genre classification with sklearn: how to accurately evaluate different models
1
0
1
0
0
894
43,980,261
2017-05-15T13:01:00.000
1
0
0
1
1
python,python-3.x,packages,easy-install
1
43,981,171
0
1
0
true
0
0
easy_install must be used as a command in the command prompt and it cannot be opened as an application. Go to the folder where easy_install is and open command-prompt in that folder. Now perform installation of any libraries using: >easy_install pandas #example Or you can set this path in your environment variables and use it instead of using this path to install everytime.
1
1
0
0
I'm trying to get new packages (request for example) and trying to do it through easy_install, but when I try to open it (both easy_install and easy_install-3.6) all I get is a blank terminal screen popping up for a second and than closing with nothing happening. What's wrong with it and how can I get new packages?
easy_install not working with no error
0
1.2
1
0
0
347
43,988,174
2017-05-15T20:22:00.000
1
0
0
0
0
python,opengl,math,pygame
0
43,988,562
0
1
0
false
0
1
You need to compute the player forward vector: The forward vector is the vector that points in the forward direction seen from the player's eyes - it tells you in which direction the eyes of the player are looking. The local forward vector (I call it lfw for now) is probably (0,0,1) because you specified the y axis as "up". The world forward vector (called wfv for now) is: (rotationMatrix * lfw); That is the direction which the player is looking at in world coordinates because you multiplied it with the players rotationMatrix. The final lookAt position is: position + wfv ( means: make one step from position in the forward direction --> yields the point after you took the step.) Hope this helps a bit
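A rough sketch of turning mouse deltas into a gluLookAt target, assuming one common yaw/pitch convention (forward is -z at rest, y is up); the sensitivity constant is a made-up value to tune:

import math

yaw = pitch = 0.0
SENSITIVITY = 0.1                      # assumed degrees per pixel of mouse movement

def apply_mouse(dx, dy):
    global yaw, pitch
    yaw += dx * SENSITIVITY
    pitch = max(-89.0, min(89.0, pitch - dy * SENSITIVITY))   # clamp so the view cannot flip

def look_at_point(px, py, pz):
    ry, rp = math.radians(yaw), math.radians(pitch)
    fx = math.cos(rp) * math.sin(ry)   # world-space forward vector for this convention
    fy = math.sin(rp)
    fz = -math.cos(rp) * math.cos(ry)
    return px + fx, py + fy, pz + fz   # use these as lookat_x/y/z in gluLookAt

apply_mouse(12, 0)                     # "mouse moved 12 pixels to the right"
print(look_at_point(0.0, 0.0, 0.0))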
1
0
0
0
I started to to something with opengl in pygame, but I'm stuck at the point where gluLookAt takes world coordinates to orient the camera, but I want to move the camera with the mouse, so I have values like "mouse moved 12 pixels to the right". At the moment I have gluLookAt(player_object.x, player_object.y, player_object.z, lookat_x, lookat_y, -100, 0, 1, 0) but I don't know how to convert the movement of the mouse to these coordinates. Maybe someone knows the answer or a formula to convert it. (I use python, but I think it's easy to port code or just a formula)
opengl gluLookAt with orientation in degrees instead of coordinates
0
0.197375
1
0
0
198
44,005,182
2017-05-16T15:06:00.000
1
1
1
0
1
python
0
44,025,088
0
2
0
false
0
0
I have not been able to determine where the problem is, so I just specified the full path using the getcwd command from os. It has worked so far. It means I must have a hidden .pyc or .py~ file somewhere.
1
1
0
0
I am currently trying to load function from another .py file. I have in the same folder: algo.py and test_algo.py. I need to import all functions from algo in test_algo so I use the command: from algo import * The import is succesful however one function do_sthg() takes 3 arguments in algo but the imported version requires 4 arguments which was the case in a very old version of the code. I deleted all .py~ related files and there are not any other scripts with the name algo on my computer. How is that possible and how can i solve this issue? (I can not specify the full links to my script as it should change over time, I am using 2.7 version of Python) Any help would be appreciated.
Old version of a script is imported using import on Python
1
0.099668
1
0
0
875
44,006,560
2017-05-16T16:12:00.000
2
0
1
0
0
python,pyaudio
0
44,007,671
0
1
0
true
1
0
Surely you can record audio for more than an hour using pyaudio. Try invoking the recording function in a thread and put the main process in a loop or sleep for that period. Note: Make sure you do not run out of memory.
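A hedged sketch of that idea, writing each chunk straight to the wav file so memory use stays flat no matter how long the recording runs; the rate and duration values are assumptions to adjust:

import pyaudio
import wave

RATE, CHUNK, SECONDS = 44100, 1024, 2 * 60 * 60       # two hours, assumed

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

wf = wave.open('long_recording.wav', 'wb')
wf.setnchannels(1)
wf.setsampwidth(p.get_sample_size(pyaudio.paInt16))
wf.setframerate(RATE)

for _ in range(int(RATE / CHUNK * SECONDS)):
    wf.writeframes(stream.read(CHUNK))    # write each chunk immediately; nothing piles up in RAM

stream.stop_stream()
stream.close()
p.terminate()
wf.close()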
1
0
0
0
I use pyaudio with python 2.7.13 to record wav, but my program dies when I record for more than 1 hour. What can I do if I want to record for more than 1 hour with Python 2.7? Thanks for your reply!
I want to use pyaudio to record wav more than hours
0
1.2
1
0
0
204
44,010,155
2017-05-16T19:37:00.000
1
1
0
0
0
linux,bash,python-3.x,minecraft,gnu-screen
0
44,896,034
0
1
0
true
0
0
Well, the ideal solution would be to write a bukkit plugin/forge mod to do this, rather than doing this entirely from outside the actual server. That being said, however, your best bet is probably watching the log files, as JNevill says in the comment.
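One way to watch the log file from Python, as the comment suggests, is a simple tail-follow loop; the log path and the chat-line check below are assumptions that depend on the server setup:

import time

LOG_PATH = 'logs/latest.log'        # assumed vanilla-server log location

with open(LOG_PATH) as log:
    log.seek(0, 2)                  # start at the end so only new output is seen
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)         # nothing new yet; avoid busy-waiting
            continue
        if '<' in line and '>' in line:     # crude guess at a chat line like "<player> hi"
            print(line.strip())             # forward to Discord here instead of printing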
1
1
0
0
I'm currently in the process of hacking together a bit of bash and python3 to integrate my Minecraft server with my friends Discord. I managed to power through most of the planned features with nary a hitch, however now I've gotten myself stuck halfway into the chat integration. I can send messages from the Discord to the server no problem, but I have no idea how to read the console output of the server instance, which is running in a screen session. I would appreciate some pointers in the right direction, if you know how this sort of thing is done. Ideally I would like a solution that is capable of running asynchronously, so I don't have to do a whole lot of busy-waiting to check for messages. P.S.: Sorry if this belongs on superuser instead, I wasn't sure where to put it.
How do I grab console output from a program running in a screen session?
1
1.2
1
0
0
254
44,023,863
2017-05-17T11:41:00.000
0
1
0
1
0
python,c++,python-3.x,c++11
0
44,025,218
0
2
0
false
0
1
There's POSIX popen and on Windows _popen, which is halfway between exec and system. It offers the required control over stdin and stdout, which system does not. But on the other hand, it's not as complicated as the exec family of functions.
1
0
0
0
I am trying to make a program in C++, but I can't finish it because in one part of the code I need to run a Python program from C++ and I don't know how to do it. I've been trying many ways of doing it but none of them worked. So the code should look something like this: somethingtoruntheprogram("pytestx.py"); or something close to that. I'd prefer doing it without python.h. I just need to execute this program. I need to run the program because I have redirected output and input from the Python program with sys.stdout and sys.stdin to text files and then I need to take data from those text files and compare them. I am using Windows.
How to run a python program from c++
0
0
1
0
0
362
44,045,913
2017-05-18T10:59:00.000
0
0
0
0
0
python,machine-learning,statistics,regression,data-science
0
44,049,390
0
2
0
false
0
0
Why did customer service calls drop last month? It depends on what type and features of data you have to analyze and explore the data. One of the basic things is to look at correlation between features and target variable to check if you can identify any feature that can correlate with the drop of calls. So exploring different statistic might help better to answer this question than prediction models. Also it is always a good practice to analyze and explore the data before you even start working on prediction models as its often necessary to improve the data (scaling, removing outliers, missing data etc) depending on the prediction model you chose. Should we go with this promotion model or another one? This question can be answered based on the regression or any other prediction models you designed for this data. These models would help you to predict the sales/outcome for the feature if you can provide the input features of the promotion models.
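As a small illustration of the correlation check suggested above; the column names and data here are hypothetical placeholders:

import numpy as np
import pandas as pd

# Hypothetical monthly data; replace with your real dataset.
df = pd.DataFrame({'service_calls': np.random.poisson(100, 24),
                   'promo_spend':  np.random.rand(24),
                   'avg_wait_min': np.random.rand(24)})

# Correlation of every numeric feature with the target, sorted.
print(df.corr()['service_calls'].sort_values())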
1
0
1
0
Thanks for your help on this. This feels like a silly question, and I may be overcomplicating things. Some background information - I just recently learned some machine learning methodologies in Python (scikit and some statsmodels), such as linear regression, logistic regression, KNN, etc. I can work the steps of prepping the data in pandas data frames and transforming categorical data to 0's and 1's. I can also load those into a model (like, logistic regression in scikit learn). I know how to train and test it (using CV, etc.), and some fine tuning methods (gridscore, etc.). But this is all in the scope of predicting outcomes on new data. I mainly focused on learning on building a model to predict on new X values, and testing that model to confirm accuracy/precision. However, now I'm having trouble identifying and executing the steps to the OTHER kinds of questions that say, a regression model, can answer, like: Why did customer service calls drop last month? Should we go with this promotion model or another one? Assuming we have all our variables/predictor sets, how would we determine those two questions using any supervised machine learning model, or just a stat model in the statsmodels package. Hope this makes sense. I can certainly go into more detail.
Answering business questions with machine learning models (scikit or statsmodels)
0
0
1
0
0
200
44,058,544
2017-05-18T22:11:00.000
6
0
1
0
0
python,audio,mp3,encoder,pydub
1
48,354,157
0
3
0
false
1
0
The other solution did not work for me. The problem for me was that the ffmpeg version that came installed with Anaconda did not seem to be compiled with an encoder. So instead of: DEA.L. mp3 MP3 (MPEG audio layer 3) (decoders: mp3 mp3float mp3_at ) (encoders: libmp3lame ) I saw: DEA.L. mp3 MP3 (MPEG audio layer 3) (decoders: mp3 mp3float mp3_at ) Without the (encoders: ...) part. My solution was to do this: ffmpeg -codecs | grep mp3, to check if there is any encoder (there isn't!). conda uninstall ffmpeg Open new terminal window. brew install ffmpeg --with-libmp3lame ffmpeg -codecs | grep mp3, to check if there is any encoder (now there is!).
1
5
0
0
I'm trying to export a file as mp3 in pydub, but I get this error: Automatic encoder selection failed for output stream #0:0. Default encoder for format mp3 is probably disabled. Please choose an encoder manually How do I select an encoder manually, what is the default encoder, and how could I enable it? PS: My Pydub opens mp3 files without any problem. I'm using Windows and Libav.
Pydub export error - Choose encoder manually
0
1
1
0
0
8,086
44,089,046
2017-05-20T17:43:00.000
0
0
0
1
0
python,terminal,filepath
0
44,089,362
0
1
0
true
0
0
Two problems going on here: 1) That path is probably correct. You're not using find correctly, in particular. You need to do sudo find / -name "*.app" (note the quotes around *app). From the man page: -iname pattern Like -name, but the match is case insensitive. For example, the patterns 'fo*' and 'F??' match the file names 'Foo', 'FOO', 'foo', 'fOo', etc. In these patterns, unlike filename expansion by the shell, an initial '.' can be matched by '*'. That is, find -name *bar will match the file '.foobar'. Please note that you should quote patterns as a matter of course, otherwise the shell will expand any wildcard characters in them. 2) Try using subprocess.Popen(["/Applications/Google Earth.app"], shell=True). Don't worry too much about security problems with shell=True unless you are taking user input. If it's just hardcoded to use your path to Google Earth, you're fine. If you have user input in your logic, however, DO NOT use shell=True. shell=True just means that shell metacharacters will work if they are in the command. The reason you need it is that subprocess.Popen() will have trouble parsing your command since there is a space in the path. Alternatively, you could just use os.system("/Applications/Google Earth.app").
1
0
0
0
I am trying to build a script that opens the Google Earth.app which I can see in Finder, but when I go to the applications folder it is not present. I looked at some other posts to find the filepath of Google Earth.app via sudo find / -iname *.app, which was /Applications/Google Earth.app. When I try and find this file I get 'No such file or directory'. Could some one please explain why you applications that are in Finder don't show up in terminal? Also how would I find the correct file path so I can use subprocess.Popen() to open Google Earth in Python.
Trying to open Google Earth via a Script, file path nonexistent
1
1.2
1
0
0
103
44,089,424
2017-05-20T18:28:00.000
0
0
1
0
0
python,opencv,import
0
48,517,382
0
1
0
false
0
0
To import opencv in your python project import cv2
1
0
0
0
I have problem about importing OpenCV to my project. Not actually problem, but I didn't find how to do that. I know it's trivial, but I really don't know. I have opencv downloaded and compiled in my home directory. I know how to import it in virtualenv, but how to import it directly from original - non virtualenv python2.7?
How to import opencv in python (not from virtualenv) [UBUNTU]
0
0
1
0
0
71
44,090,457
2017-05-20T20:19:00.000
0
0
0
0
0
telegram-bot,python-telegram-bot
0
44,094,915
0
1
0
false
1
0
you have lots of options. at first you need to store all chat_ids. you can do it in database or simple text file. then you need a trigger in order to start sending messages. I'm not familiar with your technology but i just create simple service in order to do it.
1
1
0
0
I've built a small telegram bot using python-telegram-bot. When a conversation is started,I add a periodical job to the job queue and then message back to the user every X minutes. Problem is when my bot goes offline (maintenance, failures, etc), the jobqueue is lost and clients do not receive updates anymore unless they send /start again I could maybe store all chat_ids in a persistent queue and restore them at startup but how do I send a message without responding to an update ?
Restoring job-queue between telegram-bot restarts
0
0
1
0
1
778
44,100,906
2017-05-21T19:23:00.000
-1
0
1
0
0
python,python-3.x
0
44,100,972
0
1
0
false
0
0
Try this: from bs4 import BeautifulSoup. Edit: this was already answered by @jonsharpe and @Vinícius Aguiar in the comments under the question.
1
1
0
0
I am doing a python webpage scraper . Some tutorial told me to use this package: BeautifulSoup. So I installed it using pip. Then, in my script, I try to import BeautifulSoup as bs. But I was warned that no module named BeautifulSoup. Is there a reliable way to get module name out of an installed package?
I installed a pip package, how to know the module name to import?
0
-0.197375
1
0
0
91
44,123,641
2017-05-22T23:41:00.000
5
0
1
0
0
python,google-assistant-sdk,google-assist-api
0
44,134,868
0
2
0
false
0
0
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly. Speaking to the engineers on this subject while at Google I/O, they indicated that there are some technical complications on their end to doing this, but they understand the use cases. They need to see questions like this to know that people want the feature. Hopefully it will make it into an upcoming Developer Preview.
1
3
0
0
I am using the python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string calling the method resp.result.spoken_request_text from \googlesamples\assistant\__main__.py and I have the answer as an audio stream from the assistant API with the method resp.audio_out.audio_data also from \googlesamples\assistant\__main__.py I would like to know if it is possible to have the answer from the service as a string as well (hoping it is available in the service definition or that it could be included), and how I could access/request the answer as string. Thanks in advance.
How to receive answer from Google Assistant as a String, not as an audio stream
1
0.462117
1
0
1
1,474
44,124,471
2017-05-23T01:40:00.000
0
0
0
0
0
python,machine-learning,scikit-learn
1
54,380,982
0
2
0
false
0
0
One another solution is that, you can do a bivariate analysis of the categorical variable with the target variable. What yo will get is a result of how each level affects the target. Once you get this you can combine those levels that have a similar effect on the data. This will help you reduce number of levels, as well as each well would have a significant impact.
1
0
1
0
Hi guys. I have a large data set (60k samples with 50 features). One of these features (which is really relevant for me) is job names. There are many job names that I'd like to encode to fit in some models, like linear regression or SVCs. However, I don't know quite how to handle them. I tried to use pandas dummy variables and scikit-learn one-hot encoding, but they generate many features that I may not encounter in the test set. I tried to use scikit-learn's LabelEncoder(), but I also got some errors when I was encoding the variables (a float() > str() error, for example). What would you recommend for handling these several categorical features? Thank you all.
How to encode categorical with many levels on scikit-learn?
0
0
1
0
0
1,235
44,133,280
2017-05-23T11:17:00.000
0
0
0
0
0
python,string,pandas,split
0
44,134,880
0
1
0
true
0
0
Ok, just solved the question: with df.shape I found out what the dimensions are and then started a for loop: for i in range(1, x): df[df.columns[i]] = df[df.columns[i]].str.split('/').str[-1] If you have any more efficient ways let me know :)
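If the relevant columns are the ones whose names start with "line", a sketch of the same idea driven by the column names; the sample frame below is made up to match the question's data:

import pandas as pd

df = pd.DataFrame({'line 1': ['10 MW / red',  '5 MW / blue'],
                   'line 2': ['7 MW / green', '3 MW / white'],
                   'other':  ['x', 'y']})

for col in df.columns:
    if col.startswith('line'):                                  # only the "line ..." columns
        df[col] = df[col].str.split('/').str[-1].str.strip()    # keep the colour, drop spaces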
1
0
1
0
I am currently in the phase of data preparation and have a certain issue I would like to make easy. The content of my columns: 10 MW / color. All the columns which have this content are named with line nr. [int] or a [str] What I want to display and which is the data of interest is the color. What I did was following: df['columnname'] = df['columnname'].str.split('/').str[-1] The problem which occurs is that this operation should be done on all columns which names start with the string "line". How could I do this? I thought about doing it with a for-loop or a while-loop but I am quite unsure how to do the referencing in the data frame then since I am a nooby in python for now. Thanks for your help!
Strip certain content of columns in multiple columns
0
1.2
1
0
0
50
44,140,535
2017-05-23T16:39:00.000
0
0
0
1
1
python
1
44,140,659
0
1
0
false
0
0
The thing to remember is that you run the script as your own user, but cron/startup does not, so you need to: Ensure that the executable flags are set for all users and that the script is in a directory that everybody has access to. Use the absolute path for everything, including the script. Specify what to run it with, again with the absolute path.
1
0
0
0
I am from electrical engineering and currently working on a project using UP-Board, I have attached LEDs, switch, Webcam, USB flash drive with it. I have created an executable script that I want to run at startup. when I try to run the script in terminal using the code sudo /etc/init.d/testRun start it runs perfectly. Now when I write this command in terminal sudo update-rc.d testRun defaults to register the script to be run at startup it gives me the following error insserv: warning: script 'testRun' missing LSB tags and overrides Please guide me how to resolve this? I am from Electrical engineering background, so novice in this field of coding. Thanks a lot :)
Error while Registering the script to be run at start-up, how to resolve?
0
0
1
0
0
24
44,141,059
2017-05-23T17:08:00.000
0
0
0
0
0
python,process,cluster-computing,pymc3,dirichlet
0
44,144,746
0
1
0
false
0
0
If I understand you correctly, you're trying to extract which category (1 through k) a data point belongs to. However, a Dirichlet random variable only produces a probability vector. This should be used as a prior for a Categorical RV, and when that is sampled from, it will result in a numbered category.
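A minimal 1-D sketch of the Dirichlet-weights-plus-Categorical-assignment idea, with synthetic data and an assumed K=3; real grouped vectors would need a multivariate likelihood instead of the plain Normal used here:

import numpy as np
import pymc3 as pm

data = np.random.randn(100)                        # stand-in for your values
K = 3                                              # assumed number of clusters

with pm.Model():
    w = pm.Dirichlet('w', a=np.ones(K))            # mixture weights
    mu = pm.Normal('mu', mu=0.0, sd=5.0, shape=K)  # one mean per cluster
    z = pm.Categorical('z', p=w, shape=len(data))  # latent cluster id per point
    pm.Normal('obs', mu=mu[z], sd=1.0, observed=data)
    trace = pm.sample(500)

cluster_ids = trace['z']    # one row of cluster assignments per posterior draw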
1
0
1
0
I am using PyMC3 to cluster my grouped data. Basically, I have g vectors and would like to cluster the g vectors into m clusters. However, I have two problems. The first one is that, it seems PyMC3 could only deal with one-dimensional data but not vectors. The second problem is, I do not know how to extract the cluster id for the raw data. I do extract the number of components (k) and corresponding weights. But I could not extract the id that indicating the which cluster that each point belongs to. Any ideas or comments are welcomed!
How to extract cluster id from Dirichlet process in PyMC3 for grouped data?
0
0
1
0
0
95
44,186,905
2017-05-25T17:55:00.000
0
0
0
0
0
python,user-interface,raspberry-pi3
0
44,188,711
0
2
0
true
0
1
Your pi boots up and displays a console - just text - by running a program (getty). Then you run another application called a graphical display manager which then runs a window manager. On a pi it is usually gnome but there are many others,.. this window manager is what displays your GUI window. What you want is obviously possible, it is just that it is non-trivial to do. What you are talking about is either kiosk-mode application still running 'on the desktop' as you say but which obscures the desktop completely and does not allow you to switch or de-focus or an even more complicated JeOS like Kodi/XBMC bare metal installation running without your current window manager. Your python would have to do the job of the display manager and the window manager and it would be very, very slow. Use a really light window manager and go kiosk mode. Or you could go with text! There are libraries eg ncurses but I'm not sure how that would work with your touch screen display.
1
1
0
0
I am creating a GUI interface that will be using a 7" touch display with a raspberry pi 3. I want the GUI to take the place of the desktop, I do not want it displayed in a window on the desktop. any thoughts on how to do that. I have read the raspberry pi documentation to edit the rc.local script to start the application at login, but I can not figure out how to set up the python GUI with out creating a window
how to replace the desktop interface with a python application
0
1.2
1
0
0
737
44,188,070
2017-05-25T19:07:00.000
0
0
1
0
1
python,rounding
0
44,205,462
0
2
0
false
0
0
You could divide your altitude by 1000.0 and cast to an int which would drop the decimal: if int(altitude/1000.0) % 2 == 0 Then you can do whatever you want with that information.
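A small helper built on that parity check; it assumes the input is already at some thousands-plus-500 level and only bumps it up when the parity is wrong for the heading:

def next_valid_vfr(altitude, heading_east):
    """Return the next legal VFR cruising altitude (odd/even thousands + 500 ft)."""
    thousands = int(altitude) // 1000
    want_odd = heading_east                       # east -> odd thousands, west -> even
    if (thousands % 2 == 1) != want_odd:
        thousands += 1                            # wrong parity: never round down, go up
    return thousands * 1000 + 500

print(next_valid_vfr(1500, heading_east=False))   # 2500
print(next_valid_vfr(6500, heading_east=True))    # 7500
print(next_valid_vfr(3500, heading_east=True))    # 3500, already legal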
1
1
0
0
If an aircraft is flying VFR in the US, if the heading is east, the altitude must be an odd thousand plus 500 feet (1500, 3500, 5500, etc). If flying west, the altitude must be an even thousand plus 500 feet (2500, 4500, 6500, etc). If I input a given altitude, but it is the wrong (odd or even) for the heading, how do I get Python to correct it next higher odd or even thousandths (1500 becomes 2500, 6500 becomes 7500, etc)? We never round down for altitudes. Thanks!
Determining Thousandths in a number
0
0
1
0
0
54
44,207,726
2017-05-26T18:06:00.000
0
0
0
0
0
python,sqlalchemy
0
44,395,701
0
2
0
false
1
0
I tried extending Query but had a hard time. Eventually (and unfortunately) I moved back to my previous approach of little helper functions returning filters and applying them to queries. I still wish I would find an approach that automatically adds certain filters if a table (Base) has certain columns. Juergen
1
1
0
0
in my app I have a mixin that defines 2 fields like start_date and end_date. I've added this mixin to all table declarations which require these fields. I've also defined a function that returns filters (conditions) to test a timestamp (e.g. now) to be >= start_date and < end_date. Currently I'm manually adding these filters whenever I need to query a table with these fields. However sometimes me or my colleagues forget to add the filters, and I wonder whether it is possible to automatically extend any query on such a table. Like e.g. an additional function in the mixin that is invoked by SQLalchemy whenever it "compiles" the statement. I'm using 'compile' only as an example here, actually I don't know when or how to best do that. Any idea how to achieve this? In case it works for SELECT, does it also work for INSERT and UPDATE? thanks a lot for your help Juergen
sqlalchemy automatically extend query or update or insert upon table definition
0
0
1
1
0
182
44,217,644
2017-05-27T13:50:00.000
1
0
1
0
0
python,regex,django
0
44,217,707
0
2
0
false
1
0
Using the carat inside of square brackets (e.g. [^...]) makes it an inverse. So, for example, [A-Za-z0-9_] would match alpha numerics and underscores, whereas [^A-Za-z0-9_] would match anything that is not alpha numerics or underscores. In your case, the regex you seem to want is r'^[^\w\s\.\@\+\-]+$'.
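To surface the offending characters for the tooltip, the inverse class can be combined with findall; a small sketch with an example input:

import re

VALID = re.compile(r'^[\w\s.@+-]+$')
INVALID_CHARS = re.compile(r'[^\w\s.@+-]')

username = 'bad#name!'                    # example input
if not VALID.match(username):
    offending = sorted(set(INVALID_CHARS.findall(username)))
    tooltip = 'These characters are not allowed: ' + ' '.join(offending)
    print(tooltip)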
1
0
0
0
In a Django website of mine, I allow usernames that are alphanumeric, and/or contain @ _ . + -. Moreover, whitespaces are allowed too. I've written a simple regex to ensure this: '^[\w\s.@+-]+$'. It might be an obvious question, but how do I capture characters that do not pass regex validation? I want to display such characters in a tool tip to my users.
Catching characters that do not pass regex validation
0
0.099668
1
0
0
26
44,233,219
2017-05-29T00:21:00.000
1
0
0
0
0
python,directx,mouseevent,mouse,mousemove
0
44,331,240
0
2
0
true
0
1
Found a perfect solution, it will require some extra stuff to run but its the shortest way. Haven't found anyone who actually knows how to simulate mouse but decided to ask Sentdex for help and he recommended using vJoy to simulate controller. So you need to simulate a controller instead of mouse by using a combination of vJoy(controller driver) and FreePIE(Input emulator). After doing some research, for my purpose the best solution for moving on the axis(x,y) is to bind controller (x,y) axis movements to keyboard shortcuts(E.g. W.A.S.D) and make the main script to press these shortcuts, if I'm looking in the wrong direction. tl;dr Simulate controller. NEED: vJoy, FreePie
1
1
0
0
I've just started learning Python a few days ago and I'd like to know how to simulate mouse movements inside games that have forced mouse coordinates (DirectX environments?). I've currently tested pyautogui, ctypes and wxPython. I've also tried using directpython11 but I've had trouble installing it, with a ton of DLL errors. I can't find any topics helping with this on Google; there are lots of pages on how to click or write in these kinds of cases but nothing about moving the mouse.
How to simulate mouseMovement in-game?
0
1.2
1
0
0
1,950
44,242,207
2017-05-29T12:12:00.000
0
0
0
0
0
python,opencv
0
44,246,824
0
4
0
false
0
0
As the card symbol is at fixed positions, you may try below (e.g. in OpenCV 3.2 Python): Crop the symbol at top left corner, image = image[h1:h2,w1:w2] Threshold the symbol colors to black, the rest to white, thresh = mask = cv2.inRange(image,(0,0,0),(100,100,100)) Perform a find contour detection, _, contours, hierarchy = cv2.findContours() Get the area size of the contour. area = cv2.contourArea(contour) Compare the area to determine which one of the 4 symbols it belongs to. In which you have to build the area thresholds of each symbol for step 4 to compare. All the cv2 functions above are just for reference. Hope this help.
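A rough sketch of those steps; the crop box and the area thresholds are placeholders you would calibrate on your own card images, and the findContours unpacking is written to cope with both the 3-value (OpenCV 3.x) and 2-value (OpenCV 4.x) return styles:

import cv2

img = cv2.imread('card.png')                             # assumed input frame
corner = img[5:60, 5:45]                                 # assumed crop of the top-left symbol
mask = cv2.inRange(corner, (0, 0, 0), (100, 100, 100))   # dark symbol pixels -> white

result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = result[-2]                                    # contours are second-from-last in both APIs

if contours:
    area = max(cv2.contourArea(c) for c in contours)
    # Placeholder thresholds: measure the typical area of each suit first, then fill these in.
    if area < 200:
        suit = 'diamond'
    elif area < 260:
        suit = 'heart'
    elif area < 320:
        suit = 'spade'
    else:
        suit = 'club'
    print(suit, area)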
1
0
1
0
I'm trying to detect the difference between a spade, club, diamond and hart. the number on the card is irrelevant, just the suit matters. i've tried color detection by looking at just the red or black colors, but that still leaves me with two results per color. how could i make sure i can detect each symbol individually? for instance: i have a picture of a red hart, a red diamond, a black spade and a black club. i want to draw the contours of each symbol in a different color. I'm using my webcam as a camera.
detect card symbol using opencv python
0
0
1
0
0
4,838
44,243,853
2017-05-29T13:35:00.000
0
0
0
1
0
python,shell,subprocess
1
44,243,964
0
2
0
false
0
0
Make sure you execute this in the directory where the files exist. If you just fire up Idle to run this code, you will not be in that directory.
1
0
0
0
I have couple of file with same name, and I wanted to get the latest file [root@xxx tmp]# ls -t1 abclog* abclog.1957 abclog.1830 abclog.1799 abclog.1742 I can accomplish that by executing below command. [root@xxx tmp]# ls -t1 abclog*| head -n 1 abclog.1957 But when I am trying to execute the same in python , getting error : subprocess.check_output("ls -t1 abclog* | head -n 1",shell=True) ls: cannot access abclog*: No such file or directory '' Seems it does not able to recognize '*' as a special parameter. How can I achieve the same ?
how to use subprocess.check_output() in Python to list any file name as "abc*"
0
0
1
0
0
260
44,259,535
2017-05-30T10:19:00.000
1
0
0
0
0
revit-api,revitpythonshell
1
44,261,410
0
3
0
false
0
0
To answer my own question, I actually never thought of looking through the code of other Revit Python scripts... in this case PyRevit, which is in my opinion far more eloquently written than RPS; really looking forward to their console work being done! Basically, I had mistakenly used GetParameter('parameter') instead of LookupParameter('parameter'). As I said, it was something stupidly simple that I just didn't understand. If anyone has sufficient knowledge to coherently clarify this, please do answer! Many thanks!
1
1
0
0
I'm just trying to find a way to access the name property of an Area element inside Revit Python Shell, tried looking on Jeremy Tammik's amazingly informative blog, tried AUGI, Revit API docs, been looking for 2 days now... Tried accessing via a bunch of ways, FilteredElementsCollector(doc).OfCategory(BuiltInCategory.OST_Areas), tried by Area class, tried through AreaTag, every single time I get an error under every circumstance and it's driving me nuts, it seems like such a simple issue that I can't seem to grasp! EDIT: Also tried by element id, through tags, through area schemes, nada, no go... Can anyone please tell me how to access this property via RPS?
Accessing Area.Name Throws Error
0
0.066568
1
0
0
127
44,276,962
2017-05-31T06:10:00.000
0
0
0
0
1
python,opencv,disparity-mapping
1
44,306,111
0
2
0
false
0
1
Unlike c++, Python doesn't work well with pointers. So the arguments are Filtered_disp = ximgproc_DisparityWLSFilter.filter(left_disp,left, None,right_disp) Note that it's no longer a void function in Python! I figured this out through trial and error though.
1
2
1
0
I get a ximgproc_DisparityWLSFilter from cv2.ximgproc.createDisparityWLSFilter(left_matcher), but I cannot get ximgproc_DisparityWLSFilter.filter() to work. The error I get is OpenCV Error: Assertion failed (!disparity_map_right.empty() && (disparity_map_right.depth() == CV_16S) && (disparity_map_right.channels() == 1)) in cv::ximgproc::DisparityWLSFilterImpl::filter, file ......\opencv_contrib\modules\ximgproc\src\disparity_filters.cpp, line 262 In general, how do I figure out how to use this, when there isn't a single google result for "ximgproc_DisparityWLSFilter"?
What are the ximgproc_DisparityWLSFilter.filter() Arguments?
0
0
1
0
0
2,604
44,297,372
2017-06-01T02:06:00.000
1
0
0
0
1
python,amazon-web-services,flask,aws-lambda,aws-api-gateway
0
44,307,224
0
2
0
false
1
0
The recommended way to host a server with lambda and without EC2 is: Host your front static files on S3 (html, css, js). Configure your S3 bucket to be a static web server Configure your lambdas for dynamic treatments and open it to the outside with API-gateway your JS call the lambda through API-gateway, so don't forget to activate CORS (on the S3 bucket AND on API-gateway). configure route53 to link it with your bucket (your route53 config must have the same name as your bucket) so you can use your own DNS name, not the generic S3-webserver url
2
2
0
0
I'm new to AWS in general, and would like to learn how to deploy a dynamic website with AWS. I'm coming from a self-hosted perspective (digitalocean + flask app), so I'm confused on what exactly the process would be for AWS. With self-hosting solution, the process is something like: User makes a request to my server (nginx) nginx then directs the request to my flask app flask app handles the specific route (ie, GET /users) flask does db operations, then builds an html page using jinja2 with the results from the db operation returns html to user and user's browser renders the page. With AWS, I understand the following: User makes a request to Amazon's API Gateway (ie, GET /users) API Gateway can call a AWS Lambda function AWS Lambda function does db functions or whatever, returns some data API Gateway returns the result as JSON (assuming I set the content-type to JSON) The confusing part is how do I generate the webpage for the user, not just return the JSON data? I see two options: 1) Somehow get AWS Lambda to use Jinja2 module, and use it to build the HTML pages after querying the db for data. API Gateway will just return the finished HTML text. Downside is this will no longer be a pure api, and so I lose flexibility. 2) Deploy Flask app onto Amazon Beanstalk. Flask handles application code, like session handling, routes, HTML template generation, and makes calls to Amazon's API Gateway to get any necessary data for the page. I think (2) is the 'correct' way to do things; I get the benefit of scaling the flask app with Beanstalk, and I get the flexibility of calling an API with the API Gateway. Am I missing anything? Did I misunderstand something in (2) for serving webpages? Is there another way to host a dynamic website without using a web framework like Flask through AWS, that I don't know about?
Serving dynamic webpages using aws?
0
0.099668
1
0
0
818
44,297,372
2017-06-01T02:06:00.000
1
0
0
0
1
python,amazon-web-services,flask,aws-lambda,aws-api-gateway
0
44,312,884
0
2
0
false
1
0
You definitely have to weigh the pros and cons of serving the dynamic website via API GW and Lambda. Pros: Likely cheaper at low volume Don't have to worry about scaling Lambda Functions are easier to manage than even beanstalk. Cons: There will be some latency overhead In some ways less flexible, although Python is well supported and you should be able to import the jinja2 module. Both of your proposed solutions would work well, it kind of depends on how you view the pros and cons.
2
2
0
0
I'm new to AWS in general, and would like to learn how to deploy a dynamic website with AWS. I'm coming from a self-hosted perspective (digitalocean + flask app), so I'm confused on what exactly the process would be for AWS. With self-hosting solution, the process is something like: User makes a request to my server (nginx) nginx then directs the request to my flask app flask app handles the specific route (ie, GET /users) flask does db operations, then builds an html page using jinja2 with the results from the db operation returns html to user and user's browser renders the page. With AWS, I understand the following: User makes a request to Amazon's API Gateway (ie, GET /users) API Gateway can call a AWS Lambda function AWS Lambda function does db functions or whatever, returns some data API Gateway returns the result as JSON (assuming I set the content-type to JSON) The confusing part is how do I generate the webpage for the user, not just return the JSON data? I see two options: 1) Somehow get AWS Lambda to use Jinja2 module, and use it to build the HTML pages after querying the db for data. API Gateway will just return the finished HTML text. Downside is this will no longer be a pure api, and so I lose flexibility. 2) Deploy Flask app onto Amazon Beanstalk. Flask handles application code, like session handling, routes, HTML template generation, and makes calls to Amazon's API Gateway to get any necessary data for the page. I think (2) is the 'correct' way to do things; I get the benefit of scaling the flask app with Beanstalk, and I get the flexibility of calling an API with the API Gateway. Am I missing anything? Did I misunderstand something in (2) for serving webpages? Is there another way to host a dynamic website without using a web framework like Flask through AWS, that I don't know about?
Serving dynamic webpages using aws?
0
0.099668
1
0
0
818
44,298,778
2017-06-01T04:52:00.000
5
0
0
0
1
python,browser
0
44,317,597
0
1
0
true
0
0
Lower-case v should work (unless you use the QtWebEngine backend). Otherwise, Ctrl-A in insert mode does. However, Stackoverflow is for programming questions - this isn't about programming. ;-)
1
2
0
0
I need to map to select the document like in a normal browser. I tried using ggVG ( vim equivalent) but 'V' didn't work. Does anyone know how to do it?
How to select all the text in a page in QuteBrowser?
0
1.2
1
0
0
958
44,313,288
2017-06-01T17:00:00.000
1
0
0
0
0
python,django,django-parler,django-shop
0
50,614,495
0
2
0
false
1
0
The simplest way to localize prices in django-SHOP, is to use the MoneyInXXX class. This class can be generated for each currency using the MoneyMaker factory. Whenever an amount of a Money class is formatted, it is localized properly.
1
1
0
0
I know easy way, make a few different fields for needed currencies, but that's not only ugly, but the currencies will be hardcoded. It seems to me be more elegant through django-parler, but I do not quite understand how to do it.
What the right way to localize the price in the Django-shop?
0
0.099668
1
0
0
210
44,356,514
2017-06-04T16:50:00.000
0
0
1
0
0
python,plugins,notepad++
0
51,279,361
0
2
0
false
0
0
Yeah, I also had this problem of a plugin crashing my Notepad++ every minute, but I used Notepad++ portable instead. You can just go to the directory where Notepad++ is installed and look for the plugins directory there; Notepad++ should offer a built-in way of doing this.
1
1
0
0
I currently installed via plug-in manager of Notepad++ the Python Indent plug-in I cannot uninstall it. It's in update pane of Notepad++ plug-in manager, I check it and update it. After update installation it is there again and not in installed plug-ins. So it cannot be uninstalled. Any idea how to remove it?
How to uninstall Python indent plug-in from Notepad++?
1
0
1
0
0
1,831
44,358,307
2017-06-04T20:06:00.000
4
0
0
0
0
python,django
0
44,358,407
0
1
0
true
1
0
The web is stateless. This means that if a browser requests the same page twice, a traditional web server has no real way of knowing if it's the same user. Enter sessions. Django has an authentication system which requires each user to log in. When the user is logged in they're given a session. A session is made of two parts; A cookie containing a randomly generated token, and a database entry with that same token. When a user logs in, a new session token is generated and sent, via a cookie, back to the user which the browser stores. At the same time, that record is created in the database. Each time a browser makes a request to Django, it sends its session cookie along with the request and Django compares this to the tokens in the database. If the token exists, the user is considered to be logged in. If the token doesn't exist, the user isn't logged in. In Django, there are User models which make it easy to check who the currently logged in user is for each request. They're doing all that token checking in the background for us on each and every request made by every user. Armed with this, we can associate other models via "foreign key" relationships to indicate who owns what. Say you were making a blog where multiple users could write articles. If you wanted to build an editing feature you'd probably want to restrict users to only be allowed to edit their own articles and nobody else's. In this situation, you'd receive the request, find out who the current user was from it, compare that user to the "author" field on the blog Post model and see if that foreign key matches. If it matches, then the user making the current request is the owner and is allowed to edit. This whole process is secured by the fact that the session tokens are randomly generated hashes, rather than simple ID numbers. A malicious attacker can't simply take the hash and increment the value to try and access adjacent accounts, they'd have to intercept another user's hash entirely. This can be further secured by using SSL certificates so that your connections go over https:// and all traffic is encrypted between the browser and your server.
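A condensed sketch of that ownership pattern inside a Django project; the model and view names below are made up for illustration:

# models.py
from django.conf import settings
from django.db import models

class Post(models.Model):
    author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)
    body = models.TextField()

# views.py
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseForbidden
from django.shortcuts import get_object_or_404, render

@login_required
def edit_post(request, post_id):
    post = get_object_or_404(Post, pk=post_id)
    if post.author != request.user:              # the session tells Django who request.user is
        return HttpResponseForbidden('Not your post')
    return render(request, 'edit.html', {'post': post})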
1
0
0
0
If, for example, I want to show a zero (0) for all users to see, and I want all users to be able to add one (1) to the number, with their identity only shown to superusers. And how do I make sure that each user can only add one time, and of course what are the security requirements that have to be met to prevent unauthorized access to change any of this or to get any information? I understand this is a big topic, but could someone briefly explain for me what parts of programming are involved, and maybe some good books on these topics?
How does django know which user owns what data?
1
1.2
1
0
0
79
44,367,961
2017-06-05T11:28:00.000
0
0
0
1
0
python,redis,celery,estimation
0
44,961,359
0
1
1
true
1
0
I don't think there is a magic way to do this. What I do in my app is just log the execution time for each task and return that as an ETA. If you wanted to get a little more accurate you could also factor in the redis queue size and the task consumption rate.
1
0
0
0
I want to get the eta of a task in Celery each time with a GET request. There is no direct API in Celery to get a task's scheduled time (except inspect() - but that seems very costly to me). How can I manage the eta of a particular task? The downside of storing the eta time in a Django model is that it is not consistent (and I couldn't store it by task_id because I don't know how to get the eta from a task_id). I saw in another question that there is no API, because it somehow depends on the broker, etc. But I hope that there is some solution. So what's the best way to manage task_id to get the eta? Backend and broker are Redis.
Celery best way manage/get eta of task
0
1.2
1
0
0
429
44,376,270
2017-06-05T19:26:00.000
1
0
0
0
0
python,html,hosting,vps
0
46,166,962
0
1
0
false
1
0
you can connect to your server/serverpilot app via SSH/SFTP. Filezilla, codeanywhere are options that allow you to do this.
1
0
0
0
So I just bought a VPS server from Vultr. Then I went on ServerPilot, and installed it on my server. Now I can access, via SFTP, all the files on my server. But how can I access these files from my web-browser via Internet? I mean, when I type in the IP address of my Vultr Server, I land on the ServerPilot page "your app xxx is set up". Alright, but how can I access the other files I uploaded now? Thanks
Beginner VPS Vultr/ServerPilot -> How to change the homepage & access the files I uploaded?
0
0.197375
1
0
1
110
44,377,666
2017-06-05T21:04:00.000
0
0
1
0
0
python,pyinstaller,publisher,windows-defender
0
69,619,137
0
2
0
false
0
0
This is a known False Positive with Windows Defender. This happens to my files as well when tested on a Windows 10 VM, and it happens to others as well. Also, Windows Defender 'Smartscreen' may block any unsigned file even when using another Antivirus, but you should be able to click more information and then continue You can exclude the file from Windows Defender, but the best solution is to use another antivirus, as Windows Defender is not very good anyway. (that is not just based on my experience but off AV tests) I am not sure what other antivirues have the same False Positive, but I know there are a few. You also could test on a VM, where you could disable Windows Defender and Smartscreen, while leaving it enabled on your host system. (VirtualBox is a great free VM software for Windows)
1
7
0
0
I developed a Python code and I converted it to an .exe with pyinstaller but the problem is that there is no publisher so each time a computer runs my program, Windows Defender throws an alert that says that there is no publisher so the program is not sure... Does anyone know how to change the publisher of an .exe from none to something or how to implement Publisher in pyinstaller?
Pyinstaller .exe throws Windows Defender [no publisher]
0
0
1
0
0
12,698
44,387,283
2017-06-06T10:15:00.000
0
0
0
0
0
python,server
0
44,387,505
0
1
0
false
1
0
Is your server Linux or Windows? For Linux, you can add a script to run at runlevel 3 or 5: write a script, put it under the /etc/init.d/ folder, then symlink your script from /etc/rc3.d or /etc/rc5.d so it is started at boot.
1
0
0
0
I'm building a website with some backprocessing with python. I want to know how to execute my python code from the server ? There is no direct link between my HTML pages and my python code. Let's say I want to do an addition with python in the server, how can I do that ? Thanks so much in advence :)
Running python code on server
0
0
1
0
0
105
44,387,854
2017-06-06T10:43:00.000
0
0
0
0
0
python,matrix,ellipse,gaussianblur
0
44,388,121
0
1
0
false
0
0
You need to draw samples from a multi-variate gaussian distribution. The function you can use is numpy.random.multivariate_normal. Your mean vector should be [40, 60]. The covariance matrix C should be 2x2. Regarding its values: C[1, 1] and C[2, 2] decide the width of the ellipse along each axis; choose them so that 3*C[i, i] is almost equal to the width of the ellipse along that axis. The off-diagonal values are zero if you want the ellipse to be aligned with the axes, otherwise put larger values (keep in mind that C[2, 1] == C[1, 2]). However, keep in mind that, since it is a Gaussian distribution, the output values will be close to 0 at distance 3*C[i, i] from the center, but they will never be truly zero.
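An alternative little sketch that evaluates (rather than samples) an axis-aligned 2-D Gaussian on the grid, which gives the 1-at-the-centre, roughly-0-at-the-edge fill the question asks for; the sigmas are chosen so the ellipse border sits at about 3 sigma:

import numpy as np

grid = np.zeros((100, 100))
cy, cx = 40, 60                      # centre of the ellipse
sy, sx = 5.0 / 3, 10.0 / 3           # half-axes 5 and 10, divided by 3 so the edge is ~3 sigma

y, x = np.mgrid[0:100, 0:100]
blob = np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
grid = np.maximum(grid, blob)        # 1 at (40, 60), ~0.01 at the ellipse border, ~0 outside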
1
0
1
0
I have a 100x100 Matrix with Zeros. I want to add a 10x20 ellipsis around a specific point in the Matrix - lets say at position 40,60. The Ellipsis should be filled with values from 0 to 1. (1 in the center - 0 at the edge) - The numbers should be gaussian-distributed. Maybe someone can give me a clue, how to start with this problem..
Create Matrix with gaussian-distributed ellipsis in python
0
0
1
0
0
221
44,424,308
2017-06-07T23:26:00.000
0
1
0
0
0
python,c++,arrays,numpy,swig
0
44,424,756
0
2
0
false
0
1
You'll find that passing things back and forth between languages is much easier if you use a one-dimensional array in which you access elements using, e.g. arr[y*WIDTH+x]. Since you are operating in C++ you can even wrap these arrays in classes with nice operator()(int x, int y) methods for use on the C++ side. In fact, this is the internal representation which Numpy uses for arrays: they are all one-dimensional.
1
0
0
0
C++ part I have a class a with a public variable 2d int array b that I want to print out in python.(The way I want to access it is a.b) I have been able to wrap the most part of the code and I can call most of the functions in class a in python now. So how can I read b in python? How to read it into an numpy array with numpy.i(I find some solution on how to work with a function not variable)? Is there a way I can read any array in the c++ library? Or I have to deal with each of the variables in the interface file. for now b is <Swig Object of type 'int (*)[24]' at 0x02F65158> when I try to use it in python ps: 1. If possible I don't want to modify the cpp part. I'm trying to access a variable, not a function. So don't refer me to links that doesn't really answer my question, thanks.
Reading c++ 2d array in python swig
0
0
1
0
0
569
44,425,362
2017-06-08T01:45:00.000
0
0
1
0
0
python-3.x
0
44,425,421
0
1
0
false
0
0
To get the last digit, you have to divide the number by 10 and get the remainder. For example, to get the last digit of 123, you can do 123 % 10 which results to 3. To remove the last digit, you have to divide the number by 10 and discard the remainder. For example, to remove the last digit of 123, you can do 123 // 10 which results to 12.
1
0
0
0
I'm doing procedural programming and for my final assignment I have to create an application that will allow the user to do the following: Allow the user to enter the customer’s details: name, postcode and loyalty card details Check if the card has expired Check the loyalty card number is valid by: Allowing the user to enter the 8 digits shown on the front of the card Removing the 8th digit and storing it as ‘check_digit’ Reversing the numbers Multiplying the 1st, 3rd, 5th and 7th digits by 2 If the result of the multiplication is greater than 9 then subtract 9 from the result Adding together the resulting 7 digits Checking if the sum of the added digits plus the ‘check_digit’ is divisible by 10 Output whether the loyalty card is valid or not Output customer and loyalty card details. But, how do I go about removing the 'last digit' then storing it as a check_digit? Sorry if this is vague, this is copied directly from my assignment brief.
Assignment 3 in Procedural Programming
0
0
1
0
0
77
44,441,002
2017-06-08T16:24:00.000
0
0
0
0
0
python,machine-learning,scikit-learn,svm
0
44,441,810
0
1
0
false
0
0
Yes, this is mostly a matter of experimentation -- especially as you've told us very little about your data set: separability, linearity, density, connectivity, ... all the characteristics that affect classification algorithms. Try the linear and Gaussian kernels for starters. If linear doesn't work well and Gaussian does, then try the other kernels. Once you've found the best 1 or 2 kernels, then play with the cost and gamma parameters. The cost C is a "slack" parameter: it gives the classifier permission to make a certain proportion of raw classification errors as a trade-off for other benefits: width of the gap, simplicity of the partition function, etc. Gamma controls how far the influence of a single training example reaches when using the Gaussian kernel. I haven't yet had an application that got more than trivial benefit from altering the cost.
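One common way to organise that kernel/C/gamma experimentation is a cross-validated grid search; a sketch with synthetic stand-in data (the grid values are just starting points to widen or narrow):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in for the sparse binary dataset from the question.
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)

param_grid = {'kernel': ['linear', 'rbf'],
              'C': [0.1, 1, 10, 100],
              'gamma': ['auto', 0.001, 0.01, 0.1]}

search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)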
1
1
1
0
I'm trying to use SVM from sklearn for a classification problem. I got a highly sparse dataset with more than 50K rows and binary outputs. The problem is I don't know quite well how to efficiently choose the parameters, mainly the kernel, gamma and C. For the kernels, for example, am I supposed to try all kernels and just keep the one that gives me the most satisfying results, or is there something related to our data that we can see in the first place before choosing the kernel? Same goes for C and gamma. Thanks!
How to choose parameters for svm in sklearn
0
0
1
0
0
726
44,468,527
2017-06-10T01:04:00.000
0
0
1
0
0
python,amazon-web-services
0
44,469,173
0
1
0
true
1
0
Much of what you are asking depends upon your use-case. For example, if you have work continually arriving then you will need capacity continually available. However, if it is more batch-oriented then you could start/stop capacity and even use the new Amazon Batch service that can allocate resources when needed and remove them when jobs are finished. Some things to note: You can change the Instance Type when an instance is stopped. So, your t2.micro instance can be changed to a large instance (eg m4.xlarge) by stopping it, changing the instance type and starting it again. The t2.micro Instance Type is actually extremely powerful when CPU burst credits are available, but it limited in capability when all CPU Credits are consumed. A good machine for bursty workloads, but not continual workloads. Spot instances are great, but please note that they will be terminated if the Spot Price goes above your bid price. Prices vary by region, Availability Zone within a region and instance type, so you could launch instances with a variety of attributes (different AZs, different instance types) that would make it unlikely that you would lose all capacity at the same time. Spot can save considerable costs and is well worth investigating. Take a look at Amazon EMR, which runs Hadoop to provide parallel processing across a cluster of instances. Stop instances when you don't need them. That's the best way to get good value!
1
0
0
0
I am new to Amazon Web Services (AWS) & I am using the free tier t2.micro right now ( 1 CPU and 1 GB memory). Doing some backtesting/ simulation stuff and it seems free tier is quite inadequate. Pretty slow actually. Thus thinking of options which will help me to run my code at a faster speed for few hours. Option 1 : I can 1 buy CPU optimized/ higher Memory instance ( 4 cores and 4 GB RAM for example ) and then make an image of my t2.micro and run my stuff in that new one. It will be expensive though if I keep it running, so I need to "stop" the instance when I am not working ( or nothing is running ) to reduce the cost. Option 2 : I can buy spot instances. I am not sure how to use the CPU and RAM of that spot instance from my existing t2.micro. Can I create a temporary grid/cluster where my Head Node will be running in my t2.micro but compute node will be the spot instance ( higher CPU and RAM ), thus all my calculations, etc will be using the spot instance. My question : Is the Option 2 possible ? I program everything in python and I have all the relevant softwares/python IDEs etc are already installed in my t2.micro instance. Any existing grid/cluster software I can use right now ? I don't know C++, Csharp, Java etc. Know only phython & R so any programming stuff I need to do to build a grid/cluster must use Python :) Thank you in advance.
Cluster with t2.micro instances in AWS
0
1.2
1
0
0
582
44,469,313
2017-06-10T03:50:00.000
0
0
1
0
0
python,pandas,ipython-notebook
0
59,876,018
0
15
0
false
0
0
This will also work, as long as you also strip the thousands separator and go through float first (the sample value $3,092.44 contains both a comma and decimals): dframe['amount'].str.replace(r'[\$,]', '', regex=True).astype(float); cast on to int afterwards if truncation is acceptable.
1
4
1
0
I have a column called amount with holds values that look like this: $3,092.44 when I do dataframe.dtypes() it returns this column as an object how can i convert this column to type int?
Price column object to int in pandas
0
0
1
0
0
12,214
44,487,269
2017-06-11T18:29:00.000
0
0
0
0
0
java,python,stanford-nlp
0
44,512,629
0
1
0
false
0
0
If you are using the command line you can use -outputFormat text to get a human readable version or -outputFormat json to get a json version. In Java code you can use edu.stanford.nlp.pipeline.StanfordCoreNLP.prettyPrint() or edu.stanford.nlp.pipeline.StanfordCoreNLP.jsonPrint() to print out an Annotation.
1
0
1
0
Using the Stanford NLP, I want my text to go through lemmatization and coreference resolution. So for an input.txt: "Stanford is located in California. It is a great University, founded in 1891." I would want the output.txt: "Stanford be located in California. Stanford be a great University, found in 1891." I am also looking to get a table where the first column consists of the name-entities that were recognized in the text, and the second column is the name class they were identified as. Thus, for the example sentence above, it would be something like: 1st Column 2nd Column Stanford Location, Organization California Location Thus, in the table, the name-entities would occur only once. There's nothing I was able to find online about manipulating the default xml output or making direct changes to the input text file using the NLP. Could you give me any tips on how to go about this?
Stanford NLP Output Formatting
0
0
1
0
0
784
44,504,140
2017-06-12T16:16:00.000
0
0
0
0
0
python,padding,cntk
0
45,828,029
0
1
0
false
0
0
There is a new pad operation (in master; will be released with CNTK 2.2) that supports reflect and symmetric padding.
1
1
1
0
In the cntk.layers package we have the option to do zero padding: pad (bool or tuple of bools, defaults to False) – if False, then the filter will be shifted over the “valid” area of input, that is, no value outside the area is used. If pad=True on the other hand, the filter will be applied to all input positions, and positions outside the valid region will be considered containing zero. Use a tuple to specify a per-axis value. But how can I use other types of padding like reflect or symmetric padding? Is it possible to integrate my own padding criterion in the cntk.layers? I'm a beginner in cntk and really grateful for every help.
CNTK & Python: How to do reflect or symmetric padding instead of zero padding?
1
0
1
0
0
177
44,517,122
2017-06-13T09:13:00.000
0
0
0
0
1
python,runtime-error,cntk
1
44,535,536
0
1
0
false
0
0
This line cloneModel.parameters[0] = cloneModel.parameters[0]*4 tries to replace the first parameter with an expression (a CNTK graph) that multiplies the parameter by 4. I don't think that's the intent here. Rather, you want to do the above on the .value attribute of the parameter. Try this instead: cloneModel.parameters[0].value = cloneModel.parameters[0].value*4
1
1
1
0
I have trained a model in CNTK. Then I clone it and change some parameters; when I try to test the quantized model, I get RuntimeError: Block Function 'softplus: -> Unknown': Inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Parameter('alpha', [], []), Constant('Constant70740', [], [])' of the new clone do not match the cloned inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Constant('Constant70740', [], []), Parameter('alpha', [], [])' of the clonee Block Function. I have no idea what this error means or how to fix it. Do you have any ideas? P.S. I clone and edit the model by doing clonedModel = model.clone(cntk.ops.CloneMethod.clone) cloneModel.parameters[0].value = cloneModel.parameters[0].value*4 then when I try to use cloneModel I get that error above.
CNTK: The new clone do not match the cloned inputs of the clonee Block Function
1
0
1
0
0
115
44,526,642
2017-06-13T16:10:00.000
0
0
0
0
0
python-2.7,nonlinear-functions
0
44,526,860
0
1
0
false
0
0
It looks more like a math problem to me here, since you ask "how to start". you know that a function's plot is just a lot of points (x, y) where y=f(x). And I know that for any two pairs of points (not vertically aligned), I have an infinity of second-degree functions (parabolas) going through these two points. they are given by y=ax^2+bx+c You want the parabola to go through your 2 points, so you can substitute x and y for each of the 2 points, that will give you 2 equations (where a, b and c are the unknown) . Then you can add a random point (I would suggest on the y-axis : (0; r) ). This will give you a third equation. With these 3 equations, solve for a, b and c. (in function of r) now, for any value of r, you will have some a, b and c that define a parabola going through your 2 known points. Once you understand how to solve this math problem, the python part is completely independant.
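A small sketch of that recipe using the two points from the question: pick a random y-intercept r, write the three constraints as a linear system in a, b, c and solve it:

import numpy as np

p1, p2 = (3.0, 3.2), (7.0, 4.59)                  # the two given points

for r in np.random.uniform(-5, 5, size=4):        # four different random curves
    A = np.array([[p1[0] ** 2, p1[0], 1.0],
                  [p2[0] ** 2, p2[0], 1.0],
                  [0.0,        0.0,   1.0]])      # third point is (0, r) on the y-axis
    rhs = np.array([p1[1], p2[1], r])
    a, b, c = np.linalg.solve(A, rhs)
    print('y = %.4f*x^2 + %.4f*x + %.4f' % (a, b, c))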
1
0
1
0
I have two given points, (3.0, 3.2) and (7.0, 4.59). My job here is very simple, but I don't even know how to start. I just need to plot 4 nonlinear functions that go through these two points. Has somebody had a similar problem before? How does one even start?
Generate a random nonlinear function going through given points in python
0
0
1
0
0
591
44,528,223
2017-06-13T17:43:00.000
3
0
1
0
0
python,python-2.7,python-3.x,pip,packages
0
44,571,919
1
1
0
false
0
0
pip3 install and python3 -m pip install — both work perfectly and don't have any impact on Python 2. You can have as many Pythons in your system as you want; I for one have Python 2.7, 3.4, 3.5 and 3.6. To distinguish different versions of pip I use versioned names: pip3.4 install. And of course I use virtual environments and virtualenvwrapper quite intensively.
1
2
0
0
I have been using Python 2.7 for a while now and installing packages using pip install without any issue. I just started using Python 3 for a certain project and realized how confusing having different versions of Python can get. I have Fedora 25, where the default Python version is 2.7.13 and the default Python 3 version is 3.5.3, and I want to be able to use both Python 2.7 and Python 3. My general question is: what are the best practices when installing packages for both Python 2 and Python 3 on one machine? As I mentioned, pip install works fine for Python 2.7, but what about Python 3? I can: use pip3 install, or use python3 -m pip install. Which one should I use, and how does it affect the Python 2 version of the module? pip3 is not installed on Fedora 25, which raises a new question: how should I install it? As I understand it, I can: use dnf install python3-pip (it is unclear whether that actually works when pip for Python 2.7 is installed), or use python3 get-pip.py. Finally, would it be a good idea to create separate Python 2 and Python 3 virtual environments to address this? From what I have read online there does not seem to be a clear consensus on these questions; I hope this thread will clarify them.
Package management for coexisting Python 2 and 3
0
0.53705
1
0
0
170
44,547,072
2017-06-14T14:04:00.000
0
1
1
0
0
python,eclipse,github,pydev,egit
0
44,783,503
0
1
0
false
0
0
In git, you always work with the repo as a whole (even if you see only a part of it in Eclipse). So, to do what you want, you have to create a new repo, copy over the sources you want, and push from there (there are ways to do that in git that preserve the history too, if that's important to you). You might also want to take a look at git submodules...
1
0
0
0
So I have a few packages that I have made and want to share with my friends, and I want to put them in separate GitHub repositories. I know how to make a project in Eclipse, I already have my packages in the project, and I have also cloned the empty GitHub repository to my local computer. Now, when I connect the project to the local repository and push it to GitHub, it copies the complete project into the repository, but I want only the packages to be copied. That is, right now it looks like githubrepository/pythonproject/pythonpackage, but I want it to be githubrepository/pythonpackage. Can someone suggest a link or some ways to solve this? Am I making a mistake?
How do I properly share my Python Packages using Eclipse+pydev+egit?
0
0
1
0
0
21
44,554,135
2017-06-14T20:25:00.000
0
0
0
0
0
python
0
44,554,231
0
1
0
false
0
0
You can use Pandas to import a CSV file with the pd.read_csv function.
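A minimal sketch of that workflow, continued through the autocorrelation step; the file name is a placeholder for the asker's actual spreadsheet, and pd.read_excel can be swapped in for a real .xlsx file.

```python
import numpy as np
import pandas as pd

# Placeholder file name; use pd.read_excel("values.xlsx", header=None) for an actual Excel file.
df = pd.read_csv("values.csv", header=None)
x = df[0].values  # the single column as a NumPy array

autocorr = np.correlate(x, x, mode='full')
print(autocorr)
```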
1
0
1
0
I have a one-column Excel file. I want to import all of its values into a variable x (something like x=[1,2,3,4.5,-6.....]) and then, after importing numpy, run numpy.correlate(x,x,mode='full') on it to get the autocorrelation. When I manually enter x=[1,2,3...], it works fine, but when I try to copy-paste all the values into x=[], it gives me a NameError: name 'NO' is not defined. Can someone tell me how to go about doing this?
Import a column from excel into python and run autocorrelation on it
0
0
1
0
0
57