Column                              Type           Min        Max
------                              ----           ---        ---
Q_Id                                int64          337        49.3M
CreationDate                        stringlengths  23         23
Users Score                         int64          -42        1.15k
Other                               int64          0          1
Python Basics and Environment       int64          0          1
System Administration and DevOps    int64          0          1
Tags                                stringlengths  6          105
A_Id                                int64          518        72.5M
AnswerCount                         int64          1          64
is_accepted                         bool           2 classes
Web Development                     int64          0          1
GUI and Desktop Applications        int64          0          1
Answer                              stringlengths  6          11.6k
Available Count                     int64          1          31
Q_Score                             int64          0          6.79k
Data Science and Machine Learning   int64          0          1
Question                            stringlengths  15         29k
Title                               stringlengths  11         150
Score                               float64        -1         1.2
Database and SQL                    int64          0          1
Networking and APIs                 int64          0          1
ViewCount                           int64          8          6.81M

(For stringlengths columns, Min and Max are string lengths.)
46,898,800
2017-10-23T21:26:00.000
0
0
0
1
python,google-cloud-datastore,google-cloud-dataflow
46,920,522
1
false
0
0
You could omit the limit, filter by something else (date?), and then do a top-N on Dataflow instead.
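A rough sketch of that shape with the Beam Python SDK; the import path varies by SDK version, the query object, timestamp property, and N are placeholders, and Top.Of's key argument assumes a reasonably recent SDK:

```python
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1new.datastoreio import ReadFromDatastore

N = 1000          # hypothetical number of entities the job actually needs
query = ...       # the Datastore query, built WITHOUT a limit so it can split

with beam.Pipeline() as p:
    _ = (
        p
        | "Read" >> ReadFromDatastore(query)
        | "TopN" >> beam.combiners.Top.Of(
            N, key=lambda e: e.properties["created"])  # assumed timestamp property
        | "Emit" >> beam.FlatMap(lambda top: top)      # Top.Of yields a single list
    )
```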
1
0
0
I realized that while using the ReadFromDatastore PTransform, if the query has a limit set, the query won't be split across workers. The documentation for the Python class says: "... when the query is configured with a limit ..., then all the returned results will be read by a single worker in order to ensure correct data. Since data is read from a single worker, this could have significant impact on the performance of the job." In my case, I need to specify the limit, because there are many more entities matching the query in Datastore than I need for this job. However, the performance hit is severe enough that specifying a limit doesn't give me results any faster (or fast enough). What can I do to somehow finish the job and flush the pipeline when I have processed a certain number of entities without getting a performance hit?
Datastore query splitting behaviour when specifying limit on Dataflow pipeline
0
0
0
177
46,898,834
2017-10-23T21:28:00.000
0
0
1
0
python,sql,json,database,data-warehouse
46,899,529
1
false
0
0
You should be able to use json.dumps(json_value) to convert your JSON object into a JSON string that can be inserted into an SQL database.
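As an illustration, a minimal standard-library sketch (sqlite3 stands in for whatever SQL database is used; table and file names are hypothetical):

```python
import json
import sqlite3

record = {"id": 7, "name": "widget", "tags": ["a", "b"]}  # sample API payload

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS raw_json (id INTEGER, payload TEXT)")
# json.dumps serializes the dict to a string that fits in a TEXT column
conn.execute("INSERT INTO raw_json VALUES (?, ?)",
             (record["id"], json.dumps(record)))
conn.commit()
conn.close()
```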
1
1
0
I am building a warehouse consisting of data that's found from a public facing API. In order to store & analyze the data, I'd like to save the JSON files I'm receiving into a structured SQL database. Meaning, all the JSON contents shouldn't be contained in 1 column. The contents should be parsed out and stored in various other tables in a relational database. From a process standpoint, I need to do the following: Call API Receive JSON Parse JSON file Insert/Update table(s) in a SQL database (This process will be repeated hundreds and hundreds of times) Is there a best practice to accomplish this - from either a process or resource standpoint? I'd like to do this in Python if possible. Thanks.
Save JSON file into structured database with Python
0
1
0
1,594
46,898,992
2017-10-23T21:42:00.000
0
0
1
0
python,tkinter
46,899,067
2
true
0
1
The function should automatically stop when it is finished running. Using return, though, will immediately exit the current function, perhaps before it gets through all of its code, if that's what you mean.
1
0
0
I am trying to build a gambling dice game for fun using Python's Tkinter in Python 3. The error I am having is after the money is taken away from your bank account (it does this in a different function) I want it to go back to the mainloop. So basically I want to exit a function to get back into the main code (which isn't in a function). Any ideas on how?
Exit Function Into Main Code
1.2
0
0
84
46,902,362
2017-10-24T04:43:00.000
1
0
1
0
python,python-3.x,machine-learning,ipython,catboost
51,193,804
1
true
0
0
Yes, you can set verbose=int in CatBoost starting from version 0.7
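A toy sketch of that (assumes catboost >= 0.7, as the answer says; the data here is made up):

```python
from catboost import CatBoostClassifier

X_train = [[0, 1], [1, 0], [1, 1], [0, 0]]  # toy features
y_train = [1, 0, 1, 0]

model = CatBoostClassifier(iterations=200)
# verbose=50 prints the loss every 50 iterations instead of every iteration
model.fit(X_train, y_train, verbose=50)
```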
1
1
0
I'm using python's CatBoostClassifier(). Can I change its verbose to an int? The output is currently the measured loss functions to stdout, every single iteration, turning this output exhaustively long to analyse. I'd like to see this output in 50-iterations intervals, like verbose=50 (verbose=int). Is this possible? If so, how?
Can Catboost's verbose be an int?
1.2
0
0
255
46,909,905
2017-10-24T11:54:00.000
8
0
0
0
python,django,server,web
46,910,029
2
true
1
0
In settings.py, write ALLOWED_HOSTS = ['*'] and run python manage.py runserver 0.0.0.0:8000. Note: you can use any port instead of 8000.
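In code form (a development-only sketch; ALLOWED_HOSTS = ['*'] accepts any Host header, so don't use it in production):

```python
# settings.py
ALLOWED_HOSTS = ['*']

# then start the server bound to all interfaces (any port works):
#   python manage.py runserver 0.0.0.0:8000
```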
1
3
0
Now that I have created a website, I want users within a local network to be able to connect to it. However, my website can currently only be accessed from localhost (my PC); others are not able to connect when typing my IP address, for example xxx.xxx.xxx.xxx:8000, into their browser. I launch the service on localhost using # python manage.py runserver. Is there a way/command to allow others to connect to my website? Note: I have tried # python manage.py runserver 0.0.0.0:8000 as well, which allows all incoming connections, but it didn't work.
How to allow others to connect to my Django website?
1.2
0
1
3,829
46,911,160
2017-10-24T12:53:00.000
0
0
0
0
python,opencv,threshold,orb
46,911,651
2
false
0
0
The python docstring of ORB_create actually contains information about the parameter nfeatures, which is the maximum number of features to return. Could that solve your problem?
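A minimal sketch of passing that parameter (the file name is hypothetical):

```python
import cv2

img = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
orb = cv2.ORB_create(nfeatures=200)  # cap the number of keypoints returned
keypoints = orb.detect(img, None)
print(len(keypoints))
```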
2
0
1
I am new to OpenCV and am trying to extract keypoints of a gesture image with the ORB algorithm in the Python interface. The input image is binary and has many curvatures, so ORB gives too many points as keypoints (which are actually not). I am trying to increase the threshold of the ORB algorithm so that the unnecessary points don't get detected. I have searched for ORB algorithms and haven't found any use of a threshold except in the C++ function description. So my question is: what are the input parameters for the ORB detection algorithm, and what is the actual syntax in Python? Thanks in advance.
orb opencv variable inputs
0
0
0
258
46,911,160
2017-10-24T12:53:00.000
0
0
0
0
python,opencv,threshold,orb
47,732,243
2
false
0
0
After looking at the ORB() function description in the OpenCV C++ docs, I realized that the input parameters can be passed into the function in Python as nfeatures=200, mask=img, etc. (not sure about C++ though).
2
0
1
I am new to OpenCV and am trying to extract keypoints of a gesture image with the ORB algorithm in the Python interface. The input image is binary and has many curvatures, so ORB gives too many points as keypoints (which are actually not). I am trying to increase the threshold of the ORB algorithm so that the unnecessary points don't get detected. I have searched for ORB algorithms and haven't found any use of a threshold except in the C++ function description. So my question is: what are the input parameters for the ORB detection algorithm, and what is the actual syntax in Python? Thanks in advance.
orb opencv variable inputs
0
0
0
258
46,915,455
2017-10-24T16:20:00.000
-1
1
1
1
python,centos,rpm,yum
63,481,093
2
false
0
0
To install Python on CentOS: sudo yum install python2 or sudo yum install python3 (select the version as per your requirement). To uninstall: sudo yum remove python2 or sudo yum remove python3. To check the version of python3 (which you installed): python3 --version. To check the version of python2 (which you installed): python2 --version.
1
3
0
Today I messed up the Python versions on my CentOS machine; even yum cannot work properly. I made the mistake of removing the default /usr/bin/python, which led to this situation. How can I get back a clean Python environment? I thought removing everything and reinstalling Python might work, but I do not know how to do it. I wish somebody could help!
Remove cleanly and reinstall python on CentOS
-0.099668
0
0
19,404
46,916,293
2017-10-24T17:10:00.000
1
0
0
0
python,pointers,variables,tkinter,parameters
46,916,546
2
false
0
1
No, you cannot do what you want. You will need to call the configure method of every widget that uses that color.
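A small sketch of that pattern: keep references to the widgets and re-configure each one when the color changes (names are illustrative):

```python
import tkinter as tk

PRIMARY_COLOR = "blue"

root = tk.Tk()
labels = [tk.Label(root, text="label %d" % i, fg=PRIMARY_COLOR) for i in range(3)]
for lbl in labels:
    lbl.pack()

def set_primary_color(color):
    # No live "pointer" exists, so every widget using the color is re-configured.
    for lbl in labels:
        lbl.configure(fg=color)

set_primary_color("red")
root.mainloop()
```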
1
1
0
If I create a tkinter.Label with the parameter fg=PRIMARY_COLOR and then .pack() it, changing the value of the PRIMARY_COLOR variable and calling the widget's .update() method will not change the foreground color. I know why this is happening, but is there some way to make the widget change its foreground color when the PRIMARY_COLOR variable changes? Can I make some kind of "pointer"?
Update tkinter widget parameters with "pointer"
0.099668
0
0
380
46,916,554
2017-10-24T17:26:00.000
1
0
0
0
python-3.x,tensorflow,keras
46,918,492
1
true
0
0
If the x coordinates are the same for all plots, you could (and in fact should) ignore them, because in that case they do not introduce any additional information. Using them will only lead to a more complex neural network, worse convergence, and as a result longer training time and degraded performance. About the second question: it is not necessary. During training the neural network will automatically identify which features are the most important.
1
1
1
Currently I'm trying to make Keras binary classify a set of (x,y) plots. As a newbie, I can't figure out the proper way to give a correct input, since I've got these plots with app 3400 pairs each one and a set of 8 aditional features (local minimae locations) for every plot. What I tried is to give keras a 3400 + 3400 + 8 input layer, but it just feels wrong to do, and so far isn't making any progress. As x variable is almost a correlative order, ¿should I ignore it? ¿Is it possible to ask keras to distinguish: "Hey these 3400 numbers are a plot, and these other 8 are some features about it"?
Keras. Correct way to give a (x,y) plot + features as input to a NN
1.2
0
0
128
46,917,025
2017-10-24T17:54:00.000
2
0
1
0
python,pycharm
46,917,203
3
true
0
0
You can, assuming you have PyCharm set to automatically produce spaces instead of tabs (which is the default, as far as I know). In the menu: Code -> Reformat Code, with the top-level folder selected.
2
1
0
If I open an existing project into PyCharm that used tabs instead of spaces can I format all of these files within PyCharm or do I need to manually update all of them?
Can I automatically format files to use spaces instead of tabs in PyCharm?
1.2
0
0
672
46,917,025
2017-10-24T17:54:00.000
0
0
1
0
python,pycharm
46,917,198
3
false
0
0
Try: Edit -> Convert Indents -> To Spaces. There are also some quick ways to do this from a UNIX shell.
2
1
0
If I open an existing project into PyCharm that used tabs instead of spaces can I format all of these files within PyCharm or do I need to manually update all of them?
Can I automatically format files to use spaces instead of tabs in PyCharm?
0
0
0
672
46,917,721
2017-10-24T18:36:00.000
0
0
0
0
python,api,web-scraping
46,917,799
1
true
0
0
A web API is nothing but an HTTP layer over your custom logic so that requests can be served the HTTP way (GET, PUT, POST, DELETE). Now, the question is, how? The easiest way is to use already available packages called "web frameworks", which Python has in abundance. The easiest one to implement is probably Flask. For a more robust application, you can use Django as well.
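A minimal Flask sketch of such an API; run_scraper is a hypothetical stand-in for the existing login-and-scrape logic:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_scraper(username, password):
    # placeholder for the existing scraping script
    return {"status": "ok", "user": username}

@app.route("/scrape", methods=["POST"])
def scrape():
    payload = request.get_json()
    data = run_scraper(payload["username"], payload["password"])
    return jsonify(data)

if __name__ == "__main__":
    app.run()
```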
1
0
0
I've already built a Python script that scrapes some data from a website that requires a login. My question is: how can I turn this script into an API? For example, I send the API the username, password and data required, and it returns the data needed.
Web Scraping Api with Python
1.2
0
1
342
46,918,646
2017-10-24T19:32:00.000
0
0
1
1
python,serial-port,pycharm,signals,signal-processing
46,918,772
3
false
0
0
What part of the diagnostic message did you find unclear? Did you consider ensuring writable self-owned files by doing sudo chown -R isozyesil /Users/isozyesil/Library/Caches/pip ? Did you consider sudo pip install bluepy ?
1
2
0
I am trying to install bluepy 1.0.5. However, I get receiving error below. Any idea how can i solve it? (I am using Mac OS X El Capitan) 40:449: execution error: The directory '/Users/isozyesil/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/isozyesil/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pycharm-packaging669/bluepy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pip-djih0T-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pycharm-packaging669/bluepy/ (1)
Bluepy Installation Error
0
0
0
2,751
46,919,360
2017-10-24T20:20:00.000
0
0
0
0
python,pandas
46,919,591
2
false
0
0
Well, I am not sure how your data looks, so I don't know if this answer will help, but from what you said you are trying to find the month with the highest sales. So, given the product, you will probably want to use a pandas groupby on the month, and you will have a DataFrame grouped by month. Imagine a DataFrame named Data: mean_buy = Data.groupby(months).mean() with months = np.array([1,2,3,4,5,6,7,8,9,10,11,12]*number_of_years).
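A small self-contained variant of that idea (column names are assumed): group by the month, take the mean, and idxmax gives the strongest month:

```python
import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2016-01-05", "2016-06-20", "2017-06-11", "2017-07-02"]),
    "demand": [10, 40, 55, 12],
})

monthly_mean = sales.groupby(sales["date"].dt.month)["demand"].mean()
print(monthly_mean.idxmax())  # month number with the highest average demand
```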
2
0
1
I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest? I am not looking for a complete solution but rather some ideas, how to approach this problem. I already looked into seasonal_decomposition to get some sort of seasonality indication but I feel that this might be a bit too complicated of an approach.
How to use Pandas to find the strongest month of sale for a product
0
0
0
324
46,919,360
2017-10-24T20:20:00.000
0
0
0
0
python,pandas
46,919,474
2
false
0
0
I don't have 50 reputation to add a comment, hence using the answer section. Some insight into your required solution would be great, because your requirement isn't clear to me. BTW, coming to the idea: if you can split and load the time series data as timestamp and demand, then you can easily do it using regular Python methods like max, and then get the timestamp value where the max demand occurred.
2
0
1
I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest? I am not looking for a complete solution but rather some ideas, how to approach this problem. I already looked into seasonal_decomposition to get some sort of seasonality indication but I feel that this might be a bit too complicated of an approach.
How to use Pandas to find the strongest month of sale for a product
0
0
0
324
46,919,968
2017-10-24T21:03:00.000
0
1
0
0
python,flask,pexpect
62,195,793
1
false
1
0
You probably just need to replace ssh with its full path, e.g. /usr/bin/ssh. You can find the full path with which ssh.
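A hedged sketch (host, prompt and password are hypothetical); shutil.which resolves the same path that which ssh prints:

```python
import shutil
import pexpect

ssh_path = shutil.which("ssh") or "/usr/bin/ssh"  # absolute path to the binary
child = pexpect.spawn("%s admin@192.0.2.1" % ssh_path)  # hypothetical device
child.expect("assword:")   # matches "Password:" / "password:"
child.sendline("secret")
```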
1
1
0
I am writing a tool for internal use at work. A user enters a router or switch IP address, username and password into a web form. The app then uses pexpect to SSH into the device, downloads a configuration and tests that the configuration complies with various standards by running true/false tests (e.g. hostname is set). Leaving aside whether this is a good idea or not, my problem is that when I run the program under the Flask development program it works fine. When I set it up to run under WSGI it fails at the SSH portion with the error: pexpect.exceptions.ExceptionPexpect: The command was not found or was not executable: ssh. I tried uWSGI and Unicorn and played with the number of workers etc. to no avail. I suspect this is a setuid root thing. Google searches do not point to a solution. Can anyone lead me to a fix? If pexpect will not work, I may give up and require the user to upload a config file they save themselves but I am frustrated that this works on the flask development server but not a production server.
Using python flask and pexpect works on flask development server fails with WSGI
0
0
0
256
46,920,188
2017-10-24T21:19:00.000
0
0
0
1
python,django,attributeerror
64,591,309
4
false
1
0
Update your Python and Django versions and that will work perfectly.
2
3
0
I'm trying a Django tutorial. For some reason my existing superuser got deleted; creating that went fine, but I can't make another one. This also happens when I try to use pip. I didn't change anything in the libraries so not sure why this happens now but didn't before. On windows 7 (Python 3.6.3 and Django 1.11). I've seen similar but not the exact same problems for Windows. Still I checked the file and there seems to be a PathLike class. I've also tried to repair my Python installation but it didn't help. any ideas?
manage.py createsuperuser: AttributeError: module 'os' has no attribute 'PathLike'
0
0
0
3,463
46,920,188
2017-10-24T21:19:00.000
0
0
0
1
python,django,attributeerror
46,932,401
4
false
1
0
Seems like you may have modified the settings.py file. But as MrName mentioned, you need to share the full stack trace.
2
3
0
I'm trying a Django tutorial. For some reason my existing superuser got deleted; creating that went fine, but I can't make another one. This also happens when I try to use pip. I didn't change anything in the libraries so not sure why this happens now but didn't before. On windows 7 (Python 3.6.3 and Django 1.11). I've seen similar but not the exact same problems for Windows. Still I checked the file and there seems to be a PathLike class. I've also tried to repair my Python installation but it didn't help. any ideas?
manage.py createsuperuser: AttributeError: module 'os' has no attribute 'PathLike'
0
0
0
3,463
46,921,675
2017-10-24T23:45:00.000
0
0
1
0
windows,python-3.x,google-chrome,browser
46,921,853
1
true
0
0
By default, browsers (Chrome and most others) only run JavaScript code, not Python code. Python code runs in an interpreter. There are some third-party websites that let you type in some code, but I don't think any of them are great. If you tried one of them, please provide the website, the code you used, and the error you got. 'An error message' is never useful; please detail the particular error you are getting, and any traceback when available.
1
0
0
I have seen other programmers running their code in chrome and I want to do the same with my python 3 programs, but when I follow tutorials I always get an error message.
running Python 3 in chrome on Windows 8 computer
1.2
0
0
13
46,922,483
2017-10-25T01:25:00.000
0
0
1
0
python,pycharm
46,971,673
1
true
0
0
WinAppDbg is only for Python 2.x, it does not work on Python 3.x. Honestly, I had no idea it would even let you import it. All those import errors are happening not because of missing dependencies (also, no idea there were similarly named modules in pip), they are submodules of WinAppDbg itself. Since Python 3 has a different syntax to specify those, it tries to load them as external modules instead. I suppose you could fix that in the sources by prepending a dot before every submodule import, but I'm guessing more stuff would break down the road (string handling for example is radically different and that would affect the ctypes layer). TL;DR: use Python 2.x.
1
0
0
Whenever I try to just import winappdbg it gives me an error ModuleNotFoundError: No module named 'breakpoint'. So, I tried installing breakpoint and that gives me another error ModuleNotFoundError: No module named 'ConfigParser' and I've installed configparser several times and still get the error. (Can't find capital ConfigParser) I'm using Windows 10/PyCharm Community Edition 2017.2.3/python 3.6.3
Importing winappdbg gives ModuleNotFoundError for breakpoint in PyCharm?
1.2
0
0
977
46,922,950
2017-10-25T02:25:00.000
14
0
1
1
python,airflow,apache-airflow
46,962,458
2
true
0
0
We make heavy use of Airflow, and we use VMs running Linux to get it running. We have Windows machines, but have to use VMs or mount drives on Linux/Mac boxes to get it to work. As far as I know it's not even on the roadmap to have Airflow run on Windows. So, long story short: no, even as of October 2017 Airflow runs only on Unix-based systems (it uses some Python libraries underneath that only work on Unix), and it's unlikely that it will support Windows anytime soon.
1
3
0
I have been researching for a few hours now but I cannot confirm whether, as of October 2017, you can run Airflow on Windows. I have installed it using the Python package ("pip install airflow"), but I cannot initialize it or even see the version, which leads me to assume that it cannot run on Windows.
Can I run Airflow on Windows?
1.2
0
0
11,045
46,926,809
2017-10-25T07:49:00.000
0
0
0
0
python,tensorflow
47,045,367
4
false
0
0
tf.argmax is not differentiable because it returns an integer index. tf.reduce_max and tf.maximum are differentiable.
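One common workaround, not necessarily the right fit for this particular loss, is a "soft-argmax": a softmax-weighted average of the index positions, which is differentiable. A sketch:

```python
import tensorflow as tf

scores = tf.constant([[1.0, 3.0, 2.0]])
positions = tf.range(3, dtype=tf.float32)       # [0, 1, 2]
weights = tf.nn.softmax(scores * 10.0)          # temperature pushes toward one-hot
soft_index = tf.reduce_sum(weights * positions, axis=1)  # near argmax, differentiable
```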
1
18
1
I've written a custom loss function for my neural network but it can't compute any gradients. I think it is because I need the index of the highest value and am therefore using argmax to get this index. As argmax is not differentiable, I need to get around this, but I don't know how it is possible. Can anyone help?
Getting around tf.argmax which is not differentiable
0
0
0
10,784
46,927,517
2017-10-25T08:26:00.000
0
1
0
1
python,multithreading,server,uwsgi
47,568,008
1
false
1
0
It has been solved. The point is that you should create a separate connection for each completely separate query, to avoid losing data during each query's execution.
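A sketch of what that means in practice with pymysql (credentials are placeholders): open a fresh connection inside the request handler instead of sharing one connection across threads:

```python
import pymysql

def fetch_description(entry_id):
    # one connection per request/thread; sharing a single connection across
    # threads is what produces "Packet sequence number wrong" errors
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", db="mydb")  # placeholder credentials
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM info WHERE id = %s", (entry_id,))
            return cursor.fetchall()
    finally:
        conn.close()
```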
1
0
0
I am running a uwsgi application on my Linux Mint. It does some work with a database and shows it on my localhost. I run it on the IP 127.0.0.1 and port 8080. After that I want to test its performance with ab (Apache Benchmark). When I run the app with the command uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi and test it, it works correctly but slowly, so I want to run the app with more than one thread to speed it up. So I use the --threads option, for example uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi --threads 8. But when I run ab to test it, after 2 or 3 requests my application stops with some errors and I don't know how to fix it. Every time I run it, the type of error is different. Some of the errors look like these:

Traceback (most recent call last):
  ...
(2014, 'Command Out of Sync')

Traceback (most recent call last):
  File "./wsgi.py", line 13, in application
    return show_description(id)
  File "./wsgi.py", line 53, in show_description
    cursor.execute("select * from info where id = %s;" %id)
  File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
    result = self._query(query)
  File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result
('Packet sequence number wrong - got 1 expected 2',)

Traceback (most recent call last):
  File "./wsgi.py", line 13, in application
    return show_description(id)
  File "./wsgi.py", line 52, in show_description
    cursor.execute('UPDATE info SET views = views+1 WHERE id = %s;', id)
  File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
    result = self._query(query)
('Packet sequence number wrong - got 1 expected 2',)

Please help me run my uwsgi application with more than one thread safely. Any help will be welcome.
uwsgi application stops with error when running it with multi thread
0
1
0
238
46,929,145
2017-10-25T09:46:00.000
1
0
0
0
python,tensorflow,gpu,cpu
48,502,090
1
false
0
0
Do any of your networks share operators? E.g. do they use variables with the same name in the same variable_scope, which is set to variable_scope(reuse=True)? Then multiple nets will try to reuse the same underlying Tensor structures. Also check whether tf.ConfigProto.allow_soft_placement is set to True or False in your tf.Session. If True, you can't be guaranteed that the device placement will actually be executed the way you intended in your code.
1
0
1
I need to train a very large number of neural nets using TensorFlow with Python. My neural nets (MLPs) range from very small ones (~2 hidden layers with ~30 neurons each) to large ones (3-4 layers with >500 neurons each). I am able to run all of them sequentially on my GPU, which is fine, but my CPU is almost idling. Additionally, I found out that my CPU is quicker than the GPU for my very small nets (I assume because of the GPU overhead etc...). That's why I want to use both my CPU and my GPU in parallel to train my nets. The CPU should process the smaller networks up toward the larger ones, and my GPU should process from the larger toward the smaller ones, until they meet somewhere in the middle... I thought this was a good idea :-) So I just simply start my consumers twice in different processes, one with device = CPU, the other with device = GPU. Both start and consume the first 2 nets as expected. But then the GPU consumer throws an exception that its tensor is accessed/violated by another process on the CPU(!), which I find weird, because it is supposed to run on the GPU... Can anybody help me fully segregate my two processes?
Running Python Tensorflow on CPU and GPU in parallel
0.197375
0
0
1,450
46,940,171
2017-10-25T19:01:00.000
2
0
1
0
python,json,object-literal,object-notation
46,940,207
1
false
0
0
JavaScript literals are not called JSON. JSON derived its name and syntax from JavaScript, but they’re not the same thing. Use “Python literals”.
1
2
0
I have a program that can output results either as JSON or Python data structure literals. I am wondering how to succinctly name the latter option.
Is there a name for Python literals? The way JavaScript literals are called JSON?
0.379949
0
0
58
46,940,780
2017-10-25T19:37:00.000
0
1
0
0
python,telegram-bot,python-telegram-bot
46,947,479
1
false
0
0
InlineQueryResultAudio only accepts links, while InlineQueryResultCachedAudio only accepts a file_id. What you can do is post the files to your own server or upload them elsewhere to use the former, or use sendAudio to get the file_id and use the latter.
1
0
0
Okay, I can send audio with some URL in inline mode. But how can I send local audio from a directory? The Telegram Bot API returns me this: A request to the Telegram API was unsuccessful. The server returned HTTP 400 Bad Request. Response body: [b'{"ok":false,"error_code":400,"description":"Bad Request: CONTENT_URL_INVALID"}']
Telegram Bot API InlineQueryResultAudio
0
0
1
505
46,941,115
2017-10-25T20:00:00.000
1
0
0
0
python,django
46,941,490
2
true
1
0
Django admin is intended for administration purposes. For all intents and purposes it is a direct interface to your database. While I have seen some people building customer facing interfaces using admin, this is most definitely not the way to make a general Django web application. You should define views for your models. You can use built-in APIs to login and authenticate users. You should most likely restrict access to admin to internal users only. As for templates, the modern way of doing things is to dynamically fetch data using an API and do all the UI logic in Javascript. Django can be used very well to provide an API to a frontend. Look into Django REST Framework. The basic idea is to write serializers for your models and have view functions serve the serialized data to the front end. You could go the old school way and render your pages using templates of course. In that case your views would render templates using data provided by your models.
2
1
0
I am a total noob with Django, I come from the PHP world and I am used to doing things differently. I'm building an app and I want to change the way the backend looks, I want to use Bootstrap 4 and add a lot of custom stuff e.g. permission based admin views, and I was wondering what is the best practice, or how do more experienced django devs go about it? Do they override all the django.contrib.admin templates, or do they build custom templates and login/register next to it, and use the django.contrib.admin only for the superuser? What is the django way?
Overriding Django admin vs creating new templates/views
1.2
0
0
451
46,941,115
2017-10-25T20:00:00.000
0
0
0
0
python,django
46,941,275
2
false
1
0
Yes. The admin pages are actually for administering the webpage. For user login and registration, you create the templates. However, if you want your backend to look different, you can tweak the templates for the admin pages, and the admin login page as well. And you can also have permission-based admin views. It's okay to override the defaults as long as you know what you're doing. Hope that helped.
2
1
0
I am a total noob with Django, I come from the PHP world and I am used to doing things differently. I'm building an app and I want to change the way the backend looks, I want to use Bootstrap 4 and add a lot of custom stuff e.g. permission based admin views, and I was wondering what is the best practice, or how do more experienced django devs go about it? Do they override all the django.contrib.admin templates, or do they build custom templates and login/register next to it, and use the django.contrib.admin only for the superuser? What is the django way?
Overriding Django admin vs creating new templates/views
0
0
0
451
46,945,113
2017-10-26T02:53:00.000
0
0
1
0
python,sorting
46,945,235
3
false
0
0
You can transform the items in a using a key. That key is a function of each element of a. Try this: a = sorted(a, key=lambda i: b[i]) Note that if any value in a is outside the range of b, this would fail and raise an IndexError: list index out of range. Based on your description, however, you want the list to be sorted in reverse order, so: a = sorted(a, key=lambda i: b[i], reverse=True)
2
0
1
Suppose there is a list a whose element a[i] stores an index value v into another list b, i.e. b[v]. I want to sort list a according to the values of list b. For example, given a=[0,2,3,1] and b=[7,10,8,6], I want list a to become a=[1,2,0,3]. Is there some concise way to sort list a?
If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value
0
0
0
48
46,945,113
2017-10-26T02:53:00.000
-1
0
1
0
python,sorting
46,945,183
3
false
0
0
A simple solution would be to sort list b first, get the indexes of list b after sorting, and finally take the values of list a in the order of those indexes.
2
0
1
Suppose there is a list a whose element a[i] stores an index value v into another list b, i.e. b[v]. I want to sort list a according to the values of list b. For example, given a=[0,2,3,1] and b=[7,10,8,6], I want list a to become a=[1,2,0,3]. Is there some concise way to sort list a?
If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value
-0.066568
0
0
48
46,949,681
2017-10-26T08:47:00.000
0
0
0
0
python,http
46,949,933
1
true
0
0
This is not about caching. You don’t close the sockets after sending a response nor do you tell the browser that you don’t support multiple requests. So the browser will assume it can request again with the same connection. Close the connection and if you claim to support HTTP 1.1 add an appropriate Connection header. Also don’t bind/close the server socket after each request. There’s no need and it will just make things not work. Bind on startup, close on shutdown. The code might also fail on data that is non-ASCII since you tell the length of the string and not the actual data you send. These may be different.
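A minimal sketch of those points: bind once at startup, compute Content-Length from the encoded bytes, send Connection: close, and close the client socket:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))   # bind once at startup, not per request
srv.listen(5)

while True:
    conn, _ = srv.accept()
    conn.recv(4096)                       # read (and here, ignore) the request
    body = "hello".encode("utf-8")
    header = ("HTTP/1.1 200 OK\r\n"
              "Content-Length: %d\r\n"    # length of the bytes, not the str
              "Connection: close\r\n\r\n" % len(body))
    conn.sendall(header.encode("ascii") + body)
    conn.close()                          # tell the browser we're done
```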
1
0
0
I'm trying to create a (very minimalist) web server with Python using the socket module. I have a problem with, I think, web browser caching. Explanation: When I load a page, the first time work. It will work 2-3 other time at the beginning, then it will load just one time every two requests made by my browser (I use Firefox). I press F5, it works, I re-press F5, it loads nothing infinitely, I re-press F5 and it works. I looked at my python console, and it seems that Firefox doesn't send any request when the loading of the page fails. When I press Ctrl + F5, it ALWAYS work, Firefox send a request each time and my webserver send it the page. I tried adding HTTP headers to prevent caching (Cache-Control, Pragma, Expires), but it still works one in two. I tested with Internet Explorer, and it works better, but it sometime fails (on 4-5 requests, it will fail only one time). So, my question is: Why Firefox and IE sometimes doesn't send request and still seems to wait for something? What is the web server supposed to do? Thanks.
Python/HTTP - How does web browsers cache work?
1.2
0
1
70
46,954,478
2017-10-26T12:33:00.000
0
0
1
0
python
47,309,224
3
true
0
0
Yes, because \ is the escape character in Python. \\ equals \, but if you write 3 backslashes the odd one out starts escaping the closing quote again. To get \\\ in a string, write six backslashes ("\\\\\\"). Note that a raw string does not help here, because a raw string literal cannot end with an odd number of backslashes.
1
0
0
How can I put "\\\" in a string in Python? It seems to give SyntaxError: EOL while scanning string literal.
Enter "\\\" in a string in python
1.2
0
0
161
46,958,456
2017-10-26T15:39:00.000
0
0
0
0
python,django,stripe-payments
46,962,742
1
false
1
0
I think your question needs to be more specific, but I also struggled with Stripe's documentation on subscriptions and webhooks. Here is a bit of what I gleaned from multiple exchanges with their support: Once you've set up the customer object, attached a payment source, and subscribed your customer to a plan, you won't need to take any further steps; Stripe will take care of creating the charges on the recurring basis that you've set. Despite Stripe's recommendation of implementing webhooks as a best practice for subscriptions, they are not required to continue billing. Billing is attempted automatically without the need to handle webhooks. Hope that helps.
1
2
0
What do I need to do to make payments available with Stripe and Django using dj-stripe 1.0? I found the documentation quite unforgiving for a newcomer to dj-stripe. I think I have gleaned that most configuration of e.g. subscription plans are done at stripe.com and updated via webhooks to my application. However, what do I need to implement myself and how?
How to get started with dj-stripe 1.0?
0
0
0
208
46,960,126
2017-10-26T17:12:00.000
0
0
1
0
python,cvxopt
46,960,460
1
false
0
0
This isn't working because you have Python 3 but are using the Python 2 pip. To use the Python 3 pip, try pip3 install.
1
0
0
I have a Mac 10.6.8 with PyCharm 3.0.3 installed. When I install the cvxopt package via terminal (pip install cvxopt) and then import the module, I get the following: Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cvxopt/__init__.py", line 242, in cvxopt.base.normal, cvxopt.base.uniform = normal, uniform AttributeError: 'module' object has no attribute 'base' I've been trying several solutions but no luck. Any help please? Thanks
python cvxopt installation Mac
0
0
0
379
46,961,091
2017-10-26T18:10:00.000
-1
0
0
0
python,pandas,scikit-learn,decision-tree,one-hot-encoding
46,961,237
1
false
0
0
The two options you are describing do two very different things. If you choose to binarize (one-hot encode) the values of the variable, there is no order to them. The decision tree at each split, considers a binary split on each of the new binary variables and chooses the most informative one. So yes, each binary feature is now treated as an independent feature. The second choice puts the values in an order. Implicitly you are saying that a < b < c if transforming a=1, b=2, c=3. If you use this variable in a decision tree, the algorithm considers the splits 1 vs 2,3 and 1,2 vs 3 (but not 3 vs 1,2). So the meaning of the variables is very different and I don't think you can expect equivalent results.
1
0
1
So my understanding is that you perform one hot encoding to convert categorical features as integers to fit them to scikit learn machine learning classifier. So let's say we have two choices a. Splitting all the features into one hot encoded features (if A is say a categorical features that takes values 'a', 'b' and 'c', then it becomes A_a, A_b and A_c with binary values in each of its rows with binary value '1' meaning that the observation has the feature and binary value '0' meaning it does not possess the feature!). I would then fit a DecisionTreeClassifier on this. b. Not splitting all the features, but converting each category into an integer value WITHOUT performing one hot encoding (if A is say a categorical features that takes values 'a', 'b' and 'c', then 'a', 'b' and 'c' are renamed as 1, 2, 3 and no new columns are created, 'A' remains a single column with integer values 1, 2, 3 by using pandas.factorize or something which you an then fit a DecisionTreeClassifier. My question is, when you fit DecisionTreeClassifier on the one hot encoded dataset, with multiple columns, will each of the new columns be treated as a separate feature? Also, if you fit the DecisionTreeClassifier on the dataset where the categorical features are simply converted to an integer and kept in a single column; will it produce the same node splits, as the one where the DecisionTreeClassifier was fit on the dataset with the one-hot encoded features? Like, when you visualize the tree in both cases, is the interpretation given below the right way to look at it? for DecisionTreeClassifier with one-hot-encoding if attribute == A_a, then yes if attribute == A_b, then no for DecisionTreeClassifier without one-hot-encoding ('a' represented by integer value 1 and 'b' by value 2) if attribute == 1 then yes if attribute == 2, then no
One hot encoding and its combination with DecisionTreeClassifier
-0.197375
0
0
325
46,962,868
2017-10-26T20:04:00.000
2
0
1
0
python,pycharm
46,962,982
2
false
1
0
You can run a .py file directly without manually creating a Run/Debug Configuration. To do so, right click on the file in the Project view and select either the Run or Debug option from the context menu.
1
1
0
Simple question but couldn't find a clear answer anywhere. Do I have to create a new run/debug configuration every time I create a new project, and assign the .py file at "script", or is there a way to make PyCharm do this automatically? Would love to know! Thanks in advance :)
Pycharm Run/Debug Configuration saved as default
0.197375
0
0
684
46,964,702
2017-10-26T22:19:00.000
2
0
1
0
python,wireshark,pcap,packet-sniffers,pcap-ng
49,832,278
1
true
0
0
Since no one answered, the right way to do it is to use the os module and run "mergecap -w destfile.pcap [source_files.pcap]". You'll have to add mergecap to the Path variable in Windows, or otherwise use the absolute path to it, which is inside the Wireshark folder.
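For example, with subprocess instead of os.system (file names here are hypothetical; mergecap ships with Wireshark and must be on PATH or referenced by its absolute path):

```python
import subprocess

sources = ["capture1.pcapng", "capture2.pcapng"]  # hypothetical inputs
subprocess.run(["mergecap", "-w", "merged.pcapng", *sources], check=True)
```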
1
1
0
Using the Pyshark module, is there a way to efficiently merge/join multiple pcap/pcapng files? Tried playing around with pyshark.FileCapture and native file methods in Python, but with no success. Any ideas? Thanks in advance!
Python | Merging multiple pcap/pcapng files using Pyshark
1.2
0
0
1,725
46,965,192
2017-10-26T23:13:00.000
0
0
0
0
python,python-3.x,csv
46,979,942
2
false
0
0
```python
import csv

# Files to load (Remember to change these)
file_to_load = "raw_data/budget_data_2.csv"

# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as revenue_data:
    reader = csv.reader(revenue_data)
    # use of next to skip the first title row in the csv file
    next(reader)
    revenue = []
    date = []
    rev_change = []
    # in this loop I sum column 1, which is revenue in the csv file, and
    # count the total months, which is column 0
    for row in reader:
        revenue.append(float(row[1]))
        date.append(row[0])

print("Financial Analysis")
print("-----------------------------------")
print("Total Months:", len(date))
print("Total Revenue: $", sum(revenue))

# in this loop I total the differences between all rows of the "Revenue"
# column to find the revenue changes, and find the max and min revenue change
for i in range(1, len(revenue)):
    rev_change.append(revenue[i] - revenue[i-1])

avg_rev_change = sum(rev_change) / len(rev_change)
max_rev_change = max(rev_change)
min_rev_change = min(rev_change)
max_rev_change_date = str(date[rev_change.index(max(rev_change))])
min_rev_change_date = str(date[rev_change.index(min(rev_change))])

print("Average Revenue Change: $", round(avg_rev_change))
print("Greatest Increase in Revenue:", max_rev_change_date, "($", max_rev_change, ")")
print("Greatest Decrease in Revenue:", min_rev_change_date, "($", min_rev_change, ")")
```

Output I got:

Financial Analysis
-----------------------------------
Total Months: 86
Total Revenue: $ 36973911.0
Average Revenue Change: $ -5955
Greatest Increase in Revenue: Jun-2014 ($ 1645140.0 )
Greatest Decrease in Revenue: May-2014 ($ -1947745.0 )
1
0
1
Date     Revenue
9-Jan    $943,690.00
9-Feb    $1,062,565.00
9-Mar    $210,079.00
9-Apr    -$735,286.00
9-May    $842,933.00
9-Jun    $358,691.00
9-Jul    $914,953.00
9-Aug    $723,427.00
9-Sep    -$837,468.00
9-Oct    -$146,929.00
9-Nov    $831,730.00
9-Dec    $917,752.00
10-Jan   $800,038.00
10-Feb   $1,117,103.00
10-Mar   $181,220.00
10-Apr   $120,968.00
10-May   $844,012.00
10-Jun   $307,468.00
10-Jul   $502,341.00

```python
# This is what I did so far...
# Dependencies
import csv

# Files to load (Remember to change these)
file_to_load = "raw_data/budget_data_2.csv"
totalrev = 0
count = 0

# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as revenue_data:
    reader = csv.reader(revenue_data)
    next(reader)
    for row in reader:
        count += 1
        revenue = float(row[1])
        totalrev += revenue

for i in range(1, revenue):
    revenue_change = (revenue[i+1] - revenue[i])
    avg_rev_change = sum(revenue_change) / count
    print("avg rev change: ", avg_rev_change)

print("budget_data_1.csv")
print("---------------------------------")
print("Total Months: ", count)
print("Total Revenue:", totalrev)
```

I have the above data in a CSV file. I am having a problem finding the revenue change, which is revenue of row 1 - row 0, row 2 - row 1, and so on... Finally, I want the sum of the total revenue change. I tried with a loop but I guess there is some silly mistake. Please suggest code so I can compare it with my mistake. I am new to Python and coding.
Python - How can I find difference between two rows of same column using loop in CSV file?
0
0
0
5,951
46,966,692
2017-10-27T02:39:00.000
1
0
0
0
python,tkinter
46,967,037
2
true
0
1
No, there is nothing built-in. You can probably make it work, but tkinter is designed to work the other way around: you specify the text and the widget will automatically resize to fit.
1
0
0
Let's say I have a button widget of arbitrary size, is there a conventional way to make its text to fit or let's say resize in proportion of the button's new size? If so what is it?
Is there a conventional, simple way to make the fontsize of text option in a widget to fit?
1.2
0
0
170
46,967,312
2017-10-27T04:01:00.000
0
0
0
0
python,numpy,scikit-learn
46,967,516
2
false
0
0
You have to convert the array to a list to make it work. This should do it for you: accuracy_score(y_test.tolist(), labs)
1
0
1
I have two lists y_test = array('B', [1, 2, 3, 4, 5]) and labs = [1, 2, 3, 4, 5] In sklearn, when i do print accuracy_score(y_test,labs), i get error ValueError: Expected array-like (array or non-string sequence), got array('B', [1, 2, 3, 4, 5]). I tried to compare it using print accuracy_score(y_test['B'],labs) but it is showing TypeError: array indices must be integers
python sklearn accuracy score for two different list
0
0
0
2,073
46,969,917
2017-10-27T07:42:00.000
1
0
1
0
python-3.x,pip
47,573,225
1
true
1
0
I had this issue because I had the npm module pip installed; removing it fixed my issue. Remove it with npm uninstall pip.
1
2
0
Whenever I run something with pip in the console on Windows 10, it opens the JavaScript file 'cli.js'. I added Python to my environment variables; Python runs fine in the console. And yes, all this was run in cmd as admin or PowerShell. Anybody have an idea?
running pip opens javascript file
1.2
0
0
94
46,971,103
2017-10-27T08:53:00.000
2
0
1
0
asp.net-mvc,python-3.x,machine-learning
46,971,585
1
false
1
0
Since your jobs will probably be long running, the best way would probably be to use some form of messaging (e.g. RabbitMQ, Apache Kafka). A possible outline: 1) add to MVC a thread/process listening to a messaging queue; 2) an image is added to MVC (or some other action happens for which Python should be notified); 3) MVC sends a message to the Python server; 4) the Python learning system is notified and updates its knowledge; 5) when done, it sends a message back to MVC containing whatever results you need. The actual image could be passed from MVC to Python either as binary data inside the message itself, or written to a commonly accessed database, with the message to Python containing just a notification (e.g. the filename of the added image). If you go with the shared database, make sure that MVC only writes and Python only reads, otherwise you might face inconsistencies.
1
0
0
We're creating an application to extract images from documents, where we'd be using MVC .NET for the UI, and the job that extracts images and learns is in Python. The Python batch would be on a server, and we're not sure whether MVC interaction with that batch would be possible directly or not. If not, I was thinking of WCF, but would like to explore other options as well which might be more efficient. So, can the Python batch have duplex communication with the MVC .NET UI? If not, what are the options to establish this? Thanks
MVC UI with python backend
0.379949
0
0
87
46,982,244
2017-10-27T19:33:00.000
-1
0
1
0
python,multithreading
46,982,342
2
false
0
0
The reason to join a thread is so that the calling thread waits for the joined thread to finish. If the child thread terminates (either on purpose or due to an unhandled exception), the parent's join() call will return. If you are joining every child thread and then waiting for the full timeout to complete, then your main thread isn't actually respecting the point of join(), which is to cease execution until the child threads terminate. If you expect your child thread to complete in under the timeout, then the main thread shouldn't ever need to wait for the full timeout. If you expect the child thread to operate for longer than the timeout and you don't want the parent thread to stop execution while the child operates, then why have the parent join() the child at all?
1
3
0
It seems that if you have a loop of n threads and join them one by one with timeout t, the actual time you take is n * t, because the timeout of one child thread only starts counting when the join on the previous child thread ends. Is there any way to reduce this total time to t, not n * t?
python multi-thread join with timeout
-0.099668
0
0
1,736
46,988,123
2017-10-28T09:20:00.000
0
0
0
0
python,widget,kivy
47,026,950
1
false
0
1
You probably have a switch that is larger than you expect. From the documentation of the switch: "The entire widget is active, not just the part with graphics. As long as you swipe over the widget’s bounding box, it will work." I would try changing the background of the switch to see how big it really is.
1
0
0
I have a kivy file with several widgets, one of them is a switch. The problem is that wherever I press on the screen, it flips the switch. I have 2 other widgets - one of them is a check-box and another is radio-buttons, which also have some problems and I think they are occuring because of the switch. The problem is that in order to press them I need to press on a different part of the screen and not on the widget itself. Any help would be appreciated. UPDATE I am using Windows only for development.
Kivy module switch widget pressed anywhere
0
0
0
109
46,989,998
2017-10-28T12:59:00.000
0
0
0
0
python,keras
47,003,117
1
false
0
0
The way mse is defined in Keras makes it compute an average pixel error. So you can simply read the loss value as the average error per pixel.
1
1
1
I'm using Keras for a CNN on 2D images for regression, with mean squared error as the loss function. The loss values are in the range of 100. To know the average error at each pixel, should I divide the loss by the total number of pixels? Or are the displayed loss values already per pixel?
Is the loss value computed by keras for 2D CNN regression by keras point wise?
0
0
0
246
46,994,246
2017-10-28T20:37:00.000
0
0
0
0
python,opencv,color-detection
46,994,657
1
false
0
0
You need to normalize the % values. OpenCV takes 0-180 as the H boundary and 0-255 for the S and L values. You can also convert by hand: for example, 100% = 255 and 50% = 127.
1
1
0
When I use any color space converter, it only gives me single values for H, S and L, but how do I determine the range boundaries from this? Also, the converters give me S & L values in %, not values in the range 0-255. I need to use this color selection in a Python pipeline. Any help?
How to determine upper and lower boundaries for HSL color detection ?
0
0
0
114
46,998,234
2017-10-29T08:29:00.000
1
0
0
0
python,pandas,regression,statsmodels
46,998,392
1
false
0
0
You can apply grouping and then do logistic regression on each group. Or you can treat it as a multilabel classifier and do "softmax regression".
1
0
1
I have a dataset that includes 7 different covariates and an output variable, the 'success rate'. I'm trying to find the important factors that predict the success rate. One of the covariates in my dataset is a categorical variable that takes on 700 values (0- 700), each representing the ID of the district they're from. How should I deal with this variable while performing logistic regression? If I make 700 dummy columns, how can I make it easier to interpret the results? I'm using Python and statsmodels.
Logistic Regression- Working with categorical variable in Python?
0.197375
0
0
755
46,999,584
2017-10-29T11:24:00.000
-1
0
0
0
python,neural-network,artificial-intelligence,genetic-algorithm
46,999,668
2
false
0
0
Normally you use a seed for genetic algorithms, which should be fixed. It will always generate the same "random" children sequentially, which makes your approach reproducible. So the genetic algorithm is kind of pseudo-random. That is the state of the art for performing genetic algorithms.
1
0
1
So I am using a genetic algorithm to train a feedforward neural network, tasked with recognizing a function given to the genetic algorithm, i.e. x = x**2 or something more complicated obviously. I realized I am using random inputs in my fitness function, which causes the fitness to be somewhat random for a member of the population; however, it is still in line with how close the member is to the given function, obviously. A colleague remarked that it is strange that the same member of the population doesn't always get the same fitness, which I agree is a little unconventional. However, it got me thinking: is there any reason why this would be bad for the genetic algorithm? I actually think it might be quite good because it enables me to have a rather small test set, speeding up the generations while still avoiding overfitting to any given test set. Does anyone have experience with this? (The fitness function is the MSE compared to the given function, for a randomly generated test set of 10 iterations.)
Random element in fitness function genetic algorithm
-0.099668
0
0
495
47,001,108
2017-10-29T14:07:00.000
3
1
0
1
python,windows,python-idle
47,001,239
3
false
0
0
You can do that by adding the directory where you have installed the IDLE editor to the PATH environment variable. How to do that depends on your operating system: just search the Internet for "add directory to path" plus your operating system, e.g. Windows/Ubuntu, etc. After changing the environment variable it may be a good idea to restart your PC (to make sure that all programs use the updated version).
1
3
0
How do I open my .py script from anywhere on my PC directly into Python IDLE from the command prompt? Is there any way so that I can type idle test.py in cmd so that it opens the test.py file in the current directory, and if test.py is not available, creates a new file and opens it in IDLE?
Python IDLE from cmd
0.197375
0
0
2,809
47,001,112
2017-10-29T14:07:00.000
-2
0
1
0
python,memory-leaks,heap-memory
47,001,134
2
false
0
0
Python is a high-level language, and here you need not worry about memory deallocation: it is the responsibility of the Python runtime to manage memory allocation and deallocation.
1
2
0
I am coming from C++, where I worked with heap memory and had to delete heap memory I created with the 'new' keyword. I am always confused about what to do in Python about heap memory to stop memory leaks. Please recommend me any text with details of Python memory allocation and deletion. Thanks
how to deallocate heap memory in python
-0.197375
0
0
2,010
47,002,039
2017-10-29T15:38:00.000
0
0
1
0
python,abaqus
47,081,470
1
true
0
0
The integrated Abaqus Python development, called "PDE", is very helpful. To open it from Abaqus/CAE, select File > Abaqus PDE from the main menu bar.
1
0
0
In Abaqus we can execute Python scripts. Using globals() or locals() we can list all current variables in the console. Is there a possibility to open a window showing all these variables within a table? Or maybe exporting them into an external text file?
Show all Python variables in Abaqus
1.2
0
0
145
47,005,067
2017-10-29T20:44:00.000
1
1
0
1
python,usb,autorun
58,617,987
2
false
0
0
Try using os.path.exists in an infinite loop to detect whether the pendrive is there; when it is detected, execute the code on the pendrive using os.system and break out of the loop.
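A sketch of that loop (the mount point and script name are assumptions for a Raspberry Pi):

```python
import os
import time

SCRIPT = "/media/pi/USBDRIVE/main.py"  # hypothetical path once the drive mounts

while True:
    if os.path.exists(SCRIPT):
        os.system("python3 %s" % SCRIPT)  # run the file found on the pendrive
        break
    time.sleep(1)  # poll once a second until the drive appears
```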
1
0
0
I have a Raspberry Pi running Linux. My plan is that I can plug a USB drive into the robot and it will run the Python files. The reason I chose this method is that it allows for easy editing and debugging of the scripts. Is there a way to execute my files when the USB is inserted?
Auto run python file when usb inserted
0.099668
0
0
2,024
47,006,642
2017-10-30T00:25:00.000
0
0
0
0
python,machine-learning,cross-validation,hyperparameters,catboost
47,006,813
1
false
0
0
You have essentially answered your own question already. For any variable that depends on something else x, you must first define x. One thing to keep in mind is that you can define a function before the variables you need to pass into it, since it's only when you call the function that you need the input variables; defining the function is just setting up the process you will use. Calling a function and defining the variable it returns is what you have to do in order. The order you would use is: include any remote libraries or functions, define any initial variables that don't depend on anything, define your local functions. Next, in your main, you first need to generate the variables that your iteration function requires, then iterate with these variables, then generate the ones that depend on the iteration.
1
1
1
So with Catboost you have parameters to tune and also iterations to tune. So for iterations you can tune using cross validation with the overfit detector turned on. And for the rest of the parameters you can use Bayesian/Hyperopt/RandomSearch/GridSearch. My question is which order to tune Catboost in. Should I tune the number of iterations first or the other parameters first. A lot of the parameters are kind of dependent on number of iterations, but also the number of iterations could be dependent on the parameters set. So any idea of which order is the proper way?
Catboost tuning order?
0
0
0
1,861
47,009,499
2017-10-30T06:51:00.000
0
0
1
0
python,pycharm,parent-child
47,011,258
2
false
0
0
Just add an __init__.py to each directory (dir_1 and dir_2). Note that you don't need to write anything in the two __init__.py files; it's fine to leave them blank.
1
1
0
Suppose we have a directory: main that includes: directory_1 file_1.py directory_2 file_2.py If the main code is inside file_1.py, how can I import file_2.py? If it helps, I am using pycharm.
how to let files in pycharm see different ones?
0
0
0
55
47,009,793
2017-10-30T07:14:00.000
1
0
0
0
python,video,hash,hashlib
47,010,012
1
false
0
0
As a first approach, I would consider taking the crc32 of the first 10 MB plus maybe the file size. You will have collisions with this method and will need to handle them, but all hashing algorithms have collisions. UPDATE: Alternatively, you can use the utility ffprobe (which comes with ffmpeg) to get the video headers and compute an md5 from them, but running it as a process will be slow, and it doesn't seem to exist as a Python library to import.
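A sketch of the first approach (the 10 MB sample size is the one suggested above; collisions remain possible and must be handled by the caller):

```python
import os
import zlib

def quick_video_id(path, sample_bytes=10 * 1024 * 1024):
    # crc32 of the first 10 MB plus the file size: fast, content-based,
    # and independent of the file's name or location
    with open(path, "rb") as f:
        head = f.read(sample_bytes)
    return "%08x-%d" % (zlib.crc32(head) & 0xFFFFFFFF, os.path.getsize(path))
```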
1
3
0
I need a unique hash for video files, which can handle the following: - Change in filename - Change in file location - Two files with exactly the same filesize, but different contents within (should be treated as different files) Now while the hashing algorithms like md5, sha1 seem to be a good candidate, I need something which takes fraction of seconds to produce. On a 2GB video file, it takes 5 sec to produce the md5 checksum value. I assume the long processing time is natural because of having to read the large video file. Is there something I could use, which specifically utilizes the properties of video files, maybe does the comparison just using video file headers or something. Goal here is to obtain the unique video id in fraction of seconds.
Fast method to get a unique identity for video files
0.197375
0
0
1,365
47,011,332
2017-10-30T09:01:00.000
0
0
0
0
python-3.x,xpath,lxml
47,045,098
1
false
1
0
I solved it myself. What I wanted to do was, using a slice, traverse all the XPaths in a loop, where the iteration variable is the digit coming from the for loop.
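A sketch of that idea with lxml; page_source is a placeholder for HTML you have already fetched, and note that the class value needs its own quotes inside the expression (the expression in the question was missing them):

```python
from lxml import html

# Sketch: build the XPath string for each value of e and evaluate it.
page_source = open("page.html").read()   # placeholder: however you got the HTML
tree = html.fromstring(page_source)

xpathvar = {}
for e in range(1, 11):
    expr = '//span[@class="postNum" and contains(text(),"%d")]' % e
    xpathvar[e] = tree.xpath(expr)        # list of matching elements
```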
1
0
0
I want to iterate //span[@class="postNum and contains(text(),1)] This xpath over the range of 1 to 10 and store it in a variable. I want it to be done in HTML format and not XML. pseudo code: for e in range(1,11): xpathvar[e]='//span[@class="postNum and contains(text(),e)]' how to implement this so that xpathvar[1] will contain the first xpath with e=1. I cannot do this because the element in RHS is a string.
How to iterate over a XPath in HTML format using lxml and python?
0
0
1
204
47,013,937
2017-10-30T11:17:00.000
0
0
0
0
python,tensorflow
47,014,211
1
false
0
0
A solution could be to keep your one-hot vector ;). Another one, more general, is to make a random positive vector, compute the difference d between its highest score and the score of your true class, add a random number between d and +infinity to the true class's score, and then normalize to get a valid distribution. (Note that you can force the true class's initial score to be 0, but that will probably take a tiny bit longer to code.) The choice of the distributions for the initial random vector and for the quantity added to the true class's score will change the output distribution, but I don't know which one you want or why you want to do that...
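A sketch of the general recipe in NumPy terms (rather than TensorFlow ops); the uniform noise and the fixed boost added on top of d are arbitrary choices:

```python
import numpy as np

# Sketch: soften a one-hot vector while keeping the true class on top.
def soften_one_hot(one_hot, boost=2.0):
    scores = np.random.rand(len(one_hot))      # random positive vector
    true_idx = int(np.argmax(one_hot))
    d = scores.max() - scores[true_idx]
    scores[true_idx] += d + boost              # make true class the largest
    return scores / scores.sum()               # normalize to a distribution

print(soften_one_hot(np.array([0, 0, 0, 0, 1, 0])))
```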
1
0
1
Is there a way in tensorflow to transform a one-hot vector into a softmax-like distribution? For example, I have the following one-hot vector: [0 0 0 0 1 0] I want to have a vector with probabilities where the one value is the most likely number, like: [0.1 0.1 0.1 0.1 0.5 0.1] This vector should always be random, but with the true class having the highest probability. How can I reach this?
One-hot vector to softmax-like distribution in tensorflow
0
0
0
359
47,019,229
2017-10-30T15:45:00.000
3
0
1
0
python,windows,import,anaconda
47,021,372
2
false
0
0
Remove the Anaconda Python 3.6.0 path from your PATH environment variable. Instead, add the Python 3.6.3 path to your PATH variable. Now use your normal command prompt for the Python 3.6.3 version. Use the command activate root in the command prompt when you need the Python 3.6.0 version.
1
0
0
I have two versions of Python installed on my computer: one is 3.6.0 (Anaconda) and the other is 3.6.3. On 3.6.3 I cannot run or import any library like pandas or numpy in IDLE. I use Windows 10. I can work on the 3.6.0 (Anaconda) version. I tried to change the version through the command prompt with py -3, but since both versions are 3+, it didn't work.
I have two version of python installed on my computer but one is not responding
0.291313
0
0
158
47,021,086
2017-10-30T17:32:00.000
0
0
1
0
python
47,021,310
1
false
0
0
Information must be stored somewhere. Since functions are part of a script that has been loaded into memory and might be needed at any time, I assume they are stored in memory as well. Furthermore, in C you can have pointers (variables holding memory addresses) that point to memory containing a function. This means that a function occupies memory in C, and it probably does so in Python as well.
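You can observe this directly in Python; a defined function is an ordinary object whose size can be inspected:

```python
import sys

def example(a, b):
    return a + b

# A defined function is an object living in memory like any other:
print(sys.getsizeof(example))            # size of the function object itself
print(sys.getsizeof(example.__code__))   # size of its compiled code object
```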
1
1
0
I know that function call has to occupy memory, but what about function definition? Is it memoryless?
Python - Function definition occupy memory?
0
0
0
148
47,022,486
2017-10-30T19:02:00.000
0
0
0
0
python,python-3.x,directory,vpython
47,043,754
2
false
0
1
If you are running this in a Jupyter Notebook and the directory where the image exists is a subdirectory of the directory where the notebook is located, then it will work. For instance, if there is an images directory (containing the Tex.jpg file) in the same directory as the notebook, then this will work: obj3d = sphere(pos=vector(0,0,0), radius=1, texture="images/Tex.jpg"). (Note that the original self.3DObject is not a valid Python attribute name, since names cannot start with a digit.)
1
1
0
I am working on a small project in VPython 7 (Python 3.6) where textures need to be applied to my 3D objects. However, when I try loading a texture, the object does not appear, until I place the texture in the Lib\site-packages\vpython\vpython_data folder, where it is loaded perfectly with no problems. However, for my project, I need it to be in my chosen directory for easy organisation. Let's call the directory C:\Project with my texture Tex.jpg textures.customTex= {'file':":Tex.jpg"} self.3DObject= sphere(pos=vector(0,0,0),radius = 1, texture=textures.Tex) The above will work if the texture is in the /vpython_data directory. However, when I try to load the same texture from my own directory: textures.customTex= {'file':":C:\Project\Tex.jpg"} self.3DObject= sphere(pos=vector(0,0,0),radius = 1, texture=textures.Tex) The above will not work. My question is whether I am loading it in wrong, or whether there is simply no workaround for this problem. Thank you in advance
VPython 7 Texture not loading from custom directory
0
0
0
433
47,023,092
2017-10-30T19:46:00.000
1
0
0
0
python,windows,github,pycharm
47,023,220
1
true
0
0
This may seem like an overkill way of handling the problem, but I fixed it myself by re-installing Git on my machine. That seems to actually be the fix for this. Another thing you could try in the future is git-bash (the Git for Windows app).
1
2
0
Fetch failed: Unable to find remote helper for 'https' When I tried to fetch on PyCharm from my GitHub repository, the above is the message I ended up getting. I was wondering how I could fix this.
Can't seem to fetch from GitHub repository in PyCharm
1.2
0
1
161
47,024,145
2017-10-30T21:05:00.000
3
0
0
0
python,openerp,odoo-8
57,271,414
1
true
1
0
You should add those fields to the product.template model; they will then be automatically added to product.product by inheritance. This way you will be able to show the fields in product.template views. I do not know the exact problem you are trying to solve, but when you need to add a field to a product you should consider whether its value is going to be different for each variant of the product (product.product records are the variants and product.template is the original product). If it is going to have the same value (you want to add it to the product.template view, so I imagine it will), then add it to the product.template model. I hope this helps you.
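A minimal sketch of such an extension for Odoo v8; the field name is hypothetical:

```python
# Sketch: add a field on product.template (field name is hypothetical).
from openerp import fields, models   # the import is "odoo" on v10+

class ProductTemplate(models.Model):
    _inherit = 'product.template'

    x_custom_info = fields.Char(string='Custom Info')
    # product.product inherits from product.template, so the variants
    # see this field automatically and template views can display it.
```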
1
1
0
I'm trying to add new product.product fields to the default product.template view; the problem is that I've tried many examples, but none seems to work. The thing is, I have added these fields to the product.product default view (as an inherited view), BUT that view is only available in the sales module, and the vast majority of Odoo's product views come from product.template. Does anybody have an idea of how to achieve this in the XML view? Is it possible at all, product.product being the model?
Add product.product fields to product.template view - Odoo v8
1.2
0
0
1,258
47,024,444
2017-10-30T21:27:00.000
-1
0
1
0
python,input,types
47,024,516
2
false
0
0
You are on the right track with the split() function. The problem is that when the user gives three values separated by ' ', you are taking in a single string. The following is a string: '34.44 35.45 5'. What you can do is, after using split, cast each returned item and assign it to a variable. If you still need to check the type of a variable, you can use the type() function. Hope this helps!
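A sketch of that cast-and-retry loop; the prompt text and error message are placeholders:

```python
# Sketch: keep asking until the three values parse as float, str and int.
while True:
    raw = input("Enter float, string and int separated by spaces: ")
    parts = raw.split()
    if len(parts) == 3:
        try:
            a = float(parts[0])   # keeps float type
            b = parts[1]          # already a str
            c = int(parts[2])     # keeps int type
            break
        except ValueError:
            pass
    print("Wrong input, please type again.")
```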
1
0
0
So I need to find a way to take multiple data types as input and store them in variables. Let's say I have 3 variables, and each of them should store a fixed data type: a - float, b - str, c - int; and if the user enters a wrong one, they will be asked to type it again. The user will enter all of them separated by spaces, but I can't think of a way to store them in the variables while keeping the data types. I've tried .split(), but it just turns them all into strings. It's probably something quite obvious, but I can't figure it out right now. Thanks in advance!
Python 3 taking multiple data types in input, separated by space
-0.099668
0
0
1,324
47,025,408
2017-10-30T22:51:00.000
1
0
1
0
python,anaconda
68,860,687
2
false
0
0
Stack them: create environments for base_env (base packages) and app_env (just your application packages); then run conda activate base_env followed by conda activate --stack app_env.
1
2
0
Is it possible to create an anaconda environment with all of the packages from my other environments? It would be even better if it could dynamically stay up to date.
Create anaconda environment with all packages from other environments
0.099668
0
0
2,647
47,025,896
2017-10-30T23:49:00.000
2
0
0
0
python-2.7,image-processing,machine-learning,keras,conv-neural-network
47,031,545
1
false
0
0
I don't think there is a standard approach to this. In machine learning, in many cases we have to try and see. If I were you, building a custom neural network, I would start with the mean image size and then gradually increase it until reaching the optimum score. If you are using a pretrained neural network, just resize your images to the network's default input size.
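A sketch of the mean-size heuristic, using PIL for the resizing instead of the scipy.misc.imresize mentioned in the question; the file list is a placeholder:

```python
import numpy as np
from PIL import Image   # stands in for scipy.misc.imresize here

# Sketch: start from the mean width/height of the collection.
paths = ["img_0.jpg", "img_1.jpg"]                       # placeholder names
sizes = np.array([Image.open(p).size for p in paths])    # (width, height)
target = tuple(int(x) for x in sizes.mean(axis=0))       # mean dimensions

resized = [np.asarray(Image.open(p).resize(target)) for p in paths]
```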
1
1
1
I am training a neural net on a set of images with heterogeneous dimensions. Of course, they all have to have the same dimensions to be fed to the NN, and it is simple enough to use scipy.misc.imresize() for this. But, how should I choose width and height? My first instinct was to plot histograms of both and eyeball values around the 75th percentile. I also thought maybe I should scale all images up to the max values for both height and width, so that no details are discarded from the higher-pixel images. Is there a best practice for addressing this problem? Thanks! For reference, I am using python 2.7 and keras with theano backend and dimension ordering.
Is there a heuristic for homogenizing image dimensions before using them to train neural net?
0.379949
0
0
44
47,030,098
2017-10-31T07:37:00.000
0
1
0
0
python,sms,gsm,sms-gateway
47,046,290
1
true
0
0
If you consider all the protocols involved, including the radio part, 300+ messages across a good dozen protocols have to be sent in order to deliver an outgoing SMS to the SMSC, and a great deal of waiting and synchronization is involved. This high overhead will be your limiting factor, and you would probably get around 10-15 SMS per minute. Reducing the overhead is only possible with different connectivity methods, mostly ones that eliminate the radio part and the mobility management protocols. The usual methods are: connecting to a dedicated SMS gateway provider via whatever protocol they offer, or acting as an SMSC yourself and connecting to the SS7 network directly.
1
0
0
I have a GSM modem, a SIM900D. I am using it with my server and Python code to send and receive messages. I want to know how many text SMS I can send and receive through this GSM modem per minute.
GSM 900D Module Limit for Text Messages Sending & Receiving
1.2
0
0
377
47,037,150
2017-10-31T13:53:00.000
1
0
1
0
python,numpy
47,037,520
6
false
0
0
I can propose the notation [5*10**5:1*10**6], but it's not as clear as 5e5 and 1e6, and it's even worse in a case like 3.5e6 = 35*10**5.
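For illustration, both the power-of-ten notation and, on Python 3.6+, underscore separators keep the values as ints:

```python
import numpy as np

# The power-of-ten notation keeps indices as ints:
arr = np.zeros(2 * 10**6)
chunk = arr[5 * 10**5 : 10**6]   # works: both bounds are ints

# On Python 3.6+ underscore separators are another readable option:
big = 3_500_000
print(type(big))                 # <class 'int'>
```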
1
1
1
I frequently need to enter large integers for indexing and creating numpy arrays, such as 3500000 or 250000. Normally I'd enter these using scientific notation, 3.5e6 or .25e6 or such. This is quicker, and much less likely to have errors. Unfortunately, python expects integer datatypes for indexing. The obvious solution is to convert datatypes. So [5e5:1e6] becomes [int(5e5):int(1e6)], but this decreases readability and is somewhat longer to type. Not to mention, it's easy to forget what datatype an index is until an indexing operation fails on a list or numpy.ndarray. Is there a way to have numpy or python interpret large floats as integers, or is there an easy way to create large integers in python?
Using large index in python (numpy or lists)
0.033321
0
0
330
47,038,101
2017-10-31T14:42:00.000
0
0
1
0
python,database,save
47,038,338
5
false
0
0
Correct me if I'm wrong, but opening, writing to, and subsequently closing a file should count as "saving" it. You can test this yourself by running your import script and comparing the last modified dates.
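For instance, one way to write the adjusted array back out as a *.dat text file; the file names and the column being adjusted are assumptions:

```python
import numpy as np

# Sketch: a text-based *.dat round trip with numpy.
data = np.loadtxt('input.dat')           # read the original *.dat file
data[:, 2] -= 42.0                       # subtract a value from one column
np.savetxt('other_dir/result.dat', data, fmt='%.6f')
```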
1
0
1
I am writing a program in Python which should import *.dat files, subtract a specific value from certain columns and subsequently save the file in *.dat format in a different directory. My current tactic is to load the datafiles in a numpy array, perform the calculation and then save it. I am stuck with the saving part. I do not know how to save a file in python in the *.dat format. Can anyone help me? Or is there an alternative way without needing to import the *.dat file as a numpy array? Many thanks!
Save data as a *.dat file?
0
0
0
41,290
47,038,309
2017-10-31T14:52:00.000
2
0
0
0
python,django,apache,amazon-ec2,ubuntu-14.04
47,038,557
1
true
1
0
You can set secrets in environment variables and read them in Python code with password = os.getenv('ENVNAME').
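A sketch of what that could look like in settings.py; the variable name is hypothetical:

```python
import os

# Sketch: read the secret from the environment instead of hard-coding it.
# 'PAYMENT_GATEWAY_SECRET' is a hypothetical variable name.
PAYMENT_SECRET = os.getenv('PAYMENT_GATEWAY_SECRET')
if PAYMENT_SECRET is None:
    raise RuntimeError('PAYMENT_GATEWAY_SECRET is not set')
```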
1
0
0
Where should I store a payment gateway secret key when using Python Django with an Apache server? I don't want to store it in settings.py, as I will be checking this file into my git. Can I do it the same way Amazon stores AWS EC2 keys? If possible, how?
Where to store payment gateway secret key when using python Django with apache server hosted on aws ec2 Ubuntu
1.2
0
0
112
47,038,961
2017-10-31T15:24:00.000
1
0
0
0
python,python-2.7,sqlalchemy
65,727,720
3
false
0
0
If you use the flask-login module of Flask, you can just import the current_user proxy with from flask_login import current_user. Then you can read the user you saved in the database through your db model (for instance SQLite/SQLAlchemy): u_id = current_user.id, u_email = current_user.email, u_name = current_user.name, etc.
1
1
0
I am trying to get the current user of the db I have, but I couldn't find a way to do that, and there are no questions on Stack Overflow similar to this. In PostgreSQL there is a function current_user: for example, I could just say SELECT current_user and I would get a table with the current user's name. Is there something similar in SQLAlchemy?
SqlAlchemy current db user
0.066568
1
0
2,399
47,041,206
2017-10-31T17:26:00.000
0
0
1
0
python,algorithm,python-3.x,heap,priority-queue
47,041,703
3
false
0
0
I would have each element be a data structure with a flag for whether to ignore it. When you heappop, you just pop again if the element got flagged. This is very easy, obvious, and involves knowing nothing about how the heap works internally; for example, you don't need to know where the element actually is in the heap in order to flag it. The downside of this approach is that the flagged elements will tend to accumulate over time. Occasionally you can just filter them out and then heapify. If this solution is not sufficient for your needs, you should look for a B-tree implementation in Python of some sort. That will behave like the TreeMap that you are used to in Java.
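A minimal, not production-ready sketch of that lazy-deletion idea on top of heapq; the class and its bookkeeping are assumptions, not a library API:

```python
import heapq

# Sketch of lazy deletion: entries are [negated value, removed] lists,
# tracked in a dict so they can be flagged without searching the heap.
class LazyMaxHeap:
    def __init__(self):
        self._heap = []
        self._live = {}                      # value -> list of entries

    def push(self, value):
        entry = [-value, False]              # negate: heapq is a min-heap
        self._live.setdefault(value, []).append(entry)
        heapq.heappush(self._heap, entry)

    def remove(self, value):
        self._live[value].pop()[1] = True    # O(1): flag one entry

    def pop(self):
        while True:                          # skip entries flagged removed
            entry = heapq.heappop(self._heap)
            if not entry[1]:
                self._live[-entry[0]].remove(entry)
                return -entry[0]
```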
1
5
0
I'm trying to implement an algorithm to solve the skyline problem that involves removing specific elements from the middle of a max heap. The way I currently do it is maxheap.remove(index), but I have to follow up with heapify(maxheap), otherwise the order is thrown off. I know that in Java you can use something like a TreeMap for this. Is there any way to do it in Python more efficiently than calling two separate methods, each of which takes O(n) time?
How to remove a specific element in a heap without losing heap properties in Python?
0
0
0
5,553
47,041,472
2017-10-31T17:41:00.000
6
0
1
0
python,exception,error-handling,iterator
47,041,560
1
true
0
0
The overall answer is that you cannot use sentinel values to safely signal the end of an iterator/generator. Imagine simply that you have a list of None objects: None can no longer be used as a sentinel value. That's why StopIteration is used: with no sentinel value, the problem is avoided. If you want to avoid this behavior and return a default value from next, you can simply call next(generator, None).
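For example:

```python
gen = (x for x in range(3))

# next() with a default returns it instead of raising StopIteration:
print(next(gen, None))   # 0
print(next(gen, None))   # 1
print(next(gen, None))   # 2
print(next(gen, None))   # None -- exhausted, no exception
```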
1
5
0
When looking at Stackoverflow-Questions about when to raise Exceptions and when to simply return None in Python functions, the emerging picture is that you should use Exceptions for unexpected behaviour and return None otherwise. However, for-loops over iterables are realized via the StopIteration Exception raised by the next() method, although eventually reaching the "end" of an iterable usually isn't unexpected. Why is the for-loop over iterables implemented the way it is?
Why is the end of an iteration realized with a StopIteration Exception
1.2
0
0
659
47,042,689
2017-10-31T18:53:00.000
0
0
0
0
python,ipython,jupyter-notebook,ipython-notebook
47,042,891
1
false
0
0
Are you explicitly saving your notebook before you re-open it? A Jupyter notebook is really just a large json object, eventually rendered as a fancy html object. If you save the notebook, illustrations and diagrams should be saved as well. If that doesn't do the trick, try putting the one-liner "data" in a different cell than read_sql().
1
0
1
I use Pandas with Jupyter notebook a lot. After I ingest a table in from using pandas.read_sql, I would preview it by doing the following: data = pandas.read_sql("""blah""") data One problem that I have been running into is that all my preview tables will disappear if I reopen my .ipynb Is there a way to prevent that from happening? Thanks!
How to prevent charts or tables to disappear when I re-open Jupyter Notebook?
0
1
0
71
47,043,356
2017-10-31T19:40:00.000
1
0
1
0
python-3.x,virtualenv
47,043,476
1
true
0
0
You need to run venv with the appropriate Python version, so install Python 3.6 and run python3.6 -m venv <env-directory>.
1
0
0
I have a project that exists in ~/Allen/project1 and used venv to create an isolated environment: python3 -m venv ~/Allen/project1. The project I am doing requires Python 3.6, but my current python3 --version is Python 3.5.1 and my default OS X Python is 2.7.10, neither of which is the correct version. How do I configure a Python 3.6 interpreter inside my virtual environment? Note that I'm using the newer venv instead of virtualenv, although I don't think that should make too much of a difference.
How to change interpreter using venv?
1.2
0
0
639
47,043,407
2017-10-31T19:44:00.000
1
0
1
0
python,python-3.x,pandas,jupyter-notebook
55,322,568
5
false
0
0
Try this for Python 3: sudo pip3 install pandas
4
5
1
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent information: I'm using python3 I've installed pandas using conda install pandas My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook.
Jupyter Notebook: no module named pandas
0.039979
0
0
7,502
47,043,407
2017-10-31T19:44:00.000
4
0
1
0
python,python-3.x,pandas,jupyter-notebook
47,049,051
5
false
0
0
You can run which conda and which python to see the exact locations where conda and python were installed and which ones were launched. Then try using the absolute path of conda to launch Jupyter, for example /opt/conda/bin/jupyter notebook.
4
5
1
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent information: I'm using python3 I've installed pandas using conda install pandas My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook.
Jupyter Notebook: no module named pandas
0.158649
0
0
7,502
47,043,407
2017-10-31T19:44:00.000
0
0
1
0
python,python-3.x,pandas,jupyter-notebook
72,239,426
5
false
0
0
The default kernel in Jupyter Notebook points to a Python that is different from the Python used inside the terminal (you can check with which python). So the packages installed by conda live in a different place than the Python used by the Jupyter notebook by default. To fix the issue, both need to be the same. For that, create a new kernel using ipykernel. Syntax: python -m ipykernel install --user --name custom_name --display-name "Python (custom_name)". After that, check the custom kernel and the path of the Python it uses with jupyter kernelspec list. Finally, restart the Jupyter notebook and change the kernel to the new custom kernel.
4
5
1
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent information: I'm using python3 I've installed pandas using conda install pandas My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook.
Jupyter Notebook: no module named pandas
0
0
0
7,502
47,043,407
2017-10-31T19:44:00.000
0
0
1
0
python,python-3.x,pandas,jupyter-notebook
64,191,121
5
false
0
0
It seems that with Homebrew installs, the dependencies of Homebrew formulas are not handled well; these are mostly path issues, as installs end up in different locations than with pip3. I had also tried installing pandas through the notebook with !pip3, but I got messages that the requirement was already satisfied, meaning it was already installed, just not importable. As a workaround, I uninstalled the Homebrew jupyterlab and used pip3 instead, and everything worked properly.
4
5
1
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent information: I'm using python3 I've installed pandas using conda install pandas My conda environment has pandas installed correctly. After activating the environment, I type python into the terminal and from there I can successfully import pandas and use it appropriately. This leads me to believe that it is an issue with my jupyter notebook.
Jupyter Notebook: no module named pandas
0
0
0
7,502
47,044,380
2017-10-31T20:53:00.000
0
0
1
0
python,pyqt
50,945,478
1
false
0
0
Need to set "QT_QPA_PLATFORM_PLUGIN_PATH" in Environment Variables that points to the "platforms" folder of the active Anaconda Environment (not the Base root environment if you created other environment). For example, point to ..\Library\plugins\platforms.
1
0
0
I have come across this error. I have tried everything that the first 3 pages of Google provides. It is driving me crazy. I have uninstalled all my versions of python and anaconda. I reinstalled only anaconda and the error persists. My Anaconda installation is not in a path that contains any non-ASCII characters C:\Users\eee\Anaconda3 I have checked the qt.conf file. All the folders it is pointing to do exist in the stated directories. After reinstalling Anaconda, everything seemed to work until there was an error and then after that point it went back to the same error. This is extremely frustrating, I just want to be able to use matplotlib in an environment that is more suitable for development than ipython notebook. I tried PyCharm and also PyDev on Eclipse. The same error shows up in both environments. But not on ipython notebook. I am aware that you cannot have multiple instances of qt installed. How can I make sure to uninstall all of them such that none are hiding in the shadows. Python 3.6.3 |Anaconda, Inc.| (default, Oct 15 2017, 03:27:45) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
This application failed to start because it could not find or load the Qt platform plugin "windows" in ""
0
0
0
1,043
47,044,392
2017-10-31T20:54:00.000
0
0
0
0
python,python-requests,basic-authentication
47,045,020
2
false
0
0
With python requests you can open your session, do your job, and then log out with r = requests.get('logouturl', params={...}); the logout action is just an HTTP GET.
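A sketch with a Session; the router IP and the logout URL are hypothetical and depend entirely on the firmware:

```python
import requests

# Sketch: Basic-Auth credentials are sent per request; the "logout" URL
# below is hypothetical and depends on the router's firmware.
session = requests.Session()
session.auth = ('admin', 'password')

r = session.get('http://192.168.1.1/settings')   # do the actual work
r = session.get('http://192.168.1.1/logout')     # router-specific logout
session.close()                                  # drop the connection pool
```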
2
0
0
I am creating a Python script to configure router settings remotely, but I recently stumbled on the problem of how to log out or close the session after the job is done. From searching I found that Basic-Authentication doesn't have a logout option. How can I solve this in a Python script?
Basic-Auth session using python script
0
0
1
395
47,044,392
2017-10-31T20:54:00.000
0
0
0
0
python,python-requests,basic-authentication
47,044,849
2
true
0
0
Basic auth doesn't have a concept of a logout but your router's page should have some implementation. If not, perhaps it has a timeout and you just leave it. Since you're using the requests module it may be difficult to do an actual logout if there is no endpoint or parameter for it. I think the best one can do at that point is log in again but with invalid credentials. Studying the structure of the router's pages and the parameters that appear in the urls could give you more options. If you want to go a different route and use something like a headless web browser you could actually click a logout button if it exists. Something like Selenium can do this.
2
0
0
I am creating a Python script to configure router settings remotely, but I recently stumbled on the problem of how to log out or close the session after the job is done. From searching I found that Basic-Authentication doesn't have a logout option. How can I solve this in a Python script?
Basic-Auth session using python script
1.2
0
1
395
47,047,727
2017-11-01T03:21:00.000
1
0
1
0
python,maxmind,geoip2
47,048,122
1
false
0
0
You should be able to download the CSV version of the database and import it into SQLite.
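A sketch of the CSV-to-SQLite import with the standard library; the file name, table name and columns are assumptions to be matched to the real CSV export:

```python
import csv
import sqlite3

# Sketch: load a GeoIP CSV export into SQLite. File name, table name
# and column list are assumptions -- match them to the real CSV.
conn = sqlite3.connect('geoip.db')
conn.execute('CREATE TABLE IF NOT EXISTS blocks (network TEXT, country TEXT)')

with open('GeoIP2-Blocks.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    conn.executemany('INSERT INTO blocks VALUES (?, ?)',
                     ((row[0], row[1]) for row in reader))
conn.commit()
conn.close()
```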
1
0
0
I need to read the whole geoip2 database and insert that data into SQL lite database. I tried to read the .mmdb file in the normal way but it prints random characters.
Can we read the geoip2 database file with .mmdb format like normal file in Python?
0.197375
1
0
845
47,048,278
2017-11-01T04:30:00.000
0
0
1
0
python,css,jupyter-lab
62,358,081
5
false
0
0
You can change the font size in the Settings UI: Settings->Fonts->Code->Size for the code editor and Settings->Fonts->Content->Size for the main content. The css file should be <prefix>/share/jupyter/lab/themes/@jupyterlab/<your theme>/index.css. To change the font, find all the places in the css file that look like font settings and change them.
2
18
0
I recently updated to the most recent version of JupyterLab (0.28.12). I'm on Windows. I've tried adjusting the variables.css file located in \Lib\site-packages\jupyterlab\themes\@jupyterlab\theme-light-extension of my Miniconda/Anaconda folder. I mainly want to change the font family and size, which I've tried using the variables.css file. However, I can't see any changes. I went to the extreme point of deleting both theme folders, but still I can change themes without a problem through the Lab interface. Where are the JupyterLab theme .css files located? Or how can I find them? I've searched for css files and the themes sub folder seems to be the only location for them. I can't seem to find any in my user directory either c:\Users\User\.jupyter where the .css files were for Jupyter Notebook were located. Thanks!
jupyterlab - change styling - font, font size
0
0
0
36,582
47,048,278
2017-11-01T04:30:00.000
4
0
1
0
python,css,jupyter-lab
65,618,110
5
false
0
0
It is now possible to change the font sizes of most elements of the interface via the Settings menu .e.g: Settings->JupyterLab Theme->Increase Code Font Size etc. Note: These do not change if View->Presentation Mode is ticked. To change the font style one still needs to go to Settings->Advanced Settings Editor (as mentioned in other answers) - and one can also changes font sizes there - which will take effect even if Presentation Mode is enabled.
2
18
0
I recently updated to the most recent version of JupyterLab (0.28.12). I'm on Windows. I've tried adjusting the variables.css file located in \Lib\site-packages\jupyterlab\themes\@jupyterlab\theme-light-extension of my Miniconda/Anaconda folder. I mainly want to change the font family and size, which I've tried using the variables.css file. However, I can't see any changes. I went to the extreme point of deleting both theme folders, but still I can change themes without a problem through the Lab interface. Where are the JupyterLab theme .css files located? Or how can I find them? I've searched for css files and the themes sub folder seems to be the only location for them. I can't seem to find any in my user directory either c:\Users\User\.jupyter where the .css files were for Jupyter Notebook were located. Thanks!
jupyterlab - change styling - font, font size
0.158649
0
0
36,582
47,051,326
2017-11-01T09:01:00.000
1
0
1
0
python,pandas,stata
47,129,061
2
false
0
0
Just use pandas' read_table(), and make sure to include delim_whitespace=True and header=None.
1
3
1
I am trying to read a Stata (.dta) file in Python with pandas.read_stata, But I'm getting this error: ValueError: Version of given Stata file is not 104, 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), or 118 (Stata 14) Please advise.
unable to read stata .dta file in python
0.099668
0
0
3,485
47,054,052
2017-11-01T11:43:00.000
-2
0
0
0
xml,python-2.7,odoo-10
47,055,227
2
false
1
0
It is not possible to make a specific row (sale order line) readonly, because it is a one2many field and the readonly attribute applies at the field level, not the record level. So in my opinion you have to extend the one2many field widget.
2
1
0
I want to make a single row of the sale order lines 'readonly' under a particular condition. Is it possible with a checkbox: if the checkbox (is_pack) condition is true, that row becomes readonly? Thanks in advance.
Readonly a particular rows field value of sales order line for appropriate condition in odoo10
-0.197375
0
0
420
47,054,052
2017-11-01T11:43:00.000
0
0
0
0
xml,python-2.7,odoo-10
47,071,806
2
false
1
0
It is possible to make a particular row readonly for any given condition.
2
1
0
I want to make a single row of the sale order lines 'readonly' under a particular condition. Is it possible with a checkbox: if the checkbox (is_pack) condition is true, that row becomes readonly? Thanks in advance.
Readonly a particular rows field value of sales order line for appropriate condition in odoo10
0
0
0
420
47,056,308
2017-11-01T13:42:00.000
1
0
1
0
python,caching,pip,virtualenv
47,059,541
1
false
0
0
The wheel cache is venv-independent. It's always %LOCALAPPDATA%\pip\Cache on Windows (and ~/.cache/pip on Ubuntu), so that pip can install the same cached wheels in multiple venvs.
1
0
0
I'm using virtualenv on win7. > python -m pip install [packagename]. My question is where is the default cache directory of pip in virtualenv? Thanks a lot :)
In virtualenv, where is the default directory of python pip install's .whl cache files? both for win7 and ubuntu16.04
0.197375
0
0
434
47,057,011
2017-11-01T14:18:00.000
1
0
1
0
python,django,visual-studio-code
47,057,170
2
false
1
0
It also happened to me when I tried creating a Django project using django-admin.py startproject example. I asked around and found out that django-admin.py does not work this way in the Windows command prompt (I am not really sure about Mac): the .py extension makes Windows open it as a file (launching VSCode) instead of running it as a command, whereas the plain django-admin command does not need the .py extension to execute.
1
0
0
I don't know when and how it started but now I have such a glitch: open CMD enter python command: "django-admin.py help" Visual Studio Code starts up and opens manage.py for editing. The CMD command itself does not return anything. on the other hand, if I enter: "django-admin help" (without .py) the CMD shows help and VSCODE does not react in any way. What is this magic? How to change VSCODE reaction to .py mentioning?
VSCode starts up when I use ".py" extension with CMD commands
0.099668
0
0
93
47,060,759
2017-11-01T17:44:00.000
0
0
1
1
python,linux,bash
47,060,870
1
true
0
0
You should definitely learn basic C++. You really just have to learn <iostream>'s std::cout, std::cin and std::endl, plus <cstdlib>'s system() function for executing commands. Pros: it's available on every Linux system; it's far more extensible than Bash; you can compile your program with static linking so that it doesn't need ANY dependency on a new system. Cons: harder to learn; you have to compile the code.
1
0
0
I like to change my operating system. I also frequently format it when it becomes cluttered and restore the needed files from backup. I started to develop a tiny script in bash that automates some of those tasks, like adding repositories, installing software, setting up wallpaper and panels and so on. Unfortunately, this bash script is getting less and less readable. I was wondering what language I could pick so that, after reinstalling the operating system, I will be able to copy my little program from a pendrive, run it and let it do the whole job for me. Most programming languages require installing some kind of runtime environment (take Java and the JRE as an example). This is why I am focusing on languages that can be run immediately after installing the operating system. As I am only using GNU/Linux systems, bash was an obvious choice, but readability is a downside. I thought about Python, but some operating systems ship 2.X and some 3.X. What can I do to create a tiny generic program that will work on most Linux-based operating systems? I know that this is a pretty hard question without specifying those operating systems, but I simply do not know what operating system I will use in the future (besides the fact that it will be a mainstream Linux OS). We can assume that it is enough if it can run on at least 80 of the 100 operating systems listed on distrowatch.com.
Bootstraping operating system settings after reinstall
1.2
0
0
37
47,061,126
2017-11-01T18:08:00.000
0
0
0
0
python,django
47,061,243
1
false
1
0
You have two options: use blocktrans to translate it (it's fine for big texts too; gettext caches them), or use flatpages and put the content there, like /tos/en/ or /tos/de/ etc. Both are fine.
1
0
0
I will use Django translation functions/tags to translate words and small block of text. But I am wondering whether it is relevant to do the same thing for big textual content like "term of service" or "Privacy Policy" pages ? I see 2 ways : 1) use {% blocktrans %} on the whole text, but it will make a lot of data into the gettext database, it may slow down the translation process of all other strings 2) use as many templates as languages, that is to have for the "Privacy Policy" page these kind of template files : privacy_en.html, privacy_fr.html, privacy_de.html... What would be the correct way ?
Django translation strategy for big textual content
0
0
0
97
47,061,626
2017-11-01T18:45:00.000
45
0
1
0
python,pandas,jupyter-notebook,ipython-notebook
51,229,787
5
false
0
0
Type "nbsp" to add a single space. Type "ensp" to add 2 spaces. Type "emsp" to add 4 spaces. Use the non-breaking space (nbsp) 4 times to insert a tab. eg. &emsp;This is an example.
3
19
0
I am writing descriptive ipynb file and need to give output in markdown with space, but unable to add tab space for printing structured data.
How to get tab space in 'markdown' cell of Jupyter Notebook
1
0
0
59,768
47,061,626
2017-11-01T18:45:00.000
9
0
1
0
python,pandas,jupyter-notebook,ipython-notebook
54,674,431
5
false
0
0
I just had this problem and resorted to the following: This is text with $\;\;\;\;\;\;$ some space inserted.
3
19
0
I am writing descriptive ipynb file and need to give output in markdown with space, but unable to add tab space for printing structured data.
How to get tab space in 'markdown' cell of Jupyter Notebook
1
0
0
59,768
47,061,626
2017-11-01T18:45:00.000
4
0
1
0
python,pandas,jupyter-notebook,ipython-notebook
67,157,179
5
false
0
0
Markdown is used primarily to generate HTML, and HTML collapses whitespace by default. Use "&nbsp;" instead of space characters: type "&nbsp;" to add a single space, "&ensp;" to add 2 spaces, and "&emsp;" to add 4 spaces.
3
19
0
I am writing descriptive ipynb file and need to give output in markdown with space, but unable to add tab space for printing structured data.
How to get tab space in 'markdown' cell of Jupyter Notebook
0.158649
0
0
59,768
47,061,792
2017-11-01T18:55:00.000
0
0
0
1
python,linux
47,061,983
1
false
0
0
You can run the "top" command from within a python script using "subprocess.run()" and get the output in the returned "CompletedProcess" instance.
1
0
0
I'm writing a python script that generates a text file report of CPU usage per core. Really, what I want is the information that top provides once you type 1. However, optimally this would be returned to the terminal (just like running top -b) so I can grep etc. Is there a way of getting this information, either with top or another command, in a format that I can then grep and handle within my python script. Thanks very much!
Generate report of Linux CPU info per core using top
0
0
0
134
47,063,752
2017-11-01T21:13:00.000
4
1
0
0
python,amazon-web-services,email,boto3,amazon-iam
47,064,014
2
false
1
0
It is not possible to change an account's email address (Root) programmatically. You must log in to the console using Root credentials and update the email address.
2
0
0
Every AWS Account has an email associated to it. How can I change that email address for that account using boto3?
How can I change the AWS Account Email with boto3?
0.379949
0
1
494
47,063,752
2017-11-01T21:13:00.000
0
1
0
0
python,amazon-web-services,email,boto3,amazon-iam
58,530,022
2
false
1
0
No. As of Oct 2019 you can't update account information (including the email) using boto or any other AWS-provided SDK.
2
0
0
Every AWS Account has an email associated to it. How can I change that email address for that account using boto3?
How can I change the AWS Account Email with boto3?
0
0
1
494
47,064,078
2017-11-01T21:36:00.000
0
1
0
0
python,telegram,telegram-bot,python-telegram-bot
47,100,798
2
false
0
0
A bot can delete messages: (1) in groups, only its own messages if it is not an admin, otherwise also messages from other users; (2) in private chats, only its own messages. In both cases this works only if the message is not older than 48h. Since you said in the comments that your messages aren't older than 48h, you are probably hitting one of the first two restrictions.
1
1
0
I'm trying to write a Telegram bot, and I need help here: bot.deleteMessage(chat_id=chatId, message_id=mId). This code returns the following error: 400 Bad Request: message can't be deleted. The bot has all the rights needed for deleting messages.
Telegram Bot deleteMessage function returns 400 Bad Request Error
0
0
1
2,259
47,066,314
2017-11-02T01:47:00.000
0
0
0
0
python,machine-learning,random-forest
47,236,204
2
false
0
0
No, your model is not fine. In your dataset around 77% of the records belong to "Label 0" (23,814 of 30,744), which biases your model toward "Label 0". Thus, even though your AUC is low, the model shows 84% accuracy because most of the data belongs to "Label 0". You can undersample the records belonging to "Label 0" or oversample the records belonging to "Label 1" to make your model more balanced. Hope it helps.
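A sketch of the undersampling suggestion with scikit-learn; df is assumed to be a DataFrame with a 'label' column of 0s and 1s:

```python
import pandas as pd
from sklearn.utils import resample

# Sketch: undersample the majority class down to the minority's size.
# df is a hypothetical DataFrame with a 'label' column.
majority = df[df.label == 0]
minority = df[df.label == 1]

majority_down = resample(majority, replace=False,
                         n_samples=len(minority), random_state=42)
balanced = pd.concat([majority_down, minority])
```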
1
0
1
I have built a model which gives me 84% accuracy for random forest and support vector machine, but a very low AUC of only 13%. I am building this in Python, and I am new to machine learning and data science. I am predicting 0 and 1 labels on the dataset. My overall dataset has 30,744 records: Label 1 - 6,930; Label 0 - 23,814. Could you please advise if this is fine? Is the model overfitting? I'd appreciate any suggestions on improving the AUC.
why model is giving high accuracy of 84% but very low AUC 13%?
0
0
0
1,723