Column schema (name, dtype, min, max; for string columns the bounds are string lengths):

Q_Id                                int64           2.93k      49.7M
CreationDate                        stringlengths   23         23
Users Score                         int64           -10        437
Other                               int64           0          1
Python Basics and Environment       int64           0          1
System Administration and DevOps    int64           0          1
DISCREPANCY                         int64           0          1
Tags                                stringlengths   6          90
ERRORS                              int64           0          1
A_Id                                int64           2.98k      72.5M
API_CHANGE                          int64           0          1
AnswerCount                         int64           1          42
REVIEW                              int64           0          1
is_accepted                         bool            2 classes
Web Development                     int64           0          1
GUI and Desktop Applications        int64           0          1
Answer                              stringlengths   15         5.1k
Available Count                     int64           1          17
Q_Score                             int64           0          3.67k
Data Science and Machine Learning   int64           0          1
DOCUMENTATION                       int64           0          1
Question                            stringlengths   25         6.53k
Title                               stringlengths   11         148
CONCEPTUAL                          int64           0          1
Score                               float64         -1         1.2
API_USAGE                           int64           1          1
Database and SQL                    int64           0          1
Networking and APIs                 int64           0          1
ViewCount                           int64           15         3.72M
41,476,490
2017-01-05T02:34:00.000
0
0
1
0
0
python,string
0
41,476,538
0
3
0
false
0
0
And if you want to use join only, you can do it like this: test = "test string".split() followed by "_".join(test). This will give you "test_string" as output.
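Written out as a runnable sketch:

```python
# Split on whitespace, then join the words back together with underscores.
words = "one two three".split()   # ['one', 'two', 'three']
result = "_".join(words)
print(result)  # one_two_three
```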
1
1
0
0
I want to transform the string 'one two three' into one_two_three. I've tried "_".join('one two three'), but that gives me o_n_e_ _t_w_o_ _t_h_r_e_e_... how do I insert the "_" only at spaces between words in a string?
How to join() words from a string?
0
0
1
0
0
10,301
41,485,251
2017-01-05T12:33:00.000
0
0
0
1
0
python,django,docker,containers,celery
0
41,668,121
0
1
0
false
1
0
You can shell into the running container and check things out. Is the celery process still running, etc... docker exec -ti my-container-name /bin/bash If you are using django, for example, you could go to your django directory and do manage.py shell and start poking around there. I have a similar setup where I run multiple web services using django/celery/celerybeat/nginx/... However, as a rule I run one process per container (kind of exception is django and gunicorn run in same container). I then share things by using --volumes-from. For example, the gunicorn app writes to a .sock file, and the container has its own nginx config; the nginx container does a --volumes-from the django container to get this info. That way, I can use a stock nginx container for all of my web services. Another handy thing for debugging is to log to stdout and use docker's log driver (splunk, logstash, etc.) for production, but have it log to the container when debugging. That way you can get a lot of information from 'docker logs' when you've got it under test. One of the great things about docker is you can take the exact code that is failing in production and run it under the microscope to debug it.
1
1
0
0
I have a micro-services architecture of, let's say, 9 services, each one running in its own container. The services use a mix of technologies, but mainly Django, Celery (with a Redis queue), a shared PostgreSQL database (in its own container), and some more specific services/libraries. The micro-services talk to each other through a REST API. The problem is that, sometimes and in a random way, some containers' APIs stop responding and get stuck: when I issue a curl request on their interface I get a timeout, while at that moment all the other containers answer well. There are two containers that get stuck. What I noticed is that both of the blocking containers use: Django, django-rest-framework, Celery, django-celery, an embedded Redis as a Celery broker, and access to a PostgreSQL DB that lives in another container. I can't figure out how to troubleshoot the problem since no relevant information is visible in the service or Docker logs. The problem is that these APIs get stuck only at random moments. To make one work again, I need to stop the blocking container and start it again. I was wondering if it could be a Python GIL problem, but I don't know how to check this hypothesis... Any idea how to troubleshoot this?
Troubleshooting API timeout from Django+Celery in Docker Container
0
0
1
0
0
472
41,485,507
2017-01-05T12:46:00.000
1
0
1
0
0
python,python-3.x,asynchronous,concurrency,python-asyncio
0
41,502,152
0
1
0
true
0
0
var1, var2 = loop.run_until_complete(asyncio.gather(task1, task2)) According to the docs, gather retains the order of the sequence it was passed
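A runnable sketch of that pattern; asyncio.run is the modern spelling of loop.run_until_complete, and a no-op sleep stands in for the aiohttp call:

```python
import asyncio

async def call(url):
    # Stand-in for an async HTTP GET (e.g. with aiohttp); just echoes the URL.
    await asyncio.sleep(0)
    return "response for " + url

async def main():
    # gather preserves the order of the awaitables it is given, so the
    # results unpack into named variables predictably.
    a, b = await asyncio.gather(call("url1"), call("url2"))
    return a, b

a, b = asyncio.run(main())
print(a)  # response for url1
print(b)  # response for url2
```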
1
2
0
0
What I want to achieve is: tasks = [call(url) for url in urls]. call is an async method/coroutine in Python 3.5 that performs GET requests (let's say with aiohttp), so basically all calls to call are async. Now I can run asyncio.wait(tasks) and later access the results in the futures one by one. BUT what I want is, assuming there are only 2 urls: a, b = call(url1), call(url2), something like how you do it in Koa by yielding an array. Any help on how to do this, if it can be done?
Set result of 2 or more Async HTTP calls into named variables
0
1.2
1
0
1
73
41,519,202
2017-01-07T07:56:00.000
-1
0
1
0
0
python,python-3.4
0
49,356,344
0
2
0
false
0
0
Press Ctrl+F6 and the Python shell restarts. Just like 'clear' in a terminal, it also clears all the variables you've assigned values to.
1
0
0
0
I used some commands such as clear, cls and clc, but none of them gave me the desired result. Is there any command that can clear the screen of IDLE?
how to clear the screen of the idle3(python3 shell)?
0
-0.099668
1
0
0
560
41,528,141
2017-01-08T00:26:00.000
1
1
0
0
1
python,html,python-3.x,web
0
44,302,397
0
1
0
true
0
0
So after some good answers and further research, I have found that Selenium is the thing that best suits my needs. It works not only with Python but supports other languages as well. If anyone else is looking for something like what I was after when I asked my question, a quick Google search for "selenium" should give them all the information they need about the tool that worked best for me.
1
0
0
0
Ok, so I've looked around on how to do this and haven't really found an answer that showed me examples I could work from. What I'm trying to do is have a script that can do things like: log into a website, fill out forms or boxes, etc. Something simple that might help me, which I thought of, would be for example a script that would let me log into one of those text-message websites like TextNow or something like that, and then fill out a text message and send it to myself. If anyone knows a good place that explains how to do something like this, or if anyone would be kind enough to give some guidance of their own, that would be greatly appreciated.
How to have python interact automatically with a web site
0
1.2
1
0
1
58
41,550,060
2017-01-09T14:21:00.000
0
0
0
0
0
python,excel
0
60,421,964
0
5
0
false
0
0
I am not sure if this is what the OP was looking for, but if you have to manipulate data in Python without installing any modules (just the standard library), you can try the sqlite3 module, which allows you to interact with SQLite files (a relational database management system). These databases are conceptually similar to an Excel file. If an Excel file is basically a collection of sheets, with each sheet being a matrix where you can put data, SQLite databases are the same (but each "matrix" is called a table instead). This format is scripting friendly, as you can read and write data using SQL, but it does not follow the client-server model other DBMSs are based on. The whole database is contained in a single file that you can email to a colleague, and you can also install a GUI that gives you a spreadsheet-like interface to make it more user-friendly (DB Browser for SQLite is available for Windows, Linux and Mac). This lets you include SQL code in your Python scripts, which adds a lot of data-processing capability, and it is an excellent way to achieve data persistence for simple programs.
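A minimal stdlib-only sketch of that workflow (the table and values are made up):

```python
import sqlite3

# Standard library only: an in-memory database stands in for a spreadsheet.
conn = sqlite3.connect(":memory:")  # use a filename like "report.db" to persist
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.5)])
conn.commit()

# SQL replaces spreadsheet formulas for aggregation.
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 200.5
conn.close()
```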
1
4
0
0
I am new to Python. I use PuTTY to manage some servers. I want to use Python to create an Excel file on each server; for that, I think I can use a command like ssh ip "python abc.py" to create the file. It is possible to write a bash script to manage all the servers. The trouble I've met: the servers can't access the internet, and it is not allowed to use any third-party libraries. On a freshly installed Linux (RedHat 6.5), is there any library in Python that can be used to create Excel files immediately? Please help me, thanks.
How to create an Excel file with only the Python standard library?
1
0
1
1
0
15,636
41,568,395
2017-01-10T12:01:00.000
1
0
0
1
0
macos,python-3.x,terminal,subprocess
0
41,615,003
0
1
0
false
0
0
Thanks for the comments guys, but I managed to figure it out. In the end I used a combination of subprocess.Popen() and os.chdir(), and it seems to work using Jupyter Notebook.
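The same effect can usually be had without scripting the Terminal app at all: subprocess can run each command directly, and its cwd argument replaces the os.chdir() step. A minimal sketch (the child command here is a stand-in for something like ./RunUX with its arguments):

```python
import os
import subprocess
import sys
import tempfile

# Run a command in a chosen working directory without calling os.chdir():
# subprocess's cwd argument selects the directory per call.
workdir = tempfile.mkdtemp()
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd=workdir, capture_output=True, text=True, check=True,
)
# The child really did run inside workdir (realpath smooths over symlinks).
print(os.path.realpath(result.stdout.strip()) == os.path.realpath(workdir))  # True
```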
1
0
0
0
I have recently started using a program which has command line interfaces accessed through the Mac Terminal. I am trying to automate the process whereby a series of commands are passed through the terminal using Python. So far I have found a way to open the Terminal using the subprocess.Popen command but how do I then "write" in the terminal once it's open ? For example what I am looking to do is; 1. Open the Terminal App. 2. Select a directory in the App. 3. Run a command. In this instance the file I wish to run is called "RunUX" and what I want to type is "./RunUX ..." followed by command line arguments. I'm fairly new to Python and programming and appreciate all help !! Thanks
Manipulating the Terminal Using a Python Script
0
0.197375
1
0
0
379
41,573,587
2017-01-10T16:27:00.000
56
0
1
0
0
python,virtualenv,virtualenvwrapper,pyenv,python-venv
0
65,854,168
0
8
0
false
0
0
Let's start with the problems these tools want to solve. My system package manager doesn't have the Python versions I want, or I want to install multiple Python versions side by side (Python 3.9.0 and Python 3.9.1, Python 3.5.3, etc.): then use pyenv. I want to install and run multiple applications with different, conflicting dependencies: then use virtualenv or venv. These are almost completely interchangeable, the difference being that virtualenv supports older Python versions and has a few more minor unique features, while venv is in the standard library. I'm developing an application and need to manage my dependencies, and manage the dependency resolution of the dependencies of my project: then use pipenv or poetry. I'm developing a library or a package and want to specify the dependencies that my library's users need to install: then use setuptools. I used virtualenv, but I don't like virtualenv folders being scattered around various project folders; I want centralised management of the environments and some simple project management: then use virtualenvwrapper (variant: pyenv-virtualenvwrapper if you also use pyenv). Not recommended: pyvenv. It is deprecated; use venv or virtualenv instead. Not to be confused with pipenv or pyenv.
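As an aside, the stdlib venv package can also be driven from Python itself rather than via python -m venv; a minimal sketch (with_pip=False keeps creation fast by skipping the pip bootstrap, and the target path is made up):

```python
import tempfile
import venv
from pathlib import Path

# Create a bare virtual environment programmatically with the stdlib venv module.
target = Path(tempfile.mkdtemp()) / "demo-env"
venv.create(target, with_pip=False)

# Every venv carries a pyvenv.cfg describing the base interpreter it wraps.
print((target / "pyvenv.cfg").exists())  # True
```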
1
1,722
0
0
Python 3.3 includes in its standard library the new package venv. What does it do, and how does it differ from all the other packages that seem to match the regex (py)?(v|virtual|pip)?env?
What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, pipenv, etc?
0
1
1
0
0
438,843
41,575,620
2017-01-10T18:15:00.000
0
0
0
0
0
python,hadoop,graph,random-walk,bigdata
0
44,357,542
0
1
0
false
0
0
My understanding is that you need to process large graphs stored on file systems. There are various distributed graph processing frameworks like Pregel, Pregel+, GraphX, GPS (Stanford), Mizan, PowerGraph, etc., and it is worth taking a look at them. I would suggest coding in C or C++ using OpenMPI, which can help achieve better efficiency; frameworks in Java are not very memory efficient. I am not sure about the Python APIs of these frameworks. It is worth taking a look at blogs and papers that give a comparative analysis of these frameworks before deciding on one to implement.
1
0
1
1
I am working on a project that involves a random walk on a large graph (too big to fit in memory). I coded it in Python using networkx, but soon the graph became too big to fit in memory, and so I realised that I needed to switch to a distributed system. So, I understand the following: I will need to use a graph database (Titan, Neo4j, etc.) and a graph processing framework such as Apache Giraph on Hadoop or GraphX on Spark. Firstly, are there enough APIs to allow me to continue to code in Python, or should I switch to Java? Secondly, I couldn't find exact documentation on how to write my custom traversal function (in either Giraph or GraphX) in order to implement the random walk algorithm.
Large graph processing on Hadoop
0
0
1
0
0
480
41,591,079
2017-01-11T12:32:00.000
0
0
0
0
0
python,django,django-1.10
0
41,592,978
0
1
0
false
1
0
Django's ORM might not be the right tool for you if you need to change your schema (or db) online - the schema is defined in python modules and loaded once when Django's web server starts. You can still use Django's templates, forms and other libraries and write your own custom DB access layer that manipulates a DB dynamically using python.
1
0
0
0
I am developing a Cloud based data analysis tool, and I am using Django(1.10) for that. I have to add columns to the existing tables, create new tables, change data-type of columns(part of data-cleaning activity) at the run time and can't figure out a way to update/reflect those changes, in run time, in the Django model, because those changes will be required in further analysis process. I have looked into 'inspectdb' and 'syncdb', but all of these options would require taking the portal offline and then making those changes, which I don't want. Please can you suggest a solution or a work-around of how to achieve this. Also, is there a way in which I can select what database I want to work from the list of databases on my MySQL server, after running Django.
Changing Database in run time and making the changes reflect in Django in run time
1
0
1
1
0
43
41,627,247
2017-01-13T04:08:00.000
7
0
0
0
0
python,download,ipython,jupyter-notebook,jupyter
0
46,266,094
0
4
0
false
0
0
The download option did not appear for me. The solution was to open the file (which could not be correctly read as it was a binary file), and to download it from the notebook's notepad.
1
24
1
0
I'm using ipython notebook by connecting to a server I don't know how to download a thing (data frame, .csv file,... for example) programatically to my local computer. Because I can't specific declare the path like C://user//... It will be downloaded to their machine not mine
Download data from a jupyter server
0
1
1
0
0
45,980
41,633,039
2017-01-13T10:59:00.000
6
1
1
0
0
python,debugging,assembly,reverse-engineering,obfuscation
0
41,637,182
0
3
0
false
0
0
There's no way to make anything digital safe nowadays. What you CAN do is make it hard, to a point where it's frustrating to try, but I admit I don't know Python-specific ways to achieve that. The amount of security of your program is not actually a function of program security, but of psychology. Yes, psychology. Given the fact that it's an arms race between crackers and anti-crackers, where both continuously attempt to top each other, the only thing one can do is try to make it as frustrating as possible. How do we achieve that? By being a pain in the rear! Every additional step you take to make sure your code is hard to decipher is a good one. For example, you could turn your program into a single compiled block of bytecode, which you call from inside your program. Use an external library to encrypt it beforehand and decrypt it afterwards. Do the same, with extra steps, for the codeblocks of functions. Or have functions in precompiled blocks ready, but broken; at runtime, using byteplay, repair the bytecode with bytes that depend on bytes of other functions, which would then stop your program from working when modified. There are lots of ways of messing with people's heads, and while I can't tell you any Python-specific ways, if you think in the context of "how to be difficult", you'll find the weirdest ways of making it a mess to deal with your code. Funnily enough this is much easier in assembly than in Python, so maybe you should look into executing foreign code via ctypes or whatever. Summon your inner troll!
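The "single compiled block of bytecode" idea can be sketched with the stdlib compile and marshal modules; the encryption step is deliberately left out, so this only shows the compile/serialize/restore round trip:

```python
import marshal

source = "def greet(name):\n    return 'hello ' + name\n"

# Compile the source to a code object and serialize it to bytes.
code = compile(source, "<embedded>", "exec")
blob = marshal.dumps(code)  # these bytes are what you could encrypt and ship

# Later (after decrypting), restore the code object and execute it.
namespace = {}
exec(marshal.loads(blob), namespace)
print(namespace["greet"]("world"))  # hello world
```

Note that marshal output is tied to the interpreter's bytecode version, so the producing and consuming Python must match.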
2
4
0
0
I'm creating a program in python (2.7) and I want to protect it from reverse engineering. I compiled it using cx_freeze (supplies basic security- obfuscation and anti-debugging) How can I add more protections such as obfuscation, packing, anti-debugging, encrypt the code recognize VM. I thought maybe to encrypt to payload and decrypt it on run time, but I have no clue how to do it.
protect python code from reverse engineering
0
1
1
0
0
12,365
41,633,039
2017-01-13T10:59:00.000
3
1
1
0
0
python,debugging,assembly,reverse-engineering,obfuscation
0
41,635,003
0
3
0
false
0
0
Story time: I was a Python programmer for a long time. Recently I joined a company as a Python programmer. My manager had been a Java programmer for a decade, I guess. He gave me a project, and at the initial review he asked me whether we were obfuscating the code, and I said, we don't do that kind of thing in Python. He said they did that kind of thing in Java and wanted the same thing implemented in Python. Eventually I managed to obfuscate code (just removing comments and spaces and renaming local variables), but the entire Python debugging process got messed up. Then he asked me, can we use ProGuard? I didn't know what the hell it was. After some googling I said it is for Java and cannot be used with Python. I also said that whatever we build, we deploy on our own servers, so we don't need to actually protect the code. But he was reluctant and said we have a set of procedures and they must be followed before deploying. Eventually I quit my job after a year, tired of fighting to convince them Python is not Java. I also had no interest in making them think differently at that point in time. TLDR: Because of the open-source nature of Python, there are no viable tools available to obfuscate or encrypt your code. I also don't think it is a problem as long as you deploy the code on your own server (providing software as a service). But if you actually give the product to the customer, there are some tools available to wrap up your code or bytecode and ship it like an executable file. But it is always possible to view your code if they want to. Or choose some other language that provides better protection, if it is absolutely necessary to protect your code. Again, keep in mind that it is always possible to reverse-engineer the code.
2
4
0
0
I'm creating a program in python (2.7) and I want to protect it from reverse engineering. I compiled it using cx_freeze (supplies basic security- obfuscation and anti-debugging) How can I add more protections such as obfuscation, packing, anti-debugging, encrypt the code recognize VM. I thought maybe to encrypt to payload and decrypt it on run time, but I have no clue how to do it.
protect python code from reverse engineering
0
0.197375
1
0
0
12,365
41,634,436
2017-01-13T12:14:00.000
0
0
0
0
0
python,json,python-requests
0
41,634,715
0
1
0
true
0
0
You could answer your question quite simply by reading the source code. But anyway: response.json() does read the response's content, obviously; it's just a convenient shortcut for json.loads(response.content).
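In other words, once the body has been buffered (which happens as soon as the response arrives when stream is False, the default), .json() is just parsing bytes already in memory, roughly:

```python
import json

# What requests buffers for a non-streamed response: the raw body bytes.
body = b'{"user": "alice", "id": 7}'

# response.json() boils down to parsing that buffered body, so by the time
# you call it the connection has already been released back to the pool.
data = json.loads(body)
print(data["user"])  # alice
```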
1
0
0
0
I read the following on the python-requests website: "Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set stream to False or read the content property of the Response object." But since I use the object returned by req.json() and don't use req thereafter, I wonder when the connection is released. I don't really know how to check that for sure either. Many thanks.
In requests-python, when is connection released when using req_json = req.json()?
0
1.2
1
0
1
59
41,652,978
2017-01-14T17:38:00.000
0
0
0
0
0
python,tkinter
0
41,653,041
0
2
0
false
0
1
You could create a couple of variables that hold the size of the canvas, then replace (0, 0) with (self.screenWidth - 0, self.screenHeight - 0).
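Concretely, the change of origin is just a y-axis flip: keep tkinter's native top-left origin and convert each point's y coordinate before drawing. A small helper (the function and variable names are my own):

```python
def to_bottom_left(x, y, canvas_height):
    """Convert a point expressed with a bottom-left origin into tkinter's
    top-left canvas coordinates: x is unchanged, y is flipped."""
    return x, canvas_height - y

# With a 400-pixel-tall canvas, the bottom-left origin maps to (0, 400).
print(to_bottom_left(0, 0, 400))     # (0, 400)
print(to_bottom_left(50, 100, 400))  # (50, 300)
```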
1
3
0
0
When I usually create a canvas, the (0, 0) coord is place on the top left corner of it. Now I want to set it on the bottom left corner. I think I have to set the "scrollbarregion" but I can't understand how to do it. Can someone explain?
Tkinter: Set 0, 0 coords on the bottom of a canvas
0
0
1
0
0
4,720
41,662,821
2017-01-15T15:27:00.000
0
1
1
1
0
python,linux,centos
0
41,662,958
0
3
0
false
0
0
There is no intrinsic reason why Python should be different from any other scripting language here. Here is someone else using Python in init.d: blog.scphillips.com/posts/2013/07/… In fact, that post deals with a lot that I don't deal with here, so I recommend just following it.
1
2
0
0
I'm trying to make a Python script run as a service. It need to work and run automatically after a reboot. I have tried to copy it inside the init.d folder, But without any luck. Can anyone help?(if it demands a cronjob, i haven't configured one before, so i would be glad if you could write how to do it) (Running Centos)
How to run python script at startup
0
0
1
0
0
6,891
41,666,809
2017-01-15T22:01:00.000
2
0
0
0
0
python,video,video-streaming,httprequest,buffering
0
41,672,032
0
1
0
true
0
0
Before playing an MP4 file the client (e.g. browser) needs to read the header part of the file. An MP4 is broken into 'Atoms' and the Moov atom is the header or index atom for the file. For MP4 files that will be streamed, a common optimisation is to move this Moov atom to the front of the file. This allows the client to get the moov at the start and it will then have the information it needs to allow you jump to the offset you want in your case. If you don't have the moov atom at the start the client needs to either download the whole file, or if it is a bit more sophisticated, jump around the file with range requests until it finds it.
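The atom layout described above can be explored with a few lines of stdlib Python. This sketch assumes plain 32-bit atom sizes (no 64-bit "size == 1" extended boxes) and parses a hand-crafted stand-in rather than a real file:

```python
import struct

def iter_atoms(data):
    """Yield (atom_type, offset, size) for top-level MP4 atoms.

    Minimal sketch: each atom starts with a 4-byte big-endian size
    (covering the whole atom) followed by a 4-byte type code.
    """
    offset = 0
    while offset + 8 <= len(data):
        size, atom_type = struct.unpack(">I4s", data[offset:offset + 8])
        yield atom_type.decode("ascii"), offset, size
        offset += size

# Fake file: an 'ftyp' atom followed by a 'moov' atom at the front,
# which is the streaming-friendly layout the answer describes.
fake = struct.pack(">I4s", 16, b"ftyp") + b"\x00" * 8
fake += struct.pack(">I4s", 12, b"moov") + b"\x00" * 4
print([(t, s) for t, _, s in iter_atoms(fake)])  # [('ftyp', 16), ('moov', 12)]
```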
1
1
0
0
I have been breaking my head for the past 2 weeks, and I still can't figure it out. I'm trying to build a server-client streaming player in Python (IronPython for the WPF GUI) that streams video files. My problem is when the client requests to seek to a part that it has not loaded yet. When I try to send just the middle of the .mp4 file, the client can't seem to play it. Now I know such a thing exists, because every online player has it, and it uses the HTTP 206 Partial Content request, where the client just requests the byte range it desires and the server sends it. My question is: how is the client able to play the video with a gap in the bytes of its .mp4 file, i.e. how can it start watching from the middle of the file? When I try it, the player just won't open the file. And more importantly: how can I implement this in my server-client program to enable free seeking? I really tried to look for a simple explanation for this all over the internet... Please explain it thoroughly and in simple terms for a novice such as me; I would highly appreciate it. Thanks in advance.
How does an HTTP 206 Partial Content request work?
1
1.2
1
0
1
1,969
41,668,158
2017-01-16T01:09:00.000
1
0
0
0
0
python,python-3.x,tensorflow,deep-learning,data-science
0
41,763,164
0
1
0
false
0
0
Generally, deep learning algorithms are run on GPUs, which have limited memory, and thus only a limited number of input data samples (commonly called the batch size in the algorithm) can be loaded at a time. In general, a larger batch size reduces the overall computation time (the internal matrix multiplications are done in parallel on the GPU, so with large batch sizes time is saved on reading/writing gradients and possibly some other operations). Another probable benefit of a large batch size: in multi-class classification problems with many classes, a larger batch size helps the algorithm generalize better across the different classes (technically, it avoids over-fitting); a standard technique here is to keep a uniform distribution of classes within each batch. Other factors that come into play when deciding the batch size are the learning rate and the type of optimization method. I hope this answers your question to a certain extent!
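Mechanically, batching is just slicing the training data into fixed-size chunks; a minimal sketch (the batch size of 4 is arbitrary):

```python
def batches(samples, batch_size):
    """Yield successive batches of at most batch_size samples."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

data = list(range(10))
for batch in batches(data, 4):
    print(batch)
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8, 9]
```

Frameworks like TensorFlow provide their own input pipelines for this, but the last, smaller batch seen here is the same edge case those pipelines have to handle.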
1
0
1
0
I am learning TensorFlow (as well as general deep learning). I am wondering when do we need to break the input training data into batches? And how do we determine the batch size? Is there a rule of thumb? Thanks!
TensorFlow: how to determine if we want to break the training dataset into batches
0
0.197375
1
0
0
211
41,671,972
2017-01-16T08:15:00.000
0
0
0
0
0
python,hostname,nat,dhcp,sdn
0
41,672,160
0
1
0
false
0
0
Try socket.gethostbyaddr() from the socket module.
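A minimal sketch of that call; reverse DNS only succeeds when a PTR record exists for the address, so the lookup is wrapped (the helper name is my own):

```python
import socket

def hostname_for_ip(ip):
    """Reverse-resolve an IP to a hostname; None if no PTR record exists."""
    try:
        name, _aliases, _addresses = socket.gethostbyaddr(ip)
        return name
    except OSError:  # covers socket.herror / socket.gaierror
        return None

# On most systems the loopback address reverse-resolves to localhost;
# hosts on a LAN will only resolve if your DNS (or hosts file) knows them.
print(hostname_for_ip("127.0.0.1"))
```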
1
1
0
0
I'm using Python to develop an SDN. I also wrote virtual network functions such as DHCP, NAT, Firewall and QoS. But I want to get a computer's hostname from an IP like 192.168.2.XXX. I tried to use ARP, but it can only find IP and MAC addresses in packets. So how should I get the hostname from a specific IP? Should I try this in DHCP or NAT? Thanks a lot!!
How to get hostname from IP?
0
0
1
0
1
1,035
41,680,636
2017-01-16T16:17:00.000
1
0
1
0
1
python,python-newspaper
1
46,494,795
0
2
0
false
0
0
You can install it by typing pip install newspaper3k at the terminal.
1
4
0
0
I am trying to build a python program that will display various headlines from certain news sites. I used pip to install the module newspaper, but when I run the program, I get the error: ImportError: No module named newspaper Any ideas on how to fix this?
ImportError: No module named newspaper
0
0.099668
1
0
0
8,074
41,710,540
2017-01-18T02:43:00.000
0
0
1
0
0
python,python-3.x,pip,pyautogui
0
41,710,713
0
1
0
true
0
1
If you have multiple versions of Python installed you need to find your versions and rename them and their pips. In Windows the path is C:\Users\USERNAME\AppData\Local\Programs\Python\Python3x-32, where the x should be replaced with the Python version and USERNAME with your username. On Mac it's located in /usr/local/bin/python. On Linux it should be in /usr/bin/python. The location might vary depending on OS and Python version. Rename the files python.exe/python and pip.exe/pip so that each file is different. I named mine python35.exe, python2.exe and python.exe (for 3.5, 2.7 and 3.6). Now when you execute your pip command, use pip34 install pyautogui, or whatever you named the file. Or if you really want to, you can go the painful way of renaming all the path variables, but I won't explain that here.
1
1
0
0
I, as it will soon be obvious, am a total newb when it comes to Python. I am running python version 3.5 on Windows 10, 64 bit. I installed the PyAutoGui module for a small project I am working on. At first, everything worked perfectly. But now it appears that PyAutoGui is crashing when it clicks. I suspect that it's because PyAutoGui is only intended for use up to Python 3.4. In order to rectify this, I downloaded Python 3.4. Unfortunately, however, when I try to install PyAutoGui (using pip install pyautogui), it tells me that it's already been installed because it sees it in the Python 3.5 folder. My question is this: How do I install PyAutoGui in Python 3.4 with it already installed in Python 3.5? Assume that I know virtually nothing about how to install a module manually without using pip Thanks in advance!
Installing PyAutoGui on multiple versions of Python
0
1.2
1
0
0
1,047
41,725,993
2017-01-18T17:40:00.000
0
0
0
0
0
python,machine-learning
0
41,740,148
0
1
0
false
0
0
Any aggregative operation on the word vectors can give you a sentence vector. You should consider what do you want your representation to mean and choose the operation accordingly. Possible operations are summing the vectors, averaging them, concatenating, etc.
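The tf-idf weighted average the question describes can be written out directly; the vectors and weights below are tiny made-up stand-ins for real embeddings and a real tf-idf model:

```python
def sentence_vector(words, vectors, tfidf):
    """tf-idf weighted average of word vectors (one possible aggregation).

    `vectors` maps word -> list of floats, `tfidf` maps word -> weight;
    both are assumed to be precomputed elsewhere.
    """
    dim = len(next(iter(vectors.values())))
    total = [0.0] * dim
    weight_sum = 0.0
    for w in words:
        for i, component in enumerate(vectors[w]):
            total[i] += tfidf[w] * component
        weight_sum += tfidf[w]
    return [t / weight_sum for t in total]

vecs = {"cat": [1.0, 0.0], "sat": [0.0, 1.0]}
weights = {"cat": 3.0, "sat": 1.0}
print(sentence_vector(["cat", "sat"], vecs, weights))  # [0.75, 0.25]
```

The weights pull the sentence vector toward the rarer (higher tf-idf) words, which is the intuition behind why the weighting is used at all.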
1
0
1
0
I am interested in finding sentence vectors using word vectors. I read that by multiplying each word's tf-idf weight with its vector and averaging the results we can get a whole-sentence vector. Now I want to know how these tf-idf weights help us get sentence vectors, i.e. how tf-idf and the sentence vector are related.
How tf-idf is relevant in calculating sentence vectors
0
0
1
0
0
878
41,732,616
2017-01-19T02:12:00.000
0
0
0
0
0
python-2.7,python-3.x
1
45,281,197
0
1
0
false
1
0
First verify that the login is being done by checking the redirected link with print br.geturl(). If it is logging in and you have an HTTP error in your console, use exceptions for the HTTP error, which will redirect you to your page.
1
0
0
0
I am having trouble logging into my Microsoft account using the Python mechanize utility. The username and password are working fine. The problem comes when submitting the form: I get an interim response page with title "continue" and URL some interim_URL. The question is, how do I move on to my intended URL? br.open("intended_URL") doesn't work at all.
python mechanize Http error 100
0
0
1
0
0
57
41,742,720
2017-01-19T13:15:00.000
1
0
1
0
1
python,google-app-engine,int,type-conversion
0
41,743,702
0
1
0
true
0
0
It seems like an encoding issue, but a quick workaround would be to remove '\x00' from each string before converting it. So try int(splitted_line[j].replace('\x00',''))
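Those stray \x00 characters look like UTF-16-encoded text read with the wrong codec (an assumption based on the byte pattern); both the quick replace and a proper re-decode recover the number:

```python
raw = '\x002\x002\x001\x000\x00'

# Quick workaround from the answer: drop the NUL characters.
print(int(raw.replace('\x00', '')))  # 2210

# Likely root cause: the file is UTF-16 (big-endian here) read with the
# wrong codec. The trailing lone NUL is half of the next character, so it
# is stripped before re-decoding the byte pairs.
clean = raw.rstrip('\x00').encode('latin-1').decode('utf-16-be')
print(int(clean))  # 2210
```

If this is right, the cleaner fix is to open the file with the correct encoding in the first place (e.g. open(path, encoding='utf-16')).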
1
0
0
0
I have some weird problem with python int function. I read some file with numeric values and convert these to integers. When I do this locally it's goes fine, but when I upload it to Google App Engine the conversion fails with error: invalid literal for int() with base 10: '' I tried to print the value it's trying to convert and it is 2210. Then I tried to output whole splitted line from file and got this: ['\x00B\x00a\x00u\x00w\x00e\x00n\x00s\x00', '\x002\x002\x001\x000\x00', '\x005\x004\x003\x001\x00', '\x005\x003\x007\x002\x00', '\x005\x002\x006\x005\x00', '\x005\x006\x001\x008\x00', '\x005\x003\x002\x008\x00\r\x00'] I use that code to convert: int(splitted_line[j]) And I am very new to python. Could someone say what I need to do?
How to solve the problem that int() can't convert '\x002\x002\x001\x000\x00' into an integer in Python?
0
1.2
1
0
0
325
41,745,022
2017-01-19T15:02:00.000
1
0
0
0
0
python,machine-learning,tensorflow,neural-network,deep-learning
0
41,792,826
0
1
0
false
0
0
You have 3 main options: multiply your classes, multi-label learning, or training several models. The first option is the most straightforward: instead of having teachers who belong to John and teachers who belong to Jane, you can have teachers whose class is Teachers_John and teachers whose class is Teachers_Jane, and learn to classify into those categories as you would any other set of categories, or use something like hierarchical softmax. The second option is to have a set of categories that includes Teachers as well as John and Jane; now your target is not to correctly predict the one most accurate class (Teachers) but several (Teachers and John). Your last option is to create a hierarchy of models, where the first learns to differentiate between John and Jane and the others classify the inner classes for each of them.
1
0
1
0
I am using the inception v3 model to retrain my own dataset. I have few folder which represent the classes which contain images for each class. What i would like to do is to 'attach' some text ids to these images so when they are retrained and used to run classification/similarity-detection those ids are retrieved too. (basically its image similarity detection) For instance, Image X is of class 'Teachers' and it belongs to John. When i retrain the model, and run a classification on the new model, i would like to get the Teachers class, but in addition to this i would like to know who is teacher (John). Any ideas how to go for it? Regards
Tensorflow Inception v3 retraining - attach text/labels to individual images
0
0.197375
1
0
0
596
41,748,325
2017-01-19T17:43:00.000
0
1
0
0
1
django,postgresql,python-3.x,post,arduino
0
41,748,434
0
1
0
true
1
0
1) It depends: if your Arduino is on the same local network as your Django server, then you don't need a public IP; otherwise you would have to forward your Django server's IP and port so it is accessible from the internet. 2) Not really; you can do a traditional POST request to a normal view in Django.
1
1
0
0
I know it is frowned upon to post questions without code, but I have been stuck for days thinking of how to handle this issue and cant think of a solution. My setup is this: Arduino Mega w/ 4G + GPS Shield from Cooking Hacks Django Server set up with Python Postgresql Database Because the 4G + GPS shield has the capability for http commands, I want to use http POST to send gps data to my Django Server and store that information in my Postgresql database. Another thing to keep in mind is I am running a Django test server on my Localhost, so I need to POST to that local host. Because I am not posting through a form and it is not synchronous I am really confused as to how the Django server is supposed to handle this asynchronous POST. It will look like this (I imagine): Arduino (POST) --> Django Server (Localhost) --> Postgresql Database So I have 2 questions: 1) In order to successfully send a POST to my local Django Server, should my host be my public router IP and the Port be the same as that which I am running my server on? Is there something else I am missing? 2) Do I need to use Django REST Framework to handle the POST request? if not, how would I implement this in my views.py? I am trying to get a reference point on the problem in order to visualize how to do it. I DONT need coded solutions. Any help on this would be greatly appreciated, and if you have any other questions I will be quick to answer.
HTTP POST Data from Arduino to Django Database
0
1.2
1
0
0
1,484
41,751,050
2017-01-19T20:25:00.000
0
1
0
0
0
python,git,jenkins
0
41,751,932
0
1
0
false
0
0
I've made use of the following plugins to achieve this: the Flexible Publish Plugin and the Run Condition Plugin.
1
0
0
0
I'm using Jenkins with python code as follows. After detecting a change to the GIT dev branch: Checkout GIT repository dev branch code Perform Unit tests / code coverage If build passes, check code into the production branch of the same repo What I want to add, is the ability to keep track of the previous code version (the python code package stores the version number in the setup.py file ) and if the version in the latest build job is incremented compared to the saved version, only then check the passed code into the production branch. Any thoughts on how best to achieve this? Thanks
Jenkins - Store code previous version number, and take actions if version number changes
0
0
1
0
0
140
41,754,825
2017-01-20T01:52:00.000
0
0
0
0
0
python,powershell,scripting,server,tableau-api
0
47,515,380
0
1
0
false
0
0
Getting data from Excel to Tableau Server: (1) Set up the UNC path so it is accessible from your server. If you do this, you can then set up an extract refresh to read in the UNC path at the frequency desired. (2) Create an extract with the Tableau SDK: use the Tableau SDK to read in the CSV file and generate an extract file. In our experience, #2 is not very fast. The Tableau SDK seems very slow when generating the extract, and then the extract has to be pushed to the server. I would recommend transferring the file to a location accessible to the server. Even a daily file copy to a shared drive on the server could be used if you're struggling with UNC paths. (Tableau does support UNC paths; you just have to be sure to use them rather than a mapped drive in your setup.) It can be transferred as a file and then pushed (which may be fastest) or it can be pushed remotely. As far as scheduling the two steps (python and data extract refresh), I use a poor man's solution myself, where I update a csv file at one point (Task Scheduler or cron are some of the tools which could be used) and then set up the extract schedule at a slightly later point in time. While it does not have the linkage of running the python script and then causing the extract refresh (surely there is a tabcmd for this), it works just fine for my purposes to put 30 minutes in between, as my processes are reliable and the app is not mission critical.
1
2
1
0
I used python scripting to do a series of complex queries from 3 different RDS's, and then exported the data into a CSV file. I am now trying to find a way to automate publishing a dashboard that uses this data into Tableau server on a weekly basis, such that when I run my python code, it will generate new data, and subsequently, the dashboard on Tableau server will be updated as well. I already tried several options, including using the full UNC path to the csv file as the live connection, but Tableau server had trouble reading this path. Now I'm thinking about just creating a powershell script that can be run weekly that calls the python script to create the dataset and then refreshes tableau desktop, then finally re-publishes/overwrites the dashboard to tableau server. Any ideas on how to proceed with this?
Tableau: How to automate publishing dashboard to Tableau server
0
0
1
0
0
1,415
41,771,459
2017-01-20T20:02:00.000
4
0
1
0
0
python,package,atom-editor,conda,virtual-environment
1
47,923,174
0
2
0
false
0
0
One way is to start Atom from the activated virtual environment. In this case, executing programs/scripts uses the configured Python interpreter and imports the packages installed in the virtual environment. EDIT: It's been a long time, but this might be useful for people redirected to this question: by installing atom-python-virtualenv you can create, change or deactivate virtual environments from within the Atom editor.
1
12
0
0
Don't have much expertise in programming. Only picked up Python last summer. I have installed both Atom and Conda on my computer. Recently, I've used Atom to edit my scripts in Python, then run the scripts via Command Line. As per standard practice, I created Virtual Environments where I installed packages I needed to run different Python scripts. I now want to use Atom as an IDE, and so have installed the Script package on Atom so I can run my scripts in Atom itself. However, when I tried running a Python script that required the Python numpy package, I got this: ImportError: No module named 'numpy' This error is obviously going to appear for other packages that haven't already been installed in the root environment (I think?). So now, my question is how do I activate the needed Virtual Environment in Atom? In other applications like Jupyter and Spyder, I would activate the Virtual Environment I needed then open the Application via Command Line, but I can't do that with Atom. (If possible, is there a way to use Virtual Environments created by Conda) Thanks
Activating Python Virtual Environment in Atom
1
0.379949
1
0
0
10,521
41,789,176
2017-01-22T09:10:00.000
-1
0
0
0
0
python,amazon-web-services,amazon-s3,boto
0
41,790,354
0
5
0
false
0
0
As of now, you cannot get such information without downloading the zip file. You can store the required information as the metadata for a zip file when uploading to s3. As you have mentioned in your question, using the python functions we are able to get the file list without extracting. You can use the same approach to get the file counts and add as metadata to a particular file and then upload it to S3. Hope this helps, Thanks
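A sketch of the approach the answer suggests: count the entries locally with the stdlib `zipfile` module before upload, then store the count as object metadata. An in-memory zip stands in for the real archive here, and the boto upload step is only mentioned in a comment (`ExtraArgs={'Metadata': ...}` is boto3's way of attaching metadata, not shown running):

```python
import io
import zipfile

# Build a small zip in memory to stand in for the real archive
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.jpg", b"fake image bytes")
    zf.writestr("b.jpg", b"more fake bytes")

# Count the entries without extracting anything
with zipfile.ZipFile(buf) as zf:
    file_count = len(zf.namelist())

# This dict could then be attached as object metadata at upload time,
# e.g. boto3's upload_file(..., ExtraArgs={'Metadata': metadata})
metadata = {"file-count": str(file_count)}
print(metadata)  # {'file-count': '2'}
```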
1
7
0
0
Case: There is a large zip file in an S3 bucket which contains a large number of images. Is there a way without downloading the whole file to read the metadata or something to know how many files are inside the zip file? When the file is local, in python i can just open it as a zipfile() and then I call the namelist() method which returns a list of all the files inside, and I can count that. However not sure how to do this when the file resides in S3 without having to download it. Also if this is possible with Lambda would be best.
How to count files inside zip in AWS S3 without downloading it?
1
-0.039979
1
0
1
3,417
41,793,953
2017-01-22T17:26:00.000
2
0
0
0
1
python,tkinter
0
41,794,262
0
1
0
true
0
1
You should assign an IntVar (or possibly StringVar) to the checkbutton when you create it, via its variable= configuration option. You call .get() on this var to check the button's state, and .set() to change its state.
1
0
0
0
I have a checkbutton inside of a menu widget in python with tkinter. (Using python 3.5.2). I know that with normal checkbuttons you can select or deselect the checkbuttons using checkbutton.select() and checkbutton.deselect(). I need to know how to do this with the checkbuttons that I have in the menu object. I have tried the menu.entrybutton.configure(id, coption) method but there is no coption for selecting and deselecting checkbuttons within the menu. Any help would be appreciated.
Selecting and deselecting tkinter Menu Checkbutton widgets
0
1.2
1
0
0
2,380
41,850,349
2017-01-25T11:22:00.000
0
0
1
0
0
python,scikit-learn
0
41,851,421
0
2
0
false
0
0
I'm not sure if there is a single method of treating class_weight for all the algorithms. The way Decision Trees (and Forests) deals with this is by modifying the weights of each sample according to its class. You can consider weighting samples as a more general case of oversampling all the minority class samples (using weights you can "oversample" fractions of samples).
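The per-sample weighting described above can be sketched directly - each sample's weight is simply the weight of its class (a hand-rolled illustration, not scikit-learn's internal code):

```python
# Expand a class_weight mapping into per-sample weights,
# mirroring how tree-based estimators apply class_weight.
class_weight = {0: 1.0, 1: 5.0}   # emphasize minority class 1
labels = [0, 0, 1, 0, 1]

sample_weight = [class_weight[y] for y in labels]
print(sample_weight)  # [1.0, 1.0, 5.0, 1.0, 5.0]
```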
1
0
1
0
I would like to know how scikit-learn puts more emphasis on a class when we use the parameter class_weight. Is it an oversampling of the minority class?
How class_weight emphasis a class in in scikit-learn
0
0
1
0
0
420
41,856,832
2017-01-25T16:34:00.000
0
0
1
0
0
python,python-2.7,operating-system,locks,readerwriterlock
0
41,869,149
0
1
0
true
0
0
Writer: (1) Upload a file W; if this fails, wait and try again. (2) Upload a file R; if this fails, wait and try again. (3) Do as many writes as desired. (4) Remove W. (5) Remove R. Reader: (1) Upload a file R; if this fails, wait and try again. (2) Check for the existence of a file W; if it exists, remove R and return to step 1. (3) Do one read; if multiple reads are needed, return to step 2. (4) Remove R. You can use the Python module ftplib (or, for SFTP, paramiko) to implement the above operations.
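A minimal sketch of this lock-file protocol, using a local directory as a stand-in for the FTP area (the real thing would do the same create/remove steps over ftplib with STOR/DELE); exclusive-create (`'x'` mode) makes the acquire step atomic, and all names are illustrative:

```python
import os
import tempfile

shared = tempfile.mkdtemp()          # stands in for the shared FTP area
w_lock = os.path.join(shared, "W")
r_lock = os.path.join(shared, "R")

def try_acquire(path):
    """Create the lock file, failing if it already exists."""
    try:
        with open(path, "x"):
            return True
    except FileExistsError:
        return False

# Writer: take W first, then R
assert try_acquire(w_lock)
assert try_acquire(r_lock)

# A reader arriving now must back off: R is held and W exists
reader_ok = try_acquire(r_lock)
writer_present = os.path.exists(w_lock)
print(reader_ok, writer_present)     # False True
```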
1
0
0
0
For simplicity, this question involves two types of computers: type A and type B. There is one computer of type A, and many of type B. B hosts can write to and read from FTP. A is a computer which can only read from FTP. As you might already guess, FTP is the shared area which needs to be protected by a readers-writers lock. Does anybody know of an already existing Python package which handles this scenario? If not, does anybody have an example of how it could be implemented for such a need? I guess that some locks should be implemented as files on FTP, since we are dealing with processes from different hosts. Thanks
Reader writer lock with preference to writers
0
1.2
1
0
0
460
41,867,109
2017-01-26T04:52:00.000
0
0
1
0
0
python,coala,coala-bears
0
45,793,120
0
1
0
false
0
0
Hey, the AnnotationBear yields HiddenResult objects, which are results meant to be used by other bears and not to be directly viewed by the user. If you are trying to test coala, you should check bears which actually give results, e.g. PyFlakesBear.
1
0
0
0
When I tried to check the bear results for a Python file by using coala --bears AnnotationBear -f add.py --save, it asked for the language setting, to which I gave "python". Then on checking the .coafile I didn't find any result that AnnotationBear is supposed to give. So, how do I check the result?
Checking result by applying bear on file
0
0
1
0
0
49
41,889,588
2017-01-27T08:24:00.000
1
0
1
0
1
python,user-interface,model-view-controller,interface
0
41,889,758
0
1
0
true
0
1
Well maybe have the function in the Core module return some specifier that such a thing has happened (found multiple) along with the given names, then display the choice to the user and call a function in the Core module that returns relevant information about that file. Bear in mind you do not have to be dogmatic regarding such restrictions, there are some situations where having code in the GUI is much less of a hassle than having to integrate some way of it to work in between modules. This is where you make a decision how to go about writing the code, bearing in mind how important this feature is to you, how testable/maintainable you need it to be.
1
0
0
0
Hi, I know this is a pretty basic design question, but I don't really get it... I'm writing it in Python with PySide, but I think this is more of a language-independent question. A simplified example of what I want to do: I have a GUI with a button that opens a file dialog. In this one I choose a folder. The code scans the suffixes of the files in the folder and returns the 3 needed ones, let's say .mp3, .txt and .mov, and shows them in the GUI. Up to this point the separation should be no problem: I would have a GUI class that runs the code of the core class, gets the three files as return values and sets up the GUI. What I am wondering about is what happens when there is more than one file matching the .mp3 suffix. I would want to have a popup with a combobox to select the one I want to use. But I don't really get how to implement it without adding GUI code to the core class.
clean divide Code and Gui
0
1.2
1
0
0
48
41,912,691
2017-01-28T17:29:00.000
0
0
1
0
0
python,python-3.x,pyqt,pycharm
0
41,919,317
0
1
0
true
0
1
Since you do seem to have PyQt installed, my guess is that you have multiple Python versions installed (version 3.4 and version 3.6) and that PyQt is only installed under 3.6, but that PyCharm and the Designer are configured to use 3.4. I don't know how to change the Python interpreter in the Qt Designer as I never use it. However, in PyCharm open the settings and look for the "Project Interpreter" tab. There you can configure the default Python interpreter that is used for your project. It even shows the installed packages for that interpreter. When you run a Python program from PyCharm, the first line in the output shows which Python interpreter was used. This way you can check if it is as expected. If it is still not correct, it can be that you have overridden it in your Run Configuration. Select "Edit Configuration" from the "Run" menu. This will open a dialog with Run Configuration settings for the Python script that you last executed. Check the "Python Interpreter" there and change it if needed.
1
1
0
0
I am a beginner and have 2 issues, which may be related to each other. 1. I am using PyCharm, and when I put "from PyQt4 import QtCore, QtGui, uic" I get a red line under each word (except from & import) saying "unresolved reference". 2. I have PyQt4/Designer installed (I know it is because I have made a GUI), but when I click 'view code' for the GUI, it says "unable to launch C:/Python34/Lib/site-packages/PyQt4\uic" Maybe a path issue??? Like I said, I am very new to Python/Qt and really do not know how to check the path and/or change it if it is wrong. I downloaded Python 3.6.0, PyCharm 2016.3.2, Qt 4.8.7
PyCharm not recognizing PyQT4 and PyQt4 not allowing me to 'view code'
0
1.2
1
0
0
1,774
41,925,527
2017-01-29T20:21:00.000
0
0
0
0
1
python,node.js,multithreading,sockets,zeromq
0
42,099,793
0
1
0
false
0
0
There's no obligation to use a single socket for the two-way comms. Two is perfectly fine. This means you can have PUB/SUB to broadcast from your NodeJS to your Python code. That's the easy part. Then have a separate PUSH/PULL socket back the other way - the Python does the pushing, the NodeJS does the pulling. One PUSH socket per Python thread, and just one PULL socket in the NodeJS (it will pull from any of the push sockets). Thus whenever one of the Python threads wants to send something to the NodeJS, it simply sends it through the PUSH socket. AFAIK the NodeJS can 'bind' its PULL socket, and the Python can 'connect' its PUSH socket, which is something you can do if one wants to feel that the NodeJS is the 'server'. Though this is unnecessary - either end can bind, so long as the other end connects. Remember though that ZeroMQ is Actor model programming; there are no clients or servers, there are just actors. 'Bind' and 'connect' are only mentioned at all because it's all implemented on top of tcp (or similar), and it's the tcp transport that has to be told who's binding and connecting. Have each thread responsible for its own socket. Though given that Python threads aren't real threads you're not going to get a speed-up through having them (unless they've gone and got rid of the global interpreter lock since I last looked). The ZeroMQ context itself sets up a thread (or threads) which marshals all the actual message transfers in the background, so the IO is already significantly overlapped.
1
0
0
0
I'm learning about the ZeroMQ patterns, and I need to implement the following: NodeJS will send messages to many python threads, but it doesn't need to wait for the answers synchronously, they can come in any order. I know that the publish/subscribe pattern solves it in one way: it can send to many, but how do the python workers send the reply back? Also, in order for the python threads to receive the message, which is the better design: the python process receives the message and sends to the appropriate thread (don't know how to do it), or each thread is responsible to receive its own messages?
ZeroMQ: publish to many, receive replys in any order
0
0
1
0
0
68
41,927,996
2017-01-30T01:45:00.000
-1
0
0
0
0
python,django,sqlite
0
41,928,825
0
2
1
false
1
0
Each Django model is a class which you import in your app to be able to work with it. To connect models together you can use foreign keys to define relationships, i.e. your Page class will have a foreign key to Book. To store lists in a field, one of the ways of doing it is to convert a list to a string using the json module and define the field as a text field. json.dumps converts the list to a string, json.loads converts the string back to a list. Or, if you are talking about other "lists" in your question, then maybe all you need is just Django's basic queryset that you get with Model.objects.filter(). A queryset is a list of rows from a table.
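A sketch of the json round-trip described above - the list would live in a Django TextField, serialized on save and parsed back on load (field and file names are illustrative):

```python
import json

# What would be stored in a TextField for, say, Page.images
images = ["cover.png", "map.png"]

stored = json.dumps(images)      # -> '["cover.png", "map.png"]'
restored = json.loads(stored)    # back to a Python list

print(restored == images)  # True
```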
1
1
0
0
I am currently trying to implement a book structure in Django in the model. The structure is as follows Book Class: title pages (this is an array of page objects) bibliography (a dictionary of titles and links) Page Class: title sections (an array of section objects) images (array of image urls) Section Class: title: text: images (array of image urls) videos (array of video urls) I am pretty new to Django and SQL structuring. What my question specifically is, what would be the best method in order to make a db with books where each entry has the components listed above? I understand that the best method would be to have a table of books where each entry has a one to many relationship to pages which in turn has a one to many relationship with sections. But I am unclear on connecting Django models together and how I can enable lists of objects (importantly these lists have to be dynamic).
Book Structure in Django
0
-0.099668
1
0
0
296
41,964,500
2017-01-31T18:05:00.000
0
0
1
0
1
python,pip,python-3.5,pymysql,pyc
0
41,965,456
0
2
0
false
0
0
Use cx_Freeze, PyInstaller or virtualenv. Or copy the module's code and put it in your project. Read up on how Python imports work.
1
1
0
0
I'm making a program that uses PyMySql and I'd like people to be able to run my program without going through the manual installation of PyMySql, is there a way I can achieve that? I've already tried compiling to .pyc but that doesn't seem to work, in fact when I uninstall PyMySql it doesn't work anymore. PS: There probably are better languages to do that but it's a homework assignment for school and can't use anything but python, also sorry for my bad english
If I install modules with pip, how can I make sure other people can run my program without having that module installed?
1
0
1
0
0
75
41,970,630
2017-02-01T01:46:00.000
2
0
1
1
1
python-3.x,anaconda,jupyter-notebook
0
42,047,557
0
1
0
true
0
0
Looks like this was fixed in the newest build of anaconda (4.3.0 .1). Unfortunately looks like it requires uninstall and reinstall as the locations seems to have changed drastically (from some subsubsub folder off of AppData to something higher up, under user directory). (But that might be the effect of testing 4.3.0.1 on a different machine.) For example, ipython is now: C:\Users\user_name\Anaconda3\python.exe C:\Users\user_name\Anaconda3\cwp.py C:\Users\user_name\Anaconda3 "C:/Users/user_name/Anaconda3/python.exe" "C:/Users/user_name/Anaconda3/Scripts/ipython-script.py" Here is changelog for 4.3.0.1: In this “micro” patch release, we fixed a problem with the Windows installers which was causing problems with Qt applications when the install prefix exceeds 30 characters. No new Anaconda meta-packages correspond to this release (only new Windows installers).
1
3
0
0
After installing anaconda 4.3 64-bit (python 3.6) on windows, and choosing "install for current user only" and "add to path": I noticed that the anaconda program shortcuts don't work on my start menu--they are cut off at the end. Does anyone know how the correct entries should read? (or instead, how to repair the links?) thanks. UPDATE: I reproduced the problem on two other machines, Windows 10 (x64) and windows 8.1 (x64), that were "clean" (neither one had a prior installation of python). This is what they are after a fresh install (under "Target" in "Properties" in the "Shortcut" tab for each shortcut item): JUPYTER NOTEBOOK: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc JUPYTER QTCONSOLE: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L SPYDER: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L RESET SPYDER: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc NAVIGATOR: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 
"C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L IPYTHON: C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc
Anaconda 4.3, 64-bit (python 3.6), leaves incorrect truncated paths in windows Start menu
0
1.2
1
0
0
1,261
41,974,959
2017-02-01T08:21:00.000
1
0
0
0
1
python,django,apache,ssh
1
41,976,000
0
1
0
true
1
0
Your question is confusing. If you deployed it with Apache, it's running through Apache and not through runserver. You might have additionally started runserver, but that is not what is serving your site.
1
0
0
0
I recently deployed a Django site on a DigitalOcean droplet through Apache. I did python manage.py runserver through ssh and now the Django site is running. However, it stayed on even after the ssh session expired (understandable because it's still running on the remote server) but how do I shut it down if I need to? Also, due to this, I don't get error messages on the terminal if something goes wrong like I do when I develop locally. What would be a fix for this?
Is it normal that the Django site I recently deployed on Apache is always on?
0
1.2
1
0
0
48
41,992,104
2017-02-02T00:00:00.000
0
0
1
1
0
python,linux
0
41,992,148
1
2
0
false
0
0
1) You should not modify the system's binaries yourself directly. 2) If your $PATH variable doesn't contain /usr/local/bin, the naming of that secondary directory isn't really important. You can install/upgrade independently wherever you have installed your extra binaries. 3) For Python specifically, you could also just use conda/virtualenv invoked by your system's Python to manage your versions & projects.
1
7
0
0
On Linux, specifically Debian Jessie, should I use /usr/bin/python or should I install another copy in /usr/local/bin? I understand that the former is the system version and that it can change when the operating system is updated. This would mean that I can update the version in the latter independently of the OS. As I am already using python 3, I don't see what significant practical difference that would make. Are there other reasons to use a local version? (I know there are ~42 SO questions about how to change between version, but I can't find any about why)
/usr/bin/python vs /usr/local/bin/python
1
0
1
0
0
6,843
42,003,461
2017-02-02T13:27:00.000
-2
0
0
0
0
python,e-commerce,bigcommerce
0
48,835,159
0
1
0
false
1
0
This will create the product on the BigCommerce website. You create the image after creating the product, by entering the following line. The image_file tag should be a fully qualified URL pointing to an image that is accessible to the BigCommerce website, being found either on another website or on your own webserver. api.ProductImages.create(parentid=custom.id, image_file='http://www.evenmore.co.uk/images/emgrab_s.jpg', description='My image description')
1
3
0
0
how do I upload an image (from the web) using Bigcommerce's Python API? I've got this so far: custom = api.Products.create(name='Test', type='physical', price=8.33, categories=[85], availability='available', weight=0) Thank you! I've tried almost everything!
Bigcommerce Python API, how do I create a product with an image?
0
-0.379949
1
0
1
438
42,006,246
2017-02-02T15:34:00.000
1
0
0
0
0
python,django,django-allauth
0
42,011,936
0
1
0
false
1
0
@pennersr was kind enough to answer this on the allauth github page: This truly all depends on how you model things, there is nothing in allauth that blocks you from implementing the above. One way of looking at things is that the signup form is not different at all. It merely contains an additional switch that indicates the type of user account that is to be created. Then, it is merely a matter of visualizing things properly, if you select type=employer, then show a different set of fields compared to signing up using type=developer. If you don't want such a switch in your form, then you can store the type of account being created somewhere in the session, and refer to that when populating the account.
1
0
0
1
Having read many stack overflow questions, tutorials etc on all-auth I keep getting the impression that it only supports the registration of one type of user per project. I have two usecases A business user authenticates and registers his business in one step. A developer user authenticates and just fills in the name of his employer (software company). I do not want the developer to see the business fields when he signs up. i.e his signup form is different. If, in fact signup should be common and the user specific details should be left to a redirect, how to accomplish this from social auth depending on user type?
Django All-Auth Role Based Signup
0
0.197375
1
0
0
376
42,040,813
2017-02-04T13:15:00.000
-7
0
1
0
0
python,arrays,python-3.x,sorting
0
42,040,862
0
2
0
false
0
0
A list is a data structure that has characteristics which make it easy to do some things. An array is a very well understood standard data structure and isn't optimized for sorting. An array is basically a standard way of storing the product of sets of data. There hasn't ever been a notion of sorting it.
1
1
1
0
Why doesn’t the array class have a .sort()? I don't know how to sort an array directly. The class array.array is a packed list which looks like a C array. I want to use it because only numbers are needed in my case, but I need to be able to sort it. Is there some way to do that efficiently?
Why doesn’t 'array' have an in-place sort like list does?
0
-1
1
0
0
144
42,053,240
2017-02-05T14:31:00.000
0
0
1
0
0
python-3.x,crash,python-idle
0
42,232,624
0
1
0
true
0
0
In the end, I just retyped my code. Luckily, I'd done a backup the previous night, so didn't lose too much. I am now making sure to do daily backups.
1
1
0
0
I was writing my code, then I pressed Ctrl+S. It then started not responding. I closed it and came back on to find the file was now empty! Anyone know how I can retrieve it?
Python IDLE crashed when saving and all my code disappeared
0
1.2
1
0
0
431
42,057,667
2017-02-05T21:42:00.000
1
0
0
0
0
python,tensorflow
0
42,057,766
0
1
0
false
0
0
Not sure exactly what you are asking; I will answer what I understood. In case you want to predict only one class, for example digit 5 versus the rest of the digits: first you need to relabel your vectors, e.g. label all vectors whose ground truth is 5 as 'one' and those whose ground truth is not 5 as 'zero'. Then design your network with only two nodes in the output, where the first node shows the probability that the input vector belongs to class 'one' (i.e. digit 5) and the second node shows the probability of belonging to class 'zero'. Then just train your network. To find accuracy, you can use a simple technique: just count how many times you predict correctly, i.e. if the probability for a class is higher than 0.5, classify the input as that class. I hope that helps; if not, maybe it would be better if you could explain your question more precisely.
1
1
1
0
I want to find the accuracy of one class in the MNIST dataset. So how can I split it on the basis of classes?
How can we use MNIST dataset one class as an input using tensorflow?
1
0.197375
1
0
0
567
42,058,677
2017-02-05T23:51:00.000
0
0
1
0
0
python,pip,virtualenv
0
42,059,013
0
2
0
false
0
0
You can use the history command to view the history of all your commands and then grep for pip, with output to a file. Similar to the comment above.
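A sketch of the grep approach; a stand-in history file is used here for demonstration, and all file names are illustrative (in practice you would grep `~/.bash_history`, or pipe `history` directly):

```shell
# Stand-in history file for demonstration; in practice you would
# grep ~/.bash_history (or run: history | grep 'pip ')
printf 'ls -la\npip install requests\ncd /tmp\n' > history_sample.txt

# Keep only the pip lines, logged to a file
grep 'pip ' history_sample.txt > pip_commands.log
cat pip_commands.log   # pip install requests
```

For the second part of the question, `pip freeze > requirements.txt` snapshots the environment's currently installed packages (though it will not recover the `--install-option` flags used originally).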
1
1
0
0
I'd like to keep a record of all pip commands that were executed in a given virtual environment and of the package versions that got installed/updated/removed. Is there an easy way to do that? Alternatively, how do I get requirements.txt (including --install-option, etc.) out of my virtual environment state, if that's possible? Presumably, only the immediate dependencies need to be there.
log all pip commands in a given virtual environment?
0
0
1
0
0
337
42,059,103
2017-02-06T01:03:00.000
0
0
0
0
0
python,tensorflow,one-hot-encoding
0
59,105,698
0
1
0
false
0
0
While preparing the data you can use numpy to set all the data points in class 5 to 1 and the others to 0: arr = np.where(arr == 5, 1, 0). Then you can create a binary classifier using TensorFlow to classify them, using a binary_crossentropy loss to optimize the classifier.
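A runnable sketch of the relabeling with numpy (note the `==` comparison: the goal is 1 for class 5 and 0 otherwise):

```python
import numpy as np

labels = np.array([3, 5, 2, 5, 0])

# 1 where the ground truth is 5, 0 everywhere else
binary = np.where(labels == 5, 1, 0)
print(binary)  # [0 1 0 1 0]
```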
1
2
1
0
In case you want to predict only one class: first you need to label your vectors in such a way that all those vectors which have ground truth 5 are labeled 'one' and those whose ground truth is not 5 are labeled 'zero'. How can I implement this in TensorFlow using Python?
how to predict only one class in tensorflow
0
0
1
0
0
164
42,060,186
2017-02-06T04:08:00.000
2
0
1
0
0
python,gspread
0
42,066,652
0
1
0
true
0
0
You can access a spreadsheet key with mySpreadSheet.id after you have opened it by title.
1
2
0
0
How do I get the key of the workbook if I know only the name of the workbook? I can use open-by-title, but once I'm in, I couldn't find a get-key type method in the docs. Is there a way to get the key by only knowing the title?
gspread get key once opened by title
0
1.2
1
0
0
140
42,072,399
2017-02-06T16:16:00.000
0
0
1
0
0
python,python-docx
0
42,074,715
0
1
0
true
0
0
There is no "centralized authority" in a Word document of what Fonts have been used. You'll need to parse through the full document and detect them yourself. Runs are the right place to look, but you'll also need to check styles, both paragraph and character styles. Also, to be thorough, you'll need to check the document default font.
1
0
0
0
I am using Python docx 0.8.5 I can't seem to be able to figure out how to get a list of typeface and sizes used in a document There is a Font object, accessible on Run.font but I can't handle this problem. Can somebody please point me to an example? Thanks
get a list of typeface and sizes used in a docx
0
1.2
1
0
0
51
42,079,675
2017-02-07T00:23:00.000
0
0
1
0
1
python-2.7
0
42,079,696
0
1
0
false
0
0
The .pyc files are not readable by humans - the Python interpreter compiles the source code into these files, and they are used by the Python virtual machine. You can delete these files, and when you run the .py file again, you will see a new .pyc file created.
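You can watch this compile step happen yourself with the standard-library py_compile module (the module name here is made up for the demo):

```python
import os
import py_compile
import tempfile

# Write a tiny throwaway module, byte-compile it, and confirm the .pyc appears.
src = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(src, "w") as f:
    f.write("x = 1\n")
pyc_path = py_compile.compile(src)  # returns the path of the compiled file
print(pyc_path)
```

Deleting the .pyc and re-running the .py simply regenerates it.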
1
0
0
0
I was recently trying to make my own module when I realised a copy of my module had been made but instead of ending in .py like the origional, it ended in .pyc. When I opened it, I could not understand a thing. I was using the import to make a game from pygame and the fact that the .pyc file had a bunch of question marks and weird symbols seemed to be helpful for hackers if I ever make a game good enough for release which probably wont happen. I just want to know a few things about these files: Can other computers that download the game still read the module if I delete the original and only leave the weirder .pyc file? Are they readable by humans and can they actually prevent hacks on downloaded game? (its not online I just don't want a easy game for people who know python) Should I get rid of them for what I am doing? (I saw other questions asking how to do that but the answers said it was helpful) Last but not least, will it work for .txt files (will they not just be read as a bunch of symbols)? Thanks!
are there limitations on .pyc files?
0
0
1
0
0
66
42,081,790
2017-02-07T04:33:00.000
2
0
0
0
0
python,pandas,dataframe
0
42,081,957
0
1
0
true
0
0
Altering @VaishaliGarg's answer a little, you can use df.groupby(['Qgender','Qmajor']).count(). If you need a flat dataframe out of it, add .reset_index(), since the result carries the group keys in its index: df.groupby(['Qgender','Qmajor']).count().reset_index()
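A small worked example (the rows are invented; the column names come from the question):

```python
import pandas as pd

# Toy frame mirroring the question's three columns.
df = pd.DataFrame({
    "Qnames":  ["Ann", "Bob", "Cat", "Dan"],
    "Qmajor":  ["CS", "CS", "Math", "CS"],
    "Qgender": ["Female", "Male", "Female", "Male"],
})
# Count people per (gender, major) pair, then flatten back to a dataframe.
counts = df.groupby(["Qgender", "Qmajor"]).count().reset_index()
print(counts)
```

Each row of `counts` now gives the number of people for one gender/major combination (the `Qnames` column holds the counts).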
1
1
1
0
Sorry about the vague title, but I didn't know how to word it. So I have a pandas dataframe with 3 columns and any amount of rows. The first column is a person's name, the second column is their major (six possible majors, always written the same), and the third column is their gender (always 'Male' or 'Female'). I was told to print out the number of people in each major, which I was able to accomplish by saying table.Qmajor.value_counts() (table being my dataframe variable name). Now I am being asked to print the amount of males and females in each major, and I have no idea where to start. Any help is appreciated. The column names are Qnames, Qmajor, and Qgender.
Pandas dataframe: Listing amount of people per gender in each major
1
1.2
1
0
0
5,858
42,087,789
2017-02-07T10:46:00.000
1
0
1
0
1
python,spyder
0
42,121,732
1
2
0
false
0
0
(Spyder developer here) We're aware of these problems in the Python console, but unfortunately we don't know how to fix them. Please use the IPython console instead because the Python console is going to be removed in Spyder 3.2.
1
0
0
0
I am currently using Spyder and have been for a long time, however I downloaded anaconda recently and started using Spyder for Python 3.5 which gives me several problems. Whenever I run a script in the Python Console, I have to run it twice and then when I am finished running it and want to run a new I have to kill the current process and reload it. I am currently using some scripts with threading, but that never used to be a problem before I upgraded, anyone have similar experiences and know how to fix it?
Spyder IDE environment in Python
0
0.099668
1
0
0
502
42,089,856
2017-02-07T12:23:00.000
2
0
1
0
0
python,caching
0
42,091,305
0
3
0
true
0
0
There is no magic possible there - you want to store a value, so you need a place to store your value. You can't just decide "I won't have an extra entry on my __slots__ because it is not elegant" - you don't need to call it _cached: give it whatever name you want, but these cached values are something you want to exist in each of the object's instances, and therefore you need an attribute. You could cache in a global (module level) dictionary, in which the keys are id(self) - but that would be a major headache to keep synchronized when instances are deleted. (The same is true for a class-level dictionary, with the further downside of it still being visible on the instances.) TL;DR: the "one and obvious way to do it" is to have a shadow attribute, starting with "_", to keep the values you want cached, and declare it in __slots__. (If you use a _cached dictionary per instance, you lose the main advantage of __slots__, which is exactly not needing one dictionary per instance.)
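A minimal sketch of the shadow-attribute pattern (the class and the "expensive" computation are invented for illustration):

```python
class Circle:
    # The cached value gets its own slot; no per-instance __dict__ is created.
    __slots__ = ("radius", "_area")

    def __init__(self, radius):
        self.radius = radius
        self._area = None           # sentinel: not computed yet

    @property
    def area(self):
        if self._area is None:      # compute once, reuse afterwards
            self._area = 3.141592653589793 * self.radius ** 2
        return self._area

c = Circle(2)
print(c.area)
```

The second access of `c.area` returns the stored value without recomputing, and the instance still has no `__dict__`.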
2
2
0
0
I am trying to cache a computationally expensive property in a class defined with the __slots__ attribute. Any idea, how to store the cache for later use? Of course the usual way to store a dictionary in instance._cache would not work without __dict__ being defined. For several reasons i do not want to add a '_cache' string to __slots__. I was thinking whether this is one of the rare use cases for global. Any thoughts or examples on this matter?
Python caching attributes in object with __slots__
0
1.2
1
0
0
992
42,089,856
2017-02-07T12:23:00.000
-2
0
1
0
0
python,caching
0
42,089,927
0
3
0
false
0
0
Something like the Borg pattern can help. You can alter the state of your instance in the __init__ or __new__ methods.
2
2
0
0
I am trying to cache a computationally expensive property in a class defined with the __slots__ attribute. Any idea, how to store the cache for later use? Of course the usual way to store a dictionary in instance._cache would not work without __dict__ being defined. For several reasons i do not want to add a '_cache' string to __slots__. I was thinking whether this is one of the rare use cases for global. Any thoughts or examples on this matter?
Python caching attributes in object with __slots__
0
-0.132549
1
0
0
992
42,089,967
2017-02-07T12:30:00.000
1
0
0
0
0
python,django
0
42,090,129
0
2
0
true
1
0
If you are using Django Rest Framework, then you can simply use serializers. But I don't think that is a case. What you want to accomplish seems very similar to the role of django forms, but as such they are only used (conventionally) for saving/updating models i.e. POST requests. Now either you can define a new class for filtering/rendering and use that in your view or just go ahead and use django forms which would automatically provide basic cleaning for different fields.
1
2
0
0
My django app displays the objects from database in table view. The problem is that these objects (models) are pretty complex: the have 50+ fields. Nearly for each field I have to do some formatting: conver phone numbers from int 71234567689 to "+7 (123) 456789" display long prices with spaces: "7 000 000" instead of "7000000" construct full address from several fields like "street", "house" and so on (logic if pretty complex with several if-else-s) and so on Django templating language has several useful tags for simple cases but I guess is not suitable in general case (like mine) for serious formatting. Create the @property-s in model class is also not an option because the question is about rendering and is not related to model. So I guess I should do my conversions in view: create dict for each obj, fill with converted data and pass to template. But! The model has a lot of fields and I don't want to copy them all :) Moreover, it would be great to preserve model structure to use it in django template (say, regroup) and query set laziness. So the greatest way would be to instruct django "how to render". Is it possible?
Django: best way to convert data from model to view
1
1.2
1
0
0
392
42,103,374
2017-02-08T01:58:00.000
1
1
0
1
1
python,python-2.7,salt,salt-stack,salt-cloud
0
42,263,855
1
1
0
false
0
0
The salt packages are built using the system python and system site-packages directory. If something doesn't work right, file a bug with salt. You should avoid overwriting the stock python, as that will result in a broken system in many ways.
1
1
0
0
I am trying to setup a salt-master/salt-cloud on Centos 7. The issue that I am having is that I need Python 2.7.13 to use salt-cloud to clone vm in vmware vcenter (uses pyvmomi). CentOS comes with Python 2.7.5 which salt has a known issue with (SSL doesn't work). I have tried to find a configuration file on the machine to change which python version it should use with no luck. I see two possible fixes here, somehow overwrite the python 2.7.5 with 2.7.13 so that it is the only python available. OR If possible change the python path salt uses. Any Ideas on how to do either of these would be appreciated? (Or another solution that I haven't mentioned above?)
How to change Default Python for Salt in CentOS 7?
0
0.197375
1
0
0
1,084
42,105,805
2017-02-08T06:03:00.000
0
0
1
0
0
python,api,microservices
0
42,114,560
0
2
0
false
0
0
An API gateway is not needed for internal service-to-service communication. But you do need a service registry or some kind of dynamic load-balancing mechanism for services to reach each other.
1
0
0
0
I would like to know how to create a communication for each services. I am using API Gateway for the outside of the system to communicate with the services within. Is it necessary for a service to call another service through API Gateway or just directly into the service itself ? Thank You
Microservices Communication Design
0
0
1
0
1
3,274
42,108,324
2017-02-08T08:40:00.000
0
0
0
0
0
python,machine-learning,scikit-learn,decision-tree
0
42,115,789
0
2
0
false
0
0
In general - no. Decision trees work differently than that. For example, a tree could have a rule under the hood that if feature X > 100 OR X < 10 and Y = 'some value' then the answer is Yes; if 50 < X < 70 the answer is No, etc. For a single decision tree you may want to visualize its results and analyse the rules. With an RF model that is not possible, as far as I know, since you have a lot of trees working under the hood, each with independent decision rules.
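For a single tree, scikit-learn can dump the learned rules as text, which is about as close to a "direction of effect" as tree models give you directly (this sketch uses the built-in iris data, not the asker's dataset):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small tree and print its decision rules for inspection.
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(clf)
print(rules)
```

Reading the printed thresholds shows exactly which feature ranges lead to which class, rule by rule.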
1
0
1
0
I have trained my model on a data set and i used decision trees to train my model and it has 3 output classes - Yes,Done and No , and I got to know the feature that are most decisive in making a decision by checking feature importance of the classifier. I am using python and sklearn as my ML library. Now that I have found the feature that is most decisive I would like to know how that feature contributes, in the sense that if the relation is positive such that if the feature value increases the it leads to Yes and if it is negative It leads to No and so on and I would also want to know the magnitude for the same. I would like to know if there a solution to this and also would to know a solution that is independent of the algorithm of choice, Please try to provide solutions that are not specific to decision tree but rather general solution for all the algorithms. If there is some way that would tell me like: for feature x1 the relation is 0.8*x1^2 for feature x2 the relation is -0.4*x2 just so that I would be able to analyse the output depends based on input feature x1 ,x2 and so on Is it possible to find out the whether a high value for particular feature to a certain class, or a low value for the feature.
How to know the factor by which a feature affects a model's prediction
1
0
1
0
0
998
42,118,850
2017-02-08T16:42:00.000
0
0
0
0
0
python,machine-learning,neural-network,generator,keras
0
46,009,804
0
2
0
false
0
0
I think the only option here is to NOT shuffle the files. I have been wondering this myself and this is the only thing I could find in the docs. Seems odd and not correct...
1
7
1
0
If I don't shuffle my files, I can get the file names with generator.filenames. But when the generator shuffles the images, filenames isn't shuffled, so I don't know how to get the file names back.
How to retrieve the filename of an image with keras flow_from_directory shuffled method?
0
0
1
0
0
1,974
42,121,512
2017-02-08T19:02:00.000
0
0
0
0
0
python-2.7,url,cgi,query-string
0
42,122,277
0
1
0
false
1
0
Thanks for all the help on what was actually not too complicated a question. What I was looking for was a router/dispatcher, which is usually handled by a framework fairly simply through an @route decorator or something similar. Opting for a more lightweight approach, all I had to do was import os and then look at os.environ.get('PATH_INFO', '') for all the data I could possibly need. For anyone else following the path I was, that is how I found my way.
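A minimal sketch of that routing idea (the fallback value is hard-coded here only so the snippet runs outside a real CGI request):

```python
import os

# In a CGI script, PATH_INFO holds everything after the script name in the
# URL, e.g. "/desired/path/info" for .../cgi.py/desired/path/info.
path_info = os.environ.get("PATH_INFO", "/desired/path/info")  # demo fallback
segments = [p for p in path_info.split("/") if p]
print(segments)
```

From `segments` you can dispatch to whichever template or handler the first path component names.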
1
0
0
0
I know I am using the wrong search terms and that's why I haven't been able to suss out the answer myself. However, I cannot seem to figure out how to use the CGI module to pull what I think counts as a query string from the url. given a url www.mysite.com/~usr/html/cgi.py/desired/path/info how would one get the desired/path/info out of the url? I understand GET and POST requests and know I can use CGI's FieldStorage class to get at that data to fill my Jinja2 templates out and such. But now I want to start routing from a landing page with different templates to select before proceeding deeper into the site. I'm hoping the context is enough to see what I'm asking because I am lost in a sea of terms that I don't know. Even if it's just the right search term, I need something to help me out here.
Getting what I think is a part of the query string using python 2.7/CGI
0
0
1
0
0
33
42,149,777
2017-02-10T00:59:00.000
0
0
0
0
0
ipython,spyder
0
50,688,040
0
1
0
false
0
0
One way that I have figured out is to define a dictionary and then record the results you want individually. Apparently, this is not the most efficient way, but it works.
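Note that, beyond keeping your own dictionary, matplotlib itself supports MATLAB-style numbered figures: calling `plt.figure(1)` a second time returns the same figure object, so later plot calls draw on the same canvas. A sketch (the Agg backend is chosen only so this runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

fig_a = plt.figure(1)
plt.plot([1, 2, 3])
fig_b = plt.figure(1)   # same object as fig_a, like MATLAB's figure(1)
plt.plot([3, 2, 1])     # drawn onto the same axes
print(fig_a is fig_b)
```

Both lines end up on one figure, which is the behavior the question describes from MATLAB.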
1
2
1
0
What I meant by the title is that I have two different programs and I want to plot data on one figure. In Matlab there is this definition for figure handle which eventually points to a specific plot. Let's say if I call figure(1) the first time, I get a figure named ''1'' created. The second I call figure(1), instead of creating a new one, Matlab simply just plot on the previous figure named ''1''. I wondered how I can go about and do that in Spyder. I am using Matplotlib in sypder. I would imagine this could be easily achieved. But I simply don't know much about this package to figure my problem out. :( Any suggestions are appreciated!
How to plot data from different runs on one figure in Spyder
0
0
1
0
0
54
42,151,236
2017-02-10T03:54:00.000
1
0
1
0
0
python
0
42,151,338
0
2
0
false
0
0
No. It wouldn't be possible unless it is written somewhere. The simple reason is that once the Python process ends, all of its in-memory state is cleaned up.
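The "written somewhere" part can be as small as a one-line file (the file name here is an assumption):

```python
import os
import time

RUNTIME_FILE = "last_runtime.txt"   # assumed log-file name

# Print the previous run's duration, if one was recorded.
if os.path.exists(RUNTIME_FILE):
    with open(RUNTIME_FILE) as f:
        print("previous runtime:", f.read().strip(), "seconds")

tic = time.time()
# ... the rest of your program ...
with open(RUNTIME_FILE, "w") as f:
    f.write(str(time.time() - tic))
```

On the second run, the script prints how long the previous run took.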
1
0
0
0
This may be a dumb question, but I want to add a line at the very start of the code like print 'previous runtime' time.time()-tic Is there a way to do it? Or can I somehow get the previous runtime other than keeping a logfile?
Is there a way to write over the python code after the interpretation?
0
0.099668
1
0
0
37
42,156,957
2017-02-10T10:23:00.000
-1
0
0
0
0
python,tensorflow,gradient
0
56,578,297
1
6
0
false
0
0
You can use Pytorch instead of Tensorflow as it allows the user to accumulate gradients during training
1
18
1
0
I'm using TensorFlow to build a deep learning model. And new to TensorFlow. Due to some reason, my model has limited batch size, then this limited batch-size will make the model has a high variance. So, I want to use some trick to make the batch size larger. My idea is to store the gradients of each mini-batch, for example 64 mini-batches, and then sum the gradients together, use the mean gradients of this 64 mini batches of training data to update the model's parameters. This means that for the first 63 mini-batches, do not update the parameters, and after the 64 mini batch, update the model's parameters only once. But as TensorFlow is graph based, do anyone know how to implement this wanted feature? Thanks very much.
How to update model parameters with accumulated gradients?
0
-0.033321
1
0
0
8,795
42,171,188
2017-02-11T01:28:00.000
1
0
1
0
1
python,heroku
0
42,213,846
0
1
0
true
1
0
Have to agree with @KlausD, doing what you are suggesting is actually a bit more complex trying to work with a filesystem that won't change and tracking state information (last selected) that you may need to persist. Even if you were able to store the last item in some environmental variable, a restart of the server would lose that information. Adding a db, and connecting it to python would literally take minutes on Heroku. There are plenty of well documented libraries and ORMs available to create a simple model for you to store your list and your cursor. I normally recommend against storing pointers to information in preference to making the correct item obvious due to the architecture, but that may not be possible in your case.
1
0
0
0
I have deployed a small application to Heroku. The slug contains, among other things, a list in a textfile. I've set a scheduled job to, once an hour, run a python script that select an item from that list, and does something with that item. The trouble is that I don't want to select the same item twice in sequence. So I need to be able to store the last-selected item somewhere. It turns out that Heroku apparently has a read-only filesystem, so I can't save this information to a temporary or permanent file. How can I solve this problem? Can I use os.environ in python to set a configuration variable that stores the last-selected element from the list?
Heroku: how to store a variable that mutates?
0
1.2
1
0
0
93
42,179,424
2017-02-11T18:11:00.000
0
1
0
0
0
python,raspberry-pi,touch
0
42,180,008
0
1
0
false
0
1
For a GUI you could always take a look at Tkinter. You could test the GUI without having the actual Raspberry Pi. Switching to LEDs would require an LED matrix, which is more demanding in terms of electrical engineering. Raspberry Pi would be my recommendation.
1
0
0
0
I am new to Python and took on a small project for the firehouse. I am looking to make a "Calls YTD" Sign. The initial thought was a raspberry Pi connected to the a touch screen. After some playing around and learning how to use python a little I realized one very important fact... I am way over my head. Looking for some direction. In order for this to display on the touch screen I will need to build it into a GUI. Should I stop right there and instead get a 12x12 LED and keep it more simple? Otherwise the goal would be to display the current call number "61" for example, with an up and down arrow to simply advance or retract a number . Adding the ability to display last years call volume would be cool but not necessary. What I am looking for ultimately, is some direction if python and raspberry pi is the way to go or should I head in another direction. Thank you in advance.
Counter Display Design
0
0
1
0
0
67
42,211,463
2017-02-13T18:55:00.000
0
0
0
0
1
python,mysql,mysql-workbench
0
42,220,939
0
1
0
false
0
0
Editing a table means to be able to write back data in a way that reliably addresses the records that have changed. In MySQL Workbench there are certain conditions which must be met to make this possible. A result set: must have a primary key must not have any aggregates or unions must not contain subselects When you do updates in a script you have usually more freedom by writing a WHERE clause that limits changes to a concrete record.
1
0
0
0
I have created a table in MySQL command line and I'm able to interact with it using python really well. However, I wanted to be able to change values in the table more easily so I installed MySQL workbench to do so. I have been able to connect to my server but when I try to change any values after selecting a table, it doesn't let me edit it. I tried making a new table within MySQL Workbench and I could edit it then. So, I started to use that table. However, trying to edit the table python stopped working, so I made another table with command line again and it works! Does anyone know how to fix either of these problems? It seems MySQL Workbench can only edit tables that have been created with Workbench, and not with Command Line. There must be a configuration option somewhere that is limiting this. Thanks in advance!
MySQL Workbench can't edit a table that was created using Command Line
0
0
1
1
0
94
42,216,640
2017-02-14T02:02:00.000
0
1
0
1
0
python,node.js,meteor,meteor-galaxy
1
42,284,125
0
1
0
false
1
0
It really depends on how horrible you want to be :) No matter what, you'll need a well-specified requirements.txt or setup.py. Once you can confirm your scripts can run on something other than a development machine, perhaps by using a virtualenv, you have a few options: I would recommend hosting your Python scripts as their own independent app. This sounds horrible, but in reality, with Flask, you can basically make them executable over the Internet with very, very little IT. Indeed, Flask is supported as a first-class citizen in Google App Engine. Alternatively, you can poke at what version of Linux the Meteor containers are running and ship a binary built with PyInstaller in your private directory.
1
1
0
0
I have a meteor project that includes python scripts in our private folder of our project. We can easily run them from meteor using exec, we just don't know how to install python modules on our galaxy server that is hosting our app. It works fine running the scripts on our localhost since the modules are installed on our computers, but it appears galaxy doesn't offer a command line or anything to install these modules. We tried creating our own command line by calling exec commands on the meteor server, but it was unable to find any modules. For example when we tried to install pip, the server logged "Unable to find pip". Basically we can run the python scripts, but since they rely on modules, galaxy throws errors and we aren't sure how to install those modules. Any ideas? Thanks!
Installing python modules in production meteor app hosted with galaxy
0
0
1
0
0
192
42,218,932
2017-02-14T06:00:00.000
2
0
0
0
0
python,google-apps-script,google-sheets,google-spreadsheet-api
0
42,327,384
0
2
0
false
0
0
You will need several changes. First, you need to move the script to the cloud (see Google Compute Engine) and be able to access your databases from there. Then, from Apps Script, look at the onOpen trigger; from there you can use UrlFetchApp to call your Python server to start the work. You could also add a custom "refresh" menu to the sheet to call your server, which is nicer than having to reload the sheet. Note that onOpen runs server-side at Google, thus it is impossible for it to access your local machine's files.
1
0
0
0
I have a python script (on my local machine) that queries Postgres database and updates a Google sheet via sheets API. I want the python script to run on opening the sheet. I am aware of Google Apps Script, but not quite sure how can I use it, to achieve what I want. Thanks
Running python script from Google Apps script
0
0.197375
1
1
0
6,619
42,231,764
2017-02-14T16:51:00.000
8
0
1
0
0
python,anaconda,conda
0
68,422,861
0
8
0
false
0
0
As the answer from @pkowalczyk mentioned some drawbacks: In my humble opinion, the painless and risk-free (workaround) way is following these steps instead: Activate & Export your current environment conda env export > environment.yml Deactivate current conda environment. Modify the environment.yml file and change the name of the environment as you desire (usually it is on the first line of the yaml file) Create a new conda environment by executing this conda env create -f environment.yml This process takes a couple of minutes, and now you can safely delete the old environment. P.S. nearly 5 years and conda still does not have its "rename" functionality.
1
469
0
0
I have a conda environment named old_name, how can I change its name to new_name without breaking references?
How can I rename a conda environment?
0
1
1
0
0
242,754
42,237,072
2017-02-14T22:01:00.000
26
0
1
0
0
python
0
42,237,193
0
10
0
true
0
0
Scan your import statements. Chances are you only import things you explicitly wanted to import, and not the dependencies. Make a list like the one pip freeze does, then create and activate a virtualenv. Do pip install -r your_list, and try to run your code in that virtualenv. Heed any ImportError exceptions, match them to packages, and add to your list. Repeat until your code runs without problems. Now you have a list to feed to pip install on your deployment site. This is extremely manual, but requires no external tools, and forces you to make sure that your code runs. (Running your test suite as a check is great but not sufficient.)
1
71
0
0
What is the most efficient way to list all dependencies required to deploy a working project elsewhere (on a different OS, say)? Python 2.7, Windows dev environment, not using a virtualenv per project, but a global dev environment, installing libraries as needed, happily hopping from one project to the next. I've kept track of most (not sure all) libraries I had to install for a given project. I have not kept track of any sub-dependencies that came auto-installed with them. Doing pip freeze lists both, plus all the other libraries that were ever installed. Is there a way to list what you need to install, no more, no less, to deploy the project? EDIT In view of the answers below, some clarification. My project consists of a bunch of modules (that I wrote), each with a bunch of imports. Should I just copy-paste all the imports from all modules into a single file, sort eliminating duplicates, and throw out all from the standard library (and how do I know they are)? Or is there a better way? That's the question.
List dependencies in Python
0
1.2
1
0
0
86,884
42,239,173
2017-02-15T01:20:00.000
1
0
0
0
1
python,django,python-2.7,virtualenv
1
42,239,415
1
1
0
true
1
0
The problem was not with the Django-core but with django-user-accounts app that was included with pinax. Upgrading the django-user-accounts app fixed the issue. Thanks to @Selcuk for the solution.
1
0
0
0
I am trying to run an existing django app. The app has been built in django-1.10. I set up a new virtualenv and installed the requirements and everything. However, I get errors like the following: from django.utils import importlib ImportError: cannot import name importlib Now, the above is from the following source - .virtualenvs/crowd/lib/python2.7/site-packages/account/conf.py When I manually fix the conf.py file, I still keep getting errors to fix either deprecated or removed features from older django versions. Any idea as to how to fix this? I thought the purpose of working in virtualenvs was to avoid such errors. Any suggestions would be much appreciated. Thanks in advance! This is how the question is different: Even after I fix the importlib import statement, it keeps giving me errors like that of the usage of SubFieldBase and so on.
django-1.10 still contains deprecated and removed features
0
1.2
1
0
0
74
42,264,022
2017-02-16T03:00:00.000
1
0
1
0
0
python,python-2.7
0
42,289,440
0
1
0
true
0
0
Call win32file.SetEndOfFile(handle) after positioning the file handle to the offset that you want to be the new end of file. This is similar to the ftruncate POSIX system call, or writing 0 bytes in DOS.
1
0
0
0
How do you truncate a PyHandle returned by win32file.CreateFile. I know you can open it with the TRUNCATE_EXISTING flag, but how do you truncate it to a specific size after reading/writing? Note: The reason I cannot use the standard library is because I'm using win32file to restrict simultaneous reading/writing to a file.
Truncate PyHandle (win32file)
0
1.2
1
0
0
63
42,264,307
2017-02-16T03:28:00.000
1
0
0
1
1
swift,python-2.7
0
56,529,866
0
2
0
false
0
0
For me, Apple's Swift is at /usr/bin/swift and python-swiftclient's is at /usr/local/bin/swift. Explicitly invoking it as /usr/local/bin/swift works.
1
0
0
0
I have installed OpenStack swift python client (pip install python-swiftclient). However /usr/bin has swift executable (which I can not remove as it is owned by root) and is overriding python swift. Requirement already satisfied: python-swiftclient in /Library/Python/2.7/site-packages Requirement already satisfied: requests>=1.1 in /Library/Python/2.7/site-packages (from python-swiftclient) Requirement already satisfied: six>=1.5.2 in /Library/Python/2.7/site-packages/six-1.10.0-py2.7.egg (from python-swiftclient) Requirement already satisfied: futures>=3.0; python_version == "2.7" or python_version == "2.6" in /Library/Python/2.7/site-packages (from python-swiftclient) However, I am unable to find python swift anywhere. Please let me know how to resolve this. Many Thanks Chen
Apple Swift is overriding Openstack swift package
0
0.099668
1
0
0
416
42,267,553
2017-02-16T07:30:00.000
0
1
0
0
0
mysql,python-2.7,amazon-web-services,aws-lambda
0
42,268,813
0
3
0
false
0
0
You should install your packages into your lambda folder: $ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER And then compress your whole directory into a zip to upload to your lambda.
1
1
0
0
I want to import and use dataset package of python at AWS Lambda. The dataset package is about MySQL connection and executing queries. But, when I try to import it, there is an error. "libmysqlclient.so.18: cannot open shared object file: No such file or directory" I think that the problem is because MySQL client package is necessary. But, there is no MySQL package in the machine of AWS Lambda. How to add the third party program and how to link that?
How to use the package written by another language in AWS Lambda?
1
0
1
1
0
131
42,271,330
2017-02-16T10:32:00.000
0
1
0
1
0
python,cron,crontab
0
42,271,741
0
2
0
true
0
0
A simple solution: set a Bash env variable MONITORING=true and let your Python script check that variable using os.environ["MONITORING"]. If that variable is "true", check whether the server is up or down; otherwise don't check anything. Once the server is found down, set that variable to "false" from the script, e.g. os.environ["MONITORING"] = "false". It won't send emails until you set that env variable back to true.
2
0
0
0
I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs. What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows: */5 * * * * /python/uptime.sh So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one. Can anyone help me out with how to do this? Thanks.
Running cronjob every 5 minutes but stopped after first execution?
0
1.2
1
0
0
227
42,271,330
2017-02-16T10:32:00.000
0
1
0
1
0
python,cron,crontab
0
42,295,869
0
2
0
false
0
0
Write an empty while True script that runs forever (e.g. "mailtrigger.py"). Run it from the shell with nohup so it stays up in the background. Once the server is found down, check whether mailtrigger.py is still running; send the mail only if it is, then terminate mailtrigger.py (kill its process id). Your next iterations will not send mails, since mailtrigger.py is no longer running.
2
0
0
0
I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs. What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows: */5 * * * * /python/uptime.sh So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one. Can anyone help me out with how to do this? Thanks.
Running cronjob every 5 minutes but stopped after first execution?
0
0
1
0
0
227
42,274,756
2017-02-16T13:02:00.000
2
0
0
0
0
python,machine-learning,3d,tensorflow,scikit-learn
0
42,284,733
0
1
0
false
0
0
You have to first extract "features" out of your dataset: fixed-dimension vectors. Then you have to define labels for the prediction. Then you define a loss function and a neural network. Put that all together and you can train a classifier. In your example, you would first need to extract a fixed-dimension vector out of each object. For instance, you could project the object onto a fixed support along the x, y, and z dimensions. That defines the features. For each object, you'll need a label saying whether it's convex or concave. You can produce those by hand, analytically, or by generating objects that are known by construction to be concave or convex. Now you have a dataset with many sample pairs (object, is-concave). For the loss function, you can simply use the negative log-probability. Finally, a feed-forward network with some convolutional layers at the bottom is probably a good idea.
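As a toy illustration of the "fixed-dimension vector" step, here is a sketch that histograms a variable-size 3-D point set along each axis. The bin count and normalization are assumptions for the example, not part of the answer above:

```python
def axis_histogram(values, bins=8):
    # Histogram one coordinate axis into a fixed number of bins,
    # normalized so the bin fractions sum to 1.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid division by zero on flat axes
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / span * bins), bins - 1)
        counts[idx] += 1
    total = float(len(values))
    return [c / total for c in counts]

def mesh_features(points, bins=8):
    # Concatenate the per-axis histograms: the result always has
    # length 3 * bins, regardless of how many points the mesh has.
    xs, ys, zs = zip(*points)
    return axis_histogram(xs, bins) + axis_histogram(ys, bins) + axis_histogram(zs, bins)
```

Any classifier (including a Keras/TensorFlow network) can then consume these fixed-length vectors.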
1
1
1
0
I am trying to write a script in Python to analyse an .stl data file (3D geometry) and determine whether the model is convex or concave and watertight, among other properties. I would like to use TensorFlow, scikit-learn, or another machine learning library: create a database of example objects with tags, later add more examples, and just retrain the model for better results. But my problem is: I don't know how to recalculate or restructure 3D data for use in ML libraries. I have no idea. Thank you for your help.
How to analyse 3d mesh data(in .stl) by TensorFlow
0
0.379949
1
0
0
1,021
42,309,798
2017-02-18T00:48:00.000
0
0
0
0
0
python,tkinter,python-idle
0
42,333,708
0
1
0
true
0
1
Outline of possible solution: Create a 1-pixel wide Frame, with a contrasting background color, as a child of the Text. Use .place() to position the Frame at an appropriate horizontal coordinate. Possible issues: I don't see any easy way to get the coordinate for a particular column. Text.bbox("1.80") looks promising, but doesn't work unless there actually are 80 characters in the line already. You may have to insert a dummy 80-character line at first, call update_idletasks to get the Text to calculate positions, call bbox to get the coordinates, then delete the dummy text. Repeat whenever the display font is changed. The line would necessarily appear on top of any text or selection region, which isn't quite the visual appearance I'd expect for a feature like this.
1
0
0
0
My intention is to add a vertical bar to IDLE to indicate preferred line length at column 80. I have tried to find a configuration option for the Text tkinter widget that would allow this but have found nothing. I was hoping it would be a simple configuration option so I could just add a another item the text_options dictionary within EditorWindow.py found within Python\Lib\idlelib. I am not sure how styles/themes work but do they have the capability to change the background colour of only 1 column in a Text widget?
Adding a vertical bar or other marker to tkinter Text widgets at a particular column
0
1.2
1
0
0
395
42,339,941
2017-02-20T08:47:00.000
1
0
0
0
0
python,cntk
0
42,521,000
0
1
0
true
0
0
You can create two minibatch sources, one for x and one for x_mask, both with randomize=False. Then the examples will be read in the order in which they are listed in the two map files. So as long as the map files are correct and the minibatch sizes are the same for both sources you will get the images and the masks in the order you want.
1
1
1
0
does anyone know how to create or use 2 minibatch sources or inputs a sorted way? My problem is the following: I have images named from 0 to 5000 and images named 0_mask to 5000_mask. For each image x the coressponding image x_mask is the regression image for a deconvolution output. So i need a way to tell cntk that each x corresponds to x_match and that there is no regression done between x and y_mask. I'm well aware of the cntk convolution sample. I've seen it. The problem are the two input streams with x and x_mask. Can i combine them and make the reference, i need it in an easy way? Thank you in advance.
CNTK 2 sorted minibatch sources
0
1.2
1
0
0
65
42,357,801
2017-02-21T02:51:00.000
0
0
0
0
0
python,opencv
0
42,357,893
0
1
0
false
0
0
It's dependent on the pixel-to-distance ratio. You can measure this by taking an image of a meter stick and measuring its pixel width (for this example, say it's 1000 px). The ratio of pixels to distance is then 1000 px / 100 cm, or 10. You can now use this constant as a multiplier: for a given length and width in cm, multiply by the ratio to get a pixel height and width, which can be passed into OpenCV's rectangle-drawing function.
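The multiplier idea fits in a few lines. The 10 px/cm value below is the example ratio from above; the cv2.rectangle call is shown only as a comment, since the ratio math is the point:

```python
# Calibration: a 100 cm meter stick measured 1000 px wide in the image.
PX_PER_CM = 1000 / 100  # 10 pixels per centimetre

def cm_to_px(length_cm, px_per_cm=PX_PER_CM):
    """Convert a real-world length in cm to an integer pixel count."""
    return int(round(length_cm * px_per_cm))

# With OpenCV available, a 5 cm x 3 cm rectangle at (x0, y0) would then be:
# cv2.rectangle(img, (x0, y0), (x0 + cm_to_px(5), y0 + cm_to_px(3)),
#               (0, 255, 0), thickness=2)
```

Note the ratio only holds at the calibration distance; if the camera moves closer or farther, it must be measured again.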
1
0
1
0
I know how to draw a rectangle in OpenCV. But can I choose the length and breadth to be in centimeters?
Draw rectangle in opencv with length and breadth in cms?
0
0
1
0
0
408
42,372,121
2017-02-21T15:58:00.000
1
0
0
0
0
python-3.x,spyder,openpyxl
0
42,372,863
0
1
0
true
0
0
This isn't possible without you writing some of your own code. To do this you will have to write code that can evaluate conditional formatting because openpyxl is a library for the file format and not a replacement for an application like Excel.
1
2
0
0
I'm working on a project with Python and openpyxl. In an Excel file there are some cells with conditional formatting: these change the fill color when the value changes. I need to extract the color from the cell. The "normal" method, worksheet["F11"].fill.start_color.index, doesn't work; Excel doesn't treat the fill color from conditional formatting as a regular fill, so I get '00000000' back, meaning no fill. Does anyone know how to get the fill color? Thanks!
Python/openpyxl get conditional format
0
1.2
1
1
0
496
42,446,403
2017-02-24T19:04:00.000
0
0
0
1
0
python,html,linux,ubuntu
0
42,446,617
0
1
0
true
0
0
I'm going to answer your question but also beg you to consider another approach. The functionality you are looking for is usually handled by a database. If you don't want to use anything more complex, SQLite is often all you need. You would then need a simple web application that connects to the database, grabs the fields, and then injects them into HTML. I'd use Flask for this as it comes with Jinja and that's a pretty simple stack to get started with. If you really want to edit the HTML file directly in Python, you will need write permissions for whatever user is running the Python script. On Ubuntu, that folder is typically owned by www-data if you are running Apache. Then you'd open the file in Python, perform file operations on it, and then close it. with open("/var/www/html/somefile.txt", "a") as myfile: myfile.write("l33t h4x0r has completed the challenge!\n") That's an example of how you'd do a simple append operation in Python.
1
0
0
0
I'm making a "wargame" like the ones on overthewire.org or smashthestack.org. When you finish the game, the user should get a python program that has extra permissions to edit a file in /var/www/html so that they can sign their name. I want to have a program like this so that they can add text to the html file without removing the text of other users and so that it filters offensive words. How can I make a file editable by a specific program in Linux? And how can I make the program edit the file in python? Do I just use os.system?
Allow a python file to add to a different file linux
0
1.2
1
0
0
31
42,477,956
2017-02-27T04:42:00.000
6
0
0
1
0
python,dll
1
42,478,265
0
1
0
false
0
0
If you are using a 32-bit Python and the DLL is a 64-bit DLL you will get this error, and likewise if the DLL is 32-bit and your Python is 64-bit. You can check the DLL using the dumpbin /HEADERS <dll filepath> command from a Visual Studio command prompt.
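If dumpbin is not at hand, the DLL's bitness can also be read directly from its PE header in pure Python. This is a sketch based on the standard PE layout (the Machine field sits two bytes after the "PE\0\0" signature, whose offset is stored at 0x3C):

```python
def pe_machine(data):
    """Return the architecture of a PE file, given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE/DLL file")
    # Offset 0x3C (e_lfanew) holds the file offset of the "PE\0\0" signature.
    pe_off = int.from_bytes(data[0x3C:0x40], "little")
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    machine = int.from_bytes(data[pe_off + 4:pe_off + 6], "little")
    return {0x014C: "x86 (32-bit)", 0x8664: "x64 (64-bit)"}.get(machine, hex(machine))

# Usage (path from the question):
# pe_machine(open(r"C:\Windows\System32\plcommpro.dll", "rb").read())
```

If the reported bitness does not match your Python interpreter, that explains Error 193.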
1
3
0
0
I found the error [Error 193] %1 is not a valid Win32 application when I run this Python command: windll.LoadLibrary("C:\Windows\System32\plcommpro.dll"). From this error I gather my plcommpro.dll file is not a valid executable, but I don't know how to fix that. If someone knows, please share. Thanks and best.
Error 193 %1 is not a valid Win32 application
0
1
1
0
0
4,749
42,479,954
2017-02-27T07:24:00.000
3
0
0
0
0
python-3.x,tensorflow,keras,gmm
0
42,481,143
0
1
0
false
0
0
Are you sure that is what you want: to integrate a GMM into a neural network? TensorFlow and Keras are libraries for creating, training, and using neural network models; the Gaussian mixture model is not a neural network.
1
4
1
0
I am trying to implement Gaussian Mixture Model using keras with tensorflow backend. Is there any guide or example on how to implement it?
Implement Gaussian Mixture Model using keras
0
0.53705
1
0
0
2,619
42,483,272
2017-02-27T10:25:00.000
0
0
0
0
0
python-2.7,python-3.x,keyboard,mouse,pyautogui
0
42,882,714
0
1
0
true
0
0
PyAutoGUI will still work if there's no keyboard or mouse connected. However, PyAutoGUI does not have any way to detect if a keyboard or mouse are connected to your machine.
1
0
0
0
I wrote a script with Python/PyAutoGUI to automate mouse and keyboard actions. The mouse and keyboard commands work as scripted when a keyboard and mouse are connected, but I noticed the script still works even when they are not connected. If that is by design, how can I set a condition to execute the script only if a keyboard and mouse are connected? Kindly share your ideas. Thanks in advance.
Pyautogui commands are working even when no mouse or keyboard is connected
0
1.2
1
0
0
371
42,493,384
2017-02-27T18:40:00.000
1
0
0
0
0
python,django,python-2.7,django-models
0
42,494,571
0
2
0
false
1
0
You can have a profile class (say, UserProfile) with a foreign key to the user, created only when a user signs up through the website's registration form. That way, a superuser created on the admin site or through the command line wouldn't need an extra profile instance attached to it.
1
2
0
0
I know that superusers and regular users are both just django's User objects, but how can I write a custom user class that requires some fields for plain users and doesn't require those fields for superusers?
In Django, is it possible for superusers to have different required fields than non-superusers?
1
0.099668
1
0
0
535
42,493,984
2017-02-27T19:16:00.000
0
0
0
0
0
python,python-2.7,indexing,sqlite,whoosh
0
51,001,220
0
1
0
false
0
0
You need to add a post-save function (say, index_data) to your database writers: after each write, it takes the data that was just written to the database, normalizes it, and adds it to the index. The searcher can then be an independent script that is given the index and the queries to search for.
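A minimal sketch of that post-save pattern, using an in-memory dict as a stand-in for the Whoosh index (with Whoosh proper you would call writer.add_document() at the point where the dict is updated; the table schema and the whitespace tokenization here are assumptions):

```python
import sqlite3

# Toy inverted index: token -> set of row ids. A Whoosh writer would
# replace this in a real setup.
index = {}

def save_doc(conn, text):
    # Post-save hook pattern: every database write also updates the index.
    cur = conn.execute("INSERT INTO docs (body) VALUES (?)", (text,))
    rowid = cur.lastrowid
    for token in text.lower().split():      # crude normalization
        index.setdefault(token, set()).add(rowid)
    return rowid

def search(token):
    # Independent searcher: consults only the index, not the database.
    return sorted(index.get(token.lower(), ()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
save_doc(conn, "hello whoosh world")
save_doc(conn, "hello sqlite")
```

The key design point is that the index is only ever updated through the same code path that writes to sqlite, so the two cannot drift apart.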
1
3
0
0
Could someone give me an example of using Whoosh with a sqlite3 database? I want to index my database. Just a simple example of connecting and searching through the database would be great. I searched online and was not able to find any examples for sqlite3.
Using Whoosh with a SQLITE3.db (Python)
0
0
1
1
0
466
42,497,824
2017-02-27T23:42:00.000
0
0
1
0
1
python,pysvn
0
42,912,443
0
1
0
false
0
0
The simplest way to call pysvn.Client().checkin() is with the absolute path to the top folder of the working copy. You should then see from svn log that all the changed files were committed. By using an absolute path you avoid issues with the current working directory and relative paths. If this does not help, post details of the error message you receive, the version of Python, the version of pysvn, and your operating system.
1
0
0
0
After I get the pysvn client, how can I set the working folder to a specific local working folder belonging to a specific repo? I'd like to set the working folder so I can then commit changes from there. I have tried passing the path to the client but that doesn't work.
PYSVN: how to set local working folder so I can commit a file?
0
0
1
0
0
353
42,500,030
2017-02-28T04:06:00.000
0
0
0
1
0
python
0
42,500,440
0
2
0
false
0
0
You can create an init script in the /etc/init/ directory. Example:

start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 5
respawn

script
    exec /usr/bin/python /path/to/script.py
end script

Save it with a .conf extension.
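The /etc/init/ layout above is Upstart; on distributions that use systemd (most modern ones), the equivalent is a unit file such as /etc/systemd/system/myscript.service. A minimal sketch, where the unit name and script path are assumptions:

```ini
[Unit]
Description=My background Python script
After=network.target

[Service]
ExecStart=/usr/bin/python /path/to/script.py
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable and start it with systemctl enable --now myscript.service.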
1
0
0
0
I am currently using Linux. I have a Python script that I want to run as a background service, such that the script starts running when I start my machine. Currently I am using Python 2.7 and the command python myscript.py to run the script. Can anyone give an idea of how to do this? Thank you.
Run a python script as a background service in linux
0
0
1
0
0
1,140
42,506,643
2017-02-28T10:47:00.000
0
0
0
0
0
python-2.7,ibm-cloud-infrastructure
0
42,534,904
0
1
0
true
0
0
That is not possible using SoftLayer's API; even in SoftLayer's control portal, that information is not available.
1
0
0
0
I would like to know how traffic flows between servers in SoftLayer, how to detect unusual traffic in that flow, and how to detect ports that are prone to unusual/malicious traffic. Can we retrieve this information using any SoftLayer Python APIs?
how to get unusual traffic or traffic information in SoftLayer using python API's
0
1.2
1
0
1
61
42,514,902
2017-02-28T17:14:00.000
4
0
0
0
0
python,sql,django
0
42,515,036
0
1
0
true
1
0
A SQLite database is just a file. To drop the database, simply remove the file. When using SQLite, python manage.py migrate will automatically create the database if it doesn't exist.
1
1
0
0
How do I remove and add a completely new db.sqlite3 database to a Django project written in PyCharm? I did something wrong and I need a completely new database. The 'flush' command just removes data from the database; it doesn't remove the table schema. So the question is how to get my database back to the starting point (no data, no SQL tables).
How to remove and add a completely new db.sqlite3 to a Django project written in PyCharm?
0
1.2
1
1
0
1,356
42,515,611
2017-02-28T17:54:00.000
0
0
0
0
0
python,selenium,button,youtube
0
42,515,710
0
1
0
true
0
0
You can select it with a CSS selector. To like: #watch8-sentiment-actions > span > span:nth-child(1) > button. To cancel the like: #watch8-sentiment-actions > span > span:nth-child(2) > button.
1
0
0
0
Does anyone know how to find and click the YouTube Like button in Python using Selenium, since it doesn't have a real id? Thanks for the answers.
Python - Selenium: Find / Click YT-Like Button
0
1.2
1
0
1
378
42,522,654
2017-03-01T03:30:00.000
0
0
0
0
0
python,python-2.7,apache-spark,pyspark
0
42,522,820
0
1
1
true
0
0
You can create traditional Python data objects such as arrays, lists, tuples, or dictionaries in PySpark. You can perform most operations on them using ordinary Python functions. You can import Python libraries in PySpark and use them to process data. You can also create an RDD and apply Spark operations to it.
1
1
1
0
I am currently self-learning Spark programming and trying to recode an existing Python application in PySpark. However, I am still confused about how we use regular Python objects in PySpark. I understand the distributed data structure in Spark such as the RDD, DataFrame, Datasets, vector, etc. Spark has its own transformation operations and action operations such as .map(), .reduceByKey() to manipulate those objects. However, what if I create traditional Python data objects such as array, list, tuple, or dictionary in PySpark? They will be only stored in the memory of my driver program node, right? If I transform them into RDD, can i still do operations with typical Python function? If I have a huge dataset, can I use regular Python libraries like pandas or numpy to process it in PySpark? Will Spark only use the driver node to run the data if I directly execute Python function on a Python object in PySpark? Or I have to create it in RDD and use Spark's operations?
How Python data structure implemented in Spark when using PySpark?
0
1.2
1
0
0
854
42,524,114
2017-03-01T05:46:00.000
7
0
0
0
1
python,webdriver,geckodriver
0
42,542,815
0
5
0
true
0
0
First, make sure you are downloading the build for your OS. The Windows builds are at the bottom of the list (it will say win32; either the 32- or 64-bit build works). Download and extract that file. If you get an error saying there is no file in the WinRAR archive, it may be because your WinRAR settings are set not to extract files with the .exe extension; under WinRAR options > Settings > Security you can delete the *.exe entry, after which the file extracts normally. Once that is done, add the folder containing geckodriver.exe to your PATH so the driver can be found, and then restart so the change takes effect.
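As an alternative to editing the Windows environment dialog, the PATH can also be extended for the current process from Python before Selenium starts Firefox. The folder name below is an assumption for illustration:

```python
import os

# Hypothetical folder where geckodriver.exe was extracted.
driver_dir = r"C:\tools\geckodriver"

# Prepend it to PATH for this process only; child processes (like the
# driver executable Selenium spawns) inherit the modified value.
os.environ["PATH"] = driver_dir + os.pathsep + os.environ.get("PATH", "")
```

This avoids a system-wide change, but must run before the webdriver is created in the same script.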
2
8
0
0
I am trying to install webdriver, and in order to open Firefox I need geckodriver to be installed and on the correct path. Firstly, the download link for geckodriver only gives you a file that is not an executable; is there a way to make it one? Secondly, I have tried to change my path variables in the command prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path entry under system; there is a Path in user variables, so I edited that to point to where the file is located. I have extracted the geckodriver rar file and received a file with no extension. I don't know how you can have a file with no extension, but they did it; the icon is like a blank sheet of paper with a fold at the top left. If anyone has a solution for this, including maybe another package like webdriver that will let me open a browser and then refresh the page after a given amount of time, that is all I want to do.
how to install geckodriver on a windows system
0
1.2
1
0
1
66,774
42,524,114
2017-03-01T05:46:00.000
0
0
0
0
1
python,webdriver,geckodriver
0
46,927,125
0
5
0
false
0
0
I've wrestled with the same question for the last hour. Make sure you have the latest version of Firefox installed: I had Firefox 36, which claimed to be up to date when checking for updates, while Mozilla's website had version 54 as the latest, so I downloaded Firefox from the website and reinstalled. Make sure you have the latest geckodriver downloaded. If you're getting the path error, use import os; os.getcwd() to find the directory Python is looking at, and add geckodriver.exe to that working directory.
2
8
0
0
I am trying to install webdriver, and in order to open Firefox I need geckodriver to be installed and on the correct path. Firstly, the download link for geckodriver only gives you a file that is not an executable; is there a way to make it one? Secondly, I have tried to change my path variables in the command prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path entry under system; there is a Path in user variables, so I edited that to point to where the file is located. I have extracted the geckodriver rar file and received a file with no extension. I don't know how you can have a file with no extension, but they did it; the icon is like a blank sheet of paper with a fold at the top left. If anyone has a solution for this, including maybe another package like webdriver that will let me open a browser and then refresh the page after a given amount of time, that is all I want to do.
how to install geckodriver on a windows system
0
0
1
0
1
66,774
42,546,031
2017-03-02T02:58:00.000
1
0
1
0
0
python,anaconda,spyder
0
42,579,694
0
1
0
true
0
0
This was fixed in Spyder 3.2, which was released in July of 2017.
1
1
0
0
The Spyder Variable Explorer only shows variables when I run a Python script; while debugging, there is nothing in the Variable Explorer. How do I set it to show variables during debugging?
How to show variables in the Spyder Variable Explorer while debugging?
0
1.2
1
0
0
1,166
42,582,938
2017-03-03T15:39:00.000
0
0
1
0
0
python,image,photo,editing
0
42,583,042
0
2
0
false
0
0
"Online photo editor" means that most of the processing will be done on the client side (i.e. in the browser). Python is mostly a server-side language, so I would suggest using another, more browser-friendly language (perhaps JavaScript?).
1
0
0
0
I am going to build an online photo editor using Python, but I don't know how to start. My plan is to create an online platform where users can upload their photos and the system transforms them into a style like Ukiyo-e, the ancient Japanese woodblock printing, so the photo outcomes look similar to that. Are there any similar works already done, or any libraries that can help do this? Thanks for answering.
Creating a online photo editor in Python
0
0
1
0
0
2,065
42,583,082
2017-03-03T15:46:00.000
0
0
1
1
0
python,python-2.7,python-3.x
0
42,583,156
0
7
0
false
0
0
In my case, /usr/bin/python is a symlink that points to /usr/bin/python2.7. Usually, there is a corresponding symlink for python2 and python3, so if you type python2 you get a Python 2 interpreter and if you type python3 you get a Python 3 one.
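To see which concrete interpreter each name resolves to, the symlinks can be listed directly. The paths below are typical for Debian/Ubuntu-style layouts, not guaranteed everywhere:

```shell
# Show where the python symlinks point (may print nothing on other layouts).
ls -l /usr/bin/python* 2>/dev/null || true

# Invoke a specific major version explicitly instead of the bare "python":
python3 --version
```

On Windows (the asker's case), the bare python command resolves via PATH order rather than symlinks, so calling the versioned command name explicitly is the portable habit.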
5
0
0
0
I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter?
how to switch python interpreter in cmd?
0
0
1
0
0
10,377