Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,193,175 | 2016-03-24T04:13:00.000 | 0 | 0 | 0 | 0 | http,python-requests,fiddler | 36,207,761 | 1 | false | 1 | 0 | Fiddler isn't correctly intercepting anything that this program is sending/receiving
That means the program is either firing requests to localhost (very unlikely), or ignoring the proxy settings for the current user (most likely). The latter also means this application won't function on a machine where a proxy connection is required in order to make HTTP calls to the outside.
The alternative would be to use a packet inspector like Wireshark, to have the application fixed so it respects proxy settings, or to capture all HTTP requests originating from that machine at another level, for example at the next router in your network. | 1 | 0 | 0 | I am trying to intercept HTTP requests sent via an application I have installed on my Windows 7 machine. I'm not sure what platform the application is built on; I just know that Fiddler isn't correctly intercepting anything that this program is sending/receiving. Requests through Chrome are intercepted fine.
Can Fiddler be set up as a proxy for ALL applications, and if so, how would I go about doing this? I have no control over the application code, it's just something I installed. It is a live bidding auction program which seems to mainly display HTML pages inside the application window. | Using Fiddler to intercept requests from Windows program | 0 | 0 | 1 | 767 |
36,194,303 | 2016-03-24T06:19:00.000 | 0 | 0 | 0 | 1 | python,shell,emacs,elisp | 36,212,585 | 4 | true | 0 | 0 | Emacs has a number of different ways to interact with external programs. From your text, I suspect you need to look at comint in the Emacs manual and the Elisp reference manual. Comint is the low-level "general shell in a buffer" functionality (it is what shell mode uses).
Reading between the lines of your post, I would also suggest you have a look at emacspeak and speechd.el, both of which are packages that add speech to Emacs. Speechd.el is bare bones and uses speech-dispatcher, while emacspeak is very feature rich. The emacspeak package uses a Tcl script which communicates with hardware or software speech servers. It also has a Mac version written in Python which communicates with the OS X accessibility (VoiceOver) subsystem. Looking at how these packages work will likely give you good examples of how to make yours do what you want. | 2 | 0 | 0 | I wrote a simple shell in Python and compiled it with Nuitka.
My shell has some simple commands, such as "say string", "braille string", "stop", etc.
This program uses the Python accessible_output package to communicate with a screen reader on Windows.
This works well from a normal shell, or when executing it from Windows.
Now, I would like to run this program from within Emacs, like a normal shell in Emacs.
I tried some functions, "start-process" and "shell-command", but I can't type commands into them.
My program displays a prompt, like the Python interpreter, where I can enter my commands.
Elisp is able to run Python shells and MySQL shells, but I'm unable to run my own shell.
Help! | How can I run my own shell from elisp? | 1.2 | 0 | 0 | 154 |
36,194,303 | 2016-03-24T06:19:00.000 | 0 | 0 | 0 | 1 | python,shell,emacs,elisp | 36,230,312 | 4 | false | 0 | 0 | What about just launching your script from inside an emacs shell buffer?
M-x shell RET /path/to/my/script RET | 2 | 0 | 0 | I wrote a simple shell in Python and compiled it with Nuitka.
My shell has some simple commands, such as "say string", "braille string", "stop", etc.
This program uses the Python accessible_output package to communicate with a screen reader on Windows.
This works well from a normal shell, or when executing it from Windows.
Now, I would like to run this program from within Emacs, like a normal shell in Emacs.
I tried some functions, "start-process" and "shell-command", but I can't type commands into them.
My program displays a prompt, like the Python interpreter, where I can enter my commands.
Elisp is able to run Python shells and MySQL shells, but I'm unable to run my own shell.
Help! | How can I run my own shell from elisp? | 0 | 0 | 0 | 154 |
36,195,457 | 2016-03-24T07:56:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,cluster-analysis,k-means | 50,800,687 | 6 | false | 0 | 0 | You can simply store the labels in an array and convert the array to a data frame. Then merge the data that you used to create the K-means model with the new data frame of clusters.
Display the dataframe; now you should see each row with its corresponding cluster. If you want to list all the data in a specific cluster, use something like data.loc[data['cluster_label_name'] == 2], assuming 2 is your cluster of interest. | 1 | 40 | 1 | I am using the sklearn.cluster KMeans package. Once I finish the clustering, if I need to know which values were grouped together, how can I do it?
Say I had 100 data points and KMeans gave me 5 clusters. Now I want to know which data points are in cluster 5. How can I do that?
Is there a function where I give the cluster id and it lists out all the data points in that cluster? | How to get the samples in each cluster? | 0 | 0 | 0 | 62,619
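A minimal sketch of the lookup both answers describe, using scikit-learn directly (the toy data is a placeholder): the fitted model's labels_ array holds one cluster id per data point, so a boolean mask selects the members of a single cluster:
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                    # 100 data points, 2 features
kmeans = KMeans(n_clusters=5, random_state=0).fit(X)

labels = kmeans.labels_                       # cluster id of each data point
cluster_5 = X[labels == 4]                    # members of the 5th cluster (ids are 0-based)
print(len(cluster_5))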
36,200,052 | 2016-03-24T12:18:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,google-cloud-dataflow | 36,207,559 | 1 | false | 0 | 0 | There is nothing planned at this time.
If you can run the Tensorflow training on a single machine (it sounds like this is what you were doing with Spark) then it should be possible to do the training within a DoFn of a Dataflow pipeline. | 1 | 2 | 1 | I am playing around with tensorflow and today I have noticed that google also open-sourced Python SDK for their dataflow.
Currently, when I need to train and evaluate several networks in parallel, I usually either use Luigi and run one model training after another, or use Spark and perform each model training within the map step.
All of this data processing is just one part of the pipeline.
I am wondering if there is or if there is planned something like perform tensorflow model training step inside of the dataflow pipeline?
Is there currently some best practice around this?
Or do I have to run each model setting within the map step?
I went through the documentation and for now it seems to be really vague, so I'm asking here if someone has some experience with this. | Running tensorflow model training from dataflow | 0.197375 | 0 | 0 | 310 |
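A hedged sketch of the "training within a DoFn" idea from the answer, assuming the Apache Beam/Dataflow Python SDK; the config elements and the stand-in training step are hypothetical placeholders:
import apache_beam as beam

class TrainModelFn(beam.DoFn):
    def process(self, config):
        # Heavy imports go inside process() so each worker loads them after unpickling.
        import tensorflow as tf  # noqa: F401 (used by the real training code)
        score = 1.0 / config['lr']        # stand-in for actual training + evaluation
        yield (config['lr'], score)

p = beam.Pipeline()
(p
 | beam.Create([{'lr': 0.1}, {'lr': 0.01}])   # one element per model setting
 | beam.ParDo(TrainModelFn()))
p.run()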
36,201,960 | 2016-03-24T13:58:00.000 | 0 | 0 | 0 | 0 | python,autobahn,wamp-protocol,crossbar | 70,193,006 | 1 | false | 1 | 0 | Yes, the simplest thing that could possibly work is to open two WAMP clients - one for each realm. | 1 | 2 | 0 | I'm looking into designing a WAMP/Crossbar application with two or more realms; one realm would be for backend messaging, while a second would essentially expose a public API to frontend clients. Now, at some point messages need to cross between realms, which would require one client to join two realms and act as a bridge.
Is that feasible at all without a lot of bending over backwards? Or is the design approach flawed from the beginning, and I should rather use specific topic URIs to separate front and backends? | Bridging two realms/subscribing one client to multiple realms | 0 | 0 | 0 | 163 |
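A hedged sketch of the answer's two-client bridge with Autobahn|Python on Twisted; the URL, port, and realm names are placeholders, and start_reactor=False lets both connections share one reactor:
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
from twisted.internet import reactor

class BridgeSession(ApplicationSession):
    def onJoin(self, details):
        print("joined realm:", self.config.realm)

for realm in (u"backend", u"frontend"):
    runner = ApplicationRunner(u"ws://localhost:8080/ws", realm)
    runner.run(BridgeSession, start_reactor=False)
reactor.run()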
36,202,521 | 2016-03-24T14:24:00.000 | 9 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 41,979,893 | 2 | false | 0 | 0 | You can write exit() to quit debug mode. | 2 | 7 | 0 | I am using iPython notebook, with the %%debug command.
My code performs a loop, in which I set a breakpoint.
Now I can't seem to stop the loop with 'CTRL+C' (this works in the regular IPython).
For example, let's say I have some loop with an ipdb.set_trace() inside. Now if I hit the 'C' key the loop continues to the next ipdb.set_trace(), with no way for the user to exit the loop with 'CTRL+D/C'.
How can I exit such a loop? | How to quit an iPython notebook debug session? | 1 | 0 | 0 | 4,235 |
36,202,521 | 2016-03-24T14:24:00.000 | 0 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 36,202,722 | 2 | true | 0 | 0 | You can use "ctrl+d" to delete the cell,
or, to quit debug mode, just do exit().
Note: it is not recommended to delete the cell if it contains important code. | 2 | 7 | 0 | I am using iPython notebook, with the %%debug command.
My code performs a loop, in which I set a breakpoint.
Now I can't seem to stop the loop with 'CTRL+C' (this works in the regular IPython).
For example, let's say I have some loop with an ipdb.set_trace() inside. Now if I hit the 'C' key the loop continues to the next ipdb.set_trace(), with no way for the user to exit the loop with 'CTRL+D/C'.
How can I exit such a loop? | How to quit an iPython notebook debug session? | 1.2 | 0 | 0 | 4,235 |
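A minimal reproduction of the situation described; as the answers note, typing exit() (or the standard pdb command q) at the ipdb> prompt raises an exception that aborts the loop instead of continuing to the next breakpoint:
import ipdb

for i in range(10):
    ipdb.set_trace()  # at the ipdb> prompt: 'c' continues, exit() or 'q' aborts the loop
    print(i)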
36,202,899 | 2016-03-24T14:41:00.000 | 1 | 0 | 0 | 0 | python,django,multithreading,django-rest-framework,django-1.9 | 36,209,892 | 1 | true | 1 | 0 | Django itself doesn't have a queue, but you can easily simulate one. Personally, I would probably use an external service, like RabbitMQ, but it can be done in pure Django if you want. Add a separate ImageQueue model to hold references to incoming images, and use transaction management to make sure simultaneous requests don't return the same image. Maybe something like this (this is purely proof-of-concept code, of course).
from django.contrib.auth.models import User
from django.db import models

class ImageQueue(models.Model):
    image = models.OneToOneField(Image)  # Image: the app's existing image model
    added = models.DateTimeField(auto_now_add=True)
    processed = models.DateTimeField(null=True, default=None)
    processed_by = models.ForeignKey(User, null=True, default=None)

    class Meta:
        ordering = ('added',)

...
# in the incoming image API that drone uses
def post_an_image(request):
    image = Image()
    ... whatever you do to post an image ...
    image.save()
    queue = ImageQueue.objects.create(image=image)
    ... whatever else you need to do ...

# in the API your users will use
from django.db import transaction
from django.utils import timezone

@transaction.atomic
def request_images(request):
    user = request.user
    num = int(request.POST['num'])  # number of images requested (POST values are strings)
    # select_for_update() locks these rows for the duration of the transaction,
    # so two simultaneous requests cannot be handed the same images
    queue_slice = list(
        ImageQueue.objects.select_for_update().filter(processed__isnull=True)[:num])
    for q in queue_slice:
        q.processed = timezone.now()
        q.processed_by = user
        q.save()
    return [q.image for q in queue_slice] | 1 | 1 | 0 | I am setting up a system where one user will be posting images to a Django server and N users will each be viewing a subset of the posted images in parallel. I can't seem to find a queuing mechanism in Django to accomplish this task. The closest thing is using latest with filter(), but that will just keep sending the latest image over and over again until a new one comes. The task queue doesn't help since this isn't a periodic task, it only occurs when a user asks for the next picture. I have one Viewset for uploading the images and another for fetching. I thought about using the python thread-safe Queue. The uploader will enqueue the uploaded image pk, and when multiple users request a new image, the sending Viewset will dequeue an image pk and send it to the most recent user requesting an image and then the next one dequeued to the second most recent user and so on...
However, I still feel like there are some race conditions possible here. I read that Django is thread-safe, but that the app can become un-thread-safe. In addition, the Queue would need to be global to be shared among the Viewsets, which feels like bad practice. Is there a better and safer way of going about this?
Edit
Here is more detail on what I'm trying to accomplish, to give it some context. The user posting the pictures is a smartphone attached to a drone. It will be posting pictures from the sky at a constant interval to the Django server. Since there will be a lot of pictures coming in, I would like to have multiple users splitting up the workload of looking at all the pics (i.e. no two users should see the same picture). So a user will contact the Django server, saying "send me the next pic you have", or "send me the next 3 pics you have", etc. However, multiple users might say this at the same time, so Django needs to keep some sort of ordering of the pictures; that's why I mentioned a queue, and why I need to figure out how to hand pictures out when more than one user asks at a time. So one Viewset is for the smartphone to post the pics and the other is for the users to ask for the pics. I am looking for a thread-safe way to do this. The only idea I have so far is to use Python's thread-safe queue and make it global to the Viewsets. However, I feel like that is bad practice, and I'm not sure if it is thread-safe with Django. | Queuing pictures to be requested in Django | 1.2 | 0 | 0 | 394
36,205,356 | 2016-03-24T16:34:00.000 | 3 | 0 | 1 | 0 | python,jupyter-notebook | 64,924,017 | 2 | false | 0 | 0 | You can follow 'Kernel' --> 'Interrupt' in the menu bar at the top.
Pressing 'i' twice in quick succession also works; this only works in command mode (if you're not already in it, press Esc or click to the left of the cell).
If you are okay with losing your variables, you can restart the kernel instead. | 1 | 63 | 0 | I have trouble stopping a given kernel in an iPython notebook.
Many times the cell is stuck, and I need to restart the entire kernel.
The loop could be messy sometimes.
How can I stop the kernel? | How should I stop a busy cell in an iPython notebook? | 0.291313 | 0 | 0 | 89,961 |
36,209,363 | 2016-03-24T20:23:00.000 | 1 | 0 | 1 | 0 | python,matplotlib,installation,pip | 36,209,490 | 1 | false | 0 | 0 | On the computer with the internet, use pip wheel matplotlib to download all the wheel files needed for the installation (including dependencies).
Then, on the computer without the internet, use pip install matplotlib --no-index --find-links="<directory of the wheel files>". The --no-index flag stops pip from trying to reach PyPI, so it will install matplotlib purely from the local files. | 1 | 0 | 1 | I'm trying to install matplotlib on a server that is sequestered and has no internet connection. When I run pip with this command:
pip.exe install matplotlib-1.5.1-cp27-none-win_amd64.whl --no-index
I get this message:
Could not find a version that satisfies the requirement pyparsing!=2.0.4,>=1.5.6 (from matplotlib==1.5.1) (from versions: )
No matching distribution found for pyparsing!=2.0.4,>=1.5.6 (from matplotlib==1.5.1)
I'm relatively new with this and have only used Pip once or twice before but always with an internet connection. This computer is sequestered on a DoD compound and is for sandbox development only. I have to go through three layers of RDP in order to get to it. I've tried everything I can think of or find but no luck.
Any suggestions are greatly appreciated.
Thanks | No internet access but Pip wants to look for newer version | 0.197375 | 0 | 0 | 518 |
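Concretely, the two steps from the answer might look like this (the wheel directory is a placeholder):
# On the machine with internet access:
pip wheel matplotlib -w C:\wheels
# Copy C:\wheels to the offline machine, then:
pip install matplotlib --no-index --find-links=C:\wheels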
36,212,431 | 2016-03-25T00:50:00.000 | 0 | 0 | 1 | 0 | python | 36,212,457 | 4 | false | 0 | 0 | Why not read each file into a list, where each element of the list holds one line?
Once you have both files loaded into your lists, you can work line by line (index by index) through the lists, doing whatever comparisons/operations you require. | 1 | 0 | 0 | I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each of them and do some operations, then the next line, and so on. I want to know how I can do this. It seems that a plain for loop cannot do this job. | how to use python to deal with two file at the same time | 0 | 0 | 0 | 48
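An alternative sketch that avoids loading whole files: iterate two open file objects in lockstep with zip (filenames are placeholders; on Python 2, itertools.izip avoids building the full list of pairs):
with open('a.txt') as fa, open('b.txt') as fb:
    for line_a, line_b in zip(fa, fb):
        # do your comparison/operation on the pair of corresponding lines
        print(line_a.strip(), line_b.strip())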
36,212,815 | 2016-03-25T01:44:00.000 | 2 | 0 | 0 | 0 | python,django,pandas,scikit-learn,h2o | 36,227,222 | 1 | false | 1 | 0 | It seems the Pandas DataFrame to H2OFrame conversion works fine outside Django, but fails inside Django. The problem might be with Django's pre_save not allowing the writing/reading of the temporary .csv file that H2O creates when ingesting a python object. A possible workaround is to explicitly write the Pandas DataFrame to a .csv file with model_data_frame.to_csv(<path>, index=False) and then import the file into H2O with h2o.import_file(<path>). | 1 | 0 | 1 | I am taking input values from a django model admin screen and on pre_save calling h2o to do predictions for other values and save them.
Currently I convert my input from pandas (trying to work with sklearn preprocessing easily here) by using:
modelH2OFrame = h2o.H2OFrame(python_obj = model_data_frame.to_dict('list'))
It parses and loads. Hell it even creates a frame with values when I do it step by step.
BUT. When I run this inside of the Django pre_save, the H2OFrame comes back completely empty.
Ideas for why this may be happening? Sometimes I get errors connecting to the h2o cluster or timeouts--maybe that is a related issue? I load the H2O models in the pre_save call and do the predictions, allocate them to model fields, and then shut down the h2o cluster (in one function). | H2OFrame converts dict to all zeros | 0.379949 | 0 | 0 | 276 |
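A minimal sketch of the workaround described in the answer, with a placeholder path (model_data_frame is the pandas DataFrame from the question):
import h2o

csv_path = '/tmp/model_input.csv'                # placeholder path
model_data_frame.to_csv(csv_path, index=False)   # explicit on-disk round trip
frame = h2o.import_file(csv_path)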
36,212,891 | 2016-03-25T01:53:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql | 36,213,045 | 1 | false | 1 | 0 | Open connections will likely stop schema updates. If you can't wait for existing connections to finish, or if your environment is such that long-running connections are used, you may need to halt all connections while you run the update(s).
The downtime, if it's likely to be significant to you, could be mitigated if you have a read-only slave that could stay online. If not, ensuring your site fails over to some sort of error/explanation page or redirect would at least avoid raw failure-code responses to incoming requests, if downtime for migrations is acceptable. | 1 | 1 | 0 | I've tried to deploy (including migrations) to a production environment, but my Django migrations (e.g. adding columns) very often stall and don't progress anymore.
I'm working with PostgreSQL 9.3, and I have found one reason for this problem: if PostgreSQL has an active transaction holding a lock on the table, an ALTER TABLE query will not run (it blocks waiting for the lock). So until now, restarting the PostgreSQL service before migrating has been my workaround, but I think this is a bad idea.
Is there any good idea to progress deploying nicely? | django migration doesn't progress and makes database lock | 0.197375 | 1 | 0 | 1,701 |
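A hedged sketch (not from the answer) of one way to see which open transactions might block an ALTER TABLE before migrating, using psycopg2 with placeholder connection settings; pg_stat_activity's state column exists in PostgreSQL 9.2+:
import psycopg2

conn = psycopg2.connect(dbname='mydb')  # placeholder connection settings
cur = conn.cursor()
cur.execute("SELECT pid, state, query FROM pg_stat_activity "
            "WHERE state = 'idle in transaction'")
for pid, state, query in cur.fetchall():
    print(pid, state, query)  # candidates to close before running migrations
conn.close()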
36,214,865 | 2016-03-25T06:03:00.000 | 1 | 0 | 1 | 0 | python,pygame | 36,214,891 | 1 | false | 0 | 1 | Make sure that you're not using pygame.py as a script filename. It will prevent the import of pygame module you want.
Rename the file so it doesn't conflict. Also make sure there's no remaining pygame.pyc file. | 1 | 0 | 0 | The pygame module is installed but not working.
ImportError: No module named 'pygame.locals'; 'pygame' is not a package
It worked when I imported it in IDLE without creating a file, but it does not work when the .py file is executed through the interpreter. | pygame module is not working in python 3.4.2 | 0.197375 | 0 | 0 | 74
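A quick way to check for the shadowing the answer describes is to print which file actually got imported; if it points at your own pygame.py or pygame.pyc, rename that file:
import pygame
print(pygame.__file__)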
36,218,385 | 2016-03-25T10:42:00.000 | 0 | 0 | 0 | 0 | python,opencv,object-detection | 66,138,989 | 4 | false | 0 | 0 | The detectMultiScale function is used to detect faces. It returns a rectangle with coordinates (x, y, w, h) around each detected face.
It takes three common arguments: the input image, scaleFactor, and minNeighbours.
scaleFactor specifies how much the image size is reduced at each image scale. In a group photo, there may be some faces nearer the camera than others; naturally, such faces appear more prominent than the ones behind. This factor compensates for that.
minNeighbours specifies how many neighbours each candidate rectangle should have for it to be retained, i.e. the number of overlapping candidates a rectangle needs before it is called a face. You may have to tweak these values to get the best results.
These values are obtained by trial and error over a specific range. | 2 | 27 | 1 | I am not able to understand the parameters passed to detectMultiScale. I know that the general syntax is detectMultiScale(image, rejectLevels, levelWeights)
However, what do the parameters rejectLevels and levelWeights mean? And what are the optimal values used for detecting objects?
I want to use this to detect pupil of the eye | Parameters of detectMultiScale in OpenCV using Python | 0 | 0 | 0 | 78,382 |
36,218,385 | 2016-03-25T10:42:00.000 | 20 | 0 | 0 | 0 | python,opencv,object-detection | 55,628,240 | 4 | false | 0 | 0 | Amongst these parameters, you need to pay more attention to four of them:
scaleFactor – Parameter specifying how much the image size is reduced at each image scale.
Basically, the scale factor is used to create your scale pyramid. More explanation, your model has a fixed size defined during training, which is visible in the XML. This means that this size of the face is detected in the image if present. However, by rescaling the input image, you can resize a larger face to a smaller one, making it detectable by the algorithm.
1.05 is a good possible value for this: it means you use a small step for resizing, i.e. reduce the size by 5% per level, which increases the chance that a size matching the model is found. This also means the algorithm works more slowly, since it is more thorough. You may increase it to as much as 1.4 for faster detection, with the risk of missing some faces altogether.
minNeighbors – Parameter specifying how many neighbors each candidate rectangle should have to retain it.
This parameter will affect the quality of the detected faces. Higher value results in fewer detections but with higher quality. 3~6 is a good value for it.
minSize – Minimum possible object size. Objects smaller than that are ignored.
This parameter determines how small size you want to detect. You decide it! Usually, [30, 30] is a good start for face detection.
maxSize – Maximum possible object size. Objects bigger than this are ignored.
This parameter determines how big size you want to detect. Again, you decide it! Usually, you don't need to set it manually, the default value assumes you want to detect without an upper limit on the size of the face. | 2 | 27 | 1 | I am not able to understand the parameters passed to detectMultiScale. I know that the general syntax is detectMultiScale(image, rejectLevels, levelWeights)
However, what do the parameters rejectLevels and levelWeights mean? And what are the optimal values used for detecting objects?
I want to use this to detect pupil of the eye | Parameters of detectMultiScale in OpenCV using Python | 1 | 0 | 0 | 78,382 |
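A minimal, self-contained example of the parameters the second answer describes, with placeholder file paths (any OpenCV Haar cascade, e.g. an eye cascade for pupil regions, works the same way):
import cv2

cascade = cv2.CascadeClassifier('haarcascade_eye.xml')  # placeholder cascade file
img = cv2.imread('face.jpg')                            # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detections = cascade.detectMultiScale(
    gray,
    scaleFactor=1.05,   # shrink the image by 5% per pyramid level
    minNeighbors=5,     # require 5 overlapping candidates to keep a detection
    minSize=(30, 30),   # ignore objects smaller than 30x30 pixels
)
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)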
36,221,588 | 2016-03-25T14:13:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 38,489,976 | 1 | false | 1 | 0 | Not sure if this will work for you, but at least for DNNClassifiers you can specify the model_dir parameter when creating one; that will construct the model from the files, and then you can continue the training.
For DNNClassifiers, you specify model_dir when first creating the object, and training will store checkpoints and other files in that directory. You can then come back later and create another DNNClassifier specifying the same model_dir, and that will restore your pre-trained model. | 1 | 0 | 1 | In the latest version of tensorflow, when I save the model I find two files are produced: model_xxx and model_xxx.meta.
Does model_xxx.meta specify the network? Can I resume training using model_xxx and model_xxx.meta without specifying the network in the code? What about the training queue structure: is it stored in model_xxx.meta? | How to resume training from *.meta in tensorflow? | 0 | 0 | 0 | 429
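The answer covers the estimator route; for the *.meta file named in the title, there is also the standard low-level route (not from the answer; the checkpoint prefix model_xxx is taken from the question, and the op name in the comment is hypothetical):
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model_xxx.meta')  # rebuilds the saved graph
    saver.restore(sess, 'model_xxx')                      # loads the variable values
    # ops saved in the graph (including queue/training ops) are reachable by name, e.g.:
    # train_op = tf.get_default_graph().get_operation_by_name('train_op')  # hypothetical name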
36,223,345 | 2016-03-25T15:55:00.000 | 4 | 0 | 0 | 1 | python,windows-10,simplehttpserver | 36,223,473 | 3 | true | 0 | 0 | OK, so a different command is apparently needed.
This works:
C:\pathToIndexfile\py -m http.server
As pointed out in a comment, the change to "http.server" is not because of Windows, but because I changed from Python 2 to Python 3. | 1 | 5 | 0 | I recently bought a Windows 10 machine and now I want to run a server locally for testing a webpage I am developing.
On Windows 7 it was always very simple to start an HTTP server via Python and the command prompt. For example, running the command below would fire up an HTTP server and I could view the website through localhost.
C:\pathToIndexfile\python -m SimpleHTTPServer
This does, however, not seem to work on Windows 10...
Does anyone know how to do this on Windows 10? | How to start python simpleHTTPServer on Windows 10 | 1.2 | 0 | 1 | 25,052 |
36,225,950 | 2016-03-25T18:42:00.000 | 0 | 0 | 0 | 0 | python,datetime,numpy,pandas,types | 36,226,280 | 1 | false | 0 | 0 | The best way might be to avoid floats altogether. Preempt the conversion to numerics in read_table by specifying dtype, with the column in question being kept an object. to_datetime handles that as intended.
HT: BrenBarn in the comments. | 1 | 0 | 1 | I am confused by something in pandas 0.18.0. In my input CSV data, a field is supposed to consist of dates as YYYYMMDD strings, but some rows have this missing or misformatted. I want to represent this column as datetime, with dates where possible and missing values where not.
I tried several options, and what got me furthest was not using parse_dates upon reading the table in (with read_table), but then coercing the conversion with pandas.to_datetime(DataFrame['Seriesname'], errors='coerce',format='%Y%m%d'). This is robust to typos where the number cannot represent a date (think '20100231', column imported as int64 first) or when the string does not represent a number at all (think '2o1oo228', column an object upon import).
What this procedure is not robust to is when the column contains only numbers but one field is empty. Then read_table imports the entire column as float64 (not int64, which has no missing value in numpy), and the conversion above produces all missing values, even for rows where the data makes sense.
Is there a way around this? | Why does to_datetime produce only missing values for a float column in pandas? | 0 | 0 | 0 | 61 |
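Putting the answer together with the question's setup (the file name is a placeholder; the column name comes from the question): keep the column as strings on import, then coerce:
import pandas as pd

df = pd.read_table('data.csv', dtype={'Seriesname': object})  # no numeric conversion
df['Seriesname'] = pd.to_datetime(df['Seriesname'], errors='coerce', format='%Y%m%d')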
36,226,083 | 2016-03-25T18:50:00.000 | 0 | 0 | 0 | 0 | python,pandas,dataframe,nan | 64,681,838 | 14 | false | 0 | 0 | df.isna() returns True for NaN values and False for the rest. So, doing:
df.isna().any()
will return True for any column having a NaN and False for the rest. | 3 | 231 | 1 | Given a pandas dataframe containing possible NaN values scattered here and there:
Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs? | How to find which columns contain any NaN value in Pandas dataframe | 0 | 0 | 0 | 303,847 |
36,226,083 | 2016-03-25T18:50:00.000 | 0 | 0 | 0 | 0 | python,pandas,dataframe,nan | 68,703,088 | 14 | false | 0 | 0 | import numpy as np
features_with_na = [feature for feature in dataframe.columns if dataframe[feature].isnull().sum() > 0]
for feature in features_with_na:
    print(feature, np.round(100 * dataframe[feature].isnull().mean(), 4), '% missing values')
print(features_with_na)
This prints the percentage of missing values for each column of the dataframe that has any. | 3 | 231 | 1 | Given a pandas dataframe containing possible NaN values scattered here and there:
Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs? | How to find which columns contain any NaN value in Pandas dataframe | 0 | 0 | 0 | 303,847 |
36,226,083 | 2016-03-25T18:50:00.000 | 40 | 0 | 0 | 0 | python,pandas,dataframe,nan | 47,418,973 | 14 | false | 0 | 0 | You can use df.isnull().sum(). It shows all columns and the total NaNs of each feature. | 3 | 231 | 1 | Given a pandas dataframe containing possible NaN values scattered here and there:
Question: How do I determine which columns contain NaN values? In particular, can I get a list of the column names containing NaNs? | How to find which columns contain any NaN value in Pandas dataframe | 1 | 0 | 0 | 303,847 |
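A minimal example combining the answers above into the requested list of column names (toy data only):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, np.nan], 'b': [4, 5, 6]})
print(df.isnull().sum())                           # NaN count per column
cols_with_nan = df.columns[df.isnull().any()].tolist()
print(cols_with_nan)                               # ['a']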
36,226,500 | 2016-03-25T19:19:00.000 | 3 | 1 | 0 | 0 | python,teamcity,pytest,tox | 36,237,069 | 1 | true | 1 | 0 | TeamCity counts the tests based on their names. My guess is since your tests in the tox matrix have the same name, they are counted as one test. This should be visible on the test page of your build, where you can see invocation counts of each test.
For TeamCity to report the number of tests correctly, test names must differ between configurations. Perhaps you could include configuration details in the reported test names. | 1 | 4 | 0 | I have a project with a very simple configuration matrix, described in tox: py{27,35}-django{18,19}
I'm using TeamCity as the CI server and run tests with py.test with teamcity-messages installed. I've tried running every configuration, like tox -e py27-django18, in different steps, but TeamCity didn't summarize the tests and didn't accumulate coverage for files; it only counted coverage for the last run, and Tests passed: ... showed tests from only one build.
How can testing with multiple Python configurations be integrated into TeamCity?
Update: it turns out that coverage counts correctly; I had just forgotten to add the --cov-append option to py.test. | Testing python project with Tox and Teamcity | 1.2 | 0 | 0 | 864
36,228,021 | 2016-03-25T21:06:00.000 | 6 | 0 | 1 | 0 | python,pep8 | 36,228,075 | 2 | true | 0 | 0 | There's nothing wrong with it. There are plenty of classes whose __str__ returns a multiline value (such as pandas DataFrames and Series). You can return whatever format you think best represents the object. | 2 | 3 | 0 | In python, is there a section of a PEP against the method __str__ returning a multiline string?
Pro: PyCharm does not tell me off and Google draws a blank
Contra: I could see it causing an inelegant result if someone concatenated it; it feels un-pythonic, and I cannot recall seeing one, except in my own code
So I am not sure whether to just have a multiline __str__ return (keeping it simple), or to leave __str__ alone (i.e. returning the default <class foo> form) and have a special method (e.g. foo.report()). | PEP and multiline `__str__` return | 1.2 | 0 | 0 | 1,718
36,228,021 | 2016-03-25T21:06:00.000 | 2 | 0 | 1 | 0 | python,pep8 | 36,228,106 | 2 | false | 0 | 0 | The __str__ method is meant to return a string value that is human readable. If there is any use in having multiple lines for that (such as the string representation of a board game that has two dimensions), you should do that. If it is meant to be consumed by another source, you may want to use a custom method that includes the appropriate information in the appropriate format, or override __repr__, which is meant to return an unambiguous string representation. | 2 | 3 | 0 | In python, is there a section of a PEP against the method __str__ returning a multiline string?
Pro: PyCharm does not tell me off and Google draws a blank
Contra: I could see it causing an inelegant result if someone concatenated it; it feels un-pythonic, and I cannot recall seeing one, except in my own code
So I am not sure whether to just have a multiline __str__ return (keeping it simple), or to leave __str__ alone (i.e. returning the default <class foo> form) and have a special method (e.g. foo.report()). | PEP and multiline `__str__` return | 0.197375 | 0 | 0 | 1,718
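A toy example of the second answer's board-game case, where a multiline __str__ is the natural human-readable form:
class Board(object):
    def __init__(self, cells):
        self.cells = cells

    def __str__(self):
        # one text row per board row
        return '\n'.join(' '.join(row) for row in self.cells)

print(Board([['x', 'o'], ['o', 'x']]))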
36,230,585 | 2016-03-26T01:25:00.000 | 0 | 0 | 0 | 0 | python,heroku,python-requests,cpython | 36,230,874 | 2 | false | 1 | 0 | CPython will release memory, but it's a bit murky.
CPython allocates chunks of memory at a time; let's call them fields.
When you instantiate an object, CPython will use blocks of memory from an existing field if possible; possible meaning there are enough contiguous blocks for said object.
If there aren't enough contiguous blocks, it'll allocate a new field.
Here's where it gets murky.
A field is only freed when it contains zero objects, and while there's garbage collection in CPython, there's no "trash compactor". So if you have a couple of objects spread across a few fields, each field only 70% full, CPython won't move those objects together and free some fields.
It seems pretty reasonable that the large data chunk you're pulling from the HTTP call is getting allocated to "new" fields, but then something goes sideways, the object's reference count goes to zero, then garbage collection runs and returns those fields to the OS. | 1 | 0 | 0 | Some observations on Heroku that don't completely mesh with my mental model.
My understanding is that CPython will never release memory once it has been allocated by the OS. So we should never observe a decrease in resident memory of CPython processes. And this is in fact my observation from occasionally profiling my Django application on Heroku; sometimes the resident memory will increase, but it will never decrease.
However, sometimes Heroku will alert me that my worker dyno is using >100% of its memory quota. This generally happens when a long-running response-data-heavy HTTPS request that I make to an external service (using the requests library) fails due to a server-side timeout. In this case, memory usage will spike way past 100%, then gradually drop back to less than 100% of quota, when the alarm ceases.
My question is, how is this memory released back to the OS? AFAIK it can't be CPython releasing it. My guess is that the incoming bytes from the long-running TCP connection are being buffered by the OS, which has the power to de-allocate. It's murky to me when exactly "ownership" of TCP bytes is transferred to my Django app. I'm certainly not explicitly reading lines from the input stream, I delegate all of that to requests. | python requests memory usage on heroku | 0 | 0 | 0 | 854 |
36,231,114 | 2016-03-26T02:54:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,google-cloud-sql,gcloud | 36,407,336 | 1 | true | 1 | 0 | SQL schema migration is a well-known branch of SQL database administration and is not specific to Cloud SQL, which differs from other SQL systems mainly in how it is deployed and networked. Other than that, you should look up schema-migration documentation and articles online to learn how to approach your specific situation. This question is too broad for Stack Overflow as it is, however. Best of luck! | 1 | 1 | 0 | I'm currently using Google Cloud SQL 2nd generation instances to host my database. I need to make a schema change to a table, but I'm not sure of the best way to do this.
Ideally, before I deploy using gcloud preview app deploy my migrations will run so the new version of the code is using the latest schema. Also, if I need to rollback to an old version of my app the migrations should run for that point in time. Is there a way to integrate sql schema migrations with my app engine deploys?
My app is app engine managed VM python/flask. | How to perform sql schema migrations in app engine managed vm? | 1.2 | 1 | 0 | 180 |
36,231,304 | 2016-03-26T03:29:00.000 | 2 | 0 | 0 | 0 | python,runtime,solver,pulp,interruptions | 36,231,325 | 1 | false | 0 | 0 | Your computer didn't shut down; it hibernated, essentially writing the current state of your memory and CPU to disk. When power was restored, it reinitialized from the stored state rather than booting from scratch. | 1 | 1 | 0 | I've been running a large data set through a script using the PuLP solver in Python. The CBC solver itself has been taking a very long time to solve the MILP minimization problem, but this is expected as the data set is extremely large. I managed to leave my computer on for about the first 12 hours of the running program; however, eventually my laptop shut down from a dead battery. To my surprise, when I turned on the computer the Python window was still open and the code was still running. I wanted to know: did the program restart, continue from where it left off, or should I suspect that it's no longer working?
And just to clarify, I know the Python program works correctly: it returns the correct answer for smaller subsets of the large data set (it solves the problem in 16 minutes for 85% of the large data set). I'd appreciate any insight I can get! | What happens to running Python script when computer shuts down? | 0.379949 | 0 | 0 | 1,150
36,232,088 | 2016-03-26T05:53:00.000 | 1 | 1 | 1 | 0 | python,python-2.7,raw-input | 36,232,129 | 2 | false | 0 | 0 | Your factorial function probably takes an integer as input,
but raw_input returns a string.
So you need to convert the returned string to an integer,
using int().
Alternatively, in Python 2 you can use input(), which evaluates what is typed and so returns an integer for numeric input. | 1 | 0 | 0 | I wrote a simple factorial function in Python to compute the factorial of n: given n, it shows the factorial of n. The problem is that I use the raw_input function to get the value of n, but the function can't work with that. What can I do to make it work with raw_input? | Input from keyboard in python | 0.099668 | 0 | 0 | 188
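A minimal sketch of the fix on Python 2 (the factorial body is a placeholder implementation):
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

n = int(raw_input('Enter n: '))  # raw_input gives a str; int() converts it
print(factorial(n))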
36,234,690 | 2016-03-26T11:24:00.000 | 1 | 1 | 0 | 0 | python,email | 36,242,008 | 1 | true | 0 | 0 | Well, just a little bit of testing with telnet will give you the answer to the question 'how do I find the imap server for privateemail.com'.
mail.privateemail.com is their IMAP server. | 1 | 0 | 0 | I am using NameCheap to host my domain, and I use their privateemail.com to host my email. I'm looking to create a Python program to retrieve specific/all emails from my inbox and to read the HTML from them (HTML instead of .body, because there is a button with a hyperlink which I need and which is only accessible via the HTML). I had a couple of questions for everyone.
Would the best way to do this be via IMAPlib? If it is, how do I find out the imap server for privateemail.com?
I could do this via selenium, but it would be heavy and I would prefer a lighter weight and faster solution. Any ideas on other possible technologies to use?
Thanks! | Retrieving emails from NameCheap Private Email | 1.2 | 0 | 1 | 411 |
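A hedged imaplib sketch using the host from the answer; the credentials are placeholders, and the fetched RFC822 source contains the HTML part the question needs:
import imaplib

conn = imaplib.IMAP4_SSL('mail.privateemail.com')
conn.login('user@example.com', 'password')   # placeholder credentials
conn.select('INBOX')
typ, data = conn.search(None, 'ALL')
for num in data[0].split():
    typ, msg_data = conn.fetch(num, '(RFC822)')
    raw_email = msg_data[0][1]               # full MIME source, including text/html parts
conn.logout()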
36,240,752 | 2016-03-26T20:53:00.000 | -1 | 0 | 1 | 0 | python,exe,fpdf,pyinstaller | 37,174,567 | 1 | false | 0 | 0 | Downgrading setuptools to version 19.2 may solve the issue. | 1 | 0 | 0 | I've created a Windows executable from a Python script using PyInstaller. I've included the fpdf package; the exe compiles without errors, but when I run it, I get an error dialog displaying the following and an OK button:
Fatal Error!
pyi_rth_pkgres returned -1
I can click the OK button and the exe process finishes.
I've looked through the fpdf source code, but I don't think it has any hidden imports.
What can I do now? Does anybody have experience with including fpdf in a PyInstaller .exe? | PyInstaller exe including fpdf package is not working | -0.197375 | 0 | 0 | 288
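The downgrade the answer suggests is a one-liner:
pip install setuptools==19.2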
36,240,900 | 2016-03-26T21:06:00.000 | 1 | 0 | 0 | 0 | python,flask,wtforms,flask-wtforms | 36,382,378 | 2 | false | 1 | 1 | Ok. I found it.
IntegerField(widget=widgets.Input(input_type="tel")) | 2 | 0 | 0 | I want to use a phone number field in my form. What I need is that when this field is tapped on an Android phone, a digits-only keyboard appears instead of the general one.
I learned that this can be achieved by using <input type="tel" or <input type="number".
How do I use the tel or number input types in WTForms? | How do I use "tel", "number", or other input types in WTForms? | 0.099668 | 0 | 0 | 5,419 |
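In context, the answer's one-liner might sit in a form like this (the form and field names are placeholders):
from wtforms import Form, IntegerField, widgets

class PhoneForm(Form):
    # renders <input type="tel">, so mobile browsers show a dial-pad keyboard
    phone = IntegerField('Phone', widget=widgets.Input(input_type='tel'))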
36,242,806 | 2016-03-27T01:14:00.000 | 0 | 0 | 1 | 0 | python,console,spyder | 36,248,477 | 2 | true | 0 | 0 | I'm running a different version of Spyder & Python but was able to repro your issue. Make sure you always have an IPython console running; this will ensure you don't get this error, even after re-opening Spyder. | 2 | 0 | 0 | I can run my script once with each Python console ("Python 1", "Python 2", etc.), but after running, the console is unusable: I can't run or type anything into the console and get a response. When I try to run the script again I get this message:
"No Python console is currently selected to run GameLoop.py. Please select or open a new Python console and try again."
If I open a new console, I can run the script again. But the new console has no memory of the variables created in the script.
I don't think this problem occurred the first 2 times using Spyder. My version is Spyder 2.3.8 and I am running Python 2.7.
My console settings are set to "Execute in current IPython or Python console", but changing this setting to "dedicated" doesn't help.
How can I get a console to continue being usable after running a script? | Spyder no console currently connected | 1.2 | 0 | 0 | 4,252
36,242,806 | 2016-03-27T01:14:00.000 | 0 | 0 | 1 | 0 | python,console,spyder | 46,189,525 | 2 | false | 0 | 0 | Go to Uninstall Programs, click on Python x.x (x.x is the version), choose Repair, then Yes.
After the repair procedure completes, IDLE works fine. | 2 | 0 | 0 | I can run my script once with each Python console ("Python 1", "Python 2", etc.), but after running, the console is unusable: I can't run or type anything into the console and get a response. When I try to run the script again I get this message:
"No Python console is currently selected to run GameLoop.py. Please select or open a new Python console and try again."
If I open a new console, I can run the script again. But the new console has no memory of the variables created in the script.
I don't think this problem occurred the first 2 times using Spyder. My version is Spyder 2.3.8 and I am running Python 2.7.
My console settings are set to "Execute in current IPython or Python console", but changing this setting to "dedicated" doesn't help.
How can I get a console to continue being usable after running a script? | Spyder no console currently connected | 0 | 0 | 0 | 4,252
36,245,566 | 2016-03-27T08:54:00.000 | 4 | 0 | 0 | 0 | python,django,apache | 36,245,682 | 1 | true | 1 | 0 | In most deployment scenarios there is a Python interpreter running in the web server or next to it, and it has your code loaded into memory. If the code is changed, the loaded parts are not reloaded automatically (but some updated parts may be loaded if they were not loaded previously, hence errors) and there is no clean way to fully reload all code without destroying all objects, so restarting the interpreter is the only way.
You can use the Django development server with the autorestart option, but that still works by restarting. | 1 | 0 | 0 | If I don't reload the web server (Apache) after making changes to source files in my Django application, the browser displays erratic content, sometimes errors.
Why is that? (Just out of interest)
And more importantly: can I switch it off during development? | Why do I have to restart or reload the webserver when I make changes in django? | 1.2 | 0 | 0 | 164
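If Apache serves the app through mod_wsgi in daemon mode (an assumption; the question doesn't say how Apache runs Django), touching the WSGI script file reloads just that application without restarting Apache:
touch /path/to/project/wsgi.py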
36,246,925 | 2016-03-27T11:46:00.000 | 0 | 0 | 0 | 0 | javascript,python,ajax,node.js,sockets | 36,248,252 | 1 | true | 1 | 0 | So you have three systems, and an asynchronous request. I solved a problem like this recently using PHP and the box.com API. PHP doesn't allow keeping a connection open indefinitely so I had a similar problem.
To solve the problem, I would use a recursive request. It's not 'real-time' but that is unlikely to matter.
How this works:
The client browser sends the "Get my download thing" request to the Node.js server. The Node.js server returns a unique request id to the client browser.
The client browser starts a 10 second poll, using the unique request id to see if anything has changed. Currently, the answer is no.
The Node.js server receives this and sends a "Go get his download thing" request to the Python server. (The client browser is still polling every 10 seconds, the answer is still no)
The python server actually goes and gets his download thing, sticks it in a place, creates a URL and returns that to the Node.js server. (The client browser is still polling every 10 seconds, the answer is still no)
The Node.js server receives a message back from the Python server with the URL to the thing. It stores the URL against the request id it started with. At this point, its state changes to "Yes, I have your download thing, and here it is! - URL).
The client browser receives the lovely data packet with its URL, stops polling now, and skips happily away into the sunset. (or similar more appropriate digital response).
Hope this helps to give you a rough idea of how you might solve this problem without depending on push technology. Consider tweaking your poll interval (I suggested 10 seconds to start) depending on how long the download takes. You could even get tricky, wait 30 seconds, and then poll every 2 seconds. Fine tune it to your problem. | 1 | 0 | 0 | I have a node.js server which is communicating from a net socket to python socket. When the user sends an asynchronous ajax request with the data, the node server passes it to the python and gets data back to the server and from there to the client.
The problem occurs when the user sends the ajax request: he has to wait for the response and if the python process takes too much time then the ajax is already timed out.
I tried to create a socket server in node.js and a client that connects to the socket server in python with the data to process. The node server responds to the client with a loading screen. When the data is processed the python socket client connects to the node.js socket server and passes the processed data. However the client can not request the processed data because he doesn't know when it's done. | Node.js long term tasks | 1.2 | 0 | 1 | 99 |
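A hedged Python-side sketch of the "unique request id + poll" pattern from the answer, using Flask purely for illustration; the storage and endpoint names are placeholders:
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # request_id -> result (None while the Python worker is still busy)

@app.route('/start')
def start():
    request_id = str(uuid.uuid4())
    jobs[request_id] = None      # kick off the long-running work asynchronously here
    return jsonify(id=request_id)

@app.route('/poll/<request_id>')
def poll(request_id):
    result = jobs.get(request_id)
    return jsonify(done=result is not None, result=result)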
36,246,954 | 2016-03-27T11:49:00.000 | 2 | 0 | 1 | 0 | python,postgresql,python-2.7,virtualenv,psycopg2 | 36,247,000 | 1 | true | 0 | 0 | virtualenvs are by default isolated from the system packages so you need to install all packages into each virtualenv (or you can pass --system-site-packages when creating it). | 1 | 0 | 0 | I want to use psycopg2 (PostgreSQL) with virtualenv.
I am using Ubuntu; the system Python (as root) already has psycopg2 and it works fine, but if I try to use it after activating a virtualenv it shows
ImportError: No module named psycopg2
Do I need to symlink dist-packages manually? | PostgreSQL not working with virtual environment | 1.2 | 1 | 0 | 20
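The two routes from the answer, as commands (the environment name is a placeholder):
# inside the activated virtualenv:
pip install psycopg2
# or create the environment with access to the system packages:
virtualenv --system-site-packages myenv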
36,247,420 | 2016-03-27T12:38:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,ubuntu,gtk,gobject | 36,247,772 | 1 | true | 0 | 0 | I reinstalled python-gobject and python-gobject-2, and now everything is working! | 1 | 3 | 0 | I try to import Gtk like this:
from gi.repository import Gtk
and I get the following error:
ImportError: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject"
And I get the same error when I try from gi.repository import GObject
Help me to solve this problem, please! | Please change all occurrences of "import gobject" to "from gi.repository import GObject" | 1.2 | 0 | 0 | 4,501 |
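On Ubuntu, the reinstall from the answer might look like this (an assumption about how the packages were originally installed):
sudo apt-get install --reinstall python-gobject python-gobject-2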
36,250,426 | 2016-03-27T17:33:00.000 | 0 | 0 | 1 | 0 | python,function,arguments | 36,250,521 | 2 | false | 0 | 0 | This question is not specific to Python; it applies to any programming language. If a function does two different things, like fitting within a radius and fitting within a region, it is better to split it into two functions and give each a more meaningful name, like fit_radius and fit_region, instead of one generic name like run_fit. | 1 | 0 | 0 | What is the best practice in Python for a function that can be called with two "kinds" of arguments?
As an example, I have a function run_fit that can take a radius argument and fit at all points in the radius or can take a region argument and fit at all points in the custom region.
Should radius and region be keyword arguments, even though exactly one is always required?
Another way of asking my question is: is there a way to capture the fact that neither argument is necessary but at least one must be provided? | Python best practice for function that can be called in two different ways | 0 | 0 | 0 | 40 |
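A sketch combining the answer's split with explicit validation of the "exactly one argument" rule; fit_radius and fit_region are the hypothetical helpers named in the answer:
def run_fit(radius=None, region=None):
    # exactly one of the two must be provided
    if (radius is None) == (region is None):
        raise ValueError("provide exactly one of 'radius' or 'region'")
    if radius is not None:
        return fit_radius(radius)
    return fit_region(region)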
36,259,990 | 2016-03-28T10:01:00.000 | 1 | 0 | 1 | 0 | python,windows,installation | 43,224,421 | 4 | false | 0 | 0 | I encountered the same problem. The reason is that pending system updates that Python depends on need a reboot before they can be configured correctly. | 2 | 2 | 0 | I was trying to install Python, and for some weird and unknown reason the installation process stops and returns this error:
An error occurred during the installation of assembly 'Microsoft.VC90.CRT,version="9.0.21022.8",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="amd64",type="win32"'. Please refer to Help and Support for more information.
This is the first time I have encountered such an error. I tried googling it, but there seems to be no way to fix it. Do you have any suggestions? I just want to install Python. | Error when installing python | 0.049958 | 0 | 0 | 6,387
36,259,990 | 2016-03-28T10:01:00.000 | 5 | 0 | 1 | 0 | python,windows,installation | 47,285,003 | 4 | false | 0 | 0 | I know this is an old post but I recently ran into this exact same issue.
It was caused by my machine having pending Windows updates that needed a restart to be applied.
Once you've restarted your machine, remove the C:\Python27 folder (this is important).
Rerun the installer and it should work with no issues :-) | 2 | 2 | 0 | I was trying to install Python, and for some weird and unknown reason the installation process stops and returns this error:
An error occurred during the installation of assembly 'Microsoft.VC90.CRT,version="9.0.21022.8",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="amd64",type="win32"'. Please refer to Help and Support for more information.
This is the first time I have encountered such an error. I tried googling it, but there seems to be no way to fix it. Do you have any suggestions? I just want to install Python. | Error when installing python | 0.244919 | 0 | 0 | 6,387
36,264,081 | 2016-03-28T14:09:00.000 | 0 | 0 | 1 | 1 | python,dll | 36,264,203 | 1 | false | 0 | 0 | Because it is a .dll that you are trying to delete, there is a big chance that the file is in use and therefore can't be deleted.
Try to see if you can delete it manually first. | 1 | 0 | 0 | I wrote a simple script to delete a few files from some directories; I have to delete all the .exe files and all the .dll files. I manage to delete the .exe files using os.remove("path_name"), but when I try to delete the .dll files I get "Windows Error: [Error 267] The directory name is invalid". I am adding my code below and hope someone can help me solve the problem.
for name in dirs:
dirPath = RES_PATH + "\\" + name
dirsInside = os.listdir(dirPath)
LOG_FILE = open(dirPath + "\\log.log", 'w')
for doc in dirsInside:
if (".exe" in doc):
os.remove(dirPath + "\\" + doc)
elif (".dll" in doc):
shutil.rmtree(os.path.join(dirPath, doc))
if ("ResultFile.txt" in doc):
pathToResultFile = dirPath + "\\" + doc
fileResult = open(pathToResultFile, 'r')
lines = fileResult.readlines()
thanks in advance.
EDIT
When I try to use os.unlink() I get:
"WindowsError: [Error 5] Access is denied"
for the .dll file (the .exe file is deleted as it should) | couldn't delete file.dll using python script | 0.379949 | 0 | 0 | 531 |
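Worth noting (not part of the answer): shutil.rmtree expects a directory, which by itself explains the "directory name is invalid" error when it is pointed at a .dll file. A sketch using os.remove for both extensions, reusing the question's names; the later "Access is denied" will still occur if the DLL is loaded by a running process:
import os

for doc in dirsInside:
    if doc.endswith(('.exe', '.dll')):
        try:
            os.remove(os.path.join(dirPath, doc))
        except OSError as e:
            print('could not delete %s: %s' % (doc, e))  # e.g. DLL still in use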
36,265,728 | 2016-03-28T15:44:00.000 | 2 | 1 | 0 | 0 | python,python-2.7,google-chrome | 36,270,465 | 2 | false | 1 | 0 | Re-indenting the file seemed to help, but that might have just been coincidental. | 1 | 5 | 0 | Has anyone had trouble verifying/submitting code to the google foobar challenges? I have been stuck unable to progress in the challenges, not because they are difficult but because I literally cannot send anything.
After I type "verify solution.py" it responds "Verifying solution..." then after a delay: "There was a problem evaluating your code."
I had the same problem with challenge 1. I waited an hour then tried verifying again and it worked. Challenge 2 I had no problems. But now with challenge 3 I am back to the same cryptic error.
To ensure it wasn't my code, I ran the challenge with no code other than "return 3" which should be the correct response to test 1. So I would have expected to see a "pass" for test 1 and then "fail" for all the rest of the tests. However it still said "There was a problem evaluating your code."
I tried deleting cookies and running in a different browser. Neither changed anything. I waited overnight, still nothing. I am slowly running out of time to complete the challenge. Is there anything I can do?
Edit: I've gotten negative votes already. Where else would I put a question about the google foobar python challenges? Also, I'd prefer not to include the actual challenge or my code since it's supposedly secret, but if necessary I will do so. | Error with the google foobar challenges | 0.197375 | 0 | 1 | 9,217 |
36,267,978 | 2016-03-28T17:57:00.000 | 1 | 0 | 1 | 0 | python,nlp | 36,270,781 | 1 | false | 0 | 0 | Similarity is actually a difficult problem in NLP. I recommend using Word2Vec to generate word embeddings for each word. Then you can compare the distance between each word pair and see if it works better than your approach. The key to improving the effectiveness of word embeddings is to pick a corpus that is large enough and specific to an area close to your problem. | 1 | 1 | 1 | I am working on a project that needs to be able to classify modifiers like "a lot", "a few", "lots", "some" etc. into minimum percentages
For example "a lot" -> 80%
Right now I'm thinking of simply creating a large dictionary that relates these modifiers and numerical values e.g.
a few -> 15%
some -> 10%
lots -> 80%
However, this is very laborious and probably won't cover all scenarios. Is there an easier way to do this, or is there an NLP tool that already exists for this purpose, preferably in Python (or a database out there already)? | Easy way to classify words like "A lot", "A few", "some" | 0.197375 | 0 | 0 | 76
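A hedged gensim sketch of the answer's Word2Vec suggestion; the toy corpus is a placeholder, and in practice the corpus should be large and close to your domain, as the answer notes:
from gensim.models import Word2Vec

sentences = [['i', 'ate', 'a', 'lot'], ['i', 'ate', 'a', 'few'], ['i', 'ate', 'some']]
model = Word2Vec(sentences, min_count=1)
print(model.wv.similarity('lot', 'few'))  # cosine similarity of the two embeddings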
36,270,867 | 2016-03-28T20:45:00.000 | 2 | 0 | 1 | 0 | python,web-crawler,google-scholar | 37,187,565 | 2 | false | 0 | 0 | Without testing, I'm still pretty sure one of the following does the trick:
Easy, but with a small chance of success:
Delete all cookies from the site in question after every rand(0,100) requests,
then change your user-agent, accepted language, etc., and repeat.
A bit more work, but a much sturdier spider as a result:
Send your requests through Tor, other proxies, mobile networks, etc. to mask your IP (also do suggestion 1 at every turn)
Update regarding Selenium
I missed the fact that you're using Selenium; I took it for granted that you were driving it from some kind of modern programming language (I know that Selenium can be driven from most widely used languages, but it also works as a browser plug-in, demanding very little programming skill).
As I then presume your coding skills aren't (or weren't?) mind-boggling, and for others with the same limitations when using Selenium, my answer is to either learn a simple scripting language (PowerShell?!) or JavaScript (since it's the web you're on ;-)) and take it from there.
If automating scraping smoothly were as simple as a browser plug-in, the web would have to be a much messier, more obfuscated and credential-demanding place. | 1 | 9 | 0 | I am trying to get information on a large number of scholarly articles as part of my research study. The number of articles is on the order of thousands. Since Google Scholar does not have an API, I am trying to scrape/crawl Scholar. Now I know that this is technically against the EULA, but I am trying to be very polite and reasonable about this.
Extending the pauses to ~20s and adding some random noise to them
Making the pauses log-normally distributed (so that most pauses are on the order of seconds but every now and then there are longer pauses of several minutes and more)
Doing long pauses (several hours) between blocks of requests (~100).
I doubt that at this point my script is adding any considerable traffic over what any human would. But one way or the other I always get blocked after ~100-200 requests. Does anyone know of a good strategy to overcome this (I don't care if it takes weeks, as long as it is automated)? Also, does anyone have experience dealing with Google directly and asking for permission to do something similar (for research etc.)? Is it worth trying to write them and explain what I'm trying to do and how, and see whether I can get permission for my project? And how would I go about contacting them? Thanks! | Crawling Google Scholar | 0.197375 | 0 | 1 | 4,309
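A sketch of the question's pausing strategy combined with the first answer's suggestion (fresh cookies plus a rotated User-Agent every rand(0,100) requests); the URL list and agent strings are placeholders:
import random
import time
import requests

USER_AGENTS = ['agent-string-1', 'agent-string-2']   # placeholder UA strings
urls = ['https://example.com/page1']                 # placeholder request list

session = requests.Session()
next_reset = random.randint(1, 100)
for i, url in enumerate(urls, 1):
    if i >= next_reset:
        session = requests.Session()                 # drops accumulated cookies
        session.headers['User-Agent'] = random.choice(USER_AGENTS)
        next_reset = i + random.randint(1, 100)
    session.get(url)
    time.sleep(random.lognormvariate(0, 1))          # log-normal pauses, per the question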
36,275,005 | 2016-03-29T03:43:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,classification,random-forest | 36,281,252 | 2 | false | 0 | 0 | You can use the class_weight parameter.
Weights associated with classes in the form {class_label: weight}
You can give more weight to your small class and find the best weights using cross-validation.
For example, class_weight={1: 10, 0: 1} gives more weight to the class labeled 1. | 2 | 2 | 1 | I'm building a random forest classification model with the response variable split being 98%(False)-2%(True). I'm using Scikit Learn's RandomForest classifier for this.
What is the best way to handle this unbalanced data and avoid oversampling? | Stratified sampling for Random forest -Python | 0 | 0 | 0 | 1,888 |
36,275,005 | 2016-03-29T03:43:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,classification,random-forest | 54,748,830 | 2 | false | 0 | 0 | In newer versions of sklearn's random forest classifier, you can simply set class_weight="balanced". | 2 | 2 | 1 | I'm building a random forest classification model with the response variable split being 98%(False)-2%(True). I'm using Scikit Learn's RandomForest classifier for this.
What is the best way to handle this unbalanced data and avoid oversampling? | Stratified sampling for Random forest -Python | 0 | 0 | 0 | 1,888 |
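A minimal sketch combining the two class_weight answers above; the toy data just mimics the 98%/2% split from the question:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# tiny imbalanced toy data (about 2% positives)
X = np.random.randn(1000, 5)
y = (np.random.rand(1000) < 0.02).astype(int)

# explicit weights (tune via cross-validation), or use class_weight="balanced"
clf = RandomForestClassifier(n_estimators=100, class_weight={0: 1, 1: 10})
clf.fit(X, y)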
36,275,331 | 2016-03-29T04:16:00.000 | 0 | 0 | 0 | 0 | javascript,python,server | 36,275,505 | 2 | false | 1 | 0 | Not too sure what you are trying to achieve from reading the question. but from what i understand is that you'd like to be able to launch your application and serve it on the web when done.
you could use heroku!(www.heroku.com), same way you are hosting the application on your local, you simply make a Procfile, and in that file you put in the command that you'd normally run on your local, and push to Heroku. | 1 | 0 | 0 | I'm developing an html web page which will visualize data. The intention is that this web page works only on one computer, so I don't want it to be online. Just off line. The web uses only Js, css and html. It is very simple and is not using any database, the data is loaded through D3js XMLHttpRequest. Up to now it is working with a local python server for development, through python -m SimpleHTTPServer. Eventually I will want to launch it easyer. Is it possible to pack the whole thing in a launchable app? Do you recommend some tools to do it or some things to read? What about the server part? Is it possible to launch a "SimpleHTTPServer" kind of thing without the console? Or maybe just one command which launches the server plus the web?
Thanks in advance. | Serving locally a webapp | 0 | 0 | 1 | 61 |
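For the Heroku suggestion in the answer above, a one-line Procfile sketch; it assumes Heroku's $PORT environment variable and the Python 2 SimpleHTTPServer already used in the question:
web: python -m SimpleHTTPServer $PORT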
36,280,254 | 2016-03-29T09:24:00.000 | 0 | 1 | 0 | 0 | python,windows,raspberry-pi,putty | 36,300,075 | 2 | false | 0 | 0 | You can write the listener inside a loop and the mail-sending functionality as a function,
like this:
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)
try:
    while True:
        if GPIO.input(23) == 1:
            # button pressed, so call your mail() function;
            # after executing it you can set your stored state back to 0 if needed
            mail()
        if GPIO.input(24) == 0:
            print("Button 2 pressed")
finally:
    GPIO.cleanup()
Try the example above. | 1 | 0 | 0 | I have created a Python script, mail.py, which sends an email when GPIO 4 is pressed. My GPIO 4 is a pulled-up switch. The problem is that when I run the script directly it works (it sends the mail), but pressing the switch does nothing: before the switch is pressed the script falls out of the loop and exits, so the email is never sent. I have also added a delay for that. I think the problem is that when I press the switch once, the state has to be stored so it can be read 10 seconds later, but I can't store the state of the switch. Any suggestions? Thanks in advance. | Raspberry pi:send email when PULLED UP switch is pressed | 0 | 0 | 0 | 1,325
36,283,541 | 2016-03-29T11:54:00.000 | 3 | 0 | 1 | 0 | python,pip,pyodbc | 51,525,144 | 4 | false | 0 | 0 | To get a short report about the module, including its version, run:
pip show pyodbc
in a command prompt (e.g. Windows cmd). This works even for modules without a __version__ attribute. | 2 | 16 | 0 | I am using pyodbc and I want to know the version of it that I am using.
Apparently I cannot use pyodbc.__version__ because probably the variable is not set.
How else can I figure out the version of the package? | How to check version of python package if no __version__ variable is set | 0.148885 | 0 | 0 | 31,326 |
36,283,541 | 2016-03-29T11:54:00.000 | 1 | 0 | 1 | 0 | python,pip,pyodbc | 65,987,671 | 4 | false | 0 | 0 | Go to start --> cmd --> pip list
It lists all the packages you have installed, with their versions. | 2 | 16 | 0 | I am using pyodbc and I want to know the version of it that I am using.
Apparently I cannot use pyodbc.__version__ because probably the variable is not set.
How else can I figure out the version of the package? | How to check version of python package if no __version__ variable is set | 0.049958 | 0 | 0 | 31,326 |
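Alongside pip show and pip list from the answers above, a hedged programmatic alternative using setuptools' pkg_resources, which also works when __version__ is missing:
import pkg_resources

print(pkg_resources.get_distribution("pyodbc").version)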
36,288,578 | 2016-03-29T15:25:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 36,298,305 | 1 | false | 0 | 0 | I don't see why you'd want X to vary for each task: the point of multitask learning is that the same feature space is used to represent instances for multiple tasks which can be mutually informative. I get that you may not have ground truth y for all instances for all tasks, though this is currently assumed in the scikit-learn implementation. | 1 | 0 | 1 | I have one huge data matrix X, of which subsets of rows correspond to different tasks that are related but also have different idiosyncratic properties.
Thus I want to train a Multi-Task model with some regularization and chose sklearn's linear_model MultiTaskElasticNet function.
I am confused with the inputs of fitting the model. It says that both the X and the Y matrix are 2-dimensional. The 2nd dimension in Y corresponds to the number of tasks. That makes sense, but in my understanding the X matrix should be 3-dimensional right? In that way I have selected which subsets of my data correspond to different tasks as I know that in advance (obviously).
Does someone know how to enter my data correctly for this scikit-learn module?
Thank you! | Sklearn multi-task: Input data not 3-dimensional? | 0 | 0 | 0 | 511 |
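A minimal sketch of fitting MultiTaskElasticNet with one shared 2-D X and a 2-D Y (one column per task), as the answer above describes; shapes and hyperparameters are illustrative:
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

X = np.random.randn(100, 20)   # shared feature matrix
Y = np.random.randn(100, 3)    # 3 related tasks, one column each

model = MultiTaskElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, Y)
print(model.coef_.shape)       # (n_tasks, n_features) -> (3, 20)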
36,288,794 | 2016-03-29T15:35:00.000 | 2 | 0 | 1 | 0 | python,mongodb,pymongo | 36,289,062 | 1 | false | 0 | 0 | With mongo you have to pass a JSON document (which in Python we can essentially think of as a dict) in order to do anything really. So, say you want to add a list or set of numbers to the hamburgers field in your collection; you would prepare it as follows (note that update() takes a filter document as its first argument, and that '$each' must be quoted):
db.your_collection.update({}, {'$push': {'hamburgers': {'$each': [1, 2, 3, 4]}}})
If it's a set, you can convert it to a list first:
db.your_collection.update({}, {'$push': {'hamburgers': {'$each': list({1, 2, 3, 4})}}}) | 1 | 1 | 0 | Apparently it is not possible and the best advice I found so far is to use dict but this is not what I want. Is there still a way to store a set / list in MongoDB? | MongoDB & PyMongo: how to store a set / list? | 0.379949 | 0 | 0 | 2,869
36,291,392 | 2016-03-29T17:42:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,nltk,cluster-analysis,k-means | 36,304,549 | 2 | false | 0 | 0 | Precision, Recall, and thus the F-measure are inappropriate for cluster analysis. Clustering is not classification, and clusters are not classes!
Common measures for clustering (if you are trying to compare with existing labels, which does not make a whole lot of sense - if you already know the classes, then use classification and not clustering) are the Adjusted Rand Index and its variants. | 1 | 2 | 1 | I am trying to use NLTK's KMeans Clustering Algorithm.
It is generally going fine.
I want to use the Metrics package of NLTK to determine precision, recall and F-measure.
I searched for examples on the web and in other references, but without a clue so far.
Could anyone kindly cite an example or reference?
Thanks in Advance. | How may I calculate Accuracy in NLTK KMeans Clustering | 0 | 0 | 0 | 2,455 |
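A minimal sketch of the Adjusted Rand Index mentioned in the answer above, using scikit-learn; the label arrays are illustrative, and the score is invariant to cluster-id permutations:
from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 1, 1]   # known classes
labels_pred = [1, 1, 0, 0]   # cluster ids from KMeans
print(adjusted_rand_score(labels_true, labels_pred))  # 1.0: same partition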
36,292,230 | 2016-03-29T18:25:00.000 | 0 | 1 | 0 | 0 | python,math,trigonometry,angle,euler-angles | 36,292,344 | 2 | false | 0 | 0 | Suppose u(1), u(2), ..., u(m), v are all unit vectors. You want to determine i such that the angle between u(i) and v is maximized. This is equivalent to finding the i such that np.dot(u(i), v) is minimized. So if you have a matrix U where the rows are the u(i), you can simply do i = np.argmin(np.dot(U, v)) to find the i that has the angle between u(i) and v maximized. | 1 | 1 | 1 | I have a sensor attached to a drill. The sensor outputs orientation in heading roll and pitch. From what I can tell these are intrinsic rotations in that order. The Y axis of the sensor is parallel to the longitudinal axis of the drill bit. I want to take a set of outputs from the sensor and find the maximum change in orientation from the final orientation.
Since the drill bit will be spinning about the pitch axis, I believe it can be neglected.
My first thought would be to try to convert heading and roll to unit vectors assuming pitch is 0. Once I have their vectors, call them v and vf, the angle between them would be
Θ = arccos(v · vf)
It should then be fairly straightforward to have Python calculate Θ for a given set of orientations and pull the largest out.
My question is, is there a simpler way to do this using python, and, if not what is the most efficient way to convert these intrinsic rotations to unit vectors. | I need to find the angle between two sets of Roll and Yaw angles | 0 | 0 | 0 | 914 |
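A minimal NumPy sketch of the answer above: normalize the candidate vectors, find the one with the largest angle to v via argmin of the dot products, and recover Θ with arccos (the shapes and vectors are illustrative):
import numpy as np

U = np.random.randn(5, 3)                      # candidate direction vectors
U /= np.linalg.norm(U, axis=1, keepdims=True)  # make them unit vectors
v = np.array([0.0, 1.0, 0.0])                  # reference unit vector

i = np.argmin(np.dot(U, v))                    # row with the largest angle to v
theta = np.arccos(np.clip(np.dot(U[i], v), -1.0, 1.0))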
36,293,140 | 2016-03-29T19:12:00.000 | 2 | 0 | 1 | 0 | python,list,append,extend | 36,293,210 | 2 | true | 0 | 0 | In CPython, this is a difference in the way lists and tuples are allocated. For a list, the object contains a pointer to the memory allocated for the list contents. The list object proper is small, and never needs to move; the address of the vector it points to may change any number of times.
For a tuple object, which is expected to be small most times, the memory for the tuple content is indeed allocated directly in the tuple object. But tuples can't be resized, so your scenario can't arise in that case. | 2 | 3 | 0 | Being mutable, when a Python list is extended (e.g., mylist.extend() or mylist += anotherlist), the id of the list does not change.
I understand that (at least in CPython) lists are contiguous in memory (and the id happens to be the address of the head of the list). What if the memory after the list is already highly fragmented and the list extension cannot be allocated (even though there's plenty of free space, albeit discontiguous in that area)? Does the allocation fail? How is that mitigated? | How does CPython handle list extend if there is insufficient contiguous memory after the list? | 1.2 | 0 | 0 | 301 |
36,293,140 | 2016-03-29T19:12:00.000 | 1 | 0 | 1 | 0 | python,list,append,extend | 36,293,708 | 2 | false | 0 | 0 | This is implementation specific, but I assume you are asking about CPython.
As you say, lists are contiguous memory, but these are C pointers to PyObjects (PyObject *). The objects themselves, be they integers or objects of your own class, are allocated somewhere in dynamic memory but (probably) not in contiguous memory.
When a list is first allocated then more memory is obtained than is required, leaving "slack" entries on (conceptually) the right-hand side. A list.append() will just use the next slack entry. Of course eventually they are all full.
At that point a reallocation of memory is performed. In C terms that is a call to realloc(), but the implementation might not actually use the C run-time heap for this. Basically, a larger memory allocation is obtained from dynamic memory, then all the elements of the list are copied to the new list. Remember though that we are copying pointers to Python objects, not the objects themselves.
There is an overhead in doing this, but the use of the slack entries reduces it. You can see from this that list.append() is more efficient than adding an entry on the "left" of the list.
If the allocation fails because there is insufficient memory for a new list, then you will get an out of memory error. But if you are that short of memory then any new Python object could cause this.
Heap fragmentation can be an issue, but Python does its best to manage this, which is one reason why Python does some heap management of its own. In recent Windows implementations the low-level virtual memory APIs are used instead of the C RTL so Python can perform its own memory management. | 2 | 3 | 0 | Being mutable, when a Python list is extended (e.g., mylist.extend() or mylist += anotherlist), the id of the list does not change.
I understand that (at least in CPython) lists are contiguous in memory (and the id happens to be the address of the head of the list). What if the memory after the list is already highly fragmented and the list extension cannot be allocated (even though there's plenty of free space, albeit discontiguous in that area)? Does the allocation fail? How is that mitigated? | How does CPython handle list extend if there is insufficient contiguous memory after the list? | 0.099668 | 0 | 0 | 301 |
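A small CPython-specific demo of the behaviour described in the answers above: id() stays fixed while sys.getsizeof() jumps whenever the slack entries run out and the element vector is reallocated:
import sys

lst = []
addr = id(lst)
for n in range(20):
    lst.append(n)
    assert id(lst) == addr        # the list object itself never moves
    print(n, sys.getsizeof(lst))  # size jumps at each reallocation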
36,295,569 | 2016-03-29T21:38:00.000 | 1 | 0 | 1 | 0 | python,class,internal | 36,295,771 | 2 | true | 0 | 0 | Python code doesn't have any such equivalent for an anonymous namespace, or static linkage for functions. There are a few ways you can get what you're looking for
Prefix with _. Names beginning with an underscore are understood to be for internal use within that Python file and are not exported by from * imports. It's as simple as class _MyClass.
Use __all__: if a Python file contains a list of strings named __all__, only the names listed there are exported by from *; any functions and classes left out of it stay internal to that file.
Use local classes/functions. This would be done the same way you've done so with C++ classes.
None of these gets exactly what you want, but privacy and restriction in this way are just not part of the language (much like how there's no private data member equivalent). Pydoc is also well aware of these conventions and will provide informative documentation for the intended-to-be-public functions and classes. | 1 | 1 | 0 | I'm relatively new to Python.
When I did C/C++ programming, I used the internal classes quite often. For example, in some_file.cc, we may implement a class in the anonymous namespace to prevent it from being used outside. This is useful as a helper class specific to that file.
Then, how can we do a similar thing in Python? | How to make a python class not exposed to the outside? | 1.2 | 0 | 0 | 751
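A minimal module sketch of conventions 1 and 2 from the answer above (the module and class names are hypothetical):
# mymodule.py
__all__ = ['PublicClass']   # only this name is exported by `from mymodule import *`

class PublicClass:
    pass

class _Helper:              # leading underscore: internal by convention
    pass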
36,297,179 | 2016-03-29T23:51:00.000 | 0 | 0 | 0 | 0 | python,django,web-applications | 36,440,307 | 1 | true | 1 | 0 | I have found out that Django works fine with os.path, with no problems. Actually, if you are programming with Python then Django is a great choice for server work. | 1 | 0 | 0 | I have a python algorithm that accesses a huge database on my laptop. I want to create a web server to work with it. Can I use django with the folder paths I have used? How do I communicate with it? I want to get an image from the web application, have it sent to my laptop, run the algorithm on it, then send the result back to the webserver. Would that still be possible without me changing my algorithm paths? I use os.path to access my database folder; would I still be able to do this with django or shall I learn something else? I wanted to try django as it runs in python and I can learn it easily. | can i use django to access folders on my pc using same os.path for windows? | 1.2 | 0 | 0 | 55
36,297,429 | 2016-03-30T00:16:00.000 | 0 | 0 | 0 | 0 | python-2.7,pyqt,pyqt4,qwidget,qlabel | 36,314,912 | 3 | false | 0 | 1 | There's nothing in Qt that does that by default. You will indeed need to create an animation that changes the text.
You can use QFontMetrics (label.fontMetrics()) to determine if the label text is larger than then QLabel (to know if you need to scroll it or not). You need a way to repaint the QLabel every half second or so to animate the scrolling. The easiest way is probably a QTimer. The easiest method would probably be to subclass QLabel and have it check if it needs to scroll itself and reset the text every half second or so to simulate the scrolling.
If you wanted scrolling to be even smoother (at a sub-character level), you'd have to override the paint method and paint the text yourself translating and clipping it as necessary. | 1 | 1 | 0 | I am trying to use a Qlabel as a message center for relaying messages to users of the application. Some messages might be longer than allowed for the Qlabel and I want it to just scroll horizontally until the end of the text. How can I do this in a Qlabel? I cannot seem to find anything in designer and don't want to work out some sort of truncation method in code that just takes off pieces from the front of the string, that seems silly. | Smooth scrolling text in Qlabel | 0 | 0 | 0 | 2,078 |
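A minimal marquee-style sketch of the QLabel subclass described in the answer above (Python 2 / PyQt4, per the question's tags); it rotates the text in a loop rather than stopping at the end, and the timer interval is an assumption:
from PyQt4 import QtCore, QtGui

class ScrollingLabel(QtGui.QLabel):
    def __init__(self, *args):
        QtGui.QLabel.__init__(self, *args)
        self._timer = QtCore.QTimer(self)
        self._timer.timeout.connect(self._scroll)
        self._timer.start(300)   # scroll one character every 300 ms

    def _scroll(self):
        text = unicode(self.text())
        # only scroll when the rendered text is wider than the label
        if text and self.fontMetrics().width(text) > self.width():
            self.setText(text[1:] + text[0])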
36,302,677 | 2016-03-30T07:47:00.000 | 0 | 1 | 1 | 0 | python,heroku | 36,302,969 | 1 | true | 1 | 0 | When the dyno restarts, it's a new one. The filesystem on Heroku is ephemeral and is not persisted across dynos; so your file is lost.
You need to store it somewhere more permanent - either somewhere like S3, or one of the database add-ons. Redis might be suitable for this. | 1 | 0 | 0 | I have a twitter bot which is reading a text file and tweeting. Now, a free Heroku dyno sleeps after every 18 hours for 6 hours, after which it restarts with the same command. So, the text file is read again and the tweets are repeated.
To avoid this, every time a line was read out of the list of lines from the file, I was removing the line from the list (after tweeting) and putting the remaining list into a new file which is then renamed to the original file.
I thought this might work, but when the dyno restarted, it started from the beginning. Am I missing something here? It would be great if someone could help me with this. | Twitter Bot is restarting after Heroku dyno recharges | 1.2 | 0 | 0 | 103 |
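A minimal sketch of the Redis suggestion from the answer above, using redis-py; the REDIS_URL variable and key name are assumptions based on a typical Heroku Redis add-on setup:
import os
import redis

r = redis.from_url(os.environ['REDIS_URL'])

line_no = int(r.get('next_line') or 0)   # survives dyno restarts
# ... tweet line `line_no` of the file here ...
r.set('next_line', line_no + 1)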
36,307,621 | 2016-03-30T11:35:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pyautogui | 38,230,687 | 3 | false | 0 | 1 | I had the same problem using Windows 8.1. I solved it by making a bat file that calls the Python script and running the bat file as administrator.
To run the bat file as administrator, I right-clicked the bat file and chose Run as administrator. | 2 | 1 | 0 | So I am learning to use python 3 and now the "pyautogui" module. When I try to use "pyautogui.click(x, y)". I get this error "[WinError 5] Access is denied". It still clicks the coordinates, but why I get this error. I have tried to run this from normal and administer CMD. I am using windows 10. If you can help me please help!
Thanks for advice! | Pyautogui.click(x, y) error | 0 | 0 | 0 | 1,059 |
36,307,621 | 2016-03-30T11:35:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pyautogui | 39,370,746 | 3 | false | 0 | 1 | The root cause is when you have a mouse options app installed (at least in my case).
I know it from this story: I had a python script that used click (twice). It worked well, but in the meantime I installed a mouse settings app on my computer. After that I ran my script, but got this access denied error. (However, the first click worked; only the second one gave that error.) Then I uninstalled this mouse software (almost unusable anyway), and voila, the clicking was all right again.
Hope this helps you as well. | 2 | 1 | 0 | So I am learning to use python 3 and now the "pyautogui" module. When I try to use "pyautogui.click(x, y)". I get this error "[WinError 5] Access is denied". It still clicks the coordinates, but why I get this error. I have tried to run this from normal and administer CMD. I am using windows 10. If you can help me please help!
Thanks for advice! | Pyautogui.click(x, y) error | 0 | 0 | 0 | 1,059 |
36,308,531 | 2016-03-30T12:12:00.000 | 5 | 0 | 1 | 0 | python,pip,anaconda,package-managers,conda | 66,869,034 | 3 | false | 0 | 0 | For what it's worth, I noticed the following...
conda clean --all --dry-run gave me a rough total of 2GB
conda clean --packages --dry-run gave me a rough total of 6GB
So same discrepancy as observed by OP...
When I next did conda clean --tarballs --dry-run I noticed it also gave me 2GB, strange... Comparing output of first and last commands it seems conda clean --all --dry-run only showed me the tarballs, no mention of the packages
I went ahead, did conda clean --tarballs and then reran the conda clean --all --dry-run... guess what? It now showed the packages (after mentioning there were no tarballs, which is logical as I just cleaned them)
My conclusion... when there are still tarballs in the cache, conda clean --all --dry-run does not provide you the full picture of what will/could be removed | 1 | 48 | 0 | I have a conda virtual environment with several unused packages installed inside it (either using pip install or conda install).
What is the easiest way to clean it up so that only packages that are actually used by my code remain, and the others are uninstalled? | How to uninstall all unused packages in a conda virtual environment? | 0.321513 | 0 | 0 | 55,233 |
36,311,599 | 2016-03-30T14:17:00.000 | 1 | 0 | 1 | 0 | python,pip | 36,312,671 | 1 | true | 0 | 0 | I think it depends on the type of the packages.
Plain Python packages could potentially be installed to a network drive and reused on any device that has that location mounted. It would then just be a matter of adding this location to the PYTHONPATH (which can be done through environment variables or some local settings file in your application).
However cpython packages that need to be linked against other libraries (e.g. database drivers) would possibly not work as they are compiled against a specific library version and any difference across your computers would possibly cause them to break. | 1 | 0 | 0 | Hi my question is about the installation of python packages in a central location. I have many computers on my network and would ideally want to install python packages in only one location and have the packages available to all the computers . Is there a clean way to do this ? | Installing python packages in a central location | 1.2 | 0 | 0 | 106 |
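A hedged command sketch of the shared-location idea from the answer above; the mount path is an example, and per the answer this only suits plain Python packages:
pip install --target=/mnt/shared/pylibs requests
export PYTHONPATH=/mnt/shared/pylibs:$PYTHONPATH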
36,312,022 | 2016-03-30T14:34:00.000 | 2 | 0 | 1 | 1 | python,virtualenv,python-3.5,ubuntu-15.04 | 36,312,213 | 1 | false | 0 | 0 | You can just install the latest version of python. You can also download and install the different versions in your user's home dir.
In case you are planning to have multiple versions installed manually, this is from the official Python README file:
Installing multiple versions
On Unix and Mac systems if you intend to install multiple versions of Python using the same installation prefix (--prefix argument to the configure script) you must take care that your primary python executable is not overwritten by the installation of a different version. All files and directories installed using "make altinstall" contain the major and minor version and can thus live side-by-side. "make install" also creates ${prefix}/bin/python3 which refers to ${prefix}/bin/pythonX.Y. If you intend to install multiple versions using the same prefix you must decide which version (if any) is your "primary" version. Install that version using "make install". Install all other versions using "make altinstall".
For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being the primary version, you would execute "make install" in your 2.6 build directory and "make altinstall" in the others.
Once that is done you can create a virtual environment using the Python version of your choice. | 1 | 2 | 0 | I've never used virtualenv, I'm working on Ubuntu 15.04 (remotely via ssh), and I've been told I can't make any changes to the system Pythons. Ubuntu 15.04 comes with Pythons 2.7 and 3.4.3, but I want Python 3.5 in my virtualenv. I've tried virtualenv -p python3.5 my_env and it gives The executable python3.5 (from --python=python3.5) does not exist, which I take to mean that it's complaining about the system not having Python 3.5. So, is it impossible to create a virtualenv with Python 3.5, if the system does not already have Python 3.5? | `virtualenv` with Python 3.5 on Ubuntu 15.04 | 0.379949 | 0 | 0 | 906
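A hedged command sketch tying the README quote above to the virtualenv question; the prefix path is an example:
./configure --prefix=$HOME/opt/python3.5
make && make altinstall
virtualenv -p $HOME/opt/python3.5/bin/python3.5 my_env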
36,314,303 | 2016-03-30T16:14:00.000 | 4 | 0 | 0 | 0 | python,kivy | 36,314,717 | 1 | true | 0 | 1 | This is an issue with Kivy 1.9.1, where the hint text disappears as soon as the TextInput is focused. It has been fixed in the development branch, and now only disappears when there is content in the field. | 1 | 4 | 0 | Hi i noticed that whenever you have the focus property of a textinput widget set to True,
the hint_text is not displayed when the textinput is actually in focus.
Please is there a way to combine them both, i.e the hint_text gets displayed even when the text input is in focus? | Kivy TextInput how to combine hint_text and focus | 1.2 | 0 | 0 | 443 |
36,314,815 | 2016-03-30T16:40:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,encryption,cryptography,save | 36,345,400 | 2 | false | 0 | 0 | For good encryption to be secure, the key (and only the key) must be secure.
But if an application cannot retrieve the key from someplace else, it must store it in its own data or generate every time its needed. And that is not secure. It's like putting your house key under the doormat outside.
So obfuscation is the best you can hope for. And that depends on how sophisticated your public is.
Compressing a savefile with one of the compression modules from the standard library and then cutting off the header before writing the data to disk will defeat casual lookers because it looks like random data without the identifying header. But if they can read and understand the source code of the program, they can easily make the text readable again.
Another approach would be to do nothing with the savefile, but create a checksum (say SHA1) of the savefile contents and store that somewhere else. Refuse to open a savefile whose contents don't match the checksum. Again, anyone who is capable and willing to go through the program's source code can "break" this protection. | 2 | 2 | 0 | I am in the process of creating a text-based adventure game in Python. The game will not (and has no reason to) access the internet. When creating a save file, I want to securely encrypt and store the file. I would take all the data to be saved, run it through some encryption function, and create a text file storing the output of that encryption function. The file would be stored on the player's local computer storage (hard disk, user documents, user desktop, whatever).
My question is this: is there any way that I can create an encryption function inside the program that virtually uncrackable (whilst being decryptable)? Can any local encryption (even outside of Python) ever be "truly" secure? Just thinking about it/based on my knowledge of cryptography, I would say that it's not possible, but I've been wrong before.
I know that I can create pseudo-cryptography that's essentially obfuscation, but that, very obviously, is quite easy to crack. Also, I understand that most people wouldn't bother to or have the knowledge to edit the save file, but it doesn't take someone with too awfully much knowledge to do what I've stated in the following paragraph (if the individual is motivated enough, which most cheaters are).
The reason I want to encrypt the save file is because without encrypting it, it is so very easy to cheat the game by simply editing the (unencrypted) save file. Even encrypting the file leaves the encryption algorithm in plain sight in the python code; python does not need to be compiled into an executable, so the raw/natural python code is outright for anyone who simply looks at the contents of the game.
I highly doubt that secure encryption such as this is even possible, but if it is, please explain how and, if possible, provide some example Python code for incorporating it into Python (if it's possible for Python). | Can encryption/cryptography be secure inside a local, internet-nonaccessing program? | 0 | 0 | 0 | 172 |
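A minimal sketch of the checksum idea from the answer above; the save contents are a placeholder (hashlib wants bytes), and the digest would need to live somewhere harder to find than the save file itself:
import hashlib

def checksum(data):
    # data must be bytes, e.g. save_text.encode('utf-8')
    return hashlib.sha1(data).hexdigest()

save_data = b'gold=10;room=cellar'        # hypothetical save contents
stored_digest = checksum(save_data)       # stash this elsewhere on save

if checksum(save_data) != stored_digest:  # verify on load
    raise ValueError('save file was modified')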
36,314,815 | 2016-03-30T16:40:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,encryption,cryptography,save | 36,344,091 | 2 | false | 0 | 0 | There is no secure (as in cryptographically secure) way of achieving this. As the comments already say: your program must be able to decrypt the file again, so the knowledge how to decrypt it, must be there.
However, you can raise the bar that cheaters need to cross. The simplest way is to just encode the file instead of encrypting it. This can be base64 or even something simple as gzip. Your users would need to use an extra tool instead of just using a text editor.
Everything else needs your creativity. You might use obfuscated code to obtain the encryption key, like the hash of a python source file, or whatever. A cheater would then need to use a python debugger, which increases the needed knowledge.
But the question remains: If you don't play online, whom are your users going to cheat? | 2 | 2 | 0 | I am in the process of creating a text-based adventure game in Python. The game will not (and has no reason to) access the internet. When creating a save file, I want to securely encrypt and store the file. I would take all the data to be saved, run it through some encryption function, and create a text file storing the output of that encryption function. The file would be stored on the player's local computer storage (hard disk, user documents, user desktop, whatever).
My question is this: is there any way that I can create an encryption function inside the program that virtually uncrackable (whilst being decryptable)? Can any local encryption (even outside of Python) ever be "truly" secure? Just thinking about it/based on my knowledge of cryptography, I would say that it's not possible, but I've been wrong before.
I know that I can create pseudo-cryptography that's essentially obfuscation, but that, very obviously, is quite easy to crack. Also, I understand that most people wouldn't bother to or have the knowledge to edit the save file, but it doesn't take someone with too awfully much knowledge to do what I've stated in the following paragraph (if the individual is motivated enough, which most cheaters are).
The reason I want to encrypt the save file is because without encrypting it, it is so very easy to cheat the game by simply editing the (unencrypted) save file. Even encrypting the file leaves the encryption algorithm in plain sight in the python code; python does not need to be compiled into an executable, so the raw/natural python code is outright for anyone who simply looks at the contents of the game.
I highly doubt that secure encryption such as this is even possible, but if it is, please explain how and, if possible, provide some example Python code for incorporating it into Python (if it's possible for Python). | Can encryption/cryptography be secure inside a local, internet-nonaccessing program? | 0 | 0 | 0 | 172 |
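A minimal sketch of the encoding idea from the second answer above, using base64 over a JSON save state (the state dict is hypothetical); note this is obfuscation, not security:
import base64
import json

state = {'gold': 10, 'room': 'cellar'}
encoded = base64.b64encode(json.dumps(state).encode('utf-8'))
decoded = json.loads(base64.b64decode(encoded).decode('utf-8'))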
36,315,168 | 2016-03-30T16:58:00.000 | 10 | 0 | 0 | 0 | python,django,field,verbose | 54,033,108 | 1 | false | 1 | 0 | Verbose Field Names are optional. They are used if you want to make your model attribute more readable, there is no need to define verbose name if your field attribute is easily understandable. If not defined, django automatically creates it using field's attribute name.
Ex : student_name = models.CharField(max_length=30)
In this example, it is understood that we are going to store Student's name, so no need to define verbose explicitly.
Ex : name = models.CharField(max_length=30)
In this example, one may be confused what to store - name of student or Teacher. So, we can define a verbose name.
name = models.CharField("student name",max_length=30) | 1 | 8 | 0 | If I have a web app that use only one language and it is not English, is that correct to use model field verbose_name attribute for the field description (that will be printed in form)? I dont use the translation modules. | The use of model field "verbose name" | 1 | 0 | 0 | 10,288 |
36,317,027 | 2016-03-30T18:36:00.000 | 2 | 0 | 0 | 0 | python,django | 36,317,441 | 1 | true | 1 | 0 | It really depends on what you want to do; there are many ways. But yes, ideally your routes and templates are handled in AngularJS, Angular requests information from Django or POSTs information, and they communicate using JSON.
You need to put your Angular templates and files in the static folder; you can use grunt or django-pipeline to better manage all the files you have. | 1 | 2 | 0 | Now I am working with Django on the server side and jQuery on the client. Django view functions return templates with JS code.
How should it look using AngularJS? Should I return JSON from Django and render the response using JS?
Many thanks! | How to right code in Django and AngularJS? | 1.2 | 0 | 0 | 86 |
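A minimal sketch of the Django side of the JSON exchange described in the answer above; the view name and payload are hypothetical:
from django.http import JsonResponse

def api_items(request):
    # Angular fetches this endpoint and renders the data client-side
    return JsonResponse({'items': [1, 2, 3]})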
36,319,702 | 2016-03-30T21:04:00.000 | 0 | 0 | 0 | 0 | python,pagination,flask-sqlalchemy | 36,334,148 | 1 | false | 1 | 0 | ok I don't know the answer to this question but ordering the query (order_by) solved my problem... I am still interested to know why paginate does not have an order by itself, because it basically means that without the order statement, paginate cannot be used to iterate through all elements?
cheers
carl | 1 | 0 | 0 | I am using a simple sqlalchemy paginate statement like this
items = models.Table.query.paginate(page, 100, False)
with page = 1. When running this command twice I get different outputs. If I run it with fewer elements (e.g. 10) it gives me the same outputs when run multiple times. I thought for a paginate command to work it has to return the same set each time it is called?
cheers
carl | flask sqlalchemy paginate() function does not get the same elements when run twice | 0 | 1 | 0 | 193 |
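A one-line sketch of the ordering fix described in the answer above; the model and column names are hypothetical:
items = models.Table.query.order_by(models.Table.id).paginate(page, 100, False)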
36,324,433 | 2016-03-31T04:52:00.000 | 0 | 0 | 1 | 0 | python,regex | 36,324,537 | 1 | false | 0 | 0 | Check this one
\b(\d{1,2}|100)\b
This will match any one- or two-digit number (0-99) or exactly 100; anchor it with ^ and $ instead of \b if you need a full-string match. | 1 | 0 | 0 | So I am trying to write a regular expression that will accept any digits between 0 and 100. What I tried is \d+{0-100}, but that didn't work. I am not sure if the syntax is wrong or there is another problem, since I am fairly new to re. Thanks for your help | Regular expression for digits in a certain range | 0 | 0 | 0 | 108
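A small sketch exercising the pattern from the answer above, anchored for full-string matching:
import re

pattern = re.compile(r'^(\d{1,2}|100)$')
for s in ['0', '42', '100', '101']:
    print(s, bool(pattern.match(s)))   # 101 -> False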
36,328,123 | 2016-03-31T08:32:00.000 | 2 | 0 | 1 | 0 | python-3.x,import | 42,805,878 | 1 | false | 0 | 0 | You seem to have a typo in your "handshake" __init__.py file: you've named it __Init__.py, with a capital 'I'. | 1 | 1 | 0 | I know this question has already been asked several times on the site, but I have not seen any satisfactory answer. I have a project partitioned like this
TLS project (folder)
    sniffer_tls.py
    tls (folder)
        __init__.py
        tls_1_2.py
        handshake (folder)
            __Init__.py
            client_hello.py
When I import tls_1_2.py into sniffer_tls.py in the main file, there is no problem. However, when I import client_hello in tls_1_2.py, Python throws this error:
File "/home/kevin/Documents/Python/Projet TLS/tls/tls_1_2.py", line 8, in
import handshake.client_hello
ImportError: No module named 'handshake'
I tried to import this way
import handshake.client_hello
and then I tried another way that I read on the forum
import client_hello from handshake.client_hello
I deleted the __init__.py file to test; it does not work either. I really need help solving this problem | ImportError: No module named xxxxx | 0.379949 | 0 | 1 | 258
36,330,033 | 2016-03-31T09:56:00.000 | 0 | 0 | 0 | 0 | javascript,python,json,nlp,nltk | 36,531,992 | 2 | false | 0 | 0 | As I commented, I think you should add some code, since not everyone has read the book.
Anyway my conclusion is that yes, as you said, it has a lot of limitations, and the only way to achieve more complex queries is to write very extensive and complete grammar productions, which is pretty hard work. | 1 | 1 | 0 | I need to develop a natural language querying tool for a structured database. I tried two approaches:
using Python nltk (Natural Language Toolkit for python)
using Javascript and JSON (for data source)
In the first case I did some NLP steps to format the natural query: removing stop words, stemming, and finally mapping keywords using feature grammar mapping. This methodology works for simple scenarios.
Then I moved to the second approach: finding the data in JSON, getting the corresponding column name and table name, then building a SQL query. For this one, I also implemented stop-word removal and stemming using JavaScript.
Both of these techniques have limitations. I want to implement a semantic search approach.
Can anyone please suggest a better approach to do this? | Natural Language Processing Database Querying | 0 | 1 | 0 | 1,077
36,330,139 | 2016-03-31T10:00:00.000 | 1 | 0 | 0 | 0 | appium,python-appium | 51,953,768 | 2 | false | 1 | 0 | The tap() method belongs to the AppiumDriver class, while the click() method belongs to the WebDriver class.
I think it is better to use the driver.tap() method, as it is more closely bound to the mobile scenario, and it can be used on both emulators and real devices; the result is the same. | 2 | 1 | 0 | Suppose I have to tap on a menu navigation button. I have tried click() and tap() both are working fine but which one is preferred(for both emulator and physical device)
36,330,139 | 2016-03-31T10:00:00.000 | 0 | 0 | 0 | 0 | appium,python-appium | 36,330,191 | 2 | false | 1 | 0 | I suppose that click is for computers and tap for touch screen/mobile devices (phone, tablette) | 2 | 1 | 0 | Suppose I have to tap on a menu navigation button. I have tried click() and tap() both are working fine but which one is preferred(for both emulator and physical device) | What is the difference between click() and tap() method in appium while writing test cases | 0 | 0 | 0 | 2,478 |
36,330,860 | 2016-03-31T10:31:00.000 | 1 | 0 | 1 | 0 | python | 36,330,915 | 6 | false | 0 | 0 | You could use exception handling and actually catch NameError and SyntaxError: test it inside a try/except block and inform the user if the input is invalid. | 1 | 23 | 0 | tldr; see the final line; the rest is just preamble.
I am developing a test harness, which parses user scripts and generates a Python script which it then runs. The idea is for non-techie folks to be able to write high-level test scripts.
I have introduced the idea of variables, so a user can use the LET keyword in his script. E.g. LET X = 42, which I simply expand to X = 42. They can then use X later in their scripts - RELEASE CONNECTION X
But what if someone writes LET 2 = 3? That's going to generate invalid Python.
If I have that X in a variable variableName, then how can I check whether variableName is a valid Python variable? | Pythonically check if a variable name is valid | 0.033321 | 0 | 0 | 16,798 |
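A hedged alternative to the try/except idea in the answer above, assuming Python 3, where strings have an isidentifier() method:
import keyword

def is_valid_name(name):
    # true only for strings that could legally be Python variable names
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_name('X'))   # True
print(is_valid_name('2'))   # False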
36,335,351 | 2016-03-31T13:48:00.000 | 2 | 0 | 1 | 0 | python,list,tuples,find-occurrences | 36,335,470 | 3 | false | 0 | 0 | Here's something to get you started. For #1, you can get the lengths of each part of the tuples with a list comprehension:
lens = [len(y) for x, y in mylist]
This will generate an array of the lengths of each sublist in the tuples, so lens[0] will equal 2, etc. | 1 | 0 | 0 | I have a list of tuples, with each tuple having two elements: an integer and an inner list. The list of tuples has len=900 and each inner list can have len=2, len=3, or len=4 depending on the case being.
This is an excerpt of the list:
mylist=[(0, [1.0, 1.0]), (1, [1.0, 1.0, 1.0]), ..., (31, [1.0, 1.0, 1.0, 1.0]), ...].
This is a specific case in which each element of the inner lists is 1.0, but I want to deal with the generic case featuring different values.
My questions:
1) How can I count the total number of elements within the inner lists? In the visualized example this number would be 2+3+4=9.
2) How can I count the occurrences of each value (or, better, of each bin) within the inner lists? | Python: how can I count the occurrences in a list of tuples? | 0.132549 | 0 | 0 | 1,307 |
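Building on the list comprehension in the answer above, a sketch answering both questions with sum() and collections.Counter (binning the values is left as a follow-up):
from collections import Counter

mylist = [(0, [1.0, 1.0]), (1, [1.0, 1.0, 1.0]), (31, [1.0, 1.0, 1.0, 1.0])]

total = sum(len(inner) for _, inner in mylist)             # question 1 -> 9
counts = Counter(v for _, inner in mylist for v in inner)  # question 2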
36,337,376 | 2016-03-31T15:14:00.000 | 1 | 0 | 1 | 0 | python,pycharm,pyinstaller | 36,337,529 | 2 | false | 0 | 0 | Run pyinstaller from your project directory, but call it by the full path to the .exe, like C:\PathTo\Pyinstaller.exe.
So your command would look something like:
C:\Users\user\PycharmProjects\myproject> C:\PathTo\pyinstaller.exe --onefile --windowed myprogram.py | 1 | 1 | 0 | I have written a project in PyCharm consisting of a .py file, a .txt file, a .ico file and the regular .idea folder for PyCharm projects. All is saved in C:\Users\user\PycharmProjects\myproject.
I would like to create a single-file .exe using PyInstaller. But when I run the command pyinstaller.exe --onefile --windowed myprogram.py, I get the following error:
'pyinstaller.exe' is not recognized as an internal or external command,
operable program or batch file.
To my understanding, this is because "pyinstaller.exe" is not in the location in which I ran the command prompt.
However, if I run cmd in the pyinstaller folder (C:\Users\user\AppData\Local\Programs\Python\Python35-32\Scripts), my project isn't there. So that doesn't work either.
What do I need to do to get my program into one .exe file? | How do I create an executable file out of a PyCharm project using PyInstaller? | 0.099668 | 0 | 0 | 15,031 |
36,339,195 | 2016-03-31T16:44:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,panda3d | 43,324,179 | 2 | true | 0 | 1 | At least in recent development builds of Panda, you should be able to call base.destroy() to shut down ShowBase and unload (most) resources. You would still need to override finalizeExit() to not quit the application when the main window is closed. | 1 | 1 | 0 | wrote a small application which is composed of a QT GUI module which in turn initiate an object that inherits from ShowBase class.
Problem is, if I close the Panda App, the ShowBase class calls finalizeExit() which in turn shuts the whole process by calling exit.
If I avoid calling the finalize method by overriding userExit(), the resources for the App are not being deleted and the task manager keeps on working.
Is there a way to close the Panda App without calling exit? | Closing a Panda3d app without shutting down the whole Process | 1.2 | 0 | 0 | 928 |
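A minimal sketch of the override described in the answer above, assuming a Panda3D build where ShowBase.destroy() is available:
from direct.showbase.ShowBase import ShowBase

class EmbeddedApp(ShowBase):
    def finalizeExit(self):
        # the default implementation calls sys.exit(); just tear down instead
        self.destroy()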
36,340,454 | 2016-03-31T17:58:00.000 | 2 | 0 | 1 | 0 | python,list,indexing,add | 36,340,493 | 1 | true | 0 | 0 | Just sum(your_list[0:3]). Note that Python slices exclude the end index, so your_list[0:2] covers only the first two elements (summing to 3); to include index 2, as in your example, use [0:3] (summing to 6). This is easy and very fast. BTW, avoid calling your list list because it will shadow the built-in list type. | 1 | 0 | 0 | I'd like to know if there's a built-in function in Python to add the elements of a list relative to a specific index. Let me explain. Say I have a list list = [1,2,3,4,5]. I want to be able to add the numbers of, let's say, list[0:2]. In this case the answer would be 6.
I know for a fact that I could do it with a simple loop but I would prefer a one liner, if possible.
Anyway, thanks in advance! | Add numbers in a list relative to a given index | 1.2 | 0 | 0 | 82 |
36,341,867 | 2016-03-31T19:18:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,scroll,console-application,windows-console | 36,342,079 | 2 | false | 0 | 0 | You can not do it from you python script (OK, it is possible, but most probably you don't want to do it). Scrolling depends on environment (Windows or Linux terminal, doesn't matter). So it is up to users to set it up in a way that is good for them.
On Linux you can use less or more:
python script.py | less
It buffers the output from the script and lets the user scroll up and down without losing any information. | 1 | 1 | 0 | I have a Python (2.7) console application that I have written on Windows (8.1). I have used argparse.ArgumentParser() for handling the parameters when executing the program.
The application has quite a few parameters, so when the --help parameter is used the documentation greatly exceeds the size of the console window. Even with the console window maximized. Which is fine, but the issue I'm encountering is that the user is unable to scroll up to view the rest of the help documentation. I have configured my windows console properties appropriately, such as the "Window Size" and "Screen Buffer Size". And I have verified that those changes are working, but they only work outside of the Python environment. As soon as I execute a Python script or run a --help command for a script, the console properties no longer apply. The scroll bar will disappear from the window and I can no longer scroll to the top to see the previous content.
So basically, I need to figure out how to enable scrolling for my Python console programs. I need scrolling enabled both when executing a script and when viewing the --help documentation. I'm not sure how to go about doing that. I have been searching online for any info on the subject and I have yet to find anything even remotely helpful.
At this point, I am completely stuck. So if someone knows how to get scrolling to work, I would greatly appreciate your help. | How to Enable Scrolling for Python Console Application | 0 | 0 | 0 | 4,402 |
36,341,884 | 2016-03-31T19:19:00.000 | 2 | 0 | 1 | 0 | python,pip,conda,folium | 36,347,166 | 1 | false | 0 | 0 | No, as far as I'm aware, that is not normal. It looks like the folium package is in the site-packages for anaconda. Perhaps your pip is not also the anaconda pip? What does which pip say? Perhaps try to uninstall using the anaconda pip directly via /Users/my_name/anaconda/bin/pip uninstall folium.
If that doesn't work, you can remove the "package_name.dist-info" folder you found and it should stop showing up in your pip list and conda list. You probably also want to check if the actual folium package is in the site-packages and remove that too. | 1 | 5 | 0 | Is this normal? How do I make sure that the package (named folium) is completely removed from my computer?
I see a folder called "package_name.dist-info" under Users/my_name/anaconda/lib/python2.7/site-packages.
I also have conda installed on my computer so when I do $conda list, the package shows in that output list also.
Thank you for any help. | Python Package removed by typing "pip uninstall package_name" in terminal, still shows up in the list output of "pip list" | 0.379949 | 0 | 0 | 1,671 |
36,343,431 | 2016-03-31T20:46:00.000 | 1 | 0 | 1 | 0 | python,break | 36,359,087 | 3 | false | 0 | 0 | Thanks for your reply.
I already tried using sys.exit, raise and others in the third function but nothing worked. What I did was return a pass/fail status. In the second function I test the return value and, on failure, the script executes sys.exit(). When I do this in the second function the script stops like we want. Now it's working fine. Probably this is the worst way to do this, but it worked.
Best regards. | 1 | 1 | 0 | I'm trying to develop a script that will connect to our switch and do some tasks.
In this script I have a main function that calls a second function. In the second function I pass a list of switches that Python will start to connect one by one.
The second function will call a third function. In the third function the script makes some tests. If one of these tests fail I want to close the entire script.
The problem is that I tried to put return, exit, raise System, os.exit but what happens is that the script doesn't stop, it just jumps to another switch and goes on.
Anyone knows how can I close my entire script from a function?
Best regards. | Python exit from all function and stop the execution | 0.066568 | 0 | 0 | 5,270 |
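A hedged alternative to the return-code workaround above: raise a custom exception in the third function and let it propagate (just make sure no intermediate bare except: swallows it); the function names and failing condition are hypothetical:
class AbortRun(Exception):
    """Raised by the deepest function to unwind the whole script."""

def third(switch):
    if switch == 'bad':          # stand-in for a failing test
        raise AbortRun(switch)

def second(switches):
    for s in switches:
        third(s)                 # AbortRun propagates up untouched

try:
    second(['sw1', 'bad', 'sw3'])
except AbortRun as exc:
    print('stopping, test failed on %s' % exc)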
36,351,403 | 2016-04-01T08:19:00.000 | 0 | 0 | 1 | 0 | python,pandas | 36,353,073 | 1 | false | 0 | 0 | You have many options, but I think they can be summarized as follows. I couldn't tell which one would make the most sense for you without more context.
Convert numeric strings to numbers (see the code sketch below)
If you are afraid of issues with floats, convert only integers.
If you want to keep your data as is, store the converted values in a different column / object and use it just for filtering.
If you want to keep the data types in the filtered data, filter the converted data and use the filtered index to subset the original data.
Convert numbers to strings (same considerations as above)
Filter by both the numbers in the lookup list and their string representation. | 1 | 0 | 1 | I am writing a function which takes a pandas df, column name and a list of values and gives the filtered df. This function uses df.query() internally.
In one specific case, I have a dataframe which has a column in which both integers and strings are present. My function should filter this df on a list whose elements are all integers. At the moment, I get an empty df as strings can't be compared to int. Even though in the dataframe and lookup list are same - for eg. '345' & 345.
What is a general way to handle this in pandas? I could coerce the list of integers to strings but I would like to stay away from that. This is because I want my function to be able to handle non-integral values as well. I am not sure if coercing to strings would be safe then: for eg. for floats. | Querying a dataframe if column and values are of different types | 0 | 0 | 0 | 54 |
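A sketch of the first option from the answer above; the frame, column and lookup list are hypothetical:
import pandas as pd

df = pd.DataFrame({'code': ['345', 345, 'x', 7]})
numeric = pd.to_numeric(df['code'], errors='coerce')  # strings -> numbers, junk -> NaN
filtered = df[numeric.isin([345, 7])]                 # filter on the converted values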
36,352,167 | 2016-04-01T09:04:00.000 | 0 | 0 | 0 | 0 | python,interactive-brokers,ibpy | 37,589,275 | 1 | false | 0 | 0 | a) As far as I know, IB API has no concept of 'Portfolio'. You probably need to keep a list of orders put in for which portfolio and then resolve the order data supplied by IB against your portfolio vs order data.
b) IB does keep track of the client (i.e. your client that is calling the API code - normally defaulted to 0) that has put in the orders.
If you want to know what orders are open that were input via your client then: client.reqOpenOrders();
If you want to know all open orders i.e. orders put in via your client plus other clients or TWS, then: client.reqAllOpenOrders(); | 1 | 0 | 0 | I've been experimenting with IBPy for a while; however, the two following things have been eluding me:
a) How does one get the name of the actual portfolios that positions belong to? I know how to find positions, their costs, values etc. (using message.UpdatePortfolio), but our trading simulation will likely have many portfolios and it helps to know which portfolio each position belongs to. Is it even possible to send information to IB in multiple portfolios?
b) How does one find out the existing orders using IBPy? So when I run the code, I want it to display all positions, along with their order types and limits (e.g. if it's a limit order for AAPL, I want to find the limit price etc.)
Many thanks! | Getting portfolio names and existing orders using Interactive Brokers IBPy | 0 | 0 | 0 | 549 |
36,357,301 | 2016-04-01T13:12:00.000 | 0 | 0 | 1 | 1 | python,command,pip | 36,357,798 | 1 | false | 0 | 0 | Try opening a new command prompt and run pip from that. Sometimes changes to environment variables don't propagate to already-open prompts. | 1 | 0 | 0 | I installed pip-8.1.1. The windows prompted that it was successfully installed. After I added 'C:\Python27\Scripts;' to Path in System Variables, the system still couldn't recognize 'pip' as a command.
Here is the error message: 'pip' is not recognized as an internal or external command, operable program or batch file.
How do I correctly add pip to Path? | 'pip' is not recognized as a command | 0 | 0 | 0 | 629 |
36,360,876 | 2016-04-01T16:03:00.000 | 2 | 0 | 1 | 1 | python,process,blender | 37,028,985 | 2 | false | 0 | 0 | My solution was to launch Blender via console with a python script (blender --python script.py) that contains a while loop and creates a server socket to receive requests to process some specific code. The loop will prevent blender from opening the GUI, and the socket will handle the multiple requests inside the same blender process. | 1 | 6 | 0 | Normally, I would use "blender -P script.py" to run a python script. In this case, a new blender process is started to execute the script. What I am trying to do now is to run a script using a blender process that is already running, instead of starting a new one.
I have not seen any source on this issue so far, which makes me concerned about the actual feasibility of this approach.
Any help would be appreciated. | How do I run a python script using an already running blender? | 0.197375 | 0 | 0 | 1,547 |
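A heavily hedged sketch of the server-loop approach from the answer above, to be launched with blender --python server.py; the port, protocol and use of exec are all assumptions:
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 5005))
srv.listen(1)

while True:                          # blocking loop keeps this Blender process serving
    conn, _ = srv.accept()
    code = conn.recv(65536).decode('utf-8')
    try:
        exec(code)                   # run the received script inside this Blender
    except Exception as e:
        conn.sendall(str(e).encode('utf-8'))
    conn.close()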
36,362,122 | 2016-04-01T17:13:00.000 | 0 | 1 | 0 | 0 | python,testing,pytest,codeship | 38,299,299 | 1 | false | 1 | 0 | Not sure if I understood your question correctly, but if your concern is py.test gobbling the output, then run pytest with the -s option.
Is there any way to get py.test to print out a heartbeat or something every x minutes to keep codeship happy? Obviously any of my output and logging gets gobbled up by py.test itself, so that isn't helpful.
Thanks! | py.test timeout/keepalive/heartbeat? | 0 | 0 | 0 | 158 |
36,362,190 | 2016-04-01T17:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 36,371,050 | 4 | false | 0 | 0 | Seems like an invalid argument to embedding_rnn_decoder.
Maybe try to change enc_state:
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state[-1], e_cell, vocab_size, output_projection=(W, b), feed_previous=False) | 3 | 1 | 1 | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)#
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | Slice error when using MultiRNNCell | 0 | 0 | 0 | 1,345 |
36,362,190 | 2016-04-01T17:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 36,794,050 | 4 | false | 0 | 0 | I've got a problem similar to yours.{tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 40}
I also use rnn_cell.GRUCell(self.rnn_size)
I want to share my experience ,maybe it's helpful.
Here is how I fixed it.
I want to use gru cell and basic rnn cell ,so I adapt the programme from others which is coded for lstm cell.
The different between lstm and GRU/BasicRnn is the state_size.
Here is lstm celldef state_size(self):return 2 * self._num_units
Here is GRU/BasicRnn cell def state_size(self):return self._num_units
Therefore the shape of matrix is different,and the tensor is not suit for the op.I advice you to check your code contains tf.slice | 3 | 1 | 1 | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)#
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | Slice error when using MultiRNNCell | 0 | 0 | 0 | 1,345 |
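A quick way to see the state_size mismatch described in the answer above — a sketch against the old pre-1.0 rnn_cell API that the question imports, with 128 units as an assumed size:
from tensorflow.models.rnn import rnn_cell
gru = rnn_cell.GRUCell(128)
lstm = rnn_cell.BasicLSTMCell(128)
print(gru.state_size)     # 128 -- a single state vector
print(lstm.state_size)    # 256 -- cell state plus hidden state, i.e. 2 * num_units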
36,362,190 | 2016-04-01T17:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 38,411,187 | 4 | false | 0 | 0 | This problem is occurring because you have doubled your GRU cell but your initial vector is not doubled.
If your initial_vector size is [batch_size, 50],
then initial_vector = tf.concat(1, [initial_vector, initial_vector]).
Now pass this to the decoder as the initial vector. | 3 | 1 | 1 | I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how I declare my MultiRNNCell
Code:
e_cell = rnn_cell.GRUCell(self.rnn_size)
e_cell = rnn_cell.MultiRNNCell([e_cell] * 2)
Later on I use it from inside seq2seq.embedding_rnn_decoder as follows
outputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state, e_cell, vocab_size, output_projection=(W, b), feed_previous=False)
On doing this I get the following error
Error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 1024
[[Node: en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sigmoid_2, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/begin, en/embedding_rnn_decoder_1/rnn_decoder/MultiRNNCell/Cell1/Slice/size)]]
[[Node: en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient/_1230 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_11541_en/embedding_rnn_decoder/rnn_decoder/loop_function_17/StopGradient", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Has anyone seen a similar error? Any pointers? | Slice error when using MultiRNNCell | 0 | 0 | 0 | 1,345 |
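To make the doubling suggestion in the last answer concrete — a sketch using the old tf.concat(dim, values) signature; the encoder state shape is an assumption, not from the post:
import tensorflow as tf
# enc_state from a single-layer encoder has shape [batch_size, rnn_size], but a
# two-layer MultiRNNCell (non-tuple state) expects [batch_size, 2 * rnn_size].
enc_state = tf.placeholder(tf.float32, [None, 1024])    # assumed single-layer state
enc_state = tf.concat(1, [enc_state, enc_state])        # duplicate along the feature axis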
36,362,294 | 2016-04-01T17:22:00.000 | 3 | 0 | 0 | 1 | python,testing,robotframework | 36,364,642 | 2 | true | 0 | 0 | There's no need to do tail -f. At the start of your test you can get the number of bytes in the file. Let the test run, and then read the file starting at the byte offset that you calculated earlier (or read the whole file, and use a slice to look at the new data) | 1 | 2 | 0 | I want to open a file and tail -f the output. I'd like to be able to open the file at the beginning of my test in a subprocess, execute the test, then process the output starting from the beginning of the tail.
I've tried using Run Process, but that just spins, as the process never terminates. I tried using Start Process followed by Get Process Result, but I get an error saying Getting results of unfinished processes is not supported.
Is this possible? | tail a log file in Robot Framework | 1.2 | 0 | 0 | 1,238 |
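A minimal Python sketch of that byte-offset idea (the log path is my assumption, not from the answer):
import os
LOG_PATH = "output.log"               # the file the background process appends to
offset = os.path.getsize(LOG_PATH)    # bytes already present before the test starts
# ... the test runs here and the watched process appends to the log ...
with open(LOG_PATH) as f:
    f.seek(offset)                    # skip everything that already existed
    new_output = f.read()             # only the data written during the test
print(new_output)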
36,363,502 | 2016-04-01T18:34:00.000 | 1 | 0 | 0 | 0 | python,apache-spark | 36,363,565 | 1 | false | 0 | 0 | You distribute the data you read among nodes.
Every node finds its 5 local maxima.
You combine all the local maxima and keep the 5 largest of them,
which is the answer. | 1 | 0 | 1 | Total noob question: I have a file that contains a number on each line; there are approximately 5 million rows, each with a different number. How do I find the top 5 values in the file using Spark and Python? | spark python product top 5 numbers from a file | 0.197375 | 0 | 0 | 54
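In PySpark the local-top-then-merge pattern from the answer above is exactly what RDD.top implements — a minimal sketch, with the file name as an assumption:
from pyspark import SparkContext
sc = SparkContext(appName="top5")
nums = sc.textFile("numbers.txt").map(int)   # one number per line
print(nums.top(5))                           # the 5 largest values, in descending order
sc.stop()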
36,364,651 | 2016-04-01T19:44:00.000 | 2 | 0 | 1 | 0 | python,dictionary,hash | 36,364,839 | 2 | false | 0 | 0 | I don't think Python's dictionary objects have any public API that allows you to see the hashes their objects are stored with. You can't store an object directly by hash in Python code (it may be possible by calling internal C functions in CPython). There are a few good reasons that you can't add values to a dictionary by hash value, rather than by key.
The most obvious is that multiple key objects might have the same hash. If such a hash collision happens, the second value will be inserted somewhere else in the hash table. The important thing is that it won't overwrite the previous value stored under a different key that hashes the same. If you could just pass the hash and not the key too, Python wouldn't be able to tell if you were using the same key or if you were providing a new key that happened to have a colliding hash.
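A concrete illustration of such a collision — in CPython, -1 and -2 hash to the same value (hash(-1) is reserved as an internal error code), yet both keys coexist in one dict:
print(hash(-1), hash(-2))   # both print -2 in CPython
d = {-1: 'a', -2: 'b'}      # two entries despite the identical hashes
print(d[-1], d[-2])         # 'a b' -- the keys themselves, not just the hashes, disambiguate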
A secondary reason that you can't insert by hash is that it would be a security vulnerability. The performance of a hash table like Python's dictionaries is very good when there are few hash collisions. It is very bad however if every hash is the same. If you could submit data to a Python program that all hashes to the same value, you can perform a very efficient denial of service attack (the new hash randomization for strings was added in recent versions of Python to make this kind of attack harder). | 1 | 2 | 0 | Is there a way to extract existing key hashes from a dictionary, without recalculating them again?
What would be the risks of exposing them and, consequently, accessing the dict by hashes rather than keys? | Get dictionary keys hashes without recalculation | 0.197375 | 0 | 0 | 472
36,365,990 | 2016-04-01T21:18:00.000 | 0 | 0 | 1 | 0 | python,list,numpy | 53,074,632 | 5 | false | 0 | 0 | import numpy as np
arr = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
Then
arr = np.array([x for x in arr if x != 'e']) | 1 | 32 | 1 | Is it possible to delete an object from a numpy array without knowing the index of the object but instead knowing the object itself?
I have seen that it is possible using the index of the object using the np.delete function, but I'm looking for a way to do it having the object but not its index.
Example:
[a,b,c,d,e,f]
x = e
I would like to delete x. | How to delete an object from a numpy array without knowing the index | 0 | 0 | 0 | 51,262 |
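Two more idiomatic numpy alternatives to the comprehension above (my addition, not part of the original answer):
import numpy as np
arr = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
arr = arr[arr != 'e']                            # boolean mask: keep everything except 'e'
# or, sticking with np.delete by computing the indices at runtime:
# arr = np.delete(arr, np.where(arr == 'e')[0])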
36,373,009 | 2016-04-02T12:05:00.000 | 0 | 0 | 0 | 0 | python,django,macos,path | 36,373,089 | 2 | false | 1 | 0 | You shouldn't do anything with paths at all, other than setting up a virtualenv. If you have pip, you can install virtualenv with sudo pip install virtualenv (and note that this is the last thing you should install with sudo pip; everything else after that should be inside an activated virtualenv). | 1 | 0 | 0 | I have a background in programming with python but I really would like to start playing around with django but I have had difficulty setting up a project.
I know that I have django installed but the command django-admin is not recognized. I believe this has something to do with the way my path is set up. I still feel clumsy about setting up correct paths, so a clear explanation of this would be greatly appreciated.
I also understand that it is advisable to set everything up within a virtual environment. I believe that I have pip installed, which should let me install virtualenv; unfortunately, the virtualenv command is also not recognized. I feel like I'm missing something very basic. Any help on setting up these basics would be greatly appreciated. | How do I properly set up a django project in OSx? | 0 | 0 | 0 | 42
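To spell out the workflow the answer above describes — a sketch of the usual commands, with the environment and project names as placeholders:
sudo pip install virtualenv          # the one and only sudo pip install
virtualenv myenv                     # create an isolated environment
source myenv/bin/activate            # activate it; the prompt gains a (myenv) prefix
pip install django                   # installs into myenv, no sudo needed
django-admin startproject mysite     # django-admin is now on the PATH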
36,374,416 | 2016-04-02T14:18:00.000 | 1 | 0 | 1 | 0 | python,pip | 36,374,476 | 2 | false | 0 | 0 | e.g,:
python3.5 -m pip install mypackagename | 1 | 0 | 0 | I am using ubundu and have python 2X and 3X installed, default version is python 2X. I want to install a package for python3X. When i tried doing it with pip the module got installed to python 2X | Installing python module when there are two versions of python installed | 0.099668 | 0 | 0 | 56 |
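A small aside (assuming your distribution also installed a pip3 wrapper alongside Python 3): pip3 install mypackagename does the same thing, while the python3.5 -m pip form is more explicit about exactly which interpreter receives the package.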