Columns (name: dtype, observed min to max or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M

The records below follow this column order, one field value per line.
40,309,098
2016-10-28T16:12:00.000
-6
0
1
0
python,jupyter-notebook
45,188,049
7
false
0
0
You can download the cell content as a .py file from Jupyter and then copy and paste it wherever you want
5
100
0
I am trying to copy cells from one Jupyter notebook to another. How is this possible?
Is it possible to copy a cell from one jupyter notebook to another?
-1
0
0
66,536
40,309,430
2016-10-28T16:34:00.000
0
0
1
0
python,multithreading,python-multithreading
40,309,470
2
false
0
0
Use the pickle module. It allows saving of python types.
1
1
0
In a program I am creating, I have to write a threading.Thread object to a file, so I can use it later. How would I go about doing this?
How would I write an object to a file for later use?
0
0
0
95
40,309,777
2016-10-28T16:55:00.000
2
0
0
0
python,regex,pandas
40,310,470
2
false
0
0
The problem is that the match function does not return True when it matches; it returns a match object. Pandas cannot add this match object because it is not a numeric value. The reason you get a sum when you use 'not' is that it returns a boolean, and pandas can sum boolean values and return a number.
1
3
1
I can find the number of rows in a column in a pandas dataframe that do NOT follow a pattern but not the number of rows that follow the very same pattern! This works: df.report_date.apply(lambda x: (not re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum() This does not: removing 'not' does not tell me how many rows match but raises a TypeError. Any idea why that would be the case? df.report_date.apply(lambda x: (re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum()
cannot sum rows that match a regular expression in pandas / python
0.197375
0
0
883
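A hedged illustration of the fix implied by the answer above (column name and pattern taken from the question, data invented): convert the match object to a bool before summing.

```python
import re
import pandas as pd

df = pd.DataFrame({'report_date': ['2016-10-28', 'not a date', '2016-1-3']})
pattern = r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}'

# bool(match object) is True, bool(None) is False, and pandas can sum booleans
n_matching = df.report_date.apply(lambda x: bool(re.match(pattern, x))).sum()
print(n_matching)  # 2
```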
40,312,526
2016-10-28T20:08:00.000
0
0
0
0
python,rest,http,error-handling
40,322,779
1
false
0
0
For the client calling serviceA, serviceB doesn't exist; serviceB is an internal mechanism of serviceA. So in my opinion, for either point 1 or point 3 it should just be 500 Internal Server Error. For point 2, I think serviceA should catch the serviceB exception for missing data and return 204 No Content. One additional point: if your client has some logic for when serviceB is down and must know about it, you can return 503 or 504 for point 1.
1
0
0
I am developing a new rest service , lets call serviceA which will internally invoke another rest service ,lets call it serviceB and do some data manipulation and return the response. I am trying to determine what http error status codes returned in below scenarios when client invokes serviceA serviceB is down serviceB returns the exception to serviceA because data does not exist as per the request. serviceA gets the correct response from serviceB , but fails to complete the internal processing and errors out. Thanks, any comments are appreciated.
Error Scenarios in rest service
0
0
1
28
40,314,098
2016-10-28T22:35:00.000
0
1
0
0
python,pytest
54,313,898
3
false
0
0
If you are using -k switch, you don't need to specify the full path separated by colons. If the path is unique, you can use just part of the path. You need full path of tests only when you are not using -k switch. e.g. pytest -k "unique_part_of_path_name" for pytest tests/a/b/c.py::test_x, you can use pytest -k "a and b and c and x". You can use boolean logic for -k switch. BTW, pytest --collect-only does give the file name of the test in <Module line just above the test names of the file.
1
13
0
When I run pytest --collect-only to get the list of my tests, I get them in a format like <Function: test_whatever>. However, when I use pytest -k ... to run a specific test, I need to input the "address" of the test in the format foo::test_whatever. Is it possible to get a list of all the addresses of all the tests in the same format that -k takes?
Pytest: Getting addresses of all tests
0
0
0
1,577
40,314,215
2016-10-28T22:48:00.000
0
0
0
0
python,windows,selenium,selenium-webdriver
40,421,062
1
false
0
0
It looks like I’ve found the explanation: it’s the screen resolution. My colleague and I were connecting to the PC using RDP, and his connection had a smaller screen resolution than mine. When I started a connection with the resolution explicitly set to his values, the tests produced errors, and when I then started a new connection with my original settings, the tests ran fine again. This was reproducible. I think the background is that the page has many elements that move dynamically with the page size, and some elements that are fixed or have a minimum size. So it’s possible that an element is covered by another one if the screen, and with it the browser window, is too small. A fixed element could also be partly out of the viewport.
1
0
0
I wrote some testing scripts with selenium, and they were working fine as long as I started them from my account, on a Windows 7 machine. But when a colleague started it from his account, on the same machine, some of the tests had a NoSuchElementException. What can cause that difference, maybe something the graphic-settings like the display resolution? The scripts are written in Python, they are using Selenium-Webdriver with Firefox. The PC has Windows 7 Enterprise, 64 Bit, Service Pack 1, with Python 2.7.12 installed.
What difference can the windows-user-settings make, when running a selenium-script?
0
0
1
44
40,314,898
2016-10-29T00:38:00.000
15
0
1
0
python,multithreading,list,get,append
40,314,927
3
true
0
0
Here's a way, although it looks odd: (list or [None])[-1]
1
3
0
My Python script opens 2 threading.Threads() with the following functions : Stuff() : function appending stuff to a list (global var) if stuff happens in a big loop. Monitor() : function displaying the last item added to the list every second with additional info. The purpose of these 2 threads is that Stuff() contains a loop optimized to be very fast (~ 200 ms / iteration) so printing from inside would be pointless. Monitor()takes care of the output instead. At the beginning, I set list = [], then start the threads. Inside Monitor() I get the last item of the list with list[-1] but if no stuff happend before, the list is still empty and the Monitor() raises an IndexError: list index out of range. Is there a simple way (no try or if not list) to display None instead of an error if the list is empty ?
PYTHON get the last item of a list even if empty
1.2
0
0
2,970
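A short illustration of the accepted trick above (variable name invented): an empty list is falsy, so `or` substitutes [None] and the [-1] lookup never raises IndexError.

```python
items = []                       # the shared list, before Stuff() has appended anything
print((items or [None])[-1])     # None instead of IndexError

items.append('stuff happened')
print((items or [None])[-1])     # 'stuff happened'
```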
40,315,229
2016-10-29T01:39:00.000
1
0
1
0
python,object,process,multiprocessing
40,315,279
1
false
0
0
Are you passing the trm object as the target of the multiprocessing.Process? As far as I know, the target should be a callable. In this case, you should be passing trm.write as the target of the process, not trm. If you need to pass arguments to the target, you can pass them through the args parameter, which takes a tuple of arguments that will be passed to your target.
1
0
0
I have a program with an object trm of terminal, and terminal has the function write() which writes a string to a GUI. I am passing this object into a an object of multiprocessing.Process, however when I call the function write() within this process, it doesn't work. Nothing happens. What is going on?
Pass object into process Python
0.197375
0
0
183
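A minimal sketch of what the answer suggests (the Terminal class is a hypothetical stand-in for the asker's trm object): pass the bound method as the target and its arguments via args.

```python
import multiprocessing

class Terminal:
    def write(self, text):
        print('GUI write:', text)

if __name__ == '__main__':
    trm = Terminal()
    # the target is the callable trm.write, not the trm object itself
    p = multiprocessing.Process(target=trm.write, args=('hello from the child',))
    p.start()
    p.join()
```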
40,316,599
2016-10-29T06:15:00.000
0
0
1
0
python,python-2.7
40,316,623
3
false
0
0
You should use two variables: one to track the n that you refer to (which increases by 1 each time), one to track your desired sequence (which increases by n each time).
1
0
0
For example, I first increment from 1 to 2 by n=1, then from 2 to 4 by n=2, then from 4 to 7 by n=3, then from 7 to 11 by n=4, etc. How do I set up my code for an increment such as the one I just prescribed? I've tried i=1; i+=i, but that just increases by an increasing sequence of positive even integers
In general, what is the best way to increment by an increasing sequence of every positive integer?
0
0
0
45
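A sketch of the two-variable idea from the answer: n tracks the increment, total tracks the sequence 1, 2, 4, 7, 11, ...

```python
total = 1
for n in range(1, 6):   # n is the increasing step: 1, 2, 3, 4, 5
    total += n
    print(total)        # 2, 4, 7, 11, 16
```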
40,316,891
2016-10-29T07:07:00.000
0
0
0
0
arrays,postgresql,python-3.x
40,320,130
1
false
0
0
“Better” is a rather unspecific adjective in this case. If you are asking for aesthetic judgement, simplicity of code and maintainability, I don't feel in a position to pronounce a clear judgement. My gut feeling is that both are similar. If you are asking about good performance, I'd advise you to run a simple test. But even without a test I'd say that both solutions are not optimal and you should write it as a single SQL statement. If you are asking about portability, the answer depends on whether it is more important to port to another database (that would favor the application software solution) or to port to a different programming language (in that case, the solution in the database is preferable).
1
0
0
I have a database with time series data split up into even sized chunks stored as arrays in postgres. I need to arbitrarily extract ranges of them and concatenate the returned set into a single array. They have an offset field so given a start offset and length you can find any part of the set you are looking for. Which is better: To write queries that return each individual array and concatenate in software or Use a stored procedure that takes a start point and length and does the concatenation internally before returning the entire array
stored procedure versus queries for array concatanation
0
1
0
25
40,317,176
2016-10-29T07:54:00.000
1
0
1
0
python,sympy
40,317,251
2
false
0
0
You may also want to look at SageMath, in addition to SymPy, since Sage goes to great lengths to come with prebuilt mathematical structures. However, it's been development more with eyes towards algebraic geometry, various areas of algebra, and combinatorics. I'm not sure to what extent it implements any operator algebra.
1
2
0
I am wondering if there is an easy way to implement abstract mathematical Operators in Sympy. With operators I simply mean some objects that have 3 values as an input or 3 indices, something like "Operator(a,b,c)". Please note that I am refering to a mathematical operator (over a hilbert space) and not an operator in the context of programming. Depending on these values I want to teach Sympy how to multiply two Operators of this kind and how to multiply it with a float and so on. At some point of the calculation I want to replace these Operators with some others... So far I couldn't figure out if sympy provides such an abstract calculation. Therefore I started to write a new Python-class for these objects, but this went beyond the scope of my limited knowledge in Python very fast... Is there a more easy way to implement that then creating a new class?
Working with abstract mathematical Operators/Objects in Sympy
0.099668
0
0
132
40,321,096
2016-10-29T16:16:00.000
1
0
1
0
python,string,format,python-requests,bs4
40,321,224
1
false
0
0
Just use repr() Like: print(repr(<variable with string>))
1
0
0
So, I have been playing around with requests and bs4 for a project I'm working on and have managed to return the following in a variable: "---------- Crossways Inn Withy Road West Huntspill Somerset TA93RA 01278783756 www.crosswaysinn.com ----------" This was scraped from a site, using .text function within the bs4 module. Is there any way I can format this within my program to look like the following: "----------\n Crossways Inn\n Withy Road\n West Huntspill\n Somerset\n TA93RA\n 01278783756\n www.crosswaysinn.com\n ----------\n" Sorry for the vague explanation of what I want to do, but do not know how to explain it better. Thanks!
How do I format the contents of the following variable in Python?
0.197375
0
0
45
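A small illustration of the repr() suggestion (sample string shortened from the question): repr() displays the string with its escape sequences, so any embedded newlines become visible as \n.

```python
scraped = "----------\nCrossways Inn\nWithy Road\n----------"
print(scraped)        # prints across several lines
print(repr(scraped))  # '----------\nCrossways Inn\nWithy Road\n----------'
```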
40,321,668
2016-10-29T17:23:00.000
1
0
1
0
c++,python-2.7,struct,namedtuple
40,336,177
2
false
0
0
namedtuple is implemented purely in Python. You can see its full source in collections.py. It's very short. The thing to keep in mind is that namedtuple itself is a function which creates a class in the frame in which it is called and then returns this class (not an instance of this class). And it is this returned class that is then used to create instances. So the object which you get is not what you want to pass into C++ if you want to pass individual instances. C++ creates struct definitions at compile time. namedtuple creates namedtuple classes at run time. If you want to bind them to C++ structs, either use the PyObject to create your newly minted class' instances inside of C++ and assign them to struct elements at compile time. Or create the newly minted class' instances in Python and pass them to C++. Or you can use _asdict method (provided by namedtuple factory method for all classes it builds) and pass that to C++ to then do the binding of run-time defined data to compile-time defined data. If you really want to do the bulk of the work in C++, you may also use the Struct module instead of using namedtuple. namedtuple is really the swiss-army knife of Python for data which stays in Python. It gives positional access, named access, and all the elements are also "properties" (so they have fget accessor method which can be used in maps, filters, etc. instead of having to write your own lambdas). It's there for things like DB binding (when you don't know which columns will be there at run time). It's less clunky than OrderedDict for converting data from one format into another. When it's used that way, the overhead of processing strings is nothing compared to actual access of the db (even embedded). But I wouldn't use namedtuple for large arrays of structs which are meant to be used in calculations.
1
1
0
I have searched the internet for hours at this point. Does anyone know how to parse a namedtuple returned from a python function into a struct or just into separate variables. The part I am having trouble with is getting the data out of the returned pointer. I am calling a python function embedded in C++ using the PyObject_CallFunction() call and I don't know what to do once I have the PyObject* to the returned data. I am using Python 2.7 for reference. EDIT: I ended up moving all of the functionality I was trying to do in both Python and C++ to just Python for now. I will update in the near future about attempting the strategy suggested in the comments of this question.
Python NamedTuple to C++ Struct
0.099668
0
0
1,655
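A quick Python 2.7 sketch of the _asdict() route mentioned in the answer (Point is an invented example type): the resulting dict, or the plain tuple of values, is far easier to unpack on the C++ side than the namedtuple class object itself.

```python
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y', 'z'])
p = Point(1.0, 2.0, 3.0)

print(p._asdict())   # OrderedDict([('x', 1.0), ('y', 2.0), ('z', 3.0)])
print(tuple(p))      # (1.0, 2.0, 3.0)
```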
40,323,573
2016-10-29T20:54:00.000
0
0
1
0
python,python-2.7
40,323,605
2
false
0
0
Square brackets typically mean that the value is optional. Here, varname refers to the environment variable you want to get and value is an optional value that is return if the environment variable doesn't exist.
1
0
0
I have a problem understanding some description of functions in Python. I understand simply functions like os.putenv(varname, value) but I have no idea how to use this: os.getenv(varname[, value]). How to pass arguments to that function, what does those square brackets mean?
How to read Python function documentation
0
0
0
161
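A tiny illustration of the bracket notation explained above: the second argument to os.getenv is optional and is returned when the variable is not set (variable names invented).

```python
import os

print(os.getenv('HOME'))                      # the value, or None if unset
print(os.getenv('NO_SUCH_VAR'))               # None
print(os.getenv('NO_SUCH_VAR', 'fallback'))   # 'fallback'
```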
40,325,067
2016-10-30T00:46:00.000
0
0
1
0
python,tkinter,python-3.5,cx-freeze
40,326,006
3
false
0
1
Try pyinstaller -F -w Finder.py as the command or you could check out CxFreeze.
2
1
0
I have coded a program in Python 3.5 that uses the Tkinter import. I'm trying to figure out a way to run it on computers that don't have Python. First I tried freezing it but I haven't been able to because none of the freezing tools I found support Python 3.5. Then I tried possibly using a online idle but I couldn't find any that support Tkinter. I would prefer to be able to get a .exe file or something similar but if I could run it online that would be good too any ideas? EDIT So I have now successfully downloaded PyInstaller using pip. My current problem is when I type this into the console: pyinstaller.exe --onefile --windowed Finder.py I get this error: 'pyinstaller.exe' is not recognized as an internal or external command, operable program or batch file. EDIT I have now found the pathway to pyinstaller.exe. Now when I try to use it it says Access is denied.
How do I run a Python 3.5 program that uses Tkinter on a computer without Python installed?
0
0
0
1,343
40,325,067
2016-10-30T00:46:00.000
3
0
1
0
python,tkinter,python-3.5,cx-freeze
40,349,880
3
true
0
1
I finally figured it out after about three days of work. First I downloaded PyInstaller in the zipped form and extracted it. Then I put my program in the PyInstaller folder. Then I opened a regular command prompt. I then typed cd then the location of the PyInstaller folder. Finally I typed pyinstaller.py --onefile --windowed program.py. Then when I went into the PyInstaller folder there was a folder called program with the .exe file in the dist folder. Thanks everyone for all of your help!
2
1
0
I have coded a program in Python 3.5 that uses the Tkinter import. I'm trying to figure out a way to run it on computers that don't have Python. First I tried freezing it but I haven't been able to because none of the freezing tools I found support Python 3.5. Then I tried possibly using a online idle but I couldn't find any that support Tkinter. I would prefer to be able to get a .exe file or something similar but if I could run it online that would be good too any ideas? EDIT So I have now successfully downloaded PyInstaller using pip. My current problem is when I type this into the console: pyinstaller.exe --onefile --windowed Finder.py I get this error: 'pyinstaller.exe' is not recognized as an internal or external command, operable program or batch file. EDIT I have now found the pathway to pyinstaller.exe. Now when I try to use it it says Access is denied.
How do I run a Python 3.5 program that uses Tkinter on a computer without Python installed?
1.2
0
0
1,343
40,325,437
2016-10-30T02:04:00.000
1
0
1
1
python,multiprocessing,shared-memory,python-multiprocessing
40,400,385
1
true
0
0
In Unix this might be tractable because fork() is used for multiprocessing, but in Windows the fact that spawn() is the only way it works really limits the options. However, this is meant to be a multi-platform solution (which I'll use mainly in Windows) so I am working within that constraint. I could open the data source in each subprocess, but depending on the data source that can be expensive in terms of bandwidth or prohibitive if it's a stream. That's why I've gone with the read-once approach. Shared memory via mmap and an anonymous memory allocation seemed ideal, but to pass the object to the subprocesses would require pickling it - but you can't pickle mmap objects. So much for that. Shared memory via a cython module might be impossible or it might not but it's almost certainly prohibitive - and begs the question of using a more appropriate language to the task. Shared memory via the shared Array and RawArray functionality was costly in terms of performance. Queues worked the best - but the internal I/O due to what I think is pickling in the background is prodigious. However, the performance hit for a small number of parallel processes wasn't too noticeable (this may be a limiting factor on faster systems though). I will probably re-factor this in another language for a) the experience! and b) to see if I can avoid the I/O demands the Python Queues are causing. Fast memory caching between processes (which I hoped to implement here) would avoid a lot of redundant I/O. While Python is widely applicable, no tool is ideal for every job and this is just one of those cases. I learned a lot about Python's multiprocessing module in the course of this! At this point it looks like I've gone as far as I can go with standard CPython, but suggestions are still welcome!
1
0
0
[I'm using Python 3.5.2 (x64) in Windows.] I'm reading binary data in large blocks (on the order of megabytes) and would like to efficiently share that data into 'n' concurrent Python sub-processes (each process will deal with the data in a unique and computationally expensive way). The data is read-only, and each sequential block will not be considered to be "processed" until all the sub-processes are done. I've focused on shared memory (Array (locked / unlocked) and RawArray): Reading the data block from the file into a buffer was quite quick, but copying that block to the shared memory was noticeably slower. With queues, there will be a lot of redundant data copying going on there relative to shared memory. I chose shared memory because it involved one copy versus 'n' copies of the data). Architecturally, how would one handle this problem efficiently in Python 3.5? Edit: I've gathered two things so far: memory mapping in Windows is cumbersome because of the pickling involved to make it happen, and multiprocessing.Queue (more specifically, JoinableQueue) is faster though not (yet) optimal. Edit 2: One other thing I've gathered is, if you have lots of jobs to do (particularly in Windows, where spawn() is the only option and is costly too), creating long-running parallel processes is better than creating them over and over again. Suggestions - preferably ones that use multiprocessing components - are still very welcome!
How to efficiently fan out large chunks of data into multiple concurrent sub-processes in Python?
1.2
0
0
506
40,326,169
2016-10-30T05:05:00.000
0
0
0
0
python,tensorflow,deep-learning
40,351,277
1
false
0
0
I would first reshape your Tensor to be (sequence_length * batch_size, word_dim), do the matmul to get (sequence_length * batch_size, hidden_dim), then reshape again to get (sequence_length, batch_size, hidden_dim). There is no copying involved with reshape(), and this is equivalent to multiplying each of the batch_size matrices individually if you only have one matrix to multiply them with.
1
0
1
I have a 3D tensor (sequence_length, batch_size, word_dim), I need to do matmul operation with "word_dim" dimension so that I can change tensor into (sequence_length, batch_size, hidden_dim). It seems that matmul operation can only be used in 2D tensor. And I can not change the 3D tensor into 2D because of the "batch_size". How can I do?
How to do matmul operation in specific dimension in tensorflow
0
0
0
118
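A sketch of the reshape -> matmul -> reshape pattern the answer describes, assuming TensorFlow 1.x-style code as in the question (dimension sizes invented):

```python
import tensorflow as tf

seq_len, batch_size, word_dim, hidden_dim = 10, 32, 300, 128

x = tf.placeholder(tf.float32, [seq_len, batch_size, word_dim])
w = tf.Variable(tf.random_normal([word_dim, hidden_dim]))

x2d = tf.reshape(x, [-1, word_dim])                     # (seq_len * batch_size, word_dim)
h2d = tf.matmul(x2d, w)                                 # (seq_len * batch_size, hidden_dim)
h = tf.reshape(h2d, [seq_len, batch_size, hidden_dim])  # back to 3-D, no data copied
```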
40,327,068
2016-10-30T07:56:00.000
0
0
1
1
python
67,953,206
3
false
0
0
I faced the same problem, and I did find the missing files in a directory under C:\windows\WinSxS; just do a lookup for the required file and then paste all the files from that directory into C:\Windows\System32. That solved the problem for me.
1
0
0
I installed python 3.5.2 on Windows 8.1, I executed the python-3.5.2-amd64.exe installer. Nothing bad happened. I was searching the Python35 folder in C:\ , but actually is in C:\Users\USER\AppData\Local\Programs\Python\Python35 I opened python.exe and I got an error: api-ms-win-crt-runtime-l1-1-0.dll is missing. How can I make it works? I already have installed Microsoft Visual C++ 2012 Redistributable (x64) - 11.0.50727 and so on. Thank you in advance.
Windows 8 Python doesn't work
0
0
0
167
40,329,307
2016-10-30T12:52:00.000
0
0
0
0
python,neural-network,concatenation,convolution,keras
52,020,891
2
false
0
0
I do not understand why you would have 3 CNNs, because you would mostly get the same results as with a single CNN. Maybe you could train faster. Perhaps you could also do pooling and some resnet operation (I guess this could prove similar to what you want). Nevertheless, for each CNN you need a cost function in order to optimize the "heuristic" you use (e.g. to improve recognition). Also, you could do something as in NN style transfer, in which you compare results between several "targets" (the content and the style matrices); or simply train 3 CNNs, then cut off the last layers (or freeze them) and train again with the already-trained weights but now with your target FC layer...
1
4
1
I want to implement a multiscale CNN in python. My aim is to use three different CNNs for three different scales and concatenate the final outputs of the final layers and feed them to a FC layer to take the output predictions. But I don't understand how can I implement this. I know how to implement a single scale CNN. Could anyone help me in this?
Multiscale CNN - Keras Implementation
0
0
0
1,527
40,329,407
2016-10-30T13:01:00.000
2
0
1
0
ipython
46,983,113
1
false
0
0
Splitting cell is Ctrl+Shift+- (minus sign) in edit mode.
1
2
0
I've tried to use the command for spliting cell 'm -' but it doesn't work. All the reset of the key commands work fine with either esc or fn key as modifiers. I'm also in the correct mode(edititng mode).
Split cell in jupyter/ipython not working
0.379949
0
0
721
40,331,375
2016-10-30T16:52:00.000
-1
0
0
0
python,google-chrome,selenium,selenium-chromedriver
52,170,277
1
false
0
0
I had issues with connections closing unexpectedly after updating to selenium 3.8.1, using Chrome and Java. I was able to resolve the issue by re-trying the driver setup when it quit unexpectedly.
1
6
0
I'm using chromedriver on selenium by python scripts. When fire the scripts, Remote end closed connection without response was raised. does anyone solve this? chrome: 55.0.2883.28 chromedriver: 2.25
Remote end closed connection without response chromedriver
-0.197375
0
1
2,445
40,332,032
2016-10-30T18:07:00.000
0
1
0
0
java,php,python,html,desktop-application
40,332,050
2
false
1
0
You could expose data from Java or Python as JSON via GET request and use PHP to access it. There are multiple libraries for each of these languages both for writing and reading JSON. GET request can take parameters if needed.
1
0
0
I'm thinking about writing a desktop application that the GUI is made with either HTML or PHP, but the functions are run by a separate Java or python code, is there any heads up that I can look into?
How to use PHP/HTML as interface and Java/Python as function in background?
0
0
0
177
40,332,383
2016-10-30T18:47:00.000
5
0
1
0
python,virtualenv
40,332,574
2
false
0
0
You put your python code inside the learning.python. Your directory structure would look something like this: learning_python .lpvenv code.py another_code.py some_python_package If you run source .lpvenv/bin/activate on Linux or OSX or .lpvenv\Scripts\activate.bat on Windows, you will be using your venv interpreter, otherwise, you will be using your system interpreter.
2
4
0
I created a virtual environment called .lpvenv which contains dependencies for my project. On windows, .lpvenv is basically a folder. Do I store my source code directly in this folder when working inside .lpvenv or does it not matter ? Let's say I have a folder learning.python, inside this folder i have .lpvenv do i put my source code in learning.python or inside .lpvenv ?
Where do I store my python program files when using a virtual environment?
0.462117
0
0
4,296
40,332,383
2016-10-30T18:47:00.000
3
0
1
0
python,virtualenv
40,332,607
2
true
0
0
The environment folder should never be touched. It's there to store the specific version of python as well as the modules you install into that environment. All of this is managed by PIP. You can put your code anywhere you want in your project directory as long as you call the .lpenv/bin/activate script to activate your environment first. However, most projects put the environment right next to their source code within their project folder, which would be learning.python in your case. If you're using version control such as Git, make sure you add .lpenv to your .gitignore file. You do not want to commit your environment into source code since it should be easily rebuilt using your requirements.txt file.
2
4
0
I created a virtual environment called .lpvenv which contains dependencies for my project. On windows, .lpvenv is basically a folder. Do I store my source code directly in this folder when working inside .lpvenv or does it not matter ? Let's say I have a folder learning.python, inside this folder i have .lpvenv do i put my source code in learning.python or inside .lpvenv ?
Where do I store my python program files when using a virtual environment?
1.2
0
0
4,296
40,335,862
2016-10-31T02:34:00.000
0
1
0
1
python,linux,windows,com,pyro
40,347,576
1
false
1
0
Yes this is a perfect use case for Pyro, to create a platform independent wrapper around your COM access code. At least I assume you have some existing Python code (using ctypes or pywin32?) that is able to invoke the COM object locally? You wrap that in a Pyro interface class and expose that to your linux box. I think the only gotcha is that you have to make sure you pythoncom.CoInitialize() things properly in your Pyro server class to be able to deal with the multithreading in the server, or use the non-threaded multiplex server.
1
1
0
I have a project that requires the usage of COM objects running on a windows machine. The machine running the Python Django project is on a Linux box. I want to use Pyro and the django App to call COM objects on the remote windows machine. Is it possible? Any suggestion is appreciated?
Python Pyro running on Linux to open a COM object on a remote windows machine, is it possible?
0
0
0
433
40,339,098
2016-10-31T09:01:00.000
0
0
0
0
python,django,postgresql,nginx,virtualenv
40,339,161
3
false
1
0
I wouldn't consider it advisable. By doing that, you are creating a dependency between the projects, which means you'll never be able to upgrade one without all the others. Which would be a massive PITA. Eventually it would get to a point where you could never upgrade, because Project A's dependency foo doesn't work with django 1.N but Project B's dependency bar requires at least 1.N, at which point you fall back to the cleaner solution anyway: separate environments. That applies to the django side of things at least; it may work slightly better with Postgres and Nginx.
3
1
0
I am setting up a new server machine, which will host multiple django websites. I must point out that I own (developed and are in absolute control of) all websites that will be run on the server. I am pretty certain that ALL of the websites will be using the same version of: django gunicorn nginx postgreSQL and psycopg2 (all though some websites will be using geospatial and other extensions) The only thing that I know will differ between the django applications are: python modules used (which may have implications for version of python required) I can understand using virtualenv to manage instances of where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources), to have each project (via virtualenv), to have separate installations of django, nginx, gunicorn ... etc. My question then is this: Is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects?. Note: In this scenario there'll be one nginx server handling multiple domains. Last but not the least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
0
0
0
709
40,339,098
2016-10-31T09:01:00.000
1
0
0
0
python,django,postgresql,nginx,virtualenv
40,339,173
3
true
1
0
No. It would probably work, but it would be a bad idea. Firstly, it's not clear what kind of "resources" you think would be wasted. The only relevant thing is disk space, and we're talking about a few megabytes only; not even worth thinking about. Secondly, you'd now make it impossible to upgrade any of them individually; for anything beyond a trivial upgrade, you'd need to test and release them all together, rather than just doing what you need and deploying that one on its own.
3
1
0
I am setting up a new server machine, which will host multiple django websites. I must point out that I own (developed and are in absolute control of) all websites that will be run on the server. I am pretty certain that ALL of the websites will be using the same version of: django gunicorn nginx postgreSQL and psycopg2 (all though some websites will be using geospatial and other extensions) The only thing that I know will differ between the django applications are: python modules used (which may have implications for version of python required) I can understand using virtualenv to manage instances of where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources), to have each project (via virtualenv), to have separate installations of django, nginx, gunicorn ... etc. My question then is this: Is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects?. Note: In this scenario there'll be one nginx server handling multiple domains. Last but not the least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
1.2
0
0
709
40,339,098
2016-10-31T09:01:00.000
0
0
0
0
python,django,postgresql,nginx,virtualenv
40,340,613
3
false
1
0
I would suggest using docker virtualization so that every project has its own scope and doesn't interfere with other projects. I currently have such a configuration on multiple servers and I'm really happy with it, because I'm really flexible and, what is really important, secure: if any of the projects has critical bugs in it, the other projects are still safe.
3
1
0
I am setting up a new server machine, which will host multiple django websites. I must point out that I own (developed and are in absolute control of) all websites that will be run on the server. I am pretty certain that ALL of the websites will be using the same version of: django gunicorn nginx postgreSQL and psycopg2 (all though some websites will be using geospatial and other extensions) The only thing that I know will differ between the django applications are: python modules used (which may have implications for version of python required) I can understand using virtualenv to manage instances of where a project has specific python modules (or even python version requirements), but it seems pretty wasteful to me (in terms of resources), to have each project (via virtualenv), to have separate installations of django, nginx, gunicorn ... etc. My question then is this: Is it 'acceptable' (or considered best practice in scenarios such as that outlined above) to globally install django, gunicorn, nginx, postgreSQL and psycopg2 and simply use virtualenv to manage only the parts (e.g. python modules/versions) that differ between projects?. Note: In this scenario there'll be one nginx server handling multiple domains. Last but not the least, is it possible to use virtualenv to manage different postgreSQL extensions in different projects?
Setting up a server to host multiple domains using django, virtualenv, gunicorn and nginx
0
0
0
709
40,339,355
2016-10-31T09:20:00.000
1
0
0
0
python,matplotlib,contour
40,340,683
1
true
0
0
Without providing any code it's hard to give you a code example of what you should do, but I'm assuming you are providing contourf some 2d numpy array of values that drives the visualization. What I would suggest is then to set the x-limit in that data structure rather than providing the limit to matplotlib. If X is your data structure, then just do plt.contour(X[:10, :]).
1
1
1
I am plotting a contour on matplotlib using the contourf command. I get huge numbers at the start of frequency, but lower peaks after that. xlim only works in hiding the higher numbers - but I want the lower peaks to become the maximum on the colorbar (not shown in my images). How to rescale the contour after the xlim has hidden the unrequired contour? Basically, the light blue (cool) portion of should become the red (hot) area after applying xlim(10,100)
Plot maximum of whats remaining on plot after xlim
1.2
0
0
30
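A minimal sketch of the answer's suggestion (array shape and slice invented, and it assumes the x axis runs along the array's second dimension): slice the data before plotting so the color scale is computed only from what you want to show, instead of hiding it with xlim.

```python
import numpy as np
import matplotlib.pyplot as plt

X = np.random.rand(50, 200)
X[:, :10] *= 100                # huge values at the start of the x range

plt.contourf(X[:, 10:])         # drop those columns; the colorbar now fits the rest
plt.colorbar()
plt.show()
```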
40,340,100
2016-10-31T10:09:00.000
4
0
0
0
python,unit-testing,apache-kafka,kafka-python
40,342,318
3
true
0
0
If you need to verify a Kafka-specific feature, or an implementation with a Kafka-specific feature, then the only way to do it is by using Kafka! Does Kafka have any tests around its deduplication logic? If so, the combination of the following may be enough to mitigate your organization's perceived risks of failure: unit tests of your hash logic (make sure that the same object does indeed generate the same hash) Kafka topic deduplication tests (internal to Kafka project) pre-flight smoke tests verifying your app's integration with Kafka If Kafka does NOT have any sort of tests around its topic deduplication, or you are concerned about breaking changes, then it is important to have automated checks around Kafka-specific functionality. This can be done through integration tests. I have had much success recently with Docker-based integration test pipelines. After the initial legwork of creating a Kafka docker image (one is probably already available from the community), it becomes trivial to set up integration test pipelines. A pipeline could look like: application-based unit tests are executed (hash logic) once those pass, your CI server starts up Kafka integration tests are executed, verifying that duplicate writes only emit a single message to a topic. I think the important thing is to make sure Kafka integration tests are minimized to ONLY include tests that absolutely rely on Kafka-specific functionality. Even using docker-compose, they may be orders of magnitude slower than unit tests, ~1ms vs 1 second? Another thing to consider is the overhead of maintaining an integration pipeline may be worth the risk of trusting that Kafka will provide the topic deduplication that it claims to.
1
11
0
We have a message scheduler that generates a hash-key from the message attributes before placing it on a Kafka topic queue with the key. This is done for de-duplication purposes. However, I am not sure how I could possibly test this deduplication without actually setting up a local cluster and checking that it is performing as expected. Searching online for tools for mocking a Kafka topic queue has not helped, and I am concerned that I am perhaps thinking about this the wrong way. Ultimately, whatever is used to mock the Kafka queue, should behave the same way as a local cluster - i.e. provide de-deuplication with Key inserts to a topic queue. Are there any such tools?
Python: how to mock a kafka topic for unit tests?
1.2
0
0
11,023
40,341,471
2016-10-31T11:41:00.000
0
0
1
0
python,image,python-3.x,python-3.5
40,343,522
1
true
0
1
Is that a homework ? Working with a new target image, as suggested in the comments, is the easiest. But theoretically, assuming your original image is represented as some 2 dimension table of pixels, you could do it without creating a new image: First double both dimensions of the original image (with the original image staying on "upper left" and occupying 1/4 of the new image, and filling the other 3/4 with blank or any value). Then take the lower right pixel from the original image, and write 4 identical pixels in the lower right of the resized image. Then take the original pixel directly at the the left of the previous original pixel, copy it on the 4 pixels directly at the left of the 4 previous new pixels. Repeat until you reach the left end of the line, then start the process again on the line above. At some point you will overwrite pixels from the original image, but that doesn't matter since you will already have duplicated those in the new image. That's pure theory, assuming you are not allowed to use external libraries such as Pillow.
1
1
0
i have to double an image using python, So i think i can replace each pixel of the image with a square formed by 4 pixels how do i can do that and assign to each pixel of the little square different colors?
Double an image using python
1.2
0
0
1,504
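A small sketch of the pixel-duplication idea, using the simpler new-image variant the answer mentions (the image is assumed to be a plain 2-D list of pixel values): every source pixel becomes a 2x2 block.

```python
def double_image(img):
    out = []
    for row in img:
        new_row = []
        for px in row:
            new_row.extend([px, px])   # duplicate each pixel horizontally
        out.append(new_row)
        out.append(list(new_row))      # duplicate the whole row vertically
    return out

print(double_image([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```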
40,342,364
2016-10-31T12:38:00.000
0
0
0
0
python,django,rest,django-rest-framework,jwt
40,354,393
1
false
1
0
Found the issue. My account api allowed bad passwords to fall through, so my users model wasn't able to log that password.
1
0
0
So I've created a register page that allows visitors to register an account on my website. These accounts have no staff status or administrative privileges. I've also created a login page that takes the username and password and sends an ajax post request to an auth url. The url links to obtain_jwt_token (django-rest-framework-jwt's view) which checks the username and password and then returns a jwt token to the visitor's localstorage. This is all fine and dandy, and it works well. Only problem is... well it works only for administrator accounts. For some reason the accounts with no staff status aren't validated. Json Web Tokens aren't returned for these accounts. Is this an issue with django.admin.auth? or is it an issue with drt-jwt? Is drt-jwt using the django admin page to authenticate users? Because that's not what I want. I don't just want admins to be able to log in to my website.
Django-rest-framework-jwt won't return JWToken for nonstaff accounts (django admin error?)
0
0
0
171
40,344,730
2016-10-31T14:54:00.000
1
0
1
0
python,multithreading,python-2.7
40,344,845
3
false
0
0
I am not 100% sure, but I think they are just aliases from different versions of Python and they have the same functionality. The reason is to keep compatibility with older versions of Python.
1
1
0
For threading.Thread, there are two methods which seems to have same functionality : is_alive and isAlive For threading.Event, there is below method : is_set and isSet Similarly threading module , again these methods are available currentThread and current_thread active_count and activeCount So, question is, though it seems, both the methods have same functionality, why there are two methods available? Also, which one is preferable ?
Why two methods with same functionality available in threading module in Python?
0.066568
0
0
79
40,346,352
2016-10-31T16:25:00.000
0
0
0
0
oauth,python-social-auth,xing
40,347,385
1
true
1
0
Sry, found the answer. The user-id relies to the xing-app that asks for the user, which can only be configured by one developer account... So you need to use the same credentials to ask for the users to be able to find the same user by using the user-id again.
1
2
0
I get a slightly changing User ID's on sign up/login when authenticating with xing oauth. Something like: 37578662_a467ef and 37578662_76a7fe. Does somebody know if the user id changes when using a xing test key? Or if I could rely on the first part (before underline) to be equal and consistent on login? Using python-social-auth and Django Best Johannes
Does User ID on Xing OAuth change when using Test Key?
1.2
0
0
81
40,349,868
2016-10-31T20:12:00.000
0
0
1
0
python,pandas,quantlib
40,349,994
1
false
0
0
Looks like something like this might work: dataframe.set_index = [ pandas.to_datetime(str(a)) for a in list(data.keys()) ] but I get the following error: KeyError: Timestamp('2016-09-27 00:00:00') Press any key to continue . . .
1
0
1
I have a list data.keys() which returns dict_keys([2016-09-27, 2016-09-28, 2016-09-29, 2016-09-30, 2016-09-26]) Data type for each of the element is quantlib.time.date.Date How do I create an index for pandas with the above so that the index is of type pandas.tslib.Timestamp?
How to convert quantlib.time.date.Date to pandas.tslib.Timestamp
0
0
0
411
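The answer above stops at an error; as a hedged sketch of the usual approach (values invented, and note that df.set_index = [...] overwrites the method rather than setting the index), convert each date to a string, let pandas parse it, and assign the result to df.index:

```python
import pandas as pd

keys = ['2016-09-26', '2016-09-27', '2016-09-28']   # stand-ins for str(quantlib Date)
df = pd.DataFrame({'value': [1, 2, 3]})

df.index = pd.to_datetime([str(k) for k in keys])   # a DatetimeIndex of Timestamps
```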
40,355,881
2016-11-01T07:38:00.000
1
0
0
0
python,c++,kivy
40,359,340
1
true
0
1
A window to show the emulator graphics, update a raw bitmap on each frame Not sure how exactly, but you have access to textures and to a huge part of OpenGL through Kivy and Python, so this could be doable. A window to display some debug information, large scrollable text box Use RecycleView and Label's core. There's an example for ListView, but since the new changes it's kind of broken. However, in a similar way it could be done for RecycleView A way to emit audio that's generated by the emulator Should work without problems if you can pass it to the provider. The only issue I see with built-in audio support in Kivy is pause and seek, because those afaik either aren't implemented (most probably) or are broken. However with Gstreamer it should work. Accept keyboard input to control the game. Keyboard and multitouch work out of the box with Kivy, you only need to (for keyboard) extend one method and (for touch) check for collisions with Widgets Are there widgets that will let me create the disassembly and memory viewer? No. At least none that I know will do that out of the box. If by disassembly you mean text, dump it into a widget that can handle text. Memory viewer however isn't there and you'll need to create your own widget. That's not hard if you work with Kivy at least for a while. Kivy by default doesn't do 3D. There are "plugins" that can allow you such thing, but I don't see any that's still maintained so there's this thing. Also I see the code isn't C, but C++ so I'm not sure how to bind those together. Cython should be the rescue here ^^
1
0
0
I've created a gameboy color emulator using C++ and am ready to start developing the frontend that will display the emulator's viewport, emit audio, and also display some debug information. I'm looking into using Kivy to create the UI frontend and boost.python (which looks pretty promising) to interop between the C++ core and the python UI. What I would like to have in my front end are: A window to show the emulator graphics. More specifically something that let's me update a raw bitmap (i.e. raw pixel data) on each frame. A window to display some debug information. More specifically I want a large scrollable text box to show the disassembled code and another one to show the memory. A way to emit audio that's generated by the emulator. The core doesn't support audio yet so I'm not sure what it'll look like on the C++ side. Accept keyboard input to control the game. Will Kivy allow me to do all of this? I see that it has dependencies on glew and sdl2 which should take care of the graphics and audio requirements, right? Are there widgets that will let me create the disassembly and memory viewer?
Is it feasible to create an emulator front-end using Kivy?
1.2
0
0
253
40,358,456
2016-11-01T10:43:00.000
2
0
1
0
python,startup
40,358,517
2
false
0
0
You can use the good old Startup folder of Windows, adding a .bat file that runs your script or its .exe version obtained with PyInstaller. Charlie
1
2
0
I need my program to launch on system startup or user login. I have played around a little with the scheduler, but I'm pretty new to that and I couldn't get it to work. Basically I need to save something in the code to the startup registry. It might also be worth to note that I will be turning this into an exe with pyinstaller, but that shouldn't make too much of a difference I believe.
Open python program on system startup
0.197375
0
0
571
40,359,630
2016-11-01T11:59:00.000
5
0
0
0
python,python-3.x,flask
40,360,531
1
false
1
0
Error handlers follow the exception class MRO, or method resolution order, and a handler is looked up in that order; the specific exception type first, then its direct parent class, etc., all the way down to BaseException and object. There is no need to order anything; if you registered a handler for Exception, then it'd be used for any exception for which no more specific handler was found.
1
3
0
How does one ensure that the flask error handlers get the most specific exception? From some simple tests and looking at the source code, it looks like the flask error handling code just takes the first register error handler for a given exception type instead of the most specific type possible. I guess the answer is to put the error handler for Exception at the very end?
How to handle ordering of flask error handlers
0.761594
0
0
453
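A small sketch of the behaviour the answer describes (route and messages invented; assumes a Flask version where handler lookup walks the MRO): the more specific handler wins regardless of registration order.

```python
from flask import Flask

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_any(e):
    return 'generic handler', 500

@app.errorhandler(KeyError)
def handle_key_error(e):
    return 'specific KeyError handler', 500

@app.route('/boom')
def boom():
    raise KeyError('missing')   # served by handle_key_error, not handle_any
```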
40,363,937
2016-11-01T16:07:00.000
6
0
1
0
python,c++,c,cython
40,364,443
1
true
0
1
Cython's output is not intended for human consumption Cython treats C as an intermediate language, in much the same way as LLVM treats LLVM IR as an intermediate language. Cython's purpose is to produce Python extension modules, and C is just the most reasonable means to that end. It will generally produce a maze of twisty little preprocessor directives, all totally unreadable. You should not use Cython if you want C code that you can read.
1
3
0
I am the sole Matlab user on a team of C++/C# developers. I am transitioning to Python, and was hoping that Cython could help me bridge the gap between my work and my colleagues' work. I originally thought that Cython could be used to compile Python code to a C source file, which could then be imported/called from Python. I was hoping for two benefits from this: A speed boost in my programs, and A C source file that could be handed off to my colleagues for some slight polishing and then ultimately implemented in their (C++/C#) packages. Unfortunately, it looks like the latter is not an option, but I'm not positive. It looks like the C source file is very bloated with lots of references to Python. I have a three-line Python script that declares a cdef char*, assigns the string "hello world!" to that variable, then prints it. The resulting C file is 2000 lines long. So, my question is, is benefit #2 unobtainable with Cython? Is the C code generated with Cython only intended to be used by Python, or is there a way to remove the Python bloat and get a concise C translation of the Python code?
Is Cython used for building C code or is it used for building Python extensions?
1.2
0
0
430
40,366,636
2016-11-01T18:47:00.000
1
0
1
0
python,flowchart
40,366,674
1
true
0
0
Usually the flowchart symbol for a file is a rectangle with a folded-up corner, meant to depict a piece of paper.
1
0
0
So I just need to know what symbol in a flow chart will represent a text file in python. The text file is used to see if there is a match in what the user has entered and data in a text file to give a solution if found. it's hard to decide which symbol it is. Please state the name of the symbol! Thanks.
What symbol will represnt a text file in a flow chart?
1.2
0
0
1,413
40,369,042
2016-11-01T21:35:00.000
1
0
0
0
python,machine-learning,scikit-learn,artificial-intelligence
40,370,495
1
true
0
0
Your data does not make any sense from scikit-learn perspective of what is expected in the .fit call. Feature vectors is supposed to be a matrix of size N x d, where N - number of data points and d number of features, and your second variable should hold labels, thus it should be vector of length N (or N x k where k is number of outputs/labels per point). Whatever is represented in your variables - their sizes do not match what they should represent.
1
0
1
I'm currently trying to train the MLPClassifier implemented in sklearn... When i try to train it with the given values i get this error: ValueError: setting an array element with a sequence. The format of the feature_vector is [ [one_hot_encoded brandname], [different apps scaled to mean 0 and variance 1] ] Does anybody know what I'm doing wrong ? Thank you! feature_vectors: [ array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), array([ 0.82211852, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 4.45590895, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 0.3439882 , -0.22976818, -0.22976818, -0.22976818, 4.93403927, -0.22976818, -0.22976818, -0.22976818, 0.63086639, 1.10899671, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 1.58712703, -0.22976818, 1.77837916, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 2.16088342, -0.22976818, 2.16088342, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 9.42846428, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 0.91774459, -0.22976818, -0.22976818, 4.16903076, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 2.44776161, -0.22976818, -0.22976818, -0.22976818, 1.96963129, 1.96963129, 1.96963129, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 7.13343874, 5.98592598, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 3.02151799, 4.26465682, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 2.25650948, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 
-0.22976818, -0.22976818, -0.22976818, 1.30024884, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 4.74278714, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 0.3439882 , -0.22976818, 0.3439882 , -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 0.53524033, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818, 3.49964831, -0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818]) ] g_a_group: [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.] MLP: from sklearn.neural_network import MLPClassifier clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) clf.fit(feature_vectors, g_a_group)
Python MLPClassifier Value Error
1.2
0
0
437
40,370,004
2016-11-01T23:07:00.000
0
0
0
0
python-2.7,tensorflow
40,370,096
1
false
0
0
You should just make your label (y) in your reduced sum format (i.e. 3 bits), and train to that label. The neural net should be smart enough to adjust the weights to imitate your reduce_sum logic.
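Not from the original answer text: a minimal sketch, assuming TensorFlow 1.x, of building the grouped-sum tensor from the question and training it against a 3-element label as the answer suggests. Shapes, the loss, and the optimizer are illustrative choices, not taken from the post.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [7, 7])
w = tf.Variable(tf.random_normal([7, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x, w) + b                      # shape [7, 1]

# group rows 0:3, 3:5 and 5:7 into three scalar sums
new_y = tf.stack([tf.reduce_sum(y[0:3]),
                  tf.reduce_sum(y[3:5]),
                  tf.reduce_sum(y[5:])])     # shape [3]

label = tf.placeholder(tf.float32, [3])      # the reduced, 3-element target
loss = tf.reduce_mean(tf.square(new_y - label))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```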
1
0
1
I have a question about tensorflow tensor. If I have a NeuralNet like y=xw+b as an example. then x is placeholder([7,7] dims), w is Variable([7,1]) and b is Variable([1,1]) So, y is tensorflow tensor with [7,1] dims. then, in this case. can I make a new tensor like new_y = [tf.reduce_sum(y[0:3]), tf.reduce_sum(y[3:5]), tf.reduce_sum(y[5:])] and use it for training step? If possible, how can I make it?
Generate new tensorflow tensor according to the element index of original tensor
0
0
0
54
40,370,467
2016-11-02T00:01:00.000
0
0
1
1
python,macos,ipython,anaconda,zsh
56,141,998
13
false
0
0
I had a similar issue after installing Anaconda3 on Ubuntu. This is how I solved it: 1) I switched to bash, and Anaconda worked there. 2) I then switched back to zsh, and Anaconda worked as well. I don't know why, but I think you can try it.
2
54
0
I installed Anaconda via command line. The bash file. If Im in bash, I can open and use anaconda, like notebooks, ipython, etc. If I change my shell to ZSH, all the anaconda commands appear like "not found". How I can make it work in zsh? I use a Mac with OSx Sierra. Thanks in advance,
Anaconda not found in ZSh?
0
0
0
80,553
40,370,467
2016-11-02T00:01:00.000
3
0
1
1
python,macos,ipython,anaconda,zsh
62,925,457
13
false
0
0
From their docs (this worked for me): if you are on macOS Catalina, the new default shell is zsh. You will instead need to run source <path to conda>/bin/activate followed by conda init zsh. For my specific installation (done by double-clicking the installer), this ended up being source /opt/anaconda3/bin/activate
2
54
0
I installed Anaconda via command line. The bash file. If Im in bash, I can open and use anaconda, like notebooks, ipython, etc. If I change my shell to ZSH, all the anaconda commands appear like "not found". How I can make it work in zsh? I use a Mac with OSx Sierra. Thanks in advance,
Anaconda not found in ZSh?
0.046121
0
0
80,553
40,370,552
2016-11-02T00:14:00.000
1
0
1
0
python,regex,ip,analysis
53,222,067
2
false
0
0
Use this regex. It matches an IP address and checks that every octet is within the 0-255 range (note the escaped dots between octets): \b(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\b
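Not part of the original answer: a minimal sketch showing the range-checked pattern in use with Python's re module. The test strings are made-up examples.

```python
import re

ip_pattern = re.compile(
    r'\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\.){3}'
    r'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\b'
)

print(bool(ip_pattern.search('10.222.11.4')))    # True  - all octets are <= 255
print(bool(ip_pattern.search('321.222.11.4')))   # False - 321 is out of range
```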
1
2
0
I am having an issue with Regular expression, I need the most efficient regex that match IP address and in range of 255 only. I tried this one "ip_pattern = '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'" , but it does match even numbers over 255, such as 321.222.11.4
IP address regex python
0.099668
0
1
1,682
40,371,956
2016-11-02T03:19:00.000
2
0
1
1
python,linux
40,372,327
2
true
0
0
Solved! Here's how: 1) In the terminal, after SSHing into the remote machine, type 'which python' (thanks @furas!). This gives path/to/Canopy/python. 2) In the terminal, type 'screen path/to/Canopy/python program.py' to run the desired program (called program.py) with the Canopy version of Python.
1
1
0
I want to run a process (a python program) on a remote machine. I have both Canopy and Anaconda installed. After I SSH into the remote machine, if I type 'python', I get the python prompt - the Canopy version. If I type 'screen', hit 'enter', then type 'python', I get the python prompt - the Anaconda version. I want to use the Canopy version when I'm in 'screen'. How can I do so?
Screen command and python
1.2
0
0
5,021
40,379,402
2016-11-02T11:59:00.000
0
0
1
0
python,jupyter-notebook
40,380,231
1
true
1
0
I think you are out of luck here. The one thing you can do is check .ipynb_checkpoints/ to see if you can recover a recent checkpoint.
1
1
0
I was scraping the web using jupyter notebook and it had been running for 20 hours. Now, because it was taking up a lot of ram, the browser eventually crashed however the command prompt instance is still running. Is there a way to retrieve the browser contents back with the data which is already scraped?
Jupyter notebook browser crashed but running in command prompt
1.2
0
0
199
40,381,804
2016-11-02T13:58:00.000
0
0
0
0
python,excel
40,381,980
1
false
0
0
I don't know if that is even possible, but it will at least be difficult to do. Because Excel locks the sheet file when it opens it, the file cannot be modified by other processes while it is open. So that leaves only the possibility of having the Excel process itself modify the file. And scripting inside Excel can only be done in VB as far as I know (but I don't know much about that).
1
1
0
I'm designing a trade management system. I want to be able to enter in values into excel and have python do some computation (rather than excel). Is this even possible? With openpyxl I have to enter in the value to excel, save, close, run the script, reopen excel. This is an unacceptable in terms of the design criteria. Can any one recommend a better way to have a live interface which updates when values are changed in the cells ? Ideally I would like to remain with excel
How to update an Excel worksheet live without closing and reopening python
0
1
0
1,126
40,384,700
2016-11-02T16:14:00.000
1
0
1
1
python,macos,pip
40,386,891
1
true
0
0
Resolved the problem. Turns out that this is Homebrew's behavior. I must have recently ran brew upgrade and it installed a newer version of python3. It seems that something got weird with re-linking the new python3, so all binaries for the new installs ended up somewhere deep in /usr/local/Cellar/python3. I expect that re-linking python3 would solve this, but I ended up removing all versions of python3 and reinstalling. After that all I had to do was re-install any and all packages that had binary files in them. Not sure if this is the intended behavior or a bug in python3 package.
1
1
0
Suddenly, my pip install commands stopped installing binaries into /usr/local/bin. I tried to upgrade pip to see if that might be the problem, it was up to date and a forced re-install deleted my /usr/local/pip3 and didn't install it back, so now I have to use python3 -m pip to do any pip operations. I am running OS X Sierra with the latest update (that is the main thing that changed, so I think the OS X upgrade might have caused this) with python3 installed by homebrew. How do I fix this? Edit: I am still trying to work this out. python3 -m pip show -f uwsgi actually shows the uwsgi binary as being installed to what amounts to /usr/local/bin (it uses relative paths). Yet the binary is not there and reinstalling doesn't put it there and doesn't produce any errors. So either pip records the file in its manifest, but doesn't actually put it there or the OS X transparently fakes the file creation (did Apple introduce some new weird security measures?)
pip3 stopped installing executables into /usr/local/bin
1.2
0
0
1,216
40,384,775
2016-11-02T16:17:00.000
0
0
0
1
python,sdn,pox
43,265,462
2
false
0
0
POX is not a distributed controller. I would really recommend that you migrate immediately to ONOS or OpenDaylight. You would then implement your solution on top of ONOS.
1
0
0
I am developing a load balancing between multiple controllers in sdn. Once a load is calculated on a controller-1 I need to migrate some part of that to controller-2. I have created the topology using mininet and running 2 remote pox controllers one on 127.0.0.1:6633 and other on 127.0.0.1:6634.How do I communicate between these controllers? How can I send load information of controller-1 to controller-2 and migrate some flows there?
Communicating between multiple pox controllers
0
0
1
1,009
40,389,402
2016-11-02T20:39:00.000
-1
0
0
0
python,machine-learning,scikit-learn,k-means
40,390,555
2
false
0
0
Clustering users makes sense. But if your only feature is the rating, I don't think it could produce a useful model for prediction. Below are the assumptions behind this justification: The quality of a movie should follow a Gaussian distribution. If we look at the rating distribution of a typical user, it should also look roughly Gaussian. I don't exclude the possibility that a few users only give ratings when they see a bad movie (thus all low ratings), and vice versa, but across a large number of users this should be unusual behavior. Thus I can imagine that after clustering, you get small groups of users in the two extreme cases, and most users are in the middle (because they share the Gaussian-like rating behavior). Using this model, you would probably get good results for users in the two small (extreme) groups; however, for the majority of users, you cannot expect good predictions.
1
0
1
I have a file called train.dat which has three fields - userID, movieID and rating. I need to predict the rating in the test.dat file based on this. I want to know how I can use scikit-learn's KMeans to group similar users given that I have only feature - rating. Does this even make sense to do? After the clustering step, I could do a regression step to get the ratings for each user-movie pair in test.dat Edit: I have some extra files which contain the actors in each movie, the directors and also the genres that the movie falls into. I'm unsure how to use these to start with and I'm asking this question because I was wondering whether it's possible to get a simple model working with just rating and then enhance it with the other data. I read that this is called content based recommendation. I'm sorry, I should've written about the other data files as well.
Clustering before regression - recommender system
-0.099668
0
0
733
40,390,129
2016-11-02T21:28:00.000
3
0
1
0
python
40,390,150
3
false
0
0
It means 1 times 10 to the power of -5, i.e. 0.00001.
2
32
0
I notice that there is such an expression "1e-5" in Python(probably in other languages also) What is the name of this notation? what does it denote in math? What does 'e' mean? It's the first time I see a character helps to denote some value, are there other characters also help do so? Why should use this way instead of some other python math operation like pow() etc.
What does "e" in "1e-5" in Python language mean and what is the name of this notation?
0.197375
0
0
134,839
40,390,129
2016-11-02T21:28:00.000
15
0
1
0
python
40,390,160
3
false
0
0
10 ** -5, i.e. 10 to the power of negative 5, 1 divided by 10 to the power of 5, or 0.00001.
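A quick interpreter check of the notation (plain Python, no extra assumptions):

```python
print(1e-5)              # 1e-05
print(1e-5 == 10 ** -5)  # True
print(2.5e3)             # 2500.0  (the same notation works for positive exponents)
```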
2
32
0
I notice that there is such an expression "1e-5" in Python(probably in other languages also) What is the name of this notation? what does it denote in math? What does 'e' mean? It's the first time I see a character helps to denote some value, are there other characters also help do so? Why should use this way instead of some other python math operation like pow() etc.
What does "e" in "1e-5" in Python language mean and what is the name of this notation?
1
0
0
134,839
40,392,010
2016-11-03T00:35:00.000
1
0
1
0
python,regex
40,392,162
1
false
0
0
Your pattern currently doesn't work because of the word boundary placed at the start. Note that a word boundary only matches between a word character and a non-word character, at the start of a string if the first character is a word character, or at the end of a string if the last character is a word character. In your case \b sits between the start of the string and the +, where it cannot match (+ is not a word character), so your first optional group never gets a chance to match. The rest of the pattern consists of an 8-digit number (if we forget spaces and hyphens for a moment), but the number you are testing has 10 digits, so both word boundaries can't match at the same time. I think you can rewrite your pattern as ((?:(\+65[\s\-]*)|\b)[3689]\d{3}[\s\-]*\d{4})\b thus matching either +65 or a word boundary. Not sure if you use the capturing groups in your pattern, so I kept them as they are.
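A quick check of the suggested rewrite with Python's re; the phone numbers below are made-up examples.

```python
import re

pattern = re.compile(r'((?:(\+65[\s\-]*)|\b)[3689]\d{3}[\s\-]*\d{4})\b')

print(pattern.search('+6565066859').group(1))     # '+6565066859'
print(pattern.search('Call 9123 4567').group(1))  # '9123 4567'
```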
1
0
0
I am not sure why the regex - \b((\+65[\s\-]*)?[3689]\d{3}[\s\-]*\d{4})\b doesn't work for +6565066859
Regular expression in python 2.7.11
0.197375
0
0
79
40,392,243
2016-11-03T01:06:00.000
1
1
1
0
python
40,392,290
2
false
0
0
If you run 2to3 on something and there are no corrections then it is at least python 3 compatible. Additionally, if you run pip install 3to2 and then run 3to2 and there are no corrections then said file is python 2 compatible.
1
0
0
Is there a way to determine what version of python is a particular script compatible with ? I know this sounds like a stupid question but I have Python scripts from different sources. I have to manually run each script with the Python versions that I have installed on my system to check which version they are compatible with so was wondering if there is a more elegant way to do this?
Python - Script compatible by with which version?
0.099668
0
0
124
40,393,410
2016-11-03T03:31:00.000
1
0
1
0
python,syntax
40,393,423
1
false
0
0
The colons are separators in the slice notation start:stop:step. Rather than providing only a "beginning" and an "end" index, the third value tells Python to step through the sequence with a step of -1, i.e. backwards. It effectively reverses the string, which is why comparing it to the original checks for a palindrome.
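A small illustration of the slice notation discussed above (plain Python):

```python
s = "12321"
print(s[::-1])                        # '12321' - reversed, so this one is a palindrome
print("hello"[::-1])                  # 'olleh'
print(list(range(1000, 7, -1))[:3])   # [1000, 999, 998] - range can also count down
```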
1
0
0
I was looking at python code that printed palindromes, and I stumbled upon this line of code: for i in range(1000, 7, -1): if (str(i) == str(i)[::-1]) I'm trying to learn Python right now, and I'm just not that familiar with the syntax. Currently, I understand that this line of code checks to see if the first digit of integer i matches its last digit. Does the syntax of this line mean that the index is being incremented in order to check if it's a palindrome? What is the purpose of having two colons?
What does ::-1 mean for python string index?
0.197375
0
0
2,432
40,399,023
2016-11-03T10:21:00.000
0
0
0
0
python,scikit-learn,cluster-analysis,tf-idf
40,411,804
2
false
0
0
Start small. First cluster only 100,000 documents. Only once that works (because it probably won't), think about scaling up. If you don't succeed in clustering the subset (and text clusters are usually pretty bad), then you won't fare well on the large set.
2
0
1
I am using TfIdfVectorizer of sklearn for document clustering. I have 20 million texts, for which i want to compute clusters. But calculating TfIdf matrix is taking too much time and system is getting stuck. Is there any technique to deal with this problem ? is there any alternative method for this in any python module ?
python : facing memory issue in document clustering using sklearn
0
0
0
261
40,399,023
2016-11-03T10:21:00.000
1
0
0
0
python,scikit-learn,cluster-analysis,tf-idf
40,399,357
2
true
0
0
Well, a corpus of 20 million texts is very large, and without meticulous and comprehensive preprocessing or some good computing instances (i.e. a lot of memory and good CPUs), the TF-IDF calculation may take a lot of time. What you can do: Limit your text corpus to a few hundred thousand samples (let's say 200,000 texts). Having too many texts might not introduce much more variance than a much smaller (but reasonable) dataset. Try to preprocess your texts as much as you can. A basic approach would be: tokenize your texts, remove stop words, apply word stemming, and use n-grams carefully. Once you've done all these steps, see how much you've reduced the size of your vocabulary. It should be much smaller than the original one. If your dataset is not too big, these steps might help you compute the TF-IDF much faster.
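Not from the original answer: a sketch of the pruning options scikit-learn's TfidfVectorizer offers for the steps described above. The toy documents and the parameter values are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A few toy documents stand in for a (subsampled) corpus.
texts = [
    "the cat sat on the mat",
    "dogs and cats are popular pets",
    "stock markets fell sharply today",
]

vectorizer = TfidfVectorizer(
    stop_words="english",   # drop common stop words
    max_features=50000,     # cap the vocabulary size, which keeps the matrix small
    ngram_range=(1, 2),     # unigrams and bigrams
    # on a real corpus you would also raise min_df / lower max_df to prune terms
)
X = vectorizer.fit_transform(texts)
print(X.shape)              # (3, number_of_kept_terms), a sparse TF-IDF matrix
```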
2
0
1
I am using TfIdfVectorizer of sklearn for document clustering. I have 20 million texts, for which i want to compute clusters. But calculating TfIdf matrix is taking too much time and system is getting stuck. Is there any technique to deal with this problem ? is there any alternative method for this in any python module ?
python : facing memory issue in document clustering using sklearn
1.2
0
0
261
40,406,598
2016-11-03T16:23:00.000
0
0
0
0
oracle11g,cursor,python-3.5,cx-oracle
40,425,912
1
false
0
0
They both reference the same underlying concepts and methods. What you can do in the one you should be able to do in the other. There are limitations, of course, but these are due to the differences in the languages being used in each case. If you have a specific question, please update your question accordingly!
1
0
0
Is there any difference between cursor in cx_oracle api which is used in python and cursor of plsql in database? If there is a difference please elaborate on it. I am using python 3.5 and oracle 11g database and eclipse ide to use api and connect. Thanks in advance,
Difference between cx_oracle.cursor in python and cursor in database
0
1
0
95
40,406,618
2016-11-03T16:24:00.000
0
0
0
0
python,string,unicode,encoding
40,503,197
1
true
1
0
I added reload(sys) and sys.setdefaultencoding('UTF8') to my code, then removed the str() call I was applying to the string before writing it to the output file, and now it works fine. Thanks!
1
1
0
I wrote a Python script running a SQL query and creating an external file from the output. It works well on my computer but when I try to run the exact same script on another computer the output file is different. In mine the content of the content of the output file looks like this : FR, DE, CA and with the other computer it looks like this: b'FR', b'DE', b'CA' There is this b'' around the string and I don't know what I should configure in the 2nd computer to remove that. Both computers are using Python 2.7.11. I noticed the b'' thing appears in the 2nd computer after I use the function: smart_str from django.utils.encoding Before I pass the string to the output file I do: str(x) but the b'' is not removed. Thanks in advance for your help!
Python 2.7 - b'' appears in front of a string in output file
1.2
0
0
358
40,407,263
2016-11-03T16:59:00.000
0
0
0
1
python,r,docker,rpy2
41,900,410
1
false
1
0
I finally resolved this problem myself. It turned out to be a very Python-script-specific problem: in the R command called from Python, I just needed to change the TBATS and BATS function calls. (This is a very specific issue that only shows up if you work with the R time-series library.)
1
0
0
First, I ran R models in windows system using rpy2 python interface. It's running fine. Then, I migrate it to linux environment using docker. Now I'm executing same code with Docker run command, facing "rpy2.rinterface.RRuntimeWarning:port 11752 cannot be opened ". Note: my application running four R models using rpy2. That means it's create four robjects. So I think same time they are using same port. However, I'm not sure. Help in this issue really appreciable. Thanks in advance.
Running R Models using rpy2 interface on Docker. I am facing issue related to opening the port
0
0
0
59
40,407,475
2016-11-03T17:10:00.000
0
0
1
0
python,functional-programming,cython
40,409,299
1
true
0
0
Based upon the feedback in the comments section, using higher-level functional programming tools such as reduce or groupby from Cython will incur a performance loss. While these higher-level functions do work within Cython modules, the resulting calls back into the compiled Python libraries negate much of the speed gain.
1
1
0
I just finished reading Kurt Smith's excellent book on Cython but I was left with one question. Can I used functional programming tools from python 3, like reduce or groupby etc., inside of a Cython function? I was not clear if using these higher level functions would impose additional overhead in Cython or if I needed to provide some special type declaration for the functions.
Can I use python 3 functional programming tools from `functools` or `itertools` with `Cython`
1.2
0
0
194
40,410,975
2016-11-03T20:48:00.000
3
0
0
0
python-3.x,apache-spark,amazon-ec2
40,413,120
1
true
0
0
This question boils down to the value of managed services, IMHO. Running Spark standalone in local mode only requires that you get the latest Spark, untar it, cd to its bin path and then run spark-submit, etc. However, creating a multi-node cluster that runs in cluster mode requires that you actually do real networking, configuration, tuning, etc. This means you've got to deal with IAM roles and Security Groups, and there are subnet considerations within your VPC. When you use EMR, you get a turnkey cluster in which you can 1-click install many popular applications (Spark included), and all of the Security Groups are already configured properly for network communication between nodes; you've got logging already set up and pointing at S3; you've got easy SSH instructions; you've got an already-installed apparatus for tunneling and viewing the various UIs; you've got visual usage metrics at the IO, node, and job-submission levels; and you also have the ability to create and run Steps -- jobs that can be run on the command line of the driver node or as Spark applications that leverage the whole cluster. Then, on top of that, you can export that whole cluster, Steps included, copy-paste the CLI script into a recurring job via DataPipeline, and literally create an ETL pipeline in 60 seconds flat. You wouldn't get any of that if you built it yourself on EC2. I know which one I would choose... EMR. But that's just me.
1
3
1
I know this question has been asked before but those answers seem to revolve around Hadoop. For Spark you don't really need all the extra Hadoop cruft. With the spark-ec2 script (available via GitHub for 2.0) your environment is prepared for Spark. Are there any compelling use cases (other than a far superior boto3 sdk interface) for running with EMR over EC2?
Does EMR still have any advantages over EC2 for Spark?
1.2
0
1
129
40,411,357
2016-11-03T21:14:00.000
0
0
0
0
python,weka
40,643,572
2
false
0
0
Not sure about Python, but in the GUI version you can use SpreadSubsample to reduce the class imbalance. If you feel that 'bad' is a good representation of the class, then you could experiment with different numbers of instances of 'good'. To do this you need to select Filter ==> Supervised ==> Instance ==> SpreadSubsample ==> change the number of instances using 'max count'.
2
1
1
I have an imbalanced training data and i am using logistic regression in weka to classify. There are two classes good and bad. Good has 75000 instances and bad 3000. My test data has 10000 good data. When i train it is more inclined to good data i.e it classifies almost all bad instances good. What should i do ? I tried to have 10000 good instances in training data instead of 75000 but still the problem is same.
How to classify imbalanced data in weka?
0
0
0
1,258
40,411,357
2016-11-03T21:14:00.000
0
0
0
0
python,weka
40,837,930
2
false
0
0
There are a couple of things that you could try. Use Boosting (AdaBoostM1) so that the misclassified instances will be given extra weight. Use weka.classifiers.meta.CostSensitiveClassifier and give the "bad" instances a higher weight than the "good" instances. Note: This will probably reduce your overall accuracy, but make your classifier do a better job of identifying the "bad" instances.
2
1
1
I have an imbalanced training data and i am using logistic regression in weka to classify. There are two classes good and bad. Good has 75000 instances and bad 3000. My test data has 10000 good data. When i train it is more inclined to good data i.e it classifies almost all bad instances good. What should i do ? I tried to have 10000 good instances in training data instead of 75000 but still the problem is same.
How to classify imbalanced data in weka?
0
0
0
1,258
40,411,374
2016-11-03T21:15:00.000
0
0
0
0
python,python-3.x,numpy
40,411,415
1
true
0
0
NumPy arrays only have the argmin() attribute, but no nanargmin() attribute. So A.nanargmin() does not exist. You can use numpy.argmin(A) and numpy.nanargmin(A) instead.
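A short sketch of combining numpy.nanargmin with unravel_index, as the answer suggests (toy array):

```python
import numpy as np

A = np.array([[np.nan, 3.0],
              [2.0,    5.0]])

flat_idx = np.nanargmin(A)                  # 2, the flattened index, ignoring the NaN
print(np.unravel_index(flat_idx, A.shape))  # (1, 0), the 2-D index of the minimum
```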
1
1
1
I am trying to obtain the argmin of a numpy 2 dimensional array A which has nan values. Now the problem is: numpy.nanargmin(A) returns only one index. numpy.unravel_index(A.argmin(), A.shape) returns [0,0] because it has nan values. And... numpy.unravel_index(A.nanargmin(), A.shape) throws the error: AttributeError Traceback (most recent call last) in () ----> 1 np.unravel_index(dist.nanargmin(), dist.shape) AttributeError: 'numpy.ndarray' object has no attribute 'nanargmin'
Using numpy.nanargmin() in 2 dimensional matrix
1.2
0
0
471
40,416,048
2016-11-04T05:46:00.000
3
0
0
0
python,selenium,web-scraping,properties,attributes
40,416,749
5
false
0
0
.text will return an empty string if the text is not present in the viewport, so you can scroll the element into the viewport and then try .text; it should retrieve the value. By contrast, innerHTML can get the value even if the element is outside the viewport.
2
11
0
Whats the difference between getting text and innerHTML when using selenium. Even though we have text under particular element, when we perform .text we get empty values. But doing .get_attribute("innerHTML") works fine. Can someone point out the difference between two? When someone should use '.get_attribute("innerHTML")' over .text?
Difference between text and innerHTML using Selenium
0.119427
0
1
13,511
40,416,048
2016-11-04T05:46:00.000
3
0
0
0
python,selenium,web-scraping,properties,attributes
40,416,415
5
false
0
0
For instance, given <div><span>Example Text</span></div>: .get_attribute("innerHTML") gives you the actual HTML inside the current element, so theDivElement.get_attribute("innerHTML") returns "<span>Example Text</span>". .text gives you only the text, without the HTML nodes, so theDivElement.text returns "Example Text". Please note that the algorithm for .text depends on the webdriver of each browser. In some cases, such as when the element is hidden, you might get different text with different webdrivers. I usually get the text from .get_attribute("innerText") instead of .text so I can handle all the cases.
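Not from the original answer: a hypothetical Selenium sketch comparing the three calls. The URL, the element id "box", and the page content are made up, and it assumes chromedriver plus the older find_element_by_id API available at the time.

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.org/some-page")      # placeholder URL with the div above

element = driver.find_element_by_id("box")       # hypothetical id on the <div>
print(element.text)                              # 'Example Text'
print(element.get_attribute("innerHTML"))        # '<span>Example Text</span>'
print(element.get_attribute("innerText"))        # 'Example Text', even if scrolled out of view

driver.quit()
```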
2
11
0
Whats the difference between getting text and innerHTML when using selenium. Even though we have text under particular element, when we perform .text we get empty values. But doing .get_attribute("innerHTML") works fine. Can someone point out the difference between two? When someone should use '.get_attribute("innerHTML")' over .text?
Difference between text and innerHTML using Selenium
0.119427
0
1
13,511
40,423,769
2016-11-04T13:21:00.000
0
0
0
0
php,python,wordpress,web-applications
40,452,079
1
true
1
0
After consulting with someone who's developed with Wordpress before, he recommended to build a plugin. And since I have no experience with Wordpress, he helped me build it. It was literally 3 lines of PHP. Thank you all.
1
0
0
I was tasked to create a file upload workflow that integrates with Wordpress. I created a backend that is called via REST that does a lot of custom workflows. Thus, I cannot use the current plugins. It is a single page application that accepts a file as well as some metadata. My current dilemma: I need to integrate this web application within Wordpress and have no clue where to start.
Wordpress - Integrating custom web application
1.2
0
0
62
40,424,351
2016-11-04T13:48:00.000
0
0
1
0
biopython
40,696,400
2
true
0
0
There is no problem in downloading the 32-bit version. It will work on 64-bit Windows.
1
0
0
I need to install Biopython for my laptop, Windows 7 win 64. I have checked the website and I can only find for 32 bit. Can I install this on my laptop? The 64 bit is unofficial and I'm not willing to download it.
Biopython installation for Windows 7 64 bit
1.2
0
0
725
40,425,856
2016-11-04T15:03:00.000
1
0
0
1
javascript,python,cookies,bokeh
40,429,660
1
true
1
0
The cookies idea might work fine. There are a few other possibilities for sharing data: a database (e.g. Redis or something else that can trigger async events the app can respond to); direct communication between the apps (e.g. with ZeroMQ or similar; the Dask dashboard uses this kind of communication between remote workers and a Bokeh server); or files plus timestamp monitoring if there is a shared filesystem (not great, but sometimes workable in very simple cases). Alternatively, if you can run both apps on the same single server (even though they are separate apps), then you could probably communicate by updating some mutable object in a module that both apps import. But this would not work in a scale-out scenario with more than one Bokeh server running. For any/all of these somewhat advanced usages, a working example would make a great contribution to the docs so that others can use it to learn from.
1
0
0
I have two Bokeh apps (on Ubuntu \ Supervisor \ Nginx), one that's a dashboard containing a Google map and another that's an account search tool. I'd like to be able to click a point in the Google map (representing a customer) and have the account search tool open with info from the the point. My problem is that I don't know how to get the data from A to B in the current framework. My ideas at the moment: Have an event handler for the click and have it both save a cookie and open the account web page. Then, have some sort of js that can read the cookie and load the account. Throw my hands up, try to put both apps together and just find a way to pass it in the back end.
Transfer Data from Click Event Between Bokeh Apps
1.2
0
0
157
40,426,863
2016-11-04T15:50:00.000
1
0
0
0
python,amazon-web-services,deployment,amazon-ec2,boto3
40,433,014
2
false
1
0
It is certainly best practice to have your Amazon EC2 instances in the same VPC as the Amazon RDS database. Recommended security is: Create a Security Group for your web application EC2 instances (Web-SG) Launch your Amazon RDS instance in a private subnet in the same VPC Configure the Security Group on the RDS instance to allow incoming MySQL (3306) traffic from the Web-SG security group If your RDS instance is currently in a different VPC, you can take a snapshot and then create a new database from the snapshot. If you are using an Elastic Load Balancer, you could even put your Amazon EC2 instances in a private subnet since all access will be via the Load Balancer.
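Not from the original answer: a hedged boto3 sketch of the security-group and subnet-group pattern described above. All IDs and names are placeholders, and it assumes AWS credentials and a region are already configured.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# allow MySQL (3306) into the RDS security group only from the web tier's SG
ec2.authorize_security_group_ingress(
    GroupId="sg-rds11111",                                  # SG attached to the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-web22222"}],   # the Web-SG
    }],
)

# create the DB inside the same VPC via a subnet group of private subnets
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.t2.micro",
    MasterUsername="admin",
    MasterUserPassword="change-me",
    AllocatedStorage=20,
    DBSubnetGroupName="private-subnet-group",
    VpcSecurityGroupIds=["sg-rds11111"],
    PubliclyAccessible=False,
)
```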
1
0
0
I have web application which dynamically deployed on EC2 instances (scalable). Also I have RDS mysql instance which dynamically created by python with boto3. Now port 3306 of RDS is public, but I want to allow connection only from my EC2's from specific VPC. Can I create RDS on specific VPC (same one with EC2 instances)? What is best practice to create such set EC2 + RDS ?
Create AWS RDS on specific VPC
0.099668
1
1
214
40,428,188
2016-11-04T17:04:00.000
9
0
1
0
python,gamma-function
40,428,278
4
false
0
0
I'd use scipy.special.gamma().
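A minimal usage sketch (assumes SciPy is installed; the standard library alternative is shown too):

```python
from scipy.special import gamma
import math

print(gamma(5))      # 24.0, since Gamma(n) == (n - 1)!
print(gamma(0.5))    # 1.7724538509055159, i.e. sqrt(pi)
print(math.gamma(5)) # 24.0, the standard library version (Python 2.7+ / 3.2+)
```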
1
4
0
I'm fairly new to Python but would like to use Euler's Gamma Function in a function I am writing. I'd prefer not to write it as an integral and was wondering if there's something I can import that easily defines the gamma function. Thanks
Gamma Function in Python
1
0
0
23,451
40,428,931
2016-11-04T17:50:00.000
0
0
1
0
python,python-3.x
68,052,334
9
false
0
0
I had some problems just writing pip list in an empty cell. But once I ran it in a whole new file, I had no problems at all and got a listing of all the libraries installed for the notebook!
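For reference, besides the !pip list shell escape, here is a generic sketch (not named in the answer) for listing installed packages and their versions from inside a notebook cell using pkg_resources:

```python
import pkg_resources

# print every distribution visible to the running kernel, sorted by name
for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(dist.project_name, dist.version)
```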
1
29
0
I seem to remember there is a package that printed the versions and relevant information about Python packages used in a Jupyter notebook so the results in it were reproducible. But I cannot remember the name of the package. Can any of you point me in the right direction? Thanks in advance!
Package for listing version of packages used in a Jupyter notebook
0
0
0
42,395
40,430,960
2016-11-04T20:07:00.000
2
0
1
0
python-2.7,python-3.x,spyder,graphlab
40,952,297
1
true
0
0
The following method will solve this: open Spyder --> Tools --> Preferences --> Python interpreter --> change from default to custom and select the Python executable under the gl-env environment. Restart Spyder. It will work.
1
2
0
I can run my python file with imported functionalities from GraphLab from the Terminal (first use the source activate gl-env and then run the file). So the file and installations are alright in that sense. However, I can't figure out how to run the file directly in Spyder IDE. I only get ImportError: No module named 'graphlab'. The Spyder runs with python3.5 and I've tried to change to 2.7 as the GraphLap seems to, but it doesn't work either (I redirected to the same python2.7 'scientific_startup.py' used in GraphLab lib ). I wonder if anyone knows how to run the file directly from Spyder??
run graphlab from Spyder
1.2
0
1
391
40,431,073
2016-11-04T20:16:00.000
1
1
1
0
python,hashlib,sparse-file
40,431,152
1
false
0
0
The hashlib module doesn't even work with files directly. You have to read the data in yourself and pass blocks to the hashing object, so I have no idea why you would expect it to handle sparse files at all. The I/O layer doesn't do anything special for sparse files either; that's the OS's job: if it knows the file is sparse, the "read" operation doesn't need to do any disk I/O and just fills in your buffer with zeroes.
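A sketch of the chunked-read hashing pattern the answer describes; the file path is a placeholder.

```python
import hashlib

h = hashlib.sha256()
with open("somefile.bin", "rb") as f:                    # placeholder path
    for block in iter(lambda: f.read(1 << 20), b""):     # read in 1 MiB blocks
        h.update(block)                                   # feed each block to the hash
print(h.hexdigest())
```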
1
0
0
I wanted to know how does python hashlib library treat sparse files. If the file has a lot of zero blocks then instead of wasting CPU and memory on reading zero blocks does it do any optimization like scanning the inode block map and reading only allocated blocks to compute the hash? If it does not do it already, what would be the best way to do it myself. PS: Not sure it would be appropriate to post this question in StackOverflow Meta. Thanks.
Python hashlib and sparse files
0.197375
0
0
96
40,432,366
2016-11-04T22:05:00.000
2
0
1
0
python,python-3.x
40,432,387
2
false
0
0
Of course they can have arguments. The only difference is whether they have side effects beyond the input and output parameters. Without input arguments to use as "inspiration", it's difficult for a pure function to do something useful.
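A tiny illustration of the distinction; the conversion function is a hypothetical example.

```python
def convert(n):
    """Pure: the result depends only on n, and nothing outside is touched."""
    return n * 2.54

log = []

def convert_and_log(n):
    """Not pure: it mutates external state (the log list) as a side effect."""
    log.append(n)
    return n * 2.54
```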
1
0
0
Can pure functions take an argument? For example, def convert(n): Thank you in advance
What's the difference between non-pure and pure functions?
0.197375
0
0
742
40,432,499
2016-11-04T22:20:00.000
-1
0
1
1
python
40,432,570
1
false
0
0
Essentially, system calls interact with the underlying system services (that is, the kernel on Linux). C functions, on the other hand, run exclusively in user space. In that sense a system call is more "special".
1
1
0
It is written in a documentation: Such extension modules can do two things that can’t be done directly in Python: they can implement new built-in object types, and they can call C library functions and system calls. Syscalls I cannot see why "system calls" are special here. I know what it is syscall. I didn't see why it is special and why it cannot be done directly in Python. Especially, we can use open in Python to open a file. It must be a underlying syscall to get descriptor for file ( in Unix systems). It was just open. Besides that we can use: call(["ls", "-l"]) and it also must use syscall like execve or something like that. Functions Why is calling C library function is special? After all: ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
Extension for Python
-0.197375
0
0
38
40,433,243
2016-11-04T23:45:00.000
0
1
0
1
python,ssh,flask,digital-ocean
40,433,397
3
false
1
0
You needn't keep the console open; the app will still be running after you close the console on your computer. But you may want to set up a log to monitor it.
1
1
0
I created a droplet that runs a flask application. My question is when I ssh into the droplet and restart the apache2 server, do I have to keep the console open all the time (that is I should not shut down my computer) for the application to be live? What if I have a dynamic application that runs scripts in the background, do I have to keep the console open all the time for the dynamic parts to work? P.S: there's a similar question in SO about a NodeJs app but some parts of the answer they provided are irrelevant to my Flask app.
Running flask app on DigitalOcean: should I keep the ssh console open all the time?
0
0
0
282
40,433,906
2016-11-05T01:26:00.000
2
0
0
0
python,pandas,numpy
40,433,924
1
false
0
0
I would recommend not building from source on Windows unless you really know what you're doing. Also, don't mix conda and pip for numpy; numpy is treated specially in conda and really should work out of the box. If you get an error on import pandas there's likely something wrong with your PATH or PYTHONPATH. I suggest that you just create an empty conda env, and install only pandas in it. That will pull in numpy. If that somehow does not work, let's see if we can help you debug that.
1
0
1
I was trying to use pandas (installed the binaries and dependencies using conda, then using pip, then built then using no-binaries option); still getting error. Numpy is available (1.11.2). I understand some interface is not provided by numpy anymore. Python version I am using is 2.7.11. List of packages installed are bellow. Error message: C:.....Miniconda2\lib\site-packages\numpy\core__init__.py:14: Warning: Numpy built with MINGW-W64 on Windows 64 bits is experimental, and only available for testing. You are advised not to use it for production. CRASHES ARE TO BE EXPECTED - PLEASE REPORT THEM TO NUMPY DEVELOPERS from . import multiarray Traceback (most recent call last): File "io.py", line 2, in from data import support File "....\support.py", line 3, in import pandas File "....Miniconda2\lib\site-packages\pandas__init__.py", line 18, in raise ImportError("Missing required dependencies {0}".format(missing_dependencies)) ImportError: Missing required dependencies ['numpy']
Numpy fails to serve as a dependency for pandas
0.379949
0
0
1,990
40,438,629
2016-11-05T13:10:00.000
0
0
0
0
android,python
40,438,672
2
false
0
0
Your problem is definitely not Android-related. You simply need to educate yourself about networking. Yes, it will cost you some money - you would spend it buying a few books and some hardware for building a home network. After about 3-6-12 months of playing with your home network you will find your question rather simple to answer.
1
0
0
I have a second laptop running kali linux which is not used at all, meaning it can be running anytime as a server for my application. So what I actually want to do is connect from my application to my server and send some data, on the server run a python program that uses this code and return some data back. I never tried to work with servers, can I even turn my computer into a server for my application? does this cost any money? can I run a python code on the server and return the results? I know I haven't published any code but I actually don't know how to start this project and I can use some help so can someone refer me to something to start with? Thanks..
Using a server in android to run code
0
0
1
259
40,442,264
2016-11-05T19:23:00.000
1
0
1
0
python
40,442,315
5
false
0
0
The # marks the rest of that line as a comment, while whatever is in between the two """ quotes can span several lines, so you can write comments over multiple lines.
2
5
0
Starting to program in Python, I see some scripts with comments using # and """ comments """. What is the difference between these two ways to comment?
Difference between comments in Python, # and """
0.039979
0
0
3,797
40,442,264
2016-11-05T19:23:00.000
1
0
1
0
python
40,442,318
5
false
0
0
As the user in a previous answer stated, the triple quotes are used to comment multiple lines of code while the # only comments one line. Look out though, because you can use the triple quotes for docstrings and such.
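A tiny illustration of the two styles and the docstring use the answer mentions (the function is a made-up example):

```python
# a single-line comment introduced by the hash sign

"""
A triple-quoted string used as a block "comment".
Note it is really a string literal; placed right after a def it becomes the docstring.
"""

def area(r):
    """Return the area of a circle of radius r (this is a docstring)."""
    return 3.14159 * r * r
```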
2
5
0
Starting to program in Python, I see some scripts with comments using # and """ comments """. What is the difference between these two ways to comment?
Difference between comments in Python, # and """
0.039979
0
0
3,797
40,442,688
2016-11-05T20:07:00.000
4
1
1
0
python
40,442,773
1
true
0
0
When you import a C extension, Python uses the platform's shared library loader to load the library and then, as you say, jumps to a function in the library. But you can't load just any library or jump to any function this way. It only works for libraries specifically implemented to support Python, and only for functions the library exports as Python objects. The library must understand Python objects and use those objects to communicate. Alternatively, instead of importing, you can use a foreign-function library like ctypes to load the library and convert your data to the C view of the data in order to make calls.
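Not from the original answer: a hedged ctypes sketch illustrating the foreign-function route mentioned at the end. It assumes a Linux system where the C library is available as libc.so.6.

```python
import ctypes

libc = ctypes.CDLL("libc.so.6")                     # load an ordinary shared library
libc.printf(b"pi is roughly %f\n", ctypes.c_double(3.14159))

# argument/return types can be declared so ctypes converts Python values correctly
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"hello"))                        # 5
```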
1
1
0
In Python ( CPython) we can import module: import module and module can be just *.py file ( with a python code) or module can be a module written in C/C++ ( be extending python). So, a such module is just compiled object file ( like *.so/*.o on the Unix). I would like to know how is it executed by the interpreter exactly. I think that python module is compiled to a bytecode and then it will be interpreted. In the case of C/C++ module functions from a such module are just executed. So, jump to the address and start execution. Please correct me if I am wrong/ Please say more.
C/C++ module vs python module.
1.2
0
0
191
40,442,917
2016-11-05T20:32:00.000
0
0
0
0
python,usb
40,442,964
2
false
0
0
Python is way too high-level for this problem; this behavior would require you to rewrite the USB driver of your OS.
1
1
0
I want to make a simple python program, which controls my laptop's usb hubs. Nothing extra, just put first usb port's DATA+ channel into HIGH (aka 5V) or LOW (aka 0 V) state.
Python - Low Level USB Port Control
0
0
1
651
40,446,650
2016-11-06T06:30:00.000
0
0
1
0
python
40,446,697
1
true
0
0
It's simple: foo.bar() does the same thing as foo.__class__.bar(foo), so it is still a function and the argument is passed to it, but the function is stored attached to the object via its class (type), so to speak. The foo.bar() notation is just shorthand for the above. The advantage is that different functions of the same name can be attached to many objects, depending on the object type. So the caller of foo.bar() is calling whatever function is attached to the object under the name "bar". This is called polymorphism and can be used for all sorts of things, such as generic programming. Such functions are called methods. The style is called object orientation, although object orientation as well as generic programming can also be achieved using more familiar-looking function (method) call notation (e.g. multimethods in Common Lisp and Julia, or type classes in Haskell).
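A short sketch of the equivalence described above, using a hypothetical class:

```python
class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
print(g.hello())             # 'hi' - the usual method-call notation
print(Greeter.hello(g))      # 'hi' - the same function, called explicitly with g as the argument
print(g.__class__.hello(g))  # 'hi' - equivalent lookup via the instance's class
```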
1
0
1
Beginner here. Shouldnt the required variables be passed as arguments to the function. Why is it variable.function() in python?
Why is it dataframe.head() in python and head(dataframe) in R? Why is python like this in general?
1.2
0
0
139
40,449,935
2016-11-06T13:38:00.000
5
0
0
0
python,pyramid
40,450,003
1
false
1
0
Yes, all of those frameworks are simply running Python code to handle requests. Within limits you can use external libraries just fine. The limits are usually dictated by the WSGI server and the nature of HTTP requests; if your library changes the event model (like gevent), relies heavily on changing interpreter state (global state, localization), or takes a long, long time to produce results, then you may need to do more work to integrate it.
1
1
0
Can I use any external libraries that are developed for python on Pyramid? I mean is it the 'normal python' to which I can import external libraries as I do with the standard python downloaded from python.org What is the situation for Django and Flask and Bottle? My intention is to create backend for a mobile app. I want to do it specifically in Python because I need to learn python. The app is a native android app. Therefore the there is no need to use response with nice html code. I just want Django/Flask/Pyramid to direct http request to relevant python functions. Everything else including user auth, database is handled by my code I write. Is there a better more simpler way to map http request/responses with the relevant functions without using these 3 platforms? In case I use one of these can I still use my own libraries?
Use external python libraries on Pyramid
0.761594
0
0
107
40,451,372
2016-11-06T15:59:00.000
2
0
0
0
python,django,rest
40,451,675
2
true
1
0
No, it is not possible, my friend, because this is not specific to Django or any other web framework; it is how HTTP works and you can't change that. Every HTTP request has exactly one HTTP response.
1
0
0
I am just working on an app, which sends a request to the server asking it to report the details of a device every 10 seconds for the next 60 seconds. I am using Django framework as the backend server. Can we send multiple responses to a single request from the app? If yes, Can you point me in the right direction.
Can we send multiple responses in intervals to a single request, Using Django as server?
1.2
0
0
2,762
40,452,529
2016-11-06T17:50:00.000
3
0
0
0
python,django
40,452,589
3
false
1
0
"meaning it's really static" - then use nginx to serve the static files; do not use Django. You can set up a project structure later, when it is actually required.
2
1
0
It is a little oxymoron now that I am making a small Django project that it is hard to decide how to structure such project. Before I will at least will have 10 to 100 apps per project. Now my project is just a website that presents information about a company with no use database, meaning it's really static, with only 10 to 20 pages. Now how do you start, do you create an app for such project.
How to structure a very small Django Project?
0.197375
0
0
318
40,452,529
2016-11-06T17:50:00.000
1
0
0
0
python,django
40,452,810
3
false
1
0
Frankly, I wouldn't use Django in that case; I would use Flask for such small projects. It's easy to learn and makes it simple to set up a small website. PS: I use Flask in both small and large apps!
2
1
0
It is a little oxymoron now that I am making a small Django project that it is hard to decide how to structure such project. Before I will at least will have 10 to 100 apps per project. Now my project is just a website that presents information about a company with no use database, meaning it's really static, with only 10 to 20 pages. Now how do you start, do you create an app for such project.
How to structure a very small Django Project?
0.066568
0
0
318
40,452,550
2016-11-06T17:52:00.000
0
0
1
0
python
40,452,773
2
false
0
0
Cron is a much better solution as it becomes responsible for starting and stopping the script. Your other choice is to have your script working 24 hours a day plus some mechanism to re-start it if it locks up and start at reboot, etc. Cron is way simpler and more dependable.
2
0
0
I have a script which I desire to run once a day, now i can achieve this with simpletime.sleep(60*60*24), then usingnohup python ... Now i am not sure, what constraints time.sleep function would have on CPU ? Other approach would using cron job ?
cpu constraints running python script once a day
0
0
0
59
40,452,550
2016-11-06T17:52:00.000
1
0
1
0
python
40,452,784
2
false
0
0
"sleep" has no impact on the cpu. But cron job is a better approach for many reasons : if your computer restarts, you don't have to relaunch the script manually a long life process will more likely reach a border case making it crash (such as memory leak) while sleeping, process is still consuming resources, especially RAM, but also file descriptors
2
0
0
I have a script which I desire to run once a day, now i can achieve this with simpletime.sleep(60*60*24), then usingnohup python ... Now i am not sure, what constraints time.sleep function would have on CPU ? Other approach would using cron job ?
cpu constraints running python script once a day
0.099668
0
0
59
40,452,958
2016-11-06T18:28:00.000
13
0
1
0
python,spyder
40,533,922
2
false
0
0
You need to go to the Projects menu, and select the option called New Project. Then you need to select the option called Existing directory, choose the directory where your project is located at in the Location field and finally press Create. That will automatically show your project in the Project Explorer.
1
4
0
My Spyder Project Explorer normally shows a project and its files, but I recently deleted a project and made the project explorer space empty. I am trying to make the Project Explorer show my existing projects or a new project, but when i try to do that it gives me the error "Project not a Spyder project". How can i restore my Spyder project explorer to show existing or new projects?
How to make projects show in Python Spyder Project Explorer
1
0
0
18,738
40,454,897
2016-11-06T21:47:00.000
1
0
0
0
python,algorithm,path-finding
40,458,170
4
false
0
1
For this problem, simply doing a breadth-first search is enough (Dijkstra and BFS work the same way for unweighted graphs). To ensure that only the chess knight's moves are used, you'll have to define the moves properly. Notice that a chess knight moves two squares in any direction, then one square perpendicular to that. This means it can move two squares left or right then one square up or down, or two squares up or down then one square left or right. The calculation will be much easier if you identify the cells by rows (0 - 7) and columns (0 - 7) instead of 0 - 63. This can be done easily by dividing the cell index by 8 and using the quotient and remainder as row and column indices. So, if the knight is at position (x, y) now, its next possible positions can be any of (x - 2, y - 1), (x - 2, y + 1), (x + 2, y - 1), (x + 2, y + 1), (x - 1, y - 2), (x - 1, y + 2), (x + 1, y - 2), (x + 1, y + 2). Be careful: not all of these 8 cells will necessarily be inside the grid, so discard the locations that fall outside the board.
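Not part of the original answer: a minimal BFS sketch of the approach described above, using the 0-63 square numbering from the question.

```python
from collections import deque

def answer(src, dest):
    moves = [(-2, -1), (-2, 1), (2, -1), (2, 1), (-1, -2), (-1, 2), (1, -2), (1, 2)]
    start, goal = divmod(src, 8), divmod(dest, 8)   # (row, col) from the flat index
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 8 and 0 <= nc < 8 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))

print(answer(0, 1))   # 3 moves from square 0 to square 1
```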
3
1
0
I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you. Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below: ------------------------- | 0| 1| 2| 3| 4| 5| 6| 7| ------------------------- | 8| 9|10|11|12|13|14|15| ------------------------- |16|17|18|19|20|21|22|23| ------------------------- |24|25|26|27|28|29|30|31| ------------------------- |32|33|34|35|36|37|38|39| ------------------------- |40|41|42|43|44|45|46|47| ------------------------- |48|49|50|51|52|53|54|55| ------------------------- |56|57|58|59|60|61|62|63| -------------------------
Simple algorithm to move from one tile to another using only a chess knight's moves
0.049958
0
0
1,507
40,454,897
2016-11-06T21:47:00.000
2
0
0
0
python,algorithm,path-finding
40,458,703
4
true
0
1
Approach the problem in the following way: Step 1: Construct a graph where each square of the chess board is a vertex. Step 2: Place an edge between two vertices exactly when there is a single knight move from one square to the other. Step 3: Apply Dijkstra's algorithm, which finds the length of the shortest path between two vertices (squares).
3
1
0
I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you. Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below: ------------------------- | 0| 1| 2| 3| 4| 5| 6| 7| ------------------------- | 8| 9|10|11|12|13|14|15| ------------------------- |16|17|18|19|20|21|22|23| ------------------------- |24|25|26|27|28|29|30|31| ------------------------- |32|33|34|35|36|37|38|39| ------------------------- |40|41|42|43|44|45|46|47| ------------------------- |48|49|50|51|52|53|54|55| ------------------------- |56|57|58|59|60|61|62|63| -------------------------
Simple algorithm to move from one tile to another using only a chess knight's moves
1.2
0
0
1,507
40,454,897
2016-11-06T21:47:00.000
1
0
0
0
python,algorithm,path-finding
41,089,879
4
false
0
1
While User_Targaryen's answer is the best because it directly answers your question, I would recommend an algebraic solution if your goal is to deliver the answer in the shortest amount of computing time. To shorten the algorithm, use reflections about the x, y, and xy axes so as to consider only positive (x, y) with x >= y, and place the starting square at the origin, coordinate (0, 0). This is one octant (one eighth) of the possible directions. A hint for discovering the solution is to use graph paper, or Dijkstra's algorithm restricted to reaching all points in the first octant within 5 moves, and display this as a grid. Each cell of the grid should be labeled with a digit representing the minimum number of moves. Let me know if you would like to broaden your question and would like additional information.
3
1
0
I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you. Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below: ------------------------- | 0| 1| 2| 3| 4| 5| 6| 7| ------------------------- | 8| 9|10|11|12|13|14|15| ------------------------- |16|17|18|19|20|21|22|23| ------------------------- |24|25|26|27|28|29|30|31| ------------------------- |32|33|34|35|36|37|38|39| ------------------------- |40|41|42|43|44|45|46|47| ------------------------- |48|49|50|51|52|53|54|55| ------------------------- |56|57|58|59|60|61|62|63| -------------------------
Simple algorithm to move from one tile to another using only a chess knight's moves
0.049958
0
0
1,507
40,456,337
2016-11-07T00:55:00.000
0
0
0
0
python
60,907,357
2
false
1
0
You have to sign in and associate your foobar account with your Gmail account; then you should be able to request a new challenge.
2
1
0
I am doing a challenge for Google FooBar, and am having trouble submitting my code. My code is correct, and I have checked my program output against the answers provided by Google, and my output is correct. However, when I try and submit, I get a Error 403: Permission denied message. I cannot submit feedback either because I receive the same error message. Does any one have any advice?
foobar Google - Error 403 permission denied - programming challenge
0
0
1
604
40,456,337
2016-11-07T00:55:00.000
0
0
0
0
python
44,912,746
2
false
1
0
I also faced the same issue. You can solve it by closing the current foobar session and opening a new one in another tab. This solved the problem for me.
2
1
0
I am doing a challenge for Google FooBar, and am having trouble submitting my code. My code is correct, and I have checked my program output against the answers provided by Google, and my output is correct. However, when I try and submit, I get a Error 403: Permission denied message. I cannot submit feedback either because I receive the same error message. Does any one have any advice?
foobar Google - Error 403 permission denied - programming challenge
0
0
1
604
40,457,331
2016-11-07T03:20:00.000
1
0
0
0
python,information-retrieval,information-extraction
40,603,239
2
false
0
0
Evaluation has two essential parts. The first is a test resource with a ranking of documents, or a relevance tag (relevant or not relevant) for each document, for specific queries; this is built either from an experiment (like user clicks, mostly used when you already have a running IR system) or through crowd-sourcing. The second essential part is the formula you use to evaluate the IR system against that test collection. So based on what you said, if you don't have a labeled test collection, you can't evaluate your system.
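Not from the original answer: a small illustrative sketch of precision, recall, and average precision, assuming you do have binary relevance labels for one query's ranked results. The labels below are toy values.

```python
from __future__ import division  # so the example also behaves the same on Python 2

def precision_recall(ranked_relevance, total_relevant):
    retrieved_relevant = sum(ranked_relevance)
    precision = retrieved_relevant / len(ranked_relevance)
    recall = retrieved_relevant / total_relevant
    return precision, recall

def average_precision(ranked_relevance, total_relevant):
    score, hits = 0.0, 0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            score += hits / i          # precision at each relevant rank
    return score / total_relevant

ranking = [1, 0, 1, 1, 0]              # toy relevance labels for the top-5 results
print(precision_recall(ranking, total_relevant=4))   # (0.6, 0.75)
print(average_precision(ranking, total_relevant=4))  # (1/1 + 2/3 + 3/4) / 4 = 0.604...
```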
1
3
1
i wrote one program to do the information retrieval and extraction. user enter the query in the search bar, the program can show the relevant txt result such as the relevant sentence and the article which consists the sentence. I did some research for how to evaluate the result. I might need to calculate the precision, recall, AP, MAP.... However, I am new to that. How to calculate the result. Since my dataset is not labeled and i did not do the classification. The dataset I used was the article from BBC news. there were 200 articles. i named it as 001.txt, 002.txt ...... 200.txt It would be good if u have any ideas how to do the evaluation in python. Thanks.
information retrieval evaluation python precision, recall, f score, AP,MAP
0.099668
0
0
6,407