Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,963,083 | 2017-12-24T18:22:00.000 | 0 | 0 | 1 | 0 | python,pandas,installation,anaconda,data-science | 50,020,866 | 4 | false | 0 | 0 | I have tried the options in the answers above. I think there was some problem with the Anaconda Python 3.x versions. I used 2.7 and it worked fine for me. | 2 | 0 | 0 | Solving environment: failed
UnsatisfiableError: The following specifications were found to be in
conflict:
- wordcloud
- xlsxwriter Use "conda info <package>" to see the dependencies for each package.
Initially I got a conflict between Python 3.4 and Python 3.6, so I updated Python to 3.6, but I am still getting the above error. Please help. | Can't install wordcloud in anaconda | 0 | 0 | 0 | 1,411 |
47,963,083 | 2017-12-24T18:22:00.000 | -1 | 0 | 1 | 0 | python,pandas,installation,anaconda,data-science | 49,322,558 | 4 | false | 0 | 0 | In the console:
conda install -c conda-forge wordcloud
First activate your Python environment! | 2 | 0 | 0 | Solving environment: failed
UnsatisfiableError: The following specifications were found to be in
conflict:
- wordcloud
- xlsxwriter Use "conda info <package>" to see the dependencies for each package.
Initially I got a conflict between Python 3.4 and Python 3.6, so I updated Python to 3.6, but I am still getting the above error. Please help. | Can't install wordcloud in anaconda | -0.049958 | 0 | 0 | 1,411 |
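A quick way to confirm that the conda-forge install from the answer above actually worked is a minimal import smoke test; this is only a sketch, and the sample text and output filename are illustrative assumptions:

```python
# Minimal smoke test after `conda install -c conda-forge wordcloud`.
from wordcloud import WordCloud

# Generate a tiny word cloud and save it to disk; if this runs,
# the package is importable and functional in this environment.
wc = WordCloud(width=400, height=200).generate("conda forge wordcloud install test")
wc.to_file("test_cloud.png")
```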
47,964,765 | 2017-12-24T23:52:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,machine-learning,deep-learning | 48,297,553 | 1 | false | 0 | 0 | Are the cloud and local machines running the same Python/TensorFlow versions? Sometimes checkpoints produced by a specific TensorFlow version are not backward compatible due to internal variable renaming. | 1 | 0 | 1 | I created a custom object detector on a Paperspace cloud desktop, then tried it in a Jupyter Notebook, and it worked.
Now, I've uploaded the whole models-master folder and downloaded it onto my local machine.
I ran it using Jupyter Notebook, and it now gives an InvalidArgumentError. I've tried re-exporting the inference graph on my local machine using the same ckpt that was trained on the cloud, but it is still not working.
InvalidArgumentError Traceback (most recent call
last)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in _do_call(self, fn, *args) 1322 try:
-> 1323 return fn(*args) 1324 except errors.OpError as e:
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in _run_fn(session, feed_dict, fetch_list, target_list, options,
run_metadata) 1301 feed_dict,
fetch_list, target_list,
-> 1302 status, run_metadata) 1303
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py
in __exit__(self, type_arg, value_arg, traceback_arg)
472 compat.as_text(c_api.TF_Message(self.status.status)),
--> 473 c_api.TF_GetCode(self.status.status))
474 # Delete the underlying status object from memory otherwise it stays alive
InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"].
(Check whether your GraphDef-interpreting binary is up to date with
your GraphDef-generating binary.). [[Node:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]]
During handling of the above exception, another exception occurred:
InvalidArgumentError Traceback (most recent call
last) in ()
20 (boxes, scores, classes, num) = sess.run(
21 [detection_boxes, detection_scores, detection_classes, num_detections],
---> 22 feed_dict={image_tensor: image_np_expanded})
23 # Visualization of the results of a detection.
24 vis_util.visualize_boxes_and_labels_on_image_array(
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in run(self, fetches, feed_dict, options, run_metadata)
887 try:
888 result = self._run(None, fetches, feed_dict, options_ptr,
--> 889 run_metadata_ptr)
890 if run_metadata:
891 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in _run(self, handle, fetches, feed_dict, options, run_metadata)
1118 if final_fetches or final_targets or (handle and
feed_dict_tensor): 1119 results = self._do_run(handle,
final_targets, final_fetches,
-> 1120 feed_dict_tensor, options, run_metadata) 1121 else: 1122 results = []
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in _do_run(self, handle, target_list, fetch_list, feed_dict, options,
run_metadata) 1315 if handle is None: 1316 return
self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1317 options, run_metadata) 1318 else: 1319 return self._do_call(_prun_fn, self._session,
handle, feeds, fetches)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py
in _do_call(self, fn, *args) 1334 except KeyError: 1335
pass
-> 1336 raise type(e)(node_def, op, message) 1337 1338 def _extend_graph(self):
InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"].
(Check whether your GraphDef-interpreting binary is up to date with
your GraphDef-generating binary.). [[Node:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]]
Caused by op
'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where',
defined at: File "/usr/lib/python3.5/runpy.py", line 184, in
_run_module_as_main
"main", mod_spec) File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals) File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel_launcher.py",
line 16, in
app.launch_new_instance() File "/home/ryan/.local/lib/python3.5/site-packages/traitlets/config/application.py",
line 658, in launch_instance
app.start() File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel/kernelapp.py",
line 477, in start
ioloop.IOLoop.instance().start() File "/home/ryan/.local/lib/python3.5/site-packages/zmq/eventloop/ioloop.py",
line 177, in start
super(ZMQIOLoop, self).start() File "/home/ryan/.local/lib/python3.5/site-packages/tornado/ioloop.py",
line 888, in start
handler_func(fd_obj, events) File "/home/ryan/.local/lib/python3.5/site-packages/tornado/stack_context.py",
line 277, in null_wrapper
return fn(*args, **kwargs) File "/home/ryan/.local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py",
line 440, in _handle_events
self._handle_recv() File "/home/ryan/.local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py",
line 472, in _handle_recv
self._run_callback(callback, msg) File "/home/ryan/.local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py",
line 414, in _run_callback
callback(*args, **kwargs) File "/home/ryan/.local/lib/python3.5/site-packages/tornado/stack_context.py",
line 277, in null_wrapper
return fn(*args, **kwargs) File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel/kernelbase.py",
line 283, in dispatcher
return self.dispatch_shell(stream, msg) File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel/kernelbase.py",
line 235, in dispatch_shell
handler(stream, idents, msg) File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel/kernelbase.py",
line 399, in execute_request
user_expressions, allow_stdin) File "/home/ryan/.local/lib/python3.5/site-packages/ipykernel/ipkernel.py",
line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent) File
"/home/ryan/.local/lib/python3.5/site-packages/ipykernel/zmqshell.py",
line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File
"/home/ryan/.local/lib/python3.5/site-packages/IPython/core/interactiveshell.py",
line 2728, in run_cell
interactivity=interactivity, compiler=compiler, result=result) File
"/home/ryan/.local/lib/python3.5/site-packages/IPython/core/interactiveshell.py",
line 2850, in run_ast_nodes
if self.run_code(code, result): File "/home/ryan/.local/lib/python3.5/site-packages/IPython/core/interactiveshell.py",
line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns) File "", line 7, in
tf.import_graph_def(od_graph_def, name='') File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py",
line 313, in import_graph_def
op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py",
line 2956, in create_op
op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py",
line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): NodeDef mentions attr
'T' not in Op<name=Where; signature=input:bool -> index:int64>;
NodeDef:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"].
(Check whether your GraphDef-interpreting binary is up to date with
your GraphDef-generating binary.). [[Node:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where
= Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]] | TensorFlow trained model works on cloud machine but gives an error when used on my local PC | 0 | 0 | 0 | 665 |
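Following the answer's suggestion, a minimal check worth running on both machines is a version comparison; this is only a sketch of the diagnosis, not a fix:

```python
import tensorflow as tf

# Run this on both the cloud machine and the local one.
# A graph exported by a newer TensorFlow can reference op attrs
# (like 'T' on Where) that an older TensorFlow does not know,
# which produces exactly this InvalidArgumentError on import.
print(tf.__version__)
```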
47,965,021 | 2017-12-25T01:08:00.000 | 1 | 0 | 0 | 0 | python-3.x,opencv,cascade,assertion | 47,965,130 | 2 | false | 0 | 0 | I solved it haha. Even though I specified in the command the width and height to be 20x20, it changed it to 20x24. So the opencv_traincascade command was throwing an error. Once I changed the width and height arguments in the opencv_traincascade command it worked. | 1 | 1 | 1 | I keep getting this error
OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize) in get, file /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp, line 157
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp:157: error: (-215) _img.rows * _img.cols == vecSize in function get
Aborted (core dumped)
when running opencv_traincascade. I run with these arguments: opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1600 -numNeg 800 -numStages 10 -w 20 -h 20.
My project build is as follows:
workspace
|__bg.txt
|__data/ # where I plan to put cascade
|__info/
|__ # all samples
|__info.lst
|__jersey5050.jpg
|__neg/
|__ # neg images
|__opencv/
|__positives.vec
Before that, I ran opencv_createsamples -img jersey5050.jpg -bg bg.txt -info info/info.lst -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 1800.
Not quite sure why I'm getting this error. The images are all converted to greyscale as well. The negs are sized at 100x100 and jersey5050.jpg is sized at 50x50. I saw someone had the same error on the OpenCV forums, and someone suggested deleting the backup .xml files that are created by OpenCV in case the training is "interrupted". I deleted those and nothing changed. Please help! I'm using Python 3 on Mac. I'm also running these commands on an Ubuntu server from DigitalOcean with 2GB of RAM, but I don't think that's part of the problem.
EDIT
Forgot to mention: after the opencv_createsamples command, I then ran opencv_createsamples -info info/info.lst -num 1800 -w 20 -h 20 -vec positives.vec | OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize) | 0.099668 | 0 | 0 | 2,068 |
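Since the accepted fix was a -w/-h mismatch, a small sketch like the following can verify that the sample size stored in positives.vec matches what is passed to opencv_traincascade. It assumes the commonly documented .vec header layout (int32 sample count, int32 vector size, then two reserved int16 fields); treat that layout as an assumption and check it against your OpenCV version:

```python
import struct

# The traincascade assertion is `_img.rows * _img.cols == vecSize`,
# so vec_size must equal the -w * -h passed to opencv_traincascade.
with open("positives.vec", "rb") as f:
    count, vec_size, _, _ = struct.unpack("<iihh", f.read(12))

print("samples:", count, "vecSize:", vec_size)
print("matches 20x20?", vec_size == 20 * 20)
```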
47,966,082 | 2017-12-25T05:57:00.000 | 0 | 0 | 0 | 0 | python,django,web-applications,video-embedding | 47,990,584 | 2 | true | 1 | 0 | Handling video files is much more complicated than dealing with images. There are two ways you can do it: you let users embed videos uploaded to other sites (like YouTube), or you let them upload videos to your website. The first option is easier: most video websites offer some way to copy & paste code to embed their videos on other sites. You just need to let users paste the HTML code on your website, and there are some security measures you should take care of.
If you want them to upload the files, it gets hard. There are many video formats, files are much bigger, different compression rates, etc. A video site usually accepts a wide range of video formats and converts a video to the most convenient format and different resolutions (to save space and bandwidth). It means a lot of asynchronous and CPU-intensive tasks on your server.
It's also more complicated to put the file on the website and let it play.
There's a video tag in HTML5 that makes things much easier than before, although if you want to make it really nice and easy you should find some JS player. | 1 | 0 | 0 | I have a Django site to which I want to add videos. Currently users can upload their photos, and I want each user to be able to add their videos as well. I am not sure how to do this, since I am a beginner programmer. Can you please point me in the right direction? | Video embedding | 1.2 | 0 | 0 | 93 |
47,966,943 | 2017-12-25T08:22:00.000 | 0 | 0 | 0 | 1 | python,console,pycharm | 47,967,440 | 1 | false | 0 | 0 | The console uses ports 51562 and 51563, so you might not have exited a previous program correctly, which would leave the ports still in use. | 1 | 0 | 0 | More details:
Macbook Air 10.12.6 (16G1036);
Python version: python3.6
IDE: Pycharm 2017.3.1
Error:
Users/lrh/PycharmProjects/TensorFlowDemo/venv/bin/python /Applications/PyCharm.app/Contents/helpers/pydev/pydev_run_in_console.py 51562 51563 /Users/lrh/PycharmProjects/TensorFlowDemo/demo1.py
Error starting server with host: "localhost", port: "51562", client_port: "51563"
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py", line 270, in start_console_server
server = XMLRPCServer((host, port), logRequests=False, allow_none=True)
File "/Users/lrh/anaconda3/lib/python3.6/xmlrpc/server.py", line 598, in init
socketserver.TCPServer.init(self, addr, requestHandler, bind_and_activate)
File "/Users/lrh/anaconda3/lib/python3.6/socketserver.py", line 453, in init
self.server_bind()
File "/Users/lrh/anaconda3/lib/python3.6/socketserver.py", line 467, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 49] Can't assign requested address
Console server didn't start
How can I fix it? | When I use PyCharm to run code, the error "OSError: [Errno 49] Can't assign requested address" happened | 0 | 0 | 0 | 675 |
47,967,093 | 2017-12-25T08:46:00.000 | 1 | 0 | 1 | 0 | python-2.7,sympy,equation,equation-solving | 47,971,357 | 1 | false | 0 | 0 | Be aware that sympy.solve giving [] doesn't necessarily mean the equation has no solution. It only means it could not find any. Some equations have solutions but they cannot be expressed in closed form (like cos(x) = x). sympy.solveset will give you the full solutions but in cases where it can't tell it will just return a generic solution set.
As to the original question, I don't know if there's a way to do it in general. If you are only dealing with real continuous functions, you could check if the function is strictly positive or strictly negative on its domain. SymPy doesn't have the strongest tools for checking this without some user assistance. | 1 | 1 | 0 | I want to know whether an equation set has a solution, and I use solve(f) != [] (Python SymPy) to achieve it. But I only need to know whether there is a solution, so I do not need to find all the solutions, which would consume a lot of time. Is there any way to do it efficiently? | How to find out whether the equations have solutions efficiently? | 1.2 | 0 | 0 | 313 |
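A small illustration of the solve/solveset difference mentioned in the answer, using the cos(x) = x example (a sketch with SymPy's public API; exact behavior can vary by version):

```python
from sympy import S, cos, solve, solveset
from sympy.abc import x

# cos(x) = x has a real solution, but not in closed form, so solve()
# either returns [] or gives up with NotImplementedError.
try:
    print(solve(cos(x) - x, x))
except NotImplementedError as exc:
    print("solve gave up:", exc)

# solveset() is more honest: it returns a ConditionSet, i.e. "the set
# of reals satisfying this condition", rather than claiming emptiness.
print(solveset(cos(x) - x, x, domain=S.Reals))
```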
47,968,040 | 2017-12-25T11:09:00.000 | 2 | 1 | 0 | 0 | android,python,android-activity | 48,065,914 | 2 | false | 0 | 1 | Consider Jython, Jython is an implementation of the high-level, dynamic, object-oriented language Python seamlessly integrated with the Java platform. The predecessor to Jython, JPython, is certified as 100% Pure Java.
Embedded scripting - Java programmers can add the Jython libraries to
their system to allow end users to write simple or complicated
scripts that add functionality to the application.
Interactive experimentation - Jython provides an interactive
interpreter that can be used to interact with Java packages or with
running Java applications. This allows programmers to experiment and
debug any Java system using Jython.
Rapid application development - Python programs are typically 2-10X
shorter than the equivalent Java program. This translates directly
to increased programmer productivity. The seamless interaction
between Python and Java allows developers to freely mix the two
languages both during development and in shipping products.
The awesome features of Jython are as follows,
Dynamic compilation to Java bytecodes - leads to highest possible
performance without sacrificing interactivity.
Ability to extend existing Java classes in Jython - allows effective
use of abstract classes.
Jython doesn't compile to "pure Java"; it compiles to Java bytecode and subsequently to class files. To develop for Android, one compiles Java bytecode to Dalvik bytecode. To be honest, this path of development is not official, so of course you will run into lots of compatibility problems. | 2 | 16 | 0 | I am collecting some data from my Android application. How can I run a Python script in my Android application that uses the collected data as input and generates some output? | Running Python Script from Android Activity | 0.197375 | 0 | 0 | 4,957 |
47,968,124 | 2017-12-25T11:20:00.000 | 0 | 0 | 1 | 0 | python,pip,installation | 47,968,167 | 1 | false | 0 | 0 | There might be a conflict with environment variables.
Try this
python2.7 -m pip install | 1 | 1 | 0 | how do I go about installing for python2.7 version only?
Right now when I run this : python -m pip install requests tweepy python-bittrex
It's only for python 3
I'd like to install for python 2
(I have both versions installed)
Thanks | Python Pip Install For Specific Version | 0 | 0 | 0 | 236 |
47,970,844 | 2017-12-25T17:49:00.000 | 6 | 0 | 1 | 0 | python-3.x | 53,318,223 | 2 | false | 0 | 0 | 0 and 1 are exit codes.
exit code (0) means an exit without any errors or issues.
exit code (1) means there was some issue which caused the program to exit, such as a compile-time error or a dependency issue. For example, if your program is running on port 8080 and that port is currently in use or was not closed, then your code ends up with exit code 1. | 2 | 15 | 0 | I am a beginner in Python. I tried to develop a simple currency program but I have a problem. The program should calculate money when the calculate button is clicked (like an exchange). But I can't do it. PyCharm writes "Process finished with exit code 1" | What does "Process finished with exit code 1" mean? | 1 | 0 | 0 | 90,809 |
47,970,844 | 2017-12-25T17:49:00.000 | 30 | 0 | 1 | 0 | python-3.x | 47,970,971 | 2 | true | 0 | 0 | 0 and 1 are exit codes, and they are not necessarily Python-specific; in fact, they are very common.
exit code (0) means an exit without errors or issues.
exit code (1) means there was some issue / problem which caused the program to exit.
The effect of each of these codes can vary between operating systems, but with Python they should be fairly consistent. | 2 | 15 | 0 | I am a beginner in Python. I tried to develop a simple currency program but I have a problem. The program should calculate money when the calculate button is clicked (like an exchange). But I can't do it. PyCharm writes "Process finished with exit code 1" | What does "Process finished with exit code 1" mean? | 1.2 | 0 | 0 | 90,809 |
47,972,613 | 2017-12-25T23:10:00.000 | 0 | 0 | 1 | 0 | python,pygame,pyc | 47,972,705 | 2 | true | 0 | 1 | Is is 'alright? Yes, the license allows you to distribute .pyc files.
Is is 'effective'? It will most likely be unnecessary, as there will be no commercial reason to copy your work. If it is super good, people will de-compile or write similar games in another language. See jonrsharpe's link.
Is is intended? The main purpose of .pyc files is to avoid unnecessarily re-compiling Python code by caching the result. | 1 | 2 | 0 | I have a Python project (a game) I would like to sell eventually. Would it be considered, by Python standards, acceptable to use a .pyc file instead of a regular .py source file to distribute the game, for security reasons? Would this even be effective? Or, on the other hand, is that just not what .pyc files are for? | Is it proper to use .pyc files? | 1.2 | 0 | 0 | 1,752 |
47,973,216 | 2017-12-26T01:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 47,973,889 | 3 | false | 0 | 0 | Can you try to run the import command on your Python Shell and let us know if you are able to import it successfully? | 2 | 2 | 1 | I installed the module factor-analyzer-0.2.2 but cannot import the function. My code is from factor_analyzer import FactorAnalyzer. I'm getting an error ModuleNotFoundError: No module named 'factor_analyzer'.
I'm using Python 3.6 and Jupyter notebook. Do you know why this does not work?
Thank you!! | Cannot import FactorAnalyzer from module factor-analyzer-0.2.2 | 0 | 0 | 0 | 5,421 |
47,973,216 | 2017-12-26T01:43:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 58,283,301 | 3 | false | 0 | 0 | Use the Anaconda Prompt to run pip install factor-analyzer rather than the terminal or PowerShell. I had the same problem, and doing that solved it for me. | 2 | 2 | 1 | I installed the module factor-analyzer-0.2.2 but cannot import the function. My code is from factor_analyzer import FactorAnalyzer. I'm getting the error ModuleNotFoundError: No module named 'factor_analyzer'.
I'm using Python 3.6 and Jupyter notebook. Do you know why this does not work?
Thank you!! | Cannot import FactorAnalyzer from module factor-analyzer-0.2.2 | 0.066568 | 0 | 0 | 5,421 |
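A common cause of this ModuleNotFoundError is that pip installed the package into a different interpreter than the one the notebook kernel uses. One hedged workaround is to install it explicitly into the kernel's own interpreter; this is a sketch, not the only possible fix:

```python
import subprocess
import sys

# sys.executable is the exact Python running this notebook kernel,
# so the package lands where `import` will actually look for it.
subprocess.check_call([sys.executable, "-m", "pip", "install", "factor-analyzer"])

from factor_analyzer import FactorAnalyzer  # should now import cleanly
```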
47,973,320 | 2017-12-26T02:11:00.000 | -4 | 0 | 0 | 0 | python,pymysql | 47,973,362 | 3 | false | 0 | 0 | It looks like you can create the database object and check if it has been created; if it isn't created, you can raise an exception.
Try connecting to an obviously wrong db and see what error it throws; you can then catch that in a try/except block.
I'm new to this as well so anyone with a better answer please feel free to chime in | 1 | 7 | 0 | I am using pymysql to connect to a database. I am new to database operations. Is there a status code that I can use to check if the database is open/alive, something like db.status. | How to check the status of a mysql connection in python? | -1 | 1 | 0 | 13,383 |
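Beyond the try/except idea above, pymysql's Connection object itself exposes an open attribute and a ping() method, which can answer the original question directly. A sketch follows; the credentials are hypothetical placeholders:

```python
import pymysql

# Hypothetical credentials for illustration only.
conn = pymysql.connect(host="localhost", user="user", password="pw", db="mydb")

print(conn.open)  # True while the connection object considers itself open

try:
    conn.ping(reconnect=False)  # raises if the server connection is gone
    print("connection is alive")
except pymysql.err.Error:
    print("connection is dead")
```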
47,973,401 | 2017-12-26T02:33:00.000 | 0 | 0 | 0 | 1 | python,macos | 47,973,439 | 2 | false | 0 | 0 | Using Automator, create an application that runs a shell script that runs Python, and set it as the default application for opening .py files. | 1 | 0 | 0 | I know that you can do "chmod +x filename.py" to make the file executable on Mac, but is there a way I can change some setting so that happens by default when I create a file with a .py extension? | How do I make .py files executable by default on mac OS? | 0 | 0 | 0 | 299 |
47,978,664 | 2017-12-26T12:27:00.000 | 0 | 0 | 0 | 0 | python,hdf5,h5py | 47,985,532 | 2 | false | 0 | 0 | You can slice h5py datasets like numpy arrays, so you could work on a number of subsets instead of the whole dataset (e.g. 4 100000*10000 subsets). | 1 | 1 | 1 | I am engaged in analysing HDF5 format data for scientific research purposes. I'm using Python's h5py library.
Now, the HDF file I want to read is very large. Its file size is about 20GB, and the main part of its data is a 400000*10000 float matrix. I tried to read the data all at once, but my development environment, Spyder, was forcibly terminated because it ran out of memory. Is there any method to read it partially and avoid this problem? | How to read an HDF5 file partially when the data is too large to read fully | 0 | 0 | 0 | 1,280 |
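A sketch of the slicing approach from the answer, reading the 400000x10000 matrix in row blocks; the file and dataset names are assumptions for illustration:

```python
import h5py

# Only the requested slice is read from disk, never the full 20 GB.
with h5py.File("big_data.h5", "r") as f:
    dset = f["data"]          # hypothetical dataset name
    n_rows = dset.shape[0]
    block = 100000            # 100000 x 10000 float32 is about 4 GB; shrink if needed
    for start in range(0, n_rows, block):
        chunk = dset[start:start + block, :]
        # ... process `chunk`, then let it be garbage-collected ...
```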
47,979,390 | 2017-12-26T13:36:00.000 | 8 | 0 | 0 | 0 | python,sqlalchemy,alembic | 47,979,603 | 1 | true | 1 | 0 | After you run your init, open the env.py file and update context.configure, adding version_table='alembic_version_your_name' as a kwarg. | 1 | 4 | 0 | I am working with alembic, and it automatically creates a table called alembic_revision in your database. How do I specify the name of this table instead of using the default name? | Alembic, how do you change the name of the revision database? | 1.2 | 1 | 0 | 874 |
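A sketch of what that looks like inside the env.py that `alembic init` generates; only the version_table kwarg is the addition, and "alembic_version_myproject" is a made-up name:

```python
from alembic import context

# Fragment mirroring the generated env.py template; the same kwarg
# goes into the online variant's context.configure() call as well.
def run_migrations_offline():
    context.configure(
        url=context.config.get_main_option("sqlalchemy.url"),
        target_metadata=None,
        version_table="alembic_version_myproject",  # instead of alembic_version
    )
    with context.begin_transaction():
        context.run_migrations()
```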
47,980,515 | 2017-12-26T15:20:00.000 | 0 | 0 | 0 | 0 | python,flask,flask-appbuilder | 47,981,099 | 1 | true | 1 | 0 | This can be accomplished by overriding the post_add function. If you want to modify the data before it is sent (which is what I wanted to do), that would be overriding the pre_add function. | 1 | 0 | 0 | Is there any way with Flask App Builder to add post-processing after the form has been posted? For example, after the add form is submitted, I'd like to modify some of the data that has been sent.
I cannot for the life of me find anything in the docs, and I feel like this should be basic functionality.
Thanks | Flask App Builder - Post Processing Model View | 1.2 | 0 | 0 | 364 |
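A sketch of what the accepted fix can look like in Flask-AppBuilder; the model, field, and import path are hypothetical:

```python
from flask_appbuilder import ModelView
from flask_appbuilder.models.sqla.interface import SQLAInterface

from app.models import Review  # hypothetical model


class ReviewModelView(ModelView):
    datamodel = SQLAInterface(Review)

    def pre_add(self, item):
        # Runs before the new row is committed: modify the posted data here.
        item.title = item.title.strip().title()

    def post_add(self, item):
        # Runs after the row is committed: trigger follow-up work here.
        print("created review", item.id)
```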
47,984,024 | 2017-12-26T21:46:00.000 | 0 | 1 | 0 | 0 | c#,python,python-3.x,visual-studio | 47,984,044 | 1 | false | 0 | 0 | No, it's still the same. Starting a process is a feature of the .NET language (code) and has nothing to do with Visual Studio. | 1 | 0 | 0 | I am new to Python, and currently I have a Python project in a VS2017 solution along with a C# project. I know the question has been asked before, and the steps involved creating a process in C#, calling Python.exe and passing in the *.pr file. However, with VS2017 and the integrated Python environment, I want to know if the process of invoking/calling Python functions has been changed/improved. Does it still require me to have Python installed on the server and physically point to the location of Python.exe?
I cannot and will not use IronPython as it is running the dead 2.7 base and I want to use Python 3.6 going forward.
Thank you for any pointers! | Invoke Python functions from C# in VS2017 | 0 | 0 | 0 | 48 |
47,985,369 | 2017-12-27T01:37:00.000 | 0 | 0 | 0 | 0 | python,mysql,django | 48,016,519 | 1 | true | 1 | 0 | Sorry: maybe the users who answered my question before gave me the right answer, but I did not understand what they meant.
What I chose
In the end I used a Redis lock to ensure there is no duplicate data.
The logic is just below:
When I get the data, I try to use set(key, value, nx=True, ex=60) to acquire a lock from Redis.
If the answer is True, I try to use the Django query API update_or_create().
If not, I do nothing and return True.
It makes the burst of concurrent writes behave like a single process. | 1 | 0 | 0 | Background
I am working on a project which is made up of Django and Celery.
I wrote two tasks which spider two different websites and save some data to a database (MySQL).
Before, with just one task, using update_or_create was enough.
But now I want to run more tasks in different workers, all of which may save data to the database; what's more, they may save the same data at the same time.
Question
How can I ensure that tasks running in different workers create no duplicate data when all of them try to save the same data?
I know Django has an API, select_for_update, which sets a lock in the database. But from reading the documentation, it essentially means: select, and if it exists, update. What I want is: select; if it exists, update; else create. That would be more like update_or_create, but that API may not use a lock? | How to ensure no duplicate data when using Django? | 1.2 | 0 | 0 | 47 |
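A sketch of the Redis-lock-plus-update_or_create pattern described in the accepted answer; the key naming, item shape, and Django model are assumptions for illustration:

```python
import redis

from myapp.models import Article  # hypothetical Django model

r = redis.Redis()

def save_scraped(item):
    # NX: set only if the key does not exist; EX: auto-expire after 60 s.
    # r.set returns True for the one worker that won the race, None otherwise.
    lock_key = "lock:article:%s" % item["url"]
    if not r.set(lock_key, "1", nx=True, ex=60):
        return True  # another worker is already saving this item

    Article.objects.update_or_create(
        url=item["url"],
        defaults={"title": item["title"]},
    )
    return True
```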
47,988,087 | 2017-12-27T07:26:00.000 | 11 | 0 | 1 | 0 | python,dictionary,redis | 47,988,323 | 2 | false | 0 | 0 | No need to ask for forgiveness for asking a question! :) In fact, I just fielded a similar question from a colleague a couple of weeks ago.
Redis objects are very similar to familiar data structures that you're likely to see in lots of other programming languages. Redis hashes are fairly analogous to Python dictionaries, Redis sets are analogous to Python sets, Redis strings are analogous to a Python string, etc. That much is true. But what if instead of a dictionary containing 10 elements, you had to manipulate a dictionary containing 1,000 elements. Or what if you had to store hundreds of dictionaries each containing dozens of keys? What if you have to store an unknown quantity of information (for instance, you want users to be able to sign-up for your service and create profiles)? Redis is a data storage engine (like MySQL, MongoDB, etc.) first and foremost. It just so happens that it also provides you with multiple ways of structuring data that are very similar to how you would structure your data in your application code, so it's really quite simple to implement certain patterns with your data in Redis that you would likely be familiar with as an application developer. Does that make sense?
I'm also of the mindset that "data" should always be stored separately from application logic, but I think that's probably a separate conversation to be had regarding philosophy as opposed to practicality. ;) | 2 | 15 | 0 | So far, in all my test cases, it seems I could have also used a Python dictionary in place of Redis. So I am not able to convince myself: why Redis?
Note:
I am new to Redis so please forgive me for such a naive question. | What is the difference between python native data structure "DICTIONARY" and "Redis" database? | 1 | 0 | 0 | 2,583 |
47,988,087 | 2017-12-27T07:26:00.000 | 10 | 0 | 1 | 0 | python,dictionary,redis | 47,988,263 | 2 | false | 0 | 0 | That's kinda like asking "everywhere I see mySQL database used, I could have just stored the data in a variable in the program, and written it to a text file when I wanted to power off; why do we bother with databases?"
A dictionary is a temporary key-based data structure in your own program's memory. A Redis cache is a temporary key-based data storage facility; it's standalone, self-contained caching software that can be installed, scaled, and shared as an enterprise-wide service. It also provides for managing the stored data: expiry etc. Sure, it might look like it works like a local memory-based dictionary (which is part of the appeal; simplicity and familiarity with a low-grade device that does the same thing), but just imagine your boss comes and says "we're getting really popular, we need to put the website on 20 web servers to handle the load" .. sharing that cache if it's implemented with a dictionary is a nightmare. Using a caching service designed to be shared makes the job easier.
If you don't think you'll ever need a Redis cache because your application will never grow big enough, then fine - don't use one. That's not to say that you should implement everything perfectly to handle the billion hypothetical users; just make some sensible quick-win design decisions at the outset and be realistic about what your app will one day be asked to do: 640k wasn't enough for anybody, after all. | 1 | 15 | 0 | So far, in all my test cases, it seems I could have also used a Python dictionary in place of Redis. So I am not able to convince myself: why Redis?
Note:
I am new to Redis so please forgive me for such a naive question. | What is the difference between python native data structure "DICTIONARY" and "Redis" database? | 1 | 0 | 0 | 2,583 |
47,988,332 | 2017-12-27T07:48:00.000 | 2 | 0 | 0 | 0 | python,imap | 47,988,655 | 1 | true | 0 | 0 | You must implement your proxy as a man-in-the-middle attack. That means that there are two different SSL/TLS-encrypted communication channels: one between the client and the proxy, one between the proxy and the server. That means that either:
the client explicitly sets the proxy as its mail server (if only a few servers are to be used, with one name/address per actual server)
the proxy has a certificate for the real mail server that will be trusted by the client. A common way is to use a dummy CA: the proxy has a private certificate trusted by the client that can be used to sign certificates for any domains like antivirus softwares do.
Once this is set up, the proxy has just to pass all commands and responses, and process the received mails on the fly.
I acknowledge that this is not a full answer, but a full answer would be far beyond the scope of SO, and I hope this can be a path to one. Feel free to ask more precise questions here if you later get stuck in the actual implementation. | 1 | 0 | 0 | I would like to implement a transparent IMAPS (SSL/TLS) proxy from scratch using Python (PySocks and imaplib).
The user and the proxy are in the same network and the mail server is outside (example: gmail servers). All the traffic on port 993 is redirected to the proxy. The user should retrieve his emails using his favorite email application (example: thunderbird). The proxy should receive the commands and transmit it to the user/server and should be able to read the content of the retrieved emails.
However, as the traffic is encrypted, I don't know how to get the account and the password of the user (without using a database) OR how to read the content of the emails without knowing the account and the password of the user.
After a few days of looking for a solution, I still don't have any lead. Maybe it is not possible? If you have any lead, I would be happy to read it.
Thank you. | Transparent IMAPs proxy | 1.2 | 0 | 1 | 221 |
47,988,983 | 2017-12-27T08:43:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,neural-network,keras,keras-layer | 47,990,202 | 2 | true | 0 | 0 | input_dim=3 means that your data has 3 features, which will be used to determine the final result; e.g. if you want to determine what animal the data refers to, you could use width, height and color as the features.
100 examples of different combinations of animal widths, heights and colors allow the neural network to adjust its parameters (to learn) which width, height and color refer to which kind of animal. Keras starts with random weights for the neurons and goes through the 100 provided samples one by one to adjust the network weights. Keras actually uses batches, which means the 100 samples are divided into smaller groups for a better learning rate and general performance (less data to store in memory at once).
The 10 neurons are 10 'places' where the network is able to store the results of multiplications between neuron weights and input data. You could imagine a neuron as a bulb which lights a bit brighter or darker depending on whether the data shows some useful feature, e.g. whether the animal is above 3 meters tall. Each neuron has its own set of weights, which are changed just a bit each time the network examines the next sample from your data. At the end you should have 10 neurons (bulbs) which react more or less intensely depending on the presence of different features in your data, e.g. whether the animal is very tall.
The more neurons you have, the more possible features you can track, e.g. whether the animal is tall, hairy, orange, spotted, etc. But the more neurons you have, the higher the risk that your network becomes too exact and learns features which are unique to your training examples (this is called overfitting) but do not help you recognize animal samples which were not included in your training data (the ability to generalize, which is the most important point of actually training a neural network). Selecting the number of neurons is therefore a bit of a practical exercise where you search for a value that works for your need.
I hope this clarifies your doubts. If you would like to go deeper into this field, there are a lot of good resources online explaining the neural network training process in detail, including features, neurons and learning.
For example:
model = Sequential()
model.add(Dense(10, activation='sigmoid', input_dim=3,name='layer1'))
In this model, what does it mean to have 10 neurons and an input with 3 dimensions? If the input data has 100 examples (number of lines in the matrix data), how does Keras use them?
Thank you. | How does Keras read input data? | 1.2 | 0 | 0 | 1,223 |
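A runnable sketch of the layer discussed above, fed with 100 samples of 3 features each; the random data and added output layer are purely illustrative assumptions:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 100 examples, each with 3 features (matching input_dim=3).
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, size=(100, 1))

model = Sequential()
model.add(Dense(10, activation='sigmoid', input_dim=3, name='layer1'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Keras consumes the 100 rows in batches (here 10 at a time) to update weights.
model.fit(X, y, epochs=5, batch_size=10)
```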
47,993,255 | 2017-12-27T13:42:00.000 | 0 | 0 | 0 | 1 | python,https,ansible,winrm | 48,159,688 | 2 | false | 0 | 0 | I too had this issue and ended up here.
Googling led me to believe that this might be because third-party root certificates on the Windows host might have somehow been borked. Was it a bad update? Was it a dangerous user? Who knows! Who cares!
I did the following, and it resolved my connection issues:
1) Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\AuthRoot\Certificates
2) Reboot
No more SSL wrong version error | 1 | 0 | 0 | I tried to connect to my windows 7 machine using Ansible using credssp I'm recieving the error as below.
tls_server_hello,wrong_ssl_version.
Whereas when I try the same on my Windows 10 machine, I'm able to do win_ping | Ansible HTTPS connection wrong SSL version | 0 | 0 | 0 | 3,609 |
47,997,038 | 2017-12-27T18:33:00.000 | 0 | 0 | 0 | 1 | python,windows,cmd | 47,997,108 | 2 | false | 0 | 0 | click start
in the run box type "cmd.exe"
in the terminal type "cd /path/to/my/prog"
then type "my_prog.exe"
output should be printing to the console ....
It should be noted that you have not really supplied enough information to fully answer this question, but I think this should work as long as the exe is printing to stdout.
I have already seen solutions like:
cmd /k start.exe
but that only shows the console AFTER the game has stopped. I want it to be shown WHILE the game is running. Is this possible? Or is there some sort of python command I can run to open a console terminal during game?
Edit:
I'm using the Blender game engine to create a .exe which executes a Python script. Solutions I have come across temporarily close the terminal window until the .exe game is closed. I would like it to stay open while the game plays. | how to keep cmd line open to show console logs WHILE blender game engine runs | 0 | 0 | 0 | 425 |
47,999,716 | 2017-12-27T23:15:00.000 | 1 | 1 | 0 | 0 | python,bots,discord | 48,163,714 | 3 | true | 0 | 0 | You want to do this through OAuth2, using the URL parameters. | 2 | 0 | 0 | I am using discord.py to create my bot, and I was wondering how to create roles/permissions specific to the bot.
What that means is when the bot enters the server for the first time, it has predefined permissions and role set in place so the admin of the server doesn't need to set a role and permissions for the bot.
I have been trying to look up a reference implementation of this, but with no luck. If someone can point me at an example of how to get a simple permission/role for a bot, that would be great!
48,000,154 | 2017-12-28T00:22:00.000 | 3 | 0 | 1 | 1 | python,bash,python-3.x,exception | 48,000,173 | 1 | true | 0 | 0 | Python will always stop execution when an uncaught exception occurs.
It will print the traceback with information about the exception. | 1 | 0 | 0 | Is there a python3 equivalent to the bash command set -e? That is, to stop execution as soon as an error occurs? Or does python already do this? | Is there a python3 equivalent to the bash command `set -e`? | 1.2 | 0 | 0 | 97 |
48,000,835 | 2017-12-28T02:12:00.000 | 0 | 1 | 0 | 1 | python,cron,raspberry-pi3,google-assistant-sdk | 50,710,220 | 1 | false | 0 | 0 | It is possible, but not directly, as far as I know.
I use IFTTT for that (check on youtube how easy it is).
The easiest way is to have public IP with your raspberry, but it is dangerous (see below for solution).
Log into ifttt.com and make an applet: Google Assistant Command to Webhook. It is a piece of cake. You will learn it in minutes (use youtube for tutorial).
You type as a webhook public IP of your raspberry with a link to PHP app.
Your PHP app fetches the info about command from Google Assistant and exec("Your command here");
Additional info:
It is probably possible to make fetching app in Python instead of PHP, but I am not a Python programmer, so I cannot help with it. I use PHP for that.
It is MUUUCH SAFER to make an additional "Gate Server". Take the cheapest hosting provider. Make an app on this server (probably in PHP :P). Your IFTTT sends a request to the Gate (instead of the Raspberry directly), and your Raspberry Pi contacts the Gate, checking whether there are any orders for it. If there are, it follows the order. Because of that, nobody knows your Raspberry's IP address. You can hide it behind the router, so it is more or less safe.
I hope my solution is understandable :).
K. | 1 | 2 | 0 | I have installed the google assistant SDK on a raspberry pi 3 and have so far managed to get Spotify working, and managed to get IFTTT to run for lights. What I really want to know is whether it is possible to have google assistant run and interact with python scripts on the pi. By which I mean I have a script called sound_the_alarm.py which pulls news from different sources, gives updates on bitcoin etc, and then plays a random song from my library, and turns on a light via gpio. I would like to be able to say “ok google sound the alarm at 7:00 tomorrow and have it update the crib tab with an instance of the alarm. Is this possible? I have found nothing online suggesting so!
Thanks
Will | Google assistant to run python script | 0 | 0 | 0 | 1,796 |
48,003,535 | 2017-12-28T07:39:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,machine-learning,computer-vision,deep-learning | 49,556,042 | 1 | false | 0 | 0 | Just use the sigmoid layer as the final layer. There's no need for any cross entropy when you have a single output, so just let the loss function work on the sigmoid output which is limited to the output range you want. | 1 | 0 | 1 | I want to make a neural network that has one single output neuron on the last layer that tells me the probability of there being a car on an image (the probability going from 0 - 1).
I'm used to making neural networks for classification problems, with multiple output neurons, using the tf.nn.softmax_cross_entropy_with_logits() and tf.nn.softmax() methods. But these methods don't work when there is only one column for each sample in the labels matrix since the softmax() method will always return 1.
I tried replacing tf.nn.softmax() with tf.nn.sigmoid() and tf.nn.softmax_cross_entropy_with_logits() with tf.nn.sigmoid_cross_entropy_with_logits(), but that gave me some weird results, I might have done something wrong in the implementation.
How should I define the loss and the predictions on a neural network with only one output neuron? | Neural network with a single output with tensorflow | 0.197375 | 0 | 0 | 430 |
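A minimal sketch of the single-output setup this thread is after, in TF 1.x style (placeholders, shapes, and layer sizes are assumptions; the key point is that the sigmoid cross-entropy loss takes the raw logits, not the sigmoid output):

```python
import tensorflow as tf

# Single output neuron: logits has shape [batch, 1], labels are 0.0 or 1.0.
x = tf.placeholder(tf.float32, [None, 128])
y = tf.placeholder(tf.float32, [None, 1])

logits = tf.layers.dense(x, 1)   # no activation here
prob = tf.nn.sigmoid(logits)     # P(car) in [0, 1] for inference

# Per-example binary cross-entropy on the logits (numerically stable).
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```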
48,003,638 | 2017-12-28T07:48:00.000 | 0 | 1 | 0 | 0 | python,unicode | 48,004,364 | 1 | false | 0 | 0 | So after much experimentation and for reasons I don't fully understand, the following code works:
print "%s" %("\xbf").decode("cp1252")
It's a workable solution so I'll call this one solved! | 1 | 0 | 0 | I've gone through about 20 threads but don't seem to be able to find an answer to what I hoped was a simple question!
I have a unicode-encoded byte "\xbf" which translates to a "¿" character.
If I encode the character as follows: u"¿".encode("cp1252") it outputs "\xbf". How can I return this back to a "¿" character for display on screen?
No matter what I attempt, I seem to get an ordinal not in range(128) error.
EDIT: A further example of this is simply using chr(191), which also gives the result "\xbf". How can I make this print the ASCII character?
Any and all help appreciated! | Unable to convert unicode byte to ASCII | 0 | 0 | 0 | 232 |
48,006,244 | 2017-12-28T10:59:00.000 | 0 | 1 | 0 | 0 | python-3.x,cron,raspberry-pi | 48,019,881 | 1 | true | 0 | 0 | When running a cron job, check whether any relative paths are used in your code, then either change them to absolute paths or change the working directory inside the script. Only then will the cron job pick up the attachment or anything else that needs to be included by your script. | 1 | 0 | 0 | I have a script in Python that takes photos with the Raspberry camera and sends mail with these photos. From the command line everything works OK; the script does the job and finishes without errors.
I set up cron to execute the script every hour.
Every hour I get mail but without attachments.
Where or how can I check why the script executed from cron doesn't send attachments? | Python script executed from cron doesn't send mail | 1.2 | 0 | 0 | 668 |
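A sketch of the accepted fix: pin the working directory at the top of the script so cron resolves the photo paths the same way an interactive shell does (the photo path is a hypothetical example):

```python
import os

# cron runs jobs from the user's home directory, so relative paths like
# "photos/latest.jpg" resolve differently than when run from a terminal.
# Switching to the script's own directory makes both environments behave alike.
script_dir = os.path.dirname(os.path.abspath(__file__))
os.chdir(script_dir)

photo_path = os.path.join(script_dir, "photos", "latest.jpg")  # hypothetical path
```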
48,007,100 | 2017-12-28T11:58:00.000 | 0 | 0 | 1 | 0 | python,pyqt5,pyinstaller | 57,068,272 | 4 | false | 0 | 1 | If you using python 3.4 and pip refuses to install pyqt5
Download and install
Pyqt5 manually to %your python 3.4 dir%
Create directory
Go to %your python 3.4 dir%\Lib\site-packages\PyQt5, create a directory Qt, and then move the plugins folder there.
Add plugins
Then you can add ('C:/Python34-32/Lib/site-packages/PyQt5/Qt/plugins', 'PyQt5/Qt/plugins') to data in your spec file.
Be sure to download PyQt 5.4.1 or another version that supports Python 3.4.
At least that solved my problem. I hope this will help somebody. | 1 | 10 | 0 | I am trying to bundle a PyQt project using PyInstaller. I tried creating the package using the command pyinstaller --onedir Hello.py.
This creates a dist folder containing Hello.exe. On running it, I get the error: This application failed to start because it could not find or load the Qt platform plugin "windows" in "".
Reinstalling the application may fix this problem.
I solved the issue on my PC by
setting environment variable QT_QPA_PLATFORM_PLUGIN_PATH
or by
copying the dist\Hello\PyQt5\Qt\plugins\platform folder to where Hello.exe exists.
But this issue exists when I bundle to a single file using command --onefile, and run on any other machine, where QT_QPA_PLATFORM_PLUGIN_PATH is not set.
Can someone help me figure out the issue? | Error: Could not find or load the Qt platform plugin "windows" - PyQt + Pyinstaller | 0 | 0 | 0 | 2,889 |
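For the --onefile case, one hedged workaround is to set the plugin path at runtime, pointing Qt at the files PyInstaller unpacks into its temporary folder. This is a sketch: sys._MEIPASS is the documented PyInstaller extraction directory, but the exact plugin subfolder below is an assumption that depends on how the hook packaged Qt, so check your dist layout:

```python
import os
import sys

# In a one-file bundle, PyInstaller extracts everything to sys._MEIPASS.
# Setting the plugin path there means no QT_QPA_PLATFORM_PLUGIN_PATH is
# needed on the target machine. Do this before importing any PyQt5 widgets.
if getattr(sys, 'frozen', False):
    os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = os.path.join(
        sys._MEIPASS, 'PyQt5', 'Qt', 'plugins', 'platforms')  # assumed subpath

from PyQt5 import QtWidgets  # imported only after the path is set
```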
48,007,839 | 2017-12-28T12:49:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,python-unittest | 48,013,158 | 2 | false | 0 | 0 | You can use pdb to debug this issue, in the test simply add these two lines to halt execution and begin debugging.
import pdb
pdb.set_trace()
Now for good testing practice you want deterministic test results, a test that fails only sometimes is not a good test. I recommend mocking the random function and using data sets that capture the errors you find. | 1 | 1 | 0 | For a specific program I'm working in, we need to evaluate some code, then run a unittest, and then depending on whether or not the test failed, do A or B.
But the usual self.assertEqual(...) seems to display the results (fail, errors, success) instead of saving them somewhere, so I can't access that result.
I have been checking the unittest modules for days, but I can't figure out where the magic happens, or whether there is a variable somewhere I can call to know the result of the test without having to read the screen (making the program read the output and try to find the words "error" or "failed" doesn't sound like a good solution). | Python: How to access test result variables from unittest module | 0 | 0 | 0 | 978 |
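Since the question is really about getting the outcome as data rather than console text, a sketch using unittest's public result API may help (the test case itself is a throwaway example):

```python
import unittest


class ProbeTest(unittest.TestCase):
    def test_equal(self):
        self.assertEqual(1 + 1, 2)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ProbeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# TestResult exposes the outcome programmatically, not just as printed text.
if result.wasSuccessful():
    pass  # do A
else:
    print(result.failures, result.errors)  # do B, with full details available
```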
48,008,791 | 2017-12-28T13:56:00.000 | 0 | 0 | 0 | 0 | python,django | 48,009,047 | 2 | false | 1 | 0 | I just realised that I was importing the models instead of the views.
the code above should be from reviews.views import * | 1 | 0 | 0 | So I have 2 apps for my django project. In the default mysite folder, you have the urls.py file. I cant seem to find out how to write all my urls for all the apps in just the one urls.py file.
I have tried this:
from reviews.models import *
However, that isn't working.
Thanks for any help! | Django URLs from different apps in just the base one | 0 | 0 | 0 | 29 |
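For completeness, the more common pattern is to give each app its own urls.py and include it from the project one. A sketch using Django 1.x-style url()/include(), with the app name taken from the question:

```python
# mysite/urls.py
from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^reviews/', include('reviews.urls')),  # delegates to reviews/urls.py
]
```

This keeps each app's routes next to its views instead of importing everything into the project-level file.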
48,010,487 | 2017-12-28T15:54:00.000 | 1 | 1 | 0 | 0 | python,django | 48,010,534 | 1 | false | 1 | 0 | You should save your dependencies in a requirements.txt file and use pip to install the requirements as part of your Django app's deployment process.
Edit: smtplib is part of the standard lib, so it should be available out of the box. | 1 | 0 | 0 | I have imported "smtplib" in the code, and I worry that the server might not have this installed. Is there a CDN for modules in Python? | What if the hosting Django server doesn't have the module I imported? | 0.197375 | 0 | 0 | 30 |
48,010,748 | 2017-12-28T16:13:00.000 | 22 | 0 | 1 | 1 | python | 48,010,814 | 1 | true | 0 | 0 | os is part of Python's standard library, so there is no need to download it. | 1 | 6 | 0 | I'm trying to install the os Python module on Windows.
In a cmd console, I typed:
C:\Users\username> pip install os
Collecting os
Could not find a version that satisfies the requirement os (from versions: )
No matching distribution found for os
So I guessed the module was not available on Windows for some reason, but I found references to it in some SO questions.
Obviously, typing Windows and os in Google only gives me answers about Windows itself.
So, how can I install the os module on Windows? | How to install the os module on Windows? | 1.2 | 0 | 0 | 79,293 |
48,011,531 | 2017-12-28T17:10:00.000 | 2 | 0 | 1 | 1 | python,git,shell,github | 48,011,670 | 1 | true | 0 | 0 | None of those solutions are the right way to do it. When you need to patch Python code, because you want to add a feature or fix a bug, you should proceed as follows:
Solution 3
Find the project's main page (on GitHub, GitLab or anywhere else); usually a DDG or Google lookup, or --help, will point you in the right direction.
Read the project's README file, and look for a developers section
Check out the project's issue tracker for feature requests and bug reports, to be sure that what you'll do isn't being worked on by someone else (and then you can offer your help); that's important for collaboration.
For building and developing, follow instructions from the README, which usually look like the following, or just follow this:
How to work on a python project
copy the repository URL (https://github.com/<user>/<project>)
open a terminal in a directory where you want to keep your on-going work (I like to use ~/Workspace)
run git clone https://github.com/<user>/<project>
cd <project>
virtualenv var (you can use whatever name you want, but my preference goes for var)
var/bin/pip install -r requirements.txt
run var/bin/python and all the modules you're developing are available in the python REPL.
This should apply to the vast majority of python projects. Some projects will show you other tools like virtualenv for that step (hint: if there's a requirements.txt you know it's what you'll do), or will use zc.buildout or will use pipenv, but those will usually tell how to build in the README.
Your questions
It feels weird to update PYTHONPATH each time I want to work on it. But what's weirder is that, because the initial script is meant to be used from the macOS shell, I cannot pass arguments to the main() function when calling it through the Python shell.
you're right to feel weird doing that. Tools like virtualenv or buildout are there to create contained environments for you to develop comfortably. You'll even find the script you usually find in your path available in the env's bin directory. You'll also find the test suite so that you can test your changes against regressions!
Is it normal that python repos look like released code that can hardly be tweaked and quickly tested ?
Usually development happens in a specific branch of a project. So on the GitHub page, you might see the stable/release branch, whereas all the new stuff happens in the devel branch. Always look at the issues, and communicate with the project's maintainer before writing code, at the risk of having your contribution rejected. | 2 | 0 | 0 | Can somebody simply explain to me the correct workflow to download GitHub code and work on it, quickly trying tweaks and fixes?
Solution 1
install the GitHub repo using pip
then python "functions" are called using shell commands, not in Python shell, but in the macOS shell.
This is clearly not the good solution as I don't even know where is the source code. And not use the macOS shell. I want to see all the files, copy/paste, correct and try things.
Solution 2
Download the GitHub repo in a working directory.
Open a macOS shell.
cd in the directory.
Update PYTHONPATH to the current directory.
Start the python shell.
Load the desired script to try using the import directive.
Edit the script using a text editor and repeat action 6 to try things.
It feels weird to update PYTHONPATH each time I want to work on it. But what's weirder is that, because the initial script is meant to be used from the macOS shell, I cannot pass arguments to the main() function when calling it through the Python shell.
Am I missing something here? Is it normal that Python repos look like released code that can hardly be tweaked and quickly tested? | How to quickly set up a Python environment to work on and tweak a downloaded GitHub repo? | 1.2 | 0 | 0 | 46 |
48,012,824 | 2017-12-28T18:51:00.000 | 3 | 0 | 1 | 0 | python,python-3.x | 48,013,139 | 1 | true | 0 | 0 | The most important reason is performance: findall needs to find all occurrences, so it searches the whole string. search just scans the string until it finds a match, so provided the pattern is present it will be faster. match just checks if the beginning of the string matches the pattern, so it probably doesn't need to search the whole string (except in some edge cases).
So findall will be slower in the best case than match or search.
Additionally findall also stores all matches, so it can easily take much more memory than search or match which only store the first match (or nothing at all).
So findall is more memory expensive.
Last but not least, match and search return SRE_Match objects, which not only store the matched substring but also its position (and groups, if you use patterns with capture groups). Thanks to @kindall, who posted this in the comments.
So, while you could use findall instead of match or search, it can be slower, uses more memory and stores less information about the match. So I wouldn't use it as a replacement for search or match unless you also need to find all the other occurrences. | 1 | 2 | 0 | I have been learning Python for some weeks. Presently I'm having some problems/questions with Python's re.findall().
In some books or videos they use re.search(); seldom do they use match(). In the Django documentation I read that search finds the first match, and re.match finds a match at the beginning of a string.
But in all cases re.findall() would work well. So why shouldn't I use only re.findall() all the time?
As I want to get better in Python I want to understand it, therefore I asked this question.
Best regards, Jonathan | Why not always use Python's re.findall instead of re.search? | 1.2 | 0 | 0 | 317 |
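A small demonstration of the trade-offs described in the answer above (the sample string is just an illustration):

```python
import re

text = "cat catalog concatenate"

m = re.search(r"cat", text)
print(m.group(), m.start(), m.end())  # 'cat' 0 3 -- stops at the first hit

print(re.match(r"cat", text))         # match object: pattern is at position 0
print(re.match(r"log", text))         # None: match only looks at the start

print(re.findall(r"cat", text))       # ['cat', 'cat', 'cat'] -- scans the
                                      # whole string, returns plain strings
                                      # without positions or groups
```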
48,013,449 | 2017-12-28T19:43:00.000 | 0 | 0 | 0 | 0 | python,leaflet,gtfs | 48,017,558 | 2 | false | 0 | 0 | The only alternative to using shapes.txt would be to use the stops along the route to define shapes. The laziest way would be to pick a single trip for that route, get the set of stops from stop_times.txt, and then get the corresponding stop locations from stops.txt.
If you wanted or needed to, you could get a more complete picture by finding the unique ordered sets of stops among all of the trips on that route, and define a shape for each ordered set in the same way.
Of course, these shapes would only be rough estimates because you don't have any information about the path taken by the vehicles between stops. | 1 | 0 | 0 | I am trying to consume some GTFS feeds and work with them.
I created a MySQL database and a Python script which downloads GTFS files and imports them into the right tables.
Now I am using the Leaflet map framework to display the stops on the map.
The next step is to display the routes of a bus or tram line on the map.
There is no shapes.txt in the GTFS archive.
Is there a way to display the routes without the shapes.txt?
Thanks!
Kali | Display GTFS - Routes on a Map without Shapes | 0 | 0 | 0 | 1,223 |
48,014,537 | 2017-12-28T21:21:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,security,anaconda | 48,022,958 | 2 | true | 0 | 0 | Are there security threats associated with leveraging Anaconda? If so, what specifically?
It depends on the environment. Do you have admin privileges? What does the global GPO policy look like? For example, if you don't have admin rights you can't do things like create a socket or access the network stack at the OS level, and vice versa. The thing is, you can do similar damage with CMD/PowerShell as well. Is the former any more secure than the latter? I don't think so ....
If the risk is too great, what are alternatives to simply installing the Anaconda Suite?
It depends on what kind of functionality you need from Anaconda; maybe you can use a different Python interpreter/framework, but from a security perspective it looks the same.
Usually "IT Stuff" don't have a clue how to implement OS/Domain/Security in a proper manner so their solution is to tell everybody that it's a security risk. In these days, everything is a security risk. | 2 | 2 | 0 | There has been some back and forth between myself and the IT department of a company I recently began working regarding the installation of Python / Anaconda suite on my work PC. The IT department is making claims of security risks (with Anaconda) but I suspect it’s more of a matter of them not wanting to give me access. My suspicion is based, not on my IT knowledge, but due to the fact that I’ve used Anaconda at my last job with no issues. I’m hoping for some insight of enterprise risks (if any) associated with installation of Anaconda. To summarize the situation/my knowledge:
• I am not a developer, nor do I come from an IT/enterprise risk background. I’ve used Python for analytics, data cleansing and report automation
• Current and past companies are within the finance industry, i.e. confidential information lives on the network
• I'm requesting Anaconda as opposed to Anaconda Enterprise
• I’m requesting Python version 3.6.4
I’m not trying to write-off IT’s concerns. What I’m trying to do is better understand the situation, educate myself and either alleviate their concerns or propose an alternative all parties can work with.
So my questions are:
Are there security threats associated with leveraging Anaconda? If so, what specifically?
If the risk is too great, what are alternatives to simply installing the Anaconda Suite?
Thanks | Security Issues with Anaconda? | 1.2 | 0 | 0 | 5,755 |
48,014,537 | 2017-12-28T21:21:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,security,anaconda | 48,015,112 | 2 | false | 0 | 0 | One could argue that any programming environment is a potential security risk.
But such an argument is rather unconvincing in any office environment where you will probably find a well-known office suite installed that comes with its own built-in programming environment! You can cause plenty of mayhem with VBA macros hidden in Word and Excel documents... Especially when these documents are sent to unsuspecting co-workers.
To put it more bluntly, any environment that is not nailed down to the point of being unusable is a potential security risk. Can you e.g. run programs off a USB drive? Or open Office files from them? Or even copy files to them?
You could ask IT what their specific objections are.
BTW: you don't need administrator privileges to install Anaconda for yourself, only if you want to install it for all users on your machine.
Edit: Another approach is to bring your own laptop and use Python on that.
This is an approach I'm using. And I would suggest using a laptop loaded with a UNIX-like operating system like a Linux distribution or one of the BSD variants. All of these come with a lot of tools out of the box and since they have decent package management (as opposed to ms-windows) basically every open-source tool you can think of is easily available. There is a learning curve associated with this, but on these systems tools are meant to work together instead of existing as pre-packaged one-trick ponies.
For example, I keep yearly logbooks of work-related stuff that should be documented. These logbooks can run into 200-300 pages, with hundreds of illustrations and graphs. For such a thing, ms-word just doesn't cut it; I've tried several times. So I use LaTeX, python, gnuplot and a host of other tools for it. And the whole thing is kept under revision control as a matter of course.
This laptop isn't connected to the network at work, so it cannot be a threat to that. | 2 | 2 | 0 | There has been some back and forth between myself and the IT department of a company I recently began working at, regarding the installation of the Python / Anaconda suite on my work PC. The IT department is making claims of security risks (with Anaconda), but I suspect it's more a matter of them not wanting to give me access. My suspicion is based not on my IT knowledge, but on the fact that I've used Anaconda at my last job with no issues. I'm hoping for some insight into the enterprise risks (if any) associated with installing Anaconda. To summarize the situation/my knowledge:
• I am not a developer, nor do I come from an IT/enterprise risk background. I’ve used Python for analytics, data cleansing and report automation
• Current and past companies are within the finance industry, i.e. confidential information lives on the network
• I'm requesting Anaconda as opposed to Anaconda Enterprise
• I’m requesting Python version 3.6.4
I’m not trying to write-off IT’s concerns. What I’m trying to do is better understand the situation, educate myself and either alleviate their concerns or propose an alternative all parties can work with.
So my questions are:
Are there security threats associated with leveraging Anaconda? If so, what specifically?
If the risk is too great, what are alternatives to simply installing the Anaconda Suite?
Thanks | Security Issues with Anaconda? | 0 | 0 | 0 | 5,755 |
48,014,892 | 2017-12-28T21:59:00.000 | 1 | 1 | 0 | 0 | python,discord.py | 51,890,116 | 2 | false | 0 | 0 | Use a Raspberry Pi with an internet connection. Install all necessary components and then run the script in its terminal. Then, you can switch off the monitor and the bot will continue running. I recommend that solution to run the bot 24/7. If you don't have a Raspberry Pi, buy one (they're quite cheap), or buy a server to run the code on. | 2 | 0 | 0 | I recently made a music bot in my discord server but the problem I have is that I have to turn on the bot manually etc. and it goes offline when I turn off my computer. How can I make the bot stay online at all times, even when I'm offline? | How do I make my discord bot stay online | 0.099668 | 0 | 1 | 4,654 |
48,014,892 | 2017-12-28T21:59:00.000 | 0 | 1 | 1 | 1 | python,discord.py | 48,034,102 | 2 | false | 0 | 0 | You need to run the Python script on a server, e.g. an AWS Linux instance.
To do that you would need to install Python on the server, place the script in the home directory, and just run it inside a screen session. | 2 | 0 | 0 | I recently made a music bot in my Discord server, but the problem I have is that I have to turn on the bot manually etc., and it goes offline when I turn off my computer. How can I make the bot stay online at all times, even when I'm offline? | How do I make my discord bot stay online | 0 | 0 | 1 | 4,654
48,015,101 | 2017-12-28T22:20:00.000 | 0 | 0 | 0 | 0 | python,neural-network,keras,recurrent-neural-network | 48,015,138 | 1 | false | 0 | 0 | The number of goals would be an output, not an input. For example, you could build a model to predict how many goals each team would score...
If you used goals scored as an input, the model would train to be incredibly simple and useless: just comparing the goals and ignoring all other inputs. | 1 | 0 | 0 | I want to build a neural network to predict who is going to win a soccer game. I have several features (like the team, the physical shape of the team, etc.) and an output telling which team won (or if there was a draw).
In my training, I want to add features like the number of goals these teams scored. The problem is that I won't be able to include these features when predicting the final result of a future game. Is there a way to do it?
I'm using Keras in Python as a library to easily build neural networks. | Neural Network: Extra features used for training but not for predicting new data | 0 | 0 | 0 | 26 |
48,016,091 | 2017-12-29T00:40:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda,chalice | 50,925,535 | 6 | false | 1 | 0 | First, install the unixODBC and unixODBC-devel packages using yum install unixODBC unixODBC-devel. This step installs everything required for the pyodbc module.
The library you're missing is located in the /usr/lib64 folder on your Amazon Linux instance.
Copy the library to your Python project's root folder (libodbc.so.2 is just a symbolic link; make sure you copy both the symbolic links and the library itself, as listed): libodbc.so, libodbc.so.2 and libodbc.so.2.0.0 | 1 | 3 | 0 | I am trying to build an AWS Lambda function using API Gateway which utilizes the pyodbc Python package. I have followed the steps as mentioned in the documentation. I keep getting the following error: Unable to import module 'app': libodbc.so.2: cannot open shared object file: No such file or directory when I test-run the Lambda function.
Any help is appreciated. I got the same error when I deployed my package using Chalice. It seems it could be that I need to install unixodbc-dev. Any idea how to do that through AWS Lambda? | Unable to use pyodbc with aws lambda and API Gateway | 0.033321 | 1 | 0 | 6,014
48,016,206 | 2017-12-29T01:00:00.000 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 49,428,497 | 4 | false | 0 | 0 | Currently what I do is: in my template I create a dynamic data range that gets the data from the raw data sheet, and then I set that named range as the table's data source. Then in the pivot table options there is a "refresh on open" parameter, and I enable that. When the Excel file opens it refreshes, and you can see it refresh. I am still looking for a way to do it in openpyxl, but this is where I'm at. | 2 | 6 | 0 | I have a file that has several tabs that have pivot tables that are based on one data tab. I am able to write the data to the data tab without issue, but I can't figure out how to get all of the tabs with pivot tables to refresh.
If this can be accomplished with openpyxl that would be ideal. | Using openpyxl to refresh pivot tables in Excel | 0 | 1 | 0 | 7,810
48,016,206 | 2017-12-29T01:00:00.000 | 1 | 0 | 0 | 0 | python,excel,openpyxl | 52,547,674 | 4 | false | 0 | 0 | If the data source range is always the same, you can set each pivot table as "refresh when open". To do that, just go to the pivot table tab, click on the pivot table, then under "Analyze" -> Options -> Options -> Data -> select "Refresh data when opening the file".
If the data source range is dynamic, you can set a named range, and in the pivot table tab, Change Data Source to the named range. And again set "refresh when open".
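If you do want to set that flag from openpyxl instead of by hand, a minimal sketch along these lines should work; note that _pivots is a private attribute, and the filename is an assumption:
from openpyxl import load_workbook
wb = load_workbook("report.xlsx")          # assumed filename
for ws in wb.worksheets:
    for pivot in ws._pivots:               # pivot tables attached to this sheet
        pivot.cache.refreshOnLoad = True   # tells Excel to refresh them on open
wb.save("report.xlsx")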
So the above is achievable without any Python package; if you go the openpyxl route instead, make sure that you're using the 2.5 release or above, because otherwise the pivot table format will be lost. | 1 | 6 | 0 | I have a file that has several tabs that have pivot tables that are based on one data tab. I am able to write the data to the data tab without issue, but I can't figure out how to get all of the tabs with pivot tables to refresh.
If this can be accomplished with openpyxl that would be ideal. | Using openpyxl to refresh pivot tables in Excel | 0 | 1 | 0 | 7,810
48,021,783 | 2017-12-29T11:18:00.000 | 0 | 0 | 0 | 1 | python,pygtk,glade | 48,027,204 | 1 | false | 0 | 1 | I think that is not possible to do dynamically.
As you know, in PyGTK we load the Glade file only once, like this: wTree = gtk.glade.XML("localize.glade"), and after that we have in scope access to the whole tree of controls and components.
If you have a window loaded, you can load another window, but pulling apart a tab that belongs to an already-loaded window is not something PyGTK supports. Each window runs in its own process, and I cannot figure out how to pull a tab apart from the root process.
I hope this helps you out. | 1 | 3 | 0 | I made a Glade file with a main window of type "GtkNotebook", and there are several pages in it (Window1 = Page1, Page2, Page3, Page4).
a) Is it possible, like in a web browser, to take one of these pages and separate it from the main window? For example, Page4 dragged away with the cursor would create a Window2.
b) If not (I have not managed to achieve it so far), I will probably have to create two windows which open automatically when I start my application (one will be Window1 = Page1, Page2, Page3; the second will be Window2 with Page4). I will look into how to do this once I have feedback from here on whether a) can be done in any way.
Thanks (this is my first post here) | GLADE & Pygtk: how to split dynamically windows? | 0 | 0 | 0 | 96 |
48,023,512 | 2017-12-29T13:40:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,amazon-dynamodb | 48,023,965 | 1 | true | 1 | 0 | Generally speaking, using an auto-increment primary key is not going to be best practice. If you really want to do it, you could use a lambda function and another dynamodb table that stores the last used value, but you would be much better off picking a better primary key so that you don't run into performance problems down the road.
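A rough boto3 sketch of that counter-table idea (the table name, key and attribute names are all assumptions):
import boto3
counters = boto3.resource("dynamodb").Table("counters")  # assumed helper table: name -> value
def next_id():
    # Atomic counter: increments and returns the new value in a single call
    resp = counters.update_item(
        Key={"name": "orders"},
        UpdateExpression="ADD #v :one",
        ExpressionAttributeNames={"#v": "value"},   # 'value' is a DynamoDB reserved word
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["value"])
For the GUID alternative mentioned next, str(uuid.uuid4()) from the standard library is all you need.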
Generally speaking, a GUID is a very easy-to-use alternative for a primary key where you don't have another obvious field to use. | 1 | 1 | 0 | I am working with Python Flask and DynamoDB. Is there any functionality to auto-increment the primary key in AWS DynamoDB (like SQL's AUTOINCREMENT)? Any suggestions on how to handle the auto-increment feature in AWS DynamoDB? | DynamoDB AutoIncrement Primary key Python flask | 1.2 | 1 | 0 | 923
48,025,594 | 2017-12-29T16:36:00.000 | 0 | 1 | 0 | 0 | python,api,dynamics-crm,exchange-server,exchangelib | 48,050,166 | 1 | false | 0 | 0 | How about using MS Flow? I have done similar thing before using MS Flow to transfer emails from CRM to an online email processing service and retrieving emails back to CRM. | 1 | 0 | 0 | I am looking for a way to automatically check contacts in Dynamics 365 CRM and sync them to mailcontacts in Exchange online distribution lists.
Does anyone have suggestions on where to start with this?
I was thinking of trying to use python scripts with the exchangelib library to connect to the CRM via API, check the CRM contacts, then connect to Exchange online with API to update the mailcontacts in specific distribution lists if needed.
Does this sound plausible?
Are their more efficient ways of accomplishing this?
Any suggestions are greatly appreciated, thanks. | Syncing Dynamics CRM contacts to Exchange mailcontacts distribution lists | 0 | 0 | 1 | 76 |
48,026,111 | 2017-12-29T17:21:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,pyspark,amazon-redshift,aws-glue | 50,180,152 | 5 | false | 0 | 0 | You need to modify the auto-generated code provided by Glue. Connect to Redshift using a Spark JDBC connection and execute the purge query.
To spin up the Glue containers in the Redshift VPC, specify the connection in the Glue job so it gains access to the Redshift cluster.
Hope this helps. | 2 | 4 | 1 | I have created a Glue job that copies data from S3 (csv file) to Redshift. It works and populates the desired table.
However, I need to purge the table during this process as I am left with duplicate records after the process completes.
I'm looking for a way to add this purge to the Glue process. Any advice would be appreciated.
Thanks. | AWS Glue Truncate Redshift Table | 0 | 1 | 0 | 4,214 |
48,026,111 | 2017-12-29T17:21:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,pyspark,amazon-redshift,aws-glue | 65,486,258 | 5 | false | 0 | 0 | The link @frobinrobin provided is out of date, and I tried many times that the preactions statements will be skiped even you provide a wrong syntax, and came out with duplicated rows(insert action did executed!)
Try this:
just replace the syntax from
glueContext.write_dynamic_frame.from_jdbc_conf() in the link above to glueContext.write_dynamic_frame_from_jdbc_conf() and it will work!
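For illustration, a hedged sketch of the working call (connection, database and table names are placeholders; glueContext and args come from the job's auto-generated boilerplate):
glueContext.write_dynamic_frame_from_jdbc_conf(
    frame=dyf,                                   # the DynamicFrame read from S3
    catalog_connection="redshift-connection",    # name of the Glue connection
    connection_options={
        "dbtable": "public.target_table",
        "database": "mydb",
        "preactions": "TRUNCATE TABLE public.target_table;",
    },
    redshift_tmp_dir=args["TempDir"],
)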
At least this helped me out in my case (previously the AWS Glue job just inserted data into Redshift without executing the truncate-table preactions). | 2 | 4 | 1 | I have created a Glue job that copies data from S3 (a csv file) to Redshift. It works and populates the desired table.
However, I need to purge the table during this process as I am left with duplicate records after the process completes.
I'm looking for a way to add this purge to the Glue process. Any advice would be appreciated.
Thanks. | AWS Glue Truncate Redshift Table | 0.07983 | 1 | 0 | 4,214 |
48,026,159 | 2017-12-29T17:25:00.000 | 0 | 0 | 0 | 0 | python,django | 48,027,969 | 1 | false | 1 | 0 | You should check whether, in Settings/Languages & Frameworks/Django, the option Enable Django Support is selected;
then you should check that the paths to the Django project root, the settings file and manage.py are set correctly. (A hedged side note on your last question: as far as I know, PyCharm's "No reload" option corresponds to Django's runserver --noreload flag; it disables the auto-reloader, which normally spawns a child process to watch for code changes, and that is presumably why enabling it sidesteps the crash.)
Regards | 1 | 0 | 0 | I am working with the PyCharm Professional version and I am not able to run the Django server from PyCharm, though I can start it from cmd.
After starting the server from cmd, everything works fine, I can reach my urls and get a response.
When I am trying to do the same from Pycharm, first I get a windows error ("Python has stopped working. A problem caused the program to stop working correctly. Please close the program.) When I click "Close the program", it does not close, and I can see the following in error in PyCharm:
Process finished with exit code -1073741819 (0xC0000005)
I am using the following versions:
Python version: 3.6.1
Django version: 1.9.13
I have already tried Tools | Run manage.py Task | runserver , which gives the same result.
I have found a way to make this error disappear in PyCharm:
When I go to Run | Edit Configurations | and check "No reload", then the server starts normally and I can access it through the browser.
Does anyone have a hint on why this is happening? What is the purpose of "No reload"?
Thank you for the answer! | Why does Django server crash in PyCharm professional? | 0 | 0 | 0 | 326 |
48,026,532 | 2017-12-29T18:01:00.000 | 2 | 0 | 1 | 0 | python,nlp,nltk,wordnet | 48,027,161 | 1 | true | 0 | 0 | The general task, as you've described it, is not possible from simple textual analysis of the input characters. English does not have consistent rules for handling words as they evolve. Yes, an excellent lemmatiser will solve the straightforward cases for you, those that can be discerned by applying transformations common within that POS (such as irregular verbs).
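To see where the off-the-shelf tools run out, a quick NLTK check (requires the WordNet data to be downloaded):
from nltk.stem import PorterStemmer, WordNetLemmatizer
ps, wnl = PorterStemmer(), WordNetLemmatizer()
print(ps.stem("sung"), ps.stem("sing"))        # 'sung' 'sing': the stems never meet
print(wnl.lemmatize("sung", pos="v"))          # 'sing': works, but only with pos='v'
print(wnl.lemmatize("medication"), wnl.lemmatize("medicine"))  # still two distinct words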
However, to eliminate false negatives, you must have complete coverage of each word's basis; completeness will require etymology, especially in cases where the root word isn't in the English language, or perhaps doesn't appear in the shortened word itself.
For instance, what software tool could tell you that dis and speculum have the same root (specere), but that species does not? How would you tell that gentle, gentile, genteel, and jaunty have the same root? You'll need the etymology to get 100% of the actual connections. | 1 | 3 | 0 | I'd like to write a function same_base(word1, word2) that returns True when word1 and word2 are two English words derived from the same root word. I realize that words can have multiple senses; I want the algorithm to be overzealous, returning True whenever it is possible to view the words as originating from the same place. Some false positives are OK; false negatives are not.
Typically, stemming and lemmatization would be used for this. Here's what I've tried:
Check if the words stem to the same thing, using, for instance, the Porter Stemmer. This doesn't catch sung and sing, dig and dug, medication and medicine.
Check if the words lemmatize to the same thing. It's unclear what arguments to pass to the lemmatizer (i.e., for part of speech). The WordNet lemmatizer, at least, seems to be too conservative.
Does such a tool exist? Do I just need an extremely aggressive stemmer / lemmatizer combo — and if so, where would I find one? | Determining if two words are derived from the same root in Python | 1.2 | 0 | 0 | 633 |
48,027,878 | 2017-12-29T20:04:00.000 | 0 | 0 | 1 | 0 | python | 48,027,980 | 1 | false | 0 | 0 | Frozen-binary tools do exist for Python 3: cx_Freeze and PyInstaller both support it. Alternatively, you can use the C API that allows you to run Python code from C, but ideally you would just use C to begin with. | 1 | 0 | 0 | How can you convert Python 3 code into an executable? I am aware of the py2exe program, but it's only for Python 2. I am asking if there is any program that turns Python 3 code into an exe. | Conversion of Python code to executable | 0 | 0 | 0 | 67
48,028,570 | 2017-12-29T21:19:00.000 | 0 | 0 | 0 | 1 | python,ubuntu | 48,030,928 | 1 | false | 0 | 0 | If what you mean is 'the process running in the terminal stopped, and you want to run it a second time':
Then: 1. you can recall it with the up arrow key on your keyboard;
2. you can type 'sudo gedit ~/.bash_history' or 'history | less' in the terminal to see the history of commands you entered.
If what you mean is 'your process may be stopped for some reason, but you want it to start running again', the usual trick is a supervising loop, as in the sketch below; steps 1 and 2 after it describe the same idea with a separate script.
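A minimal Python supervisor sketch (command.py is a placeholder for your program):
import subprocess, time
while True:
    ret = subprocess.call(["python3", "command.py"])  # blocks until the process exits
    print("process exited with code", ret, "- restarting in 5 seconds")
    time.sleep(5)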
You can try this: 1. create a script to run your process;
2. find the places in your code that may cause your program to stop, use conditional statements to detect them, and call the script created above once your code stops. | 1 | 0 | 0 | How can I keep the command running if the process is auto-killed?
example:
python3 "command"
then,
command running
if,
Killed (process killed or stop)
then,
run the earlier command again | How to run a command if the process is killed | 0 | 0 | 0 | 305
48,029,089 | 2017-12-29T22:26:00.000 | 0 | 0 | 1 | 0 | python,spyder,cv2 | 48,029,193 | 1 | false | 0 | 0 | As far as I know, this cv2 build is Python 2.x related. When you use cv2 you need to copy the cv2.pyd file directly into the Python 2.x site-packages. | 1 | 0 | 0 | I have Anaconda 3 installed (Python 3.6.3) and I am working on a project that uses the cv2 package in Python. Now, I am relatively new to Python, but I have been using this package (via the "import cv2" command) in PyCharm with no problems. However, today I wanted to give Spyder a try. I started using the IDE and everything was working. But all of a sudden, Spyder kept giving me an error when I tried to import cv2. I really did not change anything; I was only debugging the script. The error I am getting is as follows:
"""
In [1]: import cv2
Traceback (most recent call last):
File "", line 1, in
import cv2
ImportError: DLL load failed: The specified procedure could not be found.
"""
I cannot figure out how this can be related to the IDE since PyCharm still can import it. Could anybody give me a hint please? | Importing CV2 does not work in Spyder but works in PyCharm | 0 | 0 | 0 | 853 |
48,031,699 | 2017-12-30T06:48:00.000 | 0 | 0 | 1 | 0 | python,string,mode | 48,031,735 | 5 | false | 0 | 0 | You could always roll your own - if the list is sorted you can keep a running mode and only update it when a new value is presented with more occurrences. | 1 | 2 | 0 | example of list [1,1,2,2,4,5]
the modes are 1 and 2.
If I use the mode function in the statistics package in Python, I get an error saying that there is no unique mode.
But I want the lower mode i.e. "1" to be printed.
Thanks for your answers. | Select the mode with the lowest value | 0 | 0 | 0 | 1,292 |
48,031,800 | 2017-12-30T07:09:00.000 | 4 | 0 | 0 | 0 | python,django | 48,031,808 | 2 | false | 1 | 0 | instead of django-admin.py startproject projectname, use
django-admin startproject projectname
In Windows, open the folder where you want to create the project, point to empty space, press Shift and then right-click. Choose the option Open Command Window Here. Then in the command window, just enter the above command. | 1 | 1 | 0 | I tried creating a project in Windows. When I type django-admin.py startproject projectname it seems to run, but nothing is created in the folder. Everything is installed, and a previous project is also running fine. | How to create Django project in Windows? | 0.379949 | 0 | 0 | 1,446
48,031,855 | 2017-12-30T07:18:00.000 | 1 | 1 | 1 | 1 | python,condor | 61,310,906 | 1 | false | 0 | 0 | You can simply run an interactive job (basically just a job with sleep or cat as the command) and use condor_ssh_to_job to get into it.
Generally you need to set up your Python environment on the compute node; it is best to have a venv and activate it inside your start script. | 1 | 1 | 0 | I am submitting a python script to condor. When condor runs it, it gets
an import error. Condor runs it as
/var/lib/condor/execute/dir_170475/condor_exec.exe. If I manually copy
the python script to the execute machine and put it in the same place
and run it, it does not get an import error. I am wondering how to
debug this.
How can I see the command line condor uses to run it? Can the file
copied to /var/lib/condor/execute/dir_170475/condor_exec.exe be
retained after the failure so I can see it? Any other suggestions on
how to debug this? | Debugging htcondor issue running python script | 0.197375 | 0 | 0 | 289 |
48,032,756 | 2017-12-30T09:48:00.000 | 2 | 0 | 0 | 1 | python,jenkins,build,continuous-deployment | 48,034,085 | 1 | false | 1 | 0 | Option 1) without using Docker
Choose a Freestyle project when creating the Jenkins job.
Configure the Source Code Management section to tell Jenkins where to fetch the source code.
Write the build commands you would run manually in the Build section.
If your Jenkins slave does not have Python installed and the Jenkins server cannot install Python on the slave for you, you need to write commands in the Build section to install the Python and pip needed for the build on the slave.
Option 2) using Docker
Build a Docker image with all the required software and tools installed and configured, to supply a build environment.
(You can search Docker Hub for an existing image that meets your requirements before building an image yourself.)
Upload the image to Docker Hub.
Install the Docker engine on the Jenkins slave.
Create a Freestyle project.
Configure the Source Code Management section.
Write Docker commands in the Build section to pull the image, start a container, and execute the build commands inside the container. | 1 | 3 | 0 | I want to build an existing Python web application in Jenkins, just like building a Java application using Maven.
Is it possible or not? If possible, please help me with the necessary configuration for building and continuous deployment of the application. | Building a Python Web application in jenkins | 0.379949 | 0 | 0 | 1,298
48,032,766 | 2017-12-30T09:48:00.000 | 0 | 0 | 1 | 0 | python,shell,ipython | 48,033,310 | 1 | false | 0 | 0 | Sounds like you might want the PgDn key. | 1 | 0 | 0 | I am running python 3.5 in the ipython interactive shell.
I used the %load magic command to continue my session, but the cursor was at the top and I am now having to scroll line by line to the bottom. Is there some keyboard shortcut in ipython that can get me to the bottom line?
I know there is the %run magic command, but it's not quite what I'm looking for here: I ran it, realized I had made a mistake, and then wanted to run the edited new block of code.
Thanks! | How to quickly go to bottom line of pasted code ipython shell | 0 | 0 | 0 | 159 |
48,036,741 | 2017-12-30T18:28:00.000 | 0 | 0 | 0 | 0 | python,pymc3 | 48,042,507 | 3 | false | 0 | 0 | Let step = pymc3.Metropolis() be our sampler; we can get the final acceptance information through
"step.accepted"
Just for beginners (in pymc3) like myself: after each variable/object, put a "." and hit the Tab key; you will see some interesting suggestions ;) | 1 | 0 | 1 | Does anyone know how I can see the final acceptance rate in PyMC3 (Metropolis-Hastings)? Or in general, how can I see all the information that pymc3.sample() returns?
Thanks | Acceptance-rate in PyMC3 (Metropolis-Hastings) | 0 | 0 | 0 | 551 |
48,039,657 | 2017-12-31T03:12:00.000 | 2 | 0 | 1 | 0 | python | 48,039,677 | 3 | false | 0 | 0 | You can try newlist = [i for i in original_list if i > 0] or None (an empty list is falsy, so the trailing or None gives you None when nothing is positive). Note that in Python 3, filter(lambda a: a > 0, original_list) returns a lazy iterator that is always truthy, so you would have to wrap it in list(...) before the or None trick works. | 1 | 1 | 0 | I am wondering how I can take only positive values in a list in Python.
For example, if I have A = [1, 2, 3], it should return [1, 2, 3]
If I have A = [-1, 2, 3], it should return [2, 3]
If I have A = [-1, -2], it should return None
Thank you very much! | How to take only positive values in a list in Python? | 0.132549 | 0 | 0 | 20,880 |
48,040,573 | 2017-12-31T06:45:00.000 | 1 | 0 | 1 | 1 | python,macos,pip,kernel,atom-editor | 48,050,542 | 3 | false | 0 | 0 | Well although I am no Mac expert I've given it a shot anyway:
Yes you could but do you really want to risk it (or even do it)?
macOS must rely on Python to fulfill something in the OS, otherwise it would not come inbuilt. This means two things:
The Python installation will be minimal. By that I mean it will have things missing (any large library for a start). They will do this mainly to cut down on the OS size. Therefore you will not have the full Python library and in the long term you may end up missing out.
Second, if anything goes wrong (i.e. you break your installation or even modify it; yep, I've done this in Linux and ended up factory resetting), you may cause something to stop working and may need to factory reset or perform some other drastic action on your OS. A separate installation would prevent you from risking this. This is very useful, because there comes a time when you may decide to update certain modules with pip and find that it can't, or that it updates something you shouldn't be messing with.
Yes it's possible you may run into compatibility problems but I think it's most widely accepted that you do not use the inbuilt one as it needs to remain unchanged if the OS is to use it correctly. Messing with it increases the chances of it breaking.
Conclusion: even though installing modules with pip (and getting pip itself) can be done with the inbuilt Python, it comes down to whether you want to risk harming your OS. I strongly suggest you get a separate installation and leave the inbuilt one as it is. Second, as you mentioned, you will find that the inbuilt versions are never up to date, or are built in ways that are not really compatible with standard libraries (expect things like missing runtime libraries all the time); just another reason to steer clear of them. | 1 | 1 | 0 | I have read a bunch of posts here and on Google, but my question is far more basic than the answers: If Python (2.7) came pre-installed on my MacBook Pro (High Sierra), can I just do sudo easy_install pip (as suggested) from the command line without causing issues? I have a vague understanding of global/local installations, and my understanding is that certain Python installations aren't compatible with local/global kernel installations. I hope I am getting the terminology right, but I saw several warnings about installing pip for "a homebrew based python installation", and I am not sure whether Python on my laptop is installed via Homebrew (nor how to find out).
My question came about because I wanted to install the Hydrogen package to use in Atom, the text editor (to help me learn Python). I finally succeeded in installing Hydrogen, but got stumped by the missing kernels (not sure which ones I need, so I am willing to install them all). But I can't seem to install the kernels without pip. So here I am.
My apologies for asking such a basic question--and thanks! | How to install pip for a total newbie | 0.066568 | 0 | 0 | 442 |
48,040,767 | 2017-12-31T08:40:00.000 | 3 | 0 | 0 | 0 | python,firebase,flask,firebase-hosting | 48,040,899 | 1 | true | 1 | 0 | You don't need Firebase Hosting to use Firebase Functions, and as you mentioned, Firebase Hosting is for static pages.
Firebase Functions are hosted on Firebase (independently of Firebase Hosting for static pages), and currently don't support Python.
For HTTP-triggered Firebase Functions you simply make HTTP requests to your function's URL, from any backend or from the frontend itself.
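For example, calling an HTTPS function from Python might look like this (the URL is a placeholder for your region/project/function):
import requests
url = "https://us-central1-my-project.cloudfunctions.net/myFunction"  # placeholder
resp = requests.post(url, json={"payload": 123})
print(resp.status_code, resp.text)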
Firebase DB/Storage and other triggered functions work in the same way, but you don't call them explicitly; they are triggered by the specific events in DB/Storage etc. that you specify when defining the functions. | 1 | 1 | 0 | I have a website that uses Python/Flask, and I know that Firebase Hosting is only for static websites, but I need to be able to use Firebase Cloud Functions in my app, and that requires Firebase Hosting (please correct me if I am wrong). As Node.js is server side but you can still use it with Firebase Hosting, I was hopeful that there might be a way to use Python too. Otherwise, if there is a way to use Cloud Functions without Firebase Hosting, you can tell me about that too. | Can python/flask websites be hosted on firebase? | 1.2 | 0 | 0 | 9,781
48,041,129 | 2017-12-31T08:40:00.000 | 0 | 0 | 0 | 0 | python,django | 48,041,312 | 1 | true | 1 | 0 | A pure django solution would be:
create a form with three integer fields (say, num1, num2 and result)
in your view, populate num1 and num2 with the numbers to be added
render the form in your template (num1 and num2 fields should be read only)
the user enters the answer in the result field and submits the form
in your view, determine whether num1 + num2 == result
redirect to a success page if the answer is correct, otherwise redisplay the form | 1 | 0 | 0 | I've been thinking of this for a LONG time the past few days and can't figure out with my current toolset how to implement this in Django.
What I want is something that can be implemented trivially in Java for example, but how I'd do that on the web is much more difficult
The problem
I want to send data to a HTML template, specific example:
"What is 5 + 50?"
The data must be dynamically generated, so the 5 and 50 are actually randomly generated.
I think I am comfortable doing this part, as I'd simply pass a random variable to a template using the views.py
This is where I do not know how to proceed
I want the user to be able to enter their answer, and have them notified if it is correct.
To do this, I'd need to pass the variables from the template back to another view function. I am not sure how to do that, if it is possible at all.
This is how I'm deciding to pursue my project idea, and I'm not sure if this is the most efficient way
tl;dr I just wanted data to be randomly generated and calculated using Django | Coding mental block with specific Django task | 1.2 | 0 | 0 | 131 |
48,041,557 | 2017-12-31T10:03:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller | 48,041,611 | 2 | false | 0 | 0 | First try running the command from a command line (terminal) so you can see what happens. Not from inside any IDE.
If you created the script in a MSWindows environment, be sure your PATH includes the executable (or explicitly give the path to the script).
In Linux/Unix, be sure to enable execute permission for the script: chmod +x scriptname | 2 | 0 | 0 | I'm trying to learn how to use pyinstaller to make an executable. I wrote a little script in 2.7 as a test: print 'test', and named it test-print. When I click on the executable in the build folder, a cmd screen flashes, and that's all she wrote. I tried adding x = raw_input('input something: ') hoping that would cause the cmd screen, or some kind of screen, to persist, but to no avail. I know this is basic stuff, but a voice of experience would be most helpful.
F | pyinstaller -- script not found. | 0 | 0 | 0 | 1,732 |
48,041,557 | 2017-12-31T10:03:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller | 48,043,401 | 2 | false | 0 | 0 | In PyInstaller the actual executable is located in the dist folder. I assume you did not use PyInstaller's "--onefile" switch, so once you finish compiling, navigate to dist, then test-print. There, look for test-print.exe in that folder. That is your executable. | 2 | 0 | 0 | I'm trying to learn how to use pyinstaller to make an executable. I wrote a little script in 2.7 as a test: print 'test', and named it test-print. When I click on the executable in the build folder, a cmd screen flashes, and that's all she wrote. I tried adding x = raw_input('input something: ') hoping that would cause the cmd screen, or some kind of screen, to persist, but to no avail. I know this is basic stuff, but a voice of experience would be most helpful.
F | pyinstaller -- script not found. | 0 | 0 | 0 | 1,732 |
48,041,979 | 2017-12-31T11:13:00.000 | 0 | 0 | 1 | 1 | windows,python-3.x,cx-freeze | 48,140,125 | 1 | false | 0 | 0 | The .exe output actually depends on the Python installation you are using (I'm assuming yours is x64): if you freeze with a 32-bit (x86) Python installation, it will output an executable of that bitness, and the freezing itself works in the normal way (I know both bit versions run on x64 computers).
Bear in mind that a 32-bit build runs on both x86 and x64 Windows, so you might do best with that bit version. | 1 | 0 | 0 | Thank you for your answer in advance. I am making a Python program to automate something for my friend, and I converted it into a .exe file with cx_Freeze so I could give it to him. But I have an x64 machine and he has an x86 machine. Can anyone please tell me how to make an x86 .exe file on my PC? I'm using Win 10 x64 and Python 3.6 | Making a x86 .exe on a x64 machine | 0 | 0 | 0 | 181
48,042,158 | 2017-12-31T11:44:00.000 | 0 | 1 | 0 | 1 | python,hotkeys | 48,042,519 | 1 | false | 0 | 0 | Some things to consider when trying to setup a hotkey to successfully execute any kind of command:
Each operating system has its own ways to set up hotkeys, and sometimes this may differ between distributions as well as between desktop managers.
Many of the relevant how-to descriptions are easily found via regular search engines such as Google.
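If you would rather keep the whole thing inside Python (see the next point), the third-party keyboard package can register a global hotkey; a sketch (the hotkey choice is arbitrary, and the package needs admin/root rights on some platforms):
import keyboard  # third-party: pip install keyboard
def run_sequence():
    print("running the mouse/keyboard sequence")  # the pyautogui calls would go here
keyboard.add_hotkey("ctrl+shift+h", run_sequence)
keyboard.wait("esc")  # keep the script alive until Esc is pressed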
If you would in fact like your script to set up its own hotkeys, you would have to rewrite it in such a manner that it can detect the current operating system/distribution/desktop manager by itself and execute the commands relevant to that particular setup. | 1 | 2 | 0 | I've created a simple script that executes a "moving mouse and keyboard" sequence. Currently, to get this to work, I use a shell (IDLE) to run it, and that is too slow with boot-up time and such.
Is there a way to have this Python file on the desktop and, with a hotkey, swiftly run the code? I tried doing it through the terminal but it doesn't like my module.
Some info:
This is for both mac and windows.
The module I imported is pyautogui from PyPi and it works in the shell.
Thank you in advance! | Run Python script quickly via HotKey | 0 | 0 | 0 | 5,035 |
48,043,217 | 2017-12-31T14:20:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,numpy,64-bit | 48,043,323 | 3 | false | 0 | 0 | In Windows I always use this:
Rename the pip.exe to pip64.exe for example
Add the Python folder to the system PATH if it is not there already.
You can use "pip64 install package_name" | 3 | 0 | 0 | I was having a memory error in Python with a program, I found out I had to upgrade my Python to 64 bit. I did that. I then copied all the files from the Lib/site-packages folder of the Python 32 bit and pasted it in the 64 bit folder. I did this so I wouldn't have to install the modules again for my program.
I ran the program and got the following error:
NameError: name 'numpy' is not defined
And yes, I had import numpy in the program
I think the problem is that I have to actually pip install numpy inside of the 64 bit Python (even though I copied the exact same Lib/site-packages from 32 bit to 64 bit) using cmd. If that is the problem, how do I specifically pip install inside of the 64 bit Python folder rather than the default 32 bit folder?
Otherwise, any suggestions? | Using 32-bit and 64-bit Python - "NameError: global name 'numpy' etc. is not defined" | 0 | 0 | 0 | 302 |
48,043,217 | 2017-12-31T14:20:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,numpy,64-bit | 48,043,249 | 3 | false | 0 | 0 | For windows Shift-Right-Click on your 64-bit installation folder that has python.exe and select Open command window here. Then type python.exe -m pip install numpy in there and press Enter.
What this does is that it calls 64-bit python's pip to install numpy instead. | 3 | 0 | 0 | I was having a memory error in Python with a program, I found out I had to upgrade my Python to 64 bit. I did that. I then copied all the files from the Lib/site-packages folder of the Python 32 bit and pasted it in the 64 bit folder. I did this so I wouldn't have to install the modules again for my program.
I ran the program and got the following error:
NameError: name 'numpy' is not defined
And yes, I had import numpy in the program
I think the problem is that I have to actually pip install numpy inside of the 64 bit Python (even though I copied the exact same Lib/site-packages from 32 bit to 64 bit) using cmd. If that is the problem, how do I specifically pip install inside of the 64 bit Python folder rather than the default 32 bit folder?
Otherwise, any suggestions? | Using 32-bit and 64-bit Python - "NameError: global name 'numpy' etc. is not defined" | 0 | 0 | 0 | 302 |
48,043,217 | 2017-12-31T14:20:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,numpy,64-bit | 48,043,267 | 3 | false | 0 | 0 | I then copied all the files from the Scripts folder of the Python 32
bit and pasted it in the 64 bit folder. I did this so I wouldn't have
to install the modules again for my program.
That wasn't a good idea. There are executables in the Scripts folder that are not the same for 32-bit and 64-bit Python. You have to do pip install separately for 32-bit and 64-bit Python. Any DLL involved in the install will not be the same for both versions, in fact 64-bit Python will not even see a 32-bit DLL.
Do it the long way. I know that's a drag but taking shortcuts is likely to lead to baffling errors. | 3 | 0 | 0 | I was having a memory error in Python with a program, I found out I had to upgrade my Python to 64 bit. I did that. I then copied all the files from the Lib/site-packages folder of the Python 32 bit and pasted it in the 64 bit folder. I did this so I wouldn't have to install the modules again for my program.
I ran the program and got the following error:
NameError: name 'numpy' is not defined
And yes, I had import numpy in the program
I think the problem is that I have to actually pip install numpy inside of the 64 bit Python (even though I copied the exact same Lib/site-packages from 32 bit to 64 bit) using cmd. If that is the problem, how do I specifically pip install inside of the 64 bit Python folder rather than the default 32 bit folder?
Otherwise, any suggestions? | Using 32-bit and 64-bit Python - "NameError: global name 'numpy' etc. is not defined" | 0.066568 | 0 | 0 | 302 |
48,043,363 | 2017-12-31T14:43:00.000 | 0 | 1 | 0 | 0 | python,discord,discord.py | 48,051,694 | 2 | false | 0 | 0 | As Wright has commented, this is against Discord's ToS. If you get caught using a selfbot, your account will be banned. However, if you still REALLY want to do this, what you need to do is add/replace bot.run('email', 'password') at the bottom of your bot, or client.run('email', 'password'), depending on how your bot is programmed. | 1 | 2 | 0 | The Slack API provides you two options: use the app as a bot, or as a logged-in user itself. I want to create an app that works as a user and runs channel commands. How can I do this with discord.py? | How to login as user rather bot in discord server and run commands? | 0 | 0 | 1 | 8,029
48,047,793 | 2018-01-01T07:06:00.000 | 2 | 0 | 0 | 0 | python,tensorflow | 48,048,843 | 1 | false | 0 | 0 | The main benefit of using tf.Variable is that you don't have to explicitly state what to optimize when you train a neural network. From the perspective of machine learning, Variables are network parameters (or weights) which have some initial value before training but get optimized during training: the gradients of the loss are calculated with respect to the Variables, and an optimization algorithm updates their values (the famous update w = w - alpha * dL/dw for the SGD algorithm).
In contrast, tf.constant is used to store those network parameters for which gradients of the loss are not intended. During training, the gradients of the loss with respect to constants are not calculated and hence their values are not updated. In order to feed the network with inputs, we use tf.placeholder.
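A tiny TF 1.x-style sketch of that division of labor:
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 3])   # inputs are fed in at run time
w = tf.get_variable("w", [3, 1])            # trainable: gradients flow into it
b = tf.constant(0.5)                        # fixed: never updated by training
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # only updates w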
Are they essential? Yes. A deep neural network has millions of parameters across dozens of Variables and though one can
define tensors with some initial value (based on some heuristics, like Glorot initialization)
calculate the gradient of the loss with respect to each of them (maybe using a for loop)
update their values based on some optimization algorithm (e.g., SGD)
taking extra care not to leave behind any of them,
a wise person will just use Variables and let the magic happen. After all, more (and messier) code means more chances of error. | 1 | 0 | 1 | I pretty much understand the concept of a Variable in TensorFlow (I think), but I still haven't found them very useful. The main reason I've read for using them is to later restore them, which might be handy in some cases, but you could achieve similar results using numpy.save for saving matrices and values, or even writing them out to a log file.
Variables are used to save Tensors, but you can use a Python variable to save them, avoiding TensorFlow's extra Variable wrapper.
Variables become nice when used via the get_variable function, since the code is much cleaner than using a dictionary to store weights; however, in terms of functionality, I don't get why they are important.
My conclusion about TF Variables is that they help us to write nicer code but they are not essential. Any thoughts about them? | Are Tensorflow Variables essential? | 0.379949 | 0 | 0 | 52 |
48,049,253 | 2018-01-01T11:26:00.000 | 3 | 0 | 0 | 0 | linux,python-2.7,pygtk,python-3.6,pycairo | 49,293,259 | 8 | false | 0 | 1 | install cairocffi, and
replace import cairo with import cairocffi as cairo. | 5 | 9 | 0 | I get the following error when trying to import gtk in Python 2.7:
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gtk/__init__.py", line 40, in <module>
from gtk import _gtk
File "/usr/lib/python2.7/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python2.7/site-packages/cairo/_cairo.so: undefined symbol: cairo_tee_surface_index
And I get the following error when trying to import cairo from Python 3.6:
>>> import cairo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
I compiled and built the modules in the order as given in the BLFS book.
I also installed cairo as given in the book with tee enabled.
My system is an LFS, with 4.14.4 Kernel Version, with Python 2.7.14 and Python 3.6.4.
EDIT: Downloaded the source and did 'make uninstall' and then reinstalled it. Now I can import cairo without any errors. | Python : Import cairo error (2.7 & 3.6) undefined symbol: cairo_tee_surface_index | 0.07486 | 0 | 0 | 7,143 |
48,049,253 | 2018-01-01T11:26:00.000 | 0 | 0 | 0 | 0 | linux,python-2.7,pygtk,python-3.6,pycairo | 50,362,423 | 8 | false | 0 | 1 | For me,
ldd /usr/lib64/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so
showed:
libcairo.so.2 => /usr/local/lib/libcairo.so.2
I had a stale self-compiled cairo installation. If you still have the original compile tree, you can run make uninstall within it. Otherwise, simply move the offending cairo files in /usr/local/lib manually to another location, and delete once you are sure the files are unnecessary. | 5 | 9 | 0 | I get the following error when trying to import gtk in Python 2.7 :
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gtk/__init__.py", line 40, in <module>
from gtk import _gtk
File "/usr/lib/python2.7/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python2.7/site-packages/cairo/_cairo.so: undefined symbol: cairo_tee_surface_index
And I get the following error when trying to import cairo from Python 3.6:
>>> import cairo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
I compiled and built the modules in the order as given in the BLFS book.
I also installed cairo as given in the book with tee enabled.
My system is an LFS, with 4.14.4 Kernel Version, with Python 2.7.14 and Python 3.6.4.
EDIT: Downloaded the source and did 'make uninstall' and then reinstalled it. Now I can import cairo without any errors. | Python : Import cairo error (2.7 & 3.6) undefined symbol: cairo_tee_surface_index | 0 | 0 | 0 | 7,143 |
48,049,253 | 2018-01-01T11:26:00.000 | 5 | 0 | 0 | 0 | linux,python-2.7,pygtk,python-3.6,pycairo | 55,534,650 | 8 | false | 0 | 1 | I just shifted to an older version of pycairo. Try downloading version 1.11.0.
pip uninstall pycairo
pip install pycairo==1.11.0
You can shift to other available versions too.
At this time, they are:
1.11.0, 1.11.1, 1.12.0, 1.13.0, 1.13.1, 1.13.2, 1.13.3, 1.13.4, 1.14.0, 1.14.1, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.15.5, 1.15.6, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.17.0, 1.17.1, 1.18.0
I don't know much about its internals; I just used brute force to find a working version.
Hope it helps. | 5 | 9 | 0 | I get the following error when trying to import gtk in Python 2.7 :
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gtk/__init__.py", line 40, in <module>
from gtk import _gtk
File "/usr/lib/python2.7/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python2.7/site-packages/cairo/_cairo.so: undefined symbol: cairo_tee_surface_index
And I get the following error when trying to import cairo from Python 3.6:
>>> import cairo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
I compiled and built the modules in the order as given in the BLFS book.
I also installed cairo as given in the book with tee enabled.
My system is an LFS, with 4.14.4 Kernel Version, with Python 2.7.14 and Python 3.6.4.
EDIT: Downloaded the source and did 'make uninstall' and then reinstalled it. Now I can import cairo without any errors. | Python : Import cairo error (2.7 & 3.6) undefined symbol: cairo_tee_surface_index | 0.124353 | 0 | 0 | 7,143 |
48,049,253 | 2018-01-01T11:26:00.000 | 0 | 0 | 0 | 0 | linux,python-2.7,pygtk,python-3.6,pycairo | 58,881,631 | 8 | false | 0 | 1 | I found that the root error is not finding py3cairo.h.
Just locate py3cairo.h and run ln -s /usr/include/pycairo/py3cairo.h /usr/include/py3cairo.h;
then the compilation works without error. | 5 | 9 | 0 | I get the following error when trying to import gtk in Python 2.7:
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gtk/__init__.py", line 40, in <module>
from gtk import _gtk
File "/usr/lib/python2.7/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python2.7/site-packages/cairo/_cairo.so: undefined symbol: cairo_tee_surface_index
And I get the following error when trying to import cairo from Python 3.6:
>>> import cairo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
I compiled and built the modules in the order as given in the BLFS book.
I also installed cairo as given in the book with tee enabled.
My system is an LFS, with 4.14.4 Kernel Version, with Python 2.7.14 and Python 3.6.4.
EDIT: Downloaded the source and did 'make uninstall' and then reinstalled it. Now I can import cairo without any errors. | Python : Import cairo error (2.7 & 3.6) undefined symbol: cairo_tee_surface_index | 0 | 0 | 0 | 7,143 |
48,049,253 | 2018-01-01T11:26:00.000 | 2 | 0 | 0 | 0 | linux,python-2.7,pygtk,python-3.6,pycairo | 53,972,647 | 8 | false | 0 | 1 | I'm using conda and I have the same issue, but the paths are a little bit different due to the conda env:
ImportError: /home/juro/anaconda3/envs/py37/lib/python3.7/site-packages/cairo/_cairo.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
$ ldd /home/juro/anaconda3/envs/py37/lib/python3.7/site-packages/cairo/_cairo.cpython-37m-x86_64-linux-gnu.so
outputs:
...
libcairo.so.2 => /home/juro/anaconda3/envs/py37/lib/libcairo.so.2 (0x00007ff6d8ad9000)
...
It seems that the conda (Anaconda) cairo package is broken, or the pip pycairo package is broken (I don't know whose fault it is ;)). The symbol cairo_tee_surface_index is missing from the "libcairo.so.2" library. That symbol is required by the pycairo package (pip install pycairo), so when you do "import cairo" you get that failure.
You have these options:
I found out that my system (debian) libcairo.2 has that missing symbol:
$ strings /usr/lib/x86_64-linux-gnu/libcairo.so.2.11400.8 | grep cairo_tee_surface_index. So I just downgraded my conda cairo to the same version as on my system (conda install cairo=version) and copied my system libcairo over my conda libcairo: cp /usr/lib/x86_64-linux-gnu/libcairo.so.2.11400.8 ~/anaconda3/lib/libcairo.so.2.11400.8. You can back up the original one, but don't use the move command (mv), because those libraries are hardlinks (they can be shared among multiple conda environments). Use just cp for the backup.
You can change the RPATH inside the "_cairo.cpython-36m-x86_64-linux-gnu.so" library file using the chrpath command (man chrpath) to point to the folder containing the correct libcairo.so.2. By correct I mean the library built with the cairo_tee_surface_index symbol.
Build your own cairo library (the same version as shown by conda list cairo) and copy it over ~/anaconda3/lib/libcairo.so.2.{additional_version_characters}.
Where is your system's libcairo?
/sbin/ldconfig -p | grep libcairo | 5 | 9 | 0 | I get the following error when trying to import gtk in Python 2.7 :
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gtk/__init__.py", line 40, in <module>
from gtk import _gtk
File "/usr/lib/python2.7/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python2.7/site-packages/cairo/_cairo.so: undefined symbol: cairo_tee_surface_index
And I get the following error when trying to import cairo from Python 3.6:
>>> import cairo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/cairo/__init__.py", line 1, in <module>
from ._cairo import * # noqa: F401,F403
ImportError: /usr/lib/python3.6/site-packages/cairo/_cairo.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index
I compiled and built the modules in the order as given in the BLFS book.
I also installed cairo as given in the book with tee enabled.
My system is an LFS, with 4.14.4 Kernel Version, with Python 2.7.14 and Python 3.6.4.
EDIT: Downloaded the source and did 'make uninstall' and then reinstalled it. Now I can import cairo without any errors. | Python : Import cairo error (2.7 & 3.6) undefined symbol: cairo_tee_surface_index | 0.049958 | 0 | 0 | 7,143 |
48,054,962 | 2018-01-02T02:33:00.000 | 1 | 0 | 1 | 0 | python,main,pep8 | 48,055,377 | 2 | true | 0 | 0 | There are no official guidelines referring to this. | 2 | 1 | 0 | What are the official guidelines for the location of the main method in a Python program?
Should it be written first (at the top of the program), last (at the bottom), or is it just preferential? I read through PEP8 and couldn't find anything. | Python Main Method Location Convention | 1.2 | 0 | 0 | 176 |
48,054,962 | 2018-01-02T02:33:00.000 | 1 | 0 | 1 | 0 | python,main,pep8 | 48,055,466 | 2 | false | 0 | 0 | Whatever is most useful for someone to read first should go first.
Often this is going to be some kind of "main" method: it describes the flow of the program and gives developers a quick way to figure out what is happening and where to look for the things they want.
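For illustration only, one common (but unofficial) layout keeps the helpers at the top and the entry point at the bottom:
def do_work():
    return "result"

def main():
    # reads like a table of contents for the program
    print(do_work())

if __name__ == "__main__":
    main()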
But I can envisage cases where it's not explicitly the main() method. Maybe the main() method just spins off some threads and it's what's inside it that counts. Maybe main() just does a bunch of argument parsing, and then calls another function. Maybe main() is just boilerplate. | 2 | 1 | 0 | What are the official guidelines for the location of the main method in a Python program?
Should it be written first (at the top of the program), last (at the bottom), or is it just preferential? I read through PEP8 and couldn't find anything. | Python Main Method Location Convention | 0.099668 | 0 | 0 | 176 |
48,056,052 | 2018-01-02T05:39:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,selenium | 69,779,987 | 3 | false | 0 | 0 | Use "http://" in the url.
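For example (the URL here is just a placeholder):
import webbrowser

webbrowser.open("http://www.example.com")  # with the scheme included, it is treated as a URL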
That works for me. | 1 | 11 | 0 | I am trying to access the internet with Google Chrome, but every time I use webbrowser.open(url) it opens IE.
So I checked to make sure I have Chrome as my default, which I do, and I tried using the get() function to link the actual Chrome application but it gives me this error instead:
File "C:\Users\xxx\AppData\Local\Programs\Python\Python36\lib\webbrowser.py", line 51, in get raise Error("could not locate runnable browser") webbrowser.Error: could not locate runnable browser
I also tried to open other browsers but it gives the same error. It also reads IE as my default and only runnable browser.
What could be happening? Is there an alternative?
Using Python 3.6. | webbrowser.get — could not locate runnable browser | 0 | 0 | 1 | 24,477 |
48,060,354 | 2018-01-02T11:43:00.000 | 1 | 0 | 0 | 0 | python,mongodb,flask | 54,701,902 | 5 | false | 1 | 0 | This works for me:
sudo pip3 uninstall pymongo
sudo apt-get install python3-pymongo
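Afterwards, a quick sanity check that Python now picks up the distro's driver build (pymongo.version is a real attribute):
import pymongo

print(pymongo.version)  # should no longer report the pip-installed version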
I hope that works for someone else. Regards. | 1 | 10 | 0 | I am trying to build an application with MongoDB and Python Flask. While running the application, I am getting the error below:
ConfigurationError: Server at 127.0.0.1:27017 reports wire version 0,
but this version of PyMongo requires at least 2 (MongoDB 2.6).
Can anyone help me with this?
Thanks,
Balwinder | ConfigurationError: Server at 127.0.0.1:27017 reports wire version 0, but this version of PyMongo requires at least 2 (MongoDB 2.6) | 0.039979 | 1 | 0 | 25,842 |
48,062,744 | 2018-01-02T14:28:00.000 | 1 | 0 | 0 | 1 | python,websocket,tornado | 48,070,341 | 1 | true | 1 | 0 | Tornado's websocket implementation handles pings automatically (and so do most other implementations). You shouldn't have to do anything.
Tornado's ping timeout defaults to 3 times the ping interval, so if you're getting cut off after 60 seconds instead of 180 seconds, something else is doing it. Some proxies have a 60-second timeout for idle connections, so if you're going through one of those you may need a shorter ping interval.
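A minimal sketch of the server side (the handler and the 20-second interval are just illustrative):
import tornado.web
import tornado.websocket

class EchoHandler(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        self.write_message(message)  # pings/pongs are handled by Tornado itself

app = tornado.web.Application(
    [(r"/ws", EchoHandler)],
    websocket_ping_interval=20,  # short enough to survive a 60-second proxy timeout
    websocket_ping_timeout=60,   # defaults to 3x the interval when left unset
)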
If that's not it, you'll need to provide more details, ideally a reproducible test setup with client and server code. | 1 | 1 | 0 | I have written a websocket server in Tornado and used websocket_ping_interval=60 to detect which connections are really closed after 60 seconds. But after 60 seconds the server disconnects the link (even if it hasn't actually been disconnected). I think this happens because the server sends a ping packet every 60 seconds and the client doesn't respond to it. I want the client side (which is written with the websocket Python module) to respond to the server whenever it sends a ping request.
I have the same problem with the websocket client in browsers. Any idea how to solve it? | How to respond to the server ping in a Tornado websocket | 1.2 | 0 | 1 | 720
48,062,763 | 2018-01-02T14:30:00.000 | 1 | 0 | 0 | 0 | python,mysql,django,database | 48,063,653 | 1 | false | 1 | 0 | If you want to learn and use MySQL, then start without anything above it - no Django, no ORM, not even a Python script. Learn to configure your mysql server (the server process, I mean - it doesn't have to be on a distinct computer) and to work with the command-line mysql client (database creation, table creation/modification, adding/updating/deleting rows, and, most importantly, writing simple and complex queries).
While you're at it, learn about proper relational data modeling (normalisations etc) so you fully understand how to design your schemas.
Once you're comfortable with this, spend some time (it should go quite fast at this point) learning to do the same things from Python scripts with a MySQL connector for your Python version.
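As a sketch of that stage with the mysql-connector-python package (the connection details and table name here are made up; use your own):
import mysql.connector  # from the mysql-connector-python package

conn = mysql.connector.connect(user="myuser", password="mypassword",
                               host="127.0.0.1", database="testdb")
cur = conn.cursor()
cur.execute("SELECT name, sales FROM customers WHERE sales > %s", (10,))
for row in cur.fetchall():
    print(row)
conn.close()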
Then if you want to learn web development and Django, well, go for it. The ORM is quite easy to use when you already have a good understanding of what happens underneath, so by that time you shouldn't have many problems with this part. What you'll still have to learn is the HTTP protocol (trying to do web programming without understanding the HTTP protocol is like trying to win a car race without knowing how to drive a car - it's not technically impossible, but it might end up being a very painful experience), then front-end stuff (html/css/javascript), and finally the views / templates parts of Django (which should be easy once you know HTTP and html).
You can of course jump right into Django, but you will probably fight with way too many concepts at once and end up spending twice as much time figuring out how everything works and why it works that way. | 1 | 0 | 0 | I am learning the Python programming language. I have no problems with Python: I have read the official Python docs, and I can write small programs in Python. I want to familiarize myself with the MySQL database because it is useful for learning software development concepts. I've installed MySQL and Django on my computer. I have Ubuntu 14.04 and Python 3.4 installed. I've configured the Django settings to use MySQL, tested the Django connection to the MySQL db, and everything works properly.
I am a complete newbie with web development. I haven't created my own website or started developing any web application.
My current goal is to master creating MySQL databases and tables, and making changes/migrations/queries using Django models and Python.
Is it reasonable/possible to use the Django ORM to work with a MySQL database without simultaneously developing a web or local application? As I've said, I don't have my own website. I just want to try using MySQL and Django together on my computer in order to get deeper knowledge of both.
My final purpose is development in Python, including work with mysql database.
Mysql without Python and Django is of no use for me. | Django to create and change mysql database for learning purposes | 0.197375 | 1 | 0 | 67 |
48,064,686 | 2018-01-02T16:41:00.000 | 0 | 1 | 0 | 0 | python,c,swig,cudd | 48,671,922 | 2 | false | 0 | 1 | I still don't remember what my original problem was. However, I found that what was causing my problem here was a linker issue - the linker was looking at an old version of the library, which did not have my changes. | 1 | 0 | 0 | I am currently using PyCUDD, which is a SWIG-generated Python wrapper for the C package CUDD. I'm trying to get CUDD to print some debug information from inside the C code, but any printfs inside the C code don't seem to produce any output - putting them in the .i files for SWIG produces output; putting them in the C code does not. I'm not sure if this is some property of SWIG specifically or of compiling the C code to a shared object library.
(Particularly frustrating is that I know that I have had this problem and gotten it working before, but I can't seem to find anything searching for the problem now, and I apparently forgot to leave notes on that one thing.) | Make printf appear in stdout from shared object library | 0 | 0 | 0 | 861 |
48,064,949 | 2018-01-02T17:00:00.000 | 0 | 0 | 0 | 0 | python,pygame | 48,065,090 | 1 | false | 0 | 1 | Are you on Windows?
The problem is not in Python or Pygame itself, but rather in Windows. It seems sound enhancements are somehow fiddling with the sound my script (or any other Pygame script, for that matter) is playing.
I'm on Windows 10 and this is how I did it:
1. Right-click on the speaker icon in the taskbar
2. Select Playback Devices
3. Select Speakers and Properties
4. Go to the Enhancements tab and uncheck Equalizer and Loudness Equalization
That's it. | 1 | 0 | 0 | I can't seem to set the volume of a channel to 0 in one ear (I initialized the mixer with stereo). I did pygame.mixer.Channel(channel).set_volume(0,volume) and it seems to play at roughly half volume on the left and full volume on the right. Any clue what I'm doing wrong? | Pygame Can't Set Volume to 0 | 0 | 0 | 0 | 158
48,065,753 | 2018-01-02T18:09:00.000 | 0 | 0 | 0 | 0 | python,sql,pandas,sorting,group-by | 48,071,714 | 1 | false | 0 | 0 | Here is the answer:
Grouping is done when you want to draw a conclusion from an entire group, like the total sales for each group (in this case John and Amy). It is used mostly with an aggregate function, or sometimes to select distinct records only. What you wrote above sorts the data in the order of name and sales; there is no grouping involved at all. Since the operation is sorting, it's natural that the command written for it is a sort. | 1 | 0 | 1 | I have two columns: one is a string field customer containing customer names, and the other is a numeric field sales representing sales.
What I want to do is group the data by customer and then sort sales within each group.
In SQL or Pandas, this is normally achieved by something like order by customer, sales on the table. But I am just curious about this implementation. Instead of first sorting on customer and then sorting on sales, why not first group by customer and then sort sales? I don't really care about the order of the different customers; I only care that records for the same customer end up grouped together.
Grouping is essentially mapping and should run faster than sorting.
Why isn't there such an implementation in SQL? Am I missing something?
Example data
name,sales
john,1
Amy,1
john,2
Amy,3
Amy,4
and I want it to group by name and then sort by sales:
name,sales
john,1
john,2
Amy,1
Amy,3
Amy,4
In SQL you probably would do select * from table order by name,sales
This would definitely do the job. But my confusion is this: since I don't care about the order of name, I should be able to do some kind of grouping first (which should be cheaper than sorting) and then sort only on the numeric field. Am I able to do this? Why do a lot of examples from Google simply use sorting on the two fields? Thanks! | Sort by two columns, why not do grouping first? | 0 | 1 | 0 | 119
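To make the grouping-versus-sorting distinction concrete with the example data above (pandas shown; the SQL equivalents are GROUP BY versus ORDER BY):
import pandas as pd

df = pd.DataFrame({"name": ["john", "Amy", "john", "Amy", "Amy"],
                   "sales": [1, 1, 2, 3, 4]})

# Grouping with an aggregate: collapses to one row per customer
print(df.groupby("name")["sales"].sum())

# Sorting: keeps every row, ordered by name and then sales
print(df.sort_values(["name", "sales"]))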
48,068,694 | 2018-01-02T22:26:00.000 | 3 | 0 | 1 | 0 | python,python-multiprocessing | 48,068,944 | 1 | false | 0 | 0 | Instances are not passed between processes, because instances are per Python VM process.
Values passed between processes are pickled. Unpickling normally restores instances by measures other than calling __init__; as far as I can tell, it directly sets attribute values to resurrect an instance and resolves references to/from other instances.
Provided that you run identical code in either process (and with multiprocessing, you do), it restores a reference to a correct instance method inside a restored chain of instances.
This means that if an __init__ in the chain of objects does something side-effectful, it will not have been done on the receiving side, that is, in a subprocess. Do such initialization explicitly.
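A small sketch of the effect (this assumes the 'spawn' start method, e.g. on Windows; under 'fork' the child inherits a copy of memory instead of unpickling):
import multiprocessing

class Worker:
    def __init__(self):
        print("__init__ runs")  # under spawn, this is not printed again in the child
        self.value = 42

    def run(self):
        print("child sees:", self.value)  # attributes survive the pickle round-trip

if __name__ == "__main__":
    w = Worker()
    p = multiprocessing.Process(target=w.run)  # the bound method carries w along, pickled
    p.start()
    p.join()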
All in all, it's easiest to share (effectively) immutable objects between parallel processes and reconcile results after join()ing all of them. | 1 | 3 | 0 | If I pass a reference to an instance method, instead of a module-level method, into multiprocessing.Process, then when I call its start method, is an exact copy of the parent process's instance passed in, or is the constructor called again? What happens with 'deep' instance member objects? Exact copy or default values? | passing in an instance method into a python multiprocessing Process | 0.53705 | 0 | 0 | 463
48,072,932 | 2018-01-03T07:24:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,atom-editor | 48,120,307 | 1 | false | 0 | 0 | To enable Atom to run Python 3 using the Script plugin, make the first line of your script the path to your python3 interpreter. I can use Script to run either Python 2.7.13 or Python 3.6 by changing the first line of my script from #!path-to-python2 to #!path-to-python3. I just checked this and it worked as described. | 1 | 0 | 0 | When I use the Script package for Atom to run code in Python 3 (or any package to run code, for that matter), I get an error message that says:
'python3' is not recognized as an internal or external command,
operable program or batch file.
I've changed the python.coffee file from python to python3, and neither has worked. | Difficulties with python3 and Atom | 0 | 0 | 0 | 511
48,076,009 | 2018-01-03T10:58:00.000 | 1 | 0 | 1 | 0 | python,macos,opencv | 48,076,093 | 2 | false | 0 | 0 | The package name is wrong; pip install opencv-python should work. | 2 | 0 | 1 | I am having trouble running a Python script file that contains an OpenCV import. Upon running the file, it gives the following error:
ImportError: No module named cv2
I then ran pip install python-opencv but it gave Could not find a version that satisfies the requirement python-opencv (from versions: )
No matching distribution found for python-opencv error.
Does anyone know what the issue might be? I was able to run this Python file at first, but I haven't been able to run it after that. | ImportError: No module named cv2 error upon running .py file in terminal | 0.099668 | 0 | 0 | 542
48,076,009 | 2018-01-03T10:58:00.000 | 0 | 0 | 1 | 0 | python,macos,opencv | 48,076,121 | 2 | false | 0 | 0 | I also had this issue. Tried different things. But finally
conda install opencv
solved the issue for me. | 2 | 0 | 1 | I am having trouble running a Python script file that contains an OpenCV import. Upon running the file, it gives the following error:
ImportError: No module named cv2
I then ran pip install python-opencv but it gave Could not find a version that satisfies the requirement python-opencv (from versions: )
No matching distribution found for python-opencv error.
Does anyone know what the issue might be? I was able to run this Python file at first, but I haven't been able to run it after that. | ImportError: No module named cv2 error upon running .py file in terminal | 0 | 0 | 0 | 542
48,077,808 | 2018-01-03T12:51:00.000 | 1 | 0 | 1 | 0 | python,conda | 48,077,850 | 1 | true | 0 | 0 | I would definitely suggest going for miniconda or anaconda, as you already said yourself, since it allows you to keep different Python versions separated in different environments.
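For instance (the environment names are arbitrary):
conda create -n py27 python=2.7
conda create -n py36 python=3.6
source activate py36   # "conda activate py36" on newer conda releases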
I cannot really give you advice on the editor to use, since I always use Spyder. It takes some time to get used to, but it is very versatile and extremely useful when dealing with many large Python scripts. | 1 | 0 | 0 | I recently switched jobs and have the opportunity to create a clean programming working environment, cleaner and better than I used to have before. In my previous work I had some problems with running different versions of Python next to each other (or different versions of a package), so I thought it would be a good idea to use Conda as a Python install/package manager.
As an IDE I used to use idle because I find spyder a little cluttered, but I do however miss some functionality of a proper IDE and was thinking about switching to PyCharm for personal use and iPython (that is the same as python notebook isn't it?) for courses on python I will be giving.
What is the best way to do a very clean install? Do I install miniconda first and then python3.6 (and/or python2.7), PyCharm, and iPython? Or can I do this in a better way without too much clutter? | Clean install of python working environment | 1.2 | 0 | 0 | 376
48,080,579 | 2018-01-03T15:42:00.000 | 3 | 0 | 0 | 0 | python,django,database | 48,080,725 | 1 | true | 1 | 0 | As with anything performance-related, do some testing and profile your application. Don't get lured into the premature optimization trap.
That said, given the fact that these statistics don't change, you could perform them asynchronously each time you do a scrape. Like the scrape process itself, this calculation process should be done asynchronously, completely separate from your Django application. When the scrape happens it would write to the database directly and set some kind of status field to processing. Then kick off the calculation process which, when completed, will fill in the stats fields and set the status to complete. This way you can show your users how far along the processing chain they are.
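A rough sketch of that status-field idea (the model, its field names, and compute_stats are all hypothetical):
from django.db import models

class ScrapeRun(models.Model):
    STATUS_CHOICES = (("processing", "processing"), ("complete", "complete"))
    status = models.CharField(max_length=16, choices=STATUS_CHOICES, default="processing")
    item_count = models.IntegerField(null=True)    # filled in by the background job
    average_price = models.FloatField(null=True)

# in the background worker, once the numbers are crunched:
# run.item_count, run.average_price = compute_stats(run)
# run.status = "complete"
# run.save()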
People love feedback over immediate results, and they'll tolerate considerable delays if they know they'll eventually get a result. Strand a user and they'll get frustrated more quickly than any computer can finish processing; lead them on a journey and they'll wait for ages to hear how the story ends. | 1 | 2 | 0 | I'm working on a Django application that consists of a scraper that scrapes thousands of store items (price, description, seller info) per day and a django-template frontend that allows the user to access the data and view various statistics.
For example: the user is able to click on 'Item A', and gets a detail view that lists various statistics about 'Item A' (Like linegraphs about price over time, a price distribution, etc)
The user is also able to click on reports of the individual 'scrapes' and get details about the number of items scraped, average price. Etc.
All of these statistics are currently calculated in the view itself.
This all works well when working locally on a small development database with around 100 items. However, in production this database will eventually consist of 1,000,000+ rows, which leads me to wonder whether calculating the statistics in the view won't lead to massive lag in the future (especially as I plan to extend the statistics with more complicated regression analysis, and perhaps some nearest-neighbour ML classification).
The advantage of the view-based approach is that the graphs are always up to date. I could of course also schedule a cron job to do the calculations every few hours (perhaps even on a different server). This would make accessing the information very fast, but would also mean that the information could be a few hours old.
I've never really worked with data of this scale before, and was wondering what the best practices are. | Django - when best to calculate statistics on large amounts of data | 1.2 | 0 | 0 | 562
48,081,209 | 2018-01-03T16:20:00.000 | 1 | 0 | 0 | 1 | python,linux,openpyxl | 48,085,163 | 2 | false | 0 | 0 | If you read the help for pip, you'll see that there is an option for installing pre-release versions: pip install --pre openpyxl | 1 | 2 | 0 | I need to install openpyxl 2.5 on a new server. When I run "pip3 install openpyxl" it installs 2.4.9. There are features in 2.5 that are required for what I'm working on.
I have been able to get 2.5 on my Mac because I can download the source and simply copy/paste it where it needs to go. However, I'm just not experienced enough in python or linux command line to use an alternate route of my own making to get 2.5 installed on a server.
Can someone give me a "recipe" or an alternative to pip3 that will install version 2.5? | Installing openpyxl 2.5 | 0.099668 | 0 | 0 | 1,554 |
48,081,688 | 2018-01-03T16:50:00.000 | 5 | 1 | 1 | 0 | python,hash,sha | 48,081,802 | 2 | false | 0 | 0 | Calling .digest() on a hashlib.sha256 object will return a 32-byte string -- the shortest possible way (with 8-bit bytes as the relevant unit) to store 256 bits of data which is effectively random for compressibility purposes.
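A quick check:
import hashlib

h = hashlib.sha256(b"hello world")
print(len(h.digest()))     # 32 bytes
print(len(h.hexdigest()))  # 64 characters -- hex doubles the size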
Since 8 * 32 == 256, this provably has no wastage -- every bit is used. | 1 | 2 | 0 | What is the most dense way (fewest characters) that I can store a complete SHA-256 hash? | In Python, represent a SHA-256 hash using the fewest characters possible | 0.462117 | 0 | 0 | 1,624 |