Q_Id (int64, 2.93k to 49.7M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -10 to 437) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | DISCREPANCY (int64, 0 to 1) | Tags (stringlengths 6 to 90) | ERRORS (int64, 0 to 1) | A_Id (int64, 2.98k to 72.5M) | API_CHANGE (int64, 0 to 1) | AnswerCount (int64, 1 to 42) | REVIEW (int64, 0 to 1) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 15 to 5.1k) | Available Count (int64, 1 to 17) | Q_Score (int64, 0 to 3.67k) | Data Science and Machine Learning (int64, 0 to 1) | DOCUMENTATION (int64, 0 to 1) | Question (stringlengths 25 to 6.53k) | Title (stringlengths 11 to 148) | CONCEPTUAL (int64, 0 to 1) | Score (float64, -1 to 1.2) | API_USAGE (int64, 1 to 1) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 15 to 3.72M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
40,492,606 | 2016-11-08T17:05:00.000 | 2 | 1 | 0 | 1 | 0 | python,linux,copy-paste | 0 | 40,493,312 | 0 | 1 | 0 | false | 0 | 0 | Short answer: no, you can't.
Long answer: the component that does "copy & paste" is not defined by the distribution alone. It is a function of the desktop environment / window manager. In other words: there is no such thing as the "default system file copier" for "Linux".
There are file managers like Dolphin on KDE, or Nautilus on GNOME, that all come with their own implementation of file copying. Some good, some not so much (try copying a whole directory with thousands of files with Nautilus).
But the real question here: why do you want to do that? What makes you think that your file-copy implementation, which requires an interpreter to run, is suited to replace the defaults that come with Linux? Why do you think that your re-invention of an existing wheel will be better at doing anything?
Edit: if your reason to "manage" system copy is some attempt to prevent the user from doing certain things, you should look into file permissions and similar ideas. Within a Linux environment, you simply can't control what the user does by manipulating some tools. Instead: understand the management capabilities that the OS offers you, and use those! | 1 | 0 | 0 | 0 | I have coded a Python app to manage file copies on Linux. I want to know how I can get it to process copy/paste actions, like those launched by pressing Ctrl+C / Ctrl+V, right click / Copy..., or drag and drop, instead of using the system copier.
Can I do this for all Debian-based Linux distributions, or does it work differently for Ubuntu, Mint, Debian, and so on?
Forgive my English and thanks in advance! | Change default system file copier in Linux | 0 | 0.379949 | 1 | 0 | 0 | 205 |
40,522,177 | 2016-11-10T07:33:00.000 | 1 | 0 | 0 | 0 | 0 | django,python-2.7,django-rest-framework,jwt | 0 | 70,026,089 | 0 | 4 | 0 | false | 1 | 0 | Do this: jwt.decode(token, settings.SECRET_KEY, algorithms=['HS256']) | 1 | 7 | 0 | 0 | I have started using the djangorestframework-jwt package instead of PyJWT, but I cannot figure out how to decode the incoming token (I know there is a verify-token method). All I need to know is how to decode the token and get back the encoded information. | How to decode token and get back information for the djangorestframework-jwt package for Django | 0 | 0.049958 | 1 | 0 | 0 | 11,026 |
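Since djangorestframework-jwt builds on PyJWT, the jwt.decode(token, settings.SECRET_KEY, algorithms=['HS256']) call from the answer above is the verified route. As an illustration of what such a call returns, here is a dependency-free sketch that only base64url-decodes the payload segment; decode_jwt_payload and the sample token are hypothetical, and this skips signature verification entirely:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT.

    A JWT is three base64url segments joined by dots; this only inspects
    the middle (payload) segment and does NOT verify the signature --
    use jwt.decode() with your secret key for that.
    """
    payload_b64 = token.split('.')[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += '=' * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A hypothetical token whose payload is {"user_id": 42}.
header = base64.urlsafe_b64encode(b'{"alg":"HS256","typ":"JWT"}').rstrip(b'=').decode()
payload = base64.urlsafe_b64encode(b'{"user_id": 42}').rstrip(b'=').decode()
token = f"{header}.{payload}.fakesignature"
print(decode_jwt_payload(token))  # {'user_id': 42}
```

In production code, prefer the verifying jwt.decode so a tampered token is rejected rather than silently read.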
40,548,608 | 2016-11-11T13:01:00.000 | 1 | 0 | 1 | 0 | 0 | python,database,dataset,zodb,object-oriented-database | 0 | 40,549,472 | 0 | 2 | 0 | false | 0 | 0 | You should store the object on the filesystem and add a reference to it in ZODB, as you would with a regular database. | 1 | 2 | 0 | 0 | I was using ZODB for large data storage, which was in the typical dictionary format (key, value).
But while storing it in ZODB I got the following warning message:
C:\python-3.5.2.amd64\lib\site-packages\ZODB\Connection. py:550:
UserWarning: The object
you're saving is large. (510241658 bytes.)
Perhaps you're storing media which should be stored in blobs.
Perhaps you're using a non-scalable data structure, such as a
PersistentMapping or PersistentList.
Perhaps you're storing data in objects that aren't persistent at all.
In cases like that, the data is stored in the record of the containing
persistent object.
In any case, storing records this big is probably a bad idea.
If you insist and want to get rid of this warning, use the
large_record_size option of the ZODB.DB constructor (or the
large-record-size option in a configuration file) to specify a larger
size.
warnings.warn(large_object_message % (obj.class, len(p)))
Please suggest how I can store large data in ZODB, or suggest any other library for this purpose. | ZODB or other database for large data storage in python | 0 | 0.099668 | 1 | 1 | 0 | 901 |
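The answer above suggests keeping large payloads out of the persistent records and storing only a reference. A minimal stdlib sketch of that pattern follows; FileReference is a hypothetical helper, and note that ZODB itself also ships blob support (ZODB.blob.Blob) for exactly this use case, as the warning hints:

```python
import os
import tempfile

class FileReference:
    """Persist large payloads on disk; keep only the small path in the DB."""

    def __init__(self, directory, key, payload: bytes):
        self.path = os.path.join(directory, f"{key}.bin")
        with open(self.path, "wb") as f:
            f.write(payload)

    def load(self) -> bytes:
        with open(self.path, "rb") as f:
            return f.read()

storage_dir = tempfile.mkdtemp()
ref = FileReference(storage_dir, "big_record", b"x" * 1_000_000)
# Only the small `ref` object (essentially a path string) would go into
# the ZODB root mapping, not the megabyte of payload itself.
print(len(ref.load()))  # 1000000
```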
40,550,207 | 2016-11-11T14:34:00.000 | 0 | 0 | 1 | 0 | 0 | python,32bit-64bit,conda | 0 | 40,661,895 | 0 | 1 | 0 | true | 0 | 0 | I found that I have to delete the package caches first. Or force conda to install Python with the -f option. | 1 | 0 | 0 | 0 | I installed both 32-bit conda and 64-bit conda for different projects. I created a new environment and specified Python 3 with
conda create --name ..name.. python=3
The command picked up Python 3.5.2, but the 64-bit build rather than the 32-bit one. But when I changed the command to
conda create --name ..name.. python=3.4
it picked up the 32-bit Python correctly. My question is: how do I force conda to pick up 32-bit Python 3.5.2, so I can use some of the packages that only support Python 3.5?
Here's what I did, and none of it worked:
installed both 32-bit and 64-bit pythons
installed both 32-bit and 64-bit condas
set 32-bit Miniconda to come before 64-bit Miniconda in PATH
launched 32-bit conda prompt
set CONDA_FORCE_32BIT=1
Thanks! | conda 32-bit keep installing 64-bit python 3.5.2 | 0 | 1.2 | 1 | 0 | 0 | 1,344 |
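Before fighting conda's resolution any further, it can help to confirm which bitness a given interpreter actually is; the pointer size of the running Python is a reliable check regardless of what the installer claimed:

```python
import struct
import sys

# struct.calcsize("P") is the size of a C pointer in bytes:
# 4 on a 32-bit interpreter, 8 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor} is {bits}-bit")
```

Running this inside each conda environment tells you immediately whether the 32-bit build was actually selected.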
40,555,625 | 2016-11-11T20:22:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x,centos,anaconda | 0 | 48,596,296 | 1 | 2 | 0 | false | 0 | 0 | If you are looking to change the Python interpreter in Anaconda from 3.5 to 2.7 for the user, try the command conda install python=2.7 | 1 | 3 | 0 | 0 | Without root access, how do I change the default Python from 3.5 to 2.7 for my specific user? I would also like to know how to run Python scripts with Python 2.
If I start up Python by simply running python, then it runs 3.5.2. I have to specifically run python2 at the terminal prompt to get a Python 2 interpreter.
If I run which python, then /data/apps/anaconda3/bin/python gets returned and I believe Python 2.7 is under /usr/bin/python.
This is on CentOS if that helps clarify anything | How do I change default Python version from 3.5 to 2.7 in Anaconda | 0 | 0 | 1 | 0 | 0 | 6,310 |
40,560,439 | 2016-11-12T07:04:00.000 | 3 | 0 | 1 | 0 | 0 | python,django,pythonanywhere | 0 | 40,560,572 | 0 | 2 | 0 | true | 1 | 0 | On a production server, your print statements will be written to your web server's log files.
In the case of PythonAnywhere there are three log files:
Access log:yourusername.pythonanywhere.com.access.log
Error log:yourusername.pythonanywhere.com.error.log
Server log:yourusername.pythonanywhere.com.server.log
Those logs are accessible from your Web tab page.
The logs you are looking for will be in server.log. | 1 | 3 | 0 | 0 | I've gotten used to using print in my Python code to show the contents of variables and checking the shell output.
But I have now migrated all my work onto an online server, PythonAnywhere.
I don't have the foggiest idea how to do the same now.
Can someone point me in the right direction?
Print to the web console? To a file? Or even to the shell session?
Thanks | Django print in prod' server | 0 | 1.2 | 1 | 0 | 0 | 1,138 |
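On a hosted server, a step up from bare print is the stdlib logging module, whose output lands in the same server log but carries timestamps and levels. A minimal sketch, assuming a hypothetical view module named myapp.views:

```python
import logging

# Configure once, e.g. in settings.py or at module import time.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("myapp.views")

def my_view_logic(value):
    # This line ends up in the server log instead of vanishing silently.
    logger.debug("value received: %r", value)
    return value * 2

my_view_logic(21)
```

The same calls work unchanged locally, where the output goes to the console.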
40,562,116 | 2016-11-12T10:57:00.000 | 0 | 0 | 1 | 0 | 0 | python,macos,python-3.x,path,global-variables | 0 | 40,562,461 | 0 | 3 | 0 | false | 0 | 0 | This is not Python specific, but if you want to share a config globally among your programs, you could set up an environment variable like MYPROJECT_DATA_PATH and have all your scripts check this variable before loading the data. Or you could write a config file whose location all your programs know. Or both: an environment variable with the path of the config file, where you can fine-tune it for your needs. | 1 | 1 | 0 | 0 | In a Python project, how do I set up a project-wide "data" folder, accessible from every module? I don't have a single entry point in my program, so I cannot do something like global (dataFolderPath). I would like every module to know where the data folder is (without hardcoding the path in every module!), so it can load and write the data it needs. I'm using Python 3.5 on a Mac.
Thanks! | How to set up data folder to be accessible from everywhere in a python project | 0 | 0 | 1 | 0 | 0 | 710 |
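One way to follow the answer's advice is a small config module at the project root that every other module imports; a sketch, where the MYPROJECT_DATA_PATH variable and the config.py layout are assumptions echoing the env-var idea above:

```python
# config.py -- placed at the project root (hypothetical layout)
import os

# Resolve relative to this file so every module that does
# `from config import DATA_DIR` agrees on one absolute path,
# no matter which directory the program was launched from.
try:
    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
except NameError:  # e.g. running interactively, where __file__ is unset
    PROJECT_ROOT = os.getcwd()

# An environment variable, if set, overrides the default location.
DATA_DIR = os.environ.get(
    "MYPROJECT_DATA_PATH",
    os.path.join(PROJECT_ROOT, "data"),
)
```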
40,574,548 | 2016-11-13T13:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,svg | 0 | 41,472,508 | 0 | 1 | 1 | false | 0 | 0 | The svgwrite package only creates SVG; it does not read an SVG file. I have not tried any packages to read and process SVG files. | 1 | 1 | 0 | 0 | I want to read an existing SVG file, traverse all its elements and remove them if they match certain conditions (e.g. remove all objects with a red border).
There is the svgwrite library for Python 2/3, but the tutorials/documentation I found only show how to add some lines and save the file.
Can I also manipulate/remove existing elements inside an SVG document with svgwrite? If not - is there an alternative for Python? | manipulating SVGs with python | 1 | 0 | 1 | 0 | 1 | 414 |
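Although svgwrite only writes SVG, the stdlib xml.etree.ElementTree can read and edit one, since SVG is plain XML. A sketch of removing elements that match a condition — here, a red stroke on the root's direct children (the inline sample document is hypothetical):

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
# Keep the default namespace un-prefixed when serializing back out.
ET.register_namespace("", SVG_NS)

svg = f"""<svg xmlns="{SVG_NS}">
  <rect stroke="red" width="10" height="10"/>
  <rect stroke="blue" width="10" height="10"/>
</svg>"""

root = ET.fromstring(svg)
# Iterate over a copy of the children, since we mutate while looping.
for child in list(root):
    if child.get("stroke") == "red":
        root.remove(child)

print(ET.tostring(root, encoding="unicode"))
```

For nested elements you would walk the tree recursively, removing from each element's own parent; ElementTree has no parent pointers, so keep track as you descend.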
40,595,961 | 2016-11-14T19:03:00.000 | 209 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 40,684,400 | 1 | 18 | 0 | true | 0 | 0 | If you're using Spyder 3, please go to
Tools > Preferences > Syntax Coloring
and select there the dark theme you want to use.
In Spyder 4, a dark theme is used by default. But if you want to select a different theme you can go to
Tools > Preferences > Appearance > Syntax highlighting theme | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 1.2 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 56,964,742 | 1 | 18 | 0 | false | 0 | 0 | 1. Click Tools
2. Click Preferences
3. Select Syntax Coloring
40,595,961 | 2016-11-14T19:03:00.000 | -5 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 41,008,022 | 1 | 18 | 0 | false | 0 | 0 | Yes, that's the intuitive answer. Nothing in Spyder is intuitive. Go to Preferences/Editor and select the scheme you want. Then go to Preferences/Syntax Coloring and adjust the colors if you want to.
tcebob | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | -1 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 2 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 46,890,965 | 1 | 18 | 0 | false | 0 | 0 | I tried the option Tools > Preferences > Syntax coloring > dark spyder,
but it is not working.
You should instead use the path
Tools > Preferences > Syntax coloring > spyder
and then make the modifications you want until your editor appears the way you like.
40,595,961 | 2016-11-14T19:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 50,761,910 | 1 | 18 | 0 | false | 0 | 0 | On mine it's Tools --> Preferences --> Editor and "Syntax Color Scheme" dropdown is at the very bottom of the list. | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0.011111 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 2 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 52,020,233 | 1 | 18 | 0 | false | 0 | 0 | I think some of the people answering this question don’t actually try to do what they recommend, because there is something wrong with way the Mac OS version handles the windows.
When you choose the new color scheme and click OK, the preferences window looks like it closed, but it is still there behind the main spyder window. You need to switch windows with command ~ or move the main spyder window to expose the preferences window. Then you need to click Apply to get the new color scheme. | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0.022219 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 56,276,123 | 1 | 18 | 0 | false | 0 | 0 | First, click on Preferences (Ctrl+Shift+Alt+P), then click the Syntax Coloring option and change the scheme to "Monokai". Now apply it and you will get the dark scheme. | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 58,119,453 | 1 | 18 | 0 | false | 0 | 0 | I've seen some people recommending installing additional software, but in my opinion the best way is to use the built-in skins; you can find them at:
Tools > Preferences > Syntax Coloring | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0.011111 | 1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | 0 | python,themes,spyder | 0 | 60,023,463 | 1 | 18 | 0 | false | 0 | 0 | In Spyder 4.1, you can change background color from:
Tools > Preferences > Appearance > Syntax highlighting scheme | 9 | 106 | 0 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0 | 1 | 0 | 0 | 309,596 |
40,626,429 | 2016-11-16T07:37:00.000 | 31 | 0 | 1 | 0 | 0 | python,visual-studio-code,vscode-settings,pylint | 0 | 56,183,059 | 0 | 5 | 0 | false | 0 | 0 | If you just want to disable pylint, then the updated VSCode makes it much easier.
Just hit CTRL + SHIFT + P > Select linter > Disabled Linter.
Hope this helps future readers. | 2 | 32 | 0 | 0 | Simple question, but what are the steps to remove pylint from a Windows 10 machine with Python 3.5.2 installed?
I have an old version of pylint installed that's checking against old Python 2 semantics, and it's bugging the heck out of me when the squigglies show up in Visual Studio Code.
40,626,429 | 2016-11-16T07:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,visual-studio-code,vscode-settings,pylint | 0 | 72,494,136 | 0 | 5 | 0 | false | 0 | 0 | I had this problem, but it was fixed with this solution: CTRL + SHIFT + P > Select linter > Disabled linter. | 2 | 32 | 0 | 0 | Simple question, but what are the steps to remove pylint from a Windows 10 machine with Python 3.5.2 installed?
I have an old version of pylint installed that's checking against old Python 2 semantics, and it's bugging the heck out of me when the squigglies show up in Visual Studio Code.
40,627,395 | 2016-11-16T08:38:00.000 | 0 | 1 | 0 | 0 | 0 | java,python,amazon-web-services,amazon-s3,aws-lambda | 0 | 40,632,303 | 0 | 3 | 0 | false | 1 | 0 | Lambda would not be a good fit for the actual processing of the files, for the reasons mentioned by other posters. However, since it integrates with S3 events, it could be used as a trigger for something else. It could send a message to SQS, where another process running on EC2 (ECS, Elastic Beanstalk, etc.) could handle the messages in the queue and then process the files from S3. | 1 | 0 | 0 | 0 | I have an S3 bucket which users upload zipped directories to, often 1 GB in size. The zipped directory holds images in subfolders, among other things.
I need to create a Lambda function that will get triggered upon new uploads, unzip the file, and upload the unzipped content back to an S3 bucket, so I can access the individual files via HTTP. But I'm pretty clueless as to how to write such a function.
My concerns are:
Python or Java probably offers better performance than Node.js?
Avoiding running out of memory when unzipping files of 1 GB or more (can I stream the content back to S3?) | AWS lambda function to retrieve any uploaded files from s3 and upload the unzipped folder back to s3 again | 0 | 0 | 1 | 0 | 0 | 1,257 |
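On the memory concern: the zip format lets you extract one member at a time, so a worker never has to hold the whole archive's uncompressed contents at once. A sketch using an in-memory archive standing in for the object fetched from S3 — the actual download/upload would go through boto3 (e.g. get_object / put_object) and is omitted here:

```python
import io
import zipfile

# An in-memory archive standing in for the zipped directory from S3.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("images/a.txt", b"hello")
    zf.writestr("images/b.txt", b"world")
buf.seek(0)

# Extract one member at a time: only a single file's uncompressed data
# is held in memory, instead of the whole archive.  Each `data` chunk
# would then be uploaded back to S3 under its own key.
extracted = {}
with zipfile.ZipFile(buf) as zf:
    for name in zf.namelist():
        with zf.open(name) as member:
            extracted[name] = member.read()

print(sorted(extracted))  # ['images/a.txt', 'images/b.txt']
```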
40,631,040 | 2016-11-16T11:31:00.000 | 0 | 0 | 1 | 0 | 0 | python,machine-learning,nlp,ocr | 0 | 42,022,673 | 0 | 1 | 0 | false | 0 | 0 | You could build extraction zones to fetch this content.
In other words, group documents that have the required content within a given area of the image, and then fetch the contents from that area for all images. | 1 | 1 | 0 | 0 | I'm trying to apply NLP to an OCR document. To extract named entities, how can I use features like the position of a word in the document?
For example, I have a health report, and I need to extract the chemical terms in a particular area of the report while avoiding their occurrences elsewhere. Can I define a position feature for this in terms of {top: x, left: y} values?
Are there any sklearn libraries? | NLP: Position feature of a word in an OCR of a document | 0 | 0 | 1 | 0 | 0 | 393 |
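If the OCR engine reports bounding boxes, the positional feature the question asks about can be as simple as a zone test. A hypothetical sketch — the top/left/bottom/right coordinate layout is an assumption about what your OCR output provides:

```python
def in_zone(word_box, zone):
    """True if a word's OCR bounding box lies inside a page zone.

    Both arguments are dicts with top/left/bottom/right coordinates
    (a hypothetical layout produced by your OCR engine).
    """
    return (zone["top"] <= word_box["top"]
            and word_box["bottom"] <= zone["bottom"]
            and zone["left"] <= word_box["left"]
            and word_box["right"] <= zone["right"])

# A hypothetical "chemical terms" region of the report page.
chemical_zone = {"top": 100, "left": 0, "bottom": 400, "right": 300}
word = {"top": 150, "left": 40, "bottom": 170, "right": 120}
print(in_zone(word, chemical_zone))  # True
```

The boolean (or the raw coordinates, scaled) can then be appended to each word's feature vector before training a scikit-learn classifier.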
40,666,853 | 2016-11-17T23:24:00.000 | 0 | 1 | 0 | 1 | 0 | python,file,directory,relative-path | 0 | 40,666,955 | 0 | 3 | 0 | false | 0 | 0 | Try this one:
os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "config") | 1 | 1 | 0 | 0 | Say I am running a Python script in C:\temp\templates\graphics. I can get the current directory using currDir = os.getcwd(), but how can I use relative path to move up in directories and execute something in C:\temp\config (note: this folder will not always be in C:\)? | Move up in directory structure | 0 | 0 | 1 | 0 | 0 | 930 |
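The answer's nested dirname calls can be shown concretely; ntpath — which is what os.path is on Windows — is used here so the sketch runs on any OS with the question's Windows-style paths:

```python
import ntpath  # Windows path rules, regardless of the host OS

curr_dir = r"C:\temp\templates\graphics"
# Each dirname() call strips one component, moving up one level.
two_up = ntpath.dirname(ntpath.dirname(curr_dir))  # C:\temp
config_dir = ntpath.join(two_up, "config")         # C:\temp\config
print(config_dir)
```

On Windows itself, plain os.path (or pathlib's parents) gives the same result; because everything is derived from the script's own location rather than a hardcoded drive, it still works when the folder is not under C:\.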
40,672,551 | 2016-11-18T08:36:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,escaping,load-data-infile,pymysql | 1 | 40,672,954 | 0 | 2 | 0 | false | 0 | 0 | I think the problem is with the SQL statement you print. The single quote in ''' should be escaped: '\''. Your backslash escapes the quote at Python level, and not the MySQL level. Thus the Python string should end with ENCLOSED BY '\\'';
You may also use the raw string literal notation:
r"""INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" | 2 | 1 | 0 | 0 | I built a simple statement to run a load data local infile on a MySQL table. I'm using Python to generate the string and also run the query. So I'm using the Python package pymysql.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but it does not load the data properly, since I need the enclosing character.
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the MySQL engine is not accepting it. What should I edit in Python to escape the single quote or write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
Also tried escaping the filename quotes: load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply cur.execute(load_data_statement).
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 1 | 0 | 1 | 1 | 0 | 4,539 |
40,672,551 | 2016-11-18T08:36:00.000 | 3 | 0 | 0 | 0 | 0 | python,mysql,escaping,load-data-infile,pymysql | 1 | 40,672,606 | 0 | 2 | 0 | false | 0 | 0 | No need for escaping that string.
cursor.execute("SELECT * FROM Codes WHERE ShortCode = %s", text)
You should use %s in place of the values in your string; the actual value (in this case text) is then passed as a parameter. This is the most secure way of protecting against SQL injection. | 2 | 1 | 0 | 0 | I built a simple statement to run a load data local infile on a MySQL table. I'm using Python to generate the string and also run the query. So I'm using the Python package pymysql.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but it does not load the data properly, since I need the enclosing character.
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the MySQL engine is not accepting it. What should I edit in Python to escape the single quote or write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
Also tried escaping the filename quotes: load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply cur.execute(load_data_statement).
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 1 | 0.291313 | 1 | 1 | 0 | 4,539 |
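As the second answer says, parameter placeholders let the driver handle embedded quotes for values. pymysql uses %s placeholders; the sketch below shows the same idea with the stdlib sqlite3 driver (which uses ?) so it is runnable anywhere. Note that the ENCLOSED BY '...' clause itself cannot be a placeholder, which is why the first answer's '\\'' escape is still needed for that literal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE codes (short_code TEXT)")
# The driver quotes the parameter itself, embedded apostrophe and all;
# no manual escaping of the single quote is needed.
conn.execute("INSERT INTO codes VALUES (?)", ("it's quoted",))

row = conn.execute(
    "SELECT short_code FROM codes WHERE short_code = ?",
    ("it's quoted",),
).fetchone()
print(row[0])  # it's quoted
```

With pymysql the same call shape is cursor.execute("... WHERE x = %s", (value,)), only the placeholder style differs.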
40,673,629 | 2016-11-18T09:36:00.000 | 0 | 0 | 0 | 0 | 1 | python,numpy,machine-learning,scikit-learn | 0 | 40,673,779 | 0 | 1 | 0 | true | 0 | 0 | You cannot use NaN values because the input vector will, for instance, be multiplied with a weight matrix. The result of such operations needs to be defined.
What you typically do if you have gaps in your input data is, depending on the specific type and structure of the data, fill the gaps with "artificial" values. For instance, you can use the mean or median of the same column in the remaining training data instances. | 1 | 0 | 1 | 0 | I have a classifier that has been trained using a given set of input training data vectors. There are missing values in the training data, which are filled in as numpy.nan values using imputers.
However, in the case of my input vector for prediction, how do I pass in the input where a value is missing? Should I pass the value as NaN? Does the imputer play a role in this?
If I have to fill in the value manually, how do I fill it in for such a case? Will I need to calculate the mean/median/frequency from the existing data?
Note: I am using sklearn. | Pass a NAN in input vector for prediction | 0 | 1.2 | 1 | 0 | 0 | 787 |
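An imputer fitted on the training data can simply transform a prediction vector containing np.nan, reusing the statistics it learned. The manual equivalent of the mean strategy looks like this stdlib sketch (impute_with_mean is a hypothetical helper):

```python
from statistics import mean

def impute_with_mean(column_training_values, new_value):
    """Replace a missing feature (None or NaN) with the training-column mean."""
    if new_value is None or new_value != new_value:  # NaN != NaN
        return mean(column_training_values)
    return new_value

training_column = [2.0, 4.0, 6.0]
print(impute_with_mean(training_column, None))          # 4.0
print(impute_with_mean(training_column, float("nan")))  # 4.0
print(impute_with_mean(training_column, 5.0))           # 5.0
```

The key point either way: the fill statistic must come from the training data, not from the single vector being predicted.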
40,680,022 | 2016-11-18T14:53:00.000 | 0 | 1 | 0 | 0 | 0 | python,automated-tests,hudson,testlink | 0 | 40,722,012 | 0 | 1 | 0 | true | 1 | 0 | I found how to do it:
I used testLink-API-Python-client. | 1 | 0 | 0 | 0 | I want to run automated tests on my Python script using Hudson and TestLink. I configured Hudson with my TestLink server, but the test results are always "not run". Do you know how to do this? | Automate python test with testlink and hudson | 0 | 1.2 | 1 | 0 | 0 | 299 |
40,685,275 | 2016-11-18T20:15:00.000 | 0 | 0 | 0 | 0 | 0 | python | 0 | 40,699,877 | 0 | 1 | 0 | true | 1 | 0 | I was able to solve this just now!
I went through the DNS settings of my domain and pointed the DNS A record to the IP address that my Flask application is running on.
Previously, I was using a redirect on the domain, which was not working. | 1 | 0 | 0 | 0 | I have built a Flask web application that makes use of Google's authenticated login to authenticate users. I currently have it running on localhost at 127.0.0.1:5000; however, I would like to point a custom domain name (which I would like to purchase) to it.
I have used custom domains with Flask applications before; I'm just not sure how to do it with this one. I'm confused as to what I would do with my OAuth callback.
My callback is set to http://127.0.0.1:5000/authorized in my Google oauth client credentials. I don't think it would just be as easy as running the app on 0.0.0.0.
I would need to be able to match the Flask routes to the domain, i.e. be able to access www.mydomain.com/authorized. | Use custom domain for flask app with Google authenticated login | 0 | 1.2 | 1 | 0 | 0 | 447 |
40,689,085 | 2016-11-19T03:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,terminal,pip,xlwings,numexpr | 0 | 40,692,275 | 0 | 1 | 0 | false | 0 | 0 | xlwings imports pandas if it is installed, and pandas in turn imports numexpr if it's available. numexpr seems to be incorrectly installed. I would reinstall numexpr using conda (as you are using Anaconda) and, if that doesn't help, pandas and xlwings as well. You could also create a new conda environment and conda install xlwings to try it out in a fresh environment. | 1 | 0 | 0 | 0 | I'm trying to get started with xlwings, but am receiving a few errors when I go to import it.
I pulled up my OSX terminal, ran
pip install xlwings
no problem there. Fired up python
$ python
then ran
import xlwings as xw
And it gave me this:
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:53: UserWarning: [Errno 2] No such file or directory: 'arch'
stacklevel=stacklevel + 1)
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:53: UserWarning: [Errno 2] No such file or directory: 'machine'
stacklevel=stacklevel + 1)
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:76: UserWarning: [Errno 2] No such file or directory: 'sysctl'
stacklevel=stacklevel + 1):
I tried uninstalling and reinstalling the numexpr package
pip uninstall numexpr
pip install numexpr
and doing the same with xlwings, but I am still receiving this error. :/
Any ideas on how to get the missing files? | import xlwings, missing files from numexpr package | 0 | 0 | 1 | 0 | 0 | 179 |
40,690,016 | 2016-11-19T06:45:00.000 | 1 | 0 | 0 | 0 | 0 | python,artificial-intelligence | 0 | 40,690,032 | 0 | 1 | 0 | false | 0 | 0 | Treat them like human players; don't give them the internal guts, just give them an interface to use.
E.g. give them an object that contains only the information they're allowed to access, and have the AI return a choice of which action they wish to perform. | 1 | 0 | 0 | 0 | I'm making an AI for a card game in Python and was wondering how I can keep players' decision functions from accessing the information given to them by the game that they shouldn't be able to access (for example, other players' hands). Currently, the game object itself is being passed into the players' decision functions.
I can only see two avenues of improvement: to either carefully choose what you pass in (although even things like one's own deck shouldn't be able to be manipulated by oneself, sadly, so this might not work), or to somehow filter using some obfuscation method, but I can't really think of one. Can you think of a better way to design this?
Thanks!
Andrew | Obfuscating/Hiding Information from other players when writing an AI for a game | 0 | 0.197375 | 1 | 0 | 0 | 31 |
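The interface idea from the accepted answer can be sketched in a few lines of Python; all names (Game, GameView, ai_decide) and the toy card rule are hypothetical:

```python
class Game:
    """Full game state -- never handed to an AI directly."""
    def __init__(self):
        self.hands = {"ai": ["7H", "KD"], "opponent": ["2C", "9S"]}
        self.table = ["AS"]

    def legal_actions(self, player):
        return list(self.hands[player])  # toy rule: play any card in hand

class GameView:
    """Read-only snapshot of what one player is allowed to see."""
    def __init__(self, game, player):
        # Tuples of copies: the AI can neither see nor mutate the real state,
        # and opponents' hands are simply never put into the view.
        self.my_hand = tuple(game.hands[player])
        self.table = tuple(game.table)
        self.legal_actions = tuple(game.legal_actions(player))

def ai_decide(view):
    # The AI only ever receives a GameView, never the Game itself.
    return view.legal_actions[0]

game = Game()
choice = ai_decide(GameView(game, "ai"))
print(choice)  # '7H'
```

The game engine then validates the returned action against the real state, so even a misbehaving decision function cannot do anything illegal.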
40,693,141 | 2016-11-19T12:54:00.000 | 0 | 0 | 1 | 0 | 0 | list,python-3.x,loops,iteration | 0 | 40,693,343 | 0 | 2 | 0 | false | 0 | 0 | Set a variable counter to 0
Loop through each item in the word list and compare each item with the given word
If the given word is greater than the item in the list, then increment the counter
So when you are out of the loop, counter value is the index you are looking for.
You can convert this to code. | 1 | 0 | 0 | 0 | I understand how to index a word in a given list, but if given a set list and a word not in the list, how do I find the index position of the new word without appending or inserting the new word to the sorted list?
For example:
def find_insert_position:
a_list = ['Bird', 'Dog', 'Alligator']
new_animal = 'Cow'
Without altering the list, how would I determine where the new word would be inserted within a sorted list? So that if you entered the new word, the list would stay in alphabetical order. Keep in mind this is a given list and word, so I would not know any of the words beforehand. I am using Python 3. | How to iterate a list without inserting the new word to list? | 0 | 0 | 1 | 0 | 0 | 43
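For what it's worth, the counting approach described in the answer is exactly what the standard library's bisect module implements; a minimal sketch using the question's data:

```python
import bisect

a_list = sorted(['Bird', 'Dog', 'Alligator'])   # ['Alligator', 'Bird', 'Dog']
new_animal = 'Cow'

# Index where new_animal would be inserted to keep the list sorted,
# without actually modifying the list.
pos = bisect.bisect_left(a_list, new_animal)
print(pos)  # 2: 'Cow' goes between 'Bird' and 'Dog'
```

`bisect_left` does a binary search, so it is O(log n) instead of the O(n) counting loop, but both give the same answer.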
40,703,762 | 2016-11-20T11:58:00.000 | 0 | 0 | 1 | 0 | 0 | python,c++,epoll | 0 | 40,704,285 | 0 | 1 | 0 | true | 0 | 1 | You can always maintain a separate map from fd to A or B. Then when an event gets triggered, lookup based on fd.
Doesn't look like epoll has a richer interface, even in Python 3+. | 1 | 0 | 0 | 0 | I use Python's epoll, but I can't use event.data.ptr like in C++.
Sometimes I will register class A's fd and sometimes I will register class B's fd.
So, when epoll.poll() returns, how can I know whether the fd belongs to class A or B? | python,epoll.register(fd, eventmask) only has two parameters, how can I use event.data_ptr like in C++? | 0 | 1.2 | 1 | 0 | 0 | 171
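Since Python's `epoll.poll()` only returns `(fd, eventmask)` pairs, the fd-to-object map the answer suggests might look like this (the helper names are hypothetical; on Linux you would pass a real `select.epoll()` object instead of `None`):

```python
import select

class A:
    pass

class B:
    pass

# Python's epoll.poll() only returns (fd, eventmask) pairs, so keep
# your own mapping from fd to the owning object.
registry = {}

def register(epoll_obj, fd, owner, eventmask=None):
    registry[fd] = owner
    if epoll_obj is not None:  # pass a real select.epoll() on Linux
        epoll_obj.register(fd, eventmask if eventmask is not None else select.EPOLLIN)

def dispatch(events):
    """Map poll() results back to the objects that own each fd."""
    return [(registry[fd], mask) for fd, mask in events]

# Simulated poll() result: fd 3 belongs to an A, fd 4 to a B.
a, b = A(), B()
register(None, 3, a)
register(None, 4, b)
owners = dispatch([(3, 1), (4, 1)])
print(type(owners[0][0]).__name__, type(owners[1][0]).__name__)  # A B
```

This is effectively what `event.data.ptr` gives you for free in C++: a user-chosen payload keyed by the file descriptor.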
40,712,203 | 2016-11-21T03:12:00.000 | 0 | 0 | 1 | 0 | 0 | python,pycharm | 0 | 58,099,676 | 0 | 4 | 0 | false | 0 | 0 | Please note that this button does not reset all the variables. But then you can enter in the command prompt: reset. | 2 | 32 | 0 | 0 | I've seen other IDEs have the option to right click and reinitialize the environment. Does anyone know if this is possible in PyCharm, and if so, how it's done? | How to reinitialize the Python console in PyCharm? | 1 | 0 | 1 | 0 | 0 | 31,725 |
40,712,203 | 2016-11-21T03:12:00.000 | 0 | 0 | 1 | 0 | 0 | python,pycharm | 0 | 71,106,939 | 0 | 4 | 0 | false | 0 | 0 | I had tried "rerun" and found that it didn't reload the new environment.
I suggest that "new console" button could help you to reload the environment that it had installed the new packages. | 2 | 32 | 0 | 0 | I've seen other IDEs have the option to right click and reinitialize the environment. Does anyone know if this is possible in PyCharm, and if so, how it's done? | How to reinitialize the Python console in PyCharm? | 1 | 0 | 1 | 0 | 0 | 31,725 |
40,729,919 | 2016-11-21T21:42:00.000 | 0 | 0 | 0 | 1 | 0 | python,linux,lttng | 0 | 40,730,003 | 0 | 1 | 0 | false | 0 | 0 | Handling read, write, pread, pwrite, readv, writev should be enough.
You just have to check whether the FD refers to the cache or disk. I think it would be easier in kernel space, by writing a module, but... | 1 | 0 | 0 | 0 | I have a Python program that reads Linux kernel system calls (using LTTng), so with this program I can read all kernel calls. I run some operations and then analyze the system calls with the Python program; among the operations is some I/O work, and I need to know how many bytes were read from the cache and how many from disk. Which system calls show me the bytes read from cache and disk? | Which Linux kernel system calls show bytes read from disk | 0 | 0 | 1 | 0 | 0 | 130
40,762,810 | 2016-11-23T11:08:00.000 | 0 | 0 | 0 | 0 | 0 | wxpython | 0 | 40,940,273 | 0 | 1 | 0 | false | 0 | 1 | Take a look at the FloatCanvas sample in the wxPython demo. There is also a sample for the OGL package there. Either, or both of those would likely be suitable for diagrams depending on your needs. | 1 | 0 | 0 | 0 | I want to build an app for making UML diagrams. For visibility problems, I wish I could move classes using the mouse.
So here is my problem:
Do you know how I can drag and drop a widget in a canvas using wxPython?
I found some things about OGL, but they sound strange to me...
Thx guys | wxpython self made widget and drag and drop it in a canvas | 0 | 0 | 1 | 0 | 0 | 84 |
40,766,909 | 2016-11-23T14:18:00.000 | 33 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 40,767,005 | 0 | 4 | 1 | true | 0 | 0 | Just decrease the opacity of the lines so that they are see-through. You can achieve that using the alpha variable. Example:
plt.plot(x, y, alpha=0.7)
Where alpha ranging from 0-1, with 0 being invisible. | 1 | 28 | 1 | 0 | Does anybody have a suggestion on what's the best way to present overlapping lines on a plot? I have a lot of them, and I had the idea of having full lines of different colors where they don't overlap, and having dashed lines where they do overlap so that all colors are visible and overlapping colors are seen.
But still, how do I that. | Suggestions to plot overlapping lines in matplotlib? | 0 | 1.2 | 1 | 0 | 0 | 35,477 |
40,782,641 | 2016-11-24T09:36:00.000 | 0 | 0 | 0 | 0 | 1 | python-2.7,selenium-webdriver,selenium-chromedriver | 1 | 40,795,935 | 0 | 1 | 0 | false | 0 | 0 | Fixed. It was a compatibility error. Just needed to download the latest ChromeDriver version and it worked. | 1 | 0 | 0 | 0 | Every time I open up Chrome driver in my python script, it says "chromedriver.exe has stopped working" and crashes my script with the error: [Errno 10054] An existing connection was forcibly closed by the remote host.
I read the other forum posts on this error, but I'm very new to this and a lot of it was jargon that I didn't understand. One said something about graceful termination, and one guy said "running the request again" solved his issue, but I have no idea how to do that. Can someone explain to me in more detail how to fix this? | [Errno 10054], selenium chromedriver crashing each time | 0 | 0 | 1 | 0 | 1 | 298 |
40,800,757 | 2016-11-25T08:40:00.000 | 0 | 1 | 0 | 0 | 0 | python,amazon-web-services,lambda | 0 | 65,574,478 | 0 | 2 | 0 | false | 1 | 0 | You can use S3 Event Notifications to trigger the lambda function.
In the bucket's properties, create a new event notification for an event type of s3:ObjectCreated:Put and set the destination to a Lambda function.
Then for the Lambda function, write code in Python or Node.js (or whatever you like) that parses the received event and sends it to the Slack webhook URL. | 2 | 3 | 0 | 0 | I have a use case where I want to invoke my Lambda function whenever an object has been pushed to S3 and then push this notification to Slack.
I know this is vague, but how can I start doing so? How can I basically achieve this? I need to see the structure. | How to write an AWS lambda function with S3 and Slack integration | 0 | 0 | 1 | 0 | 1 | 1,589
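A hedged sketch of such a Lambda handler in Python: the webhook URL is a placeholder, and `build_message` parses the standard (abridged) shape of the S3 put-notification event that Lambda receives:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def build_message(event):
    """Turn an S3 put-event into a Slack message payload."""
    rec = event["Records"][0]
    bucket = rec["s3"]["bucket"]["name"]
    key = rec["s3"]["object"]["key"]
    return {"text": "New object s3://%s/%s" % (bucket, key)}

def lambda_handler(event, context):
    # Serialize the message and POST it to the Slack incoming webhook.
    payload = json.dumps(build_message(event)).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Abridged shape of the event S3 sends to Lambda:
sample_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "report.csv"}}}]}
print(build_message(sample_event)["text"])
```

In the AWS console you would set `lambda_handler` as the function's handler; the message formatting is kept in a separate pure function so it can be tested without hitting the network.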
40,815,079 | 2016-11-26T04:57:00.000 | 1 | 1 | 1 | 0 | 0 | python,python-2.7,python-3.x,version | 0 | 40,815,167 | 0 | 2 | 0 | false | 0 | 0 | Could you not look at the output from 2to3 to see if any code changes may be necessary ? | 2 | 4 | 0 | 0 | Is there any automated way to test that code is compatible with both Python 2 and 3? I've seen plenty of documentation on how to write code that is compatible with both, but nothing on automatically checking. Basically a kind of linting for compatibility between versions rather than syntax/style.
I have thought of either running tests with both interpreters or running a tool like six or 2to3 and checking that nothing is output; unfortunately, the former requires that you have 100% coverage with your tests and I would assume the latter requires that you have valid Python 2 code and would only pick up issues in compatibility with Python 3.
Is there anything out there that will accomplish this task? | Checking code for compatibility with Python 2 and 3 | 0 | 0.099668 | 1 | 0 | 0 | 4,817 |
40,815,079 | 2016-11-26T04:57:00.000 | 4 | 1 | 1 | 0 | 0 | python,python-2.7,python-3.x,version | 0 | 40,815,143 | 0 | 2 | 0 | true | 0 | 0 | There is no "fool-proof" way of doing this other than running the code on both versions and finding inconsistencies. With that said, CPython2.7 has a -3 flag which (according to the man page) says:
Warn about Python 3.x incompatibilities that 2to3 cannot trivially fix.
As for the case where you have valid python3 code and you want to backport it to python2.x -- you likely don't actually want to do this. python3.x is the future. This is likely to be a very painful problem in the general case. A lot of the reason to start using python3.x is because then you gain access to all sorts of cool new features. Trying to re-write code that is already relying on cool new features is frequently going to be very difficult. You're much better off trying to upgrade python2.x packages to work on python3.x than doing things the other way around. | 2 | 4 | 0 | 0 | Is there any automated way to test that code is compatible with both Python 2 and 3? I've seen plenty of documentation on how to write code that is compatible with both, but nothing on automatically checking. Basically a kind of linting for compatibility between versions rather than syntax/style.
I have thought of either running tests with both interpreters or running a tool like six or 2to3 and checking that nothing is output; unfortunately, the former requires that you have 100% coverage with your tests and I would assume the latter requires that you have valid Python 2 code and would only pick up issues in compatibility with Python 3.
Is there anything out there that will accomplish this task? | Checking code for compatibility with Python 2 and 3 | 0 | 1.2 | 1 | 0 | 0 | 4,817 |
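One cheap automated check that needs no test coverage at all is a compile pass run under each interpreter in turn. It is only a partial check -- it catches syntax-level incompatibilities, not behavioral ones -- but it costs nothing to add:

```python
def syntax_ok(source, filename="<string>"):
    """Return True if `source` compiles under the *running* interpreter."""
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError:
        return False

# Valid in both Python 2 and 3:
print(syntax_ok('print("hello")'))   # True
# Python 2 only -- fails to compile under Python 3:
print(syntax_ok('print "hello"'))    # False when run under Python 3
```

Run the same script once with `python2` and once with `python3` over your source files; any file that fails either pass is syntactically incompatible with that version.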
40,829,181 | 2016-11-27T12:54:00.000 | 0 | 0 | 1 | 1 | 0 | python,cmd,interpreter | 0 | 40,829,247 | 0 | 3 | 0 | true | 0 | 0 | The simplest way would be to just do the following in cmd:
C:\path\to\file\test.py
Windows recognizes the file extension and runs it with Python.
Or you can change the directory to where the Python program/script is by using the cd command in the command prompt:
cd C:\path\to\file
Start Python in the terminal and import the script using the import function:
import test
You do not have to specify the .py file extension. The script will only run once per process, so you'll need to use the reload function (importlib.reload in Python 3) to run it again after its first import.
You can make python run the script from a specific directory:
python C:\path\to\file\test.py | 2 | 0 | 0 | 0 | I have Python 3.6 and Windows 7.
I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter.
I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used).
However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter? | Running a python program in Windows? | 0 | 1.2 | 1 | 0 | 0 | 1,359 |
40,829,181 | 2016-11-27T12:54:00.000 | 0 | 0 | 1 | 1 | 0 | python,cmd,interpreter | 0 | 40,829,271 | 0 | 3 | 0 | false | 0 | 0 | In command prompt you need to navigate to the file location. In your case it is in C:\ drive, so type:
cd C:\
and then proceed to run your program:
python test.py
or you could do it in one line:
python C:\test.py | 2 | 0 | 0 | 0 | I have Python 3.6 and Windows 7.
I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter.
I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used).
However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter? | Running a python program in Windows? | 0 | 0 | 1 | 0 | 0 | 1,359 |
40,830,517 | 2016-11-27T15:21:00.000 | 2 | 0 | 0 | 0 | 1 | android,python,kivy,crash-reports | 1 | 40,833,444 | 0 | 1 | 0 | true | 0 | 1 | Finally figured out how to use adb logcat. Install android studio 2.2. Connect your device to the PC via USB and enable debugging mode in the developer options. cd in command prompt to C:\Users[user]\AppData\Local\Android\sdk\platform-tools, and run adb devices. If the serial number of your android device appears, you're good to go. Then run adb logcat while executing the kivy app in your phone, and you'll get the realtime logs. | 1 | 1 | 0 | 0 | When running my kivy app on Android Kivy Launcher it crashes instantly. I've looked everywhere for the logs, but couldn't find them. No .kivy folder is created, apps that view android logcat require root access and I couldn't get working adb logcat. Can someone explain to me how to use adb logcat to catch the error, or state another solution to my problem?
PS: The app uses kivy 1.9.1 and py 3.4.4, runs fine on windows and my cellphone is a Xperia Z5 running Android Marshmallow 6.0.1 | Cant find kivy logs when running with Kivy Launcher | 0 | 1.2 | 1 | 0 | 0 | 1,259 |
40,837,982 | 2016-11-28T05:59:00.000 | 0 | 0 | 0 | 0 | 0 | python,hbase,happybase | 0 | 41,020,057 | 0 | 1 | 0 | false | 0 | 0 | the hbase data model does not offer that efficiently without encoding the timestamp as a prefix of the row key, or using a second table as a secondary index. read up on hbase schema design, your use case is likely covered in sections on time series data. | 1 | 0 | 0 | 0 | I'm trying to get the items (and their count) that were inserted in the last 24 hours using happybase. All I can think of is to use the timestamp to do that, but I couldn't figure out how to do that.
I can connect to hbase | How to scan between two timestamps range using happybase? | 0 | 0 | 1 | 0 | 0 | 862 |
40,840,480 | 2016-11-28T09:03:00.000 | -1 | 0 | 1 | 1 | 0 | python,python-3.x,ubuntu | 0 | 40,840,607 | 0 | 2 | 0 | false | 0 | 0 | I think you have to build it yourself from source... You can easily find a guide for that if you google it. | 1 | 2 | 0 | 0 | I would like to install my own package in my local dir (I do not have the root privilege). How to install python3-dev locally?
I am using ubuntu 16.04. | python3: how to install python3-dev locally? | 0 | -0.099668 | 1 | 0 | 0 | 2,282 |
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | 0 | python,string,hardcoded | 0 | 40,851,410 | 0 | 4 | 0 | false | 0 | 0 | You could do any of these: 1) read the string from a file, 2) read the string from a database, 3) pass the string as a command line argument | 4 | 0 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP into the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 1 | 0 | 0 | 933
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | 0 | python,string,hardcoded | 0 | 40,851,444 | 0 | 4 | 0 | false | 0 | 0 | One option is to use asymmetric encryption, you can request the private string from a server using it (see SSL/TLS).
If you want to do it locally you should write/read to a file (OS should take care of authorization, by means of user access) | 4 | 0 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP into the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 1 | 0 | 0 | 933
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | 0 | python,string,hardcoded | 0 | 43,877,881 | 0 | 4 | 0 | false | 0 | 0 | If you want to hide it, Encrypt it, you see this alot on github where people accidently post their aws keys or what not to github because they didnt use it as a "secret".
always encrypt important data, never use hardcoded values like ip as this can easily be changed, some sort of dns resolver can help to keep you on route.
in this specific case you mentioned using a DNS to point to a proxy will help to mask the endpoint ip. | 4 | 0 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP into the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 1 | 0 | 0 | 933
40,851,318 | 2016-11-28T18:44:00.000 | 1 | 1 | 1 | 0 | 0 | python,string,hardcoded | 0 | 40,852,193 | 0 | 4 | 0 | true | 0 | 0 | There are several ways to obscure a hard-coded string. One of the simplest is to XOR the bits with another string or a repeated constant. Thus you could have two hardcoded short arrays that, when XORed together, produce the string you want to obscure. | 4 | 0 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP into the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 1.2 | 1 | 0 | 0 | 933
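The XOR trick from the accepted answer can be sketched in a few lines. Note that this is obfuscation, not real security -- anyone who finds both arrays can recover the string -- but it keeps the plain text out of the binary:

```python
KEY = b"\x5a\x13\x7e\x42"  # arbitrary hardcoded constant

def xor_bytes(data, key=KEY):
    """XOR each byte with a repeating key; applying it twice round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"10.0.0.42"             # e.g. the IP you don't want visible
obscured = xor_bytes(secret)      # store only this constant in the code
recovered = xor_bytes(obscured)   # XOR again at runtime to get it back
print(recovered.decode())         # 10.0.0.42
```

At build time you would run `xor_bytes` once and hardcode `obscured` plus `KEY`; the readable string then never appears verbatim in the source or the shipped binary.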
40,856,714 | 2016-11-29T02:14:00.000 | 2 | 0 | 0 | 0 | 0 | python,web,web2py | 0 | 40,870,054 | 0 | 1 | 0 | true | 1 | 0 | web2py serves static files from the application's /static folder, so just put the files in there. If you need to generate links to them, you can use the URL helper: URL('static', 'path/to/static_file.html') (where the second argument represents the path within the /static folder). | 1 | 0 | 0 | 0 | Say I have some comp html files designer gave me and I want to just use it right away in a web2py website running on 127.0.0.1, with web2py MVC structure, how can I achieve that? | How to visit html static site inside web2py project | 0 | 1.2 | 1 | 0 | 0 | 110 |
40,864,539 | 2016-11-29T11:16:00.000 | 1 | 0 | 0 | 0 | 0 | python,openerp,odoo-9 | 0 | 40,881,979 | 0 | 2 | 0 | true | 1 | 0 | If you don't want to use the modules in the addons store, you can create a new class that inherits from models.Model and override the create and write methods to save an audit record in another model. Then make your new models inherit from this class instead of models.Model directly; whenever a create or write happens, it will call the create and write of that parent class. | 1 | 1 | 0 | 1 | I want to create a new module that saves the history of a record when someone edits it, but I can't find any documents regarding how to catch an edit action. Does anyone know how to do it? | How can I "catch" action edit in odoo | 0 | 1.2 | 1 | 0 | 0 | 299
40,898,087 | 2016-11-30T20:58:00.000 | 0 | 1 | 0 | 0 | 0 | debian,python-3.5,file-sharing | 0 | 40,911,268 | 0 | 1 | 0 | true | 0 | 0 | Probably the easiest approach to achieve, which is also secure, is to use sshfs between the servers. | 1 | 0 | 0 | 0 | Right, I have a bot that has 2 shards, each on its own server. I need a way to share data between the two, preferably as files, but I'm unsure how to achieve this.
The bot is completely python3.5 based
The servers are both running Headless Debian Jessie
The two servers aren't connected via LAN, so this has to be sharing data over the internet
The data doesn't need to be encrypted, as no sensitive data is shared | Share data between two scripts on different servers | 0 | 1.2 | 1 | 0 | 1 | 27
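The sshfs setup suggested in the answer takes only a couple of commands; the hostname, user, and paths below are placeholders:

```sh
# On server A (Debian Jessie): install sshfs and mount a directory
# that physically lives on server B, over plain SSH.
sudo apt-get install sshfs
mkdir -p /mnt/shared
sshfs botuser@serverB.example.com:/home/botuser/shared /mnt/shared

# Both shards can now read/write ordinary files under /mnt/shared;
# writes on A land on B transparently. Unmount when done:
fusermount -u /mnt/shared
```

Since sshfs tunnels everything over SSH, the transport is encrypted anyway, even though the question says encryption isn't required.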
40,942,391 | 2016-12-02T23:25:00.000 | 2 | 0 | 0 | 0 | 0 | python,machine-learning | 0 | 40,942,885 | 0 | 2 | 0 | false | 0 | 0 | Since there's an unknown function that generates the output, it's a regression problem. Neural network with 2 hidden layers and e.g. sigmoid can learn any arbitrary function. | 2 | 2 | 1 | 0 | I need some orientation for a problem I’m trying to solve. Anything would be appreciated, a keyword to Google or some indication !
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking ? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items, I want to determine how they rank compared to each other by the ranking function.
I have to my disposition an infinite number of items since I generate them myself. The ranking function could be anything but let’s say it is :
attribute a score = 1/3 * ( x1 + x2 + x3 ) to each item and sort by descending score
The goal for the model is to guess as close as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance ! | ML model to predict rankings (arbitrary ordering of a list) | 1 | 0.197375 | 1 | 0 | 0 | 1,288 |
40,942,391 | 2016-12-02T23:25:00.000 | 1 | 0 | 0 | 0 | 0 | python,machine-learning | 0 | 40,947,117 | 0 | 2 | 0 | true | 0 | 0 | It could be treated as a regression problem with the following trick: You are given 5 items with 5 feature vectors and the "black box" function outputs 5 distinct scores as [1, 2, 3, 4, 5]. Treat these as continuous values. So, you can think of your function as operating by taking five distinct input vectors x1, x2, x3, x4, x5 and outputting five scalar target variables t1, t2, t3, t4, t5 where the target variables for your training set are the scores the items get. For example, if the ranking for a single sample is (x1,4), (x2,5), (x3,3), (x4,1), (x5,2) then set t1=4, t2=5, t3=3, t4=1 and t5=2. MLPs have the "universal approximation" capability and given a black box function, they can approximate it arbitrarily closely, dependent on the hidden unit count. So, build a 2-layer MLP with the inputs as the five feature vectors and the outputs as the five ranking scores. You are going to minimize a sum-of-squares error function, the classical regression error function. And don't use any regularization term, since you are going to try to mimic a deterministic black box function; there is no random noise inherent in the outputs of that function, so you shouldn't be afraid of any overfitting issues. | 2 | 2 | 1 | 0 | I need some orientation for a problem I'm trying to solve. Anything would be appreciated, a keyword to Google or some indication!
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking ? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items, I want to determine how they rank compared to each other by the ranking function.
I have to my disposition an infinite number of items since I generate them myself. The ranking function could be anything but let’s say it is :
attribute a score = 1/3 * ( x1 + x2 + x3 ) to each item and sort by descending score
The goal for the model is to guess as close as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance ! | ML model to predict rankings (arbitrary ordering of a list) | 1 | 1.2 | 1 | 0 | 0 | 1,288 |
40,947,387 | 2016-12-03T11:46:00.000 | 0 | 0 | 1 | 1 | 0 | python,memory,memory-management | 0 | 40,947,523 | 0 | 3 | 0 | false | 0 | 0 | You can just open the task manager and look at how much RAM it takes. I use Ubuntu and it came preinstalled. | 1 | 0 | 0 | 0 | I am running a Python program on a Linux operating system and I want to know how much total memory is used by this process. Is there any way to determine the total memory usage? | total memory used by running python code | 1 | 0 | 1 | 0 | 0 | 494
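From inside the program itself, the standard library's resource module reports the peak resident set size, with platform caveats noted in the comments:

```python
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_maxrss is the peak resident set size of this process:
# reported in kilobytes on Linux, in bytes on macOS.
print("peak RSS:", usage.ru_maxrss)

# For the *current* (not peak) usage on Linux, /proc can be read directly:
# with open("/proc/self/status") as f:
#     print([line.strip() for line in f if line.startswith("VmRSS")])
```

For richer, cross-platform numbers the third-party psutil package (`psutil.Process().memory_info()`) is the usual choice.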
40,957,941 | 2016-12-04T11:06:00.000 | 0 | 0 | 0 | 0 | 0 | python-3.x,tkinter | 0 | 54,459,365 | 0 | 1 | 0 | false | 0 | 1 | You can use PyAutoGUI and Tkinter to:
Get current mouse position relative to desktop coordinates.
Minimize tkinter window or drag it out of the screen.
Simulate the click event while the window is hidden.
Bring the tkinter window back.
It should work, but I'm not sure how fast it would be. | 1 | 0 | 0 | 0 | I'm trying to write a semi-transparent click-through program to use like an onion skin over my 3D application.
The one thing I couldn't find by googling is how to make the window click-through. Is there an attribute or something for it in tkinter? Or maybe some way around it? | python 3.4 tkinter, click-through window | 1 | 0 | 1 | 0 | 0 | 50
40,959,177 | 2016-12-04T13:34:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning | 0 | 40,960,903 | 0 | 1 | 0 | true | 0 | 0 | Embedding matrix is similar to any other variable. If you set the trainable flag to True it will train it (see tf.Variable) | 1 | 0 | 1 | 0 | In tensorflow,we may see these codes.
embeddings=tf.Variable(tf.random_uniform([vocabulary_size,embedding_size],-1.0,1.0))
embed=tf.nn.embedding_lookup(embeddings,train_inputs)
When TensorFlow is training, does the embedding matrix remain unchanged?
In a blog, it is said that the embedding matrix can be updated. I wonder how that works. Thanks a lot! | In tensorflow,does embedding matrix remain unchanged? | 0 | 1.2 | 1 | 0 | 0 | 94
40,960,079 | 2016-12-04T15:15:00.000 | 1 | 0 | 1 | 0 | 0 | python,c++,c,algorithm | 0 | 40,960,224 | 0 | 1 | 0 | false | 0 | 0 | The bisection algorithm can be used to find a root in a range where the function is monotonic. You can find such segments by studying the derivative function, but in the general case, no assumptions can be made as to the monotonicity of a given function over any range.
For example, the function f(x) = sin(1/x) has an infinite number of roots between -1 and 1. To enumerate these roots, you must first determine the ranges where it is monotonic and these ranges become vanishingly small as x comes closer to 0. | 1 | 0 | 0 | 0 | Is there a way to find all the roots of a function using something on the lines of the bisection algorithm?
I thought of checking on both sides of the midpoint in a certain range, but it still doesn't seem to guarantee how deep I would have to go to be able to know if there is a root in the newly generated range; also, how would I know how many roots there are in a given range, even when I know that the function values at the endpoints are of opposite sign?
Thanks. | Bisection algorithm to find multiple roots | 0 | 0.197375 | 1 | 0 | 0 | 1,208 |
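The scan-then-bisect idea discussed above can be sketched as follows. As the answer points out, there is no general guarantee: roots closer together than the scan step, or tangent roots that touch zero without a sign change, can be missed.

```python
def find_roots(f, a, b, n=1000, tol=1e-9):
    """Scan [a, b] in n cells; run bisection on every sign-changing cell.

    Roots spaced closer than (b - a) / n, or roots where f does not
    change sign, may be missed -- no method avoids this without
    assumptions about f (e.g. known monotonic pieces).
    """
    roots = []
    step = (b - a) / n
    lo = a
    for _ in range(n):
        hi = lo + step
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:                 # root lands exactly on a grid point
            roots.append(lo)
        elif flo * fhi < 0:            # sign change -> a root in this cell
            x0, x1 = lo, hi
            while x1 - x0 > tol:       # plain bisection
                mid = (x0 + x1) / 2
                if f(x0) * f(mid) <= 0:
                    x1 = mid
                else:
                    x0 = mid
            roots.append((x0 + x1) / 2)
        lo = hi
    return roots

roots = find_roots(lambda x: (x - 1) * (x - 3) * (x - 5), 0.0, 6.0)
print([round(r, 6) for r in roots])  # [1.0, 3.0, 5.0]
```

Shrinking the step (raising `n`) separates closely spaced roots, but for examples like sin(1/x) near 0 from the answer, no finite step is ever small enough.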
40,969,733 | 2016-12-05T08:09:00.000 | 4 | 0 | 1 | 1 | 0 | python,variables,memory,ipc,ram | 0 | 40,969,773 | 0 | 4 | 0 | false | 0 | 0 | Make it a (web) microservice: formalize all the different CLI arguments as HTTP endpoints and send requests to it from the main application. | 1 | 0 | 0 | 0 | I have a Python script that needs to load a large file from disk into a variable. This takes a while. The script will be called many times from another application (still unknown), with different options, and the stdout will be used. Is there any possibility to avoid reading the large file for each single call of the script?
I guess I could have one large script running in the background that holds the variable. But then, how can I call the script with different options and read the stdout from another application? | Keeping Python Variables between Script Calls | 0 | 0.197375 | 1 | 0 | 0 | 2,241
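A minimal sketch of the microservice idea using only the standard library; the preloaded data, the port choice, and the option name are illustrative stand-ins (a real service might use Flask and a proper CLI-to-endpoint mapping):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Load the expensive file ONCE, at service start-up (toy stand-in here).
BIG_DATA = {"answer": 42}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each "call" of the old script becomes a GET with its options in
        # the path/query; the preloaded data is reused on every request.
        body = json.dumps({"option": self.path, "data": BIG_DATA}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

# A caller (the "other application") hits the endpoint instead of
# re-running the script; the large file is never re-read.
url = "http://127.0.0.1:%d/--mode=fast" % srv.server_address[1]
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read().decode())
srv.shutdown()
print(reply["option"], reply["data"]["answer"])
```

The calling application can be in any language; whatever it used to read from stdout, it now reads from the HTTP response body.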
40,974,077 | 2016-12-05T12:15:00.000 | 0 | 0 | 0 | 0 | 0 | python,modbus,can-bus,canopen | 0 | 41,042,839 | 0 | 1 | 0 | true | 0 | 0 | This will be the gateway's business to fix. There is no general answer, nor is there a standard for how such gateways work. Gateways have some manner of software that allows you to map data between the two field buses. In this case I suppose it would be either a specific CANopen PDO or a specific CAN id that you map to a Modbus address.
In case you are just writing a CANopen client, neither you nor the firmware should need to worry about Modbus. Just make a CANopen node that is standards-compliant and let the gateway deal with the actual protocol conversion.
You may however have to do the PDO mapping in order to let your client and the gateway know how to speak with each other, but that should preferably be a user-level configuration of the finished product, rather than some hard-coded mapping. | 1 | 0 | 0 | 0 | I am now studying and developing a CANopen client with a Python stack, and I'm struggling to find out how to communicate with a Modbus slave through a gateway.
Since the gateway address is the one present in the CANopen Object Dictionary, and the gateway holds the addresses of the Modbus slave I/O, how do I specify the address of the Modbus input?
As I see it, CANopen uses the node-ID to select the server and an address to select the property to read/write, but in this case I need to go further than that and point to an input.
Just to be clear, I'm in the "studying" phase; I have no particular CANopen/Modbus gateway in mind.
Regards. | How does CANopen client communicate with Modbus slave through CANopen/Modbus gateway ? | 0 | 1.2 | 1 | 0 | 1 | 339 |
40,976,901 | 2016-12-05T14:46:00.000 | 0 | 0 | 1 | 0 | 1 | python,json,csv | 0 | 40,977,897 | 0 | 1 | 0 | false | 0 | 0 | Turns out it was the json.dumps(); I should've read more into what it does! Thanks. | 1 | 0 | 1 | 0 | I've been researching for the past few days how to achieve this, to no avail.
I have a JSON file with a large array of json objects like so:
[{
"tweet": "@SHendersonFreep @realDonaldTrump watch your portfolios go to the Caribbean banks and on to Switzerland. Speculation without regulation",
"user": "DGregsonRN"
},{
"tweet": "RT @CodeAud: James Mattis Vs Iran.\n\"The appointment of Mattis by @realDonaldTrump got the Iranian military leaders' more attention\". https:\u2026",
"user": "American1765"
},{
"tweet": "@realDonaldTrump the oyou seem to be only fraud I see is you, and seem scared since you want to block the recount???hmm cheater",
"user": "tgg216"
},{
"tweet": "RT @realDonaldTrump: @Lord_Sugar Dopey Sugar--because it was open all season long--you can't play golf in the snow, you stupid ass.",
"user": "grepsalot"
},{
"tweet": "RT @Prayer4Chandler: @realDonaldTrump Hello Mr. President, would you be willing to meet Chairman #ManHeeLee of #HWPL to discuss the #PeaceT\u2026",
"user": "harrymalpoy1"
},{
"tweet": "RT @realDonaldTrump: Thank you Ohio! Together, we made history \u2013 and now, the real work begins. America will start winning again! #AmericaF\u2026",
"user": "trumpemall"
}]
And I am trying to access each key and value, and write them to a csv file. I believe using json.loads(json.dumps(file)) should work for normal json, but because there is an array of objects, I can't seem to access each individual one.
converter.py:
import json
import csv
f = open("tweets_load.json",'r')
y = json.loads(json.dumps(f.read(), separators=(',',':')))
t = csv.writer(open("test.csv", "wb+"))
# Write CSV Header, If you dont need that, remove this line
t.writerow(["tweet", "user"])
for x in y:
t.writerow([x[0],x[0]])
grab_tweets.py:
import tweepy
import json
def get_api(cfg):
auth = tweepy.OAuthHandler(cfg['consumer_key'], cfg['consumer_secret'])
auth.set_access_token(cfg['access_token'], cfg['access_token_secret'])
return tweepy.API(auth)
def main():
cfg = {
"consumer_key" : "xxx",
"consumer_secret" : "xxx",
"access_token" : "xxx",
"access_token_secret" : "xxx"
}
api = get_api(cfg)
json_ret = tweepy.Cursor(api.search, q="@realDonaldTrump",count="100").items(100)
restapi =""
for tweet in json_ret:
rest = json.dumps({'tweet' : tweet.text,'user' :str(tweet.user.screen_name)},sort_keys=True,indent=4,separators=(',',': '))
restapi = restapi+str(rest)+","
f = open("tweets.json",'a')
f.write(str(restapi))
f.close()
if __name__ == "__main__":
main()
The output so far is looking like:
tweet,user^M
{,{^M
"
","
"^M
, ^M
, ^M
, ^M
, ^M
"""",""""^M
t,t^M
w,w^M
e,e^M
e,e^M
t,t^M
"""",""""^M
:,:^M
, ^M
"""",""""^M
R,R^M
T,T^M
, ^M
@,@^M
r,r^M
e,e^M
a,a^M
l,l^M
D,D^M
o,o^M
n,n^M
a,a^M
l,l^M
What exactly am I doing wrong? | Access key value from JSON array of objects Python | 0 | 0 | 1 | 0 | 0 | 1,675 |
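Given the self-answer above (the json.dumps/json.loads round-trip was the bug), here is a hedged sketch of a working converter, assuming the tweets file is a valid JSON array (the grab_tweets.py shown above would also need to write a real array, e.g. via json.dump of a list, rather than comma-joined objects):

```python
import csv
import json

def convert(json_path, csv_path):
    # json.load alone parses the file into a list of dicts; there is
    # no need for the json.dumps round-trip from the question.
    with open(json_path) as f:
        tweets = json.load(f)
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["tweet", "user"])  # CSV header
        for item in tweets:
            writer.writerow([item["tweet"], item["user"]])
```

Iterating over the parsed list yields one dict per tweet, so each row gets real field values instead of single characters.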
40,980,163 | 2016-12-05T17:42:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,wxpython,word-wrap | 0 | 64,083,696 | 0 | 3 | 0 | false | 0 | 1 | It would be more maintainable to use the constants stc.WRAP_NONE, stc.WRAP_WORD, stc.WRAP_CHAR and stc.WRAP_WHITESPACE instead of their numerical values. | 2 | 1 | 0 | 0 | I was wondering about this, so I did quite a bit of google searches, and came up with the SetWrapMode(self, mode) function. However, it was never really detailed, and there was nothing that really said how to use it. I ended up figuring it out, so I thought I'd post a thread here and answer my own question for anyone else who is wondering how to make an stc.StyledTextCtrl() have word wrap. | How to set up word wrap for an stc.StyledTextCtrl() in wxPython | 0 | 0 | 1 | 0 | 0 | 265 |
40,980,163 | 2016-12-05T17:42:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,wxpython,word-wrap | 0 | 40,986,021 | 0 | 3 | 0 | false | 0 | 1 | I see you answered your own question, and you are right in every way except for one small detail. There are actually several different wrap modes. The types and values corresponding to them are as follows:
0: None
1: Word Wrap
2: Character Wrap
3: White Space Wrap
So you cannot enter just any value above 0 to get word wrap. In fact, if you enter a value outside of the 0-3 range, you should just end up getting no wrap, as the value won't be recognized by Scintilla, which is the engine behind the stc library. | 2 | 1 | 0 | 0 | I was wondering about this, so I did quite a few Google searches, and came up with the SetWrapMode(self, mode) function. However, it was never really detailed, and there was nothing that really said how to use it. I ended up figuring it out, so I thought I'd post a thread here and answer my own question for anyone else who is wondering how to make an stc.StyledTextCtrl() have word wrap. | How to set up word wrap for an stc.StyledTextCtrl() in wxPython | 0 | 0 | 1 | 0 | 0 | 265
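To connect the table of modes above to code, here is a hedged sketch. The numeric values mirror Scintilla's numbering as listed in the answer; the helper accepts anything with a SetWrapMode method, so in a real wxPython app you would pass an actual stc.StyledTextCtrl (or use the named constants mentioned in the other answer).

```python
# Scintilla wrap-mode values, per the table in the answer.
WRAP_NONE, WRAP_WORD, WRAP_CHAR, WRAP_WHITESPACE = 0, 1, 2, 3

def enable_word_wrap(editor):
    """Put a StyledTextCtrl-like object into word-wrap mode (1)."""
    editor.SetWrapMode(WRAP_WORD)
```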
40,989,958 | 2016-12-06T07:26:00.000 | 0 | 1 | 0 | 1 | 1 | python,ibm-cloud,nao-robot | 0 | 41,919,816 | 0 | 1 | 0 | false | 0 | 0 | You can add any path to the PYTHONPATH environment variable from within your behavior. However, this has bad side effects, like:
If you forget to remove the path from the environment right after importing your module, you won't know anymore where you are importing modules from, since there is only one Python context for the whole NAOqi and all the behaviors.
For the same reason (a single Python context), you'll need to restart NAOqi if you change the module you are trying to import. | 1 | 0 | 0 | 0 | Currently, I am doing a project about the Nao robot. I am having a problem with importing a Python class file into Choregraphe. Does anyone know how to do this?
Error message
[ERROR] behavior.box :init:8 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1271833616__root__RecordSound_3__RecSoundFile_4: ALProxy::ALProxy Can't find service: | How to import IBM Bluemix Watson speech-to-text in Choregraphe? | 0 | 0 | 1 | 0 | 0 | 272 |
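The cleanup the answer above warns about can be made automatic by dropping the extra path right after the import. A hedged sketch (the directory and module name are whatever your behavior ships; nothing here is NAOqi-specific):

```python
import sys

def import_from(path, module_name):
    """Import a module from an extra directory, then remove the path
    again so the single shared Python context is not polluted."""
    sys.path.insert(0, path)
    try:
        return __import__(module_name)
    finally:
        sys.path.remove(path)
```

Note the second caveat still applies: the module stays cached in sys.modules, so editing it requires a NAOqi restart (or an explicit reload).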
41,005,047 | 2016-12-06T21:15:00.000 | 6 | 0 | 1 | 0 | 1 | python,pycharm | 0 | 41,005,483 | 0 | 2 | 0 | false | 0 | 0 | Select the file in the Project or Project Files pane and then hit File>Save as... The save-as dialog that pops up has an option (checked by default) to "open copy in editor". Not exactly what you asked for but it's the easiest way I could find. | 2 | 9 | 0 | 0 | Just got started with pycharm and was wondering how can I simply copy a file in pycharm for editing purposes? For instance, I have a file opened, want to edit the code but want to make sure that I do not accidentally "over-save" the original file.
In other environments, I can simply right click the file, and copy it to a new file. I do not see a 'copy to new file' option in pycharm, but instead I do have to manually open a new file (File>New>Python..), and then manually copy all the code from the original file and paste it in the empty new file.
Am I missing something or is that not possible in pycharm? | Pycharm - how can I copy a file (tab) in pycharm (to a new tab)? | 1 | 1 | 1 | 0 | 0 | 2,801 |
41,005,047 | 2016-12-06T21:15:00.000 | 7 | 0 | 1 | 0 | 1 | python,pycharm | 0 | 41,005,579 | 0 | 2 | 0 | true | 0 | 0 | If I understood well, you are looking for to a something similar duplicate functionality.
So to do this, you can simply go to the project structure (the panel to the left) and copy the file in this way:
simply Ctrl + c
Right click on choosen file > Copy
After this, paste your file in the directory that you want in this way:
simply Ctrl + v
Right click on choosen directory > Paste
In this way you have the possibility to duplicate the choosen file with another name. | 2 | 9 | 0 | 0 | Just got started with pycharm and was wondering how can I simply copy a file in pycharm for editing purposes? For instance, I have a file opened, want to edit the code but want to make sure that I do not accidentally "over-save" the original file.
In other environments, I can simply right click the file, and copy it to a new file. I do not see a 'copy to new file' option in pycharm, but instead I do have to manually open a new file (File>New>Python..), and then manually copy all the code from the original file and paste it in the empty new file.
Am I missing something or is that not possible in pycharm? | Pycharm - how can I copy a file (tab) in pycharm (to a new tab)? | 1 | 1.2 | 1 | 0 | 0 | 2,801 |
41,022,923 | 2016-12-07T16:45:00.000 | 2 | 1 | 0 | 0 | 1 | python,pic,pyserial,ftdi | 1 | 41,058,265 | 0 | 1 | 0 | false | 1 | 0 | So I found that the round numbers (i.e. 100000, 200000, 250000) didn't work, but multiples of 115200 (i.e. 230400, 460800) do.
I tried to use 230400 at first, but the baud rates my microcontroller can produce near it are 235294 and 222222. 235294 yields an error of -2.1% and 222222 yields an error of 3.55%. I naturally picked the one with the lower error; however, it didn't work, and I didn't bother trying 222222 at first. For some reason 222222 works even though 235294 doesn't. So I don't actually have to use the 250000 baud rate I initially thought I'd need.
I still don't know why pySerial doesn't work with those baud rates when PuTTY does, so clearly my laptop can physically do it. Anyway, I'll know in future to try more standard baud rates, and, when using microcontrollers which can't produce the exact baud rate required, to try frequencies both above and below. | 1 | 3 | 0 | 0 | I'm trying to get pySerial to communicate with a microcontroller over an FTDI lead at a baud rate of 500,000. I know my microcontroller and FTDI lead can both handle it, as can my laptop itself, since I can send to a PuTTY terminal at that baud no problem. However, I don't get anything when I try to send stuff to my Python script with pySerial, although the same Python code works with a lower baud rate.
The pySerial documentation says:
"The parameter baudrate can be one of the standard values: 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 115200. These are well supported on all platforms.
Standard values above 115200, such as: 230400, 460800, 500000, 576000, 921600, 1000000, 1152000, 1500000, 2000000, 2500000, 3000000, 3500000, 4000000 also work on many platforms and devices."
So, I'm assuming why it's not working is because my system doesn't support it, but how do I check what values my system supports/is there anything I can do to make it work? I unfortunately do need to transmit at least 250,000 and at a nice round number like 250,000 or 500000 (to do with clock error on the microcontroller).
Thanks in advance for your help! | PySerial - Max baud rate for platform (windows) | 1 | 0.379949 | 1 | 0 | 0 | 3,411 |
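The error figures quoted in the answer can be reproduced with a few lines. A hedged sketch (the sign convention, achievable minus target, is my assumption; the answer quotes the same magnitudes with flipped signs):

```python
def baud_error_percent(target, achievable):
    """Percent deviation of an achievable baud rate from the target."""
    return (achievable - target) / target * 100.0

# The two rates the microcontroller can produce near 230400:
for achievable in (235294, 222222):
    print(achievable, round(baud_error_percent(230400, achievable), 2))
```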
41,031,326 | 2016-12-08T03:33:00.000 | 0 | 0 | 1 | 0 | 0 | python,database,file,cloud,storage | 0 | 41,031,479 | 0 | 1 | 0 | false | 0 | 0 | You could install gsutil and the boto library and use that. | 1 | 0 | 0 | 0 | I currently have a Python program which reads a local file (containing a pickled database object) and saves to that file when it's done. I'd like to branch out and use this program on multiple computers accessing the same database, but I don't want to worry about synchronizing the local database files with each other, so I've been considering cloud storage options. Does anyone know how I might store a single data file in the cloud and interact with it using Python?
I've considered something like Google Cloud Platform and similar services, but those seem to be more server-oriented whereas I just need to access a single file on my own machines. | Access a Cloud-stored File using Python 3? | 0 | 0 | 1 | 1 | 0 | 57 |
41,041,998 | 2016-12-08T14:37:00.000 | 2 | 1 | 1 | 0 | 0 | python,julia | 0 | 45,740,595 | 0 | 2 | 0 | false | 0 | 0 | In my experience, calling Julia using the Python pyjulia package is difficult and not a robust solution outside HelloWorld usage.
1) pyjulia is very neglected.
There is practically no documentation aside from the source code. For example, the only old tutorials I've found still use julia.run(), which was replaced by julia.eval(). pyjulia is not registered on PyPI, the Python Package Index. Many old issues have few or no responses. It is also buggy, particularly under heavy memory usage, and you can run into mysterious segfaults. Maybe garbage-collector conflicts...
2) You should limit pyjulia use to Julia functions that return simple types
Julia objects imported into Python using pyjulia are difficult to use. pyjulia doesn't import any type constructors. For example, pyjulia seems to convert complex Julia types into plain Python matrices.
3) If you can isolate the software modules and manage the I/O, you should consider the shell facility, particularly in Linux / Unix environment. | 1 | 1 | 0 | 0 | I wrote an optimization function in Julia 0.4 and I want to call it from Python. I'm storing the function in a .jl file in the same working directory as the Python code. The data to be transferred is numeric, and I think of using Numpy arrays and Julia arrays for calls. Is there a tutorial on how to make this work? | Calling a Julia 0.4 function from Python. How i can make it work? | 0 | 0.197375 | 1 | 0 | 0 | 569 |
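Point 3 (isolate the modules and manage the I/O yourself) can be sketched with a plain subprocess call piping JSON both ways. Hedged: the external command is a stand-in; in practice it would be something like ['julia', 'optimize.jl'] with a made-up script name, reading JSON on stdin and printing JSON on stdout.

```python
import json
import subprocess

def run_external(cmd, payload):
    """Send payload as JSON on stdin and parse JSON from stdout."""
    proc = subprocess.run(
        cmd,
        input=json.dumps(payload).encode(),
        stdout=subprocess.PIPE,
        check=True,
    )
    return json.loads(proc.stdout.decode())
```

This keeps Python and Julia in separate processes, which sidesteps the shared-memory segfaults mentioned above at the cost of serialization overhead per call.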
41,067,947 | 2016-12-09T19:59:00.000 | 1 | 0 | 1 | 0 | 0 | python,interpreter,pdb | 0 | 41,201,288 | 0 | 2 | 0 | false | 0 | 0 | I read some posts that say that pdb is implemented using sys.settrace.
If nothing else works I should be able to recreate the behavior I need using this.
Don't view this as a last resort. I think it's the best approach for what you want to accomplish. | 2 | 2 | 0 | 0 | I am trying to find a way that I can have a program step through Python code line by line and do something with the results of each line. In effect a debugger that could be controlled programmatically rather than manually. pdb would be exactly what I am looking for if it returned its output after each step as a string and I could then call pdb again to pickup where I left off. However, instead it outputs to stdout and I have to manually input "step" via the keyboard.
Things I have tried:
I am able to redirect pdb's stdout. I could redirect it to a second Python program which would then process it. However, I cannot figure out how to have the second Python program tell pdb to step.
Related to the previous one, if I could get pdb to step all the way through to the end (perhaps I could figure out something to spoof a keyboard repeatedly entering "step"?) and redirect the output to a file, I could then write another program that acted like it was stepping through the program when it was actually just reading the file line by line.
I could use exec to manually run lines of Python code. However, since I would be looking at one line at a time, I would need to manually detect and handle things like conditionals, loops, and function calls which quickly gets very complicated.
I read some posts that say that pdb is implemented using sys.settrace. If nothing else works I should be able to recreate the behavior I need using this.
Is there any established/straight forward way to implement the behavior that I am looking for? | How to programmatically execute/step through Python code line by line | 0 | 0.099668 | 1 | 0 | 0 | 1,492 |
41,067,947 | 2016-12-09T19:59:00.000 | 2 | 0 | 1 | 0 | 0 | python,interpreter,pdb | 0 | 41,069,302 | 0 | 2 | 0 | true | 0 | 0 | sys.settrace() is the fundamental building block for stepping through Python code. pdb is implemented entirely in Python, so you can just look at the module to see how it does things. It also has various public functions/methods for stepping under program control, read the library reference for your version of Python for details. | 2 | 2 | 0 | 0 | I am trying to find a way that I can have a program step through Python code line by line and do something with the results of each line. In effect a debugger that could be controlled programmatically rather than manually. pdb would be exactly what I am looking for if it returned its output after each step as a string and I could then call pdb again to pickup where I left off. However, instead it outputs to stdout and I have to manually input "step" via the keyboard.
Things I have tried:
I am able to redirect pdb's stdout. I could redirect it to a second Python program which would then process it. However, I cannot figure out how to have the second Python program tell pdb to step.
Related to the previous one, if I could get pdb to step all the way through to the end (perhaps I could figure out something to spoof a keyboard repeatedly entering "step"?) and redirect the output to a file, I could then write another program that acted like it was stepping through the program when it was actually just reading the file line by line.
I could use exec to manually run lines of Python code. However, since I would be looking at one line at a time, I would need to manually detect and handle things like conditionals, loops, and function calls which quickly gets very complicated.
I read some posts that say that pdb is implemented using sys.settrace. If nothing else works I should be able to recreate the behavior I need using this.
Is there any established/straight forward way to implement the behavior that I am looking for? | How to programmatically execute/step through Python code line by line | 0 | 1.2 | 1 | 0 | 0 | 1,492 |
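Both answers point at sys.settrace(); here is a hedged minimal sketch of a programmatic stepper built on it. It only records line numbers; a real tool would inspect frame locals or pause at each step.

```python
import sys

def trace_run(func, *args):
    """Run func, returning its result plus the line numbers executed."""
    steps = []

    def tracer(frame, event, arg):
        if event == "line":          # one event per "step"
            steps.append(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)           # always detach the tracer
    return result, steps
```

Because the trace function only fires for frames entered after it is installed, the caller's own frame is not traced, only the target function and anything it calls.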
41,082,675 | 2016-12-11T03:25:00.000 | 1 | 0 | 0 | 0 | 0 | python,pandas,dataframe | 0 | 41,082,749 | 0 | 1 | 0 | true | 0 | 0 | You can fix the resulting DataFrame using df.replace({'FieldName': {'ErrorError': ''}}) | 1 | 0 | 1 | 0 | I have a dataframe in which one of the rows is filled with the string "Error".
I am trying to add the rows of two different dataframes. However, since I have the string in one of the rows, it concatenates the two strings.
So I end up with a dataframe containing a row of "ErrorError". I would prefer leaving this row empty rather than concatenating the strings.
Any idea how to do it?
Thanks
kiran | Throw an exception while doing dataframe sum in Pandas | 0 | 1.2 | 1 | 0 | 0 | 49 |
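A hedged sketch of the same idea: coerce the "Error" markers to NaN before adding, so the strings can never concatenate into "ErrorError" (the column name is made up; requires pandas):

```python
import pandas as pd

def clean(df):
    """Turn non-numeric markers such as 'Error' into NaN, column-wise."""
    return df.apply(pd.to_numeric, errors="coerce")

df1 = pd.DataFrame({"val": [1.0, "Error", 3.0]})
df2 = pd.DataFrame({"val": [10.0, "Error", 30.0]})
total = clean(df1) + clean(df2)   # the 'Error' row stays NaN, i.e. empty
```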
41,091,357 | 2016-12-11T21:31:00.000 | 0 | 0 | 1 | 0 | 1 | python,flask,dependencies,virtualenv | 0 | 41,091,835 | 0 | 2 | 0 | false | 1 | 0 | If you are using PyCharm, just switch to Python 3 and there is a list of installed packages. Hit add (+) and look for flask.
You can switch to Python 3 via File -> Settings -> Project: ProjectName -> Project Interpreter | 1 | 0 | 0 | 0 | So, I know how to actually install Flask, via pip install Flask
But I'm running a virtualenv with python3.4. The problem is that Flask is in fact installed, but for python2.7, not python3.4.
I did run this command with the virtualenv activated via source bin/activate, but it seems to install it for python2.7 even though a virtualenv running python3.4 is activated.
How do I fix this? I'm pulling my hair out over this.
Thanks | Install Flask in Python 3.4+ NOT for python2.7 | 0 | 0 | 1 | 0 | 0 | 1,368 |
41,094,926 | 2016-12-12T05:56:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-admin | 1 | 47,960,936 | 0 | 3 | 0 | false | 1 | 0 | After ./manage.py sqlmigrate admin 0001, please run python manage.py migrate. | 2 | 1 | 0 | 0 | I get this error
"ProgrammingError at /admin/
relation "django_admin_log" does not exist
LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..."
django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app.
when I try './manage.py sqlmigrate admin 0001'
I get
"
BEGIN;
-- Create model LogEntry
CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL);
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id");
CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id");
COMMIT;"
But I still get the same error. I use PostgreSQL, if anyone cares. | I accidentally deleted django_admin_log and now i can not use the django admin | 1 | 0 | 1 | 1 | 0 | 2,274
41,094,926 | 2016-12-12T05:56:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,django-admin | 1 | 60,692,503 | 0 | 3 | 0 | false | 0 | 0 | I experienced the same issue. The best way is to copy the CREATE TABLE statement, log in to your database with ./manage.py dbshell, and paste the content there without the last line (COMMIT;). That will manually create the table for you and solve the problem. | 2 | 1 | 0 | 0 | I get this error
"ProgrammingError at /admin/
relation "django_admin_log" does not exist
LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..."
django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app.
when I try './manage.py sqlmigrate admin 0001'
I get
"
BEGIN;
-- Create model LogEntry
CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL);
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id");
CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id");
COMMIT;"
But I still get the same error. I use PostgreSQL, if anyone cares. | I accidentally deleted django_admin_log and now i can not use the django admin | 1 | 0.132549 | 1 | 1 | 0 | 2,274
41,107,643 | 2016-12-12T18:57:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,outlook | 0 | 41,108,481 | 0 | 1 | 0 | false | 0 | 0 | SMTP is only for sending. To receive (read) emails, you will need to use other protocols, such as POP3, IMAP4, etc. | 1 | 0 | 0 | 0 | I know how to send email through Outlook/Gmail using the Python SMTP library. However, I was wondering if it was possible to receive replys from those automated emails sent from Python.
For example, if I sent an automated email from Python (Outlook/Gmail) and I wanted the user to be able to reply "ok" or "quit" to the automated email to either continue the script or kick off another job or something, how would I go about doing that in Python?
Thanks | Python SMTP Send Email and Receive Reply | 1 | 0 | 1 | 0 | 1 | 308 |
41,108,551 | 2016-12-12T19:57:00.000 | 1 | 0 | 0 | 0 | 0 | python,python-requests,werkzeug | 0 | 59,821,311 | 0 | 3 | 0 | false | 0 | 0 | A PyPI package now exists for this, so you can use pip install requests-flask-adapter. | 1 | 11 | 0 | 0 | I'm trying to write usable tests for my package, but using Flask.test_client is so different from the requests API that I find it hard to use.
I have tried to make requests.adapters.HTTPAdapter wrap the response, but it looks like werkzeug doesn't use httplib (or urllib, for that matter) to build its own Response object.
Any idea how it can be done? A reference to existing code would be best (googling werkzeug + requests doesn't give any useful results).
Many thanks!! | requests-like wrapper for flask's test_client | 0 | 0.066568 | 1 | 0 | 1 | 1,360 |
41,124,843 | 2016-12-13T15:32:00.000 | 0 | 0 | 0 | 0 | 0 | python,authentication,flask,flask-security | 0 | 59,997,976 | 0 | 1 | 0 | false | 1 | 0 | There are several ways, off the top of my head, that you could approach this, none of them striking a nice balance between simplicity and effectiveness:
One way could be to add a last_seen field to your User. Pick some arbitrary number(s) that could serve as a heuristic to determine whether someone is "active". Any sufficiently long gap in activity could trigger a reset of the active_login_count. This obviously has many apparent loopholes, the biggest I see at the moment being, users could simply coordinate logins and potentially rack up an unlimited number of active sessions without your application being any the wiser. It's a shame humans in general tend to use similar "logical" mechanisms to run their entire lives; but I digress...
You could make this approach more sophisticated by trying to track the user's active ip addresses. Add an active_ips field and populate a list of (n) ips, perhaps with some browser information etc to try and fingerprint users' devices and manage it that way.
Another way is to use an external service, such as a Redis instance or even a database. Create up to (n) session ids that are passed around in the http headers and which are checked every time the api is hit. No active session id, or if the next session id would constitute a breach of contract, no access to the app. Then you simply clear out those session ids at regular intervals to keep them fresh.
Hopefully that gives you some useful ideas. | 1 | 3 | 0 | 0 | I'm trying to write a basic Flask app that limits the number of active logins a user can have, a la Netflix. I'm using the following strategy for now:
Using Flask_Security
store a active_login_count field for my User class.
every time a successful login request is completed, the app increases the active_login_count by 1. If doing so makes the count greater than the limit, it calls logout_user() instead.
This is a bad solution, because if the user loses her session (closed the incognito mode without logging out), the app hasn't been able to decrement her login count, and so, future logins are blocked.
One solution is to store the sessions on the server, but I don't know how I would go about authenticating valid sessions. Flask_Sessions is something I'm looking into, but I have no idea how to limit the number of active sessions.
As per my understanding, in the default configuration, Flask generates new session cookies on every request to prevent CSRF attacks. How would I go about doing that? | How to limit number of active logins in Flask | 0 | 0 | 1 | 0 | 0 | 1,075 |
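One server-side shape for the "store sessions on the server" idea, hedged, with a plain dict standing in for Redis or a database: keep at most N session ids per user and evict the oldest, Netflix-style, so a lost incognito session can never lock anyone out. The limit and the data layout are made up.

```python
from collections import defaultdict, deque

MAX_ACTIVE = 2                      # hypothetical per-user limit

_sessions = defaultdict(deque)      # user -> active session ids

def register_login(user, session_id):
    """Record a new session; evict the oldest one past the limit."""
    q = _sessions[user]
    q.append(session_id)
    if len(q) > MAX_ACTIVE:
        q.popleft()                 # oldest login is kicked out

def is_active(user, session_id):
    return session_id in _sessions[user]
```

On each request, the app would check is_active() for the session id carried in the cookie; evicted ids simply stop validating, so no decrement bookkeeping is needed on logout.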
41,126,100 | 2016-12-13T16:35:00.000 | 0 | 0 | 1 | 0 | 0 | python,class | 0 | 41,126,396 | 0 | 3 | 0 | false | 0 | 0 | Try to do it with self.data = None, i.e. make it an instance variable, and have the other methods read it whenever they need the data. Writing your own bookkeeping will only make this more complex; try to solve the issue with built-in language features. | 1 | 0 | 0 | 0 | I would like some advice on how to best design a class and its instance variables. I initialize the class with self.name. However, the main purpose of this class is to retrieve data from an API passing self.name as a parameter, and then parse the data accordingly. I have a class method called fetch_data(self.name) that makes the API request and returns ALL data. I want to store this data in a class instance variable, and then call other methods on that variable. For example, get_emotions(json), get_personality_traits(json), and get_handle(json) all take the same dictionary as a parameter, assign it to their own local variables, and then manipulate it accordingly.
I know I can make fetch_data(self.name) return data, and then call fetch_data(self.name) within the other methods, assign the return value to a local variable, and manipulate that. The problem is then I will need to call the API 5 times rather than 1, which I can't do for time and money reasons.
So, how do I make the result of fetch_data(self.name) global so that all methods within the class have access to the main data object? I know this is traditionally done in an instance variable, but in this scenario I can't initialize the data since I don't have it until after I call fetch_data().
Thank you in advance! | How do I best store API data into a Python class instance variable? | 1 | 0 | 1 | 0 | 0 | 1,302 |
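The usual shape of this is lazy caching: fetch once on first access, store the result on the instance, and let every method read the cached copy. A hedged sketch; fake_fetch stands in for the real API request, and the field names are made up.

```python
CALLS = {"count": 0}

def fake_fetch(name):
    """Stand-in for the real (expensive) API request."""
    CALLS["count"] += 1
    return {"handle": "@" + name, "emotions": ["joy"]}

class Profile:
    def __init__(self, name):
        self.name = name
        self._data = None            # filled on first use

    @property
    def data(self):
        if self._data is None:       # the API is hit exactly once
            self._data = fake_fetch(self.name)
        return self._data

    def get_handle(self):
        return self.data["handle"]

    def get_emotions(self):
        return self.data["emotions"]
```

Every method goes through the data property, so no matter how many of them run, only one API call is ever made per instance.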
41,155,765 | 2016-12-15T03:05:00.000 | 1 | 0 | 1 | 0 | 0 | python,arrays,list,loops,iteration | 0 | 41,155,912 | 0 | 4 | 0 | false | 0 | 0 | You can just subtract the number rolled from the distance between the player and the end of the board.
If the difference is less than 0, send the player back to the start of the board and add the absolute value of the difference to the player's position. | 1 | 2 | 0 | 0 | I'm currently in the process of learning how to use Python, and I'm trying to build a Monopoly simulator (for starters, I just want to simulate how one player moves about on the board).
How do I iteratively go through the list of board positions, e.g. range(0, 39)? So, if the player is currently in position 35 and rolls a 6, he ends up in position 1.
Hopefully you're able to help! All the best :) | Iteratively searching through a looped list | 0 | 0.049958 | 1 | 0 | 0 | 49 |
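A modulo-based alternative to the subtraction bookkeeping in the answer above; the wrap-around falls out of % for free (assuming a 40-square board with positions 0-39):

```python
BOARD_SIZE = 40

def move(position, roll):
    """Advance a player, wrapping past the last square: 35 + 6 -> 1."""
    return (position + roll) % BOARD_SIZE
```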
41,166,406 | 2016-12-15T14:19:00.000 | 6 | 1 | 0 | 0 | 0 | python,android-studio | 1 | 41,166,600 | 0 | 1 | 0 | true | 1 | 1 | If you only need to run the scripts and not to edit them, the easiest way to do so is to configure an external tool through Settings | Tools | External Tools. This doesn't require the Python plugin or any Python-specific configuration; you can simply specify the command line to execute. External tools appear as items in the Tools menu, and you can also assign keyboard shortcuts to them using Settings | Keymap. | 1 | 9 | 0 | 0 | I am running Android Studio 2.2.3
I need to run Python scripts during my development to process some data files. The final apk does not need to run Python in the device.
At the moment I run the scripts from a terminal or from PyDev for Eclipse, but I was looking for a way of doing it from Android Studio.
There seems to be a way to do this, as when I right-click on a .py file and select the 'Run' option, an 'Edit configuration' dialog opens where I can configure several options. The problem is that I cannot specify the Python interpreter, having to select it from a combo box of already configured Python SDKs. While there is no interpreter selected, there is an error message stating "Error: Please select a module with a valid Python SDK".
I managed to create Java modules for my project, but not Python ones (I do have the Python Community Edition plugin installed). Does anybody know how to achieve this?
TIA. | Running Python scripts inside Android Studio | 0 | 1.2 | 1 | 0 | 0 | 25,025 |
41,177,692 | 2016-12-16T05:17:00.000 | 1 | 0 | 0 | 0 | 1 | linux,django,python-3.x,ubuntu,sqlite | 0 | 41,196,939 | 0 | 1 | 0 | false | 1 | 0 | I figured out that this error was caused by me changing my python path to 3.5 from the default of 2.7. | 1 | 0 | 0 | 0 | So the issue is that apparently Django uses the sqlite3 that is included with python, I have sqlite3 on my computer and it works fine on its own. I have tried many things to fix this and have not found a solution yet.
Please let me know how I can fix this issue so that I can use Django on my computer.
:~$ python
Python 3.5.2 (default, Nov 6 2016, 14:10:16)
[GCC 6.2.0 20161005] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.5/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named '_sqlite3'
>>> exit() | How do I configure the sqlite3 module to work with Django 1.10? | 0 | 0.197375 | 1 | 1 | 0 | 197 |
41,179,622 | 2016-12-16T07:52:00.000 | 0 | 0 | 0 | 0 | 0 | python,mongodb,scrapy | 0 | 41,194,752 | 0 | 1 | 0 | false | 1 | 0 | I've got an answer.
For example:
pass -s collection_name=abc to the scrapy crawl command, then read the parameter in pipelines.py with param = settings.get('collection_name').
This is also answered elsewhere on Stack Overflow, but I can't remember which ticket.
Hope this helps someone facing the same problem. | 1 | 0 | 0 | 0 | The spider is to crawl info on a certain B2B website, and I want it to be a webserver, where a user submits a URL and then the spider starts crawling.
The url seems like: apple.b2bxxx.com, which is a minisite on a B2B website, where all the products are listed. The "apple" might be different because different companies use different names for their minisites, and duplication is not allowed.
On the backend, it's MongoDB to store the data scraped.
What I have done so far: I can collect info from the given url, but all data are stored in the same db.collection.
I know I can get parameters using "-a" for running scrapy, but how should I use it?
Should I change the pipelines.py or the spider python file?
Any suggestions? | Use parameter as collection name in a scrapy project | 0 | 0 | 1 | 0 | 0 | 79
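The -s pattern from the answer above can be sketched without pulling in Scrapy itself. Everything here is an illustrative stand-in: MongoPipeline and the key name collection_name are hypothetical, and a plain dict plays the role of Scrapy's Settings object (same .get(key, default) interface):

```python
# Sketch of reading a per-run collection name passed on the command line
# as:  scrapy crawl myspider -s collection_name=abc

class MongoPipeline(object):
    def __init__(self, collection_name):
        self.collection_name = collection_name

    @classmethod
    def from_settings(cls, settings):
        # In a real project this would be from_crawler(cls, crawler),
        # reading crawler.settings instead of a bare dict.
        return cls(settings.get("collection_name", "default_collection"))

    def process_item(self, item, spider):
        # Real version would do: self.db[self.collection_name].insert(dict(item))
        return item

settings = {"collection_name": "abc"}   # what `-s collection_name=abc` produces
pipeline = MongoPipeline.from_settings(settings)
print(pipeline.collection_name)         # abc
```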
41,198,719 | 2016-12-17T12:47:00.000 | 2 | 0 | 0 | 0 | 0 | python,pandas,dataframe | 0 | 41,200,368 | 0 | 3 | 0 | false | 0 | 0 | If the rest of the data in your columns is numeric then you should use pd.to_numeric(df, errors='coerce') | 1 | 0 | 1 | 0 | I am using df= df.replace('No data', np.nan) on a csv file containing ‘No data’ instead of blank/null entries where there is no numeric data. Using the head() method I see that the replace method does replace the ‘No data’ entries with NaN. When I use df.info() it says that the datatypes for each of the series is an object.
When I open the csv file in Excel and manually edit the data using find and replace to change ‘No data’ to blank/null entries, although the dataframes look exactly the same when I use df.head(), when I use df.info() it says that the datatypes are floats.
I was wondering why this was and how can I make it so that the datatypes for my series are floats, without having to manually edit the csv files. | Pandas replace method and object datatypes | 1 | 0.132549 | 1 | 0 | 0 | 1,782 |
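A minimal sketch of the two-step fix suggested in the answer above, on illustrative data: replace() swaps the sentinel string for NaN but leaves the column dtype as object, so an explicit pd.to_numeric() is needed afterwards:

```python
import numpy as np
import pandas as pd

# Illustrative frame standing in for the CSV column.
df = pd.DataFrame({"a": ["1.5", "No data", "3.0"]})
df = df.replace("No data", np.nan)
print(df["a"].dtype)   # object: replace() does not re-infer dtypes

df["a"] = pd.to_numeric(df["a"], errors="coerce")
print(df["a"].dtype)   # float64
```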
41,200,785 | 2016-12-17T16:40:00.000 | -1 | 0 | 1 | 0 | 0 | python,nlp,nltk,spacy | 0 | 44,993,090 | 0 | 2 | 0 | false | 0 | 0 | Go through the spacy 2.0 nightly build. It should have the solution you're looking for. | 1 | 2 | 0 | 0 | Using Python Spacy, how to extract an entity from a simple passive voice sentence? In the following sentence, my intention is to extract "John" from the sentence both as nsubjpass and as .ent_.
sentence = "John was accused of crimes by David" | Extract entities from Simple passive voice sentence by Python Spacy | 0 | -0.099668 | 1 | 0 | 0 | 1,693 |
41,221,779 | 2016-12-19T11:39:00.000 | -1 | 0 | 0 | 0 | 0 | python,pelican | 0 | 41,222,267 | 0 | 4 | 0 | false | 0 | 0 | This is probably not the answer you are looking for, but if you are already customizing the CSS, think about using CSS to hide the section. | 2 | 1 | 0 | 0 | When generating content using Pelican, everything is Ok except that I see in the footer "Proudly powered by Pelican ..."
I want to get rid of it. I know I can remove it from the generated files manually, but that is tedious.
Is there a way to prevent the generation of the above phrase by asking Pelican to do that for me? Some magic Pelican command or settings, maybe? | how to automatically remove "Powered by ..." in Pelican CMS? | 0 | -0.049958 | 1 | 0 | 0 | 1,230 |
41,221,779 | 2016-12-19T11:39:00.000 | 3 | 0 | 0 | 0 | 0 | python,pelican | 0 | 47,449,423 | 0 | 4 | 0 | false | 0 | 0 | In your theme template, there will be a line like,
{% extends "!simple/base.html" %}
This base.html is used as the foundation for creating the theme. This file is available in: %PYTHON%\Lib\site-packages\pelican\themes\simple\templates
You can edit this file to remove the "Powered By.." | 2 | 1 | 0 | 0 | When generating content using Pelican, everything is Ok except that I see in the footer "Proudly powered by Pelican ..."
I want to get rid of it. I know I can remove it from the generated files manually, but that is tedious.
Is there a way to prevent the generation of the above phrase by asking Pelican to do that for me? Some magic Pelican command or settings, maybe? | how to automatically remove "Powered by ..." in Pelican CMS? | 0 | 0.148885 | 1 | 0 | 0 | 1,230 |
41,252,907 | 2016-12-20T23:33:00.000 | 0 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 0 | 43,180,932 | 0 | 1 | 1 | false | 0 | 0 | As far as I know, openpyxl does not allow you to access only one cell, or a limited number of cells for that matter. In order to access any information in a given worksheet, openpyxl will create one in memory. This is the reason why you will be unable to add a Sheet without opening the entire document in memory and overwriting it at the end. | 1 | 0 | 0 | 0 | I want to generate excel spreadsheets with python. The first few tabs are exactly the same and all refer to the last page, so how can I insert the last page with openpyxl? Because the first few tabs are too complex, load_workbook always fails; is there any other way to insert tabs without loading? | How to use openpyxl to insert one sheet to a template? | 1 | 0 | 1 | 1 | 0 | 198
41,254,715 | 2016-12-21T03:40:00.000 | 1 | 0 | 1 | 0 | 0 | python,rabbitmq,mqtt,bottle,hivemq | 0 | 41,259,154 | 0 | 1 | 0 | true | 0 | 0 | I'm not familiar with bottle, but from a quick look at the docs it doesn't look like there is any other way to start its event loop apart from the run() function.
Paho provides a loop_start() which will kick off its own background thread to run the MQTT network event loop.
Given there looks to be no way to run the bottle loop manually I would suggest calling loop_start() before run() and letting the app run on 2 separate threads as there is no way to combine them and you probably wouldn't want to anyway.
The only thing to be careful of will be if MQTT subscriptions update data that the REST service is sending out, but as long as you are not streaming large volumes of data that is unlikely to be an issue. | 1 | 1 | 0 | 0 | I am a noob to web and mqtt programming. I am working on a python application that uses mqtt (via a hivemq or rabbitmq broker) and also needs to implement an http rest api for clients.
I realized the python bottle framework makes it pretty easy to provide a simple http server; however, both bottle and mqtt have their own event loop. How do I combine these 2 event loops? I want to have a single threaded app to avoid complexity. | Using http and mqtt together in a single threaded python app | 0 | 1.2 | 1 | 0 | 1 | 544
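The two-loop arrangement the answer describes can be sketched with stand-ins; nothing below imports paho or bottle, and the stubbed network_loop merely marks that the background thread (which client.loop_start() would normally create) makes progress while the main thread blocks:

```python
import threading
import time

shared = {"messages": 0}          # state the MQTT callbacks would update

def network_loop(stop_event):
    # stand-in for paho's internal MQTT network loop
    while not stop_event.is_set():
        shared["messages"] += 1
        time.sleep(0.01)

stop_event = threading.Event()
worker = threading.Thread(target=network_loop, args=(stop_event,), daemon=True)
worker.start()                    # analogous to mqtt_client.loop_start()

time.sleep(0.1)                   # stands in for bottle's blocking run()

stop_event.set()
worker.join()
print(shared["messages"] > 0)     # True: both "loops" made progress
```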
41,254,869 | 2016-12-21T04:00:00.000 | 6 | 0 | 0 | 0 | 1 | python,django | 0 | 41,256,129 | 0 | 2 | 0 | true | 1 | 0 | Better to reset the console frequently.
This is not a big problem, but it occurs when terminals have not been reset for long durations. | 1 | 9 | 0 | 0 | I want to change my models in django
when I execute python manage.py makemigrations ,it asks a question:
Did you rename the demoapp.Myblog model to Blog? [y/N] y^M^M^M
When I input y and press Enter, it adds ^M to the line.
I've looked around, but apparently I've got no choices.
can anybody tell me how to fix it? | django press Enter and it show ^M | 0 | 1.2 | 1 | 0 | 0 | 786 |
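For what it's worth, the ^M being echoed is a literal carriage return (\r); running reset (or stty sane) in the affected terminal clears it. A small illustration of inspecting and stripping that character (these commands do not touch terminal state):

```shell
# show the carriage return hiding in the input (od prints it as \r)
printf 'y\r\n' | od -c

# strip carriage returns before they reach a program
printf 'y\r\n' | tr -d '\r'
```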
41,263,383 | 2016-12-21T12:59:00.000 | 0 | 0 | 0 | 0 | 0 | python,sqlite,cursor | 0 | 41,485,462 | 0 | 1 | 0 | false | 0 | 0 | The close() method allows you to close a cursor object before it is garbage collected.
The connection's execute() method is exactly the same as conn.cursor().execute(...), so the return value is the only reference to the temporary cursor object. When you just ignore it, CPython will garbage-collect the object immediately (other Python implementations might work differently). | 1 | 1 | 0 | 0 | Is closing a cursor needed when the shortcut conn.execute is used in place of an explicitly named cursor in SQLite? If so, how is this done? Also, is closing a cursor only needed for SELECT, when a recordset is returned, or is it also needed for UPDATE, etc.? | How does closing SQLite cursor apply when conn.execute is used in place of named cursor | 0 | 0 | 1 | 1 | 0 | 127
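The point about conn.execute() returning the temporary cursor can be seen directly with the standard library (the table here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")   # shortcut: makes and returns a cursor
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# The shortcut's return value IS a cursor, so it can be closed explicitly...
cur = conn.execute("SELECT x FROM t")
rows = cur.fetchall()
cur.close()   # ...though an unreferenced cursor would be garbage-collected anyway

print(rows)   # [(1,)]
conn.close()
```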
41,266,569 | 2016-12-21T15:38:00.000 | 1 | 0 | 1 | 0 | 0 | python-2.7,web2py | 0 | 41,273,813 | 0 | 1 | 0 | true | 0 | 0 | As you seem to conclude, there isn't much reason to be sending requests back and forth to the server given that the server isn't generating any new data that isn't already held in the browser. Just do all the filtering directly in the browser.
If you did need to do some manipulation on the server, though, don't necessarily assume it would be more efficient to search/filter a large dataset in Python rather than querying the database. You should do some testing to figure out which is more efficient (and whether any efficiency gains are worth the hassle of adding complexity to your code). | 1 | 0 | 0 | 0 | I researched up and down and I'm not seeing any answers that I'm quite understanding so I thought to post my own question.
I'm building a web application (specifically in web2py, but that shouldn't matter I don't believe) on Python 2.7 to be hosted on Windows.
I have a list of about 2000 items in a database table.
The user will be opening the application which will initially select all 2000 items into Python and return the list to the user's browser. After that the user will be filtering the list based on one-to-many values of one-to-many attributes of the items.
I'm wanting Python to hold the unadulterated list of 2000 items in-memory between the user's changes of filtering options.
Every time the user changes their filter options,
trip the change back to Python,
apply the filter to the in-memory list and
return the subset to the user's browser.
This is to avoid hitting the database with every change of filter options. And to avoid passing the list in session over and over.
Most of this I'm just fine with. What I'm seeking your advice on is how to get Python to hold the list in-memory. In C# you would just make it a static object.
How do you do a static (or whatever other scheme that applies) in Python?
Thanks for your remarks.
While proofreading this I see I'm still probably passing at least big portions of the list back and forth anyway so I will probably manage the whole list in the browser.
But I'd still like to hear your suggestions. Maybe something you say will help. | Python 2.7 list of dictionaries in memory between page trips | 0 | 1.2 | 1 | 0 | 0 | 35
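For completeness, the closest Python analogue to the C# "static object" mentioned in the question is a module-level variable: a module is executed once per process, so anything bound at module level persists between requests served by that process. A hypothetical sketch, where load_items_from_db stands in for the real query:

```python
_ITEMS_CACHE = None   # module-level: the Python analogue of a static member

def load_items_from_db():
    # stand-in for the expensive database query
    return [{"id": i, "name": "item %d" % i} for i in range(2000)]

def get_items():
    global _ITEMS_CACHE
    if _ITEMS_CACHE is None:             # first call pays the loading cost...
        _ITEMS_CACHE = load_items_from_db()
    return _ITEMS_CACHE                  # ...later calls reuse the same list

first = get_items()
second = get_items()
print(first is second)   # True: one shared list per process
```

Note that a multi-process server keeps one such cache per worker process, and mutating the cached list from request code affects every later request in that process.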
41,310,316 | 2016-12-24T04:26:00.000 | 1 | 0 | 1 | 0 | 0 | python,date,nlp | 1 | 41,347,322 | 0 | 1 | 0 | true | 0 | 0 | There is! (Python is amazing)
dateutil.parser.parse("today is 21 jan 2016", fuzzy=True) | 1 | 1 | 0 | 0 | when I use this code for extracting the date
dateutil.parser.parse('today is 21 jan 2016')
It throws an error -> ValueError: unknown string format
is there any way to extract dates and time from a sentence??? | how to find unstructured date and time from a sentence in python? | 1 | 1.2 | 1 | 0 | 0 | 1,640 |
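Assuming python-dateutil is installed, the fuzzy=True flag from the answer tells the parser to skip tokens it cannot interpret, which is what lets it pull a date out of a whole sentence:

```python
from dateutil import parser

# fuzzy=True ignores the words it cannot parse ("today", "is")
dt = parser.parse("today is 21 jan 2016", fuzzy=True)
print(dt)   # 2016-01-21 00:00:00
```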
41,346,055 | 2016-12-27T13:25:00.000 | 0 | 0 | 0 | 0 | 1 | python,machine-learning,scikit-learn,logistic-regression | 0 | 66,783,030 | 0 | 1 | 0 | false | 0 | 0 | You can use the warm_start option (with solver not liblinear), and manually set coef_ and intercept_ prior to fitting.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. | 1 | 4 | 1 | 0 | I am using scikit-learn's linear_model.LogisticRegression to perform multinomial logistic regression. I would like to initialize the solver's seed value, i.e. I want to give the solver its initial guess as the coefficients' values.
Does anyone know how to do that? I have looked online and sifted through the code too, but haven't found an answer.
Thanks! | Feeding a seed value to solver in Python Logistic Regression | 0 | 0 | 1 | 0 | 0 | 493 |
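A sketch of the warm_start route from the answer, on synthetic data (the dataset and solver choice here are illustrative, not from the original question):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# warm_start=True makes each fit() start from the current coef_/intercept_
# instead of re-initialising them (not supported by the liblinear solver).
clf = LogisticRegression(warm_start=True, solver="lbfgs", max_iter=200)
clf.fit(X, y)

# Manually seed the solver's starting point, then refit from it:
clf.coef_ = np.zeros_like(clf.coef_)
clf.intercept_ = np.zeros_like(clf.intercept_)
clf.fit(X, y)

print(clf.coef_.shape)   # (1, 4) for binary classification
```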
41,347,484 | 2016-12-27T15:02:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 41,347,592 | 0 | 4 | 0 | false | 0 | 0 | the hard way,
get the ascii equivalent of each letter
generate a random letter between the range of highest and lowest ascii value
join the random characters together | 1 | 0 | 0 | 0 | How to shuffle two inputted strings, and mix them together to jumble the two strings up.
For example "hello" and "world" shuffle together to become "wherd llohe" | Anybody know how to shuffle two strings together? | 0 | 0 | 1 | 0 | 0 | 1,312 |
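One literal reading of "shuffle two strings together" pools the characters of both strings and shuffles them; the seed parameter below exists only to make the example reproducible:

```python
import random

def jumble(s1, s2, seed=None):
    chars = list(s1 + s2)                # pool every character from both strings
    random.Random(seed).shuffle(chars)   # in-place Fisher-Yates shuffle
    return "".join(chars)

result = jumble("hello", "world", seed=42)
print(sorted(result) == sorted("helloworld"))  # True: same letters, new order
```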
41,364,326 | 2016-12-28T14:44:00.000 | 1 | 0 | 1 | 0 | 0 | python,initialization,delay,jupyter-notebook | 0 | 41,364,412 | 1 | 1 | 0 | false | 0 | 0 | Visual Studio Code + Python extension works fine (both Windows and Mac, not sure about Linux). Very fast and lightweight, Git integration, debugging refactorings, etc.
Also there is an IDE called Spyder that is more Python-specific. Also works fine but is more heavy-weight. | 1 | 0 | 0 | 0 | Usually the main reason I'm using jupyter notebook with python is the possibility to initialize once (and only once) objects (or generally "data") that tend to have long (lets say more than 30 seconds) loading times. When my work is iterative, i.e. I run minimally changed version of some algorithm multiple times, the accumulated cost of repeated initialization can get large at end of a day.
I'm seeking an alternative approach (allowing to avoid the cost of repeated initialization without using a notebook) for the following reasons:
No "out of the box" version control when using notebook.
Occasional problems of "I forgot to rename variable in a single place". Everything keeps working OK until the notebook is restarted.
Usually I want to have usable python module at the end anyway.
Somehow when using a notebook I tend to get code that is far from "clean" (I guess this is more a self-discipline problem...).
The ideal workflow should allow performing the whole development inside an IDE (e.g. PyCharm; BTW Linux is the only option). Any ideas?
I'm thinking of implementing a simple (local) execution server that keeps the problematic objects pre-initialized as global variables and runs the code on demand (that uses those globals instead of performing initialization) by spawning a new process each time (this way those objects are protected from modification, at the same time thanks to those variables being global there is no pickle/unpickle penalty when spawning new process).
But before I start implementing this - maybe there is some working solution or workflow already known? | Alternative workflow to using jupyter notebook (aka how to avoid repetitive initialization delay)? | 0 | 0.197375 | 1 | 0 | 0 | 849 |
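A lighter-weight cousin of the proposed execution server, for what it's worth: keep the expensive objects alive in one long-lived interpreter and importlib.reload() only the module being edited. The throwaway algo module below is purely illustrative:

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # make reload() always re-read the source

EXPENSIVE_DATA = list(range(100000))    # stands in for the slow-to-load objects

workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
module_path = os.path.join(workdir, "algo.py")

with open(module_path, "w") as f:
    f.write("def run(data):\n    return sum(data[:10])\n")
import algo
first = algo.run(EXPENSIVE_DATA)        # 45; the data was loaded only once

with open(module_path, "w") as f:       # "edit" the algorithm...
    f.write("def run(data):\n    return sum(data[:5])\n")
algo = importlib.reload(algo)           # ...and re-run it against the same data
second = algo.run(EXPENSIVE_DATA)       # 10

print(first, second)
```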
41,381,705 | 2016-12-29T14:32:00.000 | 0 | 0 | 0 | 1 | 0 | python,flask,undefined,global,mod-wsgi | 1 | 41,394,185 | 0 | 1 | 0 | false | 1 | 0 | No main function call with mod_wsgi was the right answer. I did not import my required modules in the wsgi file,
but at the top of the flask app. | 1 | 0 | 0 | 0 | I have changed my application running with flask and python2.7 from a standalone solution to flask with apache and mod_wsgi.
My Flask app (app.py) includes some classes which are in the directory below my app dir (../).
Here is my app.wsgi:
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.stdout = sys.stderr
project_home = '/opt/appdir/Application/myapp'
project_web = '/opt/appdir/Application/myapp/web'
if project_home not in sys.path:
sys.path = [project_home] + sys.path
if project_web not in sys.path:
sys.path = [project_web] + sys.path
from app import app
application = app
Before my configuration to mod_wsgi my main call in the app.py looks like that:
# Main
if __name__ == '__main__' :
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
from logger import Logger
from main import Main
from configReader import ConfigReader
print "Calling flask"
from threadhandler import ThreadHandler
ca = ConfigReader()
app.run(host="0.0.0.0", threaded=True)
I was perfectly able to load my classes in the directory below.
After running the app with mod_wsgi I get the following error:
global name \'Main\' is not defined
So how do I have to change my app so that this here would work:
@app.route("/")
def test():
main = Main("test")
return main.responseMessage() | Flask with mod_wsgi - Cannot call my modules | 0 | 0 | 1 | 0 | 0 | 1,008 |
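The root cause behind the accepted answer can be demonstrated without Apache: mod_wsgi imports app.py as a module, so __name__ is never '__main__' and imports placed inside that guard simply never run. The app_demo file below is a hypothetical miniature of the situation:

```python
import os
import runpy
import sys
import tempfile

source = (
    "loaded = []\n"
    "if __name__ == '__main__':\n"
    "    loaded.append('Main')   # stands in for 'from main import Main'\n"
)
workdir = tempfile.mkdtemp()
module_path = os.path.join(workdir, "app_demo.py")
with open(module_path, "w") as f:
    f.write(source)

sys.path.insert(0, workdir)
import app_demo                           # what mod_wsgi effectively does
print(app_demo.loaded)                    # [] -> 'Main' was never imported

namespace = runpy.run_path(module_path, run_name="__main__")  # direct execution
print(namespace["loaded"])                # ['Main']
```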
41,409,731 | 2016-12-31T15:43:00.000 | 2 | 0 | 0 | 0 | 0 | python,plugins,gimp,python-fu | 0 | 41,410,753 | 0 | 1 | 0 | true | 0 | 1 | Either you build a full GUI with PyGTK (or perhaps tkinter) or you find another way. Typically for this if you stick to the auto-generated dialogs you have the choice between:
a somewhat clumsy dialog that asks for both parameters and will ignore one or the other depending on the image format,
two menu entries for two different dialogs, one for PNG and one for JPG.
On the other hand, I have always used compression level 9 in my PNGs (AFAIK the only benefit of other levels is CPU time, but this is moot on modern machines), so your dialog could only ask for the JPEG quality, which would make it less clumsy.
However... JPEG quality isn't all there is to it and there are actually many options (chroma sub-sampling being IMHO at least as important as quality), and to satisfy all needs you could end up with a rather complex dialog. So you could either:
Just save with the current user's default settings (gimp_file_save())
Get these settings from some .ini file (they are less likely to change than other parameters of your script)
Not save the image and let the user Save/Export to his/her liking (if this isn't a batch processing script) | 1 | 2 | 0 | 0 | Situation:
My gimp python plug-in shows the user a drop down box with two options [".jpg", ".png"].
Question:
How to show a second input window with conditional content based on first input?
.jpg --> "Quality" range slider [0 - 100]
.png --> "Compression" range slider [0 - 9]
In different words:
How to trigger a (registered) plug-in WITH user-input-window from within the main function of a plug-in? | gimp python plug in: how to trigger another user input | 0 | 1.2 | 1 | 0 | 0 | 404 |
41,432,445 | 2017-01-02T19:40:00.000 | 1 | 0 | 1 | 0 | 0 | django,python-2.7 | 0 | 41,433,294 | 0 | 1 | 0 | false | 1 | 0 | Create a new virtualenv using Python 2.7. Use the -p flag to point to the python installation you want for that virtual environment, and then pip install django within that virtual environment. | 1 | 0 | 0 | 0 | The django app running on localhost in a virtualenv uses the default python version 2.7.3 that is under /usr/bin/ but I installed Python 2.7.9 under ~/.opt/bin/python2.7. I updated the $PATH but I want the django app to use the locally installed python version by default.
Please help me understand how to make that happen. Thank you. | Updating which python my django app uses | 0 | 0.197375 | 1 | 0 | 0 | 96 |
41,454,355 | 2017-01-04T00:17:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql | 1 | 41,454,481 | 0 | 2 | 0 | false | 0 | 0 | You can use mysql.connector on the server. However, you will have to install it first. Do you have root (admin) access? If no, you might need help from the server admin. | 1 | 0 | 0 | 0 | I have been using the mysql.connector module with Python 2.7 and testing locally using XAMPP. Whenever I upload my script to the server, I am getting an import error for the mysql.connector module. I am assuming this is because, unlike my local machine, I have not installed the mysql.connector module on the server.
My question is: can I somehow use the mysql.connector module on the server or is this something only for local development? I have looked into it, and apparently do not have SSH access for my server, only for the database. As well, if I cannot use the mysql.connector module, how do I connect to my MySQL database from my Python script on the server? | Can you use Python mysql.connector on actual Server? | 0 | 0 | 1 | 1 | 0 | 95 |