Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---
900 | 27,180,283 |
Using Beautiful Soup to get specific tr meta data
|
<p>I have a bizarre problem. I am using python to scrape a page using beautiful soup. One value I need is in the tr meta-data which I have been able to print to my screen using the following command:</p>
<pre><code>meta = tr.findNext('td', {'class':'field1'})
attr_dict = meta.a.attrs
print(attr_dict)
</code></pre>
<p>this produces:</p>
<pre><code>{'href': '/client/displayEmail.asp?rid=1318441&cid=12339',
'rel': ['gb_page_center[940,', '600]'],
'title': 'ID: manualavenue100daily120141127073104:EG_CO_NEWS_1/08-14-14_yahoo\rLooking for schools? Get free information today.\rFrom: [email protected]\rDate: 11/27/2014 7:33:34 AM'}
</code></pre>
<p>I want to extract the following information: EG_CO_NEWS_1/08-14-14_yahoo but cannot seem to get it.</p>
<p>Currently I'm doing this : </p>
<pre><code>campaign_raw = str(attr_dict['title'][:80])
</code></pre>
<p>which produces:</p>
<pre><code>'Lookianualavenue100daily120141127073104:EG_CO_NEWS_1/08-14-14_yahoo'
</code></pre>
<p>A weird concatenation of the subject and template name in an unexpected order.</p>
<p>I tried to split the string on a ':' and take the last segment, and that produces:
Looki_NEWS_1/08-14-14_yahoo</p>
<p>I have no idea what to do. I've experimented with regular expressions but that does not seem to work either. Anyone have any experience with this?</p>
|
<p>The unexpected order is caused by the <code>'\r'</code> (carriage return) character. Replace it with <code>''</code> or <code>' '</code> and then process your string.</p>
<pre><code>str(attr_dict['title']).replace('\r', '')
</code></pre>
<p>Consider the string:</p>
<pre><code>st = "This is SO\rThat"
</code></pre>
<p>Now if you print the string,</p>
<pre><code>print st
That is SO
</code></pre>
<p>This happens because when a <code>\r</code> or <code>carriage return</code> is encountered the device's position is reset to the beginning of a line of text. </p>
|
python|html|regex|beautifulsoup|metadata
| 2 |
901 | 27,236,945 |
language detection code in python
|
<p>So, we have built a language detection program in python that just detects different languages. Our code seems fine; there is no error but I am not getting the desired result. Whenever I run it on Eclipse, it runs and terminates giving us the running time and an "OK". It is supposed to print the language of the text written.</p>
<pre><code>from nltk.tokenize import wordpunct_tokenize
from nltk.corpus import stopwords

def compute_ratios(text):
    tokens = wordpunct_tokenize(text)
    words = [word.lower() for word in tokens]
    langratios = {}
    for language in stopwords.fileids():
        stopwords_set = set(stopwords.words(language))
        words_set = set(words)
        common_elements = words_set.intersection(stopwords_set)
        langratios[language] = len(common_elements)
    return langratios

def max_ratio(text):
    ratios = compute_ratios(text)
    mostLang = max(ratios, key=ratios.get)
    return mostLang

def main():
    text = "This is cool"
    x = max_ratio(text)
    print(x)
</code></pre>
|
<p>Unlike in some other languages, <code>main()</code> is just like any other function in Python. If you want it to run, you have to explicitly call it:</p>
<pre><code>def main():
    ...

main()
</code></pre>
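<p>A common equivalent idiom (a minimal sketch, not part of the original answer) is to guard the call so the module can also be imported without running <code>main()</code>:</p>
<pre><code>def main():
    ...

# runs only when the file is executed directly, not when imported
if __name__ == '__main__':
    main()
</code></pre>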
|
python|language-detection
| 4 |
902 | 12,428,089 |
Get files and put into list in another method
|
<p>I have a directory with files, I need to get a listing of these files to put into another method. It is in the context of webassets(https://github.com/miracle2k/webassets) so it looks like this, the specific case:</p>
<pre><code>app_css = Bundle('app_assets/css/base.css',
                 'app_assets/css/layout.css',
                 output='output.css',
                 filters='cssmin')
</code></pre>
<p>and I want to be like this:</p>
<pre><code>app_css = Bundle( {any number of files in a directory},
                 output='output.css',
                 filters='cssmin')
</code></pre>
<p>So I need to retrieve a list of files which might vary and aren't fixed, then put that list into another function, rather than hardcoding each change.</p>
<p>I have this from my last unsuccessful attempt:</p>
<pre><code>csspath = "{}/static/css".format(os.path.dirname(__file__))
csss = [["app_assets/css/{}".format(files)] for files in os.listdir(csspath)]
app_css = Bundle("{}".format(*csss), output="packed.css", filters="cssmin")
</code></pre>
<p>but this isn't right. One of the issues is that I just need the file names, and because it is a Flask blueprint, I need to use the 'app_assets/directory/files' format.</p>
<p>This is basic-python-should-be-easy-101 and a learning experience, and I'll get it, but now that I've turned back to this I'm interested in other solutions, suggestions, etc.</p>
|
<p>If you want to get a list of all <code>css</code> files in a directory you can use the <code>glob</code> module:</p>
<pre><code>my_files = glob.glob('path_to_the_directory/*.css')
</code></pre>
<p>Basically, <code>glob</code> expands patterns the same way the shell expands filenames. You can also use it for directories. For example this:</p>
<pre><code>glob.glob('My/dir/*/*.css')
</code></pre>
<p>Would return a list of all filenames that end in ".css" and that are located in a subdirectory of "My/dir".</p>
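<p>If the files can be nested arbitrarily deep, a hedged variant (the <code>recursive</code> flag needs Python 3.5+):</p>
<pre><code>import glob

# matches .css files in the directory and every subdirectory below it
css_files = glob.glob('path_to_the_directory/**/*.css', recursive=True)
</code></pre>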
<p>edit:
A "translation" of your three lines of code:</p>
<pre><code>csspath = os.path.join(os.path.dirname(__file__), 'static', 'css')
csss = [os.path.join('app_assets', 'css', fname) for fname in os.listdir(csspath)]
app_css = Bundle(*csss, output='packed.css', filters='cssmin')
</code></pre>
<p>But I'm not sure whether you simply want to improve those lines or whether the original version doesn't work at all.</p>
|
python|file|directory
| 0 |
903 | 12,468,707 |
Retrieving Variable & Processing List from Tastypie URL
|
<p>Let's say my override_urls is like so:</p>
<pre><code>def override_urls(self):
    return [
        url(r"^(?P<resource_name>%s)/(?P<user__username>\w{4,30})%s$" % (self._meta.resource_name, trailing_slash()), self.wrap_view('dispatch_list'), name="api_dispatch_list"),
    ]
</code></pre>
<p>I'd like to do some custom processing with <code>user__username</code>: I'd like to get all of a user's 'post' objects and combine them with the post objects of everyone they follow.</p>
<p>How can I nab user__username for get_object_list to process? I tried to get it from the request using request.GET.get('user__username') but that didn't seem to make sense (and didn't work). </p>
<p>PS, is there any way to turn <code>user__username</code> into just <code>username</code> (for the sake of prettiness)?</p>
|
<p>The <code>user__username</code> argument is passed in <code>kwargs</code> through the dispatching process, not in <code>request.GET</code>.</p>
<p>You probably would want to override the
<a href="https://github.com/toastdriven/django-tastypie/blob/master/tastypie/resources.py#L1116" rel="nofollow">get_list method</a> and process the additional argument inside it. If you do it that way you can name your argument whatever you want and process it the way you wish.</p>
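<p>A minimal sketch of that override — the resource name <code>PostResource</code> and the filtering logic are assumptions for illustration, not part of the original answer:</p>
<pre><code>class PostResource(ModelResource):
    # ... Meta, fields, etc. ...

    def get_list(self, request, **kwargs):
        # the named group from the URL pattern arrives here via kwargs,
        # so you may name it whatever you like in override_urls
        username = kwargs.pop('user__username', None)
        # build the combined queryset (own posts + followed users' posts)
        # from 'username' before delegating
        return super(PostResource, self).get_list(request, **kwargs)
</code></pre>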
|
python|django|tastypie
| 1 |
904 | 682,923 |
Dynamically change the choices in a wx.ComboBox()
|
<p>I didn't find a better way to change the choices in a wx.ComboBox() than to swap the old ComboBox for a new one. Is there a better way?</p>
<p>Oerjan Pettersen</p>
<pre><code>#!/usr/bin/python
# 20_combobox.py

import wx
import wx.lib.inspection

class MyFrame(wx.Frame):
    def __init__(self, *args, **kwargs):
        wx.Frame.__init__(self, *args, **kwargs)
        self.p1 = wx.Panel(self)
        lst = ['1', '2', '3']
        self.st = wx.ComboBox(self.p1, -1, choices = lst, style=wx.TE_PROCESS_ENTER)
        self.st.Bind(wx.EVT_COMBOBOX, self.text_return)

    def text_return(self, event):
        lst = ['3', '4']
        self.st = wx.ComboBox(self.p1, -1, choices = lst, style=wx.TE_PROCESS_ENTER)

class MyApp(wx.App):
    def OnInit(self):
        frame = MyFrame(None, -1, '20_combobox.py')
        frame.Show()
        self.SetTopWindow(frame)
        return 1

if __name__ == "__main__":
    app = MyApp(0)
    # wx.lib.inspection.InspectionTool().Show()
    app.MainLoop()
</code></pre>
|
<p><a href="http://docs.wxwidgets.org/stable/classwx_combo_box.html" rel="noreferrer">wx.ComboBox</a> derives from <a href="http://docs.wxwidgets.org/stable/classwx_item_container.html" rel="noreferrer">wx.ItemContainer</a>, which has methods for <a href="http://docs.wxwidgets.org/stable/classwx_item_container.html#a8fdc0090e3eabc762ff0e49e925f8bc4" rel="noreferrer">Appending</a>, <a href="http://docs.wxwidgets.org/stable/classwx_item_container.html#aea621d4fdfbc3a06bf24dcc97304e2c1" rel="noreferrer">Clearing</a>, <a href="http://docs.wxwidgets.org/stable/classwx_item_container.html#a8844cacec8509fe6e637c6f85eb8b395" rel="noreferrer">Inserting</a> and <a href="http://docs.wxwidgets.org/stable/classwx_item_container.html#a0e8379f41e9d7b912564000828140a19" rel="noreferrer">Deleting</a> items, all of these methods are available on wx.ComboBox.</p>
<p>One way to do what you want would be to define the text_return() method as follows:</p>
<pre><code>def text_return(self, event):
    self.st.Clear()
    self.st.Append('3')
    self.st.Append('4')
</code></pre>
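<p>wxPython also inherits the convenience method <code>SetItems</code> from wx.ItemContainer, which clears and repopulates the control in one call — a hedged one-liner equivalent of the above:</p>
<pre><code>def text_return(self, event):
    self.st.SetItems(['3', '4'])
</code></pre>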
|
python|wxpython|wxwidgets
| 36 |
905 | 47,119,364 |
Python/MyPy: How to annotate a method that can return one of several different types of objects?
|
<p>How should I annotate the return type of a method that can return multiple different types of objects?</p>
<p>Specifically this is the method I'm having trouble with:</p>
<pre><code>def _bin_factory(self) -> Any:
    """
    Returns a bin with the specified algorithm,
    heuristic, and dimensions
    """
    if self.algorithm == 'guillotine':
        return guillotine.Guillotine(self.bin_width, self.bin_height, self.rotation,
                                     self.rectangle_merge, self.split_heuristic)
    elif self.algorithm == 'shelf':
        return shelf.Sheet(self.bin_width, self.bin_height, self.rotation, self.wastemap)
    elif self.algorithm == 'maximal_rectangle':
        return maximal_rectangles.MaximalRectangle(self.bin_width, self.bin_height, self.rotation)
    raise ValueError('Error: No such Algorithm')
</code></pre>
<p>I tried <code>Union[shelf.Sheet, guillotine.Guillotine, maximal_rectangles.MaximalRectangle]</code> but MyPy gives me a ton of errors where I use the _bin_factory method later on in my code. The errors seem to center around the fact that all three object types in the Union have different attributes from one another.</p>
|
<p>Here's a solution using <code>typing.Generic</code></p>
<pre><code>from typing import Generic, TypeVar

T = TypeVar('T', 'Guillotine', 'Sheet', 'MaximalRectangle')

class Guillotine:
    pass

class Sheet:
    pass

class MaximalRectangle:
    pass

class Algo(Generic[T]):
    def __init__(self, algorithm: str) -> None:
        self.algorithm = algorithm

    def _bin_factory(self) -> T:
        """
        Returns a bin with the specified algorithm,
        heuristic, and dimensions
        """
        if self.algorithm == 'guillotine':
            return Guillotine()  # type: ignore
        elif self.algorithm == 'shelf':
            return Sheet()  # type: ignore
        elif self.algorithm == 'maximal_rectangle':
            return MaximalRectangle()  # type: ignore
        raise ValueError('Error: No such Algorithm')

algo: Algo[Guillotine] = Algo('guillotine')
reveal_type(algo._bin_factory())
</code></pre>
<p>Alternately, if you're willing to modify your approach a bit more, you can provide a cleaner API:</p>
<pre><code>from typing import Generic, TypeVar, Type

T = TypeVar('T', 'Guillotine', 'Sheet', 'MaximalRectangle')

class Guillotine:
    pass

class Sheet:
    pass

class MaximalRectangle:
    pass

class Algo(Generic[T]):
    def __init__(self, algorithm: Type[T]) -> None:
        self.algorithm = algorithm  # type: Type[T]

    def _bin_factory(self) -> T:
        """
        Returns a bin with the specified algorithm,
        heuristic, and dimensions
        """
        if self.algorithm is Guillotine:
            # handle custom arguments:
            return self.algorithm()
        elif self.algorithm is Sheet:
            # handle custom arguments:
            return self.algorithm()
        elif self.algorithm is MaximalRectangle:
            # handle custom arguments:
            return self.algorithm()
        raise ValueError('Error: No such Algorithm')

algo = Algo(Guillotine)
reveal_type(algo._bin_factory())
</code></pre>
|
python|python-3.x|oop|types|mypy
| 1 |
906 | 11,772,051 |
Ending a process when a tcp connection is closed
|
<p>I am developing a client-server application where, whenever a new client connects, the server spawns a new process using the <code>multiprocessing</code> module. Its target is a function that takes the socket and does I/O. The problem I have is: once the TCP connection between the client and the server process is closed, how/where do I put the <code>.join()</code> call to end the child process? Also, do I need to do any <code>waitpid</code> in the parent process like in C?</p>
<p>Server code:</p>
<pre><code>def new_client(conn_socket):
    while True:
        message = conn_socket.recv(BUFFER_SIZE)
        conn_socket.send(message)
        # just echo the message
        # how to check to see if the TCP connection is still alive?
        # put the .join() here??

def main():
    # create the socket
    server_socket = socket(AF_INET, SOCK_STREAM)
    # bind the socket to the local ip address on a specific port and listen
    server_port = 12000
    server_socket.bind(('', server_port))
    server_socket.listen(1)
    # enter in a loop to accept client connections
    while True:
        connection_socket, client_address = server_socket.accept()
        # create a new process with the new connection_socket
        new_process = Process(target = new_client, args = (connection_socket,))
        new_process.start()
        # put the .join() here or what??

if __name__ == '__main__':
    main()
</code></pre>
<p>Also for this setup would it be more beneficial to use threads in the <code>thread</code> module or stay with processes? The server code is being developed for heavy usage on a server with "average" specs(how to optimize this setup).</p>
|
<p>You need to check the return value of <code>recv</code>. If it returns an empty string, the connection has been closed cleanly; in Python, errors raise an exception rather than returning a negative value as in C.</p>
<p>And the <code>join</code> call should be in the process that creates the sub-process. However, be careful, because <code>join</code> without an argument will block the calling process until the sub-process is done. Put the processes in a list, and at regular intervals call <code>join</code> with a small timeout.</p>
<p><strong>Edit:</strong> Simplest is to add, at the end of the infinite accept loop, to iterate over the list of processes, and check if it's <code>is_alive</code>. If not then call <code>join</code> and remove it from the list.</p>
<p>Something like:</p>
<pre><code>all_processes = []

while True:
    connection_socket, client_address = server_socket.accept()
    # create a new process with the new connection_socket
    new_process = Process(target = new_client, args = (connection_socket,))
    new_process.start()
    # Add process to our list
    all_processes.append(new_process)

    # Join all dead processes
    for proc in all_processes:
        if not proc.is_alive():
            proc.join()
    # And remove them from the list
    all_processes = [proc for proc in all_processes if proc.is_alive()]
</code></pre>
<p>Note that purging of old processes will only happen if we get a new connection. This can take some time, depending on if you get new connections often or not. You could make the listening socket non-blocking and use e.g. <a href="http://docs.python.org/library/select.html" rel="nofollow"><code>select</code></a> with a timeout to know if there are new connections or not, and the purging will happen at more regular intervals even if there are no new connections.</p>
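<p>A hedged sketch of that <code>select</code>-based variant, reusing the names from the snippet above:</p>
<pre><code>import select

server_socket.setblocking(False)

while True:
    # wait up to 1 second for a new connection
    readable, _, _ = select.select([server_socket], [], [], 1.0)
    if readable:
        connection_socket, client_address = server_socket.accept()
        new_process = Process(target=new_client, args=(connection_socket,))
        new_process.start()
        all_processes.append(new_process)
    # purge finished processes on every pass, even when no client connects
    for proc in all_processes:
        if not proc.is_alive():
            proc.join()
    all_processes = [proc for proc in all_processes if proc.is_alive()]
</code></pre>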
|
python|sockets|tcp|process
| 1 |
907 | 33,959,386 |
Django - getting objects by field value (model is unknown)
|
<p>I'm trying to write a view in Django that creates an object containing a generic foreign key, and I want to write it in a way that lets me create the object without specifying the type of the object behind that generic FK, just its ID.</p>
<p>For instance if I have this:</p>
<pre><code>class Foo1(models.Model):
    uuid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False)
    ...

class Foo2(models.Model):
    uuid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False)
    ...

class Foo3(models.Model):
    uuid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False)
    ...

class Bar(models.Model):
    foo_type = models.ForeignKey(ContentType, related_name="content_type")
    foo_id = models.PositiveIntegerField()
    foo_object = GenericForeignKey('foo_type', 'foo_id')
    ...
</code></pre>
<p>I would like to be able to get an object that belongs to any of the Foos by providing just the UUID (which should be unique throughout the DB).</p>
<p>I guess something that would look like this:</p>
<pre><code>GenericFoo.objects.get(uuid=uuid)
</code></pre>
<p>Is this at all possible in Django?</p>
<p>Do note that I don't have any inheritance/abstract model implementation going on between the Foos at the moment, because they are not related.</p>
<p>Thanks!</p>
|
<p>I don't believe you can fetch an object from an SQL database without searching each table it might be in.</p>
<p>However you can write a method in Django that does this for you:</p>
<pre><code>def get_by_uuid(uuid):
    for Model in [Foo1, Foo2, Foo3]:
        try:
            return Model.objects.get(uuid=uuid)
        except Model.DoesNotExist:
            pass
    return None  # Maybe you want to raise an exception here instead
</code></pre>
<p>Note that the database will not ensure that uuids are unique across all tables, although if you use <code>uuid.uuid4()</code> to generate them, you shouldn't get any collisions.</p>
<p>If you don't want to hardcode the list of models, you could introspect your <a href="https://docs.djangoproject.com/en/1.8/ref/applications/#module-django.apps" rel="nofollow">applications</a></p>
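<p>A hedged sketch of that introspection — it assumes every candidate model exposes a unique <code>uuid</code> field, as in the question:</p>
<pre><code>from django.apps import apps

def get_by_uuid(uuid):
    for Model in apps.get_models():
        # only query models that actually define a 'uuid' field
        if any(f.name == 'uuid' for f in Model._meta.get_fields()):
            try:
                return Model.objects.get(uuid=uuid)
            except Model.DoesNotExist:
                pass
    return None
</code></pre>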
|
python|django|object|generics|django-queryset
| 0 |
908 | 47,001,980 |
Flask testing client picking up wrong view function with method
|
<p>I have some view functions within a Blueprint. They are like following:</p>
<pre><code>@app.route('/panel/<int:id>', methods=['GET'])
def get_panel(id):
    panel = Panel.query.filter_by(id=id).first()
    return jsonify(panel.getJson())

@app.route('/panel/<int:id>', methods=['POST'])
def post_panel(id):
    panel = request.get_json().get('panel')
    # code for saving the data in database
    return jsonify({"message": "Saved in database"})
</code></pre>
<p>When I try to test the view function <em>post_panel()</em>, it somehow picks up <em>get_panel()</em> instead. Both functions have the same URL, and I think that's what is causing the problem.</p>
<p>Is there any way around?</p>
|
<p>This is not the correct way to handle different request types for the same API endpoint. Try the approach below:</p>
<pre><code>from flask import request

@app.route('/panel/<int:id>', methods=['GET', 'POST'])
def get_panel(id):
    if request.method == 'GET':
        panel = Panel.query.filter_by(id=id).first()
        return jsonify(panel.getJson())
    elif request.method == 'POST':
        panel = request.get_json().get('panel')
        # code for saving the data in database
        return jsonify({"message": "Saved in database"})
</code></pre>
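<p>With the single view above, Flask's test client dispatches by HTTP method — a hedged usage sketch (the <code>json=</code> keyword needs Flask 1.0+):</p>
<pre><code>client = app.test_client()

rv_get = client.get('/panel/1')                        # hits the GET branch
rv_post = client.post('/panel/1', json={'panel': {}})  # hits the POST branch
</code></pre>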
|
python|flask|python-unittest|werkzeug
| 1 |
909 | 46,956,567 |
JavaScript event on Odoo's header button
|
<p>I'm trying to have a JavaScript event fired on a header button (the workflow button).</p>
<p>This is my js</p>
<pre><code>var _t = instance.web._t, QWeb = instance.web.qweb;

instance.web.FormView.include({
    init: function() {
        this._super.apply(this, arguments);
    },
    events: {
        "click .resume_consultation": "resume_consultation",
    },
    resume_consultation: function(ev) {
        ev.preventDefault();
        ev.stopPropagation();
    }
})
</code></pre>
</code></pre>
<p>The xml for button</p>
<pre><code><header>
    <button type="object" class="resume_consultation"
            name="testonly"
            string="Test Only"/>
</header>
</code></pre>
</code></pre>
<p>The python</p>
<pre><code>@api.multi
def testonly(self):
    return False
</code></pre>
<p>The event is not called. But I know that the <code>init</code> from the FormView is executed. It's just that the event is not.</p>
<p>Anyone know how to do it for the workflow buttons?</p>
|
<p>First of all, add the following code to your js file:</p>
<pre><code>odoo.define('Modulename.filename', function (require) {
    "use strict";

    var form_widget = require('web.form_widgets');
    var core = require('web.core');
    var _t = core._t;
    var QWeb = core.qweb;

    form_widget.WidgetButton.include({
        on_click: function() {
            if (this.node.attrs.custom === "click") {
                // code //
            }
            this._super();
        },
    });
});
</code></pre>
<p>After this, register your js file in xml:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
    <template id="assets_backend" name="project assets" inherit_id="web.assets_backend">
        <xpath expr="." position="inside">
            <script type="text/javascript" src="/product_pack/static/src/js/product_pack.js"></script>
        </xpath>
    </template>
</odoo>
</code></pre>
<p>After that, define your click event function in your py file:</p>
<pre><code>class SalePetOrder(models.Model):
    _inherit = "xyz"

    def java_script(self):
        return {"hello": "world"}
</code></pre>
<p>Then reference your function in your xml:</p>
<pre><code><data>
    <header>
        <button name="java_script" string="Java Script" type="object" custom="click"/>
    </header>
</data>
</code></pre>
</code></pre>
<p>And yes, register your js file and xml file in your <strong>manifest</strong>/<em>openerp</em> file.</p>
<p>I hope it helps you.</p>
|
javascript|python|openerp|odoo-8
| 0 |
910 | 46,756,780 |
Azure Batch Pool: How do I use a custom VM Image via Python?
|
<p>I want to create my Pool using Python. I can do this when using an image (Ubuntu Server 16.04) from the marketplace, but I want to use a custom image (but also Ubuntu Server 16.04) -- one which I have prepared with the desired libraries and setup.</p>
<p>This is how I am creating my pool:</p>
<pre><code>new_pool = batch.models.PoolAddParameter(
    id=pool_id,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=image_ref_to_use,  # ??
        node_agent_sku_id=sku_to_use),
    vm_size=_POOL_VM_SIZE,
    target_dedicated_nodes=_POOL_NODE_COUNT,
    start_task=start_task,
    max_tasks_per_node=_CORES_PER_NODE
)
</code></pre>
</code></pre>
<p>I imagine that I need to use <code>batch.models.ImageReference()</code> to create my image reference... but I do not know how to use it.</p>
<p>Yes, I checked the <a href="http://azure-sdk-for-python.readthedocs.io/en/latest/ref/azure.batch.models.html" rel="noreferrer">documentation</a>, which says the following:</p>
<blockquote>
<p>A reference to an Azure Virtual Machines Marketplace image or a custom
Azure Virtual Machine image.</p>
</blockquote>
<p>It lists the parameters as: </p>
<ul>
<li>publisher (str) </li>
<li>offer (str) </li>
<li>sku (str) </li>
<li>version (str) </li>
<li>virtual_machine_image_id (str)</li>
</ul>
<p>However, the parameter <code>virtual_machine_image_id</code> does not exist... In other words, <code>batch.models.ImageReference(virtual_machine_image_id)</code> is not allowed.</p>
<p>How can I use a custom image for my Pool?</p>
<p><strong>UPDATE</strong></p>
<p>So I figured out how to use a custom image... it turns out that no matter how many times I uninstall the azure python libraries and re-install them, the <code>virtual_machine_image_id</code> is never available.</p>
<p>I then went <a href="https://pypi.python.org/pypi/azure-batch" rel="noreferrer">here</a> and downloaded the zip. I opened it up, checked the <code>ImageReference</code> class and, lo and behold, the <code>virtual_machine_image_id</code> was available in the <code>__init__</code> function of the <code>ImageReference</code> class. I then downloaded the python wheel and used pip to install it. Boom, it worked.</p>
<p>Or so I thought.</p>
<p>I then had to fight through trying to figure out what the <code>node_agent_sku_id</code> is... only by manually creating a Pool and seeing the <code>Batch Node Agent SKU ID</code> field did I manage to find it.</p>
<p>Now I am struggling with the Authentication...</p>
<p>The error I am getting is:</p>
<blockquote>
<p>Server failed to authenticate the request. Make sure the value of
Authorization header is formed correctly including the signature.</p>
<p>AuthenticationErrorDetail: The specified type of authentication
SharedKey is not allowed when external resources of type Compute are
linked. </p>
<p>azure.batch.models.batch_error.BatchErrorException: {'lang':
'en-US', 'value': 'Server failed to authenticate the request. Make
sure the value of Authorization header is formed correctly including
the
signature.\nRequestId:f8c1a3b3-65c4-4efd-9c4f-75c5c253f992\nTime:2017-10-15T20:36:06.7898187Z'}</p>
</blockquote>
<p>From the error, I understand that I am not allowed to use <code>SharedKeyCredentials</code>:</p>
<pre><code>credentials = batchauth.SharedKeyCredentials(_BATCH_ACCOUNT_NAME,
                                             _BATCH_ACCOUNT_KEY)
batch_client = batch.BatchServiceClient(
    credentials,
    base_url=_BATCH_ACCOUNT_URL)
</code></pre>
</code></pre>
<p>What must I do?</p>
<p><strong>UPDATE 2</strong></p>
<p>OK. User <code>fpark</code> has informed me that I need to use: </p>
<pre><code>from azure.batch import BatchServiceClient
from azure.common.credentials import ServicePrincipalCredentials

credentials = ServicePrincipalCredentials(
    client_id=CLIENT_ID,
    secret=SECRET,
    tenant=TENANT_ID,
    resource="https://batch.core.windows.net/"
)

batch_client = BatchServiceClient(
    credentials,
    base_url=BATCH_ACCOUNT_URL
)
</code></pre>
</code></pre>
<p>to authenticate. Unfortunately, that the code above is described <a href="http://azure-sdk-for-python.readthedocs.io/en/latest/batch.html#azure-active-directory-authentication" rel="noreferrer">here</a> and makes no reference to what <code>CLIENT_ID</code> et. al are.</p>
<p>I then managed to find another piece of documentation which appears to be the same thing: <a href="https://azure-sdk-for-python.readthedocs.io/en/v2.0.0rc3/resourcemanagementauthentication.html" rel="noreferrer">https://azure-sdk-for-python.readthedocs.io/en/v2.0.0rc3/resourcemanagementauthentication.html</a></p>
<p>That page pointed me to another webpage: <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal" rel="noreferrer">https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal</a></p>
<p>I followed that tutorial and managed to finally authenticate my application...</p>
<p><strong>NOTE</strong></p>
<p>When creating your application, the tutorial will tell you:</p>
<blockquote>
<p>Provide a name and URL for the application. Select either Web app /
API or Native for the type of application you want to create. After
setting the values, select Create.</p>
</blockquote>
<p><strong>DO NOT</strong> select <code>Native</code> as you will not have the option to get an application key...</p>
|
<p><strong>Required Minimum Azure Batch SDK</strong></p>
<p>The <a href="https://pypi.org/project/azure-batch/" rel="nofollow noreferrer">azure-batch</a> Python SDK v4.0.0 or higher is required. Typically with <code>pip install --upgrade azure-batch</code> you should just get the newest version. If that doesn't work you can add the <code>--force-reinstall</code> option to pip to force it (with <code>--upgrade</code>).</p>
<p><strong>Node Agent Sku Id</strong></p>
<p>Regarding the proper value for <code>node_agent_sku_id</code>, you need to use the <a href="https://docs.microsoft.com/python/api/azure-batch/azure.batch.operations.account_operations.accountoperations?view=azure-python#list-node-agent-skus-account-list-node-agent-skus-options-none--custom-headers-none--raw-false----operation-config-" rel="nofollow noreferrer"><code>list_node_agent_skus</code></a> operation to see the mapping between operating systems and the node agent skus supported.</p>
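<p>A hedged sketch of that lookup, assuming <code>batch_client</code> is an authenticated <code>BatchServiceClient</code>:</p>
<pre><code># print each node agent sku id together with the OS images it supports
for sku in batch_client.account.list_node_agent_skus():
    print(sku.id, sku.os_type)
    for image in sku.verified_image_references:
        print('   ', image.publisher, image.offer, image.sku)
</code></pre>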
<p><strong>Azure Active Directory Authentication Required</strong></p>
<p>Regarding the auth issue, you must use <a href="https://docs.microsoft.com/azure/batch/batch-aad-auth" rel="nofollow noreferrer">Azure Active Directory authentication</a> to use this feature. It will not work with shared key auth.</p>
<p><strong>Documentation</strong></p>
<p>More information can be found in <a href="https://docs.microsoft.com/azure/batch/batch-custom-images" rel="nofollow noreferrer">this guide</a>, including all pre-requisites needed to enable custom images.</p>
|
python|azure|azure-batch
| 2 |
911 | 37,771,434 |
mac - pip install pymssql error
|
<p>I use a Mac (OS X 10.11.5). I want to install the module <code>pymssql</code> for Python.
In <code>Terminal.app</code>, I tried <code>sudo -H pip install pymssql</code>, <code>pip install pymssql</code>, and <code>sudo pip install pymssql</code>, but errors occur.</p>
<blockquote>
<p>The directory <code>/Users/janghyunsoo/Library/Caches/pip/http</code> or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing <code>pip</code> with <code>sudo</code>, you may want <code>sudo</code>'s <code>-H</code> flag.</p>
<p>The directory <code>/Users/janghyunsoo/Library/Caches/pip</code> or its parent directory is not owned by the current user and caching wheels has been disabled. Check the permissions and owner of that directory. If executing <code>pip</code> with <code>sudo</code>, you may want <code>sudo</code>'s <code>-H</code> flag.</p>
</blockquote>
<pre><code>Collecting pymssql
Downloading pymssql-2.1.2.tar.gz (898kB)
100% |████████████████████████████████| 901kB 955kB/s
Installing collected packages: pymssql
Running setup.py install for pymssql ... error
Complete output from command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-KA5ksi/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-A3wRBy-record/install-record.txt --single-version-externally-managed --compile:
setup.py: platform.system() => 'Darwin'
setup.py: platform.architecture() => ('64bit', '')
setup.py: platform.libc_ver() => ('', '')
setup.py: Detected Darwin/Mac OS X.
You can install FreeTDS with Homebrew or MacPorts, or by downloading
and compiling it yourself.
Homebrew (http://brew.sh/)
--------------------------
brew install freetds
MacPorts (http://www.macports.org/)
-----------------------------------
sudo port install freetds
setup.py: Not using bundled FreeTDS
setup.py: include_dirs = ['/usr/local/include', '/opt/local/include', '/opt/local/include/freetds']
setup.py: library_dirs = ['/usr/local/lib', '/opt/local/lib']
running install
running build
running build_ext
building '_mssql' extension
creating build
creating build/temp.macosx-10.6-intel-2.7
/usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/opt/local/include -I/opt/local/include/freetds -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mssql.c -o build/temp.macosx-10.6-intel-2.7/_mssql.o -DMSDBLIB
_mssql.c:18924:15: error: use of undeclared identifier 'DBVERSION_80'
              __pyx_r = DBVERSION_80;
                        ^
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
----------------------------------------
Command "/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-KA5ksi/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-A3wRBy-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-KA5ksi/pymssql/
</code></pre>
|
<p>The top voted solution did not work for me as brew did not link the older version of freetds on its own. I did this to solve the problem:</p>
<pre><code>brew unlink freetds;
brew install freetds@0.91;
brew link --force freetds@0.91
</code></pre>
|
python|macos|python-2.7|pymssql
| 60 |
912 | 67,823,882 |
How to align labels? Python
|
<p>So I have this program where I am supposed to receive certain information from a file and then I must separate it into groups like in a table.</p>
<pre><code>Like this:

Name     Sales    #Items
Randy    85       5
Charli   100      10
</code></pre>
<p>I know how to print it; I would usually do this:</p>
<pre><code>print ("{:<10} {:<10} {:<10}".format('Name', 'Sales', '#Items'))

for name, sales, item in list:
    name, age, course = value
    print ("{:<10} {:<10} {:<10}".format(name, sales, item))
</code></pre>
<p>But that does not work with Labels and the outcome is not aligned. Any solutions?</p>
|
<p>You should never use a built-in name such as <code>list</code> as a variable. Also, the variable being assigned should always be on the left.</p>
<pre><code>lists = [('Randy', 85, 5), ('Charli', 100, 10)]

print ("{:<10} {:<10} {:<10}".format('Name', 'Sales', '#Items'))

for name, sales, item in lists:
    value = name, sales, item
    print ("{:<10} {:<10} {:<10}".format(name, sales, item))
</code></pre>
</code></pre>
<p>Output</p>
<pre><code>Name       Sales      #Items
Randy      85         5
Charli     100        10
</code></pre>
|
python|label
| 0 |
913 | 67,759,125 |
When opening a new frame and then going back to the home page, the home page messes up
|
<p>When I sign in, everything is fine and it takes me to the home page. When I click on view menu and then click the back button, it takes me back to the home page and everything is still the way I want it. <strong>However</strong>, when I click on order menu and then press the back button to go back to the home page, my home page messes up and I see parts of the "<strong>function:</strong>" function. How can I fix this?<br />
I'm sorry if the code is a bit long; I already cut out most of the unnecessary code (or tried to). Thank you for your help.</p>
<p>from tkinter import*
from PIL import Image, ImageTk
import tkinter as tk</p>
<pre><code>root = Tk()
root.geometry('670x466')

accounts = []
food = ['Pizza', 'Burger', 'Nachos', 'French Toast']
foodprice = ['20', '9.50', '7.50', '17']
drinks = ['Pepsi', 'Lemonade', 'Tea', 'Aperitivo Spritz']
drinksprice = ['3', '4', '3', '15.50']

class Goode_brothers:
    def __init__(self, parent):
        self.myFrame = Frame(parent)
        self.myFrame.pack()

        self.load = Image.open('new-dip-project\\food.jpg')
        self.render = ImageTk.PhotoImage(self.load)
        self.img = Label(parent, image = self.render)
        self.img.place(x = -26, y = 0)

        self.img_login = PhotoImage(file = 'new-dip-project\\button (3).png')
        self.b1 = Button(parent, image = self.img_login, command = self.read_info, bd = 0, bg = '#3b353b', activebackground = '#3b353b')
        self.b1.place(x = 275, y = 340)

        self.img_register = PhotoImage(file = 'new-dip-project\\register.png')
        self.b2 = Button(parent, image = self.img_register, command = self.openNewWindow, bd = 0, bg = '#3b353b', activebackground = '#3b353b')
        self.b2.place(x = 265, y = 400)

        self.canvas = Canvas(parent, width = 400, height = 120)
        self.canvas.pack()
        self.img4 = ImageTk.PhotoImage(Image.open('new-dip-project\\goode.png'))
        self.canvas.create_image(20, 20, anchor=NW, image=self.img4)

        self.email = Entry(parent)
        self.email.place(x = 340, y = 180)
        self.password = Entry(parent)
        self.password.place(x = 354, y = 250)

        self.img_label = PhotoImage(file = 'new-dip-project\\label-image.png')
        self.name = Label(parent, image = self.img_label, text = "Email:", bg = '#3c3a3b').place(x = 197, y = 178)
        self.img_label_pass = PhotoImage(file = 'new-dip-project\\label_pass.png')
        self.name = Label(parent, image = self.img_label_pass, text = "Password:", bg = '#3c3a3b').place(x = 177, y = 245)

def openMenu(self):
    for wid in root.winfo_children():
        wid.destroy()
    self.myFrame.destroy()

    self.myFrame2 = Frame(root, bg = '')
    self.myFrame2.pack(fill = "both", expand = 1)

    self.img77 = PhotoImage(file = 'new-dip-project\\goode.png')
    self.name77 = Label(self.myFrame2, image = self.img77).pack()

    self.img_menu = PhotoImage(file = 'new-dip-project\\menu_button.png')
    self.b6 = Button(self.myFrame2, image = self.img_menu, command = self.view_menu, bd = 0)
    self.b6.place(x = 246, y = 140)

    self.img_order = PhotoImage(file = 'new-dip-project\\order_button.png')
    self.b7 = Button(self.myFrame2, image = self.img_order, command = self.order_menu, bd = 0)
    self.b7.place(x = 239, y = 228)

    self.img_checkout = PhotoImage(file = 'new-dip-project\\checkout.png')
    self.b8 = Button(self.myFrame2, image = self.img_checkout, bd = 0)
    self.b8.place(x = 250, y = 316)

def view_menu(self):
    self.myFrame2.destroy()

    self.myFrame3 = LabelFrame(root, height = 700)
    self.myFrame3.pack()
    self.myFrame3.columnconfigure(0, weight=1)
    self.myFrame3.columnconfigure(1, weight=2)

    self.food_title = Label(self.myFrame3, font=("Impact", "23"), text = 'Food').grid(row = 0, column = 4)
    self.food_space = Label(self.myFrame3, text = '').grid(row = 1, column = 4)
    self.drinks_title = Label(self.myFrame3, font=("Impact", "23"), text = 'Drinks').grid(row = 8, column = 4)
    self.price = Label(self.myFrame3, font=("Impact", "23"), text = 'Price($)').grid(row = 0, column = 8)

    for x in range(len(food)):
        self.foodop = Label(self.myFrame3, font=("Impact", "15"), text = food[x]).grid(row = 3+x, column = 4)  # A created label defining where it is positioned
        self.fprice = Label(self.myFrame3, font=("Impact", "15"), text = foodprice[x]).grid(row = 3+x, column = 8)

    for x in range(len(drinks)):
        self.drinksop = Label(self.myFrame3, font=("Impact", "15"), text = drinks[x]).grid(row = 5+(len(food))+x, column = 4)
        self.drinksp = Label(self.myFrame3, font=("Impact", "15"), text = drinksprice[x]).grid(row = 5+(len(food))+x, column = 8)

    self.img_back = PhotoImage(file = 'new-dip-project\\back_button.png')
    self.b10 = Button(self.myFrame3, image = self.img_back, command = self.openMenu, bd = 0)
    self.b10.grid(row = 38, column = 7)

def order_menu(self):
    for wid2 in root.winfo_children():
        wid2.destroy()
    self.myFrame2.destroy()

    self.myFrame4 = Frame(root)
    self.myFrame4.pack(fill = "both", expand = 1)

    self.tkvar = StringVar(self.myFrame4)
    self.tkvar.set("Food")
    self.tkvar2 = StringVar(self.myFrame4)
    self.tkvar2.set("Drinks")

    self.img_odmenu = PhotoImage(file = 'new-dip-project\\od_menu.png')
    self.order_menu_message = Label(self.myFrame4, image = self.img_odmenu).place(x = 220)

    self.foodMenu = OptionMenu(self.myFrame4, self.tkvar, *food)
    self.foodMenu.place(x = 160, y = 110)
    self.Foodlabel = Label(self.myFrame4, text="Choose Your Food", font=("Courier New", "12"))
    self.Foodlabel.place(x = 145, y = 83)

    self.drinklabel = Label(self.myFrame4, text="Choose Your Drink", font=("Courier New", "12"))
    self.drinklabel.place(x = 370, y = 83)
    self.drinkMenu = OptionMenu(self.myFrame4, self.tkvar2, *drinks)
    self.drinkMenu.place(x = 385, y = 110)

    self.pricelabel = Label(self.myFrame4, text = "Total price", font=("Courier New", "12"))
    self.pricelabel.place(x = 289, y = 208)

    self.order_btn78 = PhotoImage(file = 'new-dip-project\\orderb.png')
    self.order_btn = Button(self.myFrame4, image = self.order_btn78, bd = 0)
    self.order_btn.place(x = 302, y = 160)

    self.check_btn = PhotoImage(file = 'new-dip-project\\checkpay.png')
    self.checkout_btn = Button(self.myFrame4, image = self.check_btn, bd = 0)
    self.checkout_btn.place(x = 267, y = 410)
    self.back_menu = PhotoImage(file = 'new-dip-project\\bbutton.png')
    self.back_button2 = Button(self.myFrame4, image = self.back_menu, command = self.openMenu, bd = 0)
    self.back_button2.place(x = 30, y = 410)

if __name__ == "__main__":
    e = Goode_brothers(root)
    root.title('Goode brothers')
    root.mainloop()
</code></pre>
|
<p>You have to indent the methods under the class <code>Goode_brothers</code>.</p>
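<p>A minimal sketch of what that means (the method bodies here are placeholders, not the code from the question):</p>
<pre><code>class Goode_brothers:
    def __init__(self, parent):
        self.parent = parent

    def openMenu(self):  # indented under the class, so it becomes an instance method
        print('opening menu')
</code></pre>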
|
python|tkinter
| 0 |
914 | 29,879,520 |
Import Django json into iPhone App
|
<p>I am looking to import some Django json into my iphone app. The following Django code:</p>
<pre><code>def jsonfixture(request):
    data = StraightredFixture.objects.filter(fixturematchday=12)
    json_data = serializers.serialize('json', data, use_natural_foreign_keys=True)
    return HttpResponse(json_data, content_type='application/json')
</code></pre>
</code></pre>
<p>produces the following json in my browser:</p>
<pre><code>[{"fields": {"awayteamscore": 2, "hometeamscore": 1, "home_team": "Stoke", "away_team": "Burnley", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136932}, {"fields": {"awayteamscore": 1, "hometeamscore": 2, "home_team": "ManCity", "away_team": "Swans", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136930}, {"fields": {"awayteamscore": 0, "hometeamscore": 0, "home_team": "Foxes", "away_team": "Sunderland", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 137852}, {"fields": {"awayteamscore": 1, "hometeamscore": 2, "home_team": "Everton", "away_team": "West Ham", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136929}, {"fields": {"awayteamscore": 0, "hometeamscore": 2, "home_team": "Chelsea", "away_team": "West Bromwich", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136928}, {"fields": {"awayteamscore": 0, "hometeamscore": 1, "home_team": "Newcastle", "away_team": "QPR", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T15:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136931}, {"fields": {"awayteamscore": 2, "hometeamscore": 1, "home_team": "Arsenal", "away_team": "ManU", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-22T17:30:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136927}, {"fields": {"awayteamscore": 1, "hometeamscore": 3, "home_team": "Crystal", "away_team": "Liverpool", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-23T13:30:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136926}, {"fields": {"awayteamscore": 2, "hometeamscore": 1, "home_team": "Hull", "away_team": "Spurs", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-23T16:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136925}, {"fields": {"awayteamscore": 1, "hometeamscore": 1, "home_team": "Aston Villa", "away_team": "Southampton", "fixturematchday": 12, "soccerseason": 354, "fixturedate": "2014-11-24T20:00:00", "fixturestatus": "FINISHED"}, "model": "straightred.straightredfixture", "pk": 136924}]
</code></pre>
<p>I then have the following swift code in xcode:</p>
<pre><code>let url2 = NSURL(string: "http://localhost:8000/straightred/jsonfixture")
let data = NSData(contentsOfURL: url2!)
var dict = NSJSONSerialization.JSONObjectWithData(data!, options: nil, error: nil) as NSDictionary
</code></pre>
<p>However, it errors at the third line in the code with error "EXC_BAD_INSTRUCTION".</p>
<p>Sadly I do not know if there is something wrong with the django json output or the swift xcode json import. Any help on this would be appreciated.</p>
<p>Many thanks in advance, Alan.</p>
|
<p>Your JSON data is an <em>array</em> of dictionaries, not a dictionary. <em>You have to cast the deserialization result as an NSArray instead of an NSDictionary.</em></p>
<p>Change this line:</p>
<pre><code>var dict = NSJSONSerialization.JSONObjectWithData(data!, options: nil, error: nil) as NSDictionary
</code></pre>
<p>with</p>
<pre><code>var arr = NSJSONSerialization.JSONObjectWithData(data!, options: nil, error: nil) as NSArray
</code></pre>
<p><strong>Swift 2 update:</strong></p>
<p>Still the same idea but safer. </p>
<p>For simplicity, we can use <code>try?</code> with a multiple <code>if let</code>:</p>
<pre><code>if let data = data,
   json = try? NSJSONSerialization.JSONObjectWithData(data, options: []),
   arr = json as? NSArray {
    // use arr
}
</code></pre>
</code></pre>
<p>And if you need to handle errors from NSJSONSerialization (or other throwing methods), then Do-Try-Catch is the way:</p>
<pre><code>do {
    if let data = data,
       json = try NSJSONSerialization.JSONObjectWithData(data, options: []),
       arr = json as? NSArray {
        // use arr
    }
} catch let error as NSError {
    print(error.localizedDescription)
}
</code></pre>
<p><em>Also, just a note: in your example you're using <code>NSData(contentsOfURL:)</code>. It's ok to use this for experimenting, but you should always use asynchronous methods like <code>NSURLSession</code> instead in real code.</em></p>
|
python|ios|json|django|swift
| 0 |
915 | 29,869,484 |
UnicodeDecodeError in a pandas dataframe created from JSON file
|
<p>I have a piece of code running on an iPython notebook that downloads a JSON file and then parses the content into a Pandas DF. However, if I try to inspect the DF, then I get an encoding error.</p>
<pre><code>output = r.json()
columns_map = {'/people/person/date_of_birth': 'birth_date',
'/people/person/place_of_birth': 'birth_place',
'/people/person/gender': 'gender'}
dF = pd.DataFrame(output['result'])
dF.rename(columns=columns_map, inplace=True)
dF.to_csv('file.csv',encoding='utf-8')
</code></pre>
<p>I can create a CSV from the DF w/o any problems, but If i type</p>
<pre><code>dF
</code></pre>
<p>To inspect the dF from inside the iPython notebook, I get this:</p>
<pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1894: ordinal not in range(128)
</code></pre>
<p>Can anybody help?</p>
|
<p>After some research, I found that this is a problem with Python versions < 3.0. For some weird reason, the quick fix is to import sys and reload sys. This worked for me:</p>
<pre><code>import sys
reload(sys)
sys.setdefaultencoding('utf8')
</code></pre>
|
python|json|encoding|utf-8|ipython-notebook
| 7 |
916 | 61,211,313 |
unable to filter on a specific string pattern and unable to change the index in pandas
|
<p>I have a dataframe as below:</p>
<pre><code>customer_data =

Account ID    Account Name    Account Status       gb
1-ABC         ABC Customer    Active               90
2-XYZ         XYZ Customer    Inactive             100
1-CBA         CBA             Indirect - Active    50
2-GHC         GHC             Direct - Inactive    67
</code></pre>
<p>For</p>
<pre><code>print(customer_data.dtypes)
</code></pre>
<p>Output is</p>
<pre><code>gb int64
dtype: object
</code></pre>
<p>For</p>
<pre><code>print(customer_data.columns)
</code></pre>
<p>Output is</p>
<pre><code>Index(['gb'], dtype='object')
</code></pre>
<p>I am trying to create two dataframes: one with those accounts which have the text <code>Active</code> in <code>Account Status</code>, and the other with those which have the string <code>Inactive</code> in <code>Account Status</code>.</p>
<p>I tried this</p>
<pre><code>only_active = customer_data[customer_data['Account Status'].str.contains("Active")]
</code></pre>
<p>and</p>
<pre><code>only_inactive = customer_data[customer_data['Account Status'].str.contains('Inactive')]
</code></pre>
<p>and am getting an error like this:</p>
<pre><code>KeyError: 'Account Status'
</code></pre>
<p>Please help me on this. I want two dataframes: one with those accounts which have the text <code>Active</code> in <code>Account Status</code>, and the other with those which have the string <code>Inactive</code> in <code>Account Status</code>.</p>
|
<p>If your dataframe looks like</p>
<pre><code>Account ID | Account Name | Account Status | gb
</code></pre>
<p>but <code>customer_data.columns</code> only contains <code>['gb']</code>, then your other columns are in a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html" rel="nofollow noreferrer">MultiIndex</a> from some groupby or similar earlier in the code. To get those indices back as columns, you can use <code>customer_data.reset_index()</code>, then your columns will be usable in filtering without using index or multiindex methods</p>
|
python|pandas
| 1 |
917 | 61,517,883 |
json dumps all returns index instead of attribute
|
<p>I am trying to grab data from my mysql database. </p>
<pre><code>from flask_mysqldb import MySQL

cur = mysql.connection.cursor()
cur.execute("SELECT id FROM users")
mysql.connection.commit()
data = cur.fetchall()
return jsonify({"result": data})
</code></pre>
</code></pre>
<p>Right now my code returns {result: [[1]]}. However, I want my result to look like this: {result: {id: 1}}, where id is the attribute (column name) in the SQL table and 1 is the value.</p>
<p>I am wondering if there is an easy way of retrieving the attribute names from the SQL database, or if I have to manually add something like <code>data = {'id': data[0][0]}</code> before the return line.</p>
|
<p>The cursor object has <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-description.html" rel="nofollow noreferrer"><code>description</code></a> property, which gives you information about columns in a result set. It's a list of tuples, the first element being the column name.</p>
<p>So you could do this:</p>
<pre><code>cur.execute("SELECT id FROM users")
column_names = [col[0] for col in cur.description]
data = [dict(zip(column_names, row)) for row in cur.fetchall()]
return jsonify({"result": data})
</code></pre>
<p>Here <code>data</code> is a list of dictionaries with the format <code>{'column_name': value}</code>.</p>
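<p>A hedged alternative, assuming flask_mysqldb: configure the extension to return dictionary rows, so each row already maps column names to values:</p>
<pre><code>app.config['MYSQL_CURSORCLASS'] = 'DictCursor'

cur = mysql.connection.cursor()
cur.execute("SELECT id FROM users")
data = cur.fetchall()  # e.g. [{'id': 1}, {'id': 2}]
return jsonify({"result": data})
</code></pre>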
|
python|flask
| 0 |
918 | 27,731,670 |
Scrapy ImportError: No module named project.settings when using subprocess.Popen
|
<p>I have a Scrapy crawler scraping through sites. On some occasions Scrapy kills itself due to RAM issues. I rewrote the spider so that it can be split up and run for a single site at a time.</p>
<p>After the initial run, I use subprocess.Popen to submit the scrapy crawler again with new start item.</p>
<p>But I am getting error </p>
<pre><code>Traceback (most recent call last):
  File "/home/kumar/envs/ishop/bin/scrapy", line 4, in <module>
    execute()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/cmdline.py", line 109, in execute
    settings = get_project_settings()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/utils/project.py", line 60, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 109, in setmodule
    module = import_module(module)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named shop.settings
</code></pre>
<p>The subprocess cmd is </p>
<p><code>newp = Popen(comm, stderr=filename, stdout=filename, cwd=fp, shell=True)</code></p>
<ul>
<li><p>comm -
<code>source /home/kumar/envs/ishop/bin/activate && cd /home/kumar/projects/usg/shop/spiders/../.. && /home/kumar/envs/ishop/bin/scrapy crawl -a category=laptop -a site=newsite -a start=2 -a numpages=10 -a split=1 'allsitespider'</code></p></li>
<li><p>cwd - <strong>/home/kumar/projects/usg</strong></p></li>
</ul>
<p>I checked sys.path and it is correct <code>['/home/kumar/envs/ishop/bin', '/home/kumar/envs/ishop/lib64/python27.zip', '/home/kumar/envs/ishop/lib64/python2.7', '/home/kumar/envs/ishop/lib64/python2.7/plat-linux2', '/home/kumar/envs/ishop/lib64/python2.7/lib-tk', '/home/kumar/envs/ishop/lib64/python2.7/lib-old', '/home/kumar/envs/ishop/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7', '/usr/lib/python2.7', '/home/kumar/envs/ishop/lib/python2.7/site-packages']</code></p>
<p>But looks like the import statement is using <code>"/usr/lib64/python2.7/importlib/__init__.py"</code> instead of my virtual env.</p>
<p>Where am I wrong? Help please?</p>
|
<p>Looks like the settings are not being loaded properly. One solution would be to build an egg and deploy it in the env before starting the crawler.</p>
<p>Official docs, <a href="http://doc.scrapy.org/en/0.7/topics/scrapyd.html#deploying-your-project" rel="nofollow">Eggify scrapy project</a></p>
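<p>Another hedged workaround, reusing the paths from the question: point the subprocess at the project explicitly through its environment, so the scrapy CLI can import <code>shop.settings</code> regardless of the working directory:</p>
<pre><code>import os

env = os.environ.copy()
env['PYTHONPATH'] = '/home/kumar/projects/usg'   # the directory that contains shop/
env['SCRAPY_SETTINGS_MODULE'] = 'shop.settings'

newp = Popen(comm, stderr=filename, stdout=filename, cwd=fp, shell=True, env=env)
</code></pre>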
|
python|scrapy|popen
| 2 |
919 | 72,284,929 |
Difference in placement of "return"
|
<p>I'm currently learning how to code and I've encountered an issue with the following use of the return function:</p>
<pre><code>unionFind = UnionFind(n)

for A, B in edges:
    if not unionFind.union(A, B):
        return False

return True
</code></pre>
</code></pre>
<p>When I put return True with no indents, I am able to get the result of False. However, if I were to do the same with this:</p>
<pre><code>unionFind = UnionFind(n)

for A, B in edges:
    if not unionFind.union(A, B):
        return False
    return True
</code></pre>
</code></pre>
<p>I instead receive True only. This is the same with the following:</p>
<pre><code>unionFind = UnionFind(n)

for A, B in edges:
    if not unionFind.union(A, B):
        return False
    else:
        return True
</code></pre>
<p>I've checked multiple online sources and I can't seem to understand how the return function behaves in this scenario (is it affected by the for loop or the if condition) and would greatly appreciate some guidance. Thank you.</p>
|
<p>It is not about how the return statement behaves; it is about code scope.
When you put the statement with no indents, the <code>return True</code> is outside the <code>for</code> loop's scope, and it will be reached only after looping through all the <code>A</code> and <code>B</code> couples, and only if the <code>not unionFind.union(A, B)</code> condition never held for any couple.</p>
<p>If you put the <code>return True</code> with the indent, or in the <code>else</code> statement, it will be reached inside the <code>for</code> scope, which means that only the first couple of <code>A</code> and <code>B</code> will be checked.</p>
<p>I hope it makes sense to you.</p>
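<p>A minimal annotated sketch of the two placements, assuming the <code>UnionFind</code> class from the question (the wrapper function names are made up for illustration):</p>
<pre><code>def all_pairs_union(n, edges):
    unionFind = UnionFind(n)
    for A, B in edges:
        if not unionFind.union(A, B):
            return False   # exits on the first failing pair
    return True            # reached only after every pair has been checked

def first_pair_only(n, edges):
    unionFind = UnionFind(n)
    for A, B in edges:
        if not unionFind.union(A, B):
            return False
        return True        # inside the loop: exits after the very first pair
</code></pre>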
|
python|return
| 0 |
920 | 72,180,701 |
One of many recursive calls of a function found the correct result, but it can't "tell" the others. Is there a better fix than this ugly workaround?
|
<p>Recently, I was experimenting with writing a function to find a primitive value anywhere within an arbitrarily deeply nested sequence, and return the path taken to get there (as a list of indices inside each successive nested sequence, in order). I encountered a very unexpected obstacle: the function was finding the result, but not returning it! Instead of the correct output, the function kept returning the output which should only have been produced when attempting to find an item <em>not</em> in the sequence.</p>
<p>By placing <code>print</code> statements at various points in the function, I found that the problem was that after the recursive call which actually found the item returned, others which did not find the item were also returning, and evidently later in time than the one that found it. This meant that the final result was getting reset to the 'fail' value from the 'success' value unless the 'success' value was the last thing to be encountered.</p>
<p>I tried fixing this by putting an extra conditional inside the function to return early in the success case, trying to preempt the additional, unnecessary recursive calls which were causing the incorrect final result. Now, <em>this</em> is where I ran into the root cause of the problem:</p>
<p>There is no way of knowing <em>which</em> recursive call (if any) will find the item beforehand, and once one of them does find it, it has no way of 'communicating' with the others!</p>
<p>The only way I could come up with of avoiding this deeper issue was to completely refactor the function to 'set' a variable outside itself with the 'success' output if and only if the 'success' condition is encountered. The external, global variable starts out set to the 'failed to find item in sequence' value, and is not reset except in the 'success' case. All the other recursive calls just <code>return</code> without doing anything. This seems very ugly and inefficient, but it <strong>does</strong> work.</p>
<p>FIRST ATTEMPT</p>
<pre><code># ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (First Attempt)
# Works on 'result1' but not on 'result2'

# Searches for 'item' in sequence (list or tuple) S, and returns a tuple
# containing the indices (in order of increasing depth) at which the item
# can be found, plus the depth in S at which 'item' was found.
# If the item is *not* found, returns a tuple containing an empty list and -1
def traverse(S, item, indices=[], atDepth=0):
    # If the sequence is empty, return the 'item not found' result
    if not S:
        return ([], -1)
    else:
        # For each element in the sequence (breadth-first)
        for i in range(len(S)):
            # Success condition base case: found the item!
            if S[i] == item:
                return (indices + [i], atDepth)
            # Recursive step (depth-first): enter nested sequence
            # and repeat procedure from beginning
            elif type(S[i]) in (list, tuple):
                return traverse(S[i], item, indices + [i], atDepth + 1)
        # Fail condition base case: searched the entire length
        # and depth of the sequence and didn't find the item, so
        # return the 'item not found' result
        else:
            print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".")
            return ([], -1)

L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)]

result1 = traverse(L, 7)
result2 = traverse(L, 9)

print("-------------------------------------------")
print(result1)
print("-------------------------------------------")
print(result2)
</code></pre>
<p>SECOND ATTEMPT</p>
<pre><code># ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (Second Attempt)
# Does not work on either test case
# Searches for 'item' in sequence (list or tuple) S, and returns a tuple
# containing the indices (in order of increasing depth) at which the item
# can be found, plus the depth in S at which 'item' was found.
# If the item is *not* found, returns a tuple containing an empty list and -1
def traverse(S, item, indices=[], atDepth=0, returnValue=None):
# If the sequence is empty, return the 'item not found' result
if not S:
print("Sequence S is empty.")
return ([], -1)
# --- ATTEMPTED FIX:
# If the item is found before the end of S is reached,
# do not perform additional searches. In addition to being
# inefficient, doing extra steps would cause incorrect false
# negatives for the item being in S.
# --- DOES NOT WORK: the underlying issue is that the multiple recursive
# calls generated at the same time can't communicate with each other,
# so the others don't 'know' if one of them already found the item.
elif returnValue:
print("Inside 'elif' statement!")
return returnValue
else:
# For each element in the sequence (breadth-first)
for i in range(len(S)):
# Success condition base case: found the item!
if S[i] == item:
# Return the depth and index at that depth of the item
print("--- Found item " + str(item) + " at index path " + str(indices) + " in current sequence")
returnValue2 = (indices + [i], atDepth)
print("--- Item " + str(item) + " is at index path " + str(returnValue2) + " in S, SHOULD RETURN")
#return returnValue2 # THIS DIDN'T FIX THE PROBLEM
#break # NEITHER DID THIS
# Recursive step (depth-first): enter nested sequence
# and repeat procedure from beginning
elif type(S[i]) in (list, tuple):
# CANNOT USE 'return' BEFORE RECURSIVE CALL, as it would cause any items
# in the outer sequence which come after the first occurrence of a nested
# sequence to be missed (i.e. the item could exist in S, but if it is
# after the first nested sequence, it won't be found)
traverse(S[i], item, indices + [i], atDepth + 1, returnValue) # CAN'T USE 'returnValue2' HERE (out of scope);
# so parameter can't be updated in 'if' condition
# Fail condition base case: searched the entire length
# and depth of the sequence and didn't find the item, so
# return the 'item not found' result
else:
print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".")
return ([], -1)
L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)]
result1 = traverse(L, 7)
result2 = traverse(L, 9)
print("-------------------------------------------")
print(result1)
print("-------------------------------------------")
print(result2)
</code></pre>
<p>THIRD AND FINAL ATTEMPT -- Working, but not ideal!</p>
<pre><code># ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (Third Attempt)
# This 'kludge' is ** HIDEOUSLY UGLY **, but it works!
# Searches for 'item' in sequence (list or tuple) S, and generates a tuple
# containing the indices (in order of increasing depth) at which the item
# can be found, plus the depth in S at which 'item' was found.
# If the item is *not* found, returns nothing (implicitly None)
# The results of calling the function are obtained via external global variables.
# This 3rd version of 'traverse' is thus actually a void function,
# and relies on altering the global state instead of producing an output.
# ----- WORKAROUND: If the result is found, have the recursive call that found it
# send it to global scope and use this global variable as the final result of calling
# the 'traverse' function.
# Initialize the global variables to the "didn't find the item" result,
# so the result will still be correct if the item actually isn't in the sequence.
globalVars = {'result1': ([], -1), 'result2': ([], -1)}
def traverse(S, item, send_output_to_var, indices=[], atDepth=0):
# If the sequence is empty, return *without* doing anything to the global variable.
# It is already initialized to the "didn't find item" result.
if not S:
return
else:
# For each element in the sequence (breadth-first)
for i in range(len(S)):
# Success condition base case: found the item!
if S[i] == item:
# Set the global variable to the index path of 'item' in 'S'.
globalVars[send_output_to_var] = (indices + [i], atDepth)
# No need to keep on doing unnecessary work!
return
# Recursive step (depth-first): enter nested sequence
# and repeat procedure from beginning
elif type(S[i]) in (list, tuple):
# Don't use 'return' before the recursive call, or it will miss items
# in the outer sequence after a nested sequence is encountered.
traverse(S[i], item, send_output_to_var, indices + [i], atDepth + 1)
# Fail condition base case: searched the entire length
# and depth of the sequence and didn't find the item.
else:
# Return *without* setting the global variable, as it is
# already initialized to the "didn't find item" result.
return
L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)]
traverse(L, 7, 'result1')
traverse(L, 9, 'result2')
print("-------------------------------------------")
print(globalVars['result1'])
print("-------------------------------------------")
print(globalVars['result2'])
</code></pre>
<p>I was wondering if I'm missing something and there is in fact a way of making this work without the use of external variables. The best possible solution would be somehow 'shutting down' all the other recursive calls as soon as one of them returns the success result, but I don't believe this is possible (I'd love to be wrong about this!). Or maybe some kind of 'priority queue' which delays the <code>return</code> of the 'success' case recursive call (if it exists) until <strong>after</strong> all the 'fail' case recursive calls have returned?</p>
<p>I looked at this similar question: <a href="https://stackoverflow.com/questions/59538165/recursively-locate-nested-dictionary-containing-a-target-key-and-value">Recursively locate nested dictionary containing a target key and value</a>
but although the accepted answer here <a href="https://stackoverflow.com/a/59538362/18248018">https://stackoverflow.com/a/59538362/18248018</a> by ggorlen solved OP's problem and even mentions what seems to be this exact issue ("matched result isn't being passed up the call stack correctly"), it is tailored towards performing a specific task, and doesn't offer the insight I'm looking for into the more general case.</p>
|
<p>Your first attempt is almost perfect; the only mistake is that you return the result of searching through the first list/tuple at the current depth, <em>regardless</em> of whether the <code>item</code> was found or not. Instead, you need to check for a positive result, and only return if it is one. That way you keep iterating through the current depth until you either find the <code>item</code> or it is not found at all.</p>
<p>So you need to change:</p>
<pre class="lang-py prettyprint-override"><code>return traverse(S[i], item, indices + [i], atDepth + 1)
</code></pre>
<p>to something like:</p>
<pre class="lang-py prettyprint-override"><code>t = traverse(S[i], item, indices + [i], atDepth + 1)
if t != ([], -1):
return t
</code></pre>
<p>Full code:</p>
<pre class="lang-py prettyprint-override"><code>def traverse(S, item, indices=[], atDepth=0):
# If the sequence is empty, return the 'item not found' result
if not S:
return ([], -1)
else:
# For each element in the sequence (breadth-first)
for i in range(len(S)):
# Success condition base case: found the item!
if S[i] == item:
return (indices + [i], atDepth)
# Recursive step (depth-first): enter nested sequence
# and repeat procedure from beginning
elif type(S[i]) in (list, tuple):
t = traverse(S[i], item, indices + [i], atDepth + 1)
if t != ([], -1):
return t
# Fail condition base case: searched the entire length
# and depth of the sequence and didn't find the item, so
# return the 'item not found' result
else:
print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".")
return ([], -1)
</code></pre>
<p>Output for your two test cases:</p>
<pre><code>>>> traverse(L, 7)
([3, 1, 2, 4], 3)
>>> traverse(L, 9)
We looked everywhere but didn't find 9 in [6, 6.25, 6.5, 6.75, 7].
We looked everywhere but didn't find 9 in (4, 5, [6, 6.25, 6.5, 6.75, 7]).
We looked everywhere but didn't find 9 in [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])].
We looked everywhere but didn't find 9 in [8, ()].
We looked everywhere but didn't find 9 in [[8, ()]].
([5, 0, 0, 0], 3)
</code></pre>
<p><strong>Note</strong> as pointed out by @FreddyMcloughlan, <code>atDepth</code> is simply the length of the returned list minus 1. So you can remove that parameter from the function call and just use:</p>
<pre class="lang-py prettyprint-override"><code>
def traverse(S, item, indices=[]):
# If the sequence is empty, return the 'item not found' result
if not S:
return ([], -1)
else:
# For each element in the sequence (breadth-first)
for i in range(len(S)):
# Success condition base case: found the item!
if S[i] == item:
return (indices + [i], len(indices))
# Recursive step (depth-first): enter nested sequence
# and repeat procedure from beginning
elif type(S[i]) in (list, tuple):
t = traverse(S[i], item, indices + [i])
if t != ([], -1):
return t
# Fail condition base case: searched the entire length
# and depth of the sequence and didn't find the item, so
# return the 'item not found' result
else:
print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".")
return ([], -1)
</code></pre>
|
python|algorithm|recursion
| 4 |
921 | 43,424,886 |
How to make a text file into a list of arrays (array-in-array) and remove spaces/newlines
|
<p>For example I have a txt file:</p>
<pre class="lang-none prettyprint-override"><code>3 2 7 4
1 8 9 3
6 5 4 1
1 0 8 7
</code></pre>
<p>There are 4 numbers on every line and there are 4 lines. Each line ends with a \n (except the last one). The code I have is:</p>
<pre><code>f = input("Insert file name: ")
file = open(f, encoding="UTF-8")
</code></pre>
<p>What I want is the text file to become <code>[[3,2,7,4],[1,8,9,3],[6,5,4,1],[1,0,8,7]]</code>.</p>
<p>I have tried everything; I know the answer is probably really simple, but I just gave up after an hour of attempts. I tried <code>read()</code>, <code>readlines()</code>, <code>split()</code>, <code>splitlines()</code>, <code>strip()</code> and whatever else I could find on the internet. There are so many that I can't even tell the difference between them...</p>
|
<p>Once you have opened the file, use this one-liner with <code>split</code> (as you mentioned) and a nested list comprehension:</p>
<pre><code>with open(f, encoding="UTF-8") as file: # safer way to open the file (and close it automatically on block exit)
result = [[int(x) for x in l.split()] for l in file]
</code></pre>
<ul>
<li>the inner listcomp splits & converts each line to integers (making an array of integers)</li>
<li>the outer listcomp just iterates on the lines of the file</li>
</ul>
<p>note that it will fail if there is anything other than integers in your file.</p>
<p>(as a side note, <code>file</code> is a built-in name in Python 2, but not anymore in Python 3; I usually refrain from shadowing it anyway)</p>
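<p>With the sample file above, <code>result</code> comes out as:</p>
<pre><code>>>> result
[[3, 2, 7, 4], [1, 8, 9, 3], [6, 5, 4, 1], [1, 0, 8, 7]]
</code></pre>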
|
python|arrays|python-3.x|file
| 4 |
922 | 36,885,866 |
Python set decimal precision
|
<p>I'm writing a small program to encode with arithmetic encoding. I need to set a specific precision, but I must be doing something wrong. This is the code:</p>
<pre><code>def encode(string, probabilities):
getcontext().prec = 28
start = 0.0
width = 1.0
for ch in string:
d_start, d_width = probabilities[ch]
start += d_start*width
width *= d_width
return random.uniform(start, start + width)
</code></pre>
<p>As I've read in the Python documentation, getcontext().prec should set the precision I want to work with. After some iterations, d_start and d_width are very small ( ~ e^-20 ) and the variables start and width keep the same value from that moment on.</p>
<p>If further information is needed, please don't hesitate to ask for it.</p>
<p>Thanks in advance</p>
<p>Edit 1: Proper indentation of the code</p>
<p>Edit 2: I've printed d_start after each sum to show what I mean by saying "the variables start and width keep the same value from that moment on."</p>
<p>Here you have the results:</p>
<pre><code>0.0
0.16
0.224
0.224
0.22784
0.22784
0.2280448
0.22812672
0.22812672
0.2281316352
0.2281316352
0.228131897344
0.228132002202
0.228132002202
0.228132008493
0.228132008493
0.228132008829
0.228132008963
0.228132008963
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
0.228132008971
</code></pre>
|
<p>The problem is that <code>getcontext().prec</code> is only used for <code>decimal.Decimal</code> variables... and you define <code>start</code> and <code>width</code> as plain floats.</p>
<p>You should force usage of Decimal, for example this way (assuming a <code>from decimal import *</code>):</p>
<pre><code>def encode(string, probabilities):
getcontext().prec = 28
start = Decimal('0.0')
width = Decimal('1.0')
for ch in string:
d_start, d_width = probabilities[ch]
start += Decimal(d_start)*width
width *= Decimal(d_width)
return random.uniform(float(start), float(start + width))
</code></pre>
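<p>To see why this matters, compare adding a tiny increment to 1.0 as a float and as a Decimal under a 28-digit context (a quick illustrative check):</p>
<pre><code>from decimal import Decimal, getcontext
getcontext().prec = 28
print(1.0 + 1e-20)                        # 1.0 -- the increment is lost in float arithmetic
print(Decimal('1.0') + Decimal('1e-20'))  # 1.00000000000000000001
</code></pre>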
|
python|python-3.x
| 5 |
923 | 48,616,051 |
Web-element is visible and enabled but .click() fails in python selenium with phantomJS
|
<p>I want to click on the <code>Next</code>-button at <code>https://free-proxy-list.net/</code>. The XPATH selector is <code>//*[@id="proxylisttable_next"]/a</code> </p>
<p>I do this with the following piece of code:</p>
<pre><code>element = WebDriverWait(driver, 2, poll_frequency = 0.1).until
(EC.visibility_of_element_located((By.XPATH, '//*[@id="proxylisttable_next"]/a')))
if (element.is_enabled() == True) and (element.is_displayed() == True):
element.click()
print "next button located and clicked" # printed in case of success
</code></pre>
<p>Subsequently, I get all the IPs from the table like this:</p>
<pre><code>IPs = WebDriverWait(driver, 2, poll_frequency = 0.1).until
(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ':nth-child(n) > td:nth-child(1)')))
</code></pre>
<p>Although the CSS selector is the same for all tabs, and although I get a <code>next button located and clicked</code>, the IPs output is the same for both tabs (i.e. it seems like the <code>Next</code>-button was never clicked). <strong>Additionally, there is no Exception thrown.</strong></p>
<p><strong>Therefore, there must be something fundamentally wrong with my approach.</strong></p>
<p><strong>How to click on visible & enabled buttons correctly in phantomJS using python/selenium?</strong></p>
<p>For your understanding, here is the html of the page section I am referring to:</p>
<p><a href="https://i.stack.imgur.com/iyKi1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iyKi1.png" alt="Target HTML"></a></p>
|
<p>As far as I see there could be two possible causes:</p>
<ol>
<li><p>The click was not registered, though this is highly unlikely. You can look at other ways to click, like JavascriptExecutor's click.</p></li>
<li><p>(Most likely) The elements are queried right after the click is performed and before the Page 2 results are loaded. Since the elements from page 1 are still visible, the wait exits immediately with the list of elements from page 1. An ideal way of doing this would be (in pseudocode, as I am not familiar with Python; a rough sketch follows the list):</p>
<p>a. Get the current page number</p>
<p>a. Get the current page number</p>
<p>b. Get all the IPs from the current page</p>
<p>c. Click Next</p>
<p>d. Check if (Current page + 1 ) page has become active (class 'active' is added to the Number 2)</p>
<p>e. Get all the elements from the current page</p></li>
</ol>
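<p>Translating those steps into a rough Python sketch (the "active page" XPath below is an assumption about the site's pagination markup and may need adjusting):</p>
<pre><code>IPs_page1 = [e.text for e in driver.find_elements_by_css_selector('#proxylisttable td:nth-child(1)')]

next_button = WebDriverWait(driver, 2, poll_frequency=0.1).until(
    EC.visibility_of_element_located((By.XPATH, '//*[@id="proxylisttable_next"]/a')))
next_button.click()

# wait until the pager marks page 2 as active before re-reading the table
WebDriverWait(driver, 2, poll_frequency=0.1).until(
    EC.presence_of_element_located(
        (By.XPATH, '//li[contains(@class, "paginate_button") and contains(@class, "active")]/a[text()="2"]')))

IPs_page2 = [e.text for e in driver.find_elements_by_css_selector('#proxylisttable td:nth-child(1)')]
</code></pre>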
|
python|selenium|web-scraping|phantomjs|screen-scraping
| 1 |
924 | 19,894,708 |
Can't Start Carbon - 12.04 - Python Error - ImportError: cannot import name daemonize
|
<p>I am really hoping someone can help me as I have spent at least 15 hours trying to fix this problem. I have been given a task by a potential employer and my solution is to use graphite/carbon/collectd. I am trying to install and run carbon / graphite 0.9.12 but I simply can't get carbon to start. Every time I try to start carbon I end up with the following error. I am using a bash script to install to keep everything consistent.</p>
<p>I don't really know python at all so would appreciate any help you can provide.</p>
<pre><code>/etc/rc0.d/K20carbon-cache -> ../init.d/carbon-cache
/etc/rc1.d/K20carbon-cache -> ../init.d/carbon-cache
/etc/rc6.d/K20carbon-cache -> ../init.d/carbon-cache
/etc/rc2.d/S20carbon-cache -> ../init.d/carbon-cache
/etc/rc3.d/S20carbon-cache -> ../init.d/carbon-cache
/etc/rc4.d/S20carbon-cache -> ../init.d/carbon-cache
/etc/rc5.d/S20carbon-cache -> ../init.d/carbon-cache
Traceback (most recent call last):
  File "/opt/graphite/bin/carbon-cache.py", line 28, in <module>
    from carbon.util import run_twistd_plugin
  File "/opt/graphite/lib/carbon/util.py", line 21, in <module>
    from twisted.scripts._twistd_unix import daemonize
ImportError: cannot import name daemonize
</code></pre>
<p>Thanks</p>
<p>Shane</p>
|
<pre><code>pip install 'Twisted<12.0'
</code></pre>
<p>As you can see in the <a href="https://github.com/graphite-project/carbon/blob/master/requirements.txt">requirements.txt</a>, the newer versions of Twisted do not seem to play well with it.</p>
|
python|bash|caching|macos-carbon
| 46 |
925 | 19,985,818 |
Python 3: Pickling and UnPickling class instances returning "no persistent load" error
|
<p>I am trying to make a program that collects together lots of data about when certain Players in a band are available for busking this Christmas, and I'm struggling to get the pickle function to do what I want... The data is stored in class instances of the class below, <code>Player</code>:</p>
<pre><code>import pickle
class Player():
def __init__(self, name, instrument, availability):
self.Name=name
self.Instrument=instrument
self.Availability=availability
</code></pre>
<p>The list of players, <code>PlayerList</code> is defined as an empty list at first, and I have defined a function, <code>AddPlayer</code> that will initialise a class instance with the player's details stored as attributes...</p>
<pre><code>PlayerList=[]
def AddPlayer(PlayerList, name, instrument, availability):
NewPlayer = Player(name, instrument, availability)
PlayerList.append(NewPlayer)
print("Player "+name+" has been added.\n\n")
</code></pre>
<p>I then have the function that stores the list of players when the user quits the program...</p>
<pre><code>def StartProgram(PlayerList):
while True:
choice=input("Would you like to:\n1 Add a Player?\n2 Quit?\n")
if choice=="1":
## Adds the details of the Player using the above function
AddPlayer(PlayerList, "Test Player", "Instrument", ["1st Dec AM"])
StartProgram(PlayerList)
elif choice=="2":
file=open("BuskingList.txt", "wb")
file=open("BuskingList.txt", "ab")
def AddToList(PlayerList):
print("PlayerList: "+str(PlayerList))
HalfPlayerList=PlayerList[:5]
## For some reason, pickle doesn't like me trying to dump a list with more than
## 5 values in it, any reason for that?
for Player in HalfPlayerList:
print("Player: "+str(Player))
PlayerList.remove(Player)
## Each player removed from original list so it's only added once.
print("HalfPlayerList: "+str(HalfPlayerList))
pickle.dump(HalfPlayerList, file)
if len(PlayerList) !=0:
AddToList(PlayerList)
## Recursive function call while there are still players not dumped
AddToList(PlayerList)
file.close()
quit()
else:
print("Enter the number 1, 2, or 3.\n")
StartProgram(PlayerList)
</code></pre>
<p>And last the function run at the start of the program to load all the player's information...</p>
<pre><code>def Start():
file=open("BuskingList.txt", "rb")
print("File contains: "+str(file.readlines()))
PlayerList=[]
CheckForPlayers=file.read()
if CheckForPlayers!="":
file=open("BuskingList.txt", "rb")
ListOfLists=[]
for line in file:
ToAppend=pickle.load(file)
ListOfLists.append(ToAppend)
for ListOfPlayers in ListOfLists:
for Player in ListOfPlayers:
PlayerList.append(Player)
StartProgram(PlayerList)
print("When entering dates, enter in the form 'XXth Month AM/PM'\n")
Start()
</code></pre>
<p>When the program is first run (provided <code>BuskingList.txt</code> exists), the program runs fine: adding a name works, and pickling and dumping it on quitting apparently works. However, when the program is restarted, it fails to read the stored data with the error below...</p>
<pre><code>File contains: [b'\x80\x03]q\x00c__main__\n', b'Player\n', b'q\x01)\x81q\x02}q\x03(X\x04\x00\x00\x00Nameq\x04X\x0b\x00\x00\x00Test Playerq\x05X\n', b'\x00\x00\x00Instrumentq\x06h\x06X\x0c\x00\x00\x00Availabilityq\x07]q\x08X\n', b'\x00\x00\x001st Dec AMq\tauba.']
Traceback (most recent call last):
File "I:/Busking/Problem.py", line 63, in <module>
Start()
File "I:/Busking/Problem.py", line 54, in Start
ToAppend=pickle.load(file)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
</code></pre>
<p>I have done a bit of research and found that this persistent id malarkey shouldn't be an issue, so why has it come up here? Also, why the five value limit on lists when pickling? Any help would be appreciated.</p>
|
<p>You first read the list with <code>.readlines()</code>:</p>
<pre><code>print("File contains: "+str(file.readlines()))
</code></pre>
<p>then try to read it again:</p>
<pre><code>CheckForPlayers=file.read()
</code></pre>
<p>This won't work; the file pointer is now at the end of the file. Rewind or reopen the file:</p>
<pre><code>file.seek(0) # rewind to start
</code></pre>
<p>Note that you don't need to check for the file contents here; let <code>pickle</code> do that for you.</p>
<p>Next you read the file line by line:</p>
<pre><code>for line in file:
ToAppend=pickle.load(file)
</code></pre>
<p>This doesn't work; the pickle format is <em>binary</em>, not line oriented, and you are reading by using iteration, then <em>reading again</em> by passing in the file object.</p>
<p>Leave reading the file to the <code>pickle</code> module altogether:</p>
<pre><code>with open("BuskingList.txt", "rb") as infile:
ToAppend=pickle.load(file)
</code></pre>
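<p>And since your writer dumps the list in several chunks, you can keep loading until the end of the file is reached:</p>
<pre><code>players = []
with open("BuskingList.txt", "rb") as infile:
    while True:
        try:
            players.extend(pickle.load(infile))
        except EOFError:
            break
</code></pre>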
<p>You also mention in your code:</p>
<pre><code>## For some reason, pickle doesn't like me trying to dump a list with more than
## 5 values in it, any reason for that?
</code></pre>
<p>Pickle has <em>no problems</em> with any list size, there is no reason to break up your players list into chunks of five. You didn't state what problem you encountered, but the number of items in the list cannot have been the cause:</p>
<pre><code>>>> import pickle
>>> with open('/tmp/test.pickle', 'wb') as testfile:
... pickle.dump(range(1000), testfile)
...
>>> with open('/tmp/test.pickle', 'rb') as testfile:
... print len(pickle.load(testfile))
...
1000
</code></pre>
<p>This stored and re-loaded a list of 1000 integers.</p>
|
python|persistence|pickle
| 3 |
926 | 20,177,086 |
How to delete unwanted quotation marks in dictionary
|
<p>I have this file:</p>
<pre><code>shorts: cat, dog, fox
longs: supercalifragilisticexpialidocious
mosts:dog, fox
count: 13
avglen: 5.6923076923076925
cat 3
dog 4
fox 4
frogger 1
supercalifragilisticexpialidocious 1
</code></pre>
<p>I want to convert this into a dictionary with the keys shorts, longs, mosts, count, and avglen, and the values being what's after the colons. For the last part, that would be a dictionary within the dictionary.</p>
<p>I have this code:</p>
<pre><code>def read_report(filename):
list1 = []
d = {}
file_name = open(filename)
for line in file_name:
list1.append(line[:-1])
d = dict(zip(list1[::2], list1[1::2]))
file_name.close()
return d
</code></pre>
<p>and the result is:</p>
<pre><code>{'mosts: dog, fox': 'count: 13', 'shorts: cat, dog, fox': 'longs: supercalifragilisticexpialidocious', 'cat 3': 'dog 4', 'fox 4': 'frogger 1', 'avglen: 5.6923076923076925': ''}
</code></pre>
<p>How do I get rid of the unwanted colons and change the placement of the quotation marks so that it looks like a valid dictionary?</p>
|
<p>Try out JSON; the <code>json</code> module ships with the standard library. Your file would look like this:</p>
<pre><code>'{"shorts": ["cat", "dog", "fox"], "longs": "supercalifragilisticexpialidocious", "mosts": ["dog", "fox"], "count": 13, "avglen": "5.6923076923076925", "cat": 3, "dog": 4, "fox": 4, "frogger": 1, "supercalifragilisticexpialidocious": 1}'
</code></pre>
<p>And you python script will be like this.</p>
<pre><code>import json
f = open('my_file.txt','r')
my_dictionary = json.loads(f.read())
f.close()
print my_dictionary
</code></pre>
<p>The output:</p>
<pre><code>{u'count': 13, u'shorts': [u'cat', u'dog', u'fox'], u'longs': u'supercalifragilisticexpialidocious', u'mosts': [u'dog', u'fox'], u'supercalifragilisticexpialidocious': 1, u'fox': 4, u'dog': 4, u'cat': 3, u'avglen': u'5.6923076923076925', u'frogger': 1}
</code></pre>
<p><a href="http://json.org/" rel="nofollow">JSON!</a> is so cool!</p>
|
python|dictionary|python-3.3
| 0 |
927 | 4,263,421 |
Django modeling problem, need a subset of foreign key field
|
<p>I intend to create an app for categories which will have separate category sets (vocabularies) for pages, gallery, product types etc. So there will need to be two models, vocabulary and category.</p>
<p>The categories/models.py code might be something like this:</p>
<pre><code>class Vocabulary(models.Model):
title = models.CharField()
class Category(models.Model):
title = models.CharField()
vocabulary = models.ForeignKey(Vocabulary)
</code></pre>
<p>From my pages, blogs, gallery, etc. apps I will need a ForeignKey field to categories:</p>
<pre><code>class Page(models.Model):
title = models.CharField()
content = models.TextField()
category = models.ForeignKey('categories.Category')
</code></pre>
<p>This will of course list all available categories in the admin app. If I have a product, I want only the product categories to be available. How can I filter the available categories to a specific vocabulary?</p>
<p>I'm learning Django and not really sure where to begin. Maybe I have the whole model wrong? If there are any apps which already do it please let me know.</p>
|
<p>Filtering of selection like this is done in the form <a href="http://docs.djangoproject.com/en/dev/ref/forms/fields/#django.forms.ModelChoiceField.queryset" rel="noreferrer">using a queryset</a>, or in the admin interface with <a href="http://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.limit_choices_to" rel="noreferrer"><code>limit_choices_to</code></a>.</p>
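<p>For example, a minimal sketch with <code>limit_choices_to</code> (assuming the product vocabulary is titled <code>'product'</code>):</p>
<pre><code>class Product(models.Model):
    title = models.CharField(max_length=100)
    # only offer categories whose vocabulary is the 'product' one
    category = models.ForeignKey(
        'categories.Category',
        limit_choices_to={'vocabulary__title': 'product'},
    )
</code></pre>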
|
python|django|django-models
| 7 |
928 | 69,532,571 |
Sum using the last two elements of a dictionary's tupled key
|
<p>I have lists</p>
<pre><code>A = [(i,j,k,l,m)]
B = [(l,m,k)]
</code></pre>
<p>and dictionaries</p>
<pre><code>C = {(i,j,k,l,m): val}
D = {(l,m,k): other_val}
</code></pre>
<p>I would like to create a dictionary of <code>E</code> such that</p>
<pre><code>E = {(i,j,k): C[(i,j,k,l,m)]*D[(l,m,k)]}
</code></pre>
<p>Assume that all indexing convention matches in the lists and dictionaries. I have the below non-Pythonic, extremely slow solution. Is there any Pythonic way to quickly do this for very large <code>A</code> sizes, e.g., 5 million rows?</p>
<pre><code>E = {}
for i,j,k,l,m in A:
E[i,j,k] = sum(
C[i,j,k,l,m] * D[l2,m2,k2]
for l2,m2,k2 in B if l2==l and m2==m and k2==k)
</code></pre>
<p>Below is the code to generate a sample dataset that is near the actual size trying to be dealt with.</p>
<pre><code>import numpy as np
np.random.seed(1)
Irange = range(50)
Jrange = range(10)
Krange = range(80)
Lrange = range(8)
Mrange = range(18)
A = [
(i,j,k,l,m)
for i in Irange
for j in Jrange
for k in Krange
for l in Lrange
for m in Mrange]
B = [
(l,m,k)
for k in Krange
for l in Lrange
for m in Mrange]
C = {key: np.random.uniform(1,10) for key in A}
D = {key: np.random.uniform(0,1) for key in B}
E = {}
for i,j,k,l,m in A:
E[i,j,k] = sum(
C[i,j,k,l,m] * D[l2,m2,k2]
for l2,m2,k2 in B if l2==l and m2==m and k2==k)
</code></pre>
|
<pre><code>from collections import defaultdict
b = set(B) # O(#B)
E = defaultdict(float)
for i,j,k,l,m in A: # O(#A)
if (l, m, k) in b:
E[i,j,k] += C[i,j,k,l,m] * D[l, m, k]
</code></pre>
<p>This approach has <code>O(#A + #B)</code> complexity.
The naive implementation, leaving correctness issues aside, is <code>O(#A * #B)</code>.</p>
|
python|sum
| 1 |
929 | 51,244,981 |
Neo Smart Contract boa.blockchain module
|
<p>I want to build a smart contract with neo-python, and in my smart contract I want to have this module:</p>
<pre><code>from boa.blockchain.vm.Neo.Storage import GetContext, Get, Put, Delete
</code></pre>
<p>but I get<br>
<code>No module named boa.blockchain</code><br>
How can I get this module linked?</p>
|
<p>You forgot to put <code>from</code>; also, did you install it with <code>pip install neo-boa</code>?</p>
<pre><code>from boa.blockchain.vm.Neo.Storage import GetContext, Get, Put, Delete
</code></pre>
|
python
| 0 |
930 | 17,313,558 |
Form for multiple models
|
<p>Suppose I have two models:</p>
<pre><code>class Topic(models.Model):
title = models.CharField()
# other stuff
class Post(models.Model):
topic = models.ForeignKey(Topic)
body = models.TextField()
# other stuff
</code></pre>
<p>And I want to create a form that contains two fields: <code>Topic.title</code> and <code>Post.body</code>. Of course, I can create the following form:</p>
<pre><code>class TopicForm(Form):
title = forms.CharField()
body = forms.TextField()
# and so on
</code></pre>
<p>But I don't want to duplicate code, since I already have <code>title</code> and <code>body</code> in models. I'm looking for something like this:</p>
<pre><code>class TopicForm(MagicForm):
class Meta:
models = (Topic, Post)
fields = {
Topic: ('title', ),
Post: ('body', )
}
# and so on
</code></pre>
<p>Also, I want to use it in class based views. I mean, I would like to write view as:</p>
<pre><code>class TopicCreate(CreateView):
form_class = TopicForm
# ...
def form_valid(self, form):
# some things before creating objects
</code></pre>
<p>As suggested in comments, I could use two forms. But I don't see any simple way to use two forms in my <code>TopicCreate</code> view; I would have to reimplement all the methods involved in getting the form (at least).</p>
<h2>So, my question is:</h2>
<p>Is there something already implemented in Django for my requirements? Or is there a better (simpler) way?</p>
<p><strong>or</strong></p>
<p>Do you know a simple way of using two forms in a class-based view? If so, tell me; it could solve my issue too.</p>
|
<p>You can create two separate forms having the required fields for each model, then render both forms in the template inside one <code><form></code> element. Both forms get rendered and submitted together, and you can then process them separately in the view.</p>
<pre><code>class TopicForm(ModelForm):
class Meta:
model = Topic
fields = ("title", ..)
class PostForm(ModelForm):
class Meta:
model = Post
fields = ("body", ..)
</code></pre>
<p>In view:</p>
<pre><code>form1 = TopicForm()
form2 = PostForm()
</code></pre>
<p>In template:</p>
<pre><code><form ...>
{{ form1 }}
{{ form2 }}
</form>
</code></pre>
<p>You can easily use form.save() and all other functions, without doing it all yourself.</p>
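<p>A sketch of processing both forms on submit (exactly how the post links back to the topic depends on your models):</p>
<pre><code>if request.method == 'POST':
    form1 = TopicForm(request.POST)
    form2 = PostForm(request.POST)
    if form1.is_valid() and form2.is_valid():
        topic = form1.save()
        post = form2.save(commit=False)  # don't hit the DB until the topic FK is set
        post.topic = topic
        post.save()
</code></pre>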
|
python|django
| 2 |
931 | 64,425,892 |
Can't execute Python with VBA to create a file using xlwings
|
<p>I would like to run a Python script from Excel. The Python script has the task to create a file. As a help I used the quickstart "Call Python from Excel" as you can see here:</p>
<p><a href="https://docs.xlwings.org/en/stable/quickstart.html" rel="nofollow noreferrer">https://docs.xlwings.org/en/stable/quickstart.html</a></p>
<p>However, I don't want Python to write "hello world" to Excel, but instead create a file:</p>
<pre><code>import numpy as np
import xlwings as xw
def world():
wb = xw.Book.caller()
wb.sheets[0].range('A1').value = 'Hello World!'
</code></pre>
<p>Instead of the code shown above I replaced the lines with:</p>
<pre><code>import numpy as np
import xlwings as xw
def world():
f = open("myfile.txt", "w")
</code></pre>
<p>However, no file is created. I have not found an answer to this problem in my research, so I am asking for help here.</p>
|
<p>You need to save the workbook for anything to be created</p>
<p><a href="https://docs.xlwings.org/en/stable/api.html#xlwings.Book.save" rel="nofollow noreferrer">https://docs.xlwings.org/en/stable/api.html#xlwings.Book.save</a></p>
<p>In your second example, you're not specifying a full path; the working directory may be different from what you expect. Instead of "myFile.txt", try "c:\full\path\to\folder\myFile.txt" (obviously, change the path to suit your environment).</p>
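<p>A minimal sketch combining both points (the path below is a placeholder; substitute your own):</p>
<pre><code>import xlwings as xw

def world():
    wb = xw.Book.caller()
    # write to an absolute path: Excel's working directory may not be what you expect
    with open(r"C:\full\path\to\folder\myfile.txt", "w") as f:
        f.write("Hello World!")
    wb.save()  # persist any workbook changes
</code></pre>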
|
python|excel|vba|xlwings
| 1 |
932 | 69,967,552 |
Creating subplot for multiple columns using loop
|
<p>I have a DataFrame whose list of columns looks like this:</p>
<pre><code> df.columns
['dr1', 'r1', 'dr9', 'r9', 'dr21', 'r21', 'dr26', 'r26',
'dr32', 'r32', 'dr37', 'r37', 'dr49', 'r49', 'dr52', 'r52',
'dr105', 'r105', 'dr118', 'r118']
DF=
dr1 r1 dr9 r9 dr21 r21 dr26 r26 dr32 r32 dr37 r37 dr49 r49 dr52 r52 dr105 r105 dr118 r118
1.370077e-03 79.492551 1.885239e-03 91.721574 1.799699e-03 75.545364 7.945449e-04 56.431429 1.608503e-02 81.127766 2.142295e-03 68.240745 1.079295e-03 81.316295 6.329889e-03 75.426940 3.396244e-03 73.181260 3.068635e-03 74.735711
1 6.434528e-04 75.018284 1.793570e-03 89.052090 2.371287e-03 72.617954 8.695473e-04 52.827016 1.322238e-02 77.128761 2.404007e-03 66.440905 1.290229e-03 77.116401 3.996065e-03 71.487773 3.867551e-03 69.449838 2.715589e-03 70.733729
2 4.515897e-04 71.005130 2.193572e-03 86.580311 2.479454e-03 69.926743 8.851577e-04 49.512332 1.333745e-02 73.533564 2.027752e-03 64.363379 1.173162e-03 72.735975 2.546558e-03 67.581776 3.918930e-03 65.663389 2.571297e-03 66.733496
3 4.306166e-04 67.021659 2.013258e-03 83.587133 2.331958e-03 67.171358 9.085228e-04 46.485528 1.455461e-02 69.784763 1.783866e-03 62.075661 1.109990e-03 68.369232 2.612790e-03 63.726161 3.752804e-03 61.925863 2.718291e-03 63.163573
4 4.106338e-04 63.078692 2.162847e-03 80.224832 2.552906e-03 64.217342 9.786517e-04 43.847398 1.431416e-02 65.971753 1.601472e-03 59.616326 1.174008e-03 64.488470 2.752785e-03 60.275407 4.022395e-03 58.637315 2.613834e-03 59.649589
</code></pre>
<p>I want to create a subplot with multiple graphs, one for each <strong><code>r</code> vs <code>dr</code></strong> pair.
I am able to do this by individually calling each subplot axis, but I want to automate this using a loop. What can be done to automate this?</p>
|
<p>You can use <code>pd.wide_to_long</code> to turn your data into long form, then plot with pandas' groupby:</p>
<pre><code>long_data = pd.wide_to_long(df.reset_index(), ['r','dr'], i='index', j='type').reset_index()
long_data.groupby('type').plot(x='r',y='dr')
</code></pre>
<p>Or with seaborn's FacetGrid:</p>
<pre><code>g = sns.FacetGrid(data=long_data, col='type', col_wrap=4)
g.map(sns.lineplot, 'r','dr')
</code></pre>
<p>Output (with seaborn):</p>
<p><a href="https://i.stack.imgur.com/vjAAv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vjAAv.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|seaborn
| 1 |
933 | 69,811,310 |
Actual data from KFold split indices
|
<p>Suppose I have the following data:</p>
<pre><code>y = np.ones(10)
y[-5:] = 0
X = pd.DataFrame({'a':np.random.randint(10,20, size=(10)),
'b':np.random.randint(80,90, size=(10))})
X
a b
0 11 82
1 19 82
2 15 80
3 15 86
4 14 82
5 18 87
6 13 83
7 12 83
8 10 82
9 18 87
</code></pre>
<p>Splitting it into 5 folds gives the following indices:</p>
<pre><code>kf = KFold()
data = list(kf.split(X,y))
data
[(array([2, 3, 4, 5, 6, 7, 8, 9]), array([0, 1])),
(array([0, 1, 4, 5, 6, 7, 8, 9]), array([2, 3])),
(array([0, 1, 2, 3, 6, 7, 8, 9]), array([4, 5])),
(array([0, 1, 2, 3, 4, 5, 8, 9]), array([6, 7])),
(array([0, 1, 2, 3, 4, 5, 6, 7]), array([8, 9]))]
</code></pre>
<p>But I want to further prepare <code>data</code> such that it is organised to contain the actual values in the format:</p>
<pre><code>data =
[(train1,trainlabel1,test1,testlabel1),
(train2,trainlabel2,test2,testlabel2),
..,
(train5,trainlabel5,test5,testlabel5)]
</code></pre>
<p>Expected Output (from the given MWE):</p>
<pre><code>[array([
(array([[15,80],[15,86],[14,82],[18,87],[13,83],[12,83],[10,82],[18,87]]), array([[1],[1],[1],[0],[0],[0],[0],[0])]), #fold1 train/label
(array([[11,82],[19,82]]), array([[1],[1]])), #fold1 test/label
(array([[11,82],[19,82],[14,82],[18,87],[13,83],[12,83],[10,82],[18,87]]),array([[1],[1],[1],[0],[0],[0],[0],[0]])), #fold2 train/label
(array([[15,80],[15,86]]),array([[1],[1]])) #fold2 test/label
....
])]
</code></pre>
|
<p>As you understand, <code>KFold().split(X)</code> yields the selected row indices for each fold.
To select pandas DataFrame rows from a list of indices, the easiest way is the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">loc method</a> (with the default RangeIndex here, label and positional indexing coincide).</p>
<pre><code>for train_idx, test_idx in KFold(n_splits=2).split(X):
    x_train = X.loc[train_idx]
    x_test = X.loc[test_idx]
    y_train = y[train_idx]  # y is a NumPy array, so index it directly
    y_test = y[test_idx]
</code></pre>
<p>You can then collect these subsets into lists.</p>
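<p>For example, a sketch that builds the exact tuple layout you asked for:</p>
<pre><code>data = []
for train_idx, test_idx in KFold().split(X, y):
    data.append((X.loc[train_idx].to_numpy(), y[train_idx],
                 X.loc[test_idx].to_numpy(), y[test_idx]))
</code></pre>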
|
python|machine-learning|scikit-learn|cross-validation|k-fold
| 1 |
934 | 69,875,073 |
Confusion Matrix ValueError: Classification metrics can't handle a mix of binary and continuous targets
|
<p>I'm currently trying to make a confusion matrix for my neural network model, but keep getting this error:</p>
<pre><code>ValueError: Classification metrics can't handle a mix of binary and continuous targets.
</code></pre>
<p>I have a peptide dataset that I'm using with 100 positive and 100 negative examples, and the labels are 1s and 0s. I've converted each peptide into a Word2Vec embedding that was put into a ML model and trained.</p>
<p>This is my code:</p>
<pre><code>pos = "/content/drive/MyDrive/pepfun/Training_format_pos (1).txt"
neg = "/content/drive/MyDrive/pepfun/Training_format_neg.txt"
# pos sequences extract into list
f = open(pos, 'r')
file_contents = f.read()
data = file_contents
f.close()
newdatapos = data.splitlines()
print(newdatapos)
# neg sequences extract into list
f2 = open(neg, 'r')
file_contents2 = f2.read()
data2 = file_contents2
f2.close()
newdataneg = data2.splitlines()
print(newdataneg)
!pip install rdkit-pypi
import rdkit
from rdkit import Chem
# set up embeddings
import nltk
from gensim.models import Word2Vec
import multiprocessing
EMB_DIM = 4
# embeddings pos
w2vpos = Word2Vec([newdatapos], size=EMB_DIM, min_count=1)
sequez = "VVYPWTQRF"
w2vpos[sequez].shape
words=list(w2vpos.wv.vocab)
vectors = []
for word in words:
vectors.append(w2vpos[word].tolist())
print(len(vectors))
print(vectors[1])
data = np.array(vectors)
# embeddings neg
w2vneg = Word2Vec([newdataneg], size=EMB_DIM, min_count=1)
sequen = "GIGKFLHSAGKFGKAFLGEVMKS"
w2vneg[sequen].shape
wordsneg = list(w2vneg.wv.vocab)
vectorsneg = []
for word in wordsneg:
vectorsneg.append(w2vneg[word].tolist())
allvectors = vectorsneg + vectors
print(len(allvectors))
arrayvectors = np.array(allvectors)
labels = []
for i in range (100):
labels.append(1)
print(labels)
for i in range (100):
labels.append(0)
print(labels)
print(len(labels))
import seaborn as sns
!pip install keras
import keras
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.utils import shuffle
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.preprocessing import StandardScaler
!pip install tensorflow==2.7.0
import tensorflow as tf
from keras import metrics
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Conv3D, Flatten, Dropout
import sklearn
a = sklearn.utils.shuffle(arrayvectors, random_state=1)
b = sklearn.utils.shuffle(labels, random_state=1)
dfa = pd.DataFrame(a, columns=None)
dfb = pd.DataFrame(b, columns=None)
X = dfa.iloc[:]
y = dfb.iloc[:]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=300)
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
y_train = y_train.astype(np.float32)
y_test = y_test.astype(np.float32)
# train data & test data tensor conversion
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
train_data = trainData(torch.FloatTensor(X_train),
torch.FloatTensor(y_train))
## test data
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_data, batch_size=1)
# make model
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(4,)))
model.add(Dropout(0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(16, input_dim=1, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(12,activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1,activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',optimizer='RMSprop', metrics=['accuracy','AUC'])
history = model.fit(X_train, y_train, epochs=2000,batch_size=64, validation_data = (X_test, y_test), validation_batch_size=64)
from sklearn.metrics import confusion_matrix, classification_report
print(y_pred.round)
print(classification_report(y_test,y_pred))
</code></pre>
<p>I've tried printing my y_pred value to see the problem. This is what I get:</p>
<pre><code>[[6.0671896e-01]
[9.9999785e-01]
[1.6576621e-01]
[9.9999899e-01]
[5.6016445e-04]
[2.4935007e-02]
[4.4204036e-11]
[2.8884350e-11]
[6.3217885e-05]
[4.7181606e-02]
[9.9742711e-03]
[1.0780278e-01]
[7.0868194e-01]
[2.0298421e-02]
[9.5819527e-01]
[1.4784497e-01]
[1.7605269e-01]
[9.9643111e-01]
[4.7657710e-01]
[9.9991858e-01]
[4.5830309e-03]
[6.5091753e-01]
[3.8710403e-01]
[2.4756461e-02]
[1.1719930e-01]
[6.4381957e-03]
[7.1598434e-01]
[1.5749395e-02]
[6.8473631e-01]
[9.5499575e-01]
[2.2420317e-02]
[9.9999177e-01]
[6.9633877e-01]
[9.2811453e-01]
[1.8373668e-01]
[2.9298562e-07]
[1.1250973e-03]
[4.3785056e-01]
[9.6832716e-01]
[8.6754566e-01]]
</code></pre>
<p>It's not 1s and 0s. I believe there's a problem there as well, but I'm not sure.</p>
|
<p>The model outputs the predicted probabilities, you need to transform them back to class labels before calculating the classification metrics, see below.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
tf.random.set_seed(0)
# generate the data
X, y = make_classification(n_classes=2, n_features=4, n_informative=4, n_redundant=0, random_state=42)
# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# build the model
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(4,)))
model.add(Dropout(0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(16, input_dim=1, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(12, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
# fit the model
model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy', 'AUC'])
model.fit(X_train, y_train, epochs=100, batch_size=64, validation_data=(X_test, y_test), validation_batch_size=64, verbose=0)
# extract the predicted probabilities
p_pred = model.predict(X_test)
p_pred = p_pred.flatten()
print(p_pred.round(2))
# [1. 0.01 0.91 0.87 0.06 0.95 0.24 0.58 0.78 ...
# extract the predicted class labels
y_pred = np.where(p_pred > 0.5, 1, 0)
print(y_pred)
# [1 0 1 1 0 1 0 1 1 0 0 0 0 1 1 0 1 0 0 0 0 ...
print(confusion_matrix(y_test, y_pred))
# [[13 1]
# [ 2 9]]
print(classification_report(y_test, y_pred))
# precision recall f1-score support
# 0 0.87 0.93 0.90 14
# 1 0.90 0.82 0.86 11
# accuracy 0.88 25
# macro avg 0.88 0.87 0.88 25
# weighted avg 0.88 0.88 0.88 25
</code></pre>
|
python|tensorflow|machine-learning|keras|neural-network
| 4 |
935 | 73,024,448 |
How to plot groups of stacked bars from a dataframe
|
<p>I am creating the plots this way:</p>
<pre><code>fig, ax = plt.subplots(figsize=(8,6))
df = pd.concat((data.assign(source=name) for data, name in zip([train_ss_freq_df, train_ss_freq_df], ['train', 'blind'])))
df[df['source']=='train'].plot(kind='bar', stacked=True, color=sns.color_palette("crest", 3),ax=ax)
df[df['source']=='blind'].plot(kind='bar', stacked=True, color=sns.color_palette("crest", 3),ax=ax)
</code></pre>
<p>the datasets are these:</p>
<pre><code>,H,E,C
A,0.039065342828923426,0.014685241981597963,0.026069553464677445
R,0.023860269272755627,0.011930134636377814,0.017492332484275095
N,0.012605915683318605,0.007381608358891719,0.02365233664292769
D,0.018817902999428187,0.007589540988719654,0.03285335551281385
C,0.004028694702916255,0.002729115766491657,0.004470551541300619
Q,0.01886988615688517,0.007849456776004574,0.013437646202630348
E,0.032983313406456306,0.010578572542496232,0.020975204033893018
G,0.013827519883557727,0.013437646202630348,0.04777252170296824
H,0.008733170452773302,0.006004054686281644,0.011150387274523055
I,0.020793262982793576,0.021209128242449447,0.012969797785517493
L,0.04384779331496595,0.02666735977543276,0.024899932421895307
K,0.02352237874928523,0.010084732546654884,0.021832926131933255
M,0.007797473618547591,0.004678484171128554,0.005354265218069345
F,0.01465925040286947,0.013749545147372252,0.011150387274523055
P,0.009201018869886156,0.0047304673285855385,0.03246348183188647
S,0.017856214586473983,0.013229713572802412,0.028278837656599262
T,0.015387014607267246,0.01614077039039351,0.023990227166398086
W,0.006419919945937516,0.004600509434943078,0.004418568383843634
Y,0.01268389041950408,0.012943806206789,0.011202370431980038
V,0.02139106929354889,0.032151582887144564,0.016842543016062795
,H,E,C
A,0.04221830985915493,0.014446680080482898,0.02841549295774648
R,0.02193158953722334,0.011006036217303823,0.019577464788732395
N,0.010206237424547284,0.005719315895372234,0.023712273641851106
D,0.016267605633802817,0.007223340040241449,0.03400905432595573
C,0.0037927565392354124,0.0037374245472837023,0.005960764587525151
Q,0.017751509054325956,0.0064285714285714285,0.014305835010060362
E,0.03344567404426559,0.010326961770623743,0.026398390342052314
G,0.01176056338028169,0.011066398390342052,0.05057344064386318
H,0.00698692152917505,0.005060362173038229,0.01062374245472837
I,0.02187122736418511,0.02117706237424547,0.01375251509054326
L,0.043470824949698186,0.022062374245472836,0.028118712273641853
K,0.024170020120724348,0.011036217303822938,0.024622736418511065
M,0.009476861167002013,0.004592555331991952,0.007339034205231389
F,0.01391851106639839,0.011956740442655935,0.013083501006036218
P,0.007027162977867203,0.004964788732394366,0.033591549295774646
S,0.015774647887323943,0.01119718309859155,0.030653923541247484
T,0.015020120724346076,0.014507042253521127,0.024899396378269618
W,0.004964788732394366,0.003958752515090543,0.004678068410462776
Y,0.011156941649899397,0.010789738430583501,0.011091549295774649
V,0.023586519114688127,0.02992957746478873,0.018606639839034204
</code></pre>
<p>The problem is that they're overlapping. How can I make them side by side but still in the same figure? I concatenated them in the hope of finding a way to separate them by "source", but I haven't found the right argument to do so.</p>
<p><a href="https://i.stack.imgur.com/sVDEq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sVDEq.png" alt="enter image description here" /></a></p>
|
<ul>
<li>If the plot must be grouped and clustered, there is this <a href="https://stackoverflow.com/a/22845857/7758804">answer</a>. However, it's easier to set a multi-index and plot individual bars.</li>
<li>Plot directly with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>pandas.DataFrame.plot</code></a> and use <code>kind='bar'</code> or <code>kind='barh'</code>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># given the two dataframes as train and blind
# combine them into a single dataframe
df = pd.concat((data.assign(source=name) for data, name in zip([train, blind], ['train', 'blind'])))
# reset, set, and sort the index
dfp = df.reset_index().set_index(['index', 'source']).sort_index()
# plot the bars with kind='bar' or kind='barh'
ax = dfp.plot(kind='barh', width=0.75, stacked=True, color=sns.color_palette("crest", 3), figsize=(9, 15))
</code></pre>
<p><a href="https://i.stack.imgur.com/CXhlb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CXhlb.png" alt="enter image description here" /></a></p>
<h2>DataFrame Views</h2>
<h3><code>df.head()</code></h3>
<pre><code> H E C source
A 0.039065 0.014685 0.026070 train
R 0.023860 0.011930 0.017492 train
N 0.012606 0.007382 0.023652 train
D 0.018818 0.007590 0.032853 train
C 0.004029 0.002729 0.004471 train
</code></pre>
<h3><code>dfp.head(6)</code></h3>
<pre><code> H E C
index source
A blind 0.042218 0.014447 0.028415
train 0.039065 0.014685 0.026070
C blind 0.003793 0.003737 0.005961
train 0.004029 0.002729 0.004471
D blind 0.016268 0.007223 0.034009
train 0.018818 0.007590 0.032853
</code></pre>
|
python|pandas|matplotlib|data-visualization|bar-chart
| 3 |
936 | 50,072,787 |
Add extra footer to sphinx_rtd_theme
|
<p>I am attempting to add an extra footer to the <code>sphinx_rtd_theme</code> in Sphinx, running on my local machine. I have created a file <code>footer.html</code> and stored it in <code>source/_templates</code>. The file contents are given here:</p>
<pre><code>{% extends '!footer.html' %}
{% block extrafooter %}
{{super}}
<!--
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
-->
{% endblock %}
</code></pre>
<p>I expected this markup to display after the default <code>sphinx_rtd_theme</code> footer, but only the default footer was displayed.</p>
<p>I am running Windows 10 with Fall Creators Update and Python 3.6.5 - 64 bit.</p>
<p>Can anyone see what I am doing wrong?</p>
|
<p>The problem was that the HTML was embedded in an HTML comment. Removing the comment made it start working. The comment was there because I took the fragment verbatim from another post.</p>
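<p>For reference, the working template is simply the same markup with the comment markers removed (note also that in Jinja <code>super</code> is callable, so <code>{{ super() }}</code> is the usual form):</p>
<pre><code>{% extends '!footer.html' %}
{% block extrafooter %}
{{ super() }}
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
{% endblock %}
</code></pre>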
|
python-sphinx
| 1 |
937 | 64,805,225 |
SWIG wrapper adding new_ prefix to the class name and unable to get class method
|
<p>I have written a wrapper for a simple C++ void function that takes 2 string parameters.
For some reason, when I try to create an object of my class, I get a name error: "name 'my class name' is not defined".
When I tried creating the object with the new_ prefix before the class name, the object was created, but I still can't call my function with similar name tricks.
Below is the code of the interface and classes:</p>
<pre><code> %module python_module
%{
#include "python_wrapper.h"
%}
%import std_string.i
%include "python_module.h"
</code></pre>
<p><strong>python_wrapper.h</strong></p>
<pre><code>#ifndef PYTHON_WRAPPER_H
#define PYTHON_WRAPPER_H
#include <string>
class Separator
{
public:
Separator();
~Separator();
public:
void separate(const std::string& Path, const std::string& destFolder);
};
#endif
</code></pre>
<p><strong>python_wrapper.cpp</strong></p>
<pre><code>#include "python_wrapper.h"
#include <separator.h>
Separator::Separator()
{
}
Separator::~Separator()
{
}
void Separator::separate(const std::string& Path, const std::string& destFolder)
{
::SomeSeparate(assemblyPath, destFolder);
}
</code></pre>
<p>I build everything with CMake. When I run the following in a terminal, I get a NameError:</p>
<pre><code>python3
>>> from _MyModule import *
>>> separator = Separator()
NameError: name 'Separator' is not defined
</code></pre>
<p>Can somebody help with this?</p>
<p>At the same time, when I type</p>
<pre><code>>>> separator = new_Separator()
</code></pre>
<p>the object is created, but even after this I am unable to call my function.</p>
|
<p>I used pybind11 instead of SWIG. For my task it was a much easier solution than SWIG.</p>
|
python|c++|swig
| 0 |
938 | 68,645,645 |
Sorting selected multiple columns based on list in Pandas
|
<p>The objective is to sort multiple given columns based on multiple lists in pandas, as below. Thanks to <a href="https://stackoverflow.com/a/68645254/6446053">sammywemmy</a> for the hint.</p>
<p>However, the suggestion produced a column of <code>NaN</code> for the other columns that were not being considered.</p>
<pre><code>import pandas as pd
sort_a=['a','d','e']
sort_b=['s1','s3','s6']
sort_c=['t1','t2','t3']
df=pd.DataFrame(zip([1,2,3,4,5,6,7],['a', 'e', 'd','a','a','d','e'], ['s3', 's1', 's6','s6','s3','s3','s1'], ['t3', 't2', 't1','t2','t2','t3','t3']),columns=['var',"a", "b", "c"])
categories = {col : pd.CategoricalDtype(categories=cat, ordered=True)
for col, cat
in zip(df.columns, [sort_a, sort_b, sort_c])}
df_ouput=df.astype(categories).sort_values([*df.columns])
var a b c
2 NaN NaN NaN t1
1 NaN NaN NaN t2
3 NaN NaN NaN t2
4 NaN NaN NaN t2
0 NaN NaN NaN t3
5 NaN NaN NaN t3
6 NaN NaN NaN t3
</code></pre>
<p>Whereas, the expected output</p>
<pre><code>var a b c
5 a s3 t2
1 a s3 t3
4 a s6 t2
6 d s3 t3
3 d s6 t1
2 e s1 t2
7 e s1 t3
</code></pre>
|
<p>Instead of passing <code>df.columns</code> pass the column names that you want to include:</p>
<pre><code>categories = {col : pd.CategoricalDtype(categories=cat, ordered=True)
for col, cat
in zip(['a','b','c'], [sort_a, sort_b, sort_c])}
</code></pre>
<p>Finally, for the <code>by</code> parameter of <code>sort_values()</code>, instead of unpacking <code>df.columns</code>, pass the keys of <code>categories</code> and unpack those:</p>
<pre><code>df=df.astype(categories).sort_values([*categories.keys()])
</code></pre>
<p>output of <code>df</code>:</p>
<pre><code> var a b c
4 5 a s3 t2
0 1 a s3 t3
3 4 a s6 t2
5 6 d s3 t3
2 3 d s6 t1
1 2 e s1 t2
6 7 e s1 t3
</code></pre>
|
python|pandas|sorting
| 2 |
939 | 68,706,719 |
How to write an array to a file and then call that file and add more to the array?
|
<p>So as the title suggests, I'm trying to write an array to a file, but then I need to recall that array, append more to it, write it back to the same file, and then repeat this process over and over again.</p>
<p>The code I have so far is:</p>
<pre><code>c = open(r"board.txt", "r")
current_position = []
if filesize > 4:
current_position = [c.read()]
print(current_position)
stockfish.set_position(current_position)
else:
stockfish.set_fen_position("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
#There is a lot more code here that appends stuff to the array but I don't want to #add anything that will be irrelevant to the problem
with open('board.txt', 'w') as filehandle:
for listitem in current_position:
filehandle.write('"%s", ' % listitem)
z = open(r"board.txt", "r")
print(z.read())
My array end up looking like this when I read the file
""d2d4", "d7d5", ", "a2a4", "e2e4",
</code></pre>
<p>All my code is on this <a href="https://replit.com/join/brzyakllho-harduino14" rel="nofollow noreferrer">replit</a> if anyone needs more info</p>
|
<p><strong>A few ways to do this:</strong></p>
<p>First, use newline as a delimiter (simple, not the most space efficient):</p>
<pre class="lang-py prettyprint-override"><code># write
my_array = ['d2d4', 'd7d5']
with open('board.txt', 'w+') as f:
f.writelines([i + '\n' for i in my_array])
# read
with open('board.txt') as f:
my_array = f.read().splitlines()
</code></pre>
<p>If your character strings all have the same length, you don't need a delimiter:</p>
<pre class="lang-py prettyprint-override"><code># write
my_array = ['d2d4', 'd7d5'] # must all be length 4 strs
with open('board.txt', 'w+') as f:
f.writelines(my_array)
# read file, splitting string into groups of 4 characters
with open('board.txt') as f:
in_str = f.read()
my_array = [in_str[i:i+4] for i in range(0, len(in_str), 4)]
</code></pre>
<p>Finally, consider <code>pickle</code>, which allows writing/reading Python objects to/from binary files:</p>
<pre class="lang-py prettyprint-override"><code>import pickle
# write
my_array = ['d2d4', 'd7d5']
with open('board.board', 'wb+') as f: # custom file extension, can be anything
pickle.dump(my_array, f)
# read
with open('board.board', 'rb') as f:
my_array = pickle.load(f)
</code></pre>
|
python
| 5 |
940 | 71,764,809 |
Dropdown Element with option text values in ul list - Element is not currently visible and may not be manipulated
|
<p><a href="https://i.stack.imgur.com/SAbCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SAbCn.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/36p4O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/36p4O.png" alt="enter image description here" /></a></p>
<p>I want to select the option with the value "DATE" using Selenium. The thing about it is that the option text is set in another part which is the ul list.
I tried some of the <a href="https://stackoverflow.com/questions/59534665/element-not-interactable-element-is-not-currently-visible-and-may-not-be-manipu">solutions</a> that I found but none of them worked.
Here's my code:</p>
<pre><code> WebDriverWait(browser, 15).until(EC.presence_of_element_located((By.XPATH, '//select[@data-test="FilterSorts"]')))
dropdown_trigger = browser.find_element_by_xpath('//select[@data-test="FilterSorts"]')
browser.execute_script("arguments[0].click();", dropdown_trigger)
dropdown_option = WebDriverWait(browser, 15).until(EC.presence_of_element_located((By.XPATH, "//option[@value='DATE']")))
dropdown_option.click()
</code></pre>
<p>I also tried this and still got the same error:</p>
<pre><code>WebDriverWait(browser, 15).until(EC.presence_of_element_located((By.XPATH, '//select[@data-test="FilterSorts"]')))
dropdown_trigger = browser.find_element_by_xpath('//select[@data-test="FilterSorts"]')
select = Select(dropdown_trigger)
select.select_by_value('DATE').click()
</code></pre>
<p>error:</p>
<blockquote>
<p>dropdown_trigger =
browser.find_element_by_xpath('//select[@data-test="FilterSorts"]')
Message: element not interactable: Element is not currently visible
and may not be manipulated</p>
</blockquote>
|
<p>Here's the solution that worked for me:</p>
<pre><code>#Identify the element
element = browser.find_element_by_xpath("//span[contains(text(),'Most Recent')]")
#Apply Javascript of click action
browser.execute_script("arguments[0].click();", element)
</code></pre>
|
python|selenium-webdriver
| 0 |
941 | 60,426,647 |
RSA Encryption using the user-defined keys in python
|
<p>I am trying to encrypt a small amount of data using RSA algorithm using python. The problem is I have the public and private RSA key. Both are stored in .pem and .ppk respectively. I am not able to find any help in google which will help me encrypt it using my keys. All the code and examples I saw generates its own keys. Is there a way where I can use my own keys to encrypt and decrypt the data?</p>
|
<p>You can use the <code>rsa</code> module:</p>
<pre><code>import rsa

# parse the PEM key into an rsa.PublicKey object; rsa.encrypt() needs the
# object, not the raw file contents (use load_pkcs1_openssl_pem instead if
# the key is in OpenSSL/SubjectPublicKeyInfo format)
with open('public.pem', 'rb') as key_pub_file:
    key_pub = rsa.PublicKey.load_pkcs1(key_pub_file.read())

message = "hello".encode('utf8')
enc_msg = rsa.encrypt(message, key_pub)
print(enc_msg)
</code></pre>
|
python|python-3.x|encryption|rsa|public-key-encryption
| 1 |
942 | 70,085,285 |
Web scraping 2020 data from IPL website. Getting an IndexError
|
<p>Can someone please help me proceed with this web scraping of IPL 2020 data?
My code is as follows:</p>
<pre><code>import json
import pandas as pd
from bs4 import BeautifulSoup
from urllib.request import urlopen
scrape_url="https://www.iplt20.com/stats/2020/most-runs"
page_connect = urlopen(scrape_url)
page_connect
page_html=BeautifulSoup(page_connect, 'html.parser')
page_html.findAll(name='div class="js-table"')
json_raw_string= page_html.findAll(name='div class="js-table"')[0].string
json_raw_string
</code></pre>
<p>I'm getting an error: <code>IndexError: list index out of range</code>.</p>
|
<p>This means there is no element that it finds from <code>page_html.findAll(name='div class="js-table"')</code></p>
<p>Also, when you say <code>.string</code>, are you looking for the text of the element? If so, use <code>.text</code></p>
<p>If you are looking for a div with a certain class, I would recommend using the class_ attribute of <code>findAll</code>, e.g. <code>page_html.findAll('div', class_='js-table')</code></p>
<p>If you only need the first element from the list of found elements, you can replace <code>findAll</code> with <code>find</code>, so you would have</p>
<pre class="lang-py prettyprint-override"><code>json_raw_string = page_html.find('div', class_='js-table').text
</code></pre>
<hr />
<p>To make this a dataframe, you can use <code>pandas.read_html</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>table_holder = page_html.find('div', class_='js-table') # the div element that has the table
table_html = str(table_holder.find('table')) # find the table, turn it into a raw string of the html
# get your dataframe
df = pd.read_html(table_html)[0] # read_html returns a list of dataframes, in this case it only has one dataframe in it
</code></pre>
|
python|web-scraping
| 0 |
943 | 66,274,589 |
How to deploy Java Lambda jar using Python CDK code?
|
<p>Can anyone help me with the syntax to deploy a Java Lambda using Python CDK code? Below is the Python CDK code snippet I am using to deploy a Lambda written in Python.</p>
<pre><code>handler = lmb.Function(self, 'Handler',
runtime=lmb.Runtime.PYTHON_3_7,
handler='handler.handler',
code=lmb.Code.from_asset(path.join(this_dir, 'lambda')))
</code></pre>
<p>And below is the Java CDK code snippet my colleague using:</p>
<pre><code>Function javafunc = new Function(this, CommonFunctions.getPropValues("HANDLER"),
FunctionProps.builder()
.runtime(Runtime.JAVA_8)
.handler(CommonFunctions.getPropValues("Java_LAMBDA"))
.code(Code.fromAsset(tmpBinDir + "/"+CommonFunctions.getPropValues("JAR_FILE_NAME")))
.timeout(Duration.seconds(300))
.memorySize(512)
.functionName(CommonFunctions.getPropValues("FUNCTION_NAME"))
.build());
</code></pre>
<p>I don't know Java, and I have a requirement to deploy a Java-compiled Lambda jar using the Python CDK.</p>
|
<p>We need these imports</p>
<pre><code>from aws_cdk import (
core,
aws_lambda,
)
</code></pre>
<p><code>code</code>: jar file path
<code>handler</code>: mainClassName::methodName</p>
<pre><code> aws_lambda.Function(
self, "MyLambda",
code=aws_lambda.Code.from_asset(path='javaProjects/path/to/jar/my-lambda-1.0.jar'),
handler='com.test.handler.StreamLambdaHandler::handleRequest',
runtime=aws_lambda.Runtime.JAVA_11,
environment={
            'ENV_APPLICATION_NAME': 'anyValue'
},
memory_size=1024,
timeout=core.Duration.seconds(30)
)
</code></pre>
|
python|python-3.x|amazon-web-services|aws-lambda|aws-cdk
| 2 |
944 | 66,005,217 |
groupby.mean function dividing by pre-group count rather than post-group count
|
<p>So I have the following dataset of trade flows that track imports, exports, by reporting country and partner countries. After I remove some unwanted columns, I edit my data frame such that trade flows between country A and country B is showing. I'm left with something like this:</p>
<p><a href="https://i.stack.imgur.com/dwzx1.png" rel="nofollow noreferrer">My data frame image</a></p>
<p>My issue is that I want to be able to take the average of imports and exports for every partner country ('partner_code') per year, but when I run the following:</p>
<blockquote>
<p>x = df[(df.location_code.isin(["IRN"])) &
df.partner_code.isin(['TCD'])]</p>
<p>grouped = x.groupby(['partner_code']).mean()</p>
</blockquote>
<p>I end up getting an average of all exports divided by all instances where there is a 'product_id' (so a much higher number) rather than averaging imports or exports by total for all the years.</p>
<p>Taking the average of the following 5 export values gives an incorrect average:</p>
<p><a href="https://i.stack.imgur.com/44Asm.png" rel="nofollow noreferrer">5 export values</a>
<a href="https://i.stack.imgur.com/CJnU8.png" rel="nofollow noreferrer">Wrong average</a></p>
|
<p>In pandas, we can <code>groupby</code> multiple columns, based on my understanding you want to group by partner, country and year.</p>
<p>The following line would work:</p>
<pre class="lang-py prettyprint-override"><code>df = df.groupby(['partner_code', 'location_code', 'year'])['import_value', 'export_value'].mean()
</code></pre>
<p>Please note that the resulting dataframe has a <code>MultiIndex</code>.
For reference, the official documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code> documentation</a></p>
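<p>If you would rather have flat columns than a <code>MultiIndex</code>, a small optional follow-up (an addition beyond the original answer):</p>
<pre class="lang-py prettyprint-override"><code>df = df.reset_index()  # turn the MultiIndex levels back into ordinary columns
</code></pre>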
|
python|pandas|group-by|pandas-groupby
| 0 |
945 | 59,096,804 |
I am unable to import psycopg2 from Jupyter notebook or Jupyter Lab on Mac. I have a clean install of Catalina
|
<p>A similar report was posted, but the suggested solutions do not work. </p>
<pre><code>---- from Jupyter ----
Import psycopg2
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-7d2da0a5d979> in <module>
----> 1 import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
</code></pre>
<p>If I run <code>python3</code> from Mac terminal and then <code>import psycopg2</code> that works.
If I run <code>python3</code> from the JupyterLab terminal this does not work. I get the following error after running <code>import psycopg2</code>:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import ( # noqa
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psycopg2/_psycopg.cpython-38-darwin.so, 2): Library not loaded: @rpath/libssl.1.1.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psycopg2/_psycopg.cpython-38-darwin.so
Reason: image not found
</code></pre>
<p><code>echo $PATH</code> from Mac terminal is</p>
<pre><code>/Users/greg/opt/miniconda3/bin:/Users/greg/opt/anaconda3/condabin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
</code></pre>
<p><code>echo $PATH</code> from Jupyterlab terminal is</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/greg/opt/miniconda3/bin:/Users/greg/opt/anaconda3/condabin
</code></pre>
<p>These look the same, just in a different order.</p>
<p>I have tried <code>pip install psycopg2</code> both with and without the binary option. Either way, it says already satisfied.
I have also tried</p>
<pre><code>conda install -c anaconda psycopg2
</code></pre>
<p>Also tried installing postgresql both from postgresql.org and via <code>brew install psycopg2</code>. Both worked, but no luck with JupyterLab.</p>
|
<p>Try installing the package using <code>pip install psycopg2-binary</code>. This should work.
For more information, visit <a href="https://www.psycopg.org/docs/install.html" rel="nofollow noreferrer">https://www.psycopg.org/docs/install.html</a>.</p>
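<p>A quick way to see which interpreter your notebook is actually using (a diagnostic suggestion, not from the original answer; it helps with path mismatches like the one in the question):</p>
<pre><code>import sys
print(sys.executable)  # the Python binary Jupyter is running
print(sys.path)        # where it searches for modules such as psycopg2
</code></pre>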
|
python|python-3.x|jupyter-notebook|psycopg2|jupyter-lab
| 2 |
946 | 63,118,891 |
Calculating a formula with pandas objects
|
<p>I am facing the following problem. I have a DataFrame:</p>
<pre><code>my_df = pd.DataFrame({'a.b': [1, 2, 3], 'c': [5, 6, 7], 'd': [8, 9, 10]})
</code></pre>
<p>I am reading the following string from a config data</p>
<pre><code>some_text = "-a.b + c - d"
</code></pre>
<p>Is there a way to evaluate the formula in the <code>some_text</code> variable using the Series (columns) of <code>my_df</code> as arguments?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.eval.html" rel="nofollow noreferrer"><code>pd.eval</code></a>, but you need to change the column names first (a dot is not valid inside an identifier):</p>
<pre><code>my_df.columns = my_df.columns.str.replace('.', '_', regex=False)  # regex=False: treat '.' literally
my_df.eval(some_text.replace('.','_'))
0 -4
1 -5
2 -6
dtype: int64
</code></pre>
|
python|pandas
| 3 |
947 | 58,627,409 |
How to convert TimeStamp to string to save figures?
|
<pre><code>df.index
DatetimeIndex(['2019-01-25 17:00:00', '2019-01-25 17:01:00',
'2019-01-25 17:08:00', '2019-01-25 17:09:00',
...
'2019-02-15 07:44:00', '2019-02-15 07:45:00',
'2019-02-15 07:52:00', '2019-02-15 07:53:00'],
</code></pre>
<p>I want to save figures in a loop using plt.savefig, and I'm trying to name each figure after the index for the hour the plot covers.
I'm looping over every hour:</p>
<pre><code>for hour, i in df.groupby('index_hour'):
plt.savefig(hour+'.png',dpi=300,bbox_inches='tight')
</code></pre>
<p>TypeError: unsupported operand type(s) for +: 'Timestamp' and 'str'</p>
<pre><code>hour
Timestamp('2019-01-25 17:00:00')
</code></pre>
<p>Hour is the final name of the .png file that I'm trying to get. Thanks.</p>
|
<p>Use the <code>strftime</code> method of datetime objects:</p>
<pre class="lang-py prettyprint-override"><code>timestamps = df.index.strftime("%Y-%m-%d %H:%M:%S")
</code></pre>
<p>If you just want the hour, then you can simply get that (as an int) via an attribute of the datetime index:</p>
<pre class="lang-py prettyprint-override"><code>hours = df.index.hour
</code></pre>
<p>Refer to <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">this</a> for information on how to use the datetime string formatting.</p>
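<p>Putting it together with the loop from the question (a sketch; assuming <code>hour</code> is a <code>Timestamp</code> as in your groupby, and using dashes because <code>:</code> is not allowed in Windows file names):</p>
<pre class="lang-py prettyprint-override"><code>for hour, group in df.groupby('index_hour'):
    fname = hour.strftime("%Y-%m-%d_%H-%M-%S") + ".png"  # Timestamp -> filename-safe string
    plt.savefig(fname, dpi=300, bbox_inches='tight')
</code></pre>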
|
python-3.x|pandas|matplotlib|timestamp|save
| 0 |
948 | 30,662,819 |
How to only add item from list if item in different list is not 0 keeping same index?
|
<p>I am working with Excel (using xlsxwriter and openpyxl) and I am trying to populate the cells of one column from one list, based on whether the cell in the adjacent column has a 0 in it. If said adjacent column cell has a 0 in it, the code should ignore whatever number is in the second list and write a 0 in the new cell instead.</p>
<p>To simplify my code, here is what I am working with, just less numbers. I have two lists:</p>
<pre><code>full[2, 5, 0, 1, 3, 0, 3, 4, 5, 0]
regr[3, 6, 4, 5, 1, 5, 7, 8, 9, 3]
</code></pre>
<p>List <code>full</code> is displayed in Excel's column <code>B</code>, one list item per cell. What I need to do is display the items from list <code>regr</code> in the next column <code>C</code>, replacing the current numbers with <code>0</code> if a <code>0</code> is found in the adjacent cell in column <code>B</code>.</p>
<p>So it should ideally look something like this:</p>
<p><a href="http://i.stack.imgur.com/fJ0HG.png" rel="nofollow">http://i.stack.imgur.com/fJ0HG.png</a></p>
<p>What I am finding difficult is having a loop that keeps track of the index of each list, and a counter that adds each time (for column insertion purposes - <code>B1</code>, <code>B2</code>, <code>B3</code>, <code>B4</code> etc.)</p>
<p>I have code that populates column <code>B</code> with the <code>regr</code> list but it doesn't do the <code>0</code> check and all my attempts to store and use the <code>index</code> have failed.</p>
<pre><code>for x in range(0, 50):
worksheet1.write("B" + str(x), str(regr[x]))
</code></pre>
<p>Any help would be greatly appreciated. Thanks!</p>
|
<p>You should probably use <code>zip</code> to loop over the two lists in parallel. Also, don't try to create cell coordinates programmatically using the "A1" syntax; both openpyxl and xlsxwriter let you use numeric row and column indices for this kind of thing, as sketched below.</p>
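<p>A minimal sketch of both ideas combined, assuming <code>full</code> and <code>regr</code> are the two lists from the question and <code>worksheet1</code> is an xlsxwriter worksheet (column B is index 1, column C is index 2):</p>
<pre class="lang-py prettyprint-override"><code>for row, (f, r) in enumerate(zip(full, regr)):
    worksheet1.write(row, 1, f)                   # column B: the 'full' value
    worksheet1.write(row, 2, 0 if f == 0 else r)  # column C: 0 whenever B holds 0
</code></pre>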
|
python|excel|list|openpyxl|xlsxwriter
| 1 |
949 | 72,174,758 |
Getting a disk I/O error when running an optimization with OpenMDAO
|
<p>I'm getting this error when running an optimization inside a local repository, using the ScipyOptimizeDriver and recording the result file. It worked flawlessly; however, all of a sudden I got this error:</p>
<pre><code> File "C:\Users\xxx\Anaconda3\envs\xxx\lib\site-packages\openmdao\recorders\sqlite_recorder.py", line 234, in _initialize_database
c.execute("CREATE TABLE global_iterations(id INTEGER PRIMARY KEY, "
sqlite3.OperationalError: disk I/O error
</code></pre>
<p>I didn't change anything in my code, hence I don't know where it came from. I also can't find any solution online. Here is a generalized version of my code:</p>
<pre><code>import openmdao.api as om
prob = om.Problem()
indep_var_comp = om.IndepVarComp()
indep_var_comp.add_output(...)
prob.model.add_subsystem("prob_vars", indep_var_comp, promotes=["*"])
prob.model.connect(...)
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options["tol"] = 1e-6
prob.driver.options["optimizer"] = 'SLSQP'
prob.driver.recording_options["includes"] = ["*"]
rec_data_file = 'opt_output.db'
rec = om.SqliteRecorder(rec_data_file)
prob.driver.add_recorder(rec)
prob.add_recorder(rec)
prob.model.add_objective(....)
prob.model.add_design_var(...)
prob.model.add_constraint(...)
prob.setup()
prob.run_driver()
</code></pre>
|
<p>I'd suggest you post this as a bug report to the <a href="https://github.com/OpenMDAO/OpenMDAO/issues" rel="nofollow noreferrer">OpenMDAO issue tracker</a>, since it seems like potentially a bug, and you'd likely need to give details about the OpenMDAO version, Python version, etc.</p>
<p>Broadly this kind of sql error implies that something "bad" happened to the database file. Maybe the operating system locked it for some reason, or the file system got messed up.</p>
<p>Are you running on a local machine, or on a "magic" file system on some kind of HPC cluster? If you're using a parallel file system on a cluster, then the SQLite recorder is probably going to be grumpy in general (those file systems are REALLY not designed for that kind of disk I/O).</p>
<p>Ultimately, we just need more information to give you a more complete answer and this is probably best reported as a bug with some more details about your environment.</p>
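<p>As a stopgap while you investigate (this assumes the problem is where the file lives, not OpenMDAO itself), you can point the recorder at local scratch space instead of the repository:</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile

# write the SQLite recording to a local temp directory instead of the repo/shared filesystem
rec_data_file = os.path.join(tempfile.gettempdir(), 'opt_output.db')
rec = om.SqliteRecorder(rec_data_file)
</code></pre>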
|
python|openmdao
| 1 |
950 | 45,175,029 |
conditional regex in python string matching
|
<pre><code>pattern=re.compile(r'item (?(1)2|3)')
n=re.findall(pattern, 'item 2 item 3')
</code></pre>
<p>The output is:</p>
<pre><code>['item 2', 'item 3']
</code></pre>
<p>But I want it to be just <code>item 2</code> when it's present in the string, or <code>item 3</code> when <code>item 2</code> is not present.
An explanation of my error along with the solution would be helpful.</p>
|
<p>Is this what you are looking for?</p>
<pre><code>import re
itemlist = ["pickles", "item 2", "item3"]
text = "item 3 item 2"
for item in itemlist:
if re.search(item, text):
print (item)
break
</code></pre>
<p>Iterating over the ordered list, it breaks out as soon as a match is found.</p>
<pre><code>item 2
</code></pre>
|
python|regex|conditional
| 0 |
951 | 42,447,373 |
OpenCV-Python installation - CMake error missing vtkRenderingOpenGL [Ubuntu 16.04]
|
<p>I'm trying to install <code>Python-OpenCV</code> in Python3 on my LTS system following <a href="http://www.pyimagesearch.com/2016/10/24/ubuntu-16-04-how-to-install-opencv/" rel="nofollow noreferrer">this</a> guide.</p>
<p>When I try to run CMake:</p>
<pre><code>cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules -D PYTHON_EXECUTABLE=~/.virtualenvs/cv/bin/python -D BUILD_EXAMPLES=ON ..
</code></pre>
<p>I get the following error:</p>
<pre><code>-- Detected version of GNU GCC: 54 (504)
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found suitable version "1.2.8", minimum required is "1.2.3")
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.8")
-- Checking for module 'gstreamer-base-1.0'
-- No package 'gstreamer-base-1.0' found
-- Checking for module 'gstreamer-video-1.0'
-- No package 'gstreamer-video-1.0' found
-- Checking for module 'gstreamer-app-1.0'
-- No package 'gstreamer-app-1.0' found
-- Checking for module 'gstreamer-riff-1.0'
-- No package 'gstreamer-riff-1.0' found
-- Checking for module 'gstreamer-pbutils-1.0'
-- No package 'gstreamer-pbutils-1.0' found
-- Checking for module 'gstreamer-base-0.10'
-- No package 'gstreamer-base-0.10' found
-- Checking for module 'gstreamer-video-0.10'
-- No package 'gstreamer-video-0.10' found
-- Checking for module 'gstreamer-app-0.10'
-- No package 'gstreamer-app-0.10' found
-- Checking for module 'gstreamer-riff-0.10'
-- No package 'gstreamer-riff-0.10' found
-- Checking for module 'gstreamer-pbutils-0.10'
-- No package 'gstreamer-pbutils-0.10' found
-- Checking for module 'libdc1394-2'
-- No package 'libdc1394-2' found
-- Checking for module 'libdc1394'
-- No package 'libdc1394' found
-- Looking for linux/videodev.h
-- Looking for linux/videodev.h - not found
-- Looking for linux/videodev2.h
-- Looking for linux/videodev2.h - found
-- Looking for sys/videoio.h
-- Looking for sys/videoio.h - not found
-- Checking for module 'libavresample'
-- No package 'libavresample' found
-- Looking for libavformat/avformat.h
-- Looking for libavformat/avformat.h - found
-- Looking for ffmpeg/avformat.h
-- Looking for ffmpeg/avformat.h - not found
-- Checking for module 'libgphoto2'
-- No package 'libgphoto2' found
-- found IPP (ICV version): 9.0.1 [9.0.1]
-- at: /home/matt/opencv-3.1.0/3rdparty/ippicv/unpack/ippicv_lnx
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- To enable PlantUML support, set PLANTUML_JAR environment variable or pass -DPLANTUML_JAR=<filepath> option to cmake
-- Found PythonInterp: /home/matt/.virtualenvs/cv/bin/python (found suitable version "3.5.2", minimum required is "2.7")
-- Found PythonInterp: /home/matt/.virtualenvs/cv/bin/python3 (found suitable version "3.5.2", minimum required is "3.4")
-- Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
-- Could NOT find Matlab (missing: MATLAB_MEX_SCRIPT MATLAB_INCLUDE_DIRS MATLAB_ROOT_DIR MATLAB_LIBRARIES MATLAB_LIBRARY_DIRS MATLAB_MEXEXT MATLAB_ARCH MATLAB_BIN)
CMake Error at /usr/local/lib/cmake/vtk-7.1/vtkModuleAPI.cmake:120 (message):
Requested modules not available:
vtkRenderingOpenGL
Call Stack (most recent call first):
/usr/local/lib/cmake/vtk-7.1/VTKConfig.cmake:89 (vtk_module_config)
cmake/OpenCVDetectVTK.cmake:6 (find_package)
CMakeLists.txt:597 (include)
-- Configuring incomplete, errors occurred!
See also "/home/matt/opencv-3.1.0/build/CMakeFiles/CMakeOutput.log".
See also "/home/matt/opencv-3.1.0/build/CMakeFiles/CMakeError.log".
</code></pre>
<p>I've followed the guide completely up until this point so I'm not sure what is wrong.</p>
<p>I've looked around to try and fix the missing package <code>vtkRenderingOpenGL</code> and followed <a href="https://stackoverflow.com/questions/31170869/cmake-could-not-find-opengl-in-ubuntu">this</a> post.</p>
<p>I'm sure that I've installed this in the past and so unsurprisingly I get:</p>
<pre><code>sudo apt -y install freeglut3-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
freeglut3-dev is already the newest version (2.8.1-2).
0 to upgrade, 0 to newly install, 0 to remove and 1 not to upgrade.
</code></pre>
<p><strong>Any and all help is much appreciated!</strong></p>
|
<p>Taking advice given from the OpenCV community forum (<a href="http://answers.opencv.org/question/130153/python-opencv-installation-cmake-error-missing-vtkrenderingopengl-ubuntu-1604/" rel="nofollow noreferrer">post</a>).</p>
<p>Add this to CMake options:</p>
<pre><code>-D WITH_VTK=OFF -D BUILD_opencv_viz=OFF
</code></pre>
<p><em>"opencv_viz is the only opencv module, that depends on vtk, and you cannot use it from python"</em></p>
<p>Therefore it is fine to just disable it altogether, and after doing so CMake completes.</p>
|
python|python-3.x|opencv|cmake|ubuntu-16.04
| 1 |
952 | 57,259,551 |
ZeroMQ: set LINGER=0 does not work as expected
|
<p>I'm using Python bindings for ZeroMQ. My <code>libzmq</code> version is 4.2.5 and my <code>pyzmq</code> version is 17.1.2.</p>
<p>I'm trying to let a "producer" transmit a large amount of data to a "consumer". The code of the "producer" is :</p>
<pre><code># producer.py
import zmq
import time
import os
ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)
sock.bind('tcp://*:8000')
x = os.urandom(1000000000) # large amount of data, requires much time to transmit it
sock.send(x)
print('Done')
sock.setsockopt(zmq.LINGER, 0)
sock.close()
t1 = time.time()
ctx.term() # I expect this should return immediately
print(time.time() - t1)
</code></pre>
<p>And the code of "consumer" is :</p>
<pre><code># consumer.py
import zmq
ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.setsockopt_string(zmq.SUBSCRIBE, '')
sock.connect('tcp://localhost:8000')
data = sock.recv()
</code></pre>
<p>I expect the <code>ctx.term()</code> in <code>producer.py</code> to return immediately, since the <code>LINGER</code> of the socket is already set to zero. But when I run this code, the <code>ctx.term()</code> does not return immediately as expected. Instead, it takes tens of seconds for that function to return, and all of the large data has been successfully received by <code>consumer.py</code>.</p>
<p>I am trying to figure out why, and I hope someone can help me out a little.</p>
|
<blockquote>
<p><strong>Q</strong> : "<em>ZeroMQ: set <code>LINGER=0</code> does not work as expected</em>"</p>
</blockquote>
<h2>ZeroMQ set <code>LINGER=0</code> does IMHO work as expected <em>( as documented )</em>:</h2>
<p>ZeroMQ documentation is clear in stating that all the <strong><code>zmq_setsockopt()</code></strong> calls ( wrapped, for a use in python, into a method <code>.setsockopt()</code> ) take effect, i.e. modify <code>Socket</code>-instances' behaviour.</p>
<p>Older versions of ZeroMQ documentation ( being in use in my Projects with ZeroMQ-wrapped distributed systems since v2.x ) were more explicit on this:</p>
<blockquote>
<p><strong>Caution</strong>: All options, with the exception of <code>ZMQ_SUBSCRIBE</code>, <code>ZMQ_UNSUBSCRIBE</code>, <strong><code>ZMQ_LINGER</code></strong>, <code>ZMQ_ROUTER_MANDATORY</code> and <code>ZMQ_XPUB_VERBOSE</code> only take effect for subsequent socket <code>bind</code>/<code>connects</code>.</p>
</blockquote>
<p>Having this in mind, the <code>sock.setsockopt( LINGER, 0 )</code> indeed instructs the <code>Socket()</code>-instance <strong><code>sock</code></strong> not to make a respective <code><aContextINSTANCE>.term()</code> wait until every still-enqueued message has been propagated to the queue head, processed into the wireline protocol, and either successfully sent or accepted as lost on its network-wide way to the neighbouring counterparty.</p>
<p>Yet, this does not say, what is going to be done with the data-in-transit, that the <code>Context()</code>-instance is already moving down the wire.</p>
<p>Having <a href="https://stackoverflow.com/search?tab=votes&q=user%3A3666197%20%5Bzeromq%5D">worked extensively</a> with ZeroMQ since v2.x, I know of no way, within the ZeroMQ semantics exposed by the public API, to interrupt an ongoing message transport beyond the <code>LINGER</code>-instructed behaviour, which might be paraphrased as:<br><em>" IGNORE ANY KNOWN SENDS/RECEIVES THAT STILL WAIT THEIR TURN IN THE QUEUE "</em>,<br>
yet this does not stop the progress of sending the data-in-transit down the line.</p>
<p><strong>ZeroMQ works intentionally this way.</strong></p>
<p>One might like to read more about ZeroMQ internals <a href="https://stackoverflow.com/search?tab=votes&q=user%3A3666197%20%5Bzeromq%5D">here</a> or perhaps to just have a general view from the orbit-high perspective as in <a href="https://stackoverflow.com/questions/46615141/how-to-control-the-source-ip-address-of-a-zeromq-packet-on-a-machine-with-multip/46620571#46620571">"<strong>ZeroMQ Hierarchy in less than a Five Seconds</strong>"</a>.</p>
<hr>
<p><sub><strong>Epilogue</strong> : For Use in a Last Resort Case Only<br>
<br>
If you are indeed in ultimate need of a way to stop even these in-transit message flows, feel free to post a new question on how to make things work that wild way.</sub></p>
|
python|multithreading|message-queue|zeromq|pyzmq
| 3 |
953 | 57,127,474 |
How do I take time away from a time in datetime?
|
<p>I am working on a project that takes one time away from another time and tells me the waiting time. E.g.:</p>
<p>10:43:56 - 10:39:46 = 4 minutes 10 seconds.</p>
<p>I've tried to use multiple libraries which of none have worked. After that I resorted to online tutorials with datetime as it seemed to be the closest one to what I am looking for.</p>
<pre class="lang-py prettyprint-override"><code>import datetime
a = "9:54:34"
b = "9:52:34"
print(datetime.timedelta(a, b))
</code></pre>
<p>I was looking to see if there is a function along the lines of datetime.timedeltasubtract(a, b), but that's not a thing. Help would be appreciated; please ignore my lack of skill.</p>
<p>I was expecting the output to be 00:02:00 or 2 or 2 minutes. But the error was</p>
<blockquote>
<p>Traceback (most recent call last): File "C:/Users/photo/Desktop/CJ
work project/main.py", line 7, in
print(datetime.timedelta(a, b)) TypeError: unsupported type for timedelta seconds component: str</p>
</blockquote>
|
<p>As first step is necessary to convert the strings to <code>datetime</code> objects using <code>datetime.strptime</code> function (<a href="https://docs.python.org/3.6/library/datetime.html#datetime.datetime.strptime" rel="nofollow noreferrer">doc</a>). Then you can subtract these two times:</p>
<pre><code>import datetime
a = "9:54:34"
b = "9:52:34"
d1 = datetime.datetime.strptime(a,'%H:%M:%S')
d2 = datetime.datetime.strptime(b,'%H:%M:%S')
print(d1 - d2)
</code></pre>
<p>Prints:</p>
<pre><code>0:02:00
</code></pre>
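<p>One caveat beyond the question (assuming your times can wrap past midnight): if <code>b</code> is later in the day than <code>a</code>, the subtraction goes negative. A hedged sketch that treats that case as a wrap:</p>
<pre class="lang-py prettyprint-override"><code>diff = d1 - d2
if diff.days < 0:                       # e.g. a = "00:05:00", b = "23:55:00"
    diff += datetime.timedelta(days=1)
print(diff)
</code></pre>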
|
python-3.7
| 0 |
954 | 25,750,652 |
Python-handler-socket (pyhs) update function example
|
<p>I'm using the Python client library for the HandlerSocket MySQL plugin (<a href="https://bitbucket.org/excieve/pyhs/overview" rel="nofollow">https://bitbucket.org/excieve/pyhs/overview</a>). I can make insert and find requests, but I can't find an example of how to call the manager.update() function. I've read through the docs of the library and googled a lot, but no luck.
Could anybody give me a code example of how to work with the update function?</p>
|
<p>I used the code example for the find function (<a href="http://python-handler-socket.readthedocs.org/en/latest/usage.html#high-level" rel="nofollow">http://python-handler-socket.readthedocs.org/en/latest/usage.html#high-level</a>) to make the update function call:</p>
<pre><code># UPDATE mydb.test1 SET Cnt=5 WHERE Id=1
hs.update('mydb', 'test1', '=', ['Id', 'Cnt'], ['1'], ['1', '5'])
</code></pre>
<p>Please note that the test1.Id column is INTEGER, but I must pass its value as the string '1'.</p>
|
python|mysql|handlersocket
| 0 |
955 | 29,341,708 |
Anaconda : IPython not running
|
<p>I am a newbie to Python and needed to set up IPython for some project work. I followed the Anaconda installation directions. Currently I am having a lot of problems running IPython:</p>
<blockquote>
<ul>
<li>First I installed Anaconda in my home directory : <code>\home\pranav</code></li>
<li>Next I ran the command <code>conda</code> just to check if the installation was correct - it turned out to be perfectly fine</li>
<li>When I type in <code>ipython</code> OR <code>ipython notebook</code> , I get the following error :</li>
</ul>
<p><code>Traceback (most recent call last):
File "/usr/local/bin/ipython", line 7, in <module>
from IPython import start_ipython
ImportError: No module named IPython</code></p>
</blockquote>
<p>Can someone help me out ? Changing / Adding the PATH Variable is not working as suggested elsewhere on Stack Overflow</p>
|
<p>What was required was just a clean install of IPython:</p>
<blockquote>
<ul>
<li><code>pip install ipython</code></li>
<li><code>pip install 'ipython[all]'</code></li>
</ul>
</blockquote>
<p>Works like a charm :)</p>
|
python|python-2.7|ipython|anaconda
| 1 |
956 | 19,731,346 |
"The pydev nature is not configured on the project" while adding a new Python module in Eclipse
|
<p>I get the <strong>"The pydev nature is not configured on the project"</strong> error while adding a new Python module in Eclipse. </p>
<p>Any ideas how to fix it? How to configure the pydev nature?</p>
<p>My configuration:</p>
<ul>
<li>Mac OS X 10.9.</li>
<li>Eclipse SDK Version: 4.3.1 Build id: M20130911-1000</li>
<li>PyDev for Eclipse 2.8.2.2013090511</li>
</ul>
<p>.project:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>myproject</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.python.pydev.PyDevBuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.wst.common.project.facet.core.builder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.python.pydev.pythonNature</nature>
</natures>
</projectDescription>
</code></pre>
|
<p>The quick solution for this problem is to uninstall PyDev and install it again. To do so: </p>
<ol>
<li>Open Eclipse.</li>
<li>Go to "Help β Install New Software..."</li>
<li>Click "What is already installed?"</li>
<li>Select "PyDev for Eclipse" and press Uninstall...</li>
<li>Go through all dialogs and repeat these steps to install PyDev again.</li>
</ol>
|
python|eclipse|pydev
| 1 |
957 | 39,317,437 |
set 'x-message-ttl' in pika python
|
<p>I want to set the TTL to 1 sec for a RabbitMQ queue using pika.
I tried the following code:</p>
<pre><code>import ctypes
int32=ctypes.c_int
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='localhost'))
channel = connection.channel()
this=channel.queue_declare(queue='hello',
arguments={'x-message-ttl' : int32(1000)}
)
channel.basic_publish(exchange='',
routing_key='hello',
body=message)
print this.method.consumer_count
</code></pre>
<p>I am getting the following error</p>
<pre><code>Traceback (most recent call last):
File "rabt.py", line 8, in <module>
arguments={'x-message-ttl' : int32(1000)}
File "build\bdist.win32\egg\pika\adapters\blocking_connection.py", line 2397, in queue_declare
File "build\bdist.win32\egg\pika\channel.py", line 815, in queue_declare
File "build\bdist.win32\egg\pika\channel.py", line 1312, in _rpc
File "build\bdist.win32\egg\pika\channel.py", line 1324, in _send_method
File "build\bdist.win32\egg\pika\connection.py", line 2139, in _send_method
File "build\bdist.win32\egg\pika\connection.py", line 2119, in _send_frame
File "build\bdist.win32\egg\pika\frame.py", line 74, in marshal
File "build\bdist.win32\egg\pika\spec.py", line 1015, in encode
File "build\bdist.win32\egg\pika\data.py", line 85, in encode_table
File "build\bdist.win32\egg\pika\data.py", line 153, in encode_value
pika.exceptions.UnsupportedAMQPFieldException: (['\x00\x00', '\x05', 'hello', '\x00', None, '\r', 'x-message-ttl'], c_long(1000))
</code></pre>
<p>I am trying to dead-letter all the messages in this particular queue after 1 sec. Can I know how to set the TTL using Pika? Thanks!</p>
|
<pre><code>connection = pika.BlockingConnection(pika.ConnectionParameters(
host='localhost'))
channel = connection.channel()
this=channel.queue_declare(queue='hello',
arguments={'x-message-ttl' : 1000}
)
</code></pre>
<p>You don't need the <code>ctypes</code> cast; a plain Python <code>int</code> works.</p>
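<p>Related but different (not what the question asked): if you ever need a per-message TTL instead of a queue-wide one, pika exposes it through <code>BasicProperties</code>; the value is a string of milliseconds:</p>
<pre class="lang-py prettyprint-override"><code>channel.basic_publish(exchange='',
                      routing_key='hello',
                      body=message,
                      properties=pika.BasicProperties(expiration='1000'))  # per-message TTL in ms
</code></pre>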
|
python-2.7|rabbitmq|pika
| 12 |
958 | 37,227,264 |
sort list of lists by specific index of inner list
|
<p>I am trying to perform some operations on a file and convert its lines to a list. However the integer values are also taken as strings:</p>
<pre><code>l1 = [['test', 'hello', '60,'], ['why', 'to', '500,'], ['my', 'choice', '20,']]
</code></pre>
<p>Because of this I am unable to sort the list of lists based on these integer values.</p>
<p>Is there a way I can convert all these <code>list[2]</code> values into integers and sort the outer list based on that? Or any other way to sort this list using the integers in the inner lists?</p>
<p>The intended result is that the sorted list should show as:</p>
<pre><code>[['my', 'choice', '20,'], ['test', 'hello', '60,'], ['why', 'to', '500,']]
</code></pre>
|
<p>Use a custom sort key, to convert the last element to an integer just when sorting:</p>
<pre><code>sorted(l1, key=lambda l: int(l[2].rstrip(',')))
</code></pre>
<p>The <code>key</code> is used to produce the value on which to sort, for each element in the list. So the <code>lambda</code> function is called for each element, and the above code extracts the <code>l[2]</code> value, converting it to an integer. The <code>str.rstrip()</code> call removes the trailing comma first.</p>
<p>Demo:</p>
<pre><code>>>> l1 = [['test', 'hello', '60,'], ['why', 'to', '500,'], ['my', 'choice', '20,']]
>>> sorted(l1, key=lambda l: int(l[2].rstrip(',')))
[['my', 'choice', '20,'], ['test', 'hello', '60,'], ['why', 'to', '500,']]
</code></pre>
|
python|list|sorting|python-3.x
| 2 |
959 | 37,504,470 |
Tensorflow crashes when using sess.run()
|
<p>I'm using tensorflow 0.8.0 with Python v2.7. My IDE is PyCharm and my os is Linux Ubuntu 14.04</p>
<p>I'm noticing that the following code causes my computer to freeze and/or crash:</p>
<pre><code># you will need these files!
# https://www.kaggle.com/c/digit-recognizer/download/train.csv
# https://www.kaggle.com/c/digit-recognizer/download/test.csv
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# read in the image data from the csv file
# the format is: imagelabel pixel0 pixel1 ... pixel783 (there are 42,000 rows like this)
data = pd.read_csv('../train.csv')
labels = data.iloc[:,:1].values.ravel() # shape = (42000, 1)
labels_count = np.unique(labels).shape[0] # = 10
images = data.iloc[:,1:].values # shape = (42000, 784)
images = images.astype(np.float64)
image_size = images.shape[1]
image_width = image_height = np.sqrt(image_size).astype(np.int32) # since these images are sqaure... hieght = width
# turn all the gray-pixel image-values into percentages of 255
# a 1.0 means a pixel is 100% black, and 0.0 would be a pixel that is 0% black (or white)
images = np.multiply(images, 1.0/255)
# create oneHot vectors from the label #s
oneHots = tf.one_hot(labels, labels_count, 1, 0) #shape = (42000, 10)
#split up the training data even more (into validation and train subsets)
VALIDATION_SIZE = 3167
validationImages = images[:VALIDATION_SIZE]
validationLabels = labels[:VALIDATION_SIZE]
trainImages = images[VALIDATION_SIZE:]
trainLabels = labels[VALIDATION_SIZE:]
# ------------- Building the NN -----------------
# set up our weights (or kernals?) and biases for each pixel
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(.1, shape=shape, dtype=tf.float32)
return tf.Variable(initial)
# convolution
def conv2d(x, W):
return tf.nn.conv2d(x, W, [1,1,1,1], 'SAME')
# pooling
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# placeholder variables
# images
x = tf.placeholder('float', shape=[None, image_size])
# labels
y_ = tf.placeholder('float', shape=[None, labels_count])
# first convolutional layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
# turn shape(40000,784) into (40000,28,28,1)
image = tf.reshape(trainImages, [-1,image_width , image_height,1])
image = tf.cast(image, tf.float32)
# print (image.get_shape()) # =>(40000,28,28,1)
h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
# print (h_conv1.get_shape()) # => (40000, 28, 28, 32)
h_pool1 = max_pool_2x2(h_conv1)
# print (h_pool1.get_shape()) # => (40000, 14, 14, 32)
# second convolutional layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
#print (h_conv2.get_shape()) # => (40000, 14,14, 64)
h_pool2 = max_pool_2x2(h_conv2)
#print (h_pool2.get_shape()) # => (40000, 7, 7, 64)
# densely connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
# (40000, 7, 7, 64) => (40000, 3136)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#print (h_fc1.get_shape()) # => (40000, 1024)
# dropout
keep_prob = tf.placeholder('float')
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
print h_fc1_drop.get_shape()
#readout layer for deep neural net
W_fc2 = weight_variable([1024,labels_count])
b_fc2 = bias_variable([labels_count])
print b_fc2.get_shape()
mull= tf.matmul(h_fc1_drop, W_fc2)
print mull.get_shape()
print
mull2 = mull + b_fc2
print mull2.get_shape()
y = tf.nn.softmax(mull2)
# dropout
keep_prob = tf.placeholder('float')
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print sess.run(mull[0,2])
</code></pre>
<p>The last line causes the crash: </p>
<p>print sess.run(mull[0,2])</p>
<p>This is basically one location in a very big 2d array. Something about the sess.run is causing it. I'm also getting a script issue popup... some sort of google script (think maybe it's tensorflow?). I can't copy the link because my computer is completely frozen.</p>
|
<p>I suspect the problem arises because <code>mull[0, 2]</code>—despite its small apparent size—depends on a very large computation, including multiple convolutions, max-poolings, and a large matrix multiplication; and therefore either your computer becomes fully loaded for a long period of time, or it runs out of memory. (You should be able to tell which by running <code>top</code> and checking what resources are used by the <code>python</code> process in which you are running TensorFlow.)</p>
<p>The amount of computation is so large because your TensorFlow graph is defined in terms of the entire training dataset, <code>trainImages</code>, which contains 40000 images:</p>
<pre><code>image = tf.reshape(trainImages, [-1,image_width , image_height,1])
image = tf.cast(image, tf.float32)
</code></pre>
<p>Instead, it would be more efficient to define your network in terms of a <code>tf.placeholder()</code> to which you can <em>feed</em> individual training examples, or mini-batches of examples. See the <a href="https://www.tensorflow.org/versions/r0.8/how_tos/reading_data/index.html#feeding" rel="nofollow">documentation on feeding</a> for more information. In particular, since you are only interested in the 0th row of <code>mull</code>, you only need to feed the 0th example from <code>trainImages</code> and perform computation on it to produce the necessary values. (In your current program, the results for all other examples are also being computed, and then discarded in the final slice operator.)</p>
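<p>A minimal sketch of that change, reusing the <code>x</code> and <code>keep_prob</code> placeholders the question already defines (the rest of the graph would be rebuilt on top of <code>x</code> instead of <code>trainImages</code>):</p>
<pre><code># build the first layer on the placeholder, not on the whole training set
image = tf.reshape(x, [-1, image_width, image_height, 1])

# ... rest of the network unchanged ...

# evaluate the value for a single example only
print(sess.run(mull[0, 2],
               feed_dict={x: trainImages[0:1], keep_prob: 1.0}))
</code></pre>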
|
python|crash|tensorflow
| 1 |
960 | 28,256,067 |
Typecast "340" to int results in '34' losing the last zero
|
<p><strong>Scenario1:<br></strong>
input: <code>"1-0:1.7.0(00.471*kW)"</code>
<br>regex: <code>"[0-9]-[0-9]:1.7.0\([0]{1,}(.*)\\*kW\)"</code>
<br>output: <strong>471</strong> (as it should be)</p>
<p><strong>Scenario2:<br></strong>
input: <code>"1-0:1.7.0(00.470*kW)"</code>
<br>regex: <code>"[0-9]-[0-9]:1.7.0\([0]{1,}(.*)\\*kW\)"</code>
<br>output: <strong><em>47</em></strong> instead of <em>470</em></p>
<p>manipulation:<code>output = int(str(str(float("{0:.4f}".format(float(re.search("[0-9]-[0-9]:1.7.0\([0]{1,}(.*)\\*kW\)",linestr).group(1))))).replace(".","")).replace("*",""))</code></p>
<p><strong>Question:</strong>
When the input is like Scenario 2, I want the output to be 470 instead of 47. <br> How can I get all characters including trailing zeros?</p>
|
<p>You only want the digits after the <code>.</code>:</p>
<pre><code>s = "1-0:1.7.0(00.471*kW)"
print(int(re.findall(":1.7.0\([0]+\.(\d+)\\*kW\)",s)[0]))
471
s = "1-0:1.7.0(00.470*kW)"
print(int(re.findall(":1.7.0\([0]+\.(\d+)\\*kW\)",s)[0]))
470
</code></pre>
<p>Or simply:</p>
<pre><code>print(int(re.findall("\([0]+\.(\d+)\\*kW\)",s)[0]))
</code></pre>
|
python|regex
| 1 |
961 | 44,076,370 |
Any way to make matplotlib's Nbagg backend faster, or Inline backend higher resolution?
|
<p>I love using Jupyter notebooks, but can't seem to find the correct backend for visualizing plots: <code>%matplotlib inline</code> generates really low-resolution, bitmap images, but fast, and <code>%matplotlib nbagg</code> or <code>%matplotlib notebook</code> are slow, but high-resolution vector graphics.</p>
<p>Could the latter be slow because it has to set up the interaction interface? I generally want all my figures to be reproducible with the click of a button so I don't want to manually manipulate them -- maybe I can disable interaction for every plot?</p>
<p>Or could the former be adjusted to show figures in higher-resolution, or to load up vector graphics instead of the bitmap images?</p>
|
<p>As often happens, I found the answer right after posting the question. Just use</p>
<pre><code>%config InlineBackend.figure_format = 'retina'
%matplotlib inline
</code></pre>
<p>for high-resolution bitmap, or for vector graphics,</p>
<pre><code>%config InlineBackend.figure_format = 'svg'
%matplotlib inline
</code></pre>
<p>and that does the trick!</p>
<p>Also figured out how to make the backend preserve the printed-figure bounding box even if <code>Artist</code>s go outside -- just use</p>
<pre><code>%config InlineBackend.print_figure_kwargs = {}
</code></pre>
<p>because by default, its value is <code>{'bbox_inches': 'tight'}</code>. Could not find any documentation on this, but <code>jupyter</code> displays the different options if you enter <code>%config InlineBackend</code>.</p>
|
python|matplotlib|jupyter-notebook|backend
| 2 |
962 | 37,922,514 |
Getting the md5 from .tar file
|
<p>I have seen answers for this but didn't get the right way to do it. Our package has been created in .tar format by some other team. How can I get the checksum of the contents of the files in the tarball using Python?
People have suggested creating an md5 file while archiving, but that is not how we do it. Can anybody suggest something on this?
Thanks in advance.</p>
|
<p>The <code>tar</code> format does not contain any file integrity information on the file contents themselves. The format only contains a checksum of each header block, which contains the file metadata, but that doesn't guarantee the integrity of the file contents. </p>
<p>You can read <a href="http://www.gnu.org/software/tar/manual/html_node/Standard.html" rel="nofollow">Basic Tar Format</a> for more about the format, and <a href="https://www.gnu.org/software/tar/manual/html_node/Data-corruption-and-repair.html" rel="nofollow">here</a> for a brief note about data corruption in tar files.</p>
<p>The only way to ensure the integrity of the files themselves is to calculate a checksum or hash on the files in the archive, or on the tarball itself, at the time of archiving, as you said people already suggested to you.</p>
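<p>If what you need is simply a per-member checksum of the tarball you received (computed now, rather than at archive time), a small sketch using only the standard library ('package.tar' is an assumed file name):</p>
<pre><code>import hashlib
import tarfile

with tarfile.open('package.tar') as tar:
    for member in tar:
        if member.isfile():
            f = tar.extractfile(member)
            digest = hashlib.md5(f.read()).hexdigest()  # reads the member fully; chunk for huge files
            print("%s  %s" % (member.name, digest))
</code></pre>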
|
python-2.7|shell|md5|tarfile
| 1 |
963 | 51,205,924 |
pip packages not available for new user
|
<p>I have installed several packages as a sudoer using the <code>sudo pip install package_name</code> command. The packages are installed and work well for this user.
Afterwards, I defined a new user. My problem is that the packages are not available to the new user, and when trying to import them this error appears: <code>No module named package_name</code>. Is there any way I can avoid reinstalling the packages for the new user and use the packages installed by the sudoer?</p>
|
<p>The environment variables must be defined for the new user again. Try setting the environment variables for Python and pip for the new user.</p>
|
python|python-2.7|ubuntu|pip|ubuntu-16.04
| 0 |
964 | 51,515,569 |
In Tensorflow I can't use any MultiRNNCell instance in dynamic decode, but a single RNNCell instance can work on it
|
<p>I am building a seq2seq model using TensorFlow and have met a problem: my program throws an error when I use MultiRNNCell in tf.contrib.seq2seq.dynamic_decode.</p>
<p>The problem happens over here:</p>
<pre><code>defw_rnn=tf.nn.rnn_cell.MultiRNNCell([
tf.nn.rnn_cell.LSTMCell(num_units=self.FLAGS.rnn_units,
initializer=tf.orthogonal_initializer)
for _ in range(self.FLAGS.rnn_layer_size)])
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=decoder_inputs,
sequence_length=self.decoder_targets_length,
time_major=False)
training_decoder = \
tf.contrib.seq2seq.BasicDecoder(
defw_rnn, training_helper,
encoder_final_state,
output_layer)
training_decoder_output, _, training_decoder_output_length = \
tf.contrib.seq2seq.dynamic_decode(
training_decoder,
impute_finished=True,
maximum_iterations=self.FLAGS.max_len)
</code></pre>
<p>When I run this code,the console shows this Error message:</p>
<blockquote>
<p><code>C:\Users\TopView\AppData\Local\Programs\Python\Python36\python.exe E:/PycharmProject/cikm_transport/CIKM/CIKM/translate_model/train.py
WARNING:tensorflow:From C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\rnn.py:417: calling reverse_sequence (from tensorflow.python.ops.array_ops) with seq_dim is deprecated and will be removed in a future version.
Instructions for updating:
seq_dim is deprecated, use seq_axis instead
WARNING:tensorflow:From C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py:432: calling reverse_sequence (from tensorflow.python.ops.array_ops) with batch_dim is deprecated and will be removed in a future version.
Instructions for updating:
batch_dim is deprecated, use batch_axis instead
encoder_final_state shpe
LSTMStateTuple(c=<tf.Tensor 'encoder/bidirectional_rnn/fw/fw/while/Exit_5:0' shape=(?, 24) dtype=float32>, h=<tf.Tensor 'encoder/bidirectional_rnn/fw/fw/while/Exit_6:0' shape=(?, 24) dtype=float32>)
decoder_inputs shape before embedded
(128, 10)
decoder inputs shape after embedded
(128, 10, 5)
Traceback (most recent call last):
File "E:/PycharmProject/cikm_transport/CIKM/CIKM/translate_model/train.py", line 14, in <module>
len(embedding_matrix['embedding'][0]))
File "E:\PycharmProject\cikm_transport\CIKM\CIKM\translate_model\model.py", line 109, in __init__
maximum_iterations=self.FLAGS.max_len)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 323, in dynamic_decode
swap_memory=swap_memory)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3209, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2941, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2878, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3179, in <lambda>
body = lambda i, lv: (i + 1, orig_body(*lv))
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 266, in body
decoder_finished) = decoder.step(time, inputs, state)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\basic_decoder.py", line 137, in step
cell_outputs, cell_state = self._cell(inputs, state)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 232, in __call__
return super(RNNCell, self).__call__(inputs, state)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\layers\base.py", line 329, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 703, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1325, in call
cur_inp, new_state = cell(cur_inp, cur_state)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 339, in __call__
*args, **kwargs)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\layers\base.py", line 329, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 703, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 846, in call
(c_prev, m_prev) = state
File "C:\Users\TopView\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 436, in __iter__
"Tensor objects are not iterable when eager execution is not "
TypeError: Tensor objects are not iterable when eager execution is not enabled. To iterate over this tensor use tf.map_fn.</code></p>
<p><code>Process finished with exit code 1</code></p>
</blockquote>
<p>But when I change <code>defw_rnn</code> to a single RNN cell instance such as <code>LSTMCell</code>, the error disappears:</p>
<pre><code>defw_rnn=tf.nn.rnn_cell.LSTMCell(num_units=self.FLAGS.rnn_units,
initializer=tf.orthogonal_initializer)
</code></pre>
<p>And the code works well. However, most of the seq2seq code I've found on the Internet uses MultiRNNCell with TensorFlow, so it really confuses me as to what is wrong with my program.</p>
<p>Here is the entire code:</p>
<pre><code>import tensorflow as tf
import numpy as np
class Seq2SeqModel(object):
def bw_fw_rnn(self):
with tf.name_scope("forward_rnn"):
fw = tf.nn.rnn_cell.MultiRNNCell([
tf.nn.rnn_cell.LSTMCell(num_units=self.FLAGS.rnn_units,
initializer=tf.orthogonal_initializer) for _ in
range(self.FLAGS.rnn_layer_size)])
fw = tf.nn.rnn_cell.DropoutWrapper(fw, output_keep_prob=self.FLAGS.keep_prob)
with tf.name_scope("backward_rnn"):
bw = tf.nn.rnn_cell.MultiRNNCell([
tf.nn.rnn_cell.LSTMCell(num_units=self.FLAGS.rnn_units,
initializer=tf.orthogonal_initializer) for _ in
range(self.FLAGS.rnn_layer_size)])
bw = tf.nn.rnn_cell.DropoutWrapper(bw, output_keep_prob=self.FLAGS.keep_prob)
return (fw, bw)
def decode_inputs_preprocess(self, data, id_matrix):
ending=tf.strided_slice(data,[0,0],[self.batch_size,-1],[1,1])
decoder_input=tf.concat([tf.fill([self.batch_size,1],id_matrix.index('<go>')),ending],1)
return decoder_input
def __init__(self, FLAGS, english_id_matrix, spanish_id_matrix, english_vocab_size,spanish_vocab_size, embedding_size):
self.FLAGS = FLAGS
self.english_vocab_size = english_vocab_size
self.embedding_size = embedding_size
self.encoder_input = tf.placeholder(shape=[None, self.FLAGS.max_len], dtype=tf.int32, name='encoder_inputs')
self.decoder_targets = tf.placeholder(shape=[None, self.FLAGS.max_len], dtype=tf.int32, name='decoder_targets')
self.encoder_input_sequence_length = tf.placeholder(shape=[None], dtype=tf.int32, name='encoder_inputs_length')
self.decoder_targets_length = tf.placeholder(shape=[None], dtype=tf.int32, name='decoder_targets_length')
self.batch_size = self.FLAGS.batch_size
with tf.name_scope('embedding_look_up'):
spanish_embeddings = tf.Variable(
tf.random_uniform([english_vocab_size,
embedding_size], -1.0, 1.0),
dtype=tf.float32)
english_embeddings = tf.Variable(
tf.random_uniform([english_vocab_size,
embedding_size], -1.0, 1.0),
dtype=tf.float32)
self.spanish_embeddings_inputs = tf.placeholder(
dtype=tf.float32, shape=[english_vocab_size, embedding_size],
name='spanish_embeddings_inputs')
self.english_embeddings_inputs = tf.placeholder(
dtype=tf.float32, shape=[english_vocab_size, embedding_size],
name='spanish_embeddings_inputs')
self.spanish_embeddings_inputs_op = spanish_embeddings.assign(self.spanish_embeddings_inputs)
self.english_embeddings_inputs_op = english_embeddings.assign(self.english_embeddings_inputs)
encoder_inputs = tf.nn.embedding_lookup(spanish_embeddings, self.encoder_input)
with tf.name_scope('encoder'):
enfw_rnn, enbw_rnn = self.bw_fw_rnn()
encoder_outputs, encoder_final_state = \
tf.nn.bidirectional_dynamic_rnn(enfw_rnn, enbw_rnn, encoder_inputs
, sequence_length=self.encoder_input_sequence_length, dtype=tf.float32)
print("encoder_final_state shpe")
# final_state_c=tf.concat([encoder_final_state[0][-1].c,encoder_final_state[1][-1].c],1)
# final_state_h=tf.concat([encoder_final_state[0][-1].h,encoder_final_state[1][-1].h],1)
# encoder_final_state=tf.contrib.rnn.LSTMStateTuple(c=final_state_c,
# h=final_state_h)
encoder_final_state=encoder_final_state[0][-1]
print(encoder_final_state)
with tf.name_scope('dense_layer'):
output_layer = tf.layers.Dense(english_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(
mean=0.0, stddev=0.1
))
# training decoder
with tf.name_scope('decoder'), tf.variable_scope('decode'):
decoder_inputs=self.decode_inputs_preprocess(self.decoder_targets,english_id_matrix)
print('decoder_inputs shape before embedded')
print(decoder_inputs.shape)
decoder_inputs = tf.nn.embedding_lookup(english_embeddings,decoder_inputs)
print('decoder inputs shape after embedded')
print(decoder_inputs.shape)
defw_rnn=tf.nn.rnn_cell.MultiRNNCell([
tf.nn.rnn_cell.LSTMCell(num_units=self.FLAGS.rnn_units,
initializer=tf.orthogonal_initializer)
for _ in range(self.FLAGS.rnn_layer_size)])
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=decoder_inputs,
sequence_length=self.decoder_targets_length,
time_major=False)
training_decoder = \
tf.contrib.seq2seq.BasicDecoder(
defw_rnn, training_helper,
encoder_final_state,
output_layer)
training_decoder_output, _, training_decoder_output_length = \
tf.contrib.seq2seq.dynamic_decode(
training_decoder,
impute_finished=True,
maximum_iterations=self.FLAGS.max_len)
training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
print("training logits shape")
print(training_logits.shape)
# predicting decoder
with tf.variable_scope('decode', reuse=True):
start_tokens = tf.tile(tf.constant([english_id_matrix.index('<go>')], dtype=tf.int32),
[self.batch_size], name='start_tokens')
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(english_embeddings,
start_tokens,
english_id_matrix.index('<eos>'))
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(defw_rnn,
predicting_helper,
encoder_final_state,
output_layer)
predicting_decoder_output, _, predicting_decoder_output_length =\
tf.contrib.seq2seq.dynamic_decode(
predicting_decoder,
impute_finished=True,
maximum_iterations=self.FLAGS.max_len)
self.predicting_logits = tf.identity(predicting_decoder_output.sample_id, name='predictions')
print("predicting logits shape")
print(self.predicting_logits.shape)
masks = tf.sequence_mask(self.decoder_targets_length, self.FLAGS.max_len, dtype=tf.float32, name='masks')
with tf.variable_scope('optimization'), tf.name_scope('optimization'):
# Loss
self.cost = tf.contrib.seq2seq.sequence_loss(training_logits, self.decoder_targets, masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(self.FLAGS.alpha)
# Gradient Clipping
gradients = optimizer.compute_gradients(self.cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
self.train_op = optimizer.apply_gradients(capped_gradients)
</code></pre>
|
<p>Well… I've figured it out. The problem happened because I sent only the final state of the last encoder layer to the decoder, whereas the MultiRNNCell decoder expects one state per layer.</p>
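<p>For reference, a minimal sketch of passing a per-layer state to the MultiRNNCell decoder, assuming the encoder and decoder have the same number of layers and unit size (variable names follow the question's code; this is an illustration, not necessarily the exact change I made):</p>
<pre><code># encoder_final_state here is the raw second result of
# tf.nn.bidirectional_dynamic_rnn: a (fw_states, bw_states) pair where
# each element is a tuple with one LSTMStateTuple per layer.
fw_states = encoder_final_state[0]

# A MultiRNNCell decoder expects one state per decoder layer,
# not a single LSTMStateTuple, so pass the whole per-layer tuple:
decoder_initial_state = tuple(fw_states)

training_decoder = tf.contrib.seq2seq.BasicDecoder(
    defw_rnn, training_helper,
    decoder_initial_state,
    output_layer)
</code></pre>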
|
python-3.x|tensorflow|nlp|deep-learning|seq2seq
| 0 |
965 | 48,485,255 |
How can access Uploaded File in Google colab
|
<p>I'm new to Python and I use <code>Google Colab</code>. I uploaded a <code>train_data.npy</code> file into Google Colab and now I want to use it, following this link: <a href="https://stackoverflow.com/questions/47212852/how-to-import-and-read-a-shelve-or-numpy-file-in-google-colaboratory">How to import and read a shelve or Numpy file in Google Colaboratory?
</a></p>
<p>When I run my code I face this error:</p>
<blockquote>
<p>TypeError: 'dict_keys' object does not support indexing</p>
</blockquote>
<p>Here is my code : </p>
<pre><code>uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
with open('train_data.npy', 'w') as f:
f.write(uploaded[uploaded.keys()[0]])
</code></pre>
<p>Thanks </p>
|
<p>Here's an adjustment to your snippet that will save any uploaded file in the current directory using the uploaded file name.</p>
<pre><code>from google.colab import files
uploaded = files.upload()
for name, data in uploaded.items():
with open(name, 'wb') as f:
f.write(data)
print ('saved file', name)
</code></pre>
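<p>For reference, the original snippet failed because in Python 3 <code>dict.keys()</code> returns a view that cannot be indexed. If you only want the first uploaded file, one option is:</p>
<pre><code>first_name = list(uploaded.keys())[0]
with open('train_data.npy', 'wb') as f:  # 'wb': the uploaded values are bytes
    f.write(uploaded[first_name])
</code></pre>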
|
python|google-colaboratory
| 5 |
966 | 4,835,050 |
Custom dictionary through **kw
|
<p>I have a library function that makes use of <code>**kw</code>, but I want to pass a dictionary-like class so that I can override <code>__getitem__</code> to track its accesses to data in the dictionary. For example, in the code below calling libfn does not print Accessed but libfn2 does.</p>
<pre><code>class Dtracker(dict):
def __init__(self):
dict.__init__(self)
def __getitem__(self,item):
print "Accessed %s" % str(item)
return dict.__getitem__(self, item)
def libfn(**kw):
a = kw["foo"]
print "a is %s" % a
return a
def libfn2(kw):
a = kw["foo"]
print "a is %s" % a
return a
d = Dtracker()
d["foo"] = "bar"
libfn(**d)
libfn2(d)
</code></pre>
|
<p>You can't, without changing Python itself. It's converted to a <code>dict</code> at a lower level.</p>
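<p>You can see this by checking the type inside the function: the <code>**</code> unpacking copies the items into a plain <code>dict</code>, so the <code>__getitem__</code> override is lost. A small illustration:</p>
<pre><code>def check(**kw):
    print(type(kw))  # <class 'dict'>, not Dtracker

check(**d)
</code></pre>
<p>If you need the access tracking, pass the object itself (as in <code>libfn2</code>), so that lookups go through your <code>Dtracker</code>.</p>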
|
python
| 5 |
967 | 4,373,141 |
Dealing with huge (potentially over 30000x30000) images in Python?
|
<p>I'm trying to use a python script called deepzoom.py to convert large overhead renders (often over 1GP) to the Deep Zoom image format (ie, google maps-esque tile format), but unfortunately it's powered by PIL, which usually ends up crashing due to memory limitations. The creator has said he's delving into VIPS, but even nip2 (the GUI frontend for VIPS) fails to open the image. In another question by someone else (though on the same topic), someone suggested <a href="http://openimageio.org/wiki/index.php?title=Main_Page" rel="noreferrer">OpenImageIO</a>, which looks like it has the ability, and has Python wrappers, but there aren't any proper binaries provided, and trying to compile it on Windows is a nightmare.</p>
<p>Are there any alternative libraries for Python I can use? I've tried PythonMagickWand (wrapper for ImageMagick) and PythonMagick (wrapper for GraphicsMagick), but both of those also run into memory problems.</p>
|
<p>I had a very similar problem and I ended up solving it by using netpbm, which works fine on windows. Netpbm had no problem with converting huge .png files and then slicing, cropping, re-combining (using pamcrop, pamdice, and pamundice) and converting back to .png without using much memory at all. I just included the necessary netpbm binaries and dlls with my application and called them from python. </p>
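<p>A hypothetical sketch of driving the netpbm binaries from Python via <code>subprocess</code> (the exact tool names and flags depend on your netpbm build, so treat this as an outline rather than tested commands):</p>
<pre><code>import subprocess

# Slice a huge PNM/PAM image into 1024x1024 tiles named tile_*_*.pam
subprocess.run(
    ["pamdice", "-outstem=tile", "-width=1024", "-height=1024", "input.pam"],
    check=True,
)
</code></pre>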
|
python|image|python-imaging-library
| 3 |
968 | 4,394,483 |
How lambdas work?
|
<p>I'm learning python using the tutorial on the official python website and came across <a href="http://docs.python.org/tutorial/controlflow.html#lambda-forms" rel="nofollow">this example</a>:</p>
<pre><code>>>> def make_incrementor(n):
... return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43
</code></pre>
<p>Where does <code>x</code> get its value from? I'm not familiar with how lambda works; I understand anonymous functions just fine from JavaScript, but this has me stumped. Anyone care to shed some light? I'd be grateful.</p>
|
<p>Consider this. <code>f</code> is the object created by the <code>make_incrementor</code> function.</p>
<p>It is a lambda, an "anonymous function".</p>
<pre><code>>>> f= lambda x: x+42
>>> f(10)
52
</code></pre>
<p>The value for <code>x</code> showed up when we applied <code>f</code> to a value.</p>
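<p>The other half of the picture is <code>n</code>: the lambda closes over the <code>n</code> that was passed to <code>make_incrementor</code>, so each returned function carries its own copy. For example:</p>
<pre><code>>>> f = make_incrementor(42)
>>> g = make_incrementor(100)
>>> f(1)   # x=1, n=42
43
>>> g(1)   # x=1, n=100
101
</code></pre>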
|
python|lambda
| 5 |
969 | 73,788,187 |
Python terminal Freeze but don't throw error
|
<p>Can someone help me with a problem I'm having with Python? I found a bug where Python just freezes; it doesn't crash or throw an error, it just freezes.</p>
<p>It worked fine for a while, but after running for about 1-2 hours it froze.</p>
<p>I tried adding print("something") to see if the loop was still working, but it's not printing.</p>
<p>Can someone help me? Please, I don't know how to fix this.</p>
<p>I'm using the newest Python.</p>
<pre><code>import datetime
import os
import sys
import time
import dw2
class Watcher(object):
file = None
running = True
refresh_delay_secs = 1
# Constructor
def __init__(self, watch_file, call_func_on_change=None, *args, **kwargs):
self._cached_stamp = 0
self.filename = watch_file
self.file = watch_file
self.call_func_on_change = call_func_on_change
self.args = args
self.kwargs = kwargs
# Look for changes
def look(self):
stamp = os.stat(self.filename).st_mtime
if stamp != self._cached_stamp:
self._cached_stamp = stamp
# File has changed, so do something...
# print('Updated..')
if self.call_func_on_change is not None:
self.call_func_on_change(*self.args, **self.kwargs)
return "Updated"
else:
#print("Not Updated")
return "Not Updated"
# Keep watching in a loop
def watch(self):
try:
# Look for changes
time.sleep(self.refresh_delay_secs)
result=self.look()
if result.__eq__("Updated"):
dw2.dwc(self.file)
print(result)
#if datetime.datetime.now() >= (now + datetime.timedelta(seconds=20)):
#self.restart()
except KeyboardInterrupt:
print('\nDone')
except FileNotFoundError:
pass
except:
print('Unhandled error: %s' % sys.exc_info()[0])
def restart(self):
os.system('cls')
os.execv(sys.executable, ['python'] + sys.argv)
watch_file = 'status.txt'
watcher = Watcher(watch_file)
# simple
# also call custom action function
watcher = Watcher(watch_file)
while watcher.running :
try :
watcher.watch()
#start the watch going
except:
restart()
</code></pre>
|
<p>At the end of your code, in the bare <code>except</code> of the <code>while</code> loop, print <code>sys.exc_info()</code> (and ideally the full traceback) so that the exception being swallowed there becomes visible instead of the program appearing to freeze silently.</p>
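<p>A minimal sketch of the question's outer loop with the exception made visible (<code>traceback</code> is from the standard library):</p>
<pre><code>import sys
import traceback

while watcher.running:
    try:
        watcher.watch()
    except Exception:
        # Log what actually happened instead of swallowing it silently
        print('Unhandled error:', sys.exc_info()[0])
        traceback.print_exc()
</code></pre>
<p>Note that the original handler calls <code>restart()</code>, which is not defined at module level (it is a method of <code>Watcher</code>), so if that branch ever runs it will itself raise a <code>NameError</code>.</p>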
|
python
| 0 |
970 | 64,805,533 |
Simple repeating level system
|
<p>Trying to prototype a simple leveling system where you can add or subtract XP. Every time the user levels up, the XP needed to level up again should increase by 100, and the user's XP should go back down, keeping any XP over the needed amount. So far all of these things sort of work; however, if I go past a couple of levels the program breaks, and I don't get any errors. This is probably a simple fix, but I'm stuck.</p>
<pre><code>level = 0
next_level = 100
current_xp = 0
xp_to_next_level = 100
runtest = True
levelup = True
while runtest:
xp_added = int(input('Add xp: '))
current_xp = current_xp + xp_added
if current_xp < next_level:
xp_to_next_level = next_level - current_xp
print('Your level is '+str(level))
print('Your current xp is '+str(current_xp))
print('Xp to next level is '+str(xp_to_next_level))
print()
continue
while levelup:
if current_xp >= next_level:
print('Level Up!')
print()
level = level + 1
current_xp = current_xp - next_level
next_level = 100 * (level + 1 )
xp_to_next_level = next_level - current_xp
print('Your level is '+str(level))
print('Your current xp is '+str(current_xp))
print('Xp to next level is '+str(xp_to_next_level))
print()
continue
else:
levelup = False
</code></pre>
|
<p>If you're looking for other ways to improve your code, <strong>modularity</strong> is often a big one</p>
<pre class="lang-py prettyprint-override"><code># a function which returns the amount of xp needed to pass a level
def xp_per_level(level):
return (level + 1) * 100
# a function which wraps around the xp and changes the level
def update_level(xp, level):
while xp >= xp_per_level(level):
xp -= xp_per_level(level)
level += 1
while xp < 0:
level -= 1
xp += xp_per_level(level)
return xp, level
level = 0
xp = 0
while True:
xp += int(input('enter change in xp: '))
# since update_level returns two values, we need to "unpack" both here
xp, level = update_level(xp, level)
</code></pre>
<p>This lets you separate your logic into clear functions, and have greater control over the amount of xp needed to pass a level. In the example above, the player moves from level 0 to 1 once they reach 100xp, and from 1 to 2 at 200xp etc just as you want.</p>
<p>Hope it helps to provide an alternative structure, even though the logic is almost the same :)</p>
<p><strong>Edit: Added <code>xp_to_next_level</code> function</strong></p>
<pre class="lang-py prettyprint-override"><code>def xp_to_next_level(xp, level):
# this works by subtracting the current xp from max xp for that level
return xp_per_level(level) - xp
</code></pre>
|
python
| 1 |
971 | 63,763,884 |
Having Trouble Making a RESTful API with Flask-RestX: "No operations defined in spec!" and "404"s
|
<p>In summary, I have been following the Flask-RESTX tutorials to make an API; however, none of my endpoints appear on the Swagger page ("No operations defined in spec!") and I just get a 404 whenever I call them.</p>
<p>I created my api mainly following this <a href="https://flask-restx.readthedocs.io/en/latest/scaling.html" rel="nofollow noreferrer">https://flask-restx.readthedocs.io/en/latest/scaling.html</a></p>
<p>I'm using python 3.8.3 for reference.</p>
<p>A cut down example of what I'm doing is as follows.</p>
<p>My question, in short, is: what am I missing? I'm currently drawing a blank on why this doesn't work.</p>
<h3>Directory Structure</h3>
<pre><code>project/
- __init__.py
- views/
- __init__.py
- test.py
manage.py
requirements.txt
</code></pre>
<h3>File Contents</h3>
<p><strong>requirements.txt</strong></p>
<pre><code>Flask-RESTX==0.2.0
Flask-Script==2.0.6
</code></pre>
<p><strong>manage.py</strong></p>
<pre><code>from flask_script import Manager
from project import app
manager = Manager(app)
if __name__ == '__main__':
manager.run()
</code></pre>
<p><strong>project/__init__.py</strong></p>
<pre><code>from flask import Flask
from project.views import api
app = Flask(__name__)
api.init_app(app)
</code></pre>
<p><strong>project/views/__init__.py</strong></p>
<pre><code>from flask_restx import Api, Namespace, fields
api = Api(
title='TEST API',
version='1.0',
description='Testing Flask-RestX API.'
)
# Namespaces
ns_test = Namespace('test', description='a test namespace')
# Models
custom_greeting_model = ns_test.model('Custom', {
'greeting': fields.String(required=True),
})
# Add namespaces
api.add_namespace(ns_test)
</code></pre>
<p><strong>project/views/test.py</strong></p>
<pre><code>from flask_restx import Resource
from project.views import ns_test, custom_greeting_model
custom_greetings = list()
@ns_test.route('/')
class Hello(Resource):
@ns_test.doc('say_hello')
def get(self):
return 'hello', 200
@ns_test.route('/custom')
class Custom(Resource):
@ns_test.doc('custom_hello')
@ns_test.expect(custom_greeting_model)
@ns_test.marshal_with(custom_greeting_model)
def post(self, **kwargs):
custom_greetings.append(greeting)
pos = len(custom_greetings) - 1
return [{'id': pos, 'greeting': greeting}], 200
</code></pre>
<h3>How I'm Testing & What I Expect</h3>
<p>So going to the swagger page, I expect the 2 endpoints defined to be there, but I just see the aforementioned error.</p>
<p>Just using IPython in a shell, I've tried the following calls using requests and just get back 404s.</p>
<pre><code>import json
import requests as r
base_url = 'http://127.0.0.1:5000/'
</code></pre>
<pre><code>response = r.get(base_url + 'api/test')
response
</code></pre>
<pre><code>response = r.get(base_url + 'api/test/')
response
</code></pre>
<pre><code>data = json.dumps({'greeting': 'hi'})
response = r.post(base_url + 'test/custom', data=data)
response
</code></pre>
<pre><code>data = json.dumps({'greeting': 'hi'})
response = r.post(base_url + 'test/custom/', data=data)
response
</code></pre>
|
<h2>TL;DR</h2>
<p>I made a few mistakes in my code and test:</p>
<ol>
<li>Registering api before declaring the routes.</li>
<li>Making a weird assumption about how the arguments would be passed to the <code>post</code> method.</li>
<li>Using a model instead of request parser in the <code>expect</code> decorator</li>
<li>Calling the endpoints in my testing with an erroneous <code>api/</code> prefix.</li>
</ol>
<hr />
<h2>In Full</h2>
<p>I believe it's because I registered the namespace on the api before declaring any routes.</p>
<p>My understanding is when the api is registered on the app, the swagger documentation and routes on the app are setup at that point. Thus any routes defined on the api after this are not recognised. I think this because when I declared the namespace in the <code>views/test.py</code> file (also the model to avoid circular referencing between this file and <code>views/__init__.py</code>), the swagger documentation had the routes defined and my tests worked (after I corrected them).</p>
<p>There were some more mistakes in my app and my tests, which were</p>
<h3>Further Mistake 1</h3>
<p>In my app, in the <code>views/test.py</code> file, I made a silly assumption that a variable would be made of the expected parameter (that I would just magically have <strong>greeting</strong> as some non-local variable). Looking at the documentation, I learnt about the <a href="https://flask-restx.readthedocs.io/en/latest/parsing.html" rel="nofollow noreferrer">RequestParser</a>, and that I needed to declare one like so</p>
<pre><code>from flask_restx import reqparse
# Parser
custom_greeting_parser = reqparse.RequestParser()
custom_greeting_parser.add_argument('greeting', required=True, location='json')
</code></pre>
<p>and use this in the <code>expect</code> decorator. I could then retrieve a dictionary of the parameters in my <code>post</code> method. with the below</p>
<pre><code>...
def post(self):
args = custom_greeting_parser.parse_args()
greeting = args['greeting']
...
</code></pre>
<p>The <code>**kwargs</code> turned out to be unnecessary.</p>
<h3>Further Mistake 2</h3>
<p>In my tests, I was calling the endpoint <code>api/test</code>, which was incorrect, it was just <code>test</code>. The corrected test for this endpoint is</p>
<p><strong>Corrected test for <code>test</code> endpoint</strong></p>
<pre><code>import json
import requests as r
base_url = 'http://127.0.0.1:5000/'
response = r.get(base_url + 'test')
print(response)
print(json.loads(response.content.decode()))
</code></pre>
<h3>Further Mistake 3</h3>
<p>For the test of the other endpoint, the post, I needed to include a header declaring the content type so that the parser would "see" the parameters, because I had specified the location explicitly as json. Corrected test below.</p>
<p><strong>Corrected test for <code>test/custom</code> endpoint</strong></p>
<pre><code>import json
import requests as r
base_url = 'http://127.0.0.1:5000/'
data = json.dumps({'greeting': 'hi'})
headers = {'content-type': 'application/json'}
response = r.post(base_url + 'test/custom', data=data, headers=headers)
print(response)
print(json.loads(response.content.decode()))
</code></pre>
<h2>Corrected Code</h2>
<p>For the files with incorrect code.</p>
<p><strong>views/__init__.py</strong></p>
<pre><code>from flask_restx import Api
from project.views.test import ns_test
api = Api(
title='TEST API',
version='1.0',
description='Testing Flask-RestX API.'
)
# Add namespaces
api.add_namespace(ns_test)
</code></pre>
<p><strong>views/test.py</strong></p>
<pre><code>from flask_restx import Resource, Namespace, fields, reqparse
# Namespace
ns_test = Namespace('test', description='a test namespace')
# Models
custom_greeting_model = ns_test.model('Custom', {
'greeting': fields.String(required=True),
'id': fields.Integer(required=True),
})
# Parser
custom_greeting_parser = reqparse.RequestParser()
custom_greeting_parser.add_argument('greeting', required=True, location='json')
custom_greetings = list()
@ns_test.route('/')
class Hello(Resource):
@ns_test.doc('say_hello')
def get(self):
return 'hello', 200
@ns_test.route('/custom')
class Custom(Resource):
@ns_test.doc('custom_hello')
@ns_test.expect(custom_greeting_parser)
@ns_test.marshal_with(custom_greeting_model)
def post(self):
args = custom_greeting_parser.parse_args()
greeting = args['greeting']
custom_greetings.append(greeting)
pos = len(custom_greetings) - 1
return [{'id': pos, 'greeting': greeting}], 200
</code></pre>
|
python|python-3.x|flask-restx
| 3 |
972 | 72,077,490 |
Django multiple relations in one model
|
<p>I have been trying to create a model that could represent the form as it is. I tried creating an <code>EntryForm</code> model linked to an <code>EntryFormTable</code>, where each column in the table is a model class linked to the table, but this proved to be a long way around and one that doesn't even work. Maybe there's a shorter, or even a working, way to represent this in Django models:
<a href="https://i.stack.imgur.com/q9nRY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q9nRY.jpg" alt="entry form" /></a></p>
|
<p>It is recommended to model the "things" as they are in real life, and not as they would appear on the screen. So don't create a model called <code>EntryForm</code>, <code>EntryFormTable</code> or <code>EntryFormColumn</code>, but rather name them what they are. Example based on your image:</p>
<pre><code>class CoveredWorkSet(Model):
    school = CharField(max_length=100)
    learning_area = CharField(max_length=100)
    teacher = ForeignKey(Teacher, on_delete=CASCADE)  # or just CharField if you don't have them in your database
    grade = CharField(max_length=20)

class CoveredWork(Model):
    covered_work_set = ForeignKey(CoveredWorkSet, on_delete=CASCADE, related_name='records')
    date = DateField()
    lesson = CharField(max_length=100)
    work_done = BooleanField()
    reflection = TextField()

class Signature(Model):
    """
    Represents a signature on either a CoveredWork record or a complete CoveredWorkSet
    """
    ROLE_CHOICES = [
        ('subject', 'Subject teacher'),
        ('class', 'Class teacher'),
        ('head', 'Head teacher'),
    ]
    teacher = ForeignKey(Teacher, on_delete=CASCADE, related_name='signatures')
    role = CharField(max_length=20, choices=ROLE_CHOICES)
    covered_work_set = ForeignKey(CoveredWorkSet, on_delete=CASCADE, null=True)
    covered_work = ForeignKey(CoveredWork, on_delete=CASCADE, null=True)
    date = DateTimeField()
    signature = ImageField()
</code></pre>
|
python|django|django-models
| 0 |
973 | 68,666,152 |
change a specific value in yaml file with lot of indentation using python
|
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: websitemanager
namespace: white
selfLink: /apis/apps/v1/namespaces/white/deployments/websitemanager
generation: 89
labels:
app: websitemanager
app.kubernetes.io/instance: websitemanager
backup: kube-noah
annotations:
deployment.kubernetes.io/revision: '75'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"75"},"labels":{"app":"websitemanager","app.kubernetes.io/instance":"websitemanager","backup":"kube-noah"},"name":"websitemanager","namespace":"white","selfLink":"/apis/apps/v1/namespaces/white/deployments/websitemanager"},"spec":{"progressDeadlineSeconds":600,"replicas":4,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"websitemanager"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"co.elastic.logs.json-logging/json.add_error_key":"true","co.elastic.logs.json-logging/json.keys_under_root":"true","co.elastic.logs.json-logging/json.message_key":"message_key","co.elastic.logs/enabled":"true"},"creationTimestamp":null,"labels":{"app":"websitemanager"}},"spec":{"containers":[{"env":[{"name":"APP","value":"websitemanager"},{"name":"PORT","value":"80"},{"name":"ALLOW_SKIPS","valueFrom":{"secretKeyRef":{"key":"ALLOW_SKIPS","name":"env-vars"}}},{"name":"AWS_ACCESS_KEY_ID","valueFrom":{"secretKeyRef":{"key":"AWS_ACCESS_KEY_ID","name":"env-vars"}}},{"name":"AWS_REGION","valueFrom":{"secretKeyRef":{"key":"AWS_REGION","name":"env-vars"}}},{"name":"AWS_SECRET_ACCESS_KEY","valueFrom":{"secretKeyRef":{"key":"AWS_SECRET_ACCESS_KEY","name":"env-vars"}}},{"name":"BO_RENDERER_URL","valueFrom":{"secretKeyRef":{"key":"BO_RENDERER_URL","name":"env-vars"}}},{"name":"BO_S3_URL","valueFrom":{"secretKeyRef":{"key":"BO_S3_URL","name":"env-vars"}}},{"name":"CHAT_V3","valueFrom":{"secretKeyRef":{"key":"CHAT_V3","name":"env-vars"}}},{"name":"CMS_RENDERER_URL","valueFrom":{"secretKeyRef":{"key":"CMS_RENDERER_URL","name":"env-vars"}}},{"name":"CRON_EXPRESSION","valueFrom":{"secretKeyRef":{"key":"CRON_EXPRESSION","name":"env-vars"}}},{"name":"CRON_GROUP_SCHEDULE_HOURS","valueFrom":{"secretKeyRef":{"key":"CRON_GROUP_SCHEDULE_HOURS","name":"env-vars"}}},{"name":"CRON_SCHEDULE_HOURS","valueFrom":{"secretKeyRef":{"key":"CRON_SCHEDULE_HOURS","name":"env-vars"}}},{"name":"DYNAMIC_PAGE_URL","valueFrom":{"secretKeyRef":{"key":"DYNAMIC_PAGE_URL","name":"env-vars"}}},{"name":"ENVIRONMENT_BUCKET_ENDPOINT","valueFrom":{"secretKeyRef":{"key":"ENVIRONMENT_BUCKET_ENDPOINT","name":"env-vars"}}},{"name":"ENVIRONMENT_BUCKET_NAME","valueFrom":{"secretKeyRef":{"key":"ENVIRONMENT_BUCKET_NAME","name":"env-vars"}}},{"name":"FIREBASE_KEY","valueFrom":{"secretKeyRef":{"key":"FIREBASE_KEY","name":"env-vars"}}},{"name":"FO_RENDERER_URL","valueFrom":{"secretKeyRef":{"key":"FO_RENDERER_URL","name":"env-vars"}}},{"name":"FO_S3_URL","valueFrom":{"secretKeyRef":{"key":"FO_S3_URL","name":"env-vars"}}},{"name":"GEFEN_ENV","valueFrom":{"secretKeyRef":{"key":"GEFEN_ENV","name":"env-vars"}}},{"name":"LEADS_SQS_DL_QUEUE","valueFrom":{"secretKeyRef":{"key":"LEADS_SQS_DL_QUEUE","name":"env-vars"}}},{"name":"LEADS_SQS_QUEUE","valueFrom":{"secretKeyRef":{"key":"LEADS_SQS_QUEUE","name":"env-vars"}}},{"name":"LOGGLY_INPUT_TOKEN","valueFrom":{"secretKeyRef":{"key":"LOGGLY_INPUT_TOKEN","name":"env-vars"}}},{"name":"LOGGLY_SUBDOMAIN","valueFrom":{"secretKeyRef":{"key":"LOGGLY_SUBDOMAIN","name":"env-vars"}}},{"name":"LOKALISE_API_KEY","valueFrom":{"secretKeyRef":{"key":"LOKALISE_API_KEY","name":"env-vars"}}},{"name":"LOKALISE_PROJECT_ID","valueFrom":{"secretKeyRef":{"key":"LOKALISE_PROJECT_ID","name":"env-vars"}}},{"name":"LP_RENDERER_URL","valueFrom":{"secretKeyRef":{"key":"LP_RENDERER_URL","name":"env-vars"}}},{"name":"MONGO_URL","valueFrom":{"secretKeyRef":{"key":"
MONGO_URL","name":"env-vars"}}},{"name":"MONGO_URLS","valueFrom":{"secretKeyRef":{"key":"MONGO_URLS","name":"env-vars"}}},{"name":"NEW_RELIC_LICENSE_KEY","valueFrom":{"secretKeyRef":{"key":"NEW_RELIC_LICENSE_KEY","name":"env-vars"}}},{"name":"NEXTGEN_S3_URL","valueFrom":{"secretKeyRef":{"key":"NEXTGEN_S3_URL","name":"env-vars"}}},{"name":"NODE_ENV","valueFrom":{"secretKeyRef":{"key":"NODE_ENV","name":"env-vars"}}},{"name":"OPERATION_BUCKET_ENV_PART","valueFrom":{"secretKeyRef":{"key":"OPERATION_BUCKET_ENV_PART","name":"env-vars"}}},{"name":"PG_URL","valueFrom":{"secretKeyRef":{"key":"PG_URL","name":"env-vars"}}},{"name":"QUICKSIGHT_ACCESS_KEY_ID","valueFrom":{"secretKeyRef":{"key":"QUICKSIGHT_ACCESS_KEY_ID","name":"env-vars"}}},{"name":"QUICKSIGHT_ACCOUNT_ID","valueFrom":{"secretKeyRef":{"key":"QUICKSIGHT_ACCOUNT_ID","name":"env-vars"}}},{"name":"QUICKSIGHT_SECRET_KEY_ID","valueFrom":{"secretKeyRef":{"key":"QUICKSIGHT_SECRET_KEY_ID","name":"env-vars"}}},{"name":"REDIS_URL","valueFrom":{"secretKeyRef":{"key":"REDIS_URL","name":"env-vars"}}},{"name":"ROLLBAR_API_TOKEN","valueFrom":{"secretKeyRef":{"key":"ROLLBAR_API_TOKEN","name":"env-vars"}}},{"name":"RS_URL","valueFrom":{"secretKeyRef":{"key":"RS_URL","name":"env-vars"}}},{"name":"SES_PASS","valueFrom":{"secretKeyRef":{"key":"SES_PASS","name":"env-vars"}}},{"name":"SES_USER","valueFrom":{"secretKeyRef":{"key":"SES_USER","name":"env-vars"}}},{"name":"STELLAR_REDIS_URL","valueFrom":{"secretKeyRef":{"key":"STELLAR_REDIS_URL","name":"env-vars"}}},{"name":"USE_HTTP_SERVER","valueFrom":{"secretKeyRef":{"key":"USE_HTTP_SERVER","name":"env-vars"}}},{"name":"USE_LEADS_V3","valueFrom":{"secretKeyRef":{"key":"USE_LEADS_V3","name":"env-vars"}}},{"name":"androidBuildName","valueFrom":{"secretKeyRef":{"key":"androidBuildName","name":"env-vars"}}},{"name":"iosAppStoreId","valueFrom":{"secretKeyRef":{"key":"iosAppStoreId","name":"env-vars"}}},{"name":"iosBuildName","valueFrom":{"secretKeyRef":{"key":"iosBuildName","name":"env-vars"}}}],"image":"gefenonline/websitemanager:develop-111","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":80,"scheme":"HTTP"},"initialDelaySeconds":5,"periodSeconds":3,"successThreshold":1,"timeoutSeconds":1},"name":"websitemanager","resources":{"limits":{"memory":"700Mi"},"requests":{"cpu":"150m","memory":"400Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","imagePullSecrets":[{"name":"docker-registry"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":null}
spec:
replicas: 4
selector:
matchLabels:
app: websitemanager
template:
metadata:
creationTimestamp: null
labels:
app: websitemanager
annotations:
co.elastic.logs.json-logging/json.add_error_key: 'true'
co.elastic.logs.json-logging/json.keys_under_root: 'true'
co.elastic.logs.json-logging/json.message_key: message_key
co.elastic.logs/enabled: 'true'
spec:
containers:
- name: websitemanager
image: 'gefenonline/websitemanager:develop-111'
resources:
limits:
memory: 700Mi
requests:
cpu: 150m
memory: 400Mi
livenessProbe:
httpGet:
path: /health
port: 80
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 3
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: docker-registry
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 89
replicas: 4
updatedReplicas: 4
readyReplicas: 4
availableReplicas: 4
conditions:
- type: Progressing
status: 'True'
lastUpdateTime: '2021-08-03T06:34:01Z'
lastTransitionTime: '2021-08-03T06:33:40Z'
reason: NewReplicaSetAvailable
message: ReplicaSet "websitemanager-7cdbcf6488" has successfully progressed.
- type: Available
status: 'True'
lastUpdateTime: '2021-08-04T13:57:07Z'
lastTransitionTime: '2021-08-04T13:57:07Z'
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
</code></pre>
<p>I have a YAML file and I want to change the <code>image</code> key, which is under <code>spec &gt; containers</code>, with a Python script. So far I have the script below, but I am not able to access that key-value pair, and I am quite new to this, so can anybody help me? I am not able to change the key-value pair in the deployment.yaml file.</p>
<pre><code>import yaml
with open('deployment.yaml', 'w') as f:
content = yaml.load(f)
# for k,v in content.items():
# print(k['selector'])
print(content['spec']['template']['spec']['containers'])
#doc['spec']['spec']['template']['spec']['containers']['image'] = 'letsencrypt-prod'
# for k,v in (content['spec']['template']['spec']['containers']):
# print(v)
yaml.dump(content, f)
# with open("deployment.yaml", "w") as f:
# yaml.dump(content, f)
</code></pre>
|
<p>You were just missing that the level at key <code>"containers"</code> is a list, so the zeroth index must be used to get to the <code>image</code> key:</p>
<pre class="lang-py prettyprint-override"><code>import yaml
with open('deployment.yaml', 'r') as fin:
content = yaml.load(fin, Loader=yaml.FullLoader)
content['spec']['template']['spec']['containers'][0]['image'] = "letsencrypt-prod"
with open('deployment2.yaml', 'w') as fout:
yaml.dump(content, fout)
</code></pre>
|
python|yaml
| 1 |
974 | 62,029,517 |
Visualising geospatial .tiff images with Rasterio
|
<p>I am trying to visualise a .tiff image in Jupiter Notebook using Rasterio. I am a Junior Data Scientist for an AgriTech company and we just got access to 8 data layers (NDVI etc.) for two farms in .tiff format.</p>
<p>Here is the metadata for one image:</p>
<pre><code>{'driver': 'GTiff', 'dtype': 'float32', 'nodata': -125125.0, 'width': 72, 'height': 87, 'count': 1, 'crs': CRS.from_epsg(32734), 'transform': Affine(20.0, 0.0, 364480.0,
0.0, -20.0, 6292100.0), 'blockxsize': 256, 'blockysize': 256, 'tiled': True, 'compress': 'lzw', 'interleave': 'band'}
</code></pre>
<p>When I run the following:</p>
<pre><code>ax = plt.figure(figsize=(15,10))
pic = rasterio.open('/content20180109_biow_Meerlust.tif','r',driver='GTiff',width=72,
height=87,count=1, nodata=-125125.0)
show(pic,with_bounds=False)
</code></pre>
<p>I get a very pixelated image:</p>
<p><a href="https://i.stack.imgur.com/WIyG8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WIyG8.png" alt="test image"></a></p>
<p>How do I visualise the image without it being so pixelated? My knowledge of the array adjustments behind these .tiff images is limited as I just started in the Agronomics field. Open to any suggestions.</p>
<p>My aim is to create a web app using Streamlit where I can overlay these images and create a short video of how the layers change over time.</p>
|
<p>Here are a couple of solutions that might help visualize multiple-band rasters with clarity. In both examples, <code>raster</code> is a <a href="https://rasterio.readthedocs.io/en/latest/api/rasterio.io.html" rel="nofollow noreferrer"><code>rasterio.DatasetReader</code></a> with multiple bands (<a href="https://rasterio.readthedocs.io/en/latest/topics/reading.html" rel="nofollow noreferrer">indexed at 1</a>).</p>
<p><strong>1. Single Image</strong></p>
<p>To view all layers in a single 2D plane, the bands have to be concatenated:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
bands = []
for i in range(raster.count):
bands.append(raster.read(i+1, out_dtype=np.intc))
plt.title("Full Color Raster")
plt.imshow(np.dstack(bands))  # stack to (height, width, bands) for imshow
plt.show()
</code></pre>
<p>Unfortunately, because of limitations from pyplot's <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html" rel="nofollow noreferrer"><code>imshow()</code></a> function, this method only works with a few layers (traditionally RGB). Feel free to use datatypes other than <code>np.intc</code>.</p>
<br />
<p><strong>2. Visualize Layers Separately</strong></p>
<p>The <a href="https://earthpy.readthedocs.io/en/latest/api/earthpy.plot.html" rel="nofollow noreferrer"><code>earthpy.plot</code></a> module has several clean options for visualizing raster layers, including the convenient <a href="https://earthpy.readthedocs.io/en/latest/gallery_vignettes/plot_bands_functionality.html" rel="nofollow noreferrer"><code>plot_bands()</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import earthpy.plot as ep
bands = []
# Read the raster's bands to an array
for i in range(raster.count):
bands.append(raster.read((i+1), out_dtype=raster.dtypes[i]))
# Convert to an iterable np.ndarray and plot in a 3-column grid
ep.plot_bands(np.array(bands), cols=3)
</code></pre>
<br />
<br />
<p>Really hope this helps! <strong>This is my first Stack Overflow response</strong>, so let me know if there's anything critical I omitted.</p>
|
python|jupyter-notebook|visualization|geospatial|rasterio
| 2 |
975 | 67,354,339 |
How stop re-writing log data during program re-run / Python
|
<p>I'm having an issue with logging. Every time I re-run my program, it overwrites the log data in the file, but I need to keep the previous data as well. I added an if statement so that when the file already exists a new one isn't created, as I thought that would help, but it didn't solve my problem. Maybe someone knows the issue? Thank you in advance!</p>
<pre><code>from tkinter import *
from tkinter import filedialog
import easygui
import shutil
import os
from tkinter import filedialog
from tkinter import messagebox as mb
from pathlib import Path
import logging
from datetime import date
def open_window():
read=easygui.fileopenbox()
return read
#logging config
if Path('app.log').is_file():
print ("File exist")
else:
logging.basicConfig(filename='app.log', filemode="w", format='%(name)s - %(levelname)s - %(message)s ')
print ("File doesn't exist and will be created")
LOG_for="%(asctime)s, log content: %(message)s"
logger=logging.getLogger()
logger.setLevel(logging.DEBUG)
# Function for opening the
# file explorer window
def browseFiles():
filename = filedialog.askopenfilename(initialdir = "/",
title = "Select a File",
filetypes = (("Text files",
"*.txt*"),
("all files",
"*.*")))
# Change label contents
label_file_explorer.configure(text="File Opened: "+filename)
# move file function
def move_file():
source = open_window()
filename = os.path.basename(source)
destination =filedialog.askdirectory()
dest = os.path.join(destination,filename)
if(source==dest):
mb.showinfo('confirmation', "Source and destination are same, therefore file will be moved to Home catalog")
newdestination = Path("/home/adminas")
shutil.move(source, newdestination)
logging.shutdown()
#current_time()
logging.basicConfig(filename='app.log', filemode="w", format=LOG_for)
logging.info('File was moved to' + newdestination)
else:
shutil.move(source, destination)
mb.showinfo('confirmation', "File Moved !")
#current_time()
logging.basicConfig(filename='app.log', filemode="w", format=LOG_for)
logging.info('File was moved to' + destination)
# Create the root window
window = Tk()
# Set window title
window.title('File Explorer')
# Set window size
window.geometry("400x400")
#Set window background color
window.config(background = "white")
# Create a File Explorer label
label_file_explorer = Label(window,
text = "File Explorer using Tkinter",
width = 50, height = 4,
fg = "blue")
button_explore = Button(window,
text = "Browse Files",
command = browseFiles)
button_move = Button(window,
text = "Move File",
command = move_file)
button_exit = Button(window,
text = "Exit",
command = exit)
# Grid method is chosen for placing
# the widgets at respective positions
# in a table like structure by
# specifying rows and columns
label_file_explorer.grid(column = 1, row = 1)
button_move.grid(column = 1, row = 2)
button_exit.grid(column = 1,row = 3)
# Let the window wait for any events
logging.shutdown()
window.mainloop()
</code></pre>
<p>In the shell it prints properly, but every time I run the program again it overwrites the log. For example, new data like the entry below is written, and after a re-run it vanishes and is replaced by the new one:</p>
<pre><code>2021-05-02 11:04:15,384, log level: INFO, log content: File was moved to/home/adminas/Documents
</code></pre>
|
<p>I don't quite understand your problem, but if you don't want to overwrite log files, change the <a href="https://docs.python.org/3/library/logging.html#logging.basicConfig" rel="nofollow noreferrer"><code>filemode</code></a> to <code>'a'</code>, which will append new log records to your log file instead of truncating it.</p>
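<p>For example (a minimal sketch; also note that <code>logging.basicConfig</code> only takes effect the first time it is called on the root logger, so configure it once at startup rather than inside each function):</p>
<pre><code>import logging

logging.basicConfig(filename='app.log',
                    filemode='a',  # append instead of overwriting
                    format='%(asctime)s, log content: %(message)s',
                    level=logging.DEBUG)

logging.info('this record is appended on every run')
</code></pre>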
|
python|audit-logging
| 3 |
976 | 68,306,000 |
How the iter() method work in the str class?
|
<p>Why, when you call the <code>__next__()</code> method on a <code>str</code>, does it say that it does not have this method?</p>
<pre><code>b = 'hello'
b.__next__() # give AttributeError: 'str' object has no attribute '__next__'
a = iter(b)
a.__next__() # output == 'h'
</code></pre>
<p>Doesn't the <code>__iter__()</code> method return <code>self</code>?
Well, if it returned <code>self</code>, it would be a string, which does not have the <code>__next__()</code> method.</p>
<p>So how does it return <code>"h"</code>?</p>
|
<p><a href="https://www.programiz.com/python-programming/methods/built-in/iter" rel="nofollow noreferrer"><code>iter</code></a> only returns its argument if the value is an iterator. <code>str</code> is <em>not</em> an iterator; it is an iterable whose <code>__iter__</code> method returns a <code>str_iterator</code> object.</p>
<pre><code>>>> a = iter(b)
>>> type(a)
<class 'str_iterator'>
</code></pre>
<p>The <code>str_iterator</code> object implements <code>__next__</code>, and maintains iteration state separate from any other iterator over the same object.</p>
<pre><code>>>> b = 'hello'
>>> a1 = iter(b)
>>> a2 = iter(b)
>>> next(a1)
'h'
>>> next(a2)
'h'
>>> next(a2)
'e'
>>> next(a2)
'l'
>>> next(a1)
'e'
</code></pre>
<hr />
<p>You could picture <code>str_iterator</code> being defined something like</p>
<pre><code>class str_iterator:
def __init__(self, s):
self.s = s
self.i = 0
def __iter__(self):
return self
def __next__(self):
if self.i == len(self.s):
raise StopIteration
i = self.i
self.i += 1
return self.s[i]
class str:
...
def __iter__(s):
return str_iterator(s)
</code></pre>
<p>The iterator remembers is position in the string between calls to <code>__next__</code>. The job of <code>__next__</code> is to advance the "pointer" and to return the character at the correct position.</p>
|
python
| 6 |
977 | 59,796,619 |
"errorMessage": "local variable 'action' referenced before assignment", "errorType": "UnboundLocalError"
|
<p>I tried to make the variable action global but it didn't work. It seems that any variable inside the else statement is isolated from the rest of the code, although they are in the same block of code in the for loop.</p>
<pre><code>for group in auto_scaling_groups:
if servers_need_to_be_started(group):
pass
else:
action = "Stopping"
min_size = 0
max_size = 0
desired_capacity = 0
print("Version is {}".format(botocore.__version__))
print (action + ": " + group) #Error in this line
response = client.update_auto_scaling_group(
AutoScalingGroupName=group,
MinSize=min_size,
MaxSize=max_size,
DesiredCapacity=desired_capacity,
)
print (response)
</code></pre>
|
<p>The error is saying "after executing the "then" block of the if statement, <code>action</code> is not set but is used on the error line". The fix is to ensure <code>action</code>, <code>min_size</code>, <code>max_size</code>, and <code>desired_capacity</code> are assigned when the "then" block of the if statement is executed.</p>
|
python
| 2 |
978 | 59,686,521 |
Explain curious behavior of Pandas.Series.interpolate
|
<pre><code>s = pd.Series([0, 2, np.nan, 8])
print(s)
interp = s.interpolate(method='polynomial', order=2)
print(interp)
</code></pre>
<p>This prints:</p>
<pre><code>0 0.0
1 2.0
2 NaN
3 8.0
dtype: float64
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
</code></pre>
<p>Now if I add one more <code>np.nan</code> to the <code>series</code>, </p>
<pre><code>s = pd.Series([0, 2, np.nan, np.nan, 8])
print(s)
interp = s.interpolate(method='polynomial', order=2)
print(interp)
</code></pre>
<p>I get much more accurate results:</p>
<pre><code>0 0.0
1 2.0
2 NaN
3 NaN
4 8.0
dtype: float64
0 0.0
1 2.0
2 4.0
3 6.0
4 8.0
dtype: float64
</code></pre>
<p>Is <code>Series.interpolate</code> <code>recursive</code> in that it uses interpolated values for further interpolated values, which then can affect previously interpolated values?</p>
|
<p><strong>You are actually interpolating two different functions!</strong> <br></p>
<p>In the first case you look for a function that goes through the following points: <br>
(0,0), (1,2), (<strong>3</strong>,8) <br>
But in the second case you look for a function that goes through the following points: <br>
(0,0), (1,2), (<strong>4</strong>,8)<br></p>
<p>The indices of a <code>pd.Series</code> represent the points on the x-Axis and the data of a <code>pd.Series</code> represents the points on the y-Axis. <br></p>
<p>So try the following change in your first example:
<br>
<s><code>s = pd.Series([0, 2, np.nan, 8])</code></s> <br></p>
<pre><code>s = pd.Series([0, 2, np.nan, 8], [0,1,2,4])
s.interpolate(method='polynomial', order=2)
</code></pre>
<p>You should get the output:</p>
<pre><code>0 0.0
1 2.0
2 4.0
4 8.0
dtype: float64
</code></pre>
<p>As an alternative you could also do:
<code>s = pd.Series([0, 2, np.nan, 8], [0,1,3,4])</code>
<br> and the output: <br></p>
<pre><code>0 0.0
1 2.0
3 6.0
4 8.0
dtype: float64
</code></pre>
<p>Hope this helps.</p>
|
pandas|numpy|scipy|interpolation
| 1 |
979 | 25,166,626 |
How to set the <title> tag for IPython notebook HTML output?
|
<p>I'm using an IPython notebook to store mixed documentation/examples for a project. I am using <code>ipython nbconvert notebook.ipynb</code> to render HTML output (uses <code>pandoc</code> internally). The problem I have is that <code>nbconvert</code> insists on giving the HTML output an ugly blank title tag:</p>
<pre class="lang-html prettyprint-override"><code><title>[]</title>
</code></pre>
<p>I've looked through all the options described in <code>ipython nbconvert --help-all</code> and can't find anything that will allow me to change the title.</p>
<pre class="lang-bash prettyprint-override"><code>ipython nbconvert --to html --template full notebook.ipynb
</code></pre>
<p>Any help?</p>
|
<p>The template which is being used is in:
<a href="https://github.com/jupyter/nbconvert/blob/master/nbconvert/templates/html/full.tpl#L11" rel="noreferrer">https://github.com/jupyter/nbconvert/blob/master/nbconvert/templates/html/full.tpl#L11</a></p>
<p>In particular, the line I've highlighted defines the html title. This comes from the metadata dictionary (which you can edit with Edit->"Edit notebook metadata" in the notebook itself).</p>
<p>Interestingly though, for me, the title <em>is</em> populated by default - it is the name of the notebook file.</p>
<p>Anyway, hope this is helpful.</p>
|
python-2.7|ipython-notebook
| 7 |
980 | 42,658,331 |
Python 3 on macOS: how to set process affinity
|
<p>I am trying to restrict the number of CPUs used by Python (for benchmarking & to see if it speeds up my program).</p>
<p>I have found a few Python modules for achieving this ('os', 'affinity', 'psutil') except that their methods for changing affinity only works with Linux (and sometimes Windows). There is also a suggestion to use the 'taskset' command (<a href="https://stackoverflow.com/questions/15639779/why-does-multiprocessing-use-only-a-single-core-after-i-import-numpy/15641148#15641148">Why does multiprocessing use only a single core after I import numpy?</a>) but this command not available on macOS as far as I know. </p>
<p>Is there a (preferable clean & easy) way to change affinity while running Python / iPython on macOS? It seems like changing processor affinity in Mac is not as easy as in other platforms (<a href="http://softwareramblings.com/2008/04/thread-affinity-on-os-x.html" rel="nofollow noreferrer">http://softwareramblings.com/2008/04/thread-affinity-on-os-x.html</a>). </p>
|
<p>Not possible. See <a href="https://developer.apple.com/library/content/releasenotes/Performance/RN-AffinityAPI/" rel="nofollow noreferrer">Thread Affinity API Release Notes</a>:</p>
<blockquote>
<p>OS X does not export interfaces that identify processors or control thread placementβexplicit thread to processor binding is not supported. Instead, the kernel manages all thread placement. Applications expect that the scheduler will, under most circumstances, run its threads using a good processor placement with respect to cache affinity.</p>
</blockquote>
<p>Note that thread affinity is something you'd consider fairly late when optimizing a program, there are a million things to do which have a larger impact on your program.</p>
<p>Also note that Python is particularly bad at multithreading to begin with.</p>
|
python|macos|ipython|affinity
| 3 |
981 | 57,509,250 |
dataframe into to list of dictonaries without the index
|
<p>I have a dataframe as below </p>
<pre><code> NY FL IL GA CA
80.0 30.0 60.0 NaN NaN
90.0 NaN NaN 10.0 20.0
</code></pre>
<p>When I do the following:</p>
<pre><code>df.apply(lambda x : x.dropna().to_dict(),axis=1)
</code></pre>
<p>I get:</p>
<pre><code>0 {'NY': 80.0, 'FL': 30.0, 'IL': 60.0}
1 {'NY': 90.0, 'GA': 10.0, 'CA': 20.0}
dtype: object
</code></pre>
<p>but what I want is:</p>
<pre><code>list = [{'NY':80,'FL':30, 'IL':60}, {'NY':90, 'GA':10, 'CA':20 }]
</code></pre>
<p>How do I achieve this?</p>
|
<p>Try this: <code>[{k: v for (k, v) in d.items() if not np.isnan(v)} for d in df.to_dict(orient="records")]</code></p>
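<p>For reference, a self-contained version using the data from the question (note that <code>orient="records"</code> is the documented spelling for row-wise dictionaries):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'NY': [80.0, 90.0],
                   'FL': [30.0, np.nan],
                   'IL': [60.0, np.nan],
                   'GA': [np.nan, 10.0],
                   'CA': [np.nan, 20.0]})

records = [{k: v for k, v in d.items() if not np.isnan(v)}
           for d in df.to_dict(orient="records")]
print(records)
# [{'NY': 80.0, 'FL': 30.0, 'IL': 60.0}, {'NY': 90.0, 'GA': 10.0, 'CA': 20.0}]
</code></pre>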
|
python|pandas
| 1 |
982 | 57,611,567 |
Stripe API PaymentIntent and Billing with Python
|
<p>I try to use the new Stripe's PaymentIntent system to be ready when SCA will be launched in EU.</p>
<p>I only use one-time payment.</p>
<p>I succeeded in making the payment with the PaymentIntent <a href="https://stripe.com/docs/payments/checkout/server#billing-address-collection" rel="nofollow noreferrer">following Stripe's documentation</a>. But I'm unable to create an invoice for every payment (I must have one according to the law), even though I've tried a lot of things.</p>
<p>But first, I think I need to show my code to introduce the troubles I have.</p>
<p><strong>In my view</strong>, I create a Stripe Session :</p>
<pre><code>public_token = settings.STRIPE_PUBLIC_KEY
stripe.api_key = settings.STRIPE_PRIVATE_KEY
stripe_sesssion = stripe.checkout.Session.create(
payment_method_types=['card'],
line_items=[{
'name':'My Product',
'description': description,
'amount': amount,
'currency': 'eur',
'quantity': 1,
}],
customer=customer_id,
success_url=f'{settings.SITE_URL}/ok.html',
cancel_url=f'{settings.SITE_URL}/payment_error.html',
)
</code></pre>
<p>Then, the user click on the "Purchase" button on my web page and is redirected to the Stripe's Checkout page.</p>
<p><a href="https://i.stack.imgur.com/oasjw.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oasjw.gif" alt="enter image description here"></a></p>
<p>After the user filled his payment card informations, Stripe call my Webhook (according to the checkout.session.completed event triggered).</p>
<p>Here's my <strong>webhook function code</strong> :</p>
<pre><code>@csrf_exempt
def webhook_payment_demande(request):
payload = request.body
sig_header = request.META['HTTP_STRIPE_SIGNATURE']
event = None
if settings.DEBUG is False:
endpoint_secret = "whsec_xxx"
else:
endpoint_secret = "whsec_xxx"
try:
event = stripe.Webhook.construct_event(
payload, sig_header, endpoint_secret
)
except ValueError as e:
# Invalid payload
return HttpResponse(status=400)
except stripe.error.SignatureVerificationError as e:
# Invalid signature
return HttpResponse(status=400)
# Handle the event
if event['type'] == 'checkout.session.completed':
stripe_session = event['data']['object']
invoice_item = stripe.InvoiceItem.create(
customer=customer_id,
amount=str(amount),
currency="eur",
description=description
)
invoice = stripe.Invoice.create(
customer=customer_id,
)
invoice_payment = stripe.Invoice.pay(
invoice,
source=card
)
[...] Deliver product by e-mail and stuff [...]
</code></pre>
<ul>
<li><em>If I execute that code</em>, the payment is done a first time (PaymentIntent) but also a second time to finalize the invoice I create after. So my customer payed twice the amount.</li>
<li><em>If I remove the Invoice.pay function</em>, Stripe will charge my client one hour after anyway using an existing payment card into Stripe.</li>
<li><em>If I don't create any invoice manually</em> inside my Web hook function,
Stripe doesn't make one automatically.</li>
<li><em>If I create the invoice into my first view</em>, just right after the Stripe
Checkout Session and before my customer fill his card informations, he will be charged for the amount even if
he didn't finalize the payment (because he had a existing card into
Stripe).</li>
</ul>
<p>I've been reading the documentation for days and haven't found a good tutorial on making a one-time payment with SCA compatibility and producing an invoice afterwards.</p>
<p>Has anyone already made their Stripe payment API integration SCA-compliant and found a way to deal with this?</p>
<p>Many thanks for your help! :)</p>
|
<p>Your code is creating one-time charges via Checkout. What you are looking for is the email receipt feature as documented here <a href="https://stripe.com/docs/receipts" rel="nofollow noreferrer">https://stripe.com/docs/receipts</a></p>
<p>This lets you email your customer after a successful charge on your account which should act as an Invoice. You can also link to this email receipt directly by looking at the Charge's <code>receipt_url</code> property: <a href="https://stripe.com/docs/api/charges/object#charge_object-receipt_url" rel="nofollow noreferrer">https://stripe.com/docs/api/charges/object#charge_object-receipt_url</a></p>
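<p>If all you need is a link to that hosted receipt from your webhook, a minimal sketch (variable names are illustrative, and this assumes the Checkout Session carries a <code>payment_intent</code> as in the question's one-time-payment flow):</p>
<pre><code>payment_intent = stripe.PaymentIntent.retrieve(
    stripe_session['payment_intent'])
# A succeeded one-time payment carries a Charge with a hosted receipt page
receipt_url = payment_intent.charges.data[0].receipt_url
</code></pre>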
|
python|django|stripe-payments
| 0 |
983 | 44,554,904 |
How to compute the mean for each channel in an image in tensorflow
|
<p>What is the proper way to compute the mean for each channel in an image in tensorflow?</p>
<p>Any help is much appreciated!!</p>
|
<p>Just use <a href="https://www.tensorflow.org/api_docs/python/tf/reduce_mean" rel="nofollow noreferrer"><code>tf.reduce_mean()</code></a> and specify the axis:</p>
<blockquote>
<p>axis: The dimensions to reduce. If None (the default), reduces all
dimensions.</p>
</blockquote>
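<p>For example, for an image tensor in HWC layout, reducing over the spatial axes leaves one mean per channel (a minimal sketch, written TF 2.x style):</p>
<pre><code>import tensorflow as tf

image = tf.random.uniform([224, 224, 3])             # hypothetical HWC image
channel_means = tf.reduce_mean(image, axis=[0, 1])   # shape: [3]

# For a batch in NHWC layout:
batch = tf.random.uniform([8, 224, 224, 3])
per_channel = tf.reduce_mean(batch, axis=[0, 1, 2])  # shape: [3]
</code></pre>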
|
image|tensorflow|mean|channel
| 2 |
984 | 66,542,847 |
Socket receiving incomplete data
|
<p>I have a p2p network and my socket sends are either sending incomplete data or breaking before they send the complete data; I'm not exactly sure which is happening, or if something else is wrong here. Below is my sending code, where I loop and send until the entire msg is sent. In my listener code further below, my <code>json.loads</code> throws exceptions, and when I print out the buffer it looks like some bytes are missing from the sends, since the dictionaries I'm sending are incomplete. I'm not sure what I'm doing wrong here. Does anybody have any idea?</p>
<pre><code>def unicast(self, msg, recipient_node):
#print('UNICAST HERE')
msg = json.dumps(msg)
msg_len = len(msg) # add the msg length header with the delimiter here
msg = f'{msg_len}--->{msg}'
byte_msg = bytes(msg, 'utf-8')
unicast_len = len(byte_msg)
total_sent = 0
with self.socket_lock:
try:
while total_sent < unicast_len:
#time.sleep(0.01) # necessary for GIL issues (connection reset by peer)
sent = recipient_node['socket'].send(byte_msg[total_sent:])
if sent == 0: # disconnected?
print('HERE BOIS')
break
total_sent += sent
except Exception as e:
print('HERE BOIS #2')
pass # disconnected socket or some error that will be handled by listener
</code></pre>
<p>Here is my listener code where I have a buffer that I believe will parse complete json strings once they've been received accordingly.</p>
<pre><code>msg_length = None
buffer = ''
while inputs:
    #print(socket_to_identifier)
    readable, writable, exceptional = select.select(inputs, outputs, inputs)
    for s in readable:
        if s is server_socket:
            client_socket, client_address = server_socket.accept()
            self.sockets_listening += 1
            client_socket.setblocking(0)
            inputs.append(client_socket)
            message_queues[client_socket] = queue.Queue()
        else:
            data = s.recv(1024)
            if data:
                #print('NEW RECEIVE HERE')
                message_queues[s].put(data)
                if s not in outputs:
                    outputs.append(s)
                decoded_recv = data.decode('utf-8')
                buffer += decoded_recv
                while True:
                    if msg_length is None:
                        if '--->' not in buffer:
                            break
                        length_str, ignored, buffer = buffer.partition('--->')
                        msg_length = int(length_str)
                    if len(buffer) < msg_length:
                        break
                    msg = buffer[:msg_length]
                    buffer = buffer[msg_length:]
                    msg_length = None
                    #print(msg)
                    msg = json.loads(msg)
                    if 'node_identifier' in msg:  # this is the on connect msg sent
                        socket_to_identifier[s] = msg['node_identifier']
                        break
                    #print(msg)
                    #b_del_thread = Thread(target=self.basic_deliver, args=(msg,))
                    #b_del_thread.start()
                    self.basic_deliver(msg)
            else:
                self.handle_disconnect(socket_to_identifier.get(s, None))
                print('disconnected boys')
                if s in outputs:
                    outputs.remove(s)
                inputs.remove(s)
                s.close()
                del message_queues[s]
    for s in exceptional:
        self.handle_disconnect(socket_to_identifier.get(s, None))
        inputs.remove(s)
        if s in outputs:
            outputs.remove(s)
        s.close()
        del message_queues[s]
</code></pre>
|
<p>Welp, I figured it out. Since I'm using select as my non-blocking listener, one piece of code handles EVERY socket connection, so consecutive recvs need not come from the same node. Instead of one buffer string, I now use a buffer map that maps each client socket to its own buffer, so that it doesn't intermix messages from different sockets.</p>
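<p>Roughly, the fix looks like this (a sketch only; the names <code>buffers</code>, <code>msg_lengths</code>, <code>on_accept</code>, <code>on_recv</code> and <code>handle_message</code> are illustrative):</p>
<pre><code># One buffer (and pending message length) per client socket, keyed by the
# socket object, so interleaved recv()s from different peers never mix.
buffers = {}       # socket -> accumulated string data
msg_lengths = {}   # socket -> length of the message currently being read

def on_accept(client_socket):
    buffers[client_socket] = ''
    msg_lengths[client_socket] = None

def on_recv(s, data):
    buffers[s] += data.decode('utf-8')
    while True:
        if msg_lengths[s] is None:
            if '--->' not in buffers[s]:
                return  # length header not complete yet
            length_str, _, buffers[s] = buffers[s].partition('--->')
            msg_lengths[s] = int(length_str)
        if len(buffers[s]) < msg_lengths[s]:
            return  # message body not complete yet
        msg = buffers[s][:msg_lengths[s]]
        buffers[s] = buffers[s][msg_lengths[s]:]
        msg_lengths[s] = None
        handle_message(s, msg)  # hypothetical dispatch of the complete message
</code></pre>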
|
python|sockets|tcp
| 0 |
985 | 65,210,680 |
Python tkinter Grid Manager doesn't place button on left with sticky = tk.W or sticky = 'w'
|
<p>With two frames within a frame, button placed in top frame, sticky=tk.W doesn't seem to have any effect.</p>
<pre><code>import tkinter as tk

def _exit():
    raise SystemExit

root = tk.Tk()
frame = tk.Frame(root, width=1200, height=650, bg='Yellow')
top_frame = tk.Frame(frame, width=1200, height=50, bg='green')
bot_frame = tk.Frame(frame, width=600, height=600, bg='skyblue')
exit_button = tk.Button(top_frame, text='Exit', command=_exit)

frame.grid()
top_frame.grid(column=0, row=0)
bot_frame.grid(column=0, row=1)
exit_button.grid(column=0, row=0, sticky=tk.W)

root.mainloop()
</code></pre>
<p>If I delete the bottom frame, the sticky works :</p>
<pre><code>import tkinter as tk

def _exit():
    raise SystemExit

root = tk.Tk()
frame = tk.Frame(root, width=1200, height=650, bg='Yellow')
top_frame = tk.Frame(frame, width=1200, height=50, bg='green')
exit_button = tk.Button(top_frame, text='Exit', command=_exit)

frame.grid()
top_frame.grid(column=0, row=0)
exit_button.grid(column=0, row=0, sticky=tk.W)

root.mainloop()
</code></pre>
|
<p>When you place a button inside of <code>top_frame</code>, it will shrink to fit the button. The button <em>is</em> to the left of <code>top_frame</code>, but <code>top_frame</code> is only as wide as the button and is centered in its space. Therefore it appears that the button isn't on the left, but it is. The button is on the left edge of <code>top_frame</code>, but <code>top_frame</code> is centered in <code>frame</code>.</p>
<p>If you want <code>top_frame</code> to fill the width of the window (or the width of the space allocated to it) you need to use <code>sticky</code> with it, too.</p>
<pre><code>top_frame.grid(column=0,row=0, sticky="ew")
</code></pre>
|
python|tkinter|grid|sticky
| 2 |
986 | 68,470,432 |
How to change PyDev version
|
<p>For Python 3.9, I've installed the latest PyDev updates in Eclipse, but my projects do not list Python 3.9 as a grammar version. What is the problem here? Is there any way to select the latest grammar version in the project properties?</p>
|
<p>The Python grammar for 3.8 and 3.9 is the same (thus, you can just use the Python 3.8 grammar for 3.9).</p>
<p>I'll update the UI in PyDev so that this is clearer...</p>
|
python|eclipse|pydev|python-3.9
| 0 |
987 | 68,795,096 |
Use Apache beam `GroupByKey` and construct a new column - Python
|
<p>From this question: <a href="https://stackoverflow.com/questions/68794856/how-to-group-data-and-construct-a-new-column-python-pandas/68794973#68794973">How to group data and construct a new column - python pandas?</a>, I know how to groupby multiple columns and construct a new unique id by using <code>pandas</code>, but if I want to use <code>Apache beam</code> in Python to achieve the same thing that is described in that question, how can I achieve it and then write the new data to a newline delimited JSON format file (each line is one <code>unique_id</code> with an array of objects that belong to that unique_id)?</p>
<p>Assuming the dataset is stored in a csv file.</p>
<p>I'm new to Apache beam, here's what I have now:</p>
<pre><code>import pandas
import apache_beam as beam
from apache_beam.dataframe.io import read_csv

with beam.Pipeline() as p:
    df = p | read_csv("example.csv", names=cols)
    agg_df = df.insert(0, 'unique_id',
                       df.groupby(['postcode', 'house_number'], sort=False).ngroup())
    agg_df.to_csv('test_output')
</code></pre>
<p>This gave me an error:</p>
<pre><code>NotImplementedError: 'ngroup' is not yet supported (BEAM-9547)
</code></pre>
<p>This is really annoying, I'm not very familiar with Apache beam, can someone help please...</p>
<p>(ref: <a href="https://beam.apache.org/documentation/dsls/dataframes/overview/" rel="nofollow noreferrer">https://beam.apache.org/documentation/dsls/dataframes/overview/</a>)</p>
|
<p>Assigning consecutive integers to a set is not something that's very amenable to parallel computation, and it's also not very stable. Is there any reason another identifier (e.g. the tuple <code>(postcode, house_number)</code> or its hash) would not be suitable?</p>
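<p>A minimal sketch of that idea (not from the original answer; it assumes the CSV has header columns named <code>postcode</code> and <code>house_number</code>, and the helper name <code>add_unique_id</code> is illustrative):</p>
<pre><code>import hashlib
import json

import apache_beam as beam
from apache_beam.dataframe.convert import to_pcollection
from apache_beam.dataframe.io import read_csv


def add_unique_id(row):
    # Derive a stable, parallel-friendly id from the grouping key itself
    # instead of assigning consecutive integers with ngroup().
    key = f"{row.postcode}|{row.house_number}"
    uid = hashlib.md5(key.encode("utf-8")).hexdigest()
    return {"unique_id": uid, **row._asdict()}


with beam.Pipeline() as p:
    df = p | read_csv("example.csv")
    (to_pcollection(df)
     | "AddId" >> beam.Map(add_unique_id)
     | "ToJson" >> beam.Map(json.dumps)
     | "Write" >> beam.io.WriteToText("test_output.json"))
</code></pre>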
|
python|json|csv|apache-beam|apache-beam-io
| 0 |
988 | 71,745,071 |
Cannot update a Django user profile
|
<p>I have a React app interacting with Django to create and update user profiles. When I try to update a user profile, specifically the first name and last name properties, I get a response indicating that the profile has not been updated:</p>
<pre><code>{"username":"***","email":"[email protected]","password":"****","first_name":"","last_name":""}
</code></pre>
<p>This is what I have in my urls.py:</p>
<pre><code>path('update_profile/<int:pk>', views.UpdateProfileView.as_view(), name='update_profile'),
</code></pre>
<p>UpdateProfileView:</p>
<pre><code>class UpdateProfileView(generics.UpdateAPIView):
    queryset = User.objects.all()
    serializer_class = UpdateUserSerializer

    def profile(request):
        if request.method == 'PUT':
            try:
                user = User.objects.get(id=request.user.id)
                serializer_user = UpdateUserSerializer(user, many=True)
                if serializer_user.is_valid():
                    serializer_user.save()
                    return Response(serializer_user)
            except User.DoesNotExist:
                return Response(data='no such user!', status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>UpdateUserSerializer:</p>
<pre><code>class UpdateUserSerializer(serializers.ModelSerializer):
    email = serializers.EmailField(required=False)

    class Meta:
        model = User
        fields = ['username', 'email', 'password', 'first_name', 'last_name']
        extra_kwargs = {'username': {'required': False},
                        'email': {'required': False},
                        'password': {'required': False},
                        'first_name': {'required': False},
                        'last_name': {'required': False}}

    def validate_email(self, value):
        user = self.context['request'].user
        if User.objects.exclude(pk=user.pk).filter(email=value).exists():
            raise serializers.ValidationError({"email": "This email is already in use."})
        return value

    def validate_username(self, value):
        user = self.context['request'].user
        if User.objects.exclude(pk=user.pk).filter(username=value).exists():
            raise serializers.ValidationError({"username": "This username is already in use."})
        return value

    def update(self, instance, validated_data):
        user = self.context['request'].user
        if user.pk != instance.pk:
            raise serializers.ValidationError({"authorize": "You don't have permission for this user."})
        instance.first_name = validated_data['first_name']
        instance.last_name = validated_data['last_name']
        instance.email = validated_data['email']
        instance.username = validated_data['username']
        instance.save()
        return instance
</code></pre>
<p>Any ideas where I am going wrong? To summarize one more time: I would like to enable a user to successfully update the first name and last name properties associated with a particular user.</p>
|
<p>Edit <code>urls.py</code> like this</p>
<pre><code>path('update_profile/<int:pk>/', views.UpdateProfileView.as_view(), name='update_profile'),
</code></pre>
<p>I hope this will work</p>
|
python|django
| 0 |
989 | 62,659,725 |
How do I set up the range for the y axis?
|
<p>I'm having a bit of a hard time figuring out how to plot this graph correctly. What I'm doing is:</p>
<pre><code>names = ['Graves', 'Fallecidos', 'Moderados', 'Asintomaticos', 'Leves']
values = [str(df_2035_Gra), str(df_2035_fal), str(df_2035_Mod), str(df_2035_Asin), str(df_2035_leve)]
# Values: 69, 85, 876, 3593, 27572
plt.figure(figsize=(9, 3))
plt.bar(names, values)
plt.suptitle('Pacientes x estado')
plt.ylabel('Num. pacientes')
plt.show()
</code></pre>
<p>And what im getting is:</p>
<p><a href="https://i.stack.imgur.com/tqF3K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqF3K.png" alt="enter image description here" /></a></p>
<p>What I don't get is how to make the Y-axis range go from 0 to my highest value (27572).</p>
|
<p>Ivan :)</p>
<p>You can use <code>plt.ylim(0, 27572)</code></p>
<p>You can check the documentation <a href="https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.ylim.html?highlight=ylim#matplotlib.pyplot.ylim" rel="nofollow noreferrer">here</a></p>
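<p>In context, a minimal sketch (note one assumption beyond the original answer: the values are passed as numbers rather than <code>str(...)</code>, which matplotlib would otherwise treat as categories):</p>
<pre><code>import matplotlib.pyplot as plt

names = ['Graves', 'Fallecidos', 'Moderados', 'Asintomaticos', 'Leves']
values = [69, 85, 876, 3593, 27572]  # numeric values

plt.figure(figsize=(9, 3))
plt.bar(names, values)
plt.ylim(0, 27572)  # y-axis from 0 to the highest value
plt.suptitle('Pacientes x estado')
plt.ylabel('Num. pacientes')
plt.show()
</code></pre>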
<p>I hope it helps!</p>
|
python|matplotlib
| 0 |
990 | 70,273,868 |
pandas: replace with a dictionary does not work with string of sentences
|
<p>I have a dataframe as follows:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'text':['Lary Page is visiting today',' His boss, Maria Jackson is here.']})
</code></pre>
<p>I have extracted the names into the list below, used the faker library to create as many fake names as there are entries in the person_name list, and created a dictionary out of the two lists.</p>
<pre><code>from faker import Faker
fake = Faker()
person_name = ['Lary Page', 'Maria Jackson']
fake_name= [fake.name() for n in range(len(person_name))]
name_dict = dict(zip(person_name, fake_name ))
</code></pre>
<p>Now I would like to replace the names in the dataframe using the dictionary, but it returns an error.</p>
<pre><code>df.text.str.replace(name_dict)
</code></pre>
<p>My desired output (e.g.):</p>
<pre><code>print(df)
Angela Mindeston is visiting today
His boss, Emanuel Smith is here.
</code></pre>
|
<p>Use callback with lambda for <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>Series.str.replace</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer"><code>Series.replace</code></a>:</p>
<pre><code>regex = '|'.join(r"\b{}\b".format(x) for x in name_dict.keys())
df['text1'] = df.text.str.replace(regex, lambda x: name_dict[x.group()], regex=True)
df['text2'] = df.text.replace(name_dict, regex=True)
print (df)
                               text                                text1  \
0       Lary Page is visiting today           Gary Cox is visiting today
1  His boss, Maria Jackson is here.  His boss, Mr. George Jones is here.

                                 text2
0           Gary Cox is visiting today
1  His boss, Mr. George Jones is here.
</code></pre>
|
python|pandas|list|dictionary|replace
| 1 |
991 | 63,621,597 |
Replace part of pandas dataframe column based on the first two letters
|
<p>I have a pandas dataframe where I need to conditionally update the value based on the first two letters. The pattern is simple and the code below works, but it doesn't feel pythonic. I need to extend this to other letters (at least 11-19/A-J) and, while I could just add additional rows, I'd really like to do this the right way. Existing code below</p>
<pre><code>df['REFERENCE_ID'] = df['PRECERT_ID'].astype(str)
df.loc[df['REFERENCE_ID'].str.startswith('11'), 'REFERENCE_ID'] = 'A' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('12'), 'REFERENCE_ID'] = 'B' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('13'), 'REFERENCE_ID'] = 'C' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('14'), 'REFERENCE_ID'] = 'D' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('15'), 'REFERENCE_ID'] = 'E' + df['PRECERT_ID'].str[-7:]
</code></pre>
<p>I thought I might be able to use a list of letters, like</p>
<pre><code>letters = list(string.ascii_uppercase)
</code></pre>
<p>but I'm new to dataframes (and python in general) and can't figure out the syntax to get the dataframe equivalent of</p>
<pre><code>letters = list(string.ascii_uppercase)
text = '1523456789'
first = int(text[:2])
text = letters[first-11] + text[-7:]
</code></pre>
<p>I wasn't able to find something addressing this, but would be grateful for any help or a link to a similar question if it exists. Thank you.</p>
|
<p>I would try to make a lookup dictionary and use <code>map</code> to speed things up.</p>
<p>To make the lookup dictionary you could use:</p>
<pre><code>lu_dict = dict(zip([str(i) for i in range(11,20)],[chr(i) for i in range(65,74)]))
</code></pre>
<p>which returns:</p>
<pre><code>{'11': 'A',
'12': 'B',
'13': 'C',
'14': 'D',
'15': 'E',
'16': 'F',
'17': 'G',
'18': 'H',
'19': 'I'}
</code></pre>
<p>Then you could use <code>.str.slice</code> with <code>.map</code> to avoid the for loop.</p>
<pre><code>df = pd.DataFrame(data = {'Reference_ID':['112326345','12223356354','6735435634']})
df.Reference_ID = df.Reference_ID.astype(str)
df.loc[:,'Reference_new'] = df.Reference_ID.str.slice(0,2).map(lu_dict) + df.Reference_ID.str.slice(-7, )
</code></pre>
<p>Which results in:</p>
<pre><code> Reference_ID Reference_new
0 112326345 A2326345
1 12223356354 B3356354
2 6735435634 NaN
</code></pre>
|
python|pandas|dataframe
| 0 |
992 | 60,772,650 |
Getting the Import error when trying to import EntityRecognizer from spacy.language package
|
<p>ImportError: cannot import name 'EntityRecognizer' from 'spacy.language'. </p>
<p>I get this error when trying to import the packages in Spyder:</p>
<p><code>import spacy</code></p>
<p><code>from spacy.gold import GoldParse</code></p>
<p><code>from spacy.language import EntityRecognizer</code></p>
<p>spyder version: 3.3.6</p>
<p>conda version: 4.8.3</p>
|
<p>Try:
<code>from spacy.pipeline import EntityRecognizer</code></p>
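<p>A minimal sketch of constructing it from there (assuming spaCy 2.x, where the component takes the vocab; the blank-pipeline setup is illustrative):</p>
<pre><code>import spacy
from spacy.pipeline import EntityRecognizer

nlp = spacy.blank('en')            # blank pipeline, just for the vocab
ner = EntityRecognizer(nlp.vocab)  # build the component from the vocab
</code></pre>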
|
python|nlp|spacy
| 4 |
993 | 68,110,133 |
Assigned dimension to values in Python
|
<p>I tried:</p>
<pre><code>x = xr.DataArray(x, coords=[ lat_line, lon_line], dims=[ 'lat','lon'])
</code></pre>
<p>where x is an array (array([1.47937608e-01, 6.56879655e-01, ..., 2.91481077e-01])), and lat, lon, lat_line, lon_line are already defined with the same elements. But it still does not work.</p>
|
<p>I guess your x should be 2D if you declare 2 dimensions.
I think it could work if you remove the dims declaration;
lat and lon will then be coordinates but not dimensions.</p>
<p>But it will be easier to handle if you have lat & lon as dimensions (if you want to average by coordinate bins or the like). I don't know if xarray can build the cube itself.</p>
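<p>A minimal sketch of the 2D case (the coordinate values below are made up; the point is that x must have shape (len(lat_line), len(lon_line))):</p>
<pre><code>import numpy as np
import xarray as xr

# Hypothetical 1D coordinate vectors
lat_line = np.linspace(-10.0, 10.0, 5)
lon_line = np.linspace(100.0, 120.0, 4)

# x must be 2D so that each dimension gets its own coordinate vector
x = np.random.rand(len(lat_line), len(lon_line))

da = xr.DataArray(x, coords=[lat_line, lon_line], dims=['lat', 'lon'])
</code></pre>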
|
python|python-xarray
| 0 |
994 | 35,413,687 |
How to Bind Event to CheckBox in UltimateListCtrl?
|
<p>I have been trying to figure out how to use wx.UltimateListCtrl in Python to create a customized widget. Based on some internet examples I have this basic script, but I'm stuck on how to bind events inside the widget in order to get the string text from the first column when the checkbox in the second column is selected.
This is the code:</p>
<pre><code>import sys
import wx
import wx.lib.agw.ultimatelistctrl as ULC

class MyFrame(wx.Frame):

    def __init__(self, parent):
        wx.Frame.__init__(self, parent, -1, "UltimateListCtrl Demo")

        list = ULC.UltimateListCtrl(self, wx.ID_ANY, agwStyle=ULC.ULC_HAS_VARIABLE_ROW_HEIGHT|wx.LC_REPORT|wx.LC_VRULES|wx.LC_HRULES|wx.LC_SINGLE_SEL)
        list.InsertColumn(0, "File Name")
        list.InsertColumn(1, "Select")

        for _ in range(4):
            index = list.InsertStringItem(sys.maxint, "Item " + str(_))
            list.SetStringItem(index, 1, "")
            checkBox = wx.CheckBox( list, wx.ID_ANY, u"", wx.DefaultPosition, wx.DefaultSize, 0 )
            list.SetItemWindow(index, 1, checkBox , expand=True)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(list, 1, wx.EXPAND)
        self.SetSizer(sizer)

        checkBox1.Bind( wx.EVT_CHECKBOX, checkBoxOnCheckBox )

    def __del__( self ):
        pass

    # Virtual event handlers, override them in your derived class
    def checkBoxOnCheckBox( self, event ):
        print 'Yes'
        #event.Skip()

app = wx.PySimpleApp()
frame = MyFrame(None)
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
</code></pre>
<p>Thanks in advance for your help
Ivo</p>
|
<p>First of all: Try not to name the ULC list as this masks the Python list.</p>
<p>There are of course multiple ways to do what you want. One solution is to keep a reference of the checkbox and link it with the index of the item. This way you can identify the item.</p>
<p>I hope this helps.</p>
<pre><code>import sys
import wx
import wx.lib.agw.ultimatelistctrl as ULC


class MyFrame(wx.Frame):

    def __init__(self, parent):
        wx.Frame.__init__(self, parent, -1, "UltimateListCtrl Demo")

        agwStyle = (ULC.ULC_HAS_VARIABLE_ROW_HEIGHT | wx.LC_REPORT |
                    wx.LC_VRULES | wx.LC_HRULES | wx.LC_SINGLE_SEL)
        self.mylist = mylist = ULC.UltimateListCtrl(self, wx.ID_ANY,
                                                    agwStyle=agwStyle)
        mylist.InsertColumn(0, "File Name")
        mylist.InsertColumn(1, "Select")

        self.checkboxes = {}
        for _ in range(4):
            index = mylist.InsertStringItem(sys.maxint, "Item " + str(_))
            mylist.SetStringItem(index, 1, "")
            checkBox = wx.CheckBox(mylist, wx.ID_ANY, u"", wx.DefaultPosition,
                                   wx.DefaultSize, 0)
            self.checkboxes[checkBox.GetId()] = index
            mylist.SetItemWindow(index, 1, checkBox, expand=True)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(mylist, 1, wx.EXPAND)
        self.SetSizer(sizer)

        self.Bind(wx.EVT_CHECKBOX, self.checkBoxOnCheckBox)

    def __del__(self):
        pass

    # Virtual event handlers, override them in your derived class
    def checkBoxOnCheckBox(self, event):
        cb = event.GetEventObject()
        idx = self.checkboxes[cb.GetId()]
        print(self.mylist.GetItemText(idx))
        print(cb.GetValue())
        event.Skip()


app = wx.PySimpleApp()
frame = MyFrame(None)
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
</code></pre>
|
python|user-interface|wxpython
| 1 |
995 | 25,281,612 |
Celery: log each task run to it's own file?
|
<p>I want each running job to log to its own file in the logs/ directory, where the filename is the task id.</p>
<pre><code>logger = get_task_logger(__name__)

@app.task(base=CallbackTask)
def calc(syntax):
    some_func()
    logger.info('started')
</code></pre>
<p>In my worker, I set the log file to write to using the <code>-f</code> argument. I want to make sure that each task outputs to its own log file.</p>
|
<p>Seems like I am 3 years late. Nevertheless, here's my solution, inspired by @Mikko Ohtamaa's idea. I made it a little different by using Celery signals and Python's built-in logging framework to prepare and clean up the logging handlers.</p>
<pre><code>import logging
import os

from celery.signals import task_prerun, task_postrun

# to control the tasks that require the logging mechanism
TASK_WITH_LOGGING = ['Proj.tasks.calc']


@task_prerun.connect(sender=TASK_WITH_LOGGING)
def prepare_logging(signal=None, sender=None, task_id=None, task=None, args=None, kwargs=None):
    logger = logging.getLogger(task_id)
    formatter = logging.Formatter('[%(asctime)s][%(levelname)s] %(message)s')

    # optionally logging on the console as well as the file
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel(logging.INFO)

    # Adding a file handler with the file path. Filename is task_id
    task_handler = logging.FileHandler(os.path.join('/tmp/', task_id + '.log'))
    task_handler.setFormatter(formatter)
    task_handler.setLevel(logging.INFO)

    logger.addHandler(stream_handler)
    logger.addHandler(task_handler)


@task_postrun.connect(sender=TASK_WITH_LOGGING)
def close_logging(signal=None, sender=None, task_id=None, task=None, args=None, kwargs=None, retval=None, state=None):
    # getting the same logger and closing all handlers associated with it
    logger = logging.getLogger(task_id)
    for handler in logger.handlers:
        handler.flush()
        handler.close()
    logger.handlers = []


@app.task(base=CallbackTask, bind=True)
def calc(self, syntax):
    # getting the logger named after the task id. This is already
    # created and set up in prepare_logging
    logger = logging.getLogger(self.request.id)
    some_func()
    logger.info('started')
</code></pre>
<p>The <code>bind=True</code> is necessary here in order to have the task id available within the task. This will create an individual log file named <code><task_id>.log</code> every time the task <code>calc</code> is executed.</p>
|
python|logging|celery
| 7 |
996 | 50,314,634 |
How to fix "ValueError: An initializer for variable conv2d/kernel of is required" when opencv and tensorflow is used
|
<p>I am writing a program that uses TensorFlow and OpenCV to
perform sign language recognition with Convolutional Neural Networks.
I used the example code for the MNIST classifier, which can be found <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/tutorials/layers/cnn_mnist.py" rel="nofollow noreferrer">here</a>, and
tried to change it so that I could use OpenCV to load the training
images, and later a camera capture, as input for the CNN.
Right now I have a problem with training the model, which reveals itself
in an error:</p>
<p><strong><em>ValueError: An initializer for variable conv2d/kernel of is required</em></strong></p>
<p>Whole error log can be found <a href="https://pastebin.com/HHU62DAQ" rel="nofollow noreferrer">here</a></p>
<p><strong>Versions of frameworks in use:</strong></p>
<ul>
<li>Tensorflow r 1.7.1</li>
<li>OpenCV 4.0.0</li>
<li>Python 3.6.5</li>
<li>Numpy 1.14.2</li>
</ul>
<p>The code that is supposed to prepare training data for the network can be seen
in the first code snippet. It just reads a bunch of jpg photos with different
hand gestures, resizes those images and puts them into a numpy array.</p>
<pre><code>def prepareTrainingData(trainingLetterMaxId, training_image_size):
    training_images = []
    training_labels = []
    for letter in training_letters:
        for i in range(0, trainingLetterMaxId):
            read_image = cv2.imread('/home/radkye/Documents/ASLRecognizer/images/'
                                    + letter + '/' + letter + '_' + str(i) + '.jpg', 0)
            resized = np.array(cv2.resize(read_image, (training_image_size, training_image_size)))
            flattened = resized.ravel()
            image = tf.cast(flattened, tf.float32)
            training_images.append(image)
            net_output = np.zeros(len(training_letters))
            net_output[letters_to_indices_map[letter]] = 1
            training_labels.append(net_output)
    result = np.array(training_images)
    labels_result = np.array(training_labels)
    return result, labels_result

training_data, training_labels = prepareTrainingData(100, 60)
train_labels_int = np.asarray(training_labels, dtype=np.int32)

mnist_classifier = tf.estimator.Estimator(
    model_fn=cnn.cnn_model_fn,
    model_dir="/home/radkye/Documents/studia/ASLRecognizer_AutoTestVersion/asl_cnn_model")

tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
    tensors=tensors_to_log, every_n_iter=50)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": training_data},
    y=train_labels_int,
    batch_size=3600,
    num_epochs=None,
    shuffle=True)

mnist_classifier.train(
    input_fn=train_input_fn,
    steps=20000,
    hooks=[logging_hook])
</code></pre>
<p>The cnn_model_fn is defined as:</p>
<pre><code>def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 60, 60, 1])
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    pool2_flat = tf.reshape(pool2, [-1, 12 * 12 * 64])
    dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
    logits = tf.layers.dense(inputs=dropout, units=24)
</code></pre>
<p>Please, can someone help me figure out what might be wrong with the data
structure I passed into the CNN model? The problem is probably with the way I prepared the training data, which can be seen in the first code snippet.
I am not that fluent in TensorFlow yet.</p>
<p>Otherwise, maybe someone has a tutorial or example where OpenCV is used along with TensorFlow to create a CNN. I did not manage to find anything like that.</p>
<p>I would be very grateful for any kind of help.
Thank you in advance.</p>
|
<p><code>[-1, 12 * 12 * 64]</code> - given the padding and maxpool layers, this should be <code>[-1, 15 * 15 * 64]</code>, because 60 / 2 / 2 = 15</p>
<p>That said, I'm not sure that's the actual or only problem, because I don't have a way to reproduce your problem.</p>
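<p>A minimal sketch of the corrected line, assuming the 60x60 inputs from the question:</p>
<pre><code># 60x60 input -> 30x30 after pool1 (stride 2) -> 15x15 after pool2 (stride 2)
pool2_flat = tf.reshape(pool2, [-1, 15 * 15 * 64])
</code></pre>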
|
python|opencv|tensorflow|machine-learning|conv-neural-network
| 0 |
997 | 61,510,868 |
IBM Watson with pyqt5 window is freezing in the while loop
|
<p>My window freezes in the while loop. How can I fix it, or how can I wait for input inside the loop? If I add an <code>input("")</code> call, the program no longer freezes, but I don't want to use the console.</p>
<pre><code>from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
import sys
from PyQt5 import QtWidgets


class Pencere(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.init_ui()

    def init_ui(self):
        self.gr = 0
        self.say = 0
        self.yazi_alani = QtWidgets.QLineEdit()
        self.send = QtWidgets.QPushButton("Send")
        self.cevap = QtWidgets.QLabel("")

        v_box = QtWidgets.QVBoxLayout()
        h_box = QtWidgets.QHBoxLayout()
        h_box.addStretch()

        v_box.addWidget(self.cevap)
        v_box.addStretch()
        v_box.addWidget(self.yazi_alani)
        v_box.addWidget(self.send)

        h_box.addLayout(v_box)
        self.setLayout(h_box)
        self.send.clicked.connect(self.gonder)
        self.setGeometry(200, 200, 500, 500)
        self.show()
        self.watson()

    def watson(self):
        while self.say == 0:
            self.say += 1

            # Set up Assistant service.
            authenticator = IAMAuthenticator('4CpKAKUsLRpYvcUX_nQRR1MGnYM1WqJLUhE4XS-p4B7Y')  # replace with API key
            service = AssistantV2(
                version='2020-04-01',
                authenticator=authenticator
            )
            service.set_service_url('https://api.eu-gb.assistant.watson.cloud.ibm.com/instances/49abc832-b899-4359-aca3-ea100ceb777a')

            assistant_id = '8e3469cd-deaa-465f-8f6d-e1e4cbf7d9c1'  # replace with assistant ID

            # Create session.
            session_id = service.create_session(
                assistant_id=assistant_id
            ).get_result()['session_id']

            # Initialize with empty values to start the conversation.
            message_input = {'text': ''}
            current_action = ''

            while current_action != 'end_conversation':
                # Clear any action flag set by the previous response.
                current_action = ''

                # Send message to assistant.
                response = service.message(
                    assistant_id,
                    session_id,
                    input=message_input
                ).get_result()

                # Print the output from dialog, if any. Supports only a single
                # text response.
                if response['output']['generic']:
                    if response['output']['generic'][0]['response_type'] == 'text':
                        self.cevap.setText(response['output']['generic'][0]['text'])

                # Check for client actions requested by the assistant.
                if 'actions' in response['output']:
                    if response['output']['actions'][0]['type'] == 'client':
                        current_action = response['output']['actions'][0]['name']

                # If we're not done, prompt for next round of input.
                if current_action != 'end_conversation':
                    user_input = self.yazi_alani.text()
                    message_input = {
                        'text': user_input
                    }

            # We're done, so we delete the session.
            service.delete_session(
                assistant_id=assistant_id,
                session_id=session_id
            )

    def gonder(self):
        return self.yazi_alani.text()


app = QtWidgets.QApplication(sys.argv)
pencere = Pencere()
sys.exit(app.exec_())
</code></pre>
<p>In this version I use input(), and the window no longer freezes, but I don't want to use the console:</p>
<p>How can I fix this?</p>
<pre><code>if current_action != 'end_conversation':
    input("->")
    user_input = self.yazi_alani.text()
    message_input = {
        'text': user_input
    }
</code></pre>
|
<p>You shouldn't run time-consuming tasks in the GUI thread, since they freeze the GUI. If a task is very time-consuming, run it in another thread and send the result to the GUI thread using signals, as shown in the following example:</p>
<pre class="lang-py prettyprint-override"><code>import threading
import sys
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from PyQt5 import QtCore, QtWidgets
class IBMWatsonManager(QtCore.QObject):
connected = QtCore.pyqtSignal()
disconnected = QtCore.pyqtSignal()
messageChanged = QtCore.pyqtSignal(str)
def __init__(self, parent=None):
super().__init__(parent)
self._assistant_id = "8e3469cd-deaa-465f-8f6d-e1e4cbf7d9c1"
self._session_id = ""
self._service = None
self.connected.connect(self.send_message)
self._is_active = False
@property
def assistant_id(self):
return self._assistant_id
@property
def session_id(self):
return self._session_id
@property
def service(self):
return self._service
@property
def is_active(self):
return self._is_active
def create_session(self):
threading.Thread(target=self._create_session, daemon=True).start()
def _create_session(self):
authenticator = IAMAuthenticator(
"4CpKAKUsLRpYvcUX_nQRR1MGnYM1WqJLUhE4XS-p4B7Y"
) # replace with API key
self._service = AssistantV2(version="2020-04-01", authenticator=authenticator)
self.service.set_service_url(
"https://api.eu-gb.assistant.watson.cloud.ibm.com/instances/49abc832-b899-4359-aca3-ea100ceb777a"
)
self._session_id = self.service.create_session(
assistant_id=self.assistant_id
).get_result()["session_id"]
self._is_active = True
self.connected.emit()
@QtCore.pyqtSlot()
@QtCore.pyqtSlot(str)
def send_message(self, text=""):
threading.Thread(target=self._send_message, args=(text,), daemon=True).start()
def _send_message(self, text):
response = self.service.message(
self.assistant_id, self.session_id, input={"text": text}
).get_result()
generic = response["output"]["generic"]
if generic:
t = "\n".join([g["text"] for g in generic if g["response_type"] == "text"])
self.messageChanged.emit(t)
output = response["output"]
if "actions" in output:
client_response = output["actions"][0]
if client_response["type"] == "client":
current_action = client_response["name"]
if current_action == "end_conversation":
self._close_session()
self._is_active = False
self.disconnected.emit()
def _close_session(self):
self.service.delete_session(
assistant_id=self.assistant_id, session_id=self.session_id
)
class Widget(QtWidgets.QWidget):
sendSignal = QtCore.pyqtSignal(str)
def __init__(self):
super().__init__()
self.init_ui()
def init_ui(self):
self.message_le = QtWidgets.QLineEdit()
self.send_button = QtWidgets.QPushButton("Send")
self.message_lbl = QtWidgets.QLabel()
v_box = QtWidgets.QVBoxLayout()
v_box.addWidget(self.message_lbl)
v_box.addStretch()
v_box.addWidget(self.message_le)
v_box.addWidget(self.send_button)
h_box = QtWidgets.QHBoxLayout(self)
h_box.addStretch()
h_box.addLayout(v_box)
self.disable()
self.setGeometry(200, 200, 500, 500)
self.send_button.clicked.connect(self.on_clicked)
@QtCore.pyqtSlot()
def enable(self):
self.message_le.setEnabled(True)
self.send_button.setEnabled(True)
@QtCore.pyqtSlot()
def disable(self):
self.message_le.setEnabled(False)
self.send_button.setEnabled(False)
@QtCore.pyqtSlot()
def on_clicked(self):
text = self.message_le.text()
self.sendSignal.emit(text)
self.message_le.clear()
@QtCore.pyqtSlot(str)
def set_message(self, text):
self.message_lbl.setText(text)
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
w = Widget()
w.show()
manager = IBMWatsonManager()
manager.connected.connect(w.enable)
manager.disconnected.connect(w.disable)
w.sendSignal.connect(manager.send_message)
manager.messageChanged.connect(w.set_message)
manager.create_session()
sys.exit(app.exec_())
</code></pre>
|
python|pyqt5|ibm-watson
| 0 |
998 | 69,343,943 |
How to remove leading and trailing whitespace from each line in an MD file while preserving empty lines?
|
<p>I have Markdown text stored in a variable which I later write to an <code>MD</code> file. The Markdown contains trailing and leading whitespace and lines with only whitespace. I have tried to remove the whitespace from the variable as well as from the <code>MD</code> file but to no avail.</p>
<p>Please note:</p>
<ul>
<li>The headline ## contains a leading whitespace</li>
<li>Paragraph [1] contains a leading whitespace</li>
<li>The second line after paragraph [2] is not empty but contains two whitespaces (might not be visible in code block)</li>
<li>Paragraph [a.] is followed by two trailing spaces</li>
</ul>
<pre class="lang-py prettyprint-override"><code>markdown = ''' ## This is a headline
[1]Β This is the first paragraph
[2]Β This is the second paragraph
a. This is the third paragraph;
b. This is the fourth paragraph.'''
with open("output.md", "w") as f_out:
f_out.write(markdown)
</code></pre>
<p>Ideally, <code>output.md</code> would look like this:</p>
<pre><code>## This is a headline

[1] This is the first paragraph

[2] This is the second paragraph


a. This is the third paragraph;
b. This is the fourth paragraph.
</code></pre>
<p>Edit: Applying the accepted answer of @Mortz to the real source, I realised that Markdown uses two spaces for the <code><br></code> tag. The removal of trailing spaces is therefore not needed in this case. The leading spaces can be removed with: <code>clean_markdown = '\n'.join(_.lstrip() for _ in markdown.split('\n'))</code></p>
|
<p>You could split on the line break character - <code>'\n'</code> and rejoin all the entries with the leading and trailing spaces stripped -</p>
<pre><code>print(repr(markdown))
#' ## This is a headline\n\n [1] This is the first paragraph\n \n [2] This is the second paragraph\n \n \n a. This is the third paragraph; \n b. This is the fourth paragraph.'
clean_markdown = '\n'.join(_.strip() for _ in markdown.split('\n'))
print(repr(clean_markdown))
#'## This is a headline\n\n[1] This is the first paragraph\n\n[2] This is the second paragraph\n\n\na. This is the third paragraph;\nb. This is the fourth paragraph.'
</code></pre>
<p>And write this <code>clean_markdown</code> to your file</p>
|
python|markdown
| 1 |
999 | 53,808,491 |
Can I delete the record where the cursor is pointing without using SQL?
|
<pre><code>conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='123456', db='jd', charset='utf8')
cur = conn.cursor()
sql = "select * from user where username = 'XXX'"
cur.execute(sql)
</code></pre>
<p>After this piece of code has been executed, the cursor should be pointing to the record I've selected.</p>
<p>What I want to do is get one record, do something with the data in the record, and then delete the record.</p>
<p>I know there are <code>cur.fetchone()</code> and <code>cur.fetchall()</code> methods in Python. In my opinion, since the cursor can fetch one record, why is there no <code>cur.deleteOne()</code> or <code>cur.delete_previousOne()</code> function?</p>
|
<p>A cursor points at a row of data returned by a <code>SELECT</code> statement, which is not the same thing as an actual row in a table.</p>
<p>A SELECT statement can perform many manipulations on its results, such as joining two or more tables together, ordering the results by a specific field, gathering only distinct values for a specific field, etc.</p>
<p>In the course of performing these manipulations, it is not always possible to maintain a link back to specific table rows. You can even use the <code>SELECT AS</code> statement to fetch data that isn't part of <em>any</em> table; deleting such a "row" would clearly be impossible.</p>
<p>Is there a reason you don't want to use sql to delete the rows? </p>
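<p>A minimal sketch of the usual pattern (assuming the <code>user</code> table has an integer primary key named <code>id</code> as its first column; <code>process</code> is a hypothetical handler):</p>
<pre><code>cur.execute("select * from user where username = %s", ('XXX',))
row = cur.fetchone()
if row is not None:
    process(row)  # hypothetical: handle the data in the record
    # Delete the underlying table row by its primary key, not via the cursor
    cur.execute("delete from user where id = %s", (row[0],))
    conn.commit()
</code></pre>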
|
python|sql|database|pymysql
| 0 |