id (int64, 4-73.8M) | title (string, length 10-150) | body (string, length 17-50.8k) | accepted_answer_id (int64, 7-73.8M) | answer_count (int64, 1-182) | comment_count (int64, 0-89) | community_owned_date (string, length 23-27, nullable) | creation_date (string, length 23-27) | favorite_count (int64, 0-11.6k, nullable) | last_activity_date (string, length 23-27) | last_edit_date (string, length 23-27, nullable) | last_editor_display_name (string, length 2-29, nullable) | last_editor_user_id (int64, -1 to 20M, nullable) | owner_display_name (string, length 1-29, nullable) | owner_user_id (int64, 1-20M, nullable) | parent_id (always null) | post_type_id (int64, always 1) | score (int64, -146 to 26.6k) | tags (string, length 1-125) | view_count (int64, 122-11.6M) | answer_body (string, length 19-51k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,139,973 | Django return HttpResponseRedirect to an url with a parameter | <p>I have a situation in my project where I need to <strong>redirect</strong> the user to a <code>url</code> containing a parameter; it is declared in <strong>urls.py</strong> like:</p>
<pre><code>url(r'^notamember/(?P<classname>\w+)/$',
notamember,
name='notamember'),)
</code></pre>
<p>How can I pass that parameter in the returned <code>HttpResponseRedirect</code>? I tried:</p>
<pre><code>return HttpResponseRedirect('/classroom/notamember/classname')
</code></pre>
<p>Anyway, this is foolish, I know; I cannot treat <strong>classname</strong> as a parameter that way.
For clarity, my <strong>view.py</strong> is:</p>
<pre><code>def leave_classroom(request,classname):
theclass = Classroom.objects.get(classname = classname)
u = Membership.objects.filter(classroom=theclass).get(member = request.user).delete()
return HttpResponseRedirect('/classroom/notamember/theclass/')
</code></pre>
<p>How can I include the variable <strong>theclass</strong> in that <code>url</code>?
Thanks a lot!</p> | 3,140,005 | 5 | 0 | null | 2010-06-29 10:57:01.163 UTC | 11 | 2021-04-26 03:08:21.397 UTC | 2021-04-26 03:08:21.397 UTC | null | 14,131,913 | null | 342,279 | null | 1 | 39 | django|url|httpresponse|urlvariables | 67,782 | <p>Try this:</p>
<pre><code>return HttpResponseRedirect('/classroom/notamember/%s/' % classname)
</code></pre>
<p><strong>EDIT:</strong></p>
<p>This is surely better (Daniel Roseman's answer):</p>
<pre><code>from django.core.urlresolvers import reverse
url = reverse('notamember', kwargs={'classname': classname})
return HttpResponseRedirect(url)
</code></pre> |
2,765,196 | Disable autocomplete for all jquery datepicker inputs | <p>I want to disable <code>autocomplete</code> for all inputs using jquery ui datepicker without doing this to every input manually.
How could this be done?</p> | 2,766,165 | 6 | 0 | null | 2010-05-04 12:27:18.997 UTC | 4 | 2022-07-01 11:39:39.717 UTC | 2010-05-04 12:39:59.47 UTC | null | 44,269 | null | 257,797 | null | 1 | 24 | javascript|jquery | 46,724 | <p>Try this:</p>
<pre><code>$('.datepicker').on('click', function(e) {
e.preventDefault();
$(this).attr("autocomplete", "off");
});
</code></pre>
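<p>If the goal is simply to switch browser autocomplete off on every datepicker field up front (rather than on first click), a small variation on the same idea should also work; the <code>.datepicker</code> class selector is only assumed from the snippet above:</p>
<pre><code>$(function() {
    // assumes every datepicker input carries the "datepicker" class
    $('.datepicker').attr('autocomplete', 'off');
});
</code></pre>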
<blockquote>
<p>In any case, you have not mentioned in your question whether this is coming from an AJAX call!</p>
</blockquote> |
2,650,897 | In Rails, how should I implement a Status field for a Tasks app - integer or enum? | <p>For a Rails 3.0 Todo app, I have a Tasks model with a <strong>Status</strong> field. What's the best way to store the Status field data (field type) and still display a human-readable version in a view (HTML table)? Status can be: </p>
<p>0 = Normal<br>
1 = Active<br>
2 = Completed</p>
<p>Right now I have this: </p>
<h2>Rails Schema Here:</h2>
<blockquote>
<p>create_table "tasks", :force => true do |t|<br>
t.integer "status", :limit => 1, :default => 0, :null => false</p>
</blockquote>
<h2>Rails Model Here:</h2>
<pre><code>class Task < ActiveRecord::Base
validates_inclusion_of :status, :in => 0..2,
:message => "{{value}} must be 0, 1, or 2"
</code></pre>
<h2>Rails View Here:</h2>
<pre><code><h1>Listing tasks</h1>
<table>
<tr>
<th>Status</th>
<th>Name</th>
<th></th>
<th></th>
<th></th>
</tr>
<% @tasks.each do |task| %>
<tr>
<td><%= task.status %></td>
<td><%= task.name %></td>
<td><%= link_to 'Show', task %></td>
<td><%= link_to 'Edit', edit_task_path(task) %></td>
<td><%= link_to 'Delete', task, :confirm => 'Are you sure?', :method => :delete %></td>
</tr>
<% end %>
</table>
</code></pre>
<h2>Requirements</h2>
<ol>
<li><p>Store a Task's status in the db such that the values are easily localizable, i.e. I'm not sure I want to store "normal", "active", "completed" as a string field.</p></li>
<li><p>Solution must work with Rails 3.0.</p></li>
</ol>
<h2>Questions:</h2>
<ol>
<li><p>Should I store the field as an integer (see above)? If so, how do I display the correct human readable status in an HTML table in my Rails view, e.g. show "Active" instead of "1" in the HTML table.</p></li>
<li><p>Should I use an enum? If so, is this easy to localize later?</p></li>
<li><p>Should I use straight strings, e.g. "Normal", "Active", "Completed"</p></li>
<li><p>Can you provide a quick code sample of the view helper, controller or view code to make this work?</p></li>
</ol> | 2,651,261 | 6 | 1 | null | 2010-04-16 05:49:37.443 UTC | 21 | 2014-11-26 07:50:35.243 UTC | null | null | null | null | 32,558 | null | 1 | 32 | html|ruby-on-rails|enums|rails-models | 12,432 | <p>1. It depends on how much you want to optimize queries on the DB.</p>
<p>2. Not really: <strike>it is not supported 'out of the box' by AR.</strike> As of Rails 4, <a href="http://api.rubyonrails.org/classes/ActiveRecord/Enum.html" rel="nofollow noreferrer">enums are supported out of the box</a>.</p>
<p>3. IMHO you can use strings without a big performance penalty (just remember to add the field to an index). I would do this because it's easier to internationalize and to maintain. However, you can go with integers if you need extra performance.</p>
<p>You may take a look at two SO threads <a href="https://stackoverflow.com/questions/2578117/text-indexes-vs-integer-indexes-in-mysql">here</a> and <a href="https://stackoverflow.com/questions/63090/surrogate-vs-natural-business-keys">here</a> where this is debated.</p>
<p>4. If you want to keep them as integers, here is how you can accomplish this:</p>
<pre><code>class Task < ActiveRecord::Base
NORMAL = 1
ACTIVE = 2
COMPLETED = 3
STATUSES = {
NORMAL => 'normal',
ACTIVE => 'active',
COMPLETED => 'completed'
}
validates_inclusion_of :status, :in => STATUSES.keys,
:message => "{{value}} must be in #{STATUSES.values.join ','}"
# just a helper method for the view
def status_name
STATUSES[status]
end
end
</code></pre>
<p>and in view:</p>
<pre><code><td><%= task.status_name %></td>
</code></pre>
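<p>If the statuses also need to be localizable (requirement 1 in the question), the helper's value can be fed through Rails I18n. A rough sketch, assuming translation keys such as <code>tasks.statuses.normal</code> exist in your locale files (both the key names and the lookup path are illustrative):</p>
<pre><code><td><%= t("tasks.statuses.#{task.status_name}") %></td>
</code></pre>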
<p>If you want to use strings, it's more simplified:</p>
<pre><code>STATUSES = ['normal', 'active', 'completed']
validates_inclusion_of :status, :in => STATUSES,
:message => "{{value}} must be in #{STATUSES.join ','}"
</code></pre> |
2,964,704 | Why Java does not allow overriding equals(Object) in an Enum? | <p>I've noticed that the following snippet...</p>
<pre><code>@Override
public boolean equals(Object otherObject) {
...
}
</code></pre>
<p>...is not allowed for an Enum, since the method <code>equals(Object x)</code> is defined as <code>final</code> in <a href="http://download.oracle.com/javase/6/docs/api/java/lang/Enum.html#equals%28java.lang.Object%29" rel="noreferrer"><code>Enum</code></a>. Why is this so? </p>
<p>I cannot think of any use case which would require overriding <code>equals(Object)</code> for Enum. I'm just curious to know the reasoning behind this behavior.</p> | 2,964,760 | 6 | 1 | null | 2010-06-03 09:36:21.6 UTC | 7 | 2022-09-02 13:39:46.357 UTC | 2011-08-26 09:05:54.103 UTC | null | 276,052 | null | 177,784 | null | 1 | 41 | java|enums|api-design | 21,231 | <p>Anything but <code>return this == other</code> would be counter intuitive and violate <a href="http://en.wikipedia.org/wiki/Principle_of_least_astonishment" rel="noreferrer">the principle of least astonishment</a>. Two enum constants are expected to be <code>equal</code> if and only if they are the same object and the ability to override this behavior would be error prone.</p>
<p>Same reasoning applies to <code>hashCode()</code>, <code>clone()</code>, <code>compareTo(Object)</code>, <code>name()</code>, <code>ordinal()</code>, and <code>getDeclaringClass()</code>.</p>
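<p>A minimal illustration of that identity semantics (the <code>Color</code> enum here is made up purely for the example):</p>
<pre><code>enum Color { RED, GREEN }

public class EnumEqualsDemo {
    public static void main(String[] args) {
        Color a = Color.RED;
        Color b = Color.valueOf("RED"); // returns the very same constant instance

        System.out.println(a == b);      // true - enum constants are singletons
        System.out.println(a.equals(b)); // true - equals() is an identity comparison
    }
}
</code></pre>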
<hr>
<p>The JLS does not motivate the choice of making it final, but mentions equals in the context of enums <a href="http://docs.oracle.com/javase/specs/jls/se8/html/jls-8.html#jls-8.9.1" rel="noreferrer">here</a>. Snippet:</p>
<blockquote>
<p><em>The equals method in <code>Enum</code> is a final method that merely invokes <code>super.equals</code> on its argument and returns the result, thus performing an identity comparison.</em></p>
</blockquote> |
2,775,838 | Does anyone use config files for javascript? | <p>We have javascript files that are environment specific, and so I was thinking of going down the path of creating a generic way to read in an XML (config) file to store different environment specific settings. I was curious to know if anybody else on here does that (or if not, is there a reason why you don't)?</p> | 2,775,861 | 6 | 1 | null | 2010-05-05 18:35:13.83 UTC | 9 | 2022-01-26 08:21:49.197 UTC | null | null | null | null | 190,750 | null | 1 | 44 | javascript|xml|configuration | 69,882 | <p>JSON is hundreds of times faster to consume than XML, being a native JavaScript object itself. Attach and forget.</p>
<p><strong>EDIT:</strong></p>
<p>James Westgate's example is JSON. You can use this inline or as an external file or even loaded via AJAX.</p>
<p>Here is another example:</p>
<pre><code>var clientData = {}
clientData.dataset1 = [
{name:'Dave', age:'41', userid:2345},
{name:'Vera', age:'32', userid:9856}
]
alert(clientData.dataset1[0].name)
</code></pre> |
2,339,371 | As a Java programmer learning Python, what should I look out for? | <p>Much of my programming background is in Java, and I'm still doing most of my programming in Java. However, I'm starting to learn Python for some side projects at work, and I'd like to learn it as independent of my Java background as possible - i.e. I don't want to just program Java in Python. What are some things I should look out for?</p>
<p>A quick example - when looking through the Python tutorial, I came across the fact that defaulted mutable parameters of a function (such as a list) are persisted (remembered from call to call). This was counter-intuitive to me as a Java programmer and hard to get my head around. (See <a href="https://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument">here</a> and <a href="https://stackoverflow.com/questions/2335160/what-is-the-scope-of-a-defaulted-parameter-in-python">here</a> if you don't understand the example.)</p>
<p>Someone also provided me with <a href="http://dirtsimple.org/2004/12/python-is-not-java.html" rel="noreferrer">this</a> list, which I found helpful, but short. Anyone have any other examples of how a Java programmer might tend to misuse Python...? Or things a Java programmer would falsely assume or have trouble understanding?</p>
<p><strong>Edit</strong>: Ok, a brief overview of the reasons addressed by the article I linked to to prevent duplicates in the answers (as suggested by Bill the Lizard). (Please let me know if I make a mistake in phrasing, I've only <em>just</em> started with Python so I may not understand all the concepts fully. And a disclaimer - these are going to be <em>very</em> brief, so if you don't understand what it's getting at check out the link.)</p>
<ul>
<li>A static method in Java does not translate to a Python classmethod</li>
<li>A switch statement in Java translates to a hash table in Python</li>
<li>Don't use XML</li>
<li>Getters and setters are evil (hey, I'm just quoting :) )</li>
<li>Code duplication is often a necessary evil in Java (e.g. method overloading), but not in Python</li>
</ul>
<p>(And if you find this question at all interesting, check out the link anyway. :) It's quite good.)</p> | 2,339,415 | 7 | 6 | null | 2010-02-26 03:48:47.087 UTC | 35 | 2013-07-18 19:49:37.287 UTC | 2017-05-23 12:34:08.02 UTC | null | -1 | null | 259,541 | null | 1 | 63 | java|python | 14,874 | <ul>
<li><strong>Don't put everything into classes</strong>. Python's built-in list and dictionaries will take you far.</li>
<li><strong>Don't worry about keeping one class per module</strong>. Divide modules by purpose, not by class.</li>
<li><strong>Use inheritance for behavior, not interfaces</strong>. Don't create an "Animal" class for "Dog" and "Cat" to inherit from, just so you can have a generic "make_sound" method. </li>
</ul>
<p>Just do this:</p>
<pre><code>class Dog(object):
def make_sound(self):
return "woof!"
class Cat(object):
def make_sound(self):
return "meow!"
class LolCat(object):
def make_sound(self):
return "i can has cheezburger?"
</code></pre> |
2,474,097 | How do I finish the merge after resolving my merge conflicts? | <p>I've read the <a href="https://book.git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging" rel="noreferrer">Basic Branching and Merging</a> section of the Git Community Book.</p>
<p>So I follow it and create one branch: <code>experimental</code>.</p>
<p>Then I:</p>
<ol>
<li>switch to experimental branch (git checkout experimental)</li>
<li>make a bunch of changes</li>
<li>commit it (git commit -a)</li>
<li>switch to master branch (git checkout master)</li>
<li>make some changes and commit there</li>
<li>switch back to experimental (git checkout experimental)</li>
<li>merge master change to experimental (git merge master)</li>
<li><p>there are some conflicts but after I resolve them, I did 'git add myfile'</p></li>
<li><p>And now I am stuck, I can't move back to master</p></li>
</ol>
<p>when I do </p>
<pre><code> $ git checkout master
error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.
</code></pre>
<p>and I did:</p>
<pre><code>$ git rebase --abort
</code></pre>
<p>No rebase in progress?</p>
<p>and I did :</p>
<pre><code>$ git add res/layout/socialhub_list_item.xml
$ git checkout master
error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.
</code></pre>
<p>What can I do so that I can go back to my master branch?</p> | 2,474,139 | 12 | 5 | null | 2010-03-18 23:43:08.67 UTC | 63 | 2022-03-09 09:08:37.647 UTC | 2017-06-16 10:00:06.467 UTC | null | 1,000,551 | null | 286,802 | null | 1 | 376 | git|git-merge | 483,223 | <p>When there is a conflict during a merge, you have to finish the merge commit manually. It sounds like you've done the first two steps, to edit the files that conflicted and then run <code>git add</code> on them to mark them as resolved. Finally, you need to actually commit the merge with <code>git commit</code>. At that point you will be able to switch branches again.</p>
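<p>Applied to the repository in the question, the remaining steps look roughly like this (the file path is the one from the error message; adjust it to whichever files actually conflicted):</p>
<pre><code>$ git add res/layout/my_item.xml   # mark the resolved file(s)
$ git commit                       # concludes the merge with a merge commit
$ git checkout master              # switching branches works again
</code></pre>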
<p><strong>Quick Tip</strong>: You can use <code>git commit -am "your commit message"</code> to perform add and commit operations on tracked files simultaneously. (Credit: @vaheeds)</p> |
2,920,114 | How to auto adjust the <div> height according to content in it? | <p>I have a <code><div></code> which needs to be auto adjusted according to the content in it. How can I do this? Right now my content is coming out of the <code><div></code></p>
<p>The class I have used for the div is as follows</p>
<pre><code>box-centerside {
background:url("../images/greybox-center-bg1.jpg") repeat-x scroll center top transparent;
float:left;
height:100px;
width:260px;
}
</code></pre> | 2,920,140 | 22 | 1 | null | 2010-05-27 10:03:23.593 UTC | 15 | 2022-06-10 07:22:14.383 UTC | 2011-12-01 17:23:03.353 UTC | null | 2,424 | null | 184,814 | null | 1 | 80 | css|html | 427,026 | <p>Try with the following mark-up instead of directly specifying height:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>.box-centerside {
background: url("../images/greybox-center-bg1.jpg") repeat-x scroll center top transparent;
float: left;
min-height: 100px;
width: 260px;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="box-centerside">
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
This is sample content<br>
</div></code></pre>
</div>
</div>
</p> |
42,365,275 | How to implement a "pure" ASP.NET Core Web API by using AddMvcCore() | <p>I've seen a lot of ASP.NET Core Web API projects that use the default <code>AddMvc()</code> service without the realizing that using <code>AddMvcCore()</code> is a superior option due to the control over services.</p>
<p>How exactly do you implement an ASP.NET Core Web API by using <code>AddMvcCore()</code> and <strong>why is it better?</strong></p> | 42,365,276 | 1 | 0 | null | 2017-02-21 10:50:58.947 UTC | 19 | 2018-10-12 19:16:40.513 UTC | 2017-02-25 14:46:09.9 UTC | null | 3,645,638 | null | 3,645,638 | null | 1 | 39 | c#|asp.net-core|asp.net-core-mvc|asp.net-core-webapi | 6,828 | <h2>What is the difference between <code>AddMvc()</code> and <code>AddMvcCore()</code>?</h2>
<p>The first key thing to understand is that <code>AddMvc()</code> is just a pre-loaded version of <code>AddMvcCore()</code>. You can see the exact implementation of the <code>AddMvc()</code> extension at the <a href="https://github.com/aspnet/Mvc/blob/release/2.2/src/Microsoft.AspNetCore.Mvc/MvcServiceCollectionExtensions.cs" rel="noreferrer">GitHub repository</a>.</p>
<p>I like using default VS templates as much as the next guy, but sometimes you need to know when it's the wrong choice. I have seen several guides online that lean more towards an attempt to "undo" these default services rather than just going with a solution that just does not implement them in the first place.</p>
<p>With the advent of ASP.NET Core being open source, there really isn't a good reason why we can't peel back a layer and work at a lower level without fear of losing "magic".</p>
<hr>
<h2>Definition of "minimal" and "pure"</h2>
<p><em>Note: The definitions are intended for the context of this answer only. Mostly for the sake of clarity and assisting in further understanding.</em></p>
<p>This answer leans more towards "pure" and not "minimal". I'd like to describe why, so it's clearer what I'm talking about.</p>
<p><strong>Minimal.</strong> A "minimal" solution would be an implementation that <strong>does not even call upon the</strong> <code>AddMvcCore()</code> <strong>method at all</strong>. The reason for this, is that MVC is not really a "required" component to assembling you own Web API, and it certainly adds some weight to your code with the additional dependencies. In this scenario, since you're not using the <code>AddMvcCore()</code> method, you also would not inject it into your application, here</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
app.UseMvc(); // you don't need this
}
</code></pre>
<p>This would mean mapping your own routes and responding to the <code>context</code> in your own way. This really isn't challenging at all, but I don't want to dive into it, because it's quite off-topic, but <strong>here is a tiny taste of a minimal implementation</strong>:</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
app.Map("/api", HandleMapApi);
// notice how we don't have app.UseMvc()?
}
private static void HandleMapApi(IApplicationBuilder app)
{
app.Run(async context =>
{
// implement your own response
await context.Response.WriteAsync("Hello WebAPI!");
});
}
</code></pre>
<p>For many projects, a "minimal" approach means we are giving up some of the features found in MVC. You would really have to weigh your options and see if you this design path is the right choice, as there is a balance between design pattern, convenience, maintainability, code footprint, and most importantly performance and latency. <strong>Simply put: a "minimal" solution would mean minimizing the services and middleware between your code and the request.</strong></p>
<p><strong>Pure.</strong> A "pure" solution (as far as the context of this answer) is to avoid all the default services and middleware that come "pre-bundled" with <code>AddMvc()</code> by not implementing it in the first place. Instead, we use <code>AddMvcCore()</code>, which is explained further in the next section:</p>
<hr>
<h2>Implementing our own services / middleware with <code>AddMvcCore()</code></h2>
<p>The first thing to get started is to setup <code>ConfigureServices</code> to using <code>AddMvcCore()</code>. If you look at the <a href="https://github.com/aspnet/Mvc/blob/release/2.2/src/Microsoft.AspNetCore.Mvc/MvcServiceCollectionExtensions.cs" rel="noreferrer">GitHub repository</a>, you can see that <code>AddMvc()</code> calls <code>AddMvcCore()</code> with a standard set of services / middleware:</p>
<p>Here are some of the services / middleware that stands out as "unneeded":</p>
<blockquote>
<pre><code>var builder = services.AddMvcCore();
builder.AddViews();
builder.AddRazorViewEngine();
builder.AddRazorPages();
</code></pre>
</blockquote>
<p>Many of these default services are great for a general web project, but are usually undesirable for a "pure" Web API.</p>
<p>Here is a sample implementation of <code>ConfigureServices</code> using <code>AddMvcCore()</code> for a Web API:</p>
<pre><code>public void ConfigureServices(IServiceCollection services)
{
// Build a customized MVC implementation, without using the default AddMvc(),
// instead use AddMvcCore(). The repository link is below:
// https://github.com/aspnet/Mvc/blob/release/2.2/src/Microsoft.AspNetCore.Mvc/MvcServiceCollectionExtensions.cs
services
.AddMvcCore(options =>
{
options.RequireHttpsPermanent = true; // this does not affect api requests
options.RespectBrowserAcceptHeader = true; // false by default
//options.OutputFormatters.RemoveType<HttpNoContentOutputFormatter>();
// these two are here to show you where to include custom formatters
options.OutputFormatters.Add(new CustomOutputFormatter());
options.InputFormatters.Add(new CustomInputFormatter());
})
//.AddApiExplorer()
//.AddAuthorization()
.AddFormatterMappings()
//.AddCacheTagHelper()
//.AddDataAnnotations()
//.AddCors()
.AddJsonFormatters();
}
</code></pre>
<p>The implementation above is mostly a duplicate of the <code>AddMvc()</code> extension method, however I have added a few new areas so that others can see the added benefits of doing this.</p>
<ul>
<li><strong>Custom Input/Output Formatters.</strong> This is where you can do your own highly optimized serializers (such as Protobuf, Thrift, Avro, Etc) instead of using JSON (or worse XML) serialization.</li>
<li><strong>Request Header Handling.</strong> You can make sure that the <code>Accept</code> header is recognized, or not.</li>
<li><strong>Authorization Handling.</strong> You can implement your own custom authorization or can take advantage of the built-in features.</li>
<li><strong>ApiExplorer.</strong> For some projects, you may likely include it, otherwise some WebAPI's may not want to this feature.</li>
<li><strong>Cross-Origin Requests (CORS).</strong> If you need a more relaxed security on your WebAPI, you could enable it.</li>
</ul>
<p>Hopefully with this example of a "pure" solution, you can see the benefits of using <code>AddMvcCore()</code> and be comfortable with using it.</p>
<p>If you're serious about control over performance and latency while working on top of ASP.NET Core's web host, a deep dive into a "minimal" solution may be worth it: there you're dealing right at the edge of the request pipeline, rather than letting requests get bogged down by the MVC middleware.</p>
<hr>
<h2>Additional Reading</h2>
<p>A visual look at how the middleware pipeline looks like... As per my definitions, less layers means "minimal", whereas "pure" is just a clean version of MVC.</p>
<p><a href="https://i.stack.imgur.com/kAPlp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kAPlp.png" alt="enter image description here"></a></p>
<p>You can read more about it on the Microsoft Documents: <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware" rel="noreferrer">ASP.NET Core Middleware Fundamentals</a></p> |
42,526,509 | How to test an iOS application on the CarPlay simulator | <p>I am trying to develop an iOS application compatible with CarPlay. </p>
<p>According to this post <a href="https://stackoverflow.com/questions/22372095/is-ios-carplay-api-public-how-to-integrate-carplay/38541642#38541642">Is iOS CarPlay API Public? How to Integrate CarPlay?</a>, I have to be enrolled with Apple’s MFi program, but I have noticed that there’s the possibility to use a CarPlay simulator with Xcode: launch the Simulator, then <em>Hardware</em> -> <em>External Displays</em> -> <em>CarPlay</em> (I use Xcode 8).
Once you have opened it, you see that the Message app is working in the CarPlay simulator, so I imagine it is possible to try a custom application in this simulator without being enrolled in the MFi program.
I wonder if anyone has tried to launch an application on the CarPlay simulator and, if so, whether they can explain how they did it. </p>
<p>I have also watched the WWDC 2016 (<a href="https://developer.apple.com/videos/play/wwdc2016/722/" rel="noreferrer">https://developer.apple.com/videos/play/wwdc2016/722/</a> open it with Safari) that explains the CarPlay system and how it works. At the end of this presentation, they say that you have to declare a string protocol name (like <em>com.brand</em>) in the <em>SupportExternalAccessoryProtocols</em> key in the <em>Info.plist</em> file, but I don't understand how to get the string protocol name.</p>
<p>I also don’t find any information about the simulator and how to develop application compatible with it.</p>
<p>If someone knew something concerning it, it would be a great help.</p>
<p>Thanks in advance.</p> | 42,762,956 | 2 | 0 | null | 2017-03-01 08:17:46.783 UTC | 11 | 2020-01-17 22:48:27.613 UTC | 2017-05-23 12:26:23.92 UTC | null | -1 | null | 7,574,178 | null | 1 | 12 | ios|ios-simulator|mfi|wwdc|carplay | 10,441 | <p>You don't need to add anything for the <code>SupportExternalAccessoryProtocols</code> key to run a basic CarPlay app in the simulator. This is for declaring protocols for functionality that might be specific to a particular set of head units.</p>
<p>What you are probably missing is the playable content entitlement. If you don't have an entitlements file, you'll need to create it for your app;-</p>
<p><a href="https://i.stack.imgur.com/qVkkX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVkkX.png" alt="xcode navigator"></a></p>
<p>Then add a boolean value of YES for the <code>com.apple.developer.playable-content</code> key</p>
<p><a href="https://i.stack.imgur.com/TPprs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TPprs.png" alt="entitlements file"></a></p>
<p>This should now show your app on the CarPlay home screen in the simulator. As you allude, you won't be able to run that app on a physical device unless you can get Apple to add that entitlement to your account</p> |
42,453,124 | How to update mysql version in xampp (error with innodb_additional_mem_pool_size) | <p>I'm trying to <code>upgrade mysql</code> in <code>xamp</code>. I'm using <code>laravel</code> which requires <code>mariaDB v10.2.2</code>. So I downloaded the <code>latest msi package</code> from the <code>mariaDB website</code>. Now I followed following points to install the same:</p>
<ul>
<li>Install MySQL to C:\TEMP.</li>
<li>Rename the old installation folder to mysql_old.</li>
<li>Copy the following folders "bin, include, lib, share, support-files" to the xamp\mysql\ folder. <strong>I didn't copy the data folder</strong> </li>
<li>Copied the my.ini file from old installation to new installation in xamp\mysql\bin\ folder</li>
<li>Copied the old data folder to new mysql folder</li>
</ul>
<p>Now after doing this I tried to start mysql from the control panel and the following error displays:</p>
<p><a href="https://i.stack.imgur.com/FQgbV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FQgbV.png" alt="XAMP Control Panel error screenshot"></a></p>
<p>Now while checking the error log I found this:</p>
<pre><code>2017-02-25 12:31:56 5736 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2017-02-25 12:31:56 5736 [Note] InnoDB: Uses event mutexes
2017-02-25 12:31:56 5736 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-02-25 12:31:56 5736 [Note] InnoDB: Number of pools: 1
2017-02-25 12:31:56 5736 [Note] InnoDB: Using generic crc32 instructions
2017-02-25 12:31:56 5736 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M
2017-02-25 12:31:56 5736 [Note] InnoDB: Completed initialization of buffer pool
2017-02-25 12:31:56 5736 [Note] InnoDB: Highest supported file format is Barracuda.
2017-02-25 12:31:56 5736 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-02-25 12:31:56 5736 [Note] InnoDB: Setting file 'F:\xamp\mysql\data\ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-02-25 12:31:57 5736 [Note] InnoDB: File 'F:\xamp\mysql\data\ibtmp1' size is now 12 MB.
2017-02-25 12:31:57 5736 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-02-25 12:31:57 5736 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-02-25 12:31:57 5736 [Note] InnoDB: Waiting for purge to start
2017-02-25 12:31:57 5736 [Note] InnoDB: 5.7.14 started; log sequence number 2361919
2017-02-25 12:31:57 11468 [Note] InnoDB: Loading buffer pool(s) from F:\xamp\mysql\data\ib_buffer_pool
2017-02-25 12:31:57 11468 [Note] InnoDB: Buffer pool(s) load completed at 170225 12:31:57
2017-02-25 12:31:57 5736 [Note] Plugin 'FEEDBACK' is disabled.
2017-02-25 12:31:57 5736 [ERROR] f:\xamp\mysql\bin\mysqld.exe: unknown variable 'innodb_additional_mem_pool_size=2M'
2017-02-25 12:31:57 5736 [ERROR] Aborting
</code></pre>
<p>Now following is my.ini file:</p>
<pre><code># Example MySQL config file for small systems.
#
# This is for a system with little memory (<= 64M) where MySQL is only used
# from time to time and it's important that the mysqld daemon
# doesn't use much resources.
#
# You can copy this file to
# F:/xamp/mysql/bin/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is F:/xamp/mysql/data) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.
# The following options will be passed to all MySQL clients
[client]
# password = your_password
port = 3306
socket = "F:/xamp/mysql/mysql.sock"
# Here follows entries for some specific programs
# The MySQL server
[mysqld]
port= 3306
socket = "F:/xamp/mysql/mysql.sock"
basedir = "F:/xamp/mysql"
tmpdir = "F:/xamp/tmp"
datadir = "F:/xamp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
# Change here for bind listening
# bind-address="127.0.0.1"
# bind-address = ::1 # for ipv6
# Where do all the plugins live
plugin_dir = "F:/xamp/mysql/lib/plugin/"
# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
# commented in by lampp security
#skip-networking
#skip-federated
# Replication Master Server (default)
# binary logging is required for replication
# log-bin deactivated by default since XAMPP 1.4.11
#log-bin=mysql-bin
# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1
# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
# the syntax is:
#
# CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
# MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
# where you replace <host>, <user>, <password> by quoted strings and
# <port> by the master's port number (3306 by default).
#
# Example:
#
# CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
# MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
# start replication for the first time (even unsuccessfully, for example
# if you mistyped the password in master-password and the slave fails to
# connect), the slave will create a master.info file, and any later
# change in this file to the variables' values below will be ignored and
# overridden by the content of the master.info file, unless you shutdown
# the slave server, delete master.info and restart the slaver server.
# For that reason, you may want to leave the lines below untouched
# (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin
# Point the following paths to different dedicated disks
#tmpdir = "F:/xamp/tmp"
#log-update = /path-to-dedicated-directory/hostname
# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock = 10000
# Comment the following if you are using InnoDB tables
#skip-innodb
innodb_data_home_dir = "F:/xamp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "F:/xamp/mysql/data"
#innodb_log_arch_dir = "F:/xamp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
## UTF 8 Settings
#init-connect=\'SET NAMES utf8\'
#collation_server=utf8_unicode_ci
#character_set_server=utf8
#skip-character-set-client-handshake
#character_sets-dir="F:/xamp/mysql/share/charsets"
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
</code></pre>
<p>Please look if I'm doing something wrong.</p> | 42,460,721 | 4 | 0 | null | 2017-02-25 07:12:03.713 UTC | 4 | 2022-08-30 22:19:53.52 UTC | 2018-07-01 21:01:25.62 UTC | null | 1,766,831 | null | 6,085,328 | null | 1 | 10 | mysql|xampp|upgrade|mariadb|mysqlupgrade | 53,723 | <p>Simply remove this line from <code>my.ini</code>:</p>
<pre><code> innodb_additional_mem_pool_size = 2M
</code></pre>
<p>It was "deprecated in MySQL 5.6.3, are removed in MySQL 5.7.4."</p>
<p>(If my notes are correct, it has been "unused" since 4.1.0!)</p> |
40,273,067 | Replace specific values in a dataframe column using Pandas | <p>I have a data frame df with a column called "Num_of_employees", which has values like 50-100, 200-500 etc. I see a problem with few values in my data. Wherever the employee number should be 1-10, the data has it as 10-Jan. Also, wherever the value should be 11-50, the data has it as Nov-50. How would I rectify this problem using pandas?</p> | 40,273,117 | 1 | 0 | null | 2016-10-26 22:22:53.46 UTC | 5 | 2016-10-26 22:27:44.303 UTC | 2016-10-26 22:27:44.303 UTC | null | 839,957 | null | 6,461,192 | null | 1 | 11 | python|pandas | 42,238 | <p>A clean syntax for this kind of "find and replace" uses a dict, as</p>
<pre><code>df.Num_of_employees = df.Num_of_employees.replace({"10-Jan": "1-10",
"Nov-50": "11-50"})
</code></pre> |
10,381,113 | Match a pattern in an array | <p>There is an array with 2 elements</p>
<pre><code>test = ["i am a boy", "i am a girl"]
</code></pre>
<p>I want to test if a string is found inside the array elements, say:</p>
<pre><code>test.include("boy") ==> true
test.include("frog") ==> false
</code></pre>
<p>Can i do it like that?</p> | 10,381,161 | 6 | 0 | null | 2012-04-30 09:11:50.65 UTC | 9 | 2017-07-20 07:46:04.1 UTC | 2016-04-18 20:46:13.737 UTC | null | 1,324 | null | 514,316 | null | 1 | 32 | ruby|arrays | 45,699 | <p>Using Regex.</p>
<pre><code>test = ["i am a boy" , "i am a girl"]
test.find { |e| /boy/ =~ e } #=> "i am a boy"
test.find { |e| /frog/ =~ e } #=> nil
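
# If a strict true/false result is wanted (as asked in the question), a
# variation of the same pattern with Enumerable#any? returns booleans directly:
test.any? { |e| /boy/ =~ e }  #=> true
test.any? { |e| /frog/ =~ e } #=> false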
</code></pre> |
6,202,667 | How to use subscripts in ggplot2 legends [R] | <p>Can I use subscripts in ggplot2 legends? I see <a href="https://stackoverflow.com/questions/5293715/how-to-use-greek-symbols-in-ggplot2">this question</a> on greek letters in legends and elsewhere, but I can't figure out how to adapt it.</p>
<p>I thought that using <code>expression()</code>, which works in axis labels, would do the trick. But my attempt below fails. Thanks!</p>
<pre><code>library(ggplot2)
temp <- data.frame(a = rep(1:4, each = 100), b = rnorm(4 * 100), c = 1 + rnorm(4 * 100))
names(temp)[2:3] <- c("expression(b[1])", "expression(c[1])")
temp.m <- melt(temp, id.vars = "a")
ggplot(temp.m, aes(x = value, linetype = variable)) + geom_density() + facet_wrap(~ a)
</code></pre> | 6,202,857 | 2 | 1 | null | 2011-06-01 14:18:34.973 UTC | 4 | 2012-10-26 10:06:04.523 UTC | 2017-05-23 12:24:18.227 UTC | null | -1 | null | 334,755 | null | 1 | 28 | r|ggplot2 | 21,583 | <p>The following should work (remove your line with <code>names(temp) <-</code>...):</p>
<pre><code>ggplot(temp.m, aes(x = value, linetype = variable)) +
geom_density() + facet_wrap(~ a) +
scale_linetype_discrete(breaks=levels(temp.m$variable),
labels=c(expression(b[1]), expression(c[1])))
</code></pre>
<p>See <code>help(scale_linetype_discrete)</code> for available customization (e.g. legend title via <code>name=</code>).</p> |
6,058,101 | In Git, how do I get a detailed list of file changes from one revision to another? | <p>I use a Git repository on my server to version user data files sent to the server. I'm interested in getting a list of changed files between any two revisions.</p>
<p>I know about <code>git diff --name-only <rev1> <rev2></code>, but this only gives me a list of file names. I'm especially interested in renames and copies, too. Ideally, the output would be something like this:</p>
<pre><code>updated: userData.txt
renamed: picture.jpg -> background.jpg
copied: song.mp3 -> song.mp3.bkp
</code></pre>
<p>Is it possible? <code>--name-status</code> also doesn't seem to indicate renames and copies.</p> | 6,058,162 | 2 | 0 | null | 2011-05-19 11:43:52.247 UTC | 8 | 2015-11-05 16:25:02.66 UTC | null | null | null | null | 390,581 | null | 1 | 29 | git|rename|file-listing | 7,842 | <pre><code>git diff --name-status -C <rev1> <rev2>
</code></pre>
<p>should be closer to what you are looking for.</p>
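<p>For the files from the question, the output of that command would look roughly like this (the similarity scores shown are only illustrative):</p>
<pre><code>M       userData.txt
R100    picture.jpg     background.jpg
C100    song.mp3        song.mp3.bkp
</code></pre>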
<p><a href="http://git-scm.com/docs/git-diff" rel="noreferrer"><code>--name-status</code></a> would display the file names and their respective status:</p>
<pre><code>(A|C|D|M|R|T|U|X|B)
</code></pre>
<blockquote>
<p>Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R),<br>
type (i.e. regular file, symlink, submodule, …) changed (T),<br>
Unmerged (U), Unknown (X), or pairing Broken (B)</p>
</blockquote>
<p>(to which the <a href="https://stackoverflow.com/users/390581/jean-philippe-pellet">OP Jean Philippe Pellet</a> adds:</p>
<blockquote>
<p>The status letters <code>R</code> and <code>C</code> “are always followed by a score denoting the percentage of similarity between the source and target of the move or copy, and are the only ones to be so".
)</p>
</blockquote>
<p>Regarding files copied or moved:</p>
<pre><code>-C[<n>]
--find-copies[=<n>]
</code></pre>
<blockquote>
<p>Detect copies as well as renames. If <code>n</code> is specified, it has the same meaning as for <code>-M<n></code>.</p>
</blockquote>
<pre><code>--find-copies-harder
</code></pre>
<blockquote>
<p>For performance reasons, by default, <code>-C</code> option finds copies only if the original file of the copy was modified in the same changeset.<br>
This flag makes the command inspect unmodified files as candidates for the source of copy.<br>
This is a very expensive operation for large projects, so use it with caution. Giving more than one <code>-C</code> option has the same effect.</p>
</blockquote>
<hr>
<p><a href="https://stackoverflow.com/users/670229/brauliobo">brauliobo</a> recommends <a href="https://stackoverflow.com/questions/6058101/in-git-how-do-i-get-a-detailed-list-of-file-changes-from-one-revision-to-anothe/6058162?noredirect=1#comment54878924_6058162">in the comments</a>:</p>
<pre><code>git diff --stat -C
git show --stat -C
git log --stat -C
</code></pre> |
6,102,077 | possible characters base64 url safe function | <p>What is the range of possible characters returned from this string?</p>
<pre><code>function base64url_encode($data)
{
return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
}
</code></pre>
<p>My guess is <code>[a-z0-9-_]</code></p> | 6,102,233 | 2 | 1 | null | 2011-05-23 19:39:37.587 UTC | 2 | 2020-12-26 14:52:57.453 UTC | 2011-05-23 19:55:13.887 UTC | null | 541,091 | null | 627,473 | null | 1 | 29 | php|base64 | 39,185 | <p>The range of possible characters returned are:</p>
<ul>
<li><code>A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z</code></li>
<li><code>a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z</code></li>
<li><code>0, 1, 2, 3, 4, 5, 6, 7, 8, 9</code></li>
<li><code>-</code> (<em>minus</em>) and <code>_</code> (<em>underscore</em>)</li>
</ul>
<p>In your regex-style, that would be <code>[a-zA-Z0-9_-]</code>.</p> |
30,773,431 | how to use List<WebElement> webdriver | <p>I am creating an automated test for some websites using WebDriver, TestNG and code written in Java. The page shows a register of categories; in parentheses is the number of auctions, and I need to get this number as a variable.</p>
<p>I use this code</p>
<pre><code>By bycss =By.cssSelector(".list.list-categories>li:first-child");
WebElement number1_1_vse = driver.findElement(bycss );
String text_vse1_1 = number1_1_vse.getText();
</code></pre>
<p>but I get only the first number and I need to get all of them. Someone advised me to use a List. But when I use it, I get only </p>
<pre><code>[[[[[[[FirefoxDriver: firefox on WINDOWS (7e6e0d0f-5cbb-4e48-992f-26d743a321a5)] -> css selector: .list.list-categories>li:first-child]] -> xpath: ..]] -> xpath: .//*], [[[[[[FirefoxDriver: firefox on WINDOWS (7e6e0d0f-5cbb-4e48-992f-.....
</code></pre>
<p>code: </p>
<pre><code>By bycss2 =By.cssSelector(".list.list-categories>li:first-child");
WebElement number1_1_vse2 = driver.findElement(bycss2 );
WebElement parent1 = number1_1_vse2.findElement(By.xpath(".."));
List<WebElement> childs1 = parent1.findElements(By.xpath(".//*"));
System.out.println(childs1);
</code></pre>
<p><a href="http://www.labmultis.info/jpecka.portal-exdrazby/index.php?c1=2&a=s&aa=&ta=1" rel="nofollow">link to the website</a></p>
<p><a href="http://postimg.org/image/49t1rj2pp/" rel="nofollow">screenshot -> image with the number</a></p>
<p>can anyone advise me please?</p> | 30,774,161 | 3 | 0 | null | 2015-06-11 06:40:39.427 UTC | 5 | 2016-06-01 19:54:23.603 UTC | null | null | null | null | 4,826,243 | null | 1 | 2 | java|list|webdriver|testng|parents | 125,581 | <p>Try the following code:</p>
<pre><code>//...
By mySelector = By.xpath("/html/body/div[1]/div/section/div/div[2]/form[1]/div/ul/li");
List<WebElement> myElements = driver.findElements(mySelector);
for(WebElement e : myElements) {
System.out.println(e.getText());
}
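
// Hypothetical follow-up: extract just the number in parentheses as an int
// (assumes each item's text ends with a count such as "(950)"):
for(WebElement e : myElements) {
    String digits = e.getText().replaceAll(".*\\((\\d+)\\).*", "$1");
    System.out.println(Integer.parseInt(digits));
}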
</code></pre>
<p>It will return the whole content of the <code><li></code> tags, like: </p>
<pre><code><a class="extra">Vše</a> (950)</li>
</code></pre>
<p>But you can easily get the number now from it, for example by using <code>split()</code> and/or <code>substring()</code>.</p> |
19,606,739 | Maven: JAR will be empty - no content was marked for inclusion | <p>I have a minor problem with maven. When I run the command mvn package I get the following warning:</p>
<p><strong>[WARNING] JAR will be empty - no content was marked for inclusion!</strong></p>
<p>The build is successful but the produced jar-file is empty as the warning says.</p>
<p>Why is this and what am I doing wrong?</p>
<p>Here is my pom.xml</p>
<pre><code><project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>framework</groupId>
<artifactId>framework</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>framework</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
My dependencies
</dependencies>
<build>
<directory>target</directory>
<outputDirectory>target/classes</outputDirectory>
<finalName>${project.artifactId}-${project.version}</finalName>
<testOutputDirectory>target/test-classes</testOutputDirectory>
<sourceDirectory>src</sourceDirectory>
<scriptSourceDirectory>src</scriptSourceDirectory>
<testSourceDirectory>src</testSourceDirectory>
<testResources>
<testResource>
<directory>test</directory>
</testResource>
</testResources>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<includes>
<include>src</include>
</includes>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.4</version>
<configuration>
<outputDirectory>lib</outputDirectory>
</configuration>
</plugin>
</plugins>
</build>
</code></pre>
<p></p> | 19,606,819 | 2 | 0 | null | 2013-10-26 12:32:00.147 UTC | 5 | 2018-10-02 13:56:37.86 UTC | 2018-10-02 13:56:37.86 UTC | null | 6,139,527 | null | 1,884,684 | null | 1 | 28 | java|maven|jar | 62,118 | <p>I hope you have good reason <strong><em>not</em></strong> to follow the <a href="http://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html">standard directory layout</a>, otherwise consider to rearrange the folders: it will make your life a lot easier (as well as for your co-workers).
My guess is that nothing has been compiled. In that case: removing the configuration of the maven-compiler-plugin should be enough.</p> |
40,840,317 | Angular2 - how to start and with which IDE | <p>I have used AngularJS 1.x now for a couple of months. Now I will switch to Angular2 (with TypeScript) and actually I am not sure which IDE to use.
It is also not clear to me how to compile the TypeScript code into JavaScript - is this actually necessary?
I have read that Visual Studio Code would be a nice editor for Angular2 projects - is there a TypeScript compiler included? I would be glad for any information in this direction.</p> | 40,842,224 | 2 | 0 | null | 2016-11-28 08:53:49.907 UTC | 35 | 2019-02-04 16:01:19.333 UTC | null | null | null | null | 3,318,489 | null | 1 | 57 | angular|ide | 76,710 | <p><strong>1) IDE</strong></p>
<p>I was wondering myself which IDE is the best suited for Angular2.</p>
<p>I'm a big <em>Sublime Text</em> supporter, and even though there's a TypeScript plugin ... it didn't feel like it made full use of TypeScript's power.</p>
<p>So I tried my second favourite editor: <em>Atom</em> (+ TypeScript plugin).
Better, BUT there was no support for auto import (maybe it has some now?) and I had to wait 30s before I got any autocompletion.</p>
<p>Then I tried Webstorm, as the company I'm currently working at has some licences. It was great and I was really happy for a month. But using an editor that is not free felt ... weird. I wouldn't use it at home for personal projects, and I couldn't recommend it to other people easily. And honestly, I'm not a super fan of the Webstorm interface.</p>
<p>So I gave (another) try to <strong><a href="https://code.visualstudio.com/" rel="noreferrer">Visual Studio Code</a></strong> that I didn't find so great when I first tried it few months ago. It has seriously evolved and :<br>
- it's simple<br>
- it's complete<br>
- Code<br>
- Debugger (remote --> super powerful)<br>
- Git integration<br>
- Plugin store<br>
- it has great great Angular2 support<br>
- intellisense is really awesome</p>
<p>I'm using it since a month and so far, I'm really happy and do not feel the need to change.</p>
<p>Just to help you start with good plugins, here's mine :
<a href="https://i.stack.imgur.com/7VA4M.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7VA4M.png" alt="enter image description here"></a></p>
<hr>
<p><strong>2) Angular 2 : Discover the basics</strong></p>
<p>As you're familiar with AngularJs, I don't know how you felt about the official documentation, but I couldn't learn from it. I had to follow many (different) tutorials and then used the doc once I understood AngularJs.</p>
<p>With Angular2, they understood the challenge of having great documentation and they have paid attention to it since the alpha version (even though it was evolving continuously =) !).</p>
<p>So I'd recommend you to go on <a href="https://angular.io" rel="noreferrer">https://angular.io</a> and simply read the doc.</p>
<p>It's well done and is not only for advanced user. You will find good tutorials there !</p>
<hr>
<p><strong>3) How to use Typescript with Visual Studio Code ?</strong></p>
<p>I'd strongly recommend you to use <strong><a href="https://github.com/angular/angular-cli" rel="noreferrer">angular-cli</a></strong> for developing an Angular2 app. Not only for simplicity, but because as a community we need a basic starter that gives us similarly structured repos, so we can easily understand the structure of another project.</p>
<p>Plus, angular-cli handles Typescript compilation for you and you don't have to deal with it in command line or from your IDE.</p>
<pre><code>npm i -g typescript
</code></pre>
<p>(no need for typings anymore since Typescript 2.0 !)</p>
<pre><code>npm i -g angular-cli
ng new my-super-project --style=scss
cd my-super-project
</code></pre>
<p>Then just run</p>
<pre><code>ng serve
</code></pre>
<p>And access your app at: <a href="http://localhost:4200" rel="noreferrer">http://localhost:4200</a></p>
<p>Angular-cli compiles your TypeScript and even your (scss|sass|less) files.</p>
<p>When you want to deploy your app :</p>
<pre><code>ng serve --prod
</code></pre>
<p>This will also minify the JS and CSS.</p>
<hr>
<p><strong>4) What's next ?</strong></p>
<p>Once you feel more comfortable with Angular2 in general, I'd strongly recommend you to learn (more) about<br>
- <a href="http://redux.js.org/" rel="noreferrer">Redux</a><br>
- <a href="https://github.com/Reactive-Extensions/RxJS" rel="noreferrer">RxJs</a> </p>
<p>And once you're familiar with those concepts, you should start playing with <a href="https://github.com/ngrx/store" rel="noreferrer">ngrx</a>.</p>
<p>Good luck in this long (and awesome) journey !</p>
<p>I've released an ngrx starter! For those familiar with Redux and willing to discover angular and/or ngrx it might help you get started! I'm sure it might also be a good idea to use this template as a starter for any kind of ngrx project (small, medium or even large!). I tried to describe in the Readme how to use it and there is plenty of comments into the code itself: <a href="https://github.com/maxime1992/angular-ngrx-starter" rel="noreferrer">https://github.com/maxime1992/angular-ngrx-starter</a></p> |
19,112,209 | How to handle the new window in Selenium WebDriver using Java? | <p>This is my code:</p>
<pre><code>driver.findElement(By.id("ImageButton5")).click();
//Thread.sleep(3000);
String winHandleBefore = driver.getWindowHandle();
driver.switchTo().window(winHandleBefore);
driver.findElement(By.id("txtEnterCptCode")).sendKeys("99219");
</code></pre>
<p>Now I have the next error:</p>
<blockquote>
<p>Exception in thread "main" org.openqa.selenium.NoSuchElementException:
Unable to find element with id == txtEnterCptCode (WARNING: The server
did not provide any stacktrace information)
Command duration or timeout: 404 milliseconds.</p>
</blockquote>
<p>Any ideas?</p> | 19,116,017 | 6 | 0 | null | 2013-10-01 09:11:17.15 UTC | 8 | 2017-11-23 08:59:07.53 UTC | 2016-08-12 13:39:45.717 UTC | null | 617,450 | null | 2,790,425 | null | 1 | 17 | java|selenium-webdriver|new-window | 186,798 | <p>It seems like you are not actually switching to any new window. You are supposed to get the window handle of your original window, save that, then get the window handle of the new window and switch to that. Once you are done with the new window you need to close it, then switch back to the original window handle. See my sample below:</p>
<p>i.e.</p>
<pre><code>String parentHandle = driver.getWindowHandle(); // get the current window handle
driver.findElement(By.xpath("//*[@id='someXpath']")).click(); // click some link that opens a new window
for (String winHandle : driver.getWindowHandles()) {
driver.switchTo().window(winHandle); // switch focus of WebDriver to the next found window handle (that's your newly opened window)
}
//code to do something on new window
driver.close(); // close newly opened window when done with it
driver.switchTo().window(parentHandle); // switch back to the original window
</code></pre> |
8,782,274 | Sendgrid / email sending issues in Ruby on Rails (hosted on Heroku) | <p>I'm having a problem getting SendGrid to send emails successfully on a Rails 3.1 app that uses Authlogic for authentication and is deployed on Heroku. I have the following Action Mailer configuration in config/environments/[development.rb and production.rb]: </p>
<p><pre><code>config.action_mailer.delivery_method = :smtp
config.action_mailer.default_url_options = { :host => 'localhost:3000' }
config.action_mailer.default_charset = "utf-8"
config.action_mailer.raise_delivery_errors = true
config.action_mailer.perform_deliveries = true
config.action_mailer.smtp_settings = {
:address => 'smtp.sendgrid.net',
:port => 587,
:domain => ENV['SENDGRID_DOMAIN'],
:user_name => ENV['SENDGRID_USERNAME'],
:password => ENV['SENDGRID_PASSWORD'],
:authentication => 'plain',
:enable_starttls_auto => true
}</pre></code></p>
<p>for production.rb, the above code is the same except for </p>
<p><pre><code>
config.action_mailer.default_url_options = { :host => [app name in heroku] }
</pre></code></p>
<p>When I run it in development mode, I get the following error reported:</p>
<p><pre><code>
Completed 500 Internal Server Error in 21740ms
Net::SMTPFatalError (550 Cannot receive from specified address notification@[app-domain]: Unauthenticated senders not allowed
):
</pre></code></p>
<p>I now don't really know how to set it up to get it working. Does anyone with prior experience setting up SendGrid on Heroku and Rails know what's going on?</p>
<p>thank you so much. you guys are the best!!!</p> | 8,800,632 | 3 | 0 | null | 2012-01-08 23:15:33.117 UTC | 11 | 2021-01-19 15:28:36.3 UTC | 2012-01-08 23:23:07.107 UTC | null | 835,698 | null | 835,698 | null | 1 | 16 | ruby-on-rails|heroku|actionmailer|sendgrid | 15,506 | <p>I spent half a freakin' day on this and finally got mine working now. Quite frustrated as it was due to a poor documentation error. I'm running Rails 3.1 and Cedar stack on Heroku by the way.</p>
<p>So <a href="http://devcenter.heroku.com/articles/sendgrid" rel="noreferrer">http://devcenter.heroku.com/articles/sendgrid</a> will tell you to put your SMTP settings stuff in config/initializers/mail.rb. BUT... on <a href="http://docs.sendgrid.com/documentation/get-started/integrate/examples/rails-example-using-smtp/" rel="noreferrer">http://docs.sendgrid.com/documentation/get-started/integrate/examples/rails-example-using-smtp/</a> it says to put all your SMTP settings stuff in config/environment.rb <strong>instead of config/initializers/mail.rb</strong></p>
<p>So the solution is to put that in your environment.rb file. This is how my environment.rb looks:</p>
<pre><code># Load the rails application
require File.expand_path('../application', __FILE__)
# Initialize the rails application
Freelanceful::Application.initialize!
# Configuration for using SendGrid on Heroku
ActionMailer::Base.delivery_method = :smtp
ActionMailer::Base.smtp_settings = {
:user_name => "yourSendGridusernameyougetfromheroku",
:password => "yourSendGridpasswordyougetfromheroku",
:domain => "staging.freelanceful.com",
:address => "smtp.sendgrid.net",
:port => 587,
:authentication => :plain,
:enable_starttls_auto => true
}
</code></pre>
<p>To get your SendGrid username and password, type</p>
<pre><code>$ heroku config -long
</code></pre>
<p>Hope that helps.. and more people in the future of this headache.</p> |
8,674,236 | Is typedef a storage-class-specifier? | <p>I tried the following code</p>
<pre><code>#include <stdio.h>
int main(void)
{
typedef static int sint;
sint i = 10;
return 0;
}
</code></pre>
<p>and hit the following error:</p>
<pre><code>error: multiple storage classes in declaration specifiers
</code></pre>
<p>When I referred the C99 specification, I came to know that <code>typedef</code> is a <code>storage class</code>. </p>
<pre><code>6.7.1 Storage-class specifiers
Syntax
storage-class-specifier:
typedef
extern
static
auto
register
Constraints: At most, one storage-class specifier may be
given in the declaration specifiers in a declaration
Semantics: The typedef specifier is called a ‘‘storage-class specifier’’
for syntactic convenience only;
</code></pre>
<p>The only explanation that I could find (based on some internet searching and cross-referencing various sections of the C99 specification) was <code>syntactic convenience only to make the grammar simpler</code>.</p>
<p>I'm looking for some justification/explanation of how a type name can have a storage-class specifier.</p>
<p>Doesn't it make sense to have code like <code>typedef static int sint;</code>?</p>
<p>or Where am I going wrong?!</p> | 8,674,278 | 4 | 0 | null | 2011-12-29 22:27:31.32 UTC | 12 | 2013-09-02 14:17:42.193 UTC | null | null | null | null | 1,056,289 | null | 1 | 19 | c|typedef | 12,952 | <p>Yes, <code>typedef</code> is a storage-class-specifier as you found in the standard. In part it's a grammatical convenience, but it is deliberate that you can either have <code>typedef</code> <em>or</em> one of the more "obvious" storage class specifiers.</p>
<p>A typedef declaration creates an alias for a type.</p>
<p>In a declaration <code>static int x;</code> the type of <code>x</code> is <code>int</code>. <code>static</code> has nothing to do with the type.</p>
<p>(Consider that if you take the address of <code>x</code>, <code>&x</code> has type <code>int*</code>. <code>int *y = &x;</code> would be legal as would <code>static int *z = &x</code> but this latter <code>static</code> affects the storage class of <code>z</code> and is independent of the storage class of <code>x</code>.)</p>
<p>If something like this were allowed the <code>static</code> would have no effect as no object is being declared. The type being aliased is just <code>int</code>.</p>
<pre><code>typedef static int sint;
</code></pre> |
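<p>If the intent was simply a <code>static int</code> with a shorter type name, a minimal sketch of the usual way to express it is to keep the alias free of storage classes and put <code>static</code> on the object declaration instead (the variable is illustrative):</p>
<pre><code>#include <stdio.h>

typedef int sint;     /* alias only: no storage class here */

static sint i = 10;   /* the storage class belongs on the object declaration */

int main(void)
{
    printf("%d\n", i);
    return 0;
}
</code></pre>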
8,495,367 | Using additional command line arguments with gunicorn | <p>Assuming I'm starting a Flask app under gunicorn as per <a href="http://gunicorn.org/deploy.html#runit" rel="noreferrer">http://gunicorn.org/deploy.html#runit</a>, is there a way for me to include/parse/access additional command line arguments?</p>
<p>E.g., can I include and parse the <code>foo</code> option in my Flask application somehow?</p>
<pre><code>gunicorn mypackage:app --foo=bar
</code></pre>
<p>Thanks,</p> | 8,496,666 | 2 | 0 | null | 2011-12-13 19:58:47.227 UTC | 5 | 2017-12-08 09:11:22.653 UTC | 2017-12-08 09:11:22.653 UTC | null | 2,667,536 | null | 319,006 | null | 1 | 30 | python|flask|gunicorn | 25,409 | <p>You can't pass command line arguments directly but you can choose application configurations easily enough.</p>
<pre><code>$ gunicorn 'mypackage:build_app(foo="bar")'
</code></pre>
<p>Will call the function "build_app" passing the foo="bar" kwarg as expected. This function should then return the WSGI callable that'll be used.</p> |
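<p>For illustration, a minimal sketch of what such a <code>build_app</code> factory could look like for a Flask app (the module name <code>mypackage</code> and the <code>foo</code> option come from the command above; the config key and route are assumptions):</p>
<pre><code># mypackage.py
from flask import Flask

def build_app(foo="default"):
    # gunicorn evaluates 'mypackage:build_app(foo="bar")' and serves
    # the WSGI callable returned from this factory.
    app = Flask(__name__)
    app.config["FOO"] = foo  # make the value available to the app

    @app.route("/")
    def index():
        return "foo is " + app.config["FOO"]

    return app
</code></pre>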
47,565,761 | Gradle Warnings: Could not find google-services.json while looking in | <p>I got the following warnings.
How can I avoid these warnings?</p>
<blockquote>
<p>Could not find google-services.json while looking in
[src/flavor1/debug, src/debug, src/flavor1] Could not find
google-services.json while looking in [src/flavor1/release,
src/release, src/flavor1] Could not find google-services.json while
looking in [src/flavor2/debug, src/debug, src/flavor2] Could not find
google-services.json while looking in [src/flavor2/release,
src/release, src/flavor2]</p>
</blockquote>
<hr>
<p>I added two client_info at <strong>app/google-services.json</strong></p>
<pre><code>{
"project_info": {
"project_number": "000000000000",
"project_id": "****-*****"
},
"client": [
{
"client_info": {
"mobilesdk_app_id": ...,
"android_client_info": {
"package_name": "flavor1.package.name"
}
},
...
},
{
"client_info": {
"mobilesdk_app_id": ...,
"android_client_info": {
"package_name": "flavor2.package.name"
}
},
...
}
],
"configuration_version": "1"
}
</code></pre> | 47,565,868 | 2 | 1 | null | 2017-11-30 04:13:43.587 UTC | 3 | 2019-03-27 04:10:30.947 UTC | 2017-12-01 06:23:05.49 UTC | null | 3,395,198 | null | 2,306,678 | null | 1 | 29 | android|gradle|google-play-services|android-flavors | 42,163 | <p>As per my experience, this will occur when you are using any type of service related to firebase and you have not put the <code>google-services.json</code> the file which we needed to use the service,</p>
<p>The solution is to download the file from the Firebase console. If you created a single Firebase project covering all flavors, put one file in your app-level folder; if you created a separate project per flavor, you need one file per flavor (or per project), placed in the matching source set.</p>
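<p>For reference, a possible layout (the flavor names and paths are taken from the warning messages above; use either the single app-level file or the per-flavor files, depending on your Firebase setup):</p>
<pre><code>app/
    google-services.json            // single Firebase project covering all flavors
    src/
        flavor1/
            google-services.json    // per-flavor file (separate Firebase projects)
        flavor2/
            google-services.json
</code></pre>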
54,033,765 | How to give Image src dynamically in react js? | <p>I am trying to set the image name in src dynamically, using a variable together with the path, but I am not able to set src correctly. I tried solutions on Stack Overflow but nothing is working. </p>
<p>I tried to give path like this </p>
<pre><code><img src={`../img/${img.code}.jpg`}></img>
<img src={'../img/' + img.code + '.jpg'}></img>
<img src={'../img/{img.code}.jpg'}></img>
</code></pre>
<p>My images are saved under the src/img path.
If I give the path like this</p>
<pre><code><img src={require('../img/nokia.jpg')}/>
</code></pre>
<p>the image is displayed.</p>
<p>I know this question has been asked before, but nothing is working for me.
Please help me how can I set image path?</p> | 54,033,931 | 5 | 3 | null | 2019-01-04 06:02:41.437 UTC | 4 | 2020-12-26 10:23:34.987 UTC | null | null | null | null | 9,814,965 | null | 1 | 43 | reactjs|react-redux | 54,771 | <p>if you dont want to require the image then you have to put all your images into public folder and then </p>
<pre><code><img src={`../img/${img.code}.jpg`}></img>
</code></pre>
<p>this method will work.</p> |
53,993,980 | How to validate and sanitize HTTP Get with Spring Boot? | <p>I keep getting this annoying error from Checkmarx code scanner,</p>
<pre><code>Method getTotalValue at line 220 of src\java\com\example\PeopleController.java
gets user input for the personName element. This element’s value then flows through
the code without being properly sanitized or validated and is eventually
displayed to the user. This may enable a Cross-Site-Scripting attack.
</code></pre>
<p><strong>Here is my code. I think I did ALL the validation necessary. What else???</strong> </p>
<pre><code>@Slf4j
@Configuration
@RestController
@Validated
public class PeopleController {
@Autowired
private PeopleRepository peopleRepository;
@RequestMapping(value = "/api/getTotalValue/{personName}", method = RequestMethod.GET)
@ResponseBody
public Integer getTotalValue(@Size(max = 20, min = 1, message = "person is not found")
@PathVariable(value="personName", required=true) String personName) {
PersonObject po = peopleRepository.findByPersonName(
Jsoup.clean(personName, Whitelist.basic()));
try {
return po.getTotalValue();
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
@ExceptionHandler
public String constraintViolationHandler(ConstraintViolationException ex) {
return ex.getConstraintViolations().iterator().next()
.getMessage();
}
}
</code></pre>
<p>There must be some missing validation. How to validate HTTP GET properly with Spring Boot </p> | 53,994,470 | 5 | 4 | null | 2019-01-01 08:12:37.32 UTC | 5 | 2022-05-03 09:01:02.803 UTC | 2019-01-01 09:13:16.12 UTC | null | 3,850,730 | null | 6,496,267 | null | 1 | 12 | validation|spring-boot|http-get|checkmarx | 50,926 | <p>You need to be a bit careful with these scanning tools as sometimes these tools do report false positives and sometimes no code changes are required. I am no expert of checkmarx but be sure that this tool really understands bean validation annotations that you are using & the call <code>Jsoup.clean(personName, Whitelist.basic())</code> . </p>
<blockquote>
<p>I think I did ALL the validation necessary. What else???</p>
</blockquote>
<p>First, you need to understand the difference between application-level <strong>input sanitization</strong> and business-level <strong>input validation</strong> for a controller. What you are doing here is the second part; the first might be missing in your setup. It is done purely from a security perspective and is usually set up for the whole application. </p>
<p>You are using the <code>@Size</code> annotation to limit an input's size, but that guarantees nothing about bad strings - strings that can cause XSS attacks. Then you call <code>Jsoup.clean(personName, Whitelist.basic())</code> to clean this size-validated input. As I am not sure what that call does, you need to ensure that the resulting value is XSS-safe. You immediately pass that value to a DB call & then return an <code>Integer</code> to the caller/client, so I am very skeptical about any possibility of an XSS attack here, but the tool says there is one. </p>
<blockquote>
<p>There must be some missing validation. How to validate HTTP GET
properly with Spring Boot</p>
</blockquote>
<p>As I explained earlier, input validation is a term usually meant for business-logic-level input validation, while input sanitization / clean-up is about security. In a Spring Boot environment, this is usually done by using <strong>Spring Security APIs</strong> & enabling XSS filters, or by writing your own XSS filter and plugging it into your application. The filter comes first and your controller later, so your controller will always receive a sanitized value & you will apply business validations on that sanitized value. </p>
<p>This is a broad-level answer; for code etc. you can search online. I also suggest reading more about XSS attacks. Just understand that there are multiple ways to accomplish the same goal. </p>
<p><a href="https://www.checkmarx.com/2017/10/09/3-ways-prevent-xss/" rel="noreferrer">3 Ways to Prevent XSS</a></p>
<p><a href="https://searchsoftwarequality.techtarget.com/answer/XSS-prevention-in-Java" rel="noreferrer">XSS prevention in Java</a></p>
<p><a href="https://stackoverflow.com/questions/31282379/how-to-use-spring-security-to-prevent-xss-and-xframe-attack">How to create filter in Spring RESTful for Prevent XSS?</a></p>
<p><a href="https://www.softwaretestinghelp.com/cross-site-scripting-xss-attack-test/" rel="noreferrer">Cross Site Scripting (XSS) Attack Tutorial with Examples, Types & Prevention</a></p>
<p>In last link, its mentioned , </p>
<blockquote>
<p>The first step in the prevention of this attack is Input validation.
Everything, that is entered by the user should be precisely validated,
because the user’s input may find its way to the output.</p>
</blockquote>
<p>& that you are not doing in your code so I would guess that there is no XSS. </p>
<p><strong>EDIT:</strong> </p>
<p>There are two aspects of XSS security. The first is not allowing malicious input to reach server-side code, which is done by having an XSS filter. Sometimes there is no harm in allowing malicious input (let's say you are saving that malicious input to the DB or returning it in an API response). </p>
<p>The second aspect is instructing HTML clients about possible XSS attacks. If we know for sure that the API client is going to be HTML / UI, then we need to add the <code>X-XSS-Protection</code> header, which is done by the code below. This enables the browser to turn on its XSS protection feature (if present). </p>
<pre><code>@Override
protected void configure(HttpSecurity http) throws Exception {
    http.headers().xssProtection()....
}
</code></pre>
<p><a href="https://stackoverflow.com/questions/9090577/what-is-the-http-header-x-xss-protection">What is the http-header “X-XSS-Protection”?</a></p>
<p><a href="https://stackoverflow.com/questions/37606227/is-xss-protection-in-spring-security-enabled-by-default">Is Xss protection in Spring security enabled by default?</a></p>
<p>For the first aspect, i.e. writing the filter, refer to <a href="https://stackoverflow.com/a/49399129/3850730">this answer of mine</a> and the links in that answer. </p>
<p>I think I wrongly wrote above that Spring Security provides input sanitization filters; I guess it doesn't. I will verify and let you know. I have written my custom filter along the lines mentioned in the answer to this question - <a href="https://stackoverflow.com/questions/41938010/prevent-xss-in-spring-mvc-controller">Prevent XSS in Spring MVC controller</a></p>
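<p>To make the filter idea concrete, here is a minimal sketch of such a custom filter (the class name is illustrative; it HTML-escapes request parameters with Spring's <code>HtmlUtils</code>, and note it only covers request parameters - the path variable used in the question would need separate handling):</p>
<pre><code>import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.HtmlUtils;

@Component
public class XssSanitizingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Wrap the request so controllers only ever see escaped parameter values.
        chain.doFilter(new HttpServletRequestWrapper(request) {
            @Override
            public String getParameter(String name) {
                String value = super.getParameter(name);
                return value == null ? null : HtmlUtils.htmlEscape(value);
            }
        }, response);
    }
}
</code></pre>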
<p>You also have to understand that Spring Boot is used to write traditional MVC apps too, where the server side renders HTML. In the case of JSON responses (REST APIs), the UI client can control what to escape and what not; the complexity arises because JSON output is not always fed to HTML clients, i.e. browsers.</p>
26,599,176 | Is it possible to duplicate a dashboard in grafana? | <p>I need to create a dev dashboard very similar to an existing prod one, and was wondering if there was an easy way of copying the existing dashboard. Any help would be greatly appreciated!</p> | 31,587,660 | 5 | 0 | null | 2014-10-28 00:44:36.113 UTC | 5 | 2019-05-10 10:02:49.507 UTC | 2019-05-10 10:02:49.507 UTC | null | 3,212,623 | null | 602,076 | null | 1 | 104 | grafana | 62,584 | <p>New versions got a "Save As..." button in the dashboard menu:</p>
<p><a href="https://i.stack.imgur.com/kjt3J.png"><img src="https://i.stack.imgur.com/kjt3J.png" alt="Save as in the menu"></a></p> |
566,139 | Detecting network connection speed and bandwidth usage in C# | <p>Is there a way to detect the network speed and bandwidth usage in C#? Even pointers to open-source components are welcome. </p> | 566,208 | 3 | 0 | null | 2009-02-19 16:37:36.3 UTC | 25 | 2021-07-19 16:13:20.147 UTC | 2017-11-27 22:22:01.687 UTC | null | 4,284,627 | Matthias | 16,440 | null | 1 | 44 | c#|networking|performance|bandwidth|detect | 32,382 | <p>Try using the System.Net.NetworkInformation classes. In particular, <a href="http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ipv4interfacestatistics.aspx" rel="noreferrer"><code>System.Net.NetworkInformation.IPv4InterfaceStatistics</code></a> ought to have some information along the lines of what you're looking for.</p>
<p>Specifically, you can check the <code>BytesReceived</code> property, wait a given interval, and then check the <code>BytesReceived</code> property again to get an idea of how many bytes/second your connection is processing. To get a good number, though, you should try to download a large block of information from a given source, and check then; that way you should be 'maxing' the connection when you do the test, which should give more helpful numbers.</p>
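<p>A minimal sketch of that two-sample approach (the choice of interface and the one-second interval are just illustrative):</p>
<pre><code>using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading;

class BandwidthProbe
{
    static void Main()
    {
        // Pick the first interface that is up and not the loopback adapter.
        NetworkInterface nic = NetworkInterface.GetAllNetworkInterfaces()
            .First(n => n.OperationalStatus == OperationalStatus.Up &&
                        n.NetworkInterfaceType != NetworkInterfaceType.Loopback);

        long before = nic.GetIPv4Statistics().BytesReceived;
        Thread.Sleep(1000); // measurement interval
        long after = nic.GetIPv4Statistics().BytesReceived;

        Console.WriteLine("~{0:F1} KB/s received on {1}",
                          (after - before) / 1024.0, nic.Name);
    }
}
</code></pre>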
6,658,561 | How to enable mbstring from php.ini? | <p>I have real difficulties with enabling mbstring extension on my localhost. </p>
<p>I'm using XAMPP 1.7.4 for Windows, which has PHP 5.3.5, and tried to edit my php.ini file according to the <a href="http://www.php.net/manual/en/mbstring.configuration.php" rel="noreferrer">documentation</a> and various other examples I found online. After about 6 hours of this, all I managed to do is get an "Error 500 - Server error" message that didn't go away even after I rolled back all changes to the .ini file. </p>
<p>What I need to do, is create PDF invoices with Danish characters, using <a href="http://www.fpdf.org/" rel="noreferrer">tFPDF</a>, to support UTF-8 encoding. </p>
<p>If anybody here knows some tips, suggestions, or an example of a working php.ini setup, please help out, 'cause I'm starting to lose my hair over this one! :|</p>
<p>Thanks a lot!</p> | 6,658,975 | 1 | 1 | null | 2011-07-12 01:23:59.527 UTC | 5 | 2013-08-28 02:55:21.35 UTC | null | null | null | null | 720,682 | null | 1 | 21 | pdf|utf-8|php|mbstring | 118,365 | <p>All XAMPP packages come with Multibyte String (<i>php_mbstring.dll</i>) extension installed.</p>
<p>If you have accidentally removed DLL file from <code>php/ext</code> folder, just add it back (get the copy from XAMPP zip archive - its downloadable).</p>
<p>If you have deleted the accompanying INI configuration line from <code>php.ini</code> file, add it back as well:</p>
<p><code>extension=php_mbstring.dll</code></p>
<p>Also, ensure to restart your webserver (<i>Apache</i>) using XAMPP control panel.</p>
<p><strong>Additional Info on Enabling PHP Extensions</strong></p>
<ul>
<li>install extension (e.g. put <i>php_mbstring.dll</i> into <code>/XAMPP/php/ext</code> directory)</li>
<li>in <em>php.ini</em>, ensure extension directory specified (e.g. <code>extension_dir = "ext"</code>)</li>
<li>ensure correct build of DLL file (e.g. 32bit thread-safe VC9 only works with DLL files built using exact same tools and configuration: 32bit thread-safe VC9)</li>
<li>ensure PHP API versions match (If <strong>not</strong>, once you restart the webserver you will receive related error.)</li>
</ul> |
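<p>Once Apache has been restarted, a quick way to confirm the extension is actually loaded is to run <code>php -m</code> and look for <code>mbstring</code>, or drop a small check into a PHP file under htdocs:</p>
<pre><code><?php
// Both should print bool(true) when mbstring is loaded correctly.
var_dump(extension_loaded('mbstring'));
var_dump(function_exists('mb_strlen'));
?>
</code></pre>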
6,387,904 | Inserting an IEnumerable<T> collection with Dapper errors out with "class is not supported by Dapper." | <p>Yep, there are <a href="https://stackoverflow.com/questions/5957774/performing-inserts-and-updates-with-dapper">questions here</a> and <a href="https://stackoverflow.com/questions/6379478/clarification-of-dapper-example-code">here</a> about how to insert records with dapper-dot-net. However, the answers, while informative, didn't seem to point me in the right direction. Here is the situation: moving data from SqlServer to MySql. Reading the records into an <code>IEnumerable<WTUser></code> is easy, but I am just not getting something on the insert. First, the 'moving records code': </p>
<pre><code>// moving data
Dim session As New Session(DataProvider.MSSql, "server", _
"database")
Dim resources As List(Of WTUser) = session.QueryReader(Of WTUser)("select * from tbl_resource")
session = New Session(DataProvider.MySql, "server", "database", _
"user", "p@$$w0rd")
// *edit* - corrected parameter notation with '@'
Dim strInsert = "INSERT INTO tbl_resource (ResourceName, ResourceRate, ResourceTypeID, ActiveYN) " & _
"VALUES (@ResourceName, @ResourceRate, @ResourceType, @ActiveYN)"
Dim recordCount = session.WriteData(Of WTUser)(strInsert, resources)
// session Methods
Public Function QueryReader(Of TEntity As {Class, New})(ByVal Command As String) _
As IEnumerable(Of TEntity)
Dim list As IEnumerable(Of TEntity)
Dim cnn As IDbConnection = dataAgent.NewConnection
list = cnn.Query(Of TEntity)(Command, Nothing, Nothing, True, 0, CommandType.Text).ToList()
Return list
End Function
Public Function WriteData(Of TEntity As {Class, New})(ByVal Command As String, ByVal Entities As IEnumerable(Of TEntity)) _
As Integer
Dim cnn As IDbConnection = dataAgent.NewConnection
// *edit* if I do this I get the correct properties, but no data inserted
//Return cnn.Execute(Command, New TEntity(), Nothing, 15, CommandType.Text)
// original Return statement
Return cnn.Execute(Command, Entities, Nothing, 15, CommandType.Text)
End Function
</code></pre>
<p>cnn.Query and cnn.Execute call the dapper extension methods. Now, the WTUser class (note: the column name changed from 'WindowsName' in SqlServer to 'ResourceName' in MySql, thus the two properties pointing to the same field):</p>
<pre><code>Public Class WTUser
// edited for brevity - assume the following all have public get/set methods
Public ActiveYN As String
Public ResourceID As Integer
Public ResourceRate As Integer
Public ResourceType As Integer
Public WindowsName As String
Public ResourceName As String
End Class
</code></pre>
<p>I am receiving an exception from dapper: "WTUser is not supported by Dapper." This method in DataMapper (dapper):</p>
<pre><code> private static Action<IDbCommand, object> CreateParamInfoGenerator(Type OwnerType)
{
string dmName = string.Format("ParamInfo{0}", Guid.NewGuid());
Type[] objTypes = new[] { typeof(IDbCommand), typeof(object) };
var dm = new DynamicMethod(dmName, null, objTypes, OwnerType, true); // << - here
// emit stuff
// dm is instanced, now ...
foreach (var prop in OwnerType.GetProperties().OrderBy(p => p.Name))
</code></pre>
<p>At this point OwnerType = </p>
<blockquote>
<p>System.Collections.Generic.List`1[[CRMBackEnd.WTUser,
CRMBE, Version=1.0.0.0,
Culture=neutral,
PublicKeyToken=null]], mscorlib,
Version=2.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089</p>
</blockquote>
<p>It seems like OwnerType should be <code>CRMBackEnd.WTUser</code> ... not <code>List<CRMBackEnd.WTUser></code> ... ??? because what is happening is that the collection properties are being iterated: Count, Capacity, etc. <strong>What am I missing?</strong></p>
<p><strong>Update</strong> </p>
<p>If I modified session.WriteData as:</p>
<pre><code>Public Function WriteData(Of TEntity As {Class, New})(ByVal Command As String, _
ByVal Entities As IEnumerable(Of TEntity)) _
As Integer
Dim cnn As IDbConnection = dataAgent.NewConnection
Dim records As Integer
For Each entity As TEntity In Entities
records += cnn.Execute(Command, entity, Nothing, 15, CommandType.Text)
Next
Return records
End Function
</code></pre>
<p>records are inserted nicely ... but I didn't think this would be necessary given examples like:</p>
<pre><code>connection.Execute(@"insert MyTable(colA, colB) values (@a, @b)",
new[] { new { a=1, b=1 }, new { a=2, b=2 }, new { a=3, b=3 } }
).IsEqualTo(3); // 3 rows inserted: "1,1", "2,2" and "3,3"
</code></pre>
<p>... from <a href="http://code.google.com/p/dapper-dot-net/" rel="noreferrer">dapper-dot-net</a></p> | 6,500,834 | 1 | 0 | null | 2011-06-17 15:13:09.49 UTC | 14 | 2018-09-26 17:36:01.283 UTC | 2018-09-26 17:36:01.283 UTC | null | 3,195,477 | null | 210,709 | null | 1 | 34 | c#|vb.net|insert|dapper | 31,489 | <p>I just added a test for this: </p>
<pre><code>class Student
{
public string Name {get; set;}
public int Age { get; set; }
}
public void TestExecuteMultipleCommandStrongType()
{
connection.Execute("create table #t(Name nvarchar(max), Age int)");
int tally = connection.Execute(@"insert #t (Name,Age) values(@Name, @Age)", new List<Student>
{
new Student{Age = 1, Name = "sam"},
new Student{Age = 2, Name = "bob"}
});
int sum = connection.Query<int>("select sum(Age) from #t drop table #t").First();
tally.IsEqualTo(2);
sum.IsEqualTo(3);
}
</code></pre>
<p>It works as advertised. I made a few amendments to the way multi-exec works (so its a tad faster and supports object[]). </p>
<p>My guess is you were having issues because you were missing getter properties on all your fields on <code>WTUser</code>. All params must have reader properties; we do not support pulling this from fields, as it would require a complex parsing step to stay efficient. </p>
<hr>
<p>An additional point that caused an issue is passing dapper a param with unsupported mapping.</p>
<p>For example, the following class is not supported as a param: </p>
<pre><code>class Test
{
public int Id { get; set; }
public User User {get; set;}
}
cnn.Query("select * from Tests where Id = @Id", new Test{Id = 1}); // used to go boom
</code></pre>
<p>The issue is that dapper did <strong>not</strong> parse the SQL, it assumed all the props are settable as params but was unable to resolve the SQL type for <code>User</code>. </p>
<p>Latest rev resolves this</p> |
20,810,862 | Passing value of a variable to angularjs directive template function | <p>I am trying to pass a $scope's variable to a directive, but its not working. I am catching the variable in the template function:</p>
<pre><code>app.directive('customdir', function () {
return {
restrict: 'E',
template: function(element, attrs) {
console.log(attrs.filterby);
switch (attrs.filterby) {
case 'World':
return '<input type="checkbox">';
}
return '<input type="text" />';
}
};
});
</code></pre>
<p>What I need is the value of the variable <code>filterby</code>, not the variable name itself. </p>
<p><a href="http://plnkr.co/edit/OQqLeUIoFNhkqSoeIdyM?p=preview" rel="noreferrer">Plunkr Demo</a> </p> | 20,811,100 | 3 | 0 | null | 2013-12-28 04:02:01.053 UTC | 6 | 2014-04-30 10:51:58.38 UTC | 2014-04-30 10:51:58.38 UTC | null | 821,057 | null | 2,623,933 | null | 1 | 12 | javascript|angularjs | 51,064 | <p>Or like this</p>
<pre><code>app.directive('customdir', function ($compile) {
var getTemplate = function(filter) {
switch (filter) {
case 'World': return '<input type="checkbox" ng-model="filterby">';
default: return '<input type="text" ng-model="filterby" />';
}
}
return {
restrict: 'E',
scope: {
filterby: "="
},
link: function(scope, element, attrs) {
var el = $compile(getTemplate(scope.filterby))(scope);
element.replaceWith(el);
}
};
});
</code></pre>
<p><a href="http://plnkr.co/edit/yPopi0mYdViElCKrQAq9?p=preview">http://plnkr.co/edit/yPopi0mYdViElCKrQAq9?p=preview</a></p> |
20,644,536 | Square of each element of a column in pandas | <p>How can I square each element of a column/series of a DataFrame in pandas (and create another column to hold the result)?</p> | 20,644,575 | 3 | 0 | null | 2013-12-17 20:59:05.103 UTC | 10 | 2020-07-10 11:42:19.017 UTC | 2018-04-19 18:33:40.367 UTC | null | 2,074,981 | null | 2,634,197 | null | 1 | 52 | python|pandas | 82,390 | <pre><code>>>> import pandas as pd
>>> df = pd.DataFrame([[1,2],[3,4]], columns=list('ab'))
>>> df
a b
0 1 2
1 3 4
>>> df['c'] = df['b']**2
>>> df
a b c
0 1 2 4
1 3 4 16
</code></pre> |
42,925,447 | 'Start rollout to beta' disabled in Play Store Developer Console | <p>I am ready to send my first app to beta testers, so i click on '<strong>Manage Beta</strong>' > '<strong>Manage testers</strong>'</p>
<p><img src="https://i.stack.imgur.com/omaHR.png" alt="add a new List with one tester in it">.</p>
<p>and '<strong>Save</strong>' and '<strong>Resume</strong>'</p>
<p>APK is uploaded > '<strong>Review</strong>'</p>
<p>The review summary says '<em>This release is ready to be rolled out.</em>', but the button labled with '<strong>Start to rollout to beta</strong>' is disabled: <img src="https://i.stack.imgur.com/TH9Bd.png" alt="disabled button">.</p> | 50,566,133 | 14 | 1 | null | 2017-03-21 11:12:43.007 UTC | 15 | 2019-07-02 10:46:55.37 UTC | 2019-04-25 14:59:04.09 UTC | null | 423,105 | null | 7,528,394 | null | 1 | 254 | android|google-play | 93,415 | <p>To see what still needs to be done, you can hover your mouse over the grayed out checkmark. There will be a popup that tells you what you still need to finish.</p>
<p><a href="https://i.stack.imgur.com/WipLH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WipLH.png" alt="enter image description here"></a></p> |
6,147,121 | Dapper and anonymous Types | <p>Is it possible to use anonymous types with Dapper?</p>
<p>I can see how you can use dynamic i.e. </p>
<pre><code>connection.Query<dynamic>(blah, blah, blah)
</code></pre>
<p>is it then possible to do a </p>
<pre><code>.Select(p=> new { A, B ,C })
</code></pre>
<p>or some variation of that afterwards?</p>
<p><em>Edit</em></p>
<p>I thought I'd show you how I am using Dapper at the moment. I tend to cache (using an InMemoryCache) data so I just do one big query at the beginning (which is super quick using Dapper) then I use Linq to sort it all out in my Repository.</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data.Common;
using System.Linq;
using Dapper;
namespace SomeNamespace.Data
{
public class DapperDataContext : IDisposable
{
private readonly string _connectionString;
private readonly DbProviderFactory _provider;
private readonly string _providerName;
public DapperDataContext()
{
const string connectionStringName = " DataContextConnectionString";
_connectionString = ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
_providerName = ConfigurationManager.ConnectionStrings[connectionStringName].ProviderName;
_provider = DbProviderFactories.GetFactory(_providerName);
}
public IEnumerable<MyDataView> MyData1 { get; private set; }
public IEnumerable<MyDataView> MyData2 { get; private set; }
protected string SqlSelectMyTable1Query
{
get
{
return @"SELECT Id, A, B, C from table1Name";
}
}
protected string SqlSelectMyTable2Query
{
get
{
return @"SELECT Id, A, B, C from table2Name";
}
}
public void Dispose()
{
}
public void Refresh()
{
using (var connection = _provider.CreateConnection())
{
// blow up if null
connection.ConnectionString = _connectionString;
connection.Open();
var sql = String.Join(" ",
new[]
{
SqlSelectMyTable1Query,
SqlSelectMyTable2Query
});
using (var multi = connection.QueryMultiple(sql))
{
MyData1 = multi.Read<MyDataView>().ToList();
MyData2 = multi.Read<MyDataView>().ToList();
}
}
}
public class MyDataView
{
public long Id { get; set; }
public string A { get; set; }
public string B { get; set; }
public string C { get; set; }
}
}
}
</code></pre>
<p>The InMemoryCache looks like this</p>
<pre><code>namespace Libs.Web
{
public class InMemoryCache : ICacheService
{
#region ICacheService Members
public T Get<T>(string cacheId, Func<T> getItemCallback) where T : class
{
var item = HttpRuntime.Cache.Get(cacheId) as T;
if (item == null)
{
item = getItemCallback();
HttpContext.Current.Cache.Insert(cacheId, item);
}
return item;
}
public void Clear(string cacheId)
{
HttpContext.Current.Cache.Remove(cacheId);
}
#endregion
}
public interface ICacheService
{
T Get<T>(string cacheId, Func<T> getItemCallback) where T : class;
void Clear(string cacheId);
}
}
</code></pre> | 30,469,302 | 4 | 5 | null | 2011-05-27 02:07:17.64 UTC | 12 | 2022-03-08 20:16:45.737 UTC | 2011-06-20 01:17:59.237 UTC | null | 402,941 | null | 402,941 | null | 1 | 37 | c#|orm|dapper | 35,459 | <p>Here's another solution to use anonymous types with dapper:</p>
<pre><code>public static class DapperExtensions
{
public static IEnumerable<T> Query<T>(this IDbConnection connection, Func<T> typeBuilder, string sql)
{
return connection.Query<T>(sql);
}
}
</code></pre>
<p>and use it like this:</p>
<pre><code>var data = connection.Query(() => new
{
ContactId = default(int),
Name = default(string),
}, "SELECT ContactId, Name FROM Contact");
</code></pre> |
6,310,153 | How to ping or check status of WCF service using net.tcp endpoint from remote server? | <p>I really come from the world of <code>Http</code> and never did much with the old .NET Remoting which used TCP, but I understand the TCP concepts and now have implemented several WCF services using the net.tcp binding over the last few years. Most of the time it is up, running, I consume it, end of story. However sometimes the server setup is more advanced and I get communication errors that exist on 1 server and maybe not from another. To prove if it is a firewall/server/etc. issue I need to see if the WCF service can even be seen or reached without issue. This is for a Windows Service hosted WCF service using net.tcp that I am trying to figure out this situation.</p>
<p>The thing is with a WCF service exposed via a HTTP binding, I can just plop the URI in the browser to view the service page letting me know the service is running properly. Easy test.</p>
<p>How do I do the equivalent for a WCF service exposed via a net.tcp binding? Is there any tool or command I can use to test, for example, <code>net.tcp://mycustomWCFService:8123/MyService</code>? I have seen a few posts on writing code to programmatically determine if the WCF service is available but I do not want to do it this way. I want to do this check if at all possible <em>without</em> code, analogous to me pulling up the http endpoint in a browser.</p>
<p>Any help is appreciated, thank you!</p> | 6,824,809 | 4 | 1 | null | 2011-06-10 17:45:32.083 UTC | 15 | 2017-08-17 17:41:49.647 UTC | 2013-11-26 16:42:57.76 UTC | null | 210,709 | null | 410,937 | null | 1 | 45 | .net|wcf | 89,581 | <p>If your service implements a metadata endpoint (typically named <code>mex</code> and nested beneath the principal endpoint, implemented using the <code>mexTcpBinding</code> in this case), you can "ping" it using the svcutil command line utility that's provided with Visual Studio. E.g., </p>
<pre><code>svcutil net.tcp://mycustomWCFService:8123/MyService/mex
</code></pre>
<p>If it throws an error, your service is (potentially) down. If it succeeds, you're (likely) in business. As the preceding parentheticals suggest, it's an approximation. It means there's a listener at the address and that it's able to service a metadata request.</p> |
6,304,056 | Does instanceof return true if instance of a parent? | <p>I have a class <code>Child</code> that extends <code>Parent</code>.</p>
<pre><code>Parent child = new Child();
if (child instanceof Parent){
// Do something
}
</code></pre>
<p>Does this returns true or false, and why?</p> | 6,304,115 | 4 | 1 | null | 2011-06-10 08:49:52.973 UTC | 4 | 2022-02-25 11:17:52.94 UTC | 2017-02-22 10:36:01.567 UTC | null | 452,775 | null | 479,129 | null | 1 | 58 | java|oop|inheritance|polymorphism|instanceof | 62,497 | <p><a href="http://www.java2s.com/Tutorial/Java/0060__Operators/TheinstanceofKeyword.htm" rel="nofollow noreferrer">Yes</a>, it would. And why should it not?</p>
<p>Because child is in fact an instance of Parent. If, you want to perform an operation only for a child you should check</p>
<pre><code>if (child instanceof Child){
}
</code></pre>
<p>However you should remember the following statement from Effective C++, by Scott Meyers :</p>
<blockquote>
<p>"Anytime you find yourself writing
code of the form "if the object is of
type T1, then do something, but if
it's of type T2, then do something
else," slap yourself.</p>
</blockquote>
<p>which I think applies in this case too. If you want to <em>doSomething</em> based on what type of class the referenced object belongs to, the following code structure should help you with it.</p>
<p><strong>NOTE:</strong> I have not compiled it.</p>
<pre><code>class Parent {
public void doSomething() {
System.out.println("I am the Parent, and I do as I like");
}
}
class ChildA extends Parent {
public void doSomething() {
System.out.println("I am a child named A, but I have my own ways, different from Parent");
}
}
class ChildB extends Parent {
public void doSomething() {
System.out.println("I am a child named B, but I have my own ways, different from my Parent and my siblings");
}
}
public class Polymorphism101 {
public static void main(String[] args) {
Parent p = new Parent();
p.doSomething();
p = new ChildA();
p.doSomething();
p = new ChildB();
p.doSomething();
}
}
</code></pre>
<p><strong>EDIT: A better example</strong></p>
<p>You could be developing a <em>drawing</em> application. An application that draws shapes of any kind. In that case, you should have an <em>abstract</em> type <code>Shape</code>.</p>
<p>For purposes like drawing all shapes, listing all shapes, or finding or deleting a shape, you need to have a <em>list</em> of Shapes. Since the list is of the parent type, it can store any shape.</p>
<p>The <code>Shape</code> <em>interface/abstract class/virtual class</em> should have an <em>abstract/pure virtual</em> function <code>Draw()</code>. So, in your DrawToDeviceLoop, you just call <code>Draw()</code> for each shape, you never need to check what shape it is.</p>
<p>The <code>Shape</code> interface can have an <em>abstract</em> implementation <code>AbstractShape</code>, which can have shape name or id as data members and GetName, Cleanup and other functions with functionality common to all shapes.</p>
<p>Remember an abstract type <strong>cannot</strong> be instantiated, so <code>Shape</code> itself cannot be instantiated, as it cannot be drawn either.</p>
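<p>A minimal sketch of that drawing example (Shape, AbstractShape and the draw loop come from the description above; the concrete Circle and Square classes are made up for illustration):</p>
<pre><code>import java.util.Arrays;
import java.util.List;

interface Shape {
    void draw();
}

abstract class AbstractShape implements Shape {
    private final String name;          // data common to all shapes
    protected AbstractShape(String name) { this.name = name; }
    public String getName() { return name; }
}

class Circle extends AbstractShape {
    Circle() { super("circle"); }
    public void draw() { System.out.println("Drawing a " + getName()); }
}

class Square extends AbstractShape {
    Square() { super("square"); }
    public void draw() { System.out.println("Drawing a " + getName()); }
}

public class DrawToDeviceLoop {
    public static void main(String[] args) {
        List<Shape> shapes = Arrays.asList(new Circle(), new Square());
        for (Shape shape : shapes) {
            shape.draw();   // no instanceof checks needed anywhere
        }
    }
}
</code></pre>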
<p><strong>EDIT 2: Polymorphism and Exception Handling</strong> - <a href="https://stackoverflow.com/users/1955934">user1955934</a> asked "What about checking for exception class" For exception handling the best practices with respect to polymorphism are:</p>
<ol>
<li>Prefer (to throw) specific exception - For example throw a NumberFormatException instead of IllegalArgumentException</li>
<li>Catch the most specific exception first - For example, if you catch an IllegalArgumentException first, you will never reach the catch block that should handle the more specific NumberFormatException because it’s a subclass of the IllegalArgumentException.</li>
</ol>
<p>So, it's principally the same: if an exception needs to be handled differently, a child/specific class should be defined, and the specific exception should be <em>caught</em> (not checked with instanceof).</p>
<p>To know more best practices on exception handling. See <a href="https://dzone.com/articles/9-best-practices-to-handle-exceptions-in-java" rel="nofollow noreferrer">9 Best practices to handle exception in Java</a> and <a href="https://docs.microsoft.com/en-us/dotnet/standard/exceptions/best-practices-for-exceptions" rel="nofollow noreferrer">Best practices for exceptions (C#)</a></p>
<p><strong>EDIT 3: I admit to an exception in this rule</strong></p>
<p>So, I am working with legacy code (written in C++), mostly written about 15 years ago, where they always check for the child class to perform certain actions. I was asked to add some code with the same logic, and I told my manager (he is a dev too) I could not do this, pointing to this answer; we then discussed it and agreed to this exception to the rule. In our case, this parent class has had just 2 children since the year 2000, and we do not see any child being added in the near future. With the core code limited for growth, we decided that with no additions to the child classes and the current number being just 2, it is more efficient to check for type, especially since that is how the code has always been written.</p>
<p>There aren't many instances of this check either; the parent implements most of the complicated functionality and exists more for sharing code than for specializing/differentiating the children.</p>
52,575,602 | How to go on about receiving JSON array in flutter and parsing it? | <p>I am trying to get a JSON array from a web service URL and parse it. The thing is, the tutorial I was following shows receiving one JSON object and parsing it, but I need to know how to receive a JSON array and parse it. Below is the code I am working on; I am stuck.</p>
<p><strong>Model</strong></p>
<pre><code>class Fact {
int id;
int fact_id;
String fact;
String image;
String reference;
Fact(this.id, this.fact_id, this.fact, this.image, this.reference);
Fact.fromJson(Map<String, dynamic> json)
: id = json['id'],
fact_id = json['fact_id'],
fact = json['fact'],
image = json['image'],
reference = json['reference'];
Map<String, dynamic> toJson() =>
{
'id' : id,
'fact_id': fact_id,
'fact': fact,
'image': image,
'reference': reference,
};
}
</code></pre>
<p>I don't get how to write this for the array of facts which I am getting from the webservice.</p>
<p><strong>Fact Download Manager</strong></p>
<pre><code>class FactsManager {
var constants = Constants();
fetchFacts() {
final lastFactId = 0;
var fetchRequestUrl = constants.fetch_facts_url;
if (lastFactId == 0) {
fetchRequestUrl = fetchRequestUrl + "?count=" + constants.firstTimePostCount.toString();
} else {
fetchRequestUrl = fetchRequestUrl + "?count=" + constants.firstTimePostCount.toString() + "&last_id=" + lastFactId.toString();
}
Future<List<Fact>> fetchPost() async {
final response = await http.get(fetchRequestUrl);
if (response.statusCode == 200) {
return List<Fact>
}
}
}
}
</code></pre>
<p>Example Data which I am trying to parse.</p>
<pre><code>[
{
"id": "407",
"fact": "Monsanto once tried to genetically engineer blue cotton, to produce denim without the use of dyes, reducing the pollution involved in the dyeing process. ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact492.png",
"fact_id": "1"
},
{
"id": "560",
"fact": "You can count from zero to nine hundred ninety-nine without ever having to use the letter \"a\" ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact04.png",
"fact_id": "2"
},
{
"id": "564",
"fact": "In order to keep the project a secret, the British army used the innocuous name \"mobile water carriers\" for a motorized weapons project - which is the reason we call them \"tanks\". ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact116.png",
"fact_id": "3"
},
{
"id": "562",
"fact": "In 2010 the mummified corpse of Sogen Kato, thought to be Tokyo's oldest man, was found in his bedroom by government officials. He had actually died in 1978. ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact216.png",
"fact_id": "4"
},
{
"id": "566",
"fact": "In 1927 the US Supreme Court ruled it constitutional for the government to forcefully sterilize mentally handicapped people ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact316.png",
"fact_id": "5"
}
]
</code></pre> | 52,576,858 | 2 | 1 | null | 2018-09-30 07:20:48.68 UTC | 5 | 2021-04-28 13:27:49.167 UTC | 2018-09-30 07:49:18.12 UTC | null | 348,851 | null | 348,851 | null | 1 | 21 | json|dart|flutter | 57,826 | <p>You can do the following:</p>
<pre><code>String receivedJson = "... Your JSON string ....";
List<dynamic> list = json.decode(receivedJson);
Fact fact = Fact.fromJson(list[0]);
</code></pre>
<p>In any case, you must consider the following in your json string and the Fact class that you have crafted:</p>
<ul>
<li>In the json string the id and fact_id are Strings and you treat them as int. Either you change the json or the Fact class</li>
<li>Some strings inside the json string produce errors as they have additional quotation marks and this confuses the decoder.</li>
</ul>
<p>A json string that works is the following:</p>
<pre><code>String receivedJson = """
[
{
"id": 407,
"fact": "Monsanto once tried to genetically engineer blue cotton, to produce denim without the use of dyes, reducing the pollution involved in the dyeing process. ",
"reference": null,
"image": "http:\/\/quickfacts.me\/wp-content\/uploads\/2015\/06\/fact492.png",
"fact_id": 1
}
]
""";
</code></pre> |
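<p>And if you want the whole array rather than just the first element, the same idea can be mapped over the list inside the fetch method from the question (sketch only - it assumes the http and dart:convert imports already used above, and that the id/fact_id types line up as discussed):</p>
<pre><code>Future<List<Fact>> fetchFacts(String fetchRequestUrl) async {
  final response = await http.get(fetchRequestUrl);
  if (response.statusCode == 200) {
    final List<dynamic> list = json.decode(response.body);
    // Convert every JSON map in the array into a Fact.
    return list.map((item) => Fact.fromJson(item)).toList();
  }
  throw Exception('Failed to load facts: ${response.statusCode}');
}
</code></pre>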
34,707,227 | Google Client API - Missing require parameter: redirect_uri | <p>So I followed the <a href="https://developers.google.com/google-apps/calendar/quickstart/php" rel="noreferrer">quickstart</a> guide and decided to break it into a class called scheduler. I am working on the the authentication code, but I keep getting this: "Error 400 (OAuth 2 Error) Error Invalid Request Missing required Parameter: redirect_uri".</p>
<pre><code>class scheduler{
//The Google Client object
private $googleClient;
//the Google Calendar Service ojbect
private $calendarService;
/*
* Google Calendar Setup
*
* This creates a Google Client object so that you may create a Google Calendar object.
*
*/
function __construct(){
//set the application name
define("APPLICATION_NAME", "Web client 1");
//
define("CREDENTIALS_PATH", "~/scheduler/credentials.json");
//
define("CLIENT_SECRET_PATH", __DIR__ . "/scheduler/client_secret.json");
//
define("SCOPES", implode(" ", array(Google_Service_Calendar::CALENDAR_READONLY)));
/*if(php_sapi_name() != "cli"){
throw new Exception("This application must be run on the command line");
}*/
//create the google client
$this->googleClient = new Google_Client();
//setup the client
$this->googleClient->setApplicationName(APPLICATION_NAME);
$this->googleClient->setDeveloperKey("AIzaSyBmJLvNdMYuFhVpWalkUdyStrEBoVEayYM");
$this->googleClient->setScopes(SCOPES);
$this->googleClient->setAuthConfigFile(CLIENT_SECRET_PATH);
$this->googleClient->setAccessType("offline");
//get the credentials file path
$credentialsPath = expandHomeDirectory(CREDENTIALS_PATH);
//if the file exists
if(file_exists($credentialsPath)){
//get the credentials from the file
$accessToken = file_get_contents($credentialsPath);
}//if it does not
else{
//request the authorization url
$authURL = $this->googleClient->createAuthUrl();
//print the authorization ulr
echo "<a href=\"$authURL\">Press Me</a><br /><br />";
//prompt the user to enter the auth code
print("Enter authentication code: ");
//
$authCode = trim(fgets(STDIN));
//exchange authorization for an access token
$accessToken = $this->googleClient->authenticate($authCode);
//store credentials to disk
if(!file_exists(dirname($credentialsPath))){
mkdir(dirname($credentialsPath), 0700, true);
}
//put the contents into the credential files
file_put_contents($credentialsPath, $accessToken);
}
$this->googleClient->setAccessToken($accessToken);
//refresh token if its expired
if($this->googleClient->isAccessTokenExpired()){
$this->googleClient->refreshToken($client->getRefreshToken());
file_put_contents($credentialsPath, $this->googleClient->getAccessToken());
}
}
</code></pre>
<p>I found the cause of the problem with no solution in sight. Under my Google Developer Console I tried putting "<a href="http://localhost/" rel="noreferrer">http://localhost/</a>" into the Authorized redirect URIs section. It gives me this error "Sorry, there’s a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later." Is there a way to make Google Developer Console to accept the redirect uri of a localhost server?</p> | 34,710,171 | 6 | 1 | null | 2016-01-10 15:32:49.477 UTC | 2 | 2021-07-08 08:40:02.923 UTC | 2016-01-10 16:00:30.093 UTC | null | 1,425,037 | null | 1,425,037 | null | 1 | 18 | php|google-api|google-calendar-api | 40,901 | <p>I got it to work. What I had to do was go back into Google Developer Console and delete the project I had created. Then when making a NEW project it allowed me to save my localhost url. The issue that was occuring was when I went to go add my localhost url to the redirect url it would say its not possible at this time. When I set the redirect url before hitting the create button it accepts it just fine.</p> |
29,447,920 | What is the Rust type keyword? | <p>I have seen the keyword <code>type</code> used in some Rust examples, but I never have seen an explanation of it. A few examples of how I've seen it used:</p>
<pre><code>impl Add<Foo> for Bar {
type Output = BarFoo;
// omitted
}
</code></pre>
<p>and this, <a href="https://doc.rust-lang.org/reference/paths.html" rel="noreferrer">taken from the reference</a>:</p>
<pre><code>type T = HashMap<i32,String>; // Type arguments used in a type expression
let x = id::<i32>(10); // Type arguments used in a call expression
</code></pre>
<p>Could somebody please explain what this keyword does? I can't find it in Rust by Example or the Rust book.</p> | 29,448,173 | 2 | 0 | null | 2015-04-04 15:05:54.497 UTC | 7 | 2020-12-24 15:22:01.93 UTC | 2018-02-09 20:01:26.34 UTC | null | 155,423 | null | 1,905,235 | null | 1 | 38 | rust | 12,478 | <p>A simple <code>type Foo = Bar;</code> outside of an <code>impl</code> defines a <em>type alias</em>, and is <a href="https://doc.rust-lang.org/book/ch19-04-advanced-types.html#creating-type-synonyms-with-type-aliases" rel="noreferrer">documented in The Book</a>. There's a generic version <code>type Foo<T> = ...</code> but if you understand generics in general then this is an obvious extension.</p>
<p><code>type</code> in an <code>impl</code> defines an <em>associated type</em>. They are <a href="https://doc.rust-lang.org/book/ch19-03-advanced-traits.html#specifying-placeholder-types-in-trait-definitions-with-associated-types" rel="noreferrer">documented in The Book</a>, but I've already written a short summary, so you get that too:</p>
<p>When you have a trait like <code>Add</code>, you want to abstract not only over what types of things can be added, but also over the type of their sum. Adding integers results in integers, adding floats results in floats. But you don't want the result type to be a parameter of <code>Add</code> as in <code>Add<ThingToAdd, ResultType></code>, for reasons that I'll skim over here.</p>
<p>Therefore, the trait comes with a type that's associated with the <code>impl</code>. Given any implementation of <code>Add</code>, e.g., <code>impl Add<Foo> for Bar</code>, the type of the addition result is already determined. This is declared in the trait like this:</p>
<pre><code>trait Add<Rhs> {
type Result;
// ...
}
</code></pre>
<p>And then all implementations define what the type of their result is:</p>
<pre><code>impl Add<Foo> for Bar {
type Result = BarPlusFoo;
// ...
}
</code></pre> |
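<p>To see the associated type in use, here is a small self-contained sketch (a stand-in trait <code>MyAdd</code> is used so it does not collide with std's <code>Add</code>; the type names are the placeholders from the answer):</p>
<pre><code>struct Foo;
struct Bar;
struct BarPlusFoo;

trait MyAdd<Rhs> {
    type Result;
    fn my_add(self, rhs: Rhs) -> Self::Result;
}

impl MyAdd<Foo> for Bar {
    type Result = BarPlusFoo;
    fn my_add(self, _rhs: Foo) -> BarPlusFoo {
        BarPlusFoo
    }
}

fn main() {
    // The result type is fixed by the impl: adding Foo to Bar gives BarPlusFoo.
    let _sum: BarPlusFoo = Bar.my_add(Foo);
}
</code></pre>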
50,230,466 | Kotlin: withContext() vs Async-await | <p>I have been reading <a href="https://github.com/Kotlin/kotlinx.coroutines/blob/master/coroutines-guide.md" rel="noreferrer">kotlin docs</a>, and if I understood correctly the two Kotlin functions work as follows :</p>
<ol>
<li><code>withContext(context)</code>: switches the context of the current coroutine, when the given block executes, the coroutine switches back to previous context.</li>
<li><code>async(context)</code>: Starts a new coroutine in the given context and if we call <code>.await()</code> on the returned <code>Deferred</code> task, it will suspends the calling coroutine and resume when the block executing inside the spawned coroutine returns.</li>
</ol>
<p>Now for the following two versions of <code>code</code> :</p>
<p><strong>Version1:</strong></p>
<pre><code> launch(){
block1()
val returned = async(context){
block2()
}.await()
block3()
}
</code></pre>
<p><strong>Version2:</strong></p>
<pre><code> launch(){
block1()
val returned = withContext(context){
block2()
}
block3()
}
</code></pre>
<ol>
<li>In both versions block1(), block3() execute in default context(commonpool?) where as block2() executes in the given context.</li>
<li>The overall execution is synchronous with block1() -> block2() -> block3() order.</li>
<li>Only difference I see is that version1 creates another coroutine, where as version2 executes only one coroutine while switching context.</li>
</ol>
<p>My questions are :</p>
<ol>
<li><p>Isn't it always better to use <code>withContext</code> rather than <code>async-await</code> as it is functionally similar, but doesn't create another coroutine. Large numbers of coroutines, although lightweight, could still be a problem in demanding applications.</p>
</li>
<li><p>Is there a case <code>async-await</code> is more preferable to <code>withContext</code>?</p>
</li>
</ol>
<p><strong>Update:</strong>
<a href="https://blog.jetbrains.com/kotlin/2018/06/kotlin-1-2-50-is-out/" rel="noreferrer">Kotlin 1.2.50</a> now has a code inspection where it can convert <code>async(ctx) { }.await() to withContext(ctx) { }</code>.</p> | 50,231,191 | 3 | 2 | null | 2018-05-08 09:35:56.423 UTC | 44 | 2022-08-05 10:22:36.613 UTC | 2020-08-19 19:01:53.21 UTC | null | 6,998,684 | null | 1,699,956 | null | 1 | 120 | kotlin|kotlin-coroutines | 34,836 | <blockquote>
<p>Large number of coroutines, though lightweight, could still be a problem in demanding applications</p>
</blockquote>
<p>I'd like to dispel this myth of "too many coroutines" being a problem by quantifying their actual cost.</p>
<p>First, we should disentangle the <em>coroutine</em> itself from the <em>coroutine context</em> to which it is attached. This is how you create just a coroutine with minimum overhead:</p>
<pre><code>GlobalScope.launch(Dispatchers.Unconfined) {
suspendCoroutine<Unit> {
continuations.add(it)
}
}
</code></pre>
<p>The value of this expression is a <code>Job</code> holding a suspended coroutine. To retain the continuation, we added it to a list in the wider scope.</p>
<p>I benchmarked this code and concluded that it allocates <strong>140 bytes</strong> and takes <strong>100 nanoseconds</strong> to complete. So that's how lightweight a coroutine is. </p>
<p>For reproducibility, this is the code I used:</p>
<pre><code>fun measureMemoryOfLaunch() {
val continuations = ContinuationList()
val jobs = (1..10_000).mapTo(JobList()) {
GlobalScope.launch(Dispatchers.Unconfined) {
suspendCoroutine<Unit> {
continuations.add(it)
}
}
}
(1..500).forEach {
Thread.sleep(1000)
println(it)
}
println(jobs.onEach { it.cancel() }.filter { it.isActive})
}
class JobList : ArrayList<Job>()
class ContinuationList : ArrayList<Continuation<Unit>>()
</code></pre>
<p>This code starts a bunch of coroutines and then sleeps so you have time to analyze the heap with a monitoring tool like VisualVM. I created the specialized classes <code>JobList</code> and <code>ContinuationList</code> because this makes it easier to analyze the heap dump.</p>
<hr>
<p>To get a more complete story, I used the code below to also measure the cost of <code>withContext()</code> and <code>async-await</code>:</p>
<pre><code>import kotlinx.coroutines.*
import java.util.concurrent.Executors
import kotlin.coroutines.suspendCoroutine
import kotlin.system.measureTimeMillis
const val JOBS_PER_BATCH = 100_000
var blackHoleCount = 0
val threadPool = Executors.newSingleThreadExecutor()!!
val ThreadPool = threadPool.asCoroutineDispatcher()
fun main(args: Array<String>) {
try {
measure("just launch", justLaunch)
measure("launch and withContext", launchAndWithContext)
measure("launch and async", launchAndAsync)
println("Black hole value: $blackHoleCount")
} finally {
threadPool.shutdown()
}
}
fun measure(name: String, block: (Int) -> Job) {
print("Measuring $name, warmup ")
(1..1_000_000).forEach { block(it).cancel() }
println("done.")
System.gc()
System.gc()
val tookOnAverage = (1..20).map { _ ->
System.gc()
System.gc()
var jobs: List<Job> = emptyList()
measureTimeMillis {
jobs = (1..JOBS_PER_BATCH).map(block)
}.also { _ ->
blackHoleCount += jobs.onEach { it.cancel() }.count()
}
}.average()
println("$name took ${tookOnAverage * 1_000_000 / JOBS_PER_BATCH} nanoseconds")
}
fun measureMemory(name:String, block: (Int) -> Job) {
println(name)
val jobs = (1..JOBS_PER_BATCH).map(block)
(1..500).forEach {
Thread.sleep(1000)
println(it)
}
println(jobs.onEach { it.cancel() }.filter { it.isActive})
}
val justLaunch: (i: Int) -> Job = {
GlobalScope.launch(Dispatchers.Unconfined) {
suspendCoroutine<Unit> {}
}
}
val launchAndWithContext: (i: Int) -> Job = {
GlobalScope.launch(Dispatchers.Unconfined) {
withContext(ThreadPool) {
suspendCoroutine<Unit> {}
}
}
}
val launchAndAsync: (i: Int) -> Job = {
GlobalScope.launch(Dispatchers.Unconfined) {
async(ThreadPool) {
suspendCoroutine<Unit> {}
}.await()
}
}
</code></pre>
<p>This is the typical output I get from the above code:</p>
<pre><code>Just launch: 140 nanoseconds
launch and withContext : 520 nanoseconds
launch and async-await: 1100 nanoseconds
</code></pre>
<p>Yes, <code>async-await</code> takes about twice as long as <code>withContext</code>, but it's still just a microsecond. You'd have to launch them in a tight loop, doing almost nothing besides, for that to become "a problem" in your app.</p>
<p>Using <code>measureMemory()</code> I found the following memory cost per call:</p>
<pre><code>Just launch: 88 bytes
withContext(): 512 bytes
async-await: 652 bytes
</code></pre>
<p>The cost of <code>async-await</code> is exactly 140 bytes higher than <code>withContext</code>, the number we got as the memory weight of one coroutine. This is just a fraction of the complete cost of setting up the <code>CommonPool</code> context.</p>
<p>If performance/memory impact was the only criterion to decide between <code>withContext</code> and <code>async-await</code>, the conclusion would have to be that there's no relevant difference between them in 99% of real use cases.</p>
<p>The real reason is that <code>withContext()</code> is a simpler and more direct API, especially in terms of exception handling: </p>
<ul>
<li>An exception that isn't handled within <code>async { ... }</code> causes its parent job to get cancelled. This happens regardless of how you handle exceptions from the matching <code>await()</code>. If you haven't prepared a <code>coroutineScope</code> for it, it may bring down your entire application.</li>
<li>An exception not handled within <code>withContext { ... }</code> simply gets thrown by the <code>withContext</code> call, you handle it just like any other.</li>
</ul>
<p><code>withContext</code> also happens to be optimized, leveraging the fact that you're suspending the parent coroutine and awaiting on the child, but that's just an added bonus.</p>
<p><code>async-await</code> should be reserved for those cases where you actually want concurrency, so that you launch several coroutines in the background and only then await on them. In short:</p>
<ul>
<li><code>async-await-async-await</code> — don't do that, use <code>withContext-withContext</code></li>
<li><code>async-async-await-await</code> — that's the way to use it (see the sketch below).</li>
</ul> |
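<p>A minimal, self-contained sketch of both patterns; <code>fetchA()</code> and <code>fetchB()</code> are made-up stand-ins for real suspending work, and the timings in the comments are rough expectations rather than measurements:</p>
<pre><code>import kotlinx.coroutines.*

suspend fun fetchA(): Int { delay(100); return 1 }   // stand-in for real suspending work
suspend fun fetchB(): Int { delay(100); return 2 }

// async-await-async-await: the second task only starts after the first finishes (~200 ms total).
// This is the pattern better written with withContext (or a plain suspending call).
suspend fun sequentialStyle(): Int = coroutineScope {
    val a = async { fetchA() }.await()
    val b = async { fetchB() }.await()
    a + b
}

// async-async-await-await: both tasks run concurrently (~100 ms total).
suspend fun concurrentStyle(): Int = coroutineScope {
    val a = async { fetchA() }
    val b = async { fetchB() }
    a.await() + b.await()
}

fun main() = runBlocking {
    println(sequentialStyle())   // 3
    println(concurrentStyle())   // 3, in roughly half the time
}
</code></pre>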
32,464,280 | converting currency with $ to numbers in Python pandas | <p>I have the following data in pandas dataframe:</p>
<pre><code> state 1st 2nd 3rd
0 California $11,593,820 $109,264,246 $8,496,273
1 New York $10,861,680 $45,336,041 $6,317,300
2 Florida $7,942,848 $69,369,589 $4,697,244
3 Texas $7,536,817 $61,830,712 $5,736,941
</code></pre>
<p>I want to perform some simple analysis (e.g., sum, groupby) with three columns (1st, 2nd, 3rd), but the data type of those three columns is object (or string).</p>
<p>So I used the following code for data conversion:</p>
<pre><code>data = data.convert_objects(convert_numeric=True)
</code></pre>
<p>But, conversion does not work, perhaps, due to the dollar sign. Any suggestion?</p> | 32,465,968 | 5 | 2 | null | 2015-09-08 17:56:29.82 UTC | 8 | 2022-05-03 06:31:31.673 UTC | null | null | null | null | 2,088,027 | null | 1 | 58 | python|python-2.7|pandas | 69,173 | <p>@EdChum's answer is clever and works well. But since there's more than one way to bake a cake.... why not use regex? For example:</p>
<pre><code>df[df.columns[1:]] = df[df.columns[1:]].replace('[\$,]', '', regex=True).astype(float)
</code></pre>
<p>To me, that is a little bit more readable.</p> |
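<p>For completeness, here is a runnable end-to-end version of that one-liner on a small made-up frame (column names follow the question); after the conversion the usual <code>sum</code>/<code>groupby</code> operations work as expected:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'state': ['California', 'New York'],
    '1st': ['$11,593,820', '$10,861,680'],
    '2nd': ['$109,264,246', '$45,336,041'],
})

# Strip '$' and ',' from every column except 'state', then convert to float
df[df.columns[1:]] = df[df.columns[1:]].replace(r'[\$,]', '', regex=True).astype(float)

print(df.dtypes)        # 1st and 2nd are now float64
print(df['1st'].sum())  # 22455500.0
</code></pre>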
6,198,104 | Reference: What is a perfect code sample using the MySQL extension? | <blockquote>
<p>This is to create a <strong>community learning resource</strong>. The goal is to have examples of good code that do not repeat the awful mistakes that can so often be found in copy/pasted PHP code. I have requested it be made Community Wiki.</p>
<p>This is <strong>not meant as a coding contest.</strong> It's not about finding the fastest or most compact way to do a query - it's to provide a good, readable reference especially for newbies.</p>
</blockquote>
<p>Every day, there is a huge influx of questions with <em>really bad</em> code snippets using the <code>mysql_*</code> family of functions on Stack Overflow. While it is usually best to direct those people towards PDO, it sometimes is neither possible (e.g. inherited legacy software) nor a realistic expectation (users are already using it in their project).</p>
<p>Common problems with code using the <code>mysql_*</code> library include:</p>
<ul>
<li>SQL injection in values</li>
<li>SQL injection in LIMIT clauses and dynamic table names</li>
<li>No error reporting ("Why does this query not work?")</li>
<li>Broken error reporting (that is, errors always occur even when the code is put into production)</li>
<li>Cross-site scripting (XSS) injection in value output</li>
</ul>
<p>Let's write a PHP code sample that does the following using the <a href="http://php.net/manual/en/book.mysql.php" rel="noreferrer">mySQL_* family of functions</a>:</p>
<ul>
<li>Accept two POST values, <code>id</code> (numeric) and <code>name</code> (a string)</li>
<li>Do an UPDATE query on a table <code>tablename</code>, changing the <code>name</code> column in the row with the ID <code>id</code></li>
<li>On failure, exit graciously, but show the detailed error only in production mode. <code>trigger_error()</code> will suffice; alternatively use a method of your choosing</li>
<li>Output the message "<code>$name</code> updated."</li>
</ul>
<p>And does <strong>not</strong> show any of the weaknesses listed above.</p>
<p>It should be <strong>as simple as possible</strong>. It ideally doesn't contain any functions or classes. The goal is not to create a copy/pasteable library, but to <strong>show the minimum of what needs to be done to make database querying safe.</strong></p>
<p>Bonus points for good comments.</p>
<p>The goal is to make this question a resource that a user can link to when encountering a question asker who has bad code (even though it isn't the focus of the question at all) or is confronted with a failing query and doesn't know how to fix it.</p>
<p><strong>To pre-empt PDO discussion:</strong></p>
<p>Yes, it will often be preferable to direct the individuals writing those questions to PDO. When it is an option, we should do so. It is, however, not always possible - sometimes, the question asker is working on legacy code, or has already come a long way with this library, and is unlikely to change it now. Also, the <code>mysql_*</code> family of functions is perfectly safe if used properly. So no "use PDO" answers here please.</p> | 6,198,763 | 5 | 0 | 2011-06-01 11:21:10.64 UTC | 2011-06-01 08:09:03.803 UTC | 18 | 2012-07-03 12:12:54.527 UTC | 2020-06-20 09:12:55.06 UTC | null | -1 | null | 187,606 | null | 1 | 60 | php|mysql|security|sql-injection | 2,774 | <p>My stab at it. Tried to keep it as simple as possible, while still maintaining some real-world conveniences.</p>
<p>Handles unicode and uses loose comparison for readability. Be nice ;-)</p>
<pre><code><?php
header('Content-type: text/html; charset=utf-8');
error_reporting(E_ALL | E_STRICT);
ini_set('display_errors', 1);
// display_errors can be changed to 0 in production mode to
// suppress PHP's error messages
/*
Can be used for testing
$_POST['id'] = 1;
$_POST['name'] = 'Markus';
*/
$config = array(
'host' => '127.0.0.1',
'user' => 'my_user',
'pass' => 'my_pass',
'db' => 'my_database'
);
# Connect and disable mysql error output
$connection = @mysql_connect($config['host'],
$config['user'], $config['pass']);
if (!$connection) {
trigger_error('Unable to connect to database: '
. mysql_error(), E_USER_ERROR);
}
if (!mysql_select_db($config['db'])) {
trigger_error('Unable to select db: ' . mysql_error(),
E_USER_ERROR);
}
if (!mysql_set_charset('utf8')) {
trigger_error('Unable to set charset for db connection: '
. mysql_error(), E_USER_ERROR);
}
$result = mysql_query(
'UPDATE tablename SET name = "'
. mysql_real_escape_string($_POST['name'])
. '" WHERE id = "'
. mysql_real_escape_string($_POST['id']) . '"'
);
if ($result) {
echo htmlentities($_POST['name'], ENT_COMPAT, 'utf-8')
. ' updated.';
} else {
trigger_error('Unable to update db: '
. mysql_error(), E_USER_ERROR);
}
</code></pre> |
5,913,338 | Embedding SVG in PDF (exporting SVG to PDF using JS) | <p>The starting points: I don't have a server that can provide anything but static files. And I have an SVG element (dynamically created) in my <code><body></code> that I want to export to a vector format, preferrably PDF or SVG.</p>
<p>I started looking at using the already existing lib <a href="http://code.google.com/p/jspdf/">jsPDF</a> along with <a href="https://github.com/MrRio/jsPDF/wiki/jspdf-using-downloadify">downloadify</a>. It worked fine. Unfortunately, this does not support SVG, only text.</p>
<p>I've read about the PDF format's possibilities for embedding SVG images, and <a href="http://www.kevlindev.com/utilities/index.htm">it seems</a> to have been enabled since Acrobat Reader 5 (along with the ImageViewer plugin). But it doesn't work. I've tried with 3 different PDF readers without success.</p>
<p>Does this mean that PDFs has dropped SVG embedding support? I haven't found anything on this.</p>
<p>I have two questions; can this be solved? And if yes, what are the specifications for embedding SVG inside of a PDF? With that info, I can build that support in jsPDF myself.</p>
<p>The browser support demands are Safari, Chrome and Firefox. The versions that supports SVG.</p> | 25,119,080 | 6 | 0 | null | 2011-05-06 15:08:54.34 UTC | 21 | 2021-04-08 11:03:56.783 UTC | 2011-05-06 16:34:38.18 UTC | null | 419,352 | null | 419,352 | null | 1 | 38 | javascript|pdf|svg|pdf-generation | 73,932 | <p>For anyone looking for a JS-only solution: <strong><a href="http://pdfkit.org/" rel="noreferrer">PDFKit</a></strong> seems to be the superior solution to generate PDF from JS these days, and it supports all SVG geometry primitives (including interpreting <code>path</code> geometry strings) out of the box. All that would be needed to render existing SVG content would be a DOM-walker that keeps track of CSS styling and inheritance, if you do not require complex stuff like symbols etc.</p>
<p>I wasn't successful with the sketchy SVG support of the <strong>jsPDF</strong>/<strong>svgToPdf</strong> combo mentioned in the other answer, and the source code of these two didn't look very well-crafted and complete to me.</p>
<p>Edit: Usage example <a href="https://jsfiddle.net/klesun/zg4qbwd8/42/" rel="noreferrer">JSFiddle</a></p> |
5,861,498 | Fast way to copy dictionary in Python | <p>I have a Python program that works with dictionaries a lot. I have to make copies of dictionaries thousands of times. I need a copy of both the keys and the associated contents. The copy will be edited and must not be linked to the original (e.g. changes in the copy must not affect the original.)</p>
<p>Keys are Strings, Values are Integers (0/1).</p>
<p>I currently use a simple way:</p>
<pre><code>newDict = oldDict.copy()
</code></pre>
<p>Profiling my Code shows that the copy operation takes most of the time.</p>
<p>Are there faster alternatives to the <code>dict.copy()</code> method? What would be fastest?</p> | 5,861,694 | 6 | 9 | null | 2011-05-02 19:25:22.893 UTC | 17 | 2018-04-17 14:45:53.29 UTC | 2013-08-07 21:48:59.953 UTC | null | 321,731 | null | 734,994 | null | 1 | 93 | python|performance|dictionary|copy | 86,269 | <p>Looking at the <a href="http://hg.python.org/cpython/file/tip/Objects/dictobject.c">C source</a> for the Python <code>dict</code> operations, you can see that they do a pretty naive (but efficient) copy. It essentially boils down to a call to <code>PyDict_Merge</code>:</p>
<pre><code>PyDict_Merge(PyObject *a, PyObject *b, int override)
</code></pre>
<p>This does the quick checks for things like if they're the same object and if they've got objects in them. After that it does a generous one-time resize/alloc to the target dict and then copies the elements one by one. I don't see you getting much faster than the built-in <code>copy()</code>.</p> |
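<p>If you want to double-check that on your own data, a quick benchmark along these lines (a Python 3 sketch; numbers will vary by machine) compares the obvious alternatives. In practice <code>d.copy()</code> and <code>dict(d)</code> land in the same ballpark, while a dict comprehension is noticeably slower:</p>
<pre><code>import timeit

setup = "d = {str(i): i % 2 for i in range(1000)}"

for stmt in ("d.copy()", "dict(d)", "{k: v for k, v in d.items()}"):
    print(stmt, timeit.timeit(stmt, setup=setup, number=10000))
</code></pre>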
5,617,390 | I am getting the error certificate identity appears more than once in the key chain | <p>When I got this error I checked in my organizer window and found a duplicate identity in my name. I tried to delete the duplicate identity in my organizer window, but I am not able to select or delete it. Please help me to delete this duplicate identity.</p> | 5,617,451 | 11 | 0 | null | 2011-04-11 06:09:15.993 UTC | 2 | 2013-10-31 22:56:37.13 UTC | null | null | null | null | 698,212 | null | 1 | 41 | iphone | 17,166 | <p>Your certificate is stored in your keychain. Just open up the keychain and look for a duplicate and then <strong><em>restart</em></strong> Xcode.</p>
5,319,488 | How to set google map marker by latitude and longitude and provide information bubble | <p>The following sample code provided by google maps api</p>
<pre><code> var geocoder;
var map;
function initialize() {
geocoder = new google.maps.Geocoder();
var latlng = new google.maps.LatLng(40.77627, -73.910965);
var myOptions = {
zoom: 8,
center: latlng,
mapTypeId: google.maps.MapTypeId.ROADMAP
}
map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
}
</code></pre>
<p>The following only shows a Google map of the location without a marker.
I was wondering how I can place a marker by giving latitude/longitude parameters?
And how is it possible to store my own information pulled from a database on that marker?</p> | 5,319,835 | 1 | 4 | null | 2011-03-16 00:00:13.277 UTC | 9 | 2017-12-17 09:45:09.217 UTC | 2017-12-17 09:45:09.217 UTC | null | 1,033,581 | null | 369,610 | null | 1 | 13 | javascript|html|google-maps | 82,796 | <p>Here is a <a href="http://jsfiddle.net/kjy112/ZLuTg/" rel="noreferrer">JSFiddle Demo</a> that shows you how to set a google map marker by Lat Lng and also when click would give you an information window (bubble):</p>
<p>Here is our basic HTML with 3 hyperlinks when clicked adds a marker onto the map:</p>
<pre><code><div id="map_canvas"></div>
<a href='javascript:addMarker("usa")'>Click to Add U.S.A</a><br/>
<a href='javascript:addMarker("brasil")'>Click to Add Brasil</a><br/>
<a href='javascript:addMarker("argentina")'>Click to Add Argentina</a><br/>
</code></pre>
<p>First we set 2 global variables. one for map and another an array to hold our markers:</p>
<pre><code>var map;
var markers = [];
</code></pre>
<p>This is our initialize to create a google map:</p>
<pre><code>function initialize() {
var latlng = new google.maps.LatLng(40.77627, -73.910965);
var myOptions = {
zoom: 1,
center: latlng,
mapTypeId: google.maps.MapTypeId.ROADMAP
}
map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
}
</code></pre>
<p>We then create 3 lat lng locations where we would like to place our markers: </p>
<pre><code>var usa = new google.maps.LatLng(37.09024, -95.712891);
var brasil = new google.maps.LatLng(-14.235004, -51.92528);
var argentina = new google.maps.LatLng(-38.416097, -63.616672);
</code></pre>
<p>Here we create a function that adds a marker based on whatever is passed to it. <code>myloc</code> will be either <code>usa</code>, <code>brasil</code> or <code>argentina</code>, and we create the marker for the matching location. Within <code>addMarker</code> we first loop over the existing markers to make sure we don't create a duplicate on the map; if a marker for the passed location already exists we simply return, otherwise we create the marker and push it onto the global <code>markers</code> array.</p>
<p>After the marker is created we attach an info window to it via <code>markers[markers.length-1]['infowin']</code> (<code>markers.length-1</code> is just the newly pushed marker). The info window's content is plain HTML; this is the information shown in the bubble (it could, for example, be weather data populated from a weather API). Finally we attach a click listener with the Maps API's <code>addListener</code>, and when the marker is clicked we open its associated info window by calling <code>this['infowin'].open(map, this)</code>, where <code>map</code> is our global map and <code>this</code> is the marker the click event is bound to.</p>
<pre><code>function addMarker(myloc) {
var current;
if (myloc == 'usa') current = usa;
else if (myloc == 'brasil') current = brasil;
else if (myloc == 'argentina') current = argentina;
for (var i = 0; i < markers.length; i++)
if (current.lat() === markers[i].position.lat() && current.lng() === markers[i].position.lng()) return;
markers.push(new google.maps.Marker({
map: map,
position: current,
title: myloc
}));
markers[markers.length - 1]['infowin'] = new google.maps.InfoWindow({
content: '<div>This is a marker in ' + myloc + '</div>'
});
google.maps.event.addListener(markers[markers.length - 1], 'click', function() {
this['infowin'].open(map, this);
});
}
</code></pre>
<p>When all is done we basically attach <code>window.onload</code> event and call the initialize function:</p>
<pre><code>window.onload = initialize;
</code></pre> |
5,021,254 | PHP Testing, for Procedural Code | <p>Is there any way of testing procedural code? I have been looking at PHPUnit which seems like a great way of creating automated tests. However, it seems to be geared towards object oriented code, are there any alternatives for procedural code?</p>
<p>Or should I convert the website to object oriented before attempting to test the website? This may take a while which is a bit of a problem as I don't have a lot of time to waste. </p>
<p>Thanks,</p>
<p>Daniel. </p> | 5,021,508 | 1 | 3 | null | 2011-02-16 19:37:11.067 UTC | 11 | 2011-04-21 11:36:45.483 UTC | 2011-02-16 19:41:49.47 UTC | null | 564,338 | null | 564,338 | null | 1 | 27 | php|testing|phpunit|procedural | 7,036 | <p>You can test procedural code with PHPUnit. Unit tests are not tied to object-oriented programming. <strong>They test units of code</strong>. In OO, a unit of code is a method. In procedural PHP, I guess it's a whole script (file).</p>
<p>While OO code is easier to maintain and to test, that doesn't mean procedural PHP cannot be tested.</p>
<p>Per example, you have this script:</p>
<p><strong>simple_add.php</strong></p>
<pre><code>$arg1 = $_GET['arg1'];
$arg2 = $_GET['arg2'];
$return = (int)$arg1 + (int)$arg2;
echo $return;
</code></pre>
<p>You could test it like this:</p>
<pre><code>class testSimple_add extends PHPUnit_Framework_TestCase {
private function _execute(array $params = array()) {
$_GET = $params;
ob_start();
include 'simple_add.php';
return ob_get_clean();
}
public function testSomething() {
$args = array('arg1'=>30, 'arg2'=>12);
$this->assertEquals(42, $this->_execute($args)); // passes
$args = array('arg1'=>-30, 'arg2'=>40);
$this->assertEquals(10, $this->_execute($args)); // passes
$args = array('arg1'=>-30);
$this->assertEquals(10, $this->_execute($args)); // fails
}
}
</code></pre>
<p>For this example, I've declared an <code>_execute</code> method that accepts an array of GET parameters, capture the output and return it, instead of including and capturing over and over. I then compare the output using the regular assertions methods from PHPUnit.</p>
<p>Of course, the third assertion will fail (depends on error_reporting though), because the tested script will give an <em>Undefined index</em> error. </p>
<p>Of course, when testing, you should put error_reporting to <code>E_ALL | E_STRICT</code>.</p> |
49,695,836 | TypeError: string indices must be integers (Python) | <p>I am trying to retrieve the 'id' value : ad284hdnn.</p>
<p>I am getting the following error : <code>TypeError: string indices must be integers</code></p>
<pre><code>data = response.json()
print data
for key in data['result']:
print key['id']
</code></pre>
<p>Here is the JSON that is returned when I print the data string.</p>
<pre><code>{u'meta': {u'httpStatus': u'200 - OK', u'requestId': u'12345'}, u'result': {u'username': u'[email protected]', u'firstName': u'joe', u'lastName': u'bloggs', u'accountStatus': u'active', u'id': u'ad284hdnn'}}
</code></pre> | 49,695,938 | 1 | 3 | null | 2018-04-06 15:08:35.837 UTC | 2 | 2018-04-06 15:14:00.83 UTC | null | null | null | null | 3,730,496 | null | 1 | 6 | python|json | 90,402 | <p><code>data['result']</code> is a dictionary. Iterating over <code>dict</code> means iterating over its keys. Therefore <code>key</code> variable stores a string. That's why <code>key['id']</code> raises <code>TypeError: string indices must be integers</code>.</p> |
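<p>A sketch of the two usual fixes (written for Python 3, and assuming <code>response</code> is the same JSON response as in the question):</p>
<pre><code>data = response.json()

# The value you want sits directly in the nested dictionary:
print(data['result']['id'])            # ad284hdnn

# Or, if you really want a loop, iterate over key/value pairs:
for key, value in data['result'].items():
    print(key, value)
</code></pre>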
25,309,969 | Powershell to read from database using ODBC DSN instead of connection string | <p>I know how to read value from database using connectionstring, i.e.</p>
<pre><code># Establish database connection to read
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=10.10.10.10;Initial Catalog=database_name;User Id=$username;Password=$password;"
$SQL = "..."
$conn.Open()
# Create and execute the SQL Query
$cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
$count=0
do{
try{
$rdr = $cmd.ExecuteReader()
while ($rdr.read()){
$sql_output += ,@($rdr.GetValue(0), $rdr.GetValue(1))
$count=$count + 1
}
$transactionComplete = $true
}
catch{
$transactionComplete = $false
}
}until ($transactionComplete)
# Close the database connection
$conn.Close()
</code></pre>
<p>How can I accomplish the same thing with ODBC, i.e I have DSN (data source name) set up on the server?</p> | 25,310,911 | 3 | 1 | null | 2014-08-14 14:00:19.117 UTC | 2 | 2018-08-14 14:50:25.207 UTC | 2015-12-10 00:29:29.56 UTC | null | 2,144,390 | null | 2,630,486 | null | 1 | 5 | powershell|odbc|dsn | 44,961 | <p>According to <a href="https://www.connectionstrings.com/odbc-dsn/" rel="noreferrer">https://www.connectionstrings.com/odbc-dsn/</a> you would use something like...</p>
<pre><code>DSN=myDsn;Uid=myUsername;Pwd=;
</code></pre>
<p>Can probably just go with <code>DSN=...</code> if creds not required.</p> |
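<p>Putting that into the same pattern as the question's script, a sketch using the .NET ODBC provider looks like this (<code>MyDsn</code> and the query are placeholders; <code>System.Data.Odbc.OdbcConnection</code>/<code>OdbcCommand</code> are the ODBC counterparts of the SqlClient classes used above):</p>
<pre><code># Establish database connection through an ODBC DSN
$conn = New-Object System.Data.Odbc.OdbcConnection
$conn.ConnectionString = "DSN=MyDsn;Uid=$username;Pwd=$password;"
$conn.Open()

$SQL = "..."
$cmd = New-Object System.Data.Odbc.OdbcCommand($SQL, $conn)
$rdr = $cmd.ExecuteReader()

$sql_output = @()
while ($rdr.Read()) {
    $sql_output += ,@($rdr.GetValue(0), $rdr.GetValue(1))
}

$rdr.Close()
$conn.Close()
</code></pre>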
27,835,229 | Set the Calendar to a specific date? | <p>I want to set a reminder with notification on a specific date. Then I am using AlarmManager with NotificationManager currently. When I set selected date from dateDialog, the reminder is working. How can I put calendar value on alarm set with fixed time? I get the current date and time from this :</p>
<pre><code>Calendar calendar = Calendar.getInstance();
</code></pre>
<p>and then I can set the calendar manually like below and it's working: </p>
<pre><code>calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MINUTE, 13);
calendar.set(Calendar.HOUR, 7);
calendar.set(Calendar.AM_PM, Calendar.AM);
calendar.set(Calendar.MONTH, Calendar.JANUARY);
calendar.set(Calendar.DAY_OF_MONTH, 8);
calendar.set(Calendar.YEAR,2015);
long when = calendar.getTimeInMillis();
</code></pre>
<p>But my question is how can I set the calendar to tomorrow and 9:00 AM or set the calendar exactly to a particular month (or year) later from the current date? I mean something like this :</p>
<pre><code>calendar.add(Calendar.DAY_OF_MONTH, 1);
</code></pre>
<p>but it does not work.</p> | 27,835,917 | 4 | 1 | null | 2015-01-08 07:46:34.413 UTC | null | 2018-02-20 08:11:07.127 UTC | 2015-01-08 08:00:07.63 UTC | null | 946,904 | null | 1,579,019 | null | 1 | 9 | java|android|date|calendar | 41,572 | <h1>Joda-Time</h1>
<p><strong>UPDATE:</strong> The <a href="http://www.joda.org/joda-time/" rel="nofollow noreferrer"><em>Joda-Time</em></a> project is now in <a href="https://en.wikipedia.org/wiki/Maintenance_mode" rel="nofollow noreferrer">maintenance mode</a>, with the team advising migration to the <a href="http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html" rel="nofollow noreferrer">java.time</a> classes. See <a href="https://docs.oracle.com/javase/tutorial/datetime/TOC.html" rel="nofollow noreferrer">Tutorial by Oracle</a>.</p>
<p>Try using a better date-time library, such as Joda-Time.</p>
<p>In Joda-Time you can change the date while keeping the time of day. Or, vice-versa, keep the time of day while keeping the date.</p>
<pre><code>DateTime now = DateTime.now( DateTimeZone.forID( "America/Montreal" ) ) ;
DateTime todayNoon = now.withTime( 12, 0, 0, 0 ) ;
DateTime midMarchSameYearSameTimeAsNow = now.withDate( now.getYear(), DateTimeConstants.MARCH, 15 );
DateTime tomorrowSameTime = now.plusDays( 1 );
</code></pre> |
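<p>Since Joda-Time is now in maintenance mode, the same two cases look like this with <code>java.time</code> (available on Android from API 26, or earlier via desugaring/ThreeTenABP). This is only a sketch; the class name and zone choice are examples:</p>
<pre><code>import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class AlarmTimes {
    public static void main(String[] args) {
        ZoneId zone = ZoneId.systemDefault();

        // Tomorrow at 09:00 local time
        ZonedDateTime tomorrowNine = LocalDate.now(zone).plusDays(1).atTime(9, 0).atZone(zone);
        long when = tomorrowNine.toInstant().toEpochMilli();   // value you can hand to AlarmManager
        System.out.println(when);

        // Exactly one month after the current date, same time of day
        ZonedDateTime monthLater = ZonedDateTime.now(zone).plusMonths(1);
        System.out.println(monthLater.toInstant().toEpochMilli());
    }
}
</code></pre>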
9,368,360 | Using BX in Thumb code to call a Thumb function, or to jump to a Thumb instruction in another function | <p>I'm trying to learn skills useful in firmware modding (for which i don't have source code)
These questions concern use of BX from thumb code to jump or call other existing thumb code.</p>
<ol>
<li>How do I use BX to JUMP to existing firmware THUMB code, from my THUMB code?</li>
<li>How do I use BX to CALL an existing THUMB function (must set LR first), from my THUMB code?</li>
</ol>
<p>My understanding is that the CPU looks at the lsb bit (bit 0) and I have to make sure this is set to <code>1</code> in order to keep the CPU in "thumb state".
So I guess I have to ADD 1 to set the lsb bit to 1.</p>
<p>So... say I want to just JUMP to 0x24000 (in the middle of some existing THUMB code):</p>
<pre><code>LDR R6, =0x24000
ADD R6, #1 @ (set lsb to 1)
BX R6
</code></pre>
<p>I think this is correct?</p>
<p>Now say I want to CALL an existing thumb function, using BX, and I want it to return to me, so I need to set LR to where I want it to return.</p>
<p>Let's say the function I want to call is at 0x24000.
It was <a href="https://stackoverflow.com/questions/9348709/arm-thumb-code-for-firmware-patches-how-to-tell-gcc-assembler-linker-to-bl-t">suggested to me</a> to use:</p>
<pre><code>ldr r2, =0x24000
mov lr, pc
bx r2
</code></pre>
<p>Here comes what I don't understand:</p>
<ol>
<li><p>The address in R2 doesn't have the lsb bit set... so won't <code>bx r2</code> switch the mode to ARM mode?</p></li>
<li><p>The LR..
The PC has the address of (beginning of current instruction + 4), I was told.
In both Thumb and Arm, any instruction address has to be aligned (16 bit or 32 bit), so it won't have the LSB bit set to 1. Only odd numbers have lsb bit set to 1.</p></li>
</ol>
<p>So in the code above, I'm setting LR to (PC), an address that DOESN'T have lsb bit 1 set either. So when the function I called comes to its epilogue and does <code>BX LR</code>... how can that work to return to my THUMB code? I must be missing something...</p>
<p>Normally BL is used to call functions. The manual says the BL instruction sets the LR to the next line of code...
So does this mean that a (normally used) <code>BL</code> THUMB instruction sets the LR to <code>return addr + 1</code> automatically?</p>
<p>Before I start with the experiments below, if you are going to do this:</p>
<pre><code>LDR R6, =0x24000
ADD R6, #1 @ (set lsb to 1)
BX R6
</code></pre>
<p>because you happen to know that 0x24000 is thumb code, just do this instead:</p>
<pre><code>LDR R6, =0x24001
BX R6
</code></pre>
<p>And yes, that is how you branch to thumb code from arm or thumb if you happen to know that that hardcoded address 0x24000 is a thumb instruction you <code>bx</code> with a register containing the address plus one.</p>
<p>If you don't know the address but know the name of the address:</p>
<pre><code>ldr r6,=something
bx r6
</code></pre>
<p>The nice thing about that is that something can be an arm or thumb address and the above code just works. Well it works if the linker properly knows what type of label that is arm or thumb, if that gets messed up it won't work right as you can see here:</p>
<pre><code>.thumb
ping:
ldr r0,=pong
bx r0
.code 32
pong:
ldr r0,=ping
bx r0
d6008148 <ping>:
d6008148: 4803 ldr r0, [pc, #12] ; (d6008158 <pong+0xc>)
d600814a: 4700 bx r0
d600814c <pong>:
d600814c: e59f0008 ldr r0, [pc, #8] ; d600815c <pong+0x10>
d6008150: e12fff10 bx r0
d6008158: d600814c strle r8, [r0], -ip, asr #2
d600815c: d6008148 strle r8, [r0], -r8, asr #2
</code></pre>
<p>That didn't work <code>pong</code> wanted to pull a thumb address from 0xD600815C but got an arm address.</p>
<p>This is all gnu assembler stuff btw, for other tools you may have to do something else. For gas you need to put <code>.thumb_func</code> before a label that you want declared as a thumb label (the term func implying function is misleading, don't worry about what <code>.thumb_func</code> means it just is an assembler/linker game).</p>
<pre><code>.thumb
.thumb_func
ping:
ldr r0,=pong
bx r0
.code 32
pong:
ldr r0,=ping
bx r0
</code></pre>
<p>and now we get what we wanted:</p>
<pre><code>d6008148 <ping>:
d6008148: 4803 ldr r0, [pc, #12] ; (d6008158 <pong+0xc>)
d600814a: 4700 bx r0
d600814c <pong>:
d600814c: e59f0008 ldr r0, [pc, #8] ; d600815c <pong+0x10>
d6008150: e12fff10 bx r0
d6008158: d600814c strle r8, [r0], -ip, asr #2
d600815c: d6008149 strle r8, [r0], -r9, asr #2
</code></pre>
<p>0xD600815C has that <code>lsbit</code> set so that you don't have to do any work. The compiler takes care of all of this when you are doing calls to C functions for example. For assembler though you have to use that <code>.thumb_func</code> (or some other directive if there is one) to get gas to know this is a thumb label and set the <code>lsbit</code> for you.</p>
<p>So the experiment below was done on an mpcore which is an ARM11 but I also tried <code>testthumb</code> functions 1 through 4 on an ARM7TDMI and qemu with the same results.</p>
<pre><code>.globl testarm
testarm:
mov r0,pc
bx lr
armbounce:
mov r0,lr
bx lr
.thumb
.thumb_func
.globl testthumb1
testthumb1:
mov r0,pc
bx lr
nop
nop
nop
bounce:
bx lr
.thumb_func
.globl testthumb2
testthumb2:
mov r2,lr
mov r0,pc
bl bounce
bx r2
nop
nop
nop
.thumb_func
.globl testthumb3
testthumb3:
mov r2,lr
mov lr,pc
mov r0,lr
bx r2
nop
nop
nop
.thumb_func
.globl testthumb4
testthumb4:
push {lr}
ldr r2,=armbounce
mov r1,pc ;@ -4
add r1,#5 ;@ -2
mov lr,r1 ;@ +0
bx r2 ;@ +2
pop {r2} ;@ +4
bx r2
.thumb_func
.globl testthumb5
testthumb5:
push {lr}
ldr r2,=armbounce
mov lr,pc
bx r2
pop {r2}
bx r2
.thumb_func
.globl testthumb6
testthumb6:
push {lr}
bl testthumb6a
.thumb_func
testthumb6a:
mov r0,lr
pop {r2}
bx r2
.thumb_func
.globl testthumb7
testthumb7:
push {lr}
bl armbounce_thumb
pop {r2}
bx r2
.thumb_func
.globl testthumb8
testthumb8:
push {lr}
bl armbounce_thumb_two
pop {r2}
bx r2
.align 4
armbounce_thumb:
ldr r1,[pc]
bx r1
.word armbounce
nop
.align 4
armbounce_thumb_two:
bx pc
nop
.code 32
b armbounce
</code></pre>
<p>Which becomes:</p>
<pre><code>d60080b4 <testarm>:
d60080b4: e1a0000f mov r0, pc
d60080b8: e12fff1e bx lr
d60080bc <armbounce>:
d60080bc: e1a0000e mov r0, lr
d60080c0: e12fff1e bx lr
d60080c4 <testthumb1>:
d60080c4: 4678 mov r0, pc
d60080c6: 4770 bx lr
d60080c8: 46c0 nop ; (mov r8, r8)
d60080ca: 46c0 nop ; (mov r8, r8)
d60080cc: 46c0 nop ; (mov r8, r8)
d60080ce <bounce>:
d60080ce: 4770 bx lr
d60080d0 <testthumb2>:
d60080d0: 4672 mov r2, lr
d60080d2: 4678 mov r0, pc
d60080d4: f7ff fffb bl d60080ce <bounce>
d60080d8: 4710 bx r2
d60080da: 46c0 nop ; (mov r8, r8)
d60080dc: 46c0 nop ; (mov r8, r8)
d60080de: 46c0 nop ; (mov r8, r8)
d60080e0 <testthumb3>:
d60080e0: 4672 mov r2, lr
d60080e2: 46fe mov lr, pc
d60080e4: 4670 mov r0, lr
d60080e6: 4710 bx r2
d60080e8: 46c0 nop ; (mov r8, r8)
d60080ea: 46c0 nop ; (mov r8, r8)
d60080ec: 46c0 nop ; (mov r8, r8)
d60080ee <testthumb4>:
d60080ee: b500 push {lr}
d60080f0: 4a15 ldr r2, [pc, #84] ; (d6008148 <armbounce_thumb_two+0x8>)
d60080f2: 4679 mov r1, pc
d60080f4: 3105 adds r1, #5
d60080f6: 468e mov lr, r1
d60080f8: 4710 bx r2
d60080fa: bc04 pop {r2}
d60080fc: 4710 bx r2
d60080fe <testthumb5>:
d60080fe: b500 push {lr}
d6008100: 4a11 ldr r2, [pc, #68] ; (d6008148 <armbounce_thumb_two+0x8>)
d6008102: 46fe mov lr, pc
d6008104: 4710 bx r2
d6008106: bc04 pop {r2}
d6008108: 4710 bx r2
d600810a <testthumb6>:
d600810a: b500 push {lr}
d600810c: f000 f800 bl d6008110 <testthumb6a>
d6008110 <testthumb6a>:
d6008110: 4670 mov r0, lr
d6008112: bc04 pop {r2}
d6008114: 4710 bx r2
d6008116 <testthumb7>:
d6008116: b500 push {lr}
d6008118: f000 f80a bl d6008130 <armbounce_thumb>
d600811c: bc04 pop {r2}
d600811e: 4710 bx r2
d6008120 <testthumb8>:
d6008120: b500 push {lr}
d6008122: f000 f80d bl d6008140 <armbounce_thumb_two>
d6008126: bc04 pop {r2}
d6008128: 4710 bx r2
d600812a: 46c0 nop ; (mov r8, r8)
d600812c: 46c0 nop ; (mov r8, r8)
d600812e: 46c0 nop ; (mov r8, r8)
d6008130 <armbounce_thumb>:
d6008130: 4900 ldr r1, [pc, #0] ; (d6008134 <armbounce_thumb+0x4>)
d6008132: 4708 bx r1
d6008134: d60080bc ; <UNDEFINED> instruction: 0xd60080bc
d6008138: 46c0 nop ; (mov r8, r8)
d600813a: 46c0 nop ; (mov r8, r8)
d600813c: 46c0 nop ; (mov r8, r8)
d600813e: 46c0 nop ; (mov r8, r8)
d6008140 <armbounce_thumb_two>:
d6008140: 4778 bx pc
d6008142: 46c0 nop ; (mov r8, r8)
d6008144: eaffffdc b d60080bc <armbounce>
d6008148: d60080bc ; <UNDEFINED> instruction: 0xd60080bc
d600814c: e1a00000 nop ; (mov r0, r0)
</code></pre>
<p>And the results of calling and printing all of these functions:</p>
<pre><code>D60080BC testarm
D60080C8 testthumb1
D60080D6 testthumb2
D60080E6 testthumb3
D60080FB testthumb4
testthumb5 crashes
D6008111 testthumb6
D600811D testthumb7
D6008127 testthumb8
</code></pre>
<p>So what is all of this doing and what does it have to do with your question. This has to do with mixed mode calling from thumb mode (and also from arm which is simpler)</p>
<p>I have been programming ARM and thumb mode at this level for many years, and somehow have had this wrong all along. I thought the program counter always held the mode in that <code>lsbit</code>, I know as you know that you want to have it set or not set when you do a bx instruction.</p>
<p>Very early in the CPU description of the ARM processor in the ARM Architectural Reference Manual (if you are writing assembler you should already have this, if not maybe most of your questions will be answered).</p>
<pre><code>Program counter Register 15 is the Program Counter (PC). It can be used in most
instructions as a pointer to the instruction which is two instructions after
the instruction being executed...
</code></pre>
<p>So let's check and see what that really means, does that mean in arm mode two instructions, 8 bytes ahead? And in thumb mode, two instructions ahead, or 4 bytes ahead?</p>
<p>So <code>testarm</code> verifies that the program counter is 8 bytes ahead. Which is also two instructions.</p>
<p><code>testthumb1</code> verifies that the program is 4 bytes ahead, which in this case is also two instructions.</p>
<p><code>testthumb2</code>:</p>
<pre><code>d60080d2: 4678 mov r0, pc
d60080d4: f7ff fffb bl d60080ce <bounce>
d60080d8: 4710 bx r2
</code></pre>
<p>If the program counter was two "instructions" ahead we would get 0xD60080D8 but we instead get 0xD60080D6 which is four bytes ahead, and that makes a lot more sense. Arm mode 8 bytes ahead, thumb mode 4 bytes ahead, no messing with decoding instructions (or data) that are ahead of the code being executed, just add 4 or 8.</p>
<p><code>testthumb3</code> was a hope that <code>mov lr,pc</code> was special, it isn't.</p>
<p>If you don't see the pattern yet, the <code>lsbit</code> of the program counter is NOT set, and I guess this makes sense for branch tables for example. So <code>mov lr,pc</code> in thumb mode does NOT set up the link register right for a return.</p>
<p><code>testthumb4</code> in a very painful way takes the program counter wherever this code happens to end up and, based on carefully placed instructions, computes the return address; if you change that instruction sequence between <code>mov r1,pc</code> and <code>bx r2</code> you have to adjust the add accordingly. Now why couldn't we just do something like this:</p>
<pre><code>add r1,pc,#1
bx r2
</code></pre>
<p>With thumb instructions you can't, with thumb2 you probably could. And there appear to be some processors (armv7) that support both arm instructions and thumb/thumb2 so you might be in a situation where you would want to do that. But you wouldn't add #1 because a thumb2 add instruction, if there is one that allows upper registers and has three operands would be a 4 byte thumb 2 instruction. (you would need to add #3).</p>
<p>So <code>testthumb5</code> is directly from the code I showed you that lead to part of this question, and it crashes. This is not how it works, sorry I mislead folks I will try to go back and patch up the SO questions I used this with.</p>
<p><code>testthumb6</code> is an experiment to make sure we are all not crazy. All is well the link register does indeed get the <code>lsbit</code> set so that when you <code>bx lr</code> later it knows the mode from that bit.</p>
<p><code>testthumb7</code>, this is derived from the ARM side trampoline that you see the linker doing when going from arm mode to thumb mode, in this case though I am going from thumb mode to arm mode. Why can't the linker do it this way? Because in thumb mode at least you have to use a low register, and at this point in the game, after the code is compiled, the linker has no way of knowing what register it can trash. In arm mode though the ip register, not sure what that is maybe r12, can get trashed, I guess it is reserved for the compiler to use. I know in this case that <code>r1</code> can get trashed, so I used it, and this works as desired. The armbounce code gets called, which grabs the link register for where to return to, which is a thumb instruction (<code>lsbit set</code>) after the <code>bl armbounce_thumb</code> in the <code>testthumb7</code> function, exactly where we wanted it to be.</p>
<p><code>testthumb8</code> this is how the gnu linker does it when it needs to get from thumb mode to arm mode. The <code>bl</code> instruction is set to go to a trampoline. Then they do something very very tricky, and crazy looking:</p>
<pre><code>d6008140 <armbounce_thumb_two>:
d6008140: 4778 bx pc
d6008142: 46c0 nop ; (mov r8, r8)
d6008144: eaffffdc b d60080bc <armbounce>
</code></pre>
<p>A <code>bx pc</code>. We know from the experiments above that the <code>pc</code> is four bytes ahead, we also know that the <code>lsbit</code> is NOT SET. So what this is saying is branch to the ARM CODE that is four bytes after this one. The <code>nop</code> is a two byte spacer, then we have to generate an ARM instruction four bytes ahead AND ALIGNED ON A FOUR BYTE BOUNDARY, and we make that an unconditional branch to whatever place we were going, this could be a b something or a <code>ldr pc</code>,=something depending on how far you need to go. Very tricky. </p>
<p>The original <code>bl arm_bounce_thumb_two</code> sets up the link register to return to the instruction after that <code>bl</code>. The trampoline does not modify the link register it simply performs branches.</p>
<p>If you want to get to thumb mode from arm then do what the linker does:</p>
<pre><code>...
bl myfun_from_arm
...
myfun_from_arm:
ldr ip,[pc]
bx ip
.word myfun
</code></pre>
<p>Which looks like this when they do it (grabbed from a different binary not at 0xD6008xxx but at 0x0001xxxx).</p>
<pre><code> 101f8: eb00003a bl 102e8 <__testthumb1_from_arm>
000102e8 <__testthumb1_from_arm>:
102e8: e59fc000 ldr ip, [pc] ; 102f0 <__testthumb1_from_arm+0x8>
102ec: e12fff1c bx ip
102f0: 00010147 andeq r0, r1, r7, asr #2
</code></pre>
<p>So whatever this ip register is (<code>r12</code>?) they don't mind trashing it and I assume you are welcome to trash it yourself.</p> |
47,344,722 | Linux command 'll' is not working | <p>I am able to run the ll command with my user but not with sudo; it gives me the error "command not found"!</p> | 47,344,746 | 9 | 2 | null | 2017-11-17 06:40:37.38 UTC | 3 | 2021-11-11 08:20:52.56 UTC | 2017-11-17 10:12:22.66 UTC | null | 524,436 | null | 4,856,810 | null | 1 | 39 | linux|list | 69,635 | <p>Try <code>sudo ls -l</code>.</p>
<p><code>ll</code> is not a real binary; it is usually just a shell alias for <code>ls -l</code> (often <code>ls -alF</code>), and <code>sudo</code> does not expand your shell aliases, which is why it reports "command not found".</p>
7,313,919 | C++11 alternative to localtime_r | <p>C++ defines time formatting functions in terms of <code>strftime</code>, which requires a <code>struct tm</code> "broken-down time" record. However, the C and C++03 languages provide no thread-safe way to obtain such a record; there is just one master <code>struct tm</code> for the whole program.</p>
<p>In C++03, this was more or less OK, because the language didn't support multithreading; it merely supported platforms supporting multithreading, which then provided facilities like POSIX <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/localtime.html"><code>localtime_r</code></a>.</p>
<p>C++11 also defines new time utilities, which interface with the non-broken-down <code>time_t</code> type, which <em>is</em> what would be used to reinitialize the global <code>struct tm</code>. But obtaining a <code>time_t</code> isn't the problem.</p>
<p>Am I missing something or does this task still require reliance on POSIX?</p>
<p><strong>EDIT:</strong> Here is some workaround code. It maintains compatibility with multithreaded environments that provide <code>::localtime_r</code> and single-threaded environments that provide only <code>std::localtime</code>. It can easily be adapted to check for other functions as well, such as <code>posix::localtime_r</code> or <code>::localtime_s</code> or what-have-you.</p>
<pre><code>namespace query {
char localtime_r( ... );
struct has_localtime_r
{ enum { value = sizeof localtime_r( std::declval< std::time_t * >(), std::declval< std::tm * >() )
== sizeof( std::tm * ) }; };
template< bool available > struct safest_localtime {
static std::tm *call( std::time_t const *t, std::tm *r )
{ return localtime_r( t, r ); }
};
template<> struct safest_localtime< false > {
static std::tm *call( std::time_t const *t, std::tm *r )
{ return std::localtime( t ); }
};
}
std::tm *localtime( std::time_t const *t, std::tm *r )
{ return query::safest_localtime< query::has_localtime_r::value >().call( t, r ); }
</code></pre> | 7,314,430 | 2 | 6 | null | 2011-09-06 00:37:55.153 UTC | 9 | 2022-06-17 19:10:36.327 UTC | 2012-01-21 17:38:16.257 UTC | null | 500,104 | null | 153,285 | null | 1 | 38 | c++|time|c++11|strftime | 19,971 | <p>You're not missing anything.</p>
<p>The next C standard (due out probably this year) does have defined in Annex K:</p>
<pre><code>struct tm *localtime_s(const time_t * restrict timer,
struct tm * restrict result);
</code></pre>
<p>And this new function is thread safe! But don't get too happy. There are two major problems:</p>
<ol>
<li><p><code>localtime_s</code> is an <strong>optional</strong> extension to C11.</p>
</li>
<li><p>C++11 references C99, not C11. <code>local_time_s</code> is not to be found in C++11, optional or not.</p>
</li>
</ol>
<p><strong>Update</strong></p>
<p>In the 4 years since I answered this question, I have also been frustrated by the poor design of C++ tools in this area. I was motivated to create modern C++ tools to deal with this:</p>
<p><a href="http://howardhinnant.github.io/date/tz.html" rel="nofollow noreferrer">http://howardhinnant.github.io/date/tz.html</a></p>
<pre><code>#include "tz.h"
#include <iostream>
int
main()
{
using namespace date;
auto local_time = make_zoned(current_zone(), std::chrono::system_clock::now());
std::cout << local_time << '\n';
}
</code></pre>
<p>This just output for me:</p>
<p>2015-10-28 14:17:31.980135 EDT</p>
<p><code>local_time</code> is a pairing of <code>std::chrono::system_clock::time_point</code> and <code>time_zone</code> indicating the local time.</p>
<p>There exists utilities for breaking the <code>std::chrono::system_clock::time_point</code> into human-readable field types, such as year, month, day, hour, minute, second, and subseconds. Here is a presentation focusing on those (non-timezone) pieces:</p>
<p><a href="https://www.youtube.com/watch?v=tzyGjOm8AKo" rel="nofollow noreferrer">https://www.youtube.com/watch?v=tzyGjOm8AKo</a></p>
<p>All of this is of course thread safe (it is modern C++).</p>
<p><strong>Update 2</strong></p>
<p>The above is now part of C++20 with this slightly altered syntax:</p>
<pre><code>#include <chrono>
#include <iostream>
int
main()
{
namespace chr = std::chrono;
chr::zoned_time local_time{chr::current_zone(), chr::system_clock::now()};
std::cout << local_time << '\n';
}
</code></pre> |
7,044,864 | Symfony2-Doctrine: ManyToMany relation is not saved to database | <p>I have two PHP model classes named Category and Item. A Category may have many Items and an Item may belong to many Categories.
I have created a ManyToMany relation to both classes:</p>
<pre><code>class Category
{
/**
* @ORM\ManyToMany(targetEntity="Item", mappedBy="categories", cascade={"persist"})
*/
private $items;
/**
* Add items
*
* @param Ako\StoreBundle\Entity\Item $items
*/
public function addItems(\Ako\StoreBundle\Entity\Item $items)
{
$this->items[] = $items;
}
/**
* Get items
*
* @return Doctrine\Common\Collections\Collection
*/
public function getItems()
{
return $this->items;
}
}
</code></pre>
<p>And:</p>
<pre><code>class Item
{
/**
* @ORM\ManyToMany(targetEntity="Category", inversedBy="items", cascade={"persist"})
* @ORM\JoinTable(name="item_category",
* joinColumns={@ORM\JoinColumn(name="item_id", referencedColumnName="id")},
* inverseJoinColumns={@ORM\JoinColumn(name="category_id", referencedColumnName="id")}
* )
*/
private $categories;
/**
* Add categories
*
* @param Ako\StoreBundle\Entity\Category $categories
*/
public function addCategories(\Ako\StoreBundle\Entity\Category $categories)
{
$this->categories[] = $categories;
}
/**
* Get categories
*
* @return Doctrine\Common\Collections\Collection
*/
public function getCategories()
{
return $this->categories;
}
}
</code></pre>
<p>Now in my controller:</p>
<pre><code>$em = $this->getDoctrine()->getEntityManager();
$item = $em->getRepository('AkoStoreBundle:Item')->find($item_id);
$category = $em->getRepository('AkoStoreBundle:Category')->find($category_id);
$category->addItems($item);
$em->flush();
// Render the same page again.
</code></pre>
<p>In this page, I show the list of all items in a select field. The user can select one item, and add it to the category.</p>
<p>The list of items which belong to the category are shown below the form.</p>
<p>When I submit the form, the selected item is added to the list of Category items and is shown below, but it is not stored in the database, and if I refresh the page, it disappears.</p>
<p>Can anyone please help me with this?
Thanks in advance.</p> | 7,045,693 | 2 | 1 | null | 2011-08-12 18:45:14.177 UTC | 21 | 2012-05-24 11:01:45.023 UTC | 2012-05-24 11:01:45.023 UTC | null | 374,001 | null | 382,827 | null | 1 | 61 | many-to-many|doctrine-orm|symfony | 53,533 | <p>Your Category entity is the <a href="http://www.doctrine-project.org/docs/orm/2.0/en/reference/association-mapping.html#association-mapping-owning-inverse">inverse side</a> of the relationship. </p>
<p>Try changing addItems to look like this:</p>
<pre><code>public function addItem(\Ako\StoreBundle\Entity\Item $item)
{
$item->addCategory($this);
$this->items[] = $item;
}
</code></pre>
<p>Note that I changed your plural names to singular, since you're dealing with single entities, not collections.</p> |
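<p>For clarity, a sketch of the controller after that change (entity and repository names follow the question). The point is that Doctrine only persists what is recorded on the owning side (<code>Item::$categories</code>), and the revised <code>addItem()</code> updates that side for you before the flush:</p>
<pre><code>$em = $this->getDoctrine()->getEntityManager();

$item = $em->getRepository('AkoStoreBundle:Item')->find($item_id);
$category = $em->getRepository('AkoStoreBundle:Category')->find($category_id);

$category->addItem($item);   // also records the change on the owning (Item) side
$em->flush();
</code></pre>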
31,578,289 | Strip seconds from datetime | <p>I want to strip/remove seconds from a DateTime. Starting with a full Datetime like:</p>
<pre><code>DateTime datetime = DateTime.UtcNow;
</code></pre>
<p>I want to strip the seconds using any inbuilt function or regular expression.</p>
<p>Input: 08/02/2015 09:22:45</p>
<p>Expected result: 08/02/2015 09:22:00</p> | 31,578,514 | 9 | 3 | null | 2015-07-23 04:41:07.82 UTC | 2 | 2022-09-09 10:15:24.203 UTC | 2021-08-12 11:08:09.29 UTC | null | 11,598,475 | null | 4,302,675 | null | 1 | 27 | c# | 54,335 | <p>You can do</p>
<pre><code>DateTime dt = DateTime.Now;
dt = dt.AddSeconds(-dt.Second);
</code></pre>
<p>to set the seconds to 0.</p> |
37,864,974 | How to use the Firebase server timestamp to generate date created? | <p>Currently, the Google's version of <code>ServerValue.TIMESTAMP</code> returns <code>{".sv":"timestamp"}</code> which is used as a directive for Firebase to fill that field with the server timestamp once you save the data to the Firebase server.</p>
<p>When you create your data on the client side however, you don't have the actual timestamp to play with yet (ie. use as the creation date). You only will have an access to the timestamp after the initial save and consequent retrieval, which - I imagine - is sometimes too late and not very elegant.</p>
<hr>
<p><strong>Before Google:</strong></p>
<p><em>Update: Ignore this section as it is incorrect - I misunderstood the examples. <code>ServerValue.TIMESTAMP</code> always returned the <code>{".sv":"timestamp"}</code>.</em></p>
<p>As far as I understand in pre-google Firebase there seemed to be a server-generated timestamp available that allowed you to acquire the actual timestamp:</p>
<pre class="lang-java prettyprint-override"><code>import com.firebase.client.ServerValue;
ServerValue.TIMESTAMP // eg. 1466094046
</code></pre>
<p>(<a href="https://stackoverflow.com/a/33111791/3508719">ref 1</a>, <a href="https://stackoverflow.com/a/25512747/3508719">ref 2</a>)</p>
<hr>
<p><strong>Questions:</strong></p>
<ol>
<li>Is such save/retrieval the only way to get the server-generated creation date on my model instances?</li>
<li>If yes can you propose a method of implementing such pattern?</li>
<li>Am I understanding correctly ServerValue.TIMESTAMP has changed with Google's acquisition of Firebase? <em>Update: No, @FrankvanPuffelen replied that nothing's changed during acquisition.</em></li>
</ol>
<hr>
<p><strong>Note:</strong></p>
<p>I'm not considering using <code>new Date()</code> on client side as I've been reading it's not safe, though please share your thoughts if you think different.</p> | 37,868,163 | 2 | 2 | null | 2016-06-16 16:45:44.99 UTC | 6 | 2020-07-26 18:39:07.317 UTC | 2017-05-23 12:00:08.933 UTC | null | -1 | null | 3,508,719 | null | 1 | 21 | android|firebase|timestamp|firebase-realtime-database | 41,345 | <p>When you use the <code>ServerValue.TIMESTAMP</code> constant in a write operation, you're saying that the Firebase Database server should determine the correct timestamp when it executes the write operation.</p>
<p>Let's say we run this code:</p>
<pre><code>ref.addValueEventListener(new ValueEventListener() {
public void onDataChange(DataSnapshot dataSnapshot) {
System.out.println(dataSnapshot.getValue());
}
public void onCancelled(DatabaseError databaseError) { }
});
ref.setValue(ServerValue.TIMESTAMP);
</code></pre>
<p>This will execute as follows:</p>
<ol>
<li>you attach a listener</li>
<li>you write a value with <code>ServerValue.TIMESTAMP</code></li>
<li>the Firebase client immediate fires a value event with an approximation of the timestamp it will write on the server</li>
<li>your code prints that value</li>
<li>the write operation gets sent to the Firebase servers</li>
<li>the Firebase servers determine the actual timestamp and write the value to the database (assuming no security rules fail)</li>
<li>the Firebase server send the actual timestamp back to the client</li>
<li>the Firebase client raises a value event for the actual value</li>
<li>your code prints that value</li>
</ol>
<p>If you're using <code>ChildEventListener</code> instead of a <code>ValueEventListener</code>, then the client will call <code>onChildAdded</code> in step 3 and <code>onChildChanged</code> in step 8.</p>
<p>Nothing changed in the way we generate the <code>ServerValue.TIMESTAMP</code> since Firebase joined Google. Code that worked before, will continue to work. That also means that the <a href="https://stackoverflow.com/questions/33096128/when-making-a-pojo-in-firebase-can-you-use-servervalue-timestamp/33111791#33111791">first answer you linked</a> is a valid way to handle it.</p> |
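<p>For the "date created" use case from the question, the usual pattern is to write the placeholder into a <code>createdAt</code> field and read it back later as a <code>Long</code>. A sketch with the current <code>com.google.firebase.database</code> SDK (names like <code>"posts"</code> and <code>"createdAt"</code> are just examples):</p>
<pre><code>import java.util.HashMap;
import java.util.Map;

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ServerValue;

// ...

Map<String, Object> post = new HashMap<>();
post.put("title", "Hello");
post.put("createdAt", ServerValue.TIMESTAMP);   // placeholder, resolved on the server

DatabaseReference ref = FirebaseDatabase.getInstance().getReference("posts");
ref.push().setValue(post);
</code></pre>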
3,331,643 | Python: Unpacking an inner nested tuple/list while still getting its index number | <p>I am familiar with using <code>enumerate()</code>:</p>
<pre><code>>>> seq_flat = ('A', 'B', 'C')
>>> for num, entry in enumerate(seq_flat):
print num, entry
0 A
1 B
2 C
</code></pre>
<p>I want to be able to do the same for a nested list:</p>
<pre><code>>>> seq_nested = (('A', 'Apple'), ('B', 'Boat'), ('C', 'Cat'))
</code></pre>
<p>I can unpack it with:</p>
<pre><code>>>> for letter, word in seq_nested:
print letter, word
A Apple
B Boat
C Cat
</code></pre>
<p>How should I unpack it to get the following?</p>
<pre><code>0 A Apple
1 B Boat
2 C Cat
</code></pre>
<p>The only way I know is to use a counter/incrementor, which is un-Pythonic as far as I know. Is there a more elegant way to do it?</p> | 3,331,658 | 1 | 1 | null | 2010-07-26 00:48:18.107 UTC | 8 | 2013-07-20 17:23:47.18 UTC | 2013-07-20 17:23:47.18 UTC | null | 1,014,938 | null | 366,309 | null | 1 | 43 | python|list|tuples|enumerate|iterable-unpacking | 11,932 | <pre><code>for i, (letter, word) in enumerate(seq_nested):
print i, letter, word
</code></pre> |
21,237,093 | Android 4.3: How to connect to multiple Bluetooth Low Energy devices | <p><strong>My Question is: Can Android 4.3 (client) have active connections with multiple BLE devices (servers)? If so, how can I achieve it?</strong></p>
<p><strong>What I did so far</strong></p>
<p>I try to evaluate what throughput you can achieve using BLE and Android 4.3 BLE API. In addition I also try to find out how many devices can be connected and active at the same time. I use a Nexus 7 (2013), Android 4.4 as master and TI CC2540 Keyfob as slaves.</p>
<p>I wrote a simple server software for the slaves, which transmits 10000 20Byte packets through BLE notifications. I based my Android App on the <a href="https://developer.bluetooth.org/Pages/bluetooth-smart-developers.aspx" rel="noreferrer">Application Accelerator</a> from the Bluetooth SIG.</p>
<p>It works well for one device and I can achieve around 56 kBits payload throughput at a Connection Interval of 7.5 ms. To connect to multiple slaves I followed the advice of a Nordic Employee who wrote in the <a href="https://devzone.nordicsemi.com/index.php/multiple-slave-support-for-nrf-toolbox-proximity-application" rel="noreferrer">Nordic Developer Zone</a>:</p>
<blockquote>
<p>Yes it's possible to handle multiple slaves with a single app. You would need to handle each slave with one BluetoothGatt instance. You would also need specific BluetoothGattCallback for each slave you connect to.</p>
</blockquote>
<p>So I tried that and it partly works. I can connect to multiple slaves. I can also register for notifications on multiple slaves. The problem begins when I start the test. At first I receive notifications from all slaves, but after a couple of Connection Intervals only the notifications from one device come through. After about 10 seconds the other slaves disconnect, because they seem to reach the connection time-out. Sometimes I receive notifications from only one slave right from the start of the test.</p>
<p>I also tried accessing the attribute over a read operation, with the same result. After a couple of reads only the answers from one device came through.</p>
<p>I am aware that there are a few similar questions on this forum: <a href="https://stackoverflow.com/questions/18327815/does-andriod-4-3-support-multiple-ble-device-connections">Does Android 4.3 support multiple BLE device connections?</a>, <a href="https://stackoverflow.com/questions/18011816/has-native-android-ble-gatt-implementation-synchronous-nature/18020287#18020287">Has native Android BLE GATT implementation synchronous nature?</a> or <a href="https://stackoverflow.com/questions/20214862/ble-multiple-connection">Ble multiple connection</a>. But none of this answers made it clear for me, if it is possible and how to do it.</p>
<p>I would be very grateful for advice.</p> | 30,455,650 | 5 | 3 | null | 2014-01-20 14:55:48.317 UTC | 37 | 2017-02-22 18:54:25.077 UTC | 2017-05-23 10:31:15.95 UTC | null | -1 | null | 3,202,707 | null | 1 | 37 | android|bluetooth-lowenergy | 64,692 | <p>I suspect everyone adding delays is just allowing the BLE system to complete the action you have asked before you submit another one. Android's BLE system has no form of queueing. If you do </p>
<pre><code>BluetoothGatt g;
g.writeDescriptor(a);
g.writeDescriptor(b);
</code></pre>
<p>then the first write operation will immediately be overwritten with the second one. Yes it's really stupid and the documentation should probably actually mention this.</p>
<p>If you insert a wait, it allows the first operation to complete before doing the second. That is a huge ugly hack though. A better solution is to implement your own queue (like Google should have). Fortunately Nordic have released one for us.</p>
<p><a href="https://github.com/NordicSemiconductor/puck-central-android/tree/master/PuckCentral/app/src/main/java/no/nordicsemi/puckcentral/bluetooth/gatt" rel="noreferrer">https://github.com/NordicSemiconductor/puck-central-android/tree/master/PuckCentral/app/src/main/java/no/nordicsemi/puckcentral/bluetooth/gatt</a></p>
<p>Edit: By the way this is the universal behaviour for BLE APIs. WebBluetooth behaves the same way (but Javascript does make it easier to use), and I believe iOS's BLE API also behaves the same.</p> |
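<p>For illustration only (this is not Nordic's actual implementation), the core of such a queue can be as small as the sketch below: every GATT call is wrapped in a <code>Runnable</code> (using <code>java.util.Queue</code>/<code>LinkedList</code>), and the next one is only issued once the previous operation's callback has fired. The fields and methods are assumed to live inside whatever class owns your <code>BluetoothGatt</code> instances:</p>
<pre><code>private final Queue<Runnable> operationQueue = new LinkedList<>();
private boolean operationInProgress = false;

private synchronized void enqueue(Runnable operation) {
    operationQueue.add(operation);
    if (!operationInProgress) {
        nextOperation();
    }
}

private synchronized void nextOperation() {
    Runnable operation = operationQueue.poll();
    operationInProgress = (operation != null);
    if (operation != null) {
        operation.run();          // e.g. gatt.writeDescriptor(descriptor)
    }
}

// In each device's BluetoothGattCallback, advance the queue when an operation completes:
@Override
public void onDescriptorWrite(BluetoothGatt gatt, BluetoothGattDescriptor descriptor, int status) {
    nextOperation();
}
</code></pre>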
1,981,459 | Using threads in C on Windows. Simple Example? | <p>What do I need and how can I use threads in C on Windows Vista?</p>
<p>Could you please give me a simple code example?</p> | 1,981,467 | 3 | 1 | null | 2009-12-30 17:49:17.13 UTC | 14 | 2016-01-13 15:34:41.987 UTC | 2016-01-13 15:34:41.987 UTC | null | 2,587,816 | null | 205,234 | null | 1 | 22 | c|windows|multithreading|semaphore | 61,007 | <p>Here is the <a href="http://msdn.microsoft.com/en-us/library/ms682516(VS.85).aspx" rel="noreferrer">MSDN sample</a> on how to use CreateThread() on Windows. </p>
<p>The basic idea is you call CreateThread() and pass it a pointer to your thread function, which is what will be run on the target thread once it is created.</p>
<p>The simplest code to do it is:</p>
<pre><code>#include <windows.h>
DWORD WINAPI ThreadFunc(void* data) {
// Do stuff. This will be the first function called on the new thread.
// When this function returns, the thread goes away. See MSDN for more details.
return 0;
}
int main() {
    HANDLE thread = CreateThread(NULL, 0, ThreadFunc, NULL, 0, NULL);
    if (thread) {
        // Optionally do stuff, such as wait on the thread:
        WaitForSingleObject(thread, INFINITE); // blocks until ThreadFunc returns
        CloseHandle(thread);                   // release the handle once you are done with it
    }
    return 0;
}
</code></pre>
<p>You also have the option of calling <a href="http://msdn.microsoft.com/en-us/library/bb759869(VS.85).aspx" rel="noreferrer">SHCreateThread()</a>—same basic idea but will do some shell-type initialization for you if you ask it, such as initializing COM, etc.</p> |
8,666,518 | How can I write a general Array to CSV file? | <p>Assume that I have an Array of objects in C#, and I want to write it to a file in CSV format.<br>
Assume that each object has a ToString() method, which is what I want to be printed.</p>
<p>Currently I am using this code:</p>
<pre><code>public static void dumpArray(Array arr, string fileName)
{
using (System.IO.StreamWriter file = new System.IO.StreamWriter(fileName))
{
foreach (Object obj in arr)
{
file.Write(obj.ToString()+",");
}
}
}
</code></pre>
<p>Is there anything built in C# framework, or do you think that there is a better way?</p> | 8,666,786 | 6 | 1 | null | 2011-12-29 10:17:43.68 UTC | 2 | 2018-10-31 13:24:25.963 UTC | 2013-01-27 11:54:30.457 UTC | null | 458,741 | null | 817,452 | null | 1 | 8 | c#|arrays|csv | 41,617 | <p>You could change your method to use <strong>C# Generics</strong> and use descriptive function and variable names. This essentially results in the same behaviour except that <a href="http://msdn.microsoft.com/en-us/library/yz2be5wk.aspx" rel="noreferrer">boxing</a> is avoided for value types.</p>
<pre><code>public static void SaveArrayAsCSV<T>(T[] arrayToSave, string fileName)
{
using (StreamWriter file = new StreamWriter(fileName))
{
foreach (T item in arrayToSave)
{
file.Write(item + ",");
}
}
}
</code></pre>
<p>Edit: to do the same with jagged Arrays (assuming they only ever contain <em>one</em> level of nesting) you could use this overload <strong>(not recommended, see below!)</strong>:</p>
<pre><code>public static void SaveArrayAsCSV<T>(T[][] jaggedArrayToSave, string fileName)
{
using (StreamWriter file = new StreamWriter(fileName))
{
foreach (T[] array in jaggedArrayToSave)
{
foreach (T item in array)
{
file.Write(item + ",");
}
file.Write(Environment.NewLine);
}
}
}
</code></pre>
<h1>EDIT - Regarding duplication:</h1>
<p>The above solution is, of course, sub-optimal in that it only works in some specific scenarios (where we have exactly one level of nesting). Unfortunately, if we refactor our code, we have to stop using generics because our input array might now contain elements of Type <code>T</code> or of Type <code>T[]</code>. Therefore, we come up with the following code, which is <strong>the preferred solution</strong> because, although it reintroduces boxing, it is more readable, less redundant, and works for a wider range of scenarios:</p>
<pre><code>public static void SaveArrayAsCSV(Array arrayToSave, string fileName)
{
using (StreamWriter file = new StreamWriter(fileName))
{
WriteItemsToFile(arrayToSave, file);
}
}
private static void WriteItemsToFile(Array items, TextWriter file)
{
foreach (object item in items)
{
if (item is Array)
{
WriteItemsToFile(item as Array, file);
file.Write(Environment.NewLine);
}
else file.Write(item + ",");
}
}
</code></pre> |
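<p>For example (hypothetical file name), calling the recursive overload above with a jagged <code>int</code> array, assuming the methods are in scope:</p>
<pre><code>// Hypothetical usage of SaveArrayAsCSV(Array, string) from above.
int[][] rows =
{
    new[] { 1, 2, 3 },
    new[] { 4, 5, 6 }
};
SaveArrayAsCSV(rows, "numbers.csv");
// numbers.csv (note the trailing comma on each line, as produced by the code above):
// 1,2,3,
// 4,5,6,
</code></pre>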
8,957,025 | SBT including the version number in a program | <p>I want a program I'm building to be able to report its own version at runtime (e.g. <code>scala myprog.jar --version</code>). Traditionally in a maven project, I'd use resource filtering (pom.xml -> file.properties -> read value at runtime). I know there's <a href="https://bitbucket.org/wyuenho/sbt-filter-plugin/wiki/Home">sbt-filter-plugin</a> to emulate this functionality, but I'm curious if there's a more standard / preferred / clever way of doing this in SBT.</p>
<p>tl;dr how can I read the version number defined in <code>build.sbt</code> at runtime?</p> | 19,099,428 | 4 | 1 | null | 2012-01-21 21:50:50.153 UTC | 10 | 2017-12-21 21:46:28.233 UTC | 2012-01-21 22:22:17.727 UTC | null | 576,139 | null | 576,139 | null | 1 | 26 | scala|sbt|scala-2.9 | 11,363 | <p>Update...</p>
<p><a href="https://github.com/ritschwumm/xsbt-reflect" rel="noreferrer">https://github.com/ritschwumm/xsbt-reflect</a> (mentioned above) is Obsolete, but there is this cool SBT release tool that can automatically manage versions and more: <a href="https://github.com/sbt/sbt-release" rel="noreferrer">https://github.com/sbt/sbt-release</a>.</p>
<p>Alternatively, if you want a quick fix you can get version from manifest like this:</p>
<pre><code>val version: String = getClass.getPackage.getImplementationVersion
</code></pre>
<p>This value will be equal to <code>version</code> setting in your project which you set either in <code>build.sbt</code> or <code>Build.scala</code>.</p>
<p>Another Update ...</p>
<p>The sbt-buildinfo plugin can generate a class with the version number based on <code>build.sbt</code>:</p>
<pre><code>/** This object was generated by sbt-buildinfo. */
case object BuildInfo {
/** The value is "helloworld". */
val name: String = "helloworld"
/** The value is "0.1-SNAPSHOT". */
val version: String = "0.1-SNAPSHOT"
/** The value is "2.10.3". */
val scalaVersion: String = "2.10.3"
/** The value is "0.13.2". */
val sbtVersion: String = "0.13.2"
override val toString: String = "name: %s, version: %s, scalaVersion: %s, sbtVersion: %s" format (name, version, scalaVersion, sbtVersion)
}
</code></pre>
<p>See the docs on how to enable it here: <a href="https://github.com/sbt/sbt-buildinfo/" rel="noreferrer">https://github.com/sbt/sbt-buildinfo/</a>.</p> |
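<p>For reference, a minimal setup looks roughly like this (the plugin version and the <code>buildInfoPackage</code> value below are placeholders; check the sbt-buildinfo README for current ones):</p>
<pre><code>// project/plugins.sbt -- pick the current version from the sbt-buildinfo README
addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.9.0")

// build.sbt
lazy val root = (project in file("."))
  .enablePlugins(BuildInfoPlugin)
  .settings(
    buildInfoKeys := Seq[BuildInfoKey](name, version, scalaVersion, sbtVersion),
    buildInfoPackage := "helloworld" // generated object: helloworld.BuildInfo
  )
</code></pre>
<p>After that, the generated <code>helloworld.BuildInfo.version</code> value can be read at runtime, e.g. to implement a <code>--version</code> flag.</p>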
8,601,704 | Does VBScript have Increment Operators | <p>Like Javascript has <code>++</code> and <code>+=</code> for increments?</p> | 8,601,724 | 1 | 1 | null | 2011-12-22 09:20:20.543 UTC | 1 | 2014-04-18 13:50:55.733 UTC | 2012-01-18 20:24:05.387 UTC | null | 918,414 | null | 916,535 | null | 1 | 36 | vbscript | 41,094 | <p>No. Unfortunately, you have to do:</p>
<pre><code>value = value + 1
</code></pre> |
19,683,846 | Why are my AngularJS, Karma / Jasmine tests running so slowly? | <p>I have some simple karma / jasmine unit-tests that run against an angularjs app. I use the latest version of Chrome and run my tests from within the WebStorm IDE.</p>
<p>Sometimes the test suite runs very quickly (0.24 seconds)</p>
<p>Sometimes exactly the same test suite against exactly the same code runs very slowly (120 seconds)</p>
<p>I have tried every common sense fix. I have scoured the web to try and discover what I am doing wrong.</p>
<p>Why do my tests run so slowly?</p> | 19,683,847 | 2 | 1 | null | 2013-10-30 13:38:32.353 UTC | 15 | 2017-11-28 15:40:39.98 UTC | null | null | null | null | 776,476 | null | 1 | 52 | angularjs|jasmine|karma-runner | 9,640 | <p>The answer turns out to be very simple.</p>
<p>I am using Chrome to run the karma server. When you first start the karma server an instance of Chrome is started as a maximised window. So naturally you minimise this so you can see your tests running.</p>
<p>The problem is that Chrome starves any minimised or secondary tabs (switched tabs) of CPU cycles. </p>
<p>Therefore, if you minimise the browser instance running the karma server, or just switch to a different tab, then the karma server is severely starved of CPU and the tests take a long time to complete.</p>
<p>The solution is to keep the karma tab active. The browser window can be hidden behind other windows but the karma tab <em>must be the selected tab</em> and the browser <em>must not be minimised</em>. </p>
<p>Following these simple rules will ensure that your tests always run at full speed.</p> |
19,460,078 | Configure Microsoft.AspNet.Identity to allow email address as username | <p>I'm in the process of creating a new application and started out using EF6-rc1, Microsoft.AspNet.Identity.Core 1.0.0-rc1, Microsoft.AspNet.Identity.EntityFramework 1.0.0-rc1, Microsoft.AspNet.Identity.Owin 1.0.0-rc1, etc and with the RTM releases yesterday, I updated them via NuGet this evening to RTM.</p>
<p>Apart from a couple of code changes to the work I'd done so far, all seemed to be going well, until I tried to create a local user account for the app.</p>
<p>I had been using e-mail addresses as the username format, which worked great with the release candidate, but now, when creating a user with an email address for a username, it throws up the following validation error:</p>
<blockquote>
<p>User name [email protected] is invalid, can only contain letters or digits.</p>
</blockquote>
<p>I've spent the last hour or so searching for a solution or documentation on configuration options for it, but to no avail.</p>
<p>Is there a way I can configure it to allow e-mail addresses for usernames?</p> | 19,460,800 | 13 | 2 | null | 2013-10-18 22:35:39.31 UTC | 33 | 2020-02-18 16:35:47.62 UTC | 2020-02-18 16:35:47.62 UTC | null | 2,756,409 | null | 147,145 | null | 1 | 132 | asp.net|asp.net-identity | 54,549 | <p>You can allow this by plugging in your own UserValidator on the UserManager, or just by turning it off on the default implementation:</p>
<pre><code>UserManager.UserValidator = new UserValidator<TUser>(UserManager) { AllowOnlyAlphanumericUserNames = false }
</code></pre> |
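<p>A sketch of where this typically goes, wherever your <code>UserManager</code> is constructed (<code>ApplicationUser</code> and <code>ApplicationDbContext</code> are the default template class names; adjust to your project):</p>
<pre><code>var manager = new UserManager<ApplicationUser>(
    new UserStore<ApplicationUser>(new ApplicationDbContext()));

// Allow '@', '.', etc. in user names so email addresses can be used directly.
manager.UserValidator = new UserValidator<ApplicationUser>(manager)
{
    AllowOnlyAlphanumericUserNames = false
};
</code></pre>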
1,280,470 | Maven repository for Google Code project | <p>I'm hosting a small open source project on Google Code, and I have been asked to submit the jar to a publicly accessible Maven repository. I have almost no practical knowledge of Maven. What would be the best way to do this?</p>
<p>Is there some central repository that I can submit to, or can I host my own? What would I need to do when I want to release a new version of the jar?</p>
<p>I've been Googling and found <a href="http://www.thewebsemantic.com/2009/04/11/your-very-own-google-code-maven-repo/" rel="noreferrer">this</a>, which looks nice and simple, but it seems a bit ... contrary to the spirit of Maven, to commit jar files to SVN :).</p>
<p>Also, would there be a way to still keep track of the download count, as Google Code does?</p>
<p><strong>EDIT</strong></p>
<p>I've been getting some answers, some of which containing hints on what to add to my <code>pom.xml</code>. Thanks guys! But obviously I forgot to mention one important thing: my build script is in ANT, and to put it bluntly, I intend to keep it that way :). I just want to make it easier for Maven users to include my jar in their projects.</p>
<p><br>
<strong>The solution I went with in the end</strong></p>
<p>In the end, I did use the solution I <a href="http://www.thewebsemantic.com/2009/04/11/your-very-own-google-code-maven-repo/" rel="noreferrer">referenced</a> before, where I simply commit a Maven repo to SVN. I have the ANT script call Maven to set up the local repo, and then call SVN to commit it to Google Code. For those interested: look at my build script <a href="http://code.google.com/p/equalsverifier/source/browse/trunk/build.xml" rel="noreferrer">here</a>, in the <code>publish-maven</code> target.</p> | 1,281,372 | 4 | 2 | null | 2009-08-14 22:17:51.743 UTC | 19 | 2012-07-07 23:36:20.123 UTC | 2010-07-23 20:32:28.39 UTC | null | 127,863 | null | 127,863 | null | 1 | 22 | maven-2|google-code | 12,604 | <p>There is a <a href="http://maven.apache.org/guides/mini/guide-central-repository-upload.html" rel="noreferrer">guide to the central repository</a> that has a section on uploading projects that may help. If nothing else you can check the naming conventions and minimal information requirements against your project.</p>
<p>Sonatype also do OSS Repository hosting, see <a href="https://docs.sonatype.com/display/NX/OSS+Repository+Hosting" rel="noreferrer">their guide</a> for details.</p>
<p>Update: I'm not saying you should change your build process - if Ant works for you stick with it. It's worth following the Maven conventions in your POM regardless of your build method. As the point of putting your jar in a Maven repository is to make it accessible to Maven users, you will therefore need to define a POM for your published artifact. Following the naming conventions will help your users so you might as well do it. For example adding the SCM details to the pom will (amongst other things) allow your users to import the project into their workspace using the IDE integrations for Maven.</p>
<p>Basically, you have 4 options:</p>
<ol>
<li>Perform a standard Maven build against a Maven repository (already ruled out)</li>
<li>Set up a Maven repository, do your builds with Ant, and use Maven to deploy the jar and POM.</li>
<li>Set up a Maven repository, and use an Ant HTTP task to publish the artifacts</li>
<li>Use a Subversion "repository", and use the SvnAnt task to publish the artifacts</li>
</ol>
<hr>
<p><strong>Option 1</strong></p>
<p>Use Maven to build and deploy the artifacts (see the <a href="http://www.sonatype.com/books/maven-book/reference/" rel="noreferrer">Maven book</a> and the above links for details).</p>
<hr>
<p><strong>Option 2</strong></p>
<p>Assuming you have a build process that creates your jar, and you've defined the POM, your best bet is to publish it to the Sonatype OSS repository as above.</p>
<p>Deploying an existing jar to a standard Maven repository is simple with the Maven deploy plugin's deploy-file goal:</p>
<ol>
<li>Set up your repository (e.g on the Sonatype servers by raising a <a href="https://issues.sonatype.org/browse/OSSRH" rel="noreferrer">Jira request</a>)</li>
<li>Build your jar with Ant.</li>
<li>If you have defined a POM, put it in the same directory as the jar.</li>
<li><p>Run the deploy-file goal:</p>
<pre><code>mvn deploy:deploy-file -Durl=http://path/to/your/repository \
    -DrepositoryId=some.id \
    -Dfile=path-to-your-artifact-jar \
    -DpomFile=path-to-your-pom.xml</code></pre></li>
</ol>
<p>Note that the Maven deploy goal will automatically translate the pom.xml to [project-name]-[version].pom. If you are doing either of the other two alternatives, you will need to ensure you commit the POM with the final name, i.e. [project-name]-[version].pom. You'll also need to ensure you compose the relative paths for the artifacts following the Maven conventions.</p>
<p>E.g. for groupId=com.foo.bar, artifactId=my-project version=1.0.0, the path to the files will be:</p>
<pre><code>/com/foo/bar/my-project/my-project-1.0.0.jar
/com/foo/bar/my-project/my-project-1.0.0.pom
</code></pre>
<hr>
<p><strong>Option 3</strong></p>
<p>If you want to use Ant to deploy to a Maven repository, you can use an <a href="http://fikin-ant-tasks.sourceforge.net/" rel="noreferrer">Ant HTTP library</a> (note that I've not tried this myself). You would compose two HTTP PUT tasks, one for the jar and one for the POM.</p>
<pre><code><httpput url="http://path/to/your/repository" putFile="/path/to/yourproject.pom">
<userCredentials username="user" password="password"/>
</httpput>
<httpput url="http://path/to/your/repository" putFile="/path/to/yourproject.jar">
<userCredentials username="user" password="password"/>
</httpput>
</code></pre>
<hr>
<p><strong>Option 4</strong></p>
<p>If you want to avoid Maven completely and use Ant to deploy to an SVN-backed repository, you can use the <a href="http://subclipse.tigris.org/svnant.html" rel="noreferrer">SvnAnt Subversion library</a>. You would simply need to configure the <a href="http://subclipse.tigris.org/svnant/svn.html#import" rel="noreferrer">Svn import</a> task to add your artifacts to the remote Subversion repository.</p>
<pre><code><import path ="/dir/containing/the/jar/and/pom"
url="svn://your/svn/repository"
message="release"/>
</code></pre> |
1,266,233 | What is activation record in the context of C and C++? | <p>What does it mean and how important to know about it for a C/C++ programmers?</p>
<p>Is it the same across the platforms, at least conceptually?</p>
<p>I understand it as a block of allocated memory used to store local variable by a function...</p>
<p>I want to know more</p> | 1,266,308 | 4 | 6 | null | 2009-08-12 13:40:11.33 UTC | 15 | 2021-01-10 09:37:28.88 UTC | null | null | null | null | 149,045 | null | 1 | 50 | c++|c | 61,463 | <p>An activation record is another name for Stack Frame. It's the data structure that composes a call stack. It is generally composed of:</p>
<ul>
<li>Locals to the callee</li>
<li>Return address to the caller</li>
<li>Parameters of the callee</li>
<li>The previous stack pointer (SP) value</li>
</ul>
<p>The Call Stack is thus composed of any number of activation records that get pushed onto the stack as new subroutines are called, and popped off the stack (usually) as they return.</p>
<p>The actual structure and order of elements is platform and even implementation defined.</p>
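<p>As a rough illustration in C (the layout described in the comments is purely schematic; real layouts vary by ABI and compiler):</p>
<pre><code>#include <stdio.h>

int add(int a, int b) {      /* parameters a and b live in add's activation record */
    int sum = a + b;         /* so does the local variable sum                      */
    return sum;              /* the stored return address says where to resume      */
}

int main(void) {
    int result = add(2, 3);  /* the call pushes a new activation record for add...  */
    printf("%d\n", result);  /* ...which is popped again when add returns           */
    return 0;
}
</code></pre>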
<p>For C/C++ programmers, <strong>general knowledge</strong> of this structure is useful for understanding certain implementation features like Calling Conventions, and even why buffer overflows allow 3rd-party malicious code to be run.</p>
<p>A more <strong>intimate knowledge</strong> builds on the concepts above and also allows a programmer to debug their application and read memory dumps even in the absence of a debugger or debugging symbols.</p>
<p>More generally though, a C/C++ programmer can get through a large portion of their hobbyist programming career without even giving the call stack a moment's thought.</p>
49,161,120 | Pandas/Python: Set value of one column based on value in another column | <p>I need to set the value of one column based on the value of another in a Pandas dataframe. This is the logic:</p>
<pre><code>if df['c1'] == 'Value':
df['c2'] = 10
else:
df['c2'] = df['c3']
</code></pre>
<p>I am unable to get this to do what I want, which is to simply create a column with new values (or change the value of an existing column: either one works for me). </p>
<p>If I try to run the code above or if I write it as a function and use the apply method, I get the following:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre> | 49,161,313 | 9 | 0 | null | 2018-03-07 21:01:02.283 UTC | 42 | 2021-08-30 16:52:51.343 UTC | 2018-03-07 22:18:53.583 UTC | null | 5,514,476 | null | 8,610,662 | null | 1 | 118 | python|pandas|conditional | 345,213 | <p>one way to do this would be to use indexing with <code>.loc</code>. </p>
<p><strong>Example</strong></p>
<p>In the absence of an example dataframe, I'll make one up here:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'c1': list('abcdefg')})
df.loc[5, 'c1'] = 'Value'
>>> df
c1
0 a
1 b
2 c
3 d
4 e
5 Value
6 g
</code></pre>
<p>Assuming you wanted to <strong>create a new column</strong> <code>c2</code>, equivalent to <code>c1</code> except where <code>c1</code> is <code>Value</code>, in which case, you would like to assign it to 10:</p>
<p>First, you could create a new column <code>c2</code>, and set it to equivalent as <code>c1</code>, using one of the following two lines (they essentially do the same thing):</p>
<pre><code>df = df.assign(c2 = df['c1'])
# OR:
df['c2'] = df['c1']
</code></pre>
<p>Then, find all the indices where <code>c1</code> is equal to <code>'Value'</code> using <code>.loc</code>, and assign your desired value in <code>c2</code> at those indices:</p>
<pre><code>df.loc[df['c1'] == 'Value', 'c2'] = 10
</code></pre>
<p>And you end up with this:</p>
<pre><code>>>> df
c1 c2
0 a a
1 b b
2 c c
3 d d
4 e e
5 Value 10
6 g g
</code></pre>
<p>If, as you suggested in your question, you would perhaps sometimes just want to <strong>replace the values in the column you already have</strong>, rather than create a new column, then just skip the column creation, and do the following:</p>
<pre><code>df['c1'].loc[df['c1'] == 'Value'] = 10
# or:
df.loc[df['c1'] == 'Value', 'c1'] = 10
</code></pre>
<p>Giving you:</p>
<pre><code>>>> df
c1
0 a
1 b
2 c
3 d
4 e
5 10
6 g
</code></pre> |
19,339,227 | Bower and devDependencies vs dependencies | <p>I ran 'yo angular' and realized afterwards that it installs 1.0.8, so I uninstalled the angular components. However, the original bower.json file had angular-mocks and angular-scenario under 'devDependencies'; when I re-add all the 1.2.0-rc.2 components, angular-mocks and angular-scenario end up under dependencies instead of devDependencies.</p>
<p>I'm curious as to how devDependencies is used and if I should bother manually fixing it or leave as is. Is there a way to specify on the bower CLI how to mark something as a dev dependency?</p>
<p>After edits file:</p>
<pre><code>{
name: "Angular",
version: "0.0.0",
dependencies: {
json3: "~3.2.4",
jquery: "~1.9.1",
bootstrap-sass: "~2.3.1",
es5-shim: "~2.0.8",
angular-mocks: "1.2.0-rc.2",
angular-sanitize: "1.2.0-rc.2",
angular-resource: "1.2.0-rc.2",
angular-cookies: "1.2.0-rc.2",
angular: "1.2.0-rc.2",
angular-scenario: "1.2.0-rc.2"
},
devDependencies: { }
}
</code></pre>
<p>Before Edits:</p>
<pre><code>{
"name": "Angular",
"version": "0.0.0",
"dependencies": {
"angular": "~1.0.7",
"json3": "~3.2.4",
"jquery": "~1.9.1",
"bootstrap-sass": "~2.3.1",
"es5-shim": "~2.0.8",
"angular-resource": "~1.0.7",
"angular-cookies": "~1.0.7",
"angular-sanitize": "~1.0.7"
},
"devDependencies": {
"angular-mocks": "~1.0.7",
"angular-scenario": "~1.0.7"
}
}
</code></pre> | 19,341,028 | 1 | 0 | null | 2013-10-12 21:09:24.643 UTC | 55 | 2014-05-08 04:03:48.76 UTC | 2014-05-08 04:03:48.76 UTC | user9903 | null | null | 73,521 | null | 1 | 160 | bower | 80,899 | <p><code>devDependencies</code> are for the development-related scripts, e.g. unit testing, packaging scripts, documentation generation, etc.</p>
<p><code>dependencies</code> are required for production use, and assumed required for dev as well.</p>
<p>Including <code>devDependencies</code> within <code>dependencies</code>, as you have it, won't be harmful; the module will just bundle more files (bytes) during the install - consuming more (unnecessary) resources. From a purist POV, these extra bytes could be detrimental, just depends on your perspective.</p>
<p>To shed some light, looking at <code>bower help install</code>, modules listed under <code>devDependencies</code> can be omitted during the module installation via <code>-p</code> or <code>--production</code>, e.g.:</p>
<pre><code>bower install angular-latest --production
</code></pre>
<p>This is the recommended way to perform an installation for anything other than a development platform.</p>
<p>On the contrary, there is no way to omit modules listed under <code>dependencies</code>.</p>
<hr>
<p>As of <em>[email protected]</em> (see <a href="https://github.com/bower/bower/blob/master/templates/json/help.json">bower latest source</a>), <code>bower help</code> yields:</p>
<pre><code>Usage:
bower <command> [<args>] [<options>]
Commands:
cache Manage bower cache
help Display help information about Bower
home Opens a package homepage into your favorite browser
info Info of a particular package
init Interactively create a bower.json file
install Install a package locally
link Symlink a package folder
list List local packages
lookup Look up a package URL by name
prune Removes local extraneous packages
register Register a package
search Search for a package by name
update Update a local package
uninstall Remove a local package
Options:
-f, --force Makes various commands more forceful
-j, --json Output consumable JSON
-l, --log-level What level of logs to report
-o, --offline Do not hit the network
-q, --quiet Only output important information
-s, --silent Do not output anything, besides errors
-V, --verbose Makes output more verbose
--allow-root Allows running commands as root
See 'bower help <command>' for more information on a specific command.
</code></pre>
<p>and further, <code>bower help install</code> yields (see <a href="https://github.com/bower/bower/blob/master/templates/json/help-install.json">latest source</a>):</p>
<pre><code>Usage:
bower install [<options>]
bower install <endpoint> [<endpoint> ..] [<options>]
Options:
-F, --force-latest Force latest version on conflict
-h, --help Show this help message
-p, --production Do not install project devDependencies
-S, --save Save installed packages into the project's bower.json dependencies
-D, --save-dev Save installed packages into the project's bower.json devDependencies
Additionally all global options listed in 'bower help' are available
Description:
Installs the project dependencies or a specific set of endpoints.
Endpoints can have multiple forms:
- <source>
- <source>#<target>
- <name>=<source>#<target>
Where:
- <source> is a package URL, physical location or registry name
- <target> is a valid range, commit, branch, etc.
- <name> is the name it should have locally.
</code></pre> |
52,843,191 | Can you have optional destructured arguments in a Typescript function? | <p>I'd like to write a function that takes an object argument, uses destructuring in the function signature, and have that argument be optional:</p>
<pre><code>myFunction({opt1, opt2}?: {opt1?: boolean, opt2?: boolean})
</code></pre>
<p>However, Typescript doesn't let me ("A binding pattern parameter cannot be optional in an implementation signature.").</p>
<p>Of course I could do it if I didn't destructure:</p>
<pre><code>myFunction(options?: {opt1?: boolean, opt2?: boolean}) {
const opt1 = options.opt1;
const opt2 = options.opt1;
...
</code></pre>
<p>It seems like these should be the same thing, yet the top example is not allowed.</p>
<p>I'd like to use a destructured syntax (1) because it exists, and is a nice syntax, and it seems natural that the two functions above should act the same, and (2) because I also want a concise way to specify defaults:</p>
<pre><code>myFunction({opt1, opt2 = true}?: {opt1?: boolean, opt2?: boolean})
</code></pre>
<p>Without destructuring, I have to bury these defaults in the implementation of the function, or have an argument that is actually some class with a constructor...</p> | 52,843,348 | 3 | 0 | null | 2018-10-16 19:59:38.447 UTC | 5 | 2021-12-15 14:49:55.787 UTC | null | null | null | null | 1,231,271 | null | 1 | 69 | typescript | 25,408 | <p>Use a default parameter instead:</p>
<pre><code>function myFunction({ opt1, opt2 = true }: { opt1?: boolean; opt2?: boolean; } = {}) {
console.log(opt2);
}
myFunction(); // outputs: true
</code></pre>
<p>It's necessary in order to not destructure <code>undefined</code>:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>function myFunction({ opt1, opt2 }) {
}
// Uncaught TypeError: Cannot destructure property `opt1` of 'undefined' or 'null'.
myFunction();</code></pre>
</div>
</div>
</p> |
52,630,404 | How to install packages based on the lock-file with Yarn? | <p>We use Yarn to install dependencies, and the yarn.lock file is in the repo. Compared to Composer for PHP, I would expect that when I run <code>yarn install</code>, the dependencies are installed based on the lock file and the lock file does not change.</p>
<p>With <code>composer install</code> for PHP, you always install the same version of each package in any environment. I don't see why Yarn does not work in a similar way.</p>
<p>I think that with <code>yarn install</code> the lock gets updated too often and the file loses its point since it actually does not lock versions. Or am I using the wrong commands?</p> | 58,223,363 | 3 | 0 | null | 2018-10-03 15:18:16.593 UTC | 8 | 2021-10-07 16:58:28.7 UTC | 2018-10-03 15:24:23.767 UTC | null | 7,852,833 | null | 3,273,556 | null | 1 | 33 | yarnpkg | 34,636 | <h2>Yarn 1</h2>
<p>I think your best bet is using the <a href="https://yarnpkg.com/en/docs/cli/install#toc-yarn-install-frozen-lockfile" rel="noreferrer"><code>--frozen-lockfile</code></a> flag with <code>yarn install</code>.</p>
<h3>Docs:</h3>
<blockquote>
<p>If you need reproducible dependencies, which is usually the case with the continuous integration systems, you should pass --frozen-lockfile flag.</p>
</blockquote>
<p>Also</p>
<blockquote>
<p>Don’t generate a yarn.lock lockfile and fail if an update is needed.</p>
</blockquote>
<hr />
<h2>Yarn2</h2>
<p>If using <a href="https://yarnpkg.com/" rel="noreferrer">yarn2</a> (aka yarn <code>berry</code>) this flag is renamed to <code>--immutable</code> as of <a href="https://github.com/yarnpkg/berry/blob/master/CHANGELOG.md#200" rel="noreferrer"><code>v2.0.0</code></a>.</p>
<p>From the <a href="https://yarnpkg.com/cli/install" rel="noreferrer">docs</a>...</p>
<blockquote>
<p>If the <code>--immutable</code> option is set (defaults to true on CI since <a href="https://github.com/yarnpkg/berry/blob/master/CHANGELOG.md#300" rel="noreferrer"><code>v3.0.0</code></a>), Yarn will abort with an error exit code if the lockfile was to be modified. For backward compatibility we offer an alias under the name of <code>--frozen-lockfile</code>, but it will be removed in a later release.</p>
</blockquote>
<hr />
<p>This way if someone tries to push changes to <code>package.json</code>, say upgrade <code>react</code> from <code>^16.8.0</code> to <code>^16.10.0</code>, without updating the <code>yarn.lock</code> file. Then it will error out in the CI like below.</p>
<pre class="lang-sh prettyprint-override"><code>> yarn install --frozen-lockfile
error Your lockfile needs to be updated, but yarn was run with `--frozen-lockfile`.
</code></pre>
<hr />
<p>To address your comment:</p>
<blockquote>
<p>I think that with yarn install the lock gets updated too often and the file loses its point since it actually does not lock versions. Or am I using the wrong commands?</p>
</blockquote>
<p>Yarn/npm is just doing what you tell it to. If you set the version in your <code>package.json</code> to <code>"react": "16.8.0"</code> it will never update the <code>yarn.lock</code> but when using any of the <a href="https://docs.npmjs.com/cli/v6/using-npm/semver#ranges" rel="noreferrer">npm ranges</a> like the <a href="https://docs.npmjs.com/cli/v6/using-npm/semver#caret-ranges-123-025-004" rel="noreferrer">Caret</a> (i.e. <code>"react": "^16.8.0"</code>), yarn/npm will resolve to the highest/newest version that satisfies the range <em>you</em> specified. <em>You</em> have all the power!</p>
<hr />
<h3>Update</h3>
<p>I found a small edge case. If you are running <code>yarn add</code> in your ci, such as for a ci only dependency, it will update the lock file and do an install for all dependencies. For example....</p>
<pre class="lang-sh prettyprint-override"><code># Add ci dep
yarn add codecov
# Install all deps from yarn.lock
yarn install --frozen-lockfile
</code></pre>
<p>This will not error like you might expect. Instead, add the <code>--frozen-lockfile</code> to yarn add command like this...</p>
<pre class="lang-sh prettyprint-override"><code># Add ci dep
yarn add codecov --frozen-lockfile
# Install all deps from yarn.lock
yarn install --frozen-lockfile
</code></pre> |
38,382,739 | Certbot not creating acme-challenge folder | <p>I had working Let's encrypt certificates some months ago (with the old letsencrypt client).
The server I am using is nginx.</p>
<p>Certbot is creating the .well-known folder, but not the acme-challenge folder</p>
<p>Now I tried to create new certificates via <code>~/certbot-auto certonly --webroot -w /var/www/webroot -d domain.com -d www.domain.com -d git.domain.com</code></p>
<p>But I always get errors like this:</p>
<pre><code>IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: git.domain.com
Type: unauthorized
Detail: Invalid response from
http://git.domain.com/.well-known/acme-challenge/ZLsZwCsBU5LQn6mnzDBaD6MHHlhV3FP7ozenxaw4fow:
"<.!DOCTYPE html>
<.html lang='en'>
<.head prefix='og: http://ogp.me/ns#'>
<.meta charset='utf-8'>
<.meta content='IE=edge' http-equiv"
Domain: www.domain.com
Type: unauthorized
Detail: Invalid response from
http://www.domain.com/.well-known/acme-challenge/7vHwDXstyiY0wgECcR5zuS2jE57m8I3utszEkwj_mWw:
"<.html>
<.head><.title>404 Not Found</title></head>
<.body bgcolor="white">
<.center><.h1>404 Not Found</h1></center>
</code></pre>
<p>(Of course the dots inside the HTML tags are not really there)</p>
<p>I have looked for a solution, but haven't found one yet.
Does anybody know why certbot is not creating the folders?</p>
<p>Thanks in advance!</p> | 38,385,186 | 3 | 0 | null | 2016-07-14 19:32:22.897 UTC | 9 | 2020-01-20 20:14:30.707 UTC | 2016-07-15 03:29:25.353 UTC | null | 3,768,332 | null | 3,527,644 | null | 1 | 26 | nginx|https|lets-encrypt|certbot | 23,842 | <p>The problem was the nginx configuration.
I replaced my long configuration files with the simplest config possible:</p>
<pre><code>server {
listen 80;
server_name domain.com www.domain.com git.domain.com;
root /var/www/domain/;
}
</code></pre>
<p>Then I was able to issue new certificates.</p>
<p>The problem with my long configuration files was (as far as I can tell) that I had these lines:</p>
<pre><code>location ~ /.well-known {
allow all;
}
</code></pre>
<p>But they should be:</p>
<pre><code>location ~ /.well-known/acme-challenge/ {
allow all;
}
</code></pre>
<p>Now the renewal works, too.</p> |
23,401,365 | Laravel "At Least One" Field Required Validation | <p>So I have this form with these fields</p>
<pre><code>{{ Form::open(array('url' => 'user', 'id' => 'user_create_form')) }}
<div class="form-input-element">
<label for="facebook_id">ID Facebook</label>
{{ Form::text('facebook_id', Input::old('facebook_id'), array('placeholder' => 'ID Facebook')) }}
</div>
<div class="form-input-element">
<label for="twitter_id">ID Twitter</label>
{{ Form::text('twitter_id', Input::old('twitter_id'), array('placeholder' => 'ID Twitter')) }}
</div>
<div class="form-input-element">
<label for="instagram_id">ID Instagram</label>
{{ Form::text('instagram_id', Input::old('instagram_id'), array('placeholder' => 'ID Instagram')) }}
</div>
{{ Form::close() }}
</code></pre>
<p>I'd like to tell Laravel that at least one of these fields is required. How do I do that using the Validator?</p>
<pre><code>$rules = array(
'facebook_id' => 'required',
'twitter_id' => 'required',
'instagram_id' => 'required',
);
$validator = Validator::make(Input::all(), $rules);
</code></pre> | 23,401,415 | 3 | 0 | null | 2014-05-01 02:55:33.58 UTC | 17 | 2020-01-21 07:56:42.26 UTC | null | null | null | null | 975,987 | null | 1 | 69 | php|laravel|laravel-4 | 40,534 | <p>Try checking out <a href="http://laravel.com/docs/validation#rule-required-without-all"><code>required_without_all:foo,bar,...</code></a>, it looks like that should do it for you. To quote their documentation:</p>
<blockquote>
<p>The field under validation must be present only when the all of the other specified fields are not present.</p>
</blockquote>
<hr>
<h1>Example:</h1>
<pre><code>$rules = array(
'facebook_id' => 'required_without_all:twitter_id,instagram_id',
'twitter_id' => 'required_without_all:facebook_id,instagram_id',
'instagram_id' => 'required_without_all:facebook_id,twitter_id',
);
$validator = Validator::make(Input::all(), $rules);
</code></pre> |
31,062,435 | How can I loop scraping data for multiple pages in a website using python and beautifulsoup4 | <p>I am trying to scrape data from the PGA.com website to get a table of all of the golf courses in the United States. In my CSV table I want to include the name of the golf course, address, ownership, website, and phone number. With this data I would like to geocode it, place it on a map, and keep a local copy on my computer.</p>
<p>I used Python and BeautifulSoup 4 to extract my data. I have gotten as far as extracting the data and importing it into a CSV, but I am now having a problem scraping data from multiple pages on the PGA website. I want to extract ALL THE GOLF COURSES, but my script is limited to only one page. I want to loop it in a way that it captures all data for golf courses from all pages found on the PGA site. There are about 18000 golf courses and 900 pages to capture data from.</p>
<p>Attached below is my script. I need help creating code that will capture ALL the data from the PGA website, not just from one page but from all of them. That way it will provide me with all the data on golf courses in the United States. </p>
<p>Here is my script below:</p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
url = "http://www.pga.com/golf-courses/search?searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0"
r = requests.get(url)
soup = BeautifulSoup(r.content)
g_data1=soup.find_all("div",{"class":"views-field-nothing-1"})
g_data2=soup.find_all("div",{"class":"views-field-nothing"})
courses_list=[]
for item in g_data2:
try:
name=item.contents[1].find_all("div",{"class":"views-field-title"})[0].text
except:
name=''
try:
address1=item.contents[1].find_all("div",{"class":"views-field-address"})[0].text
except:
address1=''
try:
address2=item.contents[1].find_all("div",{"class":"views-field-city-state-zip"})[0].text
except:
address2=''
try:
website=item.contents[1].find_all("div",{"class":"views-field-website"})[0].text
except:
website=''
try:
Phonenumber=item.contents[1].find_all("div",{"class":"views-field-work-phone"})[0].text
except:
Phonenumber=''
course=[name,address1,address2,website,Phonenumber]
courses_list.append(course)
with open ('filename5.csv','wb') as file:
writer=csv.writer(file)
for row in courses_list:
writer.writerow(row)
#for item in g_data1:
#try:
#print item.contents[1].find_all("div",{"class":"views-field-counter"})[0].text
#except:
#pass
#try:
#print item.contents[1].find_all("div",{"class":"views-field-course-type"})[0].text
#except:
#pass
#for item in g_data2:
#try:
#print item.contents[1].find_all("div",{"class":"views-field-title"})[0].text
#except:
#pass
#try:
#print item.contents[1].find_all("div",{"class":"views-field-address"})[0].text
#except:
#pass
#try:
#print item.contents[1].find_all("div",{"class":"views-field-city-state-zip"})[0].text
#except:
#pass
</code></pre>
<p>This script only captures 20 at a time, and I want to capture everything in one script, which accounts for 18000 golf courses and 900 pages to scrape from.</p> | 31,062,822 | 6 | 0 | null | 2015-06-25 23:24:36.833 UTC | 9 | 2020-10-10 00:32:23.997 UTC | 2016-03-17 01:18:12.453 UTC | null | 401,672 | null | 5,050,623 | null | 1 | 9 | python|loops|csv|web-scraping|beautifulsoup | 55,348 | <p>The PGA website's search has multiple pages, and the URL follows the pattern:</p>
<pre><code>http://www.pga.com/golf-courses/search?page=1 # Additional info after page parameter here
</code></pre>
<p>This means you can read the content of the page, then increase the value of page by 1 and read the next page... and so on.</p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
for i in range(907): # Number of pages plus one
url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
r = requests.get(url)
soup = BeautifulSoup(r.content)
# Your code for each individual page here
</code></pre> |
20,106,502 | PHP Composer can't find composer.json file for my project | <p>I have successfully installed Composer in the root directory (that was the default choice) on my Linux/Apache server using their installation guide. It's all been very simple so far, except for one php.ini tweak I've had to make (<code>detect_unicode = Off</code>) but now I'm stuck.</p>
<p>I'm trying to install Ratchet using Composer, with the use of this guide:</p>
<p><a href="http://socketo.me/docs/install">http://socketo.me/docs/install</a></p>
<p>It says I need to "<em>create a file called composer.json in your project folder</em>". So I created that file (with the contents they gave on their page) using the cPanel file manager, in my application's root directory. However, when I run Composer using:</p>
<pre><code>php composer.phar install
</code></pre>
<p>PuTTy gives the following error message:</p>
<pre><code>Composer could not find a composer.json file in /root
To initialize a project, please create a composer.json file as described in the http://getcomposer.org/ "Getting Started" section
</code></pre>
<p>But this doesn't seem to make sense, why would I place the JSON file in the server's root if the documentation says to place it in the project folder? What am I missing?</p> | 20,106,619 | 2 | 0 | null | 2013-11-20 20:38:23.507 UTC | 3 | 2014-04-11 07:32:32.663 UTC | null | null | null | null | 1,245,584 | null | 1 | 6 | composer-php | 39,681 | <p>It looks like you're executing <em>php composer.phar install</em> in /root path.</p> |
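<p>Composer looks for <code>composer.json</code> in the current working directory, so change into the project folder (the one that contains your <code>composer.json</code>) before running the install, e.g.:</p>
<pre><code>cd /path/to/your/project   # the directory that contains composer.json
php composer.phar install
</code></pre>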
20,152,710 | GSON - Get JSON value from String | <p>I'm trying to parse the JSON String "{'test': '100.00'}" in order to get the value <code>100.00</code> with the GSON library. My code looks like this:</p>
<pre><code>String myJSONString = "{'test': '100.00'}";
JsonObject jobj = new Gson().fromJson(myJSONString, JsonObject.class);
String result = jobj.get("test").toString();
System.out.println(result);
</code></pre>
<p>My result looks like this: <code>"100.00"</code>, but I would need just <code>100.00</code> without the quotes. How can this be achieved?</p> | 20,152,866 | 3 | 0 | null | 2013-11-22 19:13:13.15 UTC | 12 | 2021-02-10 19:02:31.537 UTC | 2021-02-10 19:02:31.537 UTC | null | 1,246,547 | null | 1,931,996 | null | 1 | 70 | java|json|parsing|gson | 103,767 | <pre><code>double result = jobj.get("test").getAsDouble();
</code></pre> |
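<p>If you would rather keep the value as a <code>String</code> (still without the surrounding quotes), <code>JsonElement</code> also offers <code>getAsString()</code>:</p>
<pre><code>String result = jobj.get("test").getAsString(); // 100.00 (no quotes)
</code></pre>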
30,029,090 | cannot deploy - ERROR: You cannot have more than 500 Application Versions | <p>I get the following error when deploying to EB:</p>
<blockquote>
<p>ERROR: You cannot have more than 500 Application Versions. Either
remove some Application Versions or request a limit increase.</p>
</blockquote>
<p>I went manually and deleted some versions.
I don't want deploys to fail because of this limit.
Is there a way in Elastic Beanstalk to auto-evict unused versions? </p> | 30,060,544 | 6 | 0 | null | 2015-05-04 11:32:43.263 UTC | 5 | 2020-12-28 15:22:25.117 UTC | null | null | null | null | 1,802,462 | null | 1 | 50 | amazon-elastic-beanstalk | 17,339 | <p>A feature was recently added to eb cli (v3.3) to cleanup old versions</p>
<p><a href="https://m.reddit.com/r/aws/comments/340ce0/whats_the_thinking_behind_beanstalks_versioning/" rel="noreferrer">https://m.reddit.com/r/aws/comments/340ce0/whats_the_thinking_behind_beanstalks_versioning/</a></p>
<p>Copying command from reddit link </p>
<pre><code>$ eb labs cleanup-versions --help
usage: eb labs cleanup-versions [options...]
Cleans up old application versions.
optional arguments:
--num-to-leave NUM number of versions to leave DEFAULT=10
--older-than DAYS delete only versions older than x days DEFAULT=60
--force don't prompt for confirmation
</code></pre> |
29,856,116 | Handling "Unrecognized token" exception in custom json with Jackson | <p>I'm trying to use the Jackson json parser(v2.5.2) to parse a custom json document that isn't true json and I can't figure out how to make it work. I have a json document that might look like:</p>
<pre><code>{
"test": {
"one":"oneThing",
"two": nonStandardThing(),
"three": true
}
}
</code></pre>
<p>I want to use the ObjectMapper to map this to a <code>java.util.Map</code> and I would just like the <code>nonStandardThing()</code> to be added as a String value in my map for the key <code>two</code>.</p>
<p>When I run this through the <code>ObjectMapper.readValue(json, Map.class)</code> I get the exception:</p>
<pre><code>com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'nonStandardThing': was expecting 'null', 'true', 'false' or NaN
at [Source: { "test":{"test1":nonStandardThing(),"test2":"two"}}; line: 1, column: 35]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1487)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:518)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2300)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2277)
</code></pre>
<p>I have tried to register a <code>DeserializationProblemHandler</code> with the <code>ObjectMapper</code> but it is never called when this problem occurs. </p>
<p>Here is sample application that shows what I have tried:</p>
<pre><code>import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.deser.DeserializationProblemHandler;
import java.io.IOException;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
public class JacksonDeserializerTest {
private Logger log = Logger.getLogger(JacksonDeserializerTest.class.getName());
public JacksonDeserializerTest() {
String validJson = "{ \"test\":{\"test1\":\"one\",\"test2\":\"two\"}}";
String invalidJson = "{ \"test\":{\"test1\":nonStandardThing(),\"test2\":\"two\"}}";
ObjectMapper mapper = new ObjectMapper();
mapper.addHandler(new DeserializationProblemHandler() {
@Override
public boolean handleUnknownProperty(DeserializationContext dc, JsonParser jp, JsonDeserializer<?> jd, Object bean, String property) throws IOException, JsonProcessingException {
System.out.println("Handling unknown property: " + property);
return false;
}
});
try {
log.log(Level.INFO, "Valid json looks like: {0}", mapper.readValue( validJson, Map.class).toString());
log.log(Level.INFO, "Invalid json looks like: {0}", mapper.readValue(invalidJson, Map.class).toString());
} catch (IOException ex) {
log.log(Level.SEVERE, "Error parsing json", ex);
}
}
public static void main(String[] args) {
JacksonDeserializerTest test = new JacksonDeserializerTest();
}
}
</code></pre>
<p>The output looks like:</p>
<pre><code>Apr 24, 2015 1:40:27 PM net.acesinc.data.json.generator.jackson.JacksonDeserializerTest <init>
INFO: Valid json looks like: {test={test1=one, test2=two}}
Apr 24, 2015 1:40:27 PM net.acesinc.data.json.generator.jackson.JacksonDeserializerTest <init>
SEVERE: Error parsing json
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'nonStandardThing': was expecting 'null', 'true', 'false' or NaN
at [Source: { "test":{"test1":nonStandardThing(),"test2":"two"}}; line: 1, column: 35]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1487)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:518)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2300)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2277)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._matchToken(ReaderBasedJsonParser.java:2129)
</code></pre>
<p>Can anyone point out why the Handler never gets called? Or, if there is a better parse this custom json document (jackson or not...), let me know. </p> | 29,856,841 | 2 | 1 | null | 2015-04-24 19:42:53.493 UTC | null | 2019-10-13 19:47:13.45 UTC | null | null | null | null | 726,214 | null | 1 | 15 | java|json|data-binding|jackson | 102,963 | <p>The handler is not called because the invalid part is not the property (<code>"two"</code>) but the value (<code>nonStandardThing()</code>).</p>
<p>An obvious way to handle this is to pass <code>nonStandardThing()</code> as a <code>String</code>, i.e. rewrite the JSON document as</p>
<pre><code>{
"test": {
"one":"oneThing",
"two": "nonStandardThing()",
"three": true
}
}
</code></pre>
<p>If that is not a possibility, there is not much to do. Using a custom <code>Jackson</code> <code>Deserializer</code> is only useful for properties, not values.</p> |
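<p>If rewriting the document at the source is not possible, one crude pre-processing workaround (a sketch only; the regex is not robust for every input) is to quote such bare function-call-like tokens before handing the string to Jackson:</p>
<pre><code>import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class QuoteBareTokens {
    public static void main(String[] args) throws Exception {
        String invalidJson = "{ \"test\":{\"test1\":nonStandardThing(),\"test2\":\"two\"}}";
        // Turn  :nonStandardThing()  into  :"nonStandardThing()"
        String sanitized = invalidJson.replaceAll(
                ":\\s*([A-Za-z_][A-Za-z0-9_]*\\(\\))", ":\"$1\"");
        Map<?, ?> result = new ObjectMapper().readValue(sanitized, Map.class);
        System.out.println(result); // {test={test1=nonStandardThing(), test2=two}}
    }
}
</code></pre>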
39,952,214 | Correct way to use image assets in Ionic 2 | <p>What’s the best practice for image assets in Ionic 2? I have a bunch of SVGs I want to use as non-system icons. I found some older tips on using Gulp but it seems Ionic team has decided on Rollup as the build tool of choice, no docs on this so far.</p>
<p>Somebody told me to just add them to <code>www/img</code>. Any downsides?</p> | 39,953,703 | 4 | 0 | null | 2016-10-10 06:11:45.69 UTC | 11 | 2019-03-25 17:24:29.383 UTC | 2018-09-01 08:22:24.417 UTC | null | 3,915,438 | null | 519,632 | null | 1 | 47 | angular|ionic-framework|ionic2|ionic3 | 88,875 | <p>Placing your images in <code>www/img</code> sounds like a good ideal but it will only work when serving locally using <code>ionic serve</code>. </p>
<p>When building your app, the <code>www/img</code> will get deleted unless you make a gulp task to copy the images from the folder you want to the <code>www/build</code> folder as shown <a href="https://stackoverflow.com/questions/36292000/how-to-insert-image-in-ionic-2">in this post</a>. </p>
<p>Images used in html files should be in <code>src/assets/img</code>(recommended) and not <code>www/assets/img</code>(obselete). Image tags would then look like this : </p>
<pre><code><img src="assets/img/yourimage.jpg" alt="your image">
</code></pre>
<p>In ionic 2, the <code>src/assets</code> folder is meant for images and fonts.</p>
<p>This is what the ionic team says in the <a href="https://github.com/driftyco/ionic/blob/master/CHANGELOG.md#modifying-your-existing-project" rel="noreferrer">guide to modify an existing ionic project</a> :</p>
<blockquote>
<ol start="22">
<li><p>Move www/img to src/assets/img.</p></li>
<li><p>Move any other resources you have in www/ to src/assets/.</p></li>
</ol>
</blockquote> |
20,723,735 | Any way to know if a variable is an angularjs promise? | <p>I'm making a directive that takes a function as a scope parameter (<code>scope: { method:'&theFunction' }</code>). I need to know if the result returned by that method is an angular promise (if yes something will happen on resolution, otherwise it happens right away). </p>
<p>For now I'm testing if <code>foo.then</code> exists but I was wondering if there was a better way to do it.</p> | 20,723,813 | 4 | 0 | null | 2013-12-21 21:27:29.743 UTC | 6 | 2018-08-22 13:11:03.307 UTC | null | null | null | null | 111,625 | null | 1 | 60 | angularjs | 19,801 | <p>You can use <code>$q.when</code> to wrap the object as a promise (whether it is or not). Then, you can be sure that you are always dealing with a promise. This should simplify the code that then handles the result.</p>
<p>Documentation for <code>$q.when</code> is <a href="http://docs.angularjs.org/api/ng.$q" rel="noreferrer">here with $q</a>.</p> |
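<p>For example (inside the directive's link function or controller), this works whether <code>method()</code> returns a promise or a plain value:</p>
<pre><code>$q.when(scope.method()).then(function (result) {
  // runs after resolution if it was a promise,
  // or on the next digest cycle if it was a plain value
});
</code></pre>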
24,225,766 | Elasticsearch Java API - building queries | <p>I have looked through the docs for the <a href="http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/search.html" rel="noreferrer">Search API</a> but find them not descriptive enough (even though they are very well written). I am trying to build a query, but I understand little about all the different options available, cannot find information on the matter, and am unable to translate queries I can run in Sense into queries I can run using the <a href="http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/index.html" rel="noreferrer">Java API</a>.</p>
<p>In Sense I have the following: </p>
<pre><code>GET index/_search
{
"query": {
"match" : {
"name" : "some string"
}
}
}
</code></pre>
<p>And in my Java code I have:</p>
<pre><code>node = nodeBuilder().client(true).clusterName(CLUSTER_NAME).node();
client = node.client();
QueryBuilder qb = QueryBuilders.termQuery("name", "some string");
SearchResponse response = client.prepareSearch("index") //
.setQuery(qb) // Query
.execute().actionGet();
</code></pre>
<p>But they produce different search results. What is the difference as I cannot see it? Also is there a good source of information that might be of use?</p> | 24,225,902 | 1 | 0 | null | 2014-06-15 01:23:05.183 UTC | 6 | 2016-07-05 06:37:40.503 UTC | null | null | null | null | 1,126,241 | null | 1 | 15 | java|elasticsearch | 45,571 | <p>If you want the two queries to return the same results you need to use the same type of query. In your Sense query you are performing a match query:</p>
<pre><code>"query": {
"match" : {
"name" : "some string"
}
}
</code></pre>
<p>but in your Java code you are performing a termQuery:</p>
<pre><code>QueryBuilder qb = QueryBuilders.termQuery("name", "some string");
</code></pre>
<p>So to answer your question use a match query instead in your Java code:</p>
<pre><code>QueryBuilder qb = QueryBuilders.matchQuery("name", "some string");
</code></pre>
<p>Regarding your second question, it's a bit broad. I'd certainly try going thru the documentation and searching here on StackOverflow. Regarding the Java API I'd look <a href="http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/java-api.html">here</a> for the overview and <a href="http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/java-api.html">here</a> for the info on the query dsl thru Java.</p>
<p>I think a good general understanding of how Elasticsearch works and some comfort with the query mechanism thru the REST API would be very helpful in getting to understand the Java API. Good places to start:</p>
<p><a href="http://joelabrahamsson.com/elasticsearch-101/">http://joelabrahamsson.com/elasticsearch-101/</a></p>
<p><a href="http://exploringelasticsearch.com/">http://exploringelasticsearch.com/</a></p>
<p><a href="http://java.dzone.com/articles/elasticsearch-getting-started">http://java.dzone.com/articles/elasticsearch-getting-started</a></p> |
48,033,944 | How are coroutines implemented in JVM langs without JVM support? | <p>This question came up after reading the <a href="http://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.html" rel="noreferrer">Loom proposal</a>, which describes an approach of implementing coroutines in the Java programming language.</p>
<p>Particularly this proposal says that to implement this feature in the language, additional JVM support will be required.</p>
<p>As I understand it there are already several languages on the JVM that have coroutines as part of their feature set such as Kotlin and Scala.</p>
<p>So how is this feature implemented without additional support and can it be implemented efficiently without it?</p> | 48,034,412 | 4 | 0 | null | 2017-12-30 12:27:01.11 UTC | 14 | 2018-09-07 09:02:47.527 UTC | 2017-12-30 15:30:35.827 UTC | null | 964,243 | null | 3,424,394 | null | 1 | 54 | java|scala|jvm|kotlin|jvm-languages | 3,893 | <p><strong>tl;dr</strong> Summary:</p>
<blockquote>
<p>Particularly this proposal says that to implement this feature in the language the additional JVM support will be required.</p>
</blockquote>
<p>When they say "required", they mean "required in order to be implemented in such a way that it is both performant and interoperable between languages".</p>
<blockquote>
<p>So how this feature is implemented without additional support </p>
</blockquote>
<p>There are many ways, the most easy to understand how it can possibly work (but not necessarily easiest to implement) is to implement your own VM with your own semantics on top of the JVM. (Note that is <em>not</em> how it is actually done, this is only an intuition as to <em>why</em> it can be done.)</p>
<blockquote>
<p>and can it be implemented efficiently without it ?</p>
</blockquote>
<p>Not really.</p>
<p><strong>Slightly longer explanation</strong>:</p>
<p>Note that one goal of Project Loom is to introduce this abstraction <em>purely</em> as a library. This has three advantages:</p>
<ul>
<li>It is much easier to introduce a new library than it is to change the Java programming language.</li>
<li>Libraries can immediately be used by programs written in every single language on the JVM, whereas a Java language feature can only be used by Java programs.</li>
<li>A library with the same API that does not use the new JVM features can be implemented, which will allow you to write code that runs on older JVMs with a simple re-compile (albeit with less performance).</li>
</ul>
<p>However, implementing it as a library precludes clever compiler tricks turning co-routines into something else, because <em>there is no compiler involved</em>. Without clever compiler tricks, getting good performance is much harder, ergo, the "requirement" for JVM support.</p>
<p><strong>Longer explanation</strong>:</p>
<p>In general, all of the usual "powerful" control structures are equivalent in a computational sense and can be implemented using each other.</p>
<p>The most well-known of those "powerful" universal control-flow structures is the venerable <code>GOTO</code>, another one are Continuations. Then, there are Threads and Coroutines, and one that people don't often think about, but that is also equivalent to <code>GOTO</code>: Exceptions.</p>
<p>A different possibility is a re-ified call stack, so that the call-stack is accessible as an object to the programmer and can be modified and re-written. (Many Smalltalk dialects do this, for example, and it is also kind-of like how this is done in C and assembly.)</p>
<p>As long as you have <em>one</em> of those, you can have <em>all</em> of those, by just implementing one on top of the other.</p>
<p>The JVM has two of those: Exceptions and <code>GOTO</code>, but the <code>GOTO</code> in the JVM is <em>not</em> universal, it is extremely limited: it only works <em>inside</em> a single method. (It is essentially intended only for loops.) So, that leaves us with Exceptions.</p>
<p>So, that is one possible answer to your question: you can implement co-routines on top of Exceptions.</p>
<p>Another possibility is to not use the JVM's control-flow <em>at all</em> and implement your own stack.</p>
<p>However, that is typically not the path that is actually taken when implementing co-routines on the JVM. Most likely, someone who implements co-routines would choose to use Trampolines and partially re-ify the execution context as an object. That is, for example, how Generators are implemented in C♯ on the CLI (not the JVM, but the challenges are similar). Generators (which are basically restricted semi-co-routines) in C♯ are implemented by lifting the local variables of the method into fields of a context object and splitting the method into multiple methods on that object at each <code>yield</code> statement, converting them into a state machine, and carefully threading all state changes through the fields on the context object. And before <code>async</code>/<code>await</code> came along as a language feature, a clever programmer implemented asynchronous programming using the same machinery as well.</p>
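<p>To make that lowering idea concrete, here is a rough, hand-written Java sketch of a state machine for a generator that yields the values 1, 2 and 3. It only illustrates the general technique described above; it is not the output of any real compiler, and all names in it are made up:</p>

<pre><code>// Conceptual source: generate() { yield 1; yield 2; yield 3; }
final class NumbersGenerator {
    private int state = 0;  // which yield point we are suspended at
    private int current;    // value produced by the last resume

    // Each call resumes the lowered "method body" at the saved state.
    boolean moveNext() {
        switch (state) {
            case 0: current = 1; state = 1; return true;
            case 1: current = 2; state = 2; return true;
            case 2: current = 3; state = 3; return true;
            default: return false; // generator is exhausted
        }
    }

    int current() { return current; }
}
</code></pre>

<p>Local variables of the original body become fields (here only <code>current</code>), and the body is split into one branch per suspension point, which is exactly the kind of control flow a JIT does not expect from an ordinary method.</p>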
<p><strong>HOWEVER</strong>, and that is what the article you pointed to most likely referred to: all that machinery is costly. If you implement your own stack or lift the execution context into a separate object, or compile all your methods into one <em>giant</em> method and use <code>GOTO</code> everywhere (which isn't even possible because of the size limit on methods), or use Exceptions as control-flow, at least one of these two things will be true:</p>
<ul>
<li>Your calling conventions become incompatible with the JVM stack layout that other languages expect, i.e. you lose <em>interoperability</em>.</li>
<li>The JIT compiler has no idea what the hell your code is doing, and is presented with byte code patterns, execution flow patterns, and usage patterns (e.g. throwing and catching <em>ginormous</em> amounts of exceptions) it doesn't expect and doesn't know how to optimize, i.e. you lose <em>performance</em>.</li>
</ul>
<p>Rich Hickey (the designer of Clojure) once said in a talk: "Tail Calls, Performance, Interop. Pick Two." I generalized this to what I call <em>Hickey's Maxim</em>: "Advanced Control-Flow, Performance, Interop. Pick Two."</p>
<p>In fact, it is generally hard to achieve even <em>one of</em> interop or performance.</p>
<p>Also, your compiler will become more complex.</p>
<p>All of this goes away, when the construct is available natively in the JVM. Imagine, for example, if the JVM didn't have Threads. Then, every language implementation would create its own Threading library, which is hard, complex, slow, and doesn't interoperate with any <em>other</em> language implementation's Threading library.</p>
<p>A recent, and real-world, example are lambdas: many language implementations on the JVM had lambdas, e.g. Scala. Then Java added lambdas as well, but because the JVM doesn't support lambdas, they must be <em>encoded</em> somehow, and the encoding that Oracle chose was different from the one Scala had chosen before, which meant that you couldn't pass a Java lambda to a Scala method expecting a Scala <code>Function</code>. The solution in this case was that the Scala developers completely re-wrote their encoding of lambdas to be compatible with the encoding Oracle had chosen. This actually broke backwards-compatibility in some places.</p> |
59,428,993 | Is it safe to swap two different vectors in C++, using the std::vector::swap method? | <p>Suppose that you have the following code: </p>
<pre><code>#include <iostream>
#include <string>
#include <vector>
int main()
{
std::vector<std::string> First{"example", "second" , "C++" , "Hello world" };
std::vector<std::string> Second{"Hello"};
First.swap(Second);
for(auto a : Second) std::cout << a << "\n";
return 0;
}
</code></pre>
<p>Imagine the vectors do not hold <code>std::string</code>, but instead hold classes of their own:</p>
<pre><code>std::vector<Widget> WidgetVector;
std::vector<Widget2> Widget2Vector;
</code></pre>
<p>Is it still safe to swap the two vectors with the <code>std::vector::swap</code> method: <code>WidgetVector.swap(Widget2Vector);</code> or it will lead to an UB?</p> | 59,429,311 | 6 | 0 | null | 2019-12-20 17:07:46.32 UTC | 6 | 2019-12-22 13:54:19.153 UTC | 2019-12-22 13:36:53.22 UTC | null | 5,825,294 | user11121949 | null | null | 1 | 31 | c++|c++11|vector|stdvector|swap | 4,020 | <p>It is safe because nothing is created during the swap operation. Only data members of the class <code>std::vector</code> are swapped.</p>
<p>Consider the following demonstrative program that makes it clear how objects of the class <code>std::vector</code> are swapped.</p>
<pre><code>#include <iostream>
#include <utility>
#include <iterator>
#include <algorithm>
#include <numeric>
class A
{
public:
explicit A( size_t n ) : ptr( new int[n]() ), n( n )
{
std::iota( ptr, ptr + n, 0 );
}
~A()
{
delete []ptr;
}
void swap( A & a ) noexcept
{
std::swap( ptr, a.ptr );
std::swap( n, a.n );
}
friend std::ostream & operator <<( std::ostream &os, const A &a )
{
std::copy( a.ptr, a.ptr + a.n, std::ostream_iterator<int>( os, " " ) );
return os;
}
private:
int *ptr;
size_t n;
};
int main()
{
A a1( 10 );
A a2( 5 );
std::cout << a1 << '\n';
std::cout << a2 << '\n';
std::cout << '\n';
a1.swap( a2 );
std::cout << a1 << '\n';
std::cout << a2 << '\n';
std::cout << '\n';
return 0;
}
</code></pre>
<p>The program output is</p>
<pre><code>0 1 2 3 4 5 6 7 8 9
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4 5 6 7 8 9
</code></pre>
<p>As you can see, only the data members <code>ptr</code> and <code>n</code> are swapped in the member function swap. No additional resources are used.</p>
<p>A similar approach is used in the class <code>std::vector</code>.</p>
<p>As for this example</p>
<pre><code>std::vector<Widget> WidgetVector;
std::vector<Widget2> Widget2Vector;
</code></pre>
<p>then the two vectors are objects of different types. The member function <code>swap</code> can only be applied to vectors of the same element type, so these two cannot be swapped with each other at all.</p>
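<p>A short sketch of that last point, reusing the class names from the question (everything else is illustrative):</p>

<pre><code>std::vector<Widget> a;
std::vector<Widget> b;
std::vector<Widget2> c;

a.swap(b);    // fine: same element type, O(1), no Widget is copied or moved
// a.swap(c); // does not compile: the parameter of swap is std::vector<Widget>&
</code></pre>

<p>So swapping two vectors of the same element type is safe and constant-time, while swapping vectors of different element types is rejected at compile time rather than being undefined behaviour.</p>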
66,912,085 | Why is docker-compose failing with ERROR internal load metadata suddenly? | <p>I've been running docker-compose build for days, many times per day, and haven't changed my DOCKERFILEs or docker-compose.yml. Suddenly an hour ago I started getting this:</p>
<pre><code>Building frontdesk-api
failed to get console mode for stdout: The handle is invalid.
[+] Building 10.0s (3/4)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 10.1s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for mcr.microsoft.com/dotnet/sdk:5.0. 10.0sailed to do request: Head https://mcr.microsoft.com/v2/dotnet/sdk/manifests/5.0.201-buster-slim: dial tcp: lookup mcr.microsoft.com on 192.168.65.5:53:
=> [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:5.0-bust 0.0s
------
> [internal] load metadata for mcr.microsoft.com/dotnet/sdk:5.0.201-buster-slim:
------
ERROR: Service 'frontdesk-api' failed to build
</code></pre>
<p>Things I've tried:</p>
<ul>
<li>Running it again</li>
<li><code> docker rm -f $(docker ps -a -q)</code></li>
<li><code>docker login</code></li>
<li>Different SDK image</li>
</ul>
<p>Here is the DOCKERFILE:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.201-buster-slim AS build
WORKDIR /app
# run this from repository root
COPY ./ ./
#RUN ls -lha .
RUN echo 'Building FrontDesk container'
WORKDIR /app/FrontDesk/src/FrontDesk.Api
#RUN ls -lha .
RUN dotnet restore
RUN dotnet build "FrontDesk.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FrontDesk.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "FrontDesk.Api.dll"]
</code></pre>
<p>How can I get my build working again?</p> | 66,912,110 | 15 | 0 | null | 2021-04-01 22:00:08.163 UTC | 4 | 2022-09-21 18:02:53.203 UTC | null | null | null | null | 13,729 | null | 1 | 30 | docker|docker-compose | 70,770 | <p>mcr.microsoft.com is down at the moment</p>
<p>I'm receiving several different errors when pulling:</p>
<pre><code>% docker pull mcr.microsoft.com/dotnet/sdk:5.0
Error response from daemon: Get https://mcr.microsoft.com/v2/: Service Unavailable
% docker pull mcr.microsoft.com/dotnet/sdk:5.0
5.0: Pulling from dotnet/sdk
received unexpected HTTP status: 500 Internal Server Error
</code></pre> |
40,012,866 | Could not find method android() for arguments in Android Studio project | <p>I am trying to do a grade sync on my android studio project and I keep getting this error in the title. My build.gradle file is </p>
<pre><code>// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
android {
compileSdkVersion 24
buildToolsVersion '24.0.0'
}
dependencies {
}
</code></pre>
<p>My error message is</p>
<pre><code>Gradle sync failed: Could not find method android() for arguments [build_aiwlctiq29euo9devcma4v4r7$_run_closure3@22efc0bc] on root project 'MyRadio
</code></pre>
<p>I have looked online and tried multiple solutions but nothing seems to work. What does this even mean? Any help would be appreciated.</p> | 40,013,914 | 6 | 2 | null | 2016-10-13 05:21:33.883 UTC | 3 | 2018-07-19 17:47:01.207 UTC | null | null | null | null | 4,096,781 | null | 1 | 11 | android|android-studio|gradle|build|android-gradle-plugin | 59,954 | <p>You are using the wrong build.gradle file.</p>
<p>In your top-level file, you can't define an android block.</p>
<p>Just move this part inside the module-level build.gradle file.</p>
<pre><code>android {
compileSdkVersion 17
buildToolsVersion '23.0.0'
}
dependencies {
compile files('app/libs/junit-4.12-JavaDoc.jar')
}
apply plugin: 'maven'
</code></pre> |
22,162,477 | How to update an installed Windows service? | <p>I have written a Windows service in C#.</p>
<p>I have since installed it on my machine, and it runs just fine.</p>
<p>When you install a service, does the <code>exe</code> get copied somewhere? Or does it point to my <code>bin</code> folder? </p>
<p>This is for me to know that when I update my code from time to time, do I have to uninstall and re-install my service to update it?</p> | 22,162,622 | 3 | 2 | null | 2014-03-04 03:46:07.083 UTC | 6 | 2021-01-21 15:00:51.107 UTC | 2016-02-22 08:15:17.223 UTC | null | 1,810,429 | null | 148,998 | null | 1 | 31 | c#|installation|windows-services|upgrade | 22,299 | <p>If you want to update your Service <em>automatically</em>, you can use a framework such as Google Omaha. This is the technology which Google use to update Chrome. It works well with Services because it runs silently in the background, just like a Service. <a href="https://omaha-consulting.com/auto-update-your-windows-service-with-google-omaha" rel="nofollow noreferrer">This article</a> gives more information about using Omaha to auto-update a Service.</p>
<p>On the other hand, if you want to manually update your Service: If the Service's location has not changed and the name of its executable has not changed, you should not have to uninstall and reinstall it. You can simply stop the service with <code>net stop</code>, update its executable with a new version, and start it again with <code>net start</code>. This approach worked reliably for me for many months.</p> |
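<p>For illustration only, the manual update could look roughly like this from an elevated command prompt (the service name and paths below are placeholders):</p>

<pre><code>net stop "MyService"
copy /Y "C:\build\MyService.exe" "C:\Services\MyService.exe"
net start "MyService"
</code></pre>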
23,758,858 | How can I extract elements from lists of lists in R? | <p>I have a bunch of lists containing lists within them (generalised linear model output). I want to write a function which will extract several elements from each list and then combine the results into a data frame. </p>
<p>I want to extract <code>modelset[[1]]$likelihood</code> & <code>modelset[[1]]$fixef</code>, <code>modelset[[2]]$likelihood</code> & <code>modelset[[2]]$fixef</code>, etc, and combine the results into a data frame.</p>
<p>Can someone give me an idea of how to do this? </p>
<p>Apologies if my question is confusing: what I am trying to do is beyond my limited programming understanding.</p>
<p>Further information about my list: </p>
<pre><code>modelset: Large list (16 elements, 7.3Mb)
:List of 29
..$ fixef : Named num [1:2] -1.236 -0.611
.. ..- attr(*, "names")= chr [1:2] "(Intercept)" "SMIstd"
..$ likelihood :List of 4
.. ..$ hlik: num 238
.. ..$ pvh : num 256
.. ..$ pbvh: num 260
.. ..$ cAIC: num 567
...etc
</code></pre> | 23,759,266 | 1 | 1 | null | 2014-05-20 11:41:54.947 UTC | 24 | 2022-01-20 23:38:48.72 UTC | 2016-06-30 14:24:41.587 UTC | null | 2,100,721 | null | 2,214,631 | null | 1 | 33 | r|list | 40,569 | <p>In order to solve this elegantly you need to understand that you can use <code>['…']</code> instead of <code>$…</code> to access list elements (but you will get a list back instead of an individual element).</p>
<p>So if you want to get the elements <code>likelihood</code> and <code>fixef</code>, you can write:</p>
<pre><code>modelset[[1]][c('likelihood', 'fixef')]
</code></pre>
<p>Now you want to do that for each element in <code>modelset</code>. That’s what <a href="http://stat.ethz.ch/R-manual/R-devel/library/base/html/lapply.html" rel="noreferrer"><code>lapply</code></a> does:</p>
<pre><code>lapply(modelset, function (x) x[c('likelihood', 'fixef')])
</code></pre>
<p>This works, but it’s not very R-like.</p>
<p>You see, in R, almost <strong>everything</strong> is a function. <code>[…]</code> is calling a function named <code>[</code> (but since <code>[</code> is a special symbol for R, it needs to be quoted in backticks: <code>`[`</code>). So you can instead write this:</p>
<pre><code>lapply(modelset, function (x) `[`(x, c('likelihood', 'fixef')))
</code></pre>
<p>Wow, that’s not very readable at all. However, we can now remove the wrapping anonymous <code>function (x)</code>, since inside we’re just calling another function, and move the extra arguments to the last parameter of <code>lapply</code>:</p>
<pre><code>lapply(modelset, `[`, c('likelihood', 'fixef'))
</code></pre>
<p>This works and is elegant R code.</p>
<hr />
<p>Let’s step back and re-examine what we did here. In effect, we had an expression which looked like this:</p>
<pre><code>lapply(some_list, function (x) f(x, y))
</code></pre>
<p>And this call can instead be written as</p>
<pre><code>lapply(some_list, f, y)
</code></pre>
<p>We did exactly that, with <code>somelist = modelset</code>, <code>f = `[`</code> and <code>y = c('likelihood', 'fixef')</code>.</p> |
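<p>Since the original goal was a data frame, one simplified way to flatten the extracted pieces, assuming every model has exactly the structure shown in the question (four likelihood values and two fixed effects), could be:</p>

<pre><code>extracted <- lapply(modelset, `[`, c('likelihood', 'fixef'))

# one row per model: unlist() turns the nested likelihood list and the
# named fixef vector into a single named numeric vector
rows <- lapply(extracted, function (x) c(unlist(x$likelihood), x$fixef))
result <- as.data.frame(do.call(rbind, rows))
</code></pre>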
42,788,139 | ES6 Tail Recursion Optimisation Stack Overflow | <p>Having read <a href="http://www.2ality.com/2015/06/tail-call-optimization.html" rel="noreferrer">Dr Rauschmayer's description</a> of recursive tail call optimisation in es6, I've since been trying to recreate the 'zero-stack' execution of the recursive factorial function he details.<br/><br/>
Using the Chrome debugger to step between stack frames, I'm seeing that the tail optimisation is not occurring and a stack frame is being created for each recursion.<br/><br/>
I've also tried to test the optimisation by calling the function without the debugger, but instead passing <code>100000</code> to the factorial function. This throws a 'maximum stack' error, which implies that it is, in fact, not optimised.</p>
<p>Here is my code:</p>
<pre><code>const factorial = (n, acc = 1) => n <= 1 ? acc : factorial(n - 1, n * acc)
console.log( factorial(100000) )
</code></pre>
<p>Result:</p>
<pre><code>Uncaught RangeError: Maximum call stack size exceeded
</code></pre> | 42,788,286 | 2 | 4 | null | 2017-03-14 14:04:32.2 UTC | 17 | 2022-01-11 14:51:15.117 UTC | null | null | null | null | 6,685,193 | null | 1 | 55 | javascript|recursion|optimization|ecmascript-6|stack-overflow | 16,295 | <p>V8, the JavaScript engine in Chrome, had TCO support for a while, but as of this updated answer (November 2017) it no longer does and as of this writing, there is no active development on TCO in V8, and none is planned. You can read the details in <a href="https://bugs.chromium.org/p/v8/issues/detail?id=4698" rel="noreferrer">the V8 tracking bug for it</a>.</p>
<p>TCO support seems to have reached a decent level in V8 at one point, but remained behind a flag for several reasons (debugging issues, bugs). But then several things happened, not least that <a href="https://v8project.blogspot.co.uk/2016/04/es6-es7-and-beyond.html" rel="noreferrer">the V8 team raised significant issues with TCO</a> and strongly supported a spec change called <a href="https://github.com/tc39/proposal-ptc-syntax" rel="noreferrer">syntactic tail calls (STCs)</a> that would require that tail calls be flagged in source code intentionally (e.g., <code>return continue doThat();</code>). That proposal became <a href="https://github.com/tc39/proposals/blob/master/inactive-proposals.md" rel="noreferrer">inactive</a> in July 2017, though. Also in July, with no TCO work being done, the V8 team removed the code for supporting TCO from the source for TurboFan* as it would otherwise be subject to bitrot. (E.g., become a maintenance pain and source of bugs.)</p>
<p>So at present (Nov 2017) it's not clear that "invisible" TCO will ever be in V8, whether some kind of STCs will come in, or what. The <a href="https://www.chromestatus.com/feature/5516876633341952" rel="noreferrer">Chrome Platform Status page</a> for this indicates "mixed" public signals from Mozilla (Firefox/SpiderMonkey) and Microsoft (Edge/Chakra) on supporting TCO, that Safari is shipping with TCO, and that web developers are "positive" about the feature. We'll see where we go from here. If anywhere.</p>
<p>* (TurboFan = the current cutting-edge JIT compiler in V8, now they've <a href="https://v8project.blogspot.co.uk/2017/05/launching-ignition-and-turbofan.html" rel="noreferrer">switched</a> from Full-Codegen [JIT] + Crankshaft [aggressive optimizing JIT] to Ignition [interpreter+] and TurboFan [aggressive optimizing JIT])</p> |
37,282,792 | Why use @Singleton over Scala's object in Play Framework? | <p>I have been using <a href="https://www.playframework.com/">Play! Framework</a> for <a href="http://www.scala-lang.org/">Scala</a> for nearly a year now. I am currently using version <a href="https://www.playframework.com/documentation/2.5.x/Highlights25">2.5.x</a>.</p>
<p>I am aware of the evolution of controllers in Play and how developers have been forced away from static <code>object</code> routes.</p>
<p>I am also aware of the <a href="https://github.com/google/guice">Guice</a> usage in play.</p>
<p>If you download <a href="https://www.lightbend.com/activator/download">activator</a> and run:</p>
<pre><code>activator new my-test-app play-scala
</code></pre>
<p>Activator will produce a template project for you.
My question is specifically around <a href="https://github.com/playframework/playframework/blob/master/templates/play-scala/app/services/Counter.scala">this</a> file of that template.</p>
<p><strong>my-test-app/app/services/Counter.scala</strong></p>
<pre><code>package services
import java.util.concurrent.atomic.AtomicInteger
import javax.inject._
/**
* This trait demonstrates how to create a component that is injected
* into a controller. The trait represents a counter that returns a
* incremented number each time it is called.
*/
trait Counter {
def nextCount(): Int
}
/**
* This class is a concrete implementation of the [[Counter]] trait.
* It is configured for Guice dependency injection in the [[Module]]
* class.
*
* This class has a `Singleton` annotation because we need to make
* sure we only use one counter per application. Without this
* annotation we would get a new instance every time a [[Counter]] is
* injected.
*/
@Singleton
class AtomicCounter extends Counter {
private val atomicCounter = new AtomicInteger()
override def nextCount(): Int = atomicCounter.getAndIncrement()
}
</code></pre>
<p>You can also see its usage in <a href="https://github.com/playframework/playframework/blob/master/templates/play-scala/app/controllers/CountController.scala">this</a> file:</p>
<p><strong>my-test-app/app/controllers/CountController.scala</strong></p>
<pre><code>package controllers
import javax.inject._
import play.api._
import play.api.mvc._
import services.Counter
/**
* This controller demonstrates how to use dependency injection to
* bind a component into a controller class. The class creates an
* `Action` that shows an incrementing count to users. The [[Counter]]
* object is injected by the Guice dependency injection system.
*/
@Singleton
class CountController @Inject() (counter: Counter) extends Controller {
/**
* Create an action that responds with the [[Counter]]'s current
* count. The result is plain text. This `Action` is mapped to
* `GET /count` requests by an entry in the `routes` config file.
*/
def count = Action { Ok(counter.nextCount().toString) }
}
</code></pre>
<p>This means every controller which has the constructor of <code>@Inject() (counter: Counter)</code> will receive the same instance of <code>Counter</code>.</p>
<p>So my question is:</p>
<p>Why use <code>@Singleton</code> and then <code>@Inject</code> it into a controller, when for this example you could just use a Scala object?<br>
Its a lot less code.</p>
<p>Example:</p>
<p><strong>my-test-app/app/services/Counter.scala</strong></p>
<pre><code>package services

import java.util.concurrent.atomic.AtomicInteger

trait ACounter {
  def nextCount(): Int
}

object Counter extends ACounter {
  private val atomicCounter = new AtomicInteger()
  def nextCount(): Int = atomicCounter.getAndIncrement()
}
</code></pre>
<p>Use it like so:</p>
<p><strong>my-test-app/app/controllers/CountController.scala</strong></p>
<pre><code>package controllers
import javax.inject._
import play.api._
import play.api.mvc._
import services.{Counter, ACounter}
/**
* This controller demonstrates how to use dependency injection to
* bind a component into a controller class. The class creates an
* `Action` that shows an incrementing count to users. The [[Counter]]
* object is injected by the Guice dependency injection system.
*/
@Singleton
class CountController extends Controller {
//depend on abstractions
val counter: ACounter = Counter
def count = Action { Ok(counter.nextCount().toString) }
}
</code></pre>
<p>What is the difference? Is injection the preferred, and why?</p> | 37,284,215 | 3 | 1 | null | 2016-05-17 17:29:02.517 UTC | 6 | 2018-03-15 13:26:41.76 UTC | 2016-05-19 10:25:47.717 UTC | null | 5,398,254 | null | 5,398,254 | null | 1 | 29 | scala|object|playframework|guice | 8,864 | <p>Is injection the preferred way? Generally yes</p>
<p>A couple advantages of using dependency injection:</p>
<ol>
<li>Decouple controller from the concrete implementation of <code>Counter</code>.
<ul>
<li>If you were to use an <code>object</code>, you would have to change your controller to point to the different implementation. EG <code>Counter2.nextCount().toString</code></li>
</ul></li>
<li>You can vary the implementation during testing using Guice custom bindings
<ul>
<li>Let's say that inside of <code>Counter</code> you are doing a <code>WS</code> call. This could cause some difficulty unit testing. If you are using dependency injection with Guice, you can override the binding between <code>Counter</code> and <code>AtomicCounter</code> to point to an offline version of <code>Counter</code> that you have written specifically for your tests. See <a href="https://www.playframework.com/documentation/2.6.x/ScalaTestingWithGuice" rel="noreferrer">here</a> for more info on using Guice for Play tests (a rough sketch follows right after this list).</li>
</ul></li>
</ol>
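<p>As a rough sketch of that second point, using the Play 2.5 test helpers (<code>FakeCounter</code> is a made-up test double, not something from the question):</p>

<pre><code>import play.api.inject.bind
import play.api.inject.guice.GuiceApplicationBuilder
import services.Counter

// a deterministic stand-in for the real AtomicCounter
class FakeCounter extends Counter {
  def nextCount(): Int = 42
}

// inside a test, build an Application with the binding overridden
val application = new GuiceApplicationBuilder()
  .overrides(bind[Counter].to[FakeCounter])
  .build()
</code></pre>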
<p>Also see the <a href="https://www.playframework.com/documentation/2.6.x/ScalaDependencyInjection#motivation" rel="noreferrer">motivations</a> that Play had for migrating to DI.</p>
<p>I say generally because I've seen dependency injection go horribly wrong using Spring and other Java frameworks. I'd say you should use your own judgement but err on the side of using DI for Play.</p> |
48,367,042 | In chrome dev tools, what is the speed of each preset option for network throttling? | <p>Since a recent update to chrome, the presets are no longer labelled with bandwidth.
<a href="https://i.stack.imgur.com/qzL2r.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qzL2r.png" alt="bandwidth presets"></a></p>
<p>Chrome used to list the actual speed of each one so you could simply tell.</p>
<p>What bandwidth or latency do the options here represent?</p> | 48,682,519 | 5 | 1 | null | 2018-01-21 13:01:10.03 UTC | 12 | 2022-01-06 20:38:01.623 UTC | null | null | null | null | 3,931,173 | null | 1 | 84 | google-chrome | 43,186 | <p>I did some measurements with two speed tests available in the internet. With the following custom profile I received similar download speed and ping latency as with the presets.</p>
<p>Slow 3G Custom: Download 376 kb/s, Latency 2000 ms<br/>
Fast 3G Custom: Download 1500 kb/s = 1.5 Mb/s, Latency = 550 ms</p>
<p>The actually download speed measured via the speed tests was only slightly below the configured values. The measured ping latency was half of the value configured in the custom profile.</p> |
48,376,580 | Google Colab: how to read data from my google drive? | <p>The problem is simple: I have some data on gDrive, for example at
<code>/projects/my_project/my_data*</code>.</p>
<p>Also I have a simple notebook in gColab.</p>
<p>So, I would like to do something like:</p>
<pre><code>for file in glob.glob("/projects/my_project/my_data*"):
do_something(file)
</code></pre>
<p>Unfortunately, all the examples (like this one - <a href="https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/io.ipynb" rel="noreferrer">https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/io.ipynb</a>) suggest only loading all the necessary data into the notebook.</p>
<p>But, if I have a lot of pieces of data, it can be quite complicated.
Is there any way to solve this issue?</p>
<p>Thanks for help!</p> | 48,385,944 | 16 | 1 | null | 2018-01-22 07:33:11.267 UTC | 76 | 2022-05-10 06:15:02.963 UTC | 2018-01-22 08:23:58.45 UTC | null | 5,747,242 | null | 7,712,955 | null | 1 | 200 | python|google-colaboratory | 470,711 | <p>Good news, <a href="http://pythonhosted.org/PyDrive/" rel="noreferrer">PyDrive</a> has first class support on CoLab! PyDrive is a wrapper for the Google Drive python client. Here is an example on how you would download <strong>ALL</strong> files from a folder, similar to using <code>glob</code> + <code>*</code>:</p>
<pre><code>!pip install -U -q PyDrive
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# choose a local (colab) directory to store the data.
local_download_path = os.path.expanduser('~/data')
try:
os.makedirs(local_download_path)
except: pass
# 2. Auto-iterate using the query syntax
# https://developers.google.com/drive/v2/web/search-parameters
file_list = drive.ListFile(
{'q': "'1SooKSw8M4ACbznKjnNrYvJ5wxuqJ-YCk' in parents"}).GetList()
for f in file_list:
# 3. Create & download by id.
print('title: %s, id: %s' % (f['title'], f['id']))
fname = os.path.join(local_download_path, f['title'])
print('downloading to {}'.format(fname))
f_ = drive.CreateFile({'id': f['id']})
f_.GetContentFile(fname)
with open(fname, 'r') as f:
print(f.read())
</code></pre>
<p>Notice that the arguments to <code>drive.ListFile</code> is a dictionary that coincides with the parameters used by <a href="https://developers.google.com/drive/v2/web/search-parameters" rel="noreferrer">Google Drive HTTP API</a> (you can customize the <code>q</code> parameter to be tuned to your use-case). </p>
<p>Know that in all cases, files/folders are encoded by id's (peep the <strong>1SooKSw8M4ACbznKjnNrYvJ5wxuqJ-YCk</strong>) on Google Drive. This requires that you search Google Drive for the specific id corresponding to the folder you want to root your search in. </p>
<p>For example, navigate to the folder <code>"/projects/my_project/my_data"</code> that
is located in your Google Drive. </p>
<p><img src="https://i.stack.imgur.com/eSeHs.png" alt="Google Drive"></p>
<p>See that it contains some files, in which we want to download to CoLab. To get the id of the folder in order to use it by PyDrive, look at the url and extract the id parameter. In this case, the url corresponding to the folder was:</p>
<p><img src="https://i.stack.imgur.com/Mq9Gl.png" alt="https://drive.google.com/drive/folders/1SooKSw8M4ACbznKjnNrYvJ5wxuqJ-YCk"></p>
<p>Where the id is the last piece of the url: <strong>1SooKSw8M4ACbznKjnNrYvJ5wxuqJ-YCk</strong>.</p> |
25,826,465 | How to move a project from Git to TFS in Visual Studio | <p>I have a project that I've been working on for some time now and I just cannot make Git work for me. I've spent a day trying to recover lost code and I am done with Git.</p>
<p>Can anyone tell me how to move an existing project into TFVC? I have a Visual Studio Online account with a TFVC project all set up for this task, but I cannot figure out how to change the source control settings so that the project is no longer tied to Git.</p>
<p>I am currently developing on VS 2013.</p>
<p>Any help is greatly appreciated!</p> | 25,834,683 | 4 | 1 | null | 2014-09-13 18:44:52.99 UTC | 3 | 2021-03-30 10:37:55.13 UTC | 2016-12-01 19:04:06.477 UTC | null | 11,799 | null | 2,900,166 | null | 1 | 39 | git|visual-studio|tfs|tfvc | 43,374 | <p>Just delete the <strong>.git</strong> folder (this one is normally hidden) in the root folder (f.e. via Windows Explorer). This deletes all things related to git. After that add the code to your TFS project and check it in into TFS.</p> |
30,637,654 | Android Webview gives net::ERR_CACHE_MISS message | <p>I built a web app and wants to create an android app that has a webview that shows my web app. After following the instructions from Google Developer to create an app, I successfully installed it on my phone with Android 5.1.1.</p>
<p>However, when I run the app for the first time, the webview shows the message:</p>
<blockquote>
<p>Web page not available</p>
<p>The Web page at [Lorem Ipsum URL] could not be loaded as:</p>
<p>net::ERR_CACHE_MISS</p>
</blockquote> | 60,647,176 | 7 | 1 | null | 2015-06-04 07:30:42.16 UTC | 16 | 2021-02-11 08:12:45.253 UTC | 2020-06-20 09:12:55.06 UTC | null | -1 | null | 4,849,865 | null | 1 | 225 | android|android-studio|webview | 206,731 | <p>Answers assembled! I wanted to just combine all the answers into one comprehensive one.</p>
<p><strong>1.</strong> Check if <code><uses-permission android:name="android.permission.INTERNET" /></code> is present in <code>manifest.xml</code>. <strong>Make sure that it is nested under <code><manifest></code> and not <code><application></code></strong>. Thanks to <a href="https://stackoverflow.com/users/6629042/sajid45">sajid45</a> and <a href="https://stackoverflow.com/users/9701254/liyanis-velazquez">Liyanis Velazquez</a></p>
<p><strong>2.</strong> Ensure that you are using <code><uses-permission android:name="android.permission.INTERNET"/></code> instead of the deprecated <code><uses-permission android:name="android.permission.internet"/></code>. Much thanks to <a href="https://stackoverflow.com/users/5133469/alan-shi">alan_shi</a> and <a href="https://stackoverflow.com/users/3903990/creos">creos</a>.</p>
<p><strong>3.</strong> If minimum version is below KK, check that you have </p>
<pre><code>if (18 < Build.VERSION.SDK_INT ){
//18 = JellyBean MR2, KITKAT=19
mWeb.getSettings().setCacheMode(WebSettings.LOAD_NO_CACHE);
}
</code></pre>
<p>or </p>
<pre><code>if (Build.VERSION.SDK_INT >= 19) {
mWebView.getSettings().setCacheMode(WebSettings.LOAD_CACHE_ELSE_NETWORK);
}
</code></pre>
<p>because proper webview is only added in KK (SDK 19). Thanks to <a href="https://stackoverflow.com/users/1310448/devavrata">Devavrata</a>, <a href="https://stackoverflow.com/users/6572459/mike-chanseong-kim">Mike ChanSeong Kim</a> and <a href="https://stackoverflow.com/users/9701254/liyanis-velazquez">Liyanis Velazquez</a></p>
<p><strong>4.</strong> Ensure that you don't have <code>webView.getSettings().setBlockNetworkLoads (false);</code>. Thanks to <a href="https://stackoverflow.com/users/2197176/technikh">TechNikh</a> for pointing this out.</p>
<p><strong>5.</strong> If all else fails, make sure that your Android Studio, Android SDK and the emulator image (if you are using one) is updated. And if you are still meeting the problem, just open a new question and make a comment below to your URL.</p> |
20,599,775 | Can you set graphical layout preview-only text on a TextView? | <p>I have a styled TextView whose real text is populated dynamically at runtime. The Graphical Layout view is very useful for getting a feel on how this component works with others in terms of look and feel, etc. There is no sensible default to this text field and I wish it to be blank before being populated. If I don't specify any text in the TextView declaration then the TextView is blank. I can set the text manually using:</p>
<pre><code><TextView
...
android:text="Preview text"/>
</code></pre>
<p>and then switch to the Graphical Layout. However, I must remember to remove this or risk it being shipped in my production version.</p>
<p>Is there a way to specify text which is only seen in the Graphical Layout preview but not applicable at runtime?</p>
<p>EDIT: I'm using Eclipse ADT.</p> | 20,599,902 | 2 | 0 | null | 2013-12-15 21:02:56.503 UTC | 9 | 2017-05-04 07:12:27.177 UTC | 2013-12-15 21:21:45.48 UTC | null | 296,108 | null | 296,108 | null | 1 | 39 | android|graphical-layout-editor | 8,430 | <p>Yes you can with the design tools extension attributes in Android Studio.</p>
<p>See this page <a href="https://developer.android.com/studio/write/tool-attributes.html" rel="noreferrer">https://developer.android.com/studio/write/tool-attributes.html</a></p>
<p>Basically you define the tools namespace</p>
<pre><code> xmlns:tools="http://schemas.android.com/tools"
</code></pre>
<p>Then use it to set your placeholder text.</p>
<pre><code><EditText
tools:text="John Doe"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
</code></pre>
<p>This actually works with most (if not all xml attributes).</p>
<p>e.g </p>
<pre><code>tools:visibility="gone"
</code></pre>
<p>would set the preview visibility to "gone" but the runtime visibility would be unchanged.</p> |
51,637,103 | How do I Return SUM from JPA Query Using Hibernate and Spring-boot? | <p>I am trying to use JPA and JPQL to query my entity and return the sum of a column (total days) from the table. I thought I had set it up right but I am getting this error:</p>
<pre><code>Caused by: org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'myRepository':
Invocation of init method failed; nested exception is
java.lang.IllegalArgumentException: Validation failed for query for method
public abstract java.lang.Float
com.nissan.rca.repository.MyRepository.selectTotals()!
</code></pre>
<p>Here is a representation of my entity:</p>
<pre><code>@Entity
@Table(name = "TABLENAME")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class MyEntity implements Serializable {
private static final long serialVersionUID = 1L;
@EmbeddedId
private MyEntityCompositeKey myEntityCompositeKey;
@Column(name = "raiser_id")
private String raiserID;
@Column(name = "total_days")
private Float totalDays;
</code></pre>
<p>and here is my repository in which I make the query assigned to a method:</p>
<pre><code>@Repository
public interface MyRepository extends JpaRepository<MyEntity, ID> {
@Query("SELECT SUM(total_days) FROM MyEntity")
Float selectTotals();
}
</code></pre>
<p>I call the selectTotals() method from myRepository object in my rest controller for the api mapping.</p>
<pre><code>@GetMapping("/getForecastTotals")
public Float getForecastTotals() {
return myRepository.selectTotals();
}
</code></pre>
<p>I'm unsure as to why it can't be returned as a float.</p> | 51,637,609 | 3 | 1 | null | 2018-08-01 15:26:01.9 UTC | 2 | 2021-06-04 00:10:25.93 UTC | null | null | null | null | 10,166,224 | null | 1 | 11 | java|spring|jpa|jpql | 47,517 | <p>It's not a valid JPQL.</p>
<p>You either should have:</p>
<p><code>@Query("SELECT SUM(m.totalDays) FROM MyEntity m")</code></p>
<p>or, make it a native one:</p>
<p><code>@Query(value = "SELECT SUM(total_days) FROM MyEntity", nativeQuery = true)</code></p> |
32,268,986 | Git: How to remove proxy | <p>I am trying to push to my repo but receiving an error:</p>
<pre><code>fatal: unable to access 'https://github.com/myrepo.git/': Could not resolve proxy: --list
</code></pre>
<p>I already changed the proxy settings :</p>
<pre><code>git config --global --unset http.proxy
</code></pre>
<p>my global config settings are:</p>
<pre><code>push.default=simple
http.sslverify=false
url.https://.insteadof=git://
credential.helper=cache --timeout=3600
</code></pre>
<p>But still getting this error? How can I solve this?</p> | 32,269,086 | 12 | 3 | null | 2015-08-28 10:33:21.507 UTC | 22 | 2022-08-18 17:13:28.403 UTC | 2015-08-28 10:36:04.387 UTC | null | 956,397 | null | 5,085,112 | null | 1 | 82 | git|github | 177,613 | <p>Check your enviroment:</p>
<pre><code>echo $http_proxy
echo $https_proxy
echo $HTTPS_PROXY
echo $HTTP_PROXY
</code></pre>
<p>and delete with <code>export http_proxy=</code></p>
<p>Or check https and http proxy</p>
<pre><code>git config --global --unset https.proxy
git config --global --unset http.proxy
</code></pre>
<p>Or do you have the proxy in the local config?</p>
<pre><code>git config --unset http.proxy
git config --unset https.proxy
</code></pre> |
4,320,861 | What is the difference between a StackPanel and DockPanel in WPF? | <p>What can a <code>DockPanel</code> do that a <code>StackPanel</code> cannot? If anyone has an image of something that can be achieved with a <code>StackPanel</code>, but not a <code>DockPanel</code>, than that would be great.</p> | 4,320,907 | 1 | 0 | null | 2010-12-01 03:12:43.913 UTC | 14 | 2020-09-07 11:13:28.423 UTC | 2020-09-07 11:13:28.423 UTC | null | 5,395,773 | null | 165,495 | null | 1 | 57 | wpf|stackpanel|dockpanel | 39,681 | <p><strong>Stack Panel</strong>: The <code>StackPanel</code>, as the name implies, arranges content either horizontally or vertically. Vertical is the default, but this can be changed using the <code>Orientation</code> property. Content is automatically stretched based on the orientation (see screenshot below), and this can be controlled by changing the <code>HorizontalAlignment</code> or <code>VerticalAlignment</code> properties.</p>
<p><img src="https://i.stack.imgur.com/vYoRP.png" alt="StackPanel example screenshot"></p>
<p><strong>Dock Panel</strong>: The <code>DockPanel</code> is used to anchor elements to the edges of the container, and is a good choice to set up the overall structure of the application UI. Elements are docked using the <code>DockPanel.Dock</code> attached property. The order that elements are docked determines the layout.</p>
<p><img src="https://i.stack.imgur.com/S3YNB.png" alt="DockPanel example screenshot"></p> |
25,210,090 | Can't start mongod with config file | <p>I installed mongodb on windows 8.1</p>
<p>then I use command promp to navigate to D:\mongodb\bin</p>
<p>then I use this command </p>
<pre><code>mongod.exe --config D:\mongodb\mongodb.conf
</code></pre>
<p>The content of mongodb.conf</p>
<pre><code>bind_ip = 127.0.0.1,100.100.100.100
port = 3979
quiet = true
dbpath = D:\mongodb\data\db
logpath = D:\mongodb\data\log\mongodb.log
logappend = true
journal = true
</code></pre>
<p>But mongod doesn't start. If I use mongod.exe (without using config file), it works perfectly </p>
<p><strong>UPDATE:</strong></p>
<p>My intention is simple: change default port to another port and only allow access from certain IP addresses.</p>
<p>I was using the configuration for Ubuntu. Thanks to Panda_Claus that pointed out the new configuration. </p>
<p>So I changed the configuration to </p>
<pre><code>net:
bindIp: 127.0.0.1,100.100.100.100
port: 3979
</code></pre>
<p>The problem is, when I start mongod with this configuration, it gets an error and then automatically exits:</p>
<pre><code>ERROR: listen(): bind() failed errno:10049 The requested address is not valid in its context. for socket: 100.100.100.100:3979
</code></pre>
<p>So how do I allow only localhost and a specific IP address (in this case 100.100.100.100) to connect to mongodb?</p>
<p><strong>UPDATE 2</strong></p>
<p>I used the configuration from maerics</p>
<pre><code>net:
bindIp: 127.0.0.1,192.168.10.104
port: 3979
storage:
dbPath: D:\mongodb\data\db
journal:
enabled: true
systemLog:
destination: file
path: D:\mongodb\data\log\mongodb.log
quiet: true
logAppend: true
</code></pre>
<p>Interestingly, using this, I can only connect to the db on the local machine; other LAN computers can't connect to</p>
<pre><code>192.168.10.104:3979.
</code></pre>
<p>However, if I remove the </p>
<pre><code>systemLog:
destination: file
path: D:\mongodb\data\log\mongodb.log
quiet: true
logAppend: true
</code></pre>
<p>other computers in LAN network are able connect to the database.</p> | 25,210,659 | 4 | 3 | null | 2014-08-08 18:34:56.053 UTC | 6 | 2021-06-17 19:45:57.4 UTC | 2014-08-08 19:31:54.04 UTC | null | 3,516,360 | null | 3,516,360 | null | 1 | 8 | mongodb | 42,533 | <h3>The config file must be <a href="http://docs.mongodb.org/manual/reference/configuration-options/#config-file-format" rel="noreferrer">valid YAML</a>.</h3>
<p>Try modifying the sample file provided with the documentation, for example:</p>
<pre class="lang-yaml prettyprint-override"><code>net:
bindIp: 127.0.0.1
port: 3979
storage:
dbPath: D:\mongodb\data\db
journal:
enabled: true
systemLog:
destination: file
path: D:\mongodb\data\log\mongodb.log
quiet: true
logAppend: true
</code></pre> |
31,394,171 | What was the rationale for making `return 0` at the end of `main` optional? | <p>Starting with the C99 standard, the compiler is required to generate the equivalent of a <code>return 0</code> or <code>return EXIT_SUCCESS</code> if no <em>return</em> is supplied at the end of <code>main</code>. There was also a corresponding and identical change to the C++ language standard around that same time. I am interested in the reasons for both and I guessed that it was unlikely they were entirely separate and unrelated changes. </p>
<p>My question is:</p>
<p><strong>What was the documented rationale for this change?</strong></p>
<p>An ideal answer would cite authoritative sources for both C and C++ which is why I have tagged the question with both languages.</p>
<p>Note that unlike the question <a href="https://stackoverflow.com/questions/2581993/what-the-reasons-for-against-returning-0-from-main-in-iso-c">What the reasons for/against returning 0 from main in ISO C++?</a>, I'm not asking for advice on whether to write <code>return 0</code> in my programs -- I'm asking why the language standards themselves were changed.</p>
<hr>
<p>To help understand the purpose for the question, here is a bit more of the context:</p>
<ol>
<li>Understanding why a change was made is helpful in deciding how to use it.</li>
<li>Rationale is frequently included within the standard itself. For example, the C90 standard includes many explanatory footnotes such as footnote 36 which starts, "The intent of this list..."</li>
</ol>
<p>I've studied the standards searching for the answer myself before I asked here, but did not find the answer. I've been asked to help write coding standards for both languages for a group of programmers and I wanted to make sure I understand why this feature exists so that I may accurately explain its use to others.</p> | 31,396,971 | 2 | 21 | null | 2015-07-13 21:55:39.253 UTC | 14 | 2015-07-14 15:56:42.863 UTC | 2017-05-23 12:26:23.613 UTC | null | -1 | null | 3,191,481 | null | 1 | 37 | c++|c|language-lawyer | 1,774 | <p>In <a href="http://www.knosof.co.uk/cbook/cbook.html">The New C Standard</a> section <a href="http://c0x.coding-guidelines.com/5.1.2.2.3.pdf">5.1.2.2.3 Program termination</a> the author <a href="http://www.informit.com/authors/bio/86F640F5-F526-4915-B28B-62689C48F793">Derek Jones</a> commentary on this lines from the C99 standard:</p>
<blockquote>
<p>reaching the } that terminates the main function returns a value of 0</p>
</blockquote>
<p>is:</p>
<blockquote>
<p>The standard finally having to bow to sloppy existing practices.</p>
</blockquote>
<p>Which indicates the rationale was to address poor programming practices with respect to explicitly returning a value from <code>main</code>. Prior to this the status returned was undefined.</p>
<p>He indicates that many implementations already implemented this even in C90, so the fact that this change already reflected common implementation also probably helped.</p> |
39,281,594 | ERROR 1698 (28000): Access denied for user 'root'@'localhost' | <p>I'm setting up a new server and keep running into this problem.</p>
<p>When I try to log into the MySQL database with the root user, I get the error:</p>
<blockquote>
<p>ERROR 1698 (28000): Access denied for user 'root'@'localhost'</p>
</blockquote>
<p>It doesn't matter if I connect through the terminal (SSH), through <a href="https://en.wikipedia.org/wiki/PhpMyAdmin" rel="noreferrer">phpMyAdmin</a> or a MySQL client, e.g., <a href="https://en.wikipedia.org/wiki/Navicat" rel="noreferrer">Navicat</a>. They all fail.</p>
<p>I looked in the <em>mysql.user</em> table and get the following:</p>
<pre class="lang-none prettyprint-override"><code>+------------------+-------------------+
| user | host |
+------------------+-------------------+
| root | % |
| root | 127.0.0.1 |
| amavisd | localhost |
| debian-sys-maint | localhost |
| iredadmin | localhost |
| iredapd | localhost |
| mysql.sys | localhost |
| phpmyadmin | localhost |
| root | localhost |
| roundcube | localhost |
| vmail | localhost |
| vmailadmin | localhost |
| amavisd | test4.folkmann.it |
| iredadmin | test4.folkmann.it |
| iredapd | test4.folkmann.it |
| roundcube | test4.folkmann.it |
| vmail | test4.folkmann.it |
| vmailadmin | test4.folkmann.it |
+------------------+-------------------+
</code></pre>
<p>As you can see, user <em>root</em> should have access.</p>
<p>The Server is quite simple, as I have tried to troubleshoot this for a while now.</p>
<p>It's running <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_16.04_LTS_.28Xenial_Xerus.29" rel="noreferrer">Ubuntu 16.04.1</a> LTS (Xenial Xerus) with Apache, MySQL and PHP, so that it can host websites, and iRedMail 0.9.5-1, so that it can host mail.</p>
<p>Log into the MySQL database works fine before I installed iRedMail. I also tried just installing iRedMail, but then root also doesn't work.</p>
<p>How can I fix my MySQL login problem or how can I install iRedMail over an existing MySQL install? And yes, I tried the <a href="https://code.google.com/archive/p/iredmail/wikis/Installation_Tips.wiki" rel="noreferrer">Installation Tips</a> and I can't find those variables in the configuration files.</p> | 42,742,610 | 21 | 3 | null | 2016-09-01 22:06:07.71 UTC | 317 | 2022-08-30 01:00:51.407 UTC | 2021-09-21 14:48:16.127 UTC | null | 1,707,353 | null | 3,562,385 | null | 1 | 569 | mysql|iredmail | 914,298 | <p>On some systems, like <a href="https://en.wikipedia.org/wiki/Ubuntu_%28operating_system%29" rel="noreferrer">Ubuntu</a>, MySQL is using the <a href="https://dev.mysql.com/doc/mysql-security-excerpt/5.5/en/socket-pluggable-authentication.html" rel="noreferrer">Unix auth_socket plugin</a> by default.</p>
<p>Basically it means that <em>db_users using it will be "authenticated" by <strong>the system user's credentials</strong></em>. You can see if your <code>root</code> user is set up like this by doing the following:</p>
<pre class="lang-none prettyprint-override"><code>sudo mysql -u root # I had to use "sudo" since it was a new installation
mysql> USE mysql;
mysql> SELECT User, Host, plugin FROM mysql.user;
+------------------+-----------------------+
| User | plugin |
+------------------+-----------------------+
| root | auth_socket |
| mysql.sys | mysql_native_password |
| debian-sys-maint | mysql_native_password |
+------------------+-----------------------+
</code></pre>
<p>As you can see in the query, the <code>root</code> user is using the <code>auth_socket</code> plugin.</p>
<p>There are two ways to solve this:</p>
<ol>
<li>You can set the <em>root</em> user to use the <code>mysql_native_password</code> plugin</li>
<li>You can create a new <code>db_user</code> with you <code>system_user</code> (recommended)</li>
</ol>
<p><strong>Option 1:</strong></p>
<pre class="lang-none prettyprint-override"><code>sudo mysql -u root # I had to use "sudo" since it was a new installation
mysql> USE mysql;
mysql> UPDATE user SET plugin='mysql_native_password' WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> exit;
sudo service mysql restart
</code></pre>
<p><strong>Option 2:</strong> (replace YOUR_SYSTEM_USER with the username you have)</p>
<pre class="lang-none prettyprint-override"><code>sudo mysql -u root # I had to use "sudo" since it was a new installation
mysql> USE mysql;
mysql> CREATE USER 'YOUR_SYSTEM_USER'@'localhost' IDENTIFIED BY 'YOUR_PASSWD';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'YOUR_SYSTEM_USER'@'localhost';
mysql> UPDATE user SET plugin='auth_socket' WHERE User='YOUR_SYSTEM_USER';
mysql> FLUSH PRIVILEGES;
mysql> exit;
sudo service mysql restart
</code></pre>
<p>Remember that if you use option #2 you'll have to connect to MySQL as your system username (<code>mysql -u YOUR_SYSTEM_USER</code>).</p>
<p><strong>Note:</strong> On some systems (e.g., <a href="https://en.wikipedia.org/wiki/Debian_version_history#Debian_9_(Stretch)" rel="noreferrer">Debian 9</a> (Stretch)) the 'auth_socket' plugin is called <a href="https://mariadb.com/kb/en/library/authentication-plugin-unix-socket/" rel="noreferrer">'unix_socket'</a>, so the corresponding SQL command should be: <code>UPDATE user SET plugin='unix_socket' WHERE User='YOUR_SYSTEM_USER';</code></p>
<p>From <a href="https://stackoverflow.com/questions/39281594/error-1698-28000-access-denied-for-user-rootlocalhost#comment92462504_42742610">andy's comment</a> it seems that MySQL 8.x.x updated/replaced the <code>auth_socket</code> for <code>caching_sha2_password</code>. I don't have a system setup with MySQL 8.x.x to test this. However, the steps above should help you to understand the issue. Here's the reply:</p>
<p><em>One change as of MySQL 8.0.4 is that the new default authentication plugin is 'caching_sha2_password'. The new 'YOUR_SYSTEM_USER' will have this authentication plugin and you can log in from the Bash shell now with "mysql -u YOUR_SYSTEM_USER -p" and provide the password for this user on the prompt. There isn’t any need for the "UPDATE user SET plugin" step.</em></p>
<p><em>For the 8.0.4 default authentication plugin update, see</em> <em><strong><a href="https://mysqlserverteam.com/mysql-8-0-4-new-default-authentication-plugin-caching_sha2_password/" rel="noreferrer">MySQL 8.0.4: New Default Authentication Plugin: caching_sha2_password</a></strong></em>.</p> |
39,173,345 | Unity with ASP.NET Core and MVC6 (Core) | <p><strong>Update 09.08.2018</strong><br>
Unity is being developed <a href="https://github.com/unitycontainer/container" rel="nofollow noreferrer">here</a> but I haven't had the time to test how it plays with the ASP.NET Core framework.</p>
<p><strong>Update 15.03.2018</strong><br>
This solution is for the specific problem of using ASP.NET Core v1 with Unity while using the .NET Framework 4.5.2 <b>NOT</b> the .NET Core Framework. I had to use this setup since I needed some .Net 4.5.2 DLLs but for anyone starting afresh I would not recommend this approach. Also Unity is not being developed any further (to my knowlage) so I would recommend using the Autofac Framework for new projects. See this <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection#replacing-the-default-services-container" rel="nofollow noreferrer">Post</a> for more info on how to do that.</p>
<p><strong>Intro</strong><br>
I am building a Web Application using ASP.NET with MVC. This Application depends on certain services (a WCF Service a Datastore service etc). Now to keep things nice and decoupled I want to use a DI (Dependecy Injection) Framework, specifically Unity.<br><br>
<strong>Initial Research</strong><br>
I found this <a href="http://www.global-webnet.com/blog/post/2015/08/24/Dependency-Injection-in-ASPNET-vNext-(adding-Unity-container).aspx" rel="nofollow noreferrer">blog post</a> but sadly it's not working. The idea though is nice.<br> It basically says that you should not register all the services registered in the ServiceCollection into your own container, but rather reference the default ServiceProvider.<br> So, if something needs to be resolved, the default ServiceProvider is called, and in case it has no resolution the type will be resolved using your custom UnityContainer.<br>
<br>
<strong>The Problems</strong>
<br>
MVC always tries to resolve the Controller with the default ServiceProvider. <br>Also, I noticed that even if the Controller were resolved correctly, I could never "mix" Dependencies. Now, if I want to use one of my Services but also an IOptions interface from ASP, the class can never be resolved because neither of those two containers has resolutions for both types.
<br><br>
<strong>What I need</strong>
<br>
So to recap I need the following things:</p>
<ul>
<li>A setup where I dont need to copy ASP.NET Dependencies into my UnityContainer</li>
<li>A container which can resolve my MVC Controllers</li>
<li>A container which can resolve "mixed" Dependencies</li>
</ul>
<p><strong>EDIT:</strong><br>
So the question is: how can I achieve these points?</p>
<p><strong>Environment</strong>
<br>project.json:<br>
<a href="https://i.stack.imgur.com/OVg2F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OVg2F.png" alt="enter image description here" /></a></p> | 39,173,346 | 2 | 5 | null | 2016-08-26 19:19:06.063 UTC | 19 | 2021-01-11 10:42:42.893 UTC | 2020-08-22 21:53:09.97 UTC | null | 4,925,121 | null | 3,261,657 | null | 1 | 25 | c#|asp.net-core-mvc|unity-container | 31,550 | <p>So after some research I came up with the following solutions to my problems:
<br><br><strong>Use Unity with ASP</strong>
<br>To be able to use Unity with ASP I needed a custom IServiceProvider (<a href="https://docs.asp.net/en/latest/fundamentals/dependency-injection.html#replacing-the-default-services-container">ASP Documentation</a>) so I wrote a wrapper for the IUnityContainer which looks like this</p>
<pre><code>public class UnityServiceProvider : IServiceProvider
{
private IUnityContainer _container;
public IUnityContainer UnityContainer => _container;
public UnityServiceProvider()
{
_container = new UnityContainer();
}
#region Implementation of IServiceProvider
/// <summary>Gets the service object of the specified type.</summary>
/// <returns>A service object of type <paramref name="serviceType" />.-or- null if there is no service object of type <paramref name="serviceType" />.</returns>
/// <param name="serviceType">An object that specifies the type of service object to get. </param>
public object GetService(Type serviceType)
{
//Delegates the GetService to the Containers Resolve method
return _container.Resolve(serviceType);
}
#endregion
}
</code></pre>
<p>Also I had to change the Signature of the ConfigureServices method in my Startup class from this:</p>
<pre><code>public void ConfigureServices(IServiceCollection services)
</code></pre>
<p>to this:</p>
<pre><code>public IServiceProvider ConfigureServices(IServiceCollection services)
</code></pre>
<p>Now I can return my custom IServiceProvider and it will be used instead of the default one.<br>The full ConfigureServices Method is shown in the Wire up section at the bottom.
<br><br><strong>Resolving Controllers</strong><br/>
I found <a href="https://simpleinjector.org/blog/2016/07/working-around-the-asp-net-core-di-abstraction/">this blog post</a>. From it I learned that MVC uses an IControllerActivator interface to handle Controller instantiation. So I wrote my own which looks like this:</p>
<pre><code>public class UnityControllerActivator : IControllerActivator
{
private IUnityContainer _unityContainer;
public UnityControllerActivator(IUnityContainer container)
{
_unityContainer = container;
}
#region Implementation of IControllerActivator
public object Create(ControllerContext context)
{
return _unityContainer.Resolve(context.ActionDescriptor.ControllerTypeInfo.AsType());
}
public void Release(ControllerContext context, object controller)
{
//ignored
}
#endregion
}
</code></pre>
<p>Now, when a controller class is activated, it will be instantiated by my UnityContainer. Therefore my UnityContainer must know how to resolve any controller (see the short sketch below)!</p>
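<p>For example (a minimal sketch of my own; <code>ITest</code>/<code>Test</code> are just the placeholder service and implementation that I register in the wire-up section at the bottom), the controller and everything it depends on have to be registered with the UnityContainer so the activator can build it:</p>
<pre><code>// Unity now builds the controllers, so it must know about the controller type
// and about every dependency the controller pulls in:
container.RegisterType<ITest, Test>();
container.RegisterType<HomeController>();
</code></pre>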
<p><strong>Next Problem: Use the default IServiceProvider</strong><br/>
Now, if I register services such as MVC in ASP.NET, I would normally do it like this:</p>
<pre><code>services.AddMvc();
</code></pre>
<p>Now, if I use a UnityContainer, none of the MVC dependencies can be resolved, because they aren't registered with it. So I can either register them all myself (the way Autofac does it) or create a UnityContainerExtension. I opted for the extension and came up with the following two classes:
<br><strong>UnityFallbackProviderExtension</strong></p>
<pre><code>public class UnityFallbackProviderExtension : UnityContainerExtension
{
#region Const
    /// Name used to resolve the default IServiceProvider inside the UnityFallbackProviderStrategy class
public const string FALLBACK_PROVIDER_NAME = "UnityFallbackProvider";
#endregion
#region Vars
// The default Service Provider so I can Register it to the IUnityContainer
private IServiceProvider _defaultServiceProvider;
#endregion
#region Constructors
/// <summary>
/// Creates a new instance of the UnityFallbackProviderExtension class
/// </summary>
/// <param name="defaultServiceProvider">The default Provider used to fall back to</param>
public UnityFallbackProviderExtension(IServiceProvider defaultServiceProvider)
{
_defaultServiceProvider = defaultServiceProvider;
}
#endregion
#region Overrides of UnityContainerExtension
/// <summary>
/// Initializes the container with this extension's functionality.
/// </summary>
/// <remarks>
/// When overridden in a derived class, this method will modify the given
/// <see cref="T:Microsoft.Practices.Unity.ExtensionContext" /> by adding strategies, policies, etc. to
    /// install its functions into the container.</remarks>
protected override void Initialize()
{
// Register the default IServiceProvider with a name.
// Now the UnityFallbackProviderStrategy can Resolve the default Provider if needed
Context.Container.RegisterInstance(FALLBACK_PROVIDER_NAME, _defaultServiceProvider);
// Create the UnityFallbackProviderStrategy with our UnityContainer
var strategy = new UnityFallbackProviderStrategy(Context.Container);
            // Add the UnityFallbackProviderStrategy so it is executed at the PreCreation lifecycle hook.
            // PreCreation, because a type that isn't registered with the IUnityContainer would otherwise cause an exception later.
            // This way, when the IUnityContainer "magically" gets an instance of a type, it accepts it and moves on.
Context.Strategies.Add(strategy, UnityBuildStage.PreCreation);
}
#endregion
}
</code></pre>
<p><br><strong>UnityFallbackProviderStrategy</strong>:</p>
<pre><code>public class UnityFallbackProviderStrategy : BuilderStrategy
{
private IUnityContainer _container;
public UnityFallbackProviderStrategy(IUnityContainer container)
{
_container = container;
}
#region Overrides of BuilderStrategy
/// <summary>
/// Called during the chain of responsibility for a build operation. The
/// PreBuildUp method is called when the chain is being executed in the
/// forward direction.
/// </summary>
/// <param name="context">Context of the build operation.</param>
public override void PreBuildUp(IBuilderContext context)
{
NamedTypeBuildKey key = context.OriginalBuildKey;
// Checking if the Type we are resolving is registered with the Container
if (!_container.IsRegistered(key.Type))
{
            // If not, we first get our default IServiceProvider and try to resolve the type with it.
            // The resolved instance is stored in the Existing property of the IBuilderContext,
            // which tells Unity that it doesn't need to build the type itself.
context.Existing = _container.Resolve<IServiceProvider>(UnityFallbackProviderExtension.FALLBACK_PROVIDER_NAME).GetService(key.Type);
}
// Otherwise we do the default stuff
base.PreBuildUp(context);
}
#endregion
}
</code></pre>
<p>Now, if my UnityContainer has no registration for something, it simply asks the default provider for it; the short sketch after the list below shows this in action.
<br>I learned all of this from several different articles:</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/dn178462(v=pandp.30).aspx">MSDN Unity article</a></li>
<li><a href="http://www.mikevalenty.com/auto-mocking-unity-container-extension/">Auto-Mocking Unity Container Extension</a></li>
<li><a href="http://mark-dot-net.blogspot.de/2009/09/custom-object-factory-unity-extension.html">Custom Object Factory Unity Extension</a></li>
</ul>
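<p>To show the fallback in action, here is a rough usage sketch of my own (it mirrors the wire-up section at the bottom; <code>defaultProvider</code> is the result of <code>services.BuildServiceProvider()</code>):</p>
<pre><code>// Attach the fallback extension so unknown types are looked up in the default provider
container.AddExtension(new UnityFallbackProviderExtension(defaultProvider));

// Registered directly with Unity:
container.RegisterType<ITest, Test>();
var test = container.Resolve<ITest>();                   // built by Unity itself

// Never registered with Unity, but still resolvable,
// because the strategy falls back to the ASP.NET provider:
var options = container.Resolve<IOptions<WcfOptions>>(); // comes from services.AddOptions()/Configure<WcfOptions>()
</code></pre>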
<p>The nice thing about this approach is that I can now also "mix" dependencies. If I need one of my own services AND an IOptions interface from ASP.NET, my UnityContainer resolves all of those dependencies and injects them into my controller. The only thing to remember is that as soon as a controller uses one of my own dependencies, I have to register that controller class with Unity, because the default IServiceProvider can no longer resolve the controller's dependencies.</p>
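<p>As a concrete (hypothetical) example of such a "mixed" controller, using the <code>ITest</code> and <code>WcfOptions</code> types that are registered in the wire-up section below:</p>
<pre><code>using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class HomeController : Controller
{
    private readonly ITest _test;            // registered with my UnityContainer
    private readonly WcfOptions _wcfOptions; // configured via services.Configure<WcfOptions>(...)

    // Unity builds the controller: ITest comes from its own registrations,
    // IOptions<WcfOptions> is fetched from the default provider through the fallback extension.
    public HomeController(ITest test, IOptions<WcfOptions> wcfOptions)
    {
        _test = test;
        _wcfOptions = wcfOptions.Value;
    }
}
</code></pre>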
<p><strong>Finally: Wire up</strong><br>
Now, in my project I use several services (ASP.NET options, MVC with options). To make it all work, my ConfigureServices method now looks like this:</p>
<pre><code>public IServiceProvider ConfigureServices(IServiceCollection services)
{
// Add all the ASP services here
// #region ASP
services.AddOptions();
services.Configure<WcfOptions>(Configuration.GetSection("wcfOptions"));
var globalAuthFilter = new AuthorizationPolicyBuilder()
.RequireAuthenticatedUser()
.Build();
services.AddMvc(options => { options.Filters.Add(new AuthorizeFilter(globalAuthFilter)); })
.AddJsonOptions
(
options => options.SerializerSettings.ContractResolver = new DefaultContractResolver()
);
// #endregion ASP
// Creating the UnityServiceProvider
var unityServiceProvider = new UnityServiceProvider();
IUnityContainer container = unityServiceProvider.UnityContainer;
// Adding the Controller Activator
// Caution!!! Do this before you Build the ServiceProvider !!!
services.AddSingleton<IControllerActivator>(new UnityControllerActivator(container));
//Now build the Service Provider
var defaultProvider = services.BuildServiceProvider();
// Configure UnityContainer
// #region Unity
//Add the Fallback extension with the default provider
container.AddExtension(new UnityFallbackProviderExtension(defaultProvider));
// Register custom Types here
container.RegisterType<ITest, Test>();
container.RegisterType<HomeController>();
container.RegisterType<AuthController>();
// #endregion Unity
return unityServiceProvider;
}
</code></pre>
<p>Since I learned most of what I know about DI in the past week, I hope I didn't break any big principle or pattern; if so, please tell me!</p>