Why does this jQuery code not work?
Why doesn't the following jQuery code work?
```
$(function() {
    var regex = /\?fb=[0-9]+/g;
    var input = window.location.href;
    var scrape = input.match(regex); // returns ?fb=4
    var numeral = /\?fb=/g;
    scrape.replace(numeral,'');
    alert(scrape); // Should alert the number?
});
```
Basically I have a link like this:
```
http://foo.com/?fb=4
```
How do I first locate the `?fb=4` and then retrieve the number only?
|
Consider using the following code instead:
```
$(function() {
    var matches = window.location.href.match(/\?fb=([0-9]+)/i);
    if (matches) {
        var number = matches[1];
        alert(number); // will alert 4!
    }
});
```
Test an example of it here: <http://jsfiddle.net/GLAXS/>
The regular expression is only slightly modified from what you provided. The `g`lobal flag was removed, as you're not going to have multiple `fb=`s to match (otherwise your URL would be invalid!). The case-`i`nsensitive flag was added to match `FB=` as well as `fb=`.
The number is wrapped in parentheses to denote a [capturing group](http://www.regular-expressions.info/brackets.html), which is what allows us to use `match`.
If [`match`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/String/match) matches the regular expression we specify, it'll return the matched string in the first array element. The remaining elements contain the value of each capturing group we define.
In our running example, the string `?fb=4` is matched and so becomes the first element of the returned array. The only capturing group we have defined is the number matcher, which is why `4` is contained in the second element.
|
How does upsampling in a Fully Convolutional Network work?
I read several posts/articles and have some doubts about the mechanism of upsampling after the CNN downsampling.
I took the 1st answer from this question:
<https://www.quora.com/How-do-fully-convolutional-networks-upsample-their-coarse-output>
I understood that, similar to the normal convolution operation, the "upsampling" also uses kernels which need to be trained.
Question 1: if the "spatial information" is already lost during the first stages of the CNN, how can it be reconstructed at all?
Question 2: Why is it that "Upsampling from a small (coarse) featuremap deep in the network has good semantic information but bad resolution. Upsampling from a larger feature map closer to the input, will produce better detail but worse semantic information"?
|
**Question #1**
Upsampling doesn't (and cannot) reconstruct any lost information. Its role is to bring the resolution back up to that of an earlier layer.
Theoretically, we could eliminate the down/up sampling layers altogether. However, to reduce the number of computations, we can downsample the input before a layer and then upsample its output.
Therefore, the sole purpose of down/up sampling layers is to reduce computations in each layer, while keeping the dimensions of the input/output as before.
You might argue that the down-sampling might cause information loss. That is always a possibility, but remember that the role of a CNN is essentially to extract "useful" information from the input and reduce it into a smaller dimension.
**Question #2**
As we go from the input layer in CNN to the output layer, the dimension of data generally decreases while the semantic and extracted information hopefully increases.
Suppose we have a CNN for image classification. In such a CNN, the early layers usually extract the basic shapes and edges in the image. The next layers detect more complex concepts like corners and circles. You can imagine that the very last layers might have nodes that detect very complex features (like the presence of a person in the image).
So up-sampling from a large feature map close to the input produces better detail but has lower semantic information compared to the last layers. Conversely, the last layers generally have lower dimension, hence their *resolution* is worse compared to the early layers.
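To make the mechanics concrete, here is a minimal NumPy sketch (my own illustration, not from the original posts) contrasting a fixed, parameter-free upsampling rule with the kernel-based "transposed convolution" style of upsampling whose kernels are trained. Note that neither recovers lost information; they only redistribute what is already in the coarse map:
```
import numpy as np

def nearest_neighbor_upsample(x, factor=2):
    """Fixed rule: repeat each value along both spatial axes; nothing to train."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def transposed_conv_upsample(x, kernel, stride=2):
    """Naive 'transposed convolution': every input pixel scatters a weighted
    copy of the kernel into a larger output map. The kernel is what gets
    trained in learned upsampling layers."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h * stride + kh - stride, w * stride + kw - stride))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])           # a tiny "deep" feature map
print(nearest_neighbor_upsample(coarse))  # 4x4, values merely repeated
print(transposed_conv_upsample(coarse, np.ones((2, 2))))  # 4x4, kernel-weighted
```
Both functions take a 2-D feature map and produce a larger one; only the second has anything to learn.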
|
Select newest records that have distinct Name column
I did search around and I found this
[SQL selecting rows by most recent date with two unique columns](https://stackoverflow.com/questions/189213/sql-selecting-rows-by-most-recent-date)
Which is so close to what I want but I can't seem to make it work.
I get an error: "Column 'ID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause."
I want the newest row by date for each distinct Name.
```
Select ID,Name,Price,Date
From table
Group By Name
Order By Date ASC
```
Here is an example of what I want
Table
| ID | Name | Price | Date |
| --- | --- | --- | --- |
| 0 | A | 10 | 2012-05-03 |
| 1 | B | 9 | 2012-05-02 |
| 2 | A | 8 | 2012-05-04 |
| 3 | C | 10 | 2012-05-03 |
| 4 | B | 8 | 2012-05-01 |
desired result
| ID | Name | Price | Date |
| --- | --- | --- | --- |
| 2 | A | 8 | 2012-05-04 |
| 3 | C | 10 | 2012-05-03 |
| 1 | B | 9 | 2012-05-02 |
I am using Microsoft SQL Server 2008
|
```
SELECT ID, Name, Price, Date
FROM temp t1
WHERE date = (SELECT Max(date) FROM temp WHERE t1.name = temp.name)
ORDER BY date DESC
```
Here is a [SQL Fiddle](http://sqlfiddle.com/#!18/6a8e3/1) with a demo of the above
---
Or as Conrad points out you can use an INNER JOIN (another [SQL Fiddle](http://sqlfiddle.com/#!18/6a8e3/384) with a demo) :
```
SELECT t1.ID, t1.Name, t1.Price, t1.Date
FROM temp t1
INNER JOIN
(
    SELECT Max(date) date, name
    FROM temp
    GROUP BY name
) AS t2
    ON t1.name = t2.name
    AND t1.date = t2.date
ORDER BY date DESC
```
|
Running xUnit tests on Teamcity using async methods
I made the following xUnit test, which uses an HttpClient to call a status API method on a web server.
```
[Fact]
public void AmIAliveTest()
{
    var server = TestServer.Create<Startup>();
    var httpClient = server.HttpClient;
    var response = httpClient.GetAsync("/api/status").Result;
    response.StatusCode.Should().Be(HttpStatusCode.OK);
    var resultString = response.Content.ReadAsAsync<string>().Result;
    resultString.Should().Be("I am alive!");
}
```
This test is running fine locally. But when I commit the code and try to run the same test on the TeamCity build server, it runs forever. I even have to kill the xunit runner process because stopping the build will not stop this process.
However when I write the test like this
```
[Fact]
public async void AmIAliveTest()
{
    var server = TestServer.Create<Startup>();
    var httpClient = server.HttpClient;
    var response = await httpClient.GetAsync("/api/status");
    response.StatusCode.Should().Be(HttpStatusCode.OK);
    var resultString = await response.Content.ReadAsAsync<string>();
    resultString.Should().Be("I am alive!");
}
```
It runs fine locally and also on TeamCity.
My concern now is that I'll forget to write a test in the second form and that once in a while the TeamCity build will hang.
Can anybody explain why xUnit running on the TeamCity build server does not run the test correctly in the first variant? And is there a solution for this?
|
>
> Can anybody explain to me why xUnit running on the teamcity buildserver is not running the test correctly in the first place?
>
>
>
First, I'd check your xUnit versions - you should be running the recently-released 2.0. I suspect your local version may be out of date.
The core problem is in this line:
```
var resultString = response.Content.ReadAsAsync<string>().Result;
```
I suspect you're running into a [deadlock situation](http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html) that I describe on my blog. `HttpClient` has some methods on some platforms that do not properly use `ConfigureAwait(false)`, and is thus subject to this deadlock. xUnit 2.0 installs a single-threaded `SynchronizationContext` into all its unit tests, which provides the other half of the deadlock scenario.
The proper solution is to replace `Result` with `await`, and to change the return type of your unit test method from `void` to `Task`.
|
Echo implemented in Java
I implemented a simple version of the [echo(1)](https://www.freebsd.org/cgi/man.cgi?query=echo&sektion=1&manpath=freebsd-release-ports) command utility. The program works as described in the man page: it writes to the standard output all command line arguments, separated by single spaces and ended with a newline. It can process the option `-n`, which suppresses the newline.
My implementation is not complete, because it doesn't interpret common backslash-escaped characters (for example `\n`, `\c`, and so forth). I used a `StringBuilder` object to build the output string, because I'm not sure that the standard output is buffered. I also added some checks so the program can work without any arguments specified.
You can compile the program with `javac JEcho.java` and run it with `java JEcho <...>`.
**JEcho.java**
```
/**
 * JEcho writes any command line argument to the standard output; each argument
 * is separated by a single space and the output ends with a newline (you can
 * specify '-n' to suppress the newline).
 *
 * This program doesn't interpret common backslash-escaped characters (for
 * example '\n' or '\c').
 */
public class JEcho {
    public static void main(String[] args) {
        boolean printNewline = true;
        int posArgs = 0;
        if (args.length > 0 && args[0].equals("-n")) {
            printNewline = false;
            posArgs = 1;
        }

        StringBuilder outputBuilder = new StringBuilder();
        for (; posArgs < args.length; posArgs++) {
            outputBuilder.append(args[posArgs]);
            outputBuilder.append(" "); // Separator.
        }

        // Remove the trailing whitespace at the end.
        int outputLength = outputBuilder.length();
        if (outputLength > 0)
            outputBuilder.deleteCharAt(outputLength - 1);

        String output = outputBuilder.toString();
        if (printNewline)
            System.out.println(output);
        else
            System.out.print(output);
    }
}
```
|
If you're using Java 8, you can use [`StringJoiner`](https://docs.oracle.com/javase/8/docs/api/java/util/StringJoiner.html).
```
import java.util.StringJoiner;

/**
 * JEcho writes any command line argument to the standard output; each argument
 * is separated by a single space and the output ends with a newline (you can
 * specify '-n' to suppress the newline).
 *
 * This program doesn't interpret common backslash-escaped characters (for
 * example '\n' or '\c').
 */
public class JEcho {
    public static void main(String[] args) {
        boolean printNewline = true;
        int posArgs = 0;
        if (args.length > 0 && args[0].equals("-n")) {
            printNewline = false;
            posArgs = 1;
        }

        StringJoiner outputBuilder = new StringJoiner(" ");
        for (; posArgs < args.length; posArgs++) {
            outputBuilder.add(args[posArgs]);
        }

        String output = outputBuilder.toString();
        if (printNewline)
            System.out.println(output);
        else
            System.out.print(output);
    }
}
```
|
Why is this specs2 test using Mockito passing?
Suppose I had this interface and class:
```
abstract class SomeInterface {
  def doSomething: Unit
}

class ClassBeingTested(interface: SomeInterface) {
  def doSomethingWithInterface: Unit = {
    Unit
  }
}
```
Note that the doSomethingWithInterface method does not actually do anything with the interface.
I create a test for it like this:
```
import org.specs2.mutable._
import org.specs2.mock._
import org.mockito.Matchers
import org.specs2.specification.Scope

trait TestEnvironment extends Scope with Mockito {
  val interface = mock[SomeInterface]
  val test = new ClassBeingTested(interface)
}

class ClassBeingTestedSpec extends Specification {
  "The ClassBeingTested" should {
    "#doSomethingWithInterface" in {
      "calls the doSomething method of the given interface" in new TestEnvironment {
        test.doSomethingWithInterface
        there was one(interface).doSomething
      }
    }
  }
}
```
This test passes. Why? Am I setting it up wrong?
When I get rid of the scope:
```
class ClassBeingTestedSpec extends Specification with Mockito {
  "The ClassBeingTested" should {
    "#doSomethingWithInterface" in {
      "calls the doSomething method of the given interface" in {
        val interface = mock[SomeInterface]
        val test = new ClassBeingTested(interface)
        test.doSomethingWithInterface
        there was one(interface).doSomething
      }
    }
  }
}
```
The test fails as expected:
```
[info] x calls the doSomething method of the given interface
[error] The mock was not called as expected:
[error] Wanted but not invoked:
[error] someInterface.doSomething();
```
What is the difference between these two tests? Why does the first one pass when it should fail? Is this not an intended use of Scopes?
|
When you mix the `Mockito` trait into another trait, you can create expectations like `there was one(interface).doSomething`. If such an expression fails, it only returns a `Result`; it doesn't throw an `Exception`. It then gets lost in a `Scope` because it is just a "pure" value inside the body of a trait.
However, if you mix the `Mockito` trait into a `mutable.Specification`, then an exception will be thrown on a failure. This is because the `mutable.Specification` class specifies that there should be `ThrownExpectations`, by mixing in that trait.
So if you want to create a trait extending both `Scope` and `Mockito`, you can either:
1. create the trait from inside the specification and not have it extend Mockito:
```
class MySpec extends mutable.Specification with Mockito {
  trait TestEnvironment extends Scope {
    val interface = mock[SomeInterface]
    val test = new ClassBeingTested(interface)
  }
  ...
}
```
2. create trait and specification as you do, but mix-in `org.specs2.execute.ThrownExpectations`
```
trait TestEnvironment extends Scope with Mockito with ThrownExpectations {
  val interface = mock[SomeInterface]
  val test = new ClassBeingTested(interface)
}

class MySpec extends mutable.Specification with Mockito {
  ...
}
```
|
MongoDB atomic "findOrCreate": findOne, insert if nonexistent, but do not update
As the title says, I want to perform a find (one) for a document, by \_id, and if it doesn't exist, have it created; then, whether it was found or created, have it returned in the callback.
I don't want to update it if it exists, as I've read findAndModify does. I have seen many other questions on Stack Overflow regarding this but, again, I don't wish to update anything.
I am unsure whether the creation (when it doesn't exist) is actually the update everyone is talking about; it's all so confuzzling :(
|
Beginning with MongoDB 2.4, it's no longer necessary to rely on a unique index (or any other workaround) for atomic `findOrCreate` like operations.
This is thanks to [the `$setOnInsert` operator](http://docs.mongodb.org/manual/reference/operator/setOnInsert/#op._S_setOnInsert) new to 2.4, which allows you to specify updates which should only happen when inserting documents.
This, combined with the `upsert` option, means you can use `findAndModify` to achieve an atomic `findOrCreate`-like operation.
```
db.collection.findAndModify({
    query: { _id: "some potentially existing id" },
    update: {
        $setOnInsert: { foo: "bar" }
    },
    new: true,   // return new doc if one is upserted
    upsert: true // insert the document if it does not exist
})
```
As `$setOnInsert` only affects documents being inserted, if an existing document is found, no modification will occur. If no document exists, it will upsert one with the specified \_id and then apply the insert-only `$setOnInsert` values. In both cases, the document is returned.
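If you are driving this from Python, a rough pymongo equivalent of the shell call above might look like the following sketch (database, collection, and field names are placeholders for illustration):
```
from pymongo import MongoClient, ReturnDocument

client = MongoClient()                 # assumes a local mongod
collection = client.mydb.mycollection  # placeholder names

doc = collection.find_one_and_update(
    {"_id": "some potentially existing id"},
    {"$setOnInsert": {"foo": "bar"}},      # only applied on insert
    upsert=True,                           # insert if no document matches
    return_document=ReturnDocument.AFTER,  # return the new doc if upserted
)
print(doc)
```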
|
Web API design tips
I am currently developing a very simple web service and thought I could write an API for it, so that when I decide to expand to new platforms I would only have to code the parser application. That said, the API isn't meant for other developers but for me, though I won't restrict access to it, so anyone can build on it.
Then I thought I could even run the website itself through this API, for various reasons like lower bandwidth consumption (HTML generated in the browser) and client-side caching. Being AJAX-heavy seemed like an even bigger reason to do so.
The layout looks like this:
```
Server (database, programming logic)
|
API (handles user reads/writes)
|
Client application (the website, browser extensions, desktop app, mobile apps)
|
Client cache (further reduces server reads)
```
After the introduction here are my questions:
1. Is this a good use of an API?
2. Is it a good idea to run the whole website through the API?
3. What choices for safe authentication do I have, using the API (and for some reason I prefer not to use HTTPS)?
**EDIT**
Additional questions:
1. Are there any alternative approaches I haven't considered?
2. What are some potential issues I haven't accounted for that may arise using this approach?
|
First things first.
Asking if a design (or in fact anything) is "good" depends on how you define "goodness". Typical criteria are performance, maintainability, scalability, testability, reusability etc. It would help if you could add some of that context.
Having said that...
*Is this good use of API*
It's usually a good idea to separate out your business logic from your presentation logic and your data persistence logic. Your design does that, and therefore I'd be happy to call it "good". You might look at a formal design pattern to do this - Model View Controller is probably the current default, esp. for web applications.
*Is it a good idea to run the whole website through the API*
Well, that depends on the application. It's totally possible to write an application entirely in Javascript/Ajax, but there are browser compatibility issues (esp. for older browsers), and you have to build support for things users commonly expect from web applications, like deep links and search engine friendliness. If you have a well-factored API, you can do some of the page generation on the server, if that makes it easier.
*What choices for safe authentication do I have, using the API (and for some reason I prefer not to use HTTPS)*
Tricky one - with this kind of app, you have to distinguish between authenticating the user, and authenticating the application. For the former, OpenID or OAuth are probably the dominant solutions; for the latter, have a look at how Google requires you to sign up to use their Maps API.
In most web applications, HTTPS is not used for authentication (proving the current user is who they say they are), but for encryption. The two are related, but by no means equivalent...
*Any alternative approaches I haven't considered*
Maybe this fits more under question 5 - but in my experience, API design is a rather esoteric skill - it's hard for an API designer to be able to predict exactly what the client of the API is going to need. I would seriously consider writing the application without an API for your first client platform, and factor out the API later - that way, you build only what you need in the first release.
*What are some potential issues I haven't accounted for that may arise using this approach*
Versioning is a big deal with APIs - once you've created an interface, you can almost never change it, especially with multiple clients that you don't control. I'd build versioning in as a first class concept - with RESTful APIs, you can do this as part of the URL.
|
Typescript: conflicting namespaces warning on Angular 9 project compilation with ng-packagr
I am unsure what is really to blame for this issue. I think it's Typescript, but it could be ng-packagr or Angular. It only started when I updated to Angular 9.
Here is the message I get on my production build...
```
WARNING: Conflicting namespaces: dist/web-apps-shared/esm2015/public_api.js re-exports 'ɵ0' from both dist/web-apps-shared/esm2015/lib/api-applications/reducers.js and dist/web-apps-shared/esm2015/lib/account-codes/reducers.js (will be ignored)
```
Here is one of the sources that is causing this...
```
export const selectTotalAccountCodes = createSelector(selectSharedAccountCodeState,
    (state: SharedAccountCodeState) => state.totalItems);
```
The compiler for some reason takes the function parameter and assigns it to a `const` and then exports it like so...
```
const ɵ0 = (state) => state.totalItems;
export const selectTotalAccountCodes = createSelector(selectSharedAccountCodeState, ɵ0);
export { ɵ0 };
```
The question I have is: why does `ɵ0` need to be exported? It is only used internally in this file. Am I missing something? Should I worry about this? It doesn't seem to cause an issue when consuming the library that is built with this code.
|
I got the same warning while updating to Angular 9. Looking on the web for info/solutions, I also found this Angular issue page <https://github.com/angular/angular/issues/33668> (from 11/2019, so 3 months ago), where they say that it is an Ivy issue, something related to "export \*".
This is strange, since I need to publish to npm and the build recommendation says to disable Ivy, so I had disabled it (`angularCompilerOptions.enableIvy: false` in tsconfig.lib.json); setting enableIvy to true instead makes the warning disappear.
So I tried this: while keeping enableIvy set to false in tsconfig.lib.json, in the public-api.ts I modified the "export \*", replacing the "\*" with all the exported objects, one by one: the warning is gone and the library is working.
But I really don't know if this is a good fix or if it is better just to live with the warnings.
|
Histogram/distribution fitting for this dataset with unequal and open-ended intervals?
I have this income distribution data for various groups:
<https://docs.google.com/spreadsheet/ccc?key=0Akwg3n_e05cCdEdtT0VZYU5keW5DVkNoNmpBWmdzeUE>
As you can see, I have intervals/bins with varying widths. I also have an open-ended interval and the mean income in that open interval varies a lot between groups.
**For pedagogical purposes, I'd like to be able to calculate how many people and/or percent is in a given interval, given an equal bin width.**
Example:
On an x-axis from 0-2000 with a bin width of 10, I'd like to be able to say how many people are in the 590-600 group.
1) Is this doable?
2) Do I need to fit a distribution to a histogram or how do I do it?
2b) What would you base this histogram / distribution on (mean, median etc.)? (As mentioned, the mean in the open-ended interval varies a lot between groups, something I'd like to take into consideration = show truthfully graphically.)
3) Can you please explain this in simple mathematical operations based on the data given?
|
There are lots of possible data sets that could generate these summary bins, so it's impossible to be exact, but you can make reasonable guesses.
One way to get subinterval estimates is to create a function that gives the number of people at each income level. The easiest, and perhaps the best (simplest assumptions), is to connect known points and interpolate between them. You don't really have known points, but I used `(x = median, y = intervalCount/intervalWidth)`. There's not much difference between the mean and median in this set, which suggests the data values are pretty well-behaved in each interval.
Once you have such a function, you can integrate it between any two points to get any subinterval counts.
*(figure: piecewise-linear interpolation of the binned counts)*
I left out the 0-0 interval because its value is literally off the chart, and the 1000+ interval because it has no real width.
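In code, the simplest version of this interpolate-and-integrate scheme might look like the sketch below (Python with SciPy; the bin numbers are invented placeholders, not the spreadsheet's values):
```
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad

# (lo, hi, count, median) per bin -- illustrative numbers only
bins = [
    (1,   99,  500,  60),
    (100, 199, 800, 140),
    (200, 399, 900, 280),
    (400, 699, 300, 520),
    (700, 999, 100, 800),
]

# One known point per bin: x = median, y = people per unit of income
x = [m for (_, _, _, m) in bins]
y = [c / (hi - lo + 1) for (lo, hi, c, _) in bins]

density = interp1d(x, y, kind="linear", bounds_error=False, fill_value=0.0)

# Estimated number of people in the 590-600 income range:
count_590_600, _ = quad(density, 590, 600)
print(round(count_590_600, 1))
```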
Since the data is obviously not any traditional distribution, a local smoother is a decent way to smooth it out. Here's a spline smoother:
*(figure: spline-smoothed estimate)*
It does better at the tail, but is perhaps too smooth at the beginning.
The 100-119 interval looks high in both populations. It could be due to a propensity for people to round up to 100 when answering the survey.
As far as truth in graphics goes, it's best to just plot the data that you have, which is the intervals. It might be useful to show the means/medians, but they only depart from the middle for the high ranges, which might be worth separate study.
*(figure: the raw interval data plotted directly)*
We can try to double our bin count by considering the medians. Theoretically, the median divides each interval into two intervals with equal population (two bars of equal area but possibly different heights). However, the breakdown is not so obvious due to possible ties and fractional medians. Here it is with interval widths of `(median-lo)` and `(hi-median+1)` (each full interval width is `(hi-lo+1)`):
*(figure: intervals split in two at the medians)*
|
How do I know if I'm violating MVVM with WPF?
I was recently grabbing an auto-generated TreeViewItem from a nested TreeView node in WPF, and I was using ItemContainerGenerator to get it in code-behind. Then I thought to myself: I'm certainly violating MVVM right now.
What are some other signs that someone is violating MVVM?
|
You know when you are violating MVVM when:
1. The ViewModel is aware of the View. The ViewModel should never know or care if or what is sitting on top of it. It simply is. Sometimes, the ViewModel might need to generate events that should be handled on the View. When I was working with MVVM, we used the [Mediator pattern](http://en.wikipedia.org/wiki/Mediator_pattern) to handle those cases.
2. The Model is aware of any other component. The layers build up. The View is aware of the ViewModel and the Model (because it databinds against it), the ViewModel is aware of the Model, and the Model is aware of nothing.
3. There is logic in the View that does anything other than things specific to the implementation of that View. All generic View state and logic should exist in the ViewModel, but sometimes implementation details are needed in the View. These should be sparing and never interested in anything other than View implementation-specific items.
There is one common pattern that runs throughout: [separation of concerns](http://en.wikipedia.org/wiki/Separation_of_concerns). If any layer of MVVM is interested in anything other than its concern, there is a problem.
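As a deliberately minimal illustration of point 1 (the names are invented; this is not WPF or any real framework, just the shape of the dependency), the View subscribes to the ViewModel while the ViewModel stays ignorant of the View:
```
class ViewModel:
    """Holds state and raises change notifications; never references a View."""
    def __init__(self):
        self._listeners = []
        self.title = ""

    def subscribe(self, callback):
        self._listeners.append(callback)

    def set_title(self, value):
        self.title = value
        for notify in self._listeners:  # "property changed" notification
            notify("title", value)

class View:
    """Binds to the ViewModel and reacts to its notifications."""
    def __init__(self, view_model):
        view_model.subscribe(self.render)

    def render(self, prop, value):
        print(f"view refreshed: {prop} = {value}")

vm = ViewModel()
view = View(vm)
vm.set_title("Hello MVVM")  # the ViewModel changes; the View reacts
```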
|
Which man page describes the process of a computer turning on?
A few years ago I recall using the terminal and reading a tutorial in the Linux manual (using `man`) on how a computer worked after it was turned on. It walked you through the whole process explaining the role of the BIOS, ROM, RAM and OS on this process.
Which page was this, if any? How can I read it again?
|
You're thinking of the [`boot(7)` manual](http://man7.org/linux/man-pages/man7/boot.7.html) (`man 7 boot`) and/or the [`bootup(7)` manual](http://man7.org/linux/man-pages/man7/bootup.7.html) (`man 7 bootup`). Those are the manuals I can think of on (Ubuntu) Linux that best fits your description.
These manuals are available on the web (see links above), but the definitive text is what's available on the system that you are using. If a web-based manual says one thing but the manual on your system says another, then the manual on your system is the more correct one for you. This goes for all manuals.
See also the "See also" section in those manuals.
This other question may also be of interest: [How does the Linux or Unix " / " get mounted during bootup?](https://unix.stackexchange.com/questions/338150/how-does-the-linux-or-unix-get-mounted-during-bootup)
For a non-Linux take on the boot process, the OpenBSD [first-stage system bootstrap (`biosboot(8)`)](http://man.openbsd.org/biosboot.8) and [second-stage bootstrap (`boot(8)`)](http://man.openbsd.org/boot.8) manuals, followed by [`rc(8)`](http://man.openbsd.org/rc.8), may be interesting.
|
Iterate over properties of an object (in Realm, or maybe not)
I'm working on a project that uses Realm for the database (that will come into the picture later). I've just discovered key-value coding and I want to use it in converting a TSV table into object properties (using the column headers from the table as the keys). Right now it looks like this:
```
let mirror = Mirror(reflecting: newSong)
for property in mirror.children {
    if let index = headers.index(of: property.label!) {
        newSong.setValue(headers[index], forKey: property.label!)
    } else {
        propertiesWithoutHeaders.append(property.label!)
    }
}
```
Is there a way to iterate over properties without a mirror? I really could have sworn that I read in the Realm documentation (or maybe even in Apple's KVC documentation) that you can do something like `for property in Song.properties` or `for property in Song.self.properties` to achieve the same thing.
Besides being a little more efficient, the main reason I want to do this is that in the same place I think I read this, they said that the iterating (or the KVC?) only works with Strings, Ints, Bools and Dates, so it would automatically skip the properties that are Objects (since you can't set them in the same way). The above code is actually a simplification of my code; in the actual version I'm currently skipping over the Objects like this:
```
let propertiesToSkip = ["title", "artist", "genre"]
for property in mirror.children where !propertiesToSkip.contains(property.label!) {
...
```
Did I imagine this `.properties` thing? Or, is there a way to iterate in this way, automatically skipping over Objects/Classes without having to name them as I do above?
Thanks :)
|
No you didn't imagine it. :)
Realm exposes the schema encompassing the properties of each type of model in a database in two places: in the parent `Realm` instance, or in an `Object` itself.
In the `Realm` instance:
```
// Get an instance of the Realm object
let realm = try! Realm()

// Get the object schema for just the Mirror class. This contains the property names
let mirrorSchema = realm.schema["Mirror"]

// Iterate through each property and print its name
for property in mirrorSchema.properties {
    print(property.name)
}
```
Realm `Object` instances expose the schema for that object via the `Object.objectSchema` property.
Check out the [`schema` property of `Realm`](https://realm.io/docs/swift/2.5.1/api/Classes/Realm.html#/s:vC10RealmSwift5Realm6schemaCS_6Schema) in the Realm Swift Documentation for more information on what kind of data you can get out of the schema properties. :)
|
Client-side rendering of a Markdown file
One can follow the [Marked library documentation](https://marked.js.org/#usage) and render a Markdown string inline. This is a working code snippet.
```
<div id="content"></div>
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script>
document.getElementById('content').innerHTML =
marked.parse('# Hello Ayan \n\nRendered by **marked**.');
</script>
```
Is there a way to pass a **file** into the `marked.parse` function or through [any other client-side Markdown rendering library](https://openbase.com/categories/js/best-javascript-client-markdown-rendering-libraries) and render the **whole file** instead of just a string? I looked into getting the markdown file and passing it as a string. However, I [couldn't find a straightforward way](https://stackoverflow.com/questions/25212027/how-can-get-html-file-format-as-string-in-javascript).
The file is in the same folder as this HTML file and would be served from GitHub using GitHub Pages. However, I could use an absolute link from a CDN if needed. How would I pass the contents to `marked.parse()`? `marked.parse(Hello.md)` didn't work.
|
>
> The file is in the same folder as this HTML file and would be served from GitHub using GitHub Pages
>
>
>
You can have the browser [`fetch`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch) the content and then pass its content to `marked.parse()`. Something like this should work:
```
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script>
fetch("/pages-site/markdown.md") // The path to the raw Markdown file
.then(response => response.blob()) // Unwrap to a blob...
.then(blob => blob.text()) // ...then to raw text...
.then(markdown => { // ...then pass the raw text into marked.parse
document.getElementById("content").innerHTML = marked.parse(markdown);
});
</script>
```
[Here is a live example](https://ccharles.github.io/pages-test/).
|
iOS Simulator 7.1 crash running on Yosemite with weak linked new frameworks (Symbol not found: \_objc\_isAuto)
I have just updated to Xcode 6.1 and had to reinstall my iOS 7 simulators.
I can run my app on all the simulators apart from the 5s iOS 7.1 sim.
I get this crash
```
dyld: Symbol not found: _objc_isAuto
Referenced from: /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
Expected in: /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator7.1.sdk/usr/lib/libobjc.A.dylib
in /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
(lldb)
```
What gives? Any ideas why this sim crashes?
Thanks
|
This issue can occur if you are building your project against the iOS 8.x SDKs and are weak linking a framework that is new to iOS 8.x and also present in OS X 10.10 and you run in the iOS 7.1 simulator.
The [Xcode 6.1 Release Notes](https://developer.apple.com/library/ios/releasenotes/DeveloperTools/RN-Xcode/Chapters/xc6_release_notes.html) mention this issue:
>
> If an app is weak linked against frameworks new in iOS 8 SDK and OS X
> 10.10 SDK, it may fail to run if the run destination is an iOS Simulator for older iOS runtimes and the host system is running OS X
> Yosemite. (17807439)
>
>
>
The issue is resolved in the updated iOS 7.1 simulator runtime that can be downloaded from Xcode 6.2 beta 4 and later (and I think possibly beta 3 worked as well).
|
Are Kotlin lambdas the same as Java lambdas under the hood?
This is a follow up question of [this answer](https://stackoverflow.com/a/34642063).
>
> But when the application hasn’t used lambda expressions before¹, even
> the framework for generating the lambda classes has to be loaded
> (Oracle’s current implementation uses ASM under the hood). This is the
> actual cause of the slowdown, loading and initialization of a dozen
> internally used classes, not the lambda expression itself
>
>
>
Ok, Java uses ASM to generate the classes at runtime. I found [this](https://github.com/JetBrains/kotlin/blob/master/spec-docs/function-types.md#function0-function1--types) and, if I understood correctly, it is basically saying that Kotlin lambdas are compiled to pre-existing anonymous classes that are loaded at runtime (instead of generated).
If I'm correct, Kotlin lambdas aren't the same thing as Java's and shouldn't have the same performance impact. Can someone confirm?
|
Of course, Kotlin has built-in support for inlining lambdas, where Java doesn't. So many lambdas in Kotlin code don't correspond to any objects at runtime at all.
But for those that can't be inlined, yes, according to <https://medium.com/@christian.c.carroll/exploring-kotlin-lambda-bytecode-8c2d15afd490> the anonymous class translation seems to be always used. Unfortunately the post doesn't specify the Kotlin version (1.3.30 was the latest available at that time).
I would also consider this an implementation detail which could change depending on the Kotlin version, at least when [`jvmTarget` is set to `"1.8"` or greater](https://kotlinlang.org/docs/reference/using-gradle.html#attributes-specific-for-jvm); so there is no substitute for actually checking your own bytecode.
|
What happens with duplicates when inserting multiple rows?
I am running a python script that inserts a large amount of data into a Postgres database, I use a single query to perform multiple row inserts:
```
INSERT INTO table (col1,col2) VALUES ('v1','v2'),('v3','v4') ... etc
```
I was wondering what would happen if it hits a duplicate key for the insert. Will it stop the entire query and throw an exception? Or will it merely ignore the insert of that specific row and move on?
|
The `INSERT` will just insert all rows and ***nothing*** special will happen, **unless** you have some kind of [**constraint**](https://www.postgresql.org/docs/current/ddl-constraints.html) disallowing duplicate / overlapping values (`PRIMARY KEY`, `UNIQUE`, `CHECK` or `EXCLUDE` constraint) - which you did not mention in your question. But that's what you are probably worried about.
Assuming a `UNIQUE` or PK constraint on `(col1,col2)`, you are dealing with a textbook `UPSERT` situation. Many related questions and answers to find here.
Generally, if *any* constraint is violated, an exception is raised which (unless trapped in subtransaction like it's possible in a procedural server-side language like plpgsql) will roll back not only the statement, but the ***whole transaction***.
### Without concurrent writes
I.e.: No other transactions will try to write to the same table at the same time.
- Exclude rows that are already in the table with `WHERE NOT EXISTS ...` or any other applicable technique:
- [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
- And don't forget to remove duplicates *within* the inserted set as well, which would *not* be excluded by the semi-anti-join `WHERE NOT EXISTS ...`
One technique to deal with both at once would be `EXCEPT`:
```
INSERT INTO tbl (col1, col2)
VALUES
  (text 'v1', text 'v2') -- explicit type cast may be needed in 1st row
, ('v3', 'v4')
, ('v3', 'v4') -- beware of dupes in source
EXCEPT SELECT col1, col2 FROM tbl;
```
[**`EXCEPT`**](https://www.postgresql.org/docs/current/queries-union.html) without the key word `ALL` folds duplicate rows in the source. If you know there are no dupes, or you don't want to fold duplicates silently, use `EXCEPT ALL` (or one of the other techniques). See:
- [Using EXCEPT clause in PostgreSQL](https://stackoverflow.com/questions/35329419/using-except-clause-in-postgresql/35329553#35329553)
Generally, if the target table is *big*, `WHERE NOT EXISTS` in combination with `DISTINCT` on the source will probably be faster:
```
INSERT INTO tbl (col1, col2)
SELECT *
FROM  (
   SELECT DISTINCT *
   FROM  (
      VALUES
        (text 'v1', text 'v2')
      , ('v3', 'v4')
      , ('v3', 'v4')  -- dupes in source
      ) t(c1, c2)
   ) t
WHERE NOT EXISTS (
   SELECT FROM tbl
   WHERE  col1 = t.c1 AND col2 = t.c2
   );
```
If there can be many dupes, it pays to fold them in the source first. Else use one subquery less.
Related:
- [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
### With concurrent writes
Use the Postgres `UPSERT` implementation [**`INSERT ... ON CONFLICT ...`**](https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT) in **Postgres 9.5** or later:
```
INSERT INTO tbl (col1, col2)
SELECT DISTINCT *  -- still can't insert the same row more than once
FROM  (
   VALUES
     (text 'v1', text 'v2')
   , ('v3', 'v4')
   , ('v3', 'v4')  -- you still need to fold dupes in source!
   ) t(c1, c2)
ON CONFLICT DO NOTHING;  -- ignores rows with *any* conflict!
```
Further reading:
- [How to use RETURNING with ON CONFLICT in PostgreSQL?](https://stackoverflow.com/questions/34708509/how-to-use-returning-with-on-conflict-in-postgresql/42217872#42217872)
- [How do I insert a row which contains a foreign key?](https://dba.stackexchange.com/a/46477/3684)
Documentation:
- [The manual](https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT)
- [The commit page](https://commitfest.postgresql.org/3/35/)
- [The Postgres Wiki page](https://wiki.postgresql.org/wiki/UPSERT)
Craig's reference answer for `UPSERT` problems:
- [How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?](https://stackoverflow.com/questions/17267417/how-do-i-do-an-upsert-merge-insert-on-duplicate-update-in-postgresql/17267423#17267423)
|
What is the relationship between the different WebSocket protocol versions?
I recently learned that there are a plethora of WebSocket protocol specifications (a bunch of them named hixie-*, another bunch of hybi-*, and finally a RFC 6455).
I assumed that hixie-\* and hybi-\* were previous drafts, and that the RFC is "the final word" toward which all the implementations will eventually converge. However, I was surprised to discover that the RFC is from December 2011, while the latest hybi-\* is from February 2012.
Could someone please shed some light? What is the historical development of all those branches and what is the roadmap for the future?
By the way, do those funny names (hixie and hybi) stand for something?
|
"Hixie" stems from Ian Hickson [email protected] .. original proposer/author of the WebSocket protocol.
"Hybi" stems from "hy\_pertext bi\_directional" .. IETF working group "BiDirectional or Server-Initiated HTTP (Active WG)".
The latest and final RFC is RFC6455. Do implement that.
Hixie-75/76 are deprecated, security flawed, outdated versions that were in use for some time.
Hybi-Draft-N .. where N is .., 10, .., 18 mark revisions of the protocol during the development of the final RFC from draft RFCs.
Everything >= Hybi-10 are only small variants of the final RFC6455.
In short: read and implement <https://www.rfc-editor.org/rfc/rfc6455> and you will be fine.
|
How can I detect CTRL+SHIFT+ANY\_KEY using a KeyAdapter?
Here's my code:
```
tabbedPane.addKeyListener(new java.awt.event.KeyAdapter() {
    public void keyPressed(java.awt.event.KeyEvent evt) {
        CheckShortcut controlShortcut = (key) -> {
            return (evt.getKeyCode() == key) && ((evt.getModifiers() & KeyEvent.CTRL_MASK) != 0);
        };
        CheckShortcut controlShiftShortcut = (key) -> {
            return (evt.getKeyCode() == key) && ((evt.getModifiers() & KeyEvent.CTRL_MASK & KeyEvent.SHIFT_MASK) != 0);
        }; // Does not work <<<<<

        if (controlShortcut.f(KeyEvent.VK_N)) {
            createNewFile();
        } else if (controlShortcut.f(KeyEvent.VK_O)) {
            openFile();
        } else if (controlShortcut.f(KeyEvent.VK_S)) {
            save();
        } else if (controlShiftShortcut.f(KeyEvent.VK_S)) {
            saveAs();
        } else if (controlShortcut.f(KeyEvent.VK_Q)) {
            System.exit(0);
        } else if (controlShortcut.f(KeyEvent.VK_W)) {
            MainFrame.toggleFrame(qrWebcamFrame);
        } else if (controlShortcut.f(KeyEvent.VK_C)) {
            MainFrame.toggleFrame(comandaCreationFrame);
        } else if (controlShortcut.f(KeyEvent.VK_P)) {
            if (accessPasswordFrame("Senha de administrador",
                    "Login: ", "Senha de administrador inválida.",
                    ADMIN_TYPE)) {
                MainFrame.toggleFrame(passwordFrame);
            }
        }
    }
});
```
The `controlShortcut` works perfectly. Nevertheless, when I add `KeyEvent.SHIFT_MASK` to the test, it does not work. Also, when I do this:
```
CheckShortcut controlShiftShortcut = (key) -> {
    return (evt.getKeyCode() == key) && ((evt.getModifiers() & KeyEvent.CTRL_MASK) != 0) && ((evt.getModifiers() & KeyEvent.SHIFT_MASK) != 0);
};
```
It does not work either. I don't understand why, since I'm just adding the same test.
|
This is not how bitwise operators work. `KeyEvent.CTRL_MASK & KeyEvent.SHIFT_MASK` always gives 0 because you are AND-ing different masks.
What you want to do is create a mask for both using OR:
```
int down = KeyEvent.CTRL_DOWN_MASK | KeyEvent.SHIFT_DOWN_MASK;
if ((e.getModifiersEx() & down) == down && (e.getKeyCode() == KeyEvent.VK_D))
    System.out.println(true);
```
This will print
>
> true
>
>
>
only when `ctrl`+`shift`+`D` is pressed.
If you write
```
(e.getModifiersEx() & down) != 0
```
it will check if *any* (or both) of `ctrl` or `shift` are pressed.
Note that you should use `getModifiersEx` along with `X_DOWN_MASK` and not `getModifiers` along with `X_MASK`.
|
Graphics driver stopped working
All was well and then my installed NVidia drivers disappeared after a reboot. I am back to only one monitor and a resolution of 1024x768. I had 2 monitors available with higher resolutions.
Now I'm sure I can figure out how to install the Nvidia drivers et al., since I figured it out in the first place. I've just never had to redo it after having it disappear on a reboot, so I'm asking the community:
1. Is this normal?
2. What is the best way to proceed to avoid it happening again?
I had several reboots where the driver stayed put. Things that may have changed are:
1. I added a workspace icon in the panel area at the top and was playing with that, switching workspaces, etc.
2. There was a red warning icon on the panel, something about the cache not matching something, so I did an apt-get update and then rebooted, and voila!
Any help appreciated.
I'm on:
Xubuntu 14.04
ASUS 750 Ti
16GB RAM
AMD Black
|
This problem happens when you install Nvidia drivers using .run files downloaded from the Nvidia site. This is the wrong way to install drivers.
After each kernel upgrade you will have a black screen or bad resolution.
The correct way is to install drivers from the Ubuntu repositories or a PPA.
You could install drivers by running
```
sudo apt-get install nvidia-331
```
But first you need to uninstall the driver you already have.
```
sudo sh ./NVIDIA-Linux-x86_64-334.21.run --uninstall
```
As an option you can install latest driver from xorg-edgers ppa.
```
sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update
sudo apt-get install nvidia-346
sudo add-apt-repository -r ppa:xorg-edgers/ppa
```
I recommend removing one driver before installing another, because sometimes packages do not install correctly while the other driver is still in place.
Full removal of Nvidia proprietary drivers can be done by
```
sudo apt-get purge nvidia*
```
|
Replace default STL allocator
I have the source for a large (>250 files) library that makes heavy use of STL containers and strings. I need to run it in an embedded environment with limited heap, so I want to ensure that this library itself is limited in its heap usage.
The obvious solution is to create an allocator, but modifying the entire code base to include the allocator template parameter is a big job of last resort, and undesirable in case I ever want to take a new version of the source. Globally replacing new and delete is not feasible since that affects the entire image, not just this library.
My next thought was a stupid C macro trick, but that doesn't seem like it would be possible, although I admit to not being a clever macro author.
So I thought "is there a compiler or pragma switch to specify the allocator<> class at compile time"? But I'm open for anything.
The next question I'll ask, if anyone can come up with a solution, is how to do the same for new/delete within the set of files comprising this library.
I'm using the gcc 3.4.4 toolchain to run this under Cygwin, with a target of VxWorks, if that sparks any ideas.
|
I resorted to the preprocessor to get a possible solution, although it currently relies upon the GCC 3.4.4 implementation to work.
The GCC `<memory>` implementation includes the file `<bits/allocator.h>`, which in turn includes another file, `<bits/c++allocator.h>`, which defines a macro naming the class that implements the default allocator base class.
Since `<bits/c++allocator.h>` is found in a platform-dependent path *(`/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/i686-pc-cygwin/bits`)*, I don't feel (very) dirty in supplanting it with my own "platform-dependent" implementation.
So I just create a folder `bits/` in the root of my source's include path, and then create the file `c++allocator.h` in that folder. I define the required macro to be the name of my allocator class, and it works like a charm, since gcc searches my include paths prior to searching the system includes.
Thanks for all your responses. I think I can go with this "solution", which will probably only work as long as I'm using 3.4.4.
|
data.table and error handling using try statement
I am trying to incorporate a bit of error handling in my R code.
Pseudo-code below:
---
```
foo = function(X,Y) {
    ...
    return(ret.df);
}
DT = DT[,ret.df := foo(X,Y), by=key(DT)];
```
---
The aim is to check if, for some combination of `X`, `Y`, the function `foo` raises an error. If it does raise an error, then I want to skip that record combination in the ultimate resultant data frame. I have tried the below without much luck:
---
```
DT = DT[ , try(ret.df := foo(X,Y));
         if (!(class(ret.df) %in% "try-error")) {
             return(ret.df);
         }, by = key(DT) ];
```
---
I can always write a wrapper around `foo` to do the error checking; however, I am looking for a way to write the syntax directly in the data.table call. Is this possible?
Thanks for your help in advance!
|
Here's a dummy function and data:
```
foo = function(X,Y) {
    if (any(Y==2)) stop("Y contains 2!")
    X*Y
}
DT = data.table(a=1:3, b=1:6)
DT
   a b
1: 1 1
2: 2 2
3: 3 3
4: 1 4
5: 2 5
6: 3 6
```
**Step by step:**
```
> DT[, c := foo(a,b), by=a ]
Error in foo(a, b) : Y contains 2!
```
Ok, that's by construction. Good.
Aside: notice column `c` was added, despite the error.
```
> DT
   a b  c
1: 1 1  1
2: 2 2 NA
3: 3 3 NA
4: 1 4  4
5: 2 5 NA
6: 3 6 NA
```
Only the first successful group was populated; it stopped at the second group. This is by design. At some point in the future we could add *transactions* to `data.table` internally, like SQL, so that if an error happened, any changes could be *rolled back*. Anyway, just something to be aware of.
To deal with the error you can use `{}`.
First attempt :
```
> DT[, c := {
      if (inherits(try(ans<-foo(a,b)),"try-error"))
          NA
      else
          ans
  }, by=a]
Error in foo(a, b) : Y contains 2!
Error in `[.data.table`(DT, , `:=`(c, { :
Type of RHS ('logical') must match LHS ('integer'). To check and coerce would
impact performance too much for the fastest cases. Either change the type of
the target column, or coerce the RHS of := yourself (e.g. by using 1L instead
of 1)
```
The error tells us what to do. Let's coerce the type of the RHS (`NA`) from `logical` to `integer`.
```
> DT[, c := {
      if (inherits(try(ans<-foo(a,b)),"try-error"))
          NA_integer_
      else
          ans
  }, by=a]
Error in foo(a, b) : Y contains 2!
```
Better, the long error has gone. But why still the error from `foo`? Let's look at `DT` just to check.
```
> DT
   a b  c
1: 1 1  1
2: 2 2 NA
3: 3 3  9
4: 1 4  4
5: 2 5 NA
6: 3 6 18
```
Oh, so it has worked. The 3rd group has run and values 9 and 18 appear in rows 3 and 6. Looking at `?try` reveals the `silent` argument.
```
> DT[, c := {
      if (inherits(try(ans<-foo(a,b),silent=TRUE),"try-error"))
          NA_integer_
      else
          ans
  }, by=a]
> # no errors
> DT
   a b  c
1: 1 1  1
2: 2 2 NA
3: 3 3  9
4: 1 4  4
5: 2 5 NA
6: 3 6 18
```
|
Color Tint UIButton Image
I noticed that when I place a white or black `UIImage` into a `UISegmentedControl` it automatically color masks it to match the tint of the segmented control. I thought this was really cool, and was wondering if I could do this elsewhere as well. For example, I have a bunch of buttons that have a uniform shape but varied colors. Instead of making a PNG for each button, could I somehow use this color masking to use the same image for all of them but then set a tint color or something to change their actual color?
|
As of iOS 7, there is a new method on `UIImage` to specify the rendering mode. Using the rendering mode `UIImageRenderingModeAlwaysTemplate` will allow the image color to be controlled by the button's tint color.
Objective-C
```
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
UIImage *image = [[UIImage imageNamed:@"image_name"] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[button setImage:image forState:UIControlStateNormal];
button.tintColor = [UIColor redColor];
```
Swift
```
let button = UIButton(type: .custom)
let image = UIImage(named: "image_name")?.withRenderingMode(.alwaysTemplate)
button.setImage(image, for: .normal)
button.tintColor = UIColor.red
```
|
What is the probability that a student gets a better score than another on a test with randomly selected questions?
Suppose there is a set $S$ of $100$ questions and there are $2$ students $a$ and $b$.
Let $P\_{ai}$ be the probability that $a$ answers the question $i$ correctly, and $P\_{bi}$ the same for $b$.
All $P\_{ai}$ and $P\_{bi}$ are given for $i = 1...100$.
Suppose an exam $E$ is made by taking $10$ random questions from $S$.
How can I find the probability of $a$ getting a better score than $b$?
---
I thought about checking the combinations and comparing the probabilities but it is a very large number and will take forever, so I ran out of ideas.
|
**A dynamic program will make short work of this.**
Suppose we administer all questions to the students and then randomly select a subset $\mathcal{I}$ of $k=10$ out of all $n=100$ questions. Let's define a random variable $X\_i$ to compare the two students on question $i:$ set it to $1$ if student A is correct and student B not, $-1$ if student B is correct and student A not, and $0$ otherwise. The total
$$X\_\mathcal{I} = \sum\_{i\in\mathcal{I}} X\_i$$
is the difference in scores for the questions in $\mathcal I.$ **We wish to compute $\Pr(X\_\mathcal{I} \gt 0).$** This probability is taken over the joint distribution of $\mathcal I$ and the $X\_i.$
**The distribution function of $X\_i$ is readily calculated** under the assumption the students respond independently:
$$\eqalign{
\Pr(X\_i=1) &= P\_{ai}(1-P\_{bi}) \\
\Pr(X\_i=-1) &= P\_{bi}(1-P\_{ai}) \\
\Pr(X\_i=0) &= 1 - \Pr(X\_i=1) - \Pr(X\_i=-1).
}$$
As a shorthand, let us call these probabilities $a\_i,$ $b\_i,$ and $d\_i,$ respectively. Write
$$f\_i(x) = a\_i x + b\_i x^{-1} + d\_i.$$
This polynomial is a *probability generating function* for $X\_i.$
Consider the rational function
$$\psi\_n(x,t) = \prod\_{i=1}^n \left(1 + t f\_i(x)\right).$$
(Actually, $x^n\psi\_n(x,t)$ is a polynomial: it's a pretty simple rational function.)
When $\psi\_n$ is expanded as a polynomial in $t$, the coefficient of $t^k$ consists of the sum of all possible products of $k$ distinct $f\_i(x).$ This will be a rational function with nonzero coefficients only for powers of $x$ from $x^{-k}$ through $x^k.$ **Because $\mathcal{I}$ is selected uniformly at random, the coefficients of these powers of $x,$ when normalized to sum to unity, give the probability generating function for the difference in scores.** The powers correspond to the size of $\mathcal{I}.$
**The point of this analysis is that we may compute $\psi(x,t)$ easily and with reasonable efficiency:** simply multiply the $n$ polynomials sequentially. Doing this requires retaining the coefficients of $1, t, \ldots, t^k$ in $\psi\_j(x,t)$ for $j=0, 1, \ldots, n$ (we may of course ignore all higher powers of $t$ that appear in any of these partial products). Accordingly, all the necessary information carried by $\psi\_j(x,t)$ can be represented by a $(2k+1)\times(k+1)$ matrix, with rows indexed by the powers of $x$ (from $-k$ through $k$) and columns indexed by $0$ through $k$.
Each step of the computation requires work proportional to the size of this matrix, scaling as $O(k^2).$ Accounting for the number of steps, this is a $O(k^2n)$-time, $O(kn)$-space algorithm. That makes it quite fast for small $k.$ I have run it in `R` (not known for excessive speed) for $k$ up to $100$ and $n$ up to $10^5,$ where it takes nine seconds (on a single core). In the setting of the question with $n=100$ and $k=10,$ the computation takes $0.03$ seconds.
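Concretely, the sequential multiplication rests on the recurrence
$$\psi\_j(x,t) = \psi\_{j-1}(x,t)\left(1 + t\,f\_j(x)\right),$$
so the coefficient of each power $t^m$ updates as $[t^m]\,\psi\_j = [t^m]\,\psi\_{j-1} + f\_j(x)\,[t^{m-1}]\,\psi\_{j-1}$ for $m = 1, \ldots, k.$ This shift-and-add update (one term each for $a\_i,$ $b\_i,$ and $d\_i$) is exactly what the `R` code at the end performs on the coefficient matrix.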
Here is an example where the $P\_{ai}$ are uniform random values between $0$ and $1$ and the $P\_{bi}$ are their squares (which are always less than the $P\_{ai}$, thereby strongly favoring student A). I simulated 100,000 examinations, as summarized by this histogram of the net scores:
[](https://i.stack.imgur.com/RHMRU.png)
The blue bars indicate those results in which student A got a better score than B. The red dots are the result of the dynamic program. They agree beautifully with the simulation ($\chi^2$ test, $p=51\%$). **Summing all the positive probabilities gives the answer in this case, $0.7526\ldots.$**
Note that this calculation yields more than asked for: it produces the *entire probability distribution* of the difference in scores for *all exams of $k$ or fewer randomly selected questions.*
---
For those who wish a working implementation to use or port, here is the `R` code that produced the simulation (stored in the vector `Simulation`) and executed the dynamic program (with results in the array `P`). The `repeat` block at the end is there only to aggregate all unusually rare outcomes so that the $\chi^2$ test becomes obviously reliable. (In most situations this doesn't matter, but it keeps the software from complaining.)
```
n <- 100
k <- 10
p <- runif(n) # Student A's chances of answering correctly
q <- p^2 # Student B's chances of answering correctly
#
# Compute the full distribution.
#
system.time({
    P <- matrix(0, 2*k+1, k+1) # Indexing from (-k,0) to (k,k)
    rownames(P) <- (-k):k
    colnames(P) <- 0:k
    P[k+1, 1] <- 1
    for (i in 1:n) {
        a <- p[i] * (1 - q[i])
        b <- q[i] * (1 - p[i])
        d <- (1 - a - b)
        P[, 1:k+1] <- P[, 1:k+1] +
            a * rbind(0, P[-(2*k+1), 1:k]) +
            b * rbind(P[-1, 1:k], 0) +
            d * P[, 1:k]
    }
    P <- apply(P, 2, function(x) x / sum(x))
})
#
# Simulation to check.
#
n.sim <- 1e5
set.seed(17)
system.time(
    Simulation <- replicate(n.sim, {
        i <- sample.int(n, k)
        sum(sign((runif(k) <= p[i]) - (runif(k) <= q[i]))) # Difference in scores, A-B
    })
)
#
# Test the calculation.
#
counts <- tabulate(Simulation+k+1, nbins=2*k+1)
n <- sum(counts)
k.min <- 5
repeat {
    probs <- P[, k+1]
    i <- probs * n.sim >= k.min
    z <- sum(probs[!i])
    if (z * n >= 5) break
    if (k.min * (2*k+1) >= n) break
    k.min <- ceiling(k.min * 3/2)
}
probs <- c(z, probs[i])
counts <- c(sum(counts[!i]), counts[i])
chisq.test(counts, p=probs)
#
# The answer.
#
sum(P[(1:k) + k+1, k+1]) # Chance that A-B is positive
```
|
What does the ^M character mean in Vim?
I keep getting the `^M` character in my `.vimrc` and it breaks my
configuration.
|
Unix uses 0xA for a newline character. Windows uses a combination of two characters: 0xD 0xA. 0xD is the carriage return character. `^M` happens to be the way vim displays 0xD (0x0D = 13, M is the 13th letter in the English alphabet).
You can remove all the `^M` characters by running the following:
```
:%s/^M//g
```
Where `^M` is entered by holding down `Ctrl` and typing `v` followed by `m`, and then releasing `Ctrl`. This is sometimes abbreviated as `^V^M`, but note that you must enter it as described in the previous sentence, rather than typing it out literally.
This expression will replace all occurrences of `^M` with the empty string (i.e. nothing). I use this to get rid of `^M` in files copied from Windows to Unix (Solaris, Linux, OSX).
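If the whole file was written with Windows line endings, an alternative (a sketch, assuming a reasonably recent Vim) is to re-read the file as DOS-formatted and then rewrite it with Unix line endings:

```
" Re-read the file, forcing DOS fileformat so Vim strips the CRs:
:e ++ff=dos
" Mark the buffer as Unix-formatted and write it back out:
:set ff=unix
:w
```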
|
SSIS transaction management MSSQL
I need to copy data from DB "source" to db "destination" should the copying fail, I need to roll back on "destination". The two connections are defined in the connection manager as OLE DB.
Here is my current attempt, which is not working. I tried playing around with the built-in transaction management (setting the task's transaction to **required**) but that only made it impossible to connect to "destination".
The destination has set "RetainSameConnection" = true, while this is false for "source" for no particular reason.
I also set the "MaxConcurrentExecutables" = 1 in to hinder SSIS from executing my rollback as the first thing.
Each of the tasks in the sequence is set to "Isolation level"=ReadUncommitted and "transactionOption"=supported.
The "failing script" is a script that always fail in order for me to test the transaction is working.

The code for the task "begin tran" is "BEGIN TRANSACTION " and the connection is set to "destination"
The Code for the task "rollback tran" is "rollback transaction" and the connection is set to "destination"
The rollback fails with "the rollback transaction request has no corresponding 'BEGIN TRANSACTION'"
|
You are mixing two concepts here. There are 2 ways to achieve transactions in SSIS. The first is SSIS Transactions. Here, your package should be set to `TransactionOption = Supported`, your container should be set to `TransactionOption = Required` (which will begin a transaction) and then your two Data Flow Tasks would need to be set to `TransactionOption = Supported`, which would make both of them join the open transaction. However, please note that this option **requires the Distributed Transaction Coordinator**, and there is no way around that.
The second way of achieving transactions is with SQL native transactions. Here, you would have an Execute SQL Task that starts a transaction, followed by your Data Flow Tasks, and then another Execute SQL Task that commits the transaction (and of course another to roll back). The issue here is that all of the tasks just mentioned must **use the same connection manager, with `RetainSameConnection = True` set on that connection manager**; otherwise it will not work, as SSIS and SQL Server will still regard it as a distributed transaction, and you would have to use BEGIN DISTRIBUTED TRANSACTION, which again requires the Distributed Transaction Coordinator. Also, I recall that distributed native SQL transactions do not work properly in SSIS.
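If you do go the native route despite those caveats, the layout is roughly as follows (a sketch; the statements live in the Execute SQL Tasks, all of which must share the one connection manager with `RetainSameConnection = True`):

```
-- Execute SQL Task "begin tran" (before the Data Flow Tasks):
BEGIN TRANSACTION;

-- Execute SQL Task "commit tran" (on the success precedence path):
COMMIT TRANSACTION;

-- Execute SQL Task "rollback tran" (on the failure precedence path):
ROLLBACK TRANSACTION;
```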
The short answer is that you cannot achieve what you are trying to do with transactions in SSIS. An alternative would be to use a compensation block. Here, on failure of the insert, you would have an Execute SQL Task that deletes the data you have just inserted, based on either a timestamp or a `SELECT MAX(ID)`, whichever suits your requirements best.
|
How can I get a leaflet.js instance using only a DOM object?
I'm right now building a custom Knockout.js binding to handle drawing of polygons. In this case the Knockout API only gives me a reference to a DOM object to access whatever it is I need to update. However, it looks like by design leaflet.js wants the user to store the map instance in their implementation. I don't have that option.
Trying this gave me an error: `var existingMap = L.map('aMapIDGoesHere')`
And the error was: `map already initialized`.
Any way I can use a DOM element or element ID to access the map instance?
By request here's the custom binding, please note it's a work in progress:
```
ko.bindingHandlers.leafletDraw = {
init: function(element, valueAccessor, allBindingsAccessor) {
var map = L.map(element).setView([40, -90], 3);
var tiles = L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
attribution: 'OSM',
minZoom: 2
}).addTo(map);
// Initialise the FeatureGroup to store editable layers
var editableLayers = new L.FeatureGroup();
map.addLayer(editableLayers);
// Initialise the draw control and pass it the FeatureGroup of editable layers
var drawOptions = {
edit: {
featureGroup: editableLayers,
remove: false
},
draw: {
polyline: false,
circle: false,
marker: false,
polygon: {
allowIntersection: false,
showArea: true
}
}
}
var drawControl = new L.Control.Draw(drawOptions);
map.addControl(drawControl);
// when a shape is first created
map.on('draw:created', function (e) {
var shapeString = $.map(e.layer._latlngs, function(pair) { return pair.lng.toString()+"::"+pair.lat.toString(); }).join(";;;");
var value = valueAccessor();
if (ko.isObservable(value)) {
value(shapeString);
}
editableLayers.addLayer(e.layer);
drawControl.removeFrom(map);
drawOptions.draw.polygon = false;
drawOptions.draw.rectangle = false;
var editControl = new L.Control.Draw(drawOptions);
map.addControl(editControl);
});
// handle when a shape is edited
map.on('draw:edited', function (e) {
var editedLayer = e.layers._layers[Object.keys(e.layers._layers)[0]];
var shapeString = $.map(editedLayer._latlngs, function(pair) { return pair.lng.toString()+"::"+pair.lat.toString(); }).join(";;;");
var value = valueAccessor();
if (ko.isObservable(value)) {
value(shapeString);
}
});
},
update: function(element, valueAccessor) {
// need to figure this out since we can't access leaflet params from
}
};
```
**Special Note** You'll notice that I am converting points into a concatenated string. This is necessary for the time being.
|
As long as you are sure that the DOM element will not be removed, you could just add it as a subproperty on the DOM element itself. Here's a binding handler using the code on the leaflet front page for setting up the leaflet map:
```
ko.bindingHandlers.leaflet = {
init: function(element, valueAccessor){
var map = L.map(element);
element.myMapProperty = map;
L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
attribution: '© <a href="http://osm.org/copyright">OpenStreetMap</a> contributors'
}).addTo(map);
},
update: function(element, valueAccessor){
var existingMap = element.myMapProperty;
var value = ko.unwrap(valueAccessor());
var latitude = ko.unwrap(value.latitude);
var longitude = ko.unwrap(value.longitude);
var zoom = ko.unwrap(value.zoom);
existingMap.setView([latitude, longitude], zoom);
}
};
```
To use the binding handler you would just bind like the following:
```
<div data-bind="leaflet: { latitude: latitudeProperty, longitude: longitudeProperty, zoom: zoomProperty }"></div>
```
Just ensure that you have also styled the `div` to ensure it has a height and width. I have written [a jsfiddle which uses the above leaflet bindingHandler](http://jsfiddle.net/23qq8/) where you could try it out.
I have only tested this jsfiddle in Internet Explorer 11, Firefox 26.0 and Firefox 27.0.1.
|
What Is The Difference Between -anydpi And -nodpi?
If you use the Vector Asset wizard in Android Studio 1.5.0, any vector drawable XML you import using that wizard goes into `res/drawable/`.
However, the `build/` directory, and the resulting APK show that those XML files get moved into a `res/drawable-anydpi-v21/` resource directory. The `-v21` part makes sense, as `VectorDrawable` is only supported on API Level 21+. However, `-anydpi` seems to be undocumented. I would have expected `-nodpi`, both for the original import destination and for where the build system elects to move it.
Has anyone seen official statements for what `-anydpi` means, and what its relationship is with `-nodpi`? I am looking for practical effects, not merely what some code comments hint at.
|
# nodpi
>
> These are density-independent resources. The system does not scale resources tagged with this qualifier, regardless of the current screen's density.
>
>
>
For instance:
- drawable-**nodpi**/dot.png
The dot will appear small on xxhdpi, big on ldpi.
However, the resource resolver will match a specific qualifier if it exists.
For instance
- drawable-**hdpi**/eg.png
- drawable-**nodpi**-v21/eg.xml
On a Lollipop (API 21) hdpi device, the **bitmap** is used.
On a Lollipop (API 21) xhdpi device, the vector is used.
# anydpi
>
> These resources take precedence in any dpi.
>
>
>
For instance
- drawable-**hdpi**/eg.png
- drawable-**anydpi**-v21/eg.xml
On a Lollipop (API 21) hdpi device, the **vector** is used.
On a Lollipop (API 21) xhdpi device, the vector is used.
# Reference
*Note*: anydpi was added in [change Ic3288d0236fe0bff20bb1599aba2582c25b0db32](https://android.googlesource.com/platform/frameworks/base/+/31245b4%5E!/).
|
FTP Check if file exist when Uploading and if it does rename it in C#
I have a question about Uploading to a FTP with C#.
What I want to do is if the file exists then I want to add like Copy or a 1 after the filename so it doesn't replace the file. Any Ideas?
```
var request = (FtpWebRequest)WebRequest.Create(""+destination+file);
request.Credentials = new NetworkCredential("", "");
request.Method = WebRequestMethods.Ftp.GetFileSize;
try
{
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
}
catch (WebException ex)
{
FtpWebResponse response = (FtpWebResponse)ex.Response;
if (response.StatusCode == FtpStatusCode.ActionNotTakenFileUnavailable)
{
}
}
```
|
It's not particularly elegant as I just threw it together, but I guess this is pretty much what you need?
You just want to keep trying your requests until you get an "ActionNotTakenFileUnavailable", so you know your filename is good, then just upload it.
```
string destination = "ftp://something.com/";
string file = "test.jpg";
string extension = Path.GetExtension(file);
string fileName = file.Remove(file.Length - extension.Length);
string fileNameCopy = fileName;
int attempt = 1;
// keep generating names while the current candidate already exists on the server
while (CheckFileExists(GetRequest(destination + "//" + fileNameCopy + extension)))
{
fileNameCopy = fileName + " (" + attempt.ToString() + ")";
attempt++;
}
// do your upload, we've got a name that's OK
}
private static FtpWebRequest GetRequest(string uriString)
{
var request = (FtpWebRequest)WebRequest.Create(uriString);
request.Credentials = new NetworkCredential("", "");
request.Method = WebRequestMethods.Ftp.GetFileSize;
return request;
}
private static bool CheckFileExists(WebRequest request)
{
try
{
request.GetResponse(); // any response means the file exists
return true;
}
catch
{
return false;
}
}
```
Edit: Updated so this will work for any type of web request and is a little slimmer.
|
start systemd service on start of requirement
I have a service which depends on mysql.service via `Requires=`. This is necessary, because the service crashes, if mysql is not available.
Executing `systemctl restart mysql` works fine. It stops my service first, restarts mysql and starts my service again.
How can I configure my unit, that it is also started again, if I run `systemctl stop mysql && systemctl start mysql`?
This is especially a problem during Debian's unattended-upgrades of mysql, because the update process uses `stop`/`start` rather than `restart`.
|
---
## new and better answer
Clearly I was not reading carefully yesterday.
It seems your problem can easily be solved by adding `mysql.service` to your service's `WantedBy=` under the `[Install]` section.
Then, after you re-enable your service, it will be started whenever `mysql.service` is started, as long as your service is enabled.
The result looks like this:
```
[Unit]
Requires=mysql.service
After=mysql.service
[Install]
WantedBy=multi-user.target mysql.service
```
---
## my old answer for reference:
I don't know if it is possible to configure your unit to behave the way you want it to.
I have, however, solved similar problems by installing systemd drop-in files to modify the foreign unit, in your case `mysql.service`.
Assuming your unit is `foo.service`, you could create a `.conf` file in
`/etc/systemd/system/mysql.service.d/` with the following content:
```
[Unit]
Wants=foo.service
Before=foo.service
```
This would cause systemd to try to start `foo.service` after every start of `mysql.service`.
For completeness' sake, to quote the systemd [documentation](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Description):
>
> In addition to /etc/systemd/system, the drop-in ".conf" files for system services can be placed in /usr/lib/systemd/system or /run/systemd/system directories. Drop-in files in /etc take precedence over those in /run which in turn take precedence over those in /usr/lib. Drop-in files under any of these directories take precedence over unit files wherever located. (Of course, since /run is temporary and /usr/lib is for vendors, it is unlikely drop-ins should be used in either of those places.)
>
>
>
|
onchange this.form.submit() not working for web form
I've been working on this way too long but can't seem to identify the problem. I've already read dozens of articles on Stack Overflow and elsewhere.
When I click and change the value, it doesn't auto-submit:
```
<form id="orderbyfrm" name="orderbyfrm" action="http://staging.whiterabbitexpress.com/" method="post" class="orderbyfrm">
<input name="s" value="<?php echo $wre_search_txt?>" type="hidden">
<label for="orderby" class="sortByLabel">Sort by </label>
<select class="sortByDropdown" name="orderby" id="orderby" onchange="this.form.submit();">
<option value="Relevance">Relevance</option>
<option value="likes" selected="selected">Likes</option>
<option value="comments" selected="comments">Comments</option>
</select>
</form>
```
in Chrome inspector I see an error
"Uncaught TypeError: Cannot call method 'submit' of null" onchange
I also tried onchange="javascript:document.orderbyfrm.submit" but that didn't work either.
|
You probably have an element or JS object called `form` or `submit` somewhere, conflicting with the real form.
The safest way is to use `document.getElementById`:
```
<select onchange="SubmitForm('orderbyfrm');">
```
And the JavaScript:
```
function SubmitForm(formId) {
var oForm = document.getElementById(formId);
if (oForm) {
oForm.submit();
}
else {
alert("DEBUG - could not find element " + formId);
}
}
```
For further debugging with good old `alert`: instead of the `alert("DEBUG ...")` line, use this:
```
var sDebugInfo = "found " + document.forms.length + " forms: \n";
for (var i = 0; i < document.forms.length; i++) {
var curForm = document.forms[i];
sDebugInfo += "name: " + curForm.name + ", id: " + curForm.id;
sDebugInfo += "\n";
}
alert(sDebugInfo);
```
Depending on what you get, continue debugging from there.
|
Select column name per row for max value in PySpark
I have a dataframe like this; only two columns are shown here, but there are many columns in the original dataframe
```
data = [(("ID1", 3, 5)), (("ID2", 4, 12)), (("ID3", 8, 3))]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
df.show()
+---+----+----+
| ID|colA|colB|
+---+----+----+
|ID1| 3| 5|
|ID2| 4| 12|
|ID3| 8| 3|
+---+----+----+
```
I want to extract the name of the column per row, which has the max value. Hence the expected output is like this
```
+---+----+----+-------+
| ID|colA|colB|Max_col|
+---+----+----+-------+
|ID1| 3| 5| colB|
|ID2| 4| 12| colB|
|ID3| 8| 3| colA|
+---+----+----+-------+
```
In case of tie, where colA and colB have same value, choose the first column.
How can I achieve this in pyspark?
|
You can use a `UDF` on each row for row-wise computation and use `struct` to pass multiple columns to the UDF. Hope this helps.
```
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType, StringType, StructType, StructField
from operator import itemgetter
data = [(("ID1", 3, 5,78)), (("ID2", 4, 12,45)), (("ID3", 70, 3,67))]
df = spark.createDataFrame(data, ["ID", "colA", "colB","colC"])
df.show()
+---+----+----+----+
| ID|colA|colB|colC|
+---+----+----+----+
|ID1| 3| 5| 78|
|ID2| 4| 12| 45|
|ID3| 70| 3| 67|
+---+----+----+----+
cols = df.columns
# to get max of values in a row
maxcol = F.udf(lambda row: max(row), IntegerType())
maxDF = df.withColumn("maxval", maxcol(F.struct([df[x] for x in df.columns[1:]])))
maxDF.show()
+---+----+----+----+-------+
|ID |colA|colB|colC|maxval |
+---+----+----+----+-------+
|ID1|3 |5 |78 |78 |
|ID2|4 |12 |45 |45 |
|ID3|70 |3 |67 |70 |
+---+----+----+----+-------+
# to get max of value & corresponding column name
schema=StructType([StructField('maxval',IntegerType()),StructField('maxval_colname',StringType())])
maxcol = F.udf(lambda row: max(row,key=itemgetter(0)), schema)
maxDF = df.withColumn('maxfield', maxcol(F.struct([F.struct(df[x],F.lit(x)) for x in df.columns[1:]]))).\
select(df.columns+['maxfield.maxval','maxfield.maxval_colname'])
+---+----+----+----+------+--------------+
| ID|colA|colB|colC|maxval|maxval_colname|
+---+----+----+----+------+--------------+
|ID1| 3 | 5 | 78 | 78 | colC |
|ID2| 4 | 12 | 45 | 45 | colC |
|ID3| 70 | 3 | 67 | 70 | colA |
+---+----+----+----+------+--------------+
```
|
Mongodb aggregate match array item with child array item
I would like to find documents that contain specific values in a child array.
This is an example document:
```
{
"_id" : ObjectId("52e9658e2a13df5be22cf7dc"),
"desc" : "Something somethingson",
"imageurl" : "http://",
"tags" : [
{
"y" : 29.3,
"brand" : "52d2cecd0bd1bd844d000018",
"brandname" : "Zara",
"type" : "Bow Tie",
"x" : 20,
"color" : "52d50c19f8f8ca8448000001",
"number" : 0,
"season" : 0,
"cloth" : "52d50d57f8f8ca8448000006"
},
{
"y" : 29.3,
"brand" : "52d2cecd0bd1bd844d000018",
"brandname" : "Zara",
"type" : "Bow Tie",
"x" : 20,
"color" : "52d50c19f8f8ca8448000001",
"number" : 0,
"season" : 0,
"cloth" : "52d50d57f8f8ca8448000006"
}
],
"user_id" : "52e953942a13df5be22cf7af",
"username" : "Thompson",
"created" : 1386710259971,
"occasion" : "ID",
"sex" : 0
}
```
The query I would like to do should look something like this:
```
db.posts.aggregate([
{$match: {tags.color:"52d50c19f8f8ca8448000001", tags.brand:"52d2cecd0bd1bd844d000018", occasion: "ID"}},
{$sort:{"created":-1}},
{$skip:0},
{$limit:10}
])
```
My problem is that I don't know how to match anything inside an array in the document, like "tags". How can I do this?
|
You could try to do it without aggregation framework:
```
db.posts.find(
{
occasion: "ID",
tags: { $elemMatch: { color:"52d50c19f8f8ca8448000001", brand:"52d2cecd0bd1bd844d000018" } }
}
).sort({created: -1}).limit(10)
```
And if you want to use aggregation:
```
db.posts.aggregate([
{$match:
{
tags: { $elemMatch: { color:"52d50c19f8f8ca8448000001", brand: "52d2cecd0bd1bd844d000018" } },
occasion: "ID"
}
},
{$sort:{"created":-1}},
{$limit:10}
])
```
|
Quick and Dirty Explanation of MVC3 Ajax/JS files, please :)
I've been trying to find a decent article or snippet that explains the differences between the JS (Validate & Ajax) files Microsoft includes with MVC3, and I've not been able to find anything.
Would someone kindly explain the differences and how they're supposed to be used (e.g. does one piggy-back off the other, is one used instead for X reason, etc):
- jquery.validate, jquery.validate.unobtrusive and MicrosoftMVCValidation
- jquery.unobtrusive-ajax, MicrosoftAjax & MicrosoftMVCAjax
To add on to this - basically why would I use jquery.validate vs. the unobtrusive or MVC validation. Or what is their purpose in conjunction with jquery.validate, etc. Likewise for the Ajax files.
Thanks a ton in advance :)
|
Here are my 2 cents:
- (jquery.validate and jquery.validate.unobtrusive) vs (MicrosoftMVCValidation)
Pick the first, as it is unobtrusive, meaning that HTML5 data-\* attributes are generated on the input fields and validators are unobtrusively attached in separate javascript files. With Microsoft validation your final markup is no longer just markup; it is markup mixed with javascript. Not only does this increase the size of your HTML pages, it also makes them uglier and unable to benefit from the browser's caching of external static resources.
Depending on the project, I decide whether to use the jQuery.validate plugin directly or to benefit from the autogenerated HTML5 data-\* attributes and let the jquery.validate.unobtrusive script do the automatic client-side validation based on DataAnnotations rules on my view models. (Well, actually I don't use DataAnnotations but FluentValidation.NET, though it doesn't matter to the client side, as they both emit ModelMetaData.) I must agree that with ASP.NET MVC 3 Microsoft made a step forward with those jquery.validate.unobtrusive scripts. But basically it will depend on the project I am working on and the amount of control I need.
- (jquery.unobtrusive-ajax) vs (MicrosoftAjax & MicrosoftMVCAjax)
None of the above :-) I would recommend using pure jQuery, but if you had to pick between jquery.unobtrusive-ajax and MicrosoftAjax, pick the first for the exact same reasons as in the previous point. Now I should probably explain why I don't like either. I have already pointed out the complete crappiness of all Microsoft\* scripts, so let's not repeat it. Even Microsoft themselves realized their mistake: starting from ASP.NET MVC 3, jQuery becomes the default library and their scripts are only included for compatibility with older applications that you might be converting, and I guess in future versions they will disappear completely. The question is: why pure jQuery compared to jquery.unobtrusive-ajax? Well, with the former I have far more control over the events of the AJAX requests. With jquery.unobtrusive-ajax, for example, when JSON objects are returned in the OnSuccess callback they are not automatically parsed into javascript objects; you have to do the parsing manually, and that just drives me nuts.
|
Windows 7 firewall causing remote connection to time out; should refuse connection
The Windows 7 Firewall is running with default options to allow outbound connections and block incoming connections. I have a process trying to connect regularly from within my local network, which is fine. When the program that accepts the connection is running, all goes well. When it's not, Windows 7 Firewall is causing the connection attempt to time out instead of refusing the connection (like it should be doing).
I've tried creating rules to explicitly allow all connections on that port, but it's timing out nonetheless. I'm wondering if the firewall is blocking the `RST` packet from being sent back? Disabling the public profile completely causes the connection to be refused (as expected), but a firewall rule does not.
Here are the rules I've tried so far without success:
>
> Port type, TCP, Specified port 11211, Allow the connection, apply to Domain, Private, and Public
>
>
>
Any suggestions?
**Note:** When I say "refused" I'm referring to the operating system's response to the connection saying "sorry, nothing is listening." Test this out on your local machine. Open a command prompt and type `telnet localhost 60000`. You should see something like this:
`Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused`
This is a connection refused. The operating system is "refusing" the request, because (at least on my system) nothing is listening on port 60000.
Now try `telnet example.com 60000`. Wait. You'll eventually see something like this:
`Trying 192.0.43.10...
telnet: connect to address 192.0.43.10: Operation timed out`
See the difference? There is no host at example.com to reply with a "refused" so *your* local operating system (e.g. not the server) says "nothing is there, timeout."
Very different. The problem is Windows 7 firewall is not generating the "refused" as it should be. I'd like to find out how to fix this.
|
This is due to a Windows Firewall feature called "stealth mode". The idea is that refusing a connection, instead of letting it time out, would tell an attacker that there actually is a computer on that IP address. With the connection attempt timing out, the hope is that the attacker will ignore the computer.
For more information regarding stealth mode see [technet](http://technet.microsoft.com/en-us/library/dd448557%28WS.10%29).
You can disable stealth mode by modifying the registry as documented [here](http://msdn.microsoft.com/en-us/library/ff720058%28v=prot.10%29.aspx):
To each of `Software\Policies\Microsoft\WindowsFirewall\DomainProfile`, `Software\Policies\Microsoft\WindowsFirewall\PrivateProfile`, `Software\Policies\Microsoft\WindowsFirewall\PublicProfile` and `Software\Policies\Microsoft\WindowsFirewall\StandardProfile`, add a `REG_DWORD` named `DisableStealthMode` with a value of `0x00000001`.
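As a sketch, the same change can be scripted with `reg add` (which creates the keys if they do not yet exist); run from an elevated command prompt:

```
reg add "HKLM\Software\Policies\Microsoft\WindowsFirewall\DomainProfile" /v DisableStealthMode /t REG_DWORD /d 1 /f
reg add "HKLM\Software\Policies\Microsoft\WindowsFirewall\PrivateProfile" /v DisableStealthMode /t REG_DWORD /d 1 /f
reg add "HKLM\Software\Policies\Microsoft\WindowsFirewall\PublicProfile" /v DisableStealthMode /t REG_DWORD /d 1 /f
reg add "HKLM\Software\Policies\Microsoft\WindowsFirewall\StandardProfile" /v DisableStealthMode /t REG_DWORD /d 1 /f
```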
After doing so, you need to restart the Windows Firewall service for the changes to take effect.
|
Kiss FFT seems to multiply data by the number of points that it transforms
My limited understanding of the Fourier transform is that you should be able to toggle between the time and frequency domain without changing the original data. So, here is a summary of what I (think I) am doing:
1. Using `kiss_fft_next_fast_size(994)` to determine that I should use 1000.
2. Using `kiss_fft_alloc(...)` to create a **kiss\_fft\_cfg** with `nfft = 1000`.
3. Extending my input data from size 994 to 1000 by padding extra points as zero.
4. Passing **kiss\_fft\_cfg** to `kiss_fft(...)` along with my input and output arrays.
5. Using `kiss_fft_alloc(...)` to create an **inverse kiss\_fft\_cfg** with `nfft = 1000`.
6. Passing the **inverse kiss\_fft\_cfg** to `kiss_fft(...)` inputting the previous output array.
7. Expecting the original data back, but getting each datum exactly 1000 times bigger!
I have put a [full example here](http://liveworkspace.org/code/912ca15cac001f0bf63b18cb107c730b "full example here"), and my 50-odd lines of code can be found **right at the end**. Although I can work around this by dividing each result by the value of `OPTIMAL_SIZE` (i.e. 1000) that fix makes me very uneasy without understanding why.
Please can you advise what simply stupid thing(s) I am doing wrong?
|
This is to be expected: the inverse discrete Fourier transform (which can be implemented using the Fast Fourier Transform) requires a scaling by 1/N, i.e. a division by N:
>
> The normalization factor multiplying the DFT and IDFT (here 1 and 1/N)
> and the signs of the exponents are merely conventions, and differ in
> some treatments. The only requirements of these conventions are that
> the DFT and IDFT have opposite-sign exponents and that the product of
> their normalization factors be 1/N. A normalization of \sqrt{1/N} for both the DFT and IDFT makes the transforms unitary,
> which has some theoretical advantages. But it is often more practical
> in numerical computation to perform the scaling all at once as above
> (and a unit scaling can be convenient in other ways).
>
>
>
<http://en.wikipedia.org/wiki/Dft>
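In practice, with Kiss FFT this means dividing every output sample by the transform size after the inverse pass. A minimal sketch, assuming `out` is the `kiss_fft_cpx` buffer produced by the inverse transform and `NFFT` is your transform size (1000 here):

```
/* Undo the factor of NFFT that the inverse transform introduces. */
for (int i = 0; i < NFFT; ++i) {
    out[i].r /= NFFT; /* real part */
    out[i].i /= NFFT; /* imaginary part */
}
```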
|
How do I prevent the status bar and navigation bar from animating during an activity scene animation transition?
Firstly, my status bar background is set to dark brown, and my navigation bar background is default black. I'm using the Material light theme.
I'm starting a new activity using `ActivityOptions.makeSceneTransitionAnimation` with default transitions, and I notice that both the status and navigation bars briefly fade to white and then back to the correct colors.
According to the [documentation](https://developer.android.com/training/material/animations.html):
>
> To get the full effect of a transition, you must enable window content transitions on both the calling and called activities. Otherwise, the calling activity will start the exit transition, but then you'll see a window transition (like scale or fade)
>
>
>
I am using `getWindow().requestFeature(Window.FEATURE_CONTENT_TRANSITIONS);` on both the calling and called activities.
Similarly, if I change the enter transition to a slide, both the status and navigation bars briefly have a slide transition with a white background.
How do I prevent the status bar and navigation bar from animating during an activity scene animation transition?
|
There are two approaches you can use that I know of to prevent the navigation/status bar from animating during the transition:
## Approach #1: Exclude the status bar and navigation bar from the window's default exit/enter fade transition
The reason why the navigation/status bar are fading in and out during the transition is because by default all non-shared views (including the navigation/status bar backgrounds) will fade out/in in your calling/called Activitys respectively once the transition begins. You can, however, easily get around this by excluding the navigation/status bar backgrounds from the window's default exit/enter `Fade` transition. Simply add the following code to your Activitys' `onCreate()` methods:
```
Transition fade = new Fade();
fade.excludeTarget(android.R.id.statusBarBackground, true);
fade.excludeTarget(android.R.id.navigationBarBackground, true);
getWindow().setExitTransition(fade);
getWindow().setEnterTransition(fade);
```
This transition could also be declared in the activity's theme using XML (i.e. in your own `res/transition/window_fade.xml` file):
```
<?xml version="1.0" encoding="utf-8"?>
<fade xmlns:android="http://schemas.android.com/apk/res/android">
<targets>
<target android:excludeId="@android:id/statusBarBackground"/>
<target android:excludeId="@android:id/navigationBarBackground"/>
</targets>
</fade>
```
## Approach #2: Add the status bar and navigation bar as shared elements
This approach builds off of klmprt's answer, which *almost* worked for me... although I still needed to make a couple of modifications.
In my calling Activity, I used the following code to start the Activity:
```
View statusBar = findViewById(android.R.id.statusBarBackground);
View navigationBar = findViewById(android.R.id.navigationBarBackground);
List<Pair<View, String>> pairs = new ArrayList<>();
if (statusBar != null) {
pairs.add(Pair.create(statusBar, Window.STATUS_BAR_BACKGROUND_TRANSITION_NAME));
}
if (navigationBar != null) {
pairs.add(Pair.create(navigationBar, Window.NAVIGATION_BAR_BACKGROUND_TRANSITION_NAME));
}
pairs.add(Pair.create(mSharedElement, mSharedElement.getTransitionName()));
Bundle options = ActivityOptions.makeSceneTransitionAnimation(activity,
pairs.toArray(new Pair[pairs.size()])).toBundle();
startActivity(new Intent(context, NextActivity.class), options);
```
So far this is essentially the same thing that klmprt suggested in his answer. However, I also needed to add the following code in my called Activity's `onCreate()` method in order to prevent the status bar and navigation bar from "blinking" during the transition:
```
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_next);
// Postpone the transition until the window's decor view has
// finished its layout.
postponeEnterTransition();
final View decor = getWindow().getDecorView();
decor.getViewTreeObserver().addOnPreDrawListener(new ViewTreeObserver.OnPreDrawListener() {
@Override
public boolean onPreDraw() {
decor.getViewTreeObserver().removeOnPreDrawListener(this);
startPostponedEnterTransition();
return true;
}
});
}
```
Adding the status bar and navigation bar backgrounds as shared elements will force them to be drawn on top of the window's default exit/enter fade transition, meaning that they will not fade during the transition. More discussion about this approach can be found in [this Google+ post](https://plus.google.com/+AlexLockwood/posts/RPtwZ5nNebb).
|
Try Catch Throw
I'm trying to understand how I'm going to use Throw in my code. I have a MainForm class to handle the Windows Form GUI and then I have the Manager class to read and save data from/to files.
I use Try/Catch in both classes, but my instructor wants me to use Throw in the Manager class, and despite reading about it, I don't get what it will do. Will Throw affect the Try/Catch in the MainForm class?
I also use a message box in the Manager class if an exception is caught, but no message boxes are allowed in the Manager according to the instructor, so what should I do? Can I use the message box in the MainForm class only? I'd appreciate some help to understand and expand my knowledge! Thanks!
MainForm class:
```
try
{
motelManager.SaveToFile(file);
}
catch
{
MessageBox.Show("Ett fel uppstod!", "Varning!", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
```
Manager class:
```
public void SaveToFile(string filePath)
{
try
{
string newFilePath = filePath.Replace(".bin", "");
filestream = new FileStream(newFilePath + ".bin", FileMode.Create);
BinaryFormatter b = new BinaryFormatter();
b.Serialize(filestream, animals);
}
catch(Exception ex)
{
MessageBox.Show(ex.Message, "Varning!", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
if (filestream != null) filestream.Close();
}
```
|
Your Manager class should look like this:
```
public void SaveToFile(string filePath)
{
try
{
string newFilePath = filePath.Replace(".bin", "");
filestream = new FileStream(newFilePath + ".bin", FileMode.Create);
BinaryFormatter b = new BinaryFormatter();
b.Serialize(filestream, animals);
}
catch(Exception ex)
{
    if (filestream != null) filestream.Close();
    throw;
    // use "throw;" rather than "throw ex;":
    // both rethrow the same exception, but
    // "throw ex;" resets the stack trace
}
// or, better, use a finally block, so the stream is
// also closed when no exception occurs:
//catch(Exception ex)
//{
//    throw;
//}
//finally
//{
//    if (filestream != null) filestream.Close();
//}
}
```
And in your main class:
```
try
{
motelManager.SaveToFile(file);
}
catch (Exception e)
{
MessageBox.Show("Ett fel uppstod!", "Varning!", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
```
|
Precise seek in MP3 files on Android
I'm building an app where it's important to have accurate seeking in MP3 files.
Currently, I'm using ExoPlayer in the following manner:
```
public void playPeriod(long startPositionMs, long endPositionMs) {
MediaSource mediaSource = new ClippingMediaSource(
new ExtractorMediaSource.Factory(mDataSourceFactory).createMediaSource(mFileUri),
startPositionMs * 1000,
endPositionMs * 1000
);
mExoPlayer.prepare(mediaSource);
mExoPlayer.setPlayWhenReady(true);
}
```
In some cases, this approach results in offsets of 1-3 seconds relative to the expected playback times.
[I found this issue on ExoPlayer's github](https://github.com/google/ExoPlayer/issues/4676). Looks like this is an intrinsic limitation of ExoPlayer with Mp3 format and it won't be fixed.
[I also found this question](https://stackoverflow.com/questions/30401733/android-mediaextractor-seek-accuracy-with-mp3-audio-files) which seems to suggest that the same issue exists in Android's native MadiaPlayer and MediaExtractor.
Is there a way to perform accurate seek in local (e.g. on-device) Mp3 files on Android? I'm more than willing to take any hack or workaround.
|
MP3 files are not inherently seekable. They don't contain any timestamps. It's just a series of MPEG frames, one after the other. That makes this tricky. There are two methods for seeking an MP3, each with some tradeoffs.
The most common (and fastest) method is to read the bitrate from the first frame header (or, maybe the average bitrate from the first few frame headers), perhaps 128k. Then, take the byte length of the entire file and divide it by this bitrate to estimate the time length of the file. Then, let the user seek into the file. If they seek `1:00` into a `2:00` file, jump to the byte at the 50% mark of the file and "needle drop" into the stream. Read the file until the sync word for the next frame header comes by, and then begin decoding.
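A minimal sketch of that estimate (assuming a constant-bitrate file whose byte length excludes ID3 tag data; the names are illustrative):

```
// Estimate the byte offset at which to "needle drop" into a CBR MP3.
long estimateSeekOffset(long fileSizeBytes, int bitrateBitsPerSec, long seekMs) {
    // Total duration implied by the size and the (assumed constant) bitrate.
    long durationMs = fileSizeBytes * 8L * 1000L / bitrateBitsPerSec;
    // Seek proportionally into the stream.
    return (long) (fileSizeBytes * ((double) seekMs / durationMs));
}
```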
As you can imagine, this method isn't accurate. At best, you're going to be within half a frame of the target on average. With frame sizes being 576 samples, this is pretty accurate. However, there are problems with calculating the needle-drop point in the first place. The most common issue is that ID3 tags and such add size to the file, throwing off the size calculations. A more severe issue is a variable bitrate (VBR) file. If you have music encoded with VBR, and the beginning of the track is silent-ish or otherwise easy to encode, the beginning might be 32 kbps whereas one second in might be 320 kbps. A 10x error in calculating the time length of the file!
The second method is to decode the whole file to raw PCM samples. This means you can guarantee sample-accurate seeking, but you *must* decode at least up to the seek point. If you want a proper time length for the full track, you *must* decode the whole file. Some 20 years ago, this was painfully slow. Seeking into a track would take almost as long as listening to the track to the point you were seeking to! These days, for short files, you can probably decode them so fast that it doesn't matter so much.
TL;DR; If you must have sample-accurate seeking, decode the files first before putting them in your player, but understand the performance penalty first before deciding this tradeoff.
|
Doing an HTTP Post with headers and a body
Right, so I've been working on something which requires basic authentication through headers, and passing some variables via HTTP Post. This is a terminal app.
This is what my code looks like:
```
import 'package:http/http.dart' as http;
import 'dart:io';
void main() {
var url = "http://httpbin.org/post";
var client = new http.Client();
var request = new http.Request('POST', Uri.parse(url));
var body = {'content':'this is a test', 'email':'[email protected]', 'number':'441276300056'};
request.headers[HttpHeaders.CONTENT_TYPE] = 'application/json; charset=utf-8';
request.headers[HttpHeaders.AUTHORIZATION] = 'Basic 021215421fbe4b0d27f:e74b71bbce';
request.body = body;
var future = client.send(request).then((response) => response.stream.bytesToString().then((value) => print(value.toString()))).catchError((error) => print(error.toString()));
}
```
I'm using httpbin as an echo server, so it tells me what I'm passing in. My code works correctly if I don't pass a body, or if I pass a string as the body.
Obviously that's because the body attribute in http.Request only accepts strings, and I'm trying to pass a map to it.
I could convert that to a string, and it would probably work, but I still think my code could be improved. Not from a syntax point of view, or from how it's handling the future, but I'm not certain using http.dart is the right thing to do.
Could someone point me in the right direction?
Thanks in advance.
|
JSON **is** a String. You need to encode your map to JSON and pass it as a String.
You can use `bodyFields` instead of `body` to pass a Map.
This way your `content-type` is fixed to `"application/x-www-form-urlencoded"`.
The DartDoc for `post` says:
```
/// If [body] is a Map, it's encoded as form fields using [encoding]. The
/// content-type of the request will be set to
/// `"application/x-www-form-urlencoded"`; this cannot be overridden.
```
I was able to send JSON data this way a while ago
```
return new http.Client()
.post(url, headers: {'Content-type': 'application/json'},
body: JSON.encoder.convert({"distinct": "users","key": "account","query": {"active":true}}))
.then((http.Response r) => r.body)
.whenComplete(() => print('completed'));
```
**EDIT**
```
import 'package:http/http.dart' as http;
import 'dart:io';
void main() {
var url = "http://httpbin.org/post";
var client = new http.Client();
var request = new http.Request('POST', Uri.parse(url));
var body = {'content':'this is a test', 'email':'[email protected]', 'number':'441276300056'};
// request.headers[HttpHeaders.CONTENT_TYPE] = 'application/json; charset=utf-8';
request.headers[HttpHeaders.AUTHORIZATION] = 'Basic 021215421fbe4b0d27f:e74b71bbce';
request.bodyFields = body;
var future = client.send(request).then((response)
=> response.stream.bytesToString().then((value)
=> print(value.toString()))).catchError((error) => print(error.toString()));
}
```
produces
```
{
"args": {},
"data": "",
"files": {},
"form": {
"content": "this is a test",
"email": "[email protected]",
"number": "441276300056"
},
"headers": {
"Accept-Encoding": "gzip",
"Authorization": "Basic 021215421fbe4b0d27f:e74b71bbce",
"Connection": "close",
"Content-Length": "63",
"Content-Type": "application/x-www-form-urlencoded; charset=utf-8",
"Host": "httpbin.org",
"User-Agent": "Dart/1.5 (dart:io)",
"X-Request-Id": "b108713b-d746-49de-b9c2-61823a93f629"
},
"json": null,
"origin": "91.118.62.43",
"url": "http://httpbin.org/post"
}
```
|
Creating new array with contents from old array while keeping the old array static
Say I have an array of integers,
```
int[] array = new int[7];
for(int i = 0; i < 7; i++)
{
array[i] = i;
}
```
Now I want to get only the first four numbers in that `array`, and put them into another array.
So I really want something like...
```
newArray = array[0-3].
```
I know that syntax is wrong, but I'm just giving the general idea of what I'm trying to do. Is anything like that possible? Or do I have to create a loop and add it manually into the `newArray`?
|
**Method 1**
```
int[] newArr = new int[4];
System.arraycopy(array, 0, newArr, 0, 4);
```
The method takes five arguments:
1. `src`: The source array.
2. `srcPosition`: The position in the source from where you wish to begin
copying.
3. `des`: The destination array.
4. `desPosition`: The position in the destination array to where the copy
should start.
5. `length`: The number of elements to be copied.
This method throws a NullPointerException if either src or des is null.
It also throws an ArrayStoreException in the following cases:
- If the src is not an array.
- If the des is not an array.
- If src and des are arrays of different data types.
**Method 2**
Use
`Arrays.copyOf(array, 4)` to copy the first 4 elements, truncating the rest,
or
`Arrays.copyOfRange(array, 1, 5)` to copy elements 1-4 if you need the middle of an array.
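A short sketch of Method 2 (assuming `java.util.Arrays` is imported):

```
import java.util.Arrays;

int[] first4 = Arrays.copyOf(array, 4); // elements 0..3; pads with zeros if array is shorter
int[] middle = Arrays.copyOfRange(array, 1, 5); // elements 1..4 (the end index is exclusive)
```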
|
php mysqli fetch data and divide two variables toFixed(2) / two decimal places
I have been able to divide two variables from a mysqli query... How do I round the number to two decimal places / toFixed(2)?
**php**
```
$Date = $_GET['date'];
$Win = 'Win';
$testsql="
SELECT
count(*) AS bet_count,
SUM(IF(result ='$Win', 1, 0)) AS win_count
FROM bets WHERE betDate = '$Date' GROUP BY betDate
";
$testresult = mysqli_query($connection, $testsql);
while ($testrow = mysqli_fetch_assoc($testresult))
{
echo "<tr>";
echo "<td class='text-center'>".($testrow['bet_count']/$testrow['win_count']). "</td>";
echo "</tr>";
}
```
So the bet\_count / win\_count division works as expected... I just need the result, e.g. 2.371237234, rounded to two decimal places: 2.37
|
You may try with [number\_format()](http://php.net/manual/en/function.number-format.php) function:
```
<?php
$Date = $_GET['date'];
$Win = 'Win';
$testsql = "
SELECT
count(*) AS bet_count,
SUM(IF(result ='$Win', 1, 0)) AS win_count
FROM bets WHERE betDate = '$Date' GROUP BY betDate
";
$testresult = mysqli_query($connection, $testsql);
while ($testrow = mysqli_fetch_assoc($testresult)) {
echo "<tr>";
$value = number_format($testrow['bet_count']/$testrow['win_count'], 2, '.', '');
echo "<td class='text-center'>".$value."</td>";
echo "</tr>";
}
```
|
Put functions into vector and execute
I want to make a vector/deque as a function container.
This container must take different functions with different arguments.
Example functions:
```
program_data::getInstance().function(int,std::string);
program_data::getInstance().foo(int);
program_data::getInstance().exam(int,std::string,double);
```
And could you show example code for how to **put** these functions with arguments into a std::vector / std::deque and **execute** them?
I think that I should use std::function and std::bind, but I don't know how to support different functions with different argument counts.
**With arguments** -> my functions (called `function`, `foo`, `exam` from `program_data`) do some actions using their arguments.
*For example: normally i execute this function:*
`program_data::getInstance().function(10,"halo");`
And now I want to put these functions into a `std::vector // deque` and execute them with the arguments I stored with each function. I.e., if I put the function above into the vector and use (pseudocode) `vector[0].run();`, then my program runs `program_data::getInstance().function(int,std::string);`.
|
Absolutely, use `std::vector<std::function<void()>>` - that is, a vector of type-erased function containers that can hold any object that is callable without arguments:
```
std::vector<std::function<void()>> vector;
// populate the vector
vector.push_back([]{ program_data::getInstance().function(10,"halo"); });
// execute items in the vector
vector[0]();
```
Here I'm populating the vector with a captureless lambda; you can also use lambdas with captures, bind-expressions (`std::bind`), and other callable objects.
If you use a lambda with a capture, you need to ensure that captured variables are either captured by value or have a lifetime that contains that of the collection of callables:
```
std::string tmp;
vector.push_back([&tmp]{ program_data::getInstance().something(tmp); });
```
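For completeness, here is a self-contained sketch; the free functions below merely stand in for the `program_data` singleton, which is assumed here:

```
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// stand-ins for program_data::getInstance().function(...) and .foo(...)
void function(int n, const std::string& s) { std::cout << n << " " << s << "\n"; }
void foo(int n) { std::cout << n << "\n"; }

int main() {
    std::vector<std::function<void()>> tasks;
    tasks.push_back([]{ function(10, "halo"); }); // lambda storing the arguments
    tasks.push_back(std::bind(foo, 42)); // bind-expression alternative
    for (auto& task : tasks) task(); // run everything in order
}
```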
|
What's the minimal requirement for code to be considered an AI implementation?
I'd like to know at what point code can be considered an AI implementation.
I mean, what is the minimal requirement for that?
Can you give a simple code example?
|
Any program in which the decisions made at time *t* are impacted by the outcome of decisions made at time *t-1*. It learns.
A very simple construct within the field of Neural Networks is a [Perceptron](http://en.wikipedia.org/wiki/Perceptron). It *learns* by adjusting weights given to different input values based on the accuracy of the result. It is *trained* with a known set of good inputs. Here is an article that covers the theory behind a [single layer Perceptron network](http://www.computing.dcu.ie/~humphrys/Notes/Neural/single.neural.html) including an introduction to the proof that networks of this type can solve specific types of problems:
>
> If the exemplars used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes.
>
>
>
Here is a [book chapter](http://www.bcp.psych.ualberta.ca/~mike/Book3/pdfstuff/Chapter11.pdf) in PDF form that covers the topic. Here is an [Excel Spreadsheet](http://www.cs.nott.ac.uk/~gxk/courses/g5aiai/006neuralnetworks/perceptron.xls) that explains a bit more with a concrete example. And finally, here is a beautiful [Javascript Example](http://intsys.mgt.qub.ac.uk/notes/perceptr.html) that you can watch learn.
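As a concrete illustration (a minimal sketch, independent of the linked materials), here is a single perceptron learning the logical AND function. The weight adjustments at each step depend on the errors made at earlier steps, which is exactly the learning behaviour described above:

```
# Train a single perceptron on the AND truth table.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # input weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # zero when the prediction is correct
            w[0] += lr * err * x1     # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)  # converges to a separating line for AND, e.g. w = [0.2, 0.1], b = -0.2
```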
|
Write Struct to Json File using Struct Fields (not json keys)
How can I read a json file into a struct, and then Marshal it back out to a json string with the Struct fields as keys (rather than the original json keys)?
(see `Desired Output to Json File` below...)
Code:
```
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
)
type Rankings struct {
Keyword string `json:"keyword"`
GetCount uint32 `json:"get_count"`
Engine string `json:"engine"`
Locale string `json:"locale"`
Mobile bool `json:"mobile"`
}
func main() {
var jsonBlob = []byte(`
{"keyword":"hipaa compliance form", "get_count":157, "engine":"google", "locale":"en-us", "mobile":false}
`)
rankings := Rankings{}
err := json.Unmarshal(jsonBlob, &rankings)
if err != nil {
// nozzle.printError("opening config file", err.Error())
}
rankingsJson, _ := json.Marshal(rankings)
err = ioutil.WriteFile("output.json", rankingsJson, 0644)
fmt.Printf("%+v", rankings)
}
```
Output on screen:
```
{Keyword:hipaa compliance form GetCount:157 Engine:google Locale:en-us Mobile:false}
```
Output to Json File:
```
{"keyword":"hipaa compliance form","get_count":157,"engine":"google","locale":"en-us","mobile":false}
```
***Desired Output to Json File:***
```
{"Keyword":"hipaa compliance form","GetCount":157,"Engine":"google","Locale":"en-us","Mobile":false}
```
|
If I understand your question correctly, all you want to do is remove the json tags from your struct definition.
So:
```
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
)
type Rankings struct {
Keyword string
GetCount uint32
Engine string
Locale string
Mobile bool
}
func main() {
var jsonBlob = []byte(`
{"keyword":"hipaa compliance form", "get_count":157, "engine":"google", "locale":"en-us", "mobile":false}
`)
rankings := Rankings{}
err := json.Unmarshal(jsonBlob, &rankings)
if err != nil {
// nozzle.printError("opening config file", err.Error())
}
rankingsJson, _ := json.Marshal(rankings)
err = ioutil.WriteFile("output.json", rankingsJson, 0644)
fmt.Printf("%+v", rankings)
}
```
Results in:
```
{Keyword:hipaa compliance form GetCount:0 Engine:google Locale:en-us Mobile:false}
```
And the file output is:
```
{"Keyword":"hipaa compliance form","GetCount":0,"Engine":"google","Locale":" en-us","Mobile":false}
```
Running example at <http://play.golang.org/p/dC3s37HxvZ>
Note: GetCount shows 0, since it was read in as `"get_count"`. If you want to read in JSON that has `"get_count"` vs. `"GetCount"`, but output `"GetCount"` then you'll have to do some additional parsing.
See [Go- Copy all common fields between structs](https://stackoverflow.com/q/11527935/1162491) for additional info about this particular situation.
|
How to avoid multicollinearity in SVM input data?
Do you know of any techniques that allow one to avoid and get rid of multicollinearity in SVM input data? We all know that if multicollinearity exists, explanatory variables have a high degree of correlation between themselves, which is problematic in all regression models (the data matrix is not invertible and so on).
My question is actually a bit more subtle, as relevant data has to be selected. In other words, multicollinearity must be avoided while keeping relevant input data. So in a way, I guess that my question is also about reducing the input data matrix dimension.
So, I've thought about PCA, which permits both reducing dimension and getting uncorrelated vectors (the PCs), but then I don't know how to deal with the eigenvectors' signs.
|
Multicollinearity is not generally a problem for SVMs. Ridge regression is often used where multicollinearity is an issue, as the regularisation term resolves the invertibility issue by adding a ridge. The SVM uses the same regularisation term as ridge regression does, but with the hinge loss in place of the squared error.
Ridge regression has a link with PCA [as explained by Tamino](http://tamino.wordpress.com/2011/02/12/ridge-regression/), essentially it penalises principal components with large eigenvalues less than components with low eigenvalues, so it is a bit like having a soft selection of the most important PCs, rather than a hard (binary) selection.
The important thing is to make sure that the regularisation parameter, C, and any kernel parameters are tuned correctly and carefully, preferably by minimising an appropriate cross-validation based model selection criterion. This really is the key to getting good results with an SVM (and kernel methods in general).
|
START TRANSACTION inside BEGIN ... END context or outside and LOOP syntax
I have two questions about Compound-Statement and Transactions in MySQL.
FIRST:
There are two notes in MySQL Manual:
>
> Note
>
>
> Within all stored programs, the parser treats BEGIN [WORK] as the
> beginning of a BEGIN ... END block. To begin a transaction in this
> context, use START TRANSACTION instead.
>
>
> Note
>
>
> Within all stored programs (stored procedures and functions, triggers,
> and events), the parser treats BEGIN [WORK] as the beginning of a
> BEGIN ... END block. Begin a transaction in this context with START
> TRANSACTION instead.
>
>
>
I can't understand what exactly is meant. Do they mean that I have to put `START TRANSACTION` instead of `BEGIN`, or right after `BEGIN`?
```
// 1st variant:
BEGIN
START TRANSACTION
COMMIT
END
// 2nd variant:
START TRANSACTION
COMMIT
END
```
Which one is the right way, 1st variant or 2nd variant?
SECOND:
I don't want to create a Stored Procedure or Function. I just want to create a Compound-Statement Block with a loop inside it in the general flow, like this:
```
USE 'someDb';
START TRANSACTION
... create table statement
... insert statement
// now I want to implement some insert/select statements using loop, I do as follows:
DELIMITER $
BEGIN
SET @n = 1, @m = 2;
lab1: LOOP
... some insert, select statements here
END LOOP lab1;
END $
DELIMITER ;
END
COMMIT
```
Is this kind of structure possible? Because an error is thrown:
```
Query: BEGIN SET @n = 1, @m = 2; lab1: LOOP SELECT ...
Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SET @n = 1, @m = 2;
lab1: LOOP SELECT ...
```
My questions are:
1. Is it allowed to use `BEGIN...END` just in general flow without creating and using Stored Procedures or Functions?
2. Is it allowed to use `BEGIN...END` inside of `START TRANSACTION...COMMIT` or I have to put `START TRANSACTION...COMMIT` inside of `BEGIN...END`?
```
BEGIN
START TRANSACTION
COMMIT
END
// vs.
START TRANSACTION
BEGIN
END
COMMIT
```
3. Do I by all means have to use `BEGIN...END` if I want to use only `LOOP`? May I just use `LOOP` syntax without starting `BEGIN...END`? The only example in the manual for `LOOP` is this:
```
CREATE PROCEDURE doiterate(p1 INT)
BEGIN
label1: LOOP
...
```
|
1. >
> Is it allowed to use BEGIN...END just in general flow without creating and using Stored Procedures or Functions?
>
>
>
No: compound statements can only be used within the body of stored programs.
2. >
> Is it allowed to use `BEGIN...END` inside of `START TRANSACTION...COMMIT` or I have to put `START TRANSACTION...COMMIT` inside of `BEGIN...END`?
>
>
>
`START TRANSACTION;` and `COMMIT;` are separate statements. If you want the body of a stored program to contain multiple statements, it will need to enclose those statements in some sort of compound statement block such as `BEGIN ... END` (which is similar to enclosing a block of statements in braces `{ ... }` within a C-like language).
That said, you *could* have a stored program which contains only the single-statement `START TRANSACTION;` or `COMMIT;`—such a program would not require any compound statement block and would merely commence a new / commit the current transaction respectively.
Outside of a stored program, where compound statement blocks are not permitted, you can issue `START TRANSACTION;` and `COMMIT;` statements as & when required.
3. >
> Do I by all means have to use `BEGIN...END` if I want to use only `LOOP`? May I just use `LOOP` syntax without starting `BEGIN...END`?
>
>
>
`LOOP` is also a compound statement block, which is only valid within a stored procedure. It is not *necessary* to enclose a `LOOP` block within a `BEGIN ... END` block, although it is usual (as otherwise it is difficult to perform any required loop initialisation).
In your case, where you apparently want to insert data into a table from a looping construct, you will either need to:
- define a stored program in which you use `LOOP` (see the sketch after this list);
- iterate a loop in an external program that executes database queries on each iteration; or
- redefine your logic in terms of sets upon which SQL can directly operate.
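As a sketch of the first option (assuming a table `t` with an integer column `n`):

```
DELIMITER $$
CREATE PROCEDURE fill_table()
BEGIN
  DECLARE i INT DEFAULT 0;
  START TRANSACTION;
  lab1: LOOP
    INSERT INTO t (n) VALUES (i);
    SET i = i + 1;
    IF i >= 10 THEN
      LEAVE lab1;
    END IF;
  END LOOP lab1;
  COMMIT;
END $$
DELIMITER ;

CALL fill_table();
```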
|
delete rows containing numeric values in strings from pandas dataframe
I have a pandas data frame with 2 columns, type and text
The text column contains string values.
How can I delete rows which contain numeric values in the text column?
e.g:
```
`ABC 1.3.2`, `ABC12`, `2.2.3`, `ABC 12 1`
```
I have tried the code below, but get an error. Any idea why this is giving an error?
```
df.drop(df[bool(re.match('^(?=.*[0-9]$)', df['text'].str))].index)
```
|
In your case, I think it's better to use simple indexing rather than drop. For example:
```
>>> df
text type
0 abc b
1 abc123 a
2 cde a
3 abc1.2.3 b
4 1.2.3 a
5 xyz a
6 abc123 a
7 9999 a
8 5text a
9 text a
>>> df[~df.text.str.contains(r'[0-9]')]
text type
0 abc b
2 cde a
5 xyz a
9 text a
```
That locates any rows with no numeric characters in the text
**To explain:**
```
df.text.str.contains(r'[0-9]')
```
returns a boolean series of where there are any digits:
```
0 False
1 True
2 False
3 True
4 True
5 False
6 True
7 True
8 True
9 False
```
and you can use this with `~` to index your dataframe wherever it returns `False`
|
Draw arbitrary plane from plane equation, OpenGL
I have a plane defined by the standard plane equation a\*x + b\*y + c\*z + d = 0, which I would like to be able to draw using OpenGL. How can I derive the four points needed to draw it as a quadrilateral in 3D space?
My plane type is defined as:
```
struct Plane {
float x,y,z; // plane normal
float d;
};
void DrawPlane(const Plane & p)
{
???
}
```
EDIT:
So, rethinking the question, what I actually wanted was to draw a discrete representation of a plane in 3D space, not an infinite plane.
Based on the answer provided by @a.lasram, I have produced this implementation, which does just that:
```
void DrawPlane(const Vector3 & center, const Vector3 & planeNormal, float planeScale, float normalVecScale, const fColorRGBA & planeColor, const fColorRGBA & normalVecColor)
{
Vector3 tangent, bitangent;
OrthogonalBasis(planeNormal, tangent, bitangent);
const Vector3 v1(center - (tangent * planeScale) - (bitangent * planeScale));
const Vector3 v2(center + (tangent * planeScale) - (bitangent * planeScale));
const Vector3 v3(center + (tangent * planeScale) + (bitangent * planeScale));
const Vector3 v4(center - (tangent * planeScale) + (bitangent * planeScale));
// Draw wireframe plane quadrilateral:
DrawLine(v1, v2, planeColor);
DrawLine(v2, v3, planeColor);
DrawLine(v3, v4, planeColor);
DrawLine(v4, v1, planeColor);
// And a line depicting the plane normal:
const Vector3 pvn(
(center[0] + planeNormal[0] * normalVecScale),
(center[1] + planeNormal[1] * normalVecScale),
(center[2] + planeNormal[2] * normalVecScale)
);
DrawLine(center, pvn, normalVecColor);
}
```
Where OrthogonalBasis() computes the tangent and bi-tangent from the plane normal.
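For reference, a possible sketch of such a helper (an illustration only; it assumes `Vector3` has a 3-float constructor and that `Cross`/`Normalize` helpers exist in the surrounding math library):

```
void OrthogonalBasis(const Vector3 & normal, Vector3 & tangent, Vector3 & bitangent)
{
    // Pick the world axis least aligned with the normal to avoid a degenerate cross product:
    const Vector3 ref = (std::fabs(normal[0]) < 0.9f) ? Vector3(1.0f, 0.0f, 0.0f)
                                                      : Vector3(0.0f, 1.0f, 0.0f);
    tangent = Normalize(Cross(ref, normal));
    bitangent = Cross(normal, tangent);
}
```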
|
To see the plane as if it's infinite you can find 4 quad vertices so that the clipped quad and the clipped infinite plane form the same polygon. Example:
Sample 2 random points `P1` and `P2` on the plane such as `P1 != P2`.
Deduce a tangent `t` and bi-tangent `b` as
```
t = normalize(P2-P1); // get a normalized tangent
b = cross(t, n); // the bi-tangent is the cross product of the tangent and the normal
```
Compute the bounding sphere of the view frustum. The sphere would have a diameter `D` (if this step seems difficult, just set `D` to a large enough value such as the corresponding sphere encompasses the frustum).
Get the 4 quad vertices `v1` , `v2` , `v3` and `v4` (CCW or CW depending on the choice of P1 and P2):
```
v1 = P1 - t*D - b*D;
v2 = P1 + t*D - b*D;
v3 = P1 + t*D + b*D;
v4 = P1 - t*D + b*D;
```
|
Is mutex+atomic necessary to make this code thread safe, or is mutex enough?
I have some doubts whether mutexes are enough to ensure thread safety of the following code example, or if atomics are required. In short, the question is: would making `idxActive` a regular `int` make this code thread unsafe? Or is the code thread unsafe even with atomics? :(
If it is important, I'm on 32 bit x86, linux, gcc 4.6. Of course I presume that 32 vs 64 bit makes no difference, but if there is any difference between the two I would like to know.
```
#include <memory>
#include <boost/thread/thread.hpp>
#include <string>
#include <vector>
#include <atomic>
#include <boost/thread/mutex.hpp>
using namespace std;
using namespace boost;
static const int N_DATA=2;
class Logger
{
vector<string> data[N_DATA];
atomic<int> idxActive;
mutex addMutex;
mutex printMutex;
public:
Logger()
{
idxActive=0;
for (auto& elem: data)
elem.reserve(1024);
}
private:
void switchDataUsed()
{
mutex::scoped_lock sl(addMutex);
idxActive.store( (idxActive.load()+1)%N_DATA );
}
public:
void addLog(const string& str)
{
mutex::scoped_lock sl(addMutex);
data[idxActive.load()].push_back(str);
}
void printCurrent()
{
mutex::scoped_lock sl(printMutex);
switchDataUsed();
auto idxOld=(idxActive.load()+N_DATA-1)%N_DATA; //modulo -1
for (auto& elem:data[idxOld])
cout<<elem<<endl;
data[idxOld].clear();
}
};
int main()
{
Logger log;
log.addLog(string("Hi"));
log.addLog(string("world"));
log.printCurrent();
log.addLog(string("Hi"));
log.addLog(string("again"));
log.printCurrent();
return 0;
}
```
|
You do not need to use atomic variables if all accesses to those variables are protected by a mutex. This is the case in your code, as all public member functions lock `addMutex` on entry. Therefore `idxActive` can be a plain `int` and everything will still work fine. The mutex locking and unlocking ensures that the correct values become visible to other threads in the right order.
`std::atomic<>` allows concurrent access **outside** the protection of a mutex, ensuring that threads see correct values of the variable, even in the face of concurrent modifications. If you stick to the default memory ordering it also ensures that each thread reads the **latest** value of the variable. `std::atomic<>` can be used to write thread-safe algorithms without mutexes, but is not required if all accesses are protected by the **same mutex**.
**Important Update**:
I just noticed that you're using **two** mutexes: one for `addLog` and one for `printCurrent`. In this case, you **do** need `idxActive` to be atomic, because the **separate mutexes do not provide any synchronization between them**.
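A minimal sketch of the single-mutex variant (an illustration, not from the original answer): keep `idxActive` a plain `int` and read it under `addMutex` inside `printCurrent`:

```
void printCurrent()
{
    mutex::scoped_lock sl(printMutex);
    switchDataUsed(); // locks addMutex internally
    int idxOld;
    {
        mutex::scoped_lock sl2(addMutex); // same mutex as addLog, so a plain int is safe
        idxOld = (idxActive + N_DATA - 1) % N_DATA;
    }
    for (auto& elem : data[idxOld])
        cout << elem << endl;
    data[idxOld].clear();
}
```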
|
R - Repetitions of an array in other array
From a dataframe I get a new array, sliced from a dataframe.
I want to get the number of times a certain sub-sequence appears in it.
For example
```
main <- c(A,B,C,A,B,V,A,B,C,D,E)
p <- c(A,B,C)
q <- c(A,B)
someFunction(main,p)
2
someFunction(main,q)
3
```
I've been messing around with `rle`, but it also counts every sub-repetition, which is undesirable.
Is there a quick solution I'm missing?
|
You can use one of the regular expression tools in R since this is really a pattern matching exercise, specifically `gregexpr` for this question. The `p` and `q` vectors represent the search pattern and `main` is where we want to search for those patterns. From the help page for `gregexpr`:
```
gregexpr returns a list of the same length as text each element of which is of
the same form as the return value for regexpr, except that the starting positions
of every (disjoint) match are given.
```
So we can take the length of the first list returned by `gregexpr` which gives the starting positions of the matches. We'll first collapse the vectors and then do the searching:
```
someFunction <- function(haystack, needle) {
haystack <- paste(haystack, collapse = "")
needle <- paste(needle, collapse = "")
out <- gregexpr(needle, haystack)
out.length <- length(out[[1]])
return(out.length)
}
> someFunction(main, p)
[1] 2
> someFunction(main, q)
[1] 3
```
Note - you also need quotes around the elements of your `main`, `p`, and `q` vectors unless you have variables A, B, C, et al. defined:
```
main <- c("A","B","C","A","B","V","A","B","C","D","E")
p <- c("A","B","C")
q <- c("A","B")
```
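One caveat (an addition to the original answer): `gregexpr` treats the needle as a regular expression, so patterns containing metacharacters such as `.` or `(` could match unintended strings. Passing `fixed = TRUE` forces a literal match:

```
out <- gregexpr(needle, haystack, fixed = TRUE)
```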
|
Parse files in AWS S3 with boto3
I am attempting to read files from my S3 bucket, and parse them with a regex pattern. However, I have not been able to figure out how to read the files line by line. Is there a way to do this, or is there a different way I should be approaching this parsing?
```
pattern = '^(19|20)\d\d[-.](0[1-9]|1[012])[-.](0[1-9]|[12][0-9]|3[01])[ \t]+([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9][ \t]+(?:[0-9]{1,3}\.){3}[0-9]{1,3}[ \t]+(?:GET|POST|PUT)[ \t]+([^\s]+)[ \t]+[1-5][0-9][0-9][ \t]+(\d+)[ \t]+(\d+)[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"'
s3 = session.resource('s3')
bucket_name = s3.Bucket(bucket)
data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix]
for obj in data:
key = obj.key
body = obj.get()['Body'].read()
print(key)
print(body)
for line in body:
print(line)
```
So I am able to see the correct file and to read the whole body of the file (similar to an IIS log). However, when I try to iterate over the lines, I get numbers. The output of `print(line)` is:
```
35
101
119
147
etc.
```
I have no idea where these numbers are coming from. Are they words, characters, something else?
My goal is to apply my pattern once I am able to read the file line by line with the regular expression operator.
EDIT: Here is one of my log lines
```
2016-06-14 14:03:42 1.1.1.1 GET /origin/ScriptResource.axd?=5f9d5645 200 26222 0 "site.com/en-US/CategoryPage.aspx" "Mozilla/5.0 (Linux; Android 4.4.4; SM-G318HZ Build/KTU84P)" "ASP.NET_SessionId=emfyTVRJNqgijw=; __SessionCookie=bQMfQzEtcnfMSQ==; __CSARedirectTags=ABOcOxWK/O5Rw==; dtCookie=B52435A514751459148783108ADF35D5|VVMrZVN1aXRlK1BXU3wx"
```
|
I used a text file with the content below for the solution that follows:
```
I love AWS.
I love boto3.
I love boto2.
```
I think the problem is with this line:
```
for line in body:
```
It iterates over the body element by element instead of line by line. In Python 2 that yields single characters (as shown below); in Python 3, where `body` is a `bytes` object, it yields integer byte values - which is exactly where your numbers come from.
```
C:\Users\Administrator\Desktop>python bt.py
I
l
o
v
e
A
W
S
.
I
l
o
v
e
b
o
t
o
3
.
I
l
o
v
e
b
o
t
o
2
.
C:\Users\Administrator\Desktop>
```
Instead, use this:
```
for line in body.splitlines():
```
then the output looks like this
```
C:\Users\Administrator\Desktop>python bt.py
I love AWS.
I love boto3.
I love boto2.
C:\Users\Administrator\Desktop>
```
Applying the above, I tried the code below on the log file, with a regex that splits each log line into its fields:
```
import re

header = ['Date', 'time', 'IP', 'method', 'request', 'status code', 'bytes', 'time taken', 'referrer', 'user agent', 'cookie']
s3 = session.resource('s3')
bucket_name = s3.Bucket(bucket)
data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix]
for obj in data:
    key = obj.key
    # read() returns bytes in Python 3, so decode before splitting into lines
    body = obj.get()['Body'].read().decode('utf-8')
    for line in body.splitlines():
        m = re.search(r'(\d{4}-\d{2}-\d{2})\s+(\d{2}:\d{2}:\d{2})\s+([\d\.]+)\s+(GET|PUT|POST)\s+(\S+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\S+)\s+(\".*?\")\s+(.*)', line)
        if m is not None:
            for i in range(11):
                # group(0) is the whole match; field i corresponds to group(i + 1)
                print(header[i], '-', m.group(i + 1))
            print('------------------------------------')
```
|
Stripe Elements - change in layout of 'card-element'
I am looking for a way to customise the layout of 'card number' 'expiry date' and 'CCV' fields when using Stripe Elements and injecting these fields through `card.mount('#card-element');` as described in the first example of this page <https://stripe.com/docs/stripe-js>
It puts all the card fields in one row; I want to change that layout and put them in different rows.
Any ideas?
Thanks in advance
|
You can do this by creating a separate div for each card input (number, expiry, CVC), which you can layout however you like:
```
<div id="example3-card-number"></div>
<div id="example3-card-expiry"></div>
<div id="example3-card-cvc"></div>
```
Then tell Stripe Elements about each one:
```
var cardNumber = elements.create('cardNumber');
cardNumber.mount('#example3-card-number');
var cardExpiry = elements.create('cardExpiry');
cardExpiry.mount('#example3-card-expiry');
var cardCvc = elements.create('cardCvc');
cardCvc.mount('#example3-card-cvc');
```
Ref: <https://stripe.dev/elements-examples/> (Example 3)
|
Engine.io or SockJS, which one to choose?
I have run into trouble with Socket.io regarding memory leaks and scaling issues lately. My decision to use Socket.io was made over a year ago when it was undoubtedly the best library to use.
Now that Socket.io causes much trouble, I spent time looking for alternatives that became available in the meantime and think that both Engine.io and SockJS are generally well suited for me. However, in my opinion both have some disadvantages and I am not sure which one to choose.
Engine.io is basically the perfect lightweight version of Socket.io that does not contain all the features I do not require anyway. I have already written my own reconnection and heartbeat logic for Socket.io, because I was not satisfied with the default logics and I never intended to use rooms or other features that Socket.io offers.
But - in my opinion - the major disadvantage of Engine.io is the way connections are established. Clients start with slower jsonp-polling and are upgraded if they support better transports. The fact that the clients which support websockets natively (number increasing steadily) have a disadvantage in the form of a longer and unstable connection procedure over those clients which use outdated browsers, contradicts my sense of how it should be handled.
SockJS on the other hand handles the connections exactly as I would like to. From what I have read it seems to be pretty stable while Engine.io has some issues at this time.
My app is running behind an Nginx router on a single domain, therefore I do not need the cross-domain functionality SockJS offers. Because of providing this functionality, however, SockJS does not expose the cookie data of the client at all. So far I had a 2-factor authorization with Socket.io via cookie AND query string token and this would not be possible with SockJS (with Engine.io it would).
I have read pretty much everything available about the pros and cons of both, but it seems there is not much being discussed or published so far, especially about Engine.io (there are only 8 questions tagged with engine.io here).
- Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
- Which one will likely be maintained more actively and could have a major advantage over the other in the future?
|
Have you looked at [Primus](https://github.com/primus/primus)? It offers the cookie requirements you mention, it supports all of the [major](https://github.com/primus/primus#supported-real-time-frameworks) 'real-time'/websocket libraries available and is a pretty active project. To me it also sounds like vendor lock-in could be a concern for you and Primus would address that.
The fact that it uses a [plugin system](https://github.com/primus/primus#primus-project-plugins) should also a) make it easier for you to extend if needed and b) may actually have a [community plugin](https://github.com/primus/primus#community-plugins) that already does what you need.
>
> Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
>
>
>
I have only used SockJS via the Vert.x API and it was for an internal project that I would consider 'production', but not a production facing consumer app. That said, it performed very well.
>
> Which one will likely be maintained more actively and could have a major advantage over the other in the future?
>
>
>
Just looking over the commit history of [Engine.io](https://github.com/Automattic/engine.io/graphs/commit-activity) and [SockJS](https://github.com/sockjs/sockjs-client/graphs/commit-activity), and the fact that Automattic is supporting Engine.io makes me inclined to think that it will be more stable, for a longer period of time, but of course that's debatable. Looking at the issues for [Engine.io](https://github.com/Automattic/engine.io/issues) and [SockJS](https://github.com/sockjs/sockjs-client/issues) is another good place to evaluate, but since they're both split over multiple repos it should be taken with a grain of salt. I'm not sure where/how Automattic is using Engine/Socket.io, but if it's in WordPress.com or one of their plugins, it has substantial production-at-scale battle testing.
*edit: change answer to reflect cookie support confirmed by Primus author in comments below*
|
Bash: Filter rows by line number
If I have a delimited file with many lines and columns (`data.txt`):
```
346 dfd asw 34
565 sd wdew 34
667 ffg wew 23
473 sa as 21
533 jhf qwe 54
```
and another file with line numbers that I want to extract (`positions.txt`)
```
3
5
8
```
How do I use the `positions.txt` file to extract those positions from `data.txt`? This is the result I would expect for this example:
```
667 ffg wew 23
533 jhf qwe 54
```
|
Simply with **`awk`**:
```
awk 'NR==FNR{ pos[$1]; next }FNR in pos' positions.txt data.txt
```
- `NR==FNR{ ... }` - processing the 1st input file (i.e. `positions.txt`):
- `pos[$1]` - accumulating positions(record numbers) set as `pos` array keys
- `next` - jump to next record
- `FNR in pos` - while processing the 2nd input file `data.txt`(`FNR` indicates how many records have been read from the current input file). Print record only if current record number `FNR` is in array of positions `pos` (search on keys)
---
Sample output:
```
667 ffg wew 23
533 jhf qwe 54
```
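An alternative sketch with `sed`: build a print script (`3p;5p;8p;`) from `positions.txt` and apply it to `data.txt`:

```
sed -n "$(sed 's/$/p/' positions.txt | tr '\n' ';')" data.txt
```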
|
timestamp NOT NULL DEFAULT CURRENT\_TIMESTAMP can be null on one machine but not another?
I have a MySql table with a field defined as:
```
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
```
On my local machine, I can run:
```
INSERT INTO mytbl (id, user_id, created) VALUES(88882341234, 765, null);
SELECT id, user_id, created FROM mytbl WHERE id = '88882341234';
```
And then 'created' will show something like '2014-06-13 21:16:42'.
But on my staging server, if I run the same queries, I get this error:
```
Column 'created' cannot be null.
```
The schemas of the tables are the same (across local and staging), which I ensured via mysqldump (to clone the table before running this test).
I'm running MySql 5.6.17 on both machines. I've also ensured that both have the same sql\_mode.
What could be the problem?
P.S. For people who don't know why I'd be setting a non-nullable field's value to null, [MySql Docs](http://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html) say:
>
> In addition, you can initialize or update any TIMESTAMP column to the
> current date and time by assigning it a NULL value, unless it has been
> defined with the NULL attribute to permit NULL values.
>
>
>
|
I found what the problem was. The MySql variable/parameter `explicit_defaults_for_timestamp` was OFF on my local machine but ON on my remote machine.
I visited my AWS RDS Parameter Groups page and changed explicit\_defaults\_for\_timestamp from 1 to 0.
Then I went to my AWS RDS instances page to watch when "Parameter Group" changed from "Applying" to "pending-reboot".
Then I rebooted the particular instance.
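To compare the setting on two servers (a quick check, added here for reference), you can run:

```
SHOW VARIABLES LIKE 'explicit_defaults_for_timestamp';
```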
These links helped me:
- <https://stackoverflow.com/a/23392448/470749>
- [How to import MySQL binlog that contains INSERTs of a TIMESTAMP field with default value CURRENT\_TIMESTAMP](https://stackoverflow.com/questions/18264942/how-to-import-mysql-binlog-that-contains-inserts-of-a-timestamp-field-with-defau)
- <https://forums.aws.amazon.com/thread.jspa?threadID=132676>
|
Why does the Boost library use an m\_generation variable in its implementation of a thread barrier?
The boost library (before the C++11 standard), offered support for threads. As part of its support, it also offers the implementation of a "barrier", a simple class which allows synchronization. To quote the [boost website](http://www.boost.org/doc/libs/1_56_0/doc/html/thread/synchronization.html#thread.synchronization.barriers):
"A barrier is a simple concept. Also known as a rendezvous, it is a synchronization point between multiple threads. The barrier is configured for a particular number of threads (n), and as threads reach the barrier they must wait until all n threads have arrived. Once the n-th thread has reached the barrier, all the waiting threads can proceed, and the barrier is reset."
The implementation of the main function of the barrier (wait), as of Boost 1.54, is shown below:
```
bool wait()
{
boost::mutex::scoped_lock lock(m_mutex);
unsigned int gen = m_generation;
if (--m_count == 0)
{
m_generation++;
m_count = m_threshold;
m_cond.notify_all();
return true;
}
while (gen == m_generation)
m_cond.wait(lock);
return false;
}
```
It can be seen that the barrier is reusable: Once constructed, it doesn't need to be destroyed after its first use.
My question now is: What is the variable m\_generation for? I am assuming the writers of the boost library had a reason to include it. It is incremented each time the barrier is reset/ready to be reused, but to what end? It is a private variable, thus it cannot be read out from the outside. The same problem could just as easily be solved with a simple internal bool variable inside the wait() function, without having a private class variable.
|
**In a nutshell, `m_generation` is needed to deal with [spurious wakeups](http://en.wikipedia.org/wiki/Spurious_wakeup).**
The generation counter is used in conjunction with the condition variable to signal to all threads waiting on the barrier that they are free to proceed:
- Once there are `m_threshold` threads that have reached the barrier, its generation number gets bumped up, and the condition variable is signalled. This causes the waiting threads (i.e. those that have reached the barrier earlier) to wake up from `m_cond.wait(lock)`.
- Now, the waiting threads can wake up from `m_cond.wait(lock)` for [other reasons](http://en.wikipedia.org/wiki/Spurious_wakeup). This is where `m_generation` comes in: if it's changed, then the barrier has been reset and the thread can proceed. If `m_generation` still contains the same value, the thread needs to go back into `m_cond.wait(lock)`.
Having an automatic variable inside `wait()` would not achieve this, since each thread would have its own instance.
|
How to Remove/Disable Sign Up From Devise
I'm trying to remove/disable the `user/sign_up` path from Devise. I'm doing this because I don't want random people gaining access to the application. I have it partly working by adding the following in routes.rb
```
Rails.application.routes.draw do
devise_scope :user do
get "/sign_in" => "devise/sessions#new" # custom path to login/sign_in
get "/sign_up" => "devise/registrations#new", as: "new_user_registration" # custom path to sign_up/registration
end
...
devise_for :users, :skip => :registration
end
```
However, this breaks `<%= link_to "Profile", edit_user_registration_path, class: "btn btn-info btn-flat" %>`
which I want to keep so that users can update their profile.
I know it's because of the `devise_for :users, :skip => :registration`
Is there a solution for this issue?
Running
Devise (4.2.0, 4.1.1, 4.1.0)
Rails 4.2.5
ruby 2.3.0p0 (2015-12-25 revision 53290) [x86\_64-linux]
|
**Solution to removing sign\_up path from Devise**
Enter the following at the beginning of `routes.rb`
```
Rails.application.routes.draw do
devise_scope :user do
get "/sign_in" => "devise/sessions#new" # custom path to login/sign_in
get "/sign_up" => "devise/registrations#new", as: "new_user_registration" # custom path to sign_up/registration
end
```
...After the statement above, add the following below in `routes.rb`
```
devise_for :users, :skip => [:registrations]
as :user do
get 'users/edit' => 'devise/registrations#edit', :as => 'edit_user_registration'
put 'users' => 'devise/registrations#update', :as => 'user_registration'
end
```
This will remove/disable the `user/sign_up` path for Devise without breaking `edit_user_registration_path`
Restart your rails server and it should work.
|
confusion about function call stack
According to Wiki:
>
> the caller pushes the return address onto the stack, and the called
> subroutine, when it finishes, pops the return address off the call
> stack and transfers control to that address.
>
>
>
**Pic from Wiki:**

I don't quite understand this.
Say I have a C program as follows:
```
#include <stdio.h>
int foo(int x)
{
return x+1;
}
void spam()
{
int a = 1; //local variable
int b = foo(a); //subroutine called
int c = b; //local variable
}
int main()
{
spam();
return 0;
}
```
And I think the call stack should be something like a drawing as follows:
```
<None> means no local variables or params
_| parameters for foo() <int x> |_
top | local of spam() <int c> |
^ | return address of foo() |<---foo() called, when finishes, return here?
| | local of spam() <int b> |
bot | local of spam() <int a> |
_| parameters for spam() <None> |_
| locals of main() <None> |
| return address of spam() |<---spam() called, when finishes, return here?
| parameters for main() <None> |
```
Question:
According to the words quoted from Wiki,
>
> the called subroutine, when it finishes, pops the return address off
> the call stack and transfers control to that address.
>
>
>
1. Is my drawing right?
2. If it is right, then when foo() finishes, it will
>
> pop the return address off the call stack and transfer control to
> that address
>
>
>
, but how can it pop off the return address?
Because when foo finishes, the current stack pointer is pointing to spam's locals, right?
**UPDATE:**
what if the main() looks like this:
```
int main()
{
spam();
foo();
}
```
then what the call stack should look like?
|
Your drawing is not correct. The local stack variables for a function are all below any return addresses. Otherwise, as you have observed, the locals would get lost when you call a function.
It should be like this:
```
| parameters for foo() <int x> |
| return address of foo() |
| local of spam() <int c> |
| local of spam() <int b> |
| local of spam() <int a> |
| parameters for spam() <None> |
| return address of spam() |
| locals of main() <None> |
| parameters for main() <None> |
```
I think the confusion is that you believe that variable declarations are treated as statements and executed in order. In fact the compiler will typically analyse a function to decide how much stack space is needed for all the local variables. Then it emits code to adjust the stack pointer accordingly and that adjustment is made on entry to the function. Any calls to other functions can then push onto the stack without interfering with this function's stack frame.
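As for the update: the two calls do not pile up on each other. `spam()`'s frame is popped when it returns, so by the time `foo()` is called, its frame reuses that space:

```
| parameters for foo() <int x>  |
| return address of foo()       |
| locals of main() <None>       |
| parameters for main() <None>  |
```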
|
Split a list into sublists in java using if possible flatMap
here is my list:
```
List<Integer> mylist = Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12);
```
Assuming my list length is always even, I would like to split it into 6 equal parts.
List as a Sketch:
```
[1,2,3,4,5,6,7,8,9,10,11,12]
```
Output Sketch:
```
[[1,2][3,4],[5,6],[7,8],[9,10],[11,12]]
```
I would prefer a solution if possible with Java 8 stream `flatMap`
|
Given that the "sublists" are all of equal size and that you can divide the list into exact sublists of the same size, you could calculate the desired size and then map an `IntStream` to the starting indexes of each sublist and use that to extract them:
```
List<Integer> mylist = Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12);
int size = mylist.size();
int parts = 6;
int partSize = size / parts;
List<List<Integer>> result =
IntStream.range(0, parts)
                 .mapToObj(i -> mylist.subList(i * partSize, (i + 1) * partSize))
.collect(Collectors.toList());
```
**EDIT:**
[IdeOne demo](https://ideone.com/dGQ5Rs) graciously provided by @Turing85
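One caveat worth adding (not part of the original answer): `subList` returns a *view* backed by the original list. If you need independent sublists, copy each one, e.g.:

```
List<List<Integer>> result =
    IntStream.range(0, parts)
             .<List<Integer>>mapToObj(i -> new ArrayList<>(mylist.subList(i * partSize, (i + 1) * partSize)))
             .collect(Collectors.toList());
```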
|
How to combine asynchronous calls with synchronous in javascript
I'm kind of beginning to learn javascript and have a question about combining synchronous and asynchronous calls to functions. This is just a theoretical problem, but I hope it conveys the idea.
Let's suppose we have a javascript program that decides how many bananas and oranges I need to buy.
```
console.log('buy %d bananas and %d oranges', bananas, oranges)
```
Now, I can decide how many bananas I can buy, but I need to ask my wife how many oranges she wants, so I text her. (I can write an async function to represent this).
This would be my immediate approach:
```
var bananas = 10;
var oranges = 0;
textWife('askAboutOranges',function(number){ oranges = number; });
console.log('buy %d bananas and %d oranges', bananas, oranges)
```
But to me this doesn't make sense, because I have to wait for my wife to reply, so I probably won't have the number of oranges in time.
So I can change my program to:
```
var bananas = 10;
var oranges = 0;
textWife('askAboutOranges',function(number){
oranges = number;
console.log('buy %d bananas and %d oranges', bananas, oranges);
});
```
But I don't like this, because now I have the logic of deciding what to buy, including the bananas, inside the response from my wife. What if I decide I don't want oranges? Do I have to do something like this:
```
var bananas = 10;
var oranges = 0;
if (wantOranges)
{
textWife('askAboutOranges',function(number){
oranges = number;
console.log('buy %d bananas and %d oranges', bananas, oranges);
});
}
else
console.log('buy %d bananas and %d oranges', bananas, oranges);
```
**So my question is, can anyone explain me what's the best/right way to do something like this?**
|
[jQuery Deferred](http://api.jquery.com/jQuery.Deferred/) is a great tool to have in your belt. I might do something like this to separate concerns:
```
function decideHowManyBananas() {
return 10;
}
function decideHowManyOranges() {
var deferred = $.Deferred();
if (wantOranges) {
textWife('askAboutOranges', function(number) {
deferred.resolve(number);
});
} else {
deferred.resolve(0);
}
return deferred.promise();
}
$.when(decideHowManyBananas(), decideHowManyOranges()).done(function(bananas, oranges) {
console.log('buy %d bananas and %d oranges', bananas, oranges);
});
```
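As a side note (an addition, not part of the original answer), the same separation of concerns works with native Promises, assuming the same `textWife(message, callback)` API as above:

```
function decideHowManyOranges() {
    return new Promise(function (resolve) {
        if (wantOranges) {
            textWife('askAboutOranges', resolve);
        } else {
            resolve(0);
        }
    });
}

decideHowManyOranges().then(function (oranges) {
    console.log('buy %d bananas and %d oranges', 10, oranges);
});
```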
|
Make multiple NSURLConnections and using sendAsynchronousRequest:queue:completionHandler: iOS 5 method
I am having some difficulty setting up the correct configuration for the `sendAsynchronousRequest:queue:completionHandler:` method (of the `NSURLConnection` class).
My scenario is the following:
I set up a singleton class that manages different `NSURLConnection`s. This singleton instance has an `NSOperationQueue` (called `downloadQueue`) that makes a request to a web server and retrieves a string path (1).
Once done, the path is used to download a file within a web server (2). Finally, when the file has been correctly downloaded, I need to update the UI (3).
I figured out only the first request: the one through which I'm able to download the path. Could you suggest a way to perform the other two steps?
Few questions here:
- the download queue (`downloadQueue`) is not the main one, is it possible to open a new NSURLConnection in that queue? In other words, is it correct? (See comments in code snippets)
- if the previous question is correct, how can I grab the main queue and update the UI?
Here is the code snippet I use to perform the first step, where `downloadQueue` is an instance variable that can be obtained through accessor methods (`@property`/`@synthesized`):
```
// initializing the queue...
downloadQueue = [[NSOperationQueue alloc] init];
// other code here...
[NSURLConnection sendAsynchronousRequest:urlRequest queue:[self downloadQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
if([data length] > 0 && error == nil) {
// here the path (1)
// how to perform a second connection?
// what type of queue do I have to use?
}
}];
```
|
You're on the right track for performing your first download.
In the completion handler block after the first download, you're computing the URL that you'll need for a second download, right? Then you can perform that second download the same way: call `+[NSURLConnection sendAsynchronousRequest:...]` again with the new URL and the same queue. You can do this within the completion block for the first download.
To update the UI after the second download is done, switch to the main queue within the completion block for that download. You can do this with `dispatch_async()` or `dispatch_sync()` (in this case it doesn't matter which because you don't have further work to do on the download queue) and `dispatch_get_main_queue()`, or with `-[NSOperationQueue addOperationWithBlock:]` and `+[NSOperationQueue mainQueue]`.
Your code should look something like this:
```
// init download queue
downloadQueue = [[NSOperationQueue alloc] init];
// (1) first download to determine URL for second
[NSURLConnection sendAsynchronousRequest:urlRequest queue:[self downloadQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
if([data length] > 0 && error == nil) {
// set newURLRequest to something you get from the data, then...
// (2) second download
[NSURLConnection sendAsynchronousRequest:newURLRequest queue:[self downloadQueue] completionHandler:^(NSURLResponse *newResponse, NSData *newData, NSError *newError) {
if([newData length] > 0 && newError == nil) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
// (3) update UI
}];
}
}];
}
}];
```
|
Attaching class labels to a Keras model
I am using the Keras Sequential model to train a number of multiclass classifiers.
On evaluation, Keras outputs a vector of confidences and I can infer the correct class id from that using argmax. I can then use a lookup table to receive the actual class label (e.g. a string).
So far the solution is to load the trained model, and then to load a lookup table separately. Since I have quite a number of classifiers I would prefer to keep both structures in one file.
So what I am looking for is a way to integrate the actual label lookup vector into the Keras model. That would allow me to have a single classifier file that is capable of taking some input data and returning the correct class label for that data.
One way to solve this would be to store both the model and the lookup table in a tuple and write that tuple into a pickle, but this doesn't seem very elegant.
|
So I tried my hand at a solution myself and this seems to work. I was hoping for something simpler though.
Opening the model file a second time is not really optimal I think. If anyone can do better, by all means, do.
```
import h5py
from keras.models import load_model
from keras.models import save_model
def load_model_ext(filepath, custom_objects=None):
    model = load_model(filepath, custom_objects=custom_objects)
f = h5py.File(filepath, mode='r')
meta_data = None
if 'my_meta_data' in f.attrs:
meta_data = f.attrs.get('my_meta_data')
f.close()
return model, meta_data
def save_model_ext(model, filepath, overwrite=True, meta_data=None):
save_model(model, filepath, overwrite)
if meta_data is not None:
f = h5py.File(filepath, mode='a')
f.attrs['my_meta_data'] = meta_data
f.close()
```
Since h5 files do not accept python containers, you should consider converting the meta data into a string. Assuming that your meta data exists in the form of a dictionary or a list, you can use json to do the conversion. This would also allow you to store more complex data structures within your model.
Full usage example:
```
import json
import keras
# prepare model and label lookup
model = keras.Sequential();
model.add(keras.layers.Dense(10, input_dim=8, activation='relu'));
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
filepath = r".\mymodel.h5"
labels = ["dog", "cat", "automobile"]
# save
labels_string = json.dumps(labels)
save_model_ext(model, filepath, meta_data=labels_string)
# load
loaded_model, loaded_labels_string = load_model_ext(filepath)
loaded_labels = json.loads(loaded_labels_string)
# label of class 0: "dog"
print(loaded_labels[0])
```
If you prefer to have a dictionary for your classes, be aware that json will convert numeric dictionary keys to strings, so you will have to convert them back to numbers after loading.
|
GXT UiBinder FieldLabel
I am new to UiBinder and I try to use a FieldLabel.
I found this post : [GXT: UiBinder new instance of a Widget](https://stackoverflow.com/questions/11020042/gxt-uibinder-new-instance-of-a-widget)
where the developer uses those markups
```
<form:FieldLabel text="Text">
<form:Widget>
<form:TextField ui:field="text"/>
</form:Widget>
</form:FieldLabel>
```
When I do exactly the same, I get the following error :
```
ERROR: Illegal child <form:Widget> in a text-only context. Perhaps you are trying to use unescaped HTML where text is required, as in a HasText widget?: <form:FieldLabel text='Text'> (:7)
```
It seems that my version of GXT (3.0.1) does not allow to have a non-text-only child for the FieldLabel markup.
Up to now, the only solution I found to include a FieldLabel is to use
```
@UiField(provided = true)
```
Is there a better way to use FieldLabel with UiBinder?
|
The problem is that you shouldn't capitalize `form:widget`:
```
<form:FieldLabel text="Text">
<form:widget>
<form:TextField ui:field="text"/>
</form:widget>
</form:FieldLabel>
```
In UiBinder, elements with a capital letter represent a real Widget subclass, while lowercase elements are either a regular html dom element, or some modifier for the parent widget tag. We can see that this is *not* a dom element because of the `form:` namespace, the same as the parent widget tag (i.e. `<form:FieldLabel>`).
How this works: The `FieldLabel` class has a Java method to give it the widget to display - `setWidget`. This method is decorated with a `@UiChild` annotation (in its superclass):
```
@Override
@UiChild(limit = 1, tagname = "widget")
public void setWidget(Widget w) {
//...
```
This enables you, the UiBinder user, to refer to a tag as `widget`, and have that method invoked with whatever contents you enter.
|
Customise Visual Studio 2012 installation (C# only)
Is there any way to customise which languages are installed with Visual Studio 2012? I only want C#, no VB or F# etc.
I'm trying to install Visual Studio 2012 Premium RTM.
The first install page I see this:

and the next I see this:

but there's no option to choose which languages I want to install :-(
|
I think that's as much customization as you're going to get.
<http://blogs.msdn.com/b/visualstudio/archive/2012/06/04/setup-improvements-for-visual-studio.aspx>
<http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2639283-bring-back-the-visual-studio-installation-customiz>
According to those links, they [Microsoft] removed and then brought back the option for customization. And what you see there is all you get. I know my installation of VS 2012 (upgraded from Express to Ultimate) is only taking up 2 GB, so I don't know why it says it needs 6.

|
Html Content with rgb color not displaying in android but in ios(apple) work perfectly
**This is work perfect**
```
val htmlContent = "<p><span style=\"background-color: #008080;\">Heloo This is new <span style=\"color: #0000ff;\">document</span> <span style=\"background-color: #ffffff;\"> TEXTT HHHH<strong>Hhhhhhhh</strong>hhhhhhhhhhhhh</span></span></p>\n" +
"<h1><span style=\"background-color: #008080;\"><span style=\"background-color: #ffffff;\">TEst dfsdf </span></span></h1>"
```
**But when I try to set the HTML content below, it does not work on Android, although it works perfectly on iOS:**
```
<p style="text-align:left;"></p>
<p style="text-align:justify;"><span style="color: rgb(250,197,28);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;"><strong>Yellow </strong></span><span style="color: rgb(0,0,0);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;"><strong>Ipsum</strong></span> <span style="color: rgb(0,0,0);background-color: rgb(255,255,255);font-size: 24px;font-family: Open Sans;">is simply </span><span style="color: rgb(209,72,65);background-color: rgb(255,255,255);font-size: 24px;font-family: Open Sans;">Red</span><span style="color: rgb(209,72,65);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;"> </span><span style="color: rgb(0,0,0);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;">text </span><span style="color: rgb(65,168,95);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;">greenthe </span><span style="color: rgb(0,0,0);background-color: rgb(255,255,255);font-size: 14px;font-family: Open Sans;">printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</span></p>
<p style="text-align:left;"><br> </p>
```
**Kotlin code**
```
descriptionTextView.text = Html.fromHtml(
htmlContent, Html.FROM_HTML_MODE_LEGACY)
```
|
You should know that not all HTML tags and attributes are supported by the Android TextView, and there are some limitations. For more information please look at the following questions:
1. [Which HTML tags are supported by Android TextView?](https://stackoverflow.com/questions/9754076/which-html-tags-are-supported-by-android-textview)
2. [Android textview html font size tag](https://stackoverflow.com/questions/18295881/android-textview-html-font-size-tag)
In your case, the following attributes are not supported:
1. text-align
2. font-size
3. font-family
and also, if you need to set a color, an `rgb(int, int, int)` value is unsupported as well.
**Solution:**
1. `text-align` attribute only supports `start`, `end` and `center` values.
2. You can use `<small>` tag for smaller font size or `<big>` tag for bigger font size.
3. You can use `<font>` tag and `face` attribute to set font face.
4. Use a hex color code (for example #FF0000) instead of an rgb color code (for example rgb(255, 0, 0)) - see the sketch below.
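If rewriting the incoming HTML is an option, a possible pre-processing sketch (a hypothetical helper, not from the original answer) converts `rgb(r,g,b)` values to hex before calling `Html.fromHtml`:

```
// Hypothetical helper: rewrite rgb(r, g, b) colors as #rrggbb hex codes
fun rgbToHex(html: String): String =
    Regex("""rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)""").replace(html) { m ->
        val (r, g, b) = m.destructured
        String.format("#%02x%02x%02x", r.toInt(), g.toInt(), b.toInt())
    }
```

Usage would then be `descriptionTextView.text = Html.fromHtml(rgbToHex(htmlContent), Html.FROM_HTML_MODE_LEGACY)`.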
**Important:** If you have more than one attribute in an html tag, separate them with `;[Space]` instead of `;`. Look at the following examples:
This example doesn't work
```
<span style="color: #ff0000;background-color: #ffffff;">
RED
</span>
```
but this one works
```
<span style="color: #ff0000; background-color: #ffffff;">
RED
</span>
```
|
How to configure Automapper to automatically ignore properties with ReadOnly attribute?
# Context:
Let's say I have the following "destination" class:
```
public class Destination
{
public String WritableProperty { get; set; }
public String ReadOnlyProperty { get; set; }
}
```
and a "source" class with the `ReadOnly` attribute on one of it's properties:
```
public class Source
{
public String WritableProperty { get; set; }
[ReadOnly(true)]
public String ReadOnlyProperty { get; set; }
}
```
It's obvious, but to be clear: I am going to map from `Source` class to `Destination` class in the following way:
```
Mapper.Map(source, destination);
```
# Problem:
**What are the ways to configure Automapper to automatically ignore property with `ReadOnly(true)` attribute?**
# Constraints:
I use Automapper's `Profile` classes for configuration. I don't want to dirty up classes with Automapper-specific attributes. I don't want to configure Automapper for every single read-only property and cause a lot of duplication that way.
# Possible (but not suited) solutions:
## 1) Add attribute `IgnoreMap` to the property:
```
[ReadOnly(true)]
[IgnoreMap]
public String ReadOnlyProperty { get; set; }
```
I don't want to dirty up classes with automapper-specific attributes and make it dependent from it. Also I don't want to add additional attribute along with `ReadOnly` attribute.
## 2) Configure Automapper to ignore the property:
```
CreateMap<Source, Destination>()
.ForSourceMember(src => src.ReadOnlyProperty, opt => opt.Ignore())
```
It is not a way because it forces me to do that for every single property everywhere and also causes a lot of duplication.
|
Write **Extension Method** as shown below:
```
public static class IgnoreReadOnlyExtensions
{
public static IMappingExpression<TSource, TDestination> IgnoreReadOnly<TSource, TDestination>(
this IMappingExpression<TSource, TDestination> expression)
{
var sourceType = typeof(TSource);
foreach (var property in sourceType.GetProperties())
{
PropertyDescriptor descriptor = TypeDescriptor.GetProperties(sourceType)[property.Name];
            ReadOnlyAttribute attribute = (ReadOnlyAttribute) descriptor.Attributes[typeof(ReadOnlyAttribute)];
            if (attribute != null && attribute.IsReadOnly)
                expression.ForMember(property.Name, opt => opt.Ignore());
}
return expression;
}
}
```
**To call extension method:**
`Mapper.CreateMap<ViewModel, DomainModel>().IgnoreReadOnly();`
|
$\_GET variable with messed up encoding
I'm having a great deal of trouble with encoding in my site.
This is my problem right now: if I go to `analize.php?dialog=sabía`, where analize.php's code is:
```
<?
echo $_GET['dialog'];
echo "sabía";
```
on it I get:
```
sabÃa
sabía
```
I'm using ANSI; changing to UTF-8 breaks both. I don't understand why this happens, and there isn't any code above this. I don't care about how they display, since this file is only used to fetch data from my database. But I need to make `$_GET` decode properly so I can include it in the query.
How can this be done?
|
You cannot send the character "í" in a URL, URLs must use a subset of the ASCII charset. Therefore the URL is encoded to `?dialog=sab%C3%ADa` by your browser before being sent to the server. `%C3%AD` represents the two bytes `C3 AD`, which is the UTF-8 encoding for the character "í". You can confirm this with `var_dump($_SERVER['QUERY_STRING']);`. This is automatically decoded by PHP, the result is the UTF-8 byte sequence for "sabía" with the "í" being encoded using the two bytes `C3 AD`.
Your browser is interpreting this byte sequence using the Windows-1252 or ISO-8859-1 charset. The byte `C3` represents "Ã" in this encoding, the byte `AD` represents a soft-hyphen and is invisible.
Two possible solutions:
1. use UTF-8 everywhere (recommended!)
- save your source code as UTF-8
- output a header that forces the browser to interpret the site as UTF-8:
```
header('Content-Type: text/html; charset=utf-8');
```
2. convert the `$_GET` values to Windows-1252/ISO-8859-1 (or whatever encoding you want to use on your site) using `mb_convert_encoding` or `iconv` (not recommended)
- even in this case you should set a header that announces to the browser what encoding exactly you're using
In short, you need to make sure you're using the same encoding everywhere and specify to the browser what encoding exactly that is.
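A minimal sketch of both options (assuming the rest of the page stays as in the question):

```
<?php
// Option 1 (recommended): serve the page as UTF-8
header('Content-Type: text/html; charset=utf-8');
echo $_GET['dialog'];

// Option 2 (if the rest of the site is ISO-8859-1): convert the incoming value
$dialog = mb_convert_encoding($_GET['dialog'], 'ISO-8859-1', 'UTF-8');
```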
|
Generate a plane with triangle strips
What would be the best algorithm to generate a list of vertices to draw a plane using triangle strips?
I'm looking for a function which receives the plane's width and height and returns a float array containing correctly indexed vertices.
width represents the number of vertices per row.
height represents the number of vertices per column.
```
float* getVertices( int width, int height ) {
    ...
}

void render() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, getVertices(width, height));
    glDrawArrays(GL_TRIANGLE_STRIP, 0, width*height);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```
|
Thank you all. I've coded this. Is it correct? Or is the generated strip somehow wrong?
```
int width;
int height;
float* vertices = 0;
int* indices = 0;

int getVerticesCount( int width, int height ) {
    return width * height * 3;
}

int getIndicesCount( int width, int height ) {
    return (width*height) + (width-1)*(height-2);
}

float* getVertices( int width, int height ) {
    if ( vertices ) return vertices;

    vertices = new float[ getVerticesCount( width, height ) ];
    int i = 0;
    for ( int row=0; row<height; row++ ) {
        for ( int col=0; col<width; col++ ) {
            vertices[i++] = (float) col;
            vertices[i++] = 0.0f;
            vertices[i++] = (float) row;
        }
    }
    return vertices;
}

int* getIndices( int width, int height ) {
    if ( indices ) return indices;

    indices = new int[ getIndicesCount( width, height ) ];
    int i = 0;
    for ( int row=0; row<height-1; row++ ) {
        if ( (row&1)==0 ) { // even rows
            for ( int col=0; col<width; col++ ) {
                indices[i++] = col + row * width;
                indices[i++] = col + (row+1) * width;
            }
        } else { // odd rows
            for ( int col=width-1; col>0; col-- ) {
                indices[i++] = col + (row+1) * width;
                indices[i++] = col - 1 + row * width;
            }
        }
    }
    if ( (height&1) && height>2 ) {
        indices[i++] = (height-1) * width;
    }
    return indices;
}

void render() {
    glEnableClientState( GL_VERTEX_ARRAY );
    glVertexPointer( 3, GL_FLOAT, 0, getVertices(width,height) );
    glDrawElements( GL_TRIANGLE_STRIP, getIndicesCount(width,height), GL_UNSIGNED_INT, getIndices(width,height) );
    glDisableClientState( GL_VERTEX_ARRAY );
}
```
With width=4 and height=4 this is what I got:

And here I'm modifying some vertex height:

|
NGINX basic authentication based on environment variable
I'm setting up a docker image with nginx-lua installed. The scenario is to have a basic authentication on staging, but not in production. My idea was to have an ENV variable with the name of the stage and check the value in the nginx.conf file.
The content of the `docker-compose.yml` file (for staging, and for production the STAGE env will be `prod` of course):
```
docs-router:
build: ./nginx
environment:
- API_BASE_URI=staging.example.com
- DOCS_STATIC_URI=docs-staging.example.com
- STAGE=staging
ports:
- "8089:8089"
- "8090:8090"
```
The content of the `nginx.conf` file:
```
...
env API_BASE_URI;
env DOCS_STATIC_URI;
env STAGE;
...
http {
server {
listen 8089 default_server;
charset utf-8;
resolver 8.8.8.8;
access_log off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^(/.*\.(?:apib|svg))?$ {
set_by_lua_block $api_base_uri { return os.getenv("API_BASE_URI") }
set_by_lua_block $stage { return os.getenv("STAGE") }
set $unprotected "prod";
if ($stage = $unprotected) {
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
}
proxy_pass https://$api_base_uri$1;
proxy_set_header Host $api_base_uri;
}
...
}
}
```
But it's not working. Any idea how I can achieve this?
|
I just found a solution with some help from [Serverfault](https://serverfault.com/a/761972/364901). It's not the best one because the URLs are in the `nginx.conf` file, but it solves my problem:
I just removed the variable from the `docker-compose.yml` file:
```
docs-router:
build: ./nginx
environment:
- API_BASE_URI=staging.example.com
- DOCS_STATIC_URI=docs-staging.example.com
ports:
- "8089:8089"
- "8090:8090"
```
And then I mapped the URLs in the `nginx.conf` file:
```
...
env API_BASE_URI;
env DOCS_STATIC_URI;
...
http {
##
# URL protection
##
map $http_host $auth_type {
default "off";
stage1.example.com "Restricted";
stage2.example.com "Restricted";
}
server {
listen 8089 default_server;
charset utf-8;
resolver 8.8.8.8;
access_log off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^(/.*\.(?:apib|svg))?$ {
set_by_lua_block $api_base_uri { return os.getenv("API_BASE_URI") }
auth_basic $auth_type;
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass https://$api_base_uri$1;
proxy_set_header Host $api_base_uri;
}
...
}
}
```
If there is a better / nicer solution for this, please let me know.
|
How to increase memory to handle super large Lua tables
I have a Lua function that, given n, generates all permutations of the series from 1 to n and stores each unique series in table form within a container table.
The size of this generated table gets very large very quickly (and necessarily so). About the time I try n = 11, the script will run for several seconds before failing out to "lua: not enough memory." I have 16gb of physical RAM, but watching the performance monitor in the Windows task manager allows me to watch the ram be consumed during run time, and it only gets up to about 20% before the script ends with the memory error.
I found this post that looks like the direction I need to head: [memory of a process in Lua](https://stackoverflow.com/questions/7507033/memory-of-a-process-in-lua#answer-7507499)
Since I'm running my script with Lua.exe, I'm assuming that I'm limited to how much memory Windows allocates for Lua.exe. Can I increase this amount? Can I use a C# wrapper program to simply run the Lua script (the idea being that it will have a higher/less restricted memory allocation)? Or am I looking in the wrong direction?
|
Do you need to store all the permutations in advance? You could generate them on-the-fly instead.
Example:
```
local function genPerm(self, i)
local result = {}
local f = 1
for j = 1, self.n do
f = f * j
table.insert(result, j)
end
for j = 1, self.n-1 do
f = f / (self.n + 1 - j)
local k = math.floor((i - 1) / f)
table.insert(result, j, table.remove(result, j+k))
i = i - k * f
end
return result
end
local function perms(n)
return setmetatable({n=n}, {__index=genPerm})
end
local generator = perms(11)
for _, i in ipairs {1, 42, 1000000, 39916800} do
print(table.concat(generator[i], ','))
end
```
|
How to read NA value?
file t.csv:
```
a ; b ; c ; d
1 ; 2 ; NA; 4
5 ; NA; 6 ; 7
```
I read the file t.csv
```
> t <- read.table("t.csv",header = T, sep = ";")
```
What I expect:
```
> str(t)
'data.frame': 2 obs. of 4 variables:
$ a: num 1 5
$ b: num 2 NA
$ c: num NA 6
$ d: num 4 7
```
What I get
```
> str(t)
'data.frame': 2 obs. of 4 variables:
$ a: num 1 5
$ b: Factor w/ 2 levels " 2 "," NA": 1 2
$ c: Factor w/ 2 levels " 6 "," NA": 2 1
$ d: num 4 7
```
What is my mistake?
|
You could try `strip.white=TRUE` in the `read.table`
```
dat <- read.table('t.csv', header=TRUE, sep=';', strip.white=TRUE)
str(dat)
# 'data.frame': 2 obs. of 4 variables:
#$ a: int 1 5
#$ b: int 2 NA
#$ c: int NA 6
#$ d: int 4 7
```
To get `numeric` class, it can be specified with `colClasses`
```
dat <- read.table('t.csv', header=TRUE, sep=';',
strip.white=TRUE, colClasses=rep('numeric',4))
str(dat)
#'data.frame': 2 obs. of 4 variables:
#$ a: num 1 5
#$ b: num 2 NA
#$ c: num NA 6
#$ d: num 4 7
```
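Another sketch that may work here: declare the whitespace-padded variants as NA markers instead (untested against other padding widths):

```
dat <- read.table('t.csv', header=TRUE, sep=';', na.strings=c('NA', ' NA', ' NA '))
```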
|
Is it possible to replace content on every page passed through a proxy similar to how mod\_rewrite is used for URLs?
Is it possible to replace content on every page passed through a proxy similar to how mod\_rewrite is used for URLs? The documentation on substitute is not clear.
I have some pages I am reverse proxying that have absolute paths. This breaks the site. They need replacing and tools like mod\_rewrite are not picking them up as they are not URL requests.
```
<VirtualHost *:80>
ServerName servername1
ServerAlias servername2
ErrorLog "/var/log/proxy/jpuat_prox_error_log"
CustomLog "/var/log/proxy/jpuat_prox_access_log" common
RewriteEngine on
LogLevel alert rewrite:trace2
RewriteCond %{HTTP_HOST} /uat.site.co.jp$ [NC]
RewriteRule ^(.*)$ http://jp.uat.site2uk.co.uk/$1 [P]
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|uat.site.co.jp|jp.uat.site2uk.co.uk|i"
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass / http://uat.site.co.jp/
ProxyPassReverse / http://uat.site.co.jp/
</VirtualHost>
```
Neither of the above works at replacing the HTML string
```
<link href="//uat.site.co.jp/css/css.css
```
with
```
<link href="//uat.site2uk.co.uk/css/css.css
```
## Conf after changes:
```
<VirtualHost *:80>
ServerName jp.uat.site2uk.co.uk
ServerAlias uat.site.co.jp
ErrorLog "/var/log/proxy/jpuat_prox_error_log"
CustomLog "/var/log/proxy/jpuat_prox_access_log" common
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass / http://uat.site.co.jp/
ProxyPassReverse / http://uat.site.co.jp/
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|uat.site.co.jp|jp.uat.site2uk.co.uk|ni"
</VirtualHost>
```
|
There's an apache module called mod\_substitute that can do this. Here's a short example:
```
<Location "/">
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s/uat.site.co.jp/jp.uat.site2uk.co.uk/ni"
</Location>
```
Or, when combined with mod\_proxy:
```
ProxyPass / http://uat.site.co.jp/
ProxyPassReverse / http://uat.site.co.jp/
Substitute "s|http://uat.site.co.jp/|http://jp.uat.site2uk.co.uk/|i"
```
There's more information at [the Apache documentation for mod\_substitute](http://httpd.apache.org/docs/2.4/mod/mod_substitute.html).
|
How to add a sort key to an existing table in AWS Redshift
In AWS Redshift, I want to add a sort key to a table that is already created. Is there any command which can add a column and use it as sort key?
|
UPDATE:
>
> Amazon Redshift now enables users to add and change sort keys of existing Redshift tables without having to re-create the table. The new capability simplifies user experience in maintaining the optimal sort order in Redshift to achieve high performance as their query patterns evolve and do it without interrupting the access to the tables.
>
>
>
source: <https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-redshift-supports-changing-table-sort-keys-dynamically/>
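A sketch of the new syntax (hypothetical table and column names):

```
ALTER TABLE mytable ALTER COMPOUND SORTKEY (mycol);
```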
~~At the moment I think its not possible (hopefully that will change in the future). In the past when I ran into this kind of situation I created a new table and copied the data from the old one into it.~~
from <http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html>:
>
> ADD [ COLUMN ] column\_name
> Adds a column with the specified name to the table. You can add only one column in each ALTER TABLE statement.
>
>
> **You cannot add a column that is the distribution key (DISTKEY) or a sort key (SORTKEY) of the table.**
>
>
> You cannot use an ALTER TABLE ADD COLUMN command to modify the following table and column attributes:
>
>
> UNIQUE
>
>
> PRIMARY KEY
>
>
> REFERENCES (foreign key)
>
>
> IDENTITY
>
>
> The maximum column name length is 127 characters; longer names are truncated to 127 characters. The maximum number of columns you can define in a single table is 1,600.
>
>
>
|
Calculating Source Path based on source-destination dictionary
I have an input dictionary - `dict_input` that has destinations as `keys` and sources as `values`. One destination can have one or more sources.
```
dict_input = {'C411':['C052'],'C052':['C001','C002'], 'C001':['9001'], 'C002':['9002']}
```
In above `dict_input`, the terminal destination is `C411` whereas the initial sources are `9001` and `9002`. I am trying to come up with source paths for the terminal destination `C411`. Expected output in the form of `list` -
```
[['C411', 'C052', 'C001', '9001'], ['C411', 'C052','C002', '9002']]
```
I have this code:
```
def get_source(node, dict_input, source=[]):
    if node in dict_input:
        source.append(node)
        for i in dict_input[node]:
            if i != node:
                get_source(i, dict_input, source)
            else:
                source.append(node)
    else:
        source.append(node)
        return source
    return source

dict_input = {'C052':['C001','C002'], 'C411':['C052'], 'C001':['9001'], 'C002':['9002']}
print(get_source('C411', dict_input, []))
```
The output is the two source paths clubbed into a single list -
```
['C411', 'C052', 'C001', '9001', 'C002', '9002']
```
How do I modify my code to get separate list of each source path?
|
- Don't forget to `pop` from the current path when you have finished visiting a node
- If you come across a "leaf" (a node ID which is not a key), store a **copy** of the current path in output list
- Be wary of corrupt data e.g. cyclic links - would be useful to keep a `set` of visited nodes
Sample implementation of the above:
```
def get_source(root, dict_input):
    # output list
    path_list = []
    # current path
    cur_path = []
    # visited set
    visited = set()

    # internal recursive helper function
    def helper(node):
        cur_path.append(node)
        # normal node
        if node in dict_input:
            if not node in visited:
                visited.add(node)
                for child in dict_input[node]:
                    helper(child)
            # else: cycle detected, raise an exception?
        # leaf node
        else:
            # important: must be a copy of the current path
            path_list.append(list(cur_path))
        cur_path.pop()

    # call this helper function on the root node
    helper(root)
    return path_list
```
Test:
```
>>> dict_input = {'C411':['C052'],'C052':['C001','C002'], 'C001':['9001'], 'C002':['9002']}
>>> get_source('C411', dict_input)
[['C411', 'C052', 'C001', '9001'], ['C411', 'C052', 'C002', '9002']]
```
|
Enable and disable Button according to the text in EditText in Android
I want to disable my Button if the text in the EditText is less than 3 words, and enable it once there are more than 3 words so that it can be clicked.
Can anyone help me here?
|
You have to `addTextChangedListener` to your `EditText`
Like this:
```
yourEditText.addTextChangedListener(new TextWatcher() {
    @Override
    public void afterTextChanged(Editable arg0) {
        enableSubmitIfReady();
    }

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
    }

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
    }
});
```
In that method, you should do it like this:
```
public void enableSubmitIfReady() {
    boolean isReady = yourEditText.getText().toString().length() > 3;
    yourbutton.setEnabled(isReady);
}
```
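Note that the snippet above counts characters. If you literally want to count words, as the question asks, a rough sketch (splitting on whitespace, with `yourEditText`/`yourbutton` as above):
```
public void enableSubmitIfReady() {
    // split on runs of whitespace; an empty field counts as zero words
    String text = yourEditText.getText().toString().trim();
    int words = text.isEmpty() ? 0 : text.split("\\s+").length;
    yourbutton.setEnabled(words > 3);
}
```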
Hope it helps.
|
Create a view with ORDER BY clause
I'm trying to create a view with an `ORDER BY` clause. I have create it successfully on SQL Server 2012 SP1, but when I try to re-create it on SQL Server 2008 R2, I get this error:
>
> Msg 102, Level 15, State 1, Procedure TopUsers, Line 11
> Incorrect
> syntax near 'OFFSET'.
>
>
>
The code to create the view is
```
CREATE View [dbo].[TopUsersTest]
as
select
u.[DisplayName] , sum(a.AnswerMark) as Marks
From Users_Questions us inner join [dbo].[Users] u
on u.[UserID] = us.[UserID]
inner join [dbo].[Answers] a
on a.[AnswerID] = us.[AnswerID]
group by [DisplayName]
order by Marks desc
OFFSET 0 ROWS
```
=====================
This is a screen shot of the diagram

I wish to return users' `DisplayName` and the `UserTotalMarks` and order this result desc, so the user with the biggest result with be on the top.
|
I'm not sure what you think this `ORDER BY` is accomplishing? Even if you *do* put `ORDER BY` in the view in a legal way (e.g. by adding a `TOP` clause), if you just select from the view, e.g. `SELECT * FROM dbo.TopUsersTest;` without an `ORDER BY` clause, SQL Server is free to return the rows in the most efficient way, which won't necessarily match the order you expect. This is because `ORDER BY` is overloaded, in that it tries to serve two purposes: to sort the results and to dictate which rows to include in `TOP`. In this case, `TOP` always wins (though depending on the index chosen to scan the data, you might observe that your order is working as expected - but this is just a coincidence).
**In order to accomplish what you want, you need to add your `ORDER BY` clause to the queries that pull data from the view, not to the code of the view itself.**
So your view code should just be:
```
CREATE VIEW [dbo].[TopUsersTest]
AS
SELECT
u.[DisplayName], SUM(a.AnswerMark) AS Marks
FROM
dbo.Users_Questions AS uq
INNER JOIN [dbo].[Users] AS u
ON u.[UserID] = uq.[UserID]
INNER JOIN [dbo].[Answers] AS a
ON a.[AnswerID] = uq.[AnswerID]
GROUP BY u.[DisplayName];
```
The `ORDER BY` is meaningless so should not even be included.
---
To illustrate, using AdventureWorks2012, here is an example:
```
CREATE VIEW dbo.SillyView
AS
SELECT TOP 100 PERCENT
SalesOrderID, OrderDate, CustomerID , AccountNumber, TotalDue
FROM Sales.SalesOrderHeader
ORDER BY CustomerID;
GO
SELECT SalesOrderID, OrderDate, CustomerID, AccountNumber, TotalDue
FROM dbo.SillyView;
```
Results:
```
SalesOrderID OrderDate CustomerID AccountNumber TotalDue
------------ ---------- ---------- -------------- ----------
43659 2005-07-01 29825 10-4020-000676 23153.2339
43660 2005-07-01 29672 10-4020-000117 1457.3288
43661 2005-07-01 29734 10-4020-000442 36865.8012
43662 2005-07-01 29994 10-4020-000227 32474.9324
43663 2005-07-01 29565 10-4020-000510 472.3108
```
And you can see from the execution plan that the `TOP` and `ORDER BY` have been absolutely ignored and optimized away by SQL Server:

There is no `TOP` operator at all, and no sort. SQL Server has optimized them away completely.
Now, if you change the view to say `ORDER BY SalesOrderID`, you will then just happen to get the ordering that the view states, but only - as mentioned before - by coincidence.
But if you change your outer query to perform the `ORDER BY` you wanted:
```
SELECT SalesOrderID, OrderDate, CustomerID, AccountNumber, TotalDue
FROM dbo.SillyView
ORDER BY CustomerID;
```
You get the results ordered the way you want:
```
SalesOrderID OrderDate CustomerID AccountNumber TotalDue
------------ ---------- ---------- -------------- ----------
43793 2005-07-22 11000 10-4030-011000 3756.989
51522 2007-07-22 11000 10-4030-011000 2587.8769
57418 2007-11-04 11000 10-4030-011000 2770.2682
51493 2007-07-20 11001 10-4030-011001 2674.0227
43767 2005-07-18 11001 10-4030-011001 3729.364
```
And the plan still has optimized away the `TOP`/`ORDER BY` in the view, but a sort is added (at no small cost, mind you) to present the results ordered by `CustomerID`:

So, moral of the story, **do not put ORDER BY in views. Put ORDER BY in the queries that reference them.** And if the sorting is expensive, you might consider adding/changing an index to support it.
|
Find the minimum distance to a set
I have two sets, A and B, both containing positions of some particles. What I want to do is the following:
```
For each element a in A,
Calculate the minimum distance between a and the elements of B.
Put these distances in to a list and return.
```
I know how to do this with loops, but I don't know how to do it in a fast way using `data.table` syntax.
|
We can use `sapply` to loop over 'A', get the `min` `abs`olute difference from the 'B' vector and store as a `vector`
```
sapply(A, function(x) min(abs(x - B)))
```
Or with `data.table` syntax
```
dt1[, lapply(A, function(x) min(abs(x - B)))]
```
---
If the vectors are sorted, a fast option is `findInterval`
```
idx <- findInterval(A, B)
# clamp the index to valid positions and compare the neighbours on both sides
pmin(abs(A - B[pmax(idx, 1)]), abs(A - B[pmin(idx + 1, length(B))]))
```
---
If these are columns of `data.table`
```
dt1[, { idx <- findInterval(A, B); pmin(abs(A - B[pmax(idx, 1)]), abs(A - B[pmin(idx + 1, length(B))])) }]
```
---
Or using `outer` (the function passed to `outer` must be vectorised, so compute the full distance matrix and take the row minima afterwards)
```
apply(outer(A, B, FUN = function(x, y) abs(x - y)), 1, min)
```
```
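If both vectors live in data.tables, another idiomatic option is a rolling join with `roll = "nearest"`; a sketch, assuming `A` and `B` are numeric vectors:
```
library(data.table)
dtA <- data.table(val = A)
dtB <- data.table(val = B, nearest = B)  # keep a copy, the join column gets overwritten
res <- dtB[dtA, on = "val", roll = "nearest"]
res[, dist := abs(val - nearest)]        # min distance from each element of A to B
```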
|
Rails 3, @font-face failing in production with firefox
I'm using [font-awesome](http://fortawesome.github.com/Font-Awesome/) in a rails 3 app, and everything is ok in development mode, but when I push to Heroku, Firefox fails to render the icons, and instead, I see this:

- Chrome renders the icons fine in development and production
- This just affects FireFox (although I've not tried IE)
- The app is [here](https://pure-sunset-9274.herokuapp.com), I'd appreciate if someone could confirm that this is not just happening on my machine (to help me rule out a localhost caching issue).
- All assets, including fonts and stylesheets, are hosted on S3, using the [asset\_sync](https://github.com/rumblelabs/asset_sync) gem.
**Here's what I've done:**
Added the following to the top of font-awesome.css.scss:\*\*
```
// font-awesome.css.scss
@font-face {
font-family: 'FontAwesome';
src: font-url("fontawesome-webfont.eot");
src: font-url("fontawesome-webfont.eot?#iefix") format("eot"),
font-url("fontawesome-webfont.woff") format("woff"),
font-url("fontawesome-webfont.ttf") format("truetype"),
font-url("fontawesome-webfont.svg#FontAwesome") format("svg");
font-weight: normal;
font-style: normal;
}
```
Then I put this in application.rb:
```
# application.rb
config.assets.paths << Rails.root.join("app", "assets", "fonts")
config.assets.precompile += %w( .svg .eot .woff .ttf )
```
Finaly I placed all 4 font files in `app/assets/fonts`.
I would really like to know what I'm doing wrong here.
|
I fixed my problem.
From [this article](http://blog.fallingmanstudios.net/post/14281163243/firefox-font-face-in-rails-with-asset-subdomains#notes), I learned that:
>
> Firefox rejects all cross-site font requests unless some specific headers are set:
>
>
> [i.e. Access-Control-Allow-Origin]
>
>
>
And, from [this article](http://motoole.com/2011/10/19/hosting-font-face-fonts-on-s3.html):
>
> Sadly, right now S3 doesn’t allow you to specify the Access-Control-Allow-Origin header that your objects get served with
>
>
>
So you have a few options:
1. Serve the fonts from your app's public folder, not from S3
2. Serve the fonts from Rackspace, where you **can** set the headers
3. Embed the font in your CSS as a [Base64 string](http://geoff.evason.name/2010/05/03/cross-domain-workaround-for-font-face-and-firefox/)
I've gone with the first option since this is gonna be a low traffic site, but here's a nice write-up on how to [serve fonts from Rackspace](http://www.bryandragon.com/articles/rails-asset-pipeline-cdns-and-serving-cross-domain-fonts) whilst simultaneously serving all other assets from S3.
---
**UPDATE:**
Amazon [announced yesterday](http://aws.amazon.com/about-aws/whats-new/2012/08/31/amazon-s3-announces-cross-origin-resource-sharing-CORS-support/) that they now support Cross Origin Resource Sharing (CORS), so the solution posted above should no longer be necessary. Their [developer guide](http://docs.amazonwebservices.com/AmazonS3/latest/dev/cors.html) explains more.
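With CORS support, you attach a rule like the following to the S3 bucket (a sketch only; check the developer guide above for the exact schema and substitute your own origin):
```
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```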
|
Reasons for using sqlalchemy in Qt
This is really a "pardon my ignorance" question so apologies if it doesn't meet the requirements.
I want to develop a fairly simple database application. It will be desktop based and lightweight so I'm happy that SQLite will suffice. I have also decided upon Qt and pyside.
Looking through the mass of tutorials out there, I keep coming across sqlalchemy and exlixir (and Camelot). I am basically just wondering what advantages there are to using sqlalchemy (and elixir) over basic QSql in Qt? What would I be missing if I didn't use such a thing.
I know this is basic but before I can progress on my self-tuition process, I just want to get this clear in my head.
|
Basically, you have 3 options here.
## [QtSql](http://doc.qt-project.org/qtsql.html)
QtSql is a separate module in Qt for working with SQL databases.
Pros:
- Integration with Qt may be easier
Cons:
- Hard to learn
- Was made for C++, requires some redundant code
- Requires adding one more Qt module to your project
- Documentation looks bad
## [sqlite3](http://docs.python.org/library/sqlite3.html) module
This a module in Python standard library to work with SQLite databases.
Pros:
- Very easy to use
- Code is quite concise
- No external dependencies
Cons:
- You do have to write SQL queries
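For a feel of the difference, a minimal sqlite3 sketch (in-memory database, hypothetical table):
```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person (name) VALUES (?)", ("Alice",))
for row in conn.execute("SELECT id, name FROM person"):
    print(row)
conn.close()
```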
## [SQLAlchemy](http://www.sqlalchemy.org/) ORM
SQLAlchemy makes work with databases similar to work with usual classes.
Pros:
- Object Relational Mapper: exposes an object-oriented interface and makes SQL queries for you
- Once you've set up the table information, work with databases is pure joy
Cons:
- Steep learning curve
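The same hypothetical table through the ORM, as a rough sketch:
```
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Person(name="Alice"))
session.commit()
print(session.query(Person).filter_by(name="Alice").one().id)
```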
---
Here's my conclusion:
If you are comfortable with writing SQL queries and don't have to do a lot of work with databases, use sqlite3. And if you don't mind spending some time to learn something awesome, go for SQLAlchemy.
About the other projects you've mentioned:
Elixir seems dead, and SQLAlchemy has its functionality built-in now, probably better.
Camelot is just weird... I wouldn't use it.
|
How to get the precision and recall from a nltk classifier?
```
import nltk
from nltk.corpus import movie_reviews
from nltk.tokenize import word_tokenize
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]
training_set = featuresets[500:1500]
testing_set = featuresets[:1500]

classifier = nltk.DecisionTreeClassifier.train(training_set)
print "Classifier accuracy percent:", (nltk.classify.accuracy(classifier, testing_set)) * 100, "%"

string = raw_input("Enter the string: ")
print (classifier.classify(find_features(word_tokenize(string))))
```
This code will display the accuracy of the classifier and then get input from user. And it returns the polarity of the string input by the user.
But here's my question: since I can obtain the accuracy by using `nltk.accuracy()`, is it possible to get its precision and recall as well?
|
If you're using the nltk package, then it appears you can use the `recall` and `precision` functions from `nltk.metrics.scores` ([see the docs](http://www.nltk.org/api/nltk.metrics.html)).
The functions should be available after invoking
```
from nltk.metrics.scores import (precision, recall)
```
Then you need to call them with `reference` (known labels) and `test` (the output of your classifier on the test set) sets.
Something like the code below should produce these sets as `refsets` and `testsets`
```
import collections

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(testing_set):
    refsets[label].add(i)
    observed = classifier.classify(feats)
    testsets[observed].add(i)
```
Then, you can see the precision and recall for positive predictions with something like
```
print( 'Precision:', nltk.metrics.precision(refsets['pos'], testsets['pos']) )
print( 'Recall:', nltk.metrics.recall(refsets['pos'], testsets['pos']) )
# `'pos'` is for the "positive" (as opposed to "negative") label
```
|
how to add the images to AlertDialog in android?
I created an AlertDialog that has two options: one to open the camera and the other to pick from the gallery,
as shown in this image

but i need to add the camera and gallery images to it like this

How do I add the images in this code?
```
final String [] items = new String [] {"Take from camera", "Select from gallery"};
ArrayAdapter<String> adapter = new ArrayAdapter<String> (this, android.R.layout.select_dialog_item,items);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("Select Image");
builder.setAdapter( adapter, new DialogInterface.OnClickListener() {
public void onClick( DialogInterface dialog, int item ) { //pick from camera
if (item == 0) {
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
mImageCaptureUri = Uri.fromFile(new File(Environment.getExternalStorageDirectory(),
"tmp_avatar_" + String.valueOf(System.currentTimeMillis()) + ".jpg"));
intent.putExtra(android.provider.MediaStore.EXTRA_OUTPUT, mImageCaptureUri);
try {
intent.putExtra("return-data", true);
startActivityForResult(intent, PICK_FROM_CAMERA);
} catch (ActivityNotFoundException e) {
e.printStackTrace();
}
} else { //pick from file
Intent intent = new Intent();
intent.setType("image/*");
intent.setAction(Intent.ACTION_GET_CONTENT);
startActivityForResult(Intent.createChooser(intent, "Complete action using"), PICK_FROM_FILE);
}
}
});
final AlertDialog dialog = builder.create();
dialog.show();
}
```
|
Create a layout like you want, then add that layout to your dialog.
**Layout**
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >

    <ImageView
        android:id="@+id/image"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginRight="5dp" />

    <TextView
        android:id="@+id/text"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:textColor="#FFF"
        android:layout_toRightOf="@+id/image" />

    <Button
        android:id="@+id/dialogButtonOK"
        android:layout_width="100px"
        android:layout_height="wrap_content"
        android:text=" Ok "
        android:layout_marginTop="5dp"
        android:layout_marginRight="5dp"
        android:layout_below="@+id/image" />

</RelativeLayout>
```
**then on activity**
```
final Dialog dialog = new Dialog(context);
dialog.setContentView(R.layout.custom);
dialog.setTitle("Bla Bla");

TextView text = (TextView) dialog.findViewById(R.id.text);
text.setText("Your Text");
ImageView image = (ImageView) dialog.findViewById(R.id.image);
image.setImageResource(R.drawable.ic_launcher);

Button dialogButton = (Button) dialog.findViewById(R.id.dialogButtonOK);
dialogButton.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        dialog.dismiss();
    }
});

dialog.show();
```
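If you'd rather keep the list-style AlertDialog from the question and just add icons to its rows, a sketch of that alternative (assumes a row layout like the one above with `@+id/image` and `@+id/text`, saved under a hypothetical `R.layout.dialog_row`, plus hypothetical `ic_camera`/`ic_gallery` drawables):
```
ListAdapter adapter = new ArrayAdapter<String>(this, R.layout.dialog_row, R.id.text, items) {
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        // let ArrayAdapter fill the TextView, then set the row's icon
        View row = super.getView(position, convertView, parent);
        ImageView icon = (ImageView) row.findViewById(R.id.image);
        icon.setImageResource(position == 0 ? R.drawable.ic_camera : R.drawable.ic_gallery);
        return row;
    }
};
builder.setAdapter(adapter, onClickListener); // same click listener as in the question
```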
|
Force addition between ulong and long variable
I have the following function:
```
long Foo (long min, long max)
{
ulong range = max - min; // guaranteed to fit in ulong
ulong x = GenerateRandomULongBetween0AndRange(range);
return x + min; // result is guaranteed to fit in long
}
```
But the C# compiler says I cannot add `ulong` and `long`. `x` can be greater than `long.MaxValue` though, and `min` may be negative. So I can't cast one to the other. How to proceed? :-(
|
>
> The C# compiler says I cannot add ulong and long.
>
>
>
Correct.
>
> x can be greater than long.MaxValue though, and min may be negative.
>
>
>
Correct.
>
> So I can't cast one to the other.
>
>
>
Incorrect. Cast them. See what happens! You might be pleasantly surprised.
Long/ulong arithmetic is not different; the bit patterns are exactly the same and it compiles down to the same code. It's just the *interpretation* of the bits that differs.
One word of warning: it is possible to put C# into a mode where it crashes when there is an overflow involving integer arithmetic, and you explicitly do NOT want that to crash here. You can ensure that this does not happen even if someone turns on "checked by default" by using the `unchecked` feature in either its expression or statement form.
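Putting that together, a sketch of the casts (using the question's `GenerateRandomULongBetween0AndRange`; `unchecked` guards against a project-wide checked setting):
```
long Foo(long min, long max)
{
    unchecked
    {
        ulong range = (ulong)(max - min);   // same bit pattern, reinterpreted
        ulong x = GenerateRandomULongBetween0AndRange(range);
        return (long)(x + (ulong)min);      // wraps back into the long range
    }
}
```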
|
php foreach and glob() function
PHP version 5.2.\*
My function is not working :/
Images are on the server, in folder: /public\_html/gallery/images

```
<?php
foreach(glob('gallery/images/*', GLOB_NOSORT) as $image)
{
echo "Filename: " . $image . "<br />";
}
?>
```
Any help? What am I doing wrong?
The error I get is: Warning: Invalid argument supplied for foreach() in /home/a9773555/public\_html/gallery/index.php on line 2
|
It looks like you have put your PHP file in the gallery folder:
/home/a9773555/public\_html/gallery/index.php on line 2
If that is the case (you put index.php in gallery), then try the following:
```
<?php
foreach(glob('images/*', GLOB_NOSORT) as $image)
{
echo "Filename: " . $image . "<br />";
}
?>
```
Or do the following:
put your index.php in your /home folder, then...
```
<?php
foreach(glob('a9773555/public_html/gallery/images/*', GLOB_NOSORT) as $image)
{
echo "Filename: " . $image . "<br />";
}
?>
```
Give it a try and let me know...
|
Run Gradle with verbose class loading?
My build script is encountering an error (below). Is there a way to run Gradle with the same type of output as invoking Java with `-verbose:class`?
The error in question, should anyone have some input:
```
Caused by: org.gradle.api.artifacts.ResolveException: Could not resolve all dependencies for configuration ':Project:compile'.
at org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingArtifactDependencyResolver.wrapException(ErrorHandlingArtifactDependencyResolver.java:49)
... more
Caused by: java.lang.LinkageError: loader constraints violated when linking org/apache/ivy/core/module/descriptor/DependencyDescriptor class
```
|
You can set the following environment variable, I believe...
```
GRADLE_OPTS="$GRADLE_OPTS -verbose:class"
```
and then invoke `gradle`. Read this [link](http://java.dzone.com/articles/first-look-building-java).
>
> Once Gradle is downloaded and unzipped, the environment variable
> GRADLE\_HOME can be set to the directory of the unzipped Gradle
> installation and the PATH should be set to $GRADLE\_HOME/bin or
> %GRADLE\_HOME%\bin. The Gradle installation page tells us that JVM
> options used by Gradle can be set via either GRADLE\_OPTS or JAVA\_OPTS.
> The Grade installation and configuration in the path can be confirmed
> by running gradle -v at the command line once the environment variable
> settings are sourced.
>
>
>
|
retrieve data from database and show them in bar chart using MPAndroidChart in android
I know how to show raw data in a bar chart using MPAndroidChart, but I have a problem in a real scenario. I have some data in my SQLite database and I want to show it in a bar chart. Please give me some reference or a simple example.
The database table has two columns: a month and a number, e.g. Jan = 200, Feb = 300, etc.
I have searched a lot but could not find the right example; the results were either 2-3 years old or not the type I mentioned.
Thanks in advance.
|
Here is the answer to my question.
These are two methods to store the x-axis data and y-axis data in arrays:
```
public ArrayList<String> queryXData() {
    ArrayList<String> xNewData = new ArrayList<String>();
    String query = "SELECT " + DAILY_DATE + " FROM " + TABLE_DAILY_FRAG;
    Cursor cursor = mSQLiteDatabase.rawQuery(query, null);
    for (cursor.moveToFirst(); !cursor.isAfterLast(); cursor.moveToNext()) {
        xNewData.add(cursor.getString(cursor.getColumnIndex(DAILY_DATE)));
    }
    cursor.close();
    return xNewData;
}

public ArrayList<Float> queryYData() {
    ArrayList<Float> yNewData = new ArrayList<Float>();
    String query = "SELECT " + DAILY_TOTAL + " FROM " + TABLE_DAILY_FRAG;
    Cursor cursor = mSQLiteDatabase.rawQuery(query, null);
    for (cursor.moveToFirst(); !cursor.isAfterLast(); cursor.moveToNext()) {
        yNewData.add(cursor.getFloat(cursor.getColumnIndex(DAILY_TOTAL)));
    }
    cursor.close();
    return yNewData;
}
```
This is another method to show the bar chart; call it in your activity.
```
private void addData() {
    // query the database once and reuse the results instead of re-querying per loop iteration
    ArrayList<Float> yData = mExpenseDB.queryYData();
    ArrayList<String> xData = mExpenseDB.queryXData();

    ArrayList<BarEntry> yVals = new ArrayList<BarEntry>();
    for (int i = 0; i < yData.size(); i++) {
        yVals.add(new BarEntry(yData.get(i), i));
    }
    ArrayList<String> xVals = new ArrayList<String>(xData);

    BarDataSet dataSet = new BarDataSet(yVals, "expense values");
    dataSet.setColors(ColorTemplate.COLORFUL_COLORS);
    BarData data = new BarData(xVals, dataSet);

    LimitLine line = new LimitLine(12f, "average daily expense");
    line.setTextSize(12f);
    line.setLineWidth(4f);
    YAxis leftAxis = barChart.getAxisLeft();
    leftAxis.addLimitLine(line);

    barChart.setData(data);
    barChart.setDescription("The expenses chart.");
    barChart.animateY(2000);
}
```
If you found this helpful, let me know.
|
What's the difference between Mockito Matchers isA, any, eq, and same?
I am confused on what's the difference between them, and which one to choose in which case. Some difference might be obvious, like `any` and `eq`, but I'm including them all just to be sure.
I wonder about their differences because I came across this problem:
I have this POST method in a Controller class
```
public Response doSomething(@ResponseBody Request request) {
return someService.doSomething(request);
}
```
And would like to perform a unit test on that controller.
I have two versions. The first one is the simple one, like this
```
@Test
public void testDoSomething() {
//initialize ObjectMapper mapper
//initialize Request req and Response res
when(someServiceMock.doSomething(req)).thenReturn(res);
Response actualRes = someController.doSomething(req);
assertThat(actualRes, is(res));
}
```
But I wanted to use a MockMvc approach, like this one
```
@Test
public void testDoSomething() {
//initialize ObjectMapper mapper
//initialize Request req and Response res
when(someServiceMock.doSomething(any(Request.class))).thenReturn(res);
mockMvc.perform(post("/do/something")
.contentType(MediaType.APPLICATION_JSON)
.content(mapper.writeValueAsString(req))
)
.andExpect(status().isOk())
.andExpect(jsonPath("$message", is("done")));
}
```
Both work well. But I wanted my `someServiceMock.doSomething()` in the MockMvc approach to receive `req`, or at least an object that has the same variable values as `req` (not just any `Request` class), and return `res`, just like the first. I know that it's impossible using the MockMvc approach (or is it?), because the object passed in the actual call is always different from the object passed in the mock. Is there anyway I can achieve that? Or does it even make sense to do that? Or should I be satisfied using `any(Request.class)`? I've tried `eq`, `same`, but all of them fail.
|
- [`any()`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#any--) checks absolutely nothing. Since Mockito 2.0, [`any(T.class)`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#any-java.lang.Class-) shares `isA` semantics to mean "any `T`" or properly "any instance of type `T`".
[This is a change compared to Mockito 1.x](https://stackoverflow.com/posts/comments/55631479), where [`any(T.class)`](https://javadoc.io/doc/org.mockito/mockito-core/1.10.19/org/mockito/Matchers.html) checked absolutely nothing but saved a cast prior to Java 8: "Any kind object, not necessary of the given class. The class argument is provided only to avoid casting."
- [`isA(T.class)`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#isA-java.lang.Class-) checks that the argument `instanceof T`, implying it is non-null.
- [`same(obj)`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#same-T-) checks that the argument refers to the same instance as `obj`, such that `arg == obj` is true.
- [`eq(obj)`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#eq-T-) checks that the argument equals `obj` according to its `equals` method. This is also the behavior if you pass in real values without using matchers.
Note that unless `equals` is overridden, you'll see the default Object.equals implementation, which would have the same behavior as `same(obj)`.
If you need more exact customization, you can use an adapter for your own predicate:
- For Mockito 1.x, use [`argThat`](https://javadoc.io/doc/org.mockito/mockito-core/1.10.19/org/mockito/Matchers.html) with a custom Hamcrest `Matcher<T>` that selects exactly the objects you need.
- For Mockito 2.0 and beyond, use [`Matchers.argThat`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#argThat-org.mockito.ArgumentMatcher-) with a custom `org.mockito.ArgumentMatcher<T>`, or [`MockitoHamcrest.argThat`](https://site.mockito.org/javadoc/current/org/mockito/hamcrest/MockitoHamcrest.html#argThat(org.hamcrest.Matcher)) with a custom Hamcrest `Matcher<T>`.
You may also use [`refEq`](https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/ArgumentMatchers.html#refEq-T-java.lang.String...-), which uses *reflection* to confirm object equality; Hamcrest has a similar implementation with [SamePropertyValuesAs](http://hamcrest.org/JavaHamcrest/javadoc/1.3/org/hamcrest/beans/SamePropertyValuesAs.html) for public bean-style properties. Note that on GitHub [issue #1800 proposes deprecating and removing `refEq`](https://github.com/mockito/mockito/issues/1800), and as in that issue you might prefer `eq` to better give your classes better encapsulation over their sense of equality.
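For the question's case, a sketch with a lambda `ArgumentMatcher` (Mockito 2.x; assumes a static import of `ArgumentMatchers.argThat` and a hypothetical `getId()` on `Request`, so compare whichever fields define "the same request" for you):
```
when(someServiceMock.doSomething(argThat(r -> req.getId().equals(r.getId()))))
        .thenReturn(res);
```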
|
The mysterious nature of Fortran 90 modules
Fortran 90 modules are evanescent creatures. I was using a (singular) module for a while with some success (compiling using Intel Visual Fortran and Visual Studio 2010). Then I wrote another module and tried to USE it in another function, before receiving this error:
```
error #7002: Error in opening the compiled module file. Check INCLUDE paths.
```
So I deleted the offending module. But now I receive the same error after when trying to access my original module!
How can I locate these mysterious creatures? Why does one module work but not two? I'm assuming that I need to delete and recompile them, or tell the compiler to include them somehow. I know the file locations of the source code but not *where they compile to*.
|
For that specific processor (many other Fortran processors have similar characteristics, but the details differ):
- When a module is compiled successfully, the compiler generates a .mod file (and perhaps an .obj file) that contains information about the entities provided by the module. It is this mod file that the error message you quote is referring to. The compiler requires this mod file when it encounters a USE statement for the module while compiling other source. (The obj file is used in the link stage.)
- Hence, before a module is USE'd, the compiler must at some time have compiled the source code for the module. That means that the module's source code (MODULE...END MODULE) must have appeared earlier in the source file prior to the USE statement, or must have been in a separate file that was compiled prior to the source file with the USE statement.
- When compiling using an Intel Fortran project within Visual Studio, the build environment will automatically attempt to arrange an appropriate compilation order for the source files within a project. When compiling using the ifort command from the command line the programmer is responsible for managing the compilation order.
- The directory that receives the generated mod files is specified by the first /module command line option given to the compiler. In Visual Studio this option is set using the Fortran > Output Files > Module Path property. By default, a Fortran project in Visual Studio has this property set to be the name of the current configuration, hence the mod files appear in a child directory of the project called Debug or Release. In the absence of a /module command line option the mod files appear in the current directory.
- Directories specified by the /module command line option (or the equivalent Visual Studio property) are also used in the search for mod files. In addition, directories specified by the /I command line option (in Visual Studio, Fortran > General > Additional Include Directories) are searched.
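For example, a rough command-line sketch on Windows (hypothetical file names; `/c` compiles without linking, `/module` sets where mod files are written and searched):
```
ifort /c /module:mods mymodule.f90
ifort /module:mods main.f90 mymodule.obj
```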
It is not clear from your question as to how you have distributed your modules amongst your source files, whether you have a single Visual Studio project or multiple projects etc. If you are only dealing with a single project then typically all that is required is to add all the Fortran files to the Source files for the project, and the default settings should "work". Errors in finding mod files may be because:
- the associated source for the module isn't in one of the source files for the project;
- compilation of the source for a module failed for some other reason (are other errors listed earlier in the build sequence?)
- a module is defined after its use in a particular source file;
- there are circular dependencies between modules (module A uses module B which uses A or similar - this is not allowed by the rules of the language);
- some source construct that confuses the automatic determination of build order (older versions of the build system were confused by the F2003 form of use statement with the double colon, plus it is possible to obfuscate USE statements such that the build system fails to identify them) but these aspects are pretty obscure.
With multiple Fortran projects in Visual Studio there may be a need to modify the module search directories for dependent projects, such that they can find the mod files compiled by previous projects in the project dependency tree. Later versions of Intel Fortran handle this aspect automatically too, if the inter-project dependency settings in Visual Studio are correct.
|
cassandra filtering on an indexed column isn't working
I'm using (the latest version of) Cassandra nosql dbms to model some data.
I'd like to get a count of the number of active customer accounts in the last month.
I've created the following table:
```
CREATE TABLE active_accounts
(
customer_name text,
account_name text,
date timestamp,
PRIMARY KEY ((customer_name, account_name))
);
```
So because I want to filter by date, I create an index on the date column:
```
CREATE INDEX ON active_accounts (date);
```
When I insert some data, Cassandra automatically updates data on any existing primary key matches, so the following inserts only produce two records:
```
insert into active_accounts (customer_name, account_name, date) Values ('customer2', 'account2', 1418377413000);
insert into active_accounts (customer_name, account_name, date) Values ('customer1', 'account1', 1418377413000);
insert into active_accounts (customer_name, account_name, date) Values ('customer2', 'account2', 1418377414000);
insert into active_accounts (customer_name, account_name, date) Values ('customer2', 'account2', 1418377415000);
```
This is exactly what I'd like - I won't get a huge table of data, and each entry in the table represents a unique customer account - so no need for a select distinct.
The query I'd like to make - is how many distinct customer accounts are active within the last month say:
```
Select count(*) from active_accounts where date >= 1418377411000 and date <= 1418397411000 ALLOW FILTERING;
```
In response to this query, I get the following error:
```
code=2200 [Invalid query] message="No indexed columns present in by-columns clause with Equal operator"
```
What am I missing; isn't this the purpose of the Index I created?
|
Table design in Cassandra is extremely important and it must match the kind of queries that you are trying to preform. The reason that Cassandra is trying to keep you from performing queries on the date column, is that any query along that column will be extremely inefficient.
**Table Design - Model your queries**
One of the main reasons that Cassandra can be fast is that it partitions user data so that most (99%)
of queries can be completed without contacting all of the nodes in the cluster. This means less network traffic, less disk access, and faster response time. Unfortunately Cassandra isn't able to determine automatically the best way to partition data; the end user must come up with a schema which fits into the C\* data model and allows the queries they want at high speed.
```
CREATE TABLE active_accounts
(
customer_name text,
account_name text,
date timestamp,
PRIMARY KEY ((customer_name, account_name))
);
```
This schema will only be efficient for queries that look like
`SELECT date FROM active_accounts WHERE customer_name = ? AND account_name = ?`
This is because on the cluster the data is actually going to be stored like
```
node 1: [ ((Bob,1)->Monday), ((Tom,32)->Tuesday)]
node 2: [ ((Candice, 3) -> Friday), ((Sarah,1) -> Monday)]
```
The PRIMARY KEY for this table says that data should be placed on a node based on the hash of the combination of CustomerName and AccountName. This means we can only look up data quickly if we have both of those pieces of data. Anything outside of that scope becomes a batch job since it requires hitting multiple nodes and filtering over all the data in the table.
To optimize for different queries you need to change the layout of your table or use a distributed analytics framework like Spark or Hadoop.
An example of a different table schema that might work for your purposes would be something like
```
CREATE TABLE active_accounts
(
start_month timestamp,
customer_name text,
account_name text,
date timestamp,
PRIMARY KEY (start_month, date, customer_name, account_name)
);
```
In this schema I would put the timestamp of the first day of the month as the partitioning key and date as the first clustering key. This means that multiple account creations that took place in the same month will end up in the same partition and on the same node. The data for a schema like this would look like
```
node 1: [ (May 1 1999) -> [(May 2 1999, Bob, 1), (May 15 1999,Tom,32)]
```
This places the account dates in order within each partition making it very fast for doing range slices between particular dates. Unfortunately you would have to add code on the application side to pull down the multiple months that a query might be spanning. This schema takes a lot of (dev) work so if these queries are very infrequent you should use a distributed analytics platform instead.
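A sketch of the kind of query the new schema serves without `ALLOW FILTERING` (partition key equality plus a range on the first clustering column; the timestamps are placeholders):
```
SELECT count(*) FROM active_accounts
WHERE start_month = '2014-12-01'
AND date >= '2014-12-12' AND date <= '2014-12-13';
```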
For more information on this kind of time-series modeling check out:
<http://planetcassandra.org/getting-started-with-time-series-data-modeling/>
Modeling in general:
<http://www.slideshare.net/planetcassandra/cassandra-day-denver-2014-40328174>
<http://www.slideshare.net/johnny15676/introduction-to-cql-and-data-modeling>
Spark and Cassandra:
<http://planetcassandra.org/getting-started-with-apache-spark-and-cassandra/>
**Don't use secondary indexes**
Allow filtering was added to the CQL syntax to prevent users from accidentally designing queries that will not scale. The secondary indexes are really only for use by those doing analytics jobs or those C\* users who fully understand the implications. In Cassandra the secondary index lives on every node in your cluster. This means that any query that requires a secondary index will necessarily require contacting every node in the cluster. This will become less and less performant as the cluster grows and is definitely not something you want for a frequent query.
|
Changing large number of if-elif-else statements to use underlying structure
I have a function that looks something like this:
```
function_name(step, ... , typ):
if typ == 'some type of calc method':
if step == 1:
do_me_at_step_1(...)
elif step == 2:
do_me_at_step_2(...)
elif ...
elif typ == 'another calc method':
if step == 1:
do_me_at_step_1(...)
elif ...
```
Hopefully the general structure is clear here. What are the differences between the different `typ`s and how they are handled? There is one instance of `typ` which changes one variable in a very different way from all the others. All instances of `typ` vary in the number of separate step instructions, ranging from an `if step == 1: ... else:` to `if step == 1 ... elif step == 5 ... else`. The last `elif` and the `else` statements are different, basically.
How can I better structure this function, because it currently looks horrific?
My thought was to pull out all the initial steps...but I have to have a way to associate each `typ` with the step which becomes "key". So I thought somewhere along the lines of:
```
function_name(step, ... , typ):
orders = {typ1:key_step, typ2:key_step, ...}
while key_step['typ1'] != step:
if step == 1:
...
step +=1
```
But I noticed two problems: one, this doesn't really fully take advantage of the while loop. It would require me to add if statements up until the max step out of all the `typ`s that is not "key" in there, which doesn't seem to me to really take advantage of the information in `orders`.
The other problem is once you are *out* of the while loop. I suppose you could do something like
```
if typ == 'something':
do_the_key_step_function_for_this_typ(...)
```
but this really isn't doing anything new to clean up this function. How can I change the structure of this code so it is cleaner and shorter? There's an underlying structure, so I feel like there should be a better way to do this.
Feel free to ask me if you need any specifics about this function, but hopefully there are enough details provided here. Thanks!
|
Your initial function executes one specific step for one specific type, both of them being provided as argument. In the second case you iterate over steps trying to find the right one.
**Alternative 1: double dictionary**
Based on your alternative, I propose you to consider a dictionary of dictionaries, using two indexes instead of just one. Exemple:
```
# just some functions to be invoked
def do_me_at_step_1():
    return 1

def do_me_at_step_2():
    return 2

def do_other_step_2():
    return 102

# Use a dictionary assigning one type to a second level dictionary
# In the second level dictionary, assign one step with a function
command = { 'A': { 1:do_me_at_step_1, 2:do_me_at_step_2 }, 'B': { 1:do_me_at_step_1, 2:do_other_step_2 } }

# How you'd invoke the right function in function_name()
print (command['A'][1]())
print (command['A'][2]())
print (command['B'][1]())
print (command['B'][2]())
```
So in this case, your function\_name() just finds the function to invoke using a dictionary of dictionaries, and then invokes it.
**Alternative 2: object oriented code ?**
You could also consider using an object oriented approach:
So in fact, you'd create different classes, each corresponding to a different type. And then you'd invoke for an instantiation of one specific class the method corresponding to the step.
In fact, the implementation would be very close to the [state pattern](https://sourcemaking.com/design_patterns/state/python/1)
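A rough sketch of that idea, with hypothetical class and step names:
```
class TypeA:
    def step_1(self):
        return 1
    def step_2(self):
        return 2

class TypeB:
    def step_1(self):
        return 1
    def step_2(self):
        return 102

handlers = {'A': TypeA(), 'B': TypeB()}

def function_name(step, typ):
    # dispatch to the handler object for this type, then to the step method
    return getattr(handlers[typ], 'step_%d' % step)()

print(function_name(2, 'B'))  # 102
```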
|
Add plugins under a same project in sbt
I'm trying to build a Scala project made up of multiple subprojects, one of which is an sbt plugin. I would like to use this plugin in another subproject under the same parent project, but I don't quite understand how to do this.
My "build.sbt" in the project root is like this:
```
lazy val plugin = project
.in(file("sbt-Something"))
.dependsOn(lib)
.settings(common: _*)
.settings(name := "My plugin",
sbtPlugin := true)
lazy val examples = project
.in(file("examples"))
.dependsOn(lib, plugin)
.settings(common: _*)
.settings(name := "Examples")
```
How to add the `plugin` as a plugin to project `examples`?
|
I don't think you can have a plugin at the same "level" that project which is using it.
If you think about it, the plugin must be available before the compilation of the project that is using it. This is because it may, for example, modify the build settings, which would influence the way the project is built.
If you want to keep your plugin within your project you can do so by declaring a project in the `project/project` directory.
### $YOUR\_PROJECT\_ROOT/project/build.sbt
```
lazy val plugin = project
.in(file("sbt-plugin"))
.dependsOn(lib)
.settings(name := "My plugin", sbtPlugin := true)
lazy val lib = project.in(file("lib"))
lazy val root = project.in(file(".")).dependsOn(plugin)
```
Then you can put your code to `sbt-plugin` directory, and your shared library code to the `lib` folder.
In your normal build you can reference the shared library and the plugin.
### $YOUR\_PROJECT\_ROOT/build.sbt
```
val lib = ProjectRef(file("project/lib"), "lib")
val root = project.in(file(".")).dependsOn(lib).enablePlugins(MyPlugin)
```
Please note that maybe it would be better to keep the shared library as a separate project, because I think this setup may be a bit tricky. For example if you change something in the shared library the main project should recompile and should use new code. The plugin however will only use new code after issuing the `reload` command on the project.
If you want to share settings between the projects you can check answers to [How to share version values between project/plugins.sbt and project/Build.scala?](https://stackoverflow.com/questions/23944108/how-to-share-version-values-between-project-plugins-sbt-and-project-build-scala)
|
adding two linked lists efficiently in C
I have two linked lists representing the digits of decimal numbers in order from most- to least-significant, e.g. `4->7->9->6` and `5->7`
The answer should be `4->8->5->3` without reversing the lists because reversing the lists would result in decrease of efficiency.
I am thinking of solving the problem using a stack. I will traverse both lists and push the data elements into two separate stacks, one for each linked list. Then I pop both stacks together and add the two elements; if the result is a two-digit number, I take it modulo 10 and store the carry in a temp variable. The remainder is stored in the node and the carry is added to the next sum, and so on.
if the two stacks are s1 and s2 and the result linked list is res.
```
temp = 0;
res = (node*)(malloc(sizeof(node*));
while(s1->top!=-1 || s2->top!=-1)
{
temp = 0;
sum = pop(s1) + pop(s2);
n1 = (node*)(malloc(sizeof(node*));
temp = sum/10;
sum = sum%10;
sum = sum+temp;
n1->data = sum;
n1->next = res;
res = n1;
free n1;
//temp=0;
}
if((s1->top==-1)&&(s2->top==-1))
{
return res;
}
else if(s1->top==-1)
{
while(s2->top!=-1)
{
temp = 0;
sum = pop(s2);
sum = sum + temp;
temp = sum/10;
sum = sum%10;
n1 = (node*)(malloc(sizeof(node*));
n1->data = sum;
n1->next = res;
res = n1;
free n1;
}
}
else
{
while(s2->top!=-1)
{
temp = 0;
sum = pop(s2);
sum = sum+temp;
temp = sum/10;
sum = sum%10;
n1=(node*)(malloc(sizeof(node*));
n1->data = sum;
n1->next = res;
res = n1;
free n1;
}
}
return res;
```
I have come across this problem many times in interview questions, but this is the best solution that I could think of.
If anyone can come up with something more efficient in C, I will be very glad.
|
Two passes, no stack:
- Get the length of the two lists.
- Create a solution list with one node. Initialize the value of this node to zero. This will hold the carry digit. Set a list pointer (call it the carry pointer) to the location of this node. Set a list pointer (call it the end pointer) to the location of this node.
- Starting with the longer list, for each excess node, link a new node to the end pointer and assign it the value of the excess node. Set the end pointer to this new node. If the
value is less than 9, set the carry pointer to the new node.
- Now we're left with both list pointers having the same number of nodes in each.
- While the lists are not empty...
- Link a new node to the end pointer and advance the end pointer to this node.
- Get the values from each list and advance each list pointer to the next node.
- Add the two values together.
1. If value is greater than nine, set the value to `value mod 10`, increment the value held in the carry pointer's node, move the carry pointer to the next node. If carry pointer's value is nine, set to zero and go to next node.
2. If value is nine. Set it. Do nothing else.
3. If value is less than nine. Set it. Set carry pointer to current node.
- When you're done with both lists, check if the solution pointer's node value is zero. If it is, set the solution pointer to the next node, deleting the unneeded extra digit.
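A sketch of those steps in C (assuming the question's `node` holds an `int` digit plus a `next` pointer; `carry` always points at the last emitted digit that is still below 9, so a carry can be absorbed there while any nines in between roll over to zero):
```
#include <stdlib.h>

typedef struct node { int data; struct node *next; } node;

static node *new_node(int data) {
    node *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;
    return n;
}

static int list_length(const node *l) {
    int len = 0;
    for (; l; l = l->next) len++;
    return len;
}

node *add_lists(node *a, node *b) {
    int la = list_length(a), lb = list_length(b);
    if (la < lb) { node *t = a; a = b; b = t; int s = la; la = lb; lb = s; }

    node *head = new_node(0);            /* leading digit, absorbs a final carry */
    node *end = head, *carry = head;

    /* pass 1: copy the excess most-significant digits of the longer number */
    for (int i = 0; i < la - lb; i++, a = a->next) {
        end->next = new_node(a->data);
        end = end->next;
        if (a->data < 9) carry = end;
    }

    /* pass 2: add digit pairs, resolving carries through `carry` */
    while (a && b) {
        int sum = a->data + b->data;
        end->next = new_node(sum % 10);
        end = end->next;
        if (sum > 9) {                   /* bump the stop node, zero the nines after it */
            carry->data++;
            for (node *p = carry->next; p != end; p = p->next) p->data = 0;
        }
        if (end->data < 9) carry = end;
        a = a->next; b = b->next;
    }

    if (head->data == 0) {               /* drop the unused leading zero */
        node *t = head; head = head->next; free(t);
    }
    return head;
}
```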
|
How can I add a flag at the end of the linking command line using CMake?
I've got an issue where CMake [can't detect pthread](https://stackoverflow.com/questions/24813827/cmake-failing-to-detect-pthreads-due-to-warnings). As a work-around I tried:
```
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -lpthread")
```
However, this inserts `-lpthread` in the wrong place:
```
/usr/bin/c++ -std=c++11 -D_GNU_SOURCE -Wall [manyflags ...] -lpthread \
CMakeFiles/connectivity_tool.dir/connectivity_tool/conn_tool.cpp.o \
-o connectivity_tool -rdynamic -lboost_system [many libraries...]
```
This results in:
```
/usr/bin/ld: /tmp/ccNvRifh.ltrans3.ltrans.o: undefined reference to symbol 'pthread_mutexattr_settype@@GLIBC_2.2.5'
/lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line
```
Of course, the `-lpthread` should be at the end of the 3rd line, not the end of the 1st.
How can I go about either getting CMake to add `-lpthread` at the end of this line, or perhaps even modifying the generated Makefiles somehow in some hacky way to get this to work?
(If the answer involves actually detecting pthread properly then answer the linked question.)
|
>
> "How can I go about either getting CMake to add `-lpthread` at the end of this line, or perhaps even modifying the generated Makefiles somehow in some hacky way to get this to work?"
>
>
>
1st be sure that your
```
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -lpthread")
```
is the last seen in line by CMake.
Any further library/module references (like e.g. [`FIND_BOOST`](http://www.cmake.org/cmake/help/v3.0/module/FindBoost.html)) may screw up the order of the flags you want to provide directly.
I would use
```
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread")
```
and
```
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -pthread")
```
I think with this option the compiler driver picks the appropriate `pthread` library itself, so it ends up linked at the end of the linker's object chain.
|
make tooltip display on click and add a modal to it
Strangely, I find it difficult to bind jQuery's onclick event handler in this fiddle. I don't even know what I'm doing wrong. The HTML is as follows:
```
<ul>
<li><a id="tooltip_1" href="#" class="tooltip" >Trigger1</a><li>
<li><a id="tooltip_2" href="#" class="tooltip" >Trigger2</a><li>
<li><a id="tooltip_3" href="#" class="tooltip" >Trigger3</a><li>
</ul>
<div style="display: none;">
<div id="data_tooltip_1">
data_tooltip_1: You can hover over and interacte with me
</div>
<div id="data_tooltip_2">
data_tooltip_2: You can hover over and interacte with me
</div>
<div id="data_tooltip_3">
data_tooltip_3: You can hover over and interacte with me
</div>
</div>
```
styled this way:-
```
li {
padding: 20px 0px 0px 20px;
}
```
with a jquery like this:-
```
$(document).ready(function() {
$('.tooltip[id^="tooltip_"]').each
(function(){
$(this).qtip({
content: $('#data_' + $(this).attr('id')),
show: {
},
hide: {
fixed: true,
delay: 180
}
});
});
});
```
You can check out the fiddle page I created: `http://jsfiddle.net/UyZnb/339/`.
Again, how do I implement a jquery modal-like appearance to it so the tooltip becomes the focus?
|
**Working Demo**: using mouse over and out: <http://jsfiddle.net/swxzp/> **or** using click <http://jsfiddle.net/rjGeS/> (I have written a small jQuery/CSS/opacity demo)
**Update:** Working sample with trigger 1, 2 & 3: <http://jsfiddle.net/HeJqg/>
How it works:
It has 2 divs: `background`, which is used to grey out the rest of the page like a modal, and a second div `large`, which acts as a placeholder for the tooltip so that you can close and open it on whatever event you want even though the background is grey.
Rest feel free to play around with the code, hope it helps the cause `:)`
**Code**
```
$('.tooltip_display').click(function() {
    var $this = $(this);
    $("#background").css({
        "opacity": "0.3"
    }).fadeIn("slow");

    // position the tooltip relative to the trigger, then show it
    var pos = $this.position();
    $('.ttip').css({
        left: pos.left + 20,
        top: pos.top + 50
    }).show(500);
    $("#large").fadeIn("slow");
});

$('.note').on('click', function() {
    $('.ttip').hide(500);
    $("#background").fadeOut("slow");
    $("#large").fadeOut("slow");
});

$("#large").click(function() {
    $(this).fadeOut();
});
```
**CSS**
```
.ttip {
position: absolute;
width: 350px;
height: 100px;
color: #fff;
padding: 20px;
-webkit-box-shadow: 0 1px 2px #303030;
-moz-box-shadow: 0 1px 2px #303030;
box-shadow: 0 1px 2px #303030;
border-radius: 8px 8px 8px 8px;
-moz-border-radius: 8px 8px 8px 8px;
-webkit-border-radius: 8px 8px 8px 8px;
-o-border-radius: 8px 8px 8px 8px;
background-image:-moz-linear-gradient(top, #F45000, #FF8000);
background-image: -webkit-gradient(linear, left top, left bottom, from(#F45000), to(#FF8000));
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#F45000', endColorstr='#FF8000', GradientType=0);
background-color:#000;
display: none
}
.contents {
font-size: 15px;
font-weight:bold
}
.note {
font-size: 13px;
text-align:center;
display:block;
width: 100%
}
#background{
display: none;
position: absolute;
height: 100%;
width: 100%;
top: 0;
left: 0;
background: #000000;
z-index: 1;
}
#large {
display: none;
position: absolute;
background: #FFFFFF;
padding: 0px;
z-index: 10;
min-height: 0px;
min-width: 0px;
color: #336699;
}
```
**HTML**
```
<span class="tooltip_display">Trigger</span>
<div id="large">
<div class="ttip">
<div class="contents">Here goes contents...</div>
<span class="note">(click here to close the box)</span>
</div>
</div>
<div id="background"></div>
```
**Image** of working demo:

|
Merging two data.tables that don't have common columns
I want to merge two data.tables that don't have a common column, so I would end up with `N1*N2` rows, where `N1` and `N2` are the number of rows in each dataframe.
Doing this with base R works:
```
A <- data.frame(id = 1:6, value = 19:24)
B <- data.frame(value2 = c(25, 25, 26, 26), value3 = 4:5)
A
#> id value
#> 1 1 19
#> 2 2 20
#> 3 3 21
#> 4 4 22
#> 5 5 23
#> 6 6 24
B
#> value2 value3
#> 1 25 4
#> 2 25 5
#> 3 26 4
#> 4 26 5
merge(A, B, all = TRUE)
#> id value value2 value3
#> 1 1 19 25 4
#> 2 2 20 25 4
#> 3 3 21 25 4
#> 4 4 22 25 4
#> 5 5 23 25 4
#> 6 6 24 25 4
#> 7 1 19 25 5
#> 8 2 20 25 5
#> 9 3 21 25 5
#> 10 4 22 25 5
#> 11 5 23 25 5
#> 12 6 24 25 5
#> 13 1 19 26 4
#> 14 2 20 26 4
#> 15 3 21 26 4
#> 16 4 22 26 4
#> 17 5 23 26 4
#> 18 6 24 26 4
#> 19 1 19 26 5
#> 20 2 20 26 5
#> 21 3 21 26 5
#> 22 4 22 26 5
#> 23 5 23 26 5
#> 24 6 24 26 5
```
But if I now have two data.tables and not dataframes anymore, it errors:
```
library(data.table)
A <- data.table(id = 1:6, value = 19:24)
B <- data.table(value2 = c(25, 25, 26, 26), value3 = 4:5)
merge(A, B, all = TRUE)
#> Error in merge.data.table(A, B, all = TRUE): A non-empty vector of column names for `by` is required.
```
How can I reproduce the base R behavior with `data.table` (without necessarily using `merge()`)?
|
You are looking for a cross-join. `data.table` has a `CJ` function, but it builds the cross product of vectors rather than of two existing tables, so here you can do:
```
res <- setkey(A[, c(k=1, .SD)], k)[B[, c(k=1, .SD)], allow.cartesian = TRUE][, k := NULL]
res
id value value2 value3
1: 1 19 25 4
2: 2 20 25 4
3: 3 21 25 4
4: 4 22 25 4
5: 5 23 25 4
6: 6 24 25 4
7: 1 19 25 5
8: 2 20 25 5
9: 3 21 25 5
10: 4 22 25 5
11: 5 23 25 5
12: 6 24 25 5
13: 1 19 26 4
14: 2 20 26 4
15: 3 21 26 4
16: 4 22 26 4
17: 5 23 26 4
18: 6 24 26 4
19: 1 19 26 5
20: 2 20 26 5
21: 3 21 26 5
22: 4 22 26 5
23: 5 23 26 5
24: 6 24 26 5
id value value2 value3
```
---
Note the alternative `dplyr` solution:
```
dplyr::cross_join(A, B)
```
|
Zend\_Form not displaying error message with calling addError
I am implementing an updatePasswordAction and it's not displaying an error for an invalid current password. I could not implement this with a Zend\_Validate class used against the stored record->password, so for now I just validate in my controller action and, if that fails, I add the error message to the form element. This happens just before I run `$form->isValid`. In any case, it's working, but when the validation fails it does not display the error message on the element. Any help would be greatly appreciated.
FYI: When I submit a blank current password, it shows the validation error.
```
class Admin_Form_UserPassword extends Katana_Form
{
public function init()
{
$element = $this->createElement('hidden', 'id');
$this->addElement($element);
$element = $this->createElement('password','password');
$element->setLabel('Current Password:');
$element->setRequired(true);
$this->addElement($element);
$element = $this->createElement('password','new_password');
$element->setLabel('New Password:');
$element->addValidator('StringLength', false, array(6,24));
$element->setRequired(true);
$element->addValidator('NotEmpty');
$this->addElement($element);
$element = $this->createElement('password','new_password_confirm');
$element->setLabel('Confirm:');
$element->addValidator('StringLength', false, array(6,24));
$element->addValidator('IdenticalField', false, array('new_password', 'Confirm Password'));
$element->setRequired(true);
$this->addElement($element);
$this->addElement('submit', 'submit', array('label' => 'Submit'));
}
}
```
```
public function updatePasswordAction()
{
$resourceModel = new Core_Model_Resource_User();
$form = new Admin_Form_UserPassword();
$form->setMethod(Katana_Form::METHOD_POST);
$form->setAction($this->getActionUrl('update-password'));
if($this->getRequest()->isPost()){
$id = $this->getRequest()->getParam('id');
$record = $resourceModel->find($id)->current();
$currPassword = $record->password;
$typedPassword = md5($this->getRequest()->getParam('password'));
if($currPassword !== $typedPassword){
$form->getElement('password')->addError('Current password is incorrect.');
}
if($form->isValid($_POST)){
$data = $form->getValues();
$result = $resourceModel->updatePassword($id, $data['new_password']);
if($result){
$this->redirectSimple('list');
}
}
} else {
$id = $this->getRequest()->getParam('id');
$recordData = array(
'id' => $id
);
$form->populate($recordData);
}
$this->getView()->form = $form;
}
```
|
Adding an error to the element doesn't cause the form itself to then be invalid.
There are at least 2 methods I use to get around this:
```
if($currPassword !== $typedPassword){
$form->getElement('password')->addError('Current password is incorrect.');
$form->markAsError();
}
// or
if ($form->isValid($_POST) && 0 == sizeof($form->getMessages())) {
// form was valid, and no errors were set on elements
}
```
To clarify, when you add the error to the form ELEMENT, there is an error attached to that element, but Zend\_Form::isValid only runs the validators and sets appropriate errors, it doesn't check to see if you had set an error on a particular element.
You can however call `$form->getMessages()` to get all the error messages attached to the form or its child elements. If this returns an empty array after you have validated the form, there were no errors. If your form passed `isValid` but you added an error to an element, the array will include the error message you added.
|
why is a date invalid in javascript?
Why is the following expression invalid?
```
var hello = new Date(2010, 11, 17, 0, 0, 0, 0);
```
For example, `hello.UTC()` doesn't work.
|
```
var hello = new Date(2010, 11, 17, 0, 0, 0, 0);
console.log(hello.toUTCString());
```
and
```
var hello2 = Date.UTC(2010, 11, 17, 0, 0, 0, 0);
console.log(hello2);
```
These two are actually two different functions that print out different things.
`toUTCString()` - Converts a Date object to a string, according to universal time
where as
`Date.UTC()` - returns the number of milliseconds in a date string since midnight of January 1, 1970, according to universal time.
If you want the number of milliseconds since midnight of 1970-01-01 (UTC) for a given set of date parts, use `Date.UTC()`. If instead you want to read out properties of the instance created with `new Date(2010, 11, 17, 0, 0, 0, 0)`, use its instance methods (`toUTCString()`, `getUTCMilliseconds()`, etc.).
`UTC` is a member of the `Date` constructor itself, not of `Date` instances — in other words, it is a 'static' function. That is why it must be invoked as `Date.UTC()`. Similarly, `Date()` called as a plain function just returns the current time as a string.
Moreover, in JavaScript, whenever you use the `new` keyword the `this` value inside the constructor points to the newly created object. So `hello` carries a date of its own, built from the arguments you passed, together with the instance methods (`toUTCString()` and so on), whereas `Date.UTC()` ignores any instance and simply computes a millisecond count relative to 1 January 1970 00:00:00 UTC from its arguments.
With all that said, `hello.UTC()` fails because it tries to access a function that is not part of the instance — it exists only on the constructor. This is all off the top of my head and probably a bit fuzzy; please correct me if I have errors.
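To make the distinction concrete, here is a small sketch of what works and what throws:

```
var hello = new Date(2010, 11, 17, 0, 0, 0, 0);

console.log(hello.toUTCString()); // instance method: works
console.log(Date.UTC(2010, 11, 17, 0, 0, 0, 0)); // static method on the constructor: works
console.log(hello.getTime()); // instance route to milliseconds (shifted from Date.UTC by your timezone offset)
// console.log(hello.UTC()); // TypeError: hello.UTC is not a function
```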
|
Generate rainbow colors based on value
I am trying to generate rainbow colors based on the numeric value.
```
var max = 10000;
var min = 0;
var val = 8890;
function getcolor(min,max,val) {
// return red - black;
}
```
**colors**
Red
Yellow
Green
Blue
Black (value == 0)
Like in the url <https://i.stack.imgur.com/MRevs.jpg>
From the above values how to generate color between red - black. High value is red and low value is black.
|
There are just a couple of things you need to do and understand in order to accomplish this task. The first is realizing that the RGB colourspace is not the one you want here - the HSV or HSL ones are far better suited. Since browsers are happy to work with HSL, we'll use that one.
Next, if you look at the H channel of the HSL colourspace, you can see that the exact band of colours you want is present there: blue has a hue of about 240° and red one of 0°.
This means we want to map the range [min..max] onto the hues [240..0] (yes, the mapping runs 'backwards').
With this in mind, we can set about creating a function that will do the mapping for us and return a valid colour string.
```
function calcColor(min, max, val)
{
var minHue = 240, maxHue=0;
var curPercent = (val - min) / (max-min);
var colString = "hsl(" + ((curPercent * (maxHue-minHue) ) + minHue) + ",100%,50%)";
return colString;
}
```
First, we set up the two end-points of the hue range we wish to use.
Next, we work out whereabouts in the input range the current value sits, as a fraction between 0 and 1.
Finally, we place ourselves at that same relative position in the hue range instead of the user-input range.
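Plugging the numbers from the question into the function shows the idea (a quick sketch; 8890 sits near the top of the range, so the result lands close to the red end):

```
console.log(calcColor(0, 10000, 8890)); // ≈ "hsl(26.64,100%,50%)" – a red-orange
```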
A simple example of its use is shown below.
```
<!DOCTYPE html>
<html>
<head>
<script>
"use strict";
function newEl(tag){return document.createElement(tag);}
window.addEventListener('load', onDocLoaded, false);
function onDocLoaded()
{
for (var i=0; i<10; i++)
{
var elem = newEl('div');
elem.style.backgroundColor = calcColor(0, 9, i);
elem.className = "rgb";
document.body.appendChild(elem);
}
}
function calcColor(min, max, val)
{
var minHue = 240, maxHue=0;
var curPercent = (val - min) / (max-min);
var colString = "hsl(" + ((curPercent * (maxHue-minHue) ) + minHue) + ",100%,50%)";
return colString;
}
</script>
<style>
.rgb
{
width: 16px;
height: 16px;
display: inline-block;
}
</style>
</head>
<body>
</body>
</html>
```
|
List and description of all packages in CRAN from within R
I can get a list of all the available packages with the function:
```
ap <- available.packages()
```
But how can I also get a description of these packages from within R, so I can have a `data.frame` with two columns: package and description?
|
*Edit* of an almost ten-year-old accepted answer. What you likely want is *not* to scrape (unless you want to practice scraping) but to use an existing interface: `tools::CRAN_package_db()`. Example:
```
> db <- tools::CRAN_package_db()[, c("Package", "Description")]
> dim(db)
[1] 18978 2
>
```
The function (currently) returns 66 columns, of which only the two of interest here are kept.
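As a quick sketch of what the result lets you do (using the two columns selected above):

```
## e.g. find packages whose Description mentions Bayesian methods
hits <- db[grepl("Bayesian", db$Description, ignore.case = TRUE), ]
head(hits$Package)
```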
---
I actually think you want "Package" and "Title" as the "Description" can run to several lines. So here is the former, just put "Description" in the final subset if you *really* want "Description":
```
R> ## from http://developer.r-project.org/CRAN/Scripts/depends.R and adapted
R>
R> require("tools")
R>
R> getPackagesWithTitle <- function() {
+ contrib.url(getOption("repos")["CRAN"], "source")
+ description <- sprintf("%s/web/packages/packages.rds",
+ getOption("repos")["CRAN"])
+ con <- if(substring(description, 1L, 7L) == "file://") {
+ file(description, "rb")
+ } else {
+ url(description, "rb")
+ }
+ on.exit(close(con))
+ db <- readRDS(gzcon(con))
+ rownames(db) <- NULL
+
+ db[, c("Package", "Title")]
+ }
R>
R>
R> head(getPackagesWithTitle()) # I shortened one Title here...
Package Title
[1,] "abc" "Tools for Approximate Bayesian Computation (ABC)"
[2,] "abcdeFBA" "ABCDE_FBA: A-Biologist-Can-Do-Everything of Flux ..."
[3,] "abd" "The Analysis of Biological Data"
[4,] "abind" "Combine multi-dimensional arrays"
[5,] "abn" "Data Modelling with Additive Bayesian Networks"
[6,] "AcceptanceSampling" "Creation and evaluation of Acceptance Sampling Plans"
R>
```
|
Laravel-5 how to populate select box from database with id value and name value
I want to create a select box like the one below using [illuminate\html](https://github.com/illuminate/html/) :
```
<select>
<option value="$item->id">$item->name</option>
<option value="$item->id">$item->name</option>
</select>
```
In my controller I tried this:
```
public function create()
{
$items = Items::all(['id', 'name']);
return view('prices.create', compact('id', 'items'));
}
```
And in my view this:
```
<div class="form-group">
{!! Form::Label('item', 'Item:') !!}
{!! Form::select('item_id', $items, null, ['class' => 'form-control']) !!}
</div>
```
The issue is that instead of showing `$item->name`, the select displays all the info of the entity.
|
Laravel's query builder provides a `lists()` function.
In your case, you can replace your code
```
$items = Items::all(['id', 'name']);
```
**with**
```
$items = Items::lists('name', 'id');
```
You can also chain it with other query builder methods:
```
$items = Items::where('active', true)->orderBy('name')->lists('name', 'id');
```
source: <http://laravel.com/docs/5.0/queries#selects>
---
**Update for Laravel 5.2**
Thank you very much @jarry. As you mentioned, the function for Laravel 5.2 should be
```
$items = Items::pluck('name', 'id');
```
or
```
$items = Items::where('active', true)->orderBy('name')->pluck('name', 'id');
```
ref: <https://laravel.com/docs/5.2/upgrade#upgrade-5.2.0> -- look at Deprecations lists
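With `$items` now an `id => name` array, the view code from the question works unchanged; `Form::select` renders each pair as an option, roughly like this (a sketch with made-up item names):

```
<select class="form-control" name="item_id">
    <option value="1">First item</option>
    <option value="2">Second item</option>
</select>
```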
|