Access members of outer class in TypeScript
Since TypeScript 1.6, we can easily create inner classes with [class expressions](https://stackoverflow.com/questions/32494174/can-you-create-nested-classes-in-typescript). In other OOP-centric languages like Java, inner classes can access members of the outer class, [even private ones](https://stackoverflow.com/questions/1801718/why-can-outer-java-classes-access-inner-class-private-members).
This behavior is similar to the concept of closures, where a function can access variables from the scope in which it was defined.
Why can't I achieve this in TypeScript? Does the specification of classes in ECMAScript 2015 play a role here?
Code that presents expected behavior:
```
class OuterClass {
    private outerField = 1337;

    public InnerClass = class {
        public accessOuter() {
            return this.outerField; // outerField not defined
        }
    }
}

var outer = new OuterClass();
var inner = new outer.InnerClass();
var win = inner.accessOuter();
```
|
It's easier to understand why you can't do that if you look at the compiled javascript of your code:
```
var OuterClass = (function () {
    function OuterClass() {
        this.outerField = 1337;
        this.InnerClass = (function () {
            function class_1() {
            }
            class_1.prototype.accessOuter = function () {
                return this.outerField; // outerField not defined
            };
            return class_1;
        }());
    }
    return OuterClass;
}());
```
As you can see, `outerField` is defined as a member of `OuterClass` like so:
```
this.outerField = 1337;
```
When you try to access it in your `InnerClass` you do:
```
return this.outerField;
```
But the `this` here is the instance of `class_1` and not `OuterClass` so there's no `outerField` in `this`.
Also, you have no access from the inner class to the instance of the outer class.
The way this is solved in java is like so:
```
class OuterClass {
    private int outerField = 1337;

    public class InnerClass {
        public int accessOuter() {
            return OuterClass.this.outerField;
        }
    }
}
```
But there's no equivalent to `OuterClass.this.outerField` in typescript/javascript.
Think of TypeScript inner classes more like static inner classes in Java, but here too you'll only be able to access public static members:
```
class OuterClass {
    public static outerField = 1337; // has to be public
    public InnerClass = class {
        public accessOuter() {
            return OuterClass.outerField;
        }
    }
}
```
You can pass an instance of the outer class to the inner class:
```
class OuterClass {
    public outerField = 1337;
    public InnerClass = class {
        constructor(private parent: OuterClass) {}
        public accessOuter() {
            return this.parent.outerField;
        }
    }
}
```
But again, you'll need to have `outerField` public.
---
### Edit
In case you want to achieve something that simulates the desired behavior (that is, the inner class instance will have access to the outer class's private members), you can do something like this:
```
interface OuterClassProxy {
    outerField: number;
}

interface IInnerClass {}

class OuterClass {
    private outerField = 1337;

    static InnerClass = class implements IInnerClass {
        constructor(private parent: OuterClassProxy) {}

        public accessOuter() {
            return this.parent.outerField;
        }
    }

    public createInnerClass(): IInnerClass {
        let outerClassInstance = this;

        return new OuterClass.InnerClass({
            get outerField(): number {
                return outerClassInstance.outerField;
            },
            set outerField(value: number) {
                outerClassInstance.outerField = value;
            }
        });
    }
}
```
It's quite a lot of work, but it will do it.
|
Detecting the present annotations within the given object passed into a constructor
My question in short: how do I detect if a java annotation is present (and in the right place) for a given user class/object.
Details of the "problem"
Lets say I have two java classes:
```
public class Line {
    private List<Cell> cells;

    public Line(Object... annotatedObjects) {
        // check if annotations @Line and @Cell are present in annotatedObjects.
    }

    // getter/setter for cells.
}

public class Cell {
    // some members
    // some methods
}
```
A Line object holds Cells.
I also have two annotations, like:
```
public @interface Line {
    // some stuff here
}

public @interface Cell {
    // some stuff here
}
```
I also have a bunch of user classes (two will do for this example) that contain the @Line and @Cell annotations I specified, like:
```
@Line(name="pqr", schema="three")
public class AUserClass {
    @Cell
    private String aString;
}

@Line(name="xyz", schema="four")
public class AnotherUserClass {
    @Cell(name="birthday")
    private Date aDate;
}
```
The problem: When I instantiate a new Line object, I want to be able to pass the user classes/objects into the Line constructor. The Line constructor then finds out if the passed user classes/objects are valid classes that can be processed. Only user classes that have a `@Line` annotation for the class, and at least one `@Cell` annotation for its members are valid objects that can be passed into the constructor of the Line object.
All other passed objects are invalid. The moment a valid user object is passed, all the available members that are tagged as `@Cell` in that object are transformed to Cell objects and added to the cells list.
My questions:
1. is it possible to detect the annotations in this object/class at runtime, and only for THIS passed object (I don't want to scan for annotations on the classpath!)?
2. is it possible to detect the datatype of the `@Cell` tagged members? This is needed because the Cell class doesn't accept all datatypes.
3. is it possible to retrieve the actual member name (specified in the java file) so that the user doesn't have to specify the members Cell name. I want the user to be able to write `@Cell` (without a name) and `@Cell(name="aName")`, and when only `@Cell` is specified, the name of the member is used instead. I have no idea if this information is still available at runtime using reflection.
4. How to detect if the annotations are in the right place? If code is tagged like this, then the object should be ignored (or maybe an exception thrown)?
```
@Cell // oh oh, that's no good :(
public class WrongClass {
    // some members
}
```
5. Could you provide some starter code, so I have a little to get going with on this problem? I am really new to annotations and reflection. BTW: I am using the latest JVM, 1.6+
Thank you for your kind help!
|
First, you need a runtime retention policy on your annotations so you can read them with reflection:
```
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public static @interface Line {
}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public static @interface Cell {
}
```
Second, you need to test whether the class has the `Line` annotation with `isAnnotationPresent(annotationClass)`. This method is accessible from `java.lang.Class` and `java.lang.reflect.Field`.
**NOTE**: you need to retrieve the private fields with `class.getDeclaredField(fieldName)` (or all of them at once with `getDeclaredFields()`).
`3.` I don't think you can make an annotation default to the field's own name, but you can make `name` optional by providing a default (e.g. `String name() default ""`), and when iterating through the fields check for that default value and use either the value stored in `name()` or the field's name.
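Putting those pieces together, here is a rough sketch of the kind of check the `Line` constructor could do. To avoid the name clash between the `Line`/`Cell` data classes and the annotations in the question, the annotation types are referred to here by the hypothetical names `LineAnnotation` and `CellAnnotation`, and `CellAnnotation` is assumed to declare `String name() default ""`:
```
import java.lang.reflect.Field;

// Rough sketch: returns true if the object's class is annotated with the
// line annotation and has at least one cell-annotated field.
static boolean isValidLineObject(Object o) throws IllegalAccessException {
    Class<?> clazz = o.getClass();
    if (!clazz.isAnnotationPresent(LineAnnotation.class)) {
        return false;                         // question 1: not a valid line class
    }
    boolean hasCell = false;
    for (Field field : clazz.getDeclaredFields()) {
        if (field.isAnnotationPresent(CellAnnotation.class)) {
            hasCell = true;
            CellAnnotation cell = field.getAnnotation(CellAnnotation.class);
            String name = cell.name().isEmpty() ? field.getName() : cell.name(); // question 3
            Class<?> type = field.getType();                                     // question 2
            field.setAccessible(true);        // needed to read private members
            Object value = field.get(o);
            // ... build a Cell object from (name, type, value) and add it to cells ...
        }
    }
    return hasCell;
}
```
Question 4 is mostly handled at compile time by the `@Target` meta-annotations above: with `ElementType.FIELD`, the cell annotation simply cannot be placed on a class.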
|
Is it normal for printf '%s\n' "${array[@]}" to print one line per array element?
In my bash script I'm including an array of other files required by the program. I then print those in the help description. When I used printf for their output, I got results I was not expecting. I have read that the form ${array[@]} is generally preferred as the default for expansion, so I started with that.
My array declaration:
```
scriptDependencies=("script1.sh" "script2.sh")
```
And the (initial) printf command:
```
printf "Dependencies: %s\n" "${scriptDependencies[@]}"
```
What I got as output:
```
Dependencies: script1.sh
Dependencies: script2.sh
```
Although I believe I understand a basic difference between the subscripts '@' and '\*' to be all individual elements vs all elements grouped together, I was not expecting two separate lines to print.
When I switched the printf command to use the ${scriptDependencies[\*]} form, a single line (closer to what I desired) was printed:
```
Dependencies: script1.sh script2.sh
```
Is this expected behavior of printf, and of the subscripts? Or does it point to a problem with printf?
I'm using GNU bash, version 3.2.57(1)-release (x86\_64-apple-darwin15).
|
This is documented and expected behavior. If there are more strings passed to `printf` than a format string can handle, that format string will be repeated.
`"${array[@]}"` expands to a shell word per array element. Thus, for an array with two elements, two arguments to `printf` will be passed; if the format string only has one placeholder (only one `%s`), then it will be repeated.
By contrast, `"${array[*]}"` (with the quotes!) will expand to only one argument (with the first character of `IFS`, a space by default, inserted between each element).
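If the single-line output is what you're after, here is a small sketch using the array from the question (both variants print one line):
```
scriptDependencies=("script1.sh" "script2.sh")

# join all elements into one argument, separated by the first character of IFS (a space)
printf "Dependencies: %s\n" "${scriptDependencies[*]}"
# -> Dependencies: script1.sh script2.sh

# or print the prefix once, then one " %s" per element, then the newline
printf "Dependencies:"; printf " %s" "${scriptDependencies[@]}"; printf "\n"
# -> Dependencies: script1.sh script2.sh
```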
---
Bash is required to do this by [POSIX specification](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/printf.html), emphasis added below:
>
> 9. **The format operand shall be reused as often as necessary to satisfy the argument operands.** Any extra c or s conversion specifiers shall be evaluated as if a null string argument were supplied; other extra conversion specifications shall be evaluated as if a zero argument were supplied. If the *format* operand contains no conversion specifications and *argument* operands are present, the results are unspecified.
>
>
>
Consequently, this behavior is portable to all POSIX-compliant shells.
|
JavaScript: Position DIV Centered Above Text Selection?
I am trying to position a `<div>` above a user's text selection that will act as a toolbar, similar to Medium's.
[](https://i.stack.imgur.com/eX6z8.png)
While I have successfully gotten the `<div>` to be positioned next to the selection, I cannot seem to get it to correctly center relative to the selection:
```
$(function() {
  // Setup the Event Listener
  $('.article').on('mouseup', function() {
    // Selection Related Variables
    let selection = window.getSelection(),
        getRange = selection.getRangeAt(0),
        selectionRect = getRange.getBoundingClientRect();
    // Set the Toolbar Position
    $('.toolbar').css({
      top: selectionRect.top - 42 + 'px',
      left: selectionRect.left + 'px'
    });
  });
});
```
I can determine the selection's center point by subtracting the selections left offset from the viewport by its width as such:
```
selectionRect.left - selectionRect.width
```
However, I am not sure how to use that to set the position of my toolbar to be centered relative to the selection rectangle?
I tried subtracting the toolbars left offset from the width of the selection divided by 2 but that doesn't align to the center perfectly either.
**JSFiddle**
<https://jsfiddle.net/e64jLd0o/>
|
One solution would be to add the following to your CSS:
```
.toolbar {
transform: translateX(-50%);
}
```
and update your script to offset the left position of the toolbar element like so:
```
$('.toolbar').css({
  top: selectionRect.top - 42 + 'px',
  left: (selectionRect.left + (selectionRect.width * 0.5)) + 'px'
});
```
Here is a working snippet:
```
$(function() {
  // Setup the Event Listener
  $('.article').on('mouseup', function() {
    // Selection Related Variables
    let selection = window.getSelection(),
        getRange = selection.getRangeAt(0),
        selectionRect = getRange.getBoundingClientRect();
    // Set the Toolbar Position
    $('.toolbar').css({
      top: selectionRect.top - 42 + 'px',
      left: (selectionRect.left + (selectionRect.width * 0.5)) + 'px'
    });
  });
});
```
```
.article {
  position: relative;
  height: 300px;
  padding: 20px;
}

.toolbar {
  position: absolute;
  display: flex;
  align-items: center;
  justify-content: center;
  width: 169px;
  padding-top: 10px;
  padding-bottom: 10px;
  background: black;
  text-align: center;
  color: white;
  border-radius: 8px;
  transform: translateX(-50%);
}
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.0.0/jquery.min.js"></script>
<!-- Editor -->
<div class="article">
<p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Tenetur dignissimos facilis id repellat sint deserunt voluptates animi eaque tempore debitis, perferendis repudiandae voluptatem. Eligendi fuga deleniti saepe quod eum voluptas.</p>
</div>
<!-- Toolbar -->
<div class="toolbar">Toolbar</div>
```
|
Meteor upsert equivalent
How soon will the upsert command be implemented in Meteor?
And, what is the best way to do the same thing in the mean time?
Something like this is what I'm doing at the moment:
```
if typeof ( item = Items.findOne({title:'Foo'}) ) == 'undefined'
  item = Items.insert({title:'Foo'})
else
  Items.update(item._id, {$set: {title:'Foo'}})

# do something with item
```
|
>
> How soon will the upsert command be implemented in Meteor?
>
>
>
UPDATE: [@Thomas4019](https://stackoverflow.com/users/360051/thomas4019) points out that `upsert` is now supported:
v0.6.6
"Add upsert support. Collection.update now supports the {upsert: true} option. Additionally, add a Collection.upsert method which returns the newly inserted object id if applicable."
Source: [History.md](https://github.com/meteor/meteor/blob/devel/History.md#mongodb)
Usage documentation: <http://docs.meteor.com/#upsert>
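A small sketch of the built-in API, using the `Items` collection from the question (the selector/modifier values are just examples):
```
// update the first matching document, or insert a new one if none matches
Items.upsert({title: 'Foo'}, {$set: {title: 'Foo'}});

// Collection.upsert returns {numberAffected, insertedId}
var result = Items.upsert({title: 'Foo'}, {$set: {title: 'Foo'}});
if (result.insertedId) {
  // a new document was created
}
```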
-- original answer follows --
There is a card on the Trello Meteor Roadmap which you can vote on to indicate its importance to you: <https://trello.com/c/C91INHmj>
It is currently in the "Later" list which means it will be a while before it is implemented unless it receives a lot of votes.
The other important thing to note is that since meteor is open-source, you could implement the necessary changes yourself and submit back.
---
>
> What is the best way to do the same thing in the mean time?
>
>
>
There are several solutions but which is most appropriate for your use-case is impossible to tell without more knowledge of your design.
1. Use the code as is, add a unique index to the collection, and handle the duplicate key error if/when it arises
2. Change design to implement explicit optimistic concurrency.
The core of both of these solutions is the same, gracefully handle the error case. #1 is easier to implement. #2 allows for greater flexibility in how the optimistic concurrency is handled.
|
Understanding this Pandas script
I received this code to group data into histogram-type bins. I have been attempting to understand the code in this pandas script in order to edit, manipulate and duplicate it. I have comments for the sections I understand.
### Code
```
import numpy as np
import pandas as pd
column_names = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6',
'col7', 'col8', 'col9', 'col10', 'col11'] #names to be used as column labels. If no names are specified then columns can be refereed to by number eg. df[0], df[1] etc.
df = pd.read_csv('data.csv', header=None, names=column_names) #header= None means there are no column headings in the csv file
df.ix[df.col11 == 'x', 'col11']=-0.08 #trick so that 'x' rows will be grouped into a category >-0.1 and <= -0.05. This will allow all of col11 to be treated as a numbers
bins = np.arange(-0.1, 1.0, 0.05) #bins to put col11 values in. >-0.1 and <=-0.05 will be our special 'x' rows, >-0.05 and <=0 will capture all the '0' values.
labels = np.array(['%s:%s' % (x, y) for x, y in zip(bins[:-1], bins[1:])]) #create labels for the bins
labels[0] = 'x' #change first bin label to 'x'
labels[1] = '0' #change second bin label to '0'
df['col11'] = df['col11'].astype(float) #convert col11 to numbers so we can do math on them
df['bin'] = pd.cut(df['col11'], bins=bins, labels=False) # make another column 'bins' and put in an integer representing what bin the number falls into.Later we'll map the integer to the bin label
df.set_index('bin', inplace=True, drop=False, append=False) #groupby is meant to run faster with an index
def count_ones(x):
"""aggregate function to count values that equal 1"""
return np.sum(x==1)
dfg = df[['bin','col7','col11']].groupby('bin').agg({'col11': [np.mean], 'col7': [count_ones, len]})
dfg.index = labels[dfg.index]
dfg.ix['x',('col11', 'mean')]='N/A'
print(dfg)
dfg.to_csv('new.csv')
```
The section I really struggle to understand is in this section:
```
def count_ones(x):
    """aggregate function to count values that equal 1"""
    return np.sum(x == 1)

dfg = df[['bin', 'col7', 'col11']].groupby('bin').agg({'col11': [np.mean], 'col7': [count_ones, len]})
dfg.index = labels[dfg.index]
dfg.ix['x', ('col11', 'mean')] = 'N/A'
print(dfg)
dfg.to_csv('new.csv')
```
If anyone is able to comment this script I would be greatly appreciative. Also feel free to correct or add to my comments (these are what I assume so far; they may not be correct). I'm hoping this isn't too off-topic for SO. I will gladly give a 50 point bounty to any user who can help me with this.
|
I'll try and explain my code. As it uses a few tricks.
- I've called it `df` to give a shorthand name for a pandas DataFrame
- I've called it `dfg` to mean group my `df`.
- Let me build up the expression `dfg = df[['bin','col7','col11']].groupby('bin').agg({'col11': [np.mean], 'col7': [count_ones, len]})`
- the code `dfg = df[['bin','col7','col11']]` is saying take the columns named 'bin' 'col7' and 'col11' from my DataFrame `df`.
- Now that I have the 3 columns I am interested in, I want to group by the values in the 'bin' column. This is done by `dfg = df[['bin','col7','col11']].groupby('bin')`. I now have groups of data i.e. all records that are in bin #1, all records in bin#2, etc.
- I now want to apply some aggregate functions to the records in each of my bin groups (an aggregate function is something like sum, mean or count).
- Now I want to apply three aggregate functions to the records in each of my bins: the mean of 'col11', the number of records in each bin, and the number of records in each bin that have 'col7' equal to one. The mean is easy; numpy already has a function to calculate the mean. If I was just doing the mean of 'col11' I would write: `dfg = df[['bin','col7','col11']].groupby('bin').agg({'col11': [np.mean]})`. The number of records is also easy; python's built-in `len` function will give us the number of items in a list. So I now have `dfg = df[['bin','col7','col11']].groupby('bin').agg({'col11': [np.mean], 'col7': [len]})`. Now I can't think of an existing function that counts the number of ones in a numpy array (it has to work on a numpy array). I can define my own functions that work on a numpy array, hence my function `count_ones`.
- Now I'll deconstruct the `count_ones` function. The variable `x` passed to the function is always going to be a 1d numpy array. In our specific case it will be all the 'col7' values that fall in bin#1, all the 'col7' values that fall in bin#2, etc. The code `x==1` will create a boolean (TRUE/FALSE) array the same size as x. The entries in the boolean array will be True if the corresponding values in x are equal to 1 and False otherwise. Because Python treats True as 1, if I sum the values of my boolean array I'll get a count of the values that ==1. Now that I have my `count_ones` function I apply it to 'col7' by: `dfg = df[['bin','col7','col11']].groupby('bin').agg({'col11': [np.mean], 'col7': [count_ones, len]})`
- You can see that the syntax of the `.agg` is `.agg({'column_name_to_apply_to': [list_of_function names_to_apply]}`
- With boolean arrays you can do all sorts of weird condition combinations: `(x==6) | (x==3)` would be 'x equal to 6 or x equal to 3'. The 'and' operator is `&`. Always put `()` around each condition.
- Now to `dfg.index = labels[dfg.index]`. In `dfg`, because I grouped by 'bin', the index (or row label) of each row of grouped data (i.e. my dfg.index) will be my bin numbers: 1, 2, 3, ... `labels[dfg.index]` is using fancy indexing of a numpy array. labels[0] would give me the first label, labels[3] would give me the 4th label. With normal python lists you can use slices to do labels[0:3], which would give me labels 0, 1, and 2. With numpy arrays we can go a step further and just index with a list of values or another array, so labels[np.array([0,2,4])] would give me labels 0, 2, 4. By using `labels[dfg.index]` I'm requesting the labels corresponding to the bin#. Basically I'm changing my bin number to a bin label. I could have done that to my original data but that would be thousands of rows; by doing it after the group by I'm doing it to 21 rows or so. Note that I cannot just do `dfg.index = labels` as some of my bins might be empty and therefore not present in the group by data.
- Now the `dfg.ix['x',('col11', 'mean')]='N/A'` part. Remember way back when I did `df.ix[df.col11 == 'x', 'col11']=-0.08`; that was so all my invalid data was treated as a number and would be placed into the 1st bin. After applying group by and aggregate functions, the mean of 'col11' values in my first bin will be -0.08 (because all such values are -0.08). Now I know this is not correct: all values of -0.08 actually indicate that the original value was x. You can't take a mean of x. So I manually set it to N/A, i.e. `dfg.ix['x',('col11', 'mean')]='N/A'` means in dfg, where the index (or row) is 'x' and the column is ('col11', 'mean'), set the value to 'N/A'. The `('col11', 'mean')` I believe is how pandas comes up with the aggregate column names, i.e. when I did `.agg({'col11': [np.mean]})`, to refer to the resulting aggregate column I need `('column_name', 'aggregate_function_name')`.
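To see the boolean-sum and fancy-indexing tricks from the bullets above in isolation, here is a tiny standalone sketch (not part of the original script):
```
import numpy as np

x = np.array([0, 1, 1, 7, 1])
mask = (x == 1)             # boolean array: [False  True  True False  True]
print(np.sum(mask))         # 3 -- each True counts as 1 when summed

labels = np.array(['x', '0', 'a', 'b', 'c'])
print(labels[np.array([0, 2, 4])])   # fancy indexing -> ['x' 'a' 'c']
```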
The motivation for all this was: convert all data to numbers so I can use the power of Pandas, then after processing, manually change any values that I know are garbage. Let me know if you need any more explanation.
|
Maximum and minimum exponents in double-precision floating-point format
According to the [IEEE Std 754-2008](http://en.wikipedia.org/wiki/IEEE_754_revision) standard, the exponent field width of the binary64 double-precision floating-point format is 11 bits, which is compensated by an exponent bias of 1023. The standard also specifies that the maximum exponent is 1023, and the minimum is -1022. Why is the maximum exponent not:
```
2^10 + 2^9 + 2^8 + 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 - 1023 = 1024
```
And the minimum exponent not:
```
0 - 1023 = -1023
```
|
The bits for the exponent have two reserved values, one for encoding `0` and subnormal numbers, and one for encoding ∞ and NaNs. As a result of this, the range of normal exponents is two smaller than you would otherwise expect. See §3.4 of the IEEE-754 standard (`w` is the number of bits in the exponent — `11` in the case of `binary64`):
>
> The range of the encoding's biased exponent E shall include:
>
>
> ― Every integer between 1 and 2^w − 2, inclusive, to encode normal numbers
>
>
> ― The reserved value 0 to encode ±0 and subnormal numbers
>
>
> ― The reserved value 2^w − 1 to encode ±∞ and NaNs.
>
>
>
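Plugging in w = 11 for binary64: the biased exponents reserved for normal numbers run from 1 to 2^11 − 2 = 2046, and subtracting the bias of 1023 gives exactly the limits from the standard:
```
e_max = 2046 - 1023 =  1023
e_min =    1 - 1023 = -1022
```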
|
APC Smart-UPS: remote power shutoff?
Given:
1. A server room containing about two dozen rackmount servers and a PBX.
2. About a dozen APC Smart-UPS 2200s providing power to this equipment.
3. A fire suppression project requiring immediate shutoff of power to all servers in the event that the fire suppression agent is released to extinguish an electrical fire.
4. We assume that, using separate hardware, we will be able to generate an SNMP trap from the fire suppression system's alarm relay output, and that this will trigger some kind of a UPS power shutdown script on our Nagios server (which fortunately does not share a rack or a UPS with any other servers).
Question:
1. Is it possible to shut down the power to all devices plugged into an APC UPS via SNMP, assuming that we equip each UPS with an ethernet card? The documentation that I've seen refers extensively to the ability to shut down servers gracefully in the event of a power failure via PowerChute; in this case, we would not have time to shut them down gracefully.
2. Does it matter which model of ethernet adapter we use in order to accomplish this? It appears that the old 10Mbps models (AP9606) are available refurbished for quite a lot less money than it would cost to purchase the current model cards; I am unclear as to whether there are significant differences in feature set, other than ethernet bandwidth.
3. Would it be better, for some reason, to use serial cables instead of ethernet cards?
The intent of this question is to tap into your real-world experience with APC UPSs. To date, we have not used any remote management features, and the documentation that I have found to date does not seem to be clear as to whether power can be shut off unceremoniously via SNMP.
|
Depending on the precise model of UPS, your best bet is to wire the EPO port on the back of those UPSes to the EPO switch on the fire panel.
[Smart-UPS 2200 manual](http://www.apcmedia.com/salestools/ASTE-6Z8LLW_R1_EN.pdf), see page 6.
Since these UPSes actually have an actual Emergency Power Off function built in, in order to make your fire-inspectors happy you **need** to wire those ports to the EPO function of the fire-panel itself. I'm not an electrician, so I don't know the specifics of how you'd wire 10's of these to a single EPO port on the fire panel, but that's a question for your fire panel system vendor.
Half-assing it with suicide software monitors on servers connected to each UPS is the kind of close-enough that gets fire-inspectors to shake their head in that special way that means you'll have to do it all over again, but right this time.
|
Hiding variables
Quick silly question: is it possible to hide a variable name used by a property in VB.NET?
I am primarily a C# programmer, and am currently helping out a friend with some VB.NET stuff. Anyway, I have a String called `stateprovincename`, and a Property called `StateProvinceName`. VS does not appear to like this naming convention, and declares that they are in fact one and the same. Ideas?
|
VB.Net is [case-insensitve](https://stackoverflow.com/questions/2300983/is-vb-really-case-insensitive).
- Try something like `_stateProvinceName` or `mStateProvinceName` (discussion of naming conventions [here](https://stackoverflow.com/questions/2526346/naming-conventions-for-private-members-of-net-types)).
- Or use an [automatic property](https://stackoverflow.com/questions/5582836/does-vb-net-support-automatic-getters-and-setters-on-properties), which will implicitly declare a hidden backing variable, but you won't be able to write custom code in the `Get` and `Set`. `Public Property StateProvinceName As String`
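Following the first suggestion, a conventional explicit property with an underscore-prefixed backing field might look like this sketch (the backing-field name is only a convention):
```
Private _stateProvinceName As String

Public Property StateProvinceName() As String
    Get
        Return _stateProvinceName
    End Get
    Set(ByVal value As String)
        _stateProvinceName = value
    End Set
End Property
```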
|
return and auto deduce std::initializer\_list
In the following:
```
auto x = {0}; // auto deduction of std::initializer_list<int>
auto y = []() -> std::initializer_list<int> { return {0}; }(); //explicit
auto z = []() { return {0}; }(); // won't compile
```
why is it not possible to return a braced-init-list and have its type auto-deduced as std::initializer\_list?
|
Well, because the Standard says so, and because a *braced-init-list* is not an expression. Per paragraph 5.1.2/4 of the C++11 Standard:
>
> [...] If
> a *lambda-expression* does not include a *trailing-return-type*, it is as if the *trailing-return-type* denotes the
> following type:
>
>
> — if the *compound-statement* is of the form
>
>
> `{` *attribute-specifier-seq(opt)* `return` *expression* `; }`
>
>
> **the type of the returned expression** after lvalue-to-rvalue conversion (4.1), array-to-pointer conversion
> (4.2), and function-to-pointer conversion (4.3);
>
>
> — otherwise, `void`.
>
>
>
The above makes it clear that the return type will be deduced to be anything other than `void` if and only if the `return` statement is followed by an *expression*, and a *braced-init-list* is not in itself an expression - it does not have a type, and it does not yield a value. It is just a language construct that can be used in the context of initialization.
The above paragraph also provides an example:
>
> [ *Example*:
>
>
>
> ```
> auto x1 = [](int i){ return i; }; // OK: return type is int
> auto x2 = []{ return { 1, 2 }; }; // error: the return type is void (a
> // braced-init-list is not an expression)
>
> ```
>
> —*end example* ]
>
>
>
Finally, if the question is:
"*Why a special rule was introduced for deducing the type of an `auto` variable initialized from a braced-init-list, while a similar rule was **not** introduced for deducing the return type of a lambda when `return` is followed by a braced-init-list?*"
Then the question is not constructive. Also notice that type deduction for templates does not work with *braced-init-lists* either:
```
template<typename T>
void foo(T);
foo({1, 2}); // ERROR! T is NOT deduced to be std::initializer_list<int>
```
|
Pull new updates from original GitHub repository into forked GitHub repository
I forked someone's repository on GitHub and would like to update my version with commits and updates made in the original repository. These were made after I forked my copy.
How can I pull in the changes that were made in the origin and incorporate them into my repository?
|
You have to add the original repository (the one you forked) as a remote.
From the [GitHub documentation on forking a repository](https://help.github.com/articles/fork-a-repo):

>
> Once the clone is complete your repo will have a remote named “`origin`” that points to your fork on GitHub.
>
> Don’t let the name confuse you, this does not point to the original repo you forked from. To help you keep track of that repo we will add another remote named “upstream”:
>
>
>
> ```
> $ cd PROJECT_NAME
> $ git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git
> $ git fetch upstream
>
> # then: (like "git pull" which is fetch + merge)
> $ git merge upstream/master master
>
> # or, better, replay your local work on top of the fetched branch
> # like a "git pull --rebase"
> $ git rebase upstream/master
>
> ```
>
>
There's also a [command-line tool (`gh`) which can facilitate the operations above](https://cli.github.com/manual/gh).
Here's a visual of how it works (the diagram in the original answer shows your fork on GitHub as `origin` and the original repository as the added `upstream` remote).
See also "[Are Git forks actually Git clones?](https://stackoverflow.com/a/6286877/6309)".
|
How do instance variables in rspec work?
Here is a bit of code from M Hartl's Ruby on Rails Tutorial. Can anyone explain why an instance variable (@user) is necessary and why not use a local variable. Also, since instance variables are supposed to be the variables in the instance of a class, which class is @user instantiated from?
```
require 'spec_helper'

describe User do
  before { @user = User.new(name: "Example User", email: "[email protected]") }

  subject { @user }

  it { should respond_to(:name) }
  it { should respond_to(:email) }
end
```
|
Use of a local variable in that instance would mean that its scope would be restricted to the `before` and hence result in an error. The `@user` is of type User but is an instance variable of the `describe` block. Rspec has some magic that at run-time makes a class out of each `describe` block. Each example (`it` block) ends up being a subclass of said class. Class inheritance lets the examples see `@user`.
**Edited 2017-05-14**
Linked blog post is no longer available. Updating with Wayback Machine link + inlining relevant section here.
Note that this is considered an anti-pattern as detailed in this [blog post](http://bitfluxx.com/2011/05/23/some-rspec-tips-and-best-practices.html). Use `let` instead.
`let` has the following advantages:
- It is memoized when used multiple times in one example, but not
across examples.
- It is lazy-loaded, so you won't waste time initializing the variable for examples that don't reference it.
- Will raise an exception if you have a typo in your variable name.
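Rewriting the example from the question with `let` would look something like this sketch (same matchers; the email is a placeholder):
```
require 'spec_helper'

describe User do
  # lazily-evaluated, memoized helper instead of an instance variable in `before`
  let(:user) { User.new(name: "Example User", email: "user@example.com") }

  subject { user }

  it { should respond_to(:name) }
  it { should respond_to(:email) }
end
```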
|
PMU x86-64 performance counters not showing in perf under AWS
I am running a C++ benchmark test for a specific application. In this test, I open the performance counter file (\_\_NR\_perf\_event\_open syscall) before the critical section, proceed with the section and then after read the specified metric (instructions, cycles, branches, cachemisses, etc).
I verified that this needs to run under sudo because the process needs CAP\_PERFCOUNT capabilities. I also have to verify that `/proc/sys/kernel/perf_event_paranoid` is set to a number higher than 2, which seems to be always the case with Ubuntu 20.04.3 with kernel 5.11.0 which is the OS I standardized across tests.
This setup works on all my local machines. On the cloud, however, it works only on some instances as m5zn.6xlarge (Intel Xeon Platinum 8252C). It does not work on others as t3.medium, c3.4xlarge, c5a.8xlarge.
The AMI on all them are the same ami-09e67e426f25ce0d7.
One easy way to verify this behavior is run the following command:
```
sudo perf stat /bin/sleep 1
```
On the m5zn box I will see:
```
Performance counter stats for '/bin/sleep 1':
0.54 msec task-clock # 0.001 CPUs utiliz
1 context-switches # 0.002 M/sec
1 cpu-migrations # 0.002 M/sec
75 page-faults # 0.139 M/sec
2191485 cycles # 4.070 GHz
1292564 instructions # 0.59 insn per cyc
258373 branches # 479.860 M/sec
11090 branch-misses # 4.29% of all branc
1.000902741 seconds time elapsed
0.000889000 seconds user
0.000000000 seconds sys
```
[Perf with valid output](https://i.stack.imgur.com/lN0DQ.png)
While on the other boxes I will see:
```
Performance counter stats for '/bin/sleep 1':
0.62 msec task-clock # 0.001 CPUs utilized
2 context-switches # 0.003 M/sec
0 cpu-migrations # 0.000 K/sec
76 page-faults # 0.124 M/sec
<not supported> cycles
<not supported> instructions
<not supported> branches
<not supported> branch-misses
1.002488031 seconds time elapsed
0.000930000 seconds user
0.000000000 seconds sys
```
[Perf with not supported values](https://i.stack.imgur.com/LmzM3.png)
My suspicion is that the m5zn.6xlarge is backed by a real, dedicated host while the others are shared instances. Is my suspicion correct?
What instances I can launch that will provide me with performance counter PMU support?
Thank you!
|
After some research I found out that because all Amazon AWS instances are virtual instances, none of the guest operating systems can directly access the hardware performance counters (PMC or PMU).
The guest OS can only read the performance counters through a kernel driver called virtual PMU (vPMU), which is available only for certain Intel Xeon CPUs.
Therefore in my attempted list of instances, only the m5zn with an Intel Platinum 8252 has a supported CPU.
It is easy to check if the guest OS supports vPMU by running
```
cat /proc/cpuinfo | grep arch_perfmon
```
It is also possible to check in the dmesg output right after smpboot:
```
[ 0.916264] smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
[ 0.916410] Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
```
On AWS the rule of thumb is that you will get vPMU only on the largest instances, or instances that take an entire socket.
<https://oavdeev.github.io/posts/vpmu_support_z1d/>
Currently these instances support vPMU:
```
i3.metal
c5.9xlarge
c5.18xlarge
m4.16xlarge
m5.12xlarge
m5.24xlarge
r5.12xlarge
r5.24xlarge
f1.16xlarge
h1.16xlarge
i3.16xlarge
p2.16xlarge
p3.16xlarge
r4.16xlarge
x1.32xlarge
c5d.9xlarge
c5d.18xlarge
m5d.12xlarge
m5d.24xlarge
r5d.12xlarge
r5d.24xlarge
x1e.32xlarge
```
|
Dissolve holes in polygon in R
I am running some geoprocessing tasks in R, in which I am trying to create some polygons for clipping rasters of environmental information. I am buffering somewhat complex polygons, and this leaves small subgeometries that I would like to get rid of. In ArcGIS, I think this would involve converting my polygon from multipart to singlepart (or something along those lines) and then dissolving, but I don't know how to do this in R.
Here's an example that illustrates the problem:
```
require(maptools)
require(rgeos)
data(wrld_simpl)
wrld_simpl[which(wrld_simpl@data$NAME=='Greece'),]->greece
proj4string(greece)<-CRS('+proj=lonlat +datum=WGS84')
gBuffer(greece,width=0.5)->buf
plot(buf)
```
What I really want is the outer boundary of the polygon, with nothing else inside. Any ideas?
|
If you just want to get the one ring that forms the boundary of your buffer, then this:
```
plot(SpatialPolygons(list(Polygons(list(buf@polygons[[1]]@Polygons[[1]]),ID=1))),lwd=2)
```
is a very ad-hoc way of doing it (and plotting it) for your case.
What you really really want is to get all the rings with `ringDir=1`, since the rest will be holes. You need all the rings because your buffer might still be two disconnected islands.
```
outerRings = Filter(function(f){f@ringDir==1},buf@polygons[[1]]@Polygons)
outerBounds = SpatialPolygons(list(Polygons(outerRings,ID=1)))
plot(outerBounds)
```
might do the trick... Try it with `width=0.1` and you'll see it work with multiple islands, but still removing a hole.
|
how to memory map a huge matrix?
Suppose you got a huge (40+ GB) feature value (floating-point) matrix, rows are different features and columns are the samples/images.
The table is precomputed column-wise.
Then it is completely accessed row-wise and multi-threaded (each thread loads a whole row) several times.
What would be the best way to handle this matrix? I'm especially pondering over 5 points:
1. Since it's run on an x64 PC I could memory map the whole matrix at once but would that make sense?
2. What about the effects of multithreading (multithreaded initial computation as well?)?
3. How to layout the matrix: row or column major?
4. Would it help to mark the matrix as read-only after the precomputation has been finished?
5. Could something like <http://www.kernel.org/doc/man-pages/online/pages/man2/madvise.2.html> be used to speed it up?
|
Memory mapping the whole file could make the process much easier.
You want to lay out your data to optimize for the most common access pattern. It sounds like the data is going to be written once (column-wise) and read several times (row-wise). That suggests the data should be stored in row-major order.
Marking the matrix read-only once the pre-computation is done probably won't help performance (there are some possible low-level optimizations, but I don't think anything implements them), but it will prevent bugs from accidentally writing to data you don't intend to. Might as well.
`madvise` could end up being useful, once you've got your application written and working.
My overall advice: write the program in the simplest way you can, sequentially at first, and then put timers around the whole thing and the various major operations. Make sure the major operation times sum to the overall time, so you can be sure you're not missing anything. Then target your performance improvement efforts toward the components that are actually taking the most time.
Per JimR's mention of 4MB pages in his comment, you may end up wanting to look into hugetlbfs or using a Linux Kernel release with transparent huge page support (merged for 2.6.38, could probably be patched into earlier versions). This would likely save you a whole lot of TLB misses, and convince the kernel to do the disk IO in sufficiently large chunks to amortize any seek overhead.
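If it helps to see the pieces together, here is a minimal POSIX sketch (assuming the matrix file already exists and is stored row-major as doubles) of mapping it read-only and hinting the kernel about the access pattern:
```
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the whole file read-only; returns NULL on failure. */
double *map_matrix(const char *path, size_t *n_bytes)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }
    *n_bytes = (size_t)st.st_size;

    void *p = mmap(NULL, *n_bytes, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                 /* the mapping stays valid after close */
    if (p == MAP_FAILED) return NULL;

    madvise(p, *n_bytes, MADV_SEQUENTIAL);     /* hint: mostly linear reads per row */
    return (double *)p;
}
```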
|
Is only one element in boolean list true?
What would be the most minimal way of determining whether only one element in a boolean list is `True` in `Python`?
I was thinking of converting each boolean to 0 (false) or 1 (true) and adding them all up and checking if the sum is 1. That is pretty minimalist but I was wondering if there is a (bitwise) operation that will return true if only one element is true and all the other ones false, which would save me the bool-->int conversion (however simple it is). I am mostly just curious on whether such a bitwise operation exists.
|
Python's `bool` is a subclass of `int`, so you don't need to do any conversion:
```
>>> sum([True, False, False])
1
>>> sum([True, True, True])
3
```
This solution doesn't short-circuit however ... there are some cases where you might want to be able to bail out earlier:
```
result = 0
for item in boolean_iterable:
    result += item
    if result > 1:
        break  # Short-circuit early
```
However, unless your boolean iterables are really large, and you expect to short-circuit frequently, I would expect this to perform worse than the `sum` in the average case (which can push the loop to more optimized code).
Also, if you're looking for clever ways to do this with bitwise arithmetic, you can use `xor` in a `reduce` operation (note that this really tests whether an *odd* number of elements are true, not that exactly one is):
```
>>> from functools import reduce
>>> import operator
>>> reduce(operator.xor, [True, False, False], False)
True
>>> reduce(operator.xor, [True, False, True], False)
False
>>> reduce(operator.xor, [], False)
False
>>> reduce(operator.xor, [True], False)
True
```
But I wouldn't advise using this version :-)
|
What is a typed table in PostgreSQL? How can I make such table?
While studying the `information_schema` views, I noticed the following properties for a table:
```
is_typed                    YES if the table is a typed table, NO if not
user_defined_type_catalog   If the table is a typed table, the name of the database...
user_defined_type_schema    If the table is a typed table, the name of the schema...
user_defined_type_name      If the table is a typed table, the name of the data type...
```
See <https://www.postgresql.org/docs/current/static/infoschema-tables.html>.
I never heard of such concept. I have been looking in the documentation, but I cannot find more information.
What is a *typed table*? How can I create such table?
|
It's creating a table from a type. Per [the documentation](https://www.postgresql.org/docs/current/static/sql-createtable.html):
>
> OF type\_name
> Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE ... CASCADE).
>
>
> When a typed table is created, then the data types of the columns are determined by the underlying composite type and are not specified by the CREATE TABLE command. But the CREATE TABLE command can add defaults and constraints to the table and can specify storage parameters.
>
>
>
Basically when you create a [composite type](https://www.postgresql.org/docs/current/static/rowtypes.html) (e.g., using the `create type` statement) you can create a table from that type. When you cascade changes of the type (altering columns or dropping the type), it affects all tables built with that type, which means you can have many tables that are structured the same way. This is helpful for logging, replication, an ETL process, etc.
---
## Creating the table ([SQLFiddle Example](http://sqlfiddle.com/#!17/4ab43/9))
### Step 1: Create the type
```
CREATE TYPE people_type AS ( age INT, name TEXT, dob DATE );
```
### Step 2: Create the table
```
CREATE TABLE sales_staff OF people_type;
CREATE TABLE service_staff OF people_type;
```
### Step 3: Alter type
```
ALTER TYPE people_type ADD ATTRIBUTE gender CHAR CASCADE;
```
---
After altering the type you will be able to see that both tables have been affected. This can be done by using `\d` in psql, or querying the INFORMATION\_SCHEMA.COLUMNS table in the database as in the example at the SQLFiddle link above.
|
SQL Group By / Count: Count Same Values Across Multiple Columns?
I'm trying to figure out how to write a query that counts values across multiple columns, with the result table having a count in each column for every possible value of *any* column.
Example: Say I have *mytable*
```
Source data table:
P1 P2 P3
-----------
a b a
a a a
b b b
a b b
```
I want a query that counts a's and b's in each column, producing something like:
```
Desired query output:
P1 P2 P3
-------------
a | 3 1 2
b | 1 3 2
```
I know I can do this for a single column easily with a *group by*:
```
select P1, count(*) as mycounts
from mytable
group by P1
```
But is it possible to do this for *every* column?
I'm using SQL Server 2008 (T-SQL). Thanks in advance for any help!
|
Maybe something like this:
First some test data:
```
DECLARE @tbl TABLE(P1 VARCHAR,P2 VARCHAR,P3 VARCHAR)
INSERT INTO @tbl
SELECT 'a','b','a' UNION ALL
SELECT 'a','a','a' UNION ALL
SELECT 'b','b','b' UNION ALL
SELECT 'a','b','b'
```
Then a pivot like this:
```
SELECT
*
FROM
(
SELECT 'P1' AS P, P1 AS PValue,P1 AS test FROM @tbl
UNION ALL
SELECT 'P2',P2,P2 FROM @tbl
UNION ALL
SELECT 'P3',P3,P3 FROM @tbl
) AS p
PIVOT
(
COUNT(PValue)
FOR P IN ([P1],[P2],[P3])
) AS pvt
```
[Here](http://msdn.microsoft.com/en-us/library/ms177410.aspx) is more information about pivot and unpivot
|
Casting shared\_ptr to weak\_ptr and back
How would I get a `weak_ptr<void>` to a `shared_ptr<Type>`?
How would I lock a `weak_ptr<void>` and ultimately produce a `shared_ptr<Type>`?
`Type` has a non-trivial destructor, is it right to assume `weak_ptr<...>` will never call this destructor?
The void weak pointer is what I want in this case, it's used only to keep tabs on the reference count of shared pointers of multiple types, and give out shared pointers to existing objects without itself owning the object (it's part of a one object many references resource manager).
|
>
> How would I get a `weak_ptr<void>` to a `shared_ptr<Type>`?
>
>
>
`std::shared_ptr<Type>` is implicitly convertible to `std::weak_ptr<void>`.
>
> How would I lock a `weak_ptr<void>` and ultimately produce a `shared_ptr<Type>`?
>
>
>
Call `lock()` to get `std::shared_ptr<void>`, then use `std::static_pointer_cast`.
>
> `Type` has a non-trivial destructor, is it right to assume `weak_ptr<...>` will never call this destructor
>
>
>
Yes. Whenever the last `shared_ptr` is destroyed, the object is destroyed. If you want to keep the object alive, you should be storing `shared_ptr<void>` and not `weak_ptr<void>`. If you don't want to keep the object alive but you just want the `weak_ptr` to always know the reference count, then there is no problem.
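Putting the three answers together, a short sketch (assuming some class `Type`):
```
#include <memory>

struct Type { ~Type() { /* non-trivial destructor */ } };

int main() {
    std::shared_ptr<Type> sp = std::make_shared<Type>();

    // shared_ptr<Type> converts implicitly to weak_ptr<void>
    std::weak_ptr<void> wp = sp;

    // lock() yields a shared_ptr<void>; static_pointer_cast recovers Type
    if (std::shared_ptr<void> locked = wp.lock()) {
        std::shared_ptr<Type> back = std::static_pointer_cast<Type>(locked);
        // `back` shares ownership with `sp`; ~Type runs only when the last
        // shared_ptr goes away, never because a weak_ptr is destroyed
    }
}
```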
|
How to determine what happened and locate the problem when my system freezes?
Hello, I have had some freezes with my desktop. It is Lucid 10.04 on a Lenovo 3000 N200. The system freezes completely.
How can I locate the problem?
|
Sometimes it's just the X server that is frozen. Try to get a console by hitting `CTRL-ALT-F1`. If that won't do, try logging it your machine using SSH (install openssh-server package first). If you can't get a shell, reboot.
Once you have a shell, check the system logs (`/var/log/syslog`, `/var/log/messages`, `/var/log/Xorg.0.log` and `~/.xsession-errors`). Scroll up to the time the crash happened. If you didn't have to reboot, the `dmesg` command will show you the kernel log buffer in case it can't write to your filesystem.
Whether or not you are able to find anything in the log file, what makes it easier to debug is being able to reproduce the problem at will. If you can't, and the logs show nothing unusual, then you can try to isolate the source of the problem by booting a rescue CD (or a completely different OS), removing non-essential pieces of hardware, or replacing some parts with others that are known to be functionnal. Doing this can be tedious, and requires rigour.
Once you have minimal information, file a bug report for the suspected faulty software, unless you think your hardware is faulty.
|
Update bash history on other terminals when exiting one terminal
I know this question is not obscure, as it is asked here [keep updating](https://unix.stackexchange.com/questions/1288/preserve-bash-history-in-multiple-terminal-windows) (and [duplicated](https://unix.stackexchange.com/questions/25334/how-to-preserve-bash-history-in-multiple-terminal-windows) here).
What I'm trying to achieve is a bit different. I don't like the idea of my prompt rewriting a file every `ls` I type (`history -a; history -c; history -r`).
I would like to update the file on exit. That's easy (actually, default), but you need to append instead of rewriting:
```
shopt -s histappend
```
Now, when a terminal is closed, I would like to make all others that remain open to be aware of the update.
I prefer to do this without checking via `$PS1` on every `command` that I type. I think it would be better to capture some sort of signal. How would you do that? If not possible, maybe a simple `cronjob`?
How can we solve this puzzle?
|
Creative and involving signals, you say? OK:
```
trap on_exit EXIT
trap on_usr1 USR1

on_exit() {
    history -a
    trap '' USR1
    killall -u "$USER" -USR1 bash
}

on_usr1() {
    history -n
}
```
Chuck that in `.bashrc` and go. This uses signals to tell every `bash` process to check for new history entries when another one exits. This is pretty awful, but it really works.
---
**How does it work?**
[`trap` sets a signal handler](https://www.gnu.org/software/bash/manual/bashref.html#index-trap) for either a system signal or one of Bash's internal events. The `EXIT` event is any controlled termination of the shell, while `USR1` is `SIGUSR1`, a meaningless signal we're appropriating.
Whenever the shell exits, we:
- Append all history to the file explicitly.
- Disable the `SIGUSR1` handler and make this shell [ignore](https://www.gnu.org/software/bash/manual/bashref.html#index-trap) the signal.
- Send the signal to all running `bash` processes from the same user.
When a `SIGUSR1` arrives, we:
- Load all new entries from the history file into the shell's in-memory history list.
Because of the way Bash handles signals, you won't actually get the new history data until you hit `Enter` the next time, so this doesn't do any better on that front than putting [`history -n`](https://www.gnu.org/software/bash/manual/bashref.html#index-history) into `PROMPT_COMMAND`. It does save reading the file constantly when nothing has happened, though, and there's no writing at all until the shell exits.
---
There are still a couple of issues here, however. The first is that [the default response to `SIGUSR1` is to terminate](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html) the shell. Any other `bash` processes (running shell scripts, for example) will be killed. `.bashrc` is not loaded by non-interactive shells. Instead, [a file named by `BASH_ENV` is loaded](https://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files): you can set that variable in your environment globally to point to a file with:
```
trap '' USR1
```
in it to ignore the signal in them (which resolves the problem).
Finally, although this does what you asked for, the ordering you get will be a bit unusual. In particular, bits of history will be repeated in different orders as they're loaded up and saved separately. That's essentially inherent in what you're asking for, but do be aware that up-arrow history becomes a lot less useful at this point. [History substitutions](https://www.gnu.org/software/bash/manual/bashref.html#History-Interaction) and the like will be shared and work well, though.
|
Erlang exception error for no match of right hand side value
I have this code that is supposed to print the numbers 1 to N-1 in a list, but `I` in here won't append to the list.
```
enum(N,[],N) -> [];
enum(N,L,I) ->
    io:format("current number: ~w~n", [I]),
    L = L ++ I,
    enum(N,[],I+1).

enumFunc(N) -> enum(N,[],1).
```
When I run sample:enumFunc(100)., it returns `exception error: no match of right hand side value [1]`
Please help me solve this. Thanks.
|
Erlang is a single assignment language. This means that you cannot assign a new value to `L` if a value has already been assigned to `L`. When you try to 'assign' a new value via `L = L ++ I` you are actually performing a *matching* operation. The reason you are seeing the `no match of right hand side value [1]` error is because `L` does not equal `L ++ I`: `L` is already bound to a value and does not match the new list on the right-hand side.
```
enum(N,L,N) -> L;
enum(N,L,I) ->
    io:format("current number: ~w~n", [I]),
    L0 = L ++ [I],
    enum(N,L0,I+1).

enumFunc(N) -> enum(N,[],1).
```
|
Plotting function that returns complex values using pyplot
I found this image on Facebook and wanted to plot it, as I thought it would be a good opportunity to learn something new.
[](https://i.stack.imgur.com/1VFvU.jpg)
(source: [akamaihd.net](https://fbcdn-sphotos-c-a.akamaihd.net/hphotos-ak-ash4/482615_550821728272176_1584451064_n.jpg))
The code:
```
import numpy as np
import scipy.special
import pylab
x = np.linspace( -1.0, 1.0, 200 )
y = np.sqrt( 1-np.square(np.complex64(x) ) ) + np.complex64( scipy.special.cbrt( np.square( x ) ) )
pylab.plot( np.complex64( x ), y )
pylab.show()
```
The Output:
[](https://i.stack.imgur.com/go5Hj.png)
I am unable to obtain the plot as given in the figure. In order to plot, I reordered the formula into `y = f(x)` form and translated it into the code above.
|
```
import matplotlib.pyplot as plt
import numpy as np
y, x = np.ogrid[-1:2:100j, -1:1:100j]
plt.contour(x.ravel(), y.ravel(), x**2 + (y-((x**2)**(1.0/3)))**2, [1])
plt.axis('equal')
plt.show()
```

---
You can play with the aspect-ratio to make the curve a bit more heart-like:
```
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
y, x = np.ogrid[-1.5:2:1000j, -2:2:1000j]
plt.contour(x.ravel(), y.ravel(), x**2 + (y-((x**2)**(1.0/3)))**2, [1])
ax.set_aspect(0.75)
plt.show()
```

|
BigQuery external tables with python
How can I create external tables (federated data sources) in BigQuery using Python (google-cloud-bigquery)?
I know you can use bq commands like this, but that is not how i want to do it:
```
bq mk --external_table_definition=path/to/json tablename
bq update tablename path/to/schemafile

with external_table_definition as:

{
  "autodetect": true,
  "maxBadRecords": 9999999,
  "csvOptions": {
    "skipLeadingRows": 1
  },
  "sourceFormat": "CSV",
  "sourceUris": [
    "gs://bucketname/file_*.csv"
  ]
}
```
and a schemafile like this:
```
[
  {
    "mode": "NULLABLE",
    "name": "mycolumn1",
    "type": "INTEGER"
  },
  {
    "mode": "NULLABLE",
    "name": "mycolumn2",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "mycolumn3",
    "type": "STRING"
  }
]
```
Thank you for your help!
Lars
|
```
# assumes an existing client and dataset, e.g.:
# client = bigquery.Client()
# dataset_ref = client.dataset('my_dataset')
table_id = 'table1'
table = bigquery.Table(dataset_ref.table(table_id), schema=schema)

external_config = bigquery.ExternalConfig('CSV')
external_config.autodetect = True
external_config.max_bad_records = 9999999
external_config.options.skip_leading_rows = 1
external_config.source_uris = ['gs://bucketname/file_*.csv']

table.external_data_configuration = external_config
table = client.create_table(table)
```
The schema format is:
```
schema = [
    bigquery.SchemaField('mycolumn1', 'INTEGER', mode='NULLABLE'),
    bigquery.SchemaField('mycolumn2', 'STRING', mode='NULLABLE'),
    bigquery.SchemaField('mycolumn3', 'STRING', mode='NULLABLE'),
]
```
|
file\_get\_html() not working?
I am trying to get the title and meta description of a page by providing the URL of the target page, but file\_get\_html() always returns FALSE.
Any suggestions? By the way, I have enabled the PHP extension php\_openssl.
```
<?php
include("inc/simple_html_dom.inc.php");

$contents = file_get_html("https://www.facebook.com");

if($contents != FALSE) //always skips if condition
{
    foreach($contents->find('title') as $element)
    {
        $title = $element->plaintext;
    }
    foreach($contents->find('meta[description]') as $element)
    {
        $meta_description = $element->plaintext;
    }
    $output = array('title' => $title, 'meta' => $meta_description);
    echo json_encode($output);
}
else
{
    echo "Couldn't load contents";
}
?>
```
UPDATE:
So file\_get\_html() works fine now, but any idea about dealing with Facebook's "update your browser" message?
|
```
<?php
$ch = curl_init('https://www.facebook.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

/*
 * XXX: This is not a "fix" for your problem, this is a work-around. You
 * should fix your local CAs
 */
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);

/* Set a browser UA so that we aren't told to update */
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36');

$res = curl_exec($ch);
if ($res === false) {
    die('error: ' . curl_error($ch));
}
curl_close($ch);

$d = new DOMDocument();
@$d->loadHTML($res);

$output = array(
    'title' => '',
    'meta' => ''
);

$x = new DOMXPath($d);

$title = $x->query("//title");
if ($title->length > 0) {
    $output['title'] = $title->item(0)->textContent;
}

$meta = $x->query("//meta[@name = 'description']");
if ($meta->length > 0) {
    $output['meta'] = $meta->item(0)->getAttribute('content');
}

print_r($output);
?>
```
This displays:
```
Array
(
[title] => Welcome to Facebook - Log In, Sign Up or Learn More
[meta] => Facebook is a social utility that connects people with friends and others who work, study and live around them. People use Facebook to keep up with friends, upload an unlimited number of photos, post links and videos, and learn more about the people they meet.
)
```
|
How to remove a word prefix using grep?
How can I remove the beginning of a word using grep? For example, I have a file that contains this:
```
www.abc.com
```
I only need this part:
```
abc.com
```
Sorry for the basic question, but I have no experience with Linux.
|
You don't edit strings with `grep` in Unix shell, `grep` is usually used to find or remove some lines from the text. You'd rather use `sed` instead:
```
$ echo www.example.com | sed 's/^[^\.]\+\.//'
example.com
```
You'll need to learn regular expressions [to use it effectively](http://www.grymoire.com/Unix/Sed.html).
Sed can also edit a file in place (modify the file) if you pass the `-i` argument, but be careful: you can easily lose data if you write the wrong `sed` command and use the `-i` flag.
## An example
From your comments I guess you have a TeX document, and you want to remove the first part of all .com domain names. If this is your document `test.tex`:
```
\documentclass{article}
\begin{document}
www.example.com
example.com www.another.domain.com
\end{document}
```
then you can transform it with this `sed` command (redirect output to file or edit in-place with `-i`):
```
$ sed 's/\([a-z0-9-]\+\.\)\(\([a-z0-9-]\+\.\)\+com\)/\2/gi' test.tex
\documentclass{article}
\begin{document}
example.com
example.com another.domain.com
\end{document}
```
Please note that:
- A common sequence of allowed symbols followed by a dot is matched by `[a-z0-9-]\+\.`
- I used groups in the regular expression (parts of it within `\(` and `\)`) to indicate the first and the second part of the URL, and I replace the entire match with its second group (`\2` in the substitution pattern)
- The domain should be at least a 3rd-level .com domain (every `\+` repetition means at least one match)
- The search is case insensitive (the `i` flag at the end)
- It can make more than one match per line (the `g` flag at the end)
|
How can I make Windows' system tray two rows instead of one?
My taskbar has two rows. First is the Quick Launch, second is for applications.
But my system tray is showing the icons in only one row, which takes too much space. Is there any way to force them to appear in two rows?
This is how it looks now:
[](https://i.stack.imgur.com/M3ifi.jpg)
I'd like something like this:
[](https://i.stack.imgur.com/Dd4Qs.jpg)
|
### How can I make my System Tray icons occupy two rows?
1. First make sure that your taskbar is [not locked](http://www.howtogeek.com/225568/how-to-configure-and-customize-the-taskbar-in-windows-10/).
2. Now you can drag the top of the taskbar to resize it.
3. This will increase the number of rows used by the System Tray icons.
Before:
[](https://i.stack.imgur.com/Kw5jI.png)
After:
[](https://i.stack.imgur.com/rRFCG.png)
Note:
- Screenshots are for Windows 7
|
Moment.js unix timestamp to display time ago always in minutes
I am using Moment.js and would like to convert unix timestamps to (always) display minutes ago from the current time, e.g. 4 mins ago, 30 mins ago, 94 mins ago, etc.
Right now I am using:
```
moment.unix(d).fromNow()
```
But this does not always display in minutes, e.g. "an hour ago", "a day ago", etc. I have tried using .asMinutes() but I believe this only works with moment.duration().
|
Not sure if this is possible with native Moment methods, but you can easily make your own Moment extension:
```
moment.fn.minutesFromNow = function() {
return Math.floor((+new Date() - (+this))/60000) + ' mins ago';
}
//then call:
moment.unix(d).minutesFromNow();
```
[**Fiddle**](http://jsfiddle.net/ult_combo/GF4Mr/3/)
Note that other moment methods won't be chainable after `minutesFromNow()` as my extension returns a string.
**edit:**
Extension with fixed plural (0 min**s**, 1 min, 2 min**s**):
```
moment.fn.minutesFromNow = function() {
var r = Math.floor((+new Date() - (+this))/60000);
return r + ' min' + ((r===1) ? '' : 's') + ' ago';
}
```
You can as well replace "min" with "minute" if you prefer the long form.
[**Fiddle**](http://jsfiddle.net/ult_combo/GF4Mr/4/)
|
Extra close behavior in Windows 8.1
In Windows 8.0, to close an app you drag it down the middle of the screen, from the top to the bottom. The user is rewarded with a nice vanishing animation.
In Windows 8.1, this is exactly the same. However, if you don't let go, but just hold your app at the bottom of the screen - after 3 seconds it flips over.
Why?
|
Well, according to [here](http://windows.microsoft.com/en-us/windows-8/how-close-app) and [here](http://www.wpcentral.com/windows-8-1-apps-dont-want-to-close-tombstoning-new-cool), dragging an app to the bottom of the screen "tombstones" the app. This removes it from your list of active apps and suspends all app activity until it is opened again, at which point it resumes where it left off. If an app is tombstoned and your computer needs more memory, it will start to delete tombstoned apps from memory.
When the app flips over, however, all processes associated with that app are terminated (the app closes everything associated with it).
For some more detailed info on tombstoning, see [this](http://blogs.msdn.com/b/b8/archive/2012/02/07/improving-power-efficiency-for-applications.aspx) link on MSDN Blogs - specifically the part "Suspending apps in the background".
|
Is it possible to easily copy application settings from one web app to another on Azure
I was wondering if there is an easy way to completely copy all the key-value pairs from one web app's application settings to another. As seen in the picture below, I have a lot of these key-value pairs, and having to do this manually every time is very cumbersome.
[](https://i.stack.imgur.com/4vfJm.png)
|
You can use Azure PowerShell. Here is a PowerShell Script for you.
```
try{
$acct = Get-AzureRmSubscription
}
catch{
Login-AzureRmAccount
}
$myResourceGroup = '<your resource group>'
$mySite = '<your web app>'
$myResourceGroup2 = '<another resource group>'
$mySite2 = '<another web app>'
$props = (Invoke-AzureRmResourceAction -ResourceGroupName $myResourceGroup `
-ResourceType Microsoft.Web/sites/Config -Name $mySite/appsettings `
-Action list -ApiVersion 2015-08-01 -Force).Properties
$hash = @{}
$props | Get-Member -MemberType NoteProperty | % { $hash[$_.Name] = $props.($_.Name) }
Set-AzureRMWebApp -ResourceGroupName $myResourceGroup2 `
-Name $mySite2 -AppSettings $hash
```
This script copies app settings from `$mySite` to `$mySite2`. If your web app uses deployment slots, you should use the following command for `$props` instead.
```
$props = (Invoke-AzureRmResourceAction -ResourceGroupName $myResourceGroup `
-ResourceType Microsoft.Web/sites/slots/Config -Name $mySite/$slot/appsettings `
-Action list -ApiVersion 2015-08-01 -Force).Properties
```
And, use `Set-AzureRMWebAppSlot` instead of `Set-AzureRMWebApp`
```
Set-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup2 `
-Name $mySite2 -Slot $slot -AppSettings $hash
```
|
OpenSSL self signed certificate with a common name longer than 64 bytes
I can create a self-signed certificate using OpenSSL as follows:
```
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days XXX -nodes
```
The interface somehow restricts me to 64 bytes for the common name. How can I create a certificate that has a common name longer than 64 bytes?
|
In my case, all the answers of "don't do this, it's against standards" were very unhelpful since I needed to do this as part of a reverse engineering challenge. In my case, the fact that it was against the standards didn't matter whatsoever.
Here are the (rough) steps:
1. Download the latest source of libressl from <https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/> (I used 2.6.0 because it's the version that ships on macOS Mojave)
2. Unzip/tar/gz and then open `/crypto/asn1/a_mbstr.c` in your favorite editor
3. Search for something that looks like the following:
```
if ((maxsize > 0) && (nchar > maxsize)) {
ASN1error(ASN1_R_STRING_TOO_LONG);
ERR_asprintf_error_data("maxsize=%ld", maxsize);
return -1;
}
```
and comment it out. For version 2.6.0, this was on lines 155-159. By removing these lines, you are removing the max CN length check.
4. Follow the directions in the `README` file to build the binary. I didn't need to install any libraries when I built on macOS but YMMV. I used `cmake` which dropped the new openssl binary in `/build/apps/openssl`
5. Generate a CSR using the command line flags (read: NOT THE INTERACTIVE TOOL -- it has a special check that is not patched out by this modification!).
For example:
```
/build/apps/openssl/openssl req -new -newkey rsa:2048 -nodes -out a.csr -keyout a.key -subj "/CN=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
```
6. Using the stock `openssl` binaries (or the modified ones, if you want), sign the CSR:
```
openssl x509 -req -in a.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out a.crt -days 500 -sha256
```
After that, you should have your wonderful non-compliant certificate ready to use. I have noticed quite a few issues with using certificates with CNs longer than 64 characters (Wireshark truncates the CN in the disector display, etc) but it does in fact work for what I needed.
|
Setting up table relations: what do "Cascade", "Set Null" and "Restrict" do?
I want to start using table relations in a new project.
After some googling I got 2 tables set up as InnoDB:
The keys I want to link are
->users->userid (primary)
->sessions->userid (index)
The only thing that I don't understand in this process is what the different settings for "On update" and "On delete" do
The options here are:
- -- (nothing?)
- Cascade (???)
- Set Null (sets everything to null?)
- No action (well duh...)
- Restrict (???)
I basically want the data in sessions to be deleted when a user is completely deleted, since otherwise the sessions will only be deleted when my session manager detects that they have expired.
So if anyone can tell me what these options do, it would be much appreciated.
|
`CASCADE` will propagate the change when the parent changes. (If you delete a row, rows in constrained tables that reference that row will also be deleted, etc.)
`SET NULL` sets the column value to NULL when a parent row goes away.
`RESTRICT` causes the attempted DELETE of a parent row to fail.
EDIT: You didn't ask about them, but the SQL standard defines two other actions: `SET DEFAULT` and `NO ACTION`. In MySQL, `NO ACTION` is equivalent to `RESTRICT`. (In some DBMSs, `NO ACTION` is a deferred check, but in MySQL all checks are immediate.) The MySQL parser accepts `SET DEFAULT`, but both the InnoDB and NDB engines reject those statements, so `SET DEFAULT` can't actually be used for either an `ON UPDATE` or `ON DELETE` constraint.
Also, note that cascading foreign key actions do not activate triggers in MySQL.
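For the behaviour you describe (deleting a user's sessions when the user is deleted), `ON DELETE CASCADE` on the foreign key is what you want. A minimal sketch, using the table and column names from your question (adjust to your actual schema):
```
ALTER TABLE sessions
  ADD CONSTRAINT fk_sessions_users
  FOREIGN KEY (userid) REFERENCES users (userid)
  ON DELETE CASCADE
  ON UPDATE CASCADE;
```
With this constraint in place, deleting a row from `users` automatically deletes every matching row in `sessions`.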
|
What is the best way to draw multiple VAOs using the same shader but not having the same textures or colors?
I'm wondering what would be the best thing to do if I want to draw
more than ~6000 different VAOs using the same shader.
At the moment I bind my shader, then give it all the information needed (uniforms), then loop through each VAO to bind and draw it.
This approach makes my frame rate fall to ~200 fps instead of 3000 or 4000.
According to *<https://learnopengl.com/Advanced-OpenGL/Instancing>*, using glDrawElementsInstanced can let me handle a HUGE number of instances of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
Can someone confirm this? What would you do to draw so many VAOs while saving as much performance as possible?
|
Step 1: do not have 6,000 different VAOs.
You are undoubtedly treating each VAO as a separate mesh. Stop doing this. You should instead treat each VAO as a separate vertex format. That is, you only need a new VAO if you're passing different *kinds* of vertex data. The number of attributes and the format of each attribute constitute the format information.
Ideally, you only need between 4 and 10 separate sets of vertex formats. Given that you're using the same shader on multiple VAOs, you probably already have this understanding.
So, how do you use the same VAO for multiple meshes? Ideally, you would do this by putting all of the mesh data for a particular kind of mesh (ie: vertex format) in the same buffer object(s). You would select which data to retrieve for a particular rendering operation via tricks like the baseVertex parameter of `glDrawElementsBaseVertex`, or just by selecting which range of index data to draw from for a particular draw command. Other alternatives include the multi-draw family of rendering functions.
If you cannot put all of the data in the same buffers for some reason, then you should [adopt the `glVertexAttribFormat` style of VAO usage](https://stackoverflow.com/a/37972230/734069). That way, you set your vertex format data with `glVertexAttribFormat` calls, and you can change the buffers as needed with `glBindVertexBuffers` without ever having to touch the vertex format itself. This is known to be faster than changing VAOs.
And to be honest, you should adopt `glVertexAttribFormat` anyway, because it's a much better API that isn't stupid like `glVertexAttribPointer` and its ilk.
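As a rough illustration of the one-VAO-per-vertex-format idea with per-mesh buffer rebinding, here is a minimal sketch. It assumes a GL 4.3+ context (for `glVertexAttribFormat`/`glBindVertexBuffer`) and a hypothetical `Mesh` struct; the names are illustrative, not taken from your code:
```
/* One VAO describes the vertex format once; each mesh only rebinds buffers. */
typedef struct { GLuint vbo, ibo; GLsizei indexCount; } Mesh;

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

/* Attribute 0: 3 floats per vertex, read from binding point 0. */
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);

/* Draw many meshes that share this format without touching the VAO. */
for (size_t i = 0; i < meshCount; ++i) {
    glBindVertexBuffer(0, meshes[i].vbo, 0, 3 * sizeof(float));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, meshes[i].ibo);
    glDrawElements(GL_TRIANGLES, meshes[i].indexCount, GL_UNSIGNED_INT, 0);
}
```
Packing all meshes into one big VBO/IBO pair and selecting ranges with `glDrawElementsBaseVertex` goes a step further and removes even the per-mesh buffer rebinds.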
|
How to keep a child NSManagedObjectContext up to date when using bindings
I have a `NSManagedObjectContext` set to have a `NSPrivateQueueConcurrencyType` which I'm using most of the time across my app.
As well as this I created a child MOC with `NSMainQueueConcurrencyType` for use with cocoa bindings (I hear that bindings don't work with private queue MOCs). I have bound some `ObjectController`s and an `ArrayController` to this child context. I very much want to keep the child on the main queue rather than swapping the MOC queue types.
When I make changes to the bound objects via UI, the changes don't propagate up to the parent context. And when I make changes to the parent context, they don't filter down to the Object/ArrayControllers.
How can I make this happen? Is there a setting that will tell the Object/ArrayControllers to refresh their context appropriately and save it when they make changes?
|
To bring changes to the parent, you need to save the child. If you want to save the changes persistently, you also need to save the parent after that.
```
[child save:&error];
[parent performBlock:^{
[parent save:&parentError];
}];
```
To bring changes from parent to the child, you need either merge changes using a notification method from parent's `NSManagedObjectContextDidSaveNotification` or re-fetch in child context. Merging is probably better in your case.
```
- (void)managedObjectContextDidSave:(NSNotification *)notification {
// Here we assume that this is a did-save notification from the parent.
// Because parent is of private queue concurrency type, we are
// on a background thread and can't use child (which is of main queue
// concurrency type) directly.
[child performBlock:^{
[child mergeChangesFromContextDidSaveNotification:notification];
}];
}
```
|
SAPUI5 Table - Remove Filter/Grouping/Sort?
I have a simple table (type `sap.ui.table.Table`) where I allow my users to sort, filter and group elements. However, there seems to be no way to remove sorting or grouping once it is applied. The filter can be removed by entering no value in the filter, but how do I remove sorting/grouping?
```
var oTableEmpl = new sap.ui.table.Table({
width : "100%",
visibleRowCount : 20,
selectionMode : sap.ui.table.SelectionMode.Multi,
navigationMode : sap.ui.table.NavigationMode.Scrollbar,
editable : false,
enableCellFilter : true,
enableColumnReordering : true,
enableGrouping : true,
extension : oMatrixLayout,
});
oTableEmpl.addColumn(new sap.ui.table.Column({
label : new sap.ui.commons.Label({
text : "Label",
textAlign : sap.ui.core.TextAlign.Center
}),
template : new sap.ui.commons.TextView({
text : "{Value}",
textAlign : sap.ui.core.TextAlign.Center
}),
visible : false,
sortProperty: "Value",
filterProperty: "Value",
}));
```
This might seem easy, but in the table itself there is no option to remove anything. Does it really have to be removed by programming something?
|
Yes, the only way to do this is by coding. Basically you need to clear the sorters and filters of the `ListBinding`, and then refresh the `DataModel`. For grouping, reset the grouping of `Table` and `Column` to false; after the reset, set the grouping of `Table` back to true.
```
//set group of table and column to false
oTableEmpl.setEnableGrouping(false);
oTableEmpl.getColumns()[0].setGrouped(false);
var oListBinding = oTableEmpl.getBinding();
oListBinding.aSorters = null;
oListBinding.aFilters = null;
oTableEmpl.getModel().refresh(true);
//after reset, set the enableGrouping back to true
oTableEmpl.setEnableGrouping(true);
```
I also attached a working code snippet. Please have a look.
```
<script id='sap-ui-bootstrap' type='text/javascript' src='https://sapui5.hana.ondemand.com/resources/sap-ui-core.js' data-sap-ui-libs="sap.m,sap.ui.commons,sap.ui.table,sap.viz" data-sap-ui-theme="sap_bluecrystal"></script>
<script id="view1" type="sapui5/xmlview">
<mvc:View xmlns:core="sap.ui.core" xmlns:layout="sap.ui.commons.layout" xmlns:mvc="sap.ui.core.mvc" xmlns="sap.ui.commons" xmlns:table="sap.ui.table" controllerName="my.own.controller" xmlns:html="http://www.w3.org/1999/xhtml">
<layout:VerticalLayout>
<Button text="Reset" press="onPress" />
<table:Table id="testTable" rows="{/}" enableGrouping="true">
<table:Column sortProperty="abc" sorted="true" visible="true">
<table:label>
<Label text="abc"></Label>
</table:label>
<table:template>
<Label text="{abc}"></Label>
</table:template>
</table:Column>
<table:Column>
<table:label>
<Label text="abc2"></Label>
</table:label>
<table:template>
<Label text="{abc2}"></Label>
</table:template>
</table:Column>
</table:Table>
</layout:VerticalLayout>
</mvc:View>
</script>
<script>
sap.ui.controller("my.own.controller", {
onInit: function() {
var aTableData = [{
abc: 1,
abc2: "a"
}, {
abc: 6,
abc2: "b"
}, {
abc: 6,
abc2: "c"
}, {
abc: 3,
abc2: "g"
}, {
abc: 3,
abc2: "h"
}];
var oTableModel = new sap.ui.model.json.JSONModel();
oTableModel.setData(aTableData);
var oTable = this.getView().byId("testTable");
oTable.setModel(oTableModel);
oTable.sort(oTable.getColumns()[0]);
},
onPress: function() {
var oTable = this.getView().byId("testTable");
//set group of table and column to false
oTable.setEnableGrouping(false);
oTable.getColumns()[0].setGrouped(false);
var oModel = oTable.getModel();
var oListBinding = oTable.getBinding();
oListBinding.aSorters = null;
oListBinding.aFilters = null;
oModel.refresh(true);
//after reset, set the enableGroup back to true
oTable.setEnableGrouping(true);
}
});
var myView = sap.ui.xmlview("myView", {
viewContent: jQuery('#view1').html()
}); //
myView.placeAt('content');
</script>
<body class='sapUiBody'>
<div id='content'></div>
</body>
```
|
Newly created desktop doesn't receive keyboard events
I have created a small program which launches itself in a new desktop.
```
HDESK hDesktop = ::CreateDesktop(strDesktopName.c_str(),
NULL, // Reserved
NULL, // Reserved
0, // DF_ALLOWOTHERACCOUNTHOOK
GENERIC_ALL,
NULL); // lpSecurity
::SetThreadDesktop(hDesktop);
```
Later on, I started another application on that desktop using the following lines:
```
PROCESS_INFORMATION pi = { 0 };
STARTUPINFO si = { 0 };
si.cb = sizeof(si);
si.lpDesktop = &strDesktop[0];
if (FALSE == ::CreateProcess(pathModuleName.file_string().c_str(), L"abc def", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
return false;
DWORD dwWaitRes = ::WaitForSingleObject(pi.hProcess, INFINITE);
```
`pathModuleName` is a self location obtained by `GetModuleFileName(NULL)`.
The newly created application obtains an HWND to another window and sends window messages using the following commands:
```
// bring window to front
::SetForegroundWindow(hwnd);
// set focus so keyboard inputs will be caught
::SetFocus(hwnd);
::keybd_event(VK_MENU, 0x45, KEYEVENTF_EXTENDEDKEY | 0, 0);
...
```
So basically application `A` on desktop DEFAULT is starting application `B` on desktop X, which obtains an HWND to another application `C` started on the same desktop X.
My problem is that keyboard events coming from application `B` on desktop X are not being triggered in application `C`. Only if I use `SwitchDesktop(B)`, then events are triggered and code is executed properly.
What am I missing?
|
You are trying to simulate user input on a desktop that is not active on the physical console (screen, mouse, keyboard), which is not likely to work, and why `SwitchDesktop()` makes it work. According to the documentation:
[SwitchDesktop function](http://msdn.microsoft.com/en-us/library/windows/desktop/ms686347.aspx)
>
> Makes the specified desktop visible and activates it. **This enables the desktop to receive input from the user.**
>
>
>
`keybd_event()`, `mouse_event()`, `SendInput()`, they all simply generate and store input messages into the same input queue that the physical mouse/keyboard post their messages to. The input system does not know the difference between user input and synthesized input when dispatching the input messages to applications.
Raymond Chen touched on that in his blog:
[How do I simulate input without SendInput?](https://devblogs.microsoft.com/oldnewthing/20101221-00/?p=11953)
>
> SendInput operates at the bottom level of the input stack. **It is just a backdoor into the same input mechanism that the keyboard and mouse drivers use to tell the window manager that the user has generated input.** The SendInput function doesn't know what will happen to the input. That is handled by much higher levels of the window manager, like the components which hit-test mouse input to see which window the message should initially be delivered to.
>
>
>
He also posted a nice little diagram in another blog article showing where `SendInput()` sits in relation to the input queue:
[When something gets added to a queue, it takes time for it to come out the front of the queue](https://devblogs.microsoft.com/oldnewthing/20140213-00/?p=1773)

|
Find text file containing a given text ignoring new lines and spaces?
I have a string like: `"thisissometext"`. I want to find all text files inside a given directory (recursively) that contain this string, or any variation of it with whitespace and/or newlines in the middle of it. For example, a text file containing `"this is sometext"`, `"this\n issometext"`, or `"this\n isso metext"` should show up in the search. How can I do this?
|
With the newer versions of GNU `grep` (that has the `-z` option) you can use this one liner:
```
find . -type f -exec grep -lz 'this[[:space:]]*is[[:space:]]*some[[:space:]]*text' {} +
```
Considering the whitespaces can come in between the words only.
If you just want to search all files recursively starting from current directory, you don't need `find`, you can just use `grep -r` (recursive). `find` can be used to be selective on the files to search e.g. choose files of which directory to exclude. So, simply:
```
grep -rlz 'this[[:space:]]*is[[:space:]]*some[[:space:]]*text' .
```
- The main trick here is `-z`: it treats each line of the input stream as ending in ASCII NUL instead of a newline, so we can match newlines using the usual methods.
- The `[[:space:]]` character class matches any whitespace character, including space, tab, CR, LF etc., so we can use it to match all the whitespace characters that can come in between the words.
- `grep -l` will print only the file names that contain any of the desired patterns. If you want to print the matches as well, use `-H` instead of `-l`.
On the other hand, if the whitespace can come at any place rather than only between the words, this loses its good looks:
```
grep -rlz
't[[:space:]]*h[[:space:]]*i[[:space:]]*s[[:space:]]*i[[:space:]]*\
s[[:space:]]*s[[:space:]]*o[[:space:]]*m[[:space:]]*e[[:space:]]*\
t[[:space:]]*e[[:space:]]*x[[:space:]]*t' .
```
With the `-P` (PCRE) option you can replace the `[[:space:]]` with `\s` (this would look much nicer):
```
grep -rlzP 't\s*h\s*i\s*s\s*i\s*s\s*s\s*o\s*m\s*e\s*\
t\s*e\s*x\s*t' .
```
Using @steeldriver's suggestion to get `sed` to generate the pattern for us would be the best option:
```
grep -rlzP "$(sed 's/./\\s*&/2g' <<< "thisissometext")" .
```
|
How to run headless REMOTE chrome using robot framework
I'm trying to run Chrome headless with my Robot Framework test suites.
I managed to do it independently with Python using Selenium as follows:
```
options = webdriver.ChromeOptions()
options.add_argument('--headless')
my_driver = webdriver.Remote(command_executor=my_remote_address, desired_capabilities=options.to_capabilities())
```
The following code is what I did in Robot, but it didn't work:
```
${options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
${options.add_argument}= Set Variable add_argument=--headless
Create WebDriver Chrome chrome_options=${options}
Open Browser http://www.google.com chrome
```
|
To run headless you need to set the arguments and convert them to capabilities so that they can be used when using the `Remote Driver` option. This works for both the `Open Browser` as well as the `Create Webdriver` way of navigating to a URL.
```
*** Settings ***
Library Selenium2Library
Suite Teardown Close All Browsers
*** Test Cases ***
Headless Chrome - Create Webdriver
${chrome_options} = Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument headless
Call Method ${chrome_options} add_argument disable-gpu
${options}= Call Method ${chrome_options} to_capabilities
Create Webdriver Remote command_executor=http://localhost:4444/wd/hub desired_capabilities=${options}
Go to http://cnn.com
Maximize Browser Window
Capture Page Screenshot
Headless Chrome - Open Browser
${chrome_options} = Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument headless
Call Method ${chrome_options} add_argument disable-gpu
${options}= Call Method ${chrome_options} to_capabilities
Open Browser http://cnn.com browser=chrome remote_url=http://localhost:4444/wd/hub desired_capabilities=${options}
Maximize Browser Window
Capture Page Screenshot
```
|
postgresql left join multiple conditions
I am still a newbie to PostgreSQL - can anyone help with this query:
```
select distinct j.id, tt.title, m_scopus.provider_id
from journal j
join temporal_title "tt"
on (j.id = tt.journal_id and tt.list_index = 0)
left join journal_metrics "jm_scopus"
on (jm_scopus.journal_id = j.id)
left join metrics "m_scopus"
on (m_scopus.id = jm_scopus.metrics_id
and m_scopus.source_id = 235451508
and m_scopus.year_integer = 2017)
```
The problem is that I get rows with empty "provider\_id" where I don't want them:
```
journal_id title provider_id
263290036 German Journal of... scopusJournalsMetricsProvider
263290036 German Journal of... NULL
72418282 Europa azul NULL
207412571 IAC International... NULL
```
Rows 1, 3 and 4 are OK, but row 2 is not, because the information I need is the provider\_id if it is there and just NULL if it is not.
|
If I understand your data(model) correctly, the `journal_metrics` table is a *junction table*, which is not needed in the final result, so it can be kept out of the main query, avoiding the double `LEFT JOIN`:
---
```
SELECT j.id, tt.title, m.provider_id
FROM journal j
JOIN temporal_title tt
ON j.id = tt.journal_id AND tt.list_index = 0
LEFT JOIN metrics m
ON m.source_id = 235451508
   AND m.year_integer = 2017
AND EXISTS ( SELECT *
FROM journal_metrics jm -- The junction table
WHERE jm.journal_id = j.id
AND jm.metrics_id = m.id
)
;
```
|
pandas - pivot table to square matrix
I have this simple dataframe in a `data.csv` file:
```
I,C,v
a,b,1
b,a,2
e,a,1
e,c,0
b,d,1
a,e,1
b,f,0
```
I would like to pivot it, and then return a square table (as a matrix). So far I've read the dataframe and built a pivot table with:
```
df = pd.read_csv('data.csv')
d = pd.pivot_table(df,index='I',columns='C',values='v')
d.fillna(0,inplace=True)
```
correctly obtaining:
```
C a b c d e f
I
a 0 1 0 0 1 0
b 2 0 0 1 0 0
e 1 0 0 0 0 0
```
Now I would like to return a square table with the missing columns indices in the rows, so that the resulting table would be:
```
C a b c d e f
I
a 0 1 0 0 1 0
b 2 0 0 1 0 0
c 0 0 0 0 0 0
d 0 0 0 0 0 0
e 1 0 0 0 0 0
f 0 0 0 0 0 0
```
|
[`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html#pandas-dataframe-reindex) can add rows and columns, and fill missing values with 0:
```
index = d.index.union(d.columns)
d = d.reindex(index=index, columns=index, fill_value=0)
```
yields
```
a b c d e f
a 0 1 0 0 1 0
b 2 0 0 1 0 0
c 0 0 0 0 0 0
d 0 0 0 0 0 0
e 1 0 0 0 0 0
f 0 0 0 0 0 0
```
|
Importing multiple versions/branches of a module to benchmark in Julia
How can I use several different versions or branches of the same module in a single script in Julia?
e.g. If I wanted to benchmark each of the tagged releases.
(Someone asked a similar question recently and I answered the wrong one, but thought this might be useful anyway.)
**Edit: I have answered this myself but I am sure there may be a better way!**
|
You can just `git checkout` a different version of the module and then use BenchmarkTools.jl to benchmark. However, it may be better to use multiple scripts (or at least to ignore the first trial) (see this comment [Importing multiple versions of the same Module/Package for Benchmarking](https://stackoverflow.com/questions/39219804/importing-multiple-versions-of-the-same-module-package-for-benchmarking/39224392?noredirect=1#comment65782781_39219804) for more info).
e.g.
```
packagedir = Pkg.dir("DSP")
version2checkout = "v0.0.7"
cd(packagedir); run(`git checkout tags/$version2checkout`)
import DSP
# do all your benmarking stuff
# start again
```
Prevents you from having to copy the modules but still a little clunky I guess.
You could even do it in a loop for lots of versions by capturing the output of git tag
```
for i in readlines(`git tag`)
version2checkout = chomp(i)
# checkout version and benchmark
end
```
|
Is it possible to make a rounded triangle with just CSS?
I'm trying to make the following shape with just CSS:
[](https://i.stack.imgur.com/5jBvq.png)
Here is what I currently have:
```
.triangle {
width: 0;
height: 0;
border-style: solid;
border-width: 56px 56px 0 0;
border-color: #ff4369 transparent transparent transparent;
}
```
```
<div class="triangle">
</div>
```
I'm unable to use border radius to make the top-left corner rounded... Is there another way to do this or do I need to use an SVG?
|
To actually answer your question (and provide the first answer without `border-radius`): If you want a CSS only solution, you will have to use `border-radius`.
Nevertheless I would highly recommend to use SVG for creating shapes, as simple shapes like this are easy to create manually, it's responsive, it's [widely supported now](https://caniuse.com/#feat=svg) and (as @chharvey mentioned in the comments) semantically more appropriate.
```
<svg viewbox="0 0 50 50" height="56px">
<path d="M1 50 V10 Q1 1 10 1 H50z" fill="#ff4369" />
</svg>
```
You can find more information about the path properties [in the specs](https://www.w3.org/TR/SVG/paths.html#PathDataMovetoCommands).
|
How to save a canvas as PNG in Selenium?
I am trying to save a canvas element as a png image. This is my code right now but, unfortunately, it does not work:
```
import time
from selenium import webdriver
# from PIL import Image
driver = webdriver.Firefox()
driver.get('http://www.agar.io')
driver.maximize_window()
driver.find_element_by_id('freeCoins').click()
time.sleep(2)
# The part below does not seem to work properly.
driver.execute_script('function download_image(){var canvas = document.getElementByTagName("canvas");canvas.toBlob(function(blob) {saveAs(blob, "../images/output.png");}, "image/png");};')
```
I would like to see the solution in Python. I would also like to see a solution that does not require cropping at the end of the screenshot.
|
You could call `HTMLCanvasElement.toDataURL()` to get the canvas as PNG base64 string. Here is a working example:
```
import base64
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("http://curran.github.io/HTML5Examples/canvas/smileyFace.html")
canvas = driver.find_element_by_css_selector("#canvas")
# get the canvas as a PNG base64 string
canvas_base64 = driver.execute_script("return arguments[0].toDataURL('image/png').substring(21);", canvas)
# decode
canvas_png = base64.b64decode(canvas_base64)
# save to a file
with open(r"canvas.png", 'wb') as f:
f.write(canvas_png)
```
|
Multiple or Single Try Catch
I'm working on cleaning up some of my code and I came to a point where I wasn't sure which route would be better.
Currently I have a single try catch block over the majority of my method and it handles a few separate exceptions at the end, but I figured having more try catch blocks would be better just for maintenance. However, while breaking down the code I came to a point where I was writing multiple blocks for the same type of exception. I can see the up side to writing a block for each part since I can then give more specifics to why it failed.
My question is this... is there a downside to doing this? Could there be performance issues or some other hidden monster that I'm not seeing?
Also, what is the preferred way of handling multiple exceptions in a method, is there an industry standard?
Just to better illustrate my point here's some pseudo code
```
//multiple try catch for same exception
try {
//some code here
} catch (MyException e) {
//specific error message here
}
try {
//some different code here
} catch (MyException e) {
//more specific error message indicating a different issue
}
```
|
I always try to reduce levels of nesting for readability and maintainability. If you have n try/catch blocks, each handling the same type of exception, why not refactor the code that can throw the exception into methods...it would look something like:
```
try {
firstBatchOfTricky();
secondBatchOfTricky();
....
nthBatchOfTricky();
} catch (ItWentBoomException e) {
// recover from boom
} catch (ItWentBangException e) {
// recover from bang
}
```
This is much more readable than having multiple try/catches. Note that your methods should describe what they do in the spirit of self documenting code.
Since you have your own Exception type, you can add the data you need to the exception to do different things in the catch block. When you say 'more specific message', you can just throw the exception with the detailed message; you shouldn't need multiple catch blocks. If you want to do drastically different things based on the state of the exception, just create more exception types and catch blocks but only one try block, as my pseudocode shows...
Finally, if you can't recover from the exception(s), you should not clutter the code with catch blocks. Throw a runtime exception and let it bubble. (Good advice from @tony in the comments)
|
ClaimedIdentifier for Google Accounts when using Dotnetopenauth
I'm currently switching from Janrain Engage (rpxnow) to DotNetOpenAuth. In my database I have the claimed identifiers from my users stored in the form
```
https://www.google.com/accounts/o8/id?id=AItOawnVaDz_Os6ysv4-tB0zlhFan1ltyHWa10k
```
When using Engage I was able to switch between dev, test and prod environments and the claimed IDs would always remain the same.
So I was assuming that these IDs would also be the same when using DotNetOpenAuth. To my surprise they are similar but not identical. What makes things even more confusing is that it seems that when I change the path of my web app, e.g. from <http://localhost/> to <http://localhost/mvc>, the claimed identifier also changes.
Could somebody shed some light on this please and help me make my IDs movable from one environment to another?
BTW: Moving other account types such as Yahoo has worked without a problem.
|
Google leverages a feature of OpenID called "directed identity", which means that the Claimed Identifier that it sends the RP is going to be different for the same Google Account that is logging in for each unique RP. Yahoo and other providers do not leverage this feature so that's why you only see it for Google accounts.
The key that Google uses to distinguish each RP is the IAuthenticationRequest.Realm property, which by default DotNetOpenAuth sets to be the root URL of your web site. This is why different hosting URLs such as <http://localhost> or <http://localhost/mvc> get different Claimed Identifiers.
Janrain Engage has various service levels, some of which (last I checked) use their own domain name, with your RP's name included as a 3rd-level domain name, as the Realm, which *can* make it difficult or impossible to stop using their service because the claimed identifiers would all change, causing all your Google customers to lose access to their accounts on your site. However, last I heard, Janrain stores some special account details to make migrating off of their service possible. I suggest you contact their support people to ask them what the process is.
|
What is the datastore used by Google for Google Instant?
My understanding is that Google is not using MapReduce for Google Instant, but I can't find anywhere that states what they're using as the datastore for Google Instant. What is the datastore used by Google for Google Instant?
|
This is probably the closest you will get to an [answer](https://stackoverflow.com/questions/3670831/how-does-google-instant-work) without talking to someone who actually worked on the project, or until more information is released to the public.
The link above contains this answer. It would have been hard to get the point across without images. I would appreciate it if you gave the original poster upvotes as well if you found this answer useful.
**UPDATE**: Google have just published a blog article called Google Instant, behind the scenes. It's an interesting read, and obviously related to this question. You can read how they tackled the extra load (5-7X according to the article) on the server-side, for example. The answer below examines what happens on the client-side:
Examining with Firebug, Google is doing an Ajax GET request on every keypress:

I guess it's working the same way as the auto completion. However this time, it also returns the search results of the partially complete search phrase in JSON format.
Examining one of the JSON responses while typing "Stack Overflow":

We can see that the JSON response contains the content to construct the search results as we type.
The formatted JSON responses look something like this:
```
{
e: "j9iHTLXlLNmXOJLQ3cMO",
c: 1,
u: "http://www.google.com/search?hl\x3den\x26expIds\x3d17259,24472,24923,25260,25901,25907,26095,26446,26530\x26sugexp\x3dldymls\x26xhr\x3dt\x26q\x3dStack%20Overflow\x26cp\x3d6\x26pf\x3dp\x26sclient\x3dpsy\x26aq\x3df\x26aqi\x3dg4g-o1\x26aql\x3d\x26oq\x3dStack+\x26gs_rfai\x3d\x26pbx\x3d1\x26fp\x3df97fdf10596ae095\x26tch\x3d1\x26ech\x3d1\x26psi\x3dj9iHTO3xBo2CONvDzaEO12839712156911",
d: "\x3clink rel\x3dprefetch href\x3d\x22https://stackoverflow.com/\x22\x3e\x3cscript\x3eje.pa(_loc, \x27rso\x27, \x27\\x3c!--m--\\x3e\\x3clink rel\\x3dprefetch href\\x3d\\x22https://stackoverflow.com/\\x22\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22https://stackoverflow.com/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNERidL9Hb6OvGW93_Y6MRj3aTdMVA\\x27,\\x27\\x27,\\x270CBYQFjAA\\x27)\\x22\\x3e\\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3eA language-independent collaboratively edited question and answer site for programmers.\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3e\\x3cb\\x3estackoverflow\\x3c/b\\x3e.com/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:U1GC2GYOToIJ:stackoverflow.com/+Stack+Overflow\\x26amp;cd\\x3d1\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNFfKMag7Tq8CMbbfu8Gcj_GjukTbA\\x27,\\x27\\x27,\\x270CBgQIDAA\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:stackoverflow.com/+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CBkQHzAA\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3cbr\\x3e\\x3ctable class\\x3dslk style\\x3d\\x22border-collapse:collapse;margin-top:4px\\x22\\x3e\\x3ctr\\x3e\\x3ctd style\\x3d\\x22padding-left:14px;vertical-align:top\\x22\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/questions\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNHmP78gEctJKvBrydP2c52F_FEjvA\\x27,\\x27\\x27,\\x270CBoQqwMoADAA\\x27)\\x22\\x3eQuestions\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/questions/ask\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNGZF-qwWVTZOWPlr4vgSA7qB64LLQ\\x27,\\x27\\x27,\\x270CBsQqwMoATAA\\x27)\\x22\\x3eAsk Question\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/users/135152/omg-ponies\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNE9zo6Qi_AM1bjmPGeMGfbnPi3niA\\x27,\\x27\\x27,\\x270CBwQqwMoAjAA\\x27)\\x22\\x3eOMG Ponies\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22http://careers.stackoverflow.com/\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNEaqlBrfDcc1gdPZ6dgthff0s5WmA\\x27,\\x27\\x27,\\x270CB0QqwMoAzAA\\x27)\\x22\\x3eCareers\\x3c/a\\x3e\\x3c/div\\x3e\\x3ctd style\\x3d\\x22padding-left:14px;vertical-align:top\\x22\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/about\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNEqgPttrXj3r4o3TZHX5WaWvFe1HQ\\x27,\\x27\\x27,\\x270CB4QqwMoBDAA\\x27)\\x22\\x3eAbout\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/faq\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNF3X3eRc0RsxYynXZhhbyYkuKWZ5g\\x27,\\x27\\x27,\\x270CB8QqwMoBTAA\\x27)\\x22\\x3eThe 
FAQ\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22http://blog.stackoverflow.com/\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNG7KphjK6RuC5cj-6U5jeuvipt5dg\\x27,\\x27\\x27,\\x270CCAQqwMoBjAA\\x27)\\x22\\x3eBlog\\x3c/a\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dsld\\x3e\\x3ca class\\x3dsla href\\x3d\\x22https://stackoverflow.com/users\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x271\\x27,\\x27AFQjCNFfN_wcGm4HE5XpDxvcH4bIrkv2dw\\x27,\\x27\\x27,\\x270CCEQqwMoBzAA\\x27)\\x22\\x3eUsers\\x3c/a\\x3e\\x3c/div\\x3e\\x3ctr\\x3e\\x3ctd colspan\\x3d2 style\\x3d\\x22padding-left:14px;vertical-align:top\\x22\\x3e\\x3cdiv style\\x3d\\x22padding-top:6px\\x22\\x3e\\x3ca class\\x3dfl href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3d+site:stackoverflow.com+Stack+Overflow\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CCIQrAM\\x22\\x3eMore results from stackoverflow.com\\x26nbsp;\\x26raquo;\\x3c/a\\x3e\\x3c/div\\x3e\\x3c/table\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22https://stackoverflow.com/questions\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x272\\x27,\\x27AFQjCNHmP78gEctJKvBrydP2c52F_FEjvA\\x27,\\x27\\x27,\\x270CCUQFjAB\\x27)\\x22\\x3eHottest Questions - \\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3eHello \\x3cem\\x3eStack Overflow\\x3c/em\\x3e! I\\x26#39;m working with someone else\\x26#39;s PHP function that works fine as long as I pass it at least three arguments. If I pass it two argument, \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3e\\x3cb\\x3estackoverflow\\x3c/b\\x3e.com/questions\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:6S_0sErDKfQJ:stackoverflow.com/questions+Stack+Overflow\\x26amp;cd\\x3d2\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x272\\x27,\\x27AFQjCNH7WHzefYlnS05ln4j6rzfE3byDKg\\x27,\\x27\\x27,\\x270CCcQIDAB\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:stackoverflow.com/questions+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CCgQHzAB\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://en.wikipedia.org/wiki/Stack_overflow\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x273\\x27,\\x27AFQjCNEAxaeWmWtD7cBcmZ5QBMsTRNbnCw\\x27,\\x27\\x27,\\x270CCkQFjAC\\x27)\\x22\\x3e\\x3cem\\x3eStack overflow\\x3c/em\\x3e - Wikipedia, the free encyclopedia\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3eIn software, a \\x3cem\\x3estack overflow\\x3c/em\\x3e occurs when too much memory is used on the call stack. 
The call stack contains a limited amount of memory, often determined at \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3een.wikipedia.org/wiki/\\x3cb\\x3eStack\\x3c/b\\x3e_\\x3cb\\x3eoverflow\\x3c/b\\x3e\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:mWu8b0BQAmwJ:en.wikipedia.org/wiki/Stack_overflow+Stack+Overflow\\x26amp;cd\\x3d3\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x273\\x27,\\x27AFQjCNFG_5ndK-KmWJy6s3pOsi8lsxqEZg\\x27,\\x27\\x27,\\x270CCsQIDAC\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:en.wikipedia.org/wiki/Stack_overflow+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CCwQHzAC\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://blog.stackoverflow.com/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x274\\x27,\\x27AFQjCNG7KphjK6RuC5cj-6U5jeuvipt5dg\\x27,\\x27\\x27,\\x270CC0QFjAD\\x27)\\x22\\x3eBlog – \\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3e6 Sep 2010 \\x3cb\\x3e...\\x3c/b\\x3e The latest version of the \\x3cem\\x3eStack Overflow\\x3c/em\\x3e Trilogy Creative Commons Data Dump is now available. This reflects all public data in … \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3eblog.\\x3cb\\x3estackoverflow\\x3c/b\\x3e.com/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:iqtvg9Ge1c0J:blog.stackoverflow.com/+Stack+Overflow\\x26amp;cd\\x3d4\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x274\\x27,\\x27AFQjCNFX2P2-RTCs_GaR6NgSw30p007UEA\\x27,\\x27\\x27,\\x270CC8QIDAD\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:blog.stackoverflow.com/+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CDAQHzAD\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\x27,_ss);\x3c/script\x3e"
}/*""*/{
e: "j9iHTLXlLNmXOJLQ3cMO",
c: 1,
u: "http://www.google.com/search?hl\x3den\x26expIds\x3d17259,24472,24923,25260,25901,25907,26095,26446,26530\x26sugexp\x3dldymls\x26xhr\x3dt\x26q\x3dStack%20Overflow\x26cp\x3d6\x26pf\x3dp\x26sclient\x3dpsy\x26aq\x3df\x26aqi\x3dg4g-o1\x26aql\x3d\x26oq\x3dStack+\x26gs_rfai\x3d\x26pbx\x3d1\x26fp\x3df97fdf10596ae095\x26tch\x3d1\x26ech\x3d1\x26psi\x3dj9iHTO3xBo2CONvDzaEO12839712156911",
d: "\x3cscript\x3eje.pa(_loc, \x27rso\x27, \x27\\x3c!--m--\\x3e\\x3cli class\\x3dg style\\x3d\\x22margin-left:16px\\x22\\x3e\\x3ch3 class\\x3d\\x22r hcw\\x22\\x3e\\x3ca href\\x3d\\x22http://blog.stackoverflow.com/category/podcasts/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x275\\x27,\\x27AFQjCNGnAJSxUa4GLcg-E7PNvIFmPC53gQ\\x27,\\x27\\x27,\\x270CDEQFjAE\\x27)\\x22\\x3epodcasts - Blog – \\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s hc\\x22\\x3eJoel and Jeff sit down with our new community coordinator, Robert Cartaino, to record a “bonus” podcast discussing the future of \\x3cem\\x3eStack Overflow\\x3c/em\\x3e and Stack \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3eblog.\\x3cb\\x3estackoverflow\\x3c/b\\x3e.com/category/podcasts/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:JT0sWmmtiAEJ:blog.stackoverflow.com/category/podcasts/+Stack+Overflow\\x26amp;cd\\x3d5\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x275\\x27,\\x27AFQjCNErCiLBch55HA8i5BAdChcmQYH8nw\\x27,\\x27\\x27,\\x270CDMQIDAE\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:blog.stackoverflow.com/category/podcasts/+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CDQQHzAE\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://itc.conversationsnetwork.org/series/stackoverflow.html\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x276\\x27,\\x27AFQjCNHG9l1PMbilYkhohNFuj3g6ce1LuA\\x27,\\x27\\x27,\\x270CDUQFjAF\\x27)\\x22\\x3e\\x3cem\\x3eStackOverflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3eJoel and Jeff sit down with our new community coordinator, Robert Cartaino, to discuss the future of \\x3cem\\x3eStack Overflow\\x3c/em\\x3e and Stack Exchange 2.0. 
\\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3eitc.conversationsnetwork.org/series/\\x3cb\\x3estackoverflow\\x3c/b\\x3e.html\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:8MkFpx7D4wYJ:itc.conversationsnetwork.org/series/stackoverflow.html+Stack+Overflow\\x26amp;cd\\x3d6\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x276\\x27,\\x27AFQjCNFP62Bg_o2kaz3jzXxzsrTs_7RdNA\\x27,\\x27\\x27,\\x270CDcQIDAF\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:itc.conversationsnetwork.org/series/stackoverflow.html+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CDgQHzAF\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://support.microsoft.com/kb/145799\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x277\\x27,\\x27AFQjCNHzyj5rHEX7IiyFWnP0ziE3B32rGg\\x27,\\x27\\x27,\\x270CDkQFjAG\\x27)\\x22\\x3eHow to Troubleshoot Windows Internal \\x3cem\\x3eStack Overflow\\x3c/em\\x3e Error Messages\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3eThis article lists steps to help you troubleshoot problems with \\x3cem\\x3estack overflow\\x3c/em\\x3e errors in Windows. Stacks are reserved memory that programs use to process \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3esupport.microsoft.com/kb/145799\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:ECO9ORCsraAJ:support.microsoft.com/kb/145799+Stack+Overflow\\x26amp;cd\\x3d7\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x277\\x27,\\x27AFQjCNHYsox9EW1Ye9Nn2G6WQzEpJDOzcw\\x27,\\x27\\x27,\\x270CDsQIDAG\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:support.microsoft.com/kb/145799+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CDwQHzAG\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://stackoverflow.carsonified.com/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x278\\x27,\\x27AFQjCNHcEPoch5soLj2CpLpRfnW-Z2-aLw\\x27,\\x27\\x27,\\x270CD0QFjAH\\x27)\\x22\\x3e\\x3cem\\x3eStackOverflow\\x3c/em\\x3e DevDays » Home\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3e\\x3cem\\x3eStackOverflow\\x3c/em\\x3e Dev Days is run by Carsonified, so please give us a shout if you need anything or are interested in sponsoring the event. 
\\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3e\\x3cb\\x3estackoverflow\\x3c/b\\x3e.carsonified.com/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:uhl8NPgikN0J:stackoverflow.carsonified.com/+Stack+Overflow\\x26amp;cd\\x3d8\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x278\\x27,\\x27AFQjCNFf9Vl5L3FaQGPapUpIFw5gqVUCnA\\x27,\\x27\\x27,\\x270CD8QIDAH\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:stackoverflow.carsonified.com/+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEAQHzAH\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://stackoverflow.org/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x279\\x27,\\x27AFQjCNF-YrPvTLTJlFFDJrJE0cjGdlOpbg\\x27,\\x27\\x27,\\x270CEEQFjAI\\x27)\\x22\\x3e\\x3cem\\x3eStackOverflow\\x3c/em\\x3e.org\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3e\\x3cem\\x3eStackOverflow\\x3c/em\\x3e.org began as the merging of two ideas that have been kicking around in my head for years. First, I wanted a dorky programming-related domain \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3e\\x3cb\\x3estackoverflow\\x3c/b\\x3e.org/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:u0dIlJW-XMYJ:stackoverflow.org/+Stack+Overflow\\x26amp;cd\\x3d9\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x279\\x27,\\x27AFQjCNHcJcV2QVybr6voztyPwHCrNOOD1w\\x27,\\x27\\x27,\\x270CEMQIDAI\\x27)\\x22\\x3eCached\\x3c/a\\x3e - \\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3drelated:stackoverflow.org/+Stack+Overflow\\x26amp;tbo\\x3d1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEQQHzAI\\x22\\x3eSimilar\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\\x3c!--m--\\x3e\\x3cli class\\x3dg\\x3e\\x3ch3 class\\x3d\\x22r\\x22\\x3e\\x3ca href\\x3d\\x22http://embeddedgurus.com/stack-overflow/\\x22 class\\x3dl onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x2710\\x27,\\x27AFQjCNFYQ5E8irNUCpRsbOHHyfc0oqGpWw\\x27,\\x27\\x27,\\x270CEUQFjAJ\\x27)\\x22\\x3e\\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/a\\x3e\\x3c/h3\\x3e\\x3cdiv class\\x3d\\x22s\\x22\\x3e\\x3cem\\x3eStack Overflow\\x3c/em\\x3e. Nigel Jones. Nigel Jones has over 20 years of experience designing electronic circuits and firmware. (full bio). Pages. Contact Nigel. 
Links \\x3cb\\x3e...\\x3c/b\\x3e\\x3cbr\\x3e\\x3cspan class\\x3df\\x3e\\x3ccite\\x3eembeddedgurus.com/\\x3cb\\x3estack\\x3c/b\\x3e-\\x3cb\\x3eoverflow\\x3c/b\\x3e/\\x3c/cite\\x3e - \\x3cspan class\\x3dgl\\x3e\\x3ca href\\x3d\\x22http://webcache.googleusercontent.com/search?q\\x3dcache:Rl_rUfEG_fIJ:embeddedgurus.com/stack-overflow/+Stack+Overflow\\x26amp;cd\\x3d10\\x26amp;hl\\x3den\\x26amp;ct\\x3dclnk\\x22 onmousedown\\x3d\\x22return rwt(this,\\x27\\x27,\\x27\\x27,\\x27\\x27,\\x2710\\x27,\\x27AFQjCNFqMjsc1pBI9JexjMSPY7wm5QLI8w\\x27,\\x27\\x27,\\x270CEcQIDAJ\\x27)\\x22\\x3eCached\\x3c/a\\x3e\\x3c/span\\x3e\\x3c/span\\x3e\\x3c/div\\x3e\\x3c!--n--\\x3e\x27,_ss);\x3c/script\x3e"
}/*""*/{
e: "j9iHTLXlLNmXOJLQ3cMO",
c: 1,
u: "http://www.google.com/search?hl\x3den\x26expIds\x3d17259,24472,24923,25260,25901,25907,26095,26446,26530\x26sugexp\x3dldymls\x26xhr\x3dt\x26q\x3dStack%20Overflow\x26cp\x3d6\x26pf\x3dp\x26sclient\x3dpsy\x26aq\x3df\x26aqi\x3dg4g-o1\x26aql\x3d\x26oq\x3dStack+\x26gs_rfai\x3d\x26pbx\x3d1\x26fp\x3df97fdf10596ae095\x26tch\x3d1\x26ech\x3d1\x26psi\x3dj9iHTO3xBo2CONvDzaEO12839712156911",
d: "\x3cscript\x3eje.p(_loc,\x27botstuff\x27,\x27 \\x3cdiv id\\x3dbrs style\\x3d\\x22clear:both;margin-bottom:17px;overflow:hidden\\x22\\x3e\\x3cdiv class\\x3d\\x22med\\x22 style\\x3d\\x22text-align:left\\x22\\x3eSearches related to \\x3cem\\x3eStack Overflow\\x3c/em\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dbrs_col\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+error\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEkQ1QIoAA\\x22\\x3estack overflow \\x3cb\\x3eerror\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+internet+explorer\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEoQ1QIoAQ\\x22\\x3estack overflow \\x3cb\\x3einternet explorer\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dfix+stack+overflow\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEsQ1QIoAg\\x22\\x3e\\x3cb\\x3efix\\x3c/b\\x3e stack overflow\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+xp\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CEwQ1QIoAw\\x22\\x3estack overflow \\x3cb\\x3exp\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3c/div\\x3e\\x3cdiv class\\x3dbrs_col\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+javascript\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CE0Q1QIoBA\\x22\\x3estack overflow \\x3cb\\x3ejavascript\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+java\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CE4Q1QIoBQ\\x22\\x3estack overflow \\x3cb\\x3ejava\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+c%2B%2B\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CE8Q1QIoBg\\x22\\x3estack overflow \\x3cb\\x3ec++\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3cp\\x3e\\x3ca href\\x3d\\x22/search?hl\\x3den\\x26amp;q\\x3dstack+overflow+windows+xp\\x26amp;revid\\x3d-1\\x26amp;sa\\x3dX\\x26amp;ei\\x3dj9iHTLXlLNmXOJLQ3cMO\\x26amp;sqi\\x3d2\\x26amp;ved\\x3d0CFAQ1QIoBw\\x22\\x3estack overflow \\x3cb\\x3ewindows xp\\x3c/b\\x3e\\x3c/a\\x3e\\x3c/p\\x3e\\x3c/div\\x3e\\x3c/div\\x3e \x27,_ss);/* */\x3c/script\x3e"
}/*""*/
```
|
How does Android ObjectAnimator recognize the attributes setter methods?
How is ObjectAnimator able to call the appropriate method `setX` if the attribute `x` is specified as a string? What I mean is, what technique is used to recognize that I want to animate the attribute `rotation` of my view and call the appropriate method `setRotation` of that view?
I already understand how ObjectAnimator works and have managed to use it; it is quite simple. I am just curious about the operating principles.
|
There are a number of ways to animate the rotation of a view:
**1.** `ObjectAnimator.ofFloat(view, "rotation", 0f, 90f).start();`
This uses [reflection](https://stackoverflow.com/questions/37628/what-is-reflection-and-why-is-it-useful) to call the `setRotation(float f)` and `float getRotation()` methods of the view.
You can use this method to animate any property of a class as long as that class has implemented the appropriate getter and setter methods for that property.
But reflection is a slow operation, so there is a second method that doesn't use reflection.
**2.** `ObjectAnimator.ofFloat(view, View.ROTATION, 0f, 90f).start();`
This uses the rotation Property of the view. [Property](http://developer.android.com/reference/android/util/Property.html) is an abstract class that defines the `setValue(T)` and the `T get()` methods which in turn call the actual getter and setter of the supplied object. For example, the rotation property on the `View` class uses the following code:
```
public static final Property<View, Float> ROTATION = new FloatProperty<View>("rotation") {
@Override
public void setValue(View object, float value) {
object.setRotation(value);
}
@Override
public Float get(View object) {
return object.getRotation();
}
};
```
If you want to animate a custom property of an object, you can implement your own Property like the one above.
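As an illustration, a hedged sketch of such a custom Property; `BadgeView` and its `getScale()`/`setScale()` methods are hypothetical names, not part of the Android SDK:

```
public static final Property<BadgeView, Float> SCALE =
        new Property<BadgeView, Float>(Float.class, "scale") {
    @Override
    public void set(BadgeView object, Float value) {
        object.setScale(value);   // forwards to the real setter, no reflection involved
    }

    @Override
    public Float get(BadgeView object) {
        return object.getScale();
    }
};

// Usage: animate the custom property just like View.ROTATION
ObjectAnimator.ofFloat(badgeView, SCALE, 0f, 1f).start();
```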
Then there is a third method, which also doesn't use reflection.
**3.** `view.animate().rotation(90f);`
This one has a fluent interface so it's easier to use. You can also chain multiple animations to run together, for example: `view.animate().rotation(90f).translationX(10f);`
The downside of this method is that you can only animate the standard properties of a View and not custom properties or properties on your own classes.
|
Persistent DB Connection in Django/WSGI application
I want to keep a persistent connection open to a third party legacy database in a django powered web application.
[](https://i.stack.imgur.com/dgrtS.png)
I want to keep a connection between the web app and the legacy database open, since creating a new connection is very slow for this particular DB.
It is not like usual connection pooling since I need to store the connection per web user. User "Foo" needs its own connection between web server and legacy DB.
Up to now I use Apache and WSGI, but I could change if another solution fits better.
Up to now I use Django. Here I could change, too, but the pain would be bigger since there is already a lot of code which would need to be integrated again.
Up to now I use Python. I guess Node.js would fit better here, but the pain of changing is too high.
Of course some kind of timeout would be needed: if there is no HTTP request from user "Foo" for N minutes, then the persistent connection would need to be shut down.
How could this be solved?
**Update**
I call it `DB` but it is not a DB which is configured via settings.DATABASES. It is a strange, legacy not wide spread DB-like system I need to integrate.
If I there are 50 people online using the web app in this moment, then I need to have 50 persistent connections. One for each user.
**Code to connect to DB**
I could execute this line in every request:
```
strangedb_connection = strangedb.connect(request.user.username)
```
But this operation is slow. Using the connection is fast.
Of course the `strangedb_connection` can't be serialized and can't be stored in a session :-)
|
# worker daemon managing the connection
Your picture currently looks like:
```
user -----------> webserver <--------[1]--> 3rd party DB
connection [1] is expensive.
```
You could solve this with:
```
user ----> webserver <---> task queue[1] <---> worker daemon <--[2]-> 3rd party DB
[1] task queue can be redis, celery or rabbitmq.
[2] worker daemon keeps connection open.
```
A worker daemon would open the connection to the 3rd party database and keep it open. This would mean that each request would not have to pay the connection cost. The task queue would handle the inter-process communication, dispatching work to the daemon, which runs the queries against the 3rd party DB. The webserver should be as light as possible in terms of processing and let the workers do the expensive tasks.
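As a rough illustration, here is a minimal sketch of such a worker daemon using Redis as the task queue. `strangedb.connect()` is the connection call from the question; the queue names, the message format and the `conn.query()` method are assumptions:

```
import json
import time

import redis          # task queue client; Celery or RabbitMQ would work just as well
import strangedb      # the legacy DB client from the question

IDLE_TIMEOUT = 15 * 60          # drop a user's connection after 15 minutes without requests
connections = {}                # username -> [connection, last_used_timestamp]
queue = redis.Redis()

def get_connection(username):
    entry = connections.get(username)
    if entry is None:
        entry = [strangedb.connect(username), time.time()]   # the expensive part, done once per user
        connections[username] = entry
    entry[1] = time.time()
    return entry[0]

def expire_idle_connections():
    now = time.time()
    for user, (conn, last_used) in list(connections.items()):
        if now - last_used > IDLE_TIMEOUT:
            del connections[user]

while True:
    task = queue.blpop("strangedb-tasks", timeout=30)     # web workers push JSON messages here
    if task is None:
        expire_idle_connections()
        continue
    msg = json.loads(task[1])
    conn = get_connection(msg["username"])
    result = conn.query(msg["query"])                      # 'query' is an assumed method on the connection
    queue.rpush("strangedb-results:" + msg["id"], json.dumps(result))
    expire_idle_connections()
```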
# preloading with apache + modwsgi
You can actually `preload` and have the expensive connection done before the first request. This is done with the [`WSGIImportScript` configuration directive](https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIImportScript). I don't remember off the top of my head whether a pre-load + forking configuration means each request will already have the connection opened and shared; but since you have most of the code, this should be an easy experiment.
# preloading with uwsgi
`uwsgi` supports preloading too. This is done with [the `import` directive](http://uwsgi-docs.readthedocs.org/en/latest/PythonDecorators.html).
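For instance, a minimal uwsgi configuration using it might look like this; the module names and worker counts are placeholders:

```
[uwsgi]
module = mysite.wsgi:application
master = true
processes = 4
; import this module at startup so the expensive setup happens before the first request
import = myapp.preload
```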
|
Unable to update SharePoint 2010's managed metadata field with Lists.UpdateListItems web service call
I'm trying to update a SharePoint managed metadata (MMD) field using Lists.UpdateListItems web service but it's not working.
Here is my SOAP request
```
<?xml version="1.0" ?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Body>
<UpdateListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">
<listName>My Test List</listName>
<updates>
<Batch ListVersion="0" PreCalc="TRUE" OnError="Continue">
<Method Cmd="Update" ID="1">
<Field Name="ID">3</Field>
<Field Name="Document_x0020_Title">foo</Field>
<Field Name="Fiscal_x0020_Year1">13;#FY 2006|7e8205da-57a1-45a3-8147-469b795ad6e8</Field>
</Method>
</Batch>
</updates>
</UpdateListItems>
</S:Body></S:Envelope>
```
This request will successfully update the "Document Title" (a text field), but the MMD field, "Fiscal Year", was unchanged and there is no error returned from the web service.
Note that the value of the MMD is in the format "WssId;#TermValue|TermGuid" and the term has already been created for the site.
Please help.
|
Figured it out.
I must use a different field name. The label for the field is "Fiscal Year", but the field name that worked is actually "d3c0ddc947ab4b8c90b6a0fe2d4caf09" instead of "Fiscal\_x0020\_Year1". So my SOAP request would look like
```
<Method Cmd="Update" ID="1">
<Field Name="ID">3</Field>
<Field Name="Document_x0020_Title">foo</Field>
<Field Name="d3c0ddc947ab4b8c90b6a0fe2d4caf09">13;#FY 2006|7e8205da-57a1-45a3-8147-469b795ad6e8</Field>
</Method>
```
To get this field name I use the Lists.GetListContentType web service method to return field information and look for the field type "Note". Here is an example of what SharePoint returned:
```
<Field Type="Note" DisplayName="Fiscal Year_0"
StaticName="d3c0ddc947ab4b8c90b6a0fe2d4caf09" Name="d3c0ddc947ab4b8c90b6a0fe2d4caf09"
ID="{1afa458b-d50a-4139-ad8d-f1172774de34}" ShowInViewForms="FALSE" Required="FALSE"
CanToggleHidden="TRUE" SourceID="{77871b4e-f3ba-42dc-8940-ab33fb431099}" Hidden="TRUE"
Version="1" Customization="" ColName="ntext8" RowOrdinal="0"/>
```
I also find it useful to use the Lists.GetListContentTypes method to get the content type ID used in the Lists.GetListContentType method call.
**Update**
I found that you don't have to use the format of "WssId;#TermValue|TermGuid". You can simply use "TermValue|TermGuid". So in the example above the value would be "FY 2006|7e8205da-57a1-45a3-8147-469b795ad6e8".
This is very useful because you can reuse the same value across different sites, unlike the former format, which can only be used in one site. For multiple values you only need to delimit them with ";" instead of ";#". For example "FY 2006|7e8205da-57a1-45a3-8147-469b795ad6e8;FY 2007|823205da-57a1-45a3-8147-469b795ade13".
|
Align the capline (top of text) with different size fonts
I have a set of numbered subsections on a web page, each of which has an associated title in line with the section's ordinal (number). It's essentially an ordered list where each item contains arbitrary content.
The design calls for (a) the ordinal of each section to be approximately twice as tall as the title text, (b) the capline (top) of the title (rendered FULLCAPS) to be aligned with the capline of the ordinal.
The title text is dynamic, and may take up anywhere from 1-4 "lines" depending on length.
I've tried using `vertical-align:top` within elements formatted using `table-cell`, and it is very close to the desired look:
```
.title_line {
display: table;
}
.title_line .ordinal {
display: table-cell;
vertical-align: top;
font-size: 4em;
}
.title_line .title {
display: table-cell;
vertical-align: top;
font-size: 2em;
text-transform: uppercase;
}
```
```
<body>
<div class="title_line">
<div class="ordinal">3</div>
<div class="title">The capline (top) of this text should be aligned with the top of the ordinal.</div>
</div>
</body>
```
[](https://i.stack.imgur.com/s5Wn6.png)
But there is a visible difference in the vertical gap above the ordinal as compared to the title text.
Is there any way to specify capline alignment for text of different size?
|
This can be fixed by setting a `line-height` of `1em`.
```
.title_line {
display: table;
}
.title_line .ordinal {
vertical-align: top;
display: table-cell;
line-height: 1em;
font-size: 4em;
}
.title_line .title {
display: table-cell;
vertical-align: top;
font-size: 2em;
text-transform: uppercase;
}
```
```
<body>
<div class="title_line">
<div class="ordinal">3</div>
<div class="title">The capline (top) of this text should be aligned with the top of the ordinal.</div>
</div>
</body>
```
This effectively removes the space between the actual font and how it is rendered. If you add an outline or background to `.ordinal`, you can see how this works:
```
span {
vertical-align: top;
font-size: 4em;
outline: 1px solid red;
}
.fix {
line-height: 1em;
}
```
```
<span>3</span>
<span class="fix">3</span>
```
|
ios Swift fatal error: use of unimplemented initializer 'init()'
I've been trying very hard, have looked up every similar question pertaining to this issue on StackOverflow, and tried them to no avail.
```
class TimeLineTableViewController:
UITableViewController,
UIImagePickerControllerDelegate,
UINavigationControllerDelegate {
var timelineData = [PFObject]()
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
override func viewDidLoad() {
super.viewDidLoad()
self.loadData()
}
@IBAction func loadData(){
timelineData.removeAll(keepCapacity: false)
var findTimelineData:PFQuery = PFQuery(className:"timelineMessages")
findTimelineData.findObjectsInBackgroundWithBlock { (objects:[AnyObject]! , error:NSError!) -> Void in
if error == nil {
self.timelineData = objects.reverse() as [PFObject]
//let array:NSArray = self.timelineData.reverseObjectEnumerator().allObjects
// self.timelineData = array as NSMutableArray
self.tableView.reloadData()
}
}
}
override func viewDidAppear(animated: Bool) {
var footerView:UIView = UIView(frame: CGRectMake(0, 0, self.view.frame.size.width, 50))
self.tableView.tableFooterView = footerView
var logoutButton:UIButton = UIButton.buttonWithType(UIButtonType.System) as UIButton
logoutButton.frame = CGRectMake(20, 10, 50, 20)
logoutButton.setTitle("Logout", forState: UIControlState.Normal)
logoutButton.addTarget(self, action:"logout:", forControlEvents: UIControlEvents.TouchUpInside)
footerView.addSubview(logoutButton)
}
```
To clarify, TimeLineTableViewController has one class that inherits from it, MessageTableCell. It's also part of a project that I've integrated with Objective-C code, so it's a combination of both Swift and ObjC. I've run both projects (the Swift one and the ObjC one) independently and they work fine; it's only when I try to run them together that they mess up. Any suggestions? I'm at an utter loss here.
|
“Unlike subclasses in Objective-C, Swift subclasses do not inherit their superclass initializers by default.”
**Automatic Initializer Inheritance**
- **Rule 1:**
If your subclass doesn’t define any designated initializers, it automatically inherits all of its superclass designated initializers.
- **Rule 2:**
If your subclass provides an implementation of all of its superclass designated initializers—either by inheriting them as per rule 1, or by providing a custom implementation as part of its definition—then it automatically inherits all of the superclass convenience initializers.
Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. <https://itun.es/tw/jEUH0.l>
Since you have overridden `init(coder aDecoder: NSCoder)`, `TimeLineTableViewController` won't inherit the `init()` initializer.
You can provide an implementation of all of its superclass designated initializers like this:
```
override init() {
super.init()
}
override init(nibName nibNameOrNil: String?, bundle nibBundleOrNil: NSBundle?) {
super.init(nibName: nibNameOrNil, bundle: nibBundleOrNil)
}
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
```
, or just delete the implementation of `init(coder aDecoder: NSCoder)`.
|
Python: What does for x in A[1:] mean?
I was trying to understand Kadane's algorithm from Wikipedia, when I found this:
```
def max_subarray(A):
max_ending_here = max_so_far = A[0]
for x in A[1:]:
max_ending_here = max(x, max_ending_here + x)
max_so_far = max(max_so_far, max_ending_here)
return max_so_far
```
I'm not familiar with Python. I tried to google what this syntax does, but I couldn't find the right answer because I didn't know what it's called. But, I figured `A[1:]` is the equivalent of omitting `A[0]`, so I thought `for x in A[1:]:` is equivalent to `for(int i = 1; i < A.length; i++)` in Java.
But, after changing `for x in A[1:]:` to `for x in range(1,len(A))`, I got the wrong result
Sorry if this is a stupid question, but I don't know where else to find the answer. Can somebody tell me what this syntax does and what is it called? Also, could you give me the equivalent of `for x in A[1:]:` in Java?
|
This is [array slice](https://docs.python.org/2/tutorial/introduction.html#lists) syntax. See this SO question:
[Explain Python's slice notation](https://stackoverflow.com/questions/509211/explain-pythons-slice-notation) .
For a list `my_list` of objects, e.g. `[1, 2, "foo", "bar"]`, `my_list[1:]` is a shallow-copied list of all elements starting from index `1`: `[2, "foo", "bar"]`. So your `for` statement iterates over these objects:
```
for-iteration 0: x == 2
for-iteration 1: x == "foo"
for-iteration 2: x == "bar"
```
`range(..)` returns a list/generator of indices (integers), so your for statement would iterate over the integers `[1, 2, ..., len(my_list) - 1]`
```
for-iteration 0: x == 1
for-iteration 1: x == 2
for-iteration 2: x == 3
```
So in this latter version you could use `x` as an index into the list: `iter_obj = my_list[x]`.
Alternatively, a slightly more pythonic version if you still need the iteration index (e.g. for the "count" of the current object), you could use `enumerate`:
```
for (i, x) in enumerate(my_list[1:]):
    # i is the 0-based index into the truncated list [0, 1, 2]
    # x is the current object from the truncated list [2, "foo", "bar"]
```
This version is a bit more future proof if you decide to change the type of `my_list` to something else, in that it does not rely on implementation detail of 0-based indexing, and is therefore more likely to work with other iterable types that support slice syntax.
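If you prefer to keep the index-based loop from the question, the fix is to index into the list inside the loop rather than treating the index itself as the value. A sketch of the corrected Kadane loop:

```
def max_subarray(A):
    max_ending_here = max_so_far = A[0]
    for i in range(1, len(A)):   # i is an index here, not an element
        x = A[i]                 # this lookup is what `for x in A[1:]` does for you
        max_ending_here = max(x, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
```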
|
Rmarkdown/Bookdown: Separate figure numbering for Supplemental Section
Certain kinds of documents, such as journal articles, often have a Supplemental Section, where the numbering of figures is different from the main body.
For example, in the main body, you might have Fig 1-5. But then, for the Supplemental section, the numbering restarts as Fig S1, S2, S3, etc.
Bookdown allows cross-referencing (`\@ref(fig:label)`), but I'm not sure how to restart the numbering in a separate section. Is there a good way to do this?
|
You can define a new [LaTeX function](https://tex.stackexchange.com/questions/438952/how-to-end-the-supplementary-part) in the YAML header of your `.rmd` file as follows:
```
\newcommand{\beginsupplement}{
\setcounter{table}{0}
\renewcommand{\thetable}{S\arabic{table}}
\setcounter{figure}{0}
\renewcommand{\thefigure}{S\arabic{figure}}
}
```
Then type `\beginsupplement` when you're ready to start labelling the figures and tables with S1, S2... etc. This solution works fine if you export to PDF only, as it uses LaTeX commands to format the output. It therefore will not work for HTML or Word outputs.
```
---
title: "title"
author:
- My Namington*
- '*\textit{[email protected]} \vspace{5mm}'
output:
bookdown::pdf_document2
fontsize: 12pt
header-includes:
\usepackage{float} \floatplacement{figure}{H}
\newcommand{\beginsupplement}{\setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}}}
---
```{r, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
library(ggplot2)
```
# Main text
Here is the main text of my paper, and a link to a normally-labelled Figure \@ref(fig:irisPlot).
```{r irisPlot, fig.cap="This is a figure caption."}
ggplot(iris, aes(Species, Sepal.Length, colour = Species)) + geom_jitter()
```
\newpage
# Supplementary material {-}
\beginsupplement
Here is the supplement, including a link to a figure prefixed with the letter S Figure \@ref(fig:irisPlot2).
```{r irisPlot2, echo=FALSE, fig.cap= "This is a supplementary figure caption."}
ggplot(iris, aes(Sepal.Width, Sepal.Length, colour = Species)) +
geom_point() +
stat_smooth(method = "lm")
```
```
[](https://i.stack.imgur.com/PkgWw.png)
|
Guice : Inject an ArrayList of Strings
I'm trying to inject an `ArrayList` of `String`s with the help of Guice. I want to show a panel with many RadioButtons (for example) where a user can select some services to activate.
Once selected, I would like to get all the names of the selected services and add them into a list, and inject this list to the manager responsible to create the services. Here is an example:
```
public class UIProviderModule extends ProviderServiceModule {
private ArrayList<String> requestedServices;
public UIProviderModule(ArrayList<String> requestedServices) {
this.requestedServices = requestedServices;
}
@Override
protected void configure() {
bindConstant().annotatedWith(Names.named(Constants.REQ_SERVICES)).to(requestedServices);
bind(IParser.class).to(UIParser.class);
super.configure();
}
}
```
I've seen many posts about [Multibindings](https://github.com/google/guice/wiki/Multibindings) and also about Providers, but I did not understand how this could help me. I just want to retrieve names, since I'm not working with classes that have to be bound to an interface. Am I missing something?
Note: I know this is maybe not the *good way to use Guice* because I'm giving the list to be bound to the `Module`.
|
I think you are misunderstanding how modules are supposed to work.
Modules *don't create the objects*, modules **define rules for how objects might be created when they are needed.**
The reason [MapBinder](http://google.github.io/guice/api-docs/latest/javadoc/com/google/inject/multibindings/MapBinder.html) would help is that you would define **all** of the services in your radio buttons list, and then use the injected map to activate the services that you need.
Here's some code to illustrate what I mean:
```
public class ServiceModule extends AbstractModule {
protected void configure() {
MapBinder<String, Service> mapbinder
= MapBinder.newMapBinder(binder(), String.class, Service.class);
mapbinder.addBinding("service1").to(Service1.class).in(Singleton.class);
mapbinder.addBinding("service2").to(Service2.class);
// Define ALL the services here, not just the ones being used.
// You could also look this up from a ClassLoader or read from a configuration file if you want
}
}
```
Then, inject the MapBinder to your `ServiceManager` class - which is **not** a module:
```
public class ServiceManager {
private final Map<String, Service> serviceMap;
@Inject
public ServiceManager(Map<String, Service> serviceMap) {
this.serviceMap = serviceMap;
}
// This is just one way to do it. It depends on how your UI works
public void startAll(List<String> serviceList) {
for(String serviceName : serviceList) {
serviceMap.get(serviceName).start();
}
}
}
```
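To tie it together, a usage sketch; the service names passed to `startAll` would come from the radio buttons the user actually selected:

```
Injector injector = Guice.createInjector(new ServiceModule());
ServiceManager manager = injector.getInstance(ServiceManager.class);

// e.g. the names of the services ticked in the UI
manager.startAll(Arrays.asList("service1", "service2"));
```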
|
Making Julia find files in the LOAD\_PATH
I want to help Julia find my .jl file by modifying the `LOAD_PATH` variable:
```
julia> readdir()
1-element Array{String,1}:
"test.jl"
shell> cmd /c type test.jl
# Test.jl
module Test
export f
f() = println("hi")
end
julia> push!(LOAD_PATH,pwd());
julia> import Test
ERROR: ArgumentError: Module Test not found in current path.
Run `Pkg.add("Test")` to install the Test package.
in require(::Symbol) at .\loading.jl:365
```
The first call to `readdir()` proves that I have a file called test.jl in my current directory. The following shell call shows that this file contains a module called Test. The next call to `push!(LOAD_PATH,pwd());` puts the current directory in `LOAD_PATH`. But even with the current directory in `LOAD_PATH`, Julia still can't find the `Test` module in test.jl.
What's wrong?
|
The error was talking about something concerning `require`. As the doc says:
>
> Given the statement using `Foo`, the system looks for `Foo` within `Main`. If the module does not exist, the system attempts to `require("Foo")`, which typically results in loading code from an installed package. ... `require` is **case-sensitive** on all platforms, including those with case-insensitive filesystems like macOS and Windows.
>
>
>
and the reason is clear: `require` couldn't find a file named `Test` in `LOAD_PATH`. So we need to make the file name match the module name, but this is just a convention, not a mandatory rule. What will happen if someone mistakenly runs `using test`?
```
julia> push!(LOAD_PATH,pwd())
julia> using test
WARNING: requiring "test" in module "Main" did not define a corresponding module.
julia> whos()
Base 34427 KB Module
Core 12386 KB Module
Main 41296 KB Module
Test 1837 bytes Module
```
The result shows that we've loaded the file `test.jl` and the module (`Test`) in it, but not actually done a `using`/`import` of the module. This is expected behavior since we used the wrong module name, which is also why Julia complained in the warning. In this case, `using test` is equivalent to `include("test.jl")`, but I highly recommend you follow the convention and not rely on this behavior.
BTW, `require` became generally case-sensitive after [this PR](https://github.com/JuliaLang/julia/pull/13542). A side-effect is your `LOAD_PATH` should also be case-sensitive, this will be fixed by [this PR](https://github.com/JuliaLang/julia/pull/19291) in julia-0.6.
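In other words, the conventional fix for the original problem is simply to rename the file so that it matches the module name, roughly:

```
# after renaming test.jl to Test.jl so the file name matches the module name
julia> push!(LOAD_PATH, pwd());

julia> import Test

julia> Test.f()
hi
```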
|
How to save matched pattern into variable using 'sed' command?
I'm trying to find a pattern in `file.txt` using the `sed` command, between the first *char1* and *char2*, and then replace it with a string, like the `echo`-based example below:
```
echo "This X is test Y. But X is not test Y." | sed 's/X[^Y]*Y/REPLACE/'
```
Also I need to save the matched pattern (like ` is test ` {<-- the spaces around it are important}) in a variable.
|
Here is a single invocation of sed that writes the revised line to stdout while at the same time saving the removed text in shell variable `var`:
```
$ var=$(echo "This X is test Y. But X is not test Y." | sed -nr 'h;s/[^X]*X([^Y]*)Y.*/\1/;p;x;s/X[^Y]*Y/REPLACE/;w /dev/stderr') 2>&1
This REPLACE. But X is not test Y.
```
The value of `var` is:
```
$ echo "==$var=="
== is test ==
```
Explanation:
- `h`
This command copies the current pattern to the hold space.
- `s/[^X]*X([^Y]*)Y.*/\1/;p`
This removes everything from the pattern space except the text between the first `X` and `Y` including any spaces. This is then printed to stdout. This is the output that is captured by the shell and assigned to `var`.
- `x`
This copies the hold space back to the pattern space. When this is done, the pattern space contains a copy of the original input line.
- `s/X[^Y]*Y/REPLACE/; w /dev/stderr`
The substitution is made and the result is written to `stderr`.
- `2>&1`
After the shell has captured stdout into `var`, this instructs the shell to copy stderr (which has the line with REPLACE) to stdout.
### Aside on handling of variable `var`
The variable `var` includes the leading and trailing spaces. If the shell were to subsequently subject `var` to word-splitting, these spaces would be removed. To prevent that, when `var` is referenced, do so inside double-quotes, as in the example above.
|
Binding mutually dependent but different data formats to inputs in Angular
As an exercise, I'm creating inputs for color values using both RGB and hex.
html:
```
<form ng-controller="myCtrl">
R:<input ng-model="rChannel" type="number" min="0" max="255" required></input>
G:<input ng-model="gChannel" type="number" min="0" max="255" required></input>
B:<input ng-model="bChannel" type="number" min="0" max="255" required></input>
hex: #<input ng-model="hexColor" type="text" required></input>
</form>
```
js:
```
function myCtrl($scope) {
$scope.$watch('[rChannel, gChannel, bChannel]',
function() {
$scope.hexColor = rgbToHex($scope.rChannel, $scope.gChannel, $scope.bChannel)
},
true);
$scope.$watch('hexColor',
function() {
var rgbArr = hexToRgbArray($scope.hexColor);
$scope.rChannel = rgbArr[0];
$scope.gChannel = rgbArr[1];
$scope.bChannel = rgbArr[2];
});
}
```
<http://jsfiddle.net/F545z/>
It works... with one big hiccup. The moment any one input value goes invalid (an empty string, or the hex is less than six characters), all of the inputs disappear! This has the effect of deleting values that the user has already input. E.g. when the user types a valid 6-character hex value and then presses the delete key to correct the last character of the hex, the entire hex value disappears and needs to be retyped completely. If you watch in the console, you can see what's occurring. I think it's correct behavior for the RGB inputs to disappear when the hex is invalid, but it clearly gets in the user's way when erasing a value he/she is in the process of typing.
This is clearly happening because of the "double-bind"-- the rgb and hex values are watching their own model, but also each other. There's some serious infinite loop potential here, and it's probably only working at all because [the angular docs say the loop only runs 10x](http://docs.angularjs.org/api/ng.%24rootScope.Scope) to prevent an infinite loop deadlock.
I'm pretty sure I'm doing this the wrong way to begin with. Should I try to write a separate directive for the hex input? If so, how should I link them all up? Is $watch legit for this kind of use? A working fiddle is most helpful.
|
$watch works well for a one-way dependency. You want something that flips the direction of the dependency based on which field the user edits. For this, use ng-change on the inputs:
<http://jsfiddle.net/F545z/1/>
```
<div ng-app>
<form ng-controller="myCtrl" novalidate>
R:<input ng-model="redChannel" ng-change="updateHex()" type="number" min="0" max="255" required></input>
G:<input ng-model="greenChannel" ng-change="updateHex()" type="number" min="0" max="255" required></input>
B:<input ng-model="blueChannel" ng-change="updateHex()" type="number" min="0" max="255" required></input>
<br><br>
hex: #<input ng-model="hexColor" ng-change="updateChannels()" type="text" required></input>
</form>
</div>
```
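The fiddle wires these up to controller functions along these lines; this is a sketch that assumes the same `rgbToHex` and `hexToRgbArray` helpers as in the question:

```
function myCtrl($scope) {
    $scope.updateHex = function() {
        $scope.hexColor = rgbToHex($scope.redChannel, $scope.greenChannel, $scope.blueChannel);
    };
    $scope.updateChannels = function() {
        var rgbArr = hexToRgbArray($scope.hexColor);
        $scope.redChannel = rgbArr[0];
        $scope.greenChannel = rgbArr[1];
        $scope.blueChannel = rgbArr[2];
    };
}
```

Because `ng-change` only fires for the input the user is actually editing, the two update functions never trigger each other, which avoids the feedback loop that the `$watch` version suffers from.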
|
Loading inside a textarea after typing
```
<textarea id='txedit'></textarea>
<button id='btnload'>LOAD</button>
$('#btnload').on('click', function(){
$('#txedit').load('test.php');
});
```
Before typing anything inside `txedit`, the above code works, i.e. `test.php` is loaded into `txedit`.
After typing even a single letter inside `txedit`, clicking on `btnload` no longer loads `test.php`.
|
This is because of a unique property of the `<textarea>` element. When you put content in-between the tags, like so:
```
<textarea> I'm in the middle! </textarea>
```
... that text serves as the "default" placeholder text, and once the user starts typing, it is no longer used, **even if you change it dynamically**. When you call `$('#txedit').load()`, JQuery is not putting the value of that AJAX call into the textarea as a text value, it is putting it in between the tags as default text!
What you want to do instead is pass the result of JQuery load to the *value* of the textarea, like so:
```
<textarea id='txedit'></textarea>
<button id='btnload'>LOAD</button>
<script>
$('#btnload').on('click', function(){
$.get('test.php', function(res){
$('#txedit').val(res);
})
});
</script>
```
|
Filesystem block size confusion
I know that block size is, generally speaking, the minimal operation unit of a filesystem, but I was confused by fio, the I/O performance benchmark tool, which has a `bs` parameter to set the "block size".
If block size belongs to a filesystem, why can fio set one for a benchmark? Are they the same "block size"?
If I find that performance is optimal with a 1M "fio block size", can I force all processes operating on this filesystem to use a 1M block size to achieve the best performance? And what about remote filesystems mounted with FUSE: can I set the block size at mount time?
|
There are quite many details here, I'll try to summarize the main points here.
The filesystem block size is the minimum allocation unit that can be reserved at one time. So, if a filesystem has a block size of 4096, a file that is one byte in size, still takes 4096 bytes on the hard disk.
This is due to the fact that the filesystem has to know which part of the hard disk belongs to which file. If the block size was smaller, then the allocation table would be larger. If the block size was larger, then even more space would be wasted with small files.
There are also other strategies for allocating space to files, depending on the filesystem used. But this is the basic strategy that is most often used.
However, on the application level, the `bs` parameter is the block size of an individual write/read operation `fio` uses when performing the benchmark. Larger write / read operation sizes can provide better performance, because several filesystem level block writes can be combined.
You cannot force applications to use any specific block size. They write / read exactly the amount of data they want to at any time. The sizes of individual write / read operations vary from a few bytes up to megabytes, all depending on the application.
`fio` is an exception here, because it is a benchmark tool.
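For example, `bs` is just a command-line option to fio; something like the following (file name and sizes are placeholders) benchmarks sequential writes issued as 1 MiB operations:

```
# sequential write test using 1 MiB application-level write operations
fio --name=seqwrite --rw=write --bs=1M --size=1G --filename=/tmp/fio-testfile
```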
Remote filesystems operate on top of the remote server's filesystem, and therefore the actual filesystem on the remote server specifies the block size used on the hard disk.
The protocol used when accessing the remote server's filesystem also has some "block sizes", most often defined by TCP segment size, which is determined from MTU. MTU is most often 1500 bytes in the Internet. In some local environments MTU can be 9000 bytes. This needs support from all parts of the network (switches, routers and NICs) to work properly.
|
Pandas dataframe filtering based on group counts
My dataframe looks like this:
```
item_id sales_quantity
1 10
1 11
1 1
1 2
... ...
10 1
10 9
... ...
```
I want to filter out all the rows corresponding to an `item_id` which appear less than 100 times. Here is what I tried:
```
from pandas import *
from statsmodels.tsa.stattools import adfuller
def adf(X):
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
print('\t%s: %.3f' % (key, value))
filtered = df.groupby('item_id_copy')['sales_quantity'].filter(lambda x: len(x) >= 100)
df[df['sales_quantity'].isin(filtered)]
df['sales_quantity'].groupby(df['item_id_copy']).apply(adf)
```
But, when I run the following:
`df['sales_quantity'].groupby(df['item_id_copy']).size()`, I get lots of item\_ids with size less than 100. Can someone please tell me what is wrong with my code?
|
It seems you need to remove `['sales_quantity']`:
```
df = df.groupby('item_id_copy').filter(lambda x: len(x) >= 100)
```
Or:
```
df = df[df.groupby('item_id_copy')['sales_quantity'].transform('size') >= 100]
```
Sample:
```
np.random.seed(130)
df=pd.DataFrame(np.random.randint(3, size=(10,2)), columns=['item_id_copy','sales_quantity'])
print (df)
item_id_copy sales_quantity
0 1 1
1 1 2
2 2 1
3 0 1
4 2 0
5 2 0
6 0 1
7 1 2
8 1 2
9 1 2
df1 = df.groupby('item_id_copy').filter(lambda x: len(x) >= 4)
print (df1)
item_id_copy sales_quantity
0 1 1
1 1 2
7 1 2
8 1 2
9 1 2
df1 = df[df.groupby('item_id_copy')['sales_quantity'].transform('size') >= 4]
print (df1)
item_id_copy sales_quantity
0 1 1
1 1 2
7 1 2
8 1 2
9 1 2
```
EDIT:
To get columns after applying a custom function, you can return a `Series` from the function, then reshape with [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html). Finally, create a new `DataFrame` from the dicts in the `Critical Values` column and [`join`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) it back to the original:
```
np.random.seed(130)
df = pd.DataFrame(np.random.randint(10, size=(1000,2)),
columns=['item_id_copy','sales_quantity'])
#print (df)
from statsmodels.tsa.stattools import adfuller
def adf(X):
result = adfuller(X)
return pd.Series(result, index=['ADF Statistic','p-value','a','b','Critical Values','c'])
df1 = df[df.groupby('item_id_copy')['sales_quantity'].transform('size') >= 100]
df2 = df1['sales_quantity'].groupby(df1['item_id_copy']).apply(adf).unstack()
df3 = pd.DataFrame(df2['Critical Values'].values.tolist(),
index=df2.index,
columns=['1%','5%','10%'])
df2=df2[['ADF Statistic','p-value']].join(df3).reset_index()
print (df2)
item_id_copy ADF Statistic p-value 1% 5% 10%
0 1 -12.0739 2.3136e-22 -3.498198 -2.891208 -2.582596
1 2 -4.48264 0.000211343 -3.494850 -2.889758 -2.581822
2 7 -4.2745 0.000491609 -3.491818 -2.888444 -2.581120
3 9 -11.7981 9.47089e-22 -3.486056 -2.885943 -2.579785
```
|
MVC - is it model to view or controller to view?
I see numerous sites and articles explaining that the view is updated from the model like the example below 
however i see a few other examples of MVC architecture showing that the view is updated via the controller 
Does this depend on whether you have the @Model in your views? I'm just wondering why there are different versions of MVC; we were taught that it should be the second image.
|
MVC is a loosely defined pattern that gives the architect much discretion over implementation details. This is probably why so many variations of MVC exist.
To my knowledge, it all started with Classic (Original) MVC that separate web application into three parts i.e. Model, View and Controller. The objectives were:
- Achieve loose coupling between Model and View (Observer pattern
employed to achieve it).
- Encapsulate business logic into Model so that it can be exhaustively
tested.
- Make View as dumb/thin as possible to lessen the need to test it.

*The pattern charmed so many that there were several variations (Active Model, Passive Model, Model2). These variations were due to implementations of the pattern in particular frameworks to suit the frameworks' design goals.*
For example, one variation is Model2. Model2 is a web variation (Classic MVC was actually targeted for desktop application) and got popular as "ASP.NET MVC Framework".

The key difference between Classic MVC and the ASP.NET MVC Framework is that the latter provides a neat separation between Model and View, i.e. no direct interaction. Instead, the Controller is responsible for facilitating this communication between Model and View in the ASP.NET MVC Framework. This makes ASP.NET MVC Framework web applications a lot easier and more efficient to test.
Furthermore, in Model2 there's a sort of loose contract that can be established between the view and the controller. In the ASP.NET MVC Framework, this contract used to be represented by the ViewData container object and is nowadays represented by a ViewModel object.
To avoid confusion, one needs to look at the interactions between these three parts (Model, View & Controller) in the MVC pattern implementations currently in use. Also embrace the fact that it is a particular implementation of the MVC pattern, and one diagram may not be able to describe it completely.
|
Removing (possibly nested) text quotes in command line
I need to parse large amounts of text on the command line and replace all (possibly nested) text quotes with spaces. Quotes are marked with a specific syntax: `[quote=username]quoted text[/quote]`.
Example input with nested quotes could be something like:
```
text part 1 [quote=foo] outer quote 1 [quote=bar] inner quote [/quote] outer quote 2 [/quote] text part 2 [quote=foo-bar] next quote [/quote] text part 3
```
And expected output would be:
```
text part 1 text part 2 text part 3
```
With the help of [this question](https://unix.stackexchange.com/questions/166879/how-can-i-delete-all-text-between-nested-curly-brackets-in-a-multiline-text-file) I somehow got it to work (got the output above) with `sed ':b; s/\[quote=[^]]*\][^[\/]*\[\/quote\]/ /g; t b'`, but the middle part (`[^[\/]`) is problematic since quotes can contain characters like `[` or `]`.
That being said, my `sed` command doesn't work if the input is e.g.
```
text part 1 [quote=foo] outer quote 1 [quote=bar] inner quote [foo] [/quote] outer quote 2 [/quote] text part 2 [quote=foo-bar] next quote [/quote] text part 3
```
One problem is that `sed` doesn't seem to support non-greedy quantifiers and thus always catches the longest possible match from the input. That makes it hard to deal with **a)** usernames and **b)** quoted texts in general.
I also guess that `sed` is not the best tool to solve this, and it might not even be capable of doing things like that. Maybe e.g. `perl` or `awk` would work better?
Now the final question: what would be the best and most efficient way to solve this?
|
If you know the input doesn't contain `<` or `>` characters, you could do:
```
sed '
# replace opening quote with <
s|\[quote=[^]]*\]|<|g
# and closing quotes with >
s|\[/quote\]|>|g
:1
# work our way from the inner quotes
s|<[^<>]*>||g
t1'
```
If it may contain `<` or `>` characters, you can escape them using a scheme like:
```
sed '
# escape < and > (and the escaping character _ itself)
s/_/_u/g; s/</_l/g; s/>/_r/g
<code-above>
# undo escaping after the work has been done
s/_r/>/g; s/_l/</g; s/_u/_/g'
```
With `perl`, using recursive regexps:
```
perl -pe 's@(\[quote=[^\]]*\](?:(?1)|.)*?\[/quote\])@@g'
```
Or even, as you mention:
```
perl -pe 's@(\[quote=.*?\](?:(?1)|.)*?\[/quote\])@@g'
```
With `perl`, you can handle multiline input by adding the `-0777` option. With `sed`, you'd need to prefix the code with:
```
:0
$!{
N;b0
}
```
So as to load the whole input into the pattern space.
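For instance, applying the perl variant to the first example input from the question behaves roughly like this (runs of leftover spaces are not collapsed; pipe through something like `sed 's/  */ /g'` if that matters):

```
$ echo 'text part 1 [quote=foo] outer quote 1 [quote=bar] inner quote [/quote] outer quote 2 [/quote] text part 2 [quote=foo-bar] next quote [/quote] text part 3' |
    perl -pe 's@(\[quote=[^\]]*\](?:(?1)|.)*?\[/quote\])@@g'
text part 1  text part 2  text part 3
```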
|
How do i work out the nearest resolution with 16:9 aspect ratio. python 3
For a project I am working on I need to make a function that takes an input of height and width and outputs the nearest height and width that have a ratio of 16:9
Here is what I have got so far:
```
def image_to_ratio(h, w):
if width % 16 < height % 9:
h -= (h % 9)
else:
w -= (w% 9)
return h, w
```
input:
1920, 1200
output of my function:
1920, 1197
|
You can try something like:
```
from __future__ import division  # needed in Python2 only

def image_to_ratio(w, h):
    if (w / h) < (16 / 9):
        w = (w // 16) * 16
        h = (w // 16) * 9
    else:
        h = (h // 9) * 9
        w = (h // 9) * 16
    return w, h

>>> image_to_ratio(1920, 1200)
(1920, 1080)
```
That same logic can be condensed into:
```
def image_to_ratio(w, h):
    base = w//16 if w/h < 16/9 else h//9
    return base * 16, base * 9
```
|
Build Docker with Go app: cannot find package
I have my Dockerfile in the root of the directory, alongside an `src/myapp` folder; `myapp` contains `myapp.go` with the main package.
`Dockerfile` looks like following:
```
FROM golang:1.9.2
ADD . /
RUN go build myapp;
ENTRYPOINT ["/go/bin/myapp"]
```
I get following error:
```
can't load package: package myapp: cannot find package "myapp" in any of:
/usr/local/go/src/myapp (from $GOROOT)
/go/src/myapp (from $GOPATH)
```
What am I doing wrong? Can I log `ls` command after docker has done `ADD`?
|
You are copying all the files to the image's root directory, not installing any dependencies, trying to build the package, and then running the binary from `/go/bin/myapp`. The binary doesn't exist in that directory, and that's what generates the error.
I would recommend using a Dockerfile like this,
```
FROM golang:1.9.2
ADD . /go/src/myapp
WORKDIR /go/src/myapp
RUN go get myapp
RUN go install
ENTRYPOINT ["/go/bin/myapp"]
```
This'll do the following.
1. Copy project files to `/go/src/myapp`.
2. Set Working directory to `/go/src/myapp`.
3. Install dependencies; I used `go get`, but replace it with whichever dependency management tool you are using.
4. Install/build the binary.
5. Set entry point.
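With that Dockerfile in place, building and running the image looks roughly like this (the image tag is just an example):

```
docker build -t myapp .
docker run --rm myapp
```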
You can run `ls` or any other command in a running container using `docker exec`.
Example:
```
docker exec <container name/hash> ls
```
You can also enter the shell in the generated image to understand it well using
```
docker run --rm -it <image hash/name> /bin/sh
```
|
JavaScript override methods
Let's say you have the below code:
```
function A() {
function modify() {
x = 300;
y = 400;
}
var c = new C();
}
function B() {
function modify(){
x = 3000;
y = 4000;
}
var c = new C();
}
C = function () {
var x = 10;
var y = 20;
function modify() {
x = 30;
y = 40;
};
modify();
alert("The sum is: " + (x+y));
}
```
Now the question is whether there is any way I can override the method `modify` from `C` with the methods that are in `A` and `B`. In Java you would use the `super` keyword, but how can you achieve something like this in JavaScript?
|
Edit: It's now six years since the original answer was written and a lot has changed!
- If you're using a newer version of JavaScript, possibly compiled with a tool like [Babel](http://babeljs.io), you can [use real classes](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes#Sub_classing_with_extends).
- If you're using the class-like component constructors provided by [Angular](https://docs.angularjs.org/guide/component) or [React](https://facebook.github.io/react/docs/react-without-es6.html), you'll want to look in the docs for that framework.
- If you're using ES5 and making "fake" classes by hand using prototypes, the answer below is still as right as it ever was.
---
JavaScript inheritance looks a bit different from Java. Here is how the native JavaScript object system looks:
```
// Create a class
function Vehicle(color){
this.color = color;
}
// Add an instance method
Vehicle.prototype.go = function(){
return "Underway in " + this.color;
}
// Add a second class
function Car(color){
this.color = color;
}
// And declare it is a subclass of the first
Car.prototype = new Vehicle();
// Override the instance method
Car.prototype.go = function(){
return Vehicle.prototype.go.call(this) + " car"
}
// Create some instances and see the overridden behavior.
var v = new Vehicle("blue");
v.go() // "Underway in blue"
var c = new Car("red");
c.go() // "Underway in red car"
```
Unfortunately this is a bit ugly and it does not include a very nice way to "super": you have to manually specify which parent classes' method you want to call. As a result, there are a variety of tools to make creating classes nicer. Try looking at Prototype.js, Backbone.js, or a similar library that includes a nicer syntax for doing OOP in js.
|
Spring batch with MongoDB and transactions
I have a Spring Batch application with two databases: one SQL DB for the Spring Batch meta data, and another which is a MongoDB where all the business data is stored. The relational DB still uses `DataSourceTransactionManager`.
However, I don't think the Mongo writes are done within an active transaction with rollbacks. Here is the excerpt from the [official Spring Batch documentation](https://docs.spring.io/spring-batch/docs/current-SNAPSHOT/api/org/springframework/batch/item/data/MongoItemWriter.html) on `MongoItemWriter`:
>
> A ItemWriter implementation that writes to a MongoDB store using an implementation of Spring Data's MongoOperations. Since MongoDB is not a transactional store, a best effort is made to persist written data at the last moment, yet still honor job status contracts. No attempt to roll back is made if an error occurs during writing.
>
>
>
However this is not the case any more; [MongoDB introduced ACID transactions in version 4.](https://docs.mongodb.com/manual/core/transactions/)
How do I go about adding transactions to my writes? I could use `@Transactional` on my service methods when I use `ItemWriterAdapter`. But I still don't know what to do with `MongoItemWriter`... What is the right configuration here? Thank you.
|
>
> I have a Spring Batch application with two databases: one SQL DB for the Spring Batch meta data, and another which is a MongoDB where all the business data is stored.
>
>
>
I invite you to take a look at the following posts to understand the implications of this design choice:
- [How to java-configure separate datasources for spring batch data and business data? Should I even do it?](https://stackoverflow.com/questions/25256487/how-to-java-configure-separate-datasources-for-spring-batch-data-and-business-da)
- [How does Spring Batch transaction management work?](https://stackoverflow.com/questions/29303155/how-does-spring-batch-transaction-management-work)
In your case, you have a distributed transaction across two data sources:
- SQL datasource for the job repository, which is managed by a `DataSourceTransactionManager`
- MongoDB for your step (using the `MongoItemWriter`), which is managed by a `MongoTransactionManager`
If you want technical meta-data and business data to be committed/rolled back in the scope of the same distributed transaction, you need to use a `JtaTransactionManager` that coordinates the `DataSourceTransactionManager` and `MongoTransactionManager`. You can find some resources about the matter here: <https://stackoverflow.com/a/56547839/5019386>.
BTW, there is a feature request to use MongoDB as a job repository in Spring Batch: <https://github.com/spring-projects/spring-batch/issues/877>. When this is implemented, you could store both business data and technical meta-data in the same datasource (so no need for a distributed transaction anymore) and you would be able to use the same `MongoTransactionManager` for both the job repository and your step.
|
Push object onto end of array using new literal syntax
PHP has:
```
arr[] = 'Push this onto my array';
```
Where the string will be added to the end of the array.
Is there any equivalent of this in the new Objective-C literal syntax? The most succinct way I can think of doing it is:
```
arr[arr.count] = @"Add me";
```
but maybe there's something better out there?
|
Take a look at [the documentation for the new literal syntax](http://clang.llvm.org/docs/ObjectiveCLiterals.html). When you assign into an array using a subscript, the call is translated into the `setObject:atIndexedSubscript:` method call:
```
NSMutableArray *foo = …;
foo[0] = @"bar"; // => [foo setObject:@"bar" atIndexedSubscript:0];
```
If you had your own mutable array implementation, you could add a special case to the `setObject:atIndexedSubscript:` method to grow the array when assigning past the array size. I very much doubt the default `NSMutableArray` implementation does anything like it, most probably you’d just get an exception about index being out of bounds. And this is indeed what `NSMutableArray` does, see [the reference documentation](https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSMutableArray_Class/Reference/Reference.html):
>
> If the index is equal to count the element is added to the end of the
> array, growing the array.
>
>
>
Thank you Martin for the heads-up!
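Going back to the idea of a custom class whose subscript assignment grows the storage, a minimal sketch could look like this; `GrowingArray` is a made-up wrapper around `NSMutableArray`, not a Foundation class:

```
@interface GrowingArray : NSObject
@property (nonatomic, readonly) NSUInteger count;
- (id)objectAtIndexedSubscript:(NSUInteger)idx;
- (void)setObject:(id)obj atIndexedSubscript:(NSUInteger)idx;
@end

@implementation GrowingArray {
    NSMutableArray *_backing;
}

- (instancetype)init {
    if ((self = [super init])) _backing = [NSMutableArray array];
    return self;
}

- (NSUInteger)count {
    return _backing.count;
}

- (id)objectAtIndexedSubscript:(NSUInteger)idx {
    return _backing[idx];
}

- (void)setObject:(id)obj atIndexedSubscript:(NSUInteger)idx {
    if (idx == _backing.count) {
        [_backing addObject:obj];   // assigning one past the end appends, like PHP's arr[]
    } else {
        _backing[idx] = obj;
    }
}
@end
```

With that in place, `arr[arr.count] = @"Add me";` always appends for a `GrowingArray` instance.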
|
ListAdapter submitList not updating
I have this issue with pagination/infinite scroll in a RecyclerView: I am adding all new items using `.addAll()`
```
movieList.addAll(it.movieList)
adapter.submitList(movieList)
Log.wtf("WTF", movieList.size.toString())
```
The size keeps increasing whenever we get a success response from the API, which indicates that the list is indeed being populated, but the items in the RecyclerView stay the same and `submitList()` seems to work only on the first call.
Here is my DiffUtil class and the Adapter
```
class DiffUtilMovies : DiffUtil.ItemCallback<MovieItem>() {
// DiffUtil uses this test to help discover if an item was added, removed, or moved.
override fun areItemsTheSame(oldItem: MovieItem, newItem: MovieItem): Boolean {
return oldItem.id == newItem.id
}
// Check whether oldItem and newItem contain the same data; that is, whether they are equal.
// If there are differences between oldItem and newItem, this code tells DiffUtil that the item has been updated.
override fun areContentsTheSame(oldItem: MovieItem, newItem: MovieItem): Boolean {
// Check for now if there is a difference on the price, removing specific fields
// means checking all the data for changes
return oldItem.title == newItem.title
}
}
class MovieAdapter(private val context: Context) : ListAdapter<MovieItem, MovieAdapter.ItemView>(DiffUtilMovies()) {
private var isDetached: Boolean = false
class ItemView(itemView: MovieCardBinding) : RecyclerView.ViewHolder(itemView.root) {
val titleTxt = itemView.titleTxt
val rateTxt = itemView.rateTxt
val rateBar = itemView.rateBar
val imageThumb = itemView.thumbnail
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ItemView {
return ItemView(
MovieCardBinding.inflate(
LayoutInflater.from(parent.context),
parent,
false
)
)
}
override fun onBindViewHolder(holder: ItemView, position: Int) {
holder.apply {
val movieItem = getItem(position)
titleTxt.text = movieItem.title
rateTxt.text = movieItem.voteAverage.toString()
val rateAvg = movieItem.voteAverage?.toFloat() ?: run {
0.0f
}
rateBar.rating = rateAvg/2
if (!isDetached)
GlideApp.with(context)
.load(context.getString(R.string.image_link,AppConfig.image_endpoint, movieItem.posterPath))
.thumbnail(GlideApp.with(context).load(R.drawable.loading).centerCrop())
.error(R.drawable.no_image)
.into(imageThumb)
this.itemView.setOnClickListener {
try {
// context.startActivity(Intent(context, AssetInfoActivity::class.java).apply {
// putExtra(context.getString(R.string.assets), movieItem)
// })
}
catch (ignored: Exception){
// The user probably already leave before the activity started
}
}
}
}
override fun onDetachedFromRecyclerView(recyclerView: RecyclerView) {
super.onDetachedFromRecyclerView(recyclerView)
isDetached = true
}
}
```
|
ListAdapter doesn’t work with mutable lists. This is because if you modify the contents of the List, when it tries to compare the contents of the old list and new list, it’s comparing the same list to itself. There is no old list instance that still holds the old contents, so it cannot detect any differences.
Instead of mutating the original list, you should create a new one, for example
```
movieList = movieList + it.movieList
adapter.submitList(movieList)
```
Alternatively, you can use a mutable backing list, but always create a copy when passing it to `submitList`. You must use a copy even the very first time you pass the List, so that the adapter is never referring to your mutable List.
```
movieList.addAll(it.movieList)
adapter.submitList(movieList.toList())
```
|
Finding prime isn't working
>
> **Warning:** Simple homework assignment, no idea what I'm doing
>
>
>
So, I'm trying to make a program that finds the first prime numbers from 1 to 100 and prints them in a listbox. This is my code:
```
private bool IsPrime(int number)
{
int count;
if (number == 2)
return true;
for (count = 3; count < number; count = count + 2)
{
if (number % count == 0)
return false;
}
return true;
}
private void calculateButton_Click(object sender, EventArgs e)
{
int number;
for (number = 1; number < 100; number = number++)
if (IsPrime(number))
primeList.Items.Add(number);
}
```
And the compiler isn't reporting any syntax errors, but the program freezes up every time I try to run it. Any idea why this happens? Thanks.
|
You use:
```
for (number = 1; number < 100; number = number++)
```
while you should write
```
for (number = 1; number < 100; number++)
```
You should read these articles to understand why your original code didn't increment the counter: [for](http://msdn.microsoft.com/en-us/library/ch45axte.aspx), [++ Operator](http://msdn.microsoft.com/en-us/library/36x43w8w.aspx)
You can learn the behaviour of the `++` operator from some test code:
```
int n = 0;
Console.WriteLine(n); //0
n = n++;
Console.WriteLine(n); //0
n = ++n;
Console.WriteLine(n); //1
n = n++;
Console.WriteLine(n); //1
n = ++n;
Console.WriteLine(n); //2
```
Another nice example would be:
```
int n = 0;
int x = n++;
int y = ++n;
Console.WriteLine(string.Format("x={0}", x)); //0
Console.WriteLine(string.Format("y={0}", y)); //2
Console.WriteLine(x + y); //n++ + ++n == 0 + 2 == 2
n = 0;
x = ++n;
y = n++;
Console.WriteLine(string.Format("x={0}", x)); //1
Console.WriteLine(string.Format("y={0}", y)); //1
Console.WriteLine(x + y); //++n + n++ == 1 + 1 == 2
```
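As a side note, once the loop is fixed, the `IsPrime` method as posted will still report 1 and even numbers such as 4 as prime, because it never tests divisibility by 2 and never excludes 1. A corrected sketch could look like this:

```
private bool IsPrime(int number)
{
    if (number < 2) return false;        // 0 and 1 are not prime
    if (number == 2) return true;
    if (number % 2 == 0) return false;   // rule out the remaining even numbers

    for (int count = 3; count * count <= number; count += 2)
    {
        if (number % count == 0)
            return false;
    }
    return true;
}
```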
|
React Native: Refs in ListView
It looks like the refs of views inside a ListView are hard to access directly.
Now I have a list view with cells. In the `renderRow` function I have something like:
```
renderRowView: function(rowData){
   return (
     <View>
       <TextInput
          ref="text"
        />
     </View>
   );
},
```
In this case, if I want to access this TextInput using ref, it will be `undefined`.
I saw a thread on Github (<https://github.com/facebook/react-native/issues/897>) mentioned about a way to resolve this, but I still couldn't understand how to use it:
```
render: function() {
return (
<ListView
dataSource={this.state.dataSource}
renderRow={(rowData, sec, i) =>
<Text ref={(row) => this.rows[sec][i] = row}>{rowData}</Text>
}
/>
);
},
```
Please help me understand how this ref function works, and how to use it (i.e. programmatically focus on the `TextInput` in the row.). Thank you!
|
The `ref` attribute on a React Component can either be a `string` or a `callback` function, which will be called with the component as its first argument.
So passing a function to a `ref` attribute will execute it when the component is mounted, with the component itself as the first argument.
What the github code you pasted is doing is adding the component to a two-dimensional array when it's mounted via the `ref` callback attribute. The `row` argument is essentially the `<TextInput/>` itself.
What do you want to achieve ? There might be an easier and cleaner solution.
---
**EDIT:** Regarding what you're trying to achieve, this would work :
```
render: function() {
return (
<ListView
dataSource={this.state.dataSource}
renderRow={(rowData) => {
var inputRefs = [];
var _focusInput = function(name) {
inputRefs[name].focus();
};
var input1 =
<TextInput
ref={(input) => {
inputRefs['input1'] = input;
}}
onSubmitEditing={_focusInput.bind(null, 'input2')}
onEndEditing={_focusInput.bind(null, 'input2')} />;
var input2 =
<TextInput
ref={(input) => {
inputRefs['input2'] = input;
}} />;
return (
<View>
{input1}
{input2}
</View>
);
}}/>
);
}
```
You can dive more into TextInput events there <https://facebook.github.io/react-native/docs/textinput.html#content>.
|
Find all strings in between two strings in Go
I am working on extracting mutliple matches between two strings.
In the example below, I am trying to regex an `A B C` substring out of my string.
Here is my code:
```
package main
import (
"fmt"
"regexp"
)
func main() {
str:= "Movies: A B C Food: 1 2 3"
re := regexp.MustCompile(`[Movies:][^Food:]*`)
match := re.FindAllString(str, -1)
fmt.Println(match)
}
```
I am clearly doing something wrong in my regex. I am trying to get the `A B C` string between `Movies:` and `Food:`.
What is the proper regex to get all strings between two strings?
|
In Go, since its RE2-based `regexp` does not support lookarounds, you need to use the *capturing* mechanism with the [`regexp.FindAllStringSubmatch`](https://golang.org/pkg/regexp/#Regexp.FindAllStringSubmatch) function:
```
left := "LEFT_DELIMITER_TEXT_HERE"
right := "RIGHT_DELIMITER_TEXT_HERE"
rx := regexp.MustCompile(`(?s)` + regexp.QuoteMeta(left) + `(.*?)` + regexp.QuoteMeta(right))
matches := rx.FindAllStringSubmatch(str, -1)
```
Note the use of [`regexp.QuoteMeta`](https://pkg.go.dev/regexp#QuoteMeta) that automatically escapes all special regex metacharacters in the left- and right-hand delimiters.
The `(?s)` makes `.` match across lines and `(.*?)` captures everything between the left and right delimiters into Group 1.
So, here you can use
```
package main
import (
"fmt"
"regexp"
)
func main() {
str:= "Movies: A B C Food: 1 2 3"
r := regexp.MustCompile(`Movies:\s*(.*?)\s*Food`)
matches := r.FindAllStringSubmatch(str, -1)
for _, v := range matches {
fmt.Println(v[1])
}
}
```
See the [Go demo](https://play.golang.org/p/8DhhpY_v5XZ). Output: `A B C`.
|
Why can't I catch SIGINT when asyncio event loop is running?
Using Python 3.4.1 on Windows, I've found that while executing an [asyncio event loop](https://docs.python.org/3/library/asyncio.html), my program can't be interrupted (i.e. by pressing Ctrl+C in the terminal). More to the point, the SIGINT signal is ignored. Conversely, I've determined that SIGINT is handled when not in an event loop.
Why is it that SIGINT is ignored when executing an asyncio event loop?
The below program should demonstrate the problem - run it in the terminal and try to stop it by pressing Ctrl+C, it should keep running:
```
import asyncio
import signal
# Never gets called after entering event loop
def handler(*args):
print('Signaled')
signal.signal(signal.SIGINT, handler)
print('Event loop starting')
loop = asyncio.SelectorEventLoop()
asyncio.set_event_loop(loop)
loop.run_forever()
print('Event loop ended')
```
See [discussion](https://groups.google.com/forum/#!topic/python-tulip/pr9fgX8Vh-A) on official (Tulip) mailing list.
|
I've found a workaround, which is to schedule a periodic callback. While this running, SIGINT is apparently processed:
```
import asyncio
def wakeup():
# Call again
loop.call_later(0.1, wakeup)
print('Event loop starting')
loop = asyncio.SelectorEventLoop()
# Register periodic callback
loop.call_later(0.1, wakeup)
asyncio.set_event_loop(loop)
loop.run_forever()
print('Event loop ended')
```
Not sure why this is necessary, but it indicates that signals are blocked while the event loop waits for events ("polls").
The matter has been [discussed](https://groups.google.com/forum/#!topic/python-tulip/pr9fgX8Vh-A) on the official (Tulip) mailing list, my workaround is apparently the way to go as of now.
## Update
A fix has supposedly [made its way into Python 3.5](https://code.google.com/p/tulip/issues/detail?id=191), so hopefully my workaround will be made obsolete by that Python version.
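For what it's worth (this is not part of the original discussion): on current Python versions (3.8 and later) the plain `asyncio.run()` entry point should handle Ctrl+C as you'd expect, so the wakeup trick shouldn't be needed there. A minimal sketch:
```
import asyncio

async def main():
    while True:
        await asyncio.sleep(1)

try:
    asyncio.run(main())  # Ctrl+C interrupts the loop and surfaces as KeyboardInterrupt
except KeyboardInterrupt:
    print('Interrupted')
```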
|
Random Forest Regression and trended time-series
I am comparing a random forest model to a GLS model using a univariate time series that has a deterministic linear trend. I am going to add a linear time trend covariate (among other predictors) to the GLS model to account for the changing trend. To be consistent in my comparison, I was hoping to add this predictor to the random forest regression model as well. I have been looking for literature on this subject and can't find much.
Does anyone know if adding this type of predictor is inappropriate in a random forest regression for any reason? The random forest regression already includes time-lagged variables to account for autocorrelation.
|
RFs, of course, can identify and model a long-term trend in the data. However, the issue becomes more complicated when you are trying to forecast out to never-before-seen values, as you often are trying to do with time-series data. For example, if you see that activity increases linearly over a period between 1915 and 2015, you would expect it to continue to do so in the future. RF, however, would not make that forecast. It would forecast all future years to have the same activity as 2015.
```
from sklearn import ensemble
import numpy as np
years = np.arange(1916, 2016)
#the final year in the training data set is 2015
years = [[x] for x in years]
print('Final year is %s' % years[-1][0])
#say your ts goes up by 1 each year - a perfect linear trend
ts = np.arange(1,101)
est = ensemble.RandomForestClassifier().fit(years,ts)
print(est.predict([[2013], [2014], [2015], [2016], [2017], [2018]]))
```
The above script will show that the predictions for 2016, 2017 and 2018 are identical to the prediction for 2015: the forest cannot extrapolate beyond the range of values it saw in training. Adding lag variables into the RF does not help in this regard. So be careful. I'm not sure if adding trend data to your RF is going to do what you think it will.
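A common workaround (not something the answer above suggests, just the usual trick) is to fit the deterministic trend separately, e.g. with a linear model, and let the forest model only the residuals, so the trend component can still extrapolate. A rough sketch:
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

years = np.arange(1916, 2016).reshape(-1, 1)
ts = np.arange(1, 101).astype(float)

trend = LinearRegression().fit(years, ts)         # captures the deterministic linear trend
residuals = ts - trend.predict(years)             # what is left for the forest to model
forest = RandomForestRegressor(n_estimators=100).fit(years, residuals)

future = np.array([[2016], [2017], [2018]])
forecast = trend.predict(future) + forest.predict(future)  # trend extrapolates, forest adds the rest
print(forecast)
```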
|
Oracle Data Provider to CLR type mapping
Where can I find the listing of ODP to CLR type mapping?
On Oracle database, the NUMBER(9,0) type comes out in .NET app as System.Decimal from the MS Oracle driver, but as System.Int32 from ODP driver. I need an exact specification of types coming out from database (not the CLR to DB parameter mapping).
|
Run this simple test to get mappings for SqlServer and Oracle (both MS and ODP.NET drivers):
```
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Oracle.DataAccess.Client;
namespace DbOutTypeTest
{
public class Program
{
private static string SqlServerConnectionString = @"";
private static string OracleConnectionString = @"";
private static void WriteHeader(string title)
{
Console.WriteLine("----------------------------------------------------------");
Console.WriteLine("-- {0}", title);
Console.WriteLine("----------------------------------------------------------");
}
private static void WriteRow(string key, string value)
{
Console.WriteLine("{0}\t\t{1}", key.PadRight(30, ' '), value);
}
private static void EnumerateTypes(IDbConnection connection, string template, IEnumerable<string> types)
{
EnumerateTypes(connection, template, types, (arg1, arg2) => { });
}
private static void EnumerateTypes(IDbConnection connection, string template, IEnumerable<string> types, Action<string, string> action)
{
connection.Open();
using (var command = connection.CreateCommand())
{
foreach (var type in types)
{
var value = "";
command.CommandText = string.Format(template, type);
try
{
using (var reader = command.ExecuteReader())
{
if (reader.Read())
value = reader[0].GetType().FullName;
else
value = "<no data read>";
}
}
catch (Exception ex)
{
value = ex.Message;
}
WriteRow(type, value);
action(type, value);
}
}
}
private static IEnumerable<string> SqlServerIntegers()
{
yield return "tinyint";
yield return "smallint";
yield return "int";
yield return "bigint";
for (int precision = 1; precision <= 38; ++precision)
{
yield return "numeric(" + precision + ", 0)";
}
yield break;
}
private static IEnumerable<string> SqlServerFloatings()
{
yield return "real";
yield return "float";
for (int precision = 1; precision <= 38; ++precision)
{
for (int scale = 1; scale <= precision; ++scale)
yield return "numeric(" + precision + ", " + scale + ")";
}
yield break;
}
private static IEnumerable<string> OracleIntegers()
{
for (int precision = 1; precision <= 38; ++precision)
{
yield return "number(" + precision + ", 0)";
}
yield break;
}
private static IEnumerable<string> OracleFloatings()
{
for (int precision = 1; precision <= 38; ++precision)
{
for (int scale = 1; scale <= precision; ++scale)
yield return "number(" + precision + ", " + scale + ")";
}
yield break;
}
public static void Main(string[] args)
{
WriteHeader("C# types - CLR names");
Console.WriteLine("{0}\t\t{1}", "byte".PadRight(30, ' '), typeof(byte).FullName);
Console.WriteLine("{0}\t\t{1}", "short".PadRight(30, ' '), typeof(short).FullName);
Console.WriteLine("{0}\t\t{1}", "int".PadRight(30, ' '), typeof(int).FullName);
Console.WriteLine("{0}\t\t{1}", "long".PadRight(30, ' '), typeof(long).FullName);
Console.WriteLine("{0}\t\t{1}", "float".PadRight(30, ' '), typeof(float).FullName);
Console.WriteLine("{0}\t\t{1}", "double".PadRight(30, ' '), typeof(double).FullName);
var OracleToClrInteger = new Dictionary<string, string>();
var OracleToClrFloating = new Dictionary<string, string>();
var SqlServerToClrInteger = new Dictionary<string, string>();
var SqlServerToClrFloating = new Dictionary<string, string>();
WriteHeader("Oracle integers mapping (Oracle Data Provider)");
using (var connection = new OracleConnection(OracleConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0}) FROM DUAL", OracleIntegers(), (type, value) => OracleToClrInteger.Add(type, value));
}
WriteHeader("SQLServer integers mapping");
using (var connection = new SqlConnection(SqlServerConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0})", SqlServerIntegers(), (type, value) => SqlServerToClrInteger.Add(type, value));
}
WriteHeader("Oracle integers mapping (Microsoft Oracle Client)");
using (var connection = new System.Data.OracleClient.OracleConnection(OracleConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0}) FROM DUAL", OracleIntegers());
}
WriteHeader("Oracle floats mapping (Oracle Data Provider)");
using (var connection = new OracleConnection(OracleConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0}) FROM DUAL", OracleFloatings(), (type, value) => OracleToClrFloating.Add(type, value));
}
WriteHeader("SQLServer floats mapping");
using (var connection = new SqlConnection(SqlServerConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0})", SqlServerFloatings(), (type, value) => SqlServerToClrFloating.Add(type, value));
}
WriteHeader("Oracle floats mapping (Microsoft Oracle Client)");
using (var connection = new System.Data.OracleClient.OracleConnection(OracleConnectionString))
{
EnumerateTypes(connection, "SELECT CAST(0 AS {0}) FROM DUAL", OracleFloatings());
}
WriteHeader("Suggested integer type mapping Oracle -> SqlServer");
foreach (var pair in OracleToClrInteger)
{
if (pair.Value == "System.Decimal")
WriteRow(pair.Key, pair.Key.Replace("number", "numeric"));
else
{
if (!SqlServerToClrInteger.Values.Contains(pair.Value))
WriteRow(pair.Key, "???");
else
WriteRow(pair.Key, SqlServerToClrInteger.First(p => p.Value == pair.Value).Key);
}
}
WriteHeader("Suggested floating type mapping Oracle -> SqlServer");
foreach (var pair in OracleToClrFloating)
{
if (pair.Value == "System.Decimal")
WriteRow(pair.Key, pair.Key.Replace("number", "numeric"));
else
{
if (!SqlServerToClrFloating.Values.Contains(pair.Value))
WriteRow(pair.Key, "???");
else
WriteRow(pair.Key, SqlServerToClrFloating.First(p => p.Value == pair.Value).Key);
}
}
}
}
}
```
The most interesting part:
```
----------------------------------------------------------
-- Oracle integers mapping (Oracle Data Provider)
----------------------------------------------------------
number(1, 0) System.Int16
number(2, 0) System.Int16
number(3, 0) System.Int16
number(4, 0) System.Int16
number(5, 0) System.Int32
number(6, 0) System.Int32
number(7, 0) System.Int32
number(8, 0) System.Int32
number(9, 0) System.Int32
number(10, 0) System.Int64
number(11, 0) System.Int64
number(12, 0) System.Int64
number(13, 0) System.Int64
number(14, 0) System.Int64
number(15, 0) System.Int64
number(16, 0) System.Int64
number(17, 0) System.Int64
number(18, 0) System.Int64
number(19, 0) System.Decimal
number(20, 0) System.Decimal
number(21, 0) System.Decimal
number(22, 0) System.Decimal
number(23, 0) System.Decimal
number(24, 0) System.Decimal
```
|
What are inode generation numbers?
I'm planning to implement a FUSE filesystem using low-level API and currently trying to understand the `fuse_entry_param` structure.
I wonder what `unsigned long fuse_entry_param::generation` actually means. [Documentation says](https://web.archive.org/web/20150910165818/http://fuse.sourceforge.net/doxygen/structfuse__entry__param.html#a6481786ffc9fcf968df41953d3f0bf98) just that `ino`/`generation` pair should be unique for the filesystem's lifetime, but does not go into any details.
What are the semantics of inode generations, and how are they used?
For example, can I just consider `generation` as an additional bit of `ino` (like some sort of namespace) and use them freely to map arbitrary lifetime-unique 128-bit (`2*sizeof(unsigned long)` on x86\_64) values to inodes? Or are generations meant to be only incremented sequentially? What happens when inode numbers collide, but their generation numbers differ?
|
The 'generation' field is important if your inode number generator may generate different inode numbers at different times for the same object. This is uncommon for on-disk file systems, but it may happen for network file systems (like NFS, see [1](https://web.archive.org/web/20100109210802/http://old.nabble.com/fuse_entry_param-td26661841.html)).
It is mentioned in [1](https://web.archive.org/web/20100109210802/http://old.nabble.com/fuse_entry_param-td26661841.html) that a server may use a different set of (fuse) inode numbers/(nfs) file handles after a restart. If that happens, it is possible that the new inode numbers map to objects in a different way than the inode numbers which were given out before the server restart.
A client could use a different generation number for the set of inodes *before* the restart and for the set of inodes *after* the restart to make clear which inode is meant.
If your file system has a static generation scheme for inodes (where an inode number always points to the same object), there is no need to use the generation number and it may be used to extend the inode number.
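In a low-level FUSE filesystem, the generation is simply filled in next to the inode number when you reply to a lookup. A rough sketch (the field values are placeholders, not taken from the question):
```
#define FUSE_USE_VERSION 26
#include <fuse_lowlevel.h>
#include <string.h>
#include <sys/stat.h>

static void my_lookup(fuse_req_t req, fuse_ino_t parent, const char *name)
{
    struct fuse_entry_param e;
    memset(&e, 0, sizeof(e));

    e.ino        = 42;   /* inode number you assigned to the looked-up object */
    e.generation = 1;    /* bump this if the same ino is ever reused for a
                            different object during the filesystem's lifetime */
    e.attr.st_ino  = e.ino;
    e.attr.st_mode = S_IFREG | 0644;
    e.attr_timeout  = 1.0;
    e.entry_timeout = 1.0;

    fuse_reply_entry(req, &e);
}
```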
|
How to link a work item to a pull request using REST API in Azure DevOps?
Within a release pipeline a new Pull Request is created using the REST API.
How can a specific (already existing) Work Item be linked to the Pull Request using the REST API?
In the current version (DevOps 2019) it is not supported to link Work Items using [Pull Request API](https://learn.microsoft.com/en-us/rest/api/azure/devops/git/pull%20requests/update?view=azure-devops-server-rest-5.0). (See also related [community issue](https://developercommunity.visualstudio.com/content/problem/569673/unable-to-link-work-items-to-pull-request-via-rest.html).)
|
Using PowerShell, the following snippet may help.
```
$requestUri = "$tfsCollectionUri/$teamProject/_apis/wit/workitems/$workItemId?api-version=5.0"
$json = @"
[ {
    "op": "add",
    "path": "/relations/-",
    "value": {
      "rel": "ArtifactLink",
      "url": "$pullRequestArtifact",
      "attributes": { "name": "pull request" }
    }
} ]
"@
$response = Invoke-RestMethod -Uri $requestUri -UseDefaultCredentials -ContentType "application/json-patch+json" -Method Patch -Body $json
```
Note, `$pullRequestArtifact` needs to be set before the body is built (the double-quoted here-string above expands it). You can get it e.g. from [get request](https://learn.microsoft.com/en-us/rest/api/azure/devops/git/pull%20requests/get%20pull%20request?view=azure-devops-server-rest-5.0#gitpullrequest).
|
Difference between / and /root-node
My document looks like the following:
```
<a>
whatever
</a>
```
If I run `/` or `/a`, the entire document is returned (at least effectively).
If I run `/a/..` the entire document is returned.
But `/..` returns an empty sequence.
Considering `/` and `/a` return the same node, how come `/a/..` and `/..` are different?
|
## The Document Node
The XML code you provided as a document is actually wrapped in another node, the "document node". The document node is another node kind; others are elements, attributes, text nodes, comments and processing instructions. Using XQuery/XPath 2.0 notation, it would look something like this:
```
document{
<a>
whatever
</a>
}
```
## Effects on Queries
- `/` selects the document node
- `/a` selects the root element, which is the only child of the document node
- `/..` returns the empty sequence, as the document node has no parent node
- `/a/..` again selects the parent node of the root element, which again is the document node
- `/../a` has no results, as we "stepped out of the tree" (compare with `/..`)
## Why we Need a Document Node
The document node is important, as the [XML specification](http://www.w3.org/TR/REC-xml/#sec-documents) allows other nodes to follow the root node, namely processing instructions and comments (and whitespace). From the XML grammar:
```
document ::= prolog element Misc*
Misc ::= Comment | PI | S
```
Without a document node, these elements wouldn't be reachable for XPath, as they are not elements of the "root element subtree".
So, this would also be a valid XML document (\*):
```
document {
<a>
whatever
</a>
<!-- Just do nothing -->
<?php foo(); ?>
}
```
(\*) *This isn't valid XPath 2.0 any more, as we would have to give a node sequence. I omitted the commas `,` after each node necessary for XPath 2.0, as this is only for demonstration purpose.*
|
How to add properties to topojson file?
**Given a data.tsv file such as:**
```
id code name
1 AL Alabama
2 AK Alaska
4 AZ Arizona
5 AR Arkansas
6 CA California
... ... ...
```
**Given a topojson.json file such as:** (the structure is correct, the numeric values are random)
```
{
"type":"Topology",
"transform":
{
"scale": [0.0015484881821515486,0.0010301030103010299],
"translate":[-5.491666666666662,41.008333333333354]
},
"objects":
{
"states":
{
"type":"GeometryCollection",
"geometries":
[
{"type":"Polygon","arcs":[[0]],"properties":{"code_2":"AL"}},
{"type":"Polygon","arcs":[[1]],"properties":{"code_2":"AK"}}
]
}
},
"arcs":
[
[[2466,9916],[-25,-5],[3,-13]],
[[2357,9852],[1,-2],[1,-2]]
]
}
```
**How to use the common fields (1) to inject the values of another field (2) into the json file?**
1]: `data.tsv#code` and `topojson.json.objects.states.geometries.properties.code_2`
2]: `data.tsv#name`
The end result should contain:
```
{"type":"Polygon","arcs":[[0]],"properties":{"code_2":"AL", "name":"Alabama" }},
{"type":"Polygon","arcs":[[1]],"properties":{"code_2":"AK", "name":"Alaska" }},
```
---
**EDIT: Accepted answer:**
```
topojson -o final.json -e data.tsv --id-property=code_2,code -p code_2,state=name -- topojson.json
```
|
Try using this:
```
topojson -o final.json -e data.tsv \
--id-property=code_2,code -p code_2,state=name \
-- topojson.json
```
Which should output:
```
{
"type": "Topology",
"transform": {
"scale": [
0.000016880209206372492,
0.000007005401010148724
],
"translate": [ -1.8418800213354616, 51.15278777877789 ]
},
"objects": {
"states": {
"type": "GeometryCollection",
"geometries": [
{
"type": "Polygon",
"arcs": [
[ 0 ]
],
"id": "AK",
"properties": {
"code_2": "AK",
"state": "Alaska"
}
}
]
}
},
"arcs": [
[[2466,9916],[-25,-5],[3,-13]],
[[2357,9852],[1,-2],[1,-2]]
]
}
```
From the [Command Line Reference wiki](https://github.com/mbostock/topojson/wiki/Command-Line-Reference):
>
> **--id-property** name of feature property to promote to geometry id
>
>
>
By using the `code_2` property with this option, you promote it as the feature ID.
>
> Prepend a **+** in front of the input property name to coerce its value to a number.
>
>
>
Plus:
>
> If the properties referenced by **--id-property** are null or undefined,
> they are **omitted** from the output geometry object. Thus, the generated
> objects may not have a defined ID if the input features did not have a
> property with the specified name.
>
>
>
So, when you are using `+code` and `+code_2`, they are probably `undefined`, as you can't convert the `AK` string value to a number.
>
> Here, the input property "FIPS" is coerced to a number and used as the
> feature identifier; likewise, the column named "FIPS" is used as the
> identifier in the CSV file. (If your CSV file uses a different column
> name for the feature identifier, you can specify multiple id
> properties, such as `--id-property=+FIPS,+id`.)
>
>
>
That's why you have to add the `code` to the `--id-property=code_2,code` option. This is how the mapping is made (the `code_2` from *topojson.json* and the `code` column from *data.tsv*).
>
> Then, the output property "unemployment" is generated from the
> external data file, unemployment.tsv, which defines the input property
> "rate"
>
>
>
In our case, `-p code_2,state=name` specifies that we will preserve the `code_2` property and we will rename the `name` property to `state`. The **Properties** and **External Properties** sections in the aforementioned documentation wiki are pretty informative on the matter.
|
How To Pass GET Parameters To Laravel Form With GET Method?
I'm stuck on this very basic form that I could not get right: I want to build a search form with a text input and two select controls, with a route that accepts the parameters. The problem is that when I submit the form, it maps the parameters with the question mark, not the Laravel way.
## Markup
```
{{ Form::open(['route' => 'search', 'method' => 'GET'])}}
<input type="text" name="term"/>
<select name="category" id="">
<option value="auto">Auto</option>
<option value="moto">Moto</option>
</select>
{{ Form::submit('Send') }}
{{ Form::close() }}
```
## Route
```
Route::get('/search/{category}/{term}', ['as' => 'search', 'uses' => 'SearchController@search']);
```
When I submit the form it redirects me to
```
search/%7Bcategory%7D/%7Bterm%7D?term=asdasd&category=auto
```
How can I pass these parameters to my route the Laravel way, and without JavaScript! :D
|
The simplest way is just to accept the incoming request, and pull out the variables you want in the Controller:
```
Route::get('search', ['as' => 'search', 'uses' => 'SearchController@search']);
```
and then in `SearchController@search`:
```
class SearchController extends BaseController {
public function search()
{
$category = Input::get('category', 'default category');
$term = Input::get('term', false);
// do things with them...
}
}
```
Usefully, you can [set defaults in `Input::get()`](http://laravel.com/docs/4.2/requests#basic-input) in case nothing is passed to your Controller's action.
[As joe\_archer says](https://stackoverflow.com/a/27298740/658210), it's not necessary to put these terms into the URL, and it might be better as a POST (in which case you should update your call to `Form::open()` and also your search route in routes.php - `Input::get()` remains the same)
|
What is the "Block Element" in Firefox?
Using Firefox 38.6.0 on RHEL7, if I right click on an image on a webpage, I can see: "Block element".
If I accidentally blocked an image, how can I enable it again? Where is the list of blocked elements in Firefox? I only have uBlock as an add-on, and I don't think this feature comes from that add-on.
|
It is definitely uBlock. I also use uBlock and if I right click on an image, the uBlock logo is shown next to "Block Element".
[](https://i.stack.imgur.com/HY4VW.png)
To remove the filter, click on uBlock and click the gear to open the Dashboard.
[](https://i.stack.imgur.com/cuOA7.png)
Then go to My filters and remove the ones you accidentally added. Apply changes and it should be fine.
[](https://i.stack.imgur.com/WwNiS.png)
|
What are benefits of Scrum compared to 'tackle things as they come'?
Much has been said about the advantages of agile development and Scrum in particular; however, most of these assessments assume that an organisation comes from a very rigid Waterfall methodology. But what if a company is organised less strictly than advised by Scrum, not more?
The organisation I work for is a former startup. The team is used to tackling any issues or requirements as they come. The organisation does make a half-hearted attempt at implementing Scrum but the general attitude is rather negative, with many people seeing Scrum only as corporate red tape. What benefits will we get from applying more discipline?
- On the dev side, the team is reluctant to sacrifice coding time for planning sessions. Especially if the whole team was supposed to attend: so far any planning or estimation was done by a team leader alone (unless they actively sought assistance from some of their devs).
- The business, on the other hand, is not used to having to provide detailed requirements for whole two weeks up front. So far they only provided a general sketch of the things they needed most at the moment, and fleshed it out on the go. Now, if there are gaps in the requirements at the start of the sprint, the team should in theory tell them that the item is not 'ready' and the new feature will not be worked on for the next two weeks, meaning it will be deployed in a month at the earliest. How would you make the business accept this loss of flexibility?
How do you sell agile development to a company that is already agile to the point of chaos?
|
If your teams are producing code of appropriate quality at a rate acceptable by management, there is nothing to be gained by using scrum. In a sense, scrum is a framework to guide a team to reach this exact state.
However, if your code is of low quality, or if the code you develop doesn't always meet the needs of the end user, or if management is unhappy with the lack of transparency into the development process and finds planning around development schedules difficult, scrum may help.
Scrum is about embracing change by helping teams focus on efforts that deliver real value to the end customer in a predictable manner. By introducing a short feedback cycle, teams can be sure they are providing exactly what the customer needs. In addition, management has more visibility into what the team is doing now, what they are planning to do over the next few weeks, and when features will be ready.
And finally, scrum provides a framework for teams to become better at writing software through continuous improvement.
The end result of a team using scrum is a team that can predictably deliver high quality code that meets the needs of the user. If your team is already doing that, there's no point in changing what you are doing.
|
Using integer values and String identifier in an Enum
So I currently have two enums:
```
public enum AuthorizationLevel
{
FULL,
HALF,
NONE;
};
public enum DatabaseLoggingLevel
{
HIGH,
MED,
LOW,
AUTH_ONLY,
NONE
}
```
I want to be able to associate integers with the values in the enums so that I can have code like so:
```
if(databaseLoggingLevel < ClassName.DatabaseLoggingLevel.HIGH) return;
```
This is just so that certain logging is disabled when the level is less than `HIGH`. I thought about making a helper function that returns an integer value associated with each enum value with a switch statement, but that seems hacky. Is there something I am missing?
|
You can add an integer instance field to your enum, and add a parameterized constructor:
```
enum AuthorizationLevel {
FULL(1), HALF(2), NONE(3);
private final int level;
private AuthorizationLevel(int level) {
this.level = level;
}
public int getLevel() {
return level;
}
};
```
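With that in place, the comparison from the question can be written against the numeric levels (assuming `DatabaseLoggingLevel` gets the same `getLevel()` treatment):
```
if (databaseLoggingLevel.getLevel() < DatabaseLoggingLevel.HIGH.getLevel()) {
    return;
}
```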
Note that there is an `ordinal()` method which will give you position of your enum constants, but you should avoid using it. Again, quoting [Effective Java Item - 31 (Use instance fields instead of ordinal)](http://books.google.co.in/books?id=ka2VUBqHiWkC&pg=PR10&lpg=PR10&dq=Effective+Java+Item+31&source=bl&ots=yYJnMen5O_&sig=8VQVA0z44gWmg4zOJRoY9oyp6EE&hl=en&sa=X&ei=N7ooUrmmDMSQrQe6sIDwDA&ved=0CD8Q6AEwAw#v=onepage&q=Effective%20Java%20Item%2031&f=false):
>
> If the constants are reordered, the numberOfMusicians method will
> break. If you want to **add a second enum constant associated with an
> int value that you’ve already used**, you’re out of luck.
>
>
> Also, you can’t add a constant for an int value without adding
> constants for all intervening int values. For example, suppose you
> want to add a constant representing a triple quartet, which consists
> of twelve musicians. There is no standard term for an ensemble
> consisting of eleven musicians, so you are forced to add a dummy
> constant for the unused int value (11). At best, this is ugly. If many
> int values are unused, it’s impractical.
>
>
>
|
Sizeof arrays and pointers
Here is my sample code
```
#include<stdio.h>
void main()
{
int arr[]={1,2,3,4,5,6};
char *ptr,a;
a='c';
ptr=&a;
int *ptr1,a1;
a1=4;
ptr1=&a1;
printf("%d %d %d",sizeof(arr), sizeof(ptr1), sizeof(ptr));
}
```
Now, as far as I understand, sizeof will tell me the size required to store the variable. The output for this one is
`24 4 4`
Why is the size of `arr` 24? After all, it's just a pointer and it should have size 4.
Thanks.
|
"...after all it's just a pointer..."? No. Array is not a pointer. Array is an array object: a solid continuous block of memory that stores the array elements, no pointers of any kind involved. In your case array has 6 elements of size 4 each. That is why your `sizeof` evaluates to 24.
The common misconception about arrays being pointers has been debunked millions of times, but somehow it continues to pop up now and then. Read the FAQ and come back if you have any questions about it:
<http://c-faq.com/aryptr/index.html>
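One place where an array does end up as a pointer is when it is passed to a function: the parameter is adjusted to a pointer to the first element, so `sizeof` inside the function gives the pointer size. A small sketch (not from the original answer):
```
#include <stdio.h>

void f(int arr[6])                /* the parameter is adjusted to int *arr */
{
    printf("%zu\n", sizeof arr);  /* size of a pointer, e.g. 4 or 8 */
}

int main(void)
{
    int arr[6] = {1, 2, 3, 4, 5, 6};
    printf("%zu\n", sizeof arr);  /* 24: the whole array object */
    f(arr);                       /* arr decays to &arr[0] here */
    return 0;
}
```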
P.S. As @Joachim Pileborg correctly noted in his answer, `sizeof` is not a function. It is an operator.
---
Another context in which arrays behave differently from pointers is the unary `&` operator (the "address of" operator). When unary `&` is applied to a pointer of type `int *` it produces a pointer of type `int **`. When unary `&` is applied to an array of type `int [10]` it produces a pointer of type `int (*)[10]`. These are two very different types.
```
int *p = 0;
int a[10] = { 0 };
int **p1 = &p; /* OK */
int **p2 = &a; /* ERROR */
int (*p3)[10] = &a; /* OK */
```
It is another popular source of questions (and errors): sometimes people expect `&` to produce a `int **` pointer when applied to an `int [10]` array.
|
Laravel automatic resolution with parameters
I have a class like this:
```
class PostValidator
{
public function __construct(Validator $validator, $data)
{
$this->validator = $validator;
$this->data = $data;
}
}
```
I read Laravel doc about IoC automatic resolution, it gives an example:
```
class FooBar {
public function __construct(Baz $baz)
{
$this->baz = $baz;
}
}
$fooBar = App::make('FooBar');
```
Is it possible to use `App::make` only without `App::bind` (with closure) to instantiate my class above which also passing parameter `$data`?
|
No, you can't do that.
The idea is that you pass only the dependencies to the constructor, and obviously `data` is not one. Validator works with the data, but does not depend on the data.
Instead use setter for the data.
```
class PostValidator
{
public function __construct(Validator $validator)
{
$this->validator = $validator;
}
public function setData($data)
{
$this->data = $data;
}
}
```
and simply call it explicitly:
```
$validator = App::make('PostValidator');
$validator->setData($data);
// or in the controller, which is what you're doing most likely
public function __construct(PostValidator $validator)
{
    $this->validator = $validator;
}
public function update($id)
{
$data = Input::only([ input that you need ]);
$this->validator->setData($data);
// run the validation
...
}
```
---
edit: as per comment, this is what 2nd argument `$parameters` does:
```
// Foo class with Eloquent Models as dependencies
public function __construct(User $user, Category $category, Post $post)
{
$this->user = $user;
$this->category = $category;
$this->post = $post;
}
```
then IoC container will resolve the dependencies as newly instantiated models:
```
$foo = App::make('Foo');
$foo->user; // exists = false
$foo->category; // exists = false
$foo->post; // exists = false
```
but you can do this if you want:
```
$user = User::first();
$cat = Category::find($someId);
$foo = App::make('Foo', ['category' => $cat, 'user' => $user]);
$foo->user; // exists = true, instance you provided
$foo->category; // exists = true, instance you provided
$foo->post; // exists = false, newly instantiated like before
```
|
Can age be a composite attribute in DBMS?
My question is: can age be considered a composite attribute? Name is a composite attribute because it can be divided into first name, middle name and last name. So can age be a composite attribute, since you can divide it into years, months and then days?
|
Can *age* be a composite attribute? No. age is a function of birthdate and now.
```
age = now - birthdate
```
So, what about birthdate? Can it be a composite attribute?
Yes, it can, but it only makes sense to store dates as a composite in data warehousing situations.
Often, when warehousing data, you would store year, month, and day as separate things to make it easier to write queries such as
>
> How many people were born in March?
>
>
>
Or
>
> Of all the people born in 1982, how many have blue eyes.
> How does that compare to April 1992?
>
>
>
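For illustration, the first of those questions becomes a trivial filter once the month is stored as its own column; a rough sketch with purely hypothetical table and column names:
```
-- "How many people were born in March?"
SELECT COUNT(*)
FROM person p
JOIN birth_date_dim d ON d.date_key = p.birth_date_key
WHERE d.birth_month = 3;
```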
You'd also likely have a table that maps dates to quarters, so you could ask things like:
>
> How do birth rates compare between Q1 and Q2 over the last decade?
>
>
>
These are contrived examples, but hopefully they illustrate the point. I'd recommend doing some research on "star schema" databases and "slowly changing dimensions".
|
Use container width in Bootstrap 4
We have recently changed to Bootstrap 4, and I'm looking into all the new ways to set widths, breakpoints etc.
I want to make a div have a `max-width` of the current breakpoint's container width, but I can't find the standard SASS variable name for that specific width.
Note that I can't use classes (e.g. `.col-xl-`) for this, since I don't have access to this specific part of the HTML.
|
Have a look at this built-in mixin:
```
// For each breakpoint, define the maximum width of the container in a media query
@mixin make-container-max-widths($max-widths: $container-max-widths, $breakpoints: $grid-breakpoints) {
@each $breakpoint, $container-max-width in $max-widths {
@include media-breakpoint-up($breakpoint, $breakpoints) {
width: $container-max-width;
max-width: 100%;
}
}
}
```
You can create a similar one to your needs like:
```
@mixin make-max-widths-container-width($max-widths: $container-max-widths, $breakpoints: $grid-breakpoints) {
@each $breakpoint, $container-max-width in $max-widths {
@include media-breakpoint-up($breakpoint, $breakpoints) {
max-width: $container-max-width;
}
}
}
```
and use it:
```
.my-custom-class{
@include make-max-widths-container-width();
}
```
"Compiled" CSS produced (with default values on breakpoints and container widths):
```
@media (min-width: 576px) {
.my-custom-class {
max-width: 540px;
}
}
@media (min-width: 768px) {
.my-custom-class {
max-width: 720px;
}
}
@media (min-width: 992px) {
.my-custom-class {
max-width: 960px;
}
}
@media (min-width: 1200px) {
.my-custom-class {
max-width: 1140px;
}
}
```
|
How to retrieve a list of available/installed fonts in android?
In Java I would do something like:
```
java.awt.GraphicsEnvironment ge =
java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment();
Font[] fonts = ge.getAllFonts();
```
is there an Android equivalent?
|
Taken from Mark Murphy's answer on the Android Developers mailing list:
>
> <http://developer.android.com/reference/android/graphics/Typeface.html>
>
>
> There are only three fonts: normal
> (Droid Sans), serif (Droid Serif), and
> monospace (Droid Sans Mono).
>
>
> While there may be additional fonts
> buried in WebKit somewhere, they
> appear to be inaccessible to
> developers outside of WebKit. :-(
>
>
> The only other fonts are any TrueType ones you bundle with your application.
>
>
>
Edit: Roboto is a new font which came in with Android 4.0. You can use this library project to use it in all versions back to API level 4 <https://github.com/mcalliph/roboto-text-view>
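If you go the bundled-TrueType route, loading such a font looks roughly like this (assuming you placed the file under `assets/fonts/` in your project; the file name here is made up):
```
// Inside an Activity (or anywhere with a Context/AssetManager at hand)
Typeface custom = Typeface.createFromAsset(getAssets(), "fonts/MyFont.ttf");
textView.setTypeface(custom);
```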
|
Designing endpoint for performing partial updates with PATCH
Consider a big entity named `entity`. It exposes 100 different update operations, including adding and removing properties, updating properties, etc.
What should be preferred: using a single URI, with the many cases handled by one application server route:
```
PATCH /entity/[id] {"type":"a","key1":"val1","key2":"val2"} or
{"type":"b","key3":"val3","key4":"val4"} or ...
```
... or using many URIs, with one case handled by each application server route:
```
PATCH /entity/[id]/a {"key1":"val1","key2":"val2"}
PATCH /entity/[id]/b {"key3":"val3","key4":"val4"}
```
Or maybe use `PUT`? The need is to update the entity (very) partially.
What would be the best approach, taking into consideration REST compliance, load balancers, caching, KISS, etc.? Any ideas would be appreciated.
|
# tl;dr
You should be fine with a single endpoint supporting `PATCH`, as long as you do it right.
The `PATCH` request payload is expected to contain a *set of instructions* for modifying the target resource. Suitable formats for representing this set of intructions are [*JSON Patch*](https://www.rfc-editor.org/rfc/rfc6902) and [*JSON Merge Patch*](https://www.rfc-editor.org/rfc/rfc7396) (and they allow modifications to nested values in your target JSON document).
# Performing modifications to the state of a resource
To perform modifications to a resource, you could either use `PUT` or `PATCH` (or both). The difference between these methods, however, is reflected in the way the server processes the request payload to modify the target resource:
- In a `PUT` request, the payload is a *modified version* of the resource stored on the server. And the client is requesting the stored version to be *replaced* with the new version. So it may not be entirely suitable for *partial* modifications.
- In a `PATCH` request, the request payload contains a *set of instructions* describing how a resource currently stored on the server should be *modified* to produce a new version. It means it's the most suitable approach for performing *partial* modifications to a resource.
To make things clear, I put together some examples below.
## Using `PUT`
Consider, for example, you are creating an API to manage contacts. On the server, you have a resource that can be represented with the following JSON document:
```
{
"id": 1,
"name": "John Appleseed",
"work": {
"title": "Engineer",
"company": "Acme"
},
"phones": [
{
"phone": "0000000000",
"type": "mobile"
}
]
}
```
Let’s say that John has been promoted to senior engineer and you want to keep you contact list updated. We could modify this resource using a `PUT` request, as shown below:
```
PUT /contacts/1 HTTP/1.1
Host: example.org
Content-Type: application/json
{
"id": 1,
"name": "John Appleseed",
"work": {
"title": "Senior Engineer",
"company": "Acme"
},
"phones": [
{
"phone": "0000000000",
"type": "mobile"
}
]
}
```
With `PUT`, the full representation of the new state of the resource must be sent to the server even when you need to modify a single field of a resource, which may not be desirable in some situations.
From the [RFC 7231](https://www.rfc-editor.org/rfc/rfc7231):
>
> [**4.3.4. PUT**](https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4)
>
>
> The `PUT` method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. […]
>
>
>
## Using `PATCH`
The `PATCH` method definition, however, doesn’t enforce any format for the request payload apart from mentioning that the request payload should contain a set of instructions describing how the resource will be modified and that set of instructions is identified by a media type (which defines how the `PATCH` should be applied by the server).
From the [RFC 5789](https://www.rfc-editor.org/rfc/rfc5789):
>
> [**2. The PATCH Method**](https://www.rfc-editor.org/rfc/rfc5789#section-2)
>
>
> The `PATCH` method requests that a set of changes described in the request entity be applied to the resource identified by the Request-URI. The set of changes is represented in a format called a “patch document” identified by a media type. […]
>
>
>
Some suitable formats for describing such set of changes are listed below:
### JSON Patch
It expresses a sequence of operations to be applied to a JSON document. It is defined in the [RFC 6902](https://www.rfc-editor.org/rfc/rfc6902) and is identified by the `application/json-patch+json` media type.
The JSON Patch document represents an array of objects and each object represents a single operation to be applied to the target JSON document.
The evaluation of a JSON Patch document begins against a target JSON document and the operations are applied sequentially in the order they appear in the array. Each operation in the sequence is applied to the target document and the resulting document becomes the target of the next operation. The evaluation continues until all operations are successfully applied or until an error condition is encountered.
The operation objects must have exactly one [`op`](https://www.rfc-editor.org/rfc/rfc6902#section-4) member, whose value indicates the operation to perform:
- [`add`](https://www.rfc-editor.org/rfc/rfc6902#section-4.1): Adds the value at the target location; if the value exists in the given location, it’s replaced.
- [`remove`](https://www.rfc-editor.org/rfc/rfc6902#section-4.2): Removes the value at the target location.
- [`replace`](https://www.rfc-editor.org/rfc/rfc6902#section-4.3): Replaces the value at the target location.
- [`move`](https://www.rfc-editor.org/rfc/rfc6902#section-4.4): Removes the value at a specified location and adds it to the target location.
- [`copy`](https://www.rfc-editor.org/rfc/rfc6902#section-4.5): Copies the value at a specified location to the target location.
- [`test`](https://www.rfc-editor.org/rfc/rfc6902#section-4.6): Tests that a value at the target location is equal to a specified value.
Any other values are considered errors.
Having said that, a request to modify John’s job title could be:
```
PATCH /contacts/1 HTTP/1.1
Host: example.org
Content-Type: application/json-patch+json
[
{ "op": "replace", "path": "/work/title", "value": "Senior Engineer" }
]
```
### JSON Merge Patch
It's a format that describes the changes to be made to a target JSON document using a syntax that closely mimics the document being modified. It is defined in the [RFC 7396](https://www.rfc-editor.org/rfc/rfc7396) and is identified by the `application/merge-patch+json` media type.
The server processing a JSON Merge Patch document determine the exact set of changes being requested by comparing the content of the provided patch against the current content of the target document:
- If the merge patch contains members that do not appear within the target document, those members are added.
- If the target does contain the member, the value is replaced.
- `null` values in the merge patch indicate that existing values in the target document are to be removed.
- Other values in the target document will remain untouched.
With this, a request to modify John’s job title could be:
```
PATCH /contacts/1 HTTP/1.1
Host: example.org
Content-Type: application/merge-patch+json
{
"work": {
"title": "Senior Engineer"
}
}
```
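The merge algorithm itself is small enough to sketch; here is a rough, framework-agnostic Python version of the RFC 7396 rules above, just to make them concrete:
```
def merge_patch(target, patch):
    """Apply an RFC 7396 JSON Merge Patch to a target document."""
    if not isinstance(patch, dict):
        return patch                  # a non-object patch replaces the target outright
    if not isinstance(target, dict):
        target = {}                   # a non-object target is treated as an empty object
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)     # null removes the member
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# e.g. merge_patch(contact, {"work": {"title": "Senior Engineer"}})
```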
# The bottom line
When it comes to modifying the state of a resource, the usage of `PUT` and `PATCH` ultimately depends on your needs. Choose one, the other or both. But stick to the standards when supporting them:
- If you choose to support `PUT`, the request payload must be a new representation of the resource (so the server will *replace* the state of the resource).
- If you choose to support `PATCH`, use a standard format for the payload, such as JSON Patch or JSON Merge Patch (which contains a set of instructions that tell the server how to *modify* the state of the resource).
There are many libraries around to parse JSON Patch documents, so don't reinvent the wheel. If you think JSON Patch is too complex for your needs, then use JSON Merge Patch. If you are comfortable with both formats, then nothing stops you from supporting both.
And, as both formats allow you to modify nested values in your JSON document, you should be fine with a single endpoint for `PATCH` even when you have to allow modification in the nested values or your resource.
### Validating the state of your resources
If you are concerned about how to keep things consistent after applying `PATCH` to your resources, I recommend having a look at this [answer](https://stackoverflow.com/a/36175349/1426227). In short, you are advised to **decouple** the models that represent your API resources from the models that represent your domain.
So, when handling a `PATCH` request:
1. Fetch the domain model instance you intend to update.
2. Convert the domain model to the correspondent API resource model.
3. Apply the patch to the API resource model.
4. Convert the API resource model with the updates back to the domain model.
5. Validate the state of the domain model:
- If the state is valid, accept the request and persist the changes made to the domain model.
- Otherwise, refuse the request.
You may also want to refer to the [RFC 5789](https://www.rfc-editor.org/rfc/rfc5789) for details on [error handling](https://www.rfc-editor.org/rfc/rfc5789#section-2.2) when processing `PATCH` request.
### Further reading (specially if you use Java and Spring)
In case you use Java and Spring, you may want to have a look this [post](https://cassiomolin.com/using-http-patch-in-spring) from my blog. If you don't use Java or Spring, the considerations around modifying resources with `PUT` or `PATCH`, in the first part of the post, may still interest you (although I nicely summarized the main idea in this answer).
|
How to access object with "creative" names in scala?
For example, given
```
object ~ {
def foo = ???
}
```
How do i access that method?
Neither of those work:
```
~.foo
`~`.foo
```
With both the compiler complains about "illegal start of simple expression".
And yes, I know I probably shouldn't name classes "~", but both the standard library and some other libraries do, and sometimes you need to work with them.
Added: Looking at [sschaef's answer](https://stackoverflow.com/a/40527103/130019) i tried
```
$tilde.foo
```
And that actually works. Not sure if that's intended or just an implementation detail of how those names are translated to JVM-identifiers. And whether that would work in other flavors of scala (e.g. Scala.js)?
I'll leave this open for a bit to see maybe someone chimes in with a more extensive answer.
|
The problem seems to exist only in 2.11:
```
Welcome to Scala 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_102).
Type in expressions for evaluation. Or try :help.
scala> object ~ { def foo = 0 }
defined object $tilde
scala> ~.foo
<console>:1: error: illegal start of simple expression
~.foo
^
```
In 2.12 it works fine:
```
Welcome to Scala 2.12.0 (OpenJDK 64-Bit Server VM, Java 1.8.0_102).
Type in expressions for evaluation. Or try :help.
> object ~ { def foo = 0 }
defined object $tilde
> ~.foo
res0: Int = 0
```
|
Unity3d Rotate around a specific point over time
I'm writing a function that slowly turns exactly 90 degrees over a period of time. So far everything is working fine; I just need to figure out how to make it rotate around a point instead of rotating around the transform's center.
```
protected bool _isRotating = false;
public IEnumerator RotateWithEasing(GameHelper.Axis axis, Vector3 isolatedAxisPoint, float inTime)
{
if(_isRotating)
{
yield break;
}
_isRotating = true;
var degrees = this.GetDegreesFromAxis(axis);
Quaternion fromAngle = transform.rotation;
Quaternion toAngle = Quaternion.Euler(transform.eulerAngles + degrees);
for (float t = 0f; t < 1f; t += Time.deltaTime / inTime)
{
transform.rotation = Quaternion.Lerp(fromAngle, toAngle, t);
yield return null;
}
_isRotating = false;
}
```
Can anyone help point me in the right direction? How can I modify this so it rotates around the specified isolatedAxisPoint?
|
Here's an actual example Unity tween that happened to be showing in a text editor in another window!
If you want to rotate something, it is trivial: just use deltaTime in a loop in an IEnumerator, and adjust the **eulerAngles** or just call **Rotate**. (Never use quaternions for any reason in Unity.) In this example I just call Rotate.
This is a **very basic pattern in Unity**.
Here is the **basic pattern of a tween** in Unity. You will do this 1000s of times in Unity!
1. calculate the end time
2. do a while loop until that time is reached
3. each frame, slide (or whatever) by the appropriate fraction.
Note that of course you
```
yield return null;
```
inside the loop, it means "wait until the next frame".
(Note that as with any tween, it's good practice to **force-set the final values** at the end so you know it's perfect. Notice in this example I simply set the eulerAngles, at the end when the loop is finished.)
```
private IEnumerator _roll( Vector3 delta )
{
rollBegins.Invoke();
Vector3 begin = transform.eulerAngles;
float rollSecs = 2f; // speed of the roll
float startTime = Time.time;
float endTime = startTime+rollSecs;
while (Time.time < endTime)
{
Vector3 d = delta * ( Time.deltaTime / rollSecs );
transform.Rotate(d, Space.World );
yield return null;
}
transform.eulerAngles = .. perfectly correct end values;
busy = false;
rollComplete.Invoke();
}
```
{Note, in that actual code example, "delta" is understood to be only on one axis; don't worry about this, it's just an example of a tween using a coroutine.}
Note - of course there are **"two ways"** to go inside the loop. You can calculate the **small amount** you should move it that frame. Or, you can just calculate what the **new position** should be at that time. It's up to you. Many programmers think the latter is more logical, if so, do that. Note too that you can do the while loop until a certain time has passed, or, you can do the while loop until you reach your destination! (Being very careful about float equalities.) The choice is yours.
Note that very very often you use **Lerp** or even better **SmoothStep** with such tweens. It is ubiquitous in Unity. Once you master the basic pattern, experiment with that.
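Since the original question is about rotating around a specific point rather than the transform's own center, the same coroutine pattern works with `Transform.RotateAround`. A minimal sketch (the point and axis are whatever you need, e.g. the `isolatedAxisPoint` from the question):
```
private IEnumerator RotateAroundPoint(Vector3 point, Vector3 axis, float degrees, float inTime)
{
    float rotated = 0f;
    while (rotated < degrees)
    {
        // fraction of the total turn for this frame, clamped so we never overshoot
        float step = Mathf.Min(degrees * (Time.deltaTime / inTime), degrees - rotated);
        transform.RotateAround(point, axis, step);
        rotated += step;
        yield return null;
    }
}
```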
Note that in my example the code is using **UnityEvent** to flag the beginning and end of the animation. This is extremely common. After all it's usually the case that once some movement finishes, you then have to go on and do something else. Another good example is, while something is animating, the user is blocked from say steering or whatever is the case in your game.
Excellent intro to UnityEvent <https://stackoverflow.com/a/36249404/294884> pls vote it up :)
---
For the record.
# Tweeng in Unity.
The truly advanced way to tween in Unity is to tweeng:
[Basic Tweeng code base appears in this question.](https://stackoverflow.com/q/36121120/294884)
It takes a while to get the hang of that but it is incredibly powerful. It's mindboggling that the tweeng extension is only a few lines of code.
|
Merging/Appending Justs in Haskell
I'm trying to do what must be blindingly obvious in Haskell, which is go from `Just [1]` and `Just [2]` to `Just [1, 2]`. However I can't find anything online as I keep finding related but unhelpful pages. So, how do you achieve this?
|
You can use `liftA2 (++)`:
```
liftA2 (++) :: Maybe [a] -> Maybe [a] -> Maybe [a]
```
`liftA2` just lifts a binary function into an [`Applicative`](http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Applicative.html). `Applicative`s were designed for lifting functions of arbitrary arguments in a context, so they're perfect for this. In this case, the `Applicative` we're using is `Maybe`. To see how this works, we can look at the definition:
```
liftA2 :: (Applicative f) => (a -> b -> c) -> f a -> f b -> f c
liftA2 f a b = f <$> a <*> b
```
`(<$>)` just lifts any function on pure values to one operating inside `f`: `(a -> b) -> f a -> f b`. (It's just an alias for `fmap`, if you're familiar with `Functor`s.) For `Maybe`:
```
_ <$> Nothing = Nothing
f <$> Just x = Just (f x)
```
`(<*>)` is a bit trickier: it applies a function inside `f` to a value inside `f`: `f (a -> b) -> f a -> f b`. For `Maybe`:
```
Just f <*> Just x = Just (f x)
_ <*> _ = Nothing
```
(In fact, `f <$> x` is the same thing as `pure f <*> x`, which is `Just f <*> x` for `Maybe`.)
So, we can expand the definition of `liftA2 (++)`:
```
liftA2 (++) a b = (++) <$> a <*> b
-- expand (<$>)
liftA2 (++) (Just xs) b = Just (xs ++) <*> b
liftA2 (++) _ _ = Nothing
-- expand (<*>)
liftA2 (++) (Just xs) (Just ys) = Just (xs ++ ys)
liftA2 (++) _ _ = Nothing
```
Indeed, we can use these operators to lift a function of *any* number of arguments into any `Applicative`, just by following the pattern of `liftA2`. This is called *applicative style*, and is very common in idiomatic Haskell code. In this case, it might even be more idiomatic to use it directly by writing `(++) <$> a <*> b`, if `a` and `b` are already variables. (On the other hand, if you're partially applying it — say, to pass it to a higher-order function — then `liftA2 (++)` is preferable.)
Every `Monad` is an `Applicative`, so if you ever find yourself trying to "lift" a function into a context, `Applicative` is probably what you're looking for.
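Concretely, for the values in the question (a GHCi session; `liftA2` comes from `Control.Applicative` if it is not already in scope):
```
ghci> import Control.Applicative (liftA2)
ghci> liftA2 (++) (Just [1]) (Just [2])
Just [1,2]
ghci> liftA2 (++) (Just [1]) Nothing
Nothing
```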
|
Two-sample test for multivariate normal distributions under the assumption that means are the same
Let $\{x\_i\}\_{i=1}^n$ be a sample from a multivariate Gaussian distribution ${\cal N}(0, \Sigma\_X)$ and $\{y\_i\}\_{i=1}^m$ be a sample from ${\cal N}(0, \Sigma\_Y)$.
Are there hypothesis tests for $\Sigma\_X = \Sigma\_Y$? Pointers to relevant literature would be very appreciated.
|
The [Mauchly's test](http://en.wikipedia.org/wiki/Mauchly%27s_sphericity_test) allows you to test whether a given covariance matrix is proportional to a reference (identity or other) and is available through `mauchly.test()` under R. It is mostly used in repeated-measures design (to test (1) if the dependent variable VC matrices are equal or homogeneous, and (2) whether the correlations between the levels of the within-subjects variable are comparable--altogether, this is known as the *sphericity assumption*).
Box’s M statistic is used (in MANOVA or LDA) to test for homogeneity of covariance matrices, but as it is very sensitive to normality it will often reject the null ([R code](http://finzi.psych.upenn.edu/R/Rhelp02a/archive/33330.html) not available in standard packages).
Covariance structure models as found in [Structural Equation Modeling](http://en.wikipedia.org/wiki/Structural_equation_modeling) are also an option for more complex stuff (although in multigroup analysis testing for the equality of covariances makes little sense if the variances are not equal), but I have no references to offer actually.
I guess any textbook on multivariate data analysis would have additional details on these procedures. I also found this article for the case where the normality assumption is not met:
>
> Aslam, S and Rocke, DM. [A robust
> testing procedure for the equality of
> covariance matrices](http://dmrocke.ucdavis.edu/papers/Aslam.Rocke.2005.pdf), Computational
> Statistics & Data Analysis 49 (2005)
> 863-874
>
>
>
|
So many dependencies for installing the package 'nmap'
I want to install the package `nmap` on Debian using `apt-get`, but it seems that `nmap` pulls in a surprising number of dependencies! For example, it depends on the packages `imagemagick` and `x11-common`. Does `nmap` really need the X Window System?! Is everything OK?
```
# apt-get install nmap
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
dbus fontconfig fonts-droid fonts-liberation ghostscript gnuplot gnuplot-nox groff gsfonts hicolor-icon-theme imagemagick imagemagick-common libavahi-client3
libavahi-common-data libavahi-common3 libblas3 libblas3gf libcairo2 libcroco3 libcups2 libcupsimage2 libdatrie1 libdbus-1-3 libdjvulibre-text libdjvulibre21 libexiv2-12
libffi5 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgfortran3 libglib2.0-0 libglib2.0-data libgs9 libgs9-common libice6 libijs-0.35 libilmbase6 libjasper1 libjbig0
libjbig2dec0 liblcms1 liblcms2-2 liblensfun-data liblensfun0 liblinear-tools liblinear1 liblqr-1-0 liblua5.1-0 libmagickcore5 libmagickcore5-extra libmagickwand5
libnetpbm10 libopenexr6 libpango1.0-0 libpaper-utils libpaper1 libpcap0.8 libpixman-1-0 librsvg2-2 librsvg2-common libsm6 libsvm-tools libsystemd-login0 libthai-data
libthai0 libtiff4 libwmf0.2-7 libxaw7 libxcb-render0 libxcb-shm0 libxft2 libxmu6 libxrender1 libxt6 netpbm poppler-data psutils shared-mime-info ufraw-batch x11-common
```
|
This is a problem on [Debian 7 wheezy (oldstable)](https://www.debian.org/releases/wheezy/) which was fixed in [Debian 8 jessie (stable)](https://www.debian.org/releases/jessie/). Nmap requires the [liblinear1 package](https://packages.debian.org/wheezy/liblinear1) in order to do IPv6 OS detection, and that package had a "recommends" dependency on [liblinear-tools](https://packages.debian.org/wheezy/liblinear-tools), which further recommended [libsvm-tools](https://packages.debian.org/wheezy/libsvm-tools), which requires [gnuplot](https://packages.debian.org/wheezy/gnuplot), and that is where the X11 dependencies are coming from. In Debian jessie and later releases, [liblinear1](https://packages.debian.org/jessie/liblinear1) downgraded the dependency on liblinear-tools to "suggests" so that this is not a problem.
To solve your particular issue, you can either upgrade to a more recent Debian release, or use `apt-get --no-install-recommends install nmap`. An even better solution, though, would be to use a recent version of Nmap built from source or installed from the official RPMs. You can use [alien](https://help.ubuntu.com/community/RPM/AlienHowto) to install RPMs on a Debian system. The version of Nmap that Debian wheezy installs (6.00) is over 3 years old and has several known bugs. Debian jessie is on the (current) latest stable version, 6.47, but that is also a year old and about to be eclipsed. The latest version can always be downloaded from <https://nmap.org/download.html>
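If you would rather not pass `--no-install-recommends` every time, one possible sketch is an apt configuration drop-in (the file name below is an arbitrary choice):

```
# one-off install without recommended packages
apt-get --no-install-recommends install nmap

# or make skipping "Recommends" the default policy
echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/99no-recommends
```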
|
Cakephp 2.0: How to provide ORDER BY instructions for data retrieved recursively through find()?
I am trying to set up a model with a lot of hasMany/hasOne relationships, and I'd like for the owned models to be presented, *alphabetically*, in a drop-down menu within the "Add" view. I am not sure how to provide ORDER BY instructions to models that are retrieved recursively. This is what I've tried:
```
$services = $this->service->find('all', array(
'order'=>array(
'Service.start_year DESC',
'Servicecat.title DESC'
),
'recursive'=>0
) );
```
But of course this yields no change. I have a suspicion it's not very complicated, I just can't find a solution in the cookbook/api. Ideas?
|
You could do this a few ways. I prefer to use the containable behavior for models and then it allows you to finely control how the models are retrieved. You could also setup order in your model's relationship definition.
**Model Way**
```
public $hasMany = array(
    'Servicecat' => array(
        'order' => array(
            'Servicecat.title DESC'
        )
    )
);
```
**Containable Way**
In your models, set them to use the containable behavior:
```
public $actsAs = array('Containable');
```
Then, when you find from your controller, you can explicitly state which models are linked. You can use multidimensional arrays to determine the depth of recursion.
```
$services = $this->Service->find(
    'all',
    array(
        'contain' => array(
            'Servicecat' => array(
                'order' => array('Servicecat.title' => 'DESC')
            )
        ),
        'order' => array('Service.start_year' => 'DESC')
    )
);
```
|
How do I know if I'm seeing Simpson's paradox?
I need some guidance on how to analyse the results of a subgroup breakdown in an A/B test.
I have the results of an (ongoing) A/B test and need to do an interim analysis on
1. The overall headline results
2. The breakdowns by relevant dimensions.
The result is complicated by the fact that the results for the effect-size on some of the broken-down groups are wildly different to the headline (overall) results.
I'm 90% certain that what we are seeing is an artifact of us breaking down the overall allocations into these different groups, i.e. it's Simpson's paradox.
For example we have numbers which look like this:
We have two groups, A and B, with the following allocations and conversions (these are not the real numbers, but illustrative):
| | | A | B |
| --- | --- | --- | --- |
| Mobile | allocated | 200,000 | 200,000 |
| Mobile | converted | 10,000 | 10,775 |
| Desktop | allocated | 50,000 | 50,000 |
| Desktop | converted | 2500 | 2350 |
I.e. we're seeing an overall lift of 7.75% but a decrease of ~6% on Desktop.
Are there any results on confidence intervals or ways of relating the empirical means and standard deviations of the subgroups to the overall mean, so I could rule out some kind of further analysis?
|
This is an interesting sub-population effect, but it's not Simpson's Paradox. Simpson's Paradox occurs when there is a trend in *each* group of data individually, but the trend *reverses* in the full dataset. Here, we don't have Simpson's Paradox: the two subgroups trend in opposite directions, and the overall trend is simply the trend of the numerically larger group. There is no paradox here; we simply observe that the overall trend is the same trend we observed in the majority of the data, which is rather intuitive.
For it to be Simpson's paradox, you'd have to see an increase in each of Mobile and Desktop individually, but see a decrease overall when combining them (or the reverse).
I'll also add that Simpson's Paradox, in cases where it exists, is not typically an "artifact" in the sense of some kind of spurious signal. Rather, it represents that there *really is* some confounding variable that you should take into account.
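As a quick arithmetic check with the illustrative numbers from the question (a sketch, not a statistical test):

```
# conversion rates from the question's table: (converted, allocated)
a = {"mobile": (10000, 200000), "desktop": (2500, 50000)}
b = {"mobile": (10775, 200000), "desktop": (2350, 50000)}

rate = lambda conv, alloc: conv / alloc

for seg in ("mobile", "desktop"):
    print(seg, rate(*a[seg]), rate(*b[seg]))
# mobile : A 5.0%   B ~5.39%  (B higher)
# desktop: A 5.0%   B 4.7%    (B lower)

overall_a = rate(sum(c for c, _ in a.values()), sum(n for _, n in a.values()))
overall_b = rate(sum(c for c, _ in b.values()), sum(n for _, n in b.values()))
print("overall", overall_a, overall_b)
# overall: A 5.0%   B 5.25% -- tracks the much larger Mobile segment, so the
# subgroup directions differ but nothing reverses on aggregation (no paradox).
```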
|
Single prediction with linear regression
Implementing linear regression as below:
```
from sklearn.linear_model import LinearRegression
x = [1,2,3,4,5,6,7]
y = [1,2,1,3,2.5,2,5]
# Create linear regression object
regr = LinearRegression()
# Train the model using the training sets
regr.fit([x], [y])
# print(x)
regr.predict([[1, 2000, 3, 4, 5, 26, 7]])
```
produces :
```
array([[1. , 2. , 1. , 3. , 2.5, 2. , 5. ]])
```
When using the predict function, why can't I use a single x value to make a prediction?
Trying `regr.predict([[2000]])`
returns:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-3a8b477f5103> in <module>()
11
12 # print(x)
---> 13 regr.predict([[2000]])
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/base.py in predict(self, X)
254 Returns predicted values.
255 """
--> 256 return self._decision_function(X)
257
258 _preprocess_data = staticmethod(_preprocess_data)
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/base.py in _decision_function(self, X)
239 X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
240 return safe_sparse_dot(X, self.coef_.T,
--> 241 dense_output=True) + self.intercept_
242
243 def predict(self, X):
/usr/local/lib/python3.6/dist-packages/sklearn/utils/extmath.py in safe_sparse_dot(a, b, dense_output)
138 return ret
139 else:
--> 140 return np.dot(a, b)
141
142
ValueError: shapes (1,1) and (7,7) not aligned: 1 (dim 1) != 7 (dim 0)
```
|
When you do this:
```
regr.fit([x], [y])
```
You are essentially inputing this:
```
regr.fit([[1,2,3,4,5,6,7]], [[1,2,1,3,2.5,2,5]])
```
that has a shape of `(1,7)` for `X` and `(1,7)` for `y`.
Now looking at the [documentation of `fit()`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit):
Parameters:
>
>
> ```
> X : numpy array or sparse matrix of shape [n_samples,n_features]
> Training data
>
> y : numpy array of shape [n_samples, n_targets]
> Target values. Will be cast to X’s dtype if necessary
>
> ```
>
>
So here, the model assumes that your data has 7 features and that you have 7 targets. Please see [this for more information on multi-output regression](http://scikit-learn.org/stable/modules/multiclass.html).
So at prediction time, the model will require data with 7 features, something of shape `(n_samples_to_predict, 7)`, and will output data with shape `(n_samples_to_predict, 7)`.
If instead, you wanted to have something like this:
```
x y
1 1.0
2 2.0
3 1.0
4 3.0
5 2.5
6 2.0
7 5.0
```
then you need to have a shape of `(7,1)` for input `x` and `(7,)` or `(7,1)` for target `y`.
So as @WStokvis said in comments, you need to do this:
```
import numpy as np
X = np.array(x).reshape(-1, 1)
y = np.array(y) # You may omit this step if you want
regr.fit(X, y) # Dont wrap it in []
```
And then again at prediction time:
```
X_new = np.array([1, 2000, 3, 4, 5, 26, 7]).reshape(-1, 1)
regr.predict(X_new)
```
And then doing the following will not raise error:
```
regr.predict([[2000]])
```
because the required shape is present.
**Update for the comment:-**
When you do `[[2000]]`, it will be internally converted to `np.array([[2000]])`, so it has the shape `(1,1)`. This is similar to `(n_samples, n_features)`, where `n_features = 1`. This is correct for the model because at training time the data has shape `(n_samples, 1)`. So this works.
Now let's say you have:
```
X_new = [1, 2000, 3, 4, 5, 26, 7]  # (you haven't wrapped it in a numpy array and reshaped it with reshape(-1, 1) yet)
```
Again, it will be internally transformed as this:
```
X_new = np.array([1, 2000, 3, 4, 5, 26, 7])
```
So now X\_new has a shape of `(7,)`. See, it's only a one-dimensional array. It doesn't matter whether it's a row vector or a column vector; it's just a one-dimensional array of shape `(n,)`.
So scikit cannot infer whether it's `n_samples=n` and `n_features=1` or the other way around (`n_samples=1` and `n_features=n`). Please see [my other answer which explains about this](https://stackoverflow.com/a/42063867/3374996).
So we need to explicitly convert the one-dimensional array to 2-d by `reshape(-1,1)`. Hope it's clear now.
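Putting the pieces together, a minimal runnable version of the original snippet with the reshaped input:

```
import numpy as np
from sklearn.linear_model import LinearRegression

x = [1, 2, 3, 4, 5, 6, 7]
y = [1, 2, 1, 3, 2.5, 2, 5]

X = np.array(x).reshape(-1, 1)   # shape (7, 1): 7 samples, 1 feature each
regr = LinearRegression()
regr.fit(X, y)

print(regr.predict([[2000]]))    # one new sample with 1 feature -> one prediction
```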
|
Preventing misuse of libspotify key
The terms of use for libspotify state that the key should be stored in a secure manner. The only recommendation for storing the key that I've found is compiling your application and distributing the binary. I have a hard time seeing this as anything else than security by obscurity since the key is easily retrievable using a debugger.
Is this really the approach Spotify suggests? What about if I only compile the file containing the key and distribute the rest of my application as open source?
I guess the essence of my question is this: how do I avoid breaching the ToS without requiring every user to obtain their own key?
|
The logic is this (I work for Spotify): requiring our developers to jump through a bunch of hoops just to get their API key into their binary isn't going to be worth it - developers will be turned off by it and everyone will be unhappy.
However, we don't want keys to be spread around, simply because if everyone is using one key, we can't track it reliably and if that key ends up being used for something malicious and we kill it, lots of applications will suddenly be broken.
To force in a terrible car analogy, imagine the API key is some valuable item and your application is a car. If you leave the item on the car's seat (i.e., having your API key in plain text), you're practically inviting someone to break in and steal it (i.e., use your key in their own app). If you put it in the glove box (compile it into your binary), if someone breaks in to your car (disassembles your app) because they know the item is in the glovebox, it's pretty much game over anyway.
In short: Compiling in the key is absolutely security through obscurity, but we feel it's enough to dissuade people from casually reusing other applications' API keys when it's fairly trivial to get one from us directly.
>
> I guess the essence of my question is this: how do I avoid breaching the ToS without requiring every user to obtain their own key?
>
>
>
If you're distributing your application in binary form, compiling it in is just fine. If you're distributing it in source form, you can't really include the key.
|
Does the -depth option of find imply depth-first tree traversal?
As I understand it the `-depth` option of the `find` command causes the specified actions to take place on the way out of a directory (and maybe I understand it wrong) during a depth-first traversal of a tree structure.
Without the `-depth` option specified, does it normally make an action occur before the depth-first traversal is complete, or does it do a breadth-first traversal of the directories and run the action first normally?
|
`find` uses a depth-first strategy (as opposed to breadth-first), whether `-depth` is specified or not. `-depth` only guarantees that sub-directories are processed before their parents.
A quick example:
```
mkdir -p a/{1,2,3} b c
find .
```
produces
```
.
./a
./a/2
./a/1
./a/3
./b
./c
```
whereas
```
find . -depth
```
produces
```
./a/2
./a/1
./a/3
./a
./b
./c
.
```
If you want breadth-first search, you can use [`bfs`](https://github.com/tavianator/bfs) which is a breadth-first implementation of `find`.
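A practical consequence of this ordering: actions that must see a directory's contents before the directory itself rely on `-depth`. For instance, GNU `find`'s `-delete` turns on `-depth` automatically, and you can see why with the toy tree from above:

```
# delete the tree created above: children are removed before their parents,
# so each directory is already empty by the time find reaches it
find a -depth -delete
```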
|
How can I deploy a discord.js bot to Cloud Functions?
I want to deploy a Discord bot running on discord.js to Firebase Cloud Functions but I can't get the bot to run in Firebase. If I use nodemon it runs perfectly but if I use firebase deploy it will not start the bot.
Here is my current code:
```
const functions = require('firebase-functions');
require('dotenv').config();
const token = process.env.TOKEN
const Discord = require('discord.js')
const Client = new Discord.Client();
Client.on('ready', () => {
Client.channels.find(x => x.name === 'main-cannel').send('bot is deployed')
Client.user.setGame(`The Cult | ${Client.guilds.size} servers`)
console.log('test')
});
Client.login(token);
//this is not working, but the basic idea:
//export.App = functions.... {Client}
exports.app = functions.https.onRequest((request, response) => {
response.send("Test");
});
```
|
This may not be the best combination of Google Cloud Platform services, since Cloud Functions were not designed with this in mind. You can just host your Discord bot on a Compute Engine machine.
If you want to use the dynamic scaling, have a look at [Discord Microservice Bots](https://gist.github.com/DasWolke/c9d7dfe6a78445011162a12abd32091d), where DasWolke describes what microservices are. He also included his JavaScript code to split up the different services for Discord.
What you can do on Google Cloud Platform specifically is create a VM with the [Gateway](https://github.com/DasWolke/CloudStorm) running. This needs to run 24/7 and should be lightweight. You can use an f1-micro (which is free) for this, though Google recommends a g1-small for the task.
The gateway should filter the events you are looking for (because Discord sends a lot of events and you don't need most of them) and send the data to a [cloud function](https://cloud.google.com/functions) or [cloud run](https://cloud.google.com/run) instance (you can send the data via [pub/sub](https://cloud.google.com/pubsub)). In my experience, Cloud Run has much shorter startup times, so I went with that.
Inside your function, you receive the data and do with it what you want. If you want something done in Discord (send a message, manage channels, ...), you can use [SnowTransfer](https://github.com/DasWolke/SnowTransfer) for that. SnowTransfer just calls the REST API on Discord.
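As a rough sketch of the receiving end (the topic name, payload shape, and event handling here are assumptions, not something prescribed by the linked projects), a Pub/Sub-triggered Cloud Function could look like this:

```
const functions = require('firebase-functions');

// Fires whenever the gateway VM publishes a filtered Discord event
// to the (hypothetical) "discord-events" topic.
exports.onDiscordEvent = functions.pubsub
  .topic('discord-events')
  .onPublish((message) => {
    const event = message.json;            // decoded Pub/Sub payload
    console.log('Received gateway event:', event.t);
    // ...handle the event, e.g. call Discord's REST API from here...
    return null;
  });
```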
|
How to share WiFi/hotspot over SSH tunnel
I have created an SSH tunnel on my laptop, which is connected to a LAN. I want to share the internet connection via WiFi/hotspot, but over the SSH tunnel. I mean that any computer using this wireless network should connect to the internet via the SSH tunnel (automatically, without setting a proxy).
In other words, I want to set the proxy setting only on my laptop (e.g., 127.0.0.1:1028), and any computer using the WiFi/hotspot from my laptop should be able to use the internet without any proxy setting.
|
What you want is not possible with pure SSH (i.e. the -D proxy option to create the *poor man's VPN*).
Here's two options that do work, though:
1. use **sshuttle** (available in the repositories) and tell it to forward all traffic from the subnet of your hotspot through the *"VPN"*. See the [manpage](http://manpages.ubuntu.com/manpages/trusty/man8/sshuttle.8.html) for more info; a minimal command sketch follows this list.
2. set up **OpenVPN** on the remote system and your local system. The traffic of the connected hotspot users should go through the VPN by default. You might also want to look at [this serverfault question](https://serverfault.com/questions/512102/routing-through-openvpn-gateway).
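A minimal sketch of the sshuttle side (option 1); `user@server` is a placeholder for your SSH endpoint, and making the hotspot clients actually route through the laptop (gateway/DHCP settings) is a separate step:

```
# capture all IPv4 destinations and tunnel them over the SSH session
sudo sshuttle -r user@server 0/0

# the laptop must also forward packets coming from the hotspot clients
sudo sysctl -w net.ipv4.ip_forward=1
```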
|