Remove all the records from ets table having datestamp older than 10 seconds
I have an `ets` set table in an Elixir app. I need to clean up records whose `updated_at` field is older than 10 seconds. Is there a way I can set an expiry, or do it manually without iterating over all the records? I match records based on timestamps greater than a given time.
example record:
```
key: key_1
record: %{id: key_1, updated_at: ~N[2018-12-19 10:08:47.803075]}
```
so far I have this code
```
def clean_stale(previous_key) do
  if previous_key == :"$end_of_table" do
    :ok
  else
    device = get(previous_key)
    next_key = :ets.next(__MODULE__, previous_key)

    if NaiveDateTime.diff(NaiveDateTime.utc_now, device.last_recorded_at) > 10 do
      remove(device.id)
    end

    clean_stale(next_key)
  end
end
end
```
|
If you store the "updated at" time as an integer instead of as a `NaiveDateTime` struct, you can use a match spec.
For example, to get the current time as the number of seconds since the Unix epoch:
```
> DateTime.to_unix(DateTime.utc_now())
1545215338
```
You can do something like this:
```
iex(3)> :ets.new(:foo, [:public, :named_table])
:foo
iex(4)> :ets.insert(:foo, {:key1, DateTime.to_unix(DateTime.utc_now())})
true
iex(5)> :ets.insert(:foo, {:key2, DateTime.to_unix(DateTime.utc_now())})
true
iex(6)> :ets.tab2list(:foo)
[key2: 1545215144, key1: 1545215140]
iex(7)> :ets.select_delete(:foo, [{{:_, :"$1"}, [{:<, :"$1", 1545215144}], [true]}])
1
iex(8)> :ets.tab2list(:foo)
[key2: 1545215144]
```
In the call to [`ets:select_delete/2`](http://erlang.org/doc/man/ets.html#select_delete-2), I pass a [match specification](http://erlang.org/doc/apps/erts/match_spec.html). It consists of three parts:
- With `{:_, :"$1"}`, I perform a match on the records in the table. In this example, I have a tuple with two elements. I ignore the key with `:_`, and assign the timestamp to a match variable with `:"$1"`.
- With `[{:<, :"$1", 1545215144}]`, I specify that I only want to match records with a timestamp before this time. In your case, you would calculate the time ten seconds in the past and put that value here.
- With `[true]`, I specify that I want to return `true` for matching records, which in the case of `select_delete` means "delete this record".
So after calling `select_delete`, only the second record remains in the table.
---
If the timestamp is inside a map, you can use `map_get` to access it and compare it:
```
:ets.select_delete(:foo, [{{:_, :"$1"},
                           [{:<, {:map_get, :updated_at, :"$1"}, 1545215339}],
                           [true]}])
```
Or (in Erlang/OTP 18.0 and later) match out the map value:
```
:ets.select_delete(:foo, [{{:_, %{updated_at: :"$1"}},
                           [{:<, :"$1", 1545215339}],
                           [true]}])
```
|
Await inside for loop is admitted in Dart?
I have a program like the following:
```
main() async {
  ooClass = new OoClass(1);
  int val = await function1();
  print(val);
  ooClass = new OoClass(2);
  val = await function1();
  print(val);
  ooClass = new OoClass(3);
  val = await function1();
  print(val);
}

OoClass ooClass;

Future<int> function1() async {
  List list3 = await function2();
  return list3.indexOf('Ok');
}

Future<List<String>> function2() async {
  List<String> list1 = new List<String>();

  function3(Map<String, int> map1) async {
    String string1 = '';
    bool bool1 = false;
    List<String> list2 = [];
    String string2;

    function4(String string3) async {
      if (ooClass.function7(string3)) return;
      if (ooClass.function8() && !bool1) {
        bool1 = true;
        return;
      }
      string2 = await function5(string3);
      list2.add(string2);
    }

    for (String key in map1.keys) {
      await function4(key);
    }
    string1 = list2.join(', ');
    list1.add(string1);
  }

  for (Map<String, int> idxList in ooClass.function6()) {
    await function3(idxList);
  }
  return list1;
}

function5(String s1) {
  return new Future.value('Ok');
}

class OoClass {
  List<Map<String, int>> map2;
  bool bool3 = false;

  OoClass(int type) {
    switch (type) {
      case 1:
        map2 = [{'Ok': 1}];
        break;
      case 2:
        map2 = [{'id': 1, 'Ok': 1}];
        break;
      case 3:
        map2 = [{'foo': 1, 'Ok': 1}];
        bool3 = true;
        break;
    }
  }

  List<Map<String, int>> function6() {
    return map2;
  }

  bool function7(String string9) {
    if (string9 == 'id') return true;
    return false;
  }

  bool function8() {
    return bool3;
  }
}
```
This snippet works perfectly.
In my real environment, instead, when `await function4(key);` is called, `function2` returns the `list1` List (empty). The `function4` call is executed later, but the result of `function2` is lost.
I don't really understand this behavior. Could it be a bug, or is `await` inside a `for` loop not meant to be used?
If `await` should not be used inside a `for` loop, how could I implement this in another way?
I'm using dart 1.22.0-dev.4 but I've tried also with older (and stable) versions and I had the same result.
---
I finally got the problem and it did not depend on `await` in a `for` loop. It was instead an error in my code.
|
Yes, `await` is permitted inside a `for` loop in Dart, and it will work as expected.
```
for (var o in objects) {
await doSomething(o);
}
```
And there is even `await for` for Streams, if that's what you're looking for:
```
await for (var event in eventStream) {
print("Event received: $event");
}
```
Your example [works correctly](https://dartpad.dartlang.org/0ec1502d63ff4489fbf0245297159cc2) in DartPad. It's too complex & abstract to debug but, at least superficially, it should work. You say that the snippet doesn't work in your "real environment", though. Maybe we could help if you explained what you mean by that?
Additional tip: [take full advantage of static analysis](https://www.dartlang.org/guides/language/analysis-options), especially the [await\_only\_futures](http://dart-lang.github.io/linter/lints/await_only_futures.html) and [unawaited\_futures](http://dart-lang.github.io/linter/lints/unawaited_futures.html) linter rules. This can help you catch many bugs.
|
Ray-triangle intersection
I saw that [Fast Minimum Storage Ray/Triangle Intersection](http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Acceleration/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf) by Moller and Trumbore is frequently recommended.
The thing is, I don't mind pre-computing and storing any amount of data, as long as it speeds up the intersection.
So my question is, not caring about memory, what are the fastest methods of doing ray-triangle intersection?
Edit: I won't move the triangles, i.e. it is a static scene.
|
As others have mentioned, the most effective way to speed things up is to use an acceleration structure to reduce the number of ray-triangle intersections needed. That said, you still want your ray-triangle intersections to be fast. If you're happy to precompute stuff, you can try the following:
Convert your ray lines and your triangle edges to [Plücker coordinates](http://en.wikipedia.org/wiki/Pl%C3%BCcker_coordinates). This lets you determine whether your ray line passes through a triangle at 6 multiply-adds per edge. You will still need to compare your ray start and end points against the triangle plane (at 4 multiply-adds per point) to make sure it actually hits the triangle.
The worst-case runtime expense is 26 multiply-adds total. Also, note that you only need to compute the ray/edge sign once per ray/edge combination, so if you're evaluating a mesh, you may be able to use each edge evaluation twice.
Also, these numbers assume everything is being done in homogeneous coordinates. You may be able to reduce the number of multiplications some by normalizing things ahead of time.
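To make the arithmetic concrete, here is a minimal sketch of the Plücker side test in Python (the point values, function names, and counter-clockwise winding are illustrative assumptions, not taken from the paper):

```python
def plucker(p, q):
    # Plücker coordinates of the oriented line through p and q:
    # direction d = q - p, moment m = p x q
    d = [q[i] - p[i] for i in range(3)]
    m = [p[1] * q[2] - p[2] * q[1],
         p[2] * q[0] - p[0] * q[2],
         p[0] * q[1] - p[1] * q[0]]
    return d, m

def side(l1, l2):
    # Permuted inner product: the 6 multiplies (plus 5 adds) per edge
    # mentioned above; its sign tells which way l1 passes around l2.
    (d1, m1), (d2, m2) = l1, l2
    return sum(d1[i] * m2[i] + d2[i] * m1[i] for i in range(3))

def ray_line_crosses_triangle(ray, a, b, c):
    # The ray's line crosses the triangle iff it passes all three
    # consistently wound edges with the same sign. A separate
    # ray-vs-plane test is still needed to clip to the ray segment.
    s = [side(ray, plucker(a, b)),
         side(ray, plucker(b, c)),
         side(ray, plucker(c, a))]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
ray = plucker((0.2, 0.2, -1.0), (0.2, 0.2, 1.0))
print(ray_line_crosses_triangle(ray, *tri))  # True
```

The triangle-edge lines can be precomputed once per mesh, which is exactly where the "don't mind storing data" trade-off pays off.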
|
Regular expressions. Match specific word between two words
I use C#. I have a string:
```
wordA wordB wordC wordB wordD
```
I need to match all occurrences of wordB between wordA and wordD.
I use lookahead and lookbehind to match everything between wordA and wordD like this:
```
(?<=wordA)(.*?)(?=wordD)
```
But something like
```
(?<=wordA)(wordB)(?=wordD)
```
matches nothing.
What would be the best way to match all occurrences of wordB between wordA and wordD?
|
Put the `.*?` into the lookarounds:
```
(?<=wordA.*?)wordB(?=.*?wordD)
```
See the [regex demo](http://regexstorm.net/tester?p=%28%3F%3C%3DwordA.*%3F%29wordB%28%3F%3D.*%3FwordD%29&i=wordA%20wordB%20wordC%20wordB%20wordD)
Now, the pattern means:
- `(?<=wordA.*?)` - (a positive lookbehind) requires the presence of `wordA` followed with any 0+ chars (as few as possible) immediately before...
- `wordB` - word B
- `(?=.*?wordD)` - (a positive lookahead) requires the presence of any 0+ chars (as few as possible) followed with a `wordD` after them (so, it can be right after `wordB` or after some chars).
If you need to account for multiline input, compile the regex with `RegexOptions.Singleline` flag so that `.` could match a newline symbol (or prepend the pattern with `(?s)` inline modifier option - `(?s)(?<=wordA.*?)wordB(?=.*?wordD)`).
If the "words" consist of letters/digits/underscores, and you need to match them as whole words, do not forget to wrap the `wordA`, `wordB` and `wordD` with `\b`s (word boundaries).
Always test your regexes in the target environment:
```
var s = "wordA wordB wordC wordB \n wordD";
var pattern = @"(?<=wordA.*?)wordB(?=.*?wordD)";
var result = Regex.Replace(s, pattern, "<<<$&>>>", RegexOptions.Singleline);
Console.WriteLine(result);
// => wordA <<<wordB>>> wordC <<<wordB>>>
// wordD
```
See [C# demo](http://ideone.com/iqL85L).
|
How can I replicate `algorithm` header type functions?
Just like algorithms in algo header like
`std::sort(std::begin(), std::end(), greater)`
where `greater` is a `bool func`
I am trying to do the same for understanding as how it works actually.
So I am trying to do the same by
1 -> making a lambda func in a separate namespace
2 -> making a separate function like `std::sort` in other namespace
3 -> passing a vector and it supposed to return a pointer to the element to find
And i am getting error:
```
<source>: In function 'int main()':
<source>:41:22: error: no matching function for call to 'chk(std::vector<int>::iterator, std::vector<int>::iterator, lambda::<lambda(auto:16, int)>&, int)'
41 | auto l {test::chk(series.begin(), series.end(), lambda::find, 50)};
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<source>:22:10: note: candidate: 'template<class auto:17, class auto:18, class auto:19> auto test::chk(auto:17, auto:18, std::function<bool(auto:19, int)>&, int)'
22 | auto chk(auto start, auto end, std::function<bool (auto, int)>& find, int what)
| ^~~
<source>:22:10: note: template argument deduction/substitution failed:
<source>:41:22: note: 'lambda::<lambda(auto:16, int)>' is not derived from 'std::function<bool(auto:19, int)>'
41 | auto l {test::chk(series.begin(), series.end(), lambda::find, 50)};
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
As per the error I tried putting `std::iterator` at `chk func` call at line no `41` but it gives even more error
help me with this
code:
```
#include <iostream>
#include <vector>
#include <algorithm> //
#include <functional> //std::function
#include <iterator> //std::iterator
using vec = std::vector<int>;
namespace lambda
{
    auto find{
        [](auto start, int what) -> bool
        {
            if(*start == what)
                return true;
        }
    };
};

namespace test
{
    auto chk(auto start, auto end, std::function<bool (auto, int)>& find, int what)
    {
        if(start == end)
            return end;
        for(;start != end; ++start)
        {
            if(find(start, what))
                return start;
        }
        return end; //is this really necessary?
    }
};

int main()
{
    vec series{1, 2, 3, 4, 5, 6, 7, 50, 8, 9, 10, 11, 12};
    auto l {test::chk(series.begin(), series.end(), lambda::find, 50)};
    //test
    std::cout << "Old array\n";
    for(int x: series)
        std::cout << x << ' ';
    std::cout << "\nNew array\n";
    *l = 666;
    for(int x: series)
        std::cout << x << ' ';
}
```
|
By using `auto` you end up with each parameter having a different type and the compiler has no way to deduce what all of the parameters should be (see the `template argument deduction/substitution failed` part of the error message).
Replacing the `auto` with normal templates gives:
```
template <typename A>
auto find =
    [](A start, int what) ...

template <typename B, typename C, typename D>
auto chk(B start, C end, std::function<bool (D, int)>& find, int what);
```
The compiler has no way to know that `A`, `B`, `C` and `D` should all be the same type.
If you change `chk` to:
```
template< typename Iterator >
Iterator chk(Iterator start, Iterator end, std::function<bool (Iterator, int)>& find, int what)
```
We're closer but still have two issues:
1. You're taking `find` by a non-const reference so the types have to match exactly
2. The compiler still can't deduce the type required for the `find` lambda as `std::function` prevents this kind of type deduction.
If we change to the more conventional form of the std algorithms of just taking the predicate as a template:
```
template< typename Iterator, typename Predicate >
Iterator chk(Iterator start, Iterator end, Predicate find, int what)
```
Then your code compiles. This has the added benefit of not involving the overhead of `std::function` and will use (and probably inline) the lambda directly.
You can use a [concept](https://en.cppreference.com/w/cpp/language/constraints) to restrict the type of `Predicate` to match your requirements to improve compiler errors if used with an incorrect predicate.
`find` only ever returns `true`; you need to return a value in all cases, otherwise your code has undefined behaviour:
```
auto find{
    [](auto start, int what) -> bool
    {
        if(*start == what)
            return true;
        else
            return false;
    }
};
```
Or more simply:
```
auto find{
    [](auto start, int what) -> bool
    {
        return *start == what;
    }
};
```
Finally your predicate is unusual for a std algorithm in that it takes an iterator rather than a value, changing it to a value results in:
```
namespace lambda
{
    auto find{
        [](int value, int what)
        {
            return value == what;
        }
    };
};

namespace test
{
    template< typename Iterator, typename Predicate >
    Iterator chk(Iterator start, Iterator end, Predicate find, int what)
    {
        for(;start != end; ++start)
        {
            if(find(*start, what))
                return start;
        }
        return end;
    }
};
```
Note I've also removed the `start == end` check as it's unnecessary.
|
TensorFlow: Compute Hessian matrix (and higher order derivatives)
I would like to be able to compute higher order derivatives for my loss function. At the very least I would like to be able to compute the Hessian matrix. At the moment I am computing a numerical approximation to the Hessian but this is more expensive, and more importantly, as far as I understand, inaccurate if the matrix is ill-conditioned (with very large condition number).
Theano implements this through symbolic looping, see [here](http://deeplearning.net/software/theano/library/scan.html), but Tensorflow does not seem to support symbolic control flow yet, see [here](https://github.com/samjabrahams/tensorflow-white-paper-notes/commit/156736904eb48a0a44de4c23b227ae938e37e654). A similar issue has been raised on TF github page, see [here](https://github.com/tensorflow/tensorflow/issues/675), but it looks like nobody has followed up on the issue for a while.
Is anyone aware of more recent developments or ways to compute higher order derivatives (symbolically) in TensorFlow?
|
Well, you can, with a little effort, compute the Hessian matrix!
Suppose you have two variables :
```
x = tf.Variable(np.random.random_sample(), dtype=tf.float32)
y = tf.Variable(np.random.random_sample(), dtype=tf.float32)
```
and a function defined using these 2 variables:
```
f = tf.pow(x, cons(2)) + cons(2) * x * y + cons(3) * tf.pow(y, cons(2)) + cons(4) * x + cons(5) * y + cons(6)
```
where:
```
def cons(x):
    return tf.constant(x, dtype=tf.float32)
```
So in algebraic terms, this function is f(x, y) = x² + 2xy + 3y² + 4x + 5y + 6.
Now we define a method that compute the hessian:
```
def compute_hessian(fn, vars):
    mat = []
    for v1 in vars:
        temp = []
        for v2 in vars:
            # computing derivative twice, first w.r.t v2 and then w.r.t v1
            temp.append(tf.gradients(tf.gradients(fn, v2)[0], v1)[0])
        # tensorflow returns None when there is no gradient, so we replace None with 0
        temp = [cons(0) if t is None else t for t in temp]
        temp = tf.pack(temp)
        mat.append(temp)
    mat = tf.pack(mat)
    return mat
```
and call it with:
```
# arg1: our defined function, arg2: list of tf variables associated with the function
hessian = compute_hessian(f, [x, y])
```
Now we grab a tensorflow session, initialize the variables, and run `hessian` :
```
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print sess.run(hessian)
```
Note: Since the function we used is quadratic in nature (and we are differentiating twice), the hessian returned will have constant values irrespective of the variables.
The output is :
```
[[ 2. 2.]
[ 2. 6.]]
```
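As a side check (not part of the original answer), the same Hessian can be verified with central finite differences in plain Python; for a quadratic function these are exact up to floating-point rounding:

```python
def f(x, y):
    # the same quadratic as above: x^2 + 2xy + 3y^2 + 4x + 5y + 6
    return x**2 + 2*x*y + 3*y**2 + 4*x + 5*y + 6

def hessian_fd(f, x, y, h=1e-3):
    # central second differences for f_xx, f_yy and the mixed f_xy
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[fxx, fxy], [fxy, fyy]]

print(hessian_fd(f, 0.5, -0.3))  # approximately [[2, 2], [2, 6]]
```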
|
Code only works if I NSLog some values - but why?
I'm writing a simple [QR code](http://en.wikipedia.org/wiki/QR_Code) generator (just for fun and to learn some Obj-C), and I'm working on tracing the outline of connected "modules" (i.e. the black squares that make up a QR code). This is in order to have nicer vector output than simply making a bunch of rects for each module.
Long story short, my outline-tracing code works - BUT ONLY if I make sure to call NSLog in a specific place! If I remove the `NSLog`-call, the code loops! I'm literally doing nothing but logging. And it doesn't matter what I log; I just have to call NSLog or things break.
The tracing algorithm is simple enough: Go clockwise around the shape of connected modules. When you hit a corner, turn right until you're back to following the outline of the shape. Stop when you reach the starting point again. The shape can have two modules that share a corner-point. The tracing-loop will thus hit that point twice. This is expected, and the code handles it correctly - *if* I call `NSLog`.
Otherwise, the code will say that a certain point is a corner the first time it sees it, and *not* a corner the second time, which causes the tracing to loop around. Detecting if something's a corner-point is not dependent on anything except the x and the y coordinates of the point and an array of module objects - but neither the modules nor the array changes while the tracing is going on, so given the same x,y you should *always* get the same result. And it does – *if* I call `NSLog`.
Without `NSLog`, the coordinates – e.g. (10,9) – are a corner one moment, and a moment later (10,9) is suddenly *not* identified as a corner. But *with* an `NSLog` call, (10,9) is correctly seen as a corner-point every time.
Again: I change absolutely nothing; I just log something - anything! And suddenly it works. It's like it's saying that 2 == 2 is true *or* false, unless I tell it to log 2 and 2, in which case 2 == 2 is always true, as it should be.
Here's the flaky code. It's hard to understand out of context, but there's *a lot* of context, so I hope this is enough. Everything is integers (no fuzzy floating point values).
```
do { // start going around the shape

    // If this isn't here or simply commented out, the code loops.
    NSLog(@"foobar"); // doesn't matter what I log - I just need to log something

    // Branch: Is current x,y a corner-point? This should
    // always return the same result given the same X and Y
    // values, but it only does if NSLog is there!
    if( [self cornerAtX:x Y:y] ) {
        // add the point to the path
        [path addPoint:NSMakePoint(x, y)];

        // rotate the direction clockwise, until the direction
        // is following the edge of the shape again.
        do {
            dt = dx;
            dx = -dy;
            dy = dt;
        } while( ![self boundaryFromX:x Y:y inDirectionX:dx Y:dy] );
    }

    // continue along direction
    x += dx;
    y += dy;
} while( !(sx == x && sy == y) ); // we're back to the start of the shape, so stop
```
If anyone can tell me why NSLog can make code work (or rather: Why *not* using NSLog makes working code break), I'd be happy to hear it! I hope someone can make sense of it.
|
Make sure `cornerAtX:Y:` always returns something—i.e., that there's no code path that fails to return a value.
Otherwise, it may very well “return” whatever the last function you called returns, in which case calling `NSLog` (which doesn't return a value, but may ultimately call a function that does) causes it to “return” something different, which may always be something that's considered true.
The compiler should warn you if you fail to return a value from a function or method that you declared as doing so. You should listen to it. [Turn on all the warnings you can get away with and fix all of them.](http://boredzo.org/blog/archives/2009-11-07/warnings)
You should also turn on the static analyzer (also included in that post), as it, too, may tell you about this bug, and if it does, it will tell you step-by-step how it's happening.
|
How to convert a CVS repo to GIT using cvs-fast-export?
As another attempt, I'm trying to convert a CVS repository to GIT. [**cvs-fast-export**](https://gitlab.com/esr/cvs-fast-export) was recommended. The only problem is, I have no idea how to actually do so. I have it built, but how do I call it? There's tons of flags explained [here](http://www.catb.org/%7Eesr/cvs-fast-export/cvs-fast-export.html) but that's more confusing than it is helping.
Is there a short explanation somewhere of the actual process of conversion?
The repository is remote, and I normally go in via SSH. I'm not sure if the folder structure of this project is normal. Here's roughly what it looks like:
- (on the remote server)/somefolder/cvs/anotherfolder/
- Contains: CVSROOT repo-I-want-to-clone-name
- repo-I-want-to-clone-name contains an `Attic`, and all the source code files with `,v` after them. (No CVSROOT)
- CVSROOT contains an `Attic`, cleanlog.sh, cvsignore, checkoutlist, checkoutlist,v, editinfo, editinfo,v, commitinfo, loginfo, taginfo, etc. (No source-code-like files here)
|
You need to have a local copy of the CVS repository directory (containing all the RCS `,v` files with history logs). If you have SSH access, use it to download the files via sftp/rsync/tar. If you only have a pserver URL, you need to use something like `cvssuck` to generate a local repository. In case the repository is hosted at SourceForge, you can download the whole thing using rsync.
Once you have the RCS files, feed a list of the filenames to cvs-fast-export, and it will output a repository in the intermediate "Git fast-export" format
```
cd ~/cvsfiles
find . -name '*,v' | cvs-fast-export [some options] > ~/converted.fe
```
**Note:** Make sure to include any `Attic` directories, as they contain files which existed in old commits but were eventually "deleted".
(Besides that, however, there are no additional metadata files needed – each `,v` file is completely self-contained, as it uses the same single-file history format as RCS does. The job of cvs-fast-export is to mingle those individual file histories into multi-file commits somehow.)
You can then make edits to the dump using `reposurgeon` (e.g. assign authors, squash split commits), and finally import it into Git using:
```
git init ~/result
cd ~/result
git fast-import < ~/converted.fe
```
The import will generate branches and commits, and will update the working-tree index, but apparently doesn't extract the working-tree files themselves: use `git reset --hard` or `git checkout -f` to do that.
(In theory, the same "fast-export" dump can also be imported by various other SCMs such as Mercurial, Plastic, or Bzr.)
|
How to improve lambda expression in python suffix array
There is a code that uses lambda expression
```
def ComputeArray(text):
    # text is ended with $
    if text[-1] != "$":
        text += "$"
    sarray = sorted(range(len(text)), key = lambda i: text[i:])
    print ", ".join([str(x) for x in sarray])

if __name__ == "__main__":
    ComputeArray("AACGATAGCGGTAAACGATAGCGGTAGA$")
```
it correctly outputs desired array
```
28, 27, 12, 0, 13, 1, 14, 25, 6, 19, 4, 17, 2, 15, 8, 21, 26, 3, 16, 7, 20, 9, 22, 10, 23, 11, 24, 5, 18
```
How could I improve the line
```
sarray = sorted(range(len(text)), key = lambda i: text[i:])
```
so that, when increasing the length of the text, I do not use lots of memory in the lambda expression?
```
Traceback (most recent call last):
File "C:\Array.py", line 23, in <module>
ComputeArray(text)
File "C:\Array.py", line 11, in ComputeArray
sarray = sorted(range(len(text)), key = lambda i: text[i:])
File "C:\Array.py", line 11, in <lambda>
sarray = sorted(range(len(text)), key = lambda i: text[i:])
MemoryError
```
# UPDATE
There is this other code:
```
sarray = []
for i in range(len(text)):
    sarray.append(text[i:])
order = [i[0] for i in sorted(enumerate(sarray), key=lambda x: x[1])]
print ", ".join([str(x) for x in order])
```
However, it is taking too much memory.
# I also tried a solution using the library available at <https://code.google.com/p/pysuffix/>
```
s = 'AACGATAGCGGTAGA'
s = unicode(s, 'utf-8', 'replace')
n = len(s)
sa = tks.simple_kark_sort(s)
lcp = tks.LCP(s, sa)
print n
print sa
```
Although it solves the problem, it takes too much time with larger strings. Do you know another library or a method to improve the suffix sort?
|
Looks like you're trying to build a suffix array. Luckily, there are already Python implementations of this algorithm: <https://code.google.com/p/pysuffix/>
If you must implement it yourself, think about what your code is doing:
1. Make a list of integers the same length as your text with `range`.
2. Apply the `key` function to each element of the list and store the result in a new list.
3. Sort the new list
4. Return the original integer associated with each element of the new list.
(This is also known as the [Schwartzian Transform](http://en.wikipedia.org/wiki/Schwartzian_transform), which is a pretty neat idea.)
The point is, you're making a slice of your (presumably large) text for *each offset* in the text, and storing it in a new list. You'll want to use a more specialized suffix array construction algorithm to avoid this cost.
Finally, to address your original question: the lambda expression isn't the culprit here. You're simply running into an algorithmic wall.
>
> **EDIT**
>
>
> Here's a good resource for fast SA algorithms:
> [What's the current state-of-the-art suffix array construction algorithm?](https://stackoverflow.com/questions/7857674/whats-the-current-state-of-the-art-suffix-array-construction-algorithm)
>
>
>
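For illustration, here is a sketch of prefix doubling, one of the specialized construction algorithms: it sorts suffixes by integer ranks instead of materializing O(n²) worth of string slices. This is a minimal version, not a tuned implementation:

```python
def suffix_array(text):
    # Sort suffixes by their first k characters, doubling k each round.
    # Each suffix is represented by its integer rank, so no text[i:]
    # slice is ever created.
    n = len(text)
    rank = [ord(c) for c in text]
    tmp = [0] * n
    sa = list(range(n))
    k = 1
    while True:
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        # re-rank: suffixes with equal (rank, rank+k) pairs share a rank
        tmp[sa[0]] = 0
        for j in range(1, n):
            tmp[sa[j]] = tmp[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
        rank, tmp = tmp, rank
        if rank[sa[-1]] == n - 1:  # all ranks distinct: fully sorted
            break
        k *= 2
    return sa

print(suffix_array("banana$"))  # [6, 5, 3, 1, 0, 4, 2]
```

Each round is an O(n log n) comparison sort, for O(n log² n) overall; replacing the per-round sort with radix passes gives the classic O(n log n) Manber–Myers construction.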
|
Detect overlapping date ranges from the same table
I have a table with the following data
```
PKey Start End Type
==== ===== === ====
01 01/01/2010 14/01/2010 S
02 15/01/2010 31/01/2010 S
03 05/01/2010 06/01/2010 A
```
And want to get the following results
```
PKey Start End Type
==== ===== === ====
01 01/01/2010 14/01/2010 S
03 05/01/2010 06/01/2010 A
```
Any ideas on where to start? A lot of the reading I've done suggests I need to create entries for each day and join on matching days. Is this the only way?
|
If you already have entries for each day, that should work, but if you don't, the overhead is significant, and if that query is used often, it will affect performance.
If the data is in this format, you can detect overlaps using simple date arithmetic, because an overlap is simply one interval starting after a given interval begins, but before the given interval is finished, something like:
```
select dr1.*
from date_ranges dr1
inner join date_ranges dr2
   on dr2.start > dr1.start -- start after dr1 is started
  and dr2.start < dr1.end   -- start before dr1 is finished
```
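The same date arithmetic can be sketched outside SQL. This illustrative Python version uses the symmetric form of the test (each range starts before the other ends), which also flags ranges wholly contained in another:

```python
from datetime import date

# sample rows from the question: (key, start, end, type)
rows = [
    ("01", date(2010, 1, 1),  date(2010, 1, 14), "S"),
    ("02", date(2010, 1, 15), date(2010, 1, 31), "S"),
    ("03", date(2010, 1, 5),  date(2010, 1, 6),  "A"),
]

def overlapping(rows):
    # a row overlaps another iff each one starts before the other ends
    return [a for a in rows
            if any(a is not b and a[1] < b[2] and b[1] < a[2] for b in rows)]

print([r[0] for r in overlapping(rows)])  # ['01', '03']
```

This reproduces the expected result from the question: rows 01 and 03 overlap each other, while row 02 touches neither.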
If you need special handling for intervals that are wholly within another interval, or you need to merge intervals, i.e.
```
PKey Start End Type
==== ===== === ====
01 01/01/2010 20/01/2010 S
02 15/01/2010 31/01/2010 S
```
yielding
```
Start End Type
===== === ====
01/01/2010 31/01/2010 S
```
you will need a more complex calculation.
In my experience with this kind of problems, once you get how to do the calculation by hand, it's easy to transfer it into SQL :)
|
Electron (atom shell) window getting closed on its own after some time
Just a plain Hello World application using electron-prebuilt is set up.
I run it by `npm start` command.
The window shows up as expected. However, it is getting closed on its own after some time.
**In command prompt it is throwing the following warnings before window is getting closed:**
```
WARNING:raw_channel_win.cc(473)] WriteFile: The pipe is being closed. (0xE8)
WARNING:channel.cc(549)] Failed to send message to ack remove remote endpoint (local ID 1, remote ID 1)
WARNING:channel.cc(315)] RawChannel write error
```
What is causing this issue?
npm version is 1.4.10 & node (via io.js) version is 0.11.13 (Windows 7 x64)
|
As @Oztaco stated, in the [QuickStart Guide](http://electron.atom.io/docs/v0.28.0/tutorial/quick-start/) it has the following code sample:
```
// Keep a global reference of the window object; if you don't, the window
// will be closed automatically when the JavaScript object is GCed.
var mainWindow = null;

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
app.on('ready', function() {
  // Create the browser window.
  mainWindow = new BrowserWindow({width: 800, height: 600});

  // other code omitted

  // Emitted when the window is closed.
  mainWindow.on('closed', function() {
    // Dereference the window object; usually you would store windows
    // in an array if your app supports multiple windows. This is the
    // time when you should delete the corresponding element.
    mainWindow = null;
  });
});
```
|
asynchronous aiohttp requests fails, but synchronous requests succeed
With the following code I get `Cannot connect to host ...:443 ssl:True` when I use the asynchronous `aiohttp`. When I use synchronous `requests`, it succeeds.
The `whitehouse.gov` links fail, but the `google.com` succeeds for both async and sync cases.
What is going wrong? This is with python 3.4.2 on FreeBSD8, aiohttp 0.14.4, requests 2.5.3
```
import asyncio
import aiohttp
import requests

urls = [
    'http://www.whitehouse.gov/cea/',
    'http://www.whitehouse.gov/omb',
    'http://www.google.com']

def test_sync():
    for url in urls:
        r = requests.get(url)
        print(r.status_code)

def test_async():
    for url in urls:
        try:
            r = yield from aiohttp.request('get', url)
        except aiohttp.errors.ClientOSError as e:
            print('bad eternal link %s: %s' % (url, e))
        else:
            print(r.status)

if __name__ == '__main__':
    print('async')
    asyncio.get_event_loop().run_until_complete(test_async())
    print('sync')
    test_sync()
```
The output from this is:
```
async
bad eternal link http://www.whitehouse.gov/cea: Cannot connect to host www.whitehouse.gov:443 ssl:True
bad eternal link http://www.whitehouse.gov/omb: Cannot connect to host www.whitehouse.gov:443 ssl:True
200
sync
200
200
200
```
|
I suspect the certificate validation chain is broken on your machine.
On Ubuntu everything is working, as @dano mentioned.
Anyway, you may disable ssl validation by creating custom `Connector` instance:
```
import asyncio
import aiohttp

urls = [
    'http://www.whitehouse.gov/cea/',
    'http://www.whitehouse.gov/omb',
    'http://www.google.com']

def test_async():
    connector = aiohttp.TCPConnector(verify_ssl=False)
    for url in urls:
        try:
            r = yield from aiohttp.request('get', url, connector=connector)
        except aiohttp.errors.ClientOSError as e:
            print('bad eternal link %s: %s' % (url, e))
        else:
            print(r.status)

if __name__ == '__main__':
    print('async')
    asyncio.get_event_loop().run_until_complete(test_async())
BTW, the `requests` library ships with its own certificate bundle. Maybe we need to do the same for aiohttp?
**UPD.** See also <https://github.com/aio-libs/aiohttp/issues/341>
|
UILabel not refreshing in UITableViewCell when reloaded
I add two `UILabel`s, a name label and a number label, to a `UITableViewCell`. When I reload the data in the `UITableView` (whether I add or delete a contact from the table view), the data in the labels overlaps, which means the labels are not reloading until the entire view is reloaded.
Can anyone please help?
Code:
```
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UILabel *name;
    UILabel *number;

    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
        //NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
        // NSArray *sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
        // [sortDescriptor release];
    }

    CGRect Label1Frame = CGRectMake(10, 10, 140, 25);
    name = [[UILabel alloc] initWithFrame:Label1Frame];
    name.tag = 1;
    [name setBackgroundColor:[UIColor clearColor]];
    [cell.contentView addSubview:name];

    CGRect Label2Frame = CGRectMake(150, 10, 140, 25);
    number = [[UILabel alloc] initWithFrame:Label2Frame];
    number.tag = 1;
    [number setBackgroundColor:[UIColor clearColor]];
    [cell.contentView addSubview:number];

    name.text = [NSString stringWithFormat:@"%@",[names objectAtIndex:indexPath.row]];
    number.text = [NSString stringWithFormat:@"%@",[phonenumbers objectAtIndex:indexPath.row]];
    [name release];
    [number release];
    return cell;
}
```
|
You should move your label creation code inside the cell initialization and then just reference them later by tag like this:
```
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
//NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
// NSArray *sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
// [sortDescriptor release];
CGRect Label1Frame = CGRectMake(10, 10, 140, 25);
UILabel *name = [[UILabel alloc] initWithFrame:Label1Frame];
name.tag = 1;
[name setBackgroundColor:[UIColor clearColor]];
[cell.contentView addSubview:name];
[name release];
CGRect Label2Frame = CGRectMake(150, 10, 140, 25);
UILabel *number = [[UILabel alloc] initWithFrame:Label2Frame];
number.tag = 2;
[number setBackgroundColor:[UIColor clearColor]];
[cell.contentView addSubview:number];
[number release];
}
UILabel *nameLabel = (UILabel *)[cell viewWithTag:1];
UILabel *numberLabel = (UILabel *)[cell viewWithTag:2];
    nameLabel.text = [NSString stringWithFormat:@"%@",[names objectAtIndex:indexPath.row]];
    numberLabel.text = [NSString stringWithFormat:@"%@",[phonenumbers objectAtIndex:indexPath.row]];
return cell;
}
```
|
Flux not subscribing in Spring 5 reactor
I'm probably missing something but I can't figure out what it is.
The following code does nothing at all:
```
webClient.get().uri("/some/path/here").retrieve()
.bodyToMono(GetLocationsResponse.class)
.doOnNext(System.out::println)
.subscribe();
```
If I try to block the call it works fine:
```
webClient.get().uri("/some/path/here").retrieve()
.bodyToMono(GetLocationsResponse.class)
.doOnNext(System.out::println)
.block();
```
The weird thing is that if I create a Flux "manually" (i.e not coming from the spring webClient) this works fine:
```
Flux.just("1", "2", "3")
.filter(s -> !s.equals("2"))
.doOnNext(System.out::println)
.subscribe();
```
Could someone please explain what it is that I am doing wrong? Isn't `.subscribe()` supposed to execute the operation in the first case, just like it did in the last?
Thanks!
|
**Short Answer**
`subscribe` does not block the current thread, which means the app's main thread can complete before the `Flux` emits any element. So either use `block` or make the main thread wait.
**Details**
A call to the no-args [subscribe()](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#subscribe--) just makes `request(unbounded)` on the `Flux` without setting up any `Subscriber`. It generally triggers the operation in a *separate thread* but *does not block the current thread*. Most likely, your main thread ends before `WebClient` receives the response in that separate thread and the [passive side effect `doOnNext(...)`](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#doOnNext-java.util.function.Consumer-) happens.
To illustrate/test that operation is started, wait for some time in the main thread. Just put the following line right after `subscribe()` call:
```
Thread.sleep(1000);
```
Now, after playing with the timeout value, you'll be able to see result printed.
Let's now explicitly supply a custom `Scheduler` for async operations and wait for all its tasks to be completed. Also, let's pass `System.out::println` as the `subscribe(...)` argument instead of using `doOnNext`, so that the complete code appears as follows:
```
ExecutorService executor = Executors.newSingleThreadExecutor();
webClient.get().uri("/some/path/here").retrieve()
.bodyToMono(GetLocationsResponse.class)
.publishOn(Schedulers.fromExecutor(executor)) // next operation will go to this executor
.subscribe(System.out::println); //still non-blocking
executor.awaitTermination(1, TimeUnit.SECONDS); //block current main thread
```
This example uses slightly different [subscribe(Consumer)](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#subscribe-java.util.function.Consumer-). Most importantly, it adds [publishOn(Scheduler)](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#publishOn-reactor.core.scheduler.Scheduler-) which is backed by `ExecutorService`. The latter is used then to wait for termination in the main thread.
Surely, the much easier way to achieve the same result is to use `block()` as you mentioned initially:
>
>
> ```
> webClient.get().uri("/some/path/here").retrieve()
> .bodyToMono(GetLocationsResponse.class)
> .doOnNext(System.out::println)
> .block();
>
> ```
>
>
Finally, a note on your third example with `Flux.just(...)...subscribe()` - it seems it just completes quickly, before your main thread gets terminated. That's because it takes far less time to emit a few `String` elements than to emit a single `GetLocationsResponse` element (which implies writing the request, reading the response, and parsing it into a POJO). However, if you make this `Flux` delay its elements, you'll reproduce the same behavior:
```
Flux.just("1", "2", "3")
.filter(s -> !s.equals("2"))
.delayElements(Duration.ofMillis(500)) //this makes it stop printing in main thread
.doOnNext(System.out::println)
.subscribe();
Flux.just("1", "2", "3")
.filter(s -> !s.equals("2"))
.delayElements(Duration.ofMillis(500))
.doOnNext(System.out::println)
.blockLast(); //and that makes it printing back again
```
|
polars equivalent to groupby.last
Say I have a polars dataframe:
```
import polars as pl
df = pl.DataFrame({'index': [1,2,3,2,1],
'object': [1, 1, 1, 2, 2],
'period': [1, 2, 4, 4, 23],
'value': [24, 67, 89, 5, 23],
})
```
How would I get a dict of index -> last value?
`df.col('value').last().over(['index']).alias("last")` is the last value but that requires a lot of extra computation and more work to get to the key value pairs.
|
The `over` function will keep all rows, which is probably not what you want. An easy way to get just the last `value` for each `index` is to use [`unique`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.unique.html).
```
(
df
.select(['index', 'value'])
.unique(subset='index', keep="last")
)
```
```
shape: (3, 2)
┌───────┬───────┐
│ index ┆ value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪═══════╡
│ 1 ┆ 23 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 5 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 89 │
└───────┴───────┘
```
From this point, you can use the `to_dicts` method to convert the DataFrame to a list of dictionaries.
```
last_values = (
df
.select(["index", "value"])
.unique(subset="index", keep="last")
.to_dicts()
)
last_values
```
```
[{'index': 1, 'value': 23}, {'index': 2, 'value': 5}, {'index': 3, 'value': 89}]
```
If you are looking to later import this into a DataFrame, you'll want to stop at this point. For example:
```
pl.DataFrame(last_values)
```
```
>>> pl.DataFrame(last_values)
shape: (3, 2)
┌───────┬───────┐
│ index ┆ value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪═══════╡
│ 1 ┆ 23 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 5 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 89 │
└───────┴───────┘
```
However, if you want to collapse this into a single dictionary of `index`:`value` pairs, you can use a dictionary comprehension.
```
{
next_dict["index"]: next_dict["value"]
for next_dict in last_values
}
```
```
{1: 23, 2: 5, 3: 89}
```
#### Edit: Updating based on date
Let's assume that we have this data:
```
import polars as pl
import datetime
df = pl.DataFrame({
"index": [1, 2, 3],
"value": [10, 20, 30],
}).join(
pl.DataFrame({
'date': pl.date_range(datetime.date(2021, 1, 1), datetime.date(2023, 1, 1), "1y")
}),
how="cross"
)
df
```
```
shape: (9, 3)
┌───────┬───────┬────────────┐
│ index ┆ value ┆ date │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ date │
╞═══════╪═══════╪════════════╡
│ 1 ┆ 10 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2023-01-01 │
└───────┴───────┴────────────┘
```
And we have these values that we want to update.
```
update_df = pl.DataFrame({
"index": [2, 3],
"value": [200, 300],
})
update_df
```
```
shape: (2, 2)
┌───────┬───────┐
│ index ┆ value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═══════╪═══════╡
│ 2 ┆ 200 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 300 │
└───────┴───────┘
```
Note: I've purposely left out `index` "1" (to show what will happen).
If we want to update the `value` associated with each `index`, but only beyond a certain date, we can use a [`join_asof`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.join_asof.html#polars.DataFrame.join_asof).
Since this is an advanced method, we'll take it in steps.
We'll add the `current_date` to the `update_df` as a literal. (The same value for all rows.)
We also need to make sure both our DataFrames are sorted by the "as\_of" column (`date`, not `index`). (`update_df` will already be sorted because it's the same date on each row.)
I'll also sort after the `join_asof` so that we can see what is happening more clearly. (You don't need to do this step.)
```
current_date = datetime.date(2022, 1, 1)
(
df
.sort(['date'])
.rename({'value': 'prev_value'})
.join_asof(
update_df.with_column(pl.lit(current_date).alias('date')),
on='date',
by=['index'],
strategy='backward'
)
.sort(['index', 'date'])
)
```
```
shape: (9, 4)
┌───────┬────────────┬────────────┬───────┐
│ index ┆ prev_value ┆ date ┆ value │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ date ┆ i64 │
╞═══════╪════════════╪════════════╪═══════╡
│ 1 ┆ 10 ┆ 2021-01-01 ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2022-01-01 ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2023-01-01 ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2021-01-01 ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2022-01-01 ┆ 200 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2023-01-01 ┆ 200 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2021-01-01 ┆ null │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2022-01-01 ┆ 300 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2023-01-01 ┆ 300 │
└───────┴────────────┴────────────┴───────┘
```
Notice that only those rows with a date >= 2022-01-01 have a non-null value for `value`. (I'll show how to do a > 2022-01-01 at the end.)
Next we'll use [`fill_null`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Expr.fill_null.html#polars.Expr.fill_null) to fill the null values in `value` with the `prev_value` column.
```
current_date = datetime.date(2022, 1, 1)
(
df
.sort(['date'])
.rename({'value': 'prev_value'})
.join_asof(
update_df.with_column(pl.lit(current_date).alias('date')),
on='date',
by=['index'],
strategy='backward'
)
.sort(['index', 'date'])
.with_column(pl.col('value').fill_null(pl.col('prev_value')))
)
```
```
shape: (9, 4)
┌───────┬────────────┬────────────┬───────┐
│ index ┆ prev_value ┆ date ┆ value │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ date ┆ i64 │
╞═══════╪════════════╪════════════╪═══════╡
│ 1 ┆ 10 ┆ 2021-01-01 ┆ 10 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2022-01-01 ┆ 10 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2023-01-01 ┆ 10 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2021-01-01 ┆ 20 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2022-01-01 ┆ 200 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2023-01-01 ┆ 200 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2021-01-01 ┆ 30 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2022-01-01 ┆ 300 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2023-01-01 ┆ 300 │
└───────┴────────────┴────────────┴───────┘
```
Now, to clean up, we can drop the `prev_value` column, and re-arrange the columns.
```
current_date = datetime.date(2022, 1, 1)
(
df
.sort(['date'])
.rename({'value': 'prev_value'})
.join_asof(
update_df.with_column(pl.lit(current_date).alias('date')),
on='date',
by=['index'],
strategy='backward'
)
.sort(['index', 'date'])
.with_column(pl.col('value').fill_null(pl.col('prev_value')))
.drop(['prev_value'])
.select([
pl.exclude('date'),
pl.col('date')
])
)
```
```
shape: (9, 3)
┌───────┬───────┬────────────┐
│ index ┆ value ┆ date │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ date │
╞═══════╪═══════╪════════════╡
│ 1 ┆ 10 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 200 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 200 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 300 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 300 ┆ 2023-01-01 │
└───────┴───────┴────────────┘
```
If you need to update only those rows that are strictly greater than `current_date`, you can simply add one day to your `current_date`. Polars makes this easy with the [`offset_by`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.internals.expr.ExprDateTimeNameSpace.offset_by.html#polars.internals.expr.ExprDateTimeNameSpace.offset_by) expression.
```
(
df
.sort(['date'])
.rename({'value': 'prev_value'})
.join_asof(
update_df.with_column(pl.lit(current_date).dt.offset_by('1d').alias('date')),
on='date',
by=['index'],
strategy='backward'
)
.sort(['index', 'date'])
.with_column(pl.col('value').fill_null(pl.col('prev_value')))
.drop(['prev_value'])
.select([
pl.exclude('date'),
pl.col('date')
])
)
```
```
shape: (9, 3)
┌───────┬───────┬────────────┐
│ index ┆ value ┆ date │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ date │
╞═══════╪═══════╪════════════╡
│ 1 ┆ 10 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 ┆ 10 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 200 ┆ 2023-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2021-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 30 ┆ 2022-01-01 │
├╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 3 ┆ 300 ┆ 2023-01-01 │
└───────┴───────┴────────────┘
```
|
Using Elasticsearch Java REST API with self signed certificates
I want to use the Java REST API (RestHighLevelClient) to communicate with an Elasticsearch 5.6 server over HTTPS. However, the certificate for the server is self signed and when I try to connect it throws a SSLHandshakeException.
Is there a way of configuring the REST client to accept self signed certificates?
|
I got this working using a custom Java Key Store. Here's my code:
```
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
        new UsernamePasswordCredentials(username, password));
final SSLContext sslContext = SSLContexts.custom()
.loadTrustMaterial(new File("my_keystore.jks"), keystorePassword.toCharArray(),
new TrustSelfSignedStrategy())
.build();
RestClient client = RestClient.builder(new HttpHost(host, port, scheme)).setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
.setDefaultCredentialsProvider(credentialsProvider)
.setSSLContext(sslContext)
).build();
```
To create the keystore, I downloaded the cert for the domain through Firefox, and used:
```
keytool -import -v -trustcacerts -file my_domain.crt -keystore my_keystore.jks -keypass password -storepass password
```
|
How SqlDataAdapter works internally?
I wonder how `SqlDataAdapter` works internally, especially when using `UpdateCommand` for updating a huge `DataTable` (since it's usually a lot faster than just sending SQL statements from a loop).
Here are some ideas I have in mind:
- It creates a prepared sql statement (using `SqlCommand.Prepare()`) with `CommandText` filled and sql parameters initialized with correct sql types. Then, it loops on datarows that need to be updated, and for each record, it updates parameters values, and call `SqlCommand.ExecuteNonQuery()`.
- It creates a bunch of `SqlCommand` objects with everything filled inside (`CommandText` and sql parameters). Several SqlCommands at once are then batched to the server (depending of `UpdateBatchSize`).
- It uses some special, low-level or undocumented SQL driver instructions that allow performing an update on several rows in an efficient way (rows to update would need to be provided in a special data format, and the same SQL query (`UpdateCommand` here) would be executed against each of these rows).
|
It uses an internal facility of the SQL Server client classes which is called **command sets**. You can send multiple batches with a single command to SQL Server. This cuts down on per-call overhead. You have less server roundtrips and such.
A single row is updated per statement, and one statement per batch is sent, but multiple batches per roundtrip are sent. The last point in this list is the *magic sauce*.
Unfortunately, this facility is **not publicly exposed**. [Ayende took a hack at this and built a private-reflection-based API for it.](http://ayende.com/blog/1679/opening-up-query-batching)
If you want more information I encourage you to look at the internal `SqlCommandSet` class.
That said, **you can go faster than this by yourself**: Transfer the update data using a TVP and issue a single `UPDATE` that updates many rows. That way you save all per-batch, per-roundtrip and per-statement overheads.
Such a query would look like this:
```
update T set T.x = @src.x from T join @src on T.ID = @src.ID
```
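SQL Server TVPs aren't available everywhere, but the shape of that set-based update can be sketched in any engine. A minimal illustration using SQLite through Python's built-in `sqlite3` (table and column names are made up for the demo; a temp table stands in for the TVP):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (id INTEGER PRIMARY KEY, x INTEGER)")
con.executemany("INSERT INTO T (id, x) VALUES (?, ?)", [(1, 0), (2, 0), (3, 0)])

# Stage the update data in a temp table (standing in for the TVP),
# then update many rows with a single statement - no per-row roundtrips.
con.execute("CREATE TEMP TABLE src (id INTEGER PRIMARY KEY, x INTEGER)")
con.executemany("INSERT INTO src (id, x) VALUES (?, ?)", [(1, 10), (3, 30)])
con.execute("""
    UPDATE T
    SET x = (SELECT s.x FROM src AS s WHERE s.id = T.id)
    WHERE id IN (SELECT id FROM src)
""")

rows = con.execute("SELECT id, x FROM T ORDER BY id").fetchall()
print(rows)  # → [(1, 10), (2, 0), (3, 30)]
```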
|
Include Value From Previous Row
I'm using SQL Server here. I have a table for inventory where only January has a beginning quantity. Each row has a changed quantity for that month.
So..
```
Item | Month | Beg | PerChg
---------------------------
001 1 5 5
001 2 0 -1
001 3 0 4
```
So in the above case, January would be 10, February 9, and March 13, and so on...
Is it possible to take care of this line by line in SQL?
I looked into lag briefly but I'm not so sure it does what I need or if it's the best way to go about this.
|
You were on the right track with the **window functions**, however, **Lag()** would not be much help here. The good news is you can use **sum() over**
**Example**
```
Declare @YourTable Table ([Item] varchar(50),[Month] int,[Beg] int,[PerChg] int)
Insert Into @YourTable Values
('001',1,5,5)
,('001',2,0,-1)
,('001',3,0,4)
Select *
,RunningTotal = sum(Beg+PerChg) over (Partition By Item Order by Month)
From @YourTable
```
**Returns**
```
Item Month Beg PerChg RunningTotal
001 1 5 5 10
001 2 0 -1 9
001 3 0 4 13
```
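The same windowed `sum() over` works in any engine with window-function support; here's a quick sanity check of those numbers using SQLite (3.25+) from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inv (item TEXT, month INTEGER, beg INTEGER, perchg INTEGER)")
con.executemany(
    "INSERT INTO inv VALUES (?, ?, ?, ?)",
    [("001", 1, 5, 5), ("001", 2, 0, -1), ("001", 3, 0, 4)],
)

# Running total of (beg + perchg) per item, ordered by month
rows = con.execute("""
    SELECT item, month,
           SUM(beg + perchg) OVER (PARTITION BY item ORDER BY month) AS running_total
    FROM inv
""").fetchall()
print(rows)  # → [('001', 1, 10), ('001', 2, 9), ('001', 3, 13)]
```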
|
How do I use the CoffeeScript existential operator to check some object properties for undefined?
I would like to use the CoffeeScript existential operator to check some object properties for undefined. However, I encountered a little problem.
Code like this:
```
console.log test if test?
```
Compiles to:
```
if (typeof test !== "undefined" && test !== null) console.log(test);
```
Which is the behavior I would like to see. However, when I try using it against object properties, like this:
```
console.log test.test if test.test?
```
I get something like that:
```
if (test.test != null) console.log(test.test);
```
Which doesn't look like a check against undefined at all. The only way I could achieve the same (1:1) behavior for object properties was by using a larger check:
```
console.log test.test if typeof test.test != "undefined" and test.test != null
```
The question is - am I doing something wrong? Or is the compiled code what is enough to check for existence of a property (a null check with type conversion)?
|
This is a common point of confusion with the existential operator: Sometimes
```
x?
```
compiles to
```
typeof x !== "undefined" && x !== null
```
and other times it just compiles to
```
x != null
```
*The two are equivalent,* because `x != null` will be `false` when `x` is either `null` or `undefined`. So `x != null` is a more compact way of expressing `(x !== undefined && x !== null)`. The reason the `typeof` compilation occurs is that the compiler thinks `x` may not have been defined at all, in which case doing an equality test would trigger `ReferenceError: x is not defined`.
In your particular case, `test.test` may have the value `undefined`, but you can't get a `ReferenceError` by referring to an undefined property on an existing object, so the compiler opts for the shorter output.
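A quick way to convince yourself that the two forms agree (a runnable sketch in plain JavaScript, since that's what CoffeeScript compiles to):

```javascript
// x != null (loose inequality) is false exactly when x is null or undefined,
// so it matches the longer typeof check for any defined variable.
const check = (x) => x != null;

console.log(check(undefined)); // false
console.log(check(null));      // false
console.log(check(0));         // true (falsy values other than null/undefined pass)
console.log(check(""));        // true
```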
|
Hiding columns in CSS Grid
I am newish to CSS Grid and I am trying to figure out how I can make a column completely go away and force the others to stack.
I basically want the red crossed out ones below to completely go away and just stack all the others in the grid into several rows.
[](https://i.stack.imgur.com/3EqXk.png)
With that code below I want `block 4` to disappear. Then I want blocks 1/2/3/5/6 to stack.
```
.wrapper {
display: grid;
grid-template-columns: repeat(3, minmax(155px, 1fr)) 1fr;
grid-gap: 10px;
padding: 0.5rem;
}
```
```
<div class="wrapper">
<div>
Block 1
</div>
<div>
Block 2
</div>
<div>
Block 3
</div>
<div>
Block 4
</div>
<div>
Block 5
</div>
<div>
Block 6
</div>
</div>
```
A codepen of it:
<https://codepen.io/allencoded/pen/goNYwv>
|
Ensure that there can only be four items per row:
```
grid-template-columns: repeat(auto-fill, minmax(20%, 1fr));
grid-gap: 10px;
```
With 20% minimum width per item, and a grid gap (of any length), there can never be more than four items per row.
Then, hide the fourth item in each row:
```
div:nth-child(4) { visibility: hidden; }
```
<https://codepen.io/anon/pen/LeKzzx>
```
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(20%, 1fr));
grid-gap: 10px;
padding: 0.5rem;
}
.wrapper > div:nth-child(4) {
visibility: hidden;
}
.wrapper > div {
background-color: lightgreen;
}
```
```
<div class="wrapper">
<div>Block 1</div>
<div>Block 2</div>
<div>Block 3</div>
<div>Block 4</div>
<div>Block 5</div>
<div>Block 6</div>
</div>
```
|
Scala Abstract type members - inheritance and type bounds
I ran into some strange situation in Scala today while I tried to refine the type bounds on an abstract type member.
I have two traits that define bounds on a type member and combine them in a concrete class. That works fine but when matching / casting with the trait combination only one of the two TypeBounds is "active" and I struggle to understand why ...
I tried to prepare an example:
```
trait L
trait R
trait Left {
type T <: L
def get: T
}
trait Right {
type T <: R
}
```
now if I combine these two traits in one concrete class
```
val concrete = new Left with Right {
override type T = L with R
override def get: T = new L with R {}
}
```
I can access my member via get as intended
```
// works fine
val x1: L with R = concrete.get
```
but if I cast to (Left with Right) or pattern match, I cannot access the member anymore. Depending on the order, I get either the type bounds from Left or from Right, but not the combination of both.
```
// can only access as R, L with R won't work
val x2: R = concrete.asInstanceOf[Left with Right].get
// can only access as L, L with R won' work
val x3: L = concrete.asInstanceOf[Right with Left].get
```
I understand that Left with Right is not the same thing as Right with Left but in both cases both type bounds are included, so why can I only get one to work?
can anyone shed some light on why this is happening?
|
the second type member overrides the first one.
```
trait L
trait R
trait Left {
type T <: L
def get: T
}
trait Right {
type T <: R
}
object X {
type LR = Left with Right // Right#T overrides Left#T, LR#T is now <: R
  type RL = Right with Left // Left#T overrides Right#T, RL#T is now <: L
val concrete = new Left with Right {
override type T = L with R
override def get: T = new L with R {}
}
// ok
val r: R = concrete.asInstanceOf[LR].get
val l: L = concrete.asInstanceOf[RL].get
// ok
implicitly[LR#T <:< R]
implicitly[RL#T <:< L]
// doesn't compile, LR#T is a subclass of R because Right#T overrides Left#T
implicitly[LR#T <:< L]
// doesn't compile, RL#T is a subclass of L because Left#T overrides Right#T
implicitly[RL#T <:< R]
}
```
In "concrete" you override the type member with `L with R`, but when you cast it to `Left with Right` you lose that refinement, and T becomes \_ <: L or \_ <: R depending on the order of the traits.
Since type members can be overridden, if you upcast (e.g. to LR or RL) you lose the refinement you applied in concrete. You could say concrete is at the same time an RL and an LR, but when you upcast it to LR or RL you lose the information you had in the other one.
|
Why Haskell uses bottom instead of null in partial functions?
I'm reading about Haskell denotational semantics (<http://en.wikibooks.org/wiki/Haskell/Denotational_semantics>) and I fail to see why, in a type, bottom "value" is placed at another level compared to "normal" values, eg why it can't be pattern matched.
I believe that pattern matching bottom would cause trouble as bottom also denotes non-terminating computations, but why should non-terminating computations and errors be treated the same? (I'm assuming calling a partial function with an unsupported argument can be considered an error.)
What useful properties would be lost, if all Haskell types included a pattern-matchable Java-null-like value instead of bottom?
In other words: why wouldn't it be wise to make all Haskell functions total by lifting all types with null value?
(Do non-terminating computations need a special type at all?)
|
You can't get rid of non-termination without restricting the turing-completeness of your language, and by the halting problem, we can't generally detect non-termination and replace it by a value.
So *every* turing complete language has bottom.
The only difference between Haskell and Java is then that Java has bottom *and* null. Haskell doesn't have the latter, which is handy because then we don't have to check for nulls!
Put another way, since bottom is *inescapable* (in the turing complete world), then what's the point of *also* making everything nullable too, other than inviting bugs?
Also note that while some functions in the Prelude are partial for historic reasons, modern Haskell style leans towards writing total functions nearly everywhere and using an explicit `Maybe` return type in functions such as `head` that would otherwise be partial.
|
Webpack import \* messes tree shaking?
I read here - <https://www.thedevelobear.com/post/5-things-to-improve-performance/> - that importing everything from a library will not allow tree shaking to remove it, even if it is not used. I don't believe this - is it really true? I would think that tree shaking would identify that none of the functions except a couple were used, and then remove the unused ones.
>
> There is a really easy way to reduce bundle size by just checking your imports. When performing methods or components from 3rd party libraries, make sure you import only the things you need, not the whole library itself. For instance, if you’re using lodash and need the fill method, import it directly instead of calling it on lodash object:
>
>
>
> ```
> // Instead of this
>
> import _ from ‘lodash’
>
> let array = [1, 2, 3];
> _.fill(array, '');
>
> // Do this
>
> import { fill } from ‘lodash’
>
> let array = [1, 2, 3];
> fill(array, '');
>
> ```
>
>
|
Using current version of Webpack (5.3.0), this is not true. With the following files:
```
// src/index.js
import * as a from './modules'
console.log(a.foo)
// Or: console.log(a.baz.foo)
```
```
// src/modules.js
export const foo = 'foo'
export const bar = 'bar'
export const baz = {
foo: 'foo',
bar: 'bar',
}
```
Webpack outputs:
```
// dist/main.js
(()=>{"use strict";console.log("foo")})();
```
Based on this [Github issue](https://github.com/webpack/webpack/issues/2713), it was not true even at the time of the previous answer.
|
whats the fastest way to find eigenvalues/vectors in python?
Currently I'm using numpy, which does the job. But, as I'm dealing with matrices with several thousands of rows/columns, and later this figure will go up to tens of thousands, I was wondering if there is a package in existence that can perform this kind of calculation faster?
|
- **if your matrix is sparse**, then instantiate your matrix using a constructor from *scipy.sparse*, then use the analogous eigenvector/eigenvalue methods in *scipy.sparse.linalg*. From a performance point of view, this has two advantages:
- your matrix, built from the scipy.sparse constructor, will be smaller in proportion to how sparse it is.
- the [eigenvalue/eigenvector methods](http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html#scipy.sparse.linalg.eigs) for sparse matrices (*eigs*, *eigsh*) accept an optional argument, *k* which is the number of eigenvector/eigenvalue pairs you want returned. Nearly always the number required to account for the >99% of the variance is far less then the number of columns, which you can verify *ex post*; in other words, you can tell method not to calculate and return all of the eigenvectors/eigenvalue pairs--beyond the (usually) small subset required to account for the variance, it's unlikely you need the rest.
- **use the linear algebra library in *SciPy*,** *scipy.linalg*, instead
of the *NumPy* library of the same name. These two libraries have
the same name and use the same method names. Yet there's a difference in performance.
This difference is caused by the fact that *numpy.linalg* is a
*less* faithful wrapper on the analogous LAPACK routines which
sacrifice some performance for portability and convenience (i.e.,
to comply with the *NumPy* design goal that the entire *NumPy* library
should be built without a Fortran compiler). *linalg* in *SciPy* on
the other hand is a much more complete wrapper on LAPACK and which
uses *f2py*.
- ***select the function appropriate for your use case***; in other words, don't use a function that does more than you need. In *scipy.linalg*
there are several functions to calculate eigenvalues; the
differences are not large, though by careful choice of the function
to calculate eigenvalues, you should see a performance boost. For
instance:
- *scipy.linalg.eig* returns *both* the eigenvalues and
eigenvectors
- *scipy.linalg.eigvals*, returns only the eigenvalues. So if you only need the eigenvalues of a matrix then *do not* use *linalg.eig*, use *linalg.eigvals* instead.
 - if you have a real-valued square symmetric matrix (equal to its transpose), use *scipy.linalg.eigh*
- ***optimize your SciPy build*** Preparing your SciPy build environment
 is done largely in SciPy's *setup.py* script. Perhaps the
most significant option performance-wise is identifying any optimized
LAPACK libraries such as *[ATLAS](http://math-atlas.sourceforge.net/)* or Accelerate/vecLib framework (OS X
only?) so that SciPy can detect them and build against them.
Depending on the rig you have at the moment, optimizing your SciPy
build then re-installing can give you a substantial performance
increase. Additional notes from the SciPy core team are [here](http://www.scipy.org/Installing_SciPy/BuildingGeneral).
Will these functions work for large matrices?
I should think so. These are industrial-strength matrix decomposition methods, which are just thin wrappers over the analogous Fortran ***LAPACK*** routines.
I have used most of the methods in the linalg library to decompose matrices in which the number of columns is usually between about 5 and 50, and in which the number of rows usually exceeds 500,000. Neither the *SVD* nor the *eigenvalue* methods seem to have any problem handling matrices of this size.
Using the *SciPy* library *linalg* you can calculate eigenvectors and eigenvalues with a single call, using any of several methods from this library: ***eig***, ***eigvalsh***, and ***eigh***.
```
>>> import numpy as NP
>>> from scipy import linalg as LA
>>> A = NP.random.randint(0, 10, 25).reshape(5, 5)
>>> A
array([[9, 5, 4, 3, 7],
[3, 3, 2, 9, 7],
[6, 5, 3, 4, 0],
[7, 3, 5, 5, 5],
[2, 5, 4, 7, 8]])
>>> e_vals, e_vecs = LA.eig(A)
```
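As an illustration of the "use the narrowest function" advice above, here is a small sketch comparing `eig`, `eigvals`, and `eigh` on a symmetric matrix (the matrix and seed here are arbitrary, chosen just for the demonstration):

```python
import numpy as np
from scipy import linalg as LA

# build a random real symmetric matrix (M + M.T is always symmetric)
rng = np.random.default_rng(0)
M = rng.integers(0, 10, size=(5, 5)).astype(float)
A = M + M.T

# general-purpose: returns (complex-typed) eigenvalues plus eigenvectors
vals_general, vecs = LA.eig(A)

# eigenvalues only -- skips computing the eigenvectors
vals_only = LA.eigvals(A)

# symmetric/Hermitian solver: real eigenvalues, typically faster
vals_sym = LA.eigh(A, eigvals_only=True)

# all three agree (up to ordering and eig's complex dtype)
print(np.allclose(sorted(vals_general.real), sorted(vals_sym)))
```

For a symmetric input all three routines produce the same spectrum; the narrower functions simply avoid work you don't need.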
|
Generate a tree based on a array of object
I want to generate a tree in the following form :
```
[
{
folder: 'src',
children: [
{
folder: 'app',
children: [
{ file: 'app.module.ts', status: 'M' },
{
folder: 'components',
children: [
{
folder: 'accordion',
children: [
{ file: 'accordion.components.scss', status: 'M'}
]
},
{
folder: 'file-diff-commit',
children: [
{ file: 'file-diff-commit.component.html', status: 'A' },
{ file: 'file-diff-commit.component.ts', status: 'A' }
]
}
]
},
{
folder: 'models',
children: [
{ file: 'MockGitService.ts' , status: 'M' },
{ file: 'MockLeftPanelService.ts', status: 'M'}
]
}
]
},
{
folder: 'assets',
children: [
]
}
]
},
{ file: 'package.json', status: 'M' },
{ file: 'yarn.lock', status: 'M' }
]
```
And I need to generate it from an array of objects. An example:
```
[
{ status: "M", path: "src/app/app.module.ts" },
{ status: "M", path: "src/app/components/accordion/accordion.component.scss" },
{ status: "A", path: "src/app/components/file-diff-commit/file-diff-commit.component.html" },
{ status: "A", path: "src/app/components/file-diff-commit/file-diff-commit.component.ts" },
{ status: "M", path: "src/app/models/MockGitService.ts" },
{ status: "M", path: "src/app/models/MockLeftPanelService.ts" },
{ status: "M", path: "src/assets/i18n/en.json" },
{ status: "M", path: "src/assets/i18n/fr.json" },
{ status: "M", path: "package.json" },
{ status: "M", path: "yarn.lock" }
]
```
I'm not asking for a whole method, just a small clue to guide my research. From what I've found on the internet, trees are usually generated using indexes, but I'm looking for logic based on paths.
Any help is welcome. Thanks in advance.
|
You could take an iterative approach for the objects and a recursive approach for the split directories and the final file.
In the recursion, the actual level is searched for an object with the wanted directory name and if not found, a new object is generated with `folder` and `children`.
At the end, the file object is pushed to the last level.
```
var data = [{ status: "M", path: "src/app/app.module.ts" }, { status: "M", path: "src/app/components/accordion/accordion.component.scss" }, { status: "A", path: "src/app/components/file-diff-commit/file-diff-commit.component.html" }, { status: "A", path: "src/app/components/file-diff-commit/file-diff-commit.component.ts" }, { status: "M", path: "src/app/models/MockGitService.ts" }, { status: "M", path: "src/app/models/MockLeftPanelService.ts" }, { status: "M", path: "src/assets/i18n/en.json" }, { status: "M", path: "src/assets/i18n/fr.json" }, { status: "M", path: "package.json" }, { status: "M", path: "yarn.lock" }],
tree = [];
data.forEach(({ status, path }) => {
var dirs = path.split('/'),
file = dirs.pop();
dirs
.reduce((level, folder) => {
var object = level.find(o => o.folder === folder);
if (!object) {
level.push(object = { folder, children: [] });
}
return object.children;
}, tree)
.push({ file, status });
});
console.log(tree);
```
```
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
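The same reduce-over-path-segments idea can be sketched in Python for comparison (the function name and variable names here are my own, not from the question):

```python
def build_tree(entries):
    """Build a nested folder/file tree from flat {status, path} records."""
    tree = []
    for entry in entries:
        *dirs, filename = entry["path"].split("/")
        level = tree
        for folder in dirs:
            # reuse an existing folder node at this level, or create one
            node = next((n for n in level if n.get("folder") == folder), None)
            if node is None:
                node = {"folder": folder, "children": []}
                level.append(node)
            level = node["children"]
        # the last segment is the file itself
        level.append({"file": filename, "status": entry["status"]})
    return tree

data = [
    {"status": "M", "path": "src/app/app.module.ts"},
    {"status": "M", "path": "package.json"},
]
print(build_tree(data))
```

The core of the logic is identical: pop the file name off the path, walk (or create) one folder node per remaining segment, then push the file into the deepest level reached.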
|
Stopping GIF Animation Programmatically
I am developing a Twitter application which references to the images directly from Twitter.
How can I prevent animated gifs from being played?
Using `window.stop()` at the end of the page does not work for me in Firefox.
Is there a better JavaScript hack? Preferable this should work for all browsers
|
This is not a cross-browser solution, but this worked in Firefox and Opera (not in IE8 :-/). Taken [from here](https://userscripts-mirror.org/scripts/review/80588)
```
[].slice.apply(document.images).filter(is_gif_image).map(freeze_gif);
function is_gif_image(i) {
return /^(?!data:).*\.gif/i.test(i.src);
}
function freeze_gif(i) {
var c = document.createElement('canvas');
var w = c.width = i.width;
var h = c.height = i.height;
c.getContext('2d').drawImage(i, 0, 0, w, h);
try {
i.src = c.toDataURL("image/gif"); // if possible, retain all css aspects
} catch(e) { // cross-domain -- mimic original with all its tag attributes
for (var j = 0, a; a = i.attributes[j]; j++)
c.setAttribute(a.name, a.value);
i.parentNode.replaceChild(c, i);
}
}
```
|
Is there a difference of stability between Linux distros?
I've been using Debian since 2010 for some home purposes and it has been stable. Is Debian still a good option if I need a server for heavy network, CPU, disk and memory usage?
Last month I heard some admins say that Red Hat is the most stable for bulk operations and that CentOS is a free version of RHEL. Their opinion is that CentOS is the best free distro. CentOS is getting very popular in my country (Dominican Republic) and I've wondered if Debian is falling behind.
Can RedHat, Debian, CentOS or Suse be used for bulk operations servers?
|
This kind of question cannot possibly be answered objectively. For many reasons:
1. The word *stable* could mean literally anything. It's easy to find benchmarks ([random example off Google](http://www.phoronix.com/scan.php?page=article&item=linux_2010_fiveway&num=1)) comparing certain particular aspects of computing, but to go as far as declare a distro more "stable" or "performant" or any other broad term like this is a bit far fetched.
2. There's a big difference between a vanilla install of a distribution and a tweaked one. With proper hacking, Debian, Red Hat, SuSE or any other distro can be made to behave the way you want. In any case, [should you encounter any stability/performance issue](http://c2.com/cgi/wiki?PrematureOptimization), you'll find ways to overcome them regardless of the distro you're using.
3. Most of the work that makes a system *stable* happens in the kernel, that is Linux. Now this may lead distros to act a bit differently, since each ships with separate versions of the kernel, activating certain modules or not. However since installing your own kernel is always an option (again, only do this after profiling your system and detecting issues there), this is not inherent to the distro itself, but to different instances of the kernel.
4. It is a bit misguided to imagine that distros will compete on that level. They usually compete on the level of what admin tools they offer (package management is the best example), the quality of help and documentation (Ubuntu targets the casual desktop user, where Red Hat addresses the seasoned corporate sysadmin) or the quality of their commercial support.
My personal advice to you is not to get dragged into these meaningless flamewars (my distro is better than yours). Ultimately, it's a matter of personal preferences. Try something for yourself and you'll quickly realize that even though each distro acts a bit differently, there's virtually nothing one can do that the others can't. It helps knowing someone in real life who's already familiar with one distro (in your case CentOS).
Also, Debian is waaaaay more stable than RHEL or CentOS.
|
const reference to temporary vs. return value optimization
I'm aware of the fact that assigning an rvalue to a const lvalue reference extends the temporaries lifetime until the end of the scope. However, it is not clear to me when to use this and when to rely on the return value optimization.
```
LargeObject lofactory( ... ) {
// construct a LargeObject in a way that is OK for RVO/NRVO
}
int main() {
const LargeObject& mylo1 = lofactory( ... ); // using const&
LargeObject mylo2 = lofactory( ... ); // same as above because of RVO/NRVO ?
}
```
According to Scot Meyers' More Effective C++ (Item 20) the second method could be optimized by the compiler to construct the object in place (which would be ideal and exactly what one tries to achieve with the `const&` in the first method).
1. Are there any generally accepted rules or best practices when to use `const&` to temporaries and when to rely on RVO/NRVO?
2. Could there be a situation in which using the `const&` method is worse than not using it? (I'm thinking for example about C++11 move semantics if `LargeObject` has those implemented ...)
|
Let's consider the most simple case:
```
lofactory( ... ).some_method();
```
In this case one copy from **lofactory** to caller context is possible – but it can be optimized away by **RVO/NRVO**.
---
```
LargeObject mylo2 ( lofactory( ... ) );
```
In this case possible copies are:
1. Return **temporary** from **lofactory** to caller context – can be optimized away by **RVO/NRVO**
2. Copy-construct **mylo2** from **temporary** – can be optimized away by **copy-elision**
---
```
const LargeObject& mylo1 = lofactory( ... );
```
In this case, one copy is still possible:
1. Return **temporary** from **lofactory** to caller context – can be optimized away by **RVO/NRVO** (too!)
A reference will bind to this temporary.
---
So,
>
> Are there any generally accepted rules or best practices when to use const& to temporaries and when to rely on RVO/NRVO?
>
>
>
As I said above, even in a case with `const&`, an unnecesary copy is possible, and it can be optimized away by **RVO/NRVO**.
If your compiler applies **RVO/NRVO** in some case, then most likely it will do copy-elision at stage 2 (above), because in that case copy-elision is much simpler than NRVO.
But, in the worst case, you will have one copy for the `const&` case, and two copies when you init the value.
>
> Could there be a situation in which using the const& method is worse than not using it?
>
>
>
I don't think that there are such cases. At least unless your compiler uses strange rules that discriminate against `const&`. (For an example of a similar situation, I noticed that MSVC does not do NRVO for aggregate initialization.)
>
> (I'm thinking for example about C++11 move semantics if LargeObject has those implemented ...)
>
>
>
In C++11, if `LargeObject` has move semantics, then in the worst case, you will have one move for the `const&` case, and two moves when you init the value. So, `const&` is still a little better.
---
>
> So a good rule would be to always bind temporaries to const& if possible, since it might prevent a copy if the compiler fails to do a copy-elision for some reason?
>
>
>
Without knowing actual context of application, this seems like a good rule.
In C++11 it is possible to bind a temporary to an rvalue reference - `LargeObject&&`. So, such a temporary can be modified.
---
By the way, move semantic emulation is available to C++98/03 by different tricks. For instance:
- [Mojo](http://www.drdobbs.com/move-constructors/184403855)/[Boost.Move](http://www.boost.org/doc/libs/1_52_0/doc/html/move.html)
- Bjarne Stroustrup [describes another trick using small mutable flag inside class](http://www.youtube.com/watch?v=OB-bdWKwXsU&t=71m17s). Example code that he mentioned is [here](http://www.stroustrup.com/Programming/Matrix/Matrix.h).
However, even in the presence of move semantics, there are objects which can't be cheaply moved. For instance, a 4x4 matrix class with `double data[4][4]` inside. So copy-elision/RVO/NRVO are still very important, even in C++11. And by the way, when copy-elision/RVO/NRVO happens, it is faster than a move.
---
P.S., in real cases, there are some additional things that should be considered:
For instance, if you have a function that returns a vector, even if Move/RVO/NRVO/copy-elision is applied, it still may not be 100% efficient. For instance, consider the following case:
```
while(/*...*/)
{
vector<some> v = produce_next(/* ... */); // Move/RVO/NRVO are applied
// ...
}
```
It will be more efficient to change code to:
```
vector<some> v;
while(/*...*/)
{
v.clear();
produce_next( v ); // fill v
// or something like:
produce_next( back_inserter(v) );
// ...
}
```
Because in this case, already-allocated memory inside the vector can be re-used when `v.capacity()` is enough, without needing to do new allocations inside `produce_next` on each iteration.
|
What do square brackets without semicolons mean?
I found the code below in the book `F# Design Patterns` by Gene Belitski. I have been reading about F# but I have not found an explanation for this syntax either in that book or elsewhere. I understand the keyword `yield` and what it does, and I know that both the yield and printfn statements' return values are of type unit. What I do not understand is the square brackets. The statements are not separated by semicolons as they would be in a literal list creation. This must be a special syntax but I cannot find references to it. Can someone help?
```
let eagerList = [
printfn "Evaluating eagerList"
yield "I"
yield "am"
yield "an"
yield "eager"
yield "list" ]
```
|
That is actually a list comprehension, or some sort of computation expression. I think that example conflates a few different things, which probably baffles you, so maybe that book is actually not the best first or intro book to F#. If you read p. 169 carefully it explains what it is all about, though. The author wants to demonstrate that with `Seq.delay` you can delay the eager evaluation. If you evaluate the list first it actually prints the "Evaluating eagerList" part, but only when you create it. After that it won't. You can see that `printfn` is not part of the list. So it's more like a statement, one printfn, and an expression (string list) combined.
1. List syntax
Usually you will just generate a list, and not type it in but you can separate the items with a newline or a `;`. So these two lists are equivalent:
```
["a";"b";"c"]
["a"
"b"
"c"
]
```
2. Lists are always eager in F#, however seqs are lazy.
3. `yield` is sort of like `yield return` in C#: it doesn't return unit, it returns a value, especially in a sequence.
When you create the example you will get this:
>
> Evaluating eagerList
>
> val eagerList : string list = ["I"; "am"; "an";
> "eager"; "list"]
>
>
>
So you can see the `printfn` is not part of the list.
If you run `eagerList` you will only see:
>
> val it : string list = ["I"; "am"; "an"; "eager"; "list"]
>
>
>
Confirming our suspicion.
Now what Gene Belitski wanted to demonstrate is this:
```
let delayed = Seq.delay (fun _ -> ([
printfn "Evaluating eagerList"
yield "I"
yield "am"
yield "an"
yield "eager"
yield "list" ] |> Seq.ofList))
```
>
> val delayed : seq<string>
>
>
>
Not only does it not print out the list, it doesn't even print the printfn statement! And it's a seq, not a list.
If you do `delayed` or `delayed |> Seq.toList` each time you will get back the result from the printfn statement as well as the string list:
>
> Evaluating eagerList
>
> val it : string list = ["I"; "am"; "an"; "eager";
> "list"]
>
>
>
So to summarize: yes, usually within `[]` you have a list, and you either separate the items with a `;` or a newline. However, that specific example is not actually a usual list but an expression designed to demonstrate how you can delay evaluation, and for that it contains a print statement as well as a string list.
Also `;` can be used to separate statements on the same line, e.g.
`printfn "%A" "foo";printfn "%A" "bar"`
|
HTML5 Boilerplate and Twitter Bootstrap
I have read various questions answered on Stack about Twitter [Bootstrap](https://getbootstrap.com/) and [HTML5 Boilerplate](https://html5boilerplate.com/).
I do know the following:
- Bootstrap and H5BP are not equivalent and are used for different purposes
- I know what Bootstrap is used for, I use it every day - it is a front-end framework, CSS and JS ready-to-use. Just apply the right classes to your selectors/elements and Bootstrap does everything for you. They have extensive support documentation on their website where you can see what they have to offer inside the framework and how to implement it...
- You can combine the powers of the two by using [initializr](http://www.initializr.com/)
- H5BP is a template...um that's about it...
But what **I don't know yet**, is the following:
- What the heck is H5BP used for? When visiting their website they only give you the source files and that's it(oh yes, and a video). **What are the benefits of using it and why would you want to use it alongside Bootstrap for example?**
|
You're exactly right, it is a template. It's a template which brings together a set of best practices for developing HTML5 websites. You don't have to use it, but if you do you can rest assured that you are building on a solid foundation which has been developed and honed by top developers over the years. In the words of Paul Irish, one of the principal developers of the project:
>
> “It’s essentially a good starting template of html and css and a
> folder structure that works., but baked into it is years of best
> practices from front-end development professionals.”
>
>
>
I would suggest downloading the fully commented version of the boilerplate and reading through the code to give yourself an idea of the thought which has gone into it and the cases it handles. (See also [this article](http://www.1stwebdesigner.com/design/snippets-html5-boilerplate/) which highlights code snippets from the boilerplate which you should be using on your site). These should help you decide whether you want to use it as a starting point for your project or not.
The [HTML5bp initializr](http://www.initializr.com/) even includes a bootstrap option, so it couldn't really be any easier!
|
Adding and using header (HTTP) in nginx
I am using two system (both are Nginx load balancer and one act as backup).
I want to add and use few HTTP custom headers.
Below is my code for both:
```
upstream upstream0 {
#list of upstream servers
server backend:80;
server backup_load_balancer:777 backup;
#healthcheck
}
server {
listen 80;
#Add custom header about the port and protocol (http or https)
server_name _;
location / {
# is included since links are not allowed in the post
proxy_pass http://upstream0;
}
}
```
Backup system
```
server {
listen 777;
server_name _;
#doing some other extra stuff
#use port and protocol to direct
}
```
How can I achieve that?
|
To add a header, add the [`add_header`](https://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header) declaration to either the `location` block or the `server` block:
```
server {
add_header X-server-header "my server header content!";
location /specific-location {
add_header X-location-header "my specific-location header content!";
}
}
```
An `add_header` declaration within a `location` block will override the *same* `add_header` declaration in the outer `server` block.
e.g. if `location` contained `add_header X-server-header ...` that would override the outer declaration for that path location.
Obviously, replace the values with what you want to add. And that's all there is to it.
|
Difference between incremental DOM and virtual DOM in Angular
I have two questions regarding Angular. I've tried reading some articles but I can't get the idea.
1. What is incremental DOM?
2. What is the difference between incremental DOM and virtual DOM?
|
Incremental DOM is a library for building up DOM trees and updating them in-place when data changes. It differs from the established virtual DOM approach in that no intermediate tree is created (the existing tree is mutated in-place). This approach significantly reduces memory allocation and GC thrashing for incremental updates to the DOM tree therefore increasing performance significantly in some cases.
<https://github.com/google/incremental-dom>
**Virtual DOM** compares (diff) a new entire virtual DOM with the previous virtual DOM for changes then applies those changes to the actual DOM. - This approach creates a new virtual DOM to determine the changes (memory heavy).
**Incremental DOM** has one virtual DOM and walks along the tree to find changes then mutates the virtual DOM and then apply those changes to the actual DOM - (reduced memory size and garbage collection).
Virtual DOM - has a big memory footprint because it needs headroom for changes that "might" happen to the virtual DOM.
Incremental DOM - doesn’t need such a big footprint as memory is only allocated for changes.
Tests have shown that Incremental DOM is fast enough to work well even without an intermediate virtual DOM tree.
|
Positive, Negative or zero in a 2D array python
I'm trying to find a way to know if a number in a matrix is positive, negative, or zero. Below is my code, but I'm wondering if there is a more Pythonic way to write it. Any ideas?
```
array = np.random.randint(-10,10, size=(10,10))
def function(array):
case1 = np.where(array < 0, -1, array)
case2 = np.where(case1 > 0, 1, case1)
case3 = np.where(case2 == 0, 0, case2)
return case3
print(function(array))
```
|
You should choose better function names; `function` does not give any indication of what the purpose of the function is.
In addition, a [docstring comment](https://www.python.org/dev/peps/pep-0257/) can be used to give a short description.
But actually there is no need for a custom function because [`numpy.sign`](https://numpy.org/doc/stable/reference/generated/numpy.sign.html) already provides the exact functionality:
>
> Returns an element-wise indication of the sign of a number.
>
>
>
Example:
```
import numpy as np
array = np.random.randint(-10,10, size=(4,4))
print(np.sign(array))
```
Output:
```
[[-1 1 1 1]
[ 1 1 0 0]
[ 1 1 -1 1]
[ 1 -1 1 -1]]
```
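To confirm that `np.sign` really is equivalent to the three-step `np.where` chain from the question, a quick cross-check (the helper name `sign_via_where` is my own):

```python
import numpy as np

def sign_via_where(array):
    """The original three-step np.where approach from the question."""
    case1 = np.where(array < 0, -1, array)   # negatives -> -1
    case2 = np.where(case1 > 0, 1, case1)    # positives -> 1
    return np.where(case2 == 0, 0, case2)    # zeros stay 0

array = np.random.randint(-10, 10, size=(10, 10))
print(np.array_equal(np.sign(array), sign_via_where(array)))  # True
```

The third `np.where` is actually a no-op (zeros are already zero after the first two steps), which is another hint that the hand-rolled version does more work than necessary.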
|
What are the differences between "each", "foreach", "collect" and "map"?
It seems like there are a lot of ways to loop over an Enumerable in Ruby. Are there any differences between `each` and `foreach`, or between `collect`, `map` and other similar methods?
Or is there a reason I shouldn't use certain methods in certain situations?
|
`collect`/`map` are equivalent. They differ from `each` in that `each` only executes the block for each element, whereas `collect`/`map` return an array with the results of calling the block for each element. Another way to put it might be, `each` is expected to *do* something with each element, whereas `map` is expected to *transform* each element (map it onto something else).
You could use `collect` or `map` anywhere `each` is used, and your code will still work. But it will probably be slightly less efficient because it collects the results in an array, unless your Ruby implementation realizes it doesn't have to bother creating an array because it's never used.
Another reason to use `each` instead of `map` or `collect` is to help out anyone reading your code. If I see `each` then I can be like okay, we're about to use each element of the data to do something. If I see `map` then I'm expecting to see new data being created, based on a remapping of the old data.
With regards to `map` vs. `collect` I would say it's a matter of preference, but you should pick one and stick with it for consistency.
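The same "do something" vs. "transform" distinction exists in other languages; sketched in Python for comparison (a plain loop performs a side effect per element, while a mapping builds a new collection):

```python
words = ["apple", "banana", "cherry"]

# "each"-style: perform a side effect per element; no new collection is built
for w in words:
    print(w.upper())

# "map"-style: transform each element into a new collection
lengths = [len(w) for w in words]
print(lengths)  # [5, 6, 6]
```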
|
D3: remap mousewheel to be panning gesture instead of zoom gesture
By default, when you create a new zoom behavior in D3, it maps the mousewheel to control zoom level. You can also click and drag to pan the chart if it's larger than the chart area. I'd like to remap the mousewheel gesture to instead pan on the vertical axis (mousewheel up pans up, mousewheel down pans down). Anyone have any clue how to accomplish this?
|
Ok, here we go:
Based on [Lars' comment](https://stackoverflow.com/questions/13713528/how-to-disable-pan-for-d3-behavior-zoom), we can specify event handler for mousewheel event. As shown in the [answer](https://stackoverflow.com/questions/13713528/how-to-disable-pan-for-d3-behavior-zoom), we start by mapping `wheel.zoom` event to a custom handler `pan`
```
selection.call(zoomer)
.on("wheel.zoom",pan) // handler function for mousewheel zoom
```
Second, we need to define the pan gesture, which is basically a `translate` in *x* and/or *y* direction.
```
function pan() {
svg.attr("transform", "translate(" + [dx,dy] + ")");
}
```
We also need to quantify the movement in both directions, and relate it to mousewheel movement. By inspecting the details of `MouseWheel` event, we find two useful attributes *deltaX* and *deltaY*, indicating how much the mousewheel moved in each direction.
The final `pan` function is as follows
```
function pan() {
current_translate = d3.transform(svg.attr("transform")).translate;
dx = d3.event.wheelDeltaX + current_translate[0];
dy = d3.event.wheelDeltaY + current_translate[1];
svg.attr("transform", "translate(" + [dx,dy] + ")");
d3.event.stopPropagation();
}
```
Here is a [working fiddle](http://jsfiddle.net/wyuz5fje/) and [bl.ock](http://bl.ocks.org/ahmohamed/82ac20ccc949470e3206) also modifying Mike's [geometric zoom example](http://bl.ocks.org/mbostock/3680999).
## Cross-browser support:
It seems that mousewheel events differs between browsers. For Safari and Firefox support, you need to add the following:
```
selection.call(zoomer)
.on("wheel.zoom",pan)
.on("mousewheel.zoom", pan)
.on("DOMMouseScroll.zoom", pan)
```
Also, the interpretation of `wheelDelta` is reversed in Firefox. `wheelDelta` can be fixed by [this function](http://www.sitepoint.com/html5-javascript-mouse-wheel/)
```
function crossWheelDelta(e) {
  // cross-browser wheel delta
  e = window.event || e; // old IE support
  return Math.max(-1, Math.min(1, (e.wheelDelta || -e.detail)));
}
```
|
How can I guarantee that a variable will never be zero without using a conditional statement in C?
For example,
Let's say a variable `x`,
`x` could be anything include `0`.
Then we got code like:
```
if(x==0) {
y = 1;
}
else {
y = x;
}
```
Could I do this without producing branches in C/C++?
I'm trying to optimize a piece of code. I want to remove branches as much as possible. There are similar judgments, so I want to convert them into statements without branches to make the code as efficient as possible.
|
Some general notes:
1. As mentioned in other comments, some compilers can optimize and eliminate the branch. You can check the assembly output (e.g. in Godbolt) to make sure.
2. Beware of [premature optimizations](https://wiki.c2.com/?PrematureOptimization).
3. Always measure and make sure your speculation about what's taking up time is correct.
Having said that you can try the following "trick":
```
y = !x + x;
```
Assuming `x`,`y` are integer types:
- If `x==0`, `!x` will be `1` and `y` will be assigned the value `1`.
- If `x!=0`, `!x` will be `0` and `y` will be assigned the value `x`.
**Note:** see @CostantinoGrana's comment below about the guarantee in the standard. You can also verify it in your specific environment (compiler etc.).
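As a quick sanity check of the identity (sketched here in Python, where `int(x == 0)` plays the role of C's `!x`):

```python
def branchless_y(x):
    # C's !x is 1 when x == 0 and 0 otherwise
    return int(x == 0) + x

# y should be 1 when x is 0, and x itself otherwise
for x in range(-5, 6):
    expected = 1 if x == 0 else x
    assert branchless_y(x) == expected
print("identity holds for all tested values")
```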
|
Other properties accessors besides 'set' and 'get'?
I was reflecting over properties of a type and wanted to check if a property has both a **public** setter and getter. Unfortunatly, `PropertyInfo`'s `CanRead` and `CanWrite` doesn't indicate the accessibility level. So I turned to `PropertyInfo.GetAccessors()` which has an [interesting description](http://msdn.microsoft.com/en-us/library/ff3b842x.aspx) (emphasis mine):
>
> Returns an array whose elements reflect the public get, set, and **other accessors** of the property reflected by the current instance.
>
>
>
What 'other accessors' are there? Is there merely a possibility for other accessors, or does a CLI language exists that actually has more than the simple set/get duo for a property?
|
In real terms, there is just a getter and setter. Technically, IIRC the CLI spec (ECMA 335 section 1.8.11.3) doesn't *restrict* to just these, so some other language would be free to add other meanings, but in reality none do.
This is shown in table `II.17`, and uses the `.other` caption in IL (note it is `.get` for the getter, `.set` for the setter and `.custom` for attributes).
**edit**
*In particular*, note the example included in the specification:
```
// the declaration of the property
.property int32 Count() {
.get instance int32 MyCount::get_Count()
.set instance void MyCount::set_Count(int32)
.other instance void MyCount::reset_Count()
}
```
This *suggests* that "reset" is an option; however, in reality this is handled via a reflection pattern; so for:
```
public int Foo {get;set;}
```
it is *convention* that `public void ResetFoo()` is the reset method for `Foo`, but the *compiler* does not process this into a custom accessor.
```
using System;
using System.ComponentModel;
public class MyType
{
public int Foo { get; set; }
public void ResetFoo() { Foo = 0; }
static void Main()
{
var obj = new MyType {Foo = 123};
TypeDescriptor.GetProperties(typeof(MyType))["Foo"].ResetValue(obj);
Console.WriteLine(obj.Foo); // outputs: 0
var accessors = typeof (MyType).GetProperty("Foo").GetAccessors();
// writes get_Foo and set_Foo
foreach(var acc in accessors) Console.WriteLine(acc.Name);
}
}
```
|
How to solve conflict dependence with SKMaps.jar and guava
I'm working on a project which uses OSM by Skobbler. My project needs to use a Google library called Guava. I have SKMaps.jar placed in libs/SKMaps.jar, along with other jars too. On the other side I've got some dependencies like:
```
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:support-v4:19.1.+'
compile 'com.squareup.okhttp:okhttp:2.2.0'
compile 'com.squareup.okhttp:okhttp-urlconnection:2.2.0'
compile group: 'com.google.guava', name: 'guava', version: '18.0'
}
```
This cause a conflict when you build the project, like this
```
Error:Execution failed for task ':app:dexPlatoDebug'.
```
>
> com.android.ide.common.internal.LoggedErrorException: Failed to run command:
> /home/opt/android-studio/sdk/build-tools/21.1.0/dx --dex --no-optimize --output /home/alex/Android/proy/app/build/intermediates/dex/plato/debug --input-list=/home/alex/Android/proy/app/build/intermediates/tmp/dex/plato/debug/inputList.txt
> Error Code:
> 2
> Output:
> UNEXPECTED TOP-LEVEL EXCEPTION:
> com.android.dex.DexException: Multiple dex files define Lcom/google/common/annotations/Beta;
> at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:596)
> at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:554)
> at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:535)
> at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:171)
> at com.android.dx.merge.DexMerger.merge(DexMerger.java:189)
> at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:454)
> at com.android.dx.command.dexer.Main.runMonoDex(Main.java:302)
> at com.android.dx.command.dexer.Main.run(Main.java:245)
> at com.android.dx.command.dexer.Main.main(Main.java:214)
> at com.android.dx.command.Main.main(Main.java:106)
>
>
>
As we can see here, the conflict arises because SKMaps has a dependency on
```
Lcom/google/common/annotations/Beta;
```
And I say SKMaps because I have been doing some tests in a side project to detect which jars have conflicts, and the combination of SKMaps.jar and Guava has this conflict. Now here is my question:
**How can i exclude the guava dependence from SKMaps.jar?**
or
**Is possible to aisle both jars to not cause conflict?**
Thanks
|
There are 2 options, the first being easier than the second one:
1. Deleting the Google package from the already compiled 'SKMaps.jar' file
- make a backup of the SKMaps.jar file
- open the .jar file with an archive reader, for ex. 7zip
- from the 3 folders, enter 'com' -> then select the google folder & delete it
- close the archive, saving it automatically; there will be no error after this
- \*BUT you have to ensure the availability of the Guava library along with SKMaps.jar in the project (have it included in the libs folder)
2. Write a gradle script which decompiles the SKMaps.jar and recompiles it with the com/google/ folder excluded (which is basically the same thing as the first option) and including the new jar in the build process
- the difference is that this option will not modify the original SKMaps.jar but make a copy of it without the duplicate package
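The core of option 2 (copy the jar while dropping the duplicated package) is independent of Gradle. As an illustration of the step, here is a hedged sketch using Python's `zipfile` module (a jar is just a zip archive); a Gradle task could do the same with its own zip/exclude facilities:

```python
# Hypothetical sketch: write a copy of a jar, skipping every entry under
# the duplicated com/google/ package.
import zipfile

def strip_package(src_jar, dst_jar, prefix="com/google/"):
    # Copy each entry verbatim unless it lives under the excluded package.
    with zipfile.ZipFile(src_jar) as src, zipfile.ZipFile(dst_jar, "w") as dst:
        for name in src.namelist():
            if not name.startswith(prefix):
                dst.writestr(name, src.read(name))
```

The original SKMaps.jar stays untouched; you then put the stripped copy (plus the real Guava dependency) on the build path.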
---
Applying either approach, the project will compile without any problems.
|
How to make apt-cache, apt-get into one command?
In Ubuntu 11.04 or 10.04, how can I combine `apt-get` and `apt-cache` into a single `myapt` command, so that I can use just one, like in Fedora?
For example:
```
myapt search yum # same as apt-cache search
myapt remove yum # same as apt-get remove
myapt install yum # same as apt-get install
```
Any idea how to do alias for this?
|
You could create a script with the following content
```
#!/bin/bash
command="$1"; shift
case $command in
add|depends|dotty|dump|dumpavail|gencaches|madison|pkgnames|policy|rdepends|search|show|showpkg|showsrc|stats|unmet|xvcg)
apt-cache "$command" "$@"
;;
autoclean|autoremove|build-dep|check|clean|dist-upgrade|dselect-upgrade|install|purge|remove|source|update|upgrade)
apt-get "$command" "$@"
;;
esac
```
Suppose you call it `myapt`. Then, to still have the benefit of bash completion, you need to add the following lines to `~/.bashrc`:
```
_myapt() {
_apt_get
tt=("${COMPREPLY[@]}")
_apt_cache
COMPREPLY+=("${tt[@]}")
return 0
} &&
complete -F _myapt $filenames myapt
```
Unfortunately `$command` must precede any options; bash completion does not seem to work for options that follow the command.
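To see the dispatch logic in isolation, here is a cut-down sketch with `apt-get` and `apt-cache` stubbed out as shell functions (the real script, of course, calls the real tools and handles the full command lists above):

```shell
#!/bin/bash
# Stub the real tools so the routing is visible without touching the system.
apt-cache() { echo "apt-cache $*"; }
apt-get()   { echo "apt-get $*"; }

myapt() {
  local command="$1"; shift
  case $command in
    search|show|depends|policy)
      apt-cache "$command" "$@" ;;
    install|remove|update|upgrade)
      apt-get "$command" "$@" ;;
    *)
      echo "myapt: unknown subcommand: $command" >&2; return 1 ;;
  esac
}

myapt search yum    # prints: apt-cache search yum
myapt install yum   # prints: apt-get install yum
```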
|
How do I update a section of a Bytes/BytesMut?
I have a fixed size buffer in a `Bytes` struct, and I want to copy some data over the middle of it.
The only thing I can see at the moment would be to take a slice of the beginning, add what I want, and add the slice at the end, but I'm sure this will result in a large copy or two that I want to avoid; I simply need to update the middle of the buffer. Is there a simple way of doing that without using `unsafe`?
|
You *don't* mutate `Bytes`. The entire purpose of the struct is to represent a reference-counted immutable view of data. You will need to copy the data in some fashion. Perhaps you create a `Vec<u8>` or `BytesMut` from the data.
`BytesMut` implements `AsMut<[u8]>`, `BorrowMut<[u8]>` and `DerefMut`, so you can use any existing technique for modifying slices in-place. For example:
```
use bytes::BytesMut; // 0.5.4
fn main() {
let mut b = BytesMut::new();
b.extend_from_slice(b"a good time");
let middle = &mut b[2..][..4];
middle.copy_from_slice(b"cool");
println!("{}", String::from_utf8_lossy(&b));
}
```
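Because the mutation goes through plain slice APIs, the same pattern can be sketched without the `bytes` crate at all, on a `Vec<u8>` (the helper name here is hypothetical):

```rust
// Overwrite part of a byte buffer in place via slice::copy_from_slice,
// exactly the same call BytesMut exposes through DerefMut.
fn overwrite_middle(buf: &mut [u8], at: usize, data: &[u8]) {
    buf[at..at + data.len()].copy_from_slice(data);
}

fn main() {
    let mut b = b"a good time".to_vec();
    overwrite_middle(&mut b, 2, b"cool");
    println!("{}", String::from_utf8_lossy(&b)); // a cool time
}
```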
See also:
- [How to idiomatically copy a slice?](https://stackoverflow.com/q/28219231/155423)
- [How do you copy between arrays of different sizes in Rust?](https://stackoverflow.com/q/25225346/155423)
- [How can I write data from a slice to the same slice?](https://stackoverflow.com/q/39604042/155423)
- [How to operate on 2 mutable slices of a Rust array?](https://stackoverflow.com/q/36244762/155423)
- [How do I create two new mutable slices from one slice?](https://stackoverflow.com/q/24872634/155423)
>
> without using `unsafe`
>
>
>
**Do not use `unsafe` for this problem**. You *will* cause undefined behavior.
|
ResourceWarning: unclosed file <\_io.BufferedReader name=4>
Consider the following program:
```
import tempfile
import unittest
import subprocess
def my_fn(f):
p = subprocess.Popen(['cat'], stdout=subprocess.PIPE, stdin=f)
yield p.stdout.readline()
p.kill()
p.wait()
def my_test():
with tempfile.TemporaryFile() as f:
l = list(my_fn(f))
class BuggyTestCase(unittest.TestCase):
def test_my_fn(self):
my_test()
my_test()
unittest.main()
```
Running it results in the following output:
```
a.py:13: ResourceWarning: unclosed file <_io.BufferedReader name=4>
l = list(my_fn(f))
ResourceWarning: Enable tracemalloc to get the object allocation traceback
.
----------------------------------------------------------------------
Ran 1 test in 0.005s
OK
```
What is the actual cause of the warning and how to fix it? Note that if I comment out `unittest.main()` the problem disappears, which means that it's specific to subprocess+unittest+tempfile.
|
You should be closing the streams associated with the `Popen()` object you opened. For your example, that's the `Popen.stdout` stream, created because you instructed the `Popen()` object to create a pipe for the child process standard output. The easiest way to do this is by using the `Popen()` object as a context manager:
```
with subprocess.Popen(['cat'], stdout=subprocess.PIPE, stdin=f) as p:
yield p.stdout.readline()
p.kill()
```
I dropped the `p.wait()` call as `Popen.__exit__()` handles this for you, after having closed the handles.
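As a quick self-contained check of the context-manager form (a hypothetical demo using a `python -c` child instead of `cat`, so it runs on any platform):

```python
# Hypothetical demo of the context-manager fix: Popen.__exit__ closes
# p.stdout (and waits for the child), so no ResourceWarning is emitted.
import subprocess
import sys

def first_line(cmd):
    with subprocess.Popen(cmd, stdout=subprocess.PIPE) as p:
        return p.stdout.readline()

line = first_line([sys.executable, "-c", 'print("hello"); print("world")'])
print(line)
```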
If you want to further figure out exactly what causes the resource warning, then we can start by doing what the warning tells us, and [enable the `tracemalloc` module](https://docs.python.org/3/library/tracemalloc.html) by setting the [`PYTHONTRACEMALLOC` environment variable](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONTRACEMALLOC):
```
$ PYTHONTRACEMALLOC=1 python a.py
a.py:13: ResourceWarning: unclosed file <_io.BufferedReader name=4>
l = list(my_fn(f))
Object allocated at (most recent call last):
File "/.../lib/python3.8/subprocess.py", lineno 844
self.stdout = io.open(c2pread, 'rb', bufsize)
.
----------------------------------------------------------------------
Ran 1 test in 0.019s
OK
```
So the warning is thrown by a file opened by the `subprocess` module. I'm using Python 3.8.0 here, and so line 844 in the trace points to [these lines in `subprocess.py`](https://github.com/python/cpython/blob/f90e0d2371bc2bcab7bf5307cfd73571eb06b375/Lib/subprocess.py#L843-L844):
```
if c2pread != -1:
self.stdout = io.open(c2pread, 'rb', bufsize)
```
`c2pread` is the file handle for one half of a [`os.pipe()` pipe object](https://docs.python.org/3/library/os.html#os.pipe) created to handle communication from child process to Python parent process (created by the [`Popen._get_handles()` utility method](https://github.com/python/cpython/blob/f90e0d2371bc2bcab7bf5307cfd73571eb06b375/Lib/subprocess.py#L1457-L1508), because you set `stdout=PIPE`). `io.open()` is exactly the same thing as the built-in [`open()` function](https://docs.python.org/3/library/functions.html#open). So this is where the `BufferedIOReader` instance is created, to act as a wrapper for a pipe to receive the output the child process produces.
You could also explicitly close `p.stdout`:
```
p = subprocess.Popen(['cat'], stdout=subprocess.PIPE, stdin=f)
yield p.stdout.readline()
p.stdout.close()
p.kill()
p.wait()
```
or use `p.stdout` as a context manager:
```
p = subprocess.Popen(['cat'], stdout=subprocess.PIPE, stdin=f)
with p.stdout:
yield p.stdout.readline()
p.kill()
p.wait()
```
but it's easier to just *always* use `subprocess.Popen()` as a context manager, as it'll continue to work properly regardless of how many `stdin`, `stdout` or `stderr` pipes you created.
Note that most `subprocess` code examples don't do this, because they tend to use [`Popen.communicate()`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate), which closes the file handles for you.
|
Pl/pgSQL there is no parameter $1 in EXECUTE statement
I can't solve this:
```
CREATE OR REPLACE FUNCTION dpol_insert(
dpol_cia integer, dpol_tipol character, dpol_nupol integer,
dpol_conse integer,dpol_date timestamp)
RETURNS integer AS
$BODY$
DECLARE tabla text := 'dpol'||EXTRACT (YEAR FROM $5::timestamp);
BEGIN
EXECUTE '
INSERT INTO '|| quote_ident(tabla) ||'
(dpol_cia, dpol_tipol, dpol_nupol, dpol_conse, dpol_date) VALUES ($1,$2,$3,$4,$5)
';
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
```
---
When trying
```
SELECT dpol_insert(1,'X',123456,1,'09/10/2013')
```
return next message:
```
ERROR: there is no parameter $1
LINE 3: ...tipol, dpol_nupol, dpol_conse, dpol_date) VALUES ($1,$2,$3,$...
^
QUERY:
INSERT INTO dpol2013
(dpol_cia, dpol_tipol, dpol_nupol, dpol_conse, dpol_date) VALUES ($1,$2,$3,$4,$5)
CONTEXT: PL/pgSQL function "dpol_insert" line 4 at EXECUTE statement
```
\*\*\* Error \*\*\*
```
ERROR: there is no parameter $1
SQL state: 42P02
Context: PL/pgSQL function "dpol_insert" line 4 at EXECUTE statement
```
|
You have a couple problems here. The immediate problem is:
>
> ERROR: there is no parameter $1
>
>
>
That happens because `$1` inside the SQL that you're handing to EXECUTE isn't the same as `$1` inside the main function body. The numbered placeholders within the EXECUTE SQL are in the context of the EXECUTE, not in the function's context so you need to supply some arguments to EXECUTE for those placeholders:
```
execute '...' using dpol_cia, dpol_tipol, dpol_nupol, dpol_conse, dpol_date;
-- ^^^^^
```
See [Executing Dynamic Commands](http://www.postgresql.org/docs/current/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN) in the manual for details.
The next problem is that you're not returning anything from your function, which `RETURNS integer`. I don't know what you intend to return, but maybe your `tabla` has a SERIAL `id` that you'd like to return. If so, then you want something more like this:
```
declare
tabla text := 'dpol' || extract(year from $5::timestamp);
id integer;
begin
execute 'insert into ... values ($1, ...) returning id' into id using dpol_cia, ...;
-- ^^^^^^^^^^^^ ^^^^^^^
return id;
end
```
in your function.
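Putting both fixes together, the whole function might look like the following. This is an untested sketch; the `RETURNING id` clause assumes your yearly `dpolYYYY` tables have a SERIAL `id` column, so adjust the returned value to whatever your schema actually provides:

```sql
CREATE OR REPLACE FUNCTION dpol_insert(
    dpol_cia integer, dpol_tipol character, dpol_nupol integer,
    dpol_conse integer, dpol_date timestamp)
RETURNS integer AS
$BODY$
DECLARE
    tabla  text := 'dpol' || EXTRACT(YEAR FROM $5::timestamp);
    new_id integer;
BEGIN
    EXECUTE 'INSERT INTO ' || quote_ident(tabla) || '
             (dpol_cia, dpol_tipol, dpol_nupol, dpol_conse, dpol_date)
             VALUES ($1, $2, $3, $4, $5) RETURNING id'
       INTO new_id
      USING dpol_cia, dpol_tipol, dpol_nupol, dpol_conse, dpol_date;
    RETURN new_id;
END
$BODY$
LANGUAGE plpgsql VOLATILE;
```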
|
Find type given generic base type and implemented type
Here is what I have.
```
BaseClass<T>
{
}
MyClass
{
}
NewClass : BaseClass<MyClass>
{
}
```
I need to see if there is a class that implements `BaseClass` with the specific generic implementation of `MyClass`, and get the type of that class. In this case it would be `NewClass`.
**Edit**
```
AppDomain.CurrentDomain.GetAssemblies().SelectMany(s => s.GetTypes()).Where(
typeof(BaseClass<>).MakeGenericType(typeof(MyClass)).IsAssignableFrom);
```
This returns a list of all types that implement `BaseClass<MyClass>`.
Thanks
|
If you have a list of types (called `types`) that you want to test, you can get all types that inherit from the specified base class using `IsAssignableFrom` method:
```
var baseType = typeof(BaseClass<MyClass>);
var info = types.Where(t => baseType.IsAssignableFrom(t)).FirstOrDefault();
if (info != null)
// We have some type 'info' which matches your requirements
```
If you have only runtime information about the `MyClass` type then you can get the `baseType` like this:
```
Type myClassInfo = // get runtime info about MyClass
var baseTypeGeneric = typeof(BaseClass<>);
var baseType = baseTypeGeneric.MakeGenericType(myClassInfo);
```
You wrote that you need to find the class somehow, so the last question is how to get the list `types`. To do that, you'll need to search some assemblies - you can start with currently executing assembly (if the type is in your application) or with some well-known assembly identified by the name.
```
// Get all types from the currently running assembly
var types = Assembly.GetExecutingAssembly().GetTypes();
```
Unfortunately, there is no way to "magically" find some type like this - you'll need to search all available types, which may be quite an expensive operation. However, if you need to do this only once (e.g. when the application starts), you should be fine.
|
JavaScript Typeahead Autocomplete match accents and tildes Issue
I'm using the [Typeahead](http://plugins.jquery.com/typeahead.js/0.10.1/) jQuery library, but when a user types a vowel like `e`, it should match accented vowels too, like `é`, `ë`, `è`.
Match `n` with `ñ` too.
This is my code:
```
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="typeahead.css">
<script src="jquery-1.11.0.min.js"></script>
<script src="typeahead.bundle.min.js"></script>
</head>
<body>
<center>
<div id="nombre">
<input class="typeahead" type="text" placeholder="Nombre">
</div>
</center>
<script>
var charMap = {
"à": "a",
"á": "a",
"ä": "a",
"è": "e",
"é": "e",
"ë": "e",
"ì": "i",
"í": "i",
"ï": "i",
"ò": "o",
"ó": "o",
"ö": "o",
"ù": "u",
"ú": "u",
"ü": "u",
"ñ": "n"};
var normalize = function (input) {
$.each(charMap, function (unnormalizedChar, normalizedChar) {
var regex = new RegExp(unnormalizedChar, 'gi');
input = input.replace(regex, normalizedChar);
});
return input;
}
var substringMatcher = function(strs) {
return function findMatches(q, cb) {
var matches, substrRegex;
matches = [];
substrRegex = new RegExp(q, "i");
$.each(strs, function(i, str) {
if (substrRegex.test(str)) {
matches.push({ value: str });
}
});
cb(matches);
};
};
var nombres = ["Sánchez", "Árbol", "Müller", "Ératio", "Niño"];
$("#nombre .typeahead").typeahead({
hint: true,
highlight: true,
minLength: 1
},
{
name: "nombres",
displayKey: "value",
source: substringMatcher(nombres)
});
</script>
</body>
```
How can I achieve this?
Thanks!
|
I've written a solution in the jsFiddle below:
<http://jsfiddle.net/Fresh/F3hG9/>
The key part of the solution is to normalize the name used for searching and to also include the original name which will be used for display purposes; you can see this in the definition of "local" below:
```
var nombres = new Bloodhound({
datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
queryTokenizer: queryTokenizer,
local: $.map(names, function (name) {
// Normalize the name - use this for searching
var normalized = normalize(name);
return {
value: normalized,
// Include the original name - use this for display purposes
displayValue: name
};
})
});
```
It is the `normalize` function (shown below) which replaces "foreign" characters with Western alternatives:
```
var normalize = function (input) {
$.each(charMap, function (unnormalizedChar, normalizedChar) {
var regex = new RegExp(unnormalizedChar, 'gi');
input = input.replace(regex, normalizedChar);
});
return input;
};
```
I have omitted charMap (which determines what western character the foreign character maps to) for brevity; you can find the list in the Fiddle.
|
IDataErrorInfo vs ValidationRule vs Exception
Can anyone tell me which is a better approach for Validation in WPF.
1. Implementing IDataErrorInfo
2. Creating ValidationRule
3. Throwing Exceptions
in terms of performance, memory leaks, code maintainability and re-use.
|
This is kind of a complex request, and honestly it'll probably vary based on preference more than anything else. But, here's my understanding:
- **Performance**: Exceptions will lose nearly every time unless your other implementations are horrendous. There's significant overhead to the throw/catch cycle. (Anecdote: I had a 'must be a number' check that was an exception, it "lagged" the UI for a noticeable time when it failed, but when converted to a ValidationRule it was effectively instant.)
- **Memory leaks**: This depends on how your validation rules or IDataErrorInfo implementations are done.
- **Code maintanability, reuse**: This is the interesting part, of course. What you really should be asking is "when is it appropriate to use a ValidationRule instead of IDataErrorInfo or vice versa?"
ValidationRules are older than IDataErrorInfo (I believe the latter was introduced in .Net 3.5). Based on this alone, it would seem that the WPF team prefers IDataErrorInfo. But the truth is they're built for different things. If you have MVVM or an equivalent pattern, IDataErrorInfo is superior for errors in the *model* (like, say, a negative age) whereas ValidationRules are superior for errors in the *view* (say, an age of ☃). It's of course possible to have the ValidationRules perform "business logic" checks, or to have the IDataErrorInfo tell you "a unicode snowman is not a valid age", but you'll (probably) get the best maintainability by keeping to this pattern.
But don't use exceptions for validation beyond the initial testing to see what exact conditions you should be testing for.
|
ReflectionTypeLoadException Unable to load one or > more of the requested types
I have the following code that loads all assemblies referenced by my project, and all references used by them:
```
var types = AppDomain.CurrentDomain.GetAssemblies().SelectMany(t => t.GetTypes())
.Where(t => t.IsClass && !t.IsAbstract && (typeof (MyType).IsAssignableFrom(t)));
foreach (var type in types.Where(type => typeof (MyType).IsAssignableFrom(type)))
{... do something ...}
```
On my dev box this works as expected; in another environment it causes an exception:
>
> System.Reflection.ReflectionTypeLoadException: Unable to load one or
> more of the requested types. Retrieve the LoaderExceptions property
> for more information.
>
>
>
Why is that exception thrown?
Is it possible to refactor my code so that the exception is ignored?
|
I had the exact same issue, but I didn't really care about the assembly it was having problems with.
I went from this (worked locally):
```
Type t = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(a => a.GetTypes())
.Where(a => a.FullName == clientEx.ExceptionType)
.FirstOrDefault();
```
To this:
```
Type t = null;
foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
{
try
{
foreach (Type type in a.GetTypes())
{
if (type.FullName == clientEx.ExceptionType)
{
t = type;
break;
}
}
if (t != null)
break;
}
catch (Exception) { }
}
```
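If you want to keep the usable types from a partially-loadable assembly instead of skipping it wholesale, `ReflectionTypeLoadException` exposes the types that *did* load via its `Types` property (null entries mark the failures). A hedged sketch of that helper:

```csharp
// Sketch: return the types that loaded successfully, dropping the rest.
static IEnumerable<Type> GetLoadableTypes(Assembly assembly)
{
    try
    {
        return assembly.GetTypes();
    }
    catch (ReflectionTypeLoadException e)
    {
        // Types contains a null slot for every type that failed to load.
        return e.Types.Where(t => t != null);
    }
}
```

You could then replace `a.GetTypes()` in the original LINQ query with `GetLoadableTypes(a)`.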
|
C# Field Naming Guidelines?
I am going to be working on a bit of C# code on my own, but I want to make sure that I follow the most widely accepted naming conventions in case I want to bring on other developers, release my code, or sell my code. Right now I am following the naming conventions that Microsoft has set, as they seem to be the most widely accepted. The one thing they don't mention, though, is naming for private fields. For the most part I have seen them named in camelCase like protected fields; however, that presents me with an issue, as parameter names should also be in camelCase. Take the following constructor for example:
```
public GameItem(string baseName, string prefixName, string suffixName)
{
//initialize code
}
```
Now if I use camelCase for the private fields too, there is a naming conflict unless I use `this` to access the class fields (which I think is against most standards, not to mention it means more typing). One solution is to give the parameter a different name, but it does not make logical sense to give the same data two different names. The only other solution I know of, which was common in C++ coding, is giving private members an underscore prefix (`_camelCase`). Is that solution commonly accepted in C# coding? Is there another solution to this problem (like only using properties (which use PascalCase) to access fields, even in the class itself)?
|
Follow the [Microsoft Naming Guidelines](http://msdn.microsoft.com/en-us/library/xzf533w0(VS.71).aspx). The [guidelines for field usage](http://msdn.microsoft.com/en-us/library/ta31s3bc(v=VS.71).aspx) indicate that it should be camelCase and not be prefixed. Note that the general rule is no prefix; the specific rule is not to prefix to distinguish between static and non-static fields.
>
> Do not apply a prefix to field names or static field names. Specifically, do not apply a prefix to a field name to distinguish between static and nonstatic fields. For example, applying a g\_ or s\_ prefix is incorrect.
>
>
>
and (from [General Naming Conventions](http://msdn.microsoft.com/en-us/library/ms229045.aspx))
>
> Do not use underscores, hyphens, or any other nonalphanumeric characters.
>
>
>
**EDIT**: I will note that the docs are not specific with regard to *private* fields but indicate that *protected* fields should be camelCase only. I suppose you could infer from this that any convention for private fields is acceptable. Certainly public static fields differ from protected (they are capitalized). My personal opinion is that protected/private are not sufficiently different in scope to warrant a difference in naming convention, especially as all you seem to want to do is differentiate them from parameters. That is, if you follow the guidelines for protected fields, you'd have to treat them differently in this respect than private fields in order to distinguish them from parameters. I use `this` when referring to class members within the class to make the distinction clear.
**EDIT 2**
I've adopted the convention used at my current job, which is to prefix private instance variables with an underscore and generally only expose protected instance variables as properties using PascalCase (typically autoproperties). It wasn't my personal preference but it's one that I've become comfortable with and probably will follow until something better comes along.
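For illustration only, the two disambiguation styles discussed above look like this side by side (hypothetical class, not prescribed by the guidelines):

```csharp
public class GameItem
{
    private readonly string baseName;    // guideline style: plain camelCase
    private readonly string _prefixName; // common alternative: underscore prefix

    public GameItem(string baseName, string prefixName)
    {
        this.baseName = baseName;   // "this." resolves the name collision
        _prefixName = prefixName;   // the prefix avoids the collision entirely
    }
}
```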
|
Validation of objects inside array of jsonb objects with RubyOnRails
How would you validate each object inside an array of objects with Rails?
I am building a user profile form in our Rails app. Inside user model, we have basic string attributes but some jsonb fields as well. JSONb fields default to `[]` because we want to store an array of objects inside that attribute. Here is an example of simplified user model attributes:
- `name: string`
- `email: string`
- `education: jsonb, default: []`
Education is an array of objects such as:
```
[{
school: 'Harvard university',
degree: 'Computer Science',
from_date: 'Tue, 11 Jul 2017 16:22:12 +0200',
to_date: 'Tue, 11 Jul 2017 16:22:12 +0200'
},{
school: 'High school',
degree: 'Whatever',
from_date: 'Tue, 11 Jul 2017 16:22:12 +0200',
to_date: 'Tue, 11 Jul 2017 16:22:12 +0200'
}]
```
User should be able to click `Add school` button, to add more fields via jquery. That jquery part is not important for this question - maybe just an explanation why we used an Array of objects.
How would you validate each item in the education array, so I can mark the text field containing the validation error with red color? I was advised that using the FormObject pattern might help here. I have also tried writing a custom validator that inherits from the `ActiveModel::Validator` class, but the main problem still lies in the fact that I am dealing with an array, not an actual object.
Thanks for any constructive help.
|
You could treat education records as first-class citizens in your Rails model layer by introducing a non-database backed `ActiveModel` model class for them:
```
class Education
include ActiveModel::Model
attr_accessor :school, :degree, :from_date, :to_date
validates :school, presence: true
validates :degree, presence: true
def initialize(**attrs)
attrs.each do |attr, value|
send("#{attr}=", value)
end
end
def attributes
[:school, :degree, :from_date, :to_date].inject({}) do |hash, attr|
hash[attr] = send(attr)
hash
end
end
class ArraySerializer
class << self
def load(arr)
arr.map do |item|
Education.new(item)
end
end
def dump(arr)
arr.map(&:attributes)
end
end
end
end
```
Then you can transparently serialize and deserialize the `education` array in your `User` model:
```
class User
# ...
serialize :education, Education::ArraySerializer
# ...
end
```
This solution should allow you to validate individual attributes of `Education` objects with built-in Rails validators, embed them in a nested form, and so on.
**Important:** I wrote the code above without testing it, so it might need a few modifications.
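If all you need is the per-item validation (to know which row of the form to mark red) without pulling in `ActiveModel`, the core idea is small enough to sketch in plain Ruby. This is a hypothetical helper, not the Rails API:

```ruby
# Collect an error list for every entry in the education array,
# keyed by the entry's index so the view can highlight the right row.
REQUIRED = [:school, :degree].freeze

def education_errors(education)
  education.each_with_index.each_with_object({}) do |(item, i), errors|
    missing = REQUIRED.reject { |k| item[k] && !item[k].to_s.empty? }
    errors[i] = missing unless missing.empty?
  end
end

p education_errors([{ school: "Harvard", degree: "CS" }, { school: "" }])
# => {1=>[:school, :degree]}
```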
|
Where is the Java Swing counterpart of "GetMessage()" loop?
I did some Win32 GUI programming several years ago. Now I am using Java Swing.
Just out of curiosity, where is the Swing counterpart of Win32 message loop logic? In Win32, it is achieved with the API [GetMessage()](https://msdn.microsoft.com/en-us/library/windows/desktop/ms644936%28v=vs.85%29.aspx). I guess it must have been wrapped deeply somewhere.
|
### Overview
The following diagram broadly illustrates how Swing/AWT works on the Windows platform:
```
Our Listeners
▲
│ (Events dispatched to our code by EDT)
╭ ◀─────────┴───────────╮
│ Event Dispatch Thread │
╰───────────▲─────────▶ ╯
│ (Events pulled from the queue by EDT)
│
Event Queue
▲
│ (Events posted to the queue by WToolkit)
╭ ◀─────────┴───────────╮
│ WToolkit Thread │
╰───────────▲─────────▶ ╯
│ (Messages pulled by WToolkit via PeekMessage)
│
Windows API
```
This architecture is almost entirely hidden from us by the event-driven abstraction. We only interact with the top-most end when events are triggered (`actionPerformed`, `paintComponent`, etc.) and by occasionally posting events ourselves (`invokeLater`, `repaint`, etc.).
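The two interactions we do have with the EDT can be seen in a minimal program (no window needed): posting a `Runnable` onto the queue, and asking which thread is currently executing.

```java
// Minimal illustration: main() runs off the EDT; invokeAndWait posts a
// Runnable onto the event queue, and it executes on the EDT.
import javax.swing.SwingUtilities;

public class EdtDemo {
    public static void main(String[] args) throws Exception {
        System.out.println("main on EDT? "
                + SwingUtilities.isEventDispatchThread());          // false
        SwingUtilities.invokeAndWait(() ->
                System.out.println("runnable on EDT? "
                        + SwingUtilities.isEventDispatchThread())); // true
    }
}
```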
Official documentation on the subject tends to be very general so I'm going to use (very paraphrased) excerpts from the source code.
### Event Dispatch Thread
The EDT is the Swing event processing thread and [all Swing programs run primarily on this thread](https://docs.oracle.com/javase/tutorial/uiswing/concurrency/dispatch.html). For the most part, this is just the AWT system and it's located in [`java.awt.EventDispatchThread`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventDispatchThread.java).
The event dispatching system is pretty dispersed, so I'll walk through a specific example supposing a `JButton` has been clicked.
To begin figuring out what's going on, we might look at a stack trace.
```
class ClickStack {
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame();
JButton button = new JButton("Click for stack trace");
button.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent ae) {
new Error().printStackTrace(System.out);
}
});
frame.add(button);
frame.pack();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setVisible(true);
}
});
}
}
```
This program gets us a call stack like the following:
```
at sscce.ClickStack$1$1.actionPerformed
at javax.swing.AbstractButton.fireActionPerformed
...
at javax.swing.DefaultButtonModel.setPressed
at javax.swing.plaf.basic.BasicButtonListener.mouseReleased
at java.awt.Component.processMouseEvent
...
at java.awt.Component.processEvent
...
at java.awt.Component.dispatchEventImpl
...
at java.awt.Component.dispatchEvent
at java.awt.EventQueue.dispatchEventImpl
...
at java.awt.EventQueue.dispatchEvent
at java.awt.EventDispatchThread.pumpOneEventForFilters
at java.awt.EventDispatchThread.pumpEventsForFilter
...
at java.awt.EventDispatchThread.pumpEvents
at java.awt.EventDispatchThread.run
```
If we take a look at the [`EventDispatchThread.run`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventDispatchThread.java#l80) method, we see:
```
public void run() {
try {
pumpEvents(...);
} finally {
...
}
}
```
[`EventDispatchThread.pumpEvents`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventDispatchThread.java#l92) takes us to [`EventDispatchThread.pumpEventsForFilter`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventDispatchThread.java#l112) which contains the outer loop logic:
```
void pumpEventsForFilter(...) {
...
while(doDispatch && ...) {
pumpOneEventForFilters(...);
}
...
}
```
An event is then pulled off the queue and sent off for dispatch in [`EventDispatchThread.pumpOneEventForFilters`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventDispatchThread.java#l156):
```
void pumpOneEventForFilters(...) {
AWTEvent event = null;
...
try {
...
EventQueue eq = getEventQueue();
...
event = eq.getNextEvent();
...
eq.dispatchEvent(event);
...
} catch(...) {
...
} ...
}
```
[`java.awt.EventQueue`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventQueue.java) contains logic where the type of event is narrowed and the event is further dispatched. [`EventQueue.dispatchEvent`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventQueue.java#l689) calls [`EventQueue.dispatchEventImpl`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/EventQueue.java#l736) where we see the following decision structure:
```
if (event instanceof ActiveEvent) {
...
((ActiveEvent)event).dispatch();
} else if (src instanceof Component) {
((Component)src).dispatchEvent(event);
...
} else if (src instanceof MenuComponent) {
((MenuComponent)src).dispatchEvent(event);
} else if (src instanceof TrayIcon) {
((TrayIcon)src).dispatchEvent(event);
} else if (src instanceof AWTAutoShutdown) {
...
dispatchThread.stopDispatching();
} else {
...
}
```
Most events we are familiar with go through the `Component` path.
[`Component.dispatchEvent`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/Component.java#l4698) calls [`Component.dispatchEventImpl`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/Component.java#l4708) which, for most listener-type events, calls [`Component.processEvent`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/awt/Component.java#l6261) where the event is narrowed down and forwarded again:
```
/**
* Processes events occurring on this component. By default this
* method calls the appropriate process<event type>Event
* method for the given class of event.
* ...
*/
protected void processEvent(AWTEvent e) {
if (e instanceof FocusEvent) {
processFocusEvent((FocusEvent)e);
} else if (e instanceof MouseEvent) {
switch(e.getID()) {
case MouseEvent.MOUSE_PRESSED:
case MouseEvent.MOUSE_RELEASED:
case MouseEvent.MOUSE_CLICKED:
case MouseEvent.MOUSE_ENTERED:
case MouseEvent.MOUSE_EXITED:
processMouseEvent((MouseEvent)e);
break;
case ...:
...
}
} else if (e instanceof KeyEvent) {
processKeyEvent((KeyEvent)e);
} else if (e instanceof ComponentEvent) {
processComponentEvent((ComponentEvent)e);
} else if (...) {
...
} ...
}
```
For a `JButton` click, we're following a `MouseEvent`.
These low level events ultimately have a single handler internal to the `Component`. So for example, we might have a look at [`javax.swing.plaf.BasicButtonListener`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/javax/swing/plaf/basic/BasicButtonListener.java) which implements a number of listener interfaces.
`BasicButtonListener` uses the mouse events to change the pressed state of the button model. Finally, the button model determines if it's been clicked in [`DefaultButtonModel.setPressed`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/javax/swing/DefaultButtonModel.java#l237), fires an `ActionEvent` and our listener's `actionPerformed` gets called.
### Native Messaging
How the actual native window is implemented is of course platform-specific but I can go through the Windows platform a bit since it's what you asked about. You'll find the Windows platform stuff in the following directories:
- Java: [`src/windows/classes/sun/awt/windows`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/classes/sun/awt/windows/)
- Native: [`src/windows/native/sun/windows`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/)
The Windows implementation of `java.awt.Toolkit`, which is [`sun.awt.windows.WToolkit`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/classes/sun/awt/windows/WToolkit.java), starts a separate thread for the actual message loop. [`WToolkit.run`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/classes/sun/awt/windows/WToolkit.java#l292) calls a JNI method `eventLoop`. [A comment in the source file](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/classes/sun/awt/windows/WToolkit.java#l310) explains that:
```
/*
* eventLoop() begins the native message pump which retrieves and processes
* native events.
* ...
```
This leads us to the C++ `AwtToolkit` class, located in [`awt_Toolkit.h`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.h) and [`awt_Toolkit.cpp`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp) (other classes follow the same file name convention).
The [native implementation of `eventLoop`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l2204) calls [`AwtToolkit::MessageLoop`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l1217):
```
AwtToolkit::GetInstance().MessageLoop(AwtToolkit::PrimaryIdleFunc,
AwtToolkit::CommonPeekMessageFunc);
```
([`AwtToolkit::CommonPeekMessageFunc`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l1384) calls [`PeekMessage`](https://learn.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-peekmessagea), which is the non-blocking alter-ego of [`GetMessage`](https://learn.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-getmessage).)
This is where the outer loop is located:
```
UINT
AwtToolkit::MessageLoop(IDLEPROC lpIdleFunc,
PEEKMESSAGEPROC lpPeekMessageFunc)
{
...
m_messageLoopResult = 0;
while (!m_breakMessageLoop) {
(*lpIdleFunc)();
PumpWaitingMessages(lpPeekMessageFunc); /* pumps waiting messages */
...
}
...
}
```
[`AwtToolkit::PumpWaitingMessages`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l1317) actually has a familiar-looking message loop, which calls [`TranslateMessage`](https://learn.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-translatemessage) and [`DispatchMessage`](https://learn.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-dispatchmessage):
```
/*
* Called by the message loop to pump the message queue when there are
* messages waiting. Can also be called anywhere to pump messages.
*/
BOOL AwtToolkit::PumpWaitingMessages(PEEKMESSAGEPROC lpPeekMessageFunc)
{
MSG msg;
BOOL foundOne = FALSE;
...
while (!m_breakMessageLoop && (*lpPeekMessageFunc)(msg)) {
foundOne = TRUE;
ProcessMsg(msg); // calls TranslateMessage & DispatchMessage (below)
}
return foundOne;
}
void AwtToolkit::ProcessMsg(MSG& msg)
{
if (msg.message == WM_QUIT) {
...
}
else if (msg.message != WM_NULL) {
...
::TranslateMessage(&msg);
::DispatchMessage(&msg);
}
}
```
(And recall that `DispatchMessage` calls the [`WindowProc`](https://msdn.microsoft.com/en-us/library/windows/desktop/ms633573%28v=vs.85%29.aspx) callback.)
The native window is wrapped by a C++ object which has platform-specific stuff, as well as a loose parallel of some of the API we have in Java code.
There seem to be a couple of `WindowProc` functions. One is just used internally by the toolkit, [`AwtToolkit::WndProc`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l688), along with [an empty window](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Toolkit.cpp#l346).
The `WindowProc` function we're actually interested in is [`AwtComponent::WndProc`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Component.cpp#l380). `WndProc` calls a virtual function called [`AwtComponent::WindowProc`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Component.cpp#l1308). Some subclasses override `WindowProc` (e.g. [`AwtFrame::WindowProc`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Frame.cpp#l408)), but a majority of messages are handled by `AwtComponent::WindowProc`. For example, it contains the following switch case:
```
case WM_LBUTTONDOWN:
case WM_LBUTTONDBLCLK:
mr = WmMouseDown(static_cast<UINT>(wParam), myPos.x, myPos.y,
LEFT_BUTTON);
break;
```
[`AwtComponent::WmMouseDown`](https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/sun/windows/awt_Component.cpp#l2311) begins a series of calls that posts a `java.awt.MouseEvent` to the `EventQueue` in Java:
```
SendMouseEvent(java_awt_event_MouseEvent_MOUSE_PRESSED, now, x, y,
GetJavaModifiers(), clickCount, JNI_FALSE,
GetButton(button), &msg);
```
After the event is posted, we are ultimately led back to the top where the event is seen on the EDT.
|
how to add macro defines in Qt .pro file
I am now using asio alone in my project, which is supposed to be built as a shared library used by others.
But I got the following error:
```
warning Please define _WIN32_WINNT or _WIN32_WINDOWS appropriately.
```
My question is: can I add a macro definition to my `.pro` file to avoid it?
Can someone tell me how to do it, just like adding
```
#define PI 3.1415926
```
to the `.pro` file.
|
Add the following to your `.pro` file:
```
DEFINES += "PI=\"3.1415926\""
```
In your compile output you will see something like
>
> g++ -c -pipe -g -Wall -W -D\_REENTRANT -fPIE **-DPI="3.1415926"** -DQT\_GUI\_LIB -DQT\_CORE\_LIB -I../nobackup/qbuffertest -I. -I../nobackup/Qt/5.4/gcc\_64/include -I../nobackup/Qt/5.4/gcc\_64/include/QtGui -I../nobackup/Qt/5.4/gcc\_64/include/QtCore -I. -I../nobackup/Qt/5.4/gcc\_64/mkspecs/linux-g++ -o main.o ../nobackup/qbuffertest/main.cpp
>
>
>
Now you can access the macro in your C++ files:
```
qDebug() << PI;
```
|
Laravel Eloquent - querying pivot table
in my Laravel app I have three database tables called users, projects and roles. There is m:n relation between them so I have also pivot table called project\_user\_role. Pivot table contains user\_id, project\_id and role\_id columns. See image for screenshot from MySQL Workbench.
[](https://i.stack.imgur.com/AEdfu.jpg)
My User, Project and Role models got defined belongsToMany relation like that:
```
//User model example
public function projects()
{
return $this->belongsToMany('App\Library\Models\Project', 'project_user_role')->withPivot(['user_id','role_id']);
}
```
Now I can easily get projects of authenticated user like that:
```
$user = Auth::user();
$projects = $user->projects;
```
Response of this looks like that:
```
[
{
"id": 1,
"name": "Test project",
"user_id": 1,
"created_at": "2018-05-01 01:02:03",
"updated_at": "2018-05-01 01:02:03",
"pivot": {
"user_id": 2,
"project_id": 17,
"role_id": 1
}
},
]
```
but I would like to "inject" information about the user role into the response like this:
```
[
{
"id": 1,
"name": "Test project",
"user_id": 1,
"created_at": "2018-05-01 01:02:03",
"updated_at": "2018-05-01 01:02:03",
"pivot": {
"user_id": 2,
"project_id": 17,
"role_id": 1
},
roles: [
{
"id": 1,
"name": "some role name",
"display_name": "Some role name",
"description": "Some role name",
"created_at": "2018-05-01 01:02:03",
"updated_at": "2018-05-01 01:02:03",
}
]
},
]
```
Is it possible? Thanks
|
You're essentially asking for an eager-load on a pivot table. The problem is, the data from the pivot table isn't coming from a top-level Model class, so there isn't anything in the way of a relationship method for you to reference.
There's a little awkwardness in your DB structure too, in that your pivot table is joining three tables instead of two. I'll get into some thoughts on that after actually answering your question though...
So, you can go from the User to the Project through the pivot table. And you can go from the User to the Role through your pivot table. But what you're looking for is to go from the Project to the Role through that pivot table (i.e. your desired output shows the project data at the top level with nested 'roles'). This can only be done if the Projects model is your entry point, as opposed to your User.
So start by adding a many-to-many relation method to your Projects Model called `roles`, and run your query like this:
```
app(Projects::class)->with('roles')->wherePivot('user_id', Auth::user()->getKey())->get()
```
As for the structure, I think you have a little bit of a single-responsibility violation there. "User" represents an individual person. But you're also using it to represent the concept of a "Participant" of a project. I believe you need a new Participant table that has a many-to-one relationship with User, and a one-to-one relationship with Project. Then your pivot table need only be a many-to-many between Participant and Role, and leave User out of it.
Then your query would look like this:
```
Auth::user()->participants()->with(['project', 'roles'])->get()
```
This would also give you the opportunity to add some data describing things like what the overall participant.status is, when they became associated with that project, when they left that project, or who their supervisor (parent\_participant\_id) might be.
|
reprepro - Is there any chance to enter the passphrase via bash-script?
I use "reprepro" for getting the latest Debian packages out of my local repository, which works fine manually.
Now I need to automate this process through a cron job but the reprepro passphrase is a prompt.
Is there any possibility to send the password via bash script? Couldn't find anything in the reprepro manpage.
|
I needed the same thing and was looking for solution. Apart from running `gpg-agent`, which will ask for the password only once (e.g. during boot) and cache it for next usage, I have found nothing.
The problem is how to interact with interactive programs, i.e. the ones that ask for user input on stdin. [Expect](http://linux.die.net/man/1/expect) (`apt-get install expect`) solves exactly that.
This is the script I wrote and saved in /usr/local/bin/reprepro\_expect:
```
#!/usr/bin/expect -f
set timeout 2
set passphrase "mysupersecretpassword"
spawn reprepro -b [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3]
expect {
"*passphrase:*" {
send -- "$passphrase\r"
}
}
expect {
"*passphrase:*" {
send -- "$passphrase\r"
}
}
interact
```
You can run it like this:
`reprepro_expect [path_to_repository] [command] [distribution] [package_name]`
For example:
Add new package:
`reprepro_expect /var/www/myrepo includedeb wheezy mypackage_0.1-1_all.deb`
Delete package
`reprepro_expect /var/www/myrepo remove wheezy mypackage`
Security:
Since the password to your private key is stored in the script, I recommend to `chown` it to the user under which it will be used and `chmod` it to 500. Why isn't the passphrase passed as another argument? Because it would be stored in ~/.bash\_history and it would show in `ps axu` during runtime.
|
Detect click outside control
I display TestControl in grid when user clicks on the button:
```
<Grid>
<myControls:TestControl Panel.ZIndex="2" Visibility="Collapsed" Width="100" Height="100" />
<Button Content="Open Test Control" Click="btnOpen_Click" />
</Grid>
```
How can I hide TestControl when user clicks outside this control?
|
You could handle the `PreviewMouseLeftButtonDown` event for the `Grid`:
```
<Grid Background="Transparent" PreviewMouseLeftButtonDown="Grid_PreviewMouseLeftButtonDown">
<Grid.RowDefinitions>
<RowDefinition />
<RowDefinition />
</Grid.RowDefinitions>
<myControls:TestControl x:Name="tc" Visibility="Collapsed" Width="100" Height="100" />
<Button Content="Open Test Control" Click="btnOpen_Click" Grid.Row="1" />
</Grid>
```
---
```
private void Grid_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
if (!tc.IsMouseOver)
tc.Visibility = Visibility.Collapsed;
}
```
|
ios break nested loop
If I have a `while` loop with a `for` loop *inside* of the `while` loop, how can I `break` both loops?
I'm doing this because the extra 250ms saved by not completing these loops after I've found what I want adds up to be valuable after a while.
pseudocode:
```
while(alwaysTrue) {
for(NSArray *arr in twoThousandItems) {
if(IFoundWhatIWasLookingFor) {
// assign some stuff here
// break everything, not just the for loop.
}
}
}
```
|
This is where `goto` is your friend. Yes, that `goto`.
```
while(alwaysTrue) {
for(NSArray *arr in twoThousandItems) {
if(IFoundWhatIWasLookingFor) {
// assign some stuff here
// break everything, not just the for loop.
goto BAIL;
}
}
}
BAIL:
NSLog(@"Freedom!");
```
The other option is to have short circuits in your loops.
```
BOOL found = NO;
while(alwaysTrue && !found) {
for(NSArray *arr in twoThousandItems) {
if(IFoundWhatIWasLookingFor) {
// assign some stuff here
// break everything, not just the for loop.
found = YES;
break;
}
}
}
```
|
Why does this key class for sorting heterogeneous sequences behave oddly?
Python 3.x's [`sorted()`](https://docs.python.org/3/library/functions.html#sorted) function cannot be relied on to sort heterogeneous sequences, because most pairs of distinct types are unorderable (numeric types like `int`, `float`, `decimal.Decimal` etc. being an exception):
```
Python 3.4.2 (default, Oct 8 2014, 08:07:42)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> sorted(["one", 2.3, "four", -5])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: float() < str()
```
In contrast, comparisons between objects that have no natural order are arbitrary but consistent in Python 2.x, so `sorted()` works:
```
Python 2.7.8 (default, Aug 8 2014, 14:55:30)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> sorted(["one", 2.3, "four", -5])
[-5, 2.3, 'four', 'one']
```
In order to replicate Python 2.x's behaviour in Python 3.x, I wrote a class to use as the `key` parameter to `sorted()`, which relies on the fact that `sorted()` is [guaranteed](https://docs.python.org/3/howto/sorting.html#odd-and-ends) to use only less-than comparisons:
```
class motley:
def __init__(self, value):
self.value = value
def __lt__(self, other):
try:
return self.value < other.value
except TypeError:
return repr(type(self.value)) < repr(type(other.value))
```
Example usage:
```
>>> sorted(["one", 2.3, "four", -5], key=motley)
[-5, 2.3, 'four', 'one']
```
So far, so good.
**However**, I've noticed a surprising behaviour when `sorted(s, key=motley)` is called with certain sequences containing complex numbers:
```
>>> sorted([0.0, 1, (1+0j), False, (2+3j)], key=motley)
[(1+0j), 0.0, False, (2+3j), 1]
```
I would have expected `0.0`, `False` and `1` to be in one group (because they are mutually orderable), and `(1+0j)` and `(2+3j)` in another (because they are of the same type). The fact that the complex numbers in this result are not only separated from each other, but one of them is sitting in the middle of a group of objects that are comparable with each other but not with it, is somewhat perplexing.
What's going on here?
|
You do not know what order the comparisons are done in, or even which items are compared, which means you can't really know what effect your `__lt__` will have. Your defined `__lt__` sometimes depends on the actual values, and sometimes on the string representations of the types, but both versions may be used for the same object in the course of the sort. This means that your ordering is not determined solely by the objects in the list, but also may depend on their initial order. This in turn means that just because objects are mutually comparable does not mean they will be sorted together; they may be "blocked" by an incomparable object between them.
You can get an inkling of what is going on by putting some debugging prints in to see what it's comparing:
```
class motley:
def __init__(self, value):
self.value = value
def __lt__(self, other):
fallback = False
try:
result = self.value < other.value
except TypeError:
fallback = True
result = repr(type(self.value)) < repr(type(other.value))
symbol = "<" if result else ">"
print(self.value, symbol, other.value, end="")
if fallback:
print(" -- because", repr(type(self.value)), symbol, repr(type(other.value)))
else:
print()
return result
```
Then:
```
>>> sorted([0.0, 1, (1+0j), False, (2+3j)], key=motley)
1 > 0.0
(1+0j) < 1 -- because <class 'complex'> < <class 'int'>
(1+0j) < 1 -- because <class 'complex'> < <class 'int'>
(1+0j) < 0.0 -- because <class 'complex'> < <class 'float'>
False > 0.0
False < 1
(2+3j) > False -- because <class 'complex'> > <class 'bool'>
(2+3j) < 1 -- because <class 'complex'> < <class 'int'>
[(1+0j), 0.0, False, (2+3j), 1]
```
You can see, for instance, that the type-based ordering is used for comparing the complex number to 1, but not for comparing 1 and 0. Likewise `0.0 < False` for "normal" reasons, but `2+3j > False` for type-name-based reasons.
The result is that it sorts `1+0j` to the beginning, and then leaves `2+3j` where it is above False. It never even attempts to compare the two complex numbers to each other, and the only item they are both compared to is 1.
More generally, your approach can lead to an intransitive ordering with appropriate choices for the names of the types involved. For instance, if you define classes A, B, and C, such that A and C can be compared, but they raise exceptions when comparing to B, then by creating objects `a`, `b` and `c` (from the respective classes) such that `c < a`, you can create a cycle `a < b < c < a`. `a < b < c` will be true because the classes will be compared based on their names, but `c < a` since these types can be directly compared. With an intransitive ordering, there is no hope of a "correct" sorted order.
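To make that concrete, here is a sketch with hypothetical classes `A`, `B` and `C` (not from the question; `motley` is re-defined so the snippet is self-contained) that produces exactly such a cycle:

```python
class motley:
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        try:
            return self.value < other.value
        except TypeError:
            return repr(type(self.value)) < repr(type(other.value))

class A:
    def __lt__(self, other):
        if isinstance(other, C):
            return False   # c < a holds, so a is not less than c
        raise TypeError    # incomparable with B

class B:
    def __lt__(self, other):
        raise TypeError    # incomparable with everything

class C:
    def __lt__(self, other):
        if isinstance(other, A):
            return True    # c < a by direct value comparison
        raise TypeError

a, b, c = motley(A()), motley(B()), motley(C())
# a < b and b < c fall back to comparing type reprs ('A' < 'B' < 'C'),
# while c < a compares the values directly -- hence a cycle:
print(a < b, b < c, c < a)  # True True True
```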
You can even do this with builtin types, although it requires getting a little creative to think of objects whose type names are in the right alphabetical sequence:
```
>>> motley(1.0) < motley(lambda: 1) < motley(0) < motley(1.0)
True
```
(Because `'float' < 'function'` :-)
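If the goal is a deterministic order rather than an exact replica of Python 2's behaviour, one transitive alternative (a sketch, not the only possible fix) is to group items by type name first and only sort by value within each same-type group:

```python
from itertools import groupby

def type_grouped(items):
    """Sort by type name, then by value within each same-type group.
    Groups whose members aren't mutually orderable (e.g. complex)
    keep their original relative order."""
    by_type = sorted(items, key=lambda x: type(x).__name__)  # stable
    result = []
    for _, group in groupby(by_type, key=lambda x: type(x).__name__):
        g = list(group)
        try:
            g.sort()
        except TypeError:
            pass  # complex numbers aren't orderable even among themselves
        result.extend(g)
    return result

print(type_grouped(["one", 2.3, "four", -5]))
# [2.3, -5, 'four', 'one']
print(type_grouped([0.0, 1, (1+0j), False, (2+3j)]))
# [False, (1+0j), (2+3j), 0.0, 1]
```

Unlike the `motley` key, this keeps the two complex numbers together, at the cost of also separating `int`s from `float`s, since grouping is by exact type name.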
|
How to check if a chrome app is installed in a different chrome app?
I am wondering how I could check, from one Chrome app, whether a different Chrome app is installed. For example, I have made app1 and app2; now I want to know if the user has app1 installed when he/she opens app2. Is this possible with some Chrome API, or is it not possible?
If I cannot check whether the user installed app1, is there a workaround of some sort?
I do have access to the chrome webstore if that matters.
What I want to do is provide some loyalty perks to those who install my other apps.
|
Since you wrote both apps, it's pretty simple using [External Messaging](https://developer.chrome.com/apps/messaging#external):
In app1 background script:
```
var app2id = "abcdefghijklmnoabcdefhijklmnoab2";
chrome.runtime.onMessageExternal.addListener(
// This should fire even if the app is not running, as long as it is
// included in the event page (background script)
function(request, sender, sendResponse) {
if(sender.id == app2id && request.areYouThere) sendResponse(true);
}
);
```
Somewhere in app2:
```
var app1id = "abcdefghijklmnoabcdefhijklmnoab1";
chrome.runtime.sendMessage(app1id, {areYouThere: true},
function(response) {
if(response) {
// Installed and responded
} else {
// Could not connect; not installed
}
}
);
```
|
Is testing Cat5 cable using a testing device necessary if I have verified connectivity with a computer?
I'm running some CAT5 cable around the house and this is the first time I've needed to build/terminate my own cables. I hooked up the cables to a computer and router and was able to reach the router configuration page with no trouble.
If it matters, the cable run is fairly long and I will be running Power-Over-Ethernet on that cable. I also tested with the POE device and everything seems to be in working order.
So all seems good, but I'm wondering if I also need to test with a handheld cable tester device. Is this device simply a way to test a cable without going to the trouble of hooking it up to an actual network, or does it test something that I really should be checking?
I'd hate to have to buy one of those devices if I don't need it since I don't really foresee the need to be running a lot of cable in the future. This is a one-off project.
So is it completely necessary in this situation to test using one of those devices?
|
These handheld devices are useful, since they test the cabling on the physical level and can help discover problems (cross-talk, wrong impedance, etc.) which are difficult to diagnose otherwise.
If you pulled some cable too much, or some turn inside your wall is too sharp, your network will "kinda work", but you won't get maximum throughput, or you'll get weird connectivity issues, etc.
It will be a pain to re-do things once you've made everything neat and declared the job done. Maybe you can lease such a device or hire someone who has one to test your network? These testers are prohibitively expensive to buy if you need them for just one job...
So, it is not strictly necessary to test with such a tester, but it gives extra assurance that you did things right, especially if you're not a pro at network cabling.
|
MySQL automatically updating field when inserting data
I'm making a web application to make customers order items for anything. For that I've made a MySQL database which has the following tables:
- `customers`
- `orders`
- `orders-items`
- `products`
In the `customers` table is all the information about the person such as:
- The customer ID, for the primary key and auto increment (id)
- The first name (first\_name)
- The last name (last\_name)
- The email address (email\_address)
- Information about the customer (customer\_info)
**Example:**

In the `orders` table is all the specific information about it such as:
- The order ID, for the primary key and auto increment (id)
- Which customer it ordered, linked with `id` field from the `customers` table (customer\_id)
- Order information (order\_info)
- The location where the order needs to go to (location)
- The total price the customer has to pay (total\_price)
- When the order was created (created)
**Example:**

In the `orders-items` table are all the items which every customer ordered, this is being linked by the `order-id` from the previous table.
- The ID, for primary key and auto increment, not used for any relation (id)
- The order ID, used for which product is for which order. This is linked with the `id` field from the `orders` table (order\_id)
- The product ID, this is used for what product they ordered, this is linked with the id field from the `products` table. (product\_id)
- The amount of this product they ordered (quantity)
**Example:**

In the `products` table is all the information about the products:
- The ID, for primary key and auto incrementing, This is linked with the `product_id` field from the `order_items` table (id)
- The name of the product (name)
- The description of the product (description)
- The price of the product (price)
**Example:**

**Question:**
I've got this query:
```
SELECT `orders-items`.`order_id` , SUM(`orders-items`.`quantity`* `products`.`price`) total
FROM `orders-items`
INNER JOIN `Products` ON `orders-items`.`products_id` = `products`.`id`
```
And it shows me a list of all the total prices every `order_id` has to pay.
But how do I make it so that the `total_price` value that every `order_id` has to pay is automatically inserted into the `orders` table, in the `total_price` field of the right `order_id`, when inserting a product into my `orders-items` table?
Or is it still better to not keep track of the `total_prices` the `customers` have to pay?
|
A couple things to consider.
Having a `total_price` field at all is redundant. You can learn this total by summing the prices of the order's items at any time. It might be interesting to have it for performance reasons, but is that really necessary for your scenario? It rarely is.
Having a `price` on each `order_item`, on the other hand, would be useful, because those products' prices might change in the future and you don't want to lose the information of how much they were sold for at the time of that particular sale.
In any case, you can update your `total_price` using triggers like this:
```
DELIMITER $$

CREATE TRIGGER order_items_insert AFTER INSERT ON `orders-items` FOR EACH ROW
BEGIN
    UPDATE orders o
    INNER JOIN (
        SELECT i.order_id id, SUM(i.quantity * p.price) total_price
        FROM `orders-items` i
        INNER JOIN products p
            ON p.id = i.products_id AND i.order_id = new.order_id
    ) t ON t.id = o.id
    SET o.total_price = t.total_price;
END$$

CREATE TRIGGER order_items_update AFTER UPDATE ON `orders-items` FOR EACH ROW
BEGIN
    UPDATE orders o
    INNER JOIN (
        SELECT i.order_id id, SUM(i.quantity * p.price) total_price
        FROM `orders-items` i
        INNER JOIN products p
            ON p.id = i.products_id AND i.order_id = new.order_id
    ) t ON t.id = o.id
    SET o.total_price = t.total_price;
END$$

CREATE TRIGGER order_items_delete AFTER DELETE ON `orders-items` FOR EACH ROW
BEGIN
    UPDATE orders o
    INNER JOIN (
        SELECT i.order_id id, SUM(i.quantity * p.price) total_price
        FROM `orders-items` i
        INNER JOIN products p
            ON p.id = i.products_id AND i.order_id = old.order_id
    ) t ON t.id = o.id
    SET o.total_price = t.total_price;
END$$

DELIMITER ;
```
|
JSF: How validate fields and return error messages by bean validation?
I have a contact form and I have some fields that are validated by bean validation, how could I return bean validation error messages after submitting?
For example:
```
<h:form>
<h:inputText id="name" value="#{contact.client.name}"></h:inputText>Name (Required)
<h:inputText id="email" value="#{contact.client.email}"></h:inputText>E-Mail (Required)
<h:inputText id="website" value="#{contact.client.website}"></h:inputText>Website (Optional)
<h:inputText id="text" value="#{contact.client.text}"></h:inputText>Message (Required):
<h:commandButton value="Send" action="#{contact.sendMessage}" >
<f:ajax execute="@form" render="@form"/>
</h:commandButton>
</h:form>
```
This is how I'm validating my fields:
```
// Client.java (model)
@NotNull(message="Please provide your name")
private String name;
@NotNull(message="Please provide your email")
@Pattern(regexp = "([^.@]+)(\\.[^.@]+)*@([^.@]+\\.)+([^.@]+)", message = "Invalid e-mail")
private String email;
@Pattern(regexp = "(http[s]?://|ftp://)?(www\\.)?[a-zA-Z0-9-\\.]+\\.([a-zA-Z]{2,5})$", message = "Not valid URL")
private String website;
@NotNull(message="Please provide your message")
private String text;
```
|
Either use [`<h:message>`](http://download.oracle.com/javaee/6/javaserverfaces/2.1/docs/vdldocs/facelets/h/message.html) which you attach to specific components by `for` attribute which should refer the `id` of the input component:
```
<h:inputText id="name" value="#{contact.client.name}"></h:inputText>Name (Required)
<h:message for="name" />
<h:inputText id="email" value="#{contact.client.email}"></h:inputText>E-Mail (Required)
<h:message for="email" />
<h:inputText id="website" value="#{contact.client.website}"></h:inputText>Website (Optional)
<h:message for="website" />
<h:inputText id="text" value="#{contact.client.text}"></h:inputText>Message (Required):
<h:message for="text" />
```
or use [`<h:messages/>`](http://download.oracle.com/javaee/6/javaserverfaces/2.1/docs/vdldocs/facelets/h/messages.html) to display them all at a single place:
```
<h:messages />
```
Yes, bean validation messages also ends in there.
Don't forget to ensure that the button's `render` attribute covers them as well.
### See also:
- [JSF 2.0 tutorial with Glassfish and Eclipse - Hello World - The view](http://balusc.blogspot.com/2011/01/jsf-20-tutorial-with-eclipse-and.html#CreateJSFHelloWorldInEclipseTheView)
|
Express CSRF token validation
I'm having issues with CSRF tokens. When I submit a form, a new `XSRF-TOKEN` is being generated, but I think I'm generating two different tokens, and I'm kinda confused. There's also a token called `_csrf`, so I see two different cookies in developer tools (`XSRF-TOKEN` and `_csrf`); `_csrf` doesn't change after a POST.
What I want to do is generate a new token for each POST request and check whether it's valid or not. I know I should do this for security, but I'm stuck.
It has been a long day and I'm new to Express and NodeJS.
Here's my current setup.
```
var express = require('express')
, passport = require('passport')
, flash = require('connect-flash')
, utils = require('./utils')
, csrf = require('csurf')
// setup route middlewares
,csrfProtection = csrf({ cookie: true })
, methodOverride = require('method-override')
, bodyParser = require("body-parser")
, parseForm = bodyParser.urlencoded({ extended: false })
, cookieParser = require('cookie-parser')
, cookieSession = require('cookie-session')
, LocalStrategy = require('passport-local').Strategy
, RememberMeStrategy = require('../..').Strategy;
var app = express();
app.set('views', __dirname + '/views');
app.set('view engine', 'ejs');
app.engine('ejs', require('ejs-locals'));
app.use(express.logger());
app.use(express.static(__dirname + '/../../public'));
app.use(cookieParser());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(methodOverride());
app.use(express.session({ secret: 'keyboard cat' }));
app.use(flash());
// Initialize Passport! Also use passport.session() middleware, to support
// persistent login sessions (recommended).
app.use(passport.initialize());
app.use(passport.session());
app.use(passport.authenticate('remember-me'));
app.use(app.router);
app.use(csrf());
app.use(function (req, res, next) {
res.cookie('XSRF-TOKEN', req.csrfToken());
res.locals.csrftoken = req.csrfToken();
next();
});
```
Routes
```
app.get('/form', csrfProtection, function(req, res) {
// pass the csrfToken to the view
res.render('send', { csrfToken: req.csrfToken()});
});
app.post('/process', parseForm, csrfProtection, function(req, res) {
res.send('data is being processed');
});
```
send.ejs (/form GET)
```
<form action="/process" method="POST">
<input type="hidden" name="_csrf" value="<%= csrfToken %>">
Favorite color: <input type="text" name="favoriteColor">
<button type="submit">Submit</button>
</form>
```
|
Based on the amount of code you shared, I will mention a few things that don't look quite right to me:
1 . You may need to swap the lines below so that csrf runs before the routes.
```
app.use(csrf());
app.use(app.router);
```
2 . The csrftoken setup needs to also be placed before the routes.
```
app.use(csrf());
app.use(function (req, res, next) {
res.cookie('XSRF-TOKEN', req.csrfToken());
res.locals.csrftoken = req.csrfToken();
next();
});
app.use(app.router);
```
3 . You'll need to use `locals.csrftoken` in your form:
```
<form action="/process" method="POST">
<input type="hidden" name="_csrf" value="<%= csrftoken %>">
Favorite color: <input type="text" name="favoriteColor">
<button type="submit">Submit</button>
</form>
```
|
What do the commas in "${var,,}" mean?
```
read -p "Enter yes/no " SOMEVAR
SOMEVAR=`"echo ${SOMEVAR,,}"`
```
The code above gives me a `${SOMEVAR,,}: bad substitution` error.
|
The parameter expansion `${variable,,}` would expand to the value of `$variable` with all characters in lower case in the `bash` shell. The fact that you get a "bad substitution" error when this code runs suggests that you are in fact either
- not using that shell but possibly `/bin/sh` (which is not always `bash`). But not getting an error for `read -p` suggests that it's more likely that you are
- using an older release of `bash` which does not support this expansion (introduced in release 4 of `bash`).
The generic form of the expansion is `${variable,,pattern}` in which all characters in `$variable` that matches `pattern` would be converted to lower case (use `^^` to convert to upper case):
```
$ str="HELLO"
$ printf '%s\n' "${str,,[HEO]}"
heLLo
```
See also the `bash` manual on your system.
---
For older releases of `bash`, you could instead do the following to lowercase the value of a variable:
```
variable=$( tr 'A-Z' 'a-z' <<<"$variable" )
```
This passes the value of the variable through `tr` using a "here-string". The `tr` utility transliterates all characters in the `A` to `Z` ASCII range (assuming the C/POSIX locale) to the corresponding character in the `a` to `z` range.
---
Note also that
```
SOMEVAR=`"echo ${SOMEVAR,,}"`
```
is better written as
```
SOMEVAR=${SOMEVAR,,}
```
In fact, what you wrote would give you a "command not found" error in `bash` release 4+, unless you have a command called `echo string`, including the space (where `string` was what the user inputted). This is due to the command substitution trying to execute the double quoted string.
|
EXC\_BAD\_ACCESS on device, but fine on Simulator
I have a scroll view app which runs fine on the simulator; however, when installed on the device, it gives me an EXC\_BAD\_ACCESS when I attempt to scroll one page. I have run it through Instruments with Allocations and Leaks, but nothing is leaked and no zombies are messaged... I'm just curious what could cause such a difference between simulator and device? Any ways to debug this, since my symbolicated crash log (partial below) doesn't seem to be very symbolicated.
Exception Type: EXC\_BAD\_ACCESS (SIGBUS)
Exception Codes: KERN\_PROTECTION\_FAILURE at 0x0000000f
Crashed Thread: 0
Thread 0 Crashed:
0 libobjc.A.dylib 0x0000286e objc\_msgSend + 18
1 MyApp 0x00004fee 0x1000 + 16366
2 UIKit 0x000668f4 -[UIViewController view] + 104
3 MyApp 0x00009716 0x1000 + 34582
4 MyApp 0x0000960c 0x1000 + 34316
5 UIKit 0x0001426c -[UIScrollView setContentOffset:] + 344
Thanks
|
Your code in the simulator could have the same bug, but isn't triggering EXC\_BAD\_ACCESS simply because, by luck, the bad pointer you dereference happens to point into mapped memory. A pointer can be bad, and accessed, yet go undetected -- it's still a bug.
You have already checked to see that no Zombies are messaged, which would have been my first suggestion.
The next thing to do is Enable Guard Malloc -- and then read this
<http://developer.apple.com/iphone/library/documentation/Performance/Conceptual/ManagingMemory/Articles/MallocDebug.html>
You can only do this in the simulator -- your goal is to use the extra-sensitive heap to make the bug throw EXC\_BAD\_ACCESS in the simulator.
In the article:
1. Look at how to set the variables in GDB
2. Read the "Detecting Heap Corruption" section
|
Android Studio Action Bar Color not Changing
in styles.xml running Android 5.0 lollipop
```
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
<!-- Customize your theme here. -->
<item name="android:colorPrimary">@color/primary</item>
<item name="android:colorPrimaryDark">@color/primary_dark</item>
<item name="android:colorAccent">@color/accent</item>
<item name="android:statusBarColor">@color/primary</item>
<item name="android:colorAccent">@color/accent</item>
<item name="android:textColorPrimary">@color/primary_text</item>
<item name="android:textColor">@color/secondary_text</item>
<item name="android:navigationBarColor">@color/primary_dark</item>
</style>
```
when I build it and run it, I only get the status bar colored with the colorPrimaryDark while the toolbar remains black. How do I make it turn to colorPrimary?
This is what I'm currently getting
<https://www.dropbox.com/s/alp8d2fhzfd5g71/Screenshot_2015-02-25-21-13-01.png?dl=0>
|
UPDATE:
Make a new file in your layouts folder called tool\_bar.xml and paste the following code:
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.Toolbar
android:layout_height="wrap_content"
android:layout_width="match_parent"
android:background="@color/ColorPrimary"
android:elevation="2dp"
android:theme="@style/Base.ThemeOverlay.AppCompat.Dark"
xmlns:android="http://schemas.android.com/apk/res/android" />
```
Add these colors in your color.xml file:
```
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="ColorPrimary">#00897B</color>
<color name="ColorPrimaryDark">#00695C</color>
</resources>
```
This is the code for your styles.xml file:
```
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
<item name="colorPrimary">@color/ColorPrimary</item>
<item name="colorPrimaryDark">@color/ColorPrimaryDark</item>
</style>
```
You should add the following code to your main activity's layout file (e.g. activity\_main.xml):
```
<include
android:id="@+id/tool_bar"
layout="@layout/tool_bar"
android:layout_height="wrap_content"
android:layout_width="match_parent" />
```
This way worked for me!
This should give you an action bar that looks like the one below:

|
How do I delete items from a dictionary while iterating over it?
Can I delete items from a dictionary in Python while iterating over it?
I want to remove elements that don't meet a certain condition from the dictionary, instead of creating an entirely new dictionary. Is the following a good solution, or are there better ways?
```
for k, v in mydict.items():
if k == val:
del mydict[k]
```
|
For **Python 3+**:
```
>>> mydict
{'four': 4, 'three': 3, 'one': 1}
>>> for k in list(mydict.keys()):
... if mydict[k] == 3:
... del mydict[k]
>>> mydict
{'four': 4, 'one': 1}
```
The other answers work fine with **Python 2** but raise a `RuntimeError` for **Python 3**:
>
> RuntimeError: dictionary changed size during iteration.
>
>
>
This happens because `mydict.keys()` returns a view (an iterator over the keys), not a list.
As pointed out in the comments, simply convert `mydict.keys()` to a list with `list(mydict.keys())` and it works.
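As an aside — if rebinding the name to a new dictionary is acceptable (the question asks to avoid building a new dict, so treat this only as an alternative), a dict comprehension expresses the same filtering in one line:

```python
mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}

# Build a filtered copy and rebind the name; the original dict is
# left for the garbage collector rather than mutated in place.
mydict = {k: v for k, v in mydict.items() if v != 3}

print(mydict)  # {'one': 1, 'two': 2, 'four': 4}
```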
---
For **Python 2**:
A simple test in the console shows you cannot modify a dictionary while iterating over it:
```
>>> mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
>>> for k, v in mydict.iteritems():
... if k == 'two':
... del mydict[k]
------------------------------------------------------------
Traceback (most recent call last):
File "<ipython console>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
```
As stated in delnan's answer, deleting entries causes problems when the iterator tries to move onto the next entry. Instead, use the `keys()` method to get a list of the keys and work with that:
```
>>> for k in mydict.keys():
... if k == 'two':
... del mydict[k]
>>> mydict
{'four': 4, 'three': 3, 'one': 1}
```
If you need to delete based on the items value, use the `items()` method instead:
```
>>> for k, v in mydict.items():
... if v == 3:
... del mydict[k]
>>> mydict
{'four': 4, 'one': 1}
```
|
numpy Loadtxt function seems to be consuming too much memory
When I load an array using numpy.loadtxt, it seems to take too much memory. E.g.
```
a = numpy.zeros(int(1e6))
```
causes an increase of about 8 MB in memory (using htop, or just 8 bytes × 1 million ≈ 8 MB). On the other hand, if I save and then load this array
```
numpy.savetxt('a.csv', a)
b = numpy.loadtxt('a.csv')
```
my memory usage increases by about 100MB! Again I observed this with htop. This was observed while in the iPython shell, and also while stepping through code using Pdb++.
Any idea what's going on here?
After reading jozzas's answer, I realized that if I know the array size ahead of time, there is a much more memory-efficient way to do things if, say, 'a' were an m×n array:
```
b = numpy.zeros((m,n))
with open('a.csv', 'r') as f:
reader = csv.reader(f)
for i, row in enumerate(reader):
b[i,:] = numpy.array(row)
```
|
Saving this array of floats to a text file creates a 24M text file. When you re-load this, numpy goes through the file line-by-line, parsing the text and recreating the objects.
I would expect memory usage to spike during this time, as numpy doesn't know how big the resultant array needs to be until it gets to the end of the file, so I'd expect there to be at least 24M + 8M + other temporary memory used.
Here's the relevant bit of the numpy code, from `/lib/npyio.py`:
```
# Parse each line, including the first
for i, line in enumerate(itertools.chain([first_line], fh)):
vals = split_line(line)
if len(vals) == 0:
continue
if usecols:
vals = [vals[i] for i in usecols]
# Convert each value according to its column and store
items = [conv(val) for (conv, val) in zip(converters, vals)]
# Then pack it according to the dtype's nesting
items = pack_items(items, packing)
X.append(items)
#...A bit further on
X = np.array(X, dtype)
```
This additional memory usage shouldn't be a concern, as this is just the way Python works -- while your Python process appears to be using 100 MB of memory, internally it maintains knowledge of which items are no longer used, and will re-use that memory. For example, if you were to re-run this save-load procedure in one program (save, load, save, load), your memory usage will not increase to 200 MB.
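As a further side note — if you don't need a human-readable file, numpy's binary `.npy` format (`np.save`/`np.load`) avoids the per-line parsing entirely: the file stores a small header plus the raw 8-byte doubles, so loading allocates the final array in one step instead of building a temporary list of Python objects. A minimal sketch (using an in-memory buffer in place of a file for brevity):

```python
import io

import numpy as np

a = np.zeros(int(1e6))

# Binary .npy round-trip: a small header plus the raw 8-byte doubles,
# so loading allocates the final array directly (no per-line parsing).
buf = io.BytesIO()
np.save(buf, a)
buf.seek(0)
b = np.load(buf)

assert b.dtype == np.float64 and (a == b).all()
```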
|
Ckeditor character limitation with charcount plugin
How can I **prevent** users from entering new characters **after the max character** limit is reached?
The CKEditor charcount plugin just shows me the remaining characters; I want it to stop at 0, but it goes into negative numbers.
Here's my html code.
```
<textarea id="myTextEditor1" name="myTextEditor"></textarea>
<script type="text/javascript">
CKEDITOR.replace('myTextEditor1', {
height: 200,
extraPlugins: 'charcount',
maxLength: 10,
toolbar: 'TinyBare',
toolbar_TinyBare: [
['Bold','Italic','Underline'],
['Undo','Redo'],['Cut','Copy','Paste'],
['NumberedList','BulletedList','Table'],['CharCount']
]
});
</script>
```
Do I have to use the **onChange plugin**? If I have to, how can I prevent users from entering new characters?
|
I used the ckeditor jQuery adapter for this.
```
<textarea id="myTextEditor1" name="myTextEditor"></textarea>
<script type="text/javascript">
$(function () {
var myEditor = $('#myTextEditor1');
myEditor.ckeditor({
height: 200,
extraPlugins: 'charcount',
maxLength: 10,
toolbar: 'TinyBare',
toolbar_TinyBare: [
['Bold','Italic','Underline'],
['Undo','Redo'],['Cut','Copy','Paste'],
['NumberedList','BulletedList','Table'],['CharCount']
]
}).ckeditor().editor.on('key', function(obj) {
if (obj.data.keyCode === 8 || obj.data.keyCode === 46) {
return true;
}
if (myEditor.ckeditor().editor.document.getBody().getText().length >= 10) {
alert('No more characters possible');
return false;
}else { return true; }
});
});
</script>
```
The keyCode check is to allow backspace and delete key presses. To use the jQuery adapter, don't forget to insert it:
```
<script src="/path-to/ckeditor/adapters/jquery.js"></script>
```
|
Can I specify conditional default values for a parameter in PowerShell?
I thought if this was possible it might work using parameter sets so I tried the following:
```
Function New-TestMultipleDefaultValues {
[CmdletBinding(DefaultParameterSetName="Default1")]
param (
[Parameter(Mandatory,ParameterSetName="Default1")]$SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1,
[Parameter(ParameterSetName="Default1")]$Test1 = "Value1",
[Parameter(ParameterSetName="Default2")]$Test1 = "Value2"
)
$PSBoundParameters
}
```
Executing this to create the function results in the error `Duplicate parameter $test1 in parameter list.` so it doesn't look like this way is an option.
The only thing I can think of at this point is to do something like this:
```
Function New-TestMultipleDefaultValues {
param (
$SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1,
$Test1
)
if (-not $Test1 -and $SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1) {
$Test1 = "Value1"
} elseif (-not $Test1 -and -not $SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1) {
$Test1 = "Value2"
}
$Test1
}
```
Which works but seems ugly:
```
PS C:\Users\user> New-TestMultipleDefaultValues -SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1 "thing"
Value1
PS C:\Users\user> New-TestMultipleDefaultValues
Value2
PS C:\Users\user> New-TestMultipleDefaultValues -Test1 "test"
test
```
Any better way to accomplish this?
|
The following should work:
Since there is then no longer a need for explicit parameter sets, I've omitted them; without specific properties, the `[Parameter()]` attributes aren't strictly needed anymore either.
```
Function New-TestMultipleDefaultValues {
[CmdletBinding()]
param (
[Parameter()] $SomeOtherThing,
[Parameter()] $Test1 =
('Value2', 'Value1')[$PSBoundParameters.ContainsKey('SomeOtherThing')]
)
# * As expected, if -Test1 <value> is explicitly specified,
# parameter variable $Test1 receives that value.
# * If -Test1 is omitted, the expression assigns 'Value1` to $Test1
# if -SomeOtherThing was specified, and 'Value2' otherwise.
$Test1 # Output the effective value of $Test1
}
```
- It is possible to use *expressions* as parameter default values.
- The above code *is* an expression and therefore can be used as-is.
- To use a single *command* (a call to a PowerShell cmdlet, function, script or to an external program) as an expression, enclose it in `(...)`, the [grouping operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#grouping-operator--).
- In all other cases you need `$(...)`, the [subexpression operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#subexpression-operator--) (or `@(...)`, the [array-subexpression operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#array-subexpression-operator--)) to convert the code to an expression; these cases are:
- A `Throw` statement (and, hypothetically, `exit` and `return` statements, but you wouldn't use them in this context)
- A compound construct such as `foreach`, `while`, ...
- *Multiple* commands, expressions, or compound constructs, separated with `;`
- However, it is safe to *always* use `$(...)` (or `@(...)`) to enclose the code that calculates the default value, which you may opt to do for simplicity.
- These expressions are evaluated *after* the explicitly specified parameters have been bound, which allows an expression to examine what parameters have been bound, via the automatic `$PSBoundParameters` variable:
- `('Value2', 'Value1')[$PSBoundParameters.ContainsKey('SomeOtherThing')]` is simply a more concise reformulation of
`if ($PSBoundParameters.ContainsKey('SomeOtherThing')) { 'Value1' } else { 'Value2' }`
that takes advantage of `[bool]` values mapping onto `0` (`$false`) and `1` (`$true`) when used as an array index (integer).
- In PowerShell v7+ you could use a *ternary conditional* instead, which has the added advantage of short-circuiting the evaluation:
`$PSBoundParameters.ContainsKey('SomeOtherThing') ? 'Value1' : 'Value2'`
|
java remove elements which match certain duplication rules from list
I have a list like this:
```
List<Map<String, String>> list = new ArrayList<Map<String, String>>();
Map<String, String> row;
row = new HashMap<String, String>();
row.put("page", "page1");
row.put("section", "section1");
row.put("index", "index1");
list.add(row);
row = new HashMap<String, String>();
row.put("page", "page2");
row.put("section", "section2");
row.put("index", "index2");
list.add(row);
row = new HashMap<String, String>();
row.put("page", "page3");
row.put("section", "section1");
row.put("index", "index1");
list.add(row);
```
I need to remove duplicates based on 2 out of 3 elements ("section", "index") of the row (Map) being the same. This is what I'm trying to do:
```
for (Map<String, String> row : list) {
for (Map<String, String> el : list) {
if (row.get("section").equals(el.get("section")) && row.get("index").equals(el.get("index"))) {
list.remove(el);
}
}
}
```
it fails with `java.util.ConcurrentModificationException`. There must be another way of doing this, but I don't know how. Any ideas?
**UPDATE:** I've tried to use Iterator, as suggested, still the same exception:
```
Iterator<Map<String, String>> it = list.iterator();
while (it.hasNext()) {
Map<String, String> row = it.next();
for (Map<String, String> el : list) {
if (row.get("section").equals(el.get("section")) && row.get("index").equals(el.get("index"))) {
list.remove(row);
}
}
}
```
**UPDATE2:** This fails with the same exception:
```
Iterator<Map<String, String>> it = list.iterator();
while (it.hasNext()) {
Map<String, String> row = it.next();
Iterator<Map<String, String>> innerIt = list.iterator();
while (innerIt.hasNext()) {
Map<String, String> el = innerIt.next();
if (row.get("section").equals(el.get("section")) && row.get("index").equals(el.get("index"))) {
innerIt.remove();
//it.remove(); //fails as well
}
}
}
```
**UPDATE 3, SOLUTION:** Annoyingly simple:
```
for (int i = 0; i < list.size(); i++) {
for (int j = 0; j < list.size(); j++) {
if (list.get(i).get("section").equals(list.get(j).get("section")) && list.get(i).get("index").equals(list.get(j).get("index"))) {
list.remove(i);
}
}
}
```
**UPDATE 4:** "Solution" didn't work as intended. Correct answer is now selected.
|
You cannot add/remove elements of a collection while iterating over it, except through an [Iterator](http://docs.oracle.com/javase/6/docs/api/java/util/Iterator.html).
See [Collection#iterator()](http://docs.oracle.com/javase/6/docs/api/java/util/Collection.html#iterator%28%29) to get an iterator on your list.
See [Iterator#remove()](http://docs.oracle.com/javase/6/docs/api/java/util/Iterator.html#remove%28%29) to see how to remove elements from a Collection while iterating over it.
You can structure your code like this :
```
//Get an iterator on your list.
Iterator<Map<String, String>> itr = list.iterator();
//iterate
while(itr.hasNext()) {
Map<String, String> elt= itr.next();
if(isDuplicate(list, elt)) {
itr.remove();
}
}
```
Here is an example of a method to find whether you have a duplicate:
```
public boolean isDuplicate(List<Map<String, String>> list, Map<String, String> map){
//Count the occurences of the searched element.
int cpt = 0;
/*
* Here, another iterator is implicitly created.
* It is not the same as in the main loop.
* That's why I avoid the ConcurrentModificationException.
*/
for(Map<String, String> m : list) {
if(m.get("section").equals(map.get("section")) && m.get("index").equals(map.get("index"))) {
cpt++;
}
}
//If the element is found twice, then it is a duplicate.
return cpt == 2;
}
```
Here is an extract of the Javadoc for the ArrayList class (from Sun JDK sources):
>
> The iterators returned by this class's iterator and listIterator methods are fail-fast: if the list is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove or add methods, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.
>
>
>
To understand further how iterators work, let's read the Sun JDK source for the ArrayList iterator. This is an inner class found in ArrayList.java:
```
private class Itr implements Iterator<E> {
int cursor; // index of next element to return
int lastRet = -1; // index of last element returned; -1 if no such
int expectedModCount = modCount;
```
Here, we can see that when instantiated (with Collection#iterator()), the iterator initializes an `expectedModCount` (modCount = modification count). Here, `modCount` is an attribute of the ArrayList class.
Each time you call a method on the iterator (`next()`, `previous()`, `add()`, `remove()`), this method is called :
```
final void checkForComodification() {
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
}
```
This is the method that throws **ConcurrentModificationException**!
Each time you do a modification on a list, the ArrayList updates `modCount`. So if you modify a list without an iterator, `modCount` becomes != `expectedModCount`. At the next call of any method of the iterator, you get the exception.
When you use a for-each loop, you are implicitly creating an iterator and calling next() at the end of each loop.
Each time you modify the list via a method of the iterator, `expectedModCount` is updated to `modCount`, sparing you the ConcurrentModificationException.
|
Why C# Garbage Collection behavior differs for Release and Debug executables?
Let's consider the following simple program:
```
class Program
{
class TestClass
{
~TestClass()
{
Console.WriteLine("~TestClass()");
}
}
static void Main(string[] args)
{
WeakReference weakRef;
{
var obj = new TestClass();
weakRef = new WeakReference(obj);
Console.WriteLine("Leaving the block");
}
Console.WriteLine("GC.Collect()");
GC.Collect();
System.Threading.Thread.Sleep(1000);
Console.WriteLine("weakRef.IsAlive == {0}", weakRef.IsAlive);
Console.WriteLine("Leaving the program");
}
}
```
When built in Release mode, it predictably prints:
```
Leaving the block
GC.Collect()
~TestClass()
weakRef.IsAlive == False
Leaving the program
```
When Debug version is launched (not under the Debugger, usual launch from Windows Explorer), the output differs:
```
Leaving the block
GC.Collect()
weakRef.IsAlive == True
Leaving the program
~TestClass()
```
Running under the debugger for both versions doesn't change the output.
I've discovered this strange difference during debugging of my custom collection that keeps weak references to objects.
Why does the garbage collector in debug executables not collect objects that are clearly no longer referenced?
**UPDATE:**
Situation differs if object creation is performed in other method:
```
class Program
{
class TestClass
{
~TestClass()
{
Console.WriteLine("~TestClass()");
}
}
static WeakReference TestFunc()
{
var obj = new TestClass();
WeakReference weakRef = new WeakReference(obj);
Console.WriteLine("Leaving the block");
return weakRef;
}
static void Main(string[] args)
{
var weakRef = TestFunc();
Console.WriteLine("GC.Collect()");
GC.Collect();
System.Threading.Thread.Sleep(1000);
Console.WriteLine("weakRef.IsAlive == {0}", weakRef.IsAlive);
Console.WriteLine("Leaving the program");
}
}
```
It outputs the same output in Release and Debug versions:
```
Leaving the block
GC.Collect()
~TestClass()
weakRef.IsAlive == False
Leaving the program
```
|
The short answer is that the GC isn't required to do anything like what you're describing. The long answer is that it's not uncommon for something to work more pessimistically under debug configuration, in order to allow you to debug more easily.
For example, in this case, because you declared `obj` as a local variable somewhere inside the method, the C# compiler can reasonably choose to retain a reference to that instance, so that utilities like the Locals window or the Watch windows in Visual Studio can function predictably.
Indeed, this is the IL of your code generated using the Debug configuration:
```
.method private hidebysig static void Main (
string[] args
) cil managed
{
.entrypoint
.locals init (
[0] class [mscorlib]System.WeakReference weakRef,
[1] class _GC.Program/TestClass obj
)
IL_0000: nop
IL_0001: nop
IL_0002: newobj instance void _GC.Program/TestClass::.ctor()
IL_0007: stloc.1
IL_0008: ldloc.1
IL_0009: newobj instance void [mscorlib]System.WeakReference::.ctor(object)
IL_000e: stloc.0
IL_000f: ldstr "Leaving the block"
IL_0014: call void [mscorlib]System.Console::WriteLine(string)
IL_0019: nop
IL_001a: nop
IL_001b: ldstr "GC.Collect()"
IL_0020: call void [mscorlib]System.Console::WriteLine(string)
IL_0025: nop
IL_0026: call void [mscorlib]System.GC::Collect()
IL_002b: nop
IL_002c: ldc.i4 1000
IL_0031: call void [mscorlib]System.Threading.Thread::Sleep(int32)
IL_0036: nop
IL_0037: ldstr "weakRef.IsAlive == {0}"
IL_003c: ldloc.0
IL_003d: callvirt instance bool [mscorlib]System.WeakReference::get_IsAlive()
IL_0042: box [mscorlib]System.Boolean
IL_0047: call void [mscorlib]System.Console::WriteLine(string, object)
IL_004c: nop
IL_004d: ldstr "Leaving the program"
IL_0052: call void [mscorlib]System.Console::WriteLine(string)
IL_0057: nop
IL_0058: ret
}
```
And this is the IL generated using the Release configuration:
```
.method private hidebysig static void Main (
string[] args
) cil managed
{
.entrypoint
.locals init (
[0] class [mscorlib]System.WeakReference weakRef
)
IL_0000: newobj instance void _GC.Program/TestClass::.ctor()
IL_0005: newobj instance void [mscorlib]System.WeakReference::.ctor(object)
IL_000a: stloc.0
IL_000b: ldstr "Leaving the block"
IL_0010: call void [mscorlib]System.Console::WriteLine(string)
IL_0015: ldstr "GC.Collect()"
IL_001a: call void [mscorlib]System.Console::WriteLine(string)
IL_001f: call void [mscorlib]System.GC::Collect()
IL_0024: ldc.i4 1000
IL_0029: call void [mscorlib]System.Threading.Thread::Sleep(int32)
IL_002e: ldstr "weakRef.IsAlive == {0}"
IL_0033: ldloc.0
IL_0034: callvirt instance bool [mscorlib]System.WeakReference::get_IsAlive()
IL_0039: box [mscorlib]System.Boolean
IL_003e: call void [mscorlib]System.Console::WriteLine(string, object)
IL_0043: ldstr "Leaving the program"
IL_0048: call void [mscorlib]System.Console::WriteLine(string)
IL_004d: ret
}
```
Notice how in the Debug build, the `TestClass` instance is retained as a local throughout the entire method:
```
.entrypoint
.locals init (
[0] class [mscorlib]System.WeakReference weakRef,
[1] class _GC.Program/TestClass obj
)
```
The fact that you declared that variable in a nested scope in the C# code is irrelevant, because the IL code doesn't have an equivalent notion of nested scopes. So, the variable is declared as a local of the entire method either way.
Also notice how if you manually perform this change in your C# code (local variable inlining):
```
WeakReference weakRef;
{
weakRef = new WeakReference(new TestClass());
Console.WriteLine("Leaving the block");
}
```
Then the IL of the Debug build skips the local declaration as well, matching the Release configuration:
```
.method private hidebysig static void Main (
string[] args
) cil managed
{
.entrypoint
.locals init (
[0] class [mscorlib]System.WeakReference weakRef
)
```
And similarly, the Debug configuration output matches the output of the Release configuration as well:
```
Leaving the block
GC.Collect()
~TestClass()
weakRef.IsAlive == False
Leaving the program
```
Obviously, the reason for this is that part of the optimizations that the C# compiler performs when building using the Release configuration is to automatically inline local variables wherever possible. And that's where the different behavior kicks in.
|
Classes Hierarchy and Casting between Objects
As far as I know, **it's not possible to cast an object of a superclass into an object of a subclass**. This will compile, but during runtime it will produce an error.
More specifically, given the following hierarchy of classes and interfaces:
- Alpha is a superclass for Beta and Gamma
- Gamma is a superclass for Delta and Omega
- Interface "In" is implemented by Beta and Delta
In this scenario i define the following code:
```
Delta r;
Gamma q;
```
Is this correct?
```
r = (Delta) q;
```
Can I cast q to type Delta even if Delta is a subclass of Gamma?
I think this isn't possible, but my textbook says otherwise. I have already searched a lot, and according to [this](https://stackoverflow.com/questions/4862960/explicit-casting-from-super-class-to-subclass) I'm right and this is an error in the textbook.
**Am I missing anything?**
|
This is legal:
```
Gamma q = new Delta();
Delta d = (Delta)q;
```
This will compile but will give you a runtime error:
```
Gamma q = new Gamma();
Delta d = (Delta)q;
```
In the first case, `q` is a `Delta`, so you can cast it to a `Delta`. In the second case, `q` is a `Gamma`, so you cannot cast it to a `Delta`.
A `Gamma` *variable* can refer to a `Gamma` *object* or to an object that is a subclass of `Gamma`, e.g. a `Delta` object; when you cast a `Gamma` to a `Delta` then you are telling the compiler that the `Gamma` variable refers to a `Delta` object or to an object that is a subclass of `Delta` (and if you're wrong then you'll get a `ClassCastException`). The types of the objects themselves are immutable - you cannot change the type of a `Gamma` object to a `Delta` object at runtime, so if a `Gamma` variable actually refers to a `Gamma` object but you then try to cast it to a `Delta` object then you'll get a runtime exception.
|
@include font-face SCSS issue
While trying to have my SCSS import some fonts I encountered the following:
I exactly copied [the docs from the compass website](http://compass-style.org/examples/compass/css3/font-face/), but when the CSS is being compiled Compass adds random numbers behind my `src` URLs. The SCSS code I wrote and the resulting CSS looks like this
**SCSS**
```
@include font-face("NexaBold", font-files("nexa_bold-webfont.woff", "nexa_bold-webfont.ttf", "nexa_bold-webfont.svg", "nexa_bold-webfont.eot"));
```
**Resulting CSS**
```
@font-face {
font-family: "NexaBold";
src: url('/fonts/nexa_bold-webfont.woff?1417439024') format('woff'), url('/fonts/nexa_bold-webfont.ttf?1417439024') format('truetype'), url('/fonts/nexa_bold-webfont.svg?1417439024') format('svg'), url('/fonts/nexa_bold-webfont.eot?1417439024') format('embedded-opentype');
}
```
Thanks!
|
The random numbers are added as cache busters: browsers cache fonts based on the URL, so Compass appends a timestamp that changes every time you compile your code, which forces browsers to download the fonts again after each compile.
I have Visual Studio 2013 and compiled your code with Sass; the result is:
```
@font-face {
font-family: "NexaBold";
src: font-files("nexa_bold-webfont.woff", "nexa_bold-webfont.ttf", "nexa_bold-webfont.svg", "nexa_bold-webfont.eot"); }
```
and here is my compass source for `font-face mixin`:
```
@mixin font-face(
$name,
$font-files,
$eot: false,
$weight: false,
$style: false
) {
$iefont: unquote("#{$eot}?#iefix");
@font-face {
font-family: quote($name);
@if $eot {
src: font-url($eot);
$font-files: font-url($iefont) unquote("format('eot')"), $font-files;
}
src: $font-files;
@if $weight {
font-weight: $weight;
}
@if $style {
font-style: $style;
}
}
}
```
As you can see, my version of the Compass mixin doesn't add any random number at the end of the file path.
I would suggest using `@font-face` without Compass; use the code below:
```
@font-face {
font-family: 'IranSans';
src: url('/css/fonts/IranSans.eot'); /* IE9 Compat Modes */
src: url('/css/fonts/IranSans.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */
url('/css/fonts/IranSans.woff') format('woff'), /* Modern Browsers */
url('/css/fonts/IranSans.ttf') format('truetype'), /* Safari, Android, iOS */
url('/css/fonts/IranSans.svg') format('svg'); /* Legacy iOS */
}
```
|
Add multiple entities to JavaScript namespace from different files
Given a namespaces `ns` used in two different files:
### abc.js
```
ns = ns || (function () {
foo = function() { ... };
return {
abc : foo
};
}());
```
### def.js
```
// is this correct?
ns = ns || {}
ns.def = ns.def || (function () {
defoo = function () { ... };
return {
deFoo: defoo
};
}());
```
Is this the proper way to add `def` to the `ns` namespace? In other words, how does one merge two contributions to a namespace in javascript?
If `abc.js` comes before `def.js` I'd expect this to work. If `def.js` comes before `abc.js` I'd expect `ns.abc` to not exist because `ns` is defined at the time.
It seems there ought to be a design pattern to eliminate any uncertainty of doing inclusions with the javascript namespace pattern.
I'd appreciate thoughts and input on how best to go about this sort of 'inclusion'.
Thanks for reading.
Brian
|
That would certainly work. Keep in mind, though, that source order will affect your implementation: if def.js ever winds up included before abc.js, your definition of foo and ns.abc will never be executed.
Take a look at [YUI's old namespace function for an example](http://github.com/yui/yui2/blob/master/src/yahoo/js/YAHOO.js#L98): they make sure to either use the existing object or initialize a new one, probably for that reason above.
It might very well help you to keep your modules separate with something like this:
```
ns = ns || {};
ns.abc = (function () { ... }());
```
and
```
ns = ns || {};
ns.def = (function () { ... }());
```
That way, each is a separate module, source order doesn't matter, and each has access to its own closure as you have in your example.
|
can't get tornado staticfilehandler to work
Why doesn't this work:
```
application = tornado.web.Application([(r"/upload.html",tornado.web.StaticFileHandler,\
    {"path":r"../web/upload.html"}),])

if __name__ == "__main__":
    print "listening"
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```
Hitting
```
http://localhost:8888/upload.html throws:
TypeError: get() takes at least 2 arguments (1 given)
ERROR:tornado.access:500 GET /upload.html (::1) 6.47ms
```
I have tried to search across the internet but it seems like my usage is totally correct.
So I can't find why it is not working. Most of the examples on the internet are about giving a static handler for a complete directory. So is it the case, that it does not work for individual files?
|
You have two options to fix this error. The `TypeError` occurs because `StaticFileHandler.get()` expects the requested path to be captured by a group in the URL pattern, and your pattern has no capture group.
1. Serve all the files of the `../web/` directory by capturing the path in the URL pattern; `StaticFileHandler` is meant for directories, not single files.
```
application = tornado.web.Application([(r"/(.*)", \
                                        tornado.web.StaticFileHandler, \
                                        {"path":r"../web/"}),])
```
2. Render the HTML file as a template. You need to create a handler for each HTML file.
```
import tornado.web
import tornado.httpserver
import tornado.ioloop

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/upload.html", MainHandler)
        ]
        settings = {
            "template_path": "../web/",
        }
        tornado.web.Application.__init__(self, handlers, **settings)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render("upload.html")

def main():
    application = Application()
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
```
|
Why can I set [enumerability and] writability of unconfigurable property descriptors?
<https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/defineProperty> states:
>
> **configurable**:
> True if and only if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. Defaults to `false`.
>
>
>
So, I have a
```
var x = Object.defineProperty({}, "a", {
    value:true,
    writable:true,
    enumerable:true,
    configurable:false
});
```
Now I can play with `x.a = false`, `for(i in x)` etc. But even though the descriptor ~~is~~ should be unconfigurable, I can do
```
Object.defineProperty(x, "a", {writable:true}); // others defaulting to false
Object.defineProperty(x, "a", {}); // everything to false
Object.freeze(x); // does the same to the descriptors
```
The other way round, setting them to true again, or trying to define an accessor descriptor, raises errors now. To be exact: `Object.defineProperty: invalid modification of non-configurable property`.
Why can I "downgrade" descriptors though they say they were non-configurable?
|
First, even when `configurable` is `false`, `writable` can be changed from `true` to `false`. This is the only attribute change allowed when `configurable` is `false`. This transition was allowed because some built-in properties including (most notably) the `length` property of arrays (including `Array.prototype`) are specified to be `writable: true, configurable: false`. This is a legacy of previous ECMAScript editions. If `configurable: false` prevented changing `writable` from `true` to `false` then it would be impossible to freeze arrays.
`Object.defineProperty` doesn't work quite like you're assuming. In particular, how it processes the property descriptor differs depending upon whether or not the property already exists. If a property does not exist, the descriptor is supposed to provide a definition of all attributes, so any missing attributes in the descriptor are assigned default values before the descriptor is used to create the property. However, for an already existing property the descriptor is taken as a set of delta changes from the current attribute settings of the property. Attributes that are not listed in the descriptor are not changed. Also, an attribute that has the same value in the delta descriptor as the current property attribute value is considered no change. So the following are all legal:
```
Object.defineProperty(x, "a", {writable:false}); // can always change writable to false;
                                                 // other attributes not changed
Object.defineProperty(x, "a", {}); // no attributes, so nothing changes
Object.freeze(x); // same as Object.defineProperty(x, "a", {writable:false});
Object.defineProperty(x, "a", {enumerable:true, configurable:false}); // no change
```
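A quick sketch of the one-way `writable` transition described above — the downgrade succeeds, but trying to flip `writable` back to `true` is rejected (`Object.defineProperty` throws on an invalid redefinition whether or not you're in strict mode):

```javascript
var x = Object.defineProperty({}, "a", {
    value: true, writable: true, enumerable: true, configurable: false
});

// Allowed: writable may go from true to false even when configurable is false.
Object.defineProperty(x, "a", { writable: false });

// Not allowed: the reverse transition throws a TypeError.
var threw = false;
try {
    Object.defineProperty(x, "a", { writable: true });
} catch (e) {
    threw = e instanceof TypeError;
}
console.log(Object.getOwnPropertyDescriptor(x, "a").writable); // prints "false"
console.log(threw); // prints "true"
```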
|
Why do here-documents attempt shell substitution even on a commented line?
I wanted to see how some ASCII art looked in terminal so:
```
$ cat <<EOF
> # ____ _
> # _ _ / __/ ___ _ | |_
> # | | | |/ / / _` || __|
> # | |_| |\ \__ (_| || |_
> # | _,_| \___\ \___,_| \__|
> # |_/
> #
> EOF
bash: bad substitution: no closing "`" in ` || __|
# | |_| |\ \__ (_| || |_
# | _,_| \___\ \___,_| \__|
# |_/
#
```
The `#` octothorpes were there perchance, but now I'm confused.
```
$ cat <<EOF
> # echo hi
> EOF
# echo hi
```
As expected.
However:
```
$ cat <<EOF
> # `echo hello`
> EOF
# hello
```
So `bash` gets at expanding `` ` `` and `$( )` before `cat` does, but it doesn't care about `#` comments? What's the explanation behind this behaviour?
|
This is more general than bash. In POSIX shell, your `EOF` is referred to as a *word*, in the discussion of [here-documents](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_07_04):
>
> If no characters in *word* are quoted, all lines of the here-document shall be expanded for parameter expansion, command substitution, and arithmetic expansion. In this case, the *`<backslash>`* in the input behaves as the *`<backslash>`* inside double-quotes (see Double-Quotes). However, the double-quote character ( `'"'` ) shall not be treated specially within a here-document, except when the double-quote appears within `"$()"`, ``` "``" ```, or `"${}"`.
>
>
>
Quoting is done using single-, double-quotes or the backslash character. POSIX mentions the here-documents in the discussion of [quoting](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02):
>
> The various quoting mechanisms are the escape character, single-quotes, and double-quotes. The here-document represents another form of quoting; see [Here-Document](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_07_04).
>
>
>
The key to understanding the lack of treatment of `#` characters is the definition for here-documents:
>
> allow *redirection* of lines contained in a shell input file
>
>
>
That is, no meaning (other than possible parameter expansion, etc) is given to the data by the shell, because the data is *redirected* to another program: `cat`, which is not a shell interpreter. If you redirected to a *shell program*, the result would be whatever the shell could do with the data.
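A corollary worth knowing: quoting any part of the delimiter word suppresses all of those expansions, so this is the easy way to get ASCII art (or anything else containing backticks) through a here-document verbatim:

```shell
# Unquoted delimiter: the backticks are expanded before cat sees the text.
cat <<EOF
# `echo hello`
EOF

# Quoted delimiter: the body is passed through literally.
cat <<'EOF'
# `echo hello`
EOF
```

The first here-document prints `# hello`, the second prints ``# `echo hello` `` unchanged.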
|
C# Data Layer and Dto's
I have recently joined a company that uses typed datasets as their 'Dto'. I think they are really rubbish and want to change to something a little more modern and user friendly. So, I am trying to update the code so that the data layer is more generic, i.e. using interfaces etc. The other guy does not know what a Dto is, and we are having a slight disagreement about how it should be done.
Without trying to sway people to my way of thinking, I would like to get impartial answers from you people as to what layers the Dto can be present in. All layers; DAL, BL and Presentation or a small sub set within these layers only.
Also, whether IList objects should or should not be present in the DAL.
Thanks.
|
It really depends on your architecture.
For the most part you should try to code to interfaces; then it doesn't really matter what your implementation is. If you return ISomething it could be your SomethingEntity or your SomethingDTO, but your consuming code couldn't care less as long as it implements the interface.
You should be returning an IList/ICollection/IEnumerable rather than a concrete collection or array.
- [Properties should not return arrays](http://msdn.microsoft.com/en-us/library/0fss9skc(VS.80).aspx)
- [Do not expose generic lists](http://msdn.microsoft.com/en-US/library/ms182142(v=VS.80).aspx)
What you should try to do first is separate your code and make it loosely coupled by inserting some interfaces between your layers such as a repository for your DataAccess layer. Your repository then returns your entities encapsulated by an interface. This will make your code more testable and allow you to mock more easily. Once you have your tests in place you can then start to change the implementations with less risk.
If you do start to use interfaces I would suggest integrating an IoC such as Windsor sooner rather than later. If you do it from the get go it will make things easier later on.
|
Flow of class loading for a simple program
I am just now beginning to learn the internal architecture of Java. I have roughly understood the concept of class loading: the required classes are loaded when the `jvm` runs, a `ClassNotFoundException` is thrown when a class is not found, and a specific class loader loads the classes referenced by a class.
Can someone please explain clearly the flow of class loading i.e. the sequence of bootstrap class loading and user-defined class loading in the sample Java code below.
```
import java.io.File;
public class Sample
{
public static void main(String[] args)
{
String fileName = "sample";
File file = new File(fileName);
file.isFile();
}
}
```
Also I learnt from a reference material that "`classloader` maintains the namespaces of the classes it loads". By namespaces, does that mean the literal names of the class? Also can someone please explain the implication/advantage of that?
|
You will run your `Sample` class as follows
`> java Sample`
For a little magic, check out the output of the `-verbose:class` option and you'll see tons of lines like the following:
```
[Opened C:\jdk1.6.0_14\jre\lib\rt.jar]
[Loaded java.lang.Object from C:\jdk1.6.0_14\jre\lib\rt.jar]
[Loaded java.io.Serializable from C:\jdk1.6.0_14\jre\lib\rt.jar]
[Loaded java.lang.Comparable from C:\jdk1.6.0_14\jre\lib\rt.jar]
.
.
.
.
.
.
[Loaded java.security.cert.Certificate from C:\jdk1.6.0_14\jre\lib\rt.jar]
[Loaded Sample from file:/D:/tmp/]
[Loaded java.lang.Shutdown from C:\jdk1.6.0_14\jre\lib\rt.jar]
[Loaded java.lang.Shutdown$Lock from C:\jdk1.6.0_14\jre\lib\rt.jar]
```
You see a bunch of classes from `\jre\lib\rt.jar` being loaded by the `Bootstrap` (or primordial) class loader, well before your class. These are the prerequisites for running any Java program, hence they are loaded by Bootstrap.
Another set of jars is loaded by the `Extension` class loader. In this particular example, there was no need for any classes from the lib `\jre\lib\ext`, hence it's not loaded. But the Extension class loader is specifically assigned the task of loading classes from the extension lib.
**EDIT:** Apart from the standard platform java classes Sun/Oracle also provide a set of jars which are used to **extend the platform's core API**. The jars placed in the extension lib folder are automatically placed in the classpath and hence not needed to be included in classpath explicitly. Here is [nice official article](http://docs.oracle.com/javase/tutorial/ext/basics/install.html) on the same topic.
Finally, your class `Sample` is loaded by `Application` class loader after Bootstrap and Extension have finished loading.
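You can also observe the delegation from code: classes loaded by the bootstrap loader report `null` from `getClassLoader()`, while your own class reports the application (system) class loader. A minimal sketch (the class name here is mine):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // java.lang.String comes from the bootstrap loader,
        // which is represented as null.
        System.out.println(String.class.getClassLoader());

        // Our own class is loaded by the application (system) class loader.
        System.out.println(LoaderDemo.class.getClassLoader()
                == ClassLoader.getSystemClassLoader());
    }
}
```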
|
using "using" too much?
Given the following code, I see no more need for the finally block for closing the reader or the connection (if it were still available). Are there any benefits or disadvantages to using so many nested "using" statements? Or shall I go the finally block route?
```
List<string> platforms = new List<string>();
NpgsqlDataReader reader = null;
try
{
    using (NpgsqlConnection conn = new NpgsqlConnection(GetConnectionString()))
    {
        // Making connection with Npgsql provider
        string sql = @"SELECT platforms.""name"" FROM public.""platforms""";
        using (NpgsqlCommand command = new NpgsqlCommand(sql))
        {
            command.Connection = conn;
            command.CommandType = System.Data.CommandType.Text;
            conn.Open();
            using (reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    platforms.Add((string)reader["name"].ToString());
                }
            }
        }
    }
}
catch (Exception err)
{
    HandleError(err, "GetPlatforms");
}
finally
{
    platforms = null;
    if (!reader.IsClosed)
    {
        reader.Close();
    }
}
```
|
It ensures the release of resources when the using block is finished. Per [MSDN](http://msdn.microsoft.com/en-us/library/yh598w02%28v=vs.80%29.aspx):
>
> The using statement allows the programmer to specify when objects
> that use resources should release
> them. The object provided to the using
> statement must implement the
> IDisposable interface. This interface
> provides the Dispose method, which
> should release the object's resources.
>
>
> A using statement can be exited either when the end of the using
> statement is reached or if an
> exception is thrown and control leaves
> the statement block before the end of
> the statement.
>
>
>
I do not see anything wrong with the multiple `using` statement blocks you have listed in your code. They ensure the resources are released, and that way the programmer does not forget.
If you do not like the indentation, then you can re-write it something like this:
```
using (StreamWriter w1 = File.CreateText("W1"))
using (StreamWriter w2 = File.CreateText("W2"))
{
// code here
}
```
See also this SO question on [nested using statements in C#](https://stackoverflow.com/questions/1329739/nested-using-statements-in-c)
|
HTTP BOSH and HTTP pipelining
I am confused about HTTP BOSH and HTTP Pipelining.
BOSH spec: <http://xmpp.org/extensions/xep-0124.html>
>
> If the client needs to send some data to the connection manager then
> it simply sends a second request containing the data. **Unfortunately
> most constrained clients do not support HTTP Pipelining** (concurrent
> requests over a single connection), so the client typically needs to
> send the data over a second HTTP connection.
>
>
>
And this specification contains a lot of info about HTTP pipelining, but from
<http://en.wikipedia.org/wiki/HTTP_pipelining>
>
> Non-idempotent methods like POST should not be pipelined.
>
>
>
And in BOSH spec:
>
> All information is encoded in the body of standard HTTP POST requests
> and responses. Each HTTP body contains a single wrapper which
> encapsulates the XML elements being transferred (see Wrapper
> Element).
>
>
>
So how can be HTTP pipelining used with HTTP BOSH??
|
The [HTTP RFC](http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.2.2) says
>
> Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2).
>
>
>
"SHOULD NOT" in this context has the meaning given by [RFC 2119](http://www.ietf.org/rfc/rfc2119.txt), namely,
>
> This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label
>
>
>
What this means is that, in general, it is not recommended to use HTTP pipelining in conjunction with `POST` requests (this being in line with [RFC 2616's](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5) notion of `POST`); however, the HTTP protocol does not actually forbid it. If it actually forbade the behavior, RFC 2616 would use the language "MUST NOT". The authors of the `BOSH` spec made a judgement that, in the case of `BOSH`, there are no adverse effects to pipelining `POST` requests.
|
Fast way to count neighbouring points in 3D array
I have code where I want to loop over all points in a grid and, for each point, check if a given condition holds for a sufficient number of neighbouring points. Additionally, I have periodic boundaries on the grid.
The problem is very similar to the Game of Life.
My current code looks something like this
```
do k=1,ksize; do j=1,jsize; do i=1,isize ! Loop over all points
  ncount = 0
  kkloop: do kk=k-1,k+1 ! Loop over neighbours
    ktmp = kk
    if(kk>ksize) ktmp = 1 ! Handle periodic boundary
    if(kk<1) ktmp = ksize
    do jj=j-1,j+1
      jtmp = jj
      if(jj>jsize) jtmp = 1
      if(jj<1) jtmp = jsize
      do ii=i-1,i+1
        if(ii == i .and. jj == j .and. kk == k) cycle ! Skip self
        itmp = ii
        if(ii>isize) itmp = 1
        if(ii<1) itmp = isize
        if(grid(itmp,jtmp,ktmp)) ncount = ncount + 1 ! Check condition for neighbour
        if(ncount > threshold) then ! Enough neighbours with condition?
          call do_stuff(i,j,k)
          exit kkloop
        end if
      end do
    end do
  end do kkloop
end do; end do; end do
```
This is neither elegant, nor probably very efficient. Is there a better way to do this? This code will be repeated a lot, so I would like to make it as fast as possible.
|
I'll work this out in 2D and leave it to you to inflate to 3D.
The first thing I'd do is pad the array with a halo of depth equal to the depth of the neighbourhood you are interested in. So, if your array is declared as, say
```
real, dimension(100,100) :: my_array
```
and you are interested in the 8 immediate neighbours of each cell,
```
real, dimension(0:101,0:101) :: halo_array
.
.
.
halo_array(1:100,1:100) = my_array
halo_array(0,:) = my_array(100,:)
! repeat for each border, mutatis mutandis
```
This will save a lot of time checking for the boundary and will be worth doing whether or not you follow the next suggestion. You could do this 'in place' if you like, I mean just expand `my_array` rather than copy it.
For an elegant solution you could write something like this
```
forall (i=1:100,j=1:100)
    if (logical_function_of(my_array(i-1,j),my_array(i+1,j),my_array(i,j-1),my_array(i,j+1),...)) then
        do_stuff(my_array(i,j))
    end if
end forall
```
Here, `logical_function_of()` returns true when the neighbourhood of `my_array(i,j)` satisfies your criteria. I got tired after listing the N,S,E,W neighbours and for production code I'd probably write this as a function of the indices anyway. In my experience `forall` is elegant (to some) but not as high-performing as nested loops.
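For readers less fluent in Fortran, the same halo idea can be sketched in plain Python for a 2D boolean grid (function names are mine); once the padded copy exists, the neighbour loop needs no boundary checks at all, exactly as in the answer:

```python
def add_halo(grid):
    """Copy a 2D boolean grid into an array padded with a periodic halo of depth 1."""
    n, m = len(grid), len(grid[0])
    padded = [[False] * (m + 2) for _ in range(n + 2)]
    for i in range(n):
        for j in range(m):
            padded[i + 1][j + 1] = grid[i][j]
    for i in range(1, n + 1):           # wrap columns for the interior rows
        padded[i][0] = padded[i][m]
        padded[i][m + 1] = padded[i][1]
    for j in range(m + 2):              # wrap rows (this also fills the corners)
        padded[0][j] = padded[n][j]
        padded[n + 1][j] = padded[1][j]
    return padded

def count_neighbours(padded, i, j):
    """Count true neighbours of interior cell (i, j); no boundary checks needed."""
    return sum(padded[i + di][j + dj]
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if not (di == 0 and dj == 0))

halo = add_halo([[True, False], [False, True]])
print(count_neighbours(halo, 1, 1))  # prints 4: every diagonal wraps onto a True cell
```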
|
How to get current URL in python web page?
I am a noob in Python. I just installed it, and spent 2 hours googling how to get at a simple parameter sent in the URL to a Python script.
Found [this](https://stackoverflow.com/questions/5074803/retrieving-parameter-from-url-in-python)
Very helpful, except I cannot for anything in the world to figure out how to replace
```
import urlparse
url = 'http://foo.appspot.com/abc?def=ghi'
parsed = urlparse.urlparse(url)
print urlparse.parse_qs(parsed.query)['def']
```
With what do I replace url = 'string' to make it work?
I just want to access <http://site.com/test/test.py?param=abc> and see abc printed.
---
Final code after Alex's answer:
```
import os
import urlparse

url = os.environ["REQUEST_URI"]
parsed = urlparse.urlparse(url)
print urlparse.parse_qs(parsed.query)['param']
```
|
If you don't have any libraries to do this for you, you can construct your current URL from the HTTP request that gets sent to your script via the browser.
The headers that interest you are `Host` and whatever's after the HTTP method (probably `GET`, in your case). [Here are some more explanations](http://djce.org.uk/dumprequest) (first link that seemed ok, you're free to Google some more :).
[This answer](https://stackoverflow.com/a/2677423/182629) shows you how to get the headers in your CGI script:
>
> If you are running as a CGI, you can't read the HTTP header directly,
> but the web server put much of that information into environment
> variables for you. You can just pick it out of `os.environ[]`.
>
>
>
If you're doing this as an exercise, then it's fine because you'll get to understand what's behind the scenes. If you're building anything reusable, I recommend you use libraries or a framework so you don't reinvent the wheel every time you need something.
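One caveat for newer setups: the question's snippets are Python 2. In Python 3 the `urlparse` module moved to `urllib.parse`, so the equivalent of the final code would look roughly like this (the `REQUEST_URI` value is faked here for illustration; a real web server sets it for you):

```python
import os
from urllib.parse import urlparse, parse_qs

# A real CGI environment would already contain this variable.
os.environ.setdefault("REQUEST_URI", "/test/test.py?param=abc")

parsed = urlparse(os.environ["REQUEST_URI"])
params = parse_qs(parsed.query)
print(params["param"][0])  # prints "abc"
```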
|
error: incompatible types in assignment of 'char\*' to 'char [20]'
I'm new to this C++ environment and I am having difficulty with my constructor.
Here is my code:
```
class Student {
    char name[20];
    long number;
public:
    Student (char nm[20], long val) :
        name(nm), number(val) {}
};
```
When I compile it, it gives me an error saying incompatible types in assignment of `char*` to `char[20]`.
How can I fix this?
|
Your constructor argument `nm` is, actually, *not* an array! Yes, I know it looks like one, because you wrote `char nm[20]`. But, actually, it's `char* nm`. That translation is performed automatically when you write an array type in a function parameter list. Yes, it's stupid. Blame C.
So, the error message is telling you that you cannot assign a pointer to an array. Fair enough. Doesn't really matter anyway, since the language *also* doesn't let you assign arrays to arrays. Lol.
This is why, since 1998, we've had `std::string` to fix all these terrible problems:
```
class Student {
    std::string name;
    long number;
public:
    Student (std::string nm, long val) :
        name(nm), number(val) {}
};
```
If you *must* use an array, you can do this:
```
class Student {
    std::array<char, 20> name;
    long number;
public:
    Student (std::array<char, 20> nm, long val) :
        name(nm), number(val) {}
};
```
because `std::array`, introduced in 2011, is a handy wrapper around raw arrays that *can* be assigned (and don't have that weird decay to pointers!).
"Ah, but my teacher told me to use raw arrays," I hear you say. A shame, but we can fix that too. Either accept the array by reference, or take in a pointer to it (as you're doing now! but this drops the dimension `20` from the type and makes things unsafe -.-) and do a manual copy of each element from the source to the destination. Certainly not ideal, but it may be what your teacher is expecting if this is homework:
```
class Student {
    char name[20];
    long number;
public:
    Student (char (&nm)[20], long val) :
        number(val)
    {
        assert(sizeof(nm) == sizeof(name));
        std::copy(std::begin(nm), std::end(nm), std::begin(name));
    }
};
```
```
class Student {
    char name[20];
    long number;
public:
    Student (char* nm, long val) :
        number(val)
    {
        // just have to hope that the input is a pointer to **20** elements!
        std::copy(nm, nm+20, name);
    }
};
```
|
How to list JBoss AS 7 datasource properties in Java code?
I'm running JBoss AS 7.1.0.CR1b. I've got several datasources defined in my standalone.xml e.g.
```
<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:/MyDS" pool-name="MyDS_Pool" enabled="true" use-java-context="true" use-ccm="true">
            <connection-url>some-url</connection-url>
            <driver>the-driver</driver>
            [etc]
```
Everything works fine.
I'm trying to access the information contained here within my code - specifically the `connection-url` and `driver` properties.
I've tried getting the Datasource from JNDI, as normal, but it doesn't appear to provide access to these properties:
```
// catches removed
InitialContext context;
DataSource dataSource = null;
context = new InitialContext();
dataSource = (DataSource) context.lookup(jndi);
```
ClientInfo and DatabaseMetadata from a Connection object from this Datasource also don't contain these granular, JBoss properties either.
My code will be running inside the container with the datasource specified, so all should be available. I've looked at the IronJacamar interface `org.jboss.jca.common.api.metadata.ds.DataSource`, and its implementing class, and these seem to have accessible hooks to the information I require, but I can't find any information on how to create such objects from these already deployed resources within the container (the only constructor on the impl involves inputting all properties manually).
JBoss AS 7's Command-Line Interface allows you to navigate and list the datasources as a directory system. <http://www.paykin.info/java/add-datasource-programaticaly-cli-jboss-7/> provides an excellent post on how to use what I believe is the Java Management API to interact with the subsystem, but this appears to involve connecting to the target JBoss server. My code is already running *within* that server, so surely there must be an easier way to do this?
Hope somebody can help. Many thanks.
|
What you're really trying to do is a management action. The best way is to use the management APIs that are available.
Here is a simple standalone example:
```
public class Main {
    public static void main(final String[] args) throws Exception {
        final List<ModelNode> dataSources = getDataSources();
        for (ModelNode dataSource : dataSources) {
            System.out.printf("Datasource: %s%n", dataSource.asString());
        }
    }

    public static List<ModelNode> getDataSources() throws IOException {
        final ModelNode request = new ModelNode();
        request.get(ClientConstants.OP).set("read-resource");
        request.get("recursive").set(true);
        request.get(ClientConstants.OP_ADDR).add("subsystem", "datasources");
        ModelControllerClient client = null;
        try {
            client = ModelControllerClient.Factory.create(InetAddress.getByName("127.0.0.1"), 9999);
            final ModelNode response = client.execute(new OperationBuilder(request).build());
            reportFailure(response);
            return response.get(ClientConstants.RESULT).get("data-source").asList();
        } finally {
            safeClose(client);
        }
    }

    public static void safeClose(final Closeable closeable) {
        if (closeable != null) try {
            closeable.close();
        } catch (Exception e) {
            // no-op
        }
    }

    private static void reportFailure(final ModelNode node) {
        if (!node.get(ClientConstants.OUTCOME).asString().equals(ClientConstants.SUCCESS)) {
            final String msg;
            if (node.hasDefined(ClientConstants.FAILURE_DESCRIPTION)) {
                if (node.hasDefined(ClientConstants.OP)) {
                    msg = String.format("Operation '%s' at address '%s' failed: %s", node.get(ClientConstants.OP), node.get(ClientConstants.OP_ADDR), node.get(ClientConstants.FAILURE_DESCRIPTION));
                } else {
                    msg = String.format("Operation failed: %s", node.get(ClientConstants.FAILURE_DESCRIPTION));
                }
            } else {
                msg = String.format("Operation failed: %s", node);
            }
            throw new RuntimeException(msg);
        }
    }
}
```
The only other way I can think of is to add a module that relies on the server's internals. It could be done, but I would probably use the management API first.
|
How to vertically align a UILabel used as a leftView in a UITextField with the textField's text?
I'm using a `UILabel` as the `leftView` of a `UITextField`. The issue is that the textField's text is higher than the label's.
This is the code I've used so far
```
UILabel *startsWith = [[UILabel alloc] init];
startsWith.font = [UIFont systemFontOfSize:14];
startsWith.textColor = [UIColor blackColor];
startsWith.backgroundColor = [UIColor clearColor];
startsWith.text = @"Text";
[startsWith sizeToFit];
self.textField.leftViewMode = UITextFieldViewModeAlways;
self.textField.leftView = startsWith;
```
I've tried slightly changing the label's frame but it didn't work...
How can I align both texts?
|
You could create a container view in which you position the `UILabel` 1px up.
```
UIView * v = [[UIView alloc] init];
v.backgroundColor = [UIColor clearColor];
UILabel *startsWith = [[UILabel alloc] init];
startsWith.font = self.textfield.font;
startsWith.textAlignment = self.textfield.textAlignment;
startsWith.textColor = [UIColor blackColor];
startsWith.backgroundColor = [UIColor clearColor];
startsWith.text = @"Text";
[startsWith sizeToFit];
startsWith.frame = CGRectOffset(startsWith.frame, 0, -1);
v.frame = startsWith.frame;
[v addSubview:startsWith];
self.textfield.leftViewMode = UITextFieldViewModeAlways;
self.textfield.leftView = v;
```
|
How to change language in mapbox
I need to change the language of the mapbox in javascript, I only see the below code in the documentation
```
map.setLayoutProperty('country-label', 'text-field', ['get', 'name_de'])
```
but this line of code will only change the country names; I need everything (city, town, etc.) changed as well.
|
Based on your mapbox style, there will be different text layers. For `dark-v9`, these are the available text layers.
```
country-label
state-label
settlement-label
settlement-subdivision-label
airport-label
poi-label
water-point-label
water-line-label
natural-point-label
natural-line-label
waterway-label
road-label
```
Using the code snippet that you mentioned in the question on the above layers, you should be able to change the language.
```
map.setLayoutProperty('country-label', 'text-field', ['get', 'name_de'])
```
Or you can use the [mapbox-language-plugin](https://github.com/mapbox/mapbox-gl-language) to change the language of all the possible layers. Here is a working example of the plugin.
```
<!DOCTYPE html>
<html>
<head>
<meta charset='utf-8' />
<title>Change a map's language</title>
<meta name='viewport' content='initial-scale=1,maximum-scale=1,user-scalable=no' />
<script src='https://api.tiles.mapbox.com/mapbox-gl-js/v1.4.1/mapbox-gl.js'></script>
<script src='https://api.mapbox.com/mapbox-gl-js/plugins/mapbox-gl-language/v0.10.1/mapbox-gl-language.js'></script>
<link href='https://api.tiles.mapbox.com/mapbox-gl-js/v1.4.1/mapbox-gl.css' rel='stylesheet' />
<style>
body { margin:0; padding:0; }
#map { position:absolute; top:0; bottom:0; width:100%; }
</style>
</head>
<body>
<style>
#buttons {
width: 90%;
margin: 0 auto;
}
.button {
display: inline-block;
position: relative;
cursor: pointer;
width: 20%;
padding: 8px;
border-radius: 3px;
margin-top: 10px;
font-size: 12px;
text-align: center;
color: #fff;
background: #ee8a65;
font-family: sans-serif;
font-weight: bold;
}
</style>
<div id='map'></div>
<ul id="buttons">
<li id='button-fr' class='button'>French</li>
<li id='button-ru' class='button'>Russian</li>
<li id='button-de' class='button'>German</li>
<li id='button-es' class='button'>Spanish</li>
</ul>
<script>
mapboxgl.accessToken = 'pk.eyJ1IjoibXVyYWxpcHJhamFwYXRpIiwiYSI6ImNrMHA1d3VjYzBna3gzbG50ZjR5b2Zkb20ifQ.guBaIUcqkTdYHX1R6CM6FQ';
var map = new mapboxgl.Map({
container: 'map',
style: 'mapbox://styles/mapbox/dark-v9',
center: [16.05, 48],
zoom: 2.9
});
mapboxgl.setRTLTextPlugin('https://api.mapbox.com/mapbox-gl-js/plugins/mapbox-gl-rtl-text/v0.1.0/mapbox-gl-rtl-text.js');
var mapboxLanguage = new MapboxLanguage({
defaultLanguage: 'en'
});
map.addControl(mapboxLanguage);
document.getElementById('buttons').addEventListener('click', function(event) {
var language = event.target.id.substr('button-'.length);
map.setStyle(mapboxLanguage.setLanguage(map.getStyle(), language));
});
</script>
</body>
</html>
```
Ref: <https://blog.mapbox.com/how-to-localize-your-maps-in-mapbox-gl-js-da4cc6749f47>
|
How should I put native C++ pointers in .NET generic collections?
In C++/CLI it is not possible to put pointers to native C++ classes in managed .NET generic collections, e.g.
```
class A {
public:
    int x;
};

public ref class B {
public:
    B()
    {
        A* a = GetPointerFromSomewhere();
        a->x = 5;
        listOfA.Add(a);
    }
private:
    List<A*> listOfA; // <-- compiler error (T must be value type or handle)
};
```
is not allowed. I could of course use `std::vector<A*> list;` but then I could only make `list` a member of a managed class by using a pointer and it feels unnatural to use pointers to STL containers.
What is a good way to store native C++ pointers in .NET generics? (I'm not interesting in resource management here; the object the pointer points to is managed elsewhere)
|
The method I've been using is to wrap the pointer in a managed value class, and then overload the dereferencing operators:
```
template<typename T>
public value class Wrapper sealed
{
public:
    Wrapper(T* ptr) : m_ptr(ptr) {}

    static operator T*(Wrapper<T>% instance) { return instance.m_ptr; }
    static operator const T*(const Wrapper<T>% instance) { return instance.m_ptr; }
    static T& operator*(Wrapper<T>% instance) { return *(instance.m_ptr); }
    static const T& operator*(const Wrapper<T>% instance) { return *(instance.m_ptr); }
    static T* operator->(Wrapper<T>% instance) { return instance.m_ptr; }
    static const T* operator->(const Wrapper<T>% instance) { return instance.m_ptr; }

    T* m_ptr;
};
```
I can then use the pointer naturally as follows:
```
public ref class B {
public:
B()
{
A* a = GetPointerFromSomewhere();
a->x = 5;
list.Add(Wrapper<A>(a));
}
private:
List<Wrapper<A>> listOfA;
};
```
Any improvements welcomed...
|
Start new R package development on github
How do I create new repository on github using devtools in RStudio? I've tried to:
1. Create empty repository on github named "MyNewRPackage"
2. Started new project in RStudio using ssh connection to my git repository
3. Installed and loaded devtools
Then I thought I will use `create("MyNewRPackage")` to initialize directory structure and `README.md` file. But the package skeleton is created as subfolder of my project and I have `~/MyNewRPackage/MyNewRPackage/R`. But I need to create package skeleton in the root folder of my github repository.
What is the standard way to start new R package development on github using devtools and RStudio?
|
Hope this helps someone:
1. Create empty repository on github (I will use name `rpackage` in this example)
2. Create package locally using devtools, `create("rpackage")` (this will create rpackage folder)
3. Create new project in RStudio (Create project from: Existing directory) and choose `rpackage` directory
4. In RStudio go to Tools/Shell... and type `git init`
5. Reopen the project (this will refresh the Git tab)
6. Start Git/More/Shell and type
`git add *`
`git commit -m "first commit"`
`git remote add origin [email protected]:[username]/rpackage.git`
`git push -u origin master`
Then you can refresh repository on `github`. Now you can close (or even delete) your local project and next time you can start a new project Project/New project/Version Control/Git
|
iOS: what is the value expression function when migrating coredata relationship?
The function for a relationship is like: `FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:","employeesToEmployees",$source.employees")`
What is this "Function"? How will it be called? Is there any guide introducing to this?
I've read Apple's
*[Core Data Model Versioning and Data Migration programming guide](http://developer.apple.com/library/mac/#documentation/cocoa/conceptual/CoreDataVersioning/Articles/Introduction.html)*
but I still don't get this.
|
This is a "function expressions with arbitrary method invocations" which seem to be very poorly documented. The only reference that I know of is one paragraph in the [NSExpression Class Reference](https://developer.apple.com/documentation/foundation/nsexpression?language=objc):
>
> **Function Expressions**
>
>
> On OS X v10.4, `NSExpression` only supports a
> predefined set of functions: `sum`, `count`, `min`, `max`, and `average`. These
> predefined functions were accessed in the predicate syntax using
> custom keywords (for example, `MAX(1, 5, 10)`).
>
>
> On OS X v10.5 and later, function expressions also support arbitrary
> method invocations. To use this extended functionality, you can now
> use the syntax `FUNCTION(receiver, selectorName, arguments, ...)`, for
> example:
>
>
> `FUNCTION(@"/Developer/Tools/otest", @"lastPathComponent") => @"otest"`
>
>
>
The quoting in that sample code seems to be incorrect. But the following code compiles and runs on iOS 5/6:
```
NSExpression *expr = [NSExpression expressionWithFormat:@"FUNCTION('/Developer/Tools/otest', 'lastPathComponent')"];
id result = [expr expressionValueWithObject:nil context:nil];
NSLog(@"result: %@", result);
// Output:
// otest
```
So in your case, it is a function expression which calls, when evaluated
```
[$manager destinationInstancesForEntityMappingNamed:@"employeesToEmployees"
sourceInstances:$source.employees]
```
where `$manager` and `$source` are replaced by the migration manager and the source object, as described in [Mapping Model Objects](http://developer.apple.com/library/mac/#documentation/cocoa/conceptual/CoreDataVersioning/Articles/vmMappingOverview.html#//apple_ref/doc/uid/TP40004399-CH5-SW2) in the "Core Data Model Versioning and Data Migration Programming Guide".
|
Why cant I override Semantic UI CSS?
I'm trying to override Semantic UI's CSS styling, however, it's not working, even though I'm giving a specific class name to the relevant class. Here is my code:
```
return (
<div className="ui grid">
<div className="three column row">
<div className="four wide column" ></div>
<div className="text eight wide column"> <h1>Team Selection</h1></div>
<div className="four wide column"></div>
</div>
<div className= "three column row">
<div className="four wide column"></div>
<div className="eight wide column" style={style1}> </div>
<div className="four wide column"></div>
</div>
</div>
)
```
CSS file:
```
.text {
display: flex;
justify-content: center;
}
```
|
Use more specific rules to override other styles.
>
> Specificity is a weight that is applied to a given CSS declaration,
> determined by the number of each selector type in the matching
> selector. When multiple declarations have equal specificity, the last
> declaration found in the CSS is applied to the element. Specificity
> only applies when the same element is targeted by multiple
> declarations. As per CSS rules, directly targeted elements will always
> take precedence over rules which an element inherits from its
> ancestor.
>
>
>
Read more about [Specificity](https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity).
```
.ui.grid > .row > .column.text {
display: flex;
justify-content: center;
}
```
```
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.js"></script>
<div class="ui grid">
<div class="three column row">
<div class="four wide column" ></div>
<div class="text eight wide column"> <h1>Team Selection</h1></div>
<div class="four wide column"></div>
</div>
<div class="three column row">
<div class="four wide column"></div>
<div class="eight wide column"> </div>
<div class="four wide column"></div>
</div>
</div>
```
|
Brackets on registers in Intel x86 assembly syntax
I thought I understood brackets in x86 assembly. In this example, the register `ax` should contain `X`, because the brackets represent the current address of `LABEL`.
```
mov ax, [LABEL]
LABEL:
db "X", 0
```
But I don't understand the following two assembly lines:
```
mov al, [ebx]
```
Why do I need brackets? Is it because `ebx` is a 32-bit register and `ax` a 16-bit one? What's the difference with:
```
mov al, ebx
```
Or this one, I don't understand why I need brackets...
```
mov [edx], ax
```
|
The bracket notation is used to let you access the "value pointed to" by the register or label.
```
mov ax, [LABEL]
LABEL:
db "X", 0
```
You are loading `ax` with the **value** from the memory labeled by `LABEL`. In this case, you are copying the 'X' (0x58 ASCII) into the `ax` register, along with the `0` into the high byte of `ax`. So `ax = 0x0058`, with `ah = 0`, `al = 0x58`.
`LABEL` is attached to the address where "X" is located.
---
This is not a valid operation, because the operand sizes don't match (`al` is an 8-bit register, `ebx` a 32-bit one), so `ebx` cannot be copied into `al` directly:
```
mov al, ebx
```
And this:
```
mov [edx], ax
```
You are moving the value of `ax` into the first two bytes of "the value pointed to by `edx`", since `ax` is a 16 bit register and `edx` is just holding the memory address where it should be written to.
|
IIS and Parallel.ForEach
I am using Parallel.ForEach in a WCF service that is hosted in IIS.
As far as I know, a thread is opened for each task in a parallel loop.
But IIS has a restriction on the number of threads that can be opened. I think it's 20.
So my question is: is it recommended to use Parallel.ForEach in IIS processes?
|
As far as I'm aware there are a few issues when using TPL in IIS.
First, IIS does not have a hard restriction on threads in its thread pool. It really depends on what the threads are doing (CPU bound, IO bound etc). So, the number of threads by itself doesn't matter that much.
Second, as I understand it, TPL will take threads from the thread pool. So, in effect, you are taking threads that could be used to service requests. Again, this could be bad depending on how much traffic you are serving.
Third, IIS application pools will recycle. When this happens, again as I understand it, IIS will freeze the state of the threads and move them to another process. This could have unintended effects on the threaded operations.
Usually, you want to use TPL (`Parallel.ForEach`) because you have a long running process. For long running processes, it might be better to call another host (Windows Service) to do the heavy lifting. In a Windows Service, for example, you have more control over how you manage threads.
Hope this helps.
|
Animating the movement of a div to another
I'm trying to learn jQuery and am tinkering around with a little project to try to learn through practice, but I'm stuck on how to animate the movement of one div to another div.
To clarify, when I click on one div, I would like a different div to move to the div that I clicked on. I'm using appendTo() to move him there, but as you already know, the movement is instant, and I would like to animate the movement.
Here is a fiddle of what is happening currently: <https://jsfiddle.net/Chirpizard/jyatn89r/>
Just click on a blue dot to see what's happening.
Here is the JS:
```
$(document).ready(function () {
$(".node").click(function() {
if ($(this).hasClass('unlocked')) {
$("#main-char").appendTo(this).animate( {
top: "+=50"
}, 1000, function() {
});
$(".node").removeClass("activeNode"); //Removes all instances of activeNode
$(this).addClass('activeNode') //Adds activeNode class to the node clicked on
}
else {
console.log("Sorry, broken");
}
});
});
```
I've checked several other posts and haven't found the exact solution I'm looking for. Any guidance on how to get the red dot to slowly move to the clicked element would be greatly appreciated!
Thanks
|
If I got your point ... I don't think you need to appendTo .. just use animate
```
$(document).ready(function () {
$(".node").click(function() {
if ($(this).hasClass('unlocked')) {
$("#main-char").animate( {
top: $(this).offset().top -27
}, 1000, function() {
});
$(".node").removeClass("activeNode"); //Removes all instances of activeNode
$(this).addClass('activeNode') //Adds activeNode class to the node clicked on
}
else {
console.log("Sorry, broken");
}
});
});
```
[**DEMO**](https://jsfiddle.net/mohamedyousef1980/jyatn89r/1/)
|
Display data based on read configuration file from ssh sessions
I started playing with Python last week. This is a section of code from an application which reads a configuration file and launches `ssh` sessions and continuously displays data back to the local user.
The whole project is [on Github](https://github.com/maxmackie/vssh).
```
def open_connections(sessions):
""" open_connections(sessions)
Opens ssh connections and forks them to the background. Passwords are sent
from the user via STDIN to each process. If correctly identified, the
process is successfully created and added to the Session object.
"""
for server in sessions:
usr = raw_input("Enter username for " + server.address + ": ")
pwd = getpass.unix_getpass("Enter passphrase for " + server.address + ": ")
con = paramiko.SSHClient()
con.set_missing_host_key_policy(paramiko.AutoAddPolicy())
con.connect(server.address,
username=usr,
password=pwd)
print "[OK]"
server.connection = con
def run(sessions):
""" run(sessions)
Takes the file descriptors from the open sessions and runs the commands
at the specified intervals. Display the STDOUT to the current tty.
"""
while True:
time.sleep(TIMEOUT)
os.system("clear")
for session in sessions:
print session.command + " @ " + session.address
print "------------------------------------------------------------"
stdin, stdout, stderr = session.connection.exec_command(session.command)
print format_stdout(stdout.readlines())
```
A couple of my concerns:
- Am I doing any big Python no-nos?
- Am I following proper comment/code conventions?
- Is this the best way to display continuous information to user (screen refreshes)?
This is my first contact with Python so I'm definitely doing something I shouldn't be. The rest of the code for this project is really only 1 file, and what I posted above is the bulk of it. If anyone went to the Github page and wanted to comment on something else, I'd be happy to respond.
|
## Docstrings
This:
```
def open_connections(sessions):
""" open_connections(sessions)
Opens ssh connections and forks them to the background. Passwords are sent
from the user via STDIN to each process. If correctly identified, the
process is successfully created and added to the Session object.
"""
for server in sessions:
# ...
```
Should be:
```
def open_connections(sessions):
"""Opens ssh connections and forks them to the background.
Passwords are sent from the user via STDIN to each process. If correctly
identified, the process is successfully created and added to the Session
object.
"""
for server in sessions:
# ...
```
To know more, take a look at [PEP257](http://www.python.org/dev/peps/pep-0257/#multi-line-docstrings) about multi line docstrings.
There's also no need to repeat the function declaration inside your docstring. If you tomorrow decide to use [Sphinx](http://sphinx.pocoo.org/) for generating your code documentation, that part will be taken care of on its own. You can find other possible conventions on docstrings [here](http://packages.python.org/an_example_pypi_project/sphinx.html#function-definitions).
## while 1 and while True
Replace this:
```
while True:
# ...
```
with:
```
while 1:
# ...
```
This will do a tiny difference in Python 2 (in Python 3 they will be exactly the same). This topic was well discussed on SO [here](https://stackoverflow.com/q/3815359/1132524).
## print formatting
This line:
```
print "------------------------------------------------------------"
```
Will be more readable this way:
```
print "-" * 60
```
Check out the [Format Specification Mini-Language](http://docs.python.org/library/string.html#format-specification-mini-language) if you want to do something fancy.
## Don't shadow the built-in
I tracked down in your repository, this function:
```
def format_stdout(stdout):
""" format_stdout(stdout)
Takes the list from the stdout of a command and format it properly with
new lines and tabs.
"""
nice_stdout = ""
for tuple in stdout:
nice_stdout += tuple
return nice_stdout
```
Do not use `tuple` as variable because it will shadow the built in [`tuple()`](http://docs.python.org/library/functions.html#tuple). But more important I think that **you don't need this function at all**.
Take a look at [`join()`](http://docs.python.org/library/stdtypes.html#str.join), it will be faster and better (since strings in Python are immutables).
So you should replace this line:
```
print format_stdout(stdout.readlines())
```
with something like:
```
print ''.join(stdout.readlines())
```
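As a quick sanity check (standalone sketch, not part of the project), the single `join()` call produces exactly the same string as the manual accumulation loop in `format_stdout`:

```python
# The list simulates what stdout.readlines() would return.
lines = ["total 0\n", "drwxr-xr-x 2 user user 4096 .\n"]

# Manual accumulation, as format_stdout() did it (builds a new
# intermediate string on every iteration).
nice_stdout = ""
for line in lines:
    nice_stdout += line

# One-pass concatenation.
joined = "".join(lines)

print(nice_stdout == joined)  # True
```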
## Write only one time what you could write several
Change this:
```
usr = raw_input("Enter username for " + server.address + ": ")
pwd = getpass.unix_getpass("Enter passphrase for " + server.address + ": ")
```
To something like:
```
template_msg = "Enter {} for " + server.address + ": "
usr = raw_input(template_msg.format("username"))
pwd = getpass.unix_getpass(template_msg.format("passphrase"))
```
This way you'll also be calling server.address only one time, and have one line less to maintain :)
## Clear console
It will be better to avoid calling [`os.system`](http://docs.python.org/library/os.html#os.system) directly, the [`subprocess`](http://docs.python.org/library/subprocess.html#module-subprocess) module is there exactly for that.
Anyway calling clear is not your only option: check out this [SO question](https://stackoverflow.com/q/2084508/1132524).
## Don't check the length of a list
### (unless you really need to know how long it is)
Again peeking from your git repo, I've found this:
```
if ( len(sessions) == 0 ):
```
1. Lose the brackets:
```
if len(sessions) == 0:
```
This will work too, Python is not C/C++ so don't put brackets everywhere :)
2. More important check the list directly:
```
if sessions:
# sessions is not empty
else:
# sessions is empty
```
The implicit boolean conversion is a very smart Python feature :)
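A tiny standalone demonstration that the direct check and the explicit length comparison always agree:

```python
# An empty list is falsy, a non-empty list is truthy, so the explicit
# length comparison and the direct check give the same answer.
for sessions in ([], ["session-1"]):
    assert (len(sessions) == 0) == (not sessions)

sessions = []
if sessions:
    print("sessions is not empty")
else:
    print("sessions is empty")  # this branch runs
```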
|
How to show the custom PDF template while clicking the button
I want to show the PDF Template in new window while clicking the button in Sales Order. I created the button in sales order process using user event script. after that i'm unable to proceed it. It is possible to show the custom PDF template in new window while clicking the sales order?
**My CODE:**
```
USER EVENT SCRIPT:
// creating button in user event script before load event in view mode
function userEventBeforeLoad(type, form, request){
if(type == 'view'){
var internalId = nlapiGetRecordId();
if (internalId != null) {
var createPdfUrl = nlapiResolveURL('SUITELET', 'customscript_back0rdered_itm_pdf', 'customdeploy_backord_itm_pdf_dep', false);
createPdfUrl += '&id=' + internalId;
//---add a button and call suitelet on click that button and it will open a new window
var addButton = form.addButton('custpage_printpdf', 'Print PDF', "window.open('" + createPdfUrl + "');");
}
else {
nlapiLogExecution('DEBUG', 'Error', 'Internaal id of the record is null');
}
}
}
SUITELET SCRIPT:
function suitelet(request, response){
var xml = "<?xml version=\"1.0\"?>\n<!DOCTYPE pdf PUBLIC \"-//big.faceless.org//report\" \"report-1.1.dtd\">\n";
xml += "<pdf>";
xml += "<head><macrolist><macro id=\"myfooter\"><p align=\"center\"><pagenumber /></p></macro></macrolist></head>";
xml += "<body size= \"A4\" footer=\"myfooter\" footer-height=\"0.5in\">";
var record = request.getParameter('id'); // must match the '&id=' parameter appended in the user event script
xml += record; //Add values(in string format) what you want to show in pdf
xml += "</body></pdf>";
var file = nlapiXMLToPDF(xml);
response.setContentType('PDF', 'Print.pdf ', 'inline');
response.write(file.getValue());
}
```
thanks in advance
|
The way I did it recently:
- User Event Adds the Button that calls a suitelet (window.open('suitelet URL'))
- Suitelet Renders the custom template
You can do the rendering like this inside a Suitelet (params: request, response). The custscript\_pdf\_template parameter points to an HTML file in the file cabinet that uses the NetSuite Advanced HTML syntax:
```
var template = nlapiGetContext().getSetting('SCRIPT', 'custscript_pdf_template');
var purchaseOrder = nlapiLoadRecord('purchaseorder', tranId);
var xmlTemplate = nlapiLoadFile(template);
var renderer = nlapiCreateTemplateRenderer();
var file;
xmlTemplate = xmlTemplate.getValue();
renderer.setTemplate(xmlTemplate);
renderer.addRecord('record', purchaseOrder);
xmlTemplate = renderer.renderToString();
file = nlapiXMLToPDF(xmlTemplate);
resObj = file.getValue();
response.setContentType('PDF', 'printOut.pdf', 'inline');
response.write(resObj)
```
|
Best practice - Calling APIs & Services in Single page applications
I have a single page application that needs to call a variety of web services and/or APIs. I would like to understand what is a generally agreed approach to making API or service calls from an SPA. We currently have two approaches:
1. For certain 3rd party APIs- we make direct calls from the single page application without a server side proxy. In order for this to work we have CORS enabled.
2. For other API calls - we make calls to a proxy (wrapper) which is responsible for redirecting them to the appropriate endpoints.
The way we decide which approach to use is: if some kind of data manipulation is needed before calling the 3rd party API, we use the proxy; otherwise we make direct calls from the SPA. Is this a valid approach? Would you have any feedback on whether the 1st approach is robust from a security point of view? In the 1st approach we have an http-only cookie that is being used as an access token to make calls to the 3rd party API. Does this make the API we are exposing vulnerable?
thanks in advance
|
I highly recommend that you proxify all your API calls.
Calling a 3rd party API is ok for some use cases, but not if you start having to deal with a lot of them.
Here are my key points:
- Interfacing the APIs lets you centralize the listing, organization and updates of the 3rd party APIs. It also makes it easier to build your own tracking, stats and monitoring.
- You can reroute any API if it is down, and provide adequate error handling so that you avoid frustrating timeouts / ugly error messages for your customers due to that 3rd party API being down
- You isolate and secure your customers from outside services: if a 3rd party API is exploited by a malicious user (e.g. returns 'bad' pictures, redirects the navigation...), you can filter it.
- It is easy to change ONE hostname for your API proxy, it is harder to change 20. If you want to migrate your application into closed environments (private networks) the API proxy will come as a real helper when it comes to all the issues with DNS, proxies, gateways, etc.
- You can offer your own standardized API which interfaces all the others: this will be an accelerator for developments. Take a look at **GraphQL** if that could help you perform both multi-API calls and result sizes optimizations.
|
How can I check if a computer is configured to use WSUS?
I'm the user of a computer (Windows 7) that is part of a domain and I want to make sure it's configured to use WSUS (Windows Server Update Services) and download updates from a local server instead of downloading updates directly from Microsoft servers.
Is there a way to definitely assert if WSUS is configured and maybe even obtain information about the configured WSUS server if one exists?
*Sidenote: I have no interest in setting up a WSUS server so I believe this question fits better in superuser than serverfault.*
|
As it is part of a domain, it is most likely done through group policy, you can go to the desktop as an administrator and open up the `Group Policy Editor` (Go to Run and type `MMC`, then go to `File` > `Add / Remove Snapin`) and find `Group Policy`.
Take a look under `Computer Configuration` > `Administrative Templates` > `Windows Components` > `Windows Update`.
If you are not using group policy, just take a look directly at the registry key:
`HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate`
You should see the keys `WUServer` and `WUStatusServer`, which should have the locations of the specific servers.
There are far too many keys and entries for me to write, but if you are interested, TechNet has an [excellent article](http://technet.microsoft.com/en-us/library/dd939844(WS.10).aspx) on all of them.
|
Origin header null for XHR request made from with sandbox attribute
I have a project where I am trying to download some data in a tab separated value format from a data handler; however, Google Chrome is sending a null value for the Origin header.
I'm seeing this when I navigate to <http://server.corp.somebiz.com/reportpages/Report_Page_Requests_Over_Time.aspx?app=76ac42b7-ba6f-4be4-b297-758ebc9fe615>
```
var url = 'http://server.corp.somebiz.com/DataHandlers/ReportSets.ashx?task=pagerequestsovertime&app=188d1956-c4a7-42f7-9bdd-38f54c14e125&format=tsv';
d3.tsv(url, function(d) {
d.date = parseTime(d.date);
d.close = +d.close;
return d;
}, function(error, data) {
if (error) throw error;
console.log('Do stuff');
});
```
Here are the raw headers on the request:
```
GET /DataHandlers/ReportSets.ashx?task=pagerequestsovertime&app=786b5ef3-1389-4890-8004-533fd1f66f16&format=tsv HTTP/1.1
Host: server.corp.somebiz.com
Connection: keep-alive
accept: text/tab-separated-values,*/*
Origin: null
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
```
This ends with an error on the console:
```
XMLHttpRequest cannot load http://server.corp.somebiz.com/DataHandlers/ReportSets.ashx?task=pagere…6ac42b7-ba6f-4be4-b297-758ebc9fe615&start=2/1/2017&end=3/2/2017&format=tsv. The 'Access-Control-Allow-Origin' header has a value 'http://server.corp.somebiz.com' that is not equal to the supplied origin. Origin 'null' is therefore not allowed access.
```
I am looking not only for why this is happening, but also for the conditions that lead Chrome to send a null Origin header to the server.
This seems to be a Chrome specific issue as Internet Explorer 11 is sending the proper Origin value to the server.
Update: To add another wrinkle, that may or may not be a contributing factor.
I load the calling page in an `<iframe>` element to isolate scripted elements. Calling the page outside of the iframe causes a different behavior, the Origin header on Chrome is missing entirely.
|
If the `iframe` you’re loading the calling page in has a `sandbox` attribute that doesn’t contain the value `allow-same-origin`, [browsers give it a “unique” origin](https://html.spec.whatwg.org/multipage/iframe-embed-object.html#the-iframe-element:concept-origin-2):
>
> When the [`sandbox`] attribute is set, the content is treated as being from a unique origin, forms, scripts, and various potentially annoying APIs are disabled, links are prevented from targeting other browsing contexts, and plugins are secured. The `allow-same-origin` keyword causes the content to be treated as being from its real origin instead of forcing it into a unique origin
>
>
>
…and when determining the value of the `Origin` header to send in a cross-origin request, browsers serialize any unique origin as `null` and give the `Origin` header that value.
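So, assuming your outer page is using a `sandbox` attribute (this is a hypothetical markup sketch, since the original markup isn't shown), adding `allow-same-origin` to the token list keeps the frame's real origin:

```html
<!-- Without allow-same-origin the framed page gets a unique origin
     and its XHRs are sent with "Origin: null". -->
<iframe src="/reportpages/Report_Page_Requests_Over_Time.aspx"
        sandbox="allow-scripts allow-same-origin"></iframe>
```

Note that combining `allow-scripts` with `allow-same-origin` largely defeats the sandbox for same-origin content, so only do this if you trust the framed page.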
|
Regex for a name and number dart
I need to validate a form that a user provides their name and a number. I have a regex that is supposed to make sure the name contains only letters and nothing else and also for the number input i need to make sure only numbers are in the field.
The code I have looks like
```
validator: (value) => value.isEmpty
? 'Enter Your Name'
: RegExp(
'!@#<>?":_``~;[]\|=-+)(*&^%1234567890')
? 'Enter a Valid Name'
: null,
```
can i get a regex expression that validates a name in such a way if any special character or number is inputted it becomes wrong meaning only letters are valid and another that validates a number in such a way if an alphabetical letter or any special character is inputted it becomes wrong meaning only the input of a number is valid
|
It seems to me you want
```
RegExp(r'[!@#<>?":_`~;[\]\\|=+)(*&^%0-9-]').hasMatch(value)
```
Note that you need to use a raw string literal, put `-` at the end and escape `]` and `\` chars inside the resulting *character class*, then check if there is a match with `.hasMatch(value)`. Note also that `[0123456789]` is equal to `[0-9]`.
As for the second pattern, you can remove the digit range from the regex (as you need to allow it) and add a `\s` pattern (`\s` matches any whitespace char) to disallow whitespace in the input:
```
RegExp(r'[!@#<>?":_`~;[\]\\|=+)(*&^%\s-]').hasMatch(value)
```
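The character class behaves the same in any regex flavor; here it is transliterated to Python (purely for illustration, outside Dart) so you can see what the name pattern rejects:

```python
import re

# Same class as the Dart pattern: '-' placed last, ']' and '\' escaped.
invalid_name = re.compile(r'[!@#<>?":_`~;[\]\\|=+)(*&^%0-9-]')

print(bool(invalid_name.search("John")))   # False -> only letters, valid
print(bool(invalid_name.search("J0hn")))   # True  -> contains a digit
print(bool(invalid_name.search("Jo@hn")))  # True  -> contains '@'
```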
|
How to run userscript(/greasemonkey) after AngularJS library is loaded
I just want to create some extensions to angular object to make AngularJS debugging more convenient.
But when I run the userscript, it can't find the angular object. The AngularJS library is loaded at the bottom of the body tag.
**UPD**: @estus provided right answer, but if you want to use chrome you need to install it via [Tampermonkey](https://tampermonkey.net/) extension.
You can find final code snippet [here](https://github.com/stevermeister/userscripts/blob/master/angular-shortcuts.user.js).
|
The fact that Angular is unavailable at the moment when user script runs indicates that Angular is loaded asynchronously, this is quite normal for any SPA (also check that [@run-at](http://wiki.greasespot.net/Metadata_Block#.40run-at) is *not* set to `document-start`, it isn't the desirable behaviour here).
The usual workaround for user scripts is to watch for the desired variable:
```
var initWatcher = setInterval(function () {
console.log('watch');
if (unsafeWindow.angular) {
clearInterval(initWatcher);
init();
}
}, 100);
function init() {
console.log('angular', unsafeWindow.angular);
}
```
If cross-browser compatibility is not required, then FF-specific [Object.prototype.watch](https://developer.mozilla.org/en-US/docs/Archive/Web/JavaScript/Object.watch) can be used instead:
```
unsafeWindow.watch('angular', function (prop, oldVal, newVal) {
console.log('watch');
if (newVal) {
unsafeWindow.unwatch('angular');
// angular is still undefined ATM, run init() a bit later
setTimeout(init);
}
return newVal;
});
function init() {
console.log('angular', unsafeWindow.angular);
}
```
|
When are subviews completely, correctly laid out?
I am trying to programmatically draw a circle progress view and center it within a subview `circleView`, which I have set up/constrained in the interface builder. However, I am not sure when `circleView`'s final size and center will be accessible (I'm using auto layout), which I ultimately need to draw the circle. Here's the involved code:
```
@IBOutlet weak var circleView: UIView!
let circleShapeLayer = CAShapeLayer()
let trackLayer = CAShapeLayer()
override func viewDidLoad() {
super.viewDidLoad()
// createCircle()
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
print(circleView.frame.size.width)
createCircle()
}
func createCircle() {
// Draw/modify circle
let center = circleView.center
// Where I need to use circleView's width/center
let circularPath = UIBezierPath(arcCenter: center, radius: circleView.frame.size.width, startAngle: -CGFloat.pi/2, endAngle: 2 * CGFloat.pi, clockwise: true)
trackLayer.path = circularPath.cgPath
trackLayer.strokeColor = UIColor.lightGray.cgColor
trackLayer.lineWidth = 10
trackLayer.fillColor = UIColor.clear.cgColor
circleView.layer.addSublayer(trackLayer)
circleShapeLayer.path = circularPath.cgPath
circleShapeLayer.strokeColor = Colors.tintColor.cgColor
circleShapeLayer.lineWidth = 10
circleShapeLayer.fillColor = UIColor.clear.cgColor
circleShapeLayer.lineCap = kCALineCapRound
// circleShapeLayer.strokeEnd = 0
circleView.layer.addSublayer(circleShapeLayer)
}
```
This prints the width of `circleView` twice, and only on the second run of `viewDidLayoutSubviews()` is it actually correct:
```
207.0
187.5 // Correct (width of entire view is 375)
```
However, the circle draws incorrectly along the same exact path both times, which baffles me because the width changes as shown above. Maybe I'm thinking about this the wrong way?
I'd rather not draw the circle twice and was hoping there would be a way to run `createCircle()` within `viewDidLoad()` instead, but at the moment this just gives me the same result. Any help would be very much appreciated.
|
@rmaddy's comment is correct: The best way to handle this is to use a custom view to manage `trackLayer` and `circleShapeLayer`. Override the custom view's `layoutSubviews` to set the frame and/or path of the layers.
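A minimal sketch of that custom-view approach (the class name, colors and radius inset are hypothetical choices, not from your code): constrain an instance of this view in Interface Builder in place of `circleView`, and the layer paths are rebuilt from the final bounds automatically.

```swift
import UIKit

// Hypothetical sketch: the view owns both layers and rebuilds their
// paths whenever its own size changes.
class CircleProgressView: UIView {
    let trackLayer = CAShapeLayer()
    let circleShapeLayer = CAShapeLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        for shapeLayer in [trackLayer, circleShapeLayer] {
            shapeLayer.lineWidth = 10
            shapeLayer.fillColor = UIColor.clear.cgColor
            layer.addSublayer(shapeLayer)
        }
        trackLayer.strokeColor = UIColor.lightGray.cgColor
        circleShapeLayer.strokeColor = UIColor.blue.cgColor
        circleShapeLayer.lineCap = kCALineCapRound
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Called whenever this view is laid out, so bounds is final here.
    override func layoutSubviews() {
        super.layoutSubviews()
        let circularPath = UIBezierPath(
            arcCenter: CGPoint(x: bounds.midX, y: bounds.midY),
            radius: min(bounds.width, bounds.height) / 2 - 5,
            startAngle: -CGFloat.pi / 2,
            endAngle: 3 * CGFloat.pi / 2,
            clockwise: true)
        trackLayer.path = circularPath.cgPath
        circleShapeLayer.path = circularPath.cgPath
    }
}
```

Because `layoutSubviews` runs every time the view's size changes, the possibly-stale first pass you observed in the view controller's `viewDidLayoutSubviews` no longer matters.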
That said, I'll answer your stated question of “When are subviews completely, correctly laid out?”
Consider this view hierarchy:
```
A
|
+--- B
| |
| +--- C
| |
| +--- D
|
+--- E
|
+--- F
|
+--- G
```
During the layout phase of the run loop, Core Animation traverses the layer hierarchy in depth-first order, looking for layers that need layout. So Core Animation visits A's layer first, then B's layer, then C's layer, then D's layer, then E's layer, then F's layer, then G's layer.
If a layer needs layout (its `needsLayout` property is true), then Core Animation sends `layoutSublayers` to the layer. A layer handles this by default by sending `layoutSublayersOfLayer:` to its delegate. Usually the delegate is the `UIView` that owns the layer.
By default, a `UIView` handles `layoutSublayersOfLayer:` by (among other things) sending three messages:
1. The `UIView` sends `viewWillLayoutSubviews` to its view controller, if the view is owned by a view controller.
2. The `UIView` sends itself `layoutSubviews`.
3. The `UIView` sends `viewDidLayoutSubviews` to its view controller, if the view is owned by a view controller.
In the default implementation of `-[UIView layoutSubviews]`, the view sets the frame of each of its direct subviews, based on auto layout constraints.
Note that in `layoutSubviews`, a view only sets the frames of its **direct** subviews. So for example, A only sets the frames of B and E. It does **not** set the frames of C, D, F, and G.
So let's suppose A is the view of a view controller, but none of the other views are owned by a view controller.
When A's handles `layoutSubviews`, it sets the frames of B and E. Then it sends `viewDidLayoutSubviews` to its view controller. The frames of C, D, F, and G have **not** been updated at this point. The view controller cannot assume that C, D, F, and G have correct frames in its `viewDidLayoutSubviews`.
There are two good places to put code that will run when C's frame has definitely been updated:
1. Override B's `layoutSubviews`. Since B is the direct superview of C, you can be sure that after B's `layoutSubviews` calls `super.layoutSubviews()`, C's frame has been updated.
2. Put a view controller in charge of B. That is, make B be the view of some view controller. Then, override `viewDidLayoutSubviews` in the view controller that owns B.
If you only need to know when C's size has definitely been updated, you have a third option:
3. Override C's `layoutSubviews`. This will be called if `C` changes size. It won't necessarily be called if C changes position but stays the same size.
|