What are disadvantages of JavaFX and Silverlight?
JavaFX is a new technology introduced by Sun. I would like to know what the limitations and pitfalls of JavaFX and Silverlight are.
|
The main problems I see with JavaFX are:
- It is very immature technology. While everything, as others have noted, must have a start somewhere, JavaFX is very immature technology *in a field that has several more mature competitors*. That makes all the difference. Flex/AIR and Silverlight are out there already and in active use all over the place. To tackle an entrenched market you need tools that are far superior to what's out there, not just a little superior (if at all).
- It is stacked on top of the rather large JRE. The JRE is available on every major platform (and many minor ones) but it is not ubiquitously installed. The same is true of Flex/AIR and Silverlight, of course, but Flash is a lighter-weight solution than the whole JRE+JavaFX and the latter is basically a default on the target platform anyway.
- JavaFX is heavily reliant upon the huge infrastructure that surrounds Java. This is a strength, of course, given that if you need it there's a library for it somewhere, but it is also a weakness in that it involves even more bloat than the JRE+JavaFX and starts to introduce you to the wonderful world of JAR Hell.
- The final reason to be wary of JavaFX: I prefer it to all of the other currently available RIA technology. (Yes, despite my critiques, I like it better.) If I like something, it is guaranteed to fail. Computing technology is positively littered with technologies I preferred over what was eventually chosen by the market. My liking something is a kiss of death for it.
One of those may not have been entirely serious....
|
how to resume ddrescue process?
I have to recover my 1TB USB HD. I'm using `ddrescue` to accomplish this. Is there a way that I can resume `ddrescue` once I start it?
I'm using the command:
```
ddrescue /dev/sdd1 ./bye1t.dd_rescue.image
```
|
*ddrescue* can be resumed, but it requires a log file to be able to do so. The log file will record the progress that ddrescue has made so far, and restarting ddrescue will read the log file and start where it left off.
The log file would be the third parameter:
```
ddrescue /dev/sdd1 ./bye1t.dd_rescue.image ~/sdd1.log
```
If you have already started a ddrescue run without a log file and cancel it, the next time ddrescue runs, it will start at the beginning since it has no record of what has already been recovered.
*Note*: [ddrescue](http://www.forensicswiki.org/wiki/Ddrescue) and [dd\_rescue](http://www.forensicswiki.org/wiki/Dd_rescue) are different programs.
|
What is the best practice to deal with RxSwift retry and error handling
I read some posts saying that the best practice with RxSwift is to only pass fatal errors to onError and to pass Result to onNext.
It made sense to me until I realised that I can't deal with retry anymore, since retry only happens on onError.
How do I deal with this issue?
Another question is, how do I handle global and local retries mixed together?
An example would be the iOS receipt validation flow:
1. Try to fetch the receipt locally.
2. If that fails, ask the Apple server for the latest receipt.
3. Send the receipt to our backend to validate.
4. If that succeeds, the whole flow is complete.
5. If it fails, check the error code; if it's retryable, go back to 1.

In the new step 1, it will be forced to ask the Apple server for a new receipt. Then, when it reaches step 5 again, the whole flow will stop, since this is already the second attempt, meaning it retries only once.
So in this example, if I were using a state machine without Rx, I would end up sharing some global state like `isSecondAttempt: Bool`, `shouldForceFetchReceipt: Bool`, etc.
How do I design this flow in Rx, with this global shared state designed into the flow?
|
>
> I read some posts saying that the best practice with RxSwift is to only pass fatal errors to onError and to pass Result to onNext.
>
>
>
I don't agree with that sentiment. It basically says that you should only use `onError` if the programmer made a mistake. You should use errors for unhappy paths or to abort a procedure. They are just like throwing, except in an async way.
Here's your algorithm as an Rx chain.
```
enum ReceiptError: Error {
    case noReceipt
    case tooManyAttempts
}

struct Response {
    // the server response info
}

func getReceiptResponse() -> Observable<Response> {
    return fetchReceiptLocally()
        .catchError { _ in askAppleForReceipt() }
        .flatMapLatest { data in
            sendReceiptToServer(data)
        }
        .retryWhen { error in
            error
                .scan(0) { attempts, error in
                    let max = 1
                    guard attempts < max else { throw ReceiptError.tooManyAttempts }
                    guard isRetryable(error) else { throw error }
                    return attempts + 1
                }
        }
}
```
Here are the support functions that the above uses:
```
func fetchReceiptLocally() -> Observable<Data> {
    // return the local receipt data or call `onError`
}

func sendReceiptToServer(_ data: Data) -> Observable<Response> {
    // send the receipt data or `onError` if the server failed to receive or process it correctly.
}

func isRetryable(_ error: Error) -> Bool {
    // is this error the kind that can be retried?
}

func askAppleForReceipt() -> Observable<Data> {
    return Observable.just(Bundle.main.appStoreReceiptURL)
        .map { (url) -> URL in
            guard let url = url else { throw ReceiptError.noReceipt }
            return url
        }
        .observeOn(ConcurrentDispatchQueueScheduler(qos: .userInitiated))
        .map { try Data(contentsOf: $0) }
}
```
|
Is there a reason to use C language pointers like this &(\*foo)?
I usually code pointers like this when I need to change the original memory address of a pointer.
Example:
```
static void get_line_func(struct data_s *data,
                          char **begin)
{
    data->slot_number = strsep(&(*(begin)), "/");
    data->protocol = *begin;
    strsep(&(*begin), ">");
    data->service_name = strsep(&(*begin), "\n");
}
```
I mean, isn't `&(*foo) == foo`?
|
There is no reason to do that directly. However, the combination can arise in machine-generated code (such as the expansion of a preprocessor macro).
For instance, suppose we have a macro `do_something_to(obj)` which expects the argument expression `obj` to designate an object. Suppose somewhere in its expansion, this macro takes the address of the object using `&(obj)`. Now suppose we would like to apply the macro to an object which we only hold via a pointer `ptr`. To designate the object, we must use the expression `*ptr`, so that we use the macro as `do_something_to(*ptr)`. That of course means that `&(*ptr)` now occurs in the program.
The status of the expression `&*ptr` has changed over the years. I seem to remember that in the ANSI C 89 / ISO C90 dialect, the expression produced undefined behavior if `ptr` was an invalid pointer.
In ISO C11 the following is spelled out (and I believe nearly the same text is in C99), requiring `&*` not to dereference the pointer: *"if the operand [of the address-of unary `&` operator] is the result of a unary `*` operator, neither that operator nor the `&` operator is evaluated and the result is as if both were omitted, except that the constraints on the operators still apply and the result is not an lvalue"*. Thus in the modern C dialect, the expression `&*ptr` doesn't dereference `ptr`, and hence has defined behavior even if that value is null.
What does that mean? "constraints still apply" basically means that it still has to type check. Just because `&*P` doesn't dereference `P` doesn't mean that `P` can be a `double` or a `struct`; it has to be a pointer.
The "result is not an lvalue" part is potentially useful. If we have a pointer `P` which is an value, if we wrap it in the expression `&*P`, we obtain the same pointer value as a non-lvalue. There are other ways to obtain the value of `P` as a non-lvalue, but `&*P` is a "code golfed" solution to the problem requiring only two characters, and having the property that it will remain correct even if `P` changes from one pointer type to another.
|
Unable to load pdo\_sqlite.so extension
I'm trying to install the pdo\_sqlite extension for the CLI.
I have tried running `sudo apt-get install php5-sqlite`, and it says the extension is already up to date, but when I run `php -m` it does not appear in the list.
In my /usr/lib/php/ folder I have 2 folders named 20131226 and 20160303. The latter one (20160303) contains pdo\_sqlite.so, but when I try to add it to the php.ini file I get the error
`PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/20160303/pdo_sqlite.so' - /usr/lib/php/20160303/pdo_sqlite.so: undefined symbol: zend_fetch_resource2_ex in Unknown on line 0`
I am running ubuntu 14.04 and php 5.6.
Is anyone able to help with this problem? Also is anyone able to explain why the php extensions are stored in these different numbered folders and how php knows which one to look in? Thanks.
|
You've got (at least part of) two different versions of php installed, and they're conflicting. You'll probably want to clean out the version you're not using in order to prevent further issues. It might be easiest to just wipe both and then reinstall the one you want. Also make sure that your php.ini has its `extension_dir` setting pointing to the correct directory for the version you're keeping.
The internal PHP API is versioned according to date, so when backwards-incompatible changes are made and a new version is required, you'll get a new subdirectory name. If the changes *are* backward-compatible, and no new API version is required, then you can make minor updates to your PHP version without having to recompile your extensions.
I believe 20131226 is for PHP version 5.6 and 20160303 is for PHP version 7.
|
Python - subprocess.Popen - ssh -t user@host 'service --status-all'
I've read a bunch of examples but none of them work for this specific task.
Python code:
```
x = Popen(commands, stdout=PIPE, stderr=PIPE, shell=True)
print commands
stdout = x.stdout.read()
stderr = x.stderr.read()
print stdout, stderr
return stdout
```
Output:
```
[user@host]$ python helpers.py
['ssh', '-t', 'user@host', ' ', "'service --status-all'"]
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
[-D [bind_address:]port] [-e escape_char] [-F configfile]
[-I pkcs11] [-i identity_file]
[-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-R [bind_address:]port:host:hostport] [-S ctl_path]
[-W host:port] [-w local_tun[:remote_tun]]
[user@]hostname [command]
```
Why am I getting this error?
Using os.popen(...) it works, or at least it executes, but I can't retrieve the output of the remote command via the SSH tunnel.
|
I think your commands list is wrong:
```
commands = ['ssh', '-t', 'user@host', "service --status-all"]
x = Popen(commands, stdout=PIPE, stderr=PIPE)
```
Additionally, I don't think you should pass `shell=True` if you're going to pass a list to `Popen`.
e.g. either do this:
```
Popen('ls -l',shell=True)
```
or this:
```
Popen(['ls','-l'])
```
but not this:
```
Popen(['ls','-l'],shell=True)
```
Finally, there exists a convenience function for splitting a string into a list the same way your shell would:
```
import shlex
shlex.split("program -w ith -a 'quoted argument'")
```
will return:
```
['program', '-w', 'ith', '-a', 'quoted argument']
```
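Putting the pieces together, here is a minimal sketch (the host and remote command are placeholders, not taken from your environment) that runs the SSH command and collects its output with `communicate()`, which also avoids the deadlock risk of reading `stdout` and `stderr` separately:
```
from subprocess import Popen, PIPE
import shlex

# Hypothetical host and command, for illustration only.
commands = shlex.split("ssh -t user@host 'service --status-all'")

p = Popen(commands, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()  # waits for the process to finish
print(stdout)
print(stderr)
```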
|
Convert a float literal to int representation in x86 assembly?
The following C code:
```
int main()
{
float f;
f = 3.0;
}
```
Is converted to the following assembly instructions:
```
main:
pushl %ebp
movl %esp, %ebp
subl $16, %esp
flds .LC0
fstps -4(%ebp)
movl $0, %eax
leave
ret
.LC0:
.long 1077936128
```
What is the correct way to calculate the `.long`/`int` representation of the `float` literal?
*e.g.* `1077936128` generated from `3.0` for the example shown above
---
>
> For this example `gcc` is used with the `-m32 -S -O0 -fno-stack-protector -fno-asynchronous-unwind-tables` flags using intel settings to generate the assembly output.
>
>
>
**References**:
[Compiler Explorer Link](https://godbolt.org/g/X6mgaT) with compilation flags and other settings
|
x86 FPU hardware uses [IEEE754 binary32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) / binary64 representations for `float` / `double`.
Determining the IEEE 754 representation of a floating point number is not trivial for humans. In handwritten assembly code, it's usually a good idea to use the `.float` or `.double` directives instead:
```
.float 3.0 # generates 3.0 as a 32 bit float
.double 3.0 # generates 3.0 as a 64 bit float
```
---
If you really want to compute this manually, refer to the [explanations on Wikipedia](https://en.wikipedia.org/wiki/Single-precision_floating-point_format). It might be interesting to do so as an exercise, but for actual programming it's tedious and mostly useless.
Compilers do the conversion (with rounding to the nearest representable FP value) internally, because FP values often don't come directly from a literal in the source; they can come from constant folding. e.g. `1.23 * 4.56` is evaluated at compile time, so the compiler already ends up with FP values in float or double binary representation. Printing them back to decimal for the assembler to parse and re-convert to binary would be slower and might require a lot of decimal places.
---
To compute the representation of a 32 bit float as a 32 bit integer, you can use [an online IEEE754 converter](https://www.h-schmidt.net/FloatConverter/IEEE754.html), or a program like this:
```
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(int argc, char *argv[])
{
    union { uint32_t u32; float f32; } intfloat;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s some-number\n", argv[0]);
        return EXIT_FAILURE;
    }

    intfloat.f32 = atof(argv[1]);
    printf("0x%08" PRIx32 "\n", intfloat.u32);

    return EXIT_SUCCESS;
}
```
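If a full C program feels like overkill, the same bit pattern can also be checked from a Python prompt with the standard `struct` module (a quick sketch, not part of the original toolchain):
```
import struct

# Reinterpret the IEEE754 binary32 encoding of 3.0 as an unsigned 32-bit integer.
bits = struct.unpack('<I', struct.pack('<f', 3.0))[0]
print(bits)        # 1077936128
print(hex(bits))   # 0x40400000
```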
|
mysql: very simple SELECT id ORDER BY LIMIT will not use INDEX as expected (?!)
I have a simple table with about 3 million records. I made the necessary indexes, and I also forced the index PRIMARY, but it still doesn't work. **It searches nearly all 3 million rows instead of using the index** to execute this one (record\_id is an INT auto-increment):
```
EXPLAIN SELECT record_id
FROM myrecords
FORCE INDEX (
PRIMARY )
ORDER BY record_id ASC
LIMIT 2955900 , 300
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE myrecords index NULL PRIMARY 4 NULL 2956200 Using index
```
The index is
```
Keyname Type Unique Packed Column Cardinality Collation Null
PRIMARY BTREE Yes No record_id 2956742 A No
```
I would like to know why this FORCED index is not being used the right way.
Without forcing the 'primary' index, both ASC and DESC were tried; the result is the same. The table has been repaired, optimized and analyzed. No luck.
The query needs over a minute to execute!
WHAT I EXPECTED: the query should process only 300 rows, since that column is indexed, not nearly all 3 million of them, as you can see in the first code-formatted block (scroll a little to the right).
|
Index lookups are by *value*, not by *position*. An index can search for a value 2955900, but you're not asking for that. You're asking for the query to start at an offset of the 2955900th row in the table.
The optimizer can't assume that all primary key values are consecutive. So it's pretty likely that the 2955900th row has a value much higher than that.
Even if the primary key values are consecutive, you might have a WHERE condition that only matches, for example, 45% of the rows. In which case the id value on the 2955900th row would be *way* past the id value 2955900.
In other words, an index lookup of the id value 2955900 will not deliver the 2955900th row.
So MySQL can't use the index for a limit's offset. It *must* scan the rows to count them until it reaches offset+limit rows.
MySQL does have [optimizations related to LIMIT](http://dev.mysql.com/doc/refman/5.5/en/limit-optimization.html), but it's more about stopping a table-scan once it has reached the number of rows to return. The optimizer may still report in an EXPLAIN plan that it expects it *might* have to scan the whole table.
A frequent misunderstanding about [FORCE INDEX](http://dev.mysql.com/doc/refman/5.5/en/index-hints.html) is that it forces the use of an index. :-)
In fact, if the query *can't* use an index (or if the available indexes don't have any benefit for this query), FORCE INDEX has no effect.
---
Re your comment:
Pagination is a frequent bane of data-driven web applications. Despite how common this feature is, it's not easy to optimize. Here are a few tips:
- Why are you querying with offset 2955900? Do you really expect users to sift through that many pages? Most users give up after a few pages (exactly how many depends on the type of application and the data).
- Reduce the number of queries. Your pagination function could fetch the first 5-10 pages, even if it only shows the first page to the user. Cache the other pages, with the assumption that the user will advance through a few pages. Only if they advance past the cached set of pages does your app have to do another query. You could even cache all 10 pages in Javascript on the client's browser so clicking "Next" is *instantaneous* for them (at least for those first few pages).
- Don't put a "Last" button on any user interface, because people will click it out of curiosity. Notice Google has a "Next" button but not a "Last" button. So the UI itself discourages people from running inefficient queries with high offsets.
- If the user is advancing one page at a time, use the highest id value returned in the previous page in the WHERE clause of the next page's query. I.e. the following *does* use the index, even with no FORCE INDEX hint:
```
SELECT * FROM thistable WHERE id > 544 LIMIT 20
```
|
Using recursion for a function with a pass by reference parameter?
For my void function reverse, I have to get a string and reverse it with recursion.
So far, I have
```
void reverse(string &s)
{
    string b = s;
    s = s.substr(1) + s.at(0);
}
```
but it won't work because of the pass by reference requirement for the function. How do I utilize the pass by reference for recursion?
|
**For the record, here's what the function was when I answered:**
```
void reverse(string &s)
{
    s = reverse(s.substr(1)) + s.at(0);
}
```
Multiple problems here:
First, you are concatenating the result of `reverse` with a `char`. `reverse` returns `void`, so this doesn't work. Second, `substr` creates a new string, and passing a temporary by reference is asking for trouble. Finally, in recursion, you need an exit condition. As it is, your method is crashing trying to get a substr when the length is 1.
Here's a version that works:
```
void reverse(string &s)
{
    if (s.size() == 1)
        return;

    string sub = s.substr(1);
    reverse(sub);
    s = sub + s.at(0);
}
```
|
Why does my takeWhile fail to work with my Stream
The following code prints "*1". What's mystifying is that if I remove the comment it returns "*4", which is what I was expecting.
```
var max = 0
lazy val list: Stream[Int] = 1 #:: Stream.from(2)
list.takeWhile { x =>
  max = x
  x < 4
} //.foreach(println)
println("*" + max)
```
|
First of all: the `lazy` in your second line isn't doing anything—you can remove it and get the same result.
More importantly: `takeWhile` is *actually* lazy, in that it just returns another `Stream`, and nothing past the head of that stream will be evaluated until it's needed. Consider the following:
```
val s = Stream.from(1).takeWhile(_ > 0)
```
You and I know that `s` is going to be an infinite stream, but if we fire up the REPL and type this in, it's perfectly happy to evaluate it:
```
scala> val s = Stream.from(1).takeWhile(_ > 0)
s: scala.collection.immutable.Stream[Int] = Stream(1, ?)
```
The same thing is happening in your example: the `(Int) ⇒ Boolean` that you've passed to `takeWhile` isn't going to get fed any elements beyond the head of the stream, until something like your `foreach` makes that necessary.
You can see this even more dramatically by adding something like a `println` inside of the `takeWhile` predicate:
```
scala> val s = Stream.from(1).takeWhile { x => println("Checking: " + x); x < 4 }
Checking: 1
s: scala.collection.immutable.Stream[Int] = Stream(1, ?)
scala> val l = s.toList
Checking: 2
Checking: 3
Checking: 4
l: List[Int] = List(1, 2, 3)
```
Clearly the predicate only gets called for the head of the stream, until we force the evaluation of the rest of the stream by calling `toList`.
|
Declarations within an if statement
Are declarations within an `if` statement illegal without an enclosing brace? For example:
```
if (argc == 2)
    char* str = "Two";
else
    char* str = "Not two";
```
Would need to be converted to:
```
if (argc == 2) {
    char* str = "Two";
} else {
    char* str = "Not two";
}
```
Out of curiosity, why is this so? I understand the need for braces with multiple statements such as:
```
if (something)
    printf(something);
printf(something);
```
Where the above really "means" the following in lieu of explicit braces:
```
if (something) {
    printf(something);
}
printf(something);
```
But why is a single statement/declaration allowed, if it includes only a single `;`?
|
The syntax of an `if` statement (with an `else` clause) is:
>
> **if** (expression) statement **else** statement
>
>
>
And this:
```
char* str = "Two";
```
Is not a statement but a **declaration**, therefore it's not valid in your first code snippet.
The second snippet with curly braces works because the braces denote a **compound statement**, and a compound statement may contain both statements and declarations within it.
Note however that even in the second case which compiles, the variable `str` defined in each sub-block goes out of scope once the `if` statement completes and cannot be referenced. What you probably want is:
```
char *str;
if (argc == 2) {
    str = "Two";
} else {
    str = "Not two";
}
```
Or equivalently:
```
char *str;
if (argc == 2)
    str = "Two";
else
    str = "Not two";
```
|
Error: Excepiton in System.Threading.ThreadAbortException: Thread was being aborted
While downloading a template I am getting the following error message.
I have tried Response.End() instead of Response.Flush(), but I get the same error.
```
Error: Excepiton in Download:System.Threading.ThreadAbortException: Thread was being aborted.
at System.Threading.Thread.AbortInternal()
at System.Threading.Thread.Abort(Object stateInfo)
at System.Web.HttpResponse.End()
```
Any idea how to avoid the above exception?
Code
```
private void DownloadFile(string filePath, string downloadFileName)
{
    Response.ContentType = "application/ms-excel";
    Response.AddHeader("content-disposition", "attachment; filename=" + downloadFileName);
    Response.TransmitFile(filePath);
    // Response.Flush();
    Response.End();
}
```
Thanks in advance..
|
As answered here :- [How to Avoid Response.End() "Thread was being aborted" Exception during the Excel file download](https://stackoverflow.com/questions/20988445/how-to-avoid-response-end-thread-was-being-aborted-exception-during-the-exce)
>
> Replace this : `HttpContext.Current.Response.End();`
>
>
> With this :
>
>
>
```
HttpContext.Current.Response.Flush(); // Sends all currently buffered output to the client.
HttpContext.Current.Response.SuppressContent = true; // Gets or sets a value indicating whether to send HTTP content to the client.
HttpContext.Current.ApplicationInstance.CompleteRequest(); // Causes ASP.NET to bypass all events and filtering in the HTTP pipeline chain of execution and directly execute the EndRequest event.
```
And answered here :- [ASP.NET exception "Thread was being aborted" causes method to exit](https://stackoverflow.com/questions/1014439/asp-net-exception-thread-was-being-aborted-causes-method-to-exit)
>
> This is a ThreadAbortException; it's a special exception that is
> automatically rethrown at the end of every catch block, unless you
> call Thread.ResetAbort().
>
>
> ASP .Net methods like Response.End or Response.Redirect (unless you
> pass false) throw this exception to end processing of the current
> page; your someFunctionCall() is probably calling one of those
> methods.
>
>
> ASP .Net itself handles this exception and calls ResetAbort to
> continue processing.
>
>
>
|
Getting false instead of true in ProLog
I have a database of product information in a shop, which looks like this: `product(Barcode, ProductName, Category, Refrigerated, VendorName)`.
I am trying to write a (very) basic operation to check whether two products have the same barcode.
My code:
```
sameCode(product(code1,_,_,_,_),product(code2,_,_,_,_)):- code1=code2.
```
Yet while checking it I got:
```
52 ?- sameCode(product(972000456745, matches, 05, false, fire_ltd),product(972000456745, lighter, 05, false, fire_ltd)).
false.
```
So I tried to tell it to always return `true`, by making it a fact with no conditions:
```
sameCode(product(code1,_,_,_,_),product(code2,_,_,_,_)).
```
Yet I still got `false`. Any ideas why this is happening?
|
This is because `code1` & `code2` are [atoms](http://www.cse.unsw.edu.au/~billw/prologdict.html#atom), and since you’re not passing in the atoms `code1` & `code2`, it doesn’t match and returns `false`. You want [variables](http://www.cse.unsw.edu.au/~billw/prologdict.html#variable) (which begin with a capital letter or underscore) instead:
```
sameCode(product(Code1,_,_,_,_), product(Code2,_,_,_,_)) :- Code1 = Code2.
```
This can be simplified to just:
```
sameCode(product(Code,_,_,_,_), product(Code,_,_,_,_)).
```
as `Code` is, of course, unified with itself.
|
Is there an equivalent of canvas's toDataURL method for SVG?
I am trying to load an SVG image into canvas for pixel manipulation.
I need a method like `toDataURL` or `getImageData` for SVG.
On Chrome/Safari I can try doing it through an image and canvas:
```
var img = new Image()
img.onload = function(){
    ctx.drawImage(img,0,0) //this correctly draws the svg image to the canvas! however...
    var dataURL = canvas.toDataURL(); //SECURITY_ERR: DOM Exception 18
    var data = ctx.getImageData(0,0,img.width, img.height).data //also SECURITY_ERR: DOM Exception 18
}
img.src = "image.svg" //that is an svg file. (same domain as html file :))
```
But I get security errors.
Any other way?
Here is a live demo of the problem <http://clstff.appspot.com/gist/462846> (you can view source)
|
From: <http://www.svgopen.org/2009/papers/12-Using_Canvas_in_SVG/#d4e105>
>
> The reason why you cannot use an SVG
> image element as source for the
> drawImage method is simple, but
> painful: the current Canvas
> specification does not (yet) allow to
> reference SVGImageElement as source
> for drawImage and can only cope with
> HTMLImageElement, HTMLCanvasElement
> and HTMLVideoelement. This
> short-coming will hopefully be
> addressed during the process of
> defining "SVG in HTML5" behavior and
> could be extended to allow
> SVGSVGElement as well. The xhtml:img
> element in listing 3 uses
> visibility:hidden as we do not want it
> to interfere with its visible copy on
> the Canvas.
>
>
>
|
How do I reshape this dataset in Python pandas?
Say I have a dataset like this:
```
is_a is_b is_c population infected
1 0 1 50 20
1 1 0 100 10
0 1 1 20 10
...
```
How do I reshape it to look like this?
```
feature 0 1
a 10/20 30/150
b 20/50 20/120
c 10/100 30/70
...
```
In the original dataset, I have features `a`, `b`, and `c` as their own separate columns. In the transformed dataset, these same variables are listed under column `feature`, and two new columns `0` and `1` are produced, corresponding to the values that these features can take on.
In the original dataset where `is_a` is `0`, add `infected` values and divide them by `population` values. Where `is_a` is `1`, do the same, add `infected` values and divide them by `population` values. Rinse and repeat for `is_b` and `is_c`. The new dataset will have these fractions (or decimals) as shown. Thank you!
I've tried `pd.pivot_table` and `pd.melt` but nothing comes close to what I need.
|
After doing the `wide_to_long`, your question is clearer:
```
df=pd.wide_to_long(df,['is'],['population','infected'],j='feature',sep='_',suffix='\w+').reset_index()
df
population infected feature is
0 50 20 a 1
1 50 20 b 0
2 50 20 c 1
3 100 10 a 1
4 100 10 b 1
5 100 10 c 0
6 20 10 a 0
7 20 10 b 1
8 20 10 c 1
df.groupby(['feature','is']).apply(lambda x : sum(x['infected'])/sum(x['population'])).unstack()
is 0 1
feature
a 0.5 0.200000
b 0.4 0.166667
c 0.1 0.428571
```
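Since the question mentions having tried `pd.melt`, here is an equivalent melt-based sketch (column names taken from the sample data; the intermediate variable names are my own) that produces the same fractions:
```
import pandas as pd

df = pd.DataFrame({
    'is_a': [1, 1, 0],
    'is_b': [0, 1, 1],
    'is_c': [1, 0, 1],
    'population': [50, 100, 20],
    'infected': [20, 10, 10],
})

# Melt the is_* indicator columns into a single 'feature' column.
m = df.melt(id_vars=['population', 'infected'], var_name='feature', value_name='is')
m['feature'] = m['feature'].str.replace('is_', '', regex=False)

# Sum infected and population per (feature, is) group, then divide.
g = m.groupby(['feature', 'is'])[['infected', 'population']].sum()
out = (g['infected'] / g['population']).unstack()
print(out)
```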
|
What's the difference between pthread\_mutex\_trylock's return and pthread\_mutex\_lock's return
I read the Linux man page and OpenGroup for `pthread_mutex_lock` and get this:
>
> If successful, the pthread\_mutex\_lock() and pthread\_mutex\_unlock()
> functions shall return zero, otherwise, an error number shall be
> returned to indicate the error.
>
>
> The pthread\_mutex\_trylock() function shall return zero if a lock on the mutex object referenced by mutex is acquired. Otherwise,
> an error number is returned to indicate the error.
>
>
>
1. I was confused by these two lines. If both functions return zero on success and an error number otherwise, why is this written as two separate statements?
2. I know a mutex can be locked and unlocked, but what does *a mutex is acquired* mean?
|
In this context *mutex is acquired* means that there was no thread holding the lock at the time. If the mutex is recursive, the call to `pthread_mutex_trylock()` will succeed unless it has been recursively locked too many times.
You can think of `pthread_mutex_trylock()` as a non-blocking call, where if it would have blocked, it returns with an error instead. If it returns success, it means you have the lock as if `pthread_mutex_lock()` returned successfully. If it fails with `EBUSY`, it means some other thread is holding the lock. If it fails with `EOWNERDEAD`, the lock was held by another thread, but that thread had died (getting the lock actually succeeded, but the current data state may not be consistent). If it fails with `EAGAIN`, it was locked recursively too many times. There are other failure reasons, but in those cases, the lock has not been acquired.
```
int error = pthread_mutex_trylock(&lock);
if (error == 0) {
    /*... have the lock */
    pthread_mutex_unlock(&lock);
} else if (error == EBUSY) {
    /*... failed to get the lock because another thread holds lock */
} else if (error == EOWNERDEAD) {
    /*... got the lock, but the critical section state may not be consistent */
    if (make_state_consistent_succeeds()) {
        pthread_mutex_consistent(&lock);
        /*... things are good now */
        pthread_mutex_unlock(&lock);
    } else {
        /*... abort()? */
    }
} else {
    switch (error) {
    case EAGAIN: /*... recursively locked too many times */
    case EINVAL: /*... thread priority higher than mutex priority ceiling */
    case ENOTRECOVERABLE:
        /*... mutex suffered EOWNERDEAD, and is no longer consistent */
    default:
        /*...some other as yet undocumented failure reason */
    }
}
```
The `EAGAIN`, `EINVAL`, `ENOTRECOVERABLE`, and `EOWNERDEAD` also happen with `pthread_mutex_lock()`. For more information, consult the [documentation](http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html) and [man page](http://linux.die.net/man/3/pthread_mutex_trylock).
|
Keyring Warning when running pip list -o
I've been trying to run `pip list -o` and `pip list --outdated` to see if any packages need to be updated but it enters a loop of printing: `WARNING: Keyring is skipped due to an exception: Failed to create the collection: Prompt dismissed..`
I've upgraded keyring and the version was already up-to-date. I've seen this keyring warning whilst using `pip install {package} --upgrade` to upgrade other packages as well.
|
I searched the web about that topic and found this [GitHub issue](https://github.com/pypa/pip/issues/8090).
If your pip version is older than "21.1", you can try to upgrade pip to the latest version with the `pip install --upgrade pip` command.
Also, as a workaround, you can consider the following answer from `jrd` in the above link:
>
> Exporting `PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring` prevent python from using any keyring. `PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring pipenv update` does not ask me anything anymore.
> So, as a temporary solution, one might want to put this in a .env file.
>
>
>
|
call a partial view using @url.action click using jquery
I want to call a partial view via @Url.Action. I am displaying the records using Url.Action and want to load the partial view when the user clicks on a record.
Here is the markup from which I want to call the partial view when the user clicks on it:
```
<td>
<a href="@Url.Action("Details", new { id=item.TeamId})">
@Html.DisplayFor(modelItem => item.TeamName)
</a>
</td>
```
Here is the div in which I am placing the partial view:
```
<div id="detailsPlace" class="dialog_content3" style="display:none"></div>
@Html.Partial("_TeamDetails")
</div>
```
Here is the partial view which I want to render:
```
@model light.ViewModels.ViewDetailTeam
@{
var item = Model.Team;
}
<div class="dialogModal_header">@Html.Label(item.TeamName)</div>
<div class="dialogModal_content">
<div class="main-content">
<div class="navi-but">
@Html.Label(item.TeamName)
</div>
@Html.Label(item.Description)
</div>
</div>
</div>
```
And here is my controller:
```
public ActionResult Details(int id)
{
lightCoreModel.User loggedInUser = new lightCoreModel.User();
ViewDetailTeam viewDetailTeam = new ViewDetailTeam();
ViewData["DetailModel"] = viewDetailTeam;
viewDetailTeam.Retrieve(id);
return PartialView("_TeamDetails",viewDetailTeam);
}
```
Now I am facing this problem with the popup; it shows me the following screen.

|
You would need Ajax to do this. First, add a script block in your view with this code:
```
<script type="text/javascript">
$(function () {
$('.details').click(function () {
var $buttonClicked = $(this);
var id = $buttonClicked.attr('data-id');
$.ajax({
url: '@Url.Action("Details")',
type: 'GET',
data: { id: id },
success: function (partialView) {
$('#detailsPlace').html(partialView);
$('#detailsPlace').show();
}
});
});
});
</script>
```
Then change your anchor tag to this:
```
<a href="#" class="details" data-id="@item.TeamId">Details</a>
```
The ajax call will be fired whenever an element with the class of details is clicked. Once clicked, the Id that is stored in the data-id attribute will be passed along to the controller. When your controller passes the partial view back, the partial view will be loaded in the success function of the ajax call, and the detailsPlace will be shown, since its display is set to none.
|
Lua: print integer as a binary
How can I represent an integer in binary,
so I can print `7` as `111`?
|
You write a function to do this.
```
num=7

function toBits(num)
    -- returns a table of bits, least significant first.
    local t={} -- will contain the bits
    while num>0 do
        rest=math.fmod(num,2)
        t[#t+1]=rest
        num=(num-rest)/2
    end
    return t
end

bits=toBits(num)
print(table.concat(bits))
```
In Lua 5.2 you already have bitwise functions which can help you ([bit32](http://www.lua.org/manual/5.2/manual.html#6.7)).
---
Here is the most-significant-first version, with optional leading 0 padding to a specified number of bits:
```
function toBits(num,bits)
    -- returns a table of bits, most significant first.
    bits = bits or math.max(1, select(2, math.frexp(num)))
    local t = {} -- will contain the bits
    for b = bits, 1, -1 do
        t[b] = math.fmod(num, 2)
        num = math.floor((num - t[b]) / 2)
    end
    return t
end
```
|
Confusion with Haskell do blocks
I have the following code:
```
doSomething :: [Int] -> [Int]
doSomething arg = arg ++ [1]
afterThreeTurns = do
  first <- ["test"]
  doSomething [1] -- COMMENT THIS
  return first
```
This returns:
```
*Main> afterThreeTurns
["test","test"]
```
If I take out the line marked COMMENT THIS, it returns `["test"]` as expected. Why? The way I see it, `doSomething` should have no effect on `first`.
|
Since `doSomething [1]` is `[2,1]`, your code is equivalent to:
```
afterThreeTurns = do
  first <- ["test"]
  x <- [2,1]
  return first
```
This is the same as the list comprehension `[ first | first <- ["test"], x <- [2,1] ]` which explains why you are getting a list of length 2.
Note that the variable `x` is not referenced anywhere, so this could also be written:
```
afterThreeTurns = do
  first <- ["test"]
  _ <- [2,1]
  return first
```
Here is an analogous case using the `IO` monad. The code:
```
thirdLine = do
  getLine
  getLine
  x <- getLine
  putStrLn $ "The third line is: " ++ x
```
is the same as:
```
thirdLine = do
  _ <- getLine
  _ <- getLine
  x <- getLine
  putStrLn $ "The third line is: " ++ x
```
You can get ghc to flag these kinds of monadic statements with the `-fwarn-unused-do-bind` compiler flag. In your example ghc will emit the warning:
```
...: Warning:
A do-notation statement discarded a result of type ‘Int’
Suppress this warning by saying ‘_ <- doSomething [1]’
or by using the flag -fno-warn-unused-do-bind
```
|
SaveAs won't accept strings that contain "." in Excel VBA
I am using the following code in order to add a new workbook, then save and name the workbook (based on a date which is located in a certain cell of the sheet).
```
Dim wb As Workbook
Dim wbName As String

wbName = ThisWorkbook.Sheets("Sheet1").Range("M145").value
fName = Application.GetSaveAsFilename(wbName)

If fName = False Then
    MsgBox "Publish couldn't be completed since you didn't choose where to save the file."
    Exit Sub
Else
    Set wb = Workbooks.Add
    wb.SaveAs (fName)
End If
```
But it seems that whenever cell "M145" contains dots (".") as in "31.3.16", my file name doesn't appear in the SaveAs prompt and I see a blank line without any error message.
I don't think this has anything to do with it, but my sheet is right-to-left. Does anyone have an idea of how to fix this?
|
While I'm not able to replicate the error, perhaps you will have better luck with a `FileDialog` object:
```
Dim wb As Workbook
Dim wbName As String
Dim fdlg As FileDialog

wbName = ThisWorkbook.Sheets("Sheet1").Range("M145").value

Set fdlg = Application.FileDialog(msoFileDialogSaveAs)
With fdlg
    .InitialFileName = wbName
    .Show
    Set wb = Workbooks.Add
    On Error Resume Next 'Suppress any errors due to invalid filename, etc.
    wb.SaveAs(fdlg.SelectedItems(1))
    If Err.Number <> 0 Then
        MsgBox "Publish couldn't be completed since you didn't choose where to save the file."
        wb.Close False 'Get rid of the workbook since it's not being saved
        Exit Sub
    End If
    On Error GoTo 0 'Resume normal error handling
End With
```
|
(Why) Should I learn a new programming language?
I'm quite proficient with Java, C/C++, JavaScript/jQuery and decently good at Objective-C. I'm quite productive with the languages and their corresponding frameworks too and do produce enterprise level systems (and also small scale ones) with sufficient ease all the while keeping code 'clean' and maintainable (yes, I can read my own code after six months :)
Unless mandated by the platform (iPhone, iPad, etc.) or by the client/implementation organization, just "why" should I learn a new programming language? Just for "fun"? And do what with that fun if I'm not going to do anything worthwhile with it?
A lot of my peers are ready to dive in to learn the "next new thing/language" and it's usually Python, Ruby or PHP (just naming a few popular ones). Now, just knowing the language by itself is futile IMHO. You also need to know the frameworks, learn their usage/APIs as well as 'good implementation practices', etc.
So from an 'economic' sense, is there any benefit in learning a new programming language? If the language is learned in a quick and dirty fashion, it'll probably also be used for quick and dirty prototyping/implementation - but I don't see THAT as a justifiable investment of time/effort.
So just WHY should I (or anyone for that matter) learn a new programming language other than "it's fun so let's try it out" - if the investment of time may not be worth it in the long run?
|
From *The Pragmatic Programmer*, Tip #8 "Invest Regularly in Your Knowledge Portfolio":
>
> Learn at least one new language every year. Different languages
> solve the same problems in different ways. By learning several different
> approaches, you can help broaden your thinking and avoid
> getting stuck in a rut. Additionally, learning many languages is far
> easier now, thanks to the wealth of freely available software on the
> Internet
>
>
>
It's not about the next "new thing". It's about thinking in different ways outside of your normal thought patterns.
There is a saying, "When you're a hammer, everything looks like a nail". Maybe there is a better way to solve a problem using some other technology. If you don't explore, you may not know that it was available.
|
How do I run certain code/function before the program exits in D?
Suppose I have loop which awaits for user input. If user presses Ctrl+C the program exits normally. However, I'd like to do a couple of things before exit. Is it possible to run a function once Ctrl+C was pressed and program is about to exit?
|
You could use `core.stdc.signal`, which contains bindings to the C header `signal.h`. Now, if this is for Windows, you might run into some [problems](https://msdn.microsoft.com/en-us/library/xdkz3x12.aspx):
>
> SIGINT is not supported for any Win32 application. When a
> CTRL+C interrupt occurs, Win32 operating systems generate a new thread to
> specifically handle that interrupt. This can cause a single-thread
> application, such as one in UNIX, to become multithreaded and cause
> unexpected behavior.
>
>
>
```
__gshared bool running = true;

extern(C) void handleInterrupt(int) nothrow @nogc
{
    running = false;
}

void main()
{
    import core.stdc.signal;
    signal(SIGINT, &handleInterrupt);

    scope(exit)
    {
        //Cleanup
        import std.stdio : writeln;
        writeln("Done");
    }

    while(running)
    {
        //Do some work
    }
}
```
|
Distinction between positive and negative overflow in C# arithmetic operation
I am performing arithmetic operations on integers in a checked scope in C#/.NET, in order to catch when an overflow happens. I want to find out if the overflow was positive or negative, in a short, smart, simple way, without a lot of special cases and checks depending on the operands or operation.
```
checked
{
    try
    {
        // The code below is an example operation that will throw an
        // overflow exception, that I expect to be a positive overflow.
        // In my real code, all arithmetic operations are included in this
        // code block and can result in both positive and negative overflows.
        int foo = int.MaxValue;
        int bar = 1;
        foo += bar;
    }
    catch (OverflowException)
    {
        // I have found out that an overflow occurred,
        // but was it positive or negative?
    }
}
```
Can it be done? I found no information in the exception itself to use to find out.
|
## TL;DR:
>
> I want to find out if the overflow was positive or negative, in a short, smart, simple way, without a lot of special cases and checks depending on the operands or operation.
>
>
>
**You can't**: C# doesn't expose that information because:
- CPUs today don't make it easy to detect overflow direction.
- While it *can be done*, the necessary steps required to introspect the CPU's state *post-mortem* will wreck performance on modern superscalar processors.
- The alternative is to perform safety-checks before performing any arithmetic, but that also ruins performance.
- And that's just for the x86/64 alone. There's probably about a dozen radically different CPU ISAs that .NET CLR now supports, and having to handle all of their own overflow/carry/sign idiosyncrasies, to ensure that C# programs all behave the same and correctly for your proposed "`checked`-with-overflow-direction" just isn't feasible.
- All of the per-ISA logic happens [in .NET's JIT component, which is already a hideously complex beast some 21 years in the making now](https://github.com/dotnet/runtime/tree/main/src/coreclr/jit).
- That single repo directory has *15 megabytes* of C and C++ source code in it. That's a lot.
- There is very little value in knowing the direction of an arithmetic overflow. The important thing is that the system detected that overflow happened, *that means there's a bug in your code* that you need to go-fix: and because you'll need to reproduce the problem as part of normal debugging practices it means you'll be able to trace execution in full detail and capture every value and state, which is all you need to correct whatever the underlying issue was that caused overflow - whereas knowing that minor detail of overflow direction during normal runtime execution helps... how?
- (That's a rhetorical question: I don't believe it significantly helps at all, and might even be a red-herring that just wastes your time)
## Longer answer:
### Problem 1: CPUs don't care about the direction of overflow
**Some background**: in pretty much every microprocessor today there are these [special-function-registers](https://en.wikipedia.org/wiki/Special_function_register) (aka *CPU flags*, aka *status registers*), [which are often similar to these 4 in ARM](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/condition-codes-1-condition-flags-and-codes):
- `N` - Negative
- `Z` - [Zero flag](https://en.wikipedia.org/wiki/Zero_flag)
- `C` - [Carry flag](https://en.wikipedia.org/wiki/Carry_flag)
- `V` - [Signed overflow flag](https://en.wikipedia.org/wiki/Overflow_flag)
And of course, the *CS-theoretical* basic design of an ALU (the bit that does arithmetic) is such that integer operations are the same, regardless of whether they're signed or unsigned, positive or negative (e.g. subtraction is addition with negative operands), and the flags *by themselves* don't automatically signal an error (e.g. the overflow flag is ignored for unsigned arithmetic, while the carry-flag is actually less significant in signed arithmetic than unsigned).
(This post won't explain what they represent or how they work as I assume that *you*, my erudite reader, [is already familiar with the basic fundamentals of computer integer arithmetic](https://www.quora.com/What-is-the-difference-in-carry-and-overflow-flag-during-binary-multiplication))
Now, *you might assume* that in a `checked` block in a C#/.NET program that the native machine code will check the status of these CPU flags after each-and-every arithmetic operation to see if that immediately previous operation had a signed-overflow or unexpected bit-carry - and if so to pass that information in a call/jump to the CLR's internal function that creates and throw the `OverflowException`.
...and *to an extent* that is what happens, except that *surprisingly little useful information* can realistically be gotten from the CPU. Here's why:
- In a C# `checked` block on x86/x64, the CLR's JIT inserts an x86/x64 `jo [CORINFO_HELP_OVERFLOW]` instruction after every arithmetic instruction that might overflow.
- [You can see it in this Godbolt example.](https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename:%271%27,fontScale:14,fontUsePx:%270%27,j:1,lang:csharp,selection:(endColumn:14,endLineNumber:1,positionColumn:14,positionLineNumber:1,selectionStartColumn:14,selectionStartLineNumber:1,startColumn:14,startLineNumber:1),source:%27using+System%3B%0A%0Apublic+static+class+Program%0A%7B%0A%09public+static+void+Main()%0A%09%7B%0A%09%09ClassWithVTable+obj+%3D+new+ClassWithVTable()%3B%0A%09%09%0A%09%09checked%0A%09%09%7B%0A%09%09%09Int32+x+%3D+obj.GetInt32()%3B%0A%09%09%09Int32+y+%3D+obj.GetInt32()%3B%0A%09%09%09Int32+z+%3D+x+%2B+y%3B%0A%09%09%09%0A%09%09%09Console.WriteLine(+z+)%3B%0A%09%09%7D%0A%09%7D%0A%7D%0A%0Apublic+class+ClassWithVTable%0A%7B%0A%09//+The+%60DateTime.UtcNow.Year+%3E%3D+2022%60+check+is+necessary+to+stop+JIT+compiler+optimizations+making+it+jump+to+%60CORINFO_HELP_OVERFLOW%60+immediately.%0A%09public+virtual+Int32+GetInt32()+%3D%3E+DateTime.UtcNow.Year+%3E%3D+2022+%3F+Int32.MaxValue+:+0%3B%0A%7D%27),l:%275%27,n:%270%27,o:%27C%23+source+%231%27,t:%270%27)),k:50,l:%274%27,n:%270%27,o:%27%27,s:0,t:%270%27),(g:!((h:compiler,i:(compiler:dotnet601csharp,filters:(b:%270%27,binary:%271%27,commentOnly:%270%27,demangle:%270%27,directives:%270%27,execute:%271%27,intel:%270%27,libraryCode:%271%27,trim:%271%27),flagsViewOpen:%271%27,fontScale:14,fontUsePx:%270%27,j:1,lang:csharp,libs:!(),options:%27%27,selection:(endColumn:1,endLineNumber:1,positionColumn:1,positionLineNumber:1,selectionStartColumn:1,selectionStartLineNumber:1,startColumn:1,startLineNumber:1),source:1,tree:%271%27),l:%275%27,n:%270%27,o:%27.NET+6.0.101+(C%23,+Editor+%231,+Compiler+%231)%27,t:%270%27)),k:50,l:%274%27,n:%270%27,o:%27%27,s:0,t:%270%27)),l:%272%27,n:%270%27,o:%27%27,t:%270%27)),version:4)
- `CORINFO_HELP_OVERFLOW` is the address of [the native function `JIT_Overflow`](https://github.com/dotnet/runtime/blob/5c57f2c0cda44176e237574ceb51d659ef9915fa/src/coreclr/vm/jithelpers.cpp#L4176) that (eventually) calls `RealCOMPlusThrowWorker` to throw the `OverflowException`.
- Note that the `jo` instruction **is only capable of telling us that the Overflow flag was set**: it doesn't expose or reveal the state of any of the other CPU flags, nor the sign of the instruction's operands, so **the `jo` instruction cannot be used** to tell if the overflow was (to use your terminology) a "negative overflow" nor "positive overflow".
- [Oh, and the `jo` instruction itself is surprisingly expensive](http://danluu.com/integer-overflow/).
- So if programs want more information than just "[it overflowed, Jim](https://www.youtube.com/watch?v=MH7KYmGnj40)" it will need to use CPU instructions that save/copy the rest of the CPU flags state into memory, and if those flags aren't sufficient enough to determine the *direction* of overflow then the JIT compiler will also have to retain copies of all arithmetic operands in-memory somewhere, which in-practice means increasing your stack space drastically or wasting CPU registers holding old values that you don't want to drop until your arithmetic operation succeeds.
- ...unfortunately the CPU instructions used to copy CPU flags to memory or other registers tend to wreck overall system performance:
- Consider the sheer complexity of modern CPU designs, what with their superscalar, speculative and out-of-order execution, and other neat gizmos: [modern CPUs work best when programs follow a predictable "*happy path*"](https://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-processing-an-unsorted-array) which don't use too many awkward instructions that mess around with the CPU's internal state. So altering a program to be more introspective will harm more than just your own program's performance; it affects the entire computer system. Oog.
- [This comment from Rust contributor *Tom-Phinney* summarizes the situation well](https://internals.rust-lang.org/t/adding-access-to-the-carry-flag-bit/12854/13):
>
> Instruction-level access to a "carry bit", so that the value can be used as an input to a subsequent instruction, was trivial to implement in the early days of computing when each instruction was completed **before** the next instruction was begun.
>
>
> For a modern, out-of-order, superscalar processor implementation that cost/benefit is reversed; the cost in gates and/or instruction-cycle slowdown of the "carry-bit feature" far, far outweighs any possible benefit. That is why RISC-V, which is a state-of-the-art computer architecture whose expected implementations span the range from embedded processors of 10k gate complexity (e.g., RV32EC) to superscalar processors with 100x more gates, does not materialize an instruction-stream-synchronous carry bit.
>
>
>
## Problem 2: Hetereogenity
- The .NET CLR is ostensibly portable: .NET has to run on every platform Windows supports, and other platforms as per the whims of Microsoft's C-levels and D-levels: today it runs on x86/x64, [different varieties of ARM](https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265) (including [Apple Silicon](https://www.mfractor.com/blogs/news/net-6-for-mac-with-apple-silicon-developers)), and in the past [Itani](https://www.intel.com/pressroom/archive/releases/2003/20031028dev.htm)[~~c~~um](https://www.networkworld.com/article/3628450/the-itanic-finally-sinks.html), while the XNA build ran on the Xbox 360's PowerPC chip, and the Compact Framework supported SH-3/SH-4, MIPS, and I'm sure dozens others. Oh, and don't forget how Silverlight had its own edition of the CLR, which ultimately became the basis for .NET Core and now .NET 5 - which replaced .NET Framework 4.x - [and Silverlight also ran on PowerPC back in 2007](https://web.archive.org/web/20070419225539/http://www.microsoft.com/silverlight/asp/system-requirements-mac.aspx).
- Or in list form, an off-the-top-of-my-head list of all the ISAs that *official* .NET CLR implementations have supported... that I can think of:
- x86/x64
- ARM / ARM-Thumb
- SH-3 (Compact Framework)
- SH-4 (Compact Framework)
- MIPS (Compact Framework)
- PowerPC (Silverlight 1.0 on PPC Mac, XNA on Xbox 360)
- Itanium IA-64
- So that's a nice variety - I'm sure there's others I've forgotten, not to mention all the platforms that Mono supported.
- What do all of these processors/ISAs have in common? Well, *they all have their own different ways of handling integer overflow* - sometimes quite *very differently*.
- For example, some ISAs (like [MIPS](https://people.cs.pitt.edu/%7Edon/coe1502/current/Unit4a/Unit4a.html)) raise a hardware exception (like divide-by-zero) on overflow instead of setting a flag.
- While .NET is fairly portable already, the granddaddy of portability is probably the venerable *C Programming Language*: if there's an ISA out there then someone's certainly written a C compiler for it. For all of C's life and history from the early 1970s through to today (2022) it *never featured support for checked arithmetic* (it's UB) because doing-so would be a lot of work for something not really needed in systems-programming which tends to use a lot of intentional unchecked overflows and bitwise operations.
- ...though [C23 (for release in 2023) does (finally) add checked arithmetic to the standard library](https://stackoverflow.com/a/20956705/159145). It only took *50 years* though...
- ...though of course C compilers were always free to add extensions to support checked arithmetic, but it was never a part of the *portable C* language.
- For C programmers who needed it, they [had to resort to gnarly (and performance-killing) workarounds involving validating every operand](https://stackoverflow.com/questions/3944505/detecting-signed-overflow-in-c-c) before each operation and aborting-early instead of performing the calculation and then checking CPU flags, again, because there's zero consistency in overflow handling in each of the myriad of CPUs/archs that C supports.
- ...so if C, of all programming languages, backed by all the major players and international standards organizations, had this much trouble with arithmetic overflow then we really can't expect Microsoft to handle that degree of complexity - heck, I must say we're actually very lucky that we even have support for `checked` arithmetic at all considering that .NET's progenitor, Java, [didn't support checked arithmetic until Java 8, and only for 2 operations, which also doesn't reveal the direction of overflow either](https://stackoverflow.com/a/26016643/159145).
|
Graphviz outputs a mess
I'm trying to visualise the following data in dot format using GraphViz:
```
digraph n {
node [nodesep=2.0, fontsize=11];
graph [ overlap=false];
edge[weight=0.2];
A -> B [label="I/5"]
A -> A [label="*/0"]
A -> C [label="ii/2"]
A -> H [label="vii/1"]
B -> D [label="iii/1"]
B -> E [label="IV/2"]
B -> F [label="V/2"]
B -> B [label="vi/2"]
C -> F [label="V/2"]
C -> H [label="vii/1"]
D -> E [label="IV/2"]
E -> D [label="iii/2"]
E -> G [label="iv/1"]
E -> F [label="V/3"]
F -> B [label="I/4"]
F -> B [label="vi/1"]
F -> B [label="*/0"]
G -> B [label="I/5"]
H -> B [label="vi/1"]
}
```
Running the following command:
`neato -Tpng Chord_NFA.dot > Chord_NFA.png`
gives me this output:

I'd like it to fit on A4 and for none of the edges to overlap or cross over a node. This is possible (mathematically) because I did it myself with a biro.
|
I played with it a bit and got this:
```
digraph n {
node [nodesep=2.0, fontsize=11];
graph [overlap = false];
subgraph cluster_a {
style=invisible;
A; B; H;
}
subgraph cluster_b {
style=invisible;
D; E; G;
}
subgraph cluster_c {
style=invisible;
C; F;
}
A -> B [label="I/5"];
A -> A [label="*/0"];
A -> C [label="ii/2"];
A -> H [label="vii/1"];
B:w -> D [label="iii/1"];
B:nw -> E [minlen=3 label="IV/2"];
B -> F [minlen=2 label="V/2"];
B -> B [label="vi/2"];
C -> F [minlen=2 label="V/2"];
C -> H [label="vii/1"];
D -> E [label="IV/2"];
D -> E [minlen=2 dir=back label="iii/2"];
G -> E [minlen=2 dir=back label="iv/1"];
F -> E [dir=back label="V/3"];
B -> F [minlen=2 dir=back label="I/4"];
B -> F [minlen=2 dir=back label="vi/1"];
B -> F [minlen=2 dir=back label="*/0"];
B -> G [dir=back label="I/5"];
H -> B [label="vi/1"];
}
```
Compile with:
```
dot -Tpng -o Chord_NFA.png Chord_NFA.gv
```
The output is this, without any line crossings:

The trick is:
1. To add the minlen attribute to force some separation, giving more spacing for rendering without overlaps and crossings.
2. To invert the logic of some edges (rendering them uninverted with dir=back). This way, dot always sees an acyclic graph and can order the edges without getting confused.
3. To focus on some subgraphs first, grouping their nodes in clusters to give them some "protection" from interference when rendering the rest of the graph.
|
Where to validate nonce in OAuth 2.0 Implict Flow?
I have the following architecture.
[](https://i.stack.imgur.com/D35yf.png)
Where:
- Client - is a single page JavaScript application.
- Authorisation server - is Azure AD.
- Resource server - is an Azure App Service using [Azure AD authentication](https://learn.microsoft.com/en-gb/azure/app-service/app-service-authentication-overview).
- All communications are secured using HTTPS.
I am using [Implicit Flow](https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols-implicit) to access a JWT access token from Azure AD.
```
https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=id_token+token
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&scope=openid%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
&response_mode=fragment
&state=12345
&nonce=678910
```
This JWT token is then later passed to the resource server as a Bearer authorization. The same token could be reused multiple times before it expires.
As part of the Authorize request I pass state and a nonce value.
Presently I validate the state on my client in JavaScript using a simple `if`:
```
function isValid() {
if (token.state !== expectedState) {
return false;
}
...
}
```
If I understand correctly the nonce is to prevent replay attacks - which I assume meant against my resource server, but perhaps also against the [client](https://security.stackexchange.com/questions/3001/what-is-the-use-of-a-client-nonce).
I am unsure where (or if) I should validate the nonce.
Validating it on the server doesn't seem right: the token as a whole is validated there, and the token is meant to be reusable (within its expiry).
On the client, seems to be a better location, but is that any different to validating the state?
|
>
> I am unsure where (or if) I should validate the nonce.
>
>
>
Of course, you should validate the nonce, because the `nonce` **is** required and it will be returned as a claim in the `id_token`. When you validate the `id_token`, you simply validate the nonce claim along with it. The nonce exists to mitigate **token replay attacks** (someone who wants to replay the token won't know the nonce, and each token has a different nonce that identifies the origin of the request).
There is a clear explanation for nonce for AAD v2 endpoint:
#**`nonce`** (required)
>
> A value included in the request, **generated by the app, that will be
> included in the resulting `id_token` as a claim. The app can then verify
> this value to mitigate token replay attacks.** The value is typically a
> randomized, unique string that can be used to identify the origin of
> the request.
>
>
>
So, you can just validate the id\_token to validate the nonce.
>
> but is that any different to validating the state?
>
>
>
Yes, the effect of nonce is different from state. First, nonce will be returned in the `id_token` and you can validate it when you decode and validate the `id_token`. But `state` is returned in the response, not in the token. Also, `state` has different meaning and effect from nonce.
#**`state`** (recommended)
>
> A value included in the request that will also be returned in the
> token response. It can be a string of any content that you wish. **A
> randomly generated unique value is typically used for preventing
> [cross-site request forgery attacks](https://www.rfc-editor.org/rfc/rfc6749#section-10.12)**. The state is also used to encode
> information about the user's state in the app before the
> authentication request occurred, such as the page or view they were
> on.
>
>
>
Additionally, a replay attack is different from a cross-site request forgery attack. You can read up on these two attacks for more details; then you will understand why the `nonce` is in the token and the `state` is in the response.
**Whether to validate the nonce (token) at the client**
For the `id_token`: yes, it should be validated at the client.
For an SPA with implicit flow, we can use [ADAL.js](https://github.com/AzureAD/azure-activedirectory-library-for-js) to validate the `id_token`, which contains the `nonce` claim, and thereby mitigate token replay attacks.
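If you are not using a library, a minimal illustrative check (the function name is my own; this does **not** verify the token signature, issuer or expiry - a library such as ADAL.js handles all of that for you) could decode the `id_token` payload and compare its `nonce` claim with the value you generated before redirecting to the authorize endpoint:
```
// Illustrative sketch only: compare the nonce claim in the id_token with the expected value.
// Signature/issuer/expiry validation and base64url padding handling are omitted for brevity.
function isNonceValid(idToken, expectedNonce) {
    var payload = idToken.split('.')[1];   // JWT = header.payload.signature
    var claims = JSON.parse(atob(payload.replace(/-/g, '+').replace(/_/g, '/')));
    return claims.nonce === expectedNonce;
}
```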
Hope this helps!
|
Google Analytics within iPhone SDK 4 Built App
Three questions for iPhone developers using Google Analytics within their apps for tracking use of their apps:
1. Will using Google Analytics cause us to be in breach of the terms and conditions of the Apple SDK 4 for developers?
2. If the answer to #1 is YES, then what are we -- as iPhone developers -- allowed to use to track usage of our apps?
3. Has anyone who is using the iPhone SDK 4 built their apps that included the Google Analytics library and found it not to work? I'm being told by my developer that it doesn't work when you build with a Base SDK set to iPhone Device 4.0 and and an iPhone OS Deployment Target set to iPhone OS 3.0.
Thanks in advance!
|
Answers to your questions:
1. Yes, with the current API and data collection it looks like it is not compliant with the terms of the SDK. I am using both Flurry and Google in my apps because they offer different feature sets that I need. While Flurry has been very vocal that they are working with Apple to resolve the terms of the SDK, Google hasn't said a peep. So, I'm nervous about using Google but not Flurry, because I think Flurry will change their data gathering if Apple presses hard enough. In any event, I have made sure that I can rip out either analytics service quickly if Apple rejects my app.
2. My understanding is that it's fine to collect your own device data, as long as you don't report it to others. You especially don't want to share any data that would hint at new devices. That's what got Flurry in trouble. Just remember that you are under NDA with Apple, so anything not publicly announced is between you and Apple. There are also rules about what you do with Device IDs, so make sure you understand those as well.
3. I am using Google analytics on iOS 4, with deployment target set to 3.0. While I no longer have a device to test against 3.0, it is deployed on the AppStore and seems to be working. (No crash reports)
|
Unable to deserialize list directly inside rootelement using Jackson XML
I am unable to deserialize a list that is directly a child of the root element. I have tried various kinds of things.
My code:
```
private static class Cards {
@JacksonXmlElementWrapper(localName = "Cards")
private List<Card> cards;
public List<Card> getCards() {
return new ArrayList<>(cards);
}
}
private static class Card {
@JsonProperty("Name")
private String name;
@JsonProperty("Image")
private String image;
@JsonProperty("CardType")
private String cardType;
private final Map<String, Integer> resources = new HashMap<>();
private boolean duplicateResources = false;
private final List<String> duplicateResourceNames = new ArrayList<>();
@JsonAnySetter
private void addResource(final String name, final Object value) {
if (resources.containsKey(name)) {
duplicateResources = true;
duplicateResourceNames.add(name);
}
resources.put(name, Integer.parseInt(value.toString()));
}
public String getName() {
return name;
}
public String getImage() {
return image;
}
public String getCardType() {
return cardType;
}
@JsonAnyGetter
public Map<String, Integer> getResources() {
if (duplicateResources) {
throw new UncheckedCardLoadingException("Resources " + duplicateResourceNames + " have duplicate entries");
}
return new HashMap<>(resources);
}
}
```
And:
```
ObjectMapper xmlMapper = new XmlMapper();
Cards cards = xmlMapper.readValue(path.toFile(), Cards.class);
```
When trying to deserialize the following XML:
```
<Cards>
<Card>
<Name>test</Name>
<Image></Image>
<CardType>test</CardType>
</Card>
</Cards>
```
It gives as error:
>
> com.cardshifter.core.cardloader.CardLoadingException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "Card" (class com.cardshifter.core.cardloader.XmlCardLoader$Cards), not marked as ignorable (one known property: "Cards"])
> at [Source: C:\Users\Frank\Dropbox\NetbeansProjects\Cardshifter\cardshifter-core\target\test-classes\com\cardshifter\core\cardloader\single-card.xml; line: 3, column: 9] (through reference chain: com.cardshifter.core.cardloader.Cards["Card"])
>
>
>
|
First of all, look at these:
```
@JacksonXmlElementWrapper(localName = "Cards")
private List<Card> cards;
```
And then look at the error:
```
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "Card" (class com.cardshifter.core.cardloader.XmlCardLoader$Cards)
```
Nowhere does it say *"Card"* in your class.
Secondly, after fixing that, here's how I solved your entire loading:
```
private static class Cards {
@JacksonXmlElementWrapper(localName = "Card")
@JsonProperty("Card")
private List<Card> card = new ArrayList<>();
@JsonSetter
public void setCard(Card card) {
this.card.add(card);
}
}
```
The `setCard` method simply tells Jackson what to do whenever it encounters a `Card` element: it deserializes it as a `Card`, and the method body then adds it to the list.
|
on-change doesn't work on v-select
I tried to use a v-select who display all countries. so i did :
```
<v-select on-change="updateCountryId" label="country_name" :options="countries" ></v-select>
```
It works great and displays my countries, but the function updateCountryId doesn't seem to work:
```
methods: {
updateCountryId: function() {
alert('ok');
}
}
```
but I never see the "ok".
To import vue-select I did:
```
<script src="/js/vue-select/vue-select.js"> </script>
```
I use it in a Twig file, so in my vue-select.js I rewrote what I found at <https://unpkg.com/[email protected]> but replaced the {{ }} delimiters with <% %>.
PS: I already tried v-on:change, @change and onChange.
My code looks like this (I skip things I judge useless):
```
<div id="General">
<div class="form-group">
<label>Pays :</label>
<v-select onChange="updateCountryId" label="country_name" :options="countries" ></v-select>
</div>
.
.
.
<script src="/js/vue-select/vue-select.js"> </script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/vue/1.0.13/vue.min.js"></script>
<script>
Vue.config.delimiters = ['<%', '%>'];
Vue.component('v-select', VueSelect.VueSelect);
var vm = new Vue({
el: "#General",
data: {
countries: [],
},
filters: {
},
methods: {
updateCountryId: function () {
console.log('ok');
alert('ok');
},`
```
|
You are missing the colon :
```
Vue.component('v-select', VueSelect.VueSelect);
new Vue({
el: '#app',
data: {
options: ['one', 'two', 'three'],
selected: ''
},
methods: {
updateCountryId: function() {
alert('ok');
}
}
});
```
```
<script src="https://unpkg.com/vue@latest"></script>
<!-- use the latest release -->
<script src="https://unpkg.com/vue-select@latest"></script>
<div id="app">
<v-select :on-change="updateCountryId" label="country_name" :options="options" :searchable="false" ></v-select>
</div>
```
# Update
you need to use [unpkg.com/[email protected]](http://unpkg.com/[email protected]) because version 1 is not **compatible** with the current version of `Vuejs`
|
Swift 5.5: Asynchronously iterating line-by-line through a file
In the ["Platforms State of the Union" video of WWDC2021](https://developer.apple.com/wwdc21/102) at 28:00 it was mentioned that
>
> [Apple] even added support for asynchronously iterating line-by-line through a file
>
>
>
in Foundation for macOS 12/iOS 15 and Swift 5.5.
What is that new API, how can I now asynchronously iterate line-by-line through a file?
|
The main thing they added that enables this, is `AsyncSequence`. `AsyncSequence` is like `Sequence`, but its `Iterator.next` method is `async throws`.
Specifically, you can use [`URLSession.AsyncBytes.lines`](https://developer.apple.com/documentation/foundation/urlsession/asyncbytes/3767336-lines) to get an `AsyncSequence` of the lines in a file.
Assuming you are in an `async throws` context, you can do:
```
let (bytes, response) = try await URLSession.shared.bytes(from: URL(string: "file://...")!)
for try await line in bytes.lines {
// do something...
}
```
Note that there is also [`FileHandle.AsyncBytes.lines`](https://developer.apple.com/documentation/foundation/filehandle/asyncbytes/3766668-lines), but in the [documentation](https://developer.apple.com/documentation/foundation/filehandle/3766681-bytes) it says:
>
> Rather than creating a `FileHandle` to read a file asynchronously, you can instead use a file:// URL in combination with the async-await methods in `URLSession`. These include the `bytes(for:delegate:)` and `bytes(from:delegate:)` methods that deliver an asynchronous sequence of bytes, and `data(for:delegate:)` and `data(from:delegate:)` to return the file’s entire contents at once.
>
>
>
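As a side note, there is also a convenience property directly on `URL` for local files (a minimal sketch below; the path is made up, and this assumes macOS 12/iOS 15 and an `async throws` context):
```
let url = URL(fileURLWithPath: "/path/to/some/file.txt")
// url.lines is an AsyncLineSequence over the file's bytes
for try await line in url.lines {
    print(line) // process each line as it arrives
}
```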
|
ServiceStack.NET Windows Authentication (NTLM) in ASP.NET MVC
How to implement Windows Authentication in a ServiceStack project build on ASP.NET MVC4?
I started with a global Request-Filter added in the `AppHost`:
```
private void ConfigureAuth(Funq.Container container)
{
this.RequestFilters.Add((httpReq, httpResp, requestDto) =>
{
var user = HttpContext.Current.User.Identity;
if (!user.IsAuthenticated ||
!user.Name.Contains(_myTestUser)) //todo: check username here in database (custom logic) if it has access to the application
httpResp.ReturnAuthRequired();
});
}
```
This opens up a login dialog which, if the credentials are entered correctly (the username exists, the password is valid, and `_myTestUser` is set to that username), results in a successful response.
If anything is wrong, the login dialog is shown again. -- That sounds OK to me.
But after retyping the correct user in that second login window, it stops working. The dialog opens again, as if the credentials were incorrect again. No breakpoint is hit inside the filter function.
Any idea what might cause this?
Thats what i added in the web.config:
```
<authentication mode="Windows"/>
<authorization>
<deny users="?" /> <!--only allow authenticated users-->
</authorization>
```
I want to completely lock up the website and enable access to specified windows users in the database only with their specific permissions (roles). I need to implement custom logic to access the "list of users and roles".
Maybe there is an other way to do this in MVC4/ ASP.NET?
|
# ServiceStack Custom Authentication for Windows Intranets
I have been batting my head against this all day and have come up with the following.
First the use case:
You are on a corporate intranet using Windows Authentication. You set up authentication mode="Windows" in your web.config and that's it!
Your strategy is this:
1. You don't know who the user is because they are not in your table of users or ActiveDirectory group or whatever. In this case you give them the role of "guest" and trim the UI accordingly. Maybe give them an email link to request access.
2. You have the user in your list of users but they have not been assigned a role. So give them the role of "user" and trim the UI as above. Maybe they can see their stuff but nothing else.
3. The user is in your list and has been assigned a role. Initially you will assign the role by manually updating the UserAuth table in the database. Eventually you will have a service that will do this for authorised users.
So let's get to the code.
## Server Side
In ServiceStack Service layer we create a Custom Credentials Authorisation Provider as per <https://github.com/ServiceStack/ServiceStack/wiki/Authentication-and-authorization>
```
public class CustomCredentialsAuthProvider : CredentialsAuthProvider
{
public override bool TryAuthenticate(IServiceBase authService, string userName, string password)
{
//NOTE: We always authenticate because we are always a Windows user!
// Yeah, it's an intranet
return true;
}
public override void OnAuthenticated(IServiceBase authService, IAuthSession session, IOAuthTokens tokens, Dictionary<string, string> authInfo)
{
// Here is why we set windows authentication in web.config
var userName = HttpContext.Current.User.Identity.Name;
// Strip off the domain
userName = userName.Split('\\')[1].ToLower();
// Now we call our custom method to figure out what to do with this user
var userAuth = SetUserAuth(userName);
// Patch up our session with what we decided
session.UserName = userName;
session.Roles = userAuth.Roles;
// And save the session so that it will be cached by ServiceStack
authService.SaveSession(session, SessionExpiry);
}
}
```
And here is our custom method:
```
private UserAuth SetUserAuth(string userName)
{
// NOTE: We need a link to the database table containing our user details
string connStr = ConfigurationManager.ConnectionStrings["YOURCONNSTRNAME"].ConnectionString;
var connectionFactory = new OrmLiteConnectionFactory(connStr, SqlServerDialect.Provider);
// Create an Auth Repository
var userRep = new OrmLiteAuthRepository(connectionFactory);
// Password not required.
const string password = "NotRequired";
// Do we already have the user? IE In our Auth Repository
UserAuth userAuth = userRep.GetUserAuthByUserName(userName);
if (userAuth == null)
{
    // We don't have them, so start from a fresh record and give them the role of guest
    userAuth = new UserAuth { UserName = userName };
    userAuth.Roles.Clear();
    userAuth.Roles.Add("guest");
    // NOTE: we are only allowing a single role here
}
// If we do then give them the role of user
// If they are one of our team then our administrators have already given them a role via the setRoles/removeRoles api in ServiceStack
...
// Now we re-authenticate our user
// NB We need userAuthEx to avoid clobbering our userAuth with the out param
// Don't you just hate out params?
// And we re-authenticate our reconstructed user
UserAuth userAuthEx;
var isAuth = userRep.TryAuthenticate(userName, password, out userAuthEx);
return userAuth;
}
```
In appHost Configure add the following ResponseFilters at the end of the function
```
ResponseFilters.Add((request, response, arg3) => response.AddHeader("X-Role",request.GetSession(false).Roles[0]));
ResponseFilters.Add((request, response, arg3) => response.AddHeader("X-AccountName", request.GetSession(false).UserName));
```
This sends some additional headers down to the client so that we can trim the UI as per the user's role.
## Client Side
On the client side, when we make our first request to the server, we POST a UserName and Password as required by Custom Authentication. Both are set to "NotRequired" as we will know who the user is on the server side via HttpContext.Current.User.Identity.Name.
The following uses AngularJS for AJAX comms.
```
app.run(function($templateCache, $http, $rootScope) {
// Authenticate and get X-Role and X-AccountName from the response headers and put it in $rootScope.role
// RemeberMe=true means that the session will be cached
var data={"UserName" : "NotRequired", "Password" : "NotRequired", "RememberMe": true };
$http({ method : 'POST', url : '/json/reply/Auth', data : data }).
success(function (data, status, headers, config) {
// We stash this in $rootScope for later use!
$rootScope.role = headers('X-Role');
$rootScope.accountName = headers('X-AccountName');
console.log($rootScope.role);
console.log($rootScope.accountName);
}).
error(function (data, status, headers, config) {
// NB we should never get here because we always authenticate
toastr.error('Not Authenticated\n' + status, 'Error');
});
};
```
|
Is Java 9 modularity for WAR files as well?
A Java EE 7 application server, and specifically a Servlet 3.1 container should allow us to deploy a WAR file that contains a web application. And a WAR file is essentially a JAR with deployment descriptor (`web.xml`) and couple of other elements.
On the other hand, Java 9 introduces a notion of **modular JAR file**, that is, a JAR with module descriptor (`module-info.class`) which makes a module in terms of project Jigsaw.
So, can a WAR file contain a module descriptor as well and make a "modular WAR file"?
Are there already application servers capable of accepting deployment of a "modular WAR file"?
Generally what is the future of servlet containers and WAR files in terms of Java 9 modularity ?
|
Currently there are no links between Java EE and Java 9 (Jigsaw) modules.
Future Java EE releases might introduce spec-defined behavior on how to interact with Jigsaw modules.
At this point all app servers run in "classpath mode" when running on JDK 9, which in practice means they do not utilize Jigsaw.
In most cases, introducing a module-info to your WAR deployment can only cause deployment issues, as the JDK might try to load it in a different way than other parts of the app server. Or it could make no difference at all, as most app servers have custom logic for loading JARs and classes.
Some application servers out there today already implement some kind of modularity, such as OSGi (GlassFish, Liberty) or JBoss Modules (WildFly).
But at this time Jigsaw still has some limitations that prevent running such modular systems on top of it.
So in short, until there is an EE spec (probably 9+) that is updated to describe how EE deployments should run on a Jigsaw-enabled runtime, there is no "official" way such a deployment should behave.
Until such time, each app server can implement some custom support for it, but it won't be standard.
|
How to insert 40000 records fast into an sqlite database in an iPad
I want to insert 40000 records that I get from a web service into an SQLite database in my iPad app.
I wrote the following code, but it takes around 20 minutes. Is there a faster way?
```
- (NSArray *)insertPriceSQLWithPrice:(Price *) price
{
SQLiteManager *dbInfo = [SQLiteManager sharedSQLiteManagerWithDataBaseName:@"codefuel_catalogo.sqlite"];
sqlite3 *database;
NSString *querySQL=[self formatStringQueryInsertWithTable:@"prices_list" andObject:price];
if(sqlite3_open([dbInfo.dataBasePath UTF8String], &database) == SQLITE_OK)
{
sqlite3_stmt * compiledStatement;
const char *query_stmt = [querySQL UTF8String];
int result = sqlite3_prepare_v2(database, query_stmt, -1, &compiledStatement, NULL);
if (result == SQLITE_OK)
{
int success = sqlite3_step(compiledStatement);
NSLog(@"el numero de success es -> %i",success);
if (success == SQLITE_ERROR)
NSLog(@"Error al insertar en la base de datps");
}
else
NSLog(@"Error %@ ERROR!!!!",querySQL);
sqlite3_finalize(compiledStatement);
}
sqlite3_close(database);
return nil;
}
```
|
There are three things that you need to do in order to speed up the insertions:
- **Move the call of `sqlite3_open` outside the loop.** Currently, the loop is not shown, so I assume it is outside your code snippet
- **Add `BEGIN TRANSACTION` and `COMMIT TRANSACTION`** calls - you need to begin transaction before the insertion loop and end it right after the loop is over.
- **Make `formatStringQueryInsertWithTable` truly parameterized** - Currently it appears that you are not using prepared statements to their fullest, because despite using `sqlite3_prepare_v2`, you have no calls of `sqlite3_bind_XYZ` in your code.
Here is [a nice post that shows you how to do all of the above](http://blog.quibb.org/2010/08/fast-bulk-inserts-into-sqlite/). It is plain C, but it will work fine as part of an Objective C program.
```
char* errorMessage;
sqlite3_exec(mDb, "BEGIN TRANSACTION", NULL, NULL, &errorMessage);
char buffer[] = "INSERT INTO example VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)";
sqlite3_stmt* stmt;
sqlite3_prepare_v2(mDb, buffer, strlen(buffer), &stmt, NULL);
for (unsigned i = 0; i < mVal; i++) {
std::string id = getID();
sqlite3_bind_text(stmt, 1, id.c_str(), id.size(), SQLITE_STATIC);
sqlite3_bind_double(stmt, 2, getDouble());
sqlite3_bind_double(stmt, 3, getDouble());
sqlite3_bind_double(stmt, 4, getDouble());
sqlite3_bind_int(stmt, 5, getInt());
sqlite3_bind_int(stmt, 6, getInt());
sqlite3_bind_int(stmt, 7, getInt());
if (sqlite3_step(stmt) != SQLITE_DONE) {
printf("Commit Failed!\n");
}
sqlite3_reset(stmt);
}
sqlite3_exec(mDb, "COMMIT TRANSACTION", NULL, NULL, &errorMessage);
sqlite3_finalize(stmt);
```
|
How to access and use index of each item inside ng-repeat
I have a table where the last column in each row contains a little loading icon which I would like to display when a button inside the table is clicked.
When each table row is generated with ng-repeat, the loader shows up in every row rather than the individual one. How can I set ng-show to true or false for only the current index clicked?
Template:
```
<tr ng-repeat="record in records">
<td>{{ record.name }}</td>
<td><a ng-click="someAction(record.name)">Some Action</a></td>
<td ng-show="loading">Loading...</td>
</tr>
```
Controller:
```
$scope.someAction = function(recordName) {
$scope.loading = true;
};
```
|
You can pass in the `$index` parameter and set/use the corresponding index. `$index` is automatically available in the scope of an `ng-repeat`.
```
<td><a ng-click="someAction(record.name, $index)">Some Action</a></td>
<td ng-show="loading[$index]">Loading...</td>
$scope.loading = []; // initialize once in the controller so indices can be set on it
$scope.someAction = function(recordName, $index) {
  $scope.loading[$index] = true;
};
```
Here's a generic sample with all the logic in the view for convenience: **[Live demo (click).](http://jsbin.com/AzifiFa/12/edit)**
```
<div ng-repeat="foo in ['a','b','c']" ng-init="loading=[]">
<p ng-click="loading[$index]=true">Click me! Item Value: {{foo}}<p>
<p ng-show="loading[$index]">Item {{$index}} loading...</p>
</div>
```
|
How do I tell how close I'm getting to somaxconn?
The sysctl option `net.core.somaxconn` defaults to 128 (on our systems) but can be raised.
1. What exactly is this limit measuring and capping?
2. How do I find out how close I am to the limit?
Context: I had a problem recently that appeared to be corrected by raising this limit. The problem was intermittent, so I don't trust that it is really fixed. I would like to find out if the current number of [whatever this setting caps] is greater than the previous maximum limit of 128.
|
`somaxconn` determines the maximum number of backlogged connections allowed for each TCP port on the system. Increasing it (recommended for servers) can prevent "connection refused" messages, but it can result in slow connections if the server can't handle the increased load.
You can check the current backlog with `netstat -ant | grep -c SYN_REC` according to [this page](http://www.beingroot.com/articles/apache/socket-backlog-tuning-for-apache). It will count how many connections are in the "SYN received" state, meaning the system has received a SYN packet (connection request) but hasn't acknowledged it yet.
If your system has `ss` installed, you can also use `ss -s` to display a summary of connections. Look for `synrecv` in the output, or `ss -s | grep -Po '(?<=synrecv )\d+(?=,)'` to just print the number.
|
Arguments to a template function aren't doing any implicit conversion
For some strange reason, I can't get the template arguments in this one piece of code to implicitly cast to a compatible type.
```
#include <type_traits>
template <typename T, unsigned D>
struct vec;
template <>
struct vec<float, 2> {
typedef float scalar;
static constexpr unsigned dimension = 2;
float x, y;
float& operator[] (unsigned i) { return (&x)[i]; }
float const& operator[] (unsigned i) const { return (&x)[i]; }
};
template <typename L, typename R>
struct add;
template <typename L, typename R, unsigned D>
struct add<vec<L, D>, vec<R, D>> {
typedef vec<L, D> left_type;
typedef vec<R, D> right_type;
typedef vec<typename std::common_type<L, R>::type, D> return_type;
add(left_type l, right_type r)
: left(l),
right(r)
{}
operator return_type() const
{
return_type result;
for (unsigned i = 0; i < D; ++i)
result[i] = left[i] + right[i];
return result;
}
left_type left;
right_type right;
};
template <typename L, typename R, unsigned D>
add<vec<L, D>, vec<R, D>>
operator+(vec<L, D> const& lhs, vec<R, D> const& rhs)
{
return {lhs, rhs};
}
int main()
{
vec<float, 2> a, b, c;
vec<float, 2> result = a + b + c;
}
```
Fails with:
```
prog.cpp: In function 'int main()':
prog.cpp:55:36: error: no match for 'operator+' in 'operator+ [with L = float, R = float, unsigned int D = 2u](((const vec<float, 2u>&)((const vec<float, 2u>*)(& a))), ((const vec<float, 2u>&)((const vec<float, 2u>*)(& b)))) + c'
```
So if I'm correct, the compiler *should* see the code in the main function as this:
- `((a + b) + c)`
- compute `a + b`
- cast the result of `a + b` from `add<...>` to `vec<float, 2>` using the conversion operator in `add<...>`
- compute `(a + b) + c`
But it never does the implicit cast. If I explicitly cast the result of (a + b) to a vec, the code works fine.
|
I'm going to side-step your actual problem and instead make a recommendation: Rather than writing all of this complicated boilerplate from scratch, have a look at [Boost](http://www.boost.org/).[Proto](http://www.boost.org/doc/libs/release/libs/proto/index.html), which has taken care of all the tricky details for you:
>
> Proto is a framework for building Domain Specific Embedded Languages in C++. It provides tools for constructing, type-checking, transforming and executing *expression templates*. More specifically, Proto provides:
>
>
> - An expression tree data structure.
> - A mechanism for giving expressions additional behaviors and members.
> - Operator overloads for building the tree from an expression.
> - Utilities for defining the grammar to which an expression must conform.
> - An extensible mechanism for immediately executing an expression template.
> - An extensible set of tree transformations to apply to expression trees.
>
>
>
See also the library author's [Expressive C++](http://cpp-next.com/archive/2010/08/expressive-c-introduction/) series of articles, which more-or-less serve as an (excellent) in-depth Boost.Proto tutorial.
|
XP\_DirTree in SQL Server
Variations to this have been asked. I have no problem searching a local directory with the below piece of code.
```
EXEC MASTER.sys.xp_dirtree 'C:\', 1, 1
```
When I switch the path to a network location the results are empty.
```
EXEC MASTER.sys.xp_dirtree '\\Server\Folder', 1, 1
```
I first thought maybe it was something to do with permissions. I added the SQL Server Service to the ACL list on the shared volume as well as the security group.
Any help or direction to point me in is greatly appreciated or even another way to get a list of files in a directory and sub directories.
[Edited]
|
The two things to look out for are:
- Make certain that the Log On account for the SQL Server service (the service typically listed as "SQL Server (MSSQLSERVER)" in the Services list) has rights to that network share.
>
> # UPDATE
>
>
> The problem ended up being that the O.P. was running the SQL Server service as a local system account. So, the O.P. created a domain account for SQL Server, assigned that new domain account as the Log On As account for the SQL Server service, and granted that domain account the proper NTFS permissions.
>
>
> Please note that this might have also been fixable while keeping the SQL Service running as a local system account by adding the server itself that SQL Server is running on to the NTFS permissions. This should usually be possible by specifying the server name followed by a dollar sign ($). For example: `MySqlServer01$`. Of course, this then gives that NTFS permission to *all* services on that server that are running as a local system account, and this might not be desirable. Hence, it is still preferable to create a domain account for the SQL Server service to run as (which is a good practice in any case!).
>
>
>
It sounds like this has been done, so it should be tested by logging onto windows directly as that account and attempting to go to that specific network path.
- Make sure that the Login in SQL Server that is executing `xp_dirtree` has "sysadmin" rights:
- This can be done directly by adding the account to the `sysadmin` server role, or
- Sign a stored procedure that runs `xp_dirtree` (a T-SQL sketch of these steps follows this list):
- Create a certificate in [master]
- Create a login based on that certificate
- Add the certificate-based login to the `sysadmin` server role
- Backup the certificate
- Restore the certificate into whatever database has, or will have, the stored procedure that runs `xp_dirtree`
- Sign the stored procedure that runs `xp_dirtree`, using [ADD SIGNATURE](https://learn.microsoft.com/en-us/sql/t-sql/statements/add-signature-transact-sql) and the certificate that was just restored
- GRANT EXECUTE on that stored procedure to the user(s) and/or role(s) that should be executing this.
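A rough T-SQL sketch of the signing steps above (every name, file path and password below is a placeholder, and `dbo.GetDirectoryListing` stands in for whatever stored procedure wraps `xp_dirtree`):
```
USE [master];
CREATE CERTIFICATE [DirTreeSigningCert]
    ENCRYPTION BY PASSWORD = N'SomeStrongPassword!'
    WITH SUBJECT = 'Signs procedures that need xp_dirtree';
CREATE LOGIN [DirTreeSigningLogin] FROM CERTIFICATE [DirTreeSigningCert];
ALTER SERVER ROLE [sysadmin] ADD MEMBER [DirTreeSigningLogin]; -- SQL 2012+; older versions use sp_addsrvrolemember

BACKUP CERTIFICATE [DirTreeSigningCert]
    TO FILE = N'C:\Temp\DirTreeSigningCert.cer'
    WITH PRIVATE KEY (FILE = N'C:\Temp\DirTreeSigningCert.pvk',
                      ENCRYPTION BY PASSWORD = N'SomeStrongPassword!',
                      DECRYPTION BY PASSWORD = N'SomeStrongPassword!');

USE [YourAppDatabase];
CREATE CERTIFICATE [DirTreeSigningCert]
    FROM FILE = N'C:\Temp\DirTreeSigningCert.cer'
    WITH PRIVATE KEY (FILE = N'C:\Temp\DirTreeSigningCert.pvk',
                      DECRYPTION BY PASSWORD = N'SomeStrongPassword!',
                      ENCRYPTION BY PASSWORD = N'SomeStrongPassword!');

ADD SIGNATURE TO [dbo].[GetDirectoryListing]
    BY CERTIFICATE [DirTreeSigningCert] WITH PASSWORD = N'SomeStrongPassword!';

GRANT EXECUTE ON [dbo].[GetDirectoryListing] TO [YourAppRole];
```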
Just to have it stated, another option is to do away with `xp_dirtree` altogether and instead use SQLCLR. There is probably sample C# code on various blogs. There are also a few CodePlex projects that have file system functions and might also provide a pre-compiled assembly for those that don't want to deal with compiling. And, there is also the [SQL#](https://SQLsharp.com/?ref=so_26765943) library that has several filesystem functions including `File_GetDirectoryListing` which is a TVF (meaning: you can use it in a SELECT statement with a WHERE condition rather than needing to dump all columns and all rows into a temp table first). It is also fully-streamed which means it is very fast, even for 100k or more files. Please note that the `FILE_*` functions are only in the Full version (i.e. not free) and I am the creator of SQL#, but it does handle this situation quite nicely.
|
Trying to interpolate JSX with a string
I currently have the welcome message on my landing page set up to display only "Hello" until a user types their name in the modal. I would love to take the "Hello" away and have it appear together with the user's name after the user clicks submit.
This is how it currently looks.
[](https://i.stack.imgur.com/RwkKl.png)
After I enter my name in the modal, it looks like this:
[](https://i.stack.imgur.com/StVv2.png)
I tried interpolating JSX with the string, but no luck. Can somebody help?
This is how I have it in my render.
```
<h1>{`Hello ${this.state.submitedFirstName && this.state.submitedFirstName}`}</h1>
```
|
You could use a conditional operator:
```
<h1>{ this.state.submitedFirstName ? ` Hello ${this.state.submitedFirstName}` : ""}</h1>
```
If `this.state.submitedFirstName` is truthy, this will render e.g. "Hello Mike"; if it is undefined or anything falsey (e.g. the empty string ""), it will render an empty string instead.
You can also move the conditional outside so it doesn't even render the h1 tag unless there is a name.
```
{this.state.submitedFirstName && <h1>Hello {this.state.submitedFirstName}</h1>}
```
---
Here's an update to explain why the original code didn't work. I'll break it down into the order it gets executed and explain:
The state `this.state.submitedFirstName` starts as undefined. Therefore, when the code is first run the left side of the && (AND operator) evaluates to `undefined`. Because the && is not satisfied the left side is returned from the expression:
```
console.log(this.state.submitedFirstName && this.state.submitedFirstName) // undefined (from the left side to be specific)
console.log(`${undefined}`) // "undefined" is converted to string
console.log(`Hello ${undefined}`) // "Hello undefined"
```
After the state is set `this.state.submitedFirstName === "Mike"`. The left side "Mike" evaluates to truthy so it returns the right side which is also "Mike".
```
console.log(this.state.submitedFirstName && this.state.submitedFirstName) // "Mike" (from the right side to be specific)
// console.log(`Hello ${"Mike"}`) // "Hello Mike"
```
That's why it either returns "Hello undefined" or "Hello Mike".
Check out [mdn for more info on the && and other logical operators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators).
|
Custom style to jQuery UI dialogs
I am trying to change jQuery UI dialog's default styles to something similar to this -

I got it to close changing some CSS in jQuery UI.
```
.ui-widget {
font-family: Verdana,Arial,sans-serif;
font-size: .8em;
}
.ui-widget-content {
background: #F9F9F9;
border: 1px solid #90d93f;
color: #222222;
}
.ui-dialog {
left: 0;
outline: 0 none;
padding: 0 !important;
position: absolute;
top: 0;
}
#success {
padding: 0;
margin: 0;
}
.ui-dialog .ui-dialog-content {
background: none repeat scroll 0 0 transparent;
border: 0 none;
overflow: auto;
position: relative;
padding: 0 !important;
}
.ui-widget-header {
background: #b0de78;
border: 0;
color: #fff;
font-weight: normal;
}
.ui-dialog .ui-dialog-titlebar {
padding: 0.1em .5em;
position: relative;
font-size: 1em;
}
```
HTML :
```
<div id="popup-msg">
<div id="loading">
<h2>Loading...</h2>
<h3>Please wait a few seconds.</h3>
</div>
<div id="success" title="Hurray,">
<p>User table is updated.</p>
</div>
</div>
```
[THIS IS FIDDLE](https://jsfiddle.net/qP8DY/23/)
But when I add this style it applies to all my dialogs. Can anybody tell me how I can avoid this problem?
Thank you.
|
See <https://jsfiddle.net/qP8DY/24/>
You can add a class (such as "success-dialog" in my example) to div#success, either directly in your HTML, or in your JavaScript by adding to the dialogClass option, as I've done.
```
$('#success').dialog({
height: 50,
width: 350,
modal: true,
resizable: true,
dialogClass: 'no-close success-dialog'
});
```
Then just add the success-dialog class to your CSS rules as appropriate. To indicate an element with two (or more) classes applied to it, just write them all together, with no spaces in between. For example:
```
.ui-dialog.success-dialog {
font-family: Verdana,Arial,sans-serif;
font-size: .8em;
}
```
|
How to calculate the trendline for stock price
I am trying to calculate and draw trendlines for stock prices. I did some searching and thought about it for a whole day, but couldn't find a really good approach.
I have daily price history and want to find the crossing points between the trendline and the price line.
Could you provide some ideas or guidance?
Thank you so much!!!
[](https://i.stack.imgur.com/wYjrP.png)
|
```
import pandas as pd
import quandl as qdl
from scipy.stats import linregress
# get AAPL 10 years data
data = qdl.get("WIKI/AAPL", start_date="2007-01-01", end_date="2017-05-01")
data0 = data.copy()
data0['date_id'] = ((data0.index.date - data0.index.date.min())).astype('timedelta64[D]')
data0['date_id'] = data0['date_id'].dt.days + 1
# high trend line
data1 = data0.copy()
while len(data1)>3:
reg = linregress(
x=data1['date_id'],
y=data1['Adj. High'],
)
data1 = data1.loc[data1['Adj. High'] > reg[0] * data1['date_id'] + reg[1]]
reg = linregress(
x=data1['date_id'],
y=data1['Adj. High'],
)
data0['high_trend'] = reg[0] * data0['date_id'] + reg[1]
# low trend line
data1 = data0.copy()
while len(data1)>3:
reg = linregress(
x=data1['date_id'],
y=data1['Adj. Low'],
)
data1 = data1.loc[data1['Adj. Low'] < reg[0] * data1['date_id'] + reg[1]]
reg = linregress(
x=data1['date_id'],
y=data1['Adj. Low'],
)
data0['low_trend'] = reg[0] * data0['date_id'] + reg[1]
# plot
data0['Adj. Close'].plot()
data0['high_trend'].plot()
data0['low_trend'].plot()
```
[](https://i.stack.imgur.com/7mT6L.png)
|
How to link library in CLion
I'm trying to use the NTL library for its ZZ class, and would like to use its dedicated functions. Unfortunately during compilation I'm getting a lot of errors:
```
[100%] Linking CXX executable hpc5
CMakeFiles/hpc5.dir/main.cpp.o: In function `findX(NTL::ZZ, NTL::ZZ, NTL::ZZ)':
/home/rooter/CLionProjects/hpc5/main.cpp:44: undefined reference to `find_xi(NTL::ZZ, NTL::ZZ)'
/home/rooter/CLionProjects/hpc5/main.cpp:57: undefined reference to `chinese_remainder(NTL::ZZ*, NTL::ZZ*, NTL::ZZ)'
/home/rooter/CLionProjects/hpc5/main.cpp:58: undefined reference to `NTL::operator<<(std::ostream&, NTL::ZZ const&)'
CMakeFiles/hpc5.dir/main.cpp.o: In function `NTL::ZZ::ZZ(NTL::ZZ const&)':
/usr/include/NTL/ZZ.h:58: undefined reference to `_ntl_gcopy(void*, void**)'
CMakeFiles/hpc5.dir/main.cpp.o: In function `NTL::ZZ::operator=(NTL::ZZ const&)':
/usr/include/NTL/ZZ.h:73: undefined reference to `_ntl_gcopy(void*, void**)'
CMakeFiles/hpc5.dir/main.cpp.o: In function `NTL::ZZ::operator=(long)':
/usr/include/NTL/ZZ.h:75: undefined reference to `_ntl_gintoz(long, void**)'
```
I have installed libntl-dev on my linux mint, added `set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -lntl" )` to my `CMakeLists.txt` and set CMake option `-lntl` and it has no effect. How can I link this library?
My CMakeLists.txt contains:
```
cmake_minimum_required(VERSION 3.10)
project(hpc5)
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -lntl" )
add_executable(hpc5 main.cpp)
```
|
If you want to link to a runtime library using CMake, you need to use [target\_link\_libraries](https://cmake.org/cmake/help/v3.3/command/target_link_libraries.html) command. For example, you may change your CMakeLists.txt file as follows:
```
cmake_minimum_required(VERSION 3.10)
project(hpc5)
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall" )
add_executable(hpc5 main.cpp)
target_link_libraries(hpc5 ntl)
```
This is assuming CMake is able to find the NTL library in your system.
EDIT : Fix executable name typo.
|
MySQL: What effect does DEFINER have on procedures and functions?
I use SQLyog to write procedures and functions in a remote MySQL database I am developing. The database has only one username/password. It is accessed via a front end executable application that I have written in Delphi and which is used by a couple of dozen different people, all using the same username.
When I use SQLYog to write a procedure using, for example,
```
CREATE PROCEDURE age_frequency_count(IN bin_size INT)
```
The resulting procedure gets the definer put in whether I like it or not, resulting in
```
CREATE DEFINER=<the_user_name>@<my_IP_address> PROCEDURE age_frequency_count(IN bin_size INT)
```
(I think this is being done by MySQL, not by the SQLYog DBMS.)
From reading the documentation and from posts here on SO, I have a rough, but rather confused idea of how the definer is used to say what permissions are needed to execute, or maybe alter, the procedure and what the procedure is allowed to do, but I would welcome some clarification.
**Question**
If my IP address is in the definer, can the procedure still be **executed** by other people who will be logging in from a different IP address (although with the same username)?
Can someone please clarify what the definer is doing? i.e. what can a connection from my IP address do that connections from other IP addresses can't?
|
I use the `DEFINER` clause to create a stored procedure with the security privileges of a powerful user that also has `UPDATE`, `DELETE`, `SELECT`, and `INSERT` rights to a particular database table. Then, I only grant `EXECUTE` on that stored procedure to a *minion* user (some people call it a *www* user, versus the more powerful *wwwproxy* user).
In this way, the *minion* can only execute designated stored procedures and has no `UPDATE`, `DELETE`, `SELECT`, `INSERT`, or other rights on a database table.
I hope that helps frame the idea behind the DEFINER clause. Use it to separate *power* from *tasks*.
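As a rough sketch of that separation (the database, table and account names here are made up purely for illustration):
```
-- The procedure runs with the privileges of the powerful definer...
CREATE DEFINER = 'wwwproxy'@'localhost' PROCEDURE age_frequency_count(IN bin_size INT)
    SQL SECURITY DEFINER
BEGIN
    SELECT FLOOR(age / bin_size) * bin_size AS age_bin, COUNT(*) AS freq
    FROM people
    GROUP BY age_bin;
END;

-- ...while the minion account only gets to call it:
GRANT EXECUTE ON PROCEDURE mydb.age_frequency_count TO 'www'@'%';
```
(When run from the mysql client you would wrap the procedure body in the usual `DELIMITER` change.)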
You are correct, by default, MySQL uses the identity of the current user as the `DEFINER` when creating a stored procedure. This identity could be the identity of the front-end application (so to speak), or, like I said, you can use a *proxy user* that has normal table privileges. Then the application user would be the `minion` with only one privilege on the stored procedure, `EXECUTE`.
In short, if the default `DEFINER` user does not represent what the front end application uses to login to the database, and you want it to, then you need to change the stored procedure with `ALTER`, if possible.
On the other hand, the better idea would be to use the minion/proxy scenario. Application users on the Internet have no bearing on the IP that ends up in the stored procedure's DEFINER clause. All that matters is the IP of where your app is logging in from to MySQL. Your app is talking to the database, not the user agents on peoples' computers. However, that notion is, generally, a point of initial confusion. You are fine!
Hope that helps.
|
Sun Tool-less Slide Rails Removal for X4150
I have some tool-less slide rails for some 1ru x4150 sun servers. I can't seem to see how to remove them. Sun in their wisdom did not include this seemingly important information in their documentation; they simply (possibly with a cheeky smile) say to...
>
> "Refer to the (long since lost) installation card included with the rackmount kit for instructions on attaching tool-less slide-rail assemblies to the rack."
>
>
>
So if anyone knows please let me know.
|
If I assume you are talking about square post hole rack rails, the way to remove them is less than obvious and almost invisible when they are installed in rack.
There are 2 ways to remove them.
1. Access above/below:
In this case there is a spring loaded tab that faces the outside of the racks that can be pressed in if you have smallish hands or wide racks. Tackle the back mount point first, push in the tab then pull on the sliding portion to release it, then repeat the process on the front.
2. Fully populated above and below:
The same technique applies but in this instance you should use a small, thin flat bladed screwdriver and insert it into the hole in the middle of the rail mounting area. You should only insert it approx 1cm into the hole. Gently lever the spring loaded tab towards the server itself, i.e. move the handle towards the outside of the rack. It should not require much force. Then as above, pull the rail towards you.
The imgur gallery here (<https://i.stack.imgur.com/JAwFJ.jpg>) shows the spring-loaded tab and the access hole highlighted in red in the first and second images respectively. In the second image it is orientated as it would be for a LEFT hand rail in the rack.
|
Closures vs Delegate pattern
I'm working with network request classes and I'm concerned about **crashes**. For instance, working with closures is really easy as you pass a callback method to a function:
```
// some network client
func executeHttpRequest(#callback: (success: Bool) -> Void) {
// http request
callback(true)
}
// View Controller
func reload() {
networkClient.executeHttpRequest() { (success) -> Void in
self.myLabel.text = "it succeeded" // NOTE THIS CALL
}
}
```
However, since the process that should execute the callback is async, when callbacks interact with container class element (in this case an `UIKit` class) it may be vulnerable to crashes in situations like
1. The user navigated to another View Controller while the async task was still executing
2. The user pressed the home button while the async task was still executing
3. Etc...
So, when the callback finally gets fired, `self.myLabel.text` might result in a crash, as the View Controller to which `self` was referring could already be deallocated.
**Up to this point: am I right, or does Swift implement something internally so that this never happens?**
If I am right, then here's where the delegate pattern comes in handy, as delegate variables are `weak references`, which means they do not keep the referenced object alive in memory.
```
// some network client
// NOTE this variable is an OPTIONAL and it's also a WEAK REFERENCE
weak var delegate: NetworkClientDelegate?
func executeHttpRequest() {
// http request
if let delegate = self.delegate {
delegate.callback(success: true)
}
}
```
Note how `self.delegate`, since it is a `weak reference`, it will point to `nil` if the View Controller (who implements the `NetworkClientDelegate` protocol) gets deallocated, and the callback is not called in that case.
My question would be: do closures have anything special that makes them a good choice in scenarios similar to this one, rather than going back to delegate pattern? It would be good if examples of closures (that won't end up in crashes due to nil pointer) are provided. Thanks.
|
>
> So, when the callback finally gets fired, self.myLabel.text might result in a crash, as the View Controller to whom self was referring could already be deallocated.
>
>
>
If `self` has been captured by the closure as a strong reference, it is guaranteed that `self` will *not* be deallocated until the closure has finished executing. That is, the view controller is still alive when the closure gets called - even if its view is not visible at this time. The statement `self.myLabel.text = "it succeeded"` will be executed, and even though the label may not be visible, it will not crash.
There is, though, a subtle issue which can lead to a crash under certain circumstances:
Suppose the closure holds the last and only strong reference to the view controller. The closure finishes, and subsequently gets deallocated, which also releases the last strong reference to the view controller. This inevitably calls the `dealloc` method of the view controller. The `dealloc` method will execute on the same thread where the closure was executed. Now, since the view controller is a `UIKit` object, it MUST be guaranteed that all methods sent to this object are executed on the *main thread*. So, if `dealloc` actually executes on some other thread, your code may crash.
A suitable approach would require to "cancel" an asynchronous task whose result is no longer needed by the view controller when it is "closed". This, of course, requires that your "task" can be cancelled.
To alleviate some issues with your former approach, you might capture a *weak* reference to your view controller instead of a *strong* reference when defining the closure. This would not prevent the asynchronous task from running to completion, but in the completion handler you can check whether the view controller is still alive, and just bail out if it does not exist anymore.
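For example, a minimal sketch using the names from your own snippet (`[weak self]` plus an early bail-out):
```
networkClient.executeHttpRequest() { [weak self] (success) -> Void in
    // `self` is captured weakly, so it becomes nil if the view controller
    // has already been deallocated by the time the callback fires.
    if let strongSelf = self {
        strongSelf.myLabel.text = "it succeeded"
    }
    // else: the view controller is gone - just bail out.
}
```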
And, if you need to "keep" a UIKit object in some closure which may execute on some arbitrary thread, be aware that this might be the *last* strong reference, and ensure this last strong reference gets released on the *main* thread.
See also: [Using weak self in dispatch\_async function](https://stackoverflow.com/questions/21987067/using-weak-self-in-dispatch-async-function/21988407#21988407)
*Edit:*
>
> My question would be: do closures have anything special that makes them a good choice in scenarios similar to this one, rather than going back to delegate pattern?
>
>
>
I would say, closures are the "better" approach in many use-cases:
Delegates are more prone to issues like circular references than closures (since they are "owned" by an object, and this object might be captured as a variable in the delegate).
The classic use-case for closure as completion handlers also improves the "locality" of your code making it more comprehensible: you state *what* shall happen when a task finished *right after* the statement invoking the task - no matter how long that may take.
The huge advantage with closures versus regular "functions" is that a closure captures the whole "context" at the time when it is defined. That is, it can refer to variables and "import" them into the closure at the time when it is defined - and use it when it executes, no matter *when* this happens, and when the original "stack" at definition-time is gone already.
|
Calculating a simple sum with Core Data
I have a simple Core Data store with an entity Cost with an integer property "value". I want to sum up all amounts of the costs in my store, which is equivalent to the following sql statement:
```
SELECT sum(value) FROM costs
```
How do I do it the most efficient way in Cocoa Touch? By using Core Data? Or just get all cost entities and do the summation manually?
|
The best way is to use a [fetch for specific values](http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreData/Articles/cdFetching.html#//apple_ref/doc/uid/TP40002484-SW6) and supply a [NSExpressionDescription with a `sum:` function](http://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSExpression_Class/Reference/NSExpression.html#//apple_ref/occ/clm/NSExpression/expressionForFunction:arguments:).
When you execute the fetch you get a one element array containing a dictionary whose keys match the expression descriptions and whose values are the results of the expressions. In this case, you would get a `sum` key whose value would be the sum of the attributes given the expression.
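For illustration, here is a minimal sketch in Swift (the question predates Swift, but the same classes are available from Objective-C); it assumes an entity named `Cost` with an integer attribute `value`, and that `context` is your `NSManagedObjectContext`:
```
import CoreData

// Ask the store itself to compute sum(value) over all Cost objects.
let request = NSFetchRequest<NSDictionary>(entityName: "Cost")
request.resultType = .dictionaryResultType

let sumDescription = NSExpressionDescription()
sumDescription.name = "totalValue"
sumDescription.expression = NSExpression(forFunction: "sum:",
                                         arguments: [NSExpression(forKeyPath: "value")])
sumDescription.expressionResultType = .integer64AttributeType
request.propertiesToFetch = [sumDescription]

do {
    // One-element array of dictionaries, keyed by the expression description's name.
    let result = try context.fetch(request)
    let total = result.first?["totalValue"] as? Int64 ?? 0
    print("Sum of all costs: \(total)")
} catch {
    print("Fetch failed: \(error)")
}
```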
|
Using LINQ query syntax with custom Maybe monad implementation
I am trying to write a simple `Maybe` monad in C#. I want to be able to use the LINQ query syntax with it. This is what I have come up with so far:
```
using System;
using System.Collections.Generic;
abstract class Maybe<A> {
public abstract Maybe<B> SelectMany<B>(Func<A, Maybe<B>> f);
public abstract Maybe<B> Select<B>(Func<A, B> f);
}
class Just<A> : Maybe<A> {
private readonly A a;
public Just(A a) {
this.a = a;
}
override public Maybe<B> SelectMany<B>(Func<A, Maybe<B>> f) {
return f(a);
}
override public Maybe<B> Select<B>(Func<A, B> f) {
return new Just<B>(f(a));
}
override public string ToString() {
return "Just " + a;
}
}
class Nothing<A> : Maybe<A> {
override public Maybe<B> SelectMany<B>(Func<A, Maybe<B>> f) {
return new Nothing<B>();
}
override public Maybe<B> Select<B>(Func<A, B> f) {
return new Nothing<B>();
}
override public string ToString() {
return "Nothing";
}
}
static class Program {
public static void Main(string[] args) {
Maybe<int> m = new Just<int>(12);
Maybe<int> n = new Nothing<int>();
Maybe<int> result = from m0 in m
from n0 in n
select m0 + n0;
Console.WriteLine(result);
}
}
```
And this is the error message:
```
prog.cs(48,25): error CS1501: No overload for method `SelectMany' takes `2' arguments
prog.cs(5,28): (Location of the symbol related to previous error)
Compilation failed: 1 error(s), 0 warnings
```
Can anyone please guide me on what I should do to be able to use query syntax with my `Maybe` implementation? Thanks.
|
SelectMany should also be declared as an extension method in a static class, for example:
```
public static class Maybe {
    public static Maybe<B> SelectMany<A, B>(this Maybe<A> maybe, Func<A, Maybe<B>> f) {
        // delegate to the instance method defined on Maybe<A>
        return maybe.SelectMany(f);
    }
    ...
}
```
**EDIT**:
you still need one more piece. With this it should work:
```
public static Maybe<V> SelectMany<T, U, V>(this Maybe<T> m, Func<T, Maybe<U>> k, Func<T, U, V> s)
{
return m.SelectMany(x => k(x).SelectMany(y => new Just<V>(s(x, y))));
}
```
You need this because:
```
from m0 in m
from n0 in n
select m0 + n0
```
would be translated in:
```
m.SelectMany(m0 => n, (m, n0) => m0 + n0);
```
Instead, for example:
```
var aa = new List<List<string>>();
var bb = from a in aa
from b in a
select b;
```
is translated in
```
aa.SelectMany(a => a);
```
|
How do I use the TABLE\_QUERY() function in BigQuery?
A couple of questions about the TABLE\_QUERY function:
- The examples show using `table_id` in the query string, are there other fields available?
- It seems difficult to debug. I'm getting "error evaluating subsidiary query" when I try to use it.
- How does `TABLE_QUERY()` work?
|
The `TABLE_QUERY()` function allows you to write a SQL `WHERE` clause that is evaluated to find which tables to run the query over. For instance, you can run the following query to count the rows in all tables in the `publicdata:samples` dataset that are older than 7 days:
```
SELECT count(*)
FROM TABLE_QUERY(publicdata:samples,
"MSEC_TO_TIMESTAMP(creation_time) < "
+ "DATE_ADD(CURRENT_TIMESTAMP(), -7, 'DAY')")
```
Or you can run this to query over all tables that have ‘git’ in the name (which are the `github_timeline` and the `github_nested` sample tables) and find the most common urls:
```
SELECT url, COUNT(*)
FROM TABLE_QUERY(publicdata:samples, "table_id CONTAINS 'git'")
GROUP EACH BY url
ORDER BY url DESC
LIMIT 100
```
Despite being very powerful, `TABLE_QUERY()` can be difficult to use. The `WHERE` clause must be specified as a string, which can be a little bit awkward. Moreover, it can be difficult to debug, since when there is a problem, you only get the error “Error evaluating subsidiary query”, which isn’t always helpful.
**How it works:**
`TABLE_QUERY()` essentially executes two queries. When you run `TABLE_QUERY(<dataset>, <table_query>)`, BigQuery executes `SELECT table_id FROM <dataset>.__TABLES_SUMMARY__ WHERE <table_query>` to get the list of table IDs to run the query on, then it executes your actual query over those tables.
The `__TABLES_SUMMARY__` portion of that query may look unfamiliar. `__TABLES_SUMMARY__` is a meta-table containing information about tables in a dataset. You can use this meta-table yourself. For example, the query `SELECT * FROM publicdata:samples.__TABLES_SUMMARY__` will return metadata about the tables in the `publicdata:samples` dataset.
**Available Fields:**
The fields of the `__TABLES_SUMMARY__` meta-table (that are all available in the `TABLE_QUERY` query) include:
- `table_id`: name of the table.
- `creation_time`: time, in milliseconds since 1/1/1970 UTC, that the table was created. This is the same as the `creation_time` field on the table.
- `type`: whether it is a view (2) or regular table (1).
The following fields are *not* available in `TABLE_QUERY()` since they are members of `__TABLES__` but not `__TABLES_SUMMARY__`. They're kept here for historical interest and to partially document the `__TABLES__` metatable:
- `last_modified_time`: time, in milliseconds since 1/1/1970 UTC, that the table was updated (either metadata or table contents). Note that if you use the `tabledata.insertAll()` to stream records to your table, this might be a few minutes out of date.
- `row_count`: number of rows in the table.
- `size_bytes`: total size in bytes of the table.
**How to debug**
In order to debug your `TABLE_QUERY()` queries, you can do the same thing that BigQuery does; that is, you can run the metatable query yourself. For example:
```
SELECT * FROM publicdata:samples.__TABLES_SUMMARY__
WHERE MSEC_TO_TIMESTAMP(creation_time) <
DATE_ADD(CURRENT_TIMESTAMP(), -7, 'DAY')
```
lets you not only debug your query but also see what tables would be returned when you run the `TABLE_QUERY` function. Once you have debugged the inner query, you can put it together with your full query over those tables.
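The same trick works for the name-based example above. For instance, to preview which tables the 'git' filter would select, using the fields listed earlier (a quick sketch):
```
SELECT table_id, creation_time, type
FROM publicdata:samples.__TABLES_SUMMARY__
WHERE table_id CONTAINS 'git'
```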
|
How do I draw a circular gradient?
How do I draw a circular gradient [like this](http://psfreak.com/images/tutorials/elegant-navigation-box/back.jpg) in vb.net?

|
Check out this [great page](https://web.archive.org/web/20140906083853/http://bobpowell.net/pgb.aspx). The code in the article is in C#. Here is a VB.NET port of the code you're interested in and updated for a rectangular fill (based on the article's triangle fill sample):
```
Dim pgb As New PathGradientBrush(New Point() { _
New Point(0, 0), _
New Point(0, Me.ClientRectangle.Height), _
New Point(Me.ClientRectangle.Width, Me.ClientRectangle.Height), _
New Point(Me.ClientRectangle.Width, 0)})
pgb.SurroundColors = New Color() {Color.Red}
pgb.CenterColor = Color.Gray
e.Graphics.FillRectangle(pgb, Me.ClientRectangle)
pgb.Dispose()
```
Here's another possible solution:
```
Dim pth As New GraphicsPath()
pth.AddEllipse(Me.ClientRectangle)
Dim pgb As New PathGradientBrush(pth)
pgb.SurroundColors = New Color() {Color.Red}
pgb.CenterColor = Color.Gray
e.Graphics.FillRectangle(pgb, Me.ClientRectangle)
```
Note that this last code snippet will draw a circle bounded inside of a rectangle. If you want the circular gradient to fill the entire rectangle you'll have to calculate a larger elliptic path with a larger rectangle.
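A rough sketch of that idea is below; the inflation amounts are just an illustrative guess, so tune them until the gradient looks right:
```
Dim bounds As Rectangle = Me.ClientRectangle
' Grow the ellipse bounds so the gradient reaches the rectangle's corners
bounds.Inflate(CInt(bounds.Width * 0.5), CInt(bounds.Height * 0.5))

Dim pth As New GraphicsPath()
pth.AddEllipse(bounds)
Dim pgb As New PathGradientBrush(pth)
pgb.SurroundColors = New Color() {Color.Red}
pgb.CenterColor = Color.Gray
e.Graphics.FillRectangle(pgb, Me.ClientRectangle)
pgb.Dispose()
pth.Dispose()
```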
|
How to show object bounding box when mouse hover objects in Fabricjs?
I want to show an object's bounding box when the mouse hovers over objects, like in this video. How do I do that?
[](https://i.stack.imgur.com/HWAJx.gif)
I'm using **canvas.on('mouse:over')** with the **selectedObj.drawBorders** function. However, the outline box is drawn in the wrong position. And I don't know how to clear the outline box when the mouse moves off the object.
Here is my code:
```
$(function() {
var canvasObject = document.getElementById("editorCanvas");
// set canvas equal size with div
$(canvasObject).width($("#canvasContainer").width());
$(canvasObject).height($("#canvasContainer").height());
var canvas = new fabric.Canvas('editorCanvas', {
backgroundColor: 'white',
selectionLineWidth: 2,
width: $("#canvasContainer").width(),
height: $("#canvasContainer").height()
});
canvas.viewportTransform[4] = 20;
canvas.viewportTransform[5] = 40;
canvas.on('mouse:over', function(opts) {
var selectedObj = opts.target;
if (selectedObj != null) {
selectedObj.drawBorders(canvas.getContext())
}
});
var text = new fabric.Text('hello world', { left: 50, top: 50 });
canvas.add(text);
setObjectCoords();
function setObjectCoords() {
canvas.forEachObject(function(object) {
object.setCoords();
});
}
});
```
```
<style>
#canvasContainer {
width: 100%;
height: 100vh;
background-color: gray;
}
</style>
```
```
<script src="https://code.jquery.com/jquery-3.3.1.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.4.3/fabric.js"></script>
<div id="canvasContainer">
<canvas id="editorCanvas"></canvas>
</div>
```
Please help me resolve this problem!
Thank you!
|
Use the [\_renderControls](http://fabricjs.com/docs/fabric.Object.html#_renderControls) method and, in its styleOverride argument, set `hasControls: false` to draw only the borders.
***DEMO***
```
$(function() {
var canvas = new fabric.Canvas('editorCanvas', {
backgroundColor: 'white',
selectionLineWidth: 2,
width: $("#canvasContainer").width(),
height: $("#canvasContainer").height()
});
var text = new fabric.IText('hello world', {
left: 50,
top: 50
});
canvas.add(text);
text.on('mouseover', function() {
this._renderControls(this.canvas.contextTop, {
hasControls: false
})
})
text.on('mousedown', function() {
this.canvas.clearContext(this.canvas.contextTop);
})
text.on('mouseout', function() {
this.canvas.clearContext(this.canvas.contextTop);
})
});
```
```
<style>
#canvasContainer {
width: 100%;
height: 100vh;
background-color: gray;
}
</style>
```
```
<script src="https://code.jquery.com/jquery-3.3.1.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.4.3/fabric.js"></script>
<div id="canvasContainer">
<canvas id="editorCanvas"></canvas>
</div>
```
|
Stop WordPress from 301 redirecting /index.php to /
I need to be able to browse to <http://www.example.com/index.php>, but WordPress automatically 301 redirects this to <http://www.example.com/>.
Is it possible to stop this redirection ONLY for the homepage?
Here is my .htaccess file:
```
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```
|
The redirection occurs in the `redirect_canonical` function. There is a filter applied to the redirect URL before the redirection occurs:
```
$redirect_url = apply_filters( 'redirect_canonical', $redirect_url, $requested_url );
```
If you hook into that filter you should be able to disable the redirection.
```
add_filter('redirect_canonical', function($redirect_url, $requested_url) {
    if($requested_url == home_url('index.php')) {
        return ''; // cancel the canonical redirect for the homepage
    }
    return $redirect_url; // leave all other redirects untouched
}, 10, 2);
```
I verified that this is working by adding the above filter to my theme's `functions.php` file. Note that this filter must be attached before the redirect action fires so placing the filter in a template file will not work.
|
Async two-way communication with Windows Named Pipes (.Net)
I have a windows service and a GUI that need to communicate with each other. Either can send messages at any time.
I'm looking at using NamedPipes, but it seems that you cant read & write to the stream at the same time (or at least I cant find any examples that cover this case).
Is it possible to do this kind of two-way communication via a single NamedPipe?
Or do I need to open two pipes (one from GUI->service and one from service->GUI)?
|
Using WCF you can use duplex named pipes
```
// Create a contract that can be used as a callback
public interface IMyCallbackService
{
[OperationContract(IsOneWay = true)]
void NotifyClient();
}
// Define your service contract and specify the callback contract
[ServiceContract(CallbackContract = typeof(IMyCallbackService))]
public interface ISimpleService
{
[OperationContract]
string ProcessData();
}
```
Implement the Service
```
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)]
public class SimpleService : ISimpleService
{
public string ProcessData()
{
// Get a handle to the call back channel
var callback = OperationContext.Current.GetCallbackChannel<IMyCallbackService>();
callback.NotifyClient();
return DateTime.Now.ToString();
}
}
```
Host the Service
```
class Server
{
static void Main(string[] args)
{
// Create a service host with an named pipe endpoint
using (var host = new ServiceHost(typeof(SimpleService), new Uri("net.pipe://localhost")))
{
host.AddServiceEndpoint(typeof(ISimpleService), new NetNamedPipeBinding(), "SimpleService");
host.Open();
Console.WriteLine("Simple Service Running...");
Console.ReadLine();
host.Close();
}
}
}
```
Create the client application, in this example the Client class implements the call back contract.
```
class Client : IMyCallbackService
{
static void Main(string[] args)
{
new Client().Run();
}
public void Run()
{
// Consume the service
var factory = new DuplexChannelFactory<ISimpleService>(new InstanceContext(this), new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/SimpleService"));
var proxy = factory.CreateChannel();
Console.WriteLine(proxy.ProcessData());
}
public void NotifyClient()
{
Console.WriteLine("Notification from Server");
}
}
```
|
Golang tour Switch evaluation order: time.Now().Weekday() + 2 yields runtime error: index out of range
I am learning Golang and was going through the tour when I found a tutorial on **Switch evaluation order**. I modified it a bit (e.g. Saturday to Sunday), just to play around. It printed *Too far away.* even for Sunday. So I modified the code to look like this:
```
package main
import (
"fmt"
"time"
)
func main() {
day := time.Monday
fmt.Printf("When's %v?\n", day)
today := time.Now().Weekday()
switch day {
case today + 0:
fmt.Println("Today.")
case today + 1:
fmt.Println("Tomorrow.", today + 1)
case today + 2:
fmt.Println("In two days.", today + 2)
default:
fmt.Println("Too far away.", today + 2)
}
}
```
Now, it gives me the output:
```
When's Monday?
Too far away. %!v(PANIC=runtime error: index out of range)
```
What can I do to MOD the index, instead of adding beyond the array bounds? It seems to me like some kind of operator overloading. Shouldn't it MOD on the add operation by default, at least in the case of days?
|
This is an implementation detail.
In this line
```
fmt.Println("In two days.", today + 2)
```
`today` is of type [`time.Weekday`](https://golang.org/pkg/time/#Weekday) which has `int` as its underlying type, `2` is an untyped integer constant, which will be converted to `time.Weekday` and the addition will be carried out.
The implementation of [`fmt.Println()`](https://golang.org/pkg/fmt/#Println) will check if values passed to it implement [`fmt.Stringer`](https://golang.org/pkg/fmt/#Stringer), and because `time.Weekday` does, its [`String()`](https://golang.org/pkg/time/#Weekday.String) method will be called whose implementation is:
```
// String returns the English name of the day ("Sunday", "Monday", ...).
func (d Weekday) String() string { return days[d] }
```
Where `days` is an array of 7 elements:
```
var days = [...]string{
"Sunday",
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday",
}
```
There is no range check in `Weekday.String()` because `time.Saturday + 2` for example is not a weekday. `Weekday.String()` only guarantees to work properly for the constants defined in the `time` package:
```
type Weekday int
const (
Sunday Weekday = iota
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
)
```
If you want to make it work, you have to use the remainder after dividing by 7, like this:
```
switch day {
case (today + 0) % 7:
fmt.Println("Today.")
case (today + 1) % 7:
fmt.Println("Tomorrow.", (today+1)%7)
case (today + 2) % 7:
fmt.Println("In two days.", (today+2)%7)
default:
fmt.Println("Too far away.", (today+2)%7)
}
```
|
How to convert Enumeration to Seq/List in scala?
I'm writing a servlet, and need to get all parameters from the request. I found `request.getParameterNames` returns a `java.util.Enumeration`, so I have to write code as:
```
val names = request.getParameterNames
while(names.hasMoreElements) {
val name = names.nextElement
}
```
I want to know whether there is any way to convert an `Enumeration` to a `Seq`/`List`, so that I can use the `map` method.
|
**Use JavaConverters**
See <https://stackoverflow.com/a/5184386/133106>
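In short, something like this (a minimal sketch; depending on your servlet API version `getParameterNames` may return a raw `Enumeration`, in which case you may need a cast first):
```
import scala.collection.JavaConverters._

val names = request.getParameterNames.asScala.toList
names.map(println)
```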
**Use a wrapper Iterator**
You could build up a wrapper:
```
val nameIterator = new Iterator[SomeType] { def hasNext = names.hasMoreElements; def next = names.nextElement }
```
**Use JavaConversions wrapper**
```
val nameIterator = new scala.collection.JavaConversions.JEnumerationWrapper(names)
```
**Using JavaConversions implicits**
If you import
```
import scala.collection.JavaConversions._
```
you can do it implicitly (and you’ll also get implicit conversions for other Java collections)
```
request.getParameterNames.map(println)
```
**Use Iterator.continually**
You might be tempted to build an iterator using `Iterator.continually` like an earlier version of this answer proposed:
```
val nameIterator = Iterator.continually((names, names.nextElement)).takeWhile(_._1.hasMoreElements).map(_._2)
```
but it's incorrect as the last element of the enumerator will be discarded.
The reason is that the `hasMoreElements` call in the `takeWhile` is executed after calling `nextElement` in the `continually`, thus discarding the last value.
|
Why is RSS distributed chi square times n-p?
I would like to understand why, under the OLS model, the RSS (residual sum of squares) satisfies $$\textrm{RSS}/\sigma^2 \sim \chi^2\_{n-p}$$ ($p$ being the number of parameters in the model, $n$ the number of observations).
I apologize for asking such a basic question, but I seem to not be able to find the answer online (or in my, more application oriented, textbooks).
|
I consider the following linear model: ${y} = X \beta + \epsilon$.
The vector of residuals is estimated by
$$\hat{\epsilon} = y - X \hat{\beta}
= (I - X (X'X)^{-1} X') y
= Q y
= Q (X \beta + \epsilon) = Q \epsilon$$
where $Q = I - X (X'X)^{-1} X'$.
Observe that $\textrm{tr}(Q) = n - p$ (the trace is invariant under cyclic permutation) and that $Q'=Q=Q^2$. The eigenvalues of $Q$ are therefore $0$ and $1$ (some details below). Hence, there exists a unitary matrix $V$ such that ([matrices are diagonalizable by unitary matrices if and only if they are normal.](http://en.wikipedia.org/wiki/Diagonalizable_matrix))
$$V'QV = \Delta = \textrm{diag}(\underbrace{1, \ldots, 1}\_{n-p \textrm{ times}}, \underbrace{0, \ldots, 0}\_{p \textrm{ times}})$$
Now, let $K = V' \hat{\epsilon}$.
Since $\hat{\epsilon} \sim N(0, \sigma^2 Q)$, we have $K \sim N(0, \sigma^2 \Delta)$ and therefore $K\_{n-p+1}=\ldots=K\_n=0$. Thus
$$\frac{\|K\|^2}{\sigma^2} = \frac{\|K^{\star}\|^2}{\sigma^2} \sim \chi^2\_{n-p}$$
with $K^{\star} = (K\_1, \ldots, K\_{n-p})'$.
Further, as $V$ is a unitary matrix, we also have
$$\|\hat{\epsilon}\|^2 = \|K\|^2=\|K^{\star}\|^2$$
Thus
$$\frac{\textrm{RSS}}{\sigma^2} \sim \chi^2\_{n-p}$$
Finally, observe that this result implies that
$$E\left(\frac{\textrm{RSS}}{n-p}\right) = \sigma^2$$
---
Since $Q^2 - Q =0$, the [minimal polynomial](http://en.wikipedia.org/wiki/Minimal_polynomial_%28linear_algebra%29) of $Q$ divides the polynomial $z^2 - z$. So, the eigenvalues of $Q$ are among $0$ and $1$. Since $\textrm{tr}(Q) = n-p$ is also the sum of the eigenvalues multiplied by their multiplicity, we necessarily have that $1$ is an eigenvalue with multiplicity $n-p$ and zero is an eigenvalue with multiplicity $p$.
|
Get SID for each member of a local group
On an Active Directory domain member running Windows 7 I have a local group. It has users and other groups as members:
[](https://i.stack.imgur.com/rdJog.png)
**How can I obtain the SID for each member of this local group?** I'm aware of the Sysinternals utility [PSGetSid](https://technet.microsoft.com/en-us/sysinternals/bb897417) but it doesn't seem to be able to enumerate group members.
|
Here's a Powershell function you should be able to use. I only tested it on Windows 10, but I don't think it's using anything that wasn't available in Windows 7.
```
Function Get-LocalGroupMembers {
[Cmdletbinding()]
Param(
[Parameter(Mandatory=$true)]
[string]$GroupName
)
[adsi]$adsiGroup = "WinNT://$($env:COMPUTERNAME)/$GroupName,group"
$adsiGroup.Invoke('Members') | %{
$username = $_.GetType().InvokeMember('Name','GetProperty',$null,$_,$null)
$path = $_.GetType().InvokeMember('AdsPath','GetProperty',$null,$_,$null).Replace('WinNT://','')
$class = $_.GetType().InvokeMember('Class','GetProperty',$null,$_,$null)
$userObj = New-Object System.Security.Principal.NTAccount($username)
$sid = $userObj.Translate([System.Security.Principal.SecurityIdentifier])
[pscustomobject]@{
Username = $username
Type = $class
SID = $sid
Path = $path
}
}
}
```
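Example usage (substitute the name of your local group; `Administrators` is used here just as a common example):
```
Get-LocalGroupMembers -GroupName 'Administrators' | Format-Table -AutoSize
```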
|
How to get items to stretch to fill width?
```
.container {
display: grid;
grid-template-columns: repeat(auto-fill, 100px);
grid-gap: 3px;
border: 1px solid green;
}
.item {
border: 1px solid red;
height: 50px;
}
```
```
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
```
How can I stretch the red `item` boxes a little bit so that they fill the entire width of the green `container` box?
They should all be the same width, and a little bit bigger than 100px.
The last row should be left-aligned as it is now.
|
You just need to add a *fraction unit* to the `grid-template-columns` rule.
```
grid-template-columns: repeat(auto-fill, minmax(100px, 1fr));
```
This rule creates as many columns as will fit in the container.
The minimum width of each column is 100px.
The maximum width is `1fr`, which means the column will consume any free space on the row.
Now all horizontal space in the container is being used.
```
.container {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(100px, 1fr));
grid-auto-rows: 50px;
grid-gap: 3px;
border: 1px solid green;
}
.item {
border: 1px solid red;
}
```
```
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
```
These posts explain the solution above in more detail:
- [Equal width flex items even after they wrap](https://stackoverflow.com/q/44154580/3597276)
- [The difference between percentage and fr units in CSS Grid Layout](https://stackoverflow.com/q/45090726/3597276)
|
How to determine "real world" x, y coordinates from kinect depth data?
The depth values returned by the kinect sensor correspond to the real world z distance (in mm) from the xy-plane. So far I can't find anything in the latest sdk that returns anything similar for the x and y coordinates. Is this provided in the sdk? Failing that, what would be a good way of computing x and y?
|
You can use the KinectSensor.MapDepthToSkeletonPoint method, which returns a SkeletonPoint. Sample code:
```
using (DepthImageFrame depthimageFrame = e.OpenDepthImageFrame())
{
if (depthimageFrame == null)
{
return;
}
pixelData = new short[depthimageFrame.PixelDataLength];
depthimageFrame.CopyPixelDataTo(pixelData);
for (int x = 0; x < depthimageFrame.Width; x++)
{
for (int y = 0; y < depthimageFrame.Height; y++)
{
SkeletonPoint p = sensor.MapDepthToSkeletonPoint(DepthImageFormat.Resolution640x480Fps30, x, y, pixelData[x + depthimageFrame.Width * y]);
}
}
}
```
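The resulting `SkeletonPoint` carries the real-world coordinates. For example (note the units differ from the raw depth values):
```
// p.X, p.Y, p.Z are in meters, in skeleton space with the origin at the sensor,
// so multiply by 1000 if you want millimeters like the raw depth data.
float xMillimeters = p.X * 1000;
float yMillimeters = p.Y * 1000;
```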
|
AVFoundation Image orientation off by 90 degrees in the preview but fine in Camera roll
Something really strange is happening: I am trying to capture an image using AVFoundation, and the Camera Roll image seems just fine, but the image preview has the image rotated by 90 degrees.
This is the code I am using to capture an image
```
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection)
{
break;
}
}
//NSLog(@"about to request a capture from: %@", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
//NSLog(@"attachements: %@", exifAttachments);
} else {
NSLog(@"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
self.vImage.image = image;
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}];
```
|
Yes, it happens when you capture an image in Portrait orientation on your device and then use that image in your application: the default orientation of an image is Landscape on any iOS device, so you need to fix the image orientation before using it in your app.
Here is code to achieve this:
**Objective-C code**
```
- (UIImage *)fixOrientationOfImage:(UIImage *)image {
// No-op if the orientation is already correct
if (image.imageOrientation == UIImageOrientationUp) return image;
// We need to calculate the proper transformation to make the image upright.
// We do it in 2 steps: Rotate if Left/Right/Down, and then flip if Mirrored.
CGAffineTransform transform = CGAffineTransformIdentity;
switch (image.imageOrientation) {
case UIImageOrientationDown:
case UIImageOrientationDownMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, image.size.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, 0);
transform = CGAffineTransformRotate(transform, M_PI_2);
break;
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
transform = CGAffineTransformTranslate(transform, 0, image.size.height);
transform = CGAffineTransformRotate(transform, -M_PI_2);
break;
case UIImageOrientationUp:
case UIImageOrientationUpMirrored:
break;
}
switch (image.imageOrientation) {
case UIImageOrientationUpMirrored:
case UIImageOrientationDownMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, 0);
transform = CGAffineTransformScale(transform, -1, 1);
break;
case UIImageOrientationLeftMirrored:
case UIImageOrientationRightMirrored:
transform = CGAffineTransformTranslate(transform, image.size.height, 0);
transform = CGAffineTransformScale(transform, -1, 1);
break;
case UIImageOrientationUp:
case UIImageOrientationDown:
case UIImageOrientationLeft:
case UIImageOrientationRight:
break;
}
// Now we draw the underlying CGImage into a new context, applying the transform
// calculated above.
CGContextRef ctx = CGBitmapContextCreate(NULL, image.size.width, image.size.height,
CGImageGetBitsPerComponent(image.CGImage), 0,
CGImageGetColorSpace(image.CGImage),
CGImageGetBitmapInfo(image.CGImage));
CGContextConcatCTM(ctx, transform);
switch (image.imageOrientation) {
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
// Grr...
CGContextDrawImage(ctx, CGRectMake(0,0,image.size.height,image.size.width), image.CGImage);
break;
default:
CGContextDrawImage(ctx, CGRectMake(0,0,image.size.width,image.size.height), image.CGImage);
break;
}
// And now we just create a new UIImage from the drawing context
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgimg];
CGContextRelease(ctx);
CGImageRelease(cgimg);
return img;
}
```
**Swift code**
```
func fixOrientationOfImage(image: UIImage) -> UIImage? {
if image.imageOrientation == .Up {
return image
}
// We need to calculate the proper transformation to make the image upright.
// We do it in 2 steps: Rotate if Left/Right/Down, and then flip if Mirrored.
var transform = CGAffineTransformIdentity
switch image.imageOrientation {
case .Down, .DownMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, image.size.height)
transform = CGAffineTransformRotate(transform, CGFloat(M_PI))
case .Left, .LeftMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, 0)
transform = CGAffineTransformRotate(transform, CGFloat(M_PI_2))
case .Right, .RightMirrored:
transform = CGAffineTransformTranslate(transform, 0, image.size.height)
transform = CGAffineTransformRotate(transform, -CGFloat(M_PI_2))
default:
break
}
switch image.imageOrientation {
case .UpMirrored, .DownMirrored:
transform = CGAffineTransformTranslate(transform, image.size.width, 0)
transform = CGAffineTransformScale(transform, -1, 1)
case .LeftMirrored, .RightMirrored:
transform = CGAffineTransformTranslate(transform, image.size.height, 0)
transform = CGAffineTransformScale(transform, -1, 1)
default:
break
}
// Now we draw the underlying CGImage into a new context, applying the transform
// calculated above.
guard let context = CGBitmapContextCreate(nil, Int(image.size.width), Int(image.size.height), CGImageGetBitsPerComponent(image.CGImage), 0, CGImageGetColorSpace(image.CGImage), CGImageGetBitmapInfo(image.CGImage).rawValue) else {
return nil
}
CGContextConcatCTM(context, transform)
switch image.imageOrientation {
case .Left, .LeftMirrored, .Right, .RightMirrored:
CGContextDrawImage(context, CGRect(x: 0, y: 0, width: image.size.height, height: image.size.width), image.CGImage)
default:
CGContextDrawImage(context, CGRect(origin: .zero, size: image.size), image.CGImage)
}
// And now we just create a new UIImage from the drawing context
guard let CGImage = CGBitmapContextCreateImage(context) else {
return nil
}
return UIImage(CGImage: CGImage)
}
```
**Swift 3.0**
```
func fixOrientationOfImage(image: UIImage) -> UIImage? {
if image.imageOrientation == .up {
return image
}
// We need to calculate the proper transformation to make the image upright.
// We do it in 2 steps: Rotate if Left/Right/Down, and then flip if Mirrored.
var transform = CGAffineTransform.identity
switch image.imageOrientation {
case .down, .downMirrored:
transform = transform.translatedBy(x: image.size.width, y: image.size.height)
transform = transform.rotated(by: CGFloat(Double.pi))
case .left, .leftMirrored:
transform = transform.translatedBy(x: image.size.width, y: 0)
transform = transform.rotated(by: CGFloat(Double.pi / 2))
case .right, .rightMirrored:
transform = transform.translatedBy(x: 0, y: image.size.height)
transform = transform.rotated(by: -CGFloat(Double.pi / 2))
default:
break
}
switch image.imageOrientation {
case .upMirrored, .downMirrored:
transform = transform.translatedBy(x: image.size.width, y: 0)
transform = transform.scaledBy(x: -1, y: 1)
case .leftMirrored, .rightMirrored:
transform = transform.translatedBy(x: image.size.height, y: 0)
transform = transform.scaledBy(x: -1, y: 1)
default:
break
}
// Now we draw the underlying CGImage into a new context, applying the transform
// calculated above.
guard let context = CGContext(data: nil, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: image.cgImage!.bitsPerComponent, bytesPerRow: 0, space: image.cgImage!.colorSpace!, bitmapInfo: image.cgImage!.bitmapInfo.rawValue) else {
return nil
}
context.concatenate(transform)
switch image.imageOrientation {
case .left, .leftMirrored, .right, .rightMirrored:
context.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.height, height: image.size.width))
default:
context.draw(image.cgImage!, in: CGRect(origin: .zero, size: image.size))
}
// And now we just create a new UIImage from the drawing context
guard let CGImage = context.makeImage() else {
return nil
}
return UIImage(cgImage: CGImage)
}
```
|
Unique sort order for postgres pagination
While trying to implement server-side pagination in Postgres, I learned that when using the LIMIT and OFFSET keywords you have to provide an ORDER BY clause on a unique column, probably the primary key.
In my case I am using UUIDs for primary keys, so I can't rely on a sequential order of increasing keys: ORDER BY pkey DESC might not always put newer rows on top.
So I resorted to using the created-date column, a timestamp column which should be unique.
But what if the UI client wants to sort by some other column? Since that column might not always be unique, I resort to ORDER BY user\_column, created\_dt DESC so as to keep the pagination results predictable.
Is this the right approach? I am not sure if I am going the right way. Please advise.
|
I talked about this exact problem on an old blog post (in the context of using an ORM):
>
> One last note about using sorting and paging in conjunction. A query
> that implements paging can have odd results if the ORDER BY clause
> does not include a field that represents an empirical sequence in the
> data; sort order is not guaranteed beyond what is explicitly specified
> in the ORDER BY clause in most (maybe all) database engines. An
> example: if you have 100 orders that all occurred on the exact same
> date, and you ask for the first page of this data sorted by this date,
> then ask for the second page of data sorted the same way, it is
> entirely possible that you will get some of the data duplicated across
> both pages. So depending on the query and the distribution of data
> that is “sortable,” it can be a good practice to always include a
> unique field (like a primary key) as the final field in a sort clause
> if you are implementing paging.
>
>
>
<http://psandler.wordpress.com/2009/11/20/dynamic-search-objects-part-5sorting/>
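Applied to your situation, that means always appending the unique column as the final tie-breaker. A sketch (table and column names are placeholders for your own):
```
SELECT *
FROM   some_table
ORDER  BY user_column, created_dt DESC, pkey DESC
LIMIT  20 OFFSET 40;
```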
|
How do I install a Windows program (written in C) downloaded from GitHub?
I've downloaded a dead simple Windows program from GitHub ([this](https://github.com/renatosilva/winutils) if it's relevant). I downloaded it as a ZIP file, but I can't figure out how to install it. It's written in C, I think. Do I need a compiler? (Visual Studio?) Is there something simple that I'm missing?
|
Github is primarily used by programmers to collaborate on projects. The 'Download ZIP' option downloads a copy of the source code to your computer. This *usually*1 doesn't contain a copy of the compiled usable executables/binaries (ie; exe files)
[**Releases**](https://github.com/blog/1547-release-your-software) is a Github feature for shipping software to end users (who usually aren't interested in the actual coding). Releases are accompanied by release notes and links to download the software or source code. *This is the first place you should check for binaries*.
In your case, the [releases page](https://github.com/renatosilva/winutils/releases) offers downloads and setup files.
However, many projects won't have any releases (especially not for Windows), in which case you can do one of the following:
1. Search for binaries elsewhere on the internet. Usually search engines like [DuckDuckGo](https://duckduckgo.com) (or [Google](https://google.com), if you prefer) will find what you want easily. Try searching for `<application name> for Windows` or `<application name> exe`.
2. **Compile it yourself.** You need to have at least some basic knowledge of the programming language to be able to do so. Here again, search engines can be *massively* helpful. Try searching for `compile <application name> on Windows` or `MinGW compile <application name>`.
Here I'll run through the basics of compiling utilities for Windows:
- Download [MinGW](http://www.mingw.org/). I personally favor [this](https://nuwen.net/mingw.html) package because it comes with boost (which several applications use) and a cool shell. It also comes with `git`, which can be super useful if you want to compile files hosted on Github or other Git version control repositories without having to separately download the source.
- Run the following commands in `cmd` or `powershell`:
- `cd c:\sources\hello`: Change the directory to wherever it is that the source file(s) is/are located.
- `g++ helloworld.cpp -o helloworld.exe`: here `helloworld.cpp` is the name of the main source file, and `helloworld.exe` is the name of the compiled executable.
- In case you have to compile multiple files, list them before the `-o` switch: `g++ helloworld.cpp foo.cpp -o helloworld.exe`
- These are the instructions for C++. If you're dealing with applications programmed in C, the file extension will be `.c` instead of `.cpp`, and you should use the `gcc` command instead of `g++`. The rest is more or less identical
- *Note that you may need to specify more command line arguments in order to compile an executable which works properly.* The project page will usually have these details.
- You also probably definitely want to look into makefiles. [This](https://stackoverflow.com/questions/2481269/how-to-make-a-simple-c-makefile#2481326) Stack Overflow post is about makefiles in general, and [this one](https://stackoverflow.com/questions/22873884/how-do-i-run-configure-with-mingw) tells us how to use them with MinGW. ***If a project comes with a makefile, try it before manually compiling the source.***
- You should now be able to run `helloworld.exe`.
- *Note:* Most projects come with a README file. Please do what the name implies, these files contain important information regarding compiling and running software and by reading it you can avoid a lot of unnecessary hassle.
1Note: Sometimes there may be a folder called `bin` among the downloaded files, which should contain usable executables/binaries
Also see [Cygwin](https://www.cygwin.com) and [GOW](https://github.com/bmatzelle/gow) if you want GNU/Linux command line utilities on Windows. I make the latest versions of some useful executables available for download [here](https://rahul.tech/archive/?dir=executables).
*Further reading:*
<http://www.mingw.org/wiki/MinGW_for_First_Time_Users_HOWTO>
<https://gcc.gnu.org/onlinedocs/index.html#dir>
<https://stackoverflow.com/questions/22873884/how-do-i-run-configure-with-mingw>
|
How to strip double quote (") characters from sequence
I have a sequence of sequences and each sequence is similar to the following:
("9990999" "43" "ROADWAY" "MORRISON, VAN X DMD" "43 ROADWAY" "SOMETHINGTON" "XA" "00000" "501" "18050" "2500" "1180" "14370" "0")
clojure-csv won't help me here, because it -- as it should -- quotes fields with embedded commas. I want pipe-delimited output without quotes around each field, some of which contain embedded commas.
I have looked at a number of ways to remove the double quote characters including the following, but the quotes stay put.
`(filter (fn [x] (not (= (str (first (str x))) (str (first (str \")))))) d1)`
where d1 is the sequence above.
In addition to an answer, I am more interested in a pointer to documentation. I have been playing with this but to no avail.
|
As far as I understand you have a sequence of strings. Clojure provides a very specific toString implementation for sequences, you can see it [here](https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/RT.java#L1759).
If you do `(str d1)` or simply type `d1` in the REPL and press enter, you'll see more or less what you typed: a sequence of strings (each String is printed as a sequence of characters in double quotes).
Now if you want to concatenate all the string you can do this:
```
(apply str d1)
```
If you want to print it separated with commas you could do this:
```
(apply str (interpose "," d1))
```
To output is CSV format I would recommend to use [clojure-csv](https://github.com/davidsantiago/clojure-csv).
Finally if you simply want to print the list but without the double quotes around strings you could do this:
```
(print d1)
```
Hope this helps.
---
**EDIT1** (update due to changes in the question):
This can easily be achieved with:
```
(apply str (interpose "|" d1))
```
Please don't pay attention to the double quotes around the entire result: if you `print` it or `spit` it into a file you won't see them; this is just how Clojure prints strings [readably](https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/RT.java#L1766).
Alternatively if you have multiple sequences like that that you want to output at once you can still use clojure-csv but with different separator:
```
(ns csv-test.core
(:require [clojure-csv.core :as csv]))
(def d1 (list "9990999" "43" "ROADWAY" "MORRISON, VAN X DMD" "43 ROADWAY" "SOMETHINGTON" "XA" "00000" "501" "18050" "2500" "1180" "14370" "0"))
(print (csv/write-csv [d1] :delimiter "|"))
;;prints:
;;9990999|43|ROADWAY|MORRISON, VAN X DMD|43 ROADWAY|SOMETHINGTON|XA|00000|501|18050|2500|1180|14370|0
```
|
Postgres check constraint in text array for the validity of the values
I want to create something similar to this
```
CHECK (ALL(scopes) IN ('read', 'write', 'delete', 'update'))
```
`scopes` here is a field in the table which is `text[]`, and I want to be sure that all the values in this array are one of the values above. Any opinions on this? And also, is it possible to get these values via `SELECT` from another table?
I have seen the below solution but I was curious if there is a simpler one.
[Postgresql check constraint on all individual elements in array using function](https://stackoverflow.com/questions/45376336/postgresql-check-constraint-on-all-individual-elements-in-array-using-function)
|
[demo:db<>fiddle](https://dbfiddle.uk/?rdbms=postgres_11&fiddle=cd76b077f90520d8635e51d7a43202f4)
Using the [`<@` operator](https://www.postgresql.org/docs/current/functions-array.html):
```
CHECK(scopes <@ ARRAY['read', 'write', 'delete', 'update'])
```
---
I don't know your exact use case, but I would prefer a more normalized solution: putting the four operations into a separate table which can be updated. Then you can work with foreign keys instead of the check constraint. If you have to update these four keywords you do not need to change the table DDL but only the values in the foreign table.
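A minimal sketch of that normalized layout (all table and column names here are hypothetical):
```
CREATE TABLE allowed_scopes (
    scope text PRIMARY KEY
);
INSERT INTO allowed_scopes VALUES ('read'), ('write'), ('delete'), ('update');

CREATE TABLE token_scopes (
    token_id integer NOT NULL,          -- FK to your main table
    scope    text    NOT NULL REFERENCES allowed_scopes (scope),
    PRIMARY KEY (token_id, scope)
);
```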
|
Underscores in a Scala map/foreach
Can you please help me understand what the underscore is doing in the second case below? I guess it's defining an anonymous function for each element of the list, but why is that function not being called like it is in the first case?
```
scala> List(1,2,3,4).foreach(x => println("*" * x))
*
**
***
****
scala> List(1,2,3,4).foreach(_ => println("*" * _))
$line25.$read$$iw$$iw$$iw$$iw$$$Lambda$1197/562203102@a632ae0
$line25.$read$$iw$$iw$$iw$$iw$$$Lambda$1197/562203102@a632ae0
$line25.$read$$iw$$iw$$iw$$iw$$$Lambda$1197/562203102@a632ae0
$line25.$read$$iw$$iw$$iw$$iw$$$Lambda$1197/562203102@a632ae0
```
|
The right way to do this is as below
```
List(1,2,3,4).map("*" * _).foreach(println)
```
There are many different use cases for underscore in scala. I am listing three of those use cases that are relevant to this question here.
**case 1: using underscore in input argument**
You can use an underscore as the argument of a lambda expression when the input argument is not going to be used in the body of the lambda expression; the underscore then acts as a placeholder instead of a declared input argument, as shown below.
`List(1,2,3,4).foreach(_ => println("*" * 10)) // here 10 '*' characters are displayed irrespective of the input value.`
**case 2: using underscore in body of lambda expression.**
when underscore is used in body of lambda expression it refers to the input argument. You can use the underscore in this fashion if the input is going to be referred only once.
for eg: `List(1,2,3,4).foreach(println("*" * _)) // the underscore will be subsituted with the input argument.`
**case 3: to refer to unapplied methods.**
Let's say I have a method `foo(bar: Int)`. I can refer to the unapplied method by the expression `foo _` (i.e. foo immediately followed by an underscore).
unapplied function here means getting a reference to a function object which can be executed later on demand.
```
@ def foo(bar: Int) = bar
defined function foo
@ val baz = foo _
baz: Int => Int = $sess.cmd24$$$Lambda$2592/612249759@73fbe2ce
@ baz.apply(10)
res25: Int = 10
```
**You cannot mix case 1 and case 2**, i.e. you can use the underscore either in the input argument or in the body of the lambda function, but not in both. Since you are mixing both cases, you are unexpectedly hitting case 3 of underscore usage as shown below, i.e. you are referring to the unapplied method `*` defined via implicits on `java.lang.String`.
```
@ "*" * _
res20: Int => String = $sess.cmd20$$$Lambda$2581/1546372166@20967474
```
so effectively what you are doing is something like the below.
```
List(1,2,3,4).foreach(x => println(("*" * _).toString))
```
|
iPhone private API compiling
I've been searching the whole internet for a few hours now, and I just can't find the information I'm looking for.
I would like to mess around with the private APIs, see what's possible, etc., but I can't even compile anything.
So I've got a few pretty basic questions:
- Do I have to dump the headers? Because I downloaded a sample, where the API is loaded with
```
char *framework = "/System/Library/PrivateFrameworks/...";
dlopen(...);
```
I would like to use the objc-syntax (if possible) rather than using C (as mentioned above), if there are any opportunities.
- How do I make Xcode compile, if I import the private APIs? Do I have to add other link flags? (because I read about two different opinions)
I added a private framework and created a new Folder "Headers" and put all the headers files in there, so the framework shows up correctly in Xcode. Do I have to import the whole .framework, or only the headers from the framework I would like to use? After I imported the framework, I get 20+ errors, unknown type names, and many more.
And, finally, I've read about entitlements (which seem to be new in iOS 7). How do I use these entitlements and when do I use them?
Could someone please just type a few lines as an example?
|
## Background
In order to use methods in *any* framework, you can choose to either reference those frameworks statically or dynamically. I haven't seen anything in your question that suggests you need to use *dynamic* linking, so I'm going to avoid that (it's slightly more complicated for a beginner). (‡)
To statically reference APIs in a framework, you would import the relevant headers, and then configure your Xcode project to link to the framework. These two steps only change *slightly* for **Private** APIs.
Private APIs usually don't provide you with the headers (\*.h) that describe the APIs. I say "usually", because sometimes, an API that's private on iOS is actually public on Mac OS X, so to use it, you simply copy the OS X version of the header into your project.
## Generating Headers
Probably more common, though, is that you have to **generate** the header yourself. If you know which header you need, often you can find it posted [online under someone's github account](https://github.com/nst/iOS-Runtime-Headers). If not, you need a tool like [class-dump](http://stevenygard.com/projects/class-dump/), or [class-dump-z](https://code.google.com/p/networkpx/wiki/class_dump_z). Run the class dump tool on the private framework, by finding it on your Mac:
```
cd /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks/
class-dump -H -o ~/Headers/7.0/MusicLibrary/ MusicLibrary
```
Then, go into `~/Headers/7.0/MusicLibrary/` and find lots of dumped header files. Copy (only) the header(s) you need into your Xcode iOS project directory. Then, from inside Xcode, right click on a source folder in your Project Navigator view, select **"Add files to <Project Name> ..."**. Pick the dumped header file you need to include in your project.
## Linking
In order to successfully link against the API, you also need to add the framework to your Xcode Build Phases. From your project **Target** settings, select **Build Phases** then **Link Binary with Libraries**. You normally choose a public framework from the default list that the iOS SDK provides you. However, you can choose to browse your Mac for 3rd-party frameworks, or private frameworks, too. For private frameworks, you're just going to have to navigate to a folder location like this
```
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks/
```
and then pick the `*.framework` directory.
Then, simply use the APIs like you would use any public/private API. `#import` the header file, call the APIs, instantiate the classes, etc.
The use of this code:
```
char *framework = "/System/Library/PrivateFrameworks/...";
dlopen(...);
```
is an attempt to **dynamically** open a private framework. That's not necessary, if you know at compile time which framework you want to use, and have it present on your Mac to let Xcode link against.
## Entitlements
Entitlements are not new to iOS 7. They have existed for quite some time, and are one technique iOS uses to prevent unauthorized usage of some private APIs. iOS will check to see if your app has been granted a particular entitlement (by name), and if it does not have that entitlement, calling the protected API will fail (usually *silently*, although sometimes you'll see a message in the Console log).
[See here for an example of granting your (jailbreak) app an entitlement](https://stackoverflow.com/a/14885266/119114).
---
**(‡) Update:** iOS 9.3 has brought some changes with respect to Private APIs, and static vs dynamic linking. Please [see this Stack Overflow question here for more](https://stackoverflow.com/a/37001091/119114).
|
IntelliJ: how to make non java files copied to the bin directory as well?
My module contains some non-Java files alongside the Java source files.
When the module is built, the Java files are copied to the bin folder (and included in the jar artifact), but the non-Java files are left out.
How can I make them get into the bin folder (jar artifact)?
Thanks.
|
`Settings` (`Preferences` on Mac) | `Compiler` | **Resource Patterns**.
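For example, a pattern along these lines would copy properties, XML and SQL files next to the compiled classes (a hedged example; adjust the extensions to the file types your module actually contains):
```
?*.properties;?*.xml;?*.sql
```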
This question duplicates/relates to:
- [copy jbehave stories into target directory using IntelliJ Idea](https://stackoverflow.com/questions/8629223/copy-jbehave-stories-into-target-directory-using-intellij-idea/8629542#8629542)
- [IntelliJ, Akka and Configuration files](https://stackoverflow.com/questions/9333093/intellij-akka-and-configuration-files/9333259#9333259)
- [IntelliJ IDEA 11.1.2 does not copy SQL files to the out folder](https://stackoverflow.com/questions/11100690/intellij-idea-11-1-2-does-not-copy-sql-files-to-the-out-folder/11100804#11100804)
- [Add a properties file to IntelliJ's classpath](https://stackoverflow.com/questions/3765998/add-a-properties-file-to-intellijs-classpath/3769608#3769608)
- [import images into an intelliJ Java project](https://stackoverflow.com/questions/10359635/import-images-into-an-intellij-java-project/10359665#10359665)
- [Intellij - how do I add a text file to the resources](https://stackoverflow.com/questions/7556478/intellij-how-to-add-a-text-file-to-the-resources/7556624#7556624)
- [Null Pointer Exception for read properties file in Idea](https://stackoverflow.com/questions/7101484/null-pointer-exception-for-read-properties-file-in-idea/7162823#7162823)
- [IntelliJ Idea - resource SQL files not being copied to target](https://stackoverflow.com/questions/8134250/intellij-idea-resource-sql-files-not-being-copied-to-target/8136368#8136368)
- [Scala getClass.getResource() returning null](https://stackoverflow.com/questions/12133632/scala-getclass-getresource-returning-null)
|
Get all fields and values of hash key using redis in node
In Redis, using a hash, I need to store a hash key with multiple fields and values.
I tried as below:
```
client.hmset("Table1", "Id", "9324324", "ReqNo", "23432", redis.print);
client.hmset("Table1", "Id", "9324325", "ReqNo", "23432", redis.print);
var arrrep = new Array();
client.hgetall("Table1", function(err, rep){
console.log(rep);
});
```
Output is: `{ Id: '9324325', ReqNo: '23432' }`
I am getting only one value. How do I get all the fields and values stored under the hash key? Kindly correct me if I am doing this wrong and show me the code. Thanks.
|
You are getting one value because you override the previous value.
```
client.hmset("Table1", "Id", "9324324", "ReqNo", "23432", redis.print);
```
This adds Id, ReqNo to the Table1 hash object.
```
client.hmset("Table1", "Id", "9324325", "ReqNo", "23432", redis.print);
```
This overrides Id and ReqNo for the Table1 hash object. At this point, you only have two fields in the hash.
Actually, your problem comes from the fact that you are trying to map a relational database model to Redis. You should not. With Redis, it is better to think in terms of data structures and access paths.
You need to store one hash object per record. For instance:
```
HMSET Id:9324324 ReqNo 23432 ... and some other properties ...
HMSET Id:9324325 ReqNo 23432 ... and some other properties ...
```
Then, you can use a set to store the IDs:
```
SADD Table1 9324324 9324325
```
Finally to retrieve the ReqNo data associated to the Table1 collection:
```
SORT Table1 BY NOSORT GET # GET Id:*->ReqNo
```
If you want to also search for all the IDs which are associated to a given ReqNo, then you need another structure to support this access path:
```
SADD ReqNo:23432 9324324 9324325
```
So you can get the list of IDs associated with ReqNo 23432 by using:
```
SMEMBERS ReqNo:23432
```
In other words, do not try to transpose a relational model: just create your own data structures supporting your use cases.
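The same model expressed with the node client from your question might look like this (a sketch; the key names follow the commands above):
```
client.hmset("Id:9324324", "ReqNo", "23432", redis.print);
client.hmset("Id:9324325", "ReqNo", "23432", redis.print);
client.sadd("Table1", "9324324", "9324325");

// fetch every record in the collection
client.smembers("Table1", function(err, ids) {
    ids.forEach(function(id) {
        client.hgetall("Id:" + id, function(err, record) {
            console.log(id, record);
        });
    });
});
```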
|
Get module.exports from within the same file
In a file I have this code:
```
module.exports.greet = function() {...}
```
I want to use that function from within the same file.
I thought this would work:
```
this.greet()
```
But it didn't.
What is the reference I have to use?
|
Normally, this should work just fine, but let's see why this might fail.
**Some Background first**
What happens is that `exports` is an object that, along with a few other things like `require`, `module`, `__dirname`, etc., gets passed into the closure that wraps the module's contents; `exports` is then returned by `require()`.
See: <https://github.com/ry/node/blob/master/src/node.js#L327>
`this` inside the module refers to the `exports` object; the `module` object then holds a reference to the `exports` object. The namespace inside the module is provided via the closure.
In the end there's also the `global` object which provides the global namespace and hosts things like `process`.
**Examples**
```
// main.js
this.bla = function(){} // sets bla on the the exports object
require('./sub');
console.log(this); // { bla: [Function] }
console.log(exports); // { bla: [Function] }
console.log(module); /* { id: '.',
exports: { bla: [Function] },
parent: undefined,
filename: '/home/ivo/Desktop/main.js',
loaded: false,
exited: false,
children: [] } */
// sub.js
this.greet = function() {} // sets greet on the exports object
console.log(this); // { greet: [Function] }
console.log(exports); // { greet: [Function] }
console.log(module); /* { id: './sub',
exports: { greet: [Function] },
parent:
{ id: '.',
exports: { bla: [Function] },
parent: undefined,
filename: '/home/ivo/Desktop/main.js',
loaded: false,
exited: false,
children: [] },
filename: '/home/ivo/Desktop/sub.js',
loaded: false,
exited: false,
children: [] } */
```
**Cause of the problem**
The only explanation for the fact that your code doesn't work is that the environment variable `NODE_MODULE_CONTEXTS` was set to an integer bigger than zero.
In this case the modules get run in their own context. `this` inside the main module will now refer to the `global` object and inside sub modules, it will refer to a **sandbox** object. Therefore `this.foo` will not set any property on the `exports` object.
See: <https://github.com/ry/node/blob/master/src/node.js#L98>
And: <https://github.com/ry/node/blob/master/src/node.js#L296>
**Fixing the problem**
You can check the environment variables that were passed to the node process:
```
console.log(process.env); // get a list of all variables
// get just the one that's causing trouble, if this returns a number > 0 then it's in effect
console.log(process.env['NODE_MODULE_CONTEXTS']);
```
In case that `NODE_MODULE_CONTEXTS` is in effect, you need to check your `~/.bashrc` and `~/.bash_profile` files for something like `export NODE_MODULE_CONTEXTS=1;` and remove it.
Make sure to open up a new terminal, since changes those two files are only read in when one is created.
|
Why let word added to servlet,applet,cmdlet?
I'm a history buff and I love to find out the nuances behind naming. As Java developers we have come across ***servlet*** (server-side Java component), ***applet*** (client-side Java component with a GUI), and ***portlet*** (similar to a servlet in a portal environment). Now in PowerShell there is also the **cmdlet** or **commandlet**. My question is: why are these suffixed with **let**, and what is the generic semantic meaning of this **let**?
|
In the English language some words can get the suffix *"-let"* to express that something is small, e.g. *"chainlet"* (for a small chain) or *"arrowlet"* (for a small arrow), see [the list of *"English words suffixed with -let"*](https://en.wiktionary.org/wiki/Category:English_words_suffixed_with_-let) on *wiktionary.org*. So I think the *"let"* in all the words mentioned by you was added to express that the corresponding component is small or lightweight, e.g. an Applet is a small computer program/component. However, I am not a native speaker of the English language.
On [this page](https://github.com/mide42/ListOfLets/blob/master/ListOfLets.md) you'll find a list of further names for software components suffixed with *"-lets"* based on Java technology (e.g. Batchlets and Doclets) and also non-Java technology (e.g. Dashlets or Droplets).
|
How to read lines in bash and delimit them by a specified delimiter?
I need to write a script with the following behaviour:
```
$ echo $'one&some text\ntwo&other text' | ./my_script.sh --delimiter &
Line:
1st: one
2nd: some text
Line:
1st: two
2nd: other text
```
It can also be called with the default delimiter, which is `\t`:
`$ echo $'one\tsome text\nfive\tother text' | ./my_script.sh`
Output should be the same as above.
The script should take input via standard input.
What is the easiest way to do this? Possibly in pure bash.
I've tried this approach but it does not work and I don't know why:
```
while read -r line
do
echo "$line"
IFS=$DELIMITER
arr=(${line//$DELIMITER/ })
echo ${arr[0]}
echo ${arr[1]}
done
```
|
You can do it in bash without using external programs.
```
$ cat script.sh
#!/bin/bash
if [ "$1" = "--delimiter" ]
then
d=$2
else
d=$'\t'
fi
while IFS="$d" read -r first rest; do
echo "1st: $first"
echo "2nd: $rest"
done
$ echo $'one\tsome text\nfive\tother text' | ./script.sh
1st: one
2nd: some text
1st: five
2nd: other text
$ echo $'one&some text\nfive&other text' | ./script.sh --delimiter \&
1st: one
2nd: some text
1st: five
2nd: other text
```
Note that the ampersand symbol must be escaped (or quoted) otherwise it will execute the command in the background.
|
What is the difference between numpy.fft and scipy.fftpack?
Is the latter just a synonym of the former, or are they two different implementations of FFT? Which one is better?
|
SciPy does more:
- <http://docs.scipy.org/doc/numpy/reference/routines.fft.html>
- <http://docs.scipy.org/doc/scipy/reference/fftpack.html#>
In addition, SciPy exports some of the NumPy features through its own interface, for example if you execute *scipy.fftpack.helper.fftfreq* and *numpy.fft.helper.fftfreq* you're actually running the same code.
However, SciPy has its own implementations of much functionality. The source has performance benchmarks that compare the original NumPy and new SciPy versions. My archaic laptop shows something like this:
```
Fast Fourier Transform
=================================================
| real input | complex input
-------------------------------------------------
size | scipy | numpy | scipy | numpy
-------------------------------------------------
100 | 0.07 | 0.06 | 0.06 | 0.07 (secs for 7000 calls)
1000 | 0.06 | 0.09 | 0.09 | 0.09 (secs for 2000 calls)
256 | 0.11 | 0.11 | 0.12 | 0.11 (secs for 10000 calls)
512 | 0.16 | 0.21 | 0.20 | 0.21 (secs for 10000 calls)
1024 | 0.03 | 0.04 | 0.04 | 0.04 (secs for 1000 calls)
2048 | 0.05 | 0.09 | 0.08 | 0.08 (secs for 1000 calls)
4096 | 0.05 | 0.08 | 0.07 | 0.09 (secs for 500 calls)
8192 | 0.10 | 0.20 | 0.19 | 0.21 (secs for 500 calls)
```
It does seem that SciPy runs significantly faster as the array increases in size, though these are just contrived examples and it would be worth experimenting with both for your particular project.
It's worth checking out the source code <http://www.scipy.org/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1> . Yes those .f files really are Fortran! :-D
|
Static function overloading?
I'll start by saying I understand that only nonstatic member functions can be virtual, but this is what I want:
1. A base class defining an interface: so I can use base class pointers to access functions.
2. For memory management purposes (this is an embedded system with limited ram) I want the overriding functions to be statically allocated. I accept the consequence that with a static function, there will be constraints on how I can manipulate data in the function.
My current thinking is that I may keep a light overloading function by making it a wrapper for a function that actually is static.
Please forbear telling me I need to re-think my design. This is why I am asking the question. If you'd like to tell me I'm better off using C and callbacks, please direct me to some reading material that explains the pitfalls of using an object oriented approach. Is there an object oriented pattern of design which meets the requirements I have enumerated?
|
>
> Is there an object oriented pattern of design which meets the requirements I have enumerated?
>
>
>
Yes, plain old virtual functions. Your desire is "the overriding functions to be statically allocated." Virtual functions *are* statically allocated. That is, the code which implements the functions exists once, and only once, and is fixed at compile/link time. Depending upon your linker command, they are as likely to be stored in flash as any other function.
```
class I {
public:
virtual void doit() = 0;
virtual void undoit() = 0;
};
class A : public I {
public:
virtual void doit () {
// The code for this function is created statically and stored in the code segment
std::cout << "hello, ";
}
virtual void undoit () {
// ditto for this one
std::cout << "HELLO, ";
}
};
class B : public I {
public:
int i;
virtual void doit() {
// ditto for this one
std::cout << "world\n";
}
virtual void undoit() {
// yes, you got it.
std::cout << "WORLD\n";
}
};
int main () {
B b; // So, what is stored inside b?
// There are sizeof(int) bytes for "i",
// There are probably sizeof(void*) bytes for the vtable pointer.
// Note that the vtable pointer doesn't change size, regardless of how
// many virtual methods there are.
// sizeof(b) is probably 8 bytes or so.
}
```
|
Checking radio buttons in Cypress
I'm very new to JavaScript and it's only my second week using Cypress, so I need help getting radio buttons clicked. I'm getting errors from Cypress all the time.
The element that I'm trying to check looks like:
`<input class="XyzTypeRadio" type="radio" name="zzz_type" value="2">`
And what I tried to implement after reading the Cypress documentation (at <https://docs.cypress.io/api/commands/check.html#Syntax>) was:
`cy.get('[type="radio"]').first('.XyzTypeRadio').check('value=2')`
Also tried simply `.... .check('2')` and `... .check('Xyz')`
|
(edited and working answer)
Try this:
```
cy.get('[type="radio"].XyzTypeRadio').check("2")
```
Or if you don't care which radio button is checked, you could check the first one:
```
cy.get('[type="radio"].XyzTypeRadio').first().check()
```
Takeaways:
- The [first()](https://docs.cypress.io/api/commands/first.html#Syntax) function does not understand selectors, that's why we need to pass our selector ".XyzTypeRadio" to [get()](https://docs.cypress.io/api/commands/get.html#Syntax).
- The [check()](https://docs.cypress.io/api/commands/check.html#Syntax) function expects the value or values as its argument, so instead of "value=2" we simply give it "2".
- The [check()](https://docs.cypress.io/api/commands/check.html#Syntax) function does a bit of selecting, i.e. the result of everything before calling check("2") is a list of inputs, and the check("2") function searches for and selects the one whose value is "2".
- We could use first().check() if we want to check the first radio button, or we could remove first() and check a radio button with a specific value using check("2").
|
Edmonds-Karp Algorithm for a graph which has nodes with flow capacities
I am implementing this algorithm for a directed graph. The interesting thing is that the nodes of this graph also have their own flow capacities. I think this subtle change to the original problem must be handled in a special way, because in the original max-flow problem it was okay to find any path from start to finish (actually, in the Edmonds-Karp algorithm we do a BFS and choose the first path that reaches the final node), but with this node-capacity extension we need to be more careful about the path selection. I know this because I implemented the original algorithm and found myself getting smaller flow values than the max-flow; I suspect that has to do with the node-capacity restrictions.
I put a lot of effort into this and came up with some ideas, like transforming the initial graph into a graph with no capacity constraints on nodes by adding self-loops (adding a self-loop to each node and finding paths that include the self-loop of each node on the path), or adding virtual nodes and edges whose capacities supersede the initial node-capacity constraints. However, I am not convinced that any of these is a nice solution for this problem.
Any idea would be much appreciated.
Thanks in advance.
|
There's a simple reduction from the max-flow problem with node capacities to a regular max-flow problem:
For every vertex `v` in your graph, replace with two vertices `v_in` and `v_out`. Every incoming edge to `v` should point to `v_in` and every outgoing edge from `v` should point from `v_out`. Then create one additional edge from `v_in` to `v_out` with capacity `c_v`, the capacity of vertex `v`.
So you just run Edmonds-Karp on the transformed graph.
So let's say you have the following graph in your problem (vertex `v` has capacity 2):
```
s --> v --> t
1 2 1
```
This would correspond to this graph in the max-flow problem:
```
s --> v_in --> v_out --> t
1 2 1
```
It should be apparent that the max-flow obtained is the solution (and it's not particularly difficult to prove either).
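As a rough illustration, here is a sketch of the transformation in Python (the `graph` and `node_cap` structures are hypothetical: `graph` maps each node, including the sink, to a dict of `{neighbor: edge_capacity}`, possibly empty, and `node_cap` maps each node to its capacity):

```
def split_node_capacities(graph, node_cap):
    """Turn a node-capacitated graph into an equivalent edge-capacitated one
    by splitting every node v into (v, 'in') and (v, 'out')."""
    g = {}
    for v in graph:
        g[(v, 'in')] = {(v, 'out'): node_cap[v]}   # internal edge carries c_v
        g[(v, 'out')] = {}
    for u, edges in graph.items():
        for v, cap in edges.items():
            g[(u, 'out')][(v, 'in')] = cap         # original edge u -> v
    return g
```

You would then run Edmonds-Karp from `(s, 'in')` to `(t, 'out')`, giving the source and sink an effectively unlimited node capacity if they should not be constrained.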
|
Cost of std::vector::push\_back either succeeding or having no effect?
If I understand correctly, [`std::vector::insert`](http://en.cppreference.com/w/cpp/container/vector/insert) does not guarantee a commit-or-rollback for [`std::vector`](http://en.cppreference.com/w/cpp/container/vector)s (it does for `std::list`s for obvious reasons) in the case an exception is thrown during copying or moving, because of the high cost of checking for exceptions. I recently saw that `push_back` DOES guarantee either successful insertion at the end or nothing happens.
My question is the following. Suppose that during a `push_back` the vector has to be resized (a reallocation happens). In that case, all elements have to be copied into the new buffer, via copy or move semantics. Assuming that the move constructor is not guaranteed `noexcept`, `std::vector` will then use copy semantics. So, to guarantee the above behaviour of `push_back`, `std::vector` has to check for successful copying and, if it fails, roll back to the initial vector via a swap. Is this what is happening, and if so, isn't this expensive? Or, because reallocation happens rarely, can one say that the amortized cost is low?
|
In C++98/03 we (obviously) had no move semantics, only copy semantics. And in C++98/03, `push_back` has the strong guarantee. One of the strong motivations in C++11 was to not break existing code that relied on this strong guarantee.
The C++11 rules are:
1. If `is_nothrow_move_constructible<T>::value` is true, move, else
2. If `is_copy_constructible<T>::value` is true, copy, else
3. If `is_move_constructible<T>::value` is true, move, else
4. The code is ill-formed.
If we are in the case of 1 or 2, we have the [strong guarantee](http://en.wikipedia.org/wiki/Exception_safety). If we are in case 3, we have only the [basic guarantee](http://en.wikipedia.org/wiki/Exception_safety). Since in C++98/03, we were always in case 2, we have backwards compatibility.
Case 2 is not expensive to maintain the strong guarantee. One allocates the new buffer, preferably using an RAII device such as a second vector. Copies into it, and only if all of that succeeds, swap `*this` with the RAII device. This is the cheapest way to do things, whether or not you want the strong guarantee, and you get it for free.
Case 1 is also inexpensive to maintain the strong guarantee. The best way I know of is to first copy/move the new element into the middle of the new allocation. If that succeeds, then move the elements from the old buffer and swap.
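As a rough sketch of how cases 1 and 2 can share one code path without giving up the strong guarantee (this is an illustration, not libc++'s actual code): `std::move_if_noexcept` yields an rvalue only when the move constructor cannot throw (or the type is not copyable), so the same loop silently degrades to copying where needed.

```
#include <cstddef>
#include <new>
#include <utility>

// Build n elements in the uninitialized storage at new_data from old_data,
// moving only when that cannot throw, otherwise copying.  If any construction
// throws, the elements already built are destroyed and the exception
// propagates; when copying (case 2) the old buffer is untouched, which is
// what gives the strong guarantee.
template <class T>
void relocate(T* old_data, std::size_t n, T* new_data) {
    std::size_t i = 0;
    try {
        for (; i < n; ++i)
            ::new (static_cast<void*>(new_data + i))
                T(std::move_if_noexcept(old_data[i]));
    } catch (...) {
        while (i > 0)
            new_data[--i].~T();
        throw;
    }
}
```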
**More Detail Than You Probably Want**
[libc++](http://libcxx.llvm.org) accomplishes all 3 cases with the same algorithm. To accomplish this, two tools are used:
1. [`std::move_if_noexcept`](http://en.cppreference.com/w/cpp/utility/move_if_noexcept)
2. A non-std `vector`-like container where the data is contiguous, but can start at a non-zero offset from the beginning of the allocated buffer. libc++ calls this thing `split_buffer`.
Assuming the reallocation case (the non-reallocation case is trivial), the `split_buffer` is constructed with a reference to this `vector`'s allocator, and with twice the capacity of this `vector`, and with its starting position set to `this->size()` (though the `split_buffer` is still `empty()`).
Then the new element is copied or moved (depending on which `push_back` overload we are talking about) to the `split_buffer`. If this fails, the `split_buffer` destructor undoes the allocation. If it succeeds, then the `split_buffer` now has `size() == 1`, and the `split_buffer` has room for exactly `this->size()` elements prior to its first element.
Next the elements are moved/copied *in reverse order* from `this` to the `split_buffer`. [`move_if_noexcept`](http://en.cppreference.com/w/cpp/utility/move_if_noexcept) is used for this, which has a return type of either `T const&` or `T&&` exactly as we need as specified by the 3 cases above. On each successful move/copy, the `split_buffer` is doing a `push_front`. If successful, the `split_buffer` now has `size() == this->size()+1`, and its first element is at a zero offset from the beginning of its allocated buffer. If any move/copy fails, `split_buffer`'s destructor destructs whatever is in the `split_buffer` and deallocates the buffer.
Next the `split_buffer` and `this` swap their data buffers. This is a `noexcept` operation.
Finally the `split_buffer` destructs, destructing all of its elements, and deallocating its data buffer.
No try-catches needed. There is no extra expense. And everything works as specified by C++11 (and summarized above).
|
How to use Performance Counter or Process class correctly in C# to get memory usage of current process?
According to [How to use .NET PerformanceCounter to track memory and CPU usage per process?](https://stackoverflow.com/questions/3411805/how-to-use-net-performancecounter-to-track-memory-and-cpu-usage-per-process) `PerformanceCounter` should give me the number of memory usage of a given process.
According to [MSDN](http://msdn.microsoft.com/en-us/library/ccf1tfx0.aspx), `Process` instance may also give me more or less the same number.
In order to verify my assumptions, I wrote the following code:
```
class Program
{
static Process process = Process.GetCurrentProcess();
static PerformanceCounter privateBytesCounter = new PerformanceCounter("Process", "Private Bytes", process.ProcessName);
static PerformanceCounter workingSetCounter = new PerformanceCounter("Process", "Working Set", process.ProcessName);
static void Main(string[] args)
{
GetMeasure();
Console.WriteLine("\nPress enter to allocate great amount of memory");
Console.ReadLine();
int[] arr = new int[10000000];
for (int i = 0; i < arr.Length; i++)
{
arr[i] = i;
}
GetMeasure();
privateBytesCounter.Dispose();
workingSetCounter.Dispose();
Console.ReadKey();
}
private static void GetMeasure()
{
Console.WriteLine("{0,38} {1,20}", "Private bytes", "working set");
Console.WriteLine("process data{0,23} {1,20}", process.PrivateMemorySize64 / 1024, process.WorkingSet64 / 1024);
Console.WriteLine("PerformanceCounter data{0,12} {1,20}", privateBytesCounter.NextValue() / 1024, workingSetCounter.NextValue() / 1024);
}
}
```
The output looks like
```
Private bytes working set
process data 22880 17516
PerformanceCounter data 21608 15608
Press enter to allocate great amount of memory
Private bytes working set
process data 22880 17516
PerformanceCounter data 21608 15608
```
Exactly the same! In contrast, the private bytes shown in Process Explorer increased from 32732 to 63620.
So am I doing something wrong?
|
You have to tell your `process` instance to refresh its cached data. For performance reasons, the data is not gathered each time you access a property; you have to request the update manually.
```
private static void GetMeasure()
{
process.Refresh(); // Updates process information
Console.WriteLine("{0,38} {1,20}", "Private bytes", "working set");
Console.WriteLine("process data{0,23} {1,20}", process.PrivateMemorySize64 / 1024, process.WorkingSet64 / 1024);
Console.WriteLine("PerformanceCounter data{0,12} {1,20}", privateBytesCounter.NextValue() / 1024, workingSetCounter.NextValue() / 1024);
}
```
That's for your `process`.
For performance counters, `NextValue()` is supposed to retrieve a new fresh data each time, so I can't explain why it doesn't on your machine. On mine it works fine.
*EDIT*:
With the `process.Refresh()` added, here's what I get:
```
Private bytes working set
process data 25596 22932
PerformanceCounter data 26172 23600
Press enter to allocate great amount of memory
Private bytes working set
process data 65704 61848
PerformanceCounter data 65828 61880
```
|
Custom inputView with dynamic height in iOS 8
I have some trouble with my custom inputView for `UITextFields`. Depending on the text the user needs to enter in a `UITextField`, the inputView displays only the needed letters. That means for short texts an inputView with only one line of letters is sufficient; longer texts may require 2 or even 3 lines, so the height of the inputView is variable.
Since I was expecting better performance, there is only one inputView instance, shared by every textField. That way the creation only happens once, and it makes the occasionally needed direct access to the inputView easier. The inputView is set up in `- (BOOL)textFieldShouldBeginEditing:(UITextField *)textField`, sets its required height and will be shown.
That works perfectly, but not on iOS 8. There, the system view containing the inputView will not update its frame to match the inputView's bounds when they are changed (the first time works).
I know that can be fixed by using one instance of my inputView per textField. But I'm asking if there is a recommended/better way to adjust the frame or to report its change to the containing view. Maybe it is an iOS 8 bug that could be fixed before release?
Here's some example code to reproduce the issue:
CustomInputView
```
@implementation CustomInputView
+ (CustomInputView*)sharedInputView{
static CustomInputView *sharedInstance;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedInstance = [[CustomInputView alloc] init];
});
return sharedInstance;
}
- (id)init
{
self = [super init];
if (self) {
self.backgroundColor = [UIColor greenColor];
}
return self;
}
- (void)setupForTextField:(UITextField*)textField{
CGFloat height;
if(textField.tag == 1){
height = 100;
}else height = 50;
self.frame = CGRectMake(0, 0, 320, height);
}
@end
```
TestViewController code
```
- (void)viewDidLoad
{
[super viewDidLoad];
UITextField *tf = [[UITextField alloc] initWithFrame:CGRectMake(15, 50, 290, 30)];
tf.text = @"bigKeyboard";
tf.inputView = [CustomInputView sharedInputView];
tf.layer.borderWidth = 1;
tf.layer.borderColor = [UIColor lightGrayColor].CGColor;
tf.delegate = self;
tf.tag = 1;
[self.view addSubview:tf];
tf = [[UITextField alloc] initWithFrame:CGRectMake(15, 100, 290, 30)];
tf.text = @"smallKeyboard";
tf.inputView = [CustomInputView sharedInputView];
tf.layer.borderWidth = 1;
tf.layer.borderColor = [UIColor lightGrayColor].CGColor;
tf.delegate = self;
tf.tag = 2;
[self.view addSubview:tf];
UIButton *button = [UIButton buttonWithType:UIButtonTypeSystem];
[button setTitle:@"dismissKeyboard" forState:UIControlStateNormal];
[button addTarget:self action:@selector(endEditing) forControlEvents:UIControlEventTouchUpInside];
button.frame = CGRectMake(15, 150, 290, 30);
[self.view addSubview:button];
}
- (void)endEditing{
[self.view endEditing:YES];
}
- (BOOL)textFieldShouldBeginEditing:(UITextField *)textField{
[[CustomInputView sharedInputView] setupForTextField:textField];
return YES;
}
```
|
I had similar issues with sizing a custom keyboard from iOS 8 to iOS 10. I believe the proper solution is to have the input view provide a proper `intrinsicContentSize` and change (and invalidate!) that value when you want to change the view's height. Sample code:
```
class CustomInputView: UIInputView {
var intrinsicHeight: CGFloat = 200 {
didSet {
self.invalidateIntrinsicContentSize()
}
}
init() {
super.init(frame: CGRect(), inputViewStyle: .keyboard)
self.translatesAutoresizingMaskIntoConstraints = false
}
required init?(coder: NSCoder) {
super.init(coder: coder)
self.translatesAutoresizingMaskIntoConstraints = false
}
override var intrinsicContentSize: CGSize {
return CGSize(width: UIViewNoIntrinsicMetric, height: self.intrinsicHeight)
}
}
class ViewController: UIViewController {
@IBOutlet weak var textView: UITextView!
override func viewDidLoad() {
super.viewDidLoad()
textView.becomeFirstResponder()
let inputView = CustomInputView()
// To make the view's size more clear.
inputView.backgroundColor = UIColor(red: 0.5, green: 1, blue: 0.5, alpha: 1)
textView.inputView = inputView
// To demonstrate a change to the view's intrinsic height.
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + .seconds(2)) {
inputView.intrinsicHeight = 400
}
}
}
```
See also <https://stackoverflow.com/a/40359382/153354>.
|
When use Spawn and Spawn\_link?
When I need to create a process, I use the customary `spawn` BIF. But there is another BIF, `spawn_link`, that is often used to do the same thing.
So basically when should one use `spawn` and `spawn_link`?
|
Doing `spawn` and then `link` manually is equivalent in operation to `spawn_link`, but it is not equivalent in time; in particular it is not atomic (as in, two independent operations, not a single, indivisible one). If you spawn a process and it dies in its initialization (whatever your `start` or `init` functions do) then it might die *before* the call to `link` completes, and the linked process will never get notified that the process died since it died before it was linked. Oops!
From Joe Armstrong's *Programming Erlang* Ch.13 "Why Spawning and Linking Must be an Atomic Operation":
>
> Once upon a time Erlang had two primitives, `spawn` and `link`, and `spawn_link(Mod, Func, Args)` was defined like this:
>
>
>
```
spawn_link(Mod, Func, Args) ->
Pid = spawn(Mod, Func, Args),
link(Pid),
Pid.
```
>
> Then an obscure bug occurred. The spawned process died before the link statement was called, so the process died but no error signal was generated. This bug took a long time to find. To fix this, `spawn_link` was added as an atomic operation. Even simple-looking programs can be tricky when concurrency is involved.
>
>
>
|
How to set the background of Android PopupMenu to White
I'm struggling with setting the background of PopupMenu. After googling it for a while, I found it should go into the app theme. To be more specific, this should be defined in the style.xml.
```
<style name="AppTheme" parent="AppBaseTheme">
<item name="android:popupMenuStyle">MY_STYLE</item>
</style>
```
However, I didn't really figure out which style exactly I should use, as I assume there's a built-in one. I tried with `@android:style/Widget.Holo.PopupMenu` and `@android:style/Widget.Holo.Light.PopupMenu`, but with no luck.
|
For instance, try something like this:
```
<style name="Theme.MyAppTheme" parent="@style/Theme.Holo.Light">
<item name="popupMenuStyle">@style/PopupMenu.MyAppTheme</item>
</style>
```
And then on the style itself:
```
<style name="PopupMenu.MyAppTheme" parent="@style/Widget.Holo.Light.ListPopupWindow">
<item name="android:popupBackground">@drawable/popup_menu_bg_color</item>
</style>
```
That is the way it's done via [ActionBarStyleGenerator](http://jgilfelt.github.io/android-actionbarstylegenerator/) and some old references here on the [Developer Site](http://android-developers.blogspot.com/2011/04/customizing-action-bar.html).
|
Cosine function period
I'm using the cosine function in C++ to simulate flashing for a sprite in my game. The method looks like this:
(anything in the `sf` namespace is part of the SFML graphics library)
```
void Player::update(const float& deltaTime)
{
mAccumulatedTime += deltaTime;
float opacity = abs(cosf(5*mAccumulatedTime)) * 255;
static int numFlashes = 0;
if (opacity == 255) {
cout << ++numFlashes << endl;
}
mSprite.setFillColor(sf::Color(255, 255, 255, opacity));
}
```
So every time `opacity` is equal to 255 (basically the passing of one full period), `numFlashes` should be incremented. The problem is, `cos()` isn't perfect, meaning it doesn't *exactly* reach 1 and 0, so the `if` condition is rarely met. If I use rough checking like `if (opacity > 255*0.9999)`, then `numFlashes` becomes really high, really fast.
Does anyone know a way to accurately check when a full period has passed? Or is that just not possible?
|
One may use the following closed formula to compute `numFlashes`:
```
numFlashes = 5 * mAccumulatedTime / pi.
```
This follows from the fact that the period of the function `abs(cos(x))` is `pi` and, if an oscillating function in variable `x` has a period `T`, then the number of oscillations, `n`, is given by the formula:
```
n = x / T.
```
Thus, your function definition may be corrected, and even simplified, as follows:
```
const float PI = acosf(-1);
void Player::update(const float& deltaTime)
{
mAccumulatedTime += deltaTime;
float opacity = abs(cosf(5*mAccumulatedTime)) * 255;
int numFlashes = 5 * mAccumulatedTime / PI;
cout << numFlashes << endl;
mSprite.setFillColor(sf::Color(255, 255, 255, opacity));
}
```
|
How to find from where the Request came to ASP.NET Application?
I have an ASP.NET application deployed to IIS. The application will be called from another, separate website (a Java-based application which is like a portal). Can I find the URL of that website (the Java-based application) from which the request came to my ASP.NET application?
The reason I need that URL is that my application has a HOME button which, when clicked, should redirect the user to the website (the Java-based application) from which my ASP.NET application was called. I can't hard-code the URL, as the request will come from multiple places (the Java-based website runs on multiple servers with different or the same domain names).
|
The most reliable way is probably to have the "home" application provide a "return URL" when it links to your application. Something like:
```
<a href="http://yourserver/yourapplication?HomeURL=http%3A%2F%2Fhomeserver%2Fhomeapplication
```
Then in your application you'd check the query string for that value and probably just keep it in session or some transient session-like persistence:
```
Session["HomeURL"] = Request.QueryString["HomeURL"];
```
(Or of course something less coupled to the HTTP Context if possible, particularly in an MVC application.)
You can perhaps create a default of some sort of none is provided, maybe linking back just to your application's home or perhaps trying to get the value from `Request.UrlReferrer` (which isn't always available). Then your "Home" link would be that value, at least for the duration of that user's session (or however long you persist it).
|
How can I test several exceptions within one test using an ExpectedException Rule?
Got a question regarding the usage of junit's ExpectedException rule:
As suggested here: [junit ExpectedException Rule](https://stackoverflow.com/questions/156503/how-do-you-assert-that-a-certain-exception-is-thrown-in-junit4-5-tests)
starting from JUnit 4.7 one can test exceptions like this (which is much better than the @Test(expected=Exception.class)):
```
@Rule
public ExpectedException exception = ExpectedException.none();
@Test
public void testFailuresOfClass() {
Foo foo = new Foo();
exception.expect(Exception.class);
foo.doStuff();
}
```
Now I needed to test several exceptions in one test method and got a green bar after running the following test and thus thought every test passed.
```
@Test
public void testFailuresOfClass() {
Foo foo = new Foo();
exception.expect(IndexOutOfBoundsException.class);
foo.doStuff();
//this is not tested anymore and if the first passes everything looks fine
exception.expect(NullPointerException.class);
foo.doStuff(null);
exception.expect(MyOwnException.class);
foo.doStuff(null,"");
exception.expect(DomainException.class);
foo.doOtherStuff();
}
```
However, after a while I realized that the test method quits after the first check passes. This is misleading, to say the least. In JUnit 3 this was easily possible...
So here is my question:
**How can I test several exceptions within one test using an ExpectedException Rule?**
|
Short answer: You can't.
If the first call - to `foo.doStuff()` - throws an exception, you will never reach `foo.doStuff(null)`. You'll have to split your test up into several (and for this trivial case I'd propose going back to the simple notation, without `ExpectedException`):
```
private Foo foo;
@Before
public void setUp() {
foo = new Foo();
}
@Test(expected = IndexOutOfBoundsException.class)
public void noArgsShouldFail() {
foo.doStuff();
}
@Test(expected = NullPointerException.class)
public void nullArgShouldFail() {
foo.doStuff(null);
}
@Test(expected = MyOwnException.class)
public void nullAndEmptyStringShouldFail() {
foo.doStuff(null,"");
}
@Test(expected = DomainException.class)
public void doOtherStuffShouldFail() {
foo.doOtherStuff();
}
```
If you really want one and only one test, you can `fail` if no error is thrown, and catch the things you expect:
```
@Test
public void testFailuresOfClass() {
Foo foo = new Foo();
try {
foo.doStuff();
fail("doStuff() should not have succeeded");
} catch (IndexOutOfBoundsException expected) {
// This is what we want.
}
try {
foo.doStuff(null);
fail("doStuff(null) should not have succeeded");
} catch (NullPointerException expected) {
// This is what we want.
}
// etc for other failure modes
}
```
This gets quite messy pretty fast, though, and if the first expectation fails, you won't see if anything else fails as well, which can be annoying when troubleshooting.
|
ZedGraph filling areas
I am using the ZedGraph control and want to fill one side of the graph function with some color and the other side with another color.
```
PointPairList list1 = new PointPairList();
list1.Add(0, 4);
list1.Add(4, 0);
LineItem myCurve = myPane.AddCurve("y(n)", list1, Color.Red, SymbolType.Diamond);
//This filling bottom side.
myCurve.Line.Fill = new Fill(Color.White, Color.FromArgb(113, 255, 0, 0), 90F);
//How to fill the top side?
```
|
I'm not very clear on what you're asking - but hopefully the below will help. You said in comments
>
> Can I fill some polygon area in Zedgraph?
>
>
>
So here's how...
```
var zed = new ZedGraph.ZedGraphControl { Dock = System.Windows.Forms.DockStyle.Fill };
var poly = new ZedGraph.PolyObj
{
Points = new[]
{
new ZedGraph.PointD(0, 0),
new ZedGraph.PointD(0.5, 1),
new ZedGraph.PointD(1, 0.5),
new ZedGraph.PointD(0, 0)
},
Fill = new ZedGraph.Fill(Color.Blue),
ZOrder = ZedGraph.ZOrder.E_BehindCurves
};
var poly1 = new ZedGraph.PolyObj
{
Points = new[]
{
new ZedGraph.PointD(1, 0),
new ZedGraph.PointD(0.25, 1),
new ZedGraph.PointD(0.5, 0),
new ZedGraph.PointD(1, 0)
},
Fill = new ZedGraph.Fill(Color.Red),
ZOrder = ZedGraph.ZOrder.E_BehindCurves
};
zed.GraphPane.AddCurve("Line", new[] { 0.0, 1.0 }, new[] { 0.0, 1.0 }, Color.Green);
zed.GraphPane.GraphObjList.Add(poly1);
zed.GraphPane.GraphObjList.Add(poly);
```
Results in

Hopefully this will point you in the right direction!
([Code in VB as requested](http://pastebin.com/jsFB1jxP) via <http://converter.telerik.com/> - no guarantee of the VB code working or even compiling!)
|
node.js http.IncomingMessage does not fire 'close' event
**When does the http.IncomingMessage fire its 'close' event?**
According to the [documentation](http://nodejs.org/api/http.html#http_event_close_2 "documentation") it should occur when the underlying connection was closed. However, it is never called for the following example code (I made sure it is not caused by keep-alive):
```
var http = require('http'),
fs = require('fs');
var server = http.createServer(function(req, res) {
res.shouldKeepAlive = false;
req.on("end", function() {
console.log("request end");
});
req.on("close", function() {
console.log("request close"); // Never called
});
res.end("Close connection");
});
server.listen(5555);
```
I'm using node.js v0.10.22.
|
The 'close' event is fired, when the underlying connection was closed before the response was sent.
Can be tested using the following server code and aborting the request midway through.
```
var http = require('http'),
fs = require('fs');
var server = http.createServer(function(req, res) {
res.shouldKeepAlive = false;
req.on("end", function() {
console.log("request end");
});
req.on("close", function() {
console.log("request close"); // Called, when connection closed before response sent
});
setTimeout(function () {
res.end("Close connection");
}, 5000); // Wait some time to allow user abort
});
server.listen(5555);
```
Thanks to [gustavohenke](https://stackoverflow.com/users/2083599/gustavohenke)!
|
Repeating selects or use IN clause, which is faster?
I find this is a common situation when designing JDBC/JPA queries that use a collection as the WHERE condition for selection.
Let's say there is a table of 50 thousand records with a field `order_id` which is properly indexed. Now the Java application has a list of 500 order ids for which it needs to find the order details and assign values to each order object. There are two possible plans:
```
1. run 500 SELECT queries
for(String id:order_ids){
    Order order = QueryAgent.execute("SELECT * FROM ORDERS o WHERE o.order_id = " + id);
modifyOrder(order);
}
2. run one query whith 500 parameters in
String orders_in_string = getOrdersInString(order_ids);
List<Order> orders = QueryAgent.execute("SELECT * FROM ORDERS o WHERE o.order_id IN " + orders_in_string);
for(Order order:orders){
modifyOrder(order);
}
```
I cannot tell which one gets better performance.
|
The two queries are going to return the same volume of results. So, your real question is which is faster: running 500 small queries or one large query to get the same results.
The proper answer to a performance question is to encourage you to try it on your system and see which is faster on your data in your environment.
In this case, there is every reason to expect that a single query would be faster. SQL incurs overhead in parsing queries (or finding already parsed equivalent queries), in starting a query, and in preparing results. All of these happen *once* per query. So, I would expect one query to be faster.
If the list of `order_id`s is already coming from the database, I would recommend building a more complex query, so they don't have to go back to the application. Something like:
```
select o.*
from orders o
where o.order_id in (select . . . );
```
Also, if you don't need all the columns, you should just explicitly choose the ones you want. Actually, it is a good idea to be always explicit in the columns being chosen.
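For example, a sketch of plan 2 with plain JDBC (the `QueryAgent` helper from the question is replaced by a `PreparedStatement`; the class and method names here are made up for illustration):

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

class OrderLoader {
    // One parameterized IN query instead of 500 separate SELECTs.
    static void loadOrders(Connection connection, List<String> orderIds) throws SQLException {
        String placeholders = String.join(",", Collections.nCopies(orderIds.size(), "?"));
        String sql = "SELECT * FROM orders o WHERE o.order_id IN (" + placeholders + ")";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            int i = 1;
            for (String id : orderIds) {
                ps.setString(i++, id);
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // map each row to an Order and modify it here
                }
            }
        }
    }
}
```

Keep in mind that some databases cap the number of IN-list items or bind parameters, so very large lists may need to be sent in chunks.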
|
Quickest way to get user input in Android
What's the fastest way to create user-input dialogs? I could make a different activity every time I need user input, but that looks like overkill in my case. I just need a small number of popups on different screens of the user interface.
Can someone point me in the right direction?
|
[AlertDialog.Builder](http://developer.android.com/reference/android/app/AlertDialog.Builder.html) from the API. Here is an example:
```
AlertDialog.Builder alert = new AlertDialog.Builder(this);
alert.setTitle("Title");
alert.setMessage("Message");
// Set an EditText view to get user input
final EditText input = new EditText(this);
alert.setView(input);
alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int whichButton) {
String value = input.getText().toString();
// Do something with value!
}
});
alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int whichButton) {
// Canceled.
}
});
alert.show();
```
It's a convenient way of retrieving user input.
<http://www.androidsnippets.com/prompt-user-input-with-an-alertdialog>
|
KEEP keyboard ON when UIAlertcontroller is presented in Swift?
When the alert pops up, the keyboard is dismissed. I have looked everywhere but did not find a solution to keep the keyboard visible. When the alert is presented, the text field seems to resign first responder automatically because the alert is presented modally. How is it possible to keep the keyboard behind this alert, i.e. have the text field still editing even though no interaction with it will be possible?
|
This solution works for me:
```
let rootViewController: UIViewController =
    UIApplication.sharedApplication().windows.last!.rootViewController!
rootViewController.presentViewController(alert, animated: true, completion: nil)
```
---
*edit by @galambalazs:* The reason it works is because:
You can grab the window with the current highest window level and present your *View Controller* inside that *Window* (making it the top *View Controller* in the top *Window*).
>
> [**UIApplication.sharedApplication().windows**](https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIApplication_Class/#//apple_ref/occ/instp/UIApplication/windows)
>
> The windows in the array are ordered from back to front by window level;
>
> thus, the last window in the array is on top of all other app windows.
>
>
>
Also you might want to set the tintColor of that window so that it matches your apps global tintColor.
```
UIWindow *topWindow = [UIApplication sharedApplication].windows.lastObject;
// we inherit the main window's tintColor because topWindow may not have the same
topWindow.tintColor = [UIApplication sharedApplication].delegate.window.tintColor;
```
|
Autowiring of beans generated by EasyMock factory-method?
I have a problem that seems really strange to me. I have the following setup:
An interface:
```
package com.example;
public interface SomeDependency {
}
```
A spring component:
```
package com.example;
@Component
public class SomeClass {
}
```
A spring test config with a mocked bean generated by EasyMock:
```
<beans ....>
<context:component-scan base-package="com.example"/>
<bean id="someInterfaceMock" class="org.easymock.EasyMock" factory-method="createMock">
<constructor-arg value="com.example.SomeDependency" />
</bean>
</beans>
```
And a unit test:
```
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/testconfig.xml")
public class SomeClassTest {
@Autowired
SomeClass someClass;
@Autowired
SomeDependency someDependency;
@Test
public void testSomeClass() throws Exception {
assertNotNull(someClass);
}
@Test
public void testSomeDependency() throws Exception {
assertNotNull(someDependency);
}
}
```
The project compiles and the tests pass without any problem, i.e. autowiring of both *SomeClass* (a "real" object) and *SomeDependency* (a mock object generated by EasyMock) succeed.
However, if I change the implementation of *SomeClass* to:
```
@Component
public class SomeClass {
@Autowired
SomeDependency someDependency;
}
```
both tests fail because
```
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type [com.example.SomeDependency] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
```
So my questions are:
1. Why does Spring fail to autowire the dependency to *SomeClass* (when it succeeds autowiring the same dependency to *SomeClassTest*)?
2. How can I change the *SomeClassTest* or *testconfig.xml* to make the tests pass?
Comment: In reality the class represented by *SomeClass* is part of a framework. Consequently, it cannot easily be updated, at least not within reasonable time.
Dependencies:
- Spring: 3.0.5.RELEASE
- EasyMock: 3.0
**Edit:**
As of Spring 3.2 RC1, the problem with generic factory methods and mock objects has been [solved](http://blog.springsource.org/2012/11/07/spring-framework-3-2-rc1-new-testing-features/).
/Mattias
|
It seems the order of the definitions in the xml actually matter when using factories to create beans with autowiring. If you place the declaration of `someInterfaceMock` above `component-scan` it will work.
Some clarification why: when Spring tries to autowire `SomeClass`, it searches for a bean of type `SomeDependency`. At this stage `someInterfaceMock` is still a factory, so Spring checks the signature of the factory method `EasyMock.createMock(...)`, which returns `<T>`; Spring therefore only finds an `Object`, which isn't the required type.
A better way would be to use Spring's `FactoryBean` interface to create your mocks.
Here is a basic implementation that should work:
```
public class EasyMockFactoryBean<T> implements FactoryBean<T> {
private Class<T> mockedClass;
public void setMockedClass(Class<T> mockedClass) {
this.mockedClass = mockedClass;
}
public T getObject() throws Exception {
return EasyMock.createMock(mockedClass);
}
public Class<T> getObjectType() {
return mockedClass;
}
public boolean isSingleton() {
return true;
}
}
```
Here is the bean definition (the order won't matter!):
```
<bean class="com.example.EasyMockFactoryBean">
<property name="mockedClass" value="com.example.Dependancy"/>
</bean>
```
|
Read full dpkg database state without dpkg program
From a Live USB, and without using the installed Ubuntu's programs like `dpkg`, `dpkg-query`, `dselect`, `aptitude`, `apt-get`, or any other (as they are all [broken `Illegal instruction`](https://askubuntu.com/questions/1083787/kernel-panic-at-boot-and-many-programs-segfault-core-dump)), how do I read the package lists that `apt` reads, as a list in order to catalogue the installed packages, to reinstall later?
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
```
[where is local package database?](https://ubuntuforums.org/showthread.php?t=1372833) is almost a perfect answer:
>
> `/var/lib/dpkg/status` is modified by dpkg if you change the state of a package, i.e. install it, remove it, mark it for some action, ...
>
>
>
- I don't want just to list "changed" packages, but also depended packages, the entire state of the system, etc. Is that contained by `dpkg/status`?
- On another healthy system, that file has ~3800 `Package:` entries; on the target system it has 3600 `Package:` entries, and on the Live USB it has only 2000 entries. That sounds right except I worry it doesn't list *all* the packages.
- I am not looking for the downloaded `.deb` cache, nor the `cache` for `dpkg` / `apt` which holds temporary data files -- these do not give the full system state.
- [`/var/lib/apt/lists`](https://askubuntu.com/questions/937927/where-is-the-information-from-apt-show-packagename-stored?rq=1) seems to contain a list of package sources and packages, which is almost perfect except a lot of parsing is involved in getting a list of package names from it. Is this the best option?
The file [`/var/cache/apt/pkgcache.bin`](https://askubuntu.com/questions/911785/where-does-apt-cache-depends-collect-the-dependency-information-from) exists but since it's a generated binary, I don't think copying it to a fresh install will be sufficient.
|
See [`man dpkg`](http://manpages.ubuntu.com/manpages/bionic/en/man1/dpkg.1.html):
>
> The other files listed below are in their default directories, see
> option `--admindir` to see how to change locations of these files.
>
>
> - `/var/lib/dpkg/available` List of available packages.
> - `/var/lib/dpkg/status` Statuses of available packages.
>
>
> This file contains information about whether a package is marked for
> removing or not, whether it is installed or not, etc. See section
> INFORMATION ABOUT PACKAGES for more info. The status file is backed up
> daily in `/var/backups`. It can be useful if it's lost or corrupted due
> to filesystems troubles.
>
>
>
You can quit worrying. If the package isn't in `status`, `dpkg` doesn't know about it. At all.
---
By the way, the "package lists" that apt is talking about are a different thing: those are the lists it downloaded from the sources in `/etc/apt/sources.list`, etc., and they are in `/var/lib/apt/lists`. These contain information about packages in the repositories, and say nothing about the local system.
---
About your current problem, you can use the live system's `dpkg` to read that file, using the `--admindir` option mentioned above.
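For example, from the live session (a sketch; it assumes the broken install's root filesystem is mounted at /mnt/target):

```
# List the packages recorded in the target system's dpkg database
dpkg --admindir=/mnt/target/var/lib/dpkg --get-selections > installed-packages.txt

# Later, on the fresh install, feed the list back in:
#   dpkg --set-selections < installed-packages.txt
#   apt-get dselect-upgrade
```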
|
Numpy: Replace random elements in an array
I already googled a bit and didn't find any good
answers.
The thing is, I have a 2d numpy array and I'd like to
replace some of its values at random positions.
I found some answers using numpy.random.choice to create
a mask for the array. Unfortunately this does not create
a view on the original array so I can not replace its values.
So here is an example of what I'd like to do.
Imagine I have 2d array with float values.
```
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]
```
And then I'd like to replace an arbitrary amount of
elements. It would be nice if I could tune with a parameter
how many elements are going to be replaced.
A possible result could look like this:
```
[[ 3.234, 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 2.234]]
```
I couldn't think of a nice way to accomplish this.
Help is appreciated.
|
Just mask your input array with a random one of the same shape.
```
import numpy as np
# input array
x = np.array([[ 1., 2., 3.], [ 4., 5., 6.], [ 7., 8., 9.]])
# random boolean mask for which values will be changed
mask = np.random.randint(0, 2, size=x.shape).astype(bool)
# random matrix the same shape of your data
r = np.random.rand(*x.shape)*np.max(x)
# use your mask to replace values in your input array
x[mask] = r[mask]
```
Produces something like this:
```
[[ 1. 2. 3. ]
[ 4. 5. 8.54749399]
[ 7.57749917 8. 4.22590641]]
```
|
Export image from R to word with alpha channel (transparency)
I am wanting to export an R produced figure to Word. The figure contains transparency (alpha channel). Below is some example code - when exported to Windows metafile it throws an error:
>
> Warning message:
> In plot.xy(xy, type, ...) :
> semi-transparency is not supported on this device: reported only once per page
>
>
>
Exporting to SVG produces the desired result, but this image format is not supported by MS Office. Is there a way around this? What image type could I use while retaining the alpha channel? PNG is possible, but this doesn't produce very crisp graphics - it loses the clear vectorized image.
```
# Get some colours with transparency (alpha = 0.6)
col.dot <- rainbow(5, alpha = .6)
# Save to svg file - OK
svg("test_fig.svg")
plot(1:5,col = col.dot, pch=15)
dev.off()
# Save to wmf - warning "semi-transparency is not supported on this device..."
win.metafile("test_fig.wmf")
plot(1:5,col = col.dot, pch=15)
dev.off()
```
I should add, this is on a Windows system (Windows 8 64 bit, with Word 2013)
|
I just made a new package `export` to easily export R graphs to Office (Word, Powerpoint), see
<https://cran.r-project.org/web/packages/export/index.html> and
for demo <https://github.com/tomwenseleers/export>.
Typical syntax is very easy, e.g.:
```
install.packages("export")
library(export)
library(ggplot2)
qplot(Sepal.Length, Petal.Length, data = iris, color = Species,
size = Petal.Width, alpha = I(0.7))
graph2ppt(file="ggplot2_plot.pptx", width=6, height=5)
```
Output is vector format and so fully editable after you ungroup your graph in Powerpoint. You can also use it to export to Word, Excel, Latex or HTML and you can also use it to export statistical output of various R stats objects.

This results in a fully editable, high quality Powerpoint graph in native Office vector-based DrawingML format, which you can also readily copy & paste as enhanced metafile if you like, and which unlike the EMFs exported from R also fully supports transparency.
|
awk approach to subset file based on three columns
I have a file in .BED format (input.bed)
```
less input.bed
A 678 934 0bb6fdb5 0 +
A 715 986 8bc17666 0 +
A 716 1029 065f9d32 0 -
A 769 1367 572ae096 0 +
B 801 924 e6f469bf 0 +
B 876 1007 384622c7 0 +
B 921 1118 9f8bdbbb 0 -
C 1038 1841 f9a29a0g 0 +
C 1078 1929 9cd655be 0 +
```
I want to subset the file from the LINUX command line and produce a new file that contains all lines with "A" in the first column, a value larger than 715 in the second column, and a value smaller than 1367 in the third column.
I tried using awk
```
awk -F '$1 == "A" && $2 >= 715 && $3 <= 1367' input.bed > output.bed
```
But I get the error message:
```
awk: cmd. line:1: input.bed
awk: cmd. line:1: ^ syntax error
```
The desired output looks like this
```
less output.bed
A 715 986 8bc17666 0 +
A 716 1029 065f9d32 0 -
A 769 1367 572ae096 0 +
```
|
Please try the following with your shown samples. Assuming that your Input_file is space delimited, we need not set a field separator, since space is the default one for `awk` (in your attempted code the `-F` option consumes your condition string as the field-separator value, leaving the filename to be parsed as the program, which causes the syntax error). In case you have tab as the field separator, change `awk '` to `awk 'BEGIN{FS=OFS="\t"}` in the following code.
```
awk '$1=="A" && $2>=715 && $3<=1367' Input_file
```
***Explanation:*** Simply put, it checks three conditions: if the 1st field is A, the 2nd field is greater than or equal to 715, AND the 3rd field is less than or equal to 1367, then print that line. Once you are happy with the above code (which prints the matching lines on the terminal), append `> output.bed` to write the output to a file.
***NOTE:*** For OP's statement:
>
> a value larger than 715 in the second column, and a value smaller than
> 1367 in the third column.
>
>
>
In case we don't want to include 715 and 1367 values then change conditions to:
```
awk '$1=="A" && $2>715 && $3<1367' Input_file
```
|
Why isn't my JavaScript function able to access global-scope functions/variables defined in my other .js files?
I wrote a script like that:
```
NS.load = function(src) {
var script = document.createElement("script").setAttribute("src", src);
document.getElementsByTagName("head")[0].appendChild(script);
}
```
It loads files but I can't reach functions and variables defined in other files.
```
//js/main.js
var qux = {name: "name"};
NS.load("js/foo.js");
//js/foo.js
alert(qux.name); //undefined variable
```
But if I define qux like this:
```
window.qux = {name: "name"};
```
I can reach qux variable in other modules. As far as I know all globals are already a member of window object. So why I have to define variables like this. Could you offer another method?
Thanks.
|
It looks like you tried to shortcut your code by calling `createElement` and `setAttribute` all on one line, but `setAttribute` doesn't return anything, so you can't call `appendChild` on its return value, because there is none. This will fix it:
```
NS.load = function(src) {
var script = document.createElement("script");
script.setAttribute("src", src)
document.getElementsByTagName("head")[0].appendChild(script);
}
```
**Edit:**
What sort of environment are you running your code in? Is something happening cross-site or are you defining qux inside of another function? The following works for me, running the files via <http://localhost/test.html>
```
<html>
<head>
<script type="text/javascript">
load = function(src) {
var script = document.createElement("script");
script.setAttribute("src", src);
document.getElementsByTagName("head")[0].appendChild(script);
}
var qux = {name: "name"};
load("foo.js");
</script>
</head>
<body></body>
</html>
```
foo.js:
```
alert(qux.name);
```
I get an alert with "name" when the page loads.
|
What is the difference between running @powershell and powershell (any command with @ at the front)?
Just installed [Chocolatey](https://chocolatey.org/) and in their [installation instructions](https://chocolatey.org/) they have
`@powershell -NoProfile -ExecutionPolicy unrestricted......`
I tried to run **`@cls`** in my console and **it worked too**.
It's amazing to find something new in this space after so many years in the industry.
**What difference `@` makes?**
|
The most recognizable instance of this is probably `@echo off` at the beginning of batch files (since DOS).
`@` turns off console echo of that command. You see it in `@echo off` all the time because if you don't use it, you'll see the actual command `echo off` in the console display when the batch is run. Which is counter-productive when you're trying to control what's displayed/echoed in your batch script. :)
Having said that, I'm not sure why the Chocolaty website says to use it in that (non-batch) context.
Here's Microsoft's TechNet article on [Echo](http://technet.microsoft.com/en-us/library/bb490897.aspx), which explains the `@`:
>
> If you want to turn echo off and you do not want to echo the echo command, type an at sign (@) before the command as follows:
>
>
> @echo off
>
>
>
Turns out someone asked about this a year ago on StackOverflow as well:
- [What is the at sign (@) in a batch file and what does it do?](https://stackoverflow.com/questions/21074863/what-is-the-at-sign-in-a-batch-file-and-what-does-it-do)
|
Why is the expected value of y written as E(y|x)?
The expected value of the simple linear regression model $y = \beta\_0 + \beta\_1x + \epsilon$ is typically written as $E(y|x) = \beta\_0 + \beta\_1x$. Why is it written as $E(y|x)$ instead of just $E(y)$? What would $E(y)$ on its own represent?
|
The post linked in comments covers much of the necessary information for a more expansive answer, so I will give a shorter, more direct response:
>
> Why is it written as E(y|x) instead of just E(y)?
>
>
>
$E(y|x)$ is the expected value of $y$ *given* that we know the value of $X$ is $x$.
For example, the average height of children changes as they age -- the expected height (y) of a 4 year old girl is not the same as the expected height of a nine year old girl. i.e. $E(y|X=4) \neq E(y|X=9)$
>
> What would E(y) on its own represent?
>
>
>
The unconditional mean of $y$. Think "the average height of a female child" in the above example. Given some distribution of ages and the average height at each age you could work out the average height overall.
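This is just the law of total expectation: for a discrete predictor,
$$E(y) = \sum_x E(y \mid X = x)\,P(X = x),$$
i.e. the conditional means, weighted by how common each value of $x$ is, average out to the unconditional mean.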
If your sample were a random sample from the population, you could estimate $E(y)$ by $\bar{y}$, but if you know the distribution of ages, you don't necessarily have to sample the ages in proportion to the population of ages to estimate average height from the individual conditionals.
[Linear regression assumes that the conditional expectation is a linear function of the predictor (or predictors), but the notions of conditional expectation vs unconditional expectation are more general]
|
@interface keyword in the \*.m file
I am just want to know if there is any difference between:
```
@interface AppDelegate : NSObject <NSApplicationDelegate>
@property ...
@end
```
in the \*.h file and:
```
@interface AppDelegate()
@property ...
@end
@implementation AppDelegate
```
in the \*.m file.
When do I need prefer the first version and when it will be better to use second version ?
|
Anything you can put in a .h file you can put in a .m file. The contents of an imported .h are directly copied into the .m during the early stages of compilation, so there is no syntactic difference based on where a statement is placed.
But you will note that there *is* a difference in syntax between
```
@interface AppDelegate : NSObject <NSApplicationDelegate>
```
and
```
@interface AppDelegate()
```
The second form is an *extension* of the already-described interface, vs being a new (and conflicting) version. The fact that it is placed in the .m (which is not required -- it could be placed in the .h without raising an error) means that any other module importing the .h will not "see" it, making it "private" (as much as anything in Objective-C is private).
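A minimal sketch of such a class extension living in the .m (the property name here is made up):

```
// AppDelegate.m
#import "AppDelegate.h"

// Class extension: only code in this file sees the extra declarations,
// so the property is effectively private.
@interface AppDelegate ()
@property (nonatomic, strong) NSString *internalState;
@end

@implementation AppDelegate
// The backing ivar and accessors for internalState are synthesized here.
@end
```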
|
Scatterplot with too many points
I am trying to plot two variables where N=700K. The problem is that there is too much overlap, so that the plot becomes mostly a solid block of black. Is there any way of having a grayscale "cloud" where the darkness of the plot is a function of the number of points in an region? In other words, instead of showing individual points, I want the plot to be a "cloud", with the more the number of points in a region, the darker that region.
|
One way to deal with this is with alpha blending, which makes each point slightly transparent. So regions appear darker when they have more points plotted on them.
This is easy to do in `ggplot2`:
```
df <- data.frame(x = rnorm(5000),y=rnorm(5000))
ggplot(df,aes(x=x,y=y)) + geom_point(alpha = 0.3)
```

Another convenient way to deal with this (and probably more appropriate for the number of points you have) is hexagonal binning:
```
ggplot(df,aes(x=x,y=y)) + stat_binhex()
```

And there is also regular old rectangular binning (image omitted), which is more like your traditional heatmap:
```
ggplot(df,aes(x=x,y=y)) + geom_bin2d()
```
|
Date conversion without specifying the format
I do not understand how the "ymd" function from the library "lubridate" works in R. I am trying to build a feature which converts the date correctly without having to specify the format. I am checking for the minimum number of NA's occurring as a result of dmy(), mdy() and ymd() functions.
So ymd() is giving NA sometimes, and sometimes not, for the same date value. Are there any other functions or packages in R which will help me get around this problem?
```
> data$DTTM[1:5]
[1] "4-Sep-06" "27-Oct-06" "8-Jan-07" "28-Jan-07" "5-Jan-07"
> ymd(data$DTTM[1])
[1] NA
Warning message:
All formats failed to parse. No formats found.
> ymd(data$DTTM[2])
[1] "2027-10-06 UTC"
> ymd(data$DTTM[3])
[1] NA
Warning message:
All formats failed to parse. No formats found.
> ymd(data$DTTM[4])
[1] "2028-01-07 UTC"
> ymd(data$DTTM[5])
[1] NA
Warning message:
All formats failed to parse. No formats found.
>
> ymd(data$DTTM[1:5])
[1] "2004-09-06 UTC" "2027-10-06 UTC" "2008-01-07 UTC" "2028-01-07 UTC"
[5] "2005-01-07 UTC"
```
Thanks
|
@user1317221\_G has already pointed out that your dates are in day-month-year format, which suggests that you should use `dmy` instead of `ymd`. Furthermore, because your month is in `%b` format ("Abbreviated month name in the current locale"; see `?strptime`), your problem may have something to do with your `locale`. The month names you have seem to be English, which may differ from how they are spelled in the locale you are currently using.
Let's see what happens when I try `dmy` on the dates in my `locale`:
```
date_english <- c("4-Sep-06", "27-Oct-06", "8-Jan-07", "28-Jan-07", "5-Jan-07")
dmy(date_english)
# [1] "2006-09-04 UTC" NA "2007-01-08 UTC" "2007-01-28 UTC" "2007-01-05 UTC"
# Warning message:
# 1 failed to parse.
```
"27-Oct-06" failed to parse. Let's check my time `locale`:
```
Sys.getlocale("LC_TIME")
# [1] "Norwegian (Bokmål)_Norway.1252"
```
dmy does not recognize "oct" as a valid `%b` month in my locale.
One way to deal with this issue would be to change "oct" to the corresponding Norwegian abbreviation, "okt":
```
date_nor <- c("4-Sep-06", "27-Okt-06", "8-Jan-07", "28-Jan-07", "5-Jan-07" )
dmy(date_nor)
# [1] "2006-09-04 UTC" "2006-10-27 UTC" "2007-01-08 UTC" "2007-01-28 UTC" "2007-01-05 UTC"
```
Another possibility is to use the original dates (i.e. in their original 'locale'), and set the `locale` argument in `dmy`. Exactly how this is done is platform dependent (see `?locales`). Here is how I would do it in Windows:
```
dmy(date_english, locale = "English")
[1] "2006-09-04 UTC" "2006-10-27 UTC" "2007-01-08 UTC" "2007-01-28 UTC" "2007-01-05 UTC"
```
|
Time complexity of finding a string in an unordered set of strings
<http://www.cplusplus.com/reference/unordered_set/unordered_set/find/>
The time complexity of find in an unordered\_set is documented to be constant on average.
If I have an unordered\_set of strings, then what will be the time complexity of finding a string in that set?
Will it be constant or O(length of the string)?
|
`std::unordered_set` is implemented as a hash table. Its `find` method has an average-case complexity of `O(1)` and a worst-case complexity of `O(set.size())` key comparisons as specified by [[tab:container.hash.req]](https://timsong-cpp.github.io/cppwp/unord.req#tab:container.hash.req).
---
By default, `std::unordered_set<std::string>` uses `operator==` to compare keys, so each key comparison runs in `O(str.size())` as specified in [[string.view.ops]](https://timsong-cpp.github.io/cppwp/string.view#ops-13) (`operator==(const std::string&, const std::string&)` is defined to be equivalent to `std::string_view(str1) == std::string_view(str2)`, which is defined to be equivalent to `std::string_view(str1).compare(std::string_view(str2)) == 0`).
For an `std::unordered_set<std::string>`, the container must calculate a hash of the string to find. By default it uses `std::hash<std::string>` for this. The standard doesn't specify any complexity requirements for `std::hash<std::string>`, but it's most likely `O(str.size())`.
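To make the two cost components concrete, here is a minimal sketch (the set contents and the key are hypothetical): `find` hashes the key once and then compares it against the keys in the matching bucket, and both the hash and each comparison are linear in the key's length, so the average total cost is proportional to `key.size()`.
```
#include <iostream>
#include <string>
#include <unordered_set>

int main() {
    std::unordered_set<std::string> names{"alice", "bob", "carol"};

    std::string key = "carol";
    // find() hashes `key` once (O(key.size())) and, on average, performs a
    // constant number of key comparisons, each O(key.size()) in the worst
    // case, so the average overall cost is O(key.size()).
    if (names.find(key) != names.end()) {
        std::cout << key << " is in the set\n";
    }
}
```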
|
What is the native narrow string encoding on Windows?
The Subversion API has a [number of functions](http://subversion.apache.org/docs/api/latest/svn__utf_8h.html) for converting from "natively-encoded" strings to strings that are encoded in UTF-8. My question is: what is this native encoding on Windows? Does it depend on locale?
|
"Natively encoded" strings are strings written in whatever [code page](http://en.wikipedia.org/wiki/Code_page) the user is using. That is, they are numbers that are translated to the appropriate glyphs based on the correct code page. Assuming the file was saved that way and not as a UTF-8 file.
This is a candidate question for [Joel's article on Unicode](http://www.joelonsoftware.com/articles/Unicode.html).
Specifically:
>
> Eventually this OEM free-for-all got
> codified in the ANSI standard. In the
> ANSI standard, everybody agreed on
> what to do below 128, which was pretty
> much the same as ASCII, but there were
> lots of different ways to handle the
> characters from 128 and on up,
> depending on where you lived. These
> different systems were called code
> pages. So for example in Israel DOS
> used a code page called 862, while
> Greek users used 737. They were the
> same below 128 but different from 128
> up, where all the funny letters
> resided. The national versions of
> MS-DOS had dozens of these code pages,
> handling everything from English to
> Icelandic and they even had a few
> "multilingual" code pages that could
> do Esperanto and Galician on the same
> computer! Wow! But getting, say,
> Hebrew and Greek on the same computer
> was a complete impossibility unless
> you wrote your own custom program that
> displayed everything using bitmapped
> graphics, because Hebrew and Greek
> required different code pages with
> different interpretations of the high
> numbers.
>
>
>
|
How to configure read only transaction in GET request with Guice and Jersey?
I am using Guice, Guice Persist, Hibernate and Jersey. All my resource methods are annotated with @Transactional except for the methods that are annotated with @GET. Is this the correct way to configure the transaction-per-request scope?
|
There is no "correct" way; it depends on what you are trying to achieve. There is a few separate issues involved in answering your question, as discussed below.
1. Data access pattern: The traditional pattern in enterprise Java is to have your controllers make calls to DAOs (Data Access Objects), which handle persistence. In this pattern, your transaction annotations would be placed on the DAO methods, not your controller methods. However, many people find DAOs to be overkill and prefer to have the entity manager injected into the controller. This is a perfectly acceptable alternative and, from what I can tell, this is the approach you have chosen.
2. Thread safety: You must be aware that entity manager instances are not thread safe, so your controllers must not be singletons and must be request-scoped. In other words, you cannot share an entity manager across multiple requests, so you must set your controllers to be re-created and have a new entity manager injected for every request.
3. Transactions: If your data retrieval operation only requires one query, you will not require a transaction. However, building a complete object graph generally requires many queries. In this circumstance, your retrieval must be transactional in order to guarantee consistency, because the data might change between queries. Some frameworks will actually require all database access to be transactional, or the entity manager will not be injected correctly.
In summary, your data access pattern is fine, as long as your controllers are request-scoped. However, your "GET" functions should be transactional if many queries will be involved in creating the object graph.
|