Add element to null (empty) List Property I have a problem: I try to add an object to a list of these objects. The list is a property; there is no compile error, but when I run it, it fails at this point with a "NullReferenceException". That sounds logical, because the list property is "null", but I can't initialize a property where I declare it, can I? Here is a code snippet: ``` class Maps { protected virtual List<Ant> AllAntsAtMap { get; set; } ``` [...] ``` class Quadrangle : Maps { protected override List<Ant> AllAntsAtMap { get; set; } public override void AddAntToMap(Ant ant) { AllAntsAtMap.Add(ant); //Error here } public override void AddAntsToMap(List<Ant> ants) { foreach (Ant ant in ants) { AddAntToMap(ant); } } ``` [...]
> > Add element to null (empty) List Property > > > `null` and an empty list are two different things: Adding an element to an empty list works fine, but if your property is `null` (as all reference-type properties are initially `null`), you need to initialize it with an empty list first. You could use an auto-property initializer for that (see [Kędrzu's answer](https://stackoverflow.com/a/32504928/87698)), or you could manually initialize the list in the constructor: ``` class Maps { public Maps() { AllAntsAtMap = new List<Ant>(); } ... } ``` (Since the property is declared in the superclass Maps, I'd do the initialization there rather than in the subclass Quadrangle.)
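For reference, the auto-property initializer variant would look roughly like this (C# 6 or later). This is only a sketch based on the classes in the question: because the subclass overrides the property with its own auto-property (and therefore its own backing field), the initializer goes on the override that is actually used:

```
class Quadrangle : Maps
{
    // C# 6+ auto-property initializer: the backing field starts as an empty
    // list, so AllAntsAtMap.Add(ant) no longer throws NullReferenceException.
    protected override List<Ant> AllAntsAtMap { get; set; } = new List<Ant>();

    public override void AddAntToMap(Ant ant)
    {
        AllAntsAtMap.Add(ant);
    }
}
```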
Coffeescript: "else" within array comprehension Does coffeescript offer an equivalent for "else" within array comprehensions like python's list comprehensions? ### Python example ``` foo = ["yes" if i < 50 else "no" for i in range(100)] ``` Since, in python, the if/else is actually a ternary statement, I figured coffeescript might be similar, so I tried this: ### coffeescript attempt ``` foo = (if i < 50 then "yes" else "no" for i in [0..100]) ``` The difference is that python appropriately gives me 50 yes's and 50 no's, but coffeescript only gives me a single "yes". So, just to be clear, I want to know if there is a way to use an "else" in coffeescript's array comprehensions.
Your original code compiles to this: ``` var _i, _results; if (i < 50) { return "yes"; } else { _results = []; for (i = _i = 0; _i <= 100; i = ++_i) { _results.push("no"); } return _results; } ``` As you can see, the `if`/`else` wraps the whole comprehension rather than the loop body: the expression evaluates either to a single "yes" or to an array of "no"s, which is why you only get one "yes". You need to rewrite it this way, with parentheses around the ternary, to get the desired result: ``` foo = ((if i < 50 then "yes" else "no") for i in [0..100]) ``` This results in the following: ``` for (i = _i = 0; _i <= 100; i = ++_i) { _results.push(i < 50 ? "yes" : "no"); } ```
How to plot family tree in R I've been searching around how to plot a family tree but couldn't find something i could reproduce. I've been looking in Hadley's book about ggplot but the same thing. I want to plot a family tree having as a source a dataframe similar to this: ``` familyTree <- data.frame( id = 1:6, cnp = c("11", NA, "22", NA, NA, "33"), last_name = c("B", "B", "B", NA, NA, "M"), last_name_alyas = rep(c(NA, "M"), c(5L, 1L)), middle_name = rep(c("C", NA), c(1L, 5L)), first_name = c("Me", "P", "A", NA, NA, "S"), first_name_alyas = rep(c(NA, "F"), c(5L, 1L)), maiden_name = c(NA, NA, "M", NA, NA, NA), id_father = c(2L, 4L, 6L, NA, NA, 8L), id_mother = c(3L, 5L, 7L, NA, NA, 9L), birth_date = c("1986-01-01", "1963-01-01", "1964-01-01", NA, NA, "1936-01-01"), birth_place = c("City", "Village", "Village", NA, NA, "Village"), death_date = c("0000-00-00", NA, NA, NA, NA, "2007-12-23"), death_reason = rep(c(NA, "stroke"), c(5L, 1L)), nr_brothers = c(NA, 1L, NA, NA, NA, NA), brothers_names = c(NA, "M", NA, NA, NA, NA), nr_sisters = c(1L, NA, 1L, NA, NA, 2L), sisters_names = c("A", NA, "E", NA, NA, NA), school = c(NA, "", "", NA, NA, ""), occupation = c(NA, "", "", NA, NA, ""), diseases = rep(NA_character_, 6L), comments = rep(NA_character_, 6L) ) ``` Is there any way I can plot a family tree with ggplot? If not, how can i plot it using another package. The primary key is 'id' and you connect to other members of the family using "id\_father" and "id\_mother".
As noted in the comments, you should try `igraph`. Here is a quick start: ``` require(igraph) mothers=familyTree[,c('id','id_mother','first_name', 'last_name')] fathers=familyTree[,c('id','id_father','first_name', 'last_name')] mothers$name=paste(mothers$first_name,mothers$last_name) fathers$name=paste(fathers$first_name,fathers$last_name) names(mothers)=c('parent','id','first_name','last_name','name') names(fathers)=c('parent','id','first_name','last_name','name') links=rbind(mothers,fathers) links=links[!is.na(links$id),] g=graph.data.frame(links) co=layout.reingold.tilford(g, flip.y=F) plot(g,layout=co) ``` ![enter image description here](https://i.stack.imgur.com/oxUW5.png) There aren't any names, and the arrows are going in the wrong direction, but you should be able to go from there.
how to determine the max user process value? Which value is correct? (Or are they all correct, and if so, which one will take effect?) ``` $ cat /proc/sys/kernel/pid_max 32768 $ ulimit -a |grep processes max user processes (-u) 77301 $ cat /proc/1/limits |grep processes Max processes 77301 77301 p ```
All values are correct and have different meanings. `/proc/sys/kernel/pid_max` is the maximum value for a `PID`, while `ulimit -u` is the maximum `number of processes` per user. From `man 5 proc`: ``` /proc/sys/kernel/pid_max (since Linux 2.5.34) This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platforms, 32768 is the maximum value for pid_max. On 64-bit systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million). ``` From `man bash`: ``` ulimit [-HSTabcdefilmnpqrstuvx [limit]] ..... -u The maximum number of processes available to a single user ..... ``` **Note** When a new process is created, it is assigned the next available number from the kernel's PID counter. When that counter reaches `pid_max`, the kernel restarts it at 300 (`RESERVED_PIDS`). From the Linux source code, `pid.c` file: ``` .... #define RESERVED_PIDS 300 .... static int alloc_pidmap(struct pid_namespace *pid_ns) { int i, offset, max_scan, pid, last = pid_ns->last_pid; struct pidmap *map; pid = last + 1; if (pid >= pid_max) pid = RESERVED_PIDS; ```
How to use clip-path property for border in css I use `clip-path` to make a "cut corner" effect. [![Button looks like this](https://i.stack.imgur.com/fbHev.png)](https://i.stack.imgur.com/fbHev.png) I would like to change the background to white and use a green border. The problem is, when I change the background to white, the corners are empty: [![Borders corners don't show without background](https://i.stack.imgur.com/eSuED.png)](https://i.stack.imgur.com/eSuED.png) How can I make green border corners on hover? ``` .test { background: red; width: 100px; height: 100px; /* CORNERS */ clip-path: polygon(10px 0%, calc(100% - 10px) 0%, 100% 10px, 100% calc(100% - 10px), calc(100% - 10px) 100%, 10px 100%, 0% calc(100% - 10px), 0% 10px); } .test:hover { background: white; cursor: pointer; border: 3px solid green; } ``` ``` <div class='test'>Test</div> ``` [JSFIDDLE](https://jsfiddle.net/b2n56o84/)
add some gradient to fill the missing spaces: ``` .test { background: red; width: 100px; height: 100px; box-sizing:border-box; /* CORNERS */ clip-path: polygon(10px 0%, calc(100% - 10px) 0%, 100% 10px, 100% calc(100% - 10px), calc(100% - 10px) 100%, 10px 100%, 0% calc(100% - 10px), 0% 10px); } .test:hover { --grad:transparent 49.5%,green 50%; background: linear-gradient(to top right ,var(--grad)) top right, linear-gradient(to top left ,var(--grad)) top left, linear-gradient(to bottom right,var(--grad)) bottom right, linear-gradient(to bottom left ,var(--grad)) bottom left, white; background-size:13px 13px; /* 10px of the clip-path + 3px of border */ background-repeat:no-repeat; background-origin:border-box; cursor: pointer; border: 3px solid green; } ``` ``` <div class='test'> </div> ```
How to list all the linux aliases I am aware that in Linux I can use the `alias` command to get a list of defined aliases. I am now trying to do the same through Go code with: ``` func ListAlias() error { out, err := exec.Command("alias").Output() if err != nil { fmt.Println(err) return err } fmt.Println(out) return nil } ``` but all that was returned was: ``` exec: "alias": executable file not found in $PATH ``` I tried looking for where the actual binary of `alias` is, but that leads nowhere either: ``` $whereis alias alias: ``` The alternative I've considered is to parse the `~/.bashrc` file for the list of defined aliases, but I have encountered a scenario where `bashrc` sources another `custom_aliases.sh` file and all the aliases are listed there. That's why I am trying to use the `alias` command to list all the aliases.
`alias` isn't an executable but a [shell builtin](https://en.wikipedia.org/wiki/Shell_builtin). You can easily see that by running ``` $ type alias alias is a shell builtin ``` Therefore you need to call the shell's `alias` command depending on which shell you're using. For example with `bash` you'll need to use ``` out, err := exec.Command("/bin/bash", "-c", "alias").Output() ``` But that still won't give you the answer because [bash doesn't `source` the `.bashrc` file in that case](https://superuser.com/q/1300875/241386) so aliases won't be available in the subshell. You'll need the `--rcfile` or `--login`/`-l` option and also need to [specify the shell as interactive with `-i`](https://superuser.com/a/671488/241386) ``` out, err := exec.Command("/bin/bash", "-lic", "alias").Output() // or out, err := exec.Command("/bin/bash", "--rcfile", "~/.bashrc", "-ic", "alias").Output() ``` `exec.Command("/bin/bash", "-ic", "alias")` would also possibly work depending on where your aliases are sourced. Other shells like zsh, sh, dash... may source different files with different options, so check your shell's documentation if `-ic` or `-lic` doesn't work
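Putting the pieces together, a minimal sketch of the function from the question using the interactive login-shell variant might look like this (assuming bash lives at `/bin/bash`):

```
package main

import (
	"fmt"
	"os/exec"
)

// ListAlias runs bash as an interactive login shell so that ~/.bashrc (and any
// files it sources, e.g. a custom_aliases.sh) are loaded before `alias` runs.
func ListAlias() error {
	out, err := exec.Command("/bin/bash", "-lic", "alias").Output()
	if err != nil {
		fmt.Println(err)
		return err
	}
	// Output returns a []byte; convert it so the aliases print as text.
	fmt.Println(string(out))
	return nil
}

func main() {
	ListAlias()
}
```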
Why do we need to include the .h while everything works when including only the .cpp file? Why do we need to include both the `.h` and `.cpp` files when we can make it work solely by including the `.cpp` file? For example: creating a `file.h` containing declarations, then creating a `file.cpp` containing definitions, and including both in `main.cpp`. Alternatively: creating a `file.cpp` containing declarations/definitions (no prototypes) and including it in `main.cpp`. Both work for me. I can't see the difference. Perhaps some insight into the compiling and linking process would help.
While you *can* include `.cpp` files as you mentioned, this is a bad idea. As you mentioned, declarations belong in header files. These cause no problems when included in multiple compilation units because they do not include implementations. Including the definition of a function or class member multiple times will normally cause a problem (but not always) because the linker will get confused and throw an error. What should happen is each `.cpp` file includes definitions for a subset of the program, such as a class, a logically organized group of functions, global static variables (use sparingly if at all), etc. Each *compilation unit* (`.cpp` file) then includes whatever declarations it needs to compile the definitions it contains. It keeps track of the functions and classes it references but does not contain, so the linker can resolve them later when it combines the object code into an executable or library. **Example** - `Foo.h` -> contains declaration (interface) for class Foo. - `Foo.cpp` -> contains definition (implementation) for class Foo. - `Main.cpp` -> contains main method, program entry point. This code instantiates a Foo and uses it. Both `Foo.cpp` and `Main.cpp` need to include `Foo.h`. `Foo.cpp` needs it because it is defining the code that backs the class interface, so it needs to know what that interface is. `Main.cpp` needs it because it is creating a Foo and invoking its behavior, so it has to know what that behavior is, the size of a Foo in memory and how to find its functions, etc., but it does not need the actual implementation just yet. The compiler will generate `Foo.o` from `Foo.cpp`, which contains all of the Foo class code in compiled form. It also generates `Main.o`, which includes the main method and unresolved references to class Foo. Now comes the linker, which combines the two object files `Foo.o` and `Main.o` into an executable file. It sees the unresolved Foo references in `Main.o` but sees that `Foo.o` contains the necessary symbols, so it "connects the dots" so to speak. A function call in `Main.o` is now connected to the actual location of the compiled code, so at runtime the program can jump to the correct location. If you had included the `Foo.cpp` file in `Main.cpp`, there would be *two* definitions of class Foo. The linker would see this and say "I don't know which one to pick, so this is an error." The compiling step would succeed, but linking would not. (Unless you just do not compile `Foo.cpp`, but then why is it in a separate `.cpp` file?) Finally, the idea of different file types is irrelevant to a C/C++ compiler. It compiles "text files" which hopefully contain valid code for the desired language. Sometimes it may be able to tell the language based on the file extension. For example, compile a `.c` file with no compiler options and it will assume C, while a `.cc` or `.cpp` extension would tell it to assume C++. However, I can easily tell a compiler to compile a `.h` or even `.docx` file as C++, and it will emit an object (`.o`) file if it contains valid C++ code in plain text format. These extensions are more for the benefit of the programmer. If I see `Foo.h` and `Foo.cpp`, I immediately assume that the first contains the declaration of the class and the second contains the definition.
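To make the Foo example concrete, here is a minimal, hypothetical sketch (the three files are shown in one listing; the names simply mirror the example above):

```
// Foo.h -- declaration (interface) only; safe to include from many .cpp files
#ifndef FOO_H
#define FOO_H
class Foo {
public:
    void greet() const;   // declared here, defined in Foo.cpp
};
#endif

// Foo.cpp -- definition (implementation); compiled once into Foo.o
#include "Foo.h"
#include <iostream>
void Foo::greet() const { std::cout << "Hello from Foo\n"; }

// Main.cpp -- uses Foo; only needs the declaration to compile, the linker
// later resolves the call to Foo::greet() using Foo.o
#include "Foo.h"
int main() {
    Foo f;
    f.greet();
    return 0;
}
```

Building with something like `g++ -c Foo.cpp`, `g++ -c Main.cpp`, then `g++ Foo.o Main.o -o app` produces the two object files and links them. If `Main.cpp` instead did `#include "Foo.cpp"` while `Foo.cpp` was also compiled separately, the linker would see `Foo::greet` defined twice.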
Can you use INNER JOIN with a primary key? I have two tables with a one-to-one relationship. Table1 has a composite primary key consisting of about 4 columns. Table2's foreign key is set to Table1's primary key. When I try the following UPDATE clause, I am getting an error: ``` UPDATE Table2 SET column1 = fakeTable.c1 FROM Table2 INNER JOIN ( SELECT Table1.primaryKey , (Table1.column3 + Table1.column4) AS c1 FROM Table1 ) AS c1 ON Table2.foreignKey = fakeTable.primaryKey ``` Am I not allowed to reference keys as if they are columns?
No, you need to list all the fields individually. But you can avoid the sub-query that you have... ``` UPDATE Table2 SET column1 = Table1.column3 + Table1.column4 FROM Table2 INNER JOIN Table1 ON Table2.foreignKey1 = Table1.primaryKey1 AND Table2.foreignKey2 = Table1.primaryKey2 AND Table2.foreignKey3 = Table1.primaryKey3 AND Table2.foreignKey4 = Table1.primaryKey4 ``` **EDIT** Response to comment: - `I thought the whole point of keys was to avoid having to concatenate columns!` Keys aren't a time-saving device, they're data integrity devices. A primary key is a unique identifier. It can be composite or not, but the important thing is that it is unique and not nullable. A foreign key is also a data integrity device. It ensures that if data refers to something in another table, it actually *must* exist in that other table.
How to create a page that's split diagonally and the two halves are clickable links I need to create a landing page that's split diagonally. Something like this ![enter image description here](https://i.stack.imgur.com/5nHBs.png) I need both areas of the page to be clickable and, in the best possible scenario, everything should adapt dynamically to the user's monitor so that the screen is always split in half. How could I do it? Should I use canvas? Any advice is welcome, also on possible fallbacks if I use canvas.
This can be realized in several ways: 1) on modern browsers in pure CSS using `clip-path` > > [Codepen Demo](http://codepen.io/anon/pen/dMNzYG) > > > **HTML** ``` <div> <a href="#1"></a> <a href="#2"></a> </div> ``` **CSS** ``` a { position: absolute; top: 0; left: 0; height: 100vh; width: 100%; display: block; } a:first-child { -webkit-clip-path: polygon(0 0, 0 100vh, 100% 100vh); clip-path: polygon(0 0, 0 100vh, 100% 100vh); background: #d6d6d6; } a:last-child { -webkit-clip-path: polygon(0 0, 100% 0, 100% 100vh); clip-path: polygon(0 0, 100% 0, 100% 100vh); background: #212121; } ``` --- 2) On less recent browsers, involving only a bit of javascript and `2D Transformation` > > [Codepen Demo](http://codepen.io/fcalderan/pen/NPNGQw) > > > **HTML** ``` <div> <section><a href="#1"></a></section> <section><a href="#2"></a></section> </div> ``` **CSS** ``` html, body, div{ height: 100%; width: 100%; padding: 0; margin: 0; } div { overflow : hidden; position: relative; } section { position : absolute; top : -100%; height : 500vw; width : 500vh; background : #ccc; -webkit-transform-origin: 0 0; -moz-transform-origin: 0 0; transform-origin: 0 0; } section + section { background : #333; top : 0%; } section a { display: block; width: 100%; height: 100%; cursor: pointer; } ``` **Js/jQuery**: ``` $(function() { $(window).on('resize', function() { var h = $(document).height(), w = $(document).width(); /* Math.atan() function returns the arctangent (in radians) * of a number and 1 rad ~= 57.29577 deg */ var angle = Math.atan(h/w) * 57.29577; var rotateProperty = "rotate(" + angle + "deg)"; $('section').css({ "-webkit-transform": rotateProperty, "-moz-transform": rotateProperty, "transform": rotateProperty }); }) .triggerHandler('resize'); }); ```
discordjs how to add all intents/permissions I'm trying to add a role when members join the server, but it says I don't have that permission. How do I add all intents/permissions for the bot? I'll just leave the beginning of the code. ``` // Require the necessary discord.js classes const { Client, Intents, Message } = require('discord.js'); const { Player } = require("discord-player"); const { token, prefix } = require("./config.json"); // Create a new client instance const client = new Client( { restTimeOffset: 0, shards: "auto", intents: [ Intents.FLAGS.DIRECT_MESSAGES, Intents.FLAGS.DIRECT_MESSAGE_REACTIONS, Intents.FLAGS.DIRECT_MESSAGE_TYPING, Intents.FLAGS.GUILDS, Intents.FLAGS.GUILD_BANS, Intents.FLAGS.GUILD_EMOJIS_AND_STICKERS, Intents.FLAGS.GUILD_INTEGRATIONS, Intents.FLAGS.GUILD_INVITES, Intents.FLAGS.GUILD_MEMBERS, Intents.FLAGS.GUILD_MESSAGES, Intents.FLAGS.GUILD_MESSAGE_REACTIONS, Intents.FLAGS.GUILD_MESSAGE_TYPING, Intents.FLAGS.GUILD_PRESENCES, Intents.FLAGS.GUILD_SCHEDULED_EVENTS, Intents.FLAGS.GUILD_VOICE_STATES, Intents.FLAGS.GUILD_WEBHOOKS, ] }); ```
The error could be caused by other things as well. One explanation might be that the bot was trying to assign a role that sits higher in the role hierarchy than its own highest role. In that case, you would have to manually rearrange the role hierarchy and put the bot's role at the top. About the intents: it's not advisable to simply add every intent. The whole point of intents is that developers can choose what type of data they want their bot to receive, and that concept is defeated if we just select all of them. If you still want to enable all the intents, you can search Google for an `intents calculator`, copy the intent value it gives you, and paste it like this: ``` const { Intents, Client } = require("discord.js") const client = new Client({ intents: new Intents(value) // Insert the value here }) ```
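As a rough sketch (assuming discord.js v13, which the question's code appears to use), you could also build the list of all intent flags programmatically instead of pasting a precomputed number — the same caveat about requesting more data than you need still applies, and privileged intents such as `GUILD_MEMBERS` and `GUILD_PRESENCES` must additionally be enabled for the bot in the Discord Developer Portal:

```
const { Client, Intents } = require('discord.js');

// Intents.FLAGS maps flag names to their bit values; passing all of the
// values requests every gateway intent the library knows about.
const client = new Client({
  intents: Object.values(Intents.FLAGS),
});
```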
ZFS pool created with ZFS on Linux, usable by SmartOS? I have a Linux server running libvirt + KVM which I am thinking of migrating to SmartOS. The server has a ZFS pool created with ZFS on Linux, where the KVM guests are stored (as ZVOLs). My question is, can this pool be usable by SmartOS?
Yes. ZFS zpools are pretty portable... In your case, you'll want to make sure that the ZFS version on the destination system is greater-than or equal to the version of the system you're migrating from. Of course, we're missing that information from your question... You'll want to run `zpool upgrade -v` to obtain your version information. See [Oracle's documentation](https://blogs.oracle.com/bobn/entry/live_upgrade_and_zfs_versioning) and the [ZFS version Wikipedia page](http://en.wikipedia.org/wiki/ZFS#Comparisons). These days, SmartOS is at version 28 with *feature flags*. Also see: [Can I trust that ZFS is consistent between Linux and FreeBSD?](https://serverfault.com/questions/417471/can-i-trust-that-zfs-is-consistent-between-linux-and-freebsd)
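A rough sketch of the usual move, assuming the pool is named `tank` (export it cleanly on the Linux side first, then import it from SmartOS):

```
# On the Linux (ZFS on Linux) host:
zpool export tank

# On the SmartOS host, after attaching the disks:
zpool import            # lists pools available for import
zpool import tank       # imports the pool; add -f only if it was not exported cleanly
```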
Is stdout line buffered, unbuffered or indeterminate by default? Section `7.19.3/7` of `c99` states that: > > At program start-up, three text streams are predefined and need not be opened explicitly - standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). > > > As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device. > > > So that makes sense. If you're pushing your standard output to a file, you want it fully buffered for efficiency. But I can find no mention in the standard as to whether the output is line buffered or unbuffered when you *can't* determine the device is non-interactive (ie, normal output to a terminal). The reason I ask was a comment to my answer [here](https://stackoverflow.com/questions/3723658/does-anyone-have-an-example-of-a-conditional-statement-in-c/3723670#3723670) that I should insert an `fflush(stdout);` between the two statements: ``` printf ("Enter number> "); // fflush (stdout); needed ? if (fgets (buff, sizeof(buff), stdin) == NULL) { ... } ``` because I wasn't terminating the `printf` with a newline. Can anyone clear this up?
The C99 standard does not specify if the three standard streams are unbuffered or line buffered: **It is up to the implementation.** All UNIX implementations I know have a line buffered `stdin`. On Linux, `stdout` is line buffered and `stderr` unbuffered. As far as I know, POSIX does not impose additional restrictions. POSIX's [fflush](http://www.opengroup.org/onlinepubs/9699919799/functions/fflush.html) page does note in the EXAMPLES section: > > [...] The *fflush()* function is used because standard output is usually buffered and the prompt may not immediately be printed on the output or terminal. > > > So the remark that you should add `fflush(stdout);` is correct. --- An alternative could be to make `stdout` unbuffered: ``` setbuf(stdout, NULL); /* or */ setvbuf(stdout, NULL, _IONBF, 0); ``` But as R. notes you can only do this once, and it must be before you write to `stdout` or perform any other operation on it. (C99 7.19.5.5 2) --- I just read a [recent thread](http://groups.google.com/group/comp.lang.c/browse_thread/thread/76f44ab7e8ae3b8d#) on `comp.lang.c` about the same thing. One of the remarks: > > Unix convention is that `stdin` and `stdout` are line-buffered when associated with a terminal, and fully-buffered (aka block-buffered) otherwise. `stderr` is always unbuffered. > > >
How can I split a drive image created with 'dd' into separate files for each partition? I created an image of a failing drive with: ``` dd if=/dev/sde of=/mnt/image001.dd ``` The drive had only two partitions: ``` Device Boot Start End Blocks Id System /dev/sde1 * 1 13 102400 7 HPFS/NTFS /dev/sde2 13 60802 488282112 7 HPFS/NTFS ``` **How can I split the image (image001.dd) into two or three files** (1: MBR; 2: Partition 1; 3: Partition 2) **so that I can mount the filesystems in it?** A solution I've found that wouldn't work for me is to use `split` to create many 512K files, then `cat` them back together into three files (1: 512K, 2: 105M, 3: the rest), but I don't have the disk space for that. History: I have already copied the entire image to a new drive, and it boots and mostly works. It seems that the FS was corrupted on the old failing drive, and `dd` copied the corrupted parts (as it should), and I wrote them to the new drive. My solution is to mount the FS that I copied and then copy just the files (using `rsync` or something) so that *hopefully* I won't copy the bad bits. *UPDATE 1*: I've tried `dd if=/mnt/image001.dd of=/mnt/image001.part1.dd bs=512 count=204800 skip=1` but `mount` complains that `NTFS signature is missing`, so I think I didn't do it right.
You don't need to split this at all. Use `parted` to get details about the partition table: ``` parted image001.dd ``` In `parted`, switch to byte units with the command `u`, then `B`. After that, issue the command `print`. You will get an output that looks like this (output is from an actual system, not an image): ``` Model: Virtio Block Device (virtblk) Disk /dev/vda: 25165824000B Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 2 1048576B 400556031B 399507456B primary ext4 boot 3 400556032B 21165506559B 20764950528B primary ext4 1 21165506560B 25164775423B 3999268864B primary linux-swap(v1) ``` You can use the `Start` number as an offset for a loopback mount: ``` mount -o loop,ro,offset=400556032 image001.dd /mnt/rescue ``` would mount the third partition at `/mnt/rescue`.
Automatically perform action on client side when the session expires I want to display in a `<p:growl>` that the session has expired. I found many methods to handle session expiration, like: - [Session timeout and ViewExpiredException handling on JSF/PrimeFaces ajax request](https://stackoverflow.com/questions/11203195/session-timeout-and-viewexpiredexception-handling-on-jsf-primefaces-ajax-request) But I couldn't push a faces message to `<p:growl>`. To the point: how can I automatically run some (JavaScript) code on the client side when the HTTP session has automatically expired on the server side?
You can use PrimeFaces [idle monitor](https://www.primefaces.org/showcase/ui/misc/idleMonitor.xhtml) for this. User is redirected to logout action after timeout to invalidate the session. 2 minutes before a countdown dialog is shown to warn user. After moving the mouse again session is extended. PrimeFaces idle monitor and [dialog](https://www.primefaces.org/showcase/ui/df/basic.xhtml) is placed in a template you can add to every page which is involved: ``` <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:p="http://primefaces.org/ui"> <ui:composition> <h:form prependId="false"> <p:idleMonitor timeout="#{session.maxInactiveInterval * 1000 - 125000}" onidle="startIdleMonitor()" onactive="timeoutDialog.hide()" /> <p:dialog id="timeoutSession" header="#{msg['session.expire']}" widgetVar="timeoutDialog" showEffect="fade" hideEffect="fade" modal="true" width="400" height="110" closable="false" draggable="false" resizable="false" appendToBody="true" onHide="stopCount()" onShow="doTimer()"> <br /> <p> <span class="ui-icon ui-icon-alert" style="float: left; margin: 8px 8px 0;"/> <p:panel> #{msg['logoff.soon.1']} <span id="dialog-countdown" style="font-weight: bold"></span> #{msg['logoff.soon.2']} </p:panel> </p> <br /> <p style="font-weight: bold;">#{msg['move.cursor']}</p> </p:dialog> <p:remoteCommand name="keepAlive" actionListener="#{auth.keepSessionAlive}" /> </h:form> <script type="text/javascript"> var TIME = 120; // in seconds var countTimer = TIME; var processTimer; var timer_is_on = 0; var redirectPage = "#{request.contextPath}/auth/j_verinice_timeout"; var countDownDiv = "dialog-countdown"; var txtCountDown = null; if (!txtCountDown) txtCountDown = document.getElementById(countDownDiv); function startIdleMonitor() { countTimer = TIME; txtCountDown.innerHTML = countTimer; timeoutDialog.show(); } function timedCount() { txtCountDown.innerHTML = countTimer; if (countTimer == 0) { stopCount(); window.location.href = redirectPage; return; } countTimer = countTimer - 1; processTimer = setTimeout("timedCount()", 1000); } function doTimer() { if (!timer_is_on) { timer_is_on = 1; timedCount(); } } function stopCount() { clearTimeout(processTimer); timer_is_on = 0; keepAlive(); } </script> </ui:composition> </html> ``` - Line 11: The timeout of the idle monitor is set by system var *session.maxInactiveInterval*. The value you set in your *web.xml* or server configuration. - Line 12/13: Javascript method startIdleMonitor() is called after timeout without any user interaction. This method opens the dialog. *timeoutDialog.hide()* is called when user is busy again: Dialog is closed - Line 26/27: Two more Javascript methods are called when dialog is shown or hide: *doTimer()* starts and *stopCount()* stops the countdown. - Line 40: PrimeFaces remote command to keep session alive. By calling an arbitrary method on server the session is extended. Command is called by Javascript method *keepAlive()* in line 78. - Line 59-68: Javascript method *timedCount()* is called every second to execute the countdown. After timeout redirect is done in line 63. 
To activate timeout handling in multiple pages, include the timeout template in your layout template: ``` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets" xml:lang="de-DE"> <h:head> ... </h:head> <body> <ui:include src="/template/sessionTimeOut.xhtml" /> <ui:include src="/nav.xhtml"/> <ui:insert name="content">Default content</ui:insert> <ui:include src="/footer.xhtml"/> </body> </html> ``` You can set a specific timeout for your web application in your web.xml: ``` <?xml version="1.0" encoding="UTF-8"?> <web-app> ... <session-config> <!-- Session idle timeout in min. --> <session-timeout>30</session-timeout> </session-config> </web-app> ``` You can read more about this solution in this blog post: [JSF and PrimeFaces: Session Timeout Handling](http://murygin.wordpress.com/2012/11/29/jsf-primefaces-session-timeout-handling/)
How resource heavy is a Timer? I'm running a timer like this: ``` private void InitializeTimer() { Timer myTimer = new Timer(); myTimer.Interval = 3000; myTimer.Enabled = true; myTimer.Tick += new EventHandler(TimerEventProcessor); myTimer.Start(); } ``` It will trigger an event every 3 seconds. The event handler is not very heavy, I think: it reads text from a file, compares the text length to the text in a textbox, and replaces the text in the box if the file has more characters. But how resource heavy is the timer? And is it a bad idea to read text from a file every 3 seconds (the file is a plain-text log file)?
A Timer is fairly lightweight, though it does depend a bit on which type of timer you're using. Having a timer fire every 3 seconds is not likely to be a big deal, though if you're using a Windows Forms timer, make sure your Tick event handler is not doing any significant processing, as it happens in the UI thread. (For example, check the file length using [`FileInfo.Length`](http://msdn.microsoft.com/en-us/library/system.io.fileinfo.length.aspx) instead of "reading" the file, etc) That being said, for watching for changes in a log file, [`FileSystemWatcher`](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx) is likely better than using a timer and re-reading the file continually. In addition to notifying you immediately when the file changes, it also will keep you from having to reading the file continually.
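A minimal sketch of the `FileSystemWatcher` approach (the directory and file name here are just placeholders):

```
using System;
using System.IO;

// Watch a single log file and react only when it actually changes,
// instead of polling it on a timer.
var watcher = new FileSystemWatcher(@"C:\logs", "app.log");
watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
watcher.Changed += (sender, e) =>
{
    // This fires on a thread-pool thread; in WinForms either marshal to the
    // UI thread (e.g. Control.Invoke) or set watcher.SynchronizingObject to
    // the form so the event is raised on the UI thread before touching the textbox.
    Console.WriteLine(e.FullPath + " changed");
};
watcher.EnableRaisingEvents = true;
```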
Create an image from text with background and wordwrap I need to convert a text into an image. The image must have: - A white background which fills the whole image. - The text in black. - Text needs to be word-wrapped if the width/height is less than the text length (which can vary depending on font size). - Image size 125x125. I created this function, which meets some of the above requirements: ``` public static Image crearImagenDesdeTexto(String texto) throws IOException { BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB); Graphics2D g2d = img.createGraphics(); FontMetrics fm = g2d.getFontMetrics(); int width = fm.stringWidth(texto); int height = fm.getHeight(); g2d.setColor(java.awt.Color.WHITE); g2d.fillRect(0, 0, width, height); g2d.setColor(java.awt.Color.BLACK); Font font = new Font("Arial", Font.PLAIN, 12); g2d.setFont(font); g2d.dispose(); img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); g2d = img.createGraphics(); g2d.setRenderingHint(RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY); g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); g2d.setRenderingHint(RenderingHints.KEY_COLOR_RENDERING, RenderingHints.VALUE_COLOR_RENDER_QUALITY); g2d.setRenderingHint(RenderingHints.KEY_DITHERING, RenderingHints.VALUE_DITHER_ENABLE); g2d.setRenderingHint(RenderingHints.KEY_FRACTIONALMETRICS, RenderingHints.VALUE_FRACTIONALMETRICS_ON); g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR); g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY); g2d.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL, RenderingHints.VALUE_STROKE_PURE); g2d.setFont(font); fm = g2d.getFontMetrics(); g2d.setColor(java.awt.Color.BLACK); g2d.drawString(texto, 0, fm.getAscent()); g2d.dispose(); ByteArrayOutputStream out = new ByteArrayOutputStream(); ImageIO.write((RenderedImage) img, "png", out); out.flush(); ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray()); return new javafx.scene.image.Image(in); } ``` I don't know if there is a better way to do the conversion or whether it's correct that way.
How about ``` import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.image.Image; import javafx.scene.image.WritableImage; import javafx.scene.control.Label; // ... private static Image textToImage(String text) { Label label = new Label(text); label.setMinSize(125, 125); label.setMaxSize(125, 125); label.setPrefSize(125, 125); label.setStyle("-fx-background-color: white; -fx-text-fill:black;"); label.setWrapText(true); Scene scene = new Scene(new Group(label)); WritableImage img = new WritableImage(125, 125); scene.snapshot(img); return img; } ``` Since you want to return a `javafx.scene.image.Image`, it's probably better to keep as much of it in the JavaFX API as possible.
'Namespace' object is not iterable Attempting to pass an undetermined number of integers using argparse. When I input: **py main.py 3 2** ``` %%writefile main.py import sorthelper import argparse integers = 0 #top-level parser creation parser = argparse.ArgumentParser("For sorting integers") nargs = '+' #-> gathers cmd line arguments into a list args = parser.add_argument('-f', metavar='N', type=int, nargs='+', help='yada yada yada') args = parser.parse_args() print(sorthelper.sortNumbers(args)) %%writefile sorthelper.py def sortNumbers(args): sorted(args) ``` I get the error "'Namespace' object is not iterable". I think it is because I am passing an argument that is not of the correct type. After reading through all the documentation I could find, I cannot figure out how to make this work. I want the program to sort the numbers I am passing.
[`parser.parse_args()`](https://docs.python.org/3/library/argparse.html#the-parse-args-method) returns [a `Namespace` object](https://docs.python.org/3/library/argparse.html#argparse.Namespace), which is an object whose attributes represent the flags that were parsed. It is not iterable. It seems like you want to get the command-line arguments given after `-f`, in which case you would take *that particular flag* out of the `Namespace` object: ``` print(sorthelper.sortNumbers(args.f)) ``` --- Also, your code as you currently have it will print `None`, because `sortNumbers()` doesn't return anything. The built-in `sorted()` function *does not sort in place* (though `list.sort()` does, if you want to use that), so you have to actually do ``` def sortNumbers(args): return sorted(args) ```
Add a deep copy ctor to std::unique\_ptr I would like to store some `std::unique_ptr<my_type>` into a `std::vector`. Since `my_type` provides a `clone()` method, it's quite straightforward to make deep copies of a `my_type *`. The point is how to extend `std::unique_ptr`, preserving all its functionality while adding a copy ctor and an assignment operator. Inheritance? Template specialization? Could you please provide a code snippet?
This looks like a way to go: ``` struct my_type_ptr: public std::unique_ptr<my_type,std::default_delete<my_type>>{ using unique_ptr::unique_ptr; //inheriting constructors //adding copy ctor and assignment operator my_type_ptr(const my_type_ptr & o): unique_ptr<my_type,std::default_delete<my_type>>() { reset( o ? o->clone() : nullptr); } my_type_ptr& operator=(const my_type_ptr & o) { reset( o ? o->clone() : nullptr); return *this; } }; ``` It compiles without any warning from gcc and clang, and valgrind doesn't report any memory leak while playing around with copies and vectors.
dragula JS move from one list to another with on click event I'm using Dragula JS for the drag and drop functionality, and I would also like the option to move the elements back and forth between my lists with a mouse click, without losing the drag and drop functionality. How can I achieve this? So I click on element 1 and it moves to the other list; I click it in that list and it moves back. That's the idea. I prepared a fiddle with the basic drag and drop if it helps: <http://jsfiddle.net/vf6dnwxj/10/> My structure in the fiddle above: ``` <div class="wrapper panel panel-body"> <ul id="left1" class="cont-dragula"> </ul> <ul id="right1" class="cont-dragula"> <li>Item 1</li> <li>Item 2</li> <li>Item 3.</li> <li>Item 4.</li> <li>Item 5.</li> <li>Item 6.</li> </ul> </div> ``` JS: ``` dragula([left1, right1]); ```
Well, dragula doesn't do anything special; it just moves items around. So you can simply move them around yourself: ``` var leftList = document.querySelector('#left1'); var rightList = document.querySelector('#right1'); var list = document.querySelectorAll('#right1 li, #left1 li'); for (var i = 0; i < list.length; i++) { list[i].addEventListener('click', function(){ if (this.parentNode.id == 'right1') { leftList.appendChild(this); } else { rightList.appendChild(this); } }); } ``` [demo fiddle](http://jsfiddle.net/vf6dnwxj/11/) If you want dragula's callbacks to fire, call `drake.start(this)` before manipulating the DOM and `drake.end()` after the manipulation: ``` drake = dragula([left1, right1]); drake.on('drop', function(el, target, source, sibling){ console.log(el); console.log(target); console.log(source); console.log(sibling); }); var leftList = document.querySelector('#left1'); var rightList = document.querySelector('#right1'); var list = document.querySelectorAll('#right1 li, #left1 li'); for (var i = 0; i < list.length; i++) { list[i].addEventListener('click', function(){ drake.start(this); if (this.parentNode.id == 'right1') { leftList.appendChild(this); } else { rightList.appendChild(this); } drake.end(); }); } ```
Why do we need biases in the neural network? We have weights and an optimizer in the neural network. Why can't we just do W \* input, then apply the activation, estimate the loss and minimize it? Why do we need to do W \* input + b? Thanks for your answer!
There are two ways to think about why biases are useful in neural nets. The first is conceptual, and the second is mathematical. Neural nets are loosely inspired by biological neurons. The basic idea is that human neurons take a bunch of inputs and "add" them together. If the sum of the inputs is greater than some threshold, then the neuron will "fire" (produce an output that goes to other neurons). This threshold is essentially the same thing as a bias. So, in this way, the bias in artificial neural nets helps to replicate the behavior of real, human neurons. Another way to think about biases is simply by considering any linear function, y = mx + b. Let's say you are using y to approximate some linear function z. If z has a non-zero z-intercept, and you have no bias in the equation for y (i.e. y = mx), then y can never perfectly fit z. Similarly, if the neurons in your network have no bias terms, then it can be harder for your network to approximate some functions. All that said, you don't "need" biases in neural nets--and, indeed, recent developments (like batch normalization) have made biases less frequent in convolutional neural nets.
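A tiny numerical illustration of the y = mx + b point, assuming NumPy is available: fit a line with and without an intercept to data generated from z = 2x + 3.

```
import numpy as np

x = np.linspace(0, 1, 50)
z = 2 * x + 3                      # target function with a non-zero intercept

# Least-squares fit WITH a bias term: design matrix has a column of ones.
m_b, b = np.linalg.lstsq(np.c_[x, np.ones_like(x)], z, rcond=None)[0]

# Least-squares fit WITHOUT a bias term: y = m * x only.
m_only = np.linalg.lstsq(x[:, None], z, rcond=None)[0][0]

print(m_b, b)     # ~2.0, ~3.0 -> recovers z exactly
print(m_only)     # a compromise slope; y = m_only * x cannot match the intercept
```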
AngularJS: read response data from server I have a problem which should be easy to solve, but I just can't figure out what I am doing wrong. I receive `data` through an `$http` request. ``` alert(data) ``` gives me `object object` ``` alert(data.response) ``` gives me `{"id":"123456","post_id":"12345"}` ``` alert (data.response.id) ``` gives me `undefined` My question: I want to get the ID. **Why does the last expression give me undefined and not the ID?** Do I have to transform the data in some way? I am thankful for any hints!
It looks like your data.response is a string. You can use `angular.fromJson` to convert it to an object, i.e.: ``` $scope.temp = angular.fromJson($scope.data.response); ``` Please see the working demo below: ``` var app = angular.module('app', []); app.controller('firstCtrl', function($scope){ $scope.data = { response:'{"id":"123456","post_id":"12345"}' }; alert($scope.data); alert($scope.data.response); alert($scope.data.response.id); $scope.temp = angular.fromJson($scope.data.response); alert($scope.temp.id); }); ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <body ng-app="app"> <div ng-controller="firstCtrl"> </div> </body> ```
zsh while loop exiting prematurely in a triple nested loop using floating point arithmetic I have a zsh script that should generate a bunch of Z-matrix files, but it exits the while loops after one iteration. N and M never increase ``` #!/bin/zsh n=0.5 m=0.5 a=60.0 i=1 while [[ $n -le 1.5 ]];do while [[ $m -le 1.5 ]];do while [[ $a -le 160 ]];do echo $i echo "o" > H2O.$i.Z eval echo "h 1" $n >> H2O.$i.Z eval echo "h 1" $m "2" $a >> H2O.$i.Z let i=$i+1 let a=$a+5.0 done let m=$m+0.05 done let n=$n+0.05 done ``` I'm new to zsh; is my syntax wrong?
The `-le` operator operates on integers. For floating point, use the built-in arithmetic. I also believe you might actually want to re-initialize the values for `m` and `a` in the inner loops. Then the bug is that you only initialized them once at the beginning and the inner loop conditions on the second iteration were false. You might use a `for((;;))` loop to write this more compact: ``` #!/usr/local/bin/zsh i=1 for ((n=0.5; n <= 1.5; n+=0.05)); do for ((m=0.5; m <= 1.5; m+=0.05)); do for ((a=60.0; a <= 160; a+=5.0)); do echo $i $a $m $n let i=$i+1 done done done ``` Note also that it is **unwise to use floating point numbers as loop counts**. Did you know that ten times 0.1 is hardly ever one? This might explain why you lose or gain an extra iteration. Instead, you should iterate using an integer number N and compute the required floats as 0.5 + N \* 0.05, for example.
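A sketch of that last suggestion — drive the loops with integer counters and derive the floats from them, so rounding error cannot add or drop an iteration (same ranges as the original script):

```
#!/bin/zsh
i=1
for ((ni = 0; ni <= 20; ni++)); do        # n = 0.5 .. 1.5 in steps of 0.05
  for ((mi = 0; mi <= 20; mi++)); do      # m = 0.5 .. 1.5 in steps of 0.05
    for ((ai = 0; ai <= 20; ai++)); do    # a = 60 .. 160 in steps of 5
      n=$(( 0.5 + ni * 0.05 ))
      m=$(( 0.5 + mi * 0.05 ))
      a=$(( 60.0 + ai * 5.0 ))
      printf '%d %.1f %.2f %.2f\n' $i $a $m $n
      (( i++ ))
    done
  done
done
```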
Zoom image to cursor breaks when mouse is moved This is a followup question to [How to zoom to mouse pointer while using my own mousewheel smoothscroll?](https://stackoverflow.com/questions/14378046/how-to-zoom-to-mouse-pointer-while-using-my-own-mousewheel-smoothscroll/14431123#14431123) I am using css transforms to zoom an image to the mouse pointer. I am also using my own smooth scroll algorithm to interpolate and provide momentum to the mousewheel. With Bali Balo's help in my previous question I have managed to get 90% of the way there. You can now zoom the image all the way in to the mouse pointer while still having smooth scrolling as the following JSFiddle illustrates: <http://jsfiddle.net/qGGwx/7/> However, the functionality is broken when the mouse pointer is moved. To further clarify, If I zoom in one notch on the mousewheel the image is zoomed around the correct position. This behavior continues for every notch I zoom in on the mousewheel, completely as intended. If however, after zooming part way in, I move the mouse to a different position, the functionality breaks and I have to zoom out completely in order to change the zoom position. The intended behavior is for any changes in mouse position during the zooming process to be correctly reflected in the zoomed image. The two main functions that control the current behavior are as follows: ``` self.container.on('mousewheel', function (e, delta) { var offset = self.image.offset(); self.mouseLocation.x = (e.pageX - offset.left) / self.currentscale; self.mouseLocation.y = (e.pageY - offset.top) / self.currentscale; if (!self.running) { self.running = true; self.animateLoop(); } self.delta = delta self.smoothWheel(delta); return false; }); ``` This function collects the current position of the mouse at the current scale of the zoomed image. It then starts my smooth scroll algorithm which results in the next function being called for every interpolation: ``` zoom: function (scale) { var self = this; self.currentLocation.x += ((self.mouseLocation.x - self.currentLocation.x) / self.currentscale); self.currentLocation.y += ((self.mouseLocation.y - self.currentLocation.y) / self.currentscale); var compat = ['-moz-', '-webkit-', '-o-', '-ms-', '']; var newCss = {}; for (var i = compat.length - 1; i; i--) { newCss[compat[i] + 'transform'] = 'scale(' + scale + ')'; newCss[compat[i] + 'transform-origin'] = self.currentLocation.x + 'px ' + self.currentLocation.y + 'px'; } self.image.css(newCss); self.currentscale = scale; }, ``` This function takes the scale amount (1-10) and applies the css transforms, repositioning the image using transform-origin. Although this works perfectly for a stationary mouse position chosen when the image is completely zoomed out; as stated above it breaks when the mouse cursor is moved after a partial zoom. Huge thanks in advance to anyone who can help.
Before you check [**this fiddle**](http://jsfiddle.net/onury/3k332/) out; I should mention: **First of all**, within your `.zoom()` method; you shouldn't divide by `currentscale`: ``` self.currentLocation.x += ((self.mouseLocation.x - self.currentLocation.x) / self.currentscale); self.currentLocation.y += ((self.mouseLocation.y - self.currentLocation.y) / self.currentscale); ``` because; you already use that factor when calculating the `mouseLocation` inside the `initmousewheel()` method like this: ``` self.mouseLocation.x = (e.pageX - offset.left) / self.currentscale; self.mouseLocation.y = (e.pageY - offset.top) / self.currentscale; ``` So instead; (in the `.zoom()` method), you should: ``` self.currentLocation.x += (self.mouseLocation.x - self.currentLocation.x); self.currentLocation.y += (self.mouseLocation.y - self.currentLocation.y); ``` But (for example) `a += b - a` will always produce `b` so the code above equals to: ``` self.currentLocation.x = self.mouseLocation.x; self.currentLocation.y = self.mouseLocation.y; ``` in short: ``` self.currentLocation = self.mouseLocation; ``` Then, it seems you don't even need `self.currentLocation`. (2 variables for the same value). So why not use `mouseLocation` variable in the line where you set the `transform-origin` instead and get rid of `currentLocation` variable? ``` newCss[compat[i] + 'transform-origin'] = self.mouseLocation.x + 'px ' + self.mouseLocation.y + 'px'; ``` **Secondly**, you should include a `mousemove` event listener within the `initmousewheel()` method (just like other devs here suggest) but it should update the transform continuously, not just when the user wheels. Otherwise the tip of the pointer will never catch up while you're zooming out on "any" random point. ``` self.container.on('mousemove', function (e) { var offset = self.image.offset(); self.mouseLocation.x = (e.pageX - offset.left) / self.currentscale; self.mouseLocation.y = (e.pageY - offset.top) / self.currentscale; self.zoom(self.currentscale); }); ``` So; you wouldn't need to calculate this anymore within the `mousewheel` event handler so, your `initmousewheel()` method would look like this: ``` initmousewheel: function () { var self = this; self.container.on('mousewheel', function (e, delta) { if (!self.running) { self.running = true; self.animateLoop(); } self.delta = delta; self.smoothWheel(delta); return false; }); self.container.on('mousemove', function (e) { var offset = self.image.offset(); self.mouseLocation.x = (e.pageX - offset.left) / self.currentscale; self.mouseLocation.y = (e.pageY - offset.top) / self.currentscale; self.zoom(self.currentscale); // <--- update transform origin dynamically }); } ``` **One Issue:** This solution works as expected but with a small issue. When the user moves the mouse in regular or fast speed; the `mousemove` event seems to miss the final position (tested in Chrome). So the zooming will be a little off the pointer location. Otherwise, when you move the mouse slowly, it gets the exact point. It should be easy to workaround this though. **Other Notes and Suggestions:** - You have a duplicate property (`prevscale`). - I suggest you always use [**JSLint**](http://www.jslint.com/) or [**JSHint**](http://www.jshint.com/) (which is available on jsFiddle too) to validate your code. - I highly suggest you to use closures (often refered to as **Immediately Invoked Function Expression** (IIFE)) to avoid the global scope when possible; and hide your internal/private properties and methods.
using docker-compose without sudo doesn't work I was recently told that running `docker` or `docker-compose` with sudo is a big nono, and that I had to create/add my user to the `docker` group in order to run `docker` and `docker-compose` commands without `sudo`. Which I did, as per the [documentation here](https://docs.docker.com/engine/install/linux-postinstall/) Now, `docker` runs normally via my user. e.g. : ``` ~$ docker run hello-world Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world b8dfde127a29: Pull complete Digest: sha256:df5f5184104426b65967e016ff2ac0bfcd44ad7899ca3bbcf8e44e4461491a9e Status: Downloaded newer image for hello-world:latest Hello from Docker! This message shows that your installation appears to be working correctly. To generate this message, Docker took the following steps: 1. The Docker client contacted the Docker daemon. 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64) 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading. 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal. To try something more ambitious, you can run an Ubuntu container with: $ docker run -it ubuntu bash Share images, automate workflows, and more with a free Docker ID: https://hub.docker.com/ For more examples and ideas, visit: https://docs.docker.com/get-started/ ``` But when I try to run docker-compose, I get a `Permission Denied` ``` ~$ docker-compose --help -bash: /usr/local/bin/docker-compose: Permission denied ``` Could you please explain how this works ? I thought having a `docker` group enabled the usage of these commands because the binaries belong to this group, but actually they don't, they only belong to `root`... ``` ~$ ls -al /usr/bin/docker* -rwxr-xr-x 1 root root 71706288 Jul 23 19:36 /usr/bin/docker -rwxr-xr-x 1 root root 804408 Jul 23 19:36 /usr/bin/docker-init -rwxr-xr-x 1 root root 2944247 Jul 23 19:36 /usr/bin/docker-proxy -rwxr-xr-x 1 root root 116375640 Jul 23 19:36 /usr/bin/dockerd ``` ``` ~$ ls -al /usr/local/bin/ total 12448 drwxr-xr-x 2 root root 4096 May 26 11:08 . drwxr-xr-x 10 root root 4096 May 14 19:36 .. -rwxr--r-- 1 root root 12737304 May 26 11:08 docker-compose ``` So, how does this work? And how do I enable `docker-compose` to run for users that belong to the `docker` group?
``` sudo chmod a+x /usr/local/bin/docker-compose ``` --- As of Jun 2023, the `docker-compose` command has been deprecated in favor of the compose plugin. <https://docs.docker.com/compose/install/linux/> > > Install the Compose plugin > > > ``` sudo apt-get update sudo apt-get install docker-compose-plugin ... docker compose ps # note that docker and compose are now two words. ``` --- Will turn your permissions on. `docker-compose` is just a wrapper, and it uses an external *docker daemon*, the same way the `docker` command doesn't actually run anything but gives an order to a *docker daemon*. You can change the docker daemon you communicate with using the `DOCKER_HOST` variable. By default, it is empty ; and when it is empty, both `docker` and `docker-compose` assume it is located at `/var/run/docker.sock` According to the [dockerd documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option) : > > By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring either root permission, or docker group membership. > > > And this is enforced by giving read and write access to the docker group to the socket. ``` $ ls -l /var/run/docker.sock srw-rw---- 1 root docker 0 nov. 15 19:54 /var/run/docker.sock ``` As described in <https://docs.docker.com/engine/install/linux-postinstall/>, to add an user to the `docker` group, you can do it like that : ``` sudo usermod -aG docker $USER # this adds the permissions newgrp docker # this refreshes the permissions in the current session ``` --- That being said, using `docker` with `sudo` is *the same as* using it with the `docker` group, because giving acces to the `/var/run/docker.sock` is *equivalent* to giving full root acces: From <https://docs.docker.com/engine/install/linux-postinstall/> > > The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface. > > > If root permission is a security issue for your system, another page is mentioned : > > To run Docker without root privileges, see [Run the Docker daemon as a non-root user (Rootless mode)](https://docs.docker.com/engine/security/rootless/). > > > --- docker is composed of multiple elements : <https://docs.docker.com/get-started/overview/> First, there are clients : ``` $ type docker docker is /usr/bin/docker $ dpkg -S /usr/bin/docker docker-ce-cli: /usr/bin/docker ``` You can see that the `docker` command is installed when you install the `docker-ce-cli` package. Here, *ce* stands for *community edition*. The `docker` cli communicates with the docker daemon, also known as `dockerd`. `dockerd` is a daemon (a server) and exposes by default the unix socket `/var/run/docker.sock` ; which default permissions are `root:docker`. There are other components involved, for instance `dockerd` uses `containerd` : <https://containerd.io/> --- The rest is basic linux permission management : - operating the docker daemon is the *same* as having root permission on that machine. - to operate the docker daemon, you need to be able to read and write from and to the socket it listens to ; in your case it is `/var/run/docker.sock`. **whether or not you are a sudoer does not change anything to that**. - to be able to read and write to and from `/var/run/docker.sock`, you must either be `root` or being in the `docker` group. - `docker-compose` is another cli it has the same requirements as `docker`.
Ant: Problem: failed to create task or type propertyregex I'm using Ant 1.8.1. I have downloaded ant-contrib-1.0b3.jar and placed it in my $ANT\_HOME/lib directory. However, when I include this in my build.xml file ... ``` <propertyregex property="selenium.email.success.subject" input="package.ABC.name" regexp="(.*)__ENV__(.*)" replace="\1${buildtarget}\2" override="true" casesensitive="false" /> ``` I get the error "Problem: failed to create task or type propertyregex. Cause: The name is undefined." upon running my Ant build file. What else do I need to do to get this task recognized?
The `propertyregex` ant task is part of [ant-contrib](http://ant-contrib.sourceforge.net/), and not included by default in any [apache-ant](http://ant.apache.org/) installation. You have to properly install `ant-contrib`. From the [ant-contrib](http://ant-contrib.sourceforge.net/) page, you have two choices: > > 1. Copy `ant-contrib-0.3.jar` to the lib directory of your Ant > installation. If you want to use one of the tasks in your own project, > add the line `<taskdef > resource="net/sf/antcontrib/antcontrib.properties"/>` to your build > file. > 2. Keep `ant-contrib-0.3.jar` in a separate location. You now have to > tell Ant explicitly where to find it (say in `/usr/share/java/lib`): > > > `<taskdef resource="net/sf/antcontrib/antcontrib.properties">` > > `<classpath>` > > `<pathelement > location="/usr/share/java/lib/ant-contrib-0.3.jar"/>` > > `</classpath>` > > `</taskdef>` > > >
Cannot run Doxygen from Meson on a C++ project I cannot run Doxygen through Meson's configuration. This is the related code in `meson.build`: ``` doxygen = find_program('doxygen') ... run_target('docs', command : 'doxygen ' + meson.source_root() + '/Doxyfile') ``` The doxygen executable is successfully found: > > Program doxygen found: YES (/usr/bin/doxygen) > > > However, when launched, I get this error message: > > [0/1] Running external command docs. > > Could not execute command "doxygen /home/project/Doxyfile". File not found. > > FAILED: meson-docs > > > Running it manually from the command line it works: ``` /usr/bin/doxygen /home/project/Doxyfile doxygen /home/project/Doxyfile ``` What is wrong in my `meson.build` configuration?
According to reference [manual](https://mesonbuild.com/Reference-manual.html), > > **command** is a **list** containing the command to run and the arguments to > pass to it. Each list item may be a string or a target > > > So, in your case the whole string is treated by meson as command, i.e. tool name, not as command + arguments. So, try this: ``` run_target('docs', command : ['doxygen', meson.source_root() + '/Doxyfile']) ``` Or it could be better to use directly the result of **find\_program()**: ``` doxygen = find_program('doxygen', required : false) if doxygen.found() message('Doxygen found') run_target('docs', command : [doxygen, meson.source_root() + '/Doxyfile']) else warning('Documentation disabled without doxygen') endif ``` Note that if you want to improve docs generation with support of Doxyfile.in, take a look at [custom\_target()](https://mesonbuild.com/Reference-manual.html#custom_target) instead and example like [this](https://gitlab.freedesktop.org/libinput/libinput/blob/a52f0db3c54b093a2c44dce37ea6dd5582a19c5a/doc/api/meson.build).
Why is array initialization with ternary operator illegal? `C` lets me use `char` pointers and arrays interchangeably often enough that I often think of them as completely interchangeable. But the following code demonstrates this is not true. Can anyone please explain why the initialization of `const char d[]` with the ternary operator, in the code below, is illegal? ``` /* main.c */ #include <stdio.h> int main() { const char* a = "lorem"; const char b[] = "ipsum"; int* p; const char* c = ( *p ? "dolor" : "sit" ); const char d[] = ( *p ? "amet" : "consectetur" ); // Why am I an error? return 0; } ``` Compilation: ``` > gcc -g main.c main.c: In function \u2018main\u2019: main.c:10:20: error: invalid initializer const char d[] = ( *p ? "amet" : "consectetur" ); // Why am I an error? ``` Related question: in case my terminology has been imprecise here: what is the correct term to describe `const char d[]`? Is it an array? A variable-length array? Something else? It is not considered a pointer - true? **Edit: I believe this question is not answered by [Array initialization with a ternary operator?](https://stackoverflow.com/questions/15877560/array-initialization-with-a-ternary-operator)** RE: the referenced question, I believe the premise is slightly different. E.g. the accepted answer explains that `{ 1, 2 };` (or `{ 'a', 'b' );`) are not valid `C` expressions, which I know already and accept. However `"amet";` and `"consectetur";` are valid `C` expressions.
> > ### 6.7.9 Initialization > > > ... > > 14 An array of character type may be initialized by a character string literal or UTF−8 string > literal, optionally enclosed in braces. Successive bytes of the string literal (including the > terminating null character if there is room or if the array is of unknown size) initialize the > elements of the array. > *[C 2011 Online Draft](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf)* `( *p ? "amet" : "consectetur" )` is *not* a string literal, nor does it *evaluate* to a string literal. It evaluates to an expression of type `char *`, which on its own is not a valid array initializer, and that evaluation does not occur until runtime. Not to mention, `p` is uninitialized, so the expression is undefined to begin with.
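For completeness, here is a small sketch of the two usual workarounds: keep the conditional result in a pointer, or copy it into an array at run time. The `cond` flag below is a stand-in for whatever well-defined condition you actually want to test (the original `*p` dereferences an uninitialized pointer, which is undefined behavior on its own).

```c
#include <string.h>

void demo(int cond)
{
    /* A pointer may be initialized from the conditional expression,
       because no array copy is involved: */
    const char *d1 = cond ? "amet" : "consectetur";

    /* If you really need an array, give it a size and copy at run time: */
    char d2[32];
    strcpy(d2, cond ? "amet" : "consectetur");

    (void)d1;   /* silence unused-variable warnings */
    (void)d2;
}
```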
What is the best way to process large CSV files? I have a third party system that generates a large amount of data each day (those are `CSV` files that are stored on FTP). There are 3 types of files that are being generated: - every 15 minutes (2 files). These files are pretty small (~ `2 Mb`) - everyday at 5 PM (~ `200 - 300 Mb`) - every midnight (this `CSV` file is about `1 Gb`) Overall the size of 4 `CSV`s is `1.5 Gb`. But we should take into account that some of the files are being generated every 15 minutes. These data should be aggregated also (not so hard process but it will definitely require time). I need fast responses. I am thinking how to store these data and overall on the implementation. We have `java` stack. The database is `MS SQL Standard`. From my measurements `MS SQL Standard` with other applications won't handle such load. What comes to my mind: - This could be an upgrade to `MS SQL Enterprise` with the separate server. - Usage of `PostgreSQL` on a separate server. Right now I'm working on PoC for this approach. What would you recommend here? Probably there are better alternatives. # Edit #1 Those large files are new data for the each day.
Okay. After spending some time with this problem (it included reading, consulting, experimenting, and doing several PoCs), I came up with the following solution. # Tl;dr **Database**: `PostgreSQL` as it is good for CSV, free and open source. **Tool**: [Apache Spark](http://spark.apache.org/) is a good fit for this type of task. Good performance. # DB Regarding the database, it is an important thing to decide: what to pick and how it will work in the future with such an amount of data. It should definitely be a separate server instance in order not to generate an additional load on the main database instance and not to block other applications. ## NoSQL I thought about the usage of `Cassandra` here, but this solution would be too complex right now. `Cassandra` does not have ad-hoc queries. `Cassandra`'s data storage layer is basically a key-value storage system. It means that you must "model" your data around the queries you need, rather than around the structure of the data itself. ## RDBMS I didn't want to overengineer here, and this is where I settled. ### MS SQL Server It is a way to go, but the big downside here is pricing. Pretty expensive. The Enterprise edition costs a lot of money taking into account our hardware. Regarding pricing, you could read this [policy document](https://www.microsoft.com/en-us/cloud-platform/sql-server-pricing). Another drawback here was the support for CSV files. This will be the main data source for us here. `MS SQL Server` can neither import nor export CSV properly: - `MS SQL Server` silently truncates text fields. - `MS SQL Server`'s text encoding handling goes wrong. - `MS SQL Server` throws an error message because it doesn't understand quoting or escaping. More on that comparison can be found in the article [PostgreSQL vs. MS SQL Server](http://www.pg-versus-ms.com/). ### PostgreSQL This database is a mature product and well battle-tested too. I heard a lot of positive feedback on it from others (of course, there are some tradeoffs too). It has a more classic SQL syntax and good CSV support; moreover, it is open source. It is worth mentioning that [SSMS](https://msdn.microsoft.com/en-us/library/mt238290.aspx) is way better than [PGAdmin](https://www.pgadmin.org/). [SSMS](https://msdn.microsoft.com/en-us/library/mt238290.aspx) has an autocomplete feature and multiple results (when you run several queries you get all the results at once, but in [PGAdmin](https://www.pgadmin.org/) you get only the last one). Anyway, right now I'm using [DataGrip](https://www.jetbrains.com/datagrip/) from JetBrains. # Processing Tool I've looked through [Spring Batch](http://projects.spring.io/spring-batch/) and [Apache Spark](http://spark.apache.org/). [Spring Batch](http://projects.spring.io/spring-batch/) is a bit too low-level to use for this task, and [Apache Spark](http://spark.apache.org/) provides the ability to scale more easily if that is needed in the future. Anyway, [Spring Batch](http://projects.spring.io/spring-batch/) could do this work too. Regarding an [Apache Spark](http://spark.apache.org/) example, the code can be found in the [learning-spark](https://github.com/databricks/learning-spark/blob/master/src/main/java/com/oreilly/learningsparkexamples/java/BasicLoadWholeCsv.java) project. My choice is [Apache Spark](http://spark.apache.org/) for now.
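To make the Spark part concrete, below is a minimal sketch (not the project's actual code) of reading one of the daily CSV drops and appending it to a PostgreSQL staging table over JDBC. The file path, JDBC URL, table name and credentials are placeholders, and the PostgreSQL JDBC driver has to be on the classpath.

```java
import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class DailyCsvIngest {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("daily-csv-ingest")
                .master("local[*]")                 // or a real cluster URL
                .getOrCreate();

        // Read one of the daily CSV files fetched from the FTP
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/ftp/daily_midnight.csv");

        // Aggregations would go here, e.g. df.groupBy(...).agg(...)

        Properties props = new Properties();
        props.setProperty("user", "etl_user");
        props.setProperty("password", "etl_password");

        // Append the batch to a staging table in PostgreSQL
        df.write()
          .mode(SaveMode.Append)
          .jdbc("jdbc:postgresql://db-host:5432/warehouse", "staging.daily_rows", props);

        spark.stop();
    }
}
```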
How can I be sure of the file encoding? I have a PHP file that I created with VIM, but I'm not sure which is its encoding. When I use the terminal and check the encoding with the command `file -bi foo` (My operating system is Ubuntu 11.04) it gives me the next result: `text/html; charset=us-ascii` But, when I open the file with gedit it says its encoding is UTF-8. Which one is correct? I want the file to be encoded in UTF-8. My guess is that there's no BOM in the file and that the command `file -bi` reads the file and doesn't find any UTF-8 characters, so it assumes that it's ascii, but in reality it's encoded in UTF-8.
Well, first of all, note that ASCII is a subset of UTF-8, so if your file contains only ASCII characters, it's correct to say that it's encoded in ASCII *and* it's correct to say that it's encoded in UTF-8. That being said, `file` typically only examines a short segment at the beginning of the file to determine its type, so it might be declaring it us-ascii if there are non-ASCII characters but they are beyond the initial segment of the file. On the other hand, gedit might say that the file is UTF-8 even if it's ASCII because UTF-8 is gedit's preferred character encoding and it intends to save the file with UTF-8 if you were to add any non-ASCII characters during your edit session. Again, if that's what gedit is saying, it wouldn't be wrong. Now to your question: 1. Run this command: ``` tr -d \\000-\\177 < your-file | wc -c ``` If the output says "0", then the file contains only ASCII characters. It's in ASCII (and it's also valid UTF-8) End of story. 2. Run this command ``` iconv -f utf-8 -t ucs-4 < your-file >/dev/null ``` If you get an error, the file does not contain valid UTF-8 (or at least, some part of it is corrupted). If you get no error, the file is extremely likely to be UTF-8. That's because UTF-8 has properties that make it very hard to mistake typical text in any other commonly used character encoding for valid UTF-8.
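If the check shows an encoding you don't want, converting is a one-liner, and since the file was written in Vim you can also just re-save it with an explicit encoding. The example below assumes the source is Latin-1; adjust as needed:

```sh
iconv -f iso-8859-1 -t utf-8 foo.php > foo.utf8.php

# or, from inside Vim:
#   :set fileencoding=utf-8
#   :w
```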
Save a ggplot2 time series plot grob generated by ggplotGrob [This post describes a method](https://stackoverflow.com/questions/44616530/axis-labels-on-two-lines-with-nested-x-variables-year-below-months) to create a two-line x-axis (year below months) on a time series plot. Unfortunately, the method that I use from this post (*option 2*) is not compatible with `ggsave()`. ``` library(tidyverse) library(lubridate) df <- tibble( date = as.Date(41000:42000, origin = "1899-12-30"), value = c(rnorm(500, 5), rnorm(501, 10)) ) p <- ggplot(df, aes(date, value)) + geom_line() + geom_vline( xintercept = as.numeric(df$date[yday(df$date) == 1]), color = "grey60" ) + scale_x_date(date_labels = "%b", date_breaks = "month", expand = c(0, 0)) + theme_bw() + theme(panel.grid.minor.x = element_blank()) + labs(x = "") # Get the grob g <- ggplotGrob(p) # Get the y axis index <- which(g$layout$name == "axis-b") # which grob xaxis <- g$grobs[[index]] # Get the ticks (labels and marks) ticks <- xaxis$children[[2]] # Get the labels ticksB <- ticks$grobs[[2]] # Edit x-axis label grob # Find every index of Jun in the x-axis labels and a year label junes <- grep("Jun", ticksB$children[[1]]$label) ticksB$children[[1]]$label[junes] <- paste0( ticksB$children[[1]]$label[junes], "\n ", # adjust the amount of spaces to center the year unique(year(df$date)) ) # Center the month labels between ticks ticksB$children[[1]]$label <- paste0( paste(rep(" ", 12), collapse = ""), # adjust the integer to center month ticksB$children[[1]]$label ) # Put the edited labels back into the plot ticks$grobs[[2]] <- ticksB xaxis$children[[2]] <- ticks g$grobs[[index]] <- xaxis # Draw the plot grid.newpage() grid.draw(g) # Save the plot ggsave("plot.png", width = 11, height = 8.5, units = "in") ``` A plot is saved, but without the years. How do I `ggsave()` the final plot from `grid.draw(g)`? This `grid.draw(g)` plot is shown below, but the actual `plot.png` file is slightly different, with the three years `2012`, `2013` and `2014` omitted. [![enter image description here](https://i.stack.imgur.com/iC5K9.png)](https://i.stack.imgur.com/iC5K9.png)
``` library(tidyverse) library(lubridate) library(scales) set.seed(123) df <- tibble( date = as.Date(41000:42000, origin = "1899-12-30"), value = c(rnorm(500, 5), rnorm(501, 10)) ) # create year column for facet df <- df %>% mutate(year = as.factor(year(date))) p <- ggplot(df, aes(date, value)) + geom_line() + geom_vline(xintercept = as.numeric(df$date[yday(df$date) == 1]), color = "grey60") + scale_x_date(date_labels = "%b", breaks = pretty_breaks(), expand = c(0, 0)) + # switch the facet strip label to the bottom facet_grid(.~ year, space = 'free_x', scales = 'free_x', switch = 'x') + labs(x = "") + theme_bw(base_size = 14, base_family = 'mono') + theme(panel.grid.minor.x = element_blank()) + # remove facet spacing on x-direction theme(panel.spacing.x = unit(0,"line")) + # switch the facet strip label to outside # remove background color theme(strip.placement = 'outside', strip.background.x = element_blank()) p ggsave("plot.png", plot = p, type = "cairo", width = 11, height = 8.5, units = "in", dpi = 150) ``` ![](https://i.stack.imgur.com/14JMw.png) --- Using `theme_classic()` ``` p <- ggplot(df, aes(date, value)) + geom_line() + geom_vline(xintercept = as.numeric(df$date[yday(df$date) == 1]), color = "grey60") + scale_x_date(date_labels = "%b", breaks = pretty_breaks(), expand = c(0, 0)) + # switch the facet strip label to the bottom facet_grid(.~ year, space = 'free_x', scales = 'free_x', switch = 'x') + labs(x = "") + theme_classic(base_size = 14, base_family = 'mono') + theme(panel.grid.minor.x = element_blank()) + # remove facet spacing on x-direction theme(panel.spacing.x = unit(0,"line")) + # switch the facet strip label to outside # remove background color theme(strip.placement = 'outside', strip.background.x = element_blank()) p ``` ![](https://i.stack.imgur.com/vhxfu.png) Add the top and right most borders ``` ymax <- ceiling(1.1 * max(df$value, na.rm = TRUE)) xmax <- max(df$date, na.rm = TRUE) p <- ggplot(df, aes(date, value)) + geom_line() + geom_vline(xintercept = as.numeric(df$date[yday(df$date) == 1]), color = "grey60") + scale_x_date(date_labels = "%b", breaks = pretty_breaks(), expand = c(0, 0)) + # switch the facet strip label to the bottom facet_grid(.~ year, space = 'free_x', scales = 'free_x', switch = 'x') + labs(x = "") + theme_classic(base_size = 14, base_family = 'mono') + theme(panel.grid.minor.x = element_blank()) + # remove facet spacing on x-direction theme(panel.spacing.x = unit(0,"line")) + # switch the facet strip label to outside # remove background color theme(strip.placement = 'outside', strip.background.x = element_blank()) + ### add top and right most borders scale_y_continuous(expand = c(0, 0), limits = c(0, ymax)) + geom_hline(yintercept = ymax) + geom_vline(xintercept = as.numeric(df$date[df$date == xmax])) + theme(panel.grid.major = element_line()) p ``` ![](https://i.stack.imgur.com/Cyg53.png) Created on 2018-10-01 by the [reprex package](https://reprex.tidyverse.org) (v0.2.1.9000)
Including (and interpreting!) random intercepts and/or slopes in linear mixed models I'm new to linear mixed modeling, and have some theory-driven questions that I'm not sure how to analytically resolve. I am analyzing experimental data with a within-subjects factor (`discount`). My theory hypothesizes that the effect of this within-subjects factor is contingent upon a between-subjects characteristic of respondents (`iipm`). Because my data is in long form, and respondents are making 8 choices over time, I model my data as follows: ``` library(lme4) m1 <- glmer(chose ~ iipm*discount + product + (1|id) + (1|time), data=long1, family="binomial") ``` All that I'm trying to do here is fit a simple model that accounts for the dependence between observations for a single subject (`1|id`), and the potential effect of making several choices in a row (`1|time`). However, my theory further specifies that this relationship **should not be affected** by the inclusion of other demographic variables in the model (let's say `ideology` and `partisanship`). So, based on some reading I've done [(as well as previous answers on this site)](https://stats.stackexchange.com/questions/3757/random-effect-slopes-in-linear-mixed-models), I fit the following model: ``` m2 <- glmer(chose ~ iipm*discount + product + (1 |id) + (1|time) + (1|partisanship) + (1|ideology) , data=long1, family="binomial") ``` Because random slopes goes beyond my expertise, I'm just using random intercepts to see what happens when I account for baseline variation amongst individuals attributable to their partisanship and ideology. However, if I **were** to use random slopes to essentially say that the effects of partisanship and ideology vary on an individual basis, even after accounting for baseline variability, I should specify the following model: ``` m3 <- glmer(chose ~ iipm*discount + product + (1 + partisanship +ideology |id) + (1|time) , data=long1, family="binomial") ``` To test the hypothesis that this baseline variability ***doesn't matter***, I then run a likelihood ratio test comparing the two models: ``` library(lmtest) lrtest(m2,m1) # p=.349 lrtest(m3,m1) # p=.416 ``` If there's no improvement in fit (p>.05), I (very tentatively) interpret this as support for my hypothesis that `demographics` and `ideology` don't matter. Is this a right way to approach the data, or is there a more sophisticated way to test this hypothesis using multilevel modeling? Any expertise and advice is greatly appreciated.
I think you're specifying a great deal of random effects which are better modeled as fixed effects, since you'll directly estimate the odds ratio for their association with the outcome. For instance, the question of whether partisanship modifies the relationship of discount and product is simply done as a test of interaction: ``` m1 <- glmer(chose ~ partisanship + iipm*discount + product + (1|id) + (1|time), data=long1, family="binomial") m2 <- glmer(chose ~ partisanship*(iipm*discount + product) + (1|id) + (1|time), data=long1, family="binomial") lrtest(m2, m1) ``` Under the null hypothesis, the interaction effects have a log odds ratio of 0. Note we have a LOT of such effects: `partisanship:iipm`, `partisanship:discount`, `partisanship:product` and the three-way effect `partisanship:iipm:discount`. Another important modification is that I included partisanship as a fixed effect in the null model. This is because when testing for effect modification, you need to ensure that the interaction parameters have the interpretation as a difference in odds ratios in the main effect for a unit difference in the modifier. I think "Regression Methods in Biostatistics" from Vittinghoff, ch. 7 addresses this. While the meaning of the p-value in the interaction analysis is difficult to interpret, I guarantee you it's much easier than the test of random effects you included above. Worse, the variance component test doesn't address the question you stated. Lastly, > > If there's no improvement in fit (p>.05), I (very tentatively) interpret this as support for my hypothesis that demographics and ideology don't matter. > > > This is not the correct interpretation of a $p$-value. Basic NHST says there was insufficient evidence to suggest otherwise. Given the complexity of this model, I'd be very dubious of the interpretation you provided.
Javascript Number formatting min/max decimals I'm trying to create a function that can format a number with the minimum decimal places of 2 and a maximum of 4. So basically if I pass in 354545.33 I would get back 354,545.33 and if I pass in 54433.6559943 I would get back 54,433.6559. ``` function numberFormat(num){ num = num+""; if (num.length > 0){ num = num.toString().replace(/\$|\,/g,''); num = Math.floor(num * 10000) / 10000; num += ''; x = num.split('.'); x1 = x[0]; x2 = x.length > 1 ? '.' + x[1] : ''; var rgx = /(\d+)(\d{3})/; while (rgx.test(x1)) { x1 = x1.replace(rgx, '$1' + ',' + '$2'); } return x1 + x2; } else{ return num; } } ```
### New 2016 solution ``` value.toLocaleString('en-US', { minimumFractionDigits: 2, maximumFractionDigits: 4 }); ``` Do not forget to include [polyfill](https://github.com/andyearnshaw/Intl.js/). - [compat table](http://caniuse.com/#feat=internationalization) - [API docs](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/NumberFormat) ### Old 2011 solution To format a part, after decimal point you can use this: ``` value.toFixed(4).replace(/0{0,2}$/, ""); ``` And for part before decimal point: [How to write this JS function in best(smartest) way?](https://stackoverflow.com/questions/4969823/how-to-write-this-js-function-in-bestsmartest-way)
Does default constructor for inner class need an object of an outer class? I was fiddling around with java, and I created two classes, `Outer` and `Inner` ``` public class Outer { public class Inner { } } ``` Inner class is using default constructor. I can create new instance of `Inner` class inside `Outer` just by calling non-args default constructor `new Inner()`. But when I tried do same thing using reflection, I noticed constructor require `Outer` type object. Is that mean inner class default constructor is not non-args? Why there is disjoint between calling constructor in normal way and reflection?
There is no "disjoint between calling constructor in normal way and reflection", only between calling constructor from inside the `Outer` class and from outside of `Outer` class. Since the `Inner` class is not `static`, it has a reference to `Outer`. If you want to create an instance of `Inner` from outside of `Outer`, you must write it like this: ``` Outer outerObj = new Outer(); Outer.Inner innerObj = outerObj.new Inner(); ``` The requirement to provide `outerObj` to the constructor of `Inner` is exactly the same as when you instantiate the `Inner` class through reflection. The only difference is that the syntax places `outerObj` on the left of operator `new`, while reflection passes `outerObj` to the constructor. Java compiler adds a hidden argument to the constructor that it generates automatically, in the same way that it adds a hidden field to the generated class in order to hold a reference to the `Outer` object. Note that this is true only for non-static inner classes. When your inner class is `static`, there is no hidden parameter and no hidden field, so reflection lets you create instances of the inner class without an `Outer` object.
How do I get information out of IMMDevice? I'm a complete beginner with regards to IMMDevice, and I need to ask what a good, easy way to get some device out of information out of it is. In my program, I've already successfully set a pointer to one of those with GetDefaultAudioEndpoint(). Now I just need to be able to get some basic information out of it, like some speaker info or the ID of the speaker or anything like that. What's a good way to do this that a total beginner would be able to pick up on? Thanks!
Devices have various properties attached, and you can read them from a device property store. MSDN gives you a code snippet here: [Audio Endpoint Devices > Device Properties](http://msdn.microsoft.com/en-us/library/windows/desktop/dd370812%28v=vs.85%29.aspx) You can also use pre-built utilities to quickly check your devices and see what you can obtain from a `IMMDevice` pointer: - [How to enumerate audio endpoint (IMMDevice) properties on your system](http://blogs.msdn.com/b/matthew_van_eerde/archive/2011/06/13/how-to-enumerate-audio-endpoint-immdevice-properties-on-your-system.aspx) - [Enumerate Audio ‘MMDevice’s](http://alax.info/blog/1279) The latter presents you the properties like this: [![enter image description here](https://i.stack.imgur.com/vaunr.png)](https://i.stack.imgur.com/vaunr.png) And you can check source code here <http://www.alax.info/trac/public/browser/trunk/Utilities/EnumerateAudioDevices/MainDialog.h#L72> that it starts from as much as having a `IMMDevice` pointer on hands in line 72.
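As a concrete starting point, here is a small sketch that pulls the friendly name and the endpoint ID out of a device. It assumes `pDevice` is the `IMMDevice*` you already obtained from `GetDefaultAudioEndpoint()` and that COM is initialized; error handling is reduced to the bare minimum.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>  // PKEY_Device_FriendlyName
#include <cstdio>

void PrintDeviceInfo(IMMDevice *pDevice)
{
    // The endpoint ID string (unique per device)
    LPWSTR pwszId = nullptr;
    if (SUCCEEDED(pDevice->GetId(&pwszId)))
    {
        wprintf(L"Endpoint ID: %s\n", pwszId);
        CoTaskMemFree(pwszId);
    }

    // Read properties from the device's property store
    IPropertyStore *pProps = nullptr;
    if (SUCCEEDED(pDevice->OpenPropertyStore(STGM_READ, &pProps)))
    {
        PROPVARIANT varName;
        PropVariantInit(&varName);
        if (SUCCEEDED(pProps->GetValue(PKEY_Device_FriendlyName, &varName))
            && varName.vt == VT_LPWSTR)
        {
            wprintf(L"Friendly name: %s\n", varName.pwszVal);
        }
        PropVariantClear(&varName);
        pProps->Release();
    }
}
```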
How to install Elasticsearch on 16.04 LTs I have Ubuntu 16.04 LTS on virtual box and I'd like to install `elasticsearch` in order to use it with [Ruby on Rails](https://github.com/ankane/searchkick#get-started). I've done a search on google on how to do the installation, but I have found mutiple guides that have a few differences which got me a little bit confused, for instance: [Installing elasticsearch on Ubuntu 16.04](https://www.accesstomemory.org/en/docs/2.3/admin-manual/installation/linux/ubuntu-xenial/#elasticsearch) [Digital Ocean: Install and Configure Elasticsearch on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-16-04) (see @ Prerequisites) How should I install elasticsearch safely, without messing up my development environment?
Elastic has their own guide to install elasticsearch with Debian packages. you can find it on their website. <https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html> I used this one and now everything works like a charm. Elasticsearch requires Java 8 or later: ``` $ sudo apt-get install openjdk-8-jdk $ java -version openjdk version "1.8.0_111" ``` Install Elasticsearch ``` $ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - $ sudo apt-get install apt-transport-https $ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list $ sudo apt-get update && sudo apt-get install elasticsearch ``` **some remarks (from elastic website):** - Do not use `add-apt-repository` as it will add a deb-src entry as well, but we do not provide a source package. If you have added the deb-src entry, you will see an error like the following: > > > ``` > Unable to find expected entry 'main/source/Sources' in Release file > Wrong sources.list entry or malformed file) > > ``` > > => Delete the deb-src entry from the /etc/apt/sources.list file and the installation should work as expected. - If two entries exist for the same Elasticsearch repository, you will see an error like this during apt-get update: > > Duplicate sources.list entry > <https://artifacts.elastic.co/packages/5.x/apt/> ...` > > > => Examine `/etc/apt/sources.list.d/elasticsearch-5.x.list` for the duplicate entry or locate the duplicate entry amongst the files in `/etc/apt/sources.list.d/` and the `/etc/apt/sources.list` file
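After the package install, the usual next steps on 16.04 (systemd) are to enable and start the service, then check that the node answers on its default port. The commands below are the stock ones from the Debian-package docs; adjust if your setup differs:

```sh
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service

# quick sanity check: the node should answer on port 9200
curl -X GET 'http://localhost:9200'
```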
Haskell: Not in scope: foldl'? As my first module experience, I imported Data.List to my GHCi. (More precisely, I typed `import Data.List` on my GHCi) It seems to be working fine because I can use some functions that I did not have before such as `foldl'` on my GHCi. I wrote `haha = foldl' (+) 0 [1..10]` on my notepad++, and saved it and loaded then GHCi says `Not in scope: foldl'` even though it workds just fine when I type `foldl' (+) 0 [1..10]` directly on my GHCi. Why is that and how can I define functions with `foldl'` on my notepad?
What's in scope at the GHCi prompt is not necessarily the same as what's in scope in whatever file you may be loading from GHCi. GHCi has its own notion of current scope, which usually includes the toplevel of whatever file you've loaded plus any other modules you explicitly add or anything you import. (It also behaves differently if loading a file that hasn't been changed since it was last compiled, which still confuses me...) Anyway, you just need to import `Data.List` in the code file itself, e.g.: ``` module Main where import Data.List haha = foldl' (+) 0 [1..10] ``` After doing that, loading the file should result in `Data.List` being effectively imported at the GHCi prompt as well, since it's visible at the toplevel of the loaded module.
Laravel 5.2 Login Event Handling In the database I have a table users with column last\_login\_at. Everytime when some user logs in - I want to uptade *last\_login\_at*. So, I created app/Listeners/**UpdateLastLoginOnLogin.php**: ``` namespace App\Listeners; use Carbon\Carbon; class UpdateLastLoginOnLogin { public function handle($user, $remember) { $user->last_login_at = Carbon::now(); $user->save(); } } ``` In app/Providers/EventServiceProvider: ``` protected $listen = [ 'auth.login' => [ 'App\Listeners\UpdateLastLoginOnLogin', ], ]; ``` BUT this doesn't work, event is not handled. The same problem has already been mentioned here: [EventServiceProvider mapping for Laravel 5.2 login](https://stackoverflow.com/questions/34974192/eventserviceprovider-mapping-for-laravel-5-2-login) but without solution. I have tried to do like this: ... ``` use Illuminate\Auth\Events\Login; class UpdateLastLoginOnLogin { public function handle(Login $event) { $event->user->last_login_at = Carbon::now(); $event->user->save(); } } ``` and: ``` protected $listen = [ 'Illuminate\Auth\Events\Login' => [ 'App\Listeners\UpdateLastLoginOnLogin', ], ]; ``` But it doesn't work. Also, I checked this: <https://laracasts.com/discuss/channels/general-discussion/login-event-handling-in-laravel-5> but **php artiasn clear-compiled** didn't solve the problem. **EDIT:** FOR OTHER DETAILS, HERE'S A LINK TO THE PROJECT which is actually exactly the same (it is done in the same way): <https://github.com/tutsplus/build-a-cms-with-laravel>
You are almost there, just a few changes more, Events and Listeners for authentication have changed a little in Laravel 5.2: the handle method in **UpdateLastLoginOnLogin** should have just an event as parameter ``` namespace App\Listeners; use Carbon\Carbon; use Auth; class UpdateLastLoginOnLogin { public function handle($event) { $user = Auth::user(); $user->last_login_at = Carbon::now(); $user->save(); } } ``` And for the **EventServiceProvider** you specify the listeners like this : ``` protected $listen = [ 'Illuminate\Auth\Events\Login' => [ 'App\Listeners\UpdateLastLoginOnLogin@handle', ], ]; ```
Evaluate mathematical expressions I have column which type is `varchar`, values of this column are mathematical operations (just addition and minus operations) ``` col ------ 2+3+2+1 3+3-4 1+1-2.5 ``` There is possible, that evalute this expressions? that is needed result is: ``` col ---------- 8 2 -0.5 ```
Since you can't use `EXECUTE IMMEDIATE` or prepared statements in `CREATE FUNCTION` (which would have helped in evaluating the expressions with a simple `CONCAT("SELECT ", expr, " FROM dual")`), please see below a solution that makes use of a function that actually computes the mathematical expression (since your question concerns only simple expressions with `+` and `-` operators) ``` DELIMITER $$ CREATE FUNCTION calc(expr VARCHAR(255)) RETURNS FLOAT BEGIN DECLARE result FLOAT; DECLARE operand VARCHAR(255); DECLARE operator INT; DECLARE i INT; DECLARE c CHAR; SET i = 1; SET result = 0; SET operand = 0; SET operator = 1; WHILE(i <= LENGTH(expr)) DO SET c = SUBSTR(expr, i, 1); IF c = '+' THEN SET result = result + operator * operand; SET operator = 1; SET operand = ''; ELSEIF c = '-' THEN SET result = result + operator * operand; SET operator = -1; SET operand = ''; ELSE SET operand = CONCAT(operand, c); END IF; SET i = i + 1; END WHILE; SET result = result + operator * operand; RETURN result; END$$ ``` After you create this function you can simply use the `SELECT` command: ``` SELECT calc(col) FROM tbl; ``` [Here's the DEMO.](http://sqlfiddle.com/#!2/3fc9f/1)
Should we (still) consider a synced XMLHTTPRequest bad practice? As you may know, `XMLHTTPRequest` can be used synchronously or asynchronously, but synchronized requests have always been considered bad practice, and I've always agreed with that. Not only is the ideology of modern JS development heavily based on an event model, but there were also some more prosaic reasons to avoid synchronized requests. For example, old Internet Explorer versions could just freeze suddenly. Today I saw a synchronized request in Liferay source code and thought "What a shame, how dare they. Don't they know that it is wrong?". But then I asked myself what is actually wrong with this approach in modern times, and I wasn't able to give an accurate, logical answer. For example, on the server side it is common practice to use synchronized HTTP requests. Of course all the data could be fetched asynchronously, and perhaps should be fetched that way, but we often need data that will be used to fetch another chunk of data, so in that case the request must be synchronized. So, should this still be considered bad practice? **PS**: I haven't used the term `AJAX` since the first A stands for asynchronous. :)
In short, yes. Synchronous HTTP requests halt execution of subsequent code while they are en route. While browsers may *no longer* block the UI during this time, we're relying on the user's available bandwidth, network reliability, and the server's current load for the performance of our code. This is generally not good practice. On the [MDN "Using XMLHttpRequest"](https://developer.mozilla.org/En/XMLHttpRequest/Using_XMLHttpRequest) page, there is also a vague warning about memory use and event leakage when using synchronous XMLHttpRequest. There is no detail given, but given that the UI is not blocked, and JavaScript is, it would make sense that we might miss UI events we would have caught had we been using Async. > > **Note:** You shouldn't use synchronous XMLHttpRequests because, due to > the inherently asynchronous nature of networking, there are various > ways memory and events can leak when using synchronous requests. > > > **Edit: Server Side** While I agree that synchronous requests are common, I don't know that they're best practice. They may, however, be *better* practice on the server than on the client. It is *better* to put faith in your own server and its network connections than it is to put faith in those of your user. The server is (theoretically) a constant, the client is a variable. There are also no UI issues to deal with, and no possibility of uncaptured events.
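For comparison, the asynchronous version of the same request is barely more code; the work simply moves into callbacks instead of blocking the script. `handleData` and `handleError` are placeholders for whatever your page actually does with the result.

```js
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/data', true); // third argument true = asynchronous

xhr.onload = function () {
  if (xhr.status >= 200 && xhr.status < 300) {
    handleData(JSON.parse(xhr.responseText)); // success path
  } else {
    handleError(xhr.status);                  // HTTP-level failure
  }
};

xhr.onerror = function () {
  handleError('network error');               // request never completed
};

xhr.send();
```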
Django Authenticate Backend Multiple Databases I am rewriting a legacy application that has a database for each customer. Each customer has its own authentication and user set. Thus, I'll need a custom authentication backend because django's auth is set to only use default. I have written middleware that examines the url upon every request and extracts information there to set a database\_name on the request. If I had access to the request during processing of my custom authencation backend, I could easily perform database calls as `user = User.objects.using(request.db).get(username=username)` However, I see no easy way to accomplish this. I've seen an answer to that here: [Access request.session from backend.get\_user](https://stackoverflow.com/questions/1091593/django-access-request-session-from-backend-get-user), but this wouldn't appear to be thread safe so I don't want to go down that road. The only solution I can see that still uses django-auth is to have an authentication backend for each customer that sets the database name to be used as a class attribute. Then, I would create a custom login function that sets the request.session['\_auth\_user\_backend'] to be the customer specific backend. Thus, when get\_user(user\_id) is called on each request, it uses the customer backend which knows which database to request from. I would like to avoid having to manage an authentication backend for each customer if possible. Is there a better way to do this?
Since the auth backend is not calling the QuerySet method `using` you could use a [database router](https://docs.djangoproject.com/en/dev/topics/db/multi-db/#database-routers) with a [thread local](http://docs.python.org/library/threading.html#threading.local) variable and some middleware to set the variable to the customer's database name. The middleware would have to be placed before the authentication middleware. The thread local variable is thread safe. It creates a thread local global variable. If you were following the path of a request it would do the following: 1. The request hits django 2. Your custom middleware grabs the database name from the url sets it to the thread local global variable. 3. The django authentication middleware starts and sets the user by running the query `User.object.get(id=user_id)`. This will use your database router which will just return the thread local global variable that was set in the previous middleware. 4. The request continues into the rest of the django stack. For example you have the following modules: my\_app/middleware.py ``` from threading import local my_local_global = local() class CustomerMiddleware(object): def process_request(self, request): my_local_global.database_name = get_database_name(request) ``` my\_app/routers.py ``` from middleware import my_local_global class MultiCustomerRouter(object): def db_for_read(self, model, **hints): return my_local_global.database_name ``` settings.py ``` ... MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'my_app.middleware.CustomerMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', ) DATABASE_ROUTERS = ['my_app.routers.MultiCustomerRouter'] ... ```
How to edit a View hidden behind other Views in Xcode Storyboard? Is there an easy way in Xcode to edit a view that is hidden behind other views within the Storyboard? I know I can hide the views in front of it and move the view to the front temporarily, but this is difficult to keep doing over and over again. Is there a way to isolate the view somehow in Storyboard so I can edit it on its own?
Select your view controller in storyboard and Open `File Inspector` Select `Use Auto Layout` and `Use Trait Variations` [![enter image description here](https://i.stack.imgur.com/SPWKa.png)](https://i.stack.imgur.com/SPWKa.png) Now select the `view` you want to hide in your storyboard. Open `Attributes inspector`. Check/uncheck the `installed` option to show/hide the selected view. You can hide multiple views by changing this property after selecting all the views. Don't forget to check this option for all the views while running the app. Else it will crash [![enter image description here](https://i.stack.imgur.com/O2o11.gif)](https://i.stack.imgur.com/O2o11.gif)
mailto: links unsupported in Android? I dont have a real Android device so I'm using emulators for all my development for now, are mailto: web links really unsupported on Android devices 2.1 and below? 2.2 works, but every time I click a `mailto:` link on 1.6 or 2.1 even, I get an [unsupported action] dialog. Anybody with a real device want to test this out?
You have to handle it yourself in a WebViewClient ``` public class MyWebViewClient extends WebViewClient { Activity mContext; public MyWebViewClient(Activity context){ this.mContext = context; } @Override public boolean shouldOverrideUrlLoading(WebView view, String url) { if(url.startsWith("mailto:")){ MailTo mt = MailTo.parse(url); Intent i = new Intent(Intent.ACTION_SEND); i.setType("text/plain"); i.putExtra(Intent.EXTRA_EMAIL, new String[]{mt.getTo()}); i.putExtra(Intent.EXTRA_SUBJECT, mt.getSubject()); i.putExtra(Intent.EXTRA_CC, mt.getCc()); i.putExtra(Intent.EXTRA_TEXT, mt.getBody()); mContext.startActivity(i); view.reload(); return true; } view.loadUrl(url); return true; } } ``` In your activity you keep a reference to `MyWebViewClient` and assign it to your `webview` with `setWebViewClient(mWebClient)`.
Adding to path vs. linking from /bin Our sys admin installed a software application (Maven) on the server and told everyone to add the `/usr/local/maven/bin/` folder to their path. I think it could be more convenient to just link the few programs in that folder from the `/bin` folder (or other folder that everyone has in their path) like this: ``` ln -s /usr/local/maven/bin/* /bin ``` Is this correct? Are there some hidden side effects to my suggestion?
### On linking You generally do not link `/usr/local/*` with `/bin`, but this is more of a historical practice. In general, there are a few "technical" reason why you cannot do what you're suggesting. Making links to executables in `/bin` can cause problems: 1. Probably the biggest caveat would be if you're system is having packages managed by some sort of package manager such as RPM, dpkg, APT, YUM, pacman, pkg\_add, etc. In these cases, you'll generally want to let the package manager do its job and manage directories such as `/sbin`, `/bin`, `/lib`, and `/usr`. One exception would be `/usr/local` which is typically a safe place to do as you see fit on the box, without having to worry about a package manager interfering with your files. 2. Often times executables built for `/usr/local` will have this PATH hard-coded into their executables. There may also be configuration files that are included in `/usr/local` as part of the installation of these applications. So linking to just the executable could cause issues with these apps finding the `.cfg` files later one. Here's an example of such a case: ``` $ strings /usr/local/bin/wit | grep '/usr/local' /usr/local/share/wit /usr/local/share/wit/ ``` 3. The same issue that applies to finding `.cfg` files can also occur with "helper" executables that the primary app needs to run. These too would also need to be linked into `/usr/bin`, knowing this might be problematic and only show up when you actually attempted to execute the linked app. **NOTE:** in general it's best to avoid the temptation to link to one off apps in `/usr/bin`. ### /etc/profile.d Rather then have all the users provide this management, the admin could very easily add this to everyone's `$PATH` on the box by adding a corresponding file in the `/etc/profile.d` directory. A file such as this, `/etc/profile.d/maven.sh`: ``` PATH=$PATH:/usr/local/maven/bin ``` You generally do this as an admin instead of polluting all the users' setups with this. ### Using alternatives Most distros now provide another tool called `alternatives` (Fedora/CentOS) or `update-alternatives` (Debian/Ubuntu) which you can also use to loop into the `$PATH` tools which might be outside the `/bin`. Using tools such as these is preferable since these are adhering more to what most admins would consider "standard practice" and so makes the systems easier to hand off from one admin to another. This tool does a similar thing in making links in `/bin`; but it manages the creation and destruction of these links, so it's easier to understand a system's intended setup when done through a tool vs. done directly as you're suggesting. Here I'm using that system to manage Oracle's Java on a box: ``` $ ls -l /etc/alternatives/ | grep " java" lrwxrwxrwx. 1 root root 73 Feb 5 13:15 java -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/jre/bin/java lrwxrwxrwx. 1 root root 77 Feb 5 13:15 java.1.gz -> /usr/share/man/man1/java-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz lrwxrwxrwx. 1 root root 70 Feb 5 13:19 javac -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/bin/javac lrwxrwxrwx. 1 root root 78 Feb 5 13:19 javac.1.gz -> /usr/share/man/man1/javac-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz lrwxrwxrwx. 1 root root 72 Feb 5 13:19 javadoc -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/bin/javadoc lrwxrwxrwx. 
1 root root 80 Feb 5 13:19 javadoc.1.gz -> /usr/share/man/man1/javadoc-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz ``` You can see the effects of this: ``` $ type java java is /usr/bin/java $ readlink -f /usr/bin/java /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/jre/bin/java ``` ### My $0.02 Making links in `/bin`, though plausible, would likely be highly discouraged by most sysadmins: 1. Would be frowned upon because it's viewed as custom and can lead to confusion if another admin is required to pick up the box 2. Can lead to a system becoming broken at a future state as a result of this "fragile" customization.
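If you do go the alternatives route for Maven itself, registering an executable is one command; the link name and priority below are just illustrative choices, not anything the tool mandates:

```sh
# Debian/Ubuntu: update-alternatives --install <link> <name> <path> <priority>
sudo update-alternatives --install /usr/bin/mvn mvn /usr/local/maven/bin/mvn 100

# Fedora/RHEL ship the same mechanism under the name "alternatives"
sudo alternatives --install /usr/bin/mvn mvn /usr/local/maven/bin/mvn 100
```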
Postgres can't listen to a specific IP address I'm trying to block access to my PostgreSQL and allow access only to Localhost and my machine external IP, something like: "172.211.xx.xx". This IP is provided by my ISP (Internet Service Provider). In `postgresql.conf` I set the following line: ``` listen_addresses = '179.211.xx.xx' ``` But I can't connect to the database from my machine. I get "Server don't listen". If I change to: ``` listen_addresses = '*' ``` everything works, but I can't do it. I need to enable access only to this IP. This is a security requirement of my project. MY `pg_hba.conf`: ``` host all all 0.0.0.0/0 md5 ```
The parameter `listen_addresses` in `postgresql.conf` controls which of the server's own IP addresses (i.e. which local network interfaces) PostgreSQL listens on; it does not control which client addresses are allowed to connect and authenticate. In my eyes, it's alright to set `listen_addresses` to `*` and constrain the rest in the `pg_hba.conf`. In other words: doing the fine tuning in the `pg_hba.conf` is just fine. So .. ``` listen_addresses = '*' ``` .. and .. ``` host all all 179.211.198.0/24 md5 ``` .. should do (note that every `pg_hba.conf` line needs an authentication method at the end; `md5` matches the line you already have). This means that all users have access to all databases from the IP range `179.211.198.1 - 179.211.198.254`. You can go further and limit access for specific users to certain databases: ``` host my_db my_user 179.211.198.0/24 md5 ```
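One operational detail worth remembering: a change to `listen_addresses` only takes effect after a server restart, while `pg_hba.conf` changes only need a reload. The service name below is a common default and may differ on your system:

```sh
sudo systemctl restart postgresql                     # required after editing listen_addresses
sudo -u postgres psql -c "SELECT pg_reload_conf();"   # enough after editing pg_hba.conf
```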
Why do I get the last value in a list in scalar context in perl? I had assumed that, in perl, `$x=(2,3,4,5)` and `($x)=(2,3,4,5)` would give me the same result, but was surprised at what happened in my tests. I am wondering why this behavior is the way it is and why `wantarray` behaves differently. Here are my tests and the results: ``` >perl -e '$x=(1,2,3,5);print("$x\n")' 5 >perl -e '($x)=(1,2,3,5);print("$x\n")' 1 >perl -e '$x=(wantarray ? (1,2,3,5) : 4);print("$x\n")' 4 >perl -e '($x)=(wantarray ? (1,2,3,5) : 4);print("$x\n")' 4 ``` Is this behavior consistent/reliable across all platforms? Whoops. wantarray is for context of subroutine calls... ``` >perl -e '$x=test();sub test{return(1,2,3,5)};print("$x\n")' 5 >perl -e '($x)=test();sub test{return(1,2,3,5)};print("$x\n")' 1 >perl -e '$x=test();sub test{return(wantarray ? (1,2,3,5) : 4)};print("$x\n")' 4 >perl -e '($x)=test();sub test{return(wantarray ? (1,2,3,5) : 4)};print("$x\n")' 1 ``` So I guess it is consistent, but why does the list return the last value in scalar context?
> > So I guess it is consistent, but why does the list return the last value in scalar context? > > > Because it's useful. ``` my $x = f() || ( warn('!!!'), 'default' ); ``` Well, more useful than any other alternative, at least. It's also consistent with its stronger cousin, `;`. ``` sub f { x(), y(), z() }; ``` is the same as ``` sub f { x(); y(); z() }; ``` --- Each operator decides what it returns in both scalar and list context.[1] There are operators that only ever return a single scalar. ``` say time; # Returns the number of seconds since epoch say scalar( time ); # Ditto. ``` But operators that normally return more than one scalar and those that return a variable number of scalars cannot possibly return that in scalar context, so they will return something else. It's up to each one to decide what that is. ``` say scalar( 4,5,6 ); # Last item evaluated in scalar context. say scalar( @a ); # Number of elements in @a. say scalar( grep f(), g() ); # Number of matching items. say scalar( localtime ); # Formatted timestamp. ``` The list operator (e.g. `x,y,z`) returns what the last item of the list (`z`) returns when evaluated in scalar context. For example, ``` my $x = (f(),g(),@a); ``` is a weird way of writing ``` f(); g(); my $x = @a; ``` --- Notes 1. Same goes for subs, though it's common to write subs that are useless to call in scalar context.
Count enumerated values? If my table looks like this: ``` CREATE TABLE `daily_individual_tracking` ( `daily_individual_tracking_id` int(10) unsigned NOT NULL auto_increment, `daily_individual_tracking_date` date NOT NULL default ''0000-00-00'', `sales` enum(''no'',''yes'') NOT NULL COMMENT ''no'', `repairs` enum(''no'',''yes'') NOT NULL COMMENT ''no'', `shipping` enum(''no'',''yes'') NOT NULL COMMENT ''no'', PRIMARY KEY (`daily_individual_tracking_id`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1 ``` basically the fields can be either yes or no. How can I count how many yes's their are for each column over a date range? Thanks!!
You can either run three queries like this: ``` SELECT COUNT(*) FROM daily_individual_tracking WHERE sales = 'YES' AND daily_individual_tracking_date BETWEEN '2010-01-01' AND '2010-03-31' ``` Or if you want you can get all three at once like this: ``` SELECT ( SELECT COUNT(*) FROM daily_individual_tracking WHERE sales = 'YES' AND daily_individual_tracking_date BETWEEN '2010-01-01' AND '2010-03-31' ) AS sales_count, ( SELECT COUNT(*) FROM daily_individual_tracking WHERE repairs = 'YES' AND daily_individual_tracking_date BETWEEN '2010-01-01' AND '2010-03-31' ) AS repairs_count, ( SELECT COUNT(*) FROM daily_individual_tracking WHERE shipping = 'YES' AND daily_individual_tracking_date BETWEEN '2010-01-01' AND '2010-03-31' ) AS shipping_count ``` Another way to do it is to use SUM instead of COUNT. You could try this too to see how it affects the performance: ``` SELECT SUM(sales = 'YES') AS sales_count, SUM(repairs = 'YES') AS repairs_count, SUM(shipping = 'YES') AS shipping_count FROM daily_individual_tracking WHERE daily_individual_tracking_date BETWEEN '2010-01-01' AND '2010-03-31' ```
End user experience monitoring tools I have a web application with a great deal of both client-side and server-side logic. It is considered business-critical that this application feel responsive to the end user, for some definition of "feels responsive." ;) Most website monitoring discussions revolve around keeping an eye on server-side metrics (response time, I/O queue depth, latency, CPU load, etc.), i.e. we tend to treat server performance and responsiveness as though it's a viable "proxy" for what the user is experiencing. Unfortunately, as we move more and more logic to client side Javascript, the correlation decreases and our server metrics become less useful. I didn't find any good matching SO questions on this. Googling gives a range of commercial products that *might* be related, but they're generally from the manufacturers' websites, full of unhelpful marketspeak and "please call us for details," so it's hard to know. Are there any commonly-used tools for this sort of thing, other than rolling your own? Both free and commercial are welcome, although free is obviously better all else being equal. **EDIT:** To clarify, I primarily need to gather bulk data on the user experience, including both responsiveness and breakage/script errors. Automatic analysis is a very-nice-to-have, although I'd expect to have to occasionally dig into the data myself regardless of the solution.
There are some freely available tools for performance monitoring. Yahoo open-sourced a script they used called [Boomerang](http://developer.yahoo.com/blogs/ydn/posts/2010/06/performance_testing_with_boomerang/) which can measure page load times and other performance metrics for end-users. [Full documentation here](http://yahoo.github.com/boomerang/doc/). Google analytics also offers a [basic page load time report](http://analytics.blogspot.com/2011/05/measure-page-load-time-with-site-speed.html). For error monitoring, you'll want to listen for the window.onerror event. I don't know of any scripts that will automatically log it for you, or mine the logs on the server side. If you implement your own, you'll want to be very careful about not pinging the server too often--imagine how many requests it would generate if there was a JS error in your JS error handling code!
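As a rough sketch of the roll-your-own route for the `window.onerror` part, including a crude cap so the handler itself can never flood the server. The `/log/js-error` endpoint is made up; point it at whatever your backend exposes:

```js
var reportedErrors = 0;

window.onerror = function (message, source, lineno) {
  if (reportedErrors >= 5) {
    return; // stop after a few reports so an error loop can't hammer the server
  }
  reportedErrors += 1;

  // Image beacon: fire-and-forget, no XHR machinery needed
  var beacon = new Image();
  beacon.src = '/log/js-error' +
    '?m=' + encodeURIComponent(message || '') +
    '&s=' + encodeURIComponent(source || '') +
    '&l=' + encodeURIComponent(lineno || '');
};
```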
Catch a TimeoutError of a promise beforehand I have a bluebird promise which can be cancelled. When cancelled, I have to do some work to neatly abort the running task. A task can be cancelled in two ways: via `promise.cancel()` or `promise.timeout(delay)`. In order to be able to neatly abort the task when cancelled or timed out, I have to catch CancellationErrors and TimeoutErrors. Catching a CancellationError works, but for some reason I can't catch a TimeoutError: ``` var Promise = require('bluebird'); function task() { return new Promise(function (resolve, reject) { // ... a long running task ... }) .cancellable() .catch(Promise.CancellationError, function(error) { // ... must neatly abort the task ... console.log('Task cancelled', error); }) .catch(Promise.TimeoutError, function(error) { // ... must neatly abort the task ... console.log('Task timed out', error); }); } var promise = task(); //promise.cancel(); // this works fine, CancellationError is caught promise.timeout(1000); // PROBLEM: this TimeoutError isn't caught! ``` How can I catch timeout errors *before* a timeout is set?
when you cancel a promise, the cancellation first bubbles to its parents as long as a parents are found that are still cancellable, this is very different from normal rejection which only propagates to children. `.timeout` does a simple normal rejection, it doesn't do cancellation, so that's why it's not possible to do it like this. You can either cancel after a delay: ``` var promise = task(); Promise.delay(1000).then(function() { promise.cancel(); }); ``` or set the timeout in the task function: ``` var promise = task(1000); function task(timeout) { return new Promise(function (resolve, reject) { // ... a long running task ... }) .timeout(timeout) .cancellable() .catch(Promise.CancellationError, function(error) { // ... must neatly abort the task ... console.log('Task cancelled', error); }) .catch(Promise.TimeoutError, function(error) { // ... must neatly abort the task ... console.log('Task timed out', error); }); } ``` --- You can also create a method like: ``` Promise.prototype.cancelAfter = function(ms) { var self = this; setTimeout(function() { self.cancel(); }, ms); return this; }; ``` Then ``` function task() { return new Promise(function (resolve, reject) { // ... a long running task ... }) .cancellable() .catch(Promise.CancellationError, function(error) { // ... must neatly abort the task ... console.log('Task cancelled', error); }) } var promise = task(); // Since it's a cancellation, it will propagate upwards so you can // clean up in the task function promise.cancelAfter(1000); ```
When should I write my own Look and Feel for Java Swing instead of customizing one? I have used a few different Look and Feels for Java Swing, but I don't really like anyone to 100% so I often end up with customizing it a lot. Sometimes I am thinking about if it is a better idea to write my own LaF (by extending an existing one), but I don't really know. For the moment, I mostly use [Nimbus](http://java.sun.com/developer/technicalArticles/javase/java6u10/#nimbus), but I change all colors (to darker ones) and rewrite the appearance of some components, like [sliders](http://www.jasperpotts.com/blog/2008/08/skinning-a-slider-with-nimbus/) and scrollbars. I also mostly customize all tables and I am thinking about to change the look of a few other components. When is it recommended to create a new Look-and-Feel instead of customizing one? What are the pros and cons? I.e. customize Nimbus or create a new one by extending Nimbus? Related article: [Creating a Custom Look and Feel](http://java.sun.com/products/jfc/tsc/articles/sce/index.html) (old)
> > When is it recommended to create a new Look-and-Feel instead of customizing one? > > > - When you want your app to look very nearly exactly the same on every major platform - When you want to get rid of "dead giveaways" that your app is a Java app (which is a concern for quite a few programmers selling commercial software to OS X users, where Java has a terribly bad rep amongst users [it's undeserved, but it's a fact]). Here's an example of the kind of sickness you have to go through to enhance the pathetic JTable even a tiny bit (btw the author of this blog later got hired by Apple): <http://explodingpixels.wordpress.com/2009/05/18/creating-a-better-jtable/> > > What are the pros? > > > It is possible to make an app look nearly identical (the only difference being the font rendering, which varies slightly from OS X to Linux to Windows... but it can be made to be quite close, or you can go with "non AA" pixel-perfect fonts, which will look perfectly identical). > > and cons? > > > - Your app won't have a native look at all. - It's a lot of work (Swing ain't exactly easy: it's some kind of over-engineered, over-complicated beast and it's very difficult to tell if it's "incredibly smartly designed" or "incredibly dumbly designed"). I've written several custom Swing components looking **exactly** the same on Windows, Linux and OS X (using pixel-perfect fonts) but I wouldn't write an entire LnF.
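If you stay on the customizing side instead, most of a dark Nimbus theme comes down to overriding entries in the UIManager defaults table before installing the look and feel. The key names below are the documented Nimbus ones; the color values are arbitrary examples:

```java
import java.awt.Color;
import javax.swing.UIManager;

public final class DarkNimbus {
    public static void install() {
        // Override Nimbus' primary colors before the LaF derives its palette from them
        UIManager.put("control", new Color(45, 45, 45));     // default component background
        UIManager.put("nimbusBase", new Color(30, 30, 60));  // base color many others derive from
        UIManager.put("text", new Color(230, 230, 230));     // default text color
        try {
            for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) {
                if ("Nimbus".equals(info.getName())) {
                    UIManager.setLookAndFeel(info.getClassName());
                    break;
                }
            }
        } catch (Exception e) {
            // Nimbus unavailable: keep the default look and feel
        }
    }
}
```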
how to highlight currently opened page link in css I have a set of links in the left navigation panel. And I wanted to highlight the opened link. I'm using css for my website. HTML code: ``` <div id="LEFTmenu"> <ul> <li><a href="link_01.html">Link1</a></li> <li><a href="link_02.html">Link2</a></li> <li><a href="link_03.html">Link3</a></li> <li><a href="link_04.html">Link4</a></li> <li><a href="link_05.html">Link5</a></li> </ul> </div> ``` CSS code: ``` #LEFTmenu { line-height:30px; width: 200px; float: left; margin-top: 10px; background-color: #FFFFFF;} #LEFTmenu ul { padding: 0; margin: 0 0 20px 15px; list-style: none; list-style-type: none; font-size: 14px; } #LEFTmenu ul li a:link, a:visited { font-family: "Trebuchet MS", Arial, Helvetica, sans-serif; color: #333; } #LEFTmenu ul li a:hover { color: #CC3366; } #LEFTmenu ul li a:active { color: #33FFFF; } ``` By using a:active, the link will have this property only for a very short time of just one click on the link. But I'm expecting the link to be highlighted while its page is opened. Is there such feasibility in CSS?
The :active pseudo-class only applies to elements that are currently in the activated state. For example, in the case of a button, the button could be red; when you hover the mouse over it, it turns blue — that is the :hover pseudo-class. Now when you press the button (just hold the left click down, don't release it yet) the button turns green — that is the :active pseudo-class.

For what you want, where the link stays highlighted while its page is open and displayed, you can do it either with JavaScript or with plain CSS.

The simplest way, the plain CSS way, is to have a class called "highlighted" that sets some CSS properties like background and color:

```
.highlighted{
  background-color:#000;
  color:#fff;
}
```

Then just apply the "highlighted" class to the link you want. For example, if you are on the link2.html page, you want "Link2" in your ul list to be highlighted. So inside your link2.html page, in the ul element referencing the links, apply the class to Link2 like this:

```
.highlighted{
  color:#fff;
  background-color:#000;
}
```

```
<div id="LEFTmenu">
  <ul>
    <li><a href="link_01.html">Link1</a></li>
    <li class="highlighted"><a href="link_02.html">Link2</a></li>
    <li><a href="link_03.html">Link3</a></li>
    <li><a href="link_04.html">Link4</a></li>
    <li><a href="link_05.html">Link5</a></li>
  </ul>
</div>
```

This is the easiest CSS solution for what you want to achieve.

The JavaScript version of doing this is not difficult by any means, but it is a little more complicated than the pure CSS approach, because you dynamically manipulate the element's properties. You do have to watch out for what you are doing, because you might accidentally change some DOM property that you do not want to change, but altogether it is not difficult. For the JavaScript approach you can do it in native JavaScript or use jQuery or another library. jQuery makes the code simpler to write, but you have to link the jQuery source into your HTML file, which adds file size to your page. This part I will let you decide how you want to proceed — a plain JavaScript sketch is included below. Hopefully I have shed some light on what you are wanting to do. Good luck
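As promised, here is a minimal sketch of the plain JavaScript route. It assumes every page shares the same menu markup from your question and that the script runs after the menu exists in the DOM (e.g. placed just before the closing body tag); the file names are taken from your example.

```
<script>
// Compare each menu link with the file name of the page that is currently open
// and mark the matching list item as highlighted.
var links = document.querySelectorAll('#LEFTmenu a');
var current = window.location.pathname.split('/').pop(); // e.g. "link_02.html"

for (var i = 0; i < links.length; i++) {
  if (links[i].getAttribute('href') === current) {
    links[i].parentNode.className = 'highlighted'; // the <li> gets the class
  }
}
</script>
```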
Alamofire request from outside ViewController Is there any way to call request by alamofire from outside the ViewController (UITableViewController) and then populate tablewView with data from response? right now I'm created a func inside the controller and after fetching the data just call tablewView's reloadData(). But is there any way to create method like this: ``` class InsuranceServices { class func getContracts(table : UITableView) -> [Contract] { var contracts = [Contract]() let headers : HTTPHeaders = [ "Content-Type":"application/json" ] let params : Parameters = [String:Any]() Alamofire.request("http://test:9090/testService/getContracts", method: .post, parameters: params, encoding: JSONEncoding.default, headers: headers).validate().responseJSON(completionHandler: {response -> Void in switch response.result { case .success(let value): let json = JSON(value) for data in json["data"].arrayValue { let contract = Contract(json: data) print(contract.fullName!) contracts.append(contract) } case .failure(let error): print(error) } table.reloadData() }) return contracts } } ``` this method is not working for me. maybe there is another way? I think create service methods inside the controller is incorrect.
You should have a distinct separation between your `ViewController` and `Alamofire`. A way to achieve that is similar to what you wrote, but instead of taking the `UITableView` itself, your function should take a completion block that receives `[Contract]`. It can look something like this:

```
class func getContracts(completion: @escaping ([Contract]) -> Void) {
    //...
    Alamofire.request(...).responseJSON(completionHandler: {response -> Void in
            switch response.result {
            case .success(let value):
                let json = JSON(value)
                var contracts = [Contract]()
                for data in json["data"].arrayValue {
                    let contract = Contract(json: data)
                    contracts.append(contract)
                }
                completion(contracts)
            case .failure(let error):
                print(error)
                completion([])
            }
        })
}
```

Another thing is that you can't return the result of the request (in your case an array of `Contract`) as the return value of the function. Since `Alamofire` performs the request asynchronously, it would finish much later than the return of your function. That is why the function signature I wrote does not return anything. Instead, it calls the `completion` closure when the request is ready and the data is available. Note that you *must* call the `completion` closure even on failure! Otherwise the caller (in your case your view controller) will wait for the request forever, not knowing it failed....

You can then call the function like so, making sure you don't cause a memory leak with `self`:

```
InsuranceServices.getContracts(completion: { [weak self] data in
    self?.contracts = data
    self?.tableView.reloadData()
})
```
Copy installed packages using pip to another environment
I downloaded some packages in my environment using the pip command, and I want a copy of them to transfer to another environment. I know that using:

```
pip freeze > requirements.txt
```

will generate the requirements into a file, but since my second environment does not have access to the internet I cannot use:

```
pip install -r requirements.txt
```

to install those packages again. Is there any way to copy installed packages, or somehow install packages into a specified directory in my first environment? Thanks
You can use pip [download](https://pip.pypa.io/en/stable/reference/pip_download/) followed by pip install [--find-links](https://pip.pypa.io/en/stable/reference/pip_wheel/#find-links) to achieve what you want. Here are the steps involved:

1. Get the requirements:

```
pip freeze > requirements.txt
```

2. Download the packages to a folder:

```
pip download -r requirements.txt -d path_to_the_folder
```

3. From the new environment, install from that folder:

```
pip install -r requirements.txt --find-links=path_to_the_folder
```
MPI\_Isend request parameter When using `MPI_Isend`, can the `MPI_Request` parameter be a null pointer (when the sender doesn't care about the message after it is sent)?
Short answer is no - the request handle parameter cannot be `NULL`. `MPI_Isend()` initiates an asynchronous send operation. All asynchronous operations are given a request handle that has to be acted on later in one of the following ways: - block and wait for the operation to finish with `MPI_Wait()` and friends - test the operation for completion with `MPI_Test()` and friends until the test turns out positive - free the handle with `MPI_Request_free()` Both waiting and testing functions free the request once it has completed. You can also free it immediately after it is returned by `MPI_Isend()`. This will not cancel the operation but rather mark the request for deletion as soon as it is finished. You won't be able to get the status of the send operation though. If you don't care about the outcome of the asynchronous operation (e.g. completion status, message receive status, error code, etc.), the right thing to do is as follows: ``` MPI_Request req; ... MPI_Isend(..., &req); MPI_Request_free(&req); ... ``` Caveat: this works for asynchronous sends since one can devise another method to verify that the send operation has completed, e.g. the destination process might respond after receiving the message. But one should never free an asynchronous receive request and should wait or test for completion instead as there will be no way to know when the operation has been completed.
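For completeness, here is a small sketch of the opposite case — where you *do* care about the outcome — so you keep the request and wait on it later (`buf`, `count`, `dest` and `tag` are assumed to be defined; the send buffer must stay valid until the wait returns):

```c
MPI_Request req;
MPI_Status status;

MPI_Isend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD, &req);

/* ... overlap the communication with some computation ... */

MPI_Wait(&req, &status);   /* or MPI_STATUS_IGNORE if you don't need the status */
/* buf can safely be reused from this point on */
```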
Left-align bullet points with css I'm trying to perfectly left-align bullet points with html and css, but when font-sizes change, the left alignment also change. The `list-style-position` should be `outside`. I tried with `inside` and then compensating with `text-indent` but that looked even worse. This is the best I got so far, but it still doesn't look good and it would be more optimal for my code structure if I just applied styles on the `ul` elements. ``` ul { list-style-type: disc; list-style-position: outside; padding: 0.5em 0 0 3em; } ul.two li { font-size: 20px; } ul.three li { font-size: 35px; } ``` <http://jsfiddle.net/d9VNk/776/>
you may also use a pseudo element and use pixels instead em: ``` ul { padding: 0.5em 0 0 20px; margin: 0 2em; background:linear-gradient(to right, transparent 20px, lightgray 20px); } li { list-style-type: none; padding-left: 20px; } li:before { content: ''; float: left; display: list-item; list-style-type: disc; list-style-position: inside; width: 20px; margin-left: -20px; } ul.two li { font-size: 20px; } ul.three li { font-size: 35px; } ul.hudge li { font-size: 60px; } ``` ``` <ul> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> </ul> <ul class="two"> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> </ul> <ul class="three"> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> </ul><ul class="hudge"> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> <li>saoi yhsahkjdhasjdg jhgdas hdgash gdashg dzxhjb cnxz cznxb cnzxb cznxbc xznbc xznb cznxbc nbzx czbx ncbzx cnbzx ncbzx ncbzxc zx czx czc x cxz</li> </ul> ``` <http://jsfiddle.net/d9VNk/778/>
How do I use Fiddler to modify the status code in an HTTP response? I need to test some client application code I've written to test its' handling of various status codes returned in an HTTP response from a web server. I have Fiddler 2 (Web Debugging Proxy) installed and I believe there's a way to modify responses using this application, but I'm struggling to find out how. This would be the most convenient way, as it would allow me to leave both client and server code unmodified. Can anyone assist as I'd like to intercept the HTTP response being sent from server to client and modify the status code before it reaches the client? Any advice would be much appreciated.
Ok, so I assume that you're already able to monitor your client/server traffic. What you want to do is set a breakpoint on the response then fiddle with it before sending it on to the client. Here are a couple of different ways to do that: 1. Rules > Automatic Breakpoints > After Responses 2. In the quickexec box (the black box at the bottom) type "bpafter yourpage.svc". Now Fiddler will stop at a breakpoint before all requests to any URL that contains "yourpage.svc". Type "bpafter" with no parameters to clear the breakpoint. 3. Programmatically tamper with the response using FiddlerScript. The best documentation for FiddlerScript is on the official site: <http://www.fiddler2.com/Fiddler/dev/> Once you've got a response stopped at the breakpoint, just double click it to open it in the inspectors. You've got a couple of options now: 1. Right next to the green Run to Completion button (which you click to send the response) there's a dropdown that lets you choose some default response types. 2. Or, on the Headers inspector, change the response code & message in the textbox at the top. 3. Or, click the "Raw" inspector and mess with the raw response to do arbitrary things to it. Also a good way to see what your client does when it gets a malformed response, which you'll probably test accidentally :)
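If you go the FiddlerScript route (option 3), a rough sketch of such a rule — added inside the existing OnBeforeResponse function via Rules > Customize Rules — could look like the following; the URL filter and the status code are just placeholders to adapt:

```
static function OnBeforeResponse(oSession: Session) {
    if (oSession.uriContains("yourpage.svc")) {
        oSession.responseCode = 503; // the status code you want your client to see
        oSession.oResponse.headers.HTTPResponseStatus = "503 Service Unavailable";
    }
}
```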
One xib File with Multiple "File's Owner"s I've got three different UITableViews, each in it's own view, accessed via tabs. All three tables would ideally share the same custom UITableViewCell class and .xib file. I started with one table, setting the class of the .xib to my custom class and the File's Owner of the .xib to the table's parent UIViewController, which works great. All of the custom view-related code is in the cell's class (background images based on a property set by the controller, custom cell height based on the number of lines a label requires based on a cell property set by the controller, etc.). The result is nice: the cell is responsible for all of the visual layout and responding to user actions on the cell's controls, while the view controller is responsible for creating the cells and setting their data. Now that I need to reuse the cell in other tables, though, the fact that the custom cell's .xib has a single File's Owner is a problem. Rather than duplicating the .xib file, is there a simple way to allow multiple controllers to own it?
A nib's File's Owner is not strictly enforced. Instead it is only used to determine available outlets and actions, and to set bindings within Interface Builder. You can load a nib with any object as its File's Owner regardless of the class set in the nib file. When a nib is loaded it will send messages to the File's Owner to re-establish bindings. If the actual File's Owner object does not recognize those selectors you will have triggered an "unrecognized selector" exception. This means that if your nib binds some `UITableViewCell` to the 'cell' outlet of its File's Owner then any object with a 'cell' property could load that nib. You just need to be careful not to use this behavior to send an unrecognized selector or unexpected outlet class. In your case, consider creating a single `UIViewController` subclass to act as the File's Owner of your nib. Have each of your three existing controllers extend that view controller subclass. That way they can all inherit the same set of properties expected by the nib file and all safely load that nib while still defining their own custom behavior.
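A rough sketch of that last suggestion, with made-up class and outlet names, assuming the nib binds a single cell outlet:

```objc
// Shared base class whose outlets match the bindings in the nib.
@interface BaseAntTableViewController : UIViewController          // hypothetical name
@property (nonatomic, strong) IBOutlet CustomCell *prototypeCell; // hypothetical outlet
@end

// All three table view controllers inherit from it, so any of them can act
// as File's Owner when loading the nib, e.g. from within a subclass:
- (CustomCell *)loadPrototypeCell {
    [[NSBundle mainBundle] loadNibNamed:@"CustomCell" owner:self options:nil];
    return self.prototypeCell; // set by the nib-loading machinery
}
```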
Jenkins: how to trigger pipeline on git tag We want to use Jenkins to generate releases/deployments on specific project milestones. Is it possible to trigger a Jenkins Pipeline (defined in a Jenkinsfile or Groovy script) when a tag is pushed to a Git repository? We host a private Gitlab server, so Github solutions are not applicable to our case.
This is currently something that is sorely lacking in the pipeline / multibranch workflow. See a ticket around this here: <https://issues.jenkins-ci.org/browse/JENKINS-34395> If you're not opposed to using release branches instead of tags, you might find that to be easier. For example, if you decided that all branches that start with `release-` are to be treated as "release branches", you can go... ``` if( env.BRANCH_NAME.startsWith("release-") ) { // groovy code on release goes here } ``` And if you need to use the name that comes after `release-`, such as `release-10.1` turning into `10.1`, just create a variable like so... ``` if( env.BRANCH_NAME.startsWith("release-") ) { def releaseName = env.BRANCH_NAME.drop(8) } ``` Both of these will probably require some method whitelisting in order to be functional.
Convert DateTime.Now to Seconds I am trying to write a function that will convert a DateTime.Now instance to the number of seconds it represents so that I can compare that to another DateTime instance. Here is what I currently have: ``` public static int convertDateTimeToSeconds(DateTime dateTimeToConvert) { int secsInAMin = 60; int secsInAnHour = 60 * secsInAMin; int secsInADay = 24 * secsInAnHour; double secsInAYear = (int)365.25 * secsInADay; int totalSeconds = (int)(dateTimeToConvert.Year * secsInAYear) + (dateTimeToConvert.DayOfYear * secsInADay) + (dateTimeToConvert.Hour * secsInAnHour) + (dateTimeToConvert.Minute * secsInAMin) + dateTimeToConvert.Second; return totalSeconds; } ``` I realize that I am truncating the calculation for seconds in a year, but I don't need my calculation to be precise. I'm really looking to know if the method that I am using to calculate seconds is correct. Does anyone have anything that could better compute seconds given from a DateTime object? Also, Should the return type be int64 if I am coding in C# if I am going to calculate all the seconds since 0 AD?
The [DateTime](http://msdn.microsoft.com/en-us/library/system.datetime.aspx) type supports [comparison operators](http://msdn.microsoft.com/en-us/library/ff986512.aspx): ``` if (dateTimeA > dateTimeB) { ... ``` This also works for [DateTime](http://msdn.microsoft.com/en-us/library/system.datetime.aspx) values returned by [DateTime.AddSeconds](http://msdn.microsoft.com/en-us/library/system.datetime.addseconds.aspx): ``` if (dateTimeA.AddSeconds(42) > dateTimeB) { ... ``` --- If you really want the number of seconds that elapsed since *01/01/0001 00:00:00*, you can calculate the difference between the two DateTime values. The resulting [TimeSpan](http://msdn.microsoft.com/en-us/library/system.timespan.aspx) value has a [TotalSeconds](http://msdn.microsoft.com/en-us/library/system.timespan.totalseconds.aspx) property: ``` double result = DateTime.Now.Subtract(DateTime.MinValue).TotalSeconds; ```
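So for the original goal — checking how far apart two moments in time are — a sketch with no manual seconds arithmetic at all could look like this:

```csharp
DateTime start = DateTime.Now;

// ... do some work ...

TimeSpan elapsed = DateTime.Now - start;   // subtracting two DateTimes yields a TimeSpan
if (elapsed.TotalSeconds > 30)
{
    Console.WriteLine("More than 30 seconds have passed.");
}
```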
Username, password and other parameters in SQL Server connection string
I'm new to C# and .NET and have only worked with PHP and MySQL before. To connect to SQL Server Express (in Visual Studio 2010) with C# we should provide a connection string, which has lots of formats I found on the web, especially on [connectionstrings](http://connectionstrings.com). For example, in the standard format:

"Data Source=myServerAddress;Initial Catalog=myDataBase;UserId=myUsername;Password=myPassword;"

What are the username and password? Where can I find them? Are they "root" and "" like in PHP/MySQL, or something else? As mentioned, I have created a database named "db.sdf" in SQL Server Express in Visual Studio 2012. I'm really confused. Please help.
If your database extension is `SDF`, you are creating a `SQL Server CE` (*Compact Edition*) database file. Use this connection string

```
Data Source=" + (System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase) + "\\MyData.sdf;Persist Security Info=False;
```

OR

```
Data Source=MyData.sdf;Persist Security Info=False;
```

**[More on this link.](http://www.connectionstrings.com/sql-server-2005-ce)**

**UPDATE 1**

You need the `System.Data.SqlServerCe` namespace.

```
SqlCeConnection conn = new SqlCeConnection("Data Source=\\Mobile\\Northwind.sdf;");
conn.Open();
.
.
.
conn.Close();
```

**From [MSDN](http://msdn.microsoft.com/en-us/library/system.data.sqlserverce.sqlceconnection%28v=VS.100%29.aspx):**

**Namespace:** System.Data.SqlServerCe

**Assembly:** System.Data.SqlServerCe (in system.data.sqlserverce.dll)
Have PanedWindow separators "snap" to certain positions, instead of just allowing the user to choose any random position I have a tkinter PanedWindow containing some widgets and I want the user to be able to drag each separator to a certain position and when they let go of the mouse button, the separator should "snap" to the nearest position in a list of positions / proportions I give the program. For example, let's say I have a PanedWindow containing two buttons, and I want the user to be able to resize the buttons such that they can be in the proportions 1/4:3/4, 1/2:1/2 or 3/4:1/4 not just any width the user chooses. Is there a way I can do this? Code for PanedWindow with two buttons: ``` import tkinter as tk from tkinter import ttk root = tk.Tk() root.geometry("400x400") pw = tk.PanedWindow(root) buttons = [ttk.Button(text=message) for message in ["goose", "duck"]] for button in buttons: pw.add(button) pw.pack(expand=True, fill="both") root.mainloop() ``` Is there a way to specify the proportions the separators can snap to? Allowed positions: [![](https://i.stack.imgur.com/TmEl6.png)](https://i.stack.imgur.com/TmEl6.png) [![](https://i.stack.imgur.com/xSJqY.png)](https://i.stack.imgur.com/xSJqY.png) [![](https://i.stack.imgur.com/qKCA7.png)](https://i.stack.imgur.com/qKCA7.png) The buttons should snap to the nearest of these positions when resized.
You can do this by binding `<ButtonRelease-1>` to a function that calculates the closest point. You can then use this information and update the sash position using `PanedWindow.sash_place(index, x, y)`. Here is a minimal example: ``` import tkinter as tk from tkinter import ttk def snap(event): width = pw.winfo_width() closest = min([width//2, width//4, width*3//4], key=lambda w: abs(w - event.x)) pw.sash_place(0, closest, 1) root = tk.Tk() root.title("snappy") pw = tk.PanedWindow(root) buttons = [ttk.Button(text=message) for message in ["goose", "duck"]] for button in buttons: pw.add(button) pw.pack(expand=True, fill="both") pw.bind("<ButtonRelease-1>", snap) root.mainloop() ```
how to "bridge" writer and reader in go
I usually find my way with Reader and Writer in Go, but I have come across a situation that is new to me. I am using "golang.org/x/net/html" Render. It outputs to a Writer w. I want to use that output and create a new request from it. NewRequest uses a Reader r.

```
err := html.Render(w, msg)
...
req, err := http.NewRequest("Post", url, r)
io.Copy(w, r)
```

My question is: what is the best/idiomatic solution for bridging the two calls using w and r? I could not find an example for a similar situation on the web. I am thinking about creating both a Reader and a Writer and using io.Copy(w, r) on them. I am not sure, since this appears a little complicated for something that apparently is used often.
A simple approach is to use a [bytes.Buffer](http://godoc.org/bytes#Buffer): ``` var buf bytes.Buffer err := html.Render(&buf, msg) ... req, err := http.NewRequest("POST", url, &buf) ``` This buffers the entire request in memory. An alternate approach that does not buffer everything in memory is to use [io.Pipe](http://godoc.org/io#Pipe). This approach is more complicated because it introduces concurrency in the program. Also, the http client starts to write the request to the wire before possible errors are detected in Render. ``` r, w := io.Pipe() go func() { w.CloseWithError(html.Render(w, msg)) }() req, err := http.NewRequest("POST", url, r) ```
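Either way, actually sending the request afterwards looks the same; here is a small continuation of the sketch (the header value is an assumption, and the `net/http` / `log` imports are omitted):

```go
req.Header.Set("Content-Type", "text/html; charset=utf-8")

resp, err := http.DefaultClient.Do(req)
if err != nil {
    log.Fatal(err) // with the pipe variant, a Render error also surfaces here
}
defer resp.Body.Close()
```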
Why doesn't "gcc -Wall" warn for "if (ptr < 0)"? *(A long story... you can directly jump to the question at the end...)* I need to use `realpath(3)` so I wrote a simple example to try it: ``` $> cat realpath.c #include <stdio.h> #include <limits.h> #include <stdlib.h> int main(int argc, char * argv[]) { char pathname[PATH_MAX]; if (realpath(argv[1], pathname) < 0) { perror("realpath"); return 1; } printf("%s\n", pathname); return 0; } $> gcc -Wall -o realpath realpath.c $> ls /xxx ls: cannot access '/xxx': No such file or directory $> ./realpath /xxx/foo/bar /xxx ``` The result of `./realpath /xxx/foo/bar` surprised me. According to the [manual](http://man7.org/linux/man-pages/man3/realpath.3.html) it makes more sense to fail with `ENOENT`. I even referred to the POSIX and found no answer. After quite some time I reread the manual and found `realpath(3)` returns `char *` rather than `int`. I was really irritated by `gcc`. ## Question So **why** doesn't gcc (even with `-Wall`) warn about `if (ptr < 0)`?
**`gcc -Wall` does not enable all of GCC's warnings!** See [**this question**](https://stackoverflow.com/questions/11714827/how-to-turn-on-literally-all-of-gccs-warnings) for more information. In your case, you need to add the **`-Wextra`** flag: ``` gcc -Wall -Wextra -o realpath realpath.c ``` According to [**GCC's documentation**](https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html): > > This enables some extra warning flags that are not enabled by `-Wall`. > > > The option `-Wextra` also prints warning messages for the following cases: > > > - **A pointer is compared against integer zero with <, <=, >, or >=.** > - [...] > > >
How to setRemoveOnCancelPolicy for Executors.newScheduledThreadPool(5) I have this: ``` ScheduledExecutorService scheduledThreadPool = Executors .newScheduledThreadPool(5); ``` Then I start a task like so: ``` scheduledThreadPool.scheduleAtFixedRate(runnable, 0, seconds, TimeUnit.SECONDS); ``` I preserve the reference to the Future this way: ``` ScheduledFuture<?> scheduledFuture = scheduledThreadPool.scheduleAtFixedRate(runnable, 0, seconds, TimeUnit.SECONDS); ``` I want to be able to cancel **and remove** the future ``` scheduledFuture.cancel(true); ``` However this SO answer notes that canceling doesn't remove it and adding new tasks will end in many tasks that can't be GCed. <https://stackoverflow.com/a/14423578/2576903> They mention something about `setRemoveOnCancelPolicy`, however this `scheduledThreadPool` doesn't have such method. What do I do?
This [method](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html#setRemoveOnCancelPolicy(boolean)) is declared in [ScheduledThreadPoolExecutor](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html). ``` /** * Sets the policy on whether cancelled tasks should be immediately * removed from the work queue at time of cancellation. This value is * by default {@code false}. * * @param value if {@code true}, remove on cancellation, else don't * @see #getRemoveOnCancelPolicy * @since 1.7 */ public void setRemoveOnCancelPolicy(boolean value) { removeOnCancel = value; } ``` This executor is returned by Executors class by [newScheduledThreadPool](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executors.html#newScheduledThreadPool(int)) and similar methods. ``` public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) { return new ScheduledThreadPoolExecutor(corePoolSize); } ``` So in short, you can cast the executor service reference to call the method ``` ScheduledThreadPoolExecutor ex = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(5); ex.setRemoveOnCancelPolicy(true); ``` or create `new ScheduledThreadPoolExecutor` by yourself. ``` ScheduledThreadPoolExecutor ex = new ScheduledThreadPoolExecutor(5); ex.setRemoveOnCancelPolicy(true); ```
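Putting it together with your original snippet, a short sketch could look like this:

```java
ScheduledThreadPoolExecutor ex = new ScheduledThreadPoolExecutor(5);
ex.setRemoveOnCancelPolicy(true);

ScheduledFuture<?> scheduledFuture =
        ex.scheduleAtFixedRate(runnable, 0, seconds, TimeUnit.SECONDS);

// later...
scheduledFuture.cancel(true); // now also removed from the work queue immediately
```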
How to nullify children's foreign key when the parent is deleted using SQLAlchemy?
I have a basic Flask application with `Parent` and `Child` models like this:

```
class Parent(db.Model):
    __tablename__ = 'parents'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)


class Child(db.Model):
    __tablename__ = 'children'
    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'), nullable=False)
    parent = db.relationship('Parent', backref=db.backref('children', cascade='all,delete'))
    name = db.Column(db.String)
```

As the database I am using Postgres, if that is important. Now I want to do the following: remove `cascade='all,delete'` from the child and make `parent_id` nullable. I.e. when a `Parent` is removed from the DB, the `Child` stays in place with `parent_id == NULL`. I know that I could specify it in the schema creation script by adding a constraint to the FK. But I just want to mark it as nullable and let SQLAlchemy take care of nullifying the children's FK.
It's explained in the [relevant section](http://docs.sqlalchemy.org/en/rel_1_0/orm/cascades.html?highlight=cascade#delete) of the documentation in great detail. Make sure you also read the "ORM-level “delete” cascade vs. FOREIGN KEY level “ON DELETE” cascade" section to understand the differences between the proposed solutions.

### ORM level

> 
> Now I want to do following: remove `cascade='all,delete'` from child and make this `parent_id` nullable. 
> 
> 

Do it and you will get exactly the behaviour you want.

```
class Child(db.Model):
    __tablename__ = 'children'
    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'), nullable=True)
    parent = db.relationship('Parent', backref=db.backref('children'))
    name = db.Column(db.String)
```

Also note that `all` is the synonym for `save-update, merge, refresh-expire, expunge, delete`, so `all, delete` is the same as simply `all`.

### DB level

If you want to have an `ON DELETE SET NULL` constraint on the database level, you can specify [`ondelete='SET NULL'`](http://docs.sqlalchemy.org/en/rel_1_0/core/constraints.html?highlight=foreignkey#sqlalchemy.schema.ForeignKey.params.ondelete) in the `ForeignKey` definition or do nothing (since it's the default behaviour for the foreign key). To get it working on the DB level you also need to set [`passive_deletes`](http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_api.html#sqlalchemy.orm.relationship.params.passive_deletes) to either `True` or `'all'` (see docs for the difference).

```
class Child(db.Model):
    __tablename__ = 'children'
    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id', ondelete='SET NULL'), nullable=True)
    parent = db.relationship('Parent', backref=db.backref('children', passive_deletes=True))
    name = db.Column(db.String)
```
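A quick sketch of the resulting ORM-level behaviour (the first variant above), assuming the Flask-SQLAlchemy `db.session` from your models:

```python
parent = Parent(name='p1')
child = Child(name='c1', parent=parent)
db.session.add(parent)
db.session.commit()

db.session.delete(parent)   # without the delete cascade, the ORM de-associates the children
db.session.commit()

print(child.parent_id)      # None -- the child row is still there, its FK is nulled
```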
MFMailComposeViewController in Swift This is sample code: ``` import UIKit import MessageUI class ViewController: UIViewController, MFMailComposeViewControllerDelegate { @IBAction func showEmail(sender : AnyObject) { var emailTitle = "Test Email" var messageBody = "This is a test email body" var toRecipents = ["[email protected]"] var mc: MFMailComposeViewController = MFMailComposeViewController() mc.mailComposeDelegate = self mc.setSubject(emailTitle) mc.setMessageBody(messageBody, isHTML: false) mc.setToRecipients(toRecipents) self.presentViewController(mc, animated: true, completion: nil) } override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. } func mailComposeController(controller:MFMailComposeViewController, didFinishWithResult result:MFMailComposeResult, error:NSError) { switch result { case MFMailComposeResultCancelled: NSLog("Mail cancelled") case MFMailComposeResultSaved: NSLog("Mail saved") case MFMailComposeResultSent: NSLog("Mail sent") case MFMailComposeResultFailed: NSLog("Mail sent failure: %@", [error.localizedDescription]) default: break } self.dismissViewControllerAnimated(false, completion: nil) } } ``` In function mailComposeController I get an error on every ***case*** expression: **Could not find an overload '~=' that accepts the supplied arguments.** What am I doing wrong?
I compared `MFMailComposeResult` documentation on both Xcode 5 and Xcode 6. In Swift, `MFMailComposeResult` is a struct ``` struct MFMailComposeResult { init(_ value: CUnsignedInt) // available in iPhone 3.0 var value: CUnsignedInt } ``` with `MFMailComposeResultCancelled` as a constant of type `MFMailComposeResult`: ``` var MFMailComposeResultCancelled: MFMailComposeResult { get } ``` while it's an enum in Objective-C: ``` enum MFMailComposeResult { MFMailComposeResultCancelled, MFMailComposeResultSaved, MFMailComposeResultSent, MFMailComposeResultFailed }; typedef enum MFMailComposeResult MFMailComposeResult; // available in iPhone 3.0 ``` In order to make your code work, you will have to compare their values which are `CUnsignedInt`. So you will have to type the following code: ``` func mailComposeController(controller:MFMailComposeViewController, didFinishWithResult result:MFMailComposeResult, error:NSError) { switch result.value { case MFMailComposeResultCancelled.value: println("Mail cancelled") case MFMailComposeResultSaved.value: println("Mail saved") case MFMailComposeResultSent.value: println("Mail sent") case MFMailComposeResultFailed.value: println("Mail sent failure: \(error.localizedDescription)") default: break } self.dismissViewControllerAnimated(false, completion: nil) } ```
Doctrine: Object of class User could not be converted to string I keep getting this error with Doctrine: ``` PHP Catchable fatal error: Object of class User could not be converted to string in vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php on line 1337 ``` In my system users can have many permissions in a One to Many relationship. I have set up a `User` and `Permission` entity. They look like this (I removed some annotations, getters and setters to reduce clutter): ``` class User { /** * @ORM\Column(name="user_id", type="integer", nullable=false) * @ORM\Id * @ORM\GeneratedValue(strategy="IDENTITY") */ protected $id; public function getId() { return $this->id; } /** * @ORM\OneToMany(targetEntity="Permission", mappedBy="user", cascade={"persist"}) */ protected $permissions; public function getPermissions() { return $this->permissions; } } class Permission { /** * @ORM\Column(name="user_id", type="integer") * @ORM\ManyToOne(targetEntity="User", inversedBy="permissions") */ protected $user; public function getUser() { return $this->user; } public function setUser( $user ) { $this->user = $user; return $this; } } ``` The problem occurs when I add a new `Permission` to a `User`: ``` $permission = new Permission(); $user->getPermissions()->add( $permission ); $em->persist( $user ); $em->flush(); ``` This is the last bit of my stack trace: ``` PHP 11. Doctrine\ORM\UnitOfWork->persist() vendor/doctrine/orm/lib/Doctrine/ORM/EntityManager.php:565 PHP 12. Doctrine\ORM\UnitOfWork->doPersist() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:1555 PHP 13. Doctrine\ORM\UnitOfWork->cascadePersist() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:1615 PHP 14. Doctrine\ORM\UnitOfWork->doPersist() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:2169 PHP 15. Doctrine\ORM\UnitOfWork->persistNew() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:1597 PHP 16. Doctrine\ORM\UnitOfWork->scheduleForInsert() doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:836 PHP 17. Doctrine\ORM\UnitOfWork->addToIdentityMap() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:1157 PHP 18. implode() vendor/doctrine/orm/lib/Doctrine/ORM/UnitOfWork.php:1337 ``` Any insight would be greatly appreciated.
Your solution gave me a clue of what is happening. Even though you have the entities and the annotations, Doctrine is not able to understand the relation between the entities. When Doctrine understands the relation between entities, it knows what methods to call (i.e. User::getId()), but otherwise it tries to transform whatever you are sending into a scalar value that it can use to query the database. That's why it is calling the \_\_toString function of the User, and that's why if you return the id in toString, everything works from there.

This is ok, but it's a patch, and you probably don't want to keep it if we can find a better solution, since it could be harder to maintain as your application grows.

What I can see is that in Permissions you have:

```
/**
 * @ORM\Column(name="user_id", type="integer")
 * @ORM\ManyToOne(targetEntity="User", inversedBy="permissions")
 */
protected $user;
```

You should remove the `@ORM\Column(type="integer")`.

About the join columns, they are not mandatory, but you have to be sure that the defaults are what you want. As we can read [here](http://docs.doctrine-project.org/en/2.0.x/reference/association-mapping.html#mapping-defaults)

> 
> Before we introduce all the association mappings in detail, you should
> note that the @JoinColumn and @JoinTable definitions are usually
> optional and have sensible default values. The defaults for a join
> column in a one-to-one/many-to-one association is as follows:
> 
> 
> ```
> name: "<fieldname>_id"
> referencedColumnName: "id"
> ```
> 

so they will be the same as an explicit:

```
/**
 * @ORM\ManyToOne(targetEntity="User", inversedBy="permissions", cascade={"persist"})
 * @ORM\JoinColumns({
 *   @ORM\JoinColumn(name="user_id", referencedColumnName="id")
 * })
 */
protected $user;
```

So it is supposed to look for a column `user_id` in the `Permissions` table and join it with the `id` column of the `User` table. We suppose that this is ok. If this is true, then in your User, the id shouldn't be `user_id`, but `id`:

```
/**
 * @ORM\Column(name="id", type="integer", nullable=false)
 * @ORM\Id
 * @ORM\GeneratedValue(strategy="IDENTITY")
 */
protected $id;
```

Or, if the column name is actually `user_id`, then the User class is ok, but you have to change the join column to `@ORM\JoinColumn(name="user_id", referencedColumnName="user_id")`.

That much I can say. I cannot try it now, but I will be glad if you give it a try.
How to implement a smooth clamp function in python? The clamp function is `clamp(x, min, max) = min if x < min, max if x > max, else x` I need a function that behaves like the clamp function, but is smooth (i.e. has a continuous derivative).
Normal clamp: ``` np.clip(x, mi, mx) ``` Smoothclamp (guaranteed to agree with normal clamp for x < min and x > max): ``` def smoothclamp(x, mi, mx): return mi + (mx-mi)*(lambda t: np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) ) ``` Sigmoid (Approximates clamp, never smaller than min, never larger than max) ``` def sigmoid(x,mi, mx): return mi + (mx-mi)*(lambda t: (1+200**(-t+0.5))**(-1) )( (x-mi)/(mx-mi) ) ``` For some purposes Sigmoid will be better than Smoothclamp because Sigmoid is an invertible function - no information is lost. For other purposes, you may need to be certain that *f(x) = xmax* for all *x > xmax* - in that case Smoothclamp is better. Also, as mentioned in another answer, there is a whole family of Smoothclamp functions, though the one given here is adequate for my purposes (no special properties other than a smooth derivative needed) Plot them: ``` import numpy as np import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1) x = np.linspace(-4,7,1000) ax.plot(x, np.clip(x, -1, 4),'k-', lw=2, alpha=0.8, label='clamp') ax.plot(x, smoothclamp(x, -1, 4),'g-', lw=3, alpha=0.5, label='smoothclamp') ax.plot(x, sigmoid(x, -1, 4),'b-', lw=3, alpha=0.5, label='sigmoid') plt.legend(loc='upper left') plt.show() ``` ![graph](https://i.stack.imgur.com/6IpuV.png) Also of potential use is the arithmetic mean of these two: ``` def clampoid(x, mi, mx): return mi + (mx-mi)*(lambda t: 0.5*(1+200**(-t+0.5))**(-1) + 0.5*np.where(t < 0 , 0, np.where( t <= 1 , 3*t**2-2*t**3, 1 ) ) )( (x-mi)/(mx-mi) ) ```
How to group by time intervals with Google BigQuery I have TIMESTAMP weather data recorded every 5 minutes that I want to group in 15 minute intervals. I found the floor function below that looked promising, but BQ does not support the UNIX\_TIMESTAMP function ``` SELECT FLOOR(UNIX_TIMESTAMP(utc_timestamp)/(15 * 60)) AS timekey GROUP BY timekey ``` What is the best way to do this?
Below is for BigQuery Standard SQL ``` #standardSQL SELECT TIMESTAMP_SECONDS(15*60 * DIV(UNIX_SECONDS(utc_timestamp), 15*60)) timekey, AVG(metric) metric FROM `project.dataset.table` GROUP BY timekey ``` You can test, play with above using dummy data as in below example ``` #standardSQL WITH `project.dataset.table` AS ( SELECT TIMESTAMP '2019-03-15 00:00:00' utc_timestamp, 1 metric UNION ALL SELECT '2019-03-15 00:05:00', 2 UNION ALL SELECT '2019-03-15 00:10:00', 3 UNION ALL SELECT '2019-03-15 00:15:00', 4 UNION ALL SELECT '2019-03-15 00:20:00', 5 UNION ALL SELECT '2019-03-15 00:25:00', 6 UNION ALL SELECT '2019-03-15 00:30:00', 7 UNION ALL SELECT '2019-03-15 00:35:00', 8 UNION ALL SELECT '2019-03-15 00:40:00', 9 ) SELECT TIMESTAMP_SECONDS(15*60 * DIV(UNIX_SECONDS(utc_timestamp), 15*60)) timekey, AVG(metric) metric FROM `project.dataset.table` GROUP BY timekey -- ORDER BY timekey ``` with result ``` Row timekey metric 1 2019-03-15 00:00:00 UTC 2.0 2 2019-03-15 00:15:00 UTC 5.0 3 2019-03-15 00:30:00 UTC 8.0 ``` Obviously, you can use whatever aggregation your logic requires - I used AVG() just for the sake of example
Progress Bar while downloading a file over HTTP with Requests
I need to download a sizable (~200MB) file. I figured out how to download and save the file from [here](https://stackoverflow.com/questions/16694907/how-to-download-large-file-in-python-with-requests-py). It would be nice to have a progress bar to know how much has been downloaded. I found [ProgressBar](http://progressbar-2.readthedocs.io/en/latest/usage.html) but I'm not sure how to incorporate the two together. Here's the code I tried, but it didn't work:

```
bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength)
with closing(download_file()) as r:
    for i in range(20):
        bar.update(i)
```
I suggest you try [`tqdm`](https://github.com/tqdm/tqdm), it's very easy to use. Example code for downloading with [`requests`](http://docs.python-requests.org/en/master/) library: ``` from tqdm import tqdm import requests url = "http://www.ovh.net/files/10Mb.dat" #big file test # Streaming, so we can iterate over the response. response = requests.get(url, stream=True) total_size_in_bytes= int(response.headers.get('content-length', 0)) block_size = 1024 #1 Kibibyte progress_bar = tqdm(total=total_size_in_bytes, unit='iB', unit_scale=True) with open('test.dat', 'wb') as file: for data in response.iter_content(block_size): progress_bar.update(len(data)) file.write(data) progress_bar.close() if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes: print("ERROR, something went wrong") ```
Angular js Parsing Json object A json object has a key lastLogin. Value of it is a string. I am trying to print firstName **John** and **Blake** ``` $scope._users = [{ "User": { "userid": "dummy", "lastlogin": "{\"employees\":[{\"firstName\":\"John\"}, {\"firstName\":\"Blake\"}]}", } }]; ``` **[FIDDLE](https://jsfiddle.net/xu0vqmff/)** Any help would be appreciated.
Try like this

**View**

```
<div ng-controller="MyCtrl">
    <div ng-repeat="user in _users" ng-init="myInfo=parJson(user.User.lastlogin)">
        <div ng-repeat="emp in myInfo.employees">{{emp.firstName}}</div>
    </div>
</div>
```

**Controller**

```
var myApp = angular.module('myApp', []);

function MyCtrl($scope) {
    $scope.getName = function (user) {
        return "Names";
    };
    $scope._users = [{
        "User": {
            "userid": "dummy",
            "lastlogin": "{\"employees\":[{\"firstName\":\"John\"}, {\"firstName\":\"Blake\"}]}",
        }
    }];
    $scope.parJson = function (json) {
        return JSON.parse(json);
    }
    //console.log(JSON.parse($scope._users[0].User.lastlogin));
}
```

[**DEMO**](https://jsfiddle.net/vhxg75fp/93/)

you can also use [`angular.fromJson`](https://docs.angularjs.org/api/ng/function/angular.fromJson). Like this

```
$scope.parJson = function (json) {
    return angular.fromJson(json);
}
```

**[DEMO](https://jsfiddle.net/vhxg75fp/92/)**
Get each selected value using chosen jquery plugin I'm using Chosen Multiple Select What I want to do is that if user selects any option I want that value and then I will do some functionality depending on that value. In chosen without multiple select i can get selected value by using foll. code ``` $(".selectId").chosen().change(function(){ selectedValue = $(this).find("option:selected").val(); }); ``` But in multiple select I get the first selected value again n again can anybody help me by finding the current selected value in multiple select element??
The [Chosen documentation](http://harvesthq.github.io/chosen/options.html) mentions parameters for getting the most recently changed variables. > > Chosen triggers the standard DOM event whenever a selection is made > (it also sends a selected or deselected parameter that tells you > which option was changed). > > > So, if you just want the most recent option selected or deselected: ``` $(".selectId").chosen().change(function(e, params){ // params.selected and params.deselected will now contain the values of the // or deselected elements. }); ``` Otherwise if you want the whole array to work with you can use: ``` $(".selectId").chosen().change(function(e, params){ values = $(".selectId").chosen().val(); //values is an array containing all the results. }); ```
How can I test iPhone app performance and network usage without building/running from xcode I want to check the memory and network usage of an iPhone app that I don't have the source code for. How would I do this?
Finally I was able to check the performance and real memory usage of the app using **Instruments.app**, found under the Xcode installation directory (.../Applications/Instruments.app).

Connect your device to your Mac, then:

1. Launch Instruments. The application automatically creates a new trace document and prompts you to select a template.
2. Select the Activity Monitor template under the iOS section and click the Choose button. Instruments adds the Activity Monitor instrument to the trace document.
3. In the Default Target menu of the trace document, choose Target and select the app you want to target.
4. Click the Record button. The targeted app will start running on your device.
5. Wait a few seconds so that Instruments can gather some data. Now perform your operations in the app.
6. Click the Stop button after your actions are completed.

Thanks.
n-th order Bezier Curves? I've managed to implement quadratic and cubic Bezier curves.They are pretty straightforward since we have a formula. Now I want to represent an n-th order Bezier curve using the generalization: ![enter image description here](https://i.stack.imgur.com/hFPfY.png) Where ![enter image description here](https://i.stack.imgur.com/98JCb.png) and ![enter image description here](https://i.stack.imgur.com/vwQKE.png) I'm using a bitmap library to render the output, so here is my code: ``` // binomialCoef(n, k) = (factorial(n) / (factorial(k) * factorial(n- k))) unsigned int binomialCoef(unsigned int n, const unsigned int k) { unsigned int r = 1; if(k > n) return 0; for(unsigned int d = 1; d <= k; d++) { r *= n--; r /= d; } return r; } void nBezierCurve(Bitmap* obj, const Point* p, const unsigned int nbPoint, float steps, const unsigned char red, const unsigned char green, const unsigned char blue) { int bx1 = p[0].x; int by1 = p[0].y; int bx2; int by2; steps = 1 / steps; for(float i = 0; i < 1; i += steps) { bx2 = by2 = 0; for(int j = 0; (unsigned int)j < nbPoint; j++) { bx2 += (int)(binomialCoef(nbPoint, j) * pow(1 - i, (float)nbPoint - j) * pow(i, j) * p[j].x); by2 += (int)(binomialCoef(nbPoint, j) * pow(1 - i, (float)nbPoint - j) * pow(i, j) * p[j].y); } bresenhamLine(obj, bx1, by1, bx2, by2, red, green, blue); bx1 = bx2; by1 = by2; } // curve must end on the last anchor point bresenhamLine(obj, bx1, by1, p[nbPoint - 1].x, p[nbPoint - 1].y, red, green, blue); } ``` Here's the set of points to render: ``` Point ncurv[] = { 20, 200, 70, 300, 200, 400, 250, 200 }; ``` and here's the output: ![enter image description here](https://i.stack.imgur.com/fQfv8.png) The red curve is a cubic Bezier. The blue one is supposed to be the 4th order Bezier, which is the same as cubic Bezier, but in this case, they are not the same ?! **EDIT :** I forgot to note that the bottom left point is (0, 0)
The sum in your formula... ![enter image description here](https://i.stack.imgur.com/iZBo2.png) ...runs from 0 to n, ie for an n-th order bezier you need n+1 points. You have 4 points, so you're drawing a 3rd-order bezier. The error in your code is here: ``` for(int j = 0; (unsigned int)j < nbPoint; j++) ``` it should be: ``` for(int j = 0; (unsigned int)j <= nbPoint; j++) ``` otherwise you're only iterating from 0 to n-1. ![3rd-order bezier](https://i.stack.imgur.com/hZ6dn.gif) EDIT: Out of interest, the shape you were getting is the same as if the missing (5th) point was at (0,0), since that's the only point that would contribute nothing to your sum... ![4th-order bezier with 5th point at origin](https://i.stack.imgur.com/F4RNk.gif)
How to use `Series.interpolate` in pandas with the old values modified
The `interpolate` method in `pandas` uses the valid data to interpolate the `nan` values. However, it keeps the old valid data unchanged, as in the following code. Is there any way to use the `interpolate` method with the old values changed, so that the series becomes smooth?

```
In [1]: %matplotlib inline

In [2]: from scipy.interpolate import UnivariateSpline as spl

In [3]: import numpy as np

In [4]: import pandas as pd

In [5]: samples = { 0.0: 0.0, 0.4: 0.5, 0.5: 0.9, 0.6: 0.7, 0.8:0.3, 1.0: 1.0 }

In [6]: x, y = zip(*sorted(samples.items()))

In [7]: df1 = pd.DataFrame(index=np.linspace(0, 1, 31), columns=['raw', 'itp'], dtype=float)

In [8]: df1.loc[x] = np.array(y)[:, None]

In [9]: df1['itp'].interpolate('spline', order=3, inplace=True)

In [10]: df1.plot(style={'itp': 'b-', 'raw': 'rs'}, figsize=(8, 6))
```

[![enter image description here](https://i.stack.imgur.com/2UKQ5.png)](https://i.stack.imgur.com/2UKQ5.png)

```
In [11]: df2 = pd.DataFrame(index=np.linspace(0, 1, 31), columns=['raw', 'itp'], dtype=float)

In [12]: df2.loc[x, 'raw'] = y

In [13]: f = spl(x, y, k=3)

In [14]: df2['itp'] = f(df2.index)

In [15]: df2.plot(style={'itp': 'b-', 'raw': 'rs'}, figsize=(8, 6))
```

[![enter image description here](https://i.stack.imgur.com/IH9uM.png)](https://i.stack.imgur.com/IH9uM.png)
When you use `Series.interpolate` with `method='spline'`, under the hood [Pandas uses `interpolate.UnivariateSpline`](https://github.com/pydata/pandas/blob/master/pandas/core/common.py#L1721). The spline returned by [`UnivariateSpline`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html) is not guaranteed to pass through the data points given as input [unless `s=0`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline). However, by default `s=None`, which uses a different smoothing factor and thus leads to a different result. The `Series.interpolate` method **always** [fills in NaN values](https://github.com/pydata/pandas/blob/master/pandas/core/common.py#L1671) without changing the non-NaN values. There is no way to make `Series.interpolate` modify the non-NaN values. So, when `s != 0`, the result produces jagged jumps. So if you want the `s=None` (default) spline interpolation but without the jagged jumps, as you've already found, you have to call `UnivariateSpline` directly and overwrite all the values in `df['itp']`: ``` df['itp'] = interpolate.UnivariateSpline(x, y, k=3)(df.index) ``` If you want a cubic spline that passes through all the non-NaN data points, then use `s=0` ``` df['itp'].interpolate('spline', order=3, s=0, inplace=True) ``` --- ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.interpolate as interpolate samples = { 0.0: 0.0, 0.4: 0.5, 0.5: 0.9, 0.6: 0.7, 0.8:0.3, 1.0: 1.0 } x, y = zip(*sorted(samples.items())) fig, ax = plt.subplots(nrows=3, sharex=True) df1 = pd.DataFrame(index=np.linspace(0, 1, 31), columns=['raw', 'itp'], dtype=float) df1.loc[x] = np.array(y)[:, None] df2 = df1.copy() df3 = df1.copy() df1['itp'].interpolate('spline', order=3, inplace=True) df2['itp'] = interpolate.UnivariateSpline(x, y, k=3)(df2.index) df3['itp'].interpolate('spline', order=3, s=0, inplace=True) for i, df in enumerate((df1, df2, df3)): df.plot(style={'itp': 'b-', 'raw': 'rs'}, figsize=(8, 6), ax=ax[i]) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/8MP2o.png)](https://i.stack.imgur.com/8MP2o.png)
Objective C does self point to parent after doing self = [super init]? When I have code like the following: ``` self = [super init] ``` does self point to super? If so, why would you want this? if my instance object has the variable "someVal", I won't be able to get to it by doing [self someVal]. correct? How then would I get to the instance variable's using self when self points to super?
> > does self point to super? > > > It's really the other way around. `super` is really the same as `self`, except that it tells the compiler to start looking for method implementations starting with the superclass rather than the class itself. You can check this by logging the value of `super` and the value of `self`; you'll find that they both point to the same address. When you create an object, you do this: ``` Foo *f = [[Foo alloc] init]; ``` The `alloc` allocates the memory that will become the object you're creating, but until that memory is initialized it's just a chunk of memory -- not a valid object. If `Foo` is a subclass of `Bar` and `Bar` is a subclass of `NSObject`, then by convention Foo's initializer will call Bar's, and Bar's will call NSObject's, so that the initialization proceeds in order: first the memory is initialized by NSObjects' `-init`, and Bar's init receives the returned value and assigns it to `self`. It then proceeds to do any Bar-specific initialization, and returns `self`. Foo's `-init` then assigns the returned value to `self` again and finally does any Foo-specific initialization. All that assigning to `self` might seem both redundant and confusing. It's really just a convention, but the purpose is to allow the superclass's initializer to return some object other than the one that was allocated, including `nil`. So, for example, if the initialization of `Bar` failed for some reason, `-[Bar init]` could return nil. The possibility that `nil` might be returned from `[super init]` is the reason we put the `self = [super init]` assignment inside a conditional: if the assigned value is `nil`, the initialization part is skipped and `nil` is returned. It's also possible that `-[Bar init]` could return a pointer to an object other than the one that was allocated, such as when an object similar to the one being created already exists and can be reused. Most of the time, the pointer you get back from `-init` will be the same one that you got from `+alloc`, so you *could* write this: ``` Foo *f = [Foo alloc]; [f init]; ``` If you write that, however, you're making an assumption that the initializers of your class and all the classes that it inherits from will *always* return the same object, and will never return `nil`. By doing that you're breaking the convention and severely hamstringing yourself and whoever wrote the classes from which Foo inherits -- they'll break your code if they return a different object in a future release of the class. Also, it'll look like you don't know what you're doing.
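In code, the convention described above is just the familiar initializer pattern — a short sketch for the Foo example (the ivar name is only illustrative):

```objc
// Foo : Bar : NSObject
- (instancetype)init {
    self = [super init];      // runs Bar's -init, which in turn runs NSObject's
    if (self) {
        // Foo-specific initialization; self may be a different object than the
        // one returned by alloc, or nil if a superclass initializer failed
        _someVal = 42;
    }
    return self;
}
```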
What is the term for when someone sends requests repeatedly to a server in order for it to crash? When someone sends requests repeatedly to a server in order to make it crash, what's the term for this? I'm looking for software to test this kind of problem, but I can't remember the term.
You're probably looking for either of - [DOS, or denial of service, attack](https://secure.wikimedia.org/wikipedia/en/wiki/Denial_of_service) (brings down a server by overloading it) - An [IDS, or intrusion detection system](https://secure.wikimedia.org/wikipedia/en/wiki/Intrusion_Detection_System) (detects attempts to break your server) - Programs such as [Nessus](https://secure.wikimedia.org/wikipedia/en/wiki/Nessus_%28software%29) (tries to find vulnerabilities on to your server) - Packet sniffers such as [Wireshark](https://secure.wikimedia.org/wikipedia/en/wiki/Wireshark) (listens to your network for traffic)
How can I create a text box for a note in markdown? I am writing a document in markdown. I am using the wonderful pandoc to create docx and tex files from the markdown source. I would like to have a textbox for tips and notes to readers the way programming books often do. I cannot figure out how to do this in markdown. Can you help?
What I usually do for putting an alert box (e.g. Note or Warning) in markdown texts (not only when using pandoc but also everywhere that markdown is supported) is surrounding the content with two horizontal lines:

```
---
**NOTE**

It works with almost all markdown flavours (the below blank line matters).

---
```

which would be something like this:

---

**NOTE**

It works with all markdown flavours (the below blank line matters).

---

The good thing is that you don't need to worry about which markdown flavour is supported or which extension is installed or enabled.

**EDIT**: As @filups21 has mentioned in the comments, it seems that a horizontal line is represented by `***` in RMarkdown. So, the solution mentioned before does not work with all markdown flavours as it was originally claimed.
most common 2-grams using python Given a string:

```
this is a test this is
```

How can I find the top-n most common 2-grams? In the string above, all 2-grams are:

```
{this is, is a, a test, test this, this is}
```

As you can notice, the 2-gram `this is` appears 2 times. Hence the result should be:

```
{this is: 2}
```

I know I can use the `Counter.most_common()` method to find the most common elements, but how can I create a list of 2-grams from the string to begin with?
You can use the method provided in this [blog post](http://locallyoptimal.com/blog/2013/01/20/elegant-n-gram-generation-in-python/) to conveniently create n-grams in Python.

```
from collections import Counter

bigrams = zip(words, words[1:])
counts = Counter(bigrams)
print(counts.most_common())
```

That assumes that the input is a list of words, of course. If your input is a string like the one you provided (which does not have any punctuation), then you can do just `words = text.split(' ')` to get a list of words.

In general, though, you would have to take punctuation, whitespace and other non-alphabetic characters into account. In that case you might do something like

```
import re
words = re.findall(r'[A-Za-z]+', text)
```

or you could use an external library such as [nltk.tokenize](http://www.nltk.org/api/nltk.tokenize.html).

Edit. If you need tri-grams or any other n-grams in general then you can use the function provided in the blog post I linked to:

```
def find_ngrams(input_list, n):
    return zip(*(input_list[i:] for i in range(n)))

trigrams = find_ngrams(words, 3)
```
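Putting the pieces above together for the exact string in the question, a minimal end-to-end sketch (the variable names are just illustrative) would look roughly like this:

```
from collections import Counter

text = "this is a test this is"
words = text.split(' ')

# Pair each word with its successor to form the 2-grams
bigrams = zip(words, words[1:])
counts = Counter(bigrams)

# most_common(n) returns the n most frequent 2-grams
print(counts.most_common(1))  # [(('this', 'is'), 2)]
```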
Can electron apps play any videos supported locally? I'm aware that browsers usually restrict players to `mp4` and `webm` type of media, but I'm wondering if it's possible for an electron-based app to run local videos with formats such as MKV and AVI. I can't find a definite source telling me what is and what is not available.
### Electron is still limited to web technologies...

This means even though you have access to the file system and whatnot through the node APIs, content rendering happens inside a `BrowserWindow` which has about the same support for video playback as webkit based browsers.

You can use HTML5 video or canvas for this purpose, and that's pretty much it natively. (Flash is theoretically also supported, but the amount of work required to get it running is not worth it in my opinion... also, it's *Flash*, so, no.)

### ... but that's not entirely true

Even though native support ends there, you have various paths you can take to work around these limitations. I'll list a few of the most common ones:

- **Convert your MKV/AVI video** in the background to one of the supported formats. This has the drawback of needing the entire video file to be present on your filesystem (i.e. doesn't work for streams).
- **Transcode your MKV/AVI stream** on the fly to a supported format. This is pretty flexible but has some serious requirements on your environment, like having ffmpeg (or something similar) installed locally. Good luck packaging that if you want to distribute your app.
- **Embed a player capable of MKV/AVI playback.** Most common example is VLC Player embedded through node bindings to `libvlc`. Check out the `wcjs-player` & `wcjs-prebuilt` modules if you want to go this way, they are node wrappers for the webchimera.js video player, which uses VLC in the background. (*I'm not affiliated with them, just currently using the modules for something similar in one of my projects with success so far.*)

I'm sure there are other solutions to this, but these are the most logical ones I've found while researching on the subject. Hope this helps!
where lambda vs. first lambda Suppose I have some strings: ``` string[] strings = { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" }; ``` What is the difference between: ``` string startsWithO = strings.First(s => s[0] == 'o'); ``` And: ``` string startsWithO = strings.Where(s => s[0] == 'o').First(); ``` Since Where() is deferred it shouldn't slow down the execution, right?
The performance penalty of using `.Where(filter).First()` rather than `.First(filter)` will usually be *very* small. However, they're not the same - `Where` generates a new iterator that `First` can simply take one element of, whereas `First(filter)` can microoptimize by using just one loop and a direct return whenever `filter` matches.

So while both approaches have the same semantics and both execute the `filter` equally often (only as often as necessary), using `First` with a `filter` parameter doesn't need to create an intermediate iterator object and *probably* avoids some very simple method calls on that iterator too.

In other words, if you're executing such code millions of times, **you will see a slight performance difference** - but nothing huge; I would *never* worry about it. Whenever that tiny performance difference actually matters you're much better off just writing the (very simple) foreach-with-if statement that's equivalent and avoiding the extra calls and object allocations inherent in LINQ - but remember that this is a microoptimization you'll rarely need.

**Edit:** Benchmark demonstrating the effect:

This takes 0.78 seconds:

```
for(int i=0;i<10*1000*1000;i++)
    Enumerable.Range(0,1000).First(n=> n > 2);
GC.Collect();
```

But this takes 1.41 seconds:

```
for(int i=0;i<10*1000*1000;i++)
    Enumerable.Range(0,1000).Where(n=> n > 2).First();
GC.Collect();
```

Whereas plain loops are much faster (0.13 seconds):

```
long bla = 0;
for(int i=0;i<10*1000*1000;i++)
    for(int n=0;n<1000;n++)
        if(n > 2) {
            bla+=n;
            break;
        }
GC.Collect();
Console.WriteLine(bla);//avoid optimizer cheating.
```

**Note that this benchmark only shows such extreme differences because I have a trivial filter and a very short non-matching prefix.**

Based on some quick experimentation, the difference seems largely attributable to the details of which codepath gets taken. So, for arrays and `List<>`s the first variant is actually faster, likely due to special-casing in `.Where` for those types that `First` doesn't have; for custom iterators, the second version is a tiny bit faster, as expected.

# Summary:

`.Where(...).First()` is roughly as fast as `.First(...)` - don't bother choosing one or the other as an optimization. In *general* `.First(...)` is very slightly faster but in some common cases it is slower. If you really need that microoptimization then use plain loops which are faster than either.
Scala: How to convert tuple elements to lists Suppose I have the following list of tuples: ``` val tuples = listOfStrings.map(string => { val split = string.split(":") (split(0), split(1), split(2)) }) ``` I would like to get the split(0) in a list, split(1) in another list and so on. A simple way this could be achieved is by writing: ``` list1 = tuples.map(x => x._1).toList list2 = tuples.map(x => x._2).toList list3 = tuples.map(x => x._3).toList ``` Is there a more elegant (functional) way of achieving the above without writing 3 separate statements?
This will give you your result as a list of lists:

```
tuples.map{t => List(t._1, t._2, t._3)}.transpose
```

If you want to store them in local variables, just do:

```
val List(l1,l2,l3) = tuples.map{t => List(t._1, t._2, t._3)}.transpose
```

**UPDATE**: As pointed out by Blaisorblade, the standard library actually has a built-in method for this: `unzip3`, which is just like `unzip` but for triples instead of pairs:

```
val (l1, l2, l3) = tuples.unzip3
```

Needless to say, you should favor this method over my hand-rolled solution above (but for tuples of arity > 3, this would still apply).
How to make Jetty webserver listen on port 80? I would like to use Jetty as a webserver. I have edited the configuration file at `/etc/default/jetty` and set: ``` # change to 0 to allow Jetty start NO_START=0 # Listen to connections from this network host # Use 0.0.0.0 as host to accept all connections. JETTY_HOST=0.0.0.0 ``` Now I can reach the Jetty webserver at `http://192.168.1.10:8080` but I would like to have Jetty listening on port 80. I have tried this setting in the same configuration file: ``` # The network port used by Jetty JETTY_PORT=80 ``` and then restart Jetty with `sudo service jetty restart` but it doesn't work. How can I change so that the Jetty webserver is listening on port 80?
You will need to edit the `/etc/jetty/jetty.xml` file. Look for a paragraph that says: ``` <Call name="addConnector"> <Arg> <New class="org.mortbay.jetty.nio.SelectChannelConnector"> <Set name="host"><SystemProperty name="jetty.host" /></Set> <Set name="port"><SystemProperty name="jetty.port" default="8090"/></Set> <Set name="maxIdleTime">30000</Set> <Set name="Acceptors">2</Set> <Set name="statsOn">false</Set> <Set name="confidentialPort">8443</Set> <Set name="lowResourcesConnections">5000</Set> <Set name="lowResourcesMaxIdleTime">5000</Set> </New> </Arg> </Call> ``` Change the `jetty.port` Property to `80` as follows: `<Set name="port"><SystemProperty name="jetty.port" default="80"/></Set>` Restart jetty. That should do it. --- Since the above method did not work for OP, and running as root is discouraged, there is an alternative method, as mentioned in this [document](http://docs.codehaus.org/display/JETTY/port80).
Node js - Creating persistent private chat rooms I've been reading so much about node js lately, and the chat capabilities seem very nice. However, the only chat examples I've seen basically broadcast a chat server to a fixed URL (like a meeting room). Is it possible to use node js in part to create a chat client more like gchat? - where a chat window is popped up on the current page and then persists through multiple pages. Has anyone seen an example of this yet? If not, suggestions for other technologies to use for this purpose (I know that's been answered in other questions)? Thanks.
I'll give you a pseudo implementation relying on [jquery](http://jquery.com/) and [now](http://nowjs.com/) to abstract away tedious IO and tedious DOM manipulation from the solution. ``` // Server var nowjs = require('now'); var everyone = nowjs.initialize(httpServer); everyone.now.joinRoom = function(room) { nowjs.getGroup(room).addUser(this.user.clientId); } everyone.now.leaveRoom = function(room) { nowjs.getGroup(room).removeUser(this.user.clientId); } everyone.now.messageRoom = function(room, message) { nowjs.getGroup(room).now.message(message); } // Client var currRoom = ""; $(".join").click(function() { currRoom = ... now.joinRoom(currRoom); }); $(".send").click(function() { var input = ... now.messageRoom(currRoom, input.text()); }); now.messageRoom = function(message) { $("messages").append($("<div></div>").text(message)); }; ``` I only just noticed myself that the new version of nowjs (0.5) has the group system in build. This basically does what you want for you. No hassle. If you want you can remove the nowjs dependency and replace it with 100/200 lines of code. I'll leave that as an exercise for the user.
What's the difference between std::to\_string, boost::to\_string, and boost::lexical\_cast? What's the purpose of `boost::to_string` (found in `boost/exception/to_string.hpp`) and how does it differ from `boost::lexical_cast<std::string>` and `std::to_string`?
[`std::to_string`](http://en.cppreference.com/w/cpp/string/basic_string/to_string), available since C++11, works on **fundamental numeric types** specifically. It also has a [`std::to_wstring`](http://en.cppreference.com/w/cpp/string/basic_string/to_wstring) variant. It is designed to produce the same results that [`sprintf`](http://en.cppreference.com/w/cpp/io/c/fprintf) would. You may choose this form to avoid dependencies on external libraries/headers. --- The throw-on-failure function [`boost::lexical_cast<std::string>`](http://www.boost.org/doc/libs/1_57_0/doc/html/boost_lexical_cast.html) and its non-throwing cousin [`boost::conversion::try_lexical_convert`](https://www.boost.org/doc/libs/1_56_0/doc/html/boost_lexical_cast/synopsis.html#boost_lexical_cast.synopsis.try_lexical_convert) work on **any type that can be inserted into a `std::ostream`**, including types from other libraries or your own code. Optimized specializations exist for common types, with the generic form resembling: ``` template< typename OutType, typename InType > OutType lexical_cast( const InType & input ) { // Insert parameter to an iostream std::stringstream temp_stream; temp_stream << input; // Extract output type from the same iostream OutType output; temp_stream >> output; return output; } ``` You may choose this form to leverage greater flexibility of input types in generic functions, or to produce a `std::string` from a type that you know isn't a fundamental numeric type. --- `boost::to_string` isn't directly documented, and seems to be for internal use primarily. Its functionality behaves like `lexical_cast<std::string>`, not `std::to_string`.
How to load a new component on button click in ReactJs? I created a main component in `ReactJs` called `MainPage` (using Material-UI).

```
import React from 'react';
import Grid from '@material-ui/core/Grid';
import Button from '@material-ui/core/Button';
import CssBaseline from '@material-ui/core/CssBaseline';
import Card from '@material-ui/core/Card';
import CardContent from '@material-ui/core/CardContent';
import withStyles from '@material-ui/core/styles/withStyles';

const styles = theme => ({
  card: {
    minWidth: 350,
  },
  button: {
    fontSize: '12px',
    margin: theme.spacing.unit,
    minWidth: 350
  },
  extendedIcon: {
    marginRight: theme.spacing.unit,
  }
});

class MainPage extends React.Component {
  constructor() {
    super();
  }

  render() {
    const { classes } = this.props;
    return (
      <React.Fragment>
        <CssBaseline />
        <Grid
          container
          spacing={0}
          direction="column"
          alignItems="center"
          justify="center"
          style={{ minHeight: '100vh' }}
        >
          <form onSubmit={this.handleSubmit}>
            <Card className={classes.card}>
              <CardContent>
                <Grid item xs={3}>
                  <Button variant="contained" size="medium" color="primary"
                          className={classes.button} type="submit" value="single">
                    ButtonA
                  </Button>
                </Grid>
                <Grid item xs={3}>
                  <Button variant="contained" size="medium" color="primary"
                          className={classes.button} type="submit" value="batch">
                    ButtonB
                  </Button>
                </Grid>
              </CardContent>
            </Card>
          </form>
        </Grid>
      </React.Fragment>
    );
  }
}

export default withStyles(styles)(MainPage);
```

I want to load a new component (either `CompA` or `CompB`, depending on which button was clicked - ButtonA or ButtonB) on button click. A new component should completely replace a current component - I mean that it should be loaded in a whole screen (not anywhere next to buttons). How can I do it?

**UPDATE:** I want to replace MainPage component, not just render on top of it. This is how I load `MainPage`:

**index.js**

```
import React from 'react';
import { render } from 'react-dom';
import MainPage from './components/MainPage';

const View = () => (
  <div>
    <MainPage/>
  </div>
);

render(<View />, document.getElementById('root'));
```
You can create a different component to handle the state and add an if statement in that component to handle the view that you want to render. You can see the example here [codesandbox.io/embed/6wx2rzjrr3](https://codesandbox.io/embed/6wx2rzjrr3) App.js ``` import React, { Component } from "react"; import ReactDOM from "react-dom"; import Main from "./Main"; import View1 from "./View1"; import View2 from "./View2"; import "./styles.css"; class App extends Component { state = { renderView: 0 }; clickBtn = e => { this.setState({ renderView: +e.target.value }); }; render() { switch (this.state.renderView) { case 1: return <View1 />; case 2: return <View2 />; default: return <Main clickBtn={this.clickBtn} />; } } } const rootElement = document.getElementById("root"); ReactDOM.render(<App />, rootElement); ``` Main.js ``` import React from "react"; export default props => ( <> Main view{" "} <button value={1} onClick={props.clickBtn}> View 1 </button>{" "} <button value={2} onClick={props.clickBtn}> View 2 </button>{" "} </> ); ``` View1.js ``` import React from "react"; export default props => "View 1"; ``` View2.js ``` import React from "react"; export default props => "View 2"; ```
Changing the keyboard layout/mapping on both the console (tty) and X in an X/console agnostic way? I've been able to change the keyboard layout/mapping when the [X Window System](https://www.x.org/wiki/) is running using `~/.Xmodmap` (and I suppose you could also do it through the [X protocol](https://www.x.org/docs/XProtocol/proto.pdf) directly, eg. using [libxcb](https://xcb.freedesktop.org/) as a wrapper around X protocol calls). I've been able to do the same (somewhat) on the console/tty (ie. when X is not running) using `loadkeys`. > > Is there a way to customize the keyboard layout similarly, but from a *single source*, in such a way that it affects both X and the console, ie. in a way that is "X-and-console agnostic"? > > > (The only way I can think of is by writing a "keyboard driver" that talks to the kernel's input interface, `evdev`, and sends the input you want (through `uinput`?), or something, but I don't know if this even makes sense, or if there's an easier way.)
Yes, there is a way, and this is indeed how Debian's console-setup package does it. - Keyboard layouts are specified in XKB terms (model, layout, variant, and options) by the administrator in a file named `keyboard`, usually `/etc/default/keyboard`. This is the single source. It can be edited with a text editor. - The `setxkbmap` program is given these same XKB settings and configures an X11 server accordingly. - The keyboard-configuration package's post-installation maintainer script runs `setxkbmap` directly, with the `/etc/default/keyboard` settings, if it finds itself with an X11 display. Thus `dpkg-reconfigure keyboard-configuration` run from an X11 GUI terminal emulator will affect the X11 keyboard layout. - The `setupcon` script takes these XKB settings, passes them through the `ckbcomp` utility to generate a keyboard map, and loads that keyboard map into the kernel virtual terminals with `loadkeys`/`kbdcontrol`. - This script is run at system bootstrap by a service. - It can also be run manually elsewhen. Other systemd operating systems work differently but also have a single source. - The XKB layout, variant, and options are stored in in `/etc/X11/xorg.conf.d/00-keyboard.conf`. This is the single source. - This file is directly read by the X11 server at startup and sets the X11 keyboard map directly. - This file is parsed by `systemd-localed` at startup. The file cannot be usefully edited with a text editor whilst `systemd-localed` is running, because the service will blithely overwrite it with its own in-memory information. - To change the XKB information, one runs `localectl`, which talks to another server which in turn talks to `systemd-localed`. - `systemd-localed` converts the XKB settings to a virtual terminal keyboard map name using the mappings in `/usr/share/systemd/kbd-model-map`, which it then writes out to `/etc/vconsole.conf`. Unlike the Debian system, it does not generate maps on the fly from the XKB information, but selects only pre-supplied static maps listed in the map file. - `systemd-vconsole-setup` runs at bootstrap, reads `/etc/vconsole.conf`, and loads the keyboard map into the kernel virtual terminals by running `loadkeys`. # Further reading - [`keyboard`](https://manpages.debian.org/stretch/keyboard-configuration/keyboard.5.en.html). *console-setup User's Manual*. Debian. - [`ckbcomp`](https://manpages.debian.org/stretch/console-setup/ckbcomp.1.en.html). *console-setup User's Manual*. Debian. - Andrew T. Young (2011). *[Keyboard Configuration](https://aty.sdsu.edu/bibliog/latex/debian/keyboard.html)*. - [Where is Xkb getting its configuration?](https://unix.stackexchange.com/questions/66624/) - <https://unix.stackexchange.com/a/326804/5132>
Index plot for each cluster sorted by the silhouette After a cluster analysis I'm trying to plot for each cluster the Index plot of the Silhouette value instead of for the complete dataset (like in the WeightedCluster Library Manual by Matthias Studer). First of all, is that theoretically correct? If yes... I create the object "sil" with the wcSilhouetteObs command:

```
sil <- wcSilhouetteObs(distance.matrix, cluster.object)
```

Then I plot the Index Plot for the complete dataset (this line works, even if I can't label the clusters, but for now I don't care):

```
seqIplot(seq.data, group = group.p(cluster.object), sortv = sil)
```

But when I try to plot the Index plot sorted by the silhouette I don't know (but as I said I'm not sure that it's theoretically correct...) how to impose the restriction for the group argument selecting only the silhouette values e.g. for the first cluster. I have created a sequence object for each cluster separately (let's say `cluster1.seq`), but what should I do then?

Thank you!

Emanuela
The silhouette is computed for each observation $i$ as $s(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))}$ where $a(i)$ is the average dissimilarity with members of the cluster to which $i$ belongs, and $b(i)$ the minimum average dissimilarity to members of another cluster.

The silhouette values of members of a cluster $k$ are at the same position as the values $k$ in the cluster membership vector `cluster.object`. So there is nothing more you need to do. Your `seqIplot` command will automatically produce one index plot for each cluster with the sequences sorted by their silhouette values in each cluster. Sequences will be sorted bottom up from the lowest to the highest silhouette value, meaning that the sequences with the best silhouette values for each cluster are at the top of the plots.

Hope this helps.
Android Media Player Error (100,0) I have read all the error codes given on the web. Error specifies:

> const PVMFStatus PVMFInfoLast = 100; " Placeholder for end of range"

But I wasn't able to handle this error; thanks for helping.
Implement OnErrorListener in your class. Inside the class body write

```
video_view.setOnErrorListener(this);
```

then override the method onError(MediaPlayer mp, int what, int extra) with this method

```
@Override
public boolean onError(MediaPlayer mp, int what, int extra) {
    if (what == 100) {
        // MediaPlayer.MEDIA_ERROR_SERVER_DIED: the media server died, restart elsewhere
        video_view.stopPlayback();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    } else if (what == 1) {
        // MediaPlayer.MEDIA_ERROR_UNKNOWN
        pb2.setVisibility(View.GONE);
        Log.i("My Error ", "handled here");
        video_view.stopPlayback();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    } else if (what == 800) {
        // MediaPlayer.MEDIA_INFO_BAD_INTERLEAVING
        video_view.stopPlayback();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    } else if (what == 701) {
        // MediaPlayer.MEDIA_INFO_BUFFERING_START
        video_view.stopPlayback();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    } else if (what == 700) {
        // MediaPlayer.MEDIA_INFO_VIDEO_TRACK_LAGGING
        video_view.stopPlayback();
        Toast.makeText(getApplicationContext(), "Bad Media format ", Toast.LENGTH_SHORT).show();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    } else if (what == -38) {
        // Internal error: the player was called in an invalid state
        video_view.stopPlayback();
        Intent inn = new Intent(HelloInterruptVideoStream.this, TabAct.class);
        startActivity(inn);
    }
    return false;
}
```
using ipdb to debug python code in one cell (jupyter or Ipython) I'm using jupyter (or Ipython) notebook with firefox, and want to debug some python code in the cell. I am using 'import ipdb; ipdb.set\_trace()' as kind of breakpoint, for example my cell has the following code: ``` a=4 import ipdb; ipdb.set_trace() b=5 print a print b ``` which after execution with Shift+Enter gives me this error: ``` -------------------------------------------------------------------------- MultipleInstanceError Traceback (most recent call last) <ipython-input-1-f2b356251c56> in <module>() 1 a=4 ----> 2 import ipdb; ipdb.set_trace() 3 b=5 4 print a 5 print b /home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__init__.py in <module>() 14 # You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. 15 ---> 16 from ipdb.__main__ import set_trace, post_mortem, pm, run, runcall, runeval, launch_ipdb_on_exception 17 18 pm # please pyflakes /home/nnn/anaconda/lib/python2.7/site-packages/ipdb/__main__.py in <module>() 71 # the instance method will create a new one without loading the config. 72 # i.e: if we are in an embed instance we do not want to load the config. ---> 73 ipapp = TerminalIPythonApp.instance() 74 shell = get_ipython() 75 def_colors = shell.colors /home/nnn/anaconda/lib/python2.7/site-packages/traitlets/config/configurable.pyc in instance(cls, *args, **kwargs) 413 raise MultipleInstanceError( 414 'Multiple incompatible subclass instances of ' --> 415 '%s are being created.' % cls.__name__ 416 ) 417 MultipleInstanceError: Multiple incompatible subclass instances of TerminalIPythonApp are being created. ``` The same error appears if I use this code not in the jupyter notebook in the browser, but in jupyter qtconsole. What does this error mean and what to do to avoid it? Is it possible to debug code in the cell step-by-step, using next, continue, etc commands of pdb debugger?
Had this problem also and it seems to be related to versions of jupyter and ipdb. Solution is to use this instead of the ipdb library `set_trace` call: ``` from IPython.core.debugger import Tracer Tracer()() #this one triggers the debugger ``` Source: <http://devmartin.com/blog/2014/10/trigger-ipdb-within-ipython-notebook/> Annotated screenshot: [![screenshot illustrate how Tracer()() will cause Jupyter notebook to react. It pauses execution on the line in your code where you Trace()() and a new "inline" input accepts ipdb commands like 'p' or 'n' or 'c', shown here](https://i.stack.imgur.com/dlam9.png)](https://i.stack.imgur.com/dlam9.png)
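Applied to the cell from the question, this would look roughly like the sketch below (the values are just the ones from the original example):

```
a = 4
from IPython.core.debugger import Tracer
Tracer()()  # execution pauses here and an ipdb prompt appears below the cell
b = 5
print(a)
print(b)
```

At the prompt you can then use the usual pdb commands such as `n` (next), `c` (continue) and `p <name>` (print a variable). Note that in newer IPython versions `Tracer` has been deprecated in favour of `from IPython.core.debugger import set_trace`.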
How To Use Triangle Operators In Pattern Matching Function I'm trying to use Sequences to improve performance. When defining the function below I tried to use the "triangle" operators in a pattern-matching context.

```
import qualified Data.Sequence as S

cleanField :: S.Seq Char -> S.Seq Char
cleanField ((S.<|) ' ' xs) = cleanField xs
cleanField ((S.|>) xs ' ') = cleanField xs
cleanField xs = xs
```

GHC 7.4.1 says:

```
seq.hs:4:13:
    Parse error in pattern: (S.<|)
```

Can I not use the triangle operators (`<|`, `|>`) in pattern matching? If so, why can I use the cons (`:`) operator in pattern matching and not the triangle operators?
Xavier Pinho is correct. The operators `(<|)` and `(|>)` are ordinary functions, but only data constructors may be used in patterns. (Yes, they're listed in the docs under the title *Construction*, because they're used to construct sequences out of components, but they're not technically data constructors.) The library offers two functions `viewl` and `viewr` to create members of datatypes `ViewL` and `ViewR`. These have constructors `:<` and `:>` that can be matched on. An example: ``` s :: Seq Int s = fromList [1,2,3] test1 :: (Int, Seq Int) test1 = case viewl s of x :< xs -> (x, xs) test2 :: (Seq Int, Int) test2 = case viewr s of xs :> x -> (xs, x) ``` The views are also convenient to use with the `ViewPatterns` language extension. If enabled, you can say ``` test3 :: Seq Int -> (Int, Seq Int) test3 (viewl -> x :< xs) = (x, xs) ``` to match on an incoming sequence as if it was a list.
Meteor: Hide properties from client Is there a way to exclude certain properties from client updates? It should not be possible to see the property when inspecting a collection in the console
Absolutely. 1. Remove the `autopublish` package which is turned on by default: `meteor remove autopublish` 2. Create your collection: `Rooms = new Meteor.Collection("rooms");` *No conditional isServer or isClient needed, as this should be present to both* 3. In your server side code, [publish](http://docs.meteor.com/#meteor_publish) only a subset of your collection by zeroing out the fields you don't want the client to have: ``` if (Meteor.isServer) { //you could also Rooms.find({ subsetId: 'some_id' }) a subset of Rooms Meteor.publish("rooms", function () { return Rooms.find({}, {fields: {secretInfo: 0}}); }); } ``` NOTE: setting `{secretInfo: 0}` above does not set all instances of `secretInfo` for every row in the `Rooms` collection to zero. It **removes** the field altogether from the clientside collection. Think of `0` as the off switch :) 4. Subscribe client side to the published collection: ``` if (Meteor.isClient) { Deps.autorun(function() { Meteor.subscribe("rooms"); }); } ``` Hope this helps!
How to convert a string to JSON object in PHP I have the following result from an SQL query: ``` {"Coords":[ {"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"}, {"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"}, {"Accuracy":"65","Latitude":"53.27770755361785","Longitude":"-9.011979642121824","Timestamp":"Fri Jul 05 2013 12:02:09 GMT+0100 (IST)"}, {"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"}, {"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"} ] } ``` It is currently a string in PHP. I know it's already in JSON form, is there an easy way to convert this to a JSON object? I need it to be an object so I can add an extra item/element/object like what "Coords" already is.
What @deceze said is correct, it seems that your JSON is malformed, try this:

```
{
    "Coords": [{
        "Accuracy": "30",
        "Latitude": "53.2778273",
        "Longitude": "-9.0121648",
        "Timestamp": "Fri Jun 28 2013 11:43:57 GMT+0100 (IST)"
    }, {
        "Accuracy": "30",
        "Latitude": "53.2778273",
        "Longitude": "-9.0121648",
        "Timestamp": "Fri Jun 28 2013 11:43:57 GMT+0100 (IST)"
    }, {
        "Accuracy": "30",
        "Latitude": "53.2778273",
        "Longitude": "-9.0121648",
        "Timestamp": "Fri Jun 28 2013 11:43:57 GMT+0100 (IST)"
    }, {
        "Accuracy": "30",
        "Latitude": "53.2778339",
        "Longitude": "-9.0121466",
        "Timestamp": "Fri Jun 28 2013 11:45:54 GMT+0100 (IST)"
    }, {
        "Accuracy": "30",
        "Latitude": "53.2778159",
        "Longitude": "-9.0121201",
        "Timestamp": "Fri Jun 28 2013 11:45:58 GMT+0100 (IST)"
    }]
}
```

Use `json_decode` to convert the String into an Object (`stdClass`) or array: <http://php.net/manual/en/function.json-decode.php>

**[edited]**

I did not understand what you mean by *"an official JSON object"*, but suppose you want to add content to the json via PHP and then convert it right back to JSON?

Assuming you have the following variable:

```
$data = '{"Coords":[{"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27770755361785","Longitude":"-9.011979642121824","Timestamp":"Fri Jul 05 2013 12:02:09 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"}]}';
```

You should convert it to an *Object* (stdClass): `$manage = json_decode($data);`

But working with `stdClass` is more complicated than a PHP array, so try this (use the second param with `true`): `$manage = json_decode($data, true);`

This way you can use array functions: <http://php.net/manual/en/function.array.php>

**adding an item:**

```
$manage = json_decode($data, true);

echo 'Before: <br>';
print_r($manage);

$manage['Coords'][] = Array(
    'Accuracy' => '90',
    'Latitude' => '53.277720488429026',
    'Longitude' => '-9.012038778269686',
    'Timestamp' => 'Fri Jul 05 2013 11:59:34 GMT+0100 (IST)'
);

echo '<br>After: <br>';
print_r($manage);
```

**remove first item:**

```
$manage = json_decode($data, true);

echo 'Before: <br>';
print_r($manage);

array_shift($manage['Coords']);

echo '<br>After: <br>';
print_r($manage);
```

In case you want to save the json to a *database* or a *file*:

```
$data = '{"Coords":[{"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.277720488429026","Longitude":"-9.012038778269686","Timestamp":"Fri Jul 05 2013 11:59:34 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27770755361785","Longitude":"-9.011979642121824","Timestamp":"Fri Jul 05 2013 12:02:09 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"},{"Accuracy":"65","Latitude":"53.27769091555766","Longitude":"-9.012051410095722","Timestamp":"Fri Jul 05 2013 12:02:17 GMT+0100 (IST)"}]}';

$manage = json_decode($data, true);

$manage['Coords'][] = Array(
    'Accuracy' => '90',
    'Latitude' => '53.277720488429026',
    'Longitude' => '-9.012038778269686',
    'Timestamp' => 'Fri Jul 05 2013 11:59:34 GMT+0100 (IST)'
);

if (($id = fopen('datafile.txt', 'wb'))) {
    fwrite($id, json_encode($manage));
    fclose($id);
}
```
Play! framework - handle a POST request This is the route to handle the login POST request:

```
POST /login/submit controllers.Users.loginSubmit(user : String, password : String)
```

This is the login.scala.html:

```
<form method="post" action="???">
 <input type="text" name="username" /><br/>
 <input type="password" name="password" /><br/>
 <input type="submit" value="Login" />
</form>
```

I have two questions:

1. What should be the value of action? Is it "login/submit"?
2. How do you pass this form to be handled in the loginSubmit function?

Thanks
If it's `POST` form, you don't need to declare params in the `route`: ``` POST /login/submit controllers.Users.loginSubmit() ``` Template: ``` <!-- syntax: @routes.ControllerName.methodName() --> <form method="post" action="@routes.Users.loginSubmit()"> <input type="text" name="username" /><br/> <input type="password" name="password" /><br/> <input type="submit" value="Login" /> </form> ``` Import: ``` import play.data.DynamicForm; import play.data.Form; ``` Controller: ``` public static Result loginSubmit(){ DynamicForm dynamicForm = Form.form().bindFromRequest(); Logger.info("Username is: " + dynamicForm.get("username")); Logger.info("Password is: " + dynamicForm.get("password")); return ok("ok, I recived POST data. That's all..."); } ``` ## Template form helpers There are also [form template helpers](http://www.playframework.org/documentation/2.0.2/JavaFormHelpers) available for creating forms in Play's template so the same can be done as: ``` @helper.form(action = routes.User.loginSubmit()) { <input type="text" name="username" /><br/> <input type="password" name="password" /><br/> <input type="submit" value="Login" /> } ``` They are especially useful when working with large and/or `pre-filled` forms
SVG stacking, anchor elements, and HTTP fetching I have a series of overlapping questions, the intersection of which can best be asked as: Under what circumstances does an `#` character (an anchor) in a URL trigger an HTTP fetch, in the context of either an `<a href` or an `<img src` ? Normally, should: ``` http://foo.com/bar.html#1 ``` and ``` http://foo.com/bar.html#2 ``` require two different HTTP fetches? I would think the answer should definitely be NO. **More specific details:** The situation that prompted this question was my first attempt to experiment with [**SVG stacking**](http://hofmannsven.com/2013/laboratory/svg-stacking/) - a technique where multiple icons can be embedded within a single `svg` file, so that only a single HTTP request is necessary. Essentially, the idea is that you place multiple SVG icons within a single file, and use CSS to hide all of them, except the one that is selected using a CSS `:target` selector. You can then select an individual icon using the `#` character in the URL when you write the `img` element in the HTML: ``` <img src="stacked-icons.svg#icon3" width="80" height="60" alt="SVG Stacked Image" /> ``` When I try this out on Chrome it works perfectly. A single HTTP request is made, and multiple icons can be displayed via the same `svg` url, using different anchors/targets. However, when I try this with Firefox (28), I see via the Console that *multiple* HTTP requests are made - one for each svg URL! So what I see is something like: ``` GET http://myhost.com/img/stacked-icons.svg#icon1 GET http://myhost.com/img/stacked-icons.svg#icon2 GET http://myhost.com/img/stacked-icons.svg#icon3 GET http://myhost.com/img/stacked-icons.svg#icon4 GET http://myhost.com/img/stacked-icons.svg#icon5 ``` ...which of course defeats the purpose of using SVG stacking in the first place. Is there some reason Firefox is making a separate HTTP request for *each* URL instead of simply fetching `img/stacked-icons.svg` once like Chrome does? This leads into the broader question of - what rules determine whether an `#` character in the URL should trigger an HTTP request?
### Here's a demo in Plunker to help sort out some of these issues - [Plunker](http://plnkr.co/edit/FvabTiApk4IfqAFLVG5M?p=preview) - [stack.svg#circ](http://run.plnkr.co/plunks/FvabTiApk4IfqAFLVG5M/stack.svg#circ) - [stack.svg#rect](http://run.plnkr.co/plunks/FvabTiApk4IfqAFLVG5M/stack.svg#rect) ### A URI has a couple basic components: ![URI](https://i.stack.imgur.com/S4QCo.png) - **Protocol** - determines how to send the request - **Domain** - where to send the request - **Location** - file path within the domain - **Fragment** - which part of the document to look in ### Media Fragment URI A fragment is simply identifying a portion of the entire file. Depending on the implementation of the [Media Fragment URI spec](http://www.w3.org/TR/media-frags-recipes/), it actually might be totally fair game for the browser to send along the Fragment Identifier. Think about a streaming video, some of which has been cached on the client. If the request is for `/video.ogv**#t=10,20**` then the server can save space by only sending back the relevant portion/segment/fragment. Moreover, if the applicable portion of the media is already cached on the client, then the browser can prevent a round trip. ### Think Round Trips, Not Requests When a browser issues a GET Request, it does not necessarily mean that it needs to grab a fresh copy of the file all the way from the server. If it has already has a cached version, the request could be answered immediately. ### HTTP Status Codes - [200 - OK](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) - Got the resource and returned from the server - [302 - Found](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.3) - Found the resource in the cache and not enough has changed since the previous request that we need to get a fresh copy. ### Caching Disabled Various things can affect whether or not a client will perform caching: The way the request was issued (`F5` - soft refresh; `Ctrl` + `R` - hard refresh), the settings on the browser, any development tools that add attributes to requests, and the way the server handles those attributes. Often, when a browser's developer tools are open, it will automatically disable caching so you can easily test changes to files. If you're trying to observe caching behavior specifically, make sure you don't have any developer settings that are interfering with this. When comparing requests across browsers, to help mitigate the differences between Developer Tool UI's, you should use a tool like [fiddler](http://www.telerik.com/fiddler) to inspect the actual HTTP requests and responses that go out over the wire. I'll show you both for the simple plunker example. When the page loads it should request two different ids in the same stacked svg file. Here are [side by side requests of the same test page in Chrome 39, FF 34, and IE 11](https://i.stack.imgur.com/koedq.jpg): ![Fiddler - Disable Caching](https://i.stack.imgur.com/JYIyi.png) ![Dev Tools - Disable Caching](https://i.stack.imgur.com/SXkBu.png) ### Caching Enabled But we want to test what would happen on a normal client where caching is enabled. To do this, update your dev tools for each browser, or go to fiddler and *Rules* > *Performance* > and uncheck *Disable Caching*. 
Now our request should look like this: ![Fiddler - Enable Caching](https://i.stack.imgur.com/XDEYZ.png) ![Dev Tools - Enable Caching](https://i.stack.imgur.com/Qt20R.png) Now all the files are returned from the local cache, regardless of fragment ID The developer tools for a particular browser might try to display the fragment id for your own benefit, but fiddler should always display the most accurate address that is actually requested. Every browser I tested omitted the fragment part of the address from the request: ![HTTP Request](https://i.stack.imgur.com/AV0wA.png) ### Bundling Requests Interestingly, chrome seems to be smart enough to prevent a second request for the same resource, but FF and IE fail to do so when a fragment is part of address. This is equally true for SVG's and PNG's. Consequently, the first time a page is ever requested, the browser will load one file for each time the SVG stack is actually used. Thereafter, it's happy to take the version from the cache, but will hurt performance for new viewers. ### Final Score **CON**: *First Trip* - SVG Stacks are not fully supported in all browsers. One request made per instance. **PRO**: *Second Trip* - SVG resources will be cached appropriately ### Additional Resources - [simurai.com](http://simurai.com/blog/2012/04/02/svg-stacks/) is widely thought to be the first application of using SVG stacks and the article explains some browser limitations that are currently in progress - Sven Hofmann has a great [article about SVG Stacks](http://hofmannsven.com/2013/laboratory/svg-stacking/) that goes over some implementation models. - Here's the [**source code** in Github](https://github.com/KyleMit/StackSVG) if you'd like to fork it or play around with samples - Here's the [**official spec** for Fragment Identifiers within SVG images](http://www.w3.org/TR/SVG/linking.html#SVGFragmentIdentifiers) - Quick look at [**Browser Support** for SVG Fragments](http://caniuse.com/#feat=svg-fragment) ### Bugs Typically, identical resources that are requested for a page will be bundled into a single request that will satisfy all requesting elements. Chrome appears to have already fixed this, but I've opened bugs for FF and IE to one day fix this. - **Chrome** - This [very old bug](https://bugs.webkit.org/show_bug.cgi?id=89983) which highlighted this issue appears to have been resolved for webkit. - **Firefox** - [Bug Added Here](https://bugzilla.mozilla.org/show_bug.cgi?id=1121693) - **Internet Explorer** - [Bug Added Here](https://connect.microsoft.com/IE/feedbackdetail/view/1088029/ie11-svg-fragment-downloaded-for-every-image)
Get target of shortcut folder How do you get the directory target of a shortcut folder? I've searched everywhere and only found how to get the target of a shortcut file.
I think you will need to use COM and add a reference to "Microsoft Shell Control And Automation", as described in [this blog post](http://www.saunalahti.fi/janij/blog/2006-12.html#d6d9c7ee-82f9-4781-8594-152efecddae2): Here's an example using the code provided there: ``` namespace Shortcut { using System; using System.Diagnostics; using System.IO; using Shell32; class Program { public static string GetShortcutTargetFile(string shortcutFilename) { string pathOnly = System.IO.Path.GetDirectoryName(shortcutFilename); string filenameOnly = System.IO.Path.GetFileName(shortcutFilename); Shell shell = new Shell(); Folder folder = shell.NameSpace(pathOnly); FolderItem folderItem = folder.ParseName(filenameOnly); if (folderItem != null) { Shell32.ShellLinkObject link = (Shell32.ShellLinkObject)folderItem.GetLink; return link.Path; } return string.Empty; } static void Main(string[] args) { const string path = @"C:\link to foobar.lnk"; Console.WriteLine(GetShortcutTargetFile(path)); } } } ```
Why is the variance of 2SLS bigger than that of OLS? > > ... Another potential problem with applying 2SLS and other IV procedures > is that the 2SLS standard errors have a tendency to be ‘‘large.’’ What > is typically meant by this statement is either that 2SLS coefficients > are statistically insignificant or that the 2SLS standard errors are > much larger than the OLS standard errors. Not surprisingly, the > magnitudes of the 2SLS standard errors depend, among other things, on > the quality of the instrument(s) used in estimation. > > > This quote is from [Wooldridge's "Econometric analysis of cross-sectional and panel data"](http://rads.stackoverflow.com/amzn/click/0262232588). I wonder why this happens? I would prefer a mathematical explanation. Assuming homoskedasticity for simplicity the (estimated) asymptotic variance of OLS estimator is given by $$\widehat{Avar}(\hat{\beta}\_{OLS}) = n\sigma^2(X'X)^{-1}$$ while for the 2SLS estimator $$\widehat{Avar}(\hat{\beta}\_{2SLS}) = n\sigma^2(\hat{X}'\hat{X})^{-1}$$ where $$\hat{X} = P\_zX = Z(Z'Z)^{-1}Z'X.$$ $X$ is the matrix of regressors, including the endogenous ones, and $Z$ is the matrix of instrumental variables. So rewriting the variance for 2SLS gives $$\widehat{Avar}(\hat{\beta}\_{2SLS}) = n\sigma^2\left(X'Z(Z'Z)^{-1}Z'X\right)^{-1}.$$ However, I can't conclude from above formulas that $\widehat{Avar}(\hat{\beta}\_{2SLS}) \geq \widehat{Avar}(\hat{\beta}\_{OLS})$.
We say a matrix $A$ is at least as large as $B$ if their difference $A-B$ is positive semidefinite (psd). An equivalent statement that turns out to be handier to check here is that $B^{-1}-A^{-1}$ is psd (much like $a>b$ is equivalent to $1/b>1/a$). So we want to check that $$ X'X-X'Z(Z'Z)^{-1}Z'X $$ is psd. Write $$ X'X-X'Z(Z'Z)^{-1}Z'X=X'(I-Z(Z'Z)^{-1}Z')X=X'M\_ZX $$ To check that $X'M\_ZX$ is psd, we must show that, for any vector $d$, $$ d'X'M\_ZXd\geq0 $$ Let $c=Xd$. Then, $$ c'M\_Zc\geq0 $$ as $M\_Z$ is a symmetric and idempotent projection matrix, which is known to be psd: write, using symmetry and idempotency, $$ c'M\_Zc=c'M\_ZM\_Zc=c'M\_Z'M\_Zc $$ and let $e=M\_Zc$, so that $c'M\_Zc=e'e=\sum\_ie\_i^2$, which, being a sum of squares, must be nonnegative. P.S.: Two little quibbles - you refer to the *estimated* asymptotic variances $\widehat{Avar}(\hat\beta\_j)$. Now, the OLS estimator and the 2SLS estimator of $\sigma^2$ are not the same. As Paul mentions in his answer, this will however not affect the ranking as OLS is, by definition, the estimator which minimizes the sum of squared residuals. (The OLS estimate conventionally divides by $n-k$ and the IV estimate by $n$, but that seems unlikely to affect the ranking in realisitic samples.) Also, the *asymptotic* variances are generally scaled by $n$ so as to obtain a nondegenerate quantity as $n\to\infty$. (Of course, scaling both by $n$ will not affect the ranking, so that the issue is a little moot for this particular question.)
Delete objects in Django via html link I have a project with a Post model, that is basic posts. I want to create a link on each post page to be able to delete that post (with appropriate security). There are a few questions on this on stack overflow, but I can't seem to find a complete, workable answer (I am using Django 1.7) that doesn't throw up errors when I implement it. I have been able to implement a delete function which works ok, but need to add a POST form with a CSRF token for validation, and also check that the user deleting it is the one that created it. I can't seem to figure out how to add these two in. So far, in my views.py:

```
def delete(request, id):
    post = Post.objects.filter(pk=id).delete()
    return HttpResponseRedirect(reverse('posts.views.all_posts'))
```

In urls.py:

```
url(r'^delete/(?P<id>\d+)/$','posts.views.delete'),
```

In html:

```
<a href="/delete/{{ post.id }}">Delete</a>
```

This all works, but there is no security - so appreciate guidance on how to add a form and checking. Also, I've seen an answer that uses DeleteView, but couldn't get that one to work either.
Indeed, using a GET method to delete your objects makes you vulnerable to [CSRF attacks](https://docs.djangoproject.com/en/dev/ref/contrib/csrf/). [`DeleteView`](https://docs.djangoproject.com/en/1.7/ref/class-based-views/generic-editing/#deleteview) only deletes on POST, and shows a confirmation page on GET. Your code should look something like this in `views.py`: ``` from django.views.generic import DeleteView class PostDelete(DeleteView): model = Post success_url = reverse_lazy('posts.views.all_posts') ``` In `urls.py`: ``` url(r'^delete/(?P<pk>\d+)/$', PostDelete.as_view(), name='entry_delete'), ``` Your form (without using a confirmation template. There is an example of confirmation template in the docs): ``` <form action="{% url 'entry_delete' object.pk %}" method="post"> {% csrf_token %} <input type="submit" value="Delete" /> </form> ``` If you are not using a confirmation template, make sure to point the form's `action` attribute to the `DeleteView` ([this is why](https://stackoverflow.com/questions/8395269/)). To ensure the user deleting the post is the user that owns it, I like to use [mixins](https://docs.djangoproject.com/en/1.7/ref/class-based-views/mixins/). Assuming your `Post` model has a `created_by` foreign key pointing to `User`, you could write a mixin like: ``` from django.core.exceptions import PermissionDenied class PermissionMixin(object): def get_object(self, *args, **kwargs): obj = super(PermissionMixin, self).get_object(*args, **kwargs) if not obj.created_by == self.request.user: raise PermissionDenied() else: return obj ``` Finally, your `DeleteView` should inherit from this mixin: ``` class PostDelete(PermissionMixin, DeleteView): model = Post success_url = reverse_lazy('posts.views.all_posts') ```
Django - source in serializers leading to too many database calls I am trying to create a song-artist-album relationship in Django. I have the following models: ``` class Artist(models.Model): gid = models.CharField(max_length=63, blank=True) name = models.CharField(max_length=255, blank=True) begin_life = models.CharField(max_length=31, blank=True) end_life = models.CharField(max_length=31, blank=True) type = models.CharField(max_length=1, blank=True) gender = models.CharField(max_length=1, blank=True) class Song(models.Model): gid = models.CharField(max_length=63, blank=True) title = models.CharField(max_length=255, blank=True) artist = models.ForeignKey('Artist', related_name='songs_artist') album = models.ForeignKey('Album', related_name='songs_album') length = models.IntegerField(default=0) ``` I have created my ArtistSerializer so that I can retrieve all the songs of the artist when I get the info of any particular artist. This is the serializer I have created: ``` class ArtistSerializer(serializers.ModelSerializer): songs_artist = SongSerializer(source='songs_artist') class Meta: model = Artist fields = ('name', 'type', 'gender', 'begin_life', 'end_life', 'songs_artist') class SongSerializer(serializers.ModelSerializer): artist = SongArtistSerializer() album = SongAlbumSerializer() class Meta: model = Song fields = ('id', 'title', 'artist', 'album', 'length') class SongArtistSerializer(serializers.ModelSerializer): class Meta: model = Artist fields = ('id', 'name') ``` A quick profiling on the GET method of my artist revealed some troubling facts. Following are the results of the profiling ordered by time and number of calls: <http://pastebin.com/bwcKsn2i>. But, when I removed the `songs_artist` field from my serializer, following was the output of the profiler: <http://pastebin.com/0s5k4w7i>. If I read right, the database is being hit 1240 times when I use `source`! Is there any other alternative to doing this? Thanks in advance.
Django REST Framework will not optimize your queries for you, it's up to you to decide how best to remove any N+1 queries. You should follow the guidelines covered in [the Django documentation](https://docs.djangoproject.com/en/dev/topics/db/optimization/) to handle performance issues. In the case of ForeignKey relationships, you should use [`select_related`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#django.db.models.query.QuerySet.select_related) in your query, which will pre-fetch the objects in the original query. In the case of ManyToMany and GenericForeignKey relationships, you should use [`prefetch_related`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#django.db.models.query.QuerySet.prefetch_related). I've written quite a bit about this in [another Stack Overflow answer](https://stackoverflow.com/a/26598897/359284), but the gist is that you use it similar to `select_related`. You should override the query in `get_queryset` on the view for best results, as you don't need to worry about Django REST Framework incorrectly cloning the queryset when used as an attribute on the class.
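For the serializers in the question, a hedged sketch of what the `get_queryset` override might look like (assuming a DRF `ModelViewSet` for artists; the view name and imports are illustrative, not part of the original code):

```
from django.db.models import Prefetch
from rest_framework import viewsets

# assuming Artist, Song and ArtistSerializer are imported from your app


class ArtistViewSet(viewsets.ModelViewSet):
    serializer_class = ArtistSerializer

    def get_queryset(self):
        # Prefetch the reverse relation behind `songs_artist`, and use
        # select_related so each song's artist/album foreign keys are
        # fetched in the same query instead of one query per song.
        return Artist.objects.prefetch_related(
            Prefetch(
                'songs_artist',
                queryset=Song.objects.select_related('artist', 'album'),
            )
        )
```

This keeps the serializer unchanged while reducing the per-artist song lookups to a constant number of queries.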
log4net multiple appenders, writing to event viewer I am using log4net for logging. I have two appenders: one file appender and one eventlog appender. I have registered the application in the registry editor; the problem is that both loggers are now writing to the event viewer. I need `fileAppender` to write to the file and `eventLogAppender` to show up in the event viewer.

```
<log4net>
<appender name="LogFileAppender" type="log4net.Appender.FileAppender">
  <file value="file.log"/>
  <appendToFile value="false"/>
  <layout type="log4net.Layout.PatternLayout">
    <header value="Logging Start &#xD;&#xA;"/>
    <footer value="Logging End &#xD;&#xA;"/>
    <conversionPattern value="%date [%thread] %-5level %logger: %message%newline"/>
  </layout>
</appender>
<appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
  <param name="ApplicationName" value="eventlog" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
<root>
  <level value="DEBUG"/>
  <appender-ref ref="LogFileAppender"/>
  <appender-ref ref="EventLogAppender" />
</root>
</log4net>
```

and then in code

```
private static readonly ILog log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
// private static readonly ILog log = LogManager.GetLogger("LogFileAppender");
private static readonly ILog logEvents = LogManager.GetLogger("EventLogAppender");
```

I have tried different things but none of them is working. Any solution? Thanks!
In order to use multiple ILog instances with log4net and to log different information to each, you must configure at least two Logger elements in the log4net section of your config file. In your case, you have two targets that you want - file and event log. I'll call them FileLogger and EventLogger. Here is an example of the additional configuration you need in the log4net section: ``` <logger name="FileLogger"> <level value="ALL" /> <appender-ref ref="LogFileAppender" /> </logger> <logger name="EventLogger"> <level value="ALL" /> <appender-ref ref="EventLogAppender" /> </logger> ``` Then, to use each one in code, instantiate them like so: ``` private static readonly ILog fileLogger = LogManager.GetLogger("FileLogger"); private static readonly ILog eventLogger = LogManager.GetLogger("EventLogger"); ``` Note that the ref attribute of the appender elements matches the name attribute of the appenders you have already configured and that the name passed to GetLogger matches the name attribute of the logger elements.
Correct way to extend classes with Symfony autowiring

I'm wondering if this is the correct way to extend and use classes with Symfony's autowiring. For example, I have a BaseClass that autowires the entity manager through its constructor.

```
class BaseClass
{
    protected $entityManager;

    public function __construct(EntityManagerInterface $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    protected function someMethodIWantToUse(Entity $something)
    {
        // Do something there
        $this->entityManager->persist($something);
        $this->entityManager->flush();
    }
}
```

Then I have a subclass that extends the BaseClass and needs access to that method. So I let it autowire the entity manager again and pass it to the parent constructor.

```
class SubClass extends BaseClass
{
    private $handler;

    public function __construct(EntityManagerInterface $em, SomeHandler $handler)
    {
        parent::__construct($em);
        $this->handler = $handler;
    }

    public function SubClassMethod()
    {
        // Get some data or do something
        $entity = SomeEntityIGot();
        $this->someMethodIWantToUse($entity);
    }
}
```

Now I'm wondering whether this is actually the correct way to do this, or whether there's something I'm missing and the parent class should be able to autowire the entity manager by itself?
To summarize the comments, yes, your way is correct. Depending on your use case there are alternatives. These are the ways you can go about it:

**1. Extending the class and using [Constructor Injection](https://symfony.com/doc/current/service_container/injection_types.html#constructor-injection) (what you do)**

```
class BaseClass
{
    protected $some;

    public function __construct(SomeInterface $some)
    {
        $this->some = $some;
    }
}

class SubClass extends BaseClass
{
    private $other;

    public function __construct(SomeInterface $some, OtherInterface $other)
    {
        parent::__construct($some);
        $this->other = $other;
    }
}
```

---

**2. [Setter Injection](https://symfony.com/doc/current/service_container/calls.html)**

```
class BaseClass
{
    protected $some;

    public function __construct(SomeInterface $some)
    {
        $this->some = $some;
    }
}

class SubClass extends BaseClass
{
    private $other;

    public function setOther(OtherInterface $other)
    {
        $this->other = $other;
    }
}
```

Now `setOther` won't automatically be called; you have to "manually" call it by specifying a `calls` property in your `services.yaml` file, as described here: <https://symfony.com/doc/current/service_container/calls.html>. This would then look something like this:

```
// services.yaml
App\SubClass:
    calls:
        - [setOther, ['@other']]
```

Or

```
// services.yaml
app.sub_class:
    class: App\SubClass
    calls:
        - [setOther, ['@other']]
```

assuming an implementation of `OtherInterface` is available as `@other` in the service container. A more elegant solution, if you're using autowiring, is to simply add a `@required` annotation to the function, as described here: <https://symfony.com/doc/current/service_container/autowiring.html#autowiring-calls>, which would look like this:

```
/**
 * @required
 */
public function setOther(OtherInterface $other)
{
    $this->other = $other;
}
```

---

**3. [Property Injection](https://symfony.com/doc/current/service_container/injection_types.html#property-injection)**

```
class BaseClass
{
    protected $some;

    public function __construct(SomeInterface $some)
    {
        $this->some = $some;
    }
}

class SubClass extends BaseClass
{
    public $other;
}
```

As with Setter Injection, you'll need to tell Symfony to populate this property by specifying it in your `services.yaml` file like this:

```
// services.yaml
App\SubClass:
    properties:
        other: '@other'
```

Or

```
// services.yaml
app.sub_class:
    class: App\SubClass
    properties:
        other: '@other'
```

assuming an implementation of `OtherInterface` is available as `@other` in the service container.

---

**Conclusion:**

Since there are different ways to solve this, it's up to you to determine the correct way for your use case. I personally go with either option 1 (Constructor Injection) or option 2 (Setter Injection) using the annotation. Both of them allow you to use typehints and thus allow your IDE to help you write clean code. In 90% of cases, I'd go with option 1, as then it's clear to everyone reading your code what services are available with one glance at the `__construct` method. One use case for Setter Injection would be a base class offering all the `setXXX` functions but subclasses not needing all of them. You could have a constructor in each subclass, requesting the needed services and then calling the `setXXX` methods of the base class.
*Note: this is kind of an edge case and you probably won't run into it.* You can find a list of the advantages and disadvantages of each method directly in the Symfony documentation on the [Service Container -> Types of Injection](https://symfony.com/doc/current/service_container/injection_types.html#property-injection).