Why are variable assignments within a Trap block not visible outside it? Why are the variable assignments that I make inside the Trap block not visible outside it? ``` $integer = 0; $string = [String]::Empty; $stringBuilder = new-object 'System.Text.StringBuilder'; trap { $integer = 1; $string = '1'; $stringBuilder.Append('1'); write-host "Integer Variable Inside: " $integer; write-host "String Variable Inside: " $string; write-host "StringBuilder Variable Inside: " $stringBuilder; continue; } $dummy = 1/$zero; write-host "Integer Variable Outside: " $integer; write-host "String Variable Outside: " $string; write-host "StringBuilder Variable Outside: " $stringBuilder; ``` I would have expected the results from within and outside the Trap block to be identical but these are the results that I am seeing. ``` Integer Variable Inside: 1 String Variable Inside: 1 StringBuilder Variable Inside: 1 Integer Variable Outside: 0 String Variable Outside: StringBuilder Variable Outside: 1 ``` Notice that it is only the StringBuilder that retains its value. I am guessing that this has something to do with the difference between value and reference types but can't quite pin it down.
With [info that slipsec provided](https://stackoverflow.com/questions/2001161/why-are-variable-assignments-within-a-trap-block-not-visible-outside-it/2002199#2002199) above and through some further experimentation, I now understand what is happening here. Joel [explains](http://huddledmasses.org/trap-exception-in-powershell/) how the Trap scope works as follows.

> Even though in our error handler we were able to access the value of $Result and see that it was True … and even though we set it to $False, and printed it out so you could see it was set … the function still returns True, because the trap scope doesn't modify the external scope unless you explicitly set the scope of a variable. NOTE: If you had used $script:result instead of $result (in every instance where $result appears in that script), you would get the output which the string/comments led you to expect.

So variables from outside the Trap scope can be read but not set, because they are copies of the originals (thanks [Jason](https://stackoverflow.com/users/64046/jasonmarcher)). This is the reason why the Integer variable did not retain its value. The StringBuilder, however, is a reference object and the variable is only a pointer to that object. The code within the Trap scope was able to read the reference that the variable was set to and modify the object to which it was pointing - the variable itself required no change.

Note that Joel's tip about specifying the scope of the variable allowed me to set the value of the Integer variable from within the Trap scope.

```
$script:integer = 0;
$string = [String]::Empty;
$stringBuilder = new-object 'System.Text.StringBuilder';

trap
{
    $script:integer = 1;
    $string = '1';
    $stringBuilder.Append('1');

    write-host "Integer Variable Inside: " $script:integer;
    write-host "String Variable Inside: " $string;
    write-host "StringBuilder Variable Inside: " $stringBuilder;

    continue;
}

$dummy = 1/$zero;

write-host "Integer Variable Outside: " $script:integer;
write-host "String Variable Outside: " $string;
write-host "StringBuilder Variable Outside: " $stringBuilder;
```

...and these are the results.

```
Integer Variable Inside: 1
String Variable Inside: 1
StringBuilder Variable Inside: 1
Integer Variable Outside: 1
String Variable Outside:
StringBuilder Variable Outside: 1
```

Note that the string variable does not retain its value because although it is a reference type, it is also immutable.
How do I send an error response in Restlet? I have a Restlet `ServerResource`, which should process a GET request with a parameter `user`. If `user` is equal to some value, it should return some image, otherwise send an error response (404 or 403) indicating that the sender is not allowed to get the image.

```
import java.io.IOException;
import java.io.OutputStream;

import org.restlet.data.MediaType;
import org.restlet.representation.ObjectRepresentation;
import org.restlet.representation.Representation;
import org.restlet.resource.Get;
import org.restlet.resource.ResourceException;
import org.restlet.resource.ServerResource;

public class GetMap extends ServerResource {
    @Get
    public Representation getImage() {
        final String user = getQuery().getValues("user");
        if (user.equals("me")) {
            // Read map from file and return it
            byte[] data = readImage();
            final ObjectRepresentation<byte[]> or = new ObjectRepresentation<byte[]>(data, MediaType.IMAGE_PNG) {
                @Override
                public void write(OutputStream os) throws IOException {
                    super.write(os);
                    os.write(this.getObject());
                }
            };
            return or;
        }
        return null; // Here I want to send an error response
    }
    [...]
}
```

How can I send a standardized error response in the `getImage` method (instead of `return null`)?
Have a look at the `ServerResource#setStatus(Status)` [method JavaDoc](http://restlet.com/technical-resources/restlet-framework/javadocs/2.3/gae/api/org/restlet/resource/ServerResource.html#setStatus%28org.restlet.data.Status%29%20) and its overloads, which allow you to set the desired HTTP status while still returning a custom body. Alternatively, throw a new `ResourceException` ([class JavaDoc](http://restlet.com/technical-resources/restlet-framework/javadocs/2.3/jee/api/org/restlet/resource/ResourceException.html?is-external=true)); the framework will convert it to the correct HTTP status and provide a default message, although this is unlikely to be an image. This should meet your needs. JavaDoc links point to version 2.3.x as of 25/11/2015.
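For instance, a minimal sketch of the error branch, meant to replace the `return null;` inside `getImage` (assuming Restlet 2.x; the message text is just a placeholder):

```
import org.restlet.data.MediaType;
import org.restlet.data.Status;
import org.restlet.representation.StringRepresentation;
import org.restlet.resource.ResourceException;

// Option 1: set the status yourself and return a custom body
setStatus(Status.CLIENT_ERROR_NOT_FOUND);
return new StringRepresentation("No image available for this user", MediaType.TEXT_PLAIN);

// Option 2: throw and let the framework build a default error response
throw new ResourceException(Status.CLIENT_ERROR_FORBIDDEN);
```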
Execute command after seconds (without sleep) How can I run a command after X seconds without using sleep? Or with sleep, but without the shell just waiting for that command to finish? I used this, but it didn't work:

```
sleep 5 ; ls > a.txt
```

I need to run it in the background, and I'd prefer not to have to put it in a script. I'm trying to run `ls` after 5 seconds, without the shell simply blocking until the end of the sleep.
A more concise way of writing what Hennes suggested is

```
(sleep 5; echo foo) &
```

---

Alternatively, if you need more than a few seconds, you could use [`at`](http://linux.about.com/library/cmd/blcmdl1_at.htm). There are three ways of giving a command to `at`:

1. Pipe it:

   ```
   $ echo "ls > a.txt" | at now + 1 min
   warning: commands will be executed using /bin/sh
   job 3 at Thu Apr 4 20:16:00 2013
   ```

2. Save the command you want to run in a text file, and then pass that file to `at`:

   ```
   $ echo "ls > a.txt" > cmd.txt
   $ at now + 1 min < cmd.txt
   warning: commands will be executed using /bin/sh
   job 3 at Thu Apr 4 20:16:00 2013
   ```

3. You can also pass `at` commands from STDIN:

   ```
   $ at now + 1 min
   warning: commands will be executed using /bin/sh
   at> ls
   ```

   Then, press `Ctrl`+`D` to exit the `at` shell. The `ls` command will be run in one minute.

You can give very precise times in the format of `[[CC]YY]MMDDhhmm[.ss]`, as in

```
$ at -t 201412182134.12 < script.sh
```

This will run the script `script.sh` at 21:34 and 12 seconds on the 18th of December 2014. So, in theory, you could use `at` to run something five seconds in the future. However, that is kinda like using a tank to swat a fly, and Hennes's suggestion is better.
Verifying whether a tree is bst or not Python I have a practice interview question which tells me to verify whether a tree is a binary search tree (BST) or not, and to give a verification method... I have the class as

```
class Node:
    def __init__(self, k, val):
        self.key = k
        self.value = val
        self.left = None
        self.right = None
```

and other function definitions for the tree max and min values as

```
def tree_max(node):
    maxleft = float('-inf') if not node.left else tree_max(node.left)
    maxright = float('-inf') if not node.right else tree_max(node.right)
    return max(node.value, maxleft, maxright)

def tree_min(node):
    minleft = float('-inf') if not node.right else tree_min(node.left)
    minright = float('-inf') if not node.left else tree_min(node.right)
    return min(node.value, minleft, minright)
```

My verification method as

```
def verify(node):
    if tree_max(node.left) <= node.value and node.value <= tree_min(node.right):
        if verify(node.left) and verify(node.right):
            return True
        else:
            return False
    else:
        return False
```

My problem occurs when I try to implement the verification method: I seem to always get False, even when I try to make a valid BST. My implementation is as follows:

```
root = Node(10, "Hello")
root.left = Node(15, "Fifteen")
root.right = Node(30, "Thirty")
print verify(root)

root = Node(10, "Ten")
root.right = Node(20, "Twenty")
root.left = Node(5, "Five")
root.left.right = Node(15, "Fifteen")
print verify(root)
```

Both are giving me False... Is there a problem with my verification function or my min/max function? Any help would be appreciated.
I see four errors in your code.

1. First, your check for null children is backwards in `tree_min`. That is, you're checking if `node.right` exists before accessing `node.left`, and vice versa.

2. Second, `tree_min` returns negative infinity when called on a leaf node. You need to use positive infinity in the min calculation (negative infinity is correct in the max version).

3. Third, you have a logic error within `verify`, as it unconditionally calls `tree_min` or `tree_max` and itself on its child nodes, even if one or both of them are `None`. I suggest making all the functions handle being passed `None`, rather than relying on the caller to do the right thing. This also simplifies the `min` and `max` code a bit!

4. Lastly, you're doing your comparisons on `node.value`, which is the string you're giving each node. I suspect you want to be comparing using `node.key` instead. Comparing a float (like `float("-inf")`) to a string (like `"ten"`) is an error in Python 3, and even in Python 2, where it is legal, it probably doesn't work like you would expect.

With those issues fixed, I get expected results when I create valid and invalid trees. Your two examples are both invalid though, so if you were using them to test, you will always get a `False` result.

Finally, a couple of minor style issues (that aren't bugs, but still things that could be improved). Python supports chained comparisons, so you can simplify your first `if` statement in `verify` to `tree_max(node.left) <= node.key <= tree_min(node.right)`. You can further simplify that part of the code by connecting the checks with `and` rather than nesting an additional `if` statement.

Here's a version of your code that works for me (using Python 3, though I think it is all backwards compatible to Python 2):

```
class Node:
    def __init__(self, k, val):
        self.key = k
        self.value = val
        self.left = None
        self.right = None

def tree_max(node):
    if not node:
        return float("-inf")
    maxleft = tree_max(node.left)
    maxright = tree_max(node.right)
    return max(node.key, maxleft, maxright)

def tree_min(node):
    if not node:
        return float("inf")
    minleft = tree_min(node.left)
    minright = tree_min(node.right)
    return min(node.key, minleft, minright)

def verify(node):
    if not node:
        return True
    if (tree_max(node.left) <= node.key <= tree_min(node.right) and
            verify(node.left) and verify(node.right)):
        return True
    else:
        return False

root = Node(10, "Hello")
root.left = Node(5, "Five")
root.right = Node(30, "Thirty")
print(verify(root))  # prints True, since this tree is valid

root = Node(10, "Ten")
root.right = Node(20, "Twenty")
root.left = Node(5, "Five")
root.left.right = Node(15, "Fifteen")
print(verify(root))  # prints False, since 15 is to the left of 10
```
Fastest way to populate a matrix using row/column indices stored in vectors I'm trying to do something that seems relatively straightforward to do with something apply-esque, but I can only get it to work using a for loop. The general idea is I have two vectors, with one vector corresponding to a row in the matrix and another vector corresponding to the column, both the same length. I start with a 0 matrix, and increment [row,column] based on the pair of values in the two vectors. For example:

```
vectorCols <- c(1,2,3,1,3)
vectorRows <- c(2,1,2,3,2)
countMat <- matrix(rep(0,9),ncol=3)
```

And at the end, `countMat` is:

```
     [,1] [,2] [,3]
[1,]    0    1    0
[2,]    1    0    2
[3,]    1    0    0
```

This is pretty manageable with a `for` loop:

```
for (i in 1:length(vectorCols)){
  countMat[vectorRows[i],vectorCols[i]] <- countMat[vectorRows[i],vectorCols[i]] + 1
}
```

But I can't help thinking there is a better way to do this in R. I've tried using the `apply` family of functions, but these don't cooperate well when you want to assign something. I know I could use `mapply` and build each element of `countMat` one value at a time, but this seems inefficient--`vectorRows` and `vectorCols` are very long, and it seems wasteful to fully traverse them an entire time for each cell in `countMat`. But other than a loop and `mapply`, I can't think of how to do this. I've considered using `assign` with one of the `apply` family, but there's a caveat--my matrix actually has names for the columns and rows, with the names stored in `vectorCols` and `vectorRows`, and it seems `assign` doesn't want to play well with something like `countMat["rowName"]["columnName"]` (not to mention that `apply` will still want to return a value for each step in the iteration). Any suggestions? I'd also be curious if there is an ideal way to do this if I don't have names for the matrix columns and rows. If that's the case then maybe I can convert `vectorCols` and `vectorRows` to numbers, then build the matrix, then rename everything. Thanks all.
Here are some solutions. No packages are needed. **1) table** ``` table(vectorRows, vectorCols) ``` giving: ``` vectorCols vectorRows 1 2 3 1 0 1 0 2 1 0 2 3 1 0 0 ``` Note that if there is any row or column with no entries then it will not appear. **2) aggregate** ``` ag <- aggregate( Freq ~ ., data.frame(Freq = 1, vectorRows, vectorCols), sum) countMat[as.matrix(ag[-3])] <- ag[[3]] ``` giving: ``` > countMat [,1] [,2] [,3] [1,] 0 1 0 [2,] 1 0 2 [3,] 1 0 0 ``` **3) xtabs** ``` xtabs(~ vectorRows + vectorCols) ``` giving: ``` vectorCols vectorRows 1 2 3 1 0 1 0 2 1 0 2 3 1 0 0 ```
extending an interface instead of implementing it An interface definition (see below) has me confused: ``` public abstract interface Cursor { // hide details } public abstract interface Parcelable { // hide details } public interface Foo<T extends Bar> extends Cursor, Parcelable { // details omitted for brevity }; ``` 1. I thought Java doesn't have multiple inheritance, so a class cannot `extends` more than one class. 2. On the other hand a class/interface can implement more than one interface. So why use `extends` instead of `implements`?
The rules for class/interface inheritance and implementation in Java 7 and below are as follows:

1. A class can only inherit from a single class (single inheritance)
2. A class might implement one or more interfaces (multiple interface implementation)
3. An interface might inherit from one or more interfaces (multiple interface inheritance)

When inheriting, you use the keyword `extends`, both when a class inherits from a class or an interface inherits from one or more interfaces. The term `extends` is to be understood as follows: *this class/interface is an extension of its parent class/interface(s) -- it is everything the parent or parents are, and possibly more*.

When a class implements an interface (or more than one), you use the keyword `implements`. The term `implements` is to be understood as follows: *instances of this class are guaranteed to provide implementations for the methods of the parent interface(s)*.

Also, note that an `abstract class` uses the keyword `implements` when referring to a parent interface even if the abstract class itself does not implement the interface methods. This does not violate the principles stated above: there can only be instances of concrete classes, which must implement every declared method; therefore, any instance of that abstract class must be, in reality, an instance of a subclass implementing the methods from the interface. For example, this is perfectly valid: `abstract class AnAbstractClass implements Cursor {}`, even if Cursor declares lots of methods.

In your example, the interface `Foo<T extends Bar>` inherits from two other interfaces, namely `Cursor` and `Parcelable`, which is an example of the 3rd point above. You use the keyword `extends` and not `implements` because the interface `Foo` is not implementing anything: the bodies of the methods of the parent interfaces are still not defined! If you had a class implementing those two parent interfaces, you'd do something like:

```
class AClass implements Cursor, Parcelable {
    @Override
    public // ... method signature and implementation
           // for every method in Cursor and Parcelable
}
```

Also, note that you don't need to use the keyword `abstract` when declaring an interface: both the type and all its declared methods are implicitly abstract. The methods are also implicitly public (and cannot be declared with any other access modifier), since the purpose of an interface is precisely to declare the "public interface of an object", that is, which methods are guaranteed to be publicly available by any object implementing that interface.

---

Finally, some of the concepts described above are subject to change slightly in Java 8: in that release, you will be able to provide "default implementations" of methods right in the interface, in case the classes implementing the interfaces do not implement them. Things will be more similar to "class multiple inheritance": a class will be able to inherit implemented methods from multiple sources, which is not possible in Java 7 or below. For example (the syntax is not yet fixed):

```
interface NaturalNumber {
    void increment();

    default void add(int n) {
        for (int i = 0; i < n; i++) {
            increment();
        }
    }
}

class NaturalNumberImplementation implements NaturalNumber {
    private int n = 0;

    @Override
    public void increment() {
        n++;
    }
}
```

This exists to allow you to neglect implementing some methods when you don't want to, while still being able to implement "better" versions of them when possible/needed.
For instance: ``` class HighPerformanceNaturalNumberImplementation implements NaturalNumber { private int n = 0; @Override public void increment() { n++; } @Override public void add(int n) { this.n += n; } } ```
Laravel multiple select array validation always gives an error I am using a multiple select in my form and I'm facing a problem with its validation. The multiple select field name is an array; if I use the same name for the validation rule it works, but it keeps giving a validation error even when options are selected. Here are my HTML code and validation rule.

```
<select multiple="multiple" name="skills[]" class="form-control">
```

Validation rule:

```
'skills[]' => 'required'
```

If I use the field name without `[]`, or as `skills.*`, validation does not work for this field. Please guide me on where I am doing something wrong. I am using Laravel 5.7 for my project.
If your `select` looks like this for example: ``` <div class="form-group row"> <label for="skills" class="col-md-4 col-form-label text-md-right">Skills</label> <div class="col-md-6"> <select multiple name="skills[]" id="skills" class="form-control{{ $errors->has('skills') ? ' is-invalid' : '' }}" required> <option value="ios">iOS</option> <option value="php">PHP</option> <option value="laravel">Laravel</option> </select> @if($errors->has('skills')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('skills') }}</strong> </span> @endif </div> </div> ``` Create a custom request: ``` $ php artisan make:request ExampleRequest ``` `ExampleRequest` validation would look like this: ``` public function authorize() { return true; } public function rules() { return [ 'skills' => 'required|array', ]; } ``` Then just grab the validated data from your `$request` directly ``` public function submitForm(ExampleRequest $request) { // at this point, validation already passed // if validation failed, you would be back at form with errors $skills = request('skills'); // or $skills = $request->skills; dd($skills); } ``` Custom requests are being validated first before even hitting your controller method.
Will Java app slow down by presence of -Xdebug or only when stepping through code? I realize that Java code will [slow down](https://stackoverflow.com/questions/2195720/why-does-java-code-slow-down-in-debugger) when run in a debugger. The question is: will the code slow down simply by starting Java with these options?

```
-Xdebug -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
```

Or does the slowdown only happen when you connect to the "debug port" and actually step through code using an IDE?
First, to strictly answer your question - at least as stated in its title - `-Xdebug` only *enables debugging support in the VM* using JVMDI in JVMs prior to 5.0. So in itself, it doesn't do much. Moreover, JVMDI is [deprecated since 5.0](http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html) in favor of [JVMTI](http://java.sun.com/j2se/1.5.0/docs/guide/jvmti/jvmti.html):

> -**Xdebug**
> Start with support for [JVMDI](http://java.sun.com/j2se/1.5.0/docs/guide/jpda/jvmdi-spec.html) enabled. JVMDI has been deprecated and is not used for debugging in J2SE 5.0, so this option isn't needed for debugging in J2SE 5.0.

So -Xdebug doesn't do anything anymore and the important part is:

```
-Xrunjdwp:<name1>[=<value1>],<name2>[=<value2>]...
```

or, starting with Java 5.0, the newer form (which you should prefer, as the JDWP agent in 5.0 uses the JVM TI interface to the VM rather than the older JVMDI interface):

```
-agentlib:jdwp=<name1>[=<value1>],<name2>[=<value2>]...
```

Now, to my knowledge, just [loading the jdwp agent](http://java.sun.com/javase/6/docs/platform/jvmti/jvmti.html#starting) and/or configuring the JVM to listen for a socket connection on a given port doesn't have any noticeable performance impact. But connecting a debugger does.
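Putting it together, the full command line would look something like this (the jar name is just a placeholder):

```
java -agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n -jar myapp.jar
```

With `server=y` the JVM listens on port 5005 for a debugger to attach, and with `suspend=n` it starts the application immediately instead of waiting for one.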
Android CPU register names? This code fragment is extracted from an Android crash report on a Samsung Tab S:

```
Build fingerprint: 'samsung/chagallwifixx/chagallwifi:5.0.2/LRX22G/T800XXU1BOCC:user/release-keys'
Revision: '7'
ABI: 'arm'
    r0 a0d840bc  r1 a0dcb880  r2 00000001  r3 a0d840bc
    r4 a0dc3c4c  r5 00000000  r6 a066d200  r7 00000000
    r8 32d68f40  r9 a0c359a8  sl 00000014  fp bef3ba84
    ip a0dc3fb8  sp bef3ba10  lr a0c35a0c  pc a0c34bc8  cpsr 400d0010
```

`r0` through `r9` are pretty clearly general purpose registers, `sp` (`r13`) is the stack pointer, and `pc` (`r15`) is the program counter (instruction pointer). Referring to the Registers section of [Wikipedia's ARM architecture page](https://en.wikipedia.org/wiki/ARM_architecture#Registers) (one of *many* pages I looked through), I find that `lr` (`r14`) is the link register, and `cpsr` is the "Current Program Status Register." I would like to know what `sl` (`r10`), `fp` (`r11`) and `ip` (`r12`) are. I expect `ip` is *not* the "instruction pointer" because that function is done by `pc` (`r15`). Is there a reference document I haven't found that illustrates these names?
The current [ARM EABI procedure call standard](http://infocenter.arm.com/help/topic/com.arm.doc.ihi0042e/IHI0042E_aapcs.pdf) outlines the standard 'special' names for r12-r15:

- PC (r15): Program counter
- LR (r14): Link register
- SP (r13): Stack pointer
- IP (r12): Intra-procedure scratch register\*

The GNU tools also still support names from the deprecated [legacy APCS](http://www.heyrick.co.uk/assembler/apcsintro.html) as identifiers for the given register numbers, even though they no longer necessarily have any meaning:

- FP (r11): Frame pointer - may still be true for ARM code; Thumb code tends to keep actual frame pointers in r7, and of course the code may be compiled without frame pointers at all, in which cases "fp" is just another callee-saved general register.
- SL (r10): Stack limit - I don't actually know the history of that one, but in most modern code r10 is no more special than r4-r8.

Note that r9 is *not* necessarily a general-purpose register - the EABI reserves it for platform-specific purposes. Under linux-gnueabi it's nothing special, but other platforms may use it for special purposes like a TLS or global object table pointer, so it may also go by SB (static base) or TR (thread register).

\* The story behind that one is the limited range of the PC-relative branch instructions - if the linker finds the target of a call ends up more than 32MB away, it may generate a veneer (some extra instructions within range of the call site) as the branch target, that computes the real address and performs an absolute branch, for which it may need a scratch register.
Aggregating a timestamped zoo object by clock time (i.e. not solely by time in the zoo object) I have a zoo object which consists of a timestamped (to the second) timeseries. The timeseries is irregular in that the time intervals between the values are not regularly spaced. I would like to transform the irregularly spaced timeseries object into a regularly spaced one, where the time intervals between values is a constant - say 15 minutes, and are "real world" clock times. Some sample data may help illustrate further ``` # Sample data 2011-05-05 09:30:04 101.32 2011-05-05 09:30:14 100.09 2011-05-05 09:30:19 99.89 2011-05-05 09:30:35 89.66 2011-05-05 09:30:45 95.16 2011-05-05 09:31:12 100.28 2011-05-05 09:31:50 100.28 2011-05-05 09:32:10 98.28 ``` I'd like to aggregate them (using my custom function) for every specified time period (e.g. 30 second time bucket) such that the output looks like the table presented below. The key is that I want to aggregate every 30 seconds by clock time NOT 30 seconds starting from my first observation time. Naturally, the first time bucket would be the first time bucket for which I have a recorded observation (i.e. row) in the data to be aggregated. ``` 2011-05-05 09:30:00 101.32 2011-05-05 09:30:30 89.66 2011-05-05 09:31:00 100.28 ``` In the example given, my custom aggregate function simply returns the first value in the 'set' of 'selected rows' to aggregate over.
Read in the data and then aggregate it by minute: ``` Lines <- "2011-05-05 09:30:04 101.32 2011-05-05 09:30:14 100.09 2011-05-05 09:30:19 99.89 2011-05-05 09:30:35 89.66 2011-05-05 09:30:45 95.16 2011-05-05 09:31:12 100.28 2011-05-05 09:31:50 100.28 2011-05-05 09:32:10 98.28" library(zoo) library(chron) toChron <- function(d, t) as.chron(paste(d, t)) z <- read.zoo(text = Lines, index = 1:2, FUN = toChron) aggregate(z, trunc(time(z), "00:01:00"), mean) ``` The result is: ``` (05/05/11 09:30:00) (05/05/11 09:31:00) (05/05/11 09:32:00) 97.224 100.280 98.280 ```
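The question actually asked for 30-second clock-time buckets with a custom aggregator that keeps the first value in each bucket; assuming `trunc` from chron accepts a time-of-day string for the units argument the same way it accepts `"00:01:00"` above, the same pattern should carry over directly:

```
# 30-second clock-time buckets, keeping the first observation in each bucket
aggregate(z, trunc(time(z), "00:00:30"), function(x) x[1])
```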
Can't ifdown eth0 (main interface) I can't `ifdown` an interface on Debian 6.0.5: ``` user@box:/etc/network$ sudo ifdown eth0 && sudo ifup eth0 ifdown: interface eth0 not configured SIOCADDRT: File exists Failed to bring up eth0. user@box:/etc/network$ cat interfaces auto lo iface lo inet loopback allow-hotplug eth0 allow-hotplug eth1 auto eth0 iface eth0 inet static address 10.0.0.1 netmask 255.255.255.0 gateway 10.0.0.254 auto eth1 iface eth1 inet manual ``` As requested by marco: ``` user@box:/etc/network/$ cat /run/network/ifstate lo=lo eth1=eth1 ```
Check the contents of the file `/run/network/ifstate`. `ifup` and `ifdown` use this file to note which network interfaces can be brought up and down. Thus, `ifup` can be easily confused when other networking tools are used to bring up an interface (e.g. `ifconfig`). From [man ifup](http://www.unix.com/man-page/Linux/8/ifup/) > > The program keeps records of whether network interfaces are up or > down. Under exceptional circumstances these records can become > inconsistent with the real states of the interfaces. For example, > an interface that was brought up using ifup and later > deconfigured using `ifconfig` will still be recorded as up. To fix > this you can use the `--force` option to force `ifup` or `ifdown` to > run configuration or deconfiguration commands despite what it > considers the current state of the interface to be. > > >
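In other words, since `/run/network/ifstate` here has no `eth0` entry, `ifdown` believes the interface was never configured. Forcing the deconfiguration should bring the records back in sync:

```
sudo ifdown --force eth0
sudo ifup eth0
```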
Do any C++11 thread-safety guarantees apply to third-party thread libraries compiled/linked with C++11? C++11 offers features like [thread-safe initialization of static variables](https://stackoverflow.com/questions/8102125/is-local-static-variable-initialization-thread-safe-in-c11), and citing that question we'll say for instance:

```
Logger& g_logger() {
    static Logger lg;
    return lg;
}
```

So ostensibly (?) this is true regardless of whether a module compiled with a C++11 compiler included the thread headers, or spawned any threads in its body. You're offered the guarantee even if it were linked against another module that used C++11 threads and called the function. But what if your "other module" that calls into this code wasn't using C++11 threads, but something like Qt's `QThread`? Is atomic initialization of statics then outside of the scope of C++11's ability to make such a guarantee? Or does the mere fact of a module having been compiled with C++11 and then linked against other C++11 code imply that you will get the guarantee regardless? Does anyone know a good reference where issues like this are covered?
Your example relies on the memory model, not on how threads are implemented. Whoever executes this code will execute the same instructions. If two or more cores execute this code, they will obey the memory model. The basic implementation is equivalent to this:

```
#include <mutex>

std::mutex mtx;
Logger * lg = 0;

Logger& g_logger() {
    std::unique_lock<std::mutex> lck(mtx);
    if (lg == 0)
        lg = new Logger;
    return *lg;
}
```

This code may be optimized to use the double-checked locking pattern (DCLP) which, on a particular processor architecture (e.g., on the x86) might be much faster. Also, because the compiler generates this code, it will know not to make crazy optimizations that break the naive DCLP.
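For reference, the double-checked locking pattern alluded to above looks roughly like this when written out portably in C++11 (a sketch of the idea using the `Logger` type from the question, not what any particular compiler actually emits):

```
#include <atomic>
#include <mutex>

std::atomic<Logger*> lg_ptr{nullptr};
std::mutex lg_mtx;

Logger& g_logger_dclp() {
    Logger* p = lg_ptr.load(std::memory_order_acquire);
    if (p == nullptr) {                       // first check, without the lock
        std::lock_guard<std::mutex> lck(lg_mtx);
        p = lg_ptr.load(std::memory_order_relaxed);
        if (p == nullptr) {                   // second check, under the lock
            p = new Logger;
            lg_ptr.store(p, std::memory_order_release);
        }
    }
    return *p;
}
```

After initialization, callers only pay for the acquire load; the mutex is only ever taken during the initial race.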
Instantiate class on variable declaration or within constructor

> **Possible Duplicate:**
> [Where is the “proper” place to initialize class variables in AS3](https://stackoverflow.com/questions/3105081/where-is-the-proper-place-to-initialize-class-variables-in-as3)

I was wondering if anyone knows whether it's better to instantiate a class on its variable declaration or within a constructor? For example, this:

```
protected var _errorHandler:ErrorHandler = new ErrorHandler();
```

or this:

```
protected var _errorHandler:ErrorHandler;

public function someClass() {
    _errorHandler = new ErrorHandler();
}
```

A small point I think, but I want my code to be as robust and efficient as possible! Thanks Chris
Initialization in the constructor is preferred, for readability--for being able to easily see what gets initialized when. The least readable option would be to mix these, which I can't recommend. There is a third option that you will see AS3 programmers use (a short sketch of it follows at the end of this answer):

- No initialization in the variable declarations
- Empty (or nearly empty) constructor
- All initialization done in one or more dedicated init() functions

This approach has two things to offer:

1. You can easily reset the object for re-use by calling init again
2. You can get around the limitation that AS3 does not let you overload the constructor like other similar languages (Java/C++/C#). You might want to, for example, be able to initialize a data structure with one or more different types of objects.

As far as performance goes, I believe your two examples would compile down to the same byte code. The AS3 compiler makes a special class initializer for *static* declarations that are outside the constructor, but for regular member variables initialized at declaration time, I expect it just moves the initializations to inside the constructor for you. But does it move them before or after what is explicitly in the constructor? I don't remember, which is why I cite readability as a main reason to put everything in the constructor yourself :-)
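Here is the promised minimal AS3 sketch of the init() pattern (class and member names are just illustrative):

```
public class SomeClass {
    protected var _errorHandler:ErrorHandler;

    public function SomeClass() {
        init();
    }

    // Call again at any time to reset the object for re-use
    public function init():void {
        _errorHandler = new ErrorHandler();
    }
}
```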
Is the parent attribute of the bean tag equivalent to inheritance in Java? I started studying the Spring documentation and came across the explanation of the `parent` attribute. Is using the `parent` attribute between two beans equivalent to an inheritance relationship between these classes? If so, how do I perform method overriding? Also, in some contexts I saw both the `parent` attribute in the XML config file and the `extends` keyword in the bean class being used. Is it required to specify both in Spring in order to implement inheritance?
- In Spring, the `parent` in a bean configuration signifies `configuration inheritance` and is not related to Java inheritance.
- The `configuration inheritance` saves a lot of code, as you do away with repeated XML. For example, say you have the following bean with attributes:

```
class MyBean {
    attrib1
    attrib2
    attrib3
    attrib4
}
```

Say one bean instance, `bean1`, just needs `attrib1` and `attrib2`, whereas another instance, `bean2`, needs all four attributes. Let's configure these two beans:

```
<bean id="bean1" class="MyBean">
    <property name="attrib1" value="val1" />
    <property name="attrib2" value="val2" />
</bean>

<bean id="bean2" parent="bean1">
    <property name="attrib3" value="val3" />
    <property name="attrib4" value="val4" />
</bean>
```

Note that `bean2` just needed to configure `attrib3` and `attrib4`. The other two attributes are **inherited** from `bean1`.

To answer your question:

> Is it required to specify both in Spring in order to implement inheritance?

No. As mentioned earlier, this is not the same as Java inheritance.
A few questions about how JavaScript works I have been looking deeply into JavaScript lately to fully understand the language and have a few nagging questions that I can not seem to find answers to (specifically dealing with object-oriented programming). Assuming the following code:

```
function TestObject() {
    this.fA = function() {
        // do stuff
    }

    this.fB = testB;

    function testB() {
        // do stuff
    }
}

TestObject.prototype = {
    fC : function() {
        // do stuff
    }
}
```

What is the difference between functions `fA` and `fB`? Do they behave exactly the same in scope and potential ability? Is it just convention, or is one way technically better or more proper? If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype such as `fC` even be worthwhile? Is there any benefit to doing so? Is the prototype only really useful when dealing with many instances of an object or inheritance? And what is technically the "proper" way to add methods to the prototype: the way I have above, or calling `TestObject.prototype.functionName = function(){}` every time? I am looking to keep my JavaScript code as clean and readable as possible but am also very interested in what the proper conventions for objects are in the language. I come from a Java and PHP background and am trying not to make any assumptions about how JavaScript works, since I know it is very different, being prototype based.
> > What is the difference between functions fA and fB > > > In practice, nothing. The primary difference between a function expression (fA) and a function declaration (fB) is when the function is created (declared functions are available before any code is executed, whereas a function expression isn't available until the expression is actually executed). There are also various quirks associated with [function expressions](http://perfectionkills.com/named-function-expressions-demystified/) that you may stumble across. In the example, I'd use a function expression, simply because declaring a function expression, then assigning the result seems a bit abstracted. But there is nothing "right" or "wrong" about either approach. > > If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype such as fC even be worthwhile? > > > No. Just about everyone who goes does inheritance finds that plain objects are often simpler and therefore "better". Prototype inheritance is very handy for patching built–in objects though (e.g. adding Array.prototype.each where absent). > > And what is technically the "proper" way to add methods to the prototype… > > > There isn't one. Replacing the default prototype with some other object seems like a bit of a waste, but assigning an object created by a literal is perhaps tidier and easier to read that sequential assignments. For one or two assignments, I'd use: ``` Constructor.prototype.method = function(){…} ``` but for lots of methods I'd use an object literal. Some even use a classic extend function and do: ``` myLib.extend(Constructor.prototype, { method: function(){…} }); ``` Which is good for adding methods if some have already been defined. Have a look at some libraries and decide what you like, some mix and match. Do whatever suits a particular circumstance, often it's simply a matter of getting enough code to all look the same, then it will look neat whatever pattern you've chosen.
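A tiny illustration of that creation-time difference (the names are made up for the example):

```
console.log(typeof declared);  // "function" - declarations exist before any code runs
console.log(typeof expressed); // "undefined" - the variable is hoisted, but not the function

function declared() {}
var expressed = function () {};
```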
Response to preflight request doesn't pass access control check Laravel and Ajax call I have a REST API made in Laravel 5.1 hosted on a remote server. Now, I'm trying to consume that API from another website (which I have running locally). In Laravel I set the required lines to send the CORS headers. I also tested the API using Postman and everything seems to be OK!

**In the Frontend**

Then, in the website I sent the POST request using Ajax, with this code:

```
var url="http://xxx.xxx.xxx.xxx/apiLocation";
var data=$("#my-form").serialize();
$.ajax({
    type: "POST",
    url: url,
    data: data,
    headers: { 'token': 'someAPItoken that I need to send'},
    success: function(data) {
        console.log(data);
    },
    dataType: "json",
});
```

But then I get this error in the console:

> XMLHttpRequest cannot load <http://xxx.xxx.xxx.xxx/apiLocation>. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<http://localhost>' is therefore not allowed access.

**In the Backend**

In the API I set this (using a Laravel Middleware to set the headers):

```
return $next($request)
    ->header('Access-Control-Allow-Origin', '*')
    ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
```

So, I'm confused about where exactly the problem is.

1. Is it in the server? But then why does it work fine with Postman?
2. Is it in the Ajax call? If so, what should I add?
Your backend code must include some explicit handling for `OPTIONS` requests that sends a `200` response with just the configured headers; for example:

```
if ($request->getMethod() == "OPTIONS") {
    return Response::make('OK', 200, $headers);
}
```

The server-side code also must send an `Access-Control-Allow-Headers` response header that includes the name of the `token` request header your frontend code is sending:

```
->header('Access-Control-Allow-Headers', 'token')
```

> But then why does it work fine with Postman?

Postman isn't a web app and isn't bound by the same-origin restrictions placed on web apps by browsers to block them from making cross-origin requests. Postman is a browser bolt-on for convenience of testing requests in the same way they could be made outside the browser using `curl` or whatever from the command line. Postman can freely make cross-origin requests.

<https://developer.mozilla.org/docs/Web/HTTP/Access_control_CORS> in contrast explains how browsers block web apps from making cross-origin requests, but also how you can un-block browsers from doing that by configuring your backend to send the right CORS headers.

<https://developer.mozilla.org/docs/Web/HTTP/Access_control_CORS#Preflighted_requests> explains why the browser is sending that `OPTIONS` request your backend needs to handle.
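Putting both pieces together, a minimal sketch of the Laravel middleware's `handle` method might look like this (the header list and variable names are illustrative, and `Content-Type` is included only because it is commonly needed as well):

```
public function handle($request, Closure $next)
{
    $headers = [
        'Access-Control-Allow-Origin' => '*',
        'Access-Control-Allow-Methods' => 'GET, POST, PUT, DELETE, OPTIONS',
        'Access-Control-Allow-Headers' => 'token, Content-Type',
    ];

    // Answer CORS preflight requests directly, before they reach the routes
    if ($request->getMethod() == "OPTIONS") {
        return response('OK', 200, $headers);
    }

    $response = $next($request);
    foreach ($headers as $key => $value) {
        $response->header($key, $value);
    }
    return $response;
}
```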
Delegates.observable() is not getting notified when using MutableList I'm trying to use `Delegates.observable()` in order to get notified when a change to a `MutableList` has happened.

```
val items: MutableList<Item> by Delegates.observable(startingItems) { _, old, new ->
    Log.e("observable", "${old.size} -> ${new.size}")
}
```

But nothing happens whenever I try to remove something from, or add something to, the list: there's no trace of the log message in Logcat, even though it's supposed to be there.
The [docs](https://kotlinlang.org/docs/reference/delegated-properties.html) state: > > **Observable** > > > `Delegates.observable()` takes two arguments: the initial value and a handler for modifications. **The handler gets called every time we assign to the property** (after the assignment has been performed). It has three parameters: a property being assigned to, the old value and the new one. > > > In your case, you do not assign to `items`, you only add to the existing instance. The callback never gets invoked. Suggestion: Use a mutable property with a read-only `List` and reassign it when a new element is being added: ``` var items: List<String> by Delegates.observable(mutableListOf()) { _, old, new -> println("changed") } //add like this: user.items += "new val" ``` The `plus` operator does not call `add` on the list but creates a new instance with all old elements plus the new one.
Write-Host vs Write-Information in PowerShell 5 It is well known that `Write-Host` is evil. In `PowerShell 5`, `Write-Information` is added and is considered a replacement for `Write-Host`. But, really, which is better?

`Write-Host` is evil because it does not use the pipeline, so the input message can't be reused. But all `Write-Host` does is show something in the console, right? In what case would we reuse the input? Anyway, if we really want to reuse the input, why not just write something like this:

```
$foo = "Some message to be reused like saving to a file"
Write-Host $foo
$foo | Out-File -Path "D:\foo.log"
```

Another point about `Write-Host` is that it can specify in what color the messages are shown in the console, by using `-ForegroundColor` and `-BackgroundColor`.

On the other hand, by using `Write-Information`, the input message can be used wherever we want via stream number 6, and we don't need to write the extra code above. But the dark side of this is that, if we want messages written to the console and also saved to a file, we have to do this:

```
# Always set the $InformationPreference variable to "Continue"
$InformationPreference = "Continue";

# if we don't want something like this:
# ======= Example 1 =======
# File Foo.ps1
$InformationPreference = "Continue";
Write-Information "Some Message"
Write-Information "Another Message"

# File AlwaysRunThisBeforeEverything.ps1
.\Foo.ps1 6>"D:\foo.log"
# ======= End of Example 1 =======

# then we have to add '6>"D:\foo.log"' to every line of Write-Information like this:
# ======= Example 2 =======
$InformationPreference = "Continue";
Write-Information "Some Message" 6>"D:\foo.log"
Write-Information "Another Message" 6>"D:\foo.log"
# ======= End of Example 2 =======
```

A little bit redundant, I think. I only know this one aspect of the "vs" question, and there must be things I haven't considered. So if there is anything else that can convince me that `Write-Information` is better than `Write-Host`, please leave your kind answers here. Thank you.
The `Write-*` cmdlets allow you to channel the output of your PowerShell code in a structured way, so you can easily distinguish messages of different severity from each other. - `Write-Host`: display messages to an interactive user on the console. Unlike the other `Write-*` cmdlets this one is neither suitable nor intended for automation/redirection purposes. Not evil, just different. - `Write-Output`: write the "normal" output of the code to the default (success) output stream ("STDOUT"). - `Write-Error`: write error information to a separate stream ("STDERR"). - `Write-Warning`: write messages that you consider warnings (i.e. things that aren't failures, but something that the user should have an eye on) to a separate stream. - `Write-Verbose`: write information that you consider more verbose than "normal" output to a separate stream. - `Write-Debug`: write information that you consider relevant for debugging your code to a separate stream. `Write-Information` is just a continuation of this approach. It allows you to implement log levels in your output (`Debug`, `Verbose`, `Information`, `Warning`, `Error`) and still have the success output stream available for regular output. As for why `Write-Host` became a wrapper around `Write-Information`: I don't know the actual reason for this decision, but I'd suspect it's because most people don't understand how `Write-Host` actually works, i.e. what it can be used for and what it should not be used for. --- To my knowledge there isn't a generally accepted or recommended approach to logging in PowerShell. You could for instance implement a single logging function like [@JeremyMontgomery](https://stackoverflow.com/a/38537613/1630171) suggested in his answer: ``` function Write-Log { Param( [Parameter(Mandatory=$true, Position=0)] [ValidateNotNullOrEmpty()] [string]$Message, [Parameter(Mandatory=$false, Position=1)] [ValidateSet('Error', 'Warning', 'Information', 'Verbose', 'Debug')] [string]$LogLevel = 'Information' ) switch ($LogLevel) { 'Error' { ... } 'Warning' { ... } 'Information' { ... } 'Verbose' { ... } 'Debug' { ... } default { throw "Invalid log level: $_" } } } Write-Log 'foo' # default log level: Information Write-Log 'foo' 'Information' # explicit log level: Information Write-Log 'bar' 'Debug' ``` or a set of logging functions (one for each log level): ``` function Write-LogInformation { Param( [Parameter(Mandatory=$true, Position=0)] [ValidateNotNullOrEmpty()] [string]$Message ) ... } function Write-LogDebug { Param( [Parameter(Mandatory=$true, Position=0)] [ValidateNotNullOrEmpty()] [string]$Message ) ... } ... Write-LogInformation 'foo' Write-LogDebug 'bar' ``` Another option is to create a custom logger object: ``` $logger = New-Object -Type PSObject -Property @{ Filename = '' Console = $true } $logger | Add-Member -Type ScriptMethod -Name Log -Value { Param( [Parameter(Mandatory=$true, Position=0)] [ValidateNotNullOrEmpty()] [string]$Message, [Parameter(Mandatory=$false, Position=1)] [ValidateSet('Error', 'Warning', 'Information', 'Verbose', 'Debug')] [string]$LogLevel = 'Information' ) switch ($LogLevel) { 'Error' { ... } 'Warning' { ... } 'Information' { ... } 'Verbose' { ... } 'Debug' { ... 
} default { throw "Invalid log level: $_" } } } $logger | Add-Member -Type ScriptMethod -Name LogDebug -Value { Param([Parameter(Mandatory=$true)][string]$Message) $this.Log($Message, 'Debug') } $logger | Add-Member -Type ScriptMethod -Name LogInfo -Value { Param([Parameter(Mandatory=$true)][string]$Message) $this.Log($Message, 'Information') } ... Write-Log 'foo' # default log level: Information $logger.Log('foo') # default log level: Information $logger.Log('foo', 'Information') # explicit log level: Information $logger.LogInfo('foo') # (convenience) wrapper method $logger.LogDebug('bar') ``` Either way you can externalize the logging code by - putting it into a separate script file and [dot-sourcing](https://technet.microsoft.com/en-us/library/hh847841.aspx) that file: ``` . 'C:\path\to\logger.ps1' ``` - putting it into a [module](https://msdn.microsoft.com/en-us/library/dd878340%28v=vs.85%29.aspx) and importing that module: ``` Import-Module Logger ```
Why does calling a pure virtual method without a body not result in a linker error? I've come across a quite weird scenario today. When directly calling a pure virtual method in the Interface constructor, I get an undefined reference error.

```
class Interface {
public:
    virtual void fun() const = 0;
    Interface(){ fun(); }
};

class A : public Interface {
public:
    void fun() const override {};
};

int main() {
    A a;
}
```

Results in:

```
prog.cc: In constructor 'Interface::Interface()':
prog.cc:5:22: warning: pure virtual 'virtual void Interface::fun() const' called from constructor
    5 |     Interface(){ fun(); }
      |                        ^
/tmp/ccWMVIWG.o: In function `main':
prog.cc:(.text.startup+0x13): undefined reference to `Interface::fun() const'
collect2: error: ld returned 1 exit status
```

However, wrapping the call to fun() in a different method like this:

```
class Interface {
public:
    virtual void fun() const = 0;
    Interface(){ callfun(); }
    virtual void callfun() { fun(); }
};

class A : public Interface {
public:
    void fun() const override {};
};

int main() {
    A a;
}
```

compiles just fine and (obviously) crashes with a pure virtual call error. I've tested it on the latest GCC 8.2.0 and 9.0.0 and Clang 8.0.0. Out of those, only GCC produces a linker error in the first case. Wandbox links for a full working example with the error:

- <https://wandbox.org/permlink/KhXsBeoRXf9v0iJr>
- <https://wandbox.org/permlink/38JEGGyA3hfAfPAS>

EDIT: I'm getting flagged for duplication, but I'm not sure how this question is a duplicate. It doesn't have anything to do with the dangers of calling a pure virtual method (from a constructor or otherwise); I'm aware of them. I was trying to understand why the compiler permits this call in one scenario and fails to do so in another, which was explained very well by Adam Nevraumont.

EDIT2: It seems that even if `callFun` is not virtual, it still somehow prevents GCC from devirtualizing and inlining the `fun` call. See the example below:

```
class Interface {
public:
    virtual void fun() const = 0;
    Interface(){ callfun(); }
    void callfun() { fun(); }
};

class A : public Interface {
public:
    void fun() const override {};
};

int main() {
    A a;
}
```

- <https://wandbox.org/permlink/6k8i27DaAiwNIFJI>
You aren't calling the pure virtual function, you are doing a lookup in the vtable for the current entry in the virtual function table for that function. As it happens, at that point it is a pure virtual function, so you crash due to UB. In the first case, you are getting a linker error because gcc is devirtualizing the call to `fun` in the ctor. A devirtualized call to `fun` directly calls the pure virtual method. This is possible because while constructing `Interface`, the compiler knows the state of the virtual function table (derived class modifications to it do not occur yet). In the second case, the compiler can devirtualize the call to `callFun` from the ctor. But the call to `fun` from within `callFun` cannot be devirtualized, as `callFun` could be called from outside the ctor in another method. Devirtualizing it would be incorrect *in the general case*. In this specific case, if the compiler devirtualized `callFun` *and then* inlined it, it could then devirtualize `fun` in the inlined copy. But the compiler doesn't do this, so no devirtualization occurs. As an aside, you can implement that pure virtual function and cause every example you provided to both link and run fine. ``` void Interface::fun() const {} ``` anywhere in any `.cpp` file linked in will make your code link, and be correct regardless. Pure virtual doesn't mean "has no implementation" in C++, it just means "derived class must provide an override, and it is legal for me not to have an implementation".
How can a reference require no storage? From [this question](https://stackoverflow.com/questions/3744675/size-of-references-in-64bit-environments), and consequently, from the Standard (ISO C++-03):

> It is unspecified whether or not a reference requires storage (3.7).

In some answers in that thread, it's said that references have, internally, the same structure as a pointer, thus having the same size (32/64 bits). What I'm struggling to grasp is: how would a reference come **not** to require storage? Any sample code exemplifying this would be greatly appreciated.

**Edit**: Following up on @JohannesSchaub-litb's comment: is it the case that a non-`const &`, or a `const &` with a default value, requires allocation? It seems to me, somehow, that there should be no allocations for references at all -- except, of course, when there are explicit allocations involved, like:

```
A& new_reference(*(new A())); // Only A() instance would be allocated,
                              // not the new_reference itself
```

Is there any case like this?
Take something simple:

```
int foo() {
    int x = 5;
    int& r = x;
    r = 10;
    return x;
}
```

The implementation *may* use a pointer to `x` behind the scenes to implement that reference, but there's no reason it has to. It could just as well translate the code to the equivalent form of:

```
int foo() {
    int x = 10;
    return x;
}
```

Then no pointers are needed whatsoever. The compiler can just bake it right into the executable that `r` is the same as `x`, without storing and dereferencing a pointer that points at `x`. The point is, whether the reference requires any storage is an implementation detail that you shouldn't need to care about.
Box/Rectangle Draw Selection in Google Maps I am working on Google Maps and want to implement a feature where a user can draw a box/rectangle using his/her mouse to select a region on the map (like selecting multiple files in Windows). Upon selection, I want to get all the markers that fall in the region. I have been looking around both the Google Maps API and search, but I am unable to find a solution. I tried using jQuery Selectable for selection, but all it returns is a bunch of divs from which I am unable to determine if any marker is selected or not.
I found a library, keydragzoom (<http://google-maps-utility-library-v3.googlecode.com/svn/tags/keydragzoom/1.0/docs/reference.html>), and used it to draw a rectangle on the page. Later, I edited the library and stopped it from zooming the selected area, instead making it return the correct co-ordinates in the 'dragend' event. Then I manually looped through all the markers on the map to find the markers that are within that particular region. The library was not giving me the proper co-ordinates, so I made the following changes.

Changed the DragZoom function to

```
var prj = null;
function DragZoom(map, opt_zoomOpts) {
    var ov = new google.maps.OverlayView();
    var me = this;
    ov.onAdd = function () {
        me.init_(map, opt_zoomOpts);
    };
    ov.draw = function () {
    };
    ov.onRemove = function () {
    };
    ov.setMap(map);
    this.prjov_ = ov;
    google.maps.event.addListener(map, 'idle', function () {
        prj = ov.getProjection();
    });
}
```

and the DragZoom.prototype.onMouseUp\_ function to

```
DragZoom.prototype.onMouseUp_ = function (e) {
    this.mouseDown_ = false;
    if (this.dragging_) {
        var left = Math.min(this.startPt_.x, this.endPt_.x);
        var top = Math.min(this.startPt_.y, this.endPt_.y);
        var width = Math.abs(this.startPt_.x - this.endPt_.x);
        var height = Math.abs(this.startPt_.y - this.endPt_.y);
        var points = {
            top: top,
            left: left,
            bottom: top + height,
            right: left + width
        };
        var prj = this.prjov_.getProjection();
        // 2009-05-29: since V3 does not have fromContainerPixel,
        // needs find offset here
        var containerPos = getElementPosition(this.map_.getDiv());
        var mapPanePos = getElementPosition(this.prjov_.getPanes().mapPane);
        left = left + (containerPos.left - mapPanePos.left);
        top = top + (containerPos.top - mapPanePos.top);
        var sw = prj.fromDivPixelToLatLng(new google.maps.Point(left, top + height));
        var ne = prj.fromDivPixelToLatLng(new google.maps.Point(left + width, top));
        var bnds = new google.maps.LatLngBounds(sw, ne);
        //this.map_.fitBounds(bnds);
        this.dragging_ = false;
        this.boxDiv_.style.display = 'none';
        /**
         * This event is fired when the drag operation ends.
         * Note that the event is not fired if the hot key is released before the drag operation ends.
         * @name DragZoom#dragend
         * @param {GLatLngBounds} newBounds
         * @event
         */
        google.maps.event.trigger(this, 'dragend', points);
    }
};
```
How different is type.\_\_setattr\_\_ from object.\_\_setattr\_\_? `type.__setattr__` is used for classes, basically instances of metaclasses. `object.__setattr__`, on the other hand, is used for instances of classes. This is totally understood. I don't see a significant difference between the two methods, at least at the Python level; I notice the two use the same procedure for attribute assignment. Correct me if I'm wrong: Suppose `a` is an instance of a user-defined class, just a normal class:

```
class A:
    pass

a = A()
a.x = ...
```

then `a.x = ...` invokes `type(a).__setattr__(...)` which performs the following steps:

*Note: `type(a).__setattr__` will find `__setattr__` in the `object` builtin class*

1) Look for a data descriptor in `type(a).__mro__`.

2) If a data descriptor was found, call its `__set__` method and exit.

3) If no data descriptor was found in `type(a).__mro__`, then add the attribute to `a.__dict__`: `a.__dict__['x'] = ...`

---

With classes--instances of metaclasses, the process is similar:

```
class A(metaclass=type):
    pass
```

then `A.x = ...` is translated to `type(A).__setattr__(...)` which performs the following steps:

*Note: `type(A).__setattr__` will find `__setattr__` in the `type` builtin class*

1) Look for a data descriptor in `type(A).__mro__`

2) If a data descriptor was found, call its `__set__` method and exit.

3) If no data descriptor was found in `type(A).__mro__`, then add the attribute to `A.__dict__`: `A.__dict__['x'] = ...`

But `object.__setattr__` doesn't work for classes:

```
>>> object.__setattr__(A, 'x', ...)
TypeError: can't apply this __setattr__ to type object
```

and vice versa, `type.__setattr__` doesn't work for instances of `A`:

```
>>> type.__setattr__(A(), 'x', ...)
TypeError: descriptor '__setattr__' requires a 'type' object but received a 'A'
```

Hmmm! There must be something different between the two methods. This is subtle, but true nonetheless! Presumably the two methods perform the same steps inside `__setattr__`, so what is the difference between `type.__setattr__` and `object.__setattr__` that makes `type.__setattr__` limited to classes and `object.__setattr__` limited to instances of classes?
`type.__setattr__` has a check to prevent setting attributes on types like `int`, and it does a bunch of invisible cleanup that isn't needed for normal objects. --- Let's take a look under the hood! Here's [`type.__setattr__`](https://github.com/python/cpython/blob/3.6/Objects/typeobject.c#L3075): ``` static int type_setattro(PyTypeObject *type, PyObject *name, PyObject *value) { if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) { PyErr_Format( PyExc_TypeError, "can't set attributes of built-in/extension type '%s'", type->tp_name); return -1; } if (PyObject_GenericSetAttr((PyObject *)type, name, value) < 0) return -1; return update_slot(type, name); } ``` and if we examine [`PyBaseObject_Type`](https://github.com/python/cpython/blob/3.6/Objects/typeobject.c#L4508), we see it uses `PyObject_GenericSetAttr` for its `__setattr__`, the same call that appears halfway through `type_setattro`. Thus, `type.__setattr__` is like `object.__setattr__`, but with some additional handling wrapped around it. First, the `if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE))` check prohibits attribute assignment on types written in C, like `int` or `numpy.array`, because assigning attributes on those can seriously screw up the Python internals in ways someone unfamiliar with the C API might not expect. Second, after the `PyObject_GenericSetAttr` call updates the type's dict or calls an appropriate descriptor from the metaclass, `update_slot` fixes up any *slots* affected by the attribute assignment. These slots are C-level function pointers that implement functionality like instance allocation, `in` checks, `+`, deallocation, etc. Most of them have corresponding Python-level methods, like `__contains__` or `__add__`, and if one of those Python-level methods is reassigned, the corresponding slot (or slots) have to be updated, too. `update_slot` also updates slots on all descendants of the class, and it invalidates entries in an internal attribute cache used for type object attributes.
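Both behaviours are observable from Python; here is a small sketch (plain CPython, no third-party code, no other assumptions). The heap-type check is why you can't set attributes on built-ins, and the `update_slot` pass is why redefining a dunder on a class immediately affects instances that already exist:

```python
# 1) The Py_TPFLAGS_HEAPTYPE check: C-defined types reject assignment.
try:
    int.x = 1
except TypeError as e:
    print(e)  # can't set attributes of built-in/extension type 'int'

# 2) The update_slot pass: type.__setattr__ not only stores __add__ in
# A.__dict__, it also rewires the C-level nb_add slot, so an instance
# created *before* the assignment picks up the new behaviour.
class A:
    pass

a = A()
A.__add__ = lambda self, other: 'added!'
print(a + a)  # added!
```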
Understanding EF under the hood. Include vs Joins I have been working with Entity Framework for a while but still have some "black holes" in how it works under the hood. Even after reading a couple of articles I'm not sure that my understanding is correct.

---

Let's start with the questions:

- How does EF Include work under the hood?
- What are the differences between Join and Include?
- Performance issues with Include (lazy loading vs eager loading, generated SQL queries, etc.)?
- When should I use Join instead of Include, and vice versa?
When querying EF through LINQ or lambda expressions, you only need join statements if the underlying schema doesn't provide FKs, and thus you don't have navigation properties on the objects. On the other hand, include (eager loading) and lazy loading can only work if there are FKs, because they use the navigation properties. The underlying SQL in both cases will use joins (as SQL has no "navigation property" concept). As for performance, it depends on the situation. Lazy loading vs eager loading (so in the FK scenario) can be a difficult choice. I usually go with lazy loading, which is useful when you have a large main result but need the "join" data of only a few items of the whole resultset. If you know ahead of time that you'll need the join data of the whole resultset, eager loading could be better for performance. I'd suggest you experiment and see for yourself.
Convert a String to int? **Note: this question contains deprecated pre-1.0 code! The answer is correct, though.** To convert a `str` to an `int` in Rust, I can do this: ``` let my_int = from_str::<int>(my_str); ``` The only way I know how to convert a `String` to an `int` is to get a slice of it and then use `from_str` on it like so: ``` let my_int = from_str::<int>(my_string.as_slice()); ``` Is there a way to directly convert a `String` to an `int`?
You can directly convert to an int using the [`str::parse::<T>()` method](https://doc.rust-lang.org/std/primitive.str.html#method.parse), which [returns a Result](https://doc.rust-lang.org/std/result/enum.Result.html) containing the int. ``` let my_string = "27".to_string(); // `parse()` works with `&str` and `String`! let my_int = my_string.parse::<i32>().unwrap(); ``` You can either specify the type to parse to with the turbofish operator (`::<>`) as shown above or via explicit type annotation: ``` let my_int: i32 = my_string.parse().unwrap(); ``` Since `parse()` returns a [`Result`](https://doc.rust-lang.org/std/result/enum.Result.html), it will either be an `Err` if the string couldn't be parsed as the type specified (for example, the string `"peter"` can't be parsed as `i32`), or an `Ok` with the value in it.
Can I say that my samples are different just by looking at box plots without performing a test? Is it possible to say that my samples are significantly different (or not) from each other just by looking at box plots? If yes, what do I have to look at and what is the theory behind it? I read something about notches which can be drawn at each side of the boxes; if they do not overlap, the medians are significantly different at the 5% level, but I don't know how to do this in R. The sample sizes are K:19, R:35 and N:30, but I also have data that contains only 5 data points in K, 7 in R and 10 in N. Thanks a lot for your help! ![Example 1](https://i.stack.imgur.com/k2MS1.png) ![Example 2](https://i.stack.imgur.com/mYpTX.png)
(This section addresses the original question) If we were looking for some relatively formal test, then speaking in general, if there are plenty of points outside the whisker ends, you could maybe get somewhere with a generalization of a [two-sample Anderson-Darling type statistic](http://www.jstor.org/discover/10.2307/2335097), like [so](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test#Non-parametric_k-sample_tests). Since the Anderson-Darling approach focuses more on the tails than, say, a Kolmogorov-Smirnov, the differences in the tails might be sufficient. However, I think in this case (since it now appears that you know the $n$'s, not just lower bounds based on the tails) that you could perhaps also construct envelopes that put lower bounds on the difference in CDFs for a Kolmogorov-Smirnov type test. This could be generalized to a k-sample statistic. This test would have low power typically, but when you lose most of the information in your data, that's how it goes.

---

**Outside of formal testing**: In the case of direct comparison of boxes, [Arnold et al](http://www.amstat.org/publications/jse/v19n2/pfannkuch.pdf) (2011)[1] give a number of rules of thumb, some of which are both simple to apply and have reasonable properties (see p5 for a list of increasingly sophisticated rules). In many stats packages, notched boxplots are available and can be used; in R, for example, `boxplot(..., notch = TRUE)` draws them.

[1]: Arnold P., Pfannkuch M., Wild CJ, Regan M, and Budgett S (2011), "Enhancing Students' Inferential Reasoning: From Hands-On To 'Movies'," *Journal of Statistics Education*, **19**:2 [pdf link](http://www.amstat.org/publications/jse/v19n2/pfannkuch.pdf)
how to copy files and folders from one S3 bucket to another S3 using python boto3 I want to copy files and folders from one S3 bucket to another. I am unable to find a solution by reading the docs; I am only able to copy files, not folders, from one S3 bucket to another. Here is my code:

```
import boto3

s3 = boto3.resource('s3')
copy_source = {
    'Bucket': 'mybucket',
    'Key': 'mykey'
}
s3.meta.client.copy(copy_source, 'otherbucket', 'otherkey')
```
S3 does not have any concept of folders/directories. It follows a flat structure. For example, in the UI you may see two files inside test\_folder named file1.txt and file2.txt, but the two files actually have the keys "test\_folder/file1.txt" and "test\_folder/file2.txt". Each file is stored with this naming convention. You can use the code snippet given below to copy each key to some other bucket.

```
import boto3

s3_client = boto3.client('s3')
resp = s3_client.list_objects_v2(Bucket='mybucket')
keys = []
for obj in resp['Contents']:
    keys.append(obj['Key'])

s3_resource = boto3.resource('s3')

for key in keys:
    copy_source = {
        'Bucket': 'mybucket',
        'Key': key
    }
    bucket = s3_resource.Bucket('otherbucket')
    # copy under the same key so the "folder" structure is preserved
    bucket.copy(copy_source, key)
```

If your source bucket contains many keys and this is a one-time activity, then I suggest you check out [this link](https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket/). If this needs to be done for every insert event on your bucket and you need to copy that object to another bucket, you can check out [this approach](https://aws.amazon.com/blogs/compute/content-replication-using-aws-lambda-and-amazon-s3/).
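One caveat: `list_objects_v2` returns at most 1000 keys per call, so the snippet above will silently miss objects in larger buckets. Here is a sketch using a paginator instead (bucket names are placeholders) that handles any number of keys:

```python
import boto3

s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')
target_bucket = s3_resource.Bucket('otherbucket')  # placeholder name

# The paginator transparently issues follow-up requests past 1000 keys.
paginator = s3_client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='mybucket'):  # placeholder name
    for obj in page.get('Contents', []):  # empty pages have no 'Contents'
        target_bucket.copy({'Bucket': 'mybucket', 'Key': obj['Key']}, obj['Key'])
```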
Get and display related products in WooCommerce I have included WooCommerce related products in a theme with the following:

```
<?php wc_get_template( 'single-product/related.php' ); ?>
```

This has been copied into my template and is executing. However, even though I have added various upsells to this product, the `$related_products` variable (used in the loop) is NULL. Are there any other variables at play that I need in order to start showing these related products?
You need much more than that *(and the post\_id needs to be a product)*:

```
global $product;

// If not set…
if( ! is_a( $product, 'WC_Product' ) ){
    $product = wc_get_product(get_the_id());
}

$args = array(
    'posts_per_page' => 4,
    'columns'        => 4,
    'orderby'        => 'rand',
    'order'          => 'desc',
);

$args['related_products'] = array_filter( array_map( 'wc_get_product', wc_get_related_products( $product->get_id(), $args['posts_per_page'], $product->get_upsell_ids() ) ), 'wc_products_array_filter_visible' );

$args['related_products'] = wc_products_array_orderby( $args['related_products'], $args['orderby'], $args['order'] );

// Set global loop values.
wc_set_loop_prop( 'name', 'related' );
wc_set_loop_prop( 'columns', $args['columns'] );

wc_get_template( 'single-product/related.php', $args );
```

Or in a shorter way *(which will give you the same)*:

```
global $product;

if( ! is_a( $product, 'WC_Product' ) ){
    $product = wc_get_product(get_the_id());
}

woocommerce_related_products( array(
    'posts_per_page' => 4,
    'columns'        => 4,
    'orderby'        => 'rand'
) );
```

Both ways are tested and work…
How does ssh handle 2 computers on the local network with the same username? For instance, if one wants to access the account bob on a machine on a local network behind a router, they would simply type:

```
$ ssh -p xx [email protected]
```

However, how does `ssh` handle the possibility of two machines on the local network having the same username? Is there a flag to differentiate between user bob on machine A and a different user bob on machine B, or does `ssh` throw an error?
Why would `ssh` care about reiterating usernames on different hosts? It is absolutely expected that this will happen. Hint: the `root` user is omnipresent, is it not? So the answer to your question is: `ssh` handles it the same way everything else would handle it: by not caring about which user is being referenced until talking to the host in question. A simplified expansion on the above: The first thing that happens is that the `ssh` client attempts to establish a conversation with the remote `ssh` server. Once a communications channel is opened, the client looks to see if it's a known host (e.g. an entry is present in `~/.ssh/known_hosts`), and handles things properly if it's either an unknown host or a known host with invalid credentials (e.g. the host key has changed). Now that all that is out of the way and a line of communication is properly open between the `ssh` server and client, the client will say to the server "I would like to authenticate for the user `bob`". Naturally, the server won't care about any other `bob`s on the network; only its own.
What's the difference between BackgroundImage and Image of a button? I have a simple question: if I have a button called myButton in Windows Forms, what is the difference between myButton.Image and myButton.BackgroundImage? Thank you in advance!
`BackgroundImage` is like wallpaper; it covers the whole background. You should use patterns or full pictures for the background. If you are just trying to show an icon, use `Button.Image`. For the `Button.Image` property, from the MS documentation: **The Image displayed on the button control.** The BackgroundImage: **An Image that represents the image to display in the background of the control.** So you can set a `BackgroundImage` as well as a `Button.Image`; the `Button.Image` will be placed on top of the `BackgroundImage`.

> Note: Images with translucent or transparent colors are not supported by Windows Forms controls as background images. This property is not supported on child controls whose RightToLeftLayout property is true.

Here is an example I just created of a `Button` with both `BackgroundImage` and `Button.Image` set. Clearly there is a background image, then my button text and a button image next to my text. ![BackGroundImage with Button.image](https://i.stack.imgur.com/hxmh6.png) The cool thing is you can change where you place your `Button.Image` with the `ImageAlign` property ![ImageAlign](https://i.stack.imgur.com/0liq9.png) Here's the same button using an icon with transparency and NOT using RTL, and with the background color set to light blue; it all works fine. So either my icon with transparency really isn't transparent or the docs are wrong. ![enter image description here](https://i.stack.imgur.com/Ih04B.png)
Doesn't refresh the carouselSlider in flutter I am trying to make an app to display images. To do this I am using Firestore and Flutter; for the slider I am using the [carousel\_slider](https://pub.dev/packages/carousel_slider) plugin to display those images. But when I receive a new image from Firestore, the slider doesn't display the new image; the indicator does display a new circle, but it doesn't work either. The indicator only works with the initial images. [![enter image description here](https://i.stack.imgur.com/In3lN.png)](https://i.stack.imgur.com/In3lN.png) My code for the carousel:

```
_loadCarrouselFood(){
    var carouselSlider = CarouselSlider(
      viewportFraction: 1.0,
      aspectRatio: 2.0,
      autoPlay: false,
      enlargeCenterPage: true,
      items: child,
      onPageChanged: (index) {
        setState(() {
          _current = index;
        });
        LatLng locationDish = LatLng(_locationDish[_current].latitude, _locationDish[_current].longitude);
        moveCameraTo(locationDish);
      },
    );
    setState(() {
      _sliderDish = carouselSlider;
    });
    return carouselSlider;
  }
}
```

The Firestore function:

```
_getInfoDish() async{
    List<LocationDish> _locationDishTmp = <LocationDish>[];
    List<Dish> _dishCardTmp = <Dish>[];
    bool hasFoodInArray = false;
    Firestore.instance // Get the firebase instance
        .collection(Constants.NAME_COLECTION) // Get the informations collection
        .snapshots() // Get the Stream of DocumentSnapshot
        .listen((QuerySnapshot snapshot){
          _locationDishTmp.clear();
          _dishCardTmp.clear();
          print(":::::: - FIRESTORE - ::::::");
          snapshot.documents.forEach((obj){
            //Set dish to Cards
            _dishCardTmp.add(
              Dish(
                id: obj.data["id"],
                name: obj.data["name"],
                photoURL: 'https://recetaparahoy.com/wp-content/uploads/2017/06/ceviche-peruano-630x420.jpg',
                score: obj.data["score"].toDouble(),
              ),
            );
          });

          if(_locationDishTmp.length > 0) {
            setState(() {
              //add only food recent data
              for(int i = 0; i < _locationDishTmp.length; i++){
                _locationDish.add(_locationDishTmp[i]);
                _dishCard.add(_dishCardTmp[i]);
              }
              //_current = _locationDish.length-1;
              _hasFoodOnMap = true;
            });
          }
    });
  }
```

Could someone point out the error, or suggest any other solution to do this? I hope my explanation is clear. Thanks a lot!!!
You can copy, paste, and run the full code below. I use the following demo to simulate this case. You can use `CarouselSlider.builder` to build items on demand, when they become visible: <https://github.com/serenader2014/flutter_carousel_slider#build-item-widgets-on-demand>

Working demo [![enter image description here](https://i.stack.imgur.com/tELuA.gif)](https://i.stack.imgur.com/tELuA.gif)

Full code

```
import 'package:flutter/material.dart';
import 'package:carousel_slider/carousel_slider.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);
  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 1;
  String url = 'https://picsum.photos/250?image=';
  List<String> urlList = ['https://picsum.photos/250?image=1'];
  int _current = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
      urlList.add('https://picsum.photos/250?image=${_counter}');
    });
  }

  List<T> map<T>(List list, Function handler) {
    List<T> result = [];
    for (var i = 0; i < list.length; i++) {
      result.add(handler(i, list[i]));
    }
    return result;
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            CarouselSlider.builder(
              autoPlay: false,
              enlargeCenterPage: true,
              aspectRatio: 2.0,
              onPageChanged: (index) {
                setState(() {
                  _current = index;
                });
              },
              itemCount: urlList.length,
              itemBuilder: (BuildContext context, int index) => Container(
                child: Image.network(urlList[index]),
              ),
            ),
            Row(
              mainAxisAlignment: MainAxisAlignment.center,
              children: map<Widget>(
                urlList,
                (index, url) {
                  return Container(
                    width: 8.0,
                    height: 8.0,
                    margin:
                        EdgeInsets.symmetric(vertical: 10.0, horizontal: 2.0),
                    decoration: BoxDecoration(
                        shape: BoxShape.circle,
                        color: _current == index
                            ? Color.fromRGBO(0, 0, 0, 0.9)
                            : Color.fromRGBO(0, 0, 0, 0.4)),
                  );
                },
              ),
            ),
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ),
    );
  }
}
```
How does Amazon SSE-S3 key rotation work? I'm trying to wrap my mind around Amazon's Server Side Encryption options so I can start asking S3 to encrypt my data at rest when my applications upload files. So far the AWS-Managed Encryption Keys option sounds like what I'm looking for ([Model C](https://media.amazonwebservices.com/AWS_Securing_Data_at_Rest_with_Encryption.pdf)): ![enter image description here](https://i.stack.imgur.com/KYgry.jpg) But then it says > > As an additional safeguard, this key itself is encrypted with a > periodically rotated master key unique to Amazon S3 that is securely > stored in separate systems under AWS control. > > > How does this rotation work? Does this mean that every time AWS rotates their key-encrypting key, they have to re-encrypt EVERY SINGLE `Data Key` stored in S3??? That seems crazy to me, and I don't want to sound crazy when I try to convince my boss that this is a good idea :)
For each object you upload, a new encryption key is generated, and used to encrypt the object before it's stored to disk. Having the object stored encrypted means that it's computationally infeasible for someone in possession of the raw data as stored on disk to decrypt it... but, of course, anyone in possession of that key could decrypt it, so the keys have to be stored securely, and in a relatively useless form, and that's done by encrypting them with the master key. To compromise the stored object, you have to have the specific encryption key for that object... but even if you have it, it's useless since it's also been encrypted. To get it decrypted, or at least have it used on your behalf, you have to have the master key, or have a trust relationship with an entity that controls the master key and can use it on your behalf. So far, all I've really done is stated the obvious, of course. The encryption keys themselves are tiny, only a few tens of bytes each. When the master key is rotated, the object-specific encryption keys are decrypted with the old key, and re-encrypted with the new key. The new versions are stored, and the old versions are discarded. Since the keys are small, this is not the massive operation that would be required if the objects themselves were decrypted and re-encrypted... but it's only the per-object encryption keys that are decrypted and re-encrypted when the master key is rotated. Several analogies could be drawn to an apartment complex, where keys are stored in a common lockbox, where changing the lockbox lock would restrict future access to individual apartments by restricting access to the individual apartment keys by restricting access to the common lockbox. The analogy breaks down, because physical keys are easily copied, among other reasons... but it's apparent, in this illustration, that changing the lockbox lock (relatively simple operation) would be unrelated to changing the lock on one or more apartments (a much more substantial operation). The bottom line, in this scenario: they create a new secret key to encrypt/decrypt each object you upload, then store that secret key in an encrypted form... periodically, and transparently to you, they change their stored representation of that secret key as a preventative measure.
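As a toy illustration only (this is a model of the envelope-encryption idea, not AWS's actual implementation), the scheme and its cheap key rotation can be sketched with the `cryptography` package's Fernet primitive:

```python
from cryptography.fernet import Fernet

# Per-object data key: encrypts the (possibly huge) object exactly once.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"many gigabytes of object data...")

# The data key is stored only in encrypted ("wrapped") form.
master_key_v1 = Fernet.generate_key()
wrapped_key = Fernet(master_key_v1).encrypt(data_key)

# Master-key rotation: unwrap and re-wrap only the tiny data key.
# The object ciphertext is never touched.
master_key_v2 = Fernet.generate_key()
data_key_plain = Fernet(master_key_v1).decrypt(wrapped_key)
wrapped_key = Fernet(master_key_v2).encrypt(data_key_plain)

# Reads still work: unwrap with the current master key, then decrypt.
recovered = Fernet(Fernet(master_key_v2).decrypt(wrapped_key)).decrypt(ciphertext)
assert recovered == b"many gigabytes of object data..."
```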
HttpMediaTypeNotAcceptableException after upgrading to Spring 3.2 After upgrading my Spring MVC application to Spring 3.2 I'm getting the following exception when accessing some of my URLs:

```
org.springframework.web.HttpMediaTypeNotAcceptableException: Could not find acceptable representation
    at org.springframework.web.servlet.mvc.method.RequestMappingInfoHandlerMapping.handleNoMatch(RequestMappingInfoHandlerMapping.java:203) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.lookupHandlerMethod(AbstractHandlerMethodMapping.java:272) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.getHandlerInternal(AbstractHandlerMethodMapping.java:212) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.getHandlerInternal(AbstractHandlerMethodMapping.java:55) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.handler.AbstractHandlerMapping.getHandler(AbstractHandlerMapping.java:297) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.getHandler(DispatcherServlet.java:1091) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.getHandler(DispatcherServlet.java:1076) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:896) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:856) ~[spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:915) [spring-webmvc-3.2.0.RELEASE.jar:3.2.0.RELEASE]
    (...)
```

This exception results in an HTTP 406 NOT ACCEPTABLE. I've managed to create a simplified controller with a URL I cannot access:

```
@RequestMapping(value = "/resources/foo.js", produces = "text/javascript")
@ResponseBody
public String foo() throws Exception {
    return "";
}
```

As I'm using a normal browser which has `*/*` in the `Accept` header, I don't see why I should get an HTTP 406. What makes this even stranger is that this code works with Spring 3.1.2, but not with Spring 3.2. Why is that?
There have been several changes related to how [Spring does content negotiation in 3.2](http://static.springsource.org/spring-framework/docs/3.2.x/spring-framework-reference/html/new-in-3.2.html#new-in-3.2-webmvc-content-negotiation). One of these changes is that content negotiation can now be done based on the file suffix in the URL. This feature is enabled by default. In Spring versions prior to 3.2 the HTTP Accept header was used for content negotiation. When browsers accessed your URLs, content negotiation was seldom an issue, as browsers always send `Accept:(...)*/*`. Spring has a map of suffix => media type. For ".js" the default media type is "application/x-javascript". When Spring tries to look up the handler mapping for a request to /resources/foo.js, it won't match your `foo()` method, as it produces the wrong media type. I'm not sure if the Spring team has thought this case through. It is at least a bit strange that it lets you create a `@RequestMapping` which cannot be accessed (because of the incompatibility between the .js media type and what is set in the produces field). There are several ways of fixing this issue. One is to change the produces parameter to "application/x-javascript". Another would be to change the media type of ".js" to "text/javascript" ([see the docs for how to do that](http://static.springsource.org/spring-framework/docs/3.2.x/spring-framework-reference/html/mvc.html#mvc-config-content-negotiation)). A third possibility is to turn off content negotiation based on suffixes (again, [see the docs for how to do it](http://static.springsource.org/spring-framework/docs/3.2.x/spring-framework-reference/html/mvc.html#mvc-config-content-negotiation)).
Express Mongoose db.once('open') does not execute its callback function

```
exports.c_39 = function(req,res) {
    var mongoose = require('mongoose');
    mongoose.createConnection('mongodb://localhost/cj');
    var db = mongoose.connection;
    db.on('error', console.error.bind(console, 'connection error:'));
    console.log('a')
    db.once('open',function(){
        console.log('b')
    })
}
```

The `console.log('a')` executes, but the `db.once('open')` callback function never runs.
That's because `mongoose.connection` isn't the same as the connection that is returned by `createConnection()`. There are two ways of opening a connection with Mongoose:

```
// method 1: this sets 'mongoose.connection'
> var client = mongoose.connect('mongodb://localhost/test');
> console.log(client.connection === mongoose.connection)
true

// method 2: this *doesn't* set 'mongoose.connection'
> var connection = mongoose.createConnection('mongodb://localhost/test');
> console.log(connection === mongoose.connection)
false
```

So to solve your problem, you need to connect your event handler to the connection as returned by `createConnection()`, and not to `mongoose.connection`:

```
var db = mongoose.createConnection('mongodb://localhost/cj');
db.once('open', function() {
  ...
});
```

In short:

- `.createConnection()` returns a `Connection` instance
- `.connect()` returns the global `mongoose` instance
Why isn't this local variable initialized in Java? I am learning Java and I know that I must initialize a local variable before I use it. However, I just found the following code in the book, and I wonder why in this case the variable volume is not initialized at its declaration:

```
public static double cubeVolume(double sideLength)
{
    double volume;
    if (sideLength>=0)
    {
        volume=sideLength*sideLength*sideLength;
    }
    else
    {
        volume=0;
    }
    return volume;
}
```
The rule is that it must be initialised **before it is used**; since on both branches of the if statement `volume` has been initialised before it is returned (aka used), the compiler can guarantee that it will have been initialised before being used. If you attempted to use `volume` before the if statement you would again receive that compilation error. Equally, if it wasn't initialised on all branches (in this case both sides of the if statement) you would get the error.

# Examples

The following examples may give some insight into when this is likely to be a problem:

ok (but pointless):

```
double volume; //<--declared
volume=6; //<--initialised
double volumeUsed=2*volume;
```

ok:

```
boolean useUpper=true; //<-- useUpper declared and initialised

double volume;
if (useUpper){
    volume=6; //<--initialised
}else{
    volume=7; //<--initialised
}
double volumeUsed=2*volume;
```

not ok (volume not known when used):

```
double volume;
double volumeUsed=2*volume;
```

not ok (volume **may** not be known when used - if useUpper were false):

```
boolean useUpper=true;

double volume;
if (useUpper){
    volume=6; //<--initialised
}
//volume may not be initialised
double volumeUsed=2*volume;
```

not ok: used before being initialised

```
double volume;
double volumeUsed=2*volume;
if (sideLength>=0)
{
    volume=sideLength*sideLength*sideLength; //<--initialised (too late)
}
else
{
    volume=0;
}
return volume;
```
Getting a meaningful stack trace when using async code I've created a small bit of code for running multiple async operations in parallel (the `Parallel` class itself isn't good for async operations). It looks like this: ``` public static async Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body) { var chunks = source.Chunk(dop); foreach (var chunk in chunks) await Task.WhenAll(chunk.Select(async s => await body(s).ContinueWith(t => ThrowError(t)))); } private static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> source, int chunksize) { while (source.Any()) { yield return source.Take(chunksize); source = source.Skip(chunksize); } } private static void ThrowError(Task t) { if (t.IsFaulted) { if (t.Exception.InnerExceptions != null && t.Exception.InnerExceptions.Count == 1) throw t.Exception.InnerExceptions[0]; else throw t.Exception; } } ``` As far as running the tasks in parallel goes, the above code works really well. I do, however, have some issues when exceptions are thrown. The exception-capturing code works well as far as returning the exception message goes, but the stack trace leaves a lot to be desired - as it points to the `ThrowError` method, rather than the method that originally generated the exception. I can sort of work my way and figure out what went wrong with a debugger attached, but if I ever release this application I won't have that option available - at best, I'll have the exception with the stack trace logged. So - is there any way to get a more meaningful stack trace when running async tasks? PS. This is for a WindowsRT application but I think the problem isn't limited to WindowsRT as such...
> So - is there any way to get a more meaningful stack trace when running async tasks?

Yes, you can use `ExceptionDispatchInfo.Capture`, which was introduced in .NET 4.5 for async-await specifically:

```
private static void ThrowError(Task t)
{
    if (t.IsFaulted)
    {
        Exception exception = t.Exception.InnerExceptions != null && t.Exception.InnerExceptions.Count == 1
            ? t.Exception.InnerExceptions[0]
            : t.Exception;
        ExceptionDispatchInfo.Capture(exception).Throw();
    }
}
```

> "You can use the `ExceptionDispatchInfo` object that is returned by this method at another time and possibly on another thread to rethrow the specified exception, as if the exception had flowed from this point where it was captured to the point where it is rethrown.
> If the exception is active when it is captured, the current stack trace information and Watson information that is contained in the exception is stored. If it is inactive, that is, if it has not been thrown, it will not have any stack trace information or Watson information."

However, keep in mind that exceptions from async code are generally less meaningful than you would like, as all exceptions are thrown from inside the `MoveNext` method on the state machine generated by the compiler.
How to set the date in materialize datepicker I am using the datepicker from materializecss.com. When I try to set the date with jQuery, the date doesn't get set. Here is my code:

```
// Materialize Date Picker
window.picker = $('.datepicker').pickadate({
    selectMonths: true, // Creates a dropdown to control month
    selectYears: 100, // Creates a dropdown of 15 years to control year
    format: 'dd/mm/yyyy'
});

<input type="text" id="Date" class="datepicker" />
```

On the click event of a button, I set the date:

```
$("#Date").val('23/01/2015');
```

When I open the datepicker it shows me today's date. How can I set the date in the materialize datepicker?
Materialize datepicker is a modified [pickadate.js](http://amsul.ca/pickadate.js/) picker. According to their [API docs](http://amsul.ca/pickadate.js/api/), this is how to set the picker:

1. Get the picker:

```
var $input = $('.datepicker').pickadate()

// Use the picker object directly.
var picker = $input.pickadate('picker')
```

2. Set the date:

```
// Using arrays formatted as [YEAR, MONTH, DATE].
picker.set('select', [2015, 3, 20])

// Using JavaScript Date objects.
picker.set('select', new Date(2015, 3, 30))

// Using positive integers as UNIX timestamps.
picker.set('select', 1429970887654)

// Using a string along with the parsing format (defaults to `format` option).
picker.set('select', '2016-04-20', { format: 'yyyy-mm-dd' })
```
Are there any real benefits to a DAO layer? When I first started using Hibernate I heard about all the joy of using a DAO layer. Initially it made sense for my problem: I was going to experiment with Hibernate and later experiment with NoSQL and/or XML for experience. It made sense at the time. Before I go any further I want to say that my current DAO layer isn't a "true" DAO layer. It's more of a bunch of data objects backed by interfaces and a "Controller" that generates new objects, queries, and cleans up when the application exits. Now that I'm picking up Spring, though, the DAO layer is starting to make less and less sense. NoSQL is neat and all, but I'm really starting to question if it's worth it. I'm not even sure if my data is fit for NoSQL; it works quite well in a relational database. The horrible XML storage seems like a hurdle I'll cross later. Besides, there's a huge amount of code I would need to support for other storage options: my custom "Controller" + a zillion Spring interfaces to implement + other stuff that I'm missing. What's preventing me from just ripping it out and merging the core and Hibernate DAO modules is that there are a few other projects that have a DAO layer (a real one) with only Spring and Hibernate. That means that they have all the interfaces, all the abstract classes, all the complexity with only those two frameworks. Since I'm just starting out in the Spring+Hibernate world I'm hesitant to go against what others are doing with so little experience. **Question:** Are there other advantages that I'm missing by having a DAO layer? And why would other projects have a DAO layer when they're only using one database?
I'll try to tackle part of your question: why do I need a DAO layer if I never intend to swap databases? As another respondent mentioned, part of your goal when writing any class is the Single Responsibility Principle: a class should have only one reason to change. Simply put, you don't want to modify code without necessity, because doing so introduces risk. To minimize that risk, we try to avoid touching the implementation of a class for too many reasons. If I had to alter my class because the business rules changed, or because I decided to change how I mapped it into persistent storage, I would have two reasons to change my code. By removing one of those responsibilities, database mapping, I can avoid the risk of making a mistake that impacts the other, and the testing burden of checking both. I don't want to check my DB mappings just because I change a business rule. You may also hear people talking about 'separation of concerns'; that's another way of expressing this idea. A class that handles my logic for arranging shipping containers should not care about persistence. It also comes into play when we think about unit testing. When I unit test my shipping container arrangement class, I don't want those tests to be coupled to other concerns, such as persistence. I want to be isolated from them, so I need to be able to test easily in isolation from them. If my class does not contain persistence logic, then I can easily test it in memory, without worrying about how it is persisted. This driver produces an additional benefit: reasoning about our code. If we don't perform data access at random points in our code, but instead access the DAO through an application service layer and then work with the objects returned from it, we will not suddenly have unexpected I/O code running at some point during execution of business logic. It is also much easier to reason about our persistence code if it is not interspersed with business logic. This makes it much easier to think about our code: we can follow domain logic without having our understanding polluted by access to persistent storage. This ability to reason about code easily is a key to productivity. The ultimate expression of many of these [layering ideas](http://domaindrivendesign.org/node/118) is a [hexagonal architecture](http://alistair.cockburn.us/Hexagonal+architecture). You should be aware though that a domain model is not the ideal for all scenarios, particularly where you have no business logic. In that case a transaction script (one thing after another) might be the right solution. Some models, such as CQRS, exist to try and gain the benefits of a domain model without the costs for simple read access, for example for displaying a web page.
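To make the testability point concrete: the idea is language-agnostic, so here is a minimal sketch in Python (all names are invented for illustration; Python 3.9+). The business class depends only on a DAO interface, so a unit test can swap in an in-memory fake and never touch persistence:

```python
from typing import Protocol

class ContainerDao(Protocol):
    """The persistence concern, hidden behind an interface."""
    def find_by_ship(self, ship_id: int) -> list[dict]: ...

class ShippingPlanner:
    """Business logic only: no mapping or I/O code in here."""
    def __init__(self, dao: ContainerDao) -> None:
        self._dao = dao

    def total_weight(self, ship_id: int) -> float:
        return sum(c["weight"] for c in self._dao.find_by_ship(ship_id))

# In a unit test we swap in an in-memory fake: no database, no mappings.
class FakeContainerDao:
    def find_by_ship(self, ship_id: int) -> list[dict]:
        return [{"weight": 2.5}, {"weight": 4.0}]

assert ShippingPlanner(FakeContainerDao()).total_weight(1) == 6.5
```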
What part of your work does a company own when you program for them? Say you're a software engineer, what does the company actually own of your work? Do they own the source code, the binary, both? What about code you do outside of the workplace? Do they have any claim to that? I was just wondering from a legal standpoint how that all works.
It depends on your terms and contracts. I had to sign a legal agreement on my first day saying what does and does not belong to me. If you signed something like this, it should be on file in legal or HR. If you didn't, you should consult with your HR and/or legal departments to get something drawn up. You probably also want to consult a lawyer (especially one who specializes in intellectual property or technology-related law) to make sure that everything is good, especially if you currently don't have a legally binding document that covers this, or if you want to try to change the document that you previously signed (if that's possible). For me, everything that I produce on company-owned hardware belongs to the company. Also, any projects (even on my own time and hardware) that are inspired by or derived from work products belong to the company, although it's possible for me to present project overviews and get them signed off and over to me if the company is not interested in the product (and it's not a restricted, controlled, or in direct competition with company products).
Create new columns which show values based on ranking of other columns python I have a dataframe with some dates as rows and values in columns. To give an idea, the df looks like the one below:

```
print(df1)
            c1  c2  c3  c4
12/12/2016  38  10   1   8
12/11/2016  44  12  17  46
12/10/2016  13   6   2   7
12/09/2016   9  16  13  26
```

I want to create a rule that ranks each row in df1 and creates another dataframe storing some constant values. For example, it assigns the value k = 5 to the 2 highest values in each row and the value y = -9 to the 2 lowest values. What I would like to obtain is the following df:

```
            c1  c2  c3  c4
12/12/2016   5   5  -9  -9
12/11/2016   5  -9  -9   5
12/10/2016   5  -9  -9   5
12/09/2016  -9   5  -9   5
```

I thought about using np.partition on df1 but I am stuck on how to create the new dataframe. Any hints are highly appreciated! Thanks!
Use [`rank`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rank.html) with [`numpy.where`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) and the `DataFrame` constructor:

```
arr = np.where(df.rank(axis=1, method='dense') > 2, 5, -9)
df = pd.DataFrame(arr, index=df.index, columns=df.columns)
print (df)
            c1  c2  c3  c4
12/12/2016   5   5  -9  -9
12/11/2016   5  -9  -9   5
12/10/2016   5  -9  -9   5
12/09/2016  -9   5  -9   5
```
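To see why this works, it can help to inspect the intermediate result (the output below is what `rank` should produce for the sample data, shown for illustration; `df` here means the original frame, before it is reassigned above). `method='dense'` ranks each row's values from smallest to largest, so in every row exactly the two largest values get a rank above 2:

```python
print(df.rank(axis=1, method='dense'))
#              c1   c2   c3   c4
# 12/12/2016  4.0  3.0  1.0  2.0
# 12/11/2016  3.0  1.0  2.0  4.0
# 12/10/2016  4.0  2.0  1.0  3.0
# 12/09/2016  1.0  3.0  2.0  4.0
```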
Installing Java 7 in 16.04 and using multiple Java versions I have already installed Java 8, but I don't know how to install Java 7 or how to switch between the two Java versions. I also want to switch between Java 7 and 8 to update a project I am working on. I am a new Ubuntu user, so be very specific.

**Added: 12/31/2017**

```
gero@4790k:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial
```

When I try to use:

```
gero@4790k:~$ sudo apt-get install openjdk-7-jdk
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package openjdk-7-jdk is a virtual package provided by:
  oracle-java9-installer 9.0.1-1~webupd8~0
  oracle-java8-installer 8u151-1~webupd8~0
  oracle-java7-installer 7u80+7u60arm-0~webupd8~1
You should explicitly select one to install.

E: Package 'openjdk-7-jdk' has no installation candidate
```

I don't even know how to select the java installer 7u80. Or, can you tell me how to install jdk-7u80-linux-x64.tar.gz or jdk-7u80-linux-x64.rpm?
The openjdk-7-jdk package is available in Ubuntu 14.04. Although you don't have Ubuntu 14.04 currently installed, you can keep using the Ubuntu version that you have and install Ubuntu 14.04 as a guest OS in VirtualBox. I recommend that you keep the Ubuntu 14.04 VirtualBox guest OS files for as long as you need to use Java 7 and also make backups of these files. To install openjdk-7-jdk in Ubuntu 14.04 run:

```
sudo apt-get install openjdk-7-jdk
```

Java 7 is also available at the [Java SE 7 Archive Downloads](http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html) webpage of the official Oracle website. Select the .tar.gz file, which is currently named jdk-7u80-linux-x64.tar.gz (for 64-bit architecture) or jdk-7u80-linux-i586.tar.gz (for 32-bit architecture), and follow the installation instructions at [this answer](https://askubuntu.com/questions/56104/how-can-i-install-sun-oracles-proprietary-java-jdk-6-7-8-or-jre/55960#55960). Oracle warns that the packages in the Oracle Java Archive are no longer updated with the latest security patches.

> These older versions of the JRE and JDK are provided to help developers debug issues in older systems. They are not updated with the latest security patches and are not recommended for use in production.

openjdk-8-jdk is not included in the Ubuntu 14.04 default repositories, so install the Oracle Java 8 JDK version by following the instructions from: [How can I install Sun/Oracle's proprietary Java JDK 6/7/8 or JRE?](https://askubuntu.com/questions/56104/how-can-i-install-sun-oracles-proprietary-java-jdk-6-7-8-or-jre). You can use either `sudo update-alternatives --config java` (update-alternatives is provided by default by dpkg in Ubuntu) or `update-java-alternatives` (which is installed when openjdk-9-jdk is installed) to manually choose which Java to use before running an application. See [update-java-alternatives vs update-alternatives --config java](https://askubuntu.com/questions/315646/update-java-alternatives-vs-update-alternatives-config-java).
CloudWatch Event that targets SQS Queue fails to work According to this article it's possible to set **SQS** as a target for a scheduled **CloudWatch** event: <https://aws.amazon.com/ru/about-aws/whats-new/2016/03/cloudwatch-events-now-supports-amazon-sqs-queue-targets/> I've created a simple **CloudFormation** template that aims to trigger a **CloudWatch** event each minute so that a new message should appear in **SQS**, but something is missing, as there are no messages in **SQS**. The code:

```
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "stack 1",
  "Parameters": {
  },
  "Resources": {
    "MyQueue": {
      "Type": "AWS::SQS::Queue",
      "Properties": {
        "QueueName": "MyQueue"
      }
    },
    "MyRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "MyRole",
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {
              "Service": ["events.amazonaws.com", "lambda.amazonaws.com"]
            },
            "Action": "sts:AssumeRole"
          }]
        },
        "Path": "/",
        "Policies": [{
          "PolicyName": "CloudWatchPolicy",
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
              "Effect": "Allow",
              "Action": "*",
              "Resource": "*"
            }]
          }
        }]
      }
    },
    "MyRule": {
      "Type": "AWS::Events::Rule",
      "Properties": {
        "Description": "A rule to schedule data update",
        "Name": "MyRule",
        "ScheduleExpression": "rate(1 minute)",
        "State": "ENABLED",
        "RoleArn": { "Fn::GetAtt": ["MyRole", "Arn"] },
        "Targets": [{
          "Arn": { "Fn::GetAtt": ["MyQueue", "Arn"] },
          "Id": "MyRule"
        }]
      }
    }
  },
  "Outputs": {
  }
}
```

What can be wrong there? Should I add a queue listener to make messages appear? Question #2: The docs about the **CloudWatch Event Rule Target** declare that **Id** is a required field: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html> Though **AWS::SQS::Queue** has no such property at all (only Name is present): <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-properties-sqs-queues-prop> What should be put into the **CloudWatch Event Rule Target** Id property when SQS is used as a target? Many thanks in advance.
The missing piece in my template was *AWS::SQS::QueuePolicy*. The working template: ``` { "AWSTemplateFormatVersion": "2010-09-09", "Description": "stack 1", "Parameters": {}, "Resources": { "MyPolicy": { "Type": "AWS::IAM::Policy", "Properties": { "PolicyDocument": { "Statement": [{ "Action": "sqs:*", "Effect": "Allow", "Resource": { "Fn::GetAtt": ["MyQueue", "Arn"] } }], "Version": "2012-10-17" }, "PolicyName": "MyPolicyName", "Roles": [{ "Ref": "MyRole" }] } }, "MyRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Statement": [{ "Action": "sts:AssumeRole", "Effect": "Allow", "Principal": { "Service": ["events.amazonaws.com", "sqs.amazonaws.com"] } }], "Version": "2012-10-17" } } }, "MyQueue": { "Type": "AWS::SQS::Queue", "Properties": { "QueueName": "MyQueue2" } }, "MyRule": { "Type": "AWS::Events::Rule", "Properties": { "Description": "A rule to schedule data update", "Name": "MyRule", "ScheduleExpression": "rate(1 minute)", "State": "ENABLED", "RoleArn": { "Fn::GetAtt": ["MyRole", "Arn"] }, "Targets": [{ "Arn": { "Fn::GetAtt": ["MyQueue", "Arn"] }, "Id": "MyRule1", "Input": "{\"a\":\"b\"}" }] } }, "MyQueuePolicy": { "DependsOn": ["MyQueue", "MyRule"], "Type": "AWS::SQS::QueuePolicy", "Properties": { "PolicyDocument": { "Version": "2012-10-17", "Id": "MyQueuePolicy", "Statement": [{ "Effect": "Allow", "Principal": { "Service": ["events.amazonaws.com", "sqs.amazonaws.com"] }, "Action": "sqs:SendMessage", "Resource": { "Fn::GetAtt": ["MyQueue", "Arn"] } }] }, "Queues": [{ "Ref": "MyQueue" }] } } }, "Outputs": { } } ```
The relationship could not be changed because one or more of the foreign-key properties is non nullable I get the following error during an update with EF:

> The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.

Is there any **general** way to find which foreign-key properties cause the above error? **[Update]** For one case, the following code causes the above error (I worked in a disconnected environment, so I used `graphdiff` to update my object graphs) when it runs `_uow.Commit();`:

```
public void CopyTechnicalInfos(int sourceOrderItemId, List<int> targetOrderItemIds)
{
    _uow = new MyDbContext();

    var sourceOrderItem = _uow.OrderItems
        .Include(x => x.NominalBoms)
        .Include("NominalRoutings.NominalSizeTests")
        .AsNoTracking()
        .FirstOrDefault(x => x.Id == sourceOrderItemId);

    var criteria = PredicateBuilder.False<OrderItem>();
    foreach (var targetOrderItemId in targetOrderItemIds)
    {
        int id = targetOrderItemId;
        criteria = criteria.Or(x => x.Id == id);
    }

    var targetOrderItems = _uow.OrderItems
        .AsNoTracking()
        .AsExpandable()
        .Where(criteria)
        .ToList();

    foreach (var targetOrderItem in targetOrderItems)
    {
        // delete old data and insert new data
        targetOrderItem.NominalBoms = sourceOrderItem.NominalBoms;
        targetOrderItem.NominalBoms.ForEach(x => x.Id = 0);

        targetOrderItem.NominalRoutings = sourceOrderItem.NominalRoutings;
        targetOrderItem.NominalRoutings.ForEach(x => x.Id = 0);
        targetOrderItem.NominalRoutings
            .ForEach(x => x.NominalTests.ForEach(y => y.Id = 0));
        targetOrderItem.NominalRoutings
            .ForEach(x => x.NominalSizeTests.ForEach(y => y.Id = 0));

        _uow.OrderItems.UpdateGraph(targetOrderItem, x => x.OwnedCollection(y => y.NominalBoms)
            .OwnedCollection(y => y.NominalRoutings, with => with
                .OwnedCollection(t => t.NominalTests)));
    }
    _uow.Commit();
}
```
In Entity Framework you can work with *foreign key associations*. That is, a foreign key to another object is expressed as a pair of two properties: a primitive foreign key property (e.g. `NominalRouting.OrderItemId`) and an object reference (`NominalRouting.OrderItem`). This means that you can either set a primitive value or an object reference to establish a foreign key association. If you set one of them, EF tries to keep the other one in sync, if possible. Unfortunately, this may also give rise to conflicts between primitive foreign key values and their accompanying references. It's hard to tell what exactly happens in your case. However, I *do* know that your approach of "copying" objects from one parent to another is... not ideal. First, it's never a good idea to change primary key values. By setting them to `0` you make the objects look new, but they aren't. Secondly, you assign the same child objects to other parent objects many times. I *think* that, as a consequence, you end up with a large number of objects having a foreign key *value* but not a *reference*. I said "copying", because that's what you seemingly try to achieve. If so, you should properly clone objects and `Add` them to each `targetOrderItem`. At the same time, I wonder why you (apparently) clone all these objects. It looks like many-to-many associations are more appropriate here. But that's a different subject. Now your actual question: **how to find the conflicting associations?** That's very, very hard. It would require code to search through the conceptual model and find properties involved in foreign key associations. Then you'd have to find their values and look for mismatches. Hard enough, but trivial when compared to determining when a *possible* conflict is an *actual* conflict. Let me clarify this with two examples. Here, a class `OrderItem` has a required foreign key association consisting of the properties `Order` and `OrderId`.

```
var item = new OrderItem { OrderId = 1, ... };
db.OrderItems.Add(item);
db.SaveChanges();
```

So there's an item with `OrderId` assigned and `Order` = null, and EF is happy.

```
var item = db.OrderItems.Include(x => x.Order).Single(x => x.Id == 10);
// returns an OrderItem with OrderId = 1
item.Order = null;
db.SaveChanges();
```

Again, an item with `OrderId` assigned and `Order` = null, but EF throws the exception "The relationship could not be changed...". (and there are more possible conflict situations) So it's not enough to look for unmatched values in `OrderId/Order` pairs; you'd also have to inspect entity states and know exactly in which combination of states a mismatch is not allowed. **My advice: forget it, fix your code.** There's one dirty trick though. When EF tries to match foreign key values and references, somewhere deep down in a tree of nested `if`s it collects the conflicts we're talking about into a member variable of the `ObjectStateManager`, named `_entriesWithConceptualNulls`. It's possible to get its value by doing some reflection:

```
#if DEBUG
db.ChangeTracker.DetectChanges(); // Force EF to match associations.
var objectContext = ((IObjectContextAdapter)db).ObjectContext; var objectStateManager = objectContext.ObjectStateManager; var fieldInfo = objectStateManager.GetType().GetField("_entriesWithConceptualNulls", BindingFlags.Instance | BindingFlags.NonPublic); var conceptualNulls = fieldInfo.GetValue(objectStateManager); #endif ``` `conceptualNulls` is a `HashSet<EntityEntry>`, `EntityEntry` is an internal class, so you can only inspect the collection in the debugger to get an idea of conflicting entities. For diagnostic purposes only!!!
Must a process group have a running leader process? In Unix-like operating systems, if a process' `pid` and its `pgid` are equal, then the process is a process group leader. However, if the process leader has exited and the other processes in the same group are still running, who is the succeeding leader process?
There is no succeeding leader: once a process group leader exits, the group loses leadership. Nothing requires a process group to have a leader, it's perfectly fine not to have one, and you can still send signals to every element in the group with `kill(2)`. What exactly happens when the leader exits depends on the status of the processes in the group and whether or not the group classifies as an orphaned process group. First, let's see what is an orphaned group. POSIX defines an orphaned process group as a group in which the parent of each process belonging to that group is either a member of that same group or is part of another session. In other words, a process group is not orphaned as long as at least one process in the group has a parent in a different process group but in the same session. This definition may seem odd at first, but there is a rationale behind this, which will (hopefully) be clear in a moment. So why is it important to know if a group is orphaned? Because of processes that are stopped. If a process group is orphaned, and there is at least one process in that group that is stopped (e.g. it was suspended with `SIGSTOP` or `SIGTSTP`), then POSIX.1 requires that every process in the orphaned group be sent `SIGHUP` followed by `SIGCONT`. The reason for doing this is to avoid having the process stopped forever: consider the case where the session leader and the process group leader exit, and the group is left with a stopped process. Since the parent is in another session, it doesn't have permission to send it `SIGCONT`, so the process would never run again. OTOH, if the parent is in the same session but in a different group, then there is a chance that it will signal the stopped process with `SIGCONT`, so the group is not considered orphaned and there is no need to forcefully wake up stopped processes.
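To make the "group without a leader" behaviour concrete, here is a small sketch in Python (POSIX only, standard library only): the group leader exits, yet the group id remains valid for signalling the surviving member.

```python
import os
import signal
import subprocess
import time

# The shell becomes the leader of a brand-new session and process group,
# spawns a long-running child into that group, then exits immediately.
leader = subprocess.Popen(
    "sleep 300 & exit 0",
    shell=True,
    start_new_session=True,  # calls setsid() in the child
)
pgid = os.getpgid(leader.pid)  # grab the group id while the leader lives
leader.wait()                  # the leader is gone; nothing succeeds it

time.sleep(1)                  # give the background 'sleep' time to start
# The group still exists (the 'sleep' keeps it alive), so we can still
# signal every remaining member, leader or no leader.
os.killpg(pgid, signal.SIGTERM)
```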
How can I download a Blu-ray ISO of Debian with the most packages possible for working offline? I have pretty decent bandwidth here, but soon I will be abroad with nothing except a small mobile connection. So I would like to obtain the biggest possible ISO of Debian; in other words, the opposite of the netinst. Is it possible to obtain a Blu-ray-like ISO of Debian with ALL the distro packages? Even a 25-50 GB ISO file is fine; it's just that I will soon only be able to use mobile data and need to do many installs and uninstalls, and I have to use the "cd" as the source. I saw there are many DVD ISOs but they are partial, and I want everything in a single ISO file. Another option I was considering, instead of downloading the ISO file, would be to set up an HTTP server on my notebook and get a full mirror of Debian, then set up `sources.list` to obtain files over the internal virtual LAN between the VM and the machine. I think the huge ISO option is still the easier and better one for now ;)
You won't find a single ISO image, although you could probably build one. The closest you'll get with existing downloads is three Blu-ray disk images, which you'll need to use [`jigdo`](https://tracker.debian.org/pkg/jigdo) to download; see <http://cdimage.debian.org/debian-cd/current/amd64/jigdo-bd/> for details. Building a partial mirror is probably more sensible; you can use [`apt-mirror`](https://tracker.debian.org/pkg/apt-mirror) for that. A full mirror is overkill for your situation. It's [doable](https://www.debian.org/mirror/ftpmirror) of course, but it would take up [approximately 300GB](https://www.debian.org/mirror/size) (for sources, `all` and `amd64` packages)...
How can I store into a temporary table multiple times in a while loop? I'm doing an inner join that appends a new column to my table every time it iterates. For example, I'm trying to do this:

```
WHILE @Counter <= @MaxCount
BEGIN
    SELECT @ParameterDefinitionID = ParameterDefinitionID
    FROM #CachedParameterDefinitionIDs
    WHERE RowNumber = @Counter

    SELECT *
    INTO #ContactsWithPatientID
    FROM #ContactsWithPatientID
    INNER JOIN
        (SELECT Parameter2.ContactID AS 'Parameter2ContactID',
                Parameter2.Value AS FirstName
         FROM #CachedParameterValues Parameter2
         WHERE ParameterDefinitionID = @ParameterDefinitionID) FirstNameTable
    ON #ContactsWithPatientID.ContactID = FirstNameTable.Parameter2ContactID

    SET @Counter = @Counter + 1
END
```

It doesn't work on the first iteration because:

```
There is already an object named '#ContactsWithPatientID' in the database.
```

How can I make it work? Is there some alternative way to do this? I am doing it this way because I need to append a certain number of rows as columns in my query.
Create the table first

```
CREATE TABLE #ContactsWithPatientID (...)

WHILE @Counter <= @MaxCount
BEGIN
    SELECT @ParameterDefinitionID = ParameterDefinitionID 
    FROM #CachedParameterDefinitionIDs 
    WHERE RowNumber = @Counter

    INSERT #ContactsWithPatientID
    SELECT * 
    FROM #ContactsWithPatientID
    INNER JOIN 
    (SELECT Parameter2.ContactID AS 'Parameter2ContactID',
     Parameter2.Value AS FirstName
     FROM #CachedParameterValues Parameter2
     WHERE ParameterDefinitionID = @ParameterDefinitionID) FirstNameTable
     ON #ContactsWithPatientID.ContactID = FirstNameTable.Parameter2ContactID

    SET @Counter = @Counter + 1
END
```

This can almost certainly be done in one go too, probably as a recursive CTE.
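A different single-statement approach, sketched here under assumptions about the data (the `ParameterDefinitionID` values 101 and 102 and the column aliases are hypothetical placeholders), is to pivot the parameter rows into columns with T-SQL `PIVOT` instead of looping:

```
-- Hypothetical sketch: turn parameter rows into columns in one statement.
-- The IN list stands in for whatever IDs #CachedParameterDefinitionIDs holds;
-- if they are only known at run time, build the list with dynamic SQL.
SELECT p.ContactID,
       p.[101] AS FirstName,
       p.[102] AS LastName
INTO   #ContactsWithPatientID
FROM   (SELECT ContactID, ParameterDefinitionID, Value
        FROM   #CachedParameterValues) src
PIVOT  (MAX(Value)
        FOR ParameterDefinitionID IN ([101], [102])) p;
```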
How to prevent alembic revision --autogenerate from making a revision file if it has not detected any changes? I have a project where I'm using SQLAlchemy for models, and I'm trying to integrate Alembic for making migrations. Everything works as expected when I change models and Alembic sees that the models have changed -> it creates a good migration file with the command: `alembic revision --autogenerate -m "model changed"` But when I have NOT changed anything in the models and I use the same command: `alembic revision --autogenerate -m "should be no migration"` revision gives me an 'empty' revision file like this:

```
"""next

Revision ID: d06d2a8fed5d
Revises: 4461d5328f57
Create Date: 2021-12-02 18:09:42.208607

"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = 'd06d2a8fed5d'
down_revision = '4461d5328f57'
branch_labels = None
depends_on = None


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
```

What is the purpose of this file? Could I prevent the creation of this 'empty' file when alembic revision --autogenerate does not see any changes? For comparison, when I use Django and its internal migrations and type the command: `python manage.py makemigrations`, I get output something like `No changes detected`, and no migration file is created. Is there a way to do the same with Alembic revision? Or is there another command that could check whether there were changes in the models, so that if there were, I could simply run alembic revision and upgrade?
The accepted answer does not answer the question. The correct answer is: yes, you can call `alembic revision --autogenerate` and be sure that a revision file will be generated only if there are changes, as per Alembic's [documentation](https://alembic.sqlalchemy.org/en/latest/api/autogenerate.html#customizing-revision-generation). This is implemented in Flask-Migrate (specifically in [this](https://github.com/miguelgrinberg/Flask-Migrate/blob/main/src/flask_migrate/templates/flask/env.py#L70-L78) file); it's just a change to env.py to account for the needed feature, namely not autogenerating a revision if there are no changes to the models. You would still run `alembic revision --autogenerate -m "should be no migration"`, but the change you would make to env.py is, in short:

```
def run_migrations_online():
    # almost identical to Flask-Migrate (Thanks miguel!)
    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    def process_revision_directives(context, revision, directives):
        if config.cmd_opts.autogenerate:
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                print('No changes in schema detected.')

    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, 
            target_metadata=target_metadata,
            process_revision_directives=process_revision_directives
        )

        with context.begin_transaction():
            context.run_migrations()
```

Now you can easily call `alembic revision --autogenerate` without risking the creation of a new empty revision.
How do I remove a library from the arduino environment? In the Arduino GUI on windows, if I click on Sketch --> Import Library, at the bottom of the menu there is a section called "Contributed". Unfortunately, I had misclicked and added a library I did not want to that list. **How can I remove it from that list?** The help page only mentions that "If a sketch no longer needs a library, simply delete its #include statements from the top of your code" but it does not mention how to remove the library from the list of contributed libraries.
Go to your Arduino documents directory; inside you will find a directory named "Libraries". The imported library directory will be there. Just delete it and restart the Arduino app. Your Arduino library folder should look like this (on Windows): ``` My Documents\Arduino\libraries\ArduinoParty\ArduinoParty.cpp My Documents\Arduino\libraries\ArduinoParty\ArduinoParty.h My Documents\Arduino\libraries\ArduinoParty\examples .... ``` or like this (on Mac and Linux): ``` Documents/Arduino/libraries/ArduinoParty/ArduinoParty.cpp Documents/Arduino/libraries/ArduinoParty/ArduinoParty.h Documents/Arduino/libraries/ArduinoParty/examples ``` The only issue with unused libraries is the trivial amount of disk space they use. They aren't loaded automatically so don't take up any application memory of the Arduino IDE.
What is video timescale, timebase, or timestamp in ffmpeg? There does not seem to be any explanation online as to what these are. People talk about them a lot. I just want to know what they are and why they are significant. Using `-video_track_timescale`, how would I determine a number for it? Is it random? Should it be 0?
Modern containers govern the time component of presentation of video (and audio) frames using timestamps, rather than framerate. So, instead of recording a video as 25 fps, and thus implying that each frame should be drawn 0.04 seconds apart, they store a timestamp for each frame e.g. ``` Frame pts_time 0 0.00 1 0.04 2 0.08 3 0.12 ... ``` For the sake of precise resolution of these time values, a timebase is used i.e. a unit of time which represents one tick of a clock, as it were. So, a timebase of `1/75` represents 1/75th of a second. The **P**resentation **T**ime**S**tamps are then denominated in terms of this timebase. Timescale is simply the reciprocal of the timebase. FFmpeg shows the timescale as the `tbn` value in the readout of a stream. ``` Timebase = 1/75; Timescale = 75 Frame pts pts_time 0 0 0 x 1/75 = 0.00 1 3 3 x 1/75 = 0.04 2 6 6 x 1/75 = 0.08 3 9 9 x 1/75 = 0.12 ... ``` This method of regulating time allows variable frame-rate video.
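To the original question about `-video_track_timescale`: the value is not random, and leaving the option unset (or at 0) simply lets the muxer pick a default. A common choice, shown below on hypothetical file names, is 90000 (the conventional 90 kHz MPEG clock) or a multiple of the frame rate, so that frame durations come out as whole ticks:

```
# Remux without re-encoding, forcing the MP4 video track's timescale (tbn)
ffmpeg -i input.mp4 -c copy -video_track_timescale 90000 output.mp4
```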
Specify Column and Row of a String Search Because I'm working with a very complex table with nasty repeated values in variable places, I'd like to do a string search between specific rows and columns. For example:

```
table={{"header1", "header2", "header3", "header4"}, 
{"falsepositive", "falsepositive", "name1", "falsepositive"}, 
{"falsepositive", "falsepositive", "name2", "falsepositive"}, 
{"falsepositive", "falsepositive", "falsepositive", "falsepositive"}}

%//TableForm=
header1        header2        header3        header4
falsepositive  falsepositive  name1          falsepositive
falsepositive  falsepositive  name2          falsepositive
falsepositive  falsepositive  falsepositive  falsepositive
```

How do I look for a string, for example, in column three, rows one through two? I'd like to use `Which` to assign values based on a string's location in the table. E.g.,

```
Which[string matched in location one, value, matched in location two, value2]
```
As I understand it you want a test whether or not a given string is in a certain subsection of a matrix. You can pick these subsections using `Part` ([[...]]) and `Span` (;;), with which you can indicate ranges or subsamples of ranges. Testing whether or not this subsection contains your pattern can be done by `MemberQ`, like this: ``` MemberQ[table[[1 ;; 2, 3]], "name2"] (* ==> False *) MemberQ[table[[1 ;; 2, 3]], "header3"] (* ==> True *) ``` In this way, your `Which` statement could look like this: ``` myVar = Which[ MemberQ[table[[1 ;; 2, 3]], "name2"], 5, MemberQ[table[[2 ;; 3, 4]], "falsepositive"], 6, ... True, 20 ] ```
How to make FlexboxLayout scroll? I am using Android FlexboxLayout in one of my Activities, but when the content is larger than my screen, I cannot scroll to see it. All examples seem to scroll by default, but it just doesn't do that in my code. The XML is like this:

```
<?xml version="1.0" encoding="utf-8"?>
<com.google.android.flexbox.FlexboxLayout
    android:id="@+id/fbRoot"
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:alignContent="flex_start"
    app:alignItems="stretch"
    app:flexWrap="wrap">

.. a lot of other views ..

</com.google.android.flexbox.FlexboxLayout>
```
Wrap it inside a [ScrollView](https://developer.android.com/reference/android/widget/ScrollView) like this ``` <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:fillViewport="true"> <com.google.android.flexbox.FlexboxLayout android:id="@+id/fbRoot" xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" app:alignContent="flex_start" app:alignItems="stretch" app:flexWrap="wrap"> .. a lot of other views .. </com.google.android.flexbox.FlexboxLayout> </ScrollView> ```
Return std::tuple and move semantics / copy elision I have the following factory function:

```
auto factory() -> std::tuple<bool, std::vector<int>>
{
    std::vector<int> vec;
    vec.push_back(1);
    vec.push_back(2);
    return { true, vec };
}

auto [b, vec] = factory();
```

In the return statement, is `vec` considered an `xvalue` or `prvalue`, and therefore moved or copy elided? My guess is no, because the compiler, when list-initializing the `std::tuple` in the return statement, still doesn't know that `vec` is going to be destroyed. So maybe an explicit `std::move` is required:

```
auto factory() -> std::tuple<bool, std::vector<int>>
{
    ...
    return { true, std::move(vec) };
}

auto [b, vec] = factory();
```

Is that really required?
> > In the return statement is `vec` considered an xvalue or prvalue and therefore moved or copy elided? > > > `vec` is *always* an lvalue. Even in the simple case: ``` std::vector<int> factory() { std::vector<int> vec; return vec; } ``` That is *still* returning an lvalue. It's just that we have [special rules](http://eel.is/c++draft/class.copy.elision#1) that say that we just ignore the copy in this case when we're returning the name of an automatic object (and another [special rule](http://eel.is/c++draft/class.copy.elision#3) in the case that copy elision doesn't apply, but we still try to move from lvalues). But those special rules *only* apply to the `return object;` case, they don't apply to the `return {1, object};` case, no matter how similar it might look. In your code here, that would do a copy, because that's what you asked for. If you want to do a move, you must do: ``` return {1, std::move(object)}; ``` And in order to avoid the move, you must do: ``` auto factory() -> std::tuple<bool, std::vector<int>> { std::tuple<bool, std::vector<int>> t; auto& [b, vec] = t; b = true; vec.push_back(1); vec.push_back(2); return t; } ```
Clustering in ServiceMix 4 I'm trying to configure Apache ServiceMix 4 to provide the load balancing feature mentioned in its documentation (for example here: <http://servicemix.apache.org/clustering.html>). Although it's mentioned, I couldn't find the exact way to do it. The idea is to have 2 ServiceMixes (in a LAN, for example) with the same OSGi service installed on them. When a client tries to use the service, the load balancer takes it to an appropriate service instance on one of the ServiceMixes. Is there an easy way to do that?
Fabric8 (<http://fabric8.io/>) can do Karaf/ServiceMix clustering and much more out of the box. It also has additional clustered Camel components such as the master and fabric endpoints - <http://fabric8.io/gitbook/camelEndpointMaster.html> - <http://fabric8.io/gitbook/camelEndpointFabric.html> There is a clustered Camel example that demonstrates this - <https://github.com/fabric8io/fabric8/tree/master/fabric/fabric8-karaf/src/main/resources/distro/fabric/import/fabric/profiles/example/camel/cluster> The client will then load balance between the active nodes that provide the service. So if you have 5 nodes, then it balances among those. If one of the nodes dies or is stopped etc, then it just balances between the 4 still-active nodes. So the solution is fully elastic and scalable. The principle is illustrated in the image below: ![enter image description here](https://i.stack.imgur.com/Yi0tv.png)
Android animate drop down/up view proper I'm trying to do a proper slide-down animation. The view that slides down should push all views below it down in one smooth movement, and again when it slides up all the views should follow in one smooth movement. **What I've tried**: In code:

```
LinearLayout lin = (LinearLayout)findViewById(R.id.user_list_container);
setLayoutAnimSlidedownfromtop(lin, this);
lin.addView(getLayoutInflater().inflate(R.layout.user_panel,null),0);
```

And:

```
public static void setLayoutAnimSlidedownfromtop(ViewGroup panel, Context ctx) {
    AnimationSet set = new AnimationSet(true);

    Animation animation = new AlphaAnimation(0.0f, 1.0f);
    animation.setDuration(100);
    set.addAnimation(animation);

    animation = new TranslateAnimation(
        Animation.RELATIVE_TO_SELF, 0.0f, Animation.RELATIVE_TO_SELF, 0.0f,
        Animation.RELATIVE_TO_SELF, -1.0f, Animation.RELATIVE_TO_SELF, 0.0f
    );
    animation.setDuration(500);
    set.addAnimation(animation);

    LayoutAnimationController controller = new LayoutAnimationController(set, 0.25f);
    panel.setLayoutAnimation(controller);
}
```

My `user_panel.xml`:

```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="40dp"
    android:orientation="vertical" >
    <ImageView
        android:layout_alignParentLeft="true"
        android:layout_height="wrap_content"
        android:layout_width="wrap_content"
        android:src="@drawable/icon" />
</LinearLayout>
```

Top of the main XML:

```
<LinearLayout
    android:id="@+id/user_list_container"
    android:layout_alignParentTop="true"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"/>

<LinearLayout
    android:id="@+id/container"
    android:layout_below="@+id/user_list_container"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
```

The problem with the above approach is that when I start the animation, first the empty space for the view is created, and then the view slides down. I'd like it to slowly push all the other views down instead of doing it in one hard motion.
So I ended up doing it myself with some help from [this answer](https://stackoverflow.com/a/9112691/16936415). If it had been [Android 3.0](http://developer.android.com/guide/topics/graphics/animation.html), I could have used property animation, but it's not, so I had to do it myself. Here is what I ended up with:

```
import android.view.View;
import android.view.animation.Animation;
import android.view.animation.Transformation;

/**
 * Class for handling collapse and expand animations.
 * @author Esben Gaarsmand
 *
 */
public class ExpandCollapseAnimation extends Animation {
    private View mAnimatedView;
    private int mEndHeight;
    private int mType;

    /**
     * Initializes expand collapse animation, has two types, collapse (1) and expand (0).
     * @param view The view to animate
     * @param duration
     * @param type The type of animation: 0 will expand from gone and 0 size to visible and layout size defined in XML.
     * 1 will collapse view and set to gone
     */
    public ExpandCollapseAnimation(View view, int duration, int type) {
        setDuration(duration);
        mAnimatedView = view;
        mEndHeight = mAnimatedView.getLayoutParams().height;
        mType = type;
        if(mType == 0) {
            mAnimatedView.getLayoutParams().height = 0;
            mAnimatedView.setVisibility(View.VISIBLE);
        }
    }

    @Override
    protected void applyTransformation(float interpolatedTime, Transformation t) {
        super.applyTransformation(interpolatedTime, t);
        if (interpolatedTime < 1.0f) {
            if(mType == 0) {
                mAnimatedView.getLayoutParams().height = (int) (mEndHeight * interpolatedTime);
            } else {
                mAnimatedView.getLayoutParams().height = mEndHeight - (int) (mEndHeight * interpolatedTime);
            }
            mAnimatedView.requestLayout();
        } else {
            if(mType == 0) {
                mAnimatedView.getLayoutParams().height = mEndHeight;
                mAnimatedView.requestLayout();
            } else {
                mAnimatedView.getLayoutParams().height = 0;
                mAnimatedView.setVisibility(View.GONE);
                mAnimatedView.requestLayout();
                mAnimatedView.getLayoutParams().height = mEndHeight;
            }
        }
    }
}
```

Example usage:

```
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

public class AnimationTestActivity extends Activity {
    private boolean mActive = false;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        final Button animatedButton = (Button) findViewById(R.id.animatedButton);
        Button button = (Button) findViewById(R.id.button);
        button.setOnClickListener(new OnClickListener() {

            @Override
            public void onClick(View v) {
                ExpandCollapseAnimation animation = null;
                if(mActive) {
                    animation = new ExpandCollapseAnimation(animatedButton, 1000, 1);
                    mActive = false;
                } else {
                    animation = new ExpandCollapseAnimation(animatedButton, 1000, 0);
                    mActive = true;
                }
                animatedButton.startAnimation(animation);
            }
        });
    }
}
```

XML:

```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >
    <Button
        android:id="@+id/animatedButton"
        android:visibility="gone"
        android:layout_width="fill_parent"
        android:layout_height="50dp"
        android:text="@string/hello"/>
    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello" />
    <Button
        android:id="@+id/button"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello"/>
</LinearLayout>
```

---

**Edit**

**Measure wrap\_content height:** So in order to get this
to work for `wrap_content`, I measured the height of the view before I start the animation and then use this measured height as the actual height. Below is the code for measuring the height of the view and set this as the new height (I assume the view uses screen width, change according to your own needs): ``` /** * This method can be used to calculate the height and set it for views with wrap_content as height. * This should be done before ExpandCollapseAnimation is created. * @param activity * @param view */ public static void setHeightForWrapContent(Activity activity, View view) { DisplayMetrics metrics = new DisplayMetrics(); activity.getWindowManager().getDefaultDisplay().getMetrics(metrics); int screenWidth = metrics.widthPixels; int heightMeasureSpec = MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED); int widthMeasureSpec = MeasureSpec.makeMeasureSpec(screenWidth, MeasureSpec.EXACTLY); view.measure(widthMeasureSpec, heightMeasureSpec); int height = view.getMeasuredHeight(); view.getLayoutParams().height = height; } ```
Setting up taints on pod template in Kubernetes Plugin on Jenkins I would like to apply tolerations on the Jenkins slave pods that are dynamically spun up by the Kubernetes Plugin. I see that the Kubernetes Plugin does not provide any option on the Jenkins UI to add the tolerations, as shown in the image below. Could anyone tell me how I can add the tolerations, in this case, to the slave pods (which are generated by the Kubernetes Plugin)? P.S.: I do not want to use labels; I strictly want to use tolerations. And I am not sure that I want to add the podTemplate in the Jenkinsfile and specify the tolerations in that podTemplate, because this would force me to do the same for every job's Jenkinsfile, which is tedious and not possible if multiple developers prepare their own respective pipelines.[![enter image description here](https://i.stack.imgur.com/r0lKJ.png)](https://i.stack.imgur.com/r0lKJ.png)
You can actually add tolerations to the pod configs in Jenkins outright. <https://github.com/jenkinsci/kubernetes-plugin/pull/311#issuecomment-386342776> You can add this into the "raw yaml for the pod" under your container and update your criteria accordingly for the labels. I have used this myself, and it does indeed work. (Here's the yaml from the link above.)

```
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins/kube-default: true
    app: jenkins
    component: agent
spec:
  nodeSelector:
    werkint.com/entity: other
  tolerations:
  - key: werkint.com/entity
    operator: Equal
    value: other
    effect: NoSchedule
```
How to send email using SMTP in ZF2? I made a ZF2 action in which I want to send an email to a user saved in the database, so I used Gmail for sending emails, but the email is not sent to the user. How do I send email in ZF2? Here is my code:

```
public function addAction()
{
    //check permissions
    if(!$this->acl->isAllowed($this->user->user_type, 'user.create'))
        $this->redirect()->toRoute('admin_index');
    //
    $this->layout()->top_heading = 'Add User';

    $dm = $this->getServiceLocator()->get('doctrine.documentmanager.odm_default');
    ////////////////////////////////
    $form = new UserForm();
    $update=false;
    $message='';
    if($this->getRequest()->isPost())
    {
        $post = $this->getRequest()->getPost();
        $form->setInputFilter($form->getInputFilter());
        $form->setData($post);
        if($form->isValid())
        {
            $formData=$form->getData();
            $s = new User();
            $user = $dm->getRepository('Calendar\Document\User')->findOneBy(array(
                "username" => $formData['username']
            ));
            $email = $dm->getRepository('Calendar\Document\User')->findOneBy(array(
                "email" => $formData['email']
            ));
            if($user || $email)
            {
                $update=2;
                $message='User Already Exists.';
            }
            if($post['role']=='admin')
            {
                $update=2;
                $message="Select Some Other Role.";
            }
            else
            {
                $s->setProperty('username',$post['username']);
                $s->setProperty('password',md5($post['password']));
                $s->setProperty('email',$post['email']);
                $s->setProperty('user_type',$post['role']);
                $s->setProperty('dataentered',date('Y-m-d H:m:i'));
                $dm->persist($s);
                $dm->flush();
                //echo new Response($s->getProperty('id'));
                //
                $update=1;
                $message='User Added Successfully.';
                $form = new UserForm();
                $config = array('ssl' => 'tls',
                    'auth' => 'login',
                    'username' => '[email protected]',
                    'password' => 'password');
                $transport = new Zend_Mail_Transport_Smtp('smtp.gmail.com', $config);
                $mail = new Zend_Mail();
                $mail->setBodyHtml($bodytext);
                $mail->setFrom('[email protected]');
                $mail->addTo($formData['email'], $formData['username']);
                $mail->setSubject('Profile Activation');
                $mail->send($transport);
```

How do I send email using Gmail in ZF2?
Try using SMTP like this:

```
$message = new \Zend\Mail\Message();
$message->setBody('This is the body');
$message->setFrom('[email protected]');
$message->addTo('[email protected]');
$message->setSubject('Test subject');

$smtpOptions = new \Zend\Mail\Transport\SmtpOptions();
$smtpOptions->setHost('smtp.gmail.com')
            ->setConnectionClass('login')
            ->setName('smtp.gmail.com')
            ->setConnectionConfig(array(
                'username' => 'YOUR GMAIL ADDRESS',
                'password' => 'YOUR PASSWORD',
                'ssl' => 'tls',
            ));

$transport = new \Zend\Mail\Transport\Smtp($smtpOptions);
$transport->send($message);
```
Kotlin lateinit equivalent in Java Hello. When I use Kotlin to program for Android, I see `lateinit` in the code. What is the equivalent in Java? How can I change this code from Kotlin to Java?

```
public class MyTest {
    lateinit var subject: TestSubject
}
```
`lateinit` in Kotlin is there so that you can have non-nullable types on variables that you can't initialize at the moment the class containing them is created. Using your example, if you didn't use `lateinit`, you'd have to make the `subject` nullable, since it has to be initialized with a value. ``` public class MyTest { var subject: TestSubject? = null } ``` This would force you to do null checks every time you use it, which is ugly, so you can mark it as `lateinit` instead. --- In Java, you don't really have this problem, since everything is nullable, and declaring an uninitialized field is nothing special: ``` public class JavaTest { TestSubject subject; } ``` This initializes `subject` to `null`, so it's practically equivalent to the non-`lateinit` Kotlin example. The only real difference between the `lateinit` version in Kotlin and the Java version is that you get a more specific exception when trying to access an uninitialized property in Kotlin, namely, a `UninitializedPropertyAccessException`, which will make debugging it easier than having to look for the cause of a generic `NullPointerException`. --- If you really wanted this slightly different behavior, you could wrap your Java properties in some sort of wrappers, but I don't think it would be worth the syntactic overhead to do so. A very basic (not thread safe, for example) way to do this would be: Have a generic wrapper class for properties: ``` public class Property<T> { private T value = null; public T get() { if (value == null) throw new UninitializedPropertyAccessException("Property has not been initialized"); return value; } public void set(T value) { if (value == null) throw new IllegalArgumentException("Value can't be null"); this.value = value; } } ``` Use this wrapper in your classes: ``` public class JavaTest { Property<TestSubject> subject = new Property<>(); } ``` And then this usage would give you the uninitialized exception: ``` JavaTest test = new JavaTest(); test.subject.get(); ``` And this one would run fine: ``` JavaTest test = new JavaTest(); test.subject.set(new TestSubject()); test.subject.get(); ``` --- **Edit:** this is very similar to how `lateinit` works in Kotlin, if you decompile the bytecode of your example to Java, this is what you get: ``` public final class MyTest { @NotNull public TestSubject subject; @NotNull public final TestSubject getSubject() { TestSubject var10000 = this.subject; if(this.subject == null) { Intrinsics.throwUninitializedPropertyAccessException("subject"); } return var10000; } public final void setSubject(@NotNull TestSubject var1) { Intrinsics.checkParameterIsNotNull(var1, "<set-?>"); this.subject = var1; } } ``` Basically, the compiler puts the code for checking the property access inside the class itself (+ uses some helper methods) instead of using a wrapper, which is more efficient.
Change GLSurfaceView renderer I looked all over the net in order to find out whether it's possible to change the renderer of a GLSurfaceView on the fly. The reason is that I want to change the OpenGL program, and initialize all the attributes and uniform params from its vertex and fragment shaders, and I don't want any change to require creating a brand new GLSurfaceView with a brand new Renderer. It seems like a reasonable operation that should be doable.
Note: I haven't implemented the following. [`GLSurfaceView.Renderer`](http://developer.android.com/reference/android/opengl/GLSurfaceView.Renderer.html) is an interface. Implement it three times: twice for your different OpenGL renderers, and once attached to the [`GLSurfaceView`](http://developer.android.com/reference/android/opengl/GLSurfaceView.html). The latter only dispatches to one of the former, and allows changing the renderer to which it dispatches. The code must hold a reference to this renderer, and may need to be synchronized with the draw calls (though I'm not sure). Be aware that you cannot easily switch OpenGL ES context data. It is shared between all renderer instances.

```
class DispatchingRenderer implements GLSurfaceView.Renderer {

  private class Renderer1 implements GLSurfaceView.Renderer {
    ...
  }
  private class Renderer2 implements GLSurfaceView.Renderer {
    ...
  }

  public DispatchingRenderer() {
    this.r1 = new Renderer1();
    this.r2 = new Renderer2();
    this.currentRenderer = this.r1;
  }

  public void ToggleRenderer() {
    if(this.currentRenderer == this.r1) {
      this.currentRenderer = this.r2;
    } else if (this.currentRenderer == this.r2) {
      this.currentRenderer = this.r1;
    }
  }

  public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // do one-time setup
  }

  public void onSurfaceChanged(GL10 gl, int w, int h) {
    this.currentRenderer.onSurfaceChanged(gl, w, h);
  }

  public void onDrawFrame(GL10 gl) {
    this.currentRenderer.onDrawFrame(gl);
  }
}
```
Plotting a large number of custom functions in ggplot in R using stat\_function() The basic issue is that I'd like to figure out how to add a large number (1000) of custom functions to the same figure in ggplot, using different values for the function coefficients. I have seen other questions about how to add two or three functions, but not 1000, and questions about adding in different functional forms, but not the same form with multiple values for the parameters... The goal is to have stat\_function draw the lines using parameter values stored in a data frame, but with no actual data for x. [The overall goal here is to show the large uncertainty in the model parameters of a non-linear regression from a small dataset, which translates into uncertainty associated with predictions from this data (which I'm trying to convince someone else is a bad idea). I often do this by plotting many lines built from the uncertainty in the model parameters (a la Andrew Gelman's Multilevel Regression textbook).] As an example, here is the plot in base R graphics.

```
#The data
p.gap <- c(50,45,57,43,32,30,14,36,51)
p.ag <- c(43,24,52,46,28,17,7,18,29)
data <- as.data.frame(cbind(p.ag, p.gap))

#The model (using non-linear least squares regression):
fit.1.nls <- nls(formula=p.gap~beta1*p.ag^(beta2), start=list(beta1=5.065, beta2=0.6168))
summary(fit.1.nls)

#From the summary, I find the means and s.e.'s of the two parameters, and develop their distributions:
beta1 <- rnorm(1000, 7.8945, 3.5689)
beta2 <- rnorm(1000, 0.4894, 0.1282)
coefs <- as.data.frame(cbind(beta1,beta2))

#This is the plot I want (using curve() and base R graphics):
plot(data$p.ag, data$p.gap, xlab="% agricultural land use",
     ylab="% of riparian buffer gap", xlim=c(0,130), ylim=c(0,130), pch=20, type="n")
for (i in 1:1000){curve(coefs[i,1]*x^(coefs[i,2]), add=T, col="grey")}
curve(coef(fit.1.nls)[[1]]*x^(coef(fit.1.nls)[[2]]), add=T, col="red")
points(data$p.ag, data$p.gap, pch=20)
```

I can plot the mean model function with the data in ggplot:

```
fit.mean <- function(x){7.8945*x^(0.4894)}

ggplot(data, aes(x=p.ag, y=p.gap)) +
  scale_x_continuous(limits=c(0,100), "% ag land use") +
  scale_y_continuous(limits=c(0,100), "% riparian buffer gap") +
  stat_function(fun=fit.mean, color="red") +
  geom_point()
```

But nothing I do draws multiple lines in ggplot. I can't seem to find any help on drawing functions from parameter values on the ggplot website, or on this site, which are both usually very helpful. Does this violate enough plotting theory that no one dares do it? Any help is appreciated. Thank you!
It is possible to collect multiple geoms or stats (and even other elements of a plot) into a vector or list and add that vector/list to the plot. Using this, the `plyr` package can be used to make a list of `stat_function`, one for each row of `coefs` ``` library("plyr") coeflines <- alply(as.matrix(coefs), 1, function(coef) { stat_function(fun=function(x){coef[1]*x^coef[2]}, colour="grey") }) ``` Then just add this to the plot ``` ggplot(data, aes(x=p.ag, y=p.gap)) + scale_x_continuous(limits=c(0,100), "% ag land use") + scale_y_continuous(limits=c(0,100), "% riparian buffer gap") + coeflines + stat_function(fun=fit.mean, color="red") + geom_point() ``` ![enter image description here](https://i.stack.imgur.com/8iTUv.png) A couple of notes: - This is slow. It took a few minutes on my computer to draw. `ggplot` was not designed to be very efficient at handling circa 1000 layers. - This just addresses adding the 1000 lines. Per @Roland's comment, I don't know if this represents what you want/expect it to statistically.
Golang pq: syntax error when executing sql Using `revel`, `golang 1.1.2`, `gorp`, `postgres 9.3.2` on `heroku` Following `robfig`'s List [booking example](https://github.com/robfig/revel/blob/master/samples/booking/app/controllers/hotels.go) ``` func (c App) ViewPosts(page int) revel.Result { if page == 0 { page = 1 } var posts []*models.Post size := 10 posts = loadPosts(c.Txn.Select(models.Post{}, `select * from posts offset ? limit ?`, (page-1)*size, size)) // error here return c.RenderJson(posts) } ``` Not sure why I'm getting `pq: syntax error at or near "limit"`. I'm assuming the combined query is wrong. Why does the query not end up being something like `select * from posts offset 0 limit 10`, which I've tested to run on postgres. Where am I messing up?
I'm not familiar with Postgres, but I found this [issue](https://github.com/lib/pq/issues/174). I think you should use it the way the [godoc](http://godoc.org/github.com/lib/pq) shows. **Example in godoc**

```
age := 21
rows, err := db.Query("SELECT name FROM users WHERE age = $1", age)
```

(Replace "?" with "$n") **Your code**

```
func (c App) ViewPosts(page int) revel.Result {
    if page == 0 {
        page = 1
    }
    var posts []*models.Post
    size := 10
    posts = loadPosts(c.Txn.Select(models.Post{},
        `select * from posts offset $1 limit $2`, (page-1)*size, size))
    return c.RenderJson(posts)
}
```
Calling class method as part of initialization # Current Status I have an abstract base class that hosts data in the form of a numpy array, knows how to work with this data, and can explain to matplotlib how to draw it. To accommodate different types of data, it has a number of subclasses, like this:

```
class PlotData():
    """Base Class"""
    subclasslist = []
    @classmethod
    def register(cls):
        super().subclasslist.append(cls)

    def __new__(self, initdata, *args, **kwargs):
        for subclass in subclasslist:
            try:
                subclass.__test__(initdata)
            except AssertionError:
                continue
            else:
                break
        else:
            raise TypeError("Initdata does not fit any known subclass")
        return subclass(initdata, *args, **kwargs)


class Plot3D(PlotData):
    """Subclass for 3d-plotting data"""
    def __test__(initdata):
        assert Data_is_the_right_kind

class Plot_XY(PlotData):
    """Subclass for plotting X-Y relations of data"""
    def __test__(initdata):
        assert Data_is_the_right_kind
```

# The Issue Now, the issue is how to get the class references into the subclasslist. At first I wanted to call super().register() in the class body, but I'm unable to get a reference to the class itself, which is what I want to store in the list. A small search has yielded two possible solutions, and I was wondering which one was best. # Solution 1 Adding a call after each class definition, like this:

```
class Plot_XY(PlotData):
    """Subclass for plotting X-Y relations of data"""
    def __test__(initdata):
        assert Data_is_the_right_kind
Plot_XY.register()
```

This works, but seems like a very dirty solution to me - a very important part of the class structure is located outside of the body. # Solution 2 Another possibility could be class decorators. However, I've never used them before, and the examples I've found are generally used to override/add functionality to methods. ([here](https://github.com/agiliq/Django-parsley/blob/master/parsley/decorators.py) and [here](https://stackoverflow.com/questions/681953/python-class-decorator), for example). I am familiar with function decorators though, and the following should roughly make clear what I'm aiming for (and a dumbed-down version works in the interpreter):

```
def some_creative_decorator_name(cls):
    cls.register()
    return cls
```

or at least, something that functions like Solution 1 but looks like:

```
@some_creative_decorator_name
class Plot_XY(PlotData):
    """Subclass for plotting X-Y relations of data"""
    def __test__(initdata):
        assert Data_is_the_right_kind
```

It seems to work just as well, but will this screw up stuff like inheritance? That was one of the concerns noted in the linked pages, and I don't really dare count too much on it. (I am not expecting people to subclass it further, but I don't really want to make it impossible if it's desired.) (Of course other solutions are welcome as well.)
What you are doing is useless because it's already provided: ``` >>> class A(object):pass ... >>> class B(A):pass ... >>> class C(A): pass ... >>> A.__subclasses__() [<class '__main__.B'>, <class '__main__.C'>] >>> ``` There is no need to keep your own `subclasslist` when python already provides one for you. Note that this doesn't include subclasses of subclasses: ``` >>> class D(B):pass ... >>> A.__subclasses__() [<class '__main__.B'>, <class '__main__.C'>] ``` However it is easy enough to find all the subclasses: ``` >>> def all_subclasses(klass): ... for sub in klass.__subclasses__(): ... yield sub ... yield from all_subclasses(sub) ... >>> list(all_subclasses(A)) [<class '__main__.B'>, <class '__main__.D'>, <class '__main__.C'>] ``` --- This said, if you want to replicate this functionality it is easier to look at how the default method works. And you'd discover that: ``` >>> '__subclasses__' in dir(object) False >>> '__subclasses__' in dir(type) True ``` So here you can see that it is a method of `type` which is the **metaclass** of `object`. The way to properly replicate this is to write your custom metaclass. Basically a metaclass is similar to the decorator approach however: - It is more general because you can do stuff before the class is created, control how it is created and do something afterwards. A decorator receives the class object when it is completed and can only do post-creation stuff - They are inherited, so you don't have to add anything explicit for each class, but only to the base class. I'm not going into details here. Check out [What is a metaclass in Python?](https://stackoverflow.com/q/100003/510937) for more information on metaclasses.
How to do a well-coded splash screen We all know there are lots of tutorials about how to do a splash screen on Android. But we also know those are *pseudo-splashscreens*. I searched through many and I always saw `Thread.sleep(x)`. That's not well coded; it's just to make the app look beautiful and professional, and that's not what I want! Another problem with those splash screens is that they don't solve my problem, because they are only shown **after** the activity starts and the content view is shown. I have an app that does lots of things while initializing, and when the app starts the user sees a black screen for a few seconds, enough time to be annoying. So that's why I want to show a well-coded splash screen that removes the black screen that appears before the content view has been set. I tried something. I included the splash screen (a `RelativeLayout`) in the layout that is set in the `MainActivity`, but as far as I know Android only shows the content after everything has been loaded, so if I'm trying to show some view from the content view I have to wait until everything has finished. Still, I'll include my code; it may help somehow...

```
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    new SplashTask().execute();
}

private class SplashTask extends AsyncTask<Void, Void, Void> {
    @Override
    protected void onPreExecute() {
        initializeViews();
        mSplashScreen.setVisibility(View.VISIBLE);
    }

    @Override
    protected Void doInBackground(Void... params) {
        return null;
    }

    @Override
    protected void onPostExecute(Void params) {
        Standard.Initiate(MainActivity.this);
        verifyListStats();
        loadListAdapters();
        setOnClickListeners();

        mSplashScreen.setVisibility(View.GONE);
    }
}
```

I tried to load some resources in `doInBackground(...)`, but because I do some operations in `onResume()` that need those resources, I can't do it (or at least I think I can't). Any idea? I heard about a built-in mechanism similar to iOS launch images; perhaps that could be a way.
When an activity is launched, Android starts a Zygote, an empty activity which does nothing, sets your activity's theme on it, and then launches it. Once your activity is ready for display, it swaps the displayed activity to yours. For more info about Zygote, you can read [this article](http://cyrilmottier.com/2013/01/23/android-app-launching-made-gorgeous/) by Cyril Mottier. So to answer your question, you can do this:

1. Create a small theme with a custom window background displaying your splash info (you can use a 9-patch to center unscaled content);
2. In your manifest, use this splash theme for your activity;
3. In your activity's onCreate() method, call setTheme(R.style.YourActivityTheme) (call it before setContentView());
4. Enjoy...

That way, your "splash screen" (i.e. the Zygote with your splash theme) will be visible until your activity is ready to be displayed.
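A minimal sketch of steps 1-3 (the names SplashTheme, AppTheme, and splash\_background are hypothetical placeholders, not anything the platform mandates):

```
<!-- res/values/styles.xml: the splash theme just paints the window background -->
<style name="SplashTheme" parent="@android:style/Theme.Holo.Light">
    <item name="android:windowBackground">@drawable/splash_background</item>
</style>
```

```
<!-- AndroidManifest.xml: the activity starts out with the splash theme -->
<activity
    android:name=".MainActivity"
    android:theme="@style/SplashTheme" />
```

```
@Override
public void onCreate(Bundle savedInstanceState) {
    // Swap back to the real theme before any view is inflated
    setTheme(R.style.AppTheme);
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
}
```

The window background from the manifest theme is what the user sees while onCreate() is still working; the setTheme() call only takes effect once the content view is actually drawn.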
How to watch for array changes? In Javascript, is there a way to be notified when an array is modified using push, pop, shift or index-based assignment? I want something that would fire an event that I could handle. I know about the `watch()` functionality in SpiderMonkey, but that only works when the entire variable is set to something else.
There are a few options... ## 1. Override the push method Going the quick and dirty route, you could override the `push()` method for your array1: ``` Object.defineProperty(myArray, "push", { // hide from for..in and prevent further overrides (via default descriptor values) value: function () { for (var i = 0, n = this.length, l = arguments.length; i < l; i++, n++) { RaiseMyEvent(this, n, this[n] = arguments[i]); // assign/raise your event } return n; } }); ``` 1 Alternatively, if you'd like to target *all* arrays, you could override `Array.prototype.push()`. Use caution, though; other code in your environment may not like or expect that kind of modification. Still, if a catch-all sounds appealing, just replace `myArray` with `Array.prototype`. Now, that's just one method and there are lots of ways to change array content. We probably need something more comprehensive... ## 2. Create a custom observable array Rather than overriding methods, you could create your own observable array. This particular implementation copies an array into a new array-like object and provides custom `push()`, `pop()`, `shift()`, `unshift()`, `slice()`, and `splice()` methods **as well as** custom index accessors (provided that the array size is only modified via one of the aforementioned methods or the `length` property). ``` function ObservableArray(items) { var _self = this, _array = [], _handlers = { itemadded: [], itemremoved: [], itemset: [] }; function defineIndexProperty(index) { if (!(index in _self)) { Object.defineProperty(_self, index, { configurable: true, enumerable: true, get: function() { return _array[index]; }, set: function(v) { _array[index] = v; raiseEvent({ type: "itemset", index: index, item: v }); } }); } } function raiseEvent(event) { _handlers[event.type].forEach(function(h) { h.call(_self, event); }); } Object.defineProperty(_self, "addEventListener", { configurable: false, enumerable: false, writable: false, value: function(eventName, handler) { eventName = ("" + eventName).toLowerCase(); if (!(eventName in _handlers)) throw new Error("Invalid event name."); if (typeof handler !== "function") throw new Error("Invalid handler."); _handlers[eventName].push(handler); } }); Object.defineProperty(_self, "removeEventListener", { configurable: false, enumerable: false, writable: false, value: function(eventName, handler) { eventName = ("" + eventName).toLowerCase(); if (!(eventName in _handlers)) throw new Error("Invalid event name."); if (typeof handler !== "function") throw new Error("Invalid handler."); var h = _handlers[eventName]; var ln = h.length; while (--ln >= 0) { if (h[ln] === handler) { h.splice(ln, 1); } } } }); Object.defineProperty(_self, "push", { configurable: false, enumerable: false, writable: false, value: function() { var index; for (var i = 0, ln = arguments.length; i < ln; i++) { index = _array.length; _array.push(arguments[i]); defineIndexProperty(index); raiseEvent({ type: "itemadded", index: index, item: arguments[i] }); } return _array.length; } }); Object.defineProperty(_self, "pop", { configurable: false, enumerable: false, writable: false, value: function() { if (_array.length > -1) { var index = _array.length - 1, item = _array.pop(); delete _self[index]; raiseEvent({ type: "itemremoved", index: index, item: item }); return item; } } }); Object.defineProperty(_self, "unshift", { configurable: false, enumerable: false, writable: false, value: function() { for (var i = 0, ln = arguments.length; i < ln; i++) { _array.splice(i, 0, arguments[i]); 
defineIndexProperty(_array.length - 1);

        raiseEvent({
          type: "itemadded",
          index: i,
          item: arguments[i]
        });
      }

      for (; i < _array.length; i++) {
        raiseEvent({
          type: "itemset",
          index: i,
          item: _array[i]
        });
      }

      return _array.length;
    }
  });

  Object.defineProperty(_self, "shift", {
    configurable: false,
    enumerable: false,
    writable: false,
    value: function() {
      if (_array.length > -1) {
        var item = _array.shift();
        delete _self[_array.length];
        raiseEvent({
          type: "itemremoved",
          index: 0,
          item: item
        });
        return item;
      }
    }
  });

  Object.defineProperty(_self, "splice", {
    configurable: false,
    enumerable: false,
    writable: false,
    value: function(index, howMany /*, element1, element2, ... */ ) {
      var removed = [],
        item,
        pos;

      index = index == null ? 0 : index < 0 ? _array.length + index : index;

      howMany = howMany == null ? _array.length - index : howMany > 0 ? howMany : 0;

      while (howMany--) {
        item = _array.splice(index, 1)[0];
        removed.push(item);
        delete _self[_array.length];
        raiseEvent({
          type: "itemremoved",
          index: index + removed.length - 1,
          item: item
        });
      }

      for (var i = 2, ln = arguments.length; i < ln; i++) {
        _array.splice(index, 0, arguments[i]);
        defineIndexProperty(_array.length - 1);
        raiseEvent({
          type: "itemadded",
          index: index,
          item: arguments[i]
        });
        index++;
      }

      return removed;
    }
  });

  Object.defineProperty(_self, "length", {
    configurable: false,
    enumerable: false,
    get: function() {
      return _array.length;
    },
    set: function(value) {
      var n = Number(value);
      var length = _array.length;
      if (n % 1 === 0 && n >= 0) {
        if (n < length) {
          _self.splice(n);
        } else if (n > length) {
          _self.push.apply(_self, new Array(n - length));
        }
      } else {
        throw new RangeError("Invalid array length");
      }
      _array.length = n;
      return value;
    }
  });

  Object.getOwnPropertyNames(Array.prototype).forEach(function(name) {
    if (!(name in _self)) {
      Object.defineProperty(_self, name, {
        configurable: false,
        enumerable: false,
        writable: false,
        value: Array.prototype[name]
      });
    }
  });

  if (items instanceof Array) {
    _self.push.apply(_self, items);
  }
}

(function testing() {

  var x = new ObservableArray(["a", "b", "c", "d"]);

  console.log("original array: %o", x.slice());

  x.addEventListener("itemadded", function(e) {
    console.log("Added %o at index %d.", e.item, e.index);
  });

  x.addEventListener("itemset", function(e) {
    console.log("Set index %d to %o.", e.index, e.item);
  });

  x.addEventListener("itemremoved", function(e) {
    console.log("Removed %o at index %d.", e.item, e.index);
  });

  console.log("popping and unshifting...");
  x.unshift(x.pop());

  console.log("updated array: %o", x.slice());

  console.log("reversing array...");
  console.log("updated array: %o", x.reverse().slice());

  console.log("splicing...");
  x.splice(1, 2, "x");
  console.log("setting index 2...");
  x[2] = "foo";

  console.log("setting length to 10...");
  x.length = 10;
  console.log("updated array: %o", x.slice());

  console.log("setting length to 2...");
  x.length = 2;

  console.log("extracting first element via shift()");
  x.shift();

  console.log("updated array: %o", x.slice());

})();
```

See [`Object.defineProperty()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty) for reference.

That gets us closer but it's still not bulletproof... which brings us to:

## 3. Proxies

A [Proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) object offers another solution for [the modern browser](https://caniuse.com/proxy). It allows you to intercept method calls, accessors, etc.
Most importantly, you can do this without even providing an explicit property name... which would allow you to test for an arbitrary, index-based access/assignment. You can even intercept property deletion. Proxies would effectively allow you to inspect a change *before* deciding to allow it... in addition to handling the change after the fact. Here's a stripped down sample: ``` (function() { if (!("Proxy" in window)) { console.warn("Your browser doesn't support Proxies."); return; } // our backing array var array = ["a", "b", "c", "d"]; // a proxy for our array var proxy = new Proxy(array, { deleteProperty: function(target, property) { delete target[property]; console.log("Deleted %s", property); return true; }, set: function(target, property, value, receiver) { target[property] = value; console.log("Set %s to %o", property, value); return true; } }); console.log("Set a specific index.."); proxy[0] = "x"; console.log("Add via push()..."); proxy.push("z"); console.log("Add/remove via splice()..."); proxy.splice(1, 3, "y"); console.log("Current state of array: %o", array); })(); ```
Change element style on hover of another element I have the following HTML code:

```
<div class="module-title">
    <h2 class="title" style="visibility: visible;">
        <span>Spantext</span>
        Nonspantext
    </h2>
</div>
```

What I want to do is to hover over the h2 element and have both the spantext and the nonspantext change color. My limitation is that I cannot put the Nonspantext inside span tags (I cannot change the HTML code at all). The problem is that when I put the CSS rule

```
.title :hover {color:#D01A13;}
```

only the spantext changes color on hover, and when I put the code

```
.module-title :hover {color:#D01A13;}
```

the nonspantext will change color if I hover over the spantext, but I also want the opposite to happen. I am aware of the '+' and '~' combinators in CSS, but I could not make them work.
``` .title:hover { color:#D01A13; } ``` Spaces matter in CSS. The space is the "children" operator, so `.title :hover` means "when any children of `.title` are hovered over." Text nodes don't count as children. [Here is a fiddle.](http://jsfiddle.net/AW2pM/) --- Re your comment: > > Ok, this code is good, I did not know about spaces in CSS. But, forgot to mention that there is also a rule .title span {color:#888888 !important;} that complicates the things a little bit. If I cannot change this rule, is there any chance that the .title can change color on hover? > > > Sure, you could try ``` .title:hover, .title:hover span { color:#D01A13 !important; } ``` Of course, this is very ugly. This is why you should use `!important` as little as possible. [Fiddle](http://jsfiddle.net/AW2pM/1/)
PACT - Using provider state I am trying to use Pact for validating Spring Boot microservices. I have generated the pact file from the consumer and verified it on the provider side using a Pact broker. I have another use case where I need to execute some code before validating the pact file against the actual service response. I read about the state change URL and state change with closure to achieve this, but couldn't find an example of how to do it. Can someone help? My specific situation is: I have created a contract to update customer with id 1234 (First Name: test, Last Name: user). If this customer doesn't exist, then I would need to insert this data into the DB by reading the first name, last name, and id from the update request in the pact file, plus additional info (city, state, phone number) through state change code. So my question is, can I read the request data from the pact file through state change, instead of configuring the first name, last name, and id on the verification side?
The state change URL is a hook you create on the provider to allow Pact to tell the provider what state it should be in at the start of the test. Before each test runs, the mock consumer taps the state change URL on your provider, and tells it the name of the state the test expects. You need to do two things: 1. Configure a state change URL 2. Implement the state change endpoint on the provider ### Configuring the state change URL You can configure the state change URL in the provider verification settings. For example, using [the maven plugin](https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-provider-maven#provider-states):

```
<serviceProvider>
    <name>provider1</name>
    <stateChangeUrl>http://localhost:8080/tasks/pactStateChange</stateChangeUrl>
    ...
```

Or using the [Gradle provider plugin](https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-provider-gradle#using-a-state-change-url):

```
hasPactWith('consumer1') {
    stateChangeUrl = url('http://localhost:8080/tasks/pactStateChange')
    ...
```

Both of these tell the mock consumer to use `localhost:8080/tasks/pactStateChange` to change the state of the provider before each test. ### Implementing the state change endpoint The documentation linked above tells us that by default, the format of the request is a POST request of your state string and any parameters:

```
{
  "state" : "a provider state description",
  "params": {
    "a": "1",
    "b": "2"
  }
}
```

To use this, you implement something like the following untested code on the provider:

```
@RequestMapping(value = "tasks/pactStateChange", method = RequestMethod.POST)
ResponseEntity<?> stateChange(@RequestBody ProviderState state) {
    if ("no database".equals(state.state)) {
        // Set up state for the "no database" case here
    } else if ("Some other state".equals(state.state)) {
        // Set up state here
    } else {
        // Other states go here
    }
    return ResponseEntity.ok().build();
}
```

Please excuse any spring boot errors in that example - I'm not a spring boot person, but you can see the general principle. With the state change URL, pact doesn't tell the provider any setup details. It just tells the provider the pre-agreed state string that you used in your test. This could be something like `"foo exists"`. Then, when implementing the handler for the state change URL, you detect `"foo exists"`, and do any explicit setup there.

```
if (state.state == "foo exists") {
    // do whatever you need to set up so that foo exists
    repository.clear()
    repository.insert(new Foo("arguments that foo needs",12))
}
```

If you'd like to know more about the intent of provider states, have a read of the [wiki page on provider states](https://github.com/realestate-com-au/pact/wiki/Provider-states). ### How to do this in your specific case You asked:

> Can I read the request data from the pact file through state change instead of configuring the first name, last name and id on the verification side?

You might be confused about the intention of the contract tests - each test is a combination of state and request. So instead of using one test to say:

- My test is to request a customer update. If the customer exists, then I expect X response, and if it doesn't, then I expect Y response

you use two tests to say:

- When I submit an update to the customer record (in the state when the customer exists), then I expect X response.
- When I submit an update to the customer record (in the state where the customer does NOT exist), then I expect Y response.

These tests are two separate items in your Pact contract.
The intention is not to include *details* of the setup in the contract. On the consumer side, your state is just a string that says something like "Customer with id=1234 exists". On the Provider side, your state change endpoint detects that URL and creates the state as appropriate. This is usually done in a hard-coded way: ``` if (state == "Customer with id=1234 exists") { Database.Clear() Database.Insert(new Customer(1234, "John","Smith")) } else if (state == "No customers exist") { Database.Clear() } ``` You don't want to do this in a parameterised way by parsing the state string, because then you're creating a new complex contract between the test consumer and the provider. The consumer tests shouldn't know anything about how to set provider state, they should just know what state is required by the test (by name only). Similarly, the provider doesn't need to know what's being tested, it just needs to know how to turn state names into actual state.
SSIS Derived Column (if then...else) Is there a way of replicating the below expression in an SSIS Derived Column?

```
SELECT CASE
WHEN LEN(column1) > 8 THEN column1
ELSE REPLICATE('0', 18 - LEN(column1)) + column1
END AS column1
FROM myTable
```

I want to pad the value of column1 with 0 if the length of the value is less than 8 characters.
The SSIS expression language supports the ternary operator `? :` ``` (LEN([column1]) > 8) ? column1 : replicate("0", (18 - LEN([column1]))) + [column1] ``` That expression *ought* to work but it doesn't because the REPLICATE call is going to provide metadata back stating it's 4k nvarchar characters (the limit). If you're deadset on getting it to work that way, comment and I'll hack the expression to size the output of replicate before concatenating with `column1` An easier approach is to just always add 8 leading zeros to the expression and then slice it off from the right. ``` RIGHT(REPLICATE("0",8) + column1,8) ``` You might need 18 on the second as your example seemed to use 18 but the concept will be the same.
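If your real target width is 18 characters, as the original `CASE` expression suggests, the same right-slice trick should apply - this is an untested sketch along the same lines:

```
RIGHT(REPLICATE("0",18) + column1,18)
```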
Unity Application Block, how to pass a parameter to an Injection Factory? Here is what I have now: ``` Container.RegisterType<IUserManager, UserManagerMock>(); Container.RegisterType<IUser, UserMock>( new InjectionFactory( (c) => c.Resolve<IUserManager>().GetUser("John"))); ``` and I resolve it like this: ``` Container.Resolve<IProfile>(); ``` I want to pass a name as a parameter to the factory so that I can resolve the user object by name; something like this: ``` Container.Resolve<IProfile>("John"); ``` How can I change the type registration for this case?
While most DI frameworks have advanced features to do these types of registrations, I personally would rather change the design of my application to solve such a problem. This keeps my DI configuration simple and makes the code easier to understand. Especially for the creation of objects that depend on some context (thread, request, whatever) or have a lifetime that must be managed explicitly, I like to define factories. Factories make these things much more explicit. In your situation, you want to fetch a profile for a certain user. This is typically something you would like to have a factory for. Here's an example of this: ``` // Definition public interface IProfileFactory { IProfile CreateProfileForUser(string username); } // Usage var profile = Container.Resolve<IProfileFactory>() .CreateProfileForUser("John"); // Registration Container.RegisterType<IProfileFactory, ProfileFactory>(); // Mock implementation public class ProfileFactory : IProfileFactory { public IProfile CreateProfileForUser(string username) { IUser user = Container.Resolve<IUserManager>() .GetUser(username); return new UserProfile(user); } } ``` I hope this helps.
How can I do version control of Database Schema? Is there a way (cheap or FLOSS) to do version control of an SQL Server 2008 DB schema?
Here is a nice article by Jeff Atwood on [database version control](http://www.codinghorror.com/blog/2006/12/is-your-database-under-version-control.html) You can use [Team edition for database professionals](http://msdn.microsoft.com/en-us/vstudio/aa718807.aspx) for this purpose Here is a [list of tools](http://secretgeek.net/dbcontrol.asp) that you can purchase which can be used too: [Red Gate SQL Compare from $295.](http://www.red-gate.com/products/sql-development/sql-compare/) [DB Ghost from $195](http://www.innovartis.co.uk/home.aspx) SQL Change Manager $995 per instance. [SQL Effects](http://www.sqleffects.com/) Clarity standard ed. from $139 [SQLSourceSafe](http://www.bestsofttool.com/SQLSourceSafe/SSS_Introduction.aspx) from $129. sqlXpress Diff contact for price. :-( [Embarcadero Change Manager](http://www.embarcadero.com/products/db-change-manager-xe) contact for price. :-( [Apex SQL Diff from $399](http://www.apexsql.com/sql_tools_edit.aspx) SQL Source Control 2003 from $199 [SASSI v2.0 professional](http://www.sqlassi.net/Features.htm) from $180 Evorex Source # shareware or $299+ (conflicting reports!) **Edit** Just found this post which explains version control through svn: [Versioning SQL Server database](https://stackoverflow.com/questions/173/how-do-i-version-my-ms-sql-database-in-svn)
ReactJS styles 'leaking' to other components So I have two components... a Navbar component, and an AboutPage component. They are both in the same directory, 'App' **App** **-- Navbar** --> Navbar.css, Navbar.js **-- AboutPage** --> Aboutpage.css, Aboutpage.js So as you can see, they have two separate stylesheets. In the JS pages the correct CSS file is being imported as well. When I do a style like this for example: *Navbar Component* ``` p { background: red } ``` ^^ this style also applies to the p's in the Aboutpage. I even tried to give the P in Aboutpage its own id and style it that way and it still failed.
That's the expected behaviour. No matter which file you put a rule like `p { background: red }` in, it's going to be applied to the whole DOM. Specifying an id attribute won't work either; the above rule is general enough to apply to all `<p>`s. If you want to scope css files to each component, you should create component-specific css classes, like in the following example. ``` import React from 'react'; import './DottedBox.css'; const DottedBox = () => ( <div className="DottedBox"> <p className="DottedBox_content">Get started with CSS styling</p> </div> ); export default DottedBox; ``` and its css file: ``` .DottedBox { margin: 40px; border: 5px dotted pink; } .DottedBox_content { font-size: 15px; text-align: center; } ``` If you want different ways of defining css for React, this [resource](https://codeburst.io/4-four-ways-to-style-react-components-ac6f323da822) adds 3 more ways of doing so, in addition to the above way.
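If you'd rather not invent prefixed class names by hand, CSS Modules give you per-component scoping automatically. A small sketch, assuming a toolchain with CSS Modules support (e.g. Create React App, where the stylesheet must be named `*.module.css`; the `Navbar` and `brand` names here are just illustrative):

```
import React from 'react';
import styles from './Navbar.module.css'; // classes get unique generated names

const Navbar = () => <p className={styles.brand}>My site</p>;

export default Navbar;
```

```
/* Navbar.module.css */
.brand {
  background: red;
}
```

The generated class name is unique per component, so the rule can't leak into other components.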
Can iOS app's version and build numbers be strings? I have noticed that I can name my current build with a string; Xcode allows this. But I haven't seen anybody doing so. It would be pretty convenient for me to use words instead of 1.3.78. Is this legal though? By "legal" I mean that it wouldn't be rejected by the App review team. [![enter image description here](https://i.stack.imgur.com/lnCBS.png)](https://i.stack.imgur.com/lnCBS.png)
- "Version" **CFBundleShortVersionString** (String - iOS, OS X) specifies the release version number of the bundle, which identifies a released iteration of the app. The release version number is a string comprised of three period-separated integers. - "Build" **CFBundleVersion** (String - iOS, OS X) specifies the build version number of the bundle, which identifies an iteration (released or unreleased) of the bundle. The build version number should be a string comprised of three non-negative, **period-separated integers with the first integer being greater than zero. The string should only contain numeric (0-9) and period (.) characters. Leading zeros are truncated from each integer and will be ignored (that is, 1.02.3 is equivalent to 1.2.3).** This key is not localizable. **CFBundleVersion** identifies the current build in iTunes Connect. So no, a word is not "legal" here: if you use anything other than period-separated numbers, iTunes Connect will reject the app, and the binary may not even upload. You can see both keys in your Info.plist as `CFBundleVersion` and `CFBundleShortVersionString`: ``` <key>CFBundleVersion</key> <string>1.0</string> <key>CFBundleShortVersionString</key> <string>1.0</string> ```
Classes store data members in sequential memory? The short version of this question: Does a pointer to the first data member of a class result in a pointer to all its data members? Discussion: I'm reading some sample code, and this is interesting. When a particular function (in this case the `glUniformMatrix4fv` C function from OpenGL) takes as a parameter an array, this is passed in the common C way of using a pointer to the address of the array's first element, as follows: `glUniformMatrix4fv(glvariable, 1, 0, &newmatrix[0]);` // referring to newmatrix This developer has a C++ class for creating a 4X4 matrix, for a total of 16 floats. However, the data members of that class are divided into 4 separate vectors of 4 data members each: Here are the data members of the class: ``` vec4 x; vec4 y; vec4 z; vec4 w; ``` The `vec4` class in turn provides these data members: ``` T x; T y; T z; T w; ``` He then creates a function to point to just the first `vec4 x` data member: ``` const T* Pointer() const { return &x.x; } ``` And somehow, this miraculously translates into sending all 4 `vec4` objects and their respective 4 components: ``` glUniformMatrix4fv(modelviewUniform, 1, 0, modelviewMatrix.Pointer()); ``` I'm fairly new to C++ but this surprised me that you can point to just one data member and expect all the rest to get sent as well. Is it because the class defines them in a particular order? What if they had been defined in a different order in the class definition?
This is the C++ object model. You're guaranteed that within the same access level (`private`/`public`/`protected`), the object properties are in order (C++11), and the same for members without access specifiers between them (in C++03). This doesn't mean what you're doing is a good idea. It's better to avoid the cast and just have the function do some more work. ## C++11: ### 9.2 Class members [class.mem] > > 14) **Nonstatic data members of a (non-union) class with the same access control (Clause 11) are allocated so > that later members have higher addresses within a class object. The order of allocation of non-static data > members with different access control is unspecified (11)**. Implementation alignment requirements might > cause two adjacent members not to be allocated immediately after each other; so might requirements for > space for managing virtual functions (10.3) and virtual base classes (10.1). > > > ## C++03 ### 9.2 Class members [class.mem] > > 12) **Nonstatic data members of a (non-union) class declared without an intervening access-specifier are allocated so > that later members have higher addresses within a class object. The order of allocation of non-static data > members separated by an access-specifier is unspecified (11.1)**. Implementation alignment requirements might > cause two adjacent members not to be allocated immediately after each other; so might requirements for > space for managing virtual functions (10.3) and virtual base classes (10.1). > > >
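If you do depend on this layout, it's worth asserting it at compile time rather than trusting it silently. A minimal sketch (the `vec4`/`mat4` names mirror the question; the `sizeof` check assumes the compiler adds no padding between four floats, which is typical but not guaranteed by the standard):

```
#include <cstddef>

struct vec4 { float x, y, z, w; };
struct mat4 { vec4 x, y, z, w; };

// both types are standard-layout, so offsetof is well-defined here
static_assert(sizeof(mat4) == 16 * sizeof(float), "unexpected padding");
static_assert(offsetof(mat4, w) == 12 * sizeof(float), "members not laid out sequentially");
```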
Postgres: Many-to-many vs. multiple columns vs. array column I need help designing complex user permissions within a Postgres database. In my Rails app, each user will be able to access a *unique* set of features. In other words, there are no pre-defined "roles" that determine which features a user can access. In almost every controller/view, the app will check whether or not the current user has access to different features. Ideally, the app will provide ~100 different features and will support 500k+ users. At the moment, I am considering three different options (but welcome alternatives!) and would like to know which option offers the best performance. *Thank you in advance for any help/suggestions.* ### Option 1: Many-to-many relationship By constructing a many-to-many relationship between the `User` table and a `Feature` table, the app could check whether a user has access to a given feature by querying the join table. E.g., if there is a record in the join table that connects *user1* and *feature1*, then *user1* has access to *feature1*. ### Option 2: Multiple columns The app could represent each feature as a boolean column on the `User` table. This would avoid querying multiple tables to check permissions. E.g., if `user1.has_feature1` is true, then *user1* has access to *feature1*. ### Option 3: Array column The app could store features as strings in a (GIN-indexed?) array column on the `User` table. Then, to check whether a user has access to a feature, it would search the array column for the given feature. E.g., if `user1.features.include? 'feature1'` is true, then *user1* has access to *feature1*.
Many-to-many relationships are the only viable option here. There is a reason why they call it a relational database. Why? - Joins are actually not that expensive. - Multiple columns - The number of columns in your tables will be ludicrous and it will be true developer hell. As each feature adds a migration, the amount of churn in your codebase will be silly. - Array column - Using an array column may seem like an attractive alternative until you realize that it's actually just a marginal improvement over stuffing things into a comma-separated string. You have no referential integrity and none of the code-organization benefits that come from having models that represent the entities in your application. Oh, and every time a feature is yanked you have to update every one of those 500k+ users. VS just using CASCADE. --- ``` class Feature has_many :user_features has_many :users, through: :user_features end class UserFeature belongs_to :user belongs_to :feature end class User has_many :user_features has_many :features, through: :user_features def has_feature?(name) features.exists?(name: name) end end ```
macOS: Is there any way to know when the user has tried to quit an application via its Dock icon? Is there any way for a Cocoa application to detect when the user has tried to quit it via its Dock menu, and not by some other method? Normally it's possible to catch and respond to quit events using the application delegate's `applicationShouldTerminate:` method. However, this method doesn't seem to distinguish between the request to quit coming from the application's main menu, from its Dock icon, from an Apple event, or any other conventional method of quitting the application. I'm curious if there's any way to know precisely how the user has tried to quit the application.
It is in fact possible for an app to know the reason why it's quitting by checking to see if there is a current Apple event being handled and, if so, checking whether it's a quit event and whether it was the Dock that sent it. (See [this thread](https://forums.developer.apple.com/thread/94126) discussing how to tell if an app is being quit because the system is logging out or shutting down.) Here is an example of a method that, when called from the application delegate's `applicationShouldTerminate:` method, will return true if the app is being quit via the Dock: ``` - (bool)isAppQuittingViaDock { NSAppleEventDescriptor *appleEvent = [[NSAppleEventManager sharedAppleEventManager] currentAppleEvent]; if (!appleEvent) { // No Apple event, so the app is not being quit by the Dock. return false; } if ([appleEvent eventClass] != kCoreEventClass || [appleEvent eventID] != kAEQuitApplication) { // Not a 'quit' event return false; } NSAppleEventDescriptor *reason = [appleEvent attributeDescriptorForKeyword:kAEQuitReason]; if (reason) { // If there is a reason for this 'quit' Apple event (such as the current user is logging out) // then it didn't occur because the user quit the app through the Dock. return false; } pid_t senderPID = [[appleEvent attributeDescriptorForKeyword:keySenderPIDAttr] int32Value]; if (senderPID == 0) { return false; } NSRunningApplication *sender = [NSRunningApplication runningApplicationWithProcessIdentifier:senderPID]; if (!sender) { return false; } return [@"com.apple.dock" isEqualToString:[sender bundleIdentifier]]; } ```
Artifactory REST API: How can I reassociate a build with a deployed artifact? When you deploy an artifact to Artifactory over an existing artifact, it does not associate it with the build that the existing artifact has. For example: If you used the Jenkins Artifactory Plugin to deploy three artifacts: ``` example.jar example.pom example.json ``` Then it would create a new build, associate these artifacts with that build, and deploy the artifacts to the location and repo you specified. Let's say this deployed to /libs-release-local/example/1.0/ with buildName "example-build" and buildNumber 51 If you looked at the artifacts, you will see on the build tab that it is associated with the build-info.json. Now, let's say you deploy example.json using the REST API to the same location: ``` PUT /libs-release-local/example/1.0/example.json ``` Now the new artifact is not associated with the build-info.json! How can I deploy the artifact so that it is associated with an already existing build-info.json? (in this example, the "/example-build/51" build). Not being able to do this causes all sorts of issues (such as when build\_promotion is done, it promotes only the previously associated artifacts, and not anything deployed later.)
Artifactory associates the `Build Info` descriptor with the build artifacts based **on their checksum**. If you look at the JSON which is the `Build Info` descriptor, you'll be able to see: ``` { ... "modules" : [ { "id" : "org._10ne.gradle:rest-gradle-plugin:0.2.0", "artifacts" : [ { "type" : "pom", "sha1" : "f0dcec6a603aa99f31990e20c0f314749f0e22ca", "md5" : "427dcf49c07cc7be175ea31fd92da44e", "name" : "rest-gradle-plugin-0.2.0.pom" }, .... } } ``` A `Build Info` descriptor describes a "build", which is essentially a single unit of module/s produced by a certain process; this process depends on a specific environment. You're deploying a new artifact **which was not part of the original process** or environment that the `Build Info` describes; if it was, it would have been produced with the exact same checksum as the former artifact. You are basically **compromising the integrity of the "build" unit**. The "right" way to do it would be to start a new build process and produce a valid `Build Info` descriptor.
String split in java does not work? I split this String by space: `String input = ":-) :) :o) :] :3 :c) :> =] 8) =) :} :^)";` (space between emoticons) And the result is: ``` :-)?:)?:o)?:]?:3?:c)?:> =] 8) =)?:}?:^) ``` There are some strange characters in the results. I don't know why. Please help me. Here is the code: ``` fileReader = new BufferedReader(new FileReader("emoticon.txt")); String line = ""; while ((line = fileReader.readLine()) != null){ String[] icons = parts[0].split("\\s+"); .... } ``` Thanks for any advice. Here is the emoticon file: <https://www.dropbox.com/s/6ovz0aupqo1utrx/emoticon.txt>
``` String input = ":-) :) :o) :] :3 :c) :> =] 8) =) :} :^)"; String[] similies = input.split(" "); for (String simili : similies) { System.out.println(simili); } ``` This works fine. Output: ``` :-) :) :o) :] :3 :c) :> =] 8) =) :} :^) ``` and in case there are tabs/newlines/multiple spaces you want to split on, you can use ``` input.split("\\s+"); ``` In your example there are a few more characters present, like `Â` and non-breaking spaces, so you have to handle these characters explicitly. Here is the code: ``` public static void main(final String[] args) throws Exception { BufferedReader fileReader = new BufferedReader(new FileReader("emoticon.txt")); String line = ""; while ((line = fileReader.readLine()) != null) { line = line.replaceAll("Â", ""); line = line.replace("" + ((char) 160), " "); System.out.println("line: " + line); String[] icons = line.split("\\s+"); for (String icon : icons) { System.out.println(icon); } System.out.println("======================="); } } ```
Plot number of occurrences from Pandas DataFrame I have a DataFrame with two columns. One of them is containing timestamps and another one - id of some action. Something like that: ``` 2000-12-29 00:10:00 action1 2000-12-29 00:20:00 action2 2000-12-29 00:30:00 action2 2000-12-29 00:40:00 action1 2000-12-29 00:50:00 action1 ... 2000-12-31 00:10:00 action1 2000-12-31 00:20:00 action2 2000-12-31 00:30:00 action2 ``` I would like to know how many actions of certain type have been performed in a certain day. I.e. for every day, I need to count the number of occurrences of actionX and plot this data with date on X axis and number of occurrences of actionX on Y axes, for each date. Of course I can count actions for each day naively just by iterating through my dataset. But what's the "right way" to do in with pandas/matplotlib?
Starting from ``` mydate col_name 0 2000-12-29 00:10:00 action1 1 2000-12-29 00:20:00 action2 2 2000-12-29 00:30:00 action2 3 2000-12-29 00:40:00 action1 4 2000-12-29 00:50:00 action1 5 2000-12-31 00:10:00 action1 6 2000-12-31 00:20:00 action2 7 2000-12-31 00:30:00 action2 ``` You can do ``` df['mydate'] = pd.to_datetime(df['mydate']) df = df.set_index('mydate') df['day'] = df.index.date counts = df.groupby(['day', 'col_name']).agg(len) ``` but perhaps there's an even more straightforward way; the above should work anyway. If you want to use counts as a DataFrame, I'd then transform it back ``` counts = pd.DataFrame(counts, columns=['count']) ```
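To get from `counts` to the requested plot (dates on X, per-action counts on Y), you can pivot the action names into columns and plot the result - a sketch assuming matplotlib is installed and `counts` is the DataFrame from the last step:

```
import matplotlib.pyplot as plt

# one column per action, one row per day; missing day/action pairs become 0
per_day = counts['count'].unstack('col_name').fillna(0)
ax = per_day.plot(kind='bar')
ax.set_xlabel('date')
ax.set_ylabel('occurrences')
plt.show()
```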
Javascript - How to Get Current time and Date of specific country? eg: New Zealand I'm currently working on a MERN stack application and it's hosted through AWS EC2. So when I try to get the current date/time using Node, I get the server's date and time. How can I get the current time of a specific country/timezone on the server?
There are a couple of options. 1. Modify the [EC2 instance's timezone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) and send it to the client browser. 2. Calculate the New Zealand time from the web browser client's timezone. If you follow the 1st option you can get the server UTC time using an HTTP request. For the 2nd option, you need to calculate the time for New Zealand using JavaScript in the client's browser. ``` // create Date object for current location var date = new Date(); // convert to milliseconds, add local time zone offset and get UTC time in milliseconds var utcTime = date.getTime() + (date.getTimezoneOffset() * 60000); // time offset for New Zealand is +12 var timeOffset = 12; // create new Date object for a different timezone using its GMT offset var NewZealandTime = new Date(utcTime + (3600000 * timeOffset)); ``` Note: This won't reflect Daylight Saving Time.
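On reasonably recent Node.js builds (ones shipped with full ICU data), there is a third option: let the built-in `Intl` machinery do the timezone math, which also handles Daylight Saving Time for you:

```
// the IANA zone name takes care of the offset and DST rules
const nzTime = new Date().toLocaleString('en-NZ', { timeZone: 'Pacific/Auckland' });
console.log(nzTime);
```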
SharedPreferences.onSharedPreferenceChangeListener not being called consistently I'm registering a preference change listener like this (in the `onCreate()` of my main activity): ``` SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this); prefs.registerOnSharedPreferenceChangeListener( new SharedPreferences.OnSharedPreferenceChangeListener() { public void onSharedPreferenceChanged( SharedPreferences prefs, String key) { System.out.println(key); } }); ``` The trouble is, the listener is not always called. It works for the first few times a preference is changed, and then it is no longer called until I uninstall and reinstall the app. No amount of restarting the application seems to fix it. I found a mailing list [thread](http://www.mail-archive.com/[email protected]/msg43834.html) reporting the same problem, but no one really answered him. What am I doing wrong?
This is a sneaky one. SharedPreferences keeps listeners in a WeakHashMap. This means that you cannot use an anonymous inner class as a listener, as it will become the target of garbage collection as soon as you leave the current scope. It will work at first, but eventually, will get garbage collected, removed from the WeakHashMap and stop working. Keep a reference to the listener in a field of your class and you will be OK, provided your class instance is not destroyed. i.e. instead of: ``` prefs.registerOnSharedPreferenceChangeListener( new SharedPreferences.OnSharedPreferenceChangeListener() { public void onSharedPreferenceChanged(SharedPreferences prefs, String key) { // Implementation } }); ``` do this: ``` // Use instance field for listener // It will not be gc'd as long as this instance is kept referenced listener = new SharedPreferences.OnSharedPreferenceChangeListener() { public void onSharedPreferenceChanged(SharedPreferences prefs, String key) { // Implementation } }; prefs.registerOnSharedPreferenceChangeListener(listener); ``` The reason unregistering in the onDestroy method fixes the problem is because to do that you had to save the listener in a field, therefore preventing the issue. It's the saving the listener in a field that fixes the problem, not the unregistering in onDestroy. **UPDATE**: The Android docs have been [updated](http://developer.android.com/reference/android/content/SharedPreferences.html#registerOnSharedPreferenceChangeListener(android.content.SharedPreferences.OnSharedPreferenceChangeListener)) with [warnings](http://developer.android.com/guide/topics/ui/settings.html#Listening) about this behavior. So, oddball behavior remains. But now it's documented.
Check for null pointer in a truth-value context Let's say I have a pointer ``` MyType *ptr; ``` When checking the validity of that pointer in a ["truth-value context"](https://stackoverflow.com/a/21713259/2567683) by the old standards I would write something like this ``` if (ptr) { ... // 1 while (ptr) { ... // 2 ``` The thing is that in such "truth-value contexts" we expect the implicit conversion of a pointer to a boolean value to take place, so we would pretty much be comparing ``` if (NULL != ptr) { ... while (NULL != ptr) { ... ``` Yet comparing against a macro for the integer `0` is deprecated and C++11 proposes comparing against `nullptr`. In a truth-value context though, like (1) or (2) above, where we don't explicitly say ``` if (nullptr != ptr) { ... while (nullptr != ptr) { ... ``` **what is our pointer compared against?** Its conversion to a boolean? **Do we have to explicitly compare against `nullptr`?**
The condition (if it's an expression) of an `if` statement is *contextually converted to `bool`*: [stmt.select]/4 about the condition in selection statements (`if`, `switch`): > > The value of a condition that is an expression is the value of the > expression, contextually converted to `bool` for statements other than `switch`; if that conversion is ill-formed, the program is ill-formed. > > > *Contextual conversion to `bool`* is defined as follows in [conv]/3: > > An expression `e` can be *implicitly converted* to a type `T` if and only if the declaration `T t=e;` is well-formed, for some invented temporary variable `t`. Certain language constructs require that an expression be > converted to a Boolean value. An expression `e` appearing in such a context is said to be contextually converted to `bool` and is well-formed if and only if the declaration `bool t(e);` is well-formed, for some invented temporary variable `t`. > > > Here's the description of a conversion to `bool` for fundamental types [conv.bool]/1: > > A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a > prvalue of type `bool`. A zero value, null pointer value, or null member pointer value is converted to `false`; > any other value is converted to `true`. A prvalue of type `std::nullptr_t` can be converted to a prvalue of > type `bool`; the resulting value is `false`. > > > So when we test a pointer `if(ptr)`, we compare `ptr` to the *null pointer value* of that type. What's a *null pointer value*? [conv.ptr]/1 > > A *null pointer constant* is an integral constant expression prvalue of integer type that evaluates to > zero or a prvalue of type `std::nullptr_t`. A null pointer constant can be converted to a pointer type; the > result is the *null pointer value* of that type and is distinguishable from every other value of object pointer or > function pointer type. Such a conversion is called a *null pointer conversion*. Two null pointer values of the > same type shall compare equal. > > > This also describes what happens when we compare `if(ptr != nullptr)`: The `nullptr` is converted to the type of `ptr` (see [expr.rel]/2), and yields the *null pointer value* of that type. Hence, the comparison is *equivalent* to `if(ptr)`.
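Putting the standardese together, the two spellings test exactly the same thing - a minimal illustration:

```
int* ptr = nullptr;

if (ptr) { /* not entered: the null pointer value converts to false */ }
if (ptr != nullptr) { /* equivalent: nullptr converts to int*, and the two compare equal */ }
```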
An ASP.NET MVC Gotcha? Frustrated I have an issue with ASP.NET MVC html helpers like TextBoxFor(), HiddenFor(), etc. If I have a model such as Employee with the string member Name and execute Html.TextBoxFor(p => p.Name), is it wrong for me to assume that ASP.NET MVC **will always use** the value in the Employee's Name? Because it doesn't. ASP.NET will override that binding and use what's in the POST. For example let's say I have the following code: **Model** ``` namespace MvcApplication2.Models { public class Company { public string Name { set; get; } public List<Employee> Employees { set; get; } } public class Employee { public string Name { set; get; } } } ``` **Controller** ``` namespace MvcApplication2.Controllers { public class HomeController : Controller { public ActionResult Company(string Name) { return View(new Company { Name = Name, Employees = new List<Employee> { new Employee { Name = "Ralph" }, new Employee { Name = "Joe" } } }); } } } ``` **Home/Company.cshtml** ``` @using MvcApplication2.Models; @model Company <h2>Company's Name: @Model.Name</h2> @foreach (Employee emp in Model.Employees) { Html.RenderPartial("Employee", emp); } ``` **Home/Employee.cshtml** ``` @model MvcApplication2.Models.Employee <b>Employee's Name: </b> @Html.TextBoxFor(p => p.Name); ``` When I hit the relative url "Home/Company?Name=MSFT", I expected Employee.cshtml to render "Ralph" and "Joe" in the textboxes but instead it renders MSFT for both textboxes. What do I have to do so that "Ralph" and "Joe" show up in the textboxes? Do I have to make sure that my POST and GET variables never conflict in all layers of my view models (in this case the Company and Employee classes)? This seems silly. There's got to be an easy workaround, right? Here's a screenshot of the result: ![enter image description here](https://i.stack.imgur.com/D0sE8.png)
## HtmlHelper methods and ModelState An important thing to understand with MVC HtmlHelper methods: They always look at `ModelState` first, `value` second, `ViewData` third. The `ModelState` is important, because it contains the user-submitted values. If a page fails validation, the `ModelState` is used to store the previous values and error messages. If you POST a form and, when everything is valid, you want to show the form again, you either have to: - Redirect from the POST to the GET (following the PRG Pattern) - Clear the ModelState: `ModelState.Clear();` ## EditorFor vs RenderPartial Another important thing to mention is the difference between `EditorFor/DisplayFor` vs `RenderPartial`. When you use `EditorFor/DisplayFor` for an item (such as `.EditorFor(m => m.Person)`), it adds a sort-of "namespace" to the template of `"Person"` so that the sub-controls will have a unique name. For example, in the template, `.TextBoxFor(p => p.Name)` will render something like `<input name="Person.Name" ...`. However, when you use `RenderPartial`, no such namespace is created. Therefore, `.TextBoxFor(p => p.Name)` will render `<input name="Name" ...`, which will not be unique if you have multiple editors.
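For the scenario in the question, the second point suggests a concrete fix: replace `RenderPartial` with `EditorFor`, so each employee input gets a prefixed name such as `Employees[0].Name` that no longer collides with the `Name` query-string value. An untested sketch (the editor template goes in the conventional `Views/Home/EditorTemplates` or `Views/Shared/EditorTemplates` folder):

```
@* Home/Company.cshtml *@
@model MvcApplication2.Models.Company
<h2>Company's Name: @Model.Name</h2>
@Html.EditorFor(m => m.Employees)
```

```
@* EditorTemplates/Employee.cshtml *@
@model MvcApplication2.Models.Employee
<b>Employee's Name: </b> @Html.TextBoxFor(p => p.Name)
```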
Override Angular Material CSS differently in different components I have two components with tab groups in them. One is a main page and I have overwritten the css to make the labels larger, done using ViewEncapsulation.None. The other is a dialog, and I want to keep it small but still apply some other custom styles to it. When I open the dialog after visiting the other tabs page, it copies all the styles, which I have figured is because ViewEncapsulation.None bleeds CSS but not exactly as expected. Is there anyway to override Angular Material styles without changing ViewEncapsulation so that I can keep the two components separate?
Solution 1: you can put all elements of your component into a parent element with a css class and override the material styles inside it (it's custom encapsulation). Note: ViewEncapsulation is None here. component.html ``` <div class="my-component__container"> <!-- other elements(material) are here --> </div> ``` component.scss ``` .my-component__container{ // override material styles here .mat-form-field{...} } ``` Solution 2: use `/deep/` ([deprecated](https://angular.io/guide/component-styles#deprecated-deep--and-ng-deep)); use `::ng-deep` instead. ``` :host /deep/ .mat-form-field { text-align: left !important; } ``` Solution 3: don't change `ViewEncapsulation`, then: ``` :host { .my-component__container{} } ```
replace "DynamicResource" with "StaticResource" To build my application I have used Blend 3 a lot. When Blend 3 wants to link a resource to another resource, it often uses the link-type "DynamicResource". As I have understood (though I could be wrong), the "Dynamic" links only make sense if I want to modify the links at runtime. In other cases they just use more memory in vain. I don't want to modify anything at runtime, so the question is: does it make sense to replace "DynamicResource" with "StaticResource" throughout my application? Thank you! Pileggi
Blend works better in design time with DynamicResource. See: <http://blogs.msdn.com/b/unnir/archive/2009/03/31/blend-wpf-and-resource-references.aspx> The money quote from that: > > **a) Should I use Static or Dynamic > resource lookup?** > > > Blend def. plays > better with dynamic resource lookups. > You could use a static resource lookup > as long as the resource was not > located or merged into App.xaml. > People have raised concerns around > performance issues with dynamic > resource lookups (you pay for what you > get). While that might be true, an > interesting data point is that the > Expression Blend source code uses a > ton uses dynamic resource lookups for > our own UI (of course, we too use > static resource lookups in places > where the resource would never change, > or where it not possible to use a > dynamic resource extension, for > example non-DPs). > > >
How to assign default value in route param in react native I want to set a default value in a route param if nothing is sent from the other screen. Earlier we used to do it like ``` let phot0 = this.props.navigation.getParam("photo","empty"); ``` What to do in React Navigation 5.x? My code is below (difficulty at line no 5): ``` import React from "react"; import { StyleSheet, Text, View, Image, Button } from "react-native"; export default function Home({ route, navigation }) { const { photo } = route.params; return ( <View style={styles.container}> <Image resizeMode="center" style={styles.imageHolder} source={photo === "empty" ? require("../assets/email.png") : photo} /> <Button title="take photo" style={styles.button} onPress={() => navigation.navigate("Camera")} /> </View> ); } const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: "#fff", alignItems: "center", justifyContent: "center", }, imageHolder: { alignSelf: "center", }, button: { margin: 20, }, }); ``` Also, it's showing an error: undefined is not an object (evaluating 'route.params.photo'). Do I always need to declare the param in the sending screen?
You can pass some initial params to a screen in [react-navigation version 5](https://reactnavigation.org/docs/params/#updating-params) as below, ``` <Stack.Screen name="Details" component={DetailsScreen} initialParams={{ itemId: 100 }} /> ``` According to the example, if you didn't specify any params when navigating to the `Details` screen, the initial params will be used. For more information, check the complete example below. ``` import * as React from "react"; import { Text, View, Button } from "react-native"; import { NavigationContainer } from "@react-navigation/native"; import { createStackNavigator } from "@react-navigation/stack"; function HomeScreen({ navigation }) { return ( <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}> <Text>Home Screen</Text> <Button title="Go to Details" onPress={() => { navigation.navigate("Details"); }} /> </View> ); } function DetailsScreen({ route, navigation }) { return ( <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}> <Text>Details Screen</Text> <Text>itemId: {route.params.itemId}</Text> <Button title="Go back" onPress={() => navigation.goBack()} /> </View> ); } const Stack = createStackNavigator(); export default function App() { return ( <NavigationContainer> <Stack.Navigator> <Stack.Screen name="Home" component={HomeScreen} /> <Stack.Screen name="Details" component={DetailsScreen} /** * when you didn't specify itemId params the initial params will be used */ initialParams={{ itemId: 100 }} /> </Stack.Navigator> </NavigationContainer> ); } ``` Hope this helps. Feel free to ask if anything is unclear.
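If you'd rather not touch the navigator configuration, you can also supply the default at the point where you read the param - a small sketch for the `Home` screen from the question:

```
// guards against route.params being undefined and photo being missing
const { photo = "empty" } = route.params || {};
```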
Basic Functions in unityscript I just have a basic question. I'm trying to learn how to use functions with unityscript and I learned that when you create a function using something like ``` function Example(text) { print(text); } ``` and then call it up using ``` Example(); ``` you can actually insert a string or a number into the parentheses on the latter piece of code in the parentheses and it will pass through and insert itself into the function you created previously. In this case, it does end up printing whatever I type into the parentheses. Now, my question is if that works, then why don't functions where I pass in two numbers and ask to have them added work? The code I'm trying to use looks like: ``` Addition(4, 4); function Addition(a, b) { print(a+b); } ``` When I try to execute this, Unity tells me > > Operator '+' cannot be used with a left hand side of type 'Object' and a right hand side of type 'Object'. > > > I was watching [this video](http://vimeo.com/channels/151501/17060579) on Javascript functions and in the video at around 9 minutes, the guy types in exactly that, and gets the script to add it for him. I'm rather new to scripting and I do not have much of an idea of what I could be doing wrong. Thanks in advance for the help.
Actually, in Unity your code would have to look like: ``` Addition(4, 4); function Addition(a:int, b:int) { print(a+b); } ``` You have to declare what type of argument you expect in the function's parameters. You are completely right that in JavaScript this normally isn't needed. In Unity, it is though. This is because it is actually called `UnityScript`. UnityScript is the name used by people who want to point out that Unity's Javascript is very different from the traditional version of Javascript used in browsers. I have found a nice list of differences between `UnityScript` and `Javascript` [here](http://wiki.unity3d.com/index.php/UnityScript_versus_JavaScript). When I create a quick bullet list out of that web page, the differences are (perhaps not all of them): - JavaScript is class-free - JavaScript supports multiple variable declarations in one var statement. - In JavaScript, assignment is treated as an expression. - Every top-level variable in JavaScript is global. Additionally, any variable declaration not preceded by the `var` statement is global. - In Javascript, dynamic typing is efficient. - In JavaScript, privacy is rather unconventional. - Dollar signs ($) are not allowed in UnityScript identifiers as they are in JS identifiers. - There is no `with` statement in UnityScript. - Javascript supports `Regular Expression Literals` (`RegExp` or `regex`). - In JavaScript, `this` can refer to one of two things, depending on the context: - The global object (best not explained here) - The object to which the current method or constructor belongs I have retagged your question to categorize it in UnityScript as well. StackOverflow has this to say about it: > > Unity's JavaScript tries to follow the ECMAScript standard closely, it > varies in many ways from other implementations of JavaScript that are > based on the same standard. It is perhaps most similar to Microsoft's > JScript, especially in that both are .NET languages. However, Unity's > version was developed independently and there are a number of > differences between the two. > > > I, too, had to find out the hard way. I messed up the two languages and [got to this problem](https://stackoverflow.com/questions/14529726/search-a-substring-in-an-array-of-strings-in-javascript).
implementing a delegate in an interface I can't find a way to implement a delegate in an interface. I want to get this: ``` public class SomeClass : ISomeInterface { public delegate void SomeCallback(); public SomeCallback callback; public void SomeMethod() { callback.Invoke(); } } public class MainClass { void Callback() { Console.WriteLine("Callback"); } public void Start() { SomeClass s = new SomeClass(); s.callback = Callback; s.SomeMethod(); } } ``` but in the case where I create an instance of the class "SomeClass" using the interface: ``` public class MainClass { void Callback() { Console.WriteLine("Callback"); } public void Start() { ISomeInterface s = new SomeClass(); // <<<---- s.callback = Callback; // here will be an error :( s.SomeMethod(); } } ``` Please help me with it :)
Moving the callback to the interface is required if you want to use the callback without casing to a concrete type. Note that your current implementation has `callback` as a *field*. To declare it in an interface, you must make it a *property*. Because *properties* are really methods, you must implement the property in your concrete class. Using an auto-property is fine for the implementation here. Once you've made those changes, you can then set and call the callback using only the interface. ``` public delegate void SomeCallback(); public interface ISomeInterface { SomeCallback callback { get; set; } void SomeMethod(); } public class SomeClass : ISomeInterface { public SomeCallback callback { get; set; } public void SomeMethod() { callback.Invoke(); } } public class MainClass { void Callback() { Console.WriteLine("Callback"); } public void Start() { ISomeInterface s = new SomeClass(); s.callback = Callback; s.SomeMethod(); } } ```
Is it a good practice to Mock entity manager in spring boot unit testing I am currently designing an API using Spring Boot. In my service layer, I use the Entity Manager to access the database. I have provided a method from my service layer below as an example. ``` public Object getFirstReturnDate(Long vehicle_id, LocalDateTime reservation_date){ String string = "SELECT r.reservation_time from Reservation r left join fetch r.driverAssignment where r.reservation_time > :reservation_time " + "and r.reserved_status.slug in ('pending','approved') and r.reserved_vehicle.id=:vehicle_id " + " order by r.reservation_time asc "; Query query = em.createQuery(string); query.setParameter("reservation_time",reservation_date); query.setParameter("vehicle_id",vehicle_id); List<LocalDateTime> localDateTimes=query.getResultList(); if(localDateTimes.size()==0) return new DefaultResponseDTO(200, ResponseStatus.OK,"Any Date",null);; return new DefaultResponseDTO(200, ResponseStatus.OK,"Possible Time",localDateTimes.get(0)); } ``` In the testing unit, I mocked the Entity Manager as below, ``` @Mock EntityManager em; ``` And in the test method, ``` Query query = Mockito.mock(Query.class); Mockito.when(em.createQuery(Mockito.anyString())).thenReturn(query); Mockito.when(query.getResultList()).thenReturn(new LinkedList()); ``` My question is: if I mock the Entity Manager as mentioned above, then the query in the method doesn't get checked. Is that bad coding practice? Is there any other way to check queries without calling the database?
The objective of unit testing is to validate the *outcome* of a certain piece of code in isolation. By mocking the `EntityManager`, you are only validating that your implementation calls some methods, but it gives you no confidence on how this code will actually behave with the real dependencies. Even if you did make sure you sent the correct query and the correct parameters (by substituting the actual query instead of using `any()`), you would now be testing the implementation instead of the outcome, which negates the benefits of unit testing. Not all code may be easily unit tested, and interaction with an external service such as a database system is generally one of those cases. You will find much more value by converting this test to an integration test, testing against an actual database. This is also a consequence of your current architecture. The method `getFirstReturnDate` acts on multiple abstraction levels. It checks the database, validates some rules for presentation purposes and returns an HTTP response code. While there is nothing inherently wrong with this design choice (especially for a small CRUD service), it hinders unit testing. If you isolated each different abstraction to its own isolated context, you could easily validate the presentation rule (Are there any reservation time?) and the API layer (Does this return the correct HTTP status code?) in isolation, with unit tests.
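To make that concrete, here is a minimal integration-test sketch. It assumes Spring Boot's test support with an embedded database; `ReservationService` is an illustrative name for whatever class holds `getFirstReturnDate` - adjust to your actual types:

```
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.time.LocalDateTime;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.context.annotation.Import;

@DataJpaTest                       // boots JPA plus an embedded test database
@Import(ReservationService.class)  // hypothetical service under test
class ReservationServiceIT {

    @Autowired
    private ReservationService service;

    @Test
    void reportsAnyDateWhenNoMatchingReservationsExist() {
        // against a known-empty database, the real JPQL runs and the
        // service should produce its "Any Date" response
        Object response = service.getFirstReturnDate(1L, LocalDateTime.now());
        assertNotNull(response);
    }
}
```

This exercises the actual query against a real (if embedded) database, so a typo in the query string fails the test, which the mocked version can never do.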
defining macro inside of struct definition On the definition of the structure below, there is a line with a macro definition (`#define`). What does that line do exactly? I understand it makes an alias to the first entry of the array `h_addr_list`, but it still looks weird to me. Is `h_addr` a member of the struct `hostent`? Is this definition only within the scope of the struct? ``` struct hostent { char *h_name; /* official name of host */ char **h_aliases; /* alias list */ int h_addrtype; /* host address type */ int h_length; /* length of address */ char **h_addr_list; /* list of addresses from name server */ #define h_addr h_addr_list[0] /* address, for backward compatiblity */ }; ```
The macro definition is not scoped at all, it would work the same way if it was defined outside that struct. It allows old code to continue using `hr.h_addr` rather than having to be changed to `he.h_addr_list[0]` (assuming `he` is a `struct hostent`). To clarify, `h_addr` is not a field of `struct hostent`. That define is processed by the *pre-processor*, the actual *compiler* sees an empty line where the `#define` is. But if a piece of code is written as `he.h_addr`, the pre-processor will modify it and replace it with `he.h_addr_list[0]`. That is what the actual compiler will see. Pre-processor macros are never scoped: the compiler proper doesn't even see them - it only sees the result of the substitutions, and the pre-processor ignores (is not aware of) scoping entirely.
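A quick illustration of the substitution, assuming a platform whose `netdb.h` still ships the compatibility macro:

```
#include <netdb.h>

void example(struct hostent *he) {
    char *a = he->h_addr;         /* what you write                       */
    char *b = he->h_addr_list[0]; /* what the compiler sees after         */
                                  /* pre-processing: the same expression  */
    (void)a;
    (void)b;
}
```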
How to change Windows 10 interface language on Single Language version Unfortunately, my PC came with Windows 10 Single Language (BR Portuguese) pre-installed, and I can't change the "display language" from BR-PT to US English because there aren't any other options; for that obvious reason, indeed. So, please: how can I install a new "display language" on the Windows 10 Single Language version?
Worked for me: 1. Download package (see links below), name it lp.cab and place it to your `C:` drive 2. Run the following commands as Administrator: 2.1 installing new language > > dism /Online /Add-Package /PackagePath:C:\lp.cab > > > 2.2 get installed packages > > dism /Online /Get-Packages > > > 2.3 remove original package > > dism /Online /Remove-Package > /PackageName:Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~amd64~ru-RU~10.0.10240.16384 > > > If you don't know which is your original package you can check your installed packages with this line > > dism /Online /Get-Packages | findstr /c:"LanguagePack" > > > 3. Enjoy your new system language List of MUI for Windows 10: For LPs for Windows 10 version 1607 build 14393, follow [this](http://winaero.com/blog/mui-language-packs-for-windows-10-anniversary-update-rtm-version-1607-build-14393/) link. **Windows 10 x64 (Build 10240)**: zh-CN: Chinese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_9949b0581789e2fc205f0eb005606ad1df12745b.cab hr-HR: Croatian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_c3bde55e2405874ec8eeaf6dc15a295c183b071f.cab cs-CZ: Czech download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_d0b2a69faa33d1ea1edc0789fdbb581f5a35ce2d.cab da-DK: Danish download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_15e50641cef50330959c89c2629de30ef8fd2ef6.cab nl-NL: Dutch download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_8658b909525f49ab9f3ea9386a0914563ffc762d.cab en-us: English download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_75d67444a5fc444dbef8ace5fed4cfa4fb3602f0.cab fr-FR: French download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_206d29867210e84c4ea1ff4d2a2c3851b91b7274.cab de-DE: German download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_3bb20dd5abc8df218b4146db73f21da05678cf44.cab hi-IN: Hindi download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_e9deaa6a8d8f9dfab3cb90986d320ff24ab7431f.cab it-IT: Italian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_42c622dc6957875eab4be9d57f25e20e297227d1.cab ja-JP: Japanese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_adc2ec900dd1c5e94fc0dbd8e010f9baabae665f.cab kk-KZ: Kazakh download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_a03ed475983edadd3eb73069c4873966c6b65daf.cab ko-KR: Korean download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_24411100afa82ede1521337a07485c65d1a14c1d.cab pt-BR: Portuguese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_894199ed72fdf98e4564833f117380e45b31d19f.cab ru-RU: Russian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_d85bb9f00b5ee0b1ea3256b6e05c9ec4029398f0.cab es-ES: Spanish download.windowsupdate.com/c/msdownload/update/software/updt/2015/07/lp\_7b21648a1df6476b39e02476c2319d21fb708c7d.cab uk-UA: Ukrainian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_131991188afe0ef668d77c8a9a568cb71b57f09f.cab **Windows 10 x86 (Build 10240)**: zh-CN: Chinese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_e7d13432345bcf589877cd3f0b0dad4479785f60.cab hr-HR: Croatian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_60856d8b4d643835b30d8524f467d4d352395204.cab cs-CZ: Czech 
download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_dfa71b93a76b4500578b67fd3bf6b9f10bf5beaa.cab da-DK: Danish download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_af0ea4318f43d9cb30bcfa5ce7279647f10bc3b3.cab nl-NL: Dutch download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_cbcdf4818eac2a15cfda81e37595f8ffeb037fd7.cab en-us: English download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_41877260829bb5f57a52d3310e326c6828d8ce8f.cab fr-FR: French download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_80fa697f051a3a949258797a0635a4313a448c29.cab de-DE: German download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_7ea2648033099f99f87642e47e6d959172c6cab8.cab hi-IN: Hindi download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_78a11997f4e4bf73bbdb1da8011ebfb218bd1bac.cab it-IT: Italian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_9e62d9a8b141e0eb6434af5a44c4f9468b60a075.cab ja-JP: Japanese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_79bd099ac811cb1771e6d9b03d640e5eca636b23.cab kk-KZ: Kazakh download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_59e690df497799cacb96ab579a706250e5a0c8b6.cab ko-KR: Korean download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_a88379b0461479ab8b5b47f65c4c3241ef048c04.cab pt-BR: Portuguese download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_bb9f192068fe42fde8787591197a53c174dce880.cab ru-RU: Russian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_280bf97bbe34cec1b0da620fa1b2dfe5bdb3ea07.cab es-ES: Spanish download.windowsupdate.com/c/msdownload/update/software/updt/2015/07/lp\_31400c38ffea2f0a44bb2dfbd80086aa3cad54a9.cab uk-UA: Ukrainian download.windowsupdate.com/d/msdownload/update/software/updt/2015/07/lp\_41cd48aa22d21f09fbcedc69197609c1f05f433d.cab
How to find a text inside stored procedures across multiple databases I have 40+ databases, and I want to find procedures in all databases that use the text `sp_reset_data`. [This query helped me a lot](https://stackoverflow.com/questions/674623/how-to-find-a-text-inside-sql-server-procedures-triggers): ``` DECLARE @Search varchar(255) SET @Search='sp_reset_data' SELECT DISTINCT o.name AS Object_Name,o.type_desc FROM sys.sql_modules m INNER JOIN sys.objects o ON m.object_id=o.object_id WHERE m.definition Like '%'+@Search+'%' ORDER BY 2,1 ``` But this gets the procedures only for the current database. Is there a way to improve this kind of query to look in every database on the server without manually changing the current DB?
``` DECLARE @sql NVARCHAR(MAX); SET @sql = N''; SELECT @sql = @sql + 'SELECT db = ''' + name + ''', o.name, o.type_desc FROM ' + QUOTENAME(name) + '.sys.sql_modules AS m INNER JOIN ' + QUOTENAME(name) + '.sys.objects AS o ON m.[object_id] = o.[object_id] WHERE m.definition LIKE N''%'' + @Search + ''%'' ORDER BY o.type_desc, o.name;' FROM sys.databases WHERE database_id > 4 AND state = 0; -- online EXEC sp_executesql @sql, N'@Search NVARCHAR(255)', N'sp_reset_data'; ``` Strictly speaking, if you only want procedures, then it is a little simpler (the above will also include functions, triggers, even views): ``` DECLARE @sql NVARCHAR(MAX); SET @sql = N''; SELECT @sql = @sql + 'SELECT db = ''' + name + ''', o.name FROM ' + QUOTENAME(name) + '.sys.sql_modules AS m INNER JOIN ' + QUOTENAME(name) + '.sys.procedures AS o ON m.[object_id] = o.[object_id] WHERE m.definition LIKE N''%'' + @Search + ''%'' ORDER BY o.name;' FROM sys.databases WHERE database_id > 4 AND state = 0; -- online EXEC sp_executesql @sql, N'@Search NVARCHAR(255)', N'sp_reset_data'; ```
AJAX File Upload with XMLHttpRequest I know there are a lot of similar questions, but I still haven't found a solution for my problem. I'm trying to upload a file with XMLHttpRequest, so I developed the code below: ``` var sendFiles = function(url,onload,onerror,file,headers){ var xhr = XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHttp'), upload = xhr.upload; API.addEvent(xhr,'readystatechange',function(){ if(xhr.readyState==4) if((xhr.status>=200 && xhr.status<300) || xhr.status==304){ this.response = this.response || this.responseText; onload.call(xhr); }else onerror.call(xhr); }); xhr.open('POST',url,true); for(var n=0;n<headers.length;n++) xhr.setRequestHeader(headers[n]); xhr.send(file); return xhr; }; ``` And the PHP-side script is: ``` <?php header('Content-type: text/html;charset=ISO-8859-1'); $status = 0; if(@copy($_FILES['file']['tmp_name'],'test\\' . $_FILES['file']['name'])) $status = 1; else $err = '0'; echo '{ "status": ' . $status . ' }'; ?> ``` But the var $\_FILES['file'] seems to be empty, which means that the file isn't being sent to the server. Then I decided to use the FormData object, with the code below ``` var sendFiles = function(url,onload,onerror,file,headers){ var xhr = XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHttp'), upload = xhr.upload, formData = new FormData(); formData.append('file',file); API.addEvent(xhr,'readystatechange',function(){ if(xhr.readyState==4) if((xhr.status>=200 && xhr.status<300) || xhr.status==304){ this.response = this.response || this.responseText; onload.call(xhr); }else onerror.call(xhr); }); xhr.open('POST',url,true); for(var n=0;n<headers.length;n++) xhr.setRequestHeader(headers[n]); xhr.send(formData); return xhr; }; ``` And it worked, but only with file sizes up to about 8 MB. When I try sending a file larger than 8 MB, the var `$_FILES['file']` becomes empty again NOTE: the 'file' var corresponds to something like document.getElementById('fileInput').files[0];
To avoid the post\_max\_size limitation problem... but also out of memory problems on both sides : ## On the client side - use PUT instead of POST : `xhr.open("put", "upload.php", true);` - add custom headers to specify original FileName and FileSize : `xhr.setRequestHeader("X-File-Name", file.name);` `xhr.setRequestHeader("X-File-Size", file.size);` - use the File object directly in your XHR send method : `xhr.send(file);` *Please note that the File object can be obtained directly via the “files” property of your input[type=file] DOM object* ## On the server side - read the custom headers via $\_SERVER : `$filename = $_SERVER['HTTP_X_FILE_NAME'];` `$filesize = $_SERVER['HTTP_X_FILE_SIZE'];` - read file data using php://input : `$in = fopen('php://input','r');` You'll then be able to send very big files (1GB or more) without any limitation!!! *This code works for FireFox 4+, Chrome 6+, IE10+*
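For completeness, a minimal sketch of the receiving side (`upload.php`); the `uploads/` target directory is an assumption, and a real handler should sanitize the filename more aggressively:

```
<?php
$filename = basename($_SERVER['HTTP_X_FILE_NAME']);
$filesize = (int) $_SERVER['HTTP_X_FILE_SIZE'];

$in = fopen('php://input', 'r');
$out = fopen('uploads/' . $filename, 'w');

// stream in 8 KB chunks so memory use stays flat regardless of file size
while ($chunk = fread($in, 8192)) {
    fwrite($out, $chunk);
}

fclose($in);
fclose($out);

echo '{ "status": 1 }';
```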
What is the difference between E\_STRICT and E\_ALL in PHP 5.4? In PHP 5.4, what is the difference between using `E_STRICT` and `E_ALL` ? Are both the same?
*In PHP 5.4, what is the difference between using E\_STRICT and E\_ALL*. Well: ``` 5.4.0 E_STRICT became part of E_ALL. 5.3.0 E_DEPRECATED and E_USER_DEPRECATED introduced. 5.2.0 E_RECOVERABLE_ERROR introduced. 5.0.0 E_STRICT introduced (not part of E_ALL). ``` An Example: ``` <?php // Turn off all error reporting error_reporting(0); // Report simple running errors error_reporting(E_ERROR | E_WARNING | E_PARSE); // Reporting E_NOTICE can be good too (to report uninitialized // variables or catch variable name misspellings ...) error_reporting(E_ERROR | E_WARNING | E_PARSE | E_NOTICE); // Report all errors except E_NOTICE error_reporting(E_ALL & ~E_NOTICE); // Report all PHP errors (see changelog) error_reporting(E_ALL); // Report all PHP errors error_reporting(-1); // Same as error_reporting(E_ALL); ini_set('error_reporting', E_ALL); ?> ``` [PHP Manual: error\_reporting](http://php.net/manual/en/function.error-reporting.php) [A similar question answered on SO here as well.](https://stackoverflow.com/questions/1638238/whats-the-point-of-e-all-e-strict-if-its-the-same-value-as-e-all)
How can I get an enumeration's valid ranges using RTTI or TypeInfo in Delphi I am using RTTI in a test project to evaluate enumeration values, most commonly properties on an object. If an enumeration is out of range, I want to display text similar to what the Evaluate/Modify IDE window would show, something like "(out of bound) 255". The sample code below uses TypeInfo to demonstrate the problem: a value outside the enumeration causes an Access Violation when passed to `GetEnumName`. Any solution using RTTI or TypeInfo would help me; I just don't know the enumerated type in my test code.

```
program Project60;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  System.SysUtils,
  TypInfo;

Type
  TestEnum = (TestEnumA, TestEnumB, TestEnumC);

const
  TestEnumUndefined = TestEnum(-1);

procedure WriteEnum(const ATypeInfo: PTypeInfo; const AOrdinal: Integer);
begin
  WriteLn(Format('Ordinal: %d = "%s"', [AOrdinal, GetEnumName(ATypeInfo, AOrdinal)]));
end;

var
  TestEnumTypeInfo: PTypeInfo;

begin
  try
    TestEnumTypeInfo := TypeInfo(TestEnum);
    WriteEnum(TestEnumTypeInfo, Ord(TestEnumA));
    WriteEnum(TestEnumTypeInfo, Ord(TestEnumB));
    WriteEnum(TestEnumTypeInfo, Ord(TestEnumC));
    WriteEnum(TestEnumTypeInfo, Ord(TestEnumUndefined)); //AV
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  ReadLn;
end.
```
Use [`GetTypeData()`](http://docwiki.embarcadero.com/Libraries/en/System.TypInfo.GetTypeData) to get more detailed information from a [`PTypeInfo`](http://docwiki.embarcadero.com/Libraries/en/System.TypInfo.PTypeInfo), e.g.:

```
procedure WriteEnum(const ATypeInfo: PTypeInfo; const AOrdinal: Integer);
var
  LTypeData: PTypeData;
begin
  LTypeData := GetTypeData(ATypeInfo);
  if (AOrdinal >= LTypeData.MinValue) and (AOrdinal <= LTypeData.MaxValue) then
    WriteLn(Format('Ordinal: %d = "%s"', [AOrdinal, GetEnumName(ATypeInfo, AOrdinal)]))
  else
    WriteLn(Format('Ordinal: %d (out of bound)', [AOrdinal]));
end;
```
Why does "chmod 1777" and "chmod 3777" both set the sticky bit? To set the sticky bit on a directory, why do the commands `chmod 1777` and `chmod 3777` both work?
```
     1          1          1        1   1   1    1   1   1    1   1   1
___________ __________ __________  ___ ___ ___  ___ ___ ___  ___ ___ ___
 setUID bit setGID bit sticky bit      user        group        others
```

Each digit in that grouping is octal (base 8) and represents 3 bits. If you turn each one into binary it becomes a lot easier:

```
1 = 0 0 1
3 = 0 1 1
5 = 1 0 1
7 = 1 1 1
```

So with 1777, 3777, 5777, or 7777 you set the sticky bit, because the third bit of the first digit is a 1. However, with 3777, 5777, and 7777 you are additionally setting other bits (the setUID bit for the first column and/or the setGID bit for the second column). Conversely, any other first digit (up to the maximum of 7) would not set the sticky bit, because its last bit wouldn't be a 1, i.e. "on":

```
2 = 0 1 0
4 = 1 0 0
6 = 1 1 0
```
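You can check this directly in a shell; an illustrative session (with the `ls` output trimmed) might look like this. The `t` in the final position marks the sticky bit, and the `s` in the group execute position marks setGID:

```
$ mkdir demo
$ chmod 1777 demo      # sticky bit only
$ ls -ld demo
drwxrwxrwt ... demo
$ chmod 3777 demo      # setGID + sticky bit
$ ls -ld demo
drwxrwsrwt ... demo
```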
.htaccess RewriteOptions Inherit is not inheriting I've been trying to make my rules inherit with **RewriteOptions Inherit**, because for some reason rules are not inherited by default. You might want to know that I'm on a *LiteSpeed* server. The *IP* XXX.XX.XX.XX *points to a subdomain* which is *located in* `~/public_html/dir/sub/`

This .htaccess is located in **`~/public_html/`**:

```
RewriteEngine On
RewriteOptions Inherit
RewriteCond %{HTTP_HOST} ^XXX.XX.XX.XX$ [NC]
RewriteRule ^(.*)$ http://www.example.com [R=301,L]
```

In **`~/public_html/dir/`** I have this .htaccess:

```
RewriteEngine On
RewriteOptions Inherit
```

And in **`~/public_html/dir/sub/`** the .htaccess also looks like this:

```
RewriteEngine On
RewriteOptions Inherit
```

*In this .htaccess there's also some WordPress code below, which I also tried removing, but nothing changed.*

I also tried using `RewriteOptions Inherit Before`, but **rules from ~/public\_html/ are not applied**, so if I go to the IP I'm not redirected. Has anyone faced such a problem before, and do you have any tips on what I could try? I really did a lot of searching but couldn't figure out what's wrong.
I found out that inherited rules (those in the root folder) are re-parsed in the subdirectory's context:

```
# /.htaccess
RewriteRule melon http://www.google.com [R,L]
RewriteRule /services/bar http://www.yahoo.com [R,L]

# /services/.htaccess
RewriteEngine On
RewriteOptions Inherit
deny from all

# Results:
# http://mydomin.com/services/bar   => 403
# http://mydomin.com/services/melon => google.com !!!
```

Maybe try a looser pattern, e.g. `RewriteRule (.*) ...`. Also, since the RewriteRule context is lost, try using a RewriteCond capture pattern if you need a conditional rewrite:

```
RewriteCond %{REQUEST_URI} (/services/.*)
RewriteRule (.*) http://elsewhere.com/%1
# in place of a basic
# RewriteRule /services/(.*) http://elsewhere.com/$1
```
Find subset of points whose distance among each other is a multiple of a number The problem: given an array `A` that represents points on a line, e.g. `[5,-4,1,3,6]`, and a number `M=3`, find the maximal subset of points within `A` whose pairwise distances are all multiples of `M`. In the example, the two possible subsets would be `[-4,5]` (distance 9) and `[3,6]` (distance 3). The obvious brute-force solution would be to compute the distance between each pair of points in `O(N^2)` time, then build a set of candidates by incrementally building up the subsets. Is there a more efficient solution?
Iterate over the numbers in the array, compute each one modulo M, and group by the result. The largest group is the maximal subset, since any two numbers with the same residue differ by a multiple of M.

e.g. `[5,-4,1,3,6]` would give `[2,2,1,0,0]`

You'd have to take care how you handle negative numbers; the modulus operation for negatives is defined [differently in some languages](https://en.wikipedia.org/wiki/Modulo_operation#Remainder_calculation_for_the_modulo_operation) (e.g. PHP) than in others, and you want mod(-4, 3) to evaluate to 2, not -1. For negatives you could compute it as (M - (-x mod M)) mod M.

You could efficiently group the numbers by storing a hash map of lists, keyed on the modulus, as in the sketch below.
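A short sketch of the idea in Python, whose `%` operator already returns a non-negative result for a positive modulus, so no special-casing is needed; with the example input, two groups tie at size 2 and either one is a valid answer:

```
from collections import defaultdict

def maximal_subset(points, m):
    # Group points by residue mod m; any two points in the same
    # group differ by a multiple of m, so the largest group wins.
    groups = defaultdict(list)
    for x in points:
        groups[x % m].append(x)  # -4 % 3 == 2 in Python
    return max(groups.values(), key=len)

print(maximal_subset([5, -4, 1, 3, 6], 3))  # [5, -4] (ties with [3, 6])
```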
Angular 4 - Error: formControlName must be used with a parent formGroup directive I am adding `form` input fields using components - *engine-add-contact-form.html*

```
<form (ngSubmit)="onSubmit()" [formGroup]="contact_form">
    <md-tab-group>
        <md-tab label="Form">
            <ang-form></ang-form>
        </md-tab>
        <md-tab label="Email">
            <ang-email></ang-email>
        </md-tab>
        <md-tab label="Message">
            <ang-message></ang-message>
        </md-tab>
    </md-tab-group>
    <button md-raised-button type="submit">Publish</button>
```

*ang-form.html*

```
<div class="form-grid form-title">
    <md-input-container>
        <input formControlName="contact_form_title" class="form-title-field" mdInput placeholder="Title" value="">
    </md-input-container>
</div>
```

In the same way I added the HTML for the other components (`ang-email`, `ang-message`). I added the `[formGroup]` directive in `engine-add-form.ts`:

```
export class EngineAddFormComponent{
    contact_form: any;
    form_value: any;

    constructor(){
        this.contact_form = new FormGroup({
            contact_form_title: new FormControl('', Validators.minLength(2)),
            ........
            ........
        });
    }

    onSubmit(){
        this.form_value = JSON.stringify(this.contact_form.value);
        console.log(this.form_value);
    }
}
```

I get the following error -

> Error: formControlName must be used with a parent formGroup directive. You'll want to add a formGroup directive and pass it an existing FormGroup instance (you can create one in your class).

I can't understand what is wrong with my code.
You need to pass the form group (in your case **contact\_form**) down to the child component, **ang-form**.

***engine-add-contact-form.html (modified)***

```
<form (ngSubmit)="onSubmit()" [formGroup]="contact_form">
    <md-tab-group>
        <md-tab label="Form">
            <ang-form [group]="contact_form"></ang-form>
        </md-tab>
        <md-tab label="Email">
            <ang-email></ang-email>
        </md-tab>
        <md-tab label="Message">
            <ang-message></ang-message>
        </md-tab>
    </md-tab-group>
    <button md-raised-button type="submit">Publish</button>
```

***ang-form.html (modified)***

```
<div class="form-grid form-title" [formGroup]="group">
    <md-input-container>
        <input formControlName="contact_form_title" class="form-title-field" mdInput placeholder="Title" value="">
    </md-input-container>
</div>
```

> Add **@Input() group: FormGroup;** in your ang-form.component.ts, as shown in the sketch below.
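For reference, a minimal sketch of what the child component class could look like; the class and selector names follow the question, and your file layout may differ:

```
import { Component, Input } from '@angular/core';
import { FormGroup } from '@angular/forms';

@Component({
  selector: 'ang-form',
  templateUrl: './ang-form.html'
})
export class AngFormComponent {
  // Receives the parent FormGroup so formControlName can resolve against it
  @Input() group: FormGroup;
}
```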
TypeError in Typescript Here is my problem: I get this error:

> Uncaught TypeError: Object prototype may only be an Object or null: undefined

```
export abstract class AbstractLogicExpression {
    protected _logicChildExpressions: AbstractLogicExpression[] = Array();
    protected _precedence = 0;
    protected _parsed = false;
    protected _expressionType = "";
    protected rightAssociative = false;

    public toDNF() {
        for (let i = 0; i < this.logicChildExpressions.length; i++) {
            let actualLogicExpression: AbstractLogicExpression = this.logicChildExpressions[i];
            if (actualLogicExpression._expressionType == "~") {
                let logicConjunction = actualLogicExpression.logicChildExpressions[0];
                let var1 = logicConjunction.logicChildExpressions[0];
                let var2 = logicConjunction.logicChildExpressions[1];
                if (logicConjunction._expressionType == "*") {
                    actualLogicExpression.logicChildExpressions[0] = new LogicOr();
                    //actualLogicExpression.logicChildExpressions[0].add(new LogicNeg(var1));
                    //actualLogicExpression.logicChildExpressions[0].add(new LogicNeg(var2));
                }
            }
        }
    }
}
```

I get this error because of the line before the two commented-out lines:

```
actualLogicExpression.logicChildExpressions[0] = new LogicOr();
```

I tested this by commenting and uncommenting the lines, because I get no line number in the error message. Does anyone know what I can do? If you need a little more code, I can post it. Here is the code of LogicOr: <https://pastebin.com/T28Zjbtb>
You get an error on this line:

> actualLogicExpression.logicChildExpressions[0] = new LogicOr();

The error message is

> Uncaught TypeError: Object prototype may only be an Object or null: undefined

It is very easy to understand once you are familiar with classes and how they work (<https://basarat.gitbooks.io/typescript/docs/classes.html>). The error means that `new LogicOr` is failing because `LogicOr` is extending something that is `undefined`. Simple example:

```
let Bar;
class Foo extends Bar { } // Uncaught TypeError: Object prototype may only be an Object or null: undefined
```

# More

Fix the bug in `LogicOr` and its inheritance chain; one common cause is sketched below.
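Since the full `LogicOr` source isn't shown here, the following is only an assumption about the cause, but in practice this error very often comes from a circular import between the base class and a subclass:

```
// abstract-logic-expression.ts
import { LogicOr } from './logic-or'; // needed by toDNF()
export abstract class AbstractLogicExpression { /* ... */ }

// logic-or.ts
import { AbstractLogicExpression } from './abstract-logic-expression';
// If abstract-logic-expression.ts is still mid-evaluation when this
// line runs, AbstractLogicExpression is undefined and `extends` throws.
export class LogicOr extends AbstractLogicExpression { /* ... */ }
```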
C++ Vector Initialization List vs. Assignment In C++, what is the technical difference between the following two ways of initializing a vector?

```
vector<int> v_1 {0, 1, 2};
vector<int> v_2 = {3, 4, 5};
```

The first one is an *initialization list*. What is the second one? I would appreciate hints on the correct terminology and references to documentation for the different standard versions (C++98 vs. C++11).
```
vector<int> v_1 {0, 1, 2};
```

This is [*direct-list-initialization*](https://en.cppreference.com/w/cpp/language/list_initialization), a form of [*direct-initialization*](https://en.cppreference.com/w/cpp/language/direct_initialization). An object `v_1` is constructed with the provided values.

```
vector<int> v_2 = {3, 4, 5};
```

This is [*copy-list-initialization*](https://en.cppreference.com/w/cpp/language/list_initialization). For a `vector<int>` the result is the same as with *direct-list-initialization*. There is still a minor semantic difference, though: *copy-initialization* excludes explicit constructors (see the sketch below).

The list-initialization syntax (both version 1 and 2) was introduced in C++11.
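A small sketch of that difference with an `explicit` constructor (`Widget` is an illustrative type, not from the question):

```
#include <vector>

struct Widget {
    explicit Widget(int) {}
};

int main() {
    Widget w1{42};       // OK: direct-list-initialization may use explicit ctors
    // Widget w2 = {42}; // error: copy-list-initialization may not
    std::vector<int> v_1{0, 1, 2};     // direct-list-initialization
    std::vector<int> v_2 = {3, 4, 5};  // copy-list-initialization; same result,
                                       // as the initializer_list ctor is not explicit
}
```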
How to dynamically decide on number of goroutines working on a task I have written some dummy code to demonstrate the question. There are 2 channels and 3 goroutines in the code. 1 goroutine generates numbers and routes them based on whether they are divisible by 100 with no remainder:

- If the number is divisible by 100, it pushes it to the first channel.
- Otherwise it pushes it to the second channel.

2 goroutines are the consumers of these channels:

- 1 goroutine is responsible for consuming the numbers 1...99, 101...199, etc.
- The other goroutine is responsible for 100, 200, 300, etc.

Now obviously, one goroutine has 99x more work to do than the other. How is this handled in Go? If a goroutine works more than another, is that goroutine given more CPU time? Or should I handle this situation myself, for example by creating 99 goroutines for the more resource-hungry channel? (For the sake of argument, the jobs are thought of as identical.)

```
func main() {
	ch1 := make(chan int)
	ch2 := make(chan int)
	go generator(ch1, ch2)
	go handler(ch1)
	go handler2(ch2)
	time.Sleep(1 * time.Second)
}

func generator(chan1, chan2 chan int) {
	for i := 0; ; i++ {
		if i%100 == 0 {
			chan1 <- i
		} else {
			chan2 <- i
		}
	}
}

func handler(number chan int) {
	for num := range number {
		fmt.Println("Number divided by 100 is 0. ", num)
	}
}

func handler2(number chan int) {
	for num := range number {
		fmt.Println("Number divided by 100 is not 0. ", num)
	}
}
```
How much CPU resource a goroutine gets depends on a lot of things. What we can say in general is that the goroutine that only handles the numbers divisible by 100 will most likely wait a lot more than the other. You shouldn't worry about this: waiting for an element on a channel does not require CPU resources, so if you have "enough" other goroutines that have jobs to do, they can utilize your CPU.

Your example is simple for obvious reasons, but in a real-life application it would be more profitable to abstract your tasks into general tasks (e.g. handling any number could be a task), create and use a general worker pool, and send all tasks to the pool for execution. That way, no matter how many goroutines the pool has, if there is work to do and there is a free (waiting) goroutine, it will take on the task, utilizing your CPU resources as much as possible. The job processor (executor) should know what to do with a number being `100` or `101`; see the sketch below.

For an example of how such a goroutine pool can be implemented, see [Is this an idiomatic worker thread pool in Go?](https://stackoverflow.com/questions/38170852/is-this-an-idiomatic-worker-thread-pool-in-go/38172204#38172204)
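A minimal sketch of that idea applied to the numbers from the question; the pool size (3) and the task count (500) are arbitrary choices for illustration, not recommendations:

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	tasks := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < 3; w++ { // any free worker picks up the next task
		wg.Add(1)
		go func() {
			defer wg.Done()
			for num := range tasks {
				if num%100 == 0 {
					fmt.Println("Number divided by 100 is 0. ", num)
				} else {
					fmt.Println("Number divided by 100 is not 0. ", num)
				}
			}
		}()
	}
	for i := 0; i < 500; i++ {
		tasks <- i
	}
	close(tasks) // no more tasks; lets the range loops finish
	wg.Wait()
}
```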
Change the icons of PrimeFaces p:rating? I would like to change the shape of `p:rating` and display circles, squares, or anything else instead of the stars. Is that possible, maybe via CSS?
# You have two Options

## Option 1

You need to create your own icons; basically, the icons are in one [sprite image](http://www.primefaces.org/showcase/javax.faces.resource/rating/rating.png.xhtml?ln=primefaces&v=5.0.4):

![original image](https://i.stack.imgur.com/cpAx2.png)

So in your case you would implement your own by replacing each icon, for example:

![enter image description here](https://i.stack.imgur.com/LWN1z.png)

Then in CSS replace the background with the new one:

```
div.ui-rating-star a, div.ui-rating-cancel a {
    background-image: url("#{resource['images/icon/myrating.png']}");
}
```

[online demo](http://jsf.hatemalimam.com/DailyLab/rating/main.xhtml)

### How to create such sprites

Go to <http://spritepad.wearekiss.com/>

Select

![change size](https://i.stack.imgur.com/tFNNV.png)

Then

![size](https://i.stack.imgur.com/osajM.png)

Create your icons at 15x15px

![like this](https://i.stack.imgur.com/MsITE.png)

Drag and drop them onto the pad

![drag](https://i.stack.imgur.com/RgX2x.png)

## Option 2

Font Awesome or any other CSS icon font

```
<p:rating styleClass="awesome-rating" />
```

In the case of Font Awesome you need to add this CSS piece:

```
.awesome-rating .ui-rating-star,
.awesome-rating .ui-rating-cancel {
    float: initial;
    display: inline;
}

.awesome-rating .ui-rating-star a,
.awesome-rating .ui-rating-cancel a {
    background: none;
    width: initial;
    height: initial;
    font: normal normal normal 14px/1 FontAwesome;
    font-size: inherit;
    text-rendering: auto;
    -webkit-font-smoothing: antialiased;
    -moz-osx-font-smoothing: grayscale;
    color: #e6e6e6;
    margin-right: 5px;
    font-size: 30px;
    display: inline;
}

.awesome-rating .ui-rating-cancel a:before {
    content: "\f056";
}

.awesome-rating .ui-rating-cancel.ui-rating-cancel-hover a {
    color: #ce1c18;
}

.awesome-rating .ui-rating-star a:before {
    content: "\f006";
}

.awesome-rating .ui-rating-star.ui-rating-star-on a {
    color: #fadc3e;
}

.awesome-rating .ui-rating-star.ui-rating-star-on a:before {
    content: "\f005";
}
```

You can change the `:before` content with the icon's content code; here's a [full list](https://astronautweb.co/snippet/font-awesome/).

**See more:**

- [CSS Image Sprites](http://www.w3schools.com/css/css_image_sprites.asp).
Black line showing on map fragment in android I am developing a fragment in Android. In my tab there are four fragments, one of which is a map fragment. Using this map fragment:

```
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/map"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    class="com.google.android.gms.maps.SupportMapFragment" />
```

the task is done perfectly. The problem is that when the user swipes, a black line is shown on the map fragment for about a second. I've researched this online but couldn't find a way to prevent it. Why is this occurring, and how can I fix it?
I was also facing this strange issue and fixed it as follows: applying a transparent View over the map fragment seems to resolve it. It's a kind of hack; see if it works for you.

```
<RelativeLayout
    android:id="@+id/relativeLayout1"
    android:layout_width="match_parent"
    android:layout_height="300dp" >

    <fragment
        android:id="@+id/map"
        android:name="com.google.android.gms.maps.MapFragment"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"/>

    <View
        android:id="@+id/imageView123"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@android:color/transparent" />
</RelativeLayout>
```