R - How can I plot multiple histograms together? I have a dataframe with four columns that looks like this: ``` Beef-Low Beef-High Cereal-Low Cereal-High 90 73 107 98 76 102 95 74 90 118 97 56 64 104 80 111 86 81 98 95 51 107 74 88 72 100 74 82 90 87 67 77 95 117 89 86 78 111 58 92 ``` **I want to make a histogram showing all four columns as different-colored bars** so I tried: ``` > hist(wt$Beef.Low, main="Weight Gain Across Four Diets", xlab="Weight Gain", col="coral", xlim=c(0,120), ylim=c(0,4)) > hist(wt$Beef.High, col="coral3", add=T) > hist(wt$Cereal.Low, col="yellow", add=T) > hist(wt$Cereal.High, col="yellow3", add=T) ``` Which produced: ![Histogram](https://i.stack.imgur.com/THywp.jpg) I don't like the opaque nature of the bars because they mask the shapes of the overlapping histograms. I know that I can use the code found [here](http://www.shizukalab.com/toolkits/overlapping-histograms) to manually curate the colors of my histograms; but that seems like a tedious process and I feel sure that there must be a better way. Instead, I tried to copy what was done in this [question](https://stackoverflow.com/questions/3541713/how-to-plot-two-histograms-together-in-r) ``` > bl = wt$Beef.Low > bh = wt$Beef.High > cl = wt$Cereal.Low > ch = wt$Cereal.High > wts = rbind(bl,bh,cl,ch) > wtss = as.data.frame(wts) > ggplot(wtss, aes("Weight", fill="Diet")) + geom_histogram(alpha=0.5, aes(y = "Frequency"), position="identity") ``` But it doesn't work and I don't understand the ggplot commands well enough to even have a clue as to why. Please help.
I'd be inclined to do this with facets. Otherwise, with your dataset, the results are incomprehensible. ``` library(reshape2) library(ggplot2) gg <- melt(wt) ggplot(gg, aes(x=value, fill=variable)) + geom_histogram(binwidth=10)+ facet_grid(variable~.) ``` ![](https://i.stack.imgur.com/i2enk.png) **EDIT**: Response to OP's comment. `melt(...)` converts a data frame from "wide" format - data in different columns - to "long" format - all the data in one column, with a second column distinguishing between the different types of data (e.g., identifying which column the data in the row came from). If you use `melt(...)` with the defaults, as above, it creates a data frame with two columns: `$value` contains the actual data, and `$variable` contains the name of the column (in the starting data frame) that the data came from. Compare `wt` and `gg` and I think you'll see what I mean. So here we use `value` for the x-axis, and group the data based on `variable`.
Tracking changes in Entity Framework 4.0 using POCO Dynamic Proxies across multiple data contexts I started messing with EF 4.0 because I am curious about the POCO possibilities... I wanted to simulate disconnected web environment and wrote the following code to simulate this: 1. Save a test object in the database. 2. Retrieve the test object 3. Dispose of the DataContext associated with the test object I used to retrieve it 4. Update the test object 5. Create a new data context and persist the changes on the test object that are automatically tracked within the DynamicProxy generated against my POCO object. The problem is that when I call dataContext.SaveChanges in the Test method above, the updates are not applied. The testStore entity shows a status of "Modified" when I check its EntityStateTracker, but it is no longer modified when I view it within the new dataContext's Stores property. I would have thought that calling the Attach method on the new dataContext would also bring the object's "Modified" state over, but that appears to not be the case. Is there something I am missing? I am definitely working with self-tracking POCOs using DynamicProxies. ``` private static void SaveTestStore(string storeName = "TestStore") { using (var context = new DataContext()) { Store newStore = context.Stores.CreateObject(); newStore.Name = storeName; context.Stores.AddObject(newStore); context.SaveChanges(); } } private static Store GetStore(string storeName = "TestStore") { using (var context = new DataContext()) { return (from store in context.Stores where store.Name == storeName select store).SingleOrDefault(); } } [Test] public void Test_Store_Update_Using_Different_DataContext() { SaveTestStore(); Store testStore = GetStore(); testStore.Name = "Updated"; using (var dataContext = new DataContext()) { dataContext.Stores.Attach(testStore); dataContext.SaveChanges(SaveOptions.DetectChangesBeforeSave); } Store updatedStore = GetStore("Updated"); Assert.IsNotNull(updatedStore); } ```
As you stated later, you were using the POCO generator, not the self-tracking entities generator. I've tried it as well, and became quite perplexed. It seems that the proxy classes don't quite work as expected, and there might be a bug. Then again, none of the examples on MSDN try something like this, and when they reference updates in different tiers of an app (something like we're doing here) they use self-tracking entities, not POCO proxies. I'm not sure how these proxies work, but they do seem to store some kind of state (I managed to find the "Modified" state inside the private properties). But it seems that this property is COMPLETELY ignored. When you attach an entity to a context, the context adds an entry to the ObjectStateManager, and it stores further state updates in there. At this point if you make a change - it will be registered, and applied. The problem is that when you .Attach an entity - the Modified state from the proxy is not transferred to the state manager inside the context. Furthermore, if you use context.Refresh() the updates are overridden and forgotten! Even if you pass RefreshMode.ClientWins into it. I tried setting the object state's state property to modified, but it was overridden anyway, and the original settings were restored. It seems that there's a bug in the EF right now, and the only way to do this would be to use something like this: ``` using (var db = new Entities()) { var newUser = (from u in db.Users where u.Id == user.Id select u).SingleOrDefault(); db.Users.ApplyCurrentValues(user); db.SaveChanges(); } ``` ### One more thing here [Entitity Framework: Change tracking in SOA with POCO approach](https://stackoverflow.com/questions/2314995/entitity-framework-change-tracking-in-soa-with-poco-approach) It seems that POCO just doesn't support the approach you're looking for, and as I expected the self-tracking entities were created to tackle the situation you were testing, while POCO's proxies track changes only within the context that created them. Or so it seems...
Define forward declared C-struct as C++-struct Is it legal to forward-declare a `struct` as a C-`struct` ``` // api.h #ifdef __cplusplus extern "C" { #endif typedef struct handle_tag handle_t; handle_t *construct(); void destruct(handle_t *h); void func(handle_t *h); #ifdef __cplusplus } #endif ``` and subsequently define it as a C++-`struct`, i.e. as a non-POD type? ``` // api.cpp struct handle_tag { void func(); std::string member; }; void func(handle_t *h) { h->func(); } ``` The general intention is to get, via a C interface, an externally accessible opaque type `handle_t` which is internally implemented as a C++ data type.
Yes, that will work fine as long as the C code never needs to see "inside" the `handle_tag` structure, and the appropriate C++ construction/destruction is performed by the C++ code (which I presume the `construct` and `destruct` are for). All that the C code needs is a pointer to some data structure - it won't know what the contents are, so the contents can be anything you like, including constructor/destructor reliant data. Edit: I should point out that this, or methods similar to it (e.g. using a `void *` to record the address of an object for the C portion to `hold`), is a fairly common way to interface C code to C++ functionality. Edit2: It is critical that the C++ code called doesn't "leak" exceptions into the C code. That is undefined behaviour, and very much liable to cause crashes, or worse, "weird things" that don't crash... So unless the code is guaranteed not to cause exceptions (and, for example, `std::string` is liable to throw `bad_alloc` in case of low memory), it is required to use a `try/catch` block inside code like `construct` and `func` on the C++ side.
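A minimal sketch of what those C-callable wrappers might look like on the C++ side, with exceptions caught at the boundary. The error-handling convention here (returning a null handle, silently swallowing failures in `func`) is only an assumption for illustration:

```cpp
// api.cpp (sketch)
#include <string>
#include "api.h"

struct handle_tag {
    void func() { /* ... */ }
    std::string member;
};

extern "C" handle_t *construct() {
    try {
        return new handle_tag();   // C++ construction happens here
    } catch (...) {
        return nullptr;            // never let an exception cross into C
    }
}

extern "C" void destruct(handle_t *h) {
    delete h;                      // delete on a null pointer is a no-op
}

extern "C" void func(handle_t *h) {
    try {
        h->func();
    } catch (...) {
        // swallow or translate to an error code - but don't propagate
    }
}
```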
NM-applet's VPN menu disappears I spend my days hopping between VPN connections in Ubuntu 10.10 and this is becoming severely annoying. What happens is that every once in a while - typically several times a day - when I click on the networking icon in the top bar, the nm-applet's VPN Connections flyout menu is gone. (Said menu item is still there, it just doesn't have a submenu or do anything when clicked.) Every time I `killall nm-applet` and start it again, which usually brings the menu back. Has anybody here experienced this and been able to fix it?
This started to happen to me when I added the Nautilus Elementary PPA to my repo list. The version of network-manager-gnome available in that PPA is newer than the Canonical version and has some bugs in it that apparently crash the applet (causing the VPN menu to disappear). You can check to see where your version of network-manager-gnome is coming from by opening Synaptic and searching for network-manager-gnome. As of today, the latest Canonical version is 0.8.4~git.20110318t152954.9c4c9a0-0ubuntu1 and the maintainer of the package should be the Ubuntu Core Developers. If you notice that your version of network-manager-gnome is not coming from Canonical, select network-manager-gnome in Synaptic and then select 'force version' from the Package menu. This will prompt you to select the version you want to force; select 0.8.4~git.20110318t152954.9c4c9a0-0ubuntu1.
Drawing a box around a table and its associated caption I have a table in HTML which has an associated caption. I want to draw a box around these collectively (a single box around the tabular part and the caption), ``` caption { border: 2px solid #eeeeee; } table { border: 4px solid black; } ``` ``` <html> <table> <caption>Table caption</caption> <thead> <tr> <td>Alpha</td> <td>Beta</td> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> </tr> </tbody> </table> </html> ``` I know I could wrap the whole table in a DIV and style that, but I am using another program to generate the HTML programmatically (HTML from markdown using pandoc) so I can't control this. Is there any way to make the black box in the example go all around both the table part and the caption?
If you set the `display` property of the table to `inline-block`, then the border of the table will surround both the tabular part and the caption. ``` caption { border: 2px solid #eeeeee; } table { border: 4px solid black; display: inline-block; } ``` ``` <html> <table> <caption>Table caption</caption> <thead> <tr> <td>Alpha</td> <td>Beta</td> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> </tr> </tbody> </table> </html> ```
Linking against binary crate There's a crate I want to use as a library for some of my own code ([speedtest-rs](https://github.com/nelsonjchen/speedtest-rs) specifically, but it doesn't really matter). However, whenever I try to use this crate, the compiler doesn't want to play nice with it. ``` $ cargo build Compiling my-project v0.1.0 (/home/nick/Documents/code/my-project) error[E0432]: unresolved import `speedtest_rs` --> src/main.rs:1:5 | 1 | use speedtest_rs::*; | ^^^^^^^^^^^^ use of undeclared type or module `speedtest_rs` ``` Looking at the [Rust book](https://doc.rust-lang.org/book/ch02-00-guessing-game-tutorial.html), it seems like there's a distinction between a binary and a library crate > > The `rand` crate is a *library crate* which contains code intended to be used in other programs > > > Some googling has shown me that binary crates just have an extra link step, so I *should* be able to link against them, right? I know a lot of Rust packages have both a library and a binary in them, but what do you do when an author does not seem to follow this pattern?
> > Some googling has shown me that binary crates just have an extra link step, so I should be able to link against them, right? > > > No. It's not that simple. Plus, that extra step creates an executable file rather than a library file. An executable cannot be used as a library. > > I know a lot of Rust packages have both a library and a binary in them, but what do you do when an author does not seem to follow this pattern? > > > You can: - Ask them on GitHub to publish a library. - Fork the crate and make your own library (which you can do since it is published with the usual dual “Apache License, Version 2.0” + “MIT” license); a sketch of that is shown below. There isn't an automated way to use a binary crate as a library because, in particular: - Rust won't generate a library. - Since the crate is missing a `src/lib.rs` file, nothing is exported. This is akin to having all items in that crate private. You wouldn't be able to use anything.
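A minimal sketch of the fork-and-add-a-library route. The module names below are hypothetical - you would re-export whatever modules the binary crate actually defines in `src/main.rs`:

```rust
// src/lib.rs added to your fork (sketch)
pub mod speedtest;   // hypothetical: declare/move the crate's modules here
pub mod error;       // hypothetical

// src/main.rs can then use the library part itself, e.g.:
// use speedtest_rs::speedtest;
```

Then depend on your fork from your own project, for example with `speedtest-rs = { git = "https://github.com/<your-fork>/speedtest-rs" }` in `Cargo.toml`.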
Bad quality for 100% both width and height of canvas I have done a very tiny example with canvas, it's available on **JsFiddle**: <http://jsfiddle.net/yPtr5/> ``` <!DOCTYPE html> <html> <head> <title></title> <style type="text/css"> html, body { width: 100%; height: 100%; margin: 0px; padding: 0px; } #myCanvas { width: 100%; height: 100%; display: block; } </style> </head> <body> <canvas id="myCanvas"> Your browser does not support the HTML5 canvas tag. </canvas> <script> var canvas = document.getElementById( "myCanvas" ); var context = canvas.getContext( "2d" ); context.id = "myContext"; context.beginPath(); context.arc( 95, 50, 40, 0, 2 * Math.PI ); context.stroke(); setTimeout( function() { var rectWidth = 150; var rectHeight = 75; context.fillStyle = "blue"; context.fillRect( rectWidth / -2, rectHeight / -2, rectWidth, rectHeight ); }, 2000 ); </script> </body> </html> ``` As you are able to see, the rendering result has a very low quality: ![enter image description here](https://i.stack.imgur.com/O48sP.png) So, I'm wondering, how can I draw various figures using **Canvas** in a good quality, I don't want to draw in small size, I want to draw in 100% size of page. So, maybe I didn't define some anti aliasing filter or something else? Thanks!
## Problem In most general cases we should avoid using CSS to set the canvas size. The default size of canvas is 300 x 150 pixels (bitmap). If you set the size using CSS we'll just end up scaling those 300 x 150 pixels meaning the browser will start interpolating and smoothing the image, which is why you end up with a blurry result. ## Solution Remove these from the CSS-rule: ``` #myCanvas { /*width: 100%; height: 100%;*/ display: block; } ``` and set the size in JavaScript like this: ``` var canvas = document.getElementById( "myCanvas" ); canvas.width = window.innerWidth; // equals window dimension canvas.height = window.innerHeight; ``` You can of course set any other size you need (in pixels). You probably want to define position (i.e. fixed or absolute) for the canvas' CSS as well if your goal is full window size. Hope this helps.
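If the window can be resized, the bitmap should be resized (and the scene redrawn) as well - a rough sketch, assuming you wrap your drawing calls in a `draw()` function of your own:

```javascript
function resizeCanvas() {
    canvas.width = window.innerWidth;    // resizing clears the canvas,
    canvas.height = window.innerHeight;  // so redraw afterwards
    draw();                              // hypothetical function holding your drawing code
}

window.addEventListener("resize", resizeCanvas);
resizeCanvas(); // initial sizing
```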
Why is the CI for an odds ratio not always centered on the sample value? Also, what will tend to produce a CI in which the sample value is far above the center, or far below the center?
Odds ratios are not distributed symmetrically - they can't be, because they can't go below zero, but they can go as high as infinity. What is distributed symmetrically is the log of the odds ratio. Most stats packages give a choice of the regular regression coefficient (B), and the exponentiated regression coefficient (exp(B)), which is the odds ratio. Here's an example: B is 3, CIs are 1, 5. Exponentiate those values, and you get a point estimate for the odds ratio of 20.1, and confidence intervals of 2.72 and 148.4. However, notice that they are not symmetrical additively, but they are symmetrical multiplicatively. That is to say: 20.1/2.72 = 7.4 and 148.4/20.1 = 7.4 as well. The farther your point estimate of the odds ratio is from 1, the more distorted the effect will (appear to) be. To take an extreme example, a point estimate for B of 12, with CIs 10, 14 gives an odds ratio of 162,754 with CIs 22,026 and 1,202,604. The CIs cover a range of over one million. But also notice that the same ratio (7.4) still holds.
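A quick arithmetic check of the first example (shown in R, purely to illustrate the multiplicative symmetry - any calculator works):

```r
exp(c(1, 3, 5))
# [1]   2.718282  20.085537 148.413159

exp(3) / exp(1)   # 7.389056
exp(5) / exp(3)   # 7.389056 -> symmetric on the ratio scale, not the additive scale
```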
Should one create a bond with a Bluetooth LE device For a Bluetooth project with Xamarin (Android API 21 and up) I would like to know if it is common to create a bond with a Bluetooth device. The current requirements are: - The Bluetooth device is used frequently, but not continuously. - Reconnecting should happen as fast as possible - Bluetooth address changes randomly when the device is powered down - The device's name is unknown, null or random - The connection is encrypted - The connection uses an overlying API that requires a Bluetooth Device as parameter for connection. **Should one create a bond with this device for "better" recognition (as some sort of cache) or reconnect to the device "from scratch"? What is common in this scenario?** So it is not a question of "Can I bond?", but whether it is necessary to bond, or even better: what is a correct, working, reliable scenario. Currently I use code like this (result.Device.Name is for dev purposes): ``` public override void OnScanResult([GeneratedEnum] ScanCallbackType callbackType, ScanResult result) { if (result.Device.Name == "��" && !_discovered && result.ScanRecord != null && result.ScanRecord.ServiceUuids != null && result.ScanRecord.ServiceUuids.Any(x => x.Uuid.ToString().ToUpper() == uuid)) { lock (_locker) { _discovered = true; _deviceList.Add(result.Device); BluetoothDiscoverySucces?.Invoke(result.Device); } } } ```
Short answer: the correct, common, and reliable scenario is to bond. Bonding means the connection is secure and the link is trusted. It means that your local device will usually find the remote device even if its address is changing. Pairing/bonding is recommended practice in Bluetooth for security and privacy reasons. --- Long answer: since its introduction, incremental versions of the Bluetooth spec have added features to improve the security and privacy of Bluetooth devices. Many devices will not allow you to exchange data or properly track them unless you are paired/bonded (The difference between bonding and pairing is that with bonding, the exchanged keys are stored in the database.) In Bluetooth Low Energy, the pairing/bonding process consists of three stages:- **Phase 1 - Pairing Feature Exchange** The two connected devices exchange their IO capabilities (e.g. does the device have a keyboard), authentication requirements (e.g. to bond or not to bond) and supported key sizes. **Phase 2 - Authentication and Encryption** Using encryption algorithms a key is generated and used to encrypt the link (this is different for legacy and LESC pairing, but it is beyond the scope of this question). **Phase 3 - Key distribution** Several keys are exchanged between the devices including the CSRK (Connection Signature Resolving Key), the IRK (Identity Resolving Key) and the static address. Of particular importance to your question is the IRK and the address. Since Bluetooth v4.0, a feature known as [LE Privacy](https://blog.bluetooth.com/bluetooth-technology-protecting-your-privacy) allowed the device to continuously change its address to reduce its track-ability. Malicious devices would not be able to track the device implementing this feature, as it actually looks like a series of different devices. *In order to resolve the address, the devices need to be previously paired/bonded*. If the remote device contains the IRK then it can use that and the random resolvable address to derive the Bluetooth device's original address. --- So, going over your criteria:- - The Bluetooth device is used frequently, but not continuously. If you are going to disconnect/reconnect frequently, you can pair once with the device and store the keys (i.e. bond). Pairing is no longer needed afterwards as the same keys will be used to encrypt the connection upon disconnection/reconnection. - Reconnecting should happen as fast as possible Connection and bonding are two different things. It will take the same amount of time to reconnect regardless of bonding being implemented. However, once the devices are reconnected, it will take some time for the connection to be re-encrypted. - Bluetooth address changes randomly when the device is powered down This means that the device is utilising the LE privacy feature. Therefore your device should be bonded with it in order to resolve the private resolvable address. - The device's name is unknown, null or random This is usually the case with BLE. The devices are usually identifiable via their address. As such if your devices have previously bonded you will be able to resolve the changing address and identify the remote device. - The connection is encrypted You cannot achieve an encrypted connection without pairing first (as per the 3 phases above). With bonding you are storing the keys in your database, therefore ensuring that you can use them in the future to re-encrypt the connection without having to go over the pairing phases. 
- The connection uses an overlying API that requires a Bluetooth Device as parameter for connection. I am not sure what this means, but it is irrelevant to the requirement for bonding. --- For further reading on the subject, I recommend visiting the Bluetooth Specification Version 5.0, Vol 3, Part H, Section 2 Security Manager (page 2295)
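As a rough sketch of how bonding could be kicked off from your scan callback in Xamarin.Android (this assumes the standard `BluetoothDevice.CreateBond()` binding; listening for `BluetoothDevice.ActionBondStateChanged` to know when bonding has finished is left out for brevity):

```csharp
public override void OnScanResult(ScanCallbackType callbackType, ScanResult result)
{
    var device = result.Device;

    // Only bond once; bonded devices are remembered by the stack.
    if (device.BondState == Bond.None)
    {
        // Triggers the pairing/bonding procedure (phases 1-3 described above).
        // Returns false if the request could not be started.
        bool started = device.CreateBond();
    }

    // ... continue with your existing discovery / connection logic
}
```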
How to use "less -F" without "-X", but still display output if only one page? I'm tweaking the pager of Git, but I've got some issues with it. What I want is: 1. Always colored output 2. Scrolling by touchpad or mouse 3. Quit-if-one-screen And my current configuration is: ``` $ git config --global core.pager less -+F -+X -+S ``` This does everything except the last one. But, if I remove `-+F`, there will be no output in case of one-screen. If I remove `-+X` as well, the output is back but I cannot scroll by touchpad in `less`. Is there a workaround which can meet all the requirements above?
***UPDATE*** tl;dr Solution: upgrade to less 530 From <http://www.greenwoodsoftware.com/less/news.530.html>: > > Don't output terminal init sequence if using -F and file fits on one screen. > > > So with this fix we don't even need to bother determining whether to use `-X` on our own, `less -F` just takes care of it. PS. Some other less configs that I use: ``` export PAGER='less -F -S -R -M -i' export MANPAGER='less -R -M -i +Gg' git config --global core.pager 'less -F -S -R -i' #alias less='less -F -S -R -M -i' ``` --- I eventually ended up with writing a wrapper on my own. ``` #!/usr/local/bin/bash # BSD/OSX compatibility [[ $(type -p gsed) ]] && SED=$(type -p gsed) || SED=$(type -p sed) CONTEXT=$(expand <&0) [[ ${#CONTEXT} -eq 0 ]] && exit 0 CONTEXT_NONCOLOR=$( $SED -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g" <<< "$CONTEXT") LINE_COUNT=$( (fold -w $(tput cols) | wc -l) <<< "$CONTEXT_NONCOLOR" ) [[ $LINE_COUNT -ge $(tput lines) ]] && less -+X -+S -R <<< "$CONTEXT" || echo "$CONTEXT" ``` BSD/OSX users should manually install `gnu-sed`. The amazing regexp, which helps remove color codes, is from <https://stackoverflow.com/a/18000433/2487227> I've saved this script to `/usr/local/bin/pager` and then `git config --global core.pager /usr/local/bin/pager` ~~The treatment for OCD patients, hooray!~~
Koa-router route urls that don't exist I can't believe there is no easy answer to do this. I wish to redirect, let's say: ``` www.example.com/this-url-does-not-exist ``` to ``` www.example.com/ ``` There has to be a way; all the nodejs websites with koajs just can't crash? Here's my router (I'm using koa with koa-router): ``` router .get('/', function* (next) { this.body = "public: /"; }) .get('/about', function* (next) { this.body = "public: /about"; }) .get('*', function* (next) { // <--- wildcard * doesn't work this.body = "public: *"; }); ``` And don't tell me to use regular expressions; I've been trying with them and it means manually updating the expression when adding urls etc., which is not what I'm looking for, plus it doesn't work as javascript does not support negative lookbehinds.
If you prefer no regex do something like this: ``` var koa = require('koa'), router = require('koa-router')(), app = koa(); router.get('/path1', function *(){ this.body = 'Path1 response'; }); router.get('/path2', function *(){ this.body = 'Path2 response'; }); app.use(router.routes()) app.use(router.allowedMethods()); // catch all middleware, only land here // if no other routing rules match // make sure it is added after everything else app.use(function *(){ this.body = 'Invalid URL!!!'; // or redirect etc // this.redirect('/someotherspot'); }); app.listen(3000); ```
Is this use of a static queue thread-safe? The msdn documentation states that a static generic Queue is thread-safe. Does this mean that the following code is thread-safe? In other words, is there a problem when a thread Enqueues an int and another thread Dequeues an int at the same time? Do I have to lock the Enqueue and Dequeue operations for thread-safety? ``` class Test { public static Queue<int> queue = new Queue<int>(10000); Thread putIntThread; Thread takeIntThread; public Test() { for(int i = 0; i < 5000; ++i) { queue.Enqueue(0); } putIntThread = new Thread(this.PutInt); takeIntThread = new Thread(this.TakeInt); putIntThread.Start(); takeIntThread.Start(); } void PutInt() { while(true) { if(queue.Count < 10000) {//no need to lock here as only itself can change this condition queue.Enqueue(0); } } } void TakeInt() { while(true) { if(queue.Count > 0) {//no need to lock here as only itself can change this condition queue.Dequeue(); } } } } ``` Edit: I have to use .NET 3.5
This is absolutely *not* thread-safe. From the docs of [`Queue<T>`](http://msdn.microsoft.com/en-us/library/7977ey2c.aspx). > > Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. > > > A `Queue<T>` can support multiple readers concurrently, as long as the collection is not modified. Even so, enumerating through a collection is intrinsically not a thread-safe procedure. To guarantee thread safety during enumeration, you can lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization. > > > Rereading your question, you seem to be confused about the phrase "static members of this type" - it's not talking about "a static Queue" as there's no such thing. An object isn't static or not - a member is. When it talks about static members it's talking about things like `Encoding.GetEncoding` (`Queue<T>` doesn't actually have any static members). Instance members are things like `Enqueue` and `Dequeue` - members which relate to an instance of the type rather than the type itself. So either you need to use a lock for each action, or if you're using .NET 4, use [`ConcurrentQueue<T>`](http://msdn.microsoft.com/en-us/library/dd267265.aspx).
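Since you're on .NET 3.5 and can't use `ConcurrentQueue<T>`, a minimal sketch of the lock-per-operation approach (the class and member names are just for illustration):

```csharp
using System.Collections.Generic;

public class SynchronizedQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly object gate = new object();

    public void Enqueue(T item)
    {
        lock (gate) { queue.Enqueue(item); }
    }

    public bool TryDequeue(out T item)
    {
        lock (gate)
        {
            if (queue.Count > 0)
            {
                item = queue.Dequeue();
                return true;
            }
            item = default(T);
            return false;
        }
    }

    public int Count
    {
        get { lock (gate) { return queue.Count; } }
    }
}
```

Note that checking `Count` in one step and dequeuing in another (as in the question) is a check-then-act race; the wrapper avoids it by doing both under a single lock.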
C# Function parameter implementing abstract class and multiple interfaces Consider the following structure in C#: ``` interface I1 interface I2 abstract class A - class A1 : A - class A11 : A1, I1 - class A12 : A1, I2 - class A2 : A - class A21 : A2, I1 - class A22 : A2, I2 ``` Now I have a class B which takes an argument in the constructor. That argument must be a class that is somehow derived from A and implements the interface I2, i.e. it can be a class of type A12 or A22, but not A11 or A21. How can I define that parameter? Furthermore, I want to store the parameter later as a property in B (for later usage). I cannot redefine A as an interface because it provides a lot of virtual methods. Neither can I skip the tree-like inheritance scheme, because A1 and A2 provide a number of specific methods that are used elsewhere (not in B). So, all I need to handle in B is stuff from A and I2.
> > How can I define that parameter? > > > You can't, basically. Not in a constructor. The closest you could come to would be to create a static generic method returning a B: ``` public static B CreateInstance<T>(T item) where T : A, I2 ``` You can't do this in a constructor because constructors can't be generic. However, then if you need to *store* that value, you would need to choose to have a field of type `A` or `I2`, and cast when you need to. Of course, you could make `B` generic as per Enigmativity's answer - but that may have other implications elsewhere. If you want a non-generic `B`, you could potentially have both: ``` public abstract class B { // Common operations which don't depend on the constructor parameter } public class B<T> : B where T : A, I2 { public B(T item) { } } ``` This starts to get pretty complex, of course. Alternatively, you *could* skip compile-time checking, and just check at execution time: ``` public class B { private readonly A item; public B(A item) { if (!(item is I2)) { throw new ArgumentException("..."); } this.item = item; } } ``` While compile-time safety is preferable in general, it *may* be that it's not worth the extra hoops you need to go through in this case. It really depends on what you're doing with the type.
Complete example of (N)RVO I've been reading about (N)RVO and would like one, complete scenario description. I hope this question will serve other C++ -learners to clarify their ideas. Suppose this scenario: ``` string get_string() { string x("racecar"); //work on x... return x; } string a( get_string() ); string b = get_string(); ``` Please disregard the C++11 move-semantics for a moment. - If no (N)RVO is performed, how many constructors/assignments/destructors will be executed? (please point out, to which objects to they refer) - What changes if (N)RVO is applied? - Finally, how does the situation change in C++11 assuming, that `std::string` supports move-semantics.
1) Inside `get_string`, one string object (x) will be constructed using the constructor which takes a `const char*`. 2) When the function returns, the string constructed inside will be copy constructed to a temporary string object in the caller's space. 3) The temporary will be copy constructed to `a`. 4) See 1 5) See 2 6) See 3, but the copy will go to `b` With RVO, 2 and 5 can be eliminated by constructing the temporary inside the function via an invisible reference. With further copy elision (not RVO), 3 and 6 can be eliminated. So that leaves us with 2 constructions, both using the `const char*` constructor. With C++11 move semantics, the situation doesn't change at all if the compiler was good enough to get all the copy elision done. If copy elision isn't done, then 2, 3, 5 and 6 still exist, but become moves instead of copies. Unlike copy elision though, these moves are not an optional optimization. A conforming compiler must perform them, assuming it didn't already perform copy elision.
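One way to see which of those steps your own compiler actually performs is to swap `std::string` for a small type that logs its special member functions - a rough sketch (the output will vary by compiler and optimization flags):

```cpp
#include <iostream>

struct Tracer {
    Tracer(const char*)   { std::cout << "construct\n"; }
    Tracer(const Tracer&) { std::cout << "copy\n"; }
    Tracer(Tracer&&)      { std::cout << "move\n"; }
    ~Tracer()             { std::cout << "destruct\n"; }
};

Tracer get_tracer() {
    Tracer x("racecar");
    // work on x...
    return x;                   // candidate for NRVO
}

int main() {
    Tracer a(get_tracer());     // with full elision: only one "construct"
    Tracer b = get_tracer();    // likewise
}
```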
How can I use qnorm on Rcpp? ``` require(inline) func <- cxxfunction(, 'return Rcpp::wrap( qnorm(0.95,0.0,1.0) );' ,plugin="Rcpp") ``` *error: no matching function for call to ‘qnorm5(double, int, int)’* ``` require(inline) func <- cxxfunction(, 'return Rcpp::wrap( qnorm(0.95, 0.0, 1.0, 1, 0) );' ,plugin="Rcpp") ``` *error: no matching function for call to ‘qnorm5(double, double, double, int, int)’* ``` require(inline) code <-' double a = qnorm(0.95, 0.0, 1.0); return Rcpp::wrap( a ); ' func <- cxxfunction(, code ,plugin="Rcpp") func() ``` *error: no matching function for call to ‘qnorm5(double, double, double)’* How can I use qnorm on Rcpp?
By making the `mean` and `sd` arguments `double` as the error message shows -- so try this as a full example ``` library(inline) f <- cxxfunction(signature(xs="numeric"), plugin="Rcpp", body=' Rcpp::NumericVector x(xs); return Rcpp::wrap(Rcpp::qnorm(x, 1.0, 0.0)); ') ``` and have a look at the examples and unit tests -- I just looked this up in the unit test file `runit.stats.R` which has a lot of test cases for these statistical 'Rcpp sugar' functions. ***Edit on 2012-11-14:*** With Rcpp 0.10.0 released today, you can call `R::qnorm(double, double, double, int, int)` if you want to use C-style code written against `Rmath.h`. Rcpp sugar still gives you vectorised versions.
Location detecting techniques for IP addresses What are the location detecting techniques for IP addresses? I know to look at `$_SERVER['HTTP_ACCEPT_LANGUAGE']` (not accurate but mostly useful to detect location; for example, if an IP range's users set French in their browser then it means that this range belongs to France) and `gethostbyaddr($_SERVER['REMOTE_ADDR'])` (to look at the country code top-level domain), then maybe a whois on `gethostbyaddr($_SERVER['REMOTE_ADDR'])`; sometimes: `$HTTP_USER_AGENT` (Firefox's user agent string has a language code; not accurate but mostly can be used to detect the location). Also I know how to get the time zone, but it does not work in the new browsers. Moreover there is a [css issue](http://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/) that detects the visitor's history; it can be used to see what Google and Wikipedia pages he/she has visited (google.co.uk, google.com.tr). But what about cities?
You can't do this without a database that maps IP addresses to cities/countries/providers. There are commercial offerings such as [ip2location](http://www.ip2location.com/) that you could use. ~~AFAIK there is no free alternative though, as maintaining such an IP database is quite a lot of work.~~ Free alternative: [GeoIP2](http://search.cpan.org/dist/GeoIP2/) **Update:** There are several things that allow you to create such a db, if you invest enough time: 1. Use the databases provided by [regional and local registries](http://www.iana.org/numbers/) to find an IP's owner. 2. Many ISPs use a naming schema that allows you to locate the user. Sometimes you can even read the city name in plain text if you do a reverse-DNS lookup. Sometimes it is more cryptic. For example I currently have p5dcf6c4a.dip.t-dialin.net, and I have no idea what the naming scheme is. 3. Use a traceroute. If you can't identify the location of a user, you can still find out the location of its uplink
Converting CodeFile to CodeBehind So I seem to have some difficulties with making my CodeFile reference into CodeBehind. Before you ask, I do not have Visual Studio and I'm not on Windows. The code is being hosted on an IIS server that I have access to. ``` <% @Page Language="C#" Inherits="corn.cheese.pies.CasLogin" CodeFile="anon.cs" %> ``` This code works perfectly, however upon changing it to: ``` <% @Page Language="C#" Inherits="corn.cheese.pies.CasLogin" CodeBehind="anon.cs" %> ``` The page returns an error: ``` Parser Error Message: Could not load type 'corn.cheese.pies.CasLogin'. ```
Are you sure you understand the difference between CodeFile and CodeBehind? See, for example, [CodeFile vs CodeBehind](https://stackoverflow.com/questions/73022/codefile-vs-codebehind) (ignore the accepted answer, though; both links are dead) or [this article](http://www.ashchuan.com/blog/2011/03/31/codefile-and-codebehind-in-net-what-the-heck-is-the-difference/) about the difference. In a nutshell: A `CodeBehind` file is assumed to be compiled into an assembly that is deployed in the `bin` folder of your web site. When the ASP.NET engine loads your `aspx` file, it uses the `Inherits` directive to try to find that class. The `CodeBehind` file is primarily used to aid compiling (and tooling, e.g. "View Source") but the work is done at compile time. A `CodeFile` file is located and compiled by ASP.NET at runtime, on demand. You can "pre-compile" your web site to fall back on the older model, which is useful if you don't want to deploy your source code to every web site. But by default, your `CodeFile` file is a `.cs` file that is deployed to the site, and the class is generated in the ASP.NET temporary storage folder. (From a technical perspective: `CodeFile` pages are `partial class` definitions, with only the custom code-behind part of the class actually deployed in the C# file; this is why `CodeFile` didn't exist initially, because the C# compiler could not initially do partial classes.) By changing from `CodeFile` to `CodeBehind` you are telling ASP.NET to ignore the presence of the C# file, and only look for it in the compiled assembly. ASP.NET is, in turn, telling you that the compiled assembly doesn't actually contain the class you told it to look for.
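To make the distinction concrete, here is roughly what the code file has to look like for the `CodeFile` model to work. Only the namespace and class name come from your directive; the base class and the `Page_Load` handler are assumptions for illustration:

```csharp
// anon.cs - deployed next to anon.aspx and compiled by ASP.NET on demand.
// Note the "partial": the runtime merges this with the class it generates
// from the page markup.
using System;

namespace corn.cheese.pies
{
    public partial class CasLogin : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // page logic...
        }
    }
}
```

With `CodeBehind`, by contrast, this file would have to be compiled ahead of time (e.g. by a Visual Studio/MSBuild project) into an assembly deployed to `bin`; if that assembly isn't there, you get exactly the "Could not load type" parser error you're seeing.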
Which virtual machine software is preferable for running Windows 8 Developer Preview? I want to [preview Windows 8](http://msdn.microsoft.com/en-us/windows/apps/br229516) and decided to install it on a virtual machine as I don't want to format an existing partition. Installing Windows 8 by mounting the `.iso` [results in the primary partition getting formated](https://superuser.com/questions/335355/does-windows-8-developer-preview-64bit-allow-multiboot-when-installing-from-a-mou/335387#335387) and losing the current operating system. Can anyone who has tried installing Windows 8 in a virtual machine suggest which virtual machine is preferable to run *Windows 8 Developer Preview* from among the following virtual machine softwares. - [VirtualBox](http://www.virtualbox.org/) - [VMWare Server](http://www.vmware.com/products/server/overview.html) - [VMware Player](http://www.vmware.com/products/player/overview.html) - [Microsoft Virtual PC](http://www.microsoft.com/windows/virtual-pc/) **Edit:** It is not necessary to restrict to the above list, I would like to know about any other virtualization software available.
I would say that my favourite has to be VMware Workstation. However, restricting it to just your list: - **VirtualBox** It works fine with the latest version. Follow [this guide](http://www.sysprobs.com/guide-install-windows-8-virtualbox) for a walk through on how to do it. - **VMware Server** I can't find anywhere that specifically says no, nor I do have a way to test it, but I highly doubt it can; if it could I wouldn't like to use it. Remember that Windows 8 is heavily graphically accelerated - this is completely against what VMware Server is good at! - **VMware Workstation** If you upgrade to the latest version (8.0.0 Build 471780) - released today as part of the Workstation 8 release - it can run Windows 8 fine. - **Microsoft Virtual PC** Currently cannot run Windows 8; HAL error.
Network interface binding appears to only work as root I have a CentOS 7 server with two network interfaces. One points to our LAN (ens32) and the other to our DMZ (ens33). While the LAN interface works fine, I noticed that it seems like the traffic will only traverse the DMZ interface if logged in as root, and I'm unsure what this means or if it has any net effect on other issues that I'm experiencing. For example, if I run: ``` curl --interface ens32 ipecho.net/plain ``` as a non-root user, I'll get the public IP for that interface. If I run: ``` curl --interface ens33 ipecho.net/plain ``` as a non-root user, the command appears to stall and I'll be forced to kill the process. If I run the previous command as root, I'll get the expected output which is the public IP for that interface. Can anyone explain what might be going on here?
As can be seen in the official `curl` documentation, to specify/bind to a specific interface you need to either have CAP\_NET\_RAW or run as `root`. Hence `root` is able to do the binding/routing, while it does not work for a regular user in your case. When a regular user runs `curl` and is not permitted to use the `--interface` option, the default routing rules of the server apply to that operation instead. From the official curl page [curl.1 the man page](https://curl.haxx.se/docs/manpage.html): > > --interface > > > Perform an operation using a specified interface. You can enter > interface name, IP address or host name. An example could look like: > > > curl --interface eth0:1 <https://www.example.com/> > > > If this option is used several times, the last one will be used. > > > **On Linux it can be used to specify a VRF, but the binary needs to > either have CAP\_NET\_RAW or to be ran as root.** > > > More information about > Linux VRF: <https://www.kernel.org/doc/Documentation/networking/vrf.txt> > > >
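If you really need the binding to work for non-root users, one possible workaround - assuming your system has `setcap` and you accept the security trade-off of granting the capability to the curl binary - is:

```bash
# Grant CAP_NET_RAW to the real curl binary so --interface works without root
sudo setcap cap_net_raw+ep "$(readlink -f "$(command -v curl)")"

# Verify the capability was applied
getcap "$(readlink -f "$(command -v curl)")"
```

Keep in mind that file capabilities set this way are typically lost when the package is upgraded and the binary is replaced.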
How to give a gradient on text in a TextView in iOS? I would like to give a gradient effect to the text in a UITextView; when I try using a gradient layer, it's applied to the complete textview background. I just want it to be applied to the text, and the background colors should stay separate. I want output like this: [![enter image description here](https://i.stack.imgur.com/lU24b.png)](https://i.stack.imgur.com/lU24b.png) And it's coming out like this: [![enter image description here](https://i.stack.imgur.com/QcuEb.png)](https://i.stack.imgur.com/QcuEb.png) Can someone suggest how to achieve output like the first image on a UITextView? **I am specifically looking for a solution for UITextView, instead of UILabel or UIView**
You can create a patterned color for your text. So, you can apply this color to any text component(like a label,textView, button, etc). Please check the below example. where you can customize your color pattern in the getGradientLayer() method. ``` func gradientColor(bounds: CGRect, gradientLayer :CAGradientLayer) -> UIColor? { //We are creating UIImage to get gradient color. UIGraphicsBeginImageContext(gradientLayer.bounds.size) gradientLayer.render(in: UIGraphicsGetCurrentContext()!) let image = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return UIColor(patternImage: image!) } func getGradientLayer(bounds : CGRect) -> CAGradientLayer{ let gradient = CAGradientLayer() gradient.frame = bounds gradient.colors = [UIColor.red.cgColor, UIColor.blue.cgColor] gradient.startPoint = CGPoint(x: 0.0, y: 0.5) gradient.endPoint = CGPoint(x: 1.0, y: 0.5) return gradient } let textView = UITextView(frame: CGRect(x: 0, y: 0, width: 400, height: 400)) textView.font = UIFont.boldSystemFont(ofSize:50) textView.textAlignment = .center textView.text = "Hello World!" let gradient = getGradientLayer(bounds: textView.bounds) textView.textColor = gradientColor(bounds: textView.bounds, gradientLayer: gradient) ``` Output:- [![enter image description here](https://i.stack.imgur.com/IfHqS.png)](https://i.stack.imgur.com/IfHqS.png)
How to overcome my ISP router limitations (1Gbps ports on 2Gbps connection)? Here is my problem: I recently switched to the quickest ISP available in my region: Orange. It offers a **2Gb/s FTTH** internet connection, but the crappy provided router 'Livebox 5' only has 4 gigabit ethernet LAN ports and WiFi... so on a basic installation, I can only have **1Gb/s access per machine** directly connected to it! I was able to confirm it's an "Integrated switch" limitation by connecting two computers to their router and downloading at 1Gb/s on both of them at the same time... I also managed to get to 2Gb/s on a single Win10 machine by connecting two gigabit network cards directly to their router and using "Connectify Dispatch PRO" software to aggregate these links! I also figured out that the ISP router's integrated switch doesn't support 802.3ad (LACP) aggregation... As I have some 10Gb/s SFP+ machines, I wonder how I can arrange my network to allow the full 2Gb/s download speed on these machines? Here is what I have on hand: - the crappy 'Livebox 5' ISP router with 4 Gigabit ports - **TP-Link T1700G-28TQ** L3 Smart Switch with 24 Gigabit slots and 4 10GE SFP+ Slots - a Win10 PC with a 1Gb/s integrated card and a 10Gb/s SFP+ network card - two Debian servers with 1Gb/s integrated cards and 10Gb/s SFP+ network cards - lots of 1Gb/s Win10 and Debian machines... - lots of RasPi's Please give me some hints on where to start! Thx a lot guys!
I finally managed to do it ! I just had to put a pfSense router between my LAN and my ISP router ! pfSense has two Gigabit links as WAN to the ISP router, and one SFP+ link to my LAN, then : - Checked "Use non-local gateway" on each WAN gateway - Made a "Gateway group" with both WAN gateways - Chose this Gateway group as the Gateway in the LAN firewall rule And voila ! <https://www.speedtest.net/result/d/06b049fe-4fdf-4952-8efb-c65aa79503dd> [![enter image description here](https://i.stack.imgur.com/Nd7n7.png)](https://i.stack.imgur.com/Nd7n7.png)
C++ strongly typed typedef I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: ``` typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; ``` This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. ``` EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong ``` Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have c++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however: ``` template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs ); }; ``` Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: ``` class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; ``` The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do: ``` typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; ``` With my strongly typed typedef I'd have to tell the compiler that multypling a Velocity by a Time results in a Position. ``` class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error ``` To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
These are *phantom type parameters*, that is, parameters of a parameterised type that are used not for their representation, but to separate different “spaces” of types with the same representation. And speaking of spaces, that’s a useful application of phantom types: ``` template<typename Space> struct Point { double x, y; }; struct WorldSpace; struct ScreenSpace; // Conversions between coordinate spaces are explicit. Point<ScreenSpace> project(Point<WorldSpace> p, const Camera& c) { … } ``` As you’ve seen, though, there are some difficulties with unit types. One thing you can do is decompose units into a vector of integer exponents on the fundamental components: ``` template<typename T, int Meters, int Seconds> struct Unit { Unit(const T& value) : value(value) {} T value; }; template<typename T, int MA, int MB, int SA, int SB> Unit<T, MA - MB, SA - SB> operator/(const Unit<T, MA, SA>& a, const Unit<T, MB, SB>& b) { return a.value / b.value; } Unit<double, 0, 0> one(1); Unit<double, 1, 0> one_meter(1); Unit<double, 0, 1> one_second(1); // Unit<double, 1, -1> auto one_meter_per_second = one_meter / one_second; ``` Here we’re using *phantom values* to tag runtime values with compile-time information about the exponents on the units involved. This scales better than making separate structures for velocities, distances, and so on, and might be enough to cover your use case.
Why doesn't this path work to open a Windows file in PERL? I tried to play with Strawberry Perl, and one of the things that stumped me was reading the files. I tried to do: ``` open(FH, "D:\test\numbers.txt"); ``` But it can not find the file (despite the file being there, and no permissions issues). An equivalent code (100% of the script other than the filename was identical) worked fine on Linux.
As per [Perl FAQ 5](http://perldoc.perl.org/perlfaq5.html#Why-can%27t-I-use-%22C:%5Ctemp%5Cfoo%22-in-DOS-paths?--Why-doesn%27t-%60C:%5Ctemp%5Cfoo.exe%60-work?), you should be using **forward** slashes in your DOS/Windows filenames (or, as an alternative, escaping the backslashes). > > **Why can't I use "C:\temp\foo" in DOS paths? Why doesn't `C:\temp\foo.exe` work?** > > > Whoops! You just put a tab and a formfeed into that filename! Remember that within double quoted strings ("like\this"), the backslash is an escape character. The full list of these is in Quote and Quote-like Operators in perlop. Unsurprisingly, you don't have a file called "c:(tab)emp(formfeed)oo" or "c:(tab)emp(formfeed)oo.exe" on your legacy DOS filesystem. > > > Either single-quote your strings, or (preferably) use forward slashes. Since all DOS and Windows versions since something like MS-DOS 2.0 or so have treated / and \ the same in a path, you might as well use the one that doesn't clash with Perl--or the POSIX shell, ANSI C and C++, awk, Tcl, Java, or Python, just to mention a few. POSIX paths are more portable, too. > > > So your code should be `open(FH, "D:/test/numbers.txt");` instead, to avoid trying to open a file named "D:<TAB>est\numbers.txt" --- As an aside, you could further improve your code by using lexical (instead of global named) filehandle, a 3-argument form of open, and, most importantly, error-checking ALL your IO operations, especially `open()` calls: ``` open(my $fh, "<", "D:/test/numbers.txt") or die "Could not open file: $!"; ``` Or, better yet, don't hard-code filenames in IO calls (the following practice MAY have let you figure out a problem sooner): ``` my $filename = "D:/test/numbers.txt"; open(my $fh, "<", $filename) or die "Could not open file $filename: $!"; ```
optional closure and check if it is nil So what I want to have is a class that may get a closure passed to it in a function, it may also at some point want to disregard a that closure. How can I check if the closure variable is set and hwo can I delete it when I am done with it? > > Cannot invoke '!=' with an argument list of type '(@lvalue (sucsess: > Bool!, products: [AnyObject]!) -> ()?, NilLiteralConvertible)' Type > '(sucsess: Bool!, products: [AnyObject]!) -> ()?' does not conform to > protocol 'NilLiteralConvertible' > > > ``` class someClass{ //typealias completionHandlerClosureType = (sucsess:Bool!, items:[AnyObject]!)->() var completionHandler:(sucsess:Bool!, items:[AnyObject]!)->()? var hitpoints = 100 var someset = ["oh no!","avenge me!"] init(){} func getHitFunc(impact:Int, passedCompletionsHandler:(sucsess:Bool!, items:[AnyObject]!)->()){ completionHandler = passedCompletionsHandler hitpoints = hitpoints - impact } func checkIfDead{ if hitpoints<=0 { // The error received if completionHandler != nil{// Cannot invoke '!=' with an argument list of type //'(@lvalue (sucsess: Bool!, products: [AnyObject]!) -> ()?, NilLiteralConvertible)' //run the handler if dead completionHandler(sucsess: true, items: someset) //do not run it again completionHandler = nil //Type '(sucsess: Bool!, products: [AnyObject]!) -> ()?' does not conform to protocol 'NilLiteralConvertible' } } else{ completionHandler = nil //Type '(sucsess: Bool!, products: [AnyObject]!) -> ()?' does not conform to protocol 'NilLiteralConvertible' } } } ```
You need to wrap your closure signature in parentheses to make the closure itself optional. The way it's written now, the closure returns an optional Void (which doesn't really make sense). ``` var completionHandler: ((sucsess:Bool!, items:[AnyObject]!)->())? ``` Some style points and revisions to your example code: ``` // Capitalize class names so it's clear what's a class class SomeClass { // "success" has two "c"s var completionHandler: ((success:Bool!, items:[AnyObject]!)->())? var hitpoints = 100 var someset = ["oh no!","avenge me!"] init() { } func getHitFunc(impact:Int, passedCompletionsHandler:(success:Bool!, items:[AnyObject]!)->()){ completionHandler = passedCompletionsHandler hitpoints = hitpoints - impact } // You were missing the argument list here: func checkIfDead() { if hitpoints <= 0 { // Rather than checking to see if the completion handler exists, you can // just call it using optional syntax like this: completionHandler?(success: true, items: someset) } completionHandler = nil } } ```
What is the best way to upload files in a modern browser I want to upload a (single) file to a server and show the progress of the upload. I know I can upload a file using HTTP POST. I'm not familiar with web-sockets, but as I understand, binary data can also be sent that way and because web sockets are bi-directional I could get the progress of the upload. I'm not worried about older browsers so iframe's and flash solutions aren't very appealing unless there is a significant advantage in going that route. ~~I'm also curious as to the best server-side technology. Are their advantages to using a WSGI server like Django? Or maybe non-blocking I/O technology like Node.js? I'm not asking if web framework x is better than web framework y, or server x is better than server y. But simply what the ideal technology should have in order to facility uploads in the client.~~ **Update**: *It seems like the server side does not have bearing on the technologies/API's available on the client to facilitate uploads.*
**Edit** (2017-10-17): As of now, there is also the option to use [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch). It offers essentially the same capabilities as XMLHttpRequest behind a more modern promise-based API. There is a [polyfill](https://github.com/github/fetch) for browsers that don't support `window.fetch()` natively (which is mainly Internet Explorer and older Safari versions right now). # XMLHttpRequest vs. Web sockets vs. Something else Clearly [XMLHttpRequest](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest). Its capabilities in modern browsers are enormous and cover almost all scenarios. It will produce a standard POST or PUT request, any web server and framework combination can deal with that. While web sockets are nice for some scenarios, it's a different protocol that adds lots of complexity - they are only worth using if you need real-time responses from the server. And as you noted yourself, other approaches like Flash are merely ugly hacks. # Sending binary data Normally, you won't have direct access to files. So you will have an `<input type="file">` form field somewhere on your page and wait for the user to choose a file. The options then are: - Sending only the file contents: `request.send(input.files[0])`. The request body will be the file's contents and nothing else, no encoding will be performed and no metadata like file name will be transmitted. [Browser compatibility](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Browser_Compatibility): Chrome 7, Firefox 3.6, Opera 12, IE 10. - [Sending the data of the entire form](https://developer.mozilla.org/en-US/docs/Web/Guide/Using_FormData_Objects#Retrieving_a_FormData_object_from_an_HTML_form): `request.send(new FormData(input.form))`. Here the form contents will be encoded as `multipart/form-data`, meaning that you can send multiple form fields and metadata like field and file names will be transmitted as well. You can also [modify the `FormData` object](https://developer.mozilla.org/en-US/docs/Web/API/FormData#Methods) before sending it. Depending on the server-side framework, handling this request might be simpler than raw data, there are typically many helpers you can use. [Browser compatibility](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Browser_Compatibility): Chrome 6, Firefox 4, Opera 12, IE 10. - [Sending a typed array](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data#Sending_typed_arrays_as_binary_data): just in case you don't have a file but merely want to send some binary data you generate on the fly. No extra encoding is being performed here, so as far as the server side is concerned this works like sending file contents. [Browser compatibility](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Browser_Compatibility): Chrome 9, Firefox 9, Opera 11.60, IE 10. # Displaying upload progress You can [listen to `progress` events on `XMLHttpRequest.upload`](https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications#Handling_the_upload_process_for_a_file). The [`progress` events](https://developer.mozilla.org/en-US/docs/Web/API/ProgressEvent) have `loaded` and `total` properties that allow determining how far you've got with your request. [Browser compatibility](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Browser_Compatibility): Chrome 7, Firefox 3.5, Opera 11.60, IE 10. 
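A minimal sketch tying the pieces above together (the element id and target URL are placeholders):

```javascript
var input = document.getElementById("file-input");   // <input type="file" id="file-input">
var request = new XMLHttpRequest();

// Upload progress: loaded/total let you render a progress bar
request.upload.addEventListener("progress", function (event) {
  if (event.lengthComputable) {
    console.log(Math.round(100 * event.loaded / event.total) + "% uploaded");
  }
});

request.addEventListener("load", function () {
  console.log("Upload finished with status " + request.status);
});

request.open("POST", "/upload");                      // placeholder URL
request.send(new FormData(input.form));               // multipart/form-data body
```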
# JavaScript libraries

There are of course existing libraries wrapping the functionality outlined here. These are mentioned in other answers; searching on the web will certainly turn up even more. I explicitly don't want to propose any libraries here - which of them, if any, you should use is purely a matter of preference.
Who's triggering onLocationChanged in LocationManager in Android?
I was looking for this answer in previous posts about Android's Location Provider but I couldn't find it. How is onLocationChanged triggered, and what is the process from the moment the GPS daemon/library fixes a location until it reaches the LocationManager? Is it directly triggered by the GPS daemon or does it first go through somewhere in the Android Java Framework? Many thanks N
I've been looking around the code and this is what I have thus far: requestLocationUpdates registers the onLocationChangeListener. This in turn [gets stored](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.3.4_r1/android/location/LocationManager.java#LocationManager._requestLocationUpdates%28java.lang.String,android.location.Criteria,long,float,boolean,android.location.LocationListener,android.os.Looper%29) as a key in a hashmap with a value of a ListenerTransport. The ListenerTransport has a [private variable](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.3.4_r1/android/location/LocationManager.java#LocationManager.ListenerTransport.%3Cinit%3E%28android.location.LocationListener%2Candroid.os.Looper%29) which is the listener in question. The ListenerTransport just seems to be a class that receives a message, which [calls the onLocationChange method](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/2.3.4_r1/android/location/LocationManager.java#LocationManager.ListenerTransport._handleMessage%28android.os.Message%29) of the listener. This is where I'm lost: the ListenerTransport extends `ILocationListener.Stubs`, which extends Binder. Hence, it's a type of RPC call. But where does the call go? I've noticed that in `ILocationListener.Proxy`, in `requestLocationUpdates()`, there's a call to `IBinder.transact(Stub.TRANSACTION_requestLocationUpdates,...)`. It remains to be seen where this goes and what happens...
Delete JavaFX table row with delete key
Is there a way to delete a selected table row using the keyboard delete key? Is there an example of such an implementation?
Sure you can. You only have to register an [EventHandler](https://docs.oracle.com/javase/8/javafx/api/javafx/event/EventHandler.html) and listen for the specific KeyCode. The following example is for TreeTableView but should be applicable to all TableViews.

```
treeTableView.setOnKeyPressed( new EventHandler<KeyEvent>()
{
    @Override
    public void handle( final KeyEvent keyEvent )
    {
        final TreeItem<YourObject> selectedItem = treeTableView.getSelectionModel().getSelectedItem();

        if ( selectedItem != null )
        {
            if ( keyEvent.getCode().equals( KeyCode.DELETE ) )
            {
                //Delete or whatever you like:
                presenter.onEntityDeleteAction( selectedItem );
            }
            //... other keyevents
        }
    }
} );
```
How can I safely open a suspicious email?
Suppose I receive an email that I highly suspect is some form of spam, but I'm not 100% sure. Suppose also that I'm using some form of webmail (like Gmail) with good spam filters, but this message made it through anyway. Obviously I should not open any attachments, but is this email otherwise safe to open? If not, is there a simple way to safely open it?
If you open the email in a web client (ie, online at gmail.com or mail.yahoo.com, etc), you're generally very unlikely to experience any problems. If this email contained a script virus (very rare nowadays) it would generally require being opened in an email client locally installed on the computer in order to gain sufficient access to actually infect your computer. Viruses, because of the popularity of web clients for email, have pretty much stopped sending themselves as emails in the last few years. Spam is still a problem, and many viruses create spambots and enslave their infected computers as spam relays. But you're not going to catch a virus from the average spam message. If you are using a local email client, don't open suspicious emails unless you've got the email client running inside a virtualized system that you can scrub easily with a reset.
Module load/unload cycle when debugging Delphi application
Sometimes when running an application under the Delphi 2009 debugger, as soon as I do anything with the application form (click, move, ...) Delphi starts to flood the Event Log with messages similar to the following:

```
Module Load: UNKNOWN_MODULE_59954.No DebugInfo. Base Address: $02D90000. Process MyApp.exe (5584)
Module Unload: UNKNOWN_MODULE_59954. Process MyApp.exe (5584)
```

The number of the UNKNOWN\_MODULE increases with each cycle, so in the example above there have been almost 60,000 module load/unloads. The application remains unresponsive during this flood. Sometimes I have to just terminate the application to be able to continue. Any idea how to start tracking down the cause of this?
As "Arioch 'The" suggested - download and run [Sysinternal's process explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx). - From the View menu choose "Lower pane -> DLLs (Ctrl+D)" - Mark your process in the process tree - Right click lower pane's header and choose "Add Columns". - From the DLL tab choose "Base Address". - Sort modules on that column - Find your dll's base address - you will get it's name :) If the module is unloading too fast - set "Options > Difference Highlight Duration" to 9 sec. In my case it was odbcint.dll: ![procMon - find unknown module](https://i.stack.imgur.com/6X8Na.png) Also give your PC a virus scan - just in case :)
Number of unique values per column by group Consider the following dataframe: ``` A B E 0 bar one 1 1 bar three 1 2 flux six 1 3 flux three 2 4 foo five 2 5 foo one 1 6 foo two 1 7 foo two 2 ``` I would like to find, for each value of `A`, the number of unique values in the other columns. 1. I thought the following would do it: ``` df.groupby('A').apply(lambda x: x.nunique()) ``` but I get an error: ``` AttributeError: 'DataFrame' object has no attribute 'nunique' ``` 2. I also tried with: ``` df.groupby('A').nunique() ``` but I also got the error: ``` AttributeError: 'DataFrameGroupBy' object has no attribute 'nunique' ``` 3. Finally I tried with: ``` df.groupby('A').apply(lambda x: x.apply(lambda y: y.nunique())) ``` which returns: ``` A B E A bar 1 2 1 flux 1 2 2 foo 1 3 2 ``` and seems to be correct. Strangely though, it also returns the column `A` in the result. Why?
The `DataFrame` object doesn't have `nunique`, only `Series` do. You have to pick out which column you want to apply `nunique()` on. You can do this with a simple dot operator: ``` df.groupby('A').apply(lambda x: x.B.nunique()) ``` will print: ``` A bar 2 flux 2 foo 3 ``` And doing: ``` df.groupby('A').apply(lambda x: x.E.nunique()) ``` will print: ``` A bar 1 flux 2 foo 2 ``` Alternatively you can do this with one function call using: ``` df.groupby('A').aggregate({'B': lambda x: x.nunique(), 'E': lambda x: x.nunique()}) ``` which will print: ``` B E A bar 2 1 flux 2 2 foo 3 2 ``` To answer your question about why your recursive lambda prints the `A` column as well, it's because when you do a `groupby`/`apply` operation, you're now iterating through three `DataFrame` objects. Each `DataFrame` object is a sub-`DataFrame` of the original. Applying an operation to that will apply it to each `Series`. There are three `Series` per `DataFrame` you're applying the `nunique()` operator to. The first `Series` being evaluated on each `DataFrame` is the `A` `Series`, and since you've done a `groupby` on `A`, you know that in each `DataFrame`, there is only one unique value in the `A` `Series`. This explains why you're ultimately given an `A` result column with all `1`'s.
Where do I find the Instagram media ID of an image
I am looking for the `MediaID` of an Instagram image which has been uploaded. It should look like

> 
> 1234567894561231236\_33215652
> 

I have found out that the last set of integers is the `usersID`

For example: this is the link for the image directly, however I see no `mediaID` in the correct format?

```
http://distilleryimage11.ak.instagram.com/d33aafc8b55d11e2a66b22000a9f09de_7.jpg
```

while this is the link

```
http://instagram.com/p/Y7GF-5vftL/
```

I don't wish to use the API, as all I need is the MediaID from a selected image.
# Here's a better way:

```
http://api.instagram.com/oembed?url=http://instagram.com/p/Y7GF-5vftL/
```

Parse the response as a JSON object and you can easily extract the media id from it

---

**For instance, in PHP**

```
$api = file_get_contents("http://api.instagram.com/oembed?url=http://instagram.com/p/Y7GF-5vftL/");
$apiObj = json_decode($api,true);
$media_id = $apiObj['media_id'];
```

**For instance, in JS**

```
$.ajax({
    type: 'GET',
    url: 'http://api.instagram.com/oembed?callback=&url=http://instagram.com/p/Y7GF-5vftL/',
    cache: false,
    dataType: 'jsonp',
    success: function(data) {
        try{
            var media_id = data[0].media_id;
        }catch(err){}
    }
});
```
Will Terraform fail if the data does not exist?
Will Terraform fail if a user referenced in a data block does not exist? I need to specify a user in the nonproduction environment by the data block:

```
data "aws_iam_user" "labUser" {
  user_name = "gitlab_user"
}
```

Then I use this user when giving permissions:

```
resource "aws_iam_role" "ApiAccessRole_abc" {
  name = "${var.stack}-ApiAccessRole_abc"
  tags = "${var.tags}"
  assume_role_policy = <<EOF
{
  "Version": "2019-11-29",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": [
          "${aws_iam_user.labUser.arn}"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
```

In the production environment this user does not exist. Would Terraform break if this user does not exist? What would be a good approach to using the same Terraform configuration in both environments?
In Terraform a `data` block like you showed here is both a mechanism to fetch data and also an assertion by the author (you) that a particular external object is expected to exist in order for this configuration to be applyable. In your case then, the answer is to ensure that the assertion that the object exists only appears in situations where it should exist. The "big picture" answer to this is to review the [Module Composition](https://www.terraform.io/docs/language/modules/develop/composition.html) guide and consider whether this part of your module ought to be decomposed into a separate module if it isn't *always* a part of the module it's embedded in, but I'll also show a smaller solution that uses conditional expressions to get the behavior you wanted without any refactoring: ``` variable "lab_user" { type = string default = null } data "aws_iam_user" "lab_user" { count = length(var.lab_user[*]) user_name = var.lab_user } resource "aws_iam_role" "api_access_role_abc" { count = length(data.aws_iam_user.lab_user) name = "${var.stack}-ApiAccessRole_abc" tags = var.tags assume_role_policy = jsonencode({ Version = "2019-11-29" Statement = [ { Sid = "" Action = "sts:AssumeRole" Effect = "Allow" Principal = { AWS = [data.aws_iam_user.lab_user[count.index].arn] } }, ] }) } ``` There's a few different things in the above that I want to draw attention to: - I made the lab username an optional variable rather than a hard-coded value. You can than change the behavior between your environments by assigning a different value to that `lab_user` variable, or leaving it unset altogether for environments that don't need a "lab user". - In the `data "aws_iam_user"` I set count to `length(var.lab_user[*])`. The `[*]` operator here is asking Terraform to translate the possibly-null string variable `var.lab_user` into a list of either zero or one elements, and then using the length of that list to decide how many `aws_iam_user` queries to make. If `var.lab_user` is `null` then the length will be zero and so no queries will be made. - Finally, I set the `count` for the `aws_iam_role` resource to match the length of the `aws_iam_user` data result, so that in any situation where there's one user expected there will also be one role created. If you reflect on the Module Composition guide and conclude that this lab user ought to be a separate concern in a separate module then you'd be able to remove this conditional complexity from the "gitlab user" module itself and instead have the calling module either call that module or not depending on whether such a user is needed for that environment. The effect would be the same, but the decision would be happening in a different part of the configuration and thus it would achieve a different separation of concerns. Which separation of concerns is most appropriate for your system is, in the end, a tradeoff you'll need to make for yourself based on your knowledge of the system and how you expect it might evolve in future.
Golang API response 400
I can't see where my error is. Every time I try to run it I get nothing; when I print some of the key variables I get this:

**print longURL**

> 
> <http://www.example.com>
> 

**print &output**

> 
> &{400 Bad Request 400 HTTP/1.1 1 1 map[X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[1; mode=block] Server:[GSE] Alternate-Protocol:[443:quic] Content-Type:[application/json; charset=UTF-8] Date:[Thu, 12 Jun 2014 02:10:33 GMT] Expires:[Thu, 12 Jun 2014 02:10:33 GMT] Cache-Control:[private, max-age=0] X-Content-Type-Options:[nosniff]] 0xc2100fe940 -1 [chunked] false map[] 0xc2100581a0}
> 

```
// c0de urlShort
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "os"
)

type apiResponse struct {
    Id, Kind, LongURL string
}

func main() {
    longURL := os.Args[len(os.Args)-1]

    body := bytes.NewBufferString(fmt.Sprintf(
        `{"longURL":"%s"}`, longURL))

    request, err := http.NewRequest(
        "POST",
        "https://www.googleapis.com/urlshortener/v1/url",
        body)
    request.Header.Add("Content-Type", "application/json")

    client := http.Client{}
    response, err := client.Do(request)
    if err != nil {
        log.Fatal(err)
    }

    outputAsBytes, err := ioutil.ReadAll(response.Body)
    response.Body.Close()

    var output apiResponse
    err = json.Unmarshal(outputAsBytes, &output)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("%s", output.Id)
}
```
Instead of normal response, you are getting this: ``` { "error": { "errors": [ { "domain": "global", "reason": "required", "message": "Required", "locationType": "parameter", "location": "resource.longUrl" } ], "code": 400, "message": "Required" } } ``` It says that you are missing required parameter: longUrl. Notice that it's long **Url** not long **URL** This code works for me: ``` package main import ( "bytes" "encoding/json" "fmt" "io/ioutil" "log" "net/http" "os" ) type apiResponse struct { Id, Kind, LongURL string } func main() { longURL := os.Args[len(os.Args)-1] body := bytes.NewReader([]byte(fmt.Sprintf( `{"longUrl":"%s"}`, longURL))) request, err := http.NewRequest( "POST", "https://www.googleapis.com/urlshortener/v1/url", body) request.Header.Add("Content-Type", "application/json") client := http.Client{} response, err := client.Do(request) if err != nil { log.Fatal(err) } outputAsBytes, err := ioutil.ReadAll(response.Body) response.Body.Close() fmt.Println(string(outputAsBytes)) var output apiResponse err = json.Unmarshal(outputAsBytes, &output) if err != nil { log.Fatal(err) } fmt.Printf("%s", output) } ```
How to run multiple pipelines in Logstash using the Elastic Stack (ELK) powered by Docker and Compose
I am using [this\_repo](https://github.com/deviantony/docker-elk) to get started running ELK with Docker. My question is regarding the logstash image in the docker-compose file: when I run locally I have 3 files

```
#general settings
logstash.yml

#pipeline setting
pipeline.yml

#a pipe line configuration
myconf.conf1
```

When I want to use multiple pipelines I use the pipeline.yml file to control all the different pipelines I am running

```
# Example of my pipeline.yml

- pipeline.id: my-first-pipeline
  path.config: "/etc/logstash/my-first-pipeline.config"
  pipeline.workers: 2
- pipeline.id: super-swell-pipeline
  path.config: "/etc/logstash/super-swell-pipeline.config"
  queue.type: persisted
```

In the repo I am using as a guideline I can only find logstash.yml and I do not understand how I can add pipelines. The only running pipeline is the default "main", which by default only runs logstash.conf. I tried different configurations, but they all failed.

How can I add pipeline.yml to the Docker setup? Or what is the best practice for running multiple pipelines with this docker-compose file? I appreciate any help.

docker-compose/logstash from the repo:

```
logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind    #can be either a host path or volume name.
        source: ./logstash/pipeline    #is the container path where the volume is mounted
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
```

DockerFILE:

```
ARG ELK_VERSION

# https://www.docker.elastic.co/
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
```

logstash.yml

```
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

## X-Pack security credentials
#
xpack.monitoring.enabled: false
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
```
You need to mount your pipelines.yml file to the container as well. The default location where Logstash looks for a possible pipelines.yml file is `/usr/share/logstash/config/` (the same folder you've already mounted the logstash.yml file to).

Please note that you also have to update your current, local pipelines.yml file to the correct paths of the pipelines **inside the container**. To be precise, you need to change `path.config: "/etc/logstash/my-first-pipeline.config"` to `path.config: "/usr/share/logstash/pipeline/my-first-pipeline.config"`

Also, have a look at these official guides for running Logstash with Docker and how to configure multiple pipelines:

- <https://www.elastic.co/guide/en/logstash/current/docker-config.html#docker-config>
- <https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html>

I hope I could help you!

**EDIT:** The official documentation calls the file pipeline**s**.yml instead of pipeline.yml
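For illustration, the `volumes` section of the logstash service could gain one extra bind mount along these lines — the local `./logstash/config/pipelines.yml` path is only an assumption, adjust it to wherever you keep the file:

```
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/config/pipelines.yml
        target: /usr/share/logstash/config/pipelines.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
```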
Flutter/Dart how to groupBy list of maps
I have this list of maps.

```
[
  {title: 'Avengers', release_date: '10/01/2019'},
  {title: 'Creed', release_date: '10/01/2019'}
  {title: 'Jumanji', release_date: '30/10/2019'},
]
```

I would like to write code that groups the list of movies by `release_date`, like this:

```
[
   {
      "10/01/2019": [
         {
            "title": "Avengers"
         },
         {
            "title": "Creed"
         }
      ]
   },
   {
      "30/10/2019": [
         {
            "title": "Jumanji"
         }
      ]
   }
]
```
The package [collection](https://pub.dartlang.org/packages/collection) implements the `groupBy` function. For grouping by date: ``` import "package:collection/collection.dart"; main(List<String> args) { var data = [ {"title": 'Avengers', "release_date": '10/01/2019'}, {"title": 'Creed', "release_date": '10/01/2019'}, {"title": 'Jumanji', "release_date": '30/10/2019'}, ]; var newMap = groupBy(data, (Map obj) => obj['release_date']); print(newMap); } ``` For removing the `release_date` key from each map entry: ``` var newMap = groupBy(data, (Map obj) => obj['release_date']).map( (k, v) => MapEntry(k, v.map((item) { item.remove('release_date'); return item;}).toList())); ``` For changing a key: ``` var newMap = groupBy(data, (Map obj) => obj['release_date']).map( (k, v) => MapEntry(k, v.map((item) => {'name': item['title']}).toList())); ```
How to find the expectation $\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right]$?
Given the independent complex Gaussian random variables $h$ and $w$, how can one find the following expectation?

$$\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right] = \int\_{\mathbb{C}}\int\_{\mathbb{C}}{\frac{|h|^4}{|h+w|^2}} f(h)g(w)\,dh\,dw,$$

where $h \sim \mathcal{CN}\left(0,d\right)$ and $w \sim \mathcal{CN}\left(0,p\right)$. The pdfs of $h$ and $w$ are defined as

$$f(h) = \frac{1}{\pi d} \text{e}^{-\frac{|h|^2}{d}},$$

$$g(w) = \frac{1}{\pi p} \text{e}^{-\frac{|w|^2}{p}}.$$

I've tried to change the variables like $|h|^2 = r^2$ and $|w|^2 = n^2$; however, I could not apply this change of variables to the denominator $|h + w|^2$. Note that $h$ and $w$ are complex random variables that can be written in rectangular form as $h = a + i\*b$ and $w = c + i\*d$, where $i = \sqrt{-1}$.

$\textbf{UPDATE}$: After running some simulations, it seems that as $p \to \infty$ the expectation above tends to $d$. This can be checked with the following matlab simulation script: <https://pastebin.com/U48fcMZ9>
**The expectation is infinite.**

One way to see this is to condition on $H$. Preliminary changes of variable (merely involving rescaling $H$ and $W$ and then shifting to a new origin) reduce the conditional expectation to a positive constant times a two-dimensional integral of the form

$$\mathcal{I}(\lambda)=\iint\_{\mathbb{C}}\ \frac{1}{|z|^2} e^{-\lambda |z-1|^2}\ dz d\bar z$$

with $\lambda \gt 0.$ In polar coordinates $(r,\theta),$ $|z|^2 = r^2$ and $|z-1|^2 = r^2 - 2r\cos(\theta)+1,$ and the area element is $dzd\bar{z} = r dr d\theta,$ giving

$$\mathcal{I}(\lambda) = e^{-\lambda}\int\_0^{2\pi}d\theta \int\_0^\infty \frac{1}{r^2}e^{-\lambda(r^2 - 2r\cos\theta)}\ r\, dr.$$

For $0 \le r \le \sqrt{1 + 1/\lambda} -1 = u(\lambda)\gt 0,$ the expression in the exponent exceeds $-1,$ so we may underestimate this integral by replacing the exponential by $e^{-1}$ and limiting $r$ to this range:

$$\mathcal{I}(\lambda) \ge e^{-\lambda-1}\int\_0^{2\pi}d\theta \int\_0^{u(\lambda)}\frac{1}{r}dr = 2\pi e^{-\lambda-1} \lim\_{\epsilon\to 0} \int\_\epsilon^{u(\lambda)} \frac{dr}{r}\ \propto\ \lim\_{\epsilon\to 0}\log(u(\lambda)) - \log(\epsilon),$$

which diverges to $+\infty.$ Since all conditional expectations are infinite, the expectation must be infinite.

**A simulation bears this out.** For simplicity I chose $H$ and $W$ to have independent standard (complex) Normal distributions, generated twenty million realizations $(h,w),$ and computed the running mean of $|h|^4/|h+w|^2.$ The periodic large jumps are characteristic of a divergent expectation: no matter how far out you run this simulation, these jumps will recur (whenever a tiny value of $|w+h|$ is generated compared to $|h|^2$) and its mean will never converge.

[![enter image description here](https://i.stack.imgur.com/vj60D.png)](https://i.stack.imgur.com/vj60D.png)

This plot shows the running mean "Mean" as a function of the number of simulated values "N" for $n=10^4$ through $n=2\times 10^7.$ Colors highlight the largest jumps. Evidently one could be fooled by relying on a simulation to estimate the mean: notice how the purple segment from $N\approx 508,000$ to $N\approx 9,300,000$ seems to settle down--only to be followed by a large jump. This indicates that the simulation-based estimate depends entirely on when you choose to end the simulation.
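For readers who want to reproduce the effect without MATLAB, here is a minimal numpy sketch of the same experiment (unit-variance circularly-symmetric complex Gaussians, running mean of the ratio); sample sizes and seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000  # fewer draws than the twenty million above, just to keep it quick

# real and imaginary parts ~ N(0, 1/2), so |h|^2 and |w|^2 have mean 1
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

x = np.abs(h) ** 4 / np.abs(h + w) ** 2
running_mean = np.cumsum(x) / np.arange(1, n + 1)

# the running mean keeps jumping whenever |h + w| happens to be tiny,
# which is the signature of a divergent expectation
print(running_mean[[999, 9_999, 99_999, -1]])
```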
import jQuery-ui and jQuery with npm install
Somewhat new to npm here. I'd like to use jQuery-ui in my project. I'm used to importing both jQuery and jQuery-ui with a script tag in my HTML, but I would like to import both using npm install. I got jQuery working with:

```
import $ from 'jquery'
```

But whenever I try to import `jquery-ui` I get this error:

> 
> Uncaught ReferenceError: jQuery is not defined
> 

There are a lot of posts about this error, but all of them seem to be centered around people just putting their script tags for jQuery and jQuery-ui in the wrong order. I can't find anything on how to install both jQuery and jQuery-ui with npm install. Any ideas?
Just tried updating **jquery** (to 3.1.0) & **jquery-ui** (to 1.12.0) and got the very same error.

Newer versions of **jquery-ui** seem to require a global `jQuery` variable to initialize, or newer versions of **jquery** do not set the `jQuery` global variable by default anymore, hence the *Uncaught ReferenceError*.

A clear solution is to set `global.jQuery = require('jquery')` before importing **jquery-ui**. It does not seem to work for **browserify** though, since **browserify** prioritizes imports over other expressions (imports are placed on top of the browserified code even if they were placed after other expressions in the original code).

So if you're using **browserify**, try **jquery**@2.2.4 and **jquery-ui**@1.10.5, then import as:

```
import $ from 'jquery';
import 'jquery-ui';
```

Worked for me.
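For bundlers that evaluate `require` calls in source order (i.e. not browserify's hoisted ES imports), the `global.jQuery` workaround mentioned above might look like this — whether `global` or `window` is the right object depends on your bundler, so treat it as an assumption:

```
// plain CommonJS requires are evaluated in order, unlike hoisted ES imports
global.jQuery = require('jquery');
require('jquery-ui');
```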
Prevent a Google Maps iframe from capturing the mouse's scrolling wheel behavior
If you're scrolling a page that has an embedded maps iframe using your trackpad or mouse, you can potentially trap yourself in the map's zooming behavior, which is really annoying. Try it here: <https://developers.google.com/maps/documentation/embed/guide#overview>

Is there a way to prevent this?
This has been answered here => [Disable mouse scroll wheel zoom on embedded Google Maps](https://stackoverflow.com/questions/21992498/is-it-possible-to-disable-mouse-scroll-wheel-zoom-on-embedded-google-maps) by Bogdan.

What it does is disable mouse events on the iframe until you click on the map, at which point the mouse starts working again; if you move the mouse out of the map, it is disabled again.

**Note**: Does not work on IE < 11 (working fine on IE 11)

CSS:

```
<style>
    .scrolloff {
        pointer-events: none;
    }
</style>
```

Script:

```
<script>
    $(document).ready(function () {
        // you want to enable the pointer events only on click;

        $('#map_canvas1').addClass('scrolloff'); // set the pointer events to none on doc ready
        $('#canvas1').on('click', function () {
            $('#map_canvas1').removeClass('scrolloff'); // set the pointer events true on click
        });

        // you want to disable pointer events when the mouse leave the canvas area;

        $("#map_canvas1").mouseleave(function () {
            $('#map_canvas1').addClass('scrolloff'); // set the pointer events to none when mouse leaves the map area
        });
    });
</script>
```

HTML: (you just need to use the correct ids as defined in the CSS and script)

```
<section id="canvas1" class="map">
    <iframe id="map_canvas1" src="https://www.google.com/maps/embe...." width="1170" height="400" frameborder="0" style="border: 0"></iframe>
</section>
```
Comparing 2 arrays with NSPredicate (Cocoa)
I have an NSObject with 2 properties

```
@interface Entity : NSObject {
    NSNumber *nid;
    NSString *title;
}
```

I have 2 arrays with Entity objects in them and I want to compare those two on the nid with a predicate

```
array1: ({nid=1,title="test"},{nid=2,title="test2"})
array2: ({nid=2,title="test2"},{nid=3,title="test3"})
```

Both arrays have an nid with value 2, so my output should be

```
array3: ({nid=2,title="test2"})
```

so I can produce an array with only matching nids
The following code seems to work for me (it obviously leaks MyEntity objects, but that was not the point of the sample):

```
NSArray* array1 = [NSArray arrayWithObjects:[[MyEntity alloc] initWithID:[NSNumber numberWithInt:1] title:@"1"],
                   [[MyEntity alloc] initWithID:[NSNumber numberWithInt:2] title:@"2"], nil];
NSArray* array2 = [NSArray arrayWithObjects:[[MyEntity alloc] initWithID:[NSNumber numberWithInt:2] title:@"2"],
                   [[MyEntity alloc] initWithID:[NSNumber numberWithInt:3] title:@"3"], nil];

NSArray* idsArray = [array1 valueForKey:@"nid"];
NSArray* filteredArray = [array2 filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"nid IN %@", idsArray]];
```

`filteredArray` contains the entities whose ids are present in both arrays.
Rsync filter: copying one pattern only
I am trying to create a directory that will house all and only my PDFs compiled from LaTeX. I like keeping each project in a separate folder, all housed in a big folder called `LaTeX`. So I tried running:

```
rsync -avn *.pdf ~/LaTeX/ ~/Output/
```

which should find all the pdfs in `~/LaTeX/` and transfer them to the output folder. This doesn't work. It tells me it's found no matches for "`*.pdf`". If I leave out this filter, the command lists all the files in all the project folders under LaTeX. So it's a problem with the \*.pdf filter.

I tried replacing `~/` with the full path to my home directory, but that didn't have an effect.

I'm using zsh. I tried doing the same thing in bash and even *with* the filter that listed every single file in every subdirectory... What's going on here? Why isn't rsync understanding my PDF-only filter?

---

OK. So update: Now I'm trying

```
rsync -avn --include="*/" --include="*.pdf" LaTeX/ Output/
```

And this gives me the whole file list. I guess because everything matches the first pattern...
TL,DR: ``` rsync -am --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/ ``` --- Rsync copies the source(s) to the destination. If you pass `*.pdf` as sources, the shell expands this to the list of files with the `.pdf` extension in the current directory. No recursive traversal happens because you didn't pass any directory as a source. So you need to run `rsync -a ~/LaTeX/ ~/Output/`, but with a filter to tell rsync to copy `.pdf` files only. Rsync's filter rules can seem daunting when you read the manual, but you can construct many examples with just a few simple rules. - Inclusions and exclusions: - Excluding files by name or by location is easy: `--exclude=*~`, `--exclude=/some/relative/location` (relative to the source argument, e.g. this excludes `~/LaTeX/some/relative/location`). - If you only want to match a few files or locations, include them, *include every directory leading to them* (for example with `--include=*/`), then exclude the rest with `--exclude='*'`. This is because: - If you exclude a directory, this excludes everything below it. The excluded files won't be considered at all. - If you include a directory, this doesn't automatically include its contents. In recent versions, `--include='directory/***'` will do that. - For each file, the first matching rule applies (and anything never matched is included). - Patterns: - If a pattern doesn't contain a `/`, it applies to the file name sans directory. - If a pattern ends with `/`, it applies to directories only. - If a pattern starts with `/`, it applies to the whole path from the directory that was passed as an argument to `rsync`. - `*` any substring of a single directory component (i.e. never matches `/`); `**` matches any path substring. - If a source argument ends with a `/`, its contents are copied (`rsync -r a/ b` creates `b/foo` for every `a/foo`). Otherwise the directory itself is copied (`rsync -r a b` creates `b/a`). --- Thus here we need to include `*.pdf`, include directories containing them, and exclude everything else. ``` rsync -a --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/ ``` Note that this copies all directories, even the ones that contain no matching file or subdirectory containing one. This can be avoided with the `--prune-empty-dirs` option (it's not a universal solution since you then can't copy a directory even by matching it explicitly, but that's a rare requirement). ``` rsync -am --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/ ```
Spring Background / Fire and Forget processing
I am developing an application using Spring 4.1.6 and MongoDB. I would like to perform some of the tasks in fire-and-forget mode, e.g. once a method is accessed, an entry in a collection will be made. I don't want to wait until the write to the collection finishes, and if it fails I don't need any notification either. How can I achieve this using Spring?
You can do this without Spring, but with Spring I suggest using [@Async](https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#scheduling-annotation-support-async).

First you need to enable it. To do so, on a Configuration class:

```
@Configuration
@EnableAsync
public class AppConfig {
}
```

Then in a bean use @Async on the method you want to be executed asynchronously

```
@Component
public class MyComponent {
    @Async
    void doSomething() {
        // this will be executed asynchronously
    }
}
```

Your method can have parameters too:

```
@Component
public class MyComponent {
    @Async
    void doSomething(String s, int i, long l, Object o) {
        // this will be executed asynchronously
    }
}
```

In your case you don't need it, but the method can return a Future:

```
@Component
public class MyComponent {
    @Async
    Future<String> doSomething(String s, int i, long l, Object o) {
        // this will be executed asynchronously
        return new AsyncResult<>("result");
    }
}
```
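A hypothetical caller (the class and method names below are made up) just invokes the method and moves on — the main thing to keep in mind is that the call must go through the Spring proxy, i.e. come from another bean, not via `this`:

```
@Component
public class AuditedEndpoint {

    @Autowired
    private MyComponent myComponent;

    public void handleRequest() {
        // returns immediately; the write happens on another thread
        // and any exception there never propagates back to this caller
        myComponent.doSomething();
    }
}
```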
Perl system + split + array
My name is Luis, I live in Argentina. I have a problem which I can not solve.

```
**IN BASH**
pwd
/home/labs-perl
ls
file1.pl file2.pl

**IN PERL**
my $ls = exec("ls");
my @lsarray = split(/\s+/, $ls);
print "$lsarray[1]\n"; # how do I do this? I need the solution.

>> file.pl file1.pl file2.pl # but this is what I see in the shell.
```
The output you see is not from the print statement, it is the console output of `ls`. To get the `ls` output into a variable, use backticks:

```
my $ls = `ls`;
my @lsarray = split(/\s+/, $ls);
print "$lsarray[1]\n";
```

This is because `exec` does not return; the statements after it are not executed. From [perldoc](http://perldoc.perl.org/functions/exec.html):

> 
> The exec function executes a system command and never returns; use system instead of exec if you want it to return. It fails and returns false only if the command does not exist and it is executed directly instead of via your system's command shell
> 

But using the `system` command will not help you either, as it does not allow output capturing; hence the backticks.

However, using the `glob` function is better:

```
my @arr = glob("*");
print $arr[1], "\n";
```

Also, perl array indices start at 0, not 1. To get file1.pl you should use `print "$lsarray[0]\n"`.
If I have a pointer to a vector, does inserting invalidate it
Suppose I have the following code:

```
std::vector<std::string>* vs = new std::vector<std::string>;
vs->push_back(string("Hello"));
vs->push_back(string("World"));
```

Does doing this invalidate the pointer `vs`? Or more specifically,

```
void doSomething(std::vector<std::string>* vs) {
    vs->push_back(string("thisWasATriumph"));
    if (vs->size() < 3) {
        doSomething(vs);
    }
}

int main() {
    std::vector<std::string>* vs = new std::vector<std::string>;
    doSomething(vs);
    std::cout << vs->back() << endl;
}
```

Will the call to `doSomething(vs)` have the element in it that was previously inserted?
> 
> Does doing this invalidate the pointer `vs`?
> 

No, it doesn't invalidate the pointer to the vector itself (having such a pointer is pretty silly in most cases BTW1). You are probably confusing that with the invalidation of any `std::vector<>::iterator` values, as stated in the [documentation](http://en.cppreference.com/w/cpp/container/vector/push_back):

> 
> > 
> > If the new size() is greater than capacity() then all iterators and references (including the past-the-end iterator) are invalidated. Otherwise only the past-the-end iterator is invalidated.
> > 
> 

---

> 
> Will the call to doSomething(vs) have the element in it that was previously inserted?
> 

Yes.

---

It's probably better to simply use a reference, and not use `new` at all:

```
void doSomething(std::vector<std::string>& vs) {
                                      // ^
    vs.push_back(string("thisWasATriumph"));
    if (vs.size() < 3) {
        doSomething(vs);
    }
}
```

---

```
int main() {
    std::vector<std::string> vs; // = new std::vector<std::string>;
                                 //   ^^^^^^^^^^ No need for this
    doSomething(vs);
    std::cout << vs.back() << std::endl;
}
```

---

1) There's little to no advantage in dynamically allocating a `std::vector`. Just use a plain local or member variable with automatic memory allocation.
Min s-t cut in a network
I am trying to simulate a network of wireless sensor nodes in order to research the robustness of the network. I am faced with the following problem:

I have a network of nodes with some edge capacities. This is equivalent to something like the network flow problem in algorithms. There is a source node (which detects certain events) and a sink node (my base station). Now, I want to find the minimum s-t cut in the network so that the size of the source set is minimized. The source set here refers to the set of nodes separated by the min s-t cut that contains the source.

e.g. if the s-t cut is `C = {S,T}`, then there is a set of edges which can be removed to separate the network into two sets, `S` and `T`, where the set `S` contains the source and `T` contains the sink. The cut is minimum when the sum of capacities of the edges in the cut is minimum among all possible s-t cuts. There can be several such min-cuts. I need to find a min-cut that has the least number of elements in the set `S`.

Note that this is not the original problem, but I have tried to simplify it in order to express it in terms of algorithms.
I believe that you can solve this problem by finding a minimum cut in a graph with slightly modified constraints. The idea is as follows - since the cost of a cut is equal to the total capacity crossing the cut, we could try modifying the graph by adding in an extra edge from each node in the graph to t that has capacity one. Intuitively, this would mean that every node in the same part of the cut as s would contribute one extra cost to the total cost of the cut, because the edge from that node to t would cross the cut. Of course, this would definitely mess up the actual min-cut because of the extra capacity. To fix this, we apply the following transformation - first, multiply the capacities of the edges by n, where n is the number of nodes in the graph. Then add one to each edge. The intuition here is that by multiplying the edge capacities by n, we've made it so that the cost of the min-cut (ignoring the new edges from each node to t) will be n times the original cost of the cut. When we then add in the extra one-capacity edges from each node to t, the maximum possible contribution these edges can make to the cost of the cut is n - 1 (if every node in the graph except for t is on the same side as s). Thus, if the cost of the old min-cut was C, the cost of the new min-cut (S, V - S) is nC + |S|, where |S| is the number of nodes on the same side of the cut as s.

More formally, the construction is as follows. Given a directed, capacitated graph G and a (source, sink) pair (s, t), construct the graph G' by doing the following:

1. For each edge (u, v) in the graph, multiply its capacity by n.
2. For each node v in the graph, add a new edge (v, t) with capacity 1.
3. Compute a min s-t cut in the graph.

I claim that a min s-t cut in the graph G' corresponds to a min s-t cut in graph G with the fewest number of nodes on the same side of the cut as s.

The proof is as follows. Let (S, V - S) be a min s-t cut in G'. First, we need to show that (S, V - S) is a min s-t cut in G. This proof is by contradiction; assume for the sake of contradiction that there is an s-t cut (S', V - S') whose cost is lower than the cost of (S, V - S). Let the cost of (S', V - S') in G be C' and let the cost of (S, V - S) in G be C. Now, let's consider the cost of these two cuts in G'. By construction, the cost of C' would be nC' + |S'| (since each node on the S' side of the cut contributes one capacity across the cut) and the cost of C would be nC + |S|. Since we know that C' < C, we must have that C' + 1 ≤ C. Thus

> 
> nC + |S| ≥ n(C' + 1) + |S| = nC' + n + |S|
> 

Now, note that 0 ≤ |S| < n and 0 ≤ |S'| < n, because there can be at most n - 1 nodes on the same side of the cut as s. This means that

> 
> nC + |S| ≥ nC' + n + |S| > nC' + |S'| + |S| > nC' + |S'|
> 

But this means that the cost of (S, V - S) in G' is greater than the cost of (S', V - S') in G', contradicting the fact that (S, V - S) is a min s-t cut in G'. This allows us to conclude that any min s-t cut in G' is also a min s-t cut in G.

Now, we need to show that not only is a min s-t cut in G' also a min s-t cut in G, but it corresponds to a min s-t cut in G with the fewest number of nodes on the same side of the cut as s. Again, this proof is by contradiction; suppose that (S, V - S) is a min s-t cut in G' but that there is some min s-t cut in G with fewer nodes on the s side of the cut. Call this better cut (S', V - S').
Since (S, V - S) is a min s-t cut in G', it's also a min s-t cut in G, so the cost of (S', V - S') and (S, V - S) in G is some number C. Then the cost of (S, V - S) and (S', V - S') in G' will be nC + |S| and nC + |S'|, respectively. We know that nC + |S'| < nC + |S|, since we've chosen (S', V - S') to be an s-t min cut in G with the fewest number of nodes on the same side as s. But this means that (S', V - S') has a lower cost than (S, V - S), contradicting the fact that (S, V - S) is a min s-t cut in G'. Thus our assumption was wrong and (S, V - S) is a min s-t cut in G with the fewest number of nodes on the same side as s. This completes the correctness proof of the construction.

Hope this helps!
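To make the construction concrete, here is a small sketch in Python using networkx; the function name and the `"capacity"` edge attribute are my own choices, not something from the question:

```python
import networkx as nx

def min_cut_smallest_source_side(G, s, t):
    """Sketch of the reduction above: scale every capacity by n and add a
    unit-capacity edge from each node to t, then take an ordinary min s-t cut."""
    n = G.number_of_nodes()
    H = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        H.add_edge(u, v, capacity=n * data["capacity"])
    for v in G.nodes():
        if v == t:
            continue
        # merge the extra unit edge with any existing (v, t) edge
        existing = H[v][t]["capacity"] if H.has_edge(v, t) else 0
        H.add_edge(v, t, capacity=existing + 1)
    cut_value, (S, T) = nx.minimum_cut(H, s, t)
    return S, T  # S is a source side with as few nodes as possible
```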
HTTPoison Multipart Post Request to Spree API While attempting to post an image to Spree's [ProductImage API](http://guides.spreecommerce.org/api/product_images.html#create) using HTTPoison, it's failing with the Rails error `NoMethodError (undefined method 'permit' for #<ActionDispatch::Http::UploadedFile:0x007f94fa150040>)`. The Elixir code that I'm using to generate this request is: ``` def create() do data = [ {:file, "42757187_001_b4.jpeg", {"form-data", [{"name", "image[attachment]"}, {"filename", "42757187_001_b4.jpeg"}]}, [{"Content-Type", "image/jpeg"}] }, {"type", "image/jpeg"} ] HTTPoison.post!("http://localhost:3000/api/v1/products/1/images", {:multipart, data}, ["X-Spree-Token": "5d096ecb51c2a8357ed078ef2f6f7836b0148dbcc536dbfc", "Accept": "*/*"]) end ``` I can get this to work using Curl with the following call: ``` curl -i -X POST \ -H "X-Spree-Token: 5d096ecb51c2a8357ed078ef2f6f7836b0148dbcc536dbfc" \ -H "Content-Type: multipart/form-data" \ -F "image[attachment]=@42757187_001_b4.jpeg" \ -F "type=image/jpeg" \ http://localhost:3000/api/v1/products/1/images ``` For comparison, here's a RequestBin capture of both the failing HTTPoison request followed by the successful Curl request: <https://requestb.in/12et7bp1?inspect> What do I need to do in order to get HTTPoison to play nicely with this Rails API?
The `Content-Disposition` line [requires double quotes around the `name` and `filename` values](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition#Syntax). `curl` adds those automatically but Hackney passes the data you specify as-is, so you need to add the double quotes to the values yourself. This: ``` [{"name", "image[attachment]"}, {"filename", "42757187_001_b4.jpeg"}] ``` should be: ``` [{"name", ~s|"image[attachment]"|}, {"filename", ~s|"42757187_001_b4.jpeg"|}] ``` (I'm only using the `~s` sigil so that double quotes can be added without escaping them. `~s|""|` is exactly the same as `"\"\""`.)
Invalid security context token when using WCF with a Load Balancer (AWS)
I have a WCF application that is hosted on AWS. To achieve higher availability, I've created a snapshot of the WCF machine and launched another instance with this image. Also, I've created an Elastic Load Balancer (ELB) that routes requests to those 2 servers.

With my WCF client, I can successfully connect to both servers if I use the machines' public IP addresses. **But if I use the ELB hostname, my connection fails with the following error:**

> 
> System.ServiceModel.FaultException: The message could not be processed. This is most likely because the action '<http://tempuri.org/IService/GetCustomerData>' is incorrect or because the message contains an invalid or expired security context token or because there is a mismatch between bindings. The security context token would be invalid if the service aborted the channel due to inactivity. To prevent the service from aborting idle sessions prematurely increase the Receive timeout on the service endpoint's binding.
> 

The error suggests that I have an invalid or expired security token. So, I've checked the Send and Receive timeouts and they are already set to 10 minutes: `sendTimeout="00:10:00"`, `receiveTimeout="00:10:00"` (requests usually take 5-15 seconds)

My binding configuration:

```
<bindings>
  <wsHttpBinding>
    <binding name="wsHttpBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
        receiveTimeout="00:10:00" sendTimeout="00:10:00" allowCookies="false"
        bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
        useDefaultWebProxy="true">
      <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
      <security mode="TransportWithMessageCredential">
        <transport clientCredentialType="Basic"/>
        <message clientCredentialType="UserName"/>
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
```

Also, I've double checked:

- The ELB and the machines' firewalls are open on ports 80 and 443.
- My ELB was configured with the correct certificate and has a listener on port 443.
- I have an IIS Web Server on both machines. If I use the ELB address, everything works.
- If the ELB routes to one machine, WCF works. If it routes to two machines, WCF fails.
I've managed to solve this issue by adding the following parameter: `establishSecurityContext="false"`.

```
<bindings>
  <wsHttpBinding>
    <binding name="wsHttpBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
        receiveTimeout="00:10:00" sendTimeout="00:10:00" allowCookies="false"
        bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
        useDefaultWebProxy="true">
      <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
      <security mode="TransportWithMessageCredential">
        <transport clientCredentialType="Basic"/>
        <message clientCredentialType="UserName" establishSecurityContext="false"/> <!-- this line -->
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
```

Googling, I've learned that:

When this value is set to **false**, key exchange and validation must be done per call using asymmetric encryption. The default value of this parameter is **true**, which means that the first call will create a security context with asymmetric encryption, but it will be cached and further calls will use only symmetric encryption, which is faster.

Performance consideration: [Yaron Naveh](http://webservices20.blogspot.com.br/2009/01/wcf-performance-gearing-up-your-service.html)

When the client is expected to make many calls in succession, it is better to set this parameter to **true**, BUT with a Load Balancer, **calls are routed to different servers and this breaks the message due to an invalid token**. So, you must disable this Security Context feature.

Detailed explanation in the *Establishing Security Context* section: <https://msdn.microsoft.com/en-us/library/hh273122(v=vs.100).aspx>
Scalar::Util vs. ref function What's the difference between the built in `ref($object)` and `Scalar::Util` `blessed($object)`? Is one preferred over the other? ``` use strict; use warnings; use Scalar::Util qw(blessed isvstring); my $object = foo->new(); print "Object is a " . blessed($object) . "\n"; print "Object is a " . ref($object) . "\n"; my $version = 5.00.03; print "Version is a " . ref(\$version) . "\n"; if (isvstring($version)) { print "Version is a VSTRING\n"; } package foo; sub new { my $class = shift; my $self = {}; bless($self, $class); return $self; } ```
According to the POD, `blessed()` only works on blessed references (e.g. a reference passed to a `bless()` call). It returns `undef` on everything else, including hash/array refs where `ref()` returns `HASH`/`ARRAY` (and a bunch of other types as delineated in [perldoc ref](http://perldoc.perl.org/functions/ref.html)). To get the reference type you can, of course, call `Scalar::Util::reftype`.

As for whether one should be used over the other, I think it depends largely on what the logic is.

- If you **only** want to distinguish real blessed references from everything else, `blessed()` provides a more concise way than taking a `ref` and then verifying that the value is not one of the standard values returned for unblessed references.

```
my $ref_type = ref($my_ref);
print "USING REF: ";
if ( $ref_type
     && $ref_type ne ref({})
     && $ref_type ne ref([])
     && $ref_type ne "SCALAR"
     # Could also use a hash with all allowed values of ref() instead
     && $ref_type !~ /^(CODE|REF|GLOB|...)$/) {
    print "I am an object of class $ref_type\n";
} else {
    print "I'm a reference of type $ref_type\n";
}

# vs...

print "USING SCALAR_UTIL: ";
my $ref_type = blessed($my_ref);
print $ref_type
    ? "I am an object of class $ref_type\n"
    : "I am a reference of type " . reftype($my_ref) . "\n";
```

- If you need fine distinctions between both blessed references and different unblessed ones, then a single `ref()` call is more concise than a combination of `blessed` and `reftype`.
- One edge case where there's an actual functional difference between the two approaches, as noted in the comments by Eric Strom, is when someone creates a class which matches one of `ref()`'s [hardcoded values](http://perldoc.perl.org/functions/ref.html) (e.g. `bless [], 'HASH'` - in which case they are either Way Dumb or Way Too Clever By Half).

```
my $sssft = bless [], 'HASH'; # sssft = someone_should_suffer_for_this
ref_description_using_ref($sssft);
ref_description_using_scalar_util($sssft);

# OUTPUT:
USING REF: I'm a reference of type HASH
USING SCALAR_UTIL: I am an object of class HASH
```

DISCLAIMER: Based on the documentation, there should be no difference between the two when the argument is a reference blessed into a class (both return the class name). But I haven't checked the "Scalar::Util" source to confirm.
Is it safe to assume that the NULL constant is zero? The book **Understanding and Using C Pointers**, by Richard Reese says: > > The null concept is an abstraction supported by the null pointer > constant. This constant may or may not be a constant zero. A C > programmer need not be concerned with their actual internal > representation. > > > My question is, since "this constant may or may not be a constant zero," is it safe for me to do things like the below in my code: ``` int *ptr = NULL; // Some code which probably sets ptr to a valid memory address if(!ptr) { ERROR(); } ``` If NULL is not 0, there is a chance that the if clause will evaluate to true.
> 
> Is it safe to assume that the NULL constant is zero?
> 

`NULL` will compare equal to `0`.

`NULL` is *very* commonly a zero bit pattern. It is possible for `NULL` to be a non-zero bit pattern - but that is not seen these days.

---

OP is mixing at least 4 things: `NULL`, *null pointer constant*, *null pointer*, comparing a *null pointer* to 0. C does not define a *NULL constant*.

**`NULL`**

> 
> `NULL` is a macro "which expands to an implementation-defined null pointer constant" C17dr § 7.19 3
> 

***null pointer constant***

> 
> An integer constant expression with the value 0, or such an expression cast to type `void` \*, is called a *null pointer constant*. C17dr § § 6.3.2.3 3
> 

Thus the *type* of a *null pointer constant* may be `int`, `unsigned`, `long`, ... or `void *`. When it is an integer constant expression1, the *null pointer constant* *value* is 0. As a pointer like `((void *)0)`, its value/encoding is not specified. It ubiquitously does have the bit pattern of zeros, but is not specified so. There may be many *null pointer constants*. They all compare equal to each other.

Note: the *size* of a *null pointer constant*, when it is an integer, may differ from the size of an object pointer. This size difference is often avoided by appending an `L` suffix (or two) as needed.

***null pointer***

> 
> If a null pointer constant is converted to a pointer type, the resulting pointer, called a *null pointer*, is guaranteed to compare unequal to a pointer to any object or function. C17dr § § 6.3.2.3 3
> 

> 
> Conversion of a null pointer to another pointer type yields a null pointer of that type. Any two null pointers shall compare equal. C17dr § § 6.3.2.3 4
> 

The type of a *null pointer* is some pointer type, either an object pointer like `int *, char *`, a function pointer like `int (*)(int, int)`, or `void *`.

The *value* of a *null pointer* is not specified. It ubiquitously does have the bit pattern of zeros, but is not specified so.

All *null pointers* compare as equal, regardless of their encoding.

**comparing a *null pointer* to 0**

`if(!ptr)` is the same as `if(!(ptr != 0))`. When the pointer `ptr`, which is a *null pointer*, is compared to 0, the zero is converted to a pointer, a *null pointer* of the same type: `int *`. These 2 *null pointers*, which *could* have different bit patterns, compare as equal.

---

**So when is it not safe to assume that the NULL constant is zero?**

`NULL` may be a `((void*)0)` and its bit pattern may differ from all zeros. It does compare equal to 0 as above regardless of its encoding. Recall that pointer comparisons have been discussed, not integer comparisons. Converting `NULL` to an integer may not result in an integer value of 0 even if `((void*)0)` was all zero bits.

```
printf("%ju\n", (uintmax_t)(uintptr_t)NULL); // Possibly not 0
```

Notice this is converting a pointer to an integer, not the case of `if(!ptr)` where a 0 was converted to a pointer.

The C spec embraces many old ways of doing things and is open to novel new ones. I have never come across an implementation where `NULL` was not an all-zeros bit pattern. Given that much code exists that assumes `NULL` is all zero bits, I suspect only old obscure implementations ever used a non-zero bit-pattern `NULL` and that `NULL` is all but certain to be an all-zero bit pattern.

---

1 The *null pointer constant* is 1) an integer or 2) a `void*`. "When an integer ..." refers to the first case, not a cast or conversion of the second case as in `(int)((void*)0)`.
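For completeness, a tiny self-contained demonstration of the `if(!ptr)` case discussed above — nothing in it is implementation-specific:

```c
#include <stddef.h>
#include <stdio.h>

int main(void) {
    int *ptr = NULL;          /* a null pointer of type int * */

    if (!ptr)                 /* same as ptr == 0: the 0 becomes a null int * */
        puts("ptr compares equal to a null pointer");

    if (ptr == (void *)0)     /* another null pointer constant, same result */
        puts("and to ((void *)0) as well");

    return 0;
}
```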
Default selection in RadioButtonFor
I am generating the radio button list and then trying to select one option on load as below.

Foreach loop in View

```
@foreach (var myListItem in Model.MyList)
{
    @Html.RadioButtonFor(m => m.MyType,myListItem.MyType, new {id = myListItem.MyType, @Checked = (Model.MyTypeId == myListItem.MyTypeId) })
    @myListItem.MyType
}
```

Even though the HTML is generated correctly (refer below), the second option is checked instead of the 1st, even when `Model.MyTypeId = 0`.

Generated HTML for view

```
<input id="0" name="MyType" value="Option One" CHECKED="True" type="radio">Option One
<input id="1" name="MyType" value="Option Two " CHECKED="False" type="radio">Option Two
```

Please suggest how else I can select the desired radio button option by default.
The HTML isn't correct actually. You need to do something more along these lines: ``` @foreach (var myListItem in Model.MyList) { if (Model.MyTypeId == myListItem.MyTypeId) { @Html.RadioButtonFor(m => m.MyType,myListItem.MyType, new { id = myListItem.MyType, @Checked = "" }) } else { @Html.RadioButtonFor(m => m.MyType,myListItem.MyType, new { id = myListItem.MyType, }) } @myListItem.MyType } ``` Though I can't verify the **exact** output, it should look something like this: ``` <input id="0" name="MyType" value="Option One" CHECKED type="radio"> ``` You may have to use `null` to get it to generate the `CHECKED` without the `=""`, but that would be okay too. See, it's not the **value** that's recognized, it's the **attribute itself**, so that's why the **second one** is checked.
What are npm, bower, gulp, Yeoman, and grunt good for?
I'm a backend developer, and slightly confused by npm, bower, gulp, grunt, and Yeoman. Whenever I ask someone what their purpose is, the answer tends to boil down to dependency manager - for all of them. Surely, we don't need four different tools that all do the same thing?

Can someone please explain what each of these is good for **in as few sentences as possible** - if possible just one per tool, using language a five year old (with development skills) could understand? For example:

- **SVN** stores, manages, and keeps track of changes to our source code

I have used maven, Jenkins, nexus and ant in the past; maybe you could compare the tools above to these?

Also feel free to add other front-end tools to the list. Here is what I have found out so far - not sure it's correct, though:

- **bower** dependency manager for front-end development/JS libraries, uses a flat dependency list
- **npm** dependency manager for node.js server, can resolve transitive dependencies/dependency trees
- **grunt** runs tasks, much like Jenkins, but locally on the command line
- **Yeoman** provides scaffolding, i.e. skeleton projects
- **gulp** same as grunt, but written in js only
- **node.js** server for js apps?
- **git** decentralized SCM/VCS, counterpart to svn/cvs

Am I close? :)
You are close! Welcome to JavaScript :)

Let me give you a short description and one feature that most developers spend some time with.

**bower**

Focuses on packages that are used in the browser. Each `bower install <packagename>` points to exactly one file to be included (more can be downloaded). Due to the success of webpack, browserify and babel it's mostly obsolete as a first class dependency manager.

2018 Update: bower is mostly deprecated in favour of NPM

**npm**

Historically focuses on NodeJS code but has overthrown bower for browser modules. Don't let anyone fool you: NPM is huge. NPM also loads MANY files into your project and a fresh npm install is always a good reason to brew a new cup of coffee. NPM is easy to use but can break your app when changing environments due to the loose way of referencing versions and the arbitrariness of module publishing. Research [Shrink Wrap](https://docs.npmjs.com/cli/shrinkwrap "Shrink Wrap") and `npm install --save-exact`

2018 Update: NPM grew up! Lots of improvements regarding safety and reproducibility have been implemented.

**grunt**

Facilitates task automation. Gulp's older and somewhat more sluggish brother. The JavaScript community used to hang out with him in 2014 a lot. Grunt is already considered legacy in some places but there is still a great amount of really powerful automation to be found. Configuration can be a nightmare for larger use-cases. [There is a grunt module for that though.](https://github.com/firstandthird/load-grunt-config)

2018 Update: grunt is mostly obsolete. Easy-to-write webpack configurations have killed it.

**gulp**

Does the same thing as grunt but is faster.

**npm run-script**

You might not need task runners at all. NodeJS scripts are really easy to write, so most use cases allow for a customized task-automation workflow. Run scripts from the context of your package.json file using [npm run-script](https://docs.npmjs.com/cli/run-script)

**webpack**

Don't miss out on webpack. Especially if you feel lost on the many ways of writing JavaScript into coherent modular code. Webpack packages .js files into modules and does so splendidly. Webpack is highly extensible and offers a good development environment too: [webpack-dev-server](https://github.com/webpack/webpack-dev-server)

Use in conjunction with [babel](https://babeljs.io/repl/) for the best possible JavaScript experience to date.

**Yeoman**

Scaffolding. Extremely valuable for teams with different backgrounds as it provides a controllable common ground for your project's architecture. There even is a [scaffolding for scaffolds](https://github.com/yeoman/generator-generator).
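To illustrate the **npm run-script** point above, a minimal sketch of a `package.json` (the script names and the `build.js` file are made up for the example):

```
{
  "name": "my-app",
  "scripts": {
    "build": "node build.js",
    "watch": "node build.js --watch"
  }
}
```

You would then run `npm run build` or `npm run watch` from the project directory - no separate task runner required.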
Why do we sample from log space when optimizing learning rate + regularization params? Since I took [Karpathy's CS231n](https://www.youtube.com/playlist?list=PLlJy-eBtNFt6EuMxFYRiNRS07MCWN5UIA) I used the method he mentions on the 5th lecture for hyperparameter optimization of neural networks which samples the learning rate and regularization parameters from the log space randomly. It seems to work great from experience, but I never understood why that's the right thing to do from the lecture. I would appreciate an intuitive explanation about why the log space is where we sample from.
Hyperparameters such as learning rate and regularization term tend to be very small positive numbers. When we sample them, we would like to sample values from all the orders of magnitude in a given interval. Take the learning rate as an example. Let's say we decide to sample uniformly from 0 to 1, then only about 10% of the values would come from 0 to 0.1, and 90% of the values would come from 0.1 to 1. This does not seem to be appropriate because we would definitely like to sample values in the order of $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$,etc and all of these values fall under the first group that has only 10% chance of being selected. Instead, if we used a logarithmic scale to sample the values such as from -5 to 0, then values from $10^{-4}$, $10^{-3}$, $10^{-2}$, $10^{-1}$, $10^{0}$ all have equal chance of being selected. If what I've said does not make much sense, I would recommend watching <https://www.youtube.com/watch?v=cSoK_6Rkbfg&list=PLkDaE6sCZn6Hn0vK8co82zjQtt3T2Nkqc&index=25> .
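As a small illustration (plain NumPy, not code from the lecture), sampling the exponent uniformly and then exponentiating gives roughly equal counts per order of magnitude:

```
import numpy as np

rng = np.random.default_rng(0)

# uniform on [0, 1): ~90% of draws land in [0.1, 1)
uniform = rng.uniform(0.0, 1.0, size=100000)

# log-uniform: pick the exponent uniformly in [-5, 0], then exponentiate
exponents = rng.uniform(-5.0, 0.0, size=100000)
log_uniform = 10.0 ** exponents

# fraction of samples per order of magnitude (~0.20 each)
for lo, hi in [(1e-5, 1e-4), (1e-4, 1e-3), (1e-3, 1e-2), (1e-2, 1e-1), (1e-1, 1.0)]:
    frac = np.mean((log_uniform >= lo) & (log_uniform < hi))
    print(f"[{lo:.0e}, {hi:.0e}): {frac:.2f}")
```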
Spring Boot - How to disable @Cacheable during development?


I'm looking for 2 things:

1. How to disable all caching during development with the Spring Boot "dev" profile. There doesn't seem to be a general setting to turn it all off in application.properties. What's the easiest way?
2. How to disable caching for a specific method? I tried to use SpEL like this:

```
@Cacheable(value = "complex-calc", condition="#${spring.profiles.active} != 'dev'}")
public String someBigCalculation(String input){
...
}
```

But I can't get it to work.

There are a couple of questions on SO related to this, but they refer to XML config or other things, but I'm using Spring Boot 1.3.3 and this uses auto-configuration.

I don't want to over-complicate things.
The [David Newcomb comment](https://stackoverflow.com/questions/35917159/spring-boot-how-to-disable-cachable-during-development) tells the truth : > > `spring.cache.type=NONE` doesn't switch caching off, it prevents things > from being cached. i.e. it still adds 27 layers of AOP/interceptor > stack to your program, it's just that it doesn't do the caching. It > depends what he means by "turn it all off". > > > Using this option may fast up the application startup but could also have some overheads. **1)To disable completely the Spring Cache feature** Move the `@EnableCaching` class in a dedicated configuration class that we will wrap with a `@Profile` to enable it : ``` @Profile("!dev") @EnableCaching @Configuration public class CachingConfiguration {} ``` Of course if you already have a `Configuration` class that is enabled for all but the `dev` environment, just reuse it : ``` @Profile("!dev") //... any other annotation @EnableCaching @Configuration public class NoDevConfiguration {} ``` **2) Use a fake (noop) Cache manager** In some cases, activating `@EnableCaching` by profile is not enough because some of your classes or some Spring dependencies of your app expect to retrieve from the Spring container a bean implementing the `org.springframework.cache.CacheManager` interface. In this case, the right way is using a fake implementation that will allow Spring to resolve all dependencies while the implementation of the `CacheManager` is overhead free. We could achieve it by playing with `@Bean` and `@Profile` : ``` import org.springframework.cache.support.NoOpCacheManager; @Configuration public class CacheManagerConfiguration { @Bean @Profile("!dev") public CacheManager getRealCacheManager() { return new CaffeineCacheManager(); // or any other implementation // return new EhCacheCacheManager(); } @Bean @Profile("dev") public CacheManager getNoOpCacheManager() { return new NoOpCacheManager(); } } ``` Or if it is more suitable, you can add the `spring.cache.type=NONE` property that produces the same result as written in the M. Deinum answer.
Is it possible to use an http url as your source location of a Source Filter in DirectShow .Net? I'm using the DirectShow.Net Library to create a filter graph that streams video by using an http address and the WM Asf Writer. Then on the webpage I'm able to use the object element to render the video feed in a Windows Media Player object. So now I'm curious if it is possible to read from that http address with some type of FilterSource. I have seen that some people use the AsyncReader as an IBaseFilter, and then casting it as an IFileSourceFilter and calling the load method and passing it a url for their network. But I haven't been able to successfully do this with a url of "http://localhost:8080". I'm guessing this is because it's not an actual "file source". I have tried using a AMMediaType with a majorType of MediaType.URLStream and subType of MediaSubType.Asf in the IFileSourceFilter Load method, but still no luck. If someone could help me figure this out I would figuratively kiss them, seeing that I have been working on this for some time now. Please oh please help me. In my code I'm creating the FilterGraph and CaptureGraph accordingly. Then creating an AsyncReader instance and casting it as an IBaseFilter. Next I cast it as an IFileSourceFilter and call the Load method passing it the "http://localhost:8080" url. Then add it to the FilterGraph. Then I create the video render filter and add it, but when I try to call the RenderStream method of the CaptureGraphBuilder2 object it throws an "Unspecified Error" exception. Here is what I have for code... ``` var fGraph = new FilterGraph() as IFilterGraph2; var cGraph = new CaptureGraphBuilder2() as ICaptureGraphBuilder2; cGraph.SetFiltergraph(fGraph); var tmp = new AsyncReader() as IBaseFilter; // This is where I tried to load it with a media type. //media = new AMMediaType { majorType = MediaType.URLStream, subType = MediaSubType.Asf }; //((IFileSourceFilter)tmp).Load(_streamingURL, media); //DsUtils.FreeAMMediaType(media); ((IFileSourceFilter)tmp).Load(_streamingURL, null); hr = fGraph.AddFilter(tmp, "SourceFilter"); DsError.ThrowExceptionForHR(hr); var vRender = new VideoRenderer() as IBaseFilter; var aRender = new AudioRender() as IBaseFilter; hr = fGraph.AddFilter(vRender, "vRenderer"); DsError.ThrowExceptionForHR(hr); hr = cGraph.RenderStream(null, MediaType.Video, tmp, null, vRender); // This is where it throws an "Unspecified Error". DsError.ThrowExceptionForHR(hr); hr = fGraph.AddFilter(aRender, "aRenderer"); DsError.ThrowExceptionForHR(hr); hr = cGraph.RenderStream(null, MediaType.Audio, tmp, null, aRender); DsError.ThrowExceptionForHR(hr); var mcx = fGraph as IMediaControl; hr = mcx.Run(); DsError.ThrowExceptionForHR(hr); ``` So if you have any advice for me I would greatly appreciate it. Thanks again for all your help.
After some more research I was able to find some information that helped me solve my issue.

Here's the graph that adds a source filter with an http url as its source and then renders the stream to a video renderer filter and an audio render filter.

```
var fGraph = new FilterGraph() as IFilterGraph2;
var cGraph = new CaptureGraphBuilder2() as ICaptureGraphBuilder2;
cGraph.SetFiltergraph(fGraph);

IBaseFilter sourceFilter = null;
hr = fGraph.AddSourceFilter(@"http://localhost:8080/tempStreaming.asf", "SourceFilter", out sourceFilter);
DsError.ThrowExceptionForHR(hr);

var vRender = new VideoRenderer() as IBaseFilter;
var aRender = new AudioRender() as IBaseFilter;

hr = fGraph.AddFilter(vRender, "vRenderer");
DsError.ThrowExceptionForHR(hr);

hr = cGraph.RenderStream(null, MediaType.Video, sourceFilter, null, vRender);
DsError.ThrowExceptionForHR(hr);

hr = fGraph.AddFilter(aRender, "aRenderer");
DsError.ThrowExceptionForHR(hr);

hr = cGraph.RenderStream(null, MediaType.Audio, sourceFilter, null, aRender);
DsError.ThrowExceptionForHR(hr);

var mcx = fGraph as IMediaControl;
hr = mcx.Run();
DsError.ThrowExceptionForHR(hr);
```

The tempStreaming.asf file is created using a WM Asf Writer filter set up with a network sink. In case you need an example of how to do this, it's in the WindowsMediaLib .Net Framework samples as the AsfNet project.

Hope this helps if you come across the same issue.
Bad Resolution Image taken with getUserMedia() Javascript


I wanted to take screenshots from a mobile phone camera using the javascript getUserMedia function, but the resolution is very bad.

```
if (navigator.mediaDevices) {
// access the web cam
navigator.mediaDevices.getUserMedia({
  video: {
    width: {
      min: 1280,
    },
    height: {
      min: 720,
    },
    facingMode: {
      exact: 'environment'
    }
  }
}).then(function(stream) {
    video.srcObject = stream;
    video.addEventListener('click', takeSnapshot);
  })
  .catch(function(error) {
    document.body.textContent = 'Could not access the camera. Error: ' + error.name;
  });
}

var video = document.querySelector('video'), canvas;

function takeSnapshot(){

 var img = document.createElement('img');
 var context;
 var width = video.offsetWidth, height = video.offsetHeight;

 var canvas = document.createElement('canvas');
 canvas.width = width;
 canvas.height = height;

 context = canvas.getContext('2d');
 context.webkitImageSmoothingEnabled = false;
 context.mozImageSmoothingEnabled = false;
 context.imageSmoothingEnabled = false;
 context.drawImage(video, 0, 0, width, height);

 img.src = canvas.toDataURL('image/jpeg');

}
```

No error codes, but the resolution is not good; I cannot read the text in the photo. Is there a method to get the real image quality from the camera?
**MediaCapture** This is what you are using via getUserMedia. If you have a camera which allows a 1920x1080, 1280x720, and 640x480 resolutions only, the browser implementation of Media Capture can emulate a 480x640 feed from the 1280x720 (see [MediaStream](https://www.w3.org/TR/mediacapture-streams/#mediastream)). From testing (primarily Chrome) the browser typically scales 720 down to 640 and then crops the center. Sometimes when I have used virtual camera software I see Chrome has added artificial black padding around a non supported resolution. The client sees a success message and a feed of the right dimensions but a person would see a qualitative degradation. Because of this emulation you cannot guarantee the feed is correct or not scaled. However it will typically have the correct dimensions requested. You can read about constraints [here](https://www.w3.org/TR/mediacapture-streams/#constrainable-interface). It basically boils down to: Give me a resolution as close to x. Then the browser determines by its own implementation to reject the constraints and throw an error, get the resolution, or emulate the resolution. More information of this design is detailed in the mediacapture specification. Especially: > > The RTCPeerConnection is an interesting object because it acts > simultaneously as both a sink and a source for over-the-network > streams. As a sink, it has source transformational capabilities (e.g., > lowering bit-rates, scaling-up / down resolutions, and adjusting > frame-rates), and as a source it could have its own settings changed > by a track source. > > > The main reason for this is allowing n clients to have access to the same media source but may require different resolutions, bit rate, etc, thus emulation/scaling/transforming attempts to solve this problem. A negative to this is that you never truly know what the source resolution is. **ImageCapture** This is potentially your solution. If 60FPS video isn't a hard requirement and you have leway on compatibility you can poll [ImageCapture](https://www.w3.org/TR/image-capture/) to emulate a camera and receive a much clearer image from the camera. You would have to check for clientside support and then potentially fallback on MediaCapture. The API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots (MediaCapture rendered to a Canvas), which are lower resolution than that available for still images. <https://developers.google.com/web/updates/2016/12/imagecapture> and its polyfill: <https://github.com/GoogleChromeLabs/imagecapture-polyfill>
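A minimal sketch of the ImageCapture path (the API names are from the spec and polyfill linked above; error handling and the fallback to canvas snapshots are left out):

```
navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then(function (stream) {
    var track = stream.getVideoTracks()[0];
    var imageCapture = new ImageCapture(track);
    // takePhoto() uses the camera's still-image pipeline, so the result
    // is not limited to the (possibly scaled) video feed resolution.
    return imageCapture.takePhoto();
  })
  .then(function (blob) {
    var img = document.createElement('img');
    img.src = URL.createObjectURL(blob);
    document.body.appendChild(img);
  })
  .catch(function (error) {
    console.error('ImageCapture failed, fall back to a canvas snapshot:', error);
  });
```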
How can Autofac and an Options pattern be used in a .NET 5 console application? I am trying to use an option pattern with Autofac and every attempt has just resulted in errors. What I've tried: 1. Using the ConfigurationBuilder to retrieve an IConfiguration/IConfigurationRoot. 2. Register an instance of `TestSectionOptions` using the IConfiguration/IConfigurationRoot that was created before: `builder.Register(c => config.GetSection("TestSection").Get<TestSectionOptions>());` 3. Trying to inject it via constructor injection: ``` private readonly TestSectionOptions _options; public DemoClass(IOptions<TestSectionOptions> options) { _options = options.Value; } ``` I'm getting following error: > > DependencyResolutionException: None of the constructors found with > 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type > 'DemoApp.DemoClass' can be invoked with the available services and parameters: > Cannot resolve parameter > 'Microsoft.Extensions.Options.IOptions`1[DemoApp.TestSectionOptions] options' of constructor 'Void .ctor(Microsoft.Extensions.Options.IOptions`1 > > > Of course I tried other types of registration, but none of them worked. I also know that I can simply bind the configuration file to a class, which I then register and inject without the IOptions<> part. But that would no longer correspond exactly to the option pattern, would it? Even if it doesn't make a big difference, I'd still like to know why it doesn't work and how I could get it to work.
The problem is that this IOptions type should be registered somewhere. You can see e.g. [this article](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options?view=aspnetcore-5.0). There is an example:

```
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<PositionOptions>(Configuration.GetSection(
                                        PositionOptions.Position));
    services.AddRazorPages();
}
```

So, somewhere inside the Configure extension method it registers types for options, among others IOptions<>. So, in your case you either have to do this explicitly, like

```
builder.Register(c => Options.Create(config.GetSection("TestSection").Get<TestSectionOptions>()))
```

This will register IOptions. Or, you can create an empty service collection, then call the Configure method on it, and then copy all registrations to the autofac builder - there is the Populate method from the package "Autofac.Extensions.DependencyInjection" <https://autofac.org/apidoc/html/B3162450.htm>
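A rough sketch of that Populate route (the `DemoClass` and `TestSectionOptions` names are taken from the question, and `config` is assumed to be the IConfigurationRoot you already built; exact package versions may differ):

```
// requires Autofac.Extensions.DependencyInjection and
// Microsoft.Extensions.Options.ConfigurationExtensions
var services = new ServiceCollection();
services.AddOptions();
services.Configure<TestSectionOptions>(config.GetSection("TestSection"));

var builder = new ContainerBuilder();
builder.Populate(services);          // copies the IOptions<> registrations into Autofac
builder.RegisterType<DemoClass>();

var container = builder.Build();
var demo = container.Resolve<DemoClass>();   // IOptions<TestSectionOptions> is now resolvable
```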
Text in PDF turns gibberish on copying but displays fine We're a small group that is promoting the spread of Unicode in India (here legacy encodings are deeply entrenched). But I have a problem when I convert a document in unicode text in any Indic language to PDF format. The text displays as intended, but on copy pasting the content partially turns gibberish. I am using inDesign CC for typesetting on a Win 7. I can export to epub format just fine. But the exported PDF has this problem. I also tried printing to Adobe PDF printer and PrimoPDF, it only got worse. On checking out PDF's on the internet, turns out this problem exists in all such unicode encoded Indic PDF (and probably all East Asian complex scripts). Is that a problem in the PDF specs? Check out the PDF here <http://www.rajbhasha.nic.in/pdf/dolebook-4.pdf> Copy any text and match with the original, you'll see characters are replaced by other characters, unnecessary white space has crept in. Now we're promoting unicode on grounds that it'll make copy-pasting and searching/indexing easier. This problem totally destroys that. Any ideas?
I decompressed the pdf with [`mutool clean`](http://mupdf.com/) and had a look at it. The problem seems to be that, as described in [this](https://stackoverflow.com/questions/128162/unicode-in-pdf) stackoverflow question, it's difficult to use unicode encoding for the fonts. For this reason, the fonts that the PDF contains use a different encoding. However, it also contains `/ToUnicode` objects for each font with a complicated mapping from the font glyphs to the unicode characters.

Now many PDF viewers (like e.g. `xpdf` on Linux) don't seem to pay attention to this complicated mapping (or at least not to a mapping with such complexity, though they may work on simpler mappings), which is why you get garbage when trying to copy and paste. However, with other PDF viewers (like [`mupdf`](http://mupdf.com/)) it works, as I've confirmed.

So the problem is located in the PDF viewer, not in the document. Also, PDFs and unicode don't go together that well, as you can see from the complicated means necessary to do the translation.

Possible solutions: (1) pressure the developers of PDF viewers to fully support `/ToUnicode` mappings. Maybe fix them yourself for open source ones. (2) Promote the usage of a particular PDF viewer that works with the mappings. (3) Try to use fonts inside the PDF where the glyph encoding matches the unicode encoding. This seems possible with 16-bit unicode codepoints (and the Indian characters seem to be 16-bit as far as I can tell), but I don't know how well this will work, or which application you should use to produce such PDFs.
NUnit's CollectionAssert returns false for similar lists of custom class


Here is my class:

```
public class MyClass
{
    public string Name { get; set; }
    public string FaminlyName { get; set; }
    public int Phone { get; set; }
}
```

Then I have two similar lists:

```
List<MyClass> list1 = new List<MyClass>()
{
    new MyClass() {FaminlyName = "Smith", Name = "Arya", Phone = 0123},
    new MyClass() {FaminlyName = "Jahani", Name = "Shad", Phone = 0123}
};

List<MyClass> list2 = new List<MyClass>()
{
    new MyClass() {FaminlyName = "Smith", Name = "Arya", Phone = 0123},
    new MyClass() {FaminlyName = "Jahani", Name = "Shad", Phone = 0123}
};
```

The problem is that NUnit's CollectionAssert always returns false.

```
CollectionAssert.AreEqual(list1,list2);
```

Am I missing something about the CollectionAssert test?
The `AreEqual` checks for equality of the objects. Since you did not override the `Equals` method, it will return `false` in case the *references* are not equal. You can solve this by overriding the `Equals` method of your `MyClass`: ``` public class MyClass { public string Name { get; set; } public string FaminlyName { get; set; } public int Phone { get; set; } public override bool Equals (object obj) { MyClass mobj = obj as MyClass; return mobj != null && Object.Equals(this.Name,mobj.Name) && Object.Equals(this.FaminlyName,mobj.FaminlyName) && Object.Equals(this.Phone,mobj.Phone); } } ``` You furthermore better override the `GetHashCode` method as well: ``` public class MyClass { public string Name { get; set; } public string FaminlyName { get; set; } public int Phone { get; set; } public override bool Equals (object obj) { MyClass mobj = obj as MyClass; return mobj != null && Object.Equals(this.Name,mobj.Name) && Object.Equals(this.FaminlyName,mobj.FaminlyName) && Object.Equals(this.Phone,mobj.Phone); } public override int GetHashCode () { int hc = 0x00; hc ^= (this.Name != null) ? this.Name.GetHashCode() : 0; hc ^= (this.FaminlyName != null) ? this.FaminlyName.GetHashCode() : 0; hc ^= this.Phone.GetHashCode(); return hc; } } ```
Easy way to test nested model forms with Cucumber? I've got a nested model for Departments within a Hospitals form. The code snippet looks like this: ``` <%= f.simple_fields_for :hospital do |h| %> . . . <%= h.simple_fields_for :departments do |builder| %> <%= render "department_fields", :f => builder %> <% end %> . . <% end %> ``` The \_department\_fields partial looks like this: ``` <div class="fields"> <%= f.input :name %> <%= link_to_remove_fields "remove", f %> </span> </div> ``` As a result, the bottom of the form has a place for the user to enter up to three department names. I am using Rails 3, Cucumber, Capybara and Selenium for integration testing. Is there a simple way to fill in the repeating fields when testing this form in Cucumber? Ideally, I'd like to be able to write my feature like this: ``` And I fill in the first "Name" with "Cardiology" And I fill in the second "Name" with "Radiology" ``` Is there some way to easily approximate this in Cucumber/Capybara? Has someone already figured out some steps to address this?
Capybara has a way to deal with this by using within. For example: ``` And I fill in "Name" with "Radiology" within "fields" ``` For example, if you have divs with ids of 'hospital\_fields' and 'department\_fields' around both form areas, you could do something like this to differentiate which field you are filling in: ``` And I fill in "Name" with "Cardiology" within "hospital_fields" And I fill in "Name" with "Radiology" within "department_fields" ``` You can also be more specific by using the text field id instead of the field label name. For example, if the first text field has an id of 'hospital\_name' and the second has an id of 'hospital\_department\_name' you could do the following: ``` And I fill in "hospital_name" with "Cardiology" And I fill in "hospital_deparment_name" with "Radiology" ``` **UPDATE**: You can also add a custom cucumber step to use numbered inputs: ``` When /^(?:|I )fill in the (\d+)(?:st|nd|rd|th) "([^"]*)" with "([^"]*)"$/ do |num, name, value| find(:xpath, ".//form/input[@name='#{name}'][#{num}]").set(value) end And I fill in the 1st "name" with "Cardiology" And I fill in the 2nd "name" with "Radiology" ``` **UPDATE**: To match with the label use this expression: ``` find(:xpath, ".//input[@id=//label[contains(.,'#{name}')]/@for][#{num}]").set(value) ```
Microsoft Edge blocked cross-domain requests sent to IPs in same private network CIDR


```
$.ajax({
    url: "http://10.13.22.150/req_path",
    success: function(result){
        console.log(result);
    }
});
```

I'd like to send cross-domain XMLHttpRequests to IP addresses in the private network. However, the following error is shown in the developer tools console:

> SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete the operation due to error 00002efd.

According to Wireshark, the packet is not sent from the client side. I guess the request is blocked by Microsoft Edge.

Furthermore, I found that the requests are only blocked if the URL of the XMLHttpRequest and the Edge client are in the same private network CIDR.

```
Client IP          Request URL              Result
192.168.x.x        send to 192.168.x.x  ->>>>>  X
10.13.x.x          send to 10.13.x.x    ->>>>>  X
10.13.x.x          send to 192.168.x.x  ->>>>>  O
```

Other browsers like IE11 / Chrome / Firefox work just fine. This only happens in Microsoft Edge.

Is there any workaround or solution for this issue?
From [Understanding Enhanced Protected Mode](http://blogs.msdn.com/b/ieinternals/archive/2012/03/23/understanding-ie10-enhanced-protected-mode-network-security-addons-cookies-metro-desktop.aspx) > > **Private Network resources** > > > Because EPM does not declare the privateNetworkClientServer capability, your Intranet resources are protected from many types of cross-zone attacks (usually called “Cross-Site-Request-Forgery (CSRF)” and “Intranet Port Scanning.”) Internet pages are not able to frame Intranet pages, load images or resources from them, send them CORS XHR requests, etc. > > > All of the above seems to apply to MS Edge. The only thing Edge is lacking (at least at this point, v20.10240) is the security zone settings. My issue wasn't with the XMLHttpRequest but rather with trying to load an intranet page in an iframe inside the internet page. The workaround involved chanding my network setup - see <https://stackoverflow.com/a/32828629>
Python: breaking a list into all possible sublists Lets assume I've got a list of integers: ``` mylist = [101, 102, 103, 104, 105, 106] ``` Now I need to create every possible sublist division (order preserved): ``` sublists = [([101], [102, 103, 104, 105, 106]), ([101, 102], [103, 104, 105, 106]), ([101, 102, 103], [104, 105, 106]), ... ([101, 102], [103, 104], [105, 106]), ... ([101], [102, 103, 104], [105], [106]), ... ([101], [102], [103], [104], [105], [106])] ``` Any idea? Would `itertools` be helpful?
You are creating *slice points*; are you slicing after the current element or not. You can generate these with booleans: ``` from itertools import product def sublists(lst): for doslice in product([True, False], repeat=len(lst) - 1): slices = [] start = 0 for i, slicehere in enumerate(doslice, 1): if slicehere: slices.append(lst[start:i]) start = i slices.append(lst[start:]) yield slices ``` Demo: ``` >>> from pprint import pprint >>> mylist = [101, 102, 103, 104, 105, 106] >>> pprint(list(sublists(mylist))) [[[101], [102], [103], [104], [105], [106]], [[101], [102], [103], [104], [105, 106]], [[101], [102], [103], [104, 105], [106]], [[101], [102], [103], [104, 105, 106]], [[101], [102], [103, 104], [105], [106]], [[101], [102], [103, 104], [105, 106]], [[101], [102], [103, 104, 105], [106]], [[101], [102], [103, 104, 105, 106]], [[101], [102, 103], [104], [105], [106]], [[101], [102, 103], [104], [105, 106]], [[101], [102, 103], [104, 105], [106]], [[101], [102, 103], [104, 105, 106]], [[101], [102, 103, 104], [105], [106]], [[101], [102, 103, 104], [105, 106]], [[101], [102, 103, 104, 105], [106]], [[101], [102, 103, 104, 105, 106]], [[101, 102], [103], [104], [105], [106]], [[101, 102], [103], [104], [105, 106]], [[101, 102], [103], [104, 105], [106]], [[101, 102], [103], [104, 105, 106]], [[101, 102], [103, 104], [105], [106]], [[101, 102], [103, 104], [105, 106]], [[101, 102], [103, 104, 105], [106]], [[101, 102], [103, 104, 105, 106]], [[101, 102, 103], [104], [105], [106]], [[101, 102, 103], [104], [105, 106]], [[101, 102, 103], [104, 105], [106]], [[101, 102, 103], [104, 105, 106]], [[101, 102, 103, 104], [105], [106]], [[101, 102, 103, 104], [105, 106]], [[101, 102, 103, 104, 105], [106]], [[101, 102, 103, 104, 105, 106]]] ``` If you want to drop the last entry (containing a list with only one list in it, in turn containing all elements), replace the last 2 lines with: ``` if start: slices.append(lst[start:]) yield slices ```
pdfbox: trying to decrypt PDF Following [this answer](https://stackoverflow.com/questions/14700241/remove-encryption-from-pdf-with-pdfbox-like-qpdf) I'm trying to decrypt a pdf-document with pdfbox: ``` PDDocument pd = PDDocument.load(path); if(pd.isEncrypted()){ try { pd.decrypt(""); pd.setAllSecurityToBeRemoved(true); } catch (Exception e) { throw new Exception("The document is encrypted, and we can't decrypt it."); } ``` This leads to ``` Exception in thread "main" java.lang.NoClassDefFoundError: org/bouncycastle/jce/provider/BouncyCastleProvider at org.apache.pdfbox.pdmodel.PDDocument.openProtection(PDDocument.java:1601) at org.apache.pdfbox.pdmodel.PDDocument.decrypt(PDDocument.java:948) ... Caused by: java.lang.ClassNotFoundException: org.bouncycastle.jce.provider.BouncyCastleProvider ... ``` The path is correct, so I don't know what's going on. Furthermore, if I have a look at the PDDocument.decrypt(String pw) method, I find this: *This will decrypt a document. This method is provided for compatibility reasons only. User should use the new security layer instead and the openProtection method especially.* What does it mean? Could someone give an example how to decrypt a pdf-document correctly with pdfbox?
See the dependency list: <https://pdfbox.apache.org/1.8/dependencies.html> You need to use the bouncycastle libraries. ``` <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcprov-jdk15</artifactId> <version>1.44</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcmail-jdk15</artifactId> <version>1.44</version> </dependency> ``` the decrypt() call is indeed deprecated in the current version (1.8.9). Use ``` pd.openProtection(new StandardDecryptionMaterial("")); ``` instead. Additional advice: download the source code package. You'll find many examples that will help you further.
JavaFX / how to load/populate values at start up? I started working with JavaFX just today and already need some advise. I load the `applicaton.fxml` (created with Oracle SceneBuiler) using the FXMLLoader in the `start(Stage ...)` method of the MainApplication (which has an ApplicationController specified in my `application.fxml` file). ``` <AnchorPane id="AnchorPane" disable="false" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="600.0" prefWidth="800.0" styleClass="theme" xmlns:fx="http://javafx.com/fxml/1" xmlns="http://javafx.com/javafx/2.2" fx:controller="app.AppController"> ...more code here... <ComboBox id="cmb_locations" fx:id="cmb_locations"> <items> <FXCollections fx:factory="observableArrayList"> <String fx:value="Item 1" /> <String fx:value="Item 2" /> <String fx:value="Item 3" /> </FXCollections> </items> </ComboBox> ``` Now, I have a `ComboBox` in the `applicaton.fxml`, which has three items (the default items). What I need is to populate that `ComboBox` during the startup with my own values. Does anyone know how to achieve that and where to put the relevant code snippets (`app.AppController` or something similar)? Thanks in advance.
You have some controller for your fxml file. There you have access to your ComboBox. You could put this code there to set up the list of elements (probably in the initialize() method):

If you don't really want to edit your fxml file you can just clear the list first with `cmb_locations.getItems().clear();` before you set up the new list.

```
public class ApplicationController implements Initializable {

    @FXML ComboBox cmb_locations;

    ...

    @Override
    public void initialize(URL url, ResourceBundle rb) {
        ...
        List<String> list = new ArrayList<String>();
        list.add("Item A");
        list.add("Item B");
        list.add("Item C");
        ObservableList obList = FXCollections.observableList(list);
        cmb_locations.getItems().clear();
        cmb_locations.setItems(obList);
        ...
    }
}
```
Color Intensity of Pixel(X,Y) on Image [OpenCV / Python]


I'm using Template Matching to detect smaller images in a large image. After detecting one, I would grab the center point (x, y) of the detected image within the main picture. Could anyone advise how I could grab the shade/color of that particular center point?

I understand that Template Matching ignores color. Based on this example, is there any way to grab the color intensity of that particular center-point pixel?

```
# Python program to illustrate 
# template matching 
import cv2 
import numpy as np 
import time
import sys

# Read the main image 
img_rgb = cv2.imread('test.png')

# Convert it to grayscale 
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY) 

# Read the template 
template = cv2.imread('template.png',0) 

# Store width and height of template in w and h 
w, h = template.shape[::-1] 

# Perform match operations. 
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED) 

# Specify a threshold 
threshold = 0.90

# Store the coordinates of matched area in a numpy array 
loc = np.where( res >= threshold)  

xyMiddle = ""
for pt in zip(*loc[::-1]):
    xyMiddle = str(pt[0] + w/2) +"," +str(pt[1] + h/5)

if(xyMiddle != ""):
    print(xyMiddle)
```
The grayscale image has just a single channel and the colour image has 3 or 4 channels (BGR or BGRA). Once you have the pixel coordinates, the pixel value in the grayscale image will be an intensity value or you can get the BGR values from that pixel in the original image. That is, `img_gray[y][x]` will return an intensity value in the range 0-255, and `img_rgb[y][x]` will return a list of `[B, G, R (, A)]` values, each of which will have intensity values in the range 0-255. Thus the value returned when you call e.g. `img_gray[10][50]` or `print(img_gray[10][50])` is the pixel value at `x=50`, `y=10`. Similarly the value returned when you call e.g. `img_rgb[10][50]` is the pixel value at `x=50`, `y=10`, but calling it in this way will return the list of pixel values for that location e.g. `[93 238 27]` for `RGB` or `[93 238 27 255]` for `RGBA`. To get just the B, G or R value you would call `img_rgb[10][50][chan]` where for `chan`, `B=0`, `G=1`, `R=2`.
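Tying that back to the template-matching code in the question, a short sketch (variable names reused from the question; here the y offset is taken as `h/2`, i.e. the geometric centre of the match):

```
# inside the loop over matches, using integer pixel coordinates
for pt in zip(*loc[::-1]):
    cx = int(pt[0] + w / 2)
    cy = int(pt[1] + h / 2)

    intensity = img_gray[cy, cx]        # single 0-255 grayscale value
    b, g, r = img_rgb[cy, cx][:3]       # colour values from the original image

    print("centre (%d,%d): intensity=%d, B=%d G=%d R=%d" % (cx, cy, intensity, b, g, r))
```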
How to rotate and then move in that direction?


Hi, I am currently trying to make a first person game. What I was able to do was make the camera move using the function gluLookAt(), and rotate it using glRotatef(). What I am trying to do is rotate the camera and then move forward in the direction I have rotated, but the axes stay the same, and although I have rotated, the camera moves sideways, not forward.

Can someone help me? This is my code:

```
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(cameraPhi,1,0,0);
glRotatef(cameraTheta,0,1,0);
gluLookAt(move_camera.x,move_camera.y,move_camera.z,move_camera.x,move_camera.y,move_camera.z-10,0,1,0);

drawSkybox2d(treeTexture);
```
This requires a bit of vector math... Given these functions, the operation is pretty simple though: ``` vec rotx(vec v, double a) { return vec(v.x, v.y*cos(a) - v.z*sin(a), v.y*sin(a) + v.z*cos(a)); } vec roty(vec v, double a) { return vec(v.x*cos(a) + v.z*sin(a), v.y, -v.x*sin(a) + v.z*cos(a)); } vec rotz(vec v, double a) { return vec(v.x*cos(a) - v.y*sin(a), v.x*sin(a) + v.y*cos(a), v.z); } ``` Assuming you have an orientation vector defined as {CameraPhi, CameraTheta, 0.0}, then if you want to move the camera in the direction of a vector v with respect to the camera's axis, you add this to the camera's position p: ``` p += v.x*roty(rotx(vec(1.0, 0.0, 0.0), CameraPhi), CameraTheta) + v.y*roty(rotx(vec(0.0, 1.0, 0.0), CameraPhi), CameraTheta) + v.z*roty(rotx(vec(0.0, 0.0, 1.0), CameraPhi), CameraTheta); ``` And that should do it. Keep Coding :)
What does the statement "to qualify the use of a type in that namespace" mean in C#


I am a beginner in C# and came across the below error when using the `Console.WriteLine` function.

```
The name 'Console' does not exist in the current context
```

My understanding of the `using` keyword is that it acts like a `require` or `import` in JavaScript. So I then added the statement `using System;` at the top of the namespace file, since a suggestion from the IDE gave me something like `System.Console`. Now I do not have the error anymore.

Out of curiosity I went to the C# docs in the [using Directive](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-directive) section. And it states the following:

*The using directive has three uses:*

*To allow the use of types in a namespace so that you do not have to qualify the use of a type in that namespace:*

What does the part - *so that you do not have to qualify the use of a type in that namespace* - mean?

And why is the `using` keyword called a directive - or what is a directive in general programming, in contrast to the directives I use, for example, in Angular?

Thanks.
> > What does the part - so that you do not have to qualify the use of a type in that namespace: mean. > > > C# distinguishes between *simple names*, which have a name and an optional type argument, like `String` or `List<int>`, and *qualified names* which have multiple names separated by dots, like `System.String` or `System.Collections.Generic.List<int>`. When you have a `using` directive, you can elide the used qualifier. > > And why the using keyword is called a directive or what is a directive in general programming > > > In C# we have *declarations* like `namespace Foo` or `class Bar` or `void M() {}` We have *statements*, like `foreach(var foo in bar) blah();` What then is `using System;` ? It's not a declaration; no new item is *declared*. It's not a *statement* -- statements are control flow elements, but the using directive introduces no control flow. It *directs* the compiler to have a particular rule for resolving names, and so it is a *directive*. All of this information is in the C# specification. **I strongly recommend that you get yourself a copy of the spec and consult it when you have questions like these**. That will be faster and easier for you than posting a question here.
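As a short sketch of the difference, here is the same program written twice. Without a `using` directive the type must be referred to by its qualified name:

```
class Program
{
    static void Main()
    {
        System.Console.WriteLine("qualified name");
        System.Collections.Generic.List<int> numbers = new System.Collections.Generic.List<int>();
    }
}
```

With the directives in place, the simple names are enough:

```
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        Console.WriteLine("simple name");
        List<int> numbers = new List<int>();
    }
}
```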
plain javascript to jquery - clientHeight I have the following script ``` (function(win){ var doc = win.document; if (doc.querySelector && doc.addEventListener) { var toggler = doc.querySelector('.toggle-menu') var menu = doc.querySelector('.main-nav ul'); menu.style.height = '0px'; toggler.addEventListener('click',function(e) { e.preventDefault(); if (menu.style.height == '0px') { menu.style.height = 'auto'; if (menu.clientHeight != 0) { menu.style.height = menu.clientHeight+'px'; } } else { menu.style.height = '0px'; } }); } })(this); ``` What will be the jQuery version of that script, since i can't find a jQuery equivalent to clientHeight.
`clientHeight` is not a jQuery property. It was introduced in Internet Explorer, but isn't part of the W3C specifications. It looks like it is only supported in Firefox and Internet Explorer. I've just tested that it works in the latest version of Chrome, though. Not sure if results are standard across browsers, though the link I posted below suggests no. Also, Mozilla suggests the following formula to be used in place for browsers that don't support it: > > clientHeight can be calculated as CSS height + CSS padding - height of > horizontal scrollbar (if present). > > > I'm assuming that is the scrollbar of the element itself, not the entire browser window, unless the element takes up the entire window. Sources: - [clientHeight/clientWidth returning different values on different browsers](https://stackoverflow.com/questions/833699/clientheight-clientwidth-returning-different-values-on-different-browsers) - <https://developer.mozilla.org/en/DOM/element.clientHeight>
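Following the Mozilla formula above, a rough jQuery approximation (only valid when the element has no horizontal scrollbar, since `.innerHeight()` is content height plus padding) might be:

```
// jQuery's .innerHeight() returns content height + padding (no border),
// which is what clientHeight reports when no horizontal scrollbar is present.
var menu = $('.main-nav ul');
var approxClientHeight = menu.innerHeight();
```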
Placing arguments for anonymous functions in JavaScript


When I make an anonymous function in JavaScript like this:

```
(function(){ /* some code here */ })()
```

In which object will this function be added, and where will this function live?

Also you can see in the jQuery source code an anonymous function like this:

```
(function(window, undefined){ /* some code here */ })(window)
```

How do this function's arguments differentiate it from an anonymous, 0-arg function?
Functions in JavaScript are *values*. That is, a function is represented by an object, and like any other object it can be the value of a variable or participate in expressions. Thus ``` (function() { ... }) ``` is a value, just like `17` or `"hello world"` is a value. When a function (as a value) appears in an expression, and it's followed by `(...)` with a comma-separated list of expressions between the parentheses, that's a function call. OK, so: ``` (function() { ... })() ``` creates a function (as a value) and then invokes that function with no arguments. The function object, at least as a direct result of that code, is not stored anywhere. It essentially vanishes after the function call completes, and the overall value of that subexpression will be whatever the function returned. Passing parameters to such a function is no different than passing parameters to any other function. In the specific example you quote, the purpose is to prevent certain kinds of anomalies caused by errant "alien" code. Your example really should read: ``` (function(window, undefined) { // code })(this); ``` The symbol `this` is a reserved word and its value is under complete control of the runtime. (Well, it's value in a local execution context is thusly controlled.) When evaluated in the global scope, the above code ensures that *inside* the anonymous function, the symbol "window" will be a reference to the global context. That sort of construct is also useful for code that may be used in contexts other than a browser, like Node.js for example, where the global context isn't called "window".
How can I enable CDI with Jersey Test Framework?


I found [How can I inject a data source dependency into a RESTful web service with Jersey (Test Framework)?](https://stackoverflow.com/q/5892349/330457) but I think I'm going to ask a slightly different question. This is a follow-up question to [@PostConstruct of abstract ancestors are not invoked](https://stackoverflow.com/q/29787068/330457)

I wrote a JAX-RS library and I'm trying to unit-test it with the [Jersey Test Framework](https://jersey.java.net/documentation/latest/test-framework.html).

It seems HK2 injects properly. But I found that some of my life cycle interceptor methods annotated with `@PostConstruct` or `@PreDestroy` aren't invoked (or only some are invoked).

```
public class MyResource {

    @PostConstruct
    private void constructed() {
        // not invoked
    }

    @Inject
    private Some some; // injection works.
}
```

How can I enable CDI with Jersey Test Framework? What kind of artifacts do I have to depend on?

Here are my current dependencies.

```
<dependency>
  <groupId>javax.inject</groupId>
  <artifactId>javax.inject</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>javax.ws.rs</groupId>
  <artifactId>javax.ws.rs-api</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.test-framework.providers</groupId>
  <artifactId>jersey-test-framework-provider-grizzly2</artifactId>
  <scope>test</scope>
</dependency>
```
I found a solution. I added following additional dependencies. ``` <dependency> <groupId>org.glassfish.jersey.ext.cdi</groupId> <artifactId>jersey-cdi1x</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.glassfish.jersey.ext.cdi</groupId> <artifactId>jersey-weld2-se</artifactId> <scope>test</scope> </dependency> ``` Now Weld takes over HK2, I think. I don't know what `jersey-cdi1x-ban-custom-hk2-binding` is for. Anyway, I can use standard annotations from `javax.enterprise:cdi-api`. ``` public class MyProducer { @Produces @Some public MyType produceSome() {} public void disposeSome(@Disposes @Some MyType instance) {} } ``` And an initialisation code for Weld added. ``` @Override protected Application configure() { // this method works somewhat weirdly. // local variables including logger // is null in here // I have to start (and join) a thread // which initializes Weld and adds a shutdown hook final Thread thread = new Thread(() -> { final Weld weld = new Weld(); weld.initialize(); Runtime.getRuntime().addShutdownHook( new Thread(() -> weld.shutdown())); }); thread.start(); try { thread.join(); } catch (final InterruptedException ie) { throw new RuntimeException(ie); } final ResourceConfig resourceConfig = new ResourceConfig(MyResource.class); resourceConfig.register(MyProducer.class); return resourceConfig; } ``` Every points get injected and all lifecycle methods are invoked. Yay!!! --- I don't understand why I tried to use a thread in the first place. ``` @Override protected Application configure() { final Weld weld = new Weld(); weld.initialize(); Runtime.getRuntime().addShutdownHook(new Thread(() -> weld.shutdown())); final ResourceConfig resourceConfig = new ResourceConfig(MyResource.class); resourceConfig.register(MyProducer.class); return resourceConfig; } ``` --- Since I use `JerseyTestNg.ContainerPerClassTest` I failed, at least with TestNG, to work with `@BeforeClass` and `@AfterClass` because `configure()` method is invoked (indirectly) from the constructor. I think I can use `@BeforeMethod` and `@AfterMethod` for initializing/shutting-down Weld if I switch to `JerseyTestNg.ContainerPerMethodTest`. --- `jersey-cdi1x` is a transitive dependency of the `jersey-weld2-se` so it can be omitted.
Do aging UPSes need to be replaced? I've got a 2200VA UPS with AVR and power conditioning that's over 10 years old. Does a UPS lose its ability to protect equipment over time? I'm not asking about batteries, but the UPS hardware itself.
It depends on what the UPS is doing and how it is made. There are two main purposes in a UPS:

1. Ensure that the equipment plugged into it is kept live in the event of a power loss, over-voltage, or under-voltage condition;
2. Ensure that power line badness (spikes) are mitigated and the equipment protected.

Part 1 is easy to test, and most good UPSs will do a periodic test. This tests more than the battery; it tests the UPS itself and even to a lesser extent the equipment behind it.

Part 2 is harder. Many UPSs (and power strips) which claim to protect against spikes and lightning strikes do so using a small device called a metal oxide varistor. These devices sit on the input line and wait to catch a spike. When they do, they are damaged to some extent. A nearby lightning strike can take them out completely, yet they will appear to be perfectly intact physically.

Basically you can think of it as a can of paint you use to touch up a house. As you touch up more and more, your can gets empty, and eventually you run out of paint.

When these are damaged enough you start to lose fast-spike protection. In some areas it can take as little as a year to make a surge protector almost useless. In others, you may go for 10 without a problem if the input power is very clean.

So, to finally answer your question: I would consider replacing a 10 year old unit for this reason alone. Another reason is that you can likely get a larger one for cheaper, it may be more energy efficient, and may even be smaller.
How to check if there's nothing to be committed in the current branch? The goal is to get an unambiguous status that can be evaluated in a shell command. I tried `git status` but it always returns 0, even if there are items to commit. ``` git status echo $? #this is always 0 ``` I have an idea but I think it is rather a bad idea. ``` if [ git status | grep -i -c "[a-z]"> 2 ]; then code for change... else code for nothing change... fi ``` any other way? --- **update with following solve, see Mark Longair's post** I tried this but it causes a problem. ``` if [ -z $(git status --porcelain) ]; then echo "IT IS CLEAN" else echo "PLEASE COMMIT YOUR CHANGE FIRST!!!" echo git status fi ``` I get the following error `[: ??: binary operator expected` now, I am looking at the man and try the git diff. ===================code for my hope, and hope better answer====================== ``` #if [ `git status | grep -i -c "$"` -lt 3 ]; # change to below code,although the above code is simple, but I think it is not strict logical if [ `git diff --cached --exit-code HEAD^ > /dev/null && (git ls-files --other --exclude-standard --directory | grep -c -v '/$')` ]; then echo "PLEASE COMMIT YOUR CHANGE FIRST!!!" exit 1 else exit 0 fi ```
An alternative to testing whether the output of `git status --porcelain` is empty is to test each condition you care about separately. One might not always care, for example, if there are untracked files in the output of `git status`. For example, to see if there are any local unstaged changes, you can look at the return code of: ``` git diff --exit-code ``` To check if there are any changes that are staged but not committed, you can use the return code of: ``` git diff --cached --exit-code ``` Finally, if you want to know about whether there are any untracked files in your working tree that aren't ignored, you can test whether the output of the following command is empty: ``` git ls-files --other --exclude-standard --directory ``` *Update:* You ask below whether you can change that command to exclude the directories in the output. You can exclude empty directories by adding `--no-empty-directory`, but to exclude all directories in that output I think you'll have to filter the output, such as with: ``` git ls-files --other --exclude-standard --directory | egrep -v '/$' ``` The `-v` to `egrep` means to only output lines that don't match the pattern, and the pattern matches any line that ends with a `/`.
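Putting those three checks together, a sketch of a guard you could drop into a script (the untracked-files check ignores directories, as discussed above):

```
#!/bin/sh
# Fails (exit 1) if there is anything to commit: unstaged changes,
# staged-but-uncommitted changes, or untracked (non-ignored) files.

if ! git diff --quiet; then
    echo "PLEASE COMMIT YOUR CHANGES FIRST (unstaged changes)."
    exit 1
fi

if ! git diff --cached --quiet; then
    echo "PLEASE COMMIT YOUR CHANGES FIRST (staged but uncommitted changes)."
    exit 1
fi

if [ -n "$(git ls-files --other --exclude-standard --directory | grep -v '/$')" ]; then
    echo "PLEASE COMMIT YOUR CHANGES FIRST (untracked files)."
    exit 1
fi

echo "IT IS CLEAN"
exit 0
```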
Preload Link using Files Directive in .htaccess I'm currently preloading font files for one webpage on WordPress. So `https://example.com` not preloaded. `https://example.com/test/` preloaded font file. This is what I've got in `.htaccess`. ``` <Files "/test/"> Header add Link "</fonts/poppins.woff2>; rel=preload; as=font; type=font/woff2; crossorigin" </Files>` ``` The link works fine, it's the conditional loading that I'm struggling with. Tried adding wildcards, plus the full URL, etc, but no joy. Am I right in thinking it's not the syntax, but it needs allowing before it works?
It is the syntax... The `<Files>` directive matches against *filenames* only (eg. `foo.php`) - when the request maps to physical files on the filesystem. Since this is WordPress, I assume `/test/` is not even a filesystem directory - it's simply a URL-path? You can use mod\_setenvif to set an environment variable when this URL-path is requested and then set the `Header` conditionally based on this environment variable. This should go *before* the WordPress front-controller, near the top of your `.htaccess` file. For example: ``` SetEnvIf Request_URI "^/test/" PRELOAD_FONT Header add Link "</fonts/poppins.woff2>; rel=preload; as=font; type=font/woff2; crossorigin" env=PRELOAD_FONT ``` The regex `^/test/` matches any URL-path that *starts* `/test/`. If this should *only* match the single URL `/test/` then append an end-of-string anchor to the regex: `^/test/$`. Note the extra `env=PRELOAD_FONT` argument at the end of the `Header` directive. The header is only set when the `PRELOAD_FONT` environment variable, set by the preceding `SetEnvIf` directive, is also set.
UserWarning: FixedFormatter should only be used together with FixedLocator I have used for a long time small subroutines to format axes of charts I'm plotting. A couple of examples: ``` def format_y_label_thousands(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:,.0f}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) def format_y_label_percent(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:.1%}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) ``` However, after an update to matplotlib yesterday, I get the following warning when calling any of these two functions: ``` UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) ``` What is the reason for such a warning? I couldn't figure it out looking into matplotlib's documentation.
**WORKAROUND:** The way to avoid the warning is to use FixedLocator (that is part of matplotlib.ticker). Below I show a code to plot three charts. I format their axes in different ways. Note that the "set\_ticks" silence the warning, but it changes the actual ticks locations/labels (it took me some time to figure out that FixedLocator uses the same info but keeps the ticks locations intact). You can play with the x/y's to see how each solution might affect the output. ``` import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mticker mpl.rcParams['font.size'] = 6.5 x = np.array(range(1000, 5000, 500)) y = 37*x fig, [ax1, ax2, ax3] = plt.subplots(1,3) ax1.plot(x,y, linewidth=5, color='green') ax2.plot(x,y, linewidth=5, color='red') ax3.plot(x,y, linewidth=5, color='blue') label_format = '{:,.0f}' # nothing done to ax1 as it is a "control chart." # fixing yticks with "set_yticks" ticks_loc = ax2.get_yticks().tolist() ax2.set_yticks(ax1.get_yticks().tolist()) ax2.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing yticks with matplotlib.ticker "FixedLocator" ticks_loc = ax3.get_yticks().tolist() ax3.yaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing xticks with FixedLocator but also using MaxNLocator to avoid cramped x-labels ax3.xaxis.set_major_locator(mticker.MaxNLocator(3)) ticks_loc = ax3.get_xticks().tolist() ax3.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_xticklabels([label_format.format(x) for x in ticks_loc]) fig.tight_layout() plt.show() ``` OUTPUT CHARTS: [![Sample charts](https://i.stack.imgur.com/f9ssR.png)](https://i.stack.imgur.com/f9ssR.png) Obviously, having a couple of idle lines of code like the one above (I'm basically getting the yticks or xticks and setting them again) only adds noise to my program. I would prefer that the warning was removed. However, look into some of the "bug reports" (from links on the comments above/below; the issue is not actually a bug: it is an update that is generating some issues), and the contributors that manage matplotlib have their reasons to keep the warning. **OLDER VERSION OF MATPLOTLIB:** If you use your Console to control critical outputs of your code (as I do), the warning messages might be problematic. Therefore, a way to delay having to deal with the issue is to downgrade matplotlib to version 3.2.2. I use Anaconda to manage my Python packages, and here is the command used to downgrade matplotlib: ``` conda install matplotlib=3.2.2 ``` Not all listed versions might be available. For instance, couldn't install matplotlib 3.3.0 although it is listed on matplotlib's releases page: <https://github.com/matplotlib/matplotlib/releases>
Are Promise.resolve and new Promise(resolve) interchangeable I think `Promise.resolve` and `new Promise(resolve)` are interchangeable. Consider this: A. ``` new RSVP.Promise(function (resolve, reject) { resolve(); }).then(function () { return new RSVP.Promise(function (resolve) { resolve("HI") }); }).then(function (result) { console.log(result); }); ``` B. ``` new RSVP.Promise(function (resolve, reject) { resolve(); }).then(function () { return RSVP.resolve("HI"); }).then(function (result) { console.log(result); }); ``` Both print "HI" as I expected. So I think if I don't need to "reject" anything. I can just write `RSVP.resolve();` for simplicity. But consider this example: ``` new RSVP.Promise(function (resolve, reject) { resolve(); }).then(function () { return new RSVP.Promise(function (resolve, reject) { setTimeout(function () { resolve("HI") }, 3000); }); }).then(function (result) { console.log(result); }); ``` How can I use `RSVP.resolve();` to replace? I tried for example: ``` new RSVP.Promise(function (resolve, reject) { resolve(); }).then(function () { return setTimeout(function () { return new RSVP.resolve("HI"); }, 3000); }).then(function (result) { console.log(result); }); ``` This prints something else instead of "HI". So is it possible to use RSVP.resolve(); here? Are these two interchangeable?
First and foremost

> I think Promise.resolve and new Promise(resolve) are interchangeable.

Nope. `Promise.resolve` will create a promise which is already resolved, whereas `new Promise(resolve)` creates a promise which is neither resolved nor rejected.

---

In the last example,

```
return setTimeout(function () {
    return new RSVP.resolve("HI");
}, 3000);
```

means that you are returning the result of the `setTimeout` function, not a promise object. So, the current `then` handler will return a resolved promise with the result of `setTimeout`. That is why you are seeing a weird object.

---

In your particular case, you want to introduce a delay before resolving the promise. That is not possible with `Promise.resolve`. The penultimate method you have shown in the question is the way to go.
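If you find yourself writing that pattern often, a small helper keeps it readable. A minimal sketch (the name `delay` is mine, not part of RSVP):

```
function delay(ms, value) {
    return new RSVP.Promise(function (resolve) {
        setTimeout(function () {
            resolve(value);
        }, ms);
    });
}

new RSVP.Promise(function (resolve) {
    resolve();
}).then(function () {
    return delay(3000, "HI");   // returns a promise, so the next then waits for it
}).then(function (result) {
    console.log(result);        // "HI" after ~3 seconds
});
```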
Raw Type Warning in Android Studio Android Studio doesn't show a compiler warning when using the raw type in referencing a generic type. Is there a way to enable this feature? ``` public class GenericClass<T> { } public class SpecificClass extends GenericClass { } ``` Eclipse usually shows the following warning: *GenericClass is a raw type. References to generic type GenericClass <T> should be parameterized.*
You can enable the warning, but you can't force it to be a compilation error. You can refer to the [JLS](http://docs.oracle.com/javase/specs/jls/se7/html/jls-4.html#jls-4.8), which treats raw type usage as a compilation warning and not a compilation error.

You can enable the inspection in Android Studio. Go to `File > Settings > Inspection` and turn on the "Raw use of parameterized class" setting as shown below:

[![raw usage](https://i.stack.imgur.com/jwyUJ.png)](https://i.stack.imgur.com/jwyUJ.png)

Thanks to Stephan: in Eclipse you can even promote this to an error using `Java Compiler > Errors/Warnings > Generic Types > Usage of a raw type` and selecting `Error` in the combo box.
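For completeness, the inspection stops flagging the code once the reference is parameterized. A small illustration (here `String` is just an arbitrary example of a type argument):

```
class GenericClass<T> { }

class RawUse extends GenericClass { }                    // flagged: raw use of parameterized class

class ParameterizedUse extends GenericClass<String> { }  // no warning
```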
MySQL 5.7 : convert simple JSON\_ARRAY to rows I have a simple table with a JSON\_ARRAY column like that: ``` +----+---------+ | id | content | +----+---------+ | 1 | [3, 4] | | 2 | [5, 6] | +----+---------+ ``` I want to list all the content references for a specific id `SELECT JSON_EXTRACT(content, '$') as res FROM table WHERE id=1` But I'd like the result to be in rows: ``` +-----+ | res | +-----+ | 3 | | 4 | +-----+ ```
You can do this in MySQL 8.0 with [JSON\_TABLE()](https://dev.mysql.com/doc/refman/8.0/en/json-table-functions.html): ``` select r.res from mytable, json_table(mytable.content, '$[*]' columns (res int path '$')) r where mytable.id = 1 ``` I tested on MySQL 8.0.17, and this is the output: ``` +------+ | res | +------+ | 3 | | 4 | +------+ ``` If you use a version older than MySQL 8.0, you have these options: - Find some impossibly complex SQL solution. This is almost always the wrong way to solve the problem, because you end up with code that is too expensive to maintain. - Fetch the JSON array as-is, and explode it in application code. - Normalize your data so you have one value per row, instead of using JSON arrays. I often find questions on Stack Overflow about using JSON in MySQL that convince me that this feature has ruined MySQL. Developers keep using it inappropriately. They like that it makes it easy to *insert* semi-structured data, but they find that it makes *querying* that data far too complex. --- Re comment from @ChetanOswal: There are solutions for MySQL 5.7, but they are unsatisfying and my advice is to avoid them. Demo: We can start with the data shown in the original question above. ``` mysql> create table mytable (id serial primary key, content json); mysql> insert into mytable values -> (1, '[3,4]'), -> (2, '[5,6]'); ``` Next we need another table that simply has a series of integer values, at least as many as the longest JSON array. ``` mysql> create table numbers (number int primary key); mysql> insert into numbers values (0), (1), (2); ``` Joining these tables using an inequality we can create as many rows as the number of items in the JSON arrays. ``` mysql> select * from mytable join numbers on numbers.number < json_length(mytable.content); +----+---------+--------+ | id | content | number | +----+---------+--------+ | 1 | [3, 4] | 0 | | 1 | [3, 4] | 1 | | 2 | [5, 6] | 0 | | 2 | [5, 6] | 1 | +----+---------+--------+ ``` Now we can use those numbers as array indexes to extract the values from the JSON arrays. ``` mysql> select *, json_extract(mytable.content, concat('$[', numbers.number, ']')) as value from mytable join numbers on numbers.number < json_length(mytable.content); +----+---------+--------+-------+ | id | content | number | value | +----+---------+--------+-------+ | 1 | [3, 4] | 0 | 3 | | 1 | [3, 4] | 1 | 4 | | 2 | [5, 6] | 0 | 5 | | 2 | [5, 6] | 1 | 6 | +----+---------+--------+-------+ ``` That's as far as I will advise on solving this in MySQL 5.7. If you need features that are in MySQL 8.0, then you should upgrade to MySQL 8.0. MySQL 5.7 is nearing its end of life in October 2023. It's time to upgrade anyway. If you can't upgrade, and you don't like workaround solutions like I showed, then you should stop using SQL to work with JSON.
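That said, if you do stay on 5.7 and only need the values for one id, as in the question, the workaround above can be filtered directly; a sketch based on the same `mytable` and `numbers` tables:

```
select json_extract(mytable.content, concat('$[', numbers.number, ']')) as res
from mytable
join numbers on numbers.number < json_length(mytable.content)
where mytable.id = 1;
```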
What functions are cached in Haskell? I have the following code: ``` memoize f = (map f [0 ..] !!) fib' 0 = 1 fib' 1 = 1 fib' n = fib' (n - 1) + fib' (n - 2) fibMemo n = memoize fib' n fibMemo' = memoize fib' ``` (I am aware of that fibonacci implementation has exponential time complexity and does not use the cache) The first time I execute `fibmemo' 30` it takes 3 seconds, and the second time it takes ~0 seconds, because the result is cached. But the first version, `fibmemo`, does not get the result cached, it always takes 3 seconds to execute. The only difference is the definition (which as far as I know are equivalent). So my question is, what functions are cached in Haskell? I have already read <https://wiki.haskell.org/Memoization> and does not resolve my question.
Essentially, the functions you defined behave as the following ones:

```
fibMemo n = let m = map fib' [0..] in m !! n

fibMemo' = let m = map fib' [0..] in (m !!)
```

Why is `fibMemo'` more efficient? Well, we can rewrite it as

```
fibMemo' = let m = map fib' [0..] in \n -> m !! n
```

which makes it more clear that the single list `m` gets created before `n` is taken as input. This means that all the calls to `fibMemo'` will use the same `m`. The first call evaluates a part of `m` slowly, and the successive calls will reuse that cached result (assuming the call hits the cache, of course, otherwise another part of `m` is evaluated and cached).

Instead, `fibMemo` is equivalent to

```
fibMemo = \n -> let m = map fib' [0..] in m !! n
```

which takes the input `n` before the list `m` gets created. So, each call gets a new cache, which is pointless, since the whole purpose of a cache is that it is reused later.

The order of the lambda `\n ->` vs the `let m = ..` matters a lot in terms of the performance. Since `m = ..` does not use `n`, technically the `let m = ..` can be floated outwards, essentially turning `fibMemo` into `fibMemo'`, without affecting the semantics. However, as you discovered, this does not preserve performance, in general!

This is indeed an optimization that GHC could perform, but does not, because it can easily make the performance significantly worse.
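As an aside, even the cached version only avoids recomputing `fib'` for a top-level argument that was already requested; the exponential recursion inside `fib'` itself is untouched. A minimal sketch of a variant whose recursive calls also go through the cache (names are mine):

```
fibs :: [Integer]
fibs = map fib [0 ..]
  where
    fib 0 = 1
    fib 1 = 1
    -- recursive calls index back into the shared list, so each value is computed once
    fib n = fibs !! (n - 1) + fibs !! (n - 2)

fibMemo'' :: Int -> Integer
fibMemo'' = (fibs !!)
```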
How do I use a callable object as a method definition? I know it's possible to define a function and use it as a method: ``` def method(*args, **kwargs): print "%r %r" % (args, kwargs) class Test(object): method1 = method t = Test() t.method1() # (<__main__.Test object at 0x10705a4d0>,) {} ``` I'd like to do the same thing with callable objects, like this: ``` class Method(object): __call__ = method class Test(object): method1 = Method() ``` However, when I do this, the `self` argument of `Method.__call__` is the method itself (which is normal), but the `self` argument of the `Test` instance is lost. ``` t = Test() t.method1() # (<__main__.Method object at 0x10703f110>,) {} ``` Is it possible to have the `self` argument passed as the second argument to `Method.__call__`?
By wrapping that function `method` in a class you are effectively preventing the mechanism that binds an object to a function and thus creates a method. The way this works is that regular python functions are [descriptors](https://docs.python.org/2/howto/descriptor.html#functions-and-methods). To summarize the docs: When you write the following code: ``` some_instance.some_function() ``` The `some_function`s `__get__` method is called with `some_instance` as the first parameter. The `__get__` method then returns a bound method object, that remembers the instance. Later, when the bound method object's `__call__` method is called, it passes the saved instance as a first parameter. We can reimplement that behaviour like this: ``` def method(*args, **kwargs): print("%r %r" % (args, kwargs)) class BoundMethod(object): # the bound method remembers the instance and the function def __init__(self, instance, function): self.instance = instance self.function = function # when the bound method is called, it passes the instance def __call__(self, *args, **kwargs): return self.function(self.instance, *args, **kwargs) class Method(object): # the __get__ method assembles a bound method consisting of the # instance it was called from and the function def __get__(self, instance, cls): return BoundMethod(instance, method) class Test(object): method1 = Method() t = Test() t.method1() # (<__main__.Test object at 0x7f94d8c3aad0>,) {} ``` In your case `Method` is not a descriptor. So, when internally the `__call__` property (which is a function) is requested it is bound to an object of the containing class (`Method`). I am not sure if this is useful, as this example is just a simplified version of what happens under the hood anyway. Note: in this example: ``` class C: def function(self): pass print(C.function) print(C().function) ``` The first print shows us, that an unbound method literally is called `<unbound method C.function>` while a bound method is called `<bound method C.function of ...>`. In python3 however the first print shows us that unbound methods are just the unchanged functions we defined in the class.
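If you don't need a custom bound-method class, the same effect can be had by giving `Method` a `__get__` that simply pre-fills the instance. A minimal sketch using `functools.partial` (a `types.MethodType`-based version would also work):

```
import functools

def method(*args, **kwargs):
    print("%r %r" % (args, kwargs))

class Method(object):
    def __call__(self, *args, **kwargs):
        return method(*args, **kwargs)

    def __get__(self, instance, cls):
        if instance is None:
            return self
        # "bind" by pre-filling the instance as the first argument
        return functools.partial(self, instance)

class Test(object):
    method1 = Method()

t = Test()
t.method1()  # (<__main__.Test object at 0x...>,) {}
```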
jcr query construct I am building a jcr query and receive data from repository. Here is my code: ``` String queryString = "SELECT * FROM public:hours"; try { // get session Session session = requestContext.getSession(); // create query from queryString constructed Query q = session.getWorkspace().getQueryManager().createQuery(queryString, Query.JCR_SQL2); // execute query and retrieve result QueryResult result = q.execute(); // debug line log.error("query is", q.getStatement()); .... ``` But this can not execute successfully. It gives me an error that ``` Repositorty Failed: [INFO] [talledLocalContainer] javax.jcr.query.InvalidQueryException: Query: [INFO] [talledLocalContainer] SELECT * FROM public:(*)hours; expected: <end> ``` In the jcr-shell, it works if I type in `query sql "select * from public:hours"` and will give me proper results. I searched many references but almost every example is the same as mine. so I am not sure where the problem is. Anyone have experience with this please help.
If you're using the JCR-SQL2 query language, then you should quote the selector name with square brackets: ``` SELECT * FROM [public:hours] ``` For details, see Section 6.7.4 of the [JSR-283](http://jcp.org/en/jsr/detail?id=283) (aka, JCR 2.0) specification, which is also available [online](http://www.day.com/specs/jcr/2.0/6_Query.html#6.7.4%20Name). Note that the square bracket quote characters are not required if the name were to be a valid SQL92 identifier. The node type names containing namespace prefixes always need to be quoted, since the ':' character is not allowed in SQL92 identifiers. Of course, this assumes that you have a node type named "public:hours", where "public" is the namespace prefix.
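Applied to the code in the question, only the query string needs to change; something like:

```
String queryString = "SELECT * FROM [public:hours]";
Query q = session.getWorkspace().getQueryManager().createQuery(queryString, Query.JCR_SQL2);
QueryResult result = q.execute();
```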
All possible permutations of decimal numbers (hundredths) that sum up to 1 for a given length Consider vector `s` as follows: ``` s=seq(0.01, 0.99, 0.01) > s [1] 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 .......... 0.89 0.90 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 ``` Now given `s` and a fixed length `m`, I would like to have a matrix for **all possible permutations** of length `m` such that each row of matrix sums up to `1` (excluding the brute force approach). For example, if `m=4` (i.e. number of columns), the desired matrix would be something like this: ``` 0.01 0.01 0.01 0.97 0.02 0.01 0.01 0.96 0.03 0.01 0.01 0.95 0.04 0.01 0.01 0.94 0.05 0.01 0.01 0.93 0.06 0.01 0.01 0.92 . . . 0.53 0.12 0.30 0.05 . . . 0.96 0.02 0.01 0.01 0.97 0.01 0.01 0.01 . . . 0.01 0.97 0.01 0.01 . . . ```
Here's how to do this using recursion: ``` permsum <- function(s,m) if (m==1L) matrix(s) else do.call(rbind,lapply(seq_len(s-m+1L),function(x) unname(cbind(x,permsum(s-x,m-1L))))); res <- permsum(100L,4L); head(res); ## [,1] [,2] [,3] [,4] ## [1,] 1 1 1 97 ## [2,] 1 1 2 96 ## [3,] 1 1 3 95 ## [4,] 1 1 4 94 ## [5,] 1 1 5 93 ## [6,] 1 1 6 92 tail(res); ## [,1] [,2] [,3] [,4] ## [156844,] 95 2 2 1 ## [156845,] 95 3 1 1 ## [156846,] 96 1 1 2 ## [156847,] 96 1 2 1 ## [156848,] 96 2 1 1 ## [156849,] 97 1 1 1 ``` You can divide by 100 to get fractions, as opposed to integers: ``` head(res)/100; ## [,1] [,2] [,3] [,4] ## [1,] 0.01 0.01 0.01 0.97 ## [2,] 0.01 0.01 0.02 0.96 ## [3,] 0.01 0.01 0.03 0.95 ## [4,] 0.01 0.01 0.04 0.94 ## [5,] 0.01 0.01 0.05 0.93 ## [6,] 0.01 0.01 0.06 0.92 ``` --- # Explanation First let's define the inputs: - `s` This is the target value to which each row in the output matrix should sum. - `m` This is number of columns to produce in the output matrix. It is more efficient and reliable to compute the result using integer arithmetic, as opposed to floating-point arithmetic, so I designed my solution to work only with integers. Hence `s` is a scalar integer representing the target integer sum. --- Now let's examine the sequence generated by `seq_len()` for the non-base case: ``` seq_len(s-m+1L) ``` This generates a sequence from 1 to the highest possible value that could be part of a sum to `s` with `m` columns remaining. For example, think about the case of `s=100,m=4`: the highest number we can use is 97, participating in a sum of 97+1+1+1. Each remaining column reduces the highest possible value by 1, which is why we must subtract `m` from `s` when computing the sequence length. Each element of the generated sequence should be viewed as one possible "selection" of an addend in the summation. --- ``` do.call(rbind,lapply(seq_len(s-m+1L),function(x) ...)) ``` For each of the selections, we must then recurse. We can use `lapply()` to do this. To jump ahead, the lambda will make a single recursive call to `permsum()` and then `cbind()` the return value with the current selection. This will produce a matrix, always of the same width for this level of recursion. Hence, the `lapply()` call will return a list of matrices, all of the same width. We must then row-bind them together, which is why we must use the `do.call(rbind,...)` trick here. --- ``` unname(cbind(x,permsum(s-x,m-1L))) ``` The body of the lambda is fairly simple; we `cbind()` the current selection `x` with the return value of the recursive call, completing the summation for this submatrix. Unfortunately we must call `unname()`, otherwise each column that ends up being set from the `x` argument will have column name `x`. The most important detail here is the choice of arguments to the recursive call. First, because the lambda argument `x` has just been selected out during the current recursive evaluation, we must subtract it from `s` to get a new summation target, which the impending recursive call will be responsible for attaining. Hence the first argument becomes `s-x`. Second, because the selection of `x` takes up one column, we must subtract 1 from `m`, so that the recursive call will have one fewer column to produce in its output matrix. --- ``` if (m==1L) matrix(s) else ... ``` Lastly, let's examine the base case. In every evaluation of the recursive function we must check if `m` has reached 1, in which case we can simply return the required sum `s` itself. 
--- # Floating-point discrepancy I looked into the discrepancy between my results and psidom's results. For example: ``` library(data.table); bgoldst <- function(s,m) permsum(s,m)/s; psidom <- function(ss,m) { raw <- do.call(data.table::CJ,rep(list(ss),m)); raw[rowSums(raw)==1,]; }; ## helper function to sort a matrix by columns smp <- function(m) m[do.call(order,as.data.frame(m)),]; s <- 100L; m <- 3L; ss <- seq_len(s-1L)/s; x <- smp(bgoldst(s,m)); y <- smp(unname(as.matrix(psidom(ss,m)))); nrow(x); ## [1] 4851 nrow(y); ## [1] 4809 ``` So there's a 42 row discrepancy between our two results. I decided to try to find exactly which permutations were omitted with the following line of code. Basically, it compares each element of the two matrices and prints the comparison result as a logical matrix. We can scan down the scrollback to find the first differing row. Below is the excerpted output: ``` x==do.call(rbind,c(list(y),rep(list(NA),nrow(x)-nrow(y)))); ## [,1] [,2] [,3] ## [1,] TRUE TRUE TRUE ## [2,] TRUE TRUE TRUE ## [3,] TRUE TRUE TRUE ## [4,] TRUE TRUE TRUE ## [5,] TRUE TRUE TRUE ## ## ... snip ... ## ## [24,] TRUE TRUE TRUE ## [25,] TRUE TRUE TRUE ## [26,] TRUE TRUE TRUE ## [27,] TRUE TRUE TRUE ## [28,] TRUE TRUE TRUE ## [29,] TRUE FALSE FALSE ## [30,] TRUE FALSE FALSE ## [31,] TRUE FALSE FALSE ## [32,] TRUE FALSE FALSE ## [33,] TRUE FALSE FALSE ## ## ... snip ... ``` So it's at row 29 where we have the first discrepancy. Here's a window around that row in each permutation matrix: ``` win <- 27:31; x[win,]; y[win,]; ## [,1] [,2] [,3] ## [1,] 0.01 0.27 0.72 ## [2,] 0.01 0.28 0.71 ## [3,] 0.01 0.29 0.70 (missing from y) ## [4,] 0.01 0.30 0.69 (missing from y) ## [5,] 0.01 0.31 0.68 ## [,1] [,2] [,3] ## [1,] 0.01 0.27 0.72 ## [2,] 0.01 0.28 0.71 ## [3,] 0.01 0.31 0.68 ## [4,] 0.01 0.32 0.67 ## [5,] 0.01 0.33 0.66 ``` Interestingly, the missing permutations normally do sum to exactly 1 when you compute the sum manually. At first I thought it was data.table's `CJ()` function that was doing something strange with floats, but further testing seems to indicate it's something `rowSums()` is doing: ``` 0.01+0.29+0.70==1; ## [1] TRUE ss[1L]+ss[29L]+ss[70L]==1; ## [1] TRUE rowSums(CJ(0.01,0.29,0.70))==1; ## looks like CJ()'s fault, but wait... ## [1] FALSE cj <- CJ(0.01,0.29,0.70); cj$V1+cj$V2+cj$V3==1; ## not CJ()'s fault ## [1] TRUE rowSums(matrix(c(0.01,0.29,0.70),1L,byrow=T))==1; ## rowSums()'s fault ## [1] FALSE ``` We can work around this `rowSums()` quirk by applying a manual (and somewhat arbitrary) tolerance in the floating-point comparison. To do this we need to take the absolute difference and then perform a less-than comparison against the tolerance: ``` abs(rowSums(CJ(0.01,0.29,0.70))-1)<1e-10; ## [1] TRUE ``` Hence: ``` psidom2 <- function(ss,m) { raw <- do.call(data.table::CJ,rep(list(ss),m)); raw[abs(rowSums(raw)-1)<1e-10,]; }; y <- smp(unname(as.matrix(psidom2(ss,m)))); nrow(y); ## [1] 4851 identical(x,y); ## [1] TRUE ``` --- # Combinations Thanks to Joseph Wood for pointing out that this is really *permutations*. I originally named my function `combsum()`, but I renamed it to `permsum()` to reflect this revelation. 
And, as Joseph suggested, it is possible to modify the algorithm to produce combinations, which can be done as follows, now living up to the name `combsum()`: ``` combsum <- function(s,m,l=s) if (m==1L) matrix(s) else do.call(rbind,lapply(seq((s+m-1L)%/%m,min(l,s-m+1L)),function(x) unname(cbind(x,combsum(s-x,m-1L,x))))); res <- combsum(100L,4L); head(res); ## [,1] [,2] [,3] [,4] ## [1,] 25 25 25 25 ## [2,] 26 25 25 24 ## [3,] 26 26 24 24 ## [4,] 26 26 25 23 ## [5,] 26 26 26 22 ## [6,] 27 25 24 24 tail(res); ## [,1] [,2] [,3] [,4] ## [7148,] 94 3 2 1 ## [7149,] 94 4 1 1 ## [7150,] 95 2 2 1 ## [7151,] 95 3 1 1 ## [7152,] 96 2 1 1 ## [7153,] 97 1 1 1 ``` This required 3 changes. First, I added a new parameter `l`, which stands for "limit". Basically, in order to guarantee that each recursion generates a unique combination, I enforce that each selection must be *less than or equal to* any previous selection in the current combination. This required taking the current upper limit as a parameter `l`. On the top-level call `l` can just be defaulted to `s`, which is actually too high anyway for cases where `m>1`, but that's not a problem, since it's just one of two upper limits that will be applied during sequence generation. The second change was of course to pass the latest selection `x` as the argument to `l` when making the recursive call in the `lapply()` lambda. The final change is the trickiest. The selection sequence must now be computed as follows: ``` seq((s+m-1L)%/%m,min(l,s-m+1L)) ``` The lower limit had to be raised from the 1 used in `permsum()` to the lowest possible selection that would still allow a descending-magnitude combination. The lowest possible selection of course depends on how many columns have yet to be produced; the more columns, the more "room" we have to leave for future selections. The formula is to take an integer division of `s` on `m`, but we also must effectively "round up", which is why I add `m-1L` prior to taking the division. I also considered doing a floating-point division and then calling `as.integer(ceiling(...))`, but I think the all-integer approach is much better. For example, consider the case of `s=10,m=3`. To produce a sum of 10 with 3 columns remaining, we cannot make a selection less than 4, because then we would not have enough quantity to produce 10 without ascending along the combination. In this case, the formula divides 12 by 3 to give 4. The upper limit can be computed from the same formula used in `permsum()`, except that we must also apply the current limit `l` using a call to `min()`. 
--- I've verified that my new `combsum()` behaves identically to Joseph's `IntegerPartitionsOfLength()` function for many random test cases with the following code: ``` ## helper function to sort a matrix within each row and then by columns smc <- function(m) smp(t(apply(m,1L,sort))); ## test loop for (i in seq_len(1000L)) { repeat { s <- sample(1:100,1L); m <- sample(2:5,1L); if (s>=m) break; }; x <- combsum(s,m); y <- IntegerPartitionsOfLength(s,m); cat(paste0(s,',',m,'\n')); if (!identical(smc(x),smc(y))) stop('bad.'); }; ``` --- # Benchmarking Common self-contained test code: ``` library(microbenchmark); library(data.table); library(partitions); library(gtools); permsum <- function(s,m) if (m==1L) matrix(s) else do.call(rbind,lapply(seq_len(s-m+1L),function(x) unname(cbind(x,permsum(s-x,m-1L))))); combsum <- function(s,m,l=s) if (m==1L) matrix(s) else do.call(rbind,lapply(seq((s+m-1L)%/%m,min(l,s-m+1L)),function(x) unname(cbind(x,combsum(s-x,m-1L,x))))); IntegerPartitionsOfLength <- function(n, Lim, combsOnly = TRUE) { a <- 0L:n; k <- 2L; a[2L] <- n; MyParts <- vector("list", length=P(n)); count <- 0L; while (!(k==1L) && k <= Lim + 1L) { x <- a[k-1L]+1L; y <- a[k]-1L; k <- k-1L; while (x<=y && k <= Lim) {a[k] <- x; y <- y-x; k <- k+1L}; a[k] <- x+y; if (k==Lim) { count <- count+1L; MyParts[[count]] <- a[1L:k]; }; }; MyParts <- MyParts[1:count]; if (combsOnly) {do.call(rbind, MyParts)} else {MyParts}; }; GetDecimalReps <- function(s,m) { myPerms <- permutations(m,m); lim <- nrow(myPerms); intParts <- IntegerPartitionsOfLength(s,m,FALSE); do.call(rbind, lapply(intParts, function(x) { unique(t(sapply(1L:lim, function(y) x[myPerms[y, ]]))); })); }; smp <- function(m) m[do.call(order,as.data.frame(m)),]; smc <- function(m) smp(t(apply(m,1L,sort))); bgoldst.perm <- function(s,m) permsum(s,m)/s; psidom2 <- function(ss,m) { raw <- do.call(data.table::CJ,rep(list(ss),m)); raw[abs(rowSums(raw)-1)<1e-10,]; }; joseph.perm <- function(s,m) GetDecimalReps(s,m)/s; bgoldst.comb <- function(s,m) combsum(s,m)/s; joseph.comb <- function(s,m) IntegerPartitionsOfLength(s,m)/s; ``` --- **Permutations** ``` ## small scale s <- 10L; m <- 3L; ss <- seq_len(s-1L)/s; ex <- smp(bgoldst.perm(s,m)); identical(ex,smp(unname(as.matrix(psidom2(ss,m))))); ## [1] TRUE identical(ex,smp(joseph.perm(s,m))); ## [1] TRUE microbenchmark(bgoldst.perm(s,m),psidom2(ss,m),joseph.perm(s,m)); ## Unit: microseconds ## expr min lq mean median uq max neval ## bgoldst.perm(s, m) 347.254 389.5920 469.1011 420.383 478.7575 1869.697 100 ## psidom2(ss, m) 702.206 830.5015 1007.5111 907.265 1038.3405 2618.089 100 ## joseph.perm(s, m) 1225.225 1392.8640 1722.0070 1506.833 1860.0745 4411.234 100 ``` --- ``` ## large scale s <- 100L; m <- 4L; ss <- seq_len(s-1L)/s; ex <- smp(bgoldst.perm(s,m)); identical(ex,smp(unname(as.matrix(psidom2(ss,m))))); ## [1] TRUE identical(ex,smp(joseph.perm(s,m))); ## [1] TRUE microbenchmark(bgoldst.perm(s,m),psidom2(ss,m),joseph.perm(s,m),times=5L); ## Unit: seconds ## expr min lq mean median uq max neval ## bgoldst.perm(s, m) 1.286856 1.304177 1.426376 1.374411 1.399850 1.766585 5 ## psidom2(ss, m) 6.673545 7.046951 7.416161 7.115375 7.629177 8.615757 5 ## joseph.perm(s, m) 5.299452 10.499891 13.769363 12.680607 15.107748 25.259117 5 ``` --- ``` ## very large scale s <- 100L; m <- 5L; ss <- seq_len(s-1L)/s; ex <- smp(bgoldst.perm(s,m)); identical(ex,smp(unname(as.matrix(psidom2(ss,m))))); ## Error: cannot allocate vector of size 70.9 Gb identical(ex,smp(joseph.perm(s,m))); ## [1] TRUE 
microbenchmark(bgoldst.perm(s,m),joseph.perm(s,m),times=1L); ## Unit: seconds ## expr min lq mean median uq max neval ## bgoldst.perm(s, m) 28.58359 28.58359 28.58359 28.58359 28.58359 28.58359 1 ## joseph.perm(s, m) 50.51965 50.51965 50.51965 50.51965 50.51965 50.51965 1 ``` --- **Combinations** ``` ## small-scale s <- 10L; m <- 3L; ex <- smc(bgoldst.comb(s,m)); identical(ex,smc(joseph.comb(s,m))); ## [1] TRUE microbenchmark(bgoldst.comb(s,m),joseph.comb(s,m)); ## Unit: microseconds ## expr min lq mean median uq max neval ## bgoldst.comb(s, m) 161.225 179.6145 205.0898 187.3120 199.5005 1310.328 100 ## joseph.comb(s, m) 172.344 191.8025 204.5681 197.7895 205.2735 437.489 100 ``` --- ``` ## large-scale s <- 100L; m <- 4L; ex <- smc(bgoldst.comb(s,m)); identical(ex,smc(joseph.comb(s,m))); ## [1] TRUE microbenchmark(bgoldst.comb(s,m),joseph.comb(s,m),times=5L); ## Unit: milliseconds ## expr min lq mean median uq max neval ## bgoldst.comb(s, m) 409.0708 485.9739 556.4792 591.4774 627.419 668.4548 5 ## joseph.comb(s, m) 2164.2134 3315.0138 3317.9725 3540.6240 3713.732 3856.2793 5 ``` --- ``` ## very large scale s <- 100L; m <- 6L; ex <- smc(bgoldst.comb(s,m)); identical(ex,smc(joseph.comb(s,m))); ## [1] TRUE microbenchmark(bgoldst.comb(s,m),joseph.comb(s,m),times=1L); ## Unit: seconds ## expr min lq mean median uq max neval ## bgoldst.comb(s, m) 2.498588 2.498588 2.498588 2.498588 2.498588 2.498588 1 ## joseph.comb(s, m) 12.344261 12.344261 12.344261 12.344261 12.344261 12.344261 1 ```
Scan QRcode with inverted colors using Vision API After struggling a few hours on making my app detect this QRCode:[![original QRCode](https://i.stack.imgur.com/yvfY0.png)](https://i.stack.imgur.com/yvfY0.png) I realized that the problem was the in the QRCode appearance. After inverting the colors, the detection was working perfectly.[![inverted QRCode](https://i.stack.imgur.com/Mspmr.png)](https://i.stack.imgur.com/Mspmr.png). Is there a way to make Vision API detect the first QRCode? I tried to enable all symbologies but it did not work. I guess it is possible because the app [QR Code Reader](https://play.google.com/store/apps/details?id=tw.mobileapp.qrcode.banner&hl=fr) detects it.
I improved googles example app "barcode-reader" to detect both inverted colored barcodes and regular ones. here is a link to googles example app: <https://github.com/googlesamples/android-vision/tree/master/visionSamples/barcode-reader> I did so by editing "CameraSource" class, package: `"com.google.android.gms.samples.vision.barcodereader.ui.camera"`. I added a parameter: `private boolean isInverted = false;` and changed function `void setNextFrame(byte[] data, Camera camera)`: ``` void setNextFrame(byte[] data, Camera camera) { synchronized (mLock) { if (mPendingFrameData != null) { camera.addCallbackBuffer(mPendingFrameData.array()); mPendingFrameData = null; } if (!mBytesToByteBuffer.containsKey(data)) { Log.d(TAG, "Skipping frame. Could not find ByteBuffer associated with the image " + "data from the camera."); return; } mPendingTimeMillis = SystemClock.elapsedRealtime() - mStartTimeMillis; mPendingFrameId++; if (!isInverted){ for (int y = 0; y < data.length; y++) { data[y] = (byte) ~data[y]; } isInverted = true; } else { isInverted = false; } mPendingFrameData = mBytesToByteBuffer.get(data); // Notify the processor thread if it is waiting on the next frame (see below). mLock.notifyAll(); } } ```
With SonataAdminBundle. Configure filter on a two step related entity

I'd like to know whether it is possible, and how, to configure a filter for the list view as described below with SonataAdminBundle in Symfony 2.

Say I have entities Order, pointing to entities User, pointing to entities Company. I want to configure filters both for filtering by User and for filtering by Company (the User's Company). The first is straightforward. The second is what I am trying to clarify.

In the class OrderAdmin I would override configureDatagridFilters as:

```
protected function configureDatagridFilters(DatagridMapper $datagridMapper)
{
    $datagridMapper
        ->add('created_at')
        //... some other filters on Order fields, as usual

        // the filter on User, provided 'user', no problem
        ->add('user')

        // and the filter by Company
        ->add('user.company') // this doesn't work, of course
    ;
}
```

That syntax for the company filter is inspired by the Sonata docs: <http://sonata-project.org/bundles/doctrine-orm-admin/2-0/doc/reference/filter_field_definition.html>

It is not intended for what I am trying to accomplish, but I cannot find where else to look. Hope someone has a clue on this. Thanks
Finally I found an answer, guided by this other question: [How can I create a custom DataGrid filter in SonataAdmin](https://stackoverflow.com/questions/10224280/how-can-i-create-a-custom-datagrid-filter-in-sonataadmin) and by a closer read of the Sonata admin docs link I pasted in my question.

In case someone has this issue, and keeping the previous example:

```
protected function configureDatagridFilters(DatagridMapper $datagridMapper)
{
    $datagridMapper
        //... whatever filter

        // and the filter by Company
        ->add('company', 'doctrine_orm_callback', array(
                'callback' => array($this, 'callbackFilterCompany'),
                'field_type' => 'checkbox'
            ), 'choice', array('choices' => $this -> getCompanyChoices())
    ;
}
```

where the method getCompanyChoices retrieves an associative array of company ids => company names (for instance), and the method callbackFilterCompany is as follows:

```
public function callbackFilterCompany ($queryBuilder, $alias, $field, $value)
{
    if(!is_array($value) or !array_key_exists('value', $value) or empty($value['value'])){
        return;
    }

    $queryBuilder
        ->leftJoin(sprintf('%s.user', $alias), 'u')
        ->leftJoin('u.company', 'c')
        ->andWhere('c.id = :id')
        ->setParameter('id', $value['value'])
    ;

    return true;
}
```
Unexpectedly able to call derived-class virtual function from base class ctor Can anyone help explain this unexpected behavior? **The Premise** I've created class Thread that contains a member `std::thread` variable. Thread's ctor constructs the member `std::thread` providing a pointer to a static function that calls a pure virtual function (to be implemented by base classes). **The Code** ``` #include <iostream> #include <thread> #include <chrono> namespace { class Thread { public: Thread() : mThread(ThreadStart, this) { std::cout << __PRETTY_FUNCTION__ << std::endl; // This line commented later in the question. } virtual ~Thread() { } static void ThreadStart(void* pObj) { ((Thread*)pObj)->Run(); } void join() { mThread.join(); } virtual void Run() = 0; protected: std::thread mThread; }; class Verbose { public: Verbose(int i) { std::cout << __PRETTY_FUNCTION__ << ": " << i << std::endl; } ~Verbose() { } }; class A : public Thread { public: A(int i) : Thread() , mV(i) { } virtual ~A() { } virtual void Run() { for (unsigned i = 0; i < 5; ++i) { std::cout << __PRETTY_FUNCTION__ << ": " << i << std::endl; std::this_thread::sleep_for(std::chrono::seconds(1)); } } protected: Verbose mV; }; } int main(int argc, char* argv[]) { A a(42); a.join(); return 0; } ``` **The Problem** As you may have already noticed, there's a subtle bug here: `Thread::ThreadStart(...)` is called from the `Thread` ctor context, therefore calling a pure/virtual function will not call the derived class' implementation. This is borne out by the runtime error: ``` pure virtual method called terminate called without an active exception Aborted ``` **However**, there is unexpected runtime behavior if I remove the call to `std::cout` in the `Thread` ctor: ``` virtual void {anonymous}::A::Run(){anonymous}::Verbose::Verbose(int): : 042 virtual void {anonymous}::A::Run(): 1 virtual void {anonymous}::A::Run(): 2 virtual void {anonymous}::A::Run(): 3 virtual void {anonymous}::A::Run(): 4 ``` I.e. removing the call to `std::cout` in the `Thread` ctor seems to have the effect of being able to call a derived class' pure/virtual function from the base class` constructor context! This doesn't align with prior learning and experience. Build environment in Cygwin x64 on Windows 10. gcc version is: ``` g++ (GCC) 5.4.0 Copyright (C) 2015 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` I'm baffled by this observation and am burning with curiosity about what's going on. Can anyone shed light?
The behavior of this program is undefined, due to a race condition. But, if you want to reason about it, let's try.

For `A`'s construction, here's what happens:

1. `mThread` is initialized. The OS schedules it to start at some point in the future.
2. `std::cout << __PRETTY_FUNCTION__ << std::endl;` - this is a fairly slow operation from the program's perspective.
3. `A` constructor runs - initializing its *vtable* (this is not mandated by the standard, but as far as I know, all implementations do this). *If this happens before `mThread` is scheduled to start, you get the behaviour you observed.* Otherwise, you get the pure virtual call.

Because those operations aren't in any way sequenced, the behaviour is undefined.

You can notice that you removed a fairly slow operation from your base's constructor, thus initializing your derived class - and its vtable - much faster. Say, before the OS actually scheduled `mThread`'s thread to start. That being said, this did not fix the problem, just made encountering it less likely.

If you modify your example a bit, you'll notice that removing the IO code made the race harder to find, but fixed nothing.

```
virtual void Run() {
    for (unsigned i = 0; i < 1; ++i) {
        std::cout << __PRETTY_FUNCTION__ << ": " << i << std::endl;
        // std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```

main:

```
for(int i = 0; i < 10000; ++i){
    A a(42);
    a.join();
}
```

[demo](http://melpon.org/wandbox/permlink/wQgz4kiDr4dekEZ5)
(Delphi) Call Child's procedure from Parent How can parent procedure call it's child's procedure which was override from itself? ``` type TBase = class(TForm) procedure BtnRefreshClick(Sender: TObject); protected text: string; end; procedure TBase.BtnRefreshClick(Sender: TObject); begin showmessage(text); end; ``` --- ``` type TParent = class(TBase) protected procedure doThis; end; procedure TParent.doThis; begin // blah blah do something BtnRefreshClick(nil); end; ``` --- ``` type TChild = class(TParent) procedure BtnRefreshClick(Sender: TObject); protected procedure clicky; override; end; procedure TChild.BtnRefreshClick(Sender: TObject); begin text := 'Hello, World!'; inherited; end; ``` --- Actual calling procedure will be somewhat like : ``` child := TChild.Create; child.doThis; ``` --- If I try to do `child.BtnRefreshClick;` then it will produce a dialog of `Hello, World!` since `TChild.BtnRefreshClick` was called and the `text` variable were set. But when I call `child.doThis` it will only shows an empty dialog box since `child.doThis` calls `parent.doThis` and then `??.BtnRefreshClick`. How to make `parent.doThis` call `child.BtnRefreshClick`? Is it not possible? Thanks in advance, Yohan W.
The parent class calls the base class's method because that's the only method that existed within the scope of the parent class. When the compiler compiled the `TParent` code, it bound the name `BtnRefreshClick` to the method in the base class, not the one in the child class, because the child class is not an ancestor of the parent. In general, for a parent object to call a method of a child class, that method should be declared in the parent object (or higher) and be *virtual*. If you change `TBase` to make `BtnRefreshClick` virtual, and you change `TChild` to *override* that same method, then when `TParent.doThis` calls it, the call will be dispatched to the `TChild` method. ``` type TBase = class(TForm) procedure BtnRefreshClick(Sender: TObject); virtual; end; TChild = class(TParent) procedure BtnRefreshClick(Sender: TObject); override; end; ``` In the particular case of a form with method properties assigned by name via DFM settings, another solution is as [Saintfalcon's answer](https://stackoverflow.com/a/22618097/33732) demonstrates, which is to call the associated button's `Click` method. When the button on the `TChild` form is instantiated, the VCL reads the DFM resource and finds the string "BtnRefreshClick" associated with the button's `OnClick` event. It uses the form's [`MethodAddress`](http://docwiki.embarcadero.com/Libraries/en/System.TObject.MethodAddress) function to look up the address of the method with that name, and it finds the one belonging to `TChild`. It assigns that value to the `OnClick` property. The `Click` method reads that property and calls whatever method is there. I've written before about [the differences between calling an event-handler method directly, calling the event-handler property, and calling the event trigger](https://stackoverflow.com/a/957996/33732), but at that time, I hadn't considered the aspect illustrated here, where a handler is hidden or overridden in a descendant class.
How to make a generic of a class is required? The class is: ``` class Test<P> { constructor(data: P) {} } ``` I hope the following code does not pass the type check because it has no incoming generic: ``` new Test({ a: 1 }) ``` I know that the generic P above is automatically derived as `{a: number}`, but this is not what I want, the following is. ``` new Test< {a: number} >({ a: 1 }) ``` I tried a lot of methods, but in the end the generic P will be automatically derived into the constructor's parameter type.
There is a question that deals with a very similar issue [here](https://stackoverflow.com/questions/51173191/typescript-require-generic-parameter-to-be-provided): ``` async function get<U = void>(url: string & (U extends void ? "You must provide a type parameter" : string)): Promise<U> { return null as any; } ``` The difference is that in that case the type parameter was not used in the parameters at all. This means typescript had no place to infer it the type parameter from. If, as in your case the type parameter is used in the argument list, typescript will use the argument as a source to infer the type parameter from and our trick of using the default value as a signal that no type parameter was explicitly specified will not work (since typescript not use the default if it can infer the type parameter). The solution is simple enough, let typescript know that we don't want it to infer `T` from a specific parameter. While there is no builtin support to do this jcalz offers a reasonable workaround [here](https://github.com/microsoft/TypeScript/issues/14829#issuecomment-504042546) ``` type NoInfer<T> = [T][T extends any ? 0 : never]; class Test<P = void> { constructor(data: NoInfer<P> & (P extends void ? "No type parameter was supplied" : {})) {} } new Test({ a: " "}) // err Argument of type '{ a: string; }' is not assignable to parameter of type 'void & "No type parameter was supplied"'. new Test<{ a: string }>({ a: " "})// ok ``` [play](http://www.typescriptlang.org/play/#code/C4TwDgpgBAcg9gSQHYDMICcA8AVAfFAXigG1sBdUqCAD2AiQBMBnKAQyRCgH4oAGKAFxQkEAG4YyAbgBQAYwA2rJi2wQmwTAAVCUUXACWDfAG9pUKLLhJ16AK6zgcdAAoGrYKyHxkaLJvwAZFDO2jR0jCx6htxQAETwUKCQUGCs6KwAthB06FAA7kpQTLZgYPL6EAyxglDGAL4AlA21ddKt0iJ5UKrqzsZsQtWxjVAA9KNU6LkAgugA5rZZSMBQcCiJ4NAA5P2eRcDo+khzklB1W1D6LEhwK0pM+nNIrABG8tCOKWmZ2Rir60ltlEGFAgvE4Btkql0lkcvlCsVSuVKrEtgA6DoQLo9DS7IQ2I5zM64PoDOJxRrjVYAayAA)
Understanding the `ctags -e` file format (ctags for emacs) I am using "ExuberantCtags" also known as "ctags -e", also known as just "etags" and I am trying to understand the TAGS file format which is generated by the etags command, in particular I want to understand line #2 of the TAGS file. [Wikipedia says](http://en.wikipedia.org/wiki/Ctags#Etags_2) that line #2 is described like this: ``` {src_file},{size_of_tag_definition_data_in_bytes} ``` In practical terms though TAGS file line:2 for "foo.c" looks like this ``` foo.c,1683 ``` My quandary is how exactly does it find this number: 1683 I know it is the size of the "tag\_definition" so what I want to know is what is the "tag\_definition"? I have tried looking through the [ctags source code](http://ctags.sourceforge.net/), but perhaps someone better at C than me will have more success figuring this out. Thanks! EDIT #2: ``` ^L^J hello.c,79^J float foo (float x) {^?foo^A3,20^J float bar () {^?bar^A7,59^J int main() {^?main^A11,91^J ``` Alright, so if I understand correctly, "79" refers to the number of bytes in the TAGS file from after 79 down to and including "91^J". Makes perfect sense. Now the numbers 20, 59, 91 in this example wikipedia says refer to the {byte\_offset} What is the {byte\_offset} offset from? Thanks for all the help Ken!
It's the number of bytes of tag data following the newline after the number. Edit: It also doesn't include the ^L character between file tag data. Remember etags comes from a time long ago where reading a 500KB file was an expensive operation. ;) Here's a complete tags file. I'm showing it two ways, the first with control characters as ^X and no invisible characters. The end-of-line characters implicit in your example are ^J here: ``` ^L^J hello.cc,45^J int main(^?5,41^J int foo(^?9,92^J int bar(^?13,121^J ^L^J hello.h,15^J #define X ^?2,1^J ``` Here's the same file displayed in hex: ``` 0000000 0c 0a 68 65 6c 6c 6f 2e 63 63 2c 34 35 0a 69 6e ff nl h e l l o . c c , 4 5 nl i n 0000020 74 20 6d 61 69 6e 28 7f 35 2c 34 31 0a 69 6e 74 t sp m a i n ( del 5 , 4 1 nl i n t 0000040 20 66 6f 6f 28 7f 39 2c 39 32 0a 69 6e 74 20 62 sp f o o ( del 9 , 9 2 nl i n t sp b 0000060 61 72 28 7f 31 33 2c 31 32 31 0a 0c 0a 68 65 6c a r ( del 1 3 , 1 2 1 nl ff nl h e l 0000100 6c 6f 2e 68 2c 31 35 0a 23 64 65 66 69 6e 65 20 l o . h , 1 5 nl # d e f i n e sp 0000120 58 20 7f 32 2c 31 0a X sp del 2 , 1 nl ``` There are two sets of tag data in this example: 45 bytes of data for hello.cc and 15 bytes for hello.h. The hello.cc data starts on the line following "hello.cc,45^J" and runs for 45 bytes--this also happens to be complete lines. The reason why bytes are given is so code reading the file can just allocate room for a 45 byte string and read 45 bytes. The "^L^J" line is after the 45 bytes of tag data. You use this as a marker that there are more files remaining and also to verify that the file is properly formatted. The hello.h data starts on the line following "hello.h,15^J" and runs for 15 bytes.
cross platform native open/save file dialogs I'm writing a UI for my program using opengl with SDL in a combination of Lua and C++ What I need now is some library that will allow me to call a function that presents the user with a file select dialog for opening/saving a file. But if the OS offers **native** functionality for such a dialog, then I want to use that dialog (eg Window's GetOpenFileName). The only platforms I need to support are Windows and Linux, but I want to be able to still use most of the SDL & openGL code I've already written. What options are available?
In my opinion, [wxWidgets](http://www.wxwidgets.org/) is *the* battle tested, mature, cross platform, cross language, free, open source GUI library.

Besides wxWidgets, there are also many other such cross platform libraries, such as:

- [Tk](http://www.tkdocs.com)
- [FLTK](http://www.fltk.org/index.php)
- [FOX](http://www.fox-toolkit.org/)
- [XUL](https://developer.mozilla.org/en/XUL)
- [AbiWord's cross platform layer](http://www.abiword.org/)

... and more.

You will most likely find what you're looking for in the AbiWord source code. Look for "FileOpen" inside the src directory:

```
./af/xap/cocoa/xap_CocoaDlg_FileOpenSaveAs.cpp
./af/xap/cocoa/xap_CocoaDlg_FileOpenSaveAs.h
./af/xap/cocoa/xap_CocoaFileOpen_Views.nib
./af/xap/cocoa/xap_CocoaFileOpen_Views.nib/classes.nib
./af/xap/cocoa/xap_CocoaFileOpen_Views.nib/info.nib
./af/xap/cocoa/xap_CocoaFileOpen_Views.nib/keyedobjects.nib
./af/xap/gtk/xap_UnixDlg_FileOpenSaveAs.cpp
./af/xap/gtk/xap_UnixDlg_FileOpenSaveAs.h
./af/xap/win/xap_Win32Dlg_FileOpenSaveAs.cpp
./af/xap/win/xap_Win32Dlg_FileOpenSaveAs.h
./af/xap/xp/xap_Dlg_FileOpenSaveAs.cpp
./af/xap/xp/xap_Dlg_FileOpenSaveAs.h
```

The Windows version does indeed use `GetOpenFileName()`.

---

Of course, as [Andre](https://stackoverflow.com/users/417197/andre) points out while downvoting a valid alternative answer, there's also Qt, as you already know. Refer to [raj's answer](https://stackoverflow.com/questions/6145910/cross-platform-native-open-save-file-dialogs#answer-6145962) for a link.
Uploading/Displaying Images in MVC 4 Anyone know of any step by step tutorials on how to upload/display images from a database using Entity Framework? I've checked out code snippets, but I'm still not clear on how it works. I have no code, because aside from writing an upload form, I'm lost. Any (and I mean any) help is greatly appreciated. On a sidenote, why don't any books cover this topic? I have both Pro ASP.NET MVC 4 and Professional MVC4, and they make no mention of it.
Have a look at the following

```
@using (Html.BeginForm("FileUpload", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <label for="file">Upload Image:</label>
    <input type="file" name="file" id="file" style="width: 100%;" />
    <input type="submit" value="Upload" class="submit" />
}
```

Your controller should have an action method which accepts `HttpPostedFileBase`:

```
public ActionResult FileUpload(HttpPostedFileBase file)
{
    if (file != null)
    {
        string pic = System.IO.Path.GetFileName(file.FileName);
        string path = System.IO.Path.Combine(
                               Server.MapPath("~/images/profile"), pic);
        // file is uploaded
        file.SaveAs(path);

        // save the image path to the database, or you can send the image
        // directly to the database
        // in case you want to store a byte[], i.e. for the DB
        using (MemoryStream ms = new MemoryStream())
        {
            file.InputStream.CopyTo(ms);
            // ToArray() returns only the bytes actually written,
            // unlike GetBuffer(), which returns the whole internal buffer
            byte[] array = ms.ToArray();
        }
    }
    // after successfully uploading redirect the user
    return RedirectToAction("actionname", "controller name");
}
```

**Update 1**

In case you want to upload files asynchronously with jQuery, then try [this article](http://hayageek.com/jquery-multiple-file-upload/).

The code to handle the server side (for multiple uploads) is:

```
try
{
    HttpFileCollection hfc = HttpContext.Current.Request.Files;
    string path = "/content/files/contact/";

    for (int i = 0; i < hfc.Count; i++)
    {
        HttpPostedFile hpf = hfc[i];
        if (hpf.ContentLength > 0)
        {
            string fileName = "";
            if (Request.Browser.Browser == "IE")
            {
                fileName = Path.GetFileName(hpf.FileName);
            }
            else
            {
                fileName = hpf.FileName;
            }
            string fullPathWithFileName = path + fileName;
            hpf.SaveAs(Server.MapPath(fullPathWithFileName));
        }
    }
}
catch (Exception ex)
{
    throw; // rethrow without resetting the stack trace (was: throw ex;)
}
```

This control also returns the image name (in a JavaScript callback) which you can then use to display the image in the DOM.

### UPDATE 2

Alternatively, you can try **[Async File Uploads in MVC 4](http://weblogs.asp.net/bryansampica/archive/2013/01/15/AsyncMVCFileUpload.aspx)**.
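For the "displaying" half of the question: once the relative path has been saved, rendering it in a Razor view is straightforward. A minimal sketch, assuming a hypothetical `ImagePath` model property holding something like `"~/images/profile/photo.jpg"`:

```
@* Model.ImagePath is an assumed property containing the saved relative path *@
<img src="@Url.Content(Model.ImagePath)" alt="Profile image" />
```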
ContinueWith chaining not working as expected I have this example code: ``` static void Main(string[] args) { var t1 = Task.Run(async () => { Console.WriteLine("Putting in fake processing 1."); await Task.Delay(300); Console.WriteLine("Fake processing finished 1. "); }); var t2 = t1.ContinueWith(async (c) => { Console.WriteLine("Putting in fake processing 2."); await Task.Delay(200); Console.WriteLine("Fake processing finished 2."); }); var t3 = t2.ContinueWith(async (c) => { Console.WriteLine("Putting in fake processing 3."); await Task.Delay(100); Console.WriteLine("Fake processing finished 3."); }); Console.ReadLine(); } ``` The console output baffles me: - Putting in fake processing 1. - Fake processing finished 1. - Putting in fake processing 2. - Putting in fake processing 3. - Fake processing finished 3. - Fake processing finished 2. I am trying to chain the tasks so they execute one after another, what am I doing wrong? And I can't use await, this is just example code, in reality I am queueing incoming tasks (some asynchronous, some not) and want to execute them in the same order they came in but with no parallelism, ContinueWith seemed better than creating a ConcurrentQueue and handling everythning myself, but it just doesn't work...
Take a look at the type of `t2`. It's a `Task<Task>`. `t2` will be completed *when it finishes starting the task that does the actual work* not when that work actually finishes. The smallest change to your code to get it to work would be to add an `unwrap` after both your second and third calls to `ContinueWith`, so that you get out the task that represents the completion of your work. The more idiomatic solution would be to simply remove the `ContinueWith` calls entirely and just use `await` to add continuations to tasks. Interestingly enough, you would see the same behavior for `t1` if you used `Task.Factory.StartNew`, but `Task.Run` is specifically designed to work with `async` lambdas and actually internally unwraps all `Action<Task>` delegates to return the result of the task returned, rather than a task that represents starting that task, which is why you don't need to unwrap that task.
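To make that concrete, here is a sketch of both fixes applied to the original example (only the relevant parts shown):

```
// Option 1: Unwrap() the Task<Task> that ContinueWith returns for an async lambda
var t2 = t1.ContinueWith(async (c) =>
{
    Console.WriteLine("Putting in fake processing 2.");
    await Task.Delay(200);
    Console.WriteLine("Fake processing finished 2.");
}).Unwrap();

var t3 = t2.ContinueWith(async (c) =>
{
    Console.WriteLine("Putting in fake processing 3.");
    await Task.Delay(100);
    Console.WriteLine("Fake processing finished 3.");
}).Unwrap();

// Option 2 (more idiomatic): simply await each step in order
static async Task RunSequentiallyAsync()
{
    await Task.Run(async () => { /* step 1 */ await Task.Delay(300); });
    await Task.Run(async () => { /* step 2 */ await Task.Delay(200); });
    await Task.Run(async () => { /* step 3 */ await Task.Delay(100); });
}
```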
Where do I set DYLD\_LIBRARY\_PATH on Mac OS X, and is it a good idea? I am trying to install a solver written in C++ on my Mac (OS X), for use with code I have written in XCode. The solver documentation says this: > > Be sure to have "." in your `DYLD_LIBRARY_PATH` in order to > > > - run the ready-built executables > - link with the libamg.dylib (and the gfortran RTSlibs) > > > I don't really understand what this means. Where and what do I need to change what? I have done some googling, but haven't come across anything that is simple enough for a newbie like me! If there are any patient people out there who wouldn't mind directing me to an online resource or giving me the a-b-cs of how and where to set environment variables, I would be very grateful.
It's an environment variable and as such is usually set in Terminal by

```
export DYLD_LIBRARY_PATH=someValue
```

`man dyld` says:

> `DYLD_LIBRARY_PATH`
>
> This is a colon separated list of directories that contain libraries. The dynamic linker searches these directories before it searches the default locations for libraries. It allows you to test new versions of existing libraries.
>
> For each library that a program uses, the dynamic linker looks for it in each directory in `DYLD_LIBRARY_PATH` in turn. If it still can't find the library, it then searches `DYLD_FALLBACK_FRAMEWORK_PATH` and `DYLD_FALLBACK_LIBRARY_PATH` in turn.
>
> Use the `-L` option to `otool(1)` to discover the frameworks and shared libraries that the executable is linked against.

---

You'd probably want something like

```
export DYLD_LIBRARY_PATH=.:$DYLD_LIBRARY_PATH
```

to prepend `.` (the current directory) to the list of locations searched. On my unmodified OS X, `DYLD_LIBRARY_PATH` has no current value though:

```
$ echo $DYLD_LIBRARY_PATH

$
```

---

Depending on how you intend to run your program, you'd need to set this differently, e.g. in Xcode (I don't know where though).
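If you want the setting to survive new Terminal sessions rather than typing it each time, you could add the export to your shell startup file, for example:

```
# ~/.bash_profile (or the startup file of whichever shell you use)
export DYLD_LIBRARY_PATH=.:$DYLD_LIBRARY_PATH
```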
Node.js - HTTPS PFX Error: Unable to load BIO I'm trying to make a HTTPS request-promise. I already know that the PFX is good and that is not the issue (I have a similar sample app working). I am doing the following: ``` var request = require('request-promise'); ``` ... ``` options.pfx = fs.readFileSync('myfile.pfx'); options.passphrase = 'passphrase'; ``` I am passing my options into an request. ``` request.post(options); ``` I then try to build the request I get the following error: ``` _tls_common.js:130 c.context.loadPKCS12(pfx, passphrase); ^ Error: Unable to load BIO at Error (native) at Object.createSecureContext (_tls_common.js:130:17) at Object.exports.connect (_tls_wrap.js:955:21) at Agent.createConnection (https.js:73:22) at Agent.createSocket (_http_agent.js:174:16) at Agent.addRequest (_http_agent.js:143:23) at new ClientRequest (_http_client.js:133:16) at Object.exports.request (http.js:31:10) at Object.exports.request (https.js:163:15) at Request.start (/Users/filomeno/workspace/sla-crawler/node_modules/request/request.js:747:30) at Request.write (/Users/filomeno/workspace/sla-crawler/node_modules/request/request.js:1369:10) at end (/Users/filomeno/workspace/sla-crawler/node_modules/request/request.js:561:16) at Immediate._onImmediate (/Users/filomeno/workspace/sla-crawler/node_modules/request/request.js:589:7) at processImmediate [as _immediateCallback] (timers.js:374:17) ``` I have a sample app where the same code works. I've tried to convert to .p12 without success. Does anyone have an idea what this error might refer to? Edit: I'm using lodash to do a merge of 2 objects with dinamic properties and static properties ``` _.merge(options, _this.requestOptions); ``` And that was causing the problem
Looking at the nodejs source code (specifically this file <https://github.com/nodejs/node/blob/master/src/node_crypto.cc>) the error is thrown by this function ``` // Takes .pfx or .p12 and password in string or buffer format void SecureContext::LoadPKCS12(const FunctionCallbackInfo<Value>& args) { Environment* env = Environment::GetCurrent(args); ... ``` In line 964 ``` in = LoadBIO(env, args[0]); if (in == nullptr) { return env->ThrowError("Unable to load BIO"); } ``` Where the LoadBIO returns null ``` // Takes a string or buffer and loads it into a BIO. // Caller responsible for BIO_free_all-ing the returned object. static BIO* LoadBIO(Environment* env, Local<Value> v) { HandleScope scope(env->isolate()); if (v->IsString()) { const node::Utf8Value s(env->isolate(), v); return NodeBIO::NewFixed(*s, s.length()); } if (Buffer::HasInstance(v)) { return NodeBIO::NewFixed(Buffer::Data(v), Buffer::Length(v)); } return nullptr; } ``` Perhaps the buffer is somehow not readable? Also it seems that the function is expecting an utf-8 encoded string. Some ideas: Are you sure the path to the file is correct? Maybe encoding issue? Did you try to set `fs.readFileSync()` encoding explicitly? Try with `fs.readFile(<filename>, <encoding>, function(error, data){})` to see if it throws an error?
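Given the edit to the question (the options object was built with lodash's `_.merge`), it is also worth checking whether the merge preserves the `Buffer`: `_.merge` recurses into values and can end up copying a Buffer into a plain object/array, which would then fail the `Buffer::HasInstance` check in `LoadBIO` above. A sketch of a workaround, reusing the names from the question, is to assign the binary fields only after the merge:

```
_.merge(options, _this.requestOptions);      // merge the dynamic/static options first
options.pfx = fs.readFileSync('myfile.pfx'); // then set the Buffer so the merge never touches it
options.passphrase = 'passphrase';
```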
Automatically format a measurement into engineering units in Java I'm trying to find a way to automatically format a measurement and unit into a String in [engineering notation](http://en.wikipedia.org/wiki/Engineering_notation). This is a special case of scientific notation, in that the exponent is always a multiple of three, but is denoted using kilo, mega, milli, micro prefixes. This would be similar to [this post](https://stackoverflow.com/questions/4753251/how-to-go-about-formatting-1200-to-1-2k-in-java) except it should handle the whole range of SI units and prefixes. For example, I'm after a library that will format quantities such that: 12345.6789 Hz would be formatted as 12 kHz or 12.346 kHz or 12.3456789 kHz 1234567.89 J would be formatted as 1 MJ or 1.23 MJ or 1.2345 MJ And so on. JSR-275 / JScience handle the unit measurement ok, but I'm yet to find something that will work out the most appropriate scaling prefix automatically based on the magnitude of the measurement. Cheers, Sam.
``` import java.util.*; class Measurement { public static final Map<Integer,String> prefixes; static { Map<Integer,String> tempPrefixes = new HashMap<Integer,String>(); tempPrefixes.put(0,""); tempPrefixes.put(3,"k"); tempPrefixes.put(6,"M"); tempPrefixes.put(9,"G"); tempPrefixes.put(12,"T"); tempPrefixes.put(-3,"m"); tempPrefixes.put(-6,"u"); prefixes = Collections.unmodifiableMap(tempPrefixes); } String type; double value; public Measurement(double value, String type) { this.value = value; this.type = type; } public String toString() { double tval = value; int order = 0; while(tval > 1000.0) { tval /= 1000.0; order += 3; } while(tval < 1.0) { tval *= 1000.0; order -= 3; } return tval + prefixes.get(order) + type; } public static void main(String[] args) { Measurement dist = new Measurement(1337,"m"); // should be 1.337Km Measurement freq = new Measurement(12345678,"hz"); // should be 12.3Mhz Measurement tiny = new Measurement(0.00034,"m"); // should be 0.34mm System.out.println(dist); System.out.println(freq); System.out.println(tiny); } } ```
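Note that `toString` above prints the full double (e.g. `12.34567800000001Mhz`), so to get the "12.346 kHz"-style output from the question you would still need to round the scaled value; one way, purely as a sketch, is:

```
// inside toString(), instead of: return tval + prefixes.get(order) + type;
return String.format("%.3f%s%s", tval, prefixes.get(order), type);
```

You may also want to guard the `while (tval < 1.0)` loop against a value of exactly 0, which would otherwise never terminate.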
Run Invoke-Command on a remote computer as administrator I'm trying to run invoke-command to launch a `powershell script in a powershell file` on a `remote computer`. I'm using the `credentials for a user with Administrator privilege`. The command needs to be executed by `running powershell as an administrator`. There are licensing issues with the application that I'm trying to invoke using the powershell script, so I cannot change the credentials to Administrator but need to run with that particular user itself. I have tried using `-RunAsAdministrator` at the end of the Invoke-Command, but I got an error saying: `Invoke-Command : Parameter set cannot be resolved using the specified named parameters.`

```
$command = {
cd Folder

C:\Folder\build.ps1
}

Invoke-Command -ComputerName $RemoteSystemIP -ScriptBlock $command -credential $Credentials1 -ErrorAction Stop -AsJob
```

I'm trying to execute this as a background job, which is why I added the `-AsJob` parameter. It's been several days and I haven't found a solution yet.
**tl;dr**

- The only way to get a remote PowerShell session to execute elevated (with admin privileges) is to connect with a user account (either implicitly or via `-Credential`) that has admin privileges on the target machine.
- With such an account, the session automatically and invariably runs elevated.

---

The [`Invoke-Command`](https://learn.microsoft.com/powershell/module/microsoft.powershell.core/invoke-command)'s **`-RunAsAdministrator`** switch **can only be used with (virtualization) *containers* (`-ContainerId` parameter)**, not regular remoting (`-ComputerName` parameter).

You **cannot elevate *on demand* in a remote session** (the way you can *locally, interactively* with `Start-Process -Verb RunAs`).[1]

Instead, you must **make sure that the credentials** you're passing to `Invoke-Command -Credential` to connect to the remote machine with refer to a user account that (also) **has administrative privileges on the target machine, in which case the remote session *automatically and invariably* runs elevated** (with admin privileges).[2]

If you cannot pass such credentials, I think you're out of luck.

---

To **test if the current user has administrative privileges**:

```
# Returns $true if elevated, otherwise $false.
[Security.Principal.WindowsPrincipal]::new(
  [Security.Principal.WindowsIdentity]::GetCurrent()
).IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator)
```

Separately, here's a **simple test you can run from *inside a session*** to determine **whether it is running with elevation**:

```
# Returns $true if elevated, otherwise $false.
[bool] (net session 2>$null)
```

---

[1] Unless the session already *is* elevated, `-Verb RunAs` presents a pop-up UAC dialog that a user must confirm interactively, which is not supported in a remote session.

[2] The same applies if you use "loopback remoting", i.e. if you target the *local* machine via remoting, using `Invoke-Command -ComputerName .`, for instance, with additional restrictions, however: You cannot use a user that is authorized for remoting but isn't part of the local `Administrators` group, and if you use the *current* user (whether or not with explicit credentials), the *calling* session must itself be elevated.
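To tie this back to the question's own snippet, a sketch only (it assumes the account behind `$Credentials1` is a member of the remote machine's `Administrators` group, which is exactly the requirement described above): drop `-RunAsAdministrator`, keep `-Credential`, and, if in doubt, verify elevation from inside the script block.

```
# Sketch: variable names taken from the question.
$command = {
    # From the answer above: returns $true when this remote session is elevated.
    [bool] (net session 2>$null)

    cd C:\Folder
    .\build.ps1
}

Invoke-Command -ComputerName $RemoteSystemIP -Credential $Credentials1 -ScriptBlock $command -ErrorAction Stop -AsJob
```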
How to get the current project name in C# code? I want to send an email to myself when an exception is thrown. Using a StackFrame object, I am able to get the File Name, Class Name and even the class method that threw the Exception, but I also need to know the project name, as many of my ASP.NET projects have the same file name, class name and method. This is my code:

```
public static string JndGetEmailTextForDebuggingExceptionError(this Exception Ex)
{
    StackFrame sf = Ex.JndGetStackFrame();

    string OutputHTML = "<i><b><u>For Developer Use Only: </u></b></i>" + "<br>" + "<br>" +
                        "Project Name: " + HttpContext.Current.ApplicationInstance.GetType().Assembly.GetName().Name + "<br>" + //Under discussion
                        "File Name: " + sf.GetFileName() + "<br>" +
                        "Class Name: " + sf.GetMethod().DeclaringType + "<br>" +
                        "Method Name: " + sf.GetMethod() + "<br>" +
                        "Line Number: " + sf.GetFileLineNumber() + "<br>" +
                        "Line Column: " + sf.GetFileColumnNumber() + "<br>" +
                        "Error Message: " + Ex.Message + "<br>" +
                        "Inner Message : " + Ex.InnerException.Message + "<br>";
    return OutputHTML;
}
```

Thanks ALL.
You can use [`Assembly.GetCallingAssembly`](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.getcallingassembly.aspx) if you have your logging code in a separate library assembly, and call directly from your ASP.NET assembly to your library, and you mark the method so that it won't be inlined: ``` [MethodImpl(MethodImplOptions.NoInlining)] public static string JndGetEmailTextForDebuggingExceptionError(this Exception Ex) { StackFrame sf = Ex.JndGetStackFrame(); string OutputHTML = "<i><b><u>For Developer Use Only: </u></b></i>" + "<br>" + "<br>" + "Project Name: " + Assembly.GetCallingAssembly().GetName().Name + "<br>" + "File Name: " + sf.GetFileName() + "<br>" + "Class Name: " + sf.GetMethod().DeclaringType + "<br>" + "Method Name: " + sf.GetMethod() + "<br>" + "Line Number: " + sf.GetFileLineNumber() + "<br>" + "Line Column: " + sf.GetFileColumnNumber() + "<br>" + "Error Message: " + Ex.Message + "<br>" + "Inner Message : " + Ex.InnerException.Message + "<br>"; return OutputHTML; } ``` On any entry points in your library that can end up wanting to log the project name, you'd have to record the calling assembly and mark it `NoInlining`, then pass that around internally. If you're using .NET 4.5, there's an alternative way to do this: [`CallerFilePath`](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.callerfilepathattribute.aspx). It has the same restrictions on entry points, and it returns the source path on your machine instead of the assembly name (which is probably less useful), but it's easier to know that it'll work (because it compiles it, just like optional parameters are compiled in), and it allows inlining: ``` public static string JndGetEmailTextForDebuggingExceptionError (this Exception Ex, [CallerFilePath] string filePath = "") { StackFrame sf = Ex.JndGetStackFrame(); string OutputHTML = "<i><b><u>For Developer Use Only: </u></b></i>" + "<br><br>" + "Source File Path: " + filePath + "<br>" + ... ```
Change Windows 10 boot drive I had a working dual-boot system and decided to introduce an M.2 drive. I migrated my Gentoo install to the drive and grew my Windows partition to eat up the former Linux space. Gentoo boots normally, but to boot Windows I have to remove the M.2 drive. With the M.2 drive in, the BIOS finds the Windows boot loader, but it does not load up the OS. What do I need to do to tell Windows to use the second drive in the system?

```
0 blaze:0.0 /root # lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT
NAME     FSTYPE            LABEL    MOUNTPOINT
sda
|-sda1   swap                       [SWAP]
`-sda2   ext4                       /
sdb
|-sdb1   ntfs              Recovery
|-sdb2   vfat                       /boot/efi
|-sdb3
`-sdb4   ntfs
sdc
`-sdc1   linux_raid_member blaze:0
  `-md0  ext4                       /home
sdd      linux_raid_member
`-sdd1   linux_raid_member blaze:0
  `-md0  ext4                       /home
sde
`-sde1   linux_raid_member blaze:0
  `-md0  ext4                       /home
```

`sda` is the M.2 and `sdb` is a solid state drive.
This is the reason I prefer using GRUB to handle multiboot systems. Anyway, here's how to fix a Windows 10 boot partition:

[ORIGINAL SOURCE: Use Diskpart to Fix UEFI Boot Error in Windows 10/8/7](https://www.easeus.com/partition-manager-software/fix-uefi-boot-in-windows-10-8-7.html)

This is not my solution. This is copied from a commercial website which sells partition tools. I have nothing to do with them.

I strongly advise you to back up your hard drive: it's been a long time since I last did this in Windows, and though the commands look good, there may be something wrong with them.

**Method 1. Use Diskpart to Fix UEFI Boot Error in Windows 10/8/7**

You probably only have to do "Step 1: Type below command and hit Enter each time:", "3 - Repair the Boot Record" and "4 - Rebuild the BCD Store".

If you are a Windows 10 or 8 user and you prefer free methods to fix the UEFI boot error, you may follow the two solutions below:

1 - Enter Command Prompt from Advanced options

Step 1: Insert a Windows 10/8/7 installation disk or installation USB into the PC > Boot from the disk or USB.

Step 2: Click Repair your computer or hit F8 at the Install now screen.

Step 3: Click Troubleshoot > Advanced options > Command Prompt.

2 - Run Diskpart to set the partition ID and assign a drive letter

Step 1: Type below command and hit Enter each time:

```
diskpart
list disk
sel disk 0
```

[![Run Disk Part to start fixing UEFI error.](https://i.stack.imgur.com/AI9oq.png)](https://i.stack.imgur.com/AI9oq.png)

**Be careful! The right disk may not be disk 0 on your system.**

Step 2: When the message "Disk 0 is now the selected disk" shows up, type: list vol and hit Enter.

[![Select disk and volume to set volume ID.](https://i.stack.imgur.com/Zd27W.png)](https://i.stack.imgur.com/Zd27W.png)

Diskpart will now show the full list of volumes on your PC; find the UEFI volume in the list. Here the UEFI partition is on Volume 2.

Step 3: Do this only if your disk doesn't have an ID. Type below command and hit Enter each time:

```
sel vol 2
set id=c12a7328-f81f-11d2-ba4b-00a0c93ec93b
Or
SET ID=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
```

[![Set volume ID to fix volume id.](https://i.stack.imgur.com/2czRu.png)](https://i.stack.imgur.com/2czRu.png)

Step 4: Do this only if your boot disk doesn't have a letter. Assign a drive letter by typing the below command and hitting Enter:

```
assign letter=G:
```

[![Change drive letter of your selected partition.](https://i.stack.imgur.com/o9OAc.png)](https://i.stack.imgur.com/o9OAc.png)

> (Note: G shall be a unique drive letter which is not already in use.)

3 - Repair the Boot Record

Step 1: Open Command Prompt as administrator and enter the below command:

```
cd /d G:\EFI\Microsoft\Boot\
```

> Note: G is the drive letter you assigned to the UEFI partition; remember to replace G with your UEFI partition's letter.

[![Repair boot record on UEFI disk.](https://i.stack.imgur.com/W1WNg.png)](https://i.stack.imgur.com/W1WNg.png)

Enter `exit` when the process completes.

Step 2: To repair the Boot Record, open CMD and enter the below command line:

```
bootrec /fixboot
```

[![Fix boot record command.](https://i.stack.imgur.com/0Nrr8.png)](https://i.stack.imgur.com/0Nrr8.png)

Enter `exit` when the repair process completes.

4 - Rebuild the BCD Store

Step 1: Type each command line and hit Enter each time:

```
ren BCD BCD.old
bcdboot C:\Windows /l en-us /s G: /f ALL
```

(Note: C:\ is the drive where Windows 10/8.1/8 is installed.)
[![Rebuild BCD store on UEFI.](https://i.stack.imgur.com/23GfB.png)](https://i.stack.imgur.com/23GfB.png)

Step 2: Type `exit` in Command Prompt when the process completes and then restart your PC.

Now UEFI boot is fixed and Windows 10/8/7 can boot up on your PC again.

**Method 2. Use Automatic Repair to repair Windows 10/8/7 UEFI**

Windows Automatic Repair is a built-in tool that tries to fix some common errors on a Windows PC. You may also try this method to repair a Windows 10, 8 or 7 UEFI/EFI boot error:

1 - Boot up Windows from installation media

Step 1: Insert the Windows 10/8/7 installation disk or USB into your PC.

Step 2: Restart the PC and boot from the disk or USB.

2 - Enable the Automatic Repair option

Step 1: Click Repair your computer at the Install now screen.

[![Enable Automatic repair option to fix UEFI error.](https://i.stack.imgur.com/Jxo2Q.png)](https://i.stack.imgur.com/Jxo2Q.png)

Step 2: Click Troubleshoot at the Choose an option screen > Click Automatic Repair.

[![Use Automatic Repair option to fix UEFI boot error.](https://i.stack.imgur.com/yCiXE.png)](https://i.stack.imgur.com/yCiXE.png)

Step 3: Choose an account from the list to continue at the Automatic Repair screen and wait for the process to finish.

When the process completes, you can restart your PC and you should be able to use your computer without any problems again.
Links inside HTML5 footer element (nav and aside?) I'm currently moving a page from HTML4 to HTML5, and I have kind of an issue. There's a bunch of lists with links inside the footer. Most of them link to information (FAQ etc.) about the site itself. So I think it's OK to put those inside a nav element (see "Introducing HTML5" and the HTML5 Doctor). But two lists contain links to external pages like Facebook. Now I could wrap those inside an aside, but this is a bit of an edge case:

> "It is legitimate to wrap the links to other pages on the site with an aside inside the footer — but ask yourself whether the aside ought instead to be a sibling of the footer [...] After all, the links to other pages are presumably tangentially related to the whole page rather than just the content of the footer." (Introducing HTML5)

I could also just leave them inside the footer without wrapping them. Since all the lists of links in the footer are a "visual unit", I wouldn't wanna take the external links out of the footer altogether at the moment, even though that might be better in regards to semantics. I was searching for HTML5 sites with a similar footer but couldn't find any. What do you guys think is the best approach? Thanks
I would suggest simply using a `<footer>` with links in it. If you want to wrap the links in a `<nav>`, that’s acceptable, but I would advise against it. The `<nav>` element is intended to mark up “major” navigation blocks, and the fact that you’re putting the links in a footer suggests that they are not a major navigation block. (In a sense, `<footer>` and `<nav>` are almost contradictory.) The spec even [talks specifically](http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html#the-nav-element) about this scenario: > > Not all groups of links on a page need to be in a nav element — the element is primarily intended for sections that consist of major navigation blocks. In particular, **it is common for footers to have a short list of links to various pages of a site**, such as the terms of service, the home page, and a copyright page. **The footer element alone is sufficient for such cases**; while a nav element can be used in such cases, it is usually unnecessary. > > > As for `<aside>`, I would say it’s overkill if you’re only going to put a few links in it, especially since it sounds like, though external, they belong together with the other links. If it helps with styling, and the external links feel like kind of a self-contained unit, then sure—otherwise, don’t bother. I would say the `<footer>` element itself already carries enough aside-ish semantics.
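For illustration, a minimal sketch of what this recommendation looks like in markup (the link targets here are just placeholders, not taken from the question):

```
<footer>
  <ul>
    <li><a href="/faq">FAQ</a></li>
    <li><a href="/terms">Terms of service</a></li>
    <li><a href="https://www.facebook.com/yoursite">Facebook</a></li>
  </ul>
  <p>© Example Site</p>
</footer>
```

No `<nav>` or `<aside>` wrapper is needed; per the spec excerpt above, the `<footer>` element alone carries the right semantics for a short list of links.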
auto increment primary leaving gaps in counting I have got a table with an auto-increment primary key. This table is meant to store millions of records and I don't need to delete anything for now. The problem is that when new rows are inserted, because of some error the auto-increment key leaves gaps in the ids. For example, after 5 the next id is 8, leaving a gap of 6 and 7. The result is that when I count the rows I get 28000, but the max id is 58000. What can be the reason? I am not deleting anything. And how can I fix this issue? P.S. I am using insert ignore while inserting records so that it doesn't give an error when I try to insert a duplicate entry into a unique column.
This is by design and will always happen.

Why?

Let's take 2 overlapping transactions that are doing INSERTs

- Transaction 1 does an INSERT, gets the value (let's say 42), does more work
- Transaction 2 does an INSERT, gets the value 43, does more work

Then

- Transaction 1 fails. Rolls back. 42 stays unused
- Transaction 2 completes with 43

If consecutive values were guaranteed, every transaction would have to happen one after the other. Not very scalable.

Also see [Do Inserted Records Always Receive Contiguous Identity Values](https://stackoverflow.com/questions/4219201/do-inserted-records-always-receive-contiguous-identity-values) (SQL Server but same principle applies)
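The `INSERT IGNORE` mentioned in the question's P.S. can produce the same effect: a skipped duplicate may still consume an auto-increment value. A small sketch (hypothetical table; on MySQL/InnoDB the exact behaviour depends on settings such as `innodb_autoinc_lock_mode`):

```
CREATE TABLE t (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  code VARCHAR(10) NOT NULL UNIQUE
);

INSERT INTO t (code) VALUES ('a');         -- gets id 1
INSERT IGNORE INTO t (code) VALUES ('a');  -- duplicate: row is skipped,
                                           -- but an id may still be reserved
INSERT INTO t (code) VALUES ('b');         -- may get id 3, leaving a gap at 2
```

If an accurate row count is needed, it has to come from `COUNT(*)` at query time, not from the id column.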
Converting GeoJSON response to FeatureCollection Hi, I am trying to parse the response from an OSM web service into a feature collection using GeoJSON.Net. I am new to GeoJSON and not able to work out how to do so. The JSON response can be found [here](https://a.data.osmbuildings.org/0.2/anonymous/tile/15/17608/10740.json). The code I have written is:

```
System.IO.StreamReader file = new System.IO.StreamReader(filepath);
string content = file.ReadToEnd();
file.Close();

dynamic deserialized = JsonConvert.DeserializeObject(content);
List<Feature> lstGeoLocation = new List<Feature>();
foreach (JObject item in deserialized.features)
{
    //var feature = new Feature();
    var geom = item.Property("geometry").Value;
}
```

But this is plain JSON parsing and there might be a better way to parse it. I also tried the [NetTopologySuite JSON extension](https://www.nuget.org/packages/NetTopologySuite.IO.GeoJSON/), but when I use the following code it gives me the exception "Expected token 'type' not found."

```
System.IO.StreamReader file = new System.IO.StreamReader(filepath);
string content = file.ReadToEnd();
file.Close();
var reader = new NetTopologySuite.IO.GeoJsonReader();
var featureCollection = reader.Read<NetTopologySuite.Features.FeatureCollection>(content);
```
I hate to answer my own question, but after two days of trial and error I got it working with both NetTopology and GeoJson

```
// get the JSON file content
var jsonData = File.ReadAllText(destinationFileName);

// create NetTopology JSON reader
var reader = new NetTopologySuite.IO.GeoJsonReader();

// pass geoJson's FeatureCollection to read all the features
var featureCollection = reader.Read<GeoJSON.Net.Feature.FeatureCollection>(jsonData);

// if feature collection is null then return
if (featureCollection == null)
{
    return;
}

// loop through all the parsed features
for (int featureIndex = 0; featureIndex < featureCollection.Features.Count; featureIndex++)
{
    // get json feature
    var jsonFeature = featureCollection.Features[featureIndex];

    Geometry geom = null;

    // get geometry type to create appropriate geometry
    switch (jsonFeature.Geometry.Type)
    {
        case GeoJSONObjectType.Point:
            break;

        case GeoJSONObjectType.MultiPoint:
            break;

        case GeoJSONObjectType.LineString:
            break;

        case GeoJSONObjectType.MultiLineString:
            break;

        case GeoJSONObjectType.Polygon:
            {
                var polygon = jsonFeature.Geometry as GeoJSON.Net.Geometry.Polygon;

                var coordinates = new List<Point3D>();
                foreach (var ring in polygon.Coordinates)
                {
                    if (ring.IsLinearRing())
                    {
                        foreach (var coordinate in ring.Coordinates)
                        {
                            var location = coordinate as GeographicPosition;
                            if (location == null)
                            {
                                continue;
                            }

                            coordinates.Add(new Point3D(location.Longitude, location.Latitude, location.Altitude.HasValue ? location.Altitude.Value : 0));
                        }
                    }
                }

                geom = new Polygon(new LinearRing(new CoordinateSequence(coordinates.ToArray())), null);
            }
            break;

        case GeoJSONObjectType.MultiPolygon:
            break;

        case GeoJSONObjectType.GeometryCollection:
            break;

        case GeoJSONObjectType.Feature:
            break;

        case GeoJSONObjectType.FeatureCollection:
            break;

        default:
            throw new ArgumentOutOfRangeException();
    }
}
```
CSS3 Background with Gradient Overlay - Gradient only working in Firefox I'm trying to overlay a CSS gradient on a background image. I've got it working in Firefox, but in Safari and Chrome I only get the background image, without a gradient. (I generated the gradient code using <http://www.colorzilla.com/gradient-editor/>, then simply added url() at the end of each line)

**Update:** it seems my problem might be that WebKit displays the gradient behind the image, not on top like I need, whereas Firefox displays it on top.

```
background: -moz-linear-gradient(top, rgba(0, 0, 0, 0) 0%, rgba(99, 130, 169, 0.7) 100%), url(app/bg.jpg) no-repeat center center fixed; /* FF3.6+ */
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(0, 0, 0, 0)), color-stop(100%, rgba(99, 130, 169, 0.7))), url(app/bg.jpg) no-repeat center center fixed; /* Chrome,Safari4+ */
background: -webkit-linear-gradient(top, rgba(0, 0, 0, 0) 0%,rgba(0,0,0,0.16) 100%), url(app/bg.jpg) no-repeat center center fixed; /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(top, rgba(0, 0, 0, 0) 0%,rgba(99, 130, 169, 0.7) 100%), url(app/bg.jpg) no-repeat center center fixed; /* Opera 11.10+ */
background: -ms-linear-gradient(top, rgba(0, 0, 0, 0) 0%,rgba(99, 130, 169, 0.7) 100%), url(app/bg.jpg) no-repeat center center fixed; /* IE10+ */
background: linear-gradient(to bottom, rgba(0, 0, 0, 0) 0%,rgba(99, 130, 169, 0.7) 100%), url(app/bg.jpg) no-repeat center center fixed; /* W3C */
```
I would recommend using shorthand notation in this case, it will make it much easier to read and maintain (and it fixes the issue). Here is a working example (and a demo: <http://jsfiddle.net/joshnh/yyy7V/>): ``` .foo { background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, rgba(0, 0, 0, 0)), color-stop(100%, rgba(99, 130, 169, 0.7))), url('http://lorempixel.com/400/300/'); /* Chrome,Safari4+ */ background-image: -webkit-linear-gradient(top, transparent, rgba(99,130,169,.7)), url('http://lorempixel.com/400/300/'); /* Chrome10+,Safari5.1+ */ background-image: -moz-linear-gradient(top, transparent, rgba(99,130,169,.7)), url('http://lorempixel.com/400/300/'); /* FF3.6+ */ background-image: -ms-linear-gradient(top, transparent, rgba(99,130,169,.7)), url('http://lorempixel.com/400/300/'); /* IE10+ */ background-image: -o-linear-gradient(top, transparent, rgba(99,130,169,.7)), url('http://lorempixel.com/400/300/'); /* Opera 11.10+ */ background-image: linear-gradient(to bottom, transparent, rgba(99,130,169,0.7)), url('http://lorempixel.com/400/300/'); /* W3C */ background-repeat: no-repeat; background-position: 50% 50%; background-attachment: fixed; } ```
How do I fix PyDev "Undefined variable from import" errors? I've got a Python project using PyDev in Eclipse, and PyDev keeps generating false errors for my code. I have a module `settings` that defines a `settings` object. I import that in module `b` and assign an attribute with: ``` from settings import settings settings.main = object() ``` In some of my code--but not all of it, statements like: ``` from settings import settings print settings.main ``` ... generate "Undefined variable from import: main" messages in the Eclipse code error pane, even though the code runs without a problem. How can I correct these?
For code in your project, the only way is adding a declaration saying that you expected that -- possibly protected by an `if False` so that it doesn't execute (the static code-analysis only sees what you see, not runtime info -- if you opened that module yourself, you'd have no indication that main was expected). To overcome this there are some choices: 1. If it is some external module, it's possible to add it to the `forced builtins` so that PyDev spawns a shell for it to obtain runtime information (see <http://pydev.org/manual_101_interpreter.html> for details) -- i.e.: mostly, PyDev will import the module in a shell and do a `dir(module)` and `dir` on the classes found in the module to present completions and make code analysis. 2. You can use Ctrl+1 (Cmd+1 for Mac) in a line with an error and PyDev will present you an option to add a comment to ignore that error. 3. It's possible to create a `stub` module and add it to the `predefined` completions (<http://pydev.org/manual_101_interpreter.html> also has details on that).
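To make the in-project declaration and option 2 concrete, here is a small sketch using the `settings`/`main` names from the question (the exact suppression comment is whatever PyDev inserts for you via Ctrl+1; the text shown below is the one it typically generates):

```
# In settings.py: a declaration that never runs, purely so the static
# analyzer knows the attribute is expected to exist at runtime.
if False:
    settings.main = None

# At the usage site: suppress the warning on just this line with the
# comment PyDev offers to add via Ctrl+1.
from settings import settings
print settings.main  #@UndefinedVariable
```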
Python Reading from a file to create a weighted directed graph using networkx I am new to Python and Spyder. I am trying to read a text file with the following format into a graph using networkx:

```
FromNodeId ToNodeId Weight
0 1 0.15
0 2 0.95
0 3 0.8
0 4 0.5
0 5 0.45
0 6 0.35
0 7 0.4
0 8 0.6
0 9 0.45
0 10 0.7
1 2 0.45
1 11 0.7
1 12 0.6
1 13 0.75
1 14 0.55
1 15 0.1
...
```

I want to use a networkx graph format that can store such a large graph (about 10k nodes, 40k edges).

```
import networkx as nx
import matplotlib.pyplot as plt

g = nx.read_edgelist('test.txt', nodetype=int, create_using=nx.DiGraph())
print(nx.info(g))
nx.draw(g)
plt.show()
```

When I run this code, nothing happens. I am using Spyder for editing. Could you help? Thanks!
You have to comment out the first line with the `#` symbol (`read_edgelist` by default skips lines that start with `#`):

```
#FromNodeId ToNodeId Weight
0 1 0.15
0 2 0.95
0 3 0.8
```

Then modify the call of `read_edgelist` to define the type of the weight column:

```
import networkx as nx
import matplotlib.pyplot as plt

g = nx.read_edgelist('./test.txt', nodetype=int, data=(('weight',float),), create_using=nx.DiGraph())

print(g.edges(data=True))
nx.draw(g)
plt.show()
```

Output:

```
[(0, 1, {'weight': 0.15}), (0, 2, {'weight': 0.95}), (0, 3, {'weight': 0.8}), (0, 4, {'weight': 0.5}), (0, 5, {'weight': 0.45}), (0, 6, {'weight': 0.35}), (0, 7, {'weight': 0.4}), (0, 8, {'weight': 0.6}), (0, 9, {'weight': 0.45}), (0, 10, {'weight': 0.7}), (1, 2, {'weight': 0.45}), (1, 11, {'weight': 0.7}), (1, 12, {'weight': 0.6}), (1, 13, {'weight': 0.75}), (1, 14, {'weight': 0.55}), (1, 15, {'weight': 0.1})]
```

[![enter image description here](https://i.stack.imgur.com/sprEE.png)](https://i.stack.imgur.com/sprEE.png)