unexpected T_FUNCTION error when using "function (array $matches)"
Hi, I'm using the following code but I'm getting an "unexpected T_FUNCTION" syntax error for the second line. Any suggestions?
```
preg_replace_callback("/\[LINK=(.*?)\](.*?)\[\/LINK\]/is",
function (array $matches) {
if (filter_var($matches[1], FILTER_VALIDATE_URL))
return '<a href="'.
htmlspecialchars($matches[1], ENT_QUOTES).
'" target="_blank">'.
htmlspecialchars($matches[2])."</a>";
else
return "INVALID MARKUP";
}, $text);
```
|
That happens when your PHP is older than 5.3. Anonymous function support wasn't available until 5.3, so the parser doesn't recognize a `function` expression passed as an argument like that.
You'll have to create a function the traditional way, and pass its name instead (I use `link_code()` for example):
```
function link_code(array $matches) {
if (filter_var($matches[1], FILTER_VALIDATE_URL))
return '<a href="'.
htmlspecialchars($matches[1], ENT_QUOTES).
'" target="_blank">'.
htmlspecialchars($matches[2])."</a>";
else
return "INVALID MARKUP";
}
preg_replace_callback("/\[LINK=(.*?)\](.*?)\[\/LINK\]/is", 'link_code', $text);
```
Also, `array $matches` is not the problem, because type hinting for arrays has been supported since PHP 5.1.
|
Lua/C++ - Segfault inside lua_next() while trying to walk table
I have the following code in C++:
```
lua_getglobal(C, "theTable");
lua_pushnil(C);
while (lua_next(C, -2) != 0) {
/* snip */
}
```
However, when it runs, a segfault is reported. The LLDB stop message is as follows.
```
* thread #1: tid = 0x50663f, 0x000000000002b36a luaos`luaH_next + 58, queue =
'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x38)
frame #0: 0x000000000002b36a luaos`luaH_next + 58
```
Luaos is the executable name. I have Lua compiled directly into the executable, for portability.
P.S. `C` is the name of the lua state. It is my secondary configuration lua state (as opposed to my primary code lua state `L`), which is the reason behind the name.
|
Your code is only partially correct. I'm going to explain in detail how to use `lua_next`, including the parts that your code is already doing correctly.
`lua_next` expects at least two elements in stack (in the following order):
```
[1] previous key
...
[t] table
```
In the first call to the function, `previous key` should be `nil`, so the function pushes the first key-value pair onto the stack and the traversal begins.
```
lua_getglobal(L, "theTable"); // Stack table
// ...
lua_pushnil(L); // Push nil
lua_next(L, t); // t is theTable index
```
When called, `lua_next` pops the previous key and pushes a key-value pair onto the stack, so it'll look something like this:
```
[1] value
[2] key
...
[t] -- unknown value --
[t+1] table
```
If the function is called again with this stack, it'll take `value` as the current key and an unknown value as the `table`, thus an error will occur. To avoid that, the top of the stack (`[1] value`) should be popped:
```
lua_pop(L, 1); // Pop value
```
Now the stack will have the expected values for the traversal to continue, and `lua_next` can be called again.
When there are no more elements in the table the function will return 0.
Here is a complete example:
```
lua_getglobal(L, "theTable"); // Stack table
lua_pushnil(L); // Push nil
while(lua_next(L, -2) != 0) {
// Do something with key, value pair
lua_pop(L, 1); // Pop value
}
```
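As for the crash itself: a common cause of a segfault inside `luaH_next` is calling `lua_next` when the value at the given index is not actually a table (for example, when the global doesn't exist and `lua_getglobal` pushed `nil`). A hedged sketch of a defensive check before the traversal, using your state `C`:
```
lua_getglobal(C, "theTable");
if (!lua_istable(C, -1)) {           // nil or wrong type: don't traverse
    lua_pop(C, 1);                   // remove the non-table value
    /* report the configuration error here */
} else {
    lua_pushnil(C);
    while (lua_next(C, -2) != 0) {
        /* key at -2, value at -1 */
        lua_pop(C, 1);               // pop value, keep key for next call
    }
    lua_pop(C, 1);                   // pop the table
}
```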
|
How to efficiently compute the average "direction" of pixels in a grayscale image?
So I figured out I can convert an image to grayscale like this:
```
public static Bitmap GrayScale(this Image img)
{
var bmp = new Bitmap(img.Width, img.Height);
using(var g = Graphics.FromImage(bmp))
{
var colorMatrix = new ColorMatrix(
new[]
{
new[] {.30f, .30f, .30f, 0, 0},
new[] {.59f, .59f, .59f, 0, 0},
new[] {.11f, .11f, .11f, 0, 0},
new[] {0, 0, 0, 1.0f, 0},
new[] {0, 0, 0, 0, 1.0f}
});
using(var attrs = new ImageAttributes())
{
attrs.SetColorMatrix(colorMatrix);
g.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height),
0, 0, img.Width, img.Height, GraphicsUnit.Pixel, attrs);
}
}
return bmp;
}
```
Now, I want to compute the average "direction" of the pixels.
What I mean by that is that I want to look at, say, a 3x3 region: if the left side is darker than the right side, the direction would be to the right; if the bottom is darker than the top, the direction would be upwards; if the bottom-left is darker than the top-right, the direction would be up-right. (Think of little vector arrows over every 3x3 region.) Perhaps a better example: if you draw a grayscale gradient in Photoshop, I want to compute at what angle it was drawn.
I've done stuff like this in MATLAB, but that was years ago. I figure I could use a matrix similar to `ColorMatrix` to compute this, but I'm not quite sure how. It looks like [this function](http://msdn.microsoft.com/en-us/library/k1cfyd2w.aspx) might be what I want; could I convert the image to grayscale (as above) and then do something with the grayscale matrix to compute these directions?
IIRC, what I want is quite similar to [edge detection](http://en.wikipedia.org/wiki/Edge_detection).
After I compute these direction vectors, I'm just going to loop over them and compute the average direction of the image.
The end goal is I want to rotate images so that their average direction is always upwards; this way if I have two identical images except one is rotated (90,180 or 270 degrees), they will end up oriented the same way (I'm not concerned if a person ends up upside down).
---
**\*snip\*** Deleting some spam. You can view the revisions if you want to read the rest of my attempts.
|
Calculating the mean of angles is generally a bad idea:
```
...
sum += Math.Atan2(yi, xi);
}
}
double avg = sum / (img.Width * img.Height);
```
The mean of a set of angles has no clear meaning: for example, the mean of one angle pointing up and one angle pointing down is an angle pointing right. Is that what you want? Assuming "up" is +PI, the mean of two angles *almost* pointing up would be an angle pointing down, if one angle is PI-[some small value] and the other -PI+[some small value]. That's probably not what you want. Also, you're completely ignoring the strength of the edge: most of the pixels in your real-life images aren't edges at all, so the gradient direction there is mostly noise.
If you want to calculate something like an "average direction", you need to add up *vectors* instead of angles, then calculate Atan2 after the loop. Problem is: That vector sum tells you nothing about objects inside the image, as gradients pointing in opposite directions cancel each other out. It only tells you something about the difference in brightness between the first/last row and first/last column of the image. That's probably not what you want.
I think the simplest way to orient images is to create an angle histogram: Create an array with (e.g.) 360 bins for 360° of gradient directions. Then calculate the gradient angle and magnitude for each pixel. Add each gradient magnitude to the right angle-bin. This won't give you a single angle, but an angle-histogram, which can then be used to orient two images to each other using simple cyclic correlation.
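As an illustration (my own sketch, not part of the Mathematica code below), the histogram-building step could look roughly like this in C#, assuming the grayscale intensities are already in a `float[,]` named `gray` and using simple central differences for the gradient:
```
static double[] BuildAngleHistogram(float[,] gray)
{
    int w = gray.GetLength(0), h = gray.GetLength(1);
    var histogram = new double[360];
    for (int x = 1; x < w - 1; x++)
        for (int y = 1; y < h - 1; y++)
        {
            // central differences approximate the gradient at (x, y)
            double dx = gray[x + 1, y] - gray[x - 1, y];
            double dy = gray[x, y + 1] - gray[x, y - 1];
            double magnitude = Math.Sqrt(dx * dx + dy * dy);
            int bin = ((int)Math.Round(Math.Atan2(dy, dx) * 180 / Math.PI) + 360) % 360;
            histogram[bin] += magnitude; // weight each angle by edge strength
        }
    return histogram;
}
```
In a real implementation you would use a smoothed gradient (e.g. a Gaussian derivative, as the Mathematica code below does) instead of raw central differences.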
Here's a proof-of-concept Mathematica implementation I've thrown together to see if this would work:
```
angleHistogram[src_] :=
(
Lx = GaussianFilter[ImageData[src], 2, {0, 1}];
Ly = GaussianFilter[ImageData[src], 2, {1, 0}];
angleAndOrientation =
MapThread[{Round[ArcTan[#1, #2]*180/\[Pi]],
Sqrt[#1^2 + #2^2]} &, {Lx, Ly}, 2];
angleAndOrientationFlat = Flatten[angleAndOrientation, 1];
bins = BinLists[angleAndOrientationFlat , 1, 5];
histogram =
Total /@ Flatten[bins[[All, All, All, 2]], {{1}, {2, 3}}];
maxIndex = Position[histogram, Max[histogram]][[1, 1]];
Labeled[
Show[
ListLinePlot[histogram, PlotRange -> All],
Graphics[{Red, Point[{maxIndex, histogram[[maxIndex]]}]}]
], "Maximum at " <> ToString[maxIndex] <> "\[Degree]"]
)
```
Results with sample images:
*(image: angle histograms for the sample images)*
The angle histograms also show why the mean angle can't work: the histogram is essentially a single sharp peak, and the other angles are roughly uniform. The mean of this histogram will always be dominated by the uniform "background noise". That's why you've got almost the same angle (about 180°) for each of the real-life images with your current algorithm.
The tree image has a single dominant angle (the horizon), so in this case, you could use the mode of the histogram (the most frequent angle). But that will not work for every image:
*(image: an angle histogram with two peaks)*
Here you have two peaks. Cyclic correlation should still orient two images to each other, but simply using the mode is probably not enough.
Also note that the peak in the angle histogram is not "up": In the tree image above, the peak in the angle histogram is probably the horizon. So it's pointing up. In the Lena image, it's the vertical white bar in the background - so it's pointing to the right. Simply orienting the images using the most frequent angle will *not* turn every image with the right side pointing up.
*(image: an angle histogram with several peaks)*
This image has even more peaks: Using the mode (or, probably, any single angle) would be unreliable to orient this image. But the angle histogram as a whole should still give you a reliable orientation.
**Note:** I didn't pre-process the images, I didn't try gradient operators at different scales, I didn't post-process the resulting histogram. In a real-world application, you would tweak all these things to get the best possible algorithm for a large set of test images. This is just a quick test to see if the idea could work at all.
**Add:** To orient two images using this histogram, you would
1. Normalize all histograms, so the area under the histogram is the same for each image (even if some are brighter, darker or blurrier)
2. Take the histograms of the images, and compare them for each rotation you're interested in:
For example, in C#:
```
int bestDifferenceSoFar = int.MaxValue;
int foundRotation = 0;
for (int rotationAngle = 0; rotationAngle < 360; rotationAngle++)
{
int difference = 0;
for (int i = 0; i < 360; i++)
difference += Math.Abs(histogram1[i] - histogram2[(i+rotationAngle) % 360]);
if (difference < bestDifferenceSoFar)
{
bestDifferenceSoFar = difference;
foundRotation = rotationAngle;
}
}
```
(you could speed this up using FFT if your histogram length is a power of two. But the code would be a lot more complex, and for 256 bins, it might not matter that much)
|
Measuring Process memory usage gives extremely low (wrong) values
I have the following code to launch and monitor a process:
```
Process process = new Process();
process.StartInfo.FileName = "foo.exe";
long maxMemoryUsage = 0;
process.Start();
while(!process.HasExited)
{
maxMemoryUsage = Math.Max(maxMemoryUsage, process.PrivateMemorySize64);
}
```
I used this code to run a large application that, according to the Task Manager, used 328 MB at its peak (memory "Private Working Set"). Yet the value of maxMemoryUsage, and also the value of process.PeakPagedMemorySize64, was 364544. According to [MSDN](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.peakpagedmemorysize64(v=vs.100).aspx) this value should be interpreted as bytes, meaning it is a little over 300 KB, a factor of a thousand away from the expected value. The other process.Peak...Memory properties also report extremely low values (all under a megabyte, except for PeakVirtualMemorySize64, which is 4 MB, which I think is the minimum value for this field).
I've tried launching different applications (C# and C++ based, of which I have the source code) which I know to use very little or a lot of memory, and the memory values were always very close to the values seen with this process. Apparently I'm doing something completely wrong.
So my question is: how can I measure the maximum memory usage of a process which I spawned from my C# application? (Note that I don't need to have the value in real time as long as I know it after the program has exited. I also don't need it super precise: I don't care if it was 27.04 MB or 30 MB, but I do care if it was 30 MB or 100 MB.)
**Edit: here is a fully reproducible test case**
```
class Program
{
static void Main(string[] args)
{
Process process = new Process();
process.StartInfo.FileName = @"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe";
long maxMemoryUsage = 0;
process.Start();
while(!process.HasExited)
{
maxMemoryUsage = Math.Max(maxMemoryUsage, process.PagedMemorySize64);
}
Console.Out.WriteLine("Memory used: " + (maxMemoryUsage / 1024.0) / 1024.0 + "MB");
Console.ReadLine();
}
}
```
According to the task manager Visual Studio uses 103MB. After closing Visual Studio the program reports 0.3984375MB.
|
The `Process` class caches its property values heavily. No matter how many times you read a property, you'll get the cached result unless you call the [Process.Refresh](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.refresh%28v=vs.110%29.aspx) method to discard the cache.
To quote from MSDN:
>
> When a Process component is associated with a process resource, the
> property values of the Process are immediately populated according to
> the status of the associated process. **If the information about the
> associated process subsequently changes, those changes are not
> reflected in the Process component's cached values**. The Process
> component is a snapshot of the process resource at the time they are
> associated. To view the current values for the associated process,
> call the Refresh method.
>
>
>
So, your code will become:
```
while(!process.HasExited)
{
process.Refresh();
maxMemoryUsage = Math.Max(maxMemoryUsage, process.PrivateMemorySize64);
}
```
Also consider the `process.Peak*` properties (for example `PeakWorkingSet64`); they may be what you actually want here.
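For instance, a hedged sketch (the polling interval and choice of property are arbitrary here):
```
long peakWorkingSet = 0;
while (!process.HasExited)
{
    process.Refresh(); // discard the cached snapshot
    peakWorkingSet = Math.Max(peakWorkingSet, process.PeakWorkingSet64);
    System.Threading.Thread.Sleep(100); // avoid a busy loop
}
```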
|
How to override this virtual operator in c++?
Base struct is defined like this in the header file:
```
struct Base {
virtual operator char * (void) {
return 0;
}
virtual operator long (void) { // hash function
return 0;
}
virtual long operator == (Base & base) {// isequal function
return *this == base;
}
Base (void) {} // new_element
virtual ~Base (void) {} // delete_element
virtual ostream & Write (ostream & stream) = 0;// write_element
};
```
I am confused by the first two virtual operator declarations. Suppose I have a new class that inherits from the base class; how do I override these two operators so that, when a subclass object is treated as the Base class, these two functions are called?
|
Just like any other function that you override.
```
#include <iostream>
#include <vector>
using namespace std;
struct A
{
virtual operator char * (void)
{
return 0;
}
};
struct B : A
{
operator char * (void) override
{
return (char*)12;
}
};
struct C : A
{
operator char * (void) override
{
return (char*)24;
}
};
int main()
{
vector<A*> v;
v.push_back(new A);
v.push_back(new B);
v.push_back(new C);
for (auto e : v)
{
char* p = *e;
cout << "p=" << (int)p << endl;
}
for (auto e : v)
{
delete e;
}
return 0;
}
```
This will print:
```
p=0
p=12
p=24
```
|
Out of memory error in BitmapFactory
Today my app crashed with an out-of-memory error.
>
> java.lang.OutOfMemoryError
>
> in android.graphics.BitmapFactory.nativeDecodeAsset
>
>
>
I only used BitmapFactory to make a background for my action bar.
The code:
```
BitmapDrawable background = new BitmapDrawable (BitmapFactory.decodeResource(getResources(), R.drawable.actionbar));
background.setTileModeX(android.graphics.Shader.TileMode.REPEAT);
actionbar.setBackgroundDrawable(background);
```
This error doesn't happen on activity start; it happens after switching between activities a lot.
Can someone show me how to fix this?
**EDIT EDIT EDIT**
Here is the error message in developer console:
```
java.lang.IllegalStateException: Could not execute method of the activity
at android.view.View$1.onClick(View.java:3838)
at android.view.View.performClick(View.java:4475)
at android.view.View$PerformClick.run(View.java:18786)
at android.os.Handler.handleCallback(Handler.java:730)
at android.os.Handler.dispatchMessage(Handler.java:92)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:5419)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:525)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1187)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
at dalvik.system.NativeStart.main(Native Method)
Caused by: java.lang.reflect.InvocationTargetException
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:525)
at android.view.View$1.onClick(View.java:3833)
... 11 more
Caused by: java.lang.OutOfMemoryError
at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:596)
at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:444)
at android.graphics.drawable.Drawable.createFromResourceStream(Drawable.java:832)
at android.content.res.Resources.loadDrawable(Resources.java:2988)
at android.content.res.Resources.getDrawable(Resources.java:1558)
at android.widget.ImageView.resolveUri(ImageView.java:646)
at android.widget.ImageView.setImageResource(ImageView.java:375)
at com.packagename.pp.Activitytwo.disableAnswer(Activitytwo.java:435)
at com.packagename.pp.Activitytwo.submitAnswer(Activitytwo.java:230)
... 14 more
```
|
You have a memory leak in your code. Consider the use of [WeakReference](http://developer.android.com/reference/java/lang/ref/WeakReference.html), [WeakHashMap](http://developer.android.com/reference/java/util/WeakHashMap.html) or [SoftReference](http://developer.android.com/reference/java/lang/ref/SoftReference.html) to avoid strong references. Free unused resources and variables in the onLowMemory method of your activities.
You can also use [BitmapFactory.Options](http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html) to decode the bitmap at a reduced size, as shown in the examples [here](http://developer.android.com/training/displaying-bitmaps/load-bitmap.html).
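For example, a minimal sketch of subsampled decoding (the factor of 4 is an arbitrary choice here):
```
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4; // decode at 1/4 of the original width and height
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.actionbar, options);
```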
I posted some hints for solving this problem [here](https://stackoverflow.com/questions/25344518/outofmemory-exception-not-sure-android/25344553#25344553).
|
Avoid passing void expression, when pass from macro to function
I want to write a debugging macro which prints the name of the function called when there is an error. The catch is that some functions return values, and I would like to return any value back from the macro.
This is my attempt:
```
#define check(expr) _check_expr(#expr, expr)
extern bool check_for_error();
template <typename T> inline T _check_expr(const char *str, T value)
{
if (check_for_error()) {
fprintf(stderr, "Failure: %s\n", str);
}
return value;
}
```
The problem I get here is that sometimes `T = void` and the compiler will not let me pass an expression of `void` type to a function:
```
../src/render.cc: In constructor ‘render::impl::impl()’:
../src/render.cc:34:20: error: invalid use of void expression
34 | check(glDisable(GL_DEPTH_TEST));
```
I cannot redefine the functions to be called under the `check` macro, or the `check_for_error` function, which are external to my program. Also, checking for error needs to occur after evaluating the expression.
**Is there a good way to solve this problem in C++?**
Something like: "If decltype of this expression is void, then generate this code, otherwise generate that code".
|
[A void function can return a void expression](https://stackoverflow.com/a/2176229/1116364). This also applies to lambdas:
```
#define check(expr) _check_expr(#expr, [&] () { return expr; })
extern bool check_for_error();
template <typename Fn>
inline auto _check_expr(const char *str, Fn fn)
{
auto check_ = [str]() {
if (check_for_error()) {
fprintf(stderr, "Failure: %s\n", str);
}
};
if constexpr (std::is_same_v<std::invoke_result_t<Fn>, void>) {
fn();
check_();
} else {
auto v = fn();
check_();
return v;
}
}
```
There's probably a better solution, but this works. Also, this requires at least C++17 in its current form, but it can probably be backported to C++14 or even C++11.
<https://wandbox.org/permlink/YHdoyKL0FIoJUiQb> to check the code.
|
How to iterate each number in array and sum them with the other numbers in the same array - JS
I'm trying to sum every number in the array with the next numbers and save all results:
This is my example:
```
var arr = [2, 3, -6, 2, -1, 6, 4];
```
I have to sum 2 + 3 and save it, then 2 + 3 - 6 and save it, next 2 + 3 - 6 + 2 and save it, etc., to the end of the array. Next the same with the second index: 3 - 6 and save it, 3 - 6 + 2...
I know this can be done with two nested loops but don't know exactly how to do it.
Where am I wrong?
```
const sequence = [2, 3, -6, 2, -1, 2, -1, 6, 4]
const sums = [];
for (let i = 0; i < sequence.length; i++) {
for (let z = 1; z < sequence.length; z++) {
let previous = sequence[z - 1] + sequence[z];
sums.push(previous + sequence[z + 1])
}
}
console.log(sums);
```
|
The following uses a function to [reduce](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) an array into an array of its gradual sums.
You can call that function repeatedly while removing items from the array to sum the entire thing.
Here's a concise version:
```
var arr = [2, 3, -6, 2, -1, 6, 4];
var listSums = (array) => array.reduce((a,b) => [...a, a[a.length-1]+b], [0]).slice(2);
var listAllSums = (array) => array.reduce((a, b, index) => [...a, ...listSums(array.slice(index))], []);
console.log(listAllSums(arr));
```
And here's an expanded version for clarity.
The logic:
```
Add the sums of [2, 3, -6, 2, -1, 6, 4] to the list
Add the sums of [ 3, -6, 2, -1, 6, 4] to the list
Add the sums of [ -6, 2, -1, 6, 4] to the list
...
Add the sums of [ -1, 6, 4] to the list
Add the sums of [ 6, 4] to the list
Output the list
```
The code:
```
var arr = [2, 3, -6, 2, -1, 6, 4];
function sumArray(array) {
var result = array.reduce(function(accumulator,currentInt) {
var lastSum = accumulator[accumulator.length-1]; //Get current sum
var newSum = lastSum + currentInt; //Add next integer
var resultingArray = [...accumulator, newSum]; //Combine them into an array
return resultingArray; //Return the new array of sums
}, [0]); //Initialize accumulator
return result.slice(2);
}
var result = arr.reduce(function(accumulator, currentInt, index) { //For each item in our original array
var toSum = arr.slice(index); //Remove x number of items from the beginning (starting with 0)
var sums = sumArray(toSum); //Sum the array using the function above
var combined = [...accumulator, ...sums]; //Store the results
return combined; //Return the results to the next iteration
}, []); //Initialize accumulator
console.log(result);
```
|
Fine-tuning of methods before main analysis: p-value hacking?
I am reviewing a paper for a journal. The authors propose a new classification scheme, and they try to assess its classification accuracy using some novel predictor data. Standard classification techniques (SVM, KNN, etc.) are used. I came to know of a detail of what they did via the response document they gave to my initial set of comments; hence this message.
Let us say there are two ways to process the predictor data and derive the prediction metrics (independent variables) associated with the classification. Let us call them M1 and M2 (method 1 and method 2). The authors first tried both methods using the entire sample data set, and they concluded that M1 was better; this was decided because M1 was giving better results (accuracy stats) with the data. Then they did their "main analysis" with M1. In the final paper they just talk about M1, and they give a logical-sounding reason for selecting M1; that is, they do not mention that they had tested both M1 and M2. They derived accuracy stats using cross validation (train/test ratio 80:20). I think that these accuracy stats (e.g., the overall accuracy, kappa values) associated with sample/M1 will now not represent those of the population. Am I right? Or is it just the confidence intervals associated with those stats?
I also think it is a version of "p-value hacking / data dredging", but it is more like "methods dredging", where they first use the data to select a "good/better" method. And then in the main paper they do not mention this step; they (rather) give a logical reason for selecting method M1 and go on to state the results of using this method.
|
This is a textbook case of HARKing: Hypothesizing After Results are Known ([Kerr, 1998](https://doi.org/10.1207/s15327957pspr0203_4)). Not disclosing this step is indeed more than borderline unethical. The authors should prominently note that they did this preprocessing step, because it makes all their results less reliable.
Essentially, all statistics (i.e., parameters estimated from data, like the accuracy of a classifier or many other parameters) are random variables. Evaluating statistics across multiple methods, models or samples and then cherry-picking the largest or most significant ones means that we tend to pick the ones that just randomly in this particular dataset turned out to be large. It may well be that in a similar new dataset, this statistic is lower and another one higher, simply due to sampling variability. Any estimate driven by this kind of cherry-picking will be biased, i.e., it will systematically over-estimate the true parameter value. And confidence intervals that are calculated without taking this additional "snooping" step into account will be too low, i.e., they will have lower than their nominal coverage. Correcting for this "path through the garden of forking paths" (take a look at Gelman's paper) is highly nontrivial.
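To make the cherry-picking bias concrete, here is a toy simulation (my own illustration, not from the paper under review): two classifiers with identical true accuracy are evaluated on a finite test set, and reporting whichever looks better systematically over-estimates the accuracy.
```
import numpy as np

rng = np.random.default_rng(0)
true_accuracy = 0.7          # both "methods" are equally good by construction
n_test, n_trials = 100, 10_000

# accuracy estimates for two methods, each evaluated on 100 test cases
acc_m1 = rng.binomial(n_test, true_accuracy, n_trials) / n_test
acc_m2 = rng.binomial(n_test, true_accuracy, n_trials) / n_test

print(acc_m1.mean())                      # ~0.700: a single method is unbiased
print(np.maximum(acc_m1, acc_m2).mean())  # ~0.726: picking the winner is biased upward
```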
(Also, [accuracy is a seriously flawed evaluation measure](https://stats.stackexchange.com/q/312780/1352).)
|
:empty pseudoclass when adding dynamic content
I have read in this [sitepoint page](http://reference.sitepoint.com/css/pseudoclass-empty#compatibilitysection) and [quirksmode page](http://www.quirksmode.org/css/empty.html) about the new `:empty` pseudoclass.
Sitepoint says that even when dynamic content is appended, the empty style will still take effect; it notes that Firefox behaves this way.
Quirksmode says the element discards its empty state when it is filled with some elements or text. The demo on that site works in my browser (Chrome 19), so I assumed only Firefox would be buggy.
However, I have a piece of code in my plugin that dynamically fills up a list with items, and it doesn't seem to work. [Here's a fiddle](http://jsfiddle.net/YprUV/5/) that appends list items: even if you click the button, the items won't appear until you try to debug it in the console (they magically appear when you click the `<li>` in the element tree).
Why is this happening, and is there a workaround to "force-discard" the empty style?
I know there are other ways to do what I am doing in the fiddle (and I'm currently using one of those "other ways"), but the `:empty` method is a lot easier.
**UPDATE**:
Added a remove-item button. When the last item is removed, the list should disappear, but this still doesn't work. Hmm, I'll try to check in another browser.
**FIX**
A temporary fix/alternative to using `:empty` and `display:none` is to give the element zero `width`, `height`, `borders`, `margins`, and `paddings`; additionally, `position:absolute` removes it from the flow.
|
The fiddle you provided works for me with FF10 and IE9. It only fails in Chrome (18+)
Not sure why this happens, as the quirksmode page works in Chrome as well..
For the *force-discard*, it seems to be doable by setting almost any CSS property from code.
example at <http://jsfiddle.net/gaby/YprUV/9/>
**Update**
Ok, i found this [***Chrome bug report***](http://code.google.com/p/chromium/issues/detail?id=88906) that is about `:empty` selector misbehaving when used with `display:none`
It is marked as fixed, but perhaps we should re-open it.
Here is a test that does not use `display:none` (*it just changes the background color*) and it works just fine.. <http://jsfiddle.net/YprUV/11/>
|
Get url for current page, but with a different format
Using rails 2. I want a link to the current page (whatever it is) that keeps all of the params the same but changes the format to 'csv'. (setting the format can be done by having format=csv in the params or by putting .csv at the end of the path). Eg
```
posts/1
=> posts/1.csv OR posts/1?format=csv
posts?name=jim
=> posts.csv?name=jim OR posts?name=jim&format=csv
```
I tried this as a hacky attempt
`request.url+"&format=csv"`
and that works fine if there are params in the current url (case 2 above) but breaks if there aren't (case 1). I could come up with more hacky stuff along these lines, e.g. testing whether the request has params, but I'm thinking there must be a nicer way.
cheers, max
EDIT - btw, it's not guaranteed that the current page could have a named route associated with it, in case that's relevant: we could have got there via the generic "/:controller/:action/:id" route.
|
```
<%= link_to "This page in CSV", {:format => :csv } %>
<%= link_to "This page in PDF", {:format => :pdf } %>
<%= link_to "This page in JPEG", {:format => :jpeg } %>
```
**EDIT**
Add helper
```
def current_url(new_params)
url_for :params => params.merge(new_params)
end
```
then use this
```
<%= link_to "This page in CSV", current_url(:format => :csv ) %>
```
**EDIT 2**
Or improve your hack:
```
def current_url(new_params)
params.merge!(new_params)
string = params.map{ |k,v| "#{k}=#{v}" }.join("&")
request.uri.split("?")[0] + "?" + string
end
```
**EDIT**
IMPORTANT! @floor - your approach above has a serious problem - it directly modifies params, so if you've got anything after a call to this method which uses params (such as will_paginate links for example) then that will get the modified version which you used to build your link. I changed it to call .dup on params and then modify the duplicated object rather than modifying params directly. – @Max Williams
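Based on that comment, a non-destructive sketch (same logic, but without mutating `params`) might look like:
```
def current_url(new_params)
  merged = params.dup.merge(new_params)  # leave the real params untouched
  query  = merged.map { |k, v| "#{k}=#{v}" }.join("&")
  request.uri.split("?")[0] + "?" + query
end
```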
|
scipy sparse matrix division
I have been trying to divide a Python scipy sparse matrix by the vector of its row sums. Here is my code:
```
sparse_mat = bsr_matrix((l_data, (l_row, l_col)), dtype=float)
sparse_mat = sparse_mat / (sparse_mat.sum(axis = 1)[:,None])
```
However, it throws an error no matter how I try it
```
sparse_mat = sparse_mat / (sparse_mat.sum(axis = 1)[:,None])
File "/usr/lib/python2.7/dist-packages/scipy/sparse/base.py", line 381, in __div__
return self.__truediv__(other)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/compressed.py", line 427, in __truediv__
raise NotImplementedError
NotImplementedError
```
Anyone with an idea of where I am going wrong?
|
You can circumvent the problem by creating a sparse diagonal matrix from the reciprocals of your row sums and then multiplying it with your matrix. In the product the diagonal matrix goes left and your matrix goes right.
Example:
```
>>> a
array([[0, 9, 0, 0, 1, 0],
[2, 0, 5, 0, 0, 9],
[0, 2, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0],
[0, 9, 5, 3, 0, 7],
[1, 0, 0, 8, 9, 0]])
>>> b = sparse.bsr_matrix(a)
>>>
>>> c = sparse.diags(1/b.sum(axis=1).A.ravel())
>>> # on older scipy versions the offsets parameter (default 0)
... # is a required argument, thus
... # c = sparse.diags(1/b.sum(axis=1).A.ravel(), 0)
...
>>> a/a.sum(axis=1, keepdims=True)
array([[ 0. , 0.9 , 0. , 0. , 0.1 , 0. ],
[ 0.125 , 0. , 0.3125 , 0. , 0. , 0.5625 ],
[ 0. , 1. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0.375 , 0.20833333, 0.125 , 0. , 0.29166667],
[ 0.05555556, 0. , 0. , 0.44444444, 0.5 , 0. ]])
>>> (c @ b).todense() # on Python < 3.5 replace c @ b with c.dot(b)
matrix([[ 0. , 0.9 , 0. , 0. , 0.1 , 0. ],
[ 0.125 , 0. , 0.3125 , 0. , 0. , 0.5625 ],
[ 0. , 1. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0.375 , 0.20833333, 0.125 , 0. , 0.29166667],
[ 0.05555556, 0. , 0. , 0.44444444, 0.5 , 0. ]])
```
|
When is double-quoting necessary?
The old advice used to be to double-quote any expression involving a `$VARIABLE`, at least if one wanted it to be interpreted by the shell as one single item, otherwise, any spaces in the content of `$VARIABLE` would throw off the shell.
I understand, however, that in more recent versions of shells, double-quoting is no longer always needed (at least for the purpose described above). For instance, in `bash`:
```
% FOO='bar baz'
% [ $FOO = 'bar baz' ] && echo OK
bash: [: too many arguments
% [[ $FOO = 'bar baz' ]] && echo OK
OK
% touch 'bar baz'
% ls $FOO
ls: cannot access bar: No such file or directory
ls: cannot access baz: No such file or directory
```
In `zsh`, on the other hand, the same three commands succeed. Therefore, based on this experiment, it seems that, in `bash`, one can omit the double quotes inside `[[ ... ]]`, but not inside `[ ... ]` nor in command-line arguments, whereas, in `zsh`, the double quotes may be omitted in all these cases.
But inferring general rules from anecdotal examples like the above is a chancy proposition. It would be nice to see a summary of when double-quoting is necessary. I'm primarily interested in `zsh`, `bash`, and `/bin/sh`.
|
First, separate zsh from the rest. It's not a matter of old vs modern shells: zsh behaves differently. The zsh designers decided to make it incompatible with traditional shells (Bourne, ksh, bash), but easier to use.
Second, it is far easier to use double quotes all the time than to remember when they are needed. They are needed most of the time, so you'll need to learn when they aren't needed, not when they are needed.
In a nutshell, **double quotes are necessary wherever a list of words or a pattern is expected**. They are optional in contexts where a single raw string is expected by the parser.
### What happens without quotes
Note that without double quotes, two things happen.
1. First, the result of the expansion (the value of the variable for a parameter substitution like `$foo` or `${foo}`, or the output of the command for a command substitution like `$(foo)`) is split into words wherever it contains whitespace.
More precisely, the result of the expansion is split at each character that appears in the value of the `IFS` variable (separator character). If a sequence of separator characters contains whitespace (space, tab, or newline), the whitespace counts as a single character; leading, trailing, or repeated non-whitespace separators lead to empty fields. For example, with `IFS=" :"` (space and colon), `:one::two : three: :four` produces empty fields before `one`, between `one` and `two`, and (a single one) between `three` and `four`.
2. Each field that results from splitting is interpreted as a glob (a wildcard pattern) if it contains one of the characters `[*?`. If that pattern matches one or more file names, the pattern is replaced by the list of matching file names.
An unquoted variable expansion `$foo` is colloquially known as the “split+glob operator”, in contrast with `"$foo"` which just takes the value of the variable `foo`. The same goes for command substitution: `"$(foo)"` is a command substitution, `$(foo)` is a command substitution followed by split+glob.
### Where you can omit the double quotes
Here are all the cases I can think of in a Bourne-style shell where you can write a variable or command substitution without double quotes, and the value is interpreted literally.
- On the right-hand side of a scalar (not array) variable assignment.
```
var=$stuff
a_single_star=*
```
Note that you do need the double quotes after `export` or `readonly`, because in a few shells, they are still ordinary builtins, not a keyword. This is only true in some shells such as some older versions of dash, older versions of zsh (in sh emulation), yash, or posh; in bash, ksh, newer versions of dash and zsh `export` / `readonly` and co are treated specially as dual builtin / keyword (under some conditions) as POSIX now more clearly requires.
```
export VAR="$stuff"
```
- In a `case` statement.
```
case $var in …
```
Note that you do need double quotes in a case pattern. Word splitting doesn't happen in a case pattern, but an unquoted variable is interpreted as a glob-style pattern whereas a quoted variable is interpreted as a literal string.
```
a_star='a*'
case $var in
"$a_star") echo "'$var' is the two characters a, *";;
$a_star) echo "'$var' begins with a";;
esac
```
- Within double brackets. Double brackets are shell special syntax.
```
[[ -e $filename ]]
```
Except that you do need double quotes where a pattern or regular expression is expected: on the right-hand side of `=` or `==` or `!=` or `=~` (though for the latter, behaviour varies between shells).
```
a_star='a*'
if [[ $var == "$a_star" ]]; then echo "'$var' is the two characters a, *"
elif [[ $var == $a_star ]]; then echo "'$var' begins with a"
fi
```
You do need double quotes as usual within single brackets `[ … ]` because they are ordinary shell syntax (it's a command that happens to be called `[`). See [Why does parameter expansion with spaces without quotes work inside double brackets "[[" but not inside single brackets "["?](https://unix.stackexchange.com/questions/32210/using-single-or-double-bracket-bash/32227#32227).
- In a redirection in non-interactive POSIX shells (not `bash`, nor `ksh88`).
```
echo "hello world" >$filename
```
Some shells, when interactive, do treat the value of the variable as a wildcard pattern. POSIX prohibits that behaviour in non-interactive shells, but a few shells including bash (except in POSIX mode) and ksh88 (including when found as the (supposedly) POSIX `sh` of some commercial Unices like Solaris) still do it there (`bash` does also attempt *splitting* and the redirection fails unless that *split+globbing* results in exactly one word), which is why it's better to quote targets of redirections in a `sh` script in case you want to convert it to a `bash` script some day, or run it on a system where `sh` is non-compliant on that point, or it may be *sourced* from interactive shells.
- Inside an arithmetic expression. In fact, you need to leave the quotes out in order for a variable to be parsed as an arithmetic expression in several shells.
```
expr=2*2
echo "$(($expr))"
```
However, you do need the quotes around the arithmetic expansion as it is subject to word splitting in most shells as POSIX requires (!?).
- In an associative array subscript.
```
typeset -A a
i='foo bar*qux'
a[foo\ bar\*qux]=hello
echo "${a[$i]}"
```
An unquoted variable and command substitution can be useful in some rare circumstances:
- When the variable value or command output consists of a list of glob patterns and you want to expand these patterns to the list of matching files.
- When you know that the value doesn't contain any wildcard character, that `$IFS` was not modified and you want to split it at whitespace (well, only space, tab and newline) characters.
- When you want to split a value at a certain character: disable globbing with `set -o noglob` / `set -f`, set `IFS` to the separator character (or leave it alone to split at whitespace), then do the expansion (see the sketch below).
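A minimal POSIX sh sketch of that last case, splitting a colon-separated value (the value of `string` is made up for illustration):
```
string="/usr/bin:/bin:/usr/local/bin"
set -f               # disable globbing
IFS=:                # split at ":" only
set -- $string       # unquoted expansion: split happens, glob is disabled
set +f; unset IFS    # restore normal behavior
printf '%s\n' "$@"   # one field per line
```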
### Zsh
In zsh, you can omit the double quotes most of the time, with a few exceptions.
- `$var` never expands to multiple words, but it expands to the empty list (as opposed to a list containing a single, empty word) if the value of `var` is the empty string. Contrast:
```
var=
print -l -- $var foo # prints just foo
print -l -- "$var" foo # prints an empty line, then foo
```
Similarly, `"${array[@]}"` expands to all the elements of the array, while `$array` only expands to the non-empty elements.
- The `@` parameter expansion flag sometimes requires double quotes around the whole substitution: `"${(@)foo}"`.
- Command substitution undergoes field splitting if unquoted: `echo $(echo 'a'; echo '*')` prints `a *` (with a single space) whereas `echo "$(echo 'a'; echo '*')"` prints the unmodified two-line string. Use `"$(somecommand)"` to get the output of the command in a single word, sans final newlines. Use `"${$(somecommand; echo .)%?}"` to get the exact output of the command including final newlines. Use `"${(@f)$(somecommand)}"` to get an array of lines from the command's output (removing trailing empty lines if any though).
|
Authentication with NSURLConnection sendAsynchronousRequest with completion handler
Generally I like to just "fire and forget" with NSURLConnection's sendAsynchronousRequest class method using the completion handler block, but it seems that might not be an option when authentication is needed.
When using a completion handler style request like this:
```
[NSURLConnection sendAsynchronousRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.mysite.com/"]]
queue:[NSOperationQueue mainQueue]
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
//Do Stuff
}];
```
What is the proper way to handle authentication? Do I need to alloc and init the NSURLConnection and set a delegate instead of doing using this class method style? I think I understand how to authenticate correctly with the delegate function but I'm trying to figure out if I can include that in the completionHandler block or if there is a better way to do this.
```
- (void)connection:(NSURLConnection *)connection willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
if ([challenge previousFailureCount] > 0) {
NSLog(@"Authentication Failure");
[connection cancel];
}
else
{
NSURLCredential *credential = [NSURLCredential credentialWithUser:self.username
password:self.password
persistence:NSURLCredentialPersistenceForSession];
[[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
}
}
```
|
I think the completionHandler method is for basic requests. Maybe you could consider using [AFNetworking](https://github.com/gowalla/AFNetworking), as I use it with block methods and authentication.
EDIT....
Have you tried adding the authentication header to the NSURLRequest?
Create an NSMutableURLRequest:
```
NSMutableURLRequest *urlRequest = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://www.example.com/"]];
```
And add the authentication header like this:
```
NSString *basicAuthCredentials = [NSString stringWithFormat:@"%@:%@", userName, password];
NSString *authValue = [NSString stringWithFormat:@"Basic %@", AFBase64EncodedStringFromString(basicAuthCredentials)];
[urlRequest setValue:authValue forHTTPHeaderField:@"Authorization"];
```
The AFBase64EncodedStringFromString function is this:
```
static NSString * AFBase64EncodedStringFromString(NSString *string) {
NSData *data = [NSData dataWithBytes:[string UTF8String] length:[string lengthOfBytesUsingEncoding:NSUTF8StringEncoding]];
NSUInteger length = [data length];
NSMutableData *mutableData = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
uint8_t *input = (uint8_t *)[data bytes];
uint8_t *output = (uint8_t *)[mutableData mutableBytes];
for (NSUInteger i = 0; i < length; i += 3) {
NSUInteger value = 0;
for (NSUInteger j = i; j < (i + 3); j++) {
value <<= 8;
if (j < length) {
value |= (0xFF & input[j]);
}
}
static uint8_t const kAFBase64EncodingTable[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
NSUInteger idx = (i / 3) * 4;
output[idx + 0] = kAFBase64EncodingTable[(value >> 18) & 0x3F];
output[idx + 1] = kAFBase64EncodingTable[(value >> 12) & 0x3F];
output[idx + 2] = (i + 1) < length ? kAFBase64EncodingTable[(value >> 6) & 0x3F] : '=';
output[idx + 3] = (i + 2) < length ? kAFBase64EncodingTable[(value >> 0) & 0x3F] : '=';
}
return [[NSString alloc] initWithData:mutableData encoding:NSASCIIStringEncoding];
}
```
Then call the function you called before, but using your new NSURLRequest:
```
[NSURLConnection sendAsynchronousRequest:urlRequest
queue:[NSOperationQueue mainQueue]
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
}];
```
|
how to print char array in c++
How can I print a char array that I initialize and then concatenate to another char array? Please see the code below.
```
#include <iostream>
#include <cstring>
using namespace std;

int main () {
char dest[1020];
char source[7]="baby";
cout <<"source: " <<source <<endl;
cout <<"return value: "<<strcat(dest, source) <<endl;
cout << "pointer pass: "<<dest <<endl;
return 0;
}
```
this is the output
```
source: baby
return value: v����baby
pointer pass: v����baby
```
Basically I would like to see the output print:
```
source: baby
return value: baby
pointer pass: baby
```
|
You haven't initialized `dest`
```
char dest[1020] = ""; //should fix it
```
You were just lucky that it so happened that the 6th (random) value in `dest` was `0`. If the first zero had been at the 1000th character, your return value would be much longer; if there had been no zero within the 1020 bytes at all, you'd get undefined behavior.
Strings stored as `char` arrays must be terminated with `0`; otherwise there's no telling where they end. You could alternatively say that the string ends at its zeroth character by explicitly setting it to 0:
```
char dest[1020];
dest[0] = 0;
```
Or you could initialize your whole array with 0's
```
char dest[1020] = {};
```
And since your question is tagged `C++`, I cannot but note that in C++ we use `std::string`s, which save you from a lot of headache. Operator + can be used to concatenate two `std::string`s.
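For example, a minimal sketch of the same program using `std::string`:
```
#include <iostream>
#include <string>

int main() {
    std::string dest;             // empty by default, no terminator to forget
    std::string source = "baby";
    dest = dest + source;         // or simply: dest += source;
    std::cout << "source: " << source << "\n"
              << "result: " << dest << "\n";
}
```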
|
Navigating from canActivate in Aurelia view-model
In my Aurelia view-model, I am trying to check some precondition in `canActivate()`, and based on that it is decided whether to navigate to a different view or not.
```
export class ThisView{
canActivate() {
const self = this;
const promise = self.services.get(url)
.then((items: any) => {
if (items) {
self.items= items;
if (self.items.length === 1) {
this.router.navigate("some/other/view");
//return false;
}
//return true;
} else {
self.logger.error("No item found");
//return false;
}
});
return promise;
}
}
```
Now, even though I am navigating to `some/other/view` if there is only one item found, the view of `ThisView` still gets activated (i.e. it can be seen in the browser).
Is there a way to avoid that? There are a couple of things I tried.
1. Return `true` or `false` from `promise` to accept or reject activation of this view. But as this view is kind of the landing page of my application, if activation is rejected (returns false), the router tries to restore the previous location, which is not available, and throws an error. Restoring a previous location is also not desired by the application in this specific case.
2. Another idea was something like a pipeline step, where we can call `next.cancel(new Redirect("some/other/view"))` to cancel the current navigation instruction with a new one. But I am not sure how to do the same from a view-model.
Please suggest.
**Workaround:** I have finally used a simple trick of using `if.bind` on the view of `ThisView`. However, it would have been more interesting if we could somehow cancel the current instruction (from the page lifecycle) with a new one.
|
Instead of `this.router.navigate("some/other/view")`, can you not import `Redirect` and return the redirect from there? I.e.:
```
import {Redirect} from 'aurelia-router';
export class ThisView{
canActivate() {
const self = this;
var items = self.services.get(url);
if(items){
self.items= items;
if (self.items.length === 1) {
return new Redirect("some/other/view");
}
return true;
}
else {
self.logger.error("No item found");
return new Redirect("not-found-view");
}
}
}
```
I've made a basic GistRun showing how this works - I haven't used a promise, but I believe the concept is the same.
<https://gist.run/?id=52301f1b2898691ff4d54f320f61f6c6>
|
Custom font in a Cocoa application
I know you can customize fonts by using Interface Builder and selecting a font. However, I'm curious if I can use a custom font that's not included by default on systems. Is there a way to include a custom font in my application?
|
While the manual font activation procedure is one option, you might also consider the `ATSApplicationFontsPath` Info.plist key:
[Information Property List Key Reference](http://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Articles/GeneralPurposeKeys.html#//apple_ref/doc/uid/TP40009253-SW8):
>
> "`ATSApplicationFontsPath` (`String` - Mac
> OS X) identifies the location of a
> font file or directory of fonts in the
> bundle’s `Resources` directory. If
> present, Mac OS X activates the fonts
> at the specified path for use by the
> bundled application. The fonts are
> activated only for the bundled
> application and not for the system as
> a whole. The path itself should be
> specified as a relative directory of
> the bundle’s Resources directory. For
> example, if a directory of fonts was
> at the path
> `/Applications/MyApp.app/Contents/Resources/Stuff/MyFonts/`,
> you should specify the string
> `Stuff/MyFonts/` for the value of this
> key."
>
>
>
I'd be sure to double-check, but I believe this functionality was added in OS X 10.5.x (which the code posted by Jinhyung Park targets).
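For example, if your fonts lived in `Resources/Fonts/` (a hypothetical layout), the Info.plist entry would be:
```
<key>ATSApplicationFontsPath</key>
<string>Fonts/</string>
```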
|
How to print object list to file with formatting in table format
I have to print a list of objects to a text file in table format. For example, if I have a list of Person objects (with getName, getAge and getAddress methods), the text file should look like this:
```
Name Age Address
Abc 20 some address1
Def 30 some address2
```
I can do this by manually writing some code, where I have to take care of spaces and formatting issues.
I am just curious whether there are APIs or tools to do this formatting work.
|
```
import java.util.*;
public class Test {
public static void main(String[] args) {
List<Person> list = new ArrayList<Person>();
list.add(new Person("alpha", "astreet", 12));
list.add(new Person("bravo", "bstreet", 23));
list.add(new Person("charlie", "cstreet", 34));
list.add(new Person("delta", "dstreet", 45));
System.out.println(String.format("%-10s%-10s%-10s", "Name", "Age", "Address"));
for (Person p : list)
System.out.println(String.format("%-10s%-10d%-10s", p.name, p.age, p.addr));
}
}
class Person {
String name;
String addr;
int age;
public Person(String name, String addr, int age) {
this.name = name;
this.addr = addr;
this.age = age;
}
}
```
---
**Output:**
```
Name      Age       Address
alpha     12        astreet
bravo     23        bstreet
charlie   34        cstreet
delta     45        dstreet
```
|
Shutdown procedure clarification
The book "How Linux Works" says the general shutdown procedure (independent of init system) is something like this:
1. init asks every process to shut down cleanly.
2. If a process doesn’t respond after a while, init kills it, first trying a TERM signal.
3. If the TERM signal doesn’t work, init uses the KILL signal on any stragglers.
4. The system locks system files into place and makes other preparations for shutdown.
5. The system unmounts all filesystems other than the root.
6. The system remounts the **root filesystem read-only**.
7. The system **writes all buffered data out to the filesystem** with the sync program.
8. The final step is to tell the kernel to reboot or stop with the reboot(2) system call. This can be done by init or an auxiliary program such as reboot, halt, or poweroff.
How can sync write its buffers if the filesystem is read-only?
|
You're right to be surprised: that order doesn't make sense. If a book presents it that way, it's sloppy and misleading.
Unmounting a filesystem, or mounting it read-only, writes all the data to the disk. When the `umount` command or `mount -o remount,ro` returns, all the data is written to the disk and `sync` has nothing left to do. It's pointless to call `sync` before (the data will be written by the umount operation anyway) and pointless to call it after (it won't do anything).
I think this wasn't true in some ancient Unix systems, where you had to call `sync` *before* unmounting. Calling it after was still pointless.
If you look beyond filesystems, there may be cases where `sync` does something. For example, I think that on Linux `sync` ensures that the metadata of RAID arrays are written to the disk. This is useful even in the absence of any filesystem mounted read-write.
|
Dictionary in django template
I have a view like this:
```
info_dict = [{u'Question 1': ['13365', '13344']}, {u'Question 2': ['13365']}, {u'Question 3': []}]
for key in info_dict:
for k, v in key.items():
profile = User.objects.filter(id__in=v, is_active=True)
for f in profile:
wanted_fields = ['job', 'education', 'country', 'city','district','area']
profile_dict = {}
for w in wanted_fields:
profile_dict[f._meta.get_field(w).verbose_name] = getattr(f, w).name
return render_to_response('survey.html',{
'profile_dict':profile_dict,
},context_instance=RequestContext(request))
```
and in template:
```
<ul>
{% for k, v in profile_dict.items %}
<li>{{ k }} : {{ v }}</li>
{% endfor %}
</ul>
```
I get only one dictionary in the template, but there might be 4 dictionaries here (because of info_dict).
What is wrong in the view?
Thanks in advance
|
In your view, you’ve only created one variable (`profile_dict`) to hold the profile dicts.
In each iteration of your `for f in profile` loop, you’re re-creating that variable, and overwriting its value with a new dictionary. So when you include `profile_dict` in the context passed to the template, it holds the last value assigned to `profile_dict`.
If you want to pass four profile_dicts to the template, you could do this in your view:
```
info_dict = [{u'Question 1': ['13365', '13344']}, {u'Question 2': ['13365']}, {u'Question 3': []}]
# Create a list to hold the profile dicts
profile_dicts = []
for key in info_dict:
for k, v in key.items():
profile = User.objects.filter(id__in=v, is_active=True)
for f in profile:
wanted_fields = ['job', 'education', 'country', 'city','district','area']
profile_dict = {}
for w in wanted_fields:
profile_dict[f._meta.get_field(w).verbose_name] = getattr(f, w).name
# Add each profile dict to the list
profile_dicts.append(profile_dict)
# Pass the list of profile dicts to the template
return render_to_response('survey.html',{
'profile_dicts':profile_dicts,
},context_instance=RequestContext(request))
```
And then in your template:
```
{% for profile_dict in profile_dicts %}
<ul>
{% for k, v in profile_dict.items %}
<li>{{ k }} : {{ v }}</li>
{% endfor %}
</ul>
{% endfor %}
```
|
Get normal of bezier curve in 2D
I'm trying to wrap text along a bezier curve, following the tutorial from this link: <http://www.planetclegg.com/projects/WarpingTextToSplines.html>
I get the derivative with this code:
```
function Derivative(x0,x1,x2,t)
{
var mt = 1-t;
var a = mt * mt;
var b = mt * t * 2;
var c = t * t;
var result=a * x0 + b * x1 + c * x2;
return result;
}
```
So I calculate the normal with this code:
```
function Normal(x0,x1,x2,y0,y1,y2,t)
{
var dx = Derivative(x0,x1,x2,t);
var dy = Derivative(y0,y1,y2,t);
var q = Math.sqrt(dx*dx + dy*dy)
return { x: -dy/q, y: dx/q };
};
```
This is the result: something is wrong, but I don't know where.
*(image: the rendered result)*
Thank you all!
|
The "what I want" image looks a lot like my [bezierjs documentation](http://pomax.github.io/bezierjs/#normal), so: you have the right idea (take the derivative to get the tangent vector, then rotate to get the normal), but make sure to get those derivatives right.
If you're using quadratic Bezier curves, consisting of three 2d points P1, P2 and P3, then the Bezier function is:
```
P1 * (1-t)² + P2 * 2 * (1-t)t + P3 * t²
```
and the derivative (written in but one of many ways) is:
```
P1 * (2t-2) + (2*P3-4*P2) * t + 2 * P2
```
The code you show as derivative computation is actually the regular quadratic Bezier function, so that's going to give you rather wrong results. Update the code to the correct derivative, and you should be fine.
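As an illustration, a minimal sketch of the corrected function, keeping your original signature (it computes one component of the derivative, so calling it for x and y as you already do keeps working):
```
function Derivative(x0, x1, x2, t)
{
    // derivative of a quadratic Bezier: 2(1-t)(P1-P0) + 2t(P2-P1)
    return 2 * (1 - t) * (x1 - x0) + 2 * t * (x2 - x1);
}
```
With that change, your `Normal` function can stay exactly as it is.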
|
Proving inductive facts in Z3
I am trying to prove an inductive fact in Z3, an SMT solver by Microsoft. I know that Z3 does not provide this functionality in general, as explained in the [Z3 guide](http://rise4fun.com/z3/tutorial/guide) (section 8: Datatypes), but it looks like this is possible when we constrain the domain over which we want to prove the fact. Consider the following example:
```
(declare-fun p (Int) Bool)
(assert (p 0))
(assert (forall ((x Int))
(=>
(and (> x 0) (<= x 20))
(= (p (- x 1)) (p x) ))))
(assert (not (p 20)))
(check-sat)
```
The solver responds correctly with `unsat`, which means that `(p 20)` is valid. The problem is that when we relax this constraint any further (we replace `20` in the previous example by any integer greater than 20), the solver responds with `unknown`.
I find this strange because it does not take Z3 long to solve the original problem, but when we increase the upper limit by one it becomes suddenly impossible. I have tried to add a pattern to the quantifier as follows:
```
(declare-fun p (Int) Bool)
(assert (p 0))
(assert (forall ((x Int))
(! (=>
(and (> x 0) (<= x 40))
(= (p (- x 1)) (p x) )) :pattern ((<= x 40)))))
(assert (not (p 40)))
(check-sat)
```
This seems to work better, but now the upper limit is 40. Does this mean that I had better not use Z3 to prove such facts, or am I formulating my problem incorrectly?
|
Z3 uses many heuristics to control quantifier instantiation. One of them is based on the "instantiation depth". Z3 tags every expression with a "depth" attribute. All user supplied assertions are tagged with depth 0. When a quantifier is instantiated, the depth of the new expressions is bumped. Z3 will not instantiate quantifiers using expressions tagged with a depth greater than a pre-defined threshold. In your problem, the threshold is reached: `(p 40)` is depth 0, `(p 39)` is depth 1, `(p 38)` is depth 2, etc.
To increase the threshold, you should use the option:
```
(set-option :qi-eager-threshold 100)
```
Here is the example with this option: <http://rise4fun.com/Z3/ZdxO>.
Of course, using this setting, Z3 will timeout, for example, for `(p 110)`.
In the future, Z3 will have better support for "bounded quantification". In most cases, the best approach for handling this kind of quantifier is to expand it.
With the programmatic API, we can easily "instantiate" expressions before we send them to Z3.
Here is an example in Python (<http://rise4fun.com/Z3Py/44lE>):
```
p = Function('p', IntSort(), BoolSort())
s = Solver()
s.add(p(0))
s.add([ p(x+1) == p(x) for x in range(40)])
s.add(Not(p(40)))
print s.check()
```
Finally, in Z3, patterns containing arithmetic symbols are not very effective. The problem is that Z3 preprocesses the formula before solving. Then, most patterns containing arithmetic symbols will never match. For more information on how to use patterns/triggers effectively, see [this article](http://research.microsoft.com/en-us/um/people/moskal/pdf/prtrig.pdf). The author also provides a [slide deck](http://research.microsoft.com/en-us/um/people/moskal/pdf/prtrig-slides.pdf).
|
How to pipe a bash command and keep Ctrl+C working?
Considering a custom command line software (e.g. loopHelloWorld) which detect ctrl+C for nice shutdown. How to pipe the answer without losing the `Ctrl+C`?
```
$ loopHelloWorld
- Ctrl+C to nicely shutdown
```
But with a pipe, the pipe kills the software without a nice shutdown
```
$ loopHelloWorld |
while IFS= read -r line; do
echo "$line"
done
```
Example
```
ping example.com |
while IFS= read -r line; do
echo "$line"
done
```
|
`Ctrl+C` causes a SIGINT to be sent to all processes in the pipeline (as they're all run in the same process group that correspond to that foreground job of your interactive shell).
So in:
```
loopHelloWorld |
while IFS= read -r line; do
echo "$line"
done
```
Both the process running `loopHelloWorld` and the one running the subshell that runs the `while` loop will get the `SIGINT`.
If `loopHelloWorld` writes the `Ctrl+C to nicely shutdown` message on its stdout, it will also be written to the pipe. If that's *after* the subshell at the other end has already died, then `loopHelloWorld` will *also* receive a SIGPIPE, which you'd need to handle.
Here, you should write that message to stderr as it's not your command's normal output (doesn't apply to the `ping` example though). Then it wouldn't go through the pipe.
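For illustration, a minimal sketch of what such a `loopHelloWorld` script could look like (the script itself is hypothetical), with the shutdown notice going to stderr so it bypasses the pipe:
```
#!/bin/bash
# on SIGINT, write the shutdown notice to stderr (fd 2), not stdout
trap 'echo "nicely shutting down" >&2; exit 130' INT
while :; do
    echo "Hello World"   # normal output still goes through the pipe
    sleep 1
done
```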
Or you could have the subshell running the while loop ignore the SIGINT so it keeps reading the `loopHelloWorld` output after the SIGINT:
```
loopHelloWorld | (
trap '' INT
while IFS= read -r line; do
printf '%s\n' "$line"
done
)
```
that would however cause the exit status of the pipeline to be 0 when you press `Ctrl+C`.
Another option for that specific example would be to use `zsh` or `ksh93` instead of `bash`. In those shells, the `while` loop would run in the main shell process, so would not be affected by SIGINT.
That wouldn't help for `loopHelloWorld | cat` though where `cat` and `loopHelloWorld` run in the foreground process group.
|
onActivityResult never called in my nested fragment
I have many nested fragments.
- Activity A
- MainFragment (in a FrameLayout)
- Fragment A (in a FrameLayout in MainFragment)
- Fragment B (in a FrameLayout in MainFragment)
- Fragment C (in a ViewPager in Fragment B)
- Fragment D (in a ViewPager in Fragment B) <--- this is where I want to catch onActivityResult
This is how I start activity for result:
```
startActivityForResult(Intent.createChooser(intent, "Title"), FILE_PICK);
```
I don't have `onActivityResult` overridden anywhere else. I tried to call it in Activity A and it got called, but then even though I called super, it never came to Fragment D. I also tried to call `onActivityResult` in `MainFragment` and it never gets called there either.
|
The event is going to be received in the activity. To have it in Fragment D you have to propagate it.
On your parent activity override `onActivityResult` and start calling the `onActivityResult` of your fragments:
```
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
    // use getSupportFragmentManager() (or getFragmentManager() for platform fragments)
    List<Fragment> fragments = getSupportFragmentManager().getFragments();
if(fragments != null){
for(Fragment fragment : fragments){
fragment.onActivityResult(requestCode, resultCode, data);
}
}
}
```
In your parent fragment you have to do the same thing, but remember to use `getChildFragmentManager` to get the fragment manager of the fragment
```
List <Fragment> fragments = getChildFragmentManager().getFragments();
```
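Putting that together, a sketch of what the override could look like in each parent fragment along the chain (MainFragment, Fragment B, and so on):
```
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    // Propagate the result down to this fragment's own children
    List<Fragment> fragments = getChildFragmentManager().getFragments();
    if (fragments != null) {
        for (Fragment fragment : fragments) {
            fragment.onActivityResult(requestCode, resultCode, data);
        }
    }
}
```
Repeat this at every nesting level until the result reaches Fragment D.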
|
Automator: duplicate OR copy in place
I am trying to build the following services:
1. Change type of image, result in the same folder (image.jpg => image.jpg + image.png)
2. Change size of image, result in the same folder (image.jpg => image.jpg + image-800x600.jpg)
I am stuck on the part where the original image is duplicated in the same folder under a different name (the Copy Finder Items workflow requires a hard-coded destination, or some other option I am not familiar with).
Maybe I could use a shell script to perform the duplicating part. I know how to get the file paths passed to the run shell script workflow, but I can't figure out how to send valid paths out to the next task (change type or resize).
Mac OS version is Mountain Lion 10.8.2.
|
You can duplicate the files before you scale them:
```
on run {input}
set newFiles to {}
repeat with aFile in input
tell application "Finder" to set myFile to duplicate aFile
set end of newFiles to myFile as alias
end repeat
delay 1
return newFiles
end run
```

You can add another AppleScript at the end to deal with the files names:
```
on run {input}
repeat with myFile in input
tell application "System Events" to set oldName to myFile's name
set newName to do shell script "echo " & quoted form of oldName & " | sed -E 's/ ?copy ?[0-9?]*//'"
tell application "System Events" to set myFile's name to newName
end repeat
end run
```
|
Using a character array as a string stream buffer
I'm looking for a clean STL way to use an existing C buffer (char\* and size\_t) as a string stream. I would prefer to use STL classes as a basis because it has built-in safeguards and error handling.
note: I cannot use additional libraries (otherwise I would use [QTextStream](http://qt-project.org/doc/qt-5/qtextstream.html))
|
You can try with [`std::stringbuf::pubsetbuf`](http://en.cppreference.com/w/cpp/io/basic_streambuf/pubsetbuf). It calls [`setbuf`](http://en.cppreference.com/w/cpp/io/basic_stringbuf/setbuf), but it's implementation defined whether that will have any effect. If it does, it'll replace the underlying string buffer with the char array, without copying all the contents like it normally does. Worth a try, IMO.
Test it with this code:
```
std::istringstream strm;
char arr[] = "1234567890";
strm.rdbuf()->pubsetbuf(arr, sizeof(arr));
int i;
strm >> i;
std::cout << i;
```
[**Live demo.**](http://coliru.stacked-crooked.com/a/4619d9db1001ebcb)
|
How do I use source in podfile?
I'm new to iOS development. For some reason I need to manually set up the Podfile for my Cordova app. There are `GoogleCloudMessaging` and `GGLInstanceID` in my Podfile, and now I want to install the Brightcove video player library, whose source is `https://github.com/brightcove/BrightcoveSpecs.git`. However, when I add the `source` at the top of the Podfile, it seems CocoaPods also tries to install `GoogleCloudMessaging` from that source.
My podfile:
```
source 'https://github.com/brightcove/BrightcoveSpecs.git'
use_frameworks!
platform :ios, '8.0'
target 'myapp' do
pod 'Brightcove-Player-Core/dynamic'
pod 'GoogleCloudMessaging'
pod 'GGLInstanceID'
end
```
Error:
```
Analyzing dependencies
[!] Unable to find a specification for `GoogleCloudMessaging`
```
|
You need to include the official CocoaPods source:
`https://github.com/CocoaPods/Specs.git`
[Docs](https://guides.cocoapods.org/syntax/podfile.html#source):
>
> The official CocoaPods source is implicit. Once you specify another source, then it will need to be included.
>
>
>
So your file should work like this I believe:
```
source 'https://github.com/brightcove/BrightcoveSpecs.git'
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!
platform :ios, '8.0'
target 'myapp' do
pod 'Brightcove-Player-Core/dynamic'
pod 'GoogleCloudMessaging'
pod 'GGLInstanceID'
end
```
|
Keycloak spring adapter - check that the authToken is active with every http request
Problem I want to solve:
For every call made to the service I want to check that the token is active, if it isn't active I want to redirect the user to the login page.
>
> Current setup: Grails 3.2.9 , Keycloak 3.4.3
>
>
>
Ideas so far:
This article looked promising: <https://www.linkedin.com/pulse/json-web-token-jwt-spring-security-real-world-example-boris-trivic>
In my security config I added a token filter
```
@Bean
public TokenAuthenticationFilter authenticationTokenFilter() throws Exception {
return new TokenAuthenticationFilter();
}
@Override
protected void configure(HttpSecurity http) throws Exception {
super.configure http
http
.addFilterBefore(authenticationTokenFilter(), BasicAuthenticationFilter.class)
.logout()
.logoutSuccessUrl("/sso/login") // Override Keycloak's default '/'
.and()
.authorizeRequests()
.antMatchers("/assets/*").permitAll()
.anyRequest().hasAnyAuthority("ROLE_ADMIN")
.and()
.csrf()
.csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
}
```
My TokenAuthenticationFilter just prints out the request headers at the moment :
```
public class TokenAuthenticationFilter extends OncePerRequestFilter {
private String getToken( HttpServletRequest request ) {
Enumeration headerEnumeration = request.getHeaderNames();
while (headerEnumeration.hasMoreElements()) {
println "${ headerEnumeration.nextElement()}"
}
return null;
}
@Override
public void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws IOException, ServletException {
String authToken = getToken( request );
}
}
```
Which returns:
- host
- user-agent
- accept
- accept-language
- accept-encoding
- cookie
- connection
- upgrade-insecure-requests
- cache-control
The code/logic I want to implement in the filter is something like:
```
KeycloakAuthenticationToken token = SecurityContextHolder.context?.authentication
RefreshableKeycloakSecurityContext context = token.getCredentials()
if(!context.isActive()){
// send the user to the login page
}
```
However I'm lost as to how to get there.
Any help greatly appreciated
|
As far as I understand, your question is about "how to check the token is active?" and not "how to redirect the user to login page?".
As I see you added the tags "spring-boot" and "keycloak", maybe you could use the "Keycloak Spring Boot Adapter". Assuming you use version 3.4 of Keycloak (v4.0 is still in beta), you can find some documentation [here](https://www.keycloak.org/docs/3.4/securing_apps/index.html#_spring_boot_adapter).
If you can't (or don't want to) use Spring Boot Adapter, here is the part of the `KeycloakSecurityContextRequestFilter` [source code](https://github.com/keycloak/keycloak/blob/20f24bffc43f61186d1e41d6aec9a098ea19afea/adapters/oidc/spring-security/src/main/java/org/keycloak/adapters/springsecurity/filter/KeycloakSecurityContextRequestFilter.java#L60-L73) that could be interesting for your case:
```
KeycloakSecurityContext keycloakSecurityContext = getKeycloakPrincipal();
if (keycloakSecurityContext instanceof RefreshableKeycloakSecurityContext) {
RefreshableKeycloakSecurityContext refreshableSecurityContext = (RefreshableKeycloakSecurityContext) keycloakSecurityContext;
if (refreshableSecurityContext.isActive()) {
...
} else {
...
}
}
```
and here is the (Java) [source code](https://github.com/keycloak/keycloak/blob/20f24bffc43f61186d1e41d6aec9a098ea19afea/adapters/oidc/spring-security/src/main/java/org/keycloak/adapters/springsecurity/filter/KeycloakSecurityContextRequestFilter.java#L88-L100) of the getKeycloakPrincipal method:
```
private KeycloakSecurityContext getKeycloakPrincipal() {
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
if (authentication != null) {
Object principal = authentication.getPrincipal();
if (principal instanceof KeycloakPrincipal) {
return KeycloakPrincipal.class.cast(principal).getKeycloakSecurityContext();
}
}
return null;
}
```
And if you want to understand how the Authentication is set in the SecurityContextHolder, please read this [piece of (Java) code](https://github.com/keycloak/keycloak/blob/20f24bffc43f61186d1e41d6aec9a098ea19afea/adapters/oidc/spring-security/src/main/java/org/keycloak/adapters/springsecurity/filter/KeycloakAuthenticationProcessingFilter.java#L186-L210) from `KeycloakAuthenticationProcessingFilter`:
```
@Override
protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult) throws IOException, ServletException {
if (authResult instanceof KeycloakAuthenticationToken && ((KeycloakAuthenticationToken) authResult).isInteractive()) {
super.successfulAuthentication(request, response, chain, authResult);
return;
}
...
SecurityContextHolder.getContext().setAuthentication(authResult);
...
try {
chain.doFilter(request, response);
} finally {
SecurityContextHolder.clearContext();
}
}
```
As an alternative you could also check this github repository of dynamind:
<https://github.com/dynamind/grails3-spring-security-keycloak-minimal>
Hoping that can help.
Best regards,
Jocker.
|
Why do Matlab figures generated in a loop have slightly different file sizes?
I am performing a parameter sweep. Inside a for loop, the value of a parameter is changed. Based on this parameter, a plot is produced and saved as a `.tiff` file.
I noticed that the resulting files have slightly different file sizes, for instance
>
> 215, 222, 223, 215, 210, 196, 195, 195, 195, 195 kB.
>
>
>
I wondered why they do not all have exactly the same file size.
**EDIT: MWE**
**1. tiff**
Executing
```
for a=1:3
b=1:.01:10;
h=figure(1);
plot(b,sin(a*b))
set(h,'units','normalized','outerposition',[0 0 1 1]);
filename=horzcat('test_',num2str(a),'.tiff');
print('-dtiff',filename)
end
```
yields 3 files with respective file sizes of 79, 95, and 110 kB.
**2. bmp**
Executing
```
for a=1:3
b=1:.01:10;
h=figure(1);
plot(b,sin(a*b))
set(h,'units','normalized','outerposition',[0 0 1 1]);
filename=horzcat('test_',num2str(a),'.bmp');
print('-dbmp16m',filename)
end
```
yields 3 files with the same file size: 3165kB.
|
The difference in file size is to be expected.
In a bitmap image (without compression), the color value of each pixel is stored within the file. It doesn't matter whether all pixels are white, black, or anything else: the value of each one is stored. For this reason, all bitmap images (of the same dimensions and color depth) are going to be the same size. You are using a 24-bit bitmap, meaning that 24 bits are allocated *per pixel* in your figure. [More information on bitmaps](http://paulbourke.net/dataformats/bitmaps/).
A TIFF on the other hand is a little more complicated. As @Andras stated, a TIFF can be compressed and the compression *depends on the image contents*. For example, if an image is all black, that is *highly compressible* because it's only one color value for an entire image (results in a smaller file size). If every pixel is a different color this is less compressible (resulting in a larger file size).
In your example, you are changing the data in the plot, which changes the distribution of pixel colors in your saved image, which is ultimately going to change the file size of a TIFF slightly from iteration to iteration. The only way that you can expect the same file size is if your data is *exactly* the same and the figure is the same size.
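To see this effect in isolation, here is a small sketch (file names are arbitrary) that writes one highly compressible and one poorly compressible image as TIFF and compares the resulting sizes:
```
% All-black image compresses very well; random noise barely compresses
flat = zeros(500, 500, 3);
noisy = rand(500, 500, 3);
imwrite(flat, 'flat.tiff');
imwrite(noisy, 'noisy.tiff');
d = dir('*.tiff');
disp([{d.name}; {d.bytes}]) % flat.tiff comes out far smaller than noisy.tiff
```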
|
AngularJS: Dynamic locale
I'm using [**Angular Dynamic locale**](https://github.com/lgalfaso/angular-dynamic-locale) and [**Angular-Translate**](https://github.com/angular-translate/angular-translate) for internationalization and localization (*i18n*). And works well.
I like that **angular-translate** makes it possible to change the language without refreshing the page.
- *Is it possible to do the same with **Angular Dynamic locale**? If so, how can I achieve this?*
All the words from angular-translate change automatically, but not the words from angular\_locale (*datepicker, etc.*), for which users need to refresh the page.
*Thanks!*
|
In case you don't absolutely need to use Angular Dynamic locale, you can create your own LocaleFactory like this:
```
factory('LocaleFactory', function ( $locale, $translate) {
var locales = {
nl: {
"DATETIME_FORMATS": {
"AMPMS" : [
"AM",
"PM"
],
"DAY" : [
"zondag",
"maandag",
"dinsdag",
"woensdag",
"donderdag",
"vrijdag",
"zaterdag"
],
"MONTH" : [
"januari",
"februari",
"maart",
"april",
"mei",
"juni",
"juli",
"augustus",
"september",
"oktober",
"november",
"december"
],
"SHORTDAY" : [
"zo",
"ma",
"di",
"wo",
"do",
"vr",
"za"
],
"SHORTMONTH": [
"jan.",
"feb.",
"mrt.",
"apr.",
"mei",
"jun.",
"jul.",
"aug.",
"sep.",
"okt.",
"nov.",
"dec."
],
"fullDate" : "EEEE d MMMM y",
"longDate" : "d MMMM y",
"medium" : "d MMM y HH:mm:ss",
"mediumDate": "d MMM y",
"mediumTime": "HH:mm:ss",
"short" : "dd-MM-yyyy HH:mm",
"shortDate" : "dd-MM-yyyy",
"shortTime" : "HH:mm"
},
"NUMBER_FORMATS" : {
"CURRENCY_SYM": "\u20ac",
"DECIMAL_SEP" : ",",
"GROUP_SEP" : ".",
"PATTERNS" : [
{
"gSize" : 3,
"lgSize" : 3,
"macFrac": 0,
"maxFrac": 3,
"minFrac": 0,
"minInt" : 1,
"negPre" : "-",
"negSuf" : "",
"posPre" : "",
"posSuf" : ""
},
{
"gSize" : 3,
"lgSize" : 3,
"macFrac": 0,
"maxFrac": 2,
"minFrac": 2,
"minInt" : 1,
"negPre" : "\u00a4\u00a0",
"negSuf" : "-",
"posPre" : "\u00a4\u00a0",
"posSuf" : ""
}
]
}
}
};
return {
setLocale: function (key) {
$translate.use(key);
angular.copy(locales[key], $locale);
}
};
});
```
Similarly, you can add other locales as well.
Call setLocale to change the locale:
```
run(function (LocaleFactory) {
LocaleFactory.setLocale('nl');
});
```
Whenever you want to change the locale, call setLocale with the locale key as an argument. It will change the locale instantly.
|
Why Windows Installer can only install a single program at a time?
I have always been wondering why Windows Installer only allows you to install one program at a time. It is very frustrating not to be able to launch multiple installations, especially when setting up a new installation of Windows. What is the reason for that?
|
It would be very complex to guarantee correctness when concurrent installations take place - assuming that they share some of the files. This would need some form of transactions.
- You need to lock files
- It should be possible to undo intermediate changes, if the installation fails (not sure, if that's possible now?)
These concepts are known from transactional databases - but the topic isn't trivial, and you usually don't find a fully transactional infrastructure in file systems (even though journaling file systems provide a part of that). One problem is, that multiple locks can lead to a deadlock - then you need deadlock detection (or both installers will hang forever), and a way to treat that. Deadlocks can be avoided (e.g. by always locking files in the same order), but there are other problems:
If you lock all the required files up front, you get effectively what you have: One installer must wait until the other is finished. If you don't lock all required files up front, and keep on going, you risk that the "transaction" will fail. That would mean, that one of the installers would have to be restarted.
Then you may have to think about transaction isolation levels - to be fully correct, your transactions would have to be ["serializable"](http://en.wikipedia.org/wiki/Transaction_isolation#SERIALIZABLE) - but that's not easy, even for many databases.
There may even be alternative strategies to deal with the problems, which circumvent full isolation, but it would usually be even harder to prove their correctness.
I believe, with concurrent installation, we would have a lot more intractable post-installation problems - especially because I don't think, an OS vendor (or a distribution) would go through all the trouble to make it 100% clean. So I would prefer not to use it, even if it were offered by the OS.
**Note**
But maybe what you really want is not even installing "at the same time". Maybe it would be sufficient, if you could queue up the installations, which are then executed one after the other (ideally without asking any questions in between). And that's really something, some other OS (distributions) handle a lot better.
|
Comparing #include and using namespace std in C++ with import in Python
I read through many pages on the internet about the `#include` statement and `using namespace std` phrase in C++ and I need some clarification. Since I already know Python I will use it as an analogy. First of all, in C++
```
#include library
```
and in Python
```
import library
```
are the same. Then, in C++
```
#include library
using namespace std;
```
and in Python
```
from library import *
```
are the same.
For example, if we compare the first analogy, we know that in C++
```
#include <iostream>
int main()
{
std::cout << "hello" << std::endl;
return 0;
}
```
is similar to the code below in Python (similar in using std and #include):
```
import math
def main():
print math.sqrt(12)
```
If we were to compare the second analogy, we have that in C++
```
#include <iostream>
using namespace std;
int main()
{
cout << "hello world" << endl;
}
```
and in Python
```
from math import *
def main():
print sqrt(12)
```
are similar.
Can you correct me if I am wrong?
|
`#include` is "copy-paste this file". For example, you can `#include` a file into itself, and get [this ridiculousness](http://www.cise.ufl.edu/~manuel/obfuscate/zsmall.hint). You can also have a snippet of code in a file and `#include` it everywhere you want.
Don't do this:
```
//forloop.txt
for(int i=0;i<SIZE;i++){
ARRAY[i] = VALUE;
}
//prime.txt
2147483647
//main.cpp
#include<iostream>
using std::cout;
using std::string;
int main(){
int prime =
#include "prime.txt"
;
int arr[10];
#define ARRAY arr
#define SIZE 10
#define VALUE prime
#include "forloop.txt"
#undef VALUE
#undef SIZE
#undef ARRAY
for(int i=10;i-- > 0;){
cout<<arr[i]<<'\n';
}
}
```
The `using` directive is not strictly needed. C++ has an idea of "namespaces" that keeps, for example, your `max` function distinct from the `max` function in `cmath`. It looks something like this:
```
namespace std{
int max(int i, int j){
if (i<j)
return j;
return i;
}
//some variable declaration and initialization
ostream cout(MAKE_BUFFER(stdout)); //std::ostream, if used outside
}
int max = 5;
int main(){
std::cout<<std::max(3,::max)<<'\n'; //::max refers to global name "max"
}
```
`using namespace std;` practically means "If you can't find a name globally, try sticking `std::` in front and see if that's a name." People say it's bad practice to say `using namespace std` partly because if you `#include "F.h"`, and `F.h` has `using namespace std`, then your code also uses namespace std (due to copy-paste `#include`).
---
Python is not a compiled language. When you say `import X`, you're giving a command to the interpreter to do something (run some code and make some name). `#include` is more like telling the compiler what to do to continue compiling at this point.
`import` in Python kind of means, "attach the names in this module". So `import blah as x` means, "Every variable (including functions) in `blah` can be accessed as `x.THING`". It also calls the module's initialization stuff, which makes sense, because the variables need to be initialized before you can use them.
---
Java is also a compiled language. In Java, the `import` statement doesn't play with files, it plays with classes. Every piece of code in Java has to belong to a class.
But unlike the other two languages, `import` is not strictly necessary. `import` is actually closer to C++'s `using`. `import` simply adds names for classes you can already use, except only to the current class.
Here's some code using imports.
```
import java.util.Scanner;
public class Example{
public static void main(String blargs[]){
Scanner cin = new Scanner(System.in);
System.out.println("Type in your name and press Enter: ");
System.out.println("Hello "+cin.next());
}
}
```
Here's the same program, using no imports.
```
public class Example{
public static void main(String blargs[]){
java.util.Scanner cin = new java.util.Scanner(System.in);
System.out.println("Type in your name and press Enter: ");
System.out.println("Hello "+cin.next());
}
}
```
Here's the same program, using all long names (`import java.lang.*;` is implicit in every Java source file).
```
public class Example{
public static void main(java.lang.String blargs[]){
java.util.Scanner cin = new java.util.Scanner(java.lang.System.in);
java.lang.System.out.println("Type in your name and press Enter: ");
java.lang.System.out.println("Hello "+cin.next());
}
}
```
Here's the same program, using all the imports.
```
import java.util.Scanner;
import static java.lang.System.out;
import static java.lang.System.in;
public class Example{
public static void main(String blargs[]){
Scanner cin = new Scanner(in);
out.println("Type in your name and press Enter: ");
out.println("Hello "+cin.next());
}
}
```
|
Can EC2 instances be set up to come from different IP ranges?
I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway.
NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down.
Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?
|
First, the answer - yes, each EC2 instance gets its own IP address. Now on to some commentary:
- It's easy for a site owner to block all requests from EC2-land, and some webmasters have started doing that, due to many poorly behaved bots running in EC2. So using EC2 might not be a long-term solution to your problem.
- One request/second is still pretty fast. Super-polite is using a crawl delay of 30 seconds. At Bixo Labs we usually run with a crawl delay of 15 seconds - even 10 seconds starts causing problems at some sites.
- You also need to worry about total requests/day, as some sites monitor that. A good rule of thumb is no more than 5000 requests/day/IP address.
- Finally, using multiple servers in EC2 to get around rate-limiting means you're in the gray zone of web crawling, mostly inhabited by slimy characters harvesting email addresses, ripping off content, and generating splog. So consider carefully if you really want to be living in that neighborhood.
|
Azure Table Storage Bad Request - Error in query syntax
The following used to work.
```
public void CreateTableIfMissing()
{
var info = new StorageInfo(); // initialized with tablename and connectionstring
var storageAccount = CloudStorageAccount.Parse(info.ConnectionString);
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference(info.TableName);
try
{
table.CreateIfNotExists();
var batchOperation = new TableBatchOperation();
var s = DateTime.Now.ToString();
var entry = new TableEntity("partkey"+s,"rowkey"+s);
batchOperation.Insert(entry);
table.ExecuteBatch(batchOperation);
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
```
Error information is
```
{Microsoft.WindowsAzure.Storage.StorageException:
ErrorCode "InvalidInput"
Element 0 in the batch returned an unexpected response code.
StatusMessage:0:Bad Request - Error in query syntax
```
The table is in use for error logging via Serilog with an Azure sink.
I can see that it is still getting log records if I connect with Azure Storage Explorer.
I have not changed connection strings
[Update]
I am trying a single operation but having trouble
```
'TableOperation' does not contain a constructor that takes 2 arguments
Cannot access internal constructor 'TableOperation' here
```
[](https://i.stack.imgur.com/DyTct.png)
[](https://i.stack.imgur.com/5Mfbd.png)
[Update]
If I follow Ivan's advice but omit the ToString("o") parameter the error is
```
ErrorMessage:The 'PartitionKey' parameter of value 'partkey3/7/2019 8:33:25 PM' is out of range.
```
This makes sense.
I wonder why it ever worked!
|
**Update:**
For the error message in your previous code(not the update one):
```
{Microsoft.WindowsAzure.Storage.StorageException:
ErrorCode "InvalidInput"
Element 0 in the batch returned an unexpected response code.
StatusMessage:0:Bad Request - Error in query syntax
```
The reason is that the PartitionKey and RowKey in table storage do not accept characters like "/". When you use DateTime.Now.ToString(), which contains "/" characters, as the suffix of the PartitionKey and RowKey, it causes the error.
Please format the datetime to remove the "/"; you can use `DateTime.Now.ToString("o")` in your code (or another valid format).
**For the updated code:**
The error is because the `TableOperation` class does not have a public constructor (with or without parameters). You can navigate to the `TableOperation` class and take a look at its usage.
[](https://i.stack.imgur.com/okOjL.jpg)
In your case, you should use its static `Insert` method, like `var op = TableOperation.Insert(entry)`, instead of `var op = new TableOperation(entry,TableOperationType.Insert)`.
Also note that the PartitionKey and RowKey in table storage do not accept characters like "/", so when you use `DateTime.Now` as a suffix for the PartitionKey and RowKey, you should use `var s = DateTime.Now.ToString("o")`. Otherwise it will cause an error.
The sample code works fine for me:
```
public void CreateTableIfMissing()
{
var info = new StorageInfo(); // initialized with tablename and connectionstring
var storageAccount = CloudStorageAccount.Parse(info.ConnectionString);
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference(info.TableName);
try
{
table.CreateIfNotExists();
var s = DateTime.Now.ToString("o");
var entry = new TableEntity("partkey" + s, "rowkey" + s);
var op = TableOperation.Insert(entry);
table.Execute(op);
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
```
For more code samples about table storage, you can refer to this [article](https://learn.microsoft.com/en-us/azure/cosmos-db/table-storage-how-to-use-dotnet?toc=%2Fen-us%2Fazure%2Fstorage%2Ftables%2FTOC.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json#create-a-table).
|
Does future::wait() synchronize-with completion of the thread of execution by async()?
It's said that [`thread::join()`](http://en.cppreference.com/w/cpp/thread/thread/join) synchronizes-with completion of the corresponding thread of execution. I'm wondering whether the same applies to `async()` and `future::wait()`. So for example:
```
std::atomic_int v(0);
std::async(
std::launch::async,
[&v] { v.fetch_add(1, std::memory_order_relaxed); }
).wait();
assert(v.load(std::memory_order_relaxed) == 1);
```
The `assert()` will never fail.
|
Straight from N3337 ([c++11](/questions/tagged/c%2b%2b11 "show questions tagged 'c++11'") standard draft), [[futures.async]/5](https://timsong-cpp.github.io/cppwp/n3337/futures.async#5) with my emphasis:
>
> *Synchronization*: Regardless of the provided policy argument,
>
>
> - the invocation of async synchronizes with ([intro.multithread]) the invocation of f. [ Note: This statement applies even when the
> corresponding future object is moved to another thread. — end note ];
> and
> - **the completion of the function f is sequenced before** ([intro.multithread]) **the shared state is made ready**. [ Note: f might
> not be called at all, so its completion might never happen. — end
> note ]
>
>
> If the implementation chooses the `launch::async` policy,
>
>
> - **a call to a waiting function on an asynchronous return object that shares the shared state created by this async call shall block until
> the associated thread has completed, as if joined**
> ([thread.thread.member]);
> - the associated thread completion synchronizes with ([intro.multithread]) the return from the first function that
> successfully detects the ready status of the shared state or with the
> return from the last function that releases the shared state,
> whichever happens first.
>
>
>
So referring to your question this means that yes, the assertion will never fail.
|
C/C++/Obj-C Real-time algorithm to ascertain Note (not Pitch) from Vocal Input
I want to detect not the pitch, but the **pitch class** of a sung note.
So, whether it is C4 or C5 is not important: they must both be detected as C.
Imagine the 12 semitones arranged on a clock face, with the needle pointing to the pitch class. That's what I'm after! ideally I would like to be able to tell whether the sung note is spot-on or slightly off.
This is not a duplicate of previously asked questions, as it introduces the constraints that:
1. the sound source is a **single human voice**, hopefully with negligible background interference (although I may need to deal with this)
2. the octave is not important, **only the pitch class**
EDIT -- Links:
[Real time pitch detection](https://stackoverflow.com/questions/1354084/real-time-pitch-detection?lq=1)
[Using the Apple FFT and Accelerate Framework](https://stackoverflow.com/questions/3398753/using-the-apple-fft-and-accelerate-framework?lq=1)
|
Most of the frequency detection algorithms cited in other answers don't work well for voice. To see why this is so intuitively, consider that all the vowels in a language can be sung at one particular note. Even though all those vowels have very different frequency content, they would all have to be detected as the same note. Any note detection algorithm for voices must take this into account somehow. Furthermore, human speech and song contains many [fricatives](http://facweb.furman.edu/~wrogers/phonemes/phono/fric.htm), many of which have no implicit pitch in them.
In the generic (non-voice) case, the feature you are looking for is called the **chroma feature** and there is a fairly large body of work on the subject. It is equivalently known as the **harmonic pitch class profile**. The original reference paper on the concept is Takuya Fujishima's "[Real-Time Chord Recognition of Musical Sound: A System Using Common Lisp Music](http://www.music.mcgill.ca/~jason/mumt621/papers5/fujishima_1999.pdf)". The [Wikipedia entry](https://en.wikipedia.org/wiki/Harmonic_pitch_class_profiles) has an overview of a more modern variant of the algorithm. There are a bunch of free [papers and MATLAB implementations](http://labrosa.ee.columbia.edu/matlab/chroma-ansyn/) of chroma feature detection.
However, since you are focusing on the human voice only, and since the human voice naturally contains tons of overtones, what you are practically looking for in this specific scenario is a **fundamental frequency detection algorithm**, or [**f0 detection algorithm**](https://www.ee.columbia.edu/~dpwe/papers/Goto00-bass.pdf). There are several such algorithms [explicitly tuned for voice](http://www.utdallas.edu/~hynek/citing_papers/bartosek_Bartosek_Comparing%20Pitch%20Detection%20Algorithms%20for%20Voice%20Applications.pdf). Also, [here is a widely cited algorithm](http://papers.nips.cc/paper/2631-real-time-pitch-determination-of-one-or-more-voices-by-nonnegative-matrix-factorization.pdf) that works on multiple voices at once. You'd then check the detected frequency against the equal-tempered scale and then find the closest match.
Since I suspect that you're trying to build a pitch detector and/or corrector a la Autotune, you may want to use M. Morise's excellent [WORLD](https://github.com/mmorise/World) implementation, which permits fast and good quality detection and modification of f0 on voice streams.
Lastly, be aware that there are only a few vocal pitch detectors that work well within the vocal fry register. Almost all of them, including WORLD, fail on vocal fry as well as very low voices. A number of papers refer to vocal fry as ["creaky voice"](http://idiom.ucsd.edu/~mgarellek/files/Keating_Garellek_2015_LSA.pdf) and have developed specific algorithms to help with that type of voice input specifically.
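To make the last step concrete (checking a detected f0 against the equal-tempered scale and finding the closest match), here is a minimal C sketch, assuming A4 = 440 Hz equal temperament and an f0 value coming from whichever detection algorithm you choose:
```
#include <math.h>

/* Map f0 in Hz (must be > 0) to a pitch class 0..11 (0 = C, ..., 9 = A)
 * and report the deviation from the nearest semitone in cents. */
int pitch_class(double f0, double *cents_off)
{
    double semitones = 12.0 * log2(f0 / 440.0) + 9.0; /* 440 Hz -> A = 9 */
    double nearest = round(semitones);
    if (cents_off)
        *cents_off = 100.0 * (semitones - nearest); /* "slightly off" amount */
    int pc = (int) fmod(nearest, 12.0);
    return pc < 0 ? pc + 12 : pc;
}
```
The `cents_off` output gives you the "spot-on or slightly off" measure from the question: 0 means exactly on the semitone, ±50 is the worst case.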
|
Is UTF-16 compatible with UTF-8?
I asked Google the question above and was sent to [Difference between UTF-8 and UTF-16?](https://stackoverflow.com/questions/4655250/difference-between-utf-8-and-utf-16) which unfortunately doesn't answer the question.
From my understanding, UTF-8 should be a subset of UTF-16, meaning: if my code uses UTF-16 and I hand in a UTF-8 encoded string, everything should always be fine. The other way around (expecting UTF-8 and getting UTF-16) may cause problems.
Is that correct?
EDIT: To clarify why the linked SO question doesn't answer my question: My problem arose when trying to process a JSON string using `WebClient.DownloadString`, because the WebClient used the wrong encoding. The JSON I received from the request was encoded as UTF-8 and the question for me was: if I set `webClient.Encoding = New System.Text.UnicodeEncoding` (a.k.a UTF-16) would I be on the safe side, i.e. able to handle UTF-8 and UTF-16 request results, or should I use `webClient.Encoding = New System.Text.UTF8Encoding`?
|
It's not entirely clear what you mean by "compatible", so let's get some basics out of the way.
Unicode is the underlying concept, and UTF-16 and UTF-8 are two different ways to encode Unicode. They are obviously different -- otherwise, why would there be two different serialization formats?
Unicode by itself does not specify a serialization format. UTF-8 and UTF-16 are two alternative serialization formats.
There are several others, but these two are arguably the most widely used.
They are "compatible" in the sense that they can represent the same Unicode code points, but "incompatible" in that the representations are completely different, and irreconcileable.
There are two additional twists with UTF-16. Firstly, there are actually two different encodings, UTF-16LE and UTF-16BE. These differ in endianness. (UTF-8 is a byte encoding, so does not have endianness.) Secondly, legacy UTF-16 used to be restricted to 65,536 possible characters, which is less than Unicode currently contains. This is handled with surrogates, but really old and/or broken UTF-16 implementations (properly identified as UCS-2, not "real" UTF-16) do not support them.
For a bit of concretion, let's compare four different code points. We pick [U+0041](http://www.fileformat.info/info/unicode/char/0041/index.htm), [U+00E5](http://www.fileformat.info/info/unicode/char/00e5/index.htm), [U+201C](http://www.fileformat.info/info/unicode/char/201c/index.htm), and [U+1F4A9](http://www.fileformat.info/info/unicode/char/1f4a9/index.htm), as they illustrate the differences nicely.
U+0041 is a 7-bit character, so UTF-8 represents it simply with a single byte. U+00E5 is an 8-bit character, so UTF-8 needs to encode it. U+1F4A9 is outside the Basic Multilingual Plane, so UTF-16 represents it with a surrogate sequence. Finally, U+201C is none of the above.
Here are the representations of our candidate characters in UTF-8, UTF-16LE, and UTF-16BE.
| Character | UTF-8 | UTF-16LE | UTF-16BE |
| --- | --- | --- | --- |
| U+0041 (a) | 0x41 | 0x41 0x00 | 0x00 0x41 |
| U+00E5 (å) | 0xC3 0xA5 | 0xE5 0x00 | 0x00 0xE5 |
| U+201C (“) | 0xE2 0x80 0x9C | 0x1C 0x20 | 0x20 0x1C |
| U+1F4A9 () | 0xF0 0x9F 0x92 0xA9 | 0x3D 0xD8 0xA9 0xDC | 0xD8 0x3D 0xDC 0xA9 |
To pick one obvious example, the UTF-8 encoding of U+00E5 would represent a completely different character if interpreted as UTF-16 (in UTF-16LE, it would be [U+A5C3](http://www.fileformat.info/info/unicode/char/a5c3/index.htm), and in UTF-16BE, [U+C3A5](http://www.fileformat.info/info/unicode/char/c3a5/index.htm).) Any UTF-8 sequence with an odd number of bytes is an incomplete 16-bit sequence. I suppose UTF-8 when interpreted as UTF-16 could also happen to encode an invalid surrogate sequence. Conversely, many of the UTF-16 codes are not valid UTF-8 sequences at all. So in this sense, UTF-8 and UTF-16 are completely and utterly incompatible.
These are byte values; in ASCII, 0x00 is the NUL character (sometimes represented as `^@`), 0x41 is uppercase A, and 0xE5 is undefined; in e.g. Latin-1 it represents the character å (which is also conveniently U+00E5 in Unicode), but in KOI8-R it is the Cyrillic character Е ([U+0415](https://www.fileformat.info/info/unicode/char/0415/index.htm)), [etc.](https://tripleee.github.io/8bit/#e5)
Perhaps notice also how the last example requires a nontrivial transformation in UTF-16, too, using a pair of surrogate code points, in some sense superficially similarly to how UTF-8 encodes all multibyte code points.
In modern programming languages, your code should simply use Unicode, and let the language handle the nitty-gritty of encoding it in a way which is suitable for your platform and libraries. On a somewhat tangential note, see also <http://utf8everywhere.org/>
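Applied to the WebClient scenario from the question: the JSON arrives as UTF-8 bytes, so the decoder must be UTF-8 as well; a UTF-16 decoder would misinterpret those bytes, as the table above shows. A minimal C# sketch (the URL is a placeholder):
```
using System.Net;
using System.Text;

var client = new WebClient();
// decode the response bytes as UTF-8, matching what the server sends
client.Encoding = Encoding.UTF8;
string json = client.DownloadString("https://example.com/api/data.json");
```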
|
Change height of valueboxes in Flexdashboard
I am making a Flexdashboard that is column orientated and contains four value boxes along with graphs and tables. The value boxes had a certain height before but I have recently changed the information in one of the value boxes and now the height of all boxes is larger. Here is how the code looks for the four value boxes
```
vb1<-valueBox(Parkvorgaenge_Insgesamt,"Parkvorgänge Insgesamt", icon = "fa-car", color = "warning")
vb2<-valueBox(Parkstunden_Insgesamt,"Parkstunden Insgesamt", icon = "fa-hourglass-end", color = "warning")
vb3<-valueBox(Einnahmen_Insgesamt,"Einnahmen Insgesamt", icon = "fa-eur", color = "warning")
vb4<-valueBox(Durchschnittliche_Parkdauer,"Durchschnittliche Parkdauer", icon = "fa-clock", color = "warning")
```
```
Column {data-width=350}
-----------------------------------------------------------------------
###
``{r}
renderValueBox(vb1)
``
###
``{r}
renderValueBox(vb2)
``
###
``{r}
renderValueBox(vb3)
``
###
``{r}
renderValueBox(vb4)
``
```
I have tried adding `{data-height = some number}` by each of the three hashtags (like this ### {data-height = some number}) but this did nothing to change the height. I have looked online but there is nothing directly answering this.
In short, how do you control the height of the value boxes in Flexdashboard?
|
You could change the height of the value boxes with css:
```
---
title: "Tabset Column"
output:
flexdashboard::flex_dashboard
runtime: shiny
---
```{css}
.value-box {
height: 200px;
}
```
```{r global, echo = FALSE}
library(shiny)
library(flexdashboard)
vb1<-valueBox(2000,"Parkvorgänge Insgesamt", icon = "fa-car", color = "warning")
vb2<-valueBox(541515,"Parkstunden Insgesamt", icon = "fa-hourglass-end", color = "warning")
vb3<-valueBox(30000,"Einnahmen Insgesamt", icon = "fa-eur", color = "warning")
vb4<-valueBox(5.4,"Durchschnittliche Parkdauer", icon = "fa-clock", color = "warning")
```
Column
-----------------------------------------------------------------------
###
```{r}
renderValueBox(vb1)
```
###
```{r}
renderValueBox(vb2)
```
###
```{r}
renderValueBox(vb3)
```
###
```{r}
renderValueBox(vb4)
```
```
Alternatively you can put the css in a separate file. In the example below it is called `styles.css` and it is placed in the same folder as the app.
```
.value-box {
height: 200px;
}
```
This would be the app itself:
```
---
title: "Tabset Column"
output:
flexdashboard::flex_dashboard:
css: styles.css
runtime: shiny
---
```{r global, echo = FALSE}
library(shiny)
library(flexdashboard)
vb1<-valueBox(2000,"Parkvorgänge Insgesamt", icon = "fa-car", color = "warning")
vb2<-valueBox(541515,"Parkstunden Insgesamt", icon = "fa-hourglass-end", color = "warning")
vb3<-valueBox(30000,"Einnahmen Insgesamt", icon = "fa-eur", color = "warning")
vb4<-valueBox(5.4,"Durchschnittliche Parkdauer", icon = "fa-clock", color = "warning")
```
Column
-----------------------------------------------------------------------
###
```{r}
renderValueBox(vb1)
```
###
```{r}
renderValueBox(vb2)
```
###
```{r}
renderValueBox(vb3)
```
###
```{r}
renderValueBox(vb4)
```
```
|
Get Elements by Tag Name from a Node in Android (XML) Document?
I have an XML like this:
```
<!--...-->
<Cell X="4" Y="2" CellType="Magnet">
<Direction>180</Direction>
<SwitchOn>true</SwitchOn>
<Color>-65536</Color>
</Cell>
<!--...-->
```
There are many `Cell` elements, and I can get the Cell nodes with `GetElementsByTagName`. However, I realise that the `Node` class DOESN'T have a `GetElementsByTagName` method! How can I get the `Direction` node from that cell node without going through the list of `ChildNodes`? Can I get a `NodeList` by tag name like from the `Document` class?
Thank you.
|
You can cast the `NodeList` item again with `Element` and then use `getElementsByTagName();` from `Element` class.
The best approach is to make a `Cell` object in your project along with fields like `Direction`, `SwitchOn`, and `Color`. Then get your data something like this:
```
String[] directions;
NodeList cells = document.getElementsByTagName("Cell");
int length = cells.getLength();
directions = new String[length];
for (int i = 0; i < length; i++)
{
    Element element = (Element) cells.item(i);
    // getElementsByTagName is available once the Node is cast to Element
    NodeList directionNodes = element.getElementsByTagName("Direction");
    Element line = (Element) directionNodes.item(0);
    directions[i] = getCharacterDataFromElement(line);
    // remaining elements, e.g. SwitchOn and Color, work the same way
}
```
Where your `getCharacterDataFromElement()` will be as follow.
```
public static String getCharacterDataFromElement(Element e)
{
Node child = e.getFirstChild();
if (child instanceof CharacterData)
{
CharacterData cd = (CharacterData) child;
return cd.getData();
}
return "";
}
```
|
How can I add timestamps GC log file names in a Java Service under Windows?
I've got a Java Application running against Apache Tomcat, under Windows. There are two ways of running this application - either as a Windows Service, or from a batch file to invoke Tomcat manually.
When I start the application via the batch file, I use the following to add GC logs to the JVM parameters:
```
-Xloggc=%~dp0..\logs\gc-%DATE:~-4%.%DATE:~4,2%.%DATE:~7,2%_%TIME:~0,2%.%TIME:~3,2%.%TIME:~6,2%.log
```
That causes the GC logs to be output with the date in the file name - but when run as a service, the `DATE` and `TIME` variables do not resolve correctly.
**When using a Windows Service, what variables must I use in my JVM parameters to ensure the date and time are appended to the GC logs?**
I have tried to simplify it (`gc-%DATE%.log`, `gc-${DATE}.log`); in all cases the variable will not resolve.
Any help is appreciated!
**Edit:** Please note that any solution *must* work when the application runs as a Windows Service. Stand alone is fine, we've got that covered. It's only when a Windows Service is used.
|
**EDIT - *Using Tomcat as a Windows Service***
I've installed Tomcat as a Windows service using the [documentation](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html).
Then I've run this command line from the Tomcat bin directory :
```
"tomcat7.exe" //US//Tomcat7 ++JvmOptions=-Xloggc:gc-%DATE:~-4%.%DATE:~4,2%.%DATE:~7,2%_%TIME:~0,2%.%TIME:~3,2%.%TIME:~6,2%.log
```
*Note : `//US` is the Updating services command. `Tomcat7` is the name of the service.*
Then restart the service, now the log path is correctly resolved.
It should work assuming the directories exist. Your expression with the date expects a directory named `gc-2015.1` and creates a file named `.01_13.43.35.log` (if it is the 15th of January 2015 at 13h 43min 35sec).
---
***If you are using catalina.bat***
Why is it not working? Because you have a delayed expansion problem.
You have to enable delayed expansion using `setlocal ENABLEDELAYEDEXPANSION` in `catalina.bat`.
Then replace `%` with `!` for the variable expansions; note that `%~dp0` is a batch parameter expansion, not an environment variable, so it keeps `%`. You will end up with `-Xloggc="%~dp0..\logs\gc-!DATE:~-4!.!DATE:~4,2!.!DATE:~7,2!_!TIME:~0,2!.!TIME:~3,2!.!TIME:~6,2!.log"`.
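For illustration, a minimal sketch of the relevant lines inside `catalina.bat` (the `GC_LOG` variable name is made up for this example):
```
setlocal ENABLEDELAYEDEXPANSION
rem DATE and TIME are environment variables, so they use the ! syntax;
rem %~dp0 is a batch parameter expansion and keeps the % syntax
set "GC_LOG=%~dp0..\logs\gc-!DATE:~-4!.!DATE:~4,2!.!DATE:~7,2!_!TIME:~0,2!.!TIME:~3,2!.!TIME:~6,2!.log"
set "CATALINA_OPTS=%CATALINA_OPTS% -Xloggc:!GC_LOG!"
```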
|
What's the difference between the Window.Loaded and Window.ContentRendered events
What's the difference between the `Window.Loaded` and `Window.ContentRendered` events in WPF? Is the `ContentRendered` event called first?
The description of the `Window.ContentRendered` event [here](http://msdn.microsoft.com/en-us/library/system.windows.window.contentrendered.aspx) just says
>
> Occurs after a window's content has been rendered.
>
>
>
The description of the `Window.Loaded` event [here](http://msdn.microsoft.com/en-us/library/system.windows.frameworkelement.loaded.aspx) says
>
> Occurs when the element is laid out, rendered, and ready for interaction.
>
>
>
I have a case where I want to set the window's `MaxHeight` to the height of the working area of the screen that is displaying my window. Which event should I do it in?
Edit:
I think I found what I was looking for, but I'm even more confused now. The `Loaded` event happens first and then the `ContentRendered` event happens. In the book Programming WPF by Chris Sells & Ian Griffiths, it says that the `Loaded` event is
>
> Raised just before the window is shown
>
>
>
While the 'ContentRendered` event is
>
> Raised when the window's content is visually rendered.
>
>
>
This contradicts what the MSDN documentation says about the `Loaded` event:
>
> Occurs when the element is laid out, rendered, and ready for interaction.
>
>
>
This is even more confusing now.
|
I think there is a subtle difference between the two events. To understand it, I created a simple example to experiment with:
`XAML`
```
<Window x:Class="LoadedAndContentRendered.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Name="MyWindow"
Title="MainWindow" Height="1000" Width="525"
WindowStartupLocation="CenterScreen"
ContentRendered="Window_ContentRendered"
Loaded="Window_Loaded">
<Grid Name="RootGrid">
</Grid>
</Window>
```
`Code behind`
```
private void Window_ContentRendered(object sender, EventArgs e)
{
MessageBox.Show("ContentRendered");
}
private void Window_Loaded(object sender, RoutedEventArgs e)
{
MessageBox.Show("Loaded");
}
```
In this case, the message `Loaded` appears first, followed by the message `ContentRendered`. This confirms the information in the documentation.
In general, in WPF the `Loaded` event fires if the element:
>
> is laid out, rendered, and ready for interaction.
>
>
>
In WPF the `Window` is itself an element, but its content is generally arranged in a root panel (for example: `Grid`). Therefore, to monitor the content of the `Window`, the `ContentRendered` event was created. Remarks from MSDN:
>
> If the window has no content, this event is not raised.
>
>
>
That is, if we create a `Window`:
```
<Window x:Class="LoadedAndContentRendered.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Name="MyWindow"
ContentRendered="Window_ContentRendered"
Loaded="Window_Loaded" />
```
Only the `Loaded` event will fire.
With regard to accessing the elements in the `Window`, both events work the same way. Let's create a `Label` in the main `Grid` of the `Window`. In both cases we successfully get access to its `Width`:
```
private void Window_ContentRendered(object sender, EventArgs e)
{
MessageBox.Show("ContentRendered: " + SampleLabel.Width.ToString());
}
private void Window_Loaded(object sender, RoutedEventArgs e)
{
MessageBox.Show("Loaded: " + SampleLabel.Width.ToString());
}
```
As for the `Styles` and `Templates`, at this stage they are successfully applied, and in these events we will be able to access them.
For example, we want to add a `Button`:
```
private void Window_ContentRendered(object sender, EventArgs e)
{
MessageBox.Show("ContentRendered: " + SampleLabel.Width.ToString());
Button b1 = new Button();
b1.Content = "ContentRendered Button";
RootGrid.Children.Add(b1);
b1.Height = 25;
b1.Width = 200;
b1.HorizontalAlignment = HorizontalAlignment.Right;
}
private void Window_Loaded(object sender, RoutedEventArgs e)
{
MessageBox.Show("Loaded: " + SampleLabel.Width.ToString());
Button b1 = new Button();
b1.Content = "Loaded Button";
RootGrid.Children.Add(b1);
b1.Height = 25;
b1.Width = 200;
b1.HorizontalAlignment = HorizontalAlignment.Left;
}
```
In the case of the `Loaded` event, the `Button` is added to the `Grid` immediately when the `Window` appears. In the case of the `ContentRendered` event, the `Button` is added to the `Grid` after all of the window's content has appeared.
Therefore, if you want to add items or make changes before the `Window` loads, you must use the `Loaded` event. If you want to perform operations that depend on the rendered content of the `Window`, such as taking screenshots, you will need to use the `ContentRendered` event.
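For the use case in the question (capping `MaxHeight` at the screen's working area), the `Loaded` event is sufficient, since nothing depends on rendered content. A minimal sketch, assuming the window is on the primary screen (`SystemParameters.WorkArea` refers to the primary display):
```
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    // Limit the window height to the primary screen's working area
    this.MaxHeight = SystemParameters.WorkArea.Height;
}
```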
|
How do I debug a program written in the Go language?
How do I debug a Go program? I have been using the [Gedit Go IDE](http://gohelp.wordpress.com/), but it doesn't have debugging. Is there a way to step though my code and inspect memory? Or am I stuck with print statements? Can I use [OutputDebugString](http://msdn.microsoft.com/en-us/library/aa363362(VS.85).aspx)?
|
**Update**: There is now an official page in the docs on [Debugging Go Code with GDB](http://golang.org/doc/gdb). Much has changed since this answer was written, and several of the limitations listed below have been removed. I'm leaving the rest of this answer for posterity, but if you want to debug Go code, follow the link above.
The Go linkers [now emit](http://blog.golang.org/2010/11/debugging-go-code-status-report.html) DWARF debugging symbols that can be interpreted by gdb version 7.x.
Highlight from the blog post linked above:
You can...
- load a Go program in GDB version 7.x
- list all Go, C, and assembly source files by line (parts of the Go runtime are written in C and assembly),
- set breakpoints by line and step through the code,
- print stack traces and inspect stack frames, and
- find the addresses and print the contents of most variables.
There are still some inconveniences:
- The emitted DWARF code is unreadable by the GDB version 6.x that ships with Mac OS X. We would gladly accept patches to make the DWARF output compatible with the standard OS X GDB, but until that’s fixed you’ll need to download, build, and install GDB 7.x to use it under OS X. The source can be found at <http://sourceware.org/gdb/download/>. Due to the particulars of OS X you’ll need to install the binary on a local file system with chgrp procmod and chmod g+s.
- Names are qualified with a package name and, as GDB doesn't understand Go packages, you must reference each item by its full name. For example, the variable named v in package main must be referred to as 'main.v', in single quotes. A consequence of this is that tab completion of variable and function names does not work.
- Lexical scoping information is somewhat obfuscated. If there are multiple variables of the same name, the nth instance will have a suffix of the form ‘#n’. We plan to fix this, but it will require some changes to the data exchanged between the compiler and linker.
- Slice and string variables are represented as their underlying structure in the runtime library. They will look something like {data = 0x2aaaaab3e320, len = 1, cap = 1}. For slices, you must dereference the data pointer to inspect the elements.
Some things don't work:
- Channel, function, interface, and map variables cannot be inspected.
- Only Go variables are annotated with type information; the runtime's C variables are not.
- Windows and ARM binaries do not contain DWARF debugging information and, as such, cannot be inspected with GDB.
|
Error: range violation in D programming
I have a dynamic array in a struct and a method that uses the dynamic array. The problem is that I get a range violation error when I run the program. However, when I create a new dynamic array inside the method, it works fine. The following code causes the problem.
```
struct MyStr {
int[] frontArr;
this(int max = 10) {
frontArr = new int[10];
}
void push(int x) {
frontArr[0] = x;
}
}
void main() {
MyStr s;
s.push(5);
}
```
---
However, this one works;
```
struct MyStr {
int[] frontArr;
this(int max = 10) {
frontArr = new int[10];
}
void push(int x) {
frontArr = new int[10]; // <---Add this line
frontArr[0] = x;
}
}
void main() {
MyStr s;
s.push(5);
}
```
---
I basically added that line to test the scope. It seems like the initialized frontArr can't be seen in the push(int x) method. Any explanation?
Thanks in advance.
|
Initialization of structs must be guaranteed; that is, you do not want the default construction of a struct to be able to throw an exception. For this reason D does not support default constructors in structs. Imagine if
```
MyStr s;
```
resulted in an exception being thrown. Instead D provides its own default constructor which initializes all fields to the init property. In your case you are not calling your constructor and just using the provided defaults which means frontArr is never initialized. You want something like:
```
void main() {
MyStr s = MyStr(10);
s.push(5);
}
```
It should probably be a compiler error to have default values for all parameters of a struct constructor. [Bugzilla](http://d.puremagic.com/issues/show_bug.cgi?id=3438)
|
Gnome-terminal: Difference between -e and --?
I'm trying to open a terminal that shows a file as it's being written. A progress percentage is written into the file and I'd like the user to see it.
Most solutions I have found for opening a new terminal say to use `-e`, but that returns
```
# Option "-e" is deprecated and might be removed in a later version of gnome-terminal
# Use "-- " to terminate the options and put the command line to execute after it.
```
I've seen discussion about this error, but I'm still unsure as to what the functional difference between `-e` and `--` actually is. Scripts that I run that use `-e` stop working properly if I just swap them, so there's obviously something that I'm missing.
|
`-e` would take a single argument which would have to be parsed as a shell command, but it could precede other `gnome-terminal` arguments. For example,
```
gnome-terminal -e 'command "argument with spaces"' --some-other-gnome-terminal-option
```
`--` is not itself an option; it's a special argument that signals the *end* of options. Anything following `--` is ignored by `gnome-terminal`'s own option parser and treated like an ordinary argument. Something like
```
gnome-terminal -- 'command "argument with spaces"' --some-other-gnome-terminal-option
```
would present 2 additional arguments to `gnome-terminal` following the `--`:
1. `command "argument with spaces"`
2. `--some-other-gnome-terminal-option`
Further, you would get an error, because `gnome-terminal` would attempt to run a command named `command "argument with spaces"`, rather than a command named `command`.
---
In practice, this means you can't simply replace `-e` with `--` and call it a day. Instead, you would first move `-e` to the *end* of the options list:
```
gnome-terminal --some-other-gnome-terminal-option -e 'command "argument with spaces"'
```
then replace `-e '...'` with `-- ...`. The command and each of its arguments are now distinct arguments to `gnome-terminal`, which removes the need for an entire layer of quoting.
```
gnome-terminal --some-other-gnome-terminal-option -- command "argument with spaces"
```
|
GDI+ generic error saving bitmap created from memory using LockBits
The GDI+ generic error when saving a bitmap is obviously a common problem according to my research here on SO and the web. Given following simplified snippet:
```
byte[] bytes = new byte[2048 * 2048 * 2];
for (int i = 0; i < bytes.Length; i++)
{
// set random or constant pixel data, whatever you want
}
Bitmap bmp = new Bitmap(2048, 2048, PixelFormat.Format16bppGrayScale);
BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, 2048, 2048), ImageLockMode.ReadWrite, bmp.PixelFormat);
System.Runtime.InteropServices.Marshal.Copy(bytes, 0, bmpData.Scan0, 8388608);
bmp.UnlockBits(bmpData);
bmp.Save(@"name.bmp");
```
This results in the 0x80004005 generic error. The usual reason given for this is locks on components, but I do not see anything like that here. Am I just blind? The path I am saving to exists, of course; only an empty bmp file is created (0 B).
Background: I am getting pixel data from a camera driver that I transfer to .NET using a C++/CLI wrapper, so the Bitmap object above is returned by a function call. But since this small example already fails, I guess that there is nothing wrong with the adapter.
Any suggestions are highly appreciated!
|
```
Bitmap bmp = new Bitmap(2048, 2048, PixelFormat.Format16bppGrayScale);
```
GDI+ exceptions are rather poor; you'll have little hope of diagnosing the two mistakes. The lesser one is your Save() call: it doesn't specify the ImageFormat you want to save. The default is PNG, not BMP as you hoped.
But the core one is PixelFormat.Format16bppGrayScale. When GDI+ was designed, long before .NET came around, everybody was still using CRTs instead of LCD monitors. CRTs were quite good at displaying a gamut of colors. Although good, there were no mainstream CRTs yet that were capable of displaying 65536 distinct gray colors. Most of all they were restricted by the DAC in the video adapter, the chip that converts the digital pixel value to an analog signal for the CRT. A DAC that can convert with 16-bit accuracy at 100 MHz or more wasn't technologically feasible yet. Microsoft gambled on display technology improving to make that possible someday, so it specified Format16bppGrayScale as a pixel format that might *someday* be available.
That did not happen. Rather the opposite, LCDs are significantly worse at color resolution. Typical LCD panels can only resolve 6 bits of a color rather than the 8 bits available from the pixel format. Getting to 16-bit color resolution is going to require a significant technological break-through.
So they guessed wrong and, since the pixel format isn't useful, GDI+ doesn't actually have an image encoder that can write a 16bpp grayscale image format. Kaboom when you try to save it to disk, regardless of the ImageFormat you pick.
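As a sanity check, the same save succeeds once you pick a pixel format GDI+ can encode and state the format explicitly (a minimal sketch, not the original camera code):

```
using System.Drawing;
using System.Drawing.Imaging;

class Demo
{
    static void Main()
    {
        // 24bpp RGB is a pixel format the built-in encoders can handle.
        using (var bmp = new Bitmap(2048, 2048, PixelFormat.Format24bppRgb))
        {
            bmp.Save(@"name.bmp", ImageFormat.Bmp); // explicit format; bare Save() writes PNG
        }
    }
}
```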
16bpp grayscale is actually used; radiological imaging uses that pixel format, with very expensive displays to make it actually useful. Such equipment however invariably uses a custom image format to go with it, DICOM being the usual choice. GDI+ doesn't have a codec for it.
You'll need to go shopping for a library that supports the image format that your customer wants. Lead Tools is the thousand pound gorilla in that product segment.
|
encodeURIComponent throws an exception
I am programmatically building a URI with the help of the `encodeURIComponent` function using user provided input. However, when the user enters invalid unicode characters (such as `U+DFFF`), the function throws an exception with the following message:
>
> The URI to be encoded contains an invalid character
>
>
>
I looked this up on [MSDN](http://msdn.microsoft.com/en-us/library/061fh4cw%28v=vs.90%29.aspx), but that didn't tell me anything I didn't already know.
>
> To correct this error
>
>
> - Ensure the string to be encoded contains only valid Unicode sequences.
>
>
>
My question is, is there a way to sanitize the user provided input to remove all invalid Unicode sequences before I pass it on to the `encodeURIComponent` function?
|
Taking the programmatic approach to discover the answer, the only range that turned up any problems was \ud800-\udfff, the range for high and low surrogates:
```
for (var regex = '/[', firstI = null, lastI = null, i = 0; i <= 65535; i++) {
try {
encodeURIComponent(String.fromCharCode(i));
}
catch(e) {
if (firstI !== null) {
if (i === lastI + 1) {
lastI++;
}
else if (firstI === lastI) {
regex += '\\u' + firstI.toString(16);
firstI = lastI = i;
}
else {
regex += '\\u' + firstI.toString(16) + '-' + '\\u' + lastI.toString(16);
firstI = lastI = i;
}
}
else {
firstI = i;
lastI = i;
}
}
}
if (firstI === lastI) {
regex += '\\u' + firstI.toString(16);
}
else {
regex += '\\u' + firstI.toString(16) + '-' + '\\u' + lastI.toString(16);
}
regex += ']/';
alert(regex); // /[\ud800-\udfff]/
```
I then confirmed this with a simpler example:
```
for (var i = 0; i <= 65535 && (i <0xD800 || i >0xDFFF ) ; i++) {
try {
encodeURIComponent(String.fromCharCode(i));
}
catch(e) {
alert(e); // Doesn't alert
}
}
alert('ok!');
```
And this fits with what MSDN says because indeed all those Unicode characters (even valid Unicode "non-characters") besides surrogates are all valid Unicode sequences.
You can indeed filter out high and low surrogates, but when used in a high-low pair, they become legitimate (as they are meant to be used in this way to allow for Unicode to expand (drastically) beyond its original maximum number of characters):
```
alert(encodeURIComponent('\uD800\uDC00')); // ok
alert(encodeURIComponent('\uD800')); // not ok
alert(encodeURIComponent('\uDC00')); // not ok either
```
So, if you want to take the easy route and block surrogates, it is just a matter of:
```
urlPart = urlPart.replace(/[\ud800-\udfff]/g, '');
```
If you want to strip out unmatched (invalid) surrogates while allowing surrogate pairs (which are legitimate sequences but the characters are rarely ever needed), you can do the following:
```
function stripUnmatchedSurrogates (str) {
return str.replace(/[\uD800-\uDBFF](?![\uDC00-\uDFFF])/g, '').split('').reverse().join('').replace(/[\uDC00-\uDFFF](?![\uD800-\uDBFF])/g, '').split('').reverse().join('');
}
var urlPart = '\uD801 \uD801\uDC00 \uDC01'
alert(stripUnmatchedSurrogates(urlPart)); // Leaves one valid sequence (representing a single non-BMP character)
```
If JavaScript had negative lookbehind the function would be a lot less ugly...
|
Unexpected running times for HashSet code
So originally, I had this code:
```
import java.util.*;
public class sandbox {
public static void main(String[] args) {
HashSet<Integer> hashSet = new HashSet<>();
for (int i = 0; i < 100_000; i++) {
hashSet.add(i);
}
long start = System.currentTimeMillis();
for (int i = 0; i < 100_000; i++) {
for (Integer val : hashSet) {
if (val != -1) break;
}
hashSet.remove(i);
}
System.out.println("time: " + (System.currentTimeMillis() - start));
}
}
```
It takes around 4s to run the nested for loops on my computer and I don't understand why it took so long. The outer loop runs 100,000 times, the inner for loop should run 1 time (because any value of hashSet will never be -1) and the removing of an item from a HashSet is O(1), so there should be around 200,000 operations. If there are typically 100,000,000 operations in a second, how come my code takes 4s to run?
Additionally, if the line `hashSet.remove(i);` is commented out, the code only takes 16ms.
If the inner for loop is commented out (but not `hashSet.remove(i);`), the code only takes 8ms.
|
You've created a marginal use case of `HashSet`, where the algorithm degrades to quadratic complexity.
Here is the simplified loop that takes so long:
```
for (int i = 0; i < 100_000; i++) {
hashSet.iterator().next();
hashSet.remove(i);
}
```
[async-profiler](https://github.com/jvm-profiling-tools/async-profiler) shows that almost all time is spent inside [`java.util.HashMap$HashIterator()`](http://hg.openjdk.java.net/jdk/jdk/file/7c2236ea739e/src/java.base/share/classes/java/util/HashMap.java#l1566) constructor:
```
HashIterator() {
expectedModCount = modCount;
Node<K,V>[] t = table;
current = next = null;
index = 0;
if (t != null && size > 0) { // advance to first entry
---> do {} while (index < t.length && (next = t[index++]) == null);
}
}
```
The highlighted line is a linear loop that searches for the first non-empty bucket in the hash table.
Since `Integer` has the trivial `hashCode` (i.e. hashCode is equal to the number itself), it turns out that consecutive integers mostly occupy the consecutive buckets in the hash table: number 0 goes to the first bucket, number 1 goes to the second bucket, etc.
Now you remove the consecutive numbers from 0 to 99999. In the simplest case (when the bucket contains a single key) the removal of a key is implemented as nulling out the corresponding element in the bucket array. Note that the table is not compacted or rehashed after removal.
So, the more keys you remove from the beginning of the bucket array, the longer `HashIterator` needs to find the first non-empty bucket.
Try to remove the keys from the other end:
```
hashSet.remove(100_000 - i);
```
The algorithm will become dramatically faster!
|
Alternative to scanning AWS DynamoDB?
I understand that scanning DynamoDB is not reccomended and is bad practice.
Let's say I have a food ordering website and I want to do a daily scan of all users to find out who hasn't ordered food in the last week so I can send them an email (just an example).
This would put some very spikey demand on the database, especially with a large user base.
Is there an alternative to these scheduled scans that I'm missing? Or in this scenario is a scan the best tool for the job?
|
There are a lot of different possible answers to this question. As so often, it all begins with the simple truth that the best way to do something like this *depends* on the actual specifics and what you are trying to optimise for (cost, latency, duration, etc.).
Since this appears to be a "once a week" thing I guess latency and "job" duration are not high on the priority list, but cost might be.
The next important thing to consider is implementation complexity. For example: if your service only has 100 users, I would not bother with any of the more complex solutions and just do a scan. But if your service has millions of users, this is probably not a great idea anymore.
For the purpose of this answer I am going to assume that your user base has become too large to just do a scan. In this scenario I can think of two possible solutions:
1. Add a separate index that allows you to "query" for the last order date easily.
2. [Use a S3 backup](https://aws.amazon.com/premiumsupport/knowledge-center/back-up-dynamodb-s3/)
The first should be fairly self explanatory. As often described in DynamoDB articles, you are supposed to define your "access patterns" and build indexes around them. The pro here is that you are still operating within DynamoDB, the con is the added cost.
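For the first option, a hedged sketch of such a query (every name here is made up: it assumes a GSI called `byLastOrder` with a constant partition-key attribute `gsiPk` and `lastOrderDate` in epoch seconds as the sort key):

```
import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("users")
cutoff = int(time.time()) - 7 * 24 * 3600  # one week ago

resp = table.query(
    IndexName="byLastOrder",
    KeyConditionExpression=Key("gsiPk").eq("USER") & Key("lastOrderDate").lt(cutoff),
)
inactive_users = resp["Items"]  # for large results, paginate via LastEvaluatedKey
```

The constant partition key is the usual trick that makes "query by date across all users" possible at all; the trade-off is that all of those index writes funnel through a single partition.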
My preferred solution is probably to just do scheduled backups of the table to S3 and then process the backup somewhere else. Maybe a custom tool you write or some AWS service that allows processing large amounts of data. This is probably the cheapest solution but processing time might not be "super fast".
I am looking forward to other solutions to this interesting question.
|
Computing Bayesian Credible Intervals for Bayesian Regression
I understand the Bayesian regression approach, in which I compute the posterior distribution of the parameters $\theta_i$. Now I am interested in how to compute the credible intervals of the regression curve: $$y=x_0+\theta_1 x_1+\dots+\theta_n x_n$$
In a frequentist setting, a prediction interval of a linear regression prediction for $x_{new}$ states:
$$\hat{\mu}\pm t_{(\alpha/2,\,n-p)}\, s\sqrt{1+x_{new}(X^TX)^{-1}x_{new}^\top}$$
yet for the Bayesian regression, I am unsure how to obtain the credible interval of the regression curve from the credible intervals of the parameters $\theta_i$. What is the correct formula, and how does it come about? Thank you for your help in advance!
**EDIT:**
I have found out that the predictive distribution for Bayesian regression with a zero-mean Gaussian prior with covariance matrix $\Sigma_p$, i.e.
$$w\sim N(0,\Sigma_p)$$
comes out to be:
$$\int p(f_*|x_*,w)\,p(w|X,y)\,dw=\int x_*^\top w\; p(w|X,y)\,dw$$
and this is normally distributed:
$$N\left(\frac{1}{\sigma_n^2}x_*^{\top}A^{-1}Xy,\; x_*^{\top}A^{-1}x_*\right)$$
where $A=\sigma_n^{-2}XX^\top+\Sigma_p^{-1}$
I have thus computed the standard deviation $std=\sqrt{x_*^{\top}A^{-1}x_*}$ for a range of values of $x_*$ to plot the upper and lower prediction boundaries:
[](https://i.stack.imgur.com/H0Qpf.png)
The polynomial regression is unregularized (as I do not know how to compute prediction intervals for the regularized case). I am left with the questions:
1. The posterior predictive distribution predicts an $f_*$. Am I right in the assumption that the credible interval obtained from this distribution is an interval surrounding the mean prediction, and not a single prediction (thus not a prediction interval)?
2. If this is correct: To get the (Bayesian Regression) plot, I have added the MSE to the computed standard deviations, to account for the error term $\epsilon$. Is this a valid approach to get from a mean prediction interval to a single prediction interval? (It is my intuitive understanding that the $\epsilon$'s cancel out in the case of a mean prediction.)
3. Lastly, to compute the Bayesian Regression curve, I have used the Python library Scikit-Learn (<http://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression>). However, this library assumes a NIG prior (which is, to my best understanding, conjugate). I have looked through a lot of literature, but as I am just beginning to understand this topic, I simply could not find out: What is the predictive distribution in this case of a (hierarchical) NIG prior? I would like to use it to compute prediction intervals, as I attempted to in the plots above.
|
Unfortunately, the interval that you are looking for is not uniquely determined. Essentially, what you need is the Posterior Predictive Density (PPD, see <https://en.wikipedia.org/wiki/Posterior_predictive_distribution>), which is the density function of new/unseen data given the observed data. This PPD depends on the posterior distribution of the parameters, $\theta_1, \dots, \theta_n$ in your case. It can be written as
$p(y^* \mid y, x, x^*) = \int p(y^*, \theta \mid y, x, x^*)\, d\theta = \int p(y^* \mid \theta)\, p(\theta \mid y, x, x^*)\, d\theta$
where $y^*$ represents the unseen response data, $y$ represents the known response data, $x$ and $x^*$ represent the predictor values that correspond to $y$ and $y^*$, and $\theta$ represents the parameters. The last factor in the final integral is the posterior distribution of $\theta$ given $y, x, x^*$. As the PPD depends on the posterior distribution of the parameters, this, in turn, depends on the prior distribution of the parameters (and on the chosen data model / likelihood function). This means that for each prior you may choose, your posterior distribution changes (and, as a result, your interval as well).
Usually, when choosing completely uninformative (i.e. flat) priors, along with a Normal likelihood for the response values given the predictors, the results of a Bayesian analysis overlap with those of a frequentist analysis. Then again, flat priors are usually a poor choice for such a model.
When you know which priors you want to use for your analysis, it *may* be possible to compute the PPD analytically, but in many cases this is simply impossible. I'd recommend using a tool like Stan (<http://mc-stan.org>) to draw samples from the posterior distribution and then use those to determine a credible interval for your parameters and your new (simulated) data.
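For concreteness, a minimal Stan sketch of a Bayesian linear regression with a posterior-predictive block (the priors are illustrative assumptions, not recommendations):

```
data {
  int<lower=0> N;            // observations
  int<lower=0> K;            // predictors
  matrix[N, K] X;
  vector[N] y;
  int<lower=0> N_new;        // points where intervals are wanted
  matrix[N_new, K] X_new;
}
parameters {
  vector[K] theta;
  real<lower=0> sigma;
}
model {
  theta ~ normal(0, 10);     // weakly-informative, for illustration only
  sigma ~ cauchy(0, 5);      // half-Cauchy via the lower bound
  y ~ normal(X * theta, sigma);
}
generated quantities {
  vector[N_new] y_new;       // draws from the posterior predictive
  for (n in 1:N_new)
    y_new[n] = normal_rng(X_new[n] * theta, sigma);
}
```

The 2.5% and 97.5% quantiles of the `y_new` draws then give a 95% interval for new observations; quantiles of `X_new * theta` (without the noise term) give the credible interval around the mean curve.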
Hope this helps!
|
Update max\_map\_count for ElasticSearch docker container Mac host
I'm using [this](https://hub.docker.com/r/_/elasticsearch/) container to start `elasticsearch` in `docker`. In accordance with the [manual](https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html) I have to update `max_map_count` to start the container
>
> sudo sysctl -w vm.max\_map\_count=262144
>
>
>
but.. I can only update it in the host (container) AFTER I start it, while I'm unable to start it in the first place. Am I doing something wrong?
>
> ERROR: bootstrap checks failed max virtual memory areas
> vm.max\_map\_count [65530] likely too low, increase to at least [262144]
>
>
>
If I try to do it on my host machine (which is a Mac) I get the following error.
>
> sysctl: unknown oid 'vm.max\_map\_count'
>
>
>
The Docker engine installs a Linux VM where all the containers run. So the command to increase the limit should be executed on that Linux host, not on the Mac.
How can I access the Linux VM installed by the Docker engine via a terminal?
|
# On Docker Toolbox
If you are using Docker Toolbox, ssh into the Docker machine from a terminal and then make the configuration change:
```
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
exit
```
# On Docker For Mac:
The `vm.max_map_count` setting must be set within the xhyve virtual machine:
```
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
```
If it asks you for a username and password, log in with root and no password.
If it just has a blank screen, press RETURN.
Then configure the sysctl setting as you would for Linux:
```
sysctl -w vm.max_map_count=262144
```
Exit by Control-A Control-\.
See the docs [here](https://www.elastic.co/guide/en/elasticsearch/reference/5.1/docker.html#docker-cli-run-prod-mode).
### Persistence
In some cases, this change does not persist across restarts of the VM. So, while `screen`'d into it, edit the file `/etc/sysctl.d/00-alpine.conf` and add the parameter `vm.max_map_count=262144` to the end of the file.
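The appended line is plain sysctl syntax:

```
# end of /etc/sysctl.d/00-alpine.conf inside the VM
vm.max_map_count=262144
```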
|
Haskell: How is join a natural transformation?
I can define a natural transformation in Haskell as:
```
h :: [a] -> Maybe a
h [] = Nothing
h (x:_) = Just x
```
and with a function k:
```
k :: Char -> Int
k = ord
```
the naturality condition is met due to the fact that:
`h . fmap k` == `fmap k . h`
Can the naturality condition of the List monad's `join` function be demonstrated in a similar way? I'm having some trouble understanding how `join`, say `concat` in particular, is a natural transformation.
|
Okay, let's look at `concat`.
First, here's the implementation:
```
concat :: [[a]] -> [a]
concat = foldr (++) []
```
This parallels the structure of your `h` where `Maybe` is replaced by `[]` and, more significantly, `[]` is replaced by--to abuse syntax for a moment--`[[]]`.
`[[]]` is a functor as well, of course, but it's *not* a `Functor` instance in the way that the naturality condition uses it. Translating your example directly won't work:
`concat . fmap k` =/= `fmap k . concat`
...because both `fmap`s are working on only the outermost `[]`.
And although `[[]]` is hypothetically a valid instance of `Functor` you can't make it one directly, for practical reasons that are probably obvious.
However, you can reconstruct the correct lifting as so:
`concat . (fmap . fmap) k` == `fmap k . concat`
...where `fmap . fmap` is equivalent to the implementation of `fmap` for a hypothetical `Functor` instance for `[[]]`.
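A quick check of that equation on concrete values, reusing `ord` as the `k` from the question:

```
import Data.Char (ord)

main :: IO ()
main = do
  let xss = [['a','b'],['c']]             -- a value of type [[Char]]
  print (concat ((fmap . fmap) ord xss))  -- [97,98,99]
  print (fmap ord (concat xss))           -- [97,98,99], the same list
```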
As a related addendum, `return` is awkward for the opposite reason: `a -> f a` is a natural transformation from an elided identity functor. Using `: []` the identity would be written as so:
`(:[]) . ($) k` == `fmap k . (:[])`
...where the completely superfluous `($)` is standing in for what would be `fmap` over the elided identity functor.
|
Installing Vapor on macOS without needing Homebrew
I am trying to get a Swift Vapor project started. Following the guide [here](https://docs.vapor.codes/2.0/getting-started/install-on-macos/), it seems that Homebrew is the only option. I already have MacPorts and prefer it in many ways to Homebrew. Unfortunately there is no port for Vapor, so I went for the SPM installation that Vapor people describe [here](https://docs.vapor.codes/2.0/getting-started/manual/). I had previous success with Kitura, so I thought why not with Vapor. Well, when you go and build your project, you get
```
$ swift build
[... build stuff ...]
note: you may be able to install ctls using your system-packager:
brew install ctls
[... more build stuff ...]
<module-includes>:1:9: note: in file included from <module-includes>:1:
#import "shim.h"
^
[... more like that ...]
/Users/morpheu5/web/vizex/api/.build/checkouts/crypto.git-7980259129511365902/Sources/Crypto/Cipher/Cipher+Method.swift:1:8: error: could not build Objective-C module 'CTLS'
import CTLS
^
<unknown>:0: error: build had 1 command failures
error: exit(1):/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift-build-tool -f /Users/morpheu5/web/vizex/api/.build/debug.yaml
```
Apparently you really need this `ctls` package, and the only way of getting it appears to be through Homebrew/Tap.
I really don't want or need Homebrew, so how do I get to the bottom of this? I'd really like to give Vapor a try.
|
Obligatory 1: installing Homebrew is the easiest way. If you then decide you don't want Homebrew, it uninstalls quite neatly.
Obligatory 2: using a Linux VM is the second easiest way.
But to answer your question and manually install `CTLS`:
1. Make sure you have the libraries for `LibreSSL` or `OpenSSL` installed (using MacPorts, presumably)
2. Download the latest [release](https://github.com/vapor/ctls/releases) of `CTLS`.
3. From the release archive, rename `macos.pc` to `ctls.pc` and then edit it using a text editor. Change the paths to point to your LibreSSL/OpenSSL installation.
4. Move the edited `ctls.pc` into your `$PKG_CONFIG_PATH`.
I have tested this and it works for me, with the caveat that I installed `LibreSSL` using Homebrew so I don't know where MacPorts will put it.
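For reference, a hypothetical `ctls.pc` might look roughly like this (the MacPorts prefix is an assumption; point the paths at wherever your LibreSSL/OpenSSL actually lives):

```
# ctls.pc — paths are assumptions for a MacPorts install
prefix=/opt/local

Name: CTLS
Description: TLS shim used by Vapor
Version: 1.0.0
Libs: -L${prefix}/lib -lssl -lcrypto
Cflags: -I${prefix}/include
```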
|
Class template method specialization
I'm trying to specialize a template method like this:
```
template <typename X, typename Y>
class A {
public:
void run(){};
};
template<typename Y>
void A<int, Y>::run() {}
```
But I get
```
main.cpp:70:17: error: nested name specifier 'A<int, Y>::' for declaration does not refer into a class, class template or class template partial specialization
```
I understand that the specialization isn't yet complete because I haven't instantiated it with a specific `Y`, but how can I do that?
|
You need to first partially specialize the class itself, including the function declaration. After that you can write the function's definition. You may not partially specialize a function on its own.
For example
```
#include <iostream>
template <typename X, typename Y>
class A {
public:
void run()
{
std::cout << "How do you do?\n";
};
};
template<typename Y>
class A<int, Y>
{
public:
void run();
};
template<typename Y>
void A<int, Y>::run()
{
std::cout << "Hello World!\n";
}
int main()
{
A<int, int>().run();
A<double, int>().run();
return 0;
}
```
The program output.
```
Hello World!
How do you do?
```
|
Custom 404 response model
I want to provide a custom reponse for all 404s on our API. For example:
```
{
"message": "The requested resource does not exist. Please visit our documentation.."
}
```
I believe the following result filter works for all cases within the MVC pipeline:
```
public class NotFoundResultFilter : ResultFilterAttribute
{
public override void OnResultExecuting(ResultExecutingContext context)
{
var result = context.Result as NotFoundResult;
if (result != null)
{
context.Result = new HttpNotFoundResult(); // My custom 404 result object
}
}
}
```
But, when a URL requested does not match an action route, the above filter is not hit. **How could I best intercept these 404 responses?** Would this require middleware?
|
Yes, you need to use middleware, as filters only apply to the MVC pipeline.
1. You may, as always, write your own middleware
```
app.Use(async (context, next) =>
{
await next();
if (context.Response.StatusCode == 404)
{
context.Response.ContentType = "application/json";
await context.Response.WriteAsync(JsonConvert.SerializeObject("your text"), Encoding.UTF8);
}
});
```
2. Or use the built-in middleware [StatusCodePagesMiddleware](https://docs.asp.net/en/latest/fundamentals/error-handling.html), though as you want to handle only one status, it brings extra functionality along. This middleware can be used to handle responses whose status code is between 400 and 600. You can configure the StatusCodePagesMiddleware by adding one of the following lines to the `Configure` method (example from [StatusCodePages Sample](https://github.com/aspnet/Diagnostics/blob/b1643b438aa947370868b4d5ee7727c27f2d78cb/samples/StatusCodePagesSample/Startup.cs)):
```
app.UseStatusCodePages(); // There is a default response but any of the following can be used to change the behavior.
// app.UseStatusCodePages(context => context.HttpContext.Response.SendAsync("Handler, status code: " + context.HttpContext.Response.StatusCode, "text/plain"));
// app.UseStatusCodePages("text/plain", "Response, status code: {0}");
// app.UseStatusCodePagesWithRedirects("~/errors/{0}"); // PathBase relative
// app.UseStatusCodePagesWithRedirects("/base/errors/{0}"); // Absolute
// app.UseStatusCodePages(builder => builder.UseWelcomePage());
// app.UseStatusCodePagesWithReExecute("/errors/{0}");
```
|
C#-Why does this if statment removes a new line character on StreamReader's output?
```
StreamReader reader = new StreamReader("randomTextFile.txt");
string line = "";
while (line != null)
{
line = reader.ReadLine();
if (line != null)
{
Console.WriteLine(line);
}
}
reader.Close();
Console.ReadLine();
```
In the above code, there is an if statement inside the while statement, even though they specify the same condition `(line != null)`. If I remove said if statement, a new line will be added after the txt file contents (instead of "11037", the console will show "11037" plus an empty line).
|
The [`while`-loop](https://msdn.microsoft.com/en-us/library/2aeyhxcd.aspx) exit condition is only checked when it is evaluated, i.e. at the beginning of each iteration, not continuously inside the loop's scope.
>
> MSDN: the test of the while expression takes place **before each execution** of
> the loop
>
>
>
You could use this loop:
```
string line;
using (var reader = new StreamReader("randomTextFile.txt"))
{
while ((line = reader.ReadLine()) != null)
{
Console.WriteLine(line);
}
}
```
You should also use the `using`-statement as shown above on every object implementing `IDisposable`. That way it is ensured that unmanaged resources are disposed.
---
According to the `Console.WriteLine` specific question why it writes a new line even if the value is `null`, that's [documented](https://msdn.microsoft.com/en-us/library/xf2k8ftb(v=vs.110).aspx):
>
> If value is null, only the line terminator is written to the standard
> output stream.
>
>
>
|
Can I migrate a VirtualBox Ubuntu Guest to a \*real\* Hardware Box?
I want to transfer Ubuntu from a VirtualBox Guest 'appliance' to a **real** (metal and chips) computer?
Can this be done, and what steps are involved?
|
I'd try it with `dd` (don't forget to replace the device names like `sda` with your device name):
1. Replace all uuids in your `/etc/fstab` with things like `/dev/sda1` (`sda` = destination hd number!)
2. `update-grub2 && grub-install /dev/sda`
3. Save your virtual hdd inside VBox into a file: `dd if=/dev/sda of=/home/user/sda.img`
4. Copy the Image to a disk (external hdd, network share, dvd, ...)
5. Restore the Image to the destination drive: `dd if=/media/drive/sda.img of=/dev/sda`
The biggest problem might be the bootloader (but there are tutorials for this even in this forum). I once reinstalled a bootloader by doing a fresh install of Ubuntu (preferably the same as the one you dd'ed) and then dd'ing the old partition over the fresh install (in this case, you would only `dd` `/dev/sda1`, not `/dev/sda`, which also includes the bootloader and all partitions).
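If the bootloader does need repairing on the metal machine, the usual chroot dance from a live CD looks roughly like this (device names are assumptions):

```
sudo mount /dev/sda1 /mnt           # the restored root partition
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/sda
update-grub
```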
|
open viewcontroller from tab like in instagram camera page on IOS
What I want is supposed to be simple: Instagram's middle tab on iOS opens the app's camera interface, not like the rest of the tabs but as an independent view controller, and when you press cancel it goes back to the last tab you were on.
Any ideas?
|
Edit: In order for this code to work you'll need to subclass UITabBarController and implement the UITabBarDelegate. So you'll add something like this:
```
@interface MyTabsViewController : UITabBarController <UITabBarDelegate>
```
In Interface Builder you'll need to set the tags of your tab items to whatever you need them to be:

And now you can do something like this:
```
-(void)tabBar:(UITabBar *)tabBar didSelectItem:(UITabBarItem *)item
{
if(item.tag == 1)
{
[self presentViewController:myVC animated:YES completion:nil];
}
else if(item.tag == 2)
{
//your code
}
}
```
This will allow you to get the tapped event for each of your tab bar items and add some custom code; you just need to add the appropriate tags to your tab buttons in Interface Builder. For example, you could add code to show your own view controller as shown above.
Hope that helps!
And here's another link: [How can i get Tabbar Item Button's Click event by clicking on TabbarItem button?](https://stackoverflow.com/questions/2801299/how-can-i-get-tabbar-item-buttons-click-event-by-clicking-on-tabbaritem-button)
|
How to save a video from AVAssetExportSession to Camera Roll?
I have some code that edits a video, and then creates an AVAssetExportSession to save the edited video somewhere. I would like to save it to the camera roll, but can't figure out what the NSURL for that is.
```
var session: AVAssetExportSession = AVAssetExportSession(asset: myasset, presetName: AVAssetExportPresetHighestQuality)
session.outputURL = ???
session.exportAsynchronouslyWithCompletionHandler(nil)
```
Does anyone know how to determine the correct NSURL for saving a video to the camera roll? Thanks in advance for your help.
|
You can't save your video directly to the camera roll simply by using `session.outputURL = ...`. You'll have to save the video to a file path (temporary or otherwise) then write the video at that url to your camera roll using `writeVideoAtPathToSavedPhotosAlbum:`, ex:
```
var exportPath: NSString = NSTemporaryDirectory().stringByAppendingFormat("/video.mov")
var exportUrl: NSURL = NSURL.fileURLWithPath(exportPath)!
var exporter = AVAssetExportSession(asset: myasset, presetName: AVAssetExportPresetHighestQuality)
exporter.outputURL = exportUrl
exporter.exportAsynchronouslyWithCompletionHandler({
let library = ALAssetsLibrary()
    library.writeVideoAtPathToSavedPhotosAlbum(exportUrl, completionBlock: { (assetURL:NSURL!, error:NSError?) -> Void in
// ...
})
})
```
|
How to create hyperlinks in Apache POI Word?
How do I create hyperlinks in Word documents using apache-poi? Is it possible to use relative paths?
|
There is [XWPFHyperlinkRun](https://poi.apache.org/apidocs/org/apache/poi/xwpf/usermodel/XWPFHyperlinkRun.html) but no method for creating one until now (March 2018, `apache poi` version `3.17`). So we will need to use the underlying low-level methods.
The following example provides a method for creating a `XWPFHyperlinkRun` in a `XWPFParagraph`. After that the `XWPFHyperlinkRun` can be handled as a `XWPFRun` for further formatting since it extends this class.
```
import java.io.*;
import org.apache.poi.xwpf.usermodel.*;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTHyperlink;
public class CreateWordXWPFHyperlinkRun {
static XWPFHyperlinkRun createHyperlinkRun(XWPFParagraph paragraph, String uri) {
String rId = paragraph.getDocument().getPackagePart().addExternalRelationship(
uri,
XWPFRelation.HYPERLINK.getRelation()
).getId();
CTHyperlink cthyperLink=paragraph.getCTP().addNewHyperlink();
cthyperLink.setId(rId);
cthyperLink.addNewR();
return new XWPFHyperlinkRun(
cthyperLink,
cthyperLink.getRArray(0),
paragraph
);
}
public static void main(String[] args) throws Exception {
XWPFDocument document = new XWPFDocument();
XWPFParagraph paragraph = document.createParagraph();
XWPFRun run = paragraph.createRun();
run.setText("This is a text paragraph having ");
XWPFHyperlinkRun hyperlinkrun = createHyperlinkRun(paragraph, "https://www.google.de");
hyperlinkrun.setText("a link to Google");
hyperlinkrun.setColor("0000FF");
hyperlinkrun.setUnderline(UnderlinePatterns.SINGLE);
run = paragraph.createRun();
run.setText(" in it.");
paragraph = document.createParagraph();
paragraph = document.createParagraph();
run = paragraph.createRun();
run.setText("This is a text paragraph having ");
hyperlinkrun = createHyperlinkRun(paragraph, "./test.pdf"); //path in URI is relative to the Word document file
hyperlinkrun.setText("a link to a file");
hyperlinkrun.setColor("0000FF");
hyperlinkrun.setUnderline(UnderlinePatterns.SINGLE);
hyperlinkrun.setBold(true);
hyperlinkrun.setFontSize(20);
run = paragraph.createRun();
run.setText(" in it.");
FileOutputStream out = new FileOutputStream("CreateWordXWPFHyperlinkRun.docx");
document.write(out);
out.close();
document.close();
}
}
```
|
Convert list of tables to matrix
I have a list containing 270 tables. Each table has 3 columns and 22 rows, looking something like this:
```
1 2 3
lx -0.206628 -0.148405 -0.159344
ly 0.017395 0.043114 0.027263
lz -0.044009 0.005597 -0.016906
...
```
I need each of these tables merged into a giant 270x66 matrix, where the rows are each table and the columns are values in the table columns. I've tried using `lapply`, `as.matrix`, and `data.matrix`. Is there some kind of combination of these that will do the trick? Or should I first convert the table into vectors, then into the matrix? I know loops are generally discouraged.
|
You could create a vector using `unlist` and then create a new matrix:
```
matrix(unlist(mylist), nrow = length(mylist), byrow = TRUE)
```
An example:
```
mylist <- rep(list(matrix(1:9, 3)), 3)
matrix(unlist(mylist), nrow = length(mylist), byrow = TRUE)
#
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 1 2 3 4 5 6 7 8 9
# [2,] 1 2 3 4 5 6 7 8 9
# [3,] 1 2 3 4 5 6 7 8 9
```
|
set a getter/setter on class variable JavaScript
I have a class which looks like below:
```
export default class Car {
brand = '';
color = '';
fuel = '';
constructor(car) {
...
}
}
```
How can I set a setter on the class variables?
So for example
```
export default class Car {
brand = '';
color = '';
fuel = '';
set brand(brand) {
...
}
get brand() {
return ...;
}
get color(color) {
return ...;
}
set color(color) {
...
}
}
```
I tried to use the above; however, it doesn't work.
```
class Car {
brand = '';
color = '';
constructor(car) {
this.brand = car.brand;
this.color = car.color;
}
set brand(val) {
// It should alert Audi.
alert(val);
}
}
let car = new Car({brand: 'BMW', color: 'white'});
car.brand = 'Audi';
```
It doesn't alert the value I am setting on `brand`.
|
The problem lies in the naming: both the setter and the public class field have the same name, so when setting it you don't actually use the custom setter. If you were to change the name it would work, e.g.:
```
class Car {
_brand = '';
_color = '';
constructor(car) {
this._brand = car.brand;
this._color = car.color;
}
set brand(val) {
this._brand = val;
alert(val);
}
get brand() {
return this._brand;
}
}
let car = new Car({
brand: 'BMW',
color: 'white'
});
car.brand = 'Audi';
console.log(car.brand)
```
|
Preserve normal word wrapping inside absolutely positioned container
I have an absolutely positioned block of text inside a relatively positioned container. The absolutely positioned element exceeds the right boundary of its container.
The problem is: the text isn't wrapping as normal; it's breaking prematurely rather than expanding to its defined `max-width`:
**Observed behavior:**

**Desired behavior**

**HTML/CSS** ([**JSFIDDLE**: http://jsfiddle.net/WmcjM/](http://jsfiddle.net/WmcjM/)):
```
<style>
.container {
position: relative;
width: 300px;
background: #ccc;
height: 100px;
}
.text {
position: absolute;
max-width: 150px;
left: 290px;
top: 10px;
background: lightblue;
}
</style>
<div class="container">
<div class="text">Lorem ipsum dolor sit amet</div>
</div>
```
Note: A couple changes that appear to achieve the desired behavior, but which aren't quite what I'm looking for, include:
- defining `min-width: 150px` on `.text` (the text might just be one word, and I don't want the container to be oversized)
- positioning `.text`. relative to document, rather than to `.container` (it needs to appear beside the container, even when the browser is resized)
|
Try using `position: relative;` on .text
EDIT: Also put it inside an absolutely positioned wrapper with your custom max-width
**CSS**
```
.container {
position: relative;
width: 300px;
background: #ccc;
height: 300px;
}
.wrap_text {
position: absolute;
max-width: 200px;
top: 10px;
}
.text {
position: relative;
left: 290px;
background: lightblue;
}
```
And **HTML**:
```
<div class="container">
<div class="wrap_text">
<div class="text">
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
</div>
</div>
</div>
```
|
Finding out who has modified a Stored Procedure on SQL Server?
Looks like somebody has logged in and modified a Stored Procedure on our SQL Server database. Is there a way to find out the credentials of the person that has done that?
|
You can look at the default trace and figure this out easily. Try the examples from Jonathan Kehayias.
<http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/05/27/pass-dba-sig-default-trace-presentation-files.aspx>
This is shown as an example; you need to look at the ALTER OBJECT event type.
<http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/09/29/what-session-created-that-object-in-tempdb.aspx>
EDIT:
Code is courtesy of Jonathan Kehayias.
```
DECLARE @filename VARCHAR(255)
SELECT @FileName = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'
FROM sys.traces
WHERE is_default = 1;
SELECT gt.HostName,
gt.ApplicationName,
gt.NTUserName,
gt.NTDomainName,
gt.LoginName,
gt.SPID,
gt.EventClass,
te.Name AS EventName,
gt.EventSubClass,
gt.TEXTData,
gt.StartTime,
gt.EndTime,
gt.ObjectName,
gt.DatabaseName,
gt.FileName,
gt.IsSystem
FROM [fn_trace_gettable](@filename, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE EventClass in (164) --AND gt.EventSubClass = 2
ORDER BY StartTime DESC;
```
|
Access a list of entries in MATLAB
I have a huge `MxN` matrix, say, `A=rand([M,N]);`, and an index vector with `N` integer values between `1` and `M`, say, `RandomIndex = randi(M,[1,N]);`.
Now I would like to generate a row vector with entries
```
result = [A(RandomIndex(1),1), A(RandomIndex(2),2), ..., A(RandomIndex(N),N)]
```
What would be an efficient way to do this? It should be a very cheap operation but all my implementations are slow. I don't think there is a notation in Matlab to do this directly, is there?
The fastest option so far is
```
indexFunction = @(r,c) A(r,c);
result = cell2mat(arrayfun(indexFunction,RandomIndex,1:N,'UniformOutput',false));
```
Is there a more efficient way?
|
Use [`sub2ind`](http://uk.mathworks.com/help/matlab/ref/sub2ind.html)
```
A(sub2ind(size(A), RandomIndex, 1:N))
```
[`sub2ind`](http://uk.mathworks.com/help/matlab/ref/sub2ind.html) will convert the row and column indices given by `RandomIndex` and `1:N` to linear indices based on `size(A)` which you can then use to index `A` directly.
---
Another way to do this is to use `RandomIndex` and `1:N` to return an `NxN` matrix and then take the diagonal of this with [`diag`](https://uk.mathworks.com/help/matlab/ref/diag.html)
```
diag(A(RandomIndex, 1:N)).'
```
**Note**: `.'` is used to convert the row vector returned by [`diag`](https://uk.mathworks.com/help/matlab/ref/diag.html) to a column vector.
|
MSAccess: Hiding Navigation Pane, and Menu Bar Using VBAccess
Is there a way to hide the Navigation Pane and Menu Bar on launch using MS Access VBA? The point is to remove "distractions" for users of the MS Access solution.
**Fig A: Hiding the Navigation Pane and Menu Bar**
[](https://i.stack.imgur.com/5hLo5.jpg)
|
# Option 1
One easy way is to rename the `*.accdb` to `*.accdr`.
Then it will be opened in runtime mode without ribbon bar and navigation pane.
# Option 2
Call the database with the full command line of Microsoft Access, the database path, and the command-line parameter `/runtime`; then it will also be opened in runtime mode.
Example:
`"c:\Program Files (x86)\Microsoft Office\Office15\msaccess.exe" "c:\data\yourDatabase.accdb" /runtime`
(the path of Microsoft Access varies depending on your installation of Access: MSI or C2R, x86 or x64, version of Access, custom installation folder, ...)
# Option 3
You can hide them by code:
- Navigation Pane: <https://stackoverflow.com/a/47519552/7658533>
- Ribbon: <https://stackoverflow.com/a/35582657/7658533>
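If you go with option 3, a minimal VBA sketch (run it from an `AutoExec` macro or a startup form; these are standard Access commands):

```
' Hide the Navigation Pane: select it, then hide the selected window
DoCmd.NavigateTo "acNavigationCategoryObjectType"
DoCmd.RunCommand acCmdWindowHide

' Hide the ribbon
DoCmd.ShowToolbar "Ribbon", acToolbarNo
```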
|
How can I set the focus to a specific input field in an AlertDialog?
In a login dialog with two input fields (user name / password) I would like to set the focus on the second field (because user name is stored in preferences).
Does the AlertDialog.Builder provide a way to set the focus?
|
I am using an AlertDialog too; you could try
```
final EditText input = new EditText(this);
input.setInputType(InputType.TYPE_CLASS_TEXT); // you should use .TYPE_TEXT_VARIATION_PASSWORD
input.requestFocus();
```
Example:
```
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle(x);
builder.setIcon(R.drawable.x);
final EditText input = new EditText(this);
input.setInputType(InputType.TYPE_CLASS_TEXT);
input.setText("mytext");
builder.setView(input);
builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
// xy
});
builder.setNegativeButton(cancel, new DialogInterface.OnClickListener() {
@Override
// xy
});
builder.show();
input.requestFocus(); // <--- for the focus
```
Regards
|
Install Vim in Cygwin
I have installed Cygwin on my Windows machine and I have also selected some additional packages as part of my installation (like GCC etc). Now I want to add Vim also to my existing Cygwin setup. What is the procedure to add Vim to my existing Cygwin setup? Or is there some separate binary for Vim in Cygwin which I can untar and install? What is the best option in my current scenario?
|
You need to run Cygwin's `setup.exe` again, and select the packages you want. Vim is not included in the default package.
I've blogged about this, with explicit instructions and a picture: [Cygwin setup gotchas | Code and comments](http://wilsonericn.wordpress.com/2011/08/15/cygwin-setup-gotchas/)
After installing Vim you may find that things just don't seem to be what you are used to. That is because Linux systems usually have a default `.vimrc` file somewhere. It seems that Cygwin does not. In Vim, run `:edit $MYVIMRC` to see your `.vimrc`.
You should get a nice `.vimrc` from somewhere and place it in your home folder for a better experience. Currently I'm using [this one](https://github.com/amix/vimrc).
|
Can KVM work without libvirt?
I am sorry for the silly question, but could you please tell me: can KVM work without
libvirt?
In my limited experience I have only seen KVM functionality based on libvirt.
Thanks in advance for your reply.
|
I've used KVM without libvirt. libvirt is just a group that you assign a user to so that you are not running as root when you execute the virtual machine. You have to have qemu installed.
```
sudo apt-get install qemu
```
Then you would use the qemu package that supports the type of iso you are trying to install. Then you would write something like this
```
qemu-system-x86_64 -m 2048 -cdrom /path/to/Windows10.iso -enable-kvm
```
If you already have Windows installed in a disk image on the hard drive, then you would write it like this
```
qemu-system-x86_64 -m 2048 -hda /path/to/windows10.img -enable-kvm
```
You can find out the different kinds of qemu packages that you have just typing in `qemu` in the terminal, and this will tell you the types of OSes that can be run inside of KVM using the specified package.
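Putting it together, you would typically create a disk image first, install from the ISO into it, and from then on boot the image directly (sizes and paths are examples):

```
qemu-img create -f qcow2 /path/to/windows10.img 40G
qemu-system-x86_64 -m 2048 -hda /path/to/windows10.img -cdrom /path/to/Windows10.iso -boot d -enable-kvm
```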
|
Node / Angular app Uncaught SyntaxError: Unexpected token <
I'm following [this node/angular tutorial](https://scotch.io/tutorials/creating-a-single-page-todo-app-with-node-and-angular) and am getting the following errors:

I'm bootstrapping my app through node, which renders index page:
```
module.exports = function(app) {
app.get('*', function(req, res) {
res.sendfile('./public/index.html');
...
});
```
Which renders:
```
<html ng-app="DDE">
<head>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script>
<script src="bower_components/angular/angular.js"></script>
<script src="bower_components/angular-route/angular-route.js"></script>
<script src="js/app.js"></script>
<script src="js/controllers/main.js"></script>
</head>
<body>
This is the index page
<div ng-view></div>
```
I'd like Node to handle the initial page load, but Angular to handle the rest of the routing. The problem: it doesn't seem that my Angular routing is working. I put a self-executing function `run()` in there to test, but it's not being called.
I'm simply trying to test display the `testpage.html` template:
**app.js file:**
```
angular
.module('DDE', [
'ngRoute'
])
.config(['$routeProvider',
function($routeProvider) {
$routeProvider.
when('/test', {
run : (function() {
alert('hit');
})(),
templateUrl: '../html/partials/testpage.html'
}).
otherwise({
redirectTo: '/test'
});
}
]);
```
The Angular error isn't very helpful. I'm not sure what `Unexpected token <` means, as I cannot find where I've added an extra `<` anywhere.
---
EDIT:
```
app.get('/', function(req, res) {
res.send('./public/index.html');
});
```

It should be able to find the stuff in bower components as the pathing is correct:
```
root/bower_components/angular/angular.js
root/bower_components/angular-route/angular-route.js
```
|
You are missing some settings in your app file on the server to handle all of the requests being made to the server. Here is a basic sample working express file, which you can modify to fit your environment:
```
var express = require('express');
var app = express();
app.use('/js', express.static(__dirname + '/js'));
app.use('/bower_components', express.static(__dirname + '/../bower_components'));
app.use('/css', express.static(__dirname + '/css'));
app.use('/partials', express.static(__dirname + '/partials'));
app.all('/*', function(req, res, next) {
// Just send the index.html for other files to support HTML5Mode
res.sendFile('index.html', { root: __dirname });
});
```
This file uses `express.static` to serve any content which is in the specific directory regardless of name or type. Also keep in mind that Express use statements are processed in order, and the first match is taken, any matches after the first are ignored. So in this example, if it isn't a file in the `/js`, `/bower_components`, `/css`, or `/partials` directory, the `index.html` will be returned.
|
Fluent NHibernate one-to-many relationship setting foreign key to null
I have a simple Fluent NHibernate model with two related classes:
```
public class Applicant
{
public Applicant()
{
Tags = new List<Tag>();
}
public virtual int Id { get; set; }
//other fields removed for sake of example
public virtual IList<Tag> Tags { get; protected set; }
public virtual void AddTag(Tag tag)
{
tag.Applicant = this;
Tags.Add(tag);
}
}
public class Tag
{
public virtual int Id { get; protected set; }
public virtual string TagName { get; set; }
public virtual Applicant Applicant { get; set; }
}
```
My fluent mapping is the following:
```
public class ApplicantMap : ClassMap<Applicant>
{
public ApplicantMap()
{
Id(x => x.Id);
HasMany(x => x.Tags).Cascade.All();
}
}
public class TagMap : ClassMap<Tag>
{
public TagMap()
{
Id(x => x.Id);
Map(x => x.TagName);
References(x => x.Applicant).Not.Nullable();
}
}
```
Whenever I try to **update** an applicant (inserting a new one works fine), it fails and I see the following SQL exception in the logs:
```
11:50:52.695 [6] DEBUG NHibernate.SQL - UPDATE [Tag] SET Applicant_id = null WHERE Applicant_id = @p0;@p0 = 37 [Type: Int32 (0)]
11:50:52.699 [6] ERROR NHibernate.AdoNet.AbstractBatcher - Could not execute command: UPDATE [Tag] SET Applicant_id = null WHERE Applicant_id = @p0 System.Data.SqlClient.SqlException (0x80131904): Cannot insert the value NULL into column 'Applicant_id', table 'RecruitmentApp.dbo.Tag'; column does not allow nulls. UPDATE fails.
```
Why is NHibernate trying to update the tag table and set Applicant\_id to null? I'm at a loss on this one.
|
Setting `Applicant.Tags` to `Inverse` will instruct NHibernate to save the `Tags` after the `Applicant`.
```
public class ApplicantMap : ClassMap<Applicant>
{
public ApplicantMap()
{
Id(x => x.Id);
HasMany(x => x.Tags).Cascade.All().Inverse();
}
}
```
More detail:
`Inverse` (as opposed to `.Not.Inverse()`) means the other side of the relationship (in this case, each `Tag`) is responsible for maintaining the relationship. Therefore, NHibernate knows that the `Applicant` must be saved first so that `Tag` has a valid foreign key for its `Applicant`.
Rule of thumb: The entity containing the foreign key is usually the owner, so the other table should have `Inverse`
|
HashMap with incorrect equals and HashCode implementation
According to what I have read,
>
> to use an object as the key to a hashMap, it has to provide a correct
> override and implementation of the ***equals*** and ***hashCode***
> method. HashMap get(Key k) method calls hashCode method on the key
> object and applies returned ***hashValue*** to its own static hash
> function to find a bucket location(backing array) where keys and
> values are stored in form of a nested class called Entry (Map.Entry).
> HashMap's internal hash Method defends against poor quality hash
> functions.
>
>
>
To test these contracts, I have written a bean class with incorrect but legal implementations of the ***equals*** and ***hashCode*** method.
The class:
```
public class HashVO {
private String studentName;
private int age;
private boolean isAdult;
public HashVO(String studentName, int age, boolean isAdult) {
super();
this.studentName = studentName;
this.age = age;
this.isAdult = isAdult;
}
public String getStudentName() {
return studentName;
}
public void setStudentName(String studentName) {
this.studentName = studentName;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public boolean isAdult() {
return isAdult;
}
public void setAdult(boolean isAdult) {
this.isAdult = isAdult;
}
@Override
public String toString() {
return studentName + " : " + age + " : " + isAdult;
}
@Override
public boolean equals(Object obj) {
return false;
}
@Override
public int hashCode() {
return 31;
}
}
```
In this case, the hash method of the HashMap,
```
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
should also return the same value every time because `hashCode` always returns 31. So if objects of class HashVO are used as keys of a HashMap, **the get method should not work, as it will go to the same bucket to retrieve the objects, and since the equals method always returns false it will not be able to find a match for the key object**.
But when I am using this method,
```
public static void main(String[] args) {
HashMap<HashVO, String> voMap = new HashMap<HashVO, String>();
HashVO vo = new HashVO("Item1", 25, true);
HashVO vo1 = new HashVO("Item2", 12, false);
HashVO vo2 = new HashVO("Item3", 1, false);
voMap.put(vo, "Item");
voMap.put(vo1, "Item1");
voMap.put(vo2, "Item2");
System.out.println(voMap.get(vo));
System.out.println(voMap.get(vo1));
System.out.println(voMap.get(vo2));
}
```
the output is correct, showing
```
Item
Item1
Item2
```
I want to understand why the correct output appears even though the `equals` and `hashCode` implementations are incorrect.
|
`HashMap` has a little trick where it compares object references before using `equals`. Since you're using the same object references for adding the elements and for retrieving them, `HashMap` will return them correctly.
See Java 7 source [here](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7u40-b43/java/util/HashMap.java#HashMap.getEntry%28java.lang.Object%29) (Java 8 did a pretty big revamp of `HashMap` but it does something similar)
```
final Entry<K,V> getEntry(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
for (Entry<K,V> e = table[indexFor(hash, table.length)];
e != null;
e = e.next) {
Object k;
// HERE. Uses == with the key
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
}
return null;
}
```
Note that this isn't part of the docs, so don't depend on it.
|
AWS SES SDK send email with attachments
I'm using the [official AWS Golang SDK](https://github.com/aws/aws-sdk-go) to integrate with SES but can't find any information about how to add some attachments (pdf file represented as []byte in code) to the email.
Could you help me?
The current email sending code looks like this:
```
sesEmailInput := &ses.SendEmailInput{
Destination: &ses.Destination{
ToAddresses: []*string{aws.String("To address")},
},
Message: &ses.Message{
Subject: &ses.Content{
Data: aws.String("Some text"),
},
Body: &ses.Body{
Html: &ses.Content{
Data: aws.String("Some Text"),
},
},
},
Source: aws.String("From address"),
ReplyToAddresses: []*string{
aws.String("From address"),
},
}
if _, err := s.sesSession.SendEmail(sesEmailInput); err != nil {
return err
}
```
|
To send attachments, use the SendRawEmail API instead of SendEmail. AWS documentation will generally refer to this as constructing a 'raw message' instead of explicitly calling out how to send attachments.
## Example
From the AWS SDK for Go API Reference, linked below:
```
params := &ses.SendRawEmailInput{
RawMessage: &ses.RawMessage{ // Required
Data: []byte("PAYLOAD"), // Required
},
ConfigurationSetName: aws.String("ConfigurationSetName"),
Destinations: []*string{
aws.String("Address"), // Required
// More values...
},
FromArn: aws.String("AmazonResourceName"),
ReturnPathArn: aws.String("AmazonResourceName"),
Source: aws.String("Address"),
SourceArn: aws.String("AmazonResourceName"),
Tags: []*ses.MessageTag{
{ // Required
Name: aws.String("MessageTagName"), // Required
Value: aws.String("MessageTagValue"), // Required
},
// More values...
},
}
resp, err := svc.SendRawEmail(params)
```
## Further Reading
- [Amazon SES API Reference - SendRawEmail](https://docs.aws.amazon.com/ses/latest/APIReference/API_SendRawEmail.html)
- [AWS SDK for Go API Reference - SendRawEmail](https://docs.aws.amazon.com/sdk-for-go/api/service/ses/#SES.SendRawEmail)
- [AWS SES Documentation - Sending Raw Email Using the Amazon SES API](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-raw.html) - This is a good primer for email standards and constructing raw messages (including a section about attachments).
|
Dice Roll Array PHP Issue
I have never used PHP before, but basically this program is supposed to run 36,000 times, and each time the two dice are rolled, the total number gets a "tally." Right now, it outputs whichever number it has rolled with "36,000 tallies" instead.
Here is my code:
```
<?php
$dice = (rand(1,6) + rand(1,6));
$roll = array();
for ($result = 0; $result < 36000; $result++){
if ($dice == 2){
$roll[2]++;
}
if ($dice == 3){
$roll[3]++;
}
if ($dice == 4){
$roll[4]++;
}
if ($dice == 5){
$roll[5]++;
}
if ($dice == 6){
$roll[6]++;
}
if ($dice == 7){
$roll[7]++;
}
if ($dice == 8){
$roll[8]++;
}
if ($dice == 9){
$roll[9]++;
}
if ($dice == 10){
$roll[10]++;
}
if ($dice == 11){
$roll[11]++;
}
if ($dice == 12){
$roll[12]++;
}
}
?>
```
|
Initialize all your possibilities and add the dice roll inside the loop.
<http://codepad.org/nQfGZ3bR>
```
<?php
$roll = array();
$roll[2] = 0;
$roll[3] = 0;
$roll[4] = 0;
$roll[5] = 0;
$roll[6] = 0;
$roll[7] = 0;
$roll[8] = 0;
$roll[9] = 0;
$roll[10] = 0;
$roll[11] = 0;
$roll[12] = 0;
for ($result = 0; $result < 36000; $result++){
$dice = (rand(1,6) + rand(1,6));
$roll[$dice]++;
}
var_dump($roll);
```
Output:
```
array(11) {
[2]=>
int(962)
[3]=>
int(1999)
[4]=>
int(3019)
[5]=>
int(3923)
[6]=>
int(4929)
[7]=>
int(6083)
[8]=>
int(5076)
[9]=>
int(3971)
[10]=>
int(3006)
[11]=>
int(2017)
[12]=>
int(1015)
}
```
|
Why can spaces between options and parameters be omitted?
For example:
```
xargs -n 1
```
is the same as
```
xargs -n1
```
But if you look at the [man page](http://linux.die.net/man/1/xargs), the option is listed as `-n max-args`, which means the space is supposed to be preserved. There is nothing about the abbreviated form -n*max-args*.
This also happens with many other Linux utilities.
What is this called in Linux? Do all utilities support the abbreviated form (but never document it in the man page)?
|
When you write the command line parsing bit of your code, you specify what options take arguments and which ones do not. For example, in a shell script accepting an `-h` option (for help for example) and an `-a` option that should take an argument, you do
```
opt_h=0 # default value
opt_a=""
while getopts 'a:h' opt; do
case $opt in
h) opt_h=1 ;;
a) opt_a="$OPTARG" ;;
esac
done
echo "h: $opt_h"
echo "a: $opt_a"
```
The `a:h` bit says "I'm expecting to parse two options, `-a` and `-h`, and `-a` should take an argument" (it's the `:` after `a` that tells the parser that `-a` takes a argument).
Therefore, there is never any ambiguity in where an option ends, where its value starts, and where another one starts after that.
Running it:
```
$ bash test.sh -h -a hello
h: 1
a: hello
$ bash test.sh -h -ahello
h: 1
a: hello
$ bash test.sh -hahello
h: 1
a: hello
```
This is why you most of the time shouldn't write your own command line parser to parse options.
There is only one case in this example that is tricky. The parsing usually stops at the first non-option, so when you have stuff on the command line that *looks* like options:
```
$ bash test.sh -a hello -world
test.sh: illegal option -- w
test.sh: illegal option -- o
test.sh: illegal option -- r
test.sh: illegal option -- l
test.sh: illegal option -- d
h: 0
a: hello
```
The following solves that:
```
$ bash test.sh -a hello -- -world
h: 0
a: hello
```
The `--` signals an end of command line options, and the `-world` bit is left for the program to do whatever it wants with (it's in one of the positional variables).
That is, by the way, how you remove a file that has a dash in the start of its file name with `rm`.
**EDIT**:
Utilities written in C call `getopt()` (declared in `unistd.h`) which works pretty much the same way. In fact, for all we know, the `bash` function `getopts` may be implemented using a call to the C library function `getopt()`. Perl, Python and other languages have similar command line parsing libraries, and it is most likely that they perform their parsing in similar ways.
Some of these `getopt` and `getopt`-like library routines also handle "long" options. These are *usually* preceded by double-dash (`--`), and long options that takes arguments often does so after an equal sign, for example the `--block-size=SIZE` option of [some implementations of] the `du` utility (which also allows for `-B SIZE` to specify the same thing).
The reason manuals are often written to show a space in between the short options and their arguments is probably for readability.
**EDIT**: *Really* old tools, such as the `dd` and `tar` utilities, have options without dashes in front of them. This is purely for historical reasons and for maintaining compatibility with software that relies on them to work in exactly that way. The `tar` utility has gained the ability to take options with dashes in more recent times. The BSD manual for `tar` calls the old-style options for "bundled flags".
|
Pandas: set difference by group
I have a very large dataset containing the members in each team in each month. I want to find additions and deletions to each team. Because my dataset is very big, I'm trying to use in-built functions as much as possible.
My dataset looks like this:
```
month team members
0 0 A X, Y, Z
1 1 A X, Y
2 2 A W, X, Y
3 0 B D, E
4 1 B D, E, F
5 2 B F
```
It's generated by the following code:
```
num_months = 3
num_teams = 2
obs = num_months*num_teams
df = pd.DataFrame({"month": [i % num_months for i in range(obs)],
"team": ['AB'[i // num_months] for i in range(obs)],
"members": ["X, Y, Z", "X, Y", "W, X, Y", "D, E", "D, E, F", "F"]})
df
```
The result should be like this:
```
month team members additions deletions
0 0 A X, Y, Z None None
1 1 A X, Y None Z
2 2 A W, X, Y W None
3 0 B D, E None None
4 1 B D, E, F F None
5 2 B F None D, E
```
or in Python code
```
df = pd.DataFrame({"month": [i % num_months for i in range(obs)],
"team": ['AB'[i // num_months] for i in range(obs)],
"members": ["X, Y, Z", "X, Y", "W, X, Y", "D, E", "D, E, F", "F"],
"additions": [None, None, "W", None, "F", None],
"deletions": [None, "Z", None, None, None, "D, E"]
})
```
A technique that immediately comes to mind is to create a new column which shows the [lagged value of members in each group](https://stackoverflow.com/questions/26280345/pandas-shift-down-values-by-one-row-within-a-group), followed by taking the set difference (both ways) between both columns.
Is there a way to take set differences between columns using pandas inbuilt functions?
Are there other techniques I should try?
|
## Using [`set`](https://docs.python.org/3.8/library/stdtypes.html#set), [`groupby`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html), [`apply`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html), and [`shift`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html).
- For efficiency:
- Convert `members` to `set` type because `-` is an unsupported operand, which will cause a `TypeError`.
- Leave `additions` and `deletions` as `set` type
## Using `apply`
- With a dataframe of 60000 rows:
- `91.4 ms ± 2.77 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)`
```
# clean the members column
df.members = df.members.str.replace(' ', '').str.split(',').map(set)
# create del and add
df['deletions'] = df.groupby('team')['members'].apply(lambda x: x.shift() - x)
df['additions'] = df.groupby('team')['members'].apply(lambda x: x - x.shift())
# result
month team members additions deletions
0 A {Z, X, Y} NaN NaN
1 A {X, Y} {} {Z}
2 A {W, X, Y} {W} {}
0 B {D, E} NaN NaN
1 B {D, F, E} {F} {}
2 B {F} {} {D, E}
```
## More Efficiently
- [`pandas.DataFrame.diff`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html)
- With a dataframe of 60000 rows:
- `60.7 ms ± 3.54 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)`
```
df['deletions'] = df.groupby('team')['members'].diff(periods=-1).shift()
df['additions'] = df.groupby('team')['members'].diff()
```
|
Is 'Service Worker' support offline HTML form data?
[Service Worker](http://www.html5rocks.com/en/tutorials/service-worker/introduction/)s essentially act as proxy servers that sit between web applications.
My concern: Is [ServiceWorker](http://www.chromium.org/blink/serviceworker) also support for offline forms? - If so then my other list of concerns are:
1. Where it stored the incomplete HTML form data at client side (Storage, Session, Local, Database)?
2. In which form it stores user's filled data (encryption etc.)
3. How the data security/privacy tackled? that the returning user is the same user? - if one user had filled out the form in public place and lost the internet connection and left the incomplete form and move on, now the next immediate user can see the incomplete form data, where the last user left the form.
|
First off, service workers could potentially do a lot of things. They have access to [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API), for instance, so there's always the possibility of saving and loading arbitrary data, from the context of either a controlled page or a service worker.
But none of that will happen automatically by virtue of having an service worker registered—a service worker will, by default, do absolutely nothing. It's only when you write code in the service worker to hook into different events (`install`, `fetch`, `message`, etc.) that things get interesting.
Hypothetically, if you were to write code in a service worker that cached an HTTP response for an HTML resource and then responded to `fetch` events for the request's URL with that cached response, then the browser would get back and render the same HTML as it would if the response had come from the network. There wouldn't be any special "form state" that gets cached along with the HTML response body.
If, for some reason, you did want to save "form state" so that users who leave a page with a form could return and continue editing it, you'd have to do that independently of the [`caches`](https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#self-caches) object that's exposed to the service worker. ([`caches`](https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#self-caches) requires that you use [`Request`](https://fetch.spec.whatwg.org/#request) objects as keys and stores [`Response`](https://fetch.spec.whatwg.org/#response) objects as values.)
So you could potentially do it using `IndexedDB`, or you could do it using [`localStorage`](https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Storage#localStorage), or really anything else. It's independent of the service worker, and the particular privacy/security considerations are up to the implementer.
|
React Input Element : Value vs Default Value
When I render an input element within my component if i set the element "value" it become read-only but if i set the value on "defaultValue" it will never update again when i re-update my state.
Here is my code :
```
import React from "react";
export default class EditForm extends React.Component {
editTransaction(event) {
var transaction = this.props.transaction;
event.preventDefault();
var NewTransaction = {
transactions_data: {
amount: this.refs.amount.value
}
}
this.props.editTransaction(NewTransaction, transaction.id);
}
closeForm() {
this.props.closeForm();
}
render() {
var {amount}=this.props.transaction;
return (
<div>
<br/>
<h4>Edit Transaction</h4>
<div className="btn btn-danger pull-right" onClick={this.closeForm.bind(this)}>close</div>
<div className="clearfix"></div>
<form onSubmit={this.editTransaction.bind(this)}>
<div>
<label for="amount">Amount</label>
<input value={amount} onChange={(value) => this.onChange(value)} className="form-control"
id="amount" name="amount" type="number"
ref="amount"/>
</div>
<br/>
<br/>
<input className="btn btn-info" type="submit" value="submit"/>
</form>
</div>
);
}
}
```
and then i found out if i make an error out of this by adding
`onChange={(value) => this.onChange(value)}` on my input element, it works properly ( it updating while the props or state is updating, and i can re-type the value), but i think this is not a proper solution, because it cause errors on my browser console. It is because "this.onChange" function does not exist.
How can this problem be solved?
|
The reason your input doesn't work is because you need to `define` the `onChange` function which actually `sets the state` with the updated value. You can probably do it inline since it only needs on statement like
```
<input type="text" value={this.state.inputVal} onChange={(e) => {this.setState({inputVal: e.target.value})}} />
```
However I would recommend you to use an `onChange` method as you can handle multiple inputs together with it and it looks cleaner
```
class EditForm extends React.Component {
constructor() {
super();
this.state = {
}
}
onChange(e) {
this.setState({[e.target.name]: e.target.value})
}
editTransaction(event) {
var transaction = this.props.transaction;
event.preventDefault();
var NewTransaction = {
transactions_data: {
amount: this.refs.amount.value
}
}
this.props.editTransaction(NewTransaction, transaction.id);
}
closeForm() {
this.props.closeForm();
}
render() {
return (
<div>
<br/>
<h4>Edit Transaction</h4>
<div className="btn btn-danger pull-right" onClick={this.closeForm.bind(this)}>close</div>
<div className="clearfix"></div>
<form onSubmit={this.editTransaction.bind(this)}>
<div>
<label for="amount">Amount</label>
<input value={this.state.amount} onChange={(value) => this.onChange(value)} className="form-control"
id="amount" name="amount" type="number"
ref="amount"/>
<input value={this.state.amount1} onChange={(value) => this.onChange(value)} className="form-control"
id="amount1" name="amount1" type="number"
ref="amount"/>
</div>
<br/>
<br/>
<input className="btn btn-info" type="submit" value="submit"/>
</form>
</div>
);
}
}
ReactDOM.render(<EditForm/>, document.getElementById('app'));
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.0.2/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.0.2/react-dom.min.js"></script>
<div id="app"></div>
```
|
Exception in thread "main" java.lang.NoClassDefFoundError: HelloWorld
I've been working on this for about an hour and thumbing through Q&As on stackoverflow but I haven't found a proposed solution to my problem. I'm sorry if this is a duplicate, but I couldn't find any duplicate question with an answer that solved my specific problem.
I am trying to write and compile a java program from terminal for the first time (up until this point I have been using Eclipse for java and VIM for everything else, but I feel its time to switch entirely to VIM). Here is my current HelloWorld code:
```
package main;
public class HelloWorld {
public static void main(String args[]) {
System.out.println("Hello World!");
}
}
```
I compile and run using the following commands (specifying the classpath to ensure that isn't the problem):
```
javac -cp "./" HelloWorld.java
java -cp "./" HelloWorld
```
This gives me the following error message:
```
Exception in thread "main" java.lang.NoClassDefFoundError: HelloWorld (wrong name: main/HelloWorld)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:480)
```
I know it is seeing the file HelloWorld.class and trying to access the class HelloWorld because if I change the run command to:
```
java -cp "./" Foo
```
I get an entirely different error message:
```
Error: Could not find or load main class Foo
```
I have tried several dozen pages worth of troubleshooting and come up short, including the following:
[Exception in thread "main" java.lang.NoSuchMethodError: main](https://stackoverflow.com/questions/3540065/exception-in-thread-main-java-lang-nosuchmethoderror-main)
<http://introcs.cs.princeton.edu/java/15inout/mac-cmd.html>
`java -version` yields:
```
java version "1.7.0_07"
Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
Java HotSpot(TM) Client VM (build 23.3-b01, mixed mode)
```
My operating system is LinuxMint and `uname -a` yields:
```
Linux will-Latitude-D620 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux
```
|
>
> package main;
>
>
>
This means that your class resides in the `main` package, and its canonical name is `main.HelloWorld`.
Java requires that package names should also be mirrored in the directory structure. This means that:
1. Your `HelloWorld.java` file should be in a directory named `main`
2. You should execute `javac` and `java` from the directory containing `main`, not from `main` itself
3. The classpath should contain the directory where the `main` directory is, not `main` itself
4. `java` expects the canonical name of the class to execute, so `main.HelloWorld`
So, to recap:
You should have something like `myproject/main/HelloWorld.java`
From `myproject`, run `javac main/HelloWorld.java`
From `myproject`, run `java -cp ./ main.HelloWorld`
|
Blurred background-image on Bootstrap 3
I've just started developing my first Bootstrap powered design but I'm struggling on how to create a blurred background-image.
Here's my code:
```
body {
padding-top: 40px;
padding-bottom: 40px;
}
body::before{
background: url(/scnet/includes/images/bg.jpg) no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
-webkit-filter: blur(5px);
-moz-filter: blur(5px);
-o-filter: blur(5px);
-ms-filter: blur(5px);
filter: blur(5px);
}
```
```
<body>
<div class="container">
<form class="form-signin">
<h2 class="form-signin-heading">Please sign in</h2>
<label for="inputEmail" class="sr-only">Email address</label>
<input type="text" id="inputUser" class="form-control" placeholder="User ID" data-toggle="tooltip" title="tooltip on second input!" name="secondname" required autofocus>
<label for="inputPassword" class="sr-only">Memorable Data</label>
<input type="password" id="inputMem" class="form-control" placeholder="Memorable Data" required>
<label for="inputPassword" class="sr-only">Password</label>
<input type="password" id="inputPassword" class="form-control" placeholder="Password" required>
<button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button><button class="btn btn-lg btn-primary btn-block" type="submit">Password Reset</button>
</form>
<script type="text/javascript">
$('input[type=text][id=inputEmail]').tooltip({ /*or use any other selector, class, ID*/
placement: "right",
trigger: "focus"
});
</script>
</div> <!-- /container -->
<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="includes/js/bootstrap.min.js"></script>
</body>
```
Currently nothing happens (background stays white) - I know the image reference is correct because I tried add it to `body` class and it loaded fine.
Any ideas what I'm doing wrong?
|
Try:
```
body {
padding-top: 40px;
padding-bottom: 40px;
position: relative;
}
body::before {
background: url(/scnet/includes/images/bg.jpg) no-repeat center center fixed;
content: '';
z-index: -1;
width: 100%;
height: 100%;
position:absolute;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
-webkit-filter: blur(5px);
-moz-filter: blur(5px);
-o-filter: blur(5px);
-ms-filter: blur(5px);
filter: blur(5px);
}
```
|
Displaying an image from a QAbstractTableModel
I am trying to display an image from a `QAbstractTableModel`. I tried returning a `QPixmap` as the `QVariant` of `data()`, but it only produces empty cells, when I would expect every cell in the second column to have a 20x20 black square.
This is my code currently:
```
QVariant MySqlTableModel::data(const QModelIndex &idx, int role = Qt::DisplayRole) const
{
if (role == Qt::DisplayRole && idx.column() == 1) {
QPixmap pixmap(20,20);
QColor black(0,0,0);
pixmap.fill(black);
return pixmap;
}
return QSqlTableModel::data(idx, role);
}
```
|
Only `QVariant`s that can be converted to string can be returned for the role `Qt::DisplayRole` with the standard delegate.
You can show the image by returning it for the role `Qt::DecorationRole`
```
QVariant MySqlTableModel::data(const QModelIndex &idx, int role = Qt::DisplayRole) const
{
if (idx.column() == 1) {
if (role == Qt::DecorationRole) {
QPixmap pixmap(20,20);
QColor black(0,0,0);
pixmap.fill(black);
return pixmap;
} else if (role == Qt::DisplayRole) {
// For Qt::DisplayRole return an empty string, otherwise
// you will have *both* text and image displayed.
return "";
}
}
return QSqlTableModel::data(idx, role);
}
```
Or write your own delegate to do the painting yourself. See [QStyledItemDelegate documentation](http://doc.qt.nokia.com/latest/qstyleditemdelegate.html#details) for more details.
|
AngularJS V1.1 interceptor always have $q.when at the end
In the [documentation](https://code.angularjs.org/1.1.5/docs/api/ng.$http) (version 1.1) of AngularJS about interceptors, the interceptor functions all return something like this
```
return response || $q.when(response);
```
However, in my app, 'response' is always defined, so $q.when(response) is never executed.
So the question is in what situation will the 'response' be undefined and what will
```
$q.when(response) // == $q.when(null)
```
do! because response is undefined/null ?
|
- `$q.when(promise)` → `promise`
- `$q.when(nonPromise)` → a new `promise`, that will asynchronously resolve to the given value `nonPromise`.
Lets see what is `$q.when`:
```
$q.when = function (foreignPromise) {
var deferred = $q.defer();
foreignPromise.then(function (data) {
deferred.resolve(data);
$rootScope.$digest();
}, function (reason) {
deferred.reject(reason);
$rootScope.$digest();
});
return deferred.promise;
}
```
## Factory return $q.when(data)
As we can see `$q.when` receives promise or nonPromise and wrap it with.
Factory example:
```
fessmodule.factory('Data', ['$resource','$q', function($resource, $q) {
var data = [
{
"PreAlertInventory": "5.000000",
"SharesInInventory": "3.000000",
"TotalSharesSold": "2.000000",
"TotalMoneySharesSold": "18.000000",
"TotalSharesBought": "0.000000",
"TotalShareCost": "0.000000",
"EstimatedLosses": "0.000000"
}
];
var factory = {
query: function (selectedSubject) {
return $q.when(data);
}
}
return factory;
}]);
```
Now we can call it from controller:
```
Data.query()
.then(function (result) {
$scope.data = result;
}, function (result) {
alert("Error: No data returned");
});
```
Demo `**[Fiddle](http://jsfiddle.net/9Ymvt/700/)**`
## Factory returns $q.when(data) || data
From this example we return promise. So lets change it a bit:
Instead `return $q.when(data);` we will write:
```
return $q.when(data) || data;
```
It will work as well. But not vice versa.
As I understand Angular knows that controller waits from `Data` service the promise and above mentioned statement will use 1st off `$q.when(data)`.
Demo 2 `**[Fiddle](http://jsfiddle.net/9Ymvt/702/)**`
## Factory returns data || $q.when(data)
Now lets call our `Data` service by this way:
```
$scope.data = Data.query();
```
No promises, the call is sync.
Out factory seems like:
```
fessmodule.factory('Data', ['$resource','$q', function($resource, $q) {
var data = [
{
"PreAlertInventory": "5.000000",
"SharesInInventory": "3.000000",
"TotalSharesSold": "2.000000",
"TotalMoneySharesSold": "18.000000",
"TotalSharesBought": "0.000000",
"TotalShareCost": "0.000000",
"EstimatedLosses": "0.000000"
}
];
var factory = {
query: function (selectedSubject) {
return data || $q.when(data);
}
}
return factory;
}]);
```
Demo 3 `**[Fiddle](http://jsfiddle.net/9Ymvt/703/)**`
**My Conclusion**
The `return data || $q.when(data)` means that our service can return single value or promise. But since we know what type of data our service returns , there is no sense in this statement. Or `data` or `promise`.
|
c++ console closing after cin.get is placed
i am writing a basic program to convert meters to feet
```
// TestApp.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
# include <iostream>
int main()
{
using namespace std;
double feet;
short int input;
const double feettometer = 3.28 ; (changed after second comment, small mistake)
cout << "Enter meter value: ";
cin >> input;
feet = feettometer * input ;
cout << "your meter value of " << input << " in feet is " << feet ;
cin.get();
return 0;
}
```
why does this con.get() not keep the console alive?
|
When you type in a number like `123` and hit enter, the input stream has `123\n` in it. When you extract into `input`, `123` is removed and `\n` is left in the stream. Then when you call `cin.get()`, this `\n` is extracted. It doesn't need to wait for any input because this character is already there waiting to be extracted.
So one solution is to clear the input stream with `ignore` before doing `get`:
```
cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
```
This will extract and discard anything up to and including the next `\n` in the stream. So if your input was `123hello\n`, it would even discard the `hello`.
An alternative would be to read the input line using `std::getline` (which will extract the `\n` too) and *then* parse the line for the input number.
|
Changing charset when retrieving messages from mail server!
i'm currently creating a little mail client and facing a problem with charset.
I use indy's TIdIMAP4 component to retrieve data from mail-server. When i try to retrieve mail bodies then accent letters like ä, ü etc are converted to =E4, =FC respectively as it is using charset ISO-8859-1.
>
> Content-Type: text/plain;
> charset="ISO-8859-1"
> Content-Transfer-Encoding:
> quoted-printable
>
>
>
How can i make server to send me data in another charset, like utf-8? What would be the best solution for that problem?
Thanks in advance!
|
It is not the `charset` that is producing strings like `=E4` and `=FC`, it is the `Content-Transfer-Encoding` instead. `$E4` and `$FC` are the binary representations of `ä` and `ü` in ISO-8859-1, but they are 8-bit values. Email is still largely a 7-bit environment. Unless both clients and servers negotiate 8-bit transfers during their communications, then byte octets above `$7F` have to be encoded in a 7-bit compatible manner to pass through email gateways safely, especially legacy ones that still exist. `quoted-printable` is a commonly used 7-bit byte encoding in email for textual content. `base64` is another one, but it is not human-readible so it tends to be used for binary data instead of textual data (though it can be used for text).
In any case, you cannot make the server deliver the email data to you in another encoding. The server is merely delivering the original email data as-is that was originally delivered to it by the sender. If you want the data in UTF-8, then you have to re-encode it yourself after downloading it. Indy will handle the decoding for you.
|
Use esttab to generate summary statistics by group with columns for mean difference and significance
I would like to use `esttab` (`ssc install estout`) to generate summary statistics by group with columns for the mean difference and significance. It is easy enough to generate these as two separate tables with `estpost`, `summarize`, and `ttest`, and combine manually, but I would like to automate the whole process.
The following code generates the two components of the desired table.
```
sysuse auto, clear
* summary statistics by group
eststo clear
by foreign: eststo: quietly estpost summarize ///
price mpg weight headroom trunk
esttab, cells("mean sd") label nodepvar
* difference in means
eststo: estpost ttest price mpg weight headroom trunk, ///
by(foreign) unequal
esttab ., wide label
```
And I can print the two tables and cut-an-paste into one table.
```
* can generate similar tables and append horizontally
esttab, cells("mean sd") label
esttab, wide label
* manual, cut-and-paste solution
-------------------------------------------------------------------------------------------------------
(1) (2) (3)
mean sd mean sd
-------------------------------------------------------------------------------------------------------
Price 6072.423 3097.104 6384.682 2621.915 -312.3 (-0.44)
Mileage (mpg) 19.82692 4.743297 24.77273 6.611187 -4.946** (-3.18)
Weight (lbs.) 3317.115 695.3637 2315.909 433.0035 1001.2*** (7.50)
Headroom (in.) 3.153846 .9157578 2.613636 .4862837 0.540** (3.30)
Trunk space (.. ft.) 14.75 4.306288 11.40909 3.216906 3.341*** (3.67)
-------------------------------------------------------------------------------------------------------
Observations 52 22 74
-------------------------------------------------------------------------------------------------------
t statistics in parentheses
* p<0.05, ** p<0.01, *** p<0.001
```
It seems that I should be able to get the desired table with one `esttab` call and without cutting-and-pasting, but I can't figure it out. Is there a way to generate the desired table without manually cutting-and-pasting?
I would prefer to output a LaTeX table, but anything that eliminates the cutting-and-pasting is a big step, even passing through a delimited text file.
|
If you still want to use esttab, you can play around using cells and pattern. The table in the original post can be replicated with the following code:
```
sysuse auto, clear
eststo domestic: quietly estpost summarize ///
price mpg weight headroom trunk if foreign == 0
eststo foreign: quietly estpost summarize ///
price mpg weight headroom trunk if foreign == 1
eststo diff: quietly estpost ttest ///
price mpg weight headroom trunk, by(foreign) unequal
esttab domestic foreign diff, ///
cells("mean(pattern(1 1 0) fmt(2)) sd(pattern(1 1 0)) b(star pattern(0 0 1) fmt(2)) t(pattern(0 0 1) par fmt(2))") ///
label
```
which yields
```
-----------------------------------------------------------------------------------------------------
(1) (2) (3)
mean sd mean sd b t
-----------------------------------------------------------------------------------------------------
Price 6072.42 3097.10 6384.68 2621.92 -312.26 (-0.44)
Mileage (mpg) 19.83 4.74 24.77 6.61 -4.95** (-3.18)
Weight (lbs.) 3317.12 695.36 2315.91 433.00 1001.21*** (7.50)
Headroom (in.) 3.15 0.92 2.61 0.49 0.54** (3.30)
Trunk space (.. ft.) 14.75 4.31 11.41 3.22 3.34*** (3.67)
-----------------------------------------------------------------------------------------------------
Observations 52 22 74
-----------------------------------------------------------------------------------------------------
```
|
plot a bar chart using matplotlib - type error
I am trying to plot a frequency distribution (of occurences of words and their frequency)
This is my code:
```
import matplotlib.pyplot as plt
y = [1,2,3,4,5]
x = ['apple', 'orange', 'pear', 'mango', 'peach']
plt.bar(x,y)
plt.show
```
However, i am getting this error:
```
TypeError: cannot concatenate 'str' and 'float' objects
```
|
```
import matplotlib.pyplot as plt
import numpy as np
y = [1,2,3,4,5]
x = np.arange(0,len(y)) + 0.75
xl = ['', 'apple', 'orange', 'pear', 'mango', 'peach']
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(x,y,0.5)
ax.set_xticklabels(xl)
ax.set_xlim(0,5.5)
```
It would be interesting if there is a better method for setting the labels to be in the middle of the bars.
According to this [SO post](https://stackoverflow.com/questions/31067389/matplotlib-barplot-generic-way-of-setting-the-xticklabels-in-the-middle-of-the), there is a better solution:
```
import matplotlib.pyplot as plt
import numpy as np
y = [1,2,3,4,5]
# adding 0.75 did the trick but only if I add a blank position to `xl`
x = np.arange(len(y))
xl = ['apple', 'orange', 'pear', 'mango', 'peach']
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(x,y,0.5, align='center')
ax.set_xticks(x)
ax.set_xticklabels(xl)
```
|
TensorFlow DataSet `from\_generator` with variable batch size
I'm trying to use the TensorFlow Dataset API to read an HDF5 file, using the `from_generator` method. Everything works fine unless the batch size does not evenly divide into the number of events. I don't quite see how to make a flexible batch using the API.
If things don't divide evenly, you get errors like:
```
2018-08-31 13:47:34.274303: W tensorflow/core/framework/op_kernel.cc:1263] Invalid argument: ValueError: `generator` yielded an element of shape (1, 28, 28, 1) where an element of shape (11, 28, 28, 1) was expected.
Traceback (most recent call last):
File "/Users/perdue/miniconda3/envs/py3a/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 206, in __call__
ret = func(*args)
File "/Users/perdue/miniconda3/envs/py3a/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 452, in generator_py_func
"of shape %s was expected." % (ret_array.shape, expected_shape))
ValueError: `generator` yielded an element of shape (1, 28, 28, 1) where an element of shape (11, 28, 28, 1) was expected.
```
I have a script that reproduces the error (and instructions to get the several MB required data file - Fashion MNIST) here:
<https://gist.github.com/gnperdue/b905a9c2dd4c08b53e0539d6aa3d3dc6>
The most important code is probably:
```
def make_fashion_dset(file_name, batch_size, shuffle=False):
dgen = _make_fashion_generator_fn(file_name, batch_size)
features_shape = [batch_size, 28, 28, 1]
labels_shape = [batch_size, 10]
ds = tf.data.Dataset.from_generator(
dgen, (tf.float32, tf.uint8),
(tf.TensorShape(features_shape), tf.TensorShape(labels_shape))
)
...
```
where `dgen` is a generator function reading from the hdf5:
```
def _make_fashion_generator_fn(file_name, batch_size):
reader = FashionHDF5Reader(file_name)
nevents = reader.openf()
def example_generator_fn():
start_idx, stop_idx = 0, batch_size
while True:
if start_idx >= nevents:
reader.closef()
return
yield reader.get_examples(start_idx, stop_idx)
start_idx, stop_idx = start_idx + batch_size, stop_idx + batch_size
return example_generator_fn
```
The core of the problem is we have to declare the tensor shapes in `from_generator`, but we need the flexibility to change that shape down the line while iterating.
There are some workarounds - drop the last few samples to get even division, or just use a batch size of 1... but the first is bad if you can't lose any samples and a batch size of 1 is very slow.
Any ideas or comments? Thanks!
|
When specifying Tensor shapes in `from_generator`, you can use `None` as an element to specify variable-sized dimensions. This way you can accommodate batches of different sizes, in particular "leftover" batches that are a bit smaller than your requested batch size. So you would use
```
def make_fashion_dset(file_name, batch_size, shuffle=False):
dgen = _make_fashion_generator_fn(file_name, batch_size)
features_shape = [None, 28, 28, 1]
labels_shape = [None, 10]
ds = tf.data.Dataset.from_generator(
dgen, (tf.float32, tf.uint8),
(tf.TensorShape(features_shape), tf.TensorShape(labels_shape))
)
...
```
|
How to add QLineEdit to QMessageBox PyQt5
I want a copyable text on my QMessageBox so I thought I can put a QLineEdit on QMessageBox then set the text of QLineEdit whatever I want, so user can choose the text and copy it.
But I couldn't success. Is there a way to add QLineEdit to QMessageBox or make a copyable text on QMessageBox?
|
by playing with `QMessageBox.informativeText()`, `QMessageBox.detailedText()` and `QMessageBox.textInteractionFlags()` i found the following:
`QMessageBox.informativeText()` and `QMessageBox.detailedText()` are always selectable, even if `QmessageBox.textInteractionFlags()` are set to `QtCore.Qt.NoTextInteraction`. `QMessageBox.detailedText()` is shown in a textedit. `QMessageBox.setTextInteractionFlags()` only acts on `QmessageBox.text()`. The use of these kinds of text is descripted in [documentation of QMessageBox](http://doc.qt.io/qt-5/qmessagebox.html#details). By flags you can set the text editable and/or selectable, see [enum TextInteractionFlags](http://doc.qt.io/qt-5/qt.html#TextInteractionFlag-enum).
Here an working example with selectable text in `QmessageBox.detailedText()`:
```
import sys
from PyQt5 import QtWidgets, QtCore
class MyWidget(QtWidgets.QWidget):
def __init__(self):
QtWidgets.QWidget.__init__(self)
self.setGeometry(400,50,200,200)
self.pushButton = QtWidgets.QPushButton('show messagebox', self)
self.pushButton.setGeometry(25, 90, 150, 25)
self.pushButton.clicked.connect(self.onClick)
def onClick(self):
msgbox = QtWidgets.QMessageBox()
msgbox.setText('to select click "show details"')
msgbox.setTextInteractionFlags(QtCore.Qt.NoTextInteraction) # (QtCore.Qt.TextSelectableByMouse)
msgbox.setDetailedText('line 1\nline 2\nline 3')
msgbox.exec()
app = QtWidgets.QApplication(sys.argv)
w = MyWidget()
w.show()
sys.exit(app.exec_())
```
|
Perfmon counters to check memory leak
I want to check the memory leakage issue in my service. I have tried following set of perfmon counters.
1. .NET CLR Memory\# Bytes in all Heaps
2. .NET CLR Memory\Gen 2 Heap Size
3. .NET CLR Memory\# GC handles
4. .NET CLR Memory\# of Pinned Objects
5. .NET CLR Memory\# total committed Bytes
6. .NET CLR Memory\# total reserved Bytes
7. .NET CLR Memory\Large Object Heap size
I have referred above set from [here](http://dotnetdebug.net/2005/06/30/perfmon-your-debugging-buddy/)
Also referred following set:
1. Memory/Available Bytes
2. Memory/Committed Bytes
3. Process/Private Bytes
4. Process/Page File Bytes
5. Process/Handle Count
I have referred above set from [here](http://vmin.wordpress.com/2012/05/20/memory-leaks-finding-a-memory-leak-in-microsoft-windows/)
Is there any parameter/criteria or any other best way to identify perfmon counter for memory leak?
Can any one suggest me set of counters to check memory leak? Or above sets covers memory leak?
|
To detect a memory leak using Performance Monitor, monitor these counters:
1. The Memory/Available Bytes counter lets you view the total number of bytes of available memory. This value normally fluctuates, but if
you have an application with the memory leak, it will decrease over
time.
2. TheMemory/Committed Bytes counter will steadily rise if a memory leak is occurring, because as the number of available bytes of
memory decreases, the number of committed bytes increases.
3. The Process/Private Bytes counter displays the number of bytes reserved exclusively for a specific process. If a memory leak is
occurring, this value will tend to steadily rise.
4. The Process/Page File Bytes counter displays the size of the pagefile. Windows uses virtual memory (the pagefile) to supplement a
machine's physical memory. As a machine's physical memory begins to
fill up, pages of memory are moved to the pagefile. It is normal for
the pagefile to be used even on machines with plenty of memory. But
if the size of the pagefile steadily increases, that's a good sign a
memory leak is occurring.
5. I also want to mention the Process/Handle Count counter. Applications use handles to identify resources that they must
access. If a memory leak is occurring, an application will often
create additional handles to identify memory resources. So a rise in
the handle count might indicate a memory leak. However, not all
memory leaks will result in a rise in the handle count.
*[Source](http://bshwjt.blogspot.co.uk/2010/08/how-to-find-memory-leak-using.html)*
In my experience this is accurate.
I'd also refer you to this Microsoft Advanced Debugging blog by Tess, a Microsoft employee. Who suggests the following counters. I have found the above to be more than enough to indicate a memory leak is present but I trust that Tess's instructions could provide a more indepth insight into the issue.
[Debugging Demos - Memory Review](https://www.tessferrandez.com/blog/2008/02/20/net-debugging-demos-lab-3-walkthrough.html)
[Updated link](https://www.tessferrandez.com/blog/2008/02/20/net-debugging-demos-lab-3-walkthrough.html)
- .NET CLR Memory/# Bytes in all Heaps
- .NET CLR Memory/Large Object Heap Size
- .NET CLR Memory/Gen 2 heap size
- .NET CLR Memory/Gen 1 heap size
- .NET CLR Memory/Gen 0 heap size
- Process/Private Bytes
- Process/Virtual Bytes
|
STRING\_AGG with distinct without sub-query
This is my data:
```
Code SubCode Colour Fruit Car City Name
A A1 Red Apple Honda Mel John
A A1 Green Apple Toyota NYC John
A A1 Red Banana Honda Lon John
A A1 Red Banana Opel Mel John
A A2 ...
A A2 ...
A A3
A A3
```
This is my sql:
```
SELECT Code, SubCode, STRING_AGG(Colour, ',') STRING_AGG(Fruit, ',') STRING_AGG(Car, ',') STRING_AGG(City, ',') STRING_AGG(Name, ',')
FROM myTable
```
I get this result:
```
Code SubCode Colour Fruit Car City Name
A A1 Red,Green,Red,Red Apple,Apple,Banana,Banan Honda,Toyota,Honda,Opel ...
```
Is there a way I get distinct values? Can I can create a sub-query with `STRING_AGG`?
```
Code SubCode Colour Fruit Car City Name
A A1 Red,Green Apple,Banana Honda,Toyota,Opel ...
```
|
Alas, SQL Server's `string_agg()` currently does not support `DISTINCT`. So you would need multiple subqueries, like so:
```
select
code,
subcode,
(select string_agg(color, ',') from (select distinct color from mytable t1 where t1.code = t.code and t1.subcode = t.subcode) t) colors,
(select string_agg(fruit, ',') from (select distinct fruit from mytable t1 where t1.code = t.code and t1.subcode = t.subcode) t) fruits,
(select string_agg(car , ',') from (select distinct car from mytable t1 where t1.code = t.code and t1.subcode = t.subcode) t) cars,
(select string_agg(city , ',') from (select distinct city from mytable t1 where t1.code = t.code and t1.subcode = t.subcode) t) cities,
(select string_agg(name , ',') from (select distinct name from mytable t1 where t1.code = t.code and t1.subcode = t.subcode) t) names
from mytable t
group by code, subcode
```
Note that your original query was missing a `group by` clause, because of which it was invalid SQL. I fixed that as well.
|
Possible to make labels appear when hovering over a point in matplotlib in stem plot?
I am new to matplotlib and I am looking to label stems in a stem plot with x,y co-od when mouse hovers over that point. When I searched everything was meant for scatter plot (Possible to make labels appear when hovering over a point in matplotlib? present code is like this:
```
def plot_matching(mzs,ints,matching,scan_num):
fig=p1.gcf()
fig.canvas.set_window_title('MS/MS Viewer')
rel_ints=relative_intensity(ints)
p1.xlim(min(mzs)-100,max(mzs)+100)
p1.ylim(min(rel_ints),max(rel_ints)+5)
p1.title('Scan Number:'+scan_num)
p1.xlabel('m/z')
p1.ylabel('Relative intensity')
mzs_rel=zip(mzs,rel_ints)
for x,y in mzs_rel:
x1=[]
y1=[]
x1.append(x)
y1.append(y)
markerline, stemlines, baseline=p1.stem(x1,y1)
p1.setp(markerline, 'Marker', '')
for m in matching:
if x==m[1] and y>3.0:
p1.setp(stemlines, linewidth=2, color='r')
p1.text(x,y,m[0],fontsize=12)
break
else:
p1.setp(stemlines,linewidth=2, color='g')
return p1
```
Will the scatter plot link for stem plot too?
|
To make a hovering label, you need to hook up a function to handle `motion_notify_events`:
```
plt.connect('motion_notify_event', some_function)
```
Below is some code showing one way to do it. The hovering label behavior is produced by
```
cursor = FollowDotCursor(ax, x, y)
```
where `ax` is the axis, `x` and `y` are lists of coordinates. Since you supply `x` and `y`, it does not matter if you are making a line plot or a stem plot or whatever. The labels appear when the mouse is moved near any point `(xi, yi)`.
The code below uses `scipy.spatial.cKDTree` to locate the nearest point. [Here](https://stackoverflow.com/a/13306887/190597) is an older version of this code which does not require scipy.
---
```
import matplotlib.pyplot as plt
import scipy.spatial as spatial
import numpy as np
pi = np.pi
cos = np.cos
def fmt(x, y):
return 'x: {x:0.2f}\ny: {y:0.2f}'.format(x=x, y=y)
class FollowDotCursor(object):
"""Display the x,y location of the nearest data point.
https://stackoverflow.com/a/4674445/190597 (Joe Kington)
https://stackoverflow.com/a/13306887/190597 (unutbu)
https://stackoverflow.com/a/15454427/190597 (unutbu)
"""
def __init__(self, ax, x, y, tolerance=5, formatter=fmt, offsets=(-20, 20)):
try:
x = np.asarray(x, dtype='float')
except (TypeError, ValueError):
x = np.asarray(mdates.date2num(x), dtype='float')
y = np.asarray(y, dtype='float')
mask = ~(np.isnan(x) | np.isnan(y))
x = x[mask]
y = y[mask]
self._points = np.column_stack((x, y))
self.offsets = offsets
y = y[np.abs(y-y.mean()) <= 3*y.std()]
self.scale = x.ptp()
self.scale = y.ptp() / self.scale if self.scale else 1
self.tree = spatial.cKDTree(self.scaled(self._points))
self.formatter = formatter
self.tolerance = tolerance
self.ax = ax
self.fig = ax.figure
self.ax.xaxis.set_label_position('top')
self.dot = ax.scatter(
[x.min()], [y.min()], s=130, color='green', alpha=0.7)
self.annotation = self.setup_annotation()
plt.connect('motion_notify_event', self)
def scaled(self, points):
points = np.asarray(points)
return points * (self.scale, 1)
def __call__(self, event):
ax = self.ax
# event.inaxes is always the current axis. If you use twinx, ax could be
# a different axis.
if event.inaxes == ax:
x, y = event.xdata, event.ydata
elif event.inaxes is None:
return
else:
inv = ax.transData.inverted()
x, y = inv.transform([(event.x, event.y)]).ravel()
annotation = self.annotation
x, y = self.snap(x, y)
annotation.xy = x, y
annotation.set_text(self.formatter(x, y))
self.dot.set_offsets(np.column_stack((x, y)))
bbox = self.annotation.get_window_extent()
self.fig.canvas.blit(bbox)
self.fig.canvas.draw_idle()
def setup_annotation(self):
"""Draw and hide the annotation box."""
annotation = self.ax.annotate(
'', xy=(0, 0), ha = 'right',
xytext = self.offsets, textcoords = 'offset points', va = 'bottom',
bbox = dict(
boxstyle='round,pad=0.5', fc='yellow', alpha=0.75),
arrowprops = dict(
arrowstyle='->', connectionstyle='arc3,rad=0'))
return annotation
def snap(self, x, y):
"""Return the value in self.tree closest to x, y."""
dist, idx = self.tree.query(self.scaled((x, y)), k=1, p=1)
try:
return self._points[idx]
except IndexError:
# IndexError: index out of bounds
return self._points[0]
fig, ax = plt.subplots()
x = np.linspace(0.1, 2*pi, 10)
y = cos(x)
markerline, stemlines, baseline = ax.stem(x, y, '-.')
plt.setp(markerline, 'markerfacecolor', 'b')
plt.setp(baseline, 'color','r', 'linewidth', 2)
cursor = FollowDotCursor(ax, x, y, tolerance=20)
plt.show()
```

|
Lucene 4.1 : How split words that contains "dots" when indexing?
I'l trying to figure out what I should do to index my keywords that contains "." .
ex : this.name
I want to index the terms : this and name in my index.
I use the StandardAnalyser. I try to extends the WhitespaceTokensizer or extends TokenFilter, but I'm not sure if I'm in the right direction.
if I use the StandardAnalyser, I'll obtain "this.name" as a keyword, and that's not what I want, but the analyser do the rest correctly for me.
|
You can put a CharFilter in front of StandardTokenizer that converts periods and underscores to spaces. MappingCharFilter will work.
Here's MappingCharFilter added to a stripped-down StandardAnalyzer (see the original 4.1 version [here](http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_1_0/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardAnalyzer.java)):
```
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.MappingCharFilter;
import org.apache.lucene.analysis.charfilter.NormalizeCharMap;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;
import java.io.IOException;
import java.io.Reader;
public final class MyAnalyzer extends StopwordAnalyzerBase {
private int maxTokenLength = 255;
public MyAnalyzer() {
super(Version.LUCENE_41, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
}
@Override
protected TokenStreamComponents createComponents
(final String fieldName, final Reader reader) {
final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
src.setMaxTokenLength(maxTokenLength);
TokenStream tok = new StandardFilter(matchVersion, src);
tok = new LowerCaseFilter(matchVersion, tok);
tok = new StopFilter(matchVersion, tok, stopwords);
return new TokenStreamComponents(src, tok) {
@Override
protected void setReader(final Reader reader) throws IOException {
src.setMaxTokenLength(MyAnalyzer.this.maxTokenLength);
super.setReader(reader);
}
};
}
@Override
protected Reader initReader(String fieldName, Reader reader) {
NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder();
builder.add(".", " ");
builder.add("_", " ");
NormalizeCharMap normMap = builder.build();
return new MappingCharFilter(normMap, reader);
}
}
```
Here's a quick test to demonstrate it works:
```
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;
public class TestMyAnalyzer extends BaseTokenStreamTestCase {
private Analyzer analyzer = new MyAnalyzer();
public void testPeriods() throws Exception {
BaseTokenStreamTestCase.assertAnalyzesTo
(analyzer,
"this.name; here.i.am; sentences ... end with periods.",
new String[] { "name", "here", "i", "am", "sentences", "end", "periods" } );
}
public void testUnderscores() throws Exception {
BaseTokenStreamTestCase.assertAnalyzesTo
(analyzer,
"some_underscore_term _and____ stuff that is_not in it",
new String[] { "some", "underscore", "term", "stuff" } );
}
}
```
|
How to appendReplacement on a Matcher group instead of the whole pattern?
I am using a `while(matcher.find())` to loop through all of the matches of a Pattern. For each instance or match of that pattern it finds, I want to replace `matcher.group(3)` with some new text. This text will be different for each one so I am using `matcher.appendReplacement()` to rebuild the original string with the new changes as it goes through. However, `appendReplacement()` replaces the entire Pattern instead of just the group.
How can I do this but only modify the third group of the match rather than the entire Pattern?
Here is some example code:
```
Pattern pattern = Pattern.compile("THE (REGEX) (EXPRESSION) (WITH MULTIPLE) GROUPS");
Matcher matcher = pattern.matcher("THE TEXT TO SEARCH AND MODIFY");
StringBuffer buffer = new StringBuffer();
while(matcher.find()){
matcher.appendReplacement(buffer, processTheGroup(matcher.group(3));
}
```
but I would like to do something like this (obviously this doesn't work).
```
...
while(matcher.find()){
matcher.group(3).appendReplacement(buffer, processTheGroup(matcher.group(3));
}
```
Something like that, where it only replaces a certain group, not the whole Pattern.
*EDIT: changed the regex example to show that not all of the pattern is grouped.*
|
Let's say your entire pattern matches `"(prefix)(infix)(suffix)"`, capturing the 3 parts into groups 1, 2 and 3 respectively. Now let's say you want to replace only group 2 (the infix), leaving the prefix and suffix intact the way they were.
Then what you do is you append what `group(1)` matched (unaltered), the new replacement for `group(2)`, and what `group(3)` matched (unaltered), so something like this:
```
matcher.appendReplacement(
buffer,
matcher.group(1) + processTheGroup(matcher.group(2)) + matcher.group(3)
);
```
This will still match and replace the entire pattern, but since groups 1 and 3 are left untouched, effectively only the infix is replaced.
You should be able to adapt the same basic technique for your particular scenario.
|
Javascript Create JSON Hash Array for jQuery AJAX
I am desperately trying to manually create a JSON-style array in Javascript to send over the network via jQuery's AJAX method.
```
var fieldsobj = {fields:[]}
$(".fact_field", fact).each(function(index, field){
var index = $(field).attr("data-index");
var name = $(".fact_field_label", field).text().trim();
var value = $(".fact_field_value", field).text().trim();
fieldsobj["fields"].push({index:index, name:name, value:value});
});
//...
$.ajax({
type: 'PUT',
url: url,
data: fieldsobj,
success: function(data){...
},
complete: function(){...
}
});
```
What I want is the following:
```
{fields => [{index:0, name:1, value:2},{...},{...}]}
```
What I get is this:
```
{"fields"=>{"0"=>{...}, "1"=>{..}, "2"=>{...}, "3"=>{...}}
```
What am I doing wrong?
|
When you pass an object as the `data` property, jQuery will pass it as url-encoded form parameters (e.g. `foo=bar&moo=too`) in the body. I think what you want is to pass JSON through the body.
Grab the `json2.js` [written by uncle Crockford](https://github.com/douglascrockford/JSON-js) and use `JSON.stringify` (that library provides the functionality for browsers that still don't support it):
```
$.ajax({
type: 'PUT',
url: url,
data: JSON.stringify(fieldsobj),
contentType: "application/json",
success: function(data){...
},
complete: function(){...
}
});
```
And don't forget to set the `contentType` property! On the PHP side, you can use `json_decode` to decode the raw body content:
```
$fieldsobj = json_decode(@file_get_contents('php://input'));
```
|
Serializing object that contains cyclic object value
I have an object (parse tree) that contains child nodes which are references to other nodes.
I'd like to serialize this object, using `JSON.stringify()`, but I get
>
> TypeError: cyclic object value
>
>
>
because of the constructs I mentioned.
How could I work around this? It does not matter to me whether these references to other nodes are represented or not in the serialized object.
On the other hand, removing these properties from the object when they are being created seems tedious and I wouldn't want to make changes to the parser (narcissus).
|
Use the second parameter of `stringify`, the [replacer function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#The_replacer_parameter), to exclude already serialized objects:
```
var seen = [];
JSON.stringify(obj, function(key, val) {
if (val != null && typeof val == "object") {
if (seen.indexOf(val) >= 0) {
return;
}
seen.push(val);
}
return val;
});
```
<http://jsfiddle.net/mH6cJ/38/>
As correctly pointed out in other comments, this code removes every "seen" object, not only "recursive" ones.
For example, for:
```
a = {x:1};
obj = [a, a];
```
the result will be incorrect. If your structure is like this, you might want to use Crockford's [decycle](https://github.com/douglascrockford/JSON-js/blob/master/cycle.js) or this (simpler) function which just replaces recursive references with nulls:
```
function decycle(obj, stack = []) {
if (!obj || typeof obj !== 'object')
return obj;
if (stack.includes(obj))
return null;
let s = stack.concat([obj]);
return Array.isArray(obj)
? obj.map(x => decycle(x, s))
: Object.fromEntries(
Object.entries(obj)
.map(([k, v]) => [k, decycle(v, s)]));
}
//
let a = {b: [1, 2, 3]}
a.b.push(a);
console.log(JSON.stringify(decycle(a)))
```
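On modern engines, a `WeakSet` gives the same pruning behavior with constant-time lookups. A small sketch (same caveat as above: it drops every repeated object, not only cyclic ones):

```
function safeStringify(obj) {
  const seen = new WeakSet();
  return JSON.stringify(obj, (key, val) => {
    if (val !== null && typeof val === "object") {
      if (seen.has(val)) return undefined; // already serialized: prune it
      seen.add(val);
    }
    return val;
  });
}
```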
|
react native 2 colors as background
I'm trying to make a two-color background using flex, and it seems to work well, but I want to place the button in the middle as in the photo. Where do I need to insert the button in the code?
I want it to be like this:
[](https://i.stack.imgur.com/S2t1g.jpg)
```
return (
<View style={container}>
<View style={leftContainer}>
</View>
<View style={rightContainer}>
</View>
<Button
title="button"/>
</View>
)
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
flexDirection:'row'
},
leftContainer:{
flex:1,
backgroundColor: '#ca8afa',
},
rightContainer:{
flex:1,
backgroundColor: '#96d0e3'
},
addButton: {
justifyContent: 'center',
alignItems: 'center',
position: 'absolute',
bottom: 20,
right: 20,
zIndex: 1111,
width: calcSize(192 / 2),
height: calcSize(192 / 2)
}
})
```
The problem is that the button is also laid out in the row now, not in the middle.
How can I fix it?
|
Here's a live demo of a possible solution: <https://snack.expo.io/HJFL7A3ez>
**Edit -** Adding the code here as well. The key is the absolutely positioned `buttonContainer`: it overlays the full screen on top of the two flex columns, and its `justifyContent`/`alignItems` center the button within it:
```
export default class App extends Component {
render() {
return (
<View style={styles.container}>
<View style={styles.leftContainer}>
</View>
<View style={styles.rightContainer}>
</View>
<View style={styles.buttonContainer}>
<Button style={styles.addButton} title="button"/>
</View>
</View>
)
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
flexDirection:'row'
},
leftContainer:{
flex:1,
backgroundColor: '#ca8afa',
},
rightContainer:{
flex:1,
backgroundColor: '#96d0e3'
},
buttonContainer: {
position: 'absolute',
width: '100%',
height: '100%',
justifyContent: 'center',
alignItems: 'center',
},
addButton: {
zIndex: 1111,
width: 200
}
})
```
|
Alerts in K8s for Pod failing
I want to create alerts in Grafana for my Kubernetes clusters.
I have configured Prometheus, Node Exporter, kube-state-metrics, and Alertmanager in my k8s cluster.
I want to set up alerting on unschedulable or failed pods, specifically:
1. Identifying the cause of unschedulable or failed pods
2. Generating an alert after a while
3. Creating another alert to notify us when pods fail.
Can you guide me on how to achieve this?
|
Based on the comment from [Suresh Vishnoi](https://stackoverflow.com/users/8803619/suresh-vishnoi "14,017 reputation"):
>
> it might be helpful [awesome-prometheus-alerts.grep.to/rules.html#kubernetes](https://awesome-prometheus-alerts.grep.to/rules.html#kubernetes)
>
>
>
yes, this could be very helpful. On this site you can find templates for [failed pods (not healthy)](https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-17):
>
> Pod has been in a non-ready state for longer than 15 minutes.
>
>
>
```
- alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n V
```
or for [crash looping](https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-18):
>
> Pod {{ $labels.pod }} is crash looping
>
>
>
```
- alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
```
See also [this good guide about monitoring kubernetes cluster with Prometheus](https://sysdig.com/blog/kubernetes-monitoring-prometheus/):
>
> The Kubernetes API and the [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) (which natively uses prometheus metrics) **solve part of this problem** by exposing Kubernetes internal data, such as the number of desired / running replicas in a deployment, unschedulable nodes, etc.
>
>
> Prometheus is a good fit for microservices because you just need to **expose a metrics port**, and don’t need to add too much complexity or run additional services. Often, the service itself is already presenting a HTTP interface, and the developer just needs to add an additional path like `/metrics`.
>
>
>
If it comes to unschedulable nodes, you can use the metric `kube_node_spec_unschedulable`. It is described [here](https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md) or [here](https://www.sumologic.com/blog/kubernetes-monitoring/):
`kube_node_spec_unschedulable` - Whether a node can schedule new pods or not.
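As a sketch in the same template style as the rules above (treat the threshold and labels as assumptions to adapt), an alert on that metric could look like:

```
- alert: KubernetesNodeUnschedulable
  expr: kube_node_spec_unschedulable == 1
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: Kubernetes node unschedulable (instance {{ $labels.instance }})
    description: "Node {{ $labels.node }} is marked unschedulable.\n VALUE = {{ $value }}"
```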
Look also at [this guide](https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics).
Basically, you need to find the metric you want to monitor and set it appropriately in Prometheus. Alternatively, you can use templates, as I showed at the beginning of the answer.
|
( was unexpected at this time - batch script
I'm using the batch script below and get an error
>
> ( was unexpected at this time.
>
>
>
I know that the problem is in the first line but I don't understand what is wrong.
Any ideas ?
script:
```
IF [%1]==[] (
:LOOP1
SET /P isDefault=Value Missing, do you want to use default values [1,1,10,Local Area Connection 2]?[y/n]
IF %isDefault%==y (
SET from=1
SET step=1
SET to=10
SET lan="Local Area Connection 2"
GOTO :USERLOOP
)
IF %isDefault%==n GOTO :END
GOTO :LOOP1
)
```
|
Actually, the problem is *not* on the first line.
The problem is that `cmd` does variable substitution immediately when it parses the `IF` statement, including its body. Therefore the line:
```
IF %isDefault%==y (
```
is problematic because `isDefault` isn't set when the outer `IF` statement is parsed, so it becomes:
```
IF ==y (
```
and hence you get the error about `(` being unexpected. You can get around this by enabling *delayed environment variable expansion* with `SETLOCAL ENABLEDELAYEDEXPANSION` and referencing the variable as `!isDefault!` (see `set /?` for details). Alternatively, you can rewrite your script to avoid the nested block entirely:
```
@ECHO OFF
IF NOT "%1"=="" GOTO :EOF
:LOOP1
SET /P isDefault=Value Missing, do you want to use default values [1,1,10,Local Area Connection 2]?[y/n]
IF "%isDefault%"=="y" (
SET from=1
SET step=1
SET to=10
SET lan="Local Area Connection 2"
GOTO :USERLOOP
)
IF "%isDefault%"=="n" GOTO :EOF
GOTO :LOOP1
```
(I made some other changes, such as using the built-in `:EOF` label instead of `:END`.)
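For reference, here is a minimal sketch of the delayed-expansion alternative mentioned above. Note the `!isDefault!` syntax, which is expanded at execution time rather than at parse time; the labels and `GOTO`s are omitted because jumping to a label inside a parenthesized block has pitfalls of its own:

```
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
IF "%1"=="" (
    SET /P isDefault=Value Missing, do you want to use default values [1,1,10,Local Area Connection 2]?[y/n]
    IF "!isDefault!"=="y" (
        SET from=1
        SET step=1
        SET to=10
        SET lan="Local Area Connection 2"
    )
)
```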
|
STRING\_AGG is not a recognized built-in function name
I've downloaded and installed SQL Server 2016. When I attempt to use the `STRING_AGG` function, I receive this error. Here is my code:
```
SELECT STRING_AGG(cast(FieldNumber AS VARCHAR(100)), ',')
FROM Fields
```
I've installed SQL Server 2016 and SP1. Is there anything else I need to do? Here is the feature I am trying to use: [String Agg](https://msdn.microsoft.com/en-us/library/mt790580.aspx)
|
`STRING_AGG` was not introduced in `SQL Server 2016`.
It was introduced in `SQL Server 2017`. The MSDN [link](https://msdn.microsoft.com/en-us/library/mt790580.aspx) you provided states ***THIS TOPIC APPLIES TO: SQL Server 2017***, not `SQL Server 2016`.
At the time of the question, this version was known by the code name "vNext", described as:
>
> SQL Server vNext represents a major step towards making SQL Server a
> platform that enables choices of development languages, data types,
> on-premises and in the cloud, and across operating systems by bringing
> the power of SQL Server to Linux, Linux-based Docker containers, and
> Windows. SQL Server vNext also includes the features added in SQL Server 2016 service packs
>
>
>
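If you need to stay on SQL Server 2016, a common workaround is the `FOR XML PATH` trick. Here is a sketch using the table and column names from your query (note that `STUFF` strips the leading separator, and XML-special characters in the data may need extra handling):

```
SELECT STUFF((
    SELECT ',' + CAST(FieldNumber AS VARCHAR(100))
    FROM Fields
    FOR XML PATH('')
), 1, 1, '');
```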
|
Stationarity of AR(1) process, stable filter
[This](https://en.wikipedia.org/wiki/Autoregressive_model#Example:_An_AR.281.29_process) section of the Wikipedia article about the Autoregressive Model reads:
>
> An AR(1) process is given by: $$X\_t = c + \varphi X\_{t-1}+\varepsilon\_t$$ where $\varepsilon\_t$ is a white noise process with zero mean and constant variance $\sigma\_\varepsilon^2$.
>
>
>
Then it is stated:
>
> The process is wide-sense stationary if $|\varphi|<1$ since it is obtained as the output of a stable filter whose input is white noise.
>
>
>
I do not understand this last sentence. In particular, I do not know:
- What a **stable filter** is.
- How the process can be written as the output of a stable filter.
A partial answer, e.g. only to the first bullet, is appreciated as well.
|
A **stable filter** is a filter which exists (its coefficients are summable) and is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why the word *stable*? Intuitively, simulate data from the model with $|\varphi| > 1$ and you will see the process cannot hover around some mean for all time; it explodes.
If you rewrite your model as
$$
X\_t - \mu = \varphi(X\_{t-1} - \mu) + \epsilon\_t
$$
with $c = \mu(1-\varphi)$, then you can re-write it again as
$$
(1-\varphi B) Y\_t = \epsilon\_t \tag{1}
$$
where $B$ is the backshift operator and $Y\_t = X\_t - \mu$ is the demeaned process.
A filter is a (possibly infinite) linear combination that you apply to white noise. (I take white noise to mean errors that are mutually uncorrelated with mean zero; this doesn't necessarily mean they are independent.) Filtering white noise is a natural way to form time series data. We would write filtered noise as
$$
\psi(B)\epsilon\_t = \left(\sum\_{j=-\infty}^{\infty}\psi\_j B^j\right)\epsilon\_t = \sum\_{j=-\infty}^{\infty}\psi\_j \epsilon\_{t-j},
$$
where the collection of coefficients $\{\psi\_j\}$ is our impulse response function.
This only exists (has finite expectation and variance) if the coefficients far away get small fast enough. Usually they are assumed to be absolutely summable, that is $\sum\_{j=-\infty}^{\infty} |\psi\_j| < \infty$. Showing that this is a sufficient condition is a detail you might want to fill in yourself.
Getting our filter $\psi(B)$ from our model's AR polynomial $\varphi(B)$ is not always something you can do, though. If we can divide both sides of (1) by $(1-\varphi B)$, then your model is
$$
Y\_t = \sum\_{j=-\infty}^{\infty}\psi\_j \epsilon\_{t-j},
$$
and this is just like doing simple algebra. We would do this, and then figure out what each $\psi\_j$ is in terms of $\varphi$. You can only do this, however, if the root of the complex polynomial $1 - \varphi z$ does not lie on the unit circle (otherwise the inverse series fails to converge there); equivalently, $|\varphi|\neq1$ if you write the constraint in terms of the parameter instead of the complex number $z$. If moreover $|\varphi| < 1$ (or, in terms of $z$ again, the root is outside the unit circle), then your model is causal, and you don't have to filter future noise:
$$
Y\_t = \sum\_{j=0}^{\infty}\psi\_j \epsilon\_{t-j} = \sum\_{j=0}^{\infty}\varphi^j \epsilon\_{t-j}.
$$
See how the sum representing the lag runs from $0$ to $\infty$ now?
Figuring out the coefficients of $\psi(B)$ in terms of $\varphi$ can be done by solving $(1 + \psi\_1 B + \psi\_2 B^2 + \cdots)(1 - \varphi B) = 1$, and this might be something you want to do yourself.
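If you want to check your work: matching coefficients of $B^j$ on both sides gives $\psi\_0 = 1$ and $\psi\_j - \varphi\psi\_{j-1} = 0$ for $j \ge 1$, hence $\psi\_j = \varphi^j$. Absolute summability is then just the geometric series:
$$
\sum\_{j=0}^{\infty}|\psi\_j| = \sum\_{j=0}^{\infty}|\varphi|^j = \frac{1}{1-|\varphi|} < \infty \quad\text{whenever } |\varphi| < 1.
$$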
|
How to set columns programmatically to have dynamic width in Android?
I have created a table like this:
```
TableLayout table = new TableLayout(getApplicationContext());
table.setColumnStretchable(1, true);
table.setColumnStretchable(2, true);
tableRow = new TableRow[20];
tableRow[i] = new TableRow(getApplicationContext()); // i is the row index from the enclosing loop
tableRow[i].setGravity(Gravity.LEFT);
statistics = new TextView[4];
for (int j = 0; j < 4; j++) {
    statistics[j] = new TextView(getApplicationContext());
    statistics[j].setText("Text");
    statistics[j].setGravity(Gravity.LEFT);
    tableRow[i].addView(statistics[j]);
}
table.addView(tableRow[i]);
```
The result of this code is:
[](https://i.stack.imgur.com/jyGlB.png)
And I want to achieve this:
[](https://i.stack.imgur.com/HZX9f.png)
How is this possible?
|
With this
```
<TableLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
    android:stretchColumns="*">
<TableRow
android:layout_weight="1"
android:gravity="center">
<ImageButton
android:id="@+id/btn1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:focusable="true"
android:scaleType="center"
android:src="@drawable/ic_1"
android:background="@null" />
<ImageButton
android:id="@+id/btn2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:focusable="true"
android:scaleType="center"
android:src="@drawable/ic_2"
android:background="@null"/>
<ImageButton
android:id="@+id/btn3"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:focusable="true"
android:scaleType="center"
android:src="@drawable/ic_3"
android:background="@null" />
</TableRow>
</TableLayout>
```
The three buttons appear aligned. I think you need `gravity="center"` and/or `android:stretchColumns="*"` in your code.
**UPDATE**
Try with this:
```
TableLayout table = new TableLayout(getApplicationContext());
table.setColumnStretchable(1, true);
table.setColumnStretchable(2, true);
table.setStretchAllColumns(true);
tableRow = new TableRow[20];
tableRow[i] = new TableRow(getApplicationContext());
tableRow[i].setGravity(Gravity.CENTER);
statistics = new TextView[4];
for (int j = 0; j < 4; j++) {
    statistics[j] = new TextView(getApplicationContext());
    statistics[j].setText("Text");
    statistics[j].setGravity(Gravity.LEFT);
    tableRow[i].addView(statistics[j]);
}
table.addView(tableRow[i]);
```
**UPDATE**
I did a mock of your problem and use this
```
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
TableLayout table = new TableLayout(getApplicationContext());
table.setStretchAllColumns(true);
TableRow[] tableRow = new TableRow[20]; // allocate the array once, outside the loop
for (int i = 0; i < 20; i++) {
tableRow[i] = new TableRow(getApplicationContext());
tableRow[i].setGravity(Gravity.CENTER);
TextView pos = new TextView(getApplicationContext());
pos.setGravity(Gravity.LEFT);
pos.setText(String.valueOf(i) + ". " + getName(i));
TextView a = new TextView(getApplicationContext());
a.setGravity(Gravity.LEFT);
a.setText("2/9");
TextView points = new TextView(getApplicationContext());
points.setGravity(Gravity.LEFT);
points.setText("2/9");
tableRow[i].addView(pos);
tableRow[i].addView(a);
tableRow[i].addView(points);
table.addView(tableRow[i]);
}
RelativeLayout container = (RelativeLayout) findViewById(R.id.container);
container.addView(table);
}
private String getName(int i) {
if (i == 2) {
return "Recooooooooooooooord";
} else if (i == 3) {
return "Recooooooord";
}
return "Fran";
}
```
The result is like the one you want; you can see it [here](https://i.stack.imgur.com/JSDq7.jpg).
|