glGetShaderInfoLog to std::string?
This is a snippet from OpenGL Super Bible 7th edition:
```
GLint log_length;
glGetShaderiv(fs, GL_INFO_LOG_LENGTH, &log_length);
std::string str;
str.reserve(log_length);
glGetShaderInfoLog(fs, log_length, NULL, str.c_str());
```
At first glance it seemed weird to me to pass `str.c_str()` as an argument to this function, and of course clang immediately protested: `cannot initialize a parameter of type 'GLchar *' (aka 'char *') with an rvalue of type 'const char *'`.
So I tried to investigate and changed `str.c_str()` to `str.data()`, which should provide a pointer to the internal data array, but this produces the same error message. **Edit:** `c_str()` and `data()` are actually the same thing (in C++11 at least), so it doesn't matter which one we use.
I did some searches, but didn't find a solution (although I'm quite new to C++ and some things are still not obvious to me).
Is this an error in the book? And is there a way to make it work with std::string?
|
Both `string::c_str()` and `string::data()` return a `const char*` until C++17. Since C++17, `string::data()` has an overload that will return a `char*`. This means this code snippet is invalid.
Edit: An easier and equally efficient solution is to use `&str[0]`. The subscript operator returns a `char&`.
If you cannot use C++17, you could use a `std::vector<char>`, which does have a non-const `data()`, but this will cause a copy when constructing a `std::string`:
```
// ...
std::vector<char> v(log_length);
glGetShaderInfoLog(fs, log_length, NULL, v.data());
std::string s(begin(v), end(v));
```
Also, `reserve()` isn't sufficient, because the actual `size()` of the string is not changed — you would need `resize()`. So even if `c_str()` returned a `char*`, the snippet would still cause undefined behavior.
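For reference, a minimal pre-C++17 sketch that writes the log directly into the string (note that the string is *sized*, not merely reserved; variable names follow the snippet from the question):
```
GLint log_length = 0;
glGetShaderiv(fs, GL_INFO_LOG_LENGTH, &log_length);

std::string str(log_length, '\0');                  // size the string, don't just reserve()
glGetShaderInfoLog(fs, log_length, NULL, &str[0]);  // &str[0] yields a char*
// With C++17 you can pass str.data() instead of &str[0].
```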
|
Center an UIImageView on the screen when zooming out
I have a UIImageView inside a UIScrollView. I want the user to be able to zoom and pan the image.
This is my working code:
```
//img is the UIImageView
//scroller is the UIScrollView

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return img;
}

- (void)viewDidLoad {
    [super viewDidLoad];

    UIImage *image = [UIImage imageNamed:@"map_screen.png"];
    img = [[UIImageView alloc] initWithImage:image];

    scroller.delegate = self;
    scroller.autoresizesSubviews = YES;
    scroller.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
    scroller.contentSize = img.frame.size;
    scroller.scrollEnabled = YES;
    scroller.directionalLockEnabled = NO;
    scroller.userInteractionEnabled = YES;

    CGSize ivsize = img.frame.size;
    CGSize ssize = scroller.frame.size;

    float scalex = ssize.width / ivsize.width;
    float scaley = ssize.height / ivsize.height;

    scroller.minimumZoomScale = fmin(1.0, fmax(scaley, scalex));
    scroller.zoomScale = fmin(1.0, fmax(scalex, scaley));

    [scroller addSubview:img];
    img.userInteractionEnabled = YES;
}
```
Everything works, but there is a problem: the minimum zoom is based on the height of the screen.
My image is wider than it is tall, so I want the minimum zoom to be based on the width.
If I write
```
scroller.minimumZoomScale = fmin(1.0, scalex);
```
it works, but when the user zooms out, the image is not at the center of the screen, but at the top.
I've tried something like this
```
CGPoint scrollCenter = [scroller center];
[img setCenter:CGPointMake(scrollCenter.x, scrollCenter.y)];
```
or
```
img.center = scroller.center;
```
but with this solution, the image is not completely scrollable, and if I zoom out, it stays at the top of the screen again and is not completely visible!
What can I do to fix it?
|
You have to do it manually while the zooming is in progress using the scrollViewDidZoom delegate function... something like
```
- (void)scrollViewDidZoom:(UIScrollView *)scrollView
{
    // center the image as it becomes smaller than the size of the screen
    CGSize boundsSize = scrollView.bounds.size;
    CGRect frameToCenter = imageView.frame;

    // center horizontally
    if (frameToCenter.size.width < boundsSize.width)
    {
        frameToCenter.origin.x = (boundsSize.width - frameToCenter.size.width) / 2;
    } else {
        frameToCenter.origin.x = 0;
    }

    // center vertically
    if (frameToCenter.size.height < boundsSize.height)
    {
        frameToCenter.origin.y = (boundsSize.height - frameToCenter.size.height) / 2;
    } else {
        frameToCenter.origin.y = 0;
    }

    imageView.frame = frameToCenter;
}
```
|
DDD – A rookie’s unanswered questions
I’ve decided to use DDD in one of my pet projects to find out what the fuss is about! Let me start off by saying that this (DDD) is the way software SHOULD be written, I’ve seen some strange patterns and conventions but DDD is really the most natural way of writing code.
Interesting points that I noted started with DDD:
- I can now dream about code without being reminded of constraints set upon us by databases
- Persisting data now takes the backseat when developing a complex system (This might come back to bite me)
- I feel a lot closer to the business and business problem at hand
- Discussions between you and the client is now done on their neutral ground
- Discussions between developers are in line with the code
- I learned to love refactoring (You better make peace with that right now)
Question 1: Is an entity allowed to make repository calls? For instance, if you need to validate that a certain field is unique in the system, should you write a domain service or can the entity make a call to the repo?
Question 2: What is the best practice for validating a new entity? Should you have a Validate() function, or can the validation be done in the constructor?
Your thoughts on this?
|
Yes, you can call repositories from within entities. But should you do it? Probably not: it often gives you more technical problems than it benefits you, and it also risks making performance tuning very difficult (mostly because it invites bad design).
The validation example also does not seem to justify calling repositories from your entities. Also your example seems to imply that your object can exist in an invalid state, this should be avoided, your domain object should always be in a valid state.
Validation should already start at domain object creation time. If the creation logic is simple, you can keep this kind of logic in the objects constructor. But still the creation logic is kind of separate from the rest of the logic as it’s only needed during creation time and not during the rest of the lifecycle of an object. Thus when creation logic gets complex enough, like calling repositories, you should spinoff this separate concern to another object, a [factory](http://en.wikipedia.org/wiki/Factory_method_pattern) object.
You clarified that you use frontend technology that creates objects for you. In that case, I would say that those objects are just application value objects that can be used as input to your object factories (or constructors for that matter). Application validation of the fields, though, should probably stay separate from your domain.
In general you should try to avoid giving domain objects setters for all properties. Instead you should provide domain actions on the object. Thus instead of having a method like `setOrderStatus(Order.PROCESSED)` you should have a method `processOrder()`. This way each method will always result in a valid state (or else throw an exception). Note that I write *in general*, as for trivial fields that have no connection to domain logic and are just representing data, you can still use setters.
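As a small illustrative sketch (the `Order` class and its statuses are made up for the example, not taken from your project):
```
// Anemic style: outside code pushes state in via a setter
// order.setOrderStatus(OrderStatus.PROCESSED);

// Domain-action style: the entity owns the transition and its invariants
public class Order {
    public enum OrderStatus { NEW, PROCESSED }

    private OrderStatus status = OrderStatus.NEW;

    public void process() {
        if (status != OrderStatus.NEW) {
            throw new IllegalStateException("Order has already been processed");
        }
        status = OrderStatus.PROCESSED;
    }

    public OrderStatus getStatus() {
        return status;
    }
}
```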
For simple data centric applications this kind of approach might give too much overhead though. But then again, that’s not where DDD is meant to be used.
|
caliburn.micro serialization issue when implementing PropertyChangedBase
I'm developing a client/server data driven application using caliburn.micro for frontend and Asp.net WebApi 2 for backend.
```
public class Person
{
public int Id {get;set;}
public string FirstName{get;set;}
...
}
```
The application contains a class called "Person". A "Person" object is serialized (JSON) and moved back and forth between client and server using a simple REST protocol. The solution works fine without any problem.
Problem:
I have set a parent class "PropertyChangedBase" for "Person" in order to implement NotifyOfPropertyChanged().
```
public class Person : PropertyChangedBase
{
    public int Id { get; set; }

    private string _firstName;
    public string FirstName
    {
        get { return _firstName; }
        set
        {
            _firstName = value;
            NotifyOfPropertyChange(() => FirstName);
        }
    }
    ...
}
```
But this time the properties of the "Person" class have NULL values at the receiving end.
I guess there is a problem with serialization / deserialization.
This only happens when inheriting from PropertyChangedBase.
Can anyone help me to overcome this issue?
|
You need to add the [`[DataContract]`](https://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractattribute%28v=vs.110%29.aspx) attribute to your `Person` class and the [`[DataMember]`](https://msdn.microsoft.com/en-us/library/System.Runtime.Serialization.DataMemberAttribute%28v=vs.110%29.aspx) attribute to every property and field you wish to serialize:
```
[DataContract]
public class Person : PropertyChangedBase
{
    [DataMember]
    public int Id { get; set; }

    private string _firstName;
    [DataMember]
    public string FirstName { get; set; }
}
```
You need to do this because the [caliburn.micro](https://github.com/Caliburn-Micro/Caliburn.Micro/) base class [`PropertyChangedBase`](https://github.com/Caliburn-Micro/Caliburn.Micro/blob/master/src/Caliburn.Micro/PropertyChangedBase.cs) has the `[DataContract]` attribute:
```
namespace Caliburn.Micro {
    [DataContract]
    public class PropertyChangedBase : INotifyPropertyChangedEx
    {
    }
}
```
But why should this be necessary? In theory, the presence of the [`DataContractAttribute`](https://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractattribute%28v=vs.110%29.aspx) applied to the base class *should not* affect your derived `Person` class, because [`DataContractAttribute` sets `AttributeUsageAttribute.Inherited = false`](https://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractattribute%28v=vs.110%29.aspx):
```
[AttributeUsageAttribute(AttributeTargets.Class|AttributeTargets.Struct|AttributeTargets.Enum, Inherited = false,
AllowMultiple = false)]
public sealed class DataContractAttribute : Attribute
```
However, [`HttpClientExtensions.PostAsJsonAsync`](https://msdn.microsoft.com/en-us/library/jj614378%28v=vs.118%29.aspx) uses the default instance of [`JsonMediaTypeFormatter`](https://msdn.microsoft.com/en-us/library/system.net.http.formatting.jsonmediatypeformatter%28v=vs.118%29.aspx), which [by default uses the Json.NET library to perform serialization.](http://www.asp.net/web-api/overview/formats-and-model-binding/json-and-xml-serialization) And Json.NET does not respect the `Inherited = false` attribute of `DataContractAttribute`, as is explained [here](https://json.codeplex.com/discussions/357850)
>
> [Json.NET] detects the DataContractAttribute on the base class and assumes opt-in serialization.
>
>
>
(For confirmation see [Question about inheritance behavior of DataContract #872](https://github.com/JamesNK/Newtonsoft.Json/issues/872) which confirms this behavior of Json.NET continues to be as intended.)
So you need to add those attributes after all.
Alternatively, if you do not want to have to apply data contract attributes all over your derived classes, you could switch to `DataContractJsonSerializer` following the instructions here: [JSON and XML Serialization in ASP.NET Web API](http://www.asp.net/web-api/overview/formats-and-model-binding/json-and-xml-serialization):
>
> If you prefer, you can configure the **JsonMediaTypeFormatter** class to use the **DataContractJsonSerializer** instead of Json.NET. To do so, set the **UseDataContractJsonSerializer** property to **true**:
>
>
>
> ```
> var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
> json.UseDataContractJsonSerializer = true;
>
> ```
>
>
|
Datapoint to show X and Y value on chart
This should be easy, I just can't find it:
I find a datapoint in a series given some rules, and I then want to show that point on the chart; at the moment it shows only the Y value. I need it to show both X and Y values, something like **-1506;409**
(X axis value = -1506, Y axis value = 409)
To display the datapoint I have at the moment:
```
datapoint.Font = new System.Drawing.Font("Serif", 7);
datapoint.LabelFormat = "#,#";
datapoint.IsValueShownAsLabel = true;
```
Any ideas?
Edit:
datapoint, of course is:
```
Datapoint datapoint
```
|
A nice overview of Labels in Chart controls is here on [MSDN](https://msdn.microsoft.com/en-us/library/dd456628.aspx).
Here is an example at work; the first line labels each point in a whole series, the second line only one point in another series:
```
chart1.Series[3].Label = "Y = #VALY\nX = #VALX";
chart1.Series[1].Points[5].Label = "Y = #VALY\nX = #VALX";
```
[](https://i.stack.imgur.com/9Rx1w.png)
A less crowded alternative may be setting tooltips, which only show up when the mouse is over the datapoint:
```
chart1.Series[2].ToolTip = "Y = #VALY\nX = #VALX";
```
For more ways to include data values do look into the chart [Keywords](https://msdn.microsoft.com/en-us/library/dd456687.aspx)!!
|
Run tcsh with arbitrary startup script
I want to create a new instance of `tcsh` and source an arbitrary script, all in one step. The problem is that when I use the `-c` option, the shell instance closes as soon as the script is complete. So, in the following **trivial example** the `pushd` command completes successfully but then the shell exits:
```
tcsh -c "pushd ~/some/directory/of/interest"
```
**How can I source a script which modifies the environment and then work interactively within that environment?** This is most useful when used in conjunction with programs like `ssh` or `screen`, as in the following:
```
ssh -t user@host 'tcsh -c "source ~/test_environment.csh"'
```
|
Our solution will involve two steps.
1. Pass an environment variable containing the path to the script you want to source to the new `tcsh` instance.
2. Have `tcsh` source the script pointed at by this environment variable.
---
For step 1, `ssh` will run the command you specify in your default shell (the one in the destination server's `/etc/passwd`), so I'll give you several solutions for this.
- If destination shell is `sh`/`bash`: Connect to the ssh server with the command: `ssh -t user@host 'export SOURCESCRIPT=/tmp/tmp.sh; exec /usr/bin/tcsh'`
- If destination shell is `csh`/`tcsh`: Connect to the ssh server with the command: `ssh -t user@host 'setenv SOURCESCRIPT /tmp/tmp.sh; exec /usr/bin/tcsh'`
- If you can modify the destination's ssh server config, add/modify the `AcceptEnv` option in `/etc/ssh/sshd_config` to allow the `SOURCESCRIPT` environment variable to be passed (e.g. `AcceptEnv SOURCESCRIPT`), set `SOURCESCRIPT` in the local environment and connect with the command: `ssh -t -o SendEnv=SOURCESCRIPT user@host 'exec /usr/bin/tcsh'`
---
For step 2, we modify `~/.tcshrc` to add the following:
```
if $?SOURCESCRIPT then
source $SOURCESCRIPT
endif
```
|
How to make a Hook and Trampoline function in one for WinAPI hooking
So I have been learning about the concept of hooking and using trampolines in order to bypass/execute data in a WinAPI hook function (In a different executable file, using DLL injection). So far I know how to make it (the trampoline and hook) using a mixture of assembly and C, but I can't seem to do it with just using C, as I seem to be missing something. I'd appreciate if someone could tell me what I'm doing wrong and how to fix it up.
Right now my code:
```
#include <Windows.h>
unsigned char* address = 0;
__declspec(naked) int __stdcall MessageBoxAHookTrampoline(HWND Window, char* Message, char* Title, int Type) {
__asm
{
push ebp
mov ebp, esp
mov eax, address
add eax, 5
jmp eax
}
}
int __stdcall MessageBoxAHook(HWND Window, char* Message, char* Title, int Type) {
wchar_t* WMessage = L"Hooked!";
wchar_t* WTitle = L"Success!";
MessageBoxW(0, WMessage, WTitle, 0);
return MessageBoxAHookTrampoline(Window, Message, Title, Type);
}
unsigned long __stdcall Thread(void* Context) {
address = (unsigned char*)GetProcAddress(LoadLibraryA("user32"), "MessageBoxA");
ULONG OP = 0;
if (VirtualProtect(address, 1, PAGE_EXECUTE_READWRITE, &OP)) {
memset(address, 0x90, 5);
*address = 0xE9;
*(unsigned long*)(address + 1) = (unsigned long)MessageBoxAHook - (unsigned long)address - 5;
}
else {
MessageBoxA(0, "Failed to change protection", "RIP", 0);
}
return 1;
}
// Entry point.
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpReserved) {
if (fdwReason == DLL_PROCESS_ATTACH) {
CreateThread(0, 0, Thread, 0, 0, 0);
}
else if (fdwReason == DLL_PROCESS_DETACH) {
}
return true;
}
```
>
> So question is: How would I make a function say `InstallHook` that will install the hook and return a trampoline so I can use it easily?
> Function prototype probably would be: `void* InstallHook(void* originalFunc, void* targetFunc, int jumpsize)`, or so I've understood reading online, but unsure what `jumpsize` would be used for.
>
>
>
So far I know that the first 5 bytes must be preserved and restored, and then there's a jump to the address of the original hooked function. So I'd have to use malloc to allocate memory, memcpy to copy bytes over, the `0xE9` is the value of a jump instruction and such, but I just don't know how to implement it using just pure C. [I figure it would be something similar to the code in this question.](https://stackoverflow.com/questions/8099660/writing-a-trampoline-function) So how can I write a hook function that returns a trampoline using pure C for WinAPI functions?
|
If I understood the question correctly, you want to avoid "hard-coding" the trampoline function in assembly, presumably so you could have multiple trampolines in use at the same time without duplicating the code. You can achieve this using `VirtualAlloc` (`malloc` won't work since the returned memory won't be executable).
I wrote this from memory without access to a compiler so it might have some minor bugs, but the general idea is here. Normally you would also use `VirtualProtect` to change the page permissions to `r-x` instead of `rwx` once you're done modifying it, but I've left that out for the sake of simplicity:
```
void *CreateTrampoline(void *originalFunc)
{
    /* Allocate the trampoline function */
    uint8_t *trampoline = VirtualAlloc(
        NULL,
        5 + 5, /* 5 for the prologue, 5 for the JMP */
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE); /* Make trampoline executable */

    /* Copy the original function's prologue */
    memcpy(trampoline, originalFunc, 5);

    /* JMP rel/32 opcode */
    trampoline[5] = 0xE9;

    /* JMP rel/32 operand */
    uint32_t jmpDest = (uint32_t)originalFunc + 5; /* Skip original prologue */
    uint32_t jmpSrc = (uint32_t)trampoline + 10;   /* Starting after the JMP */
    uint32_t delta = jmpDest - jmpSrc;
    memcpy(trampoline + 6, &delta, 4);

    return trampoline;
}
```
Your `InstallHook` function would then just call `CreateTrampoline` to create a trampoline, then patch the first 5 bytes of the original function with a `JMP rel/32` to your hook.
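For illustration, here is a minimal sketch of what such an `InstallHook` could look like in pure C (untested; it assumes the 5-byte hot-patchable prologue discussed below, reuses `CreateTrampoline` from above, and uses `uint32_t` from `<stdint.h>`):
```
void *InstallHook(void *originalFunc, void *hookFunc)
{
    /* Build the trampoline first so the original prologue is preserved */
    void *trampoline = CreateTrampoline(originalFunc);

    DWORD oldProtect;
    VirtualProtect(originalFunc, 5, PAGE_EXECUTE_READWRITE, &oldProtect);

    /* Patch the first 5 bytes of the original function with JMP rel/32 to the hook */
    uint8_t *p = (uint8_t *)originalFunc;
    p[0] = 0xE9;
    uint32_t delta = (uint32_t)hookFunc - ((uint32_t)originalFunc + 5);
    memcpy(p + 1, &delta, 4);

    VirtualProtect(originalFunc, 5, oldProtect, &oldProtect);

    /* The caller casts this to the original function's type and calls it
       instead of the (now patched) original */
    return trampoline;
}
```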
Be warned, this only works on WinAPI functions, because Microsoft requires that they have a 5-byte prologue to enable hot-patching (which is what you're doing here). Normal functions do not have this requirement -- usually they only start with `push ebp; mov ebp, esp` which is only 3 bytes (and sometimes not even that, if the compiler decides to optimize it out).
Edit: here's how the math works:
```
_______________delta______________
| |
trampoline | originalFunc |
| | | |
v | v v
[prologue][jmp delta] [prologue][rest of func]
|________||_________| |________|
5 + 5 5
```
|
Performance benefits of using a List vs Map in Java
I have been wondering what the actual benefit of using Lists is. Please note that my question is not "when to use what" but rather: is there any impact on performance if I insist on having maps as my primary objects?
Obviously, if my aim is to just work on values,
>
> UPDATE after not being clear with my intent at first glance: I meant
> if I just want to filter a list of [8000] people whose age is > 30, I
> would use a list... But can I use a map instead and have it be used
> instead - my question is - will there be any performance hindrance?
>
>
>
I would also use a List. But do we get any performance boost, and if yes, how can I see it myself?
For example, if I take
```
List <Integer> listOfInt = new ArrayList<>(map.values());
```
It would make sense to use Map as my global object and serve lists based on it.
I know about the O(1) runtime for key/value insert or remove in Maps, but then why are Lists preferred in most places I have seen?
|
>
> my question is not "when to use what"
>
>
>
but it should be. List and Map have different uses. A List is - well - a list of values without any explicit key. An item in a list is designated by its position.
>
> obviously if my aim is to just work on values I would also use List.
>
>
>
yes, that's correct
>
> But do we get any performance boost
>
>
>
Please note, the Map is not a simple structure. For each item in a map, an "Entry" object is created with references to the key and the value object, a hash array is created, etc., so using a map you definitely use more memory and code. For simpler cases the performance difference is negligible.
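If you want to see it yourself, a rough (and admittedly naive — no JIT warmup, single run) micro-benchmark sketch along these lines can show the difference; the sizes and names are just illustrative:
```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ListVsMapIteration {
    public static void main(String[] args) {
        int n = 8000;
        List<Integer> ages = new ArrayList<>();
        Map<Integer, Integer> agesById = new HashMap<>();
        for (int i = 0; i < n; i++) {
            ages.add(i % 80);
            agesById.put(i, i % 80);
        }

        long t0 = System.nanoTime();
        int fromList = 0;
        for (int age : ages) {
            if (age > 30) fromList++;           // filter directly over the list
        }
        long t1 = System.nanoTime();
        int fromMap = 0;
        for (int age : agesById.values()) {
            if (age > 30) fromMap++;            // same filter over the map's values view
        }
        long t2 = System.nanoTime();

        System.out.println("list:       " + (t1 - t0) + " ns (" + fromList + " matches)");
        System.out.println("map.values: " + (t2 - t1) + " ns (" + fromMap + " matches)");
    }
}
```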
|
Reload Adsense ads, or have to use DFP?
I've found a number of sites that explain how to make a DFP ad reload/refresh after 30-120 seconds, but none of them explain if it's possible to do with an Adsense ad.
I've tried DFP - it's very confusing compared to Adsense.
My current Adsense code:
```
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<!-- MYSITE- Responsive - #1 -->
<ins class="adsbygoogle"
style="display:block"
data-ad-client="ca-pub-123456789..."
data-ad-slot="123456789"
data-ad-format="auto"></ins>
<script>
(adsbygoogle = window.adsbygoogle || []).push({});
</script>
```
|
AdSense's [TOS](https://www.google.com/adsense/localized-terms) disallow the sort of automatic refreshing you're referring to. This does not mean that you *cannot* automatically refresh AdSense Tags programmatically. It does mean that you will almost certainly get banned for doing so.
On the other hand, this behavior is allowed when using AdSense + Doubleclick for publishers (DFP). [This documentation](https://support.google.com/dfp_sb/answer/3058726?hl=en) should point you in the right direction, and you'd want to call something like this to refresh your ads:
`googletag.pubads().refresh();`
The following tutorials should give you enough information to figure out how to use AdSense + DFP:
1. [General Tutorial #1](http://www.labnol.org/internet/google-dfp-tutorial/14099/) (Outdated, but still useful)
2. [General Tutorial #2](http://www.dfptutorial.com/) (Also outdated, but useful nonetheless)
3. [Linking AdSense + DFP](https://support.google.com/dfp_premium/answer/188529?hl=en) (This doesn't work if you're using DFP for Small Business; instead, you'll just want to add your AdSense embed code as a third-party creative)
If you are adamant about automatically refreshing ads without DFP, you could do the following wrapped in a timer function:
`document.getElementById('FrameID').contentWindow.location.reload(true);`
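For example, a minimal sketch of that timer wrapper (`FrameID` is a placeholder for the id of the iframe AdSense rendered, and the 60-second interval is arbitrary):
```
setInterval(function () {
    // 'FrameID' stands in for the actual iframe id on your page
    document.getElementById('FrameID').contentWindow.location.reload(true);
}, 60000); // every 60 seconds
```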
Be advised: **This will get you banned from AdSense if you are not using the service through DFP**
|
Ruby `send` vs `call` method
I'm reading an article about Ruby 1.9. There are a lot of uses of the `call` method with `Object`.
But with a recent version of Ruby, I get this:
```
BasicObject.methods.include? :send # => true
BasicObject.methods.include? :call # => false
Object.methods.include? :call # => false
def foo
puts 'text'
end
Object.send :foo # => text
Object.call :foo # => NoMethodError: undefined method `call' for Object:Class
```
I think that in some version of Ruby (probably 1.9) the method was renamed, but I'm not sure. Please clarify.
|
To begin with, `send` and `call` are two very different methods.
In ruby, the concept of object orientation takes its roots from Smalltalk. Basically, when you call a method, you are *sending* that object a *message*. So, it makes sense that when you want to dynamically call a method on an object, the method you call is [`send`](http://ruby-doc.org/core-2.3.0/Object.html#method-i-send). This method has existed in ruby since at least 1.8.7.
In ruby, we also have a concept of "blocks". Blocks are the `do...end` things attached to the end of method calls. Blocks can be traditionally `yield`ed to; or, it is entirely possible to create an object out of a block (a [`Proc`](http://ruby-doc.org/core-2.2.0/Proc.html)), and pass that around. In order to execute the block, you can call [`call`](http://ruby-doc.org/core-2.2.0/Proc.html#method-i-call) on the block.
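For example, a small sketch to illustrate the difference:
```
greeter = Proc.new { |name| puts "Hello, #{name}!" }
greeter.call("world")    # call executes a block/Proc/lambda => prints "Hello, world!"

"hello".send(:upcase)    # send dynamically invokes a method => "HELLO"
```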
`call` has never been defined on `Object`, whereas `send` is defined on everything.
*(note: for some reason, `call` doesn't seem to have documentation in the 2.3.0 documentation; however, it still exists and does the same thing from 2.2.0, so I linked that one instead.)*
|
Testimonial section with HTML tags
I created this testimonial on [CodePen](http://codepen.io/JGallardo/pen/zGyid)

I am a bit skeptical about a few things in my HTML structure. For example, I typically see testimonials enclosed in `<div>`s with custom classes. In my case I used a `<blockquote>` but had to overwrite a lot of rules.
Also wondering if enclosing the author in `<strong>` tags was wise.
**HTML**
```
<div class="wrapper">
<blockquote>
“Such cool. Much awesome. WOW”
</blockquote>
<p class="author">
–
<strong>Doge</strong>,
<a href="#">The Moon</a>
</p>
</div>
```
**CSS**
```
/* == resets == */
body { margin: 0; padding: 0; }
/* == project == */
body {
background: none repeat scroll 0% 0% rgb(240, 240, 240);
color: rgb(102, 102, 102);
font-family: "Helvetica Neue", Helvetica, Arial, Sans-serif;
font-size: 22px;
}
.wrapper {
width: 600px;
margin: 24px auto;
}
blockquote {
background-color: rgb(255, 255, 255);
border-radius: 6px;
font-family: Georgia, serif;
font-size: 22px;
line-height: 1.4;
margin: 0;
padding: 17px;
}
p.author {
background-color: transparent;
font-weight: 500;
font-size: 22px;
line-height:22px;
margin: 24px 0 0 18px;
}
strong {
color: rgb(68, 68, 68);
}
a {
color: rgb(64, 131, 169);
text-decoration: none;
}
```
|
- I would say that for it to be semantically accurate, the `author` should be a part of the `blockquote`, perhaps using a `footer`.
- You should include a `cite` attribute if the quote has a source.
- The quote content should be inside a paragraph element.
- I guess now you don't need the wrapper any more.
```
<blockquote cite="http://knowyourmeme.com/memes/doge">
<p>“Such cool. Much awesome. WOW”</p>
<footer class="author">
–
<strong>Doge</strong>,
<a href="#">The Moon</a>
</footer>
</blockquote>
```
- To make it look the same I had to change these bits of CSS:
```
blockquote {
font-size: 22px;
margin: 0;
}
blockquote p {
background-color: rgb(255, 255, 255);
border-radius: 6px;
font-family: Georgia, serif;
line-height: 1.4;
padding: 17px;
}
footer.author {
background-color: transparent;
font-weight: 500;
font-size: 22px;
line-height:22px;
margin: 24px 0 0 18px;
}
```
- [Here's the result](http://codepen.io/anon/pen/ufsxA).
|
How is it that missing 0s are automatically added in IP addresses? (`ping 10.5` equivalent to `ping 10.0.0.5`)
I accidentally typed
```
ssh 10.0.05
```
instead of
```
ssh 10.0.0.5
```
and was very surprised that it worked. I also tried `10.005` and `10.5` and those also expanded automatically to `10.0.0.5`. I also tried `192.168.1` and that expanded to `192.168.0.1`. All of this also worked with `ping` rather than `ssh`, so I suspect it would work for many other commands that connect to an arbitrary user-supplied host.
Why does this work? Is this behavior documented somewhere? Is this behavior part of POSIX or something? Or is it just some weird implementation? (Using Ubuntu 13.10 for what it's worth.)
|
Quoting from [`man 3 inet_aton`](http://man7.org/linux/man-pages/man3/inet_aton.3.html#DESCRIPTION):
```
a.b.c.d Each of the four numeric parts specifies a byte of the
address; the bytes are assigned in left-to-right order to
produce the binary address.
a.b.c Parts a and b specify the first two bytes of the binary
address. Part c is interpreted as a 16-bit value that
defines the rightmost two bytes of the binary address.
This notation is suitable for specifying (outmoded) Class B
network addresses.
a.b Part a specifies the first byte of the binary address.
Part b is interpreted as a 24-bit value that defines the
rightmost three bytes of the binary address. This notation
is suitable for specifying (outmoded) Class C network
addresses.
a The value a is interpreted as a 32-bit value that is stored
directly into the binary address without any byte
rearrangement.
In all of the above forms, components of the dotted address can be
specified in decimal, octal (with a leading 0), or hexadecimal (with
a leading 0X). Addresses in any of these forms are collectively
termed IPV4 numbers-and-dots notation. The form that uses exactly
four decimal numbers is referred to as IPv4 dotted-decimal notation
(or sometimes: IPv4 dotted-quad notation).
```
---
For fun, try this:
```
$ nslookup unix.stackexchange.com
Non-authoritative answer:
Name: unix.stackexchange.com
Address: 198.252.206.140
$ echo $(( (198 << 24) | (252 << 16) | (206 << 8) | 140 ))
3338456716
$ ping 3338456716 # What? What did we ping just now?
PING stackoverflow.com (198.252.206.140): 48 data bytes
64 bytes from 198.252.206.140: icmp_seq=0 ttl=52 time=75.320 ms
64 bytes from 198.252.206.140: icmp_seq=1 ttl=52 time=76.966 ms
64 bytes from 198.252.206.140: icmp_seq=2 ttl=52 time=75.474 ms
```
|
Remove request parameter from query string
I have a query string that could be:
```
/fr/hello?language=en
```
or
```
/fr/welcome?param1=222&param2=aloa&language=en
```
or
```
/it/welcome?param1=222&language=en&param2=aa
```
I would like to remove from each query string the parameter language with its value, therefore the results would be:
```
/fr/hello
```
and
```
/fr/welcome?param1=222&param2=aloa
```
and
```
/it/welcome?param1=222&param2=aa
```
EDIT: The length of the value of the parameter could be more than 2
Does anybody know a good regex to use in `String.replaceAll([regex],[replace])`?
|
Use the regex below and replace the matched strings with an empty string:
```
[&?]language.*?(?=&|\?|$)
```
[DEMO](http://regex101.com/r/nJ8cJ8/9)
Example code:
```
String s1 = "/fr/welcome?param1=222&param2=aloa&language=en";
String s2 = "/fr/welcome?language=en";
String s3 = "/fr/welcome?param1=222&language=en&param2=aa";
String m1 = s1.replaceAll("[&?]language.*?(?=&|\\?|$)", "");
String m2 = s2.replaceAll("[&?]language.*?(?=&|\\?|$)", "");
String m3 = s3.replaceAll("[&?]language.*?(?=&|\\?|$)", "");
System.out.println(m1);
System.out.println(m2);
System.out.println(m3);
```
Output:
```
/fr/welcome?param1=222&param2=aloa
/fr/welcome
/fr/welcome?param1=222&param2=aa
```
[IDEONE 1](http://ideone.com/tKPk0B) or [IDEONE 2](http://ideone.com/e3d6vF)
|
Learning Weka on the Command Line
I am fairly new to Weka and even more new to Weka on the command line. I find the documentation is poor and I am struggling to figure out a few things. For example, I want to take two .arff files, one for training, one for testing, and get an output of predictions for the missing labels in the test data.
How can I do this?
I have this code as a starting block
```
java -classpath weka.jar weka.classifiers.meta.FilteredClassifier
-t "training_file_with_missing_values.arff"
-T "test_file_with_missing_values.arff"
-F weka.filters.unsupervised.attribute.ReplaceMissingValues -- -c last
-W weka.classifiers.functions.MultilayerPerceptron -- -L 0.3 -M 0.2 -H a
```
Running that code gives me "Illegal option -c last" and I am not sure why. I am also not going to be using MLP as NN tend to be too slow when I have a few thousand features from the text data. I know how to change it to another classifier though (like NB or libSVM so that is good).
But I am not sure how to add multiple filters in one call as I also need to add the StringToWordVector filter (and possibly the Reorder filter to make the class the last, instead of first attribute).
And then how do I get it actually output me the prediction labels of each class? And then store so those in an arff with the initial data.
|
Weka is not really the shining example of documentation, but you can still find valuable information about it on their sites. You should start with the [Primer](https://webcache.googleusercontent.com/search?q=cache:WL-e_bOe4TUJ:https://weka.wikispaces.com/Primer). I understand that you want to classify text files, so you should also have a look at [Text categorization with WEKA](https://webcache.googleusercontent.com/search?q=cache:SdU8ij_oxaQJ:https://weka.wikispaces.com/Text%2Bcategorization%2Bwith%2BWEKA). There is also a new [Weka documentation site](https://waikato.github.io/weka-wiki/).
[**Edit**: Wikispaces has shut down and Weka hasn't brought up the sites somewhere else, yet, so I've modified the links to point at the Google cache. If someone reads this and a new Weka Wiki is up, feel free to edit the links and remove this note.]
The command line you posted in your question contains an error. I know, you copied it from my answer to another question, but I also just noticed it. You have to omit the `-- -c last`, because the `ReplaceMissingValue` filter doesn't like it.
In the Primer it says:
>
> **weka.filters.supervised**
>
>
> Classes below weka.filters.supervised in the class hierarchy are for **supervised** filtering, i.e. taking advantage of the class information. A class must be assigned via -c, for WEKA default behaviour use `-c last`.
>
>
>
but `ReplaceMissingValue` is an **unsupervised** filter, as is `StringToWordVector`.
# Multiple filters
Adding multiple filters is also no problem; that is what the `MultiFilter` is for. The command line can get a bit messy, though. (I chose `RandomForest` here because it is a lot faster than NN.)
```
java -classpath weka.jar weka.classifiers.meta.FilteredClassifier \
-t ~/weka-3-7-9/data/ReutersCorn-train.arff \
-T ~/weka-3-7-9/data/ReutersCorn-test.arff \
-F "weka.filters.MultiFilter \
-F weka.filters.unsupervised.attribute.StringToWordVector \
-F weka.filters.unsupervised.attribute.Standardize" \
-W weka.classifiers.trees.RandomForest -- -I 100 \
```
# Making predictions
Here is what the Primer says about getting the prediction:
>
> However, if more detailed information about the classifier's predictions are necessary, -p # outputs just the predictions for each test instance, along with a range of one-based attribute ids (0 for none).
>
>
>
It is a good convention to put those general options like `-p 0` directly after the class you're calling, so the command line would be
```
java -classpath weka.jar weka.classifiers.meta.FilteredClassifier \
-t ~/weka-3-7-9/data/ReutersCorn-train.arff \
-T ~/weka-3-7-9/data/ReutersCorn-test.arff \
-p 0 \
-F "weka.filters.MultiFilter \
-F weka.filters.unsupervised.attribute.StringToWordVector \
-F weka.filters.unsupervised.attribute.Standardize" \
-W weka.classifiers.trees.RandomForest -- -I 100 \
```
# Structure of WEKA classifiers/filters
But as you can see, WEKA can get very complicated when calling it from the command line. This is due to the tree structure of WEKA classifiers and filters. Though you can run only one classifier/filter per command line, it can be structured as complex as you like. For the above command, the structure looks like this:
The FilteredClassifier will initialize a filter on the training data set, filter both training and test data, then train a model on the training data and classify the given test data.
```
FilteredClassifier
|
+ Filter
|
+ Classifier
```
If we want multiple filters, we use the MultiFilter, which is only one filter, but it calls multiple others in the order they were given.
```
FilteredClassifier
|
+ MultiFilter
| |
| + StringToWordVector
| |
| + Standardize
|
+ RandomForest
```
The hard part of running something like this from the command line is assigning the desired options to the right classes, because often the option names are the same. For example, the `-F` option is used for the `FilteredClassifier` and the `MultiFilter` as well, so I had to use quotes to make it clear which -F belongs to what filter.
In the last line, you see that the option `-I 100`, which belongs to the `RandomForest`, can't be appended directly, because then it would be assigned to `FilteredClassifier` and you will get `Illegal options: -I 100`. Hence, you have to add `--` before it.
# Adding predictions to the data files
[Adding the predicted class label](https://webcache.googleusercontent.com/search?q=cache:62a1IJ4Z43QJ:https://weka.wikispaces.com/Making%2Bpredictions) is also possible, but even more complicated. AFAIK this can't be done in one step, but you have to train and save a model first, then use this one for predicting and assigning new class labels.
Training and saving the model:
```
java -classpath weka.jar weka.classifiers.meta.FilteredClassifier \
-t ~/weka-3-7-9/data/ReutersCorn-train.arff \
-d rf.model \
-F "weka.filters.MultiFilter \
-F weka.filters.unsupervised.attribute.StringToWordVector \
-F weka.filters.unsupervised.attribute.Standardize" \
-W weka.classifiers.trees.RandomForest -- -I 100 \
```
This will serialize the model of the trained `FilteredClassifier` to the file `rf.model`. The important thing here is that the initialized filter will also be serialized, otherwise the test set wouldn't be compatible after filtering.
Loading the model, making predictions and saving it:
```
java -classpath weka.jar weka.filters.supervised.attribute.AddClassification \
-serialized rf.model \
-classification \
-remove-old-class \
-i ~/weka-3-7-9/data/ReutersCorn-test.arff \
-o pred.arff \
-c last
```
|
why an adjacency list might not be preferred for a tree?
As a tree is a sparse graph with no cycles, is there a reason an adjacency list is not the preferred representation?
Why is a linked structure most commonly used to represent a tree?
|
Keeping the data for neighbors (children) in an external adjacency list versus fields in node objects is a design decision about where to put data so that it is most conducive to supporting typical operations for the data structure.
[Adjacency lists](https://en.wikipedia.org/wiki/Adjacency_list) are often implemented as a hash of `node => node[]` pairs where every node points to a list or set of its neighbors (in a tree, children). This representation is much more typical of [graphs](https://en.wikipedia.org/wiki/Graph_(abstract_data_type)) than trees ([trees](https://en.wikipedia.org/wiki/Tree_structure) are a specific type of directed graph that are acyclic and all nodes except the root have exactly one incoming edge).
The main advantage of externalizing data in an adjacency list is ease of operating on it in aggregate or offering constant time access to any member. These properties are more important on graphs where you might, for example, run a [BFS](https://en.wikipedia.org/wiki/Breadth-first_search) starting from every node in the graph. Trees, on the other hand, use the root as a single entry point for its operations ([traversals](https://en.wikipedia.org/wiki/Tree_traversal), insertions, removals, rotations and so forth) and nodes basically don't need to be accessed randomly except as a step in such an operation starting from the root.
Among trees, there are [binary trees](https://en.wikipedia.org/wiki/Binary_tree) and n-ary trees where each node has up to `n` children. Based on the follow-up comments, you mention [BST](https://en.wikipedia.org/wiki/Binary_search_tree)s and [red-black](https://en.wikipedia.org/wiki/Red%E2%80%93black_tree) trees (both binary trees) as examples of using child pointers (i.e. `this.left`, `this.right`) instead of an adjacency list.
For binary trees, `node.left` and `node.right` are very explicit properties. Keeping two separate hashes for left and right children and accessing them with `leftChildren[node]` and `rightChildren[node]` is verbose, adds extra state and incurs hash lookup overhead with no obvious merit.
It gets worse for red-black trees, which are concerned with parents and other relationships, each of which would require an additional "adjacency" hash. Adjacency lists (or any list/array) is basically out of the picture for binary trees or anything with `node.left` and `node.right` properties, but still on the table for n-ary trees, for which the `node.children` property is much more akin to `tree[node]` or `children[node]`.
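As a small illustration (a sketch, not tied to any particular library), compare the two representations of the same tiny binary tree:
```
# Pointer/field representation: children live on the node itself
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

root = Node(8, Node(3), Node(10))
print(root.left.value)          # direct field access

# Adjacency-list representation: children live in an external mapping
values = {0: 8, 1: 3, 2: 10}
children = {0: [1, 2], 1: [], 2: []}
print(values[children[0][0]])   # extra lookups and extra state to pass around
```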
In addition to accessing fields, when properties are in external data structures, function headers and state in general can become more complex. Consider `def inorder(tree, root)` versus `def inorder(root)`. `tree` could be made a class member, but that doesn't change the fact that extra state needs to be passed around and managed somehow.
Another consideration is that some languages such as C have no native support for hash maps, sets or dynamic lists. It's possible to give nodes 0..n id fields and index into an array, but the pointer approach is natural in lower-level languages.
In some cases, the data in a graph or tree is so simple (say, sequential integers), that nodes can be completely eliminated in favor of a lone adjacency list or 2d array. A [binary heap](https://en.wikipedia.org/wiki/Binary_heap) is a good example of tree data that works very nicely as a flat structure, reinforcing the idea of picking whatever representation makes the most sense.
|
JSP Document/JSPX: what determines how tabs/space/linebreaks are removed in the output?
I've got a "JSP Document" ("JSP in XML") nicely formatted and when the webpage is generated and sent to the user, some linebreaks are removed.
Now the really weird part: apparently the "main" .jsp always gets *all* its linebreaks removed, but for any subsequent .jsp included from the main .jsp, linebreaks seem to be randomly removed (some are there, others aren't).
For example, if I'm looking at the webpage served from Firefox and ask to *"view source"*, I get to see what is generated.
So, what determines when/how linebreaks are kept/removed?
This is just an example I made up... Can you force a .jsp to serve this:
```
<body><div id="page"><div id="header"><div class="title">...
```
or this:
```
<body>
<div id="page">
<div id="header">
<div class="title">...
```
?
I take it that linebreaks are removed to save on bandwidth, but what if I want to keep them? And what if I want to keep the same XML indentation as in my .jsp file?
Is this doable?
**EDIT**
Following skaffman's advice, I took a look at the generated .java files, and the "main" one does have lots of *out.write* calls but not a single one writing tabs or newlines. Contrary to that file, all the ones that I'm including from that main .jsp have lots of lines like:
```
out.write("\t...\n");
```
So I guess my question stays exactly the same: what determines how tabs/space/linebreaks are included/removed in the output?
|
As per the [JSP specification](https://jsp.dev.java.net/spec/jsp-2_1-fr-spec.pdf):
>
> ### JSP.6.2.3 Semantic Model
>
>
> ...
>
>
> To clearly explain the processing of whitespace, we follow the structure of the
> XSLT specification. The first step in processing a JSP document is to identify the
> nodes of the document. **Then, all textual nodes that have only white space are
> dropped from the document; the only exception are nodes in a jsp:text element,
> which are kept verbatim.** The resulting nodes are interpreted as described in the
> following sections. Template data is either passed directly to the response or it is
> mediated through (standard or custom) actions.
>
>
>
So, if you want to preserve whitespace, you need to wrap the desired parts in `<jsp:text>`.
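For example (a small illustrative sketch): a whitespace-only text node survives only when it is wrapped in `jsp:text`, so the generated output keeps the line break and indentation between the two `div` elements:
```
<div id="page">
    <jsp:text>
    </jsp:text>
    <div id="header">...</div>
</div>
```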
|
How to specify resolution and rejection type of the promise in JSDoc?
I have some code that returns a promise object, e.g. using [Q](https://github.com/kriskowal/q) library for NodeJS.
```
var Q = require('q');
/**
 * @returns ???
 */
function task(err) {
    return err ? Q.reject(new Error('Some error')) : Q.resolve('Some result');
}
```
How to document such a return value using JSDoc?
|
Even though they don't exist in JavaScript, I found that JSDoc understands "generic types".
So you can define your custom types and then use `/* @return Promise<MyType> */`. The following results in a nice *TokenConsume(token) → {Promise.<Token>}* with a link to your custom `Token` type in the doc.
```
/**
 * @typedef Token
 * @property {bool} valid True if the token is valid.
 * @property {string} id The user id bound to the token.
 */

/**
 * Consume a token
 * @param {string} token [description]
 * @return {Promise<Token>} A promise to the token.
 */
TokenConsume = function (string) {
  // bla bla
}
```
It even works with `/* @return Promise<MyType|Error> */` or `/* @return Promise<MyType, Error> */`.
|
Distributing RDLC output as an email attachment
Our winforms application has long allowed a "print" option which basically uses RDLC.
The customer has requested that we add a feature allowing users to send the "printed" output via email.
Now, we know that an EMF file is created (in the TEMP folder) as a sort of hidden byproduct of our current printing process.
Seems to us we can simply grab this EMF file and attach it to a new email and the job is done.
1. Is this the best option?
2. Can we rely on an EMF file be opened by any Windows machine?
3. How do we identify the EMF file? ... it just seems to be named **`%TEMP%\DiaryGrid_1.emf`** currently. OK, DiaryGrid is the name of our RDLC file, but the \_1 gets added somewhere along the way.
|
I did it before, by programmatically exporting the report to a PDF in a specific location; then we email the PDF file and delete it. I will try to find the code for you (not at home now).
**EDITED:**
Sorry for the delay. Now I'm home and I will give you some code blocks that I think will help you accomplish your task. I will include some comments so you can understand some things that are specific to my project. This code is tested and working well for my clients, but I'm sure it can be improved. Please let me know if you can improve this code ;)
First of all, we will export the report to pdf.
```
private string ExportReportToPDF(string reportName)
{
Warning[] warnings;
string[] streamids;
string mimeType;
string encoding;
string filenameExtension;
byte[] bytes = ReportViewer1.LocalReport.Render(
"PDF", null, out mimeType, out encoding, out filenameExtension,
out streamids, out warnings);
string filename = Path.Combine(Path.GetTempPath(), reportName);
using (var fs = new FileStream(filename, FileMode.Create))
{
fs.Write(bytes, 0, bytes.Length);
fs.Close();
}
return filename;
}
```
Now, we need a class that controls the mail system. Every mail system has its own characteristics, so you may need to modify this class. The behaviour of the class is simple: you only need to fill the properties and call the Send method. In my case, Windows doesn't let me delete the pdf file once I send it (Windows says the file is in use), so I schedule the file to be deleted on the next reboot. Take a look at the delete method. Please note that the Send method uses a custom class named MailConfig. This is a small class that has some config strings like Host, User Name, and Password. The mail will be sent using these params.
```
public class Mail
{
public string Title { get; set; }
public string Text { get; set; }
public string From { get; set; }
public bool RequireAutentication { get; set; }
public bool DeleteFilesAfterSend { get; set; }
public List<string> To { get; set; }
public List<string> Cc { get; set; }
public List<string> Bcc { get; set; }
public List<string> AttachmentFiles { get; set; }
#region appi declarations
internal enum MoveFileFlags
{
MOVEFILE_REPLACE_EXISTING = 1,
MOVEFILE_COPY_ALLOWED = 2,
MOVEFILE_DELAY_UNTIL_REBOOT = 4,
MOVEFILE_WRITE_THROUGH = 8
}
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern bool MoveFileEx(string lpExistingFileName,
string lpNewFileName,
MoveFileFlags dwFlags);
#endregion
public Mail()
{
To = new List<string>();
Cc = new List<string>();
Bcc = new List<string>();
AttachmentFiles = new List<string>();
From = MailConfig.Username;
}
public void Send()
{
var client = new SmtpClient
{
Host = MailConfig.Host,
EnableSsl = false,
};
if (RequireAutentication)
{
var credentials = new NetworkCredential(MailConfig.Username,
MailConfig.Password);
client.Credentials = credentials;
}
var message = new MailMessage
{
Sender = new MailAddress(From, From),
From = new MailAddress(From, From)
};
AddDestinataryToList(To, message.To);
AddDestinataryToList(Cc, message.CC);
AddDestinataryToList(Bcc, message.Bcc);
message.Subject = Title;
message.Body = Text;
message.IsBodyHtml = false;
message.Priority = MailPriority.High;
var attachments = AttachmentFiles.Select(file => new Attachment(file));
foreach (var attachment in attachments)
message.Attachments.Add(attachment);
client.Send(message);
if (DeleteFilesAfterSend)
AttachmentFiles.ForEach(DeleteFile);
}
private void AddDestinataryToList(IEnumerable<string> from,
ICollection<MailAddress> mailAddressCollection)
{
foreach (var destinatary in from)
mailAddressCollection.Add(new MailAddress(destinatary, destinatary));
}
private void DeleteFile(string filepath)
{
// this should delete the file in the next reboot, not now.
MoveFileEx(filepath, null, MoveFileFlags.MOVEFILE_DELAY_UNTIL_REBOOT);
}
}
```
Now, you can create a form to ask for the recipients, add some validation, etc., and have it return an instance of the Mail class... or you can simply "hard code" the values and fill the class.
Here is the code that I use in a button to call this form; in my example it is named SendMailView.
```
private void BtnSendByMail_Click(object sender, EventArgs e)
{
SendMailView sendMailView = new SendMailView();
if (sendMailView.ShowDialog()== DialogResult.OK)
{
Mail mail = sendMailView.CurrentItem;
mail.AttachmentFiles.Add(ExportReportToPDF("Invoice.pdf"));
mail.DeleteFilesAfterSend = true;
mail.RequireAutentication = true;
mail.Send();
}
sendMailView.Dispose();
}
```
In this example sendMailView.CurrentItem is the instance of the Mail class. We simply need to call the Send method and the work is done.
This is the largest answer I have ever written on SO... I hope it helps you :D If you have any problem using it, call me. By the way, I'm not very proud of my English, so forgive me if the text has any mistakes.
|
Tennis match scheduling
There are a limited number of players and a limited number of tennis courts. At each round, there can be at most as many matches as there are courts.
Nobody plays 2 rounds without a break. Everyone plays a match against everyone else.
Produce the schedule that takes as few rounds as possible. (Because of the rule that there must be a break between rounds for everyone, there can be a round without matches.)
The output for 5 players and 2 courts could be:
```
| 1 2 3 4 5
-|-------------------
2| 1 -
3| 5 3 -
4| 7 9 1 -
5| 3 7 9 5 -
```
In this output the columns and rows are the player-numbers, and the numbers inside the matrix are the round numbers these two players compete.
The problem is to find an algorithm which can do this for larger instances in a feasible time. We were asked to do this in Prolog, but (pseudo-) code in any language would be useful.
My first try was a greedy algorithm, but that gives results with too many rounds.
Then I suggested an iterative deepening depth-first search, which a friend of mine implemented, but that still took too much time on instances as small as 7 players.
(This is from an old exam question. No one I spoke to had any solution.)
|
# Preface
In Prolog, **CLP(FD) constraints** are the right choice for solving such scheduling tasks.
See **[clpfd](/questions/tagged/clpfd "show questions tagged 'clpfd'")** for more information.
In this case, I suggest using the powerful [`global_cardinality/2`](https://sicstus.sics.se/sicstus/docs/latest4/html/sicstus.html/Arithmetic_002dLogical-Constraints.html#Arithmetic_002dLogical-Constraints) constraint to restrict the number of **occurrences** of each round, depending on the number of available courts. We can use *iterative deepening* to find the minimal number of admissible rounds.
Freely available Prolog systems suffice to solve the task satisfactorily. Commercial-grade systems will run dozens of times faster.
# Variant 1: Solution with SWI-Prolog
```
:- use_module(library(clpfd)).
tennis(N, Courts, Rows) :-
length(Rows, N),
maplist(same_length(Rows), Rows),
transpose(Rows, Rows),
Rows = [[_|First]|_],
chain(First, #<),
length(_, MaxRounds),
numlist(1, MaxRounds, Rounds),
pairs_keys_values(Pairs, Rounds, Counts),
Counts ins 0..Courts,
foldl(triangle, Rows, Vss, Dss, 0, _),
append(Vss, Vs),
global_cardinality(Vs, Pairs),
maplist(breaks, Dss),
labeling([ff], Vs).
triangle(Row, Vs, Ds, N0, N) :-
length(Prefix, N0),
append(Prefix, [-|Vs], Row),
append(Prefix, Vs, Ds),
N #= N0 + 1.
breaks([]).
breaks([P|Ps]) :- maplist(breaks_(P), Ps), breaks(Ps).
breaks_(P0, P) :- abs(P0-P) #> 1.
```
Sample query: 5 players on 2 courts:
```
?- time(tennis(5, 2, Rows)), maplist(writeln, Rows).
% 827,838 inferences, 0.257 CPU in 0.270 seconds (95% CPU, 3223518 Lips)
[-,1,3,5,7]
[1,-,5,7,9]
[3,5,-,9,1]
[5,7,9,-,3]
[7,9,1,3,-]
```
The specified task, **6 players on 2 courts**, solved well within the time limit of 1 minute:
```
?- time(tennis(6, 2, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% 6,675,665 inferences, 0.970 CPU in 0.977 seconds (99% CPU, 6884940 Lips)
- 1 3 5 7 10
1 - 6 9 11 3
3 6 - 11 9 1
5 9 11 - 2 7
7 11 9 2 - 5
10 3 1 7 5 -
```
Further example: 7 players on 5 courts:
```
?- time(tennis(7, 5, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% 125,581,090 inferences, 17.476 CPU in 18.208 seconds (96% CPU, 7185927 Lips)
- 1 3 5 7 9 11
1 - 5 3 11 13 9
3 5 - 9 1 7 13
5 3 9 - 13 11 7
7 11 1 13 - 5 3
9 13 7 11 5 - 1
11 9 13 7 3 1 -
```
# Variant 2: Solution with SICStus Prolog
With the following additional definitions for compatibility, the *same* program also runs in SICStus Prolog:
```
:- use_module(library(lists)).
:- use_module(library(between)).
:- op(700, xfx, ins).
Vs ins D :- maplist(in_(D), Vs).
in_(D, V) :- V in D.
chain([], _).
chain([L|Ls], Pred) :-
chain_(Ls, L, Pred).
chain_([], _, _).
chain_([L|Ls], Prev, Pred) :-
call(Pred, Prev, L),
chain_(Ls, L, Pred).
pairs_keys_values(Ps, Ks, Vs) :- keys_and_values(Ps, Ks, Vs).
foldl(Pred, Ls1, Ls2, Ls3, S0, S) :-
foldl_(Ls1, Ls2, Ls3, Pred, S0, S).
foldl_([], [], [], _, S, S).
foldl_([L1|Ls1], [L2|Ls2], [L3|Ls3], Pred, S0, S) :-
call(Pred, L1, L2, L3, S0, S1),
foldl_(Ls1, Ls2, Ls3, Pred, S1, S).
time(Goal) :-
statistics(runtime, [T0|_]),
call(Goal),
statistics(runtime, [T1|_]),
T #= T1 - T0,
format("% Runtime: ~Dms\n", [T]).
```
Major difference: SICStus, being a commercial-grade Prolog that ships with a serious CLP(FD) system, is **much faster** than SWI-Prolog in this use case and others like it.
The specified task, 6 players on 2 courts:
```
?- time(tennis(6, 2, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% **Runtime: 34ms (!)**
- 1 3 5 7 10
1 - 6 11 9 3
3 6 - 9 11 1
5 11 9 - 2 7
7 9 11 2 - 5
10 3 1 7 5 -
```
The larger example:
```
| ?- time(tennis(7, 5, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% **Runtime: 884ms**
- 1 3 5 7 9 11
1 - 5 3 9 7 13
3 5 - 1 11 13 7
5 3 1 - 13 11 9
7 9 11 13 - 3 1
9 7 13 11 3 - 5
11 13 7 9 1 5 -
```
# Closing remarks
In both systems, `global_cardinality/3` allows you to specify options that alter the propagation strength of the global cardinality constraint, enabling weaker and potentially more efficient filtering. Choosing the right options for a specific example may have an even larger impact than the choice of Prolog system.
|
addEventListener memory leaks
When registering an event via [`addEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener) on an element, then deleting that element without removing the listener, and doing so repeatedly, would memory be "leaked"?
|
It shouldn't leak. The one browser that's infamous for leaking like hell when an event handler causes a host-object<>JS-object loop is IE (up to version 7), and IE (up to version 8) doesn't support `addEventListener`.
Leave this running and see how the browser's memory usage is affected in the long term, if you want to test it in a particular browser.
```
<div id="x"></div>
<script type="text/javascript">
function replace() {
var x= document.getElementById('x');
if (x.firstChild!==null)
x.removeChild(x.firstChild);
var el= document.createElement('p');
el.addEventListener('click', click, false);
x.appendChild(el);
}
function click() {
alert('click');
};
setInterval(replace, 1);
</script>
```
(To test it with a reference loop present, move the `function click` definition up into the `replace` body.)
|
Semaphore is not waiting swift
I'm trying to do 3 async requests and control the load with semaphores to know when all have loaded.
I init the semaphore this way:
```
let sem = dispatch_semaphore_create(2);
```
Then send to background the waiting for semaphore code:
```
let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
dispatch_async(backgroundQueue) { [unowned self] () -> Void in
println("Waiting for filters load")
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
println("Loaded")
}
```
Then I signal it 3 times (on each request `onSuccess` and `onFailure`):
```
dispatch_semaphore_signal(sem)
```
But by the time the signal code runs, execution has already passed the semaphore wait; it never actually waits for the semaphore count to be decremented.
why?
|
You've specified `dispatch_semaphore_create` with a parameter of `2` (which is like calling `dispatch_semaphore_signal` twice), and then signal it three more times (for a total of five), but you appear to have only one wait (which won't wait at all because you started your semaphore with a count of `2`).
That's obviously not going to work. Even if you fixed that (e.g. create the semaphore with zero and then issue three waits), this whole approach is inadvisable because you're unnecessarily tying up a thread waiting for the other requests to finish.
This is a textbook candidate for dispatch groups. So you would generally use the following:
1. Create a `dispatch_group_t`:
```
dispatch_group_t group = dispatch_group_create();
```
2. Then do three `dispatch_group_enter`, once before each request.
3. In each of the three `onSuccess`/`onFailure` block pairs, do a `dispatch_group_leave` in both blocks.
4. Create a `dispatch_group_notify` block that will be performed when all of the requests are done (see the sketch below).
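A minimal sketch of that pattern using the same pre-Swift-3 GCD API as the question; `loadFilters1/2/3` and their `onSuccess`/`onFailure` callbacks are placeholders for your three requests:

```
let group = dispatch_group_create()

dispatch_group_enter(group)
loadFilters1(onSuccess: { _ in dispatch_group_leave(group) },
             onFailure: { _ in dispatch_group_leave(group) })

dispatch_group_enter(group)
loadFilters2(onSuccess: { _ in dispatch_group_leave(group) },
             onFailure: { _ in dispatch_group_leave(group) })

dispatch_group_enter(group)
loadFilters3(onSuccess: { _ in dispatch_group_leave(group) },
             onFailure: { _ in dispatch_group_leave(group) })

dispatch_group_notify(group, dispatch_get_main_queue()) {
    println("All three requests finished")
}
```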
|
Python integer division operator vs math.floor
Is there any benefit in performance using integer division operator over math.floor?
```
7 // 2
```
over
```
math.floor(7/2)
```
|
Integer division is much faster than a `math.floor` function call:
```
>>> import timeit
>>> timeit.timeit('7//2')
0.024671780910702337
>>> timeit.timeit('floor(7/2)', setup='from math import floor')
0.27053647879827736
>>> timeit.timeit('math.floor(7/2)', setup='import math')
0.3131167508719699
```
As you can see with this disassembly, using the `math` module's `floor` function (with `import math` and `math.floor`, or `from math import floor` and `floor`) involves extra lookups and function calls compared to the plain integer division:
```
>>> import dis
>>> import math
>>> from math import floor
>>> def integer_division():
... 7//2
...
>>> def math_floor():
... floor(7/2)
...
>>> def math_full_floor():
... math.floor(7/2)
...
>>> dis.dis(integer_division)
2 0 LOAD_CONST 3 (3)
3 POP_TOP
4 LOAD_CONST 0 (None)
7 RETURN_VALUE
>>> dis.dis(math_floor)
2 0 LOAD_GLOBAL 0 (floor)
3 LOAD_CONST 3 (3.5)
6 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
9 POP_TOP
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
>>> dis.dis(math_full_floor)
2 0 LOAD_GLOBAL 0 (math)
3 LOAD_ATTR 1 (floor)
6 LOAD_CONST 3 (3.5)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 POP_TOP
13 LOAD_CONST 0 (None)
16 RETURN_VALUE
```
|
Equivalent of uintptr\_t/intptr\_t for pointers to functions?
Afaik [`uintptr_t`](https://stackoverflow.com/questions/1845482/what-is-uintptr-t-data-type) and `intptr_t` can be used to hold any pointer to `void`. Hence these types can be used to store pointers to **data**.
In C99 or later, **are there similar signed and unsigned integer types capable of holding pointers to functions?**
|
No, there are no such types.
Function pointers may only be reliably cast to other function pointer types (and then, only dereferenced while pointing to the correct function type).
The conversion of function pointers to integers in C is covered by 6.3.2.3/6:
>
> Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type.
>
>
>
Note that even if the integer type is large enough, casting to integer and back to function pointer is not guaranteed to retrieve the original function pointer.
In C++, the text is in [expr.reinterpret.cast] points 4 and 6. The behaviour is similar, but it explicitly guarantees that if an integer of sufficient size exists, then converting function pointer to integer and back again does retrieve the original function pointer.
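If all you need is a "generic" holder for function pointers (rather than an integer), the portable idiom is to pick one function pointer type as the storage type and cast back to the correct type before calling — a minimal sketch:

```
#include <stdio.h>

typedef void (*generic_fn)(void);   /* any function pointer type works as the holder */

static int add(int a, int b) { return a + b; }

int main(void) {
    /* store: convert to the generic function pointer type */
    generic_fn stored = (generic_fn)add;

    /* retrieve: convert back to the *correct* type before calling */
    int (*restored)(int, int) = (int (*)(int, int))stored;
    printf("%d\n", restored(2, 3));   /* prints 5 */
    return 0;
}
```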
|
"Invalid Handle" Create CGBitmapContext
I've got a problem with the CGBitmapcontext.
I get an error while creating the CGBitmapContext with the message "invalid Handle".
Here is my code:
```
var previewContext = new CGBitmapContext(null, (int)ExportedImage.Size.Width, (int)ExportedImage.Size.Height, 8, (int)ExportedImage.Size.Height * 4, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst);
```
Thank you;
|
That is because you are passing null to the first parameter. The CGBitmapContext is for drawing directly into a memory buffer. The first parameter in all the overloads of the constructor is (Apple docs):
>
> *data*
> A pointer to the destination in memory where the drawing is to be rendered. The size of this memory block should be at least
> (bytesPerRow\*height) bytes.
>
>
>
In MonoTouch, we get two overloads that accept a byte[] for convenience. So you should use it like this:
```
int bytesPerRow = (int)ExportedImage.Size.Width * 4; // note that bytes per row should
                                                     // be based on width, not height.
byte[] ctxBuffer = new byte[bytesPerRow * (int)ExportedImage.Size.Height];

// same colour space and alpha settings as in the question
var colorSpace = CGColorSpace.CreateDeviceRGB();
var bitmapFlags = CGImageAlphaInfo.PremultipliedFirst;

var previewContext =
    new CGBitmapContext(ctxBuffer, (int)ExportedImage.Size.Width,
        (int)ExportedImage.Size.Height, 8, bytesPerRow, colorSpace, bitmapFlags);
```
|
Fill vector with alphabets depending on user input and put Start and End on the extremities
I am trying to make a vector to look like this:
alphabet= {start,A,B,C,D,E,F,G,H,I,J,K,etc..,end}
The alphabet doesn't go from A to Z, the user inputs the values.
So if user inputs 5, I want the vector to be:
{start,A,B,C,D,E,end}
I tried using iota but I don't know how to push the "start" and "end" at the extremities of the vector
```
vector<string> alphabet;
iota(alphabet.start(), alphabet.end(), 'A');
```
How to push the `start` and `end` values?
|
For the first 5 letters of alphabet
```
#include <iostream>
#include <vector>
#include <string>
#include <numeric>
int main() {
// vector needs to be allocated, +2 is for start and end
std::vector<std::string> alphabet(5+2);
// front() gives you reference to first item
alphabet.front() = "start";
// back() gives you reference to last item
alphabet.back() = "end";
// you can use iota, but skipping the first and last item in vector
std::iota(std::next(alphabet.begin()), std::prev(alphabet.end()), 'A');
for (const auto& S : alphabet)
std::cout<<S<< ", ";
}
```
Output of this block of code is: `start, A, B, C, D, E, end,`
|
What is a component's baseline in Java
Very simple question:
**What is a component's baseline in Java?**
[The documentation](https://docs.oracle.com/javase/7/docs/api/javax/swing/JComponent.html#getBaseline(int,%20int)) does not provide an answer as to what the "baseline" is, just describes its use by LayoutManagers. Yes, the answer is probably straightforward, but I don't want to play the guessing game.
|
From JavaDocs of [FontMetrics](https://docs.oracle.com/javase/7/docs/api/java/awt/FontMetrics.html)
[](https://i.stack.imgur.com/9xryw.gif)
*When an application asks to place a character at the position (x, y), the character is placed so that its reference point (shown as the dot in the accompanying image) is put at that position. The reference point specifies a horizontal line called the baseline of the character. In normal printing, the baselines of characters should align.*
More formally, a component's baseline is the imaginary line on which the component's text sits. In practice, `getBaseline(width, height)` returns the distance in pixels from the top of the component to that text baseline. Since the baseline can depend on the component's size, you need to pass a width and height. Not every component has a baseline; for those components the method returns -1.
This method is used during component layout, so it can not use actual dimensions at that point because component is still being resized/repositioned. Hence it needs width and height to be passed.
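A small illustration (a sketch — the exact value depends on the installed look and feel and font):

```
import java.awt.Dimension;
import javax.swing.JLabel;

public class BaselineDemo {
    public static void main(String[] args) {
        JLabel label = new JLabel("Find What:");
        Dimension pref = label.getPreferredSize();

        // Distance in pixels from the top of the label to its text baseline
        // for the given size; -1 would mean the component has no baseline.
        int baseline = label.getBaseline(pref.width, pref.height);
        System.out.println("Preferred size: " + pref + ", baseline: " + baseline);
    }
}
```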
For your reference as @Frakcool mentioned:
[](https://i.stack.imgur.com/1wxjW.png)
Line under "Find What:" is baseline.
|
Questions about preparing an apk for the Amazon Android App Store
Amazon's documentation is surprisingly lacking in information about the binary submission process. From what I can tell, you submit an unsigned binary and they wrap it in their own code and produce a signed apk?
This leaves several questions:
1. Does the Amazon App Store perform a zipalign for you?
2. If you have your app in the Android Market (Google's) already, is it recommended to use the same package name or a different one? Does it make any difference?
3. I also saw elsewhere, that they offer the option to download the apk they prepare and sign it with your own key. Is it recommended to take this and then sign it with the same key you are using in the Android Market? Does it make any difference?
Are there any other considerations or pitfalls that one should know before diving into this process?
|
1. Yes. Amazon wraps your binary with code specific to their appstore that allows them to collect analytics data and enforce DRM. The app will be repackaged after that.
2. You should use the same package name. The Amazon distribution agreement currently has a number of provisos; e.g., that your app is not priced lower on another app store. They also do occasional checks to see whether the version of your app on the market is up to date. These checks are primarily done using the package name; changing the package name of your app could easily be viewed by them as a means to evade the terms of the agreement.
3. No. There may be good reasons why one would want to do this, but none that I can think of. By default, Amazon signs your apk with a signature that is specific to your Amazon developer account.
Other:
[Read this](http://www.amazonappstoredev.com/2011/06/make-your-app-fly-through-testing-part-2.html). In particular, ensure that the app links correctly to the Amazon app store and not the Android market, or others. I don't have inside data, but I'd wager a fair amount that the vast majority of submissions that Amazon turn down fall afoul of that requirement.
Edit: Point 2 is no longer correct; see comment below.
|
How to create a system tray popup message with python? (Windows)
I'd like to know how to create a system tray popup message with Python. I have seen these in lots of software, yet it's difficult to find resources on doing it easily in any language. Does anyone know of a library for doing this in Python?
|
With the help of the [`pywin32` library](https://sourceforge.net/projects/pywin32/) you can use the following example code I found [here](https://gist.github.com/BoppreH/4000505):
```
from win32api import *
from win32gui import *
import win32con
import sys, os
import struct
import time
class WindowsBalloonTip:
def __init__(self, title, msg):
message_map = {
win32con.WM_DESTROY: self.OnDestroy,
}
# Register the Window class.
wc = WNDCLASS()
hinst = wc.hInstance = GetModuleHandle(None)
wc.lpszClassName = "PythonTaskbar"
wc.lpfnWndProc = message_map # could also specify a wndproc.
classAtom = RegisterClass(wc)
# Create the Window.
style = win32con.WS_OVERLAPPED | win32con.WS_SYSMENU
self.hwnd = CreateWindow( classAtom, "Taskbar", style, \
0, 0, win32con.CW_USEDEFAULT, win32con.CW_USEDEFAULT, \
0, 0, hinst, None)
UpdateWindow(self.hwnd)
iconPathName = os.path.abspath(os.path.join( sys.path[0], "balloontip.ico" ))
icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE
try:
hicon = LoadImage(hinst, iconPathName, \
win32con.IMAGE_ICON, 0, 0, icon_flags)
except:
hicon = LoadIcon(0, win32con.IDI_APPLICATION)
flags = NIF_ICON | NIF_MESSAGE | NIF_TIP
nid = (self.hwnd, 0, flags, win32con.WM_USER+20, hicon, "tooltip")
Shell_NotifyIcon(NIM_ADD, nid)
Shell_NotifyIcon(NIM_MODIFY, \
(self.hwnd, 0, NIF_INFO, win32con.WM_USER+20,\
hicon, "Balloon tooltip",msg,200,title))
# self.show_balloon(title, msg)
time.sleep(10)
DestroyWindow(self.hwnd)
def OnDestroy(self, hwnd, msg, wparam, lparam):
nid = (self.hwnd, 0)
Shell_NotifyIcon(NIM_DELETE, nid)
PostQuitMessage(0) # Terminate the app.
def balloon_tip(title, msg):
w=WindowsBalloonTip(title, msg)
if __name__ == '__main__':
balloon_tip("Title for popup", "This is the popup's message")
```
|
When should I build a Substrate Runtime Module versus a Substrate Smart Contract?
What are the differences between developing a Substrate Runtime Module and a Substrate Smart Contract (for example with the ink! language)?
What are some examples of applications which are best written in one form or the other?
|
Substrate Runtime Modules and Substrate Smart Contracts are two different approaches to building "decentralized applications" using the Substrate framework.
---
# Smart Contracts
A traditional smart contract platform allows users to publish additional logic on top of some core blockchain logic. Since smart contract logic can be published by anyone, including malicious actors and inexperienced developers, there are a number of intentional safeguards built around the smart contract platform. Some examples are:
- Fees: Ensuring that contract developers are charged for the computation and storage they force on the computers running their contract, and not allowed to abuse the block creators.
- Sandbox: A contract is not able to modify core blockchain storage or the storage of other contracts directly. Its power is limited to modifying its own state, plus the ability to make outside calls to other contracts or runtime functions.
- State Rent: A contract takes up space on the blockchain, and thus should be charged for simply existing. This ensures that people don't take advantage of "free, unlimited storage".
- Revert: A contract can be prone to have situations which lead to logical errors. The expectations of a contract developer are low, so extra overhead is added to support reverting transactions when they fail so no state is updated when things go wrong.
These different overheads make running contracts slower and more costly, but again, the "target audience" for contract development is different from that of runtime developers.
Contracts can allow your community to extend and develop on top of your runtime logic without needing to go through all the craziness of proposals, runtime upgrades, etc... It may even be used as a testing grounds for future runtime changes, but done in a way that isolates your network from any of the growing pains or errors which may occur.
**In summary**, Substrate Smart Contracts:
- Are inherently safer to the network.
- Have built in economic incentives against abuse.
- Have computational overhead to support graceful failures in logic.
- Have a lower bar to entry for development.
- Enable fast pace community interaction through a playground to write new logic.
---
# Runtime Modules
Runtime modules, on the other hand, afford none of the protections or safeguards that Smart Contracts give you. As a runtime developer, the bar to entry on the code you produce jumps way up.
You have full control of the underlying logic that each node on your network will run. You have full access to each and every storage item across all of your modules, which you can modify and control. You can even brick your chain with incorrect logic or poor error handling.
Substrate Runtime Module development has the intention of producing lean, performant, and fast nodes. It affords none of the protections or overhead of transaction reverting, and does not implicitly introduce any fee system to the computation which nodes on your chain run. This means while you are developing runtime functions, it is up to *you* to correctly assess and apply fees to the different parts of your runtime logic so that it will not be abused by bad actors and hurt your network.
**In summary**, Substrate Runtime Modules:
- Provide low level access to your entire blockchain.
- Have removed the overhead of built-in safety for performance.
- Have a high bar to entry for developers.
- Not necessarily to write working code, but to avoid writing broken code.
- Have no inherent economic incentives to repel bad actors.
---
# The Right Tool For You
Substrate Runtime Modules and Substrate Smart Contracts are tools made available to you to solve problems.
There is likely some amount of overlap in the kinds of problems each one can solve, but there is also a clear set of problems suited for only one of the two. To give just one example in each category:
- Runtime Module: Building a privacy layer on top of transactions in your blockchain.
- Shared: Building a DApp like Cryptokitties which may need to build up a community of users (leaning toward Smart Contract), or may need to scale to millions of transactions a day (leaning toward Runtime Module).
- Smart Contract: Introducing 2nd layer tokens and custom assets to your network.
In addition to everything written above, you also need to take into account the costs to set up a DApp using a certain tool. Deploying a contract is a relatively simple and easy process since you take advantage of the existing network. The only costs to you are those fees which you pay to deploy and maintain your contract.
Setting up your own blockchain, on the other hand, has the cost of building a community that finds value in your service, or of establishing a private network with the overhead of a cloud computing setup and general network maintenance.
I think that now is really the first time it has been so easy and approachable to build runtime logic. In the past, everyone built their "decentralized application idea" using the tool available to them, Smart Contracts, even when that wasn't the best tool for the job.
With the introduction of Substrate, there is a new tool available for building your decentralized applications; but again, it would be wrong to think that all of your ideas should be a Substrate Runtime Module.
Instead, for the first time as a community, we have two tools, and we need to figure out together which one is best to use for each scenario. I don't think all the answers to this exist today, but we can learn and make some educated guesses along the way.
|
How to get checked checkboxes in JSP
How can I get/set the checkbox value using JSTL and delete only those records from the database where the checkbox is checked? Can you also advise how to use ternary operators in JSTL for this scenario?
SearchStudent.jsp
```
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn"%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Lookup Students</title>
</head>
<form method="post" action="deleteStudentServlet" class="form">
<body class="body">
<!-- List results -->
<c:if test="${not empty studentList}">
<table border="1" cellspacing="0" cellpadding="0" :>
<tr>
<th></th>
<th>ID</th>
<th>Title</th>
<th>First Name</th>
<th>Last Name</th>
<th></th>
</tr>
<c:forEach var="students" items="${studentList}">
<tr>
<td><input type="checkbox" name="chkBox"> </td>
<td>${students.studentID}</td>
<td>${students.title}</td>
<td>${students.firstName}</td>
<td>${students.lastName}</td>
<td><c:url value="UDS" var="url">
<c:param name="StudentID" value="${students.studentID}" />
</c:url> <a href="${url}">Edit</a></td>
</tr>
</c:forEach>
</table>
</c:if>
<td><input type="submit" name="submit" value="Delete" ></td>
</form>
<p>There are ${fn:length(studentList)} results.</p>
</body>
</html>
```
thanks.
|
Your checkbox has currently no value associated with the parameter name at all:
```
<input type="checkbox" name="chkBox">
```
So it's hard to find out the checked ones. You need to give the checkbox a value which uniquely identifies the selected item. In your particular example, the student ID seems to be an obvious choice:
```
<input type="checkbox" name="selected" value="${student.studentID}">
```
*(by the way, why are you duplicating the entity name in the property name? why not just name it `id` so that you can use the self-documenting `${student.id}`? also your `var="students"` is kind of odd, it refers to only **one** student, so just name it `var="student"`; the `${studentList}` can better be named `${students}`)*
When the form is submitted, all checked values are available as follows:
```
String[] selectedStudentIds = request.getParameterValues("selected");
```
Finally, just pass it through to your DAO/service class which does the business job:
```
studentService.delete(selectedStudentIds);
```
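Put together inside your `deleteStudentServlet`, that could look roughly like this (a sketch; `studentService` and the redirect target are placeholders for your own code):

```
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String[] selectedStudentIds = request.getParameterValues("selected");

    // getParameterValues() returns null when no checkbox was checked at all
    if (selectedStudentIds != null) {
        studentService.delete(selectedStudentIds);
    }

    response.sendRedirect("searchStudent.jsp");
}
```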
### See also:
- [How to transfer data from JSP to servlet when submitting HTML form](https://stackoverflow.com/questions/4971877/how-to-transfer-data-from-jsp-to-servlet-when-submitting-html-form)
- [ServletRequest.getParameterMap() returns Map<String, String[]> and ServletRequest.getParameter() returns String?](https://stackoverflow.com/questions/1928675/servletrequest-getparametermap-returns-mapstring-string-and-servletreques)
- [Send an Array with an HTTP Get](https://stackoverflow.com/questions/3061273/send-an-array-with-an-http-get)
|
(moxy) jaxb marshaling and hibernate proxy objects
In the last couple of days I have tried to add support for XML marshalling/unmarshalling of a Hibernate model, using MOXy JAXB. Trying to do this, I have run into a problem with Hibernate's proxy objects.
Consider something like:
```
public class User {
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "address")
public Address getAddress() {
return address;
}
}
public abstract class Address {
// Something
}
public class CoolAddress extends Address {
public String getSomething() {
return something;
}
}
```
I have tried to map this code using MOXy JAXB in the following way:
```
@XmlAccessorType(XmlAccessType.NONE)
public class User {
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "address")
@XmlElement
public Address getAddress() {
return address;
}
}
@XmlAccessorType(XmlAccessType.NONE)
@XmlSeeAlso(CoolAddress.class)
public abstract class Address {
// Something
}
@XmlAccessorType(XmlAccessType.NONE)
public class CoolAddress extends Address {
@XmlElement
public String getSomething() {
return something;
}
}
```
My problem is that Hibernate instantiates a proxy object for the address obtained by calling getAddress() on a User. Then, when JAXB tries to marshal the object, it can't tell that what it is marshalling is actually a CoolAddress, so the properties declared in CoolAddress are not marshalled.
I have googled/considered the following possible solutions:
- In some way get a callback from JAXB, allowing me to replace the object being marshaled with another. This would allow me to obtain the real object from the proxy.
- Touch all objects in the model, which will make Hibernate fetch the real objects. I have not been able to find any smart way of doing this other than manually running through all non-transient properties, which is quite tedious.
- Set hibernate to use eager fetching in the session I am marshaling the models.
I'm looking for alternative suggestions, or if one of the above suggestions is possible (and easy) to implement. Any help is appreciated :).
|
To solve this Hibernate issue you may be able to use an `XmlAdapter`. The `XmlAdapter` would look something like the following, where the logic in the `marshal` method converts the proxy to the real object:
```
package forum6838323;
import javax.xml.bind.annotation.adapters.XmlAdapter;
import org.hibernate.proxy.HibernateProxy;
public class AddressAdapter extends XmlAdapter<Address, Address> {
@Override
public Address unmarshal(Address v) throws Exception {
return v;
}
@Override
    public Address marshal(Address v) throws Exception {
        // One way to do it: unwrap the Hibernate proxy so JAXB sees the real
        // subclass (e.g. CoolAddress) instead of the lazy proxy.
        if (v instanceof HibernateProxy) {
            return (Address) ((HibernateProxy) v).getHibernateLazyInitializer().getImplementation();
        }
        return v;
    }
}
```
You configure the `XmlAdapter` as follows:
```
public class User {
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "address")
@XmlJavaTypeAdapter(AddressAdapter.class)
public Address getAddress() {
return address;
}
}
```
If you need to pass an initialized XmlAdapter to the JAXB marshaller, you can do that as well, see the following for an example:
- [Using JAXB to cross reference XmlIDs from two XML files](https://stackoverflow.com/questions/5319024/using-jaxb-to-cross-reference-xmlids-from-two-xml-files/5327425#5327425)
---
**Alternative Using EclipseLink JPA**
Note: The lazy loading in EclipseLink JPA does not cause this issue:
**User**
```
package forum6838323;
import javax.persistence.*;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;
@Entity
@Table(name="users")
@XmlRootElement
public class User {
private int id;
Address address;
@Id
@XmlAttribute
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "address")
public Address getAddress() {
return address;
}
public void setAddress(Address address) {
this.address = address;
}
}
```
**Address**
```
package forum6838323;
import javax.persistence.*;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlSeeAlso;
@Entity
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="TYPE", discriminatorType=DiscriminatorType.STRING)
@DiscriminatorValue("ADDRESS")
@XmlSeeAlso(CoolAddress.class)
public class Address {
private int id;
private String street;
@Id
@XmlAttribute
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getStreet() {
return street;
}
public void setStreet(String street) {
this.street = street;
}
}
```
**CoolAddress**
```
package forum6838323;
import javax.persistence.*;
@Entity
@DiscriminatorValue("COOL")
public class CoolAddress extends Address {
private String something;
public String getSomething() {
return something;
}
public void setSomething(String something) {
this.something = something;
}
}
```
**Demo**
```
package forum6838323;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
public class Demo {
public static void main(String[] args) throws Exception {
EntityManagerFactory emf = Persistence.createEntityManagerFactory("Forum6838323");
EntityManager em = emf.createEntityManager();
User user = em.find(User.class, 2);
System.out.println("user.address BEFORE marshal: " + user.address);
JAXBContext jc = JAXBContext.newInstance(User.class);
Marshaller marshaller = jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(user, System.out);
System.out.println("user.address AFTER marshal: " + user.address);
}
}
```
**Output**
You can see from the output that the address value is being lazily loaded since the field is null before the marshal and populated afterwards:
```
user.address BEFORE marshal: null
[EL Finest]: 2011-07-27 11:47:13.118--ServerSession(23503403)--Thread(Thread[main,5,main])--Execute query ReadObjectQuery(name="Forum6838323" referenceClass=Address )
[EL Finest]: 2011-07-27 11:47:13.118--ServerSession(23503403)--Connection(10272075)--Thread(Thread[main,5,main])--Connection acquired from connection pool [default].
[EL Fine]: 2011-07-27 11:47:13.118--ServerSession(23503403)--Connection(10272075)--Thread(Thread[main,5,main])--SELECT ID, TYPE, STREET, SOMETHING FROM ADDRESS WHERE (ID = ?)
bind => [2]
[EL Finest]: 2011-07-27 11:47:13.118--ServerSession(23503403)--Connection(10272075)--Thread(Thread[main,5,main])--Connection released to connection pool [default].
[EL Finest]: 2011-07-27 11:47:13.118--UnitOfWork(6131844)--Thread(Thread[main,5,main])--Register the existing object forum6838323.CoolAddress@109ea96
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user id="2">
<address xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="coolAddress" id="2">
<street>2 B Road</street>
<something>Cool Road</something>
</address>
</user>
user.address AFTER marshal: forum6838323.CoolAddress@83b1b
```
|
The interpretor for Perl is written in C and yet perl is faster for string matching compared to C
The interpreter for Perl is written in C and yet Perl is faster for string matching compared to C. Can someone correct me if I am wrong? And if it is a fact, can this be explained?
|
Perl is indeed written in C (and in Perl, I suppose).
C has no string matching ability, so it can't possibly be slower (or faster) than Perl's. Sure, you can write a string matching library in C and compare it to string matching in Perl, but you can't compare C's since C doesn't have one.
---
demerphq (one of Perl's regex engine gurus) recently replied to "What is the speed (I won't define that) of P5 RE vs other RE in other languages?" with the following:
>
> This is a tricky question. My general experience is that some regex engines match faster for some patterns, but that many regex engines *fail* to match slower, often much slower and that in practice perls engine is competitive or superior to many other engines. So you have to be really careful with benchmarks on this, many only look at a success cases, which Perl often is slower at. This is a deliberate design decision, in many scenarios pattern fail to match more often than they match, and generally we try to fail fast.
>
>
> Another subject that makes this kind of comparison tricky is that Perls regex engine has a very rich feature set and other engines often trade features for performance. Perls regex engine has very good unicode support, support that you will often find is not supported by other engines (often this is despite their claims.)
>
>
> An example of this kind of stuff is the "mandatory substring" logic that Perl uses. The engine will find the longest string in the pattern that must be present for the pattern to match. We then use Fast-Boyer-Moore (FBM) matching to determine if the string exists. If it does not then we never start the regex engine. This means that in theory Perl can reject a pattern significantly faster than a true DFA could, and that in practice it often does so. A DFA typically needs to do N operations for a string of length N, FBM can do best case N/L inspections, where N is the number of characters and L is the length of the mandatory string.
>
>
>
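If you want to see that optimisation at work, Perl can dump what the regex optimiser extracted from a pattern (a rough illustration — the exact debug output varies between Perl versions):

```
use re 'debug';

# The compile dump shows the anchored/floating substrings the optimiser
# extracted from the pattern, and the run dump shows a failing match being
# rejected by the substring check before the regex engine proper starts.
"no match here" =~ /foo.*bar/;
```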
|
Android, RXJava, search using debounce
I'm using `debounce()` for handling user searching and dealing with it's pause on typing (searches 1 second after the last char):
```
RxSearchView.queryTextChanges(searchView)
.debounce(1, TimeUnit.SECONDS)
.subscribe(new Action1<CharSequence>() {
@Override
public void call(CharSequence charSequence) {
presenter.loadUsers(charSequence.toString());
}
});
```
so if user delete all chars, it waits 1 second and then loads the list, how can I handle it and load the list instantly?
|
In your case just `debounce` operator with different parameters is needed:
[`public final <U> Observable<T> debounce(Func1<? super T, ? extends Observable<U>> debounceSelector)`](https://github.com/ReactiveX/RxJava/blob/1.x/src/main/java/rx/Observable.java#L5341)
Using it you can filter, which events can be delayed or not:
```
RxSearchView.queryTextChanges(searchView)
.debounce(new Func1<CharSequence, Observable<CharSequence>>() {
@Override
public Observable<CharSequence> call(CharSequence charSequence) {
if (charSequence.length() == 0) {
return Observable.empty();
} else {
return Observable.<CharSequence>empty().delay(1, TimeUnit.SECONDS);
}
}
})
.subscribe(new Action1<CharSequence>() {
@Override
public void call(CharSequence charSequence) {
Log.d(MainActivity.class.getSimpleName(), new Date().toGMTString() + " " + charSequence.length() + " :" + charSequence);
}
});
```
|
Why is a LinkedList Generally Slower than a List?
I started using some LinkedList’s instead of Lists in some of my C# algorithms hoping to speed them up. However, I noticed that they just felt slower. Like any good developer, I figured that I should do due diligence and verify my feelings. So I decided to benchmark some simple loops.
I thought that populating the collections with some random integers should be sufficient. I ran this code in Debug mode to avoid any compiler optimizations. Here is the code that I used:
```
var rand = new Random(Environment.TickCount);
var ll = new LinkedList<int>();
var list = new List<int>();
int count = 20000000;
BenchmarkTimer.Start("Linked List Insert");
for (int x = 0; x < count; ++x)
ll.AddFirst(rand.Next(int.MaxValue));
BenchmarkTimer.StopAndOutput();
BenchmarkTimer.Start("List Insert");
for (int x = 0; x < count; ++x)
list.Add(rand.Next(int.MaxValue));
BenchmarkTimer.StopAndOutput();
int y = 0;
BenchmarkTimer.Start("Linked List Iterate");
foreach (var i in ll)
++y; //some atomic operation;
BenchmarkTimer.StopAndOutput();
int z = 0;
BenchmarkTimer.Start("List Iterate");
foreach (var i in list)
++z; //some atomic operation;
BenchmarkTimer.StopAndOutput();
```
Here is output:
```
Linked List Insert: 8959.808 ms
List Insert: 845.856 ms
Linked List Iterate: 203.632 ms
List Iterate: 125.312 ms
```
This result baffled me. A linked list insert should be O(1), whereas a List insert is Θ(1), O(n) (because of copy) if it needs to be resized. Both list iterations should be O(1) because of the enumerator. I looked at the disassembled output and it doesn’t shed much light on the situation.
Anyone else have any thoughts on why this is? Did I miss something glaringly obvious?
Note: here is the source for the simple BenchmarkTimer class: <http://procbits.com/2010/08/25/benchmarking-c-apps-algorithms/>
|
**Update** (in response to your comment): you're right, discussing big-O notation by itself is not exactly useful. I included a link to James's answer in my original response because he already offered a good explanation of the technical reasons why `List<T>` outperforms `LinkedList<T>` in general.
Basically, it's a matter of memory allocation and locality. When all of your collection's elements are stored in an array internally (as is the case with `List<T>`), it's all in one contiguous block of memory which can be accessed *very quickly*. This applies both to *adding* (as this simply writes to a location within the already-allocated array) as well as *iterating* (as this accesses many memory locations that are very close together rather than having to follow pointers to completely disconnected memory locations).
A `LinkedList<T>` is a *specialized* collection, which only outshines `List<T>` in the case where you are performing random insertions or removals from the *middle* of the list—and even then, only *maybe*.
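To make that concrete, here is a rough sketch of the one case where the linked list's O(1) insert actually pays off — when you already hold a reference to the node you want to insert next to (assumes `using System.Collections.Generic;` and `using System.Linq;`):

```
var ll = new LinkedList<int>(Enumerable.Range(0, 1000000));
LinkedListNode<int> node = ll.Find(500000); // finding the node is O(n)...
ll.AddAfter(node, 42);                      // ...but inserting at the node is O(1)

var list = new List<int>(Enumerable.Range(0, 1000000));
list.Insert(500000, 42);                    // O(n): every later element is shifted
```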
As for the question of scaling: you're right, if big-O notation is all about how well an operation *scales*, then an O(1) operation should eventually beat out an O(>1) operation given a large enough input—which is obviously what you were going for with 20 million iterations.
This is why I mentioned that `List<T>.Add` has an **[amortized complexity](http://en.wikipedia.org/wiki/Amortized) of O(1)**. That means adding to a list is *also* an operation that scales linearly with the size of the input, the same (effectively) as with a linked list. Forget about the fact that occasionally the list has to resize itself (this is where the "amortized" comes in; I encourage you to visit that Wikipedia article if you haven't already). They *scale* the **same**.
Now, interestingly, and perhaps counter-intuitively, this means that if anything, the performance difference between `List<T>` and `LinkedList<T>` (again, when it comes to *adding*) actually becomes **more obvious** as the number of elements increases. The reason is that when the list runs out of space in its internal array, it *doubles* the size of the array; and thus with more and more elements, the frequency of resizing operations *decreases*—to the point where the array is basically never resizing.
So let's say a `List<T>` starts with an internal array large enough to hold 4 elements (I believe that's accurate, though I don't remember for sure). Then as you add up to 20 million elements, it resizes itself a total of ~(log2(20000000) - 1) or **23 times**. Compare this to the **20 million times** you're performing the *considerably* less efficient `AddLast` on a `LinkedList<T>`, which allocates a new `LinkedListNode<T>` with every call, and those 23 resizes suddenly seem pretty insignificant.
I hope this helps! If I haven't been clear on any points, let me know and I will do my best to clarify and/or correct myself.
---
[James](https://stackoverflow.com/questions/5983059/why-is-a-linkedlist-generally-slower-than-a-list/5983124#5983124) is right on.
Remember that big-O notation is meant to give you an idea of how the performance of an algorithm *scales*. It does not mean that something that performs in guaranteed O(1) time will outperform something else that performs in amortized O(1) time (as is the case with `List<T>`).
Suppose you have a choice of two jobs, one of which requires a commute 5 miles down a road that occasionally suffers from traffic jams. Ordinarily this drive should take you about 10 minutes, but on a bad day it could be more like 30 minutes. The other job is 60 miles away but the highway is always clear and never has any traffic jams. This drive *always* takes you an hour.
That's basically the situation with `List<T>` and `LinkedList<T>` for purposes of adding to the end of the list.
|
how to determine (using java code) if a web page is HTML5 (or older version of HTML)
I want to determine the version of HTML of a web page. How do I do this in a Google App Engine Java application? (Or even a desktop java application?)
|
As the comments have mentioned, there isn’t much of a hard-and-fast difference between an “HTML5” HTML page and an “older” HTML page. It’s all HTML. Much of the point of HTML5 as a standard is to document how browsers already treat HTML, rather than specify new stuff (aside from tags with different names, and JavaScript APIs).
If a page uses the HTML5 doctype (`<!DOCTYPE html>`), that’s a pretty good indication that the author intended it to be HTML5. But as the comments have mentioned, you just need a decent HTML parser — it’ll suck up older HTML and HTML5 alike, because they’re effectively the same thing as far as parsing goes.
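If the doctype heuristic is all you need, a crude sketch in plain Java might look like this (a hypothetical example, not a spec-compliant check — a real implementation should use a proper parser and handle charsets and malformed markup):

```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class DoctypeSniffer {
    public static void main(String[] args) throws IOException {
        URL url = new URL("https://example.com/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            char[] buf = new char[4096];
            int n = in.read(buf);
            String head = new String(buf, 0, Math.max(n, 0)).toLowerCase();

            // "<!DOCTYPE html>" with no public/system identifier is the HTML5 doctype
            boolean looksLikeHtml5 = head.contains("<!doctype html>");
            System.out.println(looksLikeHtml5 ? "Probably HTML5" : "Older or unknown HTML version");
        }
    }
}
```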
I’ve very little experience with HTML parsers, but as robertc suggested in his comment, you might try <http://about.validator.nu/htmlparser/>.
|
Unable to locate the Javac Compiler
I tried to `mvn install` and got this message:
```
Compilation failure
Unable to locate the Javac Compiler in:
/usr/lib/jvm/java-7-openjdk-amd64/jre/../lib/tools.jar
Please ensure you are using JDK 1.4 or above and
not a JRE (the com.sun.tools.javac.Main class is required).
In most cases you can change the location of your Java
installation by setting the JAVA_HOME environment variable.
```
Well, there is an open jdk, I also downloaded another one. I tried to point JAVA\_HOME to both, now it is set:
```
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_03
export JAVA_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH
```
I also tried to choose one of those (the OpenJDK one) with `sudo update-alternatives --config java`, but got the same error with different JDK versions in it.
How can I fix that? Thanks in advance.
|
it seems like your PATH is not picked up correctly... does the output of "echo $PATH" contain the directory where javac resides?
I would suggest following:
open terminal and do an:
```
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_03
export PATH=$PATH:$JAVA_HOME/bin
javac -version
which javac
```
if javac -version still does not work create a symlink in /usr/local/bin pointing to your javac binary:
```
cd /usr/local/bin
ln -s /usr/lib/jvm/jdk1.7.0_03/bin/javac javac
```
this should get you up and running...
an alternative is to try setting up java via your package management system (e.g. "apt-get install java" or something similar)
|
what happens in scala when loading objects that extends App?
I've encountered a somewhat bizarre behavior in objects that extend `App`. Take a look at the following REPL commands:
```
scala> object A extends App {val x = "I am null"}
defined module A
scala> object B {val x = "I am a string"}
defined module B
scala> A.x
res0: java.lang.String = null
scala> B.x
res1: java.lang.String = I am a string
```
Well, this is a bit weird... but it gets weirder. I then thought the vals in an `object` go through some lazy evaluation... so I tried a real `lazy val`:
```
scala> object C extends App {lazy val x = "What am I?"}
defined module C
scala> C.x
res2: java.lang.String = What am I?
```
So what's happening here? Why does a regular val get a null value?
Why does this behavior change when I use a `lazy val`?
And what is so special about the `App` trait that leaves the regular vals unevaluated?
|
App extends the [DelayedInit](http://www.scala-lang.org/api/current/scala/DelayedInit.html) trait. So *all statements and all value definitions* are moved into the `delayedInit` method. A lazy val still works because it compiles to a method.
For example if you decompile this class:
```
class TestApp extends App{
val test = "I am null"
lazy val testLazy ="I am a string"
}
```
You will get a class with a 'lazy' method:
```
public String testLazy()
{
if((bitmap$0 & 1) == 0)
synchronized(this)
{
if((bitmap$0 & 1) == 0)
{
testLazy = "I am a string";
bitmap$0 = bitmap$0 | 1;
}
BoxedUnit _tmp = BoxedUnit.UNIT;
}
return testLazy;
}
```
and a `delayedInit` method in the inner class `delayedInit.body`:
```
public final class delayedInit.body extends AbstractFunction0
implements ScalaObject
{
public final Object apply()
{
$outer.test_$eq("I am null");
return BoxedUnit.UNIT;
}
private final TestApp $outer;
....
```
So value "I am null" will be assigned to `test` field only when delayedInit is called.
|
Simplifying up/down vote code
I've got this working JavaScript (example is [here][1]), which should work the same as on the Stack Exchange network.
How can I simplify it a bit?
```
function yellow() {
return 'rgb(255, 255, 0)';
}
$(function() {
$(".post-upvote").click(function() {
// ajax(url + "upvote/" + $(this).attr('data-postid'), false, false);
if ($(this).parent().children('.post-downvote').css('background-color') == yellow()) { // user upvoted so let's delete upvote
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) + parseInt(1));
}
$(this).parent().children('.post-downvote').css('background-color', 'white');
if ($(this).css('background-color') == yellow()) { // if it's yellow, user is canceling his downvote
$(this).css('background-color', 'white');
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) - parseInt(1));
}
else {
$(this).css('background-color', 'yellow');
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) + parseInt(1));
}
});
$(".post-downvote").click(function() {
// ajax(url + "downvote/" + $(this).attr('data-postid'), false, false);
if ($(this).parent().children('.post-upvote').css('background-color') == yellow()) { // user upvoted so let's delete upvote
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) - parseInt(1));
}
$(this).parent().children('.post-upvote').css('background-color', 'white');
if ($(this).css('background-color') == yellow()) { // if it's yellow, user is canceling his downvote
$(this).css('background-color', 'white');
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) + parseInt(1));
}
else {
$(this).css('background-color', 'yellow');
$(this).parent().children('.post-votes').text(parseInt($(this).parent().children('.post-votes').text()) - parseInt(1));
}
});
});
```
|
Get rid of antipatterns like
```
parseInt(1)
```
Just say `1` instead.
Factor out common code like
```
$(this).parent().children('.post-votes')
```
and
```
$(this).css('background-color')
```
into variables.
Combine the two handlers thus
```
function vote(isUpvote) {
var control = $(this);
var otherControl = control.parent().children(
isUpvote ? ".post-downvote" : ".post-upvote");
var postVotes = control.parent().children(".post-votes");
var ajaxHandler = isUpvote ? "upvote/" : "downvote/";
// ajax(url + ajaxHandler + control.attr('data-postid'), false, false);
// user voted so let's delete the other control
if (otherControl.css("background-color") == yellow()) {
postVotes.text(+(postVotes.text()) + 1);
}
    otherControl.css("background-color", "white");
// if it's yellow, user is cancelling their vote
if (control.css("background-color") == yellow()) {
control.css("background-color", "white");
postVotes.text(+(postVotes.text()) - 1);
} else {
control.css("background-color", "yellow");
postVotes.text(+(postVotes.text()) + 1);
}
}
$(".post-upvote" ).click(function() { vote.call(this, true); });
$(".post-downvote").click(function() { vote.call(this, false); });
```
|
ColdFusion: How to insert a list with a static foreign key in one insert?
I have a list like this thing1,thing2,thing3. And I want to insert them into a look-up table with the same foreign key. So ideally it would look like this:
```
<cfquery datasource="#ds#" result="insert_things">
INSERT INTO lkp_things (foreign_key, thing) VALUES
(1, thing1), (1, thing2), (1, thing3)
</cfquery>
```
**It seems like the only way to accomplish this is to turn the list into a query, but I'm wondering, is there is a simpler way?**
Here's what I've tried:
```
<cfquery datasource="#ds#" result="insert_things">
INSERT INTO lkp_things (foreign_key, thing) VALUES
<cfloop list="#things#" index="thing">
(#id#,#thing#)<cfif ?????? NEQ len(#things#)>,</cfif>
</cfloop>
</cfquery>
```
I've heard that you can't do a cfloop inside a cfquery, but I'm not even sure if that's true because I can't have a trailing comma in the VALUES, and I'm not sure how to say "The current iteration number" inside a cfloop. If I turned the list into a query, then I could do currentRow, but again, I'd like to know if there's a simpler way to accomplish this before I go through all that.
Also, I'm using CF 8 and SQL Server '08. EDIT: Sorry, I'm actually using 2000.
|
**Update:**
Ultimately the real problem here was that the feature of inserting multiple sets of values with a single `VALUES` clause is only supported in SQL Server 2008+ and the OP is using 2000. So they went with the [select / union all approach](https://stackoverflow.com/a/19596550/104223) instead.
---
(Expanded from the comments)
Sure you can loop inside a `cfquery`. All cfml code is processed on the CF server first. Then the resulting SQL string is sent to the database for execution. As long as your CF code results in a valid SQL statement, you can do just about anything you want :) Whether you should is a different question, but this kind of looping is perfectly fine.
Getting back to your question, just switch to a `from/to` loop instead and use list functions like `getToken(list, index)` to get the individual elements (see [Matt's example](https://stackoverflow.com/a/19592020/104223)) or use an array instead. Obviously you should also validate the list is not empty first. My personal preference is arrays. Not tested, but something like this:
```
<cfset thingArray = listToArray(things, ",")>
<cfquery datasource="#ds#" result="insert_things">
INSERT INTO lkp_things (foreign_key, thing) VALUES
<cfloop from="1" to="#arrayLen(thingArray)#" index="x">
<cfif x gt 1>,</cfif>
(
<!--- Note: Replace cfsqltype="..." with correct type --->
<cfqueryparam value="#id#" cfsqltype="...">
, <cfqueryparam value="#thingArray[x]#" cfsqltype="...">
)
</cfloop>
</cfquery>
```
Having said that, what is the source of your `#thing#` list? If those values exist in a database table, you could insert them directly with a `SELECT` statement, instead of a loop:
```
INSERT INTO lkp_things (foreign_key, thing)
SELECT <cfqueryparam value="#id#" cfsqltype="...">, thing
FROM ThingTable
WHERE thing IN
(
<cfqueryparam value="#thingList#" list="true" cfsqltype="...">
)
```
|
How can I be sure that a directory or file is actually deleted?
I know that most files, when deleted, aren't actually removed from the disk, and can be recovered later.
How can I ensure that a directory I had deleted will actually be removed from the disk? Are there utilities for that?
I'm using Debian Linux.
|
Encrypt the data before storing it. To erase the data, wipe the key.
If you've already written the data in plaintext, it's too late to wipe it in a simple way. There may be multiple copies of the data laying around in various places:
- on the filesystem if the file was written multiple times (either overwritten or replaced);
- on the filesystem if it was rearranged as part of defragmentation;
- in the journal (this is likely to disappear pretty fast after the data was last written);
- in backups;
- in disabled sectors (especially on SSD).
To get rid of copies of the data on the filesystem, a crude method is to fill the free space (`cat /dev/zero >somefile` and wait for it to stop because the filesystem is full). This will overwrite all full blocks.
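As a concrete (and crude) sketch — the path is just an example; use any location on the same filesystem as the deleted data:

```
# Fill the free space with zeros, flush it to disk, then remove the filler file.
cat /dev/zero > /mnt/data/zero-fill   # stops with "No space left on device"
sync
rm /mnt/data/zero-fill
```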
Small parts of the data may remain in incomplete blocks that are partially used by other files. This is especially a concern for file names, which may remain in blocks that store directory contents. To get rid of everything, back up all the files, overwrite the device containing the filesystem completely, then restore the files.
Storage media may retain data in blocks that are no longer in use. On hard disks, this means bad blocks that have been reallocated; this is a pretty rare occurrence until the disk starts wearing down. On SSD, this is a common occurrence due to wear levelling. In both cases, the threat is very low, because accessing that data requires a somewhat sophisticated attacker with some moderately expensive hardware and time to waste. If you care about these threats, encrypt your data and don't leave your key lying around.
Note that you may see advice about erasing data by doing multiple passes or using random data instead of zeroes (“Gutmann wipe”). Forget it: this applies only to 1980s hard disks (and even then the data is not that cheap to reconstruct and the reconstruction is rather unreliable). Overwriting with zeroes is good enough; doing multiple random passes is obsolete advice or snake oil. See [Why is writing zeros (or random data) over a hard drive multiple times better than just doing it once?](https://security.stackexchange.com/questions/10464/why-is-writing-zeros-or-random-data-over-a-hard-drive-multiple-times-better-th)
|
Why any modern x86 masks shift count to the 5 low bits in CL
I'm digging into left and right shift operations in x86 ASM, like `shl eax, cl`
From IA-32 Intel Architecture Software Developer’s Manual 3
>
> All IA-32
> processors (starting with the Intel 286 processor) do mask the shift
> count to 5 bits, resulting in a maximum count of 31. This masking is
> done in all operating modes (including the virtual-8086 mode) to
> reduce the maximum execution time of the instructions.
>
>
>
I'm trying to understand the reasoning behind this logic. Maybe it works this way because on a hardware level it is hard to implement shift for all 32 (or 64) bits in a register using 1 cycle?
Any detailed explanation would help a lot!
|
Edited to correct statement re: 80386, which (to my surprise) did have a barrel shifter.
---
Happy to hear the 286 described as "modern" :-)
The 8086 ran a `SHL AX, CL` in 8 clocks + 4 clocks per bit shifted. So if `CL` = 255 this is a seriously slow instruction !
So the 286 did everybody a favour and clamped the count by masking to 0..31. Limiting the instruction to at most 5 + 31 clocks. Which for 16 bit registers is an interesting compromise.
[I found "80186/80188 80C186/80C188 Hardware Reference Manual" (order no. 270788-001) which says that this innovation appears there first. `SHL` et al ran 5+n clocks (for register operations), same like the 286. FWIW, the 186 also added PUSHA/POPA, PUSH immed., INS/OUTS, BOUND, ENTER/LEAVE, IMUL immed. and SHL/ROL etc. immed. I do not know why the 186 appears to be a non-person.]
For the 386 they kept the same mask, but that applies also to 32-bit register shifts. I found a copy of the "80386 Programmer's Reference Manual" (order no. 230985-001), which gives a clock count of 3 for all register shifts. The "Intel 80386 Hardware Reference Manual" (order no. 231732-002), section 2.4 "Execution Unit" says that the Execution Unit includes:
>
> • The Data Unit contains the ALU, a file of eight 32-bit general-purpose registers, and a 64-bit barrel shifter (which performs multiple bit shifts in one clock).
>
>
>
So, I do not know why they did not mask 32-bit shifts to 0..63. At this point I can only suggest the cock-up theory of history.
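To illustrate what the 5-bit masking means in practice (a sketch of the behaviour described in the manual quote above; it needs a 386 or later since it uses a 32-bit register):

```
mov  eax, 1
mov  cl, 32
shl  eax, cl      ; count = 32 AND 31 = 0  -> EAX is still 1
mov  cl, 33
shl  eax, cl      ; count = 33 AND 31 = 1  -> EAX is now 2
```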
I agree it is a shame that there isn't a (GPR) shift which returns zero for any count >= argument size. That would require the hardware to check for any bit set beyond the bottom 6/5, and return zero. As a compromise, perhaps just the Bit6/Bit5.
[I haven't tried it, but I suspect that using `PSLLQ` et al is hard work -- shuffling count and value to `xmm` and shuffling the result back again -- compared to testing the shift count and masking the result of a shift in some branch-free fashion.]
Anyway... the reason for the behaviour appears to be history.
|
Prolog singleton variables in Python
I'm working on a little set of scripts in python, and I came to this:
```
line = "a b c d e f g"
a, b, c, d, e, f, g = line.split()
```
I'm quite aware of the fact that these are decisions taken during implementation, but shouldn't (or does) python offer something like:
```
_, _, var_needed, _, _, another_var_needed, _ = line.split()
```
as well as Prolog does offer, in order to exclude the famous `singleton variables`.
I'm not sure, but wouldn't it avoid unnecessary allocation? Or creating references to the result of the `split` call does not count up as overhead?
**EDIT:**
Sorry, my point here is: in Prolog, as far as I'm concerned, in an expression like:
```
test(L, N) :-
test(L, 0, N).
test([], N, N).
test([_|T], M, N) :-
V is M + 1,
test(T, V, N).
```
The variable represented by `_` is not accessible, and I suppose the reference to the value that does exist in the list `[_|T]` is not even created.
But, in Python, if I use `_`, I can use the last value assigned to `_`, and also, I do suppose the assignment occurs for each of the variables `_` -- which may be considered an overhead.
My question here is if shouldn't there be (or if there is) a syntax to avoid such unnecessary attributions.
|
`_` is a perfectly valid variable name and yes, you can use a variable multiple times in an unpacking operation, so what you've written will work. `_` will end up with the last value assigned in the line. Some Python programmers do use it this way.
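For instance, reusing `_` several times on one line is legal; it simply ends up holding whichever value was assigned to it last:

```
line = "a b c d e f g"
_, _, var_needed, _, _, another_var_needed, _ = line.split()

print(var_needed, another_var_needed)  # c f
print(_)                               # g  -- the last value bound to _
```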
`_` is used for special purposes by some Python interactive shells, which may confuse some readers, and so some programmers do *not* use it for this reason.
There's no way to avoid the allocation with `str.split()`: it always splits the whole line, and the resulting strings are always allocated. It's just that, in this case, some of them don't live very long. But then again, who does?
You can avoid *some* allocations with, say, `re.finditer()`:
```
import re
fi = re.finditer(r"\S+", line)
next(fi)
next(fi)
var_needed = next(fi).group()
next(fi)
next(fi)
another_var_needed = next(fi).group()
# we don't care about the last match so we don't ask for it
```
But `next()` returns a `Match` object and so it'll be allocated (and immediately discarded since we're not saving it anywhere). So you really only save the final allocation. If your strings are long, the fact that you're getting a `Match` object and not a string could save some memory and even time, I guess; I *think* the matched string is not sliced out of the source string until you ask for it. You could profile it to be sure.
You could even generalize the above into a function that returns only the desired tokens from a string:
```
import re
def get_tokens(text, *toknums):
toknums = set(toknums)
maxtok = max(toknums)
    for i, m in enumerate(re.finditer(r"\S+", text)):
if i in toknums:
yield m.group()
elif i > maxtok:
break
var1, var2 = get_tokens("a b c d e f g", 2, 5)
```
But it still ain't exactly pretty.
|
Polar/Stereographic map in R
I am trying to produce a sterographic map similarly to this:
[](https://i.stack.imgur.com/kU3U8.png)
What I am trying to do is to add:
1. Coordinates
2. Graticule lines
This can be in both base R or with ggplot2. Any help is appreciated.
**My attempts so far**
```
library(rgdal)
library(raster)
proj <- "+proj=stere +lat_0=90 +lat_ts=70 +lon_0=-45 +k=1 +x_0=0 +y_0=0 +a=6378273 +b=6356889.449 +units=m +no_defs"
data("wrld_simpl", package = "maptools")
wm <- crop(wrld_simpl, extent(-180, 180, 45, 90))
plot(wm)
```
[](https://i.stack.imgur.com/yMJS0.png)
```
wm <- spTransform(wm, CRSobj = CRS(proj))
plot(wm)
```
[](https://i.stack.imgur.com/dVkeI.png)
|
This is quite a complex map to reproduce, and all the details required to make it work seem beyond the scope of a single question. However, this is most of the stuff you require.
Doing this in ggplot is *easier* than using base graphics. However, it is quite a complex graph to make.
I have had to use a few hacks to get it to work. In particular, the axes produced from `coord_map` did not end at the edge of the plot, so I had to manually delete the axes and then recreate them using the `geom_text` and `geom_segment` lines below.
```
library(rgdal)
library(raster)
library(ggplot2)
# Data prep (assumed from the question): crop the world map to latitudes
# above 45 degrees North and convert it to a data frame that ggplot can use
data("wrld_simpl", package = "maptools")
wm <- crop(wrld_simpl, extent(-180, 180, 45, 90))
wm_ggplot <- fortify(wm)

# Defines the x axes required
x_lines <- seq(-120,180, by = 60)
ggplot() +
geom_polygon(data = wm_ggplot, aes(x = long, y = lat, group = group), fill = "grey", colour = "black", alpha = 0.8) +
# Convert to polar coordinates
coord_map("ortho", orientation = c(90, 0, 0)) +
scale_y_continuous(breaks = seq(45, 90, by = 5), labels = NULL) +
# Removes Axes and labels
scale_x_continuous(breaks = NULL) +
xlab("") +
ylab("") +
# Adds labels
geom_text(aes(x = 180, y = seq(55, 85, by = 10), hjust = -0.2, label = paste0(seq(55, 85, by = 10), "°N"))) +
geom_text(aes(x = x_lines, y = 39, label = c("120°W", "60°W", "0°", "60°E", "120°E", "180°W"))) +
# Adds axes
geom_hline(aes(yintercept = 45), size = 1) +
geom_segment(aes(y = 45, yend = 90, x = x_lines, xend = x_lines), linetype = "dashed") +
# Change theme to remove axes and ticks
theme(panel.background = element_blank(),
panel.grid.major = element_line(size = 0.25, linetype = 'dashed',
colour = "black"),
axis.ticks=element_blank()) +
labs(caption = "Designed by Mikey Harper")
```
[](https://i.stack.imgur.com/8Pjh4.png)
|
How can an ISO image be mounted to a specified drive letter in Powershell?
I have an ISO image (in this case an MS Office ISO) that I'd like to mount.
I'd like to use Powershell, and specify the drive letter assignment at the time of mounting, such that I can use scripting commands on files on the mounted ISO (drive), after which time, I'd like to dismount the ISO.
How can this be done?
Background: I'd like to script installation of MS Office given an ISO image.
|
The following Powershell commands will mount the specified ISO image to the specified drive letter. The [mountvol](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/mountvol) command requires elevation, so run Powershell **as an Administrator**:
```
# ISO image - replace with path to ISO to be mounted
$isoImg = "D:\en_visio_professional_2019_x86_x64_dvd_3b951cef.iso"
# Drive letter - use desired drive letter
$driveLetter = "Y:"
# Mount the ISO, without having a drive letter auto-assigned
$diskImg = Mount-DiskImage -ImagePath $isoImg -NoDriveLetter
# Get mounted ISO volume
$volInfo = $diskImg | Get-Volume
# Mount volume with specified drive letter (requires Administrator access)
mountvol $driveLetter $volInfo.UniqueId
#
#
# Do work (e.g. MS Office installation - omitted for brevity)
#
#
# Unmount drive
DisMount-DiskImage -ImagePath $isoImg
```
---
Background: this was a useful reference: <https://www.derekseaman.com/2010/04/change-volume-drive-letter-with.html>
|
Is the iid assumption in Linear Regression necessary?
In linear or logistic regression, we have the following (adapted from Foundations of machine learning.):
As in all supervised learning problems, the learner $\mathcal{A}$ receives a labeled sample dataset $\mathcal{S}$
containing $N$ i.i.d. samples $\left(\mathbf{x}^{(n)}, y^{(n)}\right)$ drawn from $\mathbb{P}\_{\mathcal{D}}$:
$$
\mathcal{S} = \left\{\left(\mathbf{x}^{(1)}, y^{(1)}\right), \left(\mathbf{x}^{(2)}, y^{(2)}\right), \ldots, \left(\mathbf{x}^{(N)}, y^{(N)}\right)\right\} \subset \mathbb{R}^{D} \quad \overset{\small{\text{i.i.d.}}}{\sim} \quad \mathbb{P}\_{\mathcal{D}}\left(\mathcal{X}, \mathcal{Y} ; \boldsymbol{\beta}\right)
$$
---
I am used to the iid assumption in machine learning, but in the case of conditional maximum likelihood, I have the following question.
To use maximum likelihood for linear/logistic regression, it is required to have $y \mid x$ to be independent, in other words, $y$ is conditionally independent of $x$. The question is, do we need the strong iid assumption mentioned above for us to invoke MLE?
|
If you are asking about the i.i.d. assumption in machine learning in general, we already have that question answered in the [On the importance of the i.i.d. assumption in statistical learning](https://stats.stackexchange.com/questions/213464/on-the-importance-of-the-i-i-d-assumption-in-statistical-learning) question.
As about maximum likelihood, notice that the likelihood function is often written as
$$
\prod\_{i=1}^N p(x\_i | \theta)
$$
where $p(x\_i | \theta)$ is probability density or mass function for the point $x\_i$ parameterized by $\theta$. We are multiplying *because* we are making the [independence](https://en.wikipedia.org/wiki/Independence_(probability_theory)) assumption; otherwise the joint distribution would not be a product of the individual distributions. Moreover, $p(\cdot | \theta)$ are all the same, so they are "identical", and hence we are talking about the i.i.d. assumption. This *does not* mean that every likelihood function would assume independence, but that is often the case. The identical distributions assumption also is not necessary, e.g. you can have a mixture model (e.g. clustering), where you assume that individual samples come from different distributions, together forming a mixture.
Notice that with maximum likelihood we are directly making such assumptions. If you are fitting a decision tree or $k$NN you are not maximizing any likelihood; the algorithms do not explicitly assume any probability distribution, so you are also not explicitly making such an assumption. It still is the case, however, that you are assuming that your data is "all alike" (so a kind of i.i.d. or exchangeability): for example, you wouldn't mix data from completely different domains (say, ice-cream sales, size of brain tumors, and speed of Formula 1 cars) together and expect it to return reasonable predictions.
As for logistic regression, that is discussed in the [Is there i.i.d. assumption on logistic regression?](https://stats.stackexchange.com/questions/259704/is-there-i-i-d-assumption-on-logistic-regression) thread.
It would be a tautology, but *the assumptions that you made need to hold*. If your model assumes that the samples are independent, then you need the independence assumption.
|
How to prevent upstart from killing child processes to a daemon?
## Situation
I have a daemon I wrote in PHP (not the best language for this, but work with me), and it is made to receive jobs from a queue and process them whenever a job needs to be done. For each new job, I use [pcntl\_fork()](http://us3.php.net/pcntl_fork) to fork the job off into a child process. Within this child process, I then use [proc\_open()](http://www.php.net/manual/en/function.proc-open.php) to execute long-running system commands for audio transcoding, which returns directly to the child when finished. When the job is completely done, the child exits and is cleaned up by the parent process.
To keep this daemon always running, I use upstart. Here is my upstart configuration file:
```
description "Audio Transcoding Daemon"
start on startup
stop on shutdown
# kill signal SIGCHLD
kill timeout 1200 # Don't force kill the process until it runs over 20 minutes
respawn
exec audio-daemon.php
```
## Goal
Because I want to use this daemon in a distributed environment, I want to be able to shutdown the server at any time without disrupting any running jobs. To do this, I have already implemented signal handlers using [pcntl\_signal()](http://www.php.net/manual/en/function.pcntl-signal.php) for SIGTERM, SIGHUP, and SIGINT on the parent process, which waits for all children to exit normally before exiting itself. The children also have signal handlers, but they are made to ignore all kill signals.
## Problem
The problem is, according to the [docs](http://upstart.ubuntu.com/cookbook/)...
>
> *The signal specified by the kill signal stanza is sent to the process group of the main process. (such that all processes belonging to the jobs main process are killed). By default this signal is SIGTERM.*
>
>
>
This is concerning because, in my child process, I run system commands through [proc\_open()](http://www.php.net/manual/en/function.proc-open.php), which spawns new child processes as well. So, whenever I run `sudo stop audio-daemon`, this sub-process (which happens to be [sox](http://sox.sourceforge.net/)) is killed immediately, and the job returns back with an error. Apparently, sox obeys SIGTERM and does what it's told...
Originally, I thought, "Fine. I'll just change `kill signal` to send something that is inherently ignored, and I'll just pick it up in the main process only." But according to the [manual](http://unixhelp.ed.ac.uk/CGI/man-cgi?signal+7), there are only two signals that are ignored by default: SIGCHLD and SIGURG (and possibly SIGWINCH). But I'm afraid of getting false flags, since these can also be triggered other ways.
There are ways to create a custom signal using what the [manual](http://unixhelp.ed.ac.uk/CGI/man-cgi?signal+7) calls "Real-time Signals" but it also states...
>
> *The default action for an unhandled real-time signal is to terminate the receiving process.*
>
>
>
So that doesn't help...
Can you think of any way that I can get upstart to keep all of my sub-processes open until they complete? I really don't want to go digging through sox's source code to modify its signal handlers, and while I could set SIGCHLD, SIGURG, or SIGWINCH as my upstart kill signal and pray nothing else sends them my way, I can't help but think there's a better way to do this... Any ideas?
Thanks for all your help! :)
|
Since I haven't received any other answers for how to do this a better way, this is what I ended up doing, and I hope it helps someone out there...
To stall shutdown/reboot of the system until the daemon is finished, I changed my `start on` and `stop on` in my upstart configuration. And to keep upstart from killing my children, I resorted to using SIGURG as my `kill signal`, which I then catch as a kill signal in my main daemon process only.
Here is my final upstart configuration:
```
description "Audio Transcoding Daemon"
start on runlevel [2345]
stop on starting rc RUNLEVEL=[016] # Block shutdown/reboot until the daemon ends
kill signal SIGURG # Kill the process group with SIGURG instead of SIGTERM so only the main process will pick it up (since SIGURG will be ignored by all children by default)
kill timeout 1200 # Don't force kill the process until it runs over 20 minutes
respawn
exec audio-daemon.php
```
Note that using `stop on starting rc RUNLEVEL=[016]` is necessary to stall shutdown/reboot. `stop on runlevel [016]` will not work.
Also note that if you use SIGURG in your application for any other reason, using it as a kill signal may cause problems. In my case, I wasn't, so this works fine as far as I can tell.
Ideally, it would be nice if the POSIX standard provided a user-defined signal like SIGUSR1 and SIGUSR2 that was ignored by default. But right now, it looks like it doesn't exist.
Feel free to chime in if you have a better answer, but for now, I hope this helps anyone else having this problem.
|
Git Merge - Difference Between conflictStyle diff3 and merge
## Context
`git merge` considers the setting `merge.conflictStyle` in case of merge conflicts.
Possible values are `merge` (default) and `diff3`.
I noticed that `diff3` sometimes produces much bigger conflicts (see example below).
I found [this paper](https://www.cis.upenn.edu/%7Ebcpierce/papers/diff3-short.pdf), which describes the `diff3` algorithm in great detail, but I couldn't find much about the default `merge` algorithm.
## Question
What are the exact differences between the `merge` and `diff3` algorithm?
How does the default `merge` algorithm work exactly?
## Example
I have these files:
- Base:
```
1
2
3
```
- Yours:
```
1
change1
change2
input1OnlyChange1
change3
change4
change5
change6
input1OnlyChange2
change7
change8
change9
2
3
```
- Theirs:
```
1
change1
change2
input2OnlyChange1
change3
change4
change5
change6
input2OnlyChange2
change7
change8
change9
2
3
```
With `merge` I get 2 conflict markers:
```
1
change1
change2
<<<<<<< HEAD
input1OnlyChange1
=======
input2OnlyChange1
>>>>>>> input2
change3
change4
change5
change6
<<<<<<< HEAD
input1OnlyChange2
=======
input2OnlyChange2
>>>>>>> input2
change7
change8
change9
2
3
```
However, with `diff3` I only get 1 conflict marker:
```
1
<<<<<<< HEAD
change1
change2
input1OnlyChange1
change3
change4
change5
change6
input1OnlyChange2
change7
change8
change9
||||||| 0fcee2c
=======
change1
change2
input2OnlyChange1
change3
change4
change5
change6
input2OnlyChange2
change7
change8
change9
>>>>>>> input2
2
3
```
This is my test script (powershell):
```
rm -Force -r ./repo -ErrorAction Ignore
mkdir ./repo
cd ./repo
git init
# git config merge.conflictStyle diff3
cp ../../base.txt content.txt
git add *; git commit -m first
git branch base
git checkout -b input2
cp ../../input2.txt content.txt
git add *; git commit -m input2
git checkout base
cp ../../input1.txt content.txt
git add *; git commit -m input1
git merge input2
```
Does the `merge` algorithm diff the diffs again to split up the bigger conflict?
Clearly the `merge` algorithm also performs some kind of 3 way diff, as you don't get a conflict when you update `base` to match `yours`.
## Official documentation
The [docs](https://git-scm.com/docs/merge-config#Documentation/merge-config.txt-mergeconflictStyle) say this:
>
> Specify the style in which conflicted hunks are written out to working tree files upon merge. The default is "merge", which shows a `<<<<<<<` conflict marker, changes made by one side, a `=======` marker, changes made by the other side, and then a `>>>>>>>` marker. An alternate style, "diff3", adds a `|||||||` marker and the original text before the `=======` marker.
>
>
>
Clearly this does not explain the observed difference in the example.
|
Yes, this arises particularly when both sides added something where there was nothing before, but they added different things (hence the conflict, obviously).
>
> Clearly this does not explain the observed difference in the example
>
>
>
Actually, I think it does. In the two-part merge conflict display style, we just contrast ours against theirs, so regions of identical content are not shown as part of the conflict. But in the three-part diff3 merge conflict display style, we display the conflict by diffing ours against base and theirs against base; in a case where base is "nothing", as here, that means that both the ours display hunk and the theirs display hunk must consist of the *entire* inserted material.
From a practical point of view, this makes the conflict a lot harder for a human to solve when viewed as diff3 — and in actual fact, what I do is re-diff it the other way, diffing the ours hunk against the theirs hunk to help me "spot the difference" that needs thinking about. You can swap display styles in the middle of the conflict by saying `git checkout --conflict <diff3|merge> <filepath>`.
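For example (a sketch, assuming a conflicted file at `path/to/file` in the middle of the merge):

```
# re-diff "ours" (stage 2) against "theirs" (stage 3) for the conflicted file
git diff :2:path/to/file :3:path/to/file

# switch the on-disk conflict markup between the two styles
git checkout --conflict=merge path/to/file
git checkout --conflict=diff3 path/to/file
```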
---
**Addendum** Consideration of your comments leads me to suggest you may have a possible misunderstanding here. The merge/diff3 distinction doesn't affect how the merge works or whether there is a conflict. What it affects, given that there *is* a conflict, is how it is *displayed* in the single-file markup.
|
How would I be able to open google maps when I press a button in my app?
I have this app and I want to use google maps or apple maps to open when the user presses a button in my app. How would I be able to do this? Is there like a link to the maps that opens the app or is it something else? If you can point me in the right direction it would be really helpful. Thanks! I have the button set up below like this:
```
override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
for touch in (touches ) {
let location = touch.locationInNode(self)
let node = self.nodeAtPoint(location)
if node.name == "openMaps" {
//code to open google maps or apple maps....
}
```
|
Use this:
```
if node.name == "openMaps" {
let customURL = "comgooglemaps://"
    if UIApplication.sharedApplication().canOpenURL(NSURL(string: customURL)!) {
        UIApplication.sharedApplication().openURL(NSURL(string: customURL)!)
}
else {
var alert = UIAlertController(title: "Error", message: "Google maps not installed", preferredStyle: UIAlertControllerStyle.Alert)
var ok = UIAlertAction(title: "OK", style: UIAlertActionStyle.Default, handler: nil)
alert.addAction(ok)
self.presentViewController(alert, animated:true, completion: nil)
}
}
```
You can find more info about the google maps URL scheme [here](https://developers.google.com/maps/documentation/ios-sdk/urlscheme?hl=en)
Edit: You must add a key to your `info.plist` for this to work.
```
<key>LSApplicationQueriesSchemes</key>
<array>
<string>googlechromes</string>
<string>comgooglemaps</string>
</array>
```
Edit: Per updated [Google Maps docs](https://developers.google.com/maps/documentation/ios-sdk/start#step_7_declare_the_url_schemes_used_by_the_api) added "googlechromes" to plist above also.
|
F# - Flatten List/Range
I'm new to F# and am wondering how I would go about flattening a list.
Essentially in the database I store a record with a min\_age and max\_age range (this is a fictitious example for the sake of brevity - i am not agist!). My fields look something like the following:
id,
cost,
savings,
min\_age,
max\_age
I essentially have an F# class that acts as a one-to-one mapping with this table - i.e. all properties are mapped exactly to the database fields.
What I would like to do is flatten this range. So, instead of a list containing items like this:
```
saving_id = 1, cost = 100, savings = 20, min_age = 20, max_age = 26
saving_id = 2, cost = 110, savings = 10, min_age = 27, max_age = 31
```
I would like a list containing items like this:
```
saving_id = 1, cost = 100, savings = 20, age = 20
saving_id = 1, cost = 100, savings = 20, age = 21
etc.
saving_id = 2, cost = 110, savings = 10, age = 27
saving_id = 2, cost = 110, savings = 10, age = 28
etc.
```
Is there any in-built mechanism to flatten a list in this manner and/or does anyone know how to achieve this?
Thanks in advance,
JP
|
You might want to use Seq.collect. It concatenates sequences together, so in your case, you can map a function over your input that splits a single age range record to a sequence of age records and use Seq.collect to glue them together.
For example:
```
type myRecord =
{ saving_id: int;
cost: int;
savings: int;
min_age: int;
max_age: int }
type resultRecord =
{ saving_id: int;
cost: int;
savings: int;
age: int }
let records =
[ { saving_id = 1; cost = 100; savings = 20; min_age = 20; max_age = 26 }
{ saving_id = 2; cost = 110; savings = 10; min_age = 27; max_age = 31 } ]
let splitRecord (r:myRecord) =
seq { for ageCounter in r.min_age .. r.max_age ->
{ saving_id = r.saving_id;
cost = r.cost;
savings = r.savings;
age = ageCounter }
}
let ageRanges = records |> Seq.collect splitRecord
```
Edit: you can also use a sequence generator with yield!
```
let thisAlsoWorks =
seq { for r in records do yield! splitRecord r }
```
|
PHP Facebook FQL: Making queries in the new SDK
New to PHP and Graph API.
It seems that in the old API you could create a facebook object and pass in a standard query (select blah in blank where x = y ...) but now I don't see how to do that. All of the examples in the facebook documentation have you using file_get_contents on graph.facebook.com/?fql etc.
Help? I'm kinda confused and don't understand some of the facebook documentation.
Thanks.
|
Using the PHP SDK you can run fql queries by :
```
$facebook = new Facebook(array(
'appId' => 'YOUR_API_KEY',
'secret' => 'YOUR_API_SECRET',
'cookie' => true,
));
$fql = "Your query";
$response = $facebook->api(array(
'method' => 'fql.query',
'query' =>$fql,
));
print_r($response);
```

I've also seen it done this way:

```
$param = array(
'method' => 'fql.query',
'access_token' => $cookie['access_token'],
'query' => $fql,
'callback' => ''
);
$response = $facebook->api($param);
print_r($response);
```
Hope this helps :)
|
Retrieving information about a contact with Google People API (Java)
I am using an example of Google's recently released People API from [here](https://developers.google.com/people/quickstart/java). I have extended the sample a bit to display additional information about the contact, such as an email address and a phone number. The code that should do the job is presented below.
```
public class PeopleQuickstart {
...
public static void getPersonInfo(Person person){
// Get names
List<Name> names = person.getNames();
if(names != null && names.size() > 0) {
for(Name personName: names) {
System.out.println("Name: " + personName.getDisplayName());
}
}
// Get email addresses
List<EmailAddress> emails = person.getEmailAddresses();
if(emails != null && emails.size() > 0) {
for(EmailAddress personEmail: emails) {
System.out.println("Email: " + personEmail.getValue());
}
}
// Get phone numbers
List<PhoneNumber> phones = person.getPhoneNumbers();
if(phones != null && phones.size() > 0) {
for(PhoneNumber personPhone: phones){
System.out.println("Phone number: " + personPhone.getValue());
}
}
}
public static void main(String [] args) throws IOException {
People service = getPeopleService();
// Request 120 connections.
ListConnectionsResponse response = service.people().connections()
.list("people/me")
.setPageSize(120)
.execute();
// Display information about your connections.
List<Person> connections = response.getConnections();
if (connections != null && connections.size() > 0) {
for (Person person: connections){
getPersonInfo(person);
}
} else {
System.out.println("No connections found.");
}
}
}
```
I am testing this program with my contact list and I can successfully obtain a list of people along with the name fields. However, I cannot get values for email addresses and phone numbers (lists are always null), although I do have these values set in my contact list (verified through Gmail->Contacts). What am I missing?
|
Ok, problem solved. It looks like Google's documentation is a bit misleading (well, it has just been released;)). When I try to fetch my contacts using *people.connections.list* (see [here](https://developers.google.com/people/api/rest/v1/people.connections/list)) there are several query parameters that can be set. However, for the *requestMask* parameter it is stated that "Omitting this field will include all fields" which is not the case (at least did not work for me). Therefore, one has to explicitly specify which fields to be returned in the response. The modified code is given below. I wish Google people would clarify this point a bit.
```
public class PeopleQuickstart {
...
public static void main(String [] args) throws IOException {
People service = getPeopleService();
// Request 120 connections.
ListConnectionsResponse response = service.people().connections()
.list("people/me")
.setPageSize(120)
// specify fields to be returned
.setRequestMaskIncludeField("person.names,person.emailAddresses,person.phoneNumbers")
.execute();
// Display information about a person.
List<Person> connections = response.getConnections();
if (connections != null && connections.size() > 0) {
for (Person person: connections){
getPersonInfo(person);
}
} else {
System.out.println("No connections found.");
}
}
}
```
|
Converting Object to JSON and JSON to Object in PHP, (library like Gson for Java)
I am developing a web application in PHP,
I need to transfer many objects from the server as JSON strings. Is there any existing library for PHP to convert an object to JSON and a JSON string to an object, like the Gson library for Java?
|
This should do the trick!
```
// convert object => json
$json = json_encode($myObject);
// convert json => object
$obj = json_decode($json);
```
Here's an example
```
$foo = new StdClass();
$foo->hello = "world";
$foo->bar = "baz";
$json = json_encode($foo);
echo $json;
//=> {"hello":"world","bar":"baz"}
print_r(json_decode($json));
// stdClass Object
// (
// [hello] => world
// [bar] => baz
// )
```
If you want the output as an Array instead of an Object, pass `true` to `json_decode`
```
print_r(json_decode($json, true));
// Array
// (
// [hello] => world
// [bar] => baz
// )
```
More about [json\_encode()](http://php.net/manual/en/function.json-encode.php)
See also: [json\_decode()](http://www.php.net/manual/en/function.json-decode.php)
|
Automatically derive mandatory SonarQube properties from pom file in Jenkins
**Situation:**
I want to analyze my project with SonarQube (5.4) triggered by Jenkins (1.642.4). It is a java project build with maven.
I see two ways to trigger the analysis:
1. Post Build Action "SonarQube analysis with maven" but it's **deprecated**, so I don't want to use it
2. Post Build Step "Execute SonarQube Scanner", is the recommended way.
**Problem:**
If I use the deprecated Post Build Action, the properties for sonar project configuration are derived automatically from the project pom.
It I use the recommended Post Build Step, I receive the Exception
>
> You must define the following mandatory properties for 'Unknown': sonar.projectKey, sonar.projectName, sonar.projectVersion, sonar.sources
>
>
>
**Undesired Solution**:
The solution is to provide the required properties, via sonar-project.properties file in the java project or via parameters in Jenkins step.
IMHO: this is duplication. All relevant information is defined in the Maven pom: projectKey can be derived from artifactId, and projectName and projectVersion are the same properties in Maven. Especially the projectVersion is critical. I don't want to update the project version after each release (or write some code in the release plugin to update it automatically).
**What I want**
I want to use the recommended Post Build Step in Jenkins, without redefine all project properties for all my project to make sonar happy. Instead sonar/jenkins/plugin/whatever should derive the properties from my maven pom file. Is there an additional plugin I can use? Can I reconfigure my Jenkins-Sonar-Plugin?
I don't want to provide any sonar specific information in my pom/project, because the project shouldn't care about sonar. It should contain only information required to build the project.
|
[The documentation](http://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzingwithSonarQubeScannerforMaven) (although slightly confusing, see edit below) explains how to use a generic post-build step (leveraging environment variables), instead of the deprecated post-build action. in short:
- install latest SonarQube Plugin (v2.4 as of now) in Jenkins
- in System Config under SonarQube servers: check `Enable injection of SonarQube server configuration as build environment variables`
- in the configuration of your Maven project:
- check `Prepare SonarQube Scanner environment`
- add a post-build step `Invoke top-level Maven targets` and leverage the injected environment variables in the `Goals` field e.g.:
>
> $SONAR\_MAVEN\_GOAL -Dsonar.host.url=$SONAR\_HOST\_URL -Dsonar.login=$SONAR\_AUTH\_TOKEN
>
>
>
**Edit**: when [the documentation](http://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzingwithSonarQubeScannerforMaven) says `The Post-build Action for Maven analysis is deprecated.` , it refers to the old post-build **action** which is not documented anymore. The paragraph after that warning (summarized in this answer) really is the recommended procedure. Illustration [here](http://i68.tinypic.com/x4er8y.png) if it's still not clear.
|
What are the 15 classifications of types in C++?
During a [CppCon2014 conference talk](https://www.youtube.com/watch?v=a0FliKwcwXE&feature=youtu.be&t=2m51s) by Walter E. Brown, he states that there are **15 classifications** of types in C++ that the standard describes.
"15 partitions of the universe of C++ types."
"**void** is one of them." -- Walter E. Brown.
What are the other 14?
---
While digging through the standard, I found the following:
```
// 20.11.4.1
primary type categories:
template <class T> struct is_void;
template <class T> struct is_integral;
template <class T> struct is_floating_point;
template <class T> struct is_array;
template <class T> struct is_pointer;
template <class T> struct is_lvalue_reference;
template <class T> struct is_rvalue_reference;
template <class T> struct is_member_object_pointer;
template <class T> struct is_member_function_pointer;
template <class T> struct is_enum;
template <class T> struct is_union;
template <class T> struct is_class;
template <class T> struct is_function;
// 20.11.4.2
composite type categories:
template <class T> struct is_reference;
template <class T> struct is_arithmetic;
template <class T> struct is_fundamental;
template <class T> struct is_object;
template <class T> struct is_scalar;
template <class T> struct is_compound;
template <class T> struct is_member_pointer;
```
Hmm, that's more than 15. These are type traits anyhow. They are used to test certain properties of types at compile time. For example, an `integer` type would give back true for `is_integral`, `is_fundamental`, and `is_scalar`. Perhaps the 15 are some of the ones listed above and the rest are sub categories to others.
---
Here's my attempt of trying to make a type tree of the language:

My guess:
```
1. void
2. bool
3. char
4. nullptr
5. integral (signed)
6. integral (unsigned)
7. floating
8. enum
9. array
10. class
11. union
12. lvalue reference
13. rvalue reference
14. member object pointer
15. member function pointer
```
But also note that `bool`, `char`, and `enum` are all integral types, so I'm really not very confident in this list.
|
I spoke with Walter directly, and it was simply a miscount.
>
> "Alas, I realized shortly thereafter that I'd miscounted and hence
> committed an off-by-one error during the talk: there are 14 (not 15)
> type classifications. See the list of primary type category
> predicates in clause [meta.unary.cat] in the C++ standard; these
> correspond to the classifications established for the core language in
> [basic.types]." --WEB
>
>
>
That being said, they are:
```
template <class T> struct is_void;
template <class T> struct is_null_pointer; //<- arrived in C++11 (std::nullptr_t)
template <class T> struct is_integral;
template <class T> struct is_floating_point;
template <class T> struct is_array;
template <class T> struct is_pointer;
template <class T> struct is_lvalue_reference;
template <class T> struct is_rvalue_reference;
template <class T> struct is_member_object_pointer;
template <class T> struct is_member_function_pointer;
template <class T> struct is_enum;
template <class T> struct is_union;
template <class T> struct is_class;
template <class T> struct is_function;
```
|
OxyPlot: How to use the axis label formatter and show Y labels?
I'm using OxyPlot in my Xamarin.iOS project for plotting a bar chart.
This is what my graph currently looks like:
[](https://i.stack.imgur.com/3Prkv.png)
Here, instead of the x-axis values which are numbers, I want to show Sun, Mon, Tue, Wed, ...
I can see that CategoryAxis has a method called LabelFormatter which returns `Func<double, string>`, but how do I use it?
And also why are the Y-Axis labels not showing?
```
public class MyClass
{
/// <summary>
/// Gets or sets the plot model that is shown in the demo apps.
/// </summary>
/// <value>My model.</value>
public PlotModel MyModel { get; set; }
/// <summary>
/// Initializes a new instance of the <see cref="OxyPlotSample.MyClass"/> class.
/// </summary>
public MyClass()
{
var model = new PlotModel { Title = "ColumnSeries" };
model.PlotAreaBorderColor = OxyColors.Transparent;
// A ColumnSeries requires a CategoryAxis on the x-axis.
model.Axes.Add(new CategoryAxis()
{
Position = AxisPosition.Bottom,
MinorTickSize = 0,
MajorTickSize = 0,
//MajorGridlineStyle = LineStyle.Solid,
//MinorGridlineStyle = LineStyle.Solid,
});
model.Axes.Add(new LinearAxis()
{
AxislineStyle = LineStyle.None,
Position = AxisPosition.Left,
MinorTickSize = 0,
MajorTickSize = 0,
MajorGridlineStyle = LineStyle.Solid,
MinorGridlineStyle = LineStyle.Solid,
Minimum = 0,
Maximum = 400,
});
var series = new ColumnSeries();
series.Items.Add(new ColumnItem() { Value = 200, Color = OxyColors.Orange});
series.Items.Add(new ColumnItem(200));
series.Items.Add(new ColumnItem(300));
series.Items.Add(new ColumnItem(100));
series.Items.Add(new ColumnItem(200));
series.Items.Add(new ColumnItem(100));
series.Items.Add(new ColumnItem(130));
model.Series.Add(series);
this.MyModel = model;
}
}
```
|
To show the labels on the axis you have to specify the `MajorStep` property; OxyPlot will paint only the labels matching the major step.
```
model.Axes.Add(new LinearAxis()
{
MajorStep = 10,
Position = AxisPosition.Left,
...
});
```
And to modify the labels with the day name, you can use a `DateTimeAxis` instead of `LinearAxis`:
```
model.Axes.Add(new DateTimeAxis()
{
StringFormat = "ddd",
Position = AxisPosition.Bottom,
...
});
```
If you want something more customized you will have to use the `LabelFormatter` attribute.
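As a rough sketch of that route (the day names and the value-to-index mapping here are my own assumptions, not something the question's data dictates):

```
var days = new[] { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
model.Axes.Add(new CategoryAxis()
{
    Position = AxisPosition.Bottom,
    // LabelFormatter receives the axis value and returns the text to draw
    LabelFormatter = value => days[(int)value % days.Length]
});
```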
**EDIT:**
Labels in CategoryAxis:
```
var categoryAxis = new CategoryAxis()
{
Position = AxisPosition.Bottom,
...
};
categoryAxis.ActualLabels.Add("Mon");
categoryAxis.ActualLabels.Add("Tue");
categoryAxis.ActualLabels.Add("Wed");
categoryAxis.ActualLabels.Add("Thu");
categoryAxis.ActualLabels.Add("Fri");
categoryAxis.ActualLabels.Add("Sat");
categoryAxis.ActualLabels.Add("Sun");
Model.Axes.Add(categoryAxis);
```
`CategoryAxis.ActualLabels` is readOnly, so you will have to Add the items one by one.
|
How to get html of a page loaded in QWebEngineView
I am trying to get the HTML of a page loaded in a PyQt5 QWebEngineView. Here is a simple example:
```
import sys
from PyQt5.QtCore import *
from PyQt5.QtWebEngineWidgets import *
from PyQt5.QtWidgets import *
def callback_function(html):
print(html)
def on_load_finished():
web.page().runJavaScript("document.getElementsByTagName('html')[0]", callback_function)
app = QApplication(sys.argv)
web = QWebEngineView()
web.load(QUrl("https://stackoverflow.com"))
web.show()
web.loadFinished.connect(on_load_finished)
sys.exit(app.exec_())
```
I was hoping to be able to return the HTML from the runJavaScript() call, but I get a blank result in the callback function.
What is incorrect in my code and what alternatives are available for obtaining HTML of a page?
|
Using my [old answer](https://stackoverflow.com/questions/57351643/how-to-save-dynamically-generated-web-page-in-qwebengineview/57352932#57352932) written C++ and translating the solution to Python:
```
import sys
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEngineView
from PyQt5.QtWidgets import QApplication
def callback_function(html):
print(html)
def on_load_finished():
web.page().runJavaScript("document.documentElement.outerHTML", callback_function)
app = QApplication(sys.argv)
web = QWebEngineView()
web.load(QUrl("https://stackoverflow.com"))
web.show()
web.resize(640, 480)
web.loadFinished.connect(on_load_finished)
sys.exit(app.exec_())
```
---
**Update:**
The problem in your case is that getElementsByTagName() returns a list of js elements, and that element cannot be exported to python, what you should do is get the innerHTML:
```
import sys
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEngineView
from PyQt5.QtWidgets import QApplication
def callback_function(html):
print(html)
def on_load_finished():
web.page().runJavaScript(
"document.getElementsByTagName('html')[0].innerHTML", callback_function
)
# or document.getElementsByTagName('html')[0].outerHTML
app = QApplication(sys.argv)
web = QWebEngineView()
web.load(QUrl("https://stackoverflow.com"))
web.show()
web.resize(640, 480)
web.loadFinished.connect(on_load_finished)
sys.exit(app.exec_())
```
|
why is there a while loop in put() of LinkedBlockingQueue
```
public void put(E e) throws InterruptedException {
if (e == null) throw new NullPointerException();
int c = -1;
Node<E> node = new Node<E>(e);
final ReentrantLock putLock = this.putLock;
final AtomicInteger count = this.count;
putLock.lockInterruptibly();
try {
while (count.get() == capacity) {
notFull.await();
}
enqueue(node);
c = count.getAndIncrement();
if (c + 1 < capacity)
notFull.signal();
} finally {
putLock.unlock();
}
if (c == 0)
signalNotEmpty();
}
```
why is there a while loop?
All putting threads are shut out by putLock.
No thread can increase 'count' while the waiting thread is holding the putLock.
|
There is a fundamental property of `await` (which applies to the intrinsic locking via `synchronized` and using `Object.wait` as well), you have to understand:
When you invoke `await`, you are *releasing the lock* this `Condition` is associated with¹. There is no way around it, as otherwise, no-one could acquire the lock, make the condition fulfilled, and invoke `signal` on it.
When your waiting thread gets signaled, it does not get the lock back immediately. That would not be possible, as the thread which invoked `signal` still owns it. Instead, the receiver will try to re-acquire the lock, not much different to calling `lockInterruptibly()`.
But this thread is not necessarily the only thread trying to acquire the lock. It doesn’t even have to be the first one. Another thread could have arrived at `put` before the signalling and waiting for the lock at `lockInterruptibly()`. So even if the lock was fair (which locks usually are not), the signaled thread had no precedence. Even if you gave signaled threads precedence, there could be multiple threads being signaled for different reasons.
So another thread arriving at `put` could get the lock before the signaled thread, find that there is space, and store the element without ever bothering with signals. Then, by the time the signaled thread acquired the lock, the condition is not fulfilled anymore. So a signaled thread can never rely on the validity of the condition just because it received a signal and therefore has to re-check the condition and invoke `await` again if not fulfilled.
This makes checking the condition in a loop the standard idiom of using `await`, as documented in [the `Condition` interface](https://docs.oracle.com/javase/8/docs/api/?java/util/concurrent/locks/Condition.html), as well as [`Object.wait`](https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#wait--) for the case of using the intrinsic monitor, just for completeness. In other words, this is not even specific to a particular API.
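In sketch form, the standard idiom looks like this (the names are generic, not the LinkedBlockingQueue fields):

```
lock.lockInterruptibly();
try {
    while (!conditionHolds()) { // re-check after every wakeup, signalled or spurious
        condition.await();      // atomically releases the lock and waits
    }
    // here we own the lock AND the condition is known to hold
    doTheWork();
} finally {
    lock.unlock();
}
```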
Since the condition has to be pre-checked and re-checked in a loop anyway, the specification even allows for *spurious wakeups*, the event of a thread returning from the wait operation without actually receiving a signal. This may simplify lock implementations of certain platforms, while not changing the way a lock has to be used.
¹ It’s important to emphasize that when holding multiple locks, *only* the lock associated with the condition is released.
|
Issue displaying PDF figures created with R on iOS devices
I'm making some plots in R. The resulting PDFs don't display properly on iOS devices like the iPhone. For example, here's a stock ggplot2 figure created as a PDF:
```
library(ggplot2)
mpg.eg <- within(mpg[1:74,], {
model <- reorder(model, cty)
manufacturer <- reorder(manufacturer, -cty)
})
pdf(file="figures/ios-example.pdf")
p <- qplot(cty, model, data=mpg.eg)
p + facet_grid(manufacturer ~ ., scales="free", space="free") +
opts(strip.text.y = theme_text())
dev.off()
```
When viewed on an iPhone, the dots in the dotplot are not displayed. See, e.g., [the resulting pdf](http://kieranhealy.org/files/misc/ios-example.pdf) if you're on an iOS device.
I understand from reading the docs that this is most likely a problem with limited font availability and the vagaries of PDF rendering on iOS, not an issue with pdf creation in R. I had thought that maybe embedding fonts in the PDF with
```
embedFonts("figures/ios-example.pdf")
```
would sort things out, but it doesn't. Is there something I can do to work around this iOS issue beyond just making the figure available in some other format?
|
`embedFonts` by default doesn't embed the standard PDF font set, and therefore doesn't actually make any significant changes to your example PDF. Try instead
```
embedFonts("figures/ios-example.pdf",
options="-dSubsetFonts=true -dEmbedAllFonts=true")
```
and if *that* doesn't work, tack "`-dPDFSETTINGS=/printer`" on there too.
[EDIT August 2020: With current versions of R, another thing to try is switching from the `pdf` device to the `cairo_pdf` device. `cairo_pdf` uses a more sophisticated library for PDF generation and, among other things, it embeds fonts itself.]
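For the figure from the question, the swap is minimal (a sketch; everything else stays the same):

```
library(ggplot2)
cairo_pdf(file="figures/ios-example.pdf")   # instead of pdf(file=...)
p <- qplot(cty, model, data=mpg.eg)
print(p)
dev.off()
```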
For what it's worth, though, your example is displayed correctly on the only iOS device I have to hand (iPad, OS version 4.2.1).
|
Why is setting a field many times slower than getting a field?
I already knew that setting a field is much slower than setting a local variable, but it also appears that setting a field **with** a local variable is much slower than setting a local variable with a field. Why is this? In either case the address of the field is used.
```
public class Test
{
public int A = 0;
public int B = 4;
public void Method1() // Set local with field
{
int a = A;
for (int i = 0; i < 100; i++)
{
a += B;
}
A = a;
}
public void Method2() // Set field with local
{
int b = B;
for (int i = 0; i < 100; i++)
{
A += b;
}
}
}
```
The benchmark results with 10e+6 iterations are:
```
Method1: 28.1321 ms
Method2: 162.4528 ms
```
|
Running this on my machine, I get similar time differences, however looking at the JITted code for 10M iterations, it's clear to see why this is the case:
Method A:
```
mov r8,rcx
; "A" is loaded into eax
mov eax,dword ptr [r8+8]
xor edx,edx
; "B" is loaded into ecx
mov ecx,dword ptr [r8+0Ch]
nop dword ptr [rax]
loop_start:
; Partially unrolled loop, all additions done in registers
add eax,ecx
add eax,ecx
add eax,ecx
add eax,ecx
add edx,4
cmp edx,989680h
jl loop_start
; Store the sum in eax back to "A"
mov dword ptr [r8+8],eax
ret
```
And Method B:
```
; "B" is loaded into edx
mov edx,dword ptr [rcx+0Ch]
xor r8d,r8d
nop word ptr [rax+rax]
loop_start:
; Partially unrolled loop, but each iteration requires reading "A" from memory
; adding "B" to it, and then writing the new "A" back to memory.
mov eax,dword ptr [rcx+8]
add eax,edx
mov dword ptr [rcx+8],eax
mov eax,dword ptr [rcx+8]
add eax,edx
mov dword ptr [rcx+8],eax
mov eax,dword ptr [rcx+8]
add eax,edx
mov dword ptr [rcx+8],eax
mov eax,dword ptr [rcx+8]
add eax,edx
mov dword ptr [rcx+8],eax
add r8d,4
cmp r8d,989680h
jl loop_start
rep ret
```
As you can see from the assembly, Method A is going to be significantly faster since the values of A and B are both put in registers, and all of the additions occur there with no intermediate writes to memory. Method B on the other hand incurs a load and store to "A" in memory for *every single iteration*.
|
Is it possible to run basic Linux on file permissionless file system(ex. Fat32)
So I would like to be able to run Linux on a FAT32 (preferably exFAT) partition. Is this possible?
It appears this has been done [Can I install GNU/Linux on a FAT drive?](https://unix.stackexchange.com/questions/248173/can-i-install-gnu-linux-on-a-fat-drive)
Couldn't shortcuts be used in place of hard and soft symbolic links?
Watching random videos on YouTube, it appears exFAT is faster than NTFS in most cases <https://www.youtube.com/watch?v=fc98Vgc25hM>, which would make me believe it would be a good candidate for a file system.
|
It's technically possible. The [posixovl](https://sourceforge.net/projects/posixovl/) filesystem allows storing files on a FAT filesystem, with extra metadata stored in additional files to implement things that FAT doesn't provide: file names containing characters that FAT forbids or that are too long, additional metadata such as permissions and ownership, other file types such as symbolic links and devices, etc.
That doesn't mean that it's a good idea, though. It would be difficult to set up (I don't know of any distribution that sets it up for you) and slow.
Shortcuts could in theory be read as symbolic links, but this would have several downsides. Someone would need to write a filesystem driver that stores symbolic links as shortcuts. Windows might mess up symbolic links when it edits shortcuts (shortcuts are only very vaguely like symbolic links: symbolic links point to a file path, whereas Windows shortcuts track a file and Windows modifies the shortcut if the target file is moved). Linux would have no way to tell whether a file that looks like a shortcut is in fact intended to be a symbolic link or a regular file.
There used to be a way to install Linux on a disk image which is stored as a single file on a Windows system, called [Wubi](https://en.wikipedia.org/wiki/Wubi_(software)). It has been abandoned. It works, but it too has a number of downsides: lower performance, high risk of losing data if the system crashes, etc.
The normal way to install Linux is the best way: let the installer create a Linux partition. If you really, really don't want to create a Linux partition (for example because your corporate IT management forbids it), run Linux in a virtual machine. With Windows 10, you can run many Linux applications through the [Windows Subsystem for Linux](https://msdn.microsoft.com/en-us/commandline/wsl/install_guide); you can get [a whole Ubuntu userspace](https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the-ubuntu-userspace-for-windows-developers/) that way.
|
android.os.NetworkOnMainThreadException in a Service in a separate process
On ICS, I'm getting an android.os.NetworkOnMainThreadException error when using UrlConnection - even though I'm making this request in a Service that runs in its own process, and it was called asynchronously via a Messenger.
Changing the StrictPolicy had no effect, I still get the error.
What can I do?
edit: this Service runs in a separate process - has a different pid and everything.
|
Services still run on the main thread, as per the the [documentation](http://developer.android.com/reference/android/app/Service.html):
>
> Note that services, like other application objects, run in the main thread of their hosting process. This means that, if your service is going to do any CPU intensive (such as MP3 playback) or blocking (such as networking) operations, it should spawn its own thread in which to do that work. More information on this can be found in Processes and Threads. The IntentService class is available as a standard implementation of Service that has its own thread where it schedules its work to be done.
>
>
>
Therefore for long running tasks (such as using a UrlConnection) you should either use something like an [IntentService](http://developer.android.com/reference/android/app/IntentService.html) or spawn a new Thread yourself.
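A minimal sketch of the thread-spawning route (the class name and bodies are illustrative, not your actual service):

```
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class AudioService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Do the UrlConnection work off the main thread
        new Thread(new Runnable() {
            @Override
            public void run() {
                // open the UrlConnection, read the response, post the result back
            }
        }).start();
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // not a bound service in this sketch
    }
}
```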
|
how do i make doxygen ignore function-like macros when building the documentation?
I need to document a project with doxygen, but I also need to ignore some macros, which I use for readability in small sections of my code and which make no sense to appear in the documentation.
here's a minimal example (I mainly use the macros to index some C-style 2D or 3D arrays):
```
#include <cstring>
/*! \file notes.cpp
\brief A test for macros and doxygen
*/
/// my main function
int main ()
{
double loc_arr[9][4][4];
memset (loc_arr, 0.0, 144 * sizeof (double));
#define BLOCK(i) (&(loc_arr[i][0][0]))
for (int ii = 0; ii < 9; ++ii)
{
memset (BLOCK(ii), double(ii), 16 * sizeof (double));
}
#undef BLOCK
return 1;
}
```
When I doxy it, with the following settings:
```
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = NO
SEARCH_INCLUDES = YES
INCLUDE_PATH =
INCLUDE_FILE_PATTERNS =
PREDEFINED =
EXPAND_AS_DEFINED =
SKIP_FUNCTION_MACROS = YES
```
I obtain this:
[](https://i.stack.imgur.com/LVZ5i.jpg)
Note that the only way I could avoid having the macros documented was by setting `ENABLE_PREPROCESSING = NO`, which is a problem for me since it also eliminates the inclusion graph at the top of the page.
|
**Option 1.** You can use the preprocessor symbol `DOXYGEN_SHOULD_SKIP_THIS`, as described in the [doxygen FAQ](http://www.doxygen.nl/manual/faq.html)
```
How can I make doxygen ignore some code fragment?
The new and easiest way is to add one comment block with a
\cond command at the start and one comment block with a
\endcond command at the end of the piece of code that should be
ignored. This should be within the same file of course.
But you can also use doxygen's preprocessor for this: If you put
#ifndef DOXYGEN_SHOULD_SKIP_THIS
/* code that must be skipped by Doxygen */
#endif /* DOXYGEN_SHOULD_SKIP_THIS */
around the blocks that should be hidden and put:
PREDEFINED = DOXYGEN_SHOULD_SKIP_THIS
in the config file then all blocks should be skipped by doxygen
as long as ENABLE_PREPROCESSING is set to YES.
```
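Applied to the macro from the question, option 1 would look roughly like this (only the guard lines are new):

```
#ifndef DOXYGEN_SHOULD_SKIP_THIS
#define BLOCK(i) (&(loc_arr[i][0][0]))
#endif /* DOXYGEN_SHOULD_SKIP_THIS */
```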
**Option 2.** You can use the
[EXCLUDE\_SYMBOLS](https://stackoverflow.com/a/3733023/994153) configuration option of doxygen.
|
Positive Lookbehind greedy
I think I have some misunderstanding about how a positive Lookbehind works in Regex, here is an example:
```
12,2 g this is fully random
89 g random string 2
0,6 oz random stuff
1 really random stuff
```
Let's say I want to match everything after the measuring unit, so I want "this is fully random", "random string 2", "random stuff" and "really random stuff".
In order to do that I tried the following pattern:
```
(?<=(\d(,\d)?) (g|oz)?).*
```
But as "?" means 0 or 1, it seems that the pattern prioritizes 0 over 1 in that case - So I get:
[](https://i.stack.imgur.com/SH05o.png)
But the measuring unit has to stay "optional" as it won't necessary be in the string (cf fourth instance)...
Any idea on how to deal with that issue? Thanks!
|
It would be easier to look at the positions that it matches to see what happens. The assertion `(?<=(\d(,\d)?) (g|oz)?)` is true at a position where what is directly to the left is `(\d(,\d)?)` followed by a space and an optional `(g|oz)`.
The pattern goes from left to right, and the assertion is true at multiple places. But at the first place it encounters, it matches `.*` meaning 0+ times any char and will match until the end of the line.
See the positions [on regex101](https://regex101.com/r/18Vryo/1)
What you might do instead is match the digit part and make the space followed by `g` or `oz` optional and use a capturing group for the second part.
```
\d+(?:,\d+)?(?: g| oz)? (.*)
```
[Regex demo](https://regex101.com/r/LWodpF/1)
|
How to add parenthesis to queries in querydsl?
I'm using `querydsl` for SQL selects. Now I have a native query that I want to transform to querydsl. It consists of two OR statements in parentheses, followed by an AND query that thereby applies to both OR parts.
I need the following:
```
((q1 AND q2) OR (q3 AND q4)) AND q5
```
In querydsl I can write for example:
`BooleanExpression query = q1.and(q2).or(q3.and(q4)).and(q5)`
But that's not the same. What I really want is (note parenthesis):
`BooleanExpression query = ((q1.and(q2)).or((q3.and(q4)))).and(q5)`
Question: how can I achieve this in java? How can I add those parenthesis logically to a querydsl expression.
|
Let's assume we have three `BooleanExpression`s and we want to express `(q1 || q2) && q3`.
If you write `q1.or(q2).and(q3)`, then the `and` statement is the top-level operation and `q1.or(q2)` is first evaluated and then used as the left side of the expression. This is due to the fact that the operations are evaluated in the order they appear. Remember, this is Java syntax. Hence, it would be valid to use the statement.
However, if you do not like the notation, an alternative would be to simply write the following:
```
q3.and(q1.or(q2));
```
See [these examples](https://gist.github.com/timowest/5098112) for further information.
|
How can I use decorators today?
I see decorators already being used today in some JavaScript code. My question is really twofold.
First:
If decorators have not even been finalized how is it possible to use them in production code, today? Won't browser support be non-existent?
Second:
Given it is possible to use it today, as some open source projects would suggest, what's a typically recommended setup for getting decorators to work?
|
You're right, ES2016 decorators are not yet part of the spec. But it doesn't mean we can't use it today.
First let's take a step back and go over "what is a decorator". Decorators are simply wrappers that add behavior to an object. It's not a new concept in javascript (or programming in general), it's actually been around for a while...
Here's a basic example of a decorator that checks permissions:
```
function AuthorizationDecorator(protectedFunction) {
return function() {
if (user.isTrusted()) {
protectedFunction();
} else {
console.log('Hey! No cheating!');
}
}
}
```
Using it would look like this:
```
AuthorizationDecorator(save);
```
You see all we're doing is simply wrapping up some other function. You can even pass a function through multiple decorators each adding a piece of functionality or running some code.
You can even find some [old articles](http://addyosmani.com/blog/decorator-pattern/) explaining the decorator pattern in javascript.
Now that we understand decorators are actually something we (javascript community) were always able to do, it probably comes as no shock that really when we utilize ES2016 decorators today they are simply just being compiled down to ES5 code hence why you maintain browser compatibility. So for the time being it is simply syntactic sugar (some really sweet sugar I might add).
As for which compiler to use to convert your ES2016 code to ES5 code, you have some choices: [Babel](https://babeljs.io/) and [Traceur](https://github.com/google/traceur-compiler) are the most popular.
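For instance, with a Babel 6 setup you would enable a decorator transform in `.babelrc`, roughly like this (the names assume the Babel 6-era packages `babel-preset-es2015` and `babel-plugin-transform-decorators-legacy` are installed):

```
{
  "presets": ["es2015"],
  "plugins": ["transform-decorators-legacy"]
}
```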
Here's further reading on [Exploring ES2016 Decorators](https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841#.ku11ts7sj).
|
Display django form error for each field in template and each correct data in template
I am new to Django and I am trying to validate form data; if any error occurs, I want to transfer that data back to the template page with the old values, so they can be displayed in the form. If possible, I also want the error message for each field in a dictionary.
views.py
```
from django.shortcuts import render
from .forms import RegForm
# Create your views here.
def login(request):
return render(request, 'login.html')
def registration(request):
if request.method == 'POST':
form = RegForm(request.POST)
if form.is_valid():
print "Form is valid"
print form.cleaned_data
user_data = {
'firstname':form.cleaned_data['firstname'],
'lastname':form.cleaned_data['secondname'],
'username': form.cleaned_data['username'],
'mail':form.cleaned_data['mail'],
'password': form.cleaned_data['password']}
print user_data
else:
print "Form is not valid"
print form['mail'].errors
return render(request, 'register.html')
```

forms.py

```
from django import forms
class RegForm(forms.Form):
firstname = forms.CharField(max_length=100)
secondname = forms.CharField(max_length=100)
username = forms.CharField(max_length=100, min_length=8)
mail = forms.EmailField()
password = forms.CharField(widget = forms.PasswordInput(), min_length=8, max_length=100)
```
register.html
```
<!DOCTYPE html>
<html>
<head>
<title>registration</title>
<style type="text/css">
header{
width: 100%;
display: block;
}
section{
width: 180px;
margin: auto;
display: block;
}
nav{
width: 180px;
float: right;
display: block;
}
</style>
</head>
<body>
<header>
<nav>
<a href="{% url 'registration' %}">Registration</a>
<a href="{% url 'login' %}">Login</a>
</nav>
</header>
<section>
<h1 align="center">Register:</h1>
<form method = "post" action = "{% url 'registration' %}">
{% csrf_token %}
<label>First Name:</label><br>
<input type="text" name="firstname"><br>
<label>Second Name:</label><br>
<input type="text" name="secondname"><br>
<label>Mail Id:</label><br>
<input type="text" name="mail"><br>
<label>User Name:</label><br>
<input type="text" name="username" value=""><br>
<label>Password:</label><br>
<input type="password" name="password"><br><br>
<input type="submit" name="submit" value="submit">
</form>
</section>
</body>
</html>
```
|
From your `views.py`, pass the form object to the template
```
...
return render(request, 'register.html', { 'form': form })
...
```
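For completeness, here is a minimal sketch (using your `RegForm` and the imports already in your `views.py`) of how the whole `registration` view could look, so that `form` is defined on both GET and POST and always reaches the template:
```
def registration(request):
    if request.method == 'POST':
        form = RegForm(request.POST)
        if form.is_valid():
            # use form.cleaned_data here (e.g. create the user), then redirect
            ...
    else:
        form = RegForm()  # unbound form for the initial GET
    # a bound-but-invalid form carries both the submitted values and the errors
    return render(request, 'register.html', {'form': form})
```
Because the invalid form is re-rendered, the old values and the per-field errors show up automatically when you render the fields as shown below.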
Now, in the template, you can use the object `form` for rendering purposes.
For example, for your `First Name` field you were doing:
```
...
<label>First Name:</label><br>
<input type="text" name="firstname"><br>
...
```
You should do this instead:
```
...
<label for="{{ form.firstname.id_for_label }}">First Name:</label>
{{ form.firstname }}
{{ form.firstname.errors }}
...
```
`{{ form.firstname }}` renders your HTML input tag, while `{{ form.firstname.errors }}` renders the errors for this field.
Finally, you could also use the template tag `{{ form.non_field_errors }}` (usually at the end or at the top of the form) to render other errors.
The documentation is very clear on this topic. Have a look at [rendering-fields-manually](https://docs.djangoproject.com/en/1.8/topics/forms/#rendering-fields-manually) for further information.
|
Draw gradient along a curved UIBezierPath
In an app, I draw a curved UIBezierPath in an MKOverlayPathView class to show flight routes. This is the code I am using:
>
>
> ```
> - (UIBezierPath *)pathForOverlayForMapRect:(MKMapRect)mapRect {
>
> ... bla bla bla ...
>
> UIBezierPath *path = [UIBezierPath bezierPath];
> [path moveToPoint:s];
> [path addQuadCurveToPoint:e controlPoint:cp1];
> [path addLineToPoint:e2];
> [path addQuadCurveToPoint:s2 controlPoint:cp2];
> [path closePath];
>
> return path;
> }
>
> ```
>
>
```
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
self.mapRect = mapRect;
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
CGContextSetLineWidth(context, mapRect.size.height/700);
CGContextSetLineJoin(context, kCGLineJoinRound);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextAddPath(context, [self pathForOverlayForMapRect:mapRect].CGPath);
[self updateTouchablePathForMapRect:mapRect];
CGContextDrawPath(context, kCGPathFillStroke);
}
```
This is working just fine but I would like to draw a gradient along that path instead of just a fill color. And this is where it is starting to get very tricky.
I have experimented with CGContextDrawLinearGradient() but it hasn't got me anywhere useful yet.
|
The trick is to use the stroke path of the line (`CGContextReplacePathWithStrokedPath`) and clip it (`CGContextClip`) to restrict the gradient to the path:
```
// Create a gradient from white to red
CGFloat colors [] = {
1.0, 1.0, 1.0, 1.0,
1.0, 0.0, 0.0, 1.0
};
CGColorSpaceRef baseSpace = CGColorSpaceCreateDeviceRGB();
CGGradientRef gradient = CGGradientCreateWithColorComponents(baseSpace, colors, NULL, 2);
CGColorSpaceRelease(baseSpace), baseSpace = NULL;
CGContextSetLineWidth(context, mapRect.size.height/700);
CGContextSetLineJoin(context, kCGLineJoinRound);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextAddPath(context, [self pathForOverlayForMapRect:mapRect].CGPath);
CGContextReplacePathWithStrokedPath(context);
CGContextClip(context);
[self updateTouchablePathForMapRect:mapRect];
// Define the start and end points for the gradient
// This determines the direction in which the gradient is drawn
CGPoint startPoint = CGPointMake(CGRectGetMidX(rect), CGRectGetMinY(rect));
CGPoint endPoint = CGPointMake(CGRectGetMidX(rect), CGRectGetMaxY(rect));
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, 0);
CGGradientRelease(gradient), gradient = NULL;
```
|
Bind keys for scrolling pages in copy mode in tmux
I'd like to be able to map emacs keys (like `C-v`/`M-v`) for scrolling in tmux instead of default `PgUp`/`PgDown`, is that possible? Can't see that from the manual at the moment (apologies if its there, seems like a such a natural thing considering the rest of the emacs-like navigation key bindings tmux uses)
|
By default, the `emacs-copy` key binding table has all of C-v, Page Down (`NPage`), and Space bound to `page-down` as well as both M-v and Page Up (`PPage`) bound to `page-up`.
You can check your bindings with `tmux list-keys -t emacs-copy | grep -i page`.
If these bindings are missing you can reestablish them by hand (e.g. in your `~/.tmux.conf`):
```
bind-key -t emacs-copy C-v page-down
bind-key -t emacs-copy M-v page-up
```
But since these are the default, you will need to track down where they are being changed/removed before you will know where to put the above commands to make them effective (they will need to come after whatever else is modifying the bindings).
Are you sure your `mode-keys` option is set to `emacs`? It does default to `emacs`, but *tmux* will set it to `vi` (along with `status-keys`) if you have the VISUAL environment variable set and its value has `vi` in it†, or if you do not have VISUAL set but do have EDITOR set and its value has `vi` in it.
You can check your global `mode-keys` value with `tmux show-options -g -w | grep mode-keys`. You may also have a per-window `mode-keys` value (omit the `-g` to check its value; you may use `-t` to target another window if you can not run the command in the window itself).
If you want to override the “auto-detection” and always use the `emacs` binding tables, then you can put these lines in your `~/.tmux.conf`:
```
set-option -g status-keys emacs
set-option -gw mode-keys emacs
```
---
† The “has `vi` in it” test is actually more like “`vi` occurs after the last `/` (or anywhere if `/` does not occur in the value)”. This means that a value like `/opt/vital/bin/emacs` will not count as `vi` (despite the `vi` in `vital`).
|
finding the mapping between video point and real world point
I am doing car tracking on a video. I am trying to determine how many meters it traveled.
I randomly pulled 7 points from a video frame. I made point 1 my origin.
Then, on the corresponding Google Maps perspective, I calculated the distances of the other 6 points from the origin (delta x and delta y).
Then I ran the following
```
pts_src = np.array([[417, 285], [457, 794], [1383, 786], [1557, 423], [1132, 296], [759, 270], [694, 324]])
pts_dst = np.array([[0,0], [-3, -31], [30, -27], [34, 8], [17, 15], [8, 7], [6, 1]])
h, status = cv2.findHomography(pts_src, pts_dst)
a = np.array([[1032, 268]], dtype='float32')
a = np.array([a])
# finally, get the mapping
pointsOut = cv2.perspectiveTransform(a, h)
```
When I tested the mapping of point 7, the results are wrong.
Am I missing anything? Or am I using the wrong method?
Thank you
Here is the image from the video
[](https://i.stack.imgur.com/rnLwr.jpg)
I have marked the points and here is the mapping
[](https://i.stack.imgur.com/Uu5Kt.png)
The x,y columns represent the pixels on the image. The metered column represents the distance from the origin to the point in meters. Basically, using Google Maps, I converted the geocodes to UTM and calculated the x and y differences.
I tried to input the 7th point and got [[[14.682752 9.927497]]] as output, which is quite far off on the x axis.
Any idea if I am doing anything wrong?
|
Cameras are not ideal pinhole cameras and therefore the homography cannot capture the real transform.
For small-angle cameras the results are quite close, but for a fish-eye camera the result can be very far off.
Also, in my experience, the theoretical lens distortion model found in the literature is not very accurate with real-world lenses (multi-element designs that do "strange" things to compensate for barrel/cushion distortion). The use of non-spherical lenses is also viable today, where the transformation can be just about anything.
To be able to get accurate results the only solution I found was actually mapping the transformation function using an interpolating spline function.
# EDIT
In your case I'd say the problem is in the input data: considering the quasi-quadrilateral formed by the points 6, 3, 1, 2
[](https://i.stack.imgur.com/5aIta.jpg)
If the A-D distance in meters is 36.9, how can B-C distance be 53.8 meters?
Maybe the problem is in how you collected the data, or Google Maps shouldn't be considered reliable for such small measurements.
A solution could be just measuring the relative distances of the points and then finding their coordinates on the plane solving from that distance matrix.
# EDIT
To check, I wrote a simple non-linear least squares solver (it works by stochastic hill climbing) and used a picture of my floor to test it.
After a few seconds (it's written in Python, so speed is not its best feature) it can solve a general pinhole planar camera equation:
```
pixel_x = (world_x*m11 + world_y*m12 + m13) / w
pixel_y = (world_x*m21 + world_y*m22 + m23) / w
w = (world_x*m31 + world_y*m32 + m33)
m11**2 + m12**2 + m13**2 = 1
```
and I can get a camera with less than 4 pixels maximum error (on a 4k image).
[](https://i.stack.imgur.com/Y1d9A.jpg)
With **YOUR** data however I cannot get an error smaller than 120 pixels.
The best matrix I found for your data is:
```
0.0704790534896005 -0.0066904288370295524 0.9974908226049937
0.013902632209214609 -0.03214426521221147 0.6680756144949469
6.142954035443663e-06 -7.361135651590592e-06 0.002007213927080277
```
Solving your data using only points 1, 2, 3 and 6, I of course get an exact numeric solution (with four points in general position there is exactly one planar camera), but the image is clearly completely wrong (the grid should lie on the street plane):
[](https://i.stack.imgur.com/dTb0X.png)
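For reference, here is a minimal sketch (in Python/NumPy, with assumed parameter names and an assumed step-size schedule; it illustrates the stochastic hill-climbing idea rather than reproducing the exact solver I used) of fitting such a planar camera matrix by minimizing the maximum reprojection error:
```
import numpy as np

def project(m, world_pts):
    # apply the 3x3 planar camera matrix m to Nx2 world points, return Nx2 pixel points
    pts = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    proj = pts @ m.T
    return proj[:, :2] / proj[:, 2:3]

def fit_camera(world_pts, pixel_pts, iters=200000, seed=0):
    # stochastic hill climbing: perturb the matrix, keep changes that reduce the max pixel error
    rng = np.random.default_rng(seed)
    m = np.eye(3)
    best = np.max(np.linalg.norm(project(m, world_pts) - pixel_pts, axis=1))
    step = 1.0
    for _ in range(iters):
        cand = m + rng.normal(scale=step, size=(3, 3))
        cand /= np.linalg.norm(cand[0])   # keep the normalisation m11^2 + m12^2 + m13^2 = 1
        err = np.max(np.linalg.norm(project(cand, world_pts) - pixel_pts, axis=1))
        if np.isfinite(err) and err < best:
            m, best = cand, err
        else:
            step *= 0.9999                # slowly shrink the perturbation size
    return m, best

# usage sketch with the data from the question (world coordinates in metres, pixels from the image):
# m, err = fit_camera(np.array(pts_dst, dtype=float), np.array(pts_src, dtype=float))
```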
|
Error rate not returning any data when grouped by HTTP status
I have a graph in Grafana using a Prometheus data source for displaying the error rates from my API. This is working fine with this query:
```
sum(rate(va_request_response_code{endpoint="api", statusCode!="200"}[5m])) by (exported_endpoint, statusCode)
```
I get the rate of non-200 response codes from my API and then sum those rates as I have 3 instances of my API running. I then group these by `exported_endpoint` (the actual path a user requested) and `statusCode` which is the exact HTTP status code returned.
Although this is a little useful as a raw number it is not that helpful. Having 100 errors per second out of 100 requests is very bad, having 100 errors out of 1,000,000 requests is less of an issue. Therefore I wanted to divide by the total number of requests but am struggling. If I do:
```
(sum(rate(va_request_response_code{endpoint="api", statusCode!="200"}[5m])) by (exported_endpoint, statusCode))/(sum(rate(va_request_response_code{endpoint="api"}[5m])) by (exported_endpoint))
```
Then it just displays `No data points`, which makes sense: in the first case it is grouping by statusCode in the dividend but not in the divisor, so it must be trying to find matches before dividing. Instead, doing this does work:
```
(sum(rate(va_request_response_code{endpoint="api", statusCode!="200"}[5m])) by (exported_endpoint))/(sum(rate(va_request_response_code{endpoint="api"}[5m])) by (exported_endpoint))
```
But I then lose the grouping by status code which I want. Ideally I would like to know that x% of the requests return `404` and y% return `500`. Is this possible?
|
You need to write it like this:
```
sum by (exported_endpoint, statusCode) (rate(va_request_response_code{endpoint="api", statusCode!="200"}[5m]))
/ ignoring(statusCode) group_left
sum by (exported_endpoint) (rate(va_request_response_code{endpoint="api"}[5m]))
```
I.e. take the two vectors, left one with a `statusCode` label, right one without; divide them ignoring the `statusCode` label; then apply the labels of the left vector to the result.
Here's [the link](https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches) to the Prometheus documentation of many-to-one and one-to-many matching in PromQL.
|
SQLAlchemy throwing integrity error, "update or delete on table violates foreign key constraint"
I am trying to set-up a cascade delete on a join relationship. However, whenever I try to delete a post in my application, I receive an error saying that, "update or delete on table "post" violates foreign key constraint" Here is a photo of the error message:
[](https://i.stack.imgur.com/Xi9tR.png)
Here is my code:
```
class Post(db.Model):
"""Blog posts"""
__tablename__ = "posts"
id = db.Column(db.Integer,
primary_key=True,
autoincrement=True)
title = db.Column(db.String(25),
nullable=False)
content = db.Column(db.String(500),
nullable=False)
created_at = db.Column(db.DateTime,
default=db.func.current_timestamp())
user_table = db.Column(db.Integer,
db.ForeignKey('users.id',
ondelete='CASCADE'))
tags = db.relationship('Tag',
secondary="post_tags",
back_populates="posts",
cascade="all, delete"
)
class Tag(db.Model):
__tablename__ = "tags"
id = db.Column(db.Integer,
primary_key=True,
autoincrement=True)
name = db.Column(db.String,
unique=True)
posts = db.relationship('Post',
secondary="post_tags",
back_populates="tags")
class Post_Tag(db.Model):
__tablename__ = "post_tags"
post_id = db.Column(db.Integer,
db.ForeignKey('posts.id'), primary_key=True)
tag_id = db.Column(db.Integer,
db.ForeignKey('tags.id'), primary_key=True)
```
Based on the documentation and other questions I've viewed, I seem to be setting this up correctly. What am I doing wrong here?
**UPDATE**
I can delete Tags, but cannot delete Posts
|
You may be receiving this error because you're using `backref` instead of `back_populates`...
***Also, I'd suggest defining your relationship bidirectionally***, meaning in both the parent `Post` and child `Tag` models. This allows cascade deletion to the secondary table with different rules depending on which object is being deleted.
The following changes to your models should fix the error you're receiving:
```
# Modify your tags relationship to the following:
class Post(db.Model):
...
tags = db.relationship(
'Tag',
secondary="post_tags",
back_populates="posts", # use back-populates instead of backref
cascade="all, delete"
)
# Also, define your relationship from your tag model
class Tag(db.Model):
__tablename__ = "tags"
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
name = db.Column(db.String, unique=True)
posts = db.relationship(
'Post',
secondary="post_tags",
back_populates="tags", # use back-populates instead of backref
# When a parent ("post") is deleted, don't delete the tags...
passive_deletes=True
)
```
|
Does store.loadData fire the load event on the store in ExtJS 4.2.5?
**Does store.loadData fire the load event on the store in ExtJS 4.2.5?**
The documentation states it does: <http://docs-devel.sencha.com/extjs/4.2.5/#!/api/Ext.data.Store-method-loadData>
But given what I've experienced it doesn't look like it does and also looking at the source code:
```
loadData: function(data, append) {
var length = data.length,
newData = [],
i;
//make sure each data element is an Ext.data.Model instance
for (i = 0; i < length; i++) {
newData.push(this.createModel(data[i]));
}
this.loadRecords(newData, append ? this.addRecordsOptions : undefined);
},
```
ok so probably loadRecords is firing it right?
```
loadRecords: function(records, options) {
var me = this,
i = 0,
length = records.length,
start,
addRecords,
snapshot = me.snapshot;
if (options) {
start = options.start;
addRecords = options.addRecords;
}
if (!addRecords) {
delete me.snapshot;
me.clearData(true);
} else if (snapshot) {
snapshot.addAll(records);
}
me.data.addAll(records);
if (start !== undefined) {
for (; i < length; i++) {
records[i].index = start + i;
records[i].join(me);
}
} else {
for (; i < length; i++) {
records[i].join(me);
}
}
/*
* this rather inelegant suspension and resumption of events is required because both the filter and sort functions
* fire an additional datachanged event, which is not wanted. Ideally we would do this a different way. The first
* datachanged event is fired by the call to this.add, above.
*/
me.suspendEvents();
if (me.filterOnLoad && !me.remoteFilter) {
me.filter();
}
if (me.sortOnLoad && !me.remoteSort) {
me.sort();
}
me.resumeEvents();
if (me.isGrouped()) {
me.constructGroups();
}
me.fireEvent('datachanged', me);
me.fireEvent('refresh', me);
},
```
It's not fired there either.
**Is this a known issue or am I missing something ?**
|
`loadData` does not fire the `load` event, because the `load` event fires whenever the store reads data from a **remote** data source ([docs](http://docs.sencha.com/extjs/4.2.1/#!/api/Ext.data.Store-event-load)). `loadData` works with **local** data provided in its first argument.
You've found an inaccuracy in the documentation. I followed your research in version 4.1.1 and I couldn't find `loadData` firing the `load` event there.
However `loadRawData` used to fire the `load` event in 4.1.1, but it doesn't anymore in 4.2.5. In the [source code](http://docs.sencha.com/extjs/4.2.5/source/Store.html#Ext-data-Store-method-loadRawData) you can read:
*As of 4.2, this (loadRawData) method will no longer fire the {@link #event-load} event.*
|
Change parent label background when checkbox is checked
I want the selection (`:checked`) to have a different background colour.
Here my example code:
```
label {
display: block;
background: gray;
border:1px solid black;
margin-bottom:5px;
padding:10px;
cursor: pointer;
}
label + [type="checkbox"]:checked {
background: yellow;
}
```
```
<div class="poll">
<label class="a"><input type="checkbox" /> Yes</label>
<label class="a"><input type="checkbox" /> No</label>
</div>
```
My question is related to: [Design survey form](https://stackoverflow.com/questions/75144312/design-survey-form/75144606?noredirect=1#75144695)
Also take into account the situation where the input is present in the DOM, but hidden, so the label is still clickable and toggles it.
|
### You can use the [`:has`](https://developer.mozilla.org/en-US/docs/Web/CSS/:has) pseudo selector for that:
[](https://i.stack.imgur.com/iMUGQ.png)
```
label {
display: block;
background: gray;
border:1px solid black;
margin-bottom:5px;
padding:10px;
cursor: pointer;
user-select: none;
}
label:has([type="checkbox"]:checked) {
background: yellow;
}
```
```
<div class="poll">
<label class="a"><input type="checkbox" /> Yes</label>
<label class="a"><input type="checkbox" /> No</label>
</div>
```
### If you need browser backward-compatibility support, here's a trick:
```
label {
display: block;
background: gray;
border:1px solid black;
margin-bottom:5px;
padding:10px;
cursor: pointer;
overflow: hidden; /* Important so the outline won't overflow */
user-select: none;
}
label :checked {
outline: 9999px solid yellow;
}
```
```
<div class="poll">
<label><input type="checkbox" /> Yes</label>
<label><input type="checkbox" /> No</label>
</div>
```
---
### OP asked me in the comments for a solution tailored for a hidden checkbox.
All that is needed is to use the [`hidden`](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/hidden) attribute on the input.
Never use CSS `display: none` to not render an input element.
It is preferable to use the [designated attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/hidden).
Note that this also has a bug in Firefox. Below is a Firefox-hack to overcome it.
```
label {
display: block;
background: gray;
border:1px solid black;
margin-bottom:5px;
padding:10px;
cursor: pointer;
user-select: none;
}
label:has([type="checkbox"]:checked) {
background: yellow;
}
```
```
<div class="poll">
<label><input type="checkbox" hidden /> Yes</label>
<label><input type="checkbox" hidden /> No</label>
</div>
```
### Firefox-supported solution:
Extract each `input` to be above the `label` element, and use a unique `id` attribute on each `input` element, and a matching `for` attribute on each `label`:
```
label {
display: block;
background: gray;
border:1px solid black;
margin-bottom:5px;
padding:10px;
cursor: pointer;
user-select: none;
}
:checked + label {
background: yellow;
}
```
```
<div class="poll">
<input type="checkbox" id='poll-1' hidden/>
<label for='poll-1'>Yes</label>
<input type="checkbox" id='poll-2' hidden/>
<label for='poll-2'>No</label>
</div>
```
|
How to get previous and current row value using recursive CTE?
Consider the below
```
Id Nums
1 10
2 20
3 30
4 40
5 50
```
The expected output
```
Id CurrentValue PreviousValue
1 10 Null
2 20 10
3 30 20
4 40 30
5 50 40
```
I am trying with the below but no luck
```
;With Cte(Id,CurrValue,PrevValue) As
(
Select
Id
,CurrentValue = Nums
,PreviousValue = Null
From @t Where Id = 1
Union All
Select
t.Id
,c.CurrValue
,c.PrevValue
From Cte c
Join @t t On c.Id <= t.Id + 1
)
Select *
From Cte
```
Help needed
|
This assumes increasing ID values and will deal with gaps in ID
```
SELECT
ID,
This.Number AS CurrentValue,
Prev2.Number AS PreviousValue
FROM
myTable This
OUTER APPLY
(
SELECT TOP 1
Number
FROM
myTable Prev
WHERE
Prev.ID < This.ID -- change to number if you want
ORDER BY
Prev.ID DESC
) Prev2;
```
OR
```
WITH CTE
AS (SELECT ID,
Number,
ROW_NUMBER() OVER (ORDER BY ID) AS rn
FROM Mytable)
SELECT ID,
This.Number AS CurrentValue,
Prev.Number AS PreviousValue
FROM CTE This
LEFT JOIN CTE Prev
ON Prev.rn + 1 = This.rn;
```
And for SQL Server 2012
```
SELECT
ID,
Number AS CurrentValue,
LAG(Number) OVER (ORDER BY ID) AS PreviousValue
FROM
MyTable
```
|
DotNetZip - Cannot access a closed Stream
None of the similar questions are quite what I'm looking for!
What's wrong with the following code?
`files` is a text array of file contents, and `fileNames` is the corresponding filename array.
This code always fails at the second-last line with the Save method, but I can't see why the stream would be closed!
```
result = new MemoryStream();
using (ZipFile zipFile = new ZipFile())
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
using (MemoryStream fs = new MemoryStream(bytes))
{
zipFile.AddEntry(fileNames[i], fs);
}
}
zipFile.Save(result);
}
```
Thanks for any help - getting desperate here!
This is my solution based on @spender's first comment, although his solution posted below is possibly nicer.
```
try
{
result = new MemoryStream();
List<Stream> streams = new List<Stream>();
if (files.Count > 0)
{
using (ZipFile zipFile = new ZipFile())
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
streams.Add(new MemoryStream(bytes));
zipFile.AddEntry(fileNames[i], streams[i]);
}
zipFile.Save(result);
}
}
}
catch (Exception ex)
{
throw;
}
```
|
It seems that calling `Save` is the point at which the source streams are read. This means you have to keep them undisposed until after the save. Abandon the `using` statement in this case, as it is impossible to extend its scope beyond the loop. Instead, collect your IDisposables and dispose of them once the save is completed.
```
result = new MemoryStream();
using (ZipFile zipFile = new ZipFile())
{
List<IDisposable> memStreams = new List<IDisposable>();
try
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
MemoryStream fs = new MemoryStream(bytes);
zipFile.AddEntry(fileNames[i], fs);
memStreams.Add(fs);
}
zipFile.Save(result);
}
finally
{
foreach(var x in memStreams)
{
x.Dispose();
}
}
}
```
|
REST API Authorization & Authentication (web + mobile)
I've read about oAuth, Amazon REST API, HTTP Basic/Digest and so on but can't get it all into "single piece". This is probably the closest situation - [Creating an API for mobile applications - Authentication and Authorization](https://stackoverflow.com/questions/3963877/creating-an-api-for-mobile-applications-authentication-and-authorization)
I would like to build an API-centric website - a service. So (in the beginning) I would have an API at the center, and the **website** (PHP + MySQL) would connect via **cURL**, and **Android** and **iPhone** via their network interfaces. So 3 main clients - 3 API keys. Any other developer could also develop against the API interface, and they would get their own API key. API actions would be accepted/rejected based on userLevel status: if I'm an admin I can delete anything etc., while all others can manipulate only their own local (account) data.
First, authorization - should I use oAuth + xAuth or some kind of my own implementation (see <http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/RESTAuthentication.html?r=9197>)? As I understand it, on **Amazon's service the user is == the API user (has an API key)**. On my service I need to separate standard users/accounts (those who register on the website) from Developer Accounts (who should have their own API key).
So I would first need to **authorize the API key** and then **authenticate the user** itself. If I use Amazon's scheme to check developers' API keys (authorize their app), which scheme should I use for user authentication?
I read about getting a token via `api.example.org/auth` after posting my username and password (via **HTTPS**, HTTP Basic) and then forwarding it on every following request. How do I manage tokens if I'm logged in simultaneously on **Android** and the **website**? What about a man-in-the-middle attack if I'm using SSL only on the first request (when the username and password are transmitted) and just HTTP on every other? Isn't that a problem in this example: [Password protecting a REST service?](https://stackoverflow.com/questions/8562223/password-protecting-a-rest-service)
|
As always, the best way to protect a key is not to transmit it.
That said, we typically use a scheme, where every "API key" has two parts: A non-secret ID (e.g. 1234) and a secret key (e.g. byte[64]).
- If you give out an API key, store it (salted and hashed) in your service's database.
- If you give out user accounts (protected by password), store the
passwords (salted and hashed) in your service's database
Now when a consumer **first** accesses your API, to connect, have him
- Send a "username" parameter ("john.doe" not secret)
- Send a "APIkeyID" parameter ("1234", not secret)
and give him back
- the salts from your database (in case one of the parameters is wrong, just give back some repeatable salt, e.g. sha1(username+"notverysecret"))
- The timestamp of the server
The consumer should store the salt for session duration to keep things fast and smooth, and he should calculate and keep the time offset between client and server.
The consumer should now calculate the salted hashes of the API key and password. This way the consumer has the exact same hashes for the password and API key as what is stored in your database, but without anything secret ever going over the wire.
Now when a consumer **subsequently** accesses your API, to do real work, have him
- Send a "username" parameter ("john.doe" not secret)
- Send a "APIkeyID" parameter ("1234", not secret)
- Send a "RequestSalt" parameter (byte[64], random, not secret)
- Send a "RequestTimestamp" parameter (calculated from client time and known offset)
- Send a "RequestToken" parameter (hash(passwordhash+request\_salt+request\_timestamp+apikeyhash))
The server should not accept timestamps more than say 2 seconds in the past, to make this safe against a replay attack.
The server can now calculate the same hash(passwordhash+request\_salt+request\_timestamp+apikeyhash) as the client, and be sure, that
- the client knows the API key,
- the client knows the correct password
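To make the request-token construction concrete, here is a minimal sketch in Python of what the client computes per request (the parameter names and the choice of SHA-256 are illustrative assumptions; the scheme above does not prescribe a specific hash function):
```
import hashlib
import os
import time

def hash_hex(*parts):
    return hashlib.sha256("".join(parts).encode()).hexdigest()

# established once at "connect" time (placeholder values for illustration)
password, password_salt = "user-password", "salt-from-server-1"
api_key_secret, api_key_salt = "api-key-secret", "salt-from-server-2"
clock_offset = 0  # client/server time difference learned at connect time

# the client ends up with the same hashes the server stores, without sending any secret
password_hash = hash_hex(password, password_salt)
api_key_hash = hash_hex(api_key_secret, api_key_salt)

# per-request values
request_salt = os.urandom(32).hex()
request_timestamp = str(int(time.time()) + clock_offset)
request_token = hash_hex(password_hash, request_salt, request_timestamp, api_key_hash)

# the client sends: username, APIkeyID, request_salt, request_timestamp, request_token;
# the server recomputes request_token from its stored hashes and rejects stale timestamps
print(request_token)
```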
|
Creating Snapshot Gallery From Video
Anyone know of an application for creating a snapshot gallery from a video file, like this picture?
[](https://i.stack.imgur.com/woJMo.jpg)
|
You can use the standard Gnome video player Totem.
From the [Totem manual](http://library.gnome.org/users/totem/unstable/totem-usage.html.en#totem-usage-create-screenshot-gallery):
>
> # 3.14. To Create a Screenshot Gallery
>
>
> To create a gallery of screenshots of the movie or visualization that is playing,
>
> choose **Edit ▸ Create Screenshot Gallery**. The *Save Gallery* dialog is displayed. Choose a location and insert the filename as which you want to save the gallery image, then click on the **Save** button to save the screenshot.
>
>
> You may specify the width of the individual screenshots in the gallery using the **Screenshot width** entry. The default width is 128 pixels.
>
> You may also specify the number of screenshots to be put in the gallery. By default, this is calculated based on the length of the movie; however, this may be overridden by deselecting the **Calculate the number of screenshots** checkbox and entering the new number in the **Number of screenshots** spin box.
>
>
>
|
Should I use std::set or std::unordered\_set for a set of pointers?
I have a set of pointers. In the first step, I insert data pointers, and in the second step, I iterate over the whole set and do something with the elements. The order is not important, I just need to avoid duplicates, which works fine with pointer comparison.
My question is, whether it might be advantageous to use an unordered set for the same purpose. Is insertion faster for an unordered set?
|
As Ami Tavory commented, if you don't need order, then it's usually best to go for unordered containers. The reason being that if order somehow improved performance, unordered containers would still be free to use it, and hence get the same or better complexity anyhow.
A downside of unordered collections is that they usually require a hash function for the key type. If it's too hard or expensive to make one, then containers which don't use hashes might be better.
In C++'s standard library, the average insertion complexity for `std::set` is *O(log(N))*, whereas for `std::unordered_set` it's *O(1)*. Aside from that, there are probably less cache misses on average when using `std::unordered_set`.
At the end of the day though, this is just theory. You should try something that sounds good enough and profile it to see if it really is.
|
Fast inverse arbitrary power root algorithm implementation
Many sources indicate that the well-known [fast inverse square root](https://en.wikipedia.org/wiki/Fast_inverse_square_root) algorithm can be generalized to calculate an inverse root of arbitrary power. Unfortunately I have not found such a C++ implementation, and I'm not good enough at math to generalize this method by myself. Could you help me to do this or perhaps provide a ready-made solution? I think this will be useful to many, especially with good explanations.
This is the original algorithm, and I do not quite understand what I need to change to get, for example, `1/cbrt(x)`:
```
float Q_rsqrt( float number )
{
long i;
float x2, y;
const float threehalfs = 1.5F;
x2 = number * 0.5F;
y = number;
i = * ( long * ) &y; // evil floating point bit level hacking
i = 0x5f3759df - ( i >> 1 ); // what the...?
y = * ( float * ) &i;
y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed
return y;
}
```
|
The algorithm consists of two steps - a rough solution estimation and the solution improvement using several [Newton method](https://en.wikipedia.org/wiki/Newton%27s_method) steps.
# Rough estimation
The basic idea is to use the relationship between the logarithm of a float number, `log2(x)`, and its integer representation `Ix`:
`I_x = L*log2(x) + L(B - σ);  log2(x) ≈ I_x/L - (B - σ)`
[](https://i.stack.imgur.com/yny3p.png)
(Image from <https://en.wikipedia.org/wiki/Fast_inverse_square_root>)
Now use the well-known logarithm [identity](https://en.wikipedia.org/wiki/Logarithm#Product,_quotient,_power,_and_root) for root:
`log2(1/x^(1/n)) = -(1/n) * log2(x)`
Combining the identities obtained earlier, we get:
`I_y/L - (B - σ) = -(1/n) * (I_x/L - (B - σ))`
`I_y = (n + 1)*L*(B - σ)/n - I_x/n`
Substituting numerical values, `L * (B - σ) = 0x3F7A3BEA`, so
`Iy = 0x3F7A3BEA / n * (n + 1) - Ix / n;`.
For a simple float point number representation as an integer and back it is convenient to use `union` type:
```
union
{
float f; // float representation
uint32_t i; // integer representation
} t;
t.f = x;
t.i = 0x3F7A3BEA / n * (n + 1) - t.i / n; // Work with integer representation
float y = t.f; // back to float representation
```
Note that for `n=2` the expression is simplified to `t.i = 0x5f3759df - t.i / 2;` which is identical to original `i = 0x5f3759df - ( i >> 1 );`
# Newton's solution improvement
Transform equality
[](https://i.stack.imgur.com/9JEx4.gif)
into an equation that should be solved:
`x = 1/y^n;  f(y) = 1/y^n - x;  f(y) = 0`
Now construct Newton steps:
`f'(y) = -n / y^(n+1)`
`y_{i+1} = y_i - f(y_i)/f'(y_i) = y_i + (1/y_i^n - x) * y_i^(n+1) / n = y_i * (1 + n - x*y_i^n) / n`
Programmatically it looks like: `y = y * (1 + n - x * pow(y,n)) / n;`. As the initial `y`, we use the value obtained in the **Rough estimation** step.
Note that for the particular case of the square root (`n = 2`) we get `y = y * (3 - x*y*y) / 2;`, which is identical to the original formula `y = y * (threehalfs - (x2 * y * y));`
Final code as template function. Parameter `N` determines root power.
```
#include <cstdint>

template<unsigned N>
float power(float x) {
if (N % 2 == 0) return power<N / 2>(x * x);
else if (N % 3 == 0) return power<N / 3>(x * x * x);
return power<N - 1>(x) * x;
};
template<>
float power<0>(float x){ return 1; }
// fast_inv_nth_root<2>(x) - inverse square root 1/sqrt(x)
// fast_inv_nth_root<3>(x) - inverse cube root 1/cbrt(x)
template <unsigned n>
float fast_inv_nth_root(float x)
{
union { float f; uint32_t i; } t = { x };
// Approximate solution
t.i = 0x3F7A3BEA / n * (n + 1) - t.i / n;
float y = t.f;
// Newton's steps. Copy for more accuracy.
y = y * (n + 1 - x * power<n>(y)) / n;
y = y * (n + 1 - x * power<n>(y)) / n;
return y;
}
```
# Testing
Testing code:
```
#include <cmath>
#include <iostream>
#include <iomanip>

int main()
{
std::cout << "|x ""|fast2 "" actual2 "
"|fast3 "" actual3 "
"|fast4 "" actual4 "
"|fast5 "" actual5 ""|" << std::endl;
for (float i = 0.00001; i < 10000; i *= 10)
std::cout << std::setprecision(5) << std::fixed
<< std::scientific << '|'
<< i << '|'
<< fast_inv_nth_root<2>(i) << " " << 1 / sqrt(i) << "|"
<< fast_inv_nth_root<3>(i) << " " << 1 / cbrt(i) << "|"
<< fast_inv_nth_root<4>(i) << " " << pow(i, -0.25) << "|"
<< fast_inv_nth_root<5>(i) << " " << pow(i, -0.2) << "|"
<< std::endl;
}
```
Results:
```
|x |fast2 actual2 |fast3 actual3 |fast4 actual4 |fast5 actual5 |
|1.00000e-05|3.16226e+02 3.16228e+02|4.64152e+01 4.64159e+01|1.77828e+01 1.77828e+01|9.99985e+00 1.00000e+01|
|1.00000e-04|9.99996e+01 1.00000e+02|2.15441e+01 2.15443e+01|9.99991e+00 1.00000e+01|6.30949e+00 6.30957e+00|
|1.00000e-03|3.16227e+01 3.16228e+01|1.00000e+01 1.00000e+01|5.62339e+00 5.62341e+00|3.98103e+00 3.98107e+00|
|1.00000e-02|9.99995e+00 1.00000e+01|4.64159e+00 4.64159e+00|3.16225e+00 3.16228e+00|2.51185e+00 2.51189e+00|
|1.00000e-01|3.16227e+00 3.16228e+00|2.15443e+00 2.15443e+00|1.77828e+00 1.77828e+00|1.58487e+00 1.58489e+00|
|1.00000e+00|9.99996e-01 1.00000e+00|9.99994e-01 1.00000e+00|9.99991e-01 1.00000e+00|9.99987e-01 1.00000e+00|
|1.00000e+01|3.16226e-01 3.16228e-01|4.64159e-01 4.64159e-01|5.62341e-01 5.62341e-01|6.30948e-01 6.30957e-01|
|1.00000e+02|9.99997e-02 1.00000e-01|2.15443e-01 2.15443e-01|3.16223e-01 3.16228e-01|3.98102e-01 3.98107e-01|
|1.00000e+03|3.16226e-02 3.16228e-02|1.00000e-01 1.00000e-01|1.77827e-01 1.77828e-01|2.51185e-01 2.51189e-01|
|1.00000e+04|9.99996e-03 1.00000e-02|4.64155e-02 4.64159e-02|9.99995e-02 1.00000e-01|1.58487e-01 1.58489e-01|
```
|
Stage.enter doesn't start the wizard
I created an application which displays a survey wizard to the user. When the user launches the /start command, I call `AddProject`:
```
const Telegraf = require('telegraf');
const bot = new Telegraf(process.env.BOT_TOKEN);
const session = require('telegraf/session');
bot.use(session());
const main = new TelegrafInlineMenu("Welcome.");
main.setCommand('start');
main.simpleButton('Start Survey', 'START_SURVEY', {
doFunc: async ctx => surveyController.AddProject(ctx, bot)
});
```
Essentially, the code above creates a menu that displays the label Welcome and a button to start the survey. When the user clicks on the button, the method AddProject is called from the surveyController:
```
const Composer = require('telegraf/composer');
const stepHandler = new Composer();
const Stage = require('telegraf/stage');
const WizardScene = require('telegraf/scenes/wizard');
const userController = require('../controllers/user.controller');
module.exports = {
AddProject: async function (ctx, bot) {
const superWizard = new WizardScene('super-wizard',
(ctx) => {
ctx.reply('Step 1', Markup.inlineKeyboard([
Markup.urlButton('❤️', 'http://telegraf.js.org'),
Markup.callbackButton('➡️ Next', 'next'),
]).extra());
return ctx.wizard.next();
},
(ctx) => {
ctx.reply('Step 2');
return ctx.wizard.next();
},
(ctx) => {
ctx.reply('Done');
return ctx.wizard.leave();
},
);
const stage = new Stage([superWizard]);
bot.use(stage.middleware());
Stage.enter('super-wizard');
},
};
```
The method AddProject fires correctly, but the wizard is not displayed. What did I do wrong?
|
After a lot of attempts I finally achieved my goal. I'm not an expert on `TelegrafJS` and I found the documentation really hard to understand, especially for a newbie (missing examples, concepts like WizardScene not explained, etc.).
**Create a Scene**
So basically to achieve my target I have to use a `Scene` and `TelegrafJS` makes available different types of `Scene`s.
My goal was to wait for user input and validate it. For this I used the `WizardScene`; this is my implementation:
```
const userWizard = new WizardScene('user-wizard',
(ctx) => {
ctx.reply("What is your name?");
//Necessary for store the input
ctx.scene.session.user = {};
//Store the telegram user id
ctx.scene.session.user.userId = ctx.update.callback_query.from.id;
return ctx.wizard.next();
},
(ctx) => {
//Validate the name
if (ctx.message.text.length < 1 || ctx.message.text.length > 12) {
return ctx.reply("Name entered has an invalid length!");
}
//Store the entered name
ctx.scene.session.user.name = ctx.message.text;
ctx.reply("What is your last name?");
return ctx.wizard.next();
},
async (ctx) => {
//Validate last name
if (ctx.message.text.length > 30) {
return ctx.reply("Last name has an invalid length");
}
ctx.scene.session.user.lastName = ctx.message.text;
//Store the user in a separate controller
await userController.StoreUser(ctx.scene.session.user);
return ctx.scene.leave(); //<- Leaving a scene will clear the session automatically
}
);
```
**Register a Scene**
The `WizardScene` above needs to be registered in a `Stage`, so we can use this `Scene` in a `Middleware`. In this way, we can access the `Scene` in a separate class or module:
```
const stage = new Stage([userWizard]);
stage.command('cancel', (ctx) => {
ctx.reply("Operation canceled");
return ctx.scene.leave();
});
bot.use(stage.middleware())
```
I also told the `Stage` to leave the `Scene` if the `/cancel` command is received, so if the user wants to cancel the operation, typing `/cancel` is the way.
**Start a Scene**
Now for enter in the wizard you can do the following:
```
await ctx.scene.enter('user-wizard');
```
So basically you have the `Scene` middleware registered in the `context` of your application, and you call `.enter` with the id of your `Scene`, which in my case is `user-wizard`.
This will start the wizard.
I hope the documentation will be enhanced with more examples, because I found this really hard to implement and understand, especially as a newbie to `TelegrafJS`.
Kind regards.
|
Error authorized to tfs programmatically
I have a local TFS 2012. In Visual Studio 2012, using C#, I wrote a program which programmatically connects to the TFS server. But I get this error:
>
> {"TF30063: You are not authorized to access <http://server:8080/tfs>."} System.Exception {Microsoft.TeamFoundation.TeamFoundationServerUnauthorizedException}
>
>
>
My code:
```
Uri collectionUri = new Uri("http://server:8080/tfs");
NetworkCredential netCred = new NetworkCredential("login","password");
BasicAuthCredential basicCred = new BasicAuthCredential(netCred);
TfsClientCredentials tfsCred = new TfsClientCredentials(basicCred);
tfsCred.AllowInteractive = false;
TfsTeamProjectCollection teamProjectCollection = new TfsTeamProjectCollection(collectionUri, netCred);
teamProjectCollection.EnsureAuthenticated();//throw error here
```
Can you help me fix this error?
P.S. I tried to connect this way, but I get the same error:
```
var projectCollection = new TfsTeamProjectCollection(
new Uri("http://myserver:8080/tfs/DefaultCollection"),
new NetworkCredential("youruser", "yourpassword"));
projectCollection.Connect(ConnectOptions.IncludeServices);
```
And this way:
```
Uri collectionUri = new Uri("http://server:8080/tfs/DefaultCollection");
NetworkCredential netCred = new NetworkCredential("login","password","Server.local");
BasicAuthCredential basicCred = new BasicAuthCredential(netCred);
TfsClientCredentials tfsCred = new TfsClientCredentials(basicCred);
tfsCred.AllowInteractive = false;
TfsTeamProjectCollection teamProjectCollection = new TfsTeamProjectCollection(collectionUri, netCred);
teamProjectCollection.EnsureAuthenticated();
```
|
Hope it helps you.
```
var tfsCon = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://server:8080/tfs"));
tfsCon.Authenticate();
var workItems = new WorkItemStore(tfsCon);
var projectsList = (from Project p in workItems.Projects select p.Name).ToList();
```
or
```
Uri TfsURL = new Uri("http://server:8080/tfs");
NetworkCredential credential = new NetworkCredential(Username, Password, Domain);
TfsTeamProjectCollection collection = new TfsTeamProjectCollection(TfsURL, credential);
collection.EnsureAuthenticated();
```
If it fails here, you need to configure App.config to set the default proxy:
```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.net>
<defaultProxy enabled="true" useDefaultCredentials="true"></defaultProxy>
</system.net>
</configuration>
```
|
Conditional types initialized in Python
I'm trying to write my own "fraction" class that takes in two numerical objects: the numerator and denominator. However, depending on the data type for the two arguments, I'd like the object to initialize in different ways.
For example, if I declare `x = Frac(2,5)` I want `x` to stay a `Frac` type. Whereas if I declare `y = Frac(2.1, 5)` I want `y` to be cast to a `float` with the value of 0.42, rather than `Frac(21,50)`.
What would be the right way to go about something like this?
This feels very similar to the scenario in which a tuple with a single object simply returns the original object. This means that `x = ("cat")` sets `x` to be a `str` type, rather than a `tuple` type.
|
There are two options.
### 1. Factory method
```
x = Frac.create_by_type(2, 5)
y = Frac.create_by_type(2.1, 5)
```
Implementation example:
```
class Frac:
...
@classmethod
def create_by_type(cls, a, b):
if isinstance(a, float):
return a / b
return cls(a, b)
```
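As a quick sanity check (assuming `Frac.__init__` simply stores the two arguments, as in the question), the factory returns different types depending on the numerator:
```
x = Frac.create_by_type(2, 5)
y = Frac.create_by_type(2.1, 5)

print(type(x).__name__)     # Frac
print(type(y).__name__, y)  # float, approximately 0.42
```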
---
If you want to use `Frac` constructor directly,
### 2. Overriding `Frac.__new__` method
```
class Frac:
def __new__(cls, a, b):
if isinstance(a, float):
return a / b
return super().__new__(cls)
def __init__(self, a, b):
self.a = a
self.b = b
f1 = Frac(2, 5)
f2 = Frac(2.1, 5)
print(type(f1))
print(type(f2))
print(f1.a, f1.b)
```
output:
```
<class '__main__.Frac'>
<class 'float'>
2 5
```
But overriding the `__new__` method can be tricky, so I'm not sure I'd recommend it.
|
Relationship between solution configuration, publish profile, and web.config transforms
I have the following setup in a Visual Studio 2013 ASP.NET Web API 2 project.
- Web.Develop.config web transform to set an app settings key value
- Web.Release.config web transform to remove an app settings key
- Develop.pubxml to map to the Web.Develop.config transform
- Release.pubxml to map to the Web.Release.config transform
Details for each are found below.
```
<!-- Web.Develop.config (Web Config Transform) -->
<appSettings>
<add key="ReportInputPath"
value="DevelopPath"
xdt:Transform="SetAttributes"
xdt:Locator="Match(key)" />
</appSettings>
```
```
<!-- Web.Release.config (Web Config Transform) -->
<appSettings xdt:Transform="Remove" />
```
```
<!-- **Develop.pubxml (Publish Profile) -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<WebPublishMethod>FileSystem</WebPublishMethod>
<LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
<LastUsedPlatform>x64</LastUsedPlatform>
<SiteUrlToLaunchAfterPublish />
<LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
<ExcludeApp_Data>True</ExcludeApp_Data>
<publishUrl>Path</publishUrl>
<DeleteExistingFiles>True</DeleteExistingFiles>
<ExcludeFilesFromDeployment>packages.config</ExcludeFilesFromDeployment>
</PropertyGroup>
</Project>
```
```
<!-- Release.pubxml (Publish Profile) -->
<!-- Contents are identical to Develop.pubxml.
This is used to target the Web.Release.Config transform. -->
```
Whenever I publish the application via the Release publish profile my `<appSettings/>` element is successfully removed. However, `<appSettings/>` element is removed when the Develop publish profile is run as well.
What I want to understand is:
Why is the `<appSettings/>` element being removed when I run the Develop publish profile instead of setting the ReportInputPath value?
And what are the relationships between the between solution/project configurations, publish profiles, and web.config transforms?
|
The answer to why the `<appSettings/>` element is being removed when the Develop publish profile is run is because two transformations are run in the following order.
1. Web.Release.config. This is run because the configuration target in the Develop.pubxml file is the Release build configuration.
2. Web.Develop.config. This is run because the name of the publish profile (Develop) matches the name of the transform file.
What is happening is that the first transformation removes the `<appSettings/>` element. The second transformation attempts to set the key value in that element, but cannot find it, so it silently fails.
I was able to confirm this by searching through the console output. When the Develop transformation was run there was a warning that the desired element could not be found.
```
Example (shortened for clarity)
> TransformXml: Applying Transform File: C:\...\MyProject\Web.Develop.config
> C:\...\MyProject\Web.Develop.config(6,4): Warning : No element in the source document matches '/configuration/appSettings'
> TransformXml: Not executing SetAttributes (transform line 9, 10)
```
The [Profile specific web.config transforms and transform preview](http://sedodream.com/PermaLink,guid,fdcc88bf-95ac-4945-a49b-96d9d1ac35a5.aspx) article by Sayed Ibrahim Hashimi was very helpful in identifying this was the issue.
As far as the relationship between the build configuration, publish profiles, and web.config transform go my current understanding is this.
1. Publish profiles have (among other things) a configuration target
2. Publish profiles first run the transformation that maps to their specified configuration target name, if one exists
3. Publish profiles then run the transformation that maps to their publish profile name if one exists
The key here being that *two* web.config transformations may be run.
|
Simple conversion of netCDF4.Dataset to xarray Dataset
I know how to convert netCDF4.Dataset to xarray DataArray manually. However, I would like to know whether is there any simple and elegant way, e.g. using xarray backend, for simple conversion of the following 'netCDF4.Dataset' object to xarray DataArray object:
```
<type 'netCDF4.Dataset'>
root group (NETCDF4 data model, file format HDF5):
Originating_or_generating_Center: US National Weather Service, National Centres for Environmental Prediction (NCEP)
Originating_or_generating_Subcenter: NCEP Ensemble Products
GRIB_table_version: 2,1
Type_of_generating_process: Ensemble forecast
Analysis_or_forecast_generating_process_identifier_defined_by_originating_centre: Global Ensemble Forecast System (GEFS)
Conventions: CF-1.6
history: Read using CDM IOSP GribCollection v3
featureType: GRID
History: Translated to CF-1.0 Conventions by Netcdf-Java CDM (CFGridWriter2)
Original Dataset = /data/ldm/pub/native/grid/NCEP/GEFS/Global_1p0deg_Ensemble/member/GEFS_Global_1p0deg_Ensemble_20170926_0600.grib2.ncx3#LatLon_181X360-p5S-180p0E; Translation Date = 2017-09-26T17:50:23.259Z
geospatial_lat_min: 0.0
geospatial_lat_max: 90.0
geospatial_lon_min: 0.0
geospatial_lon_max: 359.0
dimensions(sizes): time2(2), ens(21), isobaric1(12), lat(91), lon(360)
variables(dimensions): float32 u-component_of_wind_isobaric_ens(time2,ens,isobaric1,lat,lon), float64 time2(time2), int32 ens(ens), float32 isobaric1(isobaric1), float32 lat(lat), float32 lon(lon), float32 v-component_of_wind_isobaric_ens(time2,ens,isobaric1,lat,lon)
groups:
```
I've got this using `siphon.ncss`.
|
The next release of xarray (0.10) has support for this very thing, or at least getting an xarray **dataset** from a netCDF4 one, for exactly the reason you're trying to use it:
```
import xarray as xr
nc = nc4.Dataset('filename.nc', mode='r') # Or from siphon.ncss
dataset = xr.open_dataset(xr.backends.NetCDF4DataStore(nc))
```
Or with `siphon.ncss`, this would look like:
```
from datetime import datetime
from siphon.catalog import TDSCatalog
import xarray as xr
gfs_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog'
'/grib/NCEP/GFS/Global_0p5deg/catalog.xml')
latest = gfs_cat.latest
ncss = latest.subset()
query = ncss.query().variables('Temperature_isobaric')
query.time(datetime.utcnow()).accept('netCDF4')
nc = ncss.get_data(query)
dataset = xr.open_dataset(xr.backends.NetCDF4DataStore(nc))
```
Until it's released, you could install xarray from master. Otherwise, the only other solution is to do everything manually.
|
Creating shortcut to Oracle SQL Developer on desktop
To start Oracle SQL Developer I have to write
```
cd /opt/sqldeveloper/
./sqldeveloper.sh
```
I tried to make a shortcut to the sqldeveloper.sh file on my desktop, but when I run the shortcut I only see the script in gedit. It doesn't run it.
How can I make the script run when I click my desktop shortcut?
|
So, my guess is that it's not set as an executable. Check its permissions.
From the command line you can run the following to add execute permissions to the file:
```
chmod +x sqldeveloper.sh
```
Another method I'd recommend is creating a shortcut would be the following:
```
cd /usr/share/applications
sudo gedit ./sqldeveloper.desktop
```
This will open an empty file in Gedit.
In the file, add the following contents:
```
[Desktop Entry]
Name=SQL Developer
Comment=Oracle SQL Developer
GenericName=SQL Developer for Linux
Exec=/opt/sqldeveloper/sqldeveloper.sh
Type=Application
Categories=Developer;
Icon=/opt/sqldeveloper/icon.png
```
Save the file.
Now if you use the Search for SQL Developer in your app menu, you'll find SQL Developer and you can run it from there.
Now you can either add that as a favorite of yours or you can drag and drop it to your desktop.
To confirm, I just did this process myself while writing this up.
You can also reference this post for other ways to do what you're looking to do.
[How can I create launchers on my desktop?](https://askubuntu.com/questions/64222/how-can-i-create-launchers-on-my-desktop)
|
Using the Macro SEC\_IS\_BEING\_DEBUGGED\_RETURN\_NIL in iOS app
I learnt about the below piece of code, which is claimed to prevent [Method Swizzling](http://resources.infosecinstitute.com/ios-application-security-part-8-method-swizzling-using-cycript/) to some extent.
```
#ifndef DEBUG
SEC_IS_BEING_DEBUGGED_RETURN_NIL();
#endif
```
But while including in my project for testing, I get an error.
>
> Implicit declaration of function 'SEC\_IS\_BEING\_DEBUGGED\_RETURN\_NIL' is
> invalid in C99
>
>
>
Can someone help me out on this error, if I need to include any library header for the same.
|
I didn't intend to answer my own question, but following the comment above I searched for any such implementation and found this [in a GitHub project](https://github.com/x128/MemeCollector/blob/master/Meme%20Collector/NSObject%2BdebugCheck.h), which is a category on `NSObject`.
Perhaps it will help someone in the future.
```
#define SEC_IS_BEING_DEBUGGED_RETURN_NIL() size_t size = sizeof(struct kinfo_proc); \
struct kinfo_proc info; \
int ret, name[4]; \
memset(&info, 0, sizeof(struct kinfo_proc)); \
name[0] = CTL_KERN; \
name[1] = KERN_PROC; \
name[2] = KERN_PROC_PID; \
name[3] = getpid(); \
if ((ret = (sysctl(name, 4, &info, &size, NULL, 0)))) { \
if (ret) return nil; \
} \
if (info.kp_proc.p_flag & P_TRACED) return nil
```
Credits to maker of this
>
> // Created by Derek Selander on a happy day. //
>
> // Copyright (c)
>
> // 2013 Derek Selander. All rights reserved. //
>
>
>
|
microtime() and floating point arithmetics
I created a simple website which grabs articles from a MySQL database. I used the PHP `microtime(true)` function to calculate the interpretation time. At the top of my PHP script I used:
```
$time = microtime(true);
```
And at the bottom of the page I used the following code :
```
echo microtime(true) - $time;
```
When I refresh my webpage with those statements at the top and bottom of my script, it always echoes a value of around `0.0355005264282` (just an instance). That is the time it took to interpret my PHP page.
As the PHP manual says (<http://php.net/manual/en/function.microtime.php>), `microtime(true)` returns the current unix timestamp in microseconds. A microsecond is one millionth of a second. So,
(for instance):
```
0.03 microseconds = 1/1,000,000 * 0.03 seconds
0.03 microseconds = 0.000,000,03 seconds
```
So the time took to interpret a PHP webpage which uses MySQL is around `0.000,000,03` seconds.
**My Questions are :**
Is `microtime(true)` telling the truth about the interpretation time? If it is, that's wonderful, because I won't have to worry too much about performance anymore.
*I am using XAMPP on Windows*
|
>
> microtime — Return current Unix timestamp with microseconds
>
>
> mixed microtime ([ bool $get\_as\_float = false ] )
>
>
>
do instead:
```
$time = microtime(true);
echo microtime(true) - $time;
```
And the result will be in seconds. Check this (from the manual):
>
> 'time1' => float 1360860136.6731
>
>
> 'time2' => float 1360860136.6732 and
>
>
> 'time2' - 'time1' = 9.9897384643555E-5 i.e. 0.000099897384643555 (not 0.0001)
>
>
>
PHP typically uses the IEEE 754 double precision format. Rational numbers that are exactly representable as floating point numbers in base 10, like 0.1 or 0.7, do not have an exact representation as floating point numbers in base 2.
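The same IEEE 754 effect is easy to demonstrate in any language; for example, in Python:
```
# 0.1 and 0.7 have no exact base-2 representation in IEEE 754 double precision
print(f"{0.1:.20f}")     # 0.10000000000000000555
print(f"{0.7:.20f}")     # 0.69999999999999995559
print(0.1 + 0.7 == 0.8)  # False
```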
|
How to generate service and repository layers for MVC3
I recently completed an MVC3 project for a client using a Repository pattern implemented using a micro ORM, and with a service layer for my business logic. IMHO the application works well and the client is happy.
But I had to write a lot of boilerplate code whenever I wanted to add/implement a new service and associated DTO's. Plus there was a fair amount of code duplication converting DTO's to View Models and back again, with associated PEBKAC. And as the requirements evolved there were changes to the database schema that needed to be propagated right through to the View Models.
For a different client I inherited a .netTiers code generation project that caused me a lot of grief until I fixed some defects in the base templates, automated the code generation using MSBuild and, also using MSBuild, massaged the resulting code to get it to build without the catalog of manual tweaks previously required. .netTiers ultimately generated a lot of useful code but with enormous amounts of duplication, a bucket of complexity, and it felt like using a sledgehammer to skin a cat.
Now I'm looking at another MVC3 project and I want to avoid having to write all the boilerplate myself but I also want to avoid a full .netTiers-type code generation. I haven't used EF. I've tended to think of it as too big a tool for the size of projects I undertake but if it can take away some of the manual steps for me then that would be a big time saving. What are the merits of EF and will it scaffold the service layers for me?
The other option I'm considering is [LightSpeed](http://www.mindscapehq.com/products/lightspeed) which would require me to spend some dollars (not many) but if it can generate service layer code for me that would be money well spent. Does LightSpeed support this type of code generation?
Obviously as the domain model and database schema evolve the services need to be updated to accommodate those changes. .netTiers implements this by generating partial classes. How do these other tools handle those changes without overwriting any custom logic in the service layer?
What other options are there?
Update: Thanks for all the feedback, lots of positive options to look at. Has anyone had a look at [MVC Scaffolding](http://blog.stevensanderson.com/2011/01/13/mvcscaffolding-standard-usage/)?
Update #2: I'm going to pursue the MVCScaffolding option, which produces code for EF Code First. Out of the box it produces a Repository class and then, somewhat unfortunately, sticks it with the Model, which in MVC is actually the View Model and *not* the Domain Model. There is a pull request on the MVCScaffolding project for Service layer scaffolding, so I will investigate that option, plus AutoMapper for mapping POCOs <-> DTOs.
|
T4 templates work very well and are easily modifiable. They can be used for the data tier as well as any service layers.
The process we use for EF Code Only:
1. Build an edmx from our database
2. Install the T4 POCO class generation templates
3. Generate our POCO and context classes
4. Delete our edmx and T4 templates, keeping the POCO and context classes
The new tools in Entity Framework are great as they give you various options:
## Model First:
1. Model in edmx
2. Generate database from edmx
3. Generate POCO classes from edmx
4. (optional) Delete edmx, run Code Only
## Code First:
1. Write POCO classes
2. Generate database from POCO classes
3. (Optional) Generate edmx from POCO classes
## Database First:
1. Build database
2. Generate edmx from database
3. Generate POCO classes from edmx
4. (optional) Delete edmx, run Code Only
## Addendum (14/01/2012):
A beta release of Code First Migrations has been released. We haven't investigated it yet but it looks interesting.
- [Code-first Migrations - Beta Announcement](http://blogs.msdn.com/b/adonet/archive/2011/11/29/code-first-migrations-beta-1-released.aspx)
- [Code-first Migrations - No Magic Walkthrough](http://blogs.msdn.com/b/adonet/archive/2011/11/29/code-first-migrations-beta-1-no-magic-walkthrough.aspx)
- [Code-first Migrations - Automatic Migrations](http://blogs.msdn.com/b/adonet/archive/2011/11/29/code-first-migrations-beta-1-with-magic-walkthrough-automatic-migrations.aspx)
|
What exactly does sapply with '[' do?
I was browsing some answer concerning `strsplit` in R. Example text:
```
fileName <- c("hello.w-rp-al",
"how.nez-r",
"do.qs-sdz",
"you.d-aerd",
"do.dse-e")
```
I wanted to get the first element of the created list and thought I could use something such as
```
fileNameSplit <- strsplit(fileName, "[.]")
node_1 <- fileNameSplit[0]
node_2 <- fileNameSplit[1]
```
But that didn't work.
Then I found [this answer](https://stackoverflow.com/a/4350896/1150683) that suggests using `sapply` with `[`. This does work.
```
d <- data.frame(fileName)
fileNameSplit <- strsplit(d$fileName, "[.]")
d$node_1 <- sapply(fileNameSplit, "[", 1)
d$node_2 <- sapply(fileNameSplit, "[", 2)
```
However, I'm trying to figure out *why*. What exactly is happening, and what does `[` have to do with anything? It's semantically confusing in my opinion.
|
`sapply` operates on lists, which are vectors where each element can take any form.
---
**In the special case of your `fileNameSplit` list**, we know that each element of the list is a character vector with two elements.
```
> fileNameSplit
[[1]]
[1] "hello" "w-rp-al"
[[2]]
[1] "how" "nez-r"
[[3]]
[1] "do" "qs-sdz"
[[4]]
[1] "you" "d-aerd"
[[5]]
[1] "do" "dse-e"
```
To extract the first element from each of these character vectors, we have to iterate over the list, which is what
```
sapply(fileNameSplit, `[`, 1)
```
does. It may be clearer when written as
```
sapply(fileNameSplit, function(x) x[1])
```
The documentation at `` ?`[` `` and `?sapply` explains why the shorter version works.
We use `1` because that is where indexing starts in R (unlike other languages that start at 0).
|
Data parallelism in Keras
I am looking for data parallelism in Keras (TensorFlow backend), not model parallelism.
I am performing video classification on video files, and hence can only fit a batch of size 2 in the GPU. So, I was wondering about a way to use multiple GPUs in order to increase my batch size for better estimation and faster training.
Can you suggest me an effective way to do this?
I am using one 12gb TitanX and one 6gb Titan Black.
Thanks
|
This is one way to do it:
This method `to_multi_gpu` gets a `model` (defined using Keras 2.0 over a single GPU), and returns that same model replicated (with shared parameters) over multiple GPUs. The input to the new model is being sliced evenly and each slice is passed to one of the replicated models. The output from all the replicated models is concatenated at the end.
```
import tensorflow as tf  # needed for the tf.device() calls below
from keras import backend as K
from keras.models import Model
from keras.layers import Input
from keras.layers.core import Lambda
from keras.layers.merge import Concatenate
def slice_batch(x, n_gpus, part):
"""
Divide the input batch into [n_gpus] slices, and obtain slice number [part].
i.e. if len(x)=10, then slice_batch(x, 2, 1) will return x[5:].
"""
sh = K.shape(x)
L = sh[0] // n_gpus
if part == n_gpus - 1:
return x[part*L:]
return x[part*L:(part+1)*L]
def to_multi_gpu(model, n_gpus=2):
"""
Given a keras [model], return an equivalent model which parallelizes
the computation over [n_gpus] GPUs.
Each GPU gets a slice of the input batch, applies the model on that slice
and later the outputs of the models are concatenated to a single tensor,
hence the user sees a model that behaves the same as the original.
"""
with tf.device('/cpu:0'):
x = Input(model.input_shape[1:], name=model.input_names[0])
towers = []
for g in range(n_gpus):
with tf.device('/gpu:' + str(g)):
slice_g = Lambda(slice_batch,
lambda shape: shape,
arguments={'n_gpus':n_gpus, 'part':g})(x)
towers.append(model(slice_g))
with tf.device('/cpu:0'):
merged = Concatenate(axis=0)(towers)
return Model(inputs=[x], outputs=[merged])
```
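A minimal usage sketch, assuming `model` is an existing single-GPU Keras model and `x_train`/`y_train` are already-prepared arrays (all three are placeholders here):
```
parallel_model = to_multi_gpu(model, n_gpus=2)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')
# Each batch is sliced across the GPUs, so batch_size=4 puts 2 samples on each one.
parallel_model.fit(x_train, y_train, batch_size=4)
```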
|
HTML5 camera does not switch off after route change in AngularJs
I'm testing the video capabilities of HTML5. With a directive `userMedia`, I'm able to switch on my camera on my MacBook via [`navigator.getUserMedia()`](http://dev.w3.org/2011/webrtc/editor/getusermedia.html) (actually via an adapter to make it cross browser - at least those who support it).
But when I change my `$route`, I don't see myself anymore (hurray), but the camera does not switch off (the green light stays on). Only refreshing the page resets everything (which is normal).
I was hoping that watching for a change in `$location.path()` would do the trick:
```
link: function(scope, elm, attrs, ctrl) {
...
var path = $location.path();
scope.$watch(function() {
return $location.path();
}, function(value) {
if (value && value !== path) {
$log.info('Location changed, switching off camera');
webRTCAdapter.detachMediaStream(elm[0]);
}
}, true);
}
```
**detachMediaStream (Chrome):**
```
webRTCAdapter.detachMediaStream = function(element) {
console.log("Detaching media stream");
element.pause();
element.src = '';
element.parentNode.removeChild(element);
};
```
**Html:**
```
<video id="localVideo" width="100%" autoplay="autoplay" user-media="user-media"></video>
```
`detachMediaStream` gets executed (I see the necessary logs in `console.log`), but the camera does not switch off.
**Any idea how to solve this? Should I unload the element somehow?**
|
I found the cause of the problem. The [`LocalMediaStream`](http://docs.webplatform.org/wiki/apis/webrtc/LocalMediaStream) that is created when the camera switches on needs to be stopped by calling its `stop()` function.
A reference to the created `LocalMediaStream` object has to be kept, when attaching it to the video element:
```
controller: function($element) {
var self = this;
self.onUserMediaSuccess = function(stream) {
$log.info("User has granted access to local media.");
webRTCAdapter.attachMediaStream($element[0], stream);
// keep a reference
self.localStream = stream;
};
```
This `LocalMediaStream` reference has to be passed to the `detachMediaStream` function when the `$destroy` event occurs (thank you for that, [**Joseph Silber**](https://stackoverflow.com/users/825568/joseph-silber)):
```
scope.$on('$destroy', function() {
$log.info('Location changed, switching off camera');
webRTCAdapter.detachMediaStream( elm[0], ctrl.localStream);
});
```
On the `LocalMediaStream` object I need to execute the `stop()` function:
```
webRTCAdapter.detachMediaStream = function(element, stream) {
console.log("Detaching media stream");
element.pause();
element.src = '';
element.parentNode.removeChild(element);
// stopping stream (camera, ...)
stream.stop();
};
```
|
How to create an instance of a model with the ebean framework and scala in Play 2.2
I would like to instantiate a model object of the Ebean project with Scala and the Play 2.2 framework. I am facing an issue with the auto-generated ID and the class parameters / abstraction:
```
@Entity
class Task(@Required val label:String) extends Model{
@Id
val id: Long
}
object Task {
var find: Model.Finder[Long, Task] = new Model.Finder[Long, Task](classOf[Long], classOf[Task])
def all(): List[Task] = find.all.asScala.toList
def create(label: String) {
val task = new Task(label)
task.save
}
def delete(id: Long) {
find.ref(id).delete
}
}
```
The error: "class Task needs to be abstract, since value id is not defined". Any idea how to avoid this problem?
|
I found the solution thanks to this link: <http://www.avaje.org/topic-137.html>
```
import javax.persistence._
import play.db.ebean._
import play.data.validation.Constraints._
import scala.collection.JavaConverters._
@Entity
@Table( name="Task" )
class Task{
@Id
var id:Int = 0
@Column(name="title")
var label:String = null
}
/**
* Task Data Access Object.
*/
object Task extends Dao(classOf[Task]){
def all(): List[Task] = Task.find.findList().asScala.toList
def create(label: String) {
var task = new Task
task.label = label
Task.save(task)
}
def delete(id: Long) {
    // delete via a reference so this does not recursively call itself
    Task.delete(Task.ref(id))
}
}
```
And the DAO:
```
/**
* Dao for a given Entity bean type.
*/
abstract class Dao[T](cls:Class[T]) {
/**
* Find by Id.
*/
def find(id:Any):T = {
return Ebean.find(cls, id)
}
/**
* Find with expressions and joins etc.
*/
def find():com.avaje.ebean.Query[T] = {
return Ebean.find(cls)
}
/**
* Return a reference.
*/
def ref(id:Any):T = {
return Ebean.getReference(cls, id)
}
/**
* Save (insert or update).
*/
def save(o:Any):Unit = {
Ebean.save(o);
}
/**
* Delete.
*/
def delete(o:Any):Unit = {
Ebean.delete(o);
}
}
```
|
How to remove dot "." after number in ordered list items in OL LI?
>
> **Possible Duplicate:**
>
> [HTML + CSS: Ordered List without the Period?](https://stackoverflow.com/questions/5945161/html-css-ordered-list-without-the-period)
>
>
>
I want to remove the dot "." from an OL (ordered list)
```
<ol>
<li>One</li>
<li>Two</li>
</ol>
```
Result
```
1. One
2. Two
```
Required Result
```
1 One
2 Two
```
|
This will work in IE8+ and other browsers
```
ol {
counter-reset: item;
list-style-type: none;
}
li { display: block; }
li:before {
content: counter(item) " ";
counter-increment: item
}
```
or even:
```
ol {
  counter-reset: level1;
  list-style-type: none; /* hide the default "1." markers */
}
ol li:before {
  content: counter(level1) " "; /* instead of ". " */
  counter-increment: level1;
}
```
If you want older browsers to be supported as well then you could do this (courtesy neemzy):
```
ol li a {
float: right;
margin: 8px 0px 0px -13px; /* collapses <a> and dots */
padding-left: 10px; /* gives back some space between digit and text beginning */
position: relative; z-index: 10; /* make the <a> appear ABOVE the dots */
background-color: #333333; /* same background color as ol ; the dots are now invisible ! */
}
```
|
Detect with "JSON for Modern C++" library that integer doesn't fit into a specified type?
This code prints `-1`:
```
#include <iostream>
#include <nlohmann/json.hpp>
int main()
{
auto jsonText = "{ \"val\" : 4294967295 }";
auto json = nlohmann::json::parse(jsonText);
std::cout << json.at("val").get<int>() << std::endl;
}
```
I would like to detect that the value is out of the expected range. Is it possible to accomplish this somehow?
|
Aside from Holt's answer, you can also take advantage of the [`operator==`](https://nlohmann.github.io/json/classnlohmann_1_1basic__json_a122640e7e2db1814fc7bbb3c122ec76e.html) the library defines, which states:
>
> **Integer and floating-point numbers are automatically converted before comparison.** Note that two NaN values are always treated as unequal.
>
>
>
What happens here is that the number overflows, which means `json["val"] != json["val"].get<int>()`. Disclaimer: I have no idea how efficient this approach is compared to Holt's.
```
#include <iostream>
#include <nlohmann/json.hpp>
int main()
{
auto jsonText = "{ \"val\" : 4294967296 }";
auto json = nlohmann::json::parse(jsonText);
auto val = json["val"].get<long long>();
auto i = json["val"].get<int>();
bool longOverflow = json["val"] != val;
bool intOverflow = json["val"] != i;
std::cout << std::boolalpha << "Long: " << longOverflow << "\nInt: " << intOverflow;
}
```
Prints:
```
Long: false
Int: true
```
[Try it online](https://wandbox.org/permlink/TwRIu5tQZLTpMHHT)
Note that this has a caveat: If the value stored in the JSON is a double or a float, and is retrieved as a long or int, it'll naturally evaluate to true (12.3 != 12), but it doesn't have to imply an overflow. You can check the general type with [`is_number_integer()`](https://nlohmann.github.io/json/classnlohmann_1_1basic__json_abac8af76067f1e8fdca9052882c74428.html#abac8af76067f1e8fdca9052882c74428) (checks for int, long, and various other types, unsigned or signed) and [`is_number_float()`](https://nlohmann.github.io/json/classnlohmann_1_1basic__json_a33b4bf898b857c962e798fc7f6e86e70.html#a33b4bf898b857c962e798fc7f6e86e70) (checks for double/float).
From what I can tell, though, going from int/long to double doesn't fail. However, provided a big enough number, [the library will fail to parse the number](https://nlohmann.github.io/json/classnlohmann_1_1basic__json_a28f7c2f087274a0012eb7a2333ee1580.html) (`json.exception.out_of_range.406`), so there's a hard limit imposed by the library. From what I can tell, that limit is set at `1.79769e+308` (at least on Wandbox), i.e. the max double value. This also makes double the only type you can't overflow with the library.
As far as automatically checking for number overflow when you retrieve it with a certain type, that's not supported out of the box.
|
Why minus operation on usize numbers can't be negative and what's Rust idiomatic way to handle this?
`s` and `t` are vectors. If we do this:
```
t.len() - s.len()
```
It may result in an `attempt to subtract with overflow` error if the subtraction result is negative (since both types are `usize`).
However, it may cause a tricky bug if we are not aware that both types are `usize` and we expect the result to possibly be negative (I encountered this myself and only caught it at runtime on edge cases). To fix the bug, I converted them to:
```
t.len() as i32 - s.len() as i32
```
My questions are:
- Why doesn't Rust convert `usize` subtraction to a signed type?
- What's the idiomatic Rust way to catch this type of bug early, at compile time?
|
>
> To fix the bug, I converted them to:
>
>
>
>
>
> ```
> t.len() as i32 - s.len() as i32
>
> ```
>
>
Depending on the platform, `usize` can be 64 bits. `isize` exists for the purpose of diffing usizes, though it still has only half the positive range of a `usize`. Casting to `i128` would provide more certainty (at least until we get platforms with a 128-bit usize), but it might be very inefficient on e.g. 32-bit platforms, as they may not have hardware support for 128-bit operations, which are then implemented in software instead.
>
> Why doesn't Rust convert usize subtraction to a signed type?
>
>
>
Because arithmetic operations on numeric types are bulk-implemented, with the same input and output types, using a macro.
And the option you suggest would just have its own issues, e.g. `usize::MAX - 1` would return garbage once forced into a signed type.
>
> What's the idiomatic Rust way to catch this type of bug early, at compile time?
>
>
>
Numeric types have methods for explicit overflow behaviour: failing (`checked_*`), wrapping (`wrapping_*`) or saturating (`saturating_*`). There's a wrapper type for wrapped operations as that's very common in some contexts (e.g. cryptography) but I don't think there's one for checked operations.
There is no really idiomatic way to handle this, because it's a case-by-case issue, and for once this fallibility was judged too inconvenient and unusual to surface by default.
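For example, a minimal sketch of those explicit-overflow methods applied to the length difference (with two throwaway vectors):
```
fn main() {
    let t = vec![1, 2];
    let s = vec![1, 2, 3];

    // checked_sub returns None instead of panicking on underflow
    assert_eq!(t.len().checked_sub(s.len()), None);

    // saturating_sub clamps the result at zero
    assert_eq!(t.len().saturating_sub(s.len()), 0);

    // if a signed difference is really wanted, cast both operands first
    let diff = t.len() as i64 - s.len() as i64;
    assert_eq!(diff, -1);
}
```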
|
When should I use connection pooling with MySQL in NodeJS
Working with the **Node-MySql** module:
From my understanding, multithreaded programs benefit more from connection pooling than single-threaded ones. Is this true?
And if this logic proves true, what scenario is connection pooling **beneficial** in a Node.JS application?
|
Whether single or multithreaded, pooling can still be beneficial in [allowing open connections to be reused](https://github.com/felixge/node-mysql#pooling-connections) rather than being closed only to open another immediately after:
>
> When you are done with a connection, just call `connection.release()` and **the connection will return to the pool, ready to be used again by someone else.**
>
>
>
The added benefit with multithreading is that the pool can also manage multiple, concurrent connections:
>
> Connections are lazily created by the pool. If you configure the pool to allow up to 100 connections, but only ever use 5 simultaneously, only 5 connections will be made.
>
>
>
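Putting those two quotes together, here is a minimal sketch using the node-mysql pool (host, credentials and database name are placeholders):
```
var mysql = require('mysql');

// Connections are created lazily, up to connectionLimit.
var pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'app_user',
  password: 'secret',
  database: 'mydb'
});

pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.query('SELECT 1 + 1 AS solution', function (err, rows) {
    // Return the connection to the pool instead of closing it.
    connection.release();
    if (err) throw err;
    console.log(rows[0].solution); // 2
  });
});
```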
Though, to be clear, Node is multithreaded. It just uses a different model than is typical -- 1 "*application*" thread which executes JavaScript and multiple "*worker*" threads handling the brunt of asynchronous I/O.
|
Slide animation text color change
I have a button that has a background color slide in from the right on hover, which works fine; however, I need the text color to change as well. I have managed to have it fade, but that doesn't work properly. What I would like is a color transition that slides in from the right in concert with the background change.
```
.slideybutton {
background-size: 200% 100%;
background-image: linear-gradient(to right, white 50%, blue 50%);
transition: background-position 1s linear, color 1s linear;
color: blue;
}
.slideybutton:hover {
background-position: -100% 0;
color: white;
}
```
```
<a class="slideybutton">
Lorem ipsum dolor sit amet.
</a>
```
I have seen this question, but the only solution is unfeasible in this instance.
[Sliding in changing text color animation](https://stackoverflow.com/questions/32742597/sliding-in-changing-text-color-animation)
Is there some CSS trick I am missing? Google searches don't result in anything pointing me in the right direction, so I am concerned I am attempting the impossible.
I'm happy to utilise JS or jQuery if it will accomplish what I want.
|
This can be done by "combining the text with the background"; the key is the `background-clip` property. Check this out:
```
.slideybutton {
background-size: 200% 100%;
background-image: linear-gradient(to right, blue 50%, white 50%),
linear-gradient(to right, white 50%, blue 50%);
transition: background-position 1s linear;
-webkit-background-clip: text, border-box;
background-clip: text, border-box;
color: transparent;
}
.slideybutton:hover {
background-position: -100% 0;
}
```
```
<a class="slideybutton">
Lorem ipsum dolor sit amet.
</a>
```
|
Two functions or boolean parameter?
Is there some rule for when to use two functions and when to pass a boolean parameter?
Thanks
|
It has been a while since I last re-read Code Complete, but I vaguely recall McConnell addressing this, and the words "disjunctive coherence" pop into my head. Briefly,
```
void f(int x, int y, bool b)
```
versus
```
void f1(int x, int y)
void f2(int x, int y)
```
is often a choice, and depending on how similar or different `f` would behave under `true` versus `false`, it may make sense to break it into two functions and give them distinct names. Often a third choice is better, which is to change the `bool` to a two-value enum, where the enum name makes the distinction clear.
The key is to look at the call-sites, and see if the meaning is clear just from reading the code. If you are tempted to put a comment on every boolean call-site:
```
f(3, 4, true /* absoluteWidgetMode */ )
```
and the call-sites usually call with boolean *constants*, that's a strong smell that you should break it up into multiple functions.
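As a rough sketch of the enum alternative (the names are made up for illustration):
```
#include <iostream>

enum class WidgetMode { Absolute, Relative };

void f(int x, int y, WidgetMode mode) {
    std::cout << x << ", " << y << ", "
              << (mode == WidgetMode::Absolute ? "absolute" : "relative") << '\n';
}

int main() {
    // The call-site now documents itself, with no comment needed:
    f(3, 4, WidgetMode::Absolute);
}
```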
|
How to plot a date range on X-axis in Flot Charts?
I'm using Flot charts to display data for a certain period (to be selected by the user, e.g. last 30 days, last 7 days, from 1st Jan 2013 to 3rd Mar 2013 etc)
So I want to display a line chart with x-axis as the date.
E.g. if I've two days, startDate and endDate how do I make the X-axis display something like:
1 Jan 2013 | 2 Jan 2013........................3 Mar 2013
My code is as follows:
The data (currently it's static).
```
var mydata = [
[1, 2.4],
[2, 3.4 ],
[3, 4.5 ],
[4, 5 ],
[5, 5],
[6, 5],
[7, 2 ],
[8, 1 ],
[9, 1.5 ],
[10, 2.5 ],
[11, 3.5],
[12, 4 ],
[13, 4 ],
[14, 2.4],
[15, 3.4 ],
[16, 4.5 ],
[17, 5 ],
[18, 5],
[19, 5],
[20, 2 ],
[21, 1 ],
[22, 1.5 ],
[23, 2.5 ],
[24, 3.5],
[25, 4 ],
[26, 4 ],
[27, 2.5 ],
[28, 3.5],
[29, 4 ],
[30, 4 ],
];
var plot = $.plot($("#mychart"), [{
data: mydata,
label: "Y-axis label"
}], {
series: {
lines: {
show: true
},
points: {
show: true
},
shadowSize: 2
},
grid: {
hoverable: true,
clickable: true
},
colors: ["#37b7f3", "#d12610", "#52e136"],
xaxis: {
mode: "time", timeformat: "%d/%m/%y", minTickSize: [1, "day"]
},
yaxis: {
ticks: 11,
tickDecimals: 0,
min:0, max: 5
}
});
```
I realize that I need to make mydata look like [date, value]. Will that work?
I have the data dynamically generated by the server as JSON in
>
> [{date, value}, {date, value}...]
>
>
>
format.
Please guide.
|
You will need to change the numbers to Unix timestamps multiplied by 1000. This is from the API documentation, if you search for "Time Series Data":
*The time series support in Flot is based on Javascript timestamps,
i.e. everywhere a time value is expected or handed over, a Javascript
timestamp number is used. This is a number, not a Date object. A
Javascript timestamp is the number of milliseconds since January 1,
1970 00:00:00 UTC. This is almost the same as Unix timestamps, except it's
in milliseconds, so remember to multiply by 1000!*
There is a .Net example in the API:
```
public static long GetJavascriptTimestamp(System.DateTime input)
{
System.TimeSpan span = new System.TimeSpan(System.DateTime.Parse("1/1/1970").Ticks);
System.DateTime time = input.Subtract(span);
return (long)(time.Ticks / 10000);
}
```
Here is an example - <http://jsfiddle.net/zxtFc/4/>
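As for the `[{date, value}, ...]` JSON coming from the server, a rough sketch of the conversion (assuming the objects really expose `date` and `value` fields, and that `date` is something `Date` can parse):
```
// serverData is the parsed JSON array from the server
var mydata = serverData.map(function (point) {
  // Flot wants [timestamp-in-milliseconds, value]
  return [new Date(point.date).getTime(), point.value];
});

$.plot($("#mychart"), [{ data: mydata, label: "Y-axis label" }], {
  xaxis: { mode: "time", timeformat: "%d/%m/%y", minTickSize: [1, "day"] }
});
```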
|
How do streams / fork-join access arrays thread-safely?
Streams and fork-join both provide functionality to parallelize code that accesses arrays. For example, `Arrays.parallelSetAll` is implemented largely by the following line:
```
IntStream.range(0, array.length).parallel()
.forEach(i -> { array[i] = generator.applyAsLong(i); });
```
Also, the [documentation](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/RecursiveAction.html) of `RecursiveAction`, part of the fork-join framework, contains the following example:
```
static class SortTask extends RecursiveAction {
final long[] array; final int lo, hi;
...
void merge(int lo, int mid, int hi) {
long[] buf = Arrays.copyOfRange(array, lo, mid);
for (int i = 0, j = lo, k = mid; i < buf.length; j++)
array[j] = (k == hi || buf[i] < array[k]) ?
buf[i++] : array[k++];
}
}
```
Finally, parallel streams created from arrays access the arrays in multiple threads (the code is too complex to summarize here).
All of these examples appear to read from or write to arrays without any synchronization or other memory barriers (as far as I can tell). As we know, completely ad hoc multithreaded array accesses are unsafe as there is no guarantee that a read reflects a write in another thread unless there is a happens-before relationship between the read and the write. In fact, the `Atomic...Array` classes were created specifically to address this issue. However, given that each example above is in the standard library or its documentation, I presume they're correct.
Can someone please explain what mechanism guarantees the safety of the array accesses in these examples?
|
Short answer: partitioning.
The JMM is defined in terms of access to *variables*. Variables include static fields, instance fields, and array elements. If you arrange your program such that thread T0 is the only thread to access element 0 of an array, and similarly T1 is the only thread to access element 1 of an array, then each of these elements is effectively thread-confined, and you have no problem -- the JMM program order rule takes care of you.
Parallel streams build on this principle. Each task is working on a segment of the array that no other task is working on. Then all we have to do is ensure that the thread running a task can see the initial state of the array, and the consumer of the final result can see the as-modified-by-the-task view of the appropriate section of the array. These are easily arranged through synchronization actions embedded in the implementation of the parallel stream and FJ libraries.
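As a small illustrative sketch of that partitioning idea (plain threads rather than the FJ pool, but the same happens-before reasoning applies):
```
import java.util.Arrays;

public class PartitionDemo {
    public static void main(String[] args) throws InterruptedException {
        long[] array = new long[8];
        int mid = array.length / 2;

        // Each thread writes only to its own, non-overlapping slice.
        Thread t0 = new Thread(() -> { for (int i = 0; i < mid; i++) array[i] = i; });
        Thread t1 = new Thread(() -> { for (int i = mid; i < array.length; i++) array[i] = i; });

        // Thread.start() happens-before the first action in the started thread,
        // so both workers see the initialized array.
        t0.start();
        t1.start();

        // Thread termination happens-before join() returning,
        // so the main thread sees all writes made by the workers.
        t0.join();
        t1.join();

        System.out.println(Arrays.toString(array));
    }
}
```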
|