ASP.NET Core 2.0 Bearer Auth without Identity
I thought I had a pretty simple goal in mind when I set out a day ago to implement a self-contained bearer auth webapi on .NET core 2.0, but I have yet to get anything remotely working. Here's a list of what I'm trying to do:
- Implement a bearer token protected webapi
- Issue tokens & refresh tokens from an endpoint in the same project
- Use the [Authorize] attribute to control access to api surface
- Not use ASP.Net Identity (I have much lighter weight user/membership reqs)
I'm totally fine with building identity/claims/principal in login and adding that to request context, but I've not seen a single example on how to issue and consume auth/refresh tokens in a Core 2.0 webapi without Identity. I've seen the 1.x MSDN example of cookies without Identity, but that didn't get me far enough in understanding to meet the requirements above.
I feel like this might be a common scenario and it shouldn't be this hard (maybe it's not; maybe it's just a lack of documentation/examples?). As far as I can tell, IdentityServer4 is not compatible with Core 2.0 auth, and OpenIddict seems to require Identity. I also don't want to host the token endpoint in a separate process, but within the same webapi instance.
Can anyone point me to a concrete example, or at least give some guidance as to what best steps/options are?
|
Did an edit to make it compatible with ASP.NET Core 2.0.
---
Firstly, some Nuget packages:
- Microsoft.AspNetCore.Authentication.JwtBearer
- Microsoft.AspNetCore.Identity
- System.IdentityModel.Tokens.Jwt
- System.Security.Cryptography.Csp
Then some basic data transfer objects.
```
// Presumably you will have an equivalent user account class with a user name.
public class User
{
public string UserName { get; set; }
}
public class JsonWebToken
{
public string access_token { get; set; }
public string token_type { get; set; } = "bearer";
public int expires_in { get; set; }
public string refresh_token { get; set; }
}
```
Getting into the proper functionality, you'll need a login/token web method to actually send the authorization token to the user.
```
[Route("api/token")]
public class TokenController : Controller
{
private ITokenProvider _tokenProvider;
public TokenController(ITokenProvider tokenProvider) // We'll create this later, don't worry.
{
_tokenProvider = tokenProvider;
}
public JsonWebToken Get([FromQuery] string grant_type, [FromQuery] string username, [FromQuery] string password, [FromQuery] string refresh_token)
{
// Authenticate depending on the grant type.
User user = grant_type == "refresh_token" ? GetUserByToken(refresh_token) : GetUserByCredentials(username, password);
if (user == null)
throw new UnauthorizedAccessException("No!");
int ageInMinutes = 20; // However long you want...
DateTime expiry = DateTime.UtcNow.AddMinutes(ageInMinutes);
var token = new JsonWebToken {
access_token = _tokenProvider.CreateToken(user, expiry),
expires_in = ageInMinutes * 60
};
if (grant_type != "refresh_token")
token.refresh_token = GenerateRefreshToken(user);
return token;
}
private User GetUserByToken(string refreshToken)
{
// TODO: Check token against your database.
if (refreshToken == "test")
return new User { UserName = "test" };
return null;
}
private User GetUserByCredentials(string username, string password)
{
// TODO: Check username/password against your database.
if (username == password)
return new User { UserName = username };
return null;
}
private string GenerateRefreshToken(User user)
{
// TODO: Create and persist a refresh token.
return "test";
}
}
```
You probably noticed the token creation is still just "magic" passed through by some imaginary ITokenProvider. Define the token provider interface.
```
public interface ITokenProvider
{
string CreateToken(User user, DateTime expiry);
// TokenValidationParameters is from Microsoft.IdentityModel.Tokens
TokenValidationParameters GetValidationParameters();
}
```
I implemented the token creation with an RSA security key on a JWT. So...
```
public class RsaJwtTokenProvider : ITokenProvider
{
private RsaSecurityKey _key;
private string _algorithm;
private string _issuer;
private string _audience;
public RsaJwtTokenProvider(string issuer, string audience, string keyName)
{
var parameters = new CspParameters { KeyContainerName = keyName };
var provider = new RSACryptoServiceProvider(2048, parameters);
_key = new RsaSecurityKey(provider);
_algorithm = SecurityAlgorithms.RsaSha256Signature;
_issuer = issuer;
_audience = audience;
}
public string CreateToken(User user, DateTime expiry)
{
JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler();
ClaimsIdentity identity = new ClaimsIdentity(new GenericIdentity(user.UserName, "jwt"));
// TODO: Add whatever claims the user may have...
SecurityToken token = tokenHandler.CreateJwtSecurityToken(new SecurityTokenDescriptor
{
Audience = _audience,
Issuer = _issuer,
SigningCredentials = new SigningCredentials(_key, _algorithm),
Expires = expiry.ToUniversalTime(),
Subject = identity
});
return tokenHandler.WriteToken(token);
}
public TokenValidationParameters GetValidationParameters()
{
return new TokenValidationParameters
{
IssuerSigningKey = _key,
ValidAudience = _audience,
ValidIssuer = _issuer,
ValidateLifetime = true,
ClockSkew = TimeSpan.FromSeconds(0) // Identity and resource servers are the same.
};
}
}
```
So you're now generating tokens. Time to actually validate them and wire it up. Go to your Startup.cs.
In `ConfigureServices()`
```
var tokenProvider = new RsaJwtTokenProvider("issuer", "audience", "mykeyname");
services.AddSingleton<ITokenProvider>(tokenProvider);
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(options => {
options.RequireHttpsMetadata = false;
options.TokenValidationParameters = tokenProvider.GetValidationParameters();
});
// This is for the [Authorize] attributes.
services.AddAuthorization(auth => {
auth.DefaultPolicy = new AuthorizationPolicyBuilder()
.AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
.RequireAuthenticatedUser()
.Build();
});
```
Then `Configure()`
```
public void Configure(IApplicationBuilder app)
{
app.UseAuthentication();
// Whatever else you're putting in here...
app.UseMvc();
}
```
That should be about all you need. Hopefully I haven't missed anything.
The happy result is...
```
[Authorize] // Yay!
[Route("api/values")]
public class ValuesController : Controller
{
// ...
}
```
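For reference, here is roughly how a client would exercise these endpoints, following the stub logic above (any username equal to the password authenticates, the refresh token is the placeholder string "test", and `expires_in` is 20 minutes = 1200 seconds); the actual JWT value is illustrative:
```
GET /api/token?grant_type=password&username=test&password=test

{
  "access_token": "<jwt>",
  "token_type": "bearer",
  "expires_in": 1200,
  "refresh_token": "test"
}

GET /api/values
Authorization: Bearer <jwt>
```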
|
ZingCharts horizontal stacked bar chart X-Axis labels being cut off
I've created a simple horizontal bar chart. The labels on the X-Axis are being cut off as shown in the snippet below. I need to show the full label of 'X Axis Row A'. I've tried changing the width of the chart but that doesn't help.
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>ZingChart</title>
</head>
<body>
<div id="chartDiv"></div>
<script src='http://cdn.zingchart.com/zingchart.min.js'></script>
<script>
var chartData={
"type":"hbar",
"stacked":true,
"stack-type":"normal", // Also accepts "100%"
"title":{
"text":"X-Axis Row Label Cut Off Example"
},
"scale-x":{
"values":["X Axis Row B","X Axis Row A"],
},
"scale-y":{
"format":"%v%",
},
"series":[{
"background-color":"rgb(248,51,45)",
"text":"Negative",
"values":[-11,-22]
},{
"background-color":"rgb(120,152,55)",
"text":"Positive",
"values":[35,45]
}]
};
window.onload=function(){
var x = zingchart.render({
id:'chartDiv',
height:200,
width:600,
data:chartData
});
}
</script>
</body>
</html>
```
|
So I finally figured it out. If you use the 'plotarea' key you can then specify the margins. For example:
```
"plotarea":{
"margin":"40px 20px 50px 100px"
}
```
Updated example below:
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>ZingChart</title>
</head>
<body>
<div id="chartDiv"></div>
<script src='http://cdn.zingchart.com/zingchart.min.js'></script>
<script>
var chartData={
"type":"hbar",
"stacked":true,
"stack-type":"normal", // Also accepts "100%"
"title":{
"text":"Add Margin Example"
},
"plotarea": {
"margin":"40px 20px 50px 100px"
},
"scale-x":{
"values":["X Axis Row B","X Axis Row A"],
},
"scale-y":{
"format":"%v%",
},
"series":[{
"background-color":"rgb(248,51,45)",
"text":"Negative",
"values":[-11,-22]
},{
"background-color":"rgb(120,152,55)",
"text":"Positive",
"values":[35,45]
}]
};
window.onload=function(){
var x = zingchart.render({
id:'chartDiv',
height:200,
width:600,
data:chartData
});
}
</script>
</body>
</html>
```
|
Display WooCommerce related products based on specific product attribute value
I am trying to display related products based on a specific product attribute "pa\_kolekcja" term value (set on the product). I have the following piece of code *(it's almost ready)*:
```
function woo_related_products_edit() {
global $product;
$current_kolekcja = "???"; // <== HERE
$args = apply_filters( 'woocommerce_related_products_args', array(
'post_type' => 'product',
'ignore_sticky_posts' => 1,
'no_found_rows' => 1,
'posts_per_page' => 4,
'orderby' => $orderby,
'post__not_in' => array( $product->id ),
'tax_query' => array(
array(
'taxonomy' => 'pa_kolekcja',
'field' => 'slug',
'terms' => $current_kolekcja
)
)
) );
}
add_filter( 'woocommerce_related_products_args', 'woo_related_products_edit' );
```
How can I get the current product attribute "pa\_kolekcja" term value set on the product?
|
**Update**
Since woocommerce 3, `woocommerce_related_products_args` has been removed.
To display related products based on a specific product attribute set on the current product, try the following instead:
```
add_filter( 'woocommerce_related_products', 'related_products_by_attribute', 10, 3 );
function related_products_by_attribute( $related_posts, $product_id, $args ) {
$taxonomy = 'pa_kolekcja'; // HERE define the targeted product attribute taxonomy
$term_slugs = wp_get_post_terms( $product_id, $taxonomy, ['fields' => 'slugs'] ); // Get terms for the product
if ( empty($term_slugs) )
return $related_posts;
$posts_ids = get_posts( array(
'post_type' => 'product',
'ignore_sticky_posts' => 1,
'posts_per_page' => 4,
'post__not_in' => array( $product_id ),
'tax_query' => array( array(
'taxonomy' => $taxonomy,
'field' => 'slug',
'terms' => $term_slugs,
) ),
'fields' => 'ids',
'orderby' => 'rand',
) );
return count($posts_ids) > 0 ? $posts_ids : $related_posts;
}
```
Code goes in the functions.php file of the active child theme (or active theme). Tested and works.
|
What is the below definition actually doing?
In one of the questions on SO, I came across the definition below:
```
char randchar = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" [rand() % 26];
```
I know that `rand() % 26` will give a random number from 0 to 25.
But what is actually happening in the above definition? When is the expression evaluated (compile time or run time)? What will be the contents of the stack?
What do we call such definitions in C? Which feature is this?
|
`"ABCDEFGHIJKLMNOPQRSTUVWXYZ" [rand() % 26];` is an expression which is being used as an initialized for `randchar`.
- `"ABCDEFGHIJKLMNOPQRSTUVWXYZ"` is an array of 27 `char` (26 letters and a null terminator).
- The square brackets are the array indexing operator.
- `rand()` generates a random number
- `rand() % 26` converts that to a number between `0` and `25` (inclusive)
- That result is used as the array index.
- The `char` in the array is copied to `randchar`.
As for when this happens: the string literal itself is placed in static storage at compile time, but the indexing (including the call to `rand()`) happens at run time, when the initializer for `randchar` is evaluated.
It could have been written:
```
char const letters[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
int random_number = rand() % 26;
char randchar = letters[random_number];
```
|
how to robustly parse a document for any headings and build a tree of just those headings
So I parse through a document in order to grab all the headings with stackHeadings(). I do this in order to build a Microsoft Word style document map with buildNav(). This currently works OK, but it's not very robust and breaks any time the headings do not follow a strict order, e.g. if you start with an H2 it breaks, if you nest an H3 under an H1 it breaks, etc.
I can't quite figure out the best way to fix this (make it more robust). I'm taking advantage of jQuery's `nextUntil` function to find all the h2s between two h1s.
One possibility is replacing:
```
elem.nextUntil( 'h' + cur, 'h' + next )
```
with
```
elem.nextUntil( 'h' + cur, 'h' + next + ',h' + (next + 1) + ',h' + (next + 2) ... )
```
to find ALL subheadings between two headings of the same level. But now h3 children of h1s would only be nested one level rather than two.
So then you'd have to compare the current heading level with the parent heading level, and if there's a jump of more than one (h1 -> h3), you'd have to create an empty child between them as a nesting placeholder for the missing h2.
Any ideas or solutions would be greatly appreciated!
```
stackHeadings = (items, cur, counter) ->
cur = 1 if cur == undefined
counter ?= 1
next = cur + 1
for elem, index in items
elem = $(elem)
children = filterHeadlines( elem.nextUntil( 'h' + cur, 'h' + next ) )
d.children = stackHeadings( children, next, counter ) if children.length > 0
d
filterHeadlines = ( $hs ) ->
_.filter( $hs, ( h ) -> $(h).text().match(/[^\s]/) )
buildNav = ( ul, items ) ->
for child, index in items
li = $( "<li>" )
$( ul ).append( li )
$a = $("<a/>")
$a.attr( "id", "nav-title-" + child.id )
li.append( $a )
if child.children
subUl = document.createElement( 'ul' )
li.append( subUl )
buildNav( subUl, child.children )
items = stackHeadings( filterHeadlines( source.find( 'h1' ) ) )
ul = $('<ul>')
buildNav( ul, items)
```
|
I threw together some JavaScript that will do what you want <http://jsfiddle.net/fA4EW/>
It's a fairly straightforward recursive function that consumes an array of elements (nodes) and builds the UL structure accordingly. To be consistent with the question, I add placeholder (empty) list elements when you go from an H1 to an H3, etc.
```
function buildRec(nodes, elm, lv) {
var node;
// filter
do {
node = nodes.shift();
} while(node && !(/^h[123456]$/i.test(node.tagName)));
// process the next node
if(node) {
var ul, li, cnt;
var curLv = parseInt(node.tagName.substring(1));
if(curLv == lv) { // same level: append an li
cnt = 0;
} else if(curLv < lv) { // walk up, then append an li
cnt = 0;
do {
elm = elm.parentNode.parentNode;
cnt--;
} while(cnt > (curLv - lv));
} else if(curLv > lv) { // create children, then append an li
cnt = 0;
do {
li = elm.lastChild;
if(li == null)
li = elm.appendChild(document.createElement("li"));
elm = li.appendChild(document.createElement("ul"));
cnt++;
} while(cnt < (curLv - lv));
}
li = elm.appendChild(document.createElement("li"));
// replace the next line with archor tags or whatever you want
li.innerHTML = node.innerHTML;
// recursive call
buildRec(nodes, elm, lv + cnt);
}
}
// example usage
var all = document.getElementById("content").getElementsByTagName("*");
var nodes = [];
for(var i = all.length; i--; nodes.unshift(all[i]));
var result = document.createElement("ul");
buildRec(nodes, result, 1);
document.getElementById("outp").appendChild(result);
```
|
Sorting an STL vector on two values
How do I sort an STL vector based on two different comparison criteria? The default sort() function only takes a single comparator.
|
You need to combine the two criteria into one.
Here's an example of how you'd sort a struct with a first and second field
based on the first field, then the second field.
```
#include <algorithm>
struct MyEntry {
int first;
int second;
};
bool compare_entry( const MyEntry & e1, const MyEntry & e2) {
if( e1.first != e2.first)
return (e1.first < e2.first);
return (e1.second < e2.second);
}
int main() {
std::vector<MyEntry> vec = get_some_entries();
std::sort( vec.begin(), vec.end(), compare_entry );
}
```
NOTE: implementation of `compare_entry` updated to use code from [Nawaz](https://stackoverflow.com/users/415784/nawaz).
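If you're on C++11 or newer, a common shorthand for the same lexicographic comparison is `std::tie`; here's a sketch reusing the `MyEntry` struct from above:
```
#include <tuple>

// Equivalent ordering to compare_entry: std::tuple's operator< compares
// the first elements, then the second, lexicographically.
bool compare_entry_tie(const MyEntry& e1, const MyEntry& e2) {
    return std::tie(e1.first, e1.second) < std::tie(e2.first, e2.second);
}
```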
|
Is this a case for empty try catch blocks?
Let's assume that you read in a large XML file and about 25% of the nodes are optional, so you really don't care if they are there, however, if they are provided to you, you would still read them in and do something with them (store them in a db for example).
Since they are optional, isn't it OK in this case to wrap them in empty `try...catch` blocks, so that in the event they are not there, the program just continues execution? You don't care about throwing an error or anything similar.
Keep in mind that just because they are optional does not mean you don't want to check for them. It just means that the person providing you with the XML either doesn't want you to know certain information or they do want you to know and it is up to you to handle it.
Finally, if this were just a few nodes it wouldn't be a big deal, but if you have 100 optional nodes, for example, it can be a pain to check whether each one is `null` first, or to halt execution if a `null` is found; hence the reason I asked whether this is a valid case for empty try/catch statements.
|
If processing node X is optional, then your code should look something like:
```
if(node X exists in file)
{
do work with X
}
```
and not:
```
try
{
do work with X
}
catch{}
```
Now if there is no way to determine if node X exists other than to try to use it, or if it's possible for node X to be removed after you check to see if it's there, then you're forced to use the try/catch model. That isn't the situation here. (As opposed to say, checking if a file exists before reading it; someone can delete it after you check to see if it's there.)
**------------------------------------------------------------**
**Edit:**
It seems your issue is accessing the "GrandChild" node alone in the following XML, in which 'Parent' may not exist. (Please excuse my poor ability to render this XML in SO; knowledgeable readers feel free to edit in the appropriate formatting.)
```
<root>
<Parent>
<Child>
<GrandChild>
The data I really want.
</GrandChild>
</Child>
</Parent>
</root>
```
For that I'd do something like this:
```
public static class XMLHelpers
{
public static Node GetChild(this Node parent, string tagName)
{
if(parent == null) return null;
return parent.GetNodeByTagName(tagName);
}
}
```
Then you can do:
```
var grandbaby = rootNode.GetChild("Parent").GetChild("Child").GetChild("GrandChild");
if(grandbaby != null)
{
//use grandbaby
}
```
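If you are on a newer C# with LINQ to XML available, the same "optional node" idea can be expressed without a custom helper, since `XElement.Element()` returns `null` for a missing child and the null-conditional operator keeps the chain from throwing. A minimal sketch (the XML mirrors the example above):
```
using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        var root = XElement.Parse(
            "<root><Parent><Child><GrandChild>The data I really want.</GrandChild></Child></Parent></root>");

        // Each Element() call returns null if the child is absent; ?. short-circuits safely.
        var grandbaby = root.Element("Parent")?.Element("Child")?.Element("GrandChild");
        if (grandbaby != null)
        {
            Console.WriteLine(grandbaby.Value);
        }
    }
}
```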
|
Using class decorators, can I get the type of the class-type instance?
Consider this [failing example](http://www.typescriptlang.org/Playground#src=function%20DecorateClass%3CT%3E(instantiate%3A%20(json%3Aany)%20%3D%3E%20T)%7B%0D%0A%09return%20(classTarget%3AT)%20%3D%3E%20%7B%20%2F*...*%2F%09%7D%0D%0A%7D%0D%0A%0D%0A%40DecorateClass((json%3Aany)%20%3D%3E%20%7B%0D%0A%09var%20instance%20%3D%20new%20Animal()%3B%0D%0A%09instance.Name%20%3D%20json.name%3B%0D%0A%09instance.Sound%20%3D%20json.sound%3B%0D%0A%09return%20instance%3B%0D%0A%7D)%0D%0Aclass%20Animal%20%7B%0D%0A%09public%20Name%3Astring%3B%0D%0A%09public%20Sound%3Astring%3B%0D%0A%7D):
```
function DecorateClass<T>(instantiate: (...params:any[]) => T){
return (classTarget:T) => { /*...*/ }
}
@DecorateClass((json:any) => {
//Purely example logic here, the point is that it have to return
//an instance of the class that the decorator runs on.
var instance = new Animal();
instance.Name = json.name;
instance.Sound = json.sound;
return instance;
})
class Animal {
public Name:string;
public Sound:string;
}
```
Here I want to constrain the anonymous function in the decorator to always return an instance of the class in question, but the above does not work since T is actually `typeof Animal` and not `Animal`.
In a generic function, is there any way I can get the type `Animal` from the type `typeof Animal` without being annoyingly verbose, like explicitly defining all the types as in `function DecorateClass<TTypeOfClass, TClass>(...)`?
Unfortunately, using typeof in the generic syntax is not supported, which was my best bet in trying to get the compiler to understand what I want:
```
function DecorateClass<T>(instantiate: (json:any) => T){
return (classTarget:typeof T) => { /*...*/ } // Cannot resolve symbol T
}
```
|
*Hold the line just for a second...*
Recently I've needed a type definition for a function that takes in a class as an argument, and returns an *instance* of that class. When I came up with a [solution](https://stackoverflow.com/questions/36500861/type-for-function-that-takes-a-class-as-argument-and-returns-an-instance-of-tha/36501062#36501062), this question soon came to my mind.
Basically, using a newable type it is possible to conjure a relation between a class and its instance, which accurately and perfectly answers your question:
```
function DecorateClass<T>(instantiate: (...args: any[]) => T) {
return (classTarget: { new(...args: any[]): T }) => { /*...*/ }
}
```
---
## Explanation
In TypeScript, any given newable type can be defined with the following signature:
```
new(...args: any[]): any
```
This is analogous to *a newable type (the constructor function) that may or may not take arguments and returns `any` (the instance)*. However, nothing says it must be `any` that is returned -- it can be a generic type as well.
And since we have exactly what is returned from the constructor function (by type-inferring the class the decorator is applied to) inside a generic type parameter we can use that to define the return type of the passed in callback function.
I've tested the decorator, and it seems to be working precisely as expected:
```
@DecorateClass((json: any) => {
return new Animal(); // OK
})
@DecorateClass((json: any) => {
return Animal; // Error
})
@DecorateClass((json: any) => {
return "animal"; // Error
})
class Animal {
public Name: string;
public Sound: string;
}
```
This effectively invalidates my previous [answer](https://stackoverflow.com/a/35374498/2788872).
---
## Edit: Inheritance
When inheritance is involved (eg.: a derived type is to be returned from `instantiate`), assignability seems to be flipped: you can return a *base* type, but not a *derived* type.
This is because the returned type from `instantiate` takes precedence over the "returned" type of `classTarget` during generic type-inference. The following question examines this exact problem:
- [Generic type parameter inference priority in TypeScript](https://stackoverflow.com/q/36835363/2788872)
|
Creating transparent background in WPF
I have a button with a background image that is the color white. My button sits on a toolbar which has a background color of blue. Because of this the button looks white, but I want it to look blue, matching the toolbar's background color.
How should I achieve this in WPF? Please let me know.
|
Are you sure you want to use an image? If the image is just a solid color, you're probably better off just setting the button's background to white and adjusting its opacity. Like:
```
<Button Background="White" Opacity="0.5" />
```
Where Opacity can be any value between 0 and 1.
**EDIT**: I'm not sure what you mean by "shades." Are you aware that you can set the background of a Button to a gradient? You'll want to change the colors I'm sure, but it will look something like this:
```
<Button>
<Button.Background>
<LinearGradientBrush>
<GradientStop Color="Yellow" Offset="0.0" />
<GradientStop Color="Orange" Offset="0.5" />
<GradientStop Color="Red" Offset="1.0" />
</LinearGradientBrush>
</Button.Background>
</Button>
```
|
JavaScript bitwise & operator appends 1 on unintended position
When I run `0x00000000C17E000F & 0x00000000C17E0000` in JavaScript, it returns `-1048707072` instead of `3246260224`.
Binary of each is
- `0x00000000C17E000F` : (32 **zeros** omitted) 1100 0001 0111 1110 0000 0000 0000 1111
- `0x00000000C17E0000` : (32 **zeros** omitted) 1100 0001 0111 1110 0000 0000 0000 0000
so I expected the result of `0x00000000C17E000F & 0x00000000C17E0000` as `3246260224`, but it results in `-1048707072`.
- `3246260224` : (32 of **0** omitted) 1100 0001 0111 1110 0000 0000 0000 0000
- `-1048707072` : (32 of **1** omitted) 1100 0001 0111 1110 0000 0000 0000 0000
Why does JavaScript calculate `0 & 0` as `1` there (in the 33rd to 64th bits)?
(I heard that a JavaScript Number consists of 64 bits.)
|
From [the MDN on bitwise operators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators#Bitwise_AND):
>
> The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones). Numbers with more than 32 bits get their most significant bits discarded.
>
>
>
Your numbers aren't between -2^31 and 2^31, so they're changed by the conversion.
Note that this isn't really overflowing, nor a limitation of the storage: Numbers in JavaScript are 64 bits (IEEE 754 doubles), which means they can store all integers between -2^53 and 2^53. The limitation at 32 bits applies only to the bitwise operators, by design.
A consequence of this remark is that you may very well design your own functions to do bitwise operations on bigger integers:
```
function and(a, b){
let abits = Array.from(a.toString(2));
let bbits = Array.from(b.toString(2));
return parseInt(abits.map((abit, i)=>abit & bbits[i]).join(""), 2);
}
console.log(and(0x00000000C17E000F, 0x00000000C17E0000)); // 3246260224
```
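And in engines that support `BigInt` (ES2020 and later), the bitwise operators work on arbitrary-precision integers directly, so there is no 32-bit truncation to work around:
```
console.log((0x00000000C17E000Fn & 0x00000000C17E0000n).toString()); // "3246260224"
```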
|
Show activity indicator in SDWebImage
I'm using SDWebImage and I want to show an activity indicator as a placeholder while fetching the image from the remote server.
I tried Malek's answer here [How to show an activity indicator in SDWebImage](https://stackoverflow.com/questions/9388372/how-to-show-an-activity-indicator-in-sdwebimage), but it seems that
```
UIImage *cachedImage = [manager imageWithURL:url];
```
is deprecated.
Is there anyone using this library who could tell me how I can show an activity indicator while the image loads?
**EDIT**
Following the indications of Michael Frederick, I ended up with this code and everything's working fine.
```
UIActivityIndicatorView *activityIndicator = [[[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhite] autorelease];
activityIndicator.hidesWhenStopped = YES;
activityIndicator.hidden = NO;
[activityIndicator startAnimating];
activityIndicator.center = CGPointMake(self.tipImage.frame.size.width /2, self.tipImage.frame.size.height/2);
[imageView setImageWithURL:[NSURL URLWithString:imageString]
placeholderImage:nil options:SDWebImageProgressiveDownload
success:^(UIImage *image) { [activityIndicator stopAnimating];[activityIndicator removeFromSuperview]; }
failure:^(NSError *error) { [activityIndicator stopAnimating];[activityIndicator removeFromSuperview]; }];
[imageView addSubview:activityIndicator];
```
|
[This fork](https://github.com/parsec-digital/SDWebImage) has support for this functionality. Take a look at [the diff](https://github.com/rs/SDWebImage/pull/131/files).
But, in general, you can do it rather easily without using that fork:
```
__block UIActivityIndicatorView *activityIndicator = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:activityStyle];
activityIndicator.center = imageView.center;
activityIndicator.hidesWhenStopped = YES;
[imageView setImageWithURL:[NSURL URLWithString:@"http://www.domain.com/path/to/image.jpg"]
placeholderImage:[UIImage imageNamed:@"placeholder.png"]
success:^(UIImage *image) { [activityIndicator removeFromSuperview]; }
failure:^(NSError *error) { [activityIndicator removeFromSuperview]; }];
[imageView addSubview:activityIndicator];
[activityIndicator startAnimating];
```
|
Exception on Monitor.Exit in C#
This is a large multi-threaded project (which I didn't write) that I am fixing. The application hangs on some locks which I am tracking down.
I went through and replaced all the "lock" statements with `Monitor.TryEnter` so I could set a wait period. I am occasionally getting an exception with the `Monitor.Exit`.
The original style was
```
private List<myClass> _myVar= new List<myClass>();
if (_myVar != null)
{
lock (_myVar)
{
_myVar = newMyVar; // Where newMyVar is another List<myClass>
}
}
```
I replaced all my locks like above with:
```
if (_myVar != null)
{
bool lockTaken = false;
try
{
Monitor.TryEnter(_myVar, new TimeSpan(0, 0, 5), ref lockTaken);
if (lockTaken)
{
_myVar = newMyVar; // Where newMyVar is another List<myClass>
}
}
finally
{
if (lockTaken) Monitor.Exit(_myVar);
}
}
```
The exception I am getting is
>
> SynchronizationLockException Object synchronization method was called
> from an unsynchronized block of code
>
>
>
If this is true, why doesn't the original lock statement also throw an exception?
Would it be safe to put the `Monitor.Exit` in a try catch and just ignore it if there is an exception?
|
It should be very clear why you are getting the exception in your new code. **If the lock is taken then the object that is unlocked is not the object that was locked**. Locks take an *object*, not a *variable*. The correct translation of the *deeply wrong* original code is
```
// THIS CODE IS COMPLETELY WRONG; DO NOT USE IT
if (_myVar != null)
{
bool lockTaken = false;
var locker = _myVar;
try
{
Monitor.TryEnter(locker, new TimeSpan(0, 0, 5), ref lockTaken);
if (lockTaken)
{
_myVar = newMyVar; // where newMyVar is another List<myClass>
}
}
finally
{
if (lockTaken) Monitor.Exit(locker);
}
}
```
Which will not throw on exit, but *is still completely wrong*.
**Never** lock on the contents of a variable and then *mutate the variable*; *every subsequent lock will lock on a different object!* So you have no mutual exclusion.
And **never** lock on a public object! If that list leaks out anywhere then *other wrong code can be locking on that list in an unexpected order*, which means deadlocks -- which is the original symptom you are diagnosing.
The correct practice for locking on a field is to create a private readonly object field used *only* as a locker, and used *every time* the field is accessed. That way you know that (1) the field is always accessed under the same lock, no matter its value, and (2) the lock object is used only for locking that field, and not for locking something else. That ensures mutual exclusion and prevents deadlocks.
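As a minimal sketch of that practice (the method names are illustrative, not from the original code base; `myClass` is the element type from the question):
```
using System.Collections.Generic;

class MyContainer
{
    private readonly object _myVarLock = new object();   // used only to guard _myVar
    private List<myClass> _myVar = new List<myClass>();

    public void ReplaceList(List<myClass> newMyVar)
    {
        lock (_myVarLock)   // always the same object, no matter what _myVar currently holds
        {
            _myVar = newMyVar;
        }
    }

    public List<myClass> GetList()
    {
        lock (_myVarLock)   // reads take the same lock as writes
        {
            return _myVar;
        }
    }
}
```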
The fact that someone wrote a large multithreaded program without understanding the most basic facts about locks means that it is almost certainly a complete mess of hard-to-find bugs. The fact that this wasn't immediately obvious upon reading the code means that you don't have enough knowledge of threading to fix the problems correctly. You're going to need to either find an expert on this stuff who can help you, or gain at least a minimal working knowledge of correct practices.
I cannot emphasize enough that **this is hard stuff.** Programs with multiple threads of control in them are extremely difficult to write correctly on modern hardware; many of the things you believe are guaranteed by the language are only guaranteed in single threaded programs.
|
How to make an AnimatedContainer change height after the initial widget build, instead of on a button click
```
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.only(bottom: 3.0, right: 1),
child: AnimatedContainer(
duration: Duration(seconds: 1),
height: voteCountList[0],
decoration: BoxDecoration(
color: ColorChooser.graphColors[int.parse(optionsList[0]['color'])],
borderRadius: BorderRadius.all(Radius.circular(2))
),
),
),
),
```
As seen in my code snippet, the height of the AnimatedContainer is voteCountList[0]; this works fine when the value is updated. However, when the widget is initially built there is no animation and the height of the container is instantly voteCountList[0]. I would like to implement the AnimatedContainer such that the height of the container animates from 0 to voteCountList[0]. This height needs to animate on build.
|
If you don't want to use an AnimatedBuilder or create your own AnimationController, one way to do it with the AnimatedContainer would be:
```
class MyWidget extends StatelessWidget {
Future<double> get _height => Future<double>.value(200);
@override
Widget build(BuildContext context) {
return Padding(
padding: const EdgeInsets.only(bottom: 3.0, right: 1),
child: FutureBuilder<double>(
future: _height,
initialData: 0.0,
builder: (context, snapshot) {
return AnimatedContainer(
duration: Duration(seconds: 1),
width: 200,
height: snapshot.data, //voteCountList[0],
decoration: BoxDecoration(
color: Colors.red,
borderRadius: BorderRadius.all(Radius.circular(2))),
);
}
)
);
}
}
```
You start the FutureBuilder with an initial value of 0.0; after a tick the future resolves (it's a value you already had, you only wrapped it in a future so the builder sees it arrive a tick later), and the container animates from 0 to the resolved value, voteCountList[0].
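Another way to get the same effect, sketched below assuming a recent Flutter SDK with null safety, is to keep the height in widget state, start it at 0, and set the real value right after the first frame so the AnimatedContainer has something to animate from; `targetHeight` stands in for `voteCountList[0]`:
```
import 'package:flutter/material.dart';

class GrowingBar extends StatefulWidget {
  final double targetHeight; // stand-in for voteCountList[0]
  const GrowingBar({Key? key, required this.targetHeight}) : super(key: key);

  @override
  State<GrowingBar> createState() => _GrowingBarState();
}

class _GrowingBarState extends State<GrowingBar> {
  double _height = 0;

  @override
  void initState() {
    super.initState();
    // After the first frame is laid out with height 0, jump to the target height;
    // the AnimatedContainer then animates the change over its duration.
    WidgetsBinding.instance.addPostFrameCallback((_) {
      if (mounted) setState(() => _height = widget.targetHeight);
    });
  }

  @override
  Widget build(BuildContext context) {
    return AnimatedContainer(
      duration: const Duration(seconds: 1),
      height: _height,
      decoration: BoxDecoration(
        color: Colors.red,
        borderRadius: BorderRadius.all(Radius.circular(2)),
      ),
    );
  }
}
```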
|
How to state facts in Prolog that work both ways?
What is the correct way to do this in Prolog:
"jake and amy are married."
```
married(jake, amy).
```
this statement says jake is married to amy, but not that amy is married to jake.
so to solve this I thought of adding this
```
married(amy, jake).
```
but this feels redundant and may mess up some searches.
Is there any other way to do this?
|
You could write a helper predicate `partner/2` to do it:
```
partner(X,Y):- married(X,Y).
partner(X,Y):- married(Y,X).
```
or shorter:
```
partner(X,Y):- married(X,Y) ; married(Y,X).
```
Gives the output
```
?- partner(X,Y).
X = jake,
Y = amy ;
X = amy,
Y = jake ;
false.
```
However if you define it like this:
```
mar(jake, amy).
mar(X,Y):- mar(Y,X).
```
you might get stuck in an infinite loop when called within an unlucky constellation.
```
?- mar(X,Y).
X = jake,
Y = amy ;
X = amy,
Y = jake ;
X = jake,
Y = amy ;
...
```
|
Array reference confusion in PHP
```
$arr = array(1);
$a = & $arr[0];
$arr2 = $arr;
$arr2[0]++;
echo $arr[0],$arr2[0];
// Output 2,2
```
Can you please help me understand how this is possible?
|
>
> Note, however, that references inside arrays are potentially
> dangerous. Doing a normal (not by reference) assignment with a
> reference on the right side does not turn the left side into a
> reference, but references inside arrays are preserved in these normal
> assignments. This also applies to function calls where the array is
> passed by value.
>
>
>
```
/* Assignment of array variables */
$arr = array(1);
$a =& $arr[0]; //$a and $arr[0] are in the same reference set
$arr2 = $arr; //not an assignment-by-reference!
$arr2[0]++;
/* $a == 2, $arr == array(2) */
/* The contents of $arr are changed even though it's not a reference! */
```
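For contrast, here is the same snippet without the reference; with a plain copy-on-write assignment the two arrays stay independent:
```
/* Same code, but with no reference involved */
$arr = array(1);
$arr2 = $arr; // ordinary assignment: $arr2 is an independent copy
$arr2[0]++;
echo $arr[0],$arr2[0];
// Output 1,2
```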
|
Q: Change notification icon in Android Studio
I configured the application icon (Image 1), but the notification icon (Notification sent via Firebase) shows a gray rectangle (Image 2).
What is the procedure for changing the notification icon image (Image 2) via Android Studio 2.3 or via script?
**Manifest**
```
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.studioshow.studioshow">
<uses-permission android:name="android.permission.INTERNET" />
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
```
[Image 1](https://i.stack.imgur.com/1T6ED.jpg)
[Image 2](https://i.stack.imgur.com/x5Gyi.jpg)
|
In the manifest file, set the notification meta-data.
The following code is from my case.
```
<application
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@android:style/Theme.NoTitleBar" >
<activity
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<service
android:name=".MyFirebaseMessagingService">
<intent-filter>
<action android:name="com.google.firebase.MESSAGING_EVENT"/>
</intent-filter>
</service>
<meta-data
android:name="com.google.firebase.messaging.default_notification_icon"
android:resource="@drawable/ic_launcher" />
<meta-data
android:name="com.google.firebase.messaging.default_notification_color"
android:resource="@color/whitechoke" />
</application>
```
|
Conversion failed when converting date and/or time from character string
I am struggling with this query which returns the error: Conversion failed when converting date and/or time from character string.
This is a common error judging from my google searches, but nothing I've tried so far works. I've tried casting @startdate as datetime and varchar and leaving it alone, as in the below example.
I've also tried using convert against the fieldname and the parameter name, although admittedly, I may just be getting the syntax wrong.
```
ALTER PROCEDURE [dbo].[customFormReport]
(
@formid int,
@startdate DATETIME
)
AS
BEGIN
SET NOCOUNT ON
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(fieldname) from FormResponse WHERE FormID = @formid AND value IS NOT NULL FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT FormID, FormSubmissionID,' + @cols + ' from
(
SELECT FormID, FormSubmissionID, fieldname, value
FROM FormResponse WHERE FormID = ' + CAST(@formid AS VARCHAR(25)) + ' AND createDate > ' + @startdate + '
) x
pivot
(
max(value)
for fieldname in (' + @cols + ')
) p '
execute(@query)
```
edit: the query works except when I add the bit causing the error:
```
' AND createDate > ' + @startdate + '
```
|
The problem is you are attempting to concatenate a `datetime` to your `varchar` sql string. You need to convert it:
```
convert(varchar(10), @startdate, 120)
```
(Style 120 produces `yyyy-mm-dd hh:mi:ss`; with `varchar(10)` only the date part is kept, so use a longer length such as `varchar(19)` if the time portion matters.)
So the full code will be:
```
ALTER PROCEDURE [dbo].[customFormReport]
(
@formid int,
@startdate DATETIME
)
AS
BEGIN
SET NOCOUNT ON
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(fieldname) from FormResponse WHERE FormID = @formid AND value IS NOT NULL FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT FormID, FormSubmissionID,' + @cols + ' from
(
SELECT FormID, FormSubmissionID, fieldname, value
FROM FormResponse
WHERE FormID = ' + CAST(@formid AS VARCHAR(25)) + '
AND createDate > ''' + convert(varchar(10), @startdate, 120) + '''
) x
pivot
(
max(value)
for fieldname in (' + @cols + ')
) p '
execute(@query)
```
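As a side note, a sketch of a more robust variant: keep `@formid` and `@startdate` as real parameters of the dynamic SQL via `sp_executesql` instead of concatenating them, which sidesteps the conversion issue entirely:
```
set @query = N'SELECT FormID, FormSubmissionID,' + @cols + N' from
(
    SELECT FormID, FormSubmissionID, fieldname, value
    FROM FormResponse
    WHERE FormID = @formid AND createDate > @startdate
) x
pivot
(
    max(value)
    for fieldname in (' + @cols + N')
) p '

-- the parameters are passed through instead of being spliced into the string
exec sp_executesql @query,
    N'@formid int, @startdate datetime',
    @formid = @formid,
    @startdate = @startdate
```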
|
In C# (WPF) does databinding occur when the datachanges in the UI thread immediately?
I'm creating an application and I've been under the assumption that when a control is bound to a member in the view-model (i.e. a TextBox to a string field) the string is updated whenever the user changes the information in the textbox and no later.
But what I've found is that the string is updated when the textbox is changed AND when the user clicks/tabs out of the textbox. (I'm using the Caliburn.Micro framework if that matters.)
Can someone explain which is correct and how to make it so that a change is immediately reflected?
|
This is not a WPF issue... it totally lies with the controls.
>
> But what I've found is that the string
> is updated when the textbox is changed
> AND when the user clicks\tabs out of
> the textbox.
>
>
>
This is specific to the TextBox, to reduce the number of set operations and avoid pushing incomplete data to the source.
<http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/c3ae2677-7cc9-4bb3-9cce-4e7c0eeff6f0> has a solution - basically, the UpdateSourceTrigger is set to PropertyChanged. If you do that, though, you get a lot more invalid data into the model; for example, when people enter an invoice number, all the partial values will be pushed to the model.
<http://msdn.microsoft.com/en-us/library/system.windows.data.binding.updatesourcetrigger.aspx> has a nice explanation - like it says, the normal trigger is PropertyChanged, while the Text property defaults to LostFocus.
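In XAML that looks something like the following sketch, where `Name` is a placeholder property on the view-model:
```
<TextBox Text="{Binding Name, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}" />
```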
|
Grails Spring Security max concurrent session
I have a Grails 2.5.1 app with the Spring Security plugin (2.0-RC5). I would like to limit the number of concurrent sessions per user. I have read some blog posts but it doesn't work. (<http://www.block-consult.com/blog/2012/01/20/restricting-concurrent-user-sessions-in-grails-2-using-spring-security-core-plugin/>)
my resources.groovy
```
beans = {
sessionRegistry(SessionRegistryImpl)
concurrencyFilter(ConcurrentSessionFilter,sessionRegistry,'/main/index'){
logoutHandlers = [ref("rememberMeServices"), ref("securityContextLogoutHandler")]
}
concurrentSessionControlStrategy(ConcurrentSessionControlAuthenticationStrategy, sessionRegistry) {
exceptionIfMaximumExceeded = true
maximumSessions = 1
}
}
```
In my BootStrap.groovy
```
def init = { servletContext ->
SpringSecurityUtils.clientRegisterFilter('concurrencyFilter', SecurityFilterPosition.CONCURRENT_SESSION_FILTER)
}
```
and in my Config.groovy I have added this:
```
grails.plugin.springsecurity.useHttpSessionEventPublisher = true
```
Thanks..
|
To start with, let me warn you about a few things if you decide to continue with my solution.
- **SessionRegistryImpl** is not scalable. You need to create your own scalable implementation based on your scaling plan (e.g. a data grid). Session replication alone is not enough.
- Currently, the default logout handlers do not remove the session from the **SessionRegistry** properly, so I have created a sample logout handler called **CustomSessionLogoutHandler**.
- You have to override the Grails spring-security-core LoginController to handle **SessionAuthenticationException**.
- You can change the number of sessions a user may have: maximumSessions = 1 allows a single login, and -1 allows unlimited sessions.
First, in resources.groovy:
```
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.ConcurrentSessionControlAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionFixationProtectionStrategy;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.CompositeSessionAuthenticationStrategy;
import com.basic.CustomSessionLogoutHandler
// Place your Spring DSL code here
beans = {
sessionRegistry(SessionRegistryImpl)
customSessionLogoutHandler(CustomSessionLogoutHandler,ref('sessionRegistry') )
concurrentSessionControlAuthenticationStrategy(ConcurrentSessionControlAuthenticationStrategy,ref('sessionRegistry')){
exceptionIfMaximumExceeded = true
maximumSessions = 1
}
sessionFixationProtectionStrategy(SessionFixationProtectionStrategy){
migrateSessionAttributes = true
alwaysCreateSession = true
}
registerSessionAuthenticationStrategy(RegisterSessionAuthenticationStrategy,ref('sessionRegistry'))
sessionAuthenticationStrategy(CompositeSessionAuthenticationStrategy,[ref('concurrentSessionControlAuthenticationStrategy'),ref('sessionFixationProtectionStrategy'),ref('registerSessionAuthenticationStrategy')])
}
```
In Config.groovy, make sure customSessionLogoutHandler comes before securityContextLogoutHandler:
```
grails.plugin.springsecurity.logout.handlerNames = ['customSessionLogoutHandler','securityContextLogoutHandler']
```
**ConcurrentSessionControlAuthenticationStrategy** uses the following i18n key, so you can provide the message in your own language:
```
ConcurrentSessionControlAuthenticationStrategy.exceededAllowed = Maximum sessions for this principal exceeded. {0}
```
This is my sample **CustomSessionLogoutHandler**; you can save it in src/groovy/com/basic/CustomSessionLogoutHandler.groovy:
```
/*
* Copyright 2002-2013 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.basic;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.Authentication;
import org.springframework.security.web.authentication.logout.LogoutHandler;
import org.springframework.util.Assert;
import org.springframework.security.core.session.SessionRegistry;
/**
* {@link CustomSessionLogoutHandler} is in charge of removing the {@link SessionRegistry} upon logout. A
* new {@link SessionRegistry} will then be generated by the framework upon the next request.
*
* @author Mohd Qusyairi
* @since 0.1
*/
public final class CustomSessionLogoutHandler implements LogoutHandler {
private final SessionRegistry sessionRegistry;
/**
* Creates a new instance
* @param sessionRegistry the {@link SessionRegistry} to use
*/
public CustomSessionLogoutHandler(SessionRegistry sessionRegistry) {
Assert.notNull(sessionRegistry, "sessionRegistry cannot be null");
this.sessionRegistry = sessionRegistry;
}
/**
* Clears the {@link SessionRegistry}
*
* @see org.springframework.security.web.authentication.logout.LogoutHandler#logout(javax.servlet.http.HttpServletRequest,
* javax.servlet.http.HttpServletResponse,
* org.springframework.security.core.Authentication)
*/
public void logout(HttpServletRequest request, HttpServletResponse response,
Authentication authentication) {
this.sessionRegistry.removeSessionInformation(request.getSession().getId());
}
}
```
My sample LoginController (copied from the plugin source) is below if you need it too. Just save it as a normal controller in your project and it will override the default. Note the `authfail()` action below, where I handle the **SessionAuthenticationException**:
```
/* Copyright 2013-2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.basic
import grails.converters.JSON
import org.springframework.security.access.annotation.Secured
import org.springframework.security.authentication.AccountExpiredException
import org.springframework.security.authentication.AuthenticationTrustResolver
import org.springframework.security.authentication.CredentialsExpiredException
import org.springframework.security.authentication.DisabledException
import org.springframework.security.authentication.LockedException
import org.springframework.security.core.Authentication
import org.springframework.security.core.context.SecurityContextHolder
import org.springframework.security.web.WebAttributes
import org.springframework.security.web.authentication.session.SessionAuthenticationException
import javax.servlet.http.HttpServletResponse
import grails.plugin.springsecurity.SpringSecurityUtils
@Secured('permitAll')
class LoginController {
/** Dependency injection for the authenticationTrustResolver. */
AuthenticationTrustResolver authenticationTrustResolver
/** Dependency injection for the springSecurityService. */
def springSecurityService
/** Default action; redirects to 'defaultTargetUrl' if logged in, /login/auth otherwise. */
def index() {
if (springSecurityService.isLoggedIn()) {
redirect uri: conf.successHandler.defaultTargetUrl
}
else {
redirect action: 'auth', params: params
}
}
/** Show the login page. */
def auth() {
def conf = getConf()
if (springSecurityService.isLoggedIn()) {
redirect uri: conf.successHandler.defaultTargetUrl
return
}
String postUrl = request.contextPath + conf.apf.filterProcessesUrl
render view: 'auth', model: [postUrl: postUrl,
rememberMeParameter: conf.rememberMe.parameter,
usernameParameter: conf.apf.usernameParameter,
passwordParameter: conf.apf.passwordParameter,
gspLayout: conf.gsp.layoutAuth]
}
/** The redirect action for Ajax requests. */
def authAjax() {
response.setHeader 'Location', conf.auth.ajaxLoginFormUrl
render(status: HttpServletResponse.SC_UNAUTHORIZED, text: 'Unauthorized')
}
/** Show denied page. */
def denied() {
if (springSecurityService.isLoggedIn() && authenticationTrustResolver.isRememberMe(authentication)) {
// have cookie but the page is guarded with IS_AUTHENTICATED_FULLY (or the equivalent expression)
redirect action: 'full', params: params
return
}
[gspLayout: conf.gsp.layoutDenied]
}
/** Login page for users with a remember-me cookie but accessing a IS_AUTHENTICATED_FULLY page. */
def full() {
def conf = getConf()
render view: 'auth', params: params,
model: [hasCookie: authenticationTrustResolver.isRememberMe(authentication),
postUrl: request.contextPath + conf.apf.filterProcessesUrl,
rememberMeParameter: conf.rememberMe.parameter,
usernameParameter: conf.apf.usernameParameter,
passwordParameter: conf.apf.passwordParameter,
gspLayout: conf.gsp.layoutAuth]
}
/** Callback after a failed login. Redirects to the auth page with a warning message. */
def authfail() {
String msg = ''
def exception = session[WebAttributes.AUTHENTICATION_EXCEPTION]
if (exception) {
if (exception instanceof AccountExpiredException) {
msg = message(code: 'springSecurity.errors.login.expired')
}
else if (exception instanceof CredentialsExpiredException) {
msg = message(code: 'springSecurity.errors.login.passwordExpired')
}
else if (exception instanceof DisabledException) {
msg = message(code: 'springSecurity.errors.login.disabled')
}
else if (exception instanceof LockedException) {
msg = message(code: 'springSecurity.errors.login.locked')
}
else if (exception instanceof SessionAuthenticationException){
msg = exception.getMessage()
}
else {
msg = message(code: 'springSecurity.errors.login.fail')
}
}
if (springSecurityService.isAjax(request)) {
render([error: msg] as JSON)
}
else {
flash.message = msg
redirect action: 'auth', params: params
}
}
/** The Ajax success redirect url. */
def ajaxSuccess() {
render([success: true, username: authentication.name] as JSON)
}
/** The Ajax denied redirect url. */
def ajaxDenied() {
render([error: 'access denied'] as JSON)
}
protected Authentication getAuthentication() {
SecurityContextHolder.context?.authentication
}
protected ConfigObject getConf() {
SpringSecurityUtils.securityConfig
}
}
```
|
How to create drop shadow for Rectangle on QtQuick 2.0
How can I draw a drop shadow for a `Rectangle` visual item in QtQuick 2.0?
I'd like to draw a drop shadow for my main window (I have a transparent, undecorated window).
|
As a workaround for the clipped shadow issue, you can put your `Rectangle` in an `Item`, with additional margin to take the blur radius into account, and apply the shadow to that container:
```
import QtQuick 2.0
import QtGraphicalEffects 1.0
Item {
width: 320
height: 240
Item {
id: container
anchors.centerIn: parent
width: rect.width + (2 * rectShadow.radius)
height: rect.height + (2 * rectShadow.radius)
visible: false
Rectangle {
id: rect
width: 100
height: 50
color: "orange"
radius: 7
antialiasing: true
border {
width: 2
color: "red"
}
anchors.centerIn: parent
}
}
DropShadow {
id: rectShadow
anchors.fill: source
cached: true
horizontalOffset: 3
verticalOffset: 3
radius: 8.0
samples: 16
color: "#80000000"
smooth: true
source: container
}
}
```
|
Removed some directories being root and Xubuntu won't start
I had some problems before doing this where xfce wouldn't start (applications started, but had no borders and I couldn't type anything on the keyboard), and someone told me to use this command: `sudo rm -rf /usr/share/xsessions`
Now not even Xorg will start. I can do `Ctrl` + `Alt` + `F1` and use `tty-1`, and I tried `apt-get install --reinstall xfce4`, but I have the same problem.
What do you recommend I do?
|
I'm not sure I believe that in 2013 people still run commands with `rm -rf` they find on the internet without trying to work out what they're doing... Perhaps Ubuntu should pop up a big warning the first time a user runs `rm`...
But anyhow... The session file lives in the `xfce4-session` package. Reinstall that and you should be golden.
```
sudo apt-get install --reinstall xfce4-session
```
And here are a few others:
- For the standard Ubuntu desktop, reinstall `gnome-session`
- For KDE you would want `kde-workspace-data`.
- For Gnome-shell reinstall `gnome-shell`.
- For AwesomeWM, reinstall `awesome`.
There are others but I don't know their xsession names to look them up by.
|
Authorized client-side JS API calls with Google's gapi library with an existing access token?
## Some background
I'm using Meteor as an application framework, and I use the `accounts-google` package to authenticate users. Using the accounts packages is very nice, because it handles all the dirty work of obtaining access tokens, refreshing them on expiry, etc. However, I wanted more profile information about the user than what gets populated in the `Meteor.user()` object.
With Facebook, I was able to easily load their client-side JS library and make graph api requests using `Meteor.user().services.facebook.accessToken` and following the documentation on the API:
<https://developers.facebook.com/docs/javascript/reference/FB.api>
## My Problem
When referencing Google's JavaScript API, the documentation states that when making API calls, the request includes the access token automatically, **but only when using gapi to handle the authorization requests**.
<https://developers.google.com/api-client-library/javascript/features/authentication#MakingtheAPIrequest>
This is not helpful when relying on a 3rd party authorization package (or when your application handles Google's authentication away from the client side).
Is there any way to use an existing access token in requests using the `gapi.client` library methods?
|
I struggled over this for a long time, and I was able to find a similar question posted here:
[Google OAuth2 - Using externally generated Access Token - With JS Client Library](https://stackoverflow.com/questions/21168688/google-oauth2-using-externally-generated-access-token-with-js-client-library)
However, this person already knew what had to be done, just not how to make the appropriate call, so it was fairly lucky that I found it.
## Solution
In order to make `gapi` requests using an existing access token, you must call:
```
gapi.auth.setToken({
access_token: "token_string"
});
```
before making a client request. In Meteor, you can use `Meteor.user().services.google.accessToken` in place of the `"token_string"` above.
Documentation on the method is here:
<https://developers.google.com/api-client-library/javascript/reference/referencedocs#gapiauthsetToken>
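Putting it together in Meteor, a minimal sketch looks like this (the `plus` API and the `people.get` call are only placeholders for whichever API you have loaded):
```
// Reuse the token Meteor's accounts-google package already obtained.
gapi.auth.setToken({
  access_token: Meteor.user().services.google.accessToken
});

// Placeholder API call; substitute the API/method you actually need.
gapi.client.load('plus', 'v1', function () {
  gapi.client.plus.people.get({ userId: 'me' }).execute(function (resp) {
    console.log(resp);
  });
});
```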
|
Multi-tenancy in Hibernate - multi databases (SQLite)
I'm trying to do multi-tenancy with multi databases. From this [chapter](http://docs.jboss.org/hibernate/orm/4.1/devguide/en-US/html/ch16.html#d5e4691) I took **MultiTenantConnectionProviderImpl**.
And here I have a problem: Eclipse cannot find the class **ConnectionProviderUtils**. I'm using Maven with this dependency:
```
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>4.1.4.Final</version>
</dependency>
```
|
I hate to disappoint you, but I ran into the same problem a while back. The point is that the ConnectionProviderUtil in the documentation is quite misleading: there is no such thing. The ConnectionProviderUtil is something you have to implement yourself. I implemented this by constructing my own `DataSource` (a c3p0 pooled one) in the MultiTenantConnectionProvider and handing out connections from there.
So you have to implement it from scratch yourself. For reference, here is my way to a solution: [Setting up a MultiTenantConnectionProvider using Hibernate 4.2 and Spring 3.1.1](https://stackoverflow.com/questions/16213573/setting-up-a-multitenantconnectionprovider-using-hibernate-4-2-and-spring-3-1-1)
For the multi DB approach you can just autowire the different `DataSources` into the `MultiTenantConnectionProvider` and switch based on the TenantIdentifier. See this answer for more details: <https://stackoverflow.com/a/16769595/2319179>
**Edit:**
If you use Spring you can set up a DataSource in the appcontext like this:
```
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="<jdbcdriver>" />
<property name="url" value="jdbc:SQLServer://<host>:<port>;databaseName=<dbname>" />
<property name="username" value="<user>" />
<property name="password" value="<pw>" />
</bean>
```
If you need to build it from Java, you can do it like this (this is c3p0's `ComboPooledDataSource`, matching the pooled DataSource mentioned above):
```
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("<jdbc.driver>");
dataSource.setUrl("<jdbc.url>");
dataSource.setUsername("<user>");
dataSource.setPassword("<pw>");
```
A quick googlesearch should bring up the right driver.
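For the tenant-switching part itself, here is a rough sketch of a multi-database provider (a minimal sketch, not the class from the documentation; the SPI package and exact signatures differ between Hibernate versions, e.g. `org.hibernate.service.jdbc.connections.spi` in 4.1 versus `org.hibernate.engine.jdbc.connections.spi` in later releases, and the tenant-to-DataSource map is assumed to be wired in from Spring):
```
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

import org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider;

public class MapBasedMultiTenantConnectionProvider implements MultiTenantConnectionProvider {

    // tenant identifier -> DataSource, assumed to be filled elsewhere (e.g. autowired by Spring)
    private final Map<String, DataSource> dataSources;
    private final DataSource defaultDataSource;

    public MapBasedMultiTenantConnectionProvider(Map<String, DataSource> dataSources,
                                                 DataSource defaultDataSource) {
        this.dataSources = dataSources;
        this.defaultDataSource = defaultDataSource;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        // Used by Hibernate when no tenant is in play (metadata lookups etc.)
        return defaultDataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        // Switch database purely on the tenant identifier
        DataSource ds = dataSources.get(tenantIdentifier);
        if (ds == null) {
            throw new SQLException("Unknown tenant: " + tenantIdentifier);
        }
        return ds.getConnection();
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        throw new UnsupportedOperationException("Unwrap not supported");
    }
}
```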
|
Why is module math called "built-in"?
AFAIK, Python `builtins` refers to those exceptions and functions contained in `__builtins__`:
```
>>> import builtins # import __builtin__ in Python 2
>>> dir(builtins) # dir(__builtin__) for Python 2
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',
'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',
...many more...
'ord', 'pow', 'print', 'property', 'quit', 'range', 'repr', 'reversed',
'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum',
'super', 'tuple', 'type', 'vars', 'zip']
```
But look at the code below (both Python2 and 3 gave the same results):
```
>>> globals()
{'__name__': '__main__', '__builtins__': <module 'builtins' (built-in)>,
'__loader__': <class '_frozen_importlib.BuiltinImporter'>,
'__doc__': None, '__package__': None}
>>> import threading
>>> import math
>>> globals()
{'__name__': '__main__', '__builtins__': <module 'builtins' (built-in)>,
'__loader__': <class '_frozen_importlib.BuiltinImporter'>,
'__doc__': None, '__package__': None,
'threading': <module 'threading' from 'C:\\ProgramFiles\\Python3.3\\lib\\threading.py'>,
'math': <module 'math' (built-in)>}
```
On the last line, module `math` is called `built-in`. Why? What's the difference between module `math` and other modules like `threading`?
|
From the docs (`./Doc/library/stdtypes.rst`):
>
> ## Modules
>
>
> ...
>
>
> Modules built into the interpreter are written like this: `<module 'sys' (built-in)>`. If loaded from a file, they are written as `<module 'os' from '/usr/local/lib/pythonX.Y/os.pyc'>`.
>
>
>
The relevant code is in the `repr()` function for the module object:
```
static PyObject *
module_repr(PyModuleObject *m)
{
char *name;
char *filename;
name = PyModule_GetName((PyObject *)m);
if (name == NULL) {
PyErr_Clear();
name = "?";
}
filename = PyModule_GetFilename((PyObject *)m);
if (filename == NULL) {
PyErr_Clear();
return PyString_FromFormat("<module '%s' (built-in)>", name);
}
return PyString_FromFormat("<module '%s' from '%s'>", name, filename);
}
```
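You can also check this from Python itself; `sys.builtin_module_names` lists the modules compiled into the interpreter (whether `math` is among them depends on how Python was built, but it usually is on CPython):
```
import sys

# Names of modules compiled into the interpreter
print('math' in sys.builtin_module_names)       # usually True
print('threading' in sys.builtin_module_names)  # False: loaded from threading.py
```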
|
When would the python tracemalloc module allocations statistics not match what's shown in ps or pmap?
I'm trying to track down a memory leak, so I've done
```
import tracemalloc
tracemalloc.start()
<function call>
# copy pasted this from documentation
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')
print("[ Top 10 ]")
for stat in top_stats[:10]:
print(stat)
```
This shows no major allocations, all memory allocations are pretty small, while I'm seeing 8+ GB memory allocated in `ps` and `pmap` (checking before and after running the command, and after running garbage collection). Furthermore, `tracemalloc.get_traced_memory` confirms that `tracemalloc` is not seeing many allocations. `pympler` also does not see the allocations.
Does anyone know when this could be the case? Some modules are using cython, could this cause issues for tracemalloc?
In pmap the allocation looks like:
`0000000002183000 6492008 6491876 6491876 rw--- [ anon ]`
|
From the documentation on [tracemalloc](https://docs.python.org/3/library/tracemalloc.html):
`The tracemalloc module is a debug tool to trace memory blocks allocated by Python.`
In other words, memory that is not allocated through Python's allocators is not seen by tracemalloc. That includes anything not going through `PyMalloc` at the C-API level: for example, standard libc `malloc` calls made by native code used via extensions, or extension code calling `malloc` directly.
Whether that is the case here is impossible to tell for certain without code to reproduce. You can try running the native code part outside of python through, for example, valgrind, to detect memory leaks in the native code.
If there is cython code doing `malloc`, that could be switched to `PyMalloc` to have it traced.
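To make the effect visible, here is a small sketch (assuming a libc that `ctypes.util.find_library` can locate, as on typical Linux/macOS systems) that allocates memory behind Python's allocators and therefore behind tracemalloc's back:
```
import ctypes
import ctypes.util
import tracemalloc

tracemalloc.start()

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.memset.restype = ctypes.c_void_p
libc.memset.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

size = 100 * 1024 * 1024
ptr = libc.malloc(size)      # allocated by libc directly, not by Python's allocators
libc.memset(ptr, 1, size)    # touch the pages so they show up in ps/pmap

current, peak = tracemalloc.get_traced_memory()
print("tracemalloc sees:", current, "bytes")  # a few KB at most, not 100 MB

libc.free(ptr)
```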
|
How to get scalar value on a cell using conditional indexing
I have the dataframe shown below. I need to get the scalar value of column B, dependent on the value of A (which is a variable in my script). I'm trying the loc() function but it returns a Series instead of a scalar value. How do I get the scalar value?
```
>>> x = pd.DataFrame({'A' : [0,1,2], 'B' : [4,5,6]})
>>> x
A B
0 0 4
1 1 5
2 2 6
>>> x.loc[x['A'] == 2]['B']
2 6
Name: B, dtype: int64
>>> type(x.loc[x['A'] == 2]['B'])
<class 'pandas.core.series.Series'>
```
|
First of all, you're better off accessing both the row and column indices from the `.loc`:
```
x.loc[x['A'] == 2, 'B']
```
Second, you can always get at the underlying numpy matrix using `.values` on a series or dataframe:
```
In : x.loc[x['A'] == 2, 'B'].values[0]
Out: 6
```
Finally, if you're not interested in the original question's "conditional indexing", there are also specific accessors designed to get a single scalar value from a DataFrame: [`dataframe.at[index, column]`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.at.html#pandas.DataFrame.at) or [`dataframe.iat[i, j]`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iat.html) (these are similar to `.loc[]` and `.iloc[]` but designed for quick access to a single value).
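For example, with the frame from the question:
```
import pandas as pd

x = pd.DataFrame({'A': [0, 1, 2], 'B': [4, 5, 6]})

# .at uses labels, .iat uses integer positions -- both return a scalar
print(x.at[2, 'B'])   # 6
print(x.iat[2, 1])    # 6
```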
|
How to link external CSS resource with JSF h:outputStylesheet?
I was wondering if I can use `<h:outputStylesheet/>` to link CSS from an external resource. I want to link the Yahoo Grids. Using the following code, I got a `RES_NOT_FOUND`:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:h="http://java.sun.com/jsf/html"
xmlns:p="http://primefaces.prime.com.tr/ui"
xmlns:tcmt="http://java.sun.com/jsf/composite/tcmt/component">
<h:head>
</h:head>
<h:body>
<h:outputStylesheet library="css" name="http://yui.yahooapis.com/3.3.0/build/cssgrids/grids-min.css" target="head" />
</h:body>
</html>
```
|
You can keep using plain HTML for that:
```
<link type="text/css" rel="stylesheet" href="http://yui.yahooapis.com/3.3.0/build/cssgrids/grids-min.css" />
```
When you use the `<h:graphicImage/>` or `<h:outputStylesheet/>` or `<h:outputScript/>`, then the file needs to be inside the `/resources` folder of the web application itself. See also [How to reference CSS / JS / image resource in Facelets template?](https://stackoverflow.com/questions/8367421/how-to-reference-css-js-image-resource-in-facelets-template) But if the file is not provided by the web application, then you should use plain HTML `<img/>` or `<link/>` or `<script></script>` for this.
Instead of the plain HTML `<link/>` you can also download this `.css` and put it in the `/resources` folder of the web application so that you can use `<h:outputStylesheet/>`.
|
Throwing exceptions so that stack trace doesn't contain certain class types
Is it possible to do this ?
The problem is that huge applications have tons of servlet filters, for instance. Each exception thrown while handling an HTTP request contains around 250 lines, of which 160 are from the catalina/tomcat stack and are absolutely not important.
Having 250-line stack traces is very hard to work with.
|
Yes, it is possible to manipulate the stack trace. How best to do it depends on where you want to (and can) attack the problem.
---
As an example:
For a remote method-call protocol I implemented for our project, in the case of an exception we catch it at the target side, cut off the lowest few StackTraceElements (which are always the same, up to the actual invocation of the target method via reflection), and send the exception with the important part of the stack trace to the caller side.
There I reconstruct the exception with its (sent) stack trace, and then merge it with the current stack trace.
For this, we also remove the top few elements of the current stack trace (which contain only calls of the remote-call framework):
```
private void mergeStackTraces(Throwable error)
{
StackTraceElement[] currentStack =
new Throwable().getStackTrace();
    int currentStackLimit = 4; // TODO: look this up properly
// We simply cut off the top 4 elements, which is just
// right for our framework. A more stable solution
// would be to filter by class name or such.
StackTraceElement[] oldStack =
error.getStackTrace();
StackTraceElement[] zusammen =
new StackTraceElement[currentStack.length - currentStackLimit +
oldStack.length + 1];
System.arraycopy(oldStack, 0, zusammen, 0, oldStack.length);
zusammen[oldStack.length] =
new StackTraceElement("══════════════════════════",
"<remote call %" +callID+ ">",
"", -3);
System.arraycopy(currentStack, currentStackLimit,
zusammen, oldStack.length+1,
currentStack.length - currentStackLimit);
error.setStackTrace(zusammen);
}
```
This gives, for example, this trace printed:
```
java.lang.SecurityException: The user example does not exist
at de.fencing_game.db.userdb.Db4oUserDB.login(Db4oUserDB.java:306)
at de.fencing_game.server.impl.StandardServers$SSServer$1.run(StandardServers.java:316)
at de.fencing_game.server.impl.StandardServers$SSServer$1.run(StandardServers.java:314)
at java.security.AccessController.doPrivileged(Native Method)
at de.fencing_game.server.impl.StandardServers$SSServer.login(StandardServers.java:313)
at de.fencing_game.transport.server.ServerTransport$ConnectionInfo$4.login(ServerTransport.java:460)
at ══════════════════════════.<remote call %2>()
at $Proxy1.login(Unknown Source)
at de.fencing_game.gui.basics.LoginUtils.login(LoginUtils.java:80)
at de.fencing_game.gui.Lobby.connectTo(Lobby.java:302)
at de.fencing_game.gui.Lobby$20.run(Lobby.java:849)
```
---
Of course, for your case you would better simply iterate through your array, copy the important elements into a list, and then set this as the new stackTrace. Make sure to do this for the causes (i.e. linked throwables), too.
You can do this in the constructor of your exceptions, or where you print the stack traces, or anywhere between (where you catch, manipulate and rethrow the exception).
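For instance, a rough sketch of such a filter (the Tomcat package prefixes are just examples; adjust them to whatever frames you want to drop):
```
import java.util.ArrayList;
import java.util.List;

public final class StackTraceFilter {

    // Drop frames from packages you don't care about and apply the
    // same filter recursively to the causes.
    public static void filter(Throwable t) {
        List<StackTraceElement> kept = new ArrayList<>();
        for (StackTraceElement element : t.getStackTrace()) {
            String cls = element.getClassName();
            if (!cls.startsWith("org.apache.catalina.")
                    && !cls.startsWith("org.apache.coyote.")) {
                kept.add(element);
            }
        }
        t.setStackTrace(kept.toArray(new StackTraceElement[0]));
        if (t.getCause() != null) {
            filter(t.getCause());
        }
    }
}
```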
|
Render array of arrays table in Angular using ng-repeat
I have an array of arrays (representing rows and columns) and I need to render an HTML table with the data.
Each row is an array of column values, for example: `$scope.table.row[0] = [123, 456, 789]`
This is my attempt (that doesn't work). Any ideas?
```
<table>
<tbody>
<tr ng-repeat="row in table">
<td ng-repeat="col in row">{{col}}</td>
</tr>
</tbody>
</table>
```
|
You either need to iterate over `table.row` or make `table` an array of arrays itself. [Demo](http://plnkr.co/edit/j8wfL0KC9r3AA7SLoY1n?p=preview).
```
<table>
<tbody>
<tr ng-repeat="row in table.row">
<td ng-repeat="col in row">{{col}}</td>
</tr>
</tbody>
</table>
```
OR
```
table = [ [123, 456, 789], [111, 222, 333], [111, 222, 333]]
<table>
<tbody>
<tr ng-repeat="row in table">
<td ng-repeat="col in row">{{col}}</td>
</tr>
</tbody>
</table>
```
|
How do I find out the numbering of merge commit parents?
I need to do a git revert -m <parent-number>, but I have not been able to find any documentation on how the commit parents are numbered. Everything I've seen (including the help pages for rev-parse) just tells me that the parents are numbered but does not say how they are numbered. Could someone point out to me where this is defined and how to determine this?
|
```
git show --format="%P" <SHA>
```
will give you the parent SHAs of the given commit in numerical order. Most often the one you'll want to specify for `git revert` will be `1`.
In the case of most merges, the SHA of the branch that was checked out will be the first parent, and the SHA of the branch that was merged in will be the second. (Basically, the checked out branch is always the first parent; the other parents are the branches that were specified to the merge command in the order of their specification.)
If you want to find the SHA for a particular parent, use the caret operator:
- First parent: `git rev-parse <SHA>^1`
- Second parent: `git rev-parse <SHA>^2`
et cetera.
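Putting it together (a small sketch; `HEAD` here stands in for your merge commit):
```
# show the parents of the merge; the first one is the branch that was checked out
git show -s --format="%P" HEAD

# revert the merge, keeping the first parent's line of development
git revert -m 1 HEAD
```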
|
OpenXML SDK2.5 (Excel): How to determine if a cell contains a numeric value?
I am busy developing a component which imports data from a MS Excel (2016) file.
This component uses the MS OpenXML SDK 2.5 library. The end-user's installation of MS Excel is based on Dutch country / region settings.
The file contains, among others, a column with financial data (numeric). The position of this column is not known in advance.
To determine if a cell contains numeric data I evaluate the property Cell.DataType (of type CellValues, which is an enum).
At first it seems that this property is the perfect candidate to determine this. Possible values of CellValues are:
Boolean, Number, Error, SharedString, String, InlineString or Date. So I would expect that Cell.DataType is set to CellValues.Number.
After some debugging I found out that Cell.DataType is null when the cell contains numeric data.
While searching on internet to find an explanation I found the following MSDN article:
<https://msdn.microsoft.com/en-us/library/office/hh298534.aspx>
The article describes exactly what I found during debugging:
>
> The Cell type provides a DataType property that indicates the type of the data within the cell. The value of the DataType property is null for numeric and date types.
>
>
>
Does anybody know why Cell.DataType is not initialized with respectively CellValues.Number or CellValues.Date?
What is the best way to determine if a cell contains a numeric value?
|
>
> Does anybody know why Cell.DataType is not initialized with respectively CellValues.Number or CellValues.Date?
>
>
>
Looking at the ECMA-376 standard from [here](http://www.ecma-international.org/publications/standards/Ecma-376.htm), the (abbreviated) XSD for a `Cell` looks like this:
```
<xsd:complexType name="CT_Cell">
...
<xsd:attribute name="t" type="ST_CellType" use="optional" default="n"/>
...
</xsd:complexType>
```
That attribute represents the type. Note that it is optional with a default value of `"n"`. Section 18.18.11 ST\_CellType (Cell Type) lists the valid values for the type which are:
>
> b - boolean
>
> d - date
>
> e - error
>
> inlineStr - an inline string
>
> **n - number (the default)**
>
> s - a shared string
> str - a formula string
>
>
>
You can see that `"n"` represents a `number`.
>
> What is the best way to determine if a cell contains a numeric value?
>
>
>
It would seem from the above that you could check for a null `Cell.DataType` or a `Cell.DataType` of `CellValues.Number` to tell if a cell contains a number but it's not quite that simple - the big problem is dates.
It would seem that the original storage mechanism for dates was to use a number and rely on the style to know whether or not the number is actually a number or if the number represents a date.
Confusingly, the spec has been updated to include the `Date` type but *not all dates will use the date type*. The `Date` type means the cell contains a date in ISO 8601 format but it's perfectly valid for a date to be stored as a number with the correct style. The following XML snippet for example shows the same date (1st Feb 2017) in both `Number` and `Date` format:
```
<sheetData>
<row r="1" spans="1:1" x14ac:dyDescent="0.25">
<c r="A1" s="1">
<v>42767</v>
</c>
</row>
<row r="2" spans="1:1" x14ac:dyDescent="0.25">
<c r="A2" s="1" t="d">
<v>2017-02-01</v>
</c>
</row>
</sheetData>
```
Which looks like this when opened in Excel:
[](https://i.stack.imgur.com/Nw6bs.png)
If you need to differentiate between dates and numbers then you will need to find any numbers (null `Cell.DataType` or a `Cell.DataType` of `CellValues.Number`) and then check the style of those cells to ensure they are numbers and not dates disguised as numbers.
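As a rough sketch of that check (the helper is made up; built-in number-format ids 14-22 and 45-47 cover the standard date and time formats, and custom formats with ids of 164 or higher would additionally need their format codes inspected for date tokens such as "yy", "mm" or "dd"):
```
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

static class CellTypeHelper
{
    // Treat a numeric cell as a date if its cell format uses one of the
    // built-in date/time number formats.
    public static bool IsDateStyled(Cell cell, WorkbookPart workbookPart)
    {
        if (cell.StyleIndex == null)
            return false;

        Stylesheet stylesheet = workbookPart.WorkbookStylesPart.Stylesheet;
        CellFormat cellFormat = stylesheet.CellFormats
            .Elements<CellFormat>()
            .ElementAt((int)cell.StyleIndex.Value);

        if (cellFormat.NumberFormatId == null)
            return false;

        uint id = cellFormat.NumberFormatId.Value;
        return (id >= 14 && id <= 22) || (id >= 45 && id <= 47);
    }
}
```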
|
window.getSelection() offset with HTML tags?
If I have the following HTML:
```
<div class="content">
Vivamus <span>luctus</span> urna sed urna ultricies ac tempor dui sagittis.
</div>
```
And I run an event on `mouseup` that sees the ranges of the selected text:
```
$(".content").on("mouseup", function () {
var start = window.getSelection().baseOffset;
var end = window.getSelection().focusOffset;
if (start < end) {
var start = window.getSelection().baseOffset;
var end = window.getSelection().focusOffset;
} else {
var start = window.getSelection().focusOffset;
var end = window.getSelection().baseOffset;
}
console.log(window.getSelection());
console.log(start + ", " + end);
});
```
And I select the word `Vivamus` from the content, it will log `1, 8`, as that is the range of the selection.
**If**, however, I select the word `urna`, it will log `15, 20`, but won't take into account the `<span>` elements of the HTML.
Is there any way for `focusOffset` and `baseOffset` to also account for HTML tags, instead of just the text?
|
**Update**
Live Example: <http://jsfiddle.net/FLwxj/61/>
[Using a `clearSelection()` function](https://stackoverflow.com/a/6562764/1085891) and [a replace approach](https://stackoverflow.com/a/7168142/1085891), I was able to achieve the desired result.
```
var txt = $('#Text').html();
$('#Text').html(
txt.replace(/<\/span>(?:\s)*<span class="highlight">/g, '')
);
clearSelection();
```
Sources:
- `clearSelection()`: <https://stackoverflow.com/a/6562764/1085891>
- Replace approach: <https://stackoverflow.com/a/7168142/1085891>
---
Below you'll find some working solutions to your problem. I placed them in order of code efficiency.
**Working Solutions**
- <https://stackoverflow.com/a/8697302/1085891> ([live example](http://jsfiddle.net/FLwxj/1/))
```
window.highlight = function() {
var selection = window.getSelection().getRangeAt(0);
var selectedText = selection.extractContents();
var span = document.createElement("span");
span.style.backgroundColor = "yellow";
span.appendChild(selectedText);
span.onclick = function (ev) {
this.parentNode.insertBefore(
document.createTextNode(this.innerHTML),
this
);
this.parentNode.removeChild(this);
}
selection.insertNode(span);
}
```
- <https://stackoverflow.com/a/1623974/1085891> ([live example](http://jsbin.com/itotef/2/))
```
$(".content").on("mouseup", function () {
makeEditableAndHighlight('yellow');
});
function makeEditableAndHighlight(colour) {
sel = window.getSelection();
if (sel.rangeCount && sel.getRangeAt) {
range = sel.getRangeAt(0);
}
document.designMode = "on";
if (range) {
sel.removeAllRanges();
sel.addRange(range);
}
// Use HiliteColor since some browsers apply BackColor to the whole block
if (!document.execCommand("HiliteColor", false, colour)) {
document.execCommand("BackColor", false, colour);
}
document.designMode = "off";
}
function highlight(colour) {
var range, sel;
if (window.getSelection) {
// IE9 and non-IE
try {
if (!document.execCommand("BackColor", false, colour)) {
makeEditableAndHighlight(colour);
}
} catch (ex) {
makeEditableAndHighlight(colour)
}
} else if (document.selection && document.selection.createRange) {
// IE <= 8 case
range = document.selection.createRange();
range.execCommand("BackColor", false, colour);
}
}
```
- <https://stackoverflow.com/a/12823606/1085891> ([live example](http://jsbin.com/itotef/1/))
Other helpful solutions:
- <http://tech.pro/tutorial/1075/javascript-highlighting-selected-text>
|
Why does a StackOverflowException in a child AppDomain terminate the parent AppDomain?
I was under the impression that AppDomains are isolated from each other. It seems that in the case of a StackOverflowException, this isn't the case.
To demonstrate the issue, I created a simple console application whose only purpose is to spawn a new AppDomain, into which I load a very simple assembly and call one of its methods. This method happens to throw a StackOverflowException. This causes my console application to terminate unceremoniously.
My desired behavior is for the "child" AppDomain to crash and burn on such an exception, but leave my console application, running in the "parent" AppDomain, unscathed.
Is this possible?
UPDATE: here is some code. Neither of the exception handlers are hit.
```
class Program
{
static void Main(string[] args)
{
AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
// create app domain
var domain = AppDomain.CreateDomain("MyDomain");
// create a component
var component = (MyComponent)domain.CreateInstanceAndUnwrap(
"AppDomainMonitor.Component",
typeof(MyComponent).FullName);
// create a thread from a method on this component
var thread = new Thread(component.CauseStackOverflow);
// start the thread
thread.Start();
Console.ReadKey();
}
static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
// never hit
}
}
public class MyComponent : MarshalByRefObject
{
public void CauseStackOverflow()
{
try
{
Infinite();
}
catch (Exception ex)
{
// never hit
}
}
void Infinite()
{
Infinite();
}
}
```
|
Only the managed memory is isolated between AppDomains. If a thread throws an unhandled exception in any AppDomain, it will cause the whole application to crash.
I think the best solution is to ensure that all exceptions are properly handled on every thread (or thread pool work item).
There is however a hack which consist of applying this configuration in the App.Config file:
```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<runtime>
<legacyUnhandledExceptionPolicy enabled="1"/>
</runtime>
</configuration>
```
Unhandled exceptions are crashing the process since .Net 2.0, but you can revert to the old policy using this trick.
I would use this cautiously though, as it's often best to let processes crash instead of failing silently. You can add traces in AppDomain.UnhandledException to be notified when unhandled exceptions occur and handle them appropriately.
**EDIT**
You are right about StackOveflowException, since .Net 2.0 this exception cannot be handled by user code. (See the Remarks section of [this page in msdn](http://msdn.microsoft.com/en-us/library/system.stackoverflowexception.aspx)).
There is a way to override this by creating a custom CLR host, but this seems like a crazy thing to do. I guess you'll have to live with it or you can create child process instead of AppDomain if you really need this kind of fault tolerance.
|
How to enhance the speed of my C++ program in reading delimited text files?
I show you C# and C++ code that execute the same job: to read the same text file delimited by “|” and save with “#” delimited text.
When I execute C++ program, the time elapsed is 169 seconds.
UPDATE 1: Thanks to Seth (compilation with: cl /EHsc /Ox /Ob2 /Oi) and GWW (for moving the string declarations outside the loops), the elapsed time was reduced to 53 seconds. I updated the code accordingly.
UPDATE 2: Do you have any other suggestions to enhance the C++ code?
When I execute the C# program, the elapsed time is 34 seconds!
The question is, how can I improve the speed of the C++ program compared with the C# one?
C++ Program:
```
int main ()
{
Timer t;
cout << t.ShowStart() << endl;
ifstream input("in.txt");
ofstream output("out.txt", ios::out);
char const row_delim = '\n';
char const field_delim = '|';
string s1, s2;
while (input)
{
if (!getline( input, s1, row_delim ))
break;
istringstream iss(s1);
while (iss)
{
if (!getline(iss, s2, field_delim ))
break;
output << s2 << "#";
}
output << "\n";
}
t.Stop();
cout << t.ShowEnd() << endl;
cout << "Executed in: " << t.ElapsedSeconds() << " seconds." << endl;
return 0;
}
```
C# program:
```
static void Main(string[] args)
{
long i;
Stopwatch sw = new Stopwatch();
Console.WriteLine(DateTime.Now);
sw.Start();
StreamReader sr = new StreamReader("in.txt", Encoding.Default);
StreamWriter wr = new StreamWriter("out.txt", false, Encoding.Default);
object[] cols = new object[0]; // allocates more elements automatically when filling
string line;
while (!string.Equals(line = sr.ReadLine(), null)) // Fastest way
{
cols = line.Split('|'); // Faster than using a List<>
foreach (object col in cols)
wr.Write(col + "#");
wr.WriteLine();
}
sw.Stop();
Console.WriteLine("Conteo tomó {0} secs", sw.Elapsed);
Console.WriteLine(DateTime.Now);
}
```
# UPDATE 3:
Well, I must say I am very happy for the help received and because the answer to my question has been satisfied.
I changed the text of the question a little to be more specific, and I tested the solutions kindly offered by Molbdlino and Bo Persson.
Keeping Seth's advice for the compile command (i.e. cl /EHsc /Ox /Ob2 /Oi pgm.cpp):
Bo Persson's solution took 18 seconds on average to complete the execution, a really good result taking into account that the code stays close to what I like.
Molbdlino's solution took 6 seconds on average, really amazing (thanks to Constantine also).
Never too late to learn, and I learned valuable things with my question.
My best regards.
|
As Constantine suggests, read large chunks at a time using `read`.
I cut the time from ~25s to ~3s on a 129M file with 5M "entries" (26 bytes each) in 100,000 lines.
```
#include <iostream>
#include <fstream>
#include <sstream>
#include <algorithm>
using namespace std;
int main ()
{
ifstream input("in.txt");
ofstream output("out.txt", ios::out);
const size_t size = 512 * 1024;
char buffer[size];
while (input) {
input.read(buffer, size);
size_t readBytes = input.gcount();
replace(buffer, buffer+readBytes, '|', '#');
output.write(buffer, readBytes);
}
input.close();
output.close();
return 0;
}
```
|
How to set width of CupertinoButton in Flutter?
Am I missing something obvious here?
Setting minSize on CupertinoButton button sets width *and* height (see image).
How do I set only the width?
Thanks in advance.
[](https://i.stack.imgur.com/hVlxw.jpg)
|
Wrap it in a SizedBox and set width and height.
Example
```
SizedBox(
width: 500,
  height: 200,
child: CupertinoButton(
onPressed: (){},
child: Text("Click Me", style: TextStyle(color: Colors.white)),
color: Colors.green
)
)
```
**NOTE:** It is best practice to use SizedBox as it is more efficient since you only need to change the width and height.
Check this.
[Flutter: SizedBox Vs Container, why use one instead of the other?](https://stackoverflow.com/questions/55716322/flutter-sizedbox-vs-container-why-use-one-instead-of-the-other/59594450#59594450)
|
How to count array elements in mongo
I know there are lots of topics talking about that out there but I can't figure out what goes wrong in my implementation.
I have the followings documents:
```
{
"_id" : ObjectId("510a353929e16756d5000009"),
"skills" : [
[
"UserEmbed::Author::Skills::Copywriting",
"UserEmbed::Author::Skills::Proofreading",
"UserEmbed::Author::Skills::Proofreading",
"UserEmbed::Author::Skills::Translation",
"UserEmbed::Author::Skills::Translation",
"UserEmbed::Author::Skills::Translation"
]
]
}
```
I would like something like this:
```
{
"UserEmbed::Author::Skills::Copywriting": 1,
"UserEmbed::Author::Skills::Proofreading": 2,
"UserEmbed::Author::Skills::Translation": 3
}
```
Here is what I have (first `$group` is to get the document above from my original document structure):
```
aggregate([ { $group: { _id: "$_id", skills: { $addToSet : "$author_profile.skills._type" } } }, { $unwind : "$skills" }, { $group : { _id : "$skills", count: { $sum : 1 } } }])
```
Which returns something like this (with other documents):
```
{
"_id" : [
"UserEmbed::Author::Skills::Copywriting",
"UserEmbed::Author::Skills::Copywriting",
"UserEmbed::Author::Skills::Copywriting",
"UserEmbed::Author::Skills::Translation",
"UserEmbed::Author::Skills::Translation",
"UserEmbed::Author::Skills::Translation"
],
"count" : 1
}
```
It seems that the `$group` is not working properly. Did I misunderstand something?
|
Given your document contains an array of arrays, you'd need to introduce a second `$unwind` for `$skills`:
```
db.so.aggregate([
{ $group: { _id: "$_id",
skills: { $addToSet : "$author_profile.skills._type" }}},
{ $unwind : "$skills" },
{ $unwind: "$skills" },
{ $group : { _id : "$skills", count: { $sum : 1 } } }])
```
Produces:
```
"result" : [
{
"_id" : "UserEmbed::Author::Skills::Translation",
"count" : 3
},
{
"_id" : "UserEmbed::Author::Skills::Proofreading",
"count" : 2
},
{
"_id" : "UserEmbed::Author::Skills::Copywriting",
"count" : 1
}
],
"ok" : 1
```
|
why define-syntax of or in scheme need consider three conditions?
I'm reading *The Scheme Programming Language*. In chapter 3, the book uses define-syntax to define or and and, and it says the following definition of or is incorrect:
```
(define-syntax or ; incorrect!
(syntax-rules ()
[(_) #f]
[(_ e1 e2 ...)
(let ([t e1])
(if t t (or e2 ...)))]))
```
And the correct definition is:
```
(define-syntax or
(syntax-rules ()
[(_) #f]
[(_ e) e]
[(_ e1 e2 e3 ...)
(let ([t e1])
(if t t (or e2 e3 ...)))]))
```
Why does the correct definition need three clauses? I ran many tests and the two definitions produce the same results. Can someone tell me why the first definition is wrong?
|
Let's consider the hint from the book.
First we define our own version of `or`:
```
(define-syntax my-or ; incorrect!
(syntax-rules ()
[(_) #f]
[(_ e1 e2 ...)
(let ([t e1])
(if t t (my-or e2 ...)))]))
```
Then we look at the expression in the hint.
```
(letrec ([even?
(lambda (x)
(my-or (= x 0)
(odd? (- x 1))))]
[odd?
(lambda (x)
(and (not (= x 0))
(even? (- x 1))))])
(list (even? 20) (odd? 20)))
```
Let's look at the expansion (I edited the full expansion a little):
```
(letrec ([even? (lambda (x)
(let ([t (= x 0)])
(if t t (let ([t (odd? (- x 1))])
(if t t #f)))))]
[odd? (lambda (x) (if (not (= x 0)) (even? (- x 1)) #f))])
(list (even? 20) (odd? 20)))
```
The problem here is that the call to `odd?` in `(let ([t (odd? (- x 1))]) ...)`
is not in tail position. For each loop iteration the `let` expression will allocate a new variable (on the stack or elsewhere), and eventually we have a memory problem.
In short: The semantics of `or` is that in `(or e1 ... en)` the last expression `en` is in tail position. If we use the simple version of the `my-or` macro, then
```
(my-or e1)
```
expands into
```
(let ([t e1])
(if t t #f))]))
```
and the expression `e1` is not in tail position in the output.
|
How can I tell whether an old graphics card is PCIe or AGP?
I've been given an old(ish) graphics card, with the make and model not listed clearly, or at all, on the card. Or perhaps they are, but are covered by the heatsink. Assume also that I don't have access to a machine to play with, into which I could check whether the card fits.
How can I tell, **by only looking at the card itself** and without trying to insert it anywhere, whether it's a PCIe card or an AGP one? I remember the physical slot interface is somewhat similar with both of them having an L-shaped latch towards the back, which makes it relatively easy to mistake them with each other (as opposed to PCI cards which don't have this latch).
|
Consider this image of the slot part of two cards, a PCIe 16x graphics card (bottom) and an AGP 8x graphics card (top):
[](https://i.stack.imgur.com/2SzsY.jpg)
They have the following distinctive features:
- **L-latch height**: AGP cards' back-L "latch" extends all the way down to the same level as the connector pins; with PCIe, the latch is shorter.
- **Pin pattern**: The AGP card has a sort of a zig-zag pattern where subsequent pins are at different heights; PCIe pins are all parallel to each other.
- **Position of slit**: The PCIe connector pins are split by a slit in the PCB close to the panel-side of the card (where the output connector is), while the AGP card's similar slit is much further away from the panel (beyond the half-point).
Notes:
- You can't trust various other differences, like having/not having pins on the L-latch.
- Even if you're sure you've correctly identified the type of card, be very careful when inserting it into the socket and refrain from applying more than a little bit of pressure to avoid damaging the hardware.
|
ftp\_ssl\_connect with implicit ftp over tls
Can ftp\_ssl\_connect handle Implicit FTP over TLS? By default it uses explicit.
I'm trying to upload to a server that accepts only Implicit ftp over tls on port 990; has anybody run into this as of yet? How did you fix it?
|
ftp\_ssl\_connect only supports explicit FTPS.
If you need implicit FTPS, use cURL:
```
$fp = fopen($path, 'r');
$ftp_server = 'ftps://'.$server.'/'.$filename;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $ftp_server);
curl_setopt($ch, CURLOPT_USERPWD,$user.':'.$pass);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch, CURLOPT_FTP_SSL, CURLFTPSSL_TRY);
curl_setopt($ch, CURLOPT_FTPSSLAUTH, CURLFTPAUTH_TLS);
curl_setopt($ch, CURLOPT_UPLOAD, 1);
curl_setopt($ch, CURLOPT_INFILE, $fp);
$output = curl_exec($ch);
$error_no = curl_errno($ch);
//var_dump(curl_error($ch));
curl_close($ch);
```
|
How to handle post-validation errors in command (DDD + CQRS)
For example, when you submit a Register form, you have to check in the `Domain Model` (`WriteModel` in `CQRS`) that it is in a valid state (example, email address syntax, age, etc).
Then you create a `Command`, and send it to a `Command Bus`.
I understand that Commands should not return anything.
So how do you handle an error beyond `Command Bus`? (For example, a user registered 1 second before with the same `username/email`).
How do you know that command failed, and how do you know the error?
|
>
> I understand that Commands should not return anything.
>
>
>
That's one view, but it's not entirely set in stone. Consider writes (PUT, POST, DELETE) in HTTP -- all of these messages are commands, in the sense that they are messages with request that the resource change state, and yet they all return responses.
>
> So how do you handle an error beyond Command Bus? (For example, a user registered 1 second before with the same username/email).
>
>
> How do you know that command failed, and how do you know the error?
>
>
>
So in a case where you are communicating directly with the command handler, a returned message is a perfectly reasonable way to acknowledge that the command has been received and processed.
If you are using a piece of middleware, like a bus, that prevents you from communicating directly with the target, then I would suggest that you look to asynchronous messaging patterns -- how do you get the command handler to send a message back to the caller?
One idea is to subscribe to the outcome of the command; this borrows from some of the ideas in Hohpe's Enterprise Integration Patterns. The basic idea is that, since the client is familiar with the command message that was sent, it is well positioned to subscribe to any new messages published as a consequence of the command message. The command handler, after saving the data to the book of record, publishes events announcing that the change was successful, and the client subscribes to those events -- recognizing the correct events by considering the coincidence of various identifiers in the message (causation id, [correlation id](http://www.enterpriseintegrationpatterns.com/patterns/messaging/CorrelationIdentifier.html), and so on).
Alternative approaches are a bit more direct. One would be to include in the message a callback, which can be invoked by the command handler after the message is successfully handled.
A very similar alternative is to reserve space in the command message for the command handler to write the acknowledgement -- since the client already has the command message in question, the circuit is already complete. Think "[promise](https://promisesaplus.com/)" or "[completable future](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html)". The message tells the command handler where to write the acknowledgement; doing so signals to the client (countdown latch) that the acknowledgement is available.
And of course, you have the additional option of removing the middleware that seems to be getting in the way of doing the right thing simply.
>
> For example, a user registered 1 second before with the same username/email
>
>
>
If you are handling user registration idempotently, that wouldn't necessarily be an error -- repeating messages until a response is observed is a common way to ensure at least once delivery.
|
short plus short is an int
I have three short variables.
When I add two together and assign the result to the third, eclipse tells me that I need to cast it to a short !
```
short sFirst, sSecond, sThird;
sFirst = 10;
sSecond = 20;
sThird = sFirst + sSecond;
```
Hovever, when I do a simple assignment followed by an incremental assignment, all is fine.
```
short sFirst, sSecond, sThird;
sFirst = 10;
sSecond = 20;
sThird = sFirst;
sThird += sSecond;
```
Why is this ?
|
The JLS ([§15.18.2](http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.2)) says this:
>
> *"The binary + operator performs addition when applied to two operands of numeric type, producing the sum of the operands."*
>
>
> *"Binary numeric promotion is performed on the operands ([§5.6.2](http://docs.oracle.com/javase/specs/jls/se7/html/jls-5.html#jls-5.6.2))."*
>
>
>
That means that the operands of your expression are converted to `int`. So the addition will add an `int` to an `int`.
>
> *"The type of an additive expression on numeric operands is the promoted type of its operands."*
>
>
>
In your case, `int`.
---
I won't speculate as to why it is done this way. However, it is no accident. If you look at the bytecode instruction set as defined in the JVM spec, you will see that there are arithmetic instructions for `int`, `long`, `float` and `double` ... but NOT for the smaller integer types.
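In practice that means the first snippet from the question needs an explicit cast back down to `short`; a minimal sketch:
```
short sFirst = 10;
short sSecond = 20;

// The addition is carried out in int, so the result has to be narrowed explicitly:
short sThird = (short) (sFirst + sSecond);

// Compound assignment hides the same cast (JLS 15.26.2), which is why
// the second snippet in the question compiles without one.
sThird += sSecond;
```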
|
How to include existing CMake git repo as submodule in my own CMake project?
I am trying to write some program using the [OpenKinect](https://github.com/OpenKinect/libfreenect) project. I want to add the OpenKinect's master branch as a subdirectory to my project's source. But, if you look into the OpenKinect's own CMake, there is too much stuff there, and I do not need everything. There are certain options there, which are being set to ON or OFF, like this:
```
OPTION(BUILD_AUDIO "Build audio support" OFF)
OPTION(BUILD_REDIST_PACKAGE "Build libfreenect in a legally-redistributable manner (only affects audio)" OFF)
OPTION(BUILD_EXAMPLES "Build example programs" ON)
OPTION(BUILD_FAKENECT "Build fakenect mock library" ON)
OPTION(BUILD_C_SYNC "Build c synchronous library" ON)
OPTION(BUILD_CPP "Build C++ Library (currently header only)" ON)
OPTION(BUILD_CV "Build OpenCV wrapper" ON)
OPTION(BUILD_AS3_SERVER "Build the Actionscript 3 Server Example" OFF)
OPTION(BUILD_PYTHON "Build Python extension" ON)
IF(PROJECT_OS_LINUX)
OPTION(BUILD_CPACK "Build an RPM or DEB using CPack" ON)
ENDIF(PROJECT_OS_LINUX)
```
Without making any major changes to the OpenKinect files (so that I can `git pull` any time I need to), how do I import only some parts (e.g., the `C++ wrapper` and the `OpenCV bindings`) into my own CMake project? I thought of copying certain directories which are not dependent on the other directories, and completely re-writing the CMake files. I would not be able to use git any more, but it would be a quick fix. But I am getting strange errors that way, such as "stdexcept was not declared in this scope", which makes no sense, because it is a standard C++ header.
|
If you simply want to enable/disable some parts of that library, you can simply set the appropriate options before calling `ADD_SUBDIRECTORY`.
Simply use the same OPTION commands as in the library's `CMakeLists.txt` but set them `ON`/`OFF` as you need. Of course, you are free to change the describing string as you like.
Alternatively (and if options have a value other than true/false), you can use `SET(... CACHE ...)`,
e.g.
```
SET(BUILD_CPP TRUE CACHE BOOL "Build C++ Library (currently header only)")
```
Similar question:
[Override option in CMake subproject](https://stackoverflow.com/questions/14061605/override-option-in-cmake-subproject)
|
How does Generation Type IDENTITY work? How does it know what's the next primary key to insert
How does GenerationType.IDENTITY know what the next integer to insert into the database is when persisting an entity? And in a multi-threaded (Java) application, is it not possible that two threads get the same primary key for an entity?
|
For the `@Id` attribute, as per [this documentation](https://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing),
>
> There are several strategies for generating unique ids. Some
> strategies are database agnostic and others make use of built-in
> databases support.
> JPA provides support for several strategies for id generation defined through the GenerationType enum values: TABLE, SEQUENCE and
> IDENTITY.
>
>
>
When one uses `IDENTITY` strategy, then, database can automatically assign a next value.
```
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private long id;
```
Typically, in such cases, column will be of type `auto increment`.
Please note that the IDs are not generated in Java code, but instead assigned by the underlying database - and databases are built to work with multiple connections wherein a user may be inserting a record in each connection.
On the Java side, each thread will work with one connection - typically such code works with a connection pool and borrows a connection from it - and is thus inherently thread safe.
As per [this documentation](https://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing),
>
> Identity sequencing uses special IDENTITY columns in the database to
> allow the database to automatically assign an id to the object when
> its row is inserted. Identity columns are supported in many databases,
> such as MySQL, DB2, SQL Server, Sybase, and PostgreSQL. Oracle does
> not support IDENTITY columns but its is possible to simulate them
> using sequence objects and triggers.
>
>
>
|
Convert text list of urls into clickable HTML links
I'm looking to convert a text list of urls into clickable links.
```
<!DOCTYPE html>
<body>
<script>
// http://stackoverflow.com/questions/37684/how-to-replace-plain-urls-with-links
function replaceURLWithHTMLLinks(text) {
var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
return text.replace(exp,"<a href='$1'>$1</a>");
}
var a1 = document.getElementById("test").innerHTML;
var a2 = replaceURLWithHTMLLinks(a1);
document.getElementById("test").innerHTML = a2;
</script>
<div id="test">
http://www.site.com/
http://www.site2.com/
http://www.site3.com/
</div>
</body>
</html>
```
Firebug returns the list of sites in the console for:
```
document.getElementById("test").innerHTML;
```
ie:
www.site.com/
www.site2.com/
www.site3.com/
**Why do I get this error for the line below?**
```
var a1 = document.getElementById("test").innerHTML;
```
TypeError: document.getElementById(...) is null
|
That is because you are trying to access the element before it has been added to the DOM, so `document.getElementById('test')` returns null. You can wrap the code in `window.onload` or add the script after the HTML element.
Try:
```
<script>
// http://stackoverflow.com/questions/37684/how-to-replace-plain-urls-with-links
function replaceURLWithHTMLLinks(text) {
var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
return text.replace(exp,"<a href='$1'>$1</a>");
}
window.onload = function(){
var elm = document.getElementById("test");
var modifiedHtml = replaceURLWithHTMLLinks(elm.innerHTML);
elm.innerHTML = modifiedHtml ;
}
</script>
```
|
NSSplitViewItem collapse animation and window setFrame conflicting
I am trying to make a (new in 10.10) NSSplitViewItem collapse and uncollapse whilst moving its containing window so as to keep the whole thing "in place".
The problem is that I am getting a twitch in the animation (as seen [here](https://files.app.net/2xm6k59k-.mov)).
The code where I'm doing the collapsing is this:
```
func togglePanel(panelID: Int) {
if let splitViewItem = self.splitViewItems[panelID] as? NSSplitViewItem {
// Toggle the collapsed state
NSAnimationContext.runAnimationGroup({ context in
// special case for the left panel
if panelID == 0 {
var windowFrame = self.view.window.frame
let panelWidth = splitViewItem.viewController.view.frame.width
if splitViewItem.collapsed {
windowFrame.origin.x -= panelWidth
windowFrame.size.width += panelWidth
} else {
windowFrame.origin.x += panelWidth
windowFrame.size.width -= panelWidth
}
self.view.window.animator().setFrame(windowFrame, display: true)
}
splitViewItem.animator().collapsed = !splitViewItem.collapsed
}, completionHandler: nil)
}
}
```
I am aware of the "Don't cross the streams" issue (from session 213, WWDC'13) where a window resizing animation running on the main thread and a core animation collapse animation running on a separate thread interfere with each other. Putting the splitViewItem collapse animation onto the main thread seems like the wrong approach and I've got a nagging feeling there's a much better way of doing this that I'm missing.
Since I am not finding any documentation on the NSSplitViewItems anywhere (yet) I would appreciate any insights on this.
I have the little test project on GitHub [here](https://github.com/NeoTeo/SlidePaneTest) if anyone wants a look.
**Update** The project mentioned has now been updated with the solution.
Thanks,
Teo
|
The problem is similar to the "don't cross the streams" issue in that there are two drivers to the animation you've created: (1) the split view item (2) the window, and they're not in sync.
In the example from the '13 Cocoa Animations talk, constraints were setup to result in the correct within-window animation as only the window's frame was animated.
Something similar could be tried here -- only animating the window's frame and not the split view item, but since the item manages the constraints used to (un)collapse, the app can't control exactly how within-window content animates:

Instead the split view item animation could completely drive the animation and use NSWindow's `-anchorAttributeForOrientation:` to describe how the window's frame is affected.
```
if let splitViewItem = self.splitViewItems[panelID] as? NSSplitViewItem {
let window = self.view.window
if panelID == 0 {
// The Trailing edge of the window is "anchored", alternatively it could be the Right edge
window.setAnchorAttribute(.Trailing, forOrientation:.Horizontal)
}
splitViewItem.animator().collapsed = !splitViewItem.collapsed
}
```

|
How to limit the textfield entry to 2 decimal places in swift 4?
I have a textfield and I want to limit the entry to max 2 decimal places.
A number like 12.34 is allowed, but not 12.345.
How do I do it?
|
Set your controller as the delegate for the text field and check if the proposed string satisfy your requirements:
```
override func viewDidLoad() {
super.viewDidLoad()
textField.delegate = self
textField.keyboardType = .decimalPad
}
func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
guard let oldText = textField.text, let r = Range(range, in: oldText) else {
return true
}
let newText = oldText.replacingCharacters(in: r, with: string)
let isNumeric = newText.isEmpty || (Double(newText) != nil)
let numberOfDots = newText.components(separatedBy: ".").count - 1
let numberOfDecimalDigits: Int
if let dotIndex = newText.index(of: ".") {
numberOfDecimalDigits = newText.distance(from: dotIndex, to: newText.endIndex) - 1
} else {
numberOfDecimalDigits = 0
}
return isNumeric && numberOfDots <= 1 && numberOfDecimalDigits <= 2
}
```
|
Preventing a MEX file from crashing in MATLAB
I have a MEX file which I "borrowed" from someone else to help me code a semi-automated nuclear detection algorithm. The problem is that the MEX file sporadically causes a segmentation fault. However, if the code is run with the same parameters a second time, it's fine.
I was hoping there was a sort of `try/catch` idiom for MEX files, but after spending most of my Saturday looking for something, I couldn't find anything.
Any help on this issue would be amazing! Otherwise, I am going to have to port the `.cpp` into MATLAB (and it's around 10,000 lines using hundreds of dependencies :-().
|
Which platform are you using? There are methods to help *debug* your MEX files. Whenever there is a segmentation fault, you can use an IDE to tell you where in the code it's crashing.
Here are the directions for Mac OS: <http://www.mathworks.com/help/matlab/matlab_external/debugging-on-mac-platforms.html>
Here are the directions for Linux: <http://www.mathworks.com/help/matlab/matlab_external/debugging-on-linux-platforms.html>
Here are the directions for Windows: <http://www.mathworks.com/help/matlab/matlab_external/debugging-on-microsoft-windows-platforms.html>
**NB:** I am *against* referring the OP to a link off-site to help with his or her question. I usually include most of the answer in my posts and provide links as references. Because you haven't told me what platform you're using, I will not write solutions for all platforms as that will take too much time. As such (and I'm praying that the links are stable for a long time), you can refer to the appropriate link for your platform.
|
How to force all variables in Typescript to have a type declared
I understand that when you declare a variable in TypeScript, you can choose whether or not to specify a type for the variable. If no type is specified, the default "any" type is used. Is there a way to force all variables to have a declared type, even if it is just "any"? That is, I want a compiler error when a type isn't specified. This is so that programmers would be forced to give everything a type and prevent cases where variables are accidentally left as "any".
|
It's not true that a declared variable is necessarily untyped in TypeScript. The TypeScript compiler will, when possible, infer a type based on the right-hand side of a declaration.
For example:
```
var x = 150;
```
`x` will be a Number as the RHS is a number.
You can use the command line compile option to catch declarations where the type cannot be inferred by using `--noImplicitAny`:
>
> Warn on expressions and declarations with an implied 'any' type.
>
>
>
This option would catch a case where a variable `d` for example is declared, yet not assigned to a value immediately.
```
var d;
```
Will produce an error:
>
> error TS7006: Parameter 'd' of 'test' implicitly has an 'any' type.
>
>
>
The compiler switch also catches parameters without a specified type, and as [@basarat](https://stackoverflow.com/users/390330/basarat) pointed out in a comment, it also catches return types and class/interface members.
There's a little more information in [this blog post](http://blogs.msdn.com/b/typescript/archive/2013/08/06/announcing-0-9-1.aspx) as well. Note that there's also an equivalent MSBuild/project setting available: `<TypeScriptNoImplicitAny>`.
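As a small sketch (the file and function names are made up), this is what the flag catches and what it still allows:
```
// tsc --noImplicitAny example.ts
function double(value) {            // error TS7006: Parameter 'value' implicitly has an 'any' type.
    return value * 2;
}

function triple(value: number) {    // fine: the parameter type is explicit
    return value * 3;
}

var x = 150;                        // fine: inferred as number from the initializer
```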
|
How can I debug an MSBuild file?
I've got a large solution that I'm using TFS (and MSBuild) to... well... build. However, it takes a long time to build everything, and I was wondering if it was possible to just debug the build XML file rather than doing the build itself.
I'm using VS2008 and TFS 2008.
|
Unfortunately the possibility to debug MSBuild scripts with Visual Studio has been [unofficially introduced in .NET 4.0](https://devblogs.microsoft.com/visualstudio/debugging-msbuild-script-with-visual-studio/).
For earlier versions all you are left with is "*debugging by tracing*", that is inserting log statements at key points in your script, running the script and examining the output.
Here's how you would typically do it using the [Message Task](https://learn.microsoft.com/en-us/visualstudio/msbuild/message-task):
```
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<SomeVariable>foo</SomeVariable>
</PropertyGroup>
<Target Name="MyTarget">
<!-- Some tasks -->
<Message Text="The value of SomeVariable is: $(SomeVariable)" Importance="High" />
<!-- Some tasks -->
</Target>
</Project>
```
You can then invoke the script from the command line and redirect the output to a log file:
```
msbuild MyScript.proj /t:MyTarget > %USERPROFILE%\Desktop\MyScript.log
```
Related resources:
- [Debugging MSBuild scripts with Visual Studio (.NET 4.0)](https://devblogs.microsoft.com/visualstudio/debugging-msbuild-script-with-visual-studio/)
- [Overview of Logging in MSBuild](https://learn.microsoft.com/en-us/visualstudio/msbuild/obtaining-build-logs-with-msbuild)
|
Basic input with x64 assembly code
I am writing a tutorial on basic input and output in assembly. I am using a Linux distribution (Ubuntu) that is 64 bit. For the first part of my tutorial I spoke about basic output and created a simple program like this:
```
global _start
section .text
_start:
mov rax,1
mov rdi,1
mov rsi,message
mov rdx,13
syscall
mov rax,60
xor rdi,rdi
syscall
section .data
message: db "Hello, World", 10
```
That works great. The system prints the string and exits cleanly. For the next part of my tutorial, I simply want to read one character in from the keyboard. From my understanding of this [web site](http://blog.rchapman.org/posts/Linux_System_Call_Table_for_x86_64/) we change the **rdi** register to be **0** for a sys\_read call.
I first subtract 8 from the current **rsp** and then load that address into the **rsi** register. (That is where I want to store the char). When I compile and run my program it appears to work... but the terminal seems to mimick the input I type in again.
Here is the program:
```
global _start
section .text
_start:
sub rsp,8 ; allocate space on the stack to read
mov rdi,0 ; set rdi to 0 to indicate a system read
mov rsi,[rsp-8]
mov rdx,1
syscall
mov rax,1
mov rdi,1
mov rsi,message
mov rdx,13
syscall
mov rax,60
xor rdi,rdi
syscall
section .data
message: db "Hello, World", 10
```
and this is what happens in my terminal...
```
matthew@matthew-Precision-WorkStation-690:~/Documents/Programming/RockPaperScissors$ nasm -felf64 rps.asm && ld rps.o && ./a.out
5
Hello, World
matthew@matthew-Precision-WorkStation-690:~/Documents/Programming/RockPaperScissors$ 5
5: command not found
matthew@matthew-Precision-WorkStation-690:~/Documents/Programming/RockPaperScissors$
```
The input 5 is repeated back to the terminal *after* the program has exited. What is the proper way to read in a single char using NASM and Linux x64?
|
In your first code section you have to set the syscall number in `rax` to `0` for `SYS_READ` (as mentioned briefly in the other answer).
So check [a Linux x64 SYS\_CALL list](http://blog.rchapman.org/posts/Linux_System_Call_Table_for_x86_64/) for the appropriate parameters and try
```
_start:
mov rax, 0 ; set SYS_READ as SYS_CALL value
sub rsp, 8 ; allocate 8-byte space on the stack as read buffer
mov rdi, 0 ; set rdi to 0 to indicate a STDIN file descriptor
lea rsi, [rsp] ; set const char *buf to the 8-byte space on stack
mov rdx, 1 ; set size_t count to 1 for one char
syscall
```
|
Stopping a method after a specified time period
I'm running parallel operations on a list of PCs I'm getting from the Active Directory. I was using this method to check PC states, such as whether the computer was online, or if a specific directory existed. However due to the nature of these, sometimes slow, operations, I wanted to include a timeout so my application could continue to operate.
```
public static T MethodTimeout<T>(Func<T> f, int timeout, out bool completed)
{
T result = default(T);
    var thread = new Thread(() => result = f());
    thread.Start();
    completed = thread.Join(timeout);
    if (!completed) thread.Abort();
return result;
}
```
This works for the most part, but processor usage seemed to spike a little, and in a few cases I ran into Out of Memory exceptions. So, I have since changed the method to use tasks in the hope that the `ThreadPool` would eliminate the aforementioned problems:
```
public static T MethodTimeout<T>(Func<T> f, int timeout, out bool completed)
{
T result = default(T);
    var timedTask = Task.Factory.StartNew(() => result = f());
    completed = timedTask.Wait(timeout);
return result;
}
```
However, I have a feeling I'm just filling up the `ThreadPool` with processes that are hanging waiting for these potentially long tasks to finish. Since I'm passing the task's function as a parameter I don't see a way to use a cancellation token. However, my experience is very limited with these classes and I might be missing some superior techniques.
Here's how I've been using the method above:
```
bool isReachable; // Check if the directory exists (1 second)
MethodTimeout(() => Directory.Exists(startDirectory), 1000, out isReachable);
```
A quick note, I'm only running the above check after I've already confirmed the PC is online via a WMI call (also using `MethodTimeout`). I'm aware through my early testing that checking the directory before that ends up being inefficient.
I'm also open to replacing this approach with something better.
|
I may be the harbinger of bad news here, but this scenario is way more difficult to deal with than most people think. It seems that you are already picking up on this. Using the cooperative cancellation mechanisms is all good, but you have to be able to actually cancel the time consuming operation to be able to use them effectively. The problem is that `Directory.Exists` can not be cancelled.
The problem with your first solution is that you are aborting a thread. This is not a good idea because it halts the thread at unpredictable points. This may lead to the corruption of data structures for anything that was executing on the call stack when the abort got injected. In this particular case I would not be surprised if the `Thread.Abort` call actually hung. The reason is that aborts are normally delayed while the execution is in unmanaged code. It is likely `Directory.Exists` defers to an unmanaged module. If that is the case then the abort will not work anyway.
The problem with the second solution is that you are leaving the task orphaned. That `Directory.Exists` call would still be executing on a thread pool thread somewhere. This is because you have not actually cancelled it.
To be honest I am not really sure what to do about this. The lack of a cancellable `Directory.Exists` method is very problematic. My first thought is to attempt a write or read from the directory you want to test as kind of a proxy for checking its existence. The `FileStream` class does have cancellable operations. In fact, many of the methods even accept a `CancellationToken` as a parameter. Or you could close the `FileStream` and that will cancel any pending operations as well. Another option might be to use the Win32 API functions to do the IO. Then you could call [CancelSynchronousIo](http://msdn.microsoft.com/en-us/library/windows/desktop/aa363794%28v=vs.85%29.aspx) if necessary.
I know this is not much of an answer because all I have really done is tell you what you cannot do, but did not offer a definitive solution. The point is that the best solution starts with a cancellable operation. It is unfortunate that some of the BCL classes do not provide these even when they should.
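For what it's worth, here is a rough sketch of the "read from the directory as a proxy" idea, under stated assumptions: the marker file path is hypothetical, opening the handle itself is still not cancellable, and a timeout is reported the same way as an unreachable path.

```
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public static class DirectoryProbe
{
    // Probe reachability by reading one byte from a known marker file
    // inside the directory, with a cancellable, time-limited read.
    public static async Task<bool> ProbeAsync(string markerFilePath, TimeSpan timeout)
    {
        using (var cts = new CancellationTokenSource(timeout))
        {
            try
            {
                using (var stream = new FileStream(
                    markerFilePath, FileMode.Open, FileAccess.Read,
                    FileShare.Read, 4096, FileOptions.Asynchronous))
                {
                    var buffer = new byte[1];
                    await stream.ReadAsync(buffer, 0, 1, cts.Token);
                    return true;
                }
            }
            catch (OperationCanceledException)
            {
                return false; // the read timed out and was actually cancelled
            }
            catch (IOException)
            {
                return false; // missing directory/file, network failure, etc.
            }
        }
    }
}
```

Unlike wrapping `Directory.Exists`, a timeout here actually cancels the pending read (at least once the handle is open) instead of leaving work orphaned on a thread pool thread.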
|
Lisp data security/validation
This is really just a conceptual question for me at this point.
In Lisp, programs are data and data are programs. The REPL does exactly that - reads and then evaluates.
So how does one go about getting input from the user in a secure way? Obviously it's possible - I mean viaweb - now Yahoo!Stores is pretty secure, so how is it done?
|
The REPL stands for Read Eval Print Loop.
```
(loop (print (eval (read))))
```
Above is only conceptual, the real REPL code is much more complicated (with error handling, debugging, ...).
You can read all kinds of data in Lisp without evaluating it. Evaluation is a separate step - independent from reading data.
There are all kinds of IO functions in Lisp. The most complex of the provided functions is usually READ, which reads s-expressions. There is an option in Common Lisp which allows evaluation during READ, but that can and should be turned off when reading data.
So, data in Lisp is not necessarily a program and even if data is a program, then Lisp can read the program as data - without evaluation. A REPL should only be used by a developer and should not be exposed to arbitrary users. For getting data from users one uses the normal IO functions, including functions like READ, which can read S-expressions, but does not evaluate them.
Here are a few things one should NOT do:
- use READ to read arbitrary data. READ, for example, allows one to read really large data - there is no limit.
- evaluate during READ ('read eval'). This should be turned off.
- read symbols from I/O and call their symbol functions
- read cyclical data structures with READ, when your functions expect plain lists. Walking down a cyclical list can keep your program busy for a while.
- fail to handle syntax errors while reading data.
|
Swagger: Reuse schema definition inside object property
Problem: I want to reuse a definiton. Once directly and once inside an object. The following code works but the validator from <http://editor.swagger.io/#/> is saying "Not a valid response definition" and pointing to the line "200".
Do I really have to define Account twice? Or is this an issue with the validator?
```
* responses:
* 200: <---- ERROR POINTING HERE
* description: Updated account
* schema:
* type: object
* properties:
* data:
* type: object
* properties:
* account:
* schema:
* $ref: '#/definitions/Account'
```
The definition itself:
```
"Account" : {
"type" : "object",
"required": [
"id", "username", "lastname", "firstname", "traderid", "customerid", "company", "validated"
],
"properties": {
"id": {
"type": "string"
},
"username" : {
"type": "string"
},
"lastname" : {
"type": "string"
},
"firstname" : {
"type": "string"
},
"traderid" : {
"type": "string"
},
"customerid" : {
"type": "string"
},
"company" : {
"type": "string"
},
"validated" : {
"type": "boolean"
}
},
"example" : {
"id": "57790fdde3fd3ed82681f39c",
"username": "yuhucebafu",
"validated": false,
"customerid": "57790fdce3fd3ed82681f39a"
}
},
```
|
The problem is with the use of `schema` in the `account` property.
```
200:
description: Updated account
schema:
type: object
properties:
data:
type: object
properties:
account:
schema: # <-- Problem is here.
$ref: '#/definitions/Account'
```
Here's the corrected response definition:
```
200:
description: Updated account
schema:
type: object
properties:
data:
type: object
properties:
account:
$ref: '#/definitions/Account'
```
The property name "schema" is only used as a top-level property in Swagger's [response object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#responseObject), or a [parameter object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#user-content-parameterObject) where `in` is set to "body".
Once you've started specifying your schema, everything within that structure mostly follows standard [JSON Schema](http://json-schema.org/), which is highly recursive. For example:
- Within an object schema, the value of each `properties/[propertyName]` is a schema.
- Within an array schema, the value of `items` is a schema.
- The value of each `/definitions/[name]` is a schema.
- The value of each array element within `allOf`, `anyOf`, or `oneOf` is a schema.
- The value of `additionalProperties` can be a schema (or a boolean value).
- ...
You get the idea. JSON Schema doesn't use the property name `schema` for all of these cases, because it would introduce a lot of noise in a language where practically "everything is a schema."
|
Returning constructor arguments in braces?
I am new to C++ and the brace initialization (or uniform init) really is confusing. What exactly happens when a function returns a list of arguments in braces? Thanks a lot for clarifying.
```
std::vector<double> foo()
{
return {1, 2}; // is this the same as: std::vector<double>{1, 2} or std::vector<double>(1, 2)? or something else?
}
```
|
With `return {1, 2};`, the return value is [list-initialized](https://en.cppreference.com/w/cpp/language/list_initialization) from `{1, 2}`; as a result, the returned `std::vector<double>` contains 2 elements with values `1` and `2`.
With `return std::vector<double>{1, 2};`, the return value is [copy-initialized](https://en.cppreference.com/w/cpp/language/copy_initialization) from `std::vector<double>{1, 2}`; as a result, the returned `std::vector<double>` contains 2 elements with values `1` and `2`. In concept it'll construct a temporary `std::vector<double>` and the return value is copy-initialized from the temporary; because of [mandatory copy elision](https://en.cppreference.com/w/cpp/language/copy_elision) (since C++17) the copy/move operation is omitted and the effect is exactly the same as the 1st case.
With `return std::vector<double>(1, 2);`, the return value is copy-initialized from `std::vector<double>(1, 2)`; as a result, the returned `std::vector<double>` contains 1 element with value `2`. Mandatory copy elision takes effect in this case too.
|
VIM: Don't underline leading whitespace in HTML links
>
> **Possible Duplicate:**
>
> [Why is vim drawing underlines on the place of tabs and how to avoid this?](https://stackoverflow.com/questions/4625274/why-is-vim-drawing-underlines-on-the-place-of-tabs-and-how-to-avoid-this)
>
>
>
When indenting PHP code in VIM 7.0 on CentOS 5.x, HTML links are shown underlined. This is very handy, but in some places I have indented PHP code in that HTML, and the whole indentation is underlined:
```
<li class="picture">
________________<a href="<?=$linkUrl?>">
____________________<img src="/<?=$img['source']?>" alt="Picture"/>
____________________<? if ($someCondition): ?><span class="info"><?=$img['info']?></span><? endif; ?>
________________</a>
</li>
```
Is there any way to tell the syntax highlighter to ignore line-leading whitespace in HTML links?
|
I managed to achieve this through modifying `$VIMRUNTIME/syntax/html.vim`. Make a copy to `~/.vim/syntax/html.vim` (`.vim` is named `vimfiles` on Windows), and replace the original syntax definition
```
syn region htmlLink start="<a\>\_[^>]*\<href\>" end="</a>"me=e-4 contains=@Spell,htmlTag,htmlEndTag,htmlSpecialChar,htmlPreProc,htmlComment,javaScript,@htmlPreproc
```
with the following:
```
syn region htmlLink start="<a\>\_[^>]*\<href\>" end="</a>"me=e-4 keepend contains=@Spell,htmlTag,htmlEndTag,htmlSpecialChar,htmlPreProc,htmlComment,htmlLinkText,javaScript,@htmlPreproc
syn match htmlLinkText contained contains=@Spell,htmlTag,htmlEndTag,htmlSpecialChar,htmlPreProc,htmlComment,htmlLinkText,javaScript,@htmlPreproc "^\s*\zs.\{-}\ze\s*$"
syn match htmlLinkText contained contains=@Spell,htmlTag,htmlEndTag,htmlSpecialChar,htmlPreProc,htmlComment,htmlLinkText,javaScript,@htmlPreproc "\S.\{-}\ze\s*$"
```
Further down, change
```
HtmlHiLink htmlLink Underlined
```
to
```
HtmlHiLink htmlLinkText Underlined
```
Voila! Basically, this introduces another contained syntax group `htmlLinkText`, which does not match leading and trailing whitespace, and applies the highlighting to that instead.
|
Made my own simple calculator
So I made this calculator in c# console, I'm pretty nooby-ish in programming. I want to know if I can make it better, because my coding skills suck. Pretty sure my logic sucks too.
Any advice and tips?
```
public static void Main() {
bool returnToStart;
do {
returnToStart = Calc();
} while(returnToStart == true);
Console.ReadLine();
}
public static bool Calc() {
ExitProgram();
Console.Clear();
float firstNumber; float secondNumber; float result;
string operation;
Console.Write("First number: ");
firstNumber = float.Parse(Console.ReadLine());
Console.Write("Choose your operator: addition(+), subtraction(-), " +
"multiplication(*) or division(/): ");
operation = Console.ReadLine();
Console.Write("Second number: ");
secondNumber = float.Parse(Console.ReadLine());
Console.Clear();
switch(operation) {
case "+":
result = (firstNumber + secondNumber);
Console.WriteLine("Calculation: {0}", result);
break;
case "-":
result = (firstNumber - secondNumber);
Console.WriteLine("Calculation: {0}", result);
break;
case "*":
result = (firstNumber * secondNumber);
Console.WriteLine("Calculation: {0}", result);
break;
case "/":
if(firstNumber == 0) {
Console.WriteLine("Cant divide by zero!");
} else {
result = (firstNumber / secondNumber);
Console.WriteLine(result);
}
break;
default:
Console.WriteLine("Error! Try again.");
break;
}
return true;
}
public static bool ExitProgram() {
string exit = "Exit";
Console.WriteLine("If you're done with your calculations,\nthen you can exit this program by simply typing in 'Exit' ");
Console.Write("Want to quit the program?: ");
exit = Console.ReadLine();
if(exit == "Exit") {
Environment.Exit(0);
}return true;
}
}
```
|
**Keep It Simple For the User**
Rather than making the user type the word `Exit` it might be better to ask the user
`Are you done yet(y/n)?` and accept a simple yes/no answer. It also might be better if the answer was not case sensitive so accept `Y`, `y`, `N` and `n`.
**Exit From Main() When Possible**
The Main() function may contain cleanup code, so exiting from the `ExitProgram()` function may not be the best idea. A second problem I see with the `ExitProgram()` function is that it is called before the user ever enters a calculation. There are two possible ways to handle this: one would be to move the call to `ExitProgram()` to after the calculation is performed; the other would be to have a function that contains a `do while` loop that calls `Calc` within the loop and tests `ExitProgram()` in the while condition.
**Function Complexity**
The function `Calc()` is overly complex (it does too much) and should be multiple functions. One of the functions should get the user input, and a second function should do the calculation. This would be applying two programming principles, the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single_responsibility_principle) and the KISS Principle.
The Single Responsibility Principle states:
>
> that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function.
>
>
>
The [Keep It Simple (KISS) Principle](https://en.wikipedia.org/wiki/KISS_principle) is an engineering principle that predates computers, basically it is keep the implementation as simple as possible.
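To tie these points together, here is a rough sketch (hypothetical method names, no input validation or culture handling) of one way the program could be reshaped: the loop and exit decision live in `Main()`, the exit question is a simple y/n, and gathering input is separated from performing the calculation.

```
using System;

public static class Program
{
    public static void Main()
    {
        do
        {
            float first = ReadNumber("First number: ");
            string op = ReadOperator();
            float second = ReadNumber("Second number: ");
            Console.WriteLine("Calculation: {0}", Calculate(first, second, op));
        } while (AskYesNo("Another calculation (y/n)? "));
    }

    private static float ReadNumber(string prompt)
    {
        Console.Write(prompt);
        return float.Parse(Console.ReadLine()); // NOTE: throws on invalid input
    }

    private static string ReadOperator()
    {
        Console.Write("Choose your operator: addition(+), subtraction(-), multiplication(*) or division(/): ");
        return Console.ReadLine();
    }

    private static float Calculate(float a, float b, string op)
    {
        switch (op)
        {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            case "/":
                if (b == 0) throw new DivideByZeroException("Can't divide by zero!");
                return a / b;
            default: throw new ArgumentException("Unknown operator: " + op);
        }
    }

    private static bool AskYesNo(string prompt)
    {
        Console.Write(prompt);
        string answer = Console.ReadLine();
        return answer != null && answer.Trim().Equals("y", StringComparison.OrdinalIgnoreCase);
    }
}
```

With each method carrying a single responsibility, details like the divide-by-zero check (which tests `firstNumber` instead of `secondNumber` in the original) become much harder to get wrong.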
|
How to call base constructor inside constructor depending on some parameter?
How to call base constructor inside constructor depending on parameters?
For example:
```
public SomeConstructor (){
if(SomeParameter == "something") //here call base("something");
else //here call base("something else")
}
```
in my example
```
SomeParameter
```
could be for example local computer name.
To explain what I am doing: I want to choose the constructor depending on the computer name. I am working on an MVC project, and I still forget to change the name of the connection string when I'm publishing the project to the server. So, I want to specify that if the computer name == my computer name, then call
```
:base("DefaultConnection")
```
otherwise, call for example
```
:base("ServerConnectionString")
```
|
You can't do it that way; you can only call the base constructor as in your latter examples. And even then, both of your examples pass a string without changing the parameter type, so there is little point (they're not even different constructors you're calling). You can get away with calling the base constructor in the conventional way and making sure the value you pass in is the appropriate one beforehand.
As an off the cuff example, consider the following:
```
public SomeConstructor()
    : base(Configuration.ConnectionString) {
}
public static class Configuration {
    public static string ConnectionString {
        get {
            /* some logic to determine the appropriate value */
            #if DEBUG
            return ConfigurationManager.ConnectionStrings["DebugConnectionString"].ConnectionString;
            #else
            return ConfigurationManager.ConnectionStrings["ReleaseConnectionString"].ConnectionString;
            #endif
        }
    }
}
```
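Since the question keys off the computer name specifically, the same pattern can branch on `Environment.MachineName` instead of a compile-time symbol. A minimal sketch, assuming an Entity Framework `DbContext` and a placeholder machine name:

```
using System;
using System.Data.Entity;

public static class Configuration
{
    public static string ConnectionStringName
    {
        get
        {
            // "MY-DEV-PC" is a placeholder - substitute your own machine name.
            return Environment.MachineName == "MY-DEV-PC"
                ? "DefaultConnection"
                : "ServerConnectionString";
        }
    }
}

public class MyContext : DbContext
{
    public MyContext() : base(Configuration.ConnectionStringName) { }
}
```

That said, the conditional-compilation approach above is usually the safer choice, since it doesn't silently change behaviour when the code runs on a machine you didn't anticipate.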
|
things to absolutely check when deploying an MVC 2 web application?
I am going to deploy my first MVC web application to the internet.
As this is my first app with this framework, I was wondering if I could collect some advice regarding what should be done to prevent trouble.
I am keeping the question generic, and this is expressly done to collect the widest variety of answers.
Thanks!
**UPDATE**:
Thanks everybody for your answers. The question does not just regards the "deploy" scope but even more these scopes:
- Security
- Public Hosting
- Application Management & Operations
In short, all the issues that need to be addressed from the initial deployment onwards.
|
If you are going to deploy within the hour:
1. Make sure the reference to System.Web.Mvc is marked as Copy Local=True in your project. It tends to default to False, and so will not be copied to your bin folder, causing the app to fail when deployed. Review any additional assemblies you may have added to the project for Copy Local=True.
2. Make sure stack traces aren't shown to end users when an exception occurs in your code. Use the [`<customErrors>`](http://msdn.microsoft.com/en-us/library/h0hfz6fc.aspx) in web.config to specify your own error page(s).
3. In your web.config, make sure that the connection strings will work on the server you'll be deploying to.
4. Is your application writing to disk for any reason? Logging perhaps? Some hosting services will not let your app write to disk -- worth checking ahead of time.
5. Deploy to a local folder, and sanity-check that published app. Are scripts, stylesheets, images loaded without glitches?
If you have a little bit more time on your hands:
1. Review the code for security, XSS considerations: are inputs sanitized? Is user input encoded correctly when rendered in views?
2. Review the data schema. This may be your last opportunity to modify an empty database. Once you deploy, you can only hope to be gently updating live data, trying to avoid harming existing data and dependencies in application logic.
3. If you haven't already, make sure to set up a Google Analytics account. You're going to be curious about how your site is being used, and this is one of the best free tools available. Google Analytics, as well as search engine registrations, will require you to place files with funky names in the root 'folder' of your domain name. Verify that your app can serve files from the root folder, or set up a controller+routing to handle those requests.
|
Read lines from a file with variable line endings in Go
How can I read lines from a file where the line endings are carriage return (CR), newline (NL), or both?
The PDF specification allows lines to end with CR, LF, or CRLF.
- `bufio.Reader.ReadString()` and `bufio.Reader.ReadBytes()` allow a single delimiter byte.
- `bufio.Scanner.Scan()` handles `\n` optionally preceded by `\r`, but not a lone `\r`.
>
> The end-of-line marker is one optional carriage return followed by one mandatory newline.
>
>
>
Do I need to write my own function that uses `bufio.Reader.ReadByte()`?
|
You can write custom `bufio.SplitFunc` for `bufio.Scanner`. E.g:
```
// Mostly bufio.ScanLines code:
func ScanPDFLines(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i := bytes.IndexAny(data, "\r\n"); i >= 0 {
if data[i] == '\n' {
// We have a line terminated by single newline.
return i + 1, data[0:i], nil
}
advance = i + 1
if len(data) > i+1 && data[i+1] == '\n' {
advance += 1
}
return advance, data[0:i], nil
}
// If we're at EOF, we have a final, non-terminated line. Return it.
if atEOF {
return len(data), data, nil
}
// Request more data.
return 0, nil, nil
}
```
And use it like:
```
scan := bufio.NewScanner(r)
scan.Split(ScanPDFLines)
```
|
DOM Exception when assigning HTML entities to innerHTML
On this page <http://blog.zacharyvoase.com/2010/11/11/sockets-and-nodes-i/>, running the following code in javascript console will throw an Exception.
```
var div = document.createElement('div'); div.innerHTML = "&raquo;";
```
- Chrome 8.0.552.28 Mac: Error: INVALID\_STATE\_ERR: DOM Exception 11
- Firebug in Firefox 3.6.12 Mac: NS\_ERROR\_DOM\_SYNTAX\_ERR An invalid or illegal string was specified
- Safari 5.0.2 Mac: Error: NO\_MODIFICATION\_ALLOWED\_ERR: DOM Exception 7
Opera: works fine
But it works fine in all other pages I tried. My questions are **what's special about the page** and **why does chrome and firefox throw an exception**?
Writing the character directly without using entity works fine.
```
var div = document.createElement('div'); div.innerHTML = "»";
```
Using other entities also works, e.g.
```
var div = document.createElement('div'); div.innerHTML = "&lt;";
```
|
I'm answering my own question.
The short answer: it's because of browser limitation.
If a page is recognized as XHTML by certain browsers, only a subset of named character entities allowed by the standard are supported for innerHTML assignment.
Specifically in my testing, it seems that in Mozilla and WebKit, only `&quot;`, `&amp;`, `&lt;` and `&gt;` are allowed in innerHTML assignment.
My testing code is available here: <https://gist.github.com/673901>
XHTML is a "better" and cleaner version of HTML reformulated in XML. XHTML 1.1 was supposed to be the successor and future of HTML. However, this is unlikely to happen with the adoption of HTML5.
Unlike HTML, which requires a dedicated HTML parser, an XHTML document can be parsed by a general XML parser. At least in mozilla and webkit, XHTML and HTML go through different code paths. It's understandable that the HTML code path is where most effort goes to and also better tested because there are far more HTML documents out there than XHTML.
It is worth noting that whether a document is recognized as XHTML is determined by the effective MIME type rather than the document content.
The conclusion is that if you are working with XHTML, make sure you convert named entities to numeric entities (e.g. `&raquo;` -> `&#187;`) before assigning to .innerHTML.
|
Is GCC correct in requiring the constexpr specifier for this reference declaration?
The code below doesn't compile under [GCC 5.3.0](https://goo.gl/2oyoCt) because the declaration of `r` is missing a `constexpr` specifier.
```
const int i = 1;
const int& r = i;
constexpr int j = r;
```
I believe the rejection is correct. How do I prove it using the working draft N4527?
|
First, since we're using a reference, [expr.const]/(2.9) must not be violated. (2.9.1) applies, though:
>
> an *id-expression* that refers to a variable or data member of
> reference type unless the reference has a preceding initialization and
> either
> — it is initialized with a constant expression
>
>
>
I.e. using `r` is fine, as long as the initializer - `i` - is a constant expression (this is shown below).
It's also necessary to check whether the l-t-r conversion in line 3 is legal, i.e. (2.7) must not be violated. However, (2.7.1) applies:
>
> an lvalue-to-rvalue conversion (4.1) unless it is applied to
> — a
> non-volatile glvalue of integral or enumeration type that refers to
> a complete non-volatile `const` object with a preceding
> initialization, initialized with a constant expression, or
>
>
>
…so that's fine as well, since the (g)lvalue is `r`, and it refers to `i` - which is a non-volatile `const` object with a constant expression initializer (`1`).
We postponed showing that `i` is actually a constant expression, and once that's out of the way, we need to show that `r` is a constant expression.
[expr.const]/5 pertains to that:
>
> A *constant expression* is either a glvalue core constant expression
> whose value refers to an entity that is a permitted result of a
> constant expression (as defined below), or a prvalue core constant
> expression whose value is an object where, for that object and its
> subobjects:
>
>
> - each non-static data member of reference type refers to an entity that is a permitted result of a constant expression, and
> - if the object or subobject is of pointer type, it contains the address of an object with static storage duration, the address past
> the end of such an object (5.7), the address of a function, or a null
> pointer value.
>
>
> An entity is a *permitted result of a constant expression* if it is an object with static storage duration that is
> either not a temporary object or is a temporary object whose value satisfies the above constraints, or it is a
> function.
>
>
>
Since `i` is, in the above context, a (g)lvalue, it has to be a permitted result of a constant expression - which it is, since it has static storage duration and certainly isn't a temporary. Thus `i` is a constant expression.
`r` is, however, treated as a prvalue in line 3. Since we already established that `r` is a core constant expression, we solely need to check the bullet points. They're clearly met, though.
Hence **the code is well-formed in namespace scope**. It won't be in local scope, as `i` wouldn't be a permitted result of a constant expression anymore. [Clang gives a comprehensive error message](http://coliru.stacked-crooked.com/a/6c003aa3056096af).
|
How to install applet on smart card using java
Is there any way to load a .**cap** (converted applet) file, built for Java Card, into Java and then install the applet from that .cap file onto a smart card?
I have a .cap file (a converted applet) and I want to install the applet contained in that .cap file.
First, tell me how to load the .cap file in Java.
In Java I am using the **javax.smartcardio** package to interact with the smart card, and APDU commands to interact with the applet installed on the smart card.
I am using:
1. smart card type = contact card
2. using **JavaCard2.2.2** with **jcop** using **apdu**
|
On every programmable JavaCard there is a pre-loaded applet called "CardManager". This applet has to be used for uploading and installing new applets.
In most cases the SDK used for your JavaCard comes with libraries that encapsulate the necessary steps for selecting the CardManager (including the necessary authentication), loading and installing an applet. This is based on the install and load commands defined in the [GlobalPlatform](http://globalplatform.org) standard.
The necessary steps/commands are for example explained here: [Installing JavaCard Applet into real SmartCard](http://adywicaksono.wordpress.com/2008/01/05/installing-javacard-applet-into-real-smartcard/). However the concrete authentication sequence (CardManager AID, used authentication key) for your card depends on what card type you have.
|
Why doesn't `sudo cd /var/named` work?
I want to `cd` into `/var/named` but it gives me a permission denied error, and when I want to use `sudo` to do this I am not permitted. What is the technical reason for this, and is it possible to do this some other way?
|
The reason you can't do this is simple and twofold
### 1
`cd` is not a program but an in-built command and `sudo` only applies to programs.
`sudo foo` means run the program foo as root
`sudo cd /path` returns
```
sudo: cd: command not found
```
because `cd` is not a program.
### 2
If it were possible to use **sudo** to `cd` to a protected directory, then after running the command `sudo cd /var/named` you would be in that directory as a normal user, but normal users are not allowed to be in that directory.
**This is not possible.**
**Workaround:**
You can use `sudo -i` to elevate yourself to super user. For example:
```
sudo -i
cd /var/named
```
You are now logged on as root and can use whatever commands you wish. When finished type `exit` and you are back to being logged on as a normal user.
|
Is it possible to annotate a seaborn violin plot with number of observations in each group?
I would like to annotate my violin plot with the number of observations in each group. So the question is essentially the same as [this one](https://stackoverflow.com/questions/15720545/use-stat-summary-to-annotate-plot-with-number-of-observations), except:
- python instead of R,
- seaborn instead of ggplot, and
- violin plots instead of boxplots
Lets take this example from [Seaborn API documentation](https://seaborn.pydata.org/generated/seaborn.violinplot.html):
```
import seaborn as sns
sns.set_style("whitegrid")
tips = sns.load_dataset("tips")
ax = sns.violinplot(x="day", y="total_bill", data=tips)
```
I'd like to have n=62, n=19, n=87, and n=76 on top of the violins. Is this doable?
|
In this situation, I like to precompute the annotated values and incorporate them into the categorical axis. In other words, precompute e.g., "Thurs, N = xxx"
That looks like this:
```
import seaborn as sns
sns.set_style("whitegrid")
ax= (
sns.load_dataset("tips")
.assign(count=lambda df: df['day'].map(df.groupby(by=['day'])['total_bill'].count()))
.assign(grouper=lambda df: df['day'].astype(str) + '\nN = ' + df['count'].astype(str))
.sort_values(by='day')
.pipe((sns.violinplot, 'data'), x="grouper", y="total_bill")
.set(xlabel='Day of the Week', ylabel='Total Bill (USD)')
)
```
[](https://i.stack.imgur.com/WfrXa.png)
|
Linux - How to tell which files are sourced at login?
When a particular user logs in to a Linux machine, certain files are sourced, like perhaps .bashrc, .bash\_profile, etc... Sometimes different shells mean different files are sourced. And of course the user might have something setup to source certain custom files of their own.
My question: Is there a way for root/su to determine a list of every file that is sourced for any given user when they login?
|
inotifywatch may help. Packaged in inotify-tools.
Run it as the user who owns the home directory being watched, not with sudo, or you'll get errors on .gvfs if running a newer GNOME.
inotify will only tell you which files get accessed/created/modified/deleted.
```
$ inotifywatch -r /home/username/.* /home/username/*
Establishing watches...
```
in another terminal
```
$ cat /home/username/.bashrc
```
in inotifywatch terminal ctrl-c to end
```
Finished establishing watches, now collecting statistics.
total access modify close_nowrite open filename
3 1 0 1 1 /home/username/.bashrc
```
**For your specific request, all files accessed during login.**
```
$ inotifywatch -r /home/username/.* /home/username/*
Establishing watches...
```
in another terminal
```
$ sudo su
# login username
```
in inotifywatch terminal ctrl-c to end
You may want to redirect the inotifywatch to a file if using a fat desktop like gnome or kde. Or increase your scrollback in the inotifywatch terminal. In gnome 3, thousands of homedir files are accessed during login. You'll probably want to either exclude directories or make a specific dir/file list to watch.
|
How to return concrete type from generic function?
In the example below the `Default` trait is used just for demonstration purposes.
My questions are:
1. What is the difference between the declarations of `f()` and `g()`?
2. Why `g()` doesn't compile since it's identical to `f()`?
3. How can I return a concrete type out of a `impl trait` generically typed declaration?
```
struct Something {
}
impl Default for Something {
fn default() -> Self {
Something{}
}
}
// This compiles.
pub fn f() -> impl Default {
Something{}
}
// This doesn't.
pub fn g<T: Default>() -> T {
Something{}
}
```
|
>
> What is the difference between the declarations of `f()` and `g()`?
>
>
>
- `f` returns some type which implements `Default`. The caller of `f` *has no say in what type it will return*.
- `g` returns some type which implements `Default`. The caller of `g` *gets to pick the exact type that must be returned*.
You can clearly see this difference in how `f` and `g` can be called. For example:
```
fn main() {
let t = f(); // this is the only way to call f()
let t = g::<i32>(); // I can call g() like this
let t = g::<String>(); // or like this
    let t = g::<Vec<Box<u8>>>(); // or like this... and so on!
// there's potentially infinitely many ways I can call g()
// and yet there is only 1 way I can call f()
}
```
>
> Why `g()` doesn't compile since it's identical to `f()`?
>
>
>
They're not identical. The implementation for `f` compiles because it can only be called in 1 way and it will always return the exact same type. The implementation for `g` fails to compile because it can get called infinitely many ways for all different types but it will always return `Something` which is broken.
>
> How can I return a concrete type out of a `impl trait` generically typed declaration?
>
>
>
If I'm understanding your question correctly, you can't. When you use generics you let the caller decide the types your function must use, so your function's implementation itself must be generic. If you want to construct and return a generic type within a generic function the usual way to go about that is to put a `Default` trait bound on the generic type and use that within your implementation:
```
// now works!
fn g<T: Default>() -> T {
T::default()
}
```
If you need to conditionally select the concrete type within the function then the only other solution is to return a trait object:
```
struct Something;
struct SomethingElse;
trait Trait {}
impl Trait for Something {}
impl Trait for SomethingElse {}
fn g(some_condition: bool) -> Box<dyn Trait> {
if some_condition {
Box::new(Something)
} else {
Box::new(SomethingElse)
}
}
```
|
Where is .class defined in Java? (Is it a variable or what?)
There are 2 ways to get a class's `Class` object.
1. Statically:
```
Class cls = Object.class;
```
2. From an instance:
```
Object ob = new Object();
Class cls = ob.getClass();
```
Now my question is `getClass()` is a method present in the `Object` class,
but what is `.class`? Is it a variable? If so then where is it defined in Java?
|
That's implemented internally and called a **class literal** which is handled by the JVM.
The Java Language Specification specifically mentions the term "token" for it.
So `.class` is more than a variable; to be frank, it is not a variable at all. At a broader level you can consider it a keyword or *token*.
<https://docs.oracle.com/javase/specs/jls/se9/html/jls-15.html#jls-15.8.2>
>
> A class literal is an expression consisting of the name of a class, interface, array, or primitive type, or the pseudo-type `void`, **followed by a '`.`' and the token `class`.**
>
>
> A class literal evaluates to the `Class` object for the named type (or for `void`) as defined by the defining class loader (§12.2) of the class of the current instance.
>
>
>
|
Custom Layout Android Compose measurable width
I'm trying to implement a Custom "Row" Layout in Android compose like this:
```
@Composable
fun CustomLayout(modifier: Modifier, content: @Composable () -> Unit){
Layout(
modifier = modifier,
content = content,
) { measurables, constraints ->
val placeables = measurables.map { measurable ->
measurable.measure(constraints)
}
var xPos = 0
layout(constraints.maxWidth, constraints.maxHeight) {
placeables.forEach { placeable ->
placeable.placeRelative(x = xPos, y = 0)
xPos += placeable.width
}
}
}
}
```
Here is the preview for this composable:
```
@Preview(name = "Preview")
@Composable
fun customLayoutPreview() {
CustomLayout(Modifier.size(1080.dp,100.dp)){
Box(
Modifier
.size(32.dp, 64.dp)
.background(Color.Red)) {
}
Box(
Modifier
.size(100.dp, 64.dp)
.background(Color.Green)) {
}
Box(
Modifier
.size(32.dp, 64.dp)
.background(Color.Yellow)) {
}
}
}
```
The issue here is that each `measurable` (Box), after being measured, got parent width, hence, 1080dp, and not the desired width passed to the boxes which caused the first child to take all space.
see:
[Preview](https://i.stack.imgur.com/E7jkj.png)
What is expected to be in theory:
(Box 32dp)(Box 100dp)(Box 32dp)
What I'm missing here?
I tried the documentation, but I could not understand a way to achieve something like plain android Custom `ViewGroup.onMeasure` like :
`view.measure(MeasureSpec.makeMeasureSpec(wantedSize, AT_MOST or EXACT))`, where `wantedSize` can be fetched from LayoutParams or from the `WRAP_CONTENT` strategy.
Environment:
Android Studio Arctic Fox.
Compose: 1.0.0.
Kotlin: 1.5.10.
AGP and Gradle: 7.0.0.
|
You have to make a copy of your constraints and change the min to 0. Change it like this:
```
@Composable
fun CustomLayout(modifier: Modifier, content: @Composable () -> Unit) {
Layout(
modifier = modifier,
content = content,
) { measurables, constraints ->
// MAKE A COPY WITH 0 FOR MIN
val looseConstraints = constraints.copy(
minWidth = 0,
minHeight = 0
)
val placeables = measurables.map { measurable ->
measurable.measure(looseConstraints)
}
var xPos = 0
layout(constraints.maxWidth, constraints.maxHeight) {
placeables.forEach { placeable ->
placeable.placeRelative(x = xPos, y = 0)
xPos += placeable.width
}
}
}
}
```
[](https://i.stack.imgur.com/C1EYe.png)
|
No style ViewPagerIndicator in combination with SherlockActionBar
I'm trying to implement the ViewPagerIndicator with SherlockActionBar. It works: I can slide the fragments, but the style doesn't work!
It looks like this:
[](https://i.stack.imgur.com/PmWVl.png)
related topic: <https://github.com/JakeWharton/Android-ViewPagerIndicator/issues/66>
I know what's going wrong:
In the sample of VPI, the style of a page is set in AndroidManifest.xml by (for example)
```
<activity
android:name=".SampleTabsDefault"
android:theme="@style/Theme.PageIndicatorDefaults">
</activity>
```
But when I add that android:theme element to my own Manifest.xml I get the following exception:
```
03-16 11:06:24.400: E/AndroidRuntime(483): Caused by: java.lang.IllegalStateException: You must use Theme.Sherlock, Theme.Sherlock.Light, Theme.Sherlock.Light.DarkActionBar, or a derivative.
```
I've also tried to copy all the style-information to styles.xml, but that also doesn't work.
Can you tell me how to set a style to the VPI?
Thank you!
EDIT:
Now I've added
```
<activity
android:name=".SampleTabsDefault"
android:theme="@style/StyledIndicators">
</activity>
```
to my Manifest.xml
and the following style in styles.xml
```
<style name="Theme.BataApp" parent="Theme.Sherlock.Light.DarkActionBar">
<item name="actionBarStyle">@style/Widget.Styled.ActionBar</item>
<item name="android:actionBarStyle">@style/Widget.Styled.ActionBar</item>
</style>
<style name="StyledIndicators" parent="Theme.BataApp">
<item name="vpiTitlePageIndicatorStyle">@style/Widget.TitlePageIndicator</item>
<item name="vpiTabPageIndicatorStyle">@style/Widget.TabPageIndicator</item>
<item name="vpiTabTextStyle">@style/Widget.TabPageIndicator.Text</item>
</style>
```
Result: I don't get any Exception, see the style, but I now I can't see any fragments :(
The code of the activity can be found here: <http://pastebin.com/hsNgxFDZ>
Screenshot of current page with style:
[](https://i.stack.imgur.com/oXNtH.png)
|
In order to see the Fragments, you'll have to somehow extend an Android base-theme. In order to see the ViewPagerIndicator, you'll have to somehow include those style-definitions. The answer is to create your own theme that extends the ActionBarSherlock-theme (which extends the Android base-themes), and implements the ViewPagerIndicator-styles.
Example:
```
<style name="myTheme" parent="@style/Theme.Sherlock">
<!-- If you want to use the default ViewPagerIndicator-styles, use the following: -->
<item name="vpiTitlePageIndicatorStyle">@style/Widget.TitlePageIndicator</item>
<item name="vpiTabPageIndicatorStyle">@style/Widget.TabPageIndicator</item>
<item name="vpiTabTextStyle">@style/Widget.TabPageIndicator.Text</item>
<!-- If you want to implement your custom style of CirclePageIndicator, do it like so: -->
<item name="vpiCirclePageIndicatorStyle">@style/myCirclePageIndicator</item>
<!-- Put whatever other styles you want to define in your custom theme -->
</style>
<style name="myCirclePageIndicator" parent="Widget.CirclePageIndicator">
<item name="fillColor">@color/my_custom_circle_indicator_fill_color</item>
</style>
```
Then in your Manifest, set `android:theme="@style/myTheme"` to the `<application>` or to your `<activity>`s.
Note that you don't have to include all 4 "vpi..."-styles; you only have to include those that you use. E.g. if you're only using the CirclePageIndicator, you only have to include `<item name="vpiCirclePageIndicatorStyle">...</item>` and can leave out the other 3 item declarations.
|
TextField maxLength - Android Jetpack Compose
Is there any out-of-the-box solution for limiting the number of characters in a TextField?
I don't see any maxLength parameter like we had in XML.
|
You can use the **`onValueChange`** parameter to limit the number of characters.
```
var text by remember { mutableStateOf("") }
val maxChar = 5
TextField(
value = text,
onValueChange = {
if (it.length <= maxChar) text = it
    },
singleLine = true,
)
```
Then with **M3** you can use the **`supportingText`** attribute to display the `counter text`.
Something like:
```
val maxChar = 5
TextField(
value = text,
onValueChange = {
if (it.length <= maxChar) text = it
},
modifier = Modifier.fillMaxWidth(),
supportingText = {
Text(
text = "${text.length} / $maxChar",
modifier = Modifier.fillMaxWidth(),
textAlign = TextAlign.End,
)
},
)
```
[](https://i.stack.imgur.com/PF9Io.png)
With **M2** there isn't a built-in parameter.
In this case to display the **counter text** you can use something like:
```
val maxChar = 5
Column(){
TextField(
value = text,
onValueChange = {
if (it.length <= maxChar) text = it
},
singleLine = true,
modifier = Modifier.fillMaxWidth()
)
Text(
text = "${text.length} / $maxChar",
textAlign = TextAlign.End,
style = MaterialTheme.typography.caption,
modifier = Modifier.fillMaxWidth().padding(end = 16.dp)
)
}
```
[](https://i.stack.imgur.com/kW77Q.png)
|
Getting the text from all p elements in a div with BeautifulSoup
I'm trying to get the text (contents without the tags) of all p elements in a given div:
```
import requests
from bs4 import BeautifulSoup
def getArticle(url):
url = 'http://www.bbc.com/news/business-34421804'
result = requests.get(url)
c = result.content
soup = BeautifulSoup(c)
article = []
article = soup.find("div", {"class":"story-body__inner"}).findAll('p')
for element in article:
article = ''.join(element.findAll(text = True))
return article
```
The problem is that this returns only the contents of the last paragraph. But the code works perfectly if I just use print:
```
for element in article:
print ''.join(element.findAll(text = True))
return
```
I want to call this function elsewhere, so I need it to return the text, not just print it. I have searched stackoverflow and googled a lot, but didn't find the answer and I don't understand what might be the problem. I use Python 2.7.9 and bs4.
Thanks in advance!
|
Following code should work -
```
import requests
from bs4 import BeautifulSoup
def getArticle(url):
url = 'http://www.bbc.com/news/business-34421804'
result = requests.get(url)
c = result.content
soup = BeautifulSoup(c)
article_text = ''
article = soup.find("div", {"class":"story-body__inner"}).findAll('p')
for element in article:
article_text += '\n' + ''.join(element.findAll(text = True))
return article_text
```
There are a couple of issues in your code -
1. Same variable name 'article' has been used to store the elements as well as the text.
2. The variable that should be returned is only being assigned the value and not appended, so only last value remains in that.
|
How to convert a 64 bit binary string into a decimal number in Tcl?
I'm trying to convert a 64-bit binary string value into an integer, but I'm apparently hitting a Tcl integer limit and getting:
```
integer value too large to represent
```
Binary String:
```
000000000000101000000000000010100000000000000100
```
The proc I'm using to convert from binary to integer is:
```
proc binary2decimal {bin} {
binary scan [binary format B64 [format %064d $bin]] I dec
return $dec
}
```
I tried using 64 for formatting and also tried LI to no avail. I read about "wide" but did not understand if it could be applied here.
P.S: I'm using tcl 8.4
|
First off, the error is coming from `format`, which isn't very happy with 64-bit numbers by default (for backward compatibility reasons) but we're really working with a string of characters there, so `%064s` (`s`, not `d`) is a suitable fix.
You really ought to switch to using Tcl 8.5 where this is *much* easier:
```
binary scan [binary format B64 [format "%064s" $bin]] W dec
```
Or even just:
```
set dec [expr "0b$bin"]
```
In 8.4, you've got to do more work as the 64-bit support was a lot more primitive.
```
binary scan [binary format B64 [format %064s $bin]] II d1 d2
set dec [expr {wide($d1)<<32 | wide($d2)}]
```
That's slightly non-obvious, but the `I` format is and always has been documented to work with 32-bit values only. So we stitch the two (big-endian) half-values back together after scanning.
|
Read from word document line by line
I'm trying to read a Word document using C#. I am able to get all the text, but I want to be able to read **line by line** and **store in a list and bind to a gridview**. Currently my code returns a list with only one item containing all the text (not line by line as desired). I'm using the **Microsoft.Office.Interop.Word** library to read the file. Below is my code so far:
```
Application word = new Application();
Document doc = new Document();
object fileName = path;
// Define an object to pass to the API for missing parameters
object missing = System.Type.Missing;
doc = word.Documents.Open(ref fileName,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing);
String read = string.Empty;
List<string> data = new List<string>();
foreach (Range tmpRange in doc.StoryRanges)
{
//read += tmpRange.Text + "<br>";
data.Add(tmpRange.Text);
}
((_Document)doc).Close();
((_Application)word).Quit();
GridView1.DataSource = data;
GridView1.DataBind();
```
|
Ok. I found the solution [here](http://mantascode.com/c-how-to-parse-the-text-content-from-microsoft-word-document/).
The final code is as follows:
```
Application word = new Application();
Document doc = new Document();
object fileName = path;
// Define an object to pass to the API for missing parameters
object missing = System.Type.Missing;
doc = word.Documents.Open(ref fileName,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing);
String read = string.Empty;
List<string> data = new List<string>();
for (int i = 0; i < doc.Paragraphs.Count; i++)
{
string temp = doc.Paragraphs[i + 1].Range.Text.Trim();
if (temp != string.Empty)
data.Add(temp);
}
((_Document)doc).Close();
((_Application)word).Quit();
GridView1.DataSource = data;
GridView1.DataBind();
```
|
git error :Unable to look up xyz.com (port ) (Servname not supported for ai\_socktype)
I'm getting this error message from git. What does this mean? How do I fix it?
|
I don't think this is related to Git, but rather linked to the server on which Git is running.
See [this article](http://www.ducea.com/2006/09/11/error-servname-not-supported-for-ai_socktype/)
>
> what could this mean: “Servname not supported for ai\_socktype“.
>
> After some other tries I have finally seen the problem… NTP ports were not defined in `/etc/services` and this was the root of the error.
>
> The system didn’t know how to make ntp connections without that. So I’ve added the following lines to `/etc/services`
>
>
>
```
ntp 123/tcp
ntp 123/udp
```
>
> and after this `ntpdate` started working as expected…
>
>
>
Check with your administrator before attempting any modification of those files (unless this is your personal server)
|
Xcode 4: How to Add Static Library Target Dependency to Project
I know, this has been asked a few times, but mostly for Xcode 3.x. For the iPad, I have two projects both living in a common workspace
- Foo, a view-based application and
- Foolib, a static Cocoa-Touch library
the former depending on the latter. I seem unable to work out how to add that dependency. It's not well explained in the documentation, so I tried the following:
1. Click on the Foo project in the Navigation Area on the left,
2. Select Build Phases up the top and expand the Target Dependencies section
3. Click the plus button, but the resulting list is empty.
I have also tried to drag/drop the .a file into that section, with little success. Also, the documentation states
>
> If the build product of one project in a workspace is dependent on the build product of another project in the workspace (for example, if one project builds a library used by the other project), Xcode discovers such implicit dependencies and builds in the correct sequence.
>
>
>
I wonder how Xcode discovers those dependencies. Is Apple saying I don't have to add this target dependency at all? But then how would Xcode discover that one is using the other?
Last but not least, I will need to get the .h files from Foolib across to Foo somehow. What is the recommended way of doing that? Obviously, I don't want to just copy them. With frameworks the header files come included, but what do people generally do when working with static libraries that they themselves develop in parallel.
A nudge in the right direction would be much appreciated. Thank you.
|
In general Xcode 4 seems to discover the dependencies automatically as the Edit Scheme sheet implies. Other developers have mentioned that the dependencies are not automatically discovered and require explicitly listing them:
So, Edit Scheme -> Build -> add targets from your workspace.
As far as the static library header files go, Xcode 4 seems to have a problem, at least with code completion and syntax highlighting. The only way I can get either to work properly with classes in static libraries is to drag a copy of the header files in question into a group folder in the main project. Note that you should uncheck Add to Target... That takes care of the syntax highlighting and code completion. The rest should be handled by giving it the proper header search path. That would be User Header Search Paths = $(BUILT\_PRODUCTS\_DIR) - depending on how you set up your locations preferences.
See: [this link](http://blog.carbonfive.com/2011/04/04/using-open-source-static-libraries-in-xcode-4/)
|
IIS 7.5 optimizations for a site serving only static content
I'm looking to setup a cookie-free domain intended to serve static content for a web application, similar to the <http://sstatic.net/> site that the stack exchange sites use.
My question is, what optimizations can I make to my IIS 7.5 setup for such a domain? For example, it will never be responsible for anything but serving static content, so would disabling ASP.NET integration be a good move for this site?
Any suggestions or references on setting up such a site with IIS 7.5 would be most welcome.
**Edit**
Just to clarify, this isn't the ONLY site on the server, so suggested optimizations should target the site level, and not the server-level config.
|
There are several considerations in this, some which are handled on IIS (HTTP compression, caching headers fx), and some which are handled during the build process / before deployment (such as Javascript and CSS file concatenation & whitespace minification).
As such, it's a bit hard to give you a complete rundown in one answer, as some of it will depend on your build & release methods. In high level steps:
- The site is "cookieless" by virtue of you using a new domain, one that is not tied to your web application. Since you're not setting any cookies for the domain (using e.g. .NET application code), it's then "cookieless".
- You should *absolutely* [enable HTTP compression for static text content](http://www.iis.net/ConfigReference/system.webServer/httpCompression) such as Javascript and CSS.
- I'm not the greatest IIS administrator, but as far as I can tell, you only need the default IIS components associated with the [basic "Web Server (IIS)" server role](http://technet.microsoft.com/en-us/library/cc753473.aspx).
- You should *absolutely* enable [long caching headers for the static content](http://www.iis.net/ConfigReference/system.webServer/staticContent/clientCache). The general recommendation is 31 days, but you can set it higher or lower. Remember, if you serve static content with long cache headers, then you must change the URL if you change the file, to avoid old cached content being re-used by the clients.
- You *should* enable HTTP keep-alive (same docs as caching headers).
In addition to this, there are pre-deployement tasks, such as [whitespace compressing the Javascript and CSS](http://coderjournal.com/2010/01/yahoo-yui-compressor-vs-microsoft-ajax-minifier-vs-google-closure-compiler/), and [ideally compress PNG's better](http://www.smushit.com/ysmush.it/), etc. This is were your development tools and build cycle helps decide how to proceed.
When you're done, try downloading a few files from your [static servers with YSlow enabled](http://developer.yahoo.com/yslow/). I find that the ["Classic V2" ruleset](http://developer.yahoo.com/yslow/help/#guidelines) gives the biggest impact for the effort, so I would suggest check your score against this YSlow ruleset.
Of the "Classic V2" ruleset, these rules apply cleanly to your static server IIS instances & content:
```
3. Add an Expires or a Cache-Control Header
4. Gzip Components
10. Minify JavaScript and CSS
11. Avoid Redirects
13. Configure ETags
19. Use Cookie-Free Domains for Components
22. Make favicon.ico Small and Cacheable
```
|
parameters of jstree create\_node
Could you please give me the list of parameters of this function and an example of usage
```
$('#treepanel').jstree("create_node");
```
|
IMHO jsTree is powerful, but documentation could be improved.
The create\_node function is documented [here](https://www.jstree.com/api/#/?f=create_node(%5Bpar,%20node,%20pos,%20callback,%20is_loaded%5D)).
Be careful not to interpret the [] literally. They are just there to indicate that the parameters are optional.
This works for jsTree version "pre 1.0 fixed":
```
var position = 'inside';
var parent = $('#your-tree').jstree('get_selected');
var newNode = { state: "open", data: "New nooooode!" };
$('#your-tree').jstree(
"create_node", parent, position, newNode, false, false);
```
---
JSTree 3.3.5
According to their docs, the "create\_node" function now takes the 'newNode' and 'position' arguments in the opposite order:
```
$('#your-tree').jstree("create_node", parent, newNode, position, false, false);
```
<https://www.jstree.com/api/#/?f=create_node([par,%20node,%20pos,%20callback,%20is_loaded])>
|
Getting all function arguments in Haskell as a list
Is there a way in Haskell to get all function arguments as a list?
Let's suppose we have the following program, where we want to add the two smaller numbers and then subtract the largest. Suppose we can't change the function definition of `foo :: Int -> Int -> Int -> Int`. Is there a way to get all function arguments as a list, other than constructing a new list and adding each argument as an element of said list? More importantly, is there a general way of doing this independent of the number of arguments?
Example:
```
module Foo where
import Data.List
foo :: Int -> Int -> Int -> Int
foo a b c = result!!0 + result!!1 - result!!2 where result = sort ([a, b, c])
```
|
>
> is there a general way of doing this independent of the number of arguments?
>
>
>
Not really; at least it's not worth it. First off, this entire idea isn't very useful because lists are *homogeneous*: all elements must have the same type, so it only works for the rather unusual special case of functions which only take arguments of a single type.
Even then, the problem is that “number of arguments” isn't really a sensible concept in Haskell, because as Willem Van Onsem commented, all functions really only have *one* argument (further arguments are actually only given to the *result* of the first application, which has again function type).
That said, at least for a single argument- and final-result type, it is quite easy to pack any number of arguments into a list:
```
{-# LANGUAGE FlexibleInstances #-}
class UsingList f where
usingList :: ([Int] -> Int) -> f
instance UsingList Int where
usingList f = f []
instance UsingList r => UsingList (Int -> r) where
usingList f a = usingList (f . (a:))
foo :: Int -> Int -> Int -> Int
foo = usingList $ (\[α,β,γ] -> α + β - γ) . sort
```
It's also possible to make this work for any type of the arguments, using type families or a multi-param type class. What's not so simple though is to write it once and for all with a variable type for the final result. The reason being that it would also have to handle a *function* as the type of the final result. But then, that could also be interpreted as “we still need to add one more argument to the list”!
|
Is there a .NET port of the SED stream editor
Is there a .NET port of the SED stream editor? Ideally it would be open source, but at this stage I'm not ruling anything out.
I have googled for this already without any obvious hits, so the secondary question here is: How hard would it be to create one?
I assume the sources for SED are in C. If the answer to my primary question is No, I would appreciate pointers/strategies for porting such a project to C#.
|
Use a mingw build including [sed from msys](http://mingw.org/wiki/msys), [build a dll](http://www.adp-gmbh.ch/win/misc/mingw/dll.html), then [call the dll](http://www.adp-gmbh.ch/csharp/call_dll.html) from C#.
The [gnuwin32](http://gnuwin32.sourceforge.net/) project includes a sed port and a minised.
**References**
- [Building Windows DLLs with MinGW](http://www.transmissionzero.co.uk/computing/building-dlls-with-mingw/)
- [How to compile C program as dll in Mingw for use with Inno Setup & callback](https://stackoverflow.com/questions/18521921/1113772)
|
Presence of unrelated method in interface breaks text/template?
Playground link: <http://play.golang.org/p/Ebf5AuJlcP>
```
type Foo interface {}
type Bar interface {
ThisIsABar()
}
// Person implements both Foo and Bar
type Person struct {
Name string
}
func (p Person) ThisIsABar() {}
type FooContext struct {
Something Foo
}
type BarContext struct {
Something Bar
}
func main() {
t := template.Must(template.New("test").Parse("{{ .Something.Name }}\n"))
// This works fine.
if err := t.Execute(os.Stdout, FooContext{Person{"Timmy"}}); err != nil {
fmt.Printf("Error: %s\n", err)
}
// With BarContext, containing the exact same Person but wrapped in a
// different interface, it errors out.
if err := t.Execute(os.Stdout, BarContext{Person{"Timmy"}}); err != nil {
fmt.Printf("Error: %s\n", err)
}
}
```
When I render a template (via the `text/template` package) containing `{{ .Something.Name }}`, I can go through interface `Foo` which contains no methods, and it works fine. But if I go through interface `Bar` instead, I get:
```
executing "test" at <.Something.Name>: can't evaluate field Name in type main.Bar
```
Why does the presence of an unrelated method on the interface, that isn't even used, affect the rendering of the template?
|
text/template special-cases the empty interface `interface{}`, so that, for example, called functions can have return type `interface{}` and the template still digs down to the concrete value inside. Adding a method to your interface means that detection no longer triggers.
<http://golang.org/src/pkg/text/template/exec.go#L323>
```
323 for _, cmd := range pipe.Cmds {
324 value = s.evalCommand(dot, cmd, value) // previous value is this one's final arg.
325 // If the object has type interface{}, dig down one level to the thing inside.
326 if value.Kind() == reflect.Interface && value.Type().NumMethod() == 0 {
327 value = reflect.ValueOf(value.Interface()) // lovely!
328 }
329 }
```
BarContext.Something is a Bar (an interface). A Bar has no field Name. If you want to use an interface there, you'll need to provide the data via a method that is part of the interface.
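A minimal sketch of that approach (assuming you are free to change `Bar`): declare the accessor on the interface and let the template call it as a method instead of a field:
```
package main

import (
	"os"
	"text/template"
)

// Bar now exposes the name through a method that is part of the interface.
type Bar interface {
	Name() string
}

type Person struct{ name string }

func (p Person) Name() string { return p.name }

type BarContext struct {
	Something Bar
}

func main() {
	// {{ .Something.Name }} now resolves to the Name() method declared on Bar.
	t := template.Must(template.New("test").Parse("{{ .Something.Name }}\n"))
	if err := t.Execute(os.Stdout, BarContext{Person{"Timmy"}}); err != nil {
		panic(err)
	}
}
```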
|
Nodejs http request not working
I have a piece of code that's supposed to do an http GET request. The program exited successfully without error, but I didn't see any response and it never even entered the callback function! At first I thought it was because http is asynchronous, so I put a large loop at the end, but that didn't work either. Does anyone know what the issue is? Only the first console logs, sendHttpRequest and 444, get printed. I also tried http.get but it didn't work either.
```
function sendHttpRequest (url, callBack) {
console.log("sendHttpRequest");
//constrct options
var options = {
host: 'www.google.com',
path: '/index.html',
method: 'GET',
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
}
};
http.get("http://www.google.com/index.html", function(res) {
console.log("Got response: " + res.statusCode);
});
var req = http.request(options, function(res) {
console.log("333");
var output = '';
console.log(options.host + ':' + res.statusCode);
res.setEncoding('utf8');
res.on('data', function (chunk) {
console.log("DATATATAT!")
output += chunk;
});
res.on('end', function () {
console.log('222');
var obj = JSON.parse(output);
callBack(res.statusCode, obj);
});
});
req.on('error', function (err) {
console.log('error: ' + err.message);
});
req.end();
console.log("444");
}
```
|
**Update**
The grunt task terminated before the OP received a response; adding `async` and a callback to the task fixed it.
---
If I take your code outside of the function and prepend `var http = require('http');`, I get a response up until `222`, at which point it dies with `SyntaxError: Unexpected token <`. It is dying because you're trying to parse an HTML response as JSON.
If you paste the entire script below and run it end to end, the console dies with:
```
undefined:1
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
^
SyntaxError: Unexpected token <
at Object.parse (native)
at IncomingMessage.<anonymous> (/Users/you/nodetest/tmp/test.js:31:28)
at IncomingMessage.EventEmitter.emit (events.js:120:20)
at _stream_readable.js:896:16
at process._tickCallback (node.js:599:11)
```
The script:
```
var http = require('http');
console.log("sendHttpRequest");
//constrct options
var options = {
host: 'www.google.com',
path: '/index.html',
method: 'GET',
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
}
};
http.get("http://www.google.com/index.html", function(res) {
console.log("Got response: " + res.statusCode);
});
var req = http.request(options, function(res) {
console.log("333");
var output = '';
console.log(options.host + ':' + res.statusCode);
res.setEncoding('utf8');
res.on('data', function (chunk) {
console.log("DATATATAT!")
output += chunk;
});
res.on('end', function () {
console.log('222');
// it's failing on the next line, because the output
// it's receiving from Google is HTML, not JSON.
// If you comment out this line and simply
// "console.log(output)" you'll see the HTML response.
var obj = JSON.parse(output);
callBack(res.statusCode, obj);
});
});
req.on('error', function (err) {
console.log('error: ' + err.message);
});
req.end();
console.log("444");
```
|
Doctrine's date field - how to write query
I'm using Symfony2 and Doctrine.
I have a date field, here it is:
```
/**
* @ORM\Column(type="date")
*/
protected $date;
```
In my form I use a text field to avoid Chrome default datepicker, but I insert DateTime objects in the database:
```
if ($request->isMethod('POST')) {
$form->bind($request);
//Convert string date to DateTime object and send it to database as object
$dateObj = \DateTime::createfromformat('d-m-Y', $expense->getDate());
$expense->setDate($dateObj);
// ...
```
and then I want to find all items with a specific date:
```
public function findExpensesForDate($user, $date)
{
$q = $this
->createQueryBuilder('e')
->where('e.date = :date')
->andWhere('e.user = :user')
->setParameter('date', $date)
->setParameter('user', $user)
->getQuery();
return $q->getResult();
}
```
and call it like this:
```
$expenses_for_today = $this->repository->findExpensesForDate($this->user, $today);
```
which returns nothing when
```
$today = new \DateTime();
```
and returns the results when
```
$today_obj = new \DateTime();
$today = $today_obj->format('Y-m-d');
```
So why doesn't this work when I pass the date as an object? Isn't the reason to use a date field to take advantage of querying with DateTime objects? I guess I'm missing something trivial and important, but I just can't see what, or I'm not understanding the situation quite well. My understanding is like this: the field is of type date, so I should insert DateTime objects into it, and when querying I should also use DateTime objects. Can you please help me fix this?
P.S.: I tried changing the field to datetime:
```
/**
* @ORM\Column(type="datetime")
*/
protected $date;
```
but there was no change.
And in general, is it OK and good practice to query with a string? Will I get the advantage of using objects when querying that way?
|
I think this is because a DateTime object is too specific: it also carries hours, minutes and seconds, so it can't be equal to a plain date registered in the database!
If you want to use DateTime objects, you need the date of the beginning of the day and an end date, and fetch all results of the day that fall between them. In other words, you must compare against an interval of dates.
First, get the start and end dates of the current day (to simplify, we will base the end date on the beginning of the next day and exclude it in the query):
```
$fromDate = new \DateTime('now'); // Have for example 2013-06-10 09:53:21
$fromDate->setTime(0, 0, 0); // Modify to 2013-06-10 00:00:00, beginning of the day
$toDate = clone $fromDate;
$toDate->modify('+1 day'); // Have 2013-06-11 00:00:00
```
And modify your method:
```
public function findExpensesForDate($user, $fromDate, $toDate)
{
$q = $this
->createQueryBuilder('e')
->where('e.date >= :fromDate')
->andWhere('e.date < :toDate')
->andWhere('e.user = :user')
->setParameter('fromDate', $fromDate)
->setParameter('toDate', $toDate)
->setParameter('user', $user)
->getQuery();
return $q->getResult();
}
```
That's all, it should work. Here is the call:
```
$expenses_for_today = $this->repository->findExpensesForDate($this->user, $fromDate, $toDate);
```
So you will get all the dates between **2013-06-10 00:00:00** and **2013-06-11 00:00:00** (excluded), i.e. the results of the day!
|
Can a derived class be smaller than its parent class?
I'm asking this about C++ since I'm familiar with that but the question really explains itself: is there a language in which we can derive a class and make it occupy less space in memory than the original class?
This question is more of a gimmick than an actual problem I'm trying to solve, but I could imagine that some really high-performant code could benefit from this kind of memory optimization:
Suppose we have:
```
class Person {
std::string name;
unsigned int age;
};
class PersonNamedJared : public Person {
};
```
In theory, we don't need a field "name" in this subclass since it'll always be "Jared", which could allow for better memory efficiency.
Is the only way to make this happen by replacing the 'name' field in `Person` with a `get_name()` function that we simply override in `PersonNamedJared` to always return "Jared"? How would we still make the name variable in the base class?
I know this example is really bad practice, but it's the best I could come up with. I think there are legitimate reasons to implement this kind of pattern.
|
>
> Can a derived class be smaller than its parent class?
>
>
>
No. A derived class always contains a base class sub object. An object can never be smaller than its sub objects, so a derived class can never be smaller than its base.
>
> Is the only way to make this happen by replacing the 'name' field in Person with a get\_name() function that we simply override in PersonNamedJared to always return "Jared"?
>
>
>
That would be one way to achieve it.
>
> How would we still make the name variable in the base class?
>
>
>
You couldn't have a member variable in a base that you don't want to be in the derived class.
You could for example use multiple inheritance so that you do have a base with the variable in some derived classes. Something like this:
```
struct Person {
    unsigned int age;
    virtual std::string name() = 0;
};
struct Named {
    std::string name;
};
struct NamedPerson : Person, private Named {
    std::string name() override {
        return Named::name;  // qualify: refer to the inherited field, not this function
    }
};
struct JaredPerson : Person {
    std::string name() override {
        return "Jared";
    }
};
```
Here, there is a base with the variable, but Jared does not inherit that particular base.
|
Lambda LINQ query with multiple from clauses
I've got this query
```
var myobjectList = from g in repository.SpeedGoalsRepository.DbSet.Where(e => e.ID == null)
from d in daysOfWeek
select new myObject
{
DayID = (short?)d,
GoalA = g.GoalA,
GoalB = g.GoalB
};
```
where `daysOfWeek` is an enum array.
```
private readonly DayOfWeek[] daysOfWeek = new DayOfWeek[]
{
DayOfWeek.Sunday,
DayOfWeek.Monday,
DayOfWeek.Tuesday,
DayOfWeek.Wednesday,
DayOfWeek.Thursday,
DayOfWeek.Friday,
DayOfWeek.Saturday
};
```
so, I need to convert this query syntax to a lambda expression. I tried with this, but it's wrong :(
```
var defaultSpeedGoals= repository.SpeedGoalsRepository.DbSet.Where(e => e.ID == null);
var myobjectList = SpeedGoals.Cast<SpeedGoalsRepository>()
.SelectMany(g => g.DbSet.Cast<DayOfWeek>().Select(sg => new myObject
{
DayID = (short?)sg,
GoalA= g.DbSet.FirstOrDefault().GoalA,
GoalB = g.DbSet.FirstOrDefault().GoalB
}));
```
|
The page you want to look at is here:
<https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions>
It's a lot of text. The bit you want specifically is:
>
> A query expression with a second from clause followed by a select clause
>
>
>
```
from x1 in e1
from x2 in e2
select v
```
>
> is translated into
>
>
>
```
( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => v )
```
Let's apply that to your example:
```
from g in repository.SpeedGoalsRepository.DbSet.Where(e => e.ID == null)
from d in daysOfWeek
select new myObject
{
DayID = (short?)d,
GoalA = g.GoalA,
GoalB = g.GoalB
};
```
- `x1` is `g`
- `e1` is `repository.SpeedGoalsRepository.DbSet.Where(e => e.ID == null)`.
- `x2` is `d`
- `e2` is `daysOfWeek`
- `v` is `new myObject ...`
So put it all together:
```
repository.SpeedGoalsRepository.DbSet
.Where(e => e.ID == null)
.SelectMany(
g => daysOfWeek,
(g, d) => new myObject { ... } )
```
and we're done.
|
R: data-time objects in data frame
```
> my.lt <- strptime("2003-02-05 03:00:02", format="%Y-%m-%d %H:%M:%S")
> x <- data.frame(d=my.lt)
> class(x$d)
[1] "POSIXct" "POSIXt"
```
I don't know why data.frame changed x$d from a POSIXlt object to a POSIXct one. Now if I do
```
> x$d = my.lt
```
Then I got what I want, but this is ugly. Can anybody tell me 1) why this happened; and 2) how to initialize a data frame with one of its columns being a POSIXlt in a neat way.
Thank you.
|
As it says in the 3rd paragraph of the Details section of `?data.frame`:
>
> ‘data.frame’ converts each of its arguments to a data frame by calling ‘as.data.frame(optional=TRUE)’.
>
>
>
That means `as.data.frame.POSIXlt` is being called. It's defined as:
```
function (x, row.names = NULL, optional = FALSE, ...)
{
value <- as.data.frame.POSIXct(as.POSIXct(x), row.names,
optional, ...)
if (!optional)
names(value) <- deparse(substitute(x))[[1L]]
value
}
```
So that's why it happened. I can't think of a clean way to do it using the `data.frame` constructor, but here is a bit of a kludge:
```
x <- data.frame(d=as.data.frame.vector(my.lt))
```
This converts your `POSIXlt` object to a data.frame using the `vector` method. If you *really* want to confuse yourself later, you can also use the `POSIXct` method:
```
x <- data.frame(d=as.data.frame.POSIXct(my.lt))
str(x)
# 'data.frame': 1 obs. of 1 variable:
# $ my.lt: POSIXlt, format: "2003-02-05 03:00:02"
```
|
How to fix legacy code that uses unsafe string functions?
We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Political issues have said that the most recent compiler allowed is VC++ 2005.
When compiling the project, there are many warnings about the unsafe string manipulation functions used (`sprintf()`, `strcpy()`, etc). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warning recommends that we move to using `sprintf_s()`, `strcpy_s()`, etc. However, these are Microsoft-created (and proprietary) functions and aren't available on (say) gcc (although we're primarily a Windows shop we do have some clients on various flavors of \*NIX)
How ought we to proceed? I don't want to roll our own string libraries. I only want to go over the code once. I'd rather not switch to C++ if we can help it.
|
First note that those "safe" functions are only marginally safer than the functions they replace. Most of those "safe" functions take an extra parameter informing it of the size of the destination buffer (which they use to check for overflows) and they check that no null pointers are passed where they shouldn't.
This can catch certain errors, but not all of them and you could make an error in passing the wrong buffer size.
If you don't want to switch to the secure functions, or you can't due to portability requirements, then I would advise the following path:
- Compile your code with Visual Studio with the deprecation warnings enabled
- For each string manipulation function that gets flagged, evaluate if there really is a possibility of a buffer overflow or null pointer being passed.
- Where potential problems lie, modify the surrounding code to ensure the error can't occur.
- When you are sure that all string handling functions are used as securely as possible, add a `_CRT_SECURE_NO_WARNINGS` define to your Visual Studio project to suppress the deprecation warnings.
As the code has been used successfully for a large number of years, I suspect that there are few places with problematic uses of the string functions.
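To illustrate the "modify the surrounding code" step, here is a minimal, portable sketch (the function name and the truncation policy are only an example) of how a flagged `strcpy()` call can be hardened without the `_s` variants:
```
#include <string.h>

/* Example only: copy at most dest_size-1 characters and always terminate,
 * instead of calling strcpy() blindly. */
static void copy_string_checked(char *dest, size_t dest_size, const char *src)
{
    if (dest == NULL || dest_size == 0)
        return;

    if (src == NULL) {
        dest[0] = '\0';
        return;
    }

    if (strlen(src) < dest_size) {
        strcpy(dest, src);              /* safe: length verified above */
    } else {
        memcpy(dest, src, dest_size - 1);
        dest[dest_size - 1] = '\0';     /* truncate rather than overflow */
    }
}
```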
|
Stop highlighting selected item WPF ComboBox
I'm currently working on a WPF UI and I have a combobox on my window.
So I want the user to be able to select an item from this combobox but when it is selected
I don't want it to be highlighted in the default blue colour.
I assume there is some way to stop this in XAML but I have not been able to find the solution so far.
Thanks.
P.S. I don't have access to Expression Blend so if anyone suggests a solution could it be in XAML
EDIT: Just to make it clearer: by selected I mean once you have selected a value and the SelectionChanged event has fired, the item is displayed in the combobox and the combo box is highlighted like so:

|
You need to set the appearance of your selection via styles.
```
<Window.Resources>
<Style TargetType="{x:Type ComboBoxItem}">
<Setter Property="Control.Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type ComboBoxItem}">
<Border Background="{TemplateBinding Background}">
<Grid>
<Grid.RowDefinitions>
<RowDefinition />
<RowDefinition />
</Grid.RowDefinitions>
<Border Margin="2" Grid.Row="0" Background="Azure" />
<ContentPresenter />
</Grid>
</Border>
<ControlTemplate.Triggers>
<Trigger Property="IsMouseOver" Value="True">
<Setter Property="Background" Value="Green" />
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Window.Resources>
```
This style will be automatically applied to any ComboBoxes within the window:
```
<StackPanel>
<ComboBox>
<ComboBoxItem>111</ComboBoxItem>
<ComboBoxItem>222</ComboBoxItem>
<ComboBoxItem>333</ComboBoxItem>
<ComboBoxItem>444</ComboBoxItem>
<ComboBoxItem>555</ComboBoxItem>
</ComboBox>
</StackPanel>
```
You will see it as follows:

**UPD:** In order to remove the highlighting from the selected item, you need to override the system brushes that are actually used for this purpose. Just add two extra resources:
```
<SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" Color="Transparent"/>
<SolidColorBrush x:Key="{x:Static SystemColors.HighlightTextBrushKey}" Color="Black"/>
```
|
Python: on linux, subprocess.Popen() works weird with shell = True
If I execute following python code on Windows:
```
import subprocess
subprocess.Popen( [ 'python', 'foo' ], shell = True ).communicate()
```
I got error written into stdout, as expected:
```
python: can't open file 'foo': [Errno 2] No such file or directory
```
But if I execute the same code on Linux (Ubuntu, OSX - any) I get an interactive Python REPL started instead of this text! Like this:
```
user@debian:~/Documents$ python test.py
Python 2.7.3 (default, Jan 2 2013, 16:53:07)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information
>>>
```
Why such strange behaviour? Executing the python interpreter with an argument ('foo') should put it into evaluation mode on all platforms, not into REPL mode.
|
This is spelled out in the documentation:
>
> The shell argument (which defaults to False) specifies whether to use the shell as the program to execute. **If shell is True, it is recommended to pass args as a string rather than as a sequence.**
>
>
> **On Unix with shell=True**, the shell defaults to /bin/sh. If args is a string, the string specifies the command to execute through the shell. This means that the string must be formatted exactly as it would be when typed at the shell prompt. This includes, for example, quoting or backslash escaping filenames with spaces in them. *If args is a sequence, the first item specifies the command string*, and **any additional items will be treated as additional arguments to the shell itself.**
>
>
>
(emphasis mine)
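In other words, a minimal sketch of the two usual fixes:
```
import subprocess

# Either pass the whole command as one string when shell=True ...
subprocess.Popen('python foo', shell=True).communicate()

# ... or, usually better, drop shell=True and keep the argument list.
subprocess.Popen(['python', 'foo']).communicate()
```
Both forms actually run `python foo`, so you get the expected "can't open file 'foo'" error on every platform instead of an interactive interpreter.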
|
How to test if @Valid annotation is working?
I have the following unit test:
```
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {EqualblogApplication.class})
@WebAppConfiguration
@TestPropertySource("classpath:application-test.properties")
public class PostServiceTest {
// ...
@Test(expected = ConstraintViolationException.class)
public void testInvalidTitle() {
postService.save(new Post()); // no title
}
}
```
The code for `save` in `PostService` is:
```
public Post save(@Valid Post post) {
return postRepository.save(post);
}
```
The `Post` class is marked with `@NotNull` in most fields.
The problem is: **no validation exception is thrown**.
However, this happens only in testing. **Using the application normally runs the validation and throws the exception.**
Note: I would like to do it automatically (on save) and not by manually validating and then saving (since it's more realistic).
|
This solution works with Spring 5. It should work with Spring 4 as well. (I've tested it on Spring 5 and SpringBoot 2.0.0).
There are three things that have to be there:
1. in the test class, provide a bean for method validation (PostServiceTest in your example)
Like this:
```
@TestConfiguration
static class TestContextConfiguration {
@Bean
public MethodValidationPostProcessor bean() {
return new MethodValidationPostProcessor();
}
}
```
2. in the class that has @Valid annotations on its methods, you also need to annotate it with @Validated (org.springframework.validation.annotation.Validated) at the class level!
Like this:
```
@Validated
class PostService {
public Post save(@Valid Post post) {
return postRepository.save(post);
}
}
```
3. You have to have a Bean Validation 1.1 provider (such as Hibernate Validator 5.x) in the classpath. The actual provider will be autodetected by Spring and automatically adapted.
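For example, with Maven that could be (the version shown is only an illustration):
```
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.4.3.Final</version>
</dependency>
```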
More details in [MethodValidationPostProcessor documentation](https://docs.spring.io/autorepo/docs/spring-framework/4.2.x/javadoc-api/org/springframework/validation/beanvalidation/MethodValidationPostProcessor.html)
Hope that helps
|
JavaScript: Difference between toString() and toLocaleString() methods of Date
I am unable to understand the difference between the `toString()` and `toLocaleString()` methods of a `Date` object in JavaScript. One thing I know is that `toString()` will automatically be called whenever the `Date` object needs to be converted to a string.
The following code returns identical results always:
```
var d = new Date();
document.write( d + "<br />" );
document.write( d.toString() + "<br />" );
document.write( d.toLocaleString() );
```
And the output is:
```
Tue Aug 14 2012 08:08:54 GMT+0500 (PKT)
Tue Aug 14 2012 08:08:54 GMT+0500 (PKT)
Tue Aug 14 2012 08:08:54 GMT+0500 (PKT)
```
|
<https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Date/toLocaleString>
Basically, it formats the Date to how it would be formatted on the computer where the function is called, e.g. Month before Day in US, Day before Month in most of the rest of the world.
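For instance (the output shown is only indicative; it depends on the host machine's locale and time zone):
```
var d = new Date(2012, 7, 14, 8, 8, 54);

console.log(d.toString());       // e.g. "Tue Aug 14 2012 08:08:54 GMT+0500 (PKT)"
console.log(d.toLocaleString()); // e.g. "8/14/2012 8:08:54 AM" on a US-English machine,
                                 // but "14/08/2012 08:08:54" on a British one
```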
**EDIT:**
Because some others pointed out that the above reference isn't necessary reliable, how's this from the [ECMAScript spec](http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf):
>
> **`15.9.5.2 Date.prototype.toString ( )`**
>
>
>
> >
> > This function returns a String value. The contents of the String are implementation-dependent, but are intended to represent the Date in the current time zone in a convenient, human-readable form.
> >
> >
> >
>
>
> **`15.9.5.5 Date.prototype.toLocaleString ( )`**
>
>
>
> >
> > This function returns a String value. The contents of the String are implementation-dependent, but are intended to represent the Date in the current time zone in a convenient, human-readable form that corresponds to the conventions of the host environment's current locale.
> >
> >
> >
>
>
>
Since you can hopefully assume that most implementations will reflect the specification, the difference is that `toString()` is just required to be readable, while `toLocaleString()` should be readable in a format that matches the user's expectations based on their locale.
|
Can hackers enable the camera after the user disabled it?
A person I know got a spam email saying to pay a ransom of 1000$ for his personal information. The spammer sent him an old password he used in an old email as proof; the email is linked to his facebook account.
He is afraid the hacker had access to his camera and his personal information and info related to work. Is there a way the hacker could have enabled the camera and started recording after the user disabled it?
Note that the laptop doesn't have any real protection, just Windows Defender and firewall. He found the webcam was enabled after disabling it. Also, can windows enable the camera on its own after an update?
|
**These emails are all scams**. I get them and I don't even *have* a webcam on my desktop.
---
**In theory a dedicated attacker *could* have done this**, if they got in and used a privilege-escalation exploit to get kernel access.
But AFAIK typically randomly-targeted attacks for extortion purposes aren't going to be worth the risk of [burning an unknown / unpatched 0-day exploit](https://security.stackexchange.com/questions/187912/what-does-it-mean-to-burn-a-zero-day/187913), so unless your computer doesn't get security updates, the chances of a casual attacker actually getting in are basically zero. And the amount of effort they'd have to put in (downloading and watching videos of random people from the camera) to actually find people doing anything embarrassing is way higher than just making stuff up!
So the real risk is if you are a high-value target for some attacker, like maybe they want to read papers on your desk near your computer. Or the screen of another nearby device. They wouldn't be sending you blackmail emails about it.
---
There are no physical interlocks that would prevent the camera from being enabled without a physical keyboard press or physical mouse click. It's all software. AFAIK it's usually fairly secure software, behind multiple layers of protection (like it would require a kernel exploit to silently enable the camera without user interaction).
**The only way to be *sure* is to physically cover the lens of your camera, and/or not point it at yourself when not using it.** Just like the only way to be *sure* your computer isn't cracked is to keep it powered off (physically unplugged), and preferably encased in concrete, at the bottom of the ocean.
---
I think I've seen some laptops with a flap you can slide over the camera, possibly to protect the lens from dirt, or maybe privacy was one of the intended uses. Some stand-alone USB webcams have a physical lens-cover slider or iris.
Built-in microphones are more insidious because they're not directional.
|
How to create "remember me checkbox" using Codeigniter session library?
In CodeIgniter I am building an authentication system for my web site, and to achieve that I use the session library
```
session->set_userdata('username')
```
this will save the session -I believe- for some time
I want to provide a "remember me" checkbox in the login form so the user can save the session forever - could not find a way to save the session forever!?
**Note:** $sess\_expiration will not work because it sets the expiration date for all users, and what I want to do is set the expiration date based on the user's preference
is that possible? and how to do it?
Thanks
|
If the remember me checkbox is checked you set a cookie on the user's system with a random string. E.g.:
```
$cookie = array(
'name' => 'remember_me_token',
'value' => 'Random string',
'expire' => '1209600', // Two weeks
'domain' => '.your_domain.com',
'path' => '/'
);
set_cookie($cookie);
```
You also save this random string in the `users` table, e.g. in the column `remember_me_token`.
Now, when a user (who is not yet logged in) tries to access a page that requires authentication:
- you check if there is a cookie by the name of `remember_me_token` on his system
- if it's there, you check the database if there is a record with the same value
- if so, you recreate this user's session (this means they are logged in)
- show the page they were visiting
If one of the requirements above is not met, you redirect them to the login page.
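A rough sketch of that check inside a controller (the table and column names follow the example above, and the `username` column is assumed, so adapt it to your own schema):
```
// Runs before protected pages, e.g. in a base controller's constructor.
if (!$this->session->userdata('username')) {
    $token = get_cookie('remember_me_token');
    if ($token) {
        $user = $this->db
            ->get_where('users', array('remember_me_token' => $token))
            ->row();
        if ($user) {
            // Recreate the session, i.e. log the user in again.
            $this->session->set_userdata('username', $user->username);
        }
    }
}
```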
For security reasons you may want to renew the random `remember_me_token` every time the user logs in. You can also update the expiry date of the cookie every time the user logs in. This way he will stay logged in.
It would be too much work to write all the code for you, but I hope this helps you to implement this functionality yourself. Please comment if you have any questions. Good luck.
|
Transpose 3D Numpy Array
Trying to transpose each numpy array in my numpy array.
Here is an example of what I want:
A:
```
[[[ 1 2 3]
[ 4 5 6]]
[[ 7 8 9]
[10 11 12]]]
```
A Transpose:
```
[[[ 1 4]
[ 2 5]
[ 3 6]]
[[ 7 10]
[ 8 11]
[ 9 12]]]
```
Tried doing this using the np.apply\_along\_axis function but was not getting the correct results. I am trying to apply this to a very large array and any help would be greatly appreciated!
```
A=np.arange(1,13).reshape(2,2,3)
A=np.apply_along_axis(np.transpose, 0, A)
```
|
You need to swap the second and third axes, for which you can use either `np.swapaxes`:
```
A.swapaxes(1,2)
#array([[[ 1, 4],
# [ 2, 5],
# [ 3, 6]],
# [[ 7, 10],
# [ 8, 11],
# [ 9, 12]]])
```
or `transpose`:
```
A.transpose(0,2,1)
#array([[[ 1, 4],
# [ 2, 5],
# [ 3, 6]],
# [[ 7, 10],
# [ 8, 11],
# [ 9, 12]]])
```
|
Should I spend my effort implementing knockoutjs or look into jQuery Data Link
I've recently been using Steve Sanderson's knockout js library <http://knockoutjs.com/> in my client side web development. I just recently found out that microsoft has contributed code to jQuery for a Data Link plugin that seems to duplicate what I like about knockout.
<http://weblogs.asp.net/scottgu/archive/2010/10/04/jquery-templates-data-link-and-globalization-accepted-as-official-jquery-plugins.aspx>
<https://github.com/jquery/jquery-datalink>
<http://api.jquery.com/category/plugins/data-link/>
Should I scrap my knockout code and go with the embraced and extended jQuery?
|
I'd stick with Knockout personally -- it's already [been demonstrated to work quite well](http://blog.stevensanderson.com/2010/10/20/knockout-110-new-project-site-launched/), it's in active development, and it knocks data-link off the charts when you compare features. In short, Knockout seems ready for prime time, while data-link feels unfinished.
(I stress tested Knockout by having it data-bind a dropdown to an array of 5,000 items, while also calculating the number of unique items in the array and adding that to another data-bound element. My calculations and the re-painting of the browser chrome took far longer than the data-binding and updating.)
Now, that being said, I would keep an eye on data-link and probably continue to play around with it -- if it gets off the ground, it will be a very viable alternative. (Given the success of jquery-tmpl, the other major piece to come out of the Microsoft-Jquery cooperation.)
The better one [to compare Knockout to](http://news.ycombinator.com/item?id=1810665) would be documentcloud's [`backbone`](http://documentcloud.github.com/backbone/). I'm looking into backbone next week, so I don't have any good recommendations for now, but I've been *very* impressed with [underscore](http://documentcloud.github.com/underscore/) (another of their projects), so I would definitely recommend looking into it as an alternative.
---
### The score 4 years later, *for those who are wondering*
Knockout has been used in production on all kinds of projects for years, is now at version 3, and has a healthy ecosystem around it. jQuery.datalink became [JSViews](https://github.com/BorisMoore/jsviews), which is still in beta.
|
How to run .apk file generated using Cordova on device instead of emulator?
I am using Sencha Touch 2.3.1 for developing cross platform application and to deploy it to native platforms I am using Cordova 3.3.0
I followed [this](http://sensai-touch.com/building-an-ios-and-android-app-with-sencha-touch-2-3-and-cordova/) tutorial. Following the tutorial I was able to run the .apk file in the emulator. Now I want to run the application on my device. Even though I connect my device to the laptop using a usb, I am not able to run the application on my device. Any help will be appreciated.
|
To have the project run on your device instead of in the emulator, you basically only have to plug in your device before you run `cordova run android`.
Cordova will run the emulator only if it can't find a device connected.
Before you run `cordova run android`, I advise you to run the command `adb devices` to check if your device is found by the android sdk tools.
If it's not, you need to enable USB debugging on the device and install an adb driver for your device.
## Edit:
As this answer seems to get up votes, I think it would be a good idea to improve it a little...
There's actually a `--device` option you can add when you run `cordova run android` to force it to start on the device (without that, an emulator is started if no device is found, which can sometimes be annoying).
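In short, the whole flow is just:
```
# Check that the connected device is visible to the Android SDK tools
adb devices

# Deploy and launch on the device (skipping the emulator entirely)
cordova run android --device
```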
|
Where I can find a detailed comparison of Java XML frameworks?
I'm trying to choose an XML-processing framework for my Java projects, and I'm lost in names.. XOM, JDOM, etc. Where I can find a detailed comparison of all popular Java XML frameworks?
|
As Blaise pointed out stick with the standards. But there are multiple standards created over the period to solve different problems/usecases. Which one to choose completely depends upon your requirement. I hope the below comparison can help you choose the right one.
Now there are **two things** you have to choose. **API** and the **implementations of the API** (there are many)
## API
**SAX: Pros**
- event based
- memory efficient
- faster than DOM
- supports schema validation
**SAX: Cons**
- No object model, you have to tap into the events and create it yourself
- Single parse of the xml and can only
go forward
- read only api
- no xpath support
- little bit harder to use
**DOM: Pros**
- in-memory object model
- preserves element order
- bi-directional
- read and write api
- xml MANIPULATION
- simple to use
- supports schema validation
**DOM: Cons**
- memory hog for larger XML documents
(typically used for XML documents
less than 10 mb)
- slower
- generic model i.e. you work with Nodes
**Stax: Pros**
- Best of SAX and DOM i.e. Ease of DOM
and efficiency of SAX
- memory efficient
- Pull model
- read and write api
- [supports subparsing](http://tutorials.jenkov.com/java-xml/sax-vs-stax.html)
- can read multiple documents at the same time in a single thread
- parallel processing of XML is easier
**Stax: Cons**
- no schema validation support (as far
as I remember, not sure if they have
added it now)
- can only go forward like sax
- no xml MANIPULATION
**JAXB: Pros**
- allows you to access and process XML
data without having to know XML
- bi-directional
- more memory efficient than DOM
- SAX and DOM are generic parsers, whereas JAXB creates a parser specific to your XML Schema
- data conversion: JAXB can convert xml
to java types
- supports XML MANIPULATION via object
API
**JAXB: Cons**
- can only parse valid XML
**TrAX:** For transforming XML from one form to another using XSLT
## Implementations
SAX, DOM, Stax, JAXB are just specifications. There are many [open source and commercial implementations of these specifications](http://www.edankert.com/jaxpimplementations.html). Most of the time you can just stick with what comes with the JDK or your application server. But sometimes you need to use a different implementation than the one provided by default. And this is where you can appreciate the [JAXP wrapper api](http://jaxp.java.net/). **JAXP** allows you to switch implementations through configuration without the need to modify your code. It also provides a parser/spec independent api for parsing, transformation, validation and querying XML documents.
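For instance, a tiny JAXP snippet (the file name is just a placeholder) that parses a document without naming any particular parser implementation:
```
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class JaxpExample {
    public static void main(String[] args) throws Exception {
        // JAXP picks the DOM parser implementation at runtime
        // (the JDK default, or whatever is configured on the classpath).
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new File("example.xml"));
        System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
    }
}
```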
**Performance and other comparisons of various implementations**
- Stax:
<http://java.sun.com/performance/reference/whitepapers/StAX-1_0.pdf>
- Most of the API:
<http://www.ibm.com/developerworks/xml/library/x-databdopt2/>
- JAXB versus XmlBeans discussion on SO: [JAXB vs Apache XMLBeans](https://stackoverflow.com/questions/1362030/jaxb-vs-apache-xmlbeans)
- <http://www.ibm.com/developerworks/xml/library/x-injava/>
---
Now standards are good, but once in a while you encounter a crazy usecase where you have to support parsing of an XML document that is **100 gigabytes** in size, or you need **ultra fast** processing of XML (maybe you are implementing an XML parser chip), and this is when you need to dump the standards and look for a different way of doing things. It's about using the right tool for the right job! And this is where I suggest you have a look at [**vtd-xml**](http://vtd-xml.sourceforge.net/)
During the initial days of SAX and DOM, people wanted simpler API's than provided by either of them. [JDOM](http://www.jdom.org/), [dom4j](http://www.dom4j.org/), [XmlBeans](http://xmlbeans.apache.org/), [JiBX](http://jibx.sourceforge.net/), [Castor](http://www.castor.org/) are the ones I know that became popular.
|
Getting the target of a symbolic link with pathlib
Is there a way to get the target of a symbolic link using `pathlib`? I know that this can be done using `os.readlink()`.
I want to create a dictionary composed by links and their target files.
```
links = [link for link in root.rglob('*') if link.is_symlink()]
files = [Path(os.readlink(str(pointed_file))) for pointed_file in links]
```
**Edit** ... and I want to filter out all paths that are not absolute
```
link_table = {link : pointed_file for link, pointed_file in zip(links, files) if pointed_file.is_absolute()}
```
|
Update: Python 3.9 introduced `Path.readlink()` method, so explanations below apply to earlier releases only.
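A quick sketch covering both routes (the link path is just an example):
```
import os
from pathlib import Path

link = Path('some_symlink')

if hasattr(link, 'readlink'):              # Python 3.9+
    target = link.readlink()               # returns a Path
else:                                      # earlier versions
    target = Path(os.readlink(str(link)))

print(target, target.is_absolute())
```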
Below Python 3.9, it's not possible to get the result from `pathlib` that `os.readlink()` gives. `Path.resolve()` doesn't work for broken links and either raises `FileNotFoundError` (Python <3.5) or returns a potentially bizarre path (Python 3.6). One can still get the old behaviour with `Path.resolve(True)`, though that means incompatibility between versions (not mentioned in the package documentation nor in the Porting section of the Py3.6 release document, by the way).
Also, regarding the updated question about absolute paths, `os.readlink()` is the only way to check whether a symbolic link is an absolute or a relative one. `Path.relative_to()`, as the name says, transforms an existing Path into a relative one, without checking whether the symbolic link's destination path starts with '/' or not. The same applies to `Path.is_absolute()`. Finally, `Path.resolve()` for existing targets transforms the destination path, so it destroys the required information along the way.
And by *bizarre path* above I mean, let's say in `/home/test` there's symlink `.myapp` pointing to `.config/myapp/config`. If `.config/myapp` doesn't exist and the code is written in Py<=3.5, `Path.resolve()` would raise an exception and the app could inform a user about the missing file. Now if called from Py3.6 without code changes, it resolves to `~/.config/myapp`, the exception is not thrown, so check around `.resolve()` passes, but then probably another exception will be thrown later when the app will try to open the file for reading, so the user may get message that `~/config/myapp` file is not found, though that's not really the one missing here. It may be even worse – when app would do `Path('/home/test/.myapp').resolve().open('w')` (not necessarily in one step, but let's say as part of some sanitization process), then simply wrong file is created. Of course next time the app is called, path resolves one level deeper, to `/home/test/.config/myapp/config` (as `Path.resolve()` doesn't check if `myapp` is a directory or not), and both reading and writing will fail with a `NotADirectoryError` exception (with a little misleading `"Not a directory: /home/test/.config/myapp/config"` as a description…).
|
Flutter: Is it Possible to have Multiple FutureBuilders or A FutureBuilder for Multiple Future Methods?
So let's say I'm fetching different Lists from different URLs,
so the Future functions will look like this
```
Future<List<City>> getCityDetails(Region selectedRegion) async {}
Future<List<Region>> getregionDetails() async {}
```
thus the Widget Builder will look like this
```
FutureBuilder<List<Region>>(
future: getregionDetails(),
builder: (context, snapshot) {}
```
Is there a way or workaround for this? I'm trying to find out how to implement this, or is there another way?
|
I think you are looking for something like [this](https://gist.github.com/Rahiche/33b488618ab7f7879f645e4a9d2905c6)
```
import 'dart:async';
import 'dart:convert';
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
void main() => runApp(MaterialApp(
home: MyApp(),
));
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
Widget createRegionsListView(BuildContext context, AsyncSnapshot snapshot) {
var values = snapshot.data;
return ListView.builder(
itemCount: values.length,
itemBuilder: (BuildContext context, int index) {
return values.isNotEmpty
? Column(
children: <Widget>[
ListTile(
title: Text(values[index].region),
),
Divider(
height: 2.0,
),
],
)
: CircularProgressIndicator();
},
);
}
Widget createCountriesListView(BuildContext context, AsyncSnapshot snapshot) {
var values = snapshot.data;
return ListView.builder(
itemCount: values == null ? 0 : values.length,
itemBuilder: (BuildContext context, int index) {
return GestureDetector(
onTap: () {
setState(() {
selectedCountry = values[index].code;
});
print(values[index].code);
print(selectedCountry);
},
child: Column(
children: <Widget>[
new ListTile(
title: Text(values[index].name),
selected: values[index].code == selectedCountry,
),
Divider(
height: 2.0,
),
],
),
);
},
);
}
final String API_KEY = "03f6c3123ea549f334f2f67c61980983";
Future<List<Country>> getCountries() async {
final response = await http
.get('http://battuta.medunes.net/api/country/all/?key=$API_KEY');
if (response.statusCode == 200) {
var parsedCountryList = json.decode(response.body);
List<Country> countries = List<Country>();
parsedCountryList.forEach((country) {
countries.add(Country.fromJSON(country));
});
return countries;
} else {
// If that call was not successful, throw an error.
throw Exception('Failed to load ');
}
}
Future<List<Region>> getRegions(String sl) async {
final response = await http
.get('http://battuta.medunes.net/api/region/$sl/all/?key=$API_KEY');
if (response.statusCode == 200) {
var parsedCountryList = json.decode(response.body);
List<Region> regions = List<Region>();
parsedCountryList.forEach((region) {
regions.add(Region.fromJSON(region));
});
return regions;
} else {
// If that call was not successful, throw an error.
throw Exception('Failed to load ');
}
}
String selectedCountry = "AF";
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(),
body: Row(children: [
Expanded(
child: FutureBuilder(
future: getCountries(),
initialData: [],
builder: (context, snapshot) {
return createCountriesListView(context, snapshot);
}),
),
Expanded(
child: FutureBuilder(
future: getRegions(selectedCountry),
initialData: [],
builder: (context, snapshot) {
return createRegionsListView(context, snapshot);
}),
),
]),
);
}
}
class Country {
String name;
String code;
Country({this.name, this.code});
factory Country.fromJSON(Map<String, dynamic> json) {
return Country(
name: json['name'],
code: json['code'],
);
}
}
class Region {
String country;
String region;
Region({this.country, this.region});
factory Region.fromJSON(Map<String, dynamic> json) {
return Region(
region: json["region"],
country: json["country"],
);
}
}
```
[](https://i.stack.imgur.com/ohnZk.gif)
|
How to de-serialize a Map with GSON
I am fairly new to GSON and get a JSON response of this format (just an easier example, so the values make no sense):
```
{
"Thomas": {
"age": 32,
"surname": "Scott"
},
"Andy": {
"age": 25,
"surname": "Miller"
}
}
```
I want GSON to make it a `Map<String, PersonData>`; PersonData is obviously an Object. The name string is the identifier for the PersonData.
As I said I am very new to GSON and only tried something like:
```
Gson gson = new Gson();
Map<String, PersonData> decoded = gson.fromJson(jsonString, new TypeToken<Map<String, PersonData>>(){}.getType());
```
but this threw the error:
```
Exception in thread "main" com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was STRING at line 1 column 3141
```
Any help is appreciated :)
|
The following works for me
```
static class PersonData {
int age;
String surname;
public String toString() {
return "[age = " + age + ", surname = " + surname + "]";
}
}
public static void main(String[] args) {
String json = "{\"Thomas\": {\"age\": 32,\"surname\": \"Scott\"},\"Andy\": {\"age\": 25,\"surname\": \"Miller\"}}";
System.out.println(json);
Gson gson = new Gson();
Map<String, PersonData> decoded = gson.fromJson(json, new TypeToken<Map<String, PersonData>>(){}.getType());
System.out.println(decoded);
}
```
and prints
```
{"Thomas": {"age": 32,"surname": "Scott"},"Andy": {"age": 25,"surname": "Miller"}}
{Thomas=[age = 32, surname = Scott], Andy=[age = 25, surname = Miller]}
```
So maybe your `PersonData` class is very different.
|
Why use causal inference if coefficients are the same in an OLS?
I was reading this [amazing article](https://towardsdatascience.com/the-fwl-theorem-or-how-to-make-all-regressions-intuitive-59f801eb3299) about the FWL theorem and its application to causal inference.
In the article, there are some examples showing that the coefficients of an OLS estimator are the same when estimating them using the FWL theorem. If that's the case, what is the point of using causal inference to reduce multivariate regressions into univariate ones?
The article mentions a couple of reasons, but if the coefficient in question is the same for both approaches, I'm having difficulty seeing the benefit of it. TIA!
|
## Causal inference is all about what to estimate, not about how to estimate it
The point of causal inference is not to reduce multivariate regressions into univariate ones. The point of causal inference is to identify **what** estimand to estimate to begin with.
The article in question gives you multiple ways of running the same regression. That's great, but how would one know what regression to run in the first place? That's the question causal graphical models help answer, and that is also how they are used in the article you cited.
Without the causal graph shown in their example, we would not know what to include in our regression model, and the FWL theorem could not help us. Once we are given the causal graph, we know what we would like to estimate and we can then use different techniques (e.g. FWL) to do so.
|