What is the difference between SequenceType and CollectionType in Swift?
Please explain the difference between `SequenceType`, `GeneratorType` and `CollectionType` in the Swift programming language.
Also, if I am implementing my own data structure what would be the advantage of using `SequenceType`, `GeneratorType` or `CollectionType` protocols?
|
**[GeneratorType](http://swiftdoc.org/v2.2/protocol/GeneratorType/) ([IteratorProtocol in Swift 3](https://developer.apple.com/reference/swift/iteratorprotocol)):** A `Generator` is something that can produce the `next` element of some sequence; when there are no elements left, it returns `nil`. A `Generator` encapsulates the iteration state and the interface for iterating over a sequence.
A generator works by providing a single method, `next()`, which simply returns the next value from the underlying `sequence`.
**The following types adopt the GeneratorType protocol:**
[DictionaryGenerator](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_DictionaryGenerator_Structure/index.html#//apple_ref/swift/struct/s:Vs19DictionaryGenerator), [EmptyGenerator](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_EmptyGenerator_Structure/index.html#//apple_ref/swift/struct/s:Vs14EmptyGenerator), more [here](https://developer.apple.com/reference/swift/iteratorprotocol#relationships).
---
**[SequenceType](http://swiftdoc.org/v2.2/protocol/SequenceType/) ([Sequence in Swift 3](https://developer.apple.com/reference/swift/sequence)):** A `Sequence` represents a series of values. A `Sequence` is a type that can be iterated over with a `for...in` loop.
Essentially a sequence is a generator factory; something that knows how to make generators for a sequence.
**The following classes adopt the SequenceType protocol:**
[NSArray](https://developer.apple.com/library/tvos/documentation/Cocoa/Reference/Foundation/Classes/NSArray_Class/index.html#//apple_ref/swift/cl/c:objc(cs)NSArray), [NSDictionary](https://developer.apple.com/library/tvos/documentation/Cocoa/Reference/Foundation/Classes/NSDictionary_Class/index.html#//apple_ref/swift/cl/c:objc(cs)NSDictionary), [NSSet](https://developer.apple.com/library/tvos/documentation/Cocoa/Reference/Foundation/Classes/NSSet_Class/index.html#//apple_ref/swift/cl/c:objc(cs)NSSet) and [more](https://developer.apple.com/reference/swift/sequence#relationships).
---
**[CollectionType](http://swiftdoc.org/v2.2/protocol/CollectionType/) ([Collection in Swift 3](https://developer.apple.com/reference/swift/collection)):** `Collection` is a `SequenceType` that can be accessed via subscript and defines a `startIndex` and `endIndex`. `Collection` is a step beyond a sequence; individual elements of a collection can be accessed multiple times.
`CollectionType` inherits from `SequenceType`.
**The following types adopt the CollectionType protocol:**
[Array](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_Array_Structure/index.html#//apple_ref/swift/struct/s:Sa), [Dictionary](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_Dictionary_Structure/index.html#//apple_ref/swift/struct/s:Vs10Dictionary), [Set](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_Set_Structure/index.html#//apple_ref/swift/struct/s:Vs3Set), [Range](https://developer.apple.com/library/tvos/documentation/Swift/Reference/Swift_Range_Structure/index.html#//apple_ref/swift/struct/s:Vs5Range) and [more](https://developer.apple.com/reference/swift/collection#relationships).
---
For more information, see [this](http://nshipster.com/swift-collection-protocols/), [this](http://iosdeveloperzone.com/2014/10/13/swift-standard-library-generators-sequences-and-collections/), and [this](http://austinzheng.com/2015/01/24/swift-seq/).
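To see how the three protocols fit together when implementing your own data structure, here is a minimal sketch (Swift 2 naming; `Countdown` is a made-up type):

```swift
// A generator that counts down to zero, then returns nil.
struct CountdownGenerator: GeneratorType {
    var current: Int
    mutating func next() -> Int? {
        if current < 0 { return nil }
        defer { current -= 1 }  // decrement after returning the current value
        return current
    }
}

// The sequence is a generator factory: it knows how to make fresh generators.
struct Countdown: SequenceType {
    let start: Int
    func generate() -> CountdownGenerator {
        return CountdownGenerator(current: start)
    }
}

for i in Countdown(start: 3) {
    print(i) // 3, 2, 1, 0
}
```

Adopting `CollectionType` on top of this would additionally require `startIndex`, `endIndex`, and a subscript, in exchange for multi-pass, random access to elements.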
|
WP REST API + AngularJS: How to grab the Featured Image for display on a page?
I am accessing WordPress data through an HTTP REST API plugin (this WordPress plugin: <http://v2.wp-api.org/>). I know how to grab my post title, but how do I display the featured image associated with that post using this plugin? My test shows the post title and the featured image ID, but I am unsure how to display the actual image. [**Test Example**](http://www.ogmda.com/gg/test.html).
Here's my code:
```
<div ng-app="myApp">
<div ng-controller="Ctrl">
<div ng-repeat="post in posts | limitTo: 1">
<h2 ng-bind-html="post.title.rendered"></h2>
<p>{{ post.featured_image }}</p>
</div>
</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.15/angular-sanitize.min.js"></script>
<script>
var app = angular.module('myApp', ['ngSanitize']);
app.controller('Ctrl', function($http, $scope) {
$http.get("http://ogmda.com/wp/wp-json/wp/v2/posts").success(function(data) {
$scope.posts = data;
});
});
</script>
```
|
To get the featured image in the response, add ***\_embed*** to the query string. Example:
<http://demo.wp-api.org/wp-json/wp/v2/posts/?_embed>
Then, access the featured image in the returned JSON response using ***\_embedded['wp:featuredmedia'][0].media\_details.sizes.thumbnail.source\_url***
```
var app = angular.module('myApp', ['ngSanitize']);
app.controller('Ctrl', function($http, $scope) {
$http.get("http://ogmda.com/wp/wp-json/wp/v2/posts?_embed").success(function(data) {
$scope.posts = data;
var firstFeaturedImageUrl = $scope.posts[0]._embedded['wp:featuredmedia'][0].media_details.sizes.thumbnail.source_url;
});
});
```
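Since posts without a featured image won't have a `wp:featuredmedia` entry, it's worth guarding the lookup. A small hypothetical helper in plain JavaScript, using the same JSON path as above:

```javascript
// Returns the thumbnail URL for a post from a ?_embed response,
// or null when the post has no featured media.
function getFeaturedImageUrl(post) {
  var media = post._embedded && post._embedded['wp:featuredmedia'];
  if (!media || !media[0] || !media[0].media_details) return null;
  var sizes = media[0].media_details.sizes;
  return sizes && sizes.thumbnail ? sizes.thumbnail.source_url : null;
}

// Example shape of one post in the response (trimmed to the relevant path):
var post = {
  _embedded: {
    'wp:featuredmedia': [
      { media_details: { sizes: { thumbnail: { source_url: 'http://example.com/a-150x150.jpg' } } } }
    ]
  }
};
console.log(getFeaturedImageUrl(post)); // http://example.com/a-150x150.jpg
console.log(getFeaturedImageUrl({}));   // null
```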
|
React.js throttle mousemove event keeps throwing event.persist() error
I need to throttle the mousemove event, and I followed the tips below to build the method, but it doesn't work:
[Perform debounce in React.js](https://stackoverflow.com/questions/23123138/perform-debounce-in-react-js)
Here is my code (<http://jsbin.com/binesofepo/edit?js,console,output>):
```
class Tool extends Component {
constructor(props) {
super(props);
this._onMouseMove = _.throttle(this._onMouseMove.bind(this), 1000)
}
render() {
return (
<div ref="tool" className="tool">
<div ref="toolBody"
className="tool__body"
onMouseMove={this._onMouseMove}></div>
</div>
)
}
_onMouseMove(e) {
e.persist()
console.log(e.screenX)
}
}
```
If you keep moving the mouse over the `tool__body`, you'll get lots of the warning below:
>
> Warning: This synthetic event is reused for performance reasons. If you're seeing this, you're accessing the property `screenX` on a released/nullified synthetic event. This is set to null. If you must keep the original synthetic event around, use event.persist(). See fb.me/react-event-pooling for more information.
>
>
>
my react version: "15.0.2"
Seems `e.persist()` doesn't work well. Any idea? :D
|
`e.persist()` needs to be called synchronously with the event; the throttled handler can then run asynchronously. Here is a fix:
```
class Tool extends React.Component {
constructor(props) {
super(props);
this._throttledMouseMove = _.throttle(this._throttledMouseMove.bind(this), 2000);
}
_throttledMouseMove = (e) => {
console.log(e.screenX);
}
render() {
return (
<div ref="tool" className="tool">
<div ref="toolBody"
className="tool__body"
onMouseMove={this._onMouseMove}>
</div>
</div>
)
}
_onMouseMove = (e) => {
e.persist();
this._throttledMouseMove(e);
}
}
ReactDOM.render(<Tool/>, document.querySelector('.main'))
```
The relevant change is calling `_onMouseMove` directly from the event and setting up a second method to actually handle the event once it has been throttled.
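For context on why the wrapper is needed: `_.throttle` invokes its callback on the trailing edge via a timer, i.e. asynchronously, after React 15 has already recycled the pooled event, unless `e.persist()` was called. A minimal throttle sketch (an illustration, not lodash's actual implementation) makes the deferred call visible:

```javascript
// Minimal leading+trailing throttle: the trailing call happens later via
// setTimeout -- by then a pooled React event would already be nulled out.
function throttle(fn, wait) {
  var last = 0, timer = null, pendingArgs = null;
  return function () {
    var now = Date.now();
    pendingArgs = arguments;
    if (now - last >= wait) {
      last = now;
      fn.apply(null, pendingArgs);        // leading edge: synchronous
    } else if (!timer) {
      timer = setTimeout(function () {    // trailing edge: asynchronous
        timer = null;
        last = Date.now();
        fn.apply(null, pendingArgs);
      }, wait - (now - last));
    }
  };
}

var calls = [];
var throttled = throttle(function (x) { calls.push(x); }, 1000);
throttled('a'); // fires immediately
throttled('b'); // deferred -- this is the call that needs a persisted event
console.log(calls); // ['a']
```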
|
Why Beta/Dirichlet Regression are not considered Generalized Linear Models?
The premise is this quote from vignette of R package `betareg`[1](https://cran.r-project.org/web/packages/betareg/vignettes/betareg.pdf).
>
> Further-more, the model shares some properties (such as linear
> predictor, link function, dispersion parameter) with generalized
> linear models (GLMs; McCullagh and Nelder 1989), but it is not a
> special case of this framework (not even for fixed dispersion)
>
>
>
[This answer](https://stats.stackexchange.com/a/29042/60613) also makes allusion to the fact:
>
> [...] This is a type of regression model that is appropriate when the
> response variable is distributed as Beta. You can think of it as
> **analogous** to a generalized linear model. It's exactly what you are
> looking for [...] (emphasis mine)
>
>
>
Question title says it all: why is Beta/Dirichlet regression not considered a generalized linear model (or is it)?
---
As far as I know, the Generalized Linear Model defines models built on the expectation of their dependent variables conditional on the independent ones.
$f$ is the link function that maps the expectation, $g$ is a probability distribution, $Y$ the outcomes, $X$ the predictors, $\beta$ the linear parameters, and $\sigma^2$ the variance.
$$f\left(\mathbb E\left(Y\mid X\right)\right) \sim g(\beta X, I\sigma^2)$$
Different GLMs impose (or relax) the relationship between the mean and the variance, but $g$ must be a probability distribution in the exponential family, a desirable property which should improve robustness of the estimation if I recall correctly. The Beta and Dirichlet distributions are part of the exponential family, though, so I'm out of ideas.
---
[[1] Cribari-Neto, F., & Zeileis, A. (2009). Beta regression in R.](https://cran.r-project.org/web/packages/betareg/vignettes/betareg.pdf)
|
Check the original reference:
>
> Ferrari, S., & Cribari-Neto, F. (2004). [Beta regression for modelling
> rates and proportions.](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.8394&rep=rep1&type=pdf) Journal of Applied Statistics, 31(7), 799-815.
>
>
>
as the authors note, the parameters of the re-parametrized beta distribution are correlated, so
>
> Note that the parameters $\beta$ and $\phi$ are not orthogonal, in
> contrast to what is verified in the class of generalized linear
> regression models (McCullagh and Nelder, 1989).
>
>
>
So while the model looks like a GLM and quacks like a GLM, it does not perfectly fit the framework.
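For concreteness, the mean-precision parametrization used by Ferrari and Cribari-Neto writes the beta density as

$$f(y;\mu,\phi)=\frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}\,y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1},\qquad 0<y<1,$$

with $\mathbb E(Y)=\mu$ and $\operatorname{Var}(Y)=\mu(1-\mu)/(1+\phi)$. Both shape parameters, $\mu\phi$ and $(1-\mu)\phi$, mix the mean and precision together, which is where the non-orthogonality of $\beta$ and $\phi$ quoted above comes from.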
|
Discards inside C# Linq queries
I wonder whether it's a good pattern to use discards in LINQ queries, per <https://learn.microsoft.com/en-us/dotnet/csharp/discards>. Example:
```
public bool HasRedProduct => Products.Any(_=>_.IsRed == true);
```
What are the pros / cons compared to using
```
public bool HasRedProduct => Products.Any(x=>x.IsRed == true);
```
|
That isn't a discard - it's a lambda expression parameter called `_`. It falls into the note later in the article:
>
> Note that `_` is also a valid identifier. When used outside of a supported context, `_` is treated not as a discard but as a valid variable.
>
>
>
You can tell it's not a discard because its value *isn't discarded* - you're using it in the rest of the lambda expression. I would *strongly* discourage the use of `_` as a lambda expression parameter name when you *are* using the value. It's fine to use `_` as a parameter name when you *want* to discard it though, even if it's not technically a discard from a language perspective. The name `_` was chosen for discards precisely because that's how it was already being used in practice.
|
MVC3 Validation with ComponentModel.DataAnnotations for UK date format (also using jquery ui datepicker)
I see there are some similar questions to this, but none solve my issue.
I am working on an MVC3 app with Entity Framework 4.3. I have a UK date field that I plan to allow the user to edit using the jQuery UI datepicker (which I got working thanks to [this blog](http://blogs.msdn.com/b/stuartleeks/archive/2011/01/25/asp-net-mvc-3-integrating-with-the-jquery-ui-date-picker-and-adding-a-jquery-validate-date-range-validator.aspx)).
Fortunately for me, this blog includes instructions on making the datepicker use the UK format; however, the EF validation is still telling me that I need to enter a valid date format. Weirdly, it doesn't prevent me from submitting the date to the DB; it's just the unobtrusive validation kicking in and displaying the message.
At the moment I have the following data annotation:
```
[DataType(DataType.Date)]
public System.DateTime Module_Date { get; set; }
```
but i have also tried adding:
```
[DisplayFormat(DataFormatString="{0:dd/MM/yyyy}")]
```
which had no effect at all. I hope someone has a solution, because I don't fancy turning off the unobtrusive validation to stop this error message.
Thanks
**EDIT**
following @Iridio's answer, I looked into adding a model binder, and indeed from the few posts like this one that I read it seemed to be the right thing to do, but what I have come up with has no effect. Here is what I have tried:
```
public class DateTimeBinder : IModelBinder
{
public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
var date = value.ConvertTo(typeof(DateTime), CultureInfo.CurrentCulture);
return date;
}
}
```
with this in the `Application_Start()` method of the `Global.asax.cs` file:
```
ModelBinders.Binders.Add(typeof(DateTime), new DateTimeBinder());
ModelBinders.Binders.Add(typeof(DateTime?), new DateTimeBinder());
```
|
Right, the problem was with the jQuery validate scripts insisting on using the US date format. I will restrain myself from going on a proper rant about the fact that the majority of the world uses dd/mm/yyyy, though.
Anyway, eventually I found the answer to my woes in a comment on an answer to a similar [question](https://stackoverflow.com/questions/7438754/localizing-jquery-validation-with-asp-net-mvc-3), the author of which kindly wrote a [blog post](http://vintem.com.br/en/2011/09/17/globalizacao-de-validacao-do-jquery-com-o-asp-net-mvc-3/) about how he solved the issue.
Basically, I have used the [jQuery globalize script](https://github.com/jquery/globalize) and just set the culture to `en-GB`. I should mention that in his blog he doesn't say where to put the bit where you specify the culture, so I just shoved it in script tags on the page under the references to the globalization scripts:
```
<script src="@Url.Content("~/Scripts/globalize.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/globalize.culture.en-GB.js")" type="text/javascript"></script>
<script type="text/javascript">
Globalize.culture("en-GB");
$.validator.methods.date = function (value, element) {
return this.optional(element) || Globalize.parseDate(value);
};
</script>
```
|
Are the AES legal key sizes really the limit?
The `AesCryptoServiceProvider.LegalKeySizes` field shows you the allowed sizes in bits.
However, what I don't understand is: if those are true, how am I able to successfully use a 2048-bit key length (256 bytes)?
I suppose my real question is: does my key get produced at the size requested (larger than the 32-byte maximum), but then only the first 32 bytes (256 bits) are actually used in the encryption/decryption process, rendering the larger key size a waste of space?
I don't know if there is a way of actually telling from what's exposed in the API...
Any thoughts? Maybe I'm looking at this in the wrong way?
|
AES can be used with 3 key sizes: 128, 192 and 256 bits. Basically, if you are able to use keys larger than 256 bits, then the library is "lying to you", i.e. some bits of the larger key are discarded or compressed somehow. For instance, PHP `mcrypt` simply cuts the key down to the largest possible size.
Larger key "seeds" are rather common in the world of cryptography. For instance Diffie-Hellman - a key agreement algorithm - usually generates a secret larger than the key size required. So the question of *extracting* (concentrating) the amount of entropy in a key often arises. If bits are truncated then the entropy in those bits is discarded.
So what is actually used in modern cryptography is a KDF, a Key Derivation Function. If the input - the seed - is a password, you should utilize a PBKDF (Password Based KDF). Modern PBKDF's are PBKDF2, bcrypt, scrypt and Argon2.
If the input is already a key - data that, taken together, provides enough entropy (randomness) - you should utilize a KBKDF (Key-Based KDF). A modern KBKDF is, for instance, HKDF. Note that these algorithms require additional input, so if no additional data is provided it is most likely that the extra key bits are simply ignored.
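As an illustration of the KBKDF idea, here is a sketch of HKDF (RFC 5869) using only Python's standard library; the oversized 2048-bit "key" from the question is condensed into a legal 256-bit AES key (the names here are illustrative, not from the question's API):

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, length: int, salt: bytes = b"", info: bytes = b"") -> bytes:
    """Condense key material `ikm` into `length` output bytes (HKDF, RFC 5869)."""
    if not salt:
        salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()      # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

seed = os.urandom(256)            # a 2048-bit "key", far larger than AES allows
aes_key = hkdf_sha256(seed, 32)   # condensed to a legal 256-bit AES key
print(len(aes_key))               # 32
```

Unlike silently truncating, this mixes every bit of the seed into the derived key.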
The cryptographic strength of AES-128 is and stays 128 bits, of course. As long as these bits are indistinguishable from random by an attacker, [AES-128 should provide enough security for practical needs](http://www.keylength.com/en/3/). AES-256 could be used if you fear breakthroughs in quantum computing.
---
So the answer to "Are the AES legal key sizes really the limit?" is a resounding **yes**. 2048-bit keys are more commonly found in asymmetric algorithms such as RSA / DSA. For RSA and DSA that key size is actually rather low, even though it should still be out of reach for practical attacks. Maybe the ciphertext was encrypted using [hybrid encryption](http://en.wikipedia.org/wiki/Hybrid_cryptosystem).
|
Get requested address in socket programming with C
I am using something like this to create a server in C. When I go to `127.0.0.1:5000` from my browser I can see `"Hello Worlds"`, as I am sending it as the buffer. But I want `127.0.0.1:5000/filename.html` to work, and I don't know how to get the `filename` that comes after `127.0.0.1:5000` in C.
I am using this to make connection:
```
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(5000);
bind(listenfd, (struct sockaddr*)&serv_addr,sizeof(serv_addr));
connfd = accept(listenfd, (struct sockaddr*)NULL ,NULL);
```
|
The browser will be sending your server an *HTTP request* that contains the URL it is after. The request could look like this:
```
GET /filename.html HTTP/1.1
Host: 127.0.0.1:5000
```
Your C program must read this request from the socket and *parse* it to find the URL. Note that the request will likely contain more information than the above, but it should always end with a blank line (so you know where to stop parsing). Lines in HTTP requests should end with both a carriage return and line feed (`"\r\n"`).
You receive data through the same socket that you use to send data. The steps to read an HTTP request could be something like this:
1. Declare a buffer of a sufficient size, perhaps 4096 bytes or more.
2. Read data into this buffer using [`read`](http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html) and your `connfd` until:
1. You have received 4095 bytes (in which case your server should respond with error 413)
2. You have encountered the characters `"\r\n\r\n"` (this indicates a blank line)
3. Some amount of time has passed and neither of the above has occurred. In order to implement a timeout you will need to use [`select()`](http://pubs.opengroup.org/onlinepubs/007908775/xsh/select.html) or [`poll()`](http://pubs.opengroup.org/onlinepubs/007908775/xsh/poll.html).
3. Once you have received the HTTP request into your buffer, parse it:
1. The first line is the *request* line which dictates the method of the request, the URI, and the protocol version number. A possible way to parse this line is to split it by space.
2. Subsequent lines represent HTTP header fields, and can generally be parsed as `Key: Value\r\n`. These header fields contain cookies, information about the client making the request, etc.
4. You need to form your HTTP *response* as well. A response for when the URI specifies a valid resource (such as `filename.html`) might be:
```
HTTP/1.1 200 OK
Date: Thu, 25 Jul 2013 03:55:00 GMT
Server: sadaf2605-server/1.0
Content-Type: text/html
Content-Length: 40595
< contents of filename.html follows here >
```
In the above, `Content-Length` refers to the number of bytes in the `filename.html` file. Just like the request, a response is separated from data using a blank line.
|
Java PrintWriter vs JSP
I just want to ask an opinion: which is the best practice for producing HTML between these two options:
1) Use JSP by creating the HTML page there
2) Generate HTML on the Java server side using `PrintWriter`
Which one is the best practice?
And if I have a situation where I only need to generate a small line of HTML, is it better to create a JSP page or just generate it with Java's `PrintWriter`?
Or is there any better approach?
Thanks!
|
I would use a JSP every time. Why?
- Low-level programming of HTML is a slow and error-prone operation. I've done this in C++ on an Arduino board, and I'm glad it was a hobby project. There is no way I could charge commercial rates for the small amount of functionality I produced.
- A JSP is very close to the HTML it emits, and so is easy to maintain.
- There are good libraries (JSTL for one) that simplify formatting output, handling loops etc.
- Investigate the use of tag files. These are easy to write and allow encapsulating chunks of the JSP into a separate file. Typically these are used for things like input fields with an associated label and error-reporting field. You can call the same tag file from multiple JSPs. In my experience, tag files have all but replaced tags written in Java.
Having said that, JSPs are just one of many web templating options. [This Wikipedia article](https://en.wikipedia.org/wiki/Comparison_of_web_template_engines) provides a good comparison of the many options.
|
Fragment.getView() always returns null
I add two fragments to a ViewPager in MainActivity dynamically. While I'm trying to get the sub-views of the fragments, `Fragment.getView()` always returns null. How can I solve this problem? Thanks in advance.
```
mLinearLayout= (LinearLayout)fragments.get(0).getView().findViewById(R.id.linear_layout);
mRelativeLayout= (RelativeLayout) fragments.get(1).getView().findViewById(R.id.relative_layout);
```
|
If I were you, I would use the fragments' `onCreateView()` to bind the views, then let the parent `Activity` know about the views through an Interface in `onActivityCreated()`.
Your interface could look like
```
public interface ViewInterface {
void onLinearLayoutCreated(LinearLayout layout);
void onRelativeLayoutCreated(RelativeLayout layout);
}
```
and then in each fragment
```
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    // inflate with the container (not the inflater) as the parent parameter
    ViewGroup layout = (ViewGroup) inflater.inflate(R.layout.fragment_layout, container, false);
    mLinearLayout = (LinearLayout) layout.findViewById(R.id.linear_layout);
    ...
    return layout;
}
...
public void onActivityCreated (Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
try {
ViewInterface callback = (ViewInterface) getActivity();
callback.onLinearLayoutCreated(mLinearLayout);
} catch (ClassCastException e) {
Log.e("ERROR", getActivity().getClass().getName() + " must implement ViewInterface");
}
...
}
```
and then in your parent `Activity` that implements `ViewInterface`
```
void onLinearLayoutCreated(LinearLayout layout) {
//do something with LinearLayout
...
}
```
|
Javascript closure not working
I've read these questions:
- [JavaScript closure inside loops – simple practical example](https://stackoverflow.com/questions/750486/javascript-closure-inside-loops-simple-practical-example)
- [How do JavaScript closures work?](https://stackoverflow.com/questions/111102/how-do-javascript-closures-work)
- [How do I pass the value (not the reference) of a JS variable to a function?](https://stackoverflow.com/questions/2568966/how-do-i-pass-the-value-not-the-reference-of-a-js-variable-to-a-function)
and tried to apply their solutions (as well as at least 1/2 a dozen other implementations) and none of them are working.
Here's the function that has the loop:
```
ExecuteQueryWhereQueryAndParamsBothArrays: function (queryArray, paramsArray, idsArray, success, fail, errorLogging) {
var hasError = false;
$rootScope.syncDownloadCount = 0;
$rootScope.duplicateRecordCount = 0;
$rootScope.db.transaction(function (tx) {
for (var i = 0; i < paramsArray.length; i++) {
window.logger.logIt("id: " + idsArray[i]);
var query = queryArray[i];
var params = paramsArray[i];
var id = idsArray[i];
tx.executeSql(query, params, function (tx, results) {
incrementSyncDownloadCount(results.rowsAffected);
}, function(tx, error) {
if (error.message.indexOf("are not unique") > 0 || error.message.indexOf("is not unique") > 0) {
incrementDuplicateRecordCount(1);
return false;
}
// this didn't work: errorLogging(tx, error, id);
// so I wrapped in in an IIFE as suggested:
(function(a, b, c) {
errorLogging(a, b, idsArray[c]);
})(tx, error, i);
return true;
});
}
}, function () {
fail();
}, function () {
success();
});
```
And here's the `errorLogging` function that is writing my message. (Note: I'm not able to "write" the message in the same JavaScript file because I'd need to inject another Angular reference into this file, which would cause a circular reference, and then the code won't run.)
```
var onError = function (tx, e, syncQueueId) {
mlog.LogSync("DBService/SQLite Error: " + e.message, "ERROR", syncQueueId);
};
```
What other method can I implement to stop it from returning the very last "id" of my sync records (when it's only the first record that has the error)?
|
>
>
> ```
> … var i …
> async(function() { …
> // errorLogging(tx, error, id);
> (function(a, b, c) {
> errorLogging(a, b, idsArray[c]);
> })(tx, error, i);
> … })
>
> ```
>
>
That's rather useless, because the `i` variable already has the wrong value there. You need to put the wrapper around the whole async callback, closing over all variables that are used within the async callback but are going to be modified by the synchronous loop.
The easiest way (it always works) is to simply wrap the complete loop body and close over the iteration variable:
```
for (var i = 0; i < paramsArray.length; i++) (function(i) { // here
var query = queryArray[i];
var params = paramsArray[i];
var id = idsArray[i];
window.logger.logIt("id: " + id);
tx.executeSql(query, params, function (tx, results) {
incrementSyncDownloadCount(results.rowsAffected);
}, function(tx, error) {
if (error.message.indexOf("are not unique") > 0 || error.message.indexOf("is not unique") > 0) {
incrementDuplicateRecordCount(1);
return false;
}
errorLogging(tx, error, id);
return true;
});
}(i)); // and here
```
You also might pass all variables that are constructed in the loop (and depend on the iteration variable) as the closure arguments. In your case, it might look like this:
```
for (var i = 0; i < paramsArray.length; i++) {
(function(query, params, id) { // here
window.logger.logIt("id: " + id);
tx.executeSql(query, params, function (tx, results) {
incrementSyncDownloadCount(results.rowsAffected);
}, function(tx, error) {
if (error.message.indexOf("are not unique") > 0 || error.message.indexOf("is not unique") > 0) {
incrementDuplicateRecordCount(1);
return false;
}
errorLogging(tx, error, id);
return true;
});
}(queryArray[i], paramsArray[i], idsArray[i])); // here
}
```
Or you identify the async callback, and wrap only that:
```
for (var i = 0; i < paramsArray.length; i++) {
window.logger.logIt("id: " + idsArray[i]);
tx.executeSql(queryArray[i], paramsArray[i], function (tx, results) {
incrementSyncDownloadCount(results.rowsAffected);
}, (function(id) { // here
return function(tx, error) {
// ^^^^^^ and here
if (error.message.indexOf("are not unique") > 0 || error.message.indexOf("is not unique") > 0) {
incrementDuplicateRecordCount(1);
return false;
}
errorLogging(tx, error, id);
return true;
};
}(idsArray[i]))); // and here
}
```
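As an aside, beyond the ES5-era code above: in ES6 and later, declaring the loop variable with `let` gives every iteration its own binding, so no wrapper is needed at all:

```javascript
// With `var`, every callback closes over the same i and sees its final value;
// with `let`, each iteration of the loop gets a fresh binding.
var withVar = [], withLet = [];

for (var i = 0; i < 3; i++) {
  withVar.push(function () { return i; });
}
for (let j = 0; j < 3; j++) {
  withLet.push(function () { return j; });
}

console.log(withVar.map(function (f) { return f(); })); // [ 3, 3, 3 ]
console.log(withLet.map(function (f) { return f(); })); // [ 0, 1, 2 ]
```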
|
What is the meaning of the second value in a return statement in C?
I have the following C code:
```
#define ASSERT_ACTIVE(active) do { if (!active) return errno = 6, -1; } while (0);
#define errno (*_errno())
int someCrazyFunc (bool active) {
ASSERT_INACTIVE (active);
...
}
```
As far as I know, a `#define` will simply place the replacement text in place of the specified identifier.
I like to know:
- What does `return errno = 6, -1;` mean? Does it return two values in one return statement?
- What is the meaning of the replacement code `(*_errno()) = 6`?
|
There isn't a second value - a `return` statement returns exactly one value. In the statement:
```
return errno = 6, -1;
```
The return value is the result of the expression `errno = 6, -1`. This is an expression using the comma operator - it is parsed as `(errno = 6), -1`, which evaluates to `-1` and assigns `6` to `errno` as a side-effect. So this means that it's equivalent to the two statements:
```
errno = 6;
return -1;
```
Assuming that `_errno()` is a function returning a pointer - for example it has a return type of `int *` - then the expression `(*_errno()) = 6` assigns the value `6` to the object pointed to by the return value of the function. It would be equivalent to code similar to:
```
int *p = _errno();
*p = 6;
```
`errno` is often defined like this in order to give each thread in a multi-threaded implementation its own `errno`. The function `_errno()` in this case would return a pointer to the current thread's `errno` variable.
|
pandas merging 300 dataframes
The purpose of this code is
1. Scrape 300 tables via Pandas and Beautiful Soup
2. Concatenate these tables into a single data frame
The code works fine for the first step, but it is not working for the second.
Here is the code:
```
import pandas as pd
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup
header = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 " "Safari/537.36", "X-Requested-With": "XMLHttpRequest"}
url = open(r"C:\Users\Sayed\Desktop\script\links.txt").readlines()
for site in url:
    req = Request(site, headers=header)
    page = urlopen(req)
    soup = BeautifulSoup(page, 'lxml')
    table = soup.find('table')
    df = pd.read_html(str(table), parse_dates={'DateTime': ['Release Date', 'Time']}, index_col=[0])[0]
df = pd.concat(df, axis=1, join='outer').sort_index(ascending=False)
print(df)
```
Here is the error:
```
Traceback (most recent call last):
  File "D:/Projects/Tutorial/try.py", line 18, in <module>
    df = pd.concat(df, axis=1, join='outer').sort_index(ascending=False)
  File "C:\Users\Sayed\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 225, in concat
    copy=copy, sort=sort)
  File "C:\Users\Sayed\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 241, in __init__
    '"{name}"'.format(name=type(objs).__name__))
TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"
```
|
The pandas `concat` function takes a *sequence or mapping of Series, DataFrame, or Panel objects* as its first argument. Your code is currently passing a single DataFrame.
I suspect the following will fix your issue:
```
import pandas as pd
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup

header = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 " "Safari/537.36", "X-Requested-With": "XMLHttpRequest"}
url = open(r"C:\Users\Sayed\Desktop\script\links.txt").readlines()

dfs = []
for site in url:
    req = Request(site, headers=header)
    page = urlopen(req)
    soup = BeautifulSoup(page, 'lxml')
    table = soup.find('table')
    df = pd.read_html(str(table), parse_dates={'DateTime': ['Release Date', 'Time']}, index_col=[0])[0]
    dfs.append(df)

concat_df = pd.concat(dfs, axis=1, join='outer').sort_index(ascending=False)
print(concat_df)
```
All I have done is to create a list called *dfs*, as a place to append your DataFrames as you iterate through the sites. Then *dfs* is passed as the argument to concat.
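The list-then-concat pattern in miniature (toy frames standing in for the scraped tables, assuming pandas is installed; column names here are made up):

```python
import pandas as pd

# Collect frames in a list, then concatenate once at the end --
# pd.concat wants an iterable of DataFrames, not a single one.
dfs = []
for n in range(3):
    dfs.append(pd.DataFrame({"value_%d" % n: [n, n + 1]}, index=["a", "b"]))

combined = pd.concat(dfs, axis=1, join="outer").sort_index(ascending=False)
print(combined.shape)  # (2, 3)
```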
|
Boost serialize object as a json
Let's say I have an object of type `Animal` and I serialize it.
In the following example, the serialized content is
```
22 serialization::archive 16 0 0 4 1 5 Horse
```
Nice. But, what if I need it to be serialized as a `json`. Is it possible via `boost` serialization?
I look for such a string:
```
{
"legs": 4,
"is_mammal": true,
"name": "Horse"
}
```
Code:
```
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <iostream>
#include <sstream>

using namespace boost::archive;

class Animal
{
public:
    Animal(){}
    void set_leg(int l){legs=l;};
    void set_name(std::string s){name=s;};
    void set_ismammal(bool b){is_mammal=b;};
    void print();
private:
    friend class boost::serialization::access;
    template <typename Archive>
    void serialize(Archive &ar, const unsigned int version)
    {
        ar & legs;
        ar & is_mammal;
        ar & name;
    }
    int legs;
    bool is_mammal;
    std::string name;
};

void Animal::print()
{
    std::cout
        << name << " with "
        << legs << " legs is "
        << (is_mammal ? "" : "not ")
        << "a mammal" << std::endl;
}

void save_obj(const Animal &animal, std::stringstream &stream)
{
    text_oarchive oa{stream};
    oa << animal;
}

void load_obj(std::stringstream &stream, Animal &animal)
{
    text_iarchive ia{stream};
    ia >> animal;
}

int main()
{
    std::stringstream stream;

    Animal animal;
    animal.set_name("Horse");
    animal.set_leg(4);
    animal.set_ismammal(true);

    save_obj(animal, stream);

    Animal duplicate;
    load_obj(stream, duplicate);

    std::cout << "object print: ";
    duplicate.print();
    std::cout << "stream print: " << stream.str() << std::endl;
}
// g++ -std=c++11 main.cpp -lboost_serialization && ./a.out
```
Results
```
object print: Horse with 4 legs is a mammal
stream print: 22 serialization::archive 16 0 0 4 1 5 Horse
```
|
No, there's no such thing.
You could write your own (by implementing the Archive concept). But I reckon that's not worth the effort. Just use a JSON library.
Here's a sketch of the minimal output-archive model that works with your sample:
```
#include <boost/serialization/serialization.hpp>
#include <boost/serialization/nvp.hpp>
#include <iostream>
#include <iomanip>
#include <sstream>

struct MyOArchive {
    std::ostream& _os;
    MyOArchive(std::ostream& os) : _os(os) {}

    using is_saving = boost::true_type;

    template <typename T>
    MyOArchive& operator<<(boost::serialization::nvp<T> const& wrap) {
        save(wrap.name(), wrap.value());
        return *this;
    }

    template <typename T>
    MyOArchive& operator<<(T const& value) {
        return operator<<(const_cast<T&>(value));
    }

    template <typename T>
    MyOArchive& operator<<(T& value) {
        save(value);
        return *this;
    }

    template <typename T> MyOArchive& operator&(T const& v) { return operator<<(v); }

    bool first_element = true;

    void start_property(char const* name) {
        if (!first_element) _os << ", ";
        first_element = false;
        _os << std::quoted(name) << ":";
    }

    template <typename T> void save(char const* name, T& b) {
        start_property(name);
        save(b);
    }

    void save(bool b)         { _os << std::boolalpha << b; }
    void save(int i)          { _os << i; }
    void save(std::string& s) { _os << std::quoted(s); }

    template <typename T>
    void save(T& v) {
        using boost::serialization::serialize;
        _os << "{";
        first_element = true;
        serialize(*this, v, 0u);
        _os << "}\n";
        first_element = false;
    }
};

class Animal {
public:
    Animal() {}
    void set_leg(int l) { legs = l; };
    void set_name(std::string s) { name = s; };
    void set_ismammal(bool b) { is_mammal = b; };
    void print();

private:
    friend class boost::serialization::access;
    template <typename Archive> void serialize(Archive &ar, unsigned)
    {
        ar & BOOST_SERIALIZATION_NVP(legs)
           & BOOST_SERIALIZATION_NVP(is_mammal)
           & BOOST_SERIALIZATION_NVP(name);
    }

    int legs;
    bool is_mammal;
    std::string name;
};

void Animal::print() {
    std::cout << name << " with " << legs << " legs is " << (is_mammal ? "" : "not ") << "a mammal" << std::endl;
}

void save_obj(const Animal &animal, std::stringstream &stream) {
    MyOArchive oa{ stream };
    oa << animal;
}

int main() {
    std::stringstream stream;
    {
        Animal animal;
        animal.set_name("Horse");
        animal.set_leg(4);
        animal.set_ismammal(true);
        save_obj(animal, stream);
    }
    std::cout << "stream print: " << stream.str() << std::endl;
}
```
Prints
```
stream print: {"legs":4, "is_mammal":true, "name":"Horse"}
```
>
> ## CAVEAT
>
>
> I do not recommend this approach. In fact there are numerous missing things in the above - most notably the fact that it is output-only
>
>
>
|
Retain Twitter Bootstrap Collapse state on Page refresh/Navigation
I'm using Bootstrap "collapse" plugin to make an accordion for a long list of links. The accordion-body tag includes "collapse" so all the groups are collapsed when the page loads. When you open a group and click on a link, it takes you to a new page to see some detail and then you click a back link or the browser back to return to the list. Unfortunately, when you return the accordion is back in its collapsed state and you have to open the group again and find where you were. I anticipate a lot of this back and forth navigation and this behavior is going to be frustrating.
Is there some way to preserve the user's place and go back to it, or just prevent the page from reloading or the javascript from firing again.
I thought the solution might be along these lines, but not sure.
[Twitter bootstrap: adding a class to the open accordion title](https://stackoverflow.com/questions/10918801/twitter-bootstrap-adding-a-class-to-the-open-accordion-title)
|
You can very easily solve this with a cookie. There are a lot of small libraries for this, like <https://github.com/carhartl/jquery-cookie>, which I use in the example below:
```
<script src="https://raw.github.com/carhartl/jquery-cookie/master/jquery.cookie.js"></script>
```
add the following code to a script section (`#accordion2` refers to the modified Twitter Bootstrap example I list afterwards)
```
$(document).ready(function() {
    var last = $.cookie('activeAccordionGroup');
    if (last != null) {
        //remove default collapse settings
        $("#accordion2 .collapse").removeClass('in');
        //show the last visible group
        $("#" + last).collapse("show");
    }
});

//when a group is shown, save it as the active accordion group
$("#accordion2").bind('shown', function() {
    var active = $("#accordion2 .in").attr('id');
    $.cookie('activeAccordionGroup', active);
});
```
And you are done! Here a modified version of the example at <http://twitter.github.com/bootstrap/javascript.html#collapse> with clickable links, when you go back - the last shown accordion group opens up automatically
```
<div class="accordion" id="accordion2">
  <div class="accordion-group">
    <div class="accordion-heading">
      <a class="accordion-toggle" data-toggle="collapse" data-parent="#accordion2" href="#collapseOne">
        Collapsible Group Item #1
      </a>
    </div>
    <div id="collapseOne" class="accordion-body collapse in">
      <div class="accordion-inner">
        Link : <a href="http://google.com">google.com</a>
      </div>
    </div>
  </div>
  <div class="accordion-group">
    <div class="accordion-heading">
      <a class="accordion-toggle" data-toggle="collapse" data-parent="#accordion2" href="#collapseTwo">
        Collapsible Group Item #2
      </a>
    </div>
    <div id="collapseTwo" class="accordion-body collapse">
      <div class="accordion-inner">
        Link : <a href="http://stackoverflow.com">stackoverflow.com</a>
      </div>
    </div>
  </div>
  <div class="accordion-group">
    <div class="accordion-heading">
      <a class="accordion-toggle" data-toggle="collapse" data-parent="#accordion2" href="#collapseThree">
        Collapsible Group Item #3
      </a>
    </div>
    <div id="collapseThree" class="accordion-body collapse">
      <div class="accordion-inner">
        Link : <a href="http://cryptozoologynews.blogspot.com/">cryptozoology news</a>
      </div>
    </div>
  </div>
</div>
```
|
Calling BeginInvoke from a destructor
I have some code in a WPF application that looks like this:
```
public class MyTextBox : System.Windows.Controls.TextBox, IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        Dispatcher.BeginInvoke((Action) delegate
        {
            // do work on member variables on the UI thread.
        });
    }

    ~MyTextBox()
    {
        Dispose(false);
    }
}
```
The dispose method is never getting explicitly called so the destructor calls it. It seems like in this case the object would be destroyed before the delegate in the BeginInvoke fires on the UI thread. It appears to be working though. What is happening here? Is this safe?
|
>
> It seems like in this case the object would be destroyed before the
> delegate in the BeginInvoke fires on the UI thread
>
>
>
The finalizer queues work to the UI message loop. The object may finish running its finalizer method *before* the actual delegate gets invoked on the UI thread, but that doesn't matter, as the delegate gets queued regardless.
>
> What is happening here?
>
>
>
You're queueing work to the UI from the finalizer.
>
> Is this safe?
>
>
>
Safe is a broad term. Would I do this? Definitely not. It looks and feels weird that you're invoking manipulation of UI elements from a finalizer, especially given this is a `TextBox` control. I suggest you get the full grasp of what running a [finalizer guarantees](https://msdn.microsoft.com/en-us/library/system.object.finalize.aspx#Notes) and doesn't guarantee. For one, running a finalizer doesn't mean the object gets cleaned up in memory right away.
I'd also suggest reading @EricLippert posts: *Why everything you know is wrong*, [Part1](http://ericlippert.com/2015/05/18/when-everything-you-know-is-wrong-part-one/) & [Part2](http://ericlippert.com/2015/05/21/when-everything-you-know-is-wrong-part-two/)
|
Add custom steps to source package's debian/package.postinst?
I have a package that incorporates an auto-generated `debian/package.postinst.debhelper` file into the generated binary. When I put my own code into a file at `debian/package.postinst`, the auto-generated file is no longer incorporated into the resulting binary.
How do I add custom code to the `postinst` file in the generated package without blocking the use of the auto-generated code?
|
Your postinst script should include a `#DEBHELPER#` token if you are using any debhelper commands that might modify it. It will get replaced in the resulting script by the auto-generated content. See the [manpage for the `dh_installdeb` command](http://manpages.ubuntu.com/dh_installdeb)
For example:
```
#!/bin/sh
# postinst script for webpy-example
#
# see: dh_installdeb(1)

set -e

# summary of how this script can be called:
#        * <postinst> `configure' <most-recently-configured-version>
#        * <old-postinst> `abort-upgrade' <new version>
#        * <conflictor's-postinst> `abort-remove' `in-favour' <package>
#          <new-version>
#        * <postinst> `abort-remove'
#        * <deconfigured's-postinst> `abort-deconfigure' `in-favour'
#          <failed-install-package> <version> `removing'
#          <conflicting-package> <version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package

# source debconf library
. /usr/share/debconf/confmodule

# Source dbconfig-common functions
if [ -f /usr/share/dbconfig-common/dpkg/postinst.pgsql ]; then
    . /usr/share/dbconfig-common/dpkg/postinst.pgsql
fi

case "$1" in
    configure)
        # Set up our config for apache
        /bin/cp /usr/share/webpy-example/postinstall/webpy-config /etc/apache2/conf.d/
        /usr/sbin/a2enmod wsgi
        /usr/sbin/a2enmod rewrite
        /etc/init.d/apache2 reload

        # set up database
        dbc_pgsql_createdb_encoding="UTF8"
        dbc_generate_include=template:/usr/share/webpy-example/lib/credentials.py
        dbc_generate_include_args="-U -o template_infile='/usr/share/doc/webpy-example/credentials_template.py'"
        dbc_generate_include_owner="root:www-data"
        dbc_generate_include_perms="0660"
        dbc_go webpy-example $@ || true
    ;;

    abort-upgrade|abort-remove|abort-deconfigure)
        exit 0
    ;;

    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.

#DEBHELPER#

db_stop

exit 0
```
|
Mount current directory as a volume in Docker on Windows 10
**Description**
I am using Docker version 1.12.5 on Windows 10 via Hyper-V and want to use container executables as commands in the current path. I built a Docker image that is running fine, but I have a problem to mount the current path. The idea is to create an alias and do a `docker run --rm [...]` command so that it could be used system-wide in the current directory.
**Setup**
I have a drive E with a folder "test" and in there a folder called "folder on windows host" to show that the command is working. The Dockerfile creates the directory `/data`, defines it as VOLUME and WORKDIR.
Having `E:\test` as the current directory in PowerShell and executing the Docker command with an absolute path, I can see the content of `E:\test`:
```
PS E:\test> docker run --rm -it -v E:\test:/data mirkohaaser/docker-clitools ls -la
total 0
drwxr-xr-x 2 root root 0 Jan 4 11:45 .
drwxr-xr-x 2 root root 0 Jan 5 12:17 folder on windows host
```
**Problem**
I want to use the current directory and not an absolute notation. I could not use `pwd` in the volume argument because of various error messages:
Trying with ($pwd)
```
PS E:\test> docker run --rm -it -v ($pwd):/data mirkohaaser/docker-clitools ls -la
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error parsing reference: ":/data" is not a valid repository/tag.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
```
Trying with /($pwd)
```
PS E:\test> docker run --rm -it -v /($pwd):/data mirkohaaser/docker-clitools ls -la
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error parsing reference: "E:\\test" is not a valid repository/tag.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
```
Trying with ´$pwd´
```
PS E:\test> docker run --rm -it -v ´$pwd´:/data mirkohaaser/docker-clitools ls -la
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Invalid bind mount spec "´E:\\test´:/data": invalid mode: /data.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
```
Trying with `` `$pwd` ``
```
PS E:\test> docker run --rm -it -v `$pwd`:/data mirkohaaser/docker-clitools ls -la
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: create $pwd: "$pwd" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
```
What is the correct syntax of mounting the current directory as a volume in Docker on Windows 10?
|
In Windows Command Line (`cmd`), you can mount the current directory like so:
```
docker run --rm -it -v %cd%:/usr/src/project gcc:4.9
```
In PowerShell, you use `${PWD}`, which gives you the current directory:
```
docker run --rm -it -v ${PWD}:/usr/src/project gcc:4.9
```
On Linux:
```
docker run --rm -it -v $(pwd):/usr/src/project gcc:4.9
```
**Cross Platform**
The following options will work on both PowerShell and on Linux (at least Ubuntu):
```
docker run --rm -it -v ${PWD}:/usr/src/project gcc:4.9
docker run --rm -it -v $(pwd):/usr/src/project gcc:4.9
```
|
Can I load one .gitconfig file from another?
>
> **Possible Duplicate:**
>
> [Is it possible to include a file in your .gitconfig](https://stackoverflow.com/questions/1557183/is-it-possible-to-include-a-file-in-your-gitconfig)
>
>
>
With bash and zsh I can source subfiles in order to better organize my config.
Can I do something similar with `.gitconfig`?
|
(March 2012) It looks like this is finally going to be possible soon -- git 1.7.10 is going to support this syntax in `.gitconfig`:
```
[include]
path = /path/to/file
```
See [here](https://github.com/gitster/git/commit/9b25a0b52e09400719366f0a33d0d0da98bbf7b0) for a detailed description of the git change and its edge cases.
By the way, a couple of subtleties worth pointing out:
1. Path expansion, e.g. `~` or `$HOME`, does not appear to be supported.
2. If a relative path is specified, then it is relative to the .gitconfig file that has the `[include]` statement. This works correctly even across chained includes -- e.g. `~/.gitconfig` can have:
```
[include]
path = subdir/gitconfig
```
and `subdir/gitconfig` can have:
```
[include]
path = nested_subdir/gitconfig
```
... which will cause `subdir/nested_subdir/gitconfig` to be loaded.
3. If git can't find the target file, it silently ignores the error. This appears to be by design.
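Here is a quick throwaway sketch to convince yourself of point 2 (it assumes git 1.7.10 or later is on your PATH; all paths below are temp files created on the fly):

```shell
dir=$(mktemp -d)
mkdir "$dir/subdir"

printf '[include]\n\tpath = subdir/gitconfig\n' > "$dir/main.gitconfig"
printf '[user]\n\tname = Included Name\n'      > "$dir/subdir/gitconfig"

# --includes makes git follow [include] directives when reading a specific file;
# the relative path is resolved relative to main.gitconfig, as described above
name=$(git config --file "$dir/main.gitconfig" --includes --get user.name)
echo "$name"   # Included Name

rm -rf "$dir"
```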
|
discord.py trying to remove all roles from a user
I have a problem: I'm trying to remove all the roles a user has, as part of a mute command, but it gives me this error: `discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role`
Here's my code:
```
@client.command(aliases=['m'])
@commands.has_permissions(kick_members = True)
async def mute(ctx, member : discord.Member):
    muteRole = ctx.guild.get_role(728203394673672333)
    for i in member.roles:
        await member.remove_roles(i)
    await member.add_roles(muteRole)
    await ctx.channel.purge(limit = 1)
    await ctx.send(str(member)+' has been muted!')
```
I know that this kind of question was already asked here: [How to remove all roles at once (Discord.py 1.4.1)](https://stackoverflow.com/questions/63536983/how-to-remove-all-roles-at-once-discord-py-1-4-1).
But it wasn't answered and did not help me at all.
|
The problem is that all users have an "invisible role", `@everyone`. You will see it show up if you try
```
for i in member.roles:
    print(i)
```
`remove_roles` is a high level function and it will try to remove `@everyone`, which is causing your error.
To clear all current roles from the user, you can do:
```
@client.command(aliases=['m'])
@commands.has_permissions(kick_members = True)
async def mute(ctx, member : discord.Member):
    muteRole = ctx.guild.get_role(775449115022589982)
    await member.edit(roles=[muteRole])  # Replaces all current roles with roles in list
    await ctx.channel.purge(limit = 1)
    await ctx.send(str(member)+' has been muted!')
```
`await member.edit(roles=[])` Replaces all the current roles with the roles you have in the list. Leave the list empty to remove all roles from the user.
[discord.Member.edit](https://discordpy.readthedocs.io/en/latest/api.html#discord.Member.edit)
Although if you want to do it with a `for loop`, you can use `try`
```
@client.command(aliases=['m'])
@commands.has_permissions(kick_members = True)
async def mute(ctx, member : discord.Member):
    muteRole = ctx.guild.get_role(775449115022589982)
    for i in member.roles:
        try:
            await member.remove_roles(i)
        except:
            print(f"Can't remove the role {i}")
    await member.add_roles(muteRole)
    await ctx.channel.purge(limit = 1)
    await ctx.send(str(member)+' has been muted!')
```
|
Creating Windows shortcuts from Chrome bookmarks
I have a very large number of bookmarks in Google Chrome. I want to transfer all of them to a windows folder, so that each bookmark will be a shortcut to a website (I want a list of shortcuts, just like any list of regular applications shortcuts). I also would like to preserve the bookmark's name.
I tried achieving my goal using the `Export bookmarks to HTML file` in the `Organize` menu inside the `Bookmark Manager`, but all I could do with it is save the links manually as `html` files, and they won't even save with their current bookmarks' names.
|
After searching the web for a while, I came to the conclusion that [there is no simple solution for this problem](http://productforums.google.com/forum/#!topic/chrome/m0Bopuvbo3s). There are different methods to save a link or a bookmark as a Windows URL shortcut, but there is no way to do it for multiple links / URLs at once.
Daniel Beck suggested an OS X / Safari bookmarks file based script, but I didn't manage to execute the script because I wasn't sure how to adapt it to Windows, even with Cygwin.
I realized the only way to achieve my goal is by using a script, so I posted a programming-specific question on Stack Overflow and asked for a script which would take the URLs from the links inside the `bookmarks.html` file and use them to create Windows URL shortcuts.
*Here is the question + the answer (it's a VBScript):*
[**Creating multiple Windows URL shortcuts from a bookmarks HTML file**](https://stackoverflow.com/questions/12043989/creating-multiple-windows-url-shortcuts-from-a-bookmarks-html-file).
|
Checking for balanced brackets in JavaScript
I wrote a simple function `isBalanced` which takes some code and returns `true` if the brackets in the code are balanced and `false` otherwise:
```
function isBalanced(code) {
    var length = code.length;
    var delimiter = '';
    var bracket = [];
    var matching = {
        ')': '(',
        ']': '[',
        '}': '{'
    };

    for (var i = 0; i < length; i++) {
        var char = code.charAt(i);

        switch (char) {
            case '"':
            case "'":
                if (delimiter)
                    if (char === delimiter)
                        delimiter = '';
                    else delimiter = char;
                break;
            case '/':
                var lookahead = code.charAt(++i);

                switch (lookahead) {
                    case '/':
                    case '*':
                        delimiter = lookahead;
                }

                break;
            case '*':
                if (delimiter === '*' && code.charAt(++i) === '/') delimiter = '';
                break;
            case '\n':
                if (delimiter === '/') delimiter = '';
                break;
            case '\\':
                switch (delimiter) {
                    case '"':
                    case "'":
                        i++;
                }

                break;
            case '(':
            case '[':
            case '{':
                if (!delimiter) bracket.push(char);
                break;
            case ')':
            case ']':
            case '}':
                if (!delimiter && bracket.length && matching[char] !== bracket.pop())
                    return false;
        }
    }

    return bracket.length ? false : true;
}
```
The function must not operate on brackets inside strings and comments. I wanted to know if my current implementation will work correctly for all test cases. I also wanted to know whether brackets may be used in any other context beside strings and comments in a language like JavaScript (AFAIK this is not the case).
|
`The function must not operate on brackets inside strings and comments.`
If that's the case then why not just compare the number of opened vs closed symbols?
Example:
```
var haveSameLength = function(str, a, b){
    return (str.match(a) || [] ).length === (str.match(b) || [] ).length;
};

var isBalanced = function(str){
    var arr = [
        [ /\(/gm, /\)/gm ], [ /\{/gm, /\}/gm ], [ /\[/gm, /\]/gm ]
    ], i = arr.length, isClean = true;

    while( i-- && isClean ){
        isClean = haveSameLength( str, arr[i][0], arr[i][1] );
    }

    return isClean;
};
```
Simple Testcases.
```
console.log( isBalanced( "var a = function(){return 'b';}" ) === true );
console.log( isBalanced( "var a = function(){return 'b';" ) === false );
console.log( isBalanced( "/*Comment*/var a = function(){ \n // coment again \n return 'b';" ) === false );
console.log( isBalanced( "var a = function(){return 'b';" ) === false );
```
Here's a demo:
<http://jsfiddle.net/9esyk/>
# Update
Your code is optimal if performance is the main consideration, but the complexity is too high.
Here are a few tips.
# 1)
Split up your function into smaller methods to reduce the complexity. One way to do this would be to have functions to filter your string so that you only analyze the meaningful characters.
# 2)
Avoid using `char` as a variable name, since it's a reserved keyword in Java and a future reserved word in older JavaScript (ES3).
# Final Result:
```
var removeComments = function(str){
    var re_comment = /(\/[*][^*]*[*]\/)|(\/\/[^\n]*)/gm;
    return (""+str).replace( re_comment, "" );
};

var getOnlyBrackets = function(str){
    var re = /[^()\[\]{}]/g;
    return (""+str).replace(re, "");
};

var areBracketsInOrder = function(str){
    str = ""+str;
    var bracket = {
            "]": "[",
            "}": "{",
            ")": "("
        },
        openBrackets = [],
        isClean = true,
        i = 0,
        len = str.length;

    for(; isClean && i<len; i++ ){
        if( bracket[ str[ i ] ] ){
            isClean = ( openBrackets.pop() === bracket[ str[ i ] ] );
        }else{
            openBrackets.push( str[i] );
        }
    }

    return isClean && !openBrackets.length;
};

var isBalanced = function(str){
    str = removeComments(str);
    str = getOnlyBrackets(str);
    return areBracketsInOrder(str);
};
```
Testcases
```
test("test isBalanced for good values", function(){
    var func = isBalanced;
    ok(func( "" ));
    ok(func( "(function(){return [new Bears()]}());" ));
    ok(func( "var a = function(){return 'b';}" ));
    ok(func( "/*Comment: a = [} is bad */var a = function(){return 'b';}" ));
    ok(func( "/*[[[ */ function(){return {b:(function(x){ return x+1; })('c')}} /*_)(([}*/" ));
    ok(func( "//Complex object;\n a = [{a:1,b:2,c:[ new Car( 1, 'black' ) ]}]" ));
});

test("test isBalanced for bad values", function(){
    var func = isBalanced;
    ok(!func( "{" ));
    ok(!func( "{]" ));
    ok(!func( "{}(" ));
    ok(!func( "({)()()[][][}]" ));
    ok(!func( "[//]" ));
    ok(!func( "[/*]*/" ));
    ok(!func( "(function(){return [new Bears()}())];" ));
    ok(!func( "var a = [function(){return 'b';]}" ));
    ok(!func( "/*Comment: a = [} is bad */var a = function({)return 'b';}" ));
    ok(!func( "/*[[[ */ function(){return {b:(function(x){ return x+1; })'c')}} /*_)(([}*/" ));
    ok(!func( "//Complex object;\n a = [{a:1,b:2,c:[ new Car( 1, 'black' ) ]]" ));
});
```
Demo: <http://jsfiddle.net/9esyk/3/>
|
Converting two Uint32Array values to Javascript number
I found a code from [here](https://stackoverflow.com/a/14379836/1691517) that converts Javascript number to inner IEEE representation as two Uint32 values:
```
function DoubleToIEEE(f)
{
    var buf = new ArrayBuffer(8);
    (new Float64Array(buf))[0] = f;
    return [ (new Uint32Array(buf))[0], (new Uint32Array(buf))[1] ];
}
```
How to convert the returned value back to Javascript number? This way:
```
var number = -10.3245535;
var ieee = DoubleToIEEE(number)
var number_again = IEEEtoDouble(ieee);
// number and number_again should be the same (if ever possible)
```
|
That code is ugly as hell. Use
```
function DoubleToIEEE(f) {
    var buf = new ArrayBuffer(8);
    var float = new Float64Array(buf);
    var uint = new Uint32Array(buf);
    float[0] = f;
    return uint;
}
```
If you want an actual `Array` instead of a `Uint32Array` (shouldn't make a difference in the most cases), add an [`Array.from`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from) call. You can also reduce this to a oneliner by passing the value to the `Float64Array` constructor:
```
function DoubleToIEEE(f) {
    // use either
    return new Uint32Array(Float64Array.of(f).buffer);
    return Array.from(new Uint32Array(Float64Array.of(f).buffer));
    return Array.from(new Uint32Array((new Float64Array([f])).buffer));
}
```
The inverse would just write the inputs into the `uint` slots and return the `float[0]` value:
```
function IEEEToDouble(is) {
    var buf = new ArrayBuffer(8);
    var float = new Float64Array(buf);
    var uint = new Uint32Array(buf);
    uint[0] = is[0];
    uint[1] = is[1];
    return float[0];
}
```
which can be shortened to
```
function IEEEToDouble(is) {
    return (new Float64Array(Uint32Array.from(is).buffer))[0];
}
```
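As a quick sanity check, the two helpers are exact inverses: the bit pattern survives the round trip unchanged, so even a value that has no exact decimal representation compares strictly equal afterwards:

```javascript
function DoubleToIEEE(f) {
  // view the 8 bytes of the double as two 32-bit unsigned ints
  return new Uint32Array(Float64Array.of(f).buffer);
}

function IEEEToDouble(is) {
  // write the two ints back into a shared buffer and read it as a double
  return new Float64Array(Uint32Array.from(is).buffer)[0];
}

const number = -10.3245535;
const roundTripped = IEEEToDouble(DoubleToIEEE(number));
console.log(roundTripped === number); // true
```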
|
Linux disk usage
I'm trying to find out what folders occupy / partition.
I see that lots of disk space goes to jenkins directory
```
sudo du -sh /home/jenkins
289G /home/jenkins
```
When I examine jenkins directory folder I get the largest folder is:
```
sudo du -sh /home/jenkins/*
137G /home/jenkins/jobs
```
And rest of the folders are relatively small, tens of K/M...
In total there are 50 folders under /home/jenkins.
How can I find who "eats" the space?
Thanks
|
The difference between `sudo du -sh /home/jenkins` and `sudo du -sh /home/jenkins/*` is that in almost all shells (with the default settings), `*` does not include hidden files or directories. Hidden means names starting with a period (e.g., if there is a `/home/jenkins/.temp/`, that would not be included in the second `du`).
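You can watch this happen with a throwaway directory (a sketch; the directory and file names below are made up and created on the fly):

```shell
demo=$(mktemp -d)
mkdir "$demo/visible" "$demo/.hidden"
dd if=/dev/zero of="$demo/.hidden/big" bs=1024 count=64 2>/dev/null

# the glob misses .hidden, so the two totals disagree
glob_kb=$(du -sk "$demo"/* | awk '{s += $1} END {print s}')
all_kb=$(du -sk "$demo" | awk '{print $1}')
echo "glob total: ${glob_kb}K, real total: ${all_kb}K"

rm -rf "$demo"
```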
So it'd appear you have about 289-137=152 GiB of hidden files. The easiest way to find out where they are is something like this:
```
sudo du -m /home/jenkins | sort -nr | less
```
Taking off the `-s` will make `du` show you the subdirectories everything is in, which sounds like what you want. That'll include hidden ones. If that still doesn't find it, add an `-a`:
```
sudo du -am /home/jenkins | sort -nr | less
```
that will additionally show individual files, in case you have a few very large hidden files. It will probably also take a bit longer to run (adding files often greatly expands the output).
There are also graphical frontends you can use; personally, I use xdiskusage (but maybe just because I've been using it forever):
```
sudo du -am /home/jenkins | xdiskusage -
```
|
Audacity adds a gap when exporting mp3
I created a loop in Audacity which was `10.549s` long and exported it to mp3. When I open the exported file in Audacity, it now has a gap at the beginning, making it `10.58s` long. The loop that sounded perfect inside Audacity becomes an imperfect loop when exported to mp3. Any ideas how to fix this?
|
## Problem: MP3 File has a gap
This is a [known, acknowledged issue](http://lame.sourceforge.net/tech-FAQ.txt) since at least 2000:
>
> 1 Why is a decoded MP3 longer than the original .wav file?
>
>
> Because LAME (and all other MDCT based encoders) add padding to the
> beginning and end of each song. For an explination of why,
> see the questions below.
>
>
>
800 word long technical explanation pertaining to both decoder and encoder issues snipped.
LAME-enabled players should apparently automatically jump this gap:
>
> LAME embeds the amount of padding in the ancillary data of the
> first frame of the MP3 file. (LAME INFO tag). The LAME decoder
> will use this information to remove the leading padding of an MP3 file.
>
>
>
however:
>
> Modifications to the decoder so that it will also remove the
> trailing padding have not yet been made.
>
>
>
## Alternatives
You could try another encoder as mentioned, if you have access to the Fraunhofer version (IIRC it is available in iTunes and Windows Media Player). Alternatively, you may be able to compile/acquire a version of `sox` with `libmad` enabled. I think these will have similar issues, however.
The question is, do you definitely need an MP3 as mentioned in the comments? Are you using a player that only handles MP3s?
If it absolutely, definitely, positively has to be an MP3, no ifs ands or buts; and the Fraunhofer encoder also gives the same issue, you could have a look at a previous thread here on SU:
[Best program to trim silence beginning and end of mp3 files?](https://superuser.com/questions/120315/best-program-to-trim-silence-beginning-and-end-of-mp3-files)
|
python - list all inner functions of a function?
In python you can do `fname.__code__.co_names` to retrieve a list of functions and global things that a function references. If I do `fname.__code__.co_varnames`, this includes inner functions, I believe.
Is there a way to essentially do `inner.__code__.co_names`, by starting with a string that looks like `'inner'`, as is returned by `co_varnames`?
|
I don't think you can inspect the code object because inner functions are lazy, and their code-objects are only created just in time. What you probably want to look at instead is the ast module. Here's a quick example:
```
import ast, inspect

# this is the test scenario
def function1():
    f1_var1 = 42
    def function2():
        f2_var1 = 42
        f2_var2 = 42
        def function3():
            f3_var1 = 42

# derive source code for top-level function
src = inspect.getsource(function1)

# derive abstract syntax tree rooted at top-level function
node = ast.parse(src)

# next, ast's walk method takes all the difficulty out of tree-traversal for us
for x in ast.walk(node):
    # functions have names whereas variables have ids,
    # nested-classes may all use different terminology
    # you'll have to look at the various node-types to
    # get this part exactly right
    name_or_id = getattr(x, 'name', getattr(x, 'id', None))
    if name_or_id:
        print(name_or_id)
```
The results are: function1, function2, f1_var1, function3, f2_var1, f2_var2, f3_var1. Obligatory disclaimer: there's probably not a good reason for doing this type of thing... but have fun :)
Oh and if you only want the names of the inner functions?
```
print(dict([[x.name, x] for x in ast.walk(ast.parse(inspect.getsource(some_function))) if type(x).__name__ == 'FunctionDef']))
```
|
How can I use a member function pointer in libcurl
I am using libcurl I have my downloading of files inside of a class, to which I want to see a progress function. I notice I can set a typical function pointer by
```
curl_easy_setopt(mCurl, CURLOPT_PROGRESSFUNCTION, progress_func3);
```
However, I would like to set it to a function pointer to my class. I can get the code to compile with
```
curl_easy_setopt(mCurl, CURLOPT_PROGRESSFUNCTION, &MyClass::progress_func3);
```
and the `progress_func3` function will get called. The problem is, as soon as it returns there will be a "Buffer overrun detected!" error thrown, saying the program cannot safely continue executing and must be terminated. (It is a Microsoft Visual C++ Runtime Library error window; I am using Visual Studio 2010.)
When I use a free function there is no problem, but when I use a member function pointer I get this error. How can I use a member function pointer in libcurl?
|
A non-static member function needs a `this` pointer to be callable. You can't provide that `this` pointer with this type of interface, so using a non-static member function is not possible.
You should create a "plain C" function as your callback, and have that function call the member function on the appropriate `MyClass` instance.
Try something like:
```
int my_progress_func(void *clientp,
                     double dltotal, double dlnow,
                     double ultotal, double ulnow)
{
    MyClass *mc = static_cast<MyClass*>(clientp);
    mc->your_function(dltotal, dlnow, ultotal, ulnow);
    return 0; // non-zero return aborts the transfer
}
```
Then:
```
curl_easy_setopt(mCurl, CURLOPT_PROGRESSDATA, &your_myclass_object);
curl_easy_setopt(mCurl, CURLOPT_PROGRESSFUNCTION, my_progress_func);
```
(You're responsible for the type match here, obviously. If you attach anything else but a `MyClass` pointer to the progress data, you're on your own.)
|
User defined properties in C# cause StackOverflowException on construction
I have been writing some code in the Unity3d engine and have been teaching myself C#. I have been trying to find an answer to my question but to no avail. I've come from java and been trying to use properties and I'm afraid that I don't understand them very well. When I tried something like this:
```
public int Property
{
    get
    {
        return Property;
    }
    set
    {
        Property = value;
    }
}
```
I get a stack overflow initializing the object when that property is accessed for assignment. I was able to fix it by just using the default property style:
```
get;
set;
```
but I don't know what is going on in the first instance that is causing the exception. It would be fantastic if someone could help explain this.
|
You need a backing field.
When you set that property, it will set itself, which will set itself, which will set itself, which will set itself, which will ... you get the gist.
Either:
```
private int _Property;

public int Property
{
    get
    {
        return _Property;
    }
    set
    {
        _Property = value;
    }
}
```
or this:
```
public int Property
{
    get;
    set;
}
```
This latter form, called an automatic property, creates that backing field for you so in reality these two will create nearly the same code (the name of the backing field will differ).
---
When you do this in your version of the code:
```
x.Property = 10;
```
you end up the property doing the exact same thing, and thus you get a stack overflow. You could rewrite it to a method with the same problem like this:
```
public void SetProperty(int value)
{
    SetProperty(value);
}
```
This, too, will cause a stack overflow for the exact same reason.
|
Specified argument was out of the range of valid values. Parameter name: size & Serial Port Communication
I need to create an application which requires communicating to an existent software using TCP/IP, where both mine and the other application will be using the port number specified below.
```
private void frmScan_Load(object sender, EventArgs e)
{
    clientSocket.Connect("100.100.100.30", 76545);
}

public void msg(string mesg)
{
    textBox1.Text = textBox1.Text + Environment.NewLine + " >> " + mesg;
}

private void cmdSCANok_Click(object sender, EventArgs e)
{
    msg("Client Started");
    NetworkStream serverStream = clientSocket.GetStream();
    byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
    serverStream.Write(outStream, 0, outStream.Length);
    serverStream.Flush();

    byte[] inStream = new byte[10025];
    serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
    string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
    msg("Data from Server : " + returndata);
}
```
What happens is, the program I am communicating with has some in-built language where it will understand the code that I send, and it will return data according to the code that I have sent. So in the code above, I sent three bits of code: (`"PCK|SCAN|5025066840471"`) which will find a specific item in the database. When it runs, I get an error on the line:
```
serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
```
the error shows the following:
"Specified argument was out of the range of valid values.
Parameter name: size"
I followed the tutorial I saw on this website: <http://csharp.net-informations.com/communications/csharp-client-socket.htm> - But I did slightly different. So instead of putting
```
string returndata = Encoding.ASCII.GetString(inStream);
```
I wrote:
```
string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
```
I am extremely confused on why I am getting those problems, and to be honest I am not understanding much of what the code is doing, I just have a rough idea, but not enough to troubleshoot this. Can someone help please?
Much appreciated!
PS: I am programming for Windows CE (portable device) on Visual Studio 2010.
|
Your code is a great example of how not to do TCP communication. I've seen this code copied over and over many times, and I'd be very happy to point you to a good tutorial on TCP - too bad I haven't seen one yet :)
Let me point out some errors first:
- TCP doesn't guarantee you the packet arrives as one bunch of bytes. So (theoretically) the `Write` operation could result in a split, requiring two reads on the other side. Sending data without headers over TCP is a very bad idea - the receiving side has no idea how much it has to read. So you've got two options - either write the length of the whole bunch of data before the data itself, or use a control character to end the "packet"
- The first point should also clarify that your reading is wrong as well. It may take more than a single read operation to read the whole "command", or a single read operation might give you two commands at once!
- You're reading `ReceiveBufferSize` bytes into a `10025` long buffer. `ReceiveBufferSize` might be bigger than your buffer. *Don't* do that - read a max count of `inStream.Length`. If you were coding in C++, this would be a great example of a buffer overflow.
- As you're converting the data to a string, you're expecting the whole buffer is full. That's most likely not the case. Instead, you have to store the return value of the read call - it tells you how many bytes were actually read. Otherwise, you're reading garbage, and basically having *another* buffer overflow.
So a much better (though still far from perfect) implementation would be like this:
```
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
// It would be much nicer to send a terminator or data length first,
// but if your server doesn't expect that, you're out of luck.
serverStream.Write(outStream, 0, outStream.Length);
// When using magic numbers, at least use nice ones :)
byte[] inStream = new byte[4096];
// This will read at most inStream.Length bytes - it can be less, and it
// doesn't tell us how much data there is left for reading.
int bytesRead = serverStream.Read(inStream, 0, inStream.Length);
// Only convert bytesRead bytes - the rest is garbage
string returndata = Encoding.ASCII.GetString(inStream, 0, bytesRead);
```
Oh, and I have to recommend [this essay on TCP protocol design](http://wiki.barcampgr.org/images/1/14/Barcampgr4_Designing_Application_Protocols_for_TCPIP.pdf).
It talks about many of the misconceptions about TCP, most importantly see the Message Framing part.
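The message-framing fix described in the first bullet is small. A sketch in Python (the idea is language-agnostic, so the same structure carries over to C#): prefix each message with its length, and on the receiving side only hand a message to the application once that many bytes have arrived:

```python
import struct

def frame(payload: bytes) -> bytes:
    # 4-byte big-endian length header, then the payload itself
    return struct.pack(">I", len(payload)) + payload

def unframe(buffer: bytes):
    # Returns (message, remaining_bytes), or (None, buffer) if incomplete.
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer          # wait for more data to arrive
    return buffer[4:4 + length], buffer[4 + length:]

# Two messages may arrive glued together or split; framing recovers both.
stream = frame(b"PCK|SCAN|5025066840471") + frame(b"PCK|NEXT")
msg1, rest = unframe(stream)
msg2, rest = unframe(rest)
```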
|
XML Document to String?
I've been fiddling with this for over twenty minutes and my Google-foo is failing me.
Let's say I have an XML Document created in Java (org.w3c.dom.Document):
```
DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
Document document = docBuilder.newDocument();
Element rootElement = document.createElement("RootElement");
Element childElement = document.createElement("ChildElement");
childElement.appendChild(document.createTextNode("Child Text"));
rootElement.appendChild(childElement);
document.appendChild(rootElement);
String documentConvertedToString = "?" // <---- How?
```
How do I convert the document object into a text string?
|
```
public static String toString(Document doc) {
    try {
        StringWriter sw = new StringWriter();
        TransformerFactory tf = TransformerFactory.newInstance();
        Transformer transformer = tf.newTransformer();
        transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "no");
        transformer.setOutputProperty(OutputKeys.METHOD, "xml");
        transformer.setOutputProperty(OutputKeys.INDENT, "yes");
        transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");

        transformer.transform(new DOMSource(doc), new StreamResult(sw));
        return sw.toString();
    } catch (Exception ex) {
        throw new RuntimeException("Error converting to String", ex);
    }
}
```
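For instance, feeding the question's document through this helper produces the expected markup (a self-contained sketch; class and variable names are illustrative):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlToStringDemo {
    public static String toString(Document doc) {
        try {
            StringWriter sw = new StringWriter();
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(sw));
            return sw.toString();
        } catch (Exception ex) {
            throw new RuntimeException("Error converting to String", ex);
        }
    }

    public static void main(String[] args) throws Exception {
        // Build the same document as in the question
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document document = builder.newDocument();
        Element root = document.createElement("RootElement");
        Element child = document.createElement("ChildElement");
        child.appendChild(document.createTextNode("Child Text"));
        root.appendChild(child);
        document.appendChild(root);

        String xml = toString(document);
        System.out.println(xml);
    }
}
```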
|
Animation image Start Bottom to Top
I know my question may get a lot of downvotes, but can someone help me? I want to add an animation to my `ImageView`: for example, when I click a button, the image appears from the bottom and moves up, like this picture

|
**1.** Create `move.xml` that defines the `animation`.
```
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:interpolator="@android:anim/linear_interpolator"
    android:fillAfter="true">

    <translate
        android:fromYDelta="100%p"
        android:toYDelta="0%p"
        android:duration="1000" />

</set>
```
**2.** Create `activity_animation.xml` for showing `Button` and `ImageView`.
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_animation"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="16dp">

    <ImageView
        android:id="@+id/icon"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@mipmap/ic_launcher"
        android:layout_centerHorizontal="true"
        android:visibility="gone"/>

    <Button
        android:id="@+id/btnStart"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"/>

</RelativeLayout>
```
**3.** Your `AnimationActivity` should be like this:
```
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.view.animation.Animation;
import android.view.animation.AnimationUtils;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.Toast;

public class AnimationActivity extends AppCompatActivity implements Animation.AnimationListener {

    ImageView imageIcon;
    Button btnStart;

    // Animation
    Animation animMoveToTop;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_animation);

        imageIcon = (ImageView) findViewById(R.id.icon);
        btnStart = (Button) findViewById(R.id.btnStart);

        // load the animation
        animMoveToTop = AnimationUtils.loadAnimation(getApplicationContext(), R.anim.move);

        // set animation listener
        animMoveToTop.setAnimationListener(this);

        // button click event
        btnStart.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                imageIcon.setVisibility(View.VISIBLE);
                // start the animation
                imageIcon.startAnimation(animMoveToTop);
            }
        });
    }

    @Override
    public void onAnimationEnd(Animation animation) {
        // Take any action after completing the animation;
        // check for the move animation
        if (animation == animMoveToTop) {
            Toast.makeText(getApplicationContext(), "Animation Stopped", Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onAnimationRepeat(Animation animation) {
        // TODO Auto-generated method stub
    }

    @Override
    public void onAnimationStart(Animation animation) {
        // TODO Auto-generated method stub
    }
}
```
**OUTPUT:**
[](https://i.stack.imgur.com/zRY3d.png)
Hope this will help~
|
Saving a list of entities to the db - MVC
I think I almost have this working, but I cant figure out how to finish.
Model:
```
public class Location
{
    public int LocationId { get; set; }
    public string SiteCode { get; set; }
    public int PersonId { get; set; }
    public int IncidentId { get; set; }
}
```
View Model
```
public List<Location> LocationList { get; set; }
```
Controller:
```
[HttpPost]
public ActionResult AddRecord(RecordViewModel model)
{
    if (ModelState.IsValid)
    {
        Location location;
        foreach (var loc in model.LocationList)
        {
            location = new Location
            {
                PersonId = model.PersonId,
                SiteCode = loc.SiteCode,
                IncidentId = loc.IncidentId
            };
        }
        using (var db = new MyEntities())
        {
            db.Order.AddObject(incident);
            db.Location.AddObject(location);
            db.Comment.AddObject(comment);
            db.SaveChanges();
        }
    }
}
```
The line `db.Location.AddObject(location);` is coming up empty. How do I get the location list from the foreach into the db?
|
You're so close!
```
// create a new list of your entity Location (may not be in namespace Data)
var locationList = new List<Data.Location>();
foreach (var loc in model.LocationList)
{
    var location = new Data.Location
    {
        PersonId = model.PersonId,
        SiteCode = loc.SiteCode,
        IncidentId = loc.IncidentId
    };
    locationList.Add(location);
}

using (var db = new MyEntities())
{
    db.Order.AddObject(incident);
    foreach (var item in locationList)
    {
        db.Location.AddObject(item);
    }
    db.Comment.AddObject(comment);
    db.SaveChanges();
}
```
OR: since you already have the LocationList on your model, use that instead
```
using (var db = new MyEntities())
{
    db.Order.AddObject(incident);
    foreach (var loc in model.LocationList)
    {
        var location = new Data.Location
        {
            PersonId = model.PersonId,
            SiteCode = loc.SiteCode,
            IncidentId = loc.IncidentId
        };
        db.Location.AddObject(location);
    }
    db.Comment.AddObject(comment);
    db.SaveChanges();
}
```
|
I don't understand the ordering of the file array returned by listFiles()
So, I've got a ListView that displays the names of all of the files in a certain directory. This application also deletes files and adds new files to the directory.
I'd like the files to be displayed in the order of their creation, but this isn't always true with File(*directory*).listFiles(). If I start with an empty directory and start adding files, then the newest file goes in position 0 of the array, and ascends in order to the oldest. However, if I delete any files and then add new ones, things get strange. Here's an example...
Imagine that I've started with an empty directory, and added four files to it. The array returned by listFiles() will be:
Position 0 = File #4 (fourth to be added)
P 1 = File #3
P 2 = File #2
P 3 = File #1
Then, I'll delete Files #2 and #3. The array is:
P 0 = File #4
P 1 = File #1
So far, so good. Now, I'll add two new files. I'd expect the new array returned by listFiles() to be:
P 0 = File #6
P 1 = File #5
P 2 = File #4
P 3 = File #1
However, here's what it actually is:
P 0 = File #4
P 1 = File #6
P 2 = File #5
P 3 = File #1
But if I then add a File #7, the new array will be:
P 0 = File #7
P 1 = File #4
P 2 = File #6
P 3 = File #5
P 4 = File #1
Basically, if any files are deleted, then new files will fill their "old positions" in the array. All of the "old positions" must be filled before any new files will go to Position 0. Can anyone explain this behavior? Also, is there a quick and easy way to re-sort the file array into my desired chronological order?
|
The way to get the list of files is [File.listFiles()](http://java.sun.com/javase/6/docs/api/java/io/File.html#listFiles()), and the documentation states that it makes no guarantees about the order of the files returned. Therefore you need to write a [Comparator](http://java.sun.com/javase/6/docs/api/java/util/Comparator.html) that uses [File.lastModified()](http://java.sun.com/javase/6/docs/api/java/io/File.html#lastModified()) and pass this, along with the array of files, to [Arrays.sort()](http://java.sun.com/javase/6/docs/api/java/util/Arrays.html#sort(T[],%20java.util.Comparator)).
**CODE:**
```
File[] files = directory.listFiles();
Arrays.sort(files, new Comparator<File>() {
    public int compare(File f1, File f2)
    {
        return Long.valueOf(f1.lastModified()).compareTo(f2.lastModified());
    }
});
```
Try this and let me know what happens.
**EDIT:**
You might also look at [apache commons IO](http://commons.apache.org/io/), it has a built in [last modified comparator](http://commons.apache.org/io/api-release/org/apache/commons/io/comparator/LastModifiedFileComparator.html) and many other nice utilities for working with files.
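On Java 8 or later, the same comparator can be written more compactly with `Comparator.comparingLong`, which also avoids boxing (a sketch; the helper class name is illustrative):

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class SortByMtime {
    // Returns the directory's files sorted oldest-first by modification time.
    public static File[] sortedByLastModified(File directory) {
        File[] files = directory.listFiles();
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        return files;
    }
}
```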
|
Loading data from DB asynchronously in win forms
Many times we populate the UI with data from the DB in the form load, and that is why the form freezes for a few seconds. I just want to know how I can load data asynchronously and populate the UI in the form load, so that my form will not freeze and will remain responsive. However, I don't want to use the BackgroundWorker class. Please help me with sample code that can solve my problem.
thanks
|
Here is a well commented example code:
## Example:
```
// This method can be called on Form_Load, Button_Click, etc.
private void LoadData()
{
    // Start a thread to load data asynchronously.
    Thread loadDataThread = new Thread(LoadDataAsync);
    loadDataThread.Start();
}

// This method is called asynchronously
private void LoadDataAsync()
{
    DataSet ds = new DataSet();
    // ... get data from connection

    // Since this method is executed from another thread than the main thread (AKA UI thread),
    // any logic that tries to manipulate the UI from this thread must go into a BeginInvoke() call.
    // By using BeginInvoke() we state that this code needs to be executed on the UI thread.

    // Check if this code is executed on some other thread than the UI thread
    if (InvokeRequired) // In this example, this will return `true`.
    {
        BeginInvoke(new Action(() =>
        {
            PopulateUI(ds);
        }));
    }
    else
    {
        // Already on the UI thread, so it is safe to touch the controls directly.
        PopulateUI(ds);
    }
}

private void PopulateUI(DataSet ds)
{
    // Populate UI Controls with data from DataSet ds.
}
```
|
Taking the rest of the list into account in filter
I need to display the number of elements whose successor is greater in a list. For example, in the list [3,7,2,1,9] my function should return 2 because 7 is greater than 3 and 9 is greater than 1.
In order to do that,I was thinking to use the filter function:
```
greaterElems :: Ord a => [a] -> Int
greaterElems [] = 0
greaterElems [x] = 0
greaterElems (x:xs) = length (filter (< head xs) (x:xs))
```
However, this does not work as expected: it seems that Haskell always considers the second element of the list, as if "head xs" is calculated only once, but this does not seem normal to me since Haskell is lazy.
What am I missing and how could I fix my code in order to achieve my goal?
|
You can make use of [**`zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]`**](https://hackage.haskell.org/package/base-4.12.0.0/docs/Prelude.html#v:zipWith) where we pass the list, and its tail. Indeed:
```
sucGreater :: Ord a => [a] -> [Bool]
sucGreater x = zipWith (<) x (tail x)
```
or as @RobinZigmond says, we can omit tail, and use `drop`:
```
sucGreater :: Ord a => [a] -> [Bool]
sucGreater x = zipWith (<) x (drop 1 x)
```
For the given sample list, this gives us:
```
Prelude> sucGreater [3,7,2,1,9]
[True,False,False,True]
```
I leave it as an exercise to count the number of `True`s in that list.
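For a cross-check of the expected answer, the same pairwise count is a one-liner in Python (shown here only to validate the logic; doing it with `zipWith` in Haskell is the exercise above):

```python
def greater_elems(xs):
    # count positions whose successor is strictly greater,
    # pairing each element with its successor as zipWith (<) does
    return sum(1 for a, b in zip(xs, xs[1:]) if a < b)

print(greater_elems([3, 7, 2, 1, 9]))  # → 2
```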
|
Receive (and send) XML via POST with ASP.NET
I have to set up an XML "web service" that receives a POST where the 'Content-type header will specify “text/xml”.'
What is the simplest way to get the XML into an `XDocument` for access by VB.NET's axis queries?
I don't believe the web service is guaranteed to follow any protocol (e.g. SOAP, etc); just specific tags and sub-tags for various requests, and it will use Basic Authentication, so I will have to process the headers.
(If it matters:
\* the live version will use HTTPS, and
\* the response will also be XML.)
|
Given Steven's warning, the answer may be to parse `Request.InputStream` manually with [Tom Holland's test](http://blogs.msdn.com/tomholl/archive/2009/05/21/protecting-against-xml-entity-expansion-attacks.aspx) first, followed by `XDocument.Load` in the `Page_Load` event.
A Google search initiated before I asked the question, but only checked after, found [this](http://www.vikramlakhotia.com/Reading_Post_data_in_the_Request_Object.aspx), also suggesting I'm on the right track.
Also I was going to ask the question implied by my point that the response had to be XML too, as to what is the best way for that, but I've found an answer [here](http://forums.asp.net/t/1513007.aspx).
In summary, the final code is:
```
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    If Request.ContentType <> "text/xml" Then _
        Throw New HttpException(500, "Unexpected Content-Type")
    Dim id = CheckBasicAuthentication

    Dim textReader = New IO.StreamReader(Request.InputStream)
    CheckXmlValidity(textReader)

    ' Reset the stream & reader
    Request.InputStream.Seek(0, IO.SeekOrigin.Begin)
    textReader.DiscardBufferedData()

    Dim xmlIn = XDocument.Load(textReader)
    ' process XML in xmlIn

    Dim xmlOut = <?xml version="1.0" encoding="UTF-8" ?>
                 <someresult>
                     <header>
                         <id><%= id.ToString() %></id>
                         <datestamp>To be inserted</datestamp>
                     </header>
                     <result/>
                 </someresult>

    ' Further generation of XML for output
    xmlOut.<someresult>.<header>.<datestamp>.Value = Date.UtcNow.ToString(xmlDateFormat)
    xmlText.Text = xmlOut.ToString
End Sub

Private Function CheckBasicAuthentication() As Integer
    Dim httpAuthorisation = Request.Headers("Authorization")
    If Left(httpAuthorisation, 6).ToUpperInvariant <> "BASIC " Then _
        Throw New HttpException(401, "Basic Authentication Required")
    Dim authorization = Convert.FromBase64String(Mid(httpAuthorisation, 7))
    Dim credentials = Text.Encoding.UTF8.GetString(authorization).Split(":"c)
    Dim username = credentials(0)
    Dim password = credentials(1)
    Return ConfirmValidUser(username, password)
End Function

Private Shared Sub CheckXmlValidity(ByVal textReader As System.IO.StreamReader)
    Try
        ' Check for "interesting" xml documents.
        Dim settings = New System.Xml.XmlReaderSettings()
        settings.XmlResolver = Nothing
        settings.MaxCharactersInDocument = 655360

        ' Successfully parse the file, otherwise an XmlException is to be thrown.
        Dim reader = System.Xml.XmlReader.Create(textReader, settings)
        Try
            While reader.Read()
                ' Just checking.
            End While
        Finally
            reader.Close()
        End Try
    Catch ex As Exception
        Throw New HttpException(500, "Invalid Xml data", ex)
    End Try
End Sub
```
and the ASP.NET webpage.aspx is:
```
<%@ Page Language="VB" AutoEventWireup="false" CodeFile="webpage.aspx.vb" Inherits="WebPage" ContentType="text/xml" %>
<asp:Literal ID="xmlText" runat="server" Mode="PassThrough"></asp:Literal>
```
NB Throwing `HTTPException` is not a valid final solution for unwanted scenarios.
|
Connect to Camel- SEDA queue
I am not able to connect to camel route having a SEDA queue. On sever side I have following configuration:
```
<camel:route>
    <camel:from uri="seda:input"/>
    <camel:log message=">>>>>data is : ${body}"/>
    <camel:inOnly uri="activemq:queue:TESTQUEUE"/>
</camel:route>
```
I am trying to hit this route from a standalone client like this:
```
public static void main(String[] args) {
    CamelContext context = new DefaultCamelContext();
    producer = context.createProducerTemplate();
    producer.sendBody("seda:input", "Hey");
}
```
But my producer is not able to connect to the seda queue.
I am not able to hit the queue of my route, and not able to add camelContext as a bean property: I am getting "Invalid property 'camelContext' of bean class". If I send the body to the SEDA queue, the message is going there but not to the next element of the route.
|
As Petter suggested, your client needs to connect to the same Camel Context which the SEDA route is defined in. In your example, it appears that you are creating a new DefaultCamelContext() and trying to send a message through to the route that is defined in another context.
Generally, I define the Camel Context in Spring XML and then inject the context into any classes that need it...
```
<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
    <routeBuilder ref="myRouter"/>
</camelContext>

<bean id="myClient" class="com.mycompany.MyClient">
    <property name="camelContext" ref="camelContext"/>
</bean>
```
Then, your client code would simply need to call the following...
```
getCamelContext().createProducerTemplate().sendBody("seda:input","Hey");
```
That said, if your client code is not in the same JVM or not able to get a handle to the same CamelContext, then your options are to use JMS, REST, HTTP (or any [camel component](http://camel.apache.org/components.html) that support remote client interfaces)...instead of or around the SEDA endpoint.
For example, you could wrap access to your SEDA queue with an HTTP endpoint (via [camel-jetty](http://camel.apache.org/jetty.html)) like this...
```
from("jetty:http://localhost:9001/input").to("seda:input");
```
|
Ambiguous use of -CONSTANT resolved as -&CONSTANT()
I'm trying to declare magic numbers as constants in my Perl scripts, as described in perlsub. However, I get warnings:
```
$ cat foo.perl
use warnings ; use strict ;
sub CONSTANT() { 5 }
print 7-CONSTANT,"\n" ;
$ perl foo.perl
Ambiguous use of -CONSTANT resolved as -&CONSTANT() at foo.perl line 3.
2
$
```
The warning goes away if I insert a space between the minus and the `CONSTANT`. It makes the expressions more airy than I'd like, but it works.
I'm curious, though: What is the ambiguity it's warning me about? I don't know any other way it could be parsed.
(Perl 5.10.1 from Debian "squeeze").
|
First, some background. Let's look at the following for a second:
```
$_ = -foo;
```
`-foo` is a string literal[1].
```
$ perl -Mstrict -wE'say -foo;'
-foo
```
Except if a sub named `foo` has been declared.
```
$ perl -Mstrict -wE'sub foo { 123 } say -foo;'
Ambiguous use of -foo resolved as -&foo() at -e line 1.
-123
```
Now back to your question. The warning is wrong. A TERM (`7`) cannot be followed by another TERM, so `-` can't be the start of a string literal or a unary minus operator. It *must* be the subtraction operator, so *there is no ambiguity*.
This warning is still issued in 5.20.0[2]. I have filed a [bug report](https://rt.perl.org/Ticket/Display.html?id=121700).
---
1. Look ma! No quotes!
```
system(grep => ( -R, $pat, $qfn ));
```
2. Well, 5.20.0 isn't out yet, but we're in a code freeze running up to its release. This won't be fixed in 5.20.0.
|
Converting date format to YYYY-MM-DD from YYYY/MM/DD HH:MM:SS format in Logstash for nginx error logs
I am having nginx error logs of the below form:-
>
> 2015/09/30 22:19:38 [error] 32317#0: \*23 [lua] responses.lua:61:
> handler(): Cassandra error: Error during UNIQUE check: Cassandra
> error: connection refused, client: 127.0.0.1, server: , request: "POST
> /consumers/ HTTP/1.1", host: "localhost:8001"
>
>
>
As mentioned [here](https://logstash.jira.com/browse/LOGSTASH-1663?focusedCommentId=20224&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-20224) I am able to parse this logs.
My filter configuration is like the below:-
```
filter {
  grok {
    match => {
      "message" => [
        "%{DATESTAMP:mydate} \[%{DATA:severity}\] (%{NUMBER:pid:int}#%{NUMBER}: \*%{NUMBER}|\*%{NUMBER}) %{GREEDYDATA:mymessage}",
        "%{DATESTAMP:mydate} \[%{DATA:severity}\] %{GREEDYDATA:mymessage}",
        "%{DATESTAMP:mydate} %{GREEDYDATA:mymessage}"
      ]
    }
    add_tag => ["nginx_error_pattern"]
  }

  if ("nginx_error_pattern" in [tags]) {
    grok {
      match => {
        "mymessage" => [
          "server: %{DATA:[request_server]},"
        ]
      }
    }
    grok {
      match => {
        "mymessage" => [
          "host: \"%{IPORHOST:[request_host]}:%{NUMBER:[port]}\""
        ]
      }
    }
    grok {
      match => {
        "mymessage" => [
          "request: \"%{WORD:[request_method]} %{DATA:[request_uri]} HTTP/%{NUMBER:[request_version]:float}\""
        ]
      }
    }
    grok {
      match => {
        "mymessage" => [
          "client: %{IPORHOST:[clientip]}",
          "client %{IP:[clientip]} "
        ]
      }
    }
    grok {
      match => {
        "mymessage" => [
          "referrer: \"%{DATA:[request_referrer]}\""
        ]
      }
    }
  }
}
```
`mydate` is having date of the form:-
```
"mydate" => "15/09/30 22:19:38"
```
Can someone let me know how can I add one more field (let's say `log_day`) having date of the form `2015-09-30`?
|
It is always a good idea to save the time/date in a field of type `date`. It enables you to do [complex range queries](https://www.elastic.co/guide/en/elasticsearch/reference/2.0/query-dsl-range-query.html#ranges-on-dates) with Elasticsearch or Kibana.
You can use [logstash's date filter](https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html) to parse the date.
**Filter:**
```
date {
match => [ "mydate", "YY/MM/dd HH:mm:ss" ]
}
```
**Result:**
```
"@timestamp" => "2015-09-30T20:19:38.000Z"
```
The date filter puts the result in the `@timestamp` field by default.
To **avoid the default mapping** into `@timestamp` field, specify the target field like "log\_day", such as following:
**Filter:**
```
date {
match => [ "mydate", "YY/MM/dd HH:mm:ss" ]
target => "log_day"
}
```
**Result:**
```
"log_day" => "2015-09-30T20:19:38.000Z"
```
Once you have a field of type `date` you can proceed with further operations. You might use the [date\_formatter](https://github.com/wiibaa/logstash-filter-date_formatter) filter to create another date field in your special format.
```
date_formatter {
source => "log_day"
pattern => "YYYY-MM-dd"
}
```
Result: `"log_day" => "2015-09-30"`
|
Strange printf behavior
```
std::vector<DWORD64> v;
for(size_t i = init; i < pageSize; ++i)
    v.push_back(i);

DWORD64 last = *(v.rbegin());
DWORD64 first = *(v.begin());

printf("%d %d \n", last, first);
printf("%d %d \n", first, last);
```
outputs:
```
4095 0
0 0
```
I can't understand why `printf` behaves like that. Neither `init` nor `pageSize` is 0.
I understand that `%d` is not valid for an unsigned long long, but what bothers me is that `printf`'s behavior changes when the argument order changes.
|
>
> Neither init or pageSize is 0.
>
>
>
Nor is `%d` a suitable format string specifier for a 64-bit value, I'd bet :-)
More than likely, you'll need to use `%ld` (if your longs are 64 bit) or `%lld` (if your long longs are 64 bit) or the fixed-width specifier macros from the latest C standard that I can never remember off the top of my head, assuming they're available in your environment :-)
That whole problem would probably disappear if you embraced C++ rather than that half-ground which many coders seem to exist in (using legacy stuff like `stdio.h` when better alternatives are available). You should use the type-aware:
```
std::cout << a << ' ' << b << '\n';
```
It also helps to have a compiler that's a bit intelligent, and ensure you *use* that intelligence:
```
pax$ cat qq.cpp
#include <iostream>
#include <vector>
#include <cstdio>

int main (void) {
    std::vector<int> v;
    v.push_back (111142);
    v.push_back (314159);
    long long a = *(v.begin());
    long long b = *(v.rbegin());
    printf ("%6d %6d, a then b, bad\n", a, b);
    printf ("%6d %6d, b then a, bad\n", b, a);
    std::cout << a << ' ' << b << ", good\n";
    return 0;
}
pax$ g++ -Wall -Wextra -o qq qq.cpp
qq.cpp: In function 'int main()':
qq.cpp:11: warning: format '%d' expects type 'int', but argument 2
has type 'long long int'
qq.cpp:11: warning: format '%d' expects type 'int', but argument 3
has type 'long long int'
: : : : :
qq.cpp:12: warning: format '%d' expects type 'int', but argument 3
has type 'long long int'
pax$ ./qq
111142 0, a then b, bad
314159 0, b then a, bad
111142 314159, good
```
For those truly interested in the mechanics as to why the values change based on their order in the `printf`, see [this answer](https://stackoverflow.com/questions/2424528/printf-of-a-size-t-variable-with-lld-ld-and-d-type-identifiers/2424554#2424554).
It goes into detail about what things (and more importantly, the sizes of those things) get pushed on the stack, comparing them with what you told `printf` would be there.
Long story writ short: you lied to `printf` so it treated you the same way your significant other would, had you been caught lying to them :-)
|
Can JSON schema enums be case insensitive?
# JSON Schema enums
[JSON Schemas feature enums, which impose a constraint on the values of a string type](http://spacetelescope.github.io/understanding-json-schema/UnderstandingJSONSchema.pdf):
```
{
"type": "array",
"items": [
{
"type": "number"
},
{
"type": "string"
},
{
"type": "string",
"enum": ["Street", "Avenue", "Boulevard"]
},
{
"type": "string",
"enum": ["NW", "NE", "SW", "SE"]
}
]
}
```
This schema validates values such as `[1600, "Pennsylvania", "Avenue", "NW"]`.
# The problem
Is there an elegant way to make the `enum` case-insensitive, so that both `Avenue` and `avenue` would be accepted as the third value in the array?
# Other possible solutions
I can use `anyOf` on a list of values, and validate each against a case-insensitive regex - but that's cumbersome, error-prone and inelegant.
|
I'm afraid you won't find any elegant solution to this. There was a proposal for [case-insensitive enums, and several issues were discussed](http://grokbase.com/p/gg/json-schema/139v21sth3/proposal-case-insensitive-enum).
So if you cannot avoid the requirement, regex solutions are the only feasible ones. Another brute-force approach would be to have n complete lists of enum values: one with initial capital letters, another with all capital letters, and so on, and then use `anyOf` as you stated. You can automate the creation of this JSON Schema easily. Obviously it won't be very readable.
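For illustration, the regex-based workaround for the street-type enum might look like the sketch below. Explicit character classes are used because JSON Schema's ECMA-262 regex dialect has no portable inline case-insensitivity flag:

```json
{
  "type": "string",
  "anyOf": [
    { "pattern": "^[Ss][Tt][Rr][Ee][Ee][Tt]$" },
    { "pattern": "^[Aa][Vv][Ee][Nn][Uu][Ee]$" },
    { "pattern": "^[Bb][Oo][Uu][Ll][Ee][Vv][Aa][Rr][Dd]$" }
  ]
}
```

This accepts any case combination ("avenue", "AVENUE", "AvEnUe"), which may be more than you want; restricting it to a few known capitalizations needs the brute-force lists instead.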
Anyway, I would try to solve this with a pre-processing step before validation. You might convert the required properties to lowercase if they are present, and then validate. I find it a bit forced to use the JSON Schema specification to allow 'dirty' data.
|
Ticks between Unix epoch and GPS epoch
What is the number of one second ticks between Unix time epoch (01 Jan 1970) and GPS time epoch (06 Jan 1980)?
I have seen multiple answers from several sources on the web. One camp claims the answer is **315964800**, the other claims it is **315964819**. I always thought it was 315964800, but now am not so sure.
I just found my software baseline has been using 315964819 for the last eight years. I have a hard time understanding how it could have been 19 seconds off and no one noticed it when we integrated our embedded devices with other devices.
I think that whoever put 315964819 in the code baseline must have mistakenly used a TAI offset (19 seconds).
From what I understand, Unix time does not include leap seconds, which would indicate to me that 315964800 is the number of ticks between the two epochs. Then I think about how Unix time handles the leap second. It simply repeats the tick count when there is a leap second inserted, and there *were* 19 leap seconds inserted between 1970 and 1980... I start to wonder if the repeated ticks matter. I do not think so, but someone in this code's history thought so, and it seemed to work....
The long and short of it is I am about to change a constant set in the dark ages of this product that has to do with timing, which is important for the platform, from what it had been to what I believe is more accurate, and I wanted some sort of thumbs-up from more knowledgeable people than me.
Can someone authoritative please step in here?
[315964800 camp](https://www.google.com/#q=315964800)
[315964819 camp](https://www.google.com/#q=315964819)
Also note that I'm only asking about Unix epoch to GPS epoch. I'm pretty sure we've got leap seconds since GPS epoch covered appropriately.
|
The different values you stated are caused by mixing up the 1970 to 1980 offset with leap seconds.
The correct offset value is 315964800 seconds.
**Explanation:**
UTC and GPS time deviate (on average) every 18 months by one additional second.
This is called a leap second, introduced into the UTC time base and necessary to adjust for changes in the earth's rotation.
GPS Time is not adjusted by leap seconds.
Currently (2013) there is an offset of 16s:
GPS Time-UTC = 16 seconds
Unix time is a time format, not a time reference.
It represents the number of seconds (or milliseconds, in APIs such as Java's) since 1.1.1970 UTC.
Ideally your system time is synchronized with UTC by a TimeServer (NTP).
To convert, and get your offset, you should use the fixed offset (6.1.1980 UTC - 1.1.1970 UTC)
and THEN add the current value of the GPS-to-UTC deviation (currently 16s).
E.g. make that value configurable, or read the current offset from a GPS device (they know the difference between UTC and GPS Time).
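That two-step conversion can be sketched as follows (the function and parameter names here are my own):

```python
from datetime import datetime, timezone

# Fixed offset between the Unix epoch (1970-01-01) and the GPS epoch
# (1980-01-06); leap seconds are deliberately NOT part of this constant.
UNIX_TO_GPS_EPOCH = int((datetime(1980, 1, 6, tzinfo=timezone.utc)
                         - datetime(1970, 1, 1, tzinfo=timezone.utc))
                        .total_seconds())

def unix_to_gps(unix_seconds, gps_utc_offset=16):
    """Convert a Unix (UTC) timestamp to GPS time.

    gps_utc_offset is the GPS-to-UTC deviation at that moment (16 s as
    of 2013); keep it configurable, because it grows with each new
    leap second.
    """
    return unix_seconds - UNIX_TO_GPS_EPOCH + gps_utc_offset

print(UNIX_TO_GPS_EPOCH)  # 315964800
```

`UNIX_TO_GPS_EPOCH` works out to 315964800, the fixed offset this answer recommends.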
Again: the different values you stated come from mixing up the 1970-to-1980 offset with leap seconds.
Don't do that; handle them separately.
This java program:
```
SimpleDateFormat df = new SimpleDateFormat("d.M.yyyy HH:mm:ss");
df.setTimeZone(TimeZone.getTimeZone("UTC"));
Date x = df.parse("1.1.1970 00:00:00");
Date y = df.parse("6.1.1980 00:00:00");
long diff = y.getTime() - x.getTime();
long diffSec = diff / 1000;
System.out.println("diffSec= " + diffSec);
```
Outputs this value:
>
> diffSec= 315964800
>
>
>
So this is the correct offset between 1.1.1970 UTC and 6.1.1980 UTC where GPS Time began.
Then you have to correct by a further 16 seconds, the leap seconds introduced between 6.1.1980 and today, to calculate the GPS Time of a current UTC time.
|
Why are my delegate methods never called?
I've tried to set up a very basic delegate between a `TableViewController` and a `DetailViewController`, but the methods are never called. Here's my code:
**DetailViewController.h**
```
@protocol DetailViewControllerDelegate
- (void) detailViewControllerDidLike;
- (void) detailViewControllerDidUnlike;
- (void) detailViewControllerDidDislike;
@end
```
**DetailViewController.m**
```
- (IBAction) changeLikedSwitch: (id) sender
{
UISwitch *likedSwitch = (UISwitch *) sender;
if ([likedSwitch isOn]) {
[_selectedQuote setIsLiked: [NSNumber numberWithBool: YES]];
[self.delegate detailViewControllerDidLike];
} else {
[_selectedQuote setIsLiked: [NSNumber numberWithBool: NO]];
[self.delegate detailViewControllerDidUnlike];
}
NSError *error;
if (![[[CDManager sharedManager] managedObjectContext] save:&error]) NSLog(@"Saving changes failed: %@, %@", error, [error userInfo]);
}
- (IBAction) changeDislikedSwitch: (id) sender
{
UISwitch *dislikedSwitch = (UISwitch *) sender;
if ([dislikedSwitch isOn]) {
[_selectedQuote setIsDisliked: [NSNumber numberWithBool: YES]];
[self.delegate detailViewControllerDidDislike];
[self dismissViewControllerAnimated: YES completion: nil];
} else {
[_selectedQuote setIsDisliked: [NSNumber numberWithBool: NO]];
}
NSError *error;
if (![[[CDManager sharedManager] managedObjectContext] save:&error]) NSLog(@"Saving changes failed: %@, %@", error, [error userInfo]);
}
```
**TableViewController.h Interface line:**
```
@interface TableViewController : UITableViewController <NSFetchedResultsControllerDelegate, DetailViewControllerDelegate>
```
**TableViewController.m**
```
- (void) detailViewControllerDidLike
{
NSLog(@"detailViewControllerDidLike!");
[self.tableView reloadData];
}
- (void) detailViewControllerDidUnlike
{
NSLog(@"detailViewControllerDidUnlike!");
[self.tableView reloadData];
}
- (void) detailViewControllerDidDislike
{
NSLog(@"detailViewControllerDidDislike!");
[self.tableView reloadData];
}
```
None of these methods are called. I'm trying to work out whether it's because I haven't set the delegate, but I don't understand how I can do that. There isn't an instance of my `DetailViewController` in my `TableViewController`, so how am I supposed to set one of its properties? Isn't the whole point of having a delegate that I don't need to create a concrete link between the classes? Very, very confused here.
|
You do need to set the delegate for delegate methods to be called.
You must have a class that creates both the TableViewController and the DetailViewController. When they are created, you would call
`[myDetailViewControllerObject setDelegate:myTableViewControllerObject];`
to set the delegate. This is assuming you've defined a delegate property in DetailViewController with
```
@property (readwrite, weak) id<DetailViewControllerDelegate> delegate;
```
A further explanation of delegates:
The point of a delegate is so that you don't need a specific type of object, you only need an object that implements the protocol. There still needs to be a connection between the delegate and the "delegator". If you want no concrete connection, then you would want to use an NSNotification, which is very much a "shout into the ether, and hope something is listening" method of communication.
In this case, a delegate is the correct thing to use. Delegates should be used for one-to-one relationships; NSNotifications are best used for one-to-N relationships, where N can be 0 or more.
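The distinction can be sketched in a few lines (Python here purely for brevity; the class names mirror the question but are otherwise hypothetical):

```python
# Delegate: a one-to-one link; the delegator holds a single reference
# and calls it directly (or not at all, if no delegate was set).
class DetailViewController:
    def __init__(self):
        self.delegate = None

    def like(self):
        if self.delegate is not None:
            self.delegate.did_like()

# Notification center: one-to-N; the sender just broadcasts and any
# number of observers (including zero) may be listening.
class NotificationCenter:
    def __init__(self):
        self.observers = []

    def post(self, name):
        for callback in self.observers:
            callback(name)

calls = []

class TableViewController:
    def did_like(self):
        calls.append("liked")

detail = DetailViewController()
detail.delegate = TableViewController()  # the crucial wiring step
detail.like()

center = NotificationCenter()
center.observers.append(lambda name: calls.append(name))
center.observers.append(lambda name: calls.append(name))
center.post("reload")

print(calls)  # ['liked', 'reload', 'reload']
```

Without the wiring step (`detail.delegate = ...`), `like()` silently does nothing, which is exactly the symptom described in the question.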
|
Why inet\_ntop() and inet\_ntoa() gives different result?
I am creating a UDP server-client program. Client requests a file and the server sends to client if found.
Based on Beej's Guide to Networking,
>
> - inet\_ntoa() returns the dots-and-numbers string in a static buffer that is overwritten with each call to the function.
> - inet\_ntop() returns the dst parameter on success, or NULL on failure (and errno is set).
>
>
>
The guide mentions ntoa is deprecated so ntop is recommended since it supports IPv4 and IPv6.
In my code I am getting different results when I use one function or the other, and my understanding is that they should produce the same result. Am I missing anything? Any help would be greatly appreciated.
Code:
```
//UDP Client
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#define MAXBUFLEN 1024
#define SER_IP "176.180.226.0"
#define SER_PORT "1212"
// Get port, IPv4 or IPv6:
in_port_t get_in_port(struct sockaddr *sa){
if (sa->sa_family == AF_INET) {
return (((struct sockaddr_in*)sa)->sin_port);
}
return (((struct sockaddr_in6*)sa)->sin6_port);
}
int main(int argc, char *argv[]){
int sock, rv, numbytes;
struct addrinfo hints, *servinfo, *p;
char buffer[MAXBUFLEN];
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;
rv = getaddrinfo(NULL, SER_PORT, &hints, &servinfo);
if (rv != 0){
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
exit(1);
}
// Printing IP, should provide same result
for(p = servinfo; p != NULL; p = p->ai_next) {
char str1[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &p->ai_addr, str1, INET_ADDRSTRLEN);
printf("ntop:%s\n", str1) ;
printf("inet_ntoa:%s \n", inet_ntoa(((struct sockaddr_in *)p->ai_addr)->sin_addr));
printf("\n");
}
exit(1);
}
```
Current output:
```
ntop:64.80.142.0
inet_ntoa:0.0.0.0
ntop:160.80.142.0
inet_ntoa:127.0.0.1
```
|
As per the `man` page, in the case of `AF_INET` the argument `src` must point to a `struct in_addr` (network byte order).
In your `struct addrinfo` you have a pointer to `struct sockaddr` which is basically
```
sa_family_t sa_family;
char sa_data[];
```
However, `struct sockaddr_in` is
```
sa_family_t sin_family;
in_port_t sin_port;
struct in_addr sin_addr;
```
So, you need to replace
```
inet_ntop(AF_INET, &p->ai_addr, str1, INET_ADDRSTRLEN);
```
by either
```
inet_ntop(AF_INET, &p->ai_addr->sa_data[2], str1, INET_ADDRSTRLEN);
```
(to avoid the "magic number" `2`, which accounts for the port number stored in front of the address, the offset can be written as `sizeof(in_port_t)`)
or
```
inet_ntop(AF_INET, &((struct sockaddr_in *)p->ai_addr)->sin_addr, str1, INET_ADDRSTRLEN);
```
Then it will produce correct output.
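To see why the offset matters, here is a small illustration in Python, whose `socket` module exposes the same conversion. The packed bytes mimic the layout of `struct sockaddr_in` (as a simplification, the family field is packed big-endian here, whereas the real struct stores it in host order):

```python
import socket
import struct

# Mimic struct sockaddr_in: 2-byte family, 2-byte port (network order),
# then the 4-byte IPv4 address.
sockaddr_in = struct.pack("!HH4s", socket.AF_INET, 1212,
                          socket.inet_aton("127.0.0.1"))

# Correct: convert only the 4 address bytes, i.e. what
# &((struct sockaddr_in *)p->ai_addr)->sin_addr points at.
right = socket.inet_ntop(socket.AF_INET, sockaddr_in[4:8])

# Wrong: start two bytes early, so the port bytes are read as part of
# the address and everything shifts (the garbage the question shows).
wrong = socket.inet_ntop(socket.AF_INET, sockaddr_in[2:6])

print(right)  # 127.0.0.1
print(wrong)  # 4.188.127.0  (0x04BC is port 1212)
```

The "wrong" value shows how plausible-looking but nonsensical addresses appear when `inet_ntop()` is handed the wrong starting offset.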
|
Why did this code fail to send password reset link in firebase?
I am new to firebase and I am trying to handle firebase user authentication in React.js. I did manage to create users with email and passwords. But, now I would like to send the user an Email link to reset their password.
My code currently look like this.
```
// This line of code belongs to the top
import { auth } from '../firebaseConfig'
//This part goes under the React component
<p onClick={async () => {
try{
await sendPasswordResetEmail(auth, // My Email Id)
alert('Password reset link has been sent to your email')
}
catch(err){
alert(err)
}
}}
>Forgot your Password ?</p>
```
I do not get any error messages, and I do get the alert message that says "Password reset link has been sent to your email." However, I never received the email. Note that I have given my own email address as the parameter for testing purposes.
|
*firebaser here*
Did you check your spam folder? We have recently seen a lot of the emails from Firebase Authentication ending up in users' spam folders or being marked as spam by systems along the way. This is being tracked in this [status message](https://status.firebase.google.com/incidents/LqqtfhaYsmNsYe5QpYic) on the Firebase dashboard and in [public issue #253291461](https://issuetracker.google.com/issues/253291461).
To reduce the chances of the messages getting marked as spam, consider taking more control of the email delivery yourself.
1. As a first step, consider [using a custom domain](https://firebase.google.com/docs/auth/email-custom-domain) with your project. Email that comes from a custom domain has less chance of being marked as spam.
2. As a second step, consider [setting up your own SMTP server](https://stackoverflow.com/questions/51008278/how-do-i-connect-a-smtp-with-firebase#:%7E:text=You%20can%20set%20a%20custom,to%20log%20into%20smtp2go%20with).) for delivering the email, so that the emails are not being delivered from Firebase's shared infrastructure anymore.
While these steps are more involved, they typically will drastically reduce the cases where the messages from Firebase Authentication are marked as spam.
|
Which features of Perl make it a functional programming language?
Inspired a little by: <https://stackoverflow.com/questions/30977789/why-is-c-not-a-functional-programming-language>
I found: [Higher Order Perl](http://hop.perl.plover.com/)
It made me wonder about the assertion that Perl is a functional programming language. Now, I appreciate that functional programming is a technique (much like object oriented).
However I've found a list of [what makes a functional programming language](http://c2.com/cgi/wiki?FunctionalProgramming):
- First Class functions
- Higher Order Functions
- Lexical Closures
- Pattern Matching
- Single Assignment
- Lazy Evaluation
- Garbage Collection
- Type Inference
- Tail Call Optimization
- List Comprehensions
- Monadic effects
Now some of these I'm quite familiar with:
Garbage collection, for example, is Perl reference counting and releasing memory when no longer required.
Lexical closures are even part of the FAQ: [What is a closure?](http://perldoc.perl.org/perlfaq7.html#What's-a-closure%3F) - there's probably a better article here: <http://www.perl.com/pub/2002/05/29/closure.html>
But I start to get a bit fuzzy on some of these - List Comprehensions, for example - I think that's referring to `map`/`grep` (`List::Util` and `reduce`?)
Is anyone able to help me fill in the blanks here? Which of the above can Perl do easily (and is there an easy example), and are there examples where it falls down?
|
## Useful things that are relevant:
[Perl monks rant about functional programming](http://www.perlmonks.org/?node_id=450922)
[Higher Order Perl](http://hop.perl.plover.com/)
[C2.com functional programming definitions](http://c2.com/cgi/wiki?FunctionalProgramming)
# [First Class functions](https://en.wikipedia.org/wiki/First-class_function)
>
> In computer science, a programming language is said to have first-class functions if it treats functions as first-class citizens. Specifically, this means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures.
>
>
>
So in Perl:
```
my $print_something = sub { print "Something\n" };
sub do_something {
my ($function) = @_;
$function->();
}
do_something($print_something);
```
## Verdict: Natively supported
# [Higher Order Functions](https://en.wikipedia.org/wiki/Higher-order_function)
>
> In mathematics and computer science, a higher-order function (also functional form, functional or functor) is a function that does at least one of the following:
>
>
> - takes one or more functions as an input
> - outputs a function
>
>
>
With reference to [this post on perlmonks](http://www.perlmonks.org/?node_id=492651):
>
> In Perl terminology, we often refer to them as callbacks, factories, and functions that return code refs (usually closures).
>
>
>
## Verdict: Natively supported
# [Lexical Closures](http://www.perl.com/pub/2002/05/29/closure.html)
Within the perl FAQ we have questions regarding [`What is a closure?`](http://perldoc.perl.org/perlfaq7.html#What's-a-closure%3F):
>
> Closure is a computer science term with a precise but hard-to-explain meaning. Usually, closures are implemented in Perl as anonymous subroutines with lasting references to lexical variables outside their own scopes. These lexicals magically refer to the variables that were around when the subroutine was defined (deep binding).
>
>
> Closures are most often used in programming languages where you can have the return value of a function be itself a function, as you can in Perl.
>
>
>
This is explained perhaps a little more clearly in the article: [Achieving Closure](http://www.perl.com/pub/2002/05/29/closure.html)
```
sub make_hello_printer {
my $message = "Hello, world!";
return sub { print $message; }
}
my $print_hello = make_hello_printer();
$print_hello->()
```
## Verdict: Natively supported
# [Pattern Matching](http://c2.com/cgi/wiki?PatternMatching)
>
> In the context of pure functional languages and of this page, Pattern Matching is a dispatch mechanism: choosing which variant of a function is the correct one to call. Inspired by standard mathematical notations.
>
>
>
Dispatch tables are the closest approximation - essentially a hash of either anonymous subs or code refs.
```
use strict;
use warnings;
sub do_it {
print join( ":", @_ );
}
my $dispatch = {
'onething' => sub { print @_; },
'another_thing' => \&do_it,
};
$dispatch->{'onething'}->("fish");
```
Because it's `just` a hash, you can add code references and anonymous subroutines too. (Note - not entirely dissimilar to Object Oriented programming)
## Verdict: Workaround
# [Single Assignment](https://en.wikipedia.org/wiki/Assignment_(computer_science)#Single_assignment)
>
> Any assignment that changes an existing value (e.g. x := x + 1) is disallowed in purely functional languages. In functional programming, assignment is discouraged in favor of single assignment, also called initialization. Single assignment is an example of name binding and differs from ordinary assignment in that it can only be done once, usually when the variable is created; no subsequent reassignment is allowed.
>
>
>
I'm not sure `perl` really does this. The closest approximation might be references/anonymous subs or perhaps `constant`.
## Verdict: Not Supported
# [Lazy Evaluation](http://c2.com/cgi/wiki?LazyEvaluation)
>
> Waiting until the last possible moment to evaluate an expression, especially for the purpose of optimizing an algorithm that may not use the value of the expression.
>
>
>
[Examples of lazy evaluation techniques in Perl 5?](https://stackoverflow.com/questions/4382337/examples-of-lazy-evaluation-techniques-in-perl-5)
And again, coming back to [Higher Order Perl](http://hop.perl.plover.com/book/pdf/06InfiniteStreams.pdf) (I'm not affiliated with this book, honest - it just seems to be one of the key texts on the subject).
The core concept here seems to be: create a 'linked list' in perl (using object oriented techniques) but embed a code reference at your 'end marker' that is only evaluated if you ever get that far.
## Verdict: Workaround
# [Garbage Collection](http://c2.com/cgi/wiki?GarbageCollection)
>
> "GarbageCollection (GC), also known as automatic memory management, is the automatic recycling of heap memory."
>
>
>
Perl does this via reference counting, releasing things when they are no longer referenced. Note that this can have implications for certain constructs that you're (probably!) more likely to encounter when programming functionally.
Specifically - circular references which are covered in [`perldoc perlref`](http://perldoc.perl.org/perlref.html#Circular-References)
## Verdict: Native support
# [Type Inference](http://c2.com/cgi/wiki?TypeInference)
>
> TypeInference is the analysis of a program to infer the types of some or all expressions, usually at CompileTime
>
>
>
Perl implicitly casts values back and forth as it needs to. Usually this works well enough that you don't need to mess with it. Occasionally you need to 'force' the process by performing an explicit numeric or string operation: canonically, either adding 0 or concatenating an empty string.
You can overload a scalar to do different things in numeric and string contexts by using [`dualvar`](http://perldoc.perl.org/Scalar/Util.html#%24var-%3D-dualvar(-%24num%2C-%24string-))
## Verdict: Native support
# [Tail Call Optimization](http://c2.com/cgi/wiki?TailCallOptimization)
>
> Tail-call optimization (or tail-call merging or tail-call elimination) is a generalization of TailRecursion: If the last thing a routine does before it returns is call another routine, rather than doing a jump-and-add-stack-frame immediately followed by a pop-stack-frame-and-return-to-caller, it should be safe to simply jump to the start of the second routine, letting it re-use the first routine's stack frame (environment).
>
>
>
[Why is Perl so afraid of "deep recursion"?](https://stackoverflow.com/questions/20710284/why-is-perl-so-afraid-of-deep-recursion)
It'll work, but it'll warn if your recursion depth is >100. You can disable this by adding:
```
no warnings 'recursion';
```
But obviously - you need to be slightly cautious about recursion depth and memory footprint.
As far as I can tell, there isn't any particular *optimisation* and if you want to do something like this in an efficient fashion, you may need to (effectively) unroll your recursives and iterate instead.
>
> Tailcalls are supported by perl. Either see the `goto &sub` notation, or see the neater syntax for it provided by `Sub::Call::Tail`
>
>
>
## Verdict: Native
# [List Comprehensions](http://c2.com/cgi/wiki?ListComprehension)
>
> List comprehensions are a feature of many modern FunctionalProgrammingLanguages. Subject to certain rules, they provide a succinct notation for GeneratingElements? in a list.
> A list comprehension is SyntacticSugar for a combination of applications of the functions concat, map and filter
>
>
>
Perl has `map`, `grep`, `reduce`.
It also copes with expansion of ranges and repetitions:
```
my @letters = ( "a" .. "z" );
```
So you can:
```
my %letters = map { $_ => 1 } ( "a" .. "z", "A" .. "Z" );
```
## Verdict: Native (`List::Util` is a core module)
# [Monadic effects](http://c2.com/cgi/wiki?OnMonads)
... nope, still having trouble with these. It's either much simpler or much more complex than I can grok.
If anyone's got anything more, please chip in or edit this post or ... something. I'm still a sketchy on some of the concepts involved, so this post is more a starting point.
|
Is it a good idea to make method behavior depend on the calling thread?
I want to subclass a 3rd party class, in order to make it thread-safe.
I have a good idea of how to implement this, but there is a problem: the superclass has a property, which affects the behaviour of one of its methods. If one thread sets the property, it will interfere with the other threads when they call the method.
I can see two ways to do this:
1. Create a thread-safe 'stateless' object which then has multiple 'views' into it. The property is in the view and each thread has its own view instance.
2. Detect which thread makes the call in the property's get accessor and the method, and store the state for that thread internally.
(1) is self-explanatory, but it involves more boilerplate code. (2) does something non-trivial behind the scenes, but if it works, it is completely transparent.
Which is best, for maintainability and readability? The more complex code but whose behaviour is up-front, or the code which is easier to use when it works, but if it breaks it will do so in a location and way which is not obvious?
Is there any reason an object should *not* be dependent on what thread interacts with it?
(EDIT: Removing reference to the 3rd party class, since the requirements of the implementation are not as simple as it sounds and it was generating more confusion than needed!)
|
The idea of having a thread-specific variable is not unreasonable, though I am unsure if it is appropriate for your use case. The idea of a thread-safe `Stream` strikes me as a bit broken; I would rather have a thread-safe `StreamFactory`.
The best way to implement a thread-specific state variable is to use either [`ThreadStatic`](https://msdn.microsoft.com/en-us/library/system.threadstaticattribute.aspx) or [`ThreadLocal<T>`](https://msdn.microsoft.com/en-us/library/dd642243.aspx). This makes your code short, simple, and trivially maintainable. This variable will be a member of your `Stream`.
See [ThreadStatic v.s. ThreadLocal: is generic better than attribute?](https://stackoverflow.com/questions/18333885/threadstatic-v-s-threadlocalt-is-generic-better-than-attribute) for discussion on which to use (short version: use `ThreadLocal<T>` if you're on .Net 4+).
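The pattern itself is not .NET-specific. As a quick illustration of the idea (in Python, whose `threading.local` plays the same role as `ThreadLocal<T>`; all names here are my own):

```python
import threading

# Attributes set on a threading.local instance are visible only to the
# thread that set them, so each thread gets its own state variable.
state = threading.local()
results = {}

def worker(name, value):
    state.value = value          # private to the current thread
    results[name] = state.value  # each thread reads back only its own value

threads = [threading.Thread(target=worker, args=(f"t{i}", i))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # every thread saw only the value it set itself
```

Making the property of the wrapped class read and write such a thread-local member gives option (2) from the question with very little code.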
|
In Base SAS, how can I auto refresh the explorer?
I'm fairly sure this must be something that has bugged others and so there must be a solution. I write my code and want to quickly check the dataset, but it isn't there. I need to select the window, click View and click refresh. Is there a keyboard shortcut I can use or a macro I can write that does this for me?
I know this is lazy but it bugs me.
Any thoughts are appreciated.
J
|
You could do this programmatically using:
```
dm "next explorer; refresh";
```
Or assign it to a shortcut key (eg F2) as follows:
```
dm "keydef F2 'next explorer; refresh'";
```
If you just want to open the last dataset, you could also assign this to a shortcut key:
```
dm "keydef F3 'vt &syslast'";
```
If the dataset is in a remote location, the following could be adapted for your needs (note the embedded sas code which gets submitted):
```
dm 'keydef F4 "submit ''rsubmit; %nrstr(%sysrput lastDS=&syslast;) endrsubmit;''; vt rwork.%scan(&lastDS,2,.)"';
```
More shortcuts available [here](https://stackoverflow.com/questions/1233056/sas-display-manager-commands)!
|
VIM: simple steps to create syntax highlight file - for logfiles
I have some (log4j generated) logfiles to go through; I know their format pretty well (I mean I have already got off-the-peg regexes etc I can use).
I want to automatically highlight them in VIM when I load them up (\*.log).
A logfile entry looks something like this:
```
YYYY-MM-DD HH:MM:ss,SSS [...] #LOG-LEVEL# [...] Message
```
Where #LOG-LEVEL# is one of the standard 'ERROR', 'INFO', 'DEBUG', 'FATAL'....and the 'YYYY-MM...' represents the date/time to millisecond resolution.
To get me started , what are the steps needed to get the date-string highlighted in (say) yellow-background with blue text - and also when the text reads ' ERROR ' this should have a red-background with white text.
I have tried going through some tutorials on this, but can't find one which is simple enough to understand, so I'm after some real basic steps here !
Cheers
EDIT:
Here's the summary of what I did, based on the instructions below:
1. Created the syntax file 'log.vim' in .vim\syntax (see below for example content).
2. Created a file in .vim\ftdetect\log.vim with the following content:
au BufRead,BufNewFile *.log set filetype=log
3. Made sure the following are in my startup settings:
syntax on
filetype on
|
There are three ways of defining syntax items (see `:help :syn-define`):
- Keywords: these are for items that are simple strings of keyword characters. This is the fastest matcher.
- Matches: these are regular expressions for matching.
- Regions: these are for long regions that are likely to contain other items.
There are various arguments that make things more complicated (to do with matches within regions etc), see `:help :syn-arguments` for a discussion of this.
There is a priority that comes into effect (see `:help :syn-priority`).
Colouring is controlled by the highlight command and is separate to the syntax commands.
A simple way to get started would be to use a match to detect the date and a keyword to detect error. Then use highlight to make the colours come to life:
```
" This creates a keyword ERROR and puts it in the highlight group called logError
:syn keyword logError ERROR
" This creates a match on the date and puts in the highlight group called logDate. The
" nextgroup and skipwhite makes vim look for logTime after the match
:syn match logDate /^\d\{4}-\d\{2}-\d\{2}/ nextgroup=logTime skipwhite
" This creates a match on the time (but only if it follows the date)
:syn match logTime /\d\{2}:\d\{2}:\d\{2},\d\{3}/
" Now make them appear:
" Link just links logError to the colouring for error
hi link logError Error
" Def means default colour - colourschemes can override
hi def logDate guibg=yellow guifg=blue
hi def logTime guibg=green guifg=white
```
Bung all of that in ~/.vim/syntax/log.vim and make sure the file type is set properly (see `:help filetype.txt`) - it should then load automatically.
Hopefully that should give you something to get going with. Have a (very gradual) read of the various sections of `:help syntax.txt` and `:help usr_44.txt` for further info.
|
Active Directory - Only bridgeheads get the full forest replication
In my Active Directory, we have two sites. One site has a single domain controller. The other site, DefaultFirstSiteName, has two domain controllers. When I view the replication status from the non-primary domain controller using repadmin /showrepl from the command line, I can see only replications from the primary domain controller, to my secondary domain controller in DefaultFirstSiteName.
Is the primary domain controller supposed to be the only domain controller in my DefaultFirstSiteName site that receives replications from its sister site?
Any input is appreciated. Thank you all.
|
Q: Is the primary domain controller supposed to be the only domain controller in my DefaultFirstSiteName site that receives replications from its sister site?
A: I think what you're really asking here is whether or not you should have inbound replication connections from DC3 to DC1 **and** DC2, and the answer is **no**. One aspect of the job of the KCC and the ISTG is to create a least-cost, loop free replication topology. If both DC's in the DefaultFirstSiteName site had inbound replication connections from the DC in the sister site then a loop would exist.
Let's assume DC1, DC2, and DC3. DC1 and DC2 are in the DefaultFirstSiteName site and DC3 is in the sister site. DC1 has an inbound replication connection from DC2, and DC2 has an inbound replication connection from DC1, so any change made on either of these two DC's is replicated to the other DC. DC3 has an inbound replication connection from either DC1 or DC2, but not both. Likewise DC1 or DC2, but not both, has an inbound replication connection from DC3. If DC1 and DC2 both had inbound replication connections from DC3 then a loop would exist. If DC3 had an inbound replication from both DC1 and DC2 then a loop would exist. What would happen to a change that occurred on any of the DC's in this scenario? The change would "collide" as it replicated from DC to DC.
There is a lot at play in AD replication and in building the replication topology, but in answering I tried to keep things simple.
|
Git submodule without extra weight
I'm not a Git master yet, faced a problem I can't figure out how to fix. I have a repo with my WordPress custom skeleton and I've added WordPress as a submodule from its original repo by `git submodule add wp_repo_url`. When I clone my repo to local machine with:
```
git clone --recursive https://github.com/user/repo local_dir
```
it downloads the WP submodule as expected, but here's the problem - actual files are only 20.7Mb, and in `.git/modules/core/objects/pack` I've got a huge 124Mb .pack file, which, I suppose, is something like the commit history / revisions of that submodule.
How can I re-add submodule or modify while cloning to prevent downloading this extra weight?
**UPDATE:**
With the help of @iclmam I've come up with the following setup:
- my skeleton repo will have WordPress as a submodule, the whole original repo with history
- when creating a new project from skeleton, I'll clone it without --recursive option to get only the main files and empty folder for submodule
- **IF** I need WordPress with full history - for example, if I need to switch between different WP branches/tags to test my plugin/theme backward compatibility - then I'll get this submodule with full history
- if I just need a plain clean install of recent WP version, I'll change into wp directory and go the old way:
```
curl -L -O http://wordpress.org/latest.zip
unzip latest.zip
mv wordpress/* .
rm latest.zip
rm -rf wordpress
```
Not a perfect solution (I wanted to automate everything as much as possible), but it works for now.
Any advice on the original question is appreciated.
|
Since Git 2.10+ (Q3 2016), you will be able to do a regular clone... and still benefit from **shallow clone for submodules**.
All you need to do is record that configuration in your `.gitmodules`:
```
git config -f .gitmodules submodule.<name>.shallow true
```
Add, commit, and push: anyone cloning your repo (regular clone, full history) will get only a depth of 1 for the submodule `<name>`.
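The full sequence looks like this (the submodule name `wordpress` is an assumption — use the `[submodule "..."]` name that actually appears in your `.gitmodules`):

```
# "wordpress" is a hypothetical submodule name -- match the one in your .gitmodules
git config -f .gitmodules submodule.wordpress.shallow true
git config -f .gitmodules --get submodule.wordpress.shallow   # prints: true
# then record and publish the recommendation:
# git add .gitmodules && git commit -m "Recommend shallow clone for submodule" && git push
```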
See [commit f6fb30a](https://github.com/git/git/commit/f6fb30a01d2373f36cbf2054be9ae1bf65475794), [commit abed000](https://github.com/git/git/commit/abed000acafd8aa86e02bcbb65fc1a8e4f06b8a0) and [commit 37f52e9](https://github.com/git/git/commit/37f52e93441e1da00c9c9824ed03cd074d77f43a) (03 Aug 2016) by [Stefan Beller (`stefanbeller`)](https://github.com/stefanbeller).
(Merged by [Junio C Hamano -- `gitster` --](https://github.com/gitster) in [commit dc7e09a](https://github.com/git/git/commit/dc7e09a3e0b1a06348a0b59da71ceefe08489e77), 08 Aug 2016)
## > `submodule update`: learn `--[no-]recommend-shallow` option
>
> Sometimes the history of a submodule is not considered important by the projects upstream. To make it easier for downstream users, allow a boolean field '`submodule.<name>.shallow`' in `.gitmodules`, which can be used to recommend whether upstream considers the history important.
>
>
> This field is honored in the initial clone by default, it can be ignored by giving the `--no-recommend-shallow` option.
>
>
>
|
expected scope variable undefined in karma test
I'm having trouble understanding how the scope gets initialized in Karma tests. I'm expecting a scope variable to be present when the test runs, but it keeps coming back as undefined.
What am I missing?
**Test Case**
```
describe('loginController', function() {
beforeEach(module('app'));
var $controller, $scope;
beforeEach(inject(function(_$controller_, $rootScope){
$controller = _$controller_;
$scope = $rootScope.$new();
}));
describe('$scope.login', function() {
beforeEach(function() {
controller = $controller('loginController', { $scope: $scope });
});
it('checks it initialized', function() {
expect($scope.foo).toEqual('foo');
expect($scope.bar).toEqual('bar');
//expect($scope).toBeDefined();
//expect($scope.loginData.userName).toEqual('');
//expect($scope.loginData.password).toEqual('');
});
    });
});
```
The controller:
```
angular.module('app').controller('loginController', ['$location',
'authService', function($scope, $location, authService) {
$scope.foo = 'foo';
$scope.bar = 'bar';
$scope.loginData = {
userName: '',
password: ''
};
}]);
```
|
I refactored the test code and now it works:
```
describe('loginController', function() {
beforeEach(module('app'));
var controller, scope;
beforeEach(inject(function($controller, $rootScope){
scope = $rootScope.$new();
console.log('scope1', scope);
controller = $controller('loginController', {
$scope: scope
});
}));
describe('login', function() {
it('sets variables ', function() {
expect(scope).toBeDefined();
expect(scope.loginData).toBeDefined();
expect(scope.loginData.userName).toEqual('');
expect(scope.loginData.password).toEqual('');
});
});
});
```
|
closest pair algorithm
I am trying to understand the closest pair algorithm. I understand about dividing the set in half. But I am having trouble understanding how to recursively compute the closest pair. I understand recursion, but do not understand how to compute the closest pair by recursion. If you have (1,2)(1,11)(7,8) how would recursion work on these?
|
If you mean [this algorithm](http://en.wikipedia.org/wiki/Closest_pair_of_points_problem#Planar_case) you do the following:
1. Sort points: (1,2) (1,11) (7,8)
2. Build two subsets: (1,2) (1,11) and (7,8)
3. Run the algorithm on (1,2) (1,11) and on (7,8) separately <= this is where the recursion comes. The result is dLmin = 9 and dRmin = infinity (there is no second point on the right)
4. dLRmin = sqrt(45)
5. result = min(dLmin, dRmin, dLRmin) = sqrt(45)
The recursion consists of the same steps as above. E.g. the call with (1,2) and (1,11) does:
1. Sort points: (1,2) (1,11)
2. Build two subsets: (1,2) and (1,11)
3. Run the algorithm on (1,2) and on (1,11) separately <= again recursion calls. The result is dLmin = infinity and dRmin = infinity
4. dLRmin = 9
5. result = min(dLmin, dRmin, dLRmin) = 9
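The recursion above can be sketched in Python (a minimal version of the planar algorithm: brute force below 4 points, otherwise recurse on both halves and check the middle strip):

```
import math

def closest_pair(pts):
    """Recursive divide-and-conquer closest pair distance."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(p):
        n = len(p)
        if n <= 3:  # base case: brute force the few remaining points
            return min((dist(a, b) for i, a in enumerate(p) for b in p[i + 1:]),
                       default=float("inf"))
        mid = n // 2
        mx = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))  # recurse on left and right halves
        # points near the dividing line, sorted by y; only a constant number can be closer
        strip = sorted((q for q in p if abs(q[0] - mx) < d), key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:
                d = min(d, dist(a, b))
        return d

    return solve(sorted(pts))  # step 1: sort points

print(closest_pair([(1, 2), (1, 11), (7, 8)]))  # sqrt(45) ~= 6.708
```

On the three points from the question this reproduces the sqrt(45) result computed by hand above.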
|
Using Authenticode with a ClickOnce WPF application
All right, I'm not doing something right, and I need some help. Here's what's happening:
1. I have a "real" Authenticode certificate from [Comodo](https://en.wikipedia.org/wiki/Comodo_Group) that I have paid for.
2. I'm trying to sign and deploy a WPF application written in Visual Studio 2012 and .NET 4.5.
3. In the properties of the project, I have checked "Sign the ClickOnce manifests" and have chosen my certificate.
4. I'm also using Comodo's timestamp sever (<http://timestamp.comodoca.com/authenticode>)
5. In the Publish tab and under the Prerequisites button, I have checked "Create setup program to install prerequisite components".
When I build and publish, everything works! The setup.exe is signed with my Comodo certificate, so that's good. Also, the `.application` file is signed with the Comodo certificate and my company name shows as the publisher -- this is also good.
Here comes the problem: Once the application is downloaded to the client, Windows 8 throws up a warning about an untrusted program (MyProgram.exe) and the publisher is not my company name. So, everything is getting signed *except for the actual executable.*
I've tried adding a post-build script that uses signtool.exe on obj\Release\MyProgram.exe, but when I try to install the application, I get a manifest error stating that the hash values don't match. In other words, the manifest is getting generated before the post-build event.
How do I sign my .exe and maintain the ClickOnce manifest's integrity? Is there a simple way to do this or do I have to use mage.exe on every file, by hand (I hope not)?
|
Well, no one has jumped on this, but thankfully, I figured it out!
Thanks to this question: ["File has a different computed hash than specified in manifest" error when signing the EXE](https://stackoverflow.com/questions/12521642/file-has-a-different-computed-hash-than-specified-in-manifest-error-when-signi)
I was able to edit the project file's XML (Unload the project, then choose "Edit myproject.csproj") and added:
```
<Target Name="SignOutput" AfterTargets="CoreCompile">
<PropertyGroup>
<TimestampServerUrl>http://timestamp.comodoca.com/authenticode</TimestampServerUrl>
<ApplicationDescription>My Project Friendly Name</ApplicationDescription>
<SigningCertificateCriteria>/n MyCertName</SigningCertificateCriteria>
</PropertyGroup>
<ItemGroup>
<SignableFiles Include="$(ProjectDir)obj\$(ConfigurationName)\$(TargetName)$(TargetExt)" />
</ItemGroup>
<GetFrameworkSdkPath>
<Output TaskParameter="Path" PropertyName="SdkPath" />
</GetFrameworkSdkPath>
<Exec Command=""$(SdkPath)bin\signtool" sign $(SigningCertificateCriteria) /d "$(ApplicationDescription)" /t "$(TimestampServerUrl)" "%(SignableFiles.Identity)"" />
</Target>
```
I had to move the signtool.exe file into the SDK folder (C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin, in my case), but after that it worked like a charm!
I hope this helps someone else in the future.
|
ShowDialog and UI interaction in BackGroundWorker Thread
After 2 hours of research, I still couldn't find a solution to my problem.
The task I do is process some files in the BackgroundWorker thread. However, sometimes I need to use ShowDialog to let the user choose the save-file location, but I'm getting the STA/MTA error.
MainForm code:
```
private void button2_Click(object sender, EventArgs e)
{
button1.Enabled = false;
ProcessReportWorker.RunWorkerAsync();
}
```
DoWork Code:
```
void ProcessReportWorker_DoWork(object sender, DoWorkEventArgs e)
{
int ReportCount = Reports.Count();
foreach (string Report in Reports)
{
ProcessReport NewReport = new ProcessReport(Report);
string result = NewReport.Start();
}
}
```
ProcessReport.Start() Code:
```
class ProcessReport
{
public string Start()
{
if(File.Exists(ResultPath))
{
SaveFileDialog SaveReport = new SaveFileDialog();
SaveReport.InitialDirectory = @"c:\somepath";
SaveReport.CheckPathExists = true;
SaveReport.DefaultExt = ".xls";
SaveReport.OverwritePrompt = true;
SaveReport.ValidateNames = true;
if (SaveReport.ShowDialog() == DialogResult.OK)
{
ResultPath = SaveReport.FileName;
if (File.Exists(ResultPath)) File.Delete(ResultPath);
}
}
}
}
```
As you can see, the ShowDialog is needed in some cases.
I believe this can be done using delegates, but I'm not very familiar with delegates. I did try the solution by Jon in [Calling ShowDialog in BackgroundWorker](https://stackoverflow.com/questions/10498555/calling-showdialog-in-backgroundworker) but I couldn't get it to work. (Maybe I'm doing something wrong with delegates?)
Someone please help me with this. Please provide me the code for delegates if needed for this. Thanks!
EDIT:
**Solution given by PoweredByOrange worked. HOwever, i had to make a small change to it:**
**this.Invoke((MethodInvoker)delegate{....});** did not work because
- the intention is to refer to the MainForm instance but this code exists in the ProcessReport Class. So the "**this**" here is referring to the ProcessReport class instance, but it must refer to the GUI instance (MainForm instance) to work.
My Fix:
I sent an instance of the MainForm to the ProcessReport class and made the changes as mentioned below:
IN DoWork:
```
ProcessReport NewReport = new ProcessReport(Report, this); //CHANGE: Sending 'this'
//this sends reference of MainForm(GUI) to the ProcessReport Class
```
In ProcessReport Class:
```
class ProcessReport
{
MainForm MainFormInstance;
public ProcessReport(string report, MainForm x)
{
MainFormInstance = x;
}
public string Start()
{
MainFormInstance.Invoke((MethodInvoker)delegate //changed this.Invoke to MainFormInstance.Invoke
{
SaveFileDialog SaveReport = new SaveFileDialog();
SaveReport.InitialDirectory = @"c:\somepath";
SaveReport.CheckPathExists = true;
SaveReport.DefaultExt = ".xls";
SaveReport.OverwritePrompt = true;
SaveReport.ValidateNames = true;
if (SaveReport.ShowDialog() == DialogResult.OK)
{
ResultPath = SaveReport.FileName;
if (File.Exists(ResultPath)) File.Delete(ResultPath);
}
});
}
}
```
So the above thing finally worked. I understood this pretty well, thanks to PoweredByOrange.
|
The reason you're getting the exception is because only the thread that owns a control is allowed to modify/access it. In this case, the `SaveFileDialog` belongs to your main thread, but the `Start()` method is running in a different (i.e. background) thread. Therefore, the background thread in this case needs to **ask the main thread to open up its SaveFileDialog**.
```
public string Start()
{
if(File.Exists(ResultPath))
{
this.Invoke((MethodInvoker)delegate
{
SaveFileDialog SaveReport = new SaveFileDialog();
SaveReport.InitialDirectory = @"c:\somepath";
SaveReport.CheckPathExists = true;
SaveReport.DefaultExt = ".xls";
SaveReport.OverwritePrompt = true;
SaveReport.ValidateNames = true;
if (SaveReport.ShowDialog() == DialogResult.OK)
{
ResultPath = SaveReport.FileName;
if (File.Exists(ResultPath)) File.Delete(ResultPath);
}
});
}
}
```
To make it more clear, assume you want your friend to give you one of **his** textbooks. You are NOT allowed to go to your friend's room and steal the book. What you could do, is call your friend (**invoke**) and ask for a favor (**delegate**).
|
Error while running Hive Action in Oozie
I'm trying to run a hive action through Oozie. My `workflow.xml` is as follows:
```
<workflow-app name='edu-apollogrp-dfe' xmlns="uri:oozie:workflow:0.1">
<start to="HiveEvent"/>
<action name="HiveEvent">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>oozie.hive.defaults</name>
<value>${hiveConfigDefaultXml}</value>
</property>
</configuration>
<script>${hiveQuery}</script>
<param>OUTPUT=${StagingDir}</param>
</hive>
<ok to="end"/>
<error to="end"/>
</action>
<kill name='kill'>
<message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end'/>
</workflow-app>
```
And here is my job.properties file:
```
oozie.wf.application.path=${nameNode}/user/${user.name}/hiveQuery
oozie.libpath=${nameNode}/user/${user.name}/hiveQuery/lib
queueName=interactive
#QA
nameNode=hdfs://hdfs.bravo.hadoop.apollogrp.edu
jobTracker=mapred.bravo.hadoop.apollogrp.edu:8021
# Hive
hiveConfigDefaultXml=/etc/hive/conf/hive-default.xml
hiveQuery=hiveQuery.hql
StagingDir=${nameNode}/user/${user.name}/hiveQuery/Output
```
When I run this workflow, I end up with this error:
```
ACTION[0126944-130726213131121-oozie-oozi-W@HiveEvent] Launcher exception: org/apache/hadoop/hive/cli/CliDriver
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/cli/CliDriver
```
`Error Code: JA018`
`Error Message: org/apache/hadoop/hive/cli/CliDriver`
I'm not sure what this error means. Where am I going wrong?
**EDIT**
[This link](http://archive.cloudera.com/cdh4/cdh/4/oozie/oozie-default.xml) says error code `JA018` is: `JA018 is output directory exists error in workflow map-reduce action`. But in my case the output directory does not exist. This makes it all the more confusing
|
I figured out what was going wrong!
The class `org/apache/hadoop/hive/cli/CliDriver` is required for execution of a Hive Action. This much is obvious from the error message. This class is within this jar file: `hive-cli-0.7.1-cdh3u5.jar`. (In my case cdh3u5 in my cloudera version).
Oozie checks for this jar in the `ShareLib` directory. The location of this directory is usually configured in `oozie-site.xml`, with the property name `oozie.service.WorkflowAppService.system.libpath`, so Oozie should find the jar easily.
But in my case this property was not set, so Oozie didn't know where to look for this jar, hence the `java.lang.NoClassDefFoundError`.
To resolve this, I had to include a parameter in my job.properties file to point oozie to the location of the `ShareLib` directory, as follows:
`oozie.libpath=${nameNode}/user/oozie/share/lib`. (depends on where `SharedLib` directory is configured on your cluster).
This got rid of the error!
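As an aside, on Oozie versions that support it there is also a boolean switch that tells the server to resolve its own configured ShareLib location, so you don't have to hard-code the HDFS path (hedged — check your Oozie version's documentation):

```
# alternative in job.properties, instead of spelling out the HDFS path:
oozie.use.system.libpath=true
```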
|
Python: How can I convert string to datetime without knowing the format?
I have a field that comes in as a string and represents a time. Sometimes its in 12 hour, sometimes in 24 hour. Possible values:
1. 8:26
2. 08:26am
3. 13:27
Is there a function that will convert these to time format by being smart about it? Option 1 doesn't have am because its in 24 hour format, while option 2 has a 0 before it and option 3 is obviously in 24 hour format. Is there a function in Python/ a lib that does:
```
time = func(str_time)
```
|
super short answer:
```
from dateutil import parser
parser.parse("8:36pm")
>>>datetime.datetime(2015, 6, 26, 20, 36)
parser.parse("18:36")
>>>datetime.datetime(2015, 6, 26, 18, 36)
```
Dateutil should be available for your python installation; no need for something large like pandas
If you want to extract the time from the `datetime` object:
```
t = parser.parse("18:36").time()
```
which will give you a `time` object (if that's of more help to you).
Or you can extract individual fields:
```
dt = parser.parse("18:36")
hours = dt.hour
minute = dt.minute
```
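If adding `dateutil` isn't an option, a stdlib-only fallback is to try each known layout in turn with `strptime`; the three format strings below are assumptions derived from the question's examples:

```
from datetime import datetime, time

CANDIDATE_FORMATS = ("%H:%M", "%I:%M%p", "%I:%M %p")  # assumed layouts

def parse_time(s):
    """Return a datetime.time, trying each known layout in order."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(s.strip(), fmt).time()
        except ValueError:
            continue  # wrong layout, try the next one
    raise ValueError(f"unrecognized time: {s!r}")

print(parse_time("8:26"))     # 08:26:00
print(parse_time("08:26am"))  # 08:26:00
print(parse_time("13:27"))    # 13:27:00
```

Note that a bare "8:26" is treated as 24-hour time here, matching the question's assumption.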
|
Docker "permission denied" in container
I am trying to run a docker image by
```
docker run -it -v $PWD/examples:/home/user/examples image
```
which should make `$PWD/examples` in the host accessible in the container. However when I `ls` in the container, it keeps giving me
```
ls: cannot access 'examples': Permission denied
```
I have tried the answers for similar questions, the `z/Z` option and `chcon -Rt svirt_sandbox_file_t /host/path/` and `run --privileged`, but neither of them have any effect in my case.
In fact, the `z` option appears to work for the first time `ls`, but when I issue `ls` the second time it is denied again.
|
In the comments it turned out that there is probably a `USER` instruction in the `Dockerfile` of the image. This user is not allowed to access `examples` due to file access permissions of `examples`.
---
It is possible to supersede `USER` with docker run option `--user`.
A quick and dirty solution is to run with `--user=root` to allow arbitrary access.
Be aware that files written as `root` in container to folder `examples` will be owned by `root`.
A better solution is to look for owner of `examples`, call him `foo`. Specify its user id and group id to have exactly the same user in container:
```
docker run --user $(id -u foo):$(id -g foo) imagename
```
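A sketch of that id lookup using the current host user (`image` and the mount path are taken from the question; the command is echoed so the sketch runs without Docker installed):

```
# build the --user flag from the host user's numeric ids
uid=$(id -u)
gid=$(id -g)
echo docker run --user "${uid}:${gid}" -v "$PWD/examples:/home/user/examples" image
```

For a different owner `foo`, substitute `id -u foo` / `id -g foo` as in the command above.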
---
Another possible solution is to allow arbitray access with `chmod 666 examples` or `chmod 644 examples`, but most probably you don't want that.
---
The best way would be to look at the Dockerfile and check the purpose of `USER` instruction.
- If it only serves the purpose of avoiding root in container, the best way is to use `--user=foo` or more precisely `--user=$(id -u foo):$(id -g foo)`.
- If something in Dockerfile/image relies on specific `USER`, it may be the best to change access permissions of `examples`.
- If you have access to the Dockerfile, you may adjust it to fit your host user/the owner of `examples`.
|
Running a async method as a Quartz.NET job and disposed object issue
I'm using Quartz.NET in this context (needs to mention that `GrabberContext` is a `DbContext` extended class):
```
// configuring Autofac:
var builder = new ContainerBuilder();
// configuring GrabberContext
builder.RegisterType<GrabberContext>()
.AsSelf()
.InstancePerLifetimeScope();
// configuring GrabService
builder.RegisterType<GrabService>()
.AsImplementedInterfaces()
.InstancePerLifetimeScope();
// configuring Quartz to use Autofac
builder.RegisterModule(new QuartzAutofacFactoryModule());
builder.RegisterModule(new QuartzAutofacJobsModule(typeof(DiConfig).Assembly));
var container = builder.Build();
// configuring jobs:
var scheduler = container.Resolve<IScheduler>();
scheduler.Start();
var jobDetail = new JobDetailImpl("GrabJob", null, typeof(GrabJob));
var trigger = TriggerBuilder.Create()
.WithIdentity("GrabJobTrigger")
.WithSimpleSchedule(x => x
.RepeatForever()
.WithIntervalInMinutes(1)
)
.StartAt(DateTimeOffset.UtcNow.AddSeconds(30))
.Build();
scheduler.ScheduleJob(jobDetail, trigger);
```
and this is the job:
```
public class GrabJob : IJob {
private readonly IGrabService _grabService;
public GrabJob(IGrabService grabService) { _grabService = grabService; }
public void Execute(IJobExecutionContext context) {
_grabService.CrawlNextAsync("");
}
}
```
The `GrabService` implementation is something like this:
```
public class GrabService : IGrabService {
private readonly GrabberContext _context;
public GrabService(GrabberContext context) {
_context = context;
}
public async Task CrawlNextAsync(string group) {
try {
var feed = await _context.MyEntities.FindAsync(someId); // line #1
// at the line above, I'm getting the mentioned error...
} catch(Exception ex) {
Trace.WriteLine(ex.Message);
}
}
}
```
But when the execution gets to `line #1` I get this error:
>
> The ObjectContext instance has been disposed and can no longer be used
> for operations that require a connection.
>
>
>
Any idea please?
|
You're calling the async method `CrawlNextAsync()` from a sync method `Execute()`. As soon as `CrawlNextAsync()` hits ...`await _context`..., it returns, and `Execute()` then returns, and I assume at that point the `GrabJob`, and hence `GrabService`, and hence `GrabberContext`, is disposed, while the continuation in `CrawlNextAsync()` continues (and tries to use the disposed `GrabberContext`).
As a simple fix, you could try change
```
public void Execute(IJobExecutionContext context) {
_grabService.CrawlNextAsync("");
}
```
to
```
public void Execute(IJobExecutionContext context) {
_grabService.CrawlNextAsync("").Wait();
}
```
|
A RESTful persistence solution usable with backbone.js... in PHP?
I'll preface this by saying that I'm a crappy programmer. I'm sure that what I want to do could be done in 10 lines of node or Rails or something else, but PHP is what I have available.
So, I'm hoping to find a simple PHP library which wraps the database calls in an API that looks similar to the RESTful model.
I've had little success trying to find such a thing -- searching for `PHP CRUD` or `PHP REST` turns up several zillion pages, and I've no idea how to filter through them.
I'm really trying to keep things simple here, I don't want a big framework like Zend or something. The models I'm dealing with in Backbone are really simple. I just want to send `GET`s to, say, `/notes/3` or `POST`s to `/notes`, etc, and have PHP do the right thing to a database.
Perhaps I'm asking too much, but it seems to me that this is what other frameworks like Rails provide. Any suggestions? TIA...
|
**EDIT Nov 2018:** Although I wouldn't knock CodeIgniter, nowadays [Laravel](https://laravel.com/) (currently 5.5) is the framework I use.
Here is a [good article](https://www.toptal.com/laravel/why-i-decided-to-embrace-laravel) that sums up the reasons I use Laravel.
To get jump started, I recommend [Laracasts](https://laracasts.com/). It's a subscription video tutorial service that goes in depth on how to use Laravel (and other web dev related things).
**ORIGINAL ANSWER:**
[Codeigniter](http://codeigniter.com/), to me, is the easiest of the Rails-like frameworks. It's bare bones, and you can build a CRUD app from scratch easily.
The biggest issue with rolling your own app is security. Codeigniter can help you build a less hackable site by shielding you from many of the common security risks, such as using $\_POST arrays directly, and not properly filtering your data. Not to mention the many helper classes it offers such as form validation.
You can view the [documentation](http://codeigniter.com/user_guide/) on their website. It's very easy to use as long as you remember the navigation is hidden at the top of each page. :D
|
How do I code this dependently-typed example in Haskell?
Suppose I want to represent the finite models of the first-order language with constant c, unary function symbol f, and predicate P. I can represent the carrier as a list `m`, the constant as an element of `m`, the function as a list of ordered pairs of elements of `m` (which can be applied via a helper function `ap`), and the predicate as a list of the elements of `m` that satisfy it:
```
-- Models (m, c, f, p) with element type a
type Model a = ([a], a, [(a,a)], [a])
-- helper function application, assumes function is total
ap :: Eq a => [(a,b)] -> a -> b
ap ((x',y'):ps) x = if x == x' then y' else ap ps x
```
I can then construct particular models and operations on models. The details aren't important for my question, just the types (but I've included the definitions so you can see where the type constraints come from):
```
unitModel :: Model ()
unitModel = ([()], (), [((),())], [])
cyclicModel :: Int -> Model Int
cyclicModel n | n > 0 = ([0..n-1], 0, [(i, (i+1)`mod`n) | i<-[0..n-1]], [0])
-- cartesian product of models
productModel :: (Eq a, Eq b) => Model a -> Model b -> Model (a,b)
productModel (m1, c1, f1, p1) (m2, c2, f2, p2) = (m12, c12, f12, p12) where
m12 = [(x1,x2) | x1 <- m1, x2 <- m2]
c12 = (c1, c2)
f12 = [(x12, (ap f1 (fst x12), ap f2 (snd x12))) | x12 <- m12]
p12 = [x12 | x12 <- m12, elem (fst x12) p1 && elem (snd x12) p2]
-- powerset of model (using operations from Data.List)
powerModel :: (Eq a, Ord a) => Model a -> Model [a]
powerModel (m, c, f, p) = (ms, cs, fs, ps) where
ms = subsequences (sort m) -- all subsets are "normalized"
cs = [c]
fs = [(xs, nub (sort (map (ap f) xs))) | xs <- ms] -- "renormalize" the image of f
ps = [xs | xs <- ms, elem c xs]
```
Now, I want to give names to all of these models:
```
data ModelName = UnitModel
| CyclicModel Int
| Product ModelName ModelName
| Power ModelName
deriving (Show, Eq)
```
Finally, I want to write this code, mapping each name to the model it names:
```
model_of UnitModel = unitModel
model_of (CyclicModel n) = cyclicModel n
model_of (Product m1 m2) = productModel (model_of m1) (model_of m2)
model_of (Power m1) = powerModel (model_of m1)
```
I've tried a number of approaches to making this to work, in the sense of defining types so that I can use exactly this definition of model\_of, including using phantom types, GADTs, and type families -- but haven't found a way to do it. (But then again, I'm a relative newcomer to Haskell.) Can it be done? How should I do it?
|
By using a GADT for `ModelName` you can associate a given name with the resulting model's type parameter. Here's what's needed to make your `model_of` compile:
```
{-# LANGUAGE GADTs #-}
data ModelName t where
UnitModel :: ModelName ()
CyclicModel :: Int -> ModelName Int
Product :: (Eq a, Eq b) => ModelName a -> ModelName b -> ModelName (a, b)
Power :: (Ord a) => ModelName a -> ModelName [a]
model_of :: ModelName t -> Model t
model_of UnitModel = unitModel
model_of (CyclicModel n) = cyclicModel n
model_of (Product m1 m2) = productModel (model_of m1) (model_of m2)
model_of (Power m1) = powerModel (model_of m1)
```
EDIT: as you noticed, the normal `deriving` clause doesn't work with GADTs but it turns out `StandaloneDeriving` works just fine.
```
{-# LANGUAGE StandaloneDeriving #-}
deriving instance Show (ModelName t)
deriving instance Eq (ModelName t)
```
Note, however, that the `Eq` instance is a bit nonsensical in this case because the type-class allows you to only compare values of the same type, but the different constructors essentially produce values of different types. So, for example, the following does not even type-check:
```
UnitModel == CyclicModel 1
```
because `UnitModel` and `CyclicModel` have different types (`ModelName ()` and `ModelName Int` respectively). For situations where you need to erase the additional type-information for some reason you can use a wrapper such as
```
data Some t where
Some :: t a -> Some t
```
and you can derive e.g. an `Eq` instance for `Some ModelName` manually:
```
{-# LANGUAGE FlexibleInstances #-}
instance Eq (Some ModelName) where
Some UnitModel == Some UnitModel
= True
Some (CyclicModel n) == Some (CyclicModel n')
= n == n'
Some (Product m1 m2) == Some (Product m1' m2')
= Some m1 == Some m1' && Some m2 == Some m2'
Some (Power m1) == Some (Power m1')
= Some m1 == Some m1'
_ == _ = False
```
|
How long do dentries stay in the dcache?
On Linux, when I create a new file and open it, an entry about it is created in the dcache's hash table, right? How long does that entry stay there? Does it get removed when nothing is currently using the file anymore, or does it stay there until the cache is full and the kernel decides to remove things from it to create space for new dcache entries?
If the latter is the case, how exactly does the kernel determine which entries to remove when the dcache is full?
(References to the kernel source would be nice.)
|
>
> **Q1:** On Linux, when I create a new file and open it, an entry about it is created in the dcache's hash table, right?
>
>
>
Correct.
>
> **Q2:** How long does that entry stay there?
>
>
>
Until the space they're occupying is needed for some other purpose.
>
> **Q3:** If the latter is the case, how exactly does the kernel determine which entries to remove when the dcache is full?
>
>
>
From the Linux kernel documentation on [virtual memory (VM)](https://www.kernel.org/doc/Documentation/sysctl/vm.txt):
*excerpt*
>
> ### vfs\_cache\_pressure
>
>
> Controls the tendency of the kernel to reclaim the memory which is used for
> caching of directory and inode objects.
>
>
> At the default value of `vfs_cache_pressure=100` the kernel will attempt to
> reclaim dentries and inodes at a "fair" rate with respect to pagecache and
> swapcache reclaim. Decreasing `vfs_cache_pressure` causes the kernel to prefer
> to retain dentry and inode caches. When `vfs_cache_pressure=0`, the kernel will
> never reclaim dentries and inodes due to memory pressure and this can easily
> lead to out-of-memory conditions. Increasing vfs\_cache\_pressure beyond 100
> causes the kernel to prefer to reclaim dentries and inodes.
>
>
>
So to answer your general question, `dentries` will remain in cache until the space they're occupying is needed by something else
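On a running system that knob is exposed under `/proc` (a quick sketch; writing the value requires root):

```
# inspect the current dentry/inode reclaim tendency (default is 100)
cat /proc/sys/vm/vfs_cache_pressure
# as root, bias the kernel toward keeping dentries/inodes cached:
# echo 50 > /proc/sys/vm/vfs_cache_pressure
```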
### Additional notes
This philosophy permeates the kernel. There is no reason to free up RAM until there's an actual need to do so. If you'd like to analyze the contents of the cache or drop the dentries altogether, take a look at this Q&A titled: [Are there any ways or tools to dump the memory cache and buffer?](https://unix.stackexchange.com/questions/87863/are-there-any-ways-or-tools-to-dump-the-memory-cache-and-buffer/87888#87888).
|
Why does XOR swap with integers trigger a warning?
I typed the following program:
```
#include <stdio.h>
int main(void) {
int a = 3;
int b = 42;
printf("a = %d\nb = %d\n", a, b);
printf("Exchanging values.\n");
a ^= b ^= a ^= b;
printf("a = %d\nb = %d\n", a, b);
return 0;
}
```
and it's ok. When I try to compile it, I get this:
```
$ gcc test.c -o test -Wall -Wextra -ansi -pedantic-errors
test.c: In function ‘main’:
test.c:11: warning: operation on ‘a’ may be undefined
```
That's pretty much standard code, isn't it?
Why does it trigger a warning? As far as I know, bitwise XOR is implemented by default for `int` as long as you are using a standard implementation of C.
Thank you very much.
|
Variable `a` is used as an lvalue twice in the expression.
Keep in mind that `x ^= y` is in fact a shortcut for `x = x ^ y`, and it means that the first operand is read, then written.
If you take the first operation out from the original expression, it is fine, see:
```
b ^= a ^= b; // OK
/* 2 1 */
```
Here, `a` is used twice and `b` is used three times. Since the assignment operator is right-to-left associative, first `a ^= b` is calculated, variable `b` is only read, variable `a` is read and then written, and the result (*r1*) is passed to the second operation. On the second operation, `b ^= r1`, `b` is read a second time (giving the same value as read previously) and then written. Note there is no way to interpret differently, no undefined behavior. In the above statement, `a` is read only once, `b` is read twice but both reads return the same value, and both `a` and `b` is written only once. It is ok.
When you add the third assignment to the left, it becomes a problem:
```
a ^= b ^= a ^= b; // NOT OK
/* 3 2 1 */
```
Now, `a` is read twice, once on operation 1 and once on operation 3, and also written on operation 1 and operation 3. What value should `a` return on operation 3, the original value or the value after operation 1 is processed?
The clever programmer may think that operation 1 is completely executed before operation 3 is processed, but this is not defined by the standard. It just happens to work with most compilers. On operation 3, the compiler may very well return the same value for `a` as it is returned for operation 1, causing the wrong result. This is undefined behavior.
|
GitlabCi deploy on multiple servers
I use Gitlab runner and works fine for a single server. The gitlab-ci.yml is simple:
```
stages:
- test
- deploy
test:
stage: test
image: php
tags:
- docker
script:
- echo "Run tests..."
deploy:
stage: deploy
tags:
- shell
script:
- sh deploy.sh
```
As I said, this is fine for a single server, but how do I deploy the same app to another server? I tried using the same gitlab-runner config (same config.toml), but then only one of them was updated, at random.
Is there a way for GitLab CI to trigger more than one runner so that all servers are deployed according to gitlab-ci.yml?
|
You can register several runners (e.g. tagged *serverA* and *serverB*) from different servers and have multiple deployment jobs, each of them performed by a different runner. This is because you can set more than one tag in a job and only a runner having all the tags will be used.
```
stages:
- test
- deploy
test:
stage: test
image: php
tags:
- docker
script:
- echo "Run tests..."
deployA:
stage: deploy
tags:
- shell
- serverA
script:
- sh deploy.sh
deployB:
stage: deploy
tags:
- shell
- serverB
script:
- sh deploy.sh
```
However, consider the situation where one of the deployment jobs fails: you would end up with two different versions of the code on the servers. Depending on your situation this might or might not be a problem.
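If that risk matters, one mitigation (a sketch; the `when: manual` behavior is assumed from standard GitLab CI semantics, not from the question) is to make the deploy jobs manual, so an operator triggers them deliberately and can retry a failed one before the servers drift apart:

```
deployA:
  stage: deploy
  when: manual        # an operator starts the deploy explicitly
  tags:
    - shell
    - serverA
  script:
    - sh deploy.sh
```

The same `when: manual` line would go on `deployB`; a failed job can then be retried from the pipeline view.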
|
Buffer Overflow explanation
I made this simple password verification program, and I'm trying to overflow the `buffer` array to change the `auth` variable to 1. I managed to do it, except that I can only change the `auth` variable to the character '1' and not the integer 1. How can I do that?
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char *argv[]){
char buffer[16];
int auth=0;
strcpy(buffer, argv[1]);
if(strcmp(buffer,"password")==0)
auth=1;
else
auth=0;
if(auth)
printf("Granted");
return 0;
}
```
|
The following information is derived from runs on my Ubuntu 14.04 system, using gcc 4.8.4 as my compiler and gdb 7.7.1 as my debugger.
First, the buffer overflow happens as a result of the `strcpy` call, and even if you overflow `buffer` so that it overwrites the memory location of `auth`, the following if-else block will overwrite your changes.
Secondly, you can see what is happening by looking at the stack in a debugger. I made a slight modification to your code by initializing `auth` to `0xbbbbbbbb` (just so I can find where `auth` is located on the stack).
Setting a break point on main and stepping into the function we can examine the values of the various registers:
```
(gdb) info reg
rax 0x0 0
rbx 0x0 0
rcx 0x0 0
rdx 0x7fffffffdf30 140737488346928
rsi 0x7fffffffdf18 140737488346904
rdi 0x2 2
rbp 0x7fffffffde30 0x7fffffffde30
rsp 0x7fffffffddf0 0x7fffffffddf0
[... some lines removed ...]
rip 0x400652 0x400652 <main+37>
eflags 0x246 [ PF ZF IF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
```
From this we can see that the stack extends from `0x7fffffffddf0` to `0x7fffffffde30`. Now stopping right before the call to strcpy, we can take a look at the stack:
```
(gdb) x/76xb $rsp
0x7fffffffddf0: 0x18 0xdf 0xff 0xff 0xff 0x7f 0x00 0x00
0x7fffffffddf8: 0x1d 0x07 0x40 0x00 0x02 0x00 0x00 0x00
0x7fffffffde00: 0x30 0xde 0xff 0xff 0xff 0x7f 0x00 0x00
0x7fffffffde08: 0x00 0x00 0x00 0x00 0xbb 0xbb 0xbb 0xbb
0x7fffffffde10: 0xd0 0x06 0x40 0x00 0x00 0x00 0x00 0x00
0x7fffffffde18: 0x40 0x05 0x40 0x00 0x00 0x00 0x00 0x00
0x7fffffffde20: 0x10 0xdf 0xff 0xff 0xff 0x7f 0x00 0x00
0x7fffffffde28: 0x00 0x2b 0x25 0x07 0xdd 0x7a 0xc0 0x6d
0x7fffffffde30: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x7fffffffde38: 0x45 0x6f 0xa3 0xf7
```
Looking at this, we can see that `auth` is located at a memory address of `0x7fffffffde0c`.
I set as a command line argument `passwordAAAAAAAA111`, and now we can single step across the strcpy call and look at memory again:
```
(gdb) x/76xb $rsp
0x7fffffffddf0: 0x18 0xdf 0xff 0xff 0xff 0x7f 0x00 0x00
0x7fffffffddf8: 0x1d 0x07 0x40 0x00 0x02 0x00 0x00 0x00
0x7fffffffde00: 0x30 0xde 0xff 0xff 0xff 0x7f 0x00 0x00
0x7fffffffde08: 0x00 0x00 0x00 0x00 0xbb 0xbb 0xbb 0xbb
0x7fffffffde10: 0x70 0x61 0x73 0x73 0x77 0x6f 0x72 0x64
0x7fffffffde18: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7fffffffde20: 0x31 0x31 0x31 0x31 0x00 0x7f 0x00 0x00
0x7fffffffde28: 0x00 0x2b 0x25 0x07 0xdd 0x7a 0xc0 0x6d
0x7fffffffde30: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x7fffffffde38: 0x45 0x6f 0xa3 0xf7
```
From this, we can see that the value of `auth` has not been touched (notice the four `0xbb` bytes still in memory starting at `0x7fffffffde0c`). We can also now see where the password is stored in memory: it starts at `0x7fffffffde10`. The four 'A's that I used are where the four `0x41` bytes are, and the four '1's that I used are where the four `0x31` bytes are.
So, on my system I do not see a way that you would be able to overflow into the `auth` variable.
Finally, to the question that you originally raised: remember that the command line arguments are treated as a character array, so passing in something like `AAAA1` on the command line will result in the array [0x41 0x41 0x41 0x41 0x31] being passed to your program. What you want your program to receive is actually [0x41 0x41 0x41 0x41 0x01 0x00 0x00 0x00] (assuming a 32-bit, little-endian architecture). There are two issues that you will face:
1. 0x01 is a non-printable character
2. 0x00 being the null terminator will stop the string input at the first null.
There is not a lot you can do about issue 2 with just simple input; however, as others have suggested, the solution around issue 1 is to create a driver program that builds the input buffer the way that you want and then passes it to the program.
|
Why doesn't \*ngIf work with ng-template?
I have a condition in the template as follows:
```
<ng-container>
<p *ngFor="let seat of InfoDetails?.seatInfo">
<template *ngIf="seat.section">
Section {{seat?.section}} ,
</template>
<template *ngIf="seat.row">
Row {{seat?.row}},
</template>
<template *ngIf="seat.seatNo">
Seat number {{seat?.seatNo}}
</template>
</p>
</ng-container>
```
I have a dataset that contains `row` and `seatNo`, but it does not seem to print in the template. What is the issue here?
|
Read the doc here <https://angular.io/guide/structural-directives> especially for
>
> `<div *ngIf="hero" >{{hero.name}}</div>`
>
>
> The asterisk is "syntactic sugar" for something a bit more
> complicated. Internally, Angular desugars it in two stages. First, it
> translates the \*ngIf="..." into a template attribute, template="ngIf
> ...", like this.
>
>
> `<div template="ngIf hero">{{hero.name}}</div>`
>
>
> Then it translates the template attribute into a `<ng-template>`
> element, wrapped around the host element, like this.
>
>
> `<ng-template [ngIf]="hero"> <div>{{hero.name}}</div></ng-template>`
>
>
> - The \*ngIf directive moved to the `<ng-template>` element where it became a property binding, [ngIf].
> - The rest of the `<div>`, including its class attribute, moved inside the `<ng-template>` element.
>
>
>
So for this we have `ng-container`:
```
<ng-container *ngIf="seat.section">
Section {{seat.section}} ,
</ng-container>
```
or use span or div or regular html tag.
```
<span *ngIf="seat.section">
Section {{seat.section}} ,
</span>
```
or if you still want to use ng-template ([not recommended](https://angular.io/guide/structural-directives#prefer-the-asterisk--syntax))
```
<ng-template [ngIf]="seat.section">
Section {{seat.section}} ,
</ng-template>
```
|
How to change a column's type in a migration?
I get an error when I try to change a column's type from string to text with Laravel's migration function.
File : {date\_time}\_change\_db\_structure.php
```
public function up()
{
Schema::table('service_request_type', function (Blueprint $table) {
$table->dropIndex(['sro_key_group']);
$table->text('sro_key_group')->change();
$table->renameColumn('sro_key_group', 'tags');
});
}
```
This is the original migration that created the table:
```
public function up()
{
Schema::create('service_request_type', function (Blueprint $table) {
$table->engine = 'InnoDB';
...
$table->string('sro_key_group', 100)->nullable()->index();
...
});
}
```
The error I got:
>
> [Illuminate\Database\QueryException]
>
> SQLSTATE[42000]: Syntax error or access violation: 1170 BLOB/TEXT
> column 'sro\_key\_group' used in key specification without a key length
> (SQL: ALTER TABLE service\_request\_type CHANGE sro\_key\_group sro \_key\_group TEXT DEFAULT NULL COLLATE utf8\_unicode\_ci)
>
>
>
> ```
> [Doctrine\DBAL\Driver\PDOException]
>
> ```
>
> SQLSTATE[42000]: Syntax error or access violation: 1170 BLOB/TEXT
> column 'sro\_key\_group' used in key specification without a key length
>
>
>
> ```
> [PDOException]
>
> ```
>
> SQLSTATE[42000]: Syntax error or access violation: 1170 BLOB/TEXT
> column 'sro\_key\_group' used in key specification without a key length
>
>
>
What's wrong? I have already installed `doctrine/dbal` in my `composer.json`.
|
You'll need to do this in three steps, either using three separate migrations, or three calls to `table()` as you've shown in your answer.
The first issue is that, even though you've written your statements in the order you'd like them to execute (and the order they need to execute), the schema builder will actually rearrange the order so that "change" statements are executed first. The schema builder treats new columns and changed columns as "implied" statements, and moves them to the top of the stack of commands to run. However, rename statements are not considered "change" statements.
So, even though you've written the code to:
```
[
remove index,
change column from varchar to text,
rename column,
]
```
The schema builder will actually execute:
```
[
change column from varchar to text,
remove index,
rename column,
]
```
Now, since the change command is happening before the column is removed from the index, you are getting the 1170 error.
The next issue is with attempting to do the column change and the column rename in the same context. The SQL to implement the requested changes is generated by doing schema diffs; however, both schema diffs are done before any changes are actually made. So, the first change from `varchar` to `text` will generate the appropriate SQL to make that change, but then the second change to rename the column will actually generate SQL that changes the column *back* to a `varchar` while renaming it.
To work around these issues, you can either create three migrations, where the first migration simply drops the index, the second migration changes the type, and then the third migration renames it, or you can keep your one migration and run three `table()` statements.
```
public function up()
{
// make sure the index is dropped first
Schema::table('service_request_type', function (Blueprint $table) {
$table->dropIndex(['sro_key_group']);
});
// now change the type of the field
Schema::table('service_request_type', function (Blueprint $table) {
$table->text('sro_key_group')->nullable()->change();
});
// now rename the field
Schema::table('service_request_type', function (Blueprint $table) {
$table->renameColumn('sro_key_group', 'tags');
});
}
```
|
Why can't I use PowerShell's Start-Process with both -Credential and -Verb parameters?
Using `powershell.exe`, I want to emulate `cmd.exe`'s `runas` command with the additional benefit of escalating privileges through UAC.
However, if I both supply both `-Credential` and `-Verb Runas` parameters to `Start-Process`, I get the error below:
```
Start-Process powershell.exe -Credential (New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList 'username',(ConvertTo-SecureString 'password' -AsPlainText -Force)) -ArgumentList '-NoProfile' -Verb RunAs
Start-Process : Parameter set cannot be resolved using the specified named parameters.
At line:1 char:1
+ Start-Process powershell.exe -Credential (New-Object -TypeName System ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Start-Process], ParameterBindingException
+ FullyQualifiedErrorId : AmbiguousParameterSet,Microsoft.PowerShell.Commands.StartProcessCommand
```
Using only one of these parameters yields no errors:
```
Start-Process -Verb RunAs powershell.exe -ArgumentList "-NoProfile"
```
Why is that? Doesn't `-Verb RunAs` belong to `[<CommonParameters>]`, which both syntax forms of `Start-Process` accept?
|
The `-Verb` parameter is only available in one of the parameter sets (if you do `Get-Help Start-Process` you can see it explicitly listed in the second set):
```
SYNTAX
Start-Process [-FilePath] <String> [[-ArgumentList] <String[]>] [-Credential <PSCredential>] [-LoadUserProfile] [-NoNewWindow] [-PassThru] [-RedirectStandardError <String>] [-RedirectStandardInput <String>] [-RedirectStandardOutput <String>] [-UseNewEnvironment] [-Wait] [-WindowStyle {Normal | Hidden | Minimized | Maximized}] [-WorkingDirectory <String>]
[<CommonParameters>]
Start-Process [-FilePath] <String> [[-ArgumentList] <String[]>] [-PassThru] [-Verb <String>] [-Wait] [-WindowStyle {Normal | Hidden | Minimized | Maximized}] [-WorkingDirectory <String>] [<CommonParameters>]
```
It's not a part of `CommonParameters`, that just includes things like `-Debug`, `-Verbose`, `-ErrorAction` etc. (see the full list [here](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_commonparameters?view=powershell-6)).
This seems to be a possible workaround:
```
Start-Process powershell -Credential mydomain\myuser -ArgumentList '-noprofile -command &{Start-Process powershell -verb runas}'
```
|
Magento: Add content block at the end of the structual block "content"
I'm trying to add a content block to Magento which should be visible on every page below the main content. I want to achieve this with a custom extension, so I can copy this extension and it works without touching core design files.
My extension includes the following layout update:
```
<default>
<reference name="content">
<block type="mymod/blockname" name="myblockname" after="-" template="mymod/block.phtml"/>
</reference>
</default>
```
My problem is that the attribute `after="-"` is not working. The block always shows up at the top of the content block. It seems `before` and `after` have no effect.
If I move the block to i.e. footer, the attributes `before` and `after` are working fine.
How can I place my block at the bottom of the "content" block?
|
As far as I can see the problem is that you specify your block in the "default" layout handle while most of the content in the "content" block is added by other layout handles which are applied later. That's why the added dependencies in your XML registration file (mentioned by Fabian) are not helping.
Please consider these two options depending on your needs:
## 1. If you really want to include your block on all frontend pages
In your XML layout file (local.xml or a custom one), **add a new layout handle**:
```
<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
<!-- your other adjustments for default, category_product_view and so on go here -->
<add_my_block>
<reference name="content">
<block type="mymod/blockname" name="myblockname" after="-" template="mymod/block.phtml"/>
</reference>
</add_my_block>
</layout>
```
Now you **create an event observer** to inject your layout handle into your layout:
```
<?php
class YourCompany_YourExtension_Model_Observer
{
/**
* Adds a block at the end of the content block.
*
* Uses the event 'controller_action_layout_load_before'.
*
* @param Varien_Event_Observer $observer
* @return YourCompany_YourExtension_Model_Observer
*/
public function addBlockAtEndOfMainContent(Varien_Event_Observer $observer)
{
$layout = $observer->getEvent()->getLayout()->getUpdate();
$layout->addHandle('add_my_block');
return $this;
}
}
```
Then you **register the event observer** in your XML extension configuration file (config.xml):
```
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<modules>
<YourCompany_YourExtension>
<version>0.0.1</version>
</YourCompany_YourExtension>
</modules>
<frontend>
<events>
<controller_action_layout_load_before>
<observers>
<mymod_add_block_at_end_of_main_content>
<type>singleton</type>
<class>mymod/observer</class>
<method>addBlockAtEndOfMainContent</method>
</mymod_add_block_at_end_of_main_content>
</observers>
</controller_action_layout_load_before>
</events>
<!-- declaring your layout xml etc. -->
</frontend>
<global>
<!-- declaring your block classes etc. -->
<models>
<mymod>
<class>YourCompany_YourExtension_Model</class>
</mymod>
</models>
</global>
</config>
```
Now your block should end up below the other blocks. I tested this successfully for the homepage, customer login page and category view page. If you have to exclude your block on a few pages, you can check in your event observer if the block should be excluded on that certain page.
## 2. If you want to include your block only on some pages
**Add a layout handle** to your XML layout file just as we did before but instead of creating and registering an event observer, just **tell your XML layout file to use the custom layout handle** in some areas:
```
<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
<catalog_category_default>
<update handle="add_my_block" />
</catalog_category_default>
<catalog_category_layered>
<update handle="add_my_block" />
</catalog_category_layered>
<cms_page>
<update handle="add_my_block" />
</cms_page>
<!-- and so on -->
<add_my_block>
<reference name="content">
<block type="mymod/blockname" name="myblockname" after="-" template="mymod/block.phtml"/>
</reference>
</add_my_block>
</layout>
```
|
static\_assert does not break compiling immediately
Code example:
```
template <int x>
struct SUM
{
static_assert(x >= 0, "X must be greater or equal to 0");
enum {VALUE = x + SUM<x-1>::VALUE};
};
template<>
struct SUM<0>
{
enum {VALUE = 0};
};
int main()
{
std::cout << SUM<-1>::VALUE << std::endl;
return 0;
}
```
Why does the compiler not stop compilation at the first `static_assert`, but instead continue until reaching the maximum instantiation depth?
```
Invoking: GCC C++ Compiler
g++ -O0 -g3 -Wall -c -fmessage-length=0 -std=c++11 -MMD -MP -MF"src/Main.d" -MT"src/Main.d" -o "src/Main.o" "../src/Main.cpp"
../src/Main.cpp: In instantiation of ‘struct SUM<-1>’:
../src/Main.cpp:47:22: required from here
../src/Main.cpp:26:2: error: static assertion failed: X must be greater or equal to 0
static_assert(x >= 0, "X must be greater or equal to 0");
^
......
../src/Main.cpp: In instantiation of ‘struct SUM<-2>’:
../src/Main.cpp: In instantiation of ‘struct SUM<-900>’:
../src/Main.cpp:27:18: recursively required from ‘struct SUM<-2>’
../src/Main.cpp:27:18: required from ‘struct SUM<-1>’
../src/Main.cpp:47:22: required from here
../src/Main.cpp:26:2: error: static assertion failed: X must be greater or equal to 0
../src/Main.cpp:27:18: error: template instantiation depth exceeds maximum of 900 (use -ftemplate-depth= to increase the maximum) instantiating ‘struct SUM<-901>’
enum {VALUE = x + SUM<x-1>::VALUE};
^
../src/Main.cpp:27:18: recursively required from ‘struct SUM<-2>’
../src/Main.cpp:27:18: required from ‘struct SUM<-1>’
../src/Main.cpp:47:22: required from here
../src/Main.cpp:27:18: error: incomplete type ‘SUM<-901>’ used in nested name specifier
make: *** [src/Main.o] Error 1
13:04:05 Build Finished (took 6s.877ms)
```
The only problem here is that it takes a lot of time until it breaks, and produces a lot of output. Is there a way to make this better? Used compiler: gcc 4.8.1.
|
There is nothing in the language that requires compilation to abort immediately, so anything you get that *directly* answers your question will necessarily be implementation-specific and won't avoid the instantiations on other implementations. What I think would be better is to rework your code in such a way that it doesn't have any way to continue with the instantiations. One possible way to do this is with `std::enable_if`:
```
#include <iostream>
#include <type_traits>
template <int x, typename = typename std::enable_if<x >= 0>::type>
struct SUM_impl
{
enum {VALUE = x + SUM_impl<x-1>::VALUE};
};
template<>
struct SUM_impl<0>
{
enum {VALUE = 0};
};
template <int x>
struct SUM
{
static_assert(x >= 0, "X must be greater or equal to 0");
enum {VALUE = SUM_impl<x>::VALUE};
};
int main()
{
std::cout << SUM<-1>::VALUE << std::endl;
return 0;
}
```
This way, the `static_assert` in `SUM` prints out a user-friendly message. The `enable_if` in `SUM_impl` forcibly rejects anything where `x < 0`, before the template would be instantiated, and if the template doesn't get instantiated, it cannot be instantiated recursively either.
I initially made `SUM` derive from `SUM_impl`, but not doing so (and copying its value) provides better diagnostics.
|
Drawing the same random numbers in numpy
I got the following piece of code:
```
import numpy as np
rand_draw1 = np.random.rand(5,4)
rand_draw2 = rand_draw1
rand_draw2[0:2,0:4] = np.random.rand(2,4)
```
My intention is to have the variables rand\_draw1 and rand\_draw2 be identical except for the first two rows. However, they turn out to be completely identical, even the first two rows.
Initially I thought this question answers my problem:
[Random Number Generation - Same Number returned](https://stackoverflow.com/questions/4855756/random-number-generation-same-number-returned)
which suggested that it must be due to the fact that these random draws are based on the machine clock, and because the commands are executed at virtually the same instant, one draws the same numbers. But if that were the case, why do I get the same result running this in the terminal (i.e. typing the lines one by one)?
To summarize I have two questions:
- How do I fix my code?
- Is it indeed the machine-time 'problem' that causes this?
Thanks in advance!
|
Just assigning `rand_draw2 = rand_draw1` **does not** create a copy, it simply binds the name `rand_draw2` to the *same object* already bound to `rand_draw1`:
```
>>> rand_draw2 = rand_draw1
>>> rand_draw2 is rand_draw1
True
```
Instead, you need to **explicitly copy** `rand_draw1`, and assign the *copy* to `rand_draw2`:
```
>>> rand_draw1 = np.random.rand(5, 4)
>>> rand_draw2 = rand_draw1.copy()
>>> rand_draw2[0:2] = np.random.rand(2, 4)
>>> rand_draw1
array([[ 0.08254004, 0.51848814, 0.69348487, 0.44053008],
[ 0.75273107, 0.64677024, 0.78397813, 0.12768647],
[ 0.37552669, 0.8365069 , 0.44490398, 0.3943413 ],
[ 0.27263619, 0.40379047, 0.43227555, 0.61552473],
[ 0.55214161, 0.21380748, 0.34122889, 0.44029075]])
>>> rand_draw2
array([[ 0.26229975, 0.02754367, 0.7989174 , 0.94619982],
[ 0.40869498, 0.01327566, 0.06437938, 0.94647506],
[ 0.37552669, 0.8365069 , 0.44490398, 0.3943413 ],
[ 0.27263619, 0.40379047, 0.43227555, 0.61552473],
[ 0.55214161, 0.21380748, 0.34122889, 0.44029075]])
```
See e.g. [here](http://nedbatchelder.com/text/names.html) for a good explanation of how names in Python work.
|
T-test for Bernoulli Distribution- Sample or Population data for SE calculation?
Am struggling to understand part of the answer to a question have done-
Qu- In a given population, 11% of the likely voters are African American. A survey using a simple random sample of 600 landline telephone numbers finds 8% African Americans. Is there evidence that the survey is biased?
To answer the question I found it quite simple. Set up
H0: Survey is random
H1: Survey is biased
$ \hat p=0.08$ & $p=0.11 $
Calculated my t value using $
t=(\hat p - p)/SE(\hat p)$
where $SE(\hat p)= (\hat p(1− \hat p)/n)^{1/2} \\$
and got a t value of $t=2.72$ and rejected the null as the p value was less than 1%.
According to the answers my method is correct, however it is also stated:
**An alternative formula for $SE(\hat p )$ is $0.11(1-0.11)/n$ which is valid under the null hypothesis that p=0.11)**
I imagine that the lack of a square root there is just a typo; however, am I correct in assuming that what they've done is calculate the standard error using the population value of $p$ rather than the sample data? Is that acceptable? Obviously it would produce a different t value. I'm aware that in most questions this wouldn't be possible, but for Bernoulli distributions it is.
Thanks
|
The idea of a hypothesis test is that you come up with a statistic *whose distribution you know if the null hypothesis is true*.
The most well-known case is the t-statistic, where you divide the sample mean minus the mean under the null by the sample standard deviation divided by $\sqrt{n}$. Some mathematical statistics then shows that this t-statistic follows a t-distribution with $n-1$ degrees of freedom if the null is true and we sample from a normal population.
Now, if the null is true, computing the standard error from $p=0.11$ is correct, because you are using the right $p$.
Then, we can write your test statistic as
$$
t=\frac{\sqrt{n}(\hat p - p)}{\sqrt{\hat p(1− \hat p)}}
$$
By the CLT, because $p=E(X\_i)$ ($\hat p=1/n\sum\_iX\_i$) and assuming random sampling,
$$\sqrt{n}(\hat p - p)\to\_dN(0,Var(X\_i))$$
But $Var(X\_i)=p(1-p)$ for such a Bernoulli random variable, so that the test statistic converges in distribution to
$$\sqrt{n}\frac{(\hat p - p)}{\sqrt{p(1-p)}}\to\_dN(0,1),$$
i.e., it will behave like a standard normal r.v. in large samples, if the null is true.
Now, by the law of large numbers, $\hat p\to\_pp$, it is also correct to use $\hat p$, as replacing the true $p(1-p)$ by a consistent estimator of this quantity does not alter the asymptotic distribution.
(This result is known, at least in econometrics, as Slutsky's theorem, which says that a product of two sequences, one of which converges in distribution and one of which converges in probability to a constant, will converge *in distribution* to the product of the limits - "the weaker convergence mode dominates".)
|
How to filter rows based on difference in dates between rows in R?
Within each `id`, I would like to keep rows that are at least 91 days apart. In my dataframe `df` below, `id=1` has 5 rows and `id=2` has 1 row.
For `id=1`, I would like to keep only the 1st, 3rd and 5th rows.
This is because if we compare 1st date and 2nd date, they differ by 32 days. So, remove 2nd date. We proceed to comparing 1st and 3rd date, and they differ by 152 days. So, we keep 3rd date.
Now, instead of using 1st date as reference, we use 3rd date. 3rd date and 4th date differ by 61 days. So, remove 4th date. We proceed to comparing 3rd date and 5th date, and they differ by 121 days. So, we keep 5th date.
In the end, the dates we keep are 1st, 3rd and 5th dates. As for `id=2`, there is only one row, so we keep that. The desired result is shown in `dfnew`.
```
df <- read.table(header = TRUE, text = "
id var1 date
1 A 2006-01-01
1 B 2006-02-02
1 C 2006-06-02
1 D 2006-08-02
1 E 2007-12-01
2 F 2007-04-20
",stringsAsFactors=FALSE)
dfnew <- read.table(header = TRUE, text = "
id var1 date
1 A 2006-01-01
1 C 2006-06-02
1 E 2007-12-01
2 F 2007-04-20
",stringsAsFactors=FALSE)
```
I can only think of starting with grouping the `df` by `id` as follows:
```
library(dplyr)
dfnew <- df %>% group_by(id)
```
However, I am not sure of how to continue from here. Should I proceed with `filter` function or `slice`? If so, how?
|
An alternative that uses `slice` from `dplyr` is to define the following recursive function:
```
library(dplyr)
f <- function(d, ind=1) {
ind.next <- first(which(difftime(d,d[ind], units="days") > 90))
if (is.na(ind.next))
return(ind)
else
return(c(ind, f(d,ind.next)))
}
```
This function operates on the `date` column starting at `ind = 1`. It then finds the next index `ind.next` that is the `first` index for which the date is greater than 90 days (at least 91 days) from the date indexed by `ind`. Note that if there is no such `ind.next`, `ind.next==NA` and we just return `ind`. Otherwise, we recursively call `f` starting at `ind.next` and return its result concatenated with `ind`. The end result of this function call are the row indices separated by at least 91 days.
With this function, we can do:
```
result <- df %>% group_by(id) %>% slice(f(as.Date(date, format="%Y-%m-%d")))
##Source: local data frame [4 x 3]
##Groups: id [2]
##
## id var1 date
## <int> <chr> <chr>
##1 1 A 2006-01-01
##2 1 C 2006-06-02
##3 1 E 2007-12-01
##4 2 F 2007-04-20
```
The use of this function assumes that the `date` column is sorted in ascending order by each `id` group. If not, we can just sort the dates before slicing. Not sure about the efficiency of this or the dangers of recursive calls in R. Hopefully, David Arenburg or others can comment on this.
---
As suggested by David Arenburg, it is better to convert `date` to a Date class first instead of by group:
```
result <- df %>% mutate(date=as.Date(date, format="%Y-%m-%d")) %>%
group_by(id) %>% slice(f(date))
##Source: local data frame [4 x 3]
##Groups: id [2]
##
## id var1 date
## <int> <chr> <date>
##1 1 A 2006-01-01
##2 1 C 2006-06-02
##3 1 E 2007-12-01
##4 2 F 2007-04-20
```
|
KVM/libvirt: How to configure static guest IP addresses on the virtualisation host
What I'd like to do is to set the guests' network configuration (IP address, subnet, gateway, broadcast address) from the host system. The used network setup is in `bridge` mode. How can I configure the network from the host rather than configuring the client itself to a static network configuration?
If I execute:
```
virsh edit vm1
```
there is a `<network>` block as well and I tried to configure the network interface from there, but unfortunately the guest VM doesn't seem to use it and as such is offline to the network (since it uses automatic network configuration only)... Guest VMs are both, Linux and Windows based. Any help would be highly appreciated.
|
If you don't want to do any configuration inside the guest, then the only option is a DHCP server that hands out static IP addresses. If you use `bridge` mode, that will probably be some external DHCP server. Consult its manual to find out how to serve static leases.
But at least in forward modes `nat` or `route`, you could use libvirt's built-in `dnsmasq` (more recent versions of libvirtd support dnsmasq's "dhcp-hostsfile" option). Here is how:
First, find out the MAC addresses of the VMs you want to assign static IP addresses:
```
virsh dumpxml $VM_NAME | grep 'mac address'
```
Then edit the network
```
virsh net-list
virsh net-edit $NETWORK_NAME # Probably "default"
```
Find the `<dhcp>` section, restrict the dynamic range and add host entries for your VMs
```
<dhcp>
<range start='192.168.122.100' end='192.168.122.254'/>
<host mac='52:54:00:6c:3c:01' name='vm1' ip='192.168.122.11'/>
<host mac='52:54:00:6c:3c:02' name='vm2' ip='192.168.122.12'/>
<host mac='52:54:00:6c:3c:03' name='vm3' ip='192.168.122.12'/>
</dhcp>
```
Then, reboot your VM (or restart its DHCP client, e.g. `ifdown eth0; ifup eth0`)
---
Update: I see there are reports that the change might not get into effect after "virsh net-edit". In that case, try this after the edit:
```
virsh net-destroy $NETWORK_NAME
virsh net-start $NETWORK_NAME
```
... and restart the VM's DHCP client.
If that still doesn't work, you might have to
- stop the libvirtd service
- kill any dnsmasq processes that are still alive
- start the libvirtd service
---
Note: There is no way the KVM host could force a VM with an unknown OS and unknown config to use a certain network configuration. But if you know that the VM uses a certain network config protocol - say DHCP - you can use that. This is what this post assumes.
*Some* OSes (e.g. some Linux distros) also allow passing network config options into the guest, e.g. via the kernel command line. But that is very specific to the OS, and I see no advantage over the DHCP method.
|
Material UI Drawer set background color
How to simply set the background color of Material UI `Drawer`?
I tried this, but it doesn't work:
```
<Drawer
style={listStyle3}
open={this.state.menuOpened}
docked={false}
onRequestChange={(menuOpened) => this.setState({menuOpened})}
/>
const listStyle3 = {
background: '#fafa00',
backgroundColor: 'red'
}
```
|
**Edit: (May-21) - Material UI V4.11.1**
This can be done differently in version 4.11.1 and functional components.
There's no need to use an HoC anymore. Here's how it's done:
You should use the [`makeStyles`](https://material-ui.com/styles/advanced/#makestyles) helper to create the hook with definitions of the classes, and use the hook to pull them out.
```
const useStyles = makeStyles({
list: {
width: 250
},
fullList: {
width: "auto"
},
paper: {
background: "blue"
}
});
const DrawerWrapper = () => {
const classes = useStyles();
return (
<Drawer
classes={{ paper: classes.paper }}
open={isOpen}
onClose={() => setIsOpen(false)}
>
<div
tabIndex={0}
role="button"
onClick={() => setIsOpen(true)}
classes={{ root: classes.root }}
>
{sideList}
</div>
</Drawer>
)
}
```
Here's a working [sandbox](https://codesandbox.io/s/material-demo-forked-jmyb1?file=/demo.js:2905-3298)
---
**Edit: (Jan-19) - Material UI V3.8.3**
As for the version asked about, the way to configure the `backgroundColor` would be by overriding the classes.
Based on material-ui documentation [here](https://material-ui.com/customization/overrides/#overriding-with-classes), and the css api for drawer [here](https://material-ui.com/api/drawer/#css-api) - This can be done by creating an object in the form of:
```
const styles = {
paper: {
background: "blue"
}
}
```
and passing it to the Drawer component:
```
<Drawer
classes={{ paper: classes.paper }}
open={this.state.left}
onClose={this.toggleDrawer("left", false)}
>
```
A working example can be seen in [this](https://codesandbox.io/s/4q56r5ol89) codesandbox.
Don't forget to wrap your component with material-ui's `withStyles` HoC as mentioned [here](https://material-ui.com/css-in-js/basics/#higher-order-component-api)
---
Based on the props you used, I have reason to think that you're using a version lower than `v1.3.1` (the last stable version), but for future questions I recommend including the version you're using.
For versions lower than `V1`, you can change the `containerStyle` prop like this:
`<Drawer open={true} containerStyle={{backgroundColor: 'black'}}/>`
|
!!! (splice operator) for ggplot2 geom\_point() function
I am using `!!!` (the splice operator / big-bang operator) with the `ggplot2::geom_point()` function, and it fails. Could someone point out what is wrong with this code? The following code tries to execute ggplot2 functions from character vectors.
```
library(rlang)
library(ggplot2)
data(mtcars)
data = mtcars
assoc = c( "cyl" , "hp" )
names(assoc) = c("x", "y")
assoc_lang = rlang::parse_exprs(assoc)
gg = ggplot2::ggplot(data, ggplot2::aes( ,, !!! assoc_lang )) # This works
params = c( "10", "\"black\"" )
names(params) = c("size", "colour" )
params_lang = rlang::parse_exprs(params)
gg = gg + ggplot2::geom_point( !!! params_lang ) # This fails
plot(gg)
```
- output
```
Error in !params_lang : invalid argument type
Calls: <Anonymous> -> layer
Execution halted
```
(NOTE)
The following code is the equivalent written out by hand, which shows what I want the code above to do.
```
library(ggplot2)
data(mtcars)
data = mtcars
gg = ggplot2::ggplot(data, ggplot2::aes( x = cyl , y = hp ))
gg = gg + ggplot2::geom_point( size = 10, colour = "black")
plot(gg)
```
|
These metaprogramming operators, including `{{…}}`, `!!` and `!!!`, only work in quasiquotation functions, i.e. functions whose arguments explicitly support tidy evaluation. In general, such functions will explicitly mention quasiquotation support in their documentation.
Amongst these functions is `ggplot2::aes`, because it generally uses non-standard evaluation of its arguments. But other ‘ggplot2’ functions (including `ggplot2::geom_point`) perform standard evaluation of their arguments, and thus do not support any of these pseudo-operators.
If you want to dynamically construct a call to this function, you’ll need to go the conventional route, e.g. via (base R) `do.call`, or (‘rlang’) `exec` (or, manually, via `call` + `eval`/`call2` + `eval_tidy`).
|
Change the volume the windows page file is on
I have a setup where I'm booting to a vhd in windows 7. My drive has 2 main partitions that I use. For whatever reason, inside the VHD, the page file was put to my "secondary" partition. I'm wanting to change the drive letter of this partition inside my VHD, but it won't let me because the page file is on there. How can I move the page file for this VHD setup to a different drive?
|
In Windows 7, you can have one page file on each drive, and manually configure them per drive through the interface. So you can totally disable the page file on the drive you wish.
Windows sets the initial minimum size of the paging file equal to the amount of random access memory (RAM) installed on your computer plus 300 megabytes (MB), and the maximum size equal to three times the amount of RAM installed on your computer.
**To manually configure the virtual memory:**
- Open `Control Panel\System and Security\System`
- In the left pane, click `Advanced system settings`. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
- On the Advanced tab, under Performance, click `Settings`.
- Click the Advanced tab, and then, under Virtual memory, click Change.
- Clear the `Automatically manage paging file size for all drives` check box.
- Under Drive [Volume Label], click the drive that contains the paging file you want to change.
- To turn it off, click `No paging file`, or Click `Custom size`, type a new size in megabytes in the Initial size (MB) or Maximum size (MB) box, click `Set`, and then click `OK`.
**Note:**
Increases in size usually don't require a restart for the changes to take effect, but if you decrease the size, you'll need to restart your computer. It's recommended that you don't disable or delete the paging file.
|
Does the Primitive Obsession code smell apply to Python?
Does "primitive obsession" as a poor design practice apply to development in Python? I have seen a lot of examples and discussion in the context of statically typed languages (like Java, C#), but there are also dynamically typed languages like Python. Do solutions to primitive obsession look different in Python without having explicit types?
|
The simple answer here is 'yes': primitive obsession is still an issue in dynamically typed languages. The longer answer is that the issue is somewhat different in a dynamically typed language than in a statically typed one. I'm going to use Python as an example.
So the first bit of nuance is that in Python and similar languages, there aren't really 'primitives' in the first place. There are built-in types, but they are objects. I think we can mostly ignore this, but not completely. For most intents and purposes, they are used just like primitives are in other languages.
The other nuance is that in a dynamically-typed language, dependencies are not really defined by the type of an object, but rather by the operations it supports. For a (somewhat silly) example, if I have a function that will add all the items in a sequence together, there's nothing stopping me from passing in a list of custom objects that support the `__add__` method. This effectively means it's harder to create a primitive-obsessed API, but as Flater noted in comments on another answer, it doesn't prevent you from using 'primitives' in a sub-optimal way.
The most common primitive obsession error (IMO) is the use of primitive numeric types to represent money. Using a decimal type is typically better than a float, but it's still inappropriate. Money has special rules, and passing it around as a bare number will often result in bugs and hamper the evolution of an application. For example, I was once talking to someone about an application they were involved in that used integer types for money. Later, the integer type they used turned out to be too small, so they went through and changed all the integers representing money to doubles. I consider that two mistakes. Consider trying to find every instance of an integer being used in a financial application and then determining what each one is trying to represent: that's a major PITA in itself.

Instead of using a primitive for money, a better solution is to define a `Money` type. It can be stupidly simple initially: don't over-design things. But once you've done that, if you need to add a capability such as a wider range or support for conversions, you can often fix that in one spot. And if you need to find all the places it's used, it's far easier to locate. Can you make a primitive obsession mistake like using base numeric types for money in a dynamically-typed language? Absolutely.
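As a sketch of that idea (the class name and methods are illustrative, not from any particular library), a deliberately simple starting point in Python might look like this:

```python
from decimal import Decimal, ROUND_HALF_EVEN


class Money:
    """A deliberately simple money value object (illustrative sketch)."""

    def __init__(self, amount, currency="USD"):
        # Store as Decimal to avoid binary floating-point surprises.
        self.amount = Decimal(str(amount))
        self.currency = currency

    def __add__(self, other):
        # Money-specific rules live in one place instead of being
        # scattered across every call site that handles a bare number.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

    def rounded(self):
        # Banker's rounding to cents; change it here and it changes everywhere.
        return Money(self.amount.quantize(Decimal("0.01"),
                                          rounding=ROUND_HALF_EVEN),
                     self.currency)

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

    def __repr__(self):
        return f"Money({str(self.amount)!r}, {self.currency!r})"
```

If the representation later needs a wider range or currency conversion, only this class has to change, and every usage site is easy to find by searching for the type.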
On the other hand, in my experience, dynamically typed languages make reusing existing types easier and I therefore don't feel the need to create as many custom types as I would in a statically-typed language. But you should still be considering whether a custom type is warranted and largely for the same reasons that you would in a statically-typed language.
|
In Jenkins pipeline parallel stages, how to promptly kill other stages if one fail?
If one job failed, I don't want to wait for the other stages to finish. Is it possible to abort the parallel stages that are still running? They must display as "aborted", not with a red cross icon, since the failed one must be highlighted.
|
Add `parallelsAlwaysFailFast` to your `options{}` and the whole pipeline will stop if any (parallelized) stage fails.
>
> [parallelsAlwaysFailFast](https://jenkins.io/doc/book/pipeline/syntax/#options)
> Set failfast true for all subsequent parallel
> stages in the pipeline.
>
>
> For example: options {
> parallelsAlwaysFailFast() }
>
>
>
Example:
```
pipeline {
agent none
options {
parallelsAlwaysFailFast()
}
stages {
...
}
}
```
The option highlights the failed stage. Unfortunately, the other stages are not displayed as `aborted`; they just get the usual (not highlighted) red color.
|
Saving my PDO connection as a global variable
While asking another questions about PDO queries, I've been told that saving my PDO connection object as global to use it in my various functions that call queries to my database is generally bad practice.
Here is how I generally use my PDO object:
```
function somefunction(){
global $pdo;
$statement = $pdo->prepare("some query");
$statement->execute();
}
```
The arguments I've read are more about code maintenance and debugging, where it's hard to trace who modified the PDO object and where in the code it would be. Other people just simply reject using global variables for storing the PDO object, but can't really explain why global variables are a bad approach.
However, for a small-to-medium project with only one database, is there really a disadvantage to using a global variable? I usually keep my connection script and my functions script separate, where the functions script will require\_once() the connection script in which my PDO object is created. This way, my connection is always established and all modifications to the PDO object are done in my connection script.
Are there any fundamental flaws in using this approach?
|
>
> Are there any fundamental flaws in using this approach?
>
>
>
The very first thing you have to understand is that `$pdo` is a part of **storage** logic. That means it should only be used inside classes that do **abstract data access**, be it a SQL table or a collection.
Let's look at your code,
```
function somefunction(){
global $pdo;
$statement = $pdo->prepare("some query");
$statement->execute();
}
```
What if you want to switch from MySQL to Mongo/MSSQL/PgSQL in the future?
Then you will have to rewrite a lot of code.
And for each database vendor, you will have to create a separate file with a different variable, just like this:
```
function somefunction(){
global $mongo;
return $mongo->fetch(...);
}
```
>
>
> >
> > By using a global state, you end up with mass **code duplication**, because you cannot pass parameters and thus cannot change function's behavior at runtime.
> >
> >
> >
>
>
>
Now let's look at this,
```
function somefunction($pdo){
$statement = $pdo->prepare("some query");
$statement->execute();
}
```
Here, `$pdo` is passed as an argument, so there's no global state. But a problem still remains: you end up violating the Single Responsibility Principle.
If you really want something that is maintainable, clean and very readable, you'd better stick with [DataMappers](http://en.wikipedia.org/wiki/Data_mapper_pattern). Here's an example,
```
$pdo = new PDO(...);
$mapper = new MySQL_DataMapper($pdo);
$stuff = $mapper->fetchUserById($_SESSION['id'])
var_dump($stuff); // Array(...)
// The class itself, it should look like this
class MySQL_DataMapper
{
private $table = 'some_table';
private $pdo;
public function __construct($pdo)
{
$this->pdo = $pdo;
}
public function fetchUserById($id)
{
$query = "SELECT * FROM `{$this->table}` WHERE `id` =:id";
$stmt = $this->pdo->prepare($query);
$stmt->execute(array(
':id' => $id
));
return $stmt->fetch();
}
}
```
# Conclusion
- It doesn't really matter if your project is small or large, you should always avoid global state in all its forms (global variables, static classes, singletons) - **for the sake of code maintainability**
- You have to remember that `$pdo` is not a part of your business logic. It's a part of storage logic. That means, before you even start doing something with business logic, like heavy computations, you should really abstract table access (including CRUD operations)
- The bridge that brings together your `data access abstraction` and `computation logic` is usually called **Service**
- You should always pass the things function's need as parameters
- You'd better stop worrying about your code and start thinking about abstraction layers.
- And finally, before you even start doing any stuff, you'd first initialize all your services in `bootstrap.php` and then start querying storage according to user's input (`$_POST` or `$_GET`).
Just like,
```
public function indexAction()
{
$id = $_POST['id']; // That could be $this->request->getPost('id')
$result = $this->dataMapper->fetchById($id);
return print_r($result, true);
}
```
|
Matplotlib - Hide error bars' label & points in legend
Here's an example of what I mean:
```
import matplotlib.pyplot as plt
xdata = [5, 10, 15, 20, 25, 30, 35, 40]
ydata = [1, 3, 5, 7, 9, 11, 13, 15]
yerr_dat = 0.5
plt.figure()
plt.plot(xdata, ydata, 'go--', label='Data', zorder=1)
plt.errorbar(xdata, ydata, yerr = yerr_dat, zorder=2, fmt='ko')
plt.legend()
plt.show()
```
which will plot this:

I don't want the error points and the *None* label in the legend. How can I take those out?
I'm using *Canopy* in its version 1.0.1.1190.
---
## Edit
After trying Joe's solution with this code:
```
import matplotlib.pyplot as plt
xdata = [5, 10, 15, 20, 25, 30, 35, 40]
ydata = [1, 3, 5, 7, 9, 11, 13, 15]
yerr_dat = 0.5
value = 20
plt.figure()
scatt = plt.plot(xdata, ydata, 'go--', label='Data', zorder=1)
hline = plt.hlines(y=5, xmin=0, xmax=40)
vline = plt.vlines(x=20, ymin=0, ymax=15)
plt.errorbar(xdata, ydata, yerr = yerr_dat, zorder=2, fmt='ko')
plt.legend([scatt, vline, hline], ['Data', 'Horiz line', 'Verti line = %d' % value], fontsize=12)
plt.show()
```
I get this warning:
```
/home/gabriel/Canopy/appdata/canopy-1.0.0.1160.rh5-x86/lib/python2.7/site-packages/matplotlib/legend.py:628: UserWarning: Legend does not support [<matplotlib.lines.Line2D object at 0xa09a28c>]
Use proxy artist instead.
http://matplotlib.sourceforge.net/users/legend_guide.html#using-proxy-artist
(str(orig_handle),))
```
and this output:

where the first label is not showing for some reason. Ideas?
---
## Edit 2
Turns out I was missing a comma in the line:
```
scatt, = plt.plot(xdata, ydata, 'go--', label='Data', zorder=1)
```
After adding it everything worked like a charm. Thanks Joe!
|
On newer versions of matplotlib, what you're wanting is the default behavior. Only artists with an explicitly assigned label will appear in the legend.
However, it's easy to control what's displayed in the legend. Just pass in only the artists you'd like to label:
```
import matplotlib.pyplot as plt
xdata = [5, 10, 15, 20, 25, 30, 35, 40]
ydata = [1, 3, 5, 7, 9, 11, 13, 15]
yerr_dat = 0.5
plt.figure()
dens = plt.plot(xdata, ydata, 'go--', zorder=1)
plt.errorbar(xdata, ydata, yerr = yerr_dat, zorder=2, fmt='ko')
plt.legend(dens, ['Density Profile'])
plt.show()
```

Alternately, you could specify `label='_nolegend_'` for the `errorbar` plot, but I don't know what versions of matplotlib support that, and passing in explicit lists of artists and labels will work for any version.
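For illustration, here's a minimal sketch of that `label='_nolegend_'` variant (the `Agg` backend is selected only so the snippet runs without a display; on recent matplotlib versions, any label starting with an underscore is excluded from the legend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, only so this runs without a display
import matplotlib.pyplot as plt

xdata = [5, 10, 15, 20]
ydata = [1, 3, 5, 7]

fig, ax = plt.subplots()
ax.plot(xdata, ydata, 'go--', label='Data')
# The underscore-prefixed label keeps the errorbar artists out of the legend.
ax.errorbar(xdata, ydata, yerr=0.5, fmt='ko', label='_nolegend_')
legend = ax.legend()
print([t.get_text() for t in legend.get_texts()])  # only ['Data']
```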
If you'd like to add other artists:
```
import matplotlib.pyplot as plt
xdata = [5, 10, 15, 20, 25, 30, 35, 40]
ydata = [1, 3, 5, 7, 9, 11, 13, 15]
yerr_dat = 0.5
plt.figure()
# Note the comma! We're unpacking the tuple that `plot` returns...
dens, = plt.plot(xdata, ydata, 'go--', zorder=1)
hline = plt.axhline(5)
plt.errorbar(xdata, ydata, yerr = yerr_dat, zorder=2, fmt='ko')
plt.legend([dens, hline], ['Density Profile', 'Ceiling'], loc='upper left')
plt.show()
```

|
How to format a date range (with only one year string) in different locale's in iOS?
The date string in English: `Jan 18 - Jan 26, 2018`
Incorrect Korean date string: `Jan 18 - 2018 Jan 26`
What should happen in Korean: `2018 Jan 18 - Jan 26` (not exactly correct Korean, just referring to the location of the year; see the accepted answer for the proper Korean date format)
Right now this requires two date formatters, but you have to hardcode which date formatter has the year, so the Korean date doesn't look right.
Is this possible to do in Swift/Objc without just putting the year string on both sides of the date range?
|
Use a `DateIntervalFormatter`:
```
let sd = Calendar.current.date(from: DateComponents(year: 2018, month: 1, day: 18))!
let ed = Calendar.current.date(from: DateComponents(year: 2018, month: 1, day: 26))!
let dif = DateIntervalFormatter()
dif.dateStyle = .medium
dif.timeStyle = .none
dif.locale = Locale(identifier: "en_US")
let resEN = dif.string(from: sd, to: ed)
dif.locale = Locale(identifier: "ko_KR")
let resKO = dif.string(from: sd, to: ed)
```
This results in:
>
> Jan 18 – 26, 2018
>
> 2018. 1. 18. ~ 2018. 1. 26.
>
>
>
The output isn't exactly what you show in your question but the output is appropriate for the given locales.
|
On query parameters change, route is not updating
In my application, there are multiple links in which I have some `links` with the same `route` but with different `query parameters`.
say, I have links like:
```
.../deposits-withdrawals
.../deposits-withdrawals?id=1
.../deposits-withdrawals?id=2&num=12321344
```
When I am on one of the above routes and navigate to another of them, the route is not changing. Not even functions like `ngOnInit` or `ngOnChanges` are being called. I have changed the parameters from `queryParameters` to `matrixParameters`, but with no success. I have gone through many links and answers, but none of them solved my problem. Please help me solve this.
Thank you...
**EDIT:**
```
<button routerLink="/deposits-withdrawals" [queryParams]="{ type: 'deposit' ,'productId': selectedBalance.ProductId}" class="wallet-btns">DEPOSIT {{selectedBalance.ProductSymbol}}</button>
<button routerLink="/deposits-withdrawals" [queryParams]="{ type: 'withdrawal' ,'productId': selectedBalance.ProductId }" class="wallet-btns">WITHDRAW {{selectedBalance.ProductSymbol}}</button>
```
|
I had this problem once. Can you post some code, or the solutions you tried?
I'll give you something working for me, but you'd better give me some more details so that I can help.
Supposing we are here: `some_url/deposits-withdrawals`, and we wish to navigate, changing only the parameters.
```
let url = "id=2&num=12321344"
this.router.navigate(['../', url], { relativeTo: this.route });
```
Hope it helps :/
=================================== **EDIT**==================================
You have to detect that the query parameters have changed. For that, you may add a listener for query parameter changes in the constructor of your component. This can be done using the *ActivatedRoute* this way:
```
constructor(route:ActivatedRoute) {
route.queryParams.subscribe(val => {
// put the code from ngOnInit here
});
}
```
Adding this listener to detect query parameter changes means you have to move your code from the ngOnInit function into this listener. Then every time you navigate, it will be called.
For navigating, you may use html navigation, or ts navigation. If you want it to be in html, you may use :
```
<button routerLink="/deposits-withdrawals" [queryParams]="{ type: 'withdrawal' ,'productId': selectedBalance.ProductId }" class="wallet-btns">WITHDRAW {{selectedBalance.ProductSymbol}}</button>
```
|
Simple function to sort an array of objects
I would like to create a (non-anonymous) function that sorts an array of objects alphabetically by the key `name`. I only code straight-out JavaScript so frameworks don't help me in the least.
```
var people = [
{'name': 'a75', 'item1': false, 'item2': false},
{'name': 'z32', 'item1': true, 'item2': false},
{'name': 'e77', 'item1': false, 'item2': false}
];
```
|
How about this?
```
var people = [
{
name: 'a75',
item1: false,
item2: false
},
{
name: 'z32',
item1: true,
item2: false
},
{
name: 'e77',
item1: false,
item2: false
}];
function sort_by_key(array, key)
{
return array.sort(function(a, b)
{
var x = a[key]; var y = b[key];
return ((x < y) ? -1 : ((x > y) ? 1 : 0));
});
}
people = sort_by_key(people, 'name');
```
This allows you to specify the key by which you want to sort the array so that you are not limited to a hard-coded name sort. It will work to sort any array of objects that all share the property which is used as the key. I believe that is what you were looking for?
And here is a jsFiddle: <http://jsfiddle.net/6Dgbu/>
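On newer engines, a variant of the same idea (just a sketch; the mutating version above is fine too) avoids mutating the input and uses `localeCompare`, which handles string ordering more sensibly than `<`/`>`:

```javascript
function sortByKey(array, key) {
  // slice() copies the array first; sort() alone would mutate it in place.
  return array.slice().sort(function (a, b) {
    return String(a[key]).localeCompare(String(b[key]));
  });
}

var people = [
  { name: 'z32', item1: true,  item2: false },
  { name: 'a75', item1: false, item2: false },
  { name: 'e77', item1: false, item2: false }
];

var sorted = sortByKey(people, 'name');
console.log(sorted.map(function (p) { return p.name; })); // [ 'a75', 'e77', 'z32' ]
```

Note that `localeCompare` compares strings (non-string values are coerced with `String()` here), so keep the comparator from the answer above if your keys are numbers you want compared numerically.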
|
How to skip initial data and trigger only new updates in Firestore Firebase?
I've searched everywhere with no luck. I want to query Firestore to get all users WHERE type is admin. Something like:
```
SELECT * FROM users WHERE type=admin
```
but **only** when the property `total` is changing. If I'm using:
```
users.whereEqualTo("type", "admin").addSnapshotListener(new EventListener<QuerySnapshot>() {
@Override
public void onEvent(@Nullable QuerySnapshot snapshots, @Nullable FirebaseFirestoreException e) {
for (DocumentChange dc : snapshots.getDocumentChanges()) {
switch (dc.getType()) {
case ADDED:
//Not trigger
break;
case MODIFIED:
//Trigger
break;
case REMOVED:
//
break;
}
}
}
});
```
The `case ADDED` is triggered the first time I query, and when the `total` is changed, `case MODIFIED` is triggered again (this is what I want). I want only the changes, not all the initial data; I don't need it. How can I get that?
Please help me, this is the last part of my project. How do I skip `case ADDED`?
|
When you are listening for changes in Cloud Firestore for realtime changes, using Firestore Query's [addSnapshotListener()](https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/Query.html#addSnapshotListener(com.google.firebase.firestore.EventListener%3Ccom.google.firebase.firestore.QuerySnapshot%3E)) method, it:
>
> Starts listening to this query.
>
>
>
Which basically means that the first time you attach the listener, you get all documents that correspond to that particular query. Furthermore, every time a property within a document changes, you are notified according to that change. Obviously, this happens only as long as the listener remains active and is not removed.
Unfortunately, Firestore listeners don't work that way, so you cannot skip that "case ADDED". What you can do instead is to add a `Date` property under each user object (**[this](https://stackoverflow.com/questions/48474957/servertimestamp-is-allways-null-on-firebase-firestore/48475027)** is how you can add it) and query your database on the client, according to this new property, for all documents that have changed since a previous time.
According to Nick Cardoso's comment, for future visitors who might ask why this behaviour happens: it is for the reason he mentioned in his comment. I also recommend reading Doug Stevenson's answer from this [post](https://stackoverflow.com/questions/48489895/firestore-query-reads-all-the-documents-again-from-database-when-only-one-is-m/48491561#48491561) for a better understanding.
|
When do we use ruby module vs using class composition?
A question similar to this has been asked before, but I am asking specifically about using composition as an alternative to using module mixins.
```
class Helper
def do_somthing
end
end
```
If I need to 'use' a class but not inherit it, I would simply compose it and use it.
```
class MyStuff
def initialize
helper = Helper.new
helper.do_something
end
end
```
Why would I want to create a module for this:
```
module Helper
def do_something
end
end
class MyStuff
include Helper
end
```
The only difference I see is that there won't be many `Helper` objects lying around if I use modules. But I don't see a problem with more objects lying around versus fewer, bigger objects.
Moreover, I don't know whether I will need to subclass it in the future. So how do I decide whether the users of my library would want a module mixin or composition?
|
When the relationship between `Helper` and the `MyStuff` class is one of ownership, use **composition**. This is known as a **"has-a"** relationship. For example, let's say you have `Person` class and a `Car` class. You would use composition because a person has a car:
```
class Person
def initialize
@car = Car.new
end
end
class Car
def accelerate
# implementation
end
end
```
When `Helper` **"acts like"** `MyStuff`, use a **module mixin**. `Helper`, in this case, takes on the **role** of `MyStuff`. This is a bit different from an **"is-a" relationship**, which would imply that you should use **traditional inheritance**. For example, let's say we have a `Person` class and a `Sleeper` module. A person takes on the role of a sleeper sometimes, but so do other objects: instances of `Dog`, `Frog`, or maybe even `Computer`. Each of those other classes represents something that can go to sleep.
```
module Sleeper
def go_to_sleep
# implementation
end
end
class Person
include Sleeper
end
class Computer
include Sleeper
end
```
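As a small illustration of what `include` actually does (a sketch reusing the `Sleeper`/`Person` classes above, with a stub method body): the module is inserted into the class's ancestor chain, so instances both gain the method and report the role via `is_a?`:

```ruby
module Sleeper
  def go_to_sleep
    "zzz"  # stub implementation, just for illustration
  end
end

class Person
  include Sleeper
end

person = Person.new
puts person.go_to_sleep        # the mixed-in method is available
puts person.is_a?(Sleeper)     # true: the role is visible to type checks
puts Person.ancestors.inspect  # Sleeper sits between Person and Object
```

This is one practical advantage of the mixin over composition: callers can ask "does this object play the sleeper role?" without knowing how the behavior got there.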
Sandi Metz's Practical [*Object-Oriented Design in Ruby*](http://www.poodr.com/) is an *excellent* resource for these topics.
|
Updating environment global variable in Jenkins pipeline from the stage level - is it possible?
I have a `Jenkinsfile` with some global variables and some stages.
Can I update a global variable from within a stage?
An example:
```
pipeline {
agent any
environment {
PASSWD = "${sh(returnStdout: true, script: 'python -u do_some_something.py')}"
ACC = "HI"
}
stages {
stage('stage1') {
when { expression { params.UPDATE_JOB == false } }
steps {
script {
def foo = sh(returnStdout: true, script: 'python -u do_something.py')
env.ACC = foo
println foo
print("pw")
println env.PASSWD
}
}
}
}
}
```
Is it possible to update the `ACC` variable with the value from foo, so that I can use the `ACC` Variable in the next stage?
|
You can't override the environment variable defined in the `environment {}` block. However, there is one trick you might want to use. You can refer to `ACC` environment variable in two ways:
- explicitly by `env.ACC`
- implicitly by `ACC`
The value of `env.ACC` cannot be changed once set inside `environment {}` block, but `ACC` behaves in the following way: when the variable `ACC` is not set then the value of `env.ACC` gets accessed (if exists of course). But when `ACC` variable gets initialized in any stage, `ACC` refers to this newly set value in any stage. Consider the following example:
```
pipeline {
agent any
environment {
FOO = "initial FOO env value"
}
stages {
stage("Stage 1") {
steps {
script {
echo "FOO is '${FOO}'" // prints: FOO is 'initial FOO env value'
env.BAR = "bar"
}
}
}
stage("Stage 2") {
steps {
echo "env.BAR is '${BAR}'" // prints: env.BAR is 'bar'
echo "FOO is '${FOO}'" // prints: FOO is 'initial FOO env value'
echo "env.FOO is '${env.FOO}'" // prints: env.FOO is 'initial FOO env value'
script {
FOO = "test2"
env.BAR = "bar2"
}
}
}
stage("Stage 3") {
steps {
echo "FOO is '${FOO}'" // prints: FOO is 'test2'
echo "env.FOO is '${env.FOO}'" // prints: env.FOO is 'initial FOO env value'
echo "env.BAR is '${BAR}'" // prints: env.BAR is 'bar2'
script {
FOO = "test3"
}
echo "FOO is '${FOO}'" // prints: FOO is 'test3'
}
}
}
}
```
And as you can see in the above example, the only exception to the rule is if the environment variable gets initialized outside the `environment {}` block. For instance, `env.BAR` in this example was initialized in `Stage 1`, but the value of `env.BAR` could be changed in `Stage 2`, and `Stage 3` sees the changed value.
## UPDATE 2019-12-18
There is one way to override the environment variable defined in the `environment {}` block - you can use `withEnv()` block that will allow you to override the existing env variable. It won't change the value of the environment defined, but it will override it inside the `withEnv()` block. Take a look at the following example:
```
pipeline {
agent any
stages {
stage("Test") {
environment {
FOO = "bar"
}
steps {
script {
withEnv(["FOO=newbar"]) {
echo "FOO = ${env.FOO}" // prints: FOO = newbar
}
}
}
}
}
}
```
>
> I also encourage you to check my ["Jenkins Pipeline Environment Variables explained
> "](https://www.youtube.com/watch?v=KwQDxwZRZiE) video.
>
>
>
|
Global.asax.cs name 'RouteConfig' does not exist in the current context
There are three files in my solution which I think I am referencing, but I am stuck with these 3 errors:
Global.asax.cs: the name 'RouteConfig' does not exist in the current context
What am I missing? Thanks :)


```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.SessionState;
using System.Web.Http;
namespace PingYourPackage.API.WebHost
{
public class Global : System.Web.HttpApplication
{
protected void Application_Start(object sender, EventArgs e)
{
var config = GlobalConfiguration.Configuration;
RouteConfig.RegisterRoutes(config);
WebAPIConfig.Configure(config);
AutofacWebAPI.Initialize(config);
}
***************
```
here is the class autofac
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;
using System.Reflection;
namespace PingYourPackage.Config
{
public class AutofacWebAPI
{
public static void Initialize(HttpConfiguration config)
{
Initialize(config,
RegisterServices(new ContainerBuilder()));
}
public static void Initialize (
HttpConfiguration config, IContainer container)
{
config.DependencyResolver =
new AutofacWebApiDependencyResolver(container);
}
private static IContainer RegisterServices(ContainerBuilder builder)
{
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
// registeration goes here
return builder.Build();
}
}
}
```

|
When you generate a new project, these `*Config` classes (e.g. `RouteConfig`) are put in `App_Code` by default.
It looks like you moved the `*Config` classes out of the `App_Code` directory, into a `Config` directory. Everything in `App_Code` is automatically referenced by other code in your project.
<https://msdn.microsoft.com/en-us/library/ex526337(v=vs.140).aspx>
>
> Code in the `App_Code` folder is referenced automatically in your application.
>
>
>
It is okay that you moved them, they can live anywhere. Now, you just need to reference them manually inside your `Global.asax.cs` file.
```
using PingYourPackage.Config;
```
(assuming `PingYourPackage` is the name of your project/root namespace)
|
Longhorn using more than 50% of storage as reserved space
I'm running Longhorn v1.2.3 on an RKE1 cluster (provisioned by Rancher). This cluster has 5 nodes with dedicated 20 GiB disks mounted on /var/lib/longhorn, with an ext4 filesystem and 0% reserved blocks for the root user/group.
In the dashboard, I see the following stats:
| Type | Size |
| --- | --- |
| Schedulable | 33.5 Gi |
| Reserved | 58.1 Gi |
| Used | 6.18 Gi |
| Disabled | 0 Bi |
| Total | 97.8 Gi |
I changed Storage Minimal Available Percentage in settings to 5 (from 25, as I recall), but that hasn't changed anything. When I open the "nodes" tab, I see the following in the "Size" column:
```
7.86 Gi
+11.7 Gi Reserved
```
The exact size varies between nodes, but it's around 8 Gi.
These dedicated disks were added after provisioning Longhorn in the cluster, and the system disks are 40 GiB in size, so possibly the reason for this overuse is that the reserved size was calculated at the time Longhorn was deployed alongside the operating system, and wasn't adjusted when I mounted this folder to the new disk.
Why do I have more than half of my space "reserved"? What can I do to get more usable space from Longhorn? Thanks!
|
After digging deeper and finding that it was at one point possible to adjust these values from the UI (I wasn't able to find it), I searched for the Longhorn CRDs and came across `nodes.longhorn.io`. Inside its definition I found exactly what I was searching for:
```
spec:
  allowScheduling: true
  disks:
    default-disk-fd0000000000:
      allowScheduling: true
      evictionRequested: false
      path: /var/lib/longhorn/
      storageReserved: 536870912
      tags: null
```
Here I changed `storageReserved` to 536870912 (512 MiB) on all nodes, just in case, and Longhorn applied the change immediately. This is fine in my case because those disks are dedicated, and, per the [docs](https://longhorn.io/docs/1.2.3/volumes-and-nodes/node-space-usage/#whole-cluster-space-usage):
>
> Reserved: The space reserved for other applications and system.
>
>
>
Now I have my storage back. Hope it helps!
|
How to structure database for unread item alerts per user
I just have a general database theory question. I have a need to make something similar to showing what posts/items a user has viewed or not (such as in a forum) or an unread email message. What I have is there are posts that multiple users can view, but it needs to separate by user who has actually viewed it. So if User A viewed Post 1, it would no longer show that Post 1 is a new item to view, but to User B, it would still show that Post 1 is a new item to view.
I've searched for other ideas, and one of them is to get a timestamp of when the user last logged in, but I actually need to keep track of the posts they've seen, as opposed to posts that have happened since they last logged in.
I would like a MySQL database solution if possible, but I'm open to cookies if that is a must. I could do this on my own and just figure it out, but I'd appreciate any advice on how to properly structure a table(s) to make this the most efficient. Also, bandwidth and storage are not an issue.
|
While reviewing the [relevant schema for phpBB](https://github.com/phpbb/phpbb3/blob/develop/phpBB/install/schemas/mysql_41_schema.sql#L819), I found the following:
```
# Table: 'phpbb_topics_track'
CREATE TABLE phpbb_topics_track (
  user_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
  topic_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
  forum_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
  mark_time int(11) UNSIGNED DEFAULT '0' NOT NULL,
  PRIMARY KEY (user_id, topic_id),
  KEY topic_id (topic_id),
  KEY forum_id (forum_id)
) CHARACTER SET `utf8` COLLATE `utf8_bin`;
```
And:
```
# Table: 'phpbb_forums_track'
CREATE TABLE phpbb_forums_track (
  user_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
  forum_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
  mark_time int(11) UNSIGNED DEFAULT '0' NOT NULL,
  PRIMARY KEY (user_id, forum_id)
) CHARACTER SET `utf8` COLLATE `utf8_bin`;
```
Then I [look here in their wiki](http://wiki.phpbb.com/Table.phpbb_topics_track):
>
> This table keeps record for visited topics in order to mark them as
> read or unread. We use the mark\_time timestamp in conjunction with
> last post of topic x's timestamp to know if topic x is read or not.
>
>
> In order to accurately tell whether a topic is read, one has to also
> check phpbb\_forums\_track.
>
>
>
So essentially they have a lookup table to store the data associated with a user's viewing of a topic (thread), and then check it against the timestamp in the forum view table, to determine whether the topic has been viewed by the user.
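To make the lookup-table idea concrete, here is a small self-contained sketch in Python (SQLite is used just for portability; the table and column names are illustrative, not taken from phpBB) that marks posts as read per user and lists each user's unread posts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (post_id INTEGER PRIMARY KEY, title TEXT);
    -- One row per (user, post) the user has viewed, like phpbb_topics_track
    CREATE TABLE posts_track (
        user_id   INTEGER NOT NULL,
        post_id   INTEGER NOT NULL,
        mark_time INTEGER NOT NULL,
        PRIMARY KEY (user_id, post_id)
    );
    INSERT INTO posts VALUES (1, 'Post 1'), (2, 'Post 2');
""")

def mark_read(user_id, post_id, now=0):
    conn.execute("INSERT OR REPLACE INTO posts_track VALUES (?, ?, ?)",
                 (user_id, post_id, now))

def unread_posts(user_id):
    # Posts with no tracking row for this user are "new" to them
    rows = conn.execute(
        """SELECT p.post_id FROM posts p
           LEFT JOIN posts_track t
             ON t.post_id = p.post_id AND t.user_id = ?
           WHERE t.post_id IS NULL
           ORDER BY p.post_id""", (user_id,))
    return [r[0] for r in rows]

mark_read(user_id=1, post_id=1)    # User A views Post 1
print(unread_posts(1))             # [2]    -- Post 1 is no longer new to User A
print(unread_posts(2))             # [1, 2] -- both posts are still new to User B
```

The same LEFT JOIN / IS NULL pattern works in MySQL with the phpBB-style tables above.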
|
How to pass a variable from ModelSerializer.update() to ModelViewSet.update() in Django REST Framework
I need to pass a return value of a custom model update method in the view response.
In my serializer I want to do:
```
class Serializer(ModelSerializer):
    def update(self, instance, validated_data):
        something_special = validated_data.pop('something_special')
        important_info = model.update_something_special(something_special)

        for attr, value in validated_data.items():
            setattr(instance, attr, value)
        instance.save()

        return instance
```
And now in my view I'd like to return `important_info` in the response:
```
class View(ModelViewSet):
    def update(self, request, *args, **kwargs):
        partial = kwargs.pop('partial', False)
        instance = self.get_object()
        serializer = self.get_serializer(instance, data=request.data, partial=partial)
        serializer.is_valid(raise_exception=True)
        self.perform_update(serializer)

        important_info = ???

        return Response(serializer.data)
```
Is this possible in Django REST Framework, or is this a dead end? If it is, how can I do this differently?
|
```
class Serializer(ModelSerializer):
    important_info = None

    def update(self, instance, validated_data):
        something_special = validated_data.pop('something_special')
        self.important_info = model.update_something_special(something_special)

        for attr, value in validated_data.items():
            setattr(instance, attr, value)
        instance.save()

        return instance


class View(ModelViewSet):
    def update(self, request, *args, **kwargs):
        partial = kwargs.pop('partial', False)
        instance = self.get_object()
        serializer = self.get_serializer(instance, data=request.data, partial=partial)
        serializer.is_valid(raise_exception=True)
        self.perform_update(serializer)

        important_info = serializer.important_info

        return Response(serializer.data)
```
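The reason this works is simply that the serializer instance outlives the `update()` call, so it can carry extra state back to the view. Stripped of all the DRF machinery, the pattern looks like this (all names here are hypothetical stand-ins):

```python
# Sketch of the pattern without DRF: update() stashes a side result on the
# serializer instance, and the caller (the "view") reads it back afterwards.
class Serializer:
    important_info = None

    def update(self, instance, validated_data):
        # Hypothetical stand-in for model.update_something_special(...)
        self.important_info = validated_data.pop('something_special').upper()
        for attr, value in validated_data.items():
            instance[attr] = value
        return instance

serializer = Serializer()
instance = {}
serializer.update(instance, {'something_special': 'secret', 'name': 'x'})
print(serializer.important_info)  # SECRET
print(instance)                   # {'name': 'x'}
```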
|
Is this list comprehension pythonic enough?
Let's say I want to create a list of `ints` using Python that consists of the cubes of the numbers 1 through 10 **only** if the **cube** is evenly divisible by four.
I wrote this working line:
```
cube4 = [x ** 3 for x in range(1, 11) if (x ** 3) % 4 == 0]
```
My beef with this line of code is that it's computing the cube of x twice. Is there a *more pythonic* way to write this line? Or is this as good as it'll get in a list comprehension?
---
**Edit -** My question is intended to be focused how to avoid extraneous calculation using the features and nuances of Python while still keeping code concise and readable. Though this solution could have probably been reached looking at other questions, I wanted to be sure that I knew the *best* answer to this question, not just a solution that works.
|
You can use a generator expression:
```
cubed = (x ** 3 for x in range(1, 11))
cube4 = [c for c in cubed if c % 4 == 0]
```
This still iterates over `range()` only *once*, but now the `x ** 3` expression is calculated just the once as the generator expression is iterated over. You can combine it into one line:
```
cube4 = [c for c in (x ** 3 for x in range(1, 11)) if c % 4 == 0]
```
but keeping the generator expression on a separate line may aid in comprehension (no pun intended).
Demo:
```
>>> [c for c in (x ** 3 for x in range(1, 11)) if c % 4 == 0]
[8, 64, 216, 512, 1000]
```
Of course, mathematically speaking, for your simple example you could just use `[x ** 3 for x in range(2, 11, 2)]`, but I suspect that wasn't quite the aim of your question. :-)
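If you are on Python 3.8 or later (newer than this answer), an assignment expression (the "walrus" operator) also lets you compute the cube only once inside a single comprehension:

```python
# Python 3.8+: (c := x ** 3) names the cube so it is computed only once
cube4 = [c for x in range(1, 11) if (c := x ** 3) % 4 == 0]
print(cube4)  # [8, 64, 216, 512, 1000]
```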
|
How does Pyramid's add\_static\_view work?
How does add\_static\_view(name, path) in Pyramid work?
From the docstring:
>
> "The `name` argument is a string representing an application-relative
> local URL prefix. It may alternately be a full URL.
> The `path` argument is the path on disk where the static files
> reside. This can be an absolute path, a package-relative path,
> or an asset specification."
>
>
>
Somehow I have got the impression that this description is not
very accurate.
If I add some code along the lines of
```
config.add_static_view("static", "/path/to/resource/on/filesystem")
```
and I visit
```
http://localhost:PORT/static/logo.png
```
I see the logo.png given
that it can be found in
```
/path/to/resource/on/filesystem/
```
Now, if I have some code like the following
```
config.add_static_view("http://myfilehoster.com/images", "myproject:images")
```
the description that "the `path` argument is the path on disk where
the static files reside" does not seem accurate anymore because the actual
files reside on the disk of myfilehoster.
It seems to me that I am merely registering some kind of identifier
(myproject:images) that I can use within my program code to reference
the "real" location "http://myfilehoster.com/images".
E.g.
```
request.static_url("myproject:images/logo.png")
```
would be resolved
to "http://myfilehoster.com/images/logo.png".
So is the documentation inaccurate here or am I missing something?
|
You are missing something. In the [narrative documentation on static assets](http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/assets.html#serving-static-assets) it states:
>
> Instead of representing a URL prefix, the `name` argument of a call to `add_static_view()` can alternately be a *URL*. Each of examples we’ve seen so far have shown usage of the `name` argument as a URL prefix. However, when `name` is a *URL*, static assets can be served from an external webserver. In this mode, the `name` is used as the URL prefix when generating a URL using `pyramid.request.Request.static_url()`.
>
>
>
In the [API documentation](http://docs.pylonsproject.org/projects/pyramid/en/latest/api/config.html#pyramid.config.Configurator.add_static_view) similar wording is used:
>
> When `add_static_view` is called with a `name` argument that represents a URL prefix, as it is above, subsequent calls to `pyramid.request.Request.static_url()` with paths that start with the `path` argument passed to `add_static_view` will generate a URL something like `http://<Pyramid app URL>/images/logo.png`, which will cause the `logo.png` file in the images subdirectory of the `mypackage` package to be served.
>
>
>
Using a URL switches the behaviour of `add_static_view` altogether and the `path` argument is interpreted as a symbolic path only for the `.static_url()` method. That latter detail is perhaps not described as explicitly in the documentation, you could file an issue in the [pyramid issue tracker](https://github.com/Pylons/pyramid/issues) if you feel strongly about that.
|
Testing Controller Concerns with Rspec on Rails 6
I am not able to access the session or route paths (root\_url) from the specs when using an anonymous controller on a controller concern.
Here is my code:
```
module SecuredConcern
  extend ActiveSupport::Concern

  def logged_in?
    session[:user_info].present?
  end

  def redirect_to_login
    redirect_to login_path unless logged_in?
  end

  def redirect_to_home
    redirect_to root_path if logged_in?
  end
end
```
And spec
```
require 'rails_helper'
describe SecuredConcern, type: :controller do
  before do
    class FakesController < ApplicationController
      include SecuredConcern
    end
  end

  after { Object.send :remove_const, :FakesController }

  let(:object) { FakesController.new }
  # let(:session) {create(:session, user_info: '')}

  describe 'logged_in' do
    it "should return false if user is not logged in" do
      expect(object.logged_in?).to eq(false)
    end
  end
end
```
Here is the trace:
```
Module::DelegationError:
ActionController::Metal#session delegated to @_request.session, but @_request is nil: #<FakesController:0x00007f9856c04c20 @_action_has_layout=true, @rendered_format=nil, @_routes=nil, @_request=nil, @_response=nil>
# ./app/controllers/concerns/secured_concern.rb:9:in `logged_in?'
# ./spec/controllers/concerns/secured_concern_spec.rb:21:in `block (3 levels) in <main>'
# ------------------
# --- Caused by: ---
# NoMethodError:
# undefined method `session' for nil:NilClass
# ./app/controllers/concerns/secured_concern.rb:9:in `logged_in?'
```
config is updated with `config.infer_base_class_for_anonymous_controllers = true`
Any pointers on what I am doing wrong here?
|
Here is how I solved it by using `RSpec.shared_examples`
In my controller spec where I am including this concern:
```
# spec/controllers/home_controller_spec.rb
RSpec.describe HomeController, type: :controller do
  it_behaves_like 'SecuredConcern', HomeController
end
```
And in my concern spec:
```
# spec/shared/secured_concern_spec.rb
require 'rails_helper'
RSpec.shared_examples 'SecuredConcern' do |klass|
  describe '#logged_in?' do
    it "should return true if user is logged in" do
      session[:user_info] = {uid: 1}
      expect(subject.logged_in?).to eq(true)
    end

    it "should return false if user is not logged in" do
      expect(subject.logged_in?).to eq(false)
    end
  end
end
```
Hope it helps anyone with a similar issue.
|
Elasticsearch 6.3.2 terms match empty array "plus" others
In my database, a post can have zero (0) or more categories represented as an array.
When I do the query, to look within those categories, passing some values:
```
{
  "query": {
    "bool": {
      "should": {
        "terms": {
          "categories": ["First", "Second", "And so on"]
        }
      }
    }
  }
}
```
And it works well; I get the records I'm expecting. But the problem comes when I want to include the posts where categories is an empty array (`[]`).
I'm upgrading from an old version of ES (1.4.5) to version 6.3.2, and this code was originally written using "missing", which has been deprecated.
I've tried changing the mapping, adding the famous `"null_value": "NULL"`, and then querying, but it didn't work.
I also tried the combination of `should` with `must_not`, as suggested as a replacement for "missing", but that didn't work either.
How can I achieve this? Meaning that if I have indexed:
```
Post.new(id: 1, title: '1st', categories: [])
Post.new(id: 2, title: '2nd', categories: ['news', 'tv'])
Post.new(id: 3, title: '3rd', categories: ['tv', 'trending'])
Post.new(id: 4, title: '4th', categories: ['movies'])
Post.new(id: 5, title: '5th', categories: ['technology', 'music'])
```
The result should return posts number 1, 2 and 3 - the ones that have "news", "tv" or an empty array as categories.
|
The `missing` query can be replicated using `exists` inside `must_not`. You have to modify your query as below:
```
{
  "query": {
    "bool": {
      "should": [
        {
          "terms": {
            "categories": [
              "First",
              "Second",
              "And so on"
            ]
          }
        },
        {
          "bool": {
            "must_not": [
              {
                "exists": {
                  "field": "categories"
                }
              }
            ]
          }
        }
      ]
    }
  }
}
```
You can read about it [here](https://www.elastic.co/guide/en/elasticsearch/reference/6.3/query-dsl-exists-query.html#missing-query). Note that Elasticsearch treats a field indexed with an empty array as having no values at all, which is why the `must_not`/`exists` clause also matches documents where `categories` is `[]`.
|
Cant load saved policy (TF-agents)
I saved a trained policy with the policy saver as follows:
```
tf_policy_saver = policy_saver.PolicySaver(agent.policy)
tf_policy_saver.save(policy_dir)
```
I want to continue training with the saved policy. So I tried initializing the training with the saved policy, which caused an error.
```
agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)

agent.initialize()

agent.policy = tf.compat.v2.saved_model.load(policy_dir)
```
ERROR:
```
File "C:/Users/Rohit/PycharmProjects/pythonProject/waypoint.py", line 172, in <module>
agent.policy=tf.compat.v2.saved_model.load('waypoints\\Two_rewards')
File "C:\Users\Rohit\anaconda3\envs\btp36\lib\site-packages\tensorflow\python\training\tracking\tracking.py", line 92, in __setattr__
super(AutoTrackable, self).__setattr__(name, value)
AttributeError: can't set attribute
```
I just want to save the time of retraining from scratch every time. How can I load the saved policy and continue training?
Thanks in advance
|
Yes, as previously stated, you should use the Checkpointer to do this; have a look at the example code below.
```
agent = ...  # Agent Definition
policy = agent.policy
# Policy --> Y

policy_checkpointer = common.Checkpointer(ckpt_dir='path/to/dir',
                                          policy=policy)

...  # Train the agent

# Policy --> X
policy_checkpointer.save(global_step=epoch_counter.numpy())
```
When you later want to reload the policy you simply run the same initialization code.
```
agent = ...  # Agent Definition
policy = agent.policy
# Policy --> Y1, possibly Y1==Y depending on agent class you are using; if it's DQN
# then they are different because of random initialization of network weights

policy_checkpointer = common.Checkpointer(ckpt_dir='path/to/dir',
                                          policy=policy)
# Policy --> X
```
Upon creation, the `policy_checkpointer` will automatically realize whether there are any preexisting checkpoints. If there are, it will update the value of the variables it is tracking automatically on creation.
A couple notes to make:
1. You can save with the checkpointer a lot more than just the policy, and indeed I recommend doing so. TF-Agent's Checkpointer object is extremely flexible, e.g.:
```
train_checkpointer = common.Checkpointer(ckpt_dir=first/dir,
                                         agent=tf_agent,              # tf_agent.TFAgent
                                         train_step=train_step,       # tf.Variable
                                         epoch_counter=epoch_counter, # tf.Variable
                                         metrics=metric_utils.MetricsGroup(
                                             train_metrics, 'train_metrics'))

policy_checkpointer = common.Checkpointer(ckpt_dir=second/dir,
                                          policy=agent.policy)

rb_checkpointer = common.Checkpointer(ckpt_dir=third/dir,
                                      max_to_keep=1,
                                      replay_buffer=replay_buffer  # TFUniformReplayBuffer
                                      )
```
2. Note that in the case of a `DqnAgent` the `agent.policy` and `agent.collect_policy` are essentially wrappers around a QNetwork. The implication of this is shown in the code below (look at the comments on the state of the policy variable)
```
agent = DqnAgent(...)
policy = agent.policy  # Random initial policy ---> X

dataset = replay_buffer.as_dataset(...)

for data in dataset:
    experience, _ = data
    loss_agent_info = agent.train(experience=experience)
    # policy variable stores a trained Policy object ---> Y
```
This happens because Tensors in TF are shared across your runtime. Therefore when you update your agent's `QNetwork` weights with `agent.train`, those same weights will implicitly update also in your `policy` variable's `QNetwork`. Indeed it's not that the `policy`'s Tensors get updated separately, but rather that they are simply the same Tensors as the ones in your `agent`.
|
How to run some code as soon as new image gets uploaded in WordPress 3.5 uploader
I need to run some code as soon as new images get uploaded in the WordPress 3.5 uploader. Here is the code of wp-includes/js/media-views.js (lines 529-540):
```
uploading: function( attachment ) {
    var content = this.frame.content;

    // If the uploader was selected, navigate to the browser.
    if ( 'upload' === content.mode() )
        this.frame.content.mode('browse');

    // If we're in a workflow that supports multiple attachments,
    // automatically select any uploading attachments.
    if ( this.get('multiple') )
        this.get('selection').add( attachment );
},
```
I added alert('New image uploaded!') at the bottom of this uploading function, and the browser alerted 'New image uploaded!' when a new image was uploaded. However, I don't want to hack the WordPress core, so I'm wondering if there is a way I can write some code in my theme that does the same thing. Sorry for my English. Thank you for your attention, guys!
|
[**This line of**](https://github.com/WordPress/WordPress/blob/master/wp-includes/js/plupload/wp-plupload.js#L221) **wp-plupload.js** shows that the uploader queue will reset on complete. So you can do this:
```
wp.Uploader.queue.on('reset', function() {
    alert('Upload Complete!');
});
```
I've tested it and it works on WP **3.5** sites.
So, here is the full version including support for both the regular uploader on "**Upload New Media**" ***Page*** and the new [plupload](http://www.plupload.com/) uploader on "**Insert Media**" ***Dialog***.
Create a javascript file named: *`wp-admin-extender.js`* and save it under your `/custom/js/` folder or whatever within your template directory.
```
// Hack for "Upload New Media" Page (old uploader)
// Overriding the uploadSuccess function:
if (typeof uploadSuccess !== 'undefined') {
// First backup the function into a new variable.
var uploadSuccess_original = uploadSuccess;
// The original uploadSuccess function with has two arguments: fileObj, serverData
// So we globally declare and override the function with two arguments (argument names shouldn't matter)
uploadSuccess = function(fileObj, serverData)
{
// Fire the original procedure with the same arguments
uploadSuccess_original(fileObj, serverData);
// Execute whatever you want here:
alert('Upload Complete!');
}
}
// Hack for "Insert Media" Dialog (new plupload uploader)
// Hooking on the uploader queue (on reset):
if (typeof wp.Uploader !== 'undefined' && typeof wp.Uploader.queue !== 'undefined') {
wp.Uploader.queue.on('reset', function() {
alert('Upload Complete!');
});
}
```
And finally; add this into your theme's functions.php to get this functionality in WP Admin:
```
// You can also use other techniques to add/register the script for WP Admin.
function extend_admin_js() {
    wp_enqueue_script('wp-admin-extender.js', get_template_directory_uri().'/custom/js/wp-admin-extender.js', array('media-upload', 'swfupload', 'plupload'), false, true);
}
add_action('admin_enqueue_scripts', 'extend_admin_js');
```
This might not be the legitimate solution but it's a workaround at least.
|
Is CSSOM and DOM creation asynchronous?
I have read that CSSOM creation is a bottleneck in terms of web page performance. But there seem to be some ways around it, like adding the `media` attribute to the stylesheet link. I'm trying to understand how to optimise my web app and came across this really interesting [link](https://developers.google.com/web/fundamentals/performance/), but couldn't understand what order CSSOM and DOM creation happen in.
[Here](https://stackoverflow.com/questions/4772333/are-css-stylesheets-loaded-asynchronously) I see some reference to asynchronous loading of CSS files, but the answer is not very clear. Of course, that is about loading and not object model creation.
My question is this: Does the CSSOM creation and DOM creation happen in parallel or in sequence?
|
Yes, CSSOM and DOM creation happen asynchronously, and that is only logical. I would recommend you start off at [Google Web fundamentals](https://developers.google.com/web/fundamentals/?hl=en), where topics like [rendering](https://developers.google.com/web/fundamentals/performance/critical-rendering-path/constructing-the-object-model?hl=en) are discussed and explained in depth.
1. DOM Construction starts as soon as the browser receives a webpage from a network request or reads it off the disk. It starts "parsing" the `html` and "tokenizing" it, creating a DOM tree of nodes that we are aware of.
2. While parsing and constructing the DOM tree, if it encounters a link tag in the `head` or any other section for that matter, referencing an external stylesheet. (from the [docs](https://developers.google.com/web/fundamentals/performance/critical-rendering-path/constructing-the-object-model?hl=en))
>
> Anticipating that it will need this resource to render the page, it
> immediately dispatches a request for this resource,...
>
>
>
3. The CSS rules are again tokenized and start forming what we call a CSSOM. The CSSOM tree is then generated finally as the entire webpage is parsed and **then** applied to the nodes in DOM tree.
>
> When computing the final set of styles for any object on the page, the browser starts with the most general rule applicable to that node (e.g. if it is a child of body element, then all body styles apply) and then recursively refines the computed styles by applying more specific rules - i.e. the rules **“cascade down”**.
>
>
>
We have all noticed that on slow connections, the DOM loads first and then styles are applied and webpage looks finished. It is because of this fundamental reason - The CSSOM and DOM are **independent** data structures.
I hope it answers your question and points you in the right direction.
PS: I would strongly recommend again, to read through [Google web performance fundamentals](https://developers.google.com/web/fundamentals/performance/?hl=en) to gain better insights.
|
How to find the hours difference between two dates in mongodb
I have documents in the following structure saved in my MongoDB.
>
> UserLogs
>
>
>
```
{
  "_id": "111",
  "studentID": "1",
  "loginTime": "2019-05-01 09:40:00",
  "logoutTime": "2019-05-01 19:40:00"
},
{
  "_id": "222",
  "studentID": "1",
  "loginTime": "2019-05-02 09:40:00",
  "logoutTime": "2019-05-02 20:40:00"
},
{
  "_id": "333",
  "studentID": "2",
  "loginTime": "2019-05-02 09:40:00",
  "logoutTime": "2019-05-02 20:40:00"
}
```
Is it possible to query for documents based on the period of time between loginTime and logoutTime? **e.g. greater than 20 hrs**
`mongodb version = 3.4`
|
You can use the below aggregation:
```
db.collection.aggregate([
  { "$project": {
    "difference": {
      "$divide": [
        { "$subtract": ["$logoutTime", "$loginTime"] },
        60 * 1000 * 60
      ]
    }
  }},
  { "$group": {
    "_id": "$studentID",
    "totalDifference": { "$sum": "$difference" }
  }},
  { "$match": { "totalDifference": { "$gte": 20 }}}
])
```
You just have to [**`$subtract`**](https://docs.mongodb.com/manual/reference/operator/aggregation/subtract/) `loginTime` from `logoutTime`, which gives you the difference in milliseconds, and then [**`$divide`**](https://docs.mongodb.com/manual/reference/operator/aggregation/divide/) it by `3600000` to get the time in hours. Note that this assumes `loginTime` and `logoutTime` are stored as BSON dates; `$subtract` cannot subtract plain strings, so string timestamps like the ones in your sample documents would need to be stored as dates first.
[**MongoPlayground**](https://mongoplayground.net/p/gBAxlJAqxBe)
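As a sanity check, here is the same arithmetic in plain Python (the timestamps and format are taken from the sample documents; the divisor matches the pipeline's `60 * 1000 * 60`):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
login = datetime.strptime("2019-05-01 09:40:00", fmt)
logout = datetime.strptime("2019-05-01 19:40:00", fmt)

# $subtract on two dates yields milliseconds; $divide by 3600000 gives hours
diff_ms = (logout - login).total_seconds() * 1000
hours = diff_ms / (60 * 1000 * 60)
print(hours)  # 10.0
```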
|
Problems with reading blob field - out of memory
I have an application in Delphi 7, where I made the code below to load large PDF files from a blob field into memory and then load the PDF. It works perfectly with large files; I have already tested it with 1 gigabyte files.
However, somewhere there is a memory leak and I don't know where; after loading 10 large files, it presents the message "Out of Memory".
I am not sure how to free the memory after loading.
I already tested loading several PDF files and it works perfectly; the component has no problem.
Note: I don't want to save the file to disk and then load it into the component - I want to do it directly in memory.
```
procedure TForm1.btnAbrirClick(Sender: TObject);
var
  BlobStream: TStream;
  Arquivo: Pointer;
begin
  pdf2.Active := False;
  Screen.Cursor := crHourGlass;
  try
    BlobStream := absqry1.CreateBlobStream(absqry1.FieldByName('binario'), bmRead);
    Arquivo := AllocMem(BlobStream.Size);
    BlobStream.Position := 0;
    BlobStream.ReadBuffer(Arquivo^, BlobStream.Size);
    pdf2.LoadDocument(Arquivo);
    pdfvw1.Active := True;
  finally
    Screen.Cursor := crDefault;
    BlobStream.Free;
    Arquivo := nil;
  end;
end;
```
|
`Arquivo := nil;` does not free memory allocated by `AllocMem`. For that, you need a call to `FreeMem`.
This is covered in the documentation (**emphasis** mine):
>
> AllocMem allocates a memory block and initializes each byte to zero.
>
>
> AllocMem allocates a block of the given Size on the heap, and returns the address of this memory. Each byte in the allocated buffer is set to zero. **To dispose of the buffer, use FreeMem.** If there is not enough memory available to allocate the block, an EOutOfMemory exception is raised.
>
>
>
I've also corrected your use of `try..finally`.
```
procedure TForm1.btnAbrirClick(Sender: TObject);
var
  BlobStream: TStream;
  Arquivo: Pointer;
begin
  pdf2.Active := False;
  Screen.Cursor := crHourGlass;
  BlobStream := absqry1.CreateBlobStream(absqry1.FieldByName('binario'), bmRead);
  try
    Arquivo := AllocMem(BlobStream.Size);
    try
      BlobStream.Position := 0;
      BlobStream.ReadBuffer(Arquivo^, BlobStream.Size);
      pdf2.LoadDocument(Arquivo);
      pdfvw1.Active := True;
    finally
      FreeMem(Arquivo);
    end;
  finally
    Screen.Cursor := crDefault;
    BlobStream.Free;
  end;
end;
```