HTML5 card game
I created a card game in Silverlight a year ago, in order to learn a bit about Silverlight. I am now wanting to make an HTML5 version of the game in an effort to learn a little bit more about that. I am thinking I'd like to take advantage of stuff like Knockout.js and WebSockets and the canvas element.
Now what I'm confused about is how to lay out the cards on the screen. With Silverlight I was able to make a "Hand" control, which was made up of two sub controls: the cards the player has in their hand and the ones they have on the table. They were made up of Card controls.
I don't believe there is the concept of a User Control in JavaScript, so I am possibly thinking about this in entirely the wrong way.
I have a client side JSON object called game, which contains an array of players; each player has a hand which is made up of an array of in-hand cards and on-table cards. Ideally I would like to bind these to something using Knockout.js, but I don't know what I could bind to.
How could I lay out some cards on the table, and perhaps make reuse of something for each player? Would I simply position images (of cards) on a canvas? Is there a way to make some kind of hand object that each player could have and that I could bind to?
|
You don't need knockout with the canvas. Just do your design like you normally would.
Personally, I create objects with draw methods; these are Drawable.
[Live Example](http://jsfiddle.net/Z8yT3/15/).
Then you have other container objects that hold references to Drawable objects.
Example of a Drawable Card.
(Uses [`pd`](https://github.com/Raynos/pd#pd))
```
// ctx is the canvas 2D rendering context, defined elsewhere (see the fiddle)
var Card = {
    draw: function _draw() {
        ctx.strokeRect(this.x, this.y, 30, 50);
        ctx.strokeText(this._value, this.x + 10, this.y + 25);
    },
    setPosition: function _position(x, y) {
        this.x = x;
        this.y = y;
    }
};
```
And a Container Hand that has references to drawable cards.
```
var Hand = {
    draw: function _draw() {
        this.cards.forEach(function (c, i) {
            c.draw();
        });
    },
    generateCards: function _generateCards(n) {
        for (var i = 0; i < n; i++) {
            this.cards.push(Object.create(Card, pd({ _value: i })));
        }
    },
    setPosition: function _setPosition(x, y) {
        this.x = x;
        this.y = y;
        this.cards.forEach(function _each(c, i) {
            c.setPosition(x + i*40, y);
        });
    }
};
```
And some bootstrap code to draw something.
```
var h = Object.create(Hand, pd({ cards: [] }));
h.generateCards(3);
h.setPosition(10, 10);
h.draw();
```
|
nullptr as a template parameter
I have a template like:
```
template <class A, class B>
void func(A* a, B* b){
...
}
```
In some cases the parameter `B* b` is not needed, and therefore I try to pass a nullptr:
```
MyA a;
func(&a, nullptr);
```
The compiler doesn't like that since `nullptr` somehow is not a type.
How can I deal with that situation? The only idea I have is to just use a dummy type for that case.
|
The problem is that `nullptr` is not actually a pointer, but an object of type `nullptr_t`. So it cannot match either `A*` or `B*`. One option would be to provide an overload to specifically handle `nullptr_t`.
```
template<class A>
void func(A* a, nullptr_t)
{
    func(a, (int*)nullptr);
}
```
If you also want to allow the first argument to be `nullptr`, you can provide 2 more overloads. One to handle just the first argument, and one to handle both.
```
template<class B>
void func(nullptr_t, B* b)
{
    func((int*)nullptr, b);
}
void func(nullptr_t, nullptr_t)
{
    func((int*)nullptr, (int*)nullptr);
}
```
For any more arguments, this approach becomes unfeasible without code generation, as the number of overloads required is an exponential function of the number of arguments. In that case, I would recommend jrok's approach.
|
Can't get stdout/stderr from (Python) subprocess.check\_output()
I'm trying to get the message from a `git add` command, to print to a log file later on.
```
import subprocess
import os
filename = 'test.txt'
# Add changes
add_cmd = """git add "%s" """ % filename
os.system(add_cmd)
a = subprocess.check_output(add_cmd, shell=True, stderr=subprocess.STDOUT)
```
The `os.system()` call shows on screen:
```
fatal: Not a git repository (or any of the parent directories): .git
```
which is correct, since this folder is not a `git` repo.
But the `subprocess.check_output()` call fails with:
```
File "test.py", line 11, in <module>
a = subprocess.check_output(add_cmd, shell=True, stderr=subprocess.STDOUT)
File "/usr/lib/python2.7/subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'git add "test.txt" ' returned non-zero exit status 128
```
Why am I not able to catch the error message with `subprocess.check_output()`?
|
From the documentation for [`subprocess.check_output()`](https://docs.python.org/2/library/subprocess.html#subprocess.check_output):
>
> If the return code was non-zero it raises a `CalledProcessError`. The `CalledProcessError` object will have the return code in the `returncode` attribute and any output in the `output` attribute.
>
>
>
`git add` returns a non-zero exit code when there is an error condition. Catch that exception; your output is there:
```
try:
    a = subprocess.check_output(add_cmd, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as cpe:
    print cpe.output
```
Demo:
```
>>> import subprocess
>>> import os
>>> filename = 'test.txt'
>>> add_cmd = """git add "%s" """ % filename
>>> try:
...     a = subprocess.check_output(add_cmd, shell=True, stderr=subprocess.STDOUT)
... except subprocess.CalledProcessError as cpe:
...     print cpe.output
...
fatal: Not a git repository (or any of the parent directories): .git
>>> cpe.returncode
128
```
You probably don't need to use `shell=True`; pass in your arguments as a *list* instead and they'll be executed without an intermediary shell. This has the added advantage that you don't need to worry about properly escaping `filename`:
```
add_cmd = ['git', 'add', filename]
try:
    a = subprocess.check_output(add_cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as cpe:
    print cpe.output
```
|
HP Proliant DL360 G3: What version of XenServer is supported?
I've spent some time looking, and haven't been able to find a decent HCL for the current *and* past versions of XenServer.
I have 3 HP Proliant DL360 G3's that I want to use for virtualization. They all have dual Xeon processors @ 3.06 Ghz, 2 GB of RAM (I'm getting much more for them though), integrated 5i controllers, etc.
I've tried to install a few versions of ESXi and ESX, some with the DL360 G3 on the HCL, and some without. I've had limited success with every version. I did get ESX 3.5 running, but I'm a little bit disappointed with its feature set and want something a little more modern. I've heard that XenServer has a much broader range of supported hardware.
Does anyone know what versions of XenServer officially support the DL360 G3, or where I could find an HCL that includes all versions of XenServer?
|
I'd really like to say that you should avoid using any G3 ProLiant in 2013... *especially* for virtualization... Anywho, the XenServer HCL does not include the G3 ProLiants. You need a 64-bit CPU per the [XenServer software requirements](http://support.citrix.com/servlet/KbServlet/download/28750-102-664874/XenServer-6.0.0-installation.pdf).
*One or more 64-bit x86 CPU(s), 1.5 GHz minimum, 2 GHz or faster multicore CPU recommended*
The [HP ProLiant DL360 G3 systems](http://h18000.www1.hp.com/products/quickspecs/11504_div/11504_div.HTML) were introduced in 2003 and went end-of-life around 2005. These are Pentium 4 Xeon processors. There are only two parallel SCSI drives. They're 32-bit only. At this juncture *ANY* modern computer will perform better (even a desktop).
|
How to convert an ipython notebook to html with collapsed output (and/or input)
I have an `ipython notebook` I'd like to share with colleagues who may not have ipython installed.
So I converted it to html with:
```
ipython nbconvert my_notebook.ipynb
```
But my problem is that I have very long outputs which make the reading difficult, and I'd like to know whether it's possible to have the collapse or scroll option of the notebook viewer on the html version.
**Basically, I'd like this : [output example](http://ipython.org/ipython-doc/2/_images/ipy_013_notebook_long_out.png)**
[](https://i.stack.imgur.com/hLx9J.png)
But in the html version. Is this even possible?
Thanks for helping!
|
I found what I wanted thanks to that [blog](https://damianavila.github.io/blog/posts/mimic-the-ipython-notebook-cell-execution.html) which does exactly what I wanted.
I modified it a bit to make it work with ipython 2.1 [edit: works also with Jupyter], and mixed the input and output hiding tricks.
# What it does:
When opening the html file, all input will be shown and output hidden. By clicking on the input field it will show the output field. And once you have both fields, you can hide one by clicking the other.
edit: It now also hides long input, and about one line is always shown by default. You can show everything by clicking on the input prompt number. This is convenient when you don't have output (like a definition of a long function you'd like to hide in your HTML doc)
You need to add a template while doing nbconvert:
```
ipython nbconvert --template toggle my_notebook.ipynb
```
where toggle is a file containing:
```
{%- extends 'full.tpl' -%}
{% block output_group %}
<div class="output_hidden">
{{ super() }}
</div>
{% endblock output_group %}
{% block input_group -%}
<div class="input_hidden">
{{ super() }}
</div>
{% endblock input_group %}
{%- block input -%}
<div class="in_container">
<div class="in_hidden">
{{ super() }}
<div class="gradient">
</div>
</div>
</div>
{%- endblock input -%}
{%- block header -%}
{{ super() }}
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<style type="text/css">
div.output_wrapper {
  margin-top: 0px;
}
.output_hidden {
  display: block;
  margin-top: 5px;
}
.in_hidden {
  width: 100%;
  overflow: hidden;
  position: relative;
}
.in_container {
  width: 100%;
  margin-left: 20px;
  margin-right: 20px;
}
.gradient {
  width:100%;
  height:3px;
  background:#eeeeee;
  position:absolute;
  bottom:0px;
  left:0;
  display: none;
  opacity: 0.4;
  border-bottom: 2px dashed #000;
}
div.input_prompt {
  color: #178CE3;
  font-weight: bold;
}
div.output_prompt {
  color: rgba(249, 33, 33, 1);
  font-weight: bold;
}
</style>
<script>
$(document).ready(function(){
    $(".output_hidden").click(function(){
        $(this).prev('.input_hidden').slideToggle();
    });
    $(".input_hidden").click(function(){
        $(this).next('.output_hidden').slideToggle();
    });
    var slideHeight = 25;
    $(".in_container").each(function () {
        var $this = $(this);
        var $in_hidden = $this.children(".in_hidden");
        var defHeight = $in_hidden.height();
        if (defHeight >= 61) {
            var $prompt = $this.prev(".input_prompt");
            var $gradient = $in_hidden.children(".gradient");
            $in_hidden.css("height", slideHeight + "px");
            $gradient.css("display", "block");
            $prompt.click(function () {
                var curHeight = $in_hidden.height();
                if (curHeight == slideHeight) {
                    $in_hidden.animate({
                        height: defHeight
                    }, "normal");
                    $gradient.fadeOut();
                }
                else {
                    $in_hidden.animate({
                        height: slideHeight
                    }, "normal");
                    $gradient.fadeIn();
                }
                return false;
            });
        }
    });
});
</script>
{%- endblock header -%}
```
Basically, you can inject whatever javascript and css you want to customize your notebook at will!
Have fun!
|
How to store a float in NSUserDefaults
I want to store a `float` value into `NSUserDefaults`.
I also need to check whether the float value exists; if not, I need to assign some value to it,
and retrieve it. For the above I have the code below, but it gives me an error.
```
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([defaults boolForKey:@"HANDWRITING_SIZE_SLIDER"] == YES) {
    self.sizeSlider.value = 10.0;
} else {
    self.sizeSlider.value = [[NSUserDefaults standardUserDefaults] floatForKey:@"HANDWRITING_SIZE_SLIDER"];
}
```
Thanks for any help
|
Use the `NSNumber` class for this and store it via the `setObject:forKey:` method so you can check if it exists.
I'd also suggest the usage of constants as keys:
```
#define HANDWRITING_SIZE_SLIDER @"HSS"
```
Your code should be along these lines:
```
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([defaults objectForKey:HANDWRITING_SIZE_SLIDER] == nil) {
    // doesn't exist in NSUserDefaults, set to default value...
    self.sizeSlider.value = 10.0;
} else {
    self.sizeSlider.value = [[defaults objectForKey:HANDWRITING_SIZE_SLIDER] floatValue];
}
```
Somewhere else in your app, you'd set the value in `NSUserDefaults` like this:
```
float sizeSliderValue = ...
[[NSUserDefaults standardUserDefaults] setObject:[NSNumber numberWithFloat:sizeSliderValue] forKey:HANDWRITING_SIZE_SLIDER];
```
|
Is it possible to guarantee 100% uptime in redundant hosting servers inside a cloud computing structure?
I'm interested in finding a hosting provider that can really offer 100% uptime.
Is it possible with hosting servers inside a cloud computing structure?
|
Real 100%? No. Five nines (99.999%)? Yep. Five nines is about five minutes unscheduled outage/year. You can get more reliability if you want, but five nines is where the cost for increased reliability is really taking off. You can approximate four nines as less than an hour outage/year, three nines as less than nine hours and six nines as about half a minute.
Why, you might ask, does "hosting servers in a cloud computing structure" not provide this? What are the chances of the cloud data center going down (tornado, fire, flood)? If at multiple data centers, how long does it take to detect and accomplish the failover? How much does that extra redundancy cost? You can see how the expense mounts rapidly.
Figure out how much application downtime will cost you and from that, determine how much you're willing to pay for increased reliability. Then go looking for appropriate Service Level Agreements.
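As a quick sanity check of those figures, here's a small Python sketch (just an illustration, not from the original answer) that converts an availability percentage into the allowed downtime per year:
```
def downtime_per_year_minutes(availability_percent):
    """Allowed unscheduled downtime per year, in minutes, for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_percent / 100.0)

for nines in (99.9, 99.99, 99.999, 99.9999):
    print("%.4f%% -> %.1f minutes/year" % (nines, downtime_per_year_minutes(nines)))
```
This prints roughly 526, 53, 5.3 and 0.5 minutes per year, matching the three-, four-, five- and six-nines estimates above.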
|
How to move along a sine function at constant speed
I have a Java program that moves an object according to a sine function `a*sin(bx)`.
The object moves at a certain speed by changing the x parameter by the time interval times the speed. However, this only moves the object at a constant speed along the x axis.
What I want to do is move the object at a constant tangential speed along the curve of the function. Could someone please help me? Thanks.
|
Let's say you have a function `f(x)` that describes a curve in the `xy` plane. The problem consists in moving a point along this curve at a constant speed `S` (i.e., at a constant tangential speed, as you put it.)
So, let's start at an instant `t` and a position `x`. The point has coordinates `(x, f(x))`. An instant later, say, at `t + dt` the point has moved to `(x + dx, f(x + dx))`.

The distance between these two locations is:
dist = sqrt((x + dx - x)^2 + (f(x+dx) - f(x))^2) = sqrt(dx^2 + (f(x+dx) - f(x))^2)
Now, let's factor out dx to the right. We get:
dist = sqrt(1 + f'(x)^2) dx
where f'(x) is the derivative (f(x+dx) - f(x)) / dx.
If we now divide by the time elapsed dt we get
dist/dt = sqrt(1 + f'(x)^2) dx/dt.
But dist/dt is the speed at which the point moves along the curve, so it is the constant S. Then
S = sqrt(1 + f'(x)^2) dx/dt
Solving for dx:
dx = S / sqrt(1 + f'(x)^2) dt
which gives you how much you have to move the x-coordinate of the point after dt units of time.
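Here is a minimal numerical sketch of that update rule, written in Python for brevity rather than the question's Java, and assuming the question's `asin(bx)` means f(x) = a*sin(b*x):
```
import math

def step_along_sine(a, b, x, speed, dt):
    """Advance x so the point (x, a*sin(b*x)) travels at roughly `speed` along the curve."""
    slope = a * b * math.cos(b * x)                 # f'(x) for f(x) = a*sin(b*x)
    dx = speed / math.sqrt(1 + slope * slope) * dt  # dx = S / sqrt(1 + f'(x)^2) dt
    return x + dx

# Example: 5 updates at 60 frames/second, tangential speed of 3 units/second.
x = 0.0
for _ in range(5):
    x = step_along_sine(a=2.0, b=0.5, x=x, speed=3.0, dt=1.0 / 60.0)
    print(x, 2.0 * math.sin(0.5 * x))
```
The same arithmetic translates line for line into the Java update loop.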
|
ajax timeout callback function
Is there a way to run a function if jQuery's $.ajax function hits its `timeout`?
i.e.
```
$.ajax({
...
...
,timeout:1000(){do something if timeout)
...
});
```
|
```
$.ajax({
    ...
    timeout: 1000,
    error: function(jqXHR, textStatus, errorThrown) {
        if (textStatus === "timeout") {
            // do something on timeout
        }
    }
});
```
For more information check out the jQuery documentation:
<http://api.jquery.com/jQuery.ajax/>
---
**Edited**
It's been over a year since I initially answered this and the `textStatus` possible values have changed to `"success"`, `"notmodified"`, `"error"`, `"timeout"`, `"abort"`, or `"parsererror"`. For error callbacks, only the last four statuses are possible.
Also you can now wire your error handlers through the returned JQuery deferred promise object's `.fail` method:
```
var promise = $.ajax({ timeout: 1000 });
promise.fail(function(jqXHR, textStatus) {
    if (textStatus === "timeout") {
        // handle timeout
    }
});
```
|
Oversampling with categorical variables
I would like to perform a combination of oversampling and undersampling in order to balance my dataset with roughly 4000 customers divided into two groups, where one of the groups have a proportion of roughly 15%.
I've looked into SMOTE (<http://www.inside-r.org/packages/cran/DMwR/docs/SMOTE>) and ROSE (<http://cran.r-project.org/web/packages/ROSE/ROSE.pdf>), but both of these create new synthetic samples using existing observations and e.g. kNN.
However, as many of the attributes associated with the customers are categorical I don't think this is the right way to go. For instance, a lot of my variables such as Region\_A and Region\_B are mutually exclusive, but using kNN the new observations may be placed in both Region\_A and Region\_B. Do you agree that this is an issue?
In that case, how does one perform oversampling in R by simply duplicating existing observations? Or is this the wrong way to do it?
|
ROSE and SMOTE are designed to handle categorical variables, so, unless your categorical variables are expressed in a binary format, you shouldn't normally have to worry about synthetic observations being assigned mutually exclusive categorical features. If they are, you can always restructure them as factors.
In your two-region example, you would create a new region variable with two levels, "A" and "B". Your records would take the appropriate values by referencing your original columns.
Now, if you are in a situation where your new synthetic observations could generate conflicting categories because they are spread across multiple, otherwise unrelated variables (e.g. syntheticObservation.isPig = 1 and syntheticObservation.hasWings = 1), you could always perform some additional data munging before doing your model estimation in order to clean such aberrations.
Also, since you do have about 600 event observations in your dataset, maybe consider the potential benefits of using synthetic observations derived through [undersampling](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3946903/) the majority class?
|
Android: Ruled/horizontal lines in Textview
In Android, I have a TextView and its content will be dynamic. I want to show a horizontal line after each line of text. I have searched a lot and found it for EditText ([How to use ruled/horizontal lines to align text in EditText in Android?](https://stackoverflow.com/questions/8667506/how-to-use-ruled-horizontal-lines-to-align-text-in-edittext-in-android)). I was planning to draw dynamic TextViews and a horizontal line under them. But I don't know how I can detect the end of a line. Any help will be highly appreciated. I want to have the same effect as the attached image of [How to use ruled/horizontal lines to align text in EditText in Android?](https://stackoverflow.com/questions/8667506/how-to-use-ruled-horizontal-lines-to-align-text-in-edittext-in-android)
|
I'm using the technique of drawing lines between each line of text in **EditText**, and then I make the **EditText** non-editable by setting **setKeyListener(null)** on the custom EditText object so that the EditText acts like a **TextView** :)
---
## A custom EditText that draws lines between each line of text that is displayed:
```
public class LinedEditText extends EditText {
    private Rect mRect;
    private Paint mPaint;

    // we need this constructor for LayoutInflater
    public LinedEditText(Context context, AttributeSet attrs) {
        super(context, attrs);
        mRect = new Rect();
        mPaint = new Paint();
        mPaint.setStyle(Paint.Style.STROKE);
        mPaint.setColor(0x800000FF);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        int count = getLineCount();
        Rect r = mRect;
        Paint paint = mPaint;
        for (int i = 0; i < count; i++) {
            int baseline = getLineBounds(i, r);
            canvas.drawLine(r.left, baseline + 1, r.right, baseline + 1, paint);
        }
        super.onDraw(canvas);
    }
}
```
---
Now use object of **LinedEditText** class where you need your **TextView** and make it non-editable.
## An Example:
```
public class HorizontalLine extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setTitle("Android: Ruled/horizonal lines in Textview");
        LinearLayout ll = new LinearLayout(this);
        ll.setOrientation(LinearLayout.VERTICAL);
        LayoutParams textViewLayoutParams = new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT);
        LinedEditText et = new LinedEditText(this, null);
        et.setText("The name of our country is Bangladesh. I am proud of my country :)");
        et.setLayoutParams(textViewLayoutParams);
        et.setKeyListener(null);
        ll.addView(et);
        this.setContentView(ll);
    }
}
```
---
**et.setKeyListener(null)** makes the EditText non-editable, so it acts like a TextView.
---
## The Output:

## Issue with cursor:
If you use **et.setKeyListener(null)** only, then the EditText just stops listening to keys, but the
user can still see a cursor on the EditText. If you don't want this cursor, just disable the EditText by adding this line:
```
et.setEnabled(false);
```
|
Elastic search Multiple Aggregation
I have two elastic search queries
1. Get transaction information for a particular day
2. Get transaction information for all days
How can I combine these two queries into a single query? I am struggling to write a single query for these two similar needs. Please help me to solve this issue. Thank you
```
{
  "query": {
    "filtered": {
      "query": {
        "match": {
          "payment_type": "paypal"
        }
      },
      "filter": {
        "range": {
          "transaction_date": {
            "from": "2014-11-10",
            "to": "2014-11-10"
          }
        }
      }
    }
  },
  "aggs": {
    "daily_price_sum": {
      "sum": {
        "field": "price"
      }
    },
    "daily_post_sum": {
      "sum": {
        "field": "purchased_post_count"
      }
    }
  }
}

{
  "size": 0,
  "query": {
    "match": {
      "payment_type": "paypal"
    }
  },
  "aggs": {
    "daily_price_sum": {
      "sum": {
        "field": "price"
      }
    },
    "daily_post_sum": {
      "sum": {
        "field": "purchased_post_count"
      }
    }
  }
}
```
|
If you are using ES of version less than 1.4.0, you can make use of [Filter Aggregations](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filter-aggregation.html). Query for the same is as below:
```
{
  "size": 0,
  "query": {
    "match": {
      "payment_type": "paypal"
    }
  },
  "aggs": {
    "daily_price_sum": {
      "sum": {
        "field": "price"
      }
    },
    "daily_post_sum": {
      "sum": {
        "field": "purchased_post_count"
      }
    },
    "one_day_aggs": {
      "filter": {
        "range": {
          "transaction_date": {
            "from": "2014-11-10",
            "to": "2014-11-10"
          }
        }
      },
      "aggs": {
        "daily_price_sum": {
          "sum": {
            "field": "price"
          }
        },
        "daily_post_sum": {
          "sum": {
            "field": "purchased_post_count"
          }
        }
      }
    }
  }
}
```
But if you are using ES 1.4.0, then you can use [Filters Aggregation](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filters-aggregation.html) to make the query more compact. Query for the same is as below:
```
{
  "size": 0,
  "query": {
    "match": {
      "payment_type": "paypal"
    }
  },
  "aggs": {
    "transactions": {
      "filters": {
        "filters": {
          "one_day": {
            "range": {
              "transaction_date": {
                "from": "2014-11-10",
                "to": "2014-11-10"
              }
            }
          },
          "all_days": {
            "match_all": {}
          }
        }
      },
      "aggs": {
        "daily_price_sum": {
          "sum": {
            "field": "price"
          }
        },
        "daily_post_sum": {
          "avg": {
            "field": "purchased_post_count"
          }
        }
      }
    }
  }
}
```
I also see that you are not interested in the query hits but only on the aggregation values. In that case, you can improve the performance of these aggregations by making use of [Shard Query Cache](http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.4/index-modules-shard-query-cache.html) which is present in ES 1.4.0. For making use of this, enable shard query cache as mentioned in the link and add the following parameter to the `_search` operation:
`search_type=count`.
|
MSHTML : Calling member of javascript object?
Using the .NET WebBrowser control, it is fairly simple to execute a member of an HtmlElement.
Assuming there is a JavaScript object called "player" with a member called "getLastSongPlayed"; calling this from the .NET WebBrowser control would go something like this:
```
HtmlElement elem = webBrowser1.Document.getElementById("player");
elem.InvokeMember("getLastSongPlayed");
```
Now my question is: How do I accomplish that using mshtml?
Thanks in advance,
Aldin
**EDIT:**
I got it up and running, see my answer below !
|
FINALLY!! I got it up and running!
The reason for the
```
System.InvalidCastException
```
that was thrown whenever I tried to reference the parentWindow of an mshtml.IHTMLDocument2 and/or assign it to an mshtml.IHTMLWindow2 window object had to do with threading.
For some reason unknown to me, it seems that the COM objects of mshtml.IHTMLWindow operate on another thread that must be in Single-Threaded Apartment (STA) state.
So the trick was calling/executing the required piece of code on another thread with STA state.
Here's a sample code:
```
SHDocVw.InternetExplorer IE = new SHDocVw.InternetExplorer();
bool _isRunning = false;

private void IE_DocumentComplete(object pDisp, ref object URL)
{
    // Prevent multiple Thread creations because the DocumentComplete event fires for each frame in an HTML-Document
    if (_isRunning) { return; }
    _isRunning = true;
    Thread t = new Thread(new ThreadStart(Do));
    t.SetApartmentState(ApartmentState.STA);
    t.Start();
}

private void Do()
{
    mshtml.IHTMLDocument3 doc = this.IE.Document;
    mshtml.IHTMLElement player = doc.getElementById("player");
    if (player != null)
    {
        // Now we're able to call the object's properties and functions (members)
        object value = player.GetType().InvokeMember("getLastSongPlayed", System.Reflection.BindingFlags.InvokeMethod, null, player, null);
        // Do something with the returned value in the "value" object above.
    }
}
```
We're now also able to reference the parentWindow of an mshtml.IHTMLDocument2 object and execute a site's script and/or our own (remember it must be on an STA thread):
```
mshtml.IHTMLWindow2 window = doc.parentWindow;
window.execScript("AScriptFunctionOrOurOwnScriptCode();", "javascript");
```
This might save someone from headaches in the future. lol
|
How well are the CSS2 'system colors' supported?
I've been reading the CSS2 spec in my spare time, and I've come across [this](http://www.w3.org/TR/CSS2/ui.html#system-colors). The note states that the system colours are deprecated; however, what is the browser support for the system colours part of the spec like?
|
This feature is **deprecated**. (it says so at the top of the link you provided)
It is not well supported, and is likely to get less so over time.
The intention of this feature was to provide the browser with access to the colour scheme of the underlying desktop operating system. However it has been dropped for several reasons. Desktop operating systems don't all have the same features, and furthermore the features they do have are open to change.
Finally, the main reason it was dropped is because the underlying reason for having them was to allow site designers to make their sites look like they belong to the parent OS. However there are other, better ways to achieve this now (notwithstanding the fact that most web site designers seem to prefer to have a consistent look and feel to their site that isn't dictated by the user).
See <http://www.w3.org/TR/2003/CR-css3-color-20030514/#css-system> for up-to-date info on this. At the bottom of this section is the following note:
>
> The CSS2 System Color values have been deprecated in favor of the CSS3 UI 'appearance' property for specifying the complete look of user interface related elements.
>
>
>
That should help tell you where to look for an up-to-date method of achieving a similar result.
The set of colours specified by the W3C in this page is roughly equivalent to the colour scheme values available in Windows 2000. This is obviously of limited use to someone running Windows XP, and no use at all to a Vista or Win7 user, let alone a Mac or Linux user.
|
ExtJs and nested models
Let's say I have a model called Vehicle
```
Ext.define('AM.model.Vehicle', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'brandId', type: 'int'},
        {name: 'brandName', type: 'string'},
        {name: 'modelId', type: 'int'},
        {name: 'modelName', type: 'string'},
        {name: 'yearOfRelease', type: 'int'}
    ]
});
```
As you can see, the vehicle model has fields "brandId", "brandName", etc.
For example, I want to edit an instance of this model.
I create an edit form, which has combobox linked to 'brandId' field.
Then I save the form values to a model instance with this
```
values = form.getValues();
record.set(values);
```
It all works fine, but there is a problem with all the fields that represent some outer models:
only the ids are updated, while all other fields that depend on the id remain the same.
In a regular OOP language(Java for example), I would create a class CarBrand
and put an instance of this class inside the Vehicle class.
In ExtJs 4 they have hasMany relationship, but don't have hasOne.
What is the best approach in ExtJS 4 for such nested models?
|
4.0.\* doesn't support the hasOne relationship. 4.1 is meant to support it.
For 4.0.7 I have the following override in place.
```
Ext.define('HOD.overrides.Form', {}, function() {
    /*
     * Implementing a nested setValues for forms with
     * arrays in them.
     */
    Ext.override(Ext.form.Basic, {
        setValues: function(values, arrayField) {
            var me = this;
            function setVal(fieldId, val) {
                if (arrayField) {
                    fieldId = arrayField + '.' + fieldId;
                }
                var field = me.findField(fieldId);
                if (field) {
                    field.setValue(val);
                    if (me.trackResetOnLoad) {
                        field.resetOriginalValue();
                    }
                } else if (Ext.isObject(val)) {
                    me.setValues(val, fieldId);
                }
            }
            if (Ext.isArray(values)) {
                // array of objects
                Ext.each(values, function(val) {
                    setVal(val.id, val.value);
                });
            } else {
                // object hash
                Ext.iterate(values, setVal);
            }
            return this;
        },
        /**
         * Persists the values in this form into the passed {@link Ext.data.Model} object in a beginEdit/endEdit block.
         * @param {Ext.data.Model} record The record to edit
         * @return {Ext.form.Basic} this
         */
        updateRecord: function(record) {
            var values = this.getFieldValues(),
                name,
                obj = {};
            function populateObj(record, values) {
                var obj = {},
                    name;
                record.fields.each(function(field) {
                    name = field.name;
                    if (field.model) {
                        var nestedValues = {};
                        var hasValues = false;
                        for (var v in values) {
                            if (v.indexOf('.') > 0) {
                                var parent = v.substr(0, v.indexOf('.'));
                                if (parent == field.name) {
                                    var key = v.substr(v.indexOf('.') + 1);
                                    nestedValues[key] = values[v];
                                    hasValues = true;
                                }
                            }
                        }
                        if (hasValues) {
                            obj[name] = populateObj(Ext.create(field.model), nestedValues);
                        }
                    } else if (name in values) {
                        obj[name] = values[name];
                    }
                });
                return obj;
            }
            obj = populateObj(record, values);
            record.beginEdit();
            record.set(obj);
            record.endEdit();
            return this;
        }
    });
});
```
What this lets me do is create the fields in my forms with names like so.
```
// Contact details
{
    xtype: 'fieldcontainer',
    defaultType: 'textfield',
    layout: 'anchor',
    fieldDefaults: {
        anchor: '80%',
        allowBlank: true
    },
    items: [
        {
            xtype: 'textfield',
            name: 'homePhone',
            fieldLabel: 'Home phone number'
        },
        {
            xtype: 'textfield',
            name: 'mobilePhone',
            fieldLabel: 'Mobile phone number'
        }]
},
{
    xtype: 'fieldcontainer',
    defaultType: 'textfield',
    layout: 'anchor',
    fieldDefaults: {
        anchor: '80%',
        allowBlank: true
    },
    items: [
        {
            name: 'address.id',
            xtype: 'hidden'
        },
        {
            name: 'address.building',
            fieldLabel: 'Building'
        },
        {
            name: 'address.street',
            fieldLabel: 'Street'
        },
        {
            name: 'address.city',
            fieldLabel: 'City'
        },
        {
            name: 'address.county',
            fieldLabel: 'County'
        },
        {
            name: 'address.postcode',
            fieldLabel: 'Postcode'
        },
        {
            name: 'address.country',
            fieldLabel: 'Country'
        }
    ]
},
]
```
Notice the . in the name field, which lets the overridden setValues and updateRecord methods know they need to map these values to the nested model, which is defined like so.
```
Ext.define('HOD.model.Employee', {
    extend: 'Ext.data.Model',
    fields: [
        // Person Class
        // Auto
        'id',
        'name',
        'homePhone',
        'mobilePhone',
        {model: 'HOD.model.Address', name: 'address'}, //Address
        {model: 'HOD.model.Contact', name: 'iceInformation'} //Person
    ]
});

Ext.define('HOD.model.Address', {
    extend: 'Ext.data.Model',
    fields: [
        'building',
        'street',
        'city',
        'county',
        'postcode',
        'country'
    ],
    belongsTo: 'Employee'
});
```
|
Access Modifiers - what's the purpose?
I'm relatively new to programming in general and I was wondering if anyone could help me understand the purpose of access modifiers? I understand the fact that they set different levels of access for classes and variables etc. but why would you want to limit what has access to these? What is the point in not allowing access for different things? why not just allow access for everything?
Sorry if this is a stupid question!
|
There are thousands of reasons, but I'll quote a few from [here](http://en.wikipedia.org/wiki/Encapsulation_(object-oriented_programming)) and then expand on those:
---
>
> Hiding the internals of the object protects its integrity by preventing users from setting the internal data of the component into an invalid or inconsistent state.
>
>
>
It is very common for a type to enforce certain invariants (e.g., A person's ID number must always be 8 characters long). If a client has full access to every member of a class, then there's no way you can enforce those constraints. Here's a concrete example:
```
public class Person
{
    public string Id;

    public void SetId(string newId)
    {
        if (newId.Length != 8)
            throw new ArgumentException("newId");
        Id = newId;
    }
}
```
There's nothing preventing me from just accessing the `Id` field and setting it to whatever I want! I could just do this:
```
Person p = new Person();
p.Id = "Invalid Id";
```
That's why your private state needs to be *private*.
---
>
> A supposed benefit of encapsulation is that it can reduce system complexity, and thus increase robustness, by allowing the developer to limit the inter-dependencies between software component.
>
>
>
Say I develop a class which has 40 methods - 35 of which are needed to wire up the internals of the class and to implement its functionality, and 5 of which are actually important to the client. Now I give you this class for you to use - you look at its public interface and you see 40 methods, most of which are *completely* irrelevant to you, and you ask me "What the hell is this spaghetti of code??"
In order to make sure the intent of a type is clear, you restrict the access of any members that are not relevant to the client.
Also, more public members = greater public surface = more stuff that needs to be tested = harder to maintain.
---
As a rule of thumb, try to make your members as private as possible, and then work your way up from there. For example, start with `private`, and then:
1. Do/would derived classes need to access this member? If so, promote to `protected`
2. Do/would other classes need to access this member? If so, promote to `public`
|
Change The background color of a QPushButton progressively and in a time duration?
I am tired of searching!
I subclassed a button from QPushButton and set my QSS on it; the style is as desired.
All I want is that when the button is hovered (enterEvent happens), the button's color changes over a specific time (for example 0.2 sec), not immediately (a soft color change).
What should I do?
\*\*\*\*\*\*\*Answer in PyQt4\*\*\*\*\*\*\*\*\*
```
class MyButton(QPushButton):
    def __init__(self):
        super(MyButton, self).__init__()
        self.setMinimumSize(80,50)
        self.setText('QPushButton')

    def getColor(self):
        return Qt.black

    def setColor(self, color):
        self.setStyleSheet("background-color: rgb({0}, {1}, {2});border:none;".format(color.red(), color.green(), color.blue()))

    color = QtCore.pyqtProperty(QColor, getColor, setColor)

    def enterEvent(self, event):
        global anim
        anim = QPropertyAnimation(self, "color")
        anim.setDuration(200)
        anim.setStartValue(QColor(216, 140, 230))
        anim.setEndValue(QColor(230, 230, 230))
        anim.start()

    def leaveEvent(self, event):
        self.setStyleSheet("background:none;")
```
|
One of the solutions is the `QPropertyAnimation` class. It does not support color changes out of the box, but since you have subclassed the button already anyway, here is some sample code.
First, you would need to define a new property in your class, right after the Q\_OBJECT macro, along with getter and setter methods for this property; example below:
```
class AnimatedButton : public QPushButton
{
    Q_OBJECT
    Q_PROPERTY(QColor color READ color WRITE setColor)
public:
    AnimatedButton (QWidget *parent = 0)
    {
    }
    void setColor (QColor color){
        setStyleSheet(QString("background-color: rgb(%1, %2, %3);").arg(color.red()).arg(color.green()).arg(color.blue()));
    }
    QColor color(){
        return Qt::black; // getter is not really needed for now
    }
};
```
and then in your event handler, where you process enterEvent, you should do something like this:
```
// since it will be in event of button itself, change myButton to 'this'
QPropertyAnimation *animation = new QPropertyAnimation(myButton, "color");
animation->setDuration(200); // duration in ms
animation->setStartValue(QColor(0, 0, 0));
animation->setEndValue(QColor(240, 240, 240));
animation->start();
```
though you would probably want to make sure not to start a new animation until this one is finished, and make sure that you don't have a memory leak from calling new again and again.
|
Salesforce php REST API with automatic login
I am pulling my hair out over this one right now. I have followed the following example:
<http://developer.force.com/cookbook/recipe/interact-with-the-forcecom-rest-api-from-php>
but here the user is being sent to the login form and needs to log in. Instead, I would like to post that data in my code and not have the user log in; my app should do that automatically.
If anyone could post an example of how to do that with oAuth I would really appreciate it, as I am not eager to use that bloated SOAP implementation.
Cheers guys!
|
It seems that, after some more tinkering, my attempts have been a success:
```
$loginurl = "https://login.salesforce.com/services/oauth2/token";
$params = "grant_type=password"
. "&client_id=" . CLIENT_ID
. "&client_secret=" . CLIENT_SECRET
. "&username=" . USER_NAME
. "&password=" . PASSWORD;
$curl = curl_init($loginurl);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $params);
$json_response = curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
if ( $status != 200 ) {
    die("Error: call to URL failed with status $status, response $json_response, curl_error " . curl_error($curl) . ", curl_errno " . curl_errno($curl));
}
curl_close($curl);
echo $json_response;
```
Now all that is left to do is store the
access\_token & instance\_url from that response into a session var
and work away on our objects.
I hope the above helps someone else with similar issues.
|
What about this combination of gulp-concat and lazypipe is causing an error using gulp 4?
I'm upgrading from Gulp 3 to 4, and I'm running into an error:
```
The following tasks did not complete: build
Did you forget to signal async completion?
```
I understand what it's saying, but can't understand why this code is triggering it.
Error or not, the task completes (the files are concatenated and written to dest). Executing the same code without lazypipe results in no error, and removing the concatenation within lazypipe also fixes the error.
Wrapping the whole thing in something that creates a stream (like merge-stream) fixes the issue. I guess something about the interaction between gulp-concat and lazypipe is preventing a stream from being correctly returned.
Here's the (simplified) task:
```
gulp.task('build', function() {
    var dest = 'build';

    var buildFiles = lazypipe()
        .pipe(plugins.concat, 'cat.js') // Task will complete if I remove this
        .pipe(gulp.dest, dest);

    // This works
    // return gulp.src(src('js/**/*.js'))
    //     .pipe(plugins.concat('cat.js'))
    //     .pipe(gulp.dest(dest));

    // This doesn't (unless you wrap it in a stream-making function)
    return gulp.src(src('js/**/*.js'))
        .pipe(buildFiles());
});
```
Any advice appreciated!
|
This is a [known issue](https://github.com/OverZealous/lazypipe/issues/14) when using `lazypipe` with gulp 4 and it's not going to be fixed in the near future. Quote from that issue:
>
> **OverZealous commented on 20 Dec 2015**
>
> As of now, I have no intention of making lazypipe work on Gulp 4.
>
>
>
As far as I can tell this issue is caused by the fact that gulp 4 uses [`async-done`](https://github.com/gulpjs/async-done) which has this to say about its stream support:
>
> **Note:** Only actual streams are supported, not faux-streams; Therefore, modules like `event-stream` are not supported.
>
>
>
When you use `lazypipe()` as the last pipe what you get is a stream that doesn't have a lot of the properties that you usually have when working with streams in gulp. You can see this for yourself by logging the streams:
```
// console output shows lots of properties
console.log(gulp.src(src('js/**/*.js'))
    .pipe(plugins.concat('cat.js'))
    .pipe(gulp.dest(dest)));

// console output shows much fewer properties
console.log(gulp.src(src('js/**/*.js'))
    .pipe(buildFiles()));
```
This is probably the reason why gulp considers the second stream to be a "faux-stream" and doesn't properly detect when the stream has finished.
Your only option at this point is some kind of workaround. The easiest workaround (which doesn't require any additional packages) is to just add a callback function `cb` to your task and listen for the `'end'` event:
```
gulp.task('build', function(cb) {
    var dest = 'build';
    var buildFiles = lazypipe()
        .pipe(plugins.concat, 'cat.js')
        .pipe(gulp.dest, dest);
    gulp.src(src('js/**/*.js'))
        .pipe(buildFiles())
        .on('end', cb);
});
```
Alternatively, adding any `.pipe()` after `buildFiles()` should fix this, even one that doesn't actually do anything like [`gutil.noop()`](https://github.com/gulpjs/gulp-util#noop):
```
var gutil = require('gulp-util');

gulp.task('build', function() {
    var dest = 'build';
    var buildFiles = lazypipe()
        .pipe(plugins.concat, 'cat.js')
        .pipe(gulp.dest, dest);
    return gulp.src(src('js/**/*.js'))
        .pipe(buildFiles())
        .pipe(gutil.noop());
});
```
|
What is the difference between ContextCompat.startForegroundService(context, intent) and startforegroundservice(intent)?
As the question title asks, I would like to know what their differences are, as the documentation is not very clear on whether they actually differ.
Thanks in advance.
|
`ContextCompat` is a utility class for compatibility purposes.
`context.startForegroundService` was introduced in Android Oreo (API 26) and is the new correct way to start a foreground service. Before Android Oreo you had to just call `startService`, and that's what `ContextCompat.startForegroundService` does: on API 26 and later it calls `context.startForegroundService`, and it calls `context.startService` otherwise.
Code from `ContextCompat` API 27 sources.
```
/**
* startForegroundService() was introduced in O, just call startService
* for before O.
*
* @param context Context to start Service from.
* @param intent The description of the Service to start.
*
* @see Context#startForegeroundService()
* @see Context#startService()
*/
public static void startForegroundService(Context context, Intent intent) {
    if (Build.VERSION.SDK_INT >= 26) {
        context.startForegroundService(intent);
    } else {
        // Pre-O behavior.
        context.startService(intent);
    }
}
```
|
OSX: CoreAudio API for setting IO Buffer length?
This is a follow-up to a previous question:
[OSX CoreAudio: Getting inNumberFrames in advance - on initialization?](https://stackoverflow.com/questions/35875886/osx-coreaudio-getting-innumberframes-in-advance-on-initialization)
I am trying to figure out what would be the AudioUnit API for possibly setting ***inNumberFrames*** or the *preferred IO buffer duration* of an input callback for a single **HAL** audio component instance in **OSX** (not a plug-in!).
While I understand there is a comprehensive documentation on how this can be achieved in iOS, by means of ***AVAudioSession*** API, I can neither figure out nor find documentation on setting these values in OSX, whichever API.
The web is full of expert, yet conflicting statements ranging from "*There is an Audio Unit API to request a sample rate and a preferred buffer duration...*", to "*You can definitely get the number of frames, but only for the current callback call...*".
Is there a way of at least getting (and adapting to) the ***inNumberFrames*** or the **audio buffer length** offered by the system, for the input-selected sampling rates in OSX? For example, for 44.1k and its multiples (this seems to work partly), as well as for 48k and its multiples (this doesn't seem to work at all; I don't know where the hack is which allows for adapting the buffer length to these values)? Here's the console printout:
```
Available 7 Sample Rates
Available Sample Rate value : 8000.000000
Available Sample Rate value : 16000.000000
Available Sample Rate value : 32000.000000
Available Sample Rate value : 44100.000000
Available Sample Rate value : 48000.000000
Available Sample Rate value : 88200.000000
Available Sample Rate value : 96000.000000
.mSampleRate = 48000.00
.mFormatID = 1819304813
.mBytesPerPacket = 8
.mFramesPerPacket = 1
.mBytesPerFrame = 8
.mChannelsPerFrame = 2
.mBitsPerChannel = 32
.mFormatFlags = 9
_mFormatHumanReadable = kAudioFormatFlagIsFloat
kAudioFormatFlagIsPacked
kLinearPCMFormatFlagIsFloat
kLinearPCMFormatFlagIsPacked
kLinearPCMFormatFlagsSampleFractionShift
kAppleLosslessFormatFlag_16BitSourceData
kAppleLosslessFormatFlag_24BitSourceData
expectedInNumberFrames = 512
Couldn't render in current context (Error -10863)
```
The expected inNumberFrames is read from the system:
```
UInt32 expectedInNumberFrames = 0;
UInt32 propSize = sizeof(UInt32);
AudioUnitGetProperty(gInputUnitComponentInstance,
                     kAudioDevicePropertyBufferFrameSize,
                     kAudioUnitScope_Global,
                     0,
                     &expectedInNumberFrames,
                     &propSize);
```
Thanks in advance for pointing me at the right direction!
|
See this Apple Technical Note: <https://developer.apple.com/library/mac/technotes/tn2321/_index.html#//apple_ref/doc/uid/DTS40013499-CH1-THE_I_O_BUFFER_SIZE>
See the OS X example code in this technical note for GetIOBufferFrameSizeRange(), GetCurrentIOBufferFrameSize(), and SetCurrentIOBufferFrameSize().
Note that there is an API property returning an allowed range, and an error return on the property setter. Also note the various Mac power saving modes may change the buffer size while an app is running, so the actual buffer size, inNumberFrames, may not stay constant, or even be known until the Audio Unit starts running.
If you get unusual buffer sizes (not a power of 2), it may be that the actual audio hardware on a particular Apple product model has a fixed or limited range of audio sample rates, and thus OS software is being used to resample and thus resize the buffers being sent to audio unit callbacks depending on that hardware, if the app requests a sample rate not supported by the actual codec chips on the circuit board.
|
Parameterized Query from an Expression Tree in Entity Framework Core
I'm trying to implement a dynamic filter in a generic repository (.NET Core 3.1 + EF Core 3.1)
by building an Expression Tree, but the generated SQL query is never parameterized (I'm verifying the generated query via `"Microsoft.EntityFrameworkCore.Database.Command": "Information"` in appsettings.json and have `EnableSensitiveDataLogging` in Startup.cs)
The code to build an Expression Tree is the following (for sake of simplicity working with string values only here):
```
public static IQueryable<T> WhereEquals<T>(IQueryable<T> query, string propertyName, object propertyValue)
{
    var pe = Expression.Parameter(typeof(T));
    var property = Expression.PropertyOrField(pe, propertyName);
    var value = Expression.Constant(propertyValue);
    var predicateBody = Expression.Equal(
        property,
        value
    );
    var whereCallExpression = Expression.Call(
        typeof(Queryable),
        "Where",
        new[] { typeof(T) },
        query.Expression,
        Expression.Lambda<Func<T, bool>>(predicateBody, new ParameterExpression[] { pe })
    );
    return query.Provider.CreateQuery<T>(whereCallExpression);
}
```
The approach works, but the values are always incorporated inside the generated SQL query, and I am afraid that it could lead to SQL injection.
Here is an example of a generated query:
```
Microsoft.EntityFrameworkCore.Database.Command: Information: Executed DbCommand (33ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [p].[Id], [p].[Name], [p].[FirstName], [p].[Created], [p].[CreatedBy], [p].[Updated], [p].[UpdatedBy]
FROM [Persons] AS [p]
WHERE [p].[Name] = N'smith'
```
Found a potential answer from a EF team member (@divega):
[Force Entity Framework to use SQL parameterization for better SQL proc cache reuse](https://stackoverflow.com/questions/9201403/force-entity-framework-to-use-sql-parameterization-for-better-sql-proc-cache-reu#9651357),
managed it to work with Where method, but the generated SQL is still the same.
Tried to use System.Linq.Dynamic.Core,
but it has the same issue (generated SQL query is not parameterized).
Is there a way to force Entity Framework Core to generate a parameterized query from an Expression Tree?
|
The link you provided explains that EF uses a SQL parameter for variable values, so instead of creating an `Expression.Constant` for the value passed in, if you create a variable reference (which in C# is always a field reference), then you will get a parameterized query. The simplest solution seems to be to copy how the compiler handles a lambda outer scope variable reference, which is create a class object to hold the value, and reference that.
Unlike `Expression.Constant`, it isn't easy to get the actual type of the `object` parameter, so changing that to a generic type:
```
public static class IQueryableExt {
    private sealed class holdPropertyValue<T> {
        public T v;
    }

    public static IQueryable<T> WhereEquals<T, TValue>(this IQueryable<T> query, string propertyName, TValue propertyValue) {
        // p
        var pe = Expression.Parameter(typeof(T), "p");
        // p.{propertyName}
        var property = Expression.PropertyOrField(pe, propertyName);
        var holdpv = new holdPropertyValue<TValue> { v = propertyValue };
        // holdpv.v
        var value = Expression.PropertyOrField(Expression.Constant(holdpv), "v");
        // p.{propertyName} == holdpv.v
        var whereBody = Expression.Equal(property, value);
        // p => p.{propertyName} == holdpv.v
        var whereLambda = Expression.Lambda<Func<T, bool>>(whereBody, pe);
        // Queryable.Where(query, p => p.{propertyName} == holdpv.v)
        var whereCallExpression = Expression.Call(
            typeof(Queryable),
            "Where",
            new[] { typeof(T) },
            query.Expression,
            whereLambda
        );
        // query.Where(p => p.{propertyName} == holdpv.v)
        return query.Provider.CreateQuery<T>(whereCallExpression);
    }
}
```
If you need to pass in an `object` instead, it is simpler to add a conversion to the proper type (which won't affect the generated SQL), rather than dynamically create the right type of `holdPropertyValue` and assign it a value, so:
```
public static IQueryable<T> WhereEquals2<T>(this IQueryable<T> query, string propertyName, object propertyValue) {
    // p
    var pe = Expression.Parameter(typeof(T), "p");
    // p.{propertyName}
    var property = Expression.PropertyOrField(pe, propertyName);
    var holdpv = new holdPropertyValue<object> { v = propertyValue };
    // Convert.ChangeType(holdpv.v, p.{propertyName}.GetType())
    var value = Expression.Convert(Expression.PropertyOrField(Expression.Constant(holdpv), "v"), property.Type);
    // p.{propertyName} == Convert.ChangeType(holdpv.v, p.{propertyName}.GetType())
    var whereBody = Expression.Equal(property, value);
    // p => p.{propertyName} == Convert.ChangeType(holdpv.v, p.{propertyName}.GetType())
    var whereLambda = Expression.Lambda<Func<T, bool>>(whereBody, pe);
    // Queryable.Where(query, p => p.{propertyName} == Convert.ChangeType(holdpv.v, p.{propertyName}.GetType()))
    var whereCallExpression = Expression.Call(
        typeof(Queryable),
        "Where",
        new[] { typeof(T) },
        query.Expression,
        whereLambda
    );
    // query.Where(query, p => p.{propertyName} == Convert.ChangeType(holdpv.v, p.{propertyName}.GetType()))
    return query.Provider.CreateQuery<T>(whereCallExpression);
}
```
|
Powershell Core + Pester - Separating tests from src
### Question:
What would be the best way to import functions to tests that don't reside in the same directory?
Example
```
src
    Get-Emoji.ps1
test
    Get-Emoji.Tests.ps1
```
### Inb4
- Pester documentation[1] suggests test files are placed in the same directory as the code that they test. No examples of alternatives provided.
- Pester documentation[2] suggests dot-sourcing to import files. Only with examples from within same directory
- Whether breaking out tests from the src is good practice, is to be discussed elsewhere
- Using Powershell Core for cross platform support on different os filesystems (forward- vs backward slash)
---
[1] [File placement and naming convention](https://pester.dev/docs/usage/file-placement-and-naming#common-convention)
>
> Pester considers all files named .Tests.ps1 to be test files. This is the default naming convention that is used by almost all projects.
>
>
> Test files are placed in the same directory as the code that they test. Each file is called as the function it tests. This means that for a function Get-Emoji we would have Get-Emoji.Tests.ps1 and Get-Emoji.ps1 in the same directory. What would be the best way to referencing tests to functions in Pester.
>
>
>
[2] [Importing the tested functions](https://pester.dev/docs/usage/importing-tested-functions#importing-the-tested-functions)
>
> Pester tests are placed in .Tests.ps1 file, for example Get-Emoji.Tests.ps1. The code is placed in Get-Emoji.ps1.
>
>
> To make the tested code available to the test we need to import the code file. This is done by dot-sourcing the file into the current scope like this:
>
>
> Example 1
>
>
>
> ```
> # at the top of Get-Emoji.Tests.ps1
> BeforeAll {
> . $PSScriptRoot/Get-Emoji.ps1
> }
>
> ```
>
> Example 2
>
>
>
> ```
> # at the top of Get-Emoji.Tests.ps1
> BeforeAll {
> . $PSCommandPath.Replace('.Tests.ps1','.ps1')
> }
>
> ```
>
>
|
I tend to keep my tests together in a single folder that is one or two parent levels away from where the script is (which is usually under a named module directory and then within a folder named either Private or Public). I just dot source my script or module and use `..` to reference the parent path, with `$PSScriptRoot` (the current script's path) as a point of reference. For example:
- Script in `\SomeModule\Public\get-something.ps1`
- Tests in `\Tests\get-something.tests.ps1`
```
BeforeAll {
    . $PSScriptRoot\..\SomeModule\Public\get-something.ps1
}
```
Use forward slashes if cross-platform compatibility is a concern; Windows doesn't mind whether path separators are forward slashes or backslashes. You could also run this path through `Resolve-Path` first if you wanted to be certain a valid full path is used, but I don't generally find that necessary.
|
Proving that a two-pointer approach works (pair sum)
I was trying to solve the pair sum problem, i.e., given a sorted array, we need to determine if there exist two indices `i` and `j` such that `i != j` and `a[i] + a[j] == k` for some `k`.
One of the approaches to do the same problem is running two nested *for* loops, resulting in a complexity of `O(n*n)`.
Another way to solve it is using a two-pointer technique. I wasn't able to solve the problem using the two-pointer method and therefore looked it up, but I couldn't understand why it works. How do I prove that it works?
```
#define lli long long
//n is size of array
bool f(lli sum) {
    int l = 0, r = n - 1;
    while ( l < r ) {
        if ( A[l] + A[r] == sum ) return 1;
        else if ( A[l] + A[r] > sum ) r--;
        else l++;
    }
    return 0;
}
```
|
Well, think of it this way:
You have a sorted array (you didn't mention that the array is sorted, but for this problem, that is generally the case):
{ -1,4,8,12 }
The algorithm starts by choosing the first element in the array and the last element, adding them together and comparing them to the sum you are after.
If our initial sum matches the sum we are looking for, great!! If not, well, we need to continue looking at possible sums either greater than or less than the sum we started with. By starting with the smallest and the largest value in the array for our initial sum, we can eliminate one of those elements as being part of a possible solution.
Let's say we are looking for the sum 3. We see that 3 < 11. Since our big number (12) is paired with the smallest possible number (-1), the fact that our sum is too large means that 12 cannot be part of any possible solution, since any other sum using 12 would have to be larger than 11 (12 + 4 > 12 - 1, 12 + 8 > 12 - 1).
So we know we cannot possibly make a sum of 3 using 12 + one other number in the array; they would all be too big. So we can eliminate 12 from our search by moving down to the next largest number, 8. We do the same thing here. We see 8 + -1 is still too big, so we move down to the next number, 4, and voila! We find a match.
The same logic applies if the sum we get is too small. We can eliminate our small number, because any sum we can get using our current smallest number has to be less than or equal to the sum we get when it is paired with our current largest number.
We keep doing this until we find a match, or until the indices cross each other, since, after they cross, we are simply adding up pairs of numbers we have already checked (i.e. 4 + 8 = 8 + 4).
This may not be a mathematical proof, but hopefully it illustrates how the algorithm works.
|
How to figure out all colors in a gradient?
>
> **Possible Duplicate:**
>
> [Javascript color gradient](https://stackoverflow.com/questions/3080421/javascript-color-gradient)
>
>
>
I have color one (let's say yellow) and color two (blue) - they make up a gradient.
Based on a value of 0 to 100, (0 being yellow and 100 being blue), I'd like to represent a mixture of color one and two.
I am trying to do this in a mobile browser (safari specifically).
Is there a way to do this in javascript?
|
If what you're trying to do is to create a color that is some percentage (0-100) between two other colors, you can do that with this javascript:
```
function makeGradientColor(color1, color2, percent) {
var newColor = {};
function makeChannel(a, b) {
return(a + Math.round((b-a)*(percent/100)));
}
function makeColorPiece(num) {
num = Math.min(num, 255); // not more than 255
num = Math.max(num, 0); // not less than 0
var str = num.toString(16);
if (str.length < 2) {
str = "0" + str;
}
return(str);
}
newColor.r = makeChannel(color1.r, color2.r);
newColor.g = makeChannel(color1.g, color2.g);
newColor.b = makeChannel(color1.b, color2.b);
newColor.cssColor = "#" +
makeColorPiece(newColor.r) +
makeColorPiece(newColor.g) +
makeColorPiece(newColor.b);
return(newColor);
}
```
This function assumes the gradient is made with linear interpolation between each r, g and b channel value of the two endpoint colors, such that the 50% gradient value is the midpoint of each r, g, b value (halfway between the two colors presented). One could make different types of gradients too (with different interpolation functions).
To assign this result to a background, you use the CSS color value I've added to the return result like this:
```
// sample usage
var yellow = {r:255, g:255, b:0};
var blue = {r:0, g:0, b:255};
var newColor = makeGradientColor(yellow, blue, 79);
element.style.backgroundColor = newColor.cssColor;
```
|
Powershell add to path, temporarily
I'm trying to write a simple PowerShell script to deploy a Visual Studio ASPNET Core 1 project.
Currently, in a batch file i can say
```
Path=.\node_modules\.bin;%AppData%\npm;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External;%PATH%;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External\git
```
That will modify the path variable for the duration of the session... for the life of me I can't figure out how to do this simple thing in powershell.
Can someone help me translate this into powershell?
TIA!
|
# Option 1: Modify the `$env:Path` Variable
1. Append to the `Path` variable in the current window:
```
$env:Path += ";C:\New directory 1;C:\New directory 2"
```
2. Prefix the `Path` variable in the current window:
```
$env:Path = "C:\New directory 1;C:\New directory 2;" + $env:Path
```
3. Replace the `Path` variable in the current window (use with caution!):
```
$env:Path = "C:\New directory 1;C:\New directory 2"
```
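As a concrete example, the batch line from the question could be translated like this (directories copied verbatim from the question; note it both prepends to and appends to the existing `Path`):
```
$env:Path = ".\node_modules\.bin;$env:AppData\npm;" +
            "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External;" +
            "$env:Path;" +
            "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External\git"
```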
# Option 2: Use the `editenv` Utility
I wrote a Windows command-line tool called [`editenv`](https://github.com/Bill-Stewart/editenv) that lets you interactively edit the content of an environment variable. It works from a Windows console (notably, it does *not* work from the PowerShell ISE):
```
editenv Path
```
This can be useful when you want to edit and rearrange the directories in your `Path` in a more interactive fashion, and it affects only the current window.
|
How to debug a program when it crashes w/out exception?
One of my programs crashes periodically, but I don't know why. I'm running it in debug mode, but nothing pops up. The program just suddenly exits. I've had other bugs that do throw an exception...but not this one. Is there a magical way of catching it or something?
|
Presumably you're running it from within Visual Studio and for some reason it's not stopping in the debugger with an uncaught exception, in which case you could try catching the crash from outside of Visual Studio. See my answer at [System.AccessViolationException from unmanaged code?](https://stackoverflow.com/questions/3652380/system-accessviolationexception-from-unmanaged-code/3655392#3655392), in particular [how to capture a crash dump](https://www.atmosera.com/blog/how-to-capture-a-minidump-let-me-count-the-ways/).
If it only crashes periodically, but within a reasonably short period of time, start with Sysinternals [procdump](https://learn.microsoft.com/sysinternals/downloads/procdump). Start your executable from outside Visual Studio, then run:
```
procdump -e <YourExecutableNameOrPid>
```
and wait for it to harvest a crash dump - assuming it exits due to an unhandled exception - then load the crash dump into VS 2010 or WinDbg.
|
How to get started with TCP/IP v6?
Now and then there is "news" about that the IP numbers will run out in X years,
and we must change to TCP/IP v6...
And even though this crisis never really happens it has waken a question,
how do you get started with TCP/IP v6? What do I need to change in my home network?
And if I start to use IPv6 in my LAN, what are pros and cons, and do you think it is a good idea?
Can you use IPv4 and IPv6 in the same cables in the same LAN,
or do you need to physically separate them?
*Can you have some translation box so you can reach the "normal" Internet from the IPv6 connected devices? (Just like a NAT-Router.)*
Can someone please clarify the situation a little bit?
---
Thanks ChrisF for pointing out that it's OK to run IPv4 and IPv6 in parallel.
|
IPv6 is a "new" (for various definitions of "new") protocol. It has no physical manifestation, so doesn't need new wires etc.
It's quite old (in computing terms), but has only recently become mainstream due to the imminent exhaustion of IPv4 addresses. It took a while to take off as IPv4 was "good enough" for virtually all cases until now.
It needs new software to be running on your computer, but that will come with operating system updates or programs (such as FTP clients, BitTorrent clients etc.) that you install as you need them.
The [IPv4](http://en.wikipedia.org/wiki/IPv4) and [IPv6](http://en.wikipedia.org/wiki/IPv6) pages on Wikipedia are probably as good a place as any to start.
It's analogous to the situation in the UK a few years ago when they decided that they were running out of phone numbers and added a "1" into everybody's [STD](http://en.wikipedia.org/wiki/Subscriber_trunk_dialling) code (apart from a few places that got a whole new STD number and an extra digit added to their phone number). No new equipment was needed, just a change to the software running the network.
|
using sleep() for a single thread
I am fairly new to java, and am starting to get into using different threads in order to use `wait()` or `sleep()` on one part of my code and have the others still run.
For this project, I am using `JFrame` with the `javax.swing.*` and `java.awt.*` imports. What I am trying to do is have one of the threads (in my code it is the main, starting thread) allow the player to choose a space on the tic tac toe board, and when they click it, it will change icons, and then the AI will wait for 1 second before playing back from the second thread that I created.
Unfortunately, whenever I call `ait.sleep(1000)` (`ait` is my thread name) both threads wait for 1 second before finishing their execution. Can anyone tell me why sleeping one thread is stopping my whole execution?
|
>
> Can anyone tell me why sleeping one thread is stopping my whole
> execution
>
>
>
To better explain: your Swing GUI is created on its own *special thread*, separate from the one that `main()` and other code run in. This is done by creating your Swing components in a `SwingUtilities.invokeXXX` block (even if you have not done this, your GUI will still be run on a single thread called the [initial thread](http://docs.oracle.com/javase/tutorial/uiswing/concurrency/initial.html)). Now if you simply call `sleep` while on the `Event Dispatch Thread` (or, for that matter, on the same `Thread`), it will wait for the call to `Thread.sleep` to finish. Because all Swing events are processed on the EDT, calling `sleep(..)` pauses its execution, which stops UI events from being processed, and therefore the GUI is frozen (until `sleep(..)` returns).
You should not use `Thread.sleep(..)` on the [`Event Dispatch Thread`](http://docs.oracle.com/javase/tutorial/uiswing/concurrency/dispatch.html) (or any `Thread` where sleep will cause unwanted execution blocking), as this will cause the UI to seem frozen.
[Here](http://foxtrot.sourceforge.net/docs/freeze.php) is a nice example which demonstrates exactly, this unwanted behavior caused by invoking `Thread.sleep(..)` on GUI's EDT.
Rather use:
- [Swing Timer](http://docs.oracle.com/javase/tutorial/uiswing/misc/timer.html) for example:
```
int delay=1000;// wait for second
Timer timer = new Timer(delay, new AbstractAction() {
@Override
public void actionPerformed(ActionEvent ae) {
//action that you want performed
}
});
//timer.setRepeats(false);//the timer should only go off once
timer.start();
```
- [Swing Worker](http://docs.oracle.com/javase/tutorial/uiswing/concurrency/worker.html) (a minimal sketch follows this list)
or if no Swing components are being created/modified:
- [TimerTask](http://docs.oracle.com/javase/7/docs/api/java/util/TimerTask.html)
- [Thread](http://docs.oracle.com/javase/tutorial/essential/concurrency/runthread.html), you would then use `Thread.sleep(int millis)` (but that's the last option in any case IMO)
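For completeness, here is a minimal `SwingWorker` sketch of the "wait a second, then let the AI play" idea (the `statusLabel` component and the one-second delay are illustrative assumptions, not part of the original code):
```
SwingWorker<Void, Void> aiMove = new SwingWorker<Void, Void>() {
    @Override
    protected Void doInBackground() throws Exception {
        Thread.sleep(1000); // safe here: runs on a background thread, not the EDT
        return null;
    }
    @Override
    protected void done() {
        // back on the EDT: update Swing components here
        statusLabel.setText("AI has played"); // hypothetical label
    }
};
aiMove.execute();
```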
**UPDATE**
`Swing Timer`/`SwingWorker` was only added in Java 1.6; however, `TimerTask` and `Thread` have been around for a lot longer, since Java 1.3 and JDK 1.0 respectively. Thus you could even use either of those two methods and wrap the calls that create/manipulate Swing components in a `SwingUtilities/EventQueue#invokeXX` block; that's the way things used to be done :P
|
What does refs/heads/master:refs/heads/master mean?
As a rule, I use Git via command line. Today, I decided to use it with NetBeans IDE which generated the following command:
```
git push ssh://...myrepo.git/ refs/heads/master:refs/heads/master
```
Could anyone explain what `refs/heads/master:refs/heads/master` means?
|
The [syntax](https://git-scm.com/docs/git-push) used is as follows: `git push <repository> <src-ref>:<dst-ref>`
By using `refs/heads/master` as both `<src-ref>` and `<dst-ref>`, Git works with *qualified and explicit [refspecs](https://git-scm.com/book/en/v2/Git-Internals-The-Refspec)* (locally and on the remote) and does not need to guess the namespace based on source and destination *refspecs*. Additionally, the repository is explicitly provided which means that it is not addressed by a configured name (like `origin`).
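For comparison, when a remote is configured the same push is usually written with unqualified refs, which Git resolves to `refs/heads/...` itself:
```
git push origin master:master
# or, since source and destination branch names match:
git push origin master
```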
---
Let's see this in action in a demo repository. The branch `dev` is checked out and the remote was removed after cloning. First, we list references in the (explicitly provided) remote repository and see that all refs are pointing to *7b7d5a3*. The log of *git-push* shows that we update `7b7d5a3..4a27218` on the remote `master` branch while no remote is configured and standing on the `dev` branch. Listing the references on the remote again confirms this.
```
$ git branch -va
* dev 7b7d5a3 Initial commit
master 4a27218 Add file.txt
$ git ls-remote [email protected]:user/repo.git
7b7d5a33d6e6ea3d69d9f87fa8ef1c596a37e24c HEAD
7b7d5a33d6e6ea3d69d9f87fa8ef1c596a37e24c refs/heads/dev
7b7d5a33d6e6ea3d69d9f87fa8ef1c596a37e24c refs/heads/master
$ git push -v [email protected]:user/repo.git refs/heads/master:refs/heads/master
Pushing to [email protected]:user/repo.git
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 294 bytes | 294.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To domain.tld:user/repo.git
7b7d5a3..4a27218 master -> master
$ git ls-remote [email protected]:user/repo.git
4a272186f7f56f2346fb2df7e63584f09936bdad HEAD
7b7d5a33d6e6ea3d69d9f87fa8ef1c596a37e24c refs/heads/dev
4a272186f7f56f2346fb2df7e63584f09936bdad refs/heads/master
```
|
Gsub a every element after a keyword in R
I'd like to remove all elements of a string after a certain keyword.
Example :
```
this.is.an.example.string.that.I.have
```
Desired Output :
```
This.is.an.example
```
I've tried using `gsub('string', '', list)` but that only removes the word string. I've also tried using the `gsub('^string', '', list)` but that also doesn't seem to work.
Thank you.
|
The following simple `sub` may help you here.
```
sub("\\.string.*","",variable)
```
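Applied to the string from the question (the variable name `x` is just for illustration):
```
x <- "this.is.an.example.string.that.I.have"
sub("\\.string.*", "", x)
# [1] "this.is.an.example"
```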
***Explanation:*** Method of using `sub`
```
sub(regex_to_replace_text_in_variable,new_value,variable)
```
***Difference between `sub` and `gsub`:***
`sub`: performs the substitution only on the first match found in the variable.
`gsub`: performs the same kind of substitution, but on ALL matches found, whereas `sub` only replaces the first one.
***From help page of*** `R`***:***
>
> sub(pattern, replacement, x, ignore.case = FALSE, perl = FALSE,
> fixed = FALSE, useBytes = FALSE)
>
>
> gsub(pattern, replacement, x, ignore.case = FALSE, perl = FALSE,
> fixed = FALSE, useBytes = FALSE)
>
>
>
|
Publish limit on Facebook's Graph API
I've been using the Graph API for a while.
One feature of my application is that it allows a user to post a message on their friends walls (dont worry it is not spam).
Anyway... there is a limit on the API and it will only allow a certain number of posts before failing. I've read about the Facebook bucket allocation limits, but my app's limit has not moved. It was 26 when I created the app. It is still 26 even though there are about 20 users.
What can I do to increase my publish limit?
And I promise this app is not used for anything spam related.
|
For those wanting an answer to this question: The posting limit is dynamic.
Facebook has implemented a bucket allocation system whereby each token/profile is given a set amount of posts per day (currently at 20-24). This allocation can go up or down based on the "affinity" your users show towards your application.
From Facebook:
>
> Based on the affinity users show for your apps use of Facebook Platform through their interactions, your app is allocated certain abilities and limits. This is the functionality currently allocated to your app. These values will change over time depending on how users interact with your app. All integration points have a set of limit values and the threshold bucket column tells you which of these limits buckets your app is in for that integration point. Bucket 1 is the smallest allocation bucket.
>
>
>
If more people like/use your application, your posting limit will be increased. On the other hand, if you do not have traction or many people are marking posts from your app as spam, then the limit will be decreased.
You can find the current limit of your application by going to the App Page > View Insights > Diagnostics.
You can find the current limit under the Allocations header.
|
Android: Passing parameters into Activity from AndroidManifest.xml
Does anyone know if it's possible to pass parameters into an Activity from the AndroidManifest.xml file? I want to use the same activity in a couple of apps, but have a way of conditioning the appearance of that activity based on the app.
E.g. I'd like to write something like (in AndroidManifest.xml)
```
<activity android:name=".MyActivity"
android:label="@string/app_name"
android:screenOrientation="portrait">
<extradata>
<item name="foo" value="bar"/>
</extradata>
</activity>
```
.. and be able to read out the value of bar in the onCreate of the activity. Is this something that can be done with the data attribute in an intent filter?
Thanks,
|
Actually, there is way to do just that. Check this article :
<http://blog.iangclifton.com/2010/10/08/using-meta-data-in-an-androidmanifest/>
In short you can get the content of the tag using this :
```
ApplicationInfo ai = getPackageManager().getApplicationInfo(activity.getPackageName(), PackageManager.GET_META_DATA);
Bundle bundle = ai.metaData;
String myApiKey = bundle.getString("my_api_key");
```
Metadata tag is defined as this :
```
<meta-data android:name="my_api_key" android:value="mykey123" />
```
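If the `<meta-data>` element is declared inside the `<activity>` tag itself (as in the question), a similar lookup can be done via `ActivityInfo` - a sketch, assuming it is called from within the activity (e.g. in `onCreate`):
```
try {
    ActivityInfo info = getPackageManager()
            .getActivityInfo(getComponentName(), PackageManager.GET_META_DATA);
    String foo = info.metaData.getString("foo"); // "bar" in the question's example
} catch (PackageManager.NameNotFoundException e) {
    // should not happen for the current activity
}
```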
It is possible to access data in values, but if you want to specify something that is related to the activity, this is a much cleaner approach.
Hope it helps.
|
How to make progress bar move slowly by JavaScript
As I am increasing its width by 10%, the change applies suddenly; I want it to have some smooth movement.
```
var counter=0;
function moveBy10(x){
var width =10;
var bar = document.getElementById('bar');
counter++;
if(counter*x < 101){
bar.style.width = counter*x +'%';
}
}
```
```
#barHolder {
background-color: black;
width: 100%;
height: 80px;
}
#bar {
background-color: red;
width:5%;
height: 80px;
}
#by10 {
background-color: grey;
height: 40px;
width: 100px;
border-radius: 5px;
margin-top: 10px;
padding-top: 10px;
text-align: center;
cursor: pointer;
}
```
```
<!DOCTYPE html>
<html>
<head>
<title>Progress bar</title>
<link rel="stylesheet" type="text/css" href="bar.css">
<script type="text/javascript" src="bar.js"></script>
</head>
<body>
<!--- progress bar container -->
<div id="barHolder">
<div id="bar"></div>
</div>
<div type="button" id="by10" onclick="moveBy10(10)">Move 10%</div>
</body>
</html>
```
|
By adding the `transition` property to the desired element (in your case `#bar`), we can achieve the smoothing effect you are after with CSS. That will result in a smoother experience than accomplishing the same effect with JavaScript.
`transition: width 2s;`
(Adds a smoothing of 2s for the transition of width)
CSS transitions allows you to change property values smoothly (from one value to another), over a given duration.
Learn more about [transitions](https://developer.mozilla.org/en-US/docs/Web/CSS/transition).
But to fully answer the question: to achieve the same result with JavaScript only, I would use a timer to step through many small increments instead of one big jump (if we wanted to transition by 10% over 1 second, I'd split it into small fractions of a percent per tick).
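A minimal sketch of that idea (the step size and interval are arbitrary; it assumes the bar starts at the 5% width from the CSS):
```
function moveBy10Smooth(x) {
  var bar = document.getElementById('bar');
  var current = parseFloat(bar.style.width) || 5; // inline width, or the CSS starting value
  var target = Math.min(current + x, 100);
  var timer = setInterval(function () {
    current += 0.5;                   // small step per tick
    bar.style.width = current + '%';
    if (current >= target) clearInterval(timer);
  }, 10);                             // ~10 ms per step gives a smooth movement
}
```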
But I strongly advise using the best technology for each problem, and not trying to achieve something with a specific technology without a really good reason.
|
How to get Microsoft HTML Help 2 Compiler?
Where can I download the Microsoft HTML Help 2 compiler (or bounding SDK)? I want to generate HTML 2 documentation from my programming projects using [Sandcastle Help File Builder](http://shfb.codeplex.com/) and the Sandcastle Guided Installation wizard is instructing me to download that component.
Excerpt about HTML 2 taken from Sandcastle main page:
>
> The HTML Help 2.x output includes a
> valid set of collection files and an
> H2Reg.exe configuration file to
> simplify deployment and integration of
> the help file into existing
> collections such as those used by
> Visual Studio.
>
>
>
|
The HTML Help 2 compiler is **available from Microsoft in the [Visual Studio 2008 SDK 1.0](http://www.microsoft.com/downloads/en/details.aspx?FamilyID=30402623-93ca-479a-867c-04dc45164f5b&displaylang=en) download**.
*(You shouldn't need to have Visual Studio 2008 IDE installed to have these SDK tools.)*
Apparently the file needed is ***hxcomp.exe***
After installation of the SDK (on x64 system) the needed file was installed here and the Sandcastle Wizard found it.
```
C:\Program Files (x86)\Common Files\microsoft shared\Help 2.0 Compiler\hxcomp.exe
```
On an x86 system it would be found under the regular `Program Files` folder name.
---
## Addendum Notes: About tracking down the answer, etc:
According to a CodePlex Sandcastle Help File Builder forum post:
>
> So I checked my installation, and it
> appears that I don't have HXCOMP.EXE
> installed on my machine anywhere.
> According to [this article](http://www.helixoft.com/vsdocman-faqs/where-can-i-download-hxcompexe.html), it's only
> in version 1.0 of the VS 2008 SDK. I
> have Visual Studio 2008 installed.
>
> -- <http://shfb.codeplex.com/discussions/212256>
>
>
>
The article referenced in the above post mentions:
>
> **Where can I download HxComp.exe?**
>
>
> - **VS 2010** doesn't use MS Help2 format so ***there is no Help2
> compiler available for it***. Starting
> with VS 2010, the new MS Help Viewer
> format is used
> - For **VS 2008** hxcomp.exe is a part of VS 2008 SDK version 1.0. Note, the
> latest version VS 2008 SDK doesn't
> contain hxcomp.exe, so you need to
> install version 1.0! It is probably
> possible to install version 1.0 and
> then the latest version over it.
> - For **VS 2005** hxcomp.exe is a part of VS 2005 SDK.
> - For **VS .NET 2003** hxcomp.exe is a part of VSHIK 2003.
> - For **VS .NET 2002** hxcomp.exe is a part of VSHIK 2002 (Visual Studio Help
> Integration Kit v2.1).
>
>
> -- Excerpted from <http://www.helixoft.com/vsdocman-faqs/where-can-i-download-hxcompexe.html>
>
>
>
Although I found the solution by installing the recommended VS 2008 SDK 1.0, you might use one of the alternate sources listed in the points.
|
Where are variables in a closure stored - stack or heap?
Like the following codes:
```
var foo = function() {
var a = 1; // closure var
return function() { // closure fun
console.log(a);
}
};
var bar = foo();
```
When foo exits (or say, returns), we know that the variable a will not be destroyed and remains in memory (that's why closures work). So my question is: where is the variable a stored, on the stack or on the heap?
|
A closure is just an evolution of the concept of the stack.
The stack is used to separate/isolate scope when functions are called. When a function returns the stack frame (activation record) is popped off the call stack thus freeing the used memory allowing the next function call to reuse that RAM for its stack frame.
What a closure does is that instead of actually freeing that stack frame, if there's any object/variable in that stack frame that's referenced by anything else then it keeps that stack frame for future use.
Most languages implement this by implementing the stack as a linked list or hash table instead of a flat array. That way, the stack can be re-ordered at runtime and is not constrained by physical memory layout.
So. With this in mind, the answer is that variables in a closure are stored in the stack and heap. Depending on your point of view.
From the point of view of the language, it's definitely the stack. Since that's what closures are in theory - a modified stack.
From the point of view of the machine language or underlying C/assembly code, the idea of a linked-list stack is nonsense. Therefore the higher level language must be using the heap to implement its "stack".
So the variable is in the stack but that stack is probably located in the heap.
This of course depends on the implementation of your programming language. But the above description is valid for most javascript interpreters (certainly all the ones I've seen).
|
What is a Memory Heap?
What is a memory heap ?
|
Presumably you mean *heap* from a memory allocation point of view, not from a data structure point of view (the term has multiple meanings).
A very simple explanation is that the **heap** is the portion of memory where *dynamically allocated* memory resides (i.e. memory allocated via `malloc`). Memory allocated from the heap will remain allocated until one of the following occurs:
1. The memory is `free`'d
2. The program terminates
If all references to allocated memory are lost (e.g. you don't store a pointer to it anymore), you have what is called a *memory leak*. This is where the memory has still been allocated, but you have no easy way of accessing it anymore. Leaked memory cannot be reclaimed for future memory allocations, but when the program ends the memory will be free'd up by the operating system.
Contrast this with **stack** memory which is where local variables (those defined within a method) live. Memory allocated on the stack generally only lives until the function returns (there are some exceptions to this, e.g. static local variables).
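A minimal C sketch of the difference (purely illustrative):
```
#include <stdlib.h>
void example(void) {
    int local = 42;                          /* stack: freed automatically on return */
    int *heap = malloc(100 * sizeof *heap);  /* heap: stays allocated until free'd   */
    /* ... use heap ... */
    free(heap);  /* without this call, the allocation would leak once 'heap' goes out of scope */
}                /* 'local' is gone here; nothing to clean up */
```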
You can find more information about the heap in [this article](http://en.wikipedia.org/wiki/Heap_%28programming%29).
|
How can I tell if I am overusing multi-threading?
I currently feel like I am over-using multi-threading.
I have 3 types of data, A, B and C.
Each `A` can be converted to multiple `B`s and each `B` can be converted to multiple `C`s.
I am only interested in treating `C`s.
I could write this fairly easily with a couple of conversion functions. But I caught myself implementing it with threads, three queues (`queue_a`, `queue_b` and `queue_c`). There are two threads doing the different conversions, and one worker:
- `ConverterA` reads from `queue_a` and writes to `queue_b`
- `ConverterB` reads from `queue_b` and writes to `queue_c`
- `Worker` handles each element from `queue_c`
The conversions are fairly mundane, and I don't know if this model is too convoluted. But it seems extremely robust to me. Each "converter" can start working even before data has arrived on the queues, and at any time in the code I can just "submit" new `A`s or `B`s and it will trigger the conversion pipeline which in turn will trigger a job by the worker thread.
Even the resulting code looks simpler. But I still am unsure if I am abusing threads for something simple.
|
It is almost always simpler to think sequentially, and then later modify that logic to work better using threads. And, as the expression goes, "If it ain't broken, don't fix it." Most programmers don't use threads simply because there is no need to use them.
If you feel more comfortable using them, more power to you. However, know that if threads do not offer a speed boost by eliminating bottlenecks, they are almost certainly slowing down your program.
Also consider that systems which dedicate only one CPU to a process will simulate multiple threads by one single thread in order to save resources (this does not happen often with modern computers, though smart phone applications are still very much subjected to this abuse). In this case, even if you're eliminating bottlenecks through the use of threads, it will actually be *slower* than if you didn't use threads at all.
And, perhaps the most subtle reason to use caution to use threads, but certainly not the least important, threads have a tendency to do what you don't expect. Yes, if you're taking precautions, you should be okay. Yes, if your threads don't write to variables shared between threads, you should be okay. That said, thread-related bugs are very hard to find. Since I'm of the idea that a programmer cannot ever completely eliminate the possibility to create bugs in code and therefore a programmer should take measures to protect against possible bugs rather than focus on completely eliminating them, you should definitely apply this idea to hard-to-find thread bugs as well. In other words, know that despite your very best efforts, using threads in code will almost certainly create some very serious bugs sooner or later which you wouldn't have otherwise without using threads.
So should you use threads anyway? Well, a healthy knowledge of threads is certainly not a bad thing, especially if you become good at it. However, the movement of late has been towards single-threaded languages such as node.js. One of the main advantages of having a single thread is that it is easy to scale and certain optimizations can be made if you know that the instructions are expected to be run sequentially (even if optimizations may mean that instructions which can be run in parallel can be run asynchronously).
That said, I say do what is most comfortable for you. In my experience, writing a program that you understand has higher priority than making it work faster. Just be sure to use threads when you think it helps you write the program, and not because you want it to work faster, since you shouldn't be worrying so much about performance *as you are writing* the program (optimization is important, but it can also wait).
|
EGIT branches local vs Remote tracking
I am new to GIT. I use EGIT with eclipse.
I have a question about branches where I see two sections "Local" and "Remote Tracking". If I have to work on one of the branches, I think there are two ways I can accomplish it.
a) Check out origin/featureBranch under remote tracking and work on it.
b) Right click under local and create a local branch and point to ref/remote/featureBranch and start working on it?
Are there any difference between these two approaches? which one is preferred?
|
b) would be best.
a) would create a [DETACHED HEAD](https://stackoverflow.com/questions/2696950/git-already-up-to-date-unless-i-reset/2697816#2697816), which would allow you to work on an "anonymous" branch, but would not allow you to push your work.
(See also "[Git Tip of the Week: Detached Heads](http://alblue.bandlem.com/2011/08/git-tip-of-week-detached-heads.html)" and "[detached head explained](http://sitaramc.github.com/concepts/detached-head.html)" for more on detached head)
For EGit, see "[Git Lesson: Be mindful of a detached head](http://eclipsesource.com/blogs/2011/05/29/life-lesson-be-mindful-of-a-detached-head/)":

|
How do I align the contents of app bar layout to horizontal
I have an activity with a CoordinatorLayout, which contains an AppBarLayout and an included scrolling content layout.
All the items in the AppBarLayout are arranged vertically by default, and when I override a menu item for the AppBarLayout it is not visible, so I used a button in the AppBarLayout instead.
I tried changing the orientation to horizontal in the XML, but then the scrolling content comes over the AppBarLayout.
Kindly help me correct this issue, and also show how to include a menu item for this layout.
## XML
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.design.widget.CoordinatorLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".AddItem">
<android.support.design.widget.AppBarLayout
android:layout_height="wrap_content"
android:layout_width="match_parent"
android:minHeight="?attr/actionBarSize"
android:theme="?attr/actionBarTheme"
android:background="#252A4E">
<TextView
android:text="Item :"
android:layout_marginLeft="4pt"
android:gravity="center"
android:layout_width="wrap_content"
android:layout_height="25pt" android:id="@+id/tvItem"
android:layout_gravity="center_vertical"
android:textColor="#F8F6F6" android:textSize="15pt"
app:fontFamily="@font/averia_libre" />
<TextView
android:text="1"
android:layout_marginLeft="4pt"
android:gravity="center"
android:layout_width="wrap_content"
android:layout_height="25pt" android:id="@+id/tvElevatorNo"
android:layout_gravity="center_vertical"
android:textColor="#F8F6F6" android:textSize="15pt"
app:fontFamily="@font/averia_libre"/>
<ImageButton
android:layout_marginLeft="72pt"
android:onClick="AddElevator"
android:layout_gravity="end"
android:layout_width="62dp"
android:layout_height="match_parent" app:srcCompat="@drawable/baseline_done_white_18dp"
android:id="@+id/ibComplete" tools:srcCompat="@drawable/baseline_done_white_18dp"
android:tint="#F9FBE7" android:background="#252A4E"/>
</android.support.design.widget.AppBarLayout>
<include layout="@layout/content_add_item" android:layout_height="674dp"/>
</android.support.design.widget.CoordinatorLayout>
```

|
`AppBarLayout` extends a `LinearLayout`. So, if you need to create a more complex layout, you must encapsulate your views inside another `ViewGroup` such as `RelativeLayout` or even a `ConstraintLayout` etc.
Something like:
```
<android.support.design.widget.AppBarLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<TextView
android:id="@+id/tvItem"
android:layout_width="wrap_content"
android:layout_height="25pt"/>
<TextView
android:id="@+id/tvElevatorNo"
android:layout_width="wrap_content"
android:layout_height="25pt"
android:layout_toEndOf="@id/tvItem"/>
<ImageButton
android:id="@+id/ibComplete"
android:layout_width="62dp"
android:layout_height="match_parent"
android:layout_alignParentEnd="true"/>
</RelativeLayout>
</android.support.design.widget.AppBarLayout>
```
|
Event Handling order
# Javascript jQuery event handling
If a handler function is bound to an event (for example 'click') on a parent element, and another handler function is bound to the same event on a child DOM element, which of them is called?
If both are called, in which order are they called?
|
Events bubble "up" the DOM tree, so if you've got handlers for an element and its parent, the child element handler will be called first.
If you register more than one handler for an event on a single DOM element (like, more than one "click" handler for a single button), then the handlers are called in the order that they were attached to the element.
Your handlers can do a few things to change what happens after they're done:
- With the passed-in event parameter, call `event.preventDefault()` to keep any "native" action from happening
- call `event.stopPropagation()` to keep the event from bubbling up the DOM tree
- return false from the handler, to both stop propagation **and** prevent default
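For example, a minimal jQuery sketch (assuming a `#child` element nested inside a `#parent`):
```
$('#child').on('click', function (event) {
  console.log('child handler runs first');
  // event.stopPropagation(); // uncomment to keep #parent's handler from firing
});
$('#parent').on('click', function () {
  console.log('parent handler runs second (bubbling)');
});
```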
Note that for some input elements (checkboxes, radio buttons), the handling is a little weird. When your handler is called, the browser will have already set the checkbox "checked" value to the opposite of its former value. That is, if you have a checkbox that is not checked, then a "click" handler will notice that the "checked" attribute will be set to "true" when it is called (after the user clicks). However, if the handler returns false, the checkbox value will actually NOT be changed by the click, and it will remain un-checked. So it's like the browser does half of the "native" action (toggling the element "checked" attribute) before calling the handler, but then only *really* updates the element if the handler does not return false (or call "preventDefault()" on the event object).
|
In Angular 6 how make case insensitive url pattern?
In my case I want to support same url in case insensitive manner.
Example: it should support all url
```
localhost:1029/documentation
localhost:1029/DOCUMENTATION
localhost:1029/DOCUMENTAtion
localhost:1029/docuMENTATION
```
|
You should add this provide statement to the app.module.ts
```
import { DefaultUrlSerializer, UrlSerializer, UrlTree } from '@angular/router';
export class LowerCaseUrlSerializer extends DefaultUrlSerializer {
parse(url: string): UrlTree {
// Optional Step: Do some stuff with the url if needed.
// If you lower it in the optional step
// you don't need to use "toLowerCase"
// when you pass it down to the next function
return super.parse(url.toLowerCase());
}
}
```
And
```
@NgModule({
imports: [
...
],
declarations: [AppComponent],
providers: [
{
provide: UrlSerializer,
useClass: LowerCaseUrlSerializer
}
],
bootstrap: [AppComponent]
})
```
|
How to use Objective-C CocoaPods in a Swift Project
Is there a way I can use a CocoaPod written in Objective-C in my Swift project using swift?
Do I just make a bridging header? And if so, can I access the objects, classes, and fields defined by the libraries in the CocoaPod in Swift?
|
Basic answer to your question is Yes, you can use objective-c code built with CocoaPods.
The more important question is "How do you use such libs?"
The answer depends on the `use_frameworks!` flag in your `Podfile`:
Let's imagine that you want use Objective-C pod with name `CoolObjectiveCLib`.
**If your pod file uses `use_frameworks!` flag:**
```
// Podfile
use_frameworks!
pod 'CoolObjectiveCLib'
```
Then you don't need add any bridge header files.
Everything that you need is import framework in Swift source file:
```
// MyClass.swift
import CoolObjectiveCLib
```
Now you can use all classes that are presented in lib.
**If your pod file doesn't use `use_frameworks!` flag:**
```
// Podfile
pod 'CoolObjectiveCLib'
```
Then you need create bridging header file and import there all necessary Objective-C headers:
```
// MyApp-Bridging-Header
#import "CoolObjectiveCLib.h"
```
Now you can use all classes that are defined in imported headers.
|
Presenting Tukey HSD pairwise p-values in a table
I'm running a posthoc Tukey HSD on my data, which has ten factor levels. The table is massive and I was hoping to just present the p-values to the reader, in a pairwise table, leaving the 45 row-ed table for the appendix.
Here is an example dataset:
```
set.seed(42)
x <- rnorm(100,1,2)
category <- letters[1:10]
data <- cbind.data.frame(x, category)
summary(data.aov <- aov(x~category, data = data))
data.hsd<-TukeyHSD(data.aov)
data.hsd.result<-data.frame(data.hsd$category)
data.hsd.result
```
The result is a table of 45 rows. Instead, I'd like a table with the factor levels as row and column names, with the p-value in the cell, showing if the two are significantly different. Xs or underscores or whatever could represent repeated or unnecessary comparisons. Something like this:
```
a b c d e f ... j
a X 0.97 1 0.99 0.89 0.99 ... 0.99
b X X 0.99 0.89 0.94 0.92 ... 0.97
c X X X 0.85 0.93 0.96 ... 0.98
| ... ... ... ... ... ... ... ...
i X X X X X X ... 0.84
```
and so on.
Is there a way to produce a table like this automatically?
|
You want the p-values in the upper-triangular matrix form. That's a bit unnatural for R since it fills its matrices by column, but it's easy enough to fix. First check that you are getting the correct order:
```
> resm <- matrix(NA, 10, 10)
> resm[lower.tri(resm) ] <- rownames(data.hsd.result)
> resm
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] NA NA NA NA NA NA NA NA NA NA
[2,] "b-a" NA NA NA NA NA NA NA NA NA
[3,] "c-a" "c-b" NA NA NA NA NA NA NA NA
[4,] "d-a" "d-b" "d-c" NA NA NA NA NA NA NA
[5,] "e-a" "e-b" "e-c" "e-d" NA NA NA NA NA NA
[6,] "f-a" "f-b" "f-c" "f-d" "f-e" NA NA NA NA NA
[7,] "g-a" "g-b" "g-c" "g-d" "g-e" "g-f" NA NA NA NA
[8,] "h-a" "h-b" "h-c" "h-d" "h-e" "h-f" "h-g" NA NA NA
[9,] "i-a" "i-b" "i-c" "i-d" "i-e" "i-f" "i-g" "i-h" NA NA
[10,] "j-a" "j-b" "j-c" "j-d" "j-e" "j-f" "j-g" "j-h" "j-i" NA
> t(resm)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] NA "b-a" "c-a" "d-a" "e-a" "f-a" "g-a" "h-a" "i-a" "j-a"
[2,] NA NA "c-b" "d-b" "e-b" "f-b" "g-b" "h-b" "i-b" "j-b"
[3,] NA NA NA "d-c" "e-c" "f-c" "g-c" "h-c" "i-c" "j-c"
[4,] NA NA NA NA "e-d" "f-d" "g-d" "h-d" "i-d" "j-d"
[5,] NA NA NA NA NA "f-e" "g-e" "h-e" "i-e" "j-e"
[6,] NA NA NA NA NA NA "g-f" "h-f" "i-f" "j-f"
[7,] NA NA NA NA NA NA NA "h-g" "i-g" "j-g"
[8,] NA NA NA NA NA NA NA NA "i-h" "j-h"
[9,] NA NA NA NA NA NA NA NA NA "j-i"
[10,] NA NA NA NA NA NA NA NA NA NA
```
So it's just:
```
resm <- matrix(NA, 10, 10)
resm[lower.tri(resm) ] <-round(data.hsd.result$p.adj, 3)
t(resm)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] NA 0.974 1.00 1 0.885 0.997 0.985 0.673 0.559 1.000
[2,] NA NA 0.99 1 1.000 1.000 1.000 0.999 0.997 0.999
[3,] NA NA NA 1 0.938 0.999 0.995 0.772 0.666 1.000
[4,] NA NA NA NA 0.990 1.000 1.000 0.921 0.856 1.000
[5,] NA NA NA NA NA 1.000 1.000 1.000 1.000 0.988
[6,] NA NA NA NA NA NA 1.000 0.991 0.974 1.000
[7,] NA NA NA NA NA NA NA 0.998 0.993 1.000
[8,] NA NA NA NA NA NA NA NA 1.000 0.914
[9,] NA NA NA NA NA NA NA NA NA 0.846
[10,] NA NA NA NA NA NA NA NA NA NA
```
Adding row and column names to a matrix is rather trivial with the functions: `rownames<-` and `colnames<-`. See their shared help page for worked examples.
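For example, a minimal sketch using the factor levels from the example data:
```
pmat <- t(resm)
dimnames(pmat) <- list(letters[1:10], letters[1:10])
pmat
```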
|
How to align content bottom on Bootstrap 4 col
I have the following code for a footer:
```
<div class="container"> <hr>
<div class="row">
<div class="col-md-6">
© BusinessName 2017.
</div>
<div class="col-md-3"> <br>
<a href="/blog"> Blog </a> <br>
<a href="/blog"> FAQ </a> <br>
<a href="/blog"> About </a>
</div>
<div class="col-md-3">
<p><strong>Contact</strong> <br> [email protected] <br> more info bla bla <br> more more more info</p>
</div>
</div> </div>
```
I am trying to align all content on the bottom line. I've tried some things but nothing works... any bright ideas?
|
You can use `align-items-end` from the [new Bootstrap 4 flexbox utilities](http://v4-alpha.getbootstrap.com/utilities/flexbox/#align-items)...
```
<div class="container">
<hr>
<div class="row align-items-end">
<div class="col-md-6">
© BusinessName 2017.
</div>
<div class="col-md-3">
<a href="/blog"> Blog </a>
<br>
<a href="/blog"> FAQ </a>
<br>
<a href="/blog"> About </a>
</div>
<div class="col-md-3">
<strong>Contact</strong>
<br> [email protected]
<br> more info bla bla
<br> more more more info
</div>
</div>
</div>
```
<http://codeply.com/go/GXTF43toS9>
Also, auto margins work inside flexbox. There you could use `mt-auto` (margin-top:auto) to "push" the col-\* to the bottom.
|
How to improve performance of ngRepeat over a huge dataset (angular.js)?
I have a huge dataset of several thousand rows with around 10 fields each, about 2MBs of data. I need to display it in the browser. Most straightforward approach (fetch data, put it into `$scope`, let `ng-repeat=""` do its job) works fine, but it freezes the browser for about half of a minute when it starts inserting nodes into DOM. How should I approach this problem?
One option is to append rows to `$scope` incrementally and wait for `ngRepeat` to finish inserting one chunk into DOM before moving to the next one. But AFAIK ngRepeat does not report back when it finishes "repeating", so it's going to be ugly.
Other option is to split data on the server into pages and fetch them in multiple requests, but that's even uglier.
I looked through Angular documentation in search of something like `ng-repeat="data in dataset" ng-repeat-steps="500"`, but found nothing. I am fairly new to Angular ways, so it is possible that I am missing the point completely. What are the best practices at this?
|
I agree with @AndreM96 that the best approach is to display only a limited number of rows, for faster rendering and a better UX; this could be done with pagination or with an infinite scroll.
Infinite scroll with Angular is really simple with [limitTo](https://docs.angularjs.org/api/ng/filter/limitTo) filter. You just have to set the initial limit and when the user asks for more data (I am using a button for simplicity) you increment the limit.
```
<table>
<tr ng-repeat="d in data | limitTo:totalDisplayed"><td>{{d}}</td></tr>
</table>
<button class="btn" ng-click="loadMore()">Load more</button>
//the controller
$scope.totalDisplayed = 20;
$scope.loadMore = function () {
$scope.totalDisplayed += 20;
};
$scope.data = data;
```
Here is a [JsBin](http://jsbin.com/oricad/1/edit).
This approach could be a problem for phones because usually they lag when scrolling a lot of data, so in this case I think a pagination fits better.
To do it you will need the limitTo filter and also a custom filter to define the starting point of the data being displayed.
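A common sketch of such a filter (the name `startFrom` and the paging variables are just illustrative):
```
app.filter('startFrom', function () {
  return function (input, start) {
    return input ? input.slice(+start) : [];
  };
});
// usage: ng-repeat="d in data | startFrom: currentPage * pageSize | limitTo: pageSize"
```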
Here is a [JSBin](http://jsbin.com/ilugum/2/edit) with a pagination.
|
Configuring Elastic Beanstalk for SSH access to private git repo using Amazon Linux 2 hooks
Suppose we have a custom Python package, called `shared_package`, in a private repository, hosted on github or bitbucket. Our private repository is configured for read-only access via SSH, as described e.g. [here for github](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) and [here for bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/add-access-keys/).
Another one of our projects, aptly named `dependent_project`, depends on this `shared_package`, and needs to be deployed to AWS Elastic Beanstalk (EB). Our environment uses the latest "Python on Amazon Linux 2" platform, and we use `pipenv` as package manager.
For various reasons, it would be most convenient for us to install `shared_package` directly from our online git repository, as described [here for pipenv](https://pipenv.pypa.io/en/latest/basics/#a-note-about-vcs-dependencies) and [here for pip](https://pip.pypa.io/en/stable/topics/vcs-support/).
The `Pipfile` for our `dependent_project` looks like this:
```
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
shared_package = {git = "ssh://bitbucket.org/our_username/shared_package.git", editable = true, ref = "2021.0"}
[dev-packages]
awsebcli = "*"
[requires]
python_version = "3.8"
```
This works well on our local development systems, but when deploying `dependent_project` to Elastic Beanstalk, the `pipenv` installation fails with: `Permission denied (publickey)`.
This leads to the question:
**How to configure an Elastic Beanstalk environment, using [Amazon Linux 2 platform hooks](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html), so that `pipenv` can successfully install a package from a private online git repo, via SSH?**
Some pieces of the puzzle can be found in the following discussions, but these do not use Amazon Linux 2 platform hooks:
- [Setting up private Github access with AWS Elastic Beanstalk and Ruby container](https://stackoverflow.com/q/13476138)
- [Setting up SSH keys for github private repo access on Elastic Beanstalk](https://stackoverflow.com/q/43497663)
- [What is the recommended way to handle node.js private module dependencies?](https://stackoverflow.com/q/22864010)
- [BitBucket: Host key verification failed](https://stackoverflow.com/q/40576718)
- [Grant S3 access to Elastic Beanstalk instances](https://stackoverflow.com/q/21653176)
- [How to deploy private python pip dependency with Amazon AWS Elastic Beanstalk?](https://stackoverflow.com/q/34727442)
- [Python in AWS Elastic Beanstalk: Private package dependencies](https://stackoverflow.com/q/25367732)
|
## Summary
Assume we have defined the following Elastic Beanstalk environment properties, and both the bitbucket public key file and our private key file have been uploaded to the specified S3 bucket:
```
S3_BUCKET_NAME="my_bucket"
REPO_HOST_NAME="bitbucket.org"
REPO_HOST_PUBLIC_KEY_NAME="bitbucket_public_key"
REPO_PRIVATE_KEY_NAME="my_private_key"
```
The configuration can then be accomplished using this hook in `.platform/hooks/prebuild`:
```
#!/bin/bash
# git is required to install our python packages directly from bitbucket
yum -y install git
# file paths (platform hooks are executed as root)
SSH_KNOWN_HOSTS_FILE="/root/.ssh/known_hosts"
SSH_CONFIG_FILE="/root/.ssh/config"
PRIVATE_KEY_FILE="/root/.ssh/$REPO_PRIVATE_KEY_NAME"
# remove any existing (stale) keys for our host from the known_hosts file
[ -f $SSH_KNOWN_HOSTS_FILE ] && ssh-keygen -R $REPO_HOST_NAME
# read the (fresh) host key from S3 file and append to known_hosts file
aws s3 cp "s3://$S3_BUCKET_NAME/$REPO_HOST_PUBLIC_KEY_NAME" - >> $SSH_KNOWN_HOSTS_FILE
# copy our private key from S3 to our instance
aws s3 cp "s3://$S3_BUCKET_NAME/$REPO_PRIVATE_KEY_NAME" $PRIVATE_KEY_FILE
# create an ssh config file to point to the private key file
tee $SSH_CONFIG_FILE <<HERE
Host $REPO_HOST_NAME
User git
Hostname $REPO_HOST_NAME
IdentityFile $PRIVATE_KEY_FILE
HERE
# file permissions must be restricted
chmod 600 $SSH_CONFIG_FILE
chmod 600 $PRIVATE_KEY_FILE
```
Note this file requires execution permission (`chmod +x <file path>`).
## Detailed explanation
Read on for a detailed rationale.
### Git
To access a `git` repository, our Elastic Beanstalk environment will need to have `git` installed.
This can be done in a platform hook using `yum` (`-y` assumes "yes" to every question):
```
yum -y install git
```
### SSH keys
To set up an SSH connection between our Elastic Beanstalk (EB) instance and e.g. a bitbucket repository, we need three SSH keys:
- The **public** key for bitbucket.org, to verify that we are connecting to a trusted host.
To obtain the public key for bitbucket.org, in a suitable format for `known_hosts`, we can use [ssh-keyscan](https://www.openssh.com/) (a small sketch of capturing and uploading it follows this list).
To be on the safe side, we [should verify this key](https://security.stackexchange.com/a/251498) using a "trusted" source.
In our case the best we can do is compare the public key fingerprint with the "official" one published on the [bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/configure-ssh-and-two-step-verification/) (or [github](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints)) website.
The fingerprint can be calculated from the public key using `ssh-keygen` e.g.
```
ssh-keyscan -t rsa bitbucket.org | ssh-keygen -lf -
```
- The **private** key and **public** key for our repository.
A key pair, consisting of private and public key, can be generated using `ssh-keygen`.
The private key must be kept secret, the public key must be copied to the list of "access keys" for the bitbucket repository, as described in the [bitbucket docs](https://support.atlassian.com/bitbucket-cloud/docs/add-access-keys/).
Note that it is most convenient to create a key pair *without* passphrase, otherwise our script will need to handle the passphrase as well.
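As a sketch of the first point above, the host key could be captured, verified, and uploaded to the bucket like this (bucket and file names match the environment properties from the summary):
```
# capture bitbucket's public host key in known_hosts format
ssh-keyscan -t rsa bitbucket.org > bitbucket_public_key
# check its fingerprint against the one published by bitbucket before trusting it
ssh-keygen -lf bitbucket_public_key
# upload it to the S3 bucket used by the prebuild hook
aws s3 cp bitbucket_public_key "s3://my_bucket/bitbucket_public_key"
```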
### Storing the keys on AWS
The public bitbucket host key and our private repo key need to be available in the EB environment during deployment.
The *private* key is secret, so it should *not* be stored in the source code, nor should it be otherwise version controlled.
The most convenient option would be to store the key values as [EB environment properties](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-softwaresettings.html#environments-cfg-softwaresettings-console) (i.e. environment variables), because these are readily available during deployment.
In principle, this *can* be done, e.g. using `base64` encoding to store the multiline private key in a single line environment property.
However, the total size of all EB environment property keys and values combined is [limited to a mere 4096 bytes](https://stackoverflow.com/q/54344236), which basically precludes this option.
An alternative is to store the key files in a secure private bucket on AWS S3.
The [documentation](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html) describes how to set up an IAM role that grants access to your S3 bucket for the EC2 instance. The documentation does provide a configuration example, but this uses `.ebextensions` and does not apply to `.platform` hooks.
In short, we can create a basic S3 bucket with default settings ("block public access" enabled, no custom permissions), and upload the SSH key files to that bucket.
Then, using the AWS IAM web console, select the `aws-elasticbeanstalk-ec2-role` (or, preferably, create a custom role), and attach the `AmazonS3ReadOnlyAccess` policy.
During deployment to Elastic Beanstalk, we can use `.platform` hooks to download the key files from the S3 bucket to the EC2 instance using the [aws cli](https://aws.amazon.com/cli/).
To test connectivity between EC2 and S3, we could use [eb ssh](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html) to connect to the EC2 instance, followed by, for example, `aws s3 ls s3://<bucket name>` to list bucket contents.
### Updating known\_hosts
To indicate that bitbucket.org is a trusted host, its public key needs to be added to the `known_hosts` file on our instance.
In our platform hook script, we remove any existing public keys for the host, in case they are stale, and replace them by the current key from our file on S3:
```
SSH_KNOWN_HOSTS_FILE="/root/.ssh/known_hosts"
[ -f $SSH_KNOWN_HOSTS_FILE ] && ssh-keygen -R $REPO_HOST_NAME
aws s3 cp "s3://$S3_BUCKET_NAME/$REPO_HOST_PUBLIC_KEY_NAME" - >> $SSH_KNOWN_HOSTS_FILE
```
### Specifying the private key
The private key can be downloaded from S3 as follows, and we need to restrict the file permissions:
```
PRIVATE_KEY_FILE="/root/.ssh/$REPO_PRIVATE_KEY_NAME"
aws s3 cp "s3://$S3_BUCKET_NAME/$REPO_PRIVATE_KEY_NAME" $PRIVATE_KEY_FILE
chmod 600 $PRIVATE_KEY_FILE
```
An SSH configuration file is also required to point to the private key:
```
tee $SSH_CONFIG_FILE <<HERE
Host $REPO_HOST_NAME
User git
Hostname $REPO_HOST_NAME
IdentityFile $PRIVATE_KEY_FILE
HERE
chmod 600 $SSH_CONFIG_FILE
```
Again, file [permissions must be restricted](https://man.openbsd.org/ssh_config.5#FILES).
The final script is shown in the summary at the top.
This script could be stored e.g. as `.platform/hooks/prebuild/01_configure_bitbucket_ssh.sh` in the project folder.
### Hooks and confighooks
Note that Amazon Linux 2 uses `.platform/hooks`, for normal deployments, and `.platform/confighooks`, for configuration deployments.
Often, identical scripts need to be used in both cases.
To prevent duplication of code, our `.platform/confighooks/prebuild/01_configure_bitbucket_ssh.sh` could look like this:
```
#!/bin/bash
source "/var/app/current/.platform/hooks/prebuild/01_configure_bitbucket_ssh.sh"
```
Note that the application code ends up in `/var/app/current` on the instance.
|
How to print a DIV in ElectronJS
I'm trying to convert my web app into an app made in ElectronJS.
In my web app I print a div with a barcode. This works pretty fine, but in ElectronJS I can't achieve this.
Originally I'd use this function:
```
$scope.printDiv = function (divName) {
var printContents = document.getElementById(divName).innerHTML;
var popupWin = window.open('', '_blank', 'width=500,height=500');
popupWin.document.open();
popupWin.document.write('<html><head><link rel="stylesheet" type="text/css" href="styles/main.css" type=\"text/css\" media=\"print\" /></head><body onload="window.print()">' + printContents + '</body></html>');
popupWin.document.close();
}
```
With ElectronJS I don't know how to pass the object to print.
I'm also trying to generate a PDF from content that I can load, but the PDFs are corrupted:
```
var windowPrint = require('electron').remote.BrowserWindow;
var fs = require('fs');
var newWindow = new windowPrint({width: 800, height: 600, show: false});
console.log(newWindow);
newWindow.loadURL('http://github.com');
newWindow.show();
newWindow.webContents.print({silent: true, printBackground: true});
newWindow.webContents.printToPDF({printSelectionOnly : true, printBackground: true}, function (error, data) {
if (error) {
throw error;
}
console.log(error);
console.log(data);
fs.writeFile('print.pdf', function (data, error) {
if (error) {
throw error;
}
console.log(error);
console.log(data);
});
});
```
Is there a simple way to print a DIV with ElectronJS?
Thank you for reading.
|
You are printing the page before loading has finished.
My approach:
1. create a main window and an (invisible) worker window
```
import {app, BrowserWindow, Menu, ipcMain, shell} from "electron";
const os = require("os");
const fs = require("fs");
const path = require("path");
let mainWindow: Electron.BrowserWindow = undefined;
let workerWindow: Electron.BrowserWindow = undefined;
async function createWindow() {
mainWindow = new BrowserWindow();
mainWindow.loadURL("file://" + __dirname + "/index.html");
mainWindow.webContents.openDevTools();
mainWindow.on("closed", () => {
// close worker windows later
mainWindow = undefined;
});
workerWindow = new BrowserWindow();
workerWindow.loadURL("file://" + __dirname + "/worker.html");
// workerWindow.hide();
workerWindow.webContents.openDevTools();
workerWindow.on("closed", () => {
workerWindow = undefined;
});
}
// retransmit it to workerWindow
ipcMain.on("printPDF", (event: any, content: any) => {
console.log(content);
workerWindow.webContents.send("printPDF", content);
});
// when worker window is ready
ipcMain.on("readyToPrintPDF", (event) => {
const pdfPath = path.join(os.tmpdir(), 'print.pdf');
// Use default printing options
  workerWindow.webContents.printToPDF({}).then((data) => {
fs.writeFile(pdfPath, data, function (error) {
if (error) {
throw error
}
shell.openItem(pdfPath)
event.sender.send('wrote-pdf', pdfPath)
})
}).catch((error) => {
throw error;
})
});
```
2. mainWindow.html
```
<head>
</head>
<body>
<button id="btn"> Save </button>
<script>
const ipcRenderer = require("electron").ipcRenderer;
// cannot send message to other windows directly https://github.com/electron/electron/issues/991
function sendCommandToWorker(content) {
ipcRenderer.send("printPDF", content);
}
document.getElementById("btn").addEventListener("click", () => {
// send whatever you like
sendCommandToWorker("<h1> hello </h1>");
});
</script>
</body>
```
3. worker.html
```
<head> </head>
<body>
<script>
const ipcRenderer = require("electron").ipcRenderer;
ipcRenderer.on("printPDF", (event, content) => {
document.body.innerHTML = content;
ipcRenderer.send("readyToPrintPDF");
});
</script>
</body>
```
|
Registry keys for out-of-process COM server
I'm in the process of implementing my first out-of-process COM server (my first COM server altogether, for that matter). I have followed the steps to write an IDL file, generate the code for the proxy/stub DLL, compile the DLL, and register it.
When I check the registry keys, I have
- A key named `HKEY_CLASSES_ROOT/Interface/<GUID>`, whose value is (say) `IMyApp` and
- A key named `HKEY_CLASSES_ROOT/Interface/<GUID>/ProxyStubClsid32`, whose value is `<GUID>`, i.e. *the same value as in the key name*
I don't understand how the second key's value can be the same `<GUID>` value as in the key name, because my current understanding is that
- In `HKEY_CLASSES_ROOT/Interface/<GUID>`, GUID is an **interface** ID
- The **value** of `ProxyStubClsid32` is not an interface ID, but a **class** ID referring to the component that implements the above interface
- The value of `HKEY_CLASSES_ROOT/CLSID/<GUID>/InprocServer32` (where GUID is the above **class** ID) points to the proxy DLL
How, then, can the value of `HKEY_CLASSES_ROOT/Interface/<GUID>/ProxyStubClsid32` hold the same value GUID if one is an interface ID and the other is a class ID?
EDIT: I'm still hoping for an answer to this one. In short: since a component and an interface are two different things, how can the same ID be used for both?
|
Your basic understanding of the way GUIDs are used in COM is correct. The first notable point is that an interface and a coclass having the same GUID is not a problem. They live in different registry keys, HKCR\Interface vs HKCR\CLSID, and it is always clear in COM whether you are looking up an IID or a CLSID.
Second is the IDL you wrote. Note that there's no place there to specify the CLSID of the proxy, only the IIDs supported by the proxy and stub can be declared there.
Next, you need a wild goose chase through the way that the proxy/stub is autogenerated. The core Windows SDK header is RpcProxy.h, open it in a text editor to have a look see. The macro soup is very heavy but it does have a few decent comments that describe what's going on. The important RPC helper function is NdrDllRegisterProxy(), it registers the proxy and is called when you use Regsvr32.exe. Its 3rd argument specifies the CLSID of the proxy. I'll let you do the reading and just quote the important bits in the .h file:
>
> Compiler switches:
>
>
>
> ```
> -DPROXY_CLSID=clsid
> Specifies a class ID to be used by the proxy DLL.
>
> ```
>
>
This one you specify with Project + Properties, C/C++, Preprocessor, Preprocessor Definitions setting. Note that your project will **not** specify it.
Chasing through the soup then lands you on this one:
```
// if the user specified an override for the class id, it is
// PROXY_CLSID at this point
#ifndef PROXY_CLSID
#define GET_DLL_CLSID \
( aProxyFileList[0]->pStubVtblList[0] != 0 ? \
aProxyFileList[0]->pStubVtblList[0]->header.piid : 0)
#else //PROXY_CLSID
#define GET_DLL_CLSID &PROXY_CLSID
#endif //PROXY_CLSID
```
In other words, if you didn't specify the CLSID yourself (you didn't) then it uses the first **IID** in the stub table.
And that makes the ProxyStubClsid32 guid the same as the IID of your first interface. Feature, not a bug.
|
Horizontal add with \_\_m512 (AVX512)
How does one efficiently perform horizontal addition with floats in a 512-bit AVX register (i.e. add the items of a single vector together)? For 128 and 256 bit registers this can be done using `_mm_hadd_ps` and `_mm256_hadd_ps`, but there is no `_mm512_hadd_ps`. The Intel intrinsics guide documents `_mm512_reduce_add_ps`. It doesn't actually correspond to a single instruction, but its existence suggests there is an optimal method; however, it doesn't appear to be defined in the header files that come with the latest snapshot of GCC and I can't find a definition for it with Google.
I figure "hadd" can be emulated with `_mm512_shuffle_ps` and `_mm512_add_ps`, or I could use `_mm512_extractf32x4_ps` to break a 512-bit register into four 128-bit registers, but I want to make sure I'm not missing something better.
|
The INTEL compiler has the following intrinsics defined to do horizontal sums:
```
_mm512_reduce_add_ps //horizontal sum of 16 floats
_mm512_reduce_add_pd //horizontal sum of 8 doubles
_mm512_reduce_add_epi32 //horizontal sum of 16 32-bit integers
_mm512_reduce_add_epi64 //horizontal sum of 8 64-bit integers
```
However, as far as I can tell these are broken into multiple instructions anyway so I don't think you gain anything more than doing the horizontal sum of the upper and lower part of the AVX512 register.
```
__m256 low = _mm512_castps512_ps256(zmm);
__m256 high = _mm256_castpd_ps(_mm512_extractf64x4_pd(_mm512_castps_pd(zmm),1));
__m256d low = _mm512_castpd512_pd256(zmm);
__m256d high = _mm512_extractf64x4_pd(zmm,1);
__m256i low = _mm512_castsi512_si256(zmm);
__m256i high = _mm512_extracti64x4_epi64(zmm,1);
```
To get the horizontal sum you then do `sum = horizontal_add(low + high)`.
```
static inline float horizontal_add (__m256 a) {
__m256 t1 = _mm256_hadd_ps(a,a);
__m256 t2 = _mm256_hadd_ps(t1,t1);
__m128 t3 = _mm256_extractf128_ps(t2,1);
__m128 t4 = _mm_add_ss(_mm256_castps256_ps128(t2),t3);
return _mm_cvtss_f32(t4);
}
static inline double horizontal_add (__m256d a) {
__m256d t1 = _mm256_hadd_pd(a,a);
__m128d t2 = _mm256_extractf128_pd(t1,1);
__m128d t3 = _mm_add_sd(_mm256_castpd256_pd128(t1),t2);
return _mm_cvtsd_f64(t3);
}
```
I got all this information and functions from [Agner Fog's Vector Class Library](http://www.agner.org/optimize/#vectorclass) and the [Intel Instrinsics Guide online](https://software.intel.com/sites/landingpage/IntrinsicsGuide/).
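Putting the pieces together, a minimal sketch of the full 16-float reduction (reusing the float `horizontal_add` above; this only mirrors what `_mm512_reduce_add_ps` would expand to and is not the official definition) might look like:
```
static inline float horizontal_add_512(__m512 zmm) {
    // split the 512-bit register into its two 256-bit halves
    __m256 low  = _mm512_castps512_ps256(zmm);
    __m256 high = _mm256_castpd_ps(_mm512_extractf64x4_pd(_mm512_castps_pd(zmm), 1));
    // 8 lane-wise adds, then reduce the remaining 256-bit vector
    return horizontal_add(_mm256_add_ps(low, high));
}
```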
|
why is "test-jar" dependency required for "mvn compile"
I'm having trouble using `test-jar` dependencies in a multi-module project. For example, when I declare that the `cleartk-syntax` module depends on the `cleartk-token` module's `test-jar` like this (the full code is [here](https://cleartk.googlecode.com/svn/trunk/)):
```
<modelVersion>4.0.0</modelVersion>
<groupId>org.cleartk</groupId>
<artifactId>cleartk-syntax</artifactId>
<version>0.5.0-SNAPSHOT</version>
<name>cleartk-syntax</name>
...
<dependencies>
...
<dependency>
<groupId>org.cleartk</groupId>
<artifactId>cleartk-token</artifactId>
<version>0.7.0-SNAPSHOT</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
```
I get the following error if I run `mvn compile` using maven 2:
```
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Failed to resolve artifact.
Missing:
----------
1) org.cleartk:cleartk-token:test-jar:tests:0.7.0-SNAPSHOT
```
If I use maven 3 I get the error:
```
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.654s
[INFO] Finished at: Mon Jan 24 21:19:17 CET 2011
[INFO] Final Memory: 16M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project cleartk-syntax: Could not resolve
dependencies for project org.cleartk:cleartk-syntax:jar:0.5.0-SNAPSHOT: Could
not find artifact org.cleartk:cleartk-token:jar:tests:0.7.0-SNAPSHOT
```
In the latter case, I'm particularly confused because I would have thought it should be looking for an artifact of type `test-jar` not of type `jar`.
With maven 2 or maven 3, I can get it to compile by running `mvn compile package -DskipTests`. With maven 3, I can also get it to compile by running `mvn compile test-compile`.
But why is either maven 2 or maven 3 looking for a `test-jar` dependency during the `compile` phase? Shouldn't it wait until the `test-compile` phase to look for such dependencies?
**Update:** The answer was that the maven-exec-plugin, used during my compile phase, [requires dependency resolution of artifacts in scope:test](http://mojo.codehaus.org/exec-maven-plugin/java-mojo.html). I've created [a feature request to remove the scope:test dependency](http://jira.codehaus.org/browse/MEXEC-91).
|
This looks like a definite bug to me.
I have the same problem and tested Maven 3.0.1 and 3.0.2. Validate doesn't fail, only the compile step fails. With Maven 3 `mvn compile` breaks but `mvn test-compile` works.
It appears that the compile phase is looking for test-jar artifacts in the reactor and then repo, but it shouldn't since the dependency is in test scope. Test scope artifacts should be resolved during test-compile, not compile.
As a result, I thought this could be worked around by mapping the maven-compiler-plugin's testCompile goal to the compile phase, instead of the default test-compile phase.
I added this to my pom, right next to the part that adds the test-jar creation in the upstream pom:
```
<!-- there is a bug in maven causing it to resolve test-jar types
at compile time rather than test-compile. Move the compilation
of the test classes earlier in the build cycle -->
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<executions>
<execution>
<id>default-testCompile</id>
<phase>compile</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
```
But that won't work either because the five phases between compile and test-compile haven't run and set up things like the test classpath.
I guess the real workaround until this bug is fixed is to use `test-compile` in place of `compile`.
|
set tag attribute and add plain text content to the tag using nokogiri builder (ruby)
I am trying to build XML using Nokogiri with some tags that have both attributes and plain text inside the tag. So I am trying to get to this:
```
<?xml version="1.0"?>
<Transaction requestName="OrderRequest">
<Option b="hive">hello</Option>
</Transaction>
```
Using builder I have this:
```
builder = Nokogiri::XML::Builder.new { |xml|
xml.Transaction("requestName" => "OrderRequest") do
xml.Option("b" => "hive").text("hello")
end
}
```
which renders to:
```
<Transaction requestName="OrderRequest">
<Option b="hive" class="text">hello</Option>
</Transaction>
```
So it produces
`<Option b="hive" class="text">hello</Option>`
where I would just like it to be
`<Option b="hive">hello</Option>`
I am not sure how to do that. If I try to get a Nokogiri object by just feeding it the XML I want, it renders back exactly what I need with the internal text being within the `<Option>` tag set to `children=[#<Nokogiri::XML::Text:0x80b9e3dc "hello">]` and I don't know how to set that from builder.
If anyone has a reference to that in the Nokogiri documentation, I would appreciate it.
|
There are two approaches you can use.
**Using `.text`**
You can call the `.text` method to set the text of a node:
```
builder = Nokogiri::XML::Builder.new { |xml|
xml.Transaction("requestName" => "OrderRequest") do
xml.Option("b" => "hive"){ xml.text("hello") }
end
}
```
which produces:
```
<?xml version="1.0"?>
<Transaction requestName="OrderRequest">
<Option b="hive">hello</Option>
</Transaction>
```
**Using a text parameter**
Alternatively, you can pass the text in as a parameter. The text should be passed in before the attribute values. In other words, the tag is added in the form:
```
tag "text", :attribute => 'value'
```
In this case, the desired builder would be:
```
builder = Nokogiri::XML::Builder.new { |xml|
xml.Transaction("requestName" => "OrderRequest") do
xml.Option("hello", "b" => "hive")
end
}
```
This produces the same XML:
```
<?xml version="1.0"?>
<Transaction requestName="OrderRequest">
<Option b="hive">hello</Option>
</Transaction>
```
|
Fitting models in R where coefficients are subject to linear restriction(s)?
How should I define a model formula in R when one (or more) exact linear restrictions binding the coefficients are available? As an example, say that you know that b1 = 2\*b0 in a simple linear regression model.
|
Suppose your model is
$ Y(t) = \beta_0 + \beta_1 \cdot X_1(t) + \beta_2 \cdot X_2(t) + \varepsilon(t)$
and you are planning to restrict the coefficients, for instance like:
$ \beta_1 = 2 \beta_2$
Inserting the restriction and rewriting the original regression model, you get
$ Y(t) = \beta_0 + 2 \beta_2 \cdot X_1(t) + \beta_2 \cdot X_2(t) + \varepsilon(t) $
$ Y(t) = \beta_0 + \beta_2 (2 \cdot X_1(t) + X_2(t)) + \varepsilon(t)$
Introduce a new variable $Z(t) = 2 \cdot X_1(t) + X_2(t)$ and your model with the restriction will be
$ Y(t) = \beta_0 + \beta_2 Z(t) + \varepsilon(t)$
In this way you can handle any exact restrictions, because the number of *equal signs* reduces the number of unknown parameters by the same number.
In R formulas you can do this directly with the `I()` function:
```
lm(formula = Y ~ I(1 + 2*X1) + X2 + X3 - 1, data = <your data>)
lm(formula = Y ~ I(2*X1 + X2) + X3, data = <your data>)
```
|
How to create a custom color theme with angular5 and angular materials
I have been following the angular/material documentation for how to create a custom theme, followed other blogs, and checked various similar Stack Overflow questions, but can't seem to get this working. I have the following styles.css, angular-cli.json, theme.scss, and another sass file where my theme colors come from, super-styles.sass.
styles.css
```
...
@import 'assets/styles/theme.scss';
...
```
angular-cli.json
```
...
"styles": [
"styles.css",
"src/assets/styles/theme.scss"
],
...
```
theme.scss
```
@import '~@angular/material/theming';
@import "super-styles";
// Plus imports for other components in your app.
// Include the common styles for Angular Material. We include this here so that you only
// have to load a single css file for Angular Material in your app.
// Be sure that you only ever include this mixin once!
@include mat-core()
// Define the palettes for your theme using the Material Design palettes available in palette.scss
// (imported above). For each palette, you can optionally specify a default, lighter, and darker
// hue.
$candy-app-primary: mat-palette($darkblue, A400);
$candy-app-accent: mat-palette($orange, A400);
// The warn palette is optional (defaults to red).
$candy-app-warn: mat-palette($alert);
// Create the theme object (a Sass map containing all of the palettes).
$candy-app-theme: mat-light-theme($candy-app-primary, $candy-app-accent, $candy-app-warn);
// Include theme styles for core and each component used in your app.
// Alternatively, you can import and @include the theme mixins for each component
// that you are using.
@include angular-material-theme($candy-app-theme);
```
Super-styles.sass
```
...
$darkblue: #7faedd
$mediumblue: #85ceef
$lightblue: #c5e8f1
$yellow: #f4ef5f
$alert: #f37652
$orange: #fbb03c
...
```
According to the tutorials, I feel like this should be working, but Angular doesn't compile and I get an error.
>
> ERROR in
> ./node\_modules/css-loader?{"sourceMap":false,"importLoaders":1}!./node\_modules/postcss-loader?{"ident":"postcss"}!./src/assets/styles/theme.scss
> Module build failed: Unknown word (23:1)
>
>
> 21 | $candy-app-theme: mat-light-theme($candy-app-primary,
> $candy-app-accent, $candy-app-warn); 22 |
>
>
>
> >
> > 23 | // Include theme styles for core and each component used in your app.
> > | ^ 24 | // Alternatively, you can import and @include the theme mixins for each component 25 | // that you are using.
> >
> >
> >
>
>
>
Any help as to how to build a custom theme and use it in my angular app would be greatly appreciated. Thanks!
|
In order to use a custom hex palette for an Angular Material theme you will need to define the different shades as well as contrast colors for the palette, even if you only want one color. I'd suggest using at least 3 colors (light, normal, dark) so that it works flawlessly with Material's built-in animations:
```
// below defines your custom color to build a theme palette from
$my-blue: (
50: #7fdddd,
100: #7faedd,
200: #7f7fdd,
300: #7faedd,
400: #7faedd,
500: #7faedd,
600: #7faedd,
700: #7faedd,
800: #7faedd,
900: #7faedd,
A100: #7faedd,
A200: #7faedd,
A400: #7faedd,
A700: #7faedd,
contrast: (
50: white,
100: white,
200: white,
300: white,
400: white,
500: white,
600: white,
700: white,
800: white,
900: white,
A100: white,
A200: white,
A400: white,
A700: white,
)
);
// below creates a primary palette with three shades of blue
$my-primary: mat-palette($my-blue, 100, 50, 200);
```
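To actually apply the palette, you would then feed it into a theme and include that theme, much like the `theme.scss` in the question does. A sketch follows; `$my-accent` and `$my-warn` are placeholders you would define from palettes of your own choosing (here the built-in `$mat-amber` / `$mat-red`):
```
// build and apply a theme from the custom primary palette
$my-accent: mat-palette($mat-amber);
$my-warn: mat-palette($mat-red);
$my-theme: mat-light-theme($my-primary, $my-accent, $my-warn);
@include angular-material-theme($my-theme);
```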
|
Can anyone explain to me how this code allows the stars to move?
I'm trying to understand how this code works; can anyone explain it, please?
```
def draw_star(star): # drawing a star
# you only need to change a pixel, so use set_at, not draw.line
screen.set_at((star[0], star[1]), (255, 255, 255))
star[0] -= 1
if star[0] < 0:
star[0] = screen.get_width()
star[1] = random.randint(0, screen.get_height())
stars = []
for i in range(1200):
x = random.randint(0, screen.get_width())
y = random.randint(0, screen.get_height())
stars.append([x,y])
for star in stars:
draw_star(star)
```
|
First, the code generates 1200 `[x, y]` coordinates, so each is a Python `list`:
```
stars = []
for i in range(1200):
x = random.randint(0, screen.get_width())
y = random.randint(0, screen.get_height())
stars.append([x,y])
```
Each `x` and `y` coordinate consists of a random value within the constraints of the screen.
Next, each of these coordinates is drawn:
```
for star in stars:
draw_star(star)
```
This passes the `[x, y]` coordinate list to the function `draw_star`
```
def draw_star(star): # drawing a star
```
This sets a white pixel at the given coordinates (`star[0]` is `x`, `star[1]` is the `y` coordinate):
```
# you only need to change a pixel, so use set_at, not draw.line
screen.set_at((star[0], star[1]), (255, 255, 255))
```
The code then subtracts 1 from the x coordinate (one step left). This changes the *original* list, because mutable lists are used:
```
star[0] -= 1
```
If this changed the coordinates beyond the edge of the screen, the star coordinates are *replaced* with new coordinates on the right-hand side of the screen, at a random height:
```
if star[0] < 0:
star[0] = screen.get_width()
star[1] = random.randint(0, screen.get_height())
```
If you were to *repeat* the `for star in stars: draw_star(star)` loop now with a screen blanking in between, you'd be *animating* the stars as moving from right to left, with new stars appearing on the right at random heights as stars drop off the left-hand side of the screen.
The core idea here is that the `draw_star()` function handles mutable lists and changes the values contained in them, effectively changing the contents of the global `stars` list for the next loop of the animation.
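For illustration, a minimal animation loop built on that idea could look like this (a sketch assuming `pygame`, with `screen`, `stars` and a `clock = pygame.time.Clock()` already set up as above):
```
running = True
while running:
    for event in pygame.event.get():   # let the window be closed
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))             # blank the screen each frame
    for star in stars:
        draw_star(star)                # draw and mutate each star in place
    pygame.display.flip()              # show the finished frame
    clock.tick(60)                     # roughly 60 frames per second
```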
|
Best way to Immediately Invoked Function Expression (IIFE) in perl?
Borrowing the term from JavaScript, what is the 'best practice' way to write an IIFE in Perl?
My test code below is a simple loop calling an anonymous sub and executing it right away to build an array of subs (which just return the loop index). It is mostly what I want; however, I need to use an intermediate variable (instead of using `@_`, which changes in the internal function).
```
use warnings;
use strict;
my @funcs;
for(my $i=0;$i<3;$i++) {
sub {
my $index=shift;
$funcs[$index]=sub {$index};
}
-> ($i);
}
for (@funcs) {
print &$_()."\n";
}
```
```
#Output
0
1
2
```
I know I can use `map` to restructure this problem. But putting that aside, is there a better way to do this?
**Update**
Thanks to @ikegami for highlighting some important points.
Just for future viewers of this question, my thoughts on this:
The 'iterator' for loop has different scoping (is it a map?) than a 'C-style' for loop. That cleans up the code without needing an IIFE at all. Sweet.
**Update 2**
The following code shows the differences I'm seeing. I'm not saying one is better than the other, but it's good to know, I think. The output I'm after is 0 1 2, but the first one only repeats the last value of $i (3 after the ++ operator).
```
use warnings;
use strict;
my @funcs;
print "C loop direct assignment of sub\n";
for(my $i=0;$i<3;$i++) {
$funcs[$i]= sub {$i};
}
print &$_()."\n" for @funcs;
print "C loop direct assignment of sub with variable\n";
for(my $i=0;$i<3;$i++) {
my $index=$i; #assignment/copy
$funcs[$index]= sub {$index};
}
print &$_()."\n" for @funcs;
print "For loop interator\n";
@funcs=[];
for my $i (0..2) {
$funcs[$i]=sub {$i};
}
print &$_()."\n" for @funcs;
print "C loop with IIFE assignment\n";
@funcs=[];
for (my $i=0;$i<3;$i++) {
sub {
my $index=shift;
$funcs[$index]=sub {$index};
}
-> ($i);
}
print &$_()."\n" for @funcs;
```
Output is:
```
C loop direct assignment of sub
3
3
3
C loop direct assignment of sub with variable
0
1
2
For loop interator
0
1
2
C loop with IIFE assignment
0
1
2
```
|
The Perl equivalent of
```
(function () {
var x = ...;
...
})();
```
is
```
sub {
my $x = ...;
...
}->();
```
That said, the IIFE is just a workaround that's simply not needed in Perl.
```
(function () {
var x = ...;
...
})();
```
is a workaround for
```
{
my $x = ...;
...
}
```
and
```
var result = (function () {
return ...;
})();
```
is a workaround for
```
my $result = do {
...
};
```
---
It looks like you are trying to translate something similar to the following:
```
let funcs = [];
for ( let i=0; i<3; ++i ) {
(function() {
var index = i;
funcs.push( function() { return index; } );
})();
}
for ( let func of funcs )
console.log( func() );
```
The following is the direct translation:
```
my @funcs;
for ( my $i=0; $i<3; ++$i ) {
sub {
my $index = $i;
push @funcs, sub { $index };
}->();
}
say $_->() for @funcs;
```
But there's simply no point in using an IIFE. One would simply use the following:
```
my @funcs;
for ( my $i=0; $i<3; ++$i ) {
my $index = $i;
push @funcs, sub { $index };
}
say $_->() for @funcs;
```
Now, one tends to avoid C-style for loops in Perl because foreach loops are far more readable (and more efficient!). It also makes the solution even simpler because the loop variable of a foreach loop is scoped to the body of the loop rather than scoped to the loop statement.
```
my @funcs;
for my $i ( 0 .. 2 ) {
push @funcs, sub { $i };
}
say $_->() for @funcs;
```
We could also use `map`.
```
my @funcs = map { my $i = $_; sub { $i } } 0 .. 2;
say $_->() for @funcs;
```
|
How would you transpose a binary matrix?
I have binary matrices in C++ that I represent with a vector of 8-bit values.
For example, the following matrix:
```
1 0 1 0 1 0 1
0 1 1 0 0 1 1
0 0 0 1 1 1 1
```
is represented as:
```
const uint8_t matrix[] = {
0b01010101,
0b00110011,
0b00001111,
};
```
The reason why I'm doing it this way is that computing the product of such a matrix and an 8-bit vector becomes really simple and efficient (just one bitwise AND and a parity computation per row), which is much better than calculating each bit individually.
I'm now looking for an efficient way to transpose such a matrix, but I haven't been able to figure out how to do it without having to manually calculate each bit.
Just to clarify, for the above example, I'd like to get the following result from the transposition:
```
const uint8_t transposed[] = {
0b00000000,
0b00000100,
0b00000010,
0b00000110,
0b00000001,
0b00000101,
0b00000011,
0b00000111,
};
```
**NOTE**: I would prefer an algorithm that can calculate this with arbitrary-sized matrices but am also interested in algorithms that can only handle certain sizes.
|
I've spent more time looking for a solution, and I've found some good ones.
# The SSE2 way
On a modern x86 CPU, transposing a binary matrix can be done very efficiently with SSE2 instructions. Using such instructions it is possible to process a 16×8 matrix.
This solution is inspired by [this blog post by mischasan](https://mischasan.wordpress.com/2011/10/03/the-full-sse2-bit-matrix-transpose-routine/) and is vastly superior to every suggestion I've got so far to this question.
The idea is simple (a short sketch follows the list):
- `#include <emmintrin.h>`
- Pack 16 `uint8_t` variables into an `__m128i`
- Use `_mm_movemask_epi8` to get the MSBs of each byte, producing an `uint16_t`
- Use `_mm_slli_epi64` to shift the 128-bit register by one
- Repeat until you've got all 8 `uint16_t`s
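A minimal sketch of those steps (treat the exact output-row ordering as illustrative, since it depends on the bit-numbering convention you choose):
```
#include <emmintrin.h>
#include <stdint.h>

// Transpose 16 rows of 8 bits into 8 rows of 16 bits with SSE2.
static void transpose_16x8(const uint8_t in[16], uint16_t out[8]) {
    __m128i x = _mm_loadu_si128(reinterpret_cast<const __m128i*>(in)); // pack 16 bytes
    for (int bit = 7; bit >= 0; --bit) {
        out[bit] = static_cast<uint16_t>(_mm_movemask_epi8(x)); // MSB of every byte
        x = _mm_slli_epi64(x, 1);                               // expose the next bit
    }
}
```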
# A generic 32-bit solution
Unfortunately, I also need to make this work on ARM. After implementing the SSE2 version, it would be easy to just find the NEON equivalents, but the *Cortex-M* CPU (contrary to the *Cortex-A*) does not have SIMD capabilities, so NEON isn't too useful for me at the moment.
**NOTE**: Because the *Cortex-M* **doesn't have native 64-bit arithmetic**, I could not use the ideas in any answers that suggest doing it by treating an 8x8 block as a `uint64_t`. Most microcontrollers that have a *Cortex-M* CPU also don't have much memory, so I prefer to do all this without a lookup table.
After some thinking, the same algorithm can be implemented using plain 32-bit arithmetic and some clever coding. This way, I can work with 4×8 blocks at a time. It was suggested by a colleague and the magic lies in the way 32-bit multiplication works: you can find a 32-bit number to multiply with, so that the MSBs of the four bytes end up next to each other in the upper 32 bits of the result.
- Pack 4 `uint8_t`s in a 32-bit variable
- Mask the 1st bit of each byte (using `0x80808080`)
- Multiply it with `0x02040810`
- Take the 4 LSBs of the upper 32 bits of the multiplication
- Generally, you can mask the Nth bit in each byte (shift the mask right by N bits) and multiply with the magic number, shifted left by N bits. The advantage here is that if your compiler is smart enough to unroll the loop, both the mask and the 'magic number' become compile-time constants so shifting them does not incur any performance penalty whatsoever. There's some trouble with the last series of 4 bits, because then one LSB is lost, so in that case I needed to shift the input left by 8 bits and use the same method as the first series of 4-bits.
If you do this with two 4×8 blocks, then you can get an 8x8 block done and arrange the resulting bits so that everything goes into the right place.
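For reference, a hedged sketch of a single 4×8 step of that mask-and-multiply trick (the constants come straight from the description above; the row/bit ordering is again only illustrative, and `n == 7` needs the special handling mentioned at the end of the list):
```
#include <stdint.h>

// Gather bit n (counted from the MSB) of each of 4 packed rows into 4 adjacent bits.
static inline uint32_t gather_column(uint32_t four_rows, unsigned n) {
    uint32_t mask  = 0x80808080u >> n;           // isolate that bit in every byte
    uint32_t magic = 0x02040810u << n;           // the multiplier, shifted to match
    uint64_t prod  = (uint64_t)(four_rows & mask) * magic;
    return (uint32_t)(prod >> 32) & 0x0Fu;       // 4 LSBs of the upper 32 bits
}
```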
|
Jboss RestEasy - How do i extract binary data from a multipart mime MultipartInput using multipart/mixed
I have a MultipartInput that has two parts
- Part1 - XML string
- Part2 - Binary data (an image)
Here is an example of how i extract the data from the parts
```
@POST
@Path("/mixedMime")
@Consumes("multipart/mixed")
@ResponseStatus(HttpStatus.OK)
public String mixedMime(@Context ServletContext servletContext, MultipartInput input) throws Exception{
int index = 0;
String xmlText;
byte[] imageData;
for (InputPart part : input.getParts()) {
index++;
if(index==1){
//extract the xml text
xmlText = part.getBodyAsString();
}
if(index==2){
//extract the image data
imageData = part.getBody(???<<<<< WHAT GOES HERE >>>>???);
}
}
}
```
How would I extract the image data (binary data) shown above? I am using JBoss 7.0.2.
According to the documentation at <http://docs.jboss.org/resteasy/docs/2.3.0.GA/userguide/html/Multipart.html>, it says I need to specify a class. What class?
Thanks
## Edit
Sorry, I forgot to include how I am sending the data to the REST service. Here is the relevant code from the client.
Basically I add the XML file from the file system as the first part and an image as the second part.
```
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost("http://localhost:8080/RestTest/rest/mixedmime");
Scanner scanner =
new Scanner(new File("myXmlFile.xml")).useDelimiter("\\Z");
String messageHeader = scanner.next();
scanner.close();
FileBody bin = new FileBody(new File("dexter.jpg"));
StringBody header = new StringBody(messageHeader.toString());
MultipartEntity reqEntity = new MultipartEntity();
reqEntity.addPart("header", header);
reqEntity.addPart("payload", bin);
httppost.setEntity(reqEntity);
HttpResponse response = httpclient.execute(httppost);
```
|
Here's a simple example of how you would process binary (stream) data out of a multipart request with RESTEasy:
First, define a class to map your multipart form to:
```
public class DataUploadForm implements Serializable {
static final long serialVersionUID = 1L;
@FormParam("xml")
private String xml;
@FormParam("file")
private InputStream fileStream;
public DataUploadForm() {
super();
}
// Getters and setters here
}
```
Then on your web service interface, declare a method that handles multipart content and maps it to your custom class:
```
@POST
@Path("/somepath")
@Consumes({ MediaType.MULTIPART_FORM_DATA })
public Response uploadData(@MultipartForm DataUploadForm uploadForm);
```
And in your web service implementation, process the incoming request:
```
@Override
public Response uploadData(DataUploadForm uploadForm) {
System.out.printf("Incoming xml data: %s\n", uploadForm.getXML());
System.out.printf("Incoming binary data: %s\n", uploadForm.getFileStream());
// Processing the input stream. For example, by using Apache Commons IO
final byte[] data ;
try {
data = IOUtils.toByteArray(uploadForm.getFileStream());
} catch (IOException ioe) {
throw new WebApplicationException("Could not read uploaded binary data");
}
return Response.ok().build();
}
```
|
Writing clojure maps/arrays to a file and reading them back
I need to save clojure maps to a file and read them back later to process them.
This is what I could come up with. Is there a better way to accomplish the same thing?
```
user=> (def my-data (for [ a [ "Person1" "Person2" ] b [ "Address1" "Address2"] c (range 10) ] {:name a :address b :index c} ))
#'user/my-data
user=> (count my-data)
40
user=> (defn write-data[xs] (with-open [wrtr (clojure.java.io/writer "my-data.txt") ]
(doall (map #(.write wrtr (str % "\n")) xs))))
#'user/write-data
user=> (write-data my-data)
user=> (defn read-data[] (with-open [rdr (clojure.java.io/reader "my-data.txt") ]
(doall (map #(load-string %) (line-seq rdr)))))
#'user/read-data
user=> (count (read-data))
40
```
|
It is easiest to read a single form to and from the file, so I usually put my data into a vector.
I also prefer to use `pr` or `pr-str` rather than `print` because it is guaranteed to produce readable data.
```
(def my-data [{:a "foo" :b [1 2 3]} "asdf" 42 #{1 2 3}])
(spit "/tmp/data.edn" (with-out-str (pr my-data)))
nil
(read-string (slurp "/tmp/data.edn"))
[{:a "foo", :b [1 2 3]} "asdf" 42 #{1 2 3}]
```
vs:
```
(spit "/tmp/data.edn" (with-out-str (print my-data)))
(read-string (slurp "/tmp/data.edn"))
[{:a foo, :b [1 2 3]} asdf 42 #{1 2 3}]
```
Notice how the string `"asdf"` was read back as a symbol.
`.toString` also works fine:
```
(spit "/tmp/data.edn" (.toString my-data))
(read-string (slurp "/tmp/data.edn"))
[{:a "foo", :b [1 2 3]} "asdf" 42 #{1 2 3}]
```
|
How to check if value is in a list or if the list is empty?
I'm using [`psycopg2`](http://initd.org/psycopg/) to access a PostgreSQL database through Python 3, and I'm attempting to make a query where I want to select all users whose names are in a list, *if the list is not empty*. If the provided list is empty, I want to ignore the condition, i.e. select all users regardless of their name.
I've already tried the following three calls:
```
# Using list
cursor.execute(
"SELECT age FROM user WHERE %(names) = '{}' OR user.name IN %(names)s",
{'names': []},
)
# Using tuple
cursor.execute(
"SELECT age FROM user WHERE %(names) = () OR user.name IN %(names)s",
{'names': ()},
)
# Using both list and tuple
cursor.execute(
"SELECT age FROM user WHERE %(names_l) = '{}' OR user.name IN %(names_t)s",
{'names_l': [], 'names_t': ()},
)
```
But they all raise an invalid syntax error from one point or another:
```
# Using list
psycopg2.ProgrammingError: syntax error at or near "'{}'"
LINE 17: user.name IN '{}'
# Using tuple
psycopg2.ProgrammingError: syntax error at or near ")"
LINE 16: () == ()
# Using both list and tuple
psycopg2.ProgrammingError: syntax error at or near ")"
LINE 17: user.name IN ()
```
|
For optional parameters you want a SQL `where` clause like:
```
where column = :parameter or :parameter is null
```
With the above, when the parameter `is null` all rows will be returned; otherwise only those meeting the condition.
Psycopg adapts a Python `list` to a Postgresql `array`. To check if any of the Postgresql `array` values is equal to a certain value:
```
where column = any (array[value1, value2])
```
To get a Python `None`, which is adapted to a Postgresql `null`, from an empty Python `list`:
```
parameter = [] or None
```
Passing a `dictionary` to the `cursor.execute` method avoids parameter repetition in the parameters argument:
```
names = ['John','Mary']
query = """
select age
from user
where user.name = any (%(names)s) or %(names)s is null
"""
print (cursor.mogrify(query, {'names': names or None}).decode('utf8'))
#cursor.execute(query, {'names': names or None})
```
Output:
```
select age
from user
where user.name = any (ARRAY['John', 'Mary']) or ARRAY['John', 'Mary'] is null
```
When the list is empty:
```
select age
from user
where user.name = any (NULL) or NULL is null
```
<http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries>
|
Permission for camera not working in android app React Native
No permission prompt is shown; it just denies the permission. I have also made the targetSdkVersion and compileSdkVersion the same.
My function on 'Request' Button:
```
try {
const granted = await PermissionsAndroid.request(
PermissionsAndroid.PERMISSIONS.CAMERA,
{
'title': 'Cool Photo App Camera Permission',
'message': 'Cool Photo App needs access to your camera ' +
'so you can take awesome pictures.'
}
)
if (granted === PermissionsAndroid.RESULTS.GRANTED) {
alert("You can use the camera")
} else if (PermissionsAndroid.RESULTS.DENIED){
console.log("Camera permission denied")
}
} catch (err) {
console.warn(err)
}
```
I expect a prompt message, and by tapping 'Yes' it should grant the permission, but no prompt is shown.
|
You also need to add the permissions to the Manifest file for Android or the pList in iOS.
For Android:
Add this to your Manifest:
```
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
```
<https://developer.android.com/reference/android/hardware/Camera>
For iOS
Take a look here: [iOS 10 - Changes in asking permissions of Camera, microphone and Photo Library causing application to crash](https://stackoverflow.com/questions/38498275/ios-10-changes-in-asking-permissions-of-camera-microphone-and-photo-library-c)
|
JavaFX entirely customized windows?
I would like to have an application where I've entirely customized the window's appearance. So far I've learned that I can remove the typical window stuff with:
```
class Application extends javafx.application.Application {
/**
* Starts the application.
*
* @param stage
*/
override def start(stage: Stage) {
stage.initStyle(StageStyle.TRANSPARENT)
// Load the main window view.
val loader = new FXMLLoader()
loader.setLocation(getClass.getResource("/com/myproj/application/MainWindow.fxml"))
val root = loader.load().asInstanceOf[Parent]
val scene: Scene = new Scene(root, Color.TRANSPARENT)
stage.setScene(scene)
stage.show()
}
}
```
Everything else works fine, except that window dragging, double-click-to-maximize, dragging to the screen's top edge on Windows (which should activate maximizing), etc. are gone: the native window capabilities are missing entirely.
Can I somehow rather easily customize the entire appearance of the window without losing all these nice capabilities?
I'm talking about something like Adobe Photoshop which looks entirely different but still retains these features (or implements them on top of their UI manually).
It would be a start if I could at least implement dragging + window buttons for starters. I am targeting Linux, Mac and Windows here.
|
See the customized window appearance and handling in the [Ensemble Sample application](http://www.oracle.com/technetwork/java/javase/overview/javafx-samples-2158687.html), which includes source code. Download the [source](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html), build it and run it as a standalone application rather than embedded in a browser. It isn't going to be exactly what you are asking for because stuff like dragging to screen top edge to activate maximizing isn't going to work I think, but it should be very close and you could always code something yourself which maximized the window when it was dragged near the top edge. The Ensemble window has features like custom resize decorations, minimize, maximize, close icons, an area at the top of the window you can use to drag the window around or double click to maximize or minimize the window - i.e. most of the standard features you would expect from a desktop windowing system.
To get something even closer to what you are asking, perhaps you could hack something together by creating two windows. One, a standard decorated stage window which includes screen borders, the other an undecorated or transparent child stage always displayed on top of the main window and overlaying the borders of the main window with a custom rendering. I think you may run into difficulties trying to implement this approach, so I wouldn't really recommend it.
You may like to try an UNDECORATED stage style rather than TRANSPARENT and see if you get better native Windows integration with that.
There are some open feature request currently scheduled for JavaFX to be shipped with JDK8, [Windows: support Aero Glass effects for top-level windows](http://javafx-jira.kenai.com/browse/RT-20020), [Mac: Support NSTexturedBackgroundWindowMask style for windows](http://javafx-jira.kenai.com/browse/RT-19988) and [The solid white background created in a Stage should be created - if needed - in the Scenegraph](http://javafx-jira.kenai.com/browse/RT-19834), which, when implemented, will likely help you to acheive your goal - vote for them, if such features are important to you.

Also checkout [VFXWindows](http://mihosoft.eu/?p=365) which is an open source windowing framework for JavaFX.
*Update*
Also related is the [Undecorator](http://arnaudnouard.wordpress.com/2013/02/02/undecorator-add-a-better-look-to-your-javafx-stages-part-i/) project which allows you to easily create a JavaFX stage with standard minimize/maximize/close/resize chrome controls that are rendered via the JavaFX engine rather than the OS windowing system. This allows you to achieve the kind of custom control over window rendering that an application like Ensemble displays.
|
Echo new line and string beginning \t
Sure, echo -e can be used so that \n is understood as a new line. The problem is when I want to echo something beginning with \t e.g. "\test".
So let's say I want to perform `echo -e "test\n\\test"`. I expect this to output:
>
>
> ```
> test
> \test
>
> ```
>
>
But instead outputs:
>
>
> ```
> test
> est
>
> ```
>
>
The `\\t` is being interpreted as a tab instead of a literal \t. Is there a clean workaround for this issue?
|
```
echo -e "\\t"
```
passes `\t` to `echo` because backslash is special inside double-quotes in `bash`. It serves as an *escaping* (quoting) operator. In `\\`, it escapes itself.
You can either do:
```
echo -e "\\\\t"
```
for `echo` to be passed `\\t` (`echo -e "\\\t"` would also do), or you could use single quotes within which `\` is not special:
```
echo -e '\t'
```
Now, [`echo` is a very unportable command](/q/65803). Even in `bash`, its behaviour can depend on the environment. I'd would advise to avoid it and use `printf` instead, with which you can do:
```
printf 'test\n\\test\n'
```
Or even decide which parts undergo those escape sequence expansions:
```
printf 'test\n%s\n' '\test'
```
Or:
```
printf '%b%s\n' 'test\n' '\test'
```
`%b` understands the same escape sequences as `echo` (some `echo`s), while the first argument to `printf`, the format, also understands sequences, but in a slightly different way than `echo` (more like what is done in other languages). In any case `\n` is understood by both.
|
Circular linked list
Please review this:
```
#include <iostream>
using namespace std;
template <class T>
struct Node
{
T data;
Node * next;
Node(T data) : data(data), next(NULL) {}
};
template <class T>
class CircularLinkedList
{
public:
CircularLinkedList() : head(NULL) {}
~CircularLinkedList();
void addNode(T data);
void deleteNode(T data);
template <class U>
friend std::ostream & operator<<(std::ostream & os, const CircularLinkedList<U> & cll);
private:
Node<T> * head;
};
template <class T>
CircularLinkedList<T>::~CircularLinkedList()
{
if (head)
{
Node<T> * tmp = head;
while (tmp->next != head)
{
Node<T> * t = tmp;
tmp = tmp->next;
delete(t);
}
delete tmp;
head = NULL;
}
}
template <class T>
void CircularLinkedList<T>::addNode(T data)
{
Node<T> * t = new Node<T>(data);
if (head == NULL)
{
t->next = t;
head = t;
return;
}
Node<T> * tmp = head;
while (tmp->next != head)
{
tmp = tmp->next;
}
tmp->next = t;
t->next = head;
}
template <class T>
void CircularLinkedList<T>::deleteNode(T data)
{
Node<T> * tmp = head;
Node<T> * prev = NULL;
while (tmp->next != head)
{
if (tmp->data == data) break;
prev = tmp;
tmp = tmp->next;
}
if (tmp == head)
{
while (tmp->next != head)
{
tmp = tmp->next;
}
tmp->next = head->next;
delete head;
head = tmp->next;
}
else
{
prev->next = tmp->next;
delete tmp;
}
}
template <class U>
std::ostream & operator<<(std::ostream & os, const CircularLinkedList<U> & cll)
{
Node<U> * head = cll.head;
if (head)
{
Node<U> * tmp = head;
while (tmp->next != head)
{
os << tmp->data << " ";
tmp = tmp->next;
}
os << tmp->data;
}
return os;
}
int main()
{
CircularLinkedList<int> cll;
cll.addNode(1);
cll.addNode(2);
cll.addNode(3);
cll.addNode(4);
cll.addNode(5);
cout << cll << endl;
cll.deleteNode(3);
cll.deleteNode(1);
cll.deleteNode(5);
cout << cll << endl;
return 0;
}
```
|
My first comment is you should start looking at smart pointers.
They will definitely simplify your code.
The second comment: look at the [STL containers](http://www.sgi.com/tech/stl/Container.html).
Containers in C++ follow a specific pattern. The idea is that we want to use algorithms on containers interchangeably and thus containers follow these conventions to make them easy to use with the standard algorithms.
Third comment: you fail to follow the [rule of three](https://stackoverflow.com/questions/4172722/what-is-the-rule-of-three).
Basically, if you define one of 'Destructor'/'Copy-Constructor'/'Assignment-Operator' (rather than using the compiler generated defaults) then you probably need to define all three. In this case your code is going to crash if you make a copy of the list. The problem arises from having an owned pointer in your code.
Fourth comment: prefer to pass parameters by const reference. For simple objects like integers it will not make any difference. But you have templatised the code and T can be any type. Thus passing by value is going to cause a copy (which may be expensive).
Reviewing the above code:
The complexity of adding a node is O(n). By maintaining both a head and a tail pointer you can change this to a complexity of O(1), i.e. this loop becomes unnecessary:
```
Node<T> * tmp = head;
while (tmp->next != head)
{
tmp = tmp->next;
}
```
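For example, with an additional `Node<T>* tail` member (hypothetical, not part of the posted class) `addNode` could become O(1):
```
template <class T>
void CircularLinkedList<T>::addNode(T data)
{
    Node<T> * t = new Node<T>(data);
    if (head == NULL)
    {
        t->next = t;
        head = tail = t; // a single element points at itself
        return;
    }
    tail->next = t;      // append after the current tail
    t->next = head;      // keep the list circular
    tail = t;            // the new node becomes the tail
}
```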
You have an output operator `std::ostream & operator<<(std::ostream & os, const CircularLinkedList<U> & cll)` It would be nice if you had a symmetric input operator. In this situation you need to worry about the size of the list (or adding a special terminating character). So the creation of the input operator will affect your design of the output operator.
|
What exactly does this do: exec > > (command)
I saw [here](https://stackoverflow.com/questions/5955577/bash-automatically-capture-output-of-last-executed-command-into-a-variable/6052267#6052267) and [here too](https://superuser.com/questions/86915/force-bash-script-to-use-tee-without-piping-from-the-command-line/86955#86955) the following construction:
```
exec > >(tee -a script.log)
```
I know what the tee command is, and `(command...)` usually means execute the `command` in a subshell, and exec replaces the current shell with a program, like `exec ls` (but here there is no command). Additionally, what is meant by the `> >`?
Can anybody clarify this dark wizardry?
```
exec >{space}> (command)
```
@Seth? :) Any pointer to where I can read more about this magic would be appreciated. :)
|
It replaces the current `bash` session with another, and writes the output of all commands to `script.log`.
In that way, you can use your `bash` shell normally, and you wouldn't see any difference (mostly), but all output is going to show up on your screen **and** in the `script.log` file.
From [exec manpages](http://ss64.com/bash/exec.html):
>
> If command is supplied, it replaces the shell without creating a new process. If no command is specified, redirections may be used to affect the current shell environment.
>
>
>
The `>(tee -a script.log)` magic creates a pipe, so instead of writing to a file like we would (with `>> script.log` in this case), we write to the process `tee -a script.log`, which does the same. For some reason unbeknown to me, using `>>` does not work, but writing to the named pipe works. [Source here](http://www.linuxtutorialblog.com/post/tutorial-the-best-tips-tricks-for-bash)
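As a small illustration (a sketch; the file name is just an example):
```
#!/bin/bash
# from here on, everything written to stdout goes to the terminal AND to script.log
exec > >(tee -a script.log)

echo "this shows up on screen and is appended to script.log"
date
```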
|
Is there an easy way of seeing PHP info?
Each time I want to see the `phpinfo();` I have to:
- Create a info.php file;
- Write `phpinfo();` in it.
- Go to the browser and type my "thisproject.dev/info.php"
I'm on Ubuntu.
Isn't there a more practical way to see phpinfo in the browser?
|
From your command line you can run..
```
php -i
```
I know it's not the browser window, but you can't see the `phpinfo();` contents without making the function call. Obviously, the best approach would be to have a phpinfo script in the root of your web server directory, that way you have access to it at all times via `http://localhost/info.php` or something similar (NOTE: don't do this in a production environment or somewhere that is publicly accessible)
EDIT: As mentioned by binaryLV, it's quite common to have two versions of a php.ini per installation. One for the command line interface (CLI) and the other for the web server interface. If you want to see phpinfo output for your web server make sure you specify the ini file path, for example...
```
php -c /etc/php/apache2/php.ini -i
```
|
How does this trick type-check?
Reading this blog post – <https://www.haskellforall.com/2021/05/the-trick-to-avoid-deeply-nested-error.html> – I realised I don't understand why the 'trick' actually works in this situation:
```
{-# LANGUAGE NamedFieldPuns #-}
import Text.Read (readMaybe)
data Person = Person { age :: Int, alive :: Bool } deriving (Show)
example :: String -> String -> Either String Person
example ageString aliveString = do
age <- case readMaybe ageString of
Nothing -> Left "Invalid age string"
Just age -> pure age
if age < 0
then Left "Negative age"
else pure ()
alive <- case readMaybe aliveString of
Nothing -> Left "Invalid alive string"
Just alive -> pure alive
pure Person{ age, alive }
```
Specifically I'm struggling to understand why this bit
```
if age < 0
then Left "Negative age"
else pure ()
```
type checks.
`Left "Negative age"` has a type of `Either String b`
while
`pure ()` is of type `Either a ()`
Why does this work the way it does?
---
EDIT: I simplified and re-wrote the code into bind operations instead of `do` block, and then saw Will's edit to his already excellent answer:
```
{-# LANGUAGE NamedFieldPuns #-}
import Text.Read (readMaybe)
newtype Person = Person { age :: Int} deriving (Show)
example :: String -> Either String Person
example ageString =
getAge ageString
>>= (\age -> checkAge age
>>= (\()-> createPerson age))
getAge :: Read b => String -> Either [Char] b
getAge ageString = case readMaybe ageString of
Nothing -> Left "Invalid age string"
Just age -> pure age
checkAge :: (Ord a, Num a) => a -> Either [Char] ()
checkAge a = if a < 0
then Left "Negative age"
else pure ()
createPerson :: Applicative f => Int -> f Person
createPerson a = pure Person { age = a }
```
I think this makes the 'trick' of passing the `()` through binds much more visible - the values are taken from an outer scope, while `Left` indeed short-circuits the processing.
|
It typechecks because `Either String b` and `Either a ()` unify successfully, with `String ~ a` and `b ~ ()`:
```
Either String b
Either a ()
------------------
Either String () a ~ String, b ~ ()
```
It appears in the `do` block of type `Either String Person`, so it's OK, since it's the same monad, `Either`, with the same "error signal" type, `String`.
It appears in the *middle* of the `do` block, and there's no value "extraction". So it serves as a guard.
It goes like this: if it was `Right y`, then the `do` block's translation is
```
Right y >>= (\ _ -> .....)
```
and the computation continues inside `.....` with the `y` value ignored. But if it was `Left x`, then
```
Left x >>= _ = Left x
```
according to the definition of `>>=` for `Either`. Crucially, the `Left x` on the right is ***not*** the same value as `Left x` on the left. The one on the left has type `Either String ()`; the one on the right has type `Either String Person` indeed, as demanded by the return type of the `do` block overall.
The two `Left x` are *two* different values, each with its own specific type. The `x :: String` is the same, of course.
|
How to get information about a particular Dask task
I'm running into a problem whereby my distributed cluster appears to "hang" - e.g. tasks stop processing and hence a backlog of unprocessed tasks builds up so I'm looking for some way to help debug what's going on.
On the `Client` there's the `processing` method which will tell me what tasks are currently running on each worker but AFAICS that's the only info about the tasks available on the `Client` object?
What I'd like to do is to be able to query not just processing tasks, but all tasks including processed, processing and errored, and for each task to be able to get some statistics such as `submitted_time` and `completion_time` which would allow me to find out what tasks are blocking the cluster.
This would be similar to the extended metadata on the [`ipyparallel.AsyncResult`](https://ipyparallel.readthedocs.io/en/latest/details.html#extended-interface)
A nice to have would be to to be able to get the `args/kwargs` for any give task too. This would be especially helpful in debugging failed tasks.
Is any of this functionality available currently or is there any way to get the info I'm after?
Any other suggestions on how to debug the problem would be greatly welcomed.
|
As of May 2017 no explicit "give me all of the information about a task" operation exists. However, you can use the client to investigate task state directly. This will require you to dive a bit into the information that the scheduler and worker track. See the following doc pages:
- <http://distributed.readthedocs.io/en/latest/scheduling-state.html>
- <http://distributed.readthedocs.io/en/latest/worker.html#api-documentation>
To query this state I would use the [Client.run_on_scheduler](http://distributed.readthedocs.io/en/latest/api.html#distributed.client.Client.run_on_scheduler) and [Client.run](http://distributed.readthedocs.io/en/latest/api.html#distributed.client.Client.run) methods. These take a function to run on the scheduler or workers respectively. If this function includes a `dask_scheduler` or `dask_worker` argument then the function will be given the scheduler or worker object itself.
```
def f(dask_scheduler):
return dask_scheduler.task_state
client.run_on_scheduler(f)
```
You now have access to check any state that the scheduler or workers know about and to run any internal diagnostic checks. What you choose to investigate though depends entirely on your use case.
```
def f(keys, dask_scheduler=None):
return dask_scheduler.transition_story(*keys)
client.run_on_scheduler(f, [key1, key2, key3])
```
|
Use Eureka despite having random external port of docker containers
I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use Docker for the services; however, this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports at the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want in order to write an efficient microservice architecture, and conceptually I really like Docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example <https://localhost:8080/> but due to dynamic port assignment it is really only accessible via <https://localhost:54321/>
So eureka will return the wrong URL for the services.
**UPDATE**
I have updated my answer below, so have a look there.
|
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start docker with "--net=host" (host networking), then you use the host's network stack directly. Then I just use 0 as the port for spring-boot and Spring randomizes the port for me, and as it's using the host's networking stack there is no translation to a different port (and IP).
There are some drawbacks though:
- When you use host networking you can't use the link-feature for these containers as link source or target.
- Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
**A lot of time has passed** and I think I should elaborate this a little bit further:
1. If you use docker to host your spring application, just don't use a random port! Use a fixed port because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier.
2. If you have a public facing service then you would use a fixed port anyway.
3. For local starts via maven or for example the command line have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are or have been a few bugs surrounding random ports and service registration)
4. if for whatever reason you want to or need to use host networking you can use randomized ports of course, but most of the time you shouldn't!
|
BlockReentrancy in ObservableCollection
Could someone please be kind enough to explain to me what the purpose of the `BlockReentrancy` Method is in the `ObservableCollection<T>` ?
[MSDN](http://msdn.microsoft.com/en-us/library/ms654919.aspx) shows the following as an example:
```
//The typical usage is to wrap an OnCollectionChanged call within a using scope, as in the following example:
using (BlockReentrancy())
{
// OnCollectionChanged call
}
```
But this doesn't seem to clarify for me what the purpose is. Anyone care to explain?
|
An `ObservableCollection` implements `INotifyCollectionChanged` and so it has a `CollectionChanged` event. If there is a subscriber to this event, they could **further modify** the collection while the collection is already in the process of notification. Since the `CollectionChanged` event keeps track of exactly what changed, this interaction can get very messy.
As a result, the `ObservableCollection` allows, as a special case, a single subscriber of the `CollectionChanged` event to modify the collection from its handler. But it **disallows** modifying the collection from the `CollectionChanged` handler **if there are two or more subscribers** to the `CollectionChanged` event.
The pair of methods `BlockReentrancy` and `CheckReentrancy` are used to implement this logic. `BlockReentrancy` is used at the start of the `OnCollectionChanged` method and `CheckReentrancy` is used in all methods that modify the collection.
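To make the pairing concrete, here is a hedged sketch of a derived collection that uses both methods; the base class already does this internally, so the sketch only illustrates the pattern:
```
using System.Collections.ObjectModel;
using System.Collections.Specialized;

public class GuardedCollection<T> : ObservableCollection<T>
{
    protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
    {
        // block further modifications from handlers while this notification is in flight
        using (BlockReentrancy())
        {
            base.OnCollectionChanged(e);
        }
    }

    protected override void InsertItem(int index, T item)
    {
        // throws if called from a CollectionChanged handler while two or more handlers are subscribed
        CheckReentrancy();
        base.InsertItem(index, item);
    }
}
```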
|
"Array Subscript is not an integer" c
I am trying to do matrix multiplication using for loops and I am getting an error, "Array Subscript is not an integer" Could I get some help please.
```
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
float Matrix_1[3][3] = {{3.4, 4.4, 1.2},{5.3, 5.7, 2.2},{6.2, -2.4, 0.9}};
float Matrix_2[3][3] = {{7.3, 4.9, 3.7},{-2.4, 4.9, -10.2},{7.3, 5.2, 1.7}};
float i, j, k;
float result[3][3];
for (i = 0; i < 1; i++)
{
for (j = 0; j < 3; j++)
{
for(k = 0; k < 3; k++)
{
result = result + Matrix_1[i][k] * Matrix_2[k][j];
}
}
}
printf("The result of multiplying the matrices together\n");
printf("%.3f\t%.3f\t%.3f\n",result[0][0],result[0][1],result[0][2]);
printf("%.3f\t%.3f\t%.3f\n",result[1][0],result[1][1],result[1][2]);
printf("%.3f\t%.3f\t%.3f\n",result[2][0],result[2][1],result[2][2]);
system("PAUSE");
return 0;
}
```
|
The subscripts, or array indices, need to be `int` values. Therefore, change this:
`float i, j, k;`
to this: `int i, j, k;`
Also, you have declared `result` as a 3x3 matrix. So, when storing data into the matrix, you have to store it element by element. So, instead of this:
`result = result + Matrix_1[i][k] * Matrix_2[k][j];` change it to this: `result[i][j] = result[i][j] + Matrix_1[i][k] * Matrix_2[k][j];`
One more thing: initialise the `result` matrix!! In the above statement, you are using the values of its elements, which may be holding garbage values, so you may not get the desired result. So, before using the `result` matrix, initialise it.
A simple way:
```
for(i=0;i<3;i++)
{
for(j=0;j<3;j++)
result[i][j]=0;
}
```
One last thing: your outermost `for` loop runs only once: `for (i = 0; i < 1; i++)`
Probably you wanted this: `for (i = 0; i < 3; i++)`
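Putting the fixes together, a minimal sketch of the corrected loops (sizes hard-coded to 3, as in the question) would be:

```
int i, j, k;
float result[3][3] = {{0}}; /* zero-initialise the result matrix */

for (i = 0; i < 3; i++)
    for (j = 0; j < 3; j++)
        for (k = 0; k < 3; k++)
            result[i][j] += Matrix_1[i][k] * Matrix_2[k][j];
```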
Finally, the modified code: <http://ideone.com/26GSJa>
|
Hardware Drivers on Servers, update or not?
I recently got into a debate with a coworker about whether or not to update hardware drivers (such as NICs or RAID controllers). I have always followed the motto, "If it ain't broke, don't fix it", and kept the default driver that shipped with the system. However, I recently started at a new company where the other sysadmin seems to always update to the latest versions of drivers on servers - at least on their initial build. He said he used to follow my motto, but Dell has convinced him to do otherwise. To me this doesn't make sense: if you are going to update drivers, then you need to constantly be on the lookout for newer drivers, which can be done with management software, but you never know when a new bug will be introduced.
I can't seem to find a whitepaper or anything designating an official standard; should drivers on servers be updated or not?
|
There probably isn't an official standard. However, since I'm involved in security and IT audit, I can tell you that the "If it ain't broke, don't fix it" isn't always a good approach.
Now I won't say you should update all of your hardware instantly, unless the manufacturer recommends you to. Obviously you can't just go upgrading RAID controllers or BIOS on production machines at any time.
First of all, and this goes for any software/hardware update, check what the update actually does. Sometimes the updates in the controllers fix a certain incompatibility with a certain motherboard or hard disk. Now if you aren't experiencing any issues and you reckon there isn't anything in the update that should benefit you, then by all means do not update unless you actually have to take the machine down for maintenance.
However if the update specifies that it fixes an issue where you randomly could encounter things failing, I would schedule a maintenance window as soon as possible.
If the update is meant to fix a critical vulnerability, then you should instantly push it through change management and try to roll the update out in production as soon as possible.
This doesn't immediately apply to RAID, as far as I know I haven't seen too many exploits for RAID cards, but it's a good idea to update your machines. You shouldn't do it instantly if you haven't got to or when there is no benefit, but otherwise, by all means update your firmware.
Since drivers are also a form of software, they should go through normal patch management. Here are some resources you can read up on:
- <http://www.patchmanagement.org/pmessentials.asp>
- <http://www.sans.org/reading_room/whitepapers/iso17799/patch-management_2064>
|
jquery ui .autocomplete() not firing when same letter entered after blur()
Example code here: <http://jsbin.com/etaziy> (using <http://jqueryui.com/demos/autocomplete/>)
If you type in a 'j' you get the 3 names, 'John', 'Jack' and 'Joe'. If you then blur away, I clear the input, just by setting the input's val(''). Then if you go back to the input and type a 'j' again, nothing happens. It's not until you type a second matching letter that you get the popup showing.
My example might seem a bit odd, but essentially this is a really cut back version of what I'm working on. I need to clear the input after a blur(), and clear it after a selection is made. Doing this is making subsequent selections look buggy.
I'm partly thinking this is intended functionality, but for my purpose, it's not what I want. I really need the popup with the list to show as soon as any letter is typed.
Cheers.
|
The problem is that the autocomplete keeps track of the value it is matching on internally, so when you first type `j` it sets this internal value to 'j' and then it wasn't being reset to the empty string when you changed the input to be empty.
After looking at the autocomplete source I wasn't able to find out how to directly access the internal variable, but you can force the autocomplete to update it for you by running another search once you've changed the input to be empty (and with a length of zero it won't actually do the search).
```
$(function() {
$("#search").autocomplete({
source: data,
change: function() { // Triggered when the field is blurred, if the value has changed; ui.item refers to the selected item.
$("#search").val("");
$("#search").autocomplete("search", ""); //THIS IS THE NEW LINE THAT MAKES IT HAPPY
}
});
});
```
|
Eliminate duplicate pages from pdf
I have a pdf document with over 200 duplicate pages among the total 900 of the document. When there is a duplicate, it appears immediately after the original.
Maybe with `pdftk` the job can be done, but I need some way to find out the duplicates...
|
`comparepdf` is a command line tool for comparing PDFs. The exit code is `0` if the files are identical and non-zero otherwise. You may compare by text content or visually (interesting for e.g. scans):
```
comparepdf 1.pdf 2.pdf
comparepdf -ca 1.pdf 2.pdf #compare appearance instead of text
```
So what you could do is explode the PDF, then compare pairwise and delete accordingly:
```
#!/bin/bash
#explode pdf
pdftk original.pdf burst
#compare 900 pages pairwise
for (( i=1 ; i<=899 ; i++ )) ; do
#pdftk's naming is pg_0001.pdf, pg_0002.pdf etc.
pdf1=pg_$(printf %04d $i).pdf
pdf2=pg_$(printf %04d $((i+1))).pdf
#Remove the first file if they match. The loop index is not advanced, so runs of three or more consecutive identical pages are still handled
if comparepdf $pdf1 $pdf2 ; then
rm $pdf1
fi
done
#reunite in sorted order:
pdftk $(find -name 'pg_*.pdf' | sort ) cat output new.pdf
```
---
EDIT: Following @notautogenerated's remark, one might be better off selecting pages from the original file instead of reuniting single-page PDFs. After the pairwise comparison is done, one could do the following:
```
pdftk original.pdf cat $(find -name 'pg_*.pdf' |
awk -F '[._]' '{printf "%d\n",$3}' |
sort -n ) output new.pdf
```
|
How to OSGIfy a library
I'm working on an integration project where we are using Apache Camel and Apache Karaf.
In the project, I need to use the [Jira REST Java client library.](https://studio.atlassian.com/wiki/display/JRJC/Home)
So I've read quite a lot of various articles and threads about how to wrap non-OSGI library to OSGI bundle, but I'm really not sure if I got it right.
So, I've created a POM file with a dependency on the needed library, made a package and tried to deploy it to Karaf; of course, Karaf complained about missing packages.
So, I've found corresponding maven dependency, added it, package goes into `<Import-Package>` and dependency into `<Embed-Dependency>`.
Another round, deploy, find dependency, add, ... and again, and again, until Karaf is fine with the bundle.
Is that really correct? It seems quite crazy to me, so I guess I didn't get it right, as usual :)
Finally, the package got stable on my work computer. I checked it quickly and went home, where I continued, but, strangely, the same POM / package compiled on my personal computer is not working, again complaining about a missing package. This time, though, the package is for sure in the POM file and for sure it is embedded in the bundle, I can see it there.
This missing package is this time org.apache.commons.codec.
```
org.osgi.framework.BundleException: Unresolved constraint in bundle jiraclient.bundle [134]: Unable to resolve 134.0: missing requirement [134.0] osgi.wiring.package; (osgi.wiring.package=org.apache.commons.codec)
at org.apache.felix.framework.Felix.resolveBundleRevision(Felix.java:3826)[org.apache.felix.framework-4.0.3.jar:]
at org.apache.felix.framework.Felix.startBundle(Felix.java:1868)[org.apache.felix.framework-4.0.3.jar:]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:944)[org.apache.felix.framework-4.0.3.jar:]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1247)[6:org.apache.felix.fileinstall:3.2.6]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundles(DirectoryWatcher.java:1219)[6:org.apache.felix.fileinstall:3.2.6]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startAllBundles(DirectoryWatcher.java:1208)[6:org.apache.felix.fileinstall:3.2.6]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:503)[6:org.apache.felix.fileinstall:3.2.6]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:291)[6:org.apache.felix.fileinstall:3.2.6]
```
So, now I'm totally confused, what is wrong :(
Pretty please, guys, help me. Thanks!
The POM file is long, so I guess link is better: <http://pastebin.com/j5cmWveG>
|
Yes, OSGi is IMHO "far from easy to use" in terms of its deployment model, requiring 100% bundles with OSGi metadata in MANIFEST.MF files. And you need a PhD in mathematics to understand the BND tool. And unfortunately many JARs are not OSGi bundles.
Looking at your pom.xml file with all the imports|exports and that "not easy to understand" syntax, it would take just 5 seconds for any average engineer to see that "something is rotten in the state of Denmark"; e.g. OSGi != the world we live in. This must and should be easier IMHO.
You can install a plain JAR in Karaf using the **wrap** url handler:
<http://karaf.apache.org/manual/latest/developers-guide/creating-bundles.html>
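For example (the Maven coordinates below are only illustrative), from the Karaf shell you can install a plain JAR wrapped on the fly; on Karaf 2.x the command is `osgi:install`, on newer versions it is `bundle:install`:

```
karaf@root> osgi:install -s wrap:mvn:commons-lang/commons-lang/2.4
```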
Another trick is to create an **uber JAR**, eg to put it all in a single JAR file and then you can deploy that.
There is also **FAB** (Fuse Bundles) which makes OSGi deployment easier, as it handles much of this craziness for you at deploy time, instead of you having to deal with the OSGi MANIFEST.MF madness: <http://www.davsclaus.com/2012/08/osgi-deployment-made-easy-with-fab.html>
|
Well-formed program containing an ill-formed template member function?
In the snippet below, I'm puzzled about why the definition of `Wrapper::f() const` does not make my program ill-formed1 although it calls a non-const member function of a non-mutable member variable:
```
// well-formed program (???)
// build with: g++ -std=c++17 -Wall -Wextra -Werror -pedantic
template<class T> struct Data { void f() {} };
template<class T> struct Wrapper
{
Data<T> _data;
void f() const { _data.f(); } // _data.f(): non-const!
};
int main()
{
Wrapper<void> w; // no error in instantiation point?
(void) w;
}
```
`[demo](http://coliru.stacked-crooked.com/a/8cd26e2129d114b5)`2
On the other hand, if `Data` is a non template class3, a diagnostic is issued by my compiler:
```
// ill-formed program (as expected)
// build with: g++ -std=c++17 -Wall -Wextra -Werror -pedantic
struct Data { void f() {} };
template<class T> struct Wrapper
{
Data _data;
void f() const { _data.f(); } //error: no matching function for call to 'Data::f() const'
};
int main()
{
Wrapper<void> w;
(void) w;
}
```
`[demo](http://coliru.stacked-crooked.com/a/03742f915020763c)`
I feel like the answer will contain expressions such as "deduced context" ... but I really cannot pin down the exact part of the standard specifying this behaviour.
**Is there a language lawyer to enlighten me on the matter?**
---
Notes:
1) But I get an error if I try and effectively *call* `Wrapper<T>::f() const`.
2) I've compiled with `-std=c++17` but this is not specific to C++17, hence no specific tag.
3) In [this answer](https://stackoverflow.com/a/31328843/5470596), @Baum mit Augen quotes `[N4140, 14.7.1(2)]`:
>
> the specialization of the member is implicitly instantiated when the specialization is referenced in a context that requires the member definition to exist
>
>
>
but here in the compiling snippet (#2) `void f() const { _data.f(); }` fails although its *"specialization is **never** referenced in a context that requires the member definition to exist"*.
|
Snippet #2 is *ill-formed*.
As already stated in [this answer](https://stackoverflow.com/questions/10323980/templates-compilation-gcc-vs-vs2010), the template definition of `Wrapper::f` is well-formed (thus no diagnostics are issued) as long as a valid specialization can be *generated*.
[§17.7/8 [temp.res]](http://eel.is/c++draft/temp.res#8) states:
>
> Knowing which names are type names allows the syntax of every template
> to be checked. The program is ill-formed, no diagnostic required, if:
>
>
> - no valid specialization can be generated for a template or a substatement of a constexpr if statement within a template and the
> template is not instantiated, or [...]
>
>
>
In neither of the two code snippets is `Wrapper<void>::f` instantiated, because of the rules in [§17.7.1/2 [temp.inst]](http://eel.is/c++draft/temp.inst#2):
>
> The implicit instantiation of a class template specialization causes
> the implicit instantiation of the *declarations*, but not of the
> definitions, [...].
>
>
>
(emphasizing done by me)
But now §17.7/8 kicks in: if there is no instantiation and *there can be no generated specialization* for which the template definition of `Wrapper::f` is valid (which is the case for snippet #2, as for every generated specialization `Wrapper<T>::f` a `non-const` call on a member would be performed inside a `const` function), the program is ill-formed, no diagnostic required.
And because the diagnostics are not mandatory (see §17.7/8 above), GCC may reject snippet #2 while both [VS](http://rextester.com/JTBRR73709) and [clang](https://wandbox.org/permlink/tiNxQqc2aGjCHzNd) compile the same code flawlessly.
For snippet #1 however you could provide a user-defined specialization for `Data` where `Data::f` is `const` (say `Data<void>::f`). Therefore, a valid, *generated* specialization of `Wrapper::f` is possible, i.e. `Wrapper<void>::f`. So in conclusion, snippet #1 is well-formed and snippet #2 is invalid; all compilers work in a standard-conforming manner.
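To make the point about snippet #1 concrete, here is a sketch (not part of the original question) of such a user-provided specialization that gives `Wrapper<void>::f` a valid instantiation target:

```
template<class T> struct Data { void f() {} };

// user-provided specialization whose f() is const
template<> struct Data<void> { void f() const {} };

template<class T> struct Wrapper
{
    Data<T> _data;
    void f() const { _data.f(); } // fine for T = void
};

int main()
{
    Wrapper<void> w;
    w.f(); // now even calling it is well-formed
}
```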
|
Get power consumption of a USB device
Is there a way to check how much power a USB device requires?
*Why do I need this?*
I need to connect an LTE USB stick to my Raspberry Pi, and don't know how much power it needs. We got it quite easily on Windows, but haven't found a way to do it on Linux.
|
Take a look at this SuperUser Q&A titled: [How do you check how much power a USB port can deliver?](https://superuser.com/questions/297959/how-do-you-check-how-much-power-a-usb-port-can-deliver), specifically my [answer](https://superuser.com/a/541634/20568).
### lsusb -v
You can get the maximum power using `lsusb -v`, for example:
```
$ lsusb -v|egrep "^Bus|MaxPower"
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
MaxPower 0mA
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
MaxPower 0mA
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MaxPower 0mA
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MaxPower 0mA
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MaxPower 0mA
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MaxPower 0mA
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MaxPower 0mA
Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
MaxPower 100mA
Bus 003 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser
MaxPower 98mA
Bus 001 Device 004: ID 04a9:1069 Canon, Inc. S820
MaxPower 2mA
Bus 001 Device 005: ID 05ac:120a Apple, Inc. iPod Nano
MaxPower 500mA
MaxPower 500mA
```
|
Database is in Transition state
Today I was trying to restore a database over an already existing database, I simply right clicked the database in SSMS --> Tasks --> Take Offline so I could restore the database.
A small pop-up window appeared and showed `Query Executing.....` for some time and then threw an error saying `Database is in use cannot take it offline`, from which I gathered there are some active connections to that database, so I tried to execute the following query:
```
USE master
GO
ALTER DATABASE My_DatabaseName
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
```
Again at this point SSMS showed `Query Executing.....` for some time and then threw the following error:
```
Msg 5061, Level 16, State 1, Line 1
ALTER DATABASE failed because a lock could not be placed on database 'My_DatabaseName'. Try again later.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
```
After this I could not connect to the database through SSMS, and when I tried to take it offline using SSMS it threw an error saying:
```
Database is in Transition. Try later .....
```
At this point I simply couldn't touch the database; anything I tried returned the same error message `Database is in Transition`.
I got on Google and read some questions where people had faced a similar issue; they recommended closing SSMS and opening it again, so I did, and
since it was only a dev server I just deleted the database using SSMS and restored it as a new database.
My question is: what could have possibly caused this, how can I avoid it in the future, and if I ever end up in the same situation again, is there any other way of fixing it other than deleting the whole database?
Thank you
|
Check out this article.
<http://oostdam.info/index.php/sectie-blog/289-sql-error-952-8ways-to-solve-it>
I use TSQL most of the time, so I have not run into this issue yet.
What version is the SQL Server database and at what patch level?
Next time, do a usp\_who2 to see what threads are running.
<http://craftydba.com/wp-content/uploads/2011/09/usp-who2.txt>
Since the output is in a table, you can search by database.
Kill all threads using the database before trying the ALTER statement.
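A rough sketch of that idea (the database name is from the question; `database_id` on `sys.dm_exec_sessions` exists from SQL Server 2012, on older versions use `sys.sysprocesses` and its `dbid` column) is to generate and run KILL statements for every session using the database, then take it offline:

```
DECLARE @kill varchar(max) = '';

SELECT @kill = @kill + 'KILL ' + CONVERT(varchar(10), session_id) + '; '
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('My_DatabaseName');

EXEC (@kill);

ALTER DATABASE My_DatabaseName SET OFFLINE WITH ROLLBACK IMMEDIATE;
```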
One night about 6 months ago, I had a terrible time getting a 2000 database offline due to an application constantly hitting it. I eventually disabled the user account so I would not get any more logins.
|
How to watch deep array using computed in Vuejs
I made a component like below in Vuejs.
But my goal is to get the value inside the filteredShoes watcher.
```
data():{
coor: [
{ "name": '',
"shoes": '' },
{ "name": '',
"shoes": '' }
]
},
computed {
filteredAny(){
return this.coor.filter(it=>['shoes','name'].includes(it));
},
filteredShoes(){
return this.coor.filter(it=>it === 'shoes');
},
filteredName(){
return this.coor.filter(it=>it === 'name');
}
}
watch {
filteredShoes(){
console.log('The shoes are changed');
}
}
```
So I tried like below, but it says val is undefined.
I want 'val' to be the 'coor' array from data.
How can I fix this code? Thank you so much for reading this.
```
watch {
filteredShoes(val){
for(let i=0; i < val.length; i+=1){}
}
}
```
|
Since `this.coor` is an array of objects, `it` will be an object. Thus `it != 'shoes'`, and your filter will return an empty array.
Assuming you are using computed `filteredShoes()` like this:
```
<div v-for="shoe in filteredShoes"> ... </div>
```
Then you can just use the computed property, no need for the watcher. *Every time elements are added to/removed from the array,* the computed prop will run. **The computed property will not run if the properties of an object in the array are changed.**
Also, I'm not quite sure why your `this.coor` has such a structure, so I'm using this:
```
coor: [
{ "name": 'Model 1', "shoes": 'Nike' },
{ "name": 'Model 2', "shoes": 'Reebok' }
]
...
computed: {
filteredShoes(){
let shoes = this.coor.filter(item => item.shoes === 'Nike');
for(let i = 0; i < shoes.length; i++){ ... } // assuming you're modifying each object here
return shoes; // return an array to be looped in HTML
},
}
```
**If you're trying to filter by type**, I would recommend changing your `coor` to the following structure:
```
coor: [
{ "name": 'Model 1', "type": 'shoe' },
{ "name": 'Model 2', "type": 'shoe' }
]
...
computed: {
filteredShoes(){
let shoes = this.coor.filter(item => item.type === 'shoe');
...
return shoes; // return an array to be looped in HTML
},
}
```
|
ggplot scale color gradient to range outside of data range
I am looking for a way to stretch a color gradient between two values and label the legend, regardless of the range of data values in the dataset. Essentially, is there a functional equivalent to `ylim()` for color gradients?
Given code which plots a z value typically between -1 and 1, I can plot and label a gradient if the breaks are within the data range:
```
library(ggplot2)
#generator from http://docs.ggplot2.org/current/geom_tile.html
pp <- function (n, r = 4) {
x <- seq(-r * pi, r * pi, len = n)
df <- expand.grid(x = x, y = x)
df$r <- sqrt(df$x^2 + df$y^2)
df$z <- cos(df$r^2) * exp(-df$r / 6)
return(df)
}
t <- pp(30)
summary(t)
b <- c(-.5, 0, .5)
colors <- c('navyblue', 'darkmagenta', 'darkorange1')
p <- ggplot(data = t, aes(x = x, y = y))+
geom_tile(aes(fill = z))+
scale_fill_gradientn(colors = colors, breaks = b, labels = format(b))
ggsave(plot = p, filename = <somefile.png>, height = 3, width = 4)
```

But when I change the breaks to values outside of the observed range, the gradient coloring doesn't seem to adjust and the gradient labels don't appear.
```
b <- c(-3, 0, 3)
```

|
It's very important to remember that in ggplot, `breaks` will basically never change the scale itself. It will only change what is displayed in the guide or legend.
You should be changing the scale's limits instead:
```
ggplot(data=t, aes(x=x, y=y)) +
geom_tile(aes(fill=z)) +
scale_fill_gradientn(limits = c(-3,3),
colours=c("navyblue", "darkmagenta", "darkorange1"),
breaks=b, labels=format(b))
```
And now if you want the breaks that appear in the legend to extend further, you can change them to set where the tick marks appear.
A good analogy to keep in mind is always the regular x and y axes. Setting "breaks" there will just change where the tick marks appear. If you want to alter the *extent* of the x or y axes, you'd typically change a setting like their "limits".
|
Why is the DOMSubtreeModified event deprecated in DOM level 3?
Why is the DOMSubtreeModified event [deprecated](http://www.w3.org/TR/DOM-Level-3-Events/#event-type-DOMSubtreeModified) and what are we supposed to use instead?
|
If you [scroll down a bit](http://www.w3.org/TR/DOM-Level-3-Events/#events-mutationevents), you see:
>
> Warning! The `MutationEvent` interface was introduced in DOM Level 2
> Events, but has not yet been completely and interoperably implemented
> across user agents. In addition, there have been critiques that the
> interface, as designed, introduces a performance and implementation
> challenge. A new specification is under development with the aim of
> addressing the use cases that mutation events solves, but in more
> performant manner. Thus, this specification describes mutation events
> for reference and completeness of legacy behavior, but deprecates the
> use of both the `MutationEvent` interface and the `MutationNameEvent`
> interface.
>
>
>
The replacement API is [mutation observers](https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver), which are fully specified [in the DOM Living Standard](https://dom.spec.whatwg.org/#mutation-observers) that supercedes all of the DOM level X silliness.
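A minimal sketch of the replacement API:

```
// Instead of listening for DOMSubtreeModified:
var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
        console.log(mutation.type, mutation.target);
    });
});

// Watch an element's whole subtree for added/removed nodes
observer.observe(document.body, { childList: true, subtree: true });

// Stop observing when the notifications are no longer needed
// observer.disconnect();
```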
|
R shiny color dataframe
I have a data frame:
```
runApp(
list(ui = bootstrapPage(pageWithSidebar(
headerPanel("Data frame with colors"),
sidebarPanel(),
mainPanel(
tableOutput("my_dataframe")
)
)
)
,
server = function(input, output) {
output$my_dataframe <- renderTable({
data.frame("Brand ID"=1:4,"Client1"=c("red", "green", "green", "green"),
"Client2"=c("green", "red", "green", "red"))
})
}
)
)
```
Is it possible to color data frame like:

For example, when I have condition1 I need to color the data frame cell red, and on condition2, green.
Any help would be really appreciated.
|
Here is a solution. To use it, you have to define css in a vector:
```
css <- c("#bgred {background-color: #FF0000;}",
"#bgblue {background-color: #0000FF;}")
```
and write `#...` inside the cell :
```
> data.frame(x=c("A","B"), y=c("red cell #bgred", "blue cell #bgblue"))
x y
1 A red cell #bgred
2 B blue cell #bgblue
```
Then use my `colortable()` function mainly inspired from the `highlightHTML` package and from my personal shiny experience. Here is an example:
```
library(pander)
library(markdown)
library(stringr)
library(shiny)
# function derived from the highlightHTMLcells() function of the highlightHTML package
colortable <- function(htmltab, css, style="table-condensed table-bordered"){
tmp <- str_split(htmltab, "\n")[[1]]
CSSid <- gsub("\\{.+", "", css)
CSSid <- gsub("^[\\s+]|\\s+$", "", CSSid)
CSSidPaste <- gsub("#", "", CSSid)
CSSid2 <- paste(" ", CSSid, sep = "")
ids <- paste0("<td id='", CSSidPaste, "'")
for (i in 1:length(CSSid)) {
locations <- grep(CSSid[i], tmp)
tmp[locations] <- gsub("<td", ids[i], tmp[locations])
tmp[locations] <- gsub(CSSid2[i], "", tmp[locations],
fixed = TRUE)
}
htmltab <- paste(tmp, collapse="\n")
Encoding(htmltab) <- "UTF-8"
list(
tags$style(type="text/css", paste(css, collapse="\n")),
tags$script(sprintf(
'$( "table" ).addClass( "table %s" );', style
)),
HTML(htmltab)
)
}
##
runApp(
list(
ui=pageWithSidebar(
headerPanel(""),
sidebarPanel(
),
mainPanel(
uiOutput("htmltable")
)
),
server=function(input,output,session){
output$htmltable <- renderUI({
# define CSS tags
css <- c("#bgred {background-color: #FF0000;}",
"#bgblue {background-color: #0000FF;}")
# example data frame
# add the tag inside the cells
tab <- data.frame(x=c("A","B"), y=c("red cell #bgred", "blue cell #bgblue"))
# generate html table with pander package and markdown package
htmltab <- markdownToHTML(
text=pandoc.table.return(
tab,
style="rmarkdown", split.tables=Inf
),
fragment.only=TRUE
)
colortable(htmltab, css)
})
})
)
```

|
Translate CRC8 from C to Java
I received a piece of code in C which calculates a CRC8-value of a byte array.
I need to translate it to Java.
Here the **C** Code:
```
CRC_POLYNOM = 0x9c;
CRC_PRESET = 0xFF;
unsigned int crc = CRC_PRESET;
for (i = 0; i < Len; i++)
{
crc ^= FRAME[i];
for (j = 0; j < 8; j++)
{
if (crc & 0x01)
crc = (crc >> 1) ^ CRC_POLYNOM;
else
crc = (crc >> 1);
}
}
```
What I have managed to do is this in **Java**:
```
public static long calculateCRC8(byte[] b, int len) {
long crc = CRC_PRESET;
for (int i = 0; i < len; i++) {
crc ^= b[i];
for (int j = 0; j < 8; j++) {
if ((crc & 0x01) == 0)
crc = (crc >> 1) ^ CRC_POLYNOM;
else
crc = crc >> 1;
}
}
return crc;
}
```
For a sample byte-array:
```
byte[] b = new byte[] {1, 56, -23, 3, 0, 19, 0, 0, 2, 0, 3, 13, 8, -34, 7, 9, 42, 18, 26, -5, 54, 11, -94, -46, -128, 4, 48, 52, 0, 0, 0, 0, 0, 0, 0, 0, 4, 1, 1, -32, -80, 0, 98, -5, 71, 0, 64, 0, 0, 0, 0, -116, 1, 104, 2};
```
the **C** code returns **29**, my **Java** code returns **44** though.
What did I do wrong?
I think this is because of Java's signed-only datatypes, so how can I fix this?
|
```
if (crc & 0x01)
```
This tests if the lowest bit is set.
```
if ((crc & 0x01) == 0)
```
This tests if the bottom bit is *clear.*
You should also use an unsigned right shift in the Java code (i.e. `>>>` instead of `>>`, in two places), and mask the result with `0xff` before returning it.
**EDIT** And finally you need to change this:
```
crc ^= b[i];
```
to this:
```
crc ^= b[i] & 0xff;
```
However you should really throw it all away and look up the table driven method. It is eight times as fast as this.
***EDIT 2*** Table-driven version, revised so as to implement `java.util.zip.Checksum`:
```
public class CRC8 implements Checksum
{
private final short init;
private final short[] crcTable = new short[256];
private short value;
/**
* Construct a CRC8 specifying the polynomial and initial value.
* @param polynomial Polynomial, typically one of the POLYNOMIAL_* constants.
* @param init Initial value, typically either 0xff or zero.
*/
public CRC8(int polynomial, short init)
{
this.value = this.init = init;
for (int dividend = 0; dividend < 256; dividend++)
{
int remainder = dividend ;//<< 8;
for (int bit = 0; bit < 8; ++bit)
if ((remainder & 0x01) != 0)
remainder = (remainder >>> 1) ^ polynomial;
else
remainder >>>= 1;
crcTable[dividend] = (short)remainder;
}
}
@Override
public void update(byte[] buffer, int offset, int len)
{
for (int i = 0; i < len; i++)
{
int data = buffer[offset+i] ^ value;
value = (short)(crcTable[data & 0xff] ^ (value << 8));
}
}
/**
* Updates the current checksum with the specified array of bytes.
* Equivalent to calling <code>update(buffer, 0, buffer.length)</code>.
* @param buffer the byte array to update the checksum with
*/
public void update(byte[] buffer)
{
update(buffer, 0, buffer.length);
}
@Override
public void update(int b)
{
update(new byte[]{(byte)b}, 0, 1);
}
@Override
public long getValue()
{
return value & 0xff;
}
@Override
public void reset()
{
value = init;
}
public static void main(String[] args)
{
final int CRC_POLYNOM = 0x9C;
final byte CRC_INITIAL = (byte)0xFF;
final byte[] data = {1, 56, -23, 3, 0, 19, 0, 0, 2, 0, 3, 13, 8, -34, 7, 9, 42, 18, 26, -5, 54, 11, -94, -46, -128, 4, 48, 52, 0, 0, 0, 0, 0, 0, 0, 0, 4, 1, 1, -32, -80, 0, 98, -5, 71, 0, 64, 0, 0, 0, 0, -116, 1, 104, 2};
CRC8 crc8 = new CRC8(CRC_POLYNOM, CRC_INITIAL);
crc8.update(data,0,data.length);
System.out.println("Test successful:\t"+(crc8.getValue() == 29));
}
}
```
|
How to add direction on tracking path for google maps
I am showing the tracking on map with marker points and line connecting them.
The problem is that I want to show the direction of travel on the links,
but I am not sure how to show the direction on the line between the marker points.
Is there any way to accomplish this task.
|
Showing the direction on the polyline can be accomplished with arrows.
There are some predefined paths that the google maps api3 provides.
See this section of the documentation -
[**SYMBOLS ON POLYLINE**](https://developers.google.com/maps/documentation/javascript/overlays#PredefinedSymbols), that can be used other than an arrow.
Have a look at this fiddle that uses an arrow to indicate the direction on the polyline.
[**DEMO with a sigle symbol**](http://jsfiddle.net/q6eqV/1/)
You can also set the `repeat` property for the symbol so that it repeats for regular intervals.
[**DEMO with repeating symbols**](http://jsfiddle.net/Pc9Rv/)
**JavaScript-**
```
var iconsetngs = {
path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW
};
var polylineoptns = {
path: markers,
strokeOpacity: 0.8,
strokeWeight: 3,
map: map,
icons: [{
icon: iconsetngs,
offset: '100%'}]
};
polyline = new google.maps.Polyline(polylineoptns);
```
The interesting feature of this predefined symbol (*the forward arrow especially*) is that the arrow points in the exact direction in which your co-ordinates are located. So, that obviously serves the purpose of denoting the direction in a Tracking System.
**UPDATE:** Not sure about the point you are trying to make in the comments. The markers can be displayed the same way. Here is the code that adds markers with a loop and also sets up the polyline with arrows:
[**DEMO WITH MARKERS AND POLYLINE**](http://jsfiddle.net/nX8U8/2/)
Javascript:
```
var polylineoptns = {
strokeOpacity: 0.8,
strokeWeight: 3,
map: map,
icons: [{
repeat: '70px', //CHANGE THIS VALUE TO CHANGE THE DISTANCE BETWEEN ARROWS
icon: iconsetngs,
offset: '100%'}]
};
polyline = new google.maps.Polyline(polylineoptns);
var z = 0;
var path = [];
path[z] = polyline.getPath();
for (var i = 0; i < markers.length; i++) //LOOP TO DISPLAY THE MARKERS
{
var pos = markers[i];
var marker = new google.maps.Marker({
position: pos,
map: map
});
path[z].push(marker.getPosition()); //PUSH THE NEWLY CREATED MARKER'S POSITION TO THE PATH ARRAY
}
```
|
Is it possible to have OpenGL draw on a memory surface?
I am starting to learn OpenGL and I was wondering if it is possible to have it draw on a video memory buffer that I've obtained through other libraries?
|
For drawing into video memory you can use [framebuffer objects](http://www.songho.ca/opengl/gl_fbo.html) to draw into OpenGL textures or renderbuffers (VRAM areas for offscreen rendering), like Stefan suggested.
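A minimal sketch of setting up an FBO that renders into a texture (assuming a current GL context and loaded function pointers; error handling omitted):

```
GLuint fbo, tex;

/* create a texture to render into */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* attach it to a framebuffer object */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    /* all subsequent draw calls now render into the texture */
}
```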
When it comes to a VRAM buffer created by another library, it depends what library you are talking about. If this library also uses OpenGL under the hood, you need some insight into the library to get that "buffer" (be it a texture, into which you can render directly using FBOs, or a GL buffer object, into which you can read rendered pixel data using [PBOs](http://www.songho.ca/opengl/gl_pbo.html)).
If this library uses some other API to interface the GPU, there are not so many possibilities. If it uses OpenCL or CUDA, these APIs have functions to directly use their memory buffers or images as OpenGL buffers or textures, which you can then render into with the mentioned techniques.
If this library uses Direct3D under the hood, it gets a bit more difficult. But at least nVidia has an [extension](http://www.opengl.org/registry/specs/NV/DX_interop.txt) to directly use Direct3D 9 surfaces and textures as OpenGL buffers and textures, though I don't have any experience with this, nor do I know whether it is widely supported.
|
google script use html service in gmail addon
I've tried to create a simple form in a Gmail add-on;
how can I use the HTML [service](https://developers.google.com/apps-script/reference/html/)?
Below is the code I have tried:
```
function buildAddOn(e) {
var accessToken = e.messageMetadata.accessToken;
GmailApp.setCurrentMessageAccessToken(accessToken);
var test_card = doGet()
cards.push(test_card);
return cards;
}
function doGet() {
return HtmlService.createHtmlOutput('<b>Hello, world!</b>');
}
```
Thanks in advance
|
I understand that you want to use HTML in a Gmail add-on. If my understanding is correct, how about these sample scripts?
### Sample script 1 :
```
function buildAddOn() {
var html = HtmlService.createTemplate('<b>Hello, world!</b>').evaluate().getContent();
return CardService.newCardBuilder()
.setHeader(CardService.newCardHeader().setTitle('sample'))
.addSection(CardService.newCardSection().addWidget(CardService.newKeyValue().setContent(html)))
.build();
}
```
### Sample script 2 :
Code.gs
```
function buildAddOn() {
var html = HtmlService.createTemplateFromFile("index").evaluate().getContent();
return CardService.newCardBuilder()
.setHeader(CardService.newCardHeader().setTitle('sample'))
.addSection(CardService.newCardSection().addWidget(CardService.newKeyValue().setContent(html)))
.build();
}
```
index.html
```
<b>Hello, world!</b>
```
### Result :
[](https://i.stack.imgur.com/xAMdr.png)
### Note :
- As a sample, the manifest was used from the [Quickstart](https://developers.google.com/gmail/add-ons/guides/quickstart#step_2_update_the_script_manifest).
- This is a very simple script. So please modify it for your environment.
If I misunderstand your question, I'm sorry.
|
View.getContext return null, why?
I'm using WebView with **javascript interface** and sometimes when I call `loadUrl` on the webView, `mWebView.getContext()` returns `null`.
1. How can a view have no Context? Why?
2. Is a view without context still used or reusable?
3. What should I do when my view does not have a context?
4. **Most important**, if the view has no context, will `mWebView.post(new Runnable() { ...` be executed?
5. Is this code relevant?
```
if (Looper.getMainLooper().getThread() == Thread.currentThread()) {
mWebView.loadUrl("javascript:...");
} else {
mWebView.post(new Runnable() {
public void run() {
mWebView.loadUrl("javascript:...");
}
});
}
```
|
Two common reasons for a view having a null context:
- You're trying to get the context in the callback of an asynchronous (handler, network call) call but the activity/fragment is gone because of another action of the user (ex: pressed back). Therefore the view is detached and has no context anymore.
- You have a memory leak somewhere and two or more instances of your activity and view hierarchy. Then something happen in the instance you're not refering to anymore but the view has lost the link to its context.
Regarding the handler.
I am not sure if the view has its own handler or if it uses the handler of the Activity it is attached to; you'd probably have to read the source to find out. However, the question is not really relevant: if the view has no context, you have a bigger problem: it's not on the screen.
Regarding whether the code in 5. is relevant, you would need to answer this question: why don't you know on which thread your code is running?
When you know on which thread you are, and if it makes sense to you not to be on the main thread, then using a handler is a valid way to execute your code on the main thread, as is Activity.runOnUiThread().
Just remember that a Handler's lifecycle is not tied to the activity, so you should clear the queue of messages and runnables when your activity/fragment pauses.
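As a small illustration of that last point (a sketch assuming a `Handler` field named `handler`, not code from the question):

```
@Override
protected void onPause() {
    super.onPause();
    // Drop everything still queued on this handler so no callback runs
    // against a paused activity or a detached view hierarchy.
    handler.removeCallbacksAndMessages(null);
}
```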
|
android camera: onActivityResult() intent is null if it had extras
After searching a lot in all the related issues on Stack Overflow and finding nothing, I hope someone can help me.
I created an intent for capture a picture. Then I saw different behavior at `onActivityResult()`: if I don't put any extra in the Intent (for small pics) the Intent in onActivityResult is ok, but when I put extras in the intent for writing the pic to a file, the intent in onActivityResult is `null`!
The Intent creation:
```
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
// without the following line the intent is ok
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f));
startActivityForResult(takePictureIntent, actionCode);
```
Why is it null, and how can I solve it?
|
The same happens to me: if you are providing `MediaStore.EXTRA_OUTPUT`, then the intent is null, but you will have the photo in the file you provided (`Uri.fromFile(f)`).
If you don't specify `MediaStore.EXTRA_OUTPUT` then you will have an intent which contains the uri from the file where the camera has saved the photo.
Don't know if it is a bug, but it works that way.
**EDIT:** So in onActivityResult() you no longer need to check for data if null. The following worked with me:
```
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
switch (requestCode) {
case PICK_IMAGE_REQUEST://actionCode
if (resultCode == RESULT_OK && data != null && data.getData() != null) {
//For Image Gallery
}
return;
case CAPTURE_IMAGE_REQUEST://actionCode
if (resultCode == RESULT_OK) {
//For CAMERA
//You can use image PATH that you already created its file by the intent that launched the CAMERA (MediaStore.EXTRA_OUTPUT)
return;
}
}
}
```
Hope it helps
|
Disable compressed memory in Mac OS 10.9 Mavericks?
Is there any way to disable memory compression in Mavericks? Ever since I upgraded, my Minecraft server has been using ludicrous amounts of CPU time and choking. I'd like to test without compressed memory to see if that might be the culprit.
|
vm/vm\_pageout.h defines the modes for the vm\_compressor boot argument, which defaults to VM\_PAGER\_COMPRESSOR\_WITH\_SWAP (per vm/vm\_compressor.c). For OS X 10.9, 10.10, and 10.11, you can disable compression by setting the vm\_compressor boot argument to 1 (VM\_PAGER\_DEFAULT). That is:
```
sudo nvram boot-args="vm_compressor=1"
```
Then reboot. You can verify the change was successful by running:
```
sysctl -a vm.compressor_mode
```
Starting with macOS 10.12 Sierra, the old VM\_PAGER\_DEFAULT is no longer supported and `vm_compressor=1` is converted to `vm_compressor=4` inside the kernel.
|
Parent-child ordering in same table
I have a table like this-
```
id name ordering catid
1 parent1 1 0
2 parent2 2 0
3 parent3 3 0
4 child11 1 1
5 child12 2 1
6 child21 1 2
7 child22 2 2
8 child31 1 3
9 child32 2 3
```
I am trying to get the result like below-
```
id name ordering catid
1 parent1 1 0
4 child11 1 1
5 child12 2 1
2 parent2 2 0
6 child21 1 2
7 child22 2 2
3 parent3 3 0
8 child31 1 3
9 child32 2 3
```
I want to order the first parent (`catid=0`), then its children, then the second parent and its children.
Is it possible to achieve the desired result with these columns? I tried a join but did not succeed.
Surely I am not doing it correctly.
This is what I have tried-
```
SELECT a.*,c.name AS category_title FROM table AS a
LEFT JOIN table AS c ON c.id = a.catid
ORDER BY c.ordering asc, a.ordering asc
```
|
This query does a `SELF JOIN` on the same table, joining each record with its parent record (if any). The computed column (aliased `parentId` here) is really a sort key: for a parent row it is its own `ordering * 10`, and for a child row it is the parent's `ordering * 10` plus the child's own `ordering`. Ordering by that key places each parent directly above its children.
```
SELECT
m.id,
m.name,
m.catid,
m.ordering,
p.ordering,
case
WHEN p.ordering IS NULL THEN m.ordering * 10
ELSE m.ordering + p.ordering * 10
END AS parentId
FROM
MyTable m
LEFT JOIN MyTable p
ON m.catid = p.id
ORDER BY parentId
```
The result is as follows:
```
1 parent1 0 1 10
4 child11 1 1 1 11
5 child12 1 2 1 12
2 parent2 0 2 20
6 child21 2 1 2 21
7 child22 2 2 2 22
3 parent3 0 3 30
8 child31 3 1 3 31
9 child32 3 2 3 32
```
|
Memory consuption code optimization, a garbage collector theory
In my WPF-application, I call new windows in the following way:
```
_newWin = new WinWorkers_AddWorker();
_newWin.WindowState = this.WindowState;
_newWin.Show();
```
Where `_newWin` is a `private Window object`.
My questions:
1. Should I assign a null value to `_newWin` after I call `_newWin.Show()`?
2. Will this decrease memory consumption because the garbage collector / destructor will clean the null-valued objects earlier?
|
It's generally irrelevant to set a value to null. It's very rarely useful. It's occasionally harmful.
Let's consider first the simplest case:
```
private void DoStuff()
{
var newWin = new WinWorkers_AddWorker();
newWin.WindowState = this.WindowState;
newWin.Show();
int irrelevant = 42;
this.whoCares = irrelevant * 7;
int notRelevantEither = irrelevant + 1;
this.stillDontCare = notRelevantEither * irrelevant;
}
```
Here `newWin` only exists in this method; it is created in it and doesn't leave the scope of the method by being returned or assigned to a member with a wider scope.
Ask a lot of people when `newWin` gets garbage collected, and they'll tell you that it will happen after the line with `this.stillDontCare`, because that's when `newWin` goes out of scope. We could therefore have a slight win by assigning `newWin = null` just after its last use, but it's probably negligible.
Conceptually this is true, because we can add code that deals with `newWin` anywhere up until that point, and `newWin` is there for us to make use of.
In fact though, it is quite likely that `newWin` becomes eligible for collection right after `.Show()`. While it is conceptually in scope after then, it isn't actually used and the compiler knows that. (By "compiler" from now on I'm going to mean the entire process that produces actual running code, combining the IL compiler and the jitter). Since the memory used by `newWin` itself (that is, the reference on the stack, not the object) is no longer used the compiler could use that memory for `irrelevant` or something else. There being no live reference any more, the object is eligible for collection.
Indeed, if the last few methods called on an object don't actually use the `this` pointer (whether directly or by using member fields) then the object can even be collected before those methods are called, because they don't actually make use of the object. If you had a method whose `this` pointer was never used (again, directly or indirectly) then it might never actually be created!
Now, bearing this in mind, we can see that it really isn't going to make even that slight negligible difference that it would seem to make, if we were to assign null to the variable before the variable falls out of scope.
Indeed, it is just about possible that the assignment could even make it take longer to become eligible, because if the compiler couldn't see that that use of the variable was not going to affect the object (unlikely, but perhaps it could happen if there are `try...catch...finally` blocks making the analysis more complicated), then it could even delay the point at which the object is deemed eligible. It is again probably negligible, but it is there.
So far so simple; good stuff happens if we leave well alone, and leaving well alone is easy.
It is however possible for a reference to benefit from being set to null. Consider:
```
public class SomeClass
{
private WorkerThing _newWin;
private void DoStuff()
{
_newWin = new WinWorkers_AddWorker();
_newWin.WindowState = this.WindowState;
_newWin.Show();
}
}
```
Consider here, that this time after `DoStuff()` is called, `_newWin` is stored in a member variable. It will not fall out of scope until the instance of `SomeClass` falls out of scope. When will that happen?
Well, I can't answer that question, but sometimes the answer is important. If the `SomeClass` itself is also short-lived, then who cares. It'll fall out of scope soon enough, taking `_newWin` with it. If however, we assigned `_newWin = null` then the object would immediately be eligible for collection.
Now, some important caveats to this:
1. In the first place, there is no good reason for `_newWin` to be a member variable. If the example above were complete code we would move it back to being local to `DoStuff()` and gain not only in this efficiency manner, but *much, much more importantly* in our chances of correctness, as we can't do something stupid to `_newWin` from another member.
2. If we are holding onto something in a member variable, it's probably for a good reason. That good reason is going to override being fanatic about cleaning out variables as fast as possible.
3. Most objects just don't take up that much memory by themselves anyway. A member variable here or there isn't going to hurt.
Because of this, the main reason to assign null to a member variable, is simply because null has become the most appropriate value. Assigning null to a member that is no longer going to be used is generally not to release its memory ASAP but because it is no longer appropriate to use, and that becomes impossible - and clearly signalled to the rest of your code as such - when it is null.
If a reference was longer-lived than a method (and hence put in a member variable) **and** considerably shorter-lived than the containing object **and** consumed a very large amount of memory, then it's just about possible that assigning null would begin to make sense. In the extremely rare cases where that combination happens, we probably want to assign it to null to indicate that it is no longer there for the class to use anyway, so we're still not going to assign null *with the purpose of* releasing it to the GC. It's just about possible, but really "nah".
|
using 'or' operator not working for menu
In my code here, I have a CSS class called "active" which I use if the $_GET['page'] == tutorials, php, mysql, etc...
The problem is, even if the 'page' variable is not equal to any of these values, the Tutorials button in this case is still active for some reason.
Any ideas why this would be happening?
Am I using the 'or' (||) operator incorrectly?
```
<?php if($_GET['page'] == 'tutorials' || 'php' || 'mysql' || 'html' || 'css' || 'js') { ?>
<li class="active"> <?php } else { ?> <li> <?php } ?>
<a href="index.php?page=tutorials">Tutorials</a>
<ul>
<li><a href="index.php?page=php">PHP</a></li>
<li><a href="index.php?page=mysql">MySQL</a></li>
<li><a href="index.php?page=html">HTML</a></li>
<li><a href="index.php?page=css">CSS</a></li>
<li><a href="index.php?page=js">JS</a></li>
</ul>
</li>
```
|
You are using `||` incorrectly.
You would need to test each whole condition separately to use it that way. So you could do:
```
<?php if($_GET['page'] == 'tutorials' || $_GET['page'] == 'php' || $_GET['page'] == 'mysql' || $_GET['page'] == 'html' || $_GET['page'] == 'css' || $_GET['page'] == 'js') { ?>
```
Here's an alternate syntax:
```
<?php
if(in_array($_GET['page'], array('tutorials', 'php', 'mysql', 'html', 'css', 'js'))) {
?>
```
Read more about [PHP Control Structures](http://php.net/manual/en/language.control-structures.php), [arrays](http://us.php.net/manual/en/language.types.array.php), and [`in_array()`](http://php.net/in_array)
|
Bootstrap Sidebar Not Scrollable 100% Height
I currently have a Bootstrap Sidebar on the left part of my screen. `height` is set to `100%`. But when content goes past the bottom of the screen it's not scrollable.
I saw [this Stack Overflow](https://stackoverflow.com/questions/28375553/scrollable-sidebar-using-bootstrap#28376048) question which is very similar to my problem. One of the comments mentions that it could be due to my height being set to 100%.
I have tried to set my height to 100px just as a test to see if the content would become scrollable, and that worked.
So my question is, is there a way to make the content in my sidebar scrollable and keep the height at `100%` instead of having to set a specific pixel value?
**Edit:**
Below is my current CSS for `#sidebar-wrapper`.
```
#sidebar-wrapper {
overflow-y: scroll;
z-index: 1000;
position: fixed;
left: 175px;
width: 175px;
height: 100%;
margin-left: -175px;
overflow-y: auto;
background: #222222;
-webkit-transition: all 0.5s ease;
-moz-transition: all 0.5s ease;
-o-transition: all 0.5s ease;
transition: all 0.5s ease;
}
```
|
**Update:**
After going through the code specific to the website, the fixed-position sidebar needed the CSS property `top:52px`, which offsets the sidebar at the top to accommodate the navbar of height `52px`, and `bottom:0px`, which is added to ensure the sidebar extends to the bottom of the browser window.
```
#sidebar-wrapper {
z-index: 1000;
position: fixed;
left: 175px;
top: 52px;
bottom: 0px;
margin-left: -175px;
overflow-y: auto;
background: #222222;
-webkit-transition: all 0.5s ease;
-moz-transition: all 0.5s ease;
-o-transition: all 0.5s ease;
transition: all 0.5s ease;
}
```
**Old Answer:**
Here is some sample code from [bootsnipp.com/](https://bootsnipp.com/snippets/featured/sidebar-navigation-with-scrollspy) and a simple working snippet of the code. Refer the below JSFiddle.
So if the code is structured in this format, then Bootstrap will take care of the sidebar scrolling internally. If you always need a scrollbar, then add the CSS property `overflow-y:scroll` to the ID `#sidebar-wrapper`.
The below CSS is what I am talking about.
```
#sidebar-wrapper {
margin-left: -250px;
left: 250px;
width: 250px;
background: #000;
position: fixed;
height: 100%;
overflow-y: auto;
z-index: 1000;
transition: all 0.4s ease 0s;
}
```
[`**JSFiddle Demo**`](https://jsfiddle.net/zhvgnnmf/)
|
SQL - Only UNIQUE or PRIMARY KEY constraints can be created on computed columns
I am trying to create a table with a computed column called profileID, however when I try this:
```
CREATE TABLE Profiles
(
[id] [int] IDENTITY(1,1) NOT NULL,
[profileID] AS ((id * 19379 - 62327) % 99991) NOT NULL
)
```
However when I go to create it, I get this error:
>
> Only UNIQUE or PRIMARY KEY constraints can be created on computed columns, while CHECK, FOREIGN KEY, and NOT NULL constraints require that computed columns be persisted.
>
>
>
I've tried to adjust the profileID line to this
```
[profileID] as ( (id * 19379 - 62327) % 99991) NOT NULL UNIQUE
```
But I still get the same error.
FYI I have another column called id; it's the primary key and is unique and auto-incremented.
|
The error message is very clear - You can only use unique or primary key constraints on a computed column. You can't create a computed column with a not null constraint unless it's persisted.
So either create that column as `persisted`:
```
CREATE TABLE Profiles
(
[id] [int] IDENTITY(1,1) NOT NULL,
[profileID] as ( (id * 19379 - 62327) % 99991) PERSISTED NOT NULL
)
```
Or simply remove the `not null` (it's never going to be null anyway)
```
CREATE TABLE Profiles
(
[id] [int] IDENTITY(1,1) NOT NULL,
[profileID] as ( (id * 19379 - 62327) % 99991)
)
```
|
What is the summary of the differences in binding behaviour between Rebol 2 and 3?
The current in-depth [documentation on variable binding](http://www.rebol.net/wiki/Bindology) targets Rebol 2. Could someone provide a summary of differences between Rebol 2 and 3?
|
There isn't really a summary anywhere, so let's go over the basics, perhaps a little more informally than [Bindology](https://github.com/revault/rebol-wiki/wiki/Bindology "Bindology"). Let Ladislav write a new version of his treatise for R3 and Red. We'll just go over the basic differences, in order of importance.
## Object and Function Contexts
*Here's the big difference.*
In R2, there were basically two kinds of contexts: Regular object contexts and `system/words`. Both had static bindings, meaning that once the `bind` function was run, the word binding pointed to a particular object with a real pointer.
The `system/words` context was able to be expanded at runtime to include new words, but all other objects weren't. Functions used regular object contexts, with some hackery to switch out the value blocks when you call the function recursively.
The `self` word was just a regular word that happened to the first one in object contexts, with a display hack to not show the the first word in the context; function contexts didn't have that word, so they didn't display the first regular word properly.
*In R3, almost all of that is different.*
In R3 there are also two kinds of contexts: Regular and stack-local. Regular contexts are used by objects, modules, binding loops, `use`, basically everything but functions, and they are expandable like `system/words` was (yes, "was", we'll get to that). The old fixed-length objects are gone. Functions use stack-local contexts, which (barring any bugs we haven't seen yet) aren't supposed to be expandable, because that would mess up stack frames. As with the old `system/words` you can't shrink contexts, because removing words from a context would break any bindings of those words out there.
If you want to add words to a regular context, you can use `bind/new`, `bind/set`, `resolve/extend` or `append`, or the other functions that call those, depending on what behavior you need. That is new behavior for the `bind` and `append` functions in R3.
Bindings of words to regular and stack-local contexts are static, as before. Value *look-up* is another matter. For regular contexts value look-up is pretty direct, done by a simple pointer indirection to a static block of value slots. For stack-local contexts the value block is linked to by the stack frame and referenced from there, so to find the right frame you have to do a stack walk that is O(stack-depth). See [bug #1946](https://github.com/rebol/rebol-issues/issues/1946 "Function variable access is O(n) depending on the stack depth") for details - we'll get into why later.
Oh, and `self` isn't a regular word anymore, it's a binding trick, a keyword. When you bind blocks of words to object or module contexts, it binds the keyword `self` which evaluates to be a reference to the context. However, there is an internal flag that can be set which says that a context is "selfless", which turns that `self` keyword off. When that keyword is turned off, you can actually use the word `self` as a field in your context. Binding loops, `use` and function contexts set the selfless flag for their contexts, and the `selfless?` function checks for that.
This model was refined and documented in a fairly involved CureCode flame war, much like R2's model was documented by a REBOL mailing list flame war back in 1999-2000. :-)
## Functions vs. Closures
When I was talking about stack-local function contexts above, I meant the contexts used by `function!` type functions. R3 has a lot of function types, but most of them are native functions in one way or another, and native functions don't use these stack-local contexts (though they do get stack frames). The only function types that are for Rebol code are `function!` and a new `closure!` type. Closures are very different from regular functions.
When you create a `function!`, you're creating a function. It constructs a stack-local context, binds the code body to it, and bundles together the code body and the spec. When you call the function it makes a stack frame with a reference to the function's context and runs the code block. If it needs to access words in the function context, it does the stack walk to find the right frame and then gets the values from there. Fairly straightforward.
When you create a `closure!`, on the other hand, you create a *function builder*. It sets up the spec and function body pretty much the same as `function!`, but when you *call* a closure it makes a new *regular* selfless context, then does a `bind/copy` of the body, changing all references to the function context to be references to the new regular context in the copy. Then, when it does the copied body, all closure word references are as static as those of object contexts.
Another difference between the two is in how they behave before the function is running, while the function is running, and after the function is done running.
In R2, `function!` contexts still exist when the function isn't running, but the value block of the top-level call of the function still persists too. Only recursive calls get new value blocks; the top-level call keeps a persistent value block. Like I said, hackery. Worse, the top-level value block isn't cleared when the function returns, so you'd better make sure that you aren't referencing anything sensitive or anything that you want recycled when the function returns (use the `also` function to clean up, that's what I made it for).
In R3, the `function!` contexts still exist when the function isn't running, but the value block doesn't exist at all. All function calls act like recursive calls did in R2, except better because it's designed that way all the way down, referencing the stack frame instead. The scope of that stack frame is dynamic (stalk a Lisp fan if you need the history of that), so as long as the function is running on the current stack (yes, "current", we'll get to that), you can use one of its words to get at the values of the *most recent call* of that function. Once all of the nested calls of the function return, there won't be any values in scope, and you'll just trigger an error (the wrong error, but we'll fix that).
There's also a useless restriction on binding to an out-of-scope function word that is on my todo list to fix soon. See [bug #1893](https://github.com/rebol/rebol-issues/issues/1893 "BIND stuff function-bound-word restriction has no benefit, should be removed") for details.
For `closure!` functions, before the closure runs *the context doesn't exist at all*. Once the closure starts running, the context is created and exists persistently. If you call the closure again, or recursively, *another* persistent context is created. Any word that leaks from a closure only refers to the context created during that particular run of the closure.
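A minimal sketch of that difference, assuming an R3 console (error text abbreviated):

```
f: func [x] ['x]       ; function! -- returns the word X bound to the stack-local context
c: closure [x] ['x]    ; closure!  -- returns the word X bound to a fresh regular context

wf: f 42
wc: c 42

get wc   ; == 42, the closure's context persists after the call
get wf   ; ** error, the function's values are out of scope once F has returned
```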
You can't get a word bound to a function or closure context in R3 when the function or closure isn't running. For functions, this is a security issue. For closures, this is a definitional issue.
Closures were considered so useful that Ladislav and I both ported them to R2, independently at different times resulting in similar code, weirdly enough. I think Ladislav's version predated R3, and served as the inspiration for R3's `closure!` type; my version was based on testing the external behavior of that type and trying to replicate it in R2 for R2/Forward, so it's amusing that the solution for `closure` ended up being so similar to Ladislav's original, which I didn't see until much later. My version got included in R2 itself starting with 2.7.7, as the `closure`, `to-closure` and `closure?` functions, and the `closure!` word is assigned the same type value as `function!` in R2.
## Global vs. Local Contexts
*Here's where things get really interesting.*
In [Bindology](https://github.com/revault/rebol-wiki/wiki/Bindology "Bindology") there was a fairly large amount of the article talking about the distinction between the "global" context (which turned out to be `system/words`) and "local" contexts, a fairly important distinction for R2. In R3, *that distinction is irrelevant*.
In R3, `system/words` is *gone*. There is no one "global" context. All regular contexts are "local" in the sense that was meant in R2, which makes that meaning of "local" useless. For R3, we need a new set of terms.
For R3, the only difference that matters is whether contexts are *task-relative*, so the only useful meaning for "global" contexts are the ones that are not directly task-relative, and "local" contexts are the ones that are task-relative. A "task" in this case would be the `task!` type, which is basically an OS thread in the current model.
In R3, at the moment, the only things so far that are task-relative (barely) are the stack variables, which means that stack-relative function contexts are supposed to be *task-relative* too. This is why the [stack walk](https://github.com/rebol/rebol-issues/issues/1946 "Function variable access is O(n) depending on the stack depth") is necessary, because otherwise we'd need to keep and maintain TLS pointers in *every single function context*. All *regular* contexts are global.
Another thing to consider is that according to the plan (which is mostly unimplemented so far), the user context `system/contexts/user` and `system` itself are also intended to be task-relative, so even by R3 standards they would be considered "local". And since `system/contexts/user` is basically the closest thing that R3 has to R2's `system/words`, that means that what *scripts* think of as being their "global" context is actually supposed to be *task-local* in R3.
R3 does have a couple of system global contexts called `sys` and `lib`, though they are used quite differently from R2's global context. Also, all module contexts are global.
It is possible (and common) for there to be globally defined contexts that are only referenced from task-local root references, so that would make those contexts in effect *indirectly task-local*. This is what usually happens when binding loops, `use`, closures or private modules are called from "user code", which basically means non-module scripts, which get bound to `system/contexts/user`. Technically, this is also the case for functions called from modules as well (since functions are stack-local) but those references often eventually get assigned to module words, which are global.
No, we don't yet have synchronization either. Still, that's the model that R3's design is supposed to eventually have, and partly does already. See the [module binding article](https://stackoverflow.com/questions/14420942/how-are-words-bound-within-a-rebol-module/14552755 "How are words bound within a Rebol module?") for more details.
As a bonus, R3 has a real symbol table now instead of using `system/words` as an ad-hoc symbol table. This means that the word limit R2 used to hit pretty quickly is effectively gone in R3. I'm not aware of any app that has reached the new limit or even determined how high the limit is, though it is apparently well over many million distinct symbols. We should check the source to figure that out, now that we have access to it.
## LOAD and USE
Minor details. The `use` function initializes its words with `none` instead of leaving them unset. And because there is no "global" context the way there is in R2, `load` doesn't necessarily bind words at all. Which context `load` binds to depends on circumstances mentioned in the [module binding article](https://stackoverflow.com/questions/14420942/how-are-words-bound-within-a-rebol-module/14552755 "How are words bound within a Rebol module?"), though unless you specify otherwise it explicitly binds the words to `system/contexts/user`. And both are now mezzanine functions.
## Spelling and Aliases
R3 is basically the same as R2 in this, word bindings are by default case-insensitive. The words themselves are case-preserving, and if you compare them using case-sensitive methods you will see differences between words that differ only in case.
In object or function contexts though, when a word is mapped to a value slot, then another word is bound to that context or looked up at runtime, words that differ only by case are considered to be effectively the same word and map to the same value slot.
However, it was found that explicitly created aliases made with the `alias` function where the spelling of aliased words differed in other ways than just by case *broke object and function contexts really drastically*. In R2 they resolved these problems in `system/words`, which made `alias` merely too awkward to use in anything other than a demo, rather than actively dangerous.
Because of this, we removed the externally visible `alias` function altogether. The internal aliasing facility still works because it only aliases words that would normally be considered equivalent for context lookup, which means that contexts don't break. But we now recommend that localization and the other tricks that `alias` was used for in demos, if never in practice, be done using the old-fashioned method of assigning values to another word with the new spelling.
## Word Types
The `issue!` type is now a word type. You can bind it. So far, no-one has taken advantage of being able to bind issues, instead just using the increased speed of operations.
That's pretty much it, for the intentional changes. Most of the rest of the differences might be side effects of the above, or maybe even bugs or not-yet-implemented features. There might even be some R2-like behavior in R3 that is also the result of bugs or not-yet-implemented features. When in doubt, ask first.
|
Get Ajax data into Angular grid
Using Angular Grid, I can see the AJAX GET data in console.log, but the grid is empty.
The console log reads:
```
[13:56:11.411] now!!
[13:56:11.412] []
[13:56:11.412] now!!
[13:56:11.556] <there is data returned from console.log(getData); >
```
This is the js file.
```
// main.js
var app = angular.module('myApp', ['ngGrid']);
var getData = [];
function fetchData() {
var mydata = [];
$.ajax({
url:'/url/to/hell',
type:'GET',
success: function(data) {
for(i = 0, j = data.length; i < j; i++) {
mydata[i] = data[i];
}
getData = mydata;
console.log(getData);
}
});
}
fetchData();
app.controller('MyCtrl', function($scope) {
console.log('now!!')
console.log(getData)
console.log('now!!')
$scope.myData = getData
$scope.gridOptions = {
data: 'myData',
showGroupPanel: true
};
});
```
New Js file:
```
// main.js
var app = angular.module('myApp', ['ngGrid']);
app.controller('MyCtrl', function($scope, $http) {
function fetchData() {
$http({
url:'/url/to/hell',
type:'GET'})
.success(function(data) {
$scope.myData = data;
$scope.gridOptions = {
data: 'myData',
showGroupPanel: true
};
});
}
fetchData();
});
```
HTML file.
```
<html ng-app="myApp">
<head lang="en">
<meta charset="utf-8">
<title>Blank Title 3</title>
<link rel="stylesheet" type="text/css" href="http://angular-ui.github.com/ng-grid/css/ng-grid.css" />
<link rel="stylesheet" type="text/css" href="../static/css/style.css" />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.2/angular.min.js"></script>
<script type="text/javascript" src="http://angular-ui.github.com/ng-grid/lib/ng-grid.debug.js"></script>
<script type="text/javascript" src="../static/js/main.js"></script>
</head>
<body ng-controller="MyCtrl">
<div class="gridStyle" ng-grid="gridOptions"></div>
</body>
</html>
```
|
Your controller is probably accessing the getData array before the .success callback has run. You're reading the variable right away, outside of any promise/callback, while it is still the empty array it was initialized to.
Why don't you try putting the fetchData function into the controller (for now) and storing the getData directly into $scope.myData in the .success? Maybe even initialize the grid right there too? Not sure if you can do that but if you could it would look like this:
```
app.controller('MyCtrl', function($scope, $http) {
$scope.myData = '';
$scope.gridOptions = { showGroupPanel: true, data: 'myData' };
function fetchData() {
setTimeout(function(){
$http({
            url: '/url/to/hell',
            method: 'GET'})   // $http expects 'method'; jQuery's .ajax uses 'type'
.success(function(data) {
$scope.myData = data;
if (!$scope.$$phase) {
$scope.$apply();
}
});
}, 3000);
}
fetchData();
});
```
(source for some of the $scope apply stuff: <https://github.com/angular-ui/ng-grid/issues/39>)
Also, I'm not sure why you're mixing jQuery's .ajax into Angular code ($http will do that), or why none of your JavaScript has semicolons.
|
Flatten Lambda expression to access collection property members names
Given the following class
```
public class Person
{
public string Name { get; }
public List<Person> Friends { get; }
}
```
I am looking for a way to get the string "Friends.Name" when using an `Expression<Func<Person, string>>`.
Here is the pseudo-code of what I want to do :
```
Expression<Func<Person,string>> exp = x => x.Friends.Name
```
Which won't compile for obvious reasons.
How can I achieve that? Even if you don't have the code, a general approach would do the trick, as I am lacking inspiration on this one.
Thanks
|
You can't get what you want, using this type of expression:
`Expression<Func<Person,string>>`
since `Person` has a collection of `Friends`. Actually, the return type of `Func` doesn't matter here. This will work:
```
static string GetPath(Expression<Func<Person, object>> expr)
{
var selectMethodCall = (MethodCallExpression)expr.Body;
var collectionProperty = (MemberExpression)selectMethodCall.Arguments[0];
var collectionItemSelector = (LambdaExpression)selectMethodCall.Arguments[1];
var collectionItemProperty = (MemberExpression)collectionItemSelector.Body;
return $"{collectionProperty.Member.Name}.{collectionItemProperty.Member.Name}";
}
```
Usage:
```
var path = GetPath(_ => _.Friends.Select(f => f.Name)); // Friends.Name
```
But this is a rather simple case, while it seems to me you're doing something like `Include` methods from Entity Framework.
So, if you want to parse more complex expressions, like this:
```
_ => _.Friends.Select(f => f.Children.Select(c => c.Age))
```
you'll need to explore the expression in a more generic fashion.
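For instance, a recursive walker along these lines (a sketch, untested; it assumes the path consists only of member accesses and nested `Select` calls) can handle arbitrary nesting:

```
using System;
using System.Linq.Expressions;

static string GetPath(Expression expr)
{
    if (expr is LambdaExpression lambda)
        return GetPath(lambda.Body);

    if (expr is MemberExpression member)
    {
        // "Friends", or "<parent path>.Friends" when nested
        var parent = GetPath(member.Expression);
        return parent.Length == 0 ? member.Member.Name : $"{parent}.{member.Member.Name}";
    }

    if (expr is MethodCallExpression call && call.Method.Name == "Select")
        // path of the collection + "." + path built inside the selector lambda
        return $"{GetPath(call.Arguments[0])}.{GetPath(call.Arguments[1])}";

    if (expr is ParameterExpression)
        return ""; // the lambda parameter itself contributes nothing to the path

    throw new NotSupportedException($"Unsupported node: {expr.NodeType}");
}
```

With that, `GetPath((Expression<Func<Person, object>>)(p => p.Friends.Select(f => f.Children.Select(c => c.Age))))` should yield `"Friends.Children.Age"`.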
|
Connect "computer-database-jpa" Play 2.1 sample application with MySQL
I'm playing with the [computer-database-jpa (Java)](https://github.com/playframework/Play20/tree/2.1.x/samples/java/computer-database-jpa) Play Framework 2.1 sample application. Everything works fine when I'm using the H2 in-memory database, but I ran into problems when I wanted to connect the application to MySQL.
Someone had the same problem ([Help wanted getting sample app connected to MySQL](https://groups.google.com/d/msg/play-framework/gIjd3O18TO4/OovsuKr9kGUJ)) but there was no solution.
I've added `mysql-connector` (Build.scala):
```
val appDependencies = Seq(
....
"mysql" % "mysql-connector-java" % "5.1.18"
)
```
and edited application.conf:
```
db.default.url="jdbc:mysql://password:user@localhost/my-database"
db.default.driver=com.mysql.jdbc.Driver
```
When I start the applications and apply 1.sql (evolution script) I get an error:
```
You have an error in your SQL syntax; check the manual that corresponds to
your MySQL server version for the right syntax to use near 'sequence company_seq
start with 1000' at line 1 [ERROR:1064, SQLSTATE:42000]
```
Does anyone have an idea how to solve the problem?
|
I've found the solution - <https://github.com/opensas/openshift-play2-computerdb>.
The syntax used in the evolution scripts doesn't conform to `MySQL`:
>
> ## List of changes needed to port computer-database sample app from H2 to mysql
>
>
> ### conf/evolutions/default/1.sql
>
>
> - added engine=innodb, to enable referential integrity
> - replaced sequences with autoincrement for id fields
> - replaced 'SET REFERENTIAL\_INTEGRITY' command with 'SET FOREIGN\_KEY\_CHECKS'
> - replaced timestamp fields with datetime
>
>
> ### conf/evolutions/default/2.sql
>
>
> - splitted the computer data between 2.sql and 3.sql file (avoid bug in evolutions running on mysql)
>
>
> ### models/Models.scala
>
>
> - removed 'nulls last' from Computer.list sql query
> - modified Computer.insert to skip id field (because is auto-assigned by mysql)
>
>
>
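To illustrate the first two points, the sequence-based H2 definition turns into something like this in MySQL (a sketch; the column names here are illustrative, the actual scripts are in the repository linked below):

```
-- Instead of:  create sequence company_seq start with 1000;
-- let MySQL assign the ids, and use InnoDB so the foreign keys are enforced:
create table company (
  id   bigint not null auto_increment,
  name varchar(255) not null,
  constraint pk_company primary key (id)
) engine=innodb;
```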
Because I was playing with the `Java` version and not the `Scala` one, I had to change the `Company.java` and `Computer.java` files. I've added the `@GeneratedValue` annotation:
```
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
public Long id;
```
Here you can find modified evolution scripts: <https://github.com/opensas/openshift-play2-computerdb/tree/master/conf/evolutions/default>
|
Select records for a certain year Oracle
I have an Oracle table that stores transactions and has a date column. If I need to select records for one year, say 2013, I do it like this:
```
select *
from sales_table
where tran_date >= '01-JAN-2013'
and tran_date <= '31-DEC-2013'
```
But I need a straightforward way of selecting records for one year, say by passing the parameter '2013' from an application, to get the records for that year without giving a date range. Is this possible?
|
You can use the **to\_date** function
<http://psoug.org/reference/date_func.html>
```
select *
from sales_table
where tran_date >= to_date('1.1.' || 2013, 'DD.MM.YYYY') and
tran_date < to_date('1.1.' || (2013 + 1), 'DD.MM.YYYY')
```
A solution with explicit comparisons (`tran_date >= ... and tran_date < ...`) is able to *use index(es)* on the `tran_date` field.
Think about the *borders*: e.g. if `tran_date = '31.12.2013 18:24:45.155'`, then your code `tran_date <= '31-DEC-2013'` will *miss* it.
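So, with the year supplied by the application as a bind variable (`:year` below is just a placeholder for however your client passes the parameter), the query becomes:

```
select *
  from sales_table
 where tran_date >= to_date('1.1.' || :year, 'DD.MM.YYYY')
   and tran_date <  to_date('1.1.' || (:year + 1), 'DD.MM.YYYY')
```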
|
Serving gzipped CSS and JavaScript from Amazon CloudFront via S3
I've been looking for ways of making my site load faster and one way that I'd like to explore is making greater use of Cloudfront.
Because Cloudfront was originally not designed as a custom-origin CDN and because it didn't support gzipping, I have so far been using it to host all my images, which are referenced by their Cloudfront cname in my site code, and optimized with far-futures headers.
CSS and javascript files, on the other hand, are hosted on my own server, because until now I was under the impression that they couldn't be served gzipped from Cloudfront, and that the gain from gzipping (about 75 per cent) outweighs that from using a CDN (about 50 per cent): Amazon S3 (and thus Cloudfront) did not support serving gzipped content in a standard manner by using the HTTP Accept-Encoding header that is sent by browsers to indicate their support for gzip compression, and so they were not able to Gzip and serve components on the fly.
Thus I was under the impression, until now, that one had to choose between two alternatives:
1. move all assets to the Amazon CloudFront and forget about GZipping;
2. keep components self-hosted and configure our server to detect incoming requests and perform on-the-fly GZipping as appropriate, which is what I chose to do so far.
There *were* workarounds to solve this issue, but essentially these **didn't work**. [[link](http://www.alfajango.com/blog/how-to-combine-gzip-plus-cdn-for-fastest-page-loads/)].
Now, it seems Amazon Cloudfront supports custom origin, and that **it is now possible to use the standard HTTP Accept-Encoding method for serving gzipped content if you are using a Custom Origin** [[link](http://www.nomitor.com/blog/2010/11/10/gzip-support-for-amazon-web-services-cloudfront/)].
I haven't so far been able to implement the new feature on my server. The blog post I linked to above, which is the only one I found detailing the change, seems to imply that you can only enable gzipping (bar workarounds, which I don't want to use) if you opt for custom origin, which I'd rather not: I find it simpler to host the corresponding files on my Cloudfront server, and link to them from there. Despite carefully reading the documentation, I don't know:
- whether the new feature means the files should be hosted on my own domain server *via* custom origin, and if so, what code setup will achieve this;
- how to configure the css and javascript headers to make sure they are served gzipped from Cloudfront.
|
**UPDATE:** Amazon now supports gzip compression, so this is no longer needed. [Amazon Announcement](https://aws.amazon.com/blogs/aws/new-gzip-compression-support-for-amazon-cloudfront/)
Original answer:
The answer is to gzip the CSS and JavaScript files. Yes, you read that right.
```
gzip -9 production.min.css
```
This will produce `production.min.css.gz`. Remove the `.gz`, upload to S3 (or whatever origin server you're using) and explicitly set the `Content-Encoding` header for the file to `gzip`.
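If you are scripting the upload, it could look roughly like this (a sketch assuming the AWS CLI is available; bucket name, key and cache header are placeholders):

```
gzip -9 production.min.css
mv production.min.css.gz production.min.css   # serve it under the original name

aws s3 cp production.min.css s3://your-bucket/css/production.min.css \
    --content-encoding gzip \
    --content-type text/css \
    --cache-control "max-age=31536000"
```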
It's not on-the-fly gzipping, but you could very easily wrap it up into your build/deployment scripts. The advantages are:
1. It requires no CPU for Apache to gzip the content when the file is requested.
2. The files are gzipped at the highest compression level (assuming `gzip -9`).
3. You're serving the file from a CDN.
Assuming that your CSS/JavaScript files are (a) minified and (b) large enough to justify the CPU required to decompress on the user's machine, you can get significant performance gains here.
Just remember: If you make a change to a file that is cached in CloudFront, make sure you invalidate the cache after making this type of change.
|